Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
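As a rough illustration (this is not AnandTech's actual tooling), the per-second logging step can be sketched in Python: a benchmark tool typically reports cumulative IO completion counts, and instantaneous IOPS is simply the delta between consecutive one-second samples:

```python
def instantaneous_iops(cumulative_completions):
    """Convert per-second cumulative IO completion counts into
    instantaneous IOPS (completions within each one-second window)."""
    iops = []
    prev = 0
    for total in cumulative_completions:
        iops.append(total - prev)
        prev = total
    return iops

# Hypothetical counters sampled once per second; note the drop in the
# final window, the kind of dip this test is designed to expose.
samples = [52000, 103500, 156000, 161000]
print(instantaneous_iops(samples))  # [52000, 51500, 52500, 5000]
```

Plotting that series over the ~1800-second run is what produces the consistency graphs below.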

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
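The effect of limiting the LBA range can be expressed as effective over-provisioning: spare capacity relative to the LBA range the workload actually touches. A small sketch, using hypothetical capacities for a 240GB-class drive (conventions for quoting OP percentages vary; this one divides spare capacity by the exercised range):

```python
def effective_op(physical_bytes, addressable_bytes):
    """Effective over-provisioning: spare NAND capacity relative to
    the LBA range actually exercised by the workload."""
    return (physical_bytes - addressable_bytes) / addressable_bytes

GiB = 1024**3  # raw NAND is specified in binary units
GB = 1000**3   # user capacity is specified in decimal units

physical = 256 * GiB               # raw NAND on a hypothetical 240GB drive
default_lba = 240 * GB             # factory-addressable capacity
limited_lba = int(default_lba * 0.75)  # test limited to 75% of LBAs (25% spare)

print(f"stock OP:   {effective_op(physical, default_lba):.1%}")
print(f"limited OP: {effective_op(physical, limited_lba):.1%}")
```

Restricting the test to 75% of the LBA range roughly triples the effective spare area the controller has to work with, which is why the 25% spare area runs look so much flatter.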

Each of the three graphs has its own purpose. The first one covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but at different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Interactive graph: 4KB random write consistency, full test duration (log scale). Drive selection: SanDisk Extreme Pro, SanDisk Extreme II, Intel SSD 730, Intel SSD 530, OCZ Vector 150; toggle for 25% spare area]

Similar to the Extreme II, the IO consistency is just awesome. SanDisk's firmware design is unique in the sense that instead of pushing high IOPS at the beginning, performance first drops close to 10K IOPS, then rises to over 50K and stays there for a period of time. The higher the capacity, the longer the high-IOPS period: the 960GB Extreme Pro takes ~800 seconds before the IOPS drops to 10K (i.e. the drive reaches steady-state). I do not know why SanDisk's behavior is so different (maybe it has something to do with nCache?) but it definitely works well. Furthermore, SanDisk seems to be the only manufacturer that has really nailed IO consistency with a Marvell controller; Crucial/Micron and Plextor have had some difficulties, and their performance is not even close to SanDisk's.
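The point where a drive settles (what we call steady-state, ~800 seconds in for the 960GB Extreme Pro) can also be estimated programmatically. A crude, hypothetical detector: find the first windowed mean that is already within a tolerance of the trace's final level:

```python
def steady_state_start(iops_per_sec, window=60, tol=0.05):
    """Estimate steady-state onset: the time (in seconds) of the first
    non-overlapping window whose mean IOPS is within `tol` of the mean
    of the final window. Returns None if no window qualifies."""
    means = [
        sum(iops_per_sec[i:i + window]) / window
        for i in range(0, len(iops_per_sec) - window + 1, window)
    ]
    final = means[-1]
    for i, m in enumerate(means):
        if abs(m - final) / final <= tol:
            return i * window
    return None

# Synthetic trace mimicking the Extreme Pro's shape: a high plateau,
# a transition, then a flat 10K IOPS floor.
trace = [50000] * 120 + [30000] * 60 + [10000] * 300
print(steady_state_start(trace))  # 180
```

Real traces are far noisier than this synthetic one, so a production detector would need smoothing, but the idea is the same.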

However, I would not say that the Extreme Pro is unique. Both the Intel SSD 730 and the OCZ Vector 150 provide the same or even better performance at steady-state, and with added over-provisioning the difference is even more significant. That is not to say that the Extreme Pro is inconsistent, not at all, but for a pure 4KB random write workload there are drives that offer (slightly) better performance.

[Interactive graph: steady-state zoom, log scale. Same drive selection and 25% spare area toggle]


[Interactive graph: steady-state zoom, linear scale. Same drive selection and 25% spare area toggle]


TRIM Validation

To test TRIM, I filled the drive with sequential data and proceeded with 60 minutes of 4KB random writes at a queue depth of 32. I then measured performance with HD Tach after issuing a single TRIM pass to the drive.

TRIM definitely works, as the write speed is back to a steady 400MB/s.
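The pass/fail logic of this validation can be sketched as a simple check: after the TRIM pass, sequential write speed should be back within some tolerance of the fresh-drive speed (the numbers here are illustrative, not exact measurements):

```python
def trim_restored(fresh_mb_s, post_trim_mb_s, tol=0.10):
    """Judge whether a single TRIM pass restored sequential write
    performance: post-TRIM speed within `tol` of fresh-drive speed."""
    return post_trim_mb_s >= fresh_mb_s * (1 - tol)

# Hypothetical figures: ~410MB/s fresh, ~400MB/s after the TRIM pass
print(trim_restored(fresh_mb_s=410, post_trim_mb_s=400))  # True
```

A drive with broken TRIM would instead stay near its tortured steady-state write speed, often a small fraction of the fresh figure.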


Comments

  • vaayu64 - Tuesday, June 17, 2014

    Look for the review here on anandtech....
  • Shiitaki - Monday, June 16, 2014

    Actually, Samsung sells the 840 EVO up to 1TB in an mSATA form factor.
  • apertotes - Monday, June 16, 2014

    Well, I am just a simple architect, and probably I am not representative of Anandtech userbase, but, why do you feel hardware encryption is so important? Am I missing something? Should I be worried that my hard drives are not encrypted?
  • r3loaded - Monday, June 16, 2014

    I'm wondering about the focus on encryption too. I have a Samsung Evo in my Sandy Bridge desktop machine and it supposedly supports this Opal hardware encryption. Trouble is, I have no idea how to enable it. Apparently, I can set a disk password but I see no option for that in my computer's firmware. Someone mentioned enabling BitLocker in Windows 8.1, but I get an error message saying my system doesn't have a TPM chip. I have zero idea how to go about enabling it on Linux, but I suppose dm-crypt for software encryption should work just fine.
  • andychow - Monday, June 16, 2014

    I don't get it either. If your file system is not encrypted, then simply plug the SSD into another machine and get all the data. If your file system is encrypted, then what does it matter if the hardware is encrypted? I think they just offer it because it costs them almost nothing to have a hardware encryption chip, and enterprises like buzzwords.

    @reloaded, you don't have to enable it, it's always on. The password is on the controller; if you destroy that, the data cannot be recovered. But why you would destroy the controller and not the rest of the drive is beyond me.

    I personally don't see the point.
  • chiechien - Tuesday, June 17, 2014

    Your BIOS has to support disk passwords, and you have to enable it. For drives with hardware encryption, the ATA/disk/whatever password has to be known to get at the encryption key, otherwise it's just left at some default that every machine can read.
  • Kristian Vättö - Tuesday, June 17, 2014

    Hardware encryption means that the encryption is done by the hardware rather than software. The benefit is that because it's done at the device-level, it doesn't consume the host CPU like software encryption and it's more secure.

    While SSDs often encrypt all data that is written to them, you still have to enable encryption from the host to ensure that the data can't be accessed by a third party. Otherwise the SSD thinks that any machine (and user) is allowed to access the data.
  • thomas-hrb - Monday, June 16, 2014

    Good point, personally encryption at the hardware level is not important to me. I find common sense and protection of property outweighs the continuous management overhead of encryption keys, especially for a developer like me who regularly and repeatedly reformats partitions.

    Especially when working with cloud-based/corporate shared online-only storage. I do wonder however, I have a Samsung 840 Pro (256GiB version) and 2 Vertex4's (256GiB version). I have on many occasions rebuilt/reformatted and transplanted my SSDs from one machine to the next. I don't know if I have to explicitly enable/disable the feature on the SSD, but IMHO, if I could easily read the SSD contents simply by moving it to another computer, then whatever encryption is on the drive is simply not effective.
  • chiechien - Tuesday, June 17, 2014

    The Samsung (and probably the Vertex) encrypts (just by the nature of how it works) everything that is written to the drive as a matter of course. Unless you set an ATA password in your BIOS, though, the password/encryption key/whatever is left 'blank' or default or whatever, so it is readable by any other motherboard. If you set an ATA password, though, that changes the encryption key, you will probably have to wipe the drive to enable it, and then the drive will not be readable in another system unless you also configure it for ATA passwords as well, and then type it in.
  • hojnikb - Monday, June 16, 2014

    I wonder why SanDisk didn't go with the Marvell *89 controller?
