A year ago, Samsung brought their PCIe SSD technology to the retail market in the form of the Samsung 950 Pro, an NVMe M.2 SSD with Samsung's 3D V-NAND flash memory. The 950 Pro didn't appear out of nowhere—Samsung had shipped two generations of M.2 PCIe SSDs to OEMs, but before the 950 Pro they hadn't targeted consumers directly.

Now, the successor to the 950 Pro is about to hit the market. The Samsung 960 Pro is from one perspective just a generational refresh of the 950 Pro: the 32-layer V-NAND is replaced with 48-layer V-NAND that has twice the capacity per die, and the UBX SSD controller is replaced by its Polaris successor that debuted earlier this year in the SM961 and PM961 OEM SSDs. However...

Samsung 960 PRO Specifications Comparison
|                         | 960 PRO 2TB | 960 PRO 1TB | 960 PRO 512GB | 950 PRO 512GB | 950 PRO 256GB |
|-------------------------|-------------|-------------|---------------|---------------|---------------|
| Form Factor             | Single-sided M.2 2280 | Single-sided M.2 2280 | Single-sided M.2 2280 | Single-sided M.2 2280 | Single-sided M.2 2280 |
| Controller              | Samsung Polaris | Samsung Polaris | Samsung Polaris | Samsung UBX | Samsung UBX |
| Interface               | PCIe 3.0 x4 | PCIe 3.0 x4 | PCIe 3.0 x4 | PCIe 3.0 x4 | PCIe 3.0 x4 |
| NAND                    | Samsung 48-layer 256Gbit MLC V-NAND | Samsung 48-layer 256Gbit MLC V-NAND | Samsung 48-layer 256Gbit MLC V-NAND | Samsung 32-layer 128Gbit MLC V-NAND | Samsung 32-layer 128Gbit MLC V-NAND |
| Sequential Read         | 3500 MB/s | 3500 MB/s | 3500 MB/s | 2500 MB/s | 2200 MB/s |
| Sequential Write        | 2100 MB/s | 2100 MB/s | 2100 MB/s | 1500 MB/s | 900 MB/s |
| 4kB Random Read (QD1)   | 14k IOPS | 12k IOPS | 11k IOPS | - | - |
| 4kB Random Write (QD1)  | 50k IOPS | 43k IOPS | 43k IOPS | - | - |
| 4kB Random Read (QD32)  | 440k IOPS | 440k IOPS | 330k IOPS | 300k IOPS | 270k IOPS |
| 4kB Random Write (QD32) | 360k IOPS | 360k IOPS | 330k IOPS | 110k IOPS | 85k IOPS |
| Read Power              | 5.8W | 5.3W | 5.1W | 5.7W (average) | 5.1W (average) |
| Write Power             | 5.0W | 5.2W | 4.7W | - | - |
| Endurance               | 1200TB | 800TB | 400TB | 400TB | 200TB |
| Warranty                | 5 Years | 5 Years | 5 Years | 5 Years | 5 Years |
| Launch MSRP             | $1299 | $629 | $329 | $350 | $200 |

... looking at the performance specifications of the 960 Pro, it is clearly much more than just a refresh. This is partly because PCIe SSDs simply have more room to advance than SATA SSDs, which is how Samsung can add 1GB/s to the sequential read speed and triple the random write speed. But to bring about those improvements and stay at the top of a market segment that sees new competition every month, Samsung has had to make significant changes to almost every aspect of the hardware.

We've already analyzed Samsung's 48-layer V-NAND in our review of the 4TB 850 EVO, the drive it debuted in. The Samsung 960 Pro uses the 256Gbit MLC variant, which allows a single 16-die package to contain 512GB of NAND, twice what was possible for the 950 Pro. Samsung has managed another doubling of drive capacity by squeezing four NAND packages onto a single side of the M.2 2280 card. Doing this while keeping to that single-sided design required freeing up the space taken by the DRAM, which is now stacked on top of the controller in a package-on-package configuration.
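The capacity math works out neatly (a back-of-the-envelope sketch; the die, package, and capacity figures come from the specifications above):

```python
# Back-of-the-envelope check of the 960 Pro's capacity scaling.
GBIT_PER_DIE = 256        # 48-layer MLC V-NAND die (950 Pro used 128Gbit dies)
DIES_PER_PACKAGE = 16     # dies stacked in a single NAND package
PACKAGES = 4              # packages on one side of the M.2 2280 card

gb_per_package = GBIT_PER_DIE * DIES_PER_PACKAGE / 8  # Gbit -> GB
print(gb_per_package)                 # 512.0 GB per package
print(gb_per_package * PACKAGES)      # 2048.0 GB: the 2TB flagship
```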

Samsung's Polaris controller is also a major change from the UBX controller used in the 950 Pro. Meeting the much higher performance targets of the 960 Pro with the UBX controller architecture would have required significantly higher clock speeds than the drive's power budget allows for. Instead, the Polaris controller widens from three ARM cores to five, and dedicates one core to communication with the host system.

The small size of the M.2 form factor combined with the higher power required to perform at the level expected of a PCIe 3.0 x4 SSD means that heat is a serious concern for M.2 PCIe SSDs. In general, these SSDs can be forced to throttle themselves rather than overheat when subjected to intensive benchmarks and stress tests, but most drives avoid thermal throttling during typical real-world use: at 2GB/s, even heavy workloads tend to be bursty, leaving the drive with idle time to cool off.

Even so, many users would prefer the reliable sustained performance offered by a well-cooled PCIe SSD, and almost every M.2 PCIe SSD now does something to address thermal concerns. Toshiba's OCZ RD400 is available with an optional PCIe x4 to M.2 add-in card that puts a thermal pad directly behind the SSD controller. Silicon Motion's SM2260 controller integrates a thin copper heatspreader on the top of the controller package. Plextor's M8Pe is available with a whole-drive heatspreader. Samsung has decided to put a few layers of copper into the label stuck on the back side of the 960 Pro. This is thin enough not to affect the drive's mechanical compatibility with systems that require a single-sided drive, but according to Samsung the heatspreader-label makes a significant improvement in the drive's thermal behavior.


The warranty on the 960 Pro is five years, the same as for the 950 Pro but half the ten years offered with the 850 Pro. When the 950 Pro was introduced, Samsung explained that the shorter warranty period on a higher-end product was due to NVMe and PCIe SSDs being a less mature technology than SATA SSDs. Despite having a very successful year with the 950 Pro, Samsung isn't bumping the warranty period back up to ten years, and I would be surprised if they ever release a consumer SSD with such a long warranty period again.

Going hand in hand with the warranty period is the write endurance rating. The 512GB and 1TB models have endurance ratings that are equivalent to the drive writes per day offered by the 950 Pro. The 2TB 960 Pro's endurance rating falls short at 1200TB instead of the 1600TB that would be double the rating on the 1TB 960 Pro. When asked about this discrepancy during the Q&A session at Samsung's SSD Global Summit where the 960 Pro was announced, Samsung dodged the question and did not offer a satisfactory explanation.
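Working those ratings out as drive writes per day over the five-year warranty (my arithmetic, not a figure Samsung publishes) makes the gap plain:

```python
# Endurance ratings (total TB written) converted to drive writes per day
# (DWPD) over the 5-year warranty period.
WARRANTY_DAYS = 5 * 365

def dwpd(endurance_tb, capacity_tb):
    return endurance_tb / capacity_tb / WARRANTY_DAYS

for capacity_tb, endurance_tb in [(0.512, 400), (1.0, 800), (2.0, 1200)]:
    print(f"{capacity_tb}TB model: {dwpd(endurance_tb, capacity_tb):.2f} DWPD")
# 0.512TB model: 0.43 DWPD
# 1.0TB model:   0.44 DWPD
# 2.0TB model:   0.33 DWPD
```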

The one other area where the 960 Pro does not promise significant progress is price. Despite switching to denser NAND, the MSRP of the 512GB 960 Pro is only slightly lower than the MSRP the 512GB 950 Pro launched with, and slightly higher than the current retail price of the 950 Pro. The 960 Pro is using more advanced packaging for the controller and NAND and the controller itself likely costs a bit more, but the bigger factor keeping the price up is probably the dearth of serious competition.

When the Samsung 950 Pro launched, its main competition in the PCIe space was the Intel SSD 750, a derivative of Intel's enterprise PCIe SSD line equipped with consumer-oriented firmware. It was big and power-hungry, but it brought NVMe to the consumer market and set quite a few performance records in the process. The 950 Pro couldn't beat the SSD 750 in every test, but it came out ahead where it matters most for everyday client workloads. Since then, new NVMe controllers have arrived from Marvell, Silicon Motion, and Phison. We reviewed the OCZ RD400 and found it was able to beat the 950 Pro in several tests, especially when considering the 1TB RD400 against the largest 950 Pro, which is only 512GB. We will be comparing the 2TB Samsung 960 Pro against its predecessor and these competing high-end PCIe SSDs, as well as three 2TB-class SATA SSDs.

AnandTech 2015 SSD Test System

| Component | Configuration |
|---|---|
| CPU | Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled) |
| Motherboard | ASUS Z97 Pro (BIOS 2701) |
| Chipset | Intel Z97 |
| Memory | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T) |
| Graphics | Intel HD Graphics 4600 |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 8.1 x64 |
Comments

  • Gigaplex - Tuesday, October 18, 2016

    "Because of that, all consumer friendly file systems have resilience against small data losses."

    And for those to work, cache flush requests need to be functional so the journaling behaves correctly. Disabling cache flushing will reintroduce serious corruption issues.
  • emn13 - Wednesday, October 19, 2016

    "100% data protection is not needed": at some level that's obviously true. But it's nice to have *some* guarantees so you know which risks you need to mitigate and which you can ignore.

    Also, NVMe has the potential to make this problem much worse: it's plausible that the underlying NAND+controller cannot actually outperform SATA alternatives to the degree they appear to, and that to achieve that (marketable) advantage they need to rely more on buffering and write merging. If so, you may still be losing only milliseconds of data, but at NVMe speeds those milliseconds can hold a lot of data, so the corruption could be extensive. Even though "100%" safe is possibly unnecessary, that would make the NVMe value proposition much worse: not only are such drives much more expensive, they would also (in this hypothesis) be more likely to cause data corruption. I certainly wouldn't buy one given that tradeoff; the performance gains are simply too slim (in almost any normal workload).

    Also, it's not quite true that "all consumer friendly file systems have resilience against small data losses". Journalled filesystems typically only journal metadata, not data, so you may still end up with a bunch of corrupted files. And, critically, the journaling algorithms rely on proper drive flushing! If a drive can lose data that has been flushed (pre-fsync writes), then even a journalled filesystem can (easily!) be corrupted extensively. If anything, journalled filesystems are even more vulnerable to that than plain old FAT, because they rely on clever interactions between multiple (conflicting) sources of truth in the event of a crash, and when the assumptions the FS makes turn out to be invalid, it (by design) will draw incorrect inferences about which data is "real" and which is an artifact of the crash. You can easily lose whole directories (say, user directories) at once like this.
  • HollyDOL - Wednesday, October 19, 2016

    Tbh I consider this whole argument strongly obsolete... if you have close to $1300 to spare on a 2TB SSD monster, you definitely have $250-350ish for a decent UPS.

    Or, if you run a several-thousand-dollar machine without any, you more than deserve what you get.

    It's the same argument as not building a double Titan XP monster and powering it with a Chinese no-name PSU. There are things which are simply a no-go.
  • bcronce - Tuesday, October 18, 2016

    As an ex-IT admin who used to manage thousands of computers, I have never seen catastrophic data loss caused by a power outage, and I have seen many outages. What I have seen is hard drives or PSUs dying, and recently written data being lost, but never fully committed data.

    That being said, SSDs are a special beast, because writing new data often requires moving existing data, and this is dangerous.

    Most modern filesystems since the 90s, except FAT32, were meant to handle unexpected power loss. NTFS was the first FS from MS that pretty much got rid of power loss issues.
  • KAlmquist - Tuesday, October 18, 2016

    The functionality that a file system like NTFS requires to avoid corruption in the case of a power failure is a write barrier. A write barrier is a directive that says that the storage device should perform all writes prior to the write barrier before performing any of the writes issued after the write barrier.

    On a device using flash memory, write barriers should have minimal performance impact. It is not possible to overwrite flash memory in place, so when an SSD gets a write request, it will allocate a new page (or multiple pages) of flash memory to hold the data being written. After it writes the data, it will update the mapping table to point to the newly written page(s). If an SSD gets a whole bunch of writes, it can perform the data write operations in parallel as long as the pages being written all reside on different flash chips.

    If an SSD gets a bunch of writes separated by write barriers, it can write the data to flash just like it would without the write barriers. The only change is that when a write completes, the SSD cannot update the mapping table to point to the new data until the earlier writes have completed.

    This is different from a mechanical hard drive. If you issue a bunch of writes to a mechanical hard drive, the drive will attempt to perform the writes in an order that will minimize seek time and rotational latency. If you place write barriers between the write requests, then the drive will execute the writes in the same order you issued them, resulting in lower throughput.

    Now suppose you are unable to use write barriers for some reason. You can achieve the same effect by issuing commands to flush the disk after every write, but that will prevent the device from executing multiple write commands in parallel. A mechanical hard drive can only execute one write at a time, so cache flushes are a viable alternative to write barriers if you know you are using a mechanical hard drive. But on SSDs, parallel writes are not only possible, they are essential to performance. The write speeds of individual flash chips are slower than hard drive write speeds; the reason that sequential writes on most SSDs are faster than on a hard drive is that the SSD writes to multiple chips in parallel. So if you are talking to an SSD, you do not want to use cache flushes to get the effect of write barriers.

    I take it from what shodanshok wrote that Microsoft Windows doesn't use write barriers on NVMe devices, giving you the choice of either using cache flushes or risking file system corruption on loss of power. A quick look at the NVMe specification suggests that this is the fault of Intel, not Microsoft. Unless I've missed it, Intel inexplicably omitted write barrier functionality from the specification, forcing Microsoft to use cache flushing as a work-around:

    http://www.nvmexpress.org/wp-content/uploads/NVM_E...

    On SSD devices, write barriers are essentially free. There is no need for a separate write barrier command; the write command could contain a field indicating that the write operation should be preceded by a write barrier. Users shouldn't have to choose between data protection and performance when the correct use of a sensibly designed protocol would give them both without them having to worry about it.
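    To make the distinction concrete, here is a toy model (hypothetical Python, not any real driver or the actual NVMe command set) of a controller that programs flash pages in parallel but commits mapping-table updates in barrier order:

    ```python
    # Toy SSD model: flash page programming runs in parallel, but the
    # mapping table (the only state that counts as "committed") is only
    # updated in barrier order. Illustration only; not a real driver.
    from concurrent.futures import ThreadPoolExecutor

    class ToySSD:
        def __init__(self):
            self.mapping = {}      # logical block address -> flash page
            self.next_page = 0
            self.flash = ThreadPoolExecutor(max_workers=8)  # parallel channels
            self.pending = []      # writes not yet committed to the mapping

        def _program(self, page, data):
            pass                   # stand-in for the slow NAND program op

        def write(self, lba, data):
            page, self.next_page = self.next_page, self.next_page + 1
            future = self.flash.submit(self._program, page, data)
            self.pending.append((future, lba, page))  # commit is deferred

        def barrier(self):
            # Everything before the barrier becomes visible before anything
            # after it. The page programming already ran in parallel, so
            # this ordering costs almost nothing on flash.
            for future, lba, page in self.pending:
                future.result()            # wait for the NAND program
                self.mapping[lba] = page   # the commit point
            self.pending.clear()

    ssd = ToySSD()
    ssd.write(100, b"journal entry")
    ssd.barrier()  # journal must be durable before data that references it
    ssd.write(200, b"data that assumes the journal entry exists")
    ```

    A cache flush, by contrast, would force each write to finish before the next is even issued, serializing the flash programming that the barrier version happily overlaps.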
  • Dorkaman - Monday, November 28, 2016

    So this drive has capacitors to help write out anything in the buffer if the power goes out:

    https://youtu.be/nwCzcFvmbX0 skip to 2:00

    23 power-loss capacitors used to keep the SSD's controller running just long enough, in the event of an outage, to flush all pending writes:

    http://www.tomshardware.com/reviews/samsung-845dc-...

    Will the 960 Evo have that? Would this prevent something like this (RAID 0 lost due to power outage):

    https://youtu.be/-Qddrz1o9AQ
  • Nitas - Tuesday, October 18, 2016

    This may be silly of me, but why did they use W8.1 instead of 10?
  • Billy Tallis - Tuesday, October 18, 2016

    I'm still on Windows 8.1 because this is still our 2015 SSD testbed and benchmark suite. I am planning to switch to Windows 10 soon, but that will mean that new benchmark results are not directly comparable to our current catalog of results, so I'll have to re-test all the drives I still have on hand, and I'll probably take the opportunity to make a few other adjustments to the test protocol.

    Switching to Windows 10 hasn't been a priority because of the hassle it entails and the fact that it's something of a moving target, but particularly with the direction the NVMe market is headed, the Windows version is starting to become an important factor.
  • Nitas - Tuesday, October 18, 2016

    I see, thanks for clearing that up!
  • Samus - Wednesday, October 19, 2016

    Windows 8.1 will show virtually no difference in performance compared to Windows 10 for the purpose of benchmarking SSDs...
