I don't think it's an overstatement to say that Intel introduced us to the era of modern SSDs back in 2008 with the X25-M. It wasn't the first SSD on the market, but it was the first drive that delivered the qualities we now take for granted: high, consistent and reliable performance. Many SSDs in the early days focused solely on sequential performance, as that was a common performance metric for hard drives, but Intel understood that the key to a better user experience wasn't maximum throughput, but the small random IOs that take unbearably long to complete on HDDs. Thanks to Intel's early understanding of real world workloads and its implementation of that knowledge in a well designed product, it took several years before others were able to fully catch up with the X25-M.

But when the time came to upgrade to SATA 6Gbps, Intel missed the train. Its initial SATA 6Gbps drives had to rely on third party silicon because Intel's own SATA 6Gbps controller was still in development, and to put it frankly, the SSD 510 and SSD 520 just didn't pack the same punch as the X25-M did. The competition had also done its homework and gone back to the drawing board, which meant that Intel was no longer in the special position it held in 2008. Once the SSD DC S3700 with an in-house Intel SATA 6Gbps controller finally materialized in late 2012, it quickly rebuilt the image Intel had enjoyed in the X25-M days. The DC S3700 wasn't as revolutionary as the X25-M, but it again focused on an area where other manufacturers had been lacking, namely performance consistency.

The first and second generation Intel X25-M

While Intel was arguably late to the SATA 6Gbps game, the company already had something much bigger in mind. Something that would abandon the bottlenecks of the SATA interface and challenge the X25-M in significance in the history of SSDs. That product was the SSD DC P3700, the world's first drive with a custom PCIe NVMe controller and the first NVMe drive that was widely available.

Ever since our SSD DC P3700 review, there's been massive interest from enthusiasts and professionals in a more client-oriented product based on the same platform. With eMLC, ten drive writes per day of endurance and a full enterprise-class feature set, the SSD DC P3700 was simply out of reach for consumers at $3 per gigabyte, because even the smallest 400GB SKU cost the same as a decent high power PC build. Intel didn't ignore your prayers and wishes, and with today's release of the SSD 750 Intel is delivering what many of you have been craving for months: NVMe with a consumer friendly price tag, in a 2.5" form factor via SFF-8639 or as a PCIe add-in card.

Intel SSD 750 Specifications
                               400GB                    1.2TB
Form Factor                    2.5" 15mm SFF-8639 or PCIe Add-In Card (HHHL)
Interface                      PCIe 3.0 x4 - NVMe
Controller                     Intel CH29AE41AB0
NAND                           Intel 20nm 128Gbit MLC
Sequential Read                2,200MB/s                2,400MB/s
Sequential Write               900MB/s                  1,200MB/s
4KB Random Read                430K IOPS                440K IOPS
4KB Random Write               230K IOPS                290K IOPS
Idle Power Consumption         4W                       4W
Read/Write Power Consumption   9W / 12W                 10W / 22W
Encryption                     N/A
Endurance                      70GB of Writes per Day for Five Years
Warranty                       Five Years
MSRP                           $389                     $1,029
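
To put that endurance rating into perspective, here's a quick back-of-the-envelope calculation (my own arithmetic in Python, assuming 365-day years and decimal gigabytes, which is how Intel rates capacities):

```python
# Rough check on the "70GB of writes per day for five years" rating.
writes_per_day_gb = 70
total_writes_tb = writes_per_day_gb * 365 * 5 / 1000
print(f"Total rated writes: {total_writes_tb:.1f} TB")  # ~127.8 TB

# Expressed as drive writes per day (DWPD) for each SKU:
for capacity_gb in (400, 1200):
    print(f"{capacity_gb}GB SKU: {writes_per_day_gb / capacity_gb:.3f} DWPD")
    # -> 0.175 DWPD for the 400GB, 0.058 DWPD for the 1.2TB
```

That's a small fraction of the DC P3700's ten drive writes per day, which goes some way toward explaining the far friendlier price tag.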

Even though the SSD 750 is built upon the SSD DC P3700 platform, it's a completely different product. Intel spent a lot of time redesigning the firmware to be more suitable for client applications, which differ greatly from typical enterprise workloads. The SSD 750 is focused more on random performance because the majority of IOs in client workloads tend to have random patterns and be small in size. For that reason the sequential write speeds may seem a bit low for such high capacities, but ultimately Intel's goal was to provide better real world performance rather than chase maximum benchmark numbers, which has been Intel's strategy ever since the X25-M days.

At the time of launch, the SSD 750 will only be available in capacities of 400GB and 1.2TB. An 800GB SKU is being considered, but I think Intel is still testing the waters with the SSD 750 and thus the initial lineup is limited to just two SKUs. After all, the ultra high-end is a niche market, and even in that space the SSD 750 is much more expensive than existing SATA drives, so a gradual rollout makes a lot of sense. I think for enthusiasts the 400GB model is the sweet spot because it provides enough capacity for the OS and applications/games, whereas professionals will likely want to spring for the 1.2TB model if they are looking for high-speed storage for work files (video editing is a prime example).

The SSD 750 utilizes Intel-Micron's 20nm 128Gbit MLC NAND. The die configuration is actually fairly interesting: the packages on the front side of the PCB (i.e. the side that's covered by the heat sink and where the controller is) are quad-die with 64GiB capacity (4x128Gbit), whereas the packages on the back side of the PCB are all single-die. I suspect Intel did this for thermal reasons, because PCIe is more capable of utilizing the NAND to its full potential, which increases the heat output, and obviously four dies inside one package generate more heat than a single die. With 18 packages on the front side and 14 on the back side, the raw NAND capacity comes in at 1,376GiB, resulting in effective over-provisioning of 18.8% with 1,200GB of usable capacity.
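
For the curious, the capacity math works out like this (a quick sketch in Python; note that the 18.8% figure measures spare area relative to raw NAND):

```python
# Raw NAND vs. usable capacity on the 1.2TB SSD 750.
GIB = 2**30
DIE_GIB = 16                       # one 128Gbit die = 16GiB

front = 18 * 4 * DIE_GIB           # 18 quad-die packages under the heat sink
back = 14 * 1 * DIE_GIB            # 14 single-die packages on the back side
raw_gib = front + back
print(raw_gib)                     # 1376 GiB of raw NAND

raw_bytes = raw_gib * GIB
usable_bytes = 1200 * 10**9        # 1,200GB usable (decimal gigabytes)
op = (raw_bytes - usable_bytes) / raw_bytes
print(f"Effective over-provisioning: {op:.1%}")  # 18.8%
```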

The controller is the same 18-channel behemoth running at 400MHz that is found inside the SSD DC P3700. Nearly all client-grade controllers today are 8-channel designs, so with over twice the number of channels Intel has a clear NAND bandwidth advantage over the more client-oriented designs. That said, the controller is also much more power hungry and the 1.2TB SSD 750 consumes over 20W under load, so you won't be seeing an M.2 variant with this controller. 

Similar to the SSD DC P3700, the SSD 750 features full power loss protection that protects all data in the DRAM, including user data in flight. I'm happy to see that Intel understands that power loss protection can be a critical feature for the high-end client segment as well, because professional users especially can't risk losing any data.

The Form Factors & SFF-8639 Connector

The SSD 750 is available in two form factors: a traditional half-height, half-length add-in card and a 2.5" 15mm drive. The 2.5" form factor utilizes an SFF-8639 connector that is mostly used in the enterprise, but it's slowly making its way to the high-end client side as well (ASUS announced the TUF Sabertooth X99 just two weeks ago at CeBIT). SFF-8639 is essentially SATA Express on steroids and offers four lanes of PCIe connectivity for up to 4GB/s of bandwidth with PCIe 3.0 (although in the real world the maximum bandwidth is about 3.2GB/s due to PCIe inefficiency). Honestly, aside from the awkward name, SFF-8639 is what SATA Express should have been from the beginning, because nearly all upcoming PCIe controller designs will feature four PCIe lanes, which renders SATA Express useless; there's no point in handicapping a drive with an interface that's only capable of providing half of the available bandwidth. Granted, I wasn't at the table when SATA-IO made the decision, but it's clear that the spec wasn't fully thought through.
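
Those bandwidth figures are easy to verify (a rough sketch; the ~80% protocol efficiency I use below is an estimate chosen to match the observed ~3.2GB/s, not a published number):

```python
# PCIe 3.0 x4 link bandwidth, before and after protocol overhead.
lanes = 4
gen3_gtps = 8.0                      # PCIe 3.0 signaling: 8 GT/s per lane
encoding = 128 / 130                 # 128b/130b line encoding

link_gbs = lanes * gen3_gtps * encoding / 8   # bits -> bytes
print(f"Raw link bandwidth: {link_gbs:.2f} GB/s")  # ~3.94 GB/s ("up to 4GB/s")

efficiency = 0.80                    # TLP headers, flow control, etc. (estimate)
print(f"Practical ceiling: {link_gbs * efficiency:.1f} GB/s")  # ~3.2 GB/s
```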

The SFF-8639 connector

Similar to SATA Express, SFF-8639 has a separate SATA power input in the cable. That's admittedly quite unwieldy, but it's necessary to keep motherboard and cable costs reasonable. The SSD 750 requires both 3.3V and 12V rails for power, so if the drive were to draw power from PCIe, it would require some additional components on the motherboard side, which is something the motherboard OEMs are hesitant about due to the added cost, especially since it's just one port that may not even be used by the end-user.

The motherboard end of the SFF-8639 cable

As the industry moves forward and PCIe becomes more common, I think we'll see SFF-8639 adopted more widely. The 2.5" form factor is really the best for a desktop system because the drive location is not fixed to one spot on the motherboard or in the case. While M.2 and add-in cards provide a cleaner look thanks to the lack of cables, they both eat precious motherboard area that could be used for something else. That's the reason why motherboards don't usually have more than one M.2 slot: the area taken by the slot can't really be used for any other components. Another issue, especially with add-in cards, is the heat coming from other PCIe cards (namely high power GPUs), which can potentially throttle the drive, whereas drive bays tend to be located in the front of the case with good airflow and no heat coming from surrounding components.

Utilizing the Full Potential of NVMe

Because the SSD 750 is a PCIe 3.0 design, it must be connected directly to the CPU's PCIe 3.0 lanes for maximum throughput. All the chipsets in Intel's current lineup are of the slower PCIe 2.0 flavor, which would effectively cut the maximum throughput to half of what the SSD 750 is capable of. The even bigger issue is that the DMI 2.0 interface that connects the platform controller hub (PCH) to the CPU is only four lanes wide (i.e. up to 2GB/s), so if you connect the SSD 750 to the PCH's PCIe lanes and access other devices connected to the PCH (e.g. USB, SATA or LAN) at the same time, the performance would be handicapped even further.
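
A quick sketch of the DMI arithmetic shows why this matters (DMI 2.0 is electrically equivalent to a PCIe 2.0 x4 link):

```python
# DMI 2.0 bandwidth: four PCIe 2.0 lanes with 8b/10b encoding.
lanes = 4
gen2_gtps = 5.0                     # PCIe 2.0 signaling: 5 GT/s per lane
encoding = 8 / 10                   # 8b/10b line encoding

dmi_gbs = lanes * gen2_gtps * encoding / 8
print(f"DMI 2.0 ceiling: {dmi_gbs:.1f} GB/s")  # 2.0 GB/s, shared by the whole PCH

# The 1.2TB SSD 750 is rated for 2.4GB/s sequential reads, so even a
# completely idle DMI link couldn't feed the drive at full speed.
```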

Intel Z97 chipset block diagram

Utilizing the CPU's PCIe lanes presents some possible bottlenecks for users of the Z97 chipset, because normal Haswell CPUs feature only sixteen PCIe 3.0 lanes. In other words, if you wish to use the SSD 750 with a Z97 chipset, you have to give up some GPU PCIe bandwidth because the SSD 750 will take four lanes out of the sixteen. With a single GPU setup that's hardly an issue, but with an SLI/CrossFire setup there's a possibility of some bandwidth handicapping if the GPUs and SSD are utilizing the interface simultaneously. Also, given NVIDIA's x8 per-card requirement, adding the SSD 750 effectively limits such a system to a single NVIDIA card. Fortunately it's quite rare for an application to tax the GPUs and storage at the same time, since games tend to load data to RAM for faster access, and with the help of PCIe switches it's possible to grant all devices the lanes they require (the maximum bandwidth isn't increased, but switches allow full x16 bandwidth to the GPUs when they need it).
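
To make the lane budgeting concrete, here's a toy illustration (my own simplification; real boards handle this through slot configuration and, as mentioned, PCIe switches):

```python
# Haswell's 16 CPU PCIe 3.0 lanes, with the SSD 750 occupying x4.
CPU_LANES = 16
SSD_LANES = 4

def config_fits(gpu_lanes, min_per_gpu=8):
    """Does a given GPU lane split plus the SSD fit within the CPU budget?
    min_per_gpu=8 models NVIDIA's x8-per-card requirement for SLI."""
    return (sum(gpu_lanes) + SSD_LANES <= CPU_LANES and
            all(lanes >= min_per_gpu for lanes in gpu_lanes))

print(config_fits([8]))                    # True:  one GPU at x8 + SSD at x4
print(config_fits([8, 8]))                 # False: x8/x8 SLI + SSD needs 20 lanes
print(config_fits([8, 4], min_per_gpu=4))  # True:  CrossFire tolerates x4
```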

Intel X99 chipset block diagram

With Haswell-E and its 40 PCIe 3.0 lanes, there are obviously no issues with bandwidth even with an SLI/CrossFire setup and two SSD 750s. Unfortunately the X99 (or any other chipset) doesn't support PCIe RAID, so if you were to put two SSD 750s in RAID 0, the only option would be software RAID. That in turn renders the volume unbootable, and I had some performance issues with two Samsung XP941s in software RAID, so at this point I would advise against RAIDing SSD 750s. We'll have to wait for Intel's next generation chipsets to get proper RAID support for PCIe SSDs.

As for older chipsets, Intel isn't guaranteeing compatibility with 8-series and older chipsets. The main issue here is that motherboard OEMs aren't usually willing to support older chipsets with BIOS updates, and the SSD 750 (and NVMe in general) requires some BIOS modifications in order to be bootable. That said, some older motherboards may work with the SSD 750 just fine, but I suggest you do some research online or contact the motherboard manufacturer before pulling the trigger on the SSD 750.

Bootable? Yes

Understandably, the big question many of you have is whether the SSD 750 can be used as a boot drive. I've confirmed that the drive is bootable in my testbed with an ASUS Z97 Deluxe motherboard running the latest BIOS, and it should be bootable on any motherboard with proper NVMe support. Intel will have a list of supported motherboards on the SSD 750 product page; they are all X99 and Z97 based at the moment, but the support will likely expand over time (it's up to the motherboard manufacturers to release a BIOS version with NVMe support).

Furthermore, I know many of you want to see some actual real world tests that compare NVMe to SATA drives, and I'm working on a basic test suite to cover that. Unfortunately, I didn't have the time to include it in this review due to this and last week's NDAs, but I will publish it as a separate article as soon as it's done. If there are any specific tests you would like to see, feel free to make suggestions in the comments below and I'll see what I can do.

AnandTech 2015 SSD Test System
CPU                 Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard         ASUS Z97 Deluxe (BIOS 2205)
Chipset             Intel Z97
Chipset Drivers     Intel 10.0.24+ / Intel RST 13.2.4.1000
Memory              Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics            Intel HD Graphics 4600
Graphics Drivers    15.33.8.64.3345
Desktop Resolution  1920 x 1080
OS                  Windows 8.1 x64

Comments

  • Kristian Vättö - Thursday, April 2, 2015 - link

    That's up to the motherboard manufacturers. If they provide a BIOS with NVMe support then yes, but I wouldn't get my hopes up, as motherboard OEMs don't usually do updates for old boards.
  • vailr - Thursday, April 2, 2015 - link

    If Z97 board BIOSes from Asus, Gigabyte, etc. are going to be upgradeable to support Broadwell for all desktop (socket 1150) motherboards, wouldn't they also want to include NVMe support? I'm assuming such support is at least within the realm of possibility for both Z87 and Z97 boards.
  • TheRealPD - Thursday, April 2, 2015 - link

    Has anyone worked out exactly what the limitation is/why the BIOS needs upgrading yet?

    Simply that I had the idea that the P3700 had its own NVMe OROM, nominally akin to a RAID card... ...& that people have had issues with the updated mobo BIOSes replacing Intel's one with a generic one...

    ...which kind of suggests that the BIOS update could conceivably not be a requirement for some NVMe drives.
  • vailr - Friday, April 3, 2015 - link

    A motherboard BIOS update would be required to provide bootability. Without that update, an NVMe drive could only function as a secondary storage drive. As stated elsewhere, each device model needs specific support added to the motherboard BIOS. Samsung's SM941 (an M.2 form factor SSD) is a prime example of this conundrum, and of why it's not generally available as a retail device, although it can be found for sale at Newegg or on eBay.
  • TheRealPD - Friday, April 3, 2015 - link

    Ummmm... Well, for example, looking at http://www.thessdreview.com/Forums/ssd-discussion/... then the P3700 could be used as a boot drive on a Z87 board in July 2014 - so clearly that wasn't using a mobo BIOS with an added NVMe OROM, as AMI hadn't even released the generic NVMe OROM that's being added to the Z97 boards.

    (& from recollection, on Z97 boards, in Windows the P3700 is detected as an Intel NVMe device without the BIOS update... ...& an AMI NVMe one with the update)

    This appears to be effectively the same as, say, an LSI SAS RAID card loading its own OROM during the boot process & the drives on it becoming bootable - as obviously, as new RAID cards with new feature sets are introduced, you don't have to have updates for every mobo BIOS.

    Now, whilst I can clearly appreciate that *if* an NVMe drive didn't have its own OROM then there would be issues, it really doesn't seem to be the case with drives that do... ...so is there some other issue with the NVMe feature set or...?

    Now, obviously this review is about another Intel NVMe PCIe SSD - so it might be reasonable to imagine that it could well also have its own OROM - but, more generally, I'm questioning the assumption that just because it's an NVMe drive you can *only* fully utilise it with a board with an updated BIOS...

    ...& that if it's the case that some NVMe SSDs will & some won't have their own OROM (& it doesn't affect the feature set), it would be a handy thing to see talked about in the reviews, as it means that people with older machines are neither put off buying nor buy an inappropriate SSD when more consumer orientated ones are released.
  • TheRealPD - Saturday, April 4, 2015 - link

    I think I've kind of found the answer via a few different sources - it's not that NVMe drives necessarily won't work properly with booting & whatnot on older boards... it's that there's no stated consistency as to what will & won't work...

    So apparently they can simply fail to work on some boards due to a BIOS conflict, & there can separately be address space issues... So the AMI NVMe OROM & UEFI BIOS updates are about compatibility - *not* about whether an NVMe SSD with its own OROM will or won't work without them on any particular setup.

    it would be very useful if there was some extra info about this though...

    - well, it's conceivable that at least part of the problem is akin to the issues on much older boards with the free BIOS capacity for OROMs & multiple RAID configurations... ...where if you attempted to both enable all of the onboard controllers for RAID (as this alters the BIOS behaviour to load them) &/or had too many additional controllers, then one or more of them simply wouldn't operate due to the BIOS limitation, whereas they'd all work both individually & with smaller numbers enabled/installed... ...so people with older machines who haven't seen this issue previously, simply because they've never used cards with their own OROMs or because the SSD is the extra thing where they're hitting the limit, are now seeing what some of us experienced years ago.

    - or, similarly, that there's a minimum UEFI version that's needed - I know that Intel's recommending 2.3.1 or later for compatibility, but clearly they were working on some boards prior to that...
  • pesho00 - Thursday, April 2, 2015 - link

    Why did they omit M.2? I really think this is a mistake, missing the whole mobile market while the SM951 will penetrate both!
  • Kristian Vättö - Thursday, April 2, 2015 - link

    Because M.2 would melt with that beast of a controller.
  • metayoshi - Thursday, April 2, 2015 - link

    The idle power spec of this drive is 4W, while the SM951 is at 50mW with L1.2 power consumption at 2mW. Your notebook's battery life will suffer greatly with a drive this power hungry.
  • jwilliams4200 - Thursday, April 2, 2015 - link

    Even though you could not run the performance tests with additional overprovisioning on the 750, you should still show the comparison SSDs with additional overprovisioning.

    The fair comparison is NOT the Intel 750 with no OP versus other SSDs with no OP. The comparison you should be showing is similar capacity vs. similar capacity. So, for example, a 512GB Samsung 850 Pro with OP to leave it with 400GB usable, versus an Intel 750 with 400GB usable.

    I also think it would be good testing policy to test ALL SSDs twice, once with no OP and once with 50% overprovisioning, running them through all the tests in both configurations. The point is not that 50% OP is typical, but rather that it will reveal the best and worst case performance that the SSD is capable of. The reason I say 50% rather than 20% or 25% is that the optimal OP varies from SSD to SSD, especially among models that already come with significant OP. So, to be sure that you OP enough to reach optimal performance, and to provide historical comparison tests, it is best just to arbitrarily choose 50% OP, since that should be more than enough to achieve optimal sustained performance on any SSD.
