Testing is nearly complete on the last Corsair SSD that came my way, but this morning UPS dropped off another surprise: the Corsair Force SSD. Based on a derivative of the controller in the OCZ Vertex LE I reviewed earlier this year, the Force uses the mainstream version of SandForce's technology. Here's how it breaks down: last year's Vertex 2 Pro used an SF-1500 controller, the Vertex LE uses something between an SF-1500 and an SF-1200 (closer to the SF-1500 in performance), while the Corsair Force uses an SF-1200.

The SF-1200 has all of the goodness of the SF-1500, just without some of the more enterprise-y features. I haven't been able to get a straight answer from anyone as to exactly what you give up by going to the SF-1200, but you do gain a much more affordable price. The Vertex LE is only low in price because it uses a limited run of early controllers from SandForce, presumably so SandForce can raise capital. SF-1200-based SSDs should be price-competitive with current Indilinx offerings.

You'll notice that, like the Vertex LE, there's no supercap on the Force's PCB. There's also no external DRAM cache, thanks to a large amount of on-die cache and SandForce's real-time data compression/deduplication technology. As you may remember from my Vertex 2 Pro and Vertex LE reviews, SandForce achieves higher performance by simply reducing the amount of data it has to write to NAND (similar to lossless compression or data deduplication).
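To make the write-reduction idea concrete, here's a toy sketch (my own illustration; SandForce's actual pipeline is proprietary and far more sophisticated). A controller that stores the compressed form of a block whenever that form is smaller simply writes fewer bytes to NAND:

```python
import os
import zlib

def bytes_to_write(data: bytes) -> int:
    """Toy model of a compressing controller: store the compressed
    form when it's smaller, so fewer bytes actually hit the NAND."""
    compressed = zlib.compress(data)
    return min(len(compressed), len(data))

# Repetitive data (logs, OS files) compresses well...
print(bytes_to_write(b"A" * 4096))       # far fewer than 4096 bytes written
# ...while already-compressed data (JPEG, video) sees no benefit.
print(bytes_to_write(os.urandom(4096)))  # 4096: no savings on random data
```

This is also why SandForce's performance depends on what you write: highly compressible workloads fly, while incompressible data falls back to ordinary write speeds.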

I've got the Force on my SSD testbench now and I should have the first results by the end of the day today. This one is exciting as it could give us a preview of what the performance mainstream SSD marketplace will look like for the rest of 2010.

More pics of the drive in the Gallery!




  • SandmanWN - Tuesday, April 13, 2010 - link

    Yeah, that's what I'm getting at. Just trying to point it out now before the full article comes out. I don't want the numbers to change in a later SSD comparison and end up with incongruent charts for spare-to-drive-space performance.
  • Anand Lal Shimpi - Tuesday, April 13, 2010 - link

    Yep, these are still very enterprise-oriented drives. SandForce plans on delivering a version with less spare area, but that's behind these SF-1500/1200 derivatives.

    I talked a bit about spare area on the SF drives here: http://anandtech.com/show/2899/5
  • Luke212 - Tuesday, April 13, 2010 - link

    Do you think having a large reserve is cheating for marketing purposes? They sustain the performance long enough to get it past reviewers, but long-term users will encounter a slowdown as the spare area is eventually used up.

    My X25 G1 is grinding to a halt these days. I freed up 26GB, but of course it makes no difference now until I image and secure erase.
  • jimhsu - Tuesday, April 13, 2010 - link

    That's only if TRIM doesn't work or is not supported (i.e. for the G1 drives). For any modern drive with TRIM, there should be no "progressive slowdown" or anything of that sort. Performance is solely dependent on the percentage of free blocks.
  • jimhsu - Tuesday, April 13, 2010 - link

    In contrast, I think the 80GB Intel drives don't have enough spare area. 7.4% if I'm correct ... just from empirical testing, I noticed that anything below 20GB of free space (in Windows) creates a noticeable slowdown, esp. in sequential write scenarios (i.e. the drive bursts at the maximum transfer rate followed by intermittent pauses).
  • Exodite - Tuesday, April 13, 2010 - link

    While the SandForce drives are impressive, as are most new SSDs really, I find myself unwilling to part with serious money for these drives until they've migrated to SATA 6.0 Gbps.

    The SandForce drives especially, since both their sequential read and write speeds are pretty much at the limit of what SATA 3.0 Gbps can do once you take overhead into account. Frankly, I don't understand why the developers didn't consider this in the first place.

    Oh well, waiting won't cost me anything.
  • DigitalFreak - Tuesday, April 13, 2010 - link

    I'm assuming that by the time the SATA 3 spec was finalized, they were too far into development of their controller to switch.
  • JarredWalton - Tuesday, April 13, 2010 - link

    First we need good SATA6G implementations... then we can worry about 6G drives. :-) Seriously, though, the current 6G chipsets often seem to reduce performance relative to 3G. I wouldn't be surprised to see it take the integration of 6G into the Northbridge before we get proper performance across the board.
  • vol7ron - Tuesday, April 13, 2010 - link

    That only applies to drives that don't exceed the 3G threshold.

    A drive that exceeds the 3G theoretical limit (on 6G) still makes 6G worth it, despite the fact that the 6G controller is not yet mature, or as efficient as it could be. For those SATA2 drives that can't exceed the 3G limit, yes: stick to 3G.
  • JarredWalton - Tuesday, April 13, 2010 - link

    No, there are plenty of cases where 6G SATA implementations aren't doing as well as they should:

    Sequential read has Marvell's 6G in the lead, provided it's paired with PCIe 2.0. AMD's 890GX is slightly behind, but 3G off native X58 is better than 6G off PCIe 1.x.

    On random read, the X58 solution is within spitting distance of the best 6G scores, and oddly enough AMD's 890GX does quite poorly (BIOS updates may have fixed this by now).

    Random write has AMD's 890GX in the lead, but the Intel X58 beats all the Marvell results.

    So I stand by my statement: we need better implementations of 6G before it makes a huge difference. The sequential read/write performance is nice for benchmark charts, but random performance is far more common and it's what really makes SSDs shine compared to HDDs. X58 has a very robust 3G implementation, it seems, and if that's what you're running you'd only lose a very small amount of performance worst case, and in other cases you'd end up quite a bit faster.
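As an aside, the 7.4% figure jimhsu quotes in the comments above falls out of the binary-versus-decimal capacity gap; here's a quick sketch (my arithmetic, under the assumed model that the entire gap is used as spare area):

```python
# Spare-area arithmetic for an 80GB drive (assumed model: the whole
# binary/decimal capacity gap is reserved as spare area).
physical_nand = 80 * 2**30   # NAND is built in binary units: 80 GiB
user_capacity = 80 * 10**9   # drives are advertised in decimal: 80 GB
spare = physical_nand - user_capacity
print(round(100 * spare / user_capacity, 1))  # 7.4 (percent of user capacity)
```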
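Exodite's point about the 3Gbps ceiling is also easy to put numbers on: after 8b/10b encoding the link tops out at 300MB/s, and protocol overhead eats a bit more (the ~10% figure below is my rough assumption, not a measured value):

```python
# Back-of-the-envelope SATA 3Gbps throughput ceiling.
line_rate = 3.0                    # SATA signaling rate, in Gbps
payload_gbps = line_rate * 8 / 10  # 8b/10b encoding: 20% goes to encoding
ceiling_mb_s = payload_gbps * 1000 / 8  # 300 MB/s theoretical payload
usable = ceiling_mb_s * 0.9        # minus ~10% protocol overhead (rough guess)
print(ceiling_mb_s, round(usable))
```

With SandForce drives already benchmarking in the 260-280MB/s range sequentially, there really is almost no headroom left on a 3Gbps link.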
