For months SandForce has been telling me that the market is really going to get exciting once its next-generation controller is ready. I didn’t really believe it, simply because that’s what every company tells me. But in this case, at least based on what SandForce showed me, I probably should have.

What we have today are the official specs of the second-generation SandForce SSDs, the SF-2000 series. Drives will be sampling to enterprise customers in the coming weeks, but we probably won’t see shipping hardware until Q1 2011 if everything goes according to plan. And the specs are astounding:

We'll get to the how in a moment, but let's start with the basics. The overall architecture of the SF-2000 remains unchanged from what we have today with the SF-1200/SF-1500 controllers.

SandForce’s controller gets around the inherent problems with writing to NAND by simply writing less. Using real-time compression and data deduplication algorithms, the SF controllers store a representation of your data rather than the data itself. The reduced data set stored on the drive is also encrypted and stored redundantly across the NAND to guard against data loss from page-level or block-level failures. Both of these features are made possible by the fact that there’s simply less data to manage.
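
To make the idea concrete, here is a minimal sketch of the write-less principle: fingerprint each incoming page, skip exact duplicates, and compress whatever is genuinely new before it reaches NAND. This is purely illustrative; SandForce's actual algorithms are proprietary and run in hardware, and every name and parameter below (PAGE_SIZE, host_write, the SHA-256/zlib choices) is a stand-in of my own, not anything SandForce has disclosed.

```python
# Illustrative sketch only: reduce physical writes via dedup + compression.
import hashlib
import zlib

PAGE_SIZE = 4096          # assumed logical page size for illustration
dedup_index = {}          # page fingerprint -> "physical" location
nand = []                 # stand-in for physical NAND pages
bytes_received = 0        # data the host sent
bytes_written = 0         # data that actually hit "NAND"

def host_write(page: bytes) -> int:
    """Return the location a logical page ends up at."""
    global bytes_received, bytes_written
    bytes_received += len(page)

    fingerprint = hashlib.sha256(page).digest()
    if fingerprint in dedup_index:          # duplicate page: write nothing new
        return dedup_index[fingerprint]

    compressed = zlib.compress(page)        # store a smaller representation
    nand.append(compressed)
    bytes_written += len(compressed)
    dedup_index[fingerprint] = len(nand) - 1
    return dedup_index[fingerprint]

# Compressible, partly duplicated host data results in far fewer NAND writes.
for block in [b"A" * PAGE_SIZE, b"A" * PAGE_SIZE, b"AB" * (PAGE_SIZE // 2)]:
    host_write(block)
print(f"received {bytes_received} B, wrote {bytes_written} B to NAND")
```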

Another side effect of SandForce’s write-less policy is that there’s no need for external DRAM to hold large mapping tables. This reduces the total BOM cost of the SSD and allows SandForce to charge a premium for its controllers.
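
As a rough back-of-the-envelope illustration (my numbers, not SandForce's), here is why a full page-level mapping table normally pushes controllers toward external DRAM; the article's argument is that tracking less data lets the tables stay small enough for on-chip memory.

```python
# Hypothetical sizing exercise, not SandForce's actual design parameters.
DRIVE_CAPACITY = 256 * 1024**3   # 256 GB drive, assumed for illustration
PAGE_SIZE = 4096                 # 4 KB logical pages, assumed
ENTRY_SIZE = 4                   # bytes per map entry, assumed

entries = DRIVE_CAPACITY // PAGE_SIZE
map_bytes = entries * ENTRY_SIZE
print(f"{entries:,} entries -> {map_bytes / 1024**2:.0f} MB of mapping data")
# On the order of hundreds of MB: far too large for on-chip SRAM, which is
# why most controllers hang a DRAM chip off the side.
```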

These are the basics and, as I mentioned above, they haven’t changed. The new SF-2000 controller is faster but the fundamental algorithms remain the same. The three areas that have been improved, however, are the NAND interface, the on-chip memories, and the encryption engine.

NAND Support: Everything
Comments

  • karndog - Thursday, October 7, 2010 - link

    Put two of these babies in RAID 0 for 1GB/s reads AND writes. Very nice IF it lives up to expectations!
  • Silenus - Thursday, October 7, 2010 - link

    Indeed. We will have to wait and see. Hopefully the numbers are not too optimistic. Hopefully there are not too many firmware pains. Still...it's an exciting time for SSD development. Beginning of next year is when I will be ready to buy an SSD for my desktop (have one in my laptop already). Should be nice new choices by then!
  • Nihility - Thursday, October 7, 2010 - link

    It'll be 1 GB/s only on non-compressed / non-random data.
    Still, very cool.
  • mailman65er - Thursday, October 7, 2010 - link

    better yet, put that behind Nvelo's "Dataplex" software, and use it as a cache for your disk(s). Seems like a waste to use it as a storage drive, most bits sitting idle most of the time...
  • vol7ron - Thursday, October 7, 2010 - link

    "most bits sitting idle most of the time... "

    Thus, the extended life.
  • mailman65er - Thursday, October 7, 2010 - link

    "Thus, the extended life."

    Well yes, you could get infinite life out of it (or any other SSD) if you never actually used it...
    The point is that if you are going to spend the $$'s for the SSD that uses this controller (I assume both NAND and controller will be spendy), then you want to actually "use" it, and get the max efficiency out of it. Using it as a storage drive means that most bits are sitting idle, using it as a cache drive keeps it working more. Get that Ferrari out of the barn and drive it!
  • mindless1 - Tuesday, October 19, 2010 - link

    Actually no, the last thing you want to use an MLC flash SSD for is constant-write caching.
  • Havor - Friday, October 8, 2010 - link

    I really don't get the obsession with RAID, especially RAID 0.

    It's the IOPS that count for how fast your PC boots or starts programs, and with 60K IOPS I think you're covered.

    Putting these drives in RAID 0 could actually slow them down for some data patterns: as data is divided over two drives it has to arrive at the same time, or one drive has to wait for the other to catch up.

    Yes, you will see a huge boost in sequential reads/writes, but with small random data the benefit would be negative, and the overall benefit would be up to around 5%. The downside would be the higher risk of data loss if one of the drives breaks down.
  • mindless1 - Tuesday, October 19, 2010 - link

    No it isn't. Typical PC boot and app loading is linear in nature; it's only benchmarks that try to do several things simultaneously (IOPS), and a very limited set of apps or servers, that need IOPS significantly more than linear read/write performance.

    You are also incorrect about them slowing down while waiting: if the drives' DRAM cache doesn't cover it, there is the system's main memory cache, and on some RAID controllers (mid to higher-end discrete cards) there is even a *third* level of cache on the card.

    Overall benefit 5%? LOL, if you are going to make up numbers at least try harder to get close. Or, get ready for it, actually try it: RAID two drives, then run benchmarks representative of typical PC usage.

    Overall, the benefit will depend highly on the task. To put it another way: you probably don't need to speed up things that are already reasonably quick; rather, focus on the slowest or most demanding tasks on that "PC".
  • Golgatha - Thursday, October 7, 2010 - link

    DO WANT!!!
