During its iPad mini launch event today, Apple updated many members of its Mac lineup. The 13-inch MacBook Pro, iMac and Mac mini all received updates. For the iMac and Mac mini, Apple introduced a new feature I honestly expected to debut much earlier: Fusion Drive.

The idea is simple. Apple offers either solid-state or mechanical hard drive storage in its iMac and Mac mini, so end users have to choose between performance and capacity/cost-per-GB. With Fusion Drive, Apple is attempting to offer the best of both worlds.

The new iMac and Mac mini can be outfitted with a Fusion Drive option that couples 128GB of NAND flash with either a 1TB or 3TB hard drive. The Fusion part comes courtesy of Apple's software, which takes the two independent drives and presents them to the user as a single volume. Originally I thought this might be SSD caching, but after poking around the new iMacs and talking to Apple I have a better understanding of what's going on.

For starters, the 128GB of NAND is simply an SSD on a custom form factor PCB with the same connector that's used in the new MacBook Air and rMBP models. I would expect this SSD to use the same Toshiba or Samsung controllers we've seen in other Macs; the iMac I played with had a Samsung-based SSD inside.

Total volume size is the sum of both parts. In the case of the 128GB + 1TB option, the total available storage is ~1.1TB. The same is true for the 128GB + 3TB option (~3.1TB total storage).

By default the OS and all preloaded applications are physically stored on the 128GB of NAND flash. But what happens when you go to write to the array?

With Fusion Drive enabled, Apple creates a 4GB write buffer on the NAND itself. Any writes that come into the array hit this 4GB buffer first, which acts as a sort of write cache. Any additional writes cause the buffer to spill over to the hard disk. The idea here is that, hopefully, 4GB will be enough to accommodate the small-file random writes that could otherwise significantly bog down performance. Having those writes buffered in NAND helps deliver SSD-like performance for light workloads.
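
To make the spill-over behavior concrete, here's a minimal sketch in Python. The class and method names are mine, and the routing logic is an assumption based on the description above, not Apple's actual implementation:

```python
class FusionWriteBuffer:
    """Toy model of Fusion Drive's 4GB NAND write buffer.

    All names and numbers are illustrative; Apple hasn't published the
    real algorithm. Writes land on the SSD until the buffer fills,
    then spill over to the hard disk.
    """

    BUFFER_LIMIT = 4 * 1024**3  # the assumed 4GB landing area on the NAND

    def __init__(self):
        self.buffered_bytes = 0  # how much of the landing area is in use

    def route_write(self, size_in_bytes: int) -> str:
        """Return which physical device a write of this size would hit."""
        if self.buffered_bytes + size_in_bytes <= self.BUFFER_LIMIT:
            # Small/random writes complete at SSD speed, even if the data
            # ultimately belongs on the hard disk.
            self.buffered_bytes += size_in_bytes
            return "ssd"
        # Once the buffer is full, further writes run at HDD speed.
        return "hdd"

    def flush(self):
        """Background migration empties the buffer so new writes hit NAND again."""
        self.buffered_bytes = 0


# Example: a burst of small writes stays on NAND; a huge transfer spills over.
buf = FusionWriteBuffer()
print(buf.route_write(512 * 1024**2))  # 512MB -> 'ssd'
print(buf.route_write(8 * 1024**3))    # 8GB   -> 'hdd'
```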

That 4GB write buffer is the only cache-like component of Apple's Fusion Drive. Everything else works as an OS-directed pinning algorithm rather than an SSD cache. In other words, Mountain Lion will physically move frequently used files, data and entire applications to the 128GB of NAND flash, and move less frequently used items to the hard disk. The moves aren't committed until the copy is complete (meaning if you pull the plug on your machine while Fusion Drive is moving files around, you shouldn't lose any data). Only after the copy is complete is the original deleted and its space recovered.
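
A rough sketch of that copy-then-commit behavior, again with hypothetical names and using ordinary file operations rather than whatever Fusion Drive does internally:

```python
import os
import shutil


def migrate_file(src_path: str, dst_dir: str) -> str:
    """Move a file between tiers without a window where data can be lost.

    The copy is written under a temporary name on the destination tier and
    only committed (renamed into place) once it is complete; the original
    is deleted afterwards. If power is lost mid-copy, the original is still
    intact and only the partial temporary copy needs cleaning up.
    """
    final_path = os.path.join(dst_dir, os.path.basename(src_path))
    tmp_path = final_path + ".inflight"

    shutil.copy2(src_path, tmp_path)   # 1. copy to the other tier
    os.rename(tmp_path, final_path)    # 2. commit the completed copy
    os.remove(src_path)                # 3. only now reclaim the original
    return final_path
```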

After a few accesses, Fusion Drive should be able to figure out whether it needs to pull something new into NAND. The 128GB size is near ideal for most light client workloads, although I do suspect heavier users might be better served by something closer to 200GB.
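
A policy along those lines could look something like the following sketch. The three-access threshold and all of the names are assumptions for illustration, not anything Apple has confirmed:

```python
from collections import Counter


class TieringPolicy:
    """Toy promotion/demotion policy driven by access counts."""

    PROMOTE_AFTER = 3  # assumed threshold: promote after a few accesses

    def __init__(self):
        self.access_counts = Counter()
        self.on_ssd = set()

    def record_access(self, path: str):
        self.access_counts[path] += 1

    def promote(self, path: str):
        # After the file has been safely copied to NAND (see the move
        # logic above), mark it as living on the SSD tier.
        self.on_ssd.add(path)

    def candidates_for_promotion(self):
        # Files read repeatedly but still living on the HDD are worth
        # pulling into the 128GB of NAND.
        return [p for p, n in self.access_counts.items()
                if n >= self.PROMOTE_AFTER and p not in self.on_ssd]

    def candidates_for_demotion(self):
        # Cold files on the SSD can be pushed back to the HDD when the
        # NAND tier fills up.
        return [p for p in self.on_ssd
                if self.access_counts[p] < self.PROMOTE_AFTER]


# Example: three reads of the same app make it a promotion candidate.
policy = TieringPolicy()
for _ in range(3):
    policy.record_access("/Applications/Aperture.app")
print(policy.candidates_for_promotion())  # ['/Applications/Aperture.app']
```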

There is no user interface for Fusion Drive management within OS X. Once the volume is created it cannot be broken apart through a standard OS X tool (although clever users should be able to find a way around that). I'm not sure what a Fusion Drive will look like under Boot Camp; it's entirely possible that Apple will put a Boot Camp partition on the HDD alone. OS X doesn't hide from you the fact that there are two physical drives in your system: a System Report generated on a Fusion Drive enabled Mac will show both drives connected via SATA.

The concept is interesting, at least for mainstream users. Power users will still get the better performance (and reliability benefits) of going purely with solid-state storage. The target for Fusion Drive remains users who don't want to deal with managing data and applications across two different volumes (in other words, the ultra-mainstream customer).

With a 128GB NAND component, Fusion Drive could work reasonably well. We'll have to wait and see what happens when we get our hands on an iMac next month.

Comments

  • orthorim - Wednesday, October 24, 2012 - link

    I am not sure why file moving is so important. Moving from where to where? Even with a Thunderbolt connector, you'd need an external SSD to see a difference. Otherwise either the connector - i.e. USB 3 - or the external storage would be the bottleneck.

    The 4GB write cache is for one edge case, which is that programs like to write lots of little files - file locks, temporary small stuff, whatever. The most-recently-used algorithm would miss those because they are brand new files - there's no usage data available. So they all go into that 4GB cache first - it's a great solution IMO.

    For all other cases, the SSD and HDD dynamically re-arrange their contents. So if you're editing a movie, that movie will be on the SSD, for example. (I was about to bring up the case of a 300MB Photoshop file, but then realized that easily fits in that 4GB buffer too... whoops... 4GB is quite a bit; only movie editing will really exceed that.)
  • ijmmany - Thursday, November 1, 2012 - link

    From what I saw at the launch, and from playing with one in the Apple Store, you can adjust the cache accordingly - I think up to a 12GB write cache.
  • tipoo - Wednesday, October 24, 2012 - link

    The more they use for the write buffer, the less is available for files and programs, I guess. 4GB just for writes should be enough for most people in most cases though; only when you transfer something larger than that would you take a performance hit.
  • bsd228 - Thursday, October 25, 2012 - link

    4GB is more than enough - you only need to store the last X seconds of writes and then flush them out. ZFS's ZIL partition for a home filer can be fine even at 2GB.

    As for writing everything to SSD and then migrating the slow stuff out to HDD, that requires a lot more SSD to work. Veritas's VxFS v5 file system includes dynamic storage tiering (DST) that behaves this way, but the target mix of fast/slower storage there is 30/70, not 1/8 or 1/24 like these Apple offerings. There is also considerable overhead in tracking and refreshing file locations with their product.
  • LeftSide - Wednesday, October 24, 2012 - link

    I would think that a block-based cache drive would offer more fine-grained performance. The concept is interesting; I can't wait to see some performance reviews.
  • Freakie - Wednesday, October 24, 2012 - link

    It still could be block-based, could it not? I mean, SRT is block-based yet it caches whole files, not just a few of the blocks the file takes up :P If a file is on blocks 2, 3, 4, 5, 7 and 11, then all of those blocks should be accessed an equal number of times, ensuring the entire file and all of its blocks are transferred.
  • orthorim - Wednesday, October 24, 2012 - link

    Absolutely not. The opposite is the case. An algorithm that works on the OS level can make much better decisions on what to keep on the SSD and what to move to the HDD.

    It could move all my media files onto the HDD, for example - I might watch movies, listen to music and look at my pictures, but the chance that I'd edit these *and* that the editing would incur a performance penalty is very small. It could keep all system files on the SSD. And so on. The OS has much more information about how your files might get used, so it can make much better decisions.
  • Zink - Friday, October 26, 2012 - link

    It could even keep track of which applications benefit heavily from the SSD and which don't, to help make sure a Photoshop install that gets used a couple of times a month doesn't get pushed off the SSD to make room for big 10GB games that get played many times a week.
  • CharonPDX - Wednesday, October 24, 2012 - link

    SSD caching, Intel's failed Turbo Memory, and the like have all failed, for the reason that they tried to be too tricky, or required too much manual effort.

    This seems to hit the sweet spot: automatic, immediate caching for small amounts (a la Turbo Memory), with automatic repositioning of larger amounts between drives based on usage.

    No manual keeping track, but MUCH more benefit than the existing solutions. In all honesty, this is what I thought both Turbo Memory and SSD caching *WERE* until I read more into them. This makes a lot more sense. Use the spinning drive as "volume" storage, as it should be, then once you figure out which smaller amounts of data should be on the higher-speed drive, move them.

    Make new writes of smaller amounts of data go to the SSD, then write them to the spinning drive when the workload allows. No risk of losing data like with a "regular" write-back cache.
  • MarkLuvsCS - Wednesday, October 24, 2012 - link

    "SSD caching, Intel's failed Turbo Memory, and the like have all failed, for the reason that they tried to be too tricky, or required too much manual effort."

    I don't understand this at all. Have you ever tried to use Intel's SSD caching, which debuted on the Z68 platform? I'm guessing you're probably just spewing someone else's thoughts on the platform without ever trying it.

    I have used SSD caching on my Z68 platform since I first put it together. After about 5-10 minutes and a few reboots, SSD caching was up and running and incredibly noticeable. Aside from the brief setup, I've never spent another thought on the matter. It's the easiest solution out there. If the SSD dies randomly, guess what? Nothing happens other than me replacing the SSD. My single drive stores everything and maintains a cache for anything I load often.
