Western Digital has announced two new helium-filled hard drives targeting consumer and business NAS applications. The new WD Red and WD Red Pro HDDs increase the capacity of WD's NAS drives to 10 TB, boost their performance and reduce their power consumption. As a result, NAS makers can push eight-bay systems to 80 TB of raw capacity (or 16-bay systems to 160 TB) while increasing speeds and cutting power.

After introducing its first hermetically sealed helium-filled NAS and video-surveillance HDDs with 8 TB capacity and six platters last year, Western Digital is refreshing its Red and Purple lineups with more advanced drives offering 10 TB capacity and using seven platters of roughly 1.43 TB each. The new WD Red and WD Red Pro with 10 TB capacity are based on revamped 5400 RPM and 7200 RPM HelioSeal platforms that can support a higher number of platters. The drives also feature increased areal density and 256 MB of cache, enabling roughly 17% higher sequential read/write performance than their predecessors, as well as lower power consumption than the previous-generation helium-filled WD Red drives. Beyond that, Western Digital does not disclose much about the feature set of its helium-filled NAS HDD platform.

The WD Red 10 TB drive is engineered for personal or small business NAS systems with up to eight bays; it is optimized for mixed workloads and has a 5400 RPM spindle speed. By contrast, the WD Red Pro 10 TB is aimed at medium business and enterprise-class NAS systems with up to 16 bays, which is why the HDD features additional protection against vibrations as well as improved random read performance thanks to both its 7200 RPM spindle speed and firmware tuning. Just like their predecessors, the new WD Red/WD Red Pro hard drives come with a SATA 6 Gbps interface.

Comparison of Western Digital's Helium-Filled NAS HDDs

                                         WD Red        WD Red        WD Red Pro    WD Red Pro
                                         WD100EFAX     WD80EFZX      WD101KFBX     WD8001FFWX
Capacity                                 10 TB         8 TB          10 TB         8 TB
Spindle Speed                            5400 RPM      5400 RPM      7200 RPM      7200 RPM
Interface                                SATA 6 Gbps   SATA 6 Gbps   SATA 6 Gbps   SATA 6 Gbps
DRAM Cache                               256 MB        128 MB        256 MB        128 MB
Data Transfer Rate (host to/from drive)  210 MB/s      178 MB/s      240 MB/s      205 MB/s
MTBF                                     1 million hours (all models)
Rated Workload (read and write)          180 TB/year   180 TB/year   300 TB/year   300 TB/year
Acoustics (Seek)                         29 dBA        29 dBA        36 dBA        36 dBA
Power, Sequential Read/Write             5.7 W         6.4 W         5.7 W         8.3 W
Power, Idle                              2.8 W         5.7 W         2.8 W         5.2 W
Power, Sleep                             0.5 W         0.7 W         0.5 W         0.7 W
Warranty                                 3 Years       3 Years       5 Years       5 Years
Price (as of May 2017)                   $494          $266.25       $533          $359.99
Price per GB                             $0.049        $0.033        $0.05         $0.045
GB per $                                 20            30            18.76         22.2
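
As a quick sanity check on the last two rows, here is a minimal Python sketch (an illustrative calculation, not anything from WD's materials) that reproduces the per-gigabyte pricing from the list prices above, treating capacities as decimal gigabytes (10 TB = 10,000 GB):

    # Reproduce the "Price per GB" and "GB per $" rows of the table above.
    # Capacities are treated as decimal gigabytes (10 TB = 10,000 GB).
    drives = {
        "WD Red 10TB (WD100EFAX)":     (494.00, 10000),
        "WD Red 8TB (WD80EFZX)":       (266.25,  8000),
        "WD Red Pro 10TB (WD101KFBX)": (533.00, 10000),
        "WD Red Pro 8TB (WD8001FFWX)": (359.99,  8000),
    }

    for name, (price, capacity_gb) in drives.items():
        per_gb = price / capacity_gb          # dollars per gigabyte
        gb_per_dollar = capacity_gb / price   # gigabytes per dollar
        print(f"{name}: ${per_gb:.3f}/GB, {gb_per_dollar:.1f} GB per $")

The output (roughly $0.033/GB and 30 GB per $ for the 8 TB WD Red versus $0.049/GB and 20 GB per $ for the 10 TB model) shows why the older 8 TB drives remain the better value per gigabyte, a point raised in the comments below.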

It is interesting to note that WD has improved the power consumption of the 10 TB drives over the older 8 TB drives. We have asked WD how exactly it achieved this, as details were not given in the press release.

The 10TB WD Red and 10TB WD Red Pro are available in the U.S. from select retailers and distributors. The WD Red 10 TB is covered by a three-year warranty and has a price tag of $494. The more advanced WD Red Pro 10 TB features a five-year warranty and has a $533 MSRP.

Source: Western Digital

  • bill.rookard - Monday, May 22, 2017 - link

    Hmm.. while I do appreciate the higher capacity, the 8TB drives coming in at half the price seem to be the much better deal. At this point I could reduce my 5x2TB RAID5 into a 2x8TB RAID1 and gain quite a bit of confidence.
  • Space Jam - Monday, May 22, 2017 - link

    5x2TB RAID5 is a pretty brave decision.

    2x8TB RAID1 would be a performance sacrifice but you would gain A LOT of much needed confidence as that 5x2TB RAID5 is one drive failure and a near guaranteed URE waiting to happen.
  • SirMaster - Monday, May 22, 2017 - link

    Near guaranteed URE? That's complete nonsense. I've done a bunch of testing myself and find nothing of this sort to be remotely true.

    I built a 10x2TB MD RAID 5 array, filled it with data and then kept removing a drive and rebuilding the array, then verifying all the data. I rebuilt it and verified it 20 times before I needed the disks for something else. I never came across a URE.

    I also have a 12x4TB ZFS pool that I scrub twice a month. ZFS notifies you of UREs and repairs them when they are found. I've scrubbed my 70% filled pool more than 100 times with no URE encountered.

    UREs are not as common as you apparently think they are.
  • DanNeely - Monday, May 22, 2017 - link

    "URE's are not as common as you apparently think they are."

    More to the point, they're nowhere near as common as the specsheets imply they are. The specsheet numbers (unchanged for a decade or two despite all the capacity increases) imply a failure rate of once per ~12.5TB (100 terabits), which would make RAIDs above a few terabytes a crapshoot to rebuild, and ones a few times larger nearly impossible. The reddit post I've linked has some actual test-run data from someone else whose results were the same as yours. Some of the replies, e.g. the one claiming that the average is dominated by much less frequent failure modes that generate large numbers of errors at once, are interesting as well. Either way, the naive prediction of how big an array you can rebuild before likely being doomed by a URE is much too small. Rebuilding an array can fail today just like it could a decade or two ago, but the apocalyptic predictions from early this century never came to pass.

    https://www.reddit.com/r/zfs/comments/3gpkm9/stati...
  • SirMaster - Monday, May 22, 2017 - link

    Which specsheets show 1 URE in 12.5TB? All the specsheets for my disks say < 1 in 10^15, as in *less than*. So if I see 1 URE in 100TB or 1 URE in 1PB, that's certainly *less than* 1 in 10^15.
  • SirMaster - Monday, May 22, 2017 - link

    I mis-wrote in my previous comment.

    All the datasheets that I have seen say < 1 in 10^14.

    See the WD Red datasheet for these 10TB disks:

    https://www.wdc.com/content/dam/wdc/website/downlo...

    It does not say equal to 1 in 10^14, so there is no reason to think you are anywhere near guaranteed to get a URE in 12.5TB.
  • Maltz - Monday, May 22, 2017 - link

    The implication is there because there exist drives with the <1 in 10^15 (about 1 error per 125TB read) spec. Since these drives are not expected to be that reliable, the implication is that there is a reasonable chance of an error somewhere between 12.5TB and 125TB for a drive rated <1 in 10^14.

    Of course in reality, other factors come into play. Personally, I chose RAID6 for my NAS. Spinning storage is cheap, so why not.
  • Robert Pankiw - Monday, May 22, 2017 - link

    I think the error is a percent chance, which doesn't accumulate the same way as a "1 per xxTB written" would accumulate.

    It's probably something like (1-10^-15)^(10^15), which is about 37%, meaning that if you read about 113TB (10^15 bits is about 113TB, unless you're one of THOSE people, in which case it's 125TB) then you can expect about 1 URE on average. If you did that without hitting a URE and then read another 113TB, the odds of having gone the whole way without a URE drop from about 37% to about 13.5%.
  • ddriver - Monday, May 22, 2017 - link

    I'd take ZFS over hardware raid any time.
  • Sivar - Wednesday, May 24, 2017 - link

    ZFS has issues of its own. For example, it requires ECC system RAM for reliability to a far greater extent than most file systems. Performance can be much lower (because of ZFS's design, not because of hardware RAID performance). Operating system support is somewhat more limited. Of course, ZFS and hardware RAID are not mutually exclusive or even necessarily competitors.
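
As a footnote to the URE debate above, here is a minimal Python sketch of the naive math the commenters are arguing about. It assumes unrecoverable read errors occur independently at the rated per-bit probability (1 in 10^14 bits for these WD Red drives per their datasheet, 1 in 10^15 for some enterprise models); the hands-on testing described in the comments suggests real drives behave far better than this model predicts.

    # Naive URE model: treat the spec-sheet rate as an independent per-bit
    # error probability and compute the chance of reading a given amount
    # of data without hitting a single unrecoverable read error.
    def p_no_ure(bytes_read, rate=1e-14):
        """Probability of zero UREs over `bytes_read` bytes, assuming
        `rate` errors per bit read (the naive independence model)."""
        bits = bytes_read * 8
        return (1 - rate) ** bits

    TB = 1e12  # decimal terabyte

    # Rebuilding the 2x8TB RAID1 mentioned above means re-reading ~8 TB
    # from the surviving drive.
    print(f"8 TB read, 1e-14 rate: {1 - p_no_ure(8 * TB):.1%} chance of at least one URE")

    # The '1 URE per ~12.5TB' figure: 10^14 bits is 12.5 TB.
    print(f"12.5 TB read, 1e-14 rate: {1 - p_no_ure(12.5 * TB):.1%} chance of at least one URE")

    # Robert Pankiw's numbers: reading 10^15 bits (125 TB decimal) at a
    # 1e-15 rate leaves about a 37% chance of no URE; twice that, ~13.5%.
    print(f"125 TB read, 1e-15 rate: {p_no_ure(125 * TB, 1e-15):.1%} chance of no URE")
    print(f"250 TB read, 1e-15 rate: {p_no_ure(250 * TB, 1e-15):.1%} chance of no URE")

Under this naive model even a single 8 TB mirror rebuild has roughly a coin-flip chance of hitting a URE, which is exactly the pessimistic prediction that the commenters report not seeing in practice.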
