Latency Percentile and Power Consumption
Latency Percentile
We are replacing our old ‘Average Transaction Time’ results with Latency Percentile data, as this exclusive new testing paints a much clearer picture than simple averages (or even plots of average over time) can show.
For reads, I’ll first explain what this chart is doing with respect to HDDs and queue depth. At low queue depths, the percentile plot sits on a slight slope because latency varies with seek length (a product of the random workload) and with rotational position. At higher queue depths, the profiles ‘stretch’ to higher latencies: the minimum latency stays similar, but the maximum (tail) latency shifts further to the right, because queued commands may be pushed to the back of the queue when re-ordered by the HDD firmware. You’ll note that at higher QD the latency per IO rises, yet the HDD also puts out higher IOPS overall, since the drive can better optimize its seek pattern when operating with a deeper queue.
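To make the percentile concept concrete, here is a minimal sketch of how such a curve is built from raw per-IO completion times. The workload numbers (5400 RPM rotational window, 2-16 ms seek spread) are illustrative assumptions, not our measured data:

```python
import random

def latency_percentiles(samples_ms):
    """Return (percentile, latency) pairs for a latency percentile plot."""
    s = sorted(samples_ms)
    n = len(s)
    return [((i + 1) / n * 100.0, lat) for i, lat in enumerate(s)]

# Simulated QD=1 random-read latencies: rotational latency is uniform
# over one revolution (0..11.1 ms at 5400 RPM); the seek time spread
# is an assumed stand-in for the varying seek distances.
random.seed(0)
samples = [random.uniform(0, 60_000 / 5400)     # rotational delay, ms
           + random.triangular(2, 16, 8)        # seek time, ms (assumed)
           for _ in range(10_000)]
curve = latency_percentiles(samples)
p50 = next(lat for pct, lat in curve if pct >= 50)
p99 = next(lat for pct, lat in curve if pct >= 99)
print(f"median ~{p50:.1f} ms, 99th percentile ~{p99:.1f} ms")
```

Sorting the samples and reading latency off at each percentile is all these plots are; the ‘stretch’ at higher QD shows up as the upper percentiles sliding right while the lower ones barely move.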
Getting into the results, I’ve included the He8 data for comparison, but remember that it is a 7200 RPM unit while both Reds spin at 5400 RPM. A combination of rotational latency and seek time at QD=1 is responsible for the 3ms shift between the He8 and the two Reds. Both Reds start off with nearly identical IOPS and latency profiles, but as load increases, we see the 8TB Red start to behave more like the faster enterprise-rated He8. The same can be said for the IOPS results (included in the legend) – at QD=32, the 8TB Red runs closer to the IOPS of the He8 than to the 6TB Red. That is not bad at all considering the He8 is spinning 33% faster than the other two.
HDD writes play out much differently than reads. While reading, a hard drive can’t reply to an IO request until it has actually read the data from the disk. For writes, the drive can employ its cache and reply to the host significantly faster, especially at lower queue depths. Internally, the drive manages its own queue, which in practice can run significantly deeper than the SATA QD=32 limit. Since the drive buffers writes and takes control over that internal queue, modern hard drives will generally operate at their maximum possible IOPS even at QD=1. The only real difference seen at higher host QD is higher per-IO latency. Since the latency percentile test is performed at steady state, higher QD simply translates to a longer wait for each IO, which shifts the nearly vertical profiles to the right.
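Little’s law ties these observations together: outstanding IOs = IOPS × mean latency, so if steady-state write IOPS is pinned by the drive’s internal queue, host queue depth only scales the per-IO wait. A quick sketch with an assumed IOPS figure (not measured data for these drives):

```python
# Assumed steady-state random-write IOPS, fixed by the drive's own
# internal queue regardless of host queue depth.
MAX_WRITE_IOPS = 220

# Little's law: outstanding IOs = IOPS * mean latency, so
# mean latency = QD / IOPS once the drive is saturated.
for qd in (1, 2, 4, 8, 16, 32):
    mean_latency_ms = qd / MAX_WRITE_IOPS * 1000
    print(f"QD={qd:2d}: ~{MAX_WRITE_IOPS} IOPS, "
          f"mean latency ~{mean_latency_ms:.1f} ms")
```

This is exactly the behavior in the charts: the nearly vertical write profiles keep their shape and simply shift right as host QD climbs.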
With the results seen above, I must first explain that the He8 employs a media cache architecture that gives it a significant advantage in random write performance. Note the disproportionately higher IOPS (and lower latency) figures when compared to the two Reds, which do not employ that technology. Looking at the new vs. old Reds, the 8TB Red actually pulls a trick not seen in either the He8 or the 6TB Red. Note how the profiles of the 8TB Red are more sloped than those of the other drives here. Normally this would be a bad thing, but in this case the tail (top right) is still faster than the 6TB Red’s. The 8TB profiles are sloped *in a good way*, since they reach further to the left (bottom / 0%): even though the maximum latencies are similar between both Reds, many of the 8TB’s IO requests are serviced faster than the 6TB model’s. This delta is also responsible for the ~12% IOPS increase seen in the 8TB Red.
Power Consumption
Note that the new Red borrows enterprise drive electronics and its spindle motor from the HGST line, so some of the figures are higher based on the power draw of those parts. As an example, note that standby consumption is identical between the 8TB Red and the He8. From the figures, we can see that the spindle motor appears to draw a bit more power as well. Despite the helium fill, idle power draw for the 8TB Red is higher than the 6TB (air-filled) model, though the Red’s lower spindle speed still keeps idle draw below the He8’s. This appears to come down to a less efficient spindle motor design carried over from HGST. The increased power use at standby (electronics only) and idle (electronics + motor) carries over into higher per-drive power use for all active use.
When comparing these power figures, those speccing out an array or NAS might be more concerned about power use that takes capacity into account, since you’ll need fewer 8TB than 6TB drives to reach the same total capacity. With the above chart we can see that, once capacity compensated, the 8TB Red actually beats all drives in this comparison, with the exception of random writes, which come in slightly higher than the He8.
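For those doing the capacity math themselves, here is a simple sketch of the per-TB comparison; the idle wattages below are assumed placeholders, not the figures from our chart:

```python
# Capacity-compensated power: for a target array size, fewer
# higher-capacity drives can draw less in total even when each
# drive pulls more watts. Idle figures are assumed placeholders.
drives = {
    "8TB drive": {"tb": 8, "idle_w": 5.2},   # assumed
    "6TB drive": {"tb": 6, "idle_w": 4.4},   # assumed
}
target_tb = 48
for name, d in drives.items():
    count = -(-target_tb // d["tb"])          # ceiling division
    total_w = count * d["idle_w"]
    print(f"{name}: {count} drives, {total_w:.1f} W idle "
          f"({d['idle_w'] / d['tb']:.2f} W/TB)")
```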
How long will the helium last?
Depending on the drive, some users may use it for cold backups, so it may see only intermittent use over well more than 10 years. Most standard hard drives can handle this, but there is not much info on helium drives.
My understanding is that there is negligible diffusion through the HelioSeal tech, so these should last as long as an air-filled drive would last prior to the bearing lubrication drying out. Remember that neither tech is shelf tested for 10 years in high capacities as they haven't existed long enough on either side. Also, consider that long term archival routines typically involve migrating data to newer mediums over time.
My interest was mainly due to my cold backup strategy. In addition to a NAS PC build which stores everything, I do cold backups: bare drives in a fire/water resistant safe, where I overwrite the oldest drive with a new backup, allowing me to keep 4-5 versioned backups of everything. The drives do see regular use – I do a backup every month, with more frequent ones if I am about to experiment with something – but they also experience stretches where they are just sitting in a safe.
I did this as a way to have additional backups for far less money than it would take to back everything up to the cloud. It would be cool to be able to transition to 8+TB drives as a replacement for some of my aging drives.
http://i.imgur.com/zRkzBvk.jpg
I currently have my SSD perform incremental backups to my 1TB WD Black drive, as well as incrementals of it and the 4TB drive to the NAS. The 120GB has no backups as it is used only as a secondary Steam library location.
I then use my eSATA dock to load a bare drive, and then backup my SSD, and important stuff from the 4TB drive every few weeks to a month.
Tapes and Blu-ray discs are for backup/cold backup; hard drives are wasted on cold backup. Hard drives are for any data that needs (or may need) to be accessed more than once a week. For cold data storage/backup tasks, tapes are still the best value, followed by DVDs/Blu-ray. Any hard drive that may break down and render the drive unreadable is not a true backup medium by definition; that is what DVDs/CDs and tapes are for, since the media sits outside the reading/writing device and is not tied to any R/W hardware’s potential failure.
……says somebody who doesn’t have hundreds of TB to back up.
I wouldn’t be so quick to write off tape. There are some seriously big & fast, mainframe style tape library systems out there.
Commonly they cache the backups in internal RAM/SSDs as they spray the parallel writes out over multiple tapes.
They are more tolerant to being dropped. And you don’t have concerns about the platter spindle sticking after 10 – 15 years of sitting idle.
(That said I often wonder if after 15 -20 years, they will still have the applications to read the data)
LTO tape is not tolerant to being dropped.
I was looking at the Seagate Archive 8TB drive as a cold/low-use backup of my DVD/Blu-ray library. At $222 on Newegg, it seems a lower-cost alternative. Of course, it’s not rated for NAS though…
Yeah, those archive / shingled drives are mainly suited to sequential writing of large files. They can't handle any sort of array initialization / random access. Good price though.
I built a backup server recently with 4x8TB archive drives.
Now that it’s set up, it works just fine. You’d never know they weren’t regular drives.
I did a Linux update while the array was still initializing, and it was slooooooooooow. I was getting 3-4 IOPS on the OS filesystem.
On the other hand, they have a neat trick where they can buffer up to 20GB of random writes in a cache area on the disk, and deliver
about 1000 IOPS until the cache fills up.
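Taking the figures in that comment at face value, it’s easy to estimate how long that cache holds out under sustained random writes (the 4KB IO size is an assumption):

```python
# Rough estimate of how long an SMR archive drive's ~20 GB persistent
# cache can absorb random writes before throughput collapses.
# Cache size and IOPS are from the comment above; IO size is assumed.
CACHE_GB = 20
CACHED_IOPS = 1000
IO_SIZE_KB = 4  # assumed random-write size

ios_until_full = CACHE_GB * 1024 * 1024 / IO_SIZE_KB
seconds = ios_until_full / CACHED_IOPS
print(f"~{ios_until_full:,.0f} IOs, ~{seconds / 60:.0f} minutes at "
      f"{CACHED_IOPS} IOPS before the cache fills")
```

At small IO sizes the cache lasts well over an hour of continuous random writes, which is why light home NAS duty rarely notices the shingling.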
Yeah, on my home NAS performance is one of my lowest concerns. I added one of the 8TB archive drives into my array a few months ago (using btrfs so I can mix and match drives of varying sizes) and don’t have any issues with performance. The NAS is still easily able to handle home workloads. Once you spread reads/writes out across several low performance drives, you end up with pretty reasonable performance. I plan on adding another one when I can catch a sale around $200.
The one issue I ran into is that the Linux kernel has a bug before 4.4 where shingled drives occasionally get SATA timeouts under heavy loads.
https://bugzilla.kernel.org/show_bug.cgi?id=93581
What are your thoughts on using these drives in a traditional RAID1/5/6 setup? I always see stories saying don’t go too big because of rebuild times, but I don’t know what to believe half the time. Are the drives better suited for a ZFS/Drobo/JBOD system?
Red drives are rated for arrays up to 8 disks. I've run 6TB reds in a RAID-6 of 14 disks. Rebuilds are generally not an issue so long as you can maintain redundancy during the rebuild (RAID-6), but since Reds employ TLER, even if another drive sees bad sectors during the rebuild, it should still be able to complete or halt (meaning you can still copy off all data except for the data including the bad sectors). Remember that you should never have all of your data on just one 'storage device' (an array, a NAS, a Drobo, etc). Redundant arrays are not the same as a backup. Always have another copy, preferably at another location.
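To put numbers behind the ‘maintain redundancy during the rebuild’ advice, here is a back-of-envelope sketch of the odds of hitting an unrecoverable read error (URE) while rebuilding. The 1-in-10^14-bits figure is a typical consumer-drive spec sheet rating and a pessimistic worst case, not a measured rate for these drives:

```python
# Probability of completing a rebuild without an unrecoverable read
# error, assuming independent errors at the spec-sheet URE rate.
URE_PER_BIT = 1e-14  # typical consumer-drive rating (assumption)

def p_clean_rebuild(drive_tb, surviving_drives):
    # A rebuild reads every surviving drive end to end.
    bits_read = drive_tb * 1e12 * 8 * surviving_drives
    return (1 - URE_PER_BIT) ** bits_read

# Single-parity rebuild of a 6-drive array of 8TB disks: five
# surviving drives must be read with zero UREs.
p = p_clean_rebuild(8, 5)
print(f"P(no URE during single-parity rebuild) ~ {p:.1%}")
```

Spec-sheet pessimism aside, this is why dual parity and TLER matter at these capacities: a second redundancy level, or a drive that reports a bad sector rather than hanging, turns a failed rebuild into a recoverable event.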
Can I use this as a regular daily drive?
Absolutely. The only possible disadvantage is that a Red will 'give up' on a bad sector after 7 seconds of attempts, but in many cases that is a bonus. I would personally rather have a drive report that it can't read a sector instead of timing out (30+ seconds) and potentially dropping offline completely.
Thanks for doing this review!
I think I need to upgrade the drives in my current NAS.
Thanks as always Allyn.
If I’m understanding correctly, you can get these for about $250 from Amazon here:
http://www.amazon.com/Book-Desktop-External-Drive-WDBFJK0080HBK-NESN/dp/B01B6BN0Q2
The Q&A section says that the 8TB version uses WD Reds (which makes sense, as there’s no other WD 8TB drive that I know of), so you can get a deal there if you just want to take them out of the enclosure.
You're probably right – why they are so much less than the bare drive is beyond me…
You take them out and give up on the warranty.
I bought an 8TB MyBook and inside was a WD80EZZX, not a WD Red.
Great review! This type of content is why I support you guys on Patreon.
Allyn and Ryan,
Thanks for the review, I have been a long fan of all articles posted on PCPer.
This article is informative and detailed. I have had a QNAP 6-bay server since 2010 and am very happy with the purchase. I am very surprised to see QNAP still supporting a relatively old product with OS updates and new apps. I have been thinking about upgrading my current 3 TB WD Red drives to higher-capacity drives as I need more storage space.
I would like to see how a Drobo 8-bay server performs with these drives, and I will wait for the WD 8 TB to become widely available.
Thank you Allyn for the review…. These in-depth reviews of storage products are why I am a happy
Patreon supporter! I would suggest that everyone pony up something to support the PcPer crew, even if it’s the PayPal recurring 3 dollars a month… one tenth of one percent for the hands-on proof of the actual performance data should prove to be invaluable!
Allyn – any thoughts on MTBF? Should we expect to see these survive a longer time than traditional, non-He hdd’s?
These haven't been around for long enough to have a good sample set of reliability data, but a sealed / controlled environment should, at least, eliminate any of the variables that would have contributed to some of the premature HDD failure scenarios. Electrical and mechanical components can still fail, so I wouldn't picture miracles happening, but my educated guess is these would be a bit more reliable than drives vented to atmosphere.
For an 8TB/5400RPM/128MB/7-platter drive, $330 is waaaay overpriced. The Seagate 8TB DM/VN (7200RPM/256MB/6 platters) perform 10 times better than the 8TB Red/Purple, and they only cost $300 & $349 currently at Amazon. WD Red has always been an overpriced product, and this one is no exception.
Pricing seems OK when you
Pricing seems OK when you consider the helium tech and sheer size of the drive, but having only a 3-year warranty and 5400rpm makes it pretty questionable again.
I’m still waiting for the next generation of Velociraptors. I read somewhere that the decreased friction from helium can, instead of giving you more platters, give you a “free” speed increase (for example, from 10000rpm to 15000rpm) without the associated increase in heat and power consumption. A 3.5″ 15000rpm helium Velociraptor wouldn’t be cheap, but it could still be a lot cheaper than comparable high-quality multi-terabyte SSDs.
The days of the enduser hdd
The days of the end-user HDD are slowly drawing to a close, so your product is probably too niche to ever be developed. Slow and cheap SSD, or fast and less cheap. HDDs will become like trucks: who buys a truck to drive around when cars and bikes make more sense?
Well – who buys cars and bikes when you can have a driver for less cost?
Correct – buying things is not a matter of need but a matter of want.
In my case I need tons of space for my video and photo work – I just bought 2 WD80EFZX drives and use them in a test scenario in a 2-bay DS-716+.
Works flawlessly up to now. Later this year I will get an 8-bay Synology – probably a 10GBase-T configuration with a Denverton chip – for handling my storage needs. All in all, less than 4k EUR for 64 TB RAW capacity; if I wanted to do the same thing with SSDs I’d currently have to pay probably 25-30k EUR.
Heat, noise and size are none of my concerns, since this array will be in a cabinet in the house, hidden and safely locked away.
If you do not need these amounts of storage, it is pointless to argue about the price or old-school fashion. For me, HDDs are the cheapest way to get more space, and this will continue for years to come. The gap in pricing will get smaller, but the ability to get the highest available storage space for the lowest comparable price will remain in favor of the HDD. No matter what price you pay for SSDs, HDDs will keep a healthy safety margin in price, and that alone will make them attractive for people like me with huge storage ‘needs’.
BTW the M.2 board in my nMP late 2013 gets me > 1 GB/sec encrypted data transfer – but the price is killing my budget. A 64 TB array of these M.2 boards would likely cost 60k EUR – needless to say, at this point in time that is ridiculous.
Just a few thoughts
The “Bernoulli Effect” is an incorrect explanation for the physics of the head-to-platter gap. Instead the heads ride on an air bearing created by the boundary layer of air attached to the rapidly spinning platters. This cushion of air is what keeps the heads consistently floating at the proper distance from the platters during operation. I don’t like to use the term “fly” when describing heads as the physics involved are not at all like that experienced by wings on a plane.
The reason drives have traditionally been vented is simply because it is much cheaper to vent than to pressurize and seal. The heads are designed to create a specific head-to-platter gap with air at a given pressure. Vented drives are normally rated for operation at up to only 10,000 ft. altitude, however sealed/pressurized or other specially designed (expensive) drives have been available for high-altitude applications: http://www.dell.com/support/article/us/en/04/SLN80457
You're correct in that Bernoulli does not push the head away from the platter (as that is boundary layer action), but that does not mean it doesn't apply or is incorrect. The effect is still very much at play, as it keeps the head from flying too high.
Bernoulli is not involved there either. That is in fact accomplished by the spring tension in the head arm (usually called the suspension), which would drive the head into contact with the platter were it not for the presence of the air bearing. The combination of air bearing pressure (pushing away from the platter) and spring tension (pushing towards the platter) is what maintains the intended gap between the head and platter during operation.
This is also why the heads on older drives forcibly “land” on the platters when powered off or spun-down. As the platters slow down the air bearing dissipates, thus allowing the spring tension in the head suspension to force the slider into direct contact with the platter. To minimize wear the heads would normally be positioned over a parking zone (containing no data) before spin-down. More information here: http://www.google.com/patents/US5729399
Modern drives ramp the heads off the platters before spin-down so there is never any contact between head sliders and platters. More information here: https://www.hgst.com/sites/default/files/resources/LoadUnload_white_paper_FINAL.pdf
Do you think you could compare this to the nearest competitor, Seagate’s NAS ST8000VN0002 variant?
At this point, they are both 8TB NAS drives and have similar price points, but with one major difference – Seagate has double the cache at 256MB. I’m unsure of its RPM – despite being out first, I am having trouble finding reviews of its speed, especially in enough detail to compare to your article!
I tried to find a website which compares the ST8000VN0002 with this one (very difficult), but it is possible!
And honestly, the Seagate (at the same price and same capacity) is much better than the Western WD80EFZX.
Me, I focus on facts and my normal use (NAS, RAID 5).
Whether it is new technology, or more or less RPM, or more or less cache, I do not care at all;
I just look at reliability, capacity, and performance in my use case (webserver, nginx, MySQL, etc etc).
The rest (the theoretical tests) doesn’t interest me :p
So at the same price I prefer the ST8000VN0002 (it’s much, much, much better)
😉
I am wondering if this type of hard disk can be compared to SSDs in their I/O rate – I mean 6Gb/s. Is it really comparable?
If so, why should someone buy an SSD when it’s more expensive?
Thanks
When is the release date of the 60TB (sixty terabyte) WD Red? I want to buy one. Hopefully cost is around $200.
I am considering hardware for a new NAS and I was reading about the WD 8 TB Red. Why is this HDD only rated for up to 8-bay NAS units? According to WD I need the Red Pro for use in a 12-bay environment. I would really appreciate an explanation for this in layman’s terms, since I’m by no means an expert.
Thanks