Random Performance – Iometer (IOPS/latency), YAPT (random)
We are trying something different here. Folks tend not to like clicking through pages and pages of benchmarks, so I'm going to weed out those that show little to no delta across the different units (PCMark). I'm also going to group results by the performance trait tested. Here are the random access results:
Iometer:
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998, and it has since become widespread within the industry. Intel later discontinued work on Iometer and passed it on to the Open Source Development Lab (OSDL). In November 2001, the code was released on SourceForge.net. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.
Iometer – IOPS
It is clear here that the 8TB Red comes in as the highest 5400 RPM performer in all of our Iometer results, even nipping at the heels of the 4TB Black and RE in some areas.
Iometer – Average Transaction Time
This test is being phased out in favor of Latency Percentile testing (on the next page).
YAPT (random)
YAPT (yet another performance test) is a benchmark recommended by a pair of drive manufacturers, and it was incredibly difficult to locate as it hasn't been updated or used in quite some time. That doesn't make it irrelevant by any means though, as the benchmark is quite useful. It creates a test file of about 100 MB in size and runs both random and sequential read and write tests with it while changing the data I/O size in the process. The misaligned nature of this test exposes the read-modify-write performance of SSDs and Advanced Format HDDs.
YAPT is a 'misaligned' test, in that it does not adhere to 4k boundaries. This causes some drives to behave oddly; in the case of the 8TB Red's random writes, I suspect it was caught trying to flush its larger buffer to disk partway through that particular sequence.
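To make the 'misaligned' point concrete, here is a toy sketch (not YAPT itself, and not part of our test suite; the sector size and helper names are just illustrative assumptions) of how an Advanced Format drive handles a write that does not line up with its 4K physical sectors: any such write forces the drive to read the affected sectors, merge in the new data, and write them back, which is where the performance penalty comes from.

# Toy read-modify-write model for an Advanced Format (4K physical sector) drive.
# Purely illustrative; the 4096-byte sector size and function names are assumptions.
PHYS_SECTOR = 4096

def sectors_touched(offset, length):
    """Count the physical sectors covered by a logical write."""
    first = offset // PHYS_SECTOR
    last = (offset + length - 1) // PHYS_SECTOR
    return last - first + 1

def describe_write(offset, length):
    """Report whether a write can go straight to media or needs read-modify-write."""
    aligned = offset % PHYS_SECTOR == 0 and length % PHYS_SECTOR == 0
    n = sectors_touched(offset, length)
    if aligned:
        return f"aligned: {n} physical sector(s) written directly"
    return f"misaligned: drive must read, patch, and rewrite {n} physical sector(s)"

print(describe_write(8192, 4096))    # 4K write on a 4K boundary: no penalty
print(describe_write(10240, 4096))   # same size, shifted by 2KB: straddles two sectors

The same basic penalty shows up on SSDs at the flash page level, which is why the misaligned pattern trips up both drive types in this test.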
How long will the helium last?
Depending on the drive, some users may use it for cold backups, so it may see only intermittent use for well over 10 years. Most standard hard drives can handle this, but there is not much info for helium drives.
My understanding is that there is negligible diffusion through the HelioSeal tech, so these should last as long as an air-filled drive would last prior to the bearing lubrication drying out. Remember that neither tech is shelf tested for 10 years in high capacities as they haven't existed long enough on either side. Also, consider that long term archival routines typically involve migrating data to newer mediums over time.
My interest was mainly due to my cold backup strategy. In addition to a NAS PC build which stores everything, I do cold backups: bare drives in a fire/water resistant safe, where I overwrite the oldest drive with a new backup, allowing me to keep 4-5 versioned backups of everything. The drives do see regular use; I do a backup every month, with more frequent ones if I am about to experiment with something, so they also spend stretches just sitting in the safe.
I did this mainly as a way to have additional backups for far less money than it would take to back up everything to the cloud. It would be cool to be able to transition to 8+TB drives as a replacement for some of my aging drives.
http://i.imgur.com/zRkzBvk.jpg
I currently have my SSD perform incremental backups to my 1TB WD Black drive, as well as incremental backups of it and the 4TB drive to the NAS. The 120GB drive has no backups, as it is used only as a secondary Steam library location.
I then use my eSATA dock to load a bare drive and back up my SSD and the important stuff from the 4TB drive every few weeks to a month.
Tapes and Blu-ray discs are for backup/cold backup; hard drives are wasted on cold backup. Hard drives are for any data that needs, or may need, to be accessed more than once a week. For cold data storage/backup tasks, tapes are still the best value, followed by DVDs/Blu-ray. Any hard drive that may break down and render the drive unreadable is not a true backup medium by definition; that is what DVDs/CDs and tapes are for: storage outside of the reading/writing device, not tied to any R/W hardware's potential failure.
……says somebody who doesn’t have hundreds of TB to back up.
I wouldn't be so quick to write off tape. There are some seriously big and fast, mainframe-style tape library systems out there.
Commonly they cache the backups via internal RAM/SSDs as they spray the parallel writes out over multiple tapes.
Tapes are more tolerant of being dropped, and you don't have concerns about the platter spindle sticking after 10-15 years of sitting idle.
(That said, I often wonder whether the applications to read the data will still exist after 15-20 years.)
LTO tape is not tolerant of being dropped.
I was looking at the Seagate Archive 8TB drive as a cold/low-use backup of my DVD/Bluray library. At $222 on Newegg, it seems a lower-cost alternative. Of course, it’s not rated for NAS though…
Yeah, those archive / shingled drives are mainly only suited for sequential writing of large files. They can't handle any sort of array initialization / random access. Good price though.
I built a backup server recently with 4x8TB archive drives.
Now that it’s set up, it works just fine. You’d never know they weren’t regular drives.
I did a Linux update while the array was still initializing, and it was slooooooooooow. I was getting 3-4 IOPS on the OS filesystem.
On the other hand, they have a neat trick where they can buffer up to 20GB of random writes in a cache area on the disk, and deliver about 1000 IOPS until the cache fills up.
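If you want to watch that cache behavior for yourself, something like the rough sketch below is one way to go about it (this is just an outline, not what I actually ran; the target path, block size, and runtime are placeholders, and it needs a scratch file or test device you can safely overwrite). It hammers the target with small random direct writes and prints IOPS once per second, so the drop-off when the on-disk cache fills up should show in the output.

# Rough sketch: log random-write IOPS once per second so the media-cache
# cliff (fast at first, then a sharp drop) becomes visible. Linux-only (O_DIRECT).
import mmap
import os
import random
import time

TARGET = "/mnt/scratch/testfile"   # placeholder path; must already exist and be large
BLOCK = 4096                       # 4 KiB random writes
RUN_SECONDS = 600                  # placeholder duration

# O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK)
buf.write(os.urandom(BLOCK))

fd = os.open(TARGET, os.O_WRONLY | os.O_DIRECT)
blocks = os.fstat(fd).st_size // BLOCK

start = last_tick = time.time()
ops = 0
while time.time() - start < RUN_SECONDS:
    os.pwrite(fd, buf, random.randrange(blocks) * BLOCK)  # one random 4K write
    ops += 1
    now = time.time()
    if now - last_tick >= 1.0:
        print(f"{int(now - start):4d}s  {ops} IOPS")
        ops, last_tick = 0, now

os.close(fd)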
Yeah, on my home NAS performance is one of my lowest concerns. I added one of the 8TB archive drives into my array a few months ago (using btrfs so I can mix and match drives of varying sizes) and don’t have any issues with performance. The NAS is still easily able to handle home workloads. Once you spread reads/writes out across several low performance drives, you end up with pretty reasonable performance. I plan on adding another one when I can catch a sale around $200.
The one issue I ran into is that the Linux kernel has a bug before 4.4 where shingled drives occasionally get SATA timeouts under heavy loads.
https://bugzilla.kernel.org/show_bug.cgi?id=93581
What are your thoughts on using these drives in a traditional RAID1/5/6 setup? I always see stories saying don’t go too big because of rebuild times, but I don’t know what to believe half the time. Are the drives better suited for a ZFS/Drobo/JBOD system?
Red drives are rated for arrays up to 8 disks. I've run 6TB reds in a RAID-6 of 14 disks. Rebuilds are generally not an issue so long as you can maintain redundancy during the rebuild (RAID-6), but since Reds employ TLER, even if another drive sees bad sectors during the rebuild, it should still be able to complete or halt (meaning you can still copy off all data except for the data including the bad sectors). Remember that you should never have all of your data on just one 'storage device' (an array, a NAS, a Drobo, etc). Redundant arrays are not the same as a backup. Always have another copy, preferably at another location.
Can I use this as a regular daily drive?
Absolutely. The only possible disadvantage is that a Red will 'give up' on a bad sector after 7 seconds of attempts, but in many cases that is a bonus. I would personally rather have a drive report that it can't read a sector instead of timing out (30+ seconds) and potentially dropping offline completely.
Thanks for doing this review!
I think I need to upgrade the drives in my current NAS.
Thanks as always Allyn.
If I'm understanding correctly, you can get these for about $250 from Amazon here:
http://www.amazon.com/Book-Desktop-External-Drive-WDBFJK0080HBK-NESN/dp/B01B6BN0Q2
The Q&A section says that the 8TB version uses WD Reds (which makes sense, as there's no other 8TB WD drive that I know of), so you can get a deal there if you just want to take them out of the enclosure.
You're probably right – why they are so much less than the bare drive is beyond me…
You take them out and give up on the warranty.
I bought an 8TB MyBook and inside was a WD80EZZX, not a WD Red.
Great review! This type of content is why I support you guys on Patreon.
Allyn and Ryan,
Thanks for the review; I have long been a fan of all the articles posted on PCPer.
This article is informative and detailed. I have had a QNAP 6-bay server since 2010 and am very happy with the purchase. I am pleasantly surprised to see QNAP still supporting a relatively old product with OS updates and new apps. I have been thinking about upgrading my current 3 TB WD Red drives to higher-capacity drives, as I need more storage space.
I would like to see how a Drobo 8-bay server performs with these drives, and I will wait for the WD 8 TB to be widely available.
Thank you Allyn for the review… These in-depth reviews of storage products are why I am a happy
Patreon supporter! I would suggest that everyone pony up something to support the PCPer crew, even if it's the PayPal recurring 3 dollars a month… one tenth of one percent for hands-on proof of the actual performance data should prove to be invaluable!
Allyn – any thoughts on MTBF? Should we expect to see these survive longer than traditional, non-He HDDs?
These haven't been around for long enough to have a good sample set of reliability data, but a sealed / controlled environment should, at least, eliminate any of the variables that would have contributed to some of the premature HDD failure scenarios. Electrical and mechanical components can still fail, so I wouldn't picture miracles happening, but my educated guess is these would be a bit more reliable than drives vented to atmosphere.
For an 8TB/5400RPM/128MB/7-platter drive, $330 is waaaay overpriced. The Seagate 8TB DM/VN drives (7200RPM/256MB/6 platters) perform 10 times better than the 8TB Red/Purple, and they only cost $300 and $349 currently at Amazon. WD Red has always been an overpriced product, and this one is no exception.
Pricing seems OK when you consider the helium tech and the sheer size of the drive, but having only a 3-year warranty and 5400 RPM makes it pretty questionable again.
I'm still waiting for the next generation of VelociRaptors. I read somewhere that the decreased friction from helium can, instead of giving you more platters, give you a "free" speed increase (for example, from 10,000 RPM to 15,000 RPM) without the associated increase in heat and power consumption. A 3.5″ 15,000 RPM helium VelociRaptor wouldn't be cheap, but it could still be a lot cheaper than comparable high-quality multi-terabyte SSDs.
The days of the end-user HDD are slowly drawing to a close, so your product is probably too niche to ever be developed. Slow and cheap SSD, or fast and less cheap. HDDs will become like trucks: who buys a truck to drive around when cars and bikes make more sense?
Well – who buys cars and bikes when you can have a driver for less cost?
Correct – buying things is not a matter of need but a matter of want.
In my case I need tons of space for my video and photo work – I just bought 2 WD80EFZX drives and use them in a test scenario in a 2-bay DS-716+.
It works flawlessly so far. Later this year I will get an 8-bay Synology – probably a 10GBase-T configuration with a Denverton chip – to handle my storage needs; all in all, less than 4k EUR for 64 TB of raw capacity. If I wanted to do the same thing with SSDs, I'd currently have to pay probably 25-30k EUR.
Heat, noise and size are of no concern to me, since this array will be in a cabinet in the house, hidden and safely locked away.
If you do not need these amounts of storage, it is pointless to argue about the price or old-school technology. For me, HDDs are the cheapest way to get more space, and this will continue for years to come. The gap in pricing will get smaller, but the ability to get the highest available storage capacity for the lowest comparable price will stay in favor of the HDD; no matter what you pay for SSDs, HDDs will always have a healthy safety margin in price, and that alone makes them attractive for people like me with huge storage 'needs'.
BTW, the M.2 board in my nMP late 2013 gets me > 1 GB/sec encrypted data transfer, but the price is killing my budget – a 64 TB array of these M.2 boards would likely cost 60k EUR. Needless to say, at this point in time that is ridiculous.
Just a few thoughts
The “Bernoulli Effect” is an incorrect explanation for the physics of the head-to-platter gap. Instead the heads ride on an air bearing created by the boundary layer of air attached to the rapidly spinning platters. This cushion of air is what keeps the heads consistently floating at the proper distance from the platters during operation. I don’t like to use the term “fly” when describing heads as the physics involved are not at all like that experienced by wings on a plane.
The reason drives have traditionally been vented is simply because it is much cheaper to vent than to pressurize and seal. The heads are designed to create a specific head-to-platter gap with air at a given pressure. Vented drives are normally rated for operation at up to only 10,000 ft. altitude, however sealed/pressurized or other specially designed (expensive) drives have been available for high-altitude applications: http://www.dell.com/support/article/us/en/04/SLN80457
You're correct in that Bernoulli does not push the head away from the platter (as that is boundary layer action), but that does not mean it doesn't apply or is incorrect. The effect is still very much at play, as it keeps the head from flying too high.
Bernoulli is not involved there either. That is in fact accomplished by the spring tension in the head arm (usually called the suspension), which would drive the head into contact with the platter were it not for the presence of the air bearing. The combination of air bearing pressure (pushing away from the platter) and spring tension (pushing towards the platter) is what maintains the intended gap between the head and platter during operation.
This is also why the heads on older drives forcibly “land” on the platters when powered off or spun-down. As the platters slow down the air bearing dissipates, thus allowing the spring tension in the head suspension to force the slider into direct contact with the platter. To minimize wear the heads would normally be positioned over a parking zone (containing no data) before spin-down. More information here: http://www.google.com/patents/US5729399
Modern drives ramp the heads off the platters before spin-down so there is never any contact between head sliders and platters. More information here: https://www.hgst.com/sites/default/files/resources/LoadUnload_white_paper_FINAL.pdf
Do you think you could compare this to its nearest competitor, Seagate's NAS ST8000VN0002 variant?
At this point they are both 8TB NAS drives with similar price points, but with one major difference: the Seagate has double the cache at 256MB. I'm unsure of its RPM; despite being out first, I am having trouble finding reviews of its speed, especially in enough detail to compare to your article!
I tried to find a website which compares the ST8000VN0002 with this one (very difficult), but it is possible!
And honestly, the Seagate (at the same price and same capacity) is much better than the Western Digital WD80EFZX.
Me, I focus on facts and on my normal use (NAS, RAID 5).
Whether it is new technology, or more or less RPM, or more or less cache, I do not care at all;
I just look at reliability, capacity, and performance for my use case (web server, nginx, MySQL, etc.).
The rest (the theoretical tests) doesn't interest me :p
So at the same price I prefer the ST8000VN0002 (it's much, much better).
😉
I am wondering if this type of hard disk can be compared to SSDs in terms of I/O rate, I mean 6Gb/s. Is it really comparable?
If so, why would someone buy an SSD when it's more expensive?
Thanks
When is the release date of the 60TB (sixty terabyte) WD Red? I want to buy one. Hopefully cost is around $200.
I am considering hardware for a new NAS and I was reading about the WD 8 TB Red. Why is this HDD only rated for up to an 8-bay NAS? According to WD I need the Red Pro for use in a 12-bay environment. I would really appreciate an explanation for this in layman's terms, since I'm by no means an expert.
Thanks