IOMeter – IOPS
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998; since then it has become widespread within the industry.
Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial code drop was provided. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.
Light desktop usage sees QD figures between 1 and 4, while heavy / power user loads run at 8 and higher. SATA-connected devices cannot effectively handle anything higher than QD=32, which explains the plateaus. As for why we use this test as opposed to single-tasker tests like 4KB random reads or 4KB random writes: computers are simply not single-taskers. Writes take place at the same time as reads. We call this mixed-mode testing, and while SSDs come with side-of-box specs that boast what they can do as a uni-tasker, our tests below tend to paint a very different picture.
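To illustrate what a mixed-mode stream looks like, here is a minimal sketch of a workload generator. This is not our actual IOMeter access specification; the 70/30 read/write split, 4KB block size, and 1GB test span are illustrative assumptions, and `mixed_workload` is a name invented for this example:

```python
import random

def mixed_workload(num_ops, read_pct=0.70, block=4096, span=1 << 30, seed=42):
    """Generate a stream of (op, offset) pairs mixing reads and writes.

    read_pct, block, and span are illustrative defaults, not the
    settings used in the article's benchmarks.
    """
    rng = random.Random(seed)
    ops = []
    for _ in range(num_ops):
        op = "read" if rng.random() < read_pct else "write"
        # 4KB-aligned random offset within the test span
        offset = rng.randrange(span // block) * block
        ops.append((op, offset))
    return ops

stream = mixed_workload(10_000)
reads = sum(1 for op, _ in stream if op == "read")
print(f"read fraction: {reads / len(stream):.3f}")
```

A real benchmark would issue this stream to the device with several commands outstanding at once (the queue depth), which is exactly the axis the charts below sweep.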
The 4TB Red Pro (red line) performs very well in these tests, looking very much like an enterprise-grade RE series drive. It still can't match the 10,000 RPM VelociRaptor, but that is simple physics.
The 6TB Red is an entirely different story, as this is the single best test to expose the misconfiguration bug present in the initial shipping firmware. Here we see an essentially flat line, indicating that the bug is causing an apparent failure in the ability to queue commands. To put this in end-user terms, an HDD effectively operating without NCQ loses the ability to scale when multiple commands are issued. Any case where multiple things are happening simultaneously (like streaming multiple videos, or several users actively accessing a NAS at the same time) will see a negative impact on performance. The ramp-up you see in the other drives equates to added IO capability under those conditions, meaning that once this bug has been fixed, the 6TB Red will be able to handle a greater simultaneous workload than an unpatched drive.
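The reason queueing helps a spinning drive can be shown with a toy model. With NCQ, the drive can reorder the commands in its queue to shorten head travel; without it, commands are serviced strictly in arrival order. The sketch below (all names and numbers are illustrative assumptions, not measured drive behavior) models seek cost as the distance between LBAs and reorders greedily within a window of `qd` outstanding commands:

```python
import random

def total_travel(requests, qd):
    """Total head travel when the drive may reorder within a window
    of `qd` outstanding commands (greedy nearest-first).
    qd=1 is equivalent to no NCQ: strict arrival order."""
    pending = list(requests)
    pos, travel = 0, 0
    while pending:
        window = pending[:qd]                      # oldest qd commands
        nxt = min(window, key=lambda lba: abs(lba - pos))
        travel += abs(nxt - pos)                   # seek to nearest
        pos = nxt
        pending.remove(nxt)
    return travel

rng = random.Random(1)
reqs = [rng.randrange(1_000_000) for _ in range(500)]
for qd in (1, 4, 32):
    print(f"QD={qd:2d}  total travel: {total_travel(reqs, qd)}")
```

Total travel drops sharply as the window grows, which is the ramp-up the healthy drives show in the chart; a drive stuck at an effective QD of 1 stays flat regardless of how many commands the host issues.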
I am a capitalist. I do not use red drives!!! lol!
Then buy a purple drive, you fairy 😛
If you ain’t black, you ain’t crap.
Don’t be mean, buy Green.
Good to know these are affordable. I've still got about a year before I run out of space using 4TB REDs, then I'll start upgrading with 6TB drives. I tried Seagate's NAS drives but the ones I bought (at least) were way too loud for use at home.
3TB is still the best $/GB at $0.043/GB. Better density for the NAS drives, though not sure the price is worth it.
No. 5TB Seagate externals are $190. $0.038/GB. I can’t buy them fast enough!
I think your failure premise is a bit contrived. No one should be running a RAID system of any type without full SMART checks on a regular basis at the very least.
I've personally had RAIDs fail in that scenario even with SMART checks in place, as well as weekly full array data scrubs. Fact is that unless you have some form of TLER, a second drive failure that occurs mid rebuild will cause most RAID controllers to offline the array.
Ryan had also had such a failure (using Seagate drives), and I had to recover his array by imaging the non-failed drives and manually de-striping in software.
Curious Allyn, which imaging software did you use to rescue that array ???
It wasn't the imaging software that did the rescue – all it did was create images of the drives (and read past the unreadable areas after hours of timeouts / retries). Once I had the images, I coded something myself to re-stitch them, using alternating parity (i.e. two drives had unreadable sectors in (mostly) alternating areas relative to each other).
That was for my array recovery. Ryan's was easier, as he had just one drive with a small cluster of bad sectors causing his array to timeout. I was able to image that drive and re-stitch that array back together with a tool from Runtime Software – but with some custom settings I had to come up with myself, as Ryan's array was not easy for that software to 'lock' onto in auto mode.
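For readers curious what the re-stitching relies on at the lowest level: in a RAID-5 stripe, parity is the byte-wise XOR of the data chunks, so any one missing chunk can be rebuilt by XORing everything that survived. A minimal sketch, assuming a 3-data + 1-parity stripe with made-up chunk contents (not the actual layout or chunk size of either recovered array):

```python
def xor_chunks(chunks):
    """Byte-wise XOR across equal-length chunks."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# Illustrative 4-byte chunks from a 3-data + 1-parity stripe
d0 = b"\x11\x22\x33\x44"
d1 = b"\xaa\xbb\xcc\xdd"
d2 = b"\x01\x02\x03\x04"
parity = xor_chunks([d0, d1, d2])

# Suppose d1 came back unreadable: rebuild it from the survivors
rebuilt = xor_chunks([d0, d2, parity])
assert rebuilt == d1
```

The hard part of a real recovery isn't the XOR; it's working out the stripe order, chunk size, and parity rotation the controller used, which is what the custom settings mentioned above were for.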
So the pro is “better” & expected to last longer, yet is 6dBA LOUDER than the standard WD RED….
Also, did anyone else notice they changed the “Non-recoverable read errors per bits” to look better despite being the same?
It's a 7200 RPM enterprise spec drive. *Of course* it is faster / louder.
You should look into how Seagate is intentionally crippling consumer HDDs with low APM states and special firmware to scare enterprise customers into buying more expensive drives.
Can we mix and match Green and Red drives?
Hi, I've got 3 existing one-year-old WD RED 4TB drives and am moving to a new Synology 1515plus. I'd like to expand my storage and am considering a RED PRO instead of buying another standard 4TB RED drive.
My question is this: is it bad to mix a new 4TB RED PRO with the older 4TB standard RED drives in a RAID?
Thanks in advance
I have almost the same question, except I have 5 x one-year-old WD Red 3TBs and am expanding to 8 drives. Would it be better to use 3 new Pro drives, or stick with 3 new standard Red drives?
Thanks in advance.
oh, and by the way…..Merry Christmas to all on here!
Pro drives would be your best bet, since they have accelerometers in them like the Se drives to actively reduce vibration. Though even the newer plain Reds have NASware 3.0 and so have software-based vibration reduction, allowing up to 8 drives.
WD have also said they will honour warranties of those using the older (1-5 bay) drives in 8-bay configs.
Perhaps on cost, plain Reds would be better; don't forget the Red Pro isn't a home NAS drive. It's louder, faster, and uses more power (around 5W/drive more), designed for heavier use.