IOMeter – IOps
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998; since then it has become widespread within the industry.
Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial code drop was provided. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.
Light desktop usage sees queue depth (QD) figures between 1 and 4, while heavy power-user loads run at QD=8 and higher. SATA-connected devices cannot effectively handle anything beyond QD=32, which explains the plateaus. As for why we use this test instead of single-tasker tests like pure 4KB random reads or 4KB random writes: computers are simply not single-taskers. Writes take place at the same time as reads. We call this mixed-mode testing, and while SSDs ship with side-of-box specs boasting what a drive can do as a uni-tasker, our tests below tend to paint a very different picture.
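The idea of a mixed-mode workload at a given queue depth can be sketched in a few lines of Python. This is a toy illustration of the concept, not Iometer's implementation: several worker threads each keep one 4KB request outstanding against a file (approximating a queue depth of 4, the "light desktop" range), with a hypothetical 70/30 read/write mix. It relies on the POSIX-only `os.pread`/`os.pwrite` calls.

```python
import os
import random
import tempfile
import threading

BLOCK = 4096                   # 4KB transfer size, as in the workload above
FILE_SIZE = 8 * 1024 * 1024    # small 8MB test file so the sketch runs anywhere
QUEUE_DEPTH = 4                # one outstanding request per worker ~= QD=4
OPS_PER_WORKER = 200
READ_MIX = 0.7                 # hypothetical 70% reads / 30% writes

def run_mixed_workload(path: str) -> int:
    """Issue a mixed random 4KB read/write stream; return total ops completed."""
    completed = [0] * QUEUE_DEPTH
    blocks = FILE_SIZE // BLOCK

    def worker(idx: int) -> None:
        rng = random.Random(idx)             # per-worker RNG, reproducible
        fd = os.open(path, os.O_RDWR)
        try:
            for _ in range(OPS_PER_WORKER):
                offset = rng.randrange(blocks) * BLOCK   # random aligned LBA
                if rng.random() < READ_MIX:
                    os.pread(fd, BLOCK, offset)              # 4KB random read
                else:
                    os.pwrite(fd, b"\x5a" * BLOCK, offset)   # 4KB random write
                completed[idx] += 1
        finally:
            os.close(fd)

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(QUEUE_DEPTH)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(completed)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * FILE_SIZE)
    test_path = f.name
total = run_mixed_workload(test_path)
os.unlink(test_path)
print(total)  # QUEUE_DEPTH * OPS_PER_WORKER operations completed
```

Timing the run and dividing the op count by elapsed seconds gives an IOps figure; real benchmarks like Iometer additionally bypass the page cache with direct I/O so the drive, not RAM, is measured.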
The 4TB Red Pro (red line) performs very well in these tests, looking very much like an enterprise-grade RE series drive. It still can't match the 10,000 RPM VelociRaptor, but that is simple physics.
The 6TB Red is an entirely different story, as this is the single best test to exacerbate the misconfiguration bug present in the initial shipping firmware. Here we see an essentially flat line, indicating that the bug effectively prevents the drive from queueing commands. To put the end-user experience plainly: an HDD operating without NCQ loses its ability to scale when multiple commands are issued. Any case where several things happen simultaneously (such as streaming multiple videos or several users accessing a NAS at once) will see a negative impact on performance. The ramp-up visible in the other drives equates to added I/O capability under those conditions, meaning that once this bug is fixed, the 6TB Red will handle a greater simultaneous workload than an unpatched drive.
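Why queueing helps can be shown with a toy seek-distance model (my illustration, not WD's firmware): with NCQ the drive may service whichever queued command is nearest the head, cutting total seek travel; without it, commands are served strictly in arrival order regardless of head position, so a deeper queue buys nothing.

```python
import random

def total_seek_distance(requests, queue_depth):
    """Sum of head travel when servicing `requests` (track numbers),
    always choosing the nearest of the next `queue_depth` pending requests."""
    pending = list(requests)
    head, travel = 0, 0
    while pending:
        window = pending[:queue_depth]                  # commands the drive can see
        nxt = min(window, key=lambda t: abs(t - head))  # nearest-first reorder
        travel += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return travel

rng = random.Random(42)
workload = [rng.randrange(100_000) for _ in range(500)]  # random track targets

no_ncq = total_seek_distance(workload, queue_depth=1)    # FIFO: no reordering
ncq32 = total_seek_distance(workload, queue_depth=32)    # SATA NCQ queue limit
print(no_ncq > ncq32)  # deeper queue => less head travel => more IOps
```

In this model a queue depth of 1 degenerates to first-come-first-served, which is exactly the flat line the bugged 6TB Red produces: no matter how many commands the host issues, the drive behaves as if it can only see one at a time.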