Random Performance – Iometer (IOPS/latency), YAPT (random)
We are trying something different here. Folks tend not to like clicking through pages and pages of benchmarks, so I'm going to weed out those that showed little to no delta across the different units (PCMark). I'm also going to group results by the performance trait tested. Here are the random access results:
It's pretty easy to pick out the 5400 RPM and 7200 RPM drives here (and the 10K unit at the bottom). The only other differences come from added latency on units that seek less aggressively in the interest of power consumption (the original 3TB Red and, to a lesser extent, the 6TB Red).
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel and announced at the Intel Developer Forum (IDF) on February 17, 1998, and has since become widespread within the industry. Intel later discontinued work on Iometer and passed it to the Open Source Development Lab (OSDL). In November 2001, the code was released on SourceForge.net. Since the project's relaunch in February 2003, it has been driven by an international group of individuals who are continuously improving, porting, and extending the product.
Iometer – IOPS
For random performance at climbing queue depths, a 10K drive will always dominate the slower spinners, but the added cache does give the new Red Pro and Black a reasonable jump on the other 7200 RPM units. In the Database test we can see just how effective command queueing can be, as random performance *doubles* from QD=1 to QD=32 (the maximum for SATA protocol devices).
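To make that QD=1 to QD=32 doubling concrete, here's a rough back-of-the-envelope sketch of why queueing helps a spinning drive. The seek figure and the degree of reordering benefit are illustrative assumptions, not measurements from these charts:

```python
# Rough estimate of why NCQ helps a 7200 RPM drive at deep queues.
# The 8.5 ms seek and the "roughly halved" reordering gain are assumptions.

def iops(avg_seek_ms: float, rotational_latency_ms: float) -> float:
    """Random IOPS bounded by one seek plus rotational latency per I/O."""
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# At 7200 RPM, half a rotation averages ~4.17 ms of rotational latency.
half_rotation_ms = 0.5 * 60_000 / 7200

# QD=1: every request pays the full average seek + rotational cost.
qd1 = iops(avg_seek_ms=8.5, rotational_latency_ms=half_rotation_ms)

# QD=32: with 32 outstanding requests, NCQ reorders them along the platter,
# roughly halving the per-I/O seek and rotational cost (assumed here).
qd32 = iops(avg_seek_ms=8.5 / 2, rotational_latency_ms=half_rotation_ms / 2)

print(f"QD=1:  ~{qd1:.0f} IOPS")
print(f"QD=32: ~{qd32:.0f} IOPS (about {qd32 / qd1:.1f}x)")
```

Halving the per-I/O mechanical cost doubles throughput, which lines up with the doubling seen in the Database results.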
Iometer – Average Transaction Time
For SSD reviews, HDD results are removed as they throw the scale too far to tell any meaningful difference in the results. Queue depth has been reduced to 8 to further clarify the results (especially as typical consumer workloads rarely exceed QD=8). Some notes for interpreting results:
- Times measured at QD=1 can double as a measure of access time (seek time, in HDD terms).
- A 'flatter' line means that drive will scale better and ramp up its IOPS when hit with multiple requests simultaneously, especially if that line falls lower than competing units.
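The two notes above can be tied together with Little's law: at a given queue depth, IOPS equals queue depth divided by average transaction time, so a flatter latency line translates directly into near-linear IOPS scaling. A quick sketch with assumed latency numbers:

```python
# Little's law links the latency and IOPS charts: IOPS = QD / avg_latency.
# The latency values for the two hypothetical drives below are assumed.

def iops_from_latency(queue_depth: int, avg_latency_ms: float) -> float:
    """Throughput implied by an average transaction time at a queue depth."""
    return queue_depth * 1000.0 / avg_latency_ms

flat_drive = {1: 13.0, 4: 14.0, 8: 15.0}    # latency barely rises with QD
steep_drive = {1: 13.0, 4: 30.0, 8: 55.0}   # latency balloons under load

for qd in (1, 4, 8):
    print(f"QD={qd}: flat {iops_from_latency(qd, flat_drive[qd]):.0f} IOPS, "
          f"steep {iops_from_latency(qd, steep_drive[qd]):.0f} IOPS")
```

Both drives start from the same QD=1 figure, but the 'flatter' one ends up with far higher IOPS once requests stack up.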
The new 6TB models were able to beat all other 7200 RPM units, likely because they carry at least double the DRAM cache of those competitors.
YAPT (yet another performance test) is a benchmark recommended by a pair of drive manufacturers and was incredibly difficult to locate as it hasn't been updated or used in quite some time. That doesn't make it irrelevant by any means though, as the benchmark is quite useful. It creates a test file of about 100 MB in size and runs both random and sequential read and write tests with it while changing the data I/O size in the process. The misaligned nature of this test exposes the read-modify-write performance of SSDs and Advanced Format HDDs.
These results may look odd, but there is a logical explanation: the 128MB of cache was enough to give this particular test a reasonable chance of cache hits. We also suspect more aggressive caching of hot data is occurring with these models.
Here we see two drives doing very well, but not for the same reason. The bulk of the drives in this test employ Advanced Format, meaning they store data internally in physical sectors aligned at 4k intervals. YAPT is not a 4k aligned test, which leaves the Advanced Format units (relatively) low in the pack compared to two outliers that perform very well. One was the 4TB RE series unit and the other was the first (FAEX) version of the 4TB Black – neither of which employed Advanced Format, so both were better able to handle misaligned random writes. The FAEX Black has been phased out, and that entire line now follows all other modern units. That makes sense, since the vast majority of file system operations on modern operating systems are 4k aligned.
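The read-modify-write penalty described above is easy to sketch: any write that does not start and end on a 4 KiB boundary partially covers the physical sectors it straddles, and each partially covered sector must be read, merged, and rewritten. The offsets below are illustrative:

```python
# Why misaligned I/O hurts Advanced Format drives.

PHYS = 4096  # Advanced Format physical sector size in bytes

def sectors_touched(offset: int, length: int, sector: int = PHYS) -> int:
    """Number of physical sectors a write at this offset/length covers."""
    first = offset // sector
    last = (offset + length - 1) // sector
    return last - first + 1

def is_aligned(offset: int, length: int, sector: int = PHYS) -> bool:
    """True when the write lands exactly on physical sector boundaries."""
    return offset % sector == 0 and length % sector == 0

# An aligned 4 KiB write overwrites exactly one physical sector...
print(is_aligned(8192, 4096), sectors_touched(8192, 4096))  # True 1

# ...while the same write shifted by one 512-byte LBA straddles two,
# forcing a read-modify-write on both partially covered sectors.
print(is_aligned(8704, 4096), sectors_touched(8704, 4096))  # False 2
```

Non-Advanced-Format drives like the FAEX Black write 512-byte sectors natively, so the misaligned case above costs them nothing extra, which is exactly why they float to the top of this particular test.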