Latency Percentile and Power Consumption

Latency Percentile

We are replacing our old ‘Average Transaction Time’ results with Latency Percentile data, as this exclusive new testing paints a much clearer picture than simple averages (or even plots of average over time) can show.

(Chart: Latency Percentile – random reads)

For reads, I’ll first explain what this chart is doing with respect to HDDs and queue depth. At low queue depths, the percentile plot rides a slight slope because latency varies with seek length (a consequence of the random workload) and rotational latency. At higher queue depths, the profiles ‘stretch’ to higher latencies: the minimum latency stays similar, but the maximum (tail) latency shifts further to the right. This happens because queued commands may be pushed to the back of the queue when re-ordered by the HDD firmware. You’ll note that at higher QD, the overall latency per IO is higher, but the HDD is also putting out higher IOPS overall. That increase comes from the drive being able to better optimize its read pattern when operating with a deeper queue.
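
To make the percentile concept concrete, here is a minimal sketch of how such a curve can be built from raw per-IO latencies. The log-normal sample data is purely illustrative, standing in for a real IO trace from the test tool:

```python
import numpy as np

# Hypothetical per-IO latencies (ms); a log-normal stand-in for a real IO trace.
latencies_ms = np.random.lognormal(mean=2.3, sigma=0.4, size=100_000)

# Sort the samples; each sample's rank becomes its cumulative percentile.
sorted_lat = np.sort(latencies_ms)
percentiles = np.arange(1, len(sorted_lat) + 1) / len(sorted_lat) * 100

# Plotting percentiles (y) against sorted_lat (x) reproduces the profile
# shape discussed above: the steeper and further left the curve, the better.
for p in (50, 90, 99, 99.9):
    print(f"{p}th percentile: {np.percentile(latencies_ms, p):.2f} ms")
```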

Getting into the results, I’ve included the He8 data for comparison, but remember that it is a 7200 RPM unit while both Reds spin at 5400 RPM. A combination of rotational latency and seek time at QD=1 is responsible for the 3ms shift between the He8 and the two Reds. Both Reds start off with nearly identical IOPS and latency profiles, but as load increases, the 8TB Red starts to behave more like the faster enterprise-rated He8. The same can be said for the IOPS results (included in the legend) – at QD=32, the 8TB Red runs closer to the IOPS of the He8 than to the 6TB Red. That is not bad at all considering the He8 is spinning 33% faster than the other two.
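
For reference on those numbers, average rotational latency is half a revolution, so the spindle-speed gap alone explains roughly half of that QD=1 shift, with the He8’s faster seeks accounting for the rest. A quick back-of-the-envelope check:

```python
# Average rotational latency is half a revolution: 0.5 * (60,000 ms / RPM).
def avg_rotational_latency_ms(rpm: int) -> float:
    return 0.5 * 60_000 / rpm

for rpm in (5400, 7200):
    print(f"{rpm} RPM: {avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")

# 5400 RPM -> 5.56 ms, 7200 RPM -> 4.17 ms: rotation alone accounts for
# ~1.4 ms of the QD=1 shift; the He8's quicker seeks make up the remainder.
print(f"Spindle speed ratio: {7200 / 5400:.2f}x (~33% faster)")
```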

(Chart: Latency Percentile – random writes)

HDD writes work out much differently than reads. While reading, a hard drive can’t reply to an IO request until it has actually read the data from the disk. For writes, the drive can employ its cache and reply to the host significantly faster, especially at lower queue depths. Internally, the drive runs its own queue, which in practice can grow significantly deeper than the SATA QD=32 limit. Since the drive buffers writes and takes control over that internal queue, modern hard drives will generally operate at their maximum possible IOPS based on that internal queue, meaning they reach maximum possible IOPS even at QD=1. The only real difference seen at higher host QD is higher per-IO latency. Since the latency percentile test is performed at steady state in this case, higher QD simply translates to a longer wait for each IO, which shifts the nearly vertical profiles to the right.
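
This behavior is essentially Little’s Law at work: at a fixed, saturated IOPS rate, mean latency scales linearly with queue depth. A small sketch, using a hypothetical 400 IOPS steady-state write rate:

```python
# Little's Law at steady state: mean latency = queue depth / IOPS.
# If write IOPS saturate at the drive's internal limit regardless of
# host queue depth, raising QD only stretches per-IO latency.
SATURATED_IOPS = 400  # hypothetical steady-state random write IOPS

for qd in (1, 2, 4, 8, 16, 32):
    latency_ms = qd / SATURATED_IOPS * 1000
    print(f"QD={qd:2d}: ~{latency_ms:5.1f} ms per IO at {SATURATED_IOPS} IOPS")
```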

With the results seen above, I must first explain that the He8 employs a media cache architecture that gives it a significant advantage in random write performance. Note the disproportionately higher IOPS (and lower latency) figures when compared to the two Reds, which do not employ that technology. Looking at the new vs. old Reds, the new 8TB Red actually pulls a trick not seen in either the He8 or the 6TB Red. Note how the profiles of the 8TB Red are more sloped than those of the other drives here. Normally this would be a bad thing, but in this case the tail (top right) is still faster than the 6TB Red’s. The 8TB profiles are sloped *in a good way*, since they reach further to the left (bottom / 0%): even though the maximum latencies of both Reds are similar, the 8TB sees many of its IO requests serviced faster than the 6TB model does. This delta is also responsible for the ~12% IOPS increase seen in the 8TB Red.
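
To illustrate why a more sloped profile with the same tail is a win, here is a toy comparison of two hypothetical service-time distributions (uniform, purely for illustration): same maximum latency, but the ‘sloped’ one services many IOs early, so its mean drops and its throughput rises:

```python
import numpy as np

# Two hypothetical service-time distributions with a similar maximum:
# 'flat' packs most IOs near the max (nearly vertical percentile curve),
# 'sloped' completes many IOs early (curve reaching further left).
rng = np.random.default_rng(0)
flat = rng.uniform(70, 80, 100_000)    # ms
sloped = rng.uniform(40, 80, 100_000)  # ms

for name, lat in (("flat", flat), ("sloped", sloped)):
    # Throughput scales with the reciprocal of mean service time.
    print(f"{name:6s}: mean {lat.mean():.1f} ms -> relative IOPS {1000 / lat.mean():.1f}")
```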

Power Consumption

Note that the new Red borrows its enterprise drive electronics and spindle motor from the HGST line, so some of the figures are higher based on the power draw of those parts. As an example, note that standby consumption is identical between the 8TB Red and the He8. From the figures, we can see that the spindle motor appears to draw a bit more power as well. Despite the helium filling, idle power draw for the 8TB Red is higher than for the 6TB (air-filled) model, yet the Red’s lower spindle speed enables lower idle draw than the He8’s. This appears to be due to a less efficient spindle motor design carried over from HGST. The increased power use at standby (electronics only) and idle (electronics + motor) carries over into higher per-drive power use for all active use.

When comparing these power figures, those speccing out an array or NAS might be more concerned with power use that takes capacity into account. You’ll need fewer 8TB than 6TB drives to reach the same ultimate capacity. With the above chart we can see that, when capacity compensated, the 8TB Red actually beats all drives in this comparison, with the exception of random write seeks, which come in slightly higher than the He8.
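
As a sketch of that capacity-compensated math (all wattages below are hypothetical placeholders, not the measured figures from our charts):

```python
import math

# Capacity-compensated comparison: watts per TB plus drive count needed
# for a target raw capacity. All wattages are hypothetical placeholders,
# not the measured figures from this review.
drives = {"Red 6TB": (6, 5.0), "Red 8TB": (8, 6.0), "He8 8TB": (8, 7.0)}
TARGET_TB = 48

for name, (tb, watts) in drives.items():
    count = math.ceil(TARGET_TB / tb)
    print(f"{name}: {watts / tb:.2f} W/TB, {count} drives, "
          f"{count * watts:.0f} W total for {TARGET_TB} TB")
```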
