IOMeter – Average Transaction Time (rev 1)
I’ve never really liked how we used to present our average transaction time data, especially when trying to demonstrate SSD latency and access time. I have therefore changed how I will present this data from this point forward. First, I have removed HDD results, as they throw the scale too far out to tell any meaningful difference between the SSDs you are actually trying to focus on. Second, I have reduced the queue depth scale to a maximum of 4. In practical terms, on a running OS, queue depth is how many commands are ‘stacked up’ waiting on the SSD at a given moment. An SSD is so fast at servicing requests that typical use will rarely see the queue climb past 4. In the cases where it does, there is so much going on that you are more concerned with IOPS and throughput at that point than with transaction time. The charts below are meant to show how nimble a given SSD is. Think of it as how well a car handles as opposed to how fast it can go.
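For readers who want the ‘stacked up’ idea made concrete, here is a minimal sketch of how a tool in the IOMeter mold can hold a fixed queue depth: it simply keeps N reads outstanding against a test file and times each one. This is my own illustration rather than IOMeter’s actual code, and the file path, block size, and run length are placeholder assumptions.

```python
# A rough sketch (my illustration, not IOMeter's engine) of holding a fixed
# queue depth: keep N reads in flight at once and time each one.
# PATH, BLOCK, and DURATION are arbitrary placeholders. Unix-only (os.pread).
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "testfile.bin"   # hypothetical pre-created test file
BLOCK = 4096            # 4 KB transfers, a common IOMeter access size
DURATION = 5.0          # seconds per queue-depth run

def worker(fd, size, deadline, latencies):
    # Each worker keeps exactly one request outstanding at all times.
    while time.time() < deadline:
        offset = random.randrange(size // BLOCK) * BLOCK
        start = time.time()
        os.pread(fd, BLOCK, offset)              # one 'transaction'
        latencies.append(time.time() - start)

def run(queue_depth):
    # N workers, each with one read in flight, approximates QD = N.
    # A real benchmark would also bypass the OS page cache (e.g. O_DIRECT);
    # that is omitted here for simplicity.
    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    latencies = []
    deadline = time.time() + DURATION
    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        for _ in range(queue_depth):
            pool.submit(worker, fd, size, deadline, latencies)
    os.close(fd)
    avg_ms = 1000 * sum(latencies) / len(latencies)
    print(f"QD={queue_depth}: average transaction time {avg_ms:.3f} ms")

for qd in (1, 2, 3, 4):   # the reduced queue depth scale used in the charts
    run(qd)
```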
Some notes for interpreting results:
- Times measured at QD=1 serve as a more ‘real’ figure for a drive’s access (seek) time.
- A ‘flatter’ line means that drive will scale better, ramping up its IOPS when hit with multiple requests simultaneously (see the sketch after this list).
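Why does a flatter line translate into better IOPS scaling? Little’s Law ties the two together: at a steady queue depth, IOPS ≈ queue depth ÷ average transaction time. The sketch below runs that arithmetic over two made-up latency curves (illustrative numbers only, not measurements from any drive reviewed here):

```python
# Little's Law for a saturated queue: IOPS ~= queue_depth / avg_transaction_time.
# The latency curves below are hypothetical, purely to illustrate scaling.
def iops(queue_depth, avg_latency_ms):
    return queue_depth / (avg_latency_ms / 1000.0)

flat_drive  = {1: 0.10, 2: 0.11, 3: 0.12, 4: 0.13}  # latency barely rises under load
steep_drive = {1: 0.10, 2: 0.18, 3: 0.27, 4: 0.38}  # latency grows almost linearly

for qd in (1, 2, 3, 4):
    print(f"QD={qd}: flat line {iops(qd, flat_drive[qd]):6.0f} IOPS | "
          f"steep line {iops(qd, steep_drive[qd]):6.0f} IOPS")
```

The ‘flat’ drive roughly triples its IOPS between QD=1 and QD=4, while the ‘steep’ drive barely moves, which is exactly the scaling behavior these charts are meant to expose.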
Now that we can actually see what we want to see, we note that most Intel models are very close in how long they take to service a given request. The main exception is the supercharged TRIM-enabled X25-M G2, with its 2-3x drop in latency. The SSDNow 40GB peels away slightly from its brothers under load because it has only half the available channels over which to spread that load. The Vertex starts out competitive but quickly falls away from the pack, as it has a very limited NCQ implementation and only 4 physical channels to its flash.