IOMeter – Average Transaction Time (rev 1)

I’ve never really liked how we used to present our average transaction time data, especially when trying to demonstrate SSD latency and access time.  I have therefore changed how this data is presented.  First, I have removed HDD results, as they throw the scale off so far that you can’t see any meaningful difference between the SSDs you are actually trying to focus on.  Second, I have reduced the queue depth scale down to 4.  In practical terms of a running OS, queue depth is how many commands are ‘stacked up’ on the SSD at a given moment.  An SSD is so fast at servicing requests that typical use will rarely push the queue depth past 4, and in the cases where it does, there is so much going on that IOPS and throughput matter more than transaction time.  The charts below are meant to show how nimble a given SSD is.  Think of it as how well a car handles as opposed to how fast it can go.

Some notes for interpreting results:

  • Times measured at QD=1 can serve as a more ‘real’ value of seek time.
  • A ‘flatter’ line means the drive will scale better and ramp up its IOPS when hit with multiple simultaneous requests (see the sketch below).
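
To make the relationship behind these two notes concrete, here is a minimal sketch (not part of our test suite) based on Little’s Law, where average transaction time is roughly queue depth divided by IOPS.  The IOPS figures in it are made-up placeholders, not measured results.

```python
# Little's Law sketch: average transaction time ~= queue depth / IOPS.
# All IOPS numbers below are hypothetical placeholders, not measured data.

def avg_transaction_time_ms(queue_depth: int, iops: float) -> float:
    """Average time (ms) each request spends at the drive for a given queue depth."""
    return queue_depth / iops * 1000.0

# A drive whose IOPS scale with queue depth (good NCQ behaviour) versus one
# whose IOPS stay flat regardless of queue depth (no effective NCQ scaling).
scaling_iops = {1: 5000, 2: 9500, 4: 18000}   # placeholder values
flat_iops    = {1: 5000, 2: 5000, 4: 5000}    # placeholder values

for qd in (1, 2, 4):
    print(f"QD={qd}: scaling drive {avg_transaction_time_ms(qd, scaling_iops[qd]):.2f} ms, "
          f"non-scaling drive {avg_transaction_time_ms(qd, flat_iops[qd]):.2f} ms")
```

The drive whose IOPS ramp up shows the ‘flatter’ line; the one stuck at a fixed IOPS figure sees its transaction time climb in lockstep with queue depth.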


[IOMeter Average Transaction Time charts]

With our new way of showing Transaction Time results, we can easily see the effects of a lack of NCQ scaling.  The Colossus’ transaction time doubles almost perfectly with each doubling of queue depth.  This makes sense: without NCQ the drive services requests one at a time, so each request waits roughly twice as long when twice as many are stacked up behind it.

Drives that take more advantage of NCQ are able to ramp up their IOPS at higher queue depths, keeping their transaction times lower even when faced with an onslaught of IO requests.  This is seen with the Intel and (single) Indilinx units in the Web Server test.  Indilinx firmware is not yet refined enough to scale with intermixed writes, so the bottom three tests remain dominated by the Intel units, especially the new 02HA-equipped X25-M G2.
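
As a rough illustration of the two behaviours described in the last two paragraphs, here is a small toy model (my own simplification, not how IOMeter or any of these controllers actually work internally): one function services requests strictly one at a time, the other assumes the controller can work on up to four queued requests at once.  The service time and parallelism figures are assumed placeholders.

```python
# Toy model: transaction time vs. queue depth with and without NCQ scaling.
# 'service_ms' and 'max_parallel' are assumed placeholder values.

def avg_wait_no_ncq(queue_depth: int, service_ms: float) -> float:
    # Requests are serviced strictly one after another, so the average wait
    # grows linearly with queue depth: doubling the depth doubles the time.
    return queue_depth * service_ms

def avg_wait_with_ncq(queue_depth: int, service_ms: float, max_parallel: int = 4) -> float:
    # The controller overlaps work on several queued requests (reordering and
    # spreading them across flash channels), so the wait grows far more slowly.
    effective_parallel = min(queue_depth, max_parallel)
    return queue_depth * service_ms / effective_parallel

for qd in (1, 2, 4):
    print(f"QD={qd}: no NCQ scaling {avg_wait_no_ncq(qd, 0.1):.2f} ms, "
          f"with NCQ scaling {avg_wait_with_ncq(qd, 0.1):.2f} ms")
```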

You may note the Samsung-controlled Summit climbing off the scale in the bottom two tests.  Samsung relies heavily on its large data cache when faced with combined reads and writes.  Written data stacks up in the cache, forcing the drive to occasionally purge it and flush the data out to flash memory.  While this happens everything else is put on hold, causing the considerable delays seen in the Workstation and Database tests.
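
To illustrate how occasional cache purges can dominate an average, here is a tiny toy simulation (my own sketch, with entirely made-up timing constants and cache size, not Samsung’s actual firmware behaviour) of writes landing in a cache that must periodically be flushed to flash, stalling everything behind it.

```python
# Toy simulation of a write cache that periodically purges to flash.
# Every constant here is a made-up placeholder, not a measured value.

CACHE_CAPACITY  = 64     # writes the cache can absorb before a purge (assumed)
CACHED_WRITE_MS = 0.05   # latency of a write absorbed by the cache (assumed)
PURGE_STALL_MS  = 50.0   # stall while the cache is flushed to flash (assumed)

def simulate(num_writes: int) -> float:
    """Return the average per-write transaction time across num_writes writes."""
    total_ms, cached = 0.0, 0
    for _ in range(num_writes):
        total_ms += CACHED_WRITE_MS
        cached += 1
        if cached == CACHE_CAPACITY:   # cache full: everything waits on the purge
            total_ms += PURGE_STALL_MS
            cached = 0
    return total_ms / num_writes

print(f"average transaction time: {simulate(10_000):.3f} ms")
```

Even though each individual cached write is fast, the periodic purges drag the average transaction time far above the cached-write latency, which is the spike-off-the-scale behaviour seen in the Workstation and Database charts.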
