IOMeter v2006.07.27 – IOps

Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998; since then it has become widespread within the industry.

Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered and an initial code drop was provided. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.

A quick lesson on IOMeter and Queue Depth (QD): for a given test, we run the same workload while scaling up the number of IOs issued to the drive simultaneously. QD=2 means the OS keeps two IO requests in flight at once. As IOs are serviced (completed), IOMeter issues a new request so as to keep the queue filled to the level spec'd by that particular instance of the test.
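The "keep the queue full" behavior described above can be sketched with a toy event-driven model. This is not IOMeter itself, and the device model (constant service time, all outstanding IOs serviced in parallel) is a simplifying assumption purely for illustration:

```python
import heapq

def simulate(total_ios, queue_depth, service_time):
    """Toy model of a QD-limited test: keep exactly `queue_depth`
    IOs outstanding; as each completes, immediately issue a new one.
    Assumes a constant per-IO service time and a device that can
    service all outstanding IOs in parallel. Returns IOps."""
    clock = 0.0
    outstanding = []  # min-heap of completion times
    issued = completed = 0
    # Prime the queue to the requested depth, as IOMeter does.
    while issued < min(queue_depth, total_ios):
        heapq.heappush(outstanding, clock + service_time)
        issued += 1
    while outstanding:
        clock = heapq.heappop(outstanding)  # next IO completes
        completed += 1
        if issued < total_ios:              # refill the queue
            heapq.heappush(outstanding, clock + service_time)
            issued += 1
    return completed / clock

# Under these assumptions, doubling QD doubles throughput:
print(round(simulate(10_000, 1, 0.0001)))  # QD=1
print(round(simulate(10_000, 2, 0.0001)))  # QD=2
```

Real drives, of course, stop scaling once their internal parallelism is exhausted, which is exactly the behavior the charts here tease apart.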

Here we see that every SSD solution behaves differently as a result of its construction and implementation. The ioDrive essentially speaks directly to a bank of flash memory, significantly reducing per-transaction latency. That combined with a bit of driver-level caching lets the ioDrive 160 excel at lower queue depths.

Once you hit QD >= 16, however, the R4 goes into Top Fuel Dragster mode and just leaves everything else in a cloud of smoke. The slaughter would have been even greater if not for our 'standard' IOMeter test employing only a single worker. This is why we 'only' saw 160,000 IOPS, where OCZ claims the R4 can peak at 410,000 IOPS under best-case workloads.
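Why does a single worker cap the result? Little's Law ties the three quantities together: outstanding IOs = IOps x per-IO latency. A quick back-of-the-envelope check (the 100 µs latency figure below is an illustrative assumption, not a measured value):

```python
def required_outstanding_ios(iops, latency_s):
    """Little's Law: the average number of IOs that must be in
    flight to sustain `iops` at a per-IO latency of `latency_s`."""
    return iops * latency_s

# Assuming ~100 µs per IO for the sake of argument:
print(required_outstanding_ios(160_000, 0.0001))  # outstanding IOs at 160K IOps
print(required_outstanding_ios(410_000, 0.0001))  # outstanding IOs at 410K IOps
```

At an assumed 100 µs latency, 160K IOps needs about 16 IOs in flight, which one worker at QD=16 can supply, while 410K IOps needs roughly 41, hence OCZ's peak numbers requiring multiple workers to stack up aggregate queue depth.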

A rather large disadvantage of the ioDrive is that its driver puts the bulk of the flash management and IO duties onto the host system, consuming RAM and CPU cycles. This caps the ultimate performance you will see from a given ioDrive, as it is bound by the resources of the host system. In contrast, the OCZ solutions (including the R4) use a simple SCSI StorPort driver, presenting to Windows just like any other high-end RAID solution. This leaves the host system free to focus on the already huge task of throwing nearly half a million IOs per second down those PCIe lanes!
