Iometer – IOPS

Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by the Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998 – since then it has seen widespread adoption within the industry.

Intel has since discontinued work on Iometer, handing it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial code drop was provided. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.

We are running a newer version of Iometer, but with a configuration similar to prior versions (data compressibility, etc.) so as to maintain consistency across the test data pool.
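For readers curious what the compressibility setting actually changes, here is a minimal Python sketch of the idea: an incompressible (random) buffer defeats controllers that compress data in flight, while an all-zero buffer flatters them. The buffer contents and block size here are illustrative, not Iometer internals:

```python
import os
import zlib

BLOCK_SIZE = 4096  # 4 KiB, a typical transfer size for these tests

incompressible = os.urandom(BLOCK_SIZE)  # random bytes: ~0% compressible
compressible = b"\x00" * BLOCK_SIZE      # all zeros: highly compressible

for name, buf in (("random", incompressible), ("zeros", compressible)):
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.0%} of original size")
```

A drive with a compressing controller would post inflated numbers on the all-zero pattern, which is why keeping the data pattern consistent across reviews matters.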

Light desktop usage sees queue depth (QD) figures between 1 and 4. Heavy / power-user loads run at QD=8 and higher. Most SSDs are not capable of effectively handling anything higher than QD=32, which explains the plateaus.
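To make the QD numbers concrete, here is a rough Python sketch of what queue depth means in practice: keep that many I/Os in flight at once. The file path and request count are placeholders, os.pread is POSIX-only, and Iometer itself issues native asynchronous I/O rather than using threads:

```python
import concurrent.futures
import os
import random

PATH = "testfile.bin"   # assumed pre-created test file (placeholder)
BLOCK = 4096            # 4 KiB transfers, as in the charts

fd = os.open(PATH, os.O_RDONLY)
blocks = os.path.getsize(PATH) // BLOCK

def one_read(_):
    # pread is thread-safe on a shared descriptor: no seek state to race on
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)

QD = 8  # light desktop use sits near QD 1-4; power users push 8 and up
with concurrent.futures.ThreadPoolExecutor(max_workers=QD) as pool:
    list(pool.map(one_read, range(10_000)))
os.close(fd)
```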

Regarding why we use this test as opposed to single-tasker tests like 4KB random reads or 4KB random writes: computers are simply not single-taskers. Writes take place at the same time as reads. We call this mixed-mode testing, and while a given SSD comes with side-of-box specs boasting what it can do as a uni-tasker, the tests above tend to paint a very different picture.
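As a rough illustration of mixed-mode I/O (not the exact profile we run), the sketch below interleaves reads and writes against the same file according to a fixed mix, so from the drive's perspective both are happening at once. The 67% read ratio and file path are assumptions for the example:

```python
import os
import random

PATH = "testfile.bin"  # placeholder test file
BLOCK = 4096
READ_PCT = 0.67        # hypothetical mix: ~2/3 reads, ~1/3 writes

fd = os.open(PATH, os.O_RDWR)
blocks = os.path.getsize(PATH) // BLOCK
payload = os.urandom(BLOCK)  # incompressible write data, per the note above

for _ in range(10_000):
    offset = random.randrange(blocks) * BLOCK
    if random.random() < READ_PCT:
        os.pread(fd, BLOCK, offset)   # random 4 KiB read
    else:
        os.pwrite(fd, payload, offset)  # random 4 KiB write
os.close(fd)
```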

Before getting into the actual results: we typically configure this test to run only one worker thread, since you can't get QD=1 results with more than one worker running at once. A single worker thread pegs its associated CPU core at roughly 220k IOPS. This is not the fault of the SSD under test; it is a limit of the benchmark itself in our desired configuration. While this has not been a limit in the past, it clearly is now, as we can see the DC P3700 almost comically walk all over the competition. Seriously, nothing holds a candle to this thing. Further, it outperforms a fair number of these devices even at its QD=1 point. This means it performs so fast that it is very likely to bulldoze through workloads as they are thrown at it, continuously keeping the queue shallow. In other words, it goes so fast that the queue would never get the chance to build very high in the first place.
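The shallow-queue point follows from Little's Law: average queue depth equals throughput multiplied by latency, so at a fixed workload demand a lower-latency drive simply never accumulates a deep queue. The figures below are hypothetical, chosen only to show the arithmetic, not measured results from this review:

```python
# Little's Law: QD = IOPS * latency (latency in seconds)
arrival_iops = 50_000  # hypothetical workload demand

for name, latency_us in (("slow SSD", 200), ("fast NVMe SSD", 20)):
    qd = arrival_iops * latency_us / 1e6
    print(f"{name}: average queue depth ~ {qd:.1f}")
# slow SSD: average queue depth ~ 10.0
# fast NVMe SSD: average queue depth ~ 2.0
```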
