IOMeter v2006.07.27 – IOps

Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998, and has since become widespread within the industry.

Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial code drop was provided. Since its relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.

Light desktop usage sees queue depth (QD) figures between 1 and 4, while heavy and power-user loads run at QD=8 and higher. Most SSDs are not capable of effectively handling anything beyond QD=32, which explains the plateaus.
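If "queue depth" is unfamiliar, it simply means how many I/O requests are kept outstanding against the drive at once. Below is a minimal Python sketch of the idea, assuming a Unix-like system and a pre-created test file; the file name, QD, and duration values are our placeholders, not Iometer's settings, and a real benchmark would use direct I/O to bypass the OS page cache.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

PATH = "testfile.bin"  # assumed pre-created test file
BLOCK = 4096           # 4 KB transfer size
QD = 32                # queue depth: I/Os kept outstanding at once
DURATION = 5.0         # seconds to run

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

def one_read():
    # Pick a random 4 KB-aligned offset and read one block.
    offset = random.randrange(size // BLOCK) * BLOCK
    return os.pread(fd, BLOCK, offset)

done = 0
deadline = time.monotonic() + DURATION
with ThreadPoolExecutor(max_workers=QD) as pool:
    inflight = {pool.submit(one_read) for _ in range(QD)}
    while time.monotonic() < deadline:
        finished, inflight = wait(inflight, return_when=FIRST_COMPLETED)
        for f in finished:
            f.result()  # surface any I/O errors
        done += len(finished)
        # Top the queue back up so exactly QD I/Os stay outstanding.
        while len(inflight) < QD:
            inflight.add(pool.submit(one_read))
os.close(fd)
print(f"~{done / DURATION:.0f} IOPS at QD={QD}")
```

At QD=1 the loop issues one request and waits; raising QD lets the drive work on many requests in parallel, which is why IOPS climb with queue depth until the controller saturates.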

Regarding why we use this test as opposed to single-tasker tests like 4KB random reads or 4KB random writes: computers are just not single-taskers. Writes take place at the same time as reads. We call this mixed-mode testing, and while a given SSD ships with side-of-box specs boasting what it can do as a uni-tasker, the tests above tend to paint a very different picture.
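For the curious, here is a hedged sketch of the mixed-mode decision applied per I/O. The READ_PCT ratio and the one_io helper are our illustrative assumptions, not the review's exact access specification; swapping this in for one_read in the queue-depth loop above (with the file opened read-write) turns the pure-read sketch into a mixed workload.

```python
import os
import random

READ_PCT = 67  # e.g. a 67/33 read/write mix; the exact ratio is an assumption

def one_io(fd, size, block=4096):
    # Each outstanding I/O rolls against the read percentage, so reads
    # and writes land on the drive interleaved rather than in separate runs.
    offset = random.randrange(size // block) * block  # 4 KB-aligned offset
    if random.randrange(100) < READ_PCT:
        return os.pread(fd, block, offset)             # random read
    return os.pwrite(fd, os.urandom(block), offset)    # random write
```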

I've altered the presentation of these charts slightly as compared to the norm. They are usually in alphabetical order; this time I've arranged them in the order in which they are run. We run the series back-to-back, which gives the drives no time to catch their breath. This is normally not an issue, but in the case of caching SSDs, the SSD with the smaller cache has a greater chance of filling it.

In this case, the 500GB EVO performs right alongside its 1TB brother up until the point where its cache becomes full. Since this is a mixed workload test, that had the unfortunate side effect of forcing the 500GB model to flush its cache to TLC while writes were still being requested of the drive itself. This resulted in overall speeds dropping below those of the older TLC-only 840. I suspect the mixed, random workload combined with the cache-flushing overhead simply overtaxed the controller, causing a dip in random read IOPS that ultimately held back overall IOPS.

Keep in mind this is not exactly a consumer-oriented test. It's rare that any non-enterprise user will put an 840 EVO through this type of torture, at least not for as long a period as we do.
