IOMeter v2006.07.27 – IOps

Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998; since then it has become widespread within the industry.

Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial code drop was provided. Since its relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.

SSD Roundup: Indilinx vs. Samsung vs. Intel (or why size matters) - Storage 33

Indilinx does great in our Web Server test pattern, keeping pace with the X25-M up to a queue depth of 4.  Neither Samsung unit seems to scale at all here, though the 128GB Summit just manages to squeak past Intel at QD=1.

The large cache of the Summit helped it come on strong in our File Server test, but higher queue depths hit its limit, causing it to fall quickly to the level of the P64.  Indilinx units all did well, with the V4S showing its SLC muscle.

The P64 fell on its sword during our Database test, quickly becoming overloaded.  The Vertex Turbo did best, though all Indilinx units were very close here.

The P64 threw in the towel before reaching our Workstation test, and the Summit, which had hung in there so far, finally filled up its cache.

We run very short IOMeter passes to help minimize fragmentation, but this was not enough to keep the Samsung drives from running out of steam.  Once their cache was full, they just gave up until the end of the test, where they could catch their breath and write all of their cached data back to the flash.  Indilinx units were all consistent, though the V4S fell to the bottom of the Indilinx pack.  This was likely caused by its significantly smaller flash area (only 32 GB), meaning it fragments faster under this type of testing.

Things to consider when reading this data: Queue depth builds up when commands are sent to the drive by multiple threads and/or applications in parallel; the commands effectively ‘stack up’ on the drive.  The X25-M takes *significant* advantage of this, performing anywhere from 2-5x better than the others depending on demand.  Most of the competition stays at a constant performance level, so adding parallel demands on the SSD results in a drop in speed as seen by each parallel application, because those demands must be spread across a constant rate of task completion.  While the other drives maintain their status quo, the X25 just picks up steam.
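The arithmetic behind that point can be sketched in a few lines. This is a hypothetical model (not from the review, and the IOPS figures are made up): a drive with a fixed completion rate splits its throughput evenly across parallel requesters, while a drive whose internal parallelism lets total IOPS scale with queue depth keeps per-application speed roughly constant up to some limit.

```python
# Hedged sketch: per-application IOPS versus queue depth for two drive models.
# `base_iops` and `max_scale` are illustrative assumptions, not measured values.

def per_thread_iops(total_iops: float, queue_depth: int) -> float:
    """Each of `queue_depth` parallel requesters sees an equal share."""
    return total_iops / queue_depth

def constant_rate_drive(queue_depth: int, base_iops: float = 10_000) -> float:
    # Most competing SSDs here: total completion rate stays flat, so each
    # parallel application's share shrinks as queue depth grows.
    return per_thread_iops(base_iops, queue_depth)

def scaling_drive(queue_depth: int, base_iops: float = 10_000,
                  max_scale: int = 4) -> float:
    # X25-M-style behavior: total IOPS grows with queue depth, up to an
    # assumed internal-parallelism limit, so per-application speed holds up.
    total = base_iops * min(queue_depth, max_scale)
    return per_thread_iops(total, queue_depth)

for qd in (1, 2, 4, 8):
    print(f"QD={qd}: constant-rate drive {constant_rate_drive(qd):.0f} IOPS/app, "
          f"scaling drive {scaling_drive(qd):.0f} IOPS/app")
```

At QD=4 the scaling drive delivers 4x the per-application throughput of the constant-rate drive in this toy model, which matches the 2-5x spread described above.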
