IOMeter v2006.07.27 – IOps
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by the Intel Corporation and announced at the Intel Developers Forum (IDF) on February 17, 1998, and it has since become widespread within the industry.
Intel has since discontinued work on Iometer and handed it over to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial drop was provided. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting, and extending the product.
Uh oh. See that blue line that’s *under* many of the other drives? When it comes to pure IOPS testing, the 510 has a difficult time surpassing even the SATA 3Gb/sec units. It most certainly doesn’t ‘ramp up’ to take advantage of the added bandwidth as seen with the new SandForce controller.
Decent performance in the file server profile, but still significantly eclipsed by SandForce.
In our database profile, even the *first generation* X25-M was able to surpass the 510 at QD >= 32.
An interesting point is that in the last three benches, the tweaks Intel has made enable the 510 to take a significantly different ‘ramp’ as queue depth increases. The plot shifts to the more obvious exponential buildup we expect from a well-engineered SSD. While this gives it a significant advantage over the original Marvell implementation at the important queue depths of 2-16, the controller eventually runs out of steam.
While I’ll grant you these are synthetic benchmarks and don’t always translate to a specific real-world usage case, they do cover a broad spectrum of performance seen from a given SSD. Light usage equates to the leftmost points of the corresponding chart, while the heaviest multi-threaded usage sits at QD=32. Most SATA devices can’t handle QD>32, but we push our test further out to look for inconsistencies when the drive is getting hit the hardest.
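To make the queue-depth concept concrete, here is a minimal sketch of how one could approximate random-read IOPS at increasing queue depths, in the spirit of IOMeter's 4 KiB random-read profile. This is not IOMeter's actual implementation; it simply simulates queue depth with one thread per outstanding request, and on a small test file the OS page cache will make the numbers far higher than raw drive performance. The file path, sizes, and durations are all illustrative assumptions.

```python
# Hypothetical sketch: approximate random-read IOPS at a given queue depth
# by keeping `qd` reads in flight via a thread pool. Not IOMeter itself.
import os
import random
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096                  # 4 KiB random reads, as in typical IOPS profiles
FILE_SIZE = 4 * 1024 * 1024   # tiny test file; a real run would span the drive
DURATION = 0.25               # seconds per queue-depth step (kept short here)

def _worker(fd, blocks, deadline):
    """Issue random 4 KiB reads until the deadline; return how many completed."""
    done = 0
    while time.monotonic() < deadline:
        os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
        done += 1
    return done

def iops_at_qd(path, qd):
    """Approximate random-read IOPS at queue depth `qd` using qd threads."""
    fd = os.open(path, os.O_RDONLY)
    try:
        blocks = os.path.getsize(path) // BLOCK
        deadline = time.monotonic() + DURATION
        with ThreadPoolExecutor(max_workers=qd) as pool:
            counts = pool.map(lambda _: _worker(fd, blocks, deadline), range(qd))
        return sum(counts) / DURATION
    finally:
        os.close(fd)

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(FILE_SIZE))
        path = f.name
    try:
        for qd in (1, 4, 16, 32):   # light usage on the left, heavy on the right
            print(f"QD={qd:>2}: {iops_at_qd(path, qd):,.0f} IOPS")
    finally:
        os.unlink(path)
```

A drive that scales well shows IOPS climbing steeply from QD=1 toward QD=32; a controller that "runs out of steam" flattens early, which is exactly the shape being discussed in these charts.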
In these tests, while it appears Intel has worked some firmware magic to boost the throughput of this Marvell controller, there is only so much you can do with a given piece of silicon. When it comes to hitting these drives with multiple IOs, the Marvell just can’t spread its wings like the previous (and even first) generation offerings from Intel. This is essentially confirmed by the fact that the 510 neatly matches the maximum IOPS (QD>=32) of the Marvell-equipped C300. In short, if you hit the 510 with any sort of random IO, it hits its IOPS limit before it hits a bandwidth limit (even at 3Gb/sec).