Random Performance – Iometer (IOPS/latency), YAPT (random)

We are trying something different here. Readers tend not to enjoy clicking through page after page of benchmarks, so I'm going to weed out those that show little to no delta across the different units (PCMark), and I'm also going to group the results by the performance trait being tested. Here are the random access results:

Iometer:

Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998, and has since become widespread within the industry. Intel later discontinued work on Iometer and passed it to the Open Source Development Lab (OSDL). In November 2001, the code was released on SourceForge.net. Since the project's relaunch in February 2003, it has been driven by an international group of individuals who are continuously improving, porting, and extending the product.

Iometer – IOPS

First, a bit of a caveat: we run this test in a tight sequence on purpose. We want the best possible baseline controller and flash performance, but without fragmenting that flash in the process, so the sequence is kept very short. The EVOs lean on their TurboWrite cache for as long as they can during the test, but eventually run out. The charts are presented in the sequence in which the tests were run, and you can see something change during the Database test. The 120GB 850 EVO runs out of cache at QD 2-4, followed by the 500GB 840 EVO at QD 8-16. The 500GB 850 EVO holds out until QD 32-64, suggesting either more efficient caching or perhaps a slightly larger SLC cache allocation.
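
For readers curious about what a queue depth sweep like this actually looks like in code, the sketch below is a minimal illustration of the idea, not our test script. It approximates queue depth with one synchronous thread per outstanding I/O and issues 4K random reads against a pre-filled test file; the file name, block size, and step duration are placeholders, and a real benchmark (Iometer included) would use asynchronous I/O against a raw device or O_DIRECT so the OS page cache stays out of the measurement.

    import os, time, random
    from concurrent.futures import ThreadPoolExecutor

    BLOCK = 4096                 # 4K random accesses, as in the Iometer profiles
    TEST_FILE = "testfile.bin"   # hypothetical test file, assumed to already exist and be filled with data
    DURATION = 5                 # seconds spent at each queue depth step

    def worker(fd, file_size, stop_at):
        """Issue 4K random reads until the deadline; return how many completed."""
        ops = 0
        max_block = file_size // BLOCK
        while time.time() < stop_at:
            offset = random.randrange(max_block) * BLOCK
            os.pread(fd, BLOCK, offset)   # pread is offset-based, so one fd can be shared by all threads
            ops += 1
        return ops

    def iops_at_queue_depth(qd):
        """Approximate queue depth with qd concurrent threads, each keeping one I/O in flight."""
        fd = os.open(TEST_FILE, os.O_RDONLY)
        file_size = os.fstat(fd).st_size
        stop_at = time.time() + DURATION
        with ThreadPoolExecutor(max_workers=qd) as pool:
            total_ops = sum(pool.map(worker, [fd] * qd, [file_size] * qd, [stop_at] * qd))
        os.close(fd)
        return total_ops / DURATION

    if __name__ == "__main__":
        for qd in (1, 2, 4, 8, 16, 32, 64):
            print(f"QD {qd:3d}: {iops_at_queue_depth(qd):10.0f} IOPS")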

Now with the cache action explained, let's look at the results. Where TurboWrite is able to do its work, the 850 EVO performs extremely well. Note the top performance in the File Server test. Having SLC take care of the writes while the TLC area handles the reads is about as good as having two independent SSDs, one for each type of operation. End result: the 500GB 850 EVO wipes the floor with the competition at every single data point, all the way up to QD 256. While SATA devices can only handle QD <= 32, the nice flat line above that maximum suggests very good performance consistency.

Iometer – Average Transaction Time

For SSD reviews, HDD results are removed here as they throw off the scale too far to show any meaningful difference between the remaining results. Queue depth has been reduced to 8 to further clarify the results (especially since typical consumer workloads rarely exceed QD=8). Some notes for interpreting the results:

  • Times measured at QD=1 can double as a measure of access latency (the SSD equivalent of seek time, in HDD terms).
  • A 'flatter' line means that drive will scale better and ramp up its IOPS when hit with multiple simultaneous requests, especially if that line sits lower than those of competing units (the short sketch after this list shows how transaction time relates to IOPS).
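
Why a flatter transaction time line implies better IOPS scaling falls out of Little's Law: throughput is roughly the number of outstanding requests divided by the average time each one takes. The snippet below only works through that arithmetic with made-up latency figures for illustration; it does not measure anything.

    # Little's Law ties the two Iometer views of the same data together:
    #   IOPS ~= queue depth / average transaction time
    # The latency figures below are hypothetical, chosen only to show the math.

    def iops_from_latency(queue_depth, avg_transaction_time_ms):
        """Convert an average transaction time (ms) at a given queue depth into IOPS."""
        return queue_depth / (avg_transaction_time_ms / 1000.0)

    for qd, latency_ms in [(1, 0.10), (2, 0.11), (4, 0.13), (8, 0.18)]:
        print(f"QD {qd}: {latency_ms:.2f} ms/transaction -> {iops_from_latency(qd, latency_ms):,.0f} IOPS")

A drive whose average transaction time barely rises as queue depth climbs (the 'flat line') sees its IOPS scale nearly linearly with queue depth, which is exactly the behavior the charts reward.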

YAPT (random)

YAPT (yet another performance test) is a benchmark recommended by a pair of drive manufacturers and was incredibly difficult to locate, as it hasn't been updated or used in quite some time. That doesn't make it irrelevant by any means, as the benchmark is quite useful. It creates a test file of roughly 100 MB and runs both random and sequential read and write tests against it while varying the data I/O size as it goes. The misaligned nature of this test exposes the read-modify-write performance of SSDs and Advanced Format HDDs.
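
As a rough illustration of what 'misaligned' means here, the sketch below shifts every random write off its natural 4K boundary by 512 bytes, which forces the drive to read, modify, and rewrite the flash pages (or physical sectors) on either side of the access. This is a sketch of the access pattern only, not YAPT itself; the file name, sizes, and 512-byte shift are assumptions, and because it goes through the filesystem and page cache rather than a raw device, the numbers it prints are not comparable to the benchmark results.

    import os, time, random

    TEST_FILE = "yapt_like_test.bin"   # hypothetical test file, roughly YAPT-sized
    FILE_SIZE = 100 * 1024 * 1024
    MISALIGN = 512                     # shift each access by one legacy 512-byte sector

    def timed_random_writes(fd, block_size, count, offset_shift=0):
        """Random writes of block_size bytes, optionally shifted off 4K boundaries; returns MB/s."""
        buf = os.urandom(block_size)
        max_block = (FILE_SIZE - block_size - offset_shift) // block_size
        start = time.time()
        for _ in range(count):
            pos = random.randrange(max_block) * block_size + offset_shift
            os.pwrite(fd, buf, pos)
        os.fsync(fd)
        return count * block_size / (time.time() - start) / 1e6

    if __name__ == "__main__":
        with open(TEST_FILE, "wb") as f:
            f.truncate(FILE_SIZE)       # create the test file once
        fd = os.open(TEST_FILE, os.O_RDWR)
        for bsize in (4096, 8192, 16384, 65536):
            aligned = timed_random_writes(fd, bsize, 2000)
            shifted = timed_random_writes(fd, bsize, 2000, MISALIGN)
            print(f"{bsize // 1024:3d} KB writes: aligned {aligned:6.1f} MB/s, misaligned {shifted:6.1f} MB/s")
        os.close(fd)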

The new 850 EVOs' performance optimizations appear to be better tuned for 4K-aligned accesses, but they still hold up very well in the face of misaligned random writes.
