Random Performance – Iometer (IOPS/latency), YAPT (random)
We are trying something different here. Folks tend not to like clicking through pages and pages of benchmarks, so I'm going to weed out those that show little to no delta across different units (PCMark). I'm also going to group results by the performance trait tested. Here are the random access results:
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by Intel Corporation and announced at the Intel Developer Forum (IDF) on February 17, 1998, and it has since seen widespread adoption within the industry. Intel later discontinued work on Iometer and passed it on to the Open Source Development Lab (OSDL). In November 2001, the code was released on SourceForge.net. Since the project's relaunch in February 2003, it has been driven by an international group of individuals who are continuously improving, porting, and extending the product.
Iometer – IOPS
File and Web Server performance are as expected, with no noticeable differences or dips in performance in the 2TB models.
Well, this is interesting. While EVO models normally run out of SLC cache by this point in the test sequence (note the 500GB EVO at QD 32-64), the 2TB 850 EVO shows entirely different behavior, with lower than expected performance at low queue depths but satisfactory performance at higher ones. Typically, when an EVO runs out of cache, low QD performance remains consistent and the SSD simply 'tops out' sooner. The 2TB 850 EVO appears to be doing the opposite here.
Here we see the same odd behavior we saw in the database testing. While the 850 Pro had no issue at the 2TB capacity, something is different with the 2TB capacity of the 850 EVO. After looking at these results and doing some other tinkering with the drive, it appears that the 2TB 850 EVO is more aggressive when it comes to purging its SLC cache, and while doing so it seems to balance those operations against incoming requests from the host. This may be a side effect of whatever enabled that blazing result on our file copy test, or it may just be a cache/buffer tuning oversight by Samsung. It may also be that this SSD was tested on pre-release firmware that is not fully optimized. I'll be investigating this issue further and coordinating with Samsung to replicate it on their end.
Iometer – Average Transaction Time
For SSD reviews, HDD results are removed, as they throw off the scale too far to discern any meaningful differences in the results. Queue depth has been reduced to 8 to further clarify the results (especially as typical consumer workloads rarely exceed QD=8). Some notes for interpreting results:
- Times measured at QD=1 can double as a value of seek time (in HDD terms, that is).
- A 'flatter' line means that drive will scale better and ramp up its IOPS when hit with multiple requests simultaneously, especially if that line falls lower than competing units.
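The relationship behind those two notes can be sketched in a few lines of Python. This is a rough illustration with hypothetical latency figures (not measured data from the charts): by Little's Law, IOPS at a given queue depth is roughly the queue depth divided by the average transaction time, so a 'flat' latency line implies near-linear IOPS scaling.

```python
def iops_from_latency(queue_depth: int, avg_latency_ms: float) -> float:
    """Estimate IOPS from queue depth and average transaction time (Little's Law)."""
    return queue_depth / (avg_latency_ms / 1000.0)

# If average transaction time stays flat at a hypothetical 0.05 ms per I/O,
# IOPS doubles with each doubling of queue depth:
for qd in (1, 2, 4, 8):
    print(qd, round(iops_from_latency(qd, 0.05)))  # 20000, 40000, 80000, 160000
```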
That low queue depth performance of the 2TB 850 EVO really sticks out like a sore thumb in the latter two tests of this sequence.
YAPT (yet another performance test) is a benchmark recommended by a pair of drive manufacturers and was incredibly difficult to locate as it hasn't been updated or used in quite some time. That doesn't make it irrelevant by any means though, as the benchmark is quite useful. It creates a test file of about 100 MB in size and runs both random and sequential read and write tests with it while changing the data I/O size in the process. The misaligned nature of this test exposes the read-modify-write performance of SSDs and Advanced Format HDDs.
This test has no regard for 4k alignment, and it brings many SSDs to their knees rather quickly. Samsung SSDs have historically done very well with this test, however we do note the 2TB 850 EVO (pink) coming in a bit slower than its 1TB variant (purple), which had no issue sticking closer to full saturation of the SATA interface through the random write test.
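As a rough illustration of why misalignment hurts, the sketch below (a hypothetical helper, not part of YAPT) counts how many 4 KiB pages a single write touches. A misaligned write often straddles an extra page boundary, forcing the drive into a read-modify-write cycle on both pages:

```python
PAGE = 4096  # assumed 4 KiB flash page / Advanced Format sector size

def pages_touched(offset: int, length: int, page: int = PAGE) -> int:
    """Number of page-sized units a write at `offset` of `length` bytes spans."""
    first = offset // page
    last = (offset + length - 1) // page
    return last - first + 1

# An aligned 4 KiB write touches exactly one page...
print(pages_touched(0, 4096))    # 1
# ...but the same write at a misaligned offset straddles two pages,
# so the drive must read, modify, and rewrite both.
print(pages_touched(512, 4096))  # 2
```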