Latency Percentile and Conclusion
Latency Percentile
Now for the fun part. Latency Percentile testing was introduced in our 950 Pro review and has come in very handy for identifying how performance differences impact the 'feel' of a system. With all tested SSDs undergoing identical pre-conditioning, I ran all three through the same custom test sequence to extract Latency Percentile data at varying queue depths.
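To be clear about what these charts show, here is a minimal sketch (not the actual test tooling): each curve plots, for a given queue depth, the percentage of IOs in the run that completed at or below a given latency. Assuming the per-IO latencies have already been captured, the plotted points could be derived along these lines:

```python
# Illustrative sketch only: turning a list of per-IO completion latencies,
# captured at one fixed queue depth, into a cumulative Latency Percentile curve.
from typing import List, Tuple

def latency_percentile_curve(latencies_us: List[float]) -> List[Tuple[float, float]]:
    """Return (latency_us, percent_of_IOs_at_or_below) points, one per IO."""
    ordered = sorted(latencies_us)
    total = len(ordered)
    return [(lat, 100.0 * (i + 1) / total) for i, lat in enumerate(ordered)]

def percent_faster_than(latencies_us: List[float], threshold_us: float) -> float:
    """Percentage of IOs that completed in under threshold_us."""
    return 100.0 * sum(1 for lat in latencies_us if lat < threshold_us) / len(latencies_us)

# Example with made-up numbers: five 4K random writes at QD=1.
sample = [28.1, 29.4, 30.2, 31.0, 45.7]
print(latency_percentile_curve(sample))   # sweeps from 20% up to 100%
print(percent_faster_than(sample, 32.0))  # 80.0
```

Reading a curve is then simple: the further left it sits and the more vertical it stays, the faster and more consistently the drive responded.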
Writes:
First, let's look at the default zoom:
Apparently we're going to need to magnify things a bit before we begin to explain:
I had to spread the read charts out below due to excessive overlap between data sets, but I was able to get all of the writes on a single chart since the plotted data swept neatly from left to right.
- QD=1
  - The 850 EVO V2 (cyan) comes in first here.
  - The 850 EVO V1 (orange) comes in ~2us more latent on average than the V2, but matches the 850 PRO (gray) result almost identically.
- QD=2
  - The 850 EVO V2 (yellow) again leads here, this time by ~3us, which actually pushes its result close to the 850 PRO / 850 EVO V1 results at QD=1.
  - The 850 EVO V1 (blue) is actually slightly bested by the 850 PRO (green) at QD=2.
- QD=4
  - All three results are extremely close here, but we can see the 850 EVO V1 and 850 PRO both taper off sooner, while the 850 EVO V2 holds its latency near vertical all the way to the 99th percentile (where the V2 leads by 2.6us).
- QD=8
  - At this high a load, the 850 PRO finally takes the lead (by ~1us on average), with the 850 EVO V1 and V2 running very close together. That said, the IOPS (far right) of the EVO V2 remains very close to the PRO.
The takeaway here is that at low loads (typical for consumer use), the new 850 EVO V2 is not only faster to respond to random write IO than the 850 EVO V1, it also beats the 850 PRO by a healthy margin.
Reads:
For reads, the same 4K random workload was applied, but since the read latency distributions partially overlap at the tested queue depths, I've separated each QD into its own chart. Before getting into the data, I'll first explain that latencies shift to the right (longer) for reads as compared to writes. This is because an SSD can receive and acknowledge data from a host faster than it can respond to a read request. Responding to a read means going all the way to the flash (via the Flash Translation Layer), fetching the requested data, and transferring that data to the host. Writes are quicker from the host's perspective because the SSD simply receives the data (technically completing the IO) and figures out where to put it after the fact.
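To make that asymmetry concrete, here is a toy model of a QD=1 IO built from the components just described. Every component time below is an assumption chosen for illustration, not a measurement from these drives:

```python
# Toy model of the read-vs-write latency asymmetry described above.
# All component times are assumed values for illustration only.

NAND_PAGE_READ_US   = 50.0   # time for the flash die to sense and return a page (assumed)
FTL_LOOKUP_US       = 5.0    # logical-to-physical address lookup (assumed)
HOST_TRANSFER_US    = 5.0    # moving 4KB across the host interface (assumed)
COMMAND_OVERHEAD_US = 20.0   # protocol / controller handling per IO (assumed)

def qd1_read_latency_us() -> float:
    # A read must resolve the address, fetch from flash, then return the data.
    return COMMAND_OVERHEAD_US + FTL_LOOKUP_US + NAND_PAGE_READ_US + HOST_TRANSFER_US

def qd1_write_latency_us() -> float:
    # A write is acknowledged once the data lands in the controller's buffer;
    # the flash program happens after the IO is already complete from the
    # host's point of view.
    return COMMAND_OVERHEAD_US + HOST_TRANSFER_US

print(f"modeled QD=1 read  latency: {qd1_read_latency_us():.1f} us")
print(f"modeled QD=1 write latency: {qd1_write_latency_us():.1f} us")
```

Now onto the results: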
- QD=1
  - We see the familiar 'stepped' read latency profile of Samsung's controller / flash combination (seen at the bottom half of the page here). The profiles may be the same between both EVOs, but the new V2 is shifted neatly 8.7us to the left, which also makes it faster than the 850 PRO for 67% of all IOs in this test run.
- QD=2
  - The 850 EVO V2 and 850 PRO are still duking it out for first place at QD=2.
- QD=4
  - As demand rises, we see the once-vertical latency profiles start to slope and taper for all models, but the 850 EVO V2 continues to do well and keeps pace with the 850 PRO.
- QD=8
  - At this high a load, and just as we saw with writes, the 850 EVO V2 is finally outpaced by the 850 PRO, while the compounding latencies of the slower 850 EVO V1 have now pushed it out to 19us behind the pack (the gap only looks similar in size to those at lower queue depths because the latency axis is logarithmic; see the plotting sketch below).
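For reference, the charts above use a logarithmic latency axis. A rough sketch of how such a chart could be drawn follows (matplotlib assumed; the drive names and data points are placeholders, not results from this review):

```python
# Sketch of a Latency Percentile chart with a logarithmic latency axis.
import matplotlib.pyplot as plt

def plot_percentile_curves(curves: dict) -> None:
    """curves maps a label to a list of (latency_us, cumulative_percent) points."""
    for label, points in curves.items():
        lat, pct = zip(*points)
        plt.plot(lat, pct, label=label)
    plt.xscale("log")  # equal visual gaps now represent equal latency ratios
    plt.xlabel("latency (us)")
    plt.ylabel("percent of IOs completed")
    plt.legend()
    plt.show()

# Placeholder data purely to show the call shape.
plot_percentile_curves({
    "Drive A (placeholder)": [(30, 20), (31, 60), (33, 99), (80, 100)],
    "Drive B (placeholder)": [(32, 20), (34, 60), (37, 99), (95, 100)],
})
```

The practical effect of the log axis is that a constant horizontal gap between two curves means a constant latency ratio, so a 19us deficit at QD=8 can occupy roughly the same visual space as a much smaller absolute gap did at QD=1.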
Conclusion
We are happy to confirm that there is nothing to worry about with Samsung's mid-line swap of V-NAND in their 850 EVO line of SSDs. We are even happier to report that the new 48-layer TLC parts enable the 850 EVO to respond even faster to low queue depth IO requests than its 32-layer TLC equipped predecessor, and faster still than the 32-layer MLC equipped 850 PRO! Apart from the noted latency improvements, differences in other metrics, such as sequential data transfers and higher queue depth workloads, proved negligible in our testing of the 1TB capacity point. If and when Samsung updates their 850 PRO to a V2 / 48-layer V-NAND combination, we will be standing by to repeat this testing accordingly.
Does anyone know if the performance degradation issue that affected the 840 evo is present in the 850 evo?
Does anyone know when 64-layer TLC will be available for the 850 EVO?