Latency Percentile – Intro and Comparative Results
Intro
Our exclusive Latency Distribution / Latency Percentile testing was a long time in the making and was first introduced in my 950 Pro review (longer explanation at that link). To put it briefly, the greatest contributor to the 'feel' of storage device speed is latency. Simple average and maximum latencies don't paint nearly the full picture when it comes to the true performance of a given SSD. Stutters of only a few IOs out of the thousands delivered per second can simply be 'lost in the average', and this applies even if the average is plotted every second. The only true solution is to track the latency of each and every IO, which is no small feat when the fastest SSDs deliver potentially hundreds of thousands (or millions) of IOs.
Latency Distribution (V1)
Here the data has been converted into what is essentially a spectrum analyzer for IO latency. The more IOs taking place at lower latencies (towards the left of the 'spectrum'), the better. While it is handy for seeing exactly where latencies fall for a given device, this view is generally hard to read and digest, so the data is further translated into a percentile:
Latency Percentile – IO Weighted (V1)
For those unfamiliar with this plot, the ideal result is a vertical line as far to the left as possible. Real-world storage devices under load will tend to slant or slope, and some will 'turn' prior to hitting 100%, indicating that some of the IOs are taking longer (the point where the line curves back upwards indicates the latency of those remaining IOs).
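To make that translation concrete, here is a minimal sketch (in Python, and not our actual tooling) of how a latency distribution becomes an IO-weighted percentile curve. The bucket edges and IO counts below are invented purely for illustration:

```python
# Minimal sketch: turning a latency distribution (histogram) into an
# IO-weighted percentile curve. Bucket edges and IO counts are
# invented for illustration, not measured data.
bucket_edges_us = [25, 50, 100, 250, 1000, 10000]  # upper edge of each bucket (us)
io_counts       = [400_000, 550_000, 45_000, 4_000, 900, 100]

total_ios = sum(io_counts)
cumulative = 0
for edge, count in zip(bucket_edges_us, io_counts):
    cumulative += count
    # Percentage of all IOs completing at or below this bucket's latency
    print(f"<= {edge:>6} us: {100 * cumulative / total_ios:.4f}%")
```

Plotting latency on the X axis against that cumulative percentage on the Y axis yields the kind of percentile curve shown above.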
This new testing has come a long way since it was first introduced. The most recent and most significant change corrects a glaring issue common to all IO percentile plots, caused by a bad assumption similar to the one that comes with using averages. V1 Percentiles were calculated from the percentage of total IOs, in line with what the rest of the industry has settled on. You might have seen enterprise SSD ratings claiming 99.99th percentile latency figures (or some other variation, e.g. 99.9% / 99.999%). As an example, a 99.99 percentile rating of 6ms would mean that 99.99% of all IOs were <= 6ms.
There is a flaw inherent in the above rating method. Using the 99.99% <= 6ms example above, imagine an SSD that completely stalled for one second in the middle of a 6-second run. For the other five seconds of the test, it performed at 200k IOPS. The resulting data would reflect one million total IOs and (assuming QD=1) a single IO taking a full second. The average IOPS would still be a decent 167k, but that nasty stutter was diluted, effectively 'lost in the average'. The same goes for 99.99% ("four nines") latency, which would miss that single IO. Despite hanging the entire system for 17% of the run, that one-in-a-million IO would not get caught unless you calculated out to 99.9999% ("six nines"), which nobody rates for.
The industry has settled on calculating this way mainly out of necessity and the limits of latency measurement. Most tools employ a coarse bucket scheme, meaning 99.99% values must be interpolated. Fortunately, our data gathering technique gives us far greater resolution, meaning not only can we minimize interpolation, we can do something previously impossible. Getting away from IO-based percentages means we correct our IO Percentile results by summing not just the IOs, but the time those IOs took to complete. When calculated this way, our hypothetical example above would show low latency only up to the 83% mark (five of the six seconds), where its result would ride that 83% line all the way to the one-second mark on the plot. With these percentiles now based on total time and not the unweighted sum of the IOs, we can far more easily identify those intermittent stalls.
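Here is that correction sketched in code (again, not our production tooling), using the hypothetical stalling drive from above: one million IOs at 5 µs each (200k IOPS for five seconds at QD=1) plus a single one-second IO:

```python
# Contrast IO-weighted vs. time-weighted percentiles for the
# hypothetical drive above: a 6-second QD=1 run with one million
# 5 us IOs (200k IOPS for 5 seconds) plus a single 1-second stall.
latencies_us = sorted([5.0] * 1_000_000 + [1_000_000.0])

# IO-weighted "four nines": the latency at the 99.99% mark of the
# sorted IO count. The stalled IO is one-in-a-million, so it only
# appears past 99.9999% ("six nines").
idx = int(0.9999 * len(latencies_us)) - 1
print(f"IO-weighted 99.99th percentile: {latencies_us[idx]:.0f} us")  # 5 us

# Time-weighted: the fraction of total run time spent in IOs at or
# below a given latency. The 5 us IOs cover only 5 of the 6 seconds,
# so the curve rides ~83% out to the one-second mark.
total_time = sum(latencies_us)
fast_time = sum(l for l in latencies_us if l <= 5.0)
print(f"Time-weighted fraction at <= 5 us: {fast_time / total_time:.1%}")  # ~83.3%
```

Note how the four-nines figure sits at 5 µs despite the drive hanging for a sixth of the run, while the time-weighted view exposes the stall immediately.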
Latency Percentile – Time Weighted (V2)
I've created the above based on the new method but using the same source data as the earlier V1 plot. This data was based on reads, which typically don't suffer from the inconsistent latencies seen in SSD writes. Even with these more consistent results, we can see a difference in the plotted data. The RevoDrive 350 (red line) doesn't make it past 99% as quickly as it did in the V1 plot, and some of the faster SSDs taper off a bit earlier as well. The three HDDs also saw an impact, as longer seeks take up more of the total time of the run. If you're still not convinced of the relevance or importance of this new presentation method, I'll just leave this worst-case example here, comparing the older IO-weighted results to the newer time-weighted translation of that same distribution data:
Yes, this was a real SSD and not a hypothetical example, even though it does mirror my example above rather remarkably.
Latency Percentile – Comparative Results
The workload chosen for these tests consists of completely filling an SSD with sequential data, then applying 4k random writes to an 8GB span. This is not the same as 'full span writes', which is more of an enterprise workload. Instead, we are emulating more of a consumer type of workload, where only 8GB of the drive is randomly written (typically by system log files, registry, MFT, directory structure, etc.). The following is a random read Queue Depth sweep (1-32) of that same area, which tests how quickly the OS would retrieve those previously written directory structures and registry files.
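For those wanting a rough home approximation of this sweep, the following sketch drives fio from Python. Our reviews use in-house tooling, so fio is strictly a stand-in here; /dev/sdX is a placeholder, and the JSON field names assume a recent fio 3.x:

```python
# Approximating the random read QD sweep with fio, assuming the drive
# has already been sequentially filled and had an 8GB span randomly
# written (the conditioning described above). Reads are
# non-destructive, but double-check the device path before running.
import json
import subprocess

DEVICE = "/dev/sdX"  # placeholder: the drive under test

for qd in (1, 2, 4, 8, 16, 32):
    result = subprocess.run(
        ["fio", "--name=randread", f"--filename={DEVICE}",
         "--rw=randread", "--bs=4k", "--size=8g", "--direct=1",
         "--ioengine=libaio", f"--iodepth={qd}",
         "--runtime=60", "--time_based", "--output-format=json"],
        capture_output=True, text=True, check=True)
    read_stats = json.loads(result.stdout)["jobs"][0]["read"]
    print(f"QD={qd:2d}: {read_stats['iops']:.0f} IOPS, "
          f"mean latency {read_stats['lat_ns']['mean'] / 1000:.1f} us")
```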
Reads
I didn't have a lot of SATA percentile data handy for this review, and it does take some time to properly steady-state drives and collate the results, but I did include SSDs that also fall into the budget category. 750GB is certainly an 'odd' capacity, so I chose 500GB for the competing SSDs. Future reviews will have additional SATA results populated here.
At QD=1 reads, we noted higher latencies on the MX300, contributing to a halving of the observed IOPS when compared to the 750 EVO.
As we ramp up the queue depth, the MX300 turns in decent IOPS figures and starts to look a bit better on the latency front up to QD=16…
…but at QD=32, Samsung's products kick things into overdrive with respect to Latency Percentiles. All three of those competing products hit nearly 100k IOPS and do so with an extremely consistent latency profile. The MX300 fares well at nearly 70k IOPS, but it sees nearly 10% of its IO time spent on IOs taking longer than 1ms.
Writes
Now for the fun part. These are all caching SSDs, meaning that during the test run, some IOs go to SLC while others go to TLC. I am developing new (currently prototype) methods of applying these write workloads in a more paced manner, which will more closely emulate typical consumer intermittent IO workloads; a rough sketch of what pacing could look like appears below. For now, we have to go into these results with the understanding that the workload is a 100-second crop of a steady-state, sustained application of random 4k writes to an 8GB span. Lower queue depths naturally see lower demand, which means a greater chance that writes will go to SLC rather than TLC areas.
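As a rough illustration of what 'paced' could mean in practice, here is a hedged sketch using fio's thinktime knobs to issue bursts of writes separated by idle gaps. The actual prototype tooling is custom; fio and /dev/sdX are stand-ins, and this workload writes to the target device:

```python
# Sketch of a "paced" 4k random write workload: 64-IO bursts separated
# by 10 ms of idle time, approximated with fio's thinktime options.
# The idle gaps give the SLC cache a chance to flush to TLC, unlike a
# sustained saturating workload. WARNING: this writes to DEVICE.
import subprocess

DEVICE = "/dev/sdX"  # placeholder: the drive under test

subprocess.run(
    ["fio", "--name=paced-randwrite", f"--filename={DEVICE}",
     "--rw=randwrite", "--bs=4k", "--size=8g", "--direct=1",
     "--ioengine=libaio", "--iodepth=1",
     "--thinktime=10000",      # idle 10,000 us (10 ms) after each burst...
     "--thinktime_blocks=64",  # ...of 64 IOs
     "--runtime=100", "--time_based"],
    check=True)
```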
Starting at QD=1, we see the MX300 perform decently, nearly matching the older 840 EVO.
At QD=2 things start to spread out a bit. It's a close race between the MX300 and the new 500GB 750 EVO here, but the MX300 wins out on total IOPS.
The MX300 holds the 750 EVO at bay all the way to QD=32, where it turns in a respectable 53k IOPS. While the numbers are certainly good, there is just no catching the 850 EVO, which is also a 3D NAND (VNAND) SSD, but more expensive than both the MX300 and 750 EVO.
Comments
Nice price, but too small. 🙁
You're not going to be seeing really high capacities coming to these budget drives for a while. Most of the industry is stuck at 1TB, with only high-end drives like the 850 Pro going further. Once 480GB drops to current 250GB prices, then I would imagine 2TB and higher drives coming to take the top spot.
Why can’t somebody make a cheap 2tb SSD?!!!
Because it’s a waste of money to store pr0n movies… 😉
Actually, if you're a PC gamer then you want 2TB, otherwise you'll have to uninstall and reinstall games to make room. 1TB is not enough, especially with newer games that take up a lot of space.
Even PC gamers are wasting their money loading huge games faster from an SSD, with no performance gain in-game. 😉
You'd be better off using an HDD for massive storage like games or movies. If cost doesn't matter, you can buy any SSD capacity or quantity you like.
These charts that you are using are absolutely horrible to read. For example, in that HDTach chart, the reads and writes should be separated and then the results should be ranked from best to worst. Actually… any results chart you have should be ranked from best to worst. That way you can easily see how things compare. Those YAPT charts… tell me nothing. I can't tell the difference between any of them, really. And have you guys ever stopped to think for a moment about someone who is color blind? Trying to find which line goes to what would be impossible. Even without the color issue it's still pretty bad.
Been meaning to say this stuff for a while. I honestly have had to skip right to your conclusions in your product reviews because the data charts are just god-awful. Please make this info easier to read.
Most of that testing is being phased out. If you can't tell the difference in YAPT, that means all SSDs are saturating SATA (a good thing). There is really nothing we can do about colors in charts, unfortunately. Regarding sort order, keeping the subject of the article at the top keeps it more easily identifiable. People have to dig to find the new product in sorted charts, and coloring it differently clashes with your colorblind comment…
You don't have to keep it at the top. If you sort the chart and highlight the new product, then it's easy to see. Guru3D has been doing their charts this way for years. It also makes it much easier to see how product X compares to product Y. The unsorted charts are a mess, and it's very hard to see how products compare.
$200 for 750GB, does that mean $150 on sale? I've been seeing lots of 960GB drives on sale for $180-$200 lately.
Allyn, why has there been no new SATA standard? NVMe interfaces don't seem to be the greatest, and M.2 is great, but what if I want 2-3 drives?
I don't like RAID because I like to be able to break up the band without a big headache, or risk screwing up my array when I flash a new BIOS.
There are motherboards with 3 M.2 slots, and they don't have to be in a RAID to be used. Additional M.2 devices can also be installed via adapter cards, etc. Sadly, PCIe is the way things are moving, and SATA 6Gbit may be the last iteration of it that we see in the wild.
I'd like to see USB 3.1 / Type-C replace SATA. It's faster, delivers power and data over the same compact connector, and can do PCIe x4 in alt mode.
And you don't need drive cases. Just plug the bare drive straight into any available port.
That would be a cool idea; I'm not a fan of external boxes all over the place. I still like the tower case with plenty of expansion ports/drive bays. Yes, I'm still rocking the optical. I saw a post somewhere where the guy complained that he was installing an OS and it needed drivers for his USB ports, but he had his drivers on a USB drive. Out of luck, I guess.
I wonder why there's no faster SATA; can't they come up with a better-shielded cable and up the clocks?
I’ve read elsewhere on forums covering this release that the MX300 “Appears to be underwhelming”. Another response stated, “Underwhelming is being polite”.
Could this be because they analyzed the product before your aforementioned fixes were implemented?
What’s the deal with the controller? Does Micron potentially have a better one around the corner and that is why they are classifying the MX300 as ‘Limited Edition’ so as not to hamper current sales?
All input is appreciated. Thanks for your great article 🙂
What I'd really like to know is if IMFT's Floating Gate Technology on their 3D NAND is really going to materially increase performance, or if it is just reducing power consumption and improving efficiency, as has been the case with DRAM every year.
Will Micron's 3D NAND products blow planar NAND out of the water, or will they just strongly compete?
Thanks,
Max (23 yr old investor)
I think you should draw the line for the product under review above all the other lines on the graphs, for better visibility.
Personally, I would like someone to do a video or an article dedicated to this new DWA technology.
The available information out there is very limited, and may not be that reliable.
If possible, please explain; my interest in this new technology is quite high, since I've never heard of it.
And also, what's up with the quad-plane thing?
I hope someone explains it.
Thanks
It was linked in the article:
https://pcper.com/reviews/Storage/Micron-M600-SSD-Review-Digging-Dynamic-Write-Acceleration/Dynamic-Write-Acceleration
CLEVER MARKETING
750 GB means there’s no direct competitor.
This should really be the replacement for the BX200 (which deserves LEMON status), not the MX200. That would hopefully be replaced with 3D MLC…
It will be interesting to see what follows. I have a feeling speeds will tank on smaller drives. It would be interesting to see this size with an 8-channel controller…
For anyone in the UK, Amazon UK has this on sale for £109 in the current Prime Sale.
Thanks for the detailed analysis and also the very good quality pictures.
Regards.
They gave it an Editor's Choice at $200. What about at $100? Well, that's what it's going for on Amazon right now (11-24-16 @ 4:00 AM EST). That's $0.13/GB!!!!
Yeah, same; on Amazon UK at £105.99 for today. Seems like a good deal to me 🙂