5100 MAX 960GB – Saturated (High-Res) QoS and PACED IOPS Sweep
Required reading (some additional context for those unfamiliar with our Percentile testing):
- Introduction of Latency Distribution / Latency Percentile (now called IO Percentile)
- Introduction of Latency Weighted Percentile (now called Latency Percentile – client use)
- PACED workload application to enterprise SSDs
Quality of Service (QoS)
QoS is specified in percentages (99.9%, 99.99%, 99.999%), each with its own shorthand ('three nines', 'four nines', 'five nines'). A given figure corresponds to the latency at or below which that 99.x% of all recorded IOs in a run completed. Enterprise IT managers and system builders care about increasing levels of 9's because the long-latency outliers beyond them can lead to timeouts in time-sensitive operations, and adding 9's is how they quantify more stringent QoS requirements. Note that these comparative results are derived from IO Percentile data and *not* from Latency Percentile data.
If you have a hard time wrapping your head around the 9's, it may be easier to flip things around and think in terms of the remaining longest-latency IOs that haven't yet been accounted for as the plot progresses. As an example, the 99.9% line near the center of the vertical axis represents the top 10% of the top 1% (0.1%) of all recorded IOs, where 'top' means those IOs with the longest latency.
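If you'd rather see that in code, here's a minimal Python sketch. The latencies are synthetic (a lognormal sample, *not* data from this review), but it shows that the 'nines' figures are just high percentiles of the recorded per-IO latencies, and that the 99.9% figure is set entirely by the slowest 0.1% of IOs:

```python
import numpy as np

# Synthetic latency sample (microseconds) standing in for a run's recorded
# IOs -- purely illustrative, not measurements from the 5100.
rng = np.random.default_rng(42)
latencies_us = rng.lognormal(mean=4.0, sigma=0.5, size=1_000_000)

# 'Three nines' through 'five nines': the latency at or below which
# 99.9% / 99.99% / 99.999% of all recorded IOs completed.
for nines in (99.9, 99.99, 99.999):
    print(f"{nines}%: {np.percentile(latencies_us, nines):.1f} us")

# The 'flip it around' view: the 99.9% QoS value is set by the slowest 0.1%
# of IOs, i.e. the top 10% of the top 1%.
worst_0p1_pct = np.sort(latencies_us)[-len(latencies_us) // 1000:]
print(f"slowest 0.1% starts at {worst_0p1_pct.min():.1f} us")  # ~= the 99.9% value
```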
These plots are tricky to make, as the vertical axis is effectively an inverse log scale. The first major increment up from the zero axis accounts for 90% of all IOs, and each increment after that accounts for 90% of what remains, meaning it's an asymptotic scale that will never reach 100%. The plots below essentially take the top portion of the IO Percentile results and spread them out, exponentially zooming in on the results as they approach 100%.
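For the curious, the transform behind such an axis is simple. This is my own sketch of the idea, not the actual charting code used for the plots below: map each percentile to its 'number of nines', and equal steps on the axis each swallow 90% of what remains.

```python
import numpy as np

def nines_axis(p):
    """Map a percentile p (in %) to a 'number of nines': 90 -> 1, 99 -> 2,
    99.9 -> 3, and so on. Each unit step up covers 90% of the IOs remaining
    above the previous step, which is why the scale never reaches 100%."""
    return -np.log10(1.0 - p / 100.0)

for p in (90, 99, 99.9, 99.99, 99.999):
    print(f"{p}% -> {nines_axis(p):.3f} nines")
```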
First a refresher of the stated specs:
…and now the results (X'd as appropriate – note these are for QD=1, the leftmost line):
For those having a hard time picking out where the crossover points lie, here's a different way of spelling things out:
Note that the 99.999 specs in this pair of charts sit above the top of the scale (*very* far above in the case of reads), meaning the 5100 falls well within spec.
PACED workloads – 'It's not how fast you go, it's how well you go fast!'
I'm expanding on the PACED workload application introduced in the Micron 9100 review. Instead of selecting just a few workloads, I've taken things a step further, sweeping the IOPS range and (with a lot of number crunching) plotting the resulting 9's at the various loads. In brief, what we want to see is how tail latency looks at more realistic levels of loading, compared to the saturated conditions manufacturers typically use when rating their parts:
As you can see, reduced load generally lowers latency and enables greater consistency. The catch for the 5100 appears to be that it is already so consistent that even the slightest disturbances during the test run can cause spikes in the curve. Bear in mind that the 5100 is *well* below its rated max latency at even the highest points in these charts.
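For those wanting to play with the concept themselves, here's a toy single-threaded pacer in Python. To be clear, this is *not* our test suite – paced_run and fake_io are hypothetical stand-ins I made up for illustration – but it captures the idea: issue IOs on a fixed schedule instead of flat-out, record per-IO latency, then compute the 9's at each offered load.

```python
import time
import numpy as np

def paced_run(do_io, target_iops, duration_s):
    """Issue IOs on a fixed schedule (rather than as fast as possible) and
    record per-IO latency. do_io is whatever performs one IO."""
    interval = 1.0 / target_iops
    deadline = time.perf_counter() + duration_s
    next_issue = time.perf_counter()
    lats = []
    while time.perf_counter() < deadline:
        # Sleep until the next slot so the offered load stays constant.
        # (OS timer granularity limits how fine this toy pacer can go.)
        delay = next_issue - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
        t0 = time.perf_counter()
        do_io()
        lats.append(time.perf_counter() - t0)
        next_issue += interval
    return np.asarray(lats)

def fake_io():
    """Stand-in for a real 4K random read against the device under test."""
    time.sleep(0.0001)

# Sweep the offered load and report a 'nines' value at each step, mirroring
# the shape of the charts above.
for iops in (100, 250, 500):
    lats = paced_run(fake_io, iops, duration_s=1.0)
    p999_us = np.percentile(lats, 99.9) * 1e6
    print(f"{iops} IOPS: 99.9% latency = {p999_us:.0f} us over {len(lats)} IOs")
```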
micron = elpida = ????
Samsung is still the best for memory.
Elpida was acquired by Micron recently.
FYI,
“Read on for our full review of the Micron 5100 MAX 960GB and 5100 ECO 1920GB Enterprise SATA SSDs!” link goes to a different article.
Whoops! I fixed it for Allyn.
These seem to be quite capable drives, but in Crucial/Micron and Intel’s consumer and prosumer drives there has been some degradation in terms of durability and performance.
I think both Intel and Crucial/Micron should have continued to sell their 16nm 2D MLC parts like the MX200 series, which is actually better than their MX300. The 600 series from Intel also sucks, and this trend is going to push people like me from MX200-class drives into enterprise drives, because I actually want SOME overprovisioning and quality.
N-RAM and STT-MRAM can't replace consumer-grade NAND fast enough for me.
Am I wrong, or are these IOPS numbers terrible??
What kind of shit review is this where the graphs don’t have any other drives for comparison? These numbers are completely arbitrary based on different testing conditions unless you give other drives for comparison.
“completely arbitrary based on different testing conditions”
Good thing in their methodology they try to keep the testing conditions completely the same, so you can see how the drive performs compared to its rated specs. Enterprise devices aren’t judged by “What is best?” but rather “What is best for us?”. A slight distinction, but important nonetheless.
A certain other website used to be the top dog of AllTime on enterprise hardware reviews, but if PCPer and Allyn keep up this kind of work, AnoTher website might see some pageviews stolen!