PC Perspective Custom Test Suite Introduction
A New Test Suite
We have implemented a radically new test methodology. I'd grown tired of making excuses for benchmarks not meshing well with some SSD controllers, a problem amplified significantly by recent SLC+TLC hybrid SSDs that can be very picky about their workloads and how they are applied. The complexity of these caching methods has effectively flipped the SSD testing ecosystem on its head. The vast majority of benchmarking software and test methodologies out there were developed around non-hybrid SLC, MLC, or TLC SSDs. All of those types were very consistent once a given workload was applied to them for long enough to reach a steady-state condition. Once an SSD was properly prepared for testing, it would give you the same results all day long. Not so for these new hybrids. The dynamic nature of the various caching mechanisms at play wreaks havoc on modern tests. Even trace playback testing such as PCMark falters, as the playback of traces is typically done with idle gaps truncated to a smaller figure in the interest of accelerating the test. Caching SSDs rely on those same idle gaps to flush their cache to higher capacity areas of their NAND. This mismatch has resulted in products like the Intel SSD 600p, which bombed nearly all of the ‘legacy’ benchmarks yet did just fine once tested with a more realistic, spaced-out workload.
To solve this, I needed a way to issue IOs to the SSD the same way that real-world scenarios do, and to do so without saturating the cache of hybrid SSDs. The answer, as it turned out, was staring me in the face.
Latency Percentile made its debut a year ago (ironically, with the 950 PRO review), and those results have proven to be a gold mine that continues to yield nuggets as we mine the data even further. Weighting the results allowed us to better visualize and demonstrate stutter performance even when those stutters were small enough to be lost in more common tests that employ 1-second averages. Merged with a steady pacing of the IO stream, it can provide true Quality of Service comparisons between competing enterprise SSDs, as well as high-resolution industry-standard QoS of saturated workloads. Sub-second IO burst throughput rates of simultaneous mixed workloads can be determined by additional number crunching. It is this last part that is the key to the new test methodology.
The primary goal of this new test suite is to get the most accurate sampling of real-world SSD performance possible. This meant evaluating across more dimensions than any modern benchmark is capable of. Several thousand sample points are obtained, spanning various read/write mixes, queue depths, and even varying amounts of additional data stored on the SSD. To better quantify real-world performance of SSDs employing an SLC cache, many of the samples are obtained with a new method of intermittently bursting IO requests. Each of those thousands of samples is accompanied by per-IO latency distribution data, and a Latency Percentile is calculated (for those counting, we’re up to millions of data points now). The Latency Percentiles are in turn used to derive the true instantaneous throughput and/or IOPS for each respective data point. The bursts are repeated multiple times per sample, but each completes in less than a second, so even the per-second logging employed by some of the finer review sites out there just won’t cut it.
Would you like some data with your data? Believe it or not, this is a portion of an intermediate calculation step – the Latency Percentile data has already been significantly reduced by this stage.
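For those curious how that reduction can work in practice, here is a minimal sketch, assuming a made-up burst of 4KB reads at QD=1, of collapsing per-IO latencies into a percentile curve and then into an instantaneous IOPS/throughput estimate via Little's Law (IOPS ≈ QD / mean latency). This is illustrative Python, not our actual tooling, and the gamma-distributed latencies are invented.

```python
import numpy as np

def latency_percentiles(latencies_us, points=(50, 90, 99, 99.9, 99.99)):
    """Reduce raw per-IO latencies (in microseconds) to a percentile curve."""
    return {p: float(np.percentile(latencies_us, p)) for p in points}

def instantaneous_iops(latencies_us, queue_depth):
    """Estimate burst IOPS from the latency distribution via Little's Law."""
    mean_latency_s = float(np.mean(latencies_us)) * 1e-6
    return queue_depth / mean_latency_s

# Invented burst: 10,000 4KB reads at QD=1, ~100 us typical latency.
rng = np.random.default_rng(0)
burst_latencies = rng.gamma(shape=4.0, scale=25.0, size=10_000)  # microseconds

iops = instantaneous_iops(burst_latencies, queue_depth=1)
print(latency_percentiles(burst_latencies))
print(f"~{iops:,.0f} IOPS, ~{iops * 4096 / 1e6:,.1f} MB/s for this burst")
```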
Each of the many additional dimensions of data obtained by the suite is tempered by a weighting system. Analyzing trace captures of live systems revealed *very* low Queue Depth (QD) under even the most demanding power-user scenarios, which means some of these more realistic values are not going to turn in the same high queue depth ‘max’ figures seen in saturation testing. I’ve looked all over, and nothing outside of benchmarks maxes out the queue. Ever. The vast majority of applications never exceed QD=1, and most are not even capable of multi-threaded disk IO. Games typically allocate a single thread for background level loads. For the vast majority of scenarios, the only way to exceed QD=1 is to have multiple applications hitting the disk at the same time, but even then it is less likely that those multiple processes will be completely saturating a read or write thread simultaneously, meaning the SSD is *still* not exceeding QD=1 most of the time. I pushed a slower SATA SSD relatively hard, launching multiple apps simultaneously, trying downloads while launching large games, etc. IO trace captures performed during these operations revealed >98% of all disk IO falling within QD=4, with the majority at QD=1. Results from the new suite will contain a section showing a simple set of results that should very closely match the true real-world performance of the tested devices.
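As a simplified illustration of how such a weighting might be applied (the per-QD throughput figures and the queue-depth residency below are invented, not taken from our traces):

```python
# Hypothetical per-QD sequential read results (MB/s) from a saturation sweep.
measured_mbps = {1: 310, 2: 405, 4: 480, 8: 520, 16: 540, 32: 545}

# Hypothetical queue-depth residency from a trace capture
# (>98% of IO within QD=4, the majority at QD=1).
qd_weight = {1: 0.80, 2: 0.12, 4: 0.06, 8: 0.015, 16: 0.004, 32: 0.001}

weighted = sum(measured_mbps[qd] * w for qd, w in qd_weight.items())
print(f"QD-weighted real-world estimate: {weighted:.0f} MB/s "
      f"(vs. {measured_mbps[32]} MB/s at saturation)")
```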
While the above pertains to random accesses, bulk file copies are a different story. To increase throughput, file copy routines typically employ some form of threaded buffering, but it's not the type of buffering that you might think. I've observed copy operations running at QD=8 or in some cases QD=16 to a slower destination drive. The catch is that instead of running at a constant 8 or 16 simultaneous IOs as you would see with a saturation benchmark, the operations repeatedly fill and empty the queue: the queue is filled, allowed to empty, and only then filled again. A saturation benchmark, by contrast, constantly adds requests to hold the maximum specified depth. The resulting speeds are therefore not what you would see at a sustained QD=8, but rather a mixture of all of the queue steps from one to eight.
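A rough back-of-the-envelope model, using invented IOPS figures and my own simplifying assumption that one IO completes at each depth as the queue drains, shows why that blend lands well below the sustained QD=8 number:

```python
# Invented per-depth IOPS for a hypothetical drive.
iops_at_depth = {1: 9_000, 2: 15_000, 3: 20_000, 4: 24_000,
                 5: 27_000, 6: 29_000, 7: 30_500, 8: 31_500}

# Fill the queue to 8, let it drain: roughly one completion at each depth.
burst_ios = len(iops_at_depth)
burst_time = sum(1.0 / iops for iops in iops_at_depth.values())
effective_iops = burst_ios / burst_time

print(f"Effective fill-and-drain rate: {effective_iops:,.0f} IOPS "
      f"(vs. {iops_at_depth[8]:,} IOPS at a sustained QD=8)")
```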
Conditioning
Some manufacturers achieve unrealistic ‘max IOPS’ figures by running tests that place a small file on an otherwise empty drive, essentially testing in what is referred to as fresh out of box (FOB) condition. This is entirely unrealistic, as even the relatively small number of files placed during an OS install is enough to drop performance considerably from the high figures seen in a FOB test.
On the flip side, when it comes to 4KB random tests, I disagree with tests that apply a random workload across the full span of the SSD. This is an enterprise-only workload that will never be seen in any sort of realistic client scenario. Even the heaviest power users are not going to hit every square inch of an SSD with random writes, and if they are, they should be investing in a datacenter SSD that is purpose built for such a workload.
Calculation step showing full sweep of data taken at multiple amounts of fill.
So what’s the fairest preconditioning and testing scenario? I’ve spent the past several months working on that, and the conclusion I came to ended up matching Intel’s recommended client SSD conditioning pass, which is to completely fill the SSD sequentially, with the exception of an 8GB portion of the SSD meant solely for random access conditioning and tests. I add a bit of realism here by leaving ~16GB of space unallocated (even those with a full SSD will have *some* free space, after all). The randomly conditioned section only ever sees random access, and the sequential section only ever sees sequential. This parallels the majority of real-world access. Registry hives, file tables, and other such areas typically see small random writes and small random reads. It’s fair to say that a given OS install ends up with ~8GB of such data. There are corner cases where files are randomly written and later sequentially read. BitTorrent is one example, but since those files are only laid down randomly on their first pass, background garbage collection should clean them up so that read performance will gradually shift toward sequential over time. Further, those writes are not as random as the more difficult workloads selected for our testing. I don't just fill the whole thing up right away though – I pause a few times along the way and resample *everything*, as you can see above.
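To make the layout concrete, here is a small sketch of how that conditioning scheme could be carved up on a hypothetical 512GB-class drive; the capacity, the placement of the 8GB random region at the start of the drive, and the exact free-space figure are assumptions for illustration only:

```python
GiB = 1024**3
capacity      = 476 * GiB   # usable space on a hypothetical "512GB" drive
random_region = 8 * GiB     # only ever sees random conditioning/tests
free_space    = 16 * GiB    # left unallocated for realism
sequential    = capacity - random_region - free_space

# Placing the random region at the start of the drive is an arbitrary
# choice for this example.
layout = {
    "random region":     (0, random_region),
    "sequential region": (random_region, random_region + sequential),
    "unallocated":       (random_region + sequential, capacity),
}
for name, (start, end) in layout.items():
    print(f"{name:18s} {start / GiB:7.1f} GiB -> {end / GiB:7.1f} GiB")
```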
Comparison of Saturated vs. Burst workloads applied to the Intel 600p. Note the write speeds match the rated speed of 560 MB/s when employing the Burst workload.
SSDs employing relatively slower TLC flash coupled with a faster SLC cache present problems for testing. Prolonged saturation tests that attempt to push the drive at full speeds for more than a few seconds will quickly fill the cache and result in some odd behavior depending on the cache implementation. Some SSDs pass all writes directly to the SLC even if that cache is full, resulting in a stuttery game of musical chairs as the controller scrambles, flushing SLC to TLC while still trying to accept additional writes from the host system. More refined implementations can put the cache on hold once full and simply shift incoming writes directly to the TLC. Some more complicated methods throw all of that away and dynamically change the modes of empty flash blocks or pages to whichever mode they deem appropriate. This method looks good on paper, but we’ve frequently seen it falter under heavier writes, where SLC areas must be cleared so those blocks can be flipped over to the higher capacity (yet slower) TLC mode. The new suite and Burst workloads give these SSDs adequate idle time to empty their cache, just as they would have in a typical system.
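For reference, the core of a burst-style writer is not complicated. The sketch below is a simplified stand-in rather than the suite's actual workload generator: it writes a short burst, forces it to the device, and then idles so a caching SSD has a chance to flush SLC to TLC before the next burst. The burst size, block size, idle time, and target path are all hypothetical.

```python
import os
import time

def burst_write(path, burst_mb=256, block_kb=128, idle_s=10, bursts=4):
    """Write short bursts separated by idle time, reporting per-burst speed."""
    block = os.urandom(block_kb * 1024)
    with open(path, "wb", buffering=0) as f:
        for n in range(bursts):
            start = time.perf_counter()
            for _ in range((burst_mb * 1024) // block_kb):
                f.write(block)
            os.fsync(f.fileno())      # force the burst out to the device
            elapsed = time.perf_counter() - start
            print(f"burst {n}: {burst_mb / elapsed:.0f} MB/s")
            time.sleep(idle_s)        # idle so the cache can flush

# Example usage on a scratch drive:
# burst_write("testfile.bin")
```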
Apologies for the wall of text. Now onto the show!
I wonder how much cost it saves by moving to denser NAND and a smaller PCB footprint? Seeing the performance, it looks to me like this was more of a shrink than an improvement.
Agreed. It's as if they were trying too hard to make the SSD as economical as possible to produce, causing it to fall short in some areas.
It is a SATA SSD; they can’t improve the performance much until the SATA bottleneck is lifted on the host side. Perhaps SATA IV is in order, but I believe that will never happen. The 850 series already maxed out the SATA bus, so I'm not exactly sure what performance improvements you would like to magically see Samsung make. They already implemented improvements via M.2 PCIe SSDs. If you want faster than SATA, you have to move on from SATA. Simple as that.
I agree: it’s as if the storage “oligopoly” has conspired to maintain an artificially low ceiling on 2.5″ SSD speeds. Several years ago, we proposed a “SATA-IV” standard that upped the transmission clock to 8G (like PCIe 3.0 lanes) and changed the 8b/10b legacy frame to the 128b/130b “jumbo frame” that is already standard in PCIe 3.0: 8 GHz / 8.125 bits per byte = 984.6 MB/second, i.e. the exact same throughput as a single PCIe 3.0 lane. Admittedly, that is not a massive increase; nevertheless, one could easily approximate one NVMe port with four such SSDs in a RAID-0 array, and the wiring topologies for such a RAID array are ubiquitous.
FYI: here’s a copy of our SATA-IV Proposal to the Storage Developer Conference in 2012:
http://supremelaw.org/patents/BayRAMFive/SATA-IV.Presentation.pdf
And, now that the PCIe 4.0 standard has been released, a future SATA-IV standard should support a 16 GHz clock: 16G / 8.125 bits per byte = 1,969.2 MB/second. Thus, 4 such SSDs in a RAID-0 array should max out at ~7.87 GB/second (no overhead). Yes, the SATA protocol does have more inherent overhead, but its installed base is already HUGE. Increasing the clock rate and upgrading to jumbo frames should be a piece o’ cake for storage industry manufacturers. And, RAID controllers could still support PCIe 3.0 edge connectors while increasing the clock speed on their SATA connectors to 16 GHz. Maybe Allyn could offer this suggestion to Areca?
I doubt there will be another SATA spec for SSDs. SSDs will move to PCIe, and SATA will be for slower bulk storage.
Allyn, I think the last TRIM chart may have the wrong x-axis label; not sure, I got confused there.
You are correct! Thanks for the catch. It is now fixed.
Hopefully the price of the 850s will go down rather than the line being discontinued.
Allyn, pop quiz of the day.
I have 3 256GB 850 PROs in RAID 1 on my boot drive (I have no sensitive data on the RAID). I have all my programs/games on this “drive”. I am approaching 200GB of free space left. As you know, with today's games that could be 4 new AAA titles. I have toyed with the idea of getting a single 500GB drive for Windows and all apps, leaving my 7xxGB RAID for Steam only. Is there any benefit to doing that with one of these drives, or should I just snatch up another 256GB 850 PRO and increase my RAID?
***I am on Z97 so an NVMe boot drive isn’t possible.
RAID 1 with three drives?
Yea… I mean RAID 0. Got ahead of myself last night.
So long as you are good at backing up, I'd just add another 256 to the RAID-0. With a smaller stripe size, you will see a nice boost to QD1 sequentials that are larger than the stripe size (since those transfers are split across multiple drives). You're good for up to 6 SSDs in RAID on that board. While you'll hit the DMI throughput limit at ~4 SATA devices, there are still advantages to splitting your IOs across additional SSDs – even when they are bottlenecked.
I appreciate it. It all started when I found a smoking deal years ago on two 256GB drives, and it has grown from there. I guess another 256GB 850 PRO is on the horizon. The more the merrier, right!?
I have a FreeNAS box for redundancy and a WD Blue 2TB drive in my system as well for the important stuff. I am pretty sure my ISP hates it when I re-install Windows yearly and re-download my whole Steam library though. 😀
You have multiple drives and you redownload your Steam library upon Windows installation? That’s just irresponsible.
On page “https://www.pcper.com/reviews/Storage/Samsung-860-EVO-and-PRO-SATA-SSD-Review-512GB-1TB-and-4TB-Tested/Performance-Focus-0” at the end of the page you guys wrote “Being a PRO series SSD, the 2TB unit contains only MLC flash and no SLC cache.”.
The one being tested on that page is the 4TB version, there is no 2TB version being tested/shown.
Thanks! Fixed!
So there will be 4TB M.2 variants? How long until we see a 4TB PCIe M.2 from Samsung?
It’s down to PCB space. Not enough room on M.2. Less room on mSATA.
I was wondering, with an older platform such as dual-socket Xeon 2011 (v1): if there were M.2 versions of these, would they function via an adapter?
I guess it is also a general question about whether NVMe SSDs or even Optane work on older platforms, or should I consider it upgrade time to Threadripper/Ryzen?
Granted, M.2 versions of these drives are not NVMe, but a chance to remove cables would be a nice positive. Hence the question stands for PCIe-to-M.2 adapters on older platforms for SATA/NVMe/Optane.
You’d have to use an M.2 to SATA (not PCIe) adapter card.
The link at your article’s outset concerning the 850 line’s “silent migration to 64-layer V-NAND” actually links to your piece detailing the 850 EVO’s transition from 32- to 48-layer NAND. If a story exists about the switch to 64-layer NAND, I must have missed it.
In a similar vein, do you know if anyone has done testing to compare the 48-layer MLC/TLC versions of the 850 series drives to their 64-layer replacements?
It would be interesting to find out if the mixed workload performance and TRIM issues exhibited by the newly released 860 series were also present on the third revision of the 850 series, possibly indicating a limitation of the denser NAND rather than a bug in the new controller or firmware.
Hey, just to make sure: you did all the benchmarks in an identical environment and there was no possible microcode/Windows update for Meltdown/Spectre in between the benchmarks for the old and new SSDs, right?
Hi… Can the 860 PRO support RAID?
It’s been 8 months since the review; do you know if the TRIM issue has been fixed?
I’m deciding between the 850 and the 860 currently.
Allyn, have they fixed the TRIM issue?