Performance Comparisons – Mixed Burst
These are the Mixed Burst results introduced in the Samsung 850 EVO 4TB Review. Some tweaks have been made: QD has been reduced to a more realistic value of 2, and read bursts have been increased to 400MB each (the methodology description below retains the original 200MB / QD 4 figures). 'Download' speed remains unchanged.
In an attempt to better represent the true performance of hybrid (SLC+TLC) SSDs, and to include some general trace-style testing, I’m trying out a new test methodology. First, all tested SSDs are sequentially filled to near maximum capacity. Then the first 8GB span is preconditioned with a 4KB random workload, resulting in the condition called for in many of Intel’s client SSD testing guides. The idea is that most of the data on an SSD is sequential in nature (installed applications, MP3s, videos, etc.), while some portions of the SSD have been written to in a random fashion (MFT, directory structure, log file updates, other randomly written files, etc.). The 8GB figure is reasonably practical, since 4KB random writes across the whole drive is not a workload that client SSDs are optimized for (that is reserved for enterprise). We may try larger spans in the future, but for now, we’re sticking with the 8GB random write area.
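To make the precondition concrete, here is a minimal Python sketch of the two passes (the file path and fill size are placeholders, and the actual testing uses purpose-built tooling issuing direct I/O against the raw device):

```python
import os
import random

PATH = "precondition_demo.bin"   # placeholder target; real runs hit the raw device
FILL_SIZE = 16 * 2**30           # "near max capacity" -- sized to the drive in practice
SPAN = 8 * 2**30                 # first 8GB span gets the random precondition
BLOCK = 4096                     # 4KB random writes

chunk = os.urandom(2**20)        # 1MB of incompressible data

# Pass 1: sequential fill to near maximum capacity
with open(PATH, "wb") as f:
    for _ in range(FILL_SIZE // len(chunk)):
        f.write(chunk)

# Pass 2: 4KB random writes scattered across the first 8GB span
with open(PATH, "r+b") as f:
    for _ in range(SPAN // BLOCK):
        f.seek(random.randrange(SPAN // BLOCK) * BLOCK)
        f.write(chunk[:BLOCK])
```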
With that precondition in place as our base, we needed a workload. I wanted to start with some background activity, so I captured a BitTorrent download:
This download was over a saturated 300 Mbit link. While the average download speed was reported as 30 MB/s, the application’s own internal caching meant the writes to disk were ‘burstier’ in nature. Since we want this workload to give SLC+TLC (caching) SSDs some time to unload their cache between write bursts, I settled on a simple pattern of 40 MB written every 2 seconds (a 20 MB/s average). These accesses are more random than sequential, so we will apply them to the designated 8GB span of our preconditioned SSD.
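In sketch form, the background writer looks something like this (the 64KB issue size within each burst is an assumption for illustration):

```python
import os
import random
import time

SPAN = 8 * 2**30      # the preconditioned 8GB region
BURST = 40 * 2**20    # 40MB written per burst
PERIOD = 2.0          # seconds between burst starts (20 MB/s average)
CHUNK = 64 * 2**10    # issue size within a burst (assumed)

def download_writer(f, duration_s=60.0):
    """Write 40MB bursts every 2 seconds, placed randomly within the 8GB span."""
    data = os.urandom(CHUNK)
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        start = time.monotonic()
        for _ in range(BURST // CHUNK):
            f.seek(random.randrange(SPAN // CHUNK) * CHUNK)
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # push the burst all the way to the device
        # idle out the rest of the period so caching SSDs can fold their SLC cache
        time.sleep(max(0.0, PERIOD - (time.monotonic() - start)))
```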
Now for the more important part. Since the above ‘download’ workload is a background task that would likely go unnoticed by the user, what we also need is a workload that the user *would* be sensitive to. The times when someone really notices their SSD’s speed are when they are waiting for it to complete a task, and the most common tasks are application and game/level loads. I observed a round of different tasks and came to a 200MB figure for the typical amount of data requested when launching a modern application. Larger games can pull in as much as 2GB (or more), varying with game and level, so we will repeat the 200MB request 10 times during the recorded portion of the run. We will assume 64KB sequential accesses for this portion of the workload.
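Continuing the sketch, each 'application launch' becomes a burst of 64KB sequential reads (synchronous reads here stand in for the real test's queued IOs, and the offset would point somewhere in the sequentially filled area):

```python
import time

BURST_READ = 200 * 2**20   # per launch (400MB in this review's tweaked runs)
IO_SIZE = 64 * 2**10       # 64KB sequential requests

def app_launch_read(f, offset):
    """Issue one 'application launch' read burst; returns its service time in seconds."""
    f.seek(offset)
    t0 = time.monotonic()
    for _ in range(BURST_READ // IO_SIZE):
        f.read(IO_SIZE)
    return time.monotonic() - t0
```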
Assuming a max Queue Depth of 4 (reasonable for typical desktop apps), we end up with something that looks like this when applied to a couple of SSDs:
The OCZ Trion 150 (left) is able to keep up with the writes (dashed line) throughout the 60 seconds pictured, but note that the read requests occasionally catch it off guard. Apparently, if some SSDs are busy with a relatively small stream of incoming writes, read performance can suffer, which is exactly the sort of thing we are looking for here.
When we apply the same workload to the 4TB 850 EVO (right), we see an extremely consistent and speedy response to all IOs, regardless of whether they are writes or reads. The 200MB read bursts complete so quickly that they all occur within the same second, and none of them spill over due to delays caused by the simultaneous writes.
Now that we have a reasonably practical workload, let’s see what happens when we run it on a small batch of SSDs:
From our Latency Percentile data, we are able to derive the total service time for both reads and writes, and independently show the throughput seen for each. Remember that these workloads are applied simultaneously, so as to simulate launching apps or games during a 20 MB/s download. The above figures are not simple averages: they represent only the speed *during* each burst. Idle time is not counted.
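The arithmetic behind those figures is simple once the per-burst records are in hand. A sketch with hypothetical numbers (the real values come from the Latency Percentile data):

```python
def burst_metrics(bursts):
    """bursts: list of (bytes_transferred, service_time_seconds), one per burst."""
    total_bytes = sum(b for b, _ in bursts)
    busy = sum(t for _, t in bursts)     # idle gaps between bursts are excluded
    return total_bytes / busy, busy      # (throughput during bursts, total service time)

# e.g. ten 400MB read bursts at ~0.9s apiece (made-up figures):
throughput, total_wait = burst_metrics([(400 * 2**20, 0.9)] * 10)
```

That same summed service time is what the read-focused chart further down plots.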
The focus here is on the read speeds, since the write speeds only matter insofar as they are fast enough to keep up with the demand (they all are). The MX500’s dynamic caching helps it deliver some of the fastest write throughput of all drives tested. Reads all remain within a relatively tight grouping, but we do note that, while under a write load, the older 850 PRO fared better than both 860 PRO capacities tested. The same was witnessed with the EVO, though it was masked slightly because the 850 EVO saw slightly lower performance at the 4TB capacity point included in the results.
Now we are going to focus only on reads and present some different data. I’ve added up the total service time seen during the 10x 400MB reads that take place during the recorded portion of the test. These figures represent how long you would be sitting there waiting for 4GB of data to be read, but remember this is happening while a download (or another similar background task) is simultaneously writing to the SSD. This metric should closely equate to the 'feel' of using each SSD under a moderate to heavy load.
Again, tight grouping from all products, but the 850 PRO 512 took the crown here – both it and the 850 EVO 1TB came in several seconds faster than the newer 860 models in this mixed workload test scenario.
I wonder how much cost is saved by moving to denser NAND and a smaller PCB footprint? Seeing the performance, it looks to me like this was more of a shrink than an improvement.
Agreed. It's as if they were trying too hard to make the SSD as economical as possible to produce, causing it to fall short in some areas.
It is a SATA SSD; they can’t improve the performance much until the SATA bottleneck is lifted on the host side. Perhaps SATA IV is in order, but I believe that will never happen. The 850 series already maxed out the SATA bus, so I’m not exactly sure what performance improvements you expect Samsung to magically deliver. They already implemented improvements via M.2 PCIe SSDs. If you want faster than SATA, you have to move on from SATA. Simple as that.
I agree: it’s as if the storage “oligopoly” has conspired to maintain an artificially low ceiling on 2.5″ SSD speeds. Several years ago, we proposed a “SATA-IV” standard that upped the transmission clock to 8G (like PCIe 3.0 lanes) and changed the 8b/10b legacy frame to the 128b/130b “jumbo frame” that is already standard in PCIe 3.0:

8 GHz / 8.125 bits per byte = 984.6 MB/second

i.e. the exact same throughput as a single PCIe 3.0 lane. Admittedly, that is not a massive increase; nevertheless, one could easily approximate one NVMe port with four such SSDs in a RAID-0 array, and the wiring topologies for such a RAID array are ubiquitous.

FYI: here’s a copy of our SATA-IV Proposal to the Storage Developer Conference in 2012: http://supremelaw.org/patents/BayRAMFive/SATA-IV.Presentation.pdf
And, now that the PCIe 4.0 standard has been released, a future SATA-IV standard should support a 16 GHz clock:

16 GHz / 8.125 bits per byte = 1,969.2 MB/second

Thus, 4 such SSDs in a RAID-0 array should max out at ~7.87 GB/second (no overhead). Yes, the SATA protocol does have more inherent overhead, but its installed base is already HUGE. Increasing the clock rate and upgrading to jumbo frames should be a piece o’ cake for storage industry manufacturers. And, RAID controllers could still support PCIe 3.0 edge connectors, while increasing the clock speed on their SATA connectors to 16 GHz. Maybe Allyn could offer this suggestion to Areca?
I doubt there will be another SATA spec for SSDs. SSDs will move to PCIe, and SATA will be left for slower bulk storage.
Allyn, I think the last TRIM chart may have the wrong x-axis label; not sure, I got confused there.
You are correct! Thanks for the catch. It is now fixed.
Hopefully the 850s will see price drops rather than being discontinued.
Allyn, pop quiz of the day.
I have 3 256GB 850 PROs in RAID 1 on my boot drive (I have no sensitive data on the RAID). I have all my programs/games on this “drive”. I am approaching 200GB of free space left. As you know, with today’s games that could be 4 new AAA titles. I have toyed with the idea of getting a single 500GB drive for Windows and all apps, leaving my 7xx GB RAID for Steam only. Is there any benefit to doing that with one of these drives, or should I just snatch up another 256GB 850 PRO and increase my RAID?
***I am on Z97, so an NVMe boot drive isn’t possible.
RAID 1 with three drives?
Yea… I mean RAID 0. Got ahead of myself last night.
So long as you are good at backing up, I'd just add another 256 to the RAID-0. With a smaller stripe size, you will see a nice boost to QD1 sequentials that are larger than the stripe size (since those transfers are split across multiple drives). You're good for up to 6 SSDs in RAID on that board. While you'll hit the DMI throughput limit at ~4 SATA devices, there are still advantages to splitting your IOs across additional SSDs, even when they are bottlenecked.
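To illustrate the split, here is a quick sketch of how a transfer maps onto RAID-0 members (the 128KB stripe and 4-drive count are just example values):

```python
STRIPE = 128 * 2**10   # example stripe size
DRIVES = 4             # example member count

def members_hit(offset, length):
    """Which RAID-0 members service a transfer of `length` bytes at byte `offset`."""
    first = offset // STRIPE
    last = (offset + length - 1) // STRIPE
    return {s % DRIVES for s in range(first, last + 1)}

print(members_hit(0, 64 * 2**10))    # 64KB  -> {0}: a single drive
print(members_hit(0, 512 * 2**10))   # 512KB -> {0, 1, 2, 3}: all four members
```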
I appreciate it. It all started when I found a smoking deal on two 256GB drives years ago, and it has grown from there. I guess another 256GB 850 PRO is on the horizon. The more the merrier, right!?
I have a FreeNAS box for redundancy, and a WD Blue 2TB drive in my system as well for the important stuff. I am pretty sure my ISP hates that I re-install Windows yearly and re-download my whole Steam library, though. 😀
You have multiple drives and you redownload your Steam library upon Windows installation? That’s just irresponsible.
At the end of the page “https://www.pcper.com/reviews/Storage/Samsung-860-EVO-and-PRO-SATA-SSD-Review-512GB-1TB-and-4TB-Tested/Performance-Focus-0” you wrote: “Being a PRO series SSD, the 2TB unit contains only MLC flash and no SLC cache.”
The one being tested on that page is the 4TB version; there is no 2TB version being tested/shown.
Thanks! Fixed!
So there will be 4TB M.2 variants? How long until we see a 4TB PCIe M.2 from Samsung?
It’s down to PCB space. Not enough room on M.2. Less room on mSATA.
I was wondering, with an older platform such as dual Xeon socket 2011 (v1): if there were M.2 versions of these, would they function via an adapter?
I guess it is also a general question about NVMe SSDs or even Optane functioning on older platforms, or should I consider it upgrade time for Threadripper/Ryzen?
Granted, M.2 versions of these drives are not NVMe, but a chance to remove cables would be a nice positive. Hence the question stands for PCIe-to-M.2 adapters on older platforms for SATA/NVMe/Optane.
You’d have to use an M.2 to SATA (not PCIe) adapter card.
The link at your article’s outset concerning the 850 line’s “silent migration to 64-layer V-NAND” actually links to your piece detailing the 850 EVO’s transition from 32- to 48-layer NAND. If a story exists about the switch to 64-layer NAND, I must have missed it.
In a similar vein, do you know if anyone has done testing to compare the 48-layer MLC/TLC versions of the 850 series drives to their 64-layer replacements?
It would be interesting to find out if the mixed workload performance and TRIM issues exhibited by the newly released 860 series were also present on the third revision of the 850 series, possibly indicating a limitation of the denser NAND rather than a bug in the new controller or firmware.
Hey, just to make sure: you did all the benchmarks in an identical environment and there was no possible microcode/Windows update for Meltdown/Spectre in between the benchmarks for the old and new SSDs, right?
Hi….. Does the 860 PRO support RAID?
It’s been 8 months since the review; do you know if the TRIM issue has been fixed?
I’m deciding between the 850 and the 860 currently.
Allyn, have they fixed the TRIM issue?