Performance Comparisons – Mixed Burst
These are the Mixed Burst results introduced in the Samsung 850 EVO 4TB Review. A few tweaks have been made since then: QD has been reduced to a more realistic value of 2, and the read bursts have been increased to 400MB each. 'Download' speed remains unchanged.
In an attempt to better represent the true performance of hybrid (SLC+TLC) SSDs and to include some general trace-style testing, I’m trying out a new test methodology. First, all tested SSDs are sequentially filled to near maximum capacity. Then the first 8GB span is preconditioned with 4KB random workload, resulting in the condition called out for in many of Intel’s client SSD testing guides. The idea is that most of the data on an SSD is sequential in nature (installed applications, MP3, video, etc), while some portions of the SSD have been written to in a random fashion (MFT, directory structure, log file updates, other randomly written files, etc). The 8GB figure is reasonably practical since 4KB random writes across the whole drive is not a workload that client SSDs are optimized for (it is reserved for enterprise). We may try larger spans in the future, but for now, we’re sticking with the 8GB random write area.
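The preconditioning pass above can be sketched roughly as follows. This is a scaled-down, hedged illustration against an ordinary scratch file, not the review's actual rig: all paths, sizes, and block sizes here are my own placeholders. A real run sequentially fills the drive to near capacity and confines the 4KB random writes to the first 8GB.

```python
# Hedged sketch of the precondition: sequential fill, then 4 KiB random
# writes scattered across the first span of the target. Run this only
# against a scratch file, never a live drive. All sizes are illustrative.
import os
import random

def precondition(path, fill_bytes, random_span_bytes, n_random_writes,
                 seq_bs=128 * 1024, rand_bs=4 * 1024):
    """Sequentially fill `path` to `fill_bytes`, then scatter `rand_bs`
    random writes across the first `random_span_bytes` of it."""
    buf = os.urandom(seq_bs)
    with open(path, "wb") as f:
        written = 0
        while written < fill_bytes:                    # sequential fill
            chunk = min(seq_bs, fill_bytes - written)
            f.write(buf[:chunk])
            written += chunk
    rand_buf = os.urandom(rand_bs)
    slots = random_span_bytes // rand_bs
    with open(path, "r+b") as f:
        for _ in range(n_random_writes):               # random precondition
            f.seek(random.randrange(slots) * rand_bs)
            f.write(rand_buf)
    return os.path.getsize(path)
```

In practice a tool like fio performs this far more efficiently with direct I/O; the sketch only shows the shape of the two-phase conditioning.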
With that precondition in place, we now needed a workload to apply to it. I wanted to start with some background activity, so I captured a BitTorrent download:
This download was over a saturated 300 Mbit link. While the average download speed was reported as 30 MB/s, the application’s own internal caching meant the writes to disk were more ‘bursty’ in nature. We’re trying to adapt this workload to one that will allow SLC+TLC (caching) SSDs some time to unload their cache between write bursts, so I came to a simple pattern of 40 MB written every 2 seconds. These accesses are more random than sequential, so we will apply it to the designated 8GB span of our pre-conditioned SSD.
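The write pattern described above can be expressed as a simple schedule. This is a hedged sketch: the 64 KiB per-request size is my assumption (the text only says the accesses are more random than sequential), and the generator only produces (time, offset, length) tuples rather than performing any I/O.

```python
# Hedged sketch of the 'download' write pattern: a 40 MiB burst every
# 2 seconds, landing at random offsets inside the 8 GiB preconditioned
# span, for ~20 MB/s average. The 64 KiB request size is an assumption.
import random

MiB = 1 << 20
IO_SIZE = 64 * 1024           # assumed per-request size
BURST_BYTES = 40 * MiB        # one burst per PERIOD
PERIOD = 2.0                  # seconds between bursts
SPAN_BYTES = 8 * 1024 * MiB   # 8 GiB random-write span

def download_schedule(duration_s, seed=0):
    """Yield (timestamp_s, offset, length) write requests: one 40 MiB
    burst of random-offset 64 KiB writes every 2 seconds."""
    rng = random.Random(seed)
    t = 0.0
    while t < duration_s:
        for _ in range(BURST_BYTES // IO_SIZE):
            yield (t, rng.randrange(SPAN_BYTES // IO_SIZE) * IO_SIZE, IO_SIZE)
        t += PERIOD
```

Averaged over any whole number of periods, this works out to 40 MB per 2 seconds, i.e. the 20 MB/s background rate quoted below.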
Now for the more important part. Since the above 'download workload' is a background task that would likely go unnoticed by the user, what we also need is a workload that the user *would* be sensitive to. The time someone really notices their SSD's speed is when they are waiting for it to complete a task, and the most common such tasks are application and game/level loads. I observed a range of different tasks and arrived at a figure of 200MB for the typical amount of data requested when launching a modern application. Larger games can pull in as much as 2GB (or more), varying with game and level, so we repeat the 200MB request 10 times during the recorded portion of the run. We assume 64KB sequential access for this portion of the workload.
Assuming a max Queue Depth of 4 (reasonable for typical desktop apps), we end up with something that looks like this when applied to a couple of SSDs:
The OCZ Trion 150 (left) is able to keep up with the writes (dashed line) throughout the 60 seconds pictured, but note that the read requests occasionally catch it off guard. Apparently, if some SSDs are busy with a relatively small stream of incoming writes, read performance can suffer, which is exactly the sort of thing we are looking for here.
When we apply the same workload to the 4TB 850 EVO (right), we see an extremely consistent and speedy response to all IOs, regardless of whether they are reads or writes. The 200MB read bursts complete so quickly that each falls entirely within a single second, with none spilling over due to delays from the simultaneous writes.
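The foreground read bursts described above (64KB sequential, up to QD4) can be approximated in a few lines. This is a hedged illustration of the queue-depth idea using a thread pool against an ordinary file; the actual test rig would use an async or direct-I/O engine rather than Python threads.

```python
# Hedged sketch of the 'app launch' read burst: 64 KiB sequential reads
# with up to four requests outstanding (QD4). A thread pool stands in
# for a real async/direct-IO engine; this is an illustration only.
from concurrent.futures import ThreadPoolExecutor
import os

IO_SIZE = 64 * 1024

def read_burst(path, burst_bytes, qd=4):
    """Read `burst_bytes` from the start of `path` as sequential 64 KiB
    requests, keeping up to `qd` in flight; returns bytes read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        offsets = range(0, burst_bytes, IO_SIZE)
        with ThreadPoolExecutor(max_workers=qd) as pool:
            return sum(len(chunk) for chunk in
                       pool.map(lambda off: os.pread(fd, IO_SIZE, off), offsets))
    finally:
        os.close(fd)
```

Since the requests are sequential, the queue depth mainly governs how many are in flight at once, which is what lets a drive hide per-request latency during the burst.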
Now for the results:
From our Latency Percentile data, we are able to derive the total service time for both reads and writes, and independently show the throughput seen for each. Remember that these workloads are applied simultaneously, so as to simulate launching apps or games during a 20 MB/s download. The above figures are not simple averages – they represent only the speed *during* each burst. Idle time is not counted.
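The bookkeeping behind those figures can be shown in a short, hedged sketch: sum the latency of every request in a burst and divide the bytes moved by that total, so the idle gaps between bursts never count toward the result. Reads and writes are tallied independently from the same recording; the function below is my own illustration, not the review's actual tooling.

```python
# Hedged sketch of burst-only throughput: total service time is the sum
# of per-request latencies, and throughput is bytes moved divided by
# that total. Idle time between bursts is excluded by construction.

def burst_throughput(latencies_s, io_bytes):
    """Given per-request latencies (seconds) and the request size,
    return (total_service_time_s, throughput_bytes_per_s)."""
    total_service = sum(latencies_s)
    return total_service, len(latencies_s) * io_bytes / total_service
```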
The important metric here is reads since writes would be in the background in this scenario. We can see that while some of the slower and older parts begin to stumble under a background write workload, the new WD and SanDisk SSDs maintain their previous proportion of read performance while also maintaining their lead in write throughput (less important here, but still relevant).
Now we are going to focus only on reads and present some different data. I've added up the total service time seen during the 10x 400MB reads that take place during the recorded portion of the test. These figures represent how long you would be waiting for 4GB of data to be read – while a download (or similar background task) is simultaneously writing to the SSD. This metric should closely match the 'feel' of using each SSD under a moderate to heavy load, since it is the actual time spent waiting for such a task to complete in the face of background writes.
The new WD and SanDisk SSDs fall short of beating Samsung, but they are still very quick when compared with other SSDs handling this same workload, and they fall within the top 10 of all SSDs we have ever tested (including Optane).
I'm about to build a new system, and all these new NVMe drives coming out are starting to make the Samsung 960 EVO look antiquated. What to do?
Given the random read (low QD) performance falls slightly behind the 960 EVO, I'd consider both products roughly equal and go for the lower cost/GB unless you wanted the more proven (Samsung) part. Josh found 960 EVOs on sale at Newegg for $0.40/GB last night, so in that moment I'd go with the EVO.
Second chart on the “Performance Focus – Western Digital WD Black NVMe 1TB SSD” page is shown as Throughput, but should be IOPS (unless these drives are magically pushing over 300GBps 🙂 ).
Ooh, good catch. That chart has been wrong for a *long* time apparently…
Great review and very solid drive.
But pardon my ignorance, how are the thermals (do you have a FLIR)? Any thermal throttling?
This drive runs cool enough that WD didn't even need to use a copper-layered label as some other SSDs do, so I wouldn't consider it a concern. The controller has the capability to throttle if it needs to, but you'd have to be unrealistically hard on it to get to that point. This is the case with most M.2 SSDs – folks run a continuous storage test on them for minutes at a time and then complain about throttling, but nothing other than benchmarks hits the SSD that hard.
Maybe I am missing something, but why does the Mixed Burst section have a screenshot of an OCZ drive when the article is about WD/Sandisk drives?
It's a pic comparing a drive that has a harder time with the workload (left) to a faster drive that executes more quickly and consistently over time (right).
Hmm, I dunno. I feel like the 760p has higher random and sequential read performance while costing less, although there is still no 1TB option.
You're right there – the 760P does run closer to the Samsung parts in read performance and is also competitive on cost, but it's not available in 1TB. I was trying to stick with a sampling of various SSDs at or above the 1TB capacity point, but for some models we have only tested the 512GB version (the previous WD Black), and the charts get too cluttered if we go past 10 entries.
Why are they taking so long for 1TB? 🙁 I might even want 2TB in the future… Or a 4TB MX500. Is it the controller?
I suspect that the issue is limited space for the dies which are required to support larger capacities.
I suspect that to be the case for the Intel since it's M.2, but for the MX500? I think there's more room in there.
My X79 mobo predates M.2, so I used an Intel 750. With no NVMe boot option, Windows and its calls come from a SATA SSD, while programs and the swap file are on the 750. I know this 'parallel' fetching isn't meaningful, and the whole system is very fast (4930K – I only buy if I have to).
I remember an early M.2 mobo (Asus) that stood the drive up in the path of the front cooling fan, but heat doesn't seem to be much of an issue with the ones lying down.
I have looked at all the SSD reviews out there and the only two that stand out are PC Perspective and Anandtech, the reason being that you actually devise tests to suit the underlying architecture rather than running run-of-the-mill benchmark suites.
Would it be possible to specify under System Setup if the drive is plugged on the motherboard’s M.2 slot or is on a PCIe add-in card?
Also it would be nice if for the top 10 drives you could show the difference in latency based on whether the drive is used via M.2 PCIe AIC adapter vs M.2 through the PCH linked to CPU via DMI3.
I second Jabbadap's request for thermal data. I agree that in real-world systems you can't really heat up a drive, but I am more interested in systems used in harsh environments. The idea is that a drive that generates less of its own heat is likely to perform better in hotter ambient temperatures. I know one can always stick an M.2 cooler on, but since you are pushing the drives during testing, it is simply a matter of aiming a thermal camera at the drive under test.
Once again, I really appreciate your testing methodology.