Performance Comparisons – Mixed Burst
These are the Mixed Burst results introduced in the Samsung 850 EVO 4TB Review. Some tweaks have been made: QD has been reduced to a more realistic value of 2, and read bursts have been increased to 400MB each. 'Download' speed remains unchanged.
In an attempt to better represent the true performance of hybrid (SLC+TLC) SSDs and to include some general trace-style testing, I’m trying out a new test methodology. First, all tested SSDs are sequentially filled to near maximum capacity. Then the first 8GB span is preconditioned with a 4KB random workload, resulting in the condition called for in many of Intel’s client SSD testing guides. The idea is that most of the data on an SSD is sequential in nature (installed applications, MP3, video, etc), while some portions of the SSD have been written to in a random fashion (MFT, directory structure, log file updates, other randomly written files, etc). The 8GB figure is reasonably practical, since 4KB random writes across the whole drive is not a workload that client SSDs are optimized for (that is reserved for enterprise). We may try larger spans in the future, but for now, we’re sticking with the 8GB random write area.
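As a rough illustration, that preconditioning could be reproduced with an fio job along these lines. This is my sketch, not the actual test tooling; the device name, block sizes, and queue depths are placeholder assumptions.

```ini
; Hypothetical fio job approximating the preconditioning described above.
; WARNING: this destroys all data on the target. /dev/sdX is a placeholder.
[global]
ioengine=libaio
direct=1
filename=/dev/sdX

[seq-fill]
; sequentially fill the drive to near maximum capacity
rw=write
bs=128k
iodepth=32

[random-precondition]
; then precondition the first 8GB span with 4KB random writes
stonewall
rw=randwrite
bs=4k
iodepth=32
size=8g
```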
With that preconditioning as a base, we now needed a workload. I wanted to start with some background activity, so I captured a BitTorrent download:
This download was over a saturated 300 Mbit link. While the average download speed was reported as 30 MB/s, the application’s own internal caching meant the writes to disk were more ‘bursty’ in nature. We’re trying to adapt this workload to one that will allow SLC+TLC (caching) SSDs some time to unload their cache between write bursts, so I came to a simple pattern of 40 MB written every 2 seconds. These accesses are more random than sequential, so we will apply it to the designated 8GB span of our pre-conditioned SSD.
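To make that write pattern concrete, here is a small Python sketch of the schedule. The 40MB / 2-second / 8GB constants come from the text; the burst generator itself is my reconstruction for illustration only.

```python
# Sketch of the 'download' write-burst schedule: 40 MB written every
# 2 seconds as random 4KB accesses within the 8GB preconditioned span.
import random

SPAN = 8 * 1024**3          # 8GB preconditioned random-write span
BURST = 40 * 1024**2        # 40 MB written per burst
PERIOD = 2.0                # seconds between bursts
IO_SIZE = 4096              # 4KB random accesses

def burst_offsets(rng, count=BURST // IO_SIZE):
    """Random 4KB-aligned offsets making up one 40 MB write burst."""
    return [rng.randrange(SPAN // IO_SIZE) * IO_SIZE for _ in range(count)]

rng = random.Random(42)
one_burst = burst_offsets(rng)
avg_rate = BURST / PERIOD / 1024**2   # sustained average, 20 MB/s
```

The idle gap between bursts is the key design choice: it gives caching (SLC+TLC) drives a window to flush their cache, just as a real download would.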
Now for the more important part. Since the above ‘download workload’ is a background task that would likely go unnoticed by the user, we also need a workload that the user *would* be sensitive to. The times when someone really notices their SSD’s speed are when they are waiting for it to complete a task, and the most common tasks are application and game/level loads. I observed a round of different tasks and came to a 200MB figure for the typical amount of data requested when launching a modern application. Larger games can pull in as much as 2GB (or more), varying with game and level, so we will repeat the 200MB request 10 times during the recorded portion of the run. We will assume 64KB sequential access for this portion of the workload.
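A minimal sketch of that foreground read pattern, using the 200MB / 64KB figures above and the QD of 4 assumed for desktop apps. The batching helper is my own illustration, not the author's tooling.

```python
# One 'application launch' read burst: 200MB of 64KB sequential requests,
# issued with up to 4 IOs outstanding at a time.
IO_SIZE = 64 * 1024            # 64KB sequential accesses
BURST = 200 * 1024**2          # 200MB per application launch
QD = 4                         # assumed queue depth for desktop apps

requests_per_burst = BURST // IO_SIZE   # 64KB reads per launch

def batch_offsets(start, depth=QD):
    """Offsets for the next `depth` outstanding sequential reads."""
    return [start + i * IO_SIZE for i in range(depth)]
```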
Assuming a max Queue Depth of 4 (reasonable for typical desktop apps), we end up with something that looks like this when applied to a couple of SSDs:
The OCZ Trion 150 (left) is able to keep up with the writes (dashed line) throughout the 60 seconds pictured, but note that the read requests occasionally catch it off guard. Apparently, if some SSDs are busy with a relatively small stream of incoming writes, read performance can suffer, which is exactly the sort of thing we are looking for here.
Applying the same workload to the 4TB 850 EVO (right), we see an extremely consistent and speedy response to all IOs, regardless of whether they are writes or reads. The 200MB read bursts are so fast that they all occur within the same second, and none of them spill over due to delays caused by the simultaneous writes taking place.
Now that we have a reasonably practical workload, let’s see what happens when we run it on a small batch of SSDs:
From our Latency Percentile data, we are able to derive the total service time for both reads and writes, and independently show the throughputs seen for each. Remember that these workloads are being applied simultaneously, so as to simulate launching apps or games during a 20 MB/s download. The above figures are not simple averages – they represent only the speed *during* each burst. Idle time is not counted.
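The burst-only arithmetic can be sketched as follows. The function is my own illustration of the idea (not the test suite), and the example numbers are hypothetical.

```python
def burst_throughput_mbs(total_bytes, service_times_s):
    """Throughput during the bursts themselves, in MB/s.

    Only per-IO service time counts toward the denominator, so idle
    gaps between bursts do not dilute the figure.
    """
    busy_seconds = sum(service_times_s)
    return (total_bytes / 1024**2) / busy_seconds

# Example: a 400MB read burst whose IOs total 1.25s of busy time works
# out to 320 MB/s, no matter how long the drive sat idle afterwards.
rate = burst_throughput_mbs(400 * 1024**2, [1.25])
```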
The focus here is the read speeds, since on the write side all that matters is whether the drives are fast enough to keep up with the demand (they all are). The MX500's dynamic caching helps it offer among the fastest write throughputs of all drives tested, but the background activity that accompanies the cache action holds back its read speeds slightly compared to the drives in the bottom half of the chart.
Now we are going to focus only on reads, and present some different data. I’ve added up the total service time seen during the 10x 400MB reads that take place during the recorded portion of the test. These figures represent how long you would be sitting there waiting for 4GB of data to be read, but remember this is happening while a download (or another similar background task) is simultaneously writing to the SSD. This metric should closely equate to the 'feel' of using each SSD in a moderate to heavy load.
Again, good results from the MX500, but the Samsung parts, the Intel 545s, and even the BX100 edge it out slightly. Still a solid showing though.
The above results retain the sorting used in the rest of the roundup charts. Below is a lot more data, sorted by performance:

Now go back to those long lists of SSDs tested and put a red box around the SSD being reviewed, because that’s some haystack of results to visually search through to see how the drive under test compares to all those others in that very long list.
You can see the 4K and 128KB scores in the two top charts; take that score and scroll down until you get to it.
The SSD being tested is at the top of the abbreviated charts – above the longer charts.
Allyn Malventano, Regarding the TRIM issues, can Crucial fix the problem with a firmware update? Thanks.
Most likely, yes.
Looks like a solid alternative to the 850 EVO.
Allyn, what do you think of an MLC SSD with a TLC cache?
TLC is slower than MLC, which itself is slower than SLC. Micron has SLC mode caching for their smaller MLC/TLC drives because it improves speed.
A TLC cache would hurt performance.
I have the 1TB MLC Crucial MX200, which has enough flash that it doesn't need an SLC cache; however, I do use Momentum Cache, which uses system DRAM as a fast cache. It's a good idea if you have a UPS, which I do.
Interesting. I wonder if, with the BX line being the ultra cheap one, we’ll see it move to 3D QLC NAND before long. Sure, it’ll be slower than the others, but it’ll be a butt tonne cheaper.
Get back to us when they're at $0.10 a GB.
Maybe in 5 years.
With regards to what Jon Tanguy said in the video about Power Loss Immunity eliminating the need for banks of capacitors – they were pretty cool to look at: https://i.imgur.com/wVXxOre.jpg
How does it compare with MX300?
One of my takeaways is (trim speed aside) the performance on this isn’t all that different from a Vector. And the Vector was a monster (an unsafe hotrod that blew a gasket if you cycled power at the wrong time) of a client drive when it came out and was MLC only. It’s nice to see a budget TLC drive isn’t completely compromised.
Went from a 256GB C300 at launch to a 500GB MX100; I just might upgrade to a 1TB MX500.
Things are getting a bit saturated.
MX500 2TB appears to be 25% cheaper than the 850 EVO 2TB
Maybe the TRIM results are like that because the Crucial MX500's NCQ (Native Command Queuing) TRIM is actually working, unlike Samsung's SSDs, which have broken NCQ TRIM (this is why the 8xx series is blacklisted for queued TRIM in the Linux kernel).
Is there any test you could do to confirm this? Maybe try disabling NCQ TRIM and then running the tests again, or even run the MX500 and 850 EVO in IDE mode instead of AHCI to make sure NCQ is not a factor.