Performance Comparisons – TRIM Speed
Thanks to the wealth of data at our disposal from the new suite, I can derive some additional interesting figures that nobody seems to have paid attention to yet. Have you ever deleted a large file and then noticed your system hang for a while afterward? Maybe file moves from your SSD seemed to take longer than expected?
That's your problem right there. In the above capture, a 16GB file was deleted while a minimal level of background IO was taking place. Note how that IO completely stalls for a few seconds shortly after the file was deleted? That's a bad thing. We don't want that, but to fix it, someone needs to measure it and point it out. Enter another aspect of our new testing:
Latency Percentile data was obtained while running a 'light' (1000 IOPS) workload in the background while files of varying sizes were deleted. The amount of latency added during the deletions was measured, compared with a baseline, and correlated with the sizes of the deleted files. The result is how much latency is added to the active workload per GB of file size that was deleted. In short, this is how long you may notice a stutter last after deleting a 1GB file.
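The per-GB figure described above can be sketched in a few lines. This is a minimal illustration of the calculation, not the suite's actual code, and the sample latencies below are hypothetical placeholders rather than measured data:

```python
# Hypothetical sketch of the added-latency-per-GB calculation described above.
# The sample numbers are illustrative, not measured results.

def added_latency_per_gb(baseline_ms, during_delete_ms, deleted_gb):
    """Extra IO latency attributable to the deletion, normalized per GB deleted."""
    baseline_avg = sum(baseline_ms) / len(baseline_ms)
    delete_avg = sum(during_delete_ms) / len(during_delete_ms)
    return (delete_avg - baseline_avg) / deleted_gb

# 1000 IOPS background workload: per-IO latencies (ms) before and during a delete
baseline = [0.10, 0.11, 0.09, 0.10]
during = [5.1, 4.9, 5.0, 5.0]  # the stall shows up as inflated latencies
print(added_latency_per_gb(baseline, during, deleted_gb=16))
```

The key point is the baseline subtraction: only the latency *added* by the deletion is attributed to TRIM handling, so the background workload's own latency does not inflate the result.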
To avoid confusion, I've maintained the performance-based sort from the mixed test for these charts. Here you can tell that some drives that did perform well on that test stick out a bit here when it comes to how they handle TRIM. Ideally, these results should all be as close to 0.000 as possible. Higher figures translate to longer performance dips after files have been moved or deleted.
This is another result sourced from a different segment of data. While our suite runs, it issues a full drive TRIM several times. Some of those times it is done on an empty SSD, other times it is done on a full SSD. Any difference in time taken is measured and normalized to a response time per GB TRIMmed. In short, this is how long an otherwise idle SSD would hang upon receiving a TRIM command for a 1GB file. These times are shorter than in the last chart because the SSD controller does not have to juggle the TRIM with background activity and can throw all of its resources at the request.
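The normalization step above amounts to a simple difference-and-divide. A minimal sketch, where the empty-drive TRIM time of 0.5 s is an illustrative placeholder (the 107-second full-drive figure is discussed below):

```python
# Hypothetical sketch of the idle full-drive TRIM normalization described above.
# empty_drive_s is an illustrative placeholder, not a measured result.

def trim_seconds_per_gb(full_drive_s, empty_drive_s, capacity_gb):
    """Difference in full-drive TRIM time, normalized to seconds per GB TRIMmed."""
    return (full_drive_s - empty_drive_s) / capacity_gb

# e.g. a 1000GB SSD: TRIM of a full volume vs. an already-empty one
per_gb = trim_seconds_per_gb(full_drive_s=107.0, empty_drive_s=0.5, capacity_gb=1000)
print(per_gb)
```

Subtracting the empty-drive time cancels out the fixed command overhead, leaving only the per-GB cost of invalidating mapped data.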
Looks like we found a chink in the armor. The MX500 delayed accesses by nearly half a second per GB of data deleted/TRIMmed, and clearing the entire 1TB volume (as with an OS reinstall or repartitioning a previously full SSD) took nearly two full minutes (107 seconds). This isn't a deal-breaker, but it is something to be aware of depending on your specific use case.
Below are more results for this valuable metric, sorted by performance. Note that the oldest SSDs (X25-M) are N/A here because they did not support TRIM:
Now go back to those long lists of SSDs tested and put a red box around the SSD being reviewed, because that's quite a haystack of results to search through visually to see how the drive under test compares to all the others in that very long list.
You can see the 4K and 128KB scores in the two top charts; take that score and scroll down until you get to it.
The SSD being tested is at the top of the abbreviated charts – above the longer charts.
Allyn Malventano, regarding the TRIM issues: can Crucial fix the problem with a firmware update? Thanks.
Most likely, yes.
Looks like a solid alternative to the 850 EVO.
Allyn, what do you think of an MLC SSD with a TLC cache?
TLC is slower than MLC, which is itself slower than SLC. Micron uses SLC-mode caching on their smaller MLC/TLC drives because it improves speed.
A TLC cache would hurt performance.
I have the 1TB MLC Crucial MX200, which has enough flash that it doesn't need an SLC cache; however, I do use Momentum Cache, which uses system DRAM as a fast cache. It's a good idea if you have a UPS, which I do.
Interesting. I wonder if, with the BX line being the ultra-cheap one, we'll see it move to 3D QLC NAND before long. Sure, it'll be slower than the others, but it'll be a butt tonne cheaper.
Get back to us when they're at $0.10 a GB.
Maybe in 5 years
With regards to what Jon Tanguy said in the video about Power Loss Immunity eliminating the need for banks of capacitors – they were pretty cool to look at: https://i.imgur.com/wVXxOre.jpg
How does it compare with the MX300?
One of my takeaways is that (TRIM speed aside) the performance on this isn't all that different from a Vector. And the Vector was a monster of a client drive when it came out (an unsafe hotrod that blew a gasket if you cycled power at the wrong time), and was MLC only. It's nice to see a budget TLC drive isn't completely compromised.
Went from a 256GB C300 at launch to a 500GB MX100; I just might upgrade to a 1TB MX500.
Things are getting a bit saturated.
The MX500 2TB appears to be 25% cheaper than the 850 EVO 2TB.
Maybe the TRIM results are like that because the Crucial MX500's NCQ (Native Command Queuing) TRIM is actually working, unlike Samsung's SSDs, which have broken NCQ TRIM (this is why the 8xx series are blacklisted for NCQ TRIM in the Linux kernel).
Is there any test you could do to confirm this? Maybe somehow disable NCQ TRIM and then run the tests again, or even run the MX500 and 850 EVO in IDE mode instead of AHCI to make sure NCQ is not a factor.
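On Linux, one way to approximate the NCQ-off half of that comparison is to drop the device's queue depth to 1 via sysfs, which disables NCQ (and therefore queued TRIM) without touching BIOS settings. A minimal sketch; the device name "sda" is an assumption, and it dry-runs by default since the real change needs root:

```shell
# Sketch of the NCQ-off comparison suggested above (Linux, SATA).
# DEV is an assumption -- point it at the drive under test.
# queue_depth=1 disables NCQ, so queued TRIM cannot be issued.
# Dry-run by default; set RUN=1 to actually apply the change (needs root).
DEV="${DEV:-sda}"
QD_PATH="/sys/block/$DEV/device/queue_depth"

if [ "${RUN:-0}" = "1" ] && [ -w "$QD_PATH" ]; then
    echo 1 > "$QD_PATH"   # NCQ off; rerun the TRIM latency tests now
    cat "$QD_PATH"        # confirm the new depth
else
    echo "would run: echo 1 > $QD_PATH"
fi
```

After rerunning the tests, restore the original queue depth (commonly 31 or 32) the same way and compare the two latency profiles.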