Performance Focus – Crucial MX500 1TB
Before we dive in, a quick note: I've been analyzing the effect of how full an SSD is on its performance. I've found that most SSDs perform better when empty (FOB, or fresh out of box) than they do when half or nearly filled to capacity, and most people actually put stuff on their SSD. To properly capture performance at various levels of fill, the entire suite is run multiple times at varying levels of drive fill, in a way that emulates actual use of the SSD over time. Random and sequential performance is rechecked in the same areas as data is added, with those checks made on the same files and areas throughout the test. Once all of this data is obtained, we apply the weighting method mentioned in the intro to balance the results toward the more realistic levels of fill. The results below all use this method.
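To make that weighting step concrete, here's a minimal sketch of the idea. The fill levels, IOPS figures, and weights below are made-up illustrative numbers (the suite's actual weights are more involved), but the mechanics are the same: each fill level's result contributes in proportion to how representative that fill level is of real use.

```python
# Illustrative only: fill levels, IOPS figures, and weights are made up,
# not the values used by our suite.

# Measured 4KB random read IOPS at each drive fill level (hypothetical numbers)
results_by_fill = {0.00: 98_000, 0.25: 95_000, 0.50: 92_000, 0.75: 88_000, 1.00: 80_000}

# Weights biased toward the half-to-mostly-full state most drives live in
weights = {0.00: 0.05, 0.25: 0.20, 0.50: 0.30, 0.75: 0.30, 1.00: 0.15}

weighted_iops = sum(results_by_fill[f] * weights[f] for f in weights)
print(f"Fill-weighted 4KB random read: {weighted_iops:,.0f} IOPS")
```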
Sequential performance looks strong. Near full speed at QD=1 is a good thing to see here.
Now for random access. The blue and red lines are read and write, respectively, and I've thrown in a 70% read / 30% write mix as an additional data point. SSDs typically have a hard time with mixed workloads, so the closer that 70% plot sits to the read plot, the better.
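For anyone wanting to reproduce a similar data point at home, the mix itself is nothing exotic. The sketch below is a simplified stand-in for our suite (Python, Unix-only os.pread/os.pwrite, and it goes through the page cache, which a real test would bypass), showing the general shape of a 70% read / 30% write random workload against a scratch file:

```python
import os
import random

# Simplified stand-in for a mixed-workload generator (not our actual test suite).
# Issues 4KB random accesses against a scratch file with a 70/30 read/write split.
PATH = "scratch.bin"            # hypothetical scratch file on the drive under test
FILE_SIZE = 64 * 1024 * 1024    # kept small for the example; real testing spans the drive
BLOCK = 4096
IO_COUNT = 10_000
READ_RATIO = 0.70

# Pre-fill the scratch file so reads land on valid data
with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

fd = os.open(PATH, os.O_RDWR)
buf = os.urandom(BLOCK)
for _ in range(IO_COUNT):
    offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
    if random.random() < READ_RATIO:
        os.pread(fd, BLOCK, offset)    # random 4KB read
    else:
        os.pwrite(fd, buf, offset)     # random 4KB write
os.close(fd)
```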
Something our readers might not be used to is the noticeably higher write performance at these lower queue depths. To better grasp the cause, think about what must happen while these transfers are taking place, and what constitutes a ‘complete IO’ from the perspective of the host system.
- Writes: Host sends data to SSD. SSD receives data and acknowledges the IO. SSD then passes that data onto the flash for writing. All necessary metadata / FTL table updates take place.
- Reads: Host requests data from SSD. SSD controller looks up data location in FTL, addresses and reads data from the appropriate flash dies, and finally replies to the host with the data, completing the IO.
The fundamental difference there is when the IO is considered complete. While 'max' values for random reads are typically higher than for random writes (due to limits in flash write speeds), lower-QD writes can generally be serviced faster, resulting in higher IOPS. Random writes can also 'ramp up' faster, since writes don't need a deep queue to achieve the parallelism that reads rely on to reach their high-QD, high-IOPS figures.
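A toy model makes the QD=1 arithmetic clear. If a write is acknowledged as soon as the data lands in the controller's buffer, but a read has to wait on the flash itself before it can answer, the per-IO latency gap translates directly into an IOPS gap at a queue depth of 1. The latencies below are made-up round numbers, not MX500 measurements:

```python
# Toy model for QD=1 (one outstanding IO at a time): IOPS = 1 / per-IO latency.
# Latencies are illustrative round numbers, not measured MX500 figures.

write_ack_latency = 40e-6   # seconds: host transfer + controller buffer + acknowledge
read_latency = 90e-6        # seconds: FTL lookup + flash read + transfer back to host

qd1_write_iops = 1 / write_ack_latency   # 25,000 IOPS
qd1_read_iops = 1 / read_latency         # ~11,100 IOPS

print(f"QD=1 write: {qd1_write_iops:,.0f} IOPS")
print(f"QD=1 read:  {qd1_read_iops:,.0f} IOPS")

# At higher queue depths, reads can be spread across many flash dies in parallel,
# which is why their 'max' figures end up above writes despite the slower single IO.
```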
Our new results are derived from a very large dataset. I'm including the raw (% fill weighted) data set below for those with specific needs who want to find their particular use case on the plot.
The MX500 does well here too, but the real proof will be in the comparisons. For the power users out there, here's the full read/write burst sweep at all queue depths:
Write Cache Testing
The MX500 is supposed to employ a dynamic SLC cache in addition to the bulk TLC storage, and I have no doubt that it is doing so, but the great thing here is that, judging by how it performed across several runs and at varying levels of drive fill, you'd never know there was a cache at play. That's great news, as it means no slowdowns even under the heaviest use.
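If you want to sanity-check your own drive for a cache 'cliff', the idea is simple: write sequentially for longer than any plausible cache size and watch the per-chunk throughput. This rough sketch (a hypothetical standalone check, not the methodology used in this review; it writes a file through the filesystem and leans on fsync rather than true direct IO) would show an abrupt, sustained drop if an SLC cache were being exhausted:

```python
import os
import time

# Rough sketch of a cache-cliff check (not the methodology used in this review):
# write sequentially in large chunks and log the throughput of each chunk.
PATH = "cache_test.bin"        # hypothetical file on the drive under test
CHUNK = 128 * 1024 * 1024      # 128 MiB per sample
TOTAL_GB = 8                   # a real run should exceed any plausible cache size

chunk = os.urandom(CHUNK)
with open(PATH, "wb", buffering=0) as f:
    for i in range((TOTAL_GB * 1024 ** 3) // CHUNK):
        start = time.perf_counter()
        f.write(chunk)
        os.fsync(f.fileno())   # force the data to the drive so the timing is honest
        elapsed = time.perf_counter() - start
        print(f"chunk {i:3d}: {CHUNK / elapsed / 1e6:7.1f} MB/s")
```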
Comments
Now go back to those long lists of SSDs tested and put a red box around the SSD being tested, because that's quite a haystack of results to visually search through to see how the drive under test compares to all the others in that very long list.
You can see the 4K and 128KB scores in the two top charts; take that score and scroll down until you get to it.
The SSD being tested is at the top of the abbreviated charts – above the longer charts.
Allyn Malventano, Regarding the TRIM issues, can Crucial fix the problem with a firmware update? Thanks.
Most likely, yes.
Looks like a solid alternative to the 850 EVO.
Allyn, what do you think of an MLC SSD with a TLC cache?
TLC is slower than MLC, which itself is slower than SLC. Micron has SLC mode caching for their smaller MLC/TLC drives because it improves speed.
A TLC cache would hurt performance.
I have the 1TB MLC Crucial MX200, which has enough flash that it doesn't need an SLC cache; however, I do use Momentum Cache, which uses system DRAM as a fast cache. It's a good idea if you have a UPS, which I do.
Interesting. I wonder if, with the BX line being the ultra-cheap one, we'll see it move to 3D QLC NAND before long. Sure, it'll be slower than the others, but it'll be a butt tonne cheaper.
Get back to us when they're at $0.10 a GB.
Maybe in 5 years
With regards to what Jon Tanguy said in the video about Power Loss Immunity eliminating the need for banks of capacitors – they were pretty cool to look at: https://i.imgur.com/wVXxOre.jpg
How does it compare with MX300?
One of my takeaways is that (TRIM speed aside) the performance on this isn't all that different from a Vector. And the Vector was a monster of a client drive when it came out (an unsafe hotrod that blew a gasket if you cycled power at the wrong time), and it was MLC only. It's nice to see a budget TLC drive isn't completely compromised.
Went from a 256GB C300 at launch to a 500GB MX100; I just might upgrade to a 1TB MX500.
Things are getting a bit saturated.
MX500 2TB appears to be 25% cheaper than the 850 EVO 2TB
Maybe the TRIM results are like that because the Crucial MX500's NCQ (Native Command Queuing) TRIM is actually working, unlike Samsung's SSDs, which have broken NCQ TRIM (this is why the 8xx series is blacklisted for NCQ TRIM in the Linux kernel).
Is there any test you could do to confirm this? Maybe somehow try to disable NCQ TRIM and then run the tests again. Maybe even run the MX500 and 850 EVO in IDE mode instead of AHCI to make sure that NCQ is not a factor.