Performance Focus – Crucial MX500 1TB

Before we dive in, a quick note: I’ve been analyzing the effect of how full an SSD is on its performance. I’ve found that most SSDs perform better when empty (fresh out of box, or FOB) than they do when half or nearly filled to capacity. Most people actually put stuff on their SSD. To properly capture performance at various levels of fill, the entire suite is run multiple times at varying levels of drive fill, in a way that emulates actual use of the SSD over time. Random and sequential performance is rechecked against the same files and areas as data is added, so the comparisons remain consistent throughout the test. Once all of this data is obtained, we apply the weighting method mentioned in the intro to bias the results toward the more realistic levels of fill. All of the results below use this method.
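If you're curious how that weighting step works in practice, here's a minimal sketch of the idea in Python. The fill levels, weights, and IOPS figures are placeholders made up for illustration, not the actual values used by our suite.

```python
# Minimal sketch of fill-weighted averaging. The fill levels, weights,
# and IOPS figures are hypothetical placeholders, not the actual values
# used by our test suite.

# Measured result for the same workload at several drive-fill levels.
iops_by_fill = {
    0.00: 95_000,   # fresh out of box (FOB)
    0.50: 88_000,   # half full
    0.90: 82_000,   # nearly full
}

# Weights biased toward the more realistic (partially filled) states.
weights = {
    0.00: 0.10,
    0.50: 0.50,
    0.90: 0.40,
}

weighted_iops = sum(iops_by_fill[f] * weights[f] for f in iops_by_fill)
print(f"Fill-weighted result: {weighted_iops:,.0f} IOPS")
```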

Sequential performance looks strong. Near-full speed at QD=1 is a good thing to see here.

Now for random access. The blue and red lines are read and write, respectively, and I've thrown in a 70% read / 30% write mix as an additional data point. SSDs typically have a hard time with mixed workloads, so the closer that 70% plot sits to the read plot, the better.
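For those who like to put numbers to it, one rough way to judge a mixed result is to compare it against an ideal 'no interleaving penalty' baseline built from the pure read and write figures. The sketch below uses made-up IOPS values purely to show the math; the real MX500 numbers are in the plots.

```python
# One way to read the mixed-workload plot: compare the measured 70/30
# point against an ideal 'no interleaving penalty' baseline built from
# the pure read and write results. All IOPS figures below are
# hypothetical placeholders, not MX500 measurements.

read_iops = 12_000.0    # hypothetical pure random read, QD=1
write_iops = 30_000.0   # hypothetical pure random write, QD=1
mixed_iops = 13_500.0   # hypothetical measured 70% read / 30% write, QD=1

# If reads and writes interleave with no added penalty, the time per IO
# is the mix-weighted average of the pure per-IO times, so the ideal
# mixed IOPS is the weighted harmonic mean of the pure figures.
ideal_mixed = 1.0 / (0.7 / read_iops + 0.3 / write_iops)

print(f"Ideal 70/30 mix : {ideal_mixed:,.0f} IOPS")
print(f"Measured mix    : {mixed_iops:,.0f} IOPS")
print(f"Mixed efficiency: {mixed_iops / ideal_mixed:.0%}")
```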

Something our readers might not be used to is the noticeably higher write performance at these lower queue depths. To better grasp the cause, think about what must happen while these transfers are taking place, and what constitutes a ‘complete IO’ from the perspective of the host system.

  • Writes: Host sends data to SSD. SSD receives data and acknowledges the IO. SSD then passes that data onto the flash for writing. All necessary metadata / FTL table updates take place.
  • Reads: Host requests data from SSD. SSD controller looks up data location in FTL, addresses and reads data from the appropriate flash dies, and finally replies to the host with the data, completing the IO.

The fundamental difference is when the IO is considered complete. While ‘max’ values for random reads are typically higher than for random writes (due to limits in flash write speeds), lower QD writes can generally be serviced faster, resulting in higher IOPS. Random writes can also ‘ramp up’ faster, since writes don’t need a deep queue to achieve the parallelism that reads rely on to reach their high IOPS at high queue depths.
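To make that distinction concrete, here's a toy timing model. The stage durations are invented and only meant to show where the acknowledgment falls in each sequence, not to represent actual MX500 latencies.

```python
# Toy timing model of where an IO becomes 'complete' from the host's
# point of view. Stage durations are made up purely to illustrate the
# ordering described above; they are not real MX500 timings.

WRITE_STAGES_BEFORE_ACK = [
    ("host transfers data to SSD", 10),
    ("controller buffers data and acknowledges IO", 5),
]
WRITE_STAGES_AFTER_ACK = [          # happens in the background
    ("data programmed to flash", 300),
    ("FTL / metadata updated", 20),
]
READ_STAGES_BEFORE_COMPLETION = [
    ("host issues request", 5),
    ("controller looks up location in FTL", 10),
    ("flash die read and transfer", 80),
    ("data returned to host", 10),
]

write_visible_us = sum(t for _, t in WRITE_STAGES_BEFORE_ACK)
read_visible_us = sum(t for _, t in READ_STAGES_BEFORE_COMPLETION)

print(f"QD=1 write latency seen by host: ~{write_visible_us} us")
print(f"QD=1 read latency seen by host : ~{read_visible_us} us")
print(f"Write work still pending after ack: "
      f"{sum(t for _, t in WRITE_STAGES_AFTER_ACK)} us")
```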

Our new results are derived from a very large dataset. I'm including the raw (%-fill-weighted) data below for those who want to find their specific use case on the plot.

The MX500 does well here too, but the real proof will be in the comparisons. For the power users out there, here's the full read/write burst sweep at all queue depths:

Write Cache Testing

The MX500 is supposed to employ a dynamic SLC cache in addition to its bulk TLC storage, and I have no doubt that it does, but the impressive part is that, judging by how it performed across several runs and at varying levels of drive fill, you'd never know there was a cache at play. That means no slowdowns even under the heaviest use.
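If you want to probe for cache behavior on your own drive, a rough approach is to stream sequential writes in fixed-size chunks and watch for a throughput drop once any SLC buffer runs out. The sketch below is a simplified illustration with an arbitrary path, chunk size, and total; it is not the methodology behind our suite's results.

```python
# Rough sketch of how one might probe for an SLC-cache 'cliff' on any
# drive: stream sequential writes in fixed-size chunks and log per-chunk
# throughput. A sustained drop partway through suggests the drive fell
# back from SLC to native TLC speeds. The path, chunk size, and total
# written are arbitrary placeholders.
import os
import time

PATH = "/mnt/testdrive/cache_probe.bin"   # hypothetical test mount
CHUNK = 256 * 1024 * 1024                 # 256 MiB per chunk
TOTAL = 32 * 1024 ** 3                    # 32 GiB written in total
buf = os.urandom(CHUNK)

with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        start = time.perf_counter()
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())              # push the data past the OS cache
        elapsed = time.perf_counter() - start
        written += CHUNK
        print(f"{written / 1024 ** 3:5.1f} GiB  "
              f"{CHUNK / elapsed / 1024 ** 2:7.0f} MiB/s")

os.remove(PATH)
```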
