Performance Focus – 750 EVO 250GB

The 250GB model comes in slightly higher than the 120GB, but still falls a bit shy of fully saturating the SATA interface.

Now for random access. The blue and red lines are read and write, respectively, and I've thrown in a 70% read / 30% write mix as an additional data point. SSDs typically have a hard time with mixed workloads, so the closer that 70% plot sits to the read plot, the better.
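
For anyone looking to approximate this sort of workload at home, a fio job along these lines produces a similar 70/30 random mix at a low queue depth. This is a sketch rather than our actual test suite, and /dev/sdX is a placeholder for the drive under test:

```ini
; Sketch of a 70/30 4KB random mix at QD4 (not our actual test suite).
; /dev/sdX is a placeholder; running fio against a raw device destroys its contents.
[global]
ioengine=libaio
direct=1
bs=4k
runtime=60
time_based=1

[randmix-qd4]
filename=/dev/sdX
rw=randrw
rwmixread=70
iodepth=4
```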

Something our readers might not be used to seeing is the noticeably higher write performance at these lower queue depths. To grasp the cause, consider what must happen while these transfers take place, and what constitutes a ‘complete IO’ from the perspective of the host system.

  • Writes: Host sends data to the SSD. The SSD receives the data and acknowledges the IO, then passes it on to the flash for writing. All necessary metadata / FTL table updates take place.
  • Reads: Host requests data from the SSD. The SSD controller looks up the data’s location in the FTL, addresses and reads it from the appropriate flash dies, and finally replies to the host with the data, completing the IO.
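
To put rough, purely illustrative numbers on that difference: if a write is acknowledged after ~25 µs (once the data lands in the controller's buffer) while a read takes ~70 µs end to end, then QD1 works out to roughly 40,000 write IOPS versus ~14,000 read IOPS (IOPS = 1,000,000 µs ÷ per-IO latency). Those latencies are assumptions chosen for easy math, not measured figures for this drive.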

The fundamental difference between the two paths is when the IO is considered complete. While ‘max’ values for random reads are typically higher than for random writes (due to limits in flash write speeds), lower QD writes can generally be serviced faster, resulting in higher IOPS. Random writes can also ‘ramp up’ faster, since writes don’t need a deep queue to achieve the parallelism that reads rely on to reach high IOPS at high queue depths.
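
That ramp can be pictured with a toy queuing model in Python: read IOPS grow with queue depth until every flash die has a request in flight, while buffered writes sit near their ceiling from QD1. The die count and service times below are illustrative assumptions, not specs for this drive:

```python
# Toy model of IOPS vs. queue depth. All figures are illustrative assumptions.
FLASH_DIES = 8        # assumed number of independently addressable dies
READ_US = 70.0        # assumed per-die read service time (microseconds)
WRITE_ACK_US = 25.0   # assumed host-visible (buffered) write latency

def read_iops(qd: int) -> float:
    # Each outstanding read can land on a different die, so throughput
    # scales with queue depth until every die is busy, then flattens.
    return min(qd, FLASH_DIES) * 1_000_000 / READ_US

def write_iops(qd: int) -> float:
    # Buffered writes complete once the controller has the data, so they
    # run near their ceiling even at QD1 (until the flash program rate
    # behind the buffer becomes the bottleneck).
    return 1_000_000 / WRITE_ACK_US

for qd in (1, 2, 4, 8, 16, 32):
    print(f"QD{qd:>2}: reads ~{read_iops(qd):8,.0f} IOPS | writes ~{write_iops(qd):8,.0f} IOPS")
```

In this toy, writes post higher IOPS at QD1 but flatten immediately, while reads overtake them once enough requests are in flight to keep multiple dies busy, matching the shape of the plots above.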

Our new results are derived from a very large data set. I'm including the raw (% fill weighted) data set below for those who want to find their particular use case on the plot.

Write Cache Testing

(Since this is a roundup piece, I'm shifting cache results to their own dedicated comparison page later in this article)
