Performance Focus – SSD 760p 128GB

A quick note on these results: I've been analyzing the effects of how full an SSD is on its performance. I've found that most SSDs perform better when empty (FOB, or fresh out of box) than they do when half full or nearly filled to capacity, and most people actually put stuff on their SSD. To properly capture performance at various levels of fill, the entire suite is run multiple times at varying levels of drive fill, in a way that emulates actual use of the SSD over time. Random and sequential performance is rechecked in the same areas as data is added, with those checks made on the same files and areas throughout the test. Once all of this data is obtained, we apply the weighting method mentioned in the intro to balance the results toward the more realistic levels of fill. The results below all use this method.
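For the curious, that weighting step amounts to a weighted average across the fill levels tested. Here's a minimal sketch of the idea; the fill levels, weights, and IOPS figures below are made up for illustration and are not the actual values used in our suite:

```python
# Hypothetical illustration of the fill weighting: the fill levels,
# weights, and IOPS figures below are made up and are NOT the actual
# values used in our suite.

fill_levels  = [0, 25, 50, 75, 90]              # % of drive capacity filled
weights      = [0.05, 0.15, 0.30, 0.30, 0.20]   # emphasis on realistic fills
iops_at_fill = [210_000, 195_000, 180_000, 165_000, 150_000]  # measured IOPS

weighted = sum(w * x for w, x in zip(weights, iops_at_fill)) / sum(weights)
print(f"Fill-weighted random read IOPS: {weighted:,.0f}")
```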

Sequential performance looks reasonable, but read speeds do not ramp up fully until QD=8.

Now for random access. The blue and red lines are reads and writes, and I've thrown in a 70% read / 30% write mix as an additional data point. SSDs typically have a hard time with mixed workloads, so the closer that 70% plot sits to the read plot, the better.
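As a rough mental model for where that 70% line should fall, you can treat mixed IOPS as a weighted harmonic mean of the pure read and write rates. This assumes IOs are serviced serially and independently, which real controllers only approximate, so a drive that handles mixes well should beat this naive estimate (the figures below are hypothetical):

```python
# Naive model for a 70/30 mix: treat average time-per-IO as the weighted
# mix of pure read and write per-IO times (a weighted harmonic mean).
# Real controllers interleave work, so a drive that handles mixed
# workloads well should land above this estimate.

def mixed_iops(read_iops: float, write_iops: float, read_frac: float = 0.7) -> float:
    return 1.0 / (read_frac / read_iops + (1.0 - read_frac) / write_iops)

# Hypothetical pure-read and pure-write figures for illustration:
print(f"{mixed_iops(300_000, 150_000):,.0f} IOPS")  # ~230,769 IOPS
```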

Something our readers might not be used to is the noticeably higher write performance at these lower queue depths. To better grasp the cause, think about what must happen while these transfers are taking place, and what constitutes a ‘complete IO’ from the perspective of the host system.

  • Writes: Host sends data to the SSD. The SSD receives the data and acknowledges the IO. The SSD then passes that data on to the flash for writing, and all necessary metadata / FTL table updates take place.
  • Reads: Host requests data from the SSD. The SSD controller looks up the data location in the FTL, addresses and reads the data from the appropriate flash dies, and finally replies to the host with the data, completing the IO.

The fundamental difference is when the IO is considered complete. While 'max' values for random reads are typically higher than for random writes (due to limits in flash write speeds), lower-QD writes can generally be serviced faster, resulting in higher IOPS. Random writes can also 'ramp up' faster, since writes don't need a deep queue to achieve the parallelism that reads require before reaching their high IOPS figures at high QD.
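Little's Law (IOPS ≈ queue depth / average latency) makes this concrete. Here's a sketch with invented latencies showing how a lower per-IO write latency lets writes post higher IOPS at low QD, even when reads have the higher ceiling:

```python
# Why low-QD writes can out-run low-QD reads: a write is acknowledged once
# the controller has the data, while a read must make the full trip to the
# flash die and back. All latencies and ceilings below are invented.

WRITE_LATENCY_S = 30e-6   # acknowledged at the controller: short
READ_LATENCY_S  = 80e-6   # full round trip to the flash die: longer

def iops(queue_depth: int, latency_s: float, ceiling: float) -> float:
    # Little's Law (IOPS = QD / latency), clamped at the drive's max rate.
    return min(queue_depth / latency_s, ceiling)

for qd in (1, 2, 4, 8, 16, 32):
    r = iops(qd, READ_LATENCY_S, ceiling=340_000)   # reads scale higher...
    w = iops(qd, WRITE_LATENCY_S, ceiling=160_000)  # ...but writes ramp sooner
    print(f"QD{qd:>2}: read {r:>9,.0f} IOPS   write {w:>9,.0f} IOPS")
```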

Our new results are derived from a very large dataset. I'm including the raw (%-fill-weighted) dataset below for those who have specific needs and want to find their particular use case on the plot.

For the power users out there, here's the full read/write burst sweep at all queue depths:
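Our suite uses custom tooling, but if you want to reproduce a similar sweep at home, fio is the usual choice. A minimal sketch follows; the target device and runtime are placeholders, so point it at a disposable drive or test file:

```python
# Sketch of a queue-depth sweep with fio; this is not our in-house suite,
# just a common way to reproduce a similar sweep. /dev/nvme0n1 and the
# runtime are placeholders: point it at a disposable device or test file.

import json
import subprocess

def fio_iops(rw: str, iodepth: int) -> float:
    out = subprocess.run(
        ["fio", "--name=sweep", "--filename=/dev/nvme0n1", "--direct=1",
         "--ioengine=libaio", "--bs=4k", f"--rw={rw}",
         f"--iodepth={iodepth}", "--runtime=15", "--time_based",
         "--output-format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    job = json.loads(out)["jobs"][0]
    return job["read" if rw == "randread" else "write"]["iops"]

for qd in (1, 2, 4, 8, 16, 32, 64, 128, 256):
    print(f"QD{qd}: read {fio_iops('randread', qd):,.0f} IOPS, "
          f"write {fio_iops('randwrite', qd):,.0f} IOPS")
```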

Saturated vs. Burst Performance

These tests are intended to show the 'max' sustained performance figures of the SSD being tested. For caching SSDs, sustained (or 'saturated') write results will typically be lower, as any cache will be full and writes must go directly to the bulk media (direct-to-die writes). Other SSDs may suffer more by sending all writes to the cache and entering a swap state once that cache overfills, where data must be transferred from the host to the cache and simultaneously from the cache to the bulk media. While the average speed may look OK in those cases, instantaneous performance will be very stuttery and may fluctuate wildly. Take saturated results with a large grain of salt, as this sort of write workload is rarely encountered in real-world use.
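If you want to see that cache-exhaustion behavior for yourself, the basic approach is to write continuously and log throughput in short windows; a caching SSD shows a fast early plateau that collapses once the cache fills. A minimal sketch, with the path and sizes as placeholders:

```python
# Write continuously and log throughput per second; on a caching SSD the
# early (burst) plateau collapses once the cache fills and writes go
# direct-to-die. Path and sizes are placeholders; use a disposable file.

import os
import time

CHUNK = 64 * 1024 * 1024      # 64 MiB per write call
TOTAL = 32 * 1024**3          # 32 GiB total, enough to exhaust most caches
buf = os.urandom(CHUNK)

fd = os.open("/mnt/testdrive/burst.bin", os.O_WRONLY | os.O_CREAT | os.O_DSYNC)
try:
    written, window_bytes, window_start = 0, 0, time.monotonic()
    while written < TOTAL:
        n = os.write(fd, buf)
        written += n
        window_bytes += n
        now = time.monotonic()
        if now - window_start >= 1.0:
            print(f"{window_bytes / (now - window_start) / 1e6:,.0f} MB/s")
            window_bytes, window_start = 0, now
finally:
    os.close(fd)
```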

The reduced die count of this smallest-capacity 128GB model results in a rather sharp hit to burst writes and especially to saturated writes, which are low enough that typical use will be impacted compared to the higher-capacity products in the line.

Write Cache Testing

Uh oh, this looks like the write toggle issue that we saw with the 600p. Granted, this is the smallest capacity model, and we might have just caught it trying to do some garbage collection. Fortunately, this is just one of our cache test runs:

Above is an additional run, started 3 seconds after the completion of the previous run. Things looked significantly better here, as well as in the rest of the runs in this test sequence:
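The re-run logic itself is trivial: repeat the burst with a short idle gap so the controller has a chance to flush and clean up between runs. A sketch of the idea, with write_burst() standing in for the actual cache test and the target path as a placeholder:

```python
# Repeat the write burst with a short idle gap between runs so the
# controller has a chance to flush its cache / garbage collect. The path
# is a placeholder; write_burst() stands in for the actual cache test.

import os
import time

def write_burst(path="/mnt/testdrive/cache.bin", size=4 * 1024**3) -> float:
    """One burst: write `size` bytes synchronously, return average MB/s."""
    buf = os.urandom(8 * 1024 * 1024)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC)
    start, written = time.monotonic(), 0
    try:
        while written < size:
            written += os.write(fd, buf)
    finally:
        os.close(fd)
    return written / (time.monotonic() - start) / 1e6

for run in range(5):
    if run:
        time.sleep(3)  # the +3 second idle gap between runs
    print(f"Run {run + 1}: {write_burst():,.0f} MB/s")
```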

I'll simply chalk this up to the 128GB model not being as stellar or as consistent as its larger siblings. Note that competitors shy away from offering such low capacities in higher-performance packaging precisely to avoid this sort of thing.
