Performance Focus – 960 EVO 1TB
Very impressive speeds for the 1TB 960 EVO. Remember that these are burst-based tests, so the writes land exclusively in the cache (more on that below).
Now for random access. The blue and red lines are read and write, and I've thrown in a 70% R/W mix as an additional data point. SSDs typically have a hard time with mixed workloads, so the closer that 70% plot is to the read plot, the better.
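As a rough intuition for why mixed workloads are hard, the per-IO costs of reads and writes combine: a simple first-order estimate of the mixed rate is the weighted harmonic mean of the pure read and write rates. The sketch below uses purely illustrative IOPS figures, not measured 960 EVO results.

```python
# Hypothetical sketch: estimating mixed-workload IOPS from the pure read
# and write rates via a weighted harmonic mean. Per-IO latencies (1/IOPS)
# add in proportion to the mix, so the rates combine harmonically.
def mixed_iops(read_iops: float, write_iops: float, read_fraction: float) -> float:
    return 1.0 / (read_fraction / read_iops + (1.0 - read_fraction) / write_iops)

# Illustrative numbers only -- not measured results.
est = mixed_iops(read_iops=100_000, write_iops=120_000, read_fraction=0.7)
print(f"estimated 70% R/W mix: {est:,.0f} IOPS")
```

If the plotted 70% mix comes in well under this kind of estimate, the controller is losing efficiency when interleaving reads and writes, which is exactly what the distance between the 70% plot and the read plot shows.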
Something our readers might not be used to is the noticeably higher write performance at these lower queue depths. To better grasp the cause, think about what must happen while these transfers are taking place, and what constitutes a ‘complete IO’ from the perspective of the host system.
- Writes: Host sends data to SSD. SSD receives data and acknowledges the IO. SSD then passes that data onto the flash for writing. All necessary metadata / FTL table updates take place.
- Reads: Host requests data from SSD. SSD controller looks up data location in FTL, addresses and reads data from the appropriate flash dies, and finally replies to the host with the data, completing the IO.
The fundamental difference is when the IO is considered complete. While 'max' values for random reads are typically higher than for random writes (due to limits in flash write speeds), low-QD writes can generally be serviced faster, resulting in higher IOPS. Random writes also 'ramp up' faster, since writes don't need a deep queue to achieve the parallelism that reads require before reaching their high IOPS at high QD.
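The two completion paths above can be sketched with a toy latency model. All numbers below are illustrative placeholders, not measured figures; the point is only that a write is acknowledged once the controller has buffered the data, while a read completes only after the flash itself has been accessed.

```python
# Toy latency model of the completion paths described above
# (microsecond figures are illustrative, not measured).
TRANSFER_US = 5       # host <-> controller transfer of a 4K block
BUFFER_ACK_US = 10    # controller buffers write data and acknowledges
FTL_LOOKUP_US = 5     # controller resolves LBA -> flash location
FLASH_READ_US = 70    # actual NAND read of the page

# Write IO completes at acknowledgement; the flash program happens later.
write_latency = TRANSFER_US + BUFFER_ACK_US
# Read IO completes only after the flash has been read and data returned.
read_latency = FTL_LOOKUP_US + FLASH_READ_US + TRANSFER_US

# At QD=1, IOPS is simply 1 second / per-IO latency:
write_iops = 1e6 / write_latency
read_iops = 1e6 / read_latency
print(f"QD1 write: {write_iops:,.0f} IOPS, QD1 read: {read_iops:,.0f} IOPS")
```

With these placeholder latencies the QD1 write rate comes out several times the QD1 read rate, matching the shape of the low-QD results, while at high QD many reads can be issued to different flash dies in parallel and overtake writes.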
Our new results are derived from a very large data set. I'm including the raw (% fill weighted) data below for those with particular needs who want to find their own use case on the plot.
Write Cache Testing
Our first official use of the new write cache test:
Cache size came out to 42GB, which again matches Samsung's stated specification. Once the cache is full, we drop to the max TLC speed of ~1.1 GB/s, which is still considerably faster than any data source on a typical system.
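The practical effect of the cache filling can be sketched as a simple two-rate model. The 42GB cache size and ~1.1 GB/s post-cache speed come from the results above; the ~1.9 GB/s cached speed is an assumed burst figure for illustration, not a measurement from this test.

```python
# Sketch of effective throughput once a large transfer overflows the
# write cache. Cache size (42 GB) and post-cache speed (~1.1 GB/s) come
# from the article; the ~1.9 GB/s cached speed is an assumption.
CACHE_GB = 42.0
CACHED_GBPS = 1.9      # assumed burst write speed while cache has room
POST_CACHE_GBPS = 1.1  # TLC-direct speed once the cache is full

def effective_write_gbps(transfer_gb: float) -> float:
    """Average write speed over a single sustained transfer."""
    cached = min(transfer_gb, CACHE_GB)          # portion absorbed by cache
    direct = max(0.0, transfer_gb - CACHE_GB)    # portion written TLC-direct
    seconds = cached / CACHED_GBPS + direct / POST_CACHE_GBPS
    return transfer_gb / seconds

print(f"100 GB transfer: {effective_write_gbps(100):.2f} GB/s effective")
```

Even a 100GB sustained transfer averages well above the post-cache floor, and since ~1.1 GB/s already outruns any typical data source, the cache limit is rarely a real-world concern.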