Performance Focus – 960 EVO 250GB
Before we dive in, a quick note: I’ve been analyzing the effects of how full an SSD is on its performance. I’ve found that most SSDs perform better when empty (FOB, 'fresh out of box') than they do when half or nearly filled to capacity. Most people actually put stuff on their SSD. To properly capture performance at various levels of fill, the entire suite is run multiple times at varying levels of drive fill, in a way that emulates actual use of the SSD over time. Random and sequential performance is re-checked on the same files and areas as additional data is added, with those same locations sampled throughout the test. Once all of this data is obtained, we apply the weighting method mentioned in the intro to balance the results toward the more realistic levels of fill. The results below all use this method.
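For those curious how the weighting shakes out mathematically, here is a minimal sketch of the idea. The fill levels, weights, and throughput figures are hypothetical placeholders, not our actual test values:

```python
# Minimal sketch of fill-weighted result averaging. The fill
# levels, weights, and throughput figures are hypothetical
# placeholders, not our actual test values.

# Measured throughput (MB/s) at each drive-fill level
results = {0.00: 1500, 0.25: 1450, 0.50: 1380, 0.75: 1310}

# Weights biased toward realistic (partially filled) states;
# they must sum to 1.0
weights = {0.00: 0.10, 0.25: 0.30, 0.50: 0.35, 0.75: 0.25}

weighted = sum(results[fill] * weights[fill] for fill in results)
print(f"Fill-weighted throughput: {weighted:.0f} MB/s")
```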
Speeds are damn impressive for a 250GB SSD!
Now for random access. The blue and red lines are read and write, and I've thrown in a 70% R/W mix as an additional data point. SSDs typically have a hard time with mixed workloads, so the closer that 70% plot is to the read plot, the better.
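If you'd like to approximate that mixed workload on your own drive, fio can generate it. Here's a minimal sketch; the target file, size, and runtime are assumptions you should adapt to your own setup:

```python
# Minimal sketch: use fio to run a 4KB random workload at a
# 70/30 read/write mix, similar to our mixed data point. The
# target file, size, and runtime are assumptions -- adapt them.
import subprocess

cmd = [
    "fio",
    "--name=mixed7030",
    "--ioengine=libaio",            # Linux async IO engine
    "--direct=1",                   # bypass the page cache
    "--rw=randrw",                  # random mixed workload
    "--rwmixread=70",               # 70% reads, 30% writes
    "--bs=4k",                      # 4KB transfers
    "--iodepth=4",                  # queue depth 4
    "--size=8G",
    "--runtime=60",
    "--time_based",
    "--filename=/mnt/test/fio.dat", # assumed test file location
]
subprocess.run(cmd, check=True)
```

Compare the read and write IOPS fio reports against a pure-read run at the same queue depth to see how well a given drive holds up under the mix.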
Something our readers might not be used to is the noticeably higher write performance at these lower queue depths. To better grasp the cause, think about what must happen while these transfers are taking place, and what constitutes a ‘complete IO’ from the perspective of the host system.
- Writes: Host sends data to SSD. SSD receives data and acknowledges the IO. SSD then passes that data onto the flash for writing. All necessary metadata / FTL table updates take place.
- Reads: Host requests data from SSD. SSD controller looks up data location in FTL, addresses and reads data from the appropriate flash dies, and finally replies to the host with the data, completing the IO.
The fundamental difference is when the IO is considered complete: a write is acknowledged as soon as the controller has the data, while a read is only complete once the data has actually come back from flash. So while ‘max’ values for random reads are typically higher than for random writes (due to limits in flash write speeds), lower-QD writes can generally be serviced faster, resulting in higher IOPS. Random writes can also ‘ramp up’ faster, since writes don’t need a high queue depth to achieve the parallelism that reads rely on to reach their high IOPS at high QD.
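To put rough numbers on that, here's a toy latency model based on Little's law (IOPS ≈ queue depth / average service time). The microsecond figures are illustrative assumptions, not measurements from this drive:

```python
# Toy model of why low-QD writes post higher IOPS than reads.
# Latencies are illustrative assumptions, not measured values.
READ_LAT_US = 80   # a read must come back from flash to complete
WRITE_LAT_US = 25  # a write is acked once the controller has the data

def iops(queue_depth: int, latency_us: float) -> float:
    """Little's law: IOPS ~= queue depth / average service time."""
    return queue_depth / (latency_us * 1e-6)

for qd in (1, 2, 4):
    print(f"QD{qd}: reads ~{iops(qd, READ_LAT_US):,.0f} IOPS, "
          f"writes ~{iops(qd, WRITE_LAT_US):,.0f} IOPS")
```

With those assumed figures, QD1 writes come out around 40,000 IOPS against 12,500 IOPS for reads, which is the shape of the gap you see at the low end of the chart.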
Our new results are derived from a very large data set. I'm including the raw (% fill weighted) data set below for those who want to find their specific use case on the plot.
Write Cache Testing
Our first official use of the new write cache test:
Cache size came out to 13GB, which matches Samsung's stated specification. Performance is also very consistent at just over 300 MB/s once the cache has been depleted.
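As a side note, the cache size can be read straight off a throughput-versus-bytes-written trace by finding where the speed falls off a cliff. A minimal sketch of that step, using a fabricated trace for illustration:

```python
# Minimal sketch: estimate the SLC cache size from a sequential
# write trace by finding where throughput collapses. The trace
# below is fabricated for illustration; a real run would log
# (GB written, MB/s) pairs as the test progresses.
trace = [(1, 1500), (5, 1490), (10, 1480), (13, 1470),
         (14, 330), (20, 315), (40, 305)]

THRESHOLD = 0.5  # cache considered depleted once speed halves
full_speed = trace[0][1]

cache_gb = max(gb for gb, mbps in trace if mbps >= full_speed * THRESHOLD)
print(f"Estimated cache size: ~{cache_gb} GB")
```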
Hmm, maybe add 850 EVO RAID into the chart? Obviously testing every single drive in RAID would take way too long, but a RAID of 850 EVOs or of a budget drive seems like an interesting data point that wouldn’t take too long to add.
As tempting as it may be, I can’t see myself ever buying a TLC drive for anything other than a scratch drive, cache drive, etc.
Why? Because of the lifespan of TLC drives?
https://us.hardware.info/reviews/4178/hardwareinfo-tests-lifespan-of-samsung-ssd-840-250gb-tlc-ssd-updated-with-final-conclusion
As an AMD owner, I’d love to see benchmarks of this on a motherboard with a PCIe 2.0 x4 M.2 connector (like the Gigabyte 990fx-gamer). Does it completely saturate the 20Gb/s bandwidth?
Wow, poor showing from the Intel 600p. I feel bad for those who upgraded to the Intel drive on the assumption that NVMe would be a big boost over a SATA 6Gb/s SSD.
The Client QD weighted chart makes it look like it might actually be worth upgrading from a SATA to an NVMe SSD for real-world performance.
I’m curious where the OEM LiteON SSD in my Dell laptop would fit into the mix.
Awesome review. Really like the detailed info about the topic. I would love to ask one thing about these drives: will I gain any performance boost by moving from X79 (DMI 1) to Z170 (DMI 3)?
Thanks 😉
Question about endurance/TBW for the 960 EVO 250GB.
It’s rated at 100 TBW, but is that the true limit? For example, I have a Crucial MX300, and according to Crucial’s software and other tools like CrystalDiskInfo, total writes are at 500,000 GB with the drive’s life at around 15%, while Crucial’s website rates it at 160 TBW, so it is clearly way above that.
I am interested in buying the 960 EVO 250GB, so my question is: what is the real TBW of the 960 EVO, i.e. the maximum amount of writes the SSD can take before it dies?
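For rough context on the question above: rated TBW is a warranty threshold rather than the point a drive dies, and endurance tests like the hardware.info one linked earlier show drives commonly going far beyond it. Here's the basic arithmetic, assuming a hypothetical 40 GB/day write rate:

```python
# Rough endurance math. Rated TBW is a warranty figure, not the
# point the drive dies; the daily write rate is an assumption.
RATED_TBW = 100        # 960 EVO 250GB rated endurance (TB written)
DAILY_WRITES_GB = 40   # hypothetical fairly heavy consumer workload

years = RATED_TBW * 1000 / DAILY_WRITES_GB / 365
print(f"~{years:.1f} years to reach the {RATED_TBW} TBW rating")
```

That works out to roughly 6.8 years of writes just to hit the rating, before any headroom beyond it.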