Performance Comparisons – TRIM Speed
Thanks to the wealth of data at our disposal from the new suite, I can derive some additional interesting results that nobody seems to have been paying attention to yet. Have you ever deleted a large file and then noticed your system seeming to hang for some time afterward? Maybe file moves from your SSD took longer than expected?
That's your problem right there. In the above capture, a 16GB file was deleted while a minimal level of background IO was taking place. Note how that IO completely stalls for a few seconds shortly after the file was deleted? That's a bad thing. We don't want that, but to fix it, someone needs to measure it and point it out. Enter another aspect of our testing:
Latency Percentile data was obtained by running a 'light' (1000 IOPS) workload in the background while files of varying sizes were deleted. The amount of latency added during the deletions was measured, compared against a baseline, and correlated with the sizes of the deleted files. The result is how much latency is added to the active workload per GB of deleted file size. In short, this is how long of a stutter you may notice after deleting a 1GB file.
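To make the arithmetic concrete, here is a minimal sketch of how such a figure could be derived. This is not the suite's actual code, and the baseline latency and per-deletion samples are made-up numbers for illustration only:

```python
# Hypothetical sketch of the 'added latency per GB deleted' metric.
# Baseline: mean latency of the 1000 IOPS background workload with no deletions running.
baseline_latency_ms = 0.12           # assumed baseline mean latency (ms)

# (deleted file size in GB, mean latency observed during that deletion in ms) - assumed values
deletion_samples = [
    (2.0, 0.36),
    (8.0, 1.08),
    (16.0, 2.04),
]

# Added latency per GB for each sample, then averaged across the sampled file sizes.
per_gb = [(lat - baseline_latency_ms) / size for size, lat in deletion_samples]
added_latency_per_gb_ms = sum(per_gb) / len(per_gb)
print(f"{added_latency_per_gb_ms:.3f} ms of added latency per GB deleted")
```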
To avoid confusion, I've maintained the performance-based sort from the mixed test for these charts. Here you can see that some drives that performed well on that test stick out a bit when it comes to how they handle TRIM. Ideally, these results should all be as close to 0.000 as possible; higher figures translate to longer performance dips after files have been moved or deleted.
The new WD Black and SanDisk Extreme PRO turned in the lowest figures we've seen from this highly sensitive test, with the Black actually reaching a score of 0. Looks like that NVMe ASIC is doing its job very effectively!
This is another result derived from a different set of data. While our suite runs, it issues a full-drive TRIM several times, sometimes on an empty SSD and other times on a full one. The difference in time taken is measured and normalized to a response time per GB TRIMmed. In short, this is how long an otherwise idle SSD would hang upon receiving a TRIM command for a 1GB file. These times are shorter than those in the previous chart because the SSD controller does not have to juggle the TRIM with background activity and can throw all of its resources at the request.
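As a rough sketch of that normalization (again with assumed timings, not our measured values):

```python
# Hypothetical sketch of the idle full-drive TRIM metric.
trim_time_empty_s = 0.5              # assumed: full-drive TRIM on an empty SSD (seconds)
trim_time_full_s = 45.5              # assumed: full-drive TRIM on the same SSD when full
drive_capacity_gb = 1000             # assumed drive capacity

# The difference in time is attributable to the valid data being deallocated,
# so dividing by the capacity TRIMmed yields a per-GB response time.
seconds_per_gb = (trim_time_full_s - trim_time_empty_s) / drive_capacity_gb
print(f"{seconds_per_gb * 1000:.1f} ms of idle hang per GB TRIMmed")
```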
All SSDs do well here except for the MX500 and the 860 PRO.
I'm about to build a new system, and all these new NVMe drives coming out are starting to make the Samsung 960 EVO look antiquated. What to do?
Given that the random read (low QD) performance falls slightly behind the 960 EVO, I'd consider both products roughly equal and go for the lower cost/GB unless you wanted the more proven (Samsung) part. Josh found 960 EVOs on sale at Newegg for $0.40/GB last night, so in that moment I'd go with the EVO.
The second chart on the “Performance Focus – Western Digital WD Black NVMe 1TB SSD” page is shown as Throughput, but should be IOPS (unless these drives are magically pushing over 300GBps 🙂 ).
Ooh, good catch. That chart has been wrong for a *long* time apparently…
Great review and very solid drive.
But pardon my ignorance, how are the thermals (do you have a FLIR)? Any thermal throttling?
This drive runs cool enough that WD didn't even need to use a copper-layered label as some other SSDs do, so I wouldn't consider it a concern. The controller has the capability to throttle if it needs to, but you'd have to be unrealistically hard on it to get to that point. This is the case with most M.2 SSDs – folks run a continuous storage test on them for minutes at a time and then complain about throttling, but nothing other than benchmarks hits the SSD that hard.
Maybe I am missing something, but why does the Mixed Burst section have a screenshot of an OCZ drive when the article is about WD/Sandisk drives?
It's a pic comparing a drive that has a harder time with the workload (left) to a faster drive that executes more quickly and consistently over time (right).
Hmm, I dunno. I feel like the 760p has higher random and sequential read speeds while costing less, although there is still no 1TB option.
You're right there – the 760P does run closer to the Samsung parts in read performance and is also competitive on cost, but it's not available in 1TB. I was trying to stick with a sampling of various SSDs at or above the 1TB capacity point, but for some models we have only tested the 512GB version (the previous WD Black), and the charts get too cluttered if we go higher than 10.
Why are they taking so long for 1TB? 🙁 I might even want 2TB in the future… Or a 4TB MX500. Is it the controller?
I suspect that the issue is limited space for the dies which are required to support larger capacities.
I suspect that to be the case for the Intel since it's m.2, but for the MX500? I think there's more room in there.
My X79 mobo was before m.2, so I used an Intel 750. With no NVMe boot option, Windows and its calls come from a SATA SSD, while programs and the swap file are on the 750. I know this ‘parallel’ fetching isn’t meaningful, and the whole system is very fast (4930K – I only buy if I have to).
I remember an early m.2 mobo (Asus) that stood the drive up in the path of the front cooling fan, but heat doesn’t seem to be much of an issue with the ones lying down.
I have looked at all the SSD reviews out there, and the only two that stand out are PC Perspective and AnandTech. The reason being that you actually devise tests to suit the underlying architecture rather than running run-of-the-mill benchmark suites.
Would it be possible to specify under System Setup whether the drive is plugged into the motherboard’s M.2 slot or is on a PCIe add-in card?
Also, it would be nice if, for the top 10 drives, you could show the difference in latency based on whether the drive is used via an M.2 PCIe AIC adapter vs. M.2 through the PCH, which is linked to the CPU via DMI 3.0.
I second Jabbadap’s request for thermal data. I agree that in real-world systems you can’t heat up a drive, but I am more interested in systems used in harsh environments; the idea being that a drive that generates less of its own heat is likely to perform better at hotter ambient temperatures. I know one can always stick an M.2 cooler on, but since you are pushing the drives during testing, it is simply a matter of keeping a thermal camera aimed at the drive under test.
Once again, I really appreciate your testing methodology.