Internals, Spectrum Controller, Testing Methodology and System Setup
Internals
Outside:
Inside:
Let's spin that top one around…:
If there was any doubt that these are identical parts, that last pic should settle things. The flash is BiCS3 TLC. The controller is a proprietary in-house 'Spectrum' part designed by WD/SanDisk. Note the arrangement here. The controller is centered, which WD claims helps spread heat away from the controller without the need for special copper-layered labels or heat sinks. Let's dive a bit deeper into this new controller:
Spectrum Controller
SSD controllers need to do things differently if there is any hope of beating the hefty competition out there, and Western Digital’s new Spectrum controller employs a range of unique functionality towards this end.
First, WD has restructured the portions of the controller responsible for handling repetitive tasks that are relatively simple on their own. These tasks hang (logically) off of each end of the controller, where it speaks to the host via NVMe or to the NAND. These pathways are handled by Sequencers on the Spectrum controller, and the way WD has sped these processes up can be summed up with a very simple term: ASIC.
Since the NVMe and NAND architectures employed change very infrequently (years), WD can implement those protocols purely in hardware. The downside here is that you lose the ability to firmware update those portions of the SSD, but so long as they get things right the first time, there should be nothing to worry about here.
Another aspect of the Spectrum controller that enjoys ASIC acceleration is the ‘multi-gear’ error correction. As with modern HDDs, NAND always has some degree of error correction at play, and WD has implemented multiple stages of LDPC error correction in hardware. All stages are reasonably quick, but power draw rises as the complexity of the correction increases. If a given error is too large, it ‘falls through’ the LDPC stages, and the CPU / firmware kicks in to attempt additional DSP tricks (read-threshold adjustment, etc.) to read the data successfully. A last-ditch option is a ‘RAID-like’ recovery, where some XOR parity data may be available for the affected page(s). It’s not a full RAID-5 across flash memory dies, but anything that improves your chances of recovery is a good thing.
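To make the ‘fallthrough’ behavior concrete, here is a minimal sketch of a staged recovery loop. The stage names, success probabilities, and power costs are purely illustrative assumptions, not figures from WD's firmware; the point is the escalation from cheap hardware LDPC gears to expensive firmware and XOR-based recovery.

```python
import random

# Hypothetical cost model for each recovery stage, mirroring the 'multi-gear'
# fallthrough described above: cheap hardware LDPC gears first, then firmware
# DSP tricks, then a last-ditch XOR rebuild. All numbers are made up.
STAGES = [
    ("LDPC gear 1 (HW)",   0.90, 1),    # (name, chance of success, relative power cost)
    ("LDPC gear 2 (HW)",   0.70, 4),
    ("LDPC gear 3 (HW)",   0.50, 10),
    ("FW DSP (Vth shift)", 0.40, 50),
    ("XOR rebuild",        0.30, 200),
]

def recover_page(rng: random.Random):
    """Try each stage in order; return (stage_name, total_cost) on success,
    or (None, total_cost) if every stage fails (unrecoverable read error)."""
    total_cost = 0
    for name, p_success, cost in STAGES:
        total_cost += cost          # each attempted stage adds its power cost
        if rng.random() < p_success:
            return name, total_cost
    return None, total_cost

rng = random.Random(42)
stage, cost = recover_page(rng)     # most reads succeed at the cheapest gear
```

Most pages are cleaned up by the first gear at minimal cost; only the rare badly-damaged page pays for the expensive stages, which is the whole appeal of the tiered design.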
Before we move on, let’s touch on caching. The bulk media on these new parts is TLC, but there is an SLC cache. nCache 3.0 is WD’s label for the caching scheme here, and it functions similarly to other high-end SLC caching SSDs. Incoming writes go to the cache whenever possible, and that cache is emptied (folded) into the TLC space during any idle periods. Should the SSD see enough sustained writes that the cache becomes saturated, it shifts to ‘direct to die’ TLC writes, which are slower, but still faster than the case where the SLC cache was being simultaneously filled and emptied. In these cases, it’s more efficient to just let the earlier data sit idle in the cache area until the workload subsides and the cache can be emptied during the next idle period.
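The caching behavior above can be sketched as a simple routing decision. This is an illustrative model, assuming a hypothetical `SlcCachedSsd` class and an arbitrary 12GB cache size; nCache 3.0's real capacities and folding policy are not public in this detail.

```python
# Minimal sketch of an SLC write cache in front of TLC media, illustrating the
# nCache 3.0 behavior described above. Class name and sizes are illustrative.
class SlcCachedSsd:
    def __init__(self, cache_capacity_gb: float):
        self.cache_capacity = cache_capacity_gb
        self.cache_used = 0.0

    def write(self, size_gb: float) -> str:
        """Route a write: SLC cache while it has room, else direct-to-die TLC."""
        if self.cache_used + size_gb <= self.cache_capacity:
            self.cache_used += size_gb
            return "slc"   # fast cached write
        return "tlc"       # cache saturated: slower, but avoids simultaneous
                           # fill-and-fold thrashing of the cache

    def idle(self):
        """Host idle: fold cached data into TLC and free the cache."""
        self.cache_used = 0.0

ssd = SlcCachedSsd(cache_capacity_gb=12)
modes = [ssd.write(4) for _ in range(4)]  # 4 x 4GB sustained burst
# the first three writes fit the 12GB cache; the fourth goes direct to TLC
ssd.idle()                                # cache folded; fast writes resume
```

Note the deliberate choice in `write()`: once saturated, earlier data sits in the cache untouched until the next idle period, matching the ‘more efficient to let it sit’ behavior described above.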
Testing Methodology
Our tests are a mix of synthetic and real-world benchmarks. IOMeter, HDTach, HDTune, Yapt and our custom File Copy test round out the selection to cover just about all bases. We have developed a custom test suite as off-the-shelf tests just no longer cut it for in-depth storage testing. More details on the next page. If you have any questions about our tests just drop into the Storage Forum and we'll help you out!
Test System Setup
We have several storage testbeds: a newer ASUS P8Z77-V Pro/Thunderbolt and a Gigabyte Z170X SOC Force (for RAID testing). Future PCIe and SATA device testing, including this review, takes place on an ASUS Sabertooth X99, which comes equipped with USB 3.1 and M.2, and can also handle SFF-8639 (U.2) devices with the proper adapter.
PC Perspective would like to thank Intel, ASUS, Gigabyte, Corsair, Kingston, and EVGA for supplying some of the components of our test rigs.
| Hard Drive Test System Setup | |
| --- | --- |
| CPU | Intel Core i7 5820K @ 4.125 GHz |
| Motherboard | ASUS Sabertooth X99 |
| Memory | 16GB Micron DDR4 @ 3333 |
| Hard Drive | G.Skill 32GB SLC SSD |
| Sound Card | N/A |
| Video Card | EVGA GeForce GTX 750 |
| Video Drivers | GeForce Game Ready Driver 347.88 |
| Power Supply | Corsair CMPSU-650TX |
| DirectX Version | N/A |
| Operating System | Windows 8.1 Pro X64 (update) |
I'm about to build a new system, and all these new NVMe drives coming out are starting to make the Samsung 960 EVO look antiquated. What to do?
Given the random read (low QD) performance falls slightly behind the 960 EVO, I'd consider both products roughly equal and go for the lower cost/GB unless you wanted the more proven (Samsung) part. Josh found 960 EVOs on sale at Newegg for $0.40/GB last night, so in that moment I'd go with the EVO.
Second chart on “Performance Focus – Western Digital WD Black NVMe 1TB SSD” page is shown as Throughput, but should be IOPs (unless these drives are magically pushing over 300GBps 🙂 ).
Ooh, good catch. That chart has been wrong for a *long* time apparently…
Great review and very solid drive.
But pardon my ignorance, how are the thermals (do you have a FLIR)? Any thermal throttling?
This drive runs cool enough that WD didn't even need to use a copper-layered label as some other SSDs do, so I wouldn't consider it a concern. The controller has the capability to throttle if it needs to, but you'd have to be unrealistically hard on it to get to that point. This is the case with most M.2 SSDs – folks run a continuous storage test on them for minutes at a time and then complain about throttling, but nothing other than benchmarks hits the SSD that hard.
Maybe I am missing something, but why does the Mixed Burst section have a screenshot of an OCZ drive when the article is about WD/Sandisk drives?
It's a pic comparing a drive that has a harder time with the workload (left) to a faster drive that executes more quickly and consistently over time (right).
Hmm, I dunno. I feel like the 760p has higher random and sequential read performance while costing less, although there is still no 1TB option.
You're right there – the 760P does run closer to the Samsung parts in read performance, and also is competitive on cost, but not available in 1TB. I was trying to stick with a sampling of various SSDs at or above the 1TB capacity point but some models we have only tested 512GB (the previous WD Black), and the charts get too cluttered if we go higher than 10.
Why are they taking so long for 1tb? 🙁 I might even want 2tb in the future… Or a 4tb MX500. Is it the controller?
I suspect that the issue is limited space for the dies which are required to support larger capacities.
I suspect that to be the case for the Intel since it’s m.2, but for MX500? I think there’s more room in there.
My X79 mobo predates M.2, so I used an Intel 750. With no NVMe boot option, Windows and those calls come from a SATA SSD, while programs and the swap file are on the 750. I know this 'parallel' fetching isn't meaningful, and the whole system is very fast (4930K – I only buy if I have to).
I remember an early m.2 mobo (Asus) that stood the drive up in the path of the front cooling fan, but heat doesn’t seem to be much of an issue with the ones lying down.
I have looked at all the SSD reviews out there, and the only two that stand out are PC Perspective and AnandTech, the reason being that you actually devise tests to suit the underlying architecture rather than running run-of-the-mill benchmark suites.
Would it be possible to specify under System Setup if the drive is plugged on the motherboard’s M.2 slot or is on a PCIe add-in card?
Also it would be nice if for the top 10 drives you could show the difference in latency based on whether the drive is used via M.2 PCIe AIC adapter vs M.2 through the PCH linked to CPU via DMI3.
I second Jabbadap's request for thermal data. I agree that in real-world systems you can't heat up a drive, but I am more interested in systems used in harsh environments. The idea is that a drive that generates less of its own heat is likely to perform better in hotter ambient temperatures. I know one can always stick an M.2 cooler on, but since you are pushing the drives during testing, it is simply a matter of fixing a thermal camera aimed at the drive under test.
Once again, I really appreciate your testing methodology.