Al has already exhaustively covered the new Samsung 960 Pro in his latest article, which also happens to be the premiere of PC Perspective's new storage testing suite. An in-depth discussion of the new testing methodology can be found on the third page, and you can expect to hear about it on our podcast tomorrow, and perhaps in a standalone article in the near future. Several comments have asked what effect this drive would have on a system used for gaming or multimedia, and how it would compare to drives like the Intel 750 and DC P3700 or OCZ's RD 400. The best place to find those comparisons is over at The Tech Report, whose RoboBench transfer test features a long list of drives. Check it out once you have finished our article.
"Samsung's 960 Pro follows up on last year's 950 Pro with denser V-NAND, a brand-new controller, and space-age label technology. We put this drive to the test to see whether its performance is truly out-of-this-world."
Here are some more Storage reviews from around the web:
- Samsung SSD960 PRO 2TB M.2 PCIe NVMe SSD @ Kitguru
- WD Blue SSD Review (1TB) @ Kitguru
- Crucial MX300 M.2 525GB SSD @ eTeknix
- Seagate BarraCuda Pro 10TB SATA III HDD Review @ NikKTech
Unfortunately, TechReport’s NVMe SSDs are held back by the DMI 2.0 link on the Z97 chipset they’re using for testing, so that limits some of the difference you might see on the higher-end drives.
I would really like to see PCPer do a test of the DMI link to see how it handles saturation. For example, have the 960 Pro and a few SATA SSDs do linear writes at the same time while using ASIO on a PCI Express x1 sound card. Will it prioritize the sound card, or will it suffer from degraded performance?
With modern SSDs now beginning to turn DMI into a bottleneck, it would be good to know how it impacts other devices that share its throughput.
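For rough context, the link budgets involved can be sketched in a few lines of Python. These are published link rates, not measurements, and the ~3.5 GB/s figure is Samsung's rated sequential read for the 960 Pro:

```python
# Back-of-envelope DMI link budgets (rated figures, not measurements).
def link_gb_s(lanes, gt_per_s, payload_bits, total_bits):
    """Usable GB/s for a PCIe-style link after encoding overhead."""
    return lanes * gt_per_s * (payload_bits / total_bits) / 8

# DMI 2.0 is equivalent to PCIe 2.0 x4: 5 GT/s per lane, 8b/10b encoding
dmi2 = link_gb_s(4, 5, 8, 10)       # 2.0 GB/s
# DMI 3.0 is equivalent to PCIe 3.0 x4: 8 GT/s per lane, 128b/130b encoding
dmi3 = link_gb_s(4, 8, 128, 130)    # ~3.94 GB/s

print(f"DMI 2.0: {dmi2:.2f} GB/s, DMI 3.0: {dmi3:.2f} GB/s")
# A single 960 Pro is rated at ~3.5 GB/s sequential read, so even a
# DMI 3.0 link leaves little headroom for SATA, network, and audio traffic.
```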
That would be really good to know, actually. Network activity could also be impacted, but yeah, I could see heavy SSD usage possibly causing popping or other sound problems with PCI Express sound cards. I’m certain there’s no QoS on DMI 🙂
If you actually read their review, you’d see that TechReport uses a PCIe-to-M.2 adapter board in the PCIe 3.0 x16 graphics slot for testing M.2 NVMe drives, connecting directly to the CPU with full bandwidth. DMI has nothing to do with it.
Hmmm, re-reading this again, you are correct. I read the notes for their test suite, saw references to Gen2 PCIe cards, and apparently missed the detail on page 6 of this review. I stand corrected. Sorry for the bad information, folks.
I would like to ask a favor of everyone reading this thread:
HighPoint has now announced a very interesting
RocketRAID model 3840A NVMe RAID controller with
a full x16 edge connector, compatible with PCIe 3.0:
I’ve been posting lots of comments about it on the Internet,
but to date I have still not found any reviews of it.
You can all help by writing to HighPoint sales
to ask them when this 3840A NVMe RAID controller
will actually be available for reviews.
It should be relatively easy to cable this controller
to a 2.5″-to-M.2 enclosure, like this one by Syba:
That assembly will permit different NVMe M.2 SSDs
to be measured and compared.
I would LUV to turn Allyn loose with his battery
of comprehensive benchmarks.
Here are some parametrics to chew on:
x16 @ 8 GT/s per lane / 8.125 bits per byte = 15.75 GB/second
With 4 TIMES as many PCIe 3.0 lanes,
a full x16 edge connector has
4 TIMES the upstream bandwidth of a DMI 3.0 link.
Thus, even allowing for aggregate controller overhead,
a RAID-0 array using the 3840A and 4 x 960 Pro SSDs
should achieve a raw READ speed in excess of 10,000 MB/sec.
That is almost exactly the same as the raw bandwidth
of DDR3-1333 DRAM:
1333 x 8 = 10,664 MB/second
1600 x 8 = 12,800 MB/second
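The arithmetic above can be verified in a few lines. This is a sketch using the same divisor of 8.125 bits per byte, which comes from PCIe 3.0's 128b/130b encoding:

```python
# PCIe 3.0 x16: 16 lanes x 8 GT/s, 128b/130b encoding -> 8.125 bits/byte
pcie_x16 = 16 * 8 / 8.125     # ~15.75 GB/s
# DMI 3.0 is the equivalent of a PCIe 3.0 x4 link
dmi3 = 4 * 8 / 8.125          # ~3.94 GB/s
ratio = pcie_x16 / dmi3       # 4.0 -- the "4 TIMES" figure above

# DDR3 peak transfer rates: MT/s x 8 bytes per 64-bit transfer
ddr3_1333 = 1333 * 8          # 10,664 MB/s
ddr3_1600 = 1600 * 8          # 12,800 MB/s

print(f"PCIe 3.0 x16: {pcie_x16:.2f} GB/s ({ratio:.0f}x DMI 3.0); "
      f"DDR3-1333: {ddr3_1333} MB/s; DDR3-1600: {ddr3_1600} MB/s")
```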