ASUS Hyper M.2 X16 Card Closer Look, Wrap Up and Conclusion
ASUS Hyper M.2 X16 Card Closer Look
Before we wrap, let's take a closer look at the nicely built ASUS Hyper M.2 X16 Card:
Nice brushed aluminum heatsink.
A few screws removed and we're in.
The layout is very simple as no PCIe switch is needed. Each M.2 x4 socket is wired straight to a set of four PCIe lanes at the x16 connector. No fuss, no muss.
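As a minimal sketch of that straight-through wiring (the socket-to-lane-group ordering here is our assumption for illustration, not something printed on the card):

```python
# Passive quad M.2 carrier: no PCIe switch, so each M.2 socket is hard-wired
# to its own group of four lanes on the x16 edge connector.
# NOTE: the socket ordering below is assumed purely for illustration.
LANES_PER_SOCKET = 4

for socket in range(1, 5):
    first = (socket - 1) * LANES_PER_SOCKET
    last = first + LANES_PER_SOCKET - 1
    print(f"M.2 socket {socket} -> lanes {first}-{last} of the x16 connector")
```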
At the backplate, we have power/activity indicators for the four sockets, as well as a switch to turn off the blower fan. With such a large heatsink and the fact that M.2 SSDs rarely draw more than 6W each at full load, most users will probably be fine with the fan switched off.
Wrap Up
To wrap up, we've extracted as much useful test data as possible from this early look at VROC, and we can come away with some generalized points:
- Intel Optane Memory wins big on IOPS and Latency.
- Samsung 960 PRO wins big on sequential performance.
- Z270 sees the lowest possible NVMe latency, but only if you bypass the PCH.
- Z270 has the lowest NVMe latency in RST arrays.
- X299 VROC enables the highest possible sequential performance (no DMI bottleneck).
- X299 VROC fails on ease of use (keys required for features that are standard on lesser platforms?!?).
- X299 VROC random performance is adequate**.
**Remember, we are testing VROC in a not-officially-released form here. There may be significant optimizations made before it reaches a final state.
Conclusion
Well, there we have it. Pre-release X299 VROC put through its paces and compared against Z270 RST. As it stands, both solutions have their pros and cons. Z270 wins on IOPS and latency, while X299 looks like it will see sequential transfers scale until you've run out of SSDs to throw at it. Just don't get too carried away if you want to boot from the VROC array, as that is limited to a single VMD (currently 4x NVMe SSDs installed on an ASUS Hyper M.2 X16 Card). We do still have that big elephant in the room – that pesky VROC key that users will have to purchase just to officially enable the features Z270 owners were already getting for free. We won't know until release whether final VROC will enjoy lower latencies, but at present Z270 wins that race – even if it is DMI limited to ~3.6GB/s while a bootable VROC volume can hit 13GB/s (*if* Intel decides to support booting from 960 PROs, that is).
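For a rough sense of where those two ceilings come from, here is a back-of-the-envelope sketch. The per-lane figure is the PCIe 3.0 theoretical rate, and the ~3.5GB/s per-SSD number is Samsung's rated sequential read for the 960 PRO, so the totals are estimates rather than measurements:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth ceilings (theoretical, not measured).
# 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s of payload per lane.
GB_PER_SEC_PER_LANE = 8 * (128 / 130) / 8   # ~0.985

dmi_x4    = 4 * GB_PER_SEC_PER_LANE    # ~3.9 GB/s ceiling behind the PCH (real world ~3.6)
slot_x16  = 16 * GB_PER_SEC_PER_LANE   # ~15.8 GB/s ceiling for a CPU-attached x16 slot
four_ssds = 4 * 3.5                    # ~14 GB/s if four 960 PROs hit their rated seq. read

print(f"DMI x4 ceiling:      {dmi_x4:.1f} GB/s")
print(f"x16 slot ceiling:    {slot_x16:.1f} GB/s")
print(f"4x 960 PRO (rated):  {four_ssds:.1f} GB/s")
```

In other words, the ~13GB/s figure sits much closer to the SSDs' own limit than to the x16 slot's, while anything routed through the PCH tops out near the DMI figure no matter how many SSDs you add.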
A note to Intel:
If those making the VROC key decisions happen to read this, I believe a more reasonable solution to your segmentation problem would be to enable all RAID modes for a single VMD (four SSDs / a single bootable array using an x16 card). Remember, some of these X299 buyers are just power users who want storage feature parity with Z270. They don't need 20-SSD RAIDs, but they probably expect something like RAID 0/1/5 across four SSDs without having to buy a key to do so. If folks want additional arrays or to span multiple VMDs, *then* make them get a key. This way your X299 platform has slightly better NVMe RAID support than Z270, while not encroaching on your enterprise market share. You've segmented keys this way in the past (4/8 ports on C600), so surely it is doable with VROC. Please don't make someone buy a premium key just to do a 3 or 4 SSD bootable RAID 5 on a single VMD, especially if a hardware RAID card capable of the same could potentially be had for less than the cost of your premium key.
I hate to spoil it for those who spent most of this article drooling over the 13GB/s figures, but if you feel the 4GB/s DMI limit is worth the latency/responsiveness benefit, this chart likely agrees with you:
I downloaded the guide and I think that in the review you might have missed a step to configure VROC. I see that you configured the hardware to connect multiple drives to a configured set of lanes. In the guide, they set specific VMD ports through a specific OCuLink connection, whatever that is. They also configured the Volume Management Device as an OCuLink connection. They did the same for every CPU in the system. I’m assuming that the ASUS board has the ability to do this with a PCIe 3.0 connection. Correct me if I’m wrong, but I’m assuming that any RAID array created in the RSTe GUI will run through the PCH connection if the VMD ports aren’t linked to the PCIe 3.0 connection in the BIOS.
Does anyone know where I can find the VROC key and its price? Intel says “contact your mainboard manufacturer” and Gigabyte (I have a GA-X299-UD4 with 2 x Samsung 960 PRO) says “contact your dealer”, but I’m the dealer and I can’t find the key!
Thank you!
Hi, I have a couple questions about bandwidth if someone can answer them for me:
1. Would I experience a bottleneck with 4 x Samsung 960 Pros if I use this card in an x8 slot rather than an x16 slot? Will it make any noticeable difference?
2. How does this card compare to the DIMM.2 risers on ASUS boards (Rampage VI Apex & Extreme)? The riser card provides 2 PCIe x4 connections directly to the CPU. Does the Hyper M.2 X16 card have additional overhead that would cause more latency than the riser cards?
As far as I know (though without actual empirical experience with 4 x Samsung 960 Pros), to exploit the raw bandwidth of an x16 slot the BIOS/UEFI must support what is called PCIe lane “bifurcation”.
In the ASUS UEFI, it shows up as x4/x4/x4/x4:
https://www.youtube.com/watch?v=9CoAyjzJWfw
In the ASRock UEFI, it shows up as 4×4:
http://supremelaw.org/systems/asrock/X399/
This allows the CPU to access a single x16 slot as four independent x4 PCIe slots.
As such, even if an x8 slot were able to be bifurcated, it would end up as 2×4, or x4/x4, and the other 2 NVMe SSDs would probably get ignored.
There are some versions of these add-in cards that have an on-board PLX switch chip, which may be able to address all 4 SSDs even if only 8 PCIe lanes are assigned to an x16 slot by the BIOS/UEFI.
(Also, by shifting the I/O processing to the CPU, this architecture should eliminate the need for dedicated RAID IOPs (I/O processors) on the add-in card.)
Also, a full x16 edge connector may not fit into an x8 mechanical slot.
Ideally, therefore, these “quad M.2” AICs are designed to install in a full x16 mechanical slot that is assigned the full x16 PCIe lanes with bifurcation support in the BIOS/UEFI subsystem.
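If it helps, here is a minimal sketch of the lane arithmetic described above (purely illustrative; it just divides the slot's electrical lanes into x4 groups, one per NVMe SSD):

```python
# How many of the card's four M.2 sockets the CPU can reach, given the
# electrical width of the slot and x4-per-device bifurcation.
def reachable_ssds(electrical_lanes, installed_ssds=4):
    x4_groups = electrical_lanes // 4   # each NVMe SSD needs its own x4 group
    return min(x4_groups, installed_ssds)

for lanes in (16, 8, 4):
    print(f"x{lanes} slot -> {reachable_ssds(lanes)} of 4 SSDs addressable")
# x16 -> 4, x8 -> 2 (the other two SSDs never enumerate), x4 -> 1
```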
You should ask this same question of Allyn, because he will surely have more insights to share with us here.
If anyone is interested, ASRock replied to our query with simple instructions for doing a fresh install of Windows 10 to an ASRock Ultra Quad M.2 card installed in an AMD X399 motherboard. We uploaded that .pdf file to the Internet here:
http://supremelaw.org/systems/asrock/X399/