More Confusion, Configuration, and Test System Setup
Quick note: After this article went live we did manage to boot from an Intel SSD VROC array *without* a VROC key installed! Now let's continue.
I'd first like to point out that we have no idea how or why this is even working in the first place. Observe these two conflicting pieces of information. First from the Intel VROC FAQ:
…and second from the ASUS Hyper M.2 X16 Card press slides:
Alright, so we have Intel saying 'No RAID support' without a key, but ASUS saying there is 'No need' for a hardware key for RAID-0. But here I am staring at this:
I can choose any of those options. They all work. They all create usable arrays that report as bootable, though the interface reports that I am in a 90-day 'Trial mode'. Everything seems to work except for boot support actually functioning (which we can only assume is due to the BIOS needing a key installed for that feature), but there are additional points of confusion to bring up:
Once VROC is enabled, the BIOS allows you to configure arrays directly. Note that the pair of Intel 600P SSDs can be selected, while a pair of Samsung 960 PROs cannot. You might expect this, since Intel initially said only Intel SSDs would be supported, however:
Here are those same two SSDs configured in an array within the VROC Windows driver GUI. Note that they also report as bootable (right pane). This array actually works and is completely usable.
After all of this, we are left wondering if we are even using true VROC here. There are multiple points in the 'for' column (needs a driver installed to see any connected SSDs in this mode, etc), but there are also plenty of points in the 'against' column as well (SSDs that shouldn't work yet do, etc). We even have another 'WTF' column (bootability is broken even with Intel products, etc). Maybe it's all just half-baked and incomplete at this point, but hey, it appears to work well enough to throw some tests at it, so I guess we can see how it looks, eh?
Configuration
We've already skimmed a lot of the basics here, but beyond configuring the BIOS, installing the SSDs to the card, and installing the card itself, there are a few additional steps:
While you can configure the array within the BIOS (assuming you have a VROC key so it all works properly), you will likely need to load the 'F6 Driver' during the Windows install in order to see the array. We're all spoiled by operating systems having many common RAID drivers built in, but VROC is a new thing, so be sure to copy those drivers onto your USB installer.
If you used the simple 'F6 driver' while installing Windows directly to a VROC array, or if your OS was new enough to see the array without one, you will still need to install the full VROC driver package once within Windows if you wish to get the RSTe GUI, where you can check on array status and configure additional arrays.
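For anyone scripting the prep work, the 'copy those drivers onto your USB installer' step can be sketched in a few lines of Python. Everything here is an assumption for illustration, not Intel's actual package layout: the staging folder name `VROC_F6`, the example file name, and the idea that only the `.inf`/`.sys`/`.cat` files are needed by setup's "Load driver" dialog.

```python
import shutil
from pathlib import Path

# Hypothetical sketch: gather the F6 driver files from an extracted VROC
# driver package and stage them in a folder on the root of the Windows USB
# installer. Folder and file names are assumptions, not Intel's real layout.
DRIVER_EXTS = {".inf", ".sys", ".cat"}

def stage_f6_drivers(extracted_pkg: Path, usb_root: Path) -> list:
    dest = usb_root / "VROC_F6"
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(extracted_pkg.rglob("*")):
        if f.is_file() and f.suffix.lower() in DRIVER_EXTS:
            copied.append(Path(shutil.copy2(f, dest / f.name)))
    return copied
```

Point the first argument at wherever you extracted Intel's driver package and the second at the USB drive's root; during install, browse to the `VROC_F6` folder when Windows setup asks for a driver.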
Test System Setup
For these tests, we will be using two platforms:
- X299 (VROC) Testing:
- ASUS X299
- Intel Core i9-7960X (16 core, 32 thread. No overclock)
- 32GB DDR4
- Z270 (RST) Testing:
- ASUS Z270
- Intel Core i7-7700K (4 core, 8 thread. No overclock)
- 16GB DDR4
To better represent real-world QD=1 performance, C-states were disabled on both platforms. QD=1 benchmark tests do not load the CPU heavily, so on some platforms the CPU stays at lower clock rates, which negatively impacts storage performance. This occurs because storage benchmarks exercise only the storage and nothing else. Real-world applications would be performing calculations or otherwise doing something with the accessed data, keeping the system at a higher clock rate. Disabling C-states gets us closer to that real-world state while running these simpler tests.
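To make the QD=1 reasoning concrete, here is a minimal, hypothetical sketch of a QD=1 random-read latency loop (not the actual benchmark tool used for these results; POSIX-style `os.pread` is assumed). The key property is that each read is issued only after the previous one completes, so the CPU mostly idles between completions, which is exactly where C-state exit latency and low clocks hurt.

```python
import os
import statistics
import time

# Illustrative QD=1 loop: one outstanding I/O at a time, so per-I/O latency
# gates throughput. Assumes a POSIX-style os.pread (Unix-only in Python).
def qd1_mean_latency(path, block=4096, iters=100):
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    samples = []
    for i in range(iters):
        # Cheap pseudo-random stride so reads are not purely sequential.
        offset = (i * block * 251) % max(size - block, 1)
        start = time.perf_counter()
        os.pread(fd, block, offset)  # blocks until this single I/O completes
        samples.append(time.perf_counter() - start)
    os.close(fd)
    return statistics.mean(samples)
```

On a freshly read file this mostly measures page-cache hits; real benchmarks bypass the cache, but the queue-depth behavior is the same.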
I downloaded the guide, and I think that in the review you might have missed a step to configure VROC. I see that you configured the hardware for connecting multiple drives to a configured set of lanes. In the guide they set specific VMD ports through a specific OCuLink connection, whatever that is. They also configured the Volume Management Device as an OCuLink connection, and they did the same for every CPU the system had. I’m assuming that the ASUS board has the ability to do this with a PCIe 3.0 connection. Correct me if I’m wrong, but I’m assuming that any RAID array created in the RSTe GUI will run under the PCH connection if the VMD ports aren’t linked to the PCIe 3.0 connection in the BIOS.
Does anyone know where I can find the VROC key and its price? Intel says “contact your mainboard manufacturer” and Gigabyte (I have a GA-X299-UD4 with 2 x Samsung 960 PRO) says “contact your dealer”, but I’m the dealer and I can’t find the key!
Thank you!
Hi, I have a couple of questions about bandwidth, if someone can answer them for me:
1. Would I experience a bottleneck with 4 x Samsung 960 Pros if I use this card in an x8 slot rather than an x16 slot? Will it make any noticeable difference?
2. How does this card compare to the DIMM.2 risers on ASUS boards (Rampage VI Apex & Extreme)? The riser card provides 2 PCIe x4 connections directly to the CPU. Does the Hyper M.2 X16 card have additional overhead that would cause more latency than the riser cards?
As far as I know (though without actual empirical experience with 4 x Samsung 960 Pros), to exploit the raw bandwidth of an x16 slot the BIOS/UEFI must support what is called PCIe lane “bifurcation”.
In the ASUS UEFI, it shows up as x4/x4/x4/x4:
https://www.youtube.com/watch?v=9CoAyjzJWfw
In the ASRock UEFI, it shows up as 4×4:
http://supremelaw.org/systems/asrock/X399/
This allows the CPU to access a single x16 slot as four independent x4 PCIe slots.
As such, even if an x8 slot were able to be bifurcated, it would end up as 2×4, or x4/x4, and the other two NVMe SSDs would probably be ignored.
There are some versions of these add-in cards that have an on-board PLX chip, which may be able to address all 4 SSDs even if only x8 PCIe lanes are assigned to an x16 slot by the BIOS/UEFI.
(Also, by shifting the I/O processing to the CPU, this architecture should eliminate the need for dedicated RAID IOP’s on the add-in card.)
Also, a full x16 edge connector may not fit into an x8 mechanical slot.
Ideally, therefore, these “quad M.2” AICs are designed to install in a full x16 mechanical slot that is assigned the full x16 PCIe lanes with bifurcation support in the BIOS/UEFI subsystem.
You should ask this same question of Allyn, because he will surely have more insights to share with us here.
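As a rough sanity check on the x8-vs-x16 question above, the aggregate numbers can be sketched in a few lines of Python. Both figures are assumptions: roughly 0.985 GB/s usable per PCIe 3.0 lane after 128b/130b encoding overhead, and roughly 3.2 GB/s sequential read per 960 Pro.

```python
# Rough aggregate-bandwidth check. Assumed figures: ~0.985 GB/s usable per
# PCIe 3.0 lane, ~3.2 GB/s sequential read per Samsung 960 Pro.
LANE_GBPS = 0.985

def aggregate_read(lanes, drives=4, per_drive_gbps=3.2):
    slot_bw = lanes * LANE_GBPS         # what the slot can carry
    demand = drives * per_drive_gbps    # what the SSDs can supply
    return round(min(slot_bw, demand), 2)

print(aggregate_read(16))  # 12.8 GB/s -> limited by the drives themselves
print(aggregate_read(8))   # 7.88 GB/s -> limited by the x8 slot
```

Note that this treats all four SSDs as reachable; as the bifurcation point above explains, an x8 slot split as x4/x4 may expose only two of them, making the practical gap even larger.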
If anyone is interested, ASRock replied to our query with simple instructions for doing a fresh install of Windows 10 to an ASRock Ultra Quad M.2 card installed in an AMD X399 motherboard. We uploaded that .pdf file to the Internet here:
http://supremelaw.org/systems/asrock/X399/