We've been playing around a bit with Intel VROC lately. This new tech lets you create a RAID of NVMe SSDs connected directly to newer Intel Skylake-X CPUs, without the assistance of any additional chipset or other RAID-controlling hardware on the X299 platform. While the technology is not fully rolled out, we did manage to get it working and test a few different array types as a secondary volume. One of the pieces of conflicting info we had been trying to clear up was whether you can boot from a VROC array without the currently unobtanium VROC key…
Well, it seems that question has been answered with our own tinkering. While there was absolutely no indication in the BIOS that our Optane Memory quad RAID-0 was bootable (the array is configurable but does not appear in the bootable devices list), I'm sitting here looking at Windows installed directly to a VROC array!
Important relevant screenshots below:
For the moment this will only work with Intel SSDs. Intel's VROC FAQ states that 'selected third-party SSDs' will be supported, but it is unclear whether that includes bootability (future support changes would come as BIOS updates, since they must be applied at the CPU level). We're still digging into VROC as well as AMD's RAID implementation. Much more to follow, so stay tuned!
They finally listened to your DRM warcry and are softening up their defenses.
Next objective is to get them out of the closet.
Oh, this had nothing to do with me. Nothing has been updated since I posted the last article. Heck, it could be something that's not supposed to be allowed for all we know!
Is it possible that the motherboard includes the regular, but not the premium key? Or is it in some kind of free trial mode?
The motherboard is in pass-through mode, but RAID-0 *may* be supported without the key at all.
I can’t recall any of Intel’s marketing material ever stating that the key was needed for bootable VROC, only that it was needed for non-RAID-0 arrays.
YOU GO, Allyn!
G-R-E-A-T stuff.
100GB is a decent C: system partition.
Allyn is the absolute best we have, Go Man Go! 🙂
I’m using a single 32GB Optane for my X99 workrig right now, with amazing results.
I tried to RAID two Optane modules, but X99 would only recognize a single drive.
Upgrading to Z370 Maximus 10 Extreme and 8700K as soon as I can, and adding an Intel Optane 900P SSD into slot number 4. 🙂
Thank you Allyn,
You are da man of the hour,
peace and love.
Do you mean your X299 platform? I didn’t think the optane drives were compatible with the older X99 platform.
“One of the pieces of conflicting into we had been trying to clear up was can you boot from a VROC array without the currently unobtanium VROC key…”
Conflicting into? I assume you mean info.
In the earlier article, you said something about a 90 day free trial or something. Does the RAID just stop working after 90 days?
The trial appears to only affect volumes created within the GUI that go beyond what the installed key allows. The trial stuff can't apply to an array seen by the BIOS, as we're way before any trial counters within the installed OS driver at that point. The conflict I was referencing was that the Intel VROC FAQ states that pass-through mode won't even support RAID-0.
Any idea how much of a CPU performance hit you would take in RAID 5 mode for the parity calculations? Is that trivial these days? I was thinking it might not be trivial at many GB per second. I think I would rather have an actual, completely hardware RAID card, but I probably don’t want to pay for one.
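For readers wondering what that parity work actually is: RAID-5's parity chunk is a plain XOR across the data chunks in a stripe, and the same XOR reconstructs a lost chunk, which is why modern CPUs handle it at memory-bandwidth speeds. A minimal Python sketch (chunk count and size are arbitrary, not tied to any specific product discussed here):

```python
import os

def xor_chunks(chunks):
    """XOR equal-length byte chunks together (RAID-5 parity is exactly this)."""
    acc = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            acc[i] ^= b
    return bytes(acc)

# A 3+1 stripe: three data chunks plus one parity chunk.
data = [os.urandom(4096) for _ in range(3)]
parity = xor_chunks(data)

# Losing any one chunk is recoverable by XOR-ing the survivors with parity.
recovered = xor_chunks([data[0], data[2], parity])
assert recovered == data[1]
```

Real implementations (Linux md, VROC) do the same XOR with wide SIMD instructions rather than a byte loop, which is why the per-core cost is small relative to NVMe throughput.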
With so many multi-core CPUs available now,
there is a very real probability that one or more
of those multiple cores is idle and available
to do the processing necessary to support
software RAID arrays. Also, it appears from
initial measurements that AMD’s X399 chipset
has a UEFI feature that allows “interleaving”
between 2 or more x16 PCIe slots. As such,
the era of dedicated hardware RAID IOPs
may be waning in the face of these super powerful
multi-core CPUs like AMD’s Threadripper.
If I had to guess, I would speculate that these
factors played heavily in the minds of the ASUS
engineers who designed their Hyper M.2 X16 Card.
Supporting bifurcation / quad-furcation in the
chipset also eliminates the need for a PLX-type
switch to be integrated onto the card’s PCB.
Here, compare Highpoint’s SSD7101A-1, which does
have a PLX chip. In general, motherboard vendors
need to embrace a goal of supporting all modern
RAID modes for as many NVMe SSDs as their motherboards
can accommodate. And, that appears to be the case
for AMD’s recently announced NVMe RAID support,
albeit only for their top-end X399 chipset.
Expect this technology to trickle down over time.
We can also predict natural evolutions, e.g.
an ASUS DIMM.2 slot that accepts an AIC with 4 x M.2s.
I like the features on that Gigabyte MZ31-AR0 Extended ATX Server Socket SP3/Epyc Motherboard. And that’s plenty of PCIe 3.0 connectivity: four full x16 slots, one x16 (x8 electrical) slot, and two PCIe 3.0 x8 slots. So maybe someone will get this and do some benchmarking with the Epyc 7401P 24-core/48-thread CPU SKU. The Epyc 7401P, at $1075, is only $76 more than a Threadripper 1950X, and that Epyc/SP3 single-socket MB supports twice the PCIe lanes (128) and twice the memory channels (8) of any TR/X399 MB SKU. The Gigabyte Epyc/SP3 MB also supports dual 10Gb Ethernet plus one 1Gb port and a lot of other workstation-grade features that the consumer MBs can’t match.
That Gigabyte MZ31-AR0 is back in stock at Newegg and others, and at Newegg it only costs $610, which is not bad for the features it offers(1). And that includes actual certification/validation for ECC memory use, with that support covered by the warranty, unlike the consumer variants that may “support” ECC but are not certified/validated to do so.
(1)
“MZ31-AR0 (rev. 1.0)”
http://b2b.gigabyte.com/Server-Motherboard/MZ31-AR0-rev-10#ov
Is there enough room between the DIMM slots
and the x16 PCIe slots on that MZ31-AR0?
I set up a RAID once and had no idea what to put for the “Data stripe size” and googling just brought up Tom’s Hardware forum idiots pretending like they knew what they were talking about.
I left it at 4k because the sticker on the HDDs said 4k Advanced Format, and it seems to work okay. I’d just like to have things optimized so I’m not leaving performance on the table.
It really just boils down to a few things you have to compromise on. Generally, it's best to go with the default for the given setup, unless you are willing to put in some additional time tuning for your specific workload.
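On the stripe-size question above: the stripe size only controls how logical addresses map across member disks, which is why small stripes split a single large request across all members while large stripes keep it on one disk. A toy RAID-0 address map in Python (simplified, no metadata or parity, purely for illustration):

```python
def raid0_map(offset, stripe_size, n_disks):
    """Map a logical byte offset to (disk index, byte offset on that disk)
    for a simple RAID-0 layout."""
    stripe_index = offset // stripe_size   # which stripe the offset falls in
    within = offset % stripe_size          # position inside that stripe
    disk = stripe_index % n_disks          # stripes rotate round-robin
    rows_done = stripe_index // n_disks    # full rows already laid down
    return disk, rows_done * stripe_size + within

KiB = 1024
# With 64 KiB stripes on 4 disks, a 256 KiB sequential read spans each disk
# exactly once; with 4 KiB stripes the same read hits each disk 16 times.
assert raid0_map(0, 64 * KiB, 4) == (0, 0)
assert raid0_map(64 * KiB, 64 * KiB, 4) == (1, 0)
assert raid0_map(4 * 64 * KiB, 64 * KiB, 4) == (0, 64 * KiB)
```

The trade-off follows directly: smaller stripes give more parallelism per request but more per-disk seeks and request splitting; larger stripes favor many concurrent independent requests.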
I have a question about this setup and using m.2 to u.2 converters to leverage far larger optane u.2 SSDs.
Is there any indication that this would or would not work?
Even if the m.2 form factor Optane drives get bigger, they won't be as big as the u.2 ones, so if I wait, the u.2 drives will still be the tempting way to go.
BTW, I already have my Asus 16X card, just looking for 4 SSDs to install into it as my boot drive. I am currently torn between waiting for the new Samsung SSDs on a Threadripper platform and going all Intel.