VROC vs. RST, Optane vs. 960 PRO
Introduction
We've been hearing about Intel's VROC (NVMe RAID) technology for a few months now; ASUS started slipping in clues with their X299 motherboard releases back in May. The idea was very exciting, as prior NVMe RAID implementations on Z170 and Z270 platforms were bottlenecked by the chipset's PCIe 3.0 x4 DMI link to the CPU, and they also had to trade away SATA ports for M.2 PCIe lanes to accomplish the feat. X99 motherboards supported SATA RAID and even sported four additional ports, but they were left out of bootable NVMe RAID altogether. It would be foolish of Intel to launch a successor to their higher-end workstation-class platform without a feature available in two (soon to be three) generations of their consumer platform.
To get a grip on what VROC is all about, let's set up some context with a few slides:
First, we have a slide laying out what the acronyms mean:
- VROC = Virtual RAID on CPU
- VMD = Volume Management Device
What's a VMD you say?
…so the VMD is extra logic present on Intel Skylake-SP CPUs, which enables the processor to group up to 16 lanes of storage (4×4) into a single PCIe storage domain. There are three VMD controllers per CPU.
VROC is the next logical step and takes things a bit further. While boot support is restricted to a single VMD, PCIe switches can be added downstream to create a bootable RAID of possibly more than 4 SSDs. So long as the array need not be bootable, VROC enables spanning across multiple VMDs and even across CPUs!
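To put rough numbers on the above (a back-of-the-envelope sketch only; the drive and VMD counts come from the slides, while the ~985 MB/s per-lane figure is the usual PCIe 3.0 effective rate):

```python
# Back-of-the-envelope VMD capacity math using the figures from the slides above.
# The per-lane rate is the usual PCIe 3.0 effective figure (~985 MB/s), used here
# purely to illustrate theoretical link bandwidth.

LANES_PER_VMD = 16        # each VMD domain groups up to 16 lanes of storage
LANES_PER_SSD = 4         # a typical NVMe SSD uses a x4 link
VMDS_PER_CPU = 3          # three VMD controllers per Skylake-SP CPU
PCIE3_LANE_GBPS = 0.985   # approximate GB/s per PCIe 3.0 lane

ssds_per_vmd = LANES_PER_VMD // LANES_PER_SSD       # 4 SSDs per bootable VMD domain
ssds_per_cpu = ssds_per_vmd * VMDS_PER_CPU          # 12 SSDs direct-attached per CPU
bw_per_vmd_gbps = LANES_PER_VMD * PCIE3_LANE_GBPS   # ~15.8 GB/s of link bandwidth

print(f"SSDs per bootable VMD domain: {ssds_per_vmd}")
print(f"SSDs per CPU across all three VMDs: {ssds_per_cpu}")
print(f"Theoretical link bandwidth per VMD domain: {bw_per_vmd_gbps:.1f} GB/s")
```

In other words, a single bootable VMD domain tops out at four x4 SSDs, while a full CPU can host twelve before any PCIe switches enter the picture.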
Assembling the Missing Pieces
Unlike prior Intel storage technology launches, the VROC launch has been piecemeal at best and contradictory at worst. We initially heard that VROC would only support Intel SSDs, but Intel later published a FAQ stating that 'selected third-party SSDs' would also be supported. One thing they have remained steadfast on is the requirement for a hardware key to unlock RAID-1 and RAID-5 modes – a seemingly silly restriction given that their consumer chipset supports bootable RAID 0/1/5 without any key at all (and a bootable VMD only supports one more SSD than the 3-drive arrays Z170/Z270/Z370 can already boot from).
On the 'piecemeal' topic, we need three things for VROC to work:
- BIOS support for enabling VMD Domains for select groups of PCIe lanes.
- Hardware for connecting a group of NVMe SSDs to that group of PCIe lanes.
- A driver for mounting and managing the array in the OS.
Let's run down this list and see what is currently available:
BIOS support?
Check. Hardware for connecting multiple drives to the configured set of lanes?
Check (960 PRO pic here). Note that the ASUS Hyper M.2 X16 Card will only work on motherboards supporting PCIe bifurcation, which allows the CPU to split its PCIe lanes into subgroups without the need for a PLX chip. You can see two bifurcated modes in the above screenshot – one intended for VMD/VROC, and another (data) selection that enables bifurcation without enabling the VMD controller. The latter option presents the four SSDs to the OS without the need for any special driver.
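As a side note, if you happen to boot a Linux environment on the same box (our testing here is under Windows), a minimal way to confirm that all four SSDs actually enumerated in the 'data' (non-VMD) mode is to count the NVMe controllers in sysfs. This is just an illustrative sanity check and not part of any VROC setup:

```python
# Count the NVMe controllers visible to the OS after enabling x4/x4/x4/x4
# bifurcation in "data" mode (no VMD). Illustrative Linux-side check only.
import os

NVME_SYSFS = "/sys/class/nvme"
ctrls = sorted(os.listdir(NVME_SYSFS)) if os.path.isdir(NVME_SYSFS) else []
print(f"Found {len(ctrls)} NVMe controller(s)")

for ctrl in ctrls:
    with open(os.path.join(NVME_SYSFS, ctrl, "model")) as f:
        print(f"  {ctrl}: {f.read().strip()}")
```

All four SSDs should show up as separate controllers; if fewer appear, the slot is likely not bifurcating.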
With the above installed, and the slot configured for VROC in the BIOS, we are greeted by the expected disappointing result:
Now for that pesky driver. After a bit of digging around the dark corners of the internet:
Check! (well, that's what it looked like after I rapidly clicked my way through the array creation)
I like big butts and I can not lie.
On topic: impressive perf results!
Yeah, butt why no WRITE results?
Mainly in the interest of halving the number of charts needed, as well as speeding up testing. Everything does what you'd expect for writes (scales the same way as reads, etc.), but getting steady-state random writes for the 960 PROs would have meant way more testing time, since I'd have to run that workload for much longer to get through the sequential-to-random transition of the FTL. Optane doesn't have this problem, but I didn't want to be unfair to the NAND stuff.
I am interested to see if the 4K write cache is available in the VROC type of RAID, because it's usually only available via IRST, which goes through the PCH.
Mr. Malventano, we need you back, man. The last SSD review (after you left) was abysmal.
>butt
Also, AMD just announced this:
Now available: Free NVMe RAID upgrade for AMD X399 chipset!
https://community.amd.com/community/gaming/blog/2017/10/02/now-available-free-nvme-raid-upgrade-for-amd-x399-chipset?sf118245427=1
Indeed they did, and we’re testing it now!
Excellent!! This is the last piece of information I need before deciding on threadripper vs x299 vs 8700k.
More importantly, it's free.
AMD needs to start getting parity with Intel when it comes to Intel's storage tech.
Optane/XPoint is not an Intel-only storage technology. XPoint is an Intel/Micron technology, and Micron has their QuantX brand of XPoint that's supposed to be available at the end of 2017. So AMD and others can offer XPoint devices if they want to license from Micron, as Micron appears to want to license its XPoint IP to other makers in addition to branding some QuantX/XPoint products of its own.
Intel requiring RAID keys, compared to AMD supporting bootable RAID with the latest firmware update at no extra cost, is a point against Intel when considering the AM4/X399/SP3 motherboards that support AMD's Ryzen/Threadripper/Epyc CPU SKUs and their respective platforms.
I'd like to see RAID testing done across Intel's consumer and Xeon platform SKUs as well as AMD's consumer and Epyc platform SKUs. And I'd really love to see that Gigabyte Epyc/SP3 single socket motherboard tested by PCPER with all the single socket Epyc "P" CPU SKUs, compared to any Intel Xeon or consumer SKUs as well as any consumer/Threadripper SKUs, because the Epyc single socket SP3 motherboard with the Epyc "P" CPU variants is an even better feature-for-feature and core-for-core deal than even consumer Threadripper.
Micron and Intel collaborated on the physical storage chips, but Intel alone have added the OS-transparent drive caching to their CPUs and chipsets. That’s not something Micron can license to others.
Really, you think that AMD/other CPU makers do not have the skills necessary to make XPoint work on their systems/platforms? So your claim is not very valid. And Intel's IP is proprietary, while Micron may just go with some open standard ways, as there are many open standard methods that are transparent to the OS also.
And see here, with no costly ($$$$) RAID key required:
"Threadripper owners can now fire up their NVMe RAID arrays"
https://techreport.com/news/32632/threadripper-owners-can-now-fire-up-their-nvme-raid-arrays
You did not actually read what I said: AMD have not once, on any of their platforms, demonstrated transparent drive caching. Neither have ASMedia (the designer of AMD’s recent chipsets).
Sure, you can use Optane on an AMD platform. You can do it TODAY: just plug it right in, and it exposes as a normal PCIe NVMe drive. What you can't do is the transparent caching, be it with Optane or any other drive. That's a completely separate problem to solve, and Micron isn't even peripherally involved.
I mean, it's definitely one way to get a 128GB Optane SSD 😛
True. Too bad you can’t boot from a pair (or trio!?) of stacked X16 cards to go even larger!
Well, you could on X399. It would be funny to see 8-10 Optane drives in a RAID 0 on X399.
Have you seen der8auer's X399 RAID video?
> pair (or trio!?) of stacked X16 cards
Users who have contacted Highpoint are being told that they are working on making their SSD7101A-1 bootable. The specs for that AIC state that multiple cards are also supported.
Many thanks, Allyn, for pushing the envelope.
Can’t wait for your comparisons with Threadripper!
The neon glow lines are still frustrating to look at on the graphs.
Is ASUS ever going to actually sell the Hyper M.2 x16 card, or is it more vaporware?
https://www.asus.com/Motherboard-Accessory/HYPER-M-2-X16-CARD/
https://www.newegg.com/Product/Product.aspx?Item=9SIA4UG6B68401&Tpk=9SIA4UG6B68401
FYI: Highpoint have announced three NVMe add-in cards:
http://www.highpoint-tech.com/USA_new/CS-product_nvme.htm
One user at another forum reported success getting their SSD7110 driver to work with the SSD7101.
The specs for the SSD7110 say it’s bootable:
http://www.highpoint-tech.com/PDF/NVMe/SSD7110/Datasheet_SSD7110_17_09_21.pdf
“Bootable & Data Storage”
I already linked to two of those in the article 🙂
I missed those links because I didn’t click on them:
http://www.highpoint-tech.com/USA_new/series-ssd7120-overview.htm
http://www.highpoint-tech.com/USA_new/series-ssd7101a-1-overview.htm
P.S. Readers should know that I'm a "Highpoint Fanboy", and Allyn graciously tested a Syba 2.5″ U.2-to-M.2 enclosure for me, before the Highpoint SSD7101A-1 was released, after they announced their model 3840A.
Here's the Newegg link to that Syba enclosure:
https://www.newegg.com/Product/Product.aspx?Item=N82E16817801139&Tpk=N82E16817801139
Thanks again to Allyn for doing that test.
An add-in card is superior for adding 4 x M.2 SSDs, because it eliminates the need for U.2 cables and additional enclosures.
Great test, Allyn! I'm disappointed to see that random 4K reads at QD1 for the VROC Optane RAID don't scale at all as you add drives. What's the deal with that?
From the 86k IOPS @ QD1 with one drive, I was hoping to see 300K+ IOPS @ QD1 with four drives… 🙁
Is this a driver issue, or is the actual hardware pipeline saturated at around 100K IOPS?
QD1 can't scale with additional drives in RAID because each individual request is only going to a single drive. It *can* scale sequential performance, for example if you had 16KB stripe size and did 128KB sequential, each request would spread across multiple drives and you would get higher throughput. Not so for small (4KB) random access, where it's the latency of the device that comes into play more than the straight line throughput.
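To make that mapping concrete, here is a minimal sketch of the RAID-0 stripe math under the assumptions in the example above (a hypothetical 4-drive array with a 16KB stripe size); it simply reports which member drives a single request would touch:

```python
# Which member drives does a single request touch in a RAID-0 array?
# Hypothetical 4-drive array with a 16 KB stripe size, as in the example above.
STRIPE_SIZE = 16 * 1024
NUM_DRIVES = 4

def drives_touched(offset: int, length: int) -> set:
    """Return the set of member-drive indices a single request spans."""
    first_stripe = offset // STRIPE_SIZE
    last_stripe = (offset + length - 1) // STRIPE_SIZE
    return {stripe % NUM_DRIVES for stripe in range(first_stripe, last_stripe + 1)}

# A 4 KB random read lands on exactly one member, so QD1 random can't scale:
print(drives_touched(offset=123 * 4096, length=4 * 1024))   # one drive
# A 128 KB sequential read spans 8 stripes and hits all four members:
print(drives_touched(offset=0, length=128 * 1024))           # {0, 1, 2, 3}
```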
What *is* significant about increasing the number of low latency devices in a RAID is that latencies will remain lower as QD increases, since the load is being spread across several SSDs. I dug deeper into this using Latency Percentile data in my triple M.2 Z170 piece. Keeping latencies lower helps 'shallow the queue' since a given workload will naturally settle at a lower queue depth when applied to very low latency storage (Optane).
The latency results of this piece also used Latency Percentile data, only I referenced the 50% point of the results to get 'latency weighted average' figures instead of the IO weighted numbers you'd get from simpler benchmark apps. Trying to make this many comparisons across this many dimensions (number of drives, different drive types, different platforms, different workloads, different queue depths, etc) meant that there was no room left for a 701 data point plot line of each individual test result :).
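For anyone curious how a '50% point' figure falls out of percentile data, here is a minimal sketch using made-up sample points (not data from this review); it simply interpolates the latency at which half of the IOs have completed:

```python
# Minimal sketch: read the 50% point off a latency percentile curve to get a
# median ("latency weighted") figure rather than an IO-weighted average.
# The sample points below are invented purely to demonstrate the interpolation.
import bisect

# (latency in microseconds, cumulative % of IOs completed at or below that latency)
percentile_curve = [(8, 10.0), (10, 35.0), (12, 60.0), (16, 85.0), (30, 99.0), (100, 100.0)]

def latency_at_percentile(curve, pct):
    """Linearly interpolate the latency at a given cumulative percentile."""
    latencies, percents = zip(*curve)
    i = bisect.bisect_left(percents, pct)
    if i == 0:
        return latencies[0]
    lo_lat, lo_pct = curve[i - 1]
    hi_lat, hi_pct = curve[i]
    frac = (pct - lo_pct) / (hi_pct - lo_pct)
    return lo_lat + frac * (hi_lat - lo_lat)

print(f"Median (50th percentile) latency: {latency_at_percentile(percentile_curve, 50.0):.1f} us")
```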
Excellent clarification, thanks!
Does Allyn have an AMD Threadripper in his lab?
He does, and yes, he is testing AMD's implementation.
Can’t wait!
We appreciate your consistent excellence, Allyn.
+1!
Allyn, this YouTube video is back up:
Finally figured out why THREADRIPPER has so many PCIe lanes (en)
https://www.youtube.com/watch?v=9CoAyjzJWfw
In answer to a subtle issue we have already discussed, he does show how to enable the "interleave" option in the Zenith Extreme UEFI/BIOS. As such, it appears that it is possible to interleave 2 such add-in cards.
And for every PCIe lane and memory channel that the Threadripper/X399 motherboard platform offers, the Epyc/SP3 single socket motherboard platform offers two! So the Epyc SP3 motherboards support 128 PCIe lanes and 8 memory channels.
And Anandtech makes me LOL, along with servethehome's crappy "testing" of the single socket Epyc "P" SKUs. Anandtech and servethehome are using dual socket Epyc/SP3 motherboards and non-"P" Epyc SKUs, populating only a single socket on a dual socket Epyc motherboard to estimate how a single socket Epyc SKU would perform. But an Epyc 7401 (dual socket SKU) is not an Epyc 7401P (single socket SKU), and the Epyc 7401 costs more than the 7401P.
Really, Anandtech and Servethehome have hit a new low: they neglected to even test the Epyc 7401P against the Threadripper 1950X (Anandtech), or are only testing with 2P Epyc/SP3 motherboards and Epyc non-P CPU SKUs (Anandtech and Servethehome).
Look at what Anandtech says:
“Anyone looking to build a new workstation is probably in a good position to start doing so today. The only real limitation is going to be if parts are at retail or can only be found by OEMs, how many motherboards will be available, and how quickly AMD plans to ramp up production of EPYC for the workstation market. We’re getting all the EPYC 1P processors in for review here shortly, and we’re hoping Intel reaches out for Xeon-W. Put your benchmark requests in the comments below.”(1)
How the hell can Anandtech ignore the Epyc 7401P, with its 24 cores and 48 threads for only $1075, compared to the Threadripper 1950X ($999) in Anandtech's "Best CPUs for Workstations" article! And servethehome is doing the same thing by trying out the 7401 (dual socket SKU) with only one socket populated on a 2-socket SP3 motherboard.
There is currently a great Epyc/SP3 single socket motherboard offering up for sale (the GIGABYTE MZ31-AR0 Extended ATX Server Motherboard, Socket SP3, single socket) for $609 (back in stock again at Newegg), and no one is using it for their Epyc single socket "P" processor testing. Anandtech even states [see quoted statement above] that they are "getting" all the Epyc "P" single socket SKUs in for testing, but still published an article that should not have been published until Anandtech had a single socket Epyc/SP3 motherboard to test the Epyc 7401P and its other "P" variants! And that Gigabyte Epyc/SP3 single socket motherboard has been on sale for over a month now, and do not tell me that Anandtech does not know that, with all the contacts Anandtech has in the motherboard/CPU industry.
Damn, folks are screaming all across the web on the Blender/Adobe/SolidWorks graphics forums for some single socket Epyc CPU/SP3 motherboard testing, and folks have to rely on enthusiast websites like Anandtech and others but are being mostly ignored. And the enthusiast websites are trying like hell to sell everyone on Threadripper, and Threadripper is not even a workstation grade part.
(1)
"Best CPUs for Workstations: 2017" [what a joke this article is and Anand Lal Shimpi would have never published this under his watch]
https://www.anandtech.com/show/11891/best-cpus-for-workstations-2017
‘Interleave’ as it relates to that BIOS option is likely for tweaking performance. I don’t think it is required to make a bootable volume >4 SSDs – that’s an Intel VMD limitation.
> that BIOS option is likely for tweaking performance.
Yes: I thought the same thing, e.g. somewhat similar to the way DRAM is interleaved. And I thought this tweaking option might also help to explain what you observed earlier:
“Each drive would have to do 3.55GB/s to accomplish this speed. 960 PROs only go 3,300 MB/s when they are reading actual data.”
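For reference, the arithmetic behind that quote is simple to sketch; the array figure below is illustrative only, chosen to reproduce the quoted 3.55 GB/s per-drive share rather than taken from the review's measurements:

```python
# Per-drive share of an array read result vs. the 960 PRO's rated sequential read.
DRIVES = 4
RATED_READ_GBPS = 3.3                # 960 PRO spec: 3,300 MB/s when reading real data
array_read_gbps = 3.55 * DRIVES      # illustrative array result (~14.2 GB/s)

per_drive_gbps = array_read_gbps / DRIVES
print(f"Implied per-drive rate: {per_drive_gbps:.2f} GB/s (rated: {RATED_READ_GBPS} GB/s)")
if per_drive_gbps > RATED_READ_GBPS:
    print("Faster than the drives can deliver real data, so something else is going on.")
```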
Page 3:
“With Optane’s in a RAID,”
should read
“With Optanes in a RAID,” or “With Optane drives in a RAID,”
The offending apostrophe has been sacked.
(Thanks)
I skimmed through the article, but I still don't know exactly what Intel is trying to lock behind paying for a ridiculous hardware key. I could see them trying to keep RAID 5 (parity) mode for professional use/market segmentation, although I don't know if many people would choose to pay for just RAID 5. It would presumably use less valuable SSD space than mirroring everything. Massive RAID bandwidth isn't that useful to your average consumer anyway; it is mostly for professional applications. Locking out RAID modes that can be used on the low-end consumer chipset is bogus, though. Intel has used the chipset bottleneck as a way to segment the market for quite a while. You can have all kinds of connections on the chipset, but you can't use many of them at once due to the upstream bandwidth limitation. If you wanted more via more PCIe off the CPU, you needed to upgrade to a higher-end platform. AMD's Ryzen (non-Threadripper) is very limited on PCIe also, so this isn't really unique. Threadripper is already basically a low-end Epyc processor and a significantly more expensive platform than standard Ryzen.
Locking parity mode out would probably just set people up to lose data, because they will configure just striping without mirroring. It is kind of like keeping ECC for the enterprise market. I am of the opinion that ECC should be everywhere now. I had a huge number of files get corrupted by an undetected memory error. I am tempted to buy a server-level board.
I believe their concern is that the VROC technology can scale very high on drive counts and has its roots in the enterprise side. The worry would be that some creative enterprise IT guy would just buy a batch of the cheaper desktop parts and roll them out in a larger cluster. The parity or not question is a bit moot since higher performing storage would generally be RAID-0 with a redundant complete system sitting alongside it with a mirror of the data for failover. That's why I suggest at the end of the piece that the hardware key limits should be in drive counts and not in RAID levels. This way pro users could benefit from the same parity volume reliability benefits that Z270 users currently enjoy, but limited to a reasonable consumer-level drive count of 4 (which is the bootable VMD limit anyway).
Nvidia is facing a similar problem right now. A lot of folks are aware that miners are purchasing a ton of graphics cards, which has led to higher graphics card pricing. What folks don't realize is that consumer Nvidia graphics cards are getting snapped up by the professional market as well. That is why it's hard to find any of the blower-style 1080's in stock (FE and 3rd party).
I meant 1080 Ti's. But they are buying 1080, 1070, and 1060 cards as well. The 1080 Ti blower is preferred due to its size and fan setup being the best fit for server cases.
Even limiting it to 4 drives would offer huge bandwidth, more than any consumer-level applications really require. It seems like they could have done the consumer version of the chipset with a 4-drive limit (1 VMD controller), and a workstation/server variant with the full 3 VMD controllers enabled, without hardware keys. That still would not really look very good with AMD offering support for a large number of drives with no upcharge. It is just a fact that massive bandwidth can be supported with high-end, but still consumer-level, hardware. This is similar to when high-end RISC workstations fell to cheap PC hardware.
> The parity or not question is a bit moot since higher performing storage would generally be RAID-0 with a redundant complete system sitting alongside it with a mirror of the data for failover.
This is EXACTLY what we have been doing with our production workstation, and it works G-R-E-A-T!
We actually run 3 active "tiers" on that workstation, and a fourth tier is backup storage servers:
(1) 14GB ramdisk
(2) RAID-0 w/ 4 x 6G Nand Flash SSDs
(3) 2TB rotating HDD for archiving (5 year warranty)
(4) networked storage servers (e.g. old LGA-775 PCs)
It's also very easy to "sync" our ramdisk with our RAID-0 array, e.g.:
xcopy R:\folder O:\folder /s/e/v/d
xcopy O:\folder R:\folder /s/e/v/d
Where,
R: = ramdisk drive letter
O: = RAID-0 drive letter
The entire ramdisk is SAVEd and RESTOREd on O: automatically at SHUTDOWN and STARTUP, respectively.
I have a premium VROC key. It was actually fairly easy to get. Have offered it to a couple of tech folks to test but no takers so far.
By the way, I have some additional notes about VROC and its setup that may be of help. I assume you can view the email address I've input for this comment. If you want to discuss, just let me know.
To answer your question about using VROC or not… I have to say not, because you did not create the array within the BIOS, which means the silly key was not engaged. It still baffles me that Intel will charge for non-Intel M.2s and has not released the key. AMD giving it away for free and having more PCIe lanes will hurt Intel.
I have been bouncing between the 1950X and the 7900X for my next system… I already have a Hyper M.2 x16 card for whichever I choose. Where I'm at: it's difficult to obtain cooling for the 1950X, and VROC for the 7900X…
—————–
OK, why I think you are not using VROC… #1: you are in "pass-thru" mode for VROC. The best way I can explain it is that VROC is acting like an HBA in your setup. This terminology makes sense.
Using RSTe gives you the ability to create a software RAID. This was the only way you were able to create the stripe in the article.
You do not have the DMI 3.0 bottleneck in the results (so you're using CPU lanes, expected with x16 PCIe), but there is a little overhead for the software stripe. My guess is a pure VROC RAID 0 setup will yield slightly better results if managed from the BIOS.
This can be tested by installing Windows on formatted (secure erased, etc.) M.2s… F6 the driver, and if it doesn't see an array or wants to create an array, we should have the answer… I'm predicting software RAID via VROC pass-thru (aka HBA)…
Lastly, it is possible VROC is using the same formatting of the array as RSTe, just doing it from the BIOS… Intel has kept us in the dark on this crap. I am curious: after building the RAID 0 in RSTe, when you go to the BIOS, does it show RAID 0 or unconfigured like your screenshot?
This is my best guess on what is happening with VROC presently… AMD’s style looks to be CPU pass-thru, software RAID…
Thanks for reading my long attempt to explain my idea… //R//
VROC arrays *can* be created in the BIOS, but only with Intel SSDs. Samsung SSDs currently show as not compatible. Further, after this piece went up I was able to create, install windows to, and boot from a RAID-0 array. Stands to reason it's true VROC.
Hi, just to add… I found a XEON based VROC article that explains how VROC works:
https://www.intel.com/content/www/us/en/software/virtual-raid-on-cpu-vroc-faqs.html
Q7: Is Intel VROC software or hardware RAID… Ans: Hybrid… Seems the VROC uses mainly hardware with software to calculate RAID logic.
Q11: References what needs a VROC Key… I think Intel should revisit for the X299 chipsets
Q12: How is Intel VROC different from Intel RSTe…
I tried to copy and paste… however Intel locked the PDF from the link above… This is a good read to understand more in depth how VROC works… //R//
Hi Allyn,
Excellent review and very thorough testing. A couple of questions:
1. I can’t seem to find the VROC key for sale anywhere. How did you obtain yours?
2. The SM961 is listed as being compatible with VROC. I wonder if the SM960 is also compatible to create a bootable VROC Raid 0 or Raid 1 array since it is just the consumer version of the OEM SM961?
ps: Above questions also open to anyone who has the hardware and has tested this.
Thanks in advance gents/ladies.
1. All testing in this article was done without a VROC key.
2. They should be, but no way to know for sure as we don't know how Intel is limiting the compatibility.
Third party SSDs like the SM961 are supported only by Intel XEON Chipsets, like the C622 or C4xx.
Even then, Intel does not mention if they are bootable under VROC.
Intel informed me by email that Intel does not support VROC for the X299, and that the mainboard manufacturers offering VROC are responsible for its proper functioning.
Very frustrating that ASUS puts the blame on Intel, not offering any information besides spreading false information, at least from the German support.
I want to run a real VROC! Not in pass-through mode.
Anyhow, thanks for sharing this excellent test!
Michael
More details are in Podcast #470:
https://www.youtube.com/watch?v=4V2o91CSWXc
I am struggling already with a very basic problem.
As far as I understood I will have to install the RSTe (enterprise) driver rather than the RST.
However, the RSTe can not be installed “on my platform”.
Presumably it does not support Win10x64.
In general Intel does not officially support the X299 with that driver…
Have you installed the RSTe on a Win10-platform? Which version?
Many thanks
Michael
You might have found one of the older RSTe versions. Look for 5.2.2.1022.
Hi Allyn, I found an Intel document explaining the vROC trial mode.
Searching the web I found this document “Intel Virtual RAID on CPU (Intel® VROC) and Intel Rapid Storage Technology enterprise (Intel® RSTe)”.
On that document they explain the 90 day trial period and its limitations. What I understood was that the trial period acts as if you had the Premium Key installed. That is why you are able to use 3rd party SSDs. It will show the RAID array on the RSTe GUI but it won’t show it on the BIOS, where the attached SSDs will appear as independent non-RAID disks and might inform there is no RAID volume on the system.
The RSTe vROC implementation has a feature to configure a RAID in the BIOS, but you need to have the Intel VMD enabled in the BIOS (the guide has an example of doing this on a Purley chipset motherboard) and the upgrade key installed.
They made the following important clarifications: you can only configure data RAID arrays in the BIOS, not spanned system volumes. You also need to use the correct F6 driver when installing Windows to a bootable RAID in order to see the device during installation. The iaStorE drivers are for SATA and sSATA drives; iaVROC will be for NVMe drives. You need to load the iaVROC driver.
The guide also comes with a couple of warnings of what happens when the 90 day trial period finishes. Your RAID volumes will appear on the RSTe GUI but won’t be accessible. They will only become accessible when you install the upgrade key. They don’t guarantee the safety of the data during the trial period.
On an Intel forum, an Intel representative informed a customer that for X299 only the standard mode could be activated after the trial period. He even advised purchasing a key from Mouser Electronics to obtain the correct VROC upgrade key.
I really appreciated the depth of your article, and your latency analysis was awesome. We were all interested in vROC because it provided a direct connection to the CPU bypassing the DMI bottleneck, hopefully reducing access latencies and improving RAID performance. But the poor latency results and the Intel guide cause certain questions to arise.
1) The first one is about PCIe bifurcation.
A Gigabyte customer asked if their motherboards supported PCIe bifurcation and was informed that none of their boards actually support it. However, Gigabyte has options in the BIOS to configure PCIe slots for VROC on some motherboards.
Another customer purchased an ASUS Hyper M.2 card and wanted to use it on an EVGA motherboard and was able to do so after a BIOS update to a BIOS that let him configure the PCIe slots. Is PCIe bifurcation something that can be done in software only?
The Hyper M.2 NVMe adapter doesn't feature a PLX chip for PCIe bifurcation, but it is capable of dividing the slot bandwidth when configured in the BIOS. There is no mention of 4x/4x/4x/4x bifurcation in the ASUS motherboard manuals. The only motherboards that I know of that state bifurcation are mini-ITX motherboards that bifurcate the only PCIe slot they have to support riser cards for dual GPUs.
2) This guide mentions OCulink technology, a high speed PCIe transmission technology.
In one of the examples in the guide, an Intel reference board will only work with VROC through an OCulink connection. Maybe to get the most out of VROC you need a specialized connection like OCulink.
Reading an article on OCulink, it says some U.2 devices are able to utilize the OCulink protocol. Since the VMDs accept these types of connections, it is safe to assume that vROC uses this protocol in some form. The RSTe driver lets you connect RAID arrays to the CPU, but it also lets you configure RAID arrays that connect to the PCH. If OCulink is not available, will it reset to a PCH RAID?
I found a Supermicro Xeon WS motherboard, the X11SRM-VF, with these types of connections. Will this motherboard with this type of connection reduce the access latency?
Could a U.2 connection reduce the latency if it uses the OCulink protocol? Do Intel U.2 drives support this protocol? Is this the reason why Intel drives are the only ones supported in some modes? 3rd party SSDs are supported in the Premium mode, but do they fall back to the PCH connection due to the lack of this protocol? Is there a way to check if the I/O traffic is using the PCH in the VROC trial mode?
I found a reseller of this board in my country and I would love to hand it to you for an OCulink latency analysis.
3) The guide mentions a special F6 driver for vROC, the iaVROC driver. Is this driver only used to see the BIOS-configured RAID array during the Windows installation? If I load a non-VROC driver like iaStorE, which is for SATA, could that cause performance degradation, or would the RAID array fall back to the PCH because of this?
4) The guide also mentions that each motherboard has a different way to configure the Intel VMD in the BIOS, which is a requisite for using VROC. Do you know where I can check this on an ASUS X299 motherboard BIOS?
Your article was an excellent example of technology journalism and I’m looking forward to test VROC under an OCulink connection.
Sorry if my English appears to be harsh or broken, but I’m not a native speaker.