Performance Comparison: X299 VROC vs. Z270 RST
Alright, so now that we have looked at X299 VROC, let's see how it stacks up against Z270's RST implementation. Note that we did not have a Z270 board on hand that supported triple M.2 RAID via the PCH, but two fast SSDs are enough to saturate DMI, so we can estimate the triple-RAID figures where it is useful to do so.
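For reference, here is the rough arithmetic behind that saturation claim, sketched in Python. The figures are approximations: DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link (~985 MB/s per lane), and ~3.5 GB/s is the 960 PRO's rated sequential read.

```python
# Rough DMI 3.0 saturation check (all figures are approximate).
# DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link: 4 lanes x ~985 MB/s.
dmi_bandwidth_gbs = 4 * 0.985        # ~3.94 GB/s, before protocol overhead
ssd_seq_read_gbs = 3.5               # approximate 960 PRO sequential read

for n in (1, 2, 3):
    demand = n * ssd_seq_read_gbs
    limited = "DMI-limited" if demand > dmi_bandwidth_gbs else "within DMI"
    print(f"{n} SSD(s): ~{demand:.1f} GB/s demanded ({limited})")
```

Two drives already demand ~7 GB/s against a ~3.9 GB/s link, which is why a third drive adds nothing behind the PCH.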
To make these comparisons a bit easier to digest, the first three charts here will evaluate single SSD performance across these two chipsets:
Single SSD Comparisons:
4KB Random IOPS:
Samsung 960 PRO SSD performance across X299 and Z270 (connected via the PCH as well as directly to the CPU) is nearly identical, though the Z270 platform does see a slight edge. All of the Optane (32GB) configurations saturate at the same maximum figure, but there appears to be a battle going on for low-QD performance. X299 and the Z270 PCH trade blows a bit, but the Optane part directly connected to the 7700K CPU blows everything else away, with IOPS starting at over 93k @ QD=1! How can this be so much faster than the brand new X299 platform? Read on to find out.
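One way to sanity-check that 93k figure: at QD=1 only one IO is ever in flight, so IOPS and mean latency are simply reciprocals of one another. A quick check (any small gap versus the latency chart below comes down to measurement overhead):

```python
# At QD=1 only one IO is in flight, so IOPS and mean latency are reciprocals.
qd1_iops = 93_000                    # Optane direct to the 7700K, from the chart
implied_latency_us = 1e6 / qd1_iops  # ~10.8 us per 4KB transaction
print(f"{qd1_iops} IOPS at QD=1 implies ~{implied_latency_us:.1f} us per IO")
```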
4KB Random Latency:
The most important thing here is the first three figures in the QD=1 column. First, we see the 10.7us figure from the X299 testing. That figure jumps to 12.2us on Z270 with an SSD installed into an M.2 socket (M.2 locations are very likely wired to the PCH so that RST RAID is possible). Installing the SSD into a standard PCIe slot (via an interposer) brings the latency down to 10.1us. This tells us two things (worked out in the sketch after the list):
- The faster clocks and lower core-to-DRAM latency of the 7700K shave 0.6us off of the transaction latency compared to the 7960X.
- The Z270 platform nets a ~2.1us per transaction latency penalty for SSDs communicating via the PCH.
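Working those two deltas out explicitly, with the three QD=1 figures from the chart:

```python
x299_direct_us = 10.7   # 7960X, SSD attached to CPU PCIe lanes
z270_pch_us    = 12.2   # 7700K, SSD behind the Z270 PCH (M.2 socket)
z270_direct_us = 10.1   # 7700K, SSD attached to CPU PCIe lanes (interposer)

cpu_delta   = x299_direct_us - z270_direct_us  # 0.6 us: 7700K clocks/DRAM latency
pch_penalty = z270_pch_us - z270_direct_us     # 2.1 us: cost of the PCH/DMI hop
print(f"7700K advantage: {cpu_delta:.1f} us; PCH penalty: {pch_penalty:.1f} us")
```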
128KB Sequential:
There's a bit of a spread here, with Z270 showing an odd falloff at higher-QD sequentials on Optane (orange), while both the PCH-connected and direct-connected Z270 results (green + light blue hiding behind it) came in significantly higher than the same configurations on the X299 platform (yellow). Typically the Samsung results would be equal across the board; however, after these test runs were completed, we discovered that the Samsung NVMe driver had inadvertently been installed on our Z270 test system, boosting its single-SSD sequential results. I'll re-run these tests with the standard NVMe driver and update when able.
Now we shift to comparing RAID performance across both platforms. To make the charts reasonably readable, I will stick to QD=1 for random and QD=32 for sequential.
Multiple SSD RAID-0 Comparisons:
4KB Random IOPS:
For the 960 PROs, Z270 (yellow) adds a few IOPS when compared to X299 (orange).
For Optane Memory, X299 (blue) beats Z270 (grey) in single-SSD configs, but X299's higher latency penalty in RAID causes it to fall behind. Note that Z270 can beat X299 across the board if a single SSD is connected directly to the CPU (93k IOPS).
4KB Random Latency:
Nothing surprising here, as this chart is effectively an inversion of the IOPS seen above. A single Optane Memory SSD connected directly to the Z270 CPU would sit at 10.1us, forcing the grey line below the blue one across all points.
128KB Sequential:
This is the same chart we finished the previous page with, but with Z270 data added. The x3 estimated throughput is simply the x2 saturation throughput seen with the 960 PROs, since DMI was already the bottleneck at x2. Removing the DMI bottleneck certainly helps out a lot, as the X299 figures just scream past the Z270, especially when using 960 PRO SSDs!
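For the curious, that x3 estimate amounts to assuming ideal RAID-0 scaling and then clamping it to the DMI ceiling. A minimal sketch of the model, reusing the approximate figures from earlier:

```python
# Estimate RAID-0 sequential throughput behind the PCH: ideal scaling, capped at DMI.
DMI_CAP_GBS = 3.94          # PCIe 3.0 x4 equivalent, approximate

def pch_raid0_estimate(n_drives: int, per_drive_gbs: float) -> float:
    """Ideal n-drive scaling, clamped to the DMI link bandwidth."""
    return min(n_drives * per_drive_gbs, DMI_CAP_GBS)

for n in (1, 2, 3):
    print(f"x{n} 960 PRO via PCH: ~{pch_raid0_estimate(n, 3.5):.2f} GB/s")
```

The x2 and x3 results land on the same ~3.9 GB/s cap, which is exactly why the x3 estimate reuses the x2 saturation figure.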
I downloaded the guide and I think that in the review you might have missed a step to configure VROC. I see that you configured the hardware for connecting multiple drives to a configured set of lanes. In the guide they set specific VMD ports through a specific OCuLink connection, whatever that is. They also configured the Volume Management Device as an OCuLink connection, and did the same for every CPU the system had. I'm assuming that the ASUS board has the ability to do this with a PCIe 3.0 connection. Correct me if I'm wrong, but I'm assuming that any RAID array created in the RSTe GUI will run over the PCH connection if the VMD ports aren't linked to the PCIe 3.0 connection in the BIOS.
Does anyone know where I can find the VROC key and the price? Intel says “contact your mainboard manufacturer” and Gigabyte (I have a GA-X299-UD4 with 2 x Samsung 960 PRO) says “contact your dealer”, but I’m the dealer and I can’t find the key!
Thank you!
Hi, I have a couple questions about bandwidth if someone can answer them for me:
1. Would I experience a bottleneck with 4 x Samsung 960 Pros if I use this card in a x8 slot rather than a x16 slot? Will it make any noticeable difference?
2. How does this card compare to the DIMM.2 risers on ASUS boards (Rampage VI Apex & Extreme)? The riser card provides two PCIe x4 connections directly to the CPU. Does the Hyper M.2 x16 card have additional overhead that would cause more latency than the riser cards?
As far as I know (though without actual empirical experience with 4 x Samsung 960 Pros), to exploit the raw bandwidth of an x16 slot, the BIOS/UEFI must support what is called PCIe lane “bifurcation”.
In the ASUS UEFI, it shows up as x4/x4/x4/x4:
https://www.youtube.com/watch?v=9CoAyjzJWfw
In the ASRock UEFI, it shows up as 4×4:
http://supremelaw.org/systems/asrock/X399/
This allows the CPU to access a single x16 slot as four independent x4 PCIe slots.
As such, even if an x8 slot were able to be bifurcated, it would end up as 2×4, or x4/x4, and the other 2 NVMe SSDs would probably get ignored.
There are some versions of these add-in cards that have an on-board PLX chip, which may be able to address all 4 SSDs even if only x8 PCIe lanes are assigned to an x16 slot by the BIOS/UEFI.
(Also, by shifting the I/O processing to the CPU, this architecture should eliminate the need for dedicated RAID IOP’s on the add-in card.)
Also, a full x16 edge connector may not fit into an x8 mechanical slot.
Ideally, therefore, these “quad M.2” AICs are designed to install in a full x16 mechanical slot that is assigned the full x16 PCIe lanes with bifurcation support in the BIOS/UEFI subsystem.
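To put rough numbers on the x8 question above, here is a sketch assuming ~985 MB/s of usable bandwidth per PCIe 3.0 lane and ~3.5 GB/s per 960 PRO; the PLX case assumes the switch can expose all four drives through the narrower uplink:

```python
PCIE3_LANE_GBS = 0.985      # approximate usable bandwidth per PCIe 3.0 lane
DRIVE_GBS = 3.5             # approximate 960 PRO sequential read

def aic_throughput(uplink_lanes: int, drives_visible: int) -> float:
    """Aggregate throughput: drives we can reach, capped by the slot's uplink."""
    return min(drives_visible * DRIVE_GBS, uplink_lanes * PCIE3_LANE_GBS)

# x16 slot bifurcated x4/x4/x4/x4: all 4 drives visible, ~14.0 GB/s (not slot-limited)
print(aic_throughput(16, 4))
# x8 slot bifurcated x4/x4: only 2 drives visible, ~7.0 GB/s
print(aic_throughput(8, 2))
# x8 slot with a PLX switch on the card: all 4 drives visible, capped at ~7.9 GB/s
print(aic_throughput(8, 4))
```

In other words, an x8 slot roughly halves the best case either way: bifurcation loses two of the drives, while a PLX switch keeps all four but caps the aggregate at the x8 uplink.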
You should ask this same question of Allyn, because he will surely have more insights to share with us here.
If anyone is interested, ASRock replied to our query with simple instructions for doing a fresh install of Windows 10 to an ASRock Ultra Quad M.2 card installed in an AMD X399 motherboard. We uploaded that .pdf file to the Internet here:
http://supremelaw.org/systems/asrock/X399/