PCIe RAID Results

…so after some more communication with ASUS, we were told that for PCIe RAID to work properly, we *must* flip this additional switch:

As it turns out, the auto setting is not intelligent enough to pick up on all of the other options we flipped. Do note that switching to the x4 setting disables the 5th and 6th ports of the Intel SATA controller. What happens internally is that the pair of PCIe lanes that was driving that last pair of SATA ports on the Intel controller is switched over to serve as the 3rd and 4th PCIe lanes, now linked to PCIe_3 on this particular motherboard. Other boards may be configured differently. Here is a diagram that should help in understanding how RST will work on 170 series motherboards moving forward:

The above diagram shows the portion of the 20 PCIe lanes that can be linked to RST. Note that this is a 12-lane chunk *only*, meaning that if you intend to pair up two PCIe x4 SSDs, only four lanes will remain, which translates to a maximum of four remaining RST-linked SATA ports. Based on this logic, there is no way to have all six Intel SATA ports enabled while also using a PCIe RAID. For the Z170 Deluxe configuration we were testing, the center four lanes went to SATA, M.2 used the left four, and PCIe slot 3 used the right four. Note that to use those last four, the last two Intel SATA ports are disabled. The Z170 Deluxe does have eight total SATA ports, so the total dropped to six, but only four of those are usable with RST in a RAID configuration.
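If it helps to see the lane math spelled out, here is a minimal Python sketch of the budget described above. The constants (a 12-lane RST-capable chunk, four lanes per NVMe drive, one lane per RST-linked SATA port) are assumptions taken from this article's diagram, not from any Intel documentation:

```python
# Lane budget sketch for RST-linked devices (assumed figures from the diagram above).
RST_LANES = 12       # the RST-capable chunk of the PCH's PCIe lanes
LANES_PER_NVME = 4   # each PCIe x4 SSD consumes four of those lanes

def remaining_rst_sata_ports(nvme_drives: int) -> int:
    """SATA ports that can still be RST-linked after attaching NVMe drives,
    assuming one lane per RST-linked SATA port."""
    leftover = RST_LANES - nvme_drives * LANES_PER_NVME
    return max(leftover, 0)

print(remaining_rst_sata_ports(2))  # -> 4, matching the four usable ports above
```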

Once we made the x4 change noted above, everything just started working as it should have in the first place (even booting from SATA, which was previously disabled when the PCIe-related values were not perfectly lined up). Upon booting off of that SATA SSD, we were greeted with this when we fired up RST:

…now there's something you don't see every day: PCIe SSDs sitting in a console that previously only ever listed SATA devices. Here's the process of creating a RAID:

You can now choose SATA or PCIe modes when creating an array.

Stripe size selection is similar to what was offered in prior versions. I recommend sticking with the default that RST chooses based on the capacity of the drives being added to the array. You can select a larger stripe size at the cost of lower small-file performance. I don't recommend going smaller than the recommended value, as doing so typically reduces the efficiency of the RST driver (the thread handling the RAID burns its CPU time and memory allocation just handling the lookups), which translates into a performance hit on the RAID.
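To illustrate why smaller stripes translate into more driver work, here is a hypothetical RAID-0 mapping sketch (not RST's actual code). It simply shows that a given request spans more stripe segments, and therefore more lookups, as the stripe size shrinks:

```python
# Hypothetical RAID-0 striping math, for illustration only.
def stripe_map(offset_bytes: int, stripe_size: int, num_drives: int = 2):
    """Map a byte offset within the array to (drive index, offset on that drive)."""
    stripe_index = offset_bytes // stripe_size
    drive = stripe_index % num_drives
    drive_offset = (stripe_index // num_drives) * stripe_size + offset_bytes % stripe_size
    return drive, drive_offset

def segments_touched(request_bytes: int, stripe_size: int) -> int:
    """Stripe segments (and thus lookups) an aligned request of this size spans."""
    return -(-request_bytes // stripe_size)  # ceiling division

print(stripe_map(48 * 1024, 16 * 1024))         # -> (1, 16384): second drive, second stripe
print(segments_touched(128 * 1024, 16 * 1024))  # 8 segments at a 16KB stripe
print(segments_touched(128 * 1024, 4 * 1024))   # 32 segments at a 4KB stripe
```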

With the array created, I turned up all of the caching settings (only do this in production if you really need the added performance, and only if the system is behind a UPS). I will also be doing this testing with TRIMmed / empty SSD 750s. This is normally taboo, but I want to test the maximum performance of RST here – *not* the steady-state performance of the SSDs being tested.

The following results were attained with a pair of 1.2TB SSD 750s in a RAID-0 at the default stripe size of 16KB.

Iometer sequential 128KB read QD32:

Iometer random 4K read QD128 (4 workers at QD32 each):

ATTO (QD=4):

For comparison, here is a single SSD 750, also ATTO QD=4:

So what we are seeing here is that DMI 3.0 bandwidth saturates at ~3.5 GB/sec, which sets the upper limit for sequentials on whichever pair of PCIe SSDs you decide to RAID together. This chops off some of the maximum read speed of the 750s, but twice the write speed of a single 750 is still less than 3.5 GB/sec, so writes get a neat doubling in performance for an effective 100% scaling. I was a bit more hopeful for 4K random performance, but a quick comparison of the 4K results in the two ATTO runs above suggests that even with caching at the maximum setting, RST topped out at a 4K random figure equal to that of a single SSD 750. Testing at stripe sizes smaller than the default 16K did not change this result (and actually performed lower).
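For reference, here's the back-of-the-envelope version of that scaling argument as a quick Python sketch. The per-drive figures are rough spec-sheet numbers for a 1.2TB SSD 750 and are only assumptions for illustration:

```python
# Ideal RAID-0 sequential throughput, clipped by the DMI 3.0 link to the PCH.
DMI3_LIMIT_GBPS = 3.5  # practical ceiling observed above

def raid0_expected_gbps(per_drive_gbps: float, drives: int = 2) -> float:
    """Best-case RAID-0 sequential throughput behind the DMI 3.0 link."""
    return min(per_drive_gbps * drives, DMI3_LIMIT_GBPS)

print(raid0_expected_gbps(2.4))  # reads: ~4.8 GB/sec ideal, capped at 3.5 GB/sec
print(raid0_expected_gbps(1.2))  # writes: 2.4 GB/sec, under the cap -> ~100% scaling
```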

The takeaway here is that PCIe RAID has arrived! There are a few tricks to getting it running, and it may be a bit buggy for the first few BIOS revisions from motherboard makers, but it is a feature that is more than welcome, and it works reasonably well considering the complications Intel had to overcome to get all of this performing as well as it does. So long as you are aware of the 3.5 GB/sec hard limit of DMI 3.0, and realize that 4K performance may not scale like it used to with SATA devices, you'll still be getting better performance than you can get with any single PCIe SSD to date!
