Preliminary Results at Varying RAID Levels

I first ran some simple tests varying the number of SSDs in the array. First, here is a series of ATTO results taken at QD=10:

Single 950 Pro 512GB

RAID-0 2x 950 Pro

RAID-0 3x 950 Pro

Another handy feature of supporting RAID across three M.2 slots is the possibility of RAID-5 for added redundancy:

Operating in RAID-5 incurs additional CPU overhead during writes, as parity data must be calculated on the fly. That said, the performance remains respectable, with writes falling between the single- and dual-SSD results and reads behaving more like the triple-SSD RAID. This is a good option for those willing to sacrifice some write performance and one SSD's worth of capacity for the ability to survive an SSD failure.
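The parity work described above is, at its core, just XOR across the stripe. Here is a minimal sketch of the idea, assuming a three-drive array (two data blocks plus one parity block per stripe); the block contents and layout are illustrative only, not how Intel's RST driver actually arranges data:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks -- the parity operation RAID-5 performs."""
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe across three SSDs: two data blocks plus their parity.
d0 = b"\x01\x02\x03\x04"
d1 = b"\x10\x20\x30\x40"
parity = xor_blocks(d0, d1)  # the extra CPU work done on every write

# If one SSD fails, its block is rebuilt by XOR-ing the survivors.
recovered_d1 = xor_blocks(d0, parity)
assert recovered_d1 == d1
```

This also shows why the capacity cost is exactly one drive: each stripe stores one parity block regardless of how many data blocks it protects.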

Before moving on to the much more detailed Latency Distribution and Latency Percentile results, here is a simple IOPS vs. QD ramp for read and write performance scaling from one to three SSDs:

No real surprises here, though these results do differ from SATA RAID in that the IOPS ramp does not double when adding SSDs to the RAID. This is because we are pushing the IOPS limits of NVMe itself, combined with the DMI bandwidth limit of the Z170 chipset. We do ultimately get higher IOPS with additional SSDs in the array, but it takes unrealistic queue depths to get there (even power users have a hard time exceeding QD=16).

Writes are a different story: because write speeds are only a fraction of the available DMI bandwidth, we see a steady increase in IOPS with each additional SSD in the array.
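The same arithmetic explains the write scaling. Using an illustrative per-SSD 4KB random-write rate (an assumed round figure, not taken from this article's data) against the same assumed DMI ceiling:

```python
# Illustrative per-SSD 4KB random-write rate -- an assumption for the sketch,
# not a figure measured in this review.
PER_SSD_WRITE_IOPS = 110_000
DMI_4K_IOPS_CEILING = 3.94e9 / 4096  # same assumed DMI 3.0 figure as above

# Aggregate write IOPS stays well under the DMI ceiling at 1-3 drives,
# so adding SSDs keeps paying off.
for n in (1, 2, 3):
    total = n * PER_SSD_WRITE_IOPS
    print(f"{n} SSD(s): {total:,} IOPS, under DMI ceiling: {total < DMI_4K_IOPS_CEILING}")
```

Even at three drives, the aggregate write rate sits far below where the link saturates, which is why the write ramp climbs steadily while the read ramp flattens out.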

Now, some review sites have tested some form of triple PCIe RAID on the Z170 platform, but nobody has been able to truly quantify or explain the increased 'feel' in the speed of an SSD array, even though it is clearly bandwidth-bottlenecked elsewhere in the pipeline. I'm here to say that there *is* a difference, and it *can* be shown. The next page is your reward for making it this far in the review (boy, are all of those folks who wandered off after the first page going to regret it!).
