A quick look at storage
We take a quick look at the new Z170 chipset and the changes it offers up for PCIe and SATA RAID configs.
** This piece has been updated to reflect changes since first posting. See page two for PCIe RAID results! **
Our Intel Skylake launch coverage is intense! Make sure you check out all of the stories and videos that interest you!
- The Intel Core i7-6700K Review – Skylake First for Enthusiasts (Video)
- Skylake vs. Sandy Bridge: Discrete GPU Showdown (Video)
- ASUS Z170-A Motherboard Preview
- Intel Skylake / Z170 Rapid Storage Technology Tested – PCIe and SATA RAID
When I saw the small amount of press information provided with the launch of Intel Skylake, I was both surprised and impressed. The new Z170 chipset was going to have an upgraded DMI link, nearly doubling throughput. DMI has long been suspected as the reason Intel SATA controllers have pegged at ~1.8 GB/sec, which limits the effectiveness of a RAID with more than three SSDs. Improved DMI throughput could make possible a six-SSD RAID-0 exceeding 3 GB/sec, which would compete with PCIe SSDs.
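The back-of-envelope math here is straightforward. As a rough sketch (assuming the standard PCIe encoding overheads — 8b/10b for the PCIe 2.0-based DMI 2.0 and 128b/130b for the PCIe 3.0-based DMI 3.0 — and a hypothetical ~550 MB/sec per SATA SSD):

```python
# Rough bandwidth estimates for DMI 2.0 vs. DMI 3.0 (illustrative figures,
# not Intel spec-sheet numbers).
def link_bandwidth_gbs(lanes, gt_per_s, payload_bits, total_bits):
    """Usable GB/s for a PCIe-style link after line-encoding overhead."""
    return lanes * gt_per_s * (payload_bits / total_bits) / 8

# DMI 2.0: effectively PCIe 2.0 x4 at 5 GT/s with 8b/10b encoding
dmi2 = link_bandwidth_gbs(4, 5.0, 8, 10)     # -> 2.0 GB/s

# DMI 3.0: effectively PCIe 3.0 x4 at 8 GT/s with 128b/130b encoding
dmi3 = link_bandwidth_gbs(4, 8.0, 128, 130)  # -> ~3.94 GB/s

# Six SATA SSDs at a hypothetical ~550 MB/s each:
raid_demand = 6 * 0.55                       # ~3.3 GB/s

print(round(dmi2, 2), round(dmi3, 2), round(raid_demand, 2))
```

So a six-drive RAID-0 would saturate DMI 2.0 long before the drives top out, while DMI 3.0 has (on paper) just enough headroom for it.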
Speaking of PCIe SSDs, that’s the other big addition to Z170. Intel’s Rapid Storage Technology was going to be expanded to include PCIe (even NVMe) SSDs, with the caveat that they must be physically connected to PCIe lanes falling under the DMI-connected chipset. This is not as big of an issue as you might think, as Skylake does not have the 28 or 40 PCIe lanes seen with X99 solutions. Z170 motherboards only have to route 16 PCIe lanes from the CPU to either two (x8/x8) or three (x8/x4/x4) PCIe slots, and the remaining slots must all hang off of the chipset. This includes the PCIe portion of M.2 and SATA Express devices.
I spent yesterday connecting a Skylake system to many different storage devices, starting with the PCIe side. As you can see above, the UEFI has been updated to include additional options that are specific to Intel’s new RST additions. Flipping the various switches diverts control of the connected device over to RST. With a pair of Intel SSD 750s installed, one via PCIe_3 and the other via the U.2/M.2 Hyper Kit adapter, we were supposed to find an additional option elsewhere in the BIOS. As it turned out, this option did not appear until we forced UEFI in the Compatibility Support Module (CSM) options:
With that last option tweaked, we found what we were looking for:
This is an interesting addition as well, as in the past you could only create RAID volumes from within the option ROM presented during boot (Ctrl-I).
Creating a PCIe RAID here was no more difficult than creating one from SATA devices.
With the PCIe RAID enabled, all we could boot from was our USB Windows installer drive.
Unfortunately that is where the fun ended (**EDIT** We did get this working! Check out Page 2 for the details and results). While we could create a RAID of PCIe devices, the same combination of hardware and software configuration changes that made the RAID possible also removed our ability to boot the system. We could not even test the PCIe RAID from within Windows when booting from a single SATA device. Any single option flipped the other way would enable booting from SATA, but that same change would also make the PCIe RAID disappear. I ran the same gauntlet with a pair of Plextor M6E SSDs, with the same result. It was probably the most frustrating game of catch-22 I’ve ever played, so we had to shelve this testing until we could get some higher-level support from ASUS and Intel – I’m guessing in the form of a bugfixed UEFI firmware.
SATA RAID Testing:
With PCIe testing on hold, I moved on to SATA. With the new DMI link claimed to handle upwards of 3.5 GB/sec to a connected PCIe RAID, I set out to discover how that new upper limit would affect a SATA RAID. I broke out a six pack of recent Intel SATA 6Gb/sec SSDs and scoured the office for SATA power cables. With all six SATA 6Gb/sec ports populated, I created a RAID-0, enabled the highest level of caching, and ran a quick throughput check:
With SATA speeds apparently still capped at less than 2 GB/sec, I was once again disappointed. While these throughput figures are ~100-200 MB/sec faster than what I’ve seen on Z97 / X99 RAIDs, it appears the link between the SATA controller and the rest of the chipset is to blame for the limit we are seeing here. It stands to reason that in those older chipsets, the SATA controller was designed to run just slightly faster than the DMI 2.0 throughput of 20 Gb/s. The new Z170 chipset may have a faster and more capable DMI, but the SATA controller appears to be based on the legacy design. It may still be relying on a PCIe 2.0 x4 link.
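The numbers line up with that theory. A quick sanity check (assuming the hypothesized PCIe 2.0 x4 internal link and its 8b/10b line encoding — this link width is our guess, not something Intel has confirmed):

```python
# Would a legacy PCIe 2.0 x4 internal link explain a sub-2 GB/s SATA ceiling?
lanes = 4            # hypothesized x4 width of the internal link
gt_per_s = 5.0       # PCIe 2.0 signaling rate, GT/s per lane
encoding = 8 / 10    # 8b/10b: only 8 of every 10 bits carry payload

raw_gbs = lanes * gt_per_s * encoding / 8   # usable GB/s before protocol overhead
print(raw_gbs)  # 2.0
```

A 2.0 GB/sec raw ceiling, minus the usual protocol overhead, lands right at the ~1.8-1.9 GB/sec we keep measuring from Intel SATA RAIDs.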
The lack of increased SATA performance on the new Z170 chipset may be disappointing for those who were holding out for an update, but I can see how Intel would shift their focus toward PCIe SSD RAID support. Their SATA solution is still more than sufficient for HDDs performing mass storage duties, while PCIe SSDs can take over for the more latency-sensitive tasks. As for that fire-breathing PCIe RAID we have all been waiting for, we will have to wait a few more days for an updated firmware before we can provide those results. If you have been chomping at the bit to boot off of a PCIe RAID, I recommend holding off on that Z170 motherboard purchase until we can confirm that this issue is corrected.