A quick look at storage
We take a quick look at the new Z170 chipset and the changes it offers up for PCIe and SATA RAID configs.
** This piece has been updated to reflect changes since first posting. See page two for PCIe RAID results! **
Our Intel Skylake launch coverage is intense! Make sure you hit up all the stories and videos that interest you!
- The Intel Core i7-6700K Review – Skylake First for Enthusiasts (Video)
- Skylake vs. Sandy Bridge: Discrete GPU Showdown (Video)
- ASUS Z170-A Motherboard Preview
- Intel Skylake / Z170 Rapid Storage Technology Tested – PCIe and SATA RAID
When I saw the small amount of press information provided with the launch of Intel Skylake, I was both surprised and impressed. The new Z170 chipset was going to have an upgraded DMI link, nearly doubling throughput. DMI has long been suspected to be the reason Intel SATA controllers have pegged at ~1.8 GB/sec, which limits the effectiveness of a RAID with more than 3 SSDs. Improved DMI throughput could enable a 6-SSD RAID-0 that exceeds 3 GB/sec, which would compete with PCIe SSDs.
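As a rough sanity check on those numbers (my own back-of-envelope arithmetic, not figures from Intel), the old and new links work out as follows, assuming DMI 2.0 behaves like a PCIe 2.0 x4 link and DMI 3.0 like a PCIe 3.0 x4 link:

```python
# Rough link-bandwidth arithmetic for DMI 2.0 vs. DMI 3.0.
# Assumption: DMI 2.0 ~ PCIe 2.0 x4 (8b/10b), DMI 3.0 ~ PCIe 3.0 x4 (128b/130b).

def link_gbytes_per_sec(gt_per_sec: float, lanes: int, encoding: float) -> float:
    """Raw link bandwidth in GB/s: transfer rate x lanes x encoding efficiency / 8 bits."""
    return gt_per_sec * lanes * encoding / 8

# DMI 2.0: 5 GT/s per lane, 4 lanes, 8b/10b encoding (80% efficient)
dmi2 = link_gbytes_per_sec(5.0, 4, 8 / 10)     # 2.0 GB/s raw

# DMI 3.0: 8 GT/s per lane, 4 lanes, 128b/130b encoding (~98.5% efficient)
dmi3 = link_gbytes_per_sec(8.0, 4, 128 / 130)  # ~3.94 GB/s raw

print(f"DMI 2.0 raw: {dmi2:.2f} GB/s, DMI 3.0 raw: {dmi3:.2f} GB/s")
```

The raw figures land right where you would expect: ~2 GB/sec for the old link (which, minus protocol overhead, matches the ~1.8 GB/sec SATA RAID ceiling) and just under 4 GB/sec for the new one.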
Speaking of PCIe SSDs, that’s the other big addition to Z170. Intel’s Rapid Storage Technology was going to be expanded to include PCIe (even NVMe) SSDs, with the caveat that they must be physically connected to PCIe lanes falling under the DMI-connected chipset. This is not as big an issue as you might think, as Skylake does not have the 28 or 40 PCIe lanes seen with X99 solutions. Z170 motherboards only have to route 16 PCIe lanes from the CPU to either two (x8/x8) or three (x8/x4/x4) PCIe slots, and the remaining slots must all hang off of the chipset. This includes the PCIe portion of M.2 and SATA Express devices.
I spent yesterday connecting a Skylake system to many different storage devices, starting with the PCIe side. As you can see above, the UEFI has been updated to include additional options specific to Intel’s new RST additions. Flipping the various switches diverts control of the connected device over to RST. With a pair of Intel SSD 750s installed, one via PCIe_3 and the other via the U.2/M.2 Hyper Kit adapter, we were supposed to find an additional option elsewhere in the BIOS. As it turned out, this option did not appear until we forced UEFI in the Compatibility Support Module (CSM) options:
With that last option tweaked, we found what we were looking for:
This is an interesting addition as well, as in the past you could only create RAID volumes from within the option ROM presented during boot (Ctrl-I).
Creating a PCIe RAID here was no more difficult than creating one from SATA devices.
With the PCIe RAID enabled, all we could boot from was our USB Windows installer drive.
Unfortunately that is where the fun ended (**EDIT** We did get this working! Check out Page 2 for the details and results). While we could create a RAID of PCIe devices, the same combination of hardware and software configuration changes that made the RAID possible also removed our ability to boot the system. We could not even test the PCIe RAID from within Windows when booting from a single SATA device. Flipping any single option the other way would enable booting from SATA, but that same change would also make the PCIe RAID disappear. I ran the same gauntlet with a pair of Plextor M6e SSDs, with the same result. It was probably the most frustrating game of catch-22 I’ve ever played, so we had to shelve this testing until we can get some higher-level support from ASUS and Intel – I’m guessing in the form of a bug-fixed UEFI firmware.
SATA RAID Testing:
With PCIe testing on hold, I moved on to SATA. With the new DMI link claimed to handle upwards of 3.5 GB/sec to a connected PCIe RAID, I set out to discover how that new upper limit would affect a SATA RAID. I broke out a six-pack of recent Intel SATA 6Gb/sec SSDs and scoured the office for SATA power cables. With all six SATA 6Gb/sec ports populated, I created a RAID-0, enabled the highest level of caching, and ran a quick throughput check:
With SATA speeds apparently still capped at less than 2 GB/sec, I was once again disappointed. While these throughput figures are ~100-200 MB/sec faster than what I’ve seen on Z97 / X99 RAIDs, it appears the link between the SATA controller and the rest of the chipset is to blame for the limit we are seeing here. It stands to reason that in those older chipsets, the SATA controller was designed to run just slightly faster than the DMI 2.0 throughput of 20 Gb/s. The new Z170 chipset may have a faster and more capable DMI, but the SATA controller appears to be based on the legacy design. It may still be relying on a PCIe 2.0 x4 link.
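The math behind that suspicion is straightforward. A sketch of why a legacy PCIe 2.0 x4-style link would cap right around the observed figure (the ~10% protocol overhead here is my assumption for illustration, not a measured value):

```python
# Why a legacy PCIe 2.0 x4-style internal link tops out near 1.8 GB/s.
# Assumed figures: 5 GT/s per lane, 4 lanes, 8b/10b encoding, and roughly
# 10% packet/protocol overhead (the overhead fraction is an illustration).

raw_gbps = 5.0 * 4                       # 20 Gb/s line rate across the link
encoded_gbs = raw_gbps * (8 / 10) / 8    # 2.0 GB/s of payload after 8b/10b
effective_gbs = encoded_gbs * 0.9        # ~1.8 GB/s after protocol overhead

print(f"{encoded_gbs:.1f} GB/s raw payload, ~{effective_gbs:.2f} GB/s effective")
```

That ~1.8 GB/sec figure lines up with the SATA RAID ceiling we keep running into, which is why a legacy-derived SATA controller design remains the prime suspect.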
The lack of increased SATA performance on the new Z170 chipset may be disappointing for those who were holding out for an update, but I can see why Intel would shift their focus toward PCIe SSD RAID support. Their SATA solution is still more than sufficient for HDDs performing mass storage duties, while PCIe SSDs can take over the more latency-sensitive tasks. As for that fire-breathing PCIe RAID we have all been waiting for, we will have to wait a few more days for an updated firmware before we can provide those results. If you have been chomping at the bit to boot off of a PCIe RAID, I recommend holding off on that Z170 motherboard purchase until we can confirm that this issue is corrected.
Hey guys, what’s the version of Intel Rapid Storage Technology that you have installed, and where did you get it? I’m not seeing the options shown in yours.
My setup: ASUS Z170-DELUXE with XP951 (512GB) and Intel 750 1.2TB drives.
I created a RAID-0 of the Intel 750 + XP951. Capacity lowered to 953GB.
I was able to see the drives in RAID-0 during install, but why was the capacity lowered (953GB)?
The W10 install went smoothly, but it does not boot; I tried various changes and am still unable to boot.
Can I have the BIOS settings to make UEFI boot with RAID-0?
Thanks for your article.
Two members of a RAID-0 array must contribute the exact same amount of storage to that array; hence, as summarized in a reply above, RST allocated only 512GB (unformatted) from the Intel 750, as follows:
512GB (XP951) + 512GB (Intel 750) = 1,024 GB unformatted
Using a much older version of Intel Matrix Storage Console, it reports 48.83GB after formatting a 50GB C: partition on 2 x 750GB WDC HDDs in RAID-0. Using that ratio, then, we predict:
48.83 / 50 = 0.9766 x 1,024 = 1,000.03 GB (formatted)
You got 953GB, which is close enough, considering that we are looking at entirely new storage technologies here, and later operating systems default to creating shadow and backup partitions, which explains the difference between 1,000 and 953.
Hope this helps.
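The capacity arithmetic in the comment above can be sketched in a few lines (the 0.9766 formatted ratio is the commenter’s empirical figure from an older Matrix Storage setup, not a fixed constant):

```python
# RAID-0 capacity sketch: every member contributes only as much space as the
# smallest drive, then formatting/reserved partitions shave off a few percent.

def raid0_capacity_gb(member_sizes_gb, formatted_ratio=0.9766):
    """Return (unformatted, estimated formatted) RAID-0 capacity in GB.
    formatted_ratio is an empirical estimate, not a fixed constant."""
    unformatted = min(member_sizes_gb) * len(member_sizes_gb)
    return unformatted, unformatted * formatted_ratio

# 512GB XP951 + 1.2TB Intel 750: the 750 contributes only 512GB to the array
unfmt, fmt = raid0_capacity_gb([512, 1200])
print(f"{unfmt} GB unformatted, ~{fmt:.0f} GB formatted")  # 1024 GB, ~1000 GB
```

The remaining gap down to the reported 953GB would come from OS-reserved partitions and GB-vs-GiB reporting differences, as the commenter notes.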
Did you format with GPT or MBR?
I seem to recall that UEFI booting requires a GPT C: partition.
Please double-check this, because I’ve only formatted
one GPT partition in my experience (I’m still back at XP/Pro).
Allyn, if you are reading this, please advise.
Capacity lowered to match the smaller drive. Is anyone able to boot Windows from a RAID-0 PCIe setup?
Can the Samsung SM951 128GB AHCI work in a 3-disk RAID-0?
If I buy 3 disks like that, what speed can they reach?
What is the difference between the Samsung SM951 and the SSD 750?
I am buying a Gigabyte G1 Gaming board. It has 2 M.2 slots on the board and says it can do RAID-0. If I put 2 SM951 256GB M.2 cards on the board, should I be able to run RAID-0 and also boot, and how would I set it up? I was also planning on a 2TB HDD for storage. Any thoughts?
Good luck. I just returned my G1 Gaming; I could not get CSM disabled to stay, and I never got the Intel RAID setup to appear in the BIOS.
This was just a fine article, advanced, and I enjoyed reading it. Thanks.
Way back, I was one of the first to run bootable dual SSDs in RAID-0 with an LSI RAID controller. This month I hope to purchase the new Samsung 950 Pro V-NAND M.2.
I want to run a pair in RAID-0.
But, and I quote the author: So what we are seeing here is that the DMI 3.0 bandwidth saturates at ~3.5 GB/sec, so this is the upper limit for sequentials on whichever pair of PCIe SSDs you decide to RAID together.
Since the new Samsungs are, in broad terms:
Sequential Read 2,500 MB/s
Sequential Write 1,500 MB/s
Ergo, running dual Samsung 950 Pro V-NAND M.2 drives would have a theoretical sequential read of 5 GB/s, thus surpassing the bandwidth of DMI 3.0 by 1.5 GB/s.
Even if a hardware raid controller is used, it will be limited to DMI bandwidth?
I take it there is no workaround for this bottleneck at the moment…
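A simple way to model the commenter’s question, using the article’s ~3.5 GB/sec DMI 3.0 saturation point (the drive figures below are Samsung’s claimed sequentials, and the cap is the article’s measured ceiling, not a spec value):

```python
# Back-of-envelope model: a RAID-0 of PCIe SSDs hanging off the chipset is
# capped by the DMI link, no matter what the drives could sum to on their own.
# The 3.5 GB/s cap is the article's observed DMI 3.0 saturation point.

def raid_seq_limit(per_drive_gbs: float, n_drives: int, dmi_cap_gbs: float = 3.5) -> float:
    """Effective sequential ceiling: the lesser of summed drive speed and the DMI cap."""
    return min(per_drive_gbs * n_drives, dmi_cap_gbs)

# Two 950 Pros reading 2.5 GB/s each would sum to 5.0 GB/s, but DMI caps it:
print(raid_seq_limit(2.5, 2))  # -> 3.5
```

So yes: any RAID controller or SSD pair sitting behind the chipset shares that one link, and the only workaround is to put the drives on CPU-attached PCIe lanes, which is exactly what a later comment asks about.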
> Even if a hardware raid controller is used, it will be limited to DMI bandwidth?
Only if it is connected to a PCIe slot behind DMI.
(Sorry for my bad English.)
I’m trying to install 2 SM951 NVMe drives in RAID-0 on my Sabertooth X99 and I’m having some difficulty.
One SSD is installed in the native M.2 slot and one in a PCIe adapter in the second slot.
The BIOS sees both.
I can install the OS on either.
I tried some changes in the BIOS like those in your article.
I tried modding the BIOS file to update to version 14 of the Intel RST driver (in fact, in the advanced mode of the BIOS, the Intel Rapid Storage flag shows version 14, but no disk is detected).
In the NVMe configuration they are listed:
bus 3 sm951…..
bus 5 sm951…..
Can you help me in some way?
By implementing RAID-0 on an ASRock Z170 Extreme7 using 3 x Samsung 950 Pro V-NAND M.2 (upcoming) inserted into the three available Ultra M.2 ports (which are connected directly to the CPU, i.e. not behind a chipset with the DMI 3.0 limit of 3.5 GB/sec), wouldn’t it be possible to achieve about 7.5 GB/s sequential read?
As it turns out, the Intel 750 was tested by some folks at ASUS with a sequential read of over 2.7 GB/s and a write of 1.3 GB/s. Who needs anything else? I don’t think Samsung will be as fast, and it’s late to the market.
October 14, 2015 | 09:41 PM – Posted by Md (not verified)
Follow-up simple question, please. Can you give, or direct me to, steps for installing Windows 7 Pro onto an SM951 in the M.2 slot of an ASUS Z170-A mobo as the boot drive?
I have read endless threads on problems and am still not clear on how to do it.
So far your thread is great… but I’m not seeking RAID, just booting off the SM951 with Windows 7.
Thanks in advance.
Hi, did you try installing on a normal SATA SSD, then cloning to the SM951?
If I understand correctly, the max bandwidth over DMI 3.0 on Z170 (e.g. to the SATA Express connector) is 3.5 GB/s? What about on the X99 chipset? Is it also possible to RAID M.2 on X99? And why do motherboard manufacturers advertise the speed limit as 32Gb/sec?
Hi Ryan, I have some questions about the RAID-0 configuration of NVMe M.2 SSDs. If I make a RAID-0 setup with 2 NVMe SSDs each running on PCIe 3.0 x4, is DMI 3.0 sufficient to carry that much bandwidth? From what I know, one PCIe 3.0 x4 link is equal to 32 Gbps, so in RAID-0 it would be 64 Gbps, yes?
I really appreciate the testing you have done here. I built a Z170-Deluxe system with a PCIe 750 SSD and a GTX 980 Ti, and called Intel for support after I buggered up my UEFI settings. My support guy couldn’t find any answers (I’m sure he was googling everything I did), but having him on the line gave me the courage to forge ahead, and I finally got the drive back and working. I do think I have a ways to go, though. I only have two SATA ports available now: SATA 3 and 4. In reading your posts, I believe I forgot to go back and disable Hyper Kit. Obviously Hyper Kit isn’t needed for a drive installed in PCIe slot 3. Doh!
Keep up the good work
For those of you wanting to do M.2 RST SSD caching: to get the drive to show up, just change the boot option as mentioned in the article. Thank you guys so much for finding and sharing these settings!!
Thanks guys, the diagram of the lanes on the next page helped me a lot. I had been struggling to get my PCIe RAID running alongside a SATA one; now I understand why, and I see how I can do it.