A quick look at storage
We take a quick look at the new Z170 chipset and the changes it offers up for PCIe and SATA RAID configs.
** This piece has been updated to reflect changes since first posting. See page two for PCIe RAID results! **
Our Intel Skylake launch coverage is intense! Make sure you hit up all the stories and videos that interest you!
- The Intel Core i7-6700K Review – Skylake First for Enthusiasts (Video)
- Skylake vs. Sandy Bridge: Discrete GPU Showdown (Video)
- ASUS Z170-A Motherboard Preview
- Intel Skylake / Z170 Rapid Storage Technology Tested – PCIe and SATA RAID
When I saw the small amount of press information provided with the launch of Intel Skylake, I was both surprised and impressed. The new Z170 chipset was going to have an upgraded DMI link, nearly doubling throughput. DMI has long been suspected as the reason Intel SATA controllers peg at ~1.8 GB/sec, which limits the effectiveness of a RAID with more than 3 SSDs. Improved DMI throughput could enable a 6-SSD RAID-0 exceeding 3 GB/sec, which would compete with PCIe SSDs.
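For a rough sense of the jump, here is a back-of-the-envelope comparison of the two DMI generations. This is a minimal sketch, assuming DMI 2.0 behaves like a PCIe 2.0 x4 link (5 GT/s per lane, 8b/10b encoding) and DMI 3.0 like a PCIe 3.0 x4 link (8 GT/s per lane, 128b/130b encoding):

```python
# Effective throughput of a serial link after line-code overhead.
def link_gb_per_sec(gt_per_sec, lanes, payload_bits, encoded_bits):
    """GT/s per lane * lanes * encoding efficiency, converted to GB/s."""
    return gt_per_sec * lanes * (payload_bits / encoded_bits) / 8

dmi2 = link_gb_per_sec(5.0, 4, 8, 10)      # DMI 2.0: 5 GT/s, 8b/10b   -> ~2.0 GB/s
dmi3 = link_gb_per_sec(8.0, 4, 128, 130)   # DMI 3.0: 8 GT/s, 128b/130b -> ~3.9 GB/s

print(f"DMI 2.0: {dmi2:.2f} GB/s")
print(f"DMI 3.0: {dmi3:.2f} GB/s ({dmi3 / dmi2:.2f}x)")
```

That works out to roughly 2.0 GB/sec vs. 3.9 GB/sec, which matches the "nearly doubled" claim.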
Speaking of PCIe SSDs, that’s the other big addition to Z170. Intel’s Rapid Storage Technology was going to be expanded to include PCIe (even NVMe) SSDs, with the caveat that they must be physically connected to PCIe lanes falling under the DMI-connected chipset. This is not as big of an issue as you might think, as Skylake does not have the 28 or 40 PCIe lanes seen with X99 solutions. Z170 motherboards only have to route 16 PCIe lanes from the CPU to either two (x8/x8) or three (x8/x4/x4) PCIe slots, and the remaining slots must all hang off of the chipset. This includes the PCIe portion of M.2 and SATA Express devices.
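Tallying up the lane budget (my rough numbers, not a board-specific spec — exact HSIO muxing varies by motherboard):

```python
# Approximate Skylake/Z170 PCIe lane budget.
cpu_lanes = 16   # from the CPU: x16, x8/x8, or x8/x4/x4 slot configurations
pch_lanes = 20   # downstream from the Z170 PCH (M.2, SATA Express, extra slots)

print(f"Total routable lanes: {cpu_lanes + pch_lanes}")
# Everything on the PCH side funnels through a single DMI 3.0 x4 uplink,
# so those 20 lanes share roughly 3.9 GB/s of bandwidth to the CPU.
```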
I spent yesterday connecting a Skylake system to many different storage devices, starting with the PCIe side. As you can see above, the UEFI has been updated to include additional options specific to Intel’s new RST additions. Flipping the various switches diverts control of the connected device over to RST. With a pair of Intel SSD 750s installed, one via PCIe_3 and the other via the U.2/M.2 Hyper Kit adapter, we were supposed to find an additional option elsewhere in the BIOS. As it turned out, this option did not appear until we forced UEFI in the Compatibility Support Module (CSM) options:
With that last option tweaked, we found what we were looking for:
This is an interesting addition as well, as in the past you could only create RAID volumes from within the option ROM presented during boot (Ctrl-I).
Creating a PCIe RAID here was no more difficult than creating one from SATA devices.
With the PCIe RAID enabled, all we could boot from was our USB Windows installer drive.
Unfortunately, that is where the fun ended (**EDIT:** We did get this working! Check out page 2 for the details and results). While we could create a RAID of PCIe devices, the same combination of hardware and software configuration changes that made the RAID possible also removed our ability to boot the system. We could not even test the PCIe RAID from within Windows when booting from a single SATA device. Flipping any single option the other way would enable booting from SATA, but that same change would also make the PCIe RAID disappear. I ran the same gauntlet with a pair of Plextor M6e SSDs, with the same result. It was probably the most frustrating game of catch-22 I’ve ever played, so we had to shelve this testing until we can get some higher-level support from ASUS and Intel – I’m guessing in the form of a bug-fixed UEFI firmware.
SATA RAID Testing:
With PCIe testing on hold, I moved on to SATA. The new DMI link is claimed to handle upwards of 3.5 GB/sec to a connected PCIe RAID, so I set out to discover how that new upper limit would affect a SATA RAID. I broke out a six-pack of recent Intel SATA 6 Gb/sec SSDs and scoured the office for SATA power cables. With all six SATA 6 Gb/sec ports populated, I created a RAID-0, enabled the highest level of caching, and ran a quick throughput check:
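(As an aside, for anyone wanting to run a rough sequential check on their own volume, a minimal sketch along these lines will do. This is not the benchmark used for the results here; the file path is hypothetical, and a test file smaller than system RAM will be inflated by OS caching.)

```python
# Quick-and-dirty sequential read check: stream a large pre-made test file
# in 1 MB chunks and report the average throughput.
import os
import time

TEST_FILE = r"R:\testfile.bin"   # hypothetical path on the RAID volume
CHUNK = 1024 * 1024              # 1 MB reads, typical for sequential testing

size = os.path.getsize(TEST_FILE)
start = time.perf_counter()
with open(TEST_FILE, "rb", buffering=0) as f:   # skip Python-level buffering
    while f.read(CHUNK):
        pass
elapsed = time.perf_counter() - start
print(f"Sequential read: {size / elapsed / 1e6:.0f} MB/s over {size / 1e9:.1f} GB")
```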
With SATA speeds apparently still capped at less than 2 GB/sec, I was once again disappointed. While these throughput figures are ~100-200 MB/sec faster than what I’ve seen on Z97 / X99 RAIDs, it appears the link between the SATA controller and the rest of the chipset is to blame for the limit we are seeing here. It stands to reason that in those older chipsets, the SATA controller was designed to run just slightly faster than the DMI 2.0 throughput of 20 Gb/s. The new Z170 chipset may have a faster and more capable DMI, but the SATA controller appears to be based on the legacy design. It may still be relying on a PCIe 2.0 x4 link.
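Putting rough numbers on that suspicion (a sketch assuming ~550 MB/sec per drive for a fast 6 Gb/sec SATA SSD, and a legacy internal link equivalent to PCIe 2.0 x4):

```python
# Aggregate drive potential vs. the suspected legacy internal link feeding
# the SATA controller. All figures are assumptions for illustration.
drives = 6
per_drive_mb = 550                    # assumed sequential read per SATA SSD
aggregate_mb = drives * per_drive_mb  # 3300 MB/s if nothing bottlenecks

# PCIe 2.0 x4 equivalent: 5 GT/s per lane, 4 lanes, 8b/10b encoding.
link_mb = 5.0 * 4 * (8 / 10) / 8 * 1000   # -> 2000 MB/s

print(f"Six-drive potential: {aggregate_mb} MB/s")
print(f"Legacy x4 link ceiling: {link_mb:.0f} MB/s")
# The observed ~1.8-2.0 GB/s cap lines up with the link, not the drives.
```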
Conclusion:
The lack of increased SATA performance in the new Z170 chipset may disappoint those who were holding out for an update, but I can see why Intel would shift their focus toward PCIe SSD RAID support. Their SATA solution is still more than sufficient for HDDs performing mass-storage duties, while PCIe SSDs can take over the more latency-sensitive tasks. As for that fire-breathing PCIe RAID we have all been waiting for, we will have to wait a few more days for an updated firmware before we can provide those results. If you have been chomping at the bit to boot off of a PCIe RAID, I recommend holding off on that Z170 motherboard purchase until we can confirm that this issue is corrected.
Hey Allyn, have you tested the boot-up speeds with the new Z170 boards using the Intel 750? I was hoping the enumeration fault was gone and we wouldn’t need to use the CSM to boot into them. Guess we’ll have to wait for another gen..?
I can't say for sure, since the implementation we got in for testing still had a few bugs to work out. One issue was that we couldn't get the CSM disabled – it would re-enable itself after a reboot.
Any update to this, Allyn? 😛
Oh, and er, maybe some academic tests of boot times with:
- RAID 750 x2
- single 750
Why are you so focused on the SATA performance, Allyn? In the real world, would you rather RAID 0 6 x 240GB SSDs or grab a single 1.2TB SSD 750?
SATA is dead to me. It’s great for general usage, but you can’t compare the 65k/65k queues/commands of NVMe vs the 1/32 of SATA. On the other hand, I am disappointed that getting an NVMe RAID solution is still problematic.
Another thing: the PCH can provide 20 PCIe 3.0 lanes, which translates roughly to 20 GB/s. The DMI, however, can’t process more than 8 GT/s. What’s the point in including all of those lanes when DMI is still the bottleneck?
That, and no real specifications on the GPU that’s integrated into the processor. The DMI is not the only problem; there are plenty of older Intel SKUs in the retail channels, and plenty of time to wait for Zen’s arrival. I’m still coming up empty in my search for a Carrizo-based laptop with downgrade rights to Windows 7, and I’ll pay a premium to get Carrizo’s graphics rather than wait even longer for the i7’s quad cores on my current laptop to struggle at 100% for a few hours just to render a single image. Carrizo’s latest GCN cores and the latest software to utilize HSA have moved rendering more fully onto the GPU, and so long to the need for quad cores and 8 threads for rendering workloads! Come on, HP, where are the ProBooks with a 35-watt Carrizo and a 1920 x 1080 screen, and maybe a ProBook with a discrete GPU to pair with the integrated one? I like business laptops because they have the Windows 7 options; just show me some options for better screen resolutions and Carrizo.
SATA still has its uses. Cost/GB is still lower than with PCIe devices, and there are still some who want GPUs in all of their PCIe slots. Booting NVMe PCIe is still not as elegant / trouble-free as it should be, either. Also, if it weren't bottlenecked somewhere, 6x SATA in a RAID-0 would give many PCIe SSDs a run for their money, especially in write speeds.
This is so that you can physically hook up a ton of devices. The expectation is that you don’t use them all at once.
The CPU, in the end, can only process data at a certain speed. If you were able to hook up enough SATA drives to saturate 20 PCI-Express 3.0 lanes, and let’s go with your figure of 20MB/s, that’s awfully close to the maximum total memory bandwidth of a DDR3 system. You’d choke everything else; it just doesn’t make sense, at least on a consumer system.
And hey, SATA is a dog now, but remember how far it has come since the IDE days. Now it’s time for NVMe, certainly the next evolution. But in all honesty, while the geek in me loves all this, at the moment I am left wondering how much I/O I really need. As it is, I’m mostly blocked by a 75/75 internet connection in terms of I/O, and that’s easily taken care of by one SATA port.
.. and certainly 20GB/s not MB/s lol 🙂
How about any of the stable PCIe 3.0 add-on RAID controllers with 2 fan-out ports supporting a total of 8 x 6G SSDs?
If my math is correct, such x8 edge connectors support an upstream bandwidth of 8.0 GT/s / 8.125 x 8 lanes ~= 7.88 GB/s
(with the 128b/130b “jumbo frame”, 130 bits / 16 bytes = 8.125 bits per byte in PCIe 3.0 chipsets).
Allyn, I know you don’t prefer Highpoint AOCs, but doesn’t PCPER have a few of the latest PCIe 3.0 RAID controllers in your parts inventory e.g. LSI, ATTO, Adaptec et al.?
p.s. Plug-and-Play, where are you?
MRFS
At Newegg, search for “12Gb/s RAID Controller”.
It would be nice if SATA SSD manufacturers upped their clock speeds to 12 Gbps.
Or, am I dreaming again?
MRFS
Not to be forgotten: 12Gbps SAS SSDs:
Toshiba Raises The Bar For 12 Gb/s SAS SSD Performance With PX04S Series
http://www.tomsitpro.com/articles/toshiba-ssd-12gbs-sas-px04s,1-2778.html
Really simple question here: can I just grab any Z170 mobo (with a 32 Gbps M.2 slot) and BOOT Win 8.1 / 10 from an SM951?
Yes, that should work without issue (even with the NVMe version of the SM951)
Hi Allyn,
Follow-up simple question, please. Can you give, or direct me to, the steps for installing Windows 7 Pro onto an SM951 in the M.2 slot of an ASUS Z170-A mobo as the boot drive?
Have read endless threads on problems and am still not clear on how to.
So far your thread is great… but not seeking RAID… just booting off the SM951 with Windows 7..
Thanks in advance..
Do you plan on testing an NVMe PCIe Gen 3 SSD connected to the Z170 PCH vs. connected to native CPU lanes on X99 to see the difference in performance (latency, etc.)?
Some quick tests show that a single SSD 750 is very close in performance on either side of the chipset. Will do more detailed testing in the future, but it's nothing to be concerned about.
Please also include test cases where there’s simultaneous I/O going via PCIe SSD and USB 3 over DMI 3, to see how things slow down when DMI is saturated from multiple sources.
Same question I had!
I’ve been waiting for an article like this, many thanks.
As far as the results go, what a shame. Turns out the DMI bottleneck was not the only one at work.
Does the 750 or SM951 use CPU lanes on X99 mobos? If so, how does its performance compare to the PCH lanes of the Skylake stuff?
On a side note, a storage guy on OCN told me that Skyrim with a ton of mods probably requires a lot of sequential reads. That seems contrary to what I know about SSDs. Shouldn’t it be random reads? 4k, 16k.
And finally, I’d make a trace analysis joke, but I think that’s pretty dead at this point, lol.
Skyrim mods are likely being read out at 128k sequential. A given texture is probably >16k in size, and would look more sequential to an SSD than random.
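(A toy illustration of that point, with assumed sizes: any file larger than the request size breaks into runs of accesses at adjacent offsets, which look sequential to the drive.)

```python
# Toy example: a 2 MB texture fetched in 128 KB requests yields 16
# back-to-back accesses at adjacent offsets - sequential from the SSD's view.
texture_bytes = 2 * 1024 * 1024   # assumed texture size
io_size = 128 * 1024              # assumed request size

offsets = list(range(0, texture_bytes, io_size))
contiguous = all(b - a == io_size for a, b in zip(offsets, offsets[1:]))
print(f"{len(offsets)} requests, contiguous run: {contiguous}")
```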
Thanks a lot Allyn, that’s very useful information to me.
What about the PCH/CPU pcie lane stuff?
DMI doesn't count against the 16 CPU lanes of Skylake. Also, I initially misunderstood the info we were passed – there are 20 downstream lanes from the PCH (not 20-4 as I initially thought). This means you can have 36 total PCIe lanes connected to a Skylake CPU (though 20 of those are bottlenecked by the x4 DMI link).
So the difference between the 16 lanes in Skylake for the GPU and the rest of the PCIe lanes (20 left) for other stuff is that the 16 lanes for the GPU have much higher bandwidth compared to the other 20 lanes?
I heard people say ‘the 16 lanes are native, direct connections to the CPU whereas the rest are handled by the chipset’, which hints that there might be some sort of caveat with the rest of the PCIe lanes beyond just bandwidth. Higher latencies? Or something.
Nice work on these articles, you produce some good stuff. Really hope you get around to answering this question, it’s my last burning SSD question.
There wouldn’t, in general, be much of a latency difference, since the DMI 3.0 connection is very similar to PCIe 3.0, i.e. a packet bus. The problem, though, is that they will all share 3.93 GB/s (31.44 Gb/s) via DMI 3.0, and they will also share that bandwidth with 6 other HSIO slots that are used for USB 3.x.
cf.
http://forums.storagereview.com/index.php/topic/37949-toshiba-px04s-enterprise-ssd-review-discussion/?p=291002
Why aren’t SATA SSD manufacturers upping their clock speeds to 12 Gb/s too?
Even though SATA SSDs are necessarily single-ported (as compared to SAS double-porting), an 8G clock is already a standard feature of the PCIe 3.0 specification, and 12 Gb/s RAID controllers are now available from a number of reputable vendors, e.g. Areca, LSI, Adaptec, ATTO, Intel, etc.
Moreover, SAS backplanes are designed to work with SATA drives.
What am I missing here, please?
MRFS
It seems like DMI is still a limiting factor. If they are going to put 20 PCIe lanes on the chipset, then it seems like they should increase the speed of the uplink to the CPU more significantly.
Could you use the Intel RST tech with a bunch of SATA SSDs? 'Cause then having a RAID-0 with zero cache would be like an ultimate all-in-one solution.
So if 3.5 GB/s is the limit of DMI 3.0, why have all these ports with a limit so low? You have 10-12 USB 3.0 ports, USB 3.1, M.2, SATA 3 ports. I feel like it would be easy to max that out with all the connections you have.
PCIe RAID is really exciting, but for the money I think I’d still just RAID-0 a pair of 850 EVOs for daily use.
We experimented with a RAID-0 of 4 x Samsung 840 Pro SSDs, and our sequential READ speed exceeded 1,800 MB/sec with PCIe 2.0 and a cheap Highpoint RocketRAID 2720SGL:
http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q.Deluxe.Direct.IO.1.bmp
http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q.Deluxe.Direct.IO.2.bmp
http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q.Premium.Direct.IO.1.bmp
http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q.Premium.Direct.IO.2.bmp
The P5Q Deluxe machine above remains very snappy: we plan to do the same with 4 x Samsung 850 Pro SSDs. With a 10-year factory warranty, the speed will be fast enough e.g. to SAVE and RESTORE 12GB ramdisks at SHUTDOWN and STARTUP using RamDisk Plus.
Icy Dock now makes a nice 8-slot 5.25″ enclosure:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817994178&Tpk=N82E16817994178
Allyn,
How well do we think Z170 boards will boot from NVMe SSDs?
Specifically M.2 NVMe.
Hey Allyn – are you 100% sure all 6 of those SATA SSDs were communicating at 6 Gbps?
Individually they were going
Individually they were going full speed.
Hi Allyn,
(1) When testing the PCIe RAID-0, did you try Windows write-cache buffer flushing set to the default Enabled (i.e. disabling Intel proprietary caching options)? I think it actually gave me better results in some tests a while back with an SSD RAID..
(2) If you create a 6-SSD SATA RAID-0 in the Windows Intel RST GUI, does the wizard also happen to give you a choice between SATA and PCIe controller options? Could it be that with the PCIe controller setting chosen, the 6-SSD RAID could get the full 3+ GB/s bandwidth?
Performance was lower with it enabled.
When you select PCIe, you can only choose PCIe devices.
So 3x SM951 on the ASRock Z170 Extreme7 would still be capped at 3.5 GB/s?
There would be no way to connect the third one and use it with RST.
Are you sure, Allyn? Reading the manual, this looks very doable.
It would mean no SATA drives, though…
https://pcper.com/reviews/Storage/Intel-Skylake-Z170-Rapid-Storage-Technology-Tested-PCIe-and-SATA-RAID/PCIe-RAID-Resu
Why would ASRock put three Ultra M.2 slots on the Z170 Extreme7+, then?
That isn’t true – people have tested with three drives connected to M.2 in RAID. It works fine. It does away with all of the SATA 3 ports except for 4, as per their manual.
Can I work with 2 or 3 Samsung 951s in RAID 0???
Can I get a bus speed of 200 MHz?
BTW, Allyn,
NICE WORK!
You be THE BEST, Man 🙂
MRFS