Architectural Changes
AVX-512
Although the underlying architecture of the Skylake-X processors is the same as the mainstream consumer Skylake line, which we knew as the Core i7-6000 series, there are some important changes thanks to the Xeon heritage of these parts. First, each and every time we discuss this platform, Intel has tried to impress upon us the value of AVX-512 and its ability to drastically improve the performance of applications that are recompiled and engineered to take advantage of it. Due to timing constraints today, and with a lack of real-world software that can utilize it, we are going to hold off on the more detailed AVX-512 discussion for another day.
Caching Hierarchy and Performance
We do know that the cache hierarchy of the Skylake-X processors has changed:
Skylake-X processors also rebalance the cache hierarchy compared to previous generations, shifting toward more exclusive per-core cache at the expense of shared LLC. While Broadwell-E had 256KB of private L2 cache per core and 2.5MB per core of shared L3, Skylake-X moves to 1MB of private L2 per core and 1.375MB per core of shared L3.
This shift in cache division will increase the hit rate on the lowest latency memory requests, though we do expect inter-core latency to increase slightly as a result. Intel obviously has made this decision based on workload profiling so I am curious to see how it impacts our testing in the coming weeks.
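To put the rebalancing in concrete numbers, here is a quick sketch (cache sizes taken from the paragraph above) of what the split looks like across a full 10-core die:

```python
# Per-core cache sizes for the two 10-core parts discussed above (in KB).
CORES = 10

broadwell_e = {"l2_private_kb": 256,  "llc_shared_kb": 2560}  # 2.5 MB/core shared L3
skylake_x   = {"l2_private_kb": 1024, "llc_shared_kb": 1408}  # 1.375 MB/core shared L3

def totals(cfg, cores=CORES):
    """Return (total private L2, total shared LLC) in MB for the whole die."""
    return (cfg["l2_private_kb"] * cores / 1024,
            cfg["llc_shared_kb"] * cores / 1024)

print("Broadwell-E:", totals(broadwell_e))  # (2.5, 25.0) MB
print("Skylake-X:  ", totals(skylake_x))    # (10.0, 13.75) MB
```

The die-wide totals make the trade-off plain: four times the low-latency private cache, at the cost of nearly half the shared LLC.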
After more talks with Intel and our own testing, it’s clear that the changes made to the mesh architecture (below) and cache divisions have an impact on latencies and performance in some applications. Take a look at our cache latency results below:
I am showing the Core i9-7900X running at DDR4 speeds of 2400 MHz, 2800 MHz and 3200 MHz, and the previous 10-core part from Intel, the Core i7-6950X at DDR4-2400. What should be obvious to us is that the L3/LLC average latency for the new SKL-X processors is going to be higher (slower) than the previous generation by a considerable margin at the same memory speeds. A delta of 26%, to the advantage of the previous architecture, is worth keeping an eye on, even if the L2 access latencies remain essentially unchanged.
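For readers curious how numbers like these are produced: cache latency testers typically use a pointer chase, where each load's address depends on the previous load so hardware prefetchers cannot hide the latency. A minimal sketch of the idea follows; interpreter overhead dominates in Python, so the absolute numbers look nothing like the chart, and real tools do this in native code over a raw buffer:

```python
import random
import time

def pointer_chase_ns(size_kb, stride=64, iters=1_000_000):
    """Walk a random cyclic permutation so each load depends on the previous
    one (no prefetch-friendly pattern). Returns rough ns per access."""
    n = max(2, (size_kb * 1024) // stride)
    order = list(range(n))
    random.shuffle(order)
    chain = [0] * n
    for i in range(n):                # build one big cycle: order[i] -> order[i+1]
        chain[order[i]] = order[(i + 1) % n]
    idx = 0
    t0 = time.perf_counter()
    for _ in range(iters):
        idx = chain[idx]              # serialized, dependent "loads"
    return (time.perf_counter() - t0) / iters * 1e9

# A footprint larger than the LLC forces misses out to DRAM in a native version.
print(f"~{pointer_chase_ns(4096):.0f} ns per dependent access (interpreter-dominated)")
```

By varying `size_kb` from a few KB to tens of MB, the measured latency steps up at each cache level's capacity, which is how plots like the one above are generated.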
Mesh Architecture Interconnect
I wrote about this new revelation that is part of both the Skylake-X HEDT consumer processors and the Xeon Scalable product this week, but it’s worth including the details here as well.
One of the most significant changes to the new processor design comes in the form of a new mesh interconnect architecture that handles the communications between the on-chip logical areas.
Since the days of Nehalem-EX, Intel has utilized a ring-bus architecture for processor design. The ring bus operated in a bi-directional, sequential method that cycled through various stops. At each stop, the control logic would determine if data was to be collected or deposited with that module. These ring bus stops are located at the CPU cores / caches, the memory controllers, the PCI Express interface, the LLCs, etc. This ring bus was fairly simple and easily expandable by simply adding more stops on the ring itself.
However, over several generations, the ring bus has become quite large and unwieldy. Compare the ring bus from Nehalem above to the one for last year’s Xeon E5 v4 platform.
The spike in core counts and other modules caused a ballooning of the ring that eventually turned into multiple rings, complicating the design. As you increase the stops on the ring bus you also increase the physical latency of the messaging and data transfer, for which Intel compensated by increasing bandwidth and clock speed of this interface. The expense of that is power and efficiency.
For an on-die interconnect to remain relevant, it needs to be flexible in bandwidth scaling, reduce latency, and remain energy efficient. With 28-core Xeon processors imminent, and new IO capabilities coming along with them, the time for the ring bus in this space is over.
Starting with the HEDT and Xeon products released this year, Intel will be using a new on-chip design called a mesh that Intel promises will offer higher bandwidth, lower latency, and improved power efficiency. As the name implies, the mesh architecture is one in which each node relays messages through the network between source and destination. Though I cannot share many of the details on performance characteristics just yet, Intel did share the following diagram.
As Intel indicates in its blog on the mesh announcements, this generic diagram “shows a representation of the mesh architecture where cores, on-chip cache banks, memory controllers, and I/O controllers are organized in rows and columns, with wires and switches connecting them at each intersection to allow for turns. By providing a more direct path than the prior ring architectures and many more pathways to eliminate bottlenecks, the mesh can operate at a lower frequency and voltage and can still deliver very high bandwidth and low latency. This results in improved performance and greater energy efficiency similar to a well-designed highway system that lets traffic flow at the optimal speed without congestion.”
The bi-directional mesh design allows a many-core design to offer lower node-to-node latency than the ring architecture could provide, and, by adjusting the width of the interface, Intel can control bandwidth (and, by relation, frequency). Intel tells us that this can offer lower average latency without increasing power. Though it wasn’t specifically mentioned in this blog, the assumption is that because nothing is free, this has a slight die size cost to implement the more granular mesh network.
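A toy model illustrates why the mesh wins on hop count as stop counts grow. Assuming one hop per link, compare the average distance on a 20-stop bi-directional ring against the same 20 stops arranged as a 4×5 grid (illustrative geometry only, not Intel's actual floorplan):

```python
from itertools import product

def ring_avg_hops(n):
    """Average hop count between distinct stops on a bi-directional ring."""
    hops = [min(d, n - d) for d in range(1, n)]  # shorter direction wins
    return sum(hops) / (n - 1)

def mesh_avg_hops(rows, cols):
    """Average Manhattan distance between distinct nodes on a 2D mesh."""
    nodes = list(product(range(rows), range(cols)))
    total = pairs = 0
    for a in nodes:
        for b in nodes:
            if a != b:
                total += abs(a[0] - b[0]) + abs(a[1] - b[1])
                pairs += 1
    return total / pairs

print(f"20-stop ring: {ring_avg_hops(20):.2f} avg hops")   # ~5.26
print(f"4x5 mesh:     {mesh_avg_hops(4, 5):.2f} avg hops") # 3.00
```

Even at 20 stops the grid roughly halves the average path length, and the gap widens as node counts climb toward 28-core dies, which is consistent with Intel's lower-frequency, lower-voltage pitch for the mesh.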
Using a mesh architecture offers a couple of capabilities and also requires a few changes to the cache design. By dividing up the IO interfaces (think multiple PCI Express banks, or memory channels), Intel can provide better average access times to each core by intelligently spacing the location of those modules. Intel will also be breaking up the LLC into different segments which will share a “stop” on the network with a processor core. Rather than the previous design of the ring bus where the entirety of the LLC was accessed through a single stop, the LLC will perform as a divided system. However, Intel assures us that performance variability is not a concern:
Negligible latency differences in accessing different cache banks allow software to treat the distributed cache banks as one large unified last level cache. As a result, application developers do not have to worry about variable latency in accessing different cache banks, nor do they need to optimize or recompile code to get a significant performance boost out of their applications.
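One way to picture how a distributed LLC can still behave like a single uniform cache: cache lines are spread across the slices by an address hash, so accesses scatter evenly over all the mesh stops rather than piling onto one bank. The hash below is a stand-in for illustration, not Intel's actual function:

```python
from collections import Counter

N_BANKS = 10  # one LLC slice per core on a hypothetical 10-core die

def bank_for_line(phys_addr, line_bytes=64, banks=N_BANKS):
    """Map a physical address to an LLC bank via a toy mixing hash."""
    line = phys_addr // line_bytes
    return (line ^ (line >> 7) ^ (line >> 13)) % banks

# Hash 100,000 consecutive cache lines and count hits per bank.
hist = Counter(bank_for_line(a) for a in range(0, 64 * 100_000, 64))
print(sorted(hist.values()))  # roughly equal counts across all ten banks
```

Because every core's accesses spread across all banks, average latency is about the same from anywhere on the die, which is the basis of the "no recompilation needed" claim in the quote above.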
There is a lot to dissect when it comes to this new mesh architecture for Xeon Scalable and Core i9 processors, including its overall effect on the LLC cache performance and how it might affect system memory or PCI Express performance. In theory, the integration of a mesh network-style interface could drastically improve the average latency in all cases and increase maximum memory bandwidth by giving more cores access to the memory bus sooner. But, it is also possible this increases maximum latency in some fringe cases.
Turbo Boost Max Technology 3.0
With the release of the Broadwell-E platform, Intel introduced Turbo Boost Max Technology 3.0 that allowed a single core on those CPUs to run at higher clock speeds than the others, effectively improving single-threaded performance. With Skylake-X, Intel has improved the technology to utilize the TWO best cores, rather than just one.
This allows the 8-core and higher count processors from this launch to run at higher frequencies when only one or two cores are being utilized. In the two products that we have clock speeds for, that is a 200 MHz advantage over standard Turbo Boost technology. Intel hopes that this improvement in the technology gives them another advantage in any gaming or lightly threaded workload over the AMD Ryzen and upcoming Threadripper processors.
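A hedged sketch of the policy described above, using the Core i9-7900X's published clocks (4.3 GHz standard Turbo Boost 2.0, 4.5 GHz Turbo Boost Max 3.0 on the two favored cores). The all-core value and the 4-core threshold are assumptions for illustration, not published bins:

```python
def boost_clock_ghz(active_cores, favored=2):
    """Toy model of the boost ladder on a 10-core Skylake-X part."""
    if active_cores <= favored:
        return 4.5   # Turbo Boost Max 3.0: the one or two best cores
    elif active_cores <= 4:
        return 4.3   # standard Turbo Boost 2.0 (assumed threshold)
    return 4.0       # assumed all-core turbo, illustrative only

for n in (1, 2, 4, 10):
    print(f"{n:2d} active cores -> {boost_clock_ghz(n)} GHz")
```

The key change versus Broadwell-E is simply that `favored` is now 2 instead of 1, so two-thread workloads also see the extra 200 MHz.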
This feature seems to work as intended, with single-threaded workloads boosting up to 4.5 GHz in my testing and two-core workloads doing the same, just much less reliably.
One thread
Two threads
There may need to be some more software work (driver, OS or BIOS) done to get the two-core iteration of Turbo Boost Max Technology to more regularly hit the 4.5 GHz clock speed.
SpeedShift on HEDT
For the first time, the HEDT platform will get SpeedShift technology. This feature has been present since the launch of Skylake on the consumer notebook line, was updated with Kaby Lake, and now finds its way to the high performance platforms. The technology allows the clock rates of the CPU to climb higher, and do so faster, in order to improve the responsiveness of the system for short, bursty workloads. It accomplishes this by taking over much of the control of power states from the operating system and leaving that decision making to the CPU itself.
Zoomed
Comparing the Core i9-7900X to the Core i7-6950X (which does not have SpeedShift) and the Core i7-7700K (Kaby Lake) shows the differences in implementation. The 7900X reaches its peak clock speed in 40ms while the Broadwell-E processor from last year takes over 250ms to reach its highest clock state. That’s a significant difference and should give users better performance on application loads and other short workloads. Note the difference on the 7700K though: the consumer part and Kaby Lake design is even more aggressively targeting instantaneous clock rates.
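The practical impact on bursty work can be sketched with a simple ramp model. Using the 40 ms and 250 ms ramp times from the chart, and assumed idle and peak clocks, integrating cycles over a 100 ms burst shows how much the faster ramp matters (all numbers illustrative):

```python
def work_done(ramp_ms, burst_ms, base_ghz=1.2, peak_ghz=4.5, step_ms=1.0):
    """Total cycles executed during a burst, assuming a linear clock ramp
    from base to peak over ramp_ms. Clocks here are assumptions."""
    cycles = 0.0
    t = 0.0
    while t < burst_ms:
        if t >= ramp_ms:
            f = peak_ghz
        else:
            f = base_ghz + (peak_ghz - base_ghz) * t / ramp_ms
        cycles += f * step_ms * 1e6   # GHz * ms -> millions of cycles
        t += step_ms
    return cycles

burst = 100.0  # ms, a short "bursty" task like an application launch
fast, slow = work_done(40, burst), work_done(250, burst)
print(f"40 ms ramp does {fast / slow:.2f}x the work of a 250 ms ramp in {burst:.0f} ms")
```

For bursts shorter than the slow ramp, the SpeedShift part finishes noticeably more work in the same window; for long sustained loads both converge to the peak clock and the advantage disappears.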
A 15% increase in performance resulting from a 50% increase in power consumption seems to indicate that this processor is firmly out of its comfort zone in terms of efficiency.
Makes me wonder where it would land with similar clock rates as the 6950X.
As for the i9 line-up, I don’t follow the argument that these CPUs are not the direct result of AMD’s renewed competitiveness. Sure, 6- through 10-core CPUs would’ve been planned for long ago, but their final clocks were set post-Ryzen. The idiotic KBL-X were rushed post-Ryzen. The MCC-i9s are clearly a rush job (hence their late launch) trying to compete with Threadripper.
I’d be willing to bet that not a single CPU launched for this platform was planned exactly as-is 9 months ago.
Even if everything you say is true, is that a problem? Is that not what we want? Some competition to push things forward?
Sorry, I might have simply misread/misunderstood your conclusion.
As far as I’m concerned, it was not giving enough credit to AMD for the final specs of these CPUs, as they are / will be shipping.
Anyways, thanks for testing the rejiggered caches and mesh topology and showing how it affects scaling when compared to its predecessor!
I am curious if future BIOS updates will affect mesh speed (ping time?), and what kinds of differences that will make.
I like the performance/$ metrics. There are so many ways to slice those: CPU only, including motherboard and RAM (which you have to buy anyway to use the CPU), or full system price. Pros/cons to each.
Best internet line of the day:
“Until July. Or August. Or October…”
Great review PCPer!
Future BIOS should not have a direct effect on it, unless Intel changes its stance on the clocks of the cache. It runs at a slower clock than memory or the CPU itself, but it is controllable – I show the change on one of our graphs here looking at thread to thread "ping times".
On the performance / dollar, you are right, we could have included memory and motherboard in that and it might be worth doing in the future. But I think most people reading will understand that the X299 motherboard price average is higher than the X370 motherboard price average, so the differences would widen slightly.
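For anyone who wants to run the numbers themselves, folding a platform cost into performance-per-dollar is a one-line change to the denominator. The score and prices below are placeholders for illustration, not figures from the review:

```python
def perf_per_dollar(score, cpu_price, platform_extra=0):
    """Benchmark score per dollar, optionally including board/RAM cost."""
    return score / (cpu_price + platform_extra)

# Hypothetical numbers: a 1600-point Cinebench-style score, $499 CPU,
# and a $150 motherboard added to the platform cost.
cpu_only  = perf_per_dollar(1600, 499)
with_board = perf_per_dollar(1600, 499, 150)
print(f"CPU only:   {cpu_only:.2f} pts/$")
print(f"With board: {with_board:.2f} pts/$")
```

Since pricier platforms shrink the ratio more for the same score, including the board tends to widen the gap between cheap and expensive ecosystems, as noted above.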
X370 may not be a fair yardstick if you want price/performance. X370 is closer to X299 in features (though still a long way off), but if what you want is maximum price/performance, B350 is the way to go.
Hey Ryan, great review. If possible, for the gaming benchmarks could you post the 1% and 0.1% low frame rates, or just the min FPS if that would be easier? I have found the enthusiast platforms tend to excel in minimum FPS and smooth delivery of frames (less stutter), and that is what motivates my purchases more than max or average FPS. I would rather have a CPU with a min of 60 FPS and a max of 85 FPS than one with a max of 105 FPS and a min of 45 FPS, even if that means it has a lower average FPS. Smoothness is everything for me.
At what speed were you running the 1800X Infinity Fabric?
Also, your idle system wattage looks to be half of other sites’ for the 1800X. I wonder what you or they are doing differently.
Cinebench value: not sure why, but I get a score of 1641 on a stock 1800X. / $440 (Amazon) = 3.72
I think you are using the launch day price of $500 ?
Note: I run my RAM at 2400 MHz (the rated XMP profile).
All of the 1800X data was generated at stock settings, DDR4-2400 memory. And yes, I am still using the $499 launch price for that data, as you note.
Great Video Ryan, Actually made me read the review… that was good too.
With… one little exception: new parts, higher clocks, more cores.
Kinda wanted to see what the “NiceHash” daily BTC amount would be, you know for science.
Consider including it in your benchmarks for all the new CPU’s?
Maybe…but CPUs, even 10-core CPUs, are very inefficient in comparison to even moderate GPUs.
You are totally right, and rather handsome,
However after Electrickery a Ryzen 7 1700X nets $600 per annum
Which is peanuts to golden haired tech gods granted, but some peeps may want to put one in a corner and let it pay for itself (with all the assumptions granted) while heating up their greenhouse.
As algos change and prices fluctuate, and releases get more cores, it’ll be nice to keep an eye on hashing value.
Goes without saying that it will be awesome to have it on GPU charts.
You’re obviously way too important and tall to take on such a task, maybe the smaller more condensed you (Ken) could take on such a burden of honour.
Something I haven’t seen much of is the (potential) benefit of this X299 plan to boutique system builders, and even larger mass producers of custom PCs such as HP with their Omen, and Dell / Alienware.
They could standardize on X299 for most of their builds, and then offer customers the choice of i5 and “entry level” i7 now, with the option to upgrade to a true HEDT system later on, while keeping the same chassis and main system components.
That and single-core performance should be best on those parts, especially when overclocked to their max.
In terms of TDP, did you measure that at stock or overclocked? I’d have to assume stock, and if so, could the measurements be off due to the new platform?
I know you know this, but for anyone who wonders how Intel defines TDP…from https://www.intel.com/content/dam/doc/white-paper/resources-xeon-measuring-processor-power-paper.pdf:
“Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE.1”
All measured at stock settings.
Seems to me that there has been a cost-shift Intel has done here from the CPUs to the chipsets. The motherboards are about $100 more expensive than they should be. This way, Intel can make their CPUs out to be a better value than they actually are.
I don't think that's accurate. Intel is probably getting slightly more from the X299 than the Z270, but I would guess not much. If anything, the motherboard vendors know this is a higher end platform and audience, so they put higher end products together to serve it.
Ryan,
Did overclocking the cache + using faster RAM have any effect on benchmarks?
I honestly did not have time to check, only to do the latency evaluation you saw on that page. We'll be following up – my expectation is that it will have an effect on things like 7zip and the 1080p gaming results, if at all.
I’m looking at Guru3D’s X299 motherboard reviews and it seems like the BIOSes that run the more conservative power profile have higher memory/L3 latency and run worse in games and in synthetics like Cinebench. The Cinebench scores matched your results, so I’m assuming these latency tests were done using the lower power profiles.
It will be interesting to see what your latency tester shows on the higher power profiles.
Great Review!
Grammar Nazi:
On the last page, under the last picture
It is worth noting here that our early testing with the X299 motherboards has including troubling amounts of performance instability and questionable compatibility.
Ah, thanks. 🙂
Interesting to see the inter-core latency affect Skylake-X so much. Despite Ryzen’s latency affecting games, it often competes well with Broadwell, usually despite lower clocks.
It’s nearly the reverse in gaming with Skylake-X, where it’s clocked higher and still loses.
I hope Ryan does some detailed tests with Skylake-X CPUs and Threadripper to see how the increased core and CCX counts will affect latency, and as a result affect some use cases.
“And to combat Threadripper, it seems clear that Intel was willing to bring forward the release of Skylake-X, to ensure that it maintained cognitive leadership in the high-end prosumer market.”
Impressive that Intel knew the release date for Threadripper back in 2015 when they scheduled Basin Falls! https://regmedia.co.uk/2015/05/26/intel-kdm-roadmap-1.jpg
Hey Ryan,
Great job as always! Just wanted to give a little feedback about the graphs – the font is borderline unreadable, and that is on a 27″ 1080p UltraSharp Dell.
Otherwise keep on rocking!
Why is your latency test different to that from SiSoft Sandra?
http://www.tomshardware.de/performance-benchmarks-ubertaktung-leistungsaufnahme-kuhlung,testberichte-242365-2.html
Intel 7900X:
SiSoft: 79ns
PCPer: 100ns
AMD Infinity Fabric:
SiSoft: 122ns
PCPer: 140ns
Perhaps you guys should factor in the platform cost in these reviews – B350 Mobos can be had for ~$100, while these X299 Mobos cost at least $400. It’s hard to argue the i7-7800X is a suitable competitor for the 1700 when you have to pay another $400 for the motherboard, and are still two cores short (though the higher clocks make up for this)
Intel needs to offer multi-core mainstream parts to truly compete with the 1700 in the future. Right now higher clocks trump twice the threads, but if games like Battlefield and the higher core counts of consoles are anything to go by, that won’t last forever.