For a while, it was unclear whether we would see Broadwell on the desktop. With the recently leaked benchmarks of the Intel Core i7-6700K, it seems all but certain that Intel will skip it and go straight to Skylake. Compared to Devil's Canyon, the Haswell-based Core i7-4790K, the Skylake-S Core i7-6700K has the same base clock (4.0 GHz) and the same full-processor Turbo clock (4.2 GHz). Pretty much every improvement that you see is pure per-clock (IPC) performance.
Image Credit: CPU Monkey
In multi-threaded applications, the Core i7-6700K tends to show about a 9% increase, while single-threaded workloads tend to see about a 4% increase. Part of this gap might be the slightly lower single-core Turbo clock, which is said to be 4.2 GHz rather than Devil's Canyon's 4.4 GHz. There might also be some increased efficiency with HyperThreading or cache access; I don't know, but it would be interesting to see.
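Those two numbers line up if you separate clock speed from IPC. A back-of-the-envelope sketch, assuming the leaked single-core Turbo clocks (4.2 GHz for the i7-6700K versus 4.4 GHz for the i7-4790K) and the ~4% single-threaded gain cited above:

```python
# Rough IPC estimate from the leaked figures. Assumptions: 4.2 GHz
# single-core Turbo for Skylake, 4.4 GHz for Devil's Canyon, and a
# ~4% overall single-threaded performance gain.
haswell_turbo_ghz = 4.4    # Core i7-4790K single-core Turbo
skylake_turbo_ghz = 4.2    # Core i7-6700K single-core Turbo (leaked)
single_thread_gain = 1.04  # ~4% faster overall

# Performance = IPC * clock, so relative IPC = relative perf / relative clock.
relative_clock = skylake_turbo_ghz / haswell_turbo_ghz
ipc_gain = single_thread_gain / relative_clock

print(f"Skylake runs at {relative_clock:.1%} of Haswell's single-core clock")
print(f"Implied IPC improvement: ~{(ipc_gain - 1):.0%}")
```

In other words, a ~4% win despite a ~4.5% clock deficit implies roughly 9% more work per clock, which is consistent with the multi-threaded results where both chips run at the same 4.2 GHz.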
I should note that we know nothing about the GPU. In fact, CPU Monkey fails to list a GPU at all. Intel has expressed interest in bringing Iris Pro-class graphics to its high-end mainstream desktop processors. For someone who is interested in GPU compute, especially with Explicit Unlinked MultiAdapter coming in DirectX 12, it would be nice to see GPUs be ubiquitous and always enabled. Skylake-S is expected to have the new GT4e graphics with 72 execution units (EUs) and either 64 or 128 MB of eDRAM. If clocks are equivalent, this could translate to well over a teraflop (~1.2 TFLOPs) of compute performance in addition to discrete graphics. That would be nearly equivalent to a discrete NVIDIA GTX 560 Ti.
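The ~1.2 TFLOP figure can be sanity-checked. A sketch, assuming Skylake's EUs keep the 16 single-precision FLOPs per clock of the previous generation (two 4-wide FMA-capable FPUs per EU) and a ~1.05 GHz maximum graphics clock similar to Broadwell's Iris Pro 6200 — both assumptions, not confirmed specs:

```python
# Sanity check on the ~1.2 TFLOP figure. Assumptions: 72 EUs (rumored
# GT4e count), 16 single-precision FLOPs per EU per clock (2 FPUs x
# SIMD-4 x 2 for fused multiply-add), and a ~1.05 GHz graphics clock.
eus = 72
flops_per_eu_per_clock = 16
clock_hz = 1.05e9

tflops = eus * flops_per_eu_per_clock * clock_hz / 1e12
print(f"Peak single-precision compute: ~{tflops:.2f} TFLOPs")
```

That works out to about 1.21 TFLOPs of peak single-precision throughput, right in line with the estimate above.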
We are expecting to see the Core i7-6700K launch in Q3 of this year. We'll see.
I was really happy with my quad DDR2 system and skipped DDR3 altogether. My 5th-gen DDR4 system was a significant boost, and cores DO matter to me since I fold and work with applications that actually use them. Many people don’t fully appreciate what has already been brought to the table with more cores and DDR4 simply because they don’t USE it. A 10% boost is kind of a generalist view of things, and if you’re just playing games and cruising the web, you don’t need to fork over for a 100% new system for a 10% gain you may never really see, unless you can afford it.
That said, I’m kind of excited about advances across the board, and parallelism is value-added work for me that works best when it scales well, but I’m a 1% audience from what I can tell.
x86 hits a performance wall. x86 performance will remain stagnant for years.
This isn’t limited to x86. Single-threaded code has been pushed to the limits, to the point that extracting any more instruction-level parallelism (ILP) is quite difficult. These CPUs are already massively complex with all of the out-of-order execution, speculative execution, and such. Note that none of these have anything to do with the ISA. If ARM processors are pushed to such performance levels, they will run up against the same limitations.
The problem is not single-threaded code; the problem is that x86 (and other ISAs like ARM, POWER, etc.) are serial ISAs, and the hardware cannot extract all of the parallelism hidden in the code, because engineers have hit an ILP wall. The only way to extract double-digit performance gains per generation is using new ISAs where parallelism is explicit. Skylake will be about 5% faster than Haswell on x86 code but up to 70% faster than Haswell when using the new AVX-512 instructions.
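The commenter's "up to 70%" figure can be sanity-checked with Amdahl's law. A sketch, assuming (hypothetically) that AVX-512's 512-bit registers double throughput over 256-bit AVX2 for the vectorized fraction of a workload and nothing else changes:

```python
# Amdahl's-law check on the "up to 70% faster with AVX-512" claim.
# Assumption: the vectorized fraction of the workload gets exactly
# 2x faster (512-bit vs. 256-bit vectors); the rest is unchanged.
def overall_speedup(vector_fraction, vector_speedup=2.0):
    """Amdahl's law: only the vectorized fraction gets faster."""
    return 1.0 / ((1.0 - vector_fraction) + vector_fraction / vector_speedup)

# How much of the runtime must be vectorized to hit 1.7x overall?
target = 1.7
needed_fraction = 2 * (1 - 1 / target)  # Amdahl's law solved for f with s=2

print(f"Overall speedup with 80% vectorized code: {overall_speedup(0.8):.2f}x")
print(f"Vectorized fraction needed for {target}x: {needed_fraction:.0%}")
```

Under these assumptions, roughly 82% of a workload's runtime would have to be vectorized to reach 1.7x overall, which is why such gains show up only in heavily vectorizable code rather than general x86 workloads.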
I guarantee you they have CPUs with much greater gains lying around. Intel needs competition so that they start releasing serious changes.
Although I am aware that, as technology advances, gains get a little smaller each time, with new tech breakthroughs being the exception.
Build a damn CPU socket that’s 15cm across and give us some insane high core high frequency power houses!