Update: Sept 17, 2015 @ 10:30 ET — To clarify: I'm speaking of socketed desktop Skylake. There will definitely be Iris Pro in the BGA options.
Before I begin, the upstream story has a few disputes that I'm not entirely clear on. The Tech Report published a post in September that cited an Intel spokesperson, who said that Skylake would not be getting a socketed processor with eDRAM (unlike Broadwell, which got one just before Skylake launched). This could be a big deal, because the fast, on-processor memory can be used as a cache by the CPU, not just the integrated graphics. It is sometimes called “128MB of L4 cache”.
Later, ITWorld and others posted stories saying that Intel killed off a Skylake processor with eDRAM, citing The Tech Report. Afterward, Scott Wasson claimed that a story, which may or may not have been ITWorld's, had some “scrambled facts”, but he wouldn't elaborate. Comparing the two articles doesn't really illuminate any massive, glaring issues, but I might just be missing something.
Update: Sept 18, 2015 @ 9:45pm — So I apparently misunderstood the ITWorld article. They were claiming that Broadwell-C was discontinued, while The Tech Report was talking about socketed Skylake with Iris Pro. I thought both were talking about the latter. Moreover, Anandtech received word from Intel that Broadwell-C is, in fact, not discontinued. This is odd, because ITWorld said they had confirmation from Intel. My guess is that someone gave them incorrect information. Sorry that it took so long to update.
In the same thread, Ian Cutress of Anandtech asked whether The Tech Report benchmarked the processor after Intel tweaked its FCLK capabilities, which Scott had not (though he is interested in doing so). After Skylake shipped, Intel enabled a slight frequency boost on the link between the CPU and the PCIe lanes, which naturally benefits discrete GPUs. Since the original claim was that Broadwell-C is better than Skylake-K for gaming, giving that interface a 25% boost (or removing a 20% deficit, depending on how you look at it) could tilt Skylake back above Broadwell. We won't know until it's benchmarked, though.
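For reference, here is the quick arithmetic behind those two figures, assuming the commonly reported FCLK values of 800 MHz at launch and 1000 MHz after the tweak (treat the exact numbers as an assumption, not something confirmed in the articles above):

```python
# Arithmetic behind the "25% boost" vs. "20% deficit" framing.
# Assumes the commonly reported FCLK values: 800 MHz at launch, 1000 MHz after the tweak.
launch_fclk = 800     # MHz (assumed launch value)
tweaked_fclk = 1000   # MHz (assumed post-tweak value)

boost = (tweaked_fclk - launch_fclk) / launch_fclk     # 0.25 -> a 25% boost over launch
deficit = (tweaked_fclk - launch_fclk) / tweaked_fclk  # 0.20 -> launch ran at a 20% deficit

print(f"Boost over launch: {boost:.0%}; deficit at launch: {deficit:.0%}")
```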
Iris Pro and eDRAM, while skipping Skylake, might arrive in future architectures, such as Kaby Lake. It seems to have been demonstrated that, in some situations, and ones relevant to gamers at that, the eDRAM can help computation, even before considering the compute potential of a better secondary GPU. One argument is that cutting the extra die area gives Intel better margins, which is almost definitely true, but I wonder how much attention Kaby Lake will get. Especially with AVX-512 and other features debatably being removed, it almost feels like Intel is treating this Tock like a Tick, since they didn't really get one with Broadwell, and Kaby Lake will be the architecture that leads us to 10nm. On the other hand, each of these architectures is developed by an independent team, so I might be wrong to compare them serially.
Okay, this story confuses me. It starts off talking about a Skylake processor being cancelled, and yet the linked ITWorld article is about the Broadwell-C processor being cancelled. The ITWorld article says that it's unknown whether there will be a Skylake processor with eDRAM or not.
Just updated the story. I was confused, and it seems like ITWorld was given bad info on top of that. Thanks!
Technically, the writer of this article is correct.
However, it would only be apparent to those most savvy in Intel technology.
This article is poorly written for the masses. Sadly, there are valid points which, conveyed properly (simply with greater detail), would have carried more weight.
Basically, Intel has proven with Broadwell processors (the predecessor to Skylake) that it can provide on-die (included with the processor) graphics equal to an $80 to $100 separately bought graphics card.
Skylake (Intel's newest architecture), although shown to provide superior compute capabilities, has inferior graphics capabilities compared to its predecessor.
This has upset some of us, myself included. For the casual gamer, this was the holy grail. I personally am very disappointed that Intel has not made the effort to continue its best graphics, referred to as Iris Pro, GT4e, or Pro 580, in Skylake.
I almost bought Haswell, but no Pro graphics were included; Broadwell had them, but it arrived on the heels of Skylake.
Now Skylake is left with no Pro graphics. Congratulations, Intel, another sale lost.
So either go for a Broadwell-C CPU if you want Iris Pro on the desktop (because CPU performance is damn near identical to Skylake, particularly for gaming), or wait for an integrated board with a BGA Skylake with Iris Pro (no S-series 4+4e, but there IS an H-series 4+4e on the roadmap).
On the Haswell die photos I have seen, the eDRAM controller actually took up considerable die area. I don't know if they will be producing parts without the eDRAM controller, though. It also may be significantly more expensive to manufacture, since it is two chips on the package. It may not make sense to sell it as a socketed processor; it might be too expensive. Mobile CPUs cost quite a bit more than their desktop counterparts. If it is more than $100 more expensive, then it would make more sense to just get a cheap, dedicated GPU. Would you still buy one of these if it were $500 or $600?
I don't know how much the eDRAM actually costs to use, but it looks like the controller on the Haswell die took up a lot of die area. I have been wondering if they will actually make a die completely without the eDRAM controller. I read somewhere that they were using the L3 to store the eDRAM tag data. Moving the eDRAM to the memory controller, rather than connecting it off the L3 cache, may mean that they cannot store tag data in the L3 anymore. The tag data would then add to the die area, which may be more motivation to produce a die without the eDRAM controller for parts that will not use it. This may mean that the non-eDRAM die will be cheaper to produce than the eDRAM part.
It is also significantly more expensive to mount two separate dies on the package. There was a time when Intel had separate L2 cache chips on the package with their processors (Pentium II “Slot 1” processors). They moved away from this pretty quickly. If it were cheap, then all of the processors would use eDRAM. I suspect that the eDRAM parts, if produced as a socketed part, would cost more than most people would be willing to pay. It is probably much cheaper to just buy a cheap, discrete graphics card. Mobile parts are generally not as cheap as desktop parts, even when there is little difference between them other than binning. It would be interesting to know how much system builders actually pay for different mobile parts.
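To put that tag-storage concern in perspective, here is a back-of-envelope sketch; the 64-byte line size and the bytes-of-tag-per-line figures are illustrative assumptions, not Intel's actual Crystalwell design:

```python
# Rough sizing of the tag store needed to track a 128 MiB cache.
# Illustrative assumptions only: 64-byte lines and 2-4 bytes of tag/state per line.
cache_bytes = 128 * 1024 * 1024    # 128 MiB of eDRAM
line_bytes = 64                    # assumed cache-line size
lines = cache_bytes // line_bytes  # ~2.1 million lines to track

for tag_bytes in (2, 3, 4):        # assumed bytes of tag + state per line
    tag_store_mib = lines * tag_bytes / (1024 * 1024)
    print(f"{tag_bytes} B/line of tag -> ~{tag_store_mib:.1f} MiB of tag storage")
```

Under those assumptions the tag store lands in the 4-8 MiB range, which is comparable to the whole L3 on these chips, so it is easy to see why the commenter expects it to be a real die-area consideration.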
In my opinion, Intel has been attempting to push enthusiasts to Extreme/Xeon-type CPUs for a while now. You can get a hex-core/12-thread processor for $299.00 at the ol' Microcenter, and I've seen X99 boards for around $200.00. If I were building a system right now, that's what I would get. Unless the socketed eDRAM chips were available, that is.
Enjoy your sub-par single-threaded performance; a cheap hex-core Xeon will turbo to around 3.2GHz (the cheapest, the E5-2603 v3, will only go to 1.6GHz with no turbo!), vs. the 4.4GHz of a Devil's Canyon Haswell. Even now, the majority of tasks benefit more from a few fast cores than from additional slow ones (Amdahl's Law).
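As a rough sketch of that Amdahl's Law point, using the clock speeds above: the parallel fractions and the assumption of equal per-clock throughput on both chips are illustrative, not measured.

```python
# Amdahl's Law sketch: a few fast cores vs. more slow cores.
# Clocks from the comment above; parallel fractions are arbitrary assumptions,
# and per-clock throughput is assumed identical for both chips.
def relative_throughput(clock_ghz, cores, parallel_fraction):
    # Time for one unit of work, normalized to a 1 GHz single core.
    serial_time = (1 - parallel_fraction) / clock_ghz
    parallel_time = parallel_fraction / (clock_ghz * cores)
    return 1 / (serial_time + parallel_time)

for p in (0.25, 0.50, 0.75, 0.95):
    quad_fast = relative_throughput(4.4, 4, p)  # Devil's Canyon-style: 4 cores @ 4.4 GHz
    hex_slow = relative_throughput(3.2, 6, p)   # cheap hex-core Xeon: 6 cores @ 3.2 GHz
    print(f"parallel={p:.0%}: 4c @ 4.4GHz is {quad_fast / hex_slow:.2f}x the 6c @ 3.2GHz")
```

With mostly serial work the fast quad-core wins by roughly a third; the gap only closes once the workload is almost entirely parallel.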
CPU IPC peasant! Who cares about total IPC as a single defining metric for computing when there will be hundreds and thousands of GPU cores crunching those numbers in parallel, and serial workloads running on ACE-type units that can context-switch the thousands of asynchronous processing threads in and out of the GPU cores!
I'll take that AMD server/HPC/workstation Zen-based APU with the fat Greenland/Arctic Islands GPU, and I'll have all those ACE units' FP/INT/other core resources. Who cares about the comparatively limited CPU FP/INT resources, AMD's or Intel's! The CPU, AMD's or Intel's, can have the janitorial duties of running the OS and other menial tasks. I want those GPU ACE units for ray-tracing workloads that would choke any Zen, Xeon, or Power8 CPU cores, so to hell with those wimpy CPU cores; I'll take the thousands of hardware asynchronous compute cores on the GPU, thank you! Now get to your menial tasks, you damn dirty CPU cores!
Looks at all those CPU serfs, and laughs!!!
Have you ever touched a girl?
Plenty of women, and there is no such thing as a gaming geek; there are mostly only gaming gits and game-necks like yourself. The real geeks are designing the GPUs that will eat the CPU's computing lunch. The people that design games will be doing more on the GPU, while trailer trash like yourself will still be trying to wrap your single-digit-IQ brain (single brain cell) around even the simplest technological details and implications of GPUs and computing's future.
How's that world of the rusting double-wide working out for you and your mom!
l.o.l.
Mobile Skylake Xeon laptops will have GT4e… but at what cost?
Ugh. Intel and their product segmentation ensuring that one processor won’t be the best at everything. Where’s the extreme edition with eDRAM?
It is doubtful that the CPU would benefit much from the eDRAM cache without integrated graphics. There is much reason to add it to extreme edition parts.
Is not much reason, that is.
Flat out wrong. The i7-5775C delivers (even at stock!) much more consistent frame times in games than even an i7-4790K.
The 128MiB eDRAM is used as an L4 cache when the iGPU is disabled.
Are there any benchmarks showing this? I haven't seen any.
Looking at AnandTech Bench and comparing a Broadwell-C to a 6700K, there is very little difference in performance when using a dedicated GPU. I think this whole thing is blown out of proportion.
True, and that just means there is no measurable benefit to having the L4 cache for non-integrated-graphics uses like discrete GPU gaming; the regular L1, L2, L3 caching is fairly efficient on Intel's previous-generation SKUs. Intel's caching is fine at 3 levels for discrete GPU gaming workloads, and any variances are probably due to the GPU's binning rather than the 6700K CPU's perceived contribution to frame rates.
Did AnandTech list a margin of error on the results?
I'm having a bit of trouble parsing this article, so maybe another one of the readers can help me out.
Was one of the claims made in this article that, with currently shipping Broadwell and Skylake processors, Broadwell provides a 20-25% increase over Skylake for discrete (add-on card) GPU performance?
If so, could you point me to a reference for this? I’d like to learn more about it.
There is an Ars Technica article titled “Intel’s Skylake lineup is robbing us of the performance king we deserve” that mentions a large performance difference:
“But in memory-intensive workloads, such as some games and scientific applications, the cache is better than 21 percent more clock speed and 40 percent more power.”
This is comparing Broadwell to Skylake. It is unclear if they are really talking about games running with a dedicated GPU, though.
More like a typical market monopoly holder further segmenting its product line to milk every bit of profit out of charging progressively larger amounts for the minuscule differences between the dizzying array of Skylake SKUs!
Just look at all those choices, and each one a little more costly as you go up the line! Let’s segment those bins and charge for every little bit of performance difference! Milk, Milk, Milk!
What do people expect here? Do they expect Intel to throw in an eDRAM cache chip for free? Would people still be interested if it cost $500 or $600? For that price, you could get a 6700K and a good graphics card.
What do people expect here? Do they expect Intel to throw in an eDRAM cache chip for free? Would people still be interested if it cost $500 or $600? For that price, you could get a 6700K and a good graphics card.
What do people expect here? Do they expect Intel to throw in an eDRAM cache chip for free? Would people still be interested if it cost $500 or $600? For that price, you could get a 6700K and a good graphics card.
I guess posting from an iPad can cause a triple post?
They're doing this so Kaby Lake can improve upon Skylake via an L4 cache.
By that time, all of the game code will be run on the ACE units of AMD's GPUs, and on other GPU makers' ACE-type units! For what purpose will a CPU be used in gaming? CPUs will be those things that run the OS, while the GPUs run the games and the gaming graphics. But if you think CPUs are all that, then pull out your discrete GPU and try to play a high-end game on Intel's CPU cores alone. Intel still needs some third-party GPU support for gaming. DX12 and Vulkan, as well as the game makers, will be doing more on the GPU and even less on the CPU with the newer GPUs that are coming! Will Zen's or Kaby Lake's CPU cores even matter much by the time Arctic Islands arrives?
GPUs RULE!