What’s new and what’s not
Intel is finally coming clean about the 7th Generation Core processors, Kaby Lake.
While spending time learning about upcoming products and technologies at the Intel Developer Forum earlier this month, I sat down with the company to learn about the release of Kaby Lake, now known as the 7th Generation Core processor family. We have been seeing and reporting on the details of Kaby Lake for quite some time here on PC Perspective – it became a more important topic when we realized that this would be the product that officially killed off the ‘tick-tock’ design philosophy that Intel had implemented years ago and that was responsible for much of the innovation in the CPU space over the last decade.
Today Intel released new information about the 7th Gen CPU family and Kaby Lake. Let’s dive in with a simple and straightforward look at how it compares to Skylake.
What is the same
Actually, quite a lot. At its core, the microarchitecture of Kaby Lake is identical to that of Skylake. Instructions per clock (IPC) remain the same with the exception of dedicated hardware changes in the media engine, so you should not expect any performance differences beyond the improved clock speeds we’ll discuss in a bit.
Because of this lack of change, many people will look down on the Kaby Lake release as Intel’s attempt to repackage an existing product to meet the annual product cadence that financial markets expect. It is an understandable criticism, but Intel is making changes in other areas that should make KBL an improvement in the thin and light ecosystem.
Also worth noting is that Intel is still building Kaby Lake on 14nm process technology, the same used on Skylake. The term “same” is debatable as well, as Intel claims that improvements made to the process technology over the last 24 months have allowed it to raise clock speeds and improve efficiency.
What is changed
Dubbing this new revision of the process “14nm+”, Intel tells me that it has improved the fin profile of the 3D transistors as well as channel strain, while more tightly integrating the design process with manufacturing. The result is a 12% increase in process performance; that is a sizeable gain in a fairly tight time frame, even for Intel.
That process improvement directly results in higher clock speeds for Kaby Lake compared to Skylake at the same target TDPs. In general, we are looking at 300-400 MHz higher peak clock speeds in Turbo Boost situations than on similar-TDP 6th Generation products. Sustained clocks will very likely remain voltage/thermally limited, but the ability to spike up to higher clocks, even for short bursts, can improve the performance and responsiveness of Kaby Lake compared to Skylake.
In these two examples, Intel compares the 15 watt Core i7-6500U (a common part in currently shipping notebooks) and the upcoming 15 watt Core i7-7500U, both dual-core HyperThreaded configurations. In SYSmark 2014 a 12% score improvement is measured, while WebXPRT shows a 19% advantage. Double-digit performance increases are pretty astounding for a generational jump that includes neither a new microarchitecture nor (more or less) a new process technology, though we should temper expectations for other applications and workload profiles like content creation.
Along with higher fixed clock speeds for Kaby Lake processors, tweaks to Speed Shift will allow these processors to get to peak clock speeds more quickly than previous designs. I extensively tested Speed Shift when the feature was first enabled in Windows 10 and found that the improvement in user experience was striking. Though the move from Skylake to Kaby Lake won’t be as big of a change, Intel was able to improve the behavior.
This sample data shows the Kaby Lake Core i7-7500U hitting its peak clock rate of 3.5 GHz in just 15ms, while the Skylake Core i7-6500U takes about 30ms to hit its peak of 3.1 GHz. Meanwhile, the same 6500U with Speed Shift disabled takes over 90ms to reach its highest clock, reducing responsiveness for systems, especially those that depend on touch interaction.
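(As an aside for tinkerers: Speed Shift is exposed to software as Intel's Hardware-Controlled P-states, or HWP, and you can check whether a chip advertises it from user space. Below is a minimal C sketch, not a definitive tool, using the CPUID leaf 0x06 feature bits documented in the Intel SDM; whether the OS actually enables HWP is a separate question, and Windows 10 on Skylake and later does.)

```c
/* Minimal sketch: detect Intel Speed Shift (HWP) support from user space.
 * Per the Intel SDM, CPUID leaf 0x06 reports HWP capability in EAX bit 7
 * and the energy/performance preference control in EAX bit 10.
 * Builds with GCC or Clang on an x86 machine. */
#include <stdio.h>
#include <cpuid.h>   /* __get_cpuid() built-in helper */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0x06: Thermal and Power Management feature flags */
    if (!__get_cpuid(0x06, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 0x06 not supported\n");
        return 1;
    }

    printf("HWP (Speed Shift) supported:  %s\n", (eax & (1u << 7))  ? "yes" : "no");
    printf("HWP energy/perf preference:   %s\n", (eax & (1u << 10)) ? "yes" : "no");
    return 0;
}
```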
The primary change in KBL comes in the media engine, where native 4K video support has been fully integrated.
The graphics architecture and EU (execution unit) layout remain the same as Skylake, but Intel was able to integrate a new video decode unit to improve power efficiency. That new engine can work in parallel with the EUs to improve throughput as well, though obviously at the expense of some power efficiency.
Specific additions to the codec lineup include decode support for 10-bit HEVC and 8/10-bit VP9, as well as encode support for 10-bit HEVC and 8-bit VP9. The video engine adds HDR support with tone mapping, though it does require EU utilization. Wide Color Gamut (Rec. 2020) is prepped and ready to go, according to Intel, for when that standard starts rolling out to displays.
Performance levels for these new HEVC encode/decode blocks are set to allow for 4K at 120 Mbps in real time on both the Y-series (4.5 watt) and U-series (15 watt) processors.
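(For readers on Linux who want to verify the codec support listed above once hardware arrives, the driver advertises its capability set through VA-API. The following is a rough sketch under a few assumptions: libva with its DRM backend installed, and a render node at /dev/dri/renderD128, which may differ on your system. It checks for the HEVC Main10 and VP9 Profile 2 (10-bit) profiles that are new with this generation.)

```c
/* Sketch: ask the VA-API driver whether the iGPU advertises the new
 * Kaby Lake codec profiles (HEVC Main10, 10-bit VP9).
 * Link with: -lva -lva-drm */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <va/va.h>
#include <va/va_drm.h>

int main(void)
{
    /* Assumed DRM render node; adjust for your system. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    VADisplay dpy = vaGetDisplayDRM(fd);
    int major, minor;
    if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
        fprintf(stderr, "vaInitialize failed\n");
        return 1;
    }

    /* Query every profile the driver exposes and scan for the 10-bit ones. */
    int n = vaMaxNumProfiles(dpy);
    VAProfile *profiles = malloc(n * sizeof(*profiles));
    vaQueryConfigProfiles(dpy, profiles, &n);

    for (int i = 0; i < n; i++) {
        if (profiles[i] == VAProfileHEVCMain10)
            printf("HEVC Main10 (10-bit) advertised\n");
        if (profiles[i] == VAProfileVP9Profile2)
            printf("VP9 Profile 2 (10-bit) advertised\n");
    }

    free(profiles);
    vaTerminate(dpy);
    close(fd);
    return 0;
}
```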
The resulting changes when comparing Kaby Lake and Skylake are going to be impressive for anyone looking to play back local or streaming 4K content. This graphic compares the SoC power draw during local playback of a 4K 10-bit video file: the 6th Generation processor uses over 10 watts on average to play back the video with ~50% CPU utilization. The dedicated hardware block in the 7th Generation processor lowers that to 0.5 watts and ~4% CPU utilization, offering up 2.6x the battery life for consumers.
A similar improvement is seen when playing back VP9 4K video from YouTube. Power draw drops from 5.8 watts to 0.8 watts while CPU utilization goes from 80% to ~15% as you move from the 6500U (Skylake) to the 7500U (Kaby Lake).
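(A quick back-of-the-envelope check shows these numbers hang together. If battery life scales inversely with total platform power, then taking the SoC figures above as roughly 10 W and 0.5 W, Intel's 2.6x battery life claim implies the rest of the platform, display, memory, storage and so on, draws around 5.4 W, a plausible figure for a thin and light notebook:)

```latex
% Back-of-the-envelope: battery life scales inversely with platform power.
% SoC draw ~10 W (Skylake) vs 0.5 W (Kaby Lake); a 2.6x battery life gain
% lets us solve for the implied rest-of-platform power P_r:
\[
\frac{P_r + 10}{P_r + 0.5} = 2.6
\quad\Longrightarrow\quad
P_r = \frac{10 - 2.6 \times 0.5}{2.6 - 1} \approx 5.4~\text{W}
\]
% i.e. the 2.6x figure is consistent with roughly 5-6 W of non-SoC
% platform power.
```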
What is available now
Starting next month, systems based on Kaby Lake will be shipping from OEMs like Lenovo, Acer, ASUS and others, and Intel expects more than 100 designs to be on the market in Q4 of this year. A total of six processors are shipping now, three each in the U-series and Y-series.
The 15-watt U-series processors will be found in the vast majority of the next generation of thin and light notebooks and convertibles, including some of our favorites like the Dell XPS 13 and Lenovo Yoga ThinkPad. All three are 2-core/4-thread designs, though clock speeds vary. The Core i3-7100U does NOT include support for Turbo Boost and instead runs at a fixed 2.4 GHz under load, the Core i5-7200U clocks from 2.5 GHz to 3.1 GHz, and the Core i7-7500U runs at 2.7 GHz up to 3.5 GHz. All three platforms utilize DDR3L or LPDDR3 memory at 1600-1866 MHz, or they can integrate DDR4 memory at up to 2133 MHz.
The lower power, 4.5 watt processors are also 2-core/4-thread designs but have much wider clock speed ranges in order to fit into that adjusted and configurable thermal envelope. The Core m3-7Y30 has a base clock of just 1.0 GHz but can boost as high as 2.6 GHz! Interestingly, Intel has done away with the m5/m7 designation (at least for this release), and the next two higher models of Y-series parts are Core i5/i7 branded instead. The Core i5-7Y54 clocks from 1.2 GHz up to 3.2 GHz, while the Core i7-7Y75 ranges from 1.3 GHz to 3.6 GHz, making it (technically) the highest clocked part announced today. Of course, it won’t maintain that clock rate for very long in the types of chassis built around 4.5 watt processors.
I expect 7th Generation notebooks and 2-in-1s to start arriving in our offices very soon, where we will be able to go hands-on with the new 4K video capabilities and measure the performance improvements that Kaby Lake is expected to offer.
What is coming next year
In January 2017, likely at CES, we will start to see the release of other products based on the Kaby Lake design, including consumer, enterprise, workstation and Iris-class graphics-equipped processors. We really don’t have any more data than that to share, but you will definitely see Kaby Lake K-series processors overtake the Skylake K-series parts currently on the market. How high they will clock and how much they will improve on currently shipping products will be interesting to see. Could we actually see the same 300-400 MHz clock speed improvements on the desktop?
Despite carrying the incremented 7th Generation Core branding, Kaby Lake sees a less impressive technological shift than we have come to expect. The move away from the ‘tick-tock’ model to the ‘tick-tock-optimize’ scheme is a result of combined technology and business directions, but the added performance available courtesy of clock speed increases and 4K media improvements will at least offer some differentiation between previous and upcoming systems. It will vary between different user workloads, but peak clock speed increases of >12% are going to improve the computing experience for the vast majority of consumers.
Gary Johnson / Libertarians
Gary Johnson / Libertarians for President! .. Err.. I mean – looking forward to the clock speed increases of the 7700K 🙂
There are no libertarians
There are no libertarians running for president. Bill Weld supports gun control and Gary “Bake My Cake” Johnson isn’t much better.
Kind of a funny twist to now see Intel pushing clock speeds and AMD pushing IPC.
AMD is, I guess, using high
AMD is, I guess, using high density libraries for both CPUs and GPUs. This density will allow them to pack in a lot more hardware, but clock speeds will probably not be as high. It makes sense that they will have to push IPC over clock speed. Intel has been pushing clock speed and achieving lower power by bursting to high speed followed by down-clocking or shutting off portions of the die. Since the CPU will have to wait on memory in many cases, it makes some sense to clock up high, complete everything that can be completed, then down-clock while memory responds.
No, AMD will still use the
No, AMD will still use the nominal low density CPU design libraries for its Zen server/desktop variants, and maybe a high performance Zen laptop SKU! But for the mobile market of tablets and low power laptops, AMD can take its high density layout libraries and use them for a CPU core’s layout on SKUs that are going to be clocked lower anyway, in spite of the CPU maybe having the ability to be clocked higher.
So why waste CPU core space using low density design libraries on mobile SKUs that are going to be clocked lower anyway, in devices with less thermal headroom to begin with, when they can get that extra 30% of space saved on a CPU core’s layout by using the high density design libraries, for more die area savings on top of the space savings from going to the 14nm process node?
Not all 14nm are made equal.
Not all 14nm are made equal. AMD only has 14nm LPP available to it at GlobalFoundries, and that “LPP” means Low Power Plus. Typically, low power processes have low clock speeds.
You did not read the post
You did not read the post that you replied to, and do you even know what high density layout libraries are? High density design libraries are used to lay out GPUs with their circuitry more densely packed, to get all those GPU cores on as little die space as possible. GPUs have traditionally only been able to be clocked at 1/2 to 1/3 the speed of CPUs (CPUs are laid out with the normal low density design libraries so they can be clocked higher), and it’s not for lack of a mature chip fab process node. It’s because the high density design libraries used to lay out GPUs cram a lot more transistors into a lot smaller space, on more layers, to get those thousands of cores with their FP/other units crammed onto as little space as possible, and that generates too much heat for them to be clocked higher.
So, mature 14nm process node aside, using high density design libraries for the layout of a CPU’s cores saves space at the cost of the cores NOT being able to be clocked as high. But for CPUs used in mobile devices, the cores may be able to reach higher clocks yet still have to be clocked lower to meet the device’s thermal envelope, even if they are laid out using the nominal low density CPU design libraries. So why not instead use the high density design libraries to lay out the CPU cores in some mobile tablet/laptop SKUs to save space, since those cores are going to be clocked lower anyway because of thermal constraints?
So for AMD’s Zen based APUs that are going to be used in non-gaming laptops, tablets and other mobile devices, why not use the high density design libraries to save CPU core space and use the saved space for more GPU ACEs and cores for better graphics? In mobile devices the CPU’s cores are going to be clocked lower anyway, no matter which style of design libraries the CPU was designed with. I’d rather the CPU cores be designed to take up less space, if they can’t be clocked higher in a mobile device anyway, and have the saved space used for more graphics resources and better graphics performance in the mobile APU!
Sure, a more mature 14nm process is supposed to use less power, but that’s beside the point of the post that you replied to! GPUs derive their processing power mostly from massively parallel resources and not from clock speeds alone, while CPUs have fewer parallel resources and can use higher clock speeds to get more relative performance out of fewer total cores. But for mobile devices it’s a waste of space to lay out a CPU’s cores at low density if the device has to clock those cores lower to stay within its thermal limits anyway. Why not use high density design libraries for the CPU’s cores and put the space saved toward the APU’s GPU/graphics resources? It’s not like the CPU cores are ever going to be clocked higher in mobile devices anyway!
and Clinton is like AMD
and Clinton is like AMD Bulldozer: Promises, Hot Air.
Trump is like Celeron D: Partially capable thanks to his father.
But yes, glad to see some actual frequency increase this time around – it means Moore’s Law isn’t dead yet…
What the hell does Moore’s
What the hell does Moore’s law have to do with clock speeds! Moore’s “Law”/observation only deals with the economic viability of doubling the transistor count on processors every 18-24 months, and says nothing about the transistors’ clock speeds!
If transistors are properly
If transistors are properly scaling, they’ll switch faster.
Agreed, they essentially
Agreed, they essentially failed to nominate the only actual libertarian that was running, Darryl Perry.
For someone to be a libertarian, their actions must fall within their pledge: “I hereby certify that I do not believe in or advocate the initiation of force as a means of achieving political or social goals.”
The people currently running, simply do not understand the meaning of that pledge.
Anyway, I can’t wait to see the desktop parts. (Will we finally be able to reliably overclock to 5 GHz?)
Alfred E. Neuman for Prez!
Alfred E. Neuman for Prez! He’ll put those Monopolistic gaming Gits on notice! And Break up the Intel/Nvidia/M$ trusts! A.E. Neuman for Prez of these crazy states of madness with their really bad hair and pork belly trading options!
Hail to the chief, he is never really worried! He’ll fix the folds to make a different picture!…
I really hope the HQ
I really hope the HQ-designated chips are revealed sooner rather than later. I want to see what kind of GPU performance a 7700HQ can muster, especially one with an Iris/Iris Pro label.
I wonder if those 14nm
I wonder if those 14nm improvements they’re talking about apply to Skylake too, because it seems to me that recent batches show some OC gains compared to the release samples.
Gratz Intel, the 4.5W ones
Gratz Intel, the 4.5W ones are the perfect $300 chips for $200 devices…
Well played once again.
Lack of native HDMI 2.0
Lack of native HDMI 2.0 disappoints.
Which GPU is best for HTPC?
GTX 950 and RX 460
If you
GTX 950 and RX 460
If you don’t care about graphics, GTX 750 will do the job.
GTX 950 is so last year and
GTX 950 is so last year, and since you put it in one sentence with the RX 460 it makes me think the latter is not much better (if at all). Thanks anyway.
If you care about the HDMI
If you care about the HDMI 2.0 on Kaby Lake, then you don’t need anything more.
A Zen/Polaris mobile APU with
A Zen/Polaris mobile APU with Polaris graphics is what gets AMD some wins over Intel’s Iris Pro graphics. AMD would be able to take its high density design libraries and apply them to any low power/low clocked Zen cores and get the same extra 30% savings on a Zen core’s layout at 14nm that it got with its Carrizo cores’ layout at 28nm.
So picture a Zen core at 14nm laid out using high density design libraries, getting that extra 30% of die area saved on top of the 14nm process node shrink’s savings, plus even more Polaris ACE units on the single APU die. If AMD would just go with a monolithic Zen/Polaris APU die on an interposer with even 1 or 2 HBM stacks to give the integrated graphics plenty of bandwidth, AMD would have a very attractive product. Even if the APU could be supplied with only one HBM2 stack and an additional channel or two to regular DIMM based DRAM, AMD would still be able to leverage the HBM2 speed from at least 4GB of HBM2 and use the DIMM based DRAM as a second tier RAM store, with most of the bandwidth discrepancy hidden by using the HBM2 like an L4 cache in front of a larger pool of standard DIMM based DRAM. A mobile Zen based APU with even half of an HBM2 stack’s 4GB reserved for VRAM would still have 2GB for the OS and applications to leverage for more bandwidth.
AMD should at least try to get some Zen/Polaris APUs and pair them with 1 or 2 HBM2 stacks, in addition to using regular DIMM based DRAM as a larger pool of lower bandwidth secondary RAM, so that the APU’s integrated Polaris graphics would have enough HBM2 bandwidth to feed plenty of extra ACE units and totally outclass any of Intel’s Iris Pro offerings.
I’d gladly take a Zen/Polaris laptop SKU with a single 4GB HBM2 stack, which would offer plenty of bandwidth, paired with a single DIMM channel to 8 or 16GB of regular DDR4 RAM, with the GPU able to take advantage of the extra bandwidth that even a single HBM2 stack provides over standard DIMM based DRAM.
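(For what it’s worth, published interface specs do back up the bandwidth gap being described here: one HBM2 stack is 1024 bits wide at up to 2 Gb/s per pin, while a single 64-bit channel of DDR4-2133 moves 2133 MT/s, so the rough comparison works out to about 15x:)

```latex
% Rough peak-bandwidth comparison from published interface widths and rates.
% One HBM2 stack: 1024-bit interface, up to 2 Gb/s per pin.
% One DDR4-2133 channel: 64-bit interface at 2133 MT/s.
\[
B_{\text{HBM2}} = \frac{1024 \times 2~\text{Gb/s}}{8} = 256~\text{GB/s}
\qquad
B_{\text{DDR4}} = \frac{64 \times 2133~\text{MT/s}}{8} \approx 17~\text{GB/s}
\]
% So even a single stack offers on the order of 15x the bandwidth of the
% single-channel DDR4 configurations common in budget laptops.
```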
Couldn’t agree more, but
Couldn’t agree more, but didn’t understand a word
Zen? What Zen, you mean the
Zen? What Zen, you mean the Zen they haven’t come out with yet? The problem is Iris Pro spanks anything AMD has to offer, not to mention the horrid APU fiasco they currently dabble in.
So AMD 28 nm graphics against
So AMD 28 nm graphics against Intel 14 nm graphics? Do you really think Intel graphics will look that good when both are at 14 nm? Intel could have made a high powered graphics card, but they don’t want to be in the GPU business. How much does Intel get for 400 to 600 square mm of silicon as a Xeon or Xeon Phi, compared to what a similar sized GPU goes for? You can probably get the GPU on a card with a bunch of high speed memory for several times less than what Intel gets for just the chip. Intel is abandoning the consumer market to some extent to focus on enterprise. It is unclear yet whether AMD will rise to the top, or whether ARM will move into the PC space. ARM in the PC space would open up the market to competition from many different players. Apple’s A9X is probably already good enough for a low end laptop. This is why AMD pushing a powerful K12 processor is a bad idea for AMD. It would leave them open to a lot of other competition. It will be best for them to stick to x86 for now, since a switch to ARM will take a while. There isn’t exactly much competition in the AMD64 market if Intel “de-emphasizes” its consumer role.
Really how many Intel SOC
Really, how many Intel SoC SKUs even have the top end Iris Pro graphics, and Intel really charges for its so-so graphics. Polaris is at 14nm and Zen will be here soon enough at 14nm. AMD has a way to make its mobile Zen cores smaller at 14nm the same way it made Carrizo cores smaller at Carrizo’s 28nm fab node, without a die shrink: saving 30% of a core’s area by using the GPU style high density design libraries! So AMD with Zen at the 14nm node can get the node’s space savings plus an extra 30% for using the high density design libraries to lay out a mobile Zen core, for a mobile Zen APU with even more Polaris ACE units, at a way more affordable price.
Have you looked at Intel’s U/M Core i7/i5 mainstream offerings? They are not that fast to begin with, and they never come with the top end Iris Pro graphics that the Intel gaming fanboys like to compare against AMD’s APU graphics. AMD’s offerings have better graphics at the same price point compared to what Intel offers, and that “top” Iris Pro graphics is only available on the highest priced Intel SKUs that no OEM appears to be using on the majority of its laptop SKUs.
Zen is at 14nm, Polaris is already out there at 14nm, and the Zen/Polaris APUs will be there in 2017. Maybe AMD will think to take a Zen/Polaris APU die and wire it up via an interposer to even a single HBM2 stack, 1024 bits wide and clocked a little higher, for plenty of integrated graphics bandwidth, with extra HBM2 bandwidth for the Zen cores also. A single HBM2 stack could act as an L4 cache of high bandwidth memory while the APU also has a channel to regular, lower effective bandwidth DDR4 DIMMs. Intel does something similar with its on-package eDRAM/L4 style memory.
AMD’s APU graphics have always been starved of bandwidth by OEMs mostly using single channel memory. So if AMD put even one Zen/Polaris APU die on an interposer connected to a single HBM2 stack, it would never have to worry about any single channel nonsense from OEMs: with 4GB right on the interposer, AMD’s integrated graphics could shine no matter what an OEM (under Intel’s influence) did to gimp the regular DDR3/4 memory channels, like only using a single channel. There would be no starving the Zen/Polaris APU of HBM2 bandwidth, as the HBM2 stack would be integrated into the interposer along with the Zen/Polaris APU die.
The thing is, Intel does not
The thing is, Intel does not have any intention to duke it out with AMD. If Intel really wanted the performance crown for IGP, they could simply give their low end CPUs faster graphics. It is not about being expensive or not. Some people make it out that Intel IGP will never touch AMD’s APU graphics, but in reality it is not impossible.
And we are not going to see an APU with HBM any time soon, surely not in 2017. Even if one exists, it will be an HPC part and not for regular consumers.
This jackass has been running
This jackass has been running his yap for the last few years. He comments on every AMD, Intel and ARM article with his run-on sentences and his incoherent nonsense.
I would rather have someone
I would rather have someone who is excited about the possibilities with silicon interposers than the legion of trolls that show up in any AMD news stories. It is unfortunate that this particular anonymous poster (there is often more than one anonymous poster) has difficulty with sentence structure. English is not their first language perhaps.
Most of the people talking
Most of the people talking clearly don’t know what’s going on in the HPC space.
The biggest news in silicon right now is neither AMD, Intel nor Nvidia. It’s ARM and Fujitsu. The Post-K computer’s preliminary specs have been announced: ARMv8-A with 512-bit SVE and Tofu 3 on the chip or package, probably supporting XPoint as well.
1 EFLOPS, with 100x the application performance of K (which is still #1 on Graph500) and about 10x that of the PrimeHPC FX100’s SPARC64 XIfx.
Intel Knights Hill, Skylake Purley and GP100/GV100 are the other really interesting things. We know plenty about GP100, Knights Landing and the deep learning and AI push.
3D IC packaging is not as straightforward as these people saying “PUT EVERYTHING ON INTERPOSERS WITH HBM” make it sound.
TSVs are currently in use in several places like interposers, but almost no one mentions HMC, which has been in use with CPUs since early 2015. The SPARC64 XIfx has been using it all this time while everyone talks about future products. Knights Landing uses it too.
Neither CPU uses interposers, because they’re expensive and unnecessary in most cases. Intel’s EMIB is a competing technology that’s been in use in real world applications.
AI, deep learning, big data and HPC are converging, so memory and interconnect bandwidth are the new focus. XPoint and NVDIMMs are coming to Skylake Purley and Knights Hill next. That will allow for many new possibilities, with nonvolatile storage being used as fast far memory.
Oh yeah, I forgot to mention
Oh yeah, I forgot to mention the impact that putting STT-MRAM on CPUs will have as well. That’s another technology that’s been in use for years and is about to become a lot more popular.
I do the run-ons just to piss
I do the run-ons just to piss you off, so more run-ons, and commas,,,Just to run you mad. Now, go and enjoy your Intel, dog food, graphics!
I’ll take my Zen/Polaris APU with a single stack order of HBM2 on that interposer plate, for some de•li•cious graphics, not starved for bandwidth, to Run-On my laptop!
Yes that’s a Zen/Polaris APU die on an interposer with a single stack of HBM2(4GB) all wired up with a 1024 bit interface to the Zen/Polaris APU die and the HBM2 stack clocked a little higher for plenty of bandwidth for the Zen cores and the Polaris graphics, with lots of ACE units on that APU’s die, because the Zen mobile cores, at 14nm, were laid out using high density design libraries, so they take up 30% less die area, in addition to the die area saved by going to 14nm, for more space for more ACE units that feed from the HBM2’s faster RAM memory without being bandwidth starved!
So that mobile Zen/Polaris APU can have enough HBM2 bandwidth in spite of only having a single(OEM gimped) channel to regular slower DIMM based DRAM, because the HBM2 stack has its own 1024 bit fat wide connection etched out on the silicon interposer’s silicon substrate directly to the Zen/Polaris cores die, with the HBM2 acting like an L4 cache for the Polaris GPU’s textures and the Zen core’s OS/Gaming engine needs also, with everything else stored on the slower DRAM and transferred to and from HBM2 in the background by the memory controller, so only the most used/needed textures/code are on the HBM2’s memory, with the rest stored in the slower DRAM and transferred to the HBM2 when needed using memory caching algorithms in the APU’s memory controller, to keep the slower RAM’s bandwidth deficiency/latency issues hidden.
That’s some HBM2 goodness for a Zen/Polaris APU all on an interposer for some great affordable laptop SKUs with plenty of HBM2 bandwidth for the APU’s graphics!
Yeah, let’s put a $1000 APU
Yeah, let’s put a $1000 APU into a cheap plastic laptop that costs $200. Great idea!
You are not going to get that
You are not going to get that fast of graphics out of an IGP without solving the memory bandwidth issue. Intel’s current solution, with a custom memory chip on the same package, is probably just going to be too expensive for many use cases. Intel could possibly use their EMIB technology, but that would probably also be quite expensive. There is room for such devices on the high end though. I could see Apple making a MacBook Pro with an Intel chip with memory connected using EMIB tech or an AMD APU with HBM. I believe they already use Intel Iris Pro (on package eDRAM).
That is somewhat irrelevant though, because these new, expensive memory technologies are not actually necessary to solve the bandwidth issue. All they would need to do is build laptop systems the way the consoles do: attach graphics memory (GDDR of some type) directly to the APU. A lot of laptops are coming with non-upgradable memory anyway, so why not use some GDDR5 instead of DDR4? Once the memory bandwidth is solved, you still need a powerful GPU. Intel’s current GPU core will not compete with AMD’s GPUs. Intel could make a powerful GPU using a device derived from Larrabee, but they do not want to be in the GPU market, so they will not do it.
Well they certainly seem to
Well they certainly seem to want to compete with GPUs in the deep learning arena.
So are the stories of
So are the stories of Microsoft saying that Kaby Lake will only support Windows 10 (Ver. 1511, Anniversary, and in the future Redstone 1 and 2) true? And will AMD’s forthcoming Zen CPU be the same?
Zzzzzzzzzzzzzzzzzz
Zzzzzzzzzzzzzzzzzz
I’ll see your
I’ll see your Zzzzzzzzzzzzzzzzzz, and raise you ZZZZZZZZZZZZzzzzzzzzzzzZZZZZZZZZZZZZZ
This seems to mostly just be
This seems to mostly just be Skylake with a slightly improved process and some of the bugs fixed. It is interesting how irrelevant the CPU cores themselves are becoming, though. I booted up some of my older machines a while back and they could all probably do quite well for many tasks, until you try to play a video or something. The hardware video decode support is, in my opinion, more important than the CPU cores. Almost any good CPU from about 2007 on will perform quite well for day-to-day tasks. Playing a video without hardware acceleration will bring them to a halt though.
In the U & Y space, they’re
In the U & Y space, they’re addressing stuff that matters more than clock speed, and still increasing clock speed. It’s an overall net positive, as it should be, from what I can see here.
Intel needs competition so
Intel needs competition so they’ll stop screwing around and release an LGA1151 i7 with 128MB of eDRAM and a 91W TDP at 4GHz.
I thought my 6700K was going to have that, but it turns out a few soldered-to-the-motherboard E3 Xeons are the only Skylake parts with eDRAM that aren’t shitty 5W laptop junk.
Intel will not sell such a
Intel will not sell such a device. They want to make sure that any eDRAM-equipped CPU can only be used in very limited form factors. If they didn’t, people would buy the eDRAM devices for workstations instead of Intel’s large L3 cache devices, which are very expensive: up to about $7000, I believe. The profit margins on the large on-die cache Xeons are huge. There are many non-consumer applications which would benefit from a 128 MB L4 cache, even if it isn’t as fast as on-die cache. I hope that such market segmentation doesn’t keep HBM based APUs out of the consumer market also. It may be the case that HBM is just too expensive for the consumer market anyway, independent of artificial market segmentation. The existence of GDDR5X and now supposed GDDR6 is a bit worrying as far as the HBM2 ramp goes.
Low cost HBM will possibly
Low cost HBM will possibly solve that?
HBM3 and HMC 3.0(why the fuck does no one talk about HMC?) are both announced. HMC 2 has been in use for longer than HBM as well.
I think the advantages of APUs as they exist are overblown too. I’d rather just have a CPU that has more than 35GB/s for all 4 cores.
NEC managed to get 64GB/s PER CORE out of DDR3 on their 4 core SX-ACE since 2013.
If you divide Intel’s or AMD’s bandwidth by the number of cores on their CPUs, it pales in comparison.
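(Taking the commenter’s own figures, the per-core arithmetic is straightforward:)

```latex
% Per-core bandwidth, using the numbers quoted in this thread:
% a quad-core desktop CPU with ~35 GB/s of total memory bandwidth versus
% the NEC SX-ACE's quoted 64 GB/s per core.
\[
\frac{35~\text{GB/s}}{4~\text{cores}} \approx 8.75~\text{GB/s per core}
\qquad \text{vs} \qquad
64~\text{GB/s per core (SX-ACE)}
\]
% Roughly a 7x gap in favor of the SX-ACE, which is the commenter's point.
```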
Probably with Skylake, can be
Probably with Skylake, it can be higher than 4.1 GHz at 91W TDP, we’ll see!
I mean Kaby Lake of course.
I mean Kaby Lake of course.
Let’s keep it simple
Let’s keep it simple, people.
It is AMD’s fault.
Laptop Kaby Lake early review
Laptop Kaby Lake early review: http://www.notebookcheck.net/Kaby-Lake-Core-i7-7500U-Review-Skylake-on-Steroids.172692.0.html It uses less power than Skylake while delivering better performance. Fully supports Main10 (10-bit) HEVC also.
This is a great article.
This is a great article. Also of interest is Intel launching true quad core parts in the 15W segment.
“At its core, the
“At its core, the microarchitecture of Kaby Lake is identical to that of Skylake. ”
Argh. People were expecting a much bigger CPU performance increase from Haswell/Broadwell to Skylake and were disappointed. Same thing here. But more clock speed is great, along with less power consumption. We can probably have a high clocked quad core CPU in a thin laptop with a 1060 GPU, no problem.
Well, AMD may catch up since Kaby Lake doesn’t improve the CPU core; AMD touts 40% more performance clock-for-clock with Zen, I think!
I just hope that the 7700HQ
I just hope that the 7700HQ has better STP (single-thread performance) than the 6700HQ, since Skylake is a step back in that regard compared to something like the 4720HQ, which is a Haswell based CPU.
AKA this is what I am talking about
http://www.cpubenchmark.net/compare.php?cmp%5B%5D=2448&cmp%5B%5D=2586
Some of my programs perform much better with higher STP.
Currently using a ThinkPad T420 with a Sandy Bridge based i5-2520M and an Nvidia NVS 4200M, so I really need an upgrade, and I am thinking an i7-7700HQ with an Nvidia 1050/Ti and 8GB of RAM along with a 256GB SSD for $1k or less is in my future, as long as one of these OEMs make one that us