Introduction
Intel Kaby Lake is already shipping in notebooks, but is the upgrade worth it? More than you might think.
In August at the company's annual developer forum, Intel officially took the lid off its 7th generation of Core processor series, codenamed Kaby Lake. The build-up to this release has been an interesting one, as we saw the retirement of the "tick-tock" cadence of processor releases and instead are moving into a market where Intel can spend more development time on a single architecture design to refine and tweak it as the engineers see fit. With that knowledge in hand, I believed, as I think many still do today, that Kaby Lake would be something along the lines of a simple rebrand of current shipping product. After all, since we know of no major architectural changes from Skylake other than improvements in the video and media side of the GPU, what is left for us to look forward to?
As it turns out, the advantages of the 7th Generation Core processor family and Kaby Lake are more substantial than I expected. I was able to get a hold of two different notebooks from the HP Spectre lineup, as near to identical as I could manage, with the primary difference being the move from the 6th Generation Skylake design to the 7th Generation Kaby Lake. After running both machines through a gamut of tests ranging from productivity to content creation and of course battery life, I can say with authority that Intel’s 7th Gen product deserves more accolades than it is getting.
Architectural Refresher
Before we get into the systems and to our results, I think it’s worth taking some time to quickly go over some of what we know about Kaby Lake from the processor perspective. Most of this content was published back in August just after the Intel Developer Forum, so if you are sure you are caught up, you can jump right along to a pictorial look at the two notebooks being tested today.
At its core, the microarchitecture of Kaby Lake is identical to that of Skylake. Instructions per clock (IPC) remain the same, with the exception of dedicated hardware changes in the media engine, so you should not expect any performance differences with Kaby Lake beyond those that come from improved clock speeds.
Also worth noting is that Intel is still building Kaby Lake on 14nm process technology, the same used for Skylake. The term "same" is debatable, though, as Intel claims that improvements made to the process technology over the last 24 months have allowed it to raise clock speeds and improve efficiency.
Dubbing this new revision of the process as “14nm+”, Intel tells me that they have improved the fin profile for the 3D transistors as well as channel strain while more tightly integrating the design process with manufacturing. The result is a 12% increase in process performance; that is a sizeable gain in a fairly tight time frame even for Intel.
That process improvement directly results in higher clock speeds for Kaby Lake when compared to Skylake running at the same target TDPs. In general, we are looking at 300-400 MHz higher peak clock speeds in Turbo Boost situations when compared to similar TDP products in the 6th generation. Sustained clocks will very likely remain voltage/thermally limited, but the ability to spike up to higher clocks for even short bursts can improve the performance and responsiveness of Kaby Lake when compared to Skylake.
Along with higher fixed clock speeds for Kaby Lake processors, tweaks to Speed Shift will allow these processors to get to peak clock speeds more quickly than previous designs. I extensively tested Speed Shift when the feature was first enabled in Windows 10 and found that the improvement in user experience was striking. Though the move from Skylake to Kaby Lake won’t be as big of a change, Intel was able to improve the behavior.
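If you want to verify that a given machine actually exposes the hardware behind Speed Shift, the simplest sanity check is to look for the "hwp" (hardware P-states) CPU feature flag. Below is a minimal sketch, assuming a Linux system where the flag shows up in /proc/cpuinfo; on Windows, a monitoring utility such as HWiNFO reports the same capability.

# Minimal sketch (assumes Linux): Speed Shift is built on Intel's Hardware P-states
# (HWP), which the kernel exposes as the "hwp" flag in /proc/cpuinfo.
def has_speed_shift(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # The flags line looks like "flags : fpu vme ... hwp hwp_notify ..."
                return "hwp" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("Speed Shift (HWP) supported:", has_speed_shift())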
The graphics architecture and EU (execution unit) layout remain the same as in Skylake, but Intel was able to integrate a new video decode unit to improve power efficiency. That new engine can work in parallel with the EUs to improve throughput as well, though obviously at the expense of some power efficiency.
Specific additions to the codec lineup include decode support for 10-bit HEVC and 8/10-bit VP9, as well as encode support for 10-bit HEVC and 8-bit VP9. The video engine adds HDR support with tone mapping, though that does require EU utilization. Wide Color Gamut (Rec. 2020) is prepped and ready to go, according to Intel, for when that standard starts rolling out to displays.
Performance levels for these new HEVC encode/decode blocks are set to allow for real-time 4K at 120 Mbps on both the Y-series (4.5 watt) and U-series (15 watt) processors.
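If you want to check whether a given playback path is actually hitting the new fixed-function decoder, one quick test is to force a hardware decode of a 10-bit HEVC clip and see whether it keeps up in real time. Here is a rough sketch, assuming an ffmpeg build with Intel Quick Sync (QSV) support and a hypothetical sample file you supply yourself; it decodes to a null sink so only the decode path is exercised.

# Hedged example: assumes ffmpeg is on the PATH, was built with QSV support,
# and that "sample_hevc_main10.mkv" is a 10-bit HEVC test clip of your own.
import subprocess

def test_qsv_hevc_decode(sample="sample_hevc_main10.mkv"):
    cmd = [
        "ffmpeg", "-hide_banner",
        "-hwaccel", "qsv",        # route decoding through Quick Sync
        "-c:v", "hevc_qsv",       # QSV HEVC decoder
        "-i", sample,
        "-f", "null", "-",        # discard output; only the decode path matters
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # ffmpeg reports the realized decode speed (e.g. "speed=4.2x") on stderr
    tail = result.stderr.strip().splitlines()
    print(tail[-1] if tail else result.returncode)

if __name__ == "__main__":
    test_qsv_hevc_decode()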
It's obvious that the changes from Skylake to Kaby Lake are subtle, and even I found myself overlooking the benefits they might offer. While the desktop capabilities will be tested at a later date in 2017, for thin and light notebooks, convertibles, and even some tablets, the 7th Generation Core processors do in fact take advantage of the process improvements and higher clock speeds to offer an improved user experience.
Intel's 14nm+ offers lower power consumption, and in a laptop this is important. So the higher frequency of Kaby Lake, and probably the extra time it stays at boost speeds, deliver all the extra performance you could expect.
Put Kaby Lake against Skylake on the desktop and you have a second Devil’s Canyon. The only difference is that now you have an optimized process, not better thermal paste.
The only real advantage of the Kaby Lake series is probably the better codec support, especially HEVC.
PS
Desktop comparisons
http://diy.pconline.com.cn/851/8515185_all.html
http://www.xfastest.com/thread-178876-1-1.html
And just in time, check out http://arstechnica.com/gadgets/2016/11/netflix-4k-streaming-pc-kaby-lake-cpu-windows-10-edge-browser/ quote "Netflix 4K streaming comes to the PC but it needs Kaby Lake CPU" and quote "There's also the matter of hardware decoding support for 10-bit HEVC, the 4K codec used by Netflix and other streaming services". That's game over for all of AMD's APUs, which can't decode 10-bit HEVC.
How about software decoding? If the processor is strong enough, this is not a problem in a desktop/HTPC environment. How about discrete graphics cards? Are Pascal-based cards useless compared to a Kaby Lake, or are you just super excited because of some marketing material you read that only talks about Kaby Lake (thanks to Intel's $$$)?
And it's not game over for APUs, just as it is not game over for Skylake, Broadwell, or even Haswell-based PCs. It's just one more reason for people using older AMD APUs, or older Intel processors, to upgrade to newer APUs or Intel CPUs that will also have full HEVC decoding, HDR and HDMI 2.0 support, or to just buy a new discrete graphics card. You know, that's how the industry moves forward: newer models offer more features than older models, so people have a reason to upgrade. The same applies to laptops. People will upgrade to a laptop that can do hardware decoding of HEVC, whether that's Kaby Lake, Bristol Ridge, or something with a Pascal/Polaris in it.
One last thing: the whole planet is not a Netflix customer. Netflix may be a big deal in the US, but I doubt it matters as much outside US borders.
Software decoding may work for the fastest desktop CPUs, but for mobile CPUs it's not going to make the cut. For example, even mobile Skylake struggles with 10-bit HEVC despite having a hybrid HEVC decoder: http://www.notebookcheck.net/Kaby-Lake-Core-i7-7500U-Review-Skylake-on-Steroids.172692.0.html quote "Our demanding 4K trailer (HEVC Main10, 50 Mbps, 60 fps) is handled smoothly by the i7-7500U at an average power consumption of just 3.2 Watts (CPU Package Power), while the video stuttered noticeably on the i7-6600U despite a consumption of 16.5 Watts".
And that mobile Skylake (Core i7-6600U) is faster than most of AMD's APUs, including desktop ones. Example: https://browser.primatelabs.com/v4/cpu/compare/578873?baseline=664340 shows that in both single- and multi-thread performance, the Intel Core i7-6600U is still faster than AMD's latest A12-9800 (for the AM4 platform).
Installing a discrete graphics card that supports 10-bit HEVC decoding is only viable for desktop machines. As for notebooks, it depends on the CPU (preferably Kaby Lake) and, mostly, on whatever discrete graphics the notebook includes (preferably the latest Pascal or Polaris).
The A12-9800 comes with hardware decoding for HEVC. I am not sure if that includes 4K 10-bit. It would be stupid not to, but it wouldn't be a first for AMD (Fury cards that didn't support HDMI 2.0, for example). It's really bad to see the Skylake stutter here. That low TDP limit probably doesn't help.
On the other hand, they do mention a "demanding 4K trailer", which could be a non-issue for most people out there, at least for now. They also mention that they didn't have problems with 8-bit 4K HEVC, which is going to be plenty for everyone not trying to use their laptop as a media player on an ultra-expensive new TV. Anyway, I don't know if in 1-2 years support for that kind of video will be more important. If it is, that's good for both AMD and Intel. Good for their future sales.
In the case of a Pascal or Polaris GPU, it doesn't matter if you have a Kaby Lake in your laptop. The GPU will do the job anyway.
All of AMD's current APUs, including Carrizo and Bristol Ridge, still do not support 10-bit HEVC decoding (both have the same integrated GPU and UVD engine). There's a simple reason for that: "7th generation" Bristol Ridge is actually a "6th generation" Carrizo refresh with the DDR4 memory controller enabled (i.e. Carrizo originally had a DDR4 memory controller built in, check out http://techreport.com/news/26183/report-amd-next-gen-carrizo-apu-supports-ddr4-integrates-chipset ), thus Bristol Ridge only supports standard 8-bit HEVC decoding, just like Carrizo (example: http://www.anandtech.com/show/10705/amd-7th-gen-bristol-ridge-and-am4-analysis-a12-9800-b350-a320-chipset/3 quote "HEVC 8-bit Main Profile Decode Only at Level 5.2").
And the majority of (ultraportable) notebooks and tablets do not have discrete GPUs; having an extra GPU chip and video memory would also reduce battery life. Likewise, many highly compact mini PCs do not have discrete GPUs. Thus, there is still a large number of portable and smaller machines (using previous-generation parts) that are incapable of decoding 10-bit HEVC.
Additionally, Intel's new Apollo Lake also has a 10-bit HEVC decoder (example: http://www.technikaffe.de/anleitung-394-asrock_j4205_itx_im_test__apollo_lake_mit_hdmi_2.0_und_hevc_10bit ). The same goes for many of the latest ARM SoCs on the market using the newest video engines, for example the ARM Mali-V550 (check out https://www.arm.com/products/multimedia/mali-video/mali-v550.php ). Just like 10-bit H.264 (a.k.a. "Hi10p", which is still CPU-only decoding), 10-bit HEVC is beginning to make its presence known.
I do know some of the stuff you posted. The first DDR4 AMD processors were Carrizo parts for the embedded market, long ago. DDR4 was necessary for those models because they came with long-term support. Thanks for the rest of the info/reminders.
I guess 10-bit HEVC is not yet necessary for the majority of consumers and probably won't be for some time; that's why it is not supported yet. But it would have been nice if it was included in the Bristol Ridge feature set, especially when those products advertise their GPU performance. On the other hand, with Intel having hit some walls with its GPU's 3D performance, it's no surprise that it is concentrating on codec support.
You are not seeing the big picture here. If Intel starts supporting this 10-bit HEVC codec, then surely the rest of the content and VOD (video on demand) providers may start using it too. That's because Intel currently has the lion's share of the PC market, so the rest of the industry will start catering to the new hardware capabilities. Likewise, with more ARM SoCs featuring new video decoders that support 10-bit HEVC, the same effect applies (since ARM chips practically outnumber Intel chips in mobile, embedded, and consumer electronics like smart TVs, internet streaming video boxes, etc.).
It's not me who is not seeing the big picture, but you who exaggerates and is in a rush to move the world to the 10-bit HEVC era. Just have a look at plain HEVC: people are still happy with 1080p, or even 720p, H.264 movies. No one is rushing into the HEVC era. We are at least 5 years away from 10-bit HEVC becoming anything more than (rare) premium content for 5 or 10% of viewers.
Believe me, technology moves much slower than you probably think, especially these last years. And don't forget that people are full of devices today (oh, poor poor mother Earth), so they will be less willing to throw them away (thank God for the environment) just to watch movies at higher quality.
Confirmed, Netflix 4K works with Intel's Kaby Lake only: https://www.heise.de/newsticker/meldung/Netflix-4K-auf-dem-PC-ausprobiert-Kaby-Lake-funktioniert-GeForce-1000-und-Radeon-RX-400-nicht-3505452.html They also tested AMD's Polaris and NVIDIA's Pascal; unfortunately both failed due to the DRM support required. Since companies like Netflix are already using 10-bit HEVC, a few other VOD providers will probably follow soon as the amount of hardware capable of 10-bit HEVC decoding with DRM support increases.
The market doesn't upgrade like it used to. People now commonly keep their PCs for 4+ years. With no rush to Kaby Lake-based PCs, since they're not offering much in the way of a performance increase over Skylake, much less Haswell, AMD has nothing to worry about.
Netflix isn’t going to alienate all the existing Intel user base just to please content providers and Intel.
Introducing new hardware features like 10-bit HEVC is meant to drive sales by giving people a reason to upgrade current hardware. That said, many users on AMD's low-end APUs (on the dead-end FM2+ platform) may be enticed to move to a newer, more future-proof platform. The new 10-bit HEVC support will also attract HTPC builders, as previous-generation hardware will soon become the least likely choice (that includes the almost-forgotten AM1 platform and the FM2+ platform). As JPR's Market Watch reports ( http://jonpeddie.com/publications/market_watch ) have shown throughout the rest of 2016 (quarter-to-quarter: -16.8% in Q1, -22.2% in Q2 and -10% in Q3), the number of desktop APUs shipped has declined significantly.
AMD's market share is so low right now that Intel isn't interested in trying to take it away from them with features like on-die HEVC (x265) hardware decoders. Anyone holding on to AMD CPUs/APUs at this point probably isn't doing it for very rational reasons anyway.
Intel is essentially competing with itself right now. That may change if Zen turns out to be as good as rumored, but that is a wait-and-see situation.
The PS4 has almost no CPU and can do Netflix 4K.
Netflix and Amazon 4K video requires the H.265 (a.k.a. "HEVC") codec; however, the current PS4 console supports the H.264 codec at HD resolutions only. Only the newer, upgraded PS4 Pro will be able to support 4K Netflix.
That is a total non-issue. How many AMD APU-based notebooks will ever be connected to a 4K screen? Probably around zero, at least until we get a high-end Zen-based laptop. Also, not that many 4K computer screens are even capable of displaying deep color. I have a very expensive Dell UltraSharp desktop display with a 10-bit capable panel. Most laptop displays are essentially 6-bit TN panels, or maybe 8-bit if you get an IPS touch screen. Looking at a lot of laptops, even the supposedly high-end option is often a 4K screen at only 72% NTSC or so. That isn't going to show off 10-bit HEVC very well. Most 4K TVs come with built-in Netflix capabilities that can be used to get the full quality without going over HDMI anyway.
Certain notebooks do have 4K panels, for example the Dell XPS 15: http://www.dell.com/en-us/shop/productdetails/xps-15-9550-laptop/dncwx1636h quote "15.6" 4K Ultra HD (3840 x 2160) InfinityEdge touch". Most desktop PC users will likely need a new discrete GPU that supports 10-bit HEVC. This also impacts HTPC builders, who have to consider the hardware required if they want to use 10-bit HEVC. However, those already using compact mini PCs with only an integrated GPU will be impacted the most, as a complete motherboard change will be required to enable 10-bit HEVC support. Likewise, notebooks using previous-generation chips will be left out (so their owners will have to get new-generation notebooks)…
Certain isn't commonplace.
4K monitors are still fairly niche and will be for a while yet. Virtually no one is going to upgrade their PC just to watch Netflix or Prime. They'll shrug and just use 1080p content and let their TV upscale the stream into 4K-ish, which they probably won't notice anyway unless they have a very large (70″+) TV or sit very close to it.
That's game over for all Intel CPUs except Kaby Lake, too… if one follows your ridiculous logic.
Really, rendering workloads on a CPU's cores are only good for benchmarking purposes, because rendering taxes a CPU's hardware so heavily. To be realistic, CPUs are a joke at rendering; even Blender has its Cycles renderer running on the GPU, and it finishes much quicker than any CPU cores can manage. The one test I really want to see done with any laptop APU/SoC is a simple high-polygon-count model/scene in Blender's (or other 3D software's) edit mode, where the UI can sometimes become bogged down handling high-polygon mesh models.
Intel's integrated graphics appears not to have enough parallel shader resources, relative to AMD's or Nvidia's GPU SKUs, to allow high-polygon mesh models/scenes to be worked on in 3D edit mode while the interface remains responsive. Intel's graphics may be sufficient for the low-polygon mesh models used for gaming, but in Blender's OpenGL-rendered editing mode Intel's integrated graphics does not have enough shader cores to handle high-polygon mesh models while keeping the 3D editor's UI fluid. I have had high-polygon-count mesh models get so bogged down in Blender's 3D edit mode that the entire model and interface became unresponsive on Intel's integrated graphics and I could not get any productive work done. I would move the mouse, and the model in Blender's 3D edit mode took more than 5 seconds to respond.
I will always need a discrete mobile GPU if the laptop comes with Intel's integrated graphics. Intel's CPUs may be good when paired with AMD's or Nvidia's discrete mobile GPUs in the laptop form factor, but any serious laptop graphics workload requires either AMD's or Nvidia's discrete mobile graphics to handle the really high-polygon mesh models created for animation and other non-gaming 3D work.
I eagerly await AMD's Zen/Vega APUs, not so much for the Zen cores' IPC as for the GPU shader count that will let me edit my high-polygon-count mesh models. Some of the rumored AMD Zen/Vega interposer-based APUs with 16 or more CUs on the APU's GPU would allow me to forgo a discrete mobile GPU while still working with very detailed high-polygon mesh models in Blender's 3D edit mode. Getting 16 Vega compute units on an AMD laptop APU SKU is going to be very popular with those who want to do serious mesh modeling/editing on a laptop while keeping the UI responsive. As for rendering, no one wants the CPU for that; GPUs are the way to go.
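For anyone who wants to try the GPU path themselves, here is a rough sketch of how I would switch Cycles over to GPU rendering from Blender's Python console. This targets the Blender 2.7x-era API (the preferences path moved in 2.80+), and 'CUDA' assumes an Nvidia card; AMD users would pick 'OPENCL'.

# Rough sketch for Blender 2.7x: move Cycles rendering from the CPU to the GPU.
import bpy

prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # or 'OPENCL' for AMD GPUs

scene = bpy.context.scene
scene.render.engine = 'CYCLES'       # make sure Cycles is the active renderer
scene.cycles.device = 'GPU'          # render on the GPU instead of the CPU cores

print("Cycles device:", scene.cycles.device, "/", prefs.compute_device_type)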
In terms of performance, it seems the gain is due to the higher boost clock rather than architectural improvements. IPC improvements seem to be low, as usual for Intel.
What is really impressive is power consumption. Having higher clocks and improving consumption by this much is very good.
That probably has more to do with fab process node improvements, but for some laptop SKUs better power usage is a good thing, to a point. I'd rather have more processing power on my laptops than battery life, as even on the go I mostly use my laptops plugged in. The main limiting factor is graphics, as Intel is not improving its graphics much generation to generation and refuses to use its top-tier graphics in more mainstream, more affordable laptop SKUs.
Intel is saying it's a 1% increase in IPC. But compared against Skylake at the same base clock it's exactly the same. So Intel's Kaby Lake is returning to the days of the 'rebrand', at a rip-off to the customer.
Yeah, 12-20% better performance and 2 more hours of battery life for the same price. What a ripoff! How dare they!
It's more like 10% and maaaaybe an hour. And at Intel's prices, yes, how dare they.
A quick thing I noticed on the Blender graph:
1) Your BMW plot was labeled “BWM”
2) The time measurements are shown as seconds on the graph, but stated as minutes in the text below explaining the graph.
Ha, thanks for catching it. Will update soon!
There aren't any single-thread tests in this review. I would like to see single-thread results to see whether there are changes to the IPC besides the clock increase and the Speed Shift thing.
Spoiler question for the upcoming review: do you like the XPS 13 or the Spectre better?
Hmm, tough to say. I'd like to get the new Kaby Lake XPS 13 in first to decide. Connectivity might have a lot to do with it.
So no need to upgrade from my 6700K then?
This is only looking at the notebook processors. The desktop situation could be very different – we'll know more in a few months.
In Jan-Feb we'll have the info on the 7700K, Z270 (extra PCI-E lanes, and Optane support — we'll know what that actually means), as well as AMD Zen. A short time to wait before any upgrade, to see what is really out there.
Why don't they put a 16:10 screen on that HP and get rid of some of the ugly bottom bezel? Additional vertical screen real estate is very useful for getting work done.
Is the Spectre x360 with Kaby Lake using Connected Standby when sleeping and getting to DRIPS successfully?
Those were a couple of the biggest issues with Skylake. Microsoft struggled long and hard with them, releasing many firmware updates to try to get them right on the Surface series. Dell's XPS 13s didn't even try to support them with Skylake, though predecessor models did.
(Connected Standby is important to give modern devices a ‘smartphone-like’ experience and DRIPS is important to maximize battery life while the device is on standby. I hope people are aware of them and keen for them, but that doesn’t always seem to be the case.)
Thanks!
Looks impressive on the battery life side of things!
Can't wait to see that in a Surface Pro 4, 11h maybe.
Surface Book 2, probably 15h!!
And with the usual 5 to 10% performance gain, nice.
Thanks for the article!
Don’t get too excited, Microsoft might decide to use the added battery life to put in a smaller battery and make the Surface Pro 5 thinner. Because as we all know, thinner is what everyone craves. Apparently.
They did it with the SP4, the Skylake CPU was offering better battery life so they put in a smaller battery so the overall battery life turned out slightly worse than the older model. That’s the only issue I have with my SP4 i5, the battery life is way too short, 3-4 hours is typical for my usage. I get battery anxiety when the needle dips below 80% so this is bad for me.
Hopefully someone will explain to them that we don’t REALLY crave ever thinner devices before they finish designing the SP5, there’s still hope…
You don't do a clock-for-clock analysis, so it's a biased methodology, and the real CPU comparison can't be taken into consideration as it's not core for core.
Ryan – what RAM (speed) was used in each of these systems?
May I suggest that the gaming performance difference you’re seeing here is actually LARGELY down to something other than the CPU?
The specification for the Skylake Spectre 13 you tested has it listed as having a single 8GB DIMM, whereas the Kaby Lake system has 8GB soldered down, likely in a dual-channel configuration.
In CPU-focused tests the 200MHz clock speed advantage is playing out, but in the gaming tests you're benefitting from the greater memory bandwidth on the Kaby Lake system – in my testing with the i3-7100U, the difference between single- and dual-channel DDR4-2133 is about 30%, whereas the gap between the i3-6100U and i3-7100U in 3DMark Sky Diver was just 6% – predominantly because of the CPU-focused physics portion of the test.
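For reference, here is the back-of-the-envelope math on why the channel count matters so much for an integrated GPU. This is the theoretical peak for DDR4-2133 (a rough sketch; real-world usable bandwidth is lower):

# Theoretical peak bandwidth: each DDR4 channel is 64 bits (8 bytes) wide
# and DDR4-2133 transfers at 2133 MT/s.
def ddr4_bandwidth_gbs(transfers_per_sec=2133e6, channels=1, bus_bytes=8):
    return transfers_per_sec * bus_bytes * channels / 1e9

print("Single channel: %.1f GB/s" % ddr4_bandwidth_gbs(channels=1))  # ~17.1 GB/s
print("Dual channel:   %.1f GB/s" % ddr4_bandwidth_gbs(channels=2))  # ~34.1 GB/s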
Lower power consumption means that both the CPU and the GPU can run for more time at higher frequencies, if I am not wrong.
Also, that 30% from your tests seems pretty high considering we are talking about an Intel GPU. I wouldn't expect that GPU to gain that much from more bandwidth; I consider it too slow to extract that kind of performance from the extra bandwidth.
Bandwidth is a major bottleneck for integrated GPUs, so I wouldn't be surprised that dual channel vs. single channel could make such a difference. If they are going to solder the memory on the board, it would be nice if they would use GDDR5 instead of just slow DDR3 or DDR4, but 8 GB of GDDR5 would burn a huge amount of power. It wouldn't be worth it unless a powerful gaming APU is used. But I guess we will have to wait for some HBM derivative to make it to mobile APUs.
I have seen an article in the past where AMD integrated GPUs were gaining from faster memory, but Intel integrated GPUs were not. I guess Intel has improved in that regard.
Intel have improved their GPUs a LOT in recent generations.
When you are at the bottom you have plenty of room for improvements. And so to be clear that I am not just bashing Intel here, take for example the leap from Excavator to Zen.
As the earlier poster mentioned, Intel has greatly improved its integrated graphics performance especially in the lower wattage areas. For example https://gfxbench.com/compare.jsp?benchmark=gfx40&did1=37753509&os1=Windows&api1=gl&hwtype1=iGPU&hwname1=Intel%28R%29+HD+Graphics&D2=AMD+A9-9410+RADEON+R5%2C+5+COMPUTE+CORES+2C%2B3G that is Intel’s new lowly Atom E3900 versus AMD’s latest A9-9410. That’s low power “Atom class” versus ultraportable “notebook class”. Another example http://www.notebookcheck.net/Kaby-Lake-Core-i7-7500U-Review-Skylake-on-Steroids.172692.0.html quote “All in all, the HD Graphics 620 is roughly on par with a dedicated Nvidia GeForce 920M or AMD Radeon R7 M440”.
I don't really trust GFXBench for GPU tests. Show me game performance and then we have something to discuss. Not to mention that the A9 has a configurable TDP, and OEMs love to destroy AMD APU performance with single-channel memory and low TDP limits.
As for the 7500U, it's not exactly a cheap CPU. And those two discrete GPUs from AMD and Nvidia use a combination of DDR3 and a 64-bit bus, making them equivalent to an integrated GPU anyway because of the low bandwidth, no matter how many shaders/cores they have. Put that Intel GPU next to a dedicated GPU with at least 30-40 GB/sec of bandwidth and things will look completely different.
You can compare the frames per second numbers for Intel's Kaby Lake Core i7-7500U here http://www.notebookcheck.net/Kaby-Lake-Core-i7-7500U-Review-Skylake-on-Steroids.172692.0.html with AMD's Bristol Ridge A10-9600P here http://www.notebookcheck.net/Bristol-Ridge-in-Review-AMDs-A10-9600P-Against-the-Competition.168477.0.html Do note that the reviewed AMD Bristol Ridge-powered notebook is using dual-channel memory. Starting off with Diablo III, you can already see that Intel's Kaby Lake is way ahead, with 105.9 fps trouncing AMD Bristol Ridge's 50.9 fps. Even at medium to high settings, the pattern is the same. The trend continues with other games like Crysis 3, Bioshock Infinite, Metro Last Light, etc.
The Skylake version seems to only be purchasable with 8 GB, but it looks like you can get the Kaby Lake version with 8 or 16 GB. RAM speed is DDR3L-1866.
I can’t determine whether either is single vs dual channel though.
A couple videos to demonstrate where I believe your results may be flawed:
i3-6100U and i3-7100U – 3DMark Sky Diver: https://www.youtube.com/watch?v=bySkyfSTPRg
i3-7100U 1 x 8GB DDR4-2133 vs. 2 x 4GB DDR4-2133: https://www.youtube.com/watch?v=tQvOXa9K-UE
Wow, 60fps in Overwatch at medium is simply amazing for a non-dedicated GPU. I literally get less on my desktop with a GeForce GTX 460 sucking nearly 200W. Bravo.
Looked at the new MacBook Pros today; they should be called the AirBook Pro Anorexic, with no 32GB RAM option. Again, the non-Radeon Pro SKUs do not have higher-end Intel graphics options! Apple looks to be reducing the number of laptop SKUs, and now the MacBook Pro is not so pro anymore. Apple appears to be more interested in its brand than in any sort of functionality for professional graphics usage, so now we have the AirBook not-so-pro: full-on anorexic, form over functionality, with the hide-the-escape-key fun bar! It's not looking good for Apple laptop refreshes; if they get any thinner they will disappear. RIP MacBook Pro!
They sacrifice too much for lower power consumption. I wouldn’t buy most of their previous models due to the repair and upgrade issues. I want to be able to easily swap out storage at a minimum. My ancient MacBook Pro required removal of about 2 dozen screws to get to the hard drive and also had delicate tabs that got broken on the second hard drive upgrade making the touchpad unusable. Not a good design. Most of the stuff Apple does is not needed to make the devices thinner, at least for laptops.
The newer ones would probably require removal of a bunch of screws and probably some work with a heat gun because of the glue. With the ones with the touch sensitive strip, the SSD is actually soldered onto the board; this is unacceptable in my opinion. I guess the model without the touch bar doesn’t have it soldered, but it uses a proprietary SSD. Definitely a case of planned to fail. With heavy use or even software bugs that hit the SSD hard, these things could be due for a full system board replacement in much less time than I usually keep my devices. I guess they are hoping to get people to do the cell phone model of upgrades. Get the latest and greatest laptop every two years, and pass the current one off on someone else. The planned failure mode isn’t your problem then. I wouldn’t recommend buying a used SSD or a used SSD with a laptop wrapped around it. Good luck with the resale value.
I do not intend to ever buy an Apple now, and I guess I will have to keep my old Ivy Bridge Core i7 HP ProBook a little longer, but the AMD 7650M GPU is getting out of date and the laptop's keyboard is getting a little rough around the edges. Apple's MacBook SKUs are just so much overkill with that overly thin design. I'm hoping that AMD's Zen/Vega APUs will get some design wins, but I don't want Windows 10, so maybe there will be some Zen/Vega Linux-based laptops available before 2020.
Currently it's mostly Intel/Nvidia offerings from the Linux-based OEM laptop makers, but I have hopes for some Zen/Vega APU design wins from AMD before 2020, and maybe some of the Linux-based laptop OEMs will get on board with Zen/Vega APU laptops in the future. I'm having a hard time finding any laptops with discrete mobile AMD GPUs newer than first-generation GCN; even Micro Center does not appear to carry laptops with anything but AMD's first-generation GCN discrete mobile GPUs.
It's really going to take Zen/Vega to get AMD any larger laptop APU design wins. So maybe by this time next year there will be Zen/Vega options from HP/Dell/others, and HP does have some ProBooks with Linux OS options, or at least it did for my model of ProBook when it was made. I have at least 3 years before Windows 7 loses all support, so I'll just have to rely on the older laptops I still have that came with Windows 7.
What good is 4K 10-bit HEVC support without HDCP 2.2?
Comparing the two Spectre laptops, was there also a change from SATA to NVMe as the SSD bus?
I think you’ve all missed the point about Kaby Lake and Netflix.
Netflix will allow 4K on Windows because Kaby Lake supports HDCP 2.2. HEVC, 10-bit or not, it’s all about DRM.
Hmm, I feel an upgrade coming on, from an i3-6320 to an i5-7600K. What do you guys think?
Hi everyone! I came across your conversation while researching what laptop to buy. I am not good at specs and technical stuff, and I've read some of your comments here.
I am torn between an i5 (6th Gen) with NVIDIA GeForce 940MX (2GB) and an AMD A9-9410 with Radeon R5 M430 (2GB).
These are the actual notebooks I am looking at:
https://www.notebooksbilliger.de/acer+aspire+e15+e5+523g+93xx/incrpc/lastseen
https://www.notebooksbilliger.de/asus+f556uq+xo528d/incrpc/lastseen
https://www.notebooksbilliger.de/hp+15+ba049ng+notebook/eqsqid/09bcef30-b3c6-4cf2-aee0-bd36b799ce96
I'm choosing between the three. Basically, it will be my 2nd laptop, as I have a MacBook and I need to run some Windows programs for my studies, and I don't want to run Boot Camp all the time. Thank you so much for all your advice.
Go for HP. SSD is a must in 2016.
While the Intel option will offer faster CPU performance and better efficiency (the laptop will last longer on battery), the HP offers an SSD and a 1080p display at almost the same price.
I think you should go with the HP 15.
If you decide to go with an Intel option, you should consider at least a model with 1080p resolution, even if that means paying more. Don't buy a 1366x768 laptop.
Ignore the Acer and any laptop with a dual core CPU, especially from AMD. Quad core AMD APUs are fine.