Maxwell and Kepler and…Fermi?
NVIDIA is launching a completely new line of mobile GPUs, some based on Maxwell, some on Kepler, with a new feature called Battery Boost.
Covering the landscape of mobile GPUs can be a harrowing experience. Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family. Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.
Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least). Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.
With the new hardware comes a collection of new software for mobile users, including the innovative Battery Boost, which can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet. ShadowPlay and GameStream also find their way to mobile GeForce users.
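NVIDIA isn't detailing the full recipe yet, but the frame rate limiting half is easy to picture. As a rough illustration only (not NVIDIA's actual implementation), a capped game loop does its work for the frame and then sleeps off the rest of the frame budget, so the GPU isn't burning power rendering frames the player never needed:

```python
import time

TARGET_FPS = 30                  # hypothetical cap; NVIDIA hasn't published the real target
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds available per frame

def update_and_render():
    """Stand-in for the game's actual simulation and draw work."""
    pass

for _ in range(300):  # roughly ten seconds of "gameplay" at 30 fps
    start = time.monotonic()
    update_and_render()
    # Sleep off whatever is left of the frame budget. While the GPU sits idle
    # (or clocks down) during that window, average power draw falls.
    remaining = FRAME_BUDGET - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
```

The driver-level version presumably also juggles clocks and voltages behind the scenes, which is where the undisclosed "magic" likely lives.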
Let's take a quick look at the new hardware specifications.
| | GTX 880M | GTX 780M | GTX 870M | GTX 770M |
|---|---|---|---|---|
| GPU Code name | Kepler | Kepler | Kepler | Kepler |
| GPU Cores | 1536 | 1536 | 1344 | 960 |
| Rated Clock | 954 MHz | 823 MHz | 941 MHz | 811 MHz |
| Memory | Up to 4GB | Up to 4GB | Up to 3GB | Up to 3GB |
| Memory Clock | 5000 MHz | 5000 MHz | 5000 MHz | 4000 MHz |
| Memory Interface | 256-bit | 256-bit | 192-bit | 192-bit |
| Features | Battery Boost, GameStream, ShadowPlay, GFE | GameStream, ShadowPlay, GFE | Battery Boost, GameStream, ShadowPlay, GFE | GameStream, ShadowPlay, GFE |
Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications of their brethren in the GTX 700M line. However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M. Moving from the GTX 770M to the 870M sees a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
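As a quick back-of-the-envelope check on those gains (a rough sketch only: it ignores GPU Boost, memory bandwidth and per-core differences), theoretical shader throughput scales with cores × clock, using the figures from the table above:

```python
# Rough relative shader throughput: cores x rated clock (MHz).
# Ignores GPU Boost, memory bandwidth and architectural differences,
# so treat the ratios as ballpark figures only.
gpus = {
    "GTX 880M": (1536, 954),
    "GTX 780M": (1536, 823),
    "GTX 870M": (1344, 941),
    "GTX 770M": (960, 811),
}

def throughput(name):
    cores, clock_mhz = gpus[name]
    return cores * clock_mhz

for new, old in [("GTX 880M", "GTX 780M"), ("GTX 870M", "GTX 770M")]:
    gain = throughput(new) / throughput(old) - 1.0
    print(f"{new} vs {old}: ~{gain:.0%} more raw shader throughput")
```

That works out to roughly 16% more raw throughput for the 880M over the 780M, but around 60% for the 870M over the 770M, which is why the 870M is the more interesting jump on paper.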
| | GTX 860M | GTX 760M | GTX 850M | GTX 750M |
|---|---|---|---|---|
| GPU Code name | Kepler or Maxwell | Kepler | Maxwell | Kepler |
| GPU Cores | 1152 (Kepler) or 640 (Maxwell) | 768 | 640 | 384 |
| Rated Clock | 797 MHz (Kepler) or 1029 MHz (Maxwell) | 768 MHz | 876 MHz | 967 MHz |
| Memory | Up to 2GB | Up to 2GB | Up to 2GB | Up to 2GB |
| Memory Clock | 5000 MHz | 4000 MHz | 5000 MHz | 5000 MHz |
| Memory Interface | 128-bit | 128-bit | 128-bit | 128-bit |
| Features | Battery Boost, GameStream, ShadowPlay, GFE | GameStream, ShadowPlay, GFE | Battery Boost, GameStream, ShadowPlay, GFE | GameStream, ShadowPlay, GFE |
Things get a bit more interesting with the GTX 860M and GTX 850M, both of which are based on Maxwell - or at least partially. What? The GTX 860M will be available in both Kepler and Maxwell variants, creating a variable in the marketplace that I think is really unnecessary. NVIDIA's reasoning for the duplication of products is that "mobile GPUs are marketed on performance" - which is fine, but so are desktop GPUs, and we don't see a single desktop model number on the shelf based on two distinct designs. Performance on the 860M should be close to the same with both ASICs, but the Maxwell SKU will obviously offer better power efficiency, and thus battery life. How (and if) notebook vendors differentiate this variation will be very interesting.
Also launching today are the GeForce 840M and GeForce 830M, both based on Maxwell, though no shader counts were given in the press deck. These join the GeForce 820M, which continues to be based on Fermi. Yes, Fermi, the same GPU architecture that was first introduced back in September of 2009! None of these GPUs will support ShadowPlay, GameStream or even Battery Boost, which is a bit of a letdown, really.
So Battery Boost only comes to 800M parts, while both 700M and 800M will be able to run ShadowPlay and GameStream? (well, except the Fermi-based GPUs)
Correct!
Really guys, Battery Boost? I bet the GPU will just downclock to save battery life.
Does anyone know how these would compare to nvidia’s desktop GPUs? Like is the 880M roughly as powerful as a 760 ti or something like that if one were to compare their average FPS/frame timings for a given game? Or the 870M like a 750ti maybe? I’ve never owned a laptop with one of these discrete gpus so I’m curious for a comparison that I can wrap my head around.
Just compare CUDA core counts. The 880M has the same number of cores as the GTX 770 on the desktop side, though at lower GPU and memory clocks.
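To put rough numbers on that (a sketch only; the desktop GTX 770 figures below are NVIDIA's published reference specs, not something from this article), you can compare cores × clock and memory bandwidth:

```python
# Ballpark comparison of the GTX 880M against a desktop GTX 770.
# The desktop figures (1046 MHz core, 7010 MHz effective memory) are NVIDIA's
# published reference specs, not something stated in this article.
def shader_rate(cores, clock_mhz):
    return cores * clock_mhz

def mem_bandwidth_gbs(effective_mhz, bus_bits):
    return effective_mhz * 1e6 * (bus_bits // 8) / 1e9

gtx_880m = {"shader": shader_rate(1536, 954),  "bw": mem_bandwidth_gbs(5000, 256)}
gtx_770  = {"shader": shader_rate(1536, 1046), "bw": mem_bandwidth_gbs(7010, 256)}

print(f"Shader throughput: {gtx_880m['shader'] / gtx_770['shader']:.0%} of a desktop GTX 770")
print(f"Memory bandwidth: {gtx_880m['bw']:.0f} GB/s vs {gtx_770['bw']:.0f} GB/s "
      f"({gtx_880m['bw'] / gtx_770['bw']:.0%})")
```

By that crude measure the 880M has roughly 90% of a desktop GTX 770's shader throughput but only about 70% of its memory bandwidth, so in practice expect it to land a bit below a GTX 770.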
Any news about 8xx desktop parts?
Nope!
In the battery graphics, why do they include the battery's initial (1X) charge/play time in the total, when that is the default play time/battery capacity? Why not just list the actual improvement with the true percentage gain? I smell more marketing fibs to make those numbers appear larger.
Borderlands 2 (52%), Bioshock Infinite (50%), League of Legends (97%)! What about the obfuscation with the Maxwell and Fermi based SKUs? This makes it look like a laundry soap ad that only touts the new design of the box while the soap hasn't been reformulated or improved at all. Can this savings ability be used in older SKUs with just a driver change?
Maxwell only uses less power because it relies a lot on the cpu to do the work, nvidia were very clever with this new gpu because they know most people have fast cpus these days
Proof please…
What exactly is the CPU doing that the GPU normally does? You're saying NV's product is total BS here? Heck, the GPU is streaming video encoded on the GPU, which normally isn't happening at all while gaming. What extra work in any situation is the CPU getting now with Maxwell? GPUs today are trying to steal ALL the CPU work they can, not trying to hand work off to the CPU. CUDA, GPGPU, etc. are ways to take work from the CPU, not give it more. Even the 30fps cap isn't really a trick, so to speak; they are just governing the GPU at a speed that is always playable rather than, say, giving 200fps when 30fps is all you need. Why run balls out when you have no need for 200fps of power? As long as they can keep it from EVER dipping below 30fps, I say save the power with a checkbox when desired on battery. As long as I can uncheck it when desired, I'm cool with extra battery life if I want it. It doesn't sound like they are forcing it on you, so I'm cool with it being in there.
The only clever thing NV did was spend money on their CORE products instead of a console. Meaning gpu improvements, drivers that are NOT in phase 3, gsync that IS a product selling NOW, Geforce Experience that works (where AMD’s needs a LOT of help even according to anandtech & others), Ref cards adequately cooled that run at RATED speeds, etc etc…Meanwhile AMD spent on consoles, robbing from gpus, drivers, cpus, freesync (not even a product yet), mantle (beta still), no ARM PHONE/Tablet core until 2015 at least (only seattle for servers this year) etc. I could go on but you should get the point.
You shouldn’t be too surprised or upset when NV/Intel runs away with the races when AMD spent the small wad they did have (ok, really money they didn’t have to begin with, all debt no cash as even if they paid all their cash they’d still owe a few billion+) on things that don’t keep up with Intel/NV. This should be expected when you axe spending on core products for 2+yrs to make 2 console APU’s instead of GPU’s/CPU’s, while the other two guys spent on CORE products ONLY or could at least afford more than core so spending on junk doesn’t hurt other CORE products. Steamroller was supposed to be great, so was hawaii. We got low perf and hot/throttled instead of “competes with Intel” and “even with NV”. Never mind AMD hasn’t even put a Phone/Tablet soc on the roadmap while NV is already on T5 (k1) in another month and Denver IN HOUSE cores later this year. By the time AMD ships AMDSOC1 (whatever it’s called), NV will be on T6 (M1? perhaps with Denver R2 or whatever). Meanwhile AMD chases shrinking markets (low-end, notebooks just lost 21% to chromebooks, and low-end desktops are next with Denver etc coming, expect desktops to drop another 20% in the next year or two), and neglects the only GROWING market in mobile and even that is slowing growth now. But at least it’s a HUGE pie to fight over and their CORE GPU’s are taking over mobile so you should have went there big time like NV years ago.
As the GPU is becoming king on mobile INSTEAD of the modem (I won’t need to get to my cap even faster than now going forward, 2GB caps make a faster modem pointless, watch as Qcom stock price drops this year, like rimm losing enterprise email as important once everyone had it), AMD should have been there to reap the benefits of years of GPU/game/driver development just like NV. Instead the winners/losers will be decided before AMD even launches their first mobile soc. Bummer. NV has just played a delay game waiting for desktop gpus to meet mobile lines, and we’re about to finally get the REAL Tegra’s with GPU’s specifically made for MOBILE with in house CPU (denver A57/64bit) to go with them. NV should have no trouble matching Qcom power in a phone this year or early next year. At that point its a GPU race not a modem/power race and NV/AMD has the best chance of winning that war as all their desktop GPU’s have been optimized to death already in ALL game engines/drivers.
But alas, AMD decided to play a CONSOLE game which is dying (see GDC 2013, devs went massively to mobile last year and will continue next week at GDC 2014, we follow the game devs/great games not the hardware) instead of MOBILE which is growing. AMD going after consoles which are losing money on all fronts (hardware/software sales figures being downed right and left, xbox360/ps3 made MS/Sony a few billion in losses each), while just Samsung/Apple racked up ~$40 Billion each on mobile and Qcom another $6.5B. Is it easier to make a billion on consoles or steal a billion from Apple/Samsung/Qcom? Even with 4 xmas launch quarters in a row, AMD wouldn’t crack 500mil in profits (and CPU losses coupled with GF fines+Debt interest will eat all that). Wisely NV went after the $86 Billion instead of ~$7B in console losses (did anyone make money involved in xbox360/ps3 hardware parts? MS/Sony lost their butts overall). Even with a massive console xmas AMD is still negative on earnings for the TTM and they won’t have an xmas console sales quarter for the next 3 quarters. Xmas doesn’t last all year, just for 1Q at xmas and consoles only launch ONCE every 7yrs or so, so the main party is already over largely which barely got them past break even for a SINGLE quarter of the year and an overall yearly LOSS again.
Where is AMD growth about to come from? Not consoles, not mobile, and hawaii did ZERO damage to NV gpu sales (NV sales were UP) and did NOTHING to further Mantle as not many gamers bought them right? Miners supposedly bought the few that sold. IF they sold a LOT why didn’t they make more than 120mil on the gpu side (which is basically 10mil console chips x $12 each, so nothing made in GPUS)? Mantle is dead next week at GDC 2014 with OpenGL/DirectX speeches. AMD needs to fire management and get people who understand how to make money by chasing GROWTH markets, not declining markets.
Battery boost, I hope no one says that to me drunk
"Performance on the 860M should be close to the same with both ASICs"
Sounds like there need to be GTX 860M vs GTX 860M benchmarks to see which GPU (Kepler or Maxwell) comes out on top for gaming as well as rendering workloads! I would rather have more Maxwell cores and forget about the power savings; there needs to be a direct comparison between a Kepler core and a Maxwell core for these mobile parts, provided by NVIDIA and supported with independent benchmarks (gaming and rendering). This rebadging madness has got to stop, and you just know most retailers are not going to know the difference, especially the slackjaws at most local stores. Screw thin and light, give me a regular laptop and more GPU cores; this is not tablets. I'll wait for a Maxwell laptop SKU with more than 1600 cores.
I read the headline and was getting good and worked up over having just invested Serious Money into my Asus gaming laptop with a 780M.
Then I looked at the CUDA counts and clock speeds and felt a lot better.
So, xoticpc has just updated their Sager/Clevo gaming laptops with the 800 series GPUs (model NP9377-S). My questions are the following: how does a single 880M compare to an SLI 870M configuration? It's only $50 more for the SLI configuration. With the 870M's slightly fewer cores (1344 vs. 1536) and almost the same clock speed (941 vs. 954 MHz), it seems that an SLI config would be a significant gain over a single 880M – especially for only $50! And it's an extra $600 for SLI 880M. So, my next question is how much performance gain would an SLI 880M have over an SLI 870M – especially for a net increase of $550! Any benchmarks on this yet?
Also, xoticpc lists the 880M and 870M with 8 GB and 6 GB frame buffers – freaking 16 GB and 12 GB in SLI, respectively!! How big of a difference would this seemingly huge amount of VRAM make? Obviously, I plan on gaming on this machine. I like to heavily mod games like Skyrim and play them in stereoscopic 3D Vision – which was unplayable on my last ASUS G74 laptop (it had a single 560M with 3GB VRAM and an i7 2.0 GHz CPU). Also, I would like to take advantage of G-Sync later on down the line when the 2560 x 1440 120 Hz monitors come out – preferably in 3D Vision! Is it possible to drive such a monitor with this system? Lastly, I want this system to drive the Oculus Rift with as minimal latency as possible when it comes out later this year (hopefully).
Finally, desktops are just not an option for me, as I spend half my life at work on the road in hotels, and that away time is usually the only time I have to game; when I'm home it's family and honey-do list time! So, please, no "desktops are better than laptops for the money" comments 🙂
SLI memory is not added – both cards have separate frame buffers. That 8 and 6GB GPU memory has to be an error – most probably the cards are 4 and 3GB (880M and 870M respectively), but xoticpc is making the same mistake as you do, namely summing up the frame buffers for an SLI configuration.
870M SLI will be faster than a single 880M if SLI works correctly. Some games only get working SLI months after launch, or scale poorly or not at all due to engine limitations. I would stick to a single 870M, which should be plenty anyway.
Thanks for the response! So, the whole "double the frame buffer in SLI" thing is the wrong way to think about it! Didn't know that… but there is still a significant performance increase for SLI if I want to game at 2560 x 1440, right? Would an SLI config be necessary then? Will I be able to do that with 870M SLI, or would I need 880M SLI? What if I also want to do 3D Vision on a 2K monitor – now are we talking about needing 880M SLI?
On a side note, anyone know of any upcoming high performance laptops with G-Sync displays – preferably 2560×1440?
G-Sync will not work at the same time as 3D Vision, and as for SLI: if you can afford 870M SLI I would get it. If not, then just get one 870M.
The 870m has huge improvements over the previous 770 series (between 20-50% in most uses).
And as for the cards having 6 and 8 GB of ram, I do not believe this is an error. There will be two versions of each card, the solder-in card (non-replaceable) and the MXM-slot card (replaceable)
From what I have seen, the MXM cards will have 2x the ram of the soldered in cards. So many of the MSI/Gigabyte/Asus laptops have the soldered-in variety, while the sager models and the high end MSI laptops have the MXM cards.
The 870m is slightly less powerful than the previous generation 780m, according to benchmarks, so a single one should be able to max out most if not all modern games.
However, that other poster is correct, sli configurations do not double the ram, they merely clone the contents back and forth so that they both have the same information at any given time.
Hope this helps.
Um, I have one question to ask.. my laptop has an 820M graphics card and I'm having very terrible spikes, lag and FPS drops when I'm gaming in games like DoTA 2 and League of Legends. Is the graphics card I have good enough for these games, or any suggestions for me to stabilize or increase the FPS and performance of the games??? I am really in need of help.. thanks a lot.
Are you keeping your laptop cooled? When I game, my frame rate shoots through the floor when the laptop gets too hot. Invest in a cooling fan to place it on while you game.
Can anybody please tell me if that laptop is any good for gaming..
http://www.microsoftstore.com/store/msusa/en_US/pdp/HP-Envy-15-k012nr-Signature-Edition-Laptop/productID.306324300