Battery Boost
Introduced for the first time with the GeForce GTX 800M series is Battery Boost, a technology NVIDIA has developed to extend the battery life of notebooks while gaming.
Using a combination of GeForce Experience and Optimus, Battery Boost is enabled automatically when you start a game on your laptop in battery mode. (As a side note, if you unplug your machine after starting a game, the battery savings will not be put into effect!)
Even though NVIDIA is being somewhat cagey about what exactly Battery Boost is doing, we can come up with some pretty good guesses. First and foremost, the battery saving technology is using frame rate limiting, a feature that has existed on desktop GeForce GPUs since the introduction of Kepler. Because Kepler (and Maxwell) GPUs were built with very granular power states and clock rates to enable GPU Boost, those same capabilities can be used to lower performance and power consumption and extend battery life.
NVIDIA by default will target a 30 FPS mark for "playable" frame rates when in battery mode, although this is adjustable by the user inside GeForce Experience.
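NVIDIA hasn't detailed how its limiter is implemented, but the basic idea of a frame rate cap is straightforward. The sketch below is purely illustrative (Python, with a hypothetical render_frame callback, not NVIDIA's driver code): render a frame, then idle for whatever is left of the frame budget instead of racing ahead to hundreds of FPS.

```python
import time

def run_with_fps_cap(render_frame, target_fps=30.0, frames=300):
    """Render `frames` frames, never letting the rate exceed `target_fps`."""
    frame_budget = 1.0 / target_fps             # ~33.3 ms per frame at 30 FPS
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                          # the actual CPU/GPU work
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)  # idle time is where power is saved

# Demo with a stand-in "renderer" that finishes each frame in 5 ms:
run_with_fps_cap(lambda: time.sleep(0.005), target_fps=30.0, frames=30)
```

The idle time is the win: the GPU (and to a lesser extent the CPU) can drop to lower power states instead of rendering frames the display and the player never needed.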
But there is more going on under the hood with Battery Boost than just a frame rate cap. NVIDIA calls this a "driver level governor" that can operate the "CPU, GPU and memory at peak efficiency." NVIDIA did admit that they are not adjusting CPU power states at all, so that leaves us with more questions than answers. The driver could be requesting data from the processor at less frequent intervals, lowering shader compilation overhead in real time, or doing something else entirely. But based on the graphic above, NVIDIA is definitely doing more than just lowering GPU clock speeds to meet the 30 FPS frame rate limit.
The first iteration of GeForce Experience with support for Battery Boost will use the same image quality settings for battery-based and plugged-in gaming, though later this month NVIDIA will offer up an additional profile. It will allow users to apply different settings (lower resolution, less AA) while running on battery power to help further increase usable gaming time.
NVIDIA provided some examples since we haven't yet gotten our hands on a working notebook with Battery Boost integrated. Borderlands 2, at 1920×1080 and the high preset, saw a 52% improvement in battery life while gaming. What we don't see is the frame rate and experience difference between the two modes.
Bioshock Infinite with a GeForce GTX 880M sees as much as 50% better gaming battery life with Battery Boost.
The big winner, though, is League of Legends, which at 1080p with Very High settings saw a 97% increase in playable battery life on the GTX 860M!
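To put those percentages into runtime terms (the baseline runtime below is hypothetical; only the quoted gains come from NVIDIA's examples), a gain of X% simply means the unplugged gaming session lasts (1 + X/100) times as long:

```python
# Hypothetical baseline; only the percentage gains are from NVIDIA's examples.
claimed_gains = {
    "Borderlands 2": 0.52,
    "Bioshock Infinite": 0.50,
    "League of Legends": 0.97,
}
baseline_minutes = 60  # assume one hour of unplugged gaming without Battery Boost

for game, gain in claimed_gains.items():
    boosted = baseline_minutes * (1 + gain)
    print(f"{game}: {baseline_minutes} min -> {boosted:.0f} min on battery")
```

Of course, whether those longer sessions feel acceptable at a 30 FPS cap is exactly the experience question we can't answer without hardware in hand.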
Along with Battery Boost, which is ONLY coming to the GeForce GTX 800M GPUs, NVIDIA is bringing some other features, currently exclusive to the desktop, to mobile gamers.
ShadowPlay gives users the ability to record in-game footage with minimal impact on performance using the encoding power of the GeForce GPU. GameStream allows gamers to stream PC games to select devices like NVIDIA SHIELD over their home network. Both features were previously limited to desktop users even though we knew the compute power of many mobile GPUs was more than capable of handling the workload. Now, with a pending update to GeForce Experience coming later in March, GeForce GTX 700M and GTX 800M products will be able to run both.
Though there are several aspects of today's announcement of the GeForce 800M series of mobility GPUs that are only worth a passing glance (rebrands), the inclusion of Maxwell GPUs with the GTX 860M, as well as the 840M and 830M, brings some spice to the mobile gaming market. I still believe that the confusion of having both Kepler and Maxwell options under the same naming scheme will aggravate and annoy OEMs as well as the users smart enough to know which one they want.
Battery Boost is an interesting technology that I am eager to test out once we get our hands on a notebook that integrates it. Gaming battery life has usually been something we tended to laugh off with most machines, but it's possible now that NVIDIA's focus will push reviewers (like us) and OEMs to take note of (and improve upon) that aspect of performance. I still have doubts about the experience effects of frame rate limiting, and how that is balanced against battery life, but we can address those after some hands-on time later this month.
Though we didn't see the flagship Maxwell part for mobile devices that we had kind of hoped for, NVIDIA continues to push forward with discrete graphics innovation for notebooks, and that is something we can all get behind.
So Battery Boost is only coming to 800M parts, while 700M and 800M will be able to run ShadowPlay and GameStream? (well, except Fermi-based GPUs)
Correct!
Really guys, Battery Boost? I bet the GPU will just downclock to save battery life.
Does anyone know how these would compare to NVIDIA's desktop GPUs? Like, is the 880M roughly as powerful as a 760 Ti or something like that if one were to compare their average FPS/frame timings for a given game? Or the 870M like a 750 Ti maybe? I've never owned a laptop with one of these discrete GPUs, so I'm curious for a comparison that I can wrap my head around.
Just compare CUDA core counts. The 880M has the same number of cores as the GTX 770 on the desktop side, though at lower GPU and memory clocks.
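As a rough back-of-the-envelope way to read that comparison, relative shader throughput scales roughly with CUDA cores × clock. The core counts and the 880M clock come from the article and comments here; the desktop GTX 770 clock below is an approximate figure (treat it as an assumption), and memory bandwidth is ignored entirely:

```python
# Back-of-the-envelope only: raw shader rate ~ CUDA cores x clock (MHz).
# Ignores memory bandwidth, thermals, and boost behavior.
gpus = {
    "GTX 880M (mobile)": (1536, 954),   # cores, MHz (quoted in the comments)
    "GTX 770 (desktop)": (1536, 1046),  # approximate base clock, assumption
}

ref_cores, ref_mhz = gpus["GTX 770 (desktop)"]
for name, (cores, mhz) in gpus.items():
    ratio = (cores * mhz) / (ref_cores * ref_mhz)
    print(f"{name}: ~{ratio:.2f}x the raw shader rate of a GTX 770")
```

The lower memory clock on the mobile part means the real-world gap in games will generally be a bit larger than that raw ratio suggests.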
Any news about 8xx desktop parts?
Nope!
In the battery graphics, why do they include the battery's initial (1X) charge/play time in the total, as that is the default play time/battery capacity? Why not just list the actual improvement with the true percentage gain? I smell more marketing fibs to make those numbers appear larger.
Borderlands 2 (52%), Bioshock Infinite (50%), League of Legends (97%)! And what about the obfuscation with the Maxwell and Fermi based SKUs? This makes it appear like a laundry soap ad that only touts the new design for the box, while the soap has not been reformulated/improved at all. Can this savings ability be used in older SKUs with just a driver change?
Maxwell only uses less power because it relies a lot on the CPU to do the work. NVIDIA were very clever with this new GPU because they know most people have fast CPUs these days.
Proof please…
What exactly is the CPU doing that the GPU normally does? You're saying NV's product is total BS here? Heck, the GPU is even streaming video encoded on the GPU – something that normally isn't happening at all while gaming. What extra work in any situation is the CPU getting now with Maxwell? GPUs today are attempting to steal ALL the CPU work they can, not trying to hand off work to the CPU. CUDA, GPGPU, etc. are ways to steal work from the CPU, not give it more work. Even the 30 FPS cap isn't really a trick so to speak, as they are just governing the GPU at a speed that is always playable rather than, say, giving 200 FPS when 30 FPS is all you need. Why run balls out when you have no need for 200 FPS of power? As long as they can keep it from EVER dipping below 30 FPS, I say save the power with a checkbox when desired on battery. As long as I can uncheck it when desired, I'm cool with extra battery life if I want it. It doesn't sound like they are forcing it on you, so I'm cool with it being in there.
The only clever thing NV did was spend money on their CORE products instead of a console. Meaning gpu improvements, drivers that are NOT in phase 3, gsync that IS a product selling NOW, Geforce Experience that works (where AMD’s needs a LOT of help even according to anandtech & others), Ref cards adequately cooled that run at RATED speeds, etc etc…Meanwhile AMD spent on consoles, robbing from gpus, drivers, cpus, freesync (not even a product yet), mantle (beta still), no ARM PHONE/Tablet core until 2015 at least (only seattle for servers this year) etc. I could go on but you should get the point.
You shouldn’t be too surprised or upset when NV/Intel runs away with the races when AMD spent the small wad they did have (ok, really money they didn’t have to begin with, all debt no cash as even if they paid all their cash they’d still owe a few billion+) on things that don’t keep up with Intel/NV. This should be expected when you axe spending on core products for 2+yrs to make 2 console APU’s instead of GPU’s/CPU’s, while the other two guys spent on CORE products ONLY or could at least afford more than core so spending on junk doesn’t hurt other CORE products. Steamroller was supposed to be great, so was hawaii. We got low perf and hot/throttled instead of “competes with Intel” and “even with NV”. Never mind AMD hasn’t even put a Phone/Tablet soc on the roadmap while NV is already on T5 (k1) in another month and Denver IN HOUSE cores later this year. By the time AMD ships AMDSOC1 (whatever it’s called), NV will be on T6 (M1? perhaps with Denver R2 or whatever). Meanwhile AMD chases shrinking markets (low-end, notebooks just lost 21% to chromebooks, and low-end desktops are next with Denver etc coming, expect desktops to drop another 20% in the next year or two), and neglects the only GROWING market in mobile and even that is slowing growth now. But at least it’s a HUGE pie to fight over and their CORE GPU’s are taking over mobile so you should have went there big time like NV years ago.
As the GPU is becoming king on mobile INSTEAD of the modem (I won’t need to get to my cap even faster than now going forward, 2GB caps make a faster modem pointless, watch as Qcom stock price drops this year, like rimm losing enterprise email as important once everyone had it), AMD should have been there to reap the benefits of years of GPU/game/driver development just like NV. Instead the winners/losers will be decided before AMD even launches their first mobile soc. Bummer. NV has just played a delay game waiting for desktop gpus to meet mobile lines, and we’re about to finally get the REAL Tegra’s with GPU’s specifically made for MOBILE with in house CPU (denver A57/64bit) to go with them. NV should have no trouble matching Qcom power in a phone this year or early next year. At that point its a GPU race not a modem/power race and NV/AMD has the best chance of winning that war as all their desktop GPU’s have been optimized to death already in ALL game engines/drivers.
But alas, AMD decided to play a CONSOLE game which is dying (see GDC 2013, devs went massively to mobile last year and will continue next week at GDC 2014, we follow the game devs/great games not the hardware) instead of MOBILE which is growing. AMD going after consoles which are losing money on all fronts (hardware/software sales figures being downed right and left, xbox360/ps3 made MS/Sony a few billion in losses each), while just Samsung/Apple racked up ~$40 Billion each on mobile and Qcom another $6.5B. Is it easier to make a billion on consoles or steal a billion from Apple/Samsung/Qcom? Even with 4 xmas launch quarters in a row, AMD wouldn’t crack 500mil in profits (and CPU losses coupled with GF fines+Debt interest will eat all that). Wisely NV went after the $86 Billion instead of ~$7B in console losses (did anyone make money involved in xbox360/ps3 hardware parts? MS/Sony lost their butts overall). Even with a massive console xmas AMD is still negative on earnings for the TTM and they won’t have an xmas console sales quarter for the next 3 quarters. Xmas doesn’t last all year, just for 1Q at xmas and consoles only launch ONCE every 7yrs or so, so the main party is already over largely which barely got them past break even for a SINGLE quarter of the year and an overall yearly LOSS again.
Where is AMD growth about to come from? Not consoles, not mobile, and hawaii did ZERO damage to NV gpu sales (NV sales were UP) and did NOTHING to further Mantle as not many gamers bought them right? Miners supposedly bought the few that sold. IF they sold a LOT why didn’t they make more than 120mil on the gpu side (which is basically 10mil console chips x $12 each, so nothing made in GPUS)? Mantle is dead next week at GDC 2014 with OpenGL/DirectX speeches. AMD needs to fire management and get people who understand how to make money by chasing GROWTH markets, not declining markets.
Battery boost, I hope no one says that to me drunk
“Performance on the 860M should be close the same with both ASICs”
Sounds like there need to be GTX 860M vs GTX 860M benchmarks to see which GPU (Kepler or Maxwell) comes out on top for gaming as well as rendering workloads! I would rather have more Maxwell cores and forget about the power savings. There needs to be a direct comparison between a Kepler core and a Maxwell core for these mobile parts, provided by NVIDIA and supported with independent benchmarks (gaming and rendering). This rebadging madness has got to stop, and you just know most retailers are not going to know the difference, especially the slackjaws at most local stores. Screw thin and light, give me a regular laptop and more GPU cores – this is not tablets. I'll wait for a Maxwell laptop SKU with more than 1600 cores.
I read the headline and was getting good and worked up over having just invested Serious Money into my Asus gaming laptop with a 780M.
Then I looked at the CUDA counts and clock speeds and felt a lot better.
So, xoticpc has just updated their Sager/Clevo gaming laptops with the 800 series GPUs (model NP9377-S). My questions are the following: how does a single 880M compare to an SLI 870M configuration? It's only $50 more for the SLI configuration. With the 870M's slightly fewer cores (1536 vs 1344) and almost the same clock speed (954 vs. 941), it seems that an SLI config would be a significant gain over a single 880M – especially for only $50! And it's an extra $600 for SLI 880M. So, my next question is how much performance gain would an SLI 880M have over an SLI 870M – especially for a net increase of $550! Any benchmarks on this yet?
Also, xoticpc lists the 880M and 870M with 8 GB and 6 GB frame buffers – freeking 16 GB and 12 GB in SLI, respectively!! How big of a difference would this seemingly huge amount of VRAM make? Obviously, I plan on gaming on this machine. I like to heavily mod games like Skyrim and play them in stereoscopic 3D Vision – which was unplayable on my last ASUS G74 laptop (it had a single 560M with 3 GB VRAM and a 2.0 GHz i7 CPU). Also, I would like to take advantage of G-Sync later on down the line when the 2550 x 1440 120 Hz monitors come out – preferably in 3D Vision! Is it possible to drive such a monitor with this system? Lastly, I want this system to drive the Oculus Rift with as minimal latency as possible when it comes out later this year (hopefully).
Finally, desktops are just not an option for me as I spend half my life at work on the road in hotels, and this away time is usually the only time I have to game, as when I'm home it's family and honey-do list time! So, please, no "desktops are better than laptops for the money" comments 🙂
SLI memory is not added – both cards have separate frame buffers. That 8 and 6 GB of GPU memory has to be an error – most probably the cards are 4 GB and 3 GB (880M and 870M respectively), but xoticpc is making the same mistake you are, namely summing up the frame buffers for the SLI configuration.
870M SLI will be faster than a single 880M if SLI works correctly. Some games only get working SLI months after launch, or scale poorly or not at all due to engine limitations. I would stick to a single 870M, which should be plenty anyway.
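For a rough sense of the raw numbers quoted above (core counts and clocks are from the earlier comment; SLI scaling is a guessed parameter, and memory bandwidth, drivers and per-game behavior are ignored), a quick cores × clock comparison looks like this:

```python
# Rough raw-throughput comparison using the core counts/clocks quoted above.
# SLI scaling is an assumption; real scaling varies per game (from none to ~90%).
single_880m = 1536 * 954   # cores x MHz
single_870m = 1344 * 941
sli_scaling = 0.8          # assumed average benefit from the second card

sli_870m = single_870m * (1 + sli_scaling)
print(f"Single 880M vs single 870M: {single_880m / single_870m:.2f}x")
print(f"870M SLI (at {sli_scaling:.0%} scaling) vs single 880M: "
      f"{sli_870m / single_880m:.2f}x")
```

In other words, a single 880M is only modestly faster than a single 870M on paper, so even imperfect SLI scaling tips the raw math toward the 870M pair – with all the usual caveats about games that don't scale at all.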
Thanks for the response! So, the whole “double the frame buffer in SLI” is the wrong way to think about it! Didn’t know that…but there is still a significant performance increase for SLI if I want to game in 2550 x 1440, right? Would an SLI config be necessary then? Will I be able to do that with an 870m SLI or would I need an 880m SLI? What if I also want to do 3d vision on a 2k monitor – now are we talking about needing 880m SLI?
On a side note, anyone know of any upcoming g sync monitor high performance laptops – preferably 2550×1440?
G-sync will not work at the same time as 3d, and as for the SLI, if you can afford the 870 SLI I would get it. If not then just get one 870m
The 870m has huge improvements over the previous 770 series (between 20-50% in most uses).
And as for the cards having 6 and 8 GB of ram, I do not believe this is an error. There will be two versions of each card, the solder-in card (non-replaceable) and the MXM-slot card (replaceable)
From what I have seen, the MXM cards will have 2x the ram of the soldered in cards. So many of the MSI/Gigabyte/Asus laptops have the soldered-in variety, while the sager models and the high end MSI laptops have the MXM cards.
The 870m is slightly less powerful than the previous generation 780m, according to benchmarks, so a single one should be able to max out most if not all modern games.
However, that other poster is correct, sli configurations do not double the ram, they merely clone the contents back and forth so that they both have the same information at any given time.
Hope this helps.
Um, I have one question to ask… my laptop has an 820M graphics card and I'm getting very terrible spikes, lag and FPS drops when gaming in games like Dota 2 and League of Legends. Is the graphics card I have good enough for these games, or are there any suggestions for me to stabilize or increase the FPS and performance of the games??? I am really in need of help… thanks a lot.
Are you keeping your laptop cooled? When I game, my frame rate shoots through the floor when the laptop gets too hot. Invest in a cooling fan to place it on while you game.
Can anybody please tell me if that laptop is any good for gaming..
http://www.microsoftstore.com/store/msusa/en_US/pdp/HP-Envy-15-k012nr-Signature-Edition-Laptop/productID.306324300