We knew that NVIDIA had an impending driver update promising DirectX 11 performance improvements. Launched today, 337.50 claims significant performance increases over the previous version, 335.23. The surprise, however, is GeForce Experience 2.0. This version allows both ShadowPlay and GameStream to operate on notebooks. It also allows ShadowPlay to record, and apparently stream to Twitch, your Windows desktop (but not on notebooks). Finally, it enables Battery Boost, discussed previously.
Personally, I find desktop streaming to be the headlining feature, although I rarely use laptops (and even less for gaming). It is especially useful for OpenGL titles, for games which run in windowed mode, and for the occasional screencast without paying for Camtasia or tinkering with CamStudio. If I were to make a critique, and of course I will, I would like the option to select which monitor gets recorded. As far as I can tell, the current behavior records only the primary monitor.
I should also mention that, in my testing, "shadow recording" is not supported unless you are recording a fullscreen game. I'm guessing that NVIDIA believes its users would prefer their desktops not be recorded until capture is manually started, and likewise stopped. It seems like it must have been a conscious decision. It does limit the feature's usefulness for OpenGL and windowed games, however.
This driver also introduces GameStream for devices outside of your home, discussed in the SHIELD update.
This slide shows SLI improvements, driver to driver, for the GTX 770 and the GTX 780 Ti.
As for the performance boost, NVIDIA claims up to 64% faster performance in single-GPU configurations and up to 71% faster in SLI. It will obviously vary on a game-by-game and GPU-by-GPU basis. I do not have any benchmarks to share beyond a few examples provided by NVIDIA. That said, it is a free driver: if you have a GeForce GPU, download it. It does complicate matters if you are deciding between AMD and NVIDIA, however.
Are you going to do your own testing of the new driver? Apparently it does not live up to those fantastic claims from Nvidia…
Me, personally? No. It will be the driver Ryan uses in GPU reviews, however, so you should see how it performs against other parts. I don't know if anyone has a direct driver-to-driver comparison planned, though.
Looking at a couple of results on AnandTech and Tom's Hardware, it looks like more slides than performance increase, with some exceptions of course, like Rome II. But wasn't it always like this with any new driver release? Minimal gains in most titles and a few cases where the performance increase was really high.
As well as tom’s
You said Tom's Hardware too, my bad.
Yeah it does, AMD troll
Gotta love NVIDIA's marketing graphs 😀 where 100% bumps translate into 4 fps in a $3000 rig, 2 fps in a $1000 rig, and 1 fps in a $600 rig.
The other day I was playing The Witcher 1 and 2, and it was funny to me to see that The Witcher 2 ran faster than The Witcher 1. So I told myself: what if they tuned today's drivers and hardware for The Witcher 1? It would run way faster. Guess NVIDIA had the same idea: optimize very old games to show more gains…
The Star Swarm result is interesting, though. I would like to see a Mantle R9 290X versus a DX11 780 Ti on three different rigs: low, mid, and high end.
NVIDIA's new wonder driver! With 150% more marketing to combat the AMD Mantle hype.
NVIDIA did a great job with this driver, but the way it was presented to consumers is definitely a stretch.
The weird part of all this: NVIDIA promotes a reduction in CPU overhead, then benchmarks with a 3970X. If you want to show a CPU overhead reduction, why not benchmark with an i3? Does that make sense?
This driver is just like any other driver: 2-5 fps gains.
The plus that needs to be tested is Star Swarm.
I also see good optimization in SLI, where NVIDIA is anticipating AMD's Mantle CrossFire release after the R9 295X2.
Now that the "endless" die shrinks are about to become a thing of the past before too long, the big GPU players will have to focus on driver/API/software improvements to get more out of their products. 14nm is possible, and down to 5nm may be possible, but the costs are going to be too much, even for Intel-sized wallets. So in order to make up for the slowdown in process node shrinks, improving the graphics drivers and APIs may be the only way to increase performance, as Moore's Law/observation begins to slow down.

As far as endless die shrinks go, the laws of economics may force an end to process node shrinks before the laws of physics do. Even Intel is putting chip fabs on hold; its newest fab was halted, and the new building has been placed in mothballs and is currently mostly an empty shell. Everyone, Intel included, is shifting around their roadmaps and adjusting their tick-tocks, as the sales are just not there to justify the costs of new process node shrinks below 14nm.

Not supporting OpenGL and other open standards, or not supporting OpenGL to the fullest, will be a big mistake for AMD, Nvidia, Intel, or Microsoft, as the mobile market has shown that OpenGL and open standards are too important to ignore. Steam OS, as well as other OSs that utilize OpenGL as the main graphics API on PCs/laptops, will put an end to the total domination of the proprietary graphics APIs, and force GPU makers to stop short-changing the graphics API/driver side of their products.
wrong one reply
Fun fact: AMD/NVIDIA purposely hold back GPU performance with drivers.
Hard to prove NVIDIA does it, since you've got nothing to compare to. With AMD, since they've got their own API, you can. Thief, for example: a 4670K + 290X (DX11) does 63 fps; the same machine with Mantle does 73 fps average. Funny thing is, my 4770K + GTX 780 (not the Ti model) averaged 69 fps using the same settings.
They all do it; it's called milking the cash cow! The entire technology market is so full of snake oil salesmen and MBAs, whose sole purpose is to figure out ways to slow the pace of technological progress, so that what little improvements remain to be achieved can be metered out over a longer time frame for more profits. You can bet these driver improvements will mostly be only for the newer hardware that must be purchased!
The mobile market, with its much better competition, has produced more rapid innovation (see articles on hardware ray-tracing mobile GPUs) than the desktop market. There needs to be a third player, or more, in the desktop GPU market, and hopefully one of the more successful mobile GPU IP suppliers can be financed to enter the discrete GPU market by someone with Google/Facebook/Apple-style checkbooks. Look at the way the mobile CPU market is going, starting with CPUs like the Apple A7 (Cyclone), which are desktop/laptop-class chips that just so happen to run the ARMv8 ISA. Many custom CPUs are being designed to run the ARMv8 ISA to take advantage of the software ecosystem built up around the ARM ISA market over the last few decades. With OpenGL's popularity in mobile, along with Linux-kernel-based OSs like Android, or the fully Linux-based Steam OS (Debian), it should not be hard to take a mobile GPU, scale it up for the desktop/laptop market, and use the OpenGL software/API ecosystem to enter the discrete GPU market more easily.
Apple for sure, via its purchase of P.A. Semi for its CPU IP and brainpower, is looking to do the same, should it desire an Apple-designed GPU to go with its Apple-designed A8, or A9 and beyond. ARMv8 ISA-based CPUs should start to make inroads into more than just Chromebooks in the next few years, and some of the most innovative new GPU architectures may just come up from the mobile market. Nvidia and AMD must be, in private, cursing OpenGL's existence, but if they do not support OpenGL they will suffer sales pressure in the future. The main problem in the desktop GPU market is the lack of 3 or more players, and more players would cure AMD's/Nvidia's management/marketing of their natural tendency to milk the market at the expense of real improvements. The constant rebranding/rebadging of GPU SKUs in the past few years is enough reason to want more than 2 players in the GPU market; that, and all of this graphics API special-sauce marketing, just proves the point.
Get rekt AMD
Ryan,
Can you check if these drivers change temperatures?