NVIDIA's GeForce Game Ready 355.82 WHQL Mad Max and Metal Gear Solid V: The Phantom Pain drivers (inhale, exhale, inhale) are now available for download at their website. Note that Windows 10 drivers are separate from the Windows 7 and Windows 8.x ones, so be sure not to take shortcuts when filling out the “select your driver” form. That, or just use GeForce Experience.
Unlike last week's 355.80 Hotfix, today's driver is fully certified by both NVIDIA and Microsoft (WHQL). According to users on the GeForce Forums, it includes the hotfix changes, although I am still seeing a few users complain about memory issues under SLI. The general consensus seems to be that a number of bugs have been fixed and that driver quality is steadily improving. This is also a “Game Ready” driver for Mad Max and Metal Gear Solid V: The Phantom Pain.
http://www.dsogaming.com/news/oxide-developer-nvidia-was-putting-pressure-on-us-to-disable-certain-settings-in-the-benchmark/
Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that.
That has diddly squat to do with this.
“I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.”
Neither is this… But any company bullying game devs like this needs to stop. IMMEDIATELY!
The worst part is that most (if not all) “big” news websites have absolutely zero mention of this news.
I mean, it couldn’t be more official than from a developer’s mouth that nVidia told them to “fix” a benchmark. Much like GameWorks (probably) does in supported games.
And it’s sad because AMD gets bashed all the time…
http://www.guru3d.com/news-story/nvidia-wanted-oxide-dev-dx12-benchmark-to-disable-certain-settings.html
AMD does a lot of bashing on NVidia too. In fact, they flat-out mislead at times as well. For example, they talked about NVidia “only supporting 30Hz to 144Hz” while saying they themselves supported “9Hz to 240Hz”, which is very, very misleading.
NVidia’s numbers were for real-world hardware and AMD’s were for a specification (AMD could only drop down to 40Hz, so NVidia was superior).
AMD said GSYNC had extra latency and a bunch of other bullet points that made Freesync seem better.
AMD also whines about proprietary this and that but did have their own TressFX, and worse Mantle. Mantle is good right?
AMD is on record (and in video) saying they wanted to retain control of Mantle which is NOT open source. Basically the same exact main complaint they had against NVidia and their “black box” of NV Gameworks.
AMD said NVidia optimized their code for Gameworks in Witcher 3 so that it would run horribly on both AMD and NVidia systems just so NVidia cards would look better. Really? On the same thing they said it was impossible for them to even get support at all due to the black box nature etc… Guess what happened?
Two weeks later AMD put out an update which caused almost EXACTLY the same performance drop for AMD and NVidia. So not only COULD they do it, they could do it quickly and with the same loss so NVidia wasn’t “screwing” them at all.
They originally came out and said Mantle would be released “to everyone” once they got to a certain point but do some reading and you discover that was a LIE. That would have been a nightmare for NVidia trying to plan how to optimize their GPU path.
Remember when AMD paid people to show up at NVidia events to crash them? Remember when AMD made videos with people destroying their NVidia cards?
I’m not a fanboy and really want AMD to succeed but it does make my blood boil at times when people think AMD is the poor underdog getting trampled on not doing anything wrong.
You totally missed the point here mate.
This is not about AMD bashing nVidia or vice-versa.
This is not a debate between the two of them (AMD and nVidia).
This is a matter of a DEVELOPER coming out and saying that nVidia wanted them (Oxide) to alter the benchmark process because it wouldn’t suit them.
And when I talked about AMD getting bashed, I didn’t mean from nVidia or Intel or Samsung or idk, Lamborghini. I meant that AMD gets bashed by the media while nVidia and Intel get nothing.
So, please. Before you “attack” full speed on someone, read exactly what he/she has written 😉
Maxwell cards are not fully DX12 compliant as Nvidia advertised.
AMD’s proprietary async, which is what it was before DX12, isn’t a required part of DX12. I would point out that this is evidence of why AMD didn’t release the source for Mantle for so long.
Butthurt Much, green with envy team! AMD’s ACEs are in the house and trampled all over your irrational dreams! Now get to work on some ACEs of your own, before things get too much out of Nvidia’s bank account and the greenbacks become sparse! Vulkan will like those ACEs as well, for some in the hardware acceleration of gaming and compute.
I think you need to read this article to be better informed about why and when Mantle and ACEs were released, and not just throw out silly accusations:
http://thegametechnician.com/2015/08/31/analysis-amds-long-game-realization/
It comes down to MS not wanting the superior PC platform competing with their consoles when they launched. AMD knew that a low-level API would benefit their hardware more, and more importantly actually benefit game devs and gamers more. So they forced the issue by coming out with a custom low-level API.
Yea, AMD developed and implemented their own version of asynchronous compute engines, just as Nvidia will do their own version as well (I’d suspect that their next line-up of GPUs will most likely have it).
ACEs were all part of AMD’s/others’ involvement with HSA and doing more on the GPU’s cores besides graphics, and even gaming has benefited from the ability to offload workloads asynchronously to the ACEs.
Just wait until Arctic Islands and even more computations will be able to be done on those ACEs without the need of any CPU intervention. The Zen Based APUs will be able to leverage the ACEs to do even more workloads, and specialized gaming workloads on the newer GCN. Those single core CPU IPCs will become even less important for gaming workloads as more direct computing is done on the ACEs.
Say hello to your new HSA APU supercomputing future, on die, and even on the interposer with lots of GPU/ACE/HBM number crunching and the CPU remanded to a backup position. Those ACEs are going to be getting more of the CPU-style instruction sets and logic, to better accelerate gaming and compute!
I stopped letting GeForce Experience check for drivers and updates of itself when it started popping up messages over my games to let me know something was ready.
Microsoft and Nvidia – DO NOT SCREW WITH MY GAMING EXPERIENCE!! Crap popping up on top of my games isn’t “convenience” it’s “annoyance”.
Heyyo, you do realize GeForce Experience is optional, right? You don’t need to install it, and you can easily uninstall just that and keep the driver.
Same with the driver checking, you can turn that off in the Geforce Experience settings…
To prevent it from being installed? Just click custom install in the future.
Custom install is for pros only.
Will pcper be covering the nvidia DX12 debacle?
What debacle? There is none to cover over 1 minor DX12 game that is in Alpha stages.
Same reason they didn’t cover the fact SLI wasn’t working for Windows 10.
Yip, media cover up. No one wants to P*ss off Nvidia.
It should be headlines, not left to the few who actually comment online.
It worked fine with my GTX 680 SLI system and fine with my GTX 980, both on Windows 10.
What wasn’t working?
I may have some fine details wrong, but apparently for an awful lot of users (but not all), with SLI in Windows 10, memory leaks and massive pagefile usage caused severe stuttering and lag in 3d games, often to the point of being completely unusable.
You mean this?
Nice link. Huge analysis, with pictures and stuff. Not to mention that it is about the fix, not an article that rushes to reveal the problem. In fact the author says that it was looking like something serious, but guess what. No article about it.
Any link to the article about async compute? But that’s something you can’t possibly fix in future drivers.
Any news about Fermi and DX12? It seems that everyone has forgotten about that.
I think Nvidia said that they will release DX12 support for Fermi when DX12 games officially come out this year.
Nvidia was saying that Fermi would get DX12 support when Windows 10 was officially announced. A few months ago everyone was considering Fermi support to be warranted.
The announcement came and Nvidia changed its original statement, saying that Fermi cards will be supported later this year. I wonder if at 1/1/2016 we will have support, another delay, or an announcement saying that those plans were abandoned.
It is an alpha game on an already established game engine. The main issue is that it is using a technology that is a major part of DX12, that is able to produce a massive performance boost.
Demos and various example apps created by some users on some of the overclocking forums show Nvidia hardware being able to accept and process the instructions, but being unable to deliver a consistent response time.
I don’t know who’s greener you or Kermit the frog
Are you really that naive? Asynchronous shaders seem to be a pretty major part of DX12, and are clearly a huge advantage.
It’s not so much about the game itself as it is about nVidia telling them to disable certain DX12 features to make their products look like they perform far better than the competition.
Nvidia makes marketing claims for months about their 900-series being “fully DX12 compliant” and supporting asynchronous compute.
Nvidia driver clearly shows game dev that asynchronous compute is supported.
Game dev tries to use asynchronous compute on Nvidia cards and it doesn’t work.
Game dev contacts Nvidia to find out what’s going wrong, Nvidia tries to strongarm game dev into disabling asynchronous compute entirely.
Game dev refuses, instead disables asynchronous compute for Nvidia cards because it doesn’t work.
Nvidia claims their less-than-stellar DX12 performance in game dev’s game is due to an MSAA bug.
Game dev shows there’s no MSAA bug and says that the problem is that Nvidia doesn’t support asynchronous compute the way they – and their driver – claim they do.
…..
Replace “Nvidia” with “AMD” and you would be all over these comment threads having a cat over it, talking about how AMD lied.
But it’s Nvidia, so they get a pass. Funny how that works.
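To make the “disables asynchronous compute for Nvidia cards” step concrete, here is a hypothetical sketch (my own illustration, not Oxide’s engine code) of the kind of per-vendor fallback a developer could use: query the adapter’s PCI vendor ID through DXGI and simply route the “async” work down the ordinary graphics queue on hardware where the separate compute path turns out to be slower. The decision rule itself is an assumption taken from the quotes in this thread.

```cpp
// Hypothetical per-vendor fallback (illustration only, not Oxide's code):
// query the adapter's PCI vendor ID via DXGI and decide whether to use a
// separate compute queue or just push everything through the graphics queue.
#include <dxgi.h>

bool ShouldUseAsyncComputeQueue(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);

    const UINT kVendorNvidia = 0x10DE;  // PCI vendor ID for NVIDIA

    // Assumption for illustration, based on the Oxide quote further down
    // ("we disabled it at the request of Nvidia, as it was much slower"):
    // skip the separate compute path on NVIDIA hardware, use it elsewhere.
    return desc.VendorId != kVendorNvidia;
}
```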
I’ll bet a nice shiny quarter (US) that it comes up in the podcast tomorrow.
I looked at the reddit threads on the “DX12 debacle” topic. Yep, it immediately reminded me why I don’t read reddit. So many people jumping to conclusions based on an alpha of a game. Then they completely swear off a product based off of the little information available. Oh the internet.
In one of the PC Perspective podcasts they mentioned the mixed support of DX12 feature set from Nvidia, AMD, intel and how Microsoft needed to clarify what DX12 support actually means. It’s interesting seeing the public’s (very negative) reaction to the subsets of “DX12 support”. This is precisely why the public message avoided diving into the fine details of DX12.
The smart thing to do is wait for additional DX12 games to be released before making a GPU purchase if you’re worried about DX12 support.
Maybe the smart thing to do, but many people already bought their GPUs with “DirectX 12” on the box, setting up expectations upon purchase.
What is the expectation here? This situation with Ashes of the Singularity seems to just point to Nvidia’s DX11 drivers being very good, not necessarily any significant deficiency in DX12. I doubt Nvidia will be able to increase DX12 performance much with drivers since they may be hitting the limits of the actual compute hardware available. Nvidia customers paid a premium price, and they get significantly better performance (mostly) under DX11. DX11 games will be around for a while. With DX12 games, AMD customers may get a big boost that Nvidia customers may not get. So what. Nvidia parts come very close to AMD parts in Ashes of the Singularity anyway.
The expectation is that the 900-series card that they just bought is 100% fully compliant with DX12, as Nvidia has been claiming for months (and, in fact, has been making that claim alongside its claims that AMD isn’t). This is, in fact, untrue, but many of their customers believed it was true when they made their purchasing decision.
The problem is that when actual games come out, we will likely end up with 2 types of game. One where the game engine can use asynchronous compute, but switches to a non-asynchronous-compute path when Nvidia hardware is detected, thus some DX12 improvement but not as much as what AMD hardware will have.
Another outcome may be a set of games where the developers use a specific code path regardless of the hardware, at which point, the nvidia hardware will get a large performance drop.
Another issue is that since most PC games will be console ports, and consoles are making full use of asynchronous compute, we may end up with a situation where the vast majority of new games will be a port that will make heavy use of asynchronous compute, with no special optimizations to handle videocards that are essentially emulating the asynchronous compute.
This is the simplest explanation I have found so far of the issue and of why Nvidia cards suffer greatly when tasks become increasingly more complex (e.g., a game wanting to use asynchronous compute and graphics at the same time).
from reddit (SilverforceG):
“Think of traffic flow moving from A->B.
NV GPUs: Has 1 road, with 1 lane for Cars (Graphics) and 32 lanes for Trucks (Compute).
But it cannot have both Cars and Trucks on the road at the same time. If the road is being used by Cars, Trucks have to wait in queue until all the Cars are cleared, then they can enter. This is the context switch that programmers refer to. It has a performance penalty.
AMD GCN GPUs: Has 1 Road (CP; Command Processor) with 1 lane for Cars & Trucks. Has an EXTRA 8 Roads (ACEs; Asynchronous Compute Engines) with 8 lanes each (64 total) for Trucks only.
So Cars and Truck can move freely, at the same time towards their destination, in parallel, asynchronously, Trucks through the ACEs, Cars through the CP. There is no context switch required, if they can operate in this mode.
NV’s design is good for DX11, because DX11 can ONLY use 1 Road, period. GCN’s ACEs are doing nothing in DX11, the extra roads are inaccessible/closed. DX12 opens all the roads.”
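In API terms, the “roads” above map onto D3D12 command queues. Here is a minimal D3D12 sketch (my own illustration, not Oxide’s or the reddit poster’s code) of creating a compute queue next to the graphics (“direct”) queue and submitting to both; the API lets any DX12 part accept this, and the argument above is about whether the hardware actually runs the two queues concurrently or serializes them with a context switch. Fence synchronization for cases where graphics consumes compute output is omitted for brevity.

```cpp
// Minimal D3D12 sketch: one graphics ("direct") queue and one compute-only queue,
// so compute command lists can be submitted alongside rendering work each frame.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The "one road for cars": graphics plus everything else.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // The "extra roads for trucks": compute-only work submitted independently.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}

// Per frame, work goes to both queues; whether they actually overlap on the GPU
// is up to the hardware and driver, which is the whole debate in this thread.
void SubmitFrame(ID3D12CommandQueue* graphicsQueue, ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* gfxList,        // recorded on a DIRECT list
                 ID3D12CommandList* computeList)    // recorded on a COMPUTE list
{
    ID3D12CommandList* cmp[] = { computeList };
    ID3D12CommandList* gfx[] = { gfxList };
    computeQueue->ExecuteCommandLists(1, cmp);   // e.g. particle/physics dispatches
    graphicsQueue->ExecuteCommandLists(1, gfx);  // rendering for the same frame
}
```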
From what I have read Maxwell is capable of Async compute (and Async Shaders), and is actually faster when it can stay within its work order limit (1+31 queues).
The GTX 980 Ti is twice as fast as the Fury X, but only when it is under 31 simultaneous command lists.
The GTX 980 Ti performed roughly equal to the Fury X at up to 128 command lists.
This is why we need to wait for more games to be released before we jump to conclusions.
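To show what a “1+31 queue” limit would mean in practice, here is a toy model (my own sketch, not the benchmark being discussed): if a GPU can keep at most a fixed number of compute command lists in flight, total latency steps up each time that depth is exceeded. The depths of 31 and 64 and the 10 ms per-batch time are assumptions taken from this discussion, not measurements.

```cpp
// Toy model: latency of N compute command lists on a GPU that can keep at most
// `queueDepth` of them in flight at once. Numbers are illustrative assumptions.
#include <cstdio>

double ModelLatencyMs(int numLists, int queueDepth, double kernelMs)
{
    // How many serialized "waves" are needed to drain all the lists.
    int waves = (numLists + queueDepth - 1) / queueDepth;
    return waves * kernelMs;
}

int main()
{
    const double kernelMs = 10.0;  // assumed per-wave kernel time
    for (int n = 16; n <= 128; n += 16) {
        std::printf("%3d lists: depth 31 -> %5.1f ms, depth 64 -> %5.1f ms\n",
                    n, ModelLatencyMs(n, 31, kernelMs), ModelLatencyMs(n, 64, kernelMs));
    }
    return 0;
}
```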
Where is the “twice as fast when it is under 31 command lists” claim from? Do you have a link to a test which backs up this assertion? With DX11, Nvidia is a lot faster, but DX11 doesn’t use asynchronous compute at all, so the 31 vs. 128 queue difference does not come into play. It is just the one graphics context. To test whether the number of compute queues is a limiting factor, you would need a DX12 benchmark which uses varying numbers of queues to complete the same task. It may be the case that the workload is not easily split/combined to limit the number of simultaneous command lists, so testing this may still be irrelevant.
Anyway, when it comes down to raw compute power, even a 390x is actually closer to a 980 Ti than it is to a 980.
Stats from Wikipedia.
Shader Processors : Texture mapping units : Render output units
980 Ti 2816:176:96
390x 2816:176:64
980 2048:128:64
GFLOPS single/double precision
980 Ti 5632/176
390x 5914/739
980 4612/144
The max theoretical flops rating is actually higher for a 390x than for a 980 Ti. The double precision is significantly higher also (1/8 single vs. 1/32), but this is probably irrelevant.
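For reference, those single-precision figures fall straight out of shader count × 2 FLOPs per clock (one fused multiply-add), using reference clocks of roughly 1000 MHz (980 Ti), 1050 MHz (390x) and 1126 MHz (980); double precision is then 1/32 of that on Maxwell and 1/8 on the 390x.

```latex
% Worked single-precision GFLOPS from the specs above (2 FLOPs per shader per clock):
\begin{align*}
\text{GFLOPS} &= \text{shaders} \times 2 \times \text{clock (GHz)} \\
\text{980 Ti:}\quad 2816 \times 2 \times 1.000 &\approx 5632 \\
\text{390x:}\quad 2816 \times 2 \times 1.050 &\approx 5914 \\
\text{980:}\quad 2048 \times 2 \times 1.126 &\approx 4612
\end{align*}
```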
I would agree that this is one game engine, and we don’t know yet how representative it is. Nvidia performance in Ashes of the Singularity isn’t really that bad, as you indicated. It is close to AMD parts mostly. I suspect it will not increase that significantly with Nvidia driver optimizations though. It may be the case that there just isn’t enough compute resources available to push the performance much higher; they are already coming close to matching AMD with less hardware.
https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/
I am very suspicious of that result and that supposed benchmark. Going about saying that Nvidia hardware is twice as fast without clarification is just flame bait. The benchmark in question seems to be something that a programmer who is inexperienced with DX12 put together and what it really means is highly debatable. Several issues have already been brought up in the thread; I haven’t read the whole thing yet though.
This quote is in the link:
“An Oxide developer said:
‘AFAIK, Maxwell doesn’t support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it then to not.'”
If this is real, then Nvidia may have a serious issue with asynchronous compute. If it is an issue with context switch between graphics and compute, then this may not be fixable in the driver. It depends on how the context switch is handled.
There is a lot of unknowns here. A major one is how many asynchronous shader commands are going to be common. Is it usually going to be above 31 simultaneously? This seems entirely possible. It would be interesting to know how many this game is actually throwing at the hardware.
We are not going to have these unknowns resolved for quite a while. Anyway, with how much market share Nvidia has, even if there are such problems, developers will have to work around them as they seem to have done with this game. Although, the market share numbers are a bit misleading when it comes to game development. AMD doesn’t have that much of the PC market, but both major consoles are based on GCN parts.
If there is a real issue, hopefully Nvidia has it fixed in the next generation part. Given the lag time for designing a gpu, if it is a problem and not fixed now, then it is probably too late.
Even if Maxwell does have very real performance limitations in AotS due to Async compute differences, this doesn’t indicate its performance in other titles. AotS is a massive scale RTS, which should be the extreme example of DX12 performance improvements. I feel like a modern FPS for example with relative few units present at any time wouldn’t exhibit the same bottlenecks in performance. We really just have to wait for more games as you say.
It can, but instead of doing the async-related stuff and graphics at the same time, it has to rapidly switch between them. It is like a factory with 32 workers and 2 assembly lines. When assembly line 1 is in use, worker 1 does his or her job while the other 31 workers sit around and do nothing. When assembly line 2 does something, the 31 workers move to the second assembly line while worker 1 sits around doing nothing.
The root of the issue is that the card can only do one or the other, not both tasks at the same time, and if a complex task is given to the card (like a game which requires both), then the card has to waste time constantly switching between the 2 task types, and that switching carries a massive performance impact.
This also counts as not supporting the DX12 feature, because it is not doing what is required of it. To benefit from it, the graphics and compute work has to be completed in parallel, whereas the card can only do one at a time.
If this were done on the CPU side, you would have a 32 core CPU where each core only ran at a 1/32nd duty cycle, and each took turns executing instructions from a single thread.
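A rough back-of-the-envelope version of that analogy (illustrative numbers only, not measurements of any real GPU): a part that has to context-switch pays for the graphics work, the compute work, and every switch, while a part that truly overlaps them pays roughly the longer of the two.

```cpp
// Toy frame-time model of context switching vs. true overlap.
// All values are assumed for illustration; none are measured.
#include <algorithm>
#include <cstdio>

int main()
{
    const double gfxMs = 10.0;       // assumed graphics work per frame
    const double computeMs = 4.0;    // assumed compute work per frame
    const double switchMs = 0.5;     // assumed cost of one graphics<->compute switch
    const int switchesPerFrame = 8;  // assumed number of times the queues alternate

    double serialized = gfxMs + computeMs + switchesPerFrame * switchMs;
    double overlapped = std::max(gfxMs, computeMs);

    std::printf("serialized (context switching): %.1f ms/frame\n", serialized);
    std::printf("overlapped (true async compute): %.1f ms/frame\n", overlapped);
    return 0;
}
```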
If it has to do a context switch between graphics and compute, then some people would describe this as not having asynchronous compute. I would wonder what exactly you are getting under DX12 with AotS since there seems to be quotes from Oxide developers indicating that they had to turn off asynchronous compute for Nvidia GPUs. Is it essentially the DX11 code path on Nvidia hardware?
Memory leak is still not fixed, though I have checked only about 6 games. Black Ops III wouldn’t run for more than 3 minutes (with the very latest driver). Had to drop to 1 card and 1440. I appreciate this is a beta.
Isn’t this the third time sli windows 10 has been “fixed”.
Strange the media have been so quiet about this, with people with 4000 euro 4k rigs sitting idle. I am lucky in that I have a few different setups.
It’s strange that it’s the same issue all the time, and honestly I don’t ever remember getting this message even with non-SLI games in the past, ever.
Wow, great timing. I was literally just coming here to find the article about the drivers so I could go download them, because of issues with OBS-MP performance.. thought “whatever, I don’t care if they’re a hotfix, I’ll risk it”.. only to find on the front page that it’s released, now. Hah.