[H]ard|OCP sat down with the new DX12-based Gears of War 4 to test the performance of the game on a variety of cards, with a focus on the effect of enabling Async Compute. In their testing they found no reason for Async Compute to be disabled, as it did not hurt the performance of any card. On the other hand, NVIDIA's offerings did not benefit in any meaningful way from the feature, and while AMD's cards certainly did, it was not enough to let you run everything at maximum on an RX 480. Overall the game was no challenge for any of the cards except perhaps the RX 460 and the GTX 1050 Ti. When playing at 4K resolution they saw memory usage in excess of 6GB, making the GTX 1080 the card for those who want to play at the highest graphical settings. Get more details and benchmarks in their full review.
"We take Gears of War 4, a new Windows 10 only game supporting DX12 natively and compare performance with seven video cards. We will find out which one provides the best experience at 4K, 1440p, and 1080p resolutions, and see how these compare to each other. We will also look specifically at the Async Compute feature."
Here is some more Tech News from around the web:
- Total War: WARHAMMER NVIDIA Linux Benchmarks @ Phoronix
- Total War: Warhammer’s Wood Elves like to shoot and run @ Rock, Paper, SHOTGUN
- Deus Ex: Mankind Divided DX12 Performance @ [H]ard|OCP
- Star Wars Battlefront’s Rogue One DLC on December 6th @ Rock, Paper, SHOTGUN
- AMD Radeon RX 470 Hitman Complete promo goes live @ HEXUS
- Shadow Tactics demo offers Commandos-y stealth @ Rock, Paper, SHOTGUN
- AMD & NVIDIA GPU VR Performance – Google Earth VR @ [H]ard|OCP
- Quick Look: Dark Souls III: Ashes of Ariandel @ GiantBomb
- Origin/EA Black Friday Sale
- AI War 2 returns to Kickstarter, smaller and cheaper @ Rock, Paper, SHOTGUN
Wait, am I missing something? UE is still in its DX12 experimental phase, and a tech site is benchmarking it and reporting it as having native DX12 support.
All that website had to do was look at the UE website.
Even you guys reported on it 8 days ago.
Epic Games Releases Unreal Engine 4.14
https://www.pcper.com/news/General-Tech/Epic-Games-Releases-Unreal-Engine-414
Well, Microsoft describes it as "DirectX 12 API, Hardware Feature Level 11" and I would guess they would know a thing or two about the DirectX API.
Your point about native support is not inaccurate; then again, [H] didn't state that in the review.
Yes they did. Their review starts off with the very "supporting DX12 natively" line quoted above.
If you look up hardware feature levels on Microsoft's website:
12.1 & 12.0 require D3D12
11.0 can run on D3D11
Tech sites like these do little to no fact checking.
The Coalition is a Microsoft-owned, AAA developer that only needs to publish on Windows 10 and Xbox One. They consider their fork of Unreal Engine 4 to be fully compatible with DirectX 12. (You may remember Gears of War Ultimate Edition had some birth pains.)
The official engine, from Epic Games, still has DirectX 12 as an experimental technology.
Like Jeremy pointed out, and as Microsoft displays on the web store: "DirectX 12 API, Hardware Feature Level 11".
It may be locked to the DX12 API, but it's not native the way the site wants you to believe. A tech site shouldn't peddle DX12 native support in its reviews when there is clear evidence it isn't, from the publisher down to the engine itself. It dumbs down readers for clicks and does a disservice to the community at large with misinformation.
Hardware feature levels refer to things like conservative rasterization, and that listing also doesn't mean they don't use those features. We'd need to ask the developer.
Besides, we're talking about the difference between the rendering back-end being supported for DirectX 12 or not. That's stuff like how the objects are converted into GPU commands, not whether the GPU will skip partial fragments on triangle edges or whether it supports 16-bit, half-precision floating point values.
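To make that concrete, here is a minimal, generic sketch of what "DirectX 12 API, Hardware Feature Level 11" amounts to at the code level; this is ordinary D3D12 usage, not Gears of War 4's actual code. The game still creates its device through the D3D12 runtime; it just asks for feature level 11_0 as its minimum.

```cpp
// Generic sketch: a DirectX 12 device created with Hardware Feature Level 11_0
// as its minimum, which is how a "DirectX 12 API, Hardware Feature Level 11"
// title can run on older GPUs while still using the D3D12 runtime.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);   // first adapter, for brevity

    // Still the DX12 API; only the minimum feature level is 11_0 instead of 12_x.
    ComPtr<ID3D12Device> device;
    HRESULT hr = D3D12CreateDevice(adapter.Get(),
                                   D3D_FEATURE_LEVEL_11_0,
                                   IID_PPV_ARGS(&device));
    return SUCCEEDED(hr) ? 0 : 1;
}
```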
Well, yes it does, because according to Microsoft, conservative rasterization is an 11.3+ feature; it's not an option in D3D 11.0. The D3D 11.3 runtime is required for it to even be an option.
That's not what I'm talking about. I'm saying that, first, these are individual features. They can be turned on and off, and so they might not have bothered mentioning them in the system recommendations. It could just be something that's either turned on when you flip a setting to high or ultra, or that gives you a performance boost when a compatible GPU is detected.
Second, whether or not something supports DirectX 12 doesn't mean it needs to use all of its features. For instance, if the game doesn't require transparent objects, it doesn't need to use Rasterizer Ordered Views. That doesn't mean it's suddenly no longer native DirectX 12. Its rendering engine could be set up exactly how DX12 intends to receive geometry and materials. In fact, they could have removed Epic's entire RHI interface and replaced it with one that exactly aligns with Xbox One and Windows 10 on DX12, hence "native support," if true.
That said, I don't know what Gears of War 4 actually uses under the hood. We'd need to ask one of their graphics engineers.
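For what it's worth, optional features like conservative rasterization and ROVs are reported per device at runtime, so an engine can probe them and scale settings up or down without changing which API it renders through. A generic D3D12 sketch follows; the helper name is just for illustration, and again this is not the game's code.

```cpp
// Generic sketch: probing optional DX12 features at runtime. These are
// reported per device, separately from the feature level the device was
// created with, so an engine can scale settings without switching APIs.
#include <d3d12.h>

bool ProbeOptionalFeatures(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;

    const bool hasConservativeRaster =
        options.ConservativeRasterizationTier !=
        D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
    const bool hasROVs = options.ROVsSupported != 0;

    // A game could quietly enable an "ultra" path when the GPU supports these,
    // and fall back otherwise, without changing which API it uses.
    return hasConservativeRaster && hasROVs;
}
```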
In which case it's even more important for reviewers of such games and hardware to know what's being called on when doing such reviews with varying hardware and settings.
Different hardware supports different feature levels, and if it's as you suggest, the cards could very well be running different workloads due to the settings and hardware variations.
So what is considered "native" DX12 to you? Graphics-wise, everything that can be done in DX12 can also be done in DX11. Also, according to the GoW4 devs, they use so many DX12 features in the game that it would be impossible for them to port it back to DX11.
“The PC version of Gears of War 4 isn’t just a port, but it’s built ground up for Windows 10, with a development team in Vancouver working exclusively on it, so the developers expect it to be “really really good.”
All the textures in the game were authored in 4k, so they weren’t just uprezzed.
We also hear that, differently from Quantum Break, it wouldn’t be possible to port the game back to DirectX 11, as it’s built from the ground up for DirectX 12. Gears of War 4 is leveraging many of the benefits of the new API, and some of them are part of its core foundations.”
http://www.dualshockers.com/2016/09/11/gears-of-war-4-devs-confident-that-itll-be-really-really-good-on-windows-10-more-info-shared/
The funny thing is: Quantum Break on Steam uses DirectX 11. Remedy wasn't comfortable with DirectX 12, so they made the Steam re-release DX11.
Having played this game (and loved it) on my RX 480 and messed around with the "insane" settings, I think Microsoft might just be trolling the e-peen types. The visual difference is minuscule, but the performance impact of those two settings can be as much as a doubling of the frametimes.
I am not sure we are really getting games truly designed around DX12 yet. There could be a big difference between merely supporting DX12 natively and being highly optimized for, or designed around, DX12.
Their DX12 review of Mankind Divided reported quite different framerates than I am getting on my RX 480 – they concluded "unplayable at 1440p" – my gameplay sits at 45-60 fps at 1440p in DX12 at ultra settings, no AA. I did not test DX11, though.
How about adding multi-GPU?
All the game companies will be adding that DX12 (non-CF/SLI) multi-GPU adapter support to their games, and the Khronos folks had better get their Vulkan (non-CF/SLI) multi-GPU adapter support working fully ASAP. Maybe Nvidia will even be forced to accept that multi-adapter managed in the DX12/Vulkan graphics APIs is the way to go (a rough sketch of what API-managed multi-adapter looks like follows at the end of this comment), and even users of Nvidia's GTX 1060/1050 SKUs should be able to use more than one of those cards to get more gaming GPU processing done.
AMD's async compute is the way to go for more than just games once the games/graphics APIs are tweaked more fully to make use of AMD's GPU hardware resources. Nvidia may currently have a better balance of shaders, ROPs, and TMUs to throw more FPS out there for gaming, but AMD's GPUs have far more SP/DP floating-point resources that game and engine makers can use to take computational stress off the CPU and do pre/post-processing and other work on the GPU. AMD's GPUs are popular with bitcoin miners for just such GPGPU usage, with the RX 480 offering around 5.8 teraflops of SP FP ability. A dual RX 480 setup provides about the same amount of SP FP throughput as the Titan X (Pascal). So those miners are making use of many RX 480s to mine coins using any new bitcoin/altcoin algorithms not yet implemented on any ASIC mining products.
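For reference, here is the back-of-the-envelope math behind that comparison, using the publicly listed shader counts and approximate boost clocks (2304 stream processors at ~1266 MHz for the RX 480, 3584 CUDA cores at ~1531 MHz for the Titan X Pascal); peak FP32 assumes 2 operations per ALU per clock.

```cpp
// Quick peak-FP32 check: TFLOPS = 2 ops/clock * ALU count * clock.
// Clock figures are approximate boost clocks from the public spec sheets.
#include <cstdio>

int main()
{
    const double rx480   = 2.0 * 2304 * 1.266e9 / 1e12;  // ~5.8 TFLOPS
    const double titanXP = 2.0 * 3584 * 1.531e9 / 1e12;  // ~11.0 TFLOPS

    std::printf("RX 480:       %.1f TFLOPS\n", rx480);
    std::printf("2x RX 480:    %.1f TFLOPS\n", 2.0 * rx480);
    std::printf("Titan X (P):  %.1f TFLOPS\n", titanXP);
    return 0;
}
```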
As games and gaming engines become better able to manage AMD's asynchronous compute, much more non-graphics gaming compute will be accelerated on AMD's GCN-based GPUs. Asynchronous compute fully implemented in AMD's hardware will allow more of a mainstream CPU's workload to be assisted by the GPU, for better gaming performance without having to use the more costly high-end CPU SKUs.
So some games will definitely be making use of asynchronous compute to appeal to the larger market of users who do not own the most costly CPU SKUs.
Here is an interesting discussion on the future of the ratios between shaders, ROPs, TMUs, and bandwidth, as well as new ways of rendering to allow for better VR and regular gaming; it also has some posts on console game development and how that affects PC gaming:
“Is pixel fillrate about to become more imporant again?”
https://forum.beyond3d.com/threads/is-pixel-fillrate-about-to-become-more-imporant-again.58288/
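As a rough illustration of what "multi-GPU managed in the DX12/Vulkan APIs" means in practice: under DX12's explicit multi-adapter model the application enumerates the GPUs itself and creates a device per adapter, instead of relying on a CrossFire/SLI driver profile. A generic sketch, not tied to any particular engine, with the helper name chosen purely for illustration:

```cpp
// Sketch of DX12 explicit multi-adapter: the app enumerates GPUs itself and
// creates one device per adapter, rather than relying on CF/SLI driver magic.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);  // the engine decides how to split work
    }
    return devices;
}
```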
Basically, AMD wants GPUs to go multi-threaded while Nvidia stays single-core?
When it comes to multi-GPU, Nvidia usually has more consistent and stable support than AMD, even though AMD cards have better scaling. But in the end you have to look at reality: game developers in general do not have much interest in multi-GPU. We see more and more games able to use more CPU cores, but for GPUs it goes in the opposite direction, with more and more rendering methods becoming less compatible with multi-GPU tech.
So basically, the kitchen is the CPU and the dining table is the GPU?
You need to be clearer about just what sort of multi-GPU you are referring to: Nvidia's driver-based SLI, or multi-GPU as managed by the DX12/Vulkan graphics APIs? Either way, neither AMD's CF nor Nvidia's SLI is what the article is mostly about. It's more about async compute and DX12 games/gaming engines not currently making use of, or taking advantage of, DX12 async compute API calls.
It's talking about DX12 and async compute, and AMD's GCN cards not being affected negatively by async compute being on or off for games that do not make any async compute calls through the DX12 API. Async compute being off or unused by the game does not affect either AMD's or Nvidia's cards much for such titles. So there are no net negative effects, other than the fact that AMD's cards will tend to get more benefit from having async compute turned on and used in newer games/gaming engines that use it to accelerate non-graphics gaming compute on the GPU.
So currently, for the majority of games that are not tuned to make use of async compute under DX12, neither AMD's cards nor Nvidia's are negatively impacted by having async compute turned off or unused. It is entirely possible that there are DX12 titles not making use of async compute at Nvidia's request, so as not to make Nvidia look as bad under async compute relative to AMD's GPUs, which do get more benefit from games that use it. That would be an attempt by Nvidia to "level" the playing field until it can get newer products to market that make better use of async compute.
Remember, it is entirely possible to design a DX12 game and not have it make any of the DX12 API's async compute calls, and if the game's makers do not explicitly use them, then an artificially leveled playing field is being imposed. AMD's cards can and do benefit from async compute in new DX12 titles, and I cannot help but think that Nvidia is able to fund much more online spin to keep readers' minds focused on other issues until it can get newer GPU hardware with better async compute handling to market.
It's not that Nvidia is wrong in pointing out that, for DX12 titles that make no async compute calls, neither Nvidia's nor AMD's cards suffer performance regressions. But that just doesn't answer any questions about Nvidia's cards not benefiting from async compute API calls much, if at all!
I can't take an Nvidia-sponsored game using async seriously.
At least Nvidia's version supports async for both sides. AMD-sponsored DX12 titles only support async for AMD cards. The fact that an Nvidia-sponsored game gives you any async support at all is a bonus.
Except the only thing it does is NOT be optimized for AMD, so Nvidia doesn't look so bad next to AMD gaining anything.
Async on AMD is done on hardware that Nvidia doesn’t have.
So, considering that you can enable async on Nvidia cards, your argument that AMD-sponsored titles don't offer async on Nvidia cards is false. It's just that Nvidia cards lack the hardware to use it. Nvidia is totally free to implement hardware async functions and take advantage of the feature. It's not the same as, for example, hardware PhysX, which was completely locked to Nvidia hardware (it was even prohibited to have a competitor's hardware in your system alongside PhysX – think about it; imagine Samsung forcing its NVMe SSDs to work as SATA drives if, for example, a SanDisk product was in your PC).
On the other hand, the type of async that – for example – Time Spy uses, I have no idea what it actually does. It is probably more like a software implementation, the one Nvidia was promising for Maxwell owners and never delivered but kept to show off as a new feature on Pascal cards, one that doesn't properly take advantage of async hardware. That's why AMD cards get much smaller performance gains in Time Spy than in games, and that's probably why Time Spy is the only software that shows any kind of improvement on Nvidia hardware.
OK. In Time Spy at 1440p the Fury X gains 12.9%, the Nano 11.1%, and the RX 480 8.5%.
https://www.pcper.com/reviews/Graphics-Cards/3DMark-Time-Spy-Looking-DX12-Asynchronous-Compute-Performance
Looking at async gains for the AMD Fury X in Ashes of the Singularity: the Fury X only gains 9.3%, going from 52.9 frames to 57.8 with async on.
http://wccftech.com/nvidia-titan-gtx-1080-max-oc-benchmarks/
I'm not going to waste any more time gathering game results when I know AMD cards won't get many results higher than in Time Spy.
The problem is most AMD users take the DirectX 12 performance gain and lump it in with the async result as well. Usual gains are in the 5-10% range for async alone. If it is coded maximally, the gain can theoretically be 15% on a PC.
The 12.9% in Time Spy makes it look like they have a better async implementation than AMD's version.
And on PhysX: ATI was offered help from Nvidia and turned them down. So blame ATI for you not having it on your cards. I also remember it being offered to AMD for pennies on the dollar as a licensing agreement, but that was also turned down.
https://www.techpowerup.com/64787/radeon-physx-creator-nvidia-offered-to-help-us-expected-more-from-amd
You serious?
AMD got offered help from the author of the hack, not Nvidia. That's been well known and is even explained by the author himself in the story you linked.
No, the DX12 API supports async compute calls into the GPU's hardware for GPUs that support DX12's features, and game makers are free to make use of that ability to issue async compute calls to the GPU (via the GPU's drivers), or to not use it at all.
So all brands of GPUs that support DX12 API calls can be called by the DX12 API through the GPU's close-to-the-metal driver code. It's just that some GPUs only support certain DX12 feature levels in their hardware, with any other features not supported in hardware having to be emulated in software.
So AMD supports async compute fully in its hardware with no software emulation layers required, while Nvidia's hardware has to use some driver redirects to a software-based emulation/abstraction layer for async compute. Nvidia's cards can do some async compute in hardware, but some of the async compute management functionality is not yet fully implemented in Nvidia's Pascal hardware and requires the less efficient, less responsive software emulation layer to handle some of the async compute calls.
That said, Nvidia has improved Pascal's async compute abilities relative to Maxwell, but it is still behind AMD's GCN with respect to having those async compute management functions fully implemented in hardware. It's that lack of full async compute thread execution management and scheduling in Nvidia's hardware that causes Nvidia's GPUs to show little or no improvement from DX12's async compute API features that call into the GPU's drivers to get work done; Nvidia's DX12 drivers have to divert the async compute management calls to a software emulation layer on Pascal-based hardware. I'd expect that by the time Volta comes online there will be more improvement, with async compute managed more fully in the GPU's hardware.
AMD's older GCN hardware, which has had that better async compute management functionality for some time now, has been netting some nice improvements under DX12 and Vulkan for having that management in the GPU's hardware. The gaming software ecosystem and the graphics API ecosystem have undergone a radical change with DX12 and Vulkan, so expect that after some more months of work there will be even more improvements for any GPU hardware that can manage async compute fully in hardware.
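To show what "async compute calls" look like from the application side: DX12 simply lets an engine create a second command queue of type COMPUTE and submit work there alongside the graphics queue; whether that work actually overlaps with graphics is up to the GPU and driver, which is exactly the hardware-versus-emulation point argued above. A generic sketch, not any particular game's code, with the struct and helper names chosen purely for illustration:

```cpp
// Generic DX12 sketch: a dedicated compute queue alongside the graphics queue.
// Submitting to it is the API side of "async compute"; whether the GPU truly
// overlaps it with graphics work is a hardware/driver question.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

struct Queues
{
    ComPtr<ID3D12CommandQueue> graphics;
    ComPtr<ID3D12CommandQueue> compute;
};

bool CreateQueues(ID3D12Device* device, Queues& out)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics + compute + copy

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only

    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&out.graphics))))
        return false;
    if (FAILED(device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&out.compute))))
        return false;

    // The engine records compute command lists (e.g. particle or lighting
    // passes) and calls out.compute->ExecuteCommandLists(...) while the
    // graphics queue is busy, synchronizing the two with ID3D12Fence objects.
    return true;
}
```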
Should say “benefit from the FEATURE” (not future)
In the article I mean.
You're right, polished that up.