Benchmark Overview
We got to spend some time with the new Fable Legends benchmark courtesy of Microsoft to continue our look at DX12 performance.
When Microsoft approached me a couple of weeks ago with the chance to take an early look at an upcoming performance benchmark built on a DX12 game due for release later this year, I was of course excited. Our adventure into the world of DirectX 12 and performance evaluation started with the 3DMark API Overhead Feature Test back in March and was followed by the release of the Ashes of the Singularity performance test in mid-August. Both of those tests pinpointed one particular aspect of the DX12 API: its ability to improve CPU throughput and efficiency with higher draw call counts, enabling higher frame rates on existing GPUs.
This game and benchmark are beautiful…
Today we dive into the world of Fable Legends, an upcoming free-to-play title set in the world of Albion. The game will be released on the Xbox One and for Windows 10 PCs, and it will require the use of DX12. Though scheduled for release in Q4 of this year, Microsoft and Lionhead Studios gave us early access to a dedicated performance test built on the UE4 engine and the world of Fable Legends. UPDATE: It turns out that the game will have a fall-back DX11 mode that will be enabled if the game detects a GPU incapable of running DX12.
This benchmark focuses more on the GPU side of DirectX 12 – on improved rendering techniques and visual quality rather than on the CPU scaling aspects that made Ashes of the Singularity stand out from other graphics tests we have utilized. Fable Legends is more representative of what we expect to see with the release of AAA games using DX12. Let's dive into the test and our results!
Fable Legends is a gorgeous looking game based on the benchmark we have here in-house, thanks in some part to the modifications that the Lionhead Studios team has made to the UE4 DX12 implementation. The game takes advantage of asynchronous compute shaders, manual resource barrier tracking, and explicit memory management to help achieve maximum performance across a wide range of CPU and GPU hardware.
One of the biggest improvements in DX12 is to CPU efficiency and utilization, though Microsoft believes that Fable Legends takes a more common approach to development. During my briefings with the team I asked Microsoft specifically what its expectations were for CPU versus GPU boundedness with this benchmark and with the game upon final release.
One of the key benefits of DirectX 12 is that it provides benefits to a wide variety of games constructed in different ways. Games such as Ashes were designed to showcase extremely high numbers of objects on the screen (and correspondingly exceedingly high draw calls). These are highly CPU bound and receive large FPS improvement from the massive reduction in CPU overhead and multi-threading, especially in the most demanding parts of the scene and with high-end hardware.
Fable Legends pushes the envelope of what is possible in graphics rendering. It is also particularly representative of most modern AAA titles in that performance typically scales with the power of the GPU. The CPU overhead in these games is typically less of a factor, and, because the rendering in the benchmark is multithreaded, it should scale reasonably well with the number of cores available. On a decent CPU with 4-8 cores @ ~3.5GHz, we expect you to be GPU-bound even on a high-end GPU.
That's interesting: Fable Legends (and, I agree, most popular PC titles) will see more advantage from the GPU feature and performance improvements in DX12 than from the CPU-limited scenarios that Ashes of the Singularity touches on. Because this benchmark would essentially be maxed out in the CPU performance department by a mainstream enthusiast-class processor (even a Core i7-3770K, for example), the main emphasis is on how the GPUs perform.
With this feedback, I decided that rather than run tests on 5+ processors and platforms as we did for Ashes of the Singularity, I would instead focus on the GPU debate, bringing in eight different graphics cards from all price ranges paired with a decently high-end CPU, the Core i7-6700K.
The Fable Legends benchmark is both surprisingly robust in the data it provides and also very limited in the configurability the press was given. The test could only be run in one of three different configurations:
To simplify comparing across hardware classes, we’ve pre-selected three settings tiers (Ultra @ 4K, Ultra @ 1080p, Low @ 720p) for the benchmark. The game itself allows much finer-grained settings adjustment to enable the game to be playable on the largest set of hardware possible.
I wasn't able to run a 2560×1440 test, and I wasn't able to find a way to turn specific features on or off to get finer-grained results on which effects tax GPUs in different ways. I'm sure we'll have more flexibility once the game goes live with a public beta later in the fall.
The test is built to be dead simple and idiot-proof: run a .bat file and then click start. You are then presented with 3939 frames of scenery that look, in a word, stunning. Check out the video of the benchmark below.
The benchmark runs at a fixed time step, so the number of frames does not differ from GPU to GPU or resolution to resolution. Instead, the amount of time the test takes to run changes based on the performance of the system it is running on. Takes me back to the days of the Quake III timedemo… Microsoft claims that writing the test in this manner helps reduce variability, since the game is always rendering the exact same frames and data sets.
Results are provided in both simple and complex ways depending on the amount of detail you want to look at.
At the conclusion of the benchmark you'll be greeted by this screen with a Combined Score that can be directly compared to other graphics cards and systems when run at the same resolution and settings combination. That score is simply the average frame rate multiplied by 100, so this screenshot represents a run that came back at 27.95 average FPS over the entire test.
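To make that relationship concrete, here is a minimal sketch (not the benchmark's own code) of how the Combined Score falls out of the fixed 3939-frame run; the elapsed time is a made-up value chosen to reproduce the 27.95 FPS result above.

```python
# Minimal sketch: deriving average FPS and the Combined Score from a
# fixed-frame-count run. The elapsed time below is hypothetical.
TOTAL_FRAMES = 3939           # the benchmark always renders the same frames
elapsed_seconds = 140.93      # assumed run time for this example

avg_fps = TOTAL_FRAMES / elapsed_seconds   # ~27.95 FPS
combined_score = avg_fps * 100             # ~2795, matching the screenshot math

print(f"Average FPS: {avg_fps:.2f}  ->  Combined Score: {combined_score:.0f}")
```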
The GPU timings breakdown is interesting, though: it provides six buckets of time (averaged in milliseconds across the whole test) that represent the amount of time spent in each category of rendering work; a short sketch after the list shows how the buckets fit together.
- GBuffer Rendering is the time to render the main materials of the scene. (UE4 is a deferred renderer, so all the material properties get rendered out to separate render targets at the start, and then lighting happens in separate passes after that.)
- Dynamic lighting is the cost of all the shadow mapping and direct lights.
- Dynamic GI is the cost of our dynamic LPV-based global illumination (see http://www.lionhead.com/blog/2014/april/17/dynamic-global-illumination-in-fable-legends/). Much of this work runs with multi-engine, which reduces the cost.
- Compute shader simulation and culling is the cost of our foliage physics sim, collision and also per-instance culling, all of which run on the GPU. Again, this work runs asynchronously on supporting hardware.
- Transparency is alpha-blended materials in the scene, which are generally effects. We light these dynamically using Forward Plus rendering.
- Other is untracked GPU work. It represents the difference between the GPU total time and the sum of the tracked categories above.
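As a rough illustration of how those buckets relate, the sketch below reconstructs the "Other" category from a total GPU time and the five tracked buckets; all of the millisecond values are invented for the example.

```python
# Hypothetical average GPU times (milliseconds); every value here is invented.
tracked_ms = {
    "GBuffer Rendering": 7.2,
    "Dynamic Lighting": 9.8,
    "Dynamic GI": 5.1,
    "Compute Shader Sim & Culling": 2.6,
    "Transparency": 3.4,
}
gpu_total_ms = 31.0  # total GPU time per frame reported by the tool

# "Other" is simply whatever GPU work the five tracked categories don't cover.
other_ms = gpu_total_ms - sum(tracked_ms.values())

for name, ms in {**tracked_ms, "Other": other_ms}.items():
    print(f"{name:<30} {ms:5.1f} ms  ({ms / gpu_total_ms:5.1%})")
```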
It will be interesting to see how this breakdown favors NVIDIA or AMD for different workloads.
For those of us who want even more, a CSV is created for each test run that goes into extraordinary detail on timings at the per-frame level. I'm talking crazy detail here.
That's less than HALF the columns of information provided! Everything from frame time to GPU thread time to GPU time spent rendering fog is in here, and it honestly warrants more attention than I am able to spend on it for this story. Once the game is released and we have access to the full version (and hopefully still this kind of benchmark detail), we can dive deeper into how the CPU and GPU threads cooperate.
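As an example of the kind of post-processing that data invites, here is a rough sketch for pulling per-frame times out of such a CSV and computing an average FPS and a 99th-percentile frame time; the column name and file name are assumptions, not the benchmark's actual labels.

```python
import csv

def summarize(csv_path, frame_time_col="FrameTime(ms)"):
    """Summarize per-frame timings from a benchmark CSV.

    frame_time_col is a placeholder; the real file may label the column differently.
    """
    frame_times = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            frame_times.append(float(row[frame_time_col]))

    frame_times.sort()
    n = len(frame_times)
    avg_ms = sum(frame_times) / n
    p99_ms = frame_times[min(n - 1, int(n * 0.99))]  # simple 99th-percentile pick

    return {"frames": n, "avg_fps": 1000.0 / avg_ms, "p99_frame_time_ms": p99_ms}

# Usage with a hypothetical file name:
# print(summarize("FableBenchmark_Run1.csv"))
```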
As I mentioned above, my focus for Fable Legends lies with the GPU performance rather than scaling capability across CPUs and platforms. Also, because this test can ONLY run on DirectX 12, rather than both DX11 and DX12, it's not possible for me to demonstrate vendor to vendor scaling from one API to another.
- Processors
- Intel Core i7-6700K (Skylake, 4-core)
- Graphics Cards
- NVIDIA GeForce GTX 980 Ti
- NVIDIA GeForce GTX 980
- NVIDIA GeForce GTX 960
- NVIDIA GeForce GTX 950
- AMD Radeon R9 Fury X
- AMD Radeon R9 390X
- AMD Radeon R9 380
- AMD Radeon R7 370
- Resolutions
- 1920×1080
- 3840×2160
- Presets
- Ultra
- API
- DX12
- Drivers
- NVIDIA: 355.82
- AMD: 15.201.1151.1002B2
DX12 only? Nope.
That said, it was a good article, thanks for that! I really liked your comments on the “Looking Forward” section. This game not existing in a DX11/DX12 split does lessen the usefulness of it as a means to show the benefits of DX12.
The game engine is UE4, so it does support DX11/DX12.
How many games used UE3 last generation? You can fully expect this to be more representative of future games.
Not anymore… UE4 will not be that successful on PC. It is a mobile-focused engine now.
Nice try with the fud.
hahaha soo much hate and fud for UE4.
Get with the program because there are already a massive amount of games being developed on UE4 with DX12 in mind!
OK. Waiting for your list of high-profile games using it… 🙂
here you go uninformed pleb
https://www.youtube.com/watch?v=Ymbb8EPDLRo
Yet. Almost none of them are high-profile games; most are indie games, and not made with DX12 in mind. Well, UE4 itself is not built with DX12 in mind… if you doubt it, read about it. MS is writing its own DX12 code!! 😛
But thanks for the confirmation that there is nothing interesting to expect here. 🙂
On the other hand I can tell you that the major, real! DX12 titles will use their own engines… and there is a reason for this. 😉
Have fun NOT playing any games then. Me and the majority of gamers will enjoy all these titles.
If you haven’t seen any high-profile games in that video you are fucking blind. Get lost troll, jeez…
https://wiki.unrealengine.com/Category:Games
That’s a better list; to be fair, some of the stuff in that Unreal sizzle video was UE3. Arkham Knight and Killing Floor 2 were in that video and used UE3 that the respective studios added a lot to. There may have been more, but those 2 are 100% not UE4.
That’s what people said about DX11.
True enough, but the big issue is when a game states it cannot run (and Fable is coming from the Microsoft camp) and then with a software tweak can run on earlier cards.
If there were something critical in DX12 keeping the package from running, that would be something, but in reality it is just to force the Windows 10 upgrade and the associated vendor graphics card purchase.
Time to see the flame war. This is going to be fun. Now go guys, feel the hate within you.
Seems like you are the only one so far that wants to flame. Go away or bring something useful to the comments.
yes my child feel the hate flow through you
Good article, sexy graph.
Did the game ever use more than 4GB (VRAM)?
Yea, GPU + CPU load graphs would have been good.
1440p results would have been nice as well, especially for the high-end cards. Was surprised to see no R7 360, given it is GCN 1.1.
Ryan,
Curious how your benchmarks show Nvidia with a lead over AMD on the 980 Ti/Fury X comparison, but https://www.extremetech.com/gaming/214834-fable-legends-amd-and-nvidia-go-head-to-head-in-latest-directx-12-benchmark shows no difference. Not arguing a bias, just curious, because the only differences between systems would be the CPU. You guys used the 6700 and they used some X99 CPU.
How many runs did you guys do? Because they noticed inconsistencies between runs, they averaged a bunch.
Noticed an AMD lead on OC3D as well, so it seems the numbers are a bit mixed.
Scrap that, OC3D got their numbers from an ExtremeTech article. But it still shows AMD ahead, not by much, but ahead.
Each test was run AT LEAST four times and results were not quite averaged, but we were looking for variance as we were warned it might be there. I will say that the variance I saw was very low – in the 100-150 points range at most – and people using CPUs with higher core counts than 4 seemed to be more likely to see major swings.
I am very confident in our results here.
Thanks for the explanation. I just found it interesting that the core count made the difference…The benchmark appears to make my R9 290 a still very relevant mainstream card with great value. Really appreciate your work here!
Given that little variance, why does the combined score in the 4K video show 3786? Was something different as it is higher than all the other results by a few hundred points.
Updated AMD drivers?
(Funny how both AMD and nVidia release drivers just for pre-beta benchmarks 🙂 )
The ExtremeTech results were provided by AMD but, as ET explained, they are valid. They actually had lower results than AMD for the 980. The difference, as it apparently often is, was the clock speeds. Some 980s are faster, some slower. Which is why reviewers should always mention clock speeds, IMO.
Did you use custom cards in your testing?
“Unreal 4 engine — and Nvidia and Epic, Unreal’s developer, have a long history of close collaboration.”
From the ExtremeTech article. So there will be lots of BETA-version testing that will bring up more questions, but AOS is RTM. It’s time to pay more attention to what information a review omits, as that will be very important. The more reviews read, the better the overall picture of actual performance will be, with an “*” on this game until RTM.
*BETA version compared to RTM (AOS).
“The Fable Legends benchmark is both surprisingly robust in the data it provides and also very limited in the configurability the press was given. The test could only be run in one of three different configurations:”
This is not the full benchmark, as the configurability was limited; more questions that must be resolved after the game is RTM.
The entire graphics API software stack is turning over with both DX12’s and Vulkan’s release, and until things stabilize, more games are tested in their RTM versions, and the new graphics APIs have been out/RTM for six months, things are going to be in a state of flux. Most testing on BETA versions should be taken with a large amount of skepticism.
Curious how you don’t mention that at your link the system ran in low CPU mode; they caught the problem, cranked the system to high power and 3.3GHz instead of 1.2GHz, AND NVIDIA GAINED 27% WHILE AMD ONLY GAINED 2% – at the normal maxed CPU speed.
Then the article mentions (you didn’t READ IT!) that their nvidia GPU is clocked lower than one used at tech site X that had higher NVidia results.
So I guess that little ignorant attack on this website was a complete failure, looking into it.
Thanks for playing and you’re welcome.
Now, my position is the 980 and 980 Ti are excellent overclockers compared to the AMD cards. So… AMD had better have at least a 33% better price, some free games, and a darn good warranty, or I don’t want it.
NVidia has so many more features, so much more fun packed in, and less power usage.
I feel bad for AMD; they need to be 10%-20% faster given their lack in the software department, and they should be, but they are not.
So as expected, the whole Ashes thing was blown out of proportion by AMD, its fanboys, and sites like PCPer that reported it as if DX12 was going to be the end for nVidia.
You are the epitome of the awfulness you THINK is all around you…Everyone hates you 🙂
Ashes is a totally different type of game and of course anyone should take it with a grain of salt because it is new and not out yet, but it could still be a title that shines with AMD products.
We already know it will as Oxide is a partner with AMD.
Once again, Ashes will not be representative of the majority of DX12 titles.
Stop your FUD right there!
Oxide Games is NOT partnering with AMD. Oxide Games’ publisher Stardock is a partner with AMD.
However! Stardock also has a partnership with NVIDIA for GPU driver updates, a.k.a. Impulse NVIDIA Edition.
The AOTS was a very CPU-intensive benchmark, and so the AMD cards got a huge boost going from DX11 to DX12. This game is more GPU-bound.
Actually, in AotS, nVidia DX11 vs AMD DX12 has shown very similar results to this.
What was surprising was the DX11->DX12 jump for AMD vs. nVidia, but frankly, it is clear AMD did not optimize for AotS DX11 at all.
You also forget that it uses async compute, which was a locked AMD-only tech til recently. You can argue and claim it wasn’t, but it was. As I have said a dozen times before, it would have been like PhysX being added as a standard in DX12; AMD wouldn’t support that right either. Async is an optional, not required, part.
This game is more neutral in terms of tech used, as it didn’t use a tech from one side or the other.
Asynchronous compute is not owned by anyone, lots of processors use Asynchronous compute in their hardware, Intel, ARM(Mali, ARM based CPUs), AMD(CPUs and GPUs), IBM(Power8, and others). Its Just that Nvidia has been on a gimping spree with its consumer GPU SKUs! Now it’s coming back to bite Nvidia In the A$$, and you are always spouting the same FUD and hoping it will take!
Face it HSA style Asynchronous compute fully in the GPU’s hardware is here to stay, and Future GPUs/SOCs from many of the HSA foundations members will have more GPU compute going forward! Also expect FPGA’s and DSP’s and any other processing hardware to be available for Hardware Asynchronous compute, in spite of Nvidia’s attempts at gimping their consumer GPU’s hardware resources for extra profit milking. CPUs are not the only source of computing power, and have you ever tried to game on a CPU without the help of a GPU. AMD’s GPUs will continue getting even more Hardware Asynchronous compute abilities in their Arctic Island micro-architecture based GPUs.
Expect that AMD’s future HPC/workstation SKUs will begin to make inroads into the market with the on interposer GPU as a computational accelerator wired more directly to the CPU via an interposer and that those ACE units will be able to run more code on their own without the need of assistance from the CPU.
Locked AMD only tech? LOL!!! What did I just read? Are you serious. Please say no. 😀
Yes he is serious, and always has been. IMO, his posts and avatar explain each other.
async compute was not locked to amd, nvidia just doesn’t have a good implementation of it in maxwell. Read this to get a good summation.
https://forum.beyond3d.com/posts/1872750/
It’s probably the case that most of the time nvidia would be better off not using that on their maxwell parts because other methods will work better on their hardware.
“…Async compute which was a locked AMD only tech til recently. ”
lol, you just made me blow soda out of my nose I was laughing so hard!
A locked AMD-only tech?
Async compute is a concept adopted by the HSA foundation. Free to use with no royalties.
Tsk, tsk… such a thing is completely alien to you ngreedya patent trolls.
O’RLY? Maybe 980Ti vs. FuryX … but:
That 390X > 980 is normal DX11 business as usual, right?
That 380 > 960 is normal DX11 business as usual, right?
And that is both in FPS and 95-99 percentile (check AnandTech and TechReport as well).
a 380 is typically faster than a 960 iirc. 390x beats 980 mostly at higher resolutions in some games. Consistently beating it is not normal but this is one benchmark.
blown out of proportion no, fact is any company that is and has always been biased towards Nvidia will always do everything they can to increase performance with their hardware even if it means and sometimes forces reduction in performance to competitions hardware/software i.e AMD.
There is NOTHING that is fully DX12 compliant, however, AMD is FAR more complete in that regard with nearly all products released from Radeon 7k GCN and up, Nvidia is far more fractured through their entire lineup for supporting DX12 bits and pieces facts are facts.
This is for the same reason that they did not fully support dx10-11 they make loads of $ off blind fanbase, why would they bother actually giving the full ability to their product when it would mean a slight boost in cost to give it, might as well wait till next generation to force buys to buy once more for a bit more support.
AMD has a massive advantage at this point with DX12, and using just 1 game that is funded and tweaked by a company biased towards them is not proof of anything other then foul play.
It’s funny you say the game is biased, as if AOTS wasn’t biased in AMD’s favor? Funny how short-term AMD fanboyz memories are. AMD’s drivers for DX11 suck BAD, get over it. It’s not because a game company is biased against them, it’s AMD’s fault. Stop trying to shift blame; you sound like an AMD PR rep.
AOTS shows that Full Hardware Asynchronous compute support, will benefit gaming by moving more of the work to the GPU, and saving a lot off of the intrinsic latency that comes from CPUs having to communicate with the discrete GPUs over PCIe. The more code that can be run on the GPU the better, because GPUs have vastly superior numbers of cores than any CPUs, and GPUs with ACE/other fully in the GPUs hardware Asynchronous compute support will not need the CPU’s help as much. The more running on the GPU the less latency there will be in gaming.
I say put all of the game on the GPU and make those ACE units able to run all of the game’s code, with the CPU there for running the OS and other chores. The Future AMD APUs on an Interposer will be able to host a High End GPU and more directly wire up(Thousands of traces) from the CPU to the GPU, and other processing DIEs on the interposer, including the other wide traces/channels to the HBM, and even AMD’s HBM will be getting some extre FPGA processing going on with an FPGA added to the HBM stack sandwiched between the bottom HBM memory controller logic chip and the HBM memory dies above.
Stop trying to apologize for Nvidia’s lack of foresight and greed, and try the get Nvidia on the path towards full in the GPU’s Hardware Asynchronous compute support. The Gimping of compute will not pay off in the future for gaming. You sound like Nvidia’s damage control! AMD, and others will be making more use of GPU/DSP/FPGA/other Hardware Asynchronous Compute, from the mobile market, to the HPC/workstation/supercomputer market!
It seems that you like to ignore the fact that only GM200 can fight AMD. AMD is winning in all categories against GM204. Fable Legends is the proof everyone wanted after AotS that AMD is back.
Enjoy blind fanboy!
AMD desperately needs a hit product line to lift its ailing computing and graphics division, which has been crushed between Intel’s x86 chips in PCs and NVIDIA’s (NASDAQ:NVDA) add-in graphics boards. AMD’s market share in add-in boards fell from 22.5% to 18% between the second quarters of 2014 and 2015, according to JPR. During that period, NVIDIA’s share rose from 77.4% to 81.9%.
Last quarter, AMD’s computing and graphics unit’s revenue declined 54% annually as its operating loss widened from $6 million to $147 million.
NVIDIA’s (NASDAQ:NVDA)?
So, you are not just an Nvidia fanboy but also a shareholder.
Thank you for the information.
I think next year nvidia will have trouble.
But based on AMD’s weak position in both markets, it’s likely that its top- and bottom-line losses in computing and graphics will keep piling up.
I would not celebrate just yet, nvidia got spanked across the gpu line at every price point except their high end card.
a freaking 390 is in the ballpark with the 980, and a 390x spanks it
280 beats the 960 handily, it’s a win across the board except for the 370, but who wants that thing?
I wonder if any of the tech press throwing out their disappointment at amd just rebranding cards and relaunching them will either dial it back or publicly eat crow? Turns out those 2014 hawaii cards are besting the newer 970/980 pretty clearly. But at least nvidia has the crown with the 980ti… at least until the real game gets launched and we actually see more spell effects with post processing show up and greater use of async compute help narrow/close/open up gaps in amds favor.
*** 2013 Hawaii cards
The AMD fanboys fall for it every time.
Soon the new phantom future AMD winning “tech/thing” will be slithered and slathered into the web announcing the day when the raging hate filled red freaks “destroy the competition!”
In the mean time, rinse, repeat, retreat, retort, RETARDED.
I like fps to be high sometimes, its the best. Does this game come with all 8 gpu’s installed in the box? Call me?
Best comment ever.
Yes.
2560?
Seriously, this. All the benchmarks are either 1080 or 4k.
Mr. Shrout, the percentile charts were very graphic in leading the editorial benchmark forward. I ask of you to continue demonstrating them in your future reviews.
It’s weird that the 980Ti pulled ahead of the fury X even more at 4K, we usually see the opposite. How much vram was the benchmark using? More vram usage might also explain why the 390X got ahead of the 980 by that much.
Question: According to the Intro page, the benchmark supports DX12’s ASync Compute functionality – The same functionality which sparked the Ashes of Singularity controversy, yes?
What we learned from that controversy is that AMD’s drivers already have ASync support, while Nvidia’s Maxwell DOES also support ASync(we don’t know just how well in practice yet, but still), but the drivers are still on their way.
Does that mean we are comparing AMD’s ASync enhanced performance Vs. Nvidia’s raw, serial performance? Would that not mean a further boost is in the pipe for Maxwell cards?
The game isn’t using async compute on PC… it is enabled only on Xbox.
This is sad; with no async shader support, PC performance is artificially held back AGAIN to make nvidia look better, even with DX12!
If Microsoft themselves ENCOURAGE this type of practice even on published features, then… well… forget it AMD, you can’t fight Microsoft.
MSFT cannot force to use X features in DX12, rather for them to have allowed Nvidia to claim “full support” that is a blatant lie period. Async etc are features, MSFT cannot force folks to use this unless it was places as a requirement via hardware to do DX12 at all, which it is not.
Hell for many years since the Radeon 2k series they had tesselators built in, but it was not till DX11 that MSFT made it mandatory via hardware with specific attributes which favored Nvidia and allowed Intel to emulate it. Basically anything prior to Radeon 5k was unable to do it without having an inherent advantage vs competition.
I think this is why they made mandatory changes etc with some of these features for DX11-12 to not artificially limit performance or force those in position that they would not get good results example would be Async which is more or less superb on AMD via hardware where others to my knowledge have to do software workarounds never a good thing, in this regard they should have put a context switch on things to lock out oh I dont know game works optimizations, PhysX etc as they artificially tilt the table to make 1 product look great while shafting everyone else performance/quality wise.
Anyways, DX12 is a great step forward provided MSFT does step in and make sure some underhanded tactics are not made to supersede the benefits of all that extra hardware/software code requirements, but then again, Win10 stands in the way of nearly everything with the way it was built/designed which will destroy consumer/business operations everywhere if allowed to remain as it is.
While I am in no position to say whether asynch shaders are in or not, the fact that 390X is ahead of 980 by about the same margin as 980 tends to be ahead of 390X in DX11 games (same for 380 vs 960) …
… it is clear that those old AMD Rebrandeons love DX12, even with UE4. Therefore, I would put those conspiracy theories aside for now.
Surprisingly enough, Radeons have better 95-99 percentiles compared to FPS (vs. nVidia counterparts), while in DX11 it tends to be the other way around.
Nvidia can support via software if they choose, not hardware, as it is more or less HSA/Mantle derived code, just like PhysX is Nvidia specific and locked code. Nvidia may give great performance going from say 400-500-600-700-900 series HOWEVER they also chopped much things out and did not bother putting certain things in, this is why the power requirements have dropped so much though performance for many things stayed ahead.
You cannot chop so many things out and expect awesome performance in every regard, this is part of the reason why AMD cards “appear” to use more power, they are more “complete” if you will be adding more on the die/card, which takes more power, but also gives option to use X without having to do psuedo workarounds.
Nvidia is the master or proprietary crap, ripping and chopping, not adding, screwing with, reduction in quality etc, just to serve their shareholders NOT consumers at all.
Point is, any game/app that uses DX12/Vulkan that is not biased towards any one company at this point Radeons especially GCN 1.2 based will absolutely demolish Geforce cards, yes they use more power to do it, but more performance has a cost, for radeon this cost gives better build quality and higher absolute power use.
Muscle cars always used more fuel then light weight exotics 🙂
Nvidia is the Green Goblin Gimper that removes more compute from their consumer SKUs! Nvidia has steadily removed compute from their consumer SKUs and can only implement Asynchronous compute in software, but not truly in hardware. Nvidia even spun this as power savings to feed this line to the pea brained Gits that overspend on Nvidia’s pricy kit. Enough of the Gimping on consumer GPU SKUs, Green Goblin, the new graphics APIs are able to make use of the hardware based Asynchronous compute in GPUs now, so that Green Goblin Gimping for more greenbacks at the expense of Hardware Asynchronous Compute will no longer be profitable.
Most new software including open source software will make use of Hardware Asynchronous Compute and people will want to make use of the GPU for more uses other than gaming, and the GPU will be use for more all around software acceleration going forward. Non gaming Graphics(and now Gaming for games too) software makes heavy use of Hardware Asynchronous Compute to render workloads in minutes that would take the CPU hours to complete. So no more of the Gimping of compute as it will no longer be an advantage to the Green Goblin and its minions of green slimes!
disappointed with amd’s crap performance again, means the amd fan must attack Nvidia in a vitriolic paranoid rage, you performed well, feel the hatred surging through you, now raise the red dragon card and strike again at the enemy, and your journey to the bankrupt failing dark side will be complete !
wow all that conspiracy fanboy rage fud blown about still…
amd is a crumbling broken empty shell…yet the raging hate filled fanboys survive
hope they gave you some free amd crap
Ryan, did you use the EVGA 980 Ti or the reference 980 Ti?
I’m quite interested in VRAM usage
why is this not tested!?!?!??!?!
THIS SHOULD BE TESTED!
Why didn’t you test the most popular upper-mid GPUs, the R9 390 and GTX 970?
Wow AMD killed it in the benchmarks even though they have the older tech
This doesn’t really look that dissimilar compared to AotS, but we can’t really tell without testing the DX11 code path. We may find that AMD suffers very low frame rates on DX11, but DX11 isn’t going to be very relevant in this case. The low end DX12 cards in this review are at low enough frame rate that it would be unplayable at those settings. Are there going to be any DX11 limited cards which will be able to play this at a reasonable frame rate without turning the settings down to low quality?
It is interesting to see the 380 doing so well since it is based on such an old design. The 390x looks like it may be the best deal currently. Wonder if that is partially due to 8 GB of memory.
So all cards were verified to be running DX12?
Is it just me, or AMD cards overall looks less jittery than Nvidia. Maybe checking 99th percentile could show this up too, or some other metric, like moving standard deviation or something.
Not just you.
These graphs demonstrate AMD cards give a smoother running game experience.
So when’s Black & White 3 coming out?
I just bought an R9 290X with 8GB RAM for $300, and seeing that I will be able to run above a $450-$500 GTX 980 is OK with me.
fill your 290x to 8gb and tell me your fps lol
@pat182
?
lmfao
the fantasy the amd fans entertain is endless, it’s just so surprising
Good buy