[H]ard|OCP takes a look at the new optimizations in the Oxide Game Engine and comes up with similarly positive results to Ryan's. They tested the CPU by dropping the resolution and quality in AotS and utilizing the CPU-focused benchmark, as opposed to the GPU-focused benchmark utilized by many sites, including ourselves. Their tests showed a 16.46% improvement, which demonstrates that these optimizations don't simply affect graphical performance but improve CPU calculation performance as well. Pop by for the full review.
"There has been a lot of talk about how AMD's new Ryzen processors have pulled up somewhat short at low resolution gaming. AMD explained that code optimizations from game developers are needed to address this issue, and today is the day that we are supposed to start seeing some of that code in action."
Here are some more Processor articles from around the web:
- AMD Ryzen Memory Analysis: 20 Apps & 17 Games, up to 4K @ techPowerUp
- Testing ECC Memory & AMD's Ryzen – A Deep Dive @ Hardware Canucks
- Two More Retail Ryzen 7 1700 Overclock Tested @ [H]ard|OCP
- AMD Ryzen 7 1700 Retail CPU Overclocking X 2 @ [H]ard|OCP
- AMD Ryzen R7 1700 @ eTeknix
I don't trust the average frame rates as much as the percentile scores that are always necessary for any proper benchmarking, as average framerates can be skewed by outliers.
I'm not a big TechReport fan, but the least biased graph is the frame time histogram. I haven't seen anyone else do it.
My issue is how you create the graph (or gather your benchmark data).
What I've seen sites do is
a) pick games that favor one vendor
b) pick resolutions that favor one vendor
c) configure the HW to favor one vendor
…
Maybe unknowingly, but this is why you see conflicting results.
You can basically shape your review to match the result you want to achieve.
One area that seems to have had little coverage: VR + dual GPU.
All reviews are quick to claim Ryzen to be a gaming failure,
but from the little I've seen, if you plan for VR, or plan to keep your CPU more than 12 months, it's quite the opposite.
Maybe try reading the reviews I linked to instead of making up what they say in your head?
a) Some games show a 17% FPS improvement with faster ram at 1080p
(and we are not talking about crazy fast RAM, just 3200 MHz)
b) ECC is implemented and works; Windows logs all ECC errors, but it doesn't seem to halt operation on 2-bit (unrecoverable) errors.
I've never run ECC, so I can't say whether this is useless without halting on 2+ bit errors, or whether it's a Windows misconfiguration.
c) Retail vs. sample overclocking… it seems some had a different experience, where the retail chips overclocked better. This is called the silicon lottery.
Personally I find 4 GHz on all 8 cores not worth it for the power cost… The Stilt did an in-depth review on clock / voltage scaling.
d) The eTeknix result is a great example of a benchmark doing what you want it to do. Check their "Rise of the Tomb Raider" numbers vs. most other reviews.
What does ECC have to do with this?
Canadian Smackdown!
You really need both scores. Telling me the 0.1% lows by itself is pretty useless.
What a crappy site [H]ard|OCP is… Users are pointing them to the latest AdoredTV video that claims Nvidia drivers are to blame for poor Ryzen benchmarks, and they won't even investigate it…
I just subscribed to AdoredTV; the guy is so lucid.
Yes. He seems to have exposed threading choices in the driver architecture.
Nvidia seems to have focused on a "single" threaded design.
(And it's a very valid design choice if you have a high clock / high IPC CPU)
But AMD seems to have leveraged DX12's multi-threaded design more fully.
He also exposed and destroyed the idea of low res benchmarking having any value. He proved the opposite in his 2012 to 2017 retrospective.
Anyone foretelling that the i5-7600K will age better than an R7 is simply not looking at the gaming world objectively.
"Nvidia seems to have focused on a "single" threaded design. (And it's a very valid design choice if you have a high clock / high IPC CPU)"
It isn't valid anymore, and hasn't been for a while. Most gamers should have 4-core CPUs as a minimum. If you are running even a 4-core CPU, a single thread is limited to 25% of the potential performance. With an 8-core CPU, a single thread is limited to 12.5% of the possible performance. If you push single-thread performance by 20% (huge by today's standards), you only get up to 15% of the original performance potential; not exactly spectacular. If you go multi-threaded, you can go a lot higher than 15 percent. Intel has managed to push single threads enough to mostly keep up with GPUs, but it is way past the time that games should have gone heavily multi-threaded. Once game developers have multi-threaded performance available, they will take advantage of it, and any single-threaded applications will be left far behind. Having the consoles use 8 low-power cores should push the development of well-threaded engines. Even a not-so-well-threaded engine should be able to far exceed a single-threaded application. I have access to a 24-core machine at work (2 12-core Xeons with HT off). A single thread running a core at 100% is 4.16% of the performance potential. Single-threaded applications are pathetic compared to multi-threaded applications with that much power available.
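Spelling out that arithmetic (assuming all cores are equal and the work scales perfectly, which real games never quite manage): one core out of 8 is 1/8 = 12.5% of total throughput; boost that single thread by 20% and you reach 12.5% × 1.2 = 15%; one core out of 24 is 1/24 ≈ 4.2%.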
Don't confuse GAME threading with the DRIVER code. NVidia doesn't make the games. If the game is well threaded, that starts with the game ENGINE (e.g. Unreal 4) itself, and then with how well the developer team was able to thread the game itself.
I don’t see why the NVidia driver would need to be multi-threaded itself. It can easily run inside of one CPU core.
Proper core management is more the game developer's side, working in conjunction with Windows. For example (a rough sketch follows the list):
– IF CPU perf>x then have the MAIN GAME code thread run on CORE0.
– disable HT on the main core (running main game thread) if beneficial
– ensure video driver, and other threaded tasks run on the OTHER cores/threads.
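To make that concrete, here is a minimal sketch of the Windows side of such pinning. It is purely illustrative: no engine or driver ships this exact code, and the worker mask and commented-out thread handle are made-up placeholders for an 8-core part.

#include <windows.h>

int main() {
    // Pin the calling (main game) thread to logical core 0 only.
    SetThreadAffinityMask(GetCurrentThread(), 1ULL << 0);

    // A hypothetical worker or driver-helper thread could be restricted to the
    // remaining cores (bits 1-7 on an 8-core part) so it never competes with
    // the main thread:
    DWORD_PTR workerMask = 0xFE;
    (void)workerMask; // only used once real worker threads exist, e.g.:
    // SetThreadAffinityMask(workerThreadHandle, workerMask);

    return 0;
}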
Other:
There is also considerable CONFUSION over how ACE, Async Compute etc. work. Keep in mind that if the GPU is fully utilized there's no more performance to be had, so part of AMD's gain is unused GPU cycles. It's a known fact that NVidia is better at DX11.
People slammed NVidia early on when AMD was sporting what was essentially an AMD tech demo (Ashes of the Singularity). In fact, NVidia requested they turn off Async Compute when NVidia GPUs were detected, and people slammed them for it.
As it turns out, the "Async Compute" optimizations in AotS were specific to AMD's GCN architecture, so it was pointless (and in fact detrimental).
Asynchronous computation is a general term that means you can submit requests and have them executed out of order. That means you can keep parts of the GPU (like compute) from sitting idle. Again though, the maximum benefit you can achieve depends on how many GPU cycles are still available. So you can't simply compare AMD to NVidia in a couple of games, with early drivers, and make assumptions.
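As a rough illustration of what submitting to a separate queue means at the API level (my sketch, assuming D3D12; not code from any game or driver, and error checking is omitted), async compute starts with creating a compute queue alongside the usual graphics queue:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The usual direct (graphics) queue that renders the frame.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue. Command lists submitted here can be
    // scheduled alongside graphics work, filling otherwise idle GPU units;
    // that overlap is the whole point of "async compute".
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    return 0;
}

How much that second queue actually buys you depends entirely on whether the hardware scheduler can overlap the two workloads, which is where the AMD/NVidia differences come in.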
*DX12 is still very young. In fact, most games have issues with DX12 and should simply be run with DX11 anyway.
Recently NVidia has improved their DX12 support for several games.
Summary:
Sorry for the long post, but if you learn anything it’s that most people don’t know what they are talking about when it comes to NVidia vs AMD in DX12.
I said this over a year ago, but we STILL need more games and better drivers to have a good comparison.
Finally, DX12 in games today is mainly just some tacked on DX12 code. A proper DX12 game, with good multi-threading etc needs to be written from the ground up to support DX12 (or Vulkan) or at least sufficiently rewritten so as to leverage the benefits more.
It will be DIFFICULT to make a game optimized for DX12 while also supporting DX11. You essentially would need to have major parts of the main code thread take two different branches, though as the game ENGINES improve (e.g. Unreal 4) the benefits will be increasingly baked into the engine, thus requiring less effort.
DOOM is a great example of how Vulkan (similar to DX12) will benefit gamers. The reason it's one of the few games that works well so far is that the money, time, and excellent coders were available to make it happen.
(I should also mention that pre-Pascal doesn’t have hardware asynchronous computation, though it’s not clear how much that even matters as of today. Now here’s a MATH problem:
Johnny has a GTX1070. He is using 90% of the TOTAL GPU processing, then he gets to 100% by enabling asynchronous computation in the game. What is his maximum FPS improvement by percentage?
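(Working that one through, assuming "90% of the TOTAL GPU processing" really means 90% of the available throughput: going from 90% to 100% is 100/90 ≈ 1.11, so the ceiling is roughly an 11% FPS improvement.)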
In case it’s not clear, if the GPU reports “96%” usage that’s not accurate. I don’t know exactly how that’s reported but I’m sure it means the GPU is being utilized 96% of the TIME, but not whether ALL of its Compute etc capabilities are utilized at the same time.
Actual 100% utilization would mean every part of the GPU that CAN process data IS processing data. So that's where asynchronous operation comes in, to move data in and out of these sections optimally.
Now, AMD may very well pull ahead of NVidia in this one area, but ASync Compute is but one part of the picture and so overhyped with the "NVidia can't do DX12" concept that it's hard to step back and explain the situation, especially when it's a complicated hardware and software issue which most people don't have the edumacation to grasp fully.)
“Sorry for the long post, but if you learn anything it’s that most people don’t know what they are talking about when it comes to NVidia vs AMD in DX12.”
You don’t sound like you know what you are talking about either.
Can you provide a link or clarify for those who are not aware?
Original investigation of the Nvidia DX12 driver:
https://www.youtube.com/watch?v=0tfTZjugDeg
discussion
https://www.reddit.com/r/Amd/comments/62kqph/ryzen_of_the_tomb_raider_when_a_cpu_bottleneck_is/?limit=500#bottom-comments
Follow-up, The Division driver review (AMD vs. Nvidia):
https://www.youtube.com/watch?list=PL_sfYUCEg8Og_I4k7nL62IsMrJv5rFRa_&v=QBf2lvfKkxA
It seems the problem is real and big.
Nvidia's DX12 driver is pretty weak, not well optimized, and stresses a single core.
So testing games using Nvidia drivers on Ryzen doesn't paint a correct picture of this CPU's gaming potential.
I think Nvidia did not invest much in DX12 because they have some tweaked-out, magic-under-the-hood DX11 drivers.
On the flip side, AMD decided not to invest in DX11 driver magic, but instead invested in DX12 / Vulkan.
Overall I’m surprised at the amount of tech news around Ryzen.
I barely recognize the AMD of old.
It's that async compute could very well be in play for the dual RX 480s in that "Ryzen of the Tomb Raider" video (the first one you linked to). Nvidia's GPUs cannot do async compute in hardware, and Nvidia's DX12 drivers have to emulate part of it (hardware thread dispatch) in software, slowing things down; that, and maybe Nvidia's not-so-hot DX12 drivers.
Do you think there is some "call us before you review" and Nvidia reviewer's-guide nonsense going on, with Ryzen being paired with Nvidia GPU hardware so much for Ryzen benchmarks? I mean, reviewers had to know about dual RX 480 usage, because when AMD introduced Polaris they did some dual RX 480 demos.
Man, seeing dual RX 480s up there a little past an overclocked Titan X (Pascal), WTF. Maybe it's because the dual RX 480s have about the same SP FP TFLOPS of compute as the Titan X (Pascal). That Geothermal Valley village, OMG. Ryzen and dual RX 480s, and Vega on the way, and this is an Nvidia title not optimized for AMD.
NVidia’s Pascal does have asynchronous computation. It’s called Dynamic Load Balancing.
http://pc.watch.impress.co.jp/video/pcw/docs/759/368/p10.pdf
Implementing ASync Compute for an AMD GPU doesn’t mean NVidia’s method works by default. The game would need to be written for both options, AND properly detect and use the correct path to work.
I looked at the VIDEO (i.e. Tomb Raider) you referenced. I would suggest anybody looking for GPU information stick with the Intel numbers. For example, one shows 64.5 FPS in DX12 but 104.5 FPS in DX11 with the R7-1800X. Whaaaat?
I would guess that is related to CCX thread jumping. (for Intel there was a small DROP going to DX12, but I don’t believe that’s true anymore after the recent NVidia DX12 driver update.)
http://www.pcworld.com/article/3178844/components-graphics/nvidia-supercharges-geforce-directx-12-performance-with-new-game-ready-driver.html
SLI and CROSSFIRE require the game developer, and NVidia/AMD (mainly for AFR) to optimize things. There’s a lot that can go wrong, which usually shows as unwelcome STUTTERING.
2xRX-480 for ROTTR varied tremendously. In one section it only gained 14%. In another area it went up to 82% boost!!
I still can’t recommend multi-GPU to most people, especially AMD which has no high-end card to compete with NVidia. Your best bet for the money is a GTX1070, GTX1080 or GTX1080Ti currently. Again, many games do not support multi-GPU or stutter.
*Please don’t forget that DX12, and multi-GPU via SFR are slow to come so don’t base purchasing decisions on what games WILL do. Look at your current game catalogue, and the games you wish to play over the next two years.
Or just use THIS as a guide: https://www.techpowerup.com/reviews/ASUS/GTX_1080_Ti_Strix_OC/29.html
(BTW, multi-GPU via AFR isn’t supported in some games because game devs are learning to look at SIMILARITIES between frames. Similar to video compression. The game engine can’t do that if one frame is being processed on GPU#1, and the next frame is on GPU#2.
So I suspect AFR will fizzle out, and SFR or Split Frame Rendering will slowly come in. In fact, the PS4 Pro has two GPU’s. It overclocked the CPU part of the APU, then added in another GPU to match the existing GPU. If the PS4 Pro isn’t using SFR yet it probably soon will but that’s up to the game developers for moving over PS4 games as well as newer games.
Not sure if SCORPIO has two GPU’s, but by my calculations it must. So I’m guessing 2x3TFlops with likely a slightly faster CPU, 33% more perf per GPU but otherwise likely nearly identical…
SFR on desktop?
DX12 and Vulkan support this, but it’s still complicated. Certainly having consoles transiton to having TWO GPU’s helps push this agenda, but to make it COST effective we need to have this baked into the game ENGINE (i.e. Unreal 4) to make this much easier to implement.
Basically SFR can split each FRAME into its components, then split the tasks between two or more GPU’s. The results are then merged back and the new frame sent to the monitor.
This is the FUTURE and definitely will happen (I think it's in CIV6 already). It not only means potentially close to 2x the performance compared to the current range of 0%, 30%, 90% and so on, but also makes the game less SLUGGISH in multi-GPU.
AFR (alternate frame rendering) adds latency from button press to photon update due to the wait for the next frame to be drawn. SFR works on the SAME FRAME so it actually completes the frame FASTER with more GPU’s.
*Please remember, that SFR must be coded so ONLY those games benefit. That’s why buying GPU’s for this will take a while to make financial sense. Of course, if you want the best performance in just ONE GAME and cost be darned, feel free. For example, maybe you get 4K, ULTRA settings at 60FPS instead of 35FPS.
Good write-up Photonboy. If I may add, those 2x 480s are definitely going to eat more wattage than a single 1070/1080. The additional cost for extra electricity or an upgraded PSU isn't often included when calculating dual-card value vs. a single card.
With the recent price drop after the 1080 Ti, those 1080s have become more affordable, in the $500 range. Oops, that is about the cost of two 8-gig 480s.
Don't worry, AMD hasn't forgotten its fans. Rebrandeon is back by popular demand. No need to buy those 480s when you can get a proper 580 with an 8-pin connector, OC'd 5%-10% to give it a little separation from its predecessor. Coming soon.
Where's Vega? It may even be delayed again so AMD can OC it some more to try and match 1080 Ti performance. It would be a total fail if AMD can't deliver in the same 250-watt window though.
Polaris has been doing pretty well for a made-for-laptop chip that was supposed to get 2.5x the performance per watt of the previous arch. Isn't this false advertising? They gave up the efficiency trying to match the leap Nvidia made with Pascal.
Ryzen is going to be almost as good as Intel at utilizing multi-card setups at CPU-bound resolutions, or with weak cards at any resolution. However I don't think it would keep pace with Intel when two 1080 Tis are used in a well-coded game that gets at least 70% out of the second card at 1440p/4K.
1080p and 720p resolution testing has shown that Ryzen doesn't push as many frames as Intel. Maybe they can fix this somewhat, but why would they try to hide it if it were easy to fix?
I hate April 1 and their dirty beds.
I quit gaming at 1080 a long time ago. People need to move on. 1440p is the new standard.
Steam hardware survey results disagree with you: http://store.steampowered.com/hwsurvey
43.23% of people on steam use 1080p, whereas only 1.81% use 1440p.
All that means is people hold onto monitors for a while. 1440p 144 Hz and 240 Hz panels exist now, even in the IPS space. No reason to go back, and I expect 1080p to start shrinking.
High refresh monitors aren’t cheap, and even less so if you go for any of them above 1080p. PC gaming is already a niche market, and high refresh, high resolution monitors are a small niche within that niche.
The point is what is “standard” and where things are headed are different things.
Standard is what most people are using currently.
Are you telling me I'm part of the 1%?!
Joke aside, 1080p in some games all maxed out can be GPU intensive.
Also, those surveys don't show the PC gaming community clearly.
If you say 90% of Steam users are casual gamers and not part of the enthusiast crowd (like playing Minecraft on dad's laptop),
and that it's that 90% who game at 1080p or lower, then you see a very different picture emerge.
1440p and up then becomes the overwhelming resolution of choice.
I'm not stating it as fact, but you can read the Steam survey any way you want.
And I would guess people who game at >1080p are the majority of those who buy a GTX 1080 Ti. So doing a review using an overclocked GTX 1080 Ti at 1080p makes little sense (knowing game engines don't scale over time the way review sites think they do).
2560×1440?
There are only about 2% at that resolution.
Just over 93% of the monitors are 1920×1080, or LOWER.
I know, like you said you can cherry pick results and just drop the “casual” gamers but I’m not sure what that tells you.
*The BEST way from a software and hardware developer point of view would be to adjust the values based on people's GAMING INVESTMENT. For example, what percentage of the TOTAL of all Steam purchases were done by those with higher than 1920×1080 resolution?
That’s probably what you meant, though I’m not sure how you can get that information. Is it 5%? 20%?
16.27% are playing with Intel graphics. The steam hardware survey isn’t going to be very representative of what is driving the gaming PC market without sorting it by what games people are playing with what hardware.
There has got to be more testing done with dual RX 480s and CrossFire for Ryzen gaming benchmarks! All this Nvidia-only testing with Ryzen has to stop, and it's just crazy suspicious with the benchmarks.
Look at the dual RX 480 results towards the end of the video (1), OMG! Around the 13:46 mark, BUT you really have to watch the whole video, because it shows different parts of the game and how testing different parts can tax a CPU and GPU differently.
(1)
“Ryzen of the Tomb Raider”
https://www.youtube.com/watch?v=0tfTZjugDeg
Edit: Around 13:46 mark
To: 11:40
But really watch the entire video!
There are plenty of benchmarks for Ryzen using Crossfire setups. Just because you see it in a couple articles and/or videos doesn’t mean that reflects the case everywhere.
However, Ryzen still has issues, so I'd personally rather keep Crossfire mostly with an Intel CPU for now, because mixing Crossfire + Ryzen makes it difficult to know whether it's the CPU or a game's crappy multi-GPU scaling that is the issue.
GTX1070 would be a much better choice BTW than having 2xRX-480. Or the GTX1080.
(RX-480 8GB is roughly $300USD+, whereas the GTX1070 is approx $500+, and the GTX1080 roughly $700+. Keep that in mind as the GTX1070 is slightly cheaper than 2xRX-480 of similar quality).
results: https://www.techpowerup.com/reviews/ASUS/GTX_1080_Ti_Strix_OC/29.html
On average, the GTX1080 is 72% faster at 2560×1440. If you wanted the RX-480 to be about the same you’d need 1.72x the FPS with 2xRX-480 in every game, with no added stuttering. That’s not even CLOSE to what happens.
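Put another way (my arithmetic, not from that review): the second RX-480 would need to add 72% on top of the first card's FPS in every title, i.e. 1.72/2.00 = 86% of perfect 2x scaling, which CrossFire rarely sustains across a whole game library.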
A single RX-480 might make a lot of sense, and yes, some people only have enough for one card now and another card later, but Crossfire/SLI AFR (AFR, not SFR) is being phased out. One of the main reasons is new code that can look at sequential frame similarities to improve FPS. That requires the frames be rendered on the SAME GPU.
So it’s very hard to recommend multi-GPU for a new computer.
*Those Tomb Raider results are certainly NOT typical. Up to about 82% boost? That’s nice, though at times it’s closer to 20% in the same game, 0% in other games, and so on.
All of you reviewers are a bunch of incompetent hacks. A month of Ryzen tests on Nvidia graphics cards showing bad performance drastically inconsistent with Ryzen's performance in all non-gaming applications, and none of you bothered to check whether there was something wrong with Nvidia's drivers. F***** disgrace.
The entire enthusiast web reporting sphere lives in metaphysical fear of not getting their review samples if they try to dig too deep. It's simply because the hardware makers (CPU, GPU, motherboard, and other hardware makers) pick and choose who in the online review press gets the review samples. Ad revenue conflicts of interest come into play across the entire publication marketplace.
There almost needs to be a dedicated academic discipline devoted just to the consumer PC/laptop and mobile device markets, where PhD statisticians who are also computer science PhDs, along with their graduate students and Master's and PhD degree candidates, work out all the variables that can affect the benchmarking of PCs (gaming or otherwise).
They would also have to include the various EE engineers/PhDs and their Master's and PhD degree candidates as well.
There needs to be some form of industry-wide PC testing mule, where every online review website can purchase one of these testing mules with specialized hardware ports/interfaces for capturing the necessary data/packets directly off the various PC system buses, so as not to adversely affect the results.
The government does this kind of testing for fuel economy on engines, but even there, for pollution control, results can be gamed by software tweaks to hide things. Review samples need to go into a press pool, and the online press can get the samples by lottery, with review samples having to be returned to the review sample library so other press outlets can get a crack at them. Review websites need to be required to issue transparency statements with respect to any business arrangements they have with any makers of the parts they are supposed to be objectively reviewing.
Uh… much of the testing is VERY accurate. Yes, you need to separate games from other applications and many places including Gamers Nexus, PCPER and so on do just that.
It wouldn’t have anything to do with NVidia either if you use the same exact GPU (such as GTX1070) so that you are mainly comparing Ryzen vs Intel CPU’s.
If you see lower performance on a RYZEN system then guess what? It’s because of the Ryzen CPU.
*I’m not sure how much research you’ve done because there are LOTS of articles delving into exactly WHY the gaming performance is lower. This includes:
– overclock
– DDR4 memory compatibility
– CCX thread jumping
– machine code compiled with Intel CPUs in mind for most x86 software.
This has NOTHING to do with NVidia. These are issues that need AMD, motherboard manufacturers, game developers, and Microsoft to fix or at least improve.
(As for application support, you can expect a mixture of results there as well depending on how well threaded the code is. If you use ALL of the Ryzen threads, thus minimizing any code jumping and benefiting from the increased number of cores versus a similarly priced Intel, then yes, Ryzen is a great CPU in the right circumstances.)
I hope this helps a bit. If you bought a RYZEN CPU then my main advice is to keep a link to your EXACT motherboard's support page, as there will be a number of BIOS updates over the next year or more.
There will be other updates via Microsoft Updates. There may be other advice as time goes on but I can’t think of anything right now.
Summary:
So you were way off base with your comment. You may want to educate yourself a bit more before spewing toxic comments that are incorrect.
Plenty of reviewers follow rigid testing procedures, and have extensive computer knowledge.
Poor AMD fanboys pointing out that Nvidia's DX12 drivers are bad. Looking at the article, the point needs to be made that even with CrossFire 480s, the Ryzen 1800X is still worse than an Intel i7-7700 in Rise of the Tomb Raider. Yes, it's not even an i7-7700K. The Intel is running stock, I would assume, and since the Ryzen X version runs at its maximum, I guess it's totally fair, right?
Also a single ref 1070 is only 30%-40% better than a single ref RX 480 8 GB. So it’s no surprise it loses to two 480s. LOL
http://www.game-debate.com/gpu/index.php?gid=3575&gid2=3505&compare=radeon-rx-480-8gb-vs-geforce-gtx-1070#
Link for Ryzen numbers to follow.
https://www.reddit.com/r/Amd/comments/62kqph/ryzen_of_the_tomb_raider_when_a_cpu_bottleneck_is/?limit=500#bottom-comments
Geothermal Valley:
(480 in CF)
7700 DX12: 1070 91,06 FPS – 480 101,16 FPS
7700 DX11: 1070 79,65 FPS – 480 55,50 FPS
1800 DX12: 1070 68,16 FPS – 480 94,87 FPS
1800 DX11: 1070 59,50 FPS – 480 50,30 FPS
Soviet Installation:
(480 in CF)
7700 DX12: 1070 98,30 FPS – 480 104,90 FPS
7700 DX11: 1070 97,20 FPS – 480 71,20 FPS
1800 DX12: 1070 73,67 FPS – 480 98,77 FPS
1800 DX11: 1070 75,60 FPS – 480 64,55 FPS
My conclusion is: wow, look at AMD's poor DX11 performance. Are they even bothering with their DX11 drivers, since DX12 is much better for them?
The Ryzen 1800X is still 6.4% slower on average than the i7-7700 in DX12, and 10.3% slower using DX11.
Sorry to inform you, but the data you provided proves Nvidia drivers are defective when it comes to their performance on Ryzen processors. Look at the transitions from 7700 to 1800:
Geothermal Valley
AMD:
7700 DX12 480CF 101,16 FPS
to
1800 DX12: 480 94,87 FPS
or 6,2 % performance loss for AMD DX12 drivers when transitioning from 7700 to 1800
NVIDIA:
7700 DX12: 1070 91,06 FPS
to
1800 DX12: 1070 68,16 FPS
or 25,1 % performance loss for NVIDIA DX12 drivers when transitioning from 7700 to 1800
If we do the same with Soviet Installation, we see that AMD loses 5,8 % and Nvidia loses 25,1 % performance when transitioning from 7700 to 1800.
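(For anyone checking the arithmetic, the loss is simply (FPS on 7700 − FPS on 1800) / FPS on 7700: for Geothermal Valley that's (91.06 − 68.16) / 91.06 ≈ 25.1% on the Nvidia card and (101.16 − 94.87) / 101.16 ≈ 6.2% on the CrossFire 480s.)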
From that it is crystal clear the issue is that Nvidia's DX12 drivers don't play nice with Ryzen, at least when it comes to RotTR and The Division, and possibly other DX12 titles too. It could be anything, including some kind of compiler issue similar to the one in AotS that got resolved simply by changing the compiler.
https://twitter.com/FioraAeterna/status/847472586581712897
This is probably not a big problem for Nvidia; they will resolve it in a few weeks' time is my guess. But it is a problem for the so-called professional reviewers who failed to discover and report this issue in an entire month of intensive testing, despite the problem clearly being present given the discrepancy between gaming and non-gaming performance on Ryzen. None of them took the time to investigate this discrepancy.
P.S. The issue might exist to a somewhat smaller extent even with NV's DX11 drivers. On Geothermal Valley, AMD DX11 loses 9,1% performance vs. NV DX11 which loses 25,1% when transitioning from 7700 to 1800.
Those so-called professional reviewers are bound by an online PC hardware reporting industry-wide conflict of interest, with ad dollars and review sample dependencies pressuring reviewers to obfuscate the truth by GAMING the gaming benchmarks themselves. With more optimizations coming for Ryzen, and with both the Vulkan/DX12 graphics APIs and game engines taking more advantage of AMD's GPU hardware-based async compute efficiencies, the AMD Ryzen/Radeon gaming results are improving more with each passing week. Those dual RX 480s have plenty of SP FP TFLOPS of compute, around the same as the Titan X (Pascal), so that is what is being made full use of under AMD's drivers, as that AdoredTV YouTube video showed with a proper amount of truly journalistic investigation.
A lot of these tech sites had preorder links for Ryzen processors, from which they received a portion of the sales generated through their site. Hmmm, conflict of interest maybe?
And Nvidia too when Nvidia does the same, ditto for Intel and the motherboard makers! So that's a big FAIL on your part to try and spin an industry-wide problem as just AMD's fault.
The reviewer GPU, CPU, and motherboard sample process is hardly fair for all concerned! But in your case you are not interested in the overall fairness and objectivity of the review process. You have a green agenda, while none of Red, Green, or Blue (Intel) should be in control of the review sample process concerning their own respective CPU, GPU, or motherboard products.
A random lottery process required by the FTC, with the CPU, GPU, and motherboard makers having to hire an impartial third party to pick the review samples (no cherry-picked samples) and a random lottery to hand out the review samples to the press. This all done with the press having to return the review samples to the impartial third party so more press outlets can get a crack at them.
Let's keep a big library of review samples run by that impartial third party, so the press can request samples for dual-GPU testing, as well as CPU (all brands and SKUs) testing, and motherboard samples likewise. That way review websites can get all the brands of motherboards and other peripherals to test out.
Transparency statements where review sites have to reveal any ad/promotional relationships with any of the makers of the devices that the review websites are supposed to be impartially reviewing!
There needs to be federal dollars earmarked for the scientific analysis of consumer computing products with a stress on development of special hardware benchmarking testbeds and standardized testing OSs to properly test the hardware out for the actual unbiased hardware performance metrics while also trying to test under the commercially available OSs.
With tests of the respective hardware done in a controlled and scientific and peer reviewed process using a standardized Open Source OS testing build so OS/hardware makers can not game the system. Ditto for device drivers with the special hardware benchmarking testbeds having plenty of bus connected testing ports wired up to specialized testing computers that only capture the output/packets transiting the bus system and do not interfere with the results and can not be gamed by any firmware/driver/OS/gaming engine special tweaks that are currently employed to obfuscate any true results.
The video maker glossed over areas that performed as expected; he cherry-picked these two areas that showed what he wanted.
It may not be Nvidia's problem. The fault may lie solely with certain areas of the game. If it were Nvidia's problem it would happen in every game, not just their own. Does it happen in AMD Gaming Evolved titles?
DX12 is tricky to program, as it is basically on the programmers to do almost everything related to video card functions. DX12 was patched into this game, so it's no surprise it doesn't play well with a new processor that wasn't around when it was coded.
That is why game developers generally use a game engine that takes care of a lot of that stuff for them. I don't know if we really have any engines designed from the ground up for DX12 or Vulkan yet. Also, it isn't surprising that Nvidia would be behind AMD here. AMD has been pushing more cores for years and specifically developed Mantle (now Vulkan) to take advantage of that higher core count. Nvidia has focused heavily on DX11 performance, which is mostly limited to a single thread. They also did a lot of game-by-game optimization, which is somewhat ridiculous.
Pushing single thread performance is not going to be enough going forward. If you have an 8 core processor, a single core is only 12.5% of the available processing power. If you push single thread performance by even 20% (huge by today’s standards), you only get to 15% of the original potential. Even a badly threaded application might achieve multiple times that level of performance with that many cores available.
It’s a common misconception that dx11 isn’t multithreaded. It is.
http://www.techradar.com/news/computing-components/graphics-cards/q-a-nvidia-and-amd-talk-up-dx11-643546/2
However, AMD's DX11 drivers don't utilize that as much as Nvidia's do, if at all.
https://www.reddit.com/r/Amd/comments/3sm46y/we_should_really_get_amd_to_multithreaded_their/
https://community.futuremark.com/forum/showthread.php?145185-nvidia-finally-adds-full-DX11-multithreading-support
http://www.overclock.net/t/1573982/amd-gpu-drivers-the-real-truth
DX11 wasn't designed to be multi-threaded; that doesn't mean that it can't be. It is just a lot harder than something like DX12, which was designed for multiple threads from the start.
There are too many variables to draw good conclusions unless you have more information.
#1 you’re comparing single NVidia GPU vs 2xAMD Crossfire.
#2 there’s also Ryzen which in this case has 2x the core count.
It's likely that the big NVidia drop is simply due to the Ryzen performance, whereas DX12 combined with 2xRX-480, more cores, etc. benefits AMD, but it's hard to know exactly where the benefits come from.
Anyway, there are lots of OTHER benchmarks out there that demonstrate NVidia works just fine with DX12. In fact, there was a recent DX12 update.
(also, some games perform worse in DX11, relatively, on AMD than on NVidia due to the drivers. Some of the DX12 benefits vs DX11 were because the game code in DX12 helps avoid the weaker DX11 drivers. That's common knowledge).
Anyway, NVidia users need not be worried.
On a side note, AMD certainly knew about Ryzen's worse gaming performance, as they tried to get reviewers to bench only at 1440p/4K to create a GPU bottleneck so Ryzen looked better. Shady, to say the least.
Shady? The only shady company in the CPU market is Intel, and they even have a legal letter proving that!
https://www.extremetech.com/computing/184323-intel-stuck-with-1-45-billion-fine-in-europe-for-unfair-and-damaging-practices-against-amd
By the way they also lost in the U.S for similar reasons!
And about Nvidia, let's just say 4 GB RAM-gate, Gameworks and so on…
AMD vs Nvidia Drivers:
A Brief History and Understanding Scheduling & CPU Overhead
https://www.youtube.com/watch?v=nIoZB-cnjc0
PS: I own multiple Intel CPU’s and several Nvidia GPU’s to be clear.
Yeah tell me the 970 physically doesn’t have 4 gigs.
AMD is being sued over “cores” in bulldozer right now in class action suit.
http://www.pcworld.com/article/3003113/components-processors/lawsuit-alleges-amds-bulldozer-cpus-arent-really-8-core-processors.html
Intel’s legal troubles are from 2003-2007. There have been plenty of times AMD/ATI have been caught as well.
Nothing wrong with Gameworks; it's the same as AMD having their Gaming Evolved and Mantle (proprietary as well).
LOL.
Good luck with the Bulldozer class action. Also, in the article you linked, the writer himself says the chip does indeed scale well with threads.
Bulldozer's problem was IPC and performance per watt, not scalability.
Yeah tell me the 970 physically doesn’t have 4 gigs.
Hmmm… because the 4 GB RAM-gate is not a thing, NVIDIA paid me back money just out of goodwill, I guess…
AMD is being sued over “cores” in bulldozer right now in class action suit.
The FX has "8 AMD CPU Cores" – the definition of a CORE is one thing. Intel got sued and found guilty in many cases; that was only one example I gave you ("GIYF").
Because you don't get it, let's try it this way:
If I sell you a car, I will tell you what's great about it, and not what is not so great! And if I give you free new hardware, I will tell you which benchmarks I'd like to see from you…
AMD told them to benchmark at 1440p, but they didn't say you can't bench at 1080p or any other resolution, so where exactly is that shady…
Intel, on the other hand, paid and threatened customers not to buy AMD CPUs – THAT'S the whole way from SHADY to just unacceptable.
But keep defending companies like Intel and Nvidia, because they just don't give a shit about you… while you keep fanboying over their stuff! One day you will realize that you are not in a relationship with them; you are more like a crazy STALKER. And all this smack talk and down talk about AMD and other smaller brands just makes you seem needy… needy for attention!
As said before, I am an Intel/Nvidia user, but I am not too delusional to know what I bought into… AMD had no CPU in the last 5 years that was worth buying for my needs; that's why I am stuck with Intel. And that Nvidia wasn't taking Mantle and AMD seriously and now has to face the DX12 troubles, well, that's their own mistake and not AMD's!
I am glad AMD is back and the market is changing, and if you think that Nvidia would try as hard as they do right now without AMD, that is just a dream.
AMD, on the other hand, is for sure guilty of stuff too, but not to such a degree! And they need to fix their HYPE TRAINS – that's just ridiculous most of the time.
AMD is back in the CPU market, and with sales comes R&D money. Whether you like it or not, that's great for the market and the customers… But FANBOYS like you are like CANCER for it!
And to all you AMD Fanboys – you guys are CANCER as well !!!
Nothing wrong with Gameworks…
So you say GameWorks is like AMD's Mantle… AMD made Mantle and gifted it to the Khronos Group… which made it into Vulkan… which runs on Nvidia as well and is open source… hmmm, I just can't see your point here…
You aren't serious, are you?!
You are like the dude who shot himself 5 times and still talks about how safe a gun is! But let’s face it, every gun is a weapon, but not every weapon is a gun!
I have no problem with Intel, Nvidia or AMD, because I know that in the end they all just want the same thing… MONEY… and I will spend it where I get the most for my BUCKS!
They settled the RAM-gate suit because it was most likely cheaper to do so than to drag it out in court. Lawyers aren't cheap, you know.
With Bulldozer they advertised full cores, not core complexes that share an FPU among other things. What about the Llano lawsuit? Does this ring a bell: convincing stockholders that it would sell extremely well? AMD was also sued by smaller companies for "borrowing" their IP and settled those as well.
If you seriously think AMD would give Mantle away if it could work as well on Nvidia cards, you're delusional. It was proprietary tech designed for their cards only, and when no one would support it, they gave it away. They learned that trick from Intel, who would release open source compilers that ran Intel optimally and AMD not so much. But hey, it was open source and smaller developers didn't have to pay to develop it.
Vulkan runs as well on Nvidia? What kind of crack are you smoking? When Doom's Vulkan patch came out, AMD was the only one getting a massive performance uplift. Now that it's been patched, AMD still gets the bigger boost; Nvidia does better than they did originally.
As for DX12, Volta will probably be the best-designed DX12 card in the world, and its release will have AMD begging for a DX13 from Microsoft. DX12 is a dead man walking. No one wants to go to the added expense of coding a game from the ground up except, you guessed it, Microsoft. The adoption rate of DX12 is slower than DX10's, and that died after 3 years. And yes, Vulkan may hasten its demise. Don't worry, it will still be there for AMD, because their DX11 performance is lacking. Even in half-assed patched-in DX12, AMD's performance is usually better than their DX11.
Most if not all companies do shady things to earn a buck. A gun by itself is indeed safe; it's the person using it that is the danger.
That's what I like about you, anonymous: you're so quick to anger and insult. I may be an Nvidia fanboy, and to a much lesser extent an Intel one, but I'm definitely not you.
Uh…
1) First, AMD has ITSELF already been called out for testing at 4K, doing an unrepeatable game + video encode stream, and basically AVOIDING anything that showed the low FPS scores that 1080p revealed.
How do you sidestep the initial post and just say “the only shady company..is Intel”?
And BTW, AMD has done many misleading things over the years. I’m NOT saying Intel is innocent but saying AMD does no wrong implies IGNORANCE of the situation (meaning lack of knowledge, not intelligence).
2) NVidia and 4GB… blah blah. Wow, that gets dragged out so much that it loses all meaning.
That was a communication error. In fact, it was rarely much of an issue in gaming, but people like to go on and on about it.
3) Gameworks… that's a long story, sure, but NVidia has opened up a lot of the code now so it's not quite the same as it was. Yes, there were issues, but it was severely overblown too.
Please, do mention how EVIL NVidia is because Crysis 2 had overly tessellated concrete.
4) Intel did screw AMD with the unfair price fixing. Sure, that's true.
It doesn't change the POINT of how misleading AMD was with some of their cherry-picked results though, which is exactly WHY so many people were surprised and annoyed.
AMD only demonstrated scenarios that roughly matched or exceeded Intel's CPUs. That's not true in many 1080p gaming scenarios, and there are many APPLICATIONS which use just a few cores where Intel is a lot better as well.
1080p you say? But what about 720p?
You are right: if I can't game at 400 FPS at 720p on my overclocked Titan X, the CPU is s*** for gaming.
No, AMD wanted to make sure reviews were fair (because they remember the massive bias at almost every one of their HW releases).
Fair is: this CPU will max out all GPUs (and often do better than Intel) at 4K, and perform near the margin of error at 1440p on nearly every GPU but the super high end.
Why does this matter? So AMD doesn't end up with conclusions like
"AMD still can't match Intel for gaming."
It's like saying that if Intel released an 8 GHz 8-core CPU, a GTX 1070 would suddenly go faster at 1440p than it does with a 5 GHz i7-7700K.
Most review sites have completely lost perspective on CPU/GPU reviews for gaming…
What matters is the CPU/GPU pairing. No CPU or GPU is best on its own; you have to pair them, they don't work in isolation.
Well, people should ideally get their information from SEVERAL reputable sources so they can have a reasonably informed overview of the situation.
Again, there are SEVERAL sites that do an excellent job of explaining gaming, and a number of application performance results.
Do you really think it’s surprising that people are annoyed with AMD when they cherry-picked their results to imply Ryzen was better overall than it was?
And don’t forget that it’s not just the CPU. There are BIOS, memory, and other issues that prevent their CPU getting the best performance. In some cases it’s under 60% of what it would be once the software issues are fixed.
If people want to whine on and on about biased reviews then go ahead. They certainly exist (as well as misinformed reviews). I would argue that many of the people reading about Ryzen are similarly to blame in that many spent just a couple MINUTES with minimal computer understanding to come up with an opinion about the state of the review status overall.
In other words, there is sufficient information to have a very CLEAR understanding of the present issues related to Ryzen if you spend the time to look.
Other:
Where are the benchmarks that show Ryzen “often” doing better than Intel at 4K?
(and by Intel it should be an overclocked i7-7700K vs R7-1700 if you want to compare by price, though an R7-1800X is pretty close to the R7-1700 anyway)
Considering most 4K results are bottlenecked by the GPU anyway, I find that pretty unlikely.
And since very few games need more than what an i7-7700K offers, even if the GPU wasn’t a bottleneck at 4K that would still be pretty unlikely.
**Seriously. Show me these reputable links where RYZEN is beating an i7-7700K at 4K.
(and yes, we pair CPU + GPU. That’s true. I would certainly consider an 8C/16T Ryzen for video editing, and with a GTX1070 or similar most games should be run at 2560×1440. There is an ideal CPU + GPU combination for everybody depending on how they use their computer… and the fabrication process will improve as well. In 2018 we’ll see higher overclocks which will close the gap to current Intel CPU’s though of course you’d need to again look at the big picture including PRICE to determine the best way to go..
Off topic a bit, but I’m also curious about Intel OPTANE though very few desktop users would benefit compared to an SSD. I just mention that because it needs a modern Intel motherboard currently)