UPDATE: We have put together a much more comprehensive story based on the NVIDIA 337.50 driver that includes more cards and more games while also disputing the Total War: Rome II results seen here. Be sure to read it!!
When I spoke with NVIDIA after the announcement of DirectX 12 at GDC this past March, a lot of the discussion centered around a pending driver release that promised impressive performance advances with current DX11 hardware and DX11 games.
What NVIDIA did want to focus on with us was the significant improvements that have been made to the efficiency and performance of DirectX 11. When NVIDIA is questioned as to why they didn't create their own Mantle-like API if Microsoft was dragging its feet, they point to the vast improvements that are possible, and have already been made, with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that must be integrated into each engine individually (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.
NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles with addressable CPU limits or on very low-end hardware, similar to how Mantle works today.
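To make that claim concrete, here is a minimal sketch of the reasoning, with made-up numbers rather than anything NVIDIA has published: per-draw-call CPU cost only matters when the CPU submit thread, not the GPU, is the slower side of the pipeline.

```python
# Toy model (illustrative numbers only, not NVIDIA's implementation):
# frame rate is capped by the slower of the CPU submit thread and the GPU.

def fps(draw_calls, cpu_us_per_call, gpu_frame_ms):
    cpu_frame_ms = draw_calls * cpu_us_per_call / 1000.0
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

# Hypothetical scene with 10,000 draw calls per frame:
print(fps(10_000, 2.0, 11.0))  # CPU-bound: 20 ms CPU vs 11 ms GPU -> 50 FPS
print(fps(10_000, 1.2, 11.0))  # driver overhead cut 40% -> 83 FPS
print(fps(10_000, 2.0, 25.0))  # GPU-bound: the same cut changes nothing -> 40 FPS
```

In other words, a driver-side overhead reduction is invisible on a GPU-bound setup, which is worth keeping in mind when reading the results below.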
Lofty goals to be sure. This driver was released last week and I immediately wanted to test and verify many of these claims. However, a certain other graphics project kept me occupied most of the week and then a short jaunt to Dallas kept me from the task until yesterday.
To be clear, I am planning to look at several more games and card configurations next week, but I thought it was worth sharing our first set of results. The test bed in use is the same as our standard GPU reviews.
Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 780 Ti 3GB, NVIDIA GeForce GTX 770 2GB
Graphics Drivers: NVIDIA 335.23 WHQL, NVIDIA 337.50 Beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64
The most interesting claims from NVIDIA were gains as high as 70%+ in Total War: Rome II, so I decided to start there.
First up, let's take a look at the GTX 780 Ti SLI results; the 780 Ti is NVIDIA's flagship gaming card.
With this title running at the Extreme preset, the average frame rate jumps from 59 FPS to 88 FPS, an increase of 48%! Frame rate variance does increase a bit with the faster average frame rate, but it stays within the limits of smoothness, if only barely.
Next up, the GeForce GTX 770 SLI results.
Results here are even more impressive, as the pair of GeForce GTX 770 cards running in SLI jumps from 29.5 average FPS to 51 FPS, an increase of 72%! Even better, this occurs without any increase in frame rate variance; in fact, the blue line of the 337.50 driver actually performs better in that respect.
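As a quick sanity check, the quoted gains can be recomputed from the rounded FPS averages above; any one-point difference from the in-text percentages comes down to rounding of the published averages.

```python
# Recomputing the driver gains from the rounded FPS averages quoted above.
def pct_gain(old_fps, new_fps):
    return (new_fps - old_fps) / old_fps * 100.0

print(f"GTX 780 Ti SLI: {pct_gain(59.0, 88.0):.1f}%")  # ~49% (article: 48%)
print(f"GTX 770 SLI:    {pct_gain(29.5, 51.0):.1f}%")  # ~73% (article: 72%)
```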
All of these tests were run with the latest patch on Total War: Rome II and I did specifically ask NVIDIA if there were any differences in the SLI profiles between these two drivers for this game. I was told absolutely not – this just happens to be the poster child example of changes NVIDIA has made with this DX11 efficiency push.
Of course, not all games are going to see performance improvements like this, or even improvements that are measurable at all. Just as we have seen with other driver enhancements over the years, different hardware configurations, image quality settings and even scenes used to test each game will shift the deltas considerably. I can tell you already that based on some results I have (but am holding for my story next week) performance improvements in other games are ranging from <5% up to 35%+. While those aren't reaching the 72% level we saw in Total War: Rome II above, these kinds of experience changes with driver updates are impressive to see.
Even though we are likely looking at the "best case" for NVIDIA's 337.50 driver changes with the Rome II results here, clearly there is merit behind what the company is pushing. We'll have more results next week!
RTWII is GPU memory intensive, and the game will complain if you don't have enough for all the battle objects that have to be rendered at your settings. I'm betting the improvements revolve around that. Still, I'm impressed that this was done with cards under 4GB, which is less than what I would recommend for this title. Thanks for the heads-up; I'll try it out soon.
You sure these Total War: Rome II gains aren't attributable to this being the first driver to ship an SLI profile for the game?
If previous drivers didn't support SLI for Total War: Rome II, there wouldn't have been any SLI performance gains until this driver.
http://www.geforce.com/drivers/results/74636
Did you check whether the temperature threshold was increased in these drivers?
I agree with this. Nvidia is trying to pull a fast one on us, and Ryan seems to be pushing their spin.
SLI was broken and they fixed it. Now they say, look at the low-level improvements we've made with this new driver.
This driver is a load of crap. All they did was enable proper SLI scaling for Total War: Rome II. I have a Gigabyte GTX 770 Windforce and the only thing the driver accomplished was driving up the temps. It seems most of what it does is let the GPU pull more power so it can try to boost higher.
In Battlefield 4 I lock my frame rate at 59.9 in user.cfg. Before this driver my card stayed in the high 60s and low 70s (degrees C); after installing it, my temps jumped into the low 80s. I feel like Nvidia is lying about their vaunted DX11 optimizations.
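[For anyone wanting to replicate this commenter's setup: the 59.9 value is from the comment above, and a cap like this is usually set in BF4's user.cfg via the standard Frostbite console variable shown below. Treat the exact line as an assumption; the commenter did not post it.]

```
GameTime.MaxVariableFps 59.9
```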
If the optimizations unblock a previously CPU-limited scenario (because of draw calls or whatever), then yes, higher temps are to be expected.
More stuff for the GPU to work on > GPU works harder > GPU gets hotter.
If the GPU is working harder and reaching its thermal threshold faster, it would throttle sooner and more often.
Nvidia would have to raise the thermal threshold, something that was indeed noticed during testing with the latest drivers.
[H]ardocp – AMD Radeon R9 295X2 Video Card Review
All these small percentage increases could be a result of the threshold being raised rather than of any optimization at all.
335.23: 84C (normal, same as all previous drivers)
337.50: 87C (new thermal threshold)
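A toy model of that hypothesis (made-up numbers, not NVIDIA's actual GPU Boost algorithm): if boost is temperature-limited, raising the thermal ceiling alone lets the card sustain higher clocks, which reads as "free" performance with no driver optimization involved.

```python
# Toy model of temperature-limited boost: the card raises its clock until
# steady-state temperature reaches the ceiling. All numbers are invented.

def sustained_clock_mhz(ceiling_c, base_mhz=1000, ambient_c=40, deg_per_100mhz=5.5):
    headroom_c = ceiling_c - ambient_c
    return base_mhz + 100 * headroom_c / deg_per_100mhz

old = sustained_clock_mhz(84)  # 84C ceiling (335.23, per the numbers above)
new = sustained_clock_mhz(87)  # 87C ceiling (337.50, per the numbers above)
print(f"{old:.0f} MHz -> {new:.0f} MHz (+{(new - old) / old * 100:.1f}%)")
```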
NVIDIA shared a TON of data in their article on GeForce.com. http://www.geforce.com/whats-new/articles/nvidia-geforce-337-50-beta-performance-driver
If you look at the leaked presentation they gave the press, there's a ton of detail there too.
BOTH drivers were tested with the latest Rome II patch. NVIDIA said the SLI profiles were identical between the two drivers; the only difference is that the new beta has the tweaks that ease the CPU bottleneck so the two GPUs can be fed faster.
Thx for the free performance NVIDIA!
Yes, thank you Nvidia for only making us wait 4 years for a DX11 driver improvement.
DX11 came out in 2009; Nvidia realizes they can update their DX11 driver in 2014.
I look forward to their DX12-enhanced driver come 2019. I can't wait!!! Free performance 4 years later!!! Wooohooooo!!!!
Well, you can always use the old driver. Nobody is making you use this driver.
Sorry, but I get sick of gamers who complain when things get better for them. It's like nothing is ever good enough, and all you do is troll and look for something to complain about.
Good bench. Definitely, this driver is kicking Mantle's ass on CPU overhead... on a 3960X, where the CPU is almost never the bottleneck. Just epic.
Seriously, a Total War: Rome II benchmark??? The only game with a bug that kills 20 FPS, and guess what the gain from the driver is in this bench: 20 FPS. Epic. "Based on our synthetic results the majority of these numbers are logical, but a few may leave you scratching your head. I'll try to fill in the gaps. A bug has been discovered with the latest Total War: Rome II update which kills CrossFire support. This means that 20 frames per second average was being produced on a single GPU. Hopefully AMD can work with the developers on a fix since that graphics engine gets very resource hungry at higher resolutions."
The full review can be found here: http://www.forbes.com/sites/jasonevangelho/2014/04/08/radeon-r9-295x2-review-amd-delivers-on-a-promise-with-exciting-liquid-cooled-gpu/
That's where AMD said there was a glitch in Total War: Rome II. And you picked specifically this game, which makes me wonder: was it your pick or Nvidia's pick? Honest question, Ryan.
Just a reminder: that review was posted 4 days ago and the glitch was already known, namely that any dual-GPU CrossFire or SLI setup gains about 20 FPS once the bug is fixed. What bothers me is that it's being used to create the illusion that CPU overhead optimization produced the gain.
I really want to know whether any other games were benched, and whether this game was suggested by Nvidia for the test or was just a random pick.
I tested a few games with my 980X and two 780s in SLI with GPU Boost off and didn't get anything close to what they claimed.
Maybe 2-5 FPS max.
Someone in another post made the good point that the driver seems to raise the threshold on Kepler boost speeds, resulting in higher clocks/FPS.
Nvidia is talking out of their ass; the driver is pretty much making the GPU work harder to get more frames, not actually optimizing anything.
This would explain why SLI is seeing the biggest gains over single GPU.
It would also explain why I didn't see any improvements with my cards, since I have boost disabled.
"The most interesting claims from NVIDIA were gains as high as 70%+ in Total War: Rome II, so I decided to start there."
NVidia made a performance claim about this game specifically, so Ryan tested that game to verify said claim. The answer to your question was right up there in the article.
Yeah, I saw that, but I guess it was just a happy accident.
Although I'm still wondering about single-GPU performance results, far away from the multi-GPU bug, and about a lower-end setup that isn't GPU-bound, so we can see the miracle of this wonder driver's CPU overhead reduction.
The fixed version of the game was used in both test sets here: I ran the same version of the game with 335.23 and 337.50.
Sorry Ryan, this feels to me like a sponsored review, no offense.
I mean, the results hide behind multi-GPU scaling and a bug, on a rig with a 3960X, at very high resolutions where the GPU is the bottleneck, all to insert a paragraph about the Mantle API versus Nvidia's driver-level bottleneck eraser.
Nothing in this article makes sense, sorry. If only you had waited for next week's results, or avoided bringing Mantle into the comparison. It feels weirdly steered in a specific direction, without anything convincing, and just sketchy at best.
It's a common theme with Ryan.
Check out the latest podcast. In the 295X2 review discussion, Josh had to point out that Nvidia's frame variance has gotten worse. I think he said, "Someone threw up on your graphs and it wasn't orange."
Ryan was quick to say, "Oh, we talked to Nvidia and they are working on it." Not one article about it or anything. Compare that to the weekly updated articles on AMD.
It didn't always seem this way. Since the frame pacing story came out, PCPer has turned green. It's sad too, because this used to be a good, unbiased site. And this seems like another example.
Every time some new test method points out a flaw with a specific vendor, sites are accused of siding with their competition. You guys do realize that we reported on frame pacing issues with Nvidia cards *before* we reported on that (worse and longer-running) issue with AMD cards, right?
If Ryan was biased, I would not be working for him.
It's not about a new testing methodology. It's about consistency.
In order to have integrity you have to be consistent. If Ryan were to brush off the AMD pacing problems as quickly as he brushed off the results Josh pointed out to him, then fine; at least he would be consistent.
It seems he is more willing to take Nvidia's word for it than to follow up on it, unlike what he did with AMD.
That's neither integrity nor consistency; it reeks of bias and hypocrisy.
This is typical of most sites lately. They are unforgiving when talking about AMD, and happy to be less harsh towards Nvidia. This isn't strange, to tell you the truth: you have to be careful with Nvidia, because tomorrow they will still be here, but AMD, who knows? On the other hand, we all know by now, I guess, that Nvidia plays dirty while AMD takes a more naive approach. Say something bad about AMD and tomorrow the sun will shine again. Say something bad about Nvidia and then search for a deep hole to hide in.
I think you hit the nail on the head, but are these sites really to blame? I think AMD is too slack for not pulling out the stick the way Nvidia does.
I also think favouritism is at play here. Most sites get caught out by not being consistent ("sorry, was busy") enough.
What shocks me is that he had time for SLI but not for single-GPU. ROFLMAO
I have told Ryan in the past that you are doing it WAY TOO OBVIOUSLY. Maybe this is a necessary game you HAVE to play with big companies, either to grow bigger or just to survive the competition between hardware sites. But it is really easy to spot the articles that are biased, or that read more like press releases/paid advertising than objective reporting, and there have been just too many of them lately.
For example, all those little videos promoting the 7850K as a gaming APU. Then, to balance the situation a little, you go and propose gaming platforms for Titanfall where AMD is nowhere, absolutely nowhere, for a game that isn't CPU-dependent if I'm not wrong, which means you could lower the cost of the platform by using AMD hardware and invest most of it in the GPU (AMD or Nvidia). Now you play Nvidia's marketing game with the "wonder driver," the "Mantle killer" driver, showing just one game and calling it impressive. We weren't born yesterday. I remember 40%+ gains from drivers a decade back; generally there are double-digit gains in just about every new driver in one or more titles. And are we really comparing this driver with Mantle using a 12-thread monster? This is ridiculous for a site that has proved many times that the people behind it are real experts. And of course the WHOLE article, with ALL the wonderful and amazing charts that show the massive performance improvements in Rome II, is visible without even needing to open it.
Seriously????
Yeah, even our front page is just so green:
Is it just me, or has the colour of the site also become a bit greener to match Nvidia? ROFLMAO
That was my comment about the orange barf; then again, if our pointing out an issue with a driver we reported on in an article is proof we are covering up for NVIDIA, then I suppose your observation is not too surprising.
The proof is plain and simple. Nvidia provided the proof.
335.23: no SLI profile for Total War: Rome II.
337.50: SLI profile added for Total War: Rome II.
Other tech sites have already pointed this out.
ExtremeTech – Nvidia’s questionable GeForce 337.50 driver, or why you shouldn’t trust manufacturer-provided numbers
http://www.extremetech.com/gaming/180088-nvidias-questionable-geforce-337-50-driver-or-why-you-shouldnt-trust-manufacturer-provided-numbers
So Ryan is either too lazy to even read the driver release notes, or incompetent enough to just take Nvidia's word for it.
Thanks Ryan for the article; disappointing, but I still appreciate the work.
To explain why I was disappointed, I'll share a few reasons with you:
1. You picked a game with a known SLI/CrossFire bug (you probably didn't know about it).
2. The article focuses on the CPU overhead claim, while the CPU is a 3960X instead of a lower-end CPU that would show how efficient the driver really is (many people commented on this specific point on the first release news; I was hoping you would take them into consideration).
3. Reducing CPU overhead doesn't require SLI, and all the other benches have shown that this driver is basically multi-GPU scaling, nothing else.
So if I were you and wanted to investigate CPU overhead, I would have started by benching a single GPU and a low/mid-end CPU, then used those results to reach a conclusion about CPU overhead in this driver. Then I would have run SLI in a few games (you probably didn't know about the bug in Total War: Rome II) to reach a conclusion about multi-GPU scaling.
But honestly, I think this article is misleading on way too many points, which strips it of any value as research or objectivity. In the end I still have the same questions as on the day Nvidia announced this driver (although other, more in-depth reviews showed no CPU overhead reduction and good multi-GPU scaling, this article addresses neither). I was hoping to get an in-depth review from PCPer; I hope Ryan will still take my humble opinion into consideration and maybe investigate this further (and maybe take the article down for further work, because this one seems like a waste, sorry).
1. The newest version of the game was used for both sides of the comparison.
2. The game does not use all threads of the CPU; more specifically, the driver/API usually runs on a single thread.
3. CPU overhead is greater in SLI (see the sketch below).
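To illustrate points 2 and 3 with a toy model (assumed numbers, not measurements from either driver): if submission runs on one thread, each additional GPU adds CPU work per frame, so cutting driver overhead pays off most in SLI while a GPU-bound single card barely moves.

```python
# Toy model: one submission thread feeds N GPUs. SLI doubles the per-frame
# CPU cost while halving the per-frame GPU cost. Numbers are assumptions.

def fps(gpus, cpu_ms_per_gpu, gpu_frame_ms):
    cpu_ms = gpus * cpu_ms_per_gpu   # single thread: submission costs add up
    gpu_ms = gpu_frame_ms / gpus     # the GPUs split the rendering load
    return 1000.0 / max(cpu_ms, gpu_ms)

for cpu_cost in (9.0, 5.5):          # before vs. after a driver overhead cut
    print(f"1 GPU: {fps(1, cpu_cost, 16.0):.0f} FPS, "
          f"2 GPUs: {fps(2, cpu_cost, 16.0):.0f} FPS")
# The single card stays ~62 FPS either way (GPU-bound), while the SLI pair
# goes from ~56 FPS (CPU-choked) to ~91 FPS once the overhead drops.
```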
1. Not when the driver fixes the multi-GPU scaling bug, where 20 FPS were being generated on the first GPU instead of the second. Single-GPU results would still have been more relevant, far away from the shady state of SLI in this game.
2. Why use a 3960X to test overhead reduction? Isn't a low/mid-range CPU better, and much more relevant to the topic of the article?
3. Not when you bench at high settings and ultra-high resolutions, where the R9 295X2 shows how 780 Ti SLI reaches its limits due to low memory. Everyone tested Mantle's overhead on a single GPU much more accurately and clearly; why are you failing to do the same with this driver?
Here we go again with a new driver that fixes a game and is hailed as revolutionary. Impressive, at least.
Oh, oh wait. I forgot. It also fixes CPU bottlenecks, when using a 12-thread, ultra-fast Intel CPU.
I fail to see the point of so much bitching. You all sound like a bunch of old ladies who didn't get their complimentary lemonade at a bingo tourney. Wait for the damn drivers and test them out yourselves. Nothing in this "review" reads like it's set in stone, nor do I think that was the intention.
The drivers are out and the benches look nothing like the NVIDIA slides.
“All of these tests were run with the latest patch on Total War: Rome II and I did specifically ask NVIDIA if there were any differences in the SLI profiles between these two drivers for this game. I was told absolutely not – this just happens to be the poster child example of changes NVIDIA has made with this DX11 efficiency push.”
Now Ryan, do you really believe that?
And do you have solid proof that Nvidia is lying in this regard?
Solid proof, no. Common sense, yes.
You really think they pulled that off without changing the SLI profile? If there were such great improvements, where are they in non-SLI setups?
NVIDIA is insulting us, at the very least. Think about this a little: if there were no pressure from the competition (Mantle, etc.), they would never have released these improvements. Again, it's insulting to all NVIDIA customers.
Check this link for tests; it shows lower CPU utilization.
http://www.computerbase.de/2014-04/geforce-337.50-cpu-skalierung-benchmarks/
Google English translation
http://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fwww.computerbase.de%2F2014-04%2Fgeforce-337.50-cpu-skalierung-benchmarks%2F&edit-text=&act=url
Don't make me laugh. CPU utilization? In SLI with a 3960X, at 2.5K/4K/6K resolutions, with everything maxed out? On what planet does the GPU not bottleneck first, leaving the CPU as the limiter?
The link I posted is a single 780 Ti at 1920x1080 using 2 cores, 4 cores, and the full 6 cores.
The "K" in the graph is the core count.
It shows the lower core counts gaining more performance than the higher ones.
Sure, if you pick settings and a game that are totally GPU-limited, you're not going to see an improvement, just as a faster CPU wouldn't help in that circumstance either.
Using a 3960X to test overhead reduction by core count? Why not use a low/mid-end CPU? Or do these drivers only work on a 3960X? I don't get it.
This is because the threads that are limiting the game are just that – threads. A given thread only runs on a single core.
Yes, I got that from the answer above; I thought the figures were resolutions XD. It just doesn't make sense to me to use a thousand-dollar CPU with cores disabled, when each of its cores outperforms those of the other CPUs and it has far more cache; that doesn't give you an accurate picture of multi-core scaling, which makes it pointless in the end.
That's because of the higher clocks on the "lower-end CPU," and because the game doesn't use all the cores on the 6-core chip.
On this planet – and with that game apparently.
I guess that's sarcasm XD; OK, I deserved it.
I still stand by my opinion, though. This article introduces a wonder driver coming to fight Mantle on its own territory: CPU overhead reduction. We all saw how professional every site became when it was time to disprove AMD's claims; everyone used low/mid-end CPUs for benching.
But when it's Nvidia making a crazy claim, everyone becomes an amateur, benching on a 3960X, even going as far as disabling cores on that specific CPU to get some results. Most of the other sites that used different CPUs got the average performance of any other driver, or none at all (along with some new SLI profile additions).
I know the Mantle phenomenon is a good selling point right now, and any article about it gets good views. I also understand that Nvidia needs to pull some of the focus away from Mantle and try to undermine its qualities, having no proper reply for the two years until DX12.
But Ryan doesn't have to play Nvidia's game by misleading viewers. The driver itself is good enough, with good SLI scaling. Why did he have to bring up Mantle in his article, with absolutely no objective bench to back him up, using a bugged game (fixed for SLI by the latest driver, gaining exactly around 20 FPS, as AMD said), without even single-GPU results (Nvidia claimed a 60% gain with a single GPU in Rome II)? What, he couldn't disable a GPU and rerun the same test?
Seriously, does nothing of what I said make sense to anyone else at PCPer?
English Please
Well, you could run socket 1150 with a dual-core i3 vs. an i5-4670K vs. an i7-4770, but that's more work.
Disabling cores is easier to test, and the driver install, hardware, and OS stay the same.
The results should still make for a good test.
Not the same. The i7, i5, and i3 are different chips; by disabling cores you still have clock and cache differences.
You're going to compare a 3960X (3.3-3.9GHz, 15MB cache) to a 2130 (3.4GHz, no turbo, 3MB cache) just by disabling cores?
That's lazy, and it produces inaccurate results.
Honestly, I'm starting to think that most of the optimization, if there is any, is exclusive to the 3960X. Why is every benchmark using it to test CPU overhead reduction in this driver? Not only does it not make sense, it looks really fishy that so many are using it. Who expects people to have a $1000 CPU?
OK, I see your point and agree; it was the only test I had come across so far.
Maybe Ryan will test on a broader range of system specs, and then we will see.
But there are many who have reported increases, and they're not all on top-end CPUs; many were 35xx chips and even older quad cores (Qxxxx).
They're coming out with a WHQL release with supposedly even more optimizations, so we will see.
ExtremeTech published a very interesting article about Nvidia's Total War: Rome II claims. In it they report that there was no SLI profile before the new driver, and that the 71% performance improvement exists because only a single GPU was ever being utilized before, meaning SLI simply didn't work. Nvidia are dirtbags, and I'm glad I sold my 780 and went with AMD.
If there was an oversight, I'm not so sure Ryan was aware of it. Even if Ryan did speak with someone from Nvidia about the results, I believe they would purposely deceive him if they thought they could get some positive press and get away with it at the same time. Nvidia knows how to market its products: if you remember the performance slides they released comparing the 780 Ti to the 290X before the 780 Ti launched, they showed the 780 Ti around 30% faster in all situations but failed to clarify that they were using reference 290X cards in quiet mode. Turns out it didn't quite destroy the 290X.
Will there also be Civilization V? Either I am getting strange results (observing a bug or regression) or I can't read the log file.
Funny that you would bring that up.
The new Civilization: Beyond Earth will be running on Mantle.
Source: http://www.hardware.fr/news/13653/mantle-nouveau-civilization.html
The problems of the first world. I wonder what the third world worries about.
They worry about having more kids and when the USA is going to send them more food. So let us worry about our graphics cards, as long as we send them food.
It is really this one paragraph I can't get past…
“All of these tests were run with the latest patch on Total War: Rome II and I did specifically ask NVIDIA if there were any differences in the SLI profiles between these two drivers for this game. I was told absolutely not – this just happens to be the poster child example of changes NVIDIA has made with this DX11 efficiency push.”
Does anyone really believe that?
Say that is exactly, word for word, what Nvidia told Ryan: he wrote it down without even questioning whether it was true? There is no sentence saying we need to look into this further to make sure it is true.
It's like a news reporter publishing a story about something without checking their sources' facts. It is wrong and irresponsible.
Notice that Ryan never said it himself? You can't call him a liar if he never made the claim. Pretty bad, though, calling an nVidia marketing claim a review. Call it an advertisement, like it is.
Trust me, I questioned it. I have seen some stuff that leads me to believe it is true.
That being said, how can I "check my sources" in this case? With this kind of driver-level issue on the debate table, you pretty much can only believe or not believe what the company says.
Well Ryan, let me disagree with you!
You apparently heard of the Rome II multi-GPU issue beforehand, so why didn't you supply further results on a single GPU, since Nvidia claimed 60% gains there too? It seems weird to me that you would stop at SLI without disabling a GPU for a single-GPU test, since everything was already up and running.
And no, you don't have to believe what they say. That's why people need websites like yours that they can trust: to run tests and prove or disprove these claims. Otherwise, what's the point?
Nvidia's dubious claim isn't multi-GPU scaling but CPU overhead reduction in DX11; that's why you started the article by comparing it to Mantle. Are you telling me you don't know how to test whether this is effective in CPU-limited situations? Run a single GPU with a low/mid-end CPU at lower resolutions, and prove or disprove the claim. Then run SLI for multi-GPU scaling, and give us a conclusion on what this driver is really doing: effectively reducing CPU overhead (I very much doubt it), scaling multi-GPU well, or just enabling new profiles (probably the case).
But saying we can't really know anything beyond what they tell us, that's BS, sorry. You could have written a much more relevant article about the driver if you had wanted to.
No one is asking you to take sides, just to be consistent and neutral.
So much for running a tech site which reviews hardware and has a wide variety of CPUs at its disposal to test.
Might as well just be a bulletin board for press releases if your stance is to just believe what they tell you.
How about you wait for the full review before making assumptions?
He hasn't even posted the full results yet.
Wait, it doesn't matter, as I bet most will refute the results whatever they are.
This is exactly one of the points why I didn't like the review: it feels like they picked the only questionable game for an SLI bench on a $1000 CPU to validate Nvidia's wrong claim. If only they had waited for the full results. Why specifically these, and why bring up Mantle overhead if this bench doesn't represent that in any way?
The point is this article shouldn't have been up at all until the results were finished, or it should have commented on the results without comparing to Mantle, but Ryan chose neither.
Apparently it doesn't matter to him either.
Everyone able to read driver release notes knows 335.23 didn't include an SLI profile for Total War: Rome II.
He goes on to call it a "best case." A best case that amounts to one driver which supports SLI versus one that doesn't.
What kind of testing is that?
Single-GPU improvements are what should be measured in Total War: Rome II's case. With one driver having SLI support and the other not, you're not measuring the improvement of one over the other, but non-SLI support versus SLI support.
To then hold it up as an example of improvement is disingenuous at best. It's not like this is Ryan's first run at testing SLI, which makes it even more disheartening that he would do this.
You questioned it, but it took being called out on it for you to admit that.
Shouldn't you be upfront with your readers?
Hey guys, I've seen big improvements with 3D Vision too.
The shader cache is also working, cutting loading times.
Can you also test CPU-bound scenarios with single GPUs, using lower resolutions, etc.?
3D Vision results:
GTX 660 on an i5-2500K @ 4.3GHz
BF4 – SP Baku, after the helicopter blows up the elevator and the hunt starts. Hiding behind the pillars, with the smoke ahead:
1600x900 – Ultra, except for "effects" on high, and MSAA off.
Driver: 3D off; 3D on (100 depth)
335.23: 73; 31
337.50: 71; 46
Some of you guys need to get a f###ing life. You would think you could build a better graphics card yourself. If you're all so f###ing smart, go to AMD or Nvidia and build some cards. If not, shut up.
Ryan, I think you guys are doing a great job. Don't listen to these jacka##es. All they do is get on here and cry about something.
Yeah, these guys are dumb.
They should all be thankful when a company deceives them.
The nerve of these people, spending their hard-earned money on a product and not wanting to be deceived by marketing tactics.
How dare these consumers ask for accountability for a company's claims.
Dat open advertisement for nVidia again 😀
Even more obvious than the awesome Titanfall system guide bullshit 😀
Congratz!