Summary of Events
We look at AMD Eyefinity against NVIDIA Surround with our Frame Rating technology. The results may surprise you!
In January of 2013 I revealed a new testing methodology for graphics cards that I dubbed Frame Rating. At the time I was only able to talk about the process, which uses capture hardware to record the output directly from the DVI connections on graphics cards, but over the course of the following months I started to release data and information gathered with this technology. I followed up the story in January with a collection of videos that showed some of the captured footage and the kinds of performance issues and anomalies we were able to easily find.
My first full test results were published in February to quite a stir, and then in late March I released Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing, which dramatically changed the way graphics cards and gaming performance are discussed and evaluated.
Our testing proved that AMD CrossFire was not improving gaming experiences in the same way that NVIDIA SLI was, and we showed that other testing tools like FRAPS were inadequate for exposing this problem. If you are at all unfamiliar with this testing process or the results it produced, please check out the Frame Rating Dissected story above.
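To make that inadequacy concrete, here is a minimal sketch, using entirely invented numbers, of why a tool that counts Present() calls (like FRAPS) and a capture-based analysis can report very different frame rates. The coverage values and runt threshold below are hypothetical illustrations, not our actual extraction pipeline:

```python
# Minimal sketch (hypothetical numbers): why a Present()-counting tool
# like FRAPS and a capture-based analysis can disagree.

# Per-frame display coverage as measured from a captured DVI stream:
# 1.0 = full frame shown, 0.05 = a "runt" sliver, 0.0 = dropped entirely.
captured_coverage = [1.0, 0.05, 1.0, 0.0, 1.0, 0.05, 1.0, 0.0]

frame_interval_ms = 8.0                   # game called Present() every 8 ms,
fraps_fps = 1000.0 / frame_interval_ms    # so FRAPS-style tools report 125 FPS

RUNT_THRESHOLD = 0.2                      # arbitrary cutoff for illustration
useful = [c for c in captured_coverage if c >= RUNT_THRESHOLD]
observed_fps = fraps_fps * len(useful) / len(captured_coverage)

print(f"FRAPS-style FPS: {fraps_fps:.1f}")    # 125.0
print(f"Observed FPS:    {observed_fps:.1f}")  # 62.5 -- half never really displayed
```

FRAPS sees every frame the game submitted; the capture sees only what actually reached the display.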
At the time, we tested the 5760×1080 resolution using AMD Eyefinity and NVIDIA Surround but found there were too many issues with our scripts and the results they were producing to give reasonably assured performance metrics. Running AMD with Eyefinity was obviously causing some problems, but I wasn't quite able to pinpoint what they were or how severe they might have been. Instead I posted graphs like this:
We were able to show NVIDIA GTX 680 performance and SLI scaling at 5760×1080, but we were only able to give results for the Radeon HD 7970 GHz Edition in a single-GPU configuration.
Since those stories were released, AMD has been very active. At first they were hesitant to believe our results, calling into question our processes and the ability of gamers to really see the frame-rate issues we were describing. However, after months of work and pressure from quite a few press outlets, AMD released a 13.8 beta driver that offered a Frame Pacing option in its 3D controls, evenly spacing out frames in multi-GPU configurations to produce a smoother gaming experience.
The results were great! The new AMD driver produced very consistent frame times and put CrossFire on a similar playing field to NVIDIA's SLI technology. There were limitations, though: the driver only fixed DX10/DX11 games and only addressed resolutions of 2560×1440 and below.
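For readers wondering what frame pacing actually does, here is a minimal sketch of the concept, with invented timestamps. It illustrates the general idea of delaying frames that finish too early, not AMD's actual driver implementation:

```python
# Minimal sketch of the frame-pacing idea (not AMD's driver code):
# delay presentation of frames that arrive too early so frames reach
# the display at roughly even intervals.

def pace_frames(render_done_ms, target_interval_ms):
    """Given timestamps when each GPU finished rendering, return the
    times at which a pacing layer would actually present them."""
    present_times = []
    next_slot = render_done_ms[0]
    for t in render_done_ms:
        # present no earlier than completion, no earlier than the paced slot
        slot = max(t, next_slot)
        present_times.append(slot)
        next_slot = slot + target_interval_ms
    return present_times

# Hypothetical AFR output: pairs of frames arriving almost back-to-back
# (the classic unpaced CrossFire pattern), then a long gap.
done = [0.0, 2.0, 33.0, 35.0, 66.0, 68.0]
paced = pace_frames(done, target_interval_ms=16.5)
print([round(p, 1) for p in paced])   # [0.0, 16.5, 33.0, 49.5, 66.0, 82.5]
```

Note the trade-off in the output: the early frame of each pair is held back slightly, buying even on-screen spacing at the cost of a little added latency.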
But the story doesn't end there. CrossFire and Eyefinity are still very important to a lot of gamers, and with the constant price drops on 1920×1080 panels, more and more gamers are taking (or thinking of taking) the plunge into the world of Eyefinity and Surround. As it turns out, though, there are more problems and complications with Eyefinity and high-resolution gaming (multi-head 4K) cropping up that deserve discussion.
PcPer.com, I am one of those “that are invested in the HD 7000-series of graphics cards and have already purchased their Eyefinity configurations on the promise of a great experience”.
I thank you for your hard work PcPer! I am truly grateful!
Because of it, I have so far seen a dramatic improvement in single-screen CrossFire gameplay with my two HD 7870s.
I did, however, invest in triple 1920×1080 screens and two HD 7870s, and I wish I could be rewarded with the performance I paid for. The experience would be awesome.
I only hope AMD will strive to keep me as a loyal customer by listening to your findings and offering a solution to this Eyefinity-CrossFire problem shortly.
You and AMD will have my undying loyalty and gratitude for it.
If AMD does, it will surely make them the king of the hill in value, for dollar-for-dollar performance…
Are you listening, AMD? Is this make-or-break time for your company? My next GPU purchase depends on how you react. I wish AMD great success!
P.S. Ryan and the PcPer.com team: please be confident that you are doing a great service to AMD and their customers in the long run.
Marc Senecal
Fort Myers, FL
Marc, thank you very much for your comments and feedback. We greatly appreciate the support!
I'm sorry, but I have to correct you: gaming equipment is not an INVESTMENT. Unless you're a pro gamer, it is consumer spending.
Anyway, I wish you lots of fun with your new hardware 🙂
This is no place for a grammar Nazi, and you aren’t even correct in the first place.
Yes, the word “investment” is often associated with money spent expecting a monetary return. However, that is not the only usage of the word.
The poster clearly is “invested” considering the amount of money he spent, which he spent anticipating a return – in this case, a return in performance.
Honestly, contribute something useful to the topic or stay quiet. This thread is about obtaining a correction to a product flaw, not “How to parse nomenclature”.
Dear Marc, yes, we are definitely listening. Drop me a line directly.
best regards
Roy
Hi Marc, I can assure you that I and several other senior executives and engineers at AMD are deeply committed to making sure you get the experience you deserve.
We believe that Ryan treats this subject fairly, reports accurately, and works closely with us. We are grateful for his recognition of our leadership where we have it, even if I don't personally agree about our competitor's 'sexiness'!
We will be back with more on this shortly,
thanks
hello
There is a 13.10 beta. Did you test it?
Nope, we were using 13.8 beta. But 13.10 did not add any changes for Eyefinity / Frame Pacing.
I've got two HD 7950s.
I had a chance to buy another one at $100.
Now I'm done with it all and saving for the next generation of NVIDIA…
I'm very disappointed that AMD is selling a defective product.
Thanks for the information.
AMD is always all about selling defective and inferior products. They did it before with their “Tri core” CPUs. Those were just Quad cores that had a 4th core that they couldn’t get to work. AMD GPUs and drivers are shit. You get what you pay for.
You do realize that binning
You do realize that binning is a key part of the silicon industry? By your logic, the Titan is crap because it only has 14 of the GK110's 15 SMX units activated.
When a product line is announced, there are actually only a few different dies being produced. The 3820, 3930, 3960, and 3970 are all 8-core "Xeon" dies with cores disabled due to yield issues.
EVERYONE does this.
And please, stop spreading the false claim that the drivers are bad. Maybe they were back in good ol' '04, but that era is long gone. AMD has actually had better drivers than NVIDIA this generation…
In your dreams retard! NVIDIA PWNS YOUR AMD, NUB!
Are you sure http://support.amd.com/us/kbarticles/Pages/AMDCatalyst13-10WINBetaDriver.aspx
I’m still wondering about this little gem… “PCI-E bus speed is no longer set to x1 on the secondary GPU when running in CrossFire configurations”
“But 13.10 did not add any changes for Eyefinity / Frame Pacing.” – I am curious about this too, as the press release clearly states that it updated something regarding Eyefinity.
I'm fairly technical, but I am getting a little outside my level of knowledge here:
1. http://www.bit-tech.net/hardware/2010/11/27/pci-express-3-0-explained/
Each lane of PCIe 3.0 carries only about 1 GB/sec.
2. http://www.tomshardware.com/reviews/pci-express-scaling-analysis,1572-8.html
This is a very old article, but THG tested graphics cards while limiting the PCIe lanes available to them, and you can see a very large performance degradation.
3. Back to the "PCI-E bus speed is no longer set to x1 on the secondary GPU when running in CrossFire configurations" note in the press release.
Would one of the cards in the setup being starved for bandwidth be able to account for these anomalies?
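It is a fair question, and some rough numbers help frame it. The back-of-the-envelope sketch below assumes uncompressed 32-bit frames have to cross the bus in a bridgeless CrossFire setup; real traffic patterns are more complicated, but the orders of magnitude are instructive:

```python
# Back-of-the-envelope check (my own arithmetic, not AMD's numbers):
# could a secondary GPU stuck at PCIe x1 starve a bridgeless CrossFire setup?

bytes_per_pixel = 4                     # 32-bit color
frame = 5760 * 1080 * bytes_per_pixel   # one Eyefinity frame ~= 24.9 MB
fps = 60

needed_gbs = frame * fps / 1e9          # ~1.49 GB/s just to ship frames
x1_gbs  = 1.0                           # PCIe 3.0, ~1 GB/s per lane
x16_gbs = 16.0

print(f"Frame traffic at {fps} FPS: {needed_gbs:.2f} GB/s")
print(f"x1 link:  {'starved' if needed_gbs > x1_gbs else 'ok'}")    # starved
print(f"x16 link: {'starved' if needed_gbs > x16_gbs else 'ok'}")   # ok
```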
By AMD's admission, no. The problem listed in the Catalyst 13.10 notes only affects CrossFire configurations that do not use a bridge adapter across the graphics cards. That comes from AMD's own Andrew Dodd.
If this were a fix for our problems, AMD would surely be crowing about it.
You guys rock, PcPer. Been here for years. Love AMD, but they need to get their act together. Emphasis on the good-faith part, ya know. Don't become the crappy option. AMD has so much great IP; if they could only get their software side together they would be SIGNIFICANTLY more competitive.
I hope this fix AMD says they are working on will help with my 6970s too. I know they aren't worth that much now, but I paid $700 for the pair a couple of years ago to power my 3 screens. If it doesn't, I'm about to sell them on Craigslist and get an NVIDIA card.
I honestly don't know if they will fix it, but the 13.8 beta did fix single screen for 6000-series parts so you might get lucky here too.
Keep hitting them until they fix this. I bought 2x 7970s in January 2012 and noticed the problem immediately. It's not like AMD has only known about this since the beginning of this year; thousands of users were reporting it a year before that. It really put me off dual cards until I got a pair of 680s and found that the grass was much greener on the NVIDIA side with SLI.
We need this fixed so we have some competition in the high end of GPUs.
Ryan… let us see your Tiger-Bus!
How about this? http://screencast.com/t/uTeQAykAYhJF
😀
with any luck a fix will be out from AMD a few after the release of the 9000 series, they are so slow with driver updates that I’d see it taking that long … if they dont just give up …
a few *months* after
So you can't run 3 PQ321s on AMD in a 6×1 Eyefinity configuration?
I’m glad to see AMD is putting focus on 4K and hope they have a 4K eyefinity solution soon.
The only way NVIDIA will ever support anything other than 3×1 Surround is if AMD turns up the heat. NVIDIA, if you are listening: you need 2×1 and 2×2 Surround support at any resolution to stay competitive on the consumer side. No one is dropping thousands of dollars on Quadros just to get that one feature.
I think technically YES, you can support 3 4K monitors like the ASUS PQ321Q with AMD today… but the main issue is going to be PERFORMANCE. Even without the frame pacing and interleaving problems, how much horsepower would you need for that?
More than they have available today.
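For a sense of scale, the quick arithmetic below compares the pixel counts involved; it is plain resolution math, nothing more:

```python
# Rough pixel-count comparison (my own arithmetic): how much more work
# is a triple-4K wall than the setups tested in this article?

res_1080p     = 1920 * 1080          #  ~2.07 MPix
res_eyefinity = 5760 * 1080          #  ~6.22 MPix (3x 1080p, tested here)
res_triple_4k = 3 * 3840 * 2160      # ~24.88 MPix

print(res_triple_4k / res_1080p)      # 12.0x a single 1080p screen
print(res_triple_4k / res_eyefinity)  #  4.0x the Eyefinity setup tested
```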
Just wanted to make sure it was a performance issue not a driver issue. I agree you wouldn’t be able to run anything more than simple 3D demos in such a setup.
Agreed, I really hope we will see large GPU performance increases with the next round of silicon so that multi 4K gaming becomes a reality.
Keep up the good work, you guys are really pushing the boundaries of what is possible with 4K!
I can't help but wonder: if the issues seen really are due to a lack of proper synchronization, would there be an FPS impact when AMD makes their changes?
Or in other words: Is AMD cheating on performance (knowingly or not)?
I don't think they were doing it knowingly, no. That doesn't excuse them, though; they should have seen these problems and been working to fix them before this all came to a head this year.
Hi Ryan, I hope 4K connectivity is scheduled to be included in all future hardware reviews, like laptop reviews. Back in June I needed to buy something quickly when my system crashed, and it would have been great to know if any low- to mid-range laptops could at least drive a 4K display. I am not expecting benchmarks of a Chromebook running Metro: Last Light at 4K, but it would be nice to know if I could display Sheets on a Seiki display with an A10-based laptop.
The Seikis should be supported by just about any modern GPU (single head, 30 Hz), but it's the 60 Hz models that require some more testing. I'll try my best to include them going forward!
Thanks Ryan & PcPer for doing this and the previous investigative work; it is much appreciated, and it's good to see AMD taking the findings on board to make fixes!
If you used a monitor that was capable of 600 FPS, would the problem persist?
Yes, the supported refresh rate is irrelevant here.
@ Ryan Shrout,
1. You need to state whether you're using AMD Beta Driver 13.8a (8-1-2013) or AMD Beta Driver 13.8b (8-19-2013). If you're using 13.8a on purpose during a discussion/benchmark of Surround and 1600p, multiple viewers could come to the conclusion that you did this on purpose to make AMD look bad. AMD Beta Driver 13.8a doesn't have 1600p support; it only addresses the issues for the DX11 API. 13.8b addresses 1600p and Surround, if I am not mistaken. A possible upcoming 13.8c may address the DX9 API, or that could have already been done in the new 13.9 WHQL update.
2. Personally, I can't take your discussions in a serious manner. In your conclusions, you state things that give me the impression that you don't fully understand graphs, or have poor views of AMD graphics cards. At the very least, it is leading me to believe that you are biased towards NVIDIA. Having favoritism, or a biased point of view toward one company over the other, isn't a good way to approach a discussion or benchmark of any product. It doesn't help you seem serious, experienced, or reasonable to both bases (AMD and NVIDIA users). It only tells readers that you pander to one side and talk crap about the other brand's shortcomings. AnandTech doesn't do it, Guru3D doesn't do it, techpowerup.com doesn't do it either, and they all come out with really good benchmarks of computer products. Both bases read their benchmarks because they aren't biased. Mr. Shrout, you are biased either because you are letting people know of your hatred towards AMD, or because you want to cater discussions and benchmarks that make AMD look bad to the NVIDIA base. Those are reasonable conclusions. If I don't see a benchmark on here discussing why the GTX 600, 700, and Titan series don't fully support DX11.1 (they support it in software, but not in hardware), you are only going to prove me right.
3. Looking at the frame-time variance graphs that you posted, the AMD 7970 will have a lower minimum band because the cards push lower latency to produce batches of frames. The problem, and it's true, is that somewhere along the way they will produce "runt frames": frames that aren't one whole frame. It could be like 0.8 of a frame, or 0.9, or 0.7. On the other hand, it takes less time for AMD video cards to produce those batches of frames. NVIDIA takes longer to produce the batch because, hardware-wise, the system probably calculates whether it needs to spend more time producing an extra "whole" frame. That's why their minimum frame-time band is higher than AMD's. The hardware is always trying to push 1.0 frames times x amount of frames per batch.
1. You are incorrect in this assumption. No beta or full driver release from AMD addresses Eyefinity.
2. I don't understand the relevance of the DX11.1 reference here, honestly. This story isn't about API support but rather multi-display + multi-GPU gaming. As to the bias question, this is something that gets targeted at people all the time when their results clearly show an advantage to one side or another. Throughout our Frame Rating series of stories I have continued to tell the truth: AMD cards are fantastic for single-GPU configurations but need work on the multi-GPU side. You can have your opinion, but obviously we disagree. As do many of the readers commenting here.
3. Sorry, I'm going to need more explanation of what you are saying here. Frames are not produced in "batches" at all. I think you are trying to describe the runt frame problem, maybe?
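For readers following the exchange, here is a minimal sketch of what runt detection looks like conceptually. The frame heights and the 21-scanline cutoff are assumptions for illustration; the real Frame Rating pipeline derives per-frame heights from colored overlay bars in the captured video:

```python
# Minimal sketch of runt-frame classification (all numbers hypothetical).

RUNT_SCANLINES = 21   # assumed cutoff for this illustration

# height, in scanlines, each rendered frame occupied on a 1080p display
frame_heights = [540, 18, 560, 1080, 12, 530, 550, 1080]

runts = [h for h in frame_heights if h < RUNT_SCANLINES]
print(f"{len(runts)} of {len(frame_heights)} frames were runts")  # 2 of 8
```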
2. Personally, I can't take your comment in a serious manner. In your post, you state things that give me the impression that you don't fully understand reviews, or have poor views of NVIDIA graphics cards. At the very least, it is leading me to believe that you are biased towards AMD. Having favoritism, or a biased point of view toward one company over the other, isn't a good way to approach a discussion or benchmark of any product.
Sucks how that works, doesn't it? Oh, for your point 3: it doesn't matter how fast a card can batch-process *anything*, so long as what's presented to the user is inferior to the competition. The result is all that matters. Rolling back to point 1, your statements are moot, as they are made without the far greater level of knowledge Ryan has; he speaks with AMD about these various beta versions on an almost daily basis.
AMD has just stated that their Hawaii GPUs are smaller and more efficient but not intended to compete with the "ultra extreme" GPUs of NVIDIA (aka the 780/Titan), as that segment will be addressed by AMD's multi-GPU cards. That makes it all the more essential that AMD sorts these problems out properly and completely; otherwise their product/business model is as badly flawed as their multi-GPU performance.
On a slightly different note, but still regarding multi-GPU, I'd be interested in the views of you PcPer gurus on the present state of multi-GPU systems.
It's always seemed like such a waste to me using alternate frame rendering on multi-GPU cards, where each GPU has to have access to its own complete frame-buffer-sized chunk of memory.
Surely it would be better to use a tiled rendering approach where each GPU works on individual tiles of the same frame and shares one frame-buffer-sized chunk of memory?
Someone always brings this up, and both ATI and NV have said for years that AFR brings the most performance in the least complex way (unless a game engine has inter-frame dependencies).
In the past, ATI had tiled CrossFire as an option, as well as scissor mode.
But think about this: say you're doing 2 tiles with a horizontal split. You may end up with one card rendering an empty sky and the second rendering a ton of detail, basically resulting in a useless solution that doesn't scale (see the toy numbers below).
On top of that, you have to synchronize the tiles to display a final image at the same time, but the cards can't physically render at the exact same time, so you'll introduce lag or artifacts (which Eyefinity does see).
I would say AFR is good enough and the way to go for multiple cards, but I would want to see a new paradigm… do you remember the first Core 2 Quad? It was two duals stitched together. Imagine if two GPUs were stitched together (no more mirroring the VRAM, just adjust the socket connections).
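To put toy numbers on the tile-split point made in the comment above, here is a tiny simulation with invented per-half render costs; real scaling depends on the scene and the split heuristic, but the imbalance mechanism is the same:

```python
# Toy simulation (all numbers invented): a horizontal split can leave one
# GPU with an empty sky while the other chews through dense geometry,
# so split-frame rendering (SFR) scaling collapses.

top_cost_ms, bottom_cost_ms = 2.0, 30.0   # per-half render cost

# SFR: the frame is done only when the *slower* half is done.
sfr_frame_ms = max(top_cost_ms, bottom_cost_ms)       # 30 ms

# AFR: each GPU renders whole frames (2 + 30 = 32 ms each), but the two
# GPUs overlap, so sustained throughput is one frame every ~16 ms.
afr_frame_ms = (top_cost_ms + bottom_cost_ms) / 2     # 16 ms

print(f"SFR: {1000/sfr_frame_ms:.0f} FPS, AFR: {1000/afr_frame_ms:.0f} FPS")
# SFR: 33 FPS, AFR: 62 FPS -- same hardware, very different scaling
```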
http://www.brightsideofnews.com/news/2013/9/18/nvidia-launches-amd-has-issues-marketing-offensive-ahead-of-hawaii-launch.aspx
LOL. Some stories are funny, you know?
I replied to this here: http://www.overclock.net/t/1427828/bsn-nvidia-launches-amd-has-issues-marketing-offensive-ahead-of-hawaii-launch/30#post_20827758
I don’t even understand the point of this article.
http://www.anandtech.com/show/7195/amd-frame-pacing-explorer-cat138/3
Even before that article it was known AMD was going to fix it in phases.
The point was to showcase the very specific Eyefinity problems compared to Surround, as they had not been discussed or shown in any form before today.
Will you be taking a look at the Phanteks Enthoo Primo case? According to Hardware Canucks it might be the “Case of the year”, not bad for such a small company entering the case market. I would be interested in what you think about it.
Here’s the link to the HwC video: http://www.youtube.com/watch?v=rg_DzdHGgN4
How are your displays connected? I was having this issue until I connected all of my displays via DisplayPort. I know this is not ideal but it has eliminated the issue for me. I have 2 HD 7970s in crossfire and 3 Dell U2410 displays.
If the Asus PQ321 supports DisplayPort 1.2 and the HD 7970 supports DP 1.2 as well, and DP 1.2 can do 4k at 60Hz, then why is 4K necessarily a “dual head” affair? Is that simply due to the way the Asus was designed?
Ok. Nevermind. The whole tiled display thing. Is there a particular reason why 4k displays have to be tiled (or multi-headed)?
Ok. Nevermind. The whole tiled display thing.
From another comment on this site by NLPsajeeth:
“Currently there are no timing controllers that support 4K@60p. In order to drive the asus/sharp at 4K@60p, two separate TCONs are used. This is why this monitor has the unique capability of supporting dual HDMI. Each HDMI port feeds into its own TCON.
There is no 4K display that can do 60Hz without tiling. 4K@60p TCONs are supposed to start shipping in small amounts this year and in mass quantities in 2014.”
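Rough pixel-clock arithmetic makes the TCON limitation concrete. The 10% blanking overhead below is an assumed round figure for illustration:

```python
# Rough pixel-clock arithmetic (my own numbers) showing why 4K@60 panels
# of this era had to be driven as two tiles: no single timing controller
# (TCON) could yet handle the full stream.

def mpix_per_s(w, h, hz, blanking=1.10):   # ~10% blanking overhead assumed
    return w * h * hz * blanking / 1e6

full_4k  = mpix_per_s(3840, 2160, 60)      # ~547 MPix/s for the whole panel
one_tile = mpix_per_s(1920, 2160, 60)      # ~274 MPix/s per half

print(f"Full 4K@60: {full_4k:.0f} MPix/s, per tile: {one_tile:.0f} MPix/s")
# Two ~274 MPix/s tiles fit within what shipping TCONs could manage,
# hence the dual-head / tiled presentation of these early 4K monitors.
```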