A bit of a surprise
We got a little bonus today in the form of a second R9 290X. CrossFire and 4K testing anyone?
Okay, let's cut to the chase here: it's late, we are rushing to get our articles out, and I think you all would rather see our testing results NOW rather than LATER. The first thing you should do is read my review of the AMD Radeon R9 290X 4GB Hawaii graphics card which goes over the new architecture, new feature set, and performance in single card configurations.
Then, you should continue reading below to find out how the new XDMA, bridge-less CrossFire implementation actually works in both single panel and 4K (tiled) configurations.
A New CrossFire For a New Generation
CrossFire has caused a lot of problems for AMD in recent months (and a lot of problems for me as well). But AMD continues to make strides in correcting the frame pacing issues associated with CrossFire configurations, and the new R9 290X raises the bar again.
Without the CrossFire bridge connector on the 290X, all of the CrossFire communication and data transfer occurs over the PCI Express bus that connects the cards to the rest of the system. AMD claims that this new XDMA interface was designed for Eyefinity and UltraHD resolutions (the focus of our most recent article on the subject). By accessing GPU memory directly over PCIe, AMD says it can alleviate the bandwidth and sync issues that were causing problems with Eyefinity and tiled 4K displays.
Even better, this updated version of CrossFire is said to be compatible with the frame pacing updates to the Catalyst driver that improve the multi-GPU experience for end users.
When an extra R9 290X accidentally fell into my lap, I decided to take it for a spin. And if you have followed my graphics testing methodology over the past year, then you'll understand the importance of these tests.
Testing Configuration
The specifications for our testing system haven't changed.
Test System Setup | |
CPU | Intel Core i7-3960X Sandy Bridge-E |
Motherboard | ASUS P9X79 Deluxe |
Memory | Corsair Dominator DDR3-1600 16GB |
Hard Drive | OCZ Agility 4 256GB SSD |
Sound Card | On-board |
Graphics Card | AMD Radeon R9 290X 4GB, NVIDIA GeForce GTX 780 3GB |
Graphics Drivers | AMD: 13.11 V5, NVIDIA: 331.58 |
Power Supply | Corsair AX1200i |
Operating System | Windows 8 Pro x64 |
What you should be watching for
- R9 290X CrossFire @ 2560×1440 – AMD already started addressing frame pacing at this resolution on August 1st with the Catalyst 13.8 release. The R9 290X should do fine here as long as we don't see any regression.
- R9 290X CrossFire @ 3840×2160 – Using a tiled 4K display (the ASUS PQ321Q to be exact, a hell of a monitor), we are going to really put the new CrossFire to the test. Before today, there were NO FIXES to prevent dropped frames, interleaved frames, and other problems on AMD Radeon graphics cards. The R9 290X might be the first to take that step…
If you are already familiar with our Frame Rating testing methodology, feel free to jump straight to the benchmarks!!
Frame Rating: Our Testing Process
If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly and then post-process the resulting video to determine frame rates, frame times, frame variance and much more.
This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!!
While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.
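To give a flavor of what that post-processing looks like, here is a minimal sketch in Python – not our actual scripts. It assumes the colored overlay bar stamped on each rendered frame has already been read out of the captured video as a list of per-scanline colors; the refresh rate, resolution, and helper names are all illustrative.

```python
# Illustrative sketch only: not our real capture tool chain.
# Each rendered frame carries a colored overlay bar; counting how many
# scanlines each color occupies in the captured output tells us how long
# each rendered frame (or fragment of one) was actually on screen.

CAPTURE_RATE_HZ = 60        # assumed display/capture refresh rate
LINES_PER_FRAME = 1080      # assumed vertical resolution of the capture

def overlay_runs(captured_frames):
    """captured_frames: one list of per-scanline overlay colors per captured
    frame, in display order. Returns [(color, scanline_count), ...] with one
    entry per contiguous run of a single color, even across captured frames."""
    runs = []
    for frame in captured_frames:
        for color in frame:
            if runs and runs[-1][0] == color:
                runs[-1] = (color, runs[-1][1] + 1)
            else:
                runs.append((color, 1))
    return runs

def on_screen_times_ms(runs):
    """Convert scanline counts into milliseconds of screen time per run."""
    ms_per_line = 1000.0 / (CAPTURE_RATE_HZ * LINES_PER_FRAME)
    return [(color, count * ms_per_line) for color, count in runs]
```

Runts and drops fall out of this data directly: a run only a handful of scanlines tall is a runt, and an expected overlay color that never appears at all is a dropped frame.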
If you don't need the example graphs and explanations below, you can jump straight to the benchmark results now!!
The PCPER FRAPS File
While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file above: the average frame rate over time as defined by FRAPS, though we combine all of the GPUs we are comparing into a single graph. This basically emulates the data we have been showing you for the past several years.
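The arithmetic behind that per-second average is nothing exotic. A toy version (hypothetical timestamps, not FRAPS' own code) looks like this:

```python
# Bucket frame-present timestamps (in ms) into one-second bins and count
# how many frames landed in each - the classic "FPS over time" line.

from collections import Counter

def fps_over_time(timestamps_ms):
    """Return [(second_index, frames_presented_in_that_second), ...]."""
    return sorted(Counter(int(t // 1000) for t in timestamps_ms).items())

# Four frames land in the first second, two in the next:
print(fps_over_time([10, 260, 510, 760, 1200, 1900]))   # [(0, 4), (1, 2)]
```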
The PCPER Observed FPS File
This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file above. This takes out the dropped and runt frames, giving you the performance metric that actually matters – how many frames are actually shown to the gamer to improve the animation sequence.
As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
The PLOT File
The primary file that is generated from the extracted data is a plot of calculated frame times, including runts. The numbers here represent the amount of time that frames appear on the screen for the user; a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer, while a “wider” line, or one with a lot of peaks and valleys, indicates a lot more variance and is likely caused by a lot of runts being displayed.
The RUN File
While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card for that particular run. It is in this graph that you can see interesting data about runts, drops, average frame rate, and the actual frame rate of your gaming experience.
For tests that show no runts or drops, the data is pretty clean. This is the familiar frames-per-second-over-time graph that has become the standard for performance evaluation of graphics cards.
A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as the FRAPS FPS measurement does. Any area in red is a dropped frame – the wider the band of red you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.
The wide yellow area is the representation of runts, the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen. The larger the area of yellow, the more often those runts are appearing.
Finally, the blue line is the measured FPS over each second after removing the runts and drops. We are going to be calling this metric the “observed frame rate” as it measures the actual speed of the animation that the gamer experiences.
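In rough terms, the difference between the black FRAPS bar and the blue observed line comes down to which frames are allowed to count. Here is a hedged sketch of that idea; the 21-scanline runt cutoff is an assumption for the example, not the exact threshold our scripts use:

```python
# Sketch of "FRAPS FPS" vs "observed FPS" over a whole capture.
RUNT_THRESHOLD_LINES = 21   # assumed: fragments smaller than this are runts
LINES_PER_FRAME = 1080
REFRESH_HZ = 60

def fraps_vs_observed(scanlines_per_frame):
    """scanlines_per_frame: scanlines each rendered frame occupied on screen,
    in display order; 0 means the frame was dropped (never displayed).
    Returns (fraps_style_fps, observed_fps) over the whole capture."""
    capture_seconds = sum(scanlines_per_frame) / float(LINES_PER_FRAME * REFRESH_HZ)
    rendered = len(scanlines_per_frame)                        # FRAPS counts everything
    shown = sum(1 for n in scanlines_per_frame                 # observed discards drops and runts
                if n >= RUNT_THRESHOLD_LINES)
    return rendered / capture_seconds, shown / capture_seconds
```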
The PERcentile File
Scott introduced the idea of frame time percentiles months ago, but now that we have different data using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run. The 50th percentile should be very close to the average total frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.
The closer this line is to being perfectly flat the better, as that would mean we are running at a constant frame rate the entire time. A steep decline on the right-hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
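For the curious, the percentile calculation itself is straightforward. This sketch (made-up sample data, not FCAT output) converts frame times to instantaneous FPS and reads off the rate you sit at or above for a given share of the run:

```python
# Percentile-by-instantaneous-FPS: sort from fastest to slowest and index
# into the sorted list at the requested percentile.

def fps_at_percentile(frame_times_ms, percentile):
    inst_fps = sorted((1000.0 / t for t in frame_times_ms), reverse=True)
    idx = min(len(inst_fps) - 1, int(len(inst_fps) * percentile / 100.0))
    return inst_fps[idx]

times = [16.7] * 95 + [33.3] * 5              # mostly ~60 FPS with a slow tail
print(round(fps_at_percentile(times, 50)))    # ~60
print(round(fps_at_percentile(times, 99)))    # ~30 - the tail is where stutter hides
```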
The PCPER Frame Time Variance File
Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?
We define a single frame variance as the difference between the current frame time and the previous frame time – how consistent the timing of the two frames presented to the gamer is. However, as I found in my testing, plotting this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.
Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these variance values and plotting them in percentile form, we can get an interesting look at potential stutter. Comparing the frame times to a running average rather than just to the previous frame should prevent potential problems from legitimate performance peaks or valleys found when moving from a highly compute-intensive scene to a lighter one.
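Expressed as a quick sketch (the 20-frame window comes from the description above; the function names and data handling are just illustrative), the calculation looks like this:

```python
# Per-frame variance against a 20-frame running average, read in percentile form.
WINDOW = 20   # number of previous frames in the running average

def frame_time_variances(frame_times_ms):
    """Per-frame |frame time - running average of the prior WINDOW frames|."""
    out = []
    for i in range(WINDOW, len(frame_times_ms)):
        running_avg = sum(frame_times_ms[i - WINDOW:i]) / float(WINDOW)
        out.append(abs(frame_times_ms[i] - running_avg))
    return out

def variance_at_percentile(frame_times_ms, percentile):
    v = sorted(frame_time_variances(frame_times_ms))
    return v[min(len(v) - 1, int(len(v) * percentile / 100.0))]
```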
While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our game play testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU; beer fans will appreciate the acronym, which stands for International Stutter Units.
To compare these results, you want to see a line that is as close to the 0ms mark as possible, indicating very little frame time variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any game play sequence that varies from scene to scene. What we do not want to see is a sharper climb upward, which would indicate higher frame variance (ISU) and could be a sign that the game suffers from microstuttering and hitching problems.
Good read, can't wait to see if the OEMs can find a better, quieter cooler.
Shill on pcper, shill on. Keep those nvidia checks rolling in.
AMD needs to fix its problems on the older 7000 series video cards before introducing another new and more expensive AMD video card using no bridge for CrossFire. I have been duped by AMD CrossFire technology as an owner of their crapware.
Can you blame him ?
Nvidia basically used him as a propaganda tool with FCAT throughout this whole time, and now I'm sure they will ask him to only test Nvidia with G-Sync systems, which makes FCAT obsolete.
He's got to milk it for all it's worth in the meantime.
AMD’s Drivers have always been crap. Truth hurts. I’ve owned both brands. The reality is AMD needs to fix their driver design team.
Keep on trolling, Anon.
Did you even read the article? It talks about how AMD has fixed most of the issues it had with xfire on eyefinity…
What's really funny is that AMD's newest and best GPU doesn't work properly in CrossFire, uses significantly more power, gets considerably hotter, is louder, and performs worse in AMD-optimized games than Nvidia's old 780.
This card is also ridiculously expensive for what it is.
Ryan,
Great articles, I'm glad to see AMD stepping it up. Right now I have an Nvidia GTX 690, so I'm going to wait and see what the GPUs late next year have in store. If AMD can keep pace with Nvidia in next year's card battle, I may jump the fence again.
Any rumor on a single-PCB dual R9 290X?
I wouldn't expect that in any reasonable time frame.
Good job once again. Now let’s see if we’ll get those fixes for my 7990 as well… I’m hoping/praying that they didn’t want you to publish these results yet, because their full lineup solution is just around the corner 🙂
Do the 4K CF tests look OK because AMD fixed multi-display, or because the 290X supports 4K as one monitor?
It's not about the 290X supporting 4K as one monitor. The monitor itself drives the display with two controllers, as there isn't any display out there that can drive 4K at 60Hz with just one display controller. Next year, or 2015 at the latest, such controllers should become available, and after that the 290X can drive the display as one monitor.
ok ty.
so we can consider the CF problem fixed as long as you don't play DX9 games… nice.
Thank you for your review 🙂 Great work as usual.
Are the 13.11 V5 drivers compatible with the 7000 series? Can you run some tests?
Worth a shot!
Thanks for the review…
5760×1080 is the resolution I want more details on and to compare;
also, I didn't find anything in any review so far about the recommended power supply for the R9 290X.
For now… it is my opinion that this card is a 7 out of 10, and it will be problematic in many systems because of the temps and power consumption. No matter how good the system it is in, it's just too hot… next summer global warming will have a blast, for sure.
Also… I believe the core MHz fluctuation will also be a problem for non-water-cooled systems.
AMD is catching up to Nvidia in fps, but fps is not all that is important in my opinion, and the next generation from Nvidia is around the corner… so… nice try, AMD.
Your move Nvidia!
I just read that Nvidia is dropping prices on gtx 780 by $150. Still, I’m looking hard at AMD once again. It feels like competition is brewing again.
Serious kudos for being the first site with a proper Crossfire review. The results are very encouraging; let’s hope this is a good indication for the future.
Thanks!
Did you wait for Kepler cards to “heat-up” before you did benchmarks after nvidia introduced Boost?
I had a feeling that Tom's Hardware wasn't showing 1440p resolutions on purpose (slightly short of an elite award, I would say), and it looks like, until a driver update from AMD, Nvidia is still superior for the current generation at the new WQHD IPS/PLS monitor resolution of 1440p (these 1080p benchmarks are in line with Tom's Hardware), running at a higher framerate and making the 780 the best mate for the overclockable Korean monitors, with the lowest frame variance providing the superior overall solution.
Downscaling a larger-than-viewable resolution is nice but ultimately unusable in competitive gaming, which needs frame speed over resolutions too high for in-game textures. I would like to see benchmarks at very low resolutions like 720p for truly competitive, ultimate speed comparisons at low graphics settings. Crysis 3 isn't going to look any better at 4K over 1440p, it's just going to run a lot slower (unless screen size is your thing, you aren't going to get much more pixel density than 27″ WQHD, which already has near-invisible pixels anyway).
Maybe the next AMD update (barring some soon-to-be-released unforeseen driver; if such a driver is released I hope PC Perspective tests it against the latest Nvidia driver) will be what to look for, but for now it's time to see what Nvidia does with its 8 series. If I don't see enough power there, I'm probably going to switch.
That ASUS monitor is $4,000 last I checked, and really I think people should wait on 4K until 4K OLED comes down from the stratosphere. Then there will be 4K content, and you will have an appropriate black level and a ridiculous true contrast ratio. I notice a little improvement with 4K-remastered Blu-rays on WQHD, but it's not worth the price of initial/prototype products unless you're some rich guy with a need to show off and impress colleagues and investors (but it wouldn't prove you have much in the way of brains or that you are a true videophile).
hi Ryan, congrats… 🙂
In your opinion, does the loss of the CrossFire bridge mean that the old 7000 series will always have issues and limits because of the bridge, or will the situation (drivers) keep improving anyhow?
And so, will this XDMA just be adopted for some particular resolutions and setups? “AMD claims that this new XDMA interface was designed for Eyefinity and UltraHD resolutions”
Will the old 7000 series be good with CrossFire at 1920×1080 in any case, or is this a way for AMD to admit that the old technology has insurmountable limits?
thanks
Last question… please:
is CrossFire technology even related to the graphics engine's features, concept, and innovation (also the engine's release date, e.g. Unreal Engine 3 was first released about 10 years ago),
or are these things not related and is it a drivers topic only?
Temperatures ?
Check out the full review here: https://pcper.com/reviews/Graphics-Cards/AMD-Radeon-R9-290X-Hawaii-Review-Taking-TITANs
Thanks,
I wonder how it will perform in an average PC case on a hot day.
In CrossFire, with all that heat, would it still perform better than a 780/Titan?
I'm not totally sure, as I don't follow Unreal Engine forums, but I think it's updated, or at least most of the big titles using Unreal 3 modify the engine and add their own drivers and binaries to make a new modified engine, each different from the others. I know the configuration files are all different, and the more recent ones like Borderlands 2 and BioShock Infinite have radically different cvars and game code functions.
OK, thanks, but I really didn't ask if it's updated or not (anyway, is the core modified or just minor parts?), but whether the engine is even related to multi-GPU performance and smoothness… 😉 Sometimes in patch release notes from developers I can see e.g. “multi-GPU optimizations” or something like that… so the question arises.
How does this card work in Linux?
Very good write-up Ryan. Comprehensive review showing some impressive numbers there, this GPU looks to have the goods.
Love the review. I waited on buying the Titan to see what AMD's A-game would be, and/or what it would do to prices, and I can say I am happily surprised!
Just one question: why the 4K testing? I know the ASUS PQ321Q is a hell of a monitor, but come on, a $3500 monitor is not something a lot of people are going to buy.
Why not a more real-world setup of 5760×1080, or preferably ×1200? You said it was no problem using Eyefinity in your test setup with the capture card.
The reason I say this is that I think there probably will be real-world differences between 6M pixels and 8M, and for at least the coming 2 years, 5760×1080/1200 will be by far the most used Eyefinity gaming resolution.
I just don't see the benefit of using the PQ321Q for Eyefinity testing over a three-monitor setup, other than that it takes less space on your test bench. ^_^
There are plenty of 4K TVs on the market and their market penetration is accelerating quickly. 60Hz support will soon be there across the board, so 4K gaming is not a pipe dream anymore and imo will become more relevant than 3-display gaming.
I am not saying 4K gaming is not coming, but it will always stay a niche, as 32″ 4K monitors are going to be the new 30″, and I am saying it's at least 2 years before they will be, and by then we'll have new and improved hardware that replaces the 290X.
So imho for now it's more or less irrelevant; with the next-gen cards it's probably something you want to have tested, but I still think more gamers want an Eyefinity setup for $500 than a 32″ 4K for $1000~2000.
But that's just me!
I want to get a 65″ 4K TV for $3-4000 with good 60Hz support, and that is NOT far away and imo is very much preferable over 3 small displays. Yes, it's also more expensive, but a large screen is good for many other things as well, for example movies, so the higher price can be somewhat justified.
How are you going to fit a 65″ on your desk?
For gaming I'd rather have the benefit of the peripheral vision you get from Eyefinity, like in the real world, as games are built to resemble real life.
All games use a more or less 2D+ world where the third dimension is used to go up or down hills and buildings, but you're still moving more or less horizontally.
So what's the benefit of having a high-def 16:9 window, even on a 32″ 4K, if you can have a 48:9 or 48:10 view (3×16=48), the same way your eyes work in the real world?
It won't be on a desk, but on a TV stand. I have dedicated space for it. With regards to your example, 4K isn't just 16:9, but 32:18 instead. Remember I'm talking about a 65″ TV, which essentially is four 32″ 1080p monitors without bezels covering the screen.
The high resolution of 4K allows one to move very close to the screen, allowing a very large horizontal and vertical field of view and a massive perceived screen size. Watch the 65″ from 4-5 feet away and it's about the same as the first few rows in a modern movie theater. This is enough horizontal view imo, at least until a high-res Oculus Rift, where 4K will also be the ultimate sweet spot a few years down the line.
edit: Just wanted to point out further that the Oculus Rift uses a 16:10 screen at the moment, so covering your field of view is not about the screen's aspect ratio, but about how that screen is implemented into your field of view.
Ryan: “R9 290X is the best single GPU available.”
random RERE: “shill on, keep that Nvidia money coming in”
snook: someone needs to google shill.
Something I feel is being overlooked: the 290X was downclocking to ~840MHz and was beating the Titan and 780
with, correct me if I'm wrong, Boost 2.0 working. They had an average of about ~200MHz over the 290X.
That is a seriously brutal result for Nvidia.
Currently setting the money aside for my George Foreman grill,
errrr, R9 290X.
Thanks for the videos and articles, PCPer.
I don’t think it makes sense to compare clock speeds if at the higher clock speeds the Titan/780 are still consuming a fair bit less power. The 290X can’t have the clock speeds of the 780/Titan or else it would melt but then again it doesn’t need to. You have to take more into account than clock speed to compare efficiency.
It is strange that the 290X consumes more power with a lower clock speed and a smaller die, though. The 780 isn't using near its full die, though, so the size of the die in operation is similar to the 290X. Must be the bigger memory controller, etc. I don't know.
I think sometimes the Green Team/Red Team thing gets a bit out of hand. Personally I want both companies to make good products so that I always have a good choice.
Think what it would be like if either company stopped making GPUs. The remaining company would simply double or treble prices and put just enough effort in to stay comfortably ahead of Intel’s integrated graphics ( which would not be that hard ).
I also think that most people underestimate how hard it is to continually produce good GPUs. Every time the process nodes shrink there is a new set of design parameters waiting to trip them up. In the current generation, AMD are obviously running hot due to leakage issues, but Nvidia have also had duds with most of their part designs. Of their original set of parts, only GK104 was successful, and thankfully for them it was really quite good.
I know this is kind of out in left field in 2013, but could you do a quick test to see if older generation cards like 6970s pace their frames properly with the betas we’ve been given these last few months? Much of the current focus is understandably on the 7000 series, but many of us are still interested despite using older hardware. 🙂