Card Comparisons and Testing Configuration
Things are getting interesting here with the Radeon R9 290. With an MSRP of $399, the Hawaii GPU is going up against some new competition, landing in particular at the price point where the GeForce GTX 770 used to sit. The GTX 770 has since dropped to $329, but it still needs to be in the discussion. Here is how everything stacks up:
- AMD Radeon R9 290X 4GB – $549 (Newegg.com)
- AMD Radeon R9 290 4GB – $399 (Newegg.com)
- AMD Radeon R9 280X 3GB – $299 (Newegg.com)
- NVIDIA GeForce GTX TITAN 6GB – $999 (Newegg.com)
- NVIDIA GeForce GTX 780 3GB – $499 (Newegg.com)
- NVIDIA GeForce GTX 770 2GB – $329 (Newegg.com)
The Radeon R9 290 is $150 cheaper than the R9 290X even though the hardware configurations of the two are remarkably similar. At first glance I think this could cause some confusion for AMD buyers, and even for those users that just purchased the R9 290X. The GTX 780 is now $100 more expensive than the R9 290, and if performance is close between these two cards then NVIDIA will once again find itself in a tight spot. The GTX 770 is $70 less expensive, but from a performance standpoint I don't think it really has a shot at holding up to the new R9 290.
Testing Configuration
The specifications for our testing system haven't changed much.
| Test System Setup | |
| --- | --- |
| CPU | Intel Core i7-3960X Sandy Bridge-E |
| Motherboard | ASUS P9X79 Deluxe |
| Memory | Corsair Dominator DDR3-1600 16GB |
| Hard Drive | OCZ Agility 4 256GB SSD |
| Sound Card | On-board |
| Graphics Card | AMD Radeon R9 290X 4GB, AMD Radeon R9 290 4GB, NVIDIA GeForce GTX 780 3GB, NVIDIA GeForce GTX 770 2GB |
| Graphics Drivers | AMD: 13.11 V8, NVIDIA: 331.58 |
| Power Supply | Corsair AX1200i |
| Operating System | Windows 8 Pro x64 |
What you should be watching for
- R9 290X vs R9 290 – These cards have very similar specifications but are separated by $150 in price. Does the R9 290 offer enough performance to make it the better value?
- R9 290 vs GTX 780 – With a $100 price disadvantage, how much faster is the GTX 780 than this slightly slower Hawaii GPU?
- R9 290 vs GTX 770 – This was going to be the comparison point for the R9 290, but after the NVIDIA price drops the GTX 770 does have a $70 edge. Can it capitalize at all?
If you are already familiar with our testing methodology, you can jump to the first set of benchmarks right here!
Frame Rating: Our Testing Process
If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we are using a secondary PC to capture the output from the tested graphics card directly and then using post processing on the resulting video to determine frame rates, frame times, frame variance and much more.
This amount of data can be pretty confusing if you are attempting to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!
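To give a feel for what that post processing involves, here is a minimal Python sketch of the idea, assuming the video analysis has already reduced each captured frame to per-frame overlay scanline counts; the constants and function names here are illustrative, not our actual tooling:

```python
# Minimal sketch: convert per-frame scanline counts (how many lines of a
# 60 Hz, 1080p capture each rendered frame occupied) into on-screen time.
# CAPTURE_HZ and LINES_PER_CAPTURE are assumptions for this example.

CAPTURE_HZ = 60
LINES_PER_CAPTURE = 1080

def frame_times_ms(scanline_counts):
    """A frame that filled one whole captured frame (1080 lines) was on
    screen for 1000/60 = 16.67 ms; partial frames get a pro-rated share."""
    ms_per_line = (1000.0 / CAPTURE_HZ) / LINES_PER_CAPTURE
    return [n * ms_per_line for n in scanline_counts]

# Example: one full frame, then two frames sharing the next captured frame
print(frame_times_ms([1080, 540, 540]))  # ~[16.67, 8.33, 8.33]
```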
While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we are generating with additional code of our own.
If you don't need the example graphs and explanations below, you can jump straight to the benchmark results now!
The PCPER FRAPS File
While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file above: the average frame rate over time as defined by FRAPS, though we are combining all of the GPUs being compared into a single graph. This basically emulates the data we have been showing you for the past several years.
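As a rough illustration (not our actual scripts), converting a FRAPS-style frametimes log into an FPS-over-time series can be sketched like this, assuming cumulative millisecond timestamps:

```python
# Rough sketch: turn a FRAPS-style frametimes log (cumulative timestamps
# in milliseconds, one entry per rendered frame) into average FPS for each
# one-second slice of the benchmark run.

def fps_over_time(timestamps_ms):
    """Bucket frame timestamps into one-second bins and count them."""
    if not timestamps_ms:
        return []
    seconds = int(timestamps_ms[-1] // 1000) + 1
    fps = [0] * seconds
    for t in timestamps_ms:
        fps[int(t // 1000)] += 1
    return fps
```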
The PCPER Observed FPS File
This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file above. This removes the dropped frames and runts, giving you the performance metric that actually matters: how many frames are being shown to the gamer to improve the animation sequences.
As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
The PLOT File
The primary file that is generated from the extracted data is a plot of calculated frame times including runts. The numbers here represent the amount of time that frames appear on the screen for the user, a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation to the gamer. A “wider” line or one with a lot of peaks and valleys indicates a lot more variance and is likely caused by a lot of runts being displayed.
The RUN File
While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results of a single card for that particular test. It is in this graph that you can see interesting data about runts, drops, average frame rate and the actual frame rate of your gaming experience.
For tests that show no runts or drops, the data is pretty clean. This is the standard frames-per-second-over-time graph that has become the norm for performance evaluation on graphics cards.
A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as FRAPS FPS measurement does. Any area in red is a dropped frame – the wider the amount of red you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.
The wide yellow area is the representation of runts: the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen. The larger the area of yellow, the more often those runts are appearing.
Finally, the blue line is the measured FPS over each second after removing the runts and drops. We are going to be calling this metric the “observed frame rate” as it measures the actual speed of the animation that the gamer experiences.
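A hedged sketch of that filtering step is below; the 21-scanline runt threshold and the input layout are assumptions for the example rather than the exact values our scripts use:

```python
# Hypothetical sketch: separate dropped frames and runts from the frames
# that actually contribute to animation, then report the "observed" FPS
# for one second of capture. An entry of 0 scanlines means a dropped frame.

RUNT_THRESHOLD = 21  # assumed cutoff: frames under ~21 scanlines are runts

def observed_fps(scanline_counts):
    """Return (observed frames, runts, drops) for one second of data."""
    drops = sum(1 for n in scanline_counts if n == 0)
    runts = sum(1 for n in scanline_counts if 0 < n < RUNT_THRESHOLD)
    shown = len(scanline_counts) - drops - runts
    return shown, runts, drops

# Example: FRAPS would report 60 FPS here; the gamer only saw 50 frames
counts = [540] * 50 + [10] * 9 + [0]
print(observed_fps(counts))  # (50, 9, 1)
```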
The PERcentile File
Scott introduced the idea of frame time percentiles months ago, but now that we have different data using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run. The 50th percentile should be very close to the average frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.
The closer this line is to perfectly flat the better, as that would mean we are running at a constant frame rate the entire time. A steep decline on the right-hand side tells us that frame times are varying more and more frequently, which might indicate potential stutter in the animation.
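A minimal sketch of how such a curve can be built from per-frame times, under the assumption that each frame time is simply inverted into an instantaneous FPS value and then sorted:

```python
# Minimal sketch: build an instantaneous-FPS percentile curve from frame
# times in milliseconds. At the Pth percentile, the frame rate was at or
# above the reported value for P percent of the frames in the run.

def fps_percentiles(times_ms):
    inst_fps = sorted((1000.0 / t for t in times_ms if t > 0),
                      reverse=True)
    n = len(inst_fps)
    return [(100.0 * (i + 1) / n, fps) for i, fps in enumerate(inst_fps)]

# Example: 95% of frames at ~60 FPS, the slowest 5% at ~30 FPS
curve = fps_percentiles([16.7] * 95 + [33.3] * 5)
print(curve[49])  # ~(50.0, 59.9): the 50th percentile sits near the average
print(curve[99])  # (100.0, ~30.0): the tail reveals the slowdown
```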
The PCPER Frame Time Variance File
Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?
We define a single frame variance as the difference between the current frame time and the previous frame time – a measure of how consistent two successive frames presented to the gamer are. However, as I found in my testing, plotting this frame variance is a nearly perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.
Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these frame times and plotting them in a percentile form we can get an interesting look at potential stutter. Comparing the frame times to a running average rather than just to the previous frame should prevent potential problems from legitimate performance peaks or valleys found when moving from a highly compute intensive scene to a lower one.
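In code form, a minimal sketch of that theory might look like this; the 20-frame window comes straight from the description above, while the helper names are illustrative:

```python
# Minimal sketch of the stutter metric described above: compare each frame
# time to the running average of the previous 20 frame times, then sort the
# resulting variances so they can be plotted in percentile form.

from collections import deque

WINDOW = 20  # previous frames in the running average, per the text above

def frame_time_variances(times_ms):
    """Per-frame variance (ms) versus the trailing 20-frame average."""
    history = deque(maxlen=WINDOW)
    variances = []
    for t in times_ms:
        if history:
            avg = sum(history) / len(history)
            variances.append(abs(t - avg))
        history.append(t)
    return variances

def variance_percentiles(times_ms):
    """Sorted ascending: a curve hugging 0 ms means consistent pacing,
    while a sharp rise at the right edge points to stutter."""
    v = sorted(frame_time_variances(times_ms))
    n = len(v)
    return [(100.0 * (i + 1) / n, x) for i, x in enumerate(v)]
```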
While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our gameplay testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU; beer fans will appreciate the acronym, which stands for International Stutter Units.
To compare these results, you want to see a line that is as close to the 0ms mark as possible, indicating very little frame rate variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90th+ percentile, but that is expected with any gameplay sequence that varies from scene to scene. What we do not want to see is a sharper upward line, which would indicate higher frame variance (ISU) and could be an indication that the game suffers from microstuttering and hitching problems.
Comments
Why aren't these available yet on Newegg?
It's a bit of an odd move for AMD. This card is priced to undercut its own lineup. Yeah, it's faster than a GTX 770, and around the same as the $100-more-expensive 780. With NVIDIA cards you get bundled games plus cooler and quieter cards. So AMD cards have their upsides but also their downsides.
Yeah, the really uneven positioning of their cards has made the 290X obsolete in a way.
Yeah, but only in regard to reference cards. Aftermarket cards that allow the 290X to stay closer to its 1GHz clock should let it pull away a fair bit, especially where the game is shader bound. Anyway, stunning value; I would not have been disappointed if this came in at the rumoured $450.
I am more interested in an apples-to-apples comparison of the 290 and 290X. If Ryan can find a little time, I would love to see a few benchmarks of them running at the same maximum fan speed to show the actual performance difference one gets for the extra $150.
Given the heat this GPU makes, it won't be much. There will be some, but you're talking 100% fan speed, and no one will use it at that. Even at 100%, the amount of air pushed through versus, say, 75% is not that much more.
Let me clarify: when I said “same maximum fan speed” I was referring to manually setting both cards to either 47% or 55% fan speed and running benchmarks to show a direct performance comparison.
While not an exact match, we did set the 290X to 20%, 30%, 40%, 50% and 60% fan speeds.
https://pcper.com/reviews/Graphics-Cards/AMD-Radeon-R9-290X-Hawaii-Configurable-GPU
I'm guessing it's yield issues: they are getting a lot of chips with a few bad compute units, and they expect to sell all of the 290Xs they can make, similar to what NVIDIA does with the Titan. Or they could just be happy with the margins on the 290 and are making a grab for increased market share. I'll be interested to see what the board partners do in the way of custom coolers, and I look forward to Joshtekk's reviews.
EDIT: Tom's strapped an Arctic Accelero Xtreme III aftermarket GPU cooler on their 290 and got an almost 13% improvement in performance. Now I REALLY want to see those custom coolers.
Titan is a 2688 CUDA core GPU; the new 780 Ti is supposed to be, according to a leaked PDF from Galaxy, a 2880 CUDA core GPU.
GRID 2 single card 4K results appear to be missing?
GRID 2 has some compatibility issues at 4K, with crashes back to the desktop.
I almost bought one just now on Newegg. $399 is a good deal. Then I saw you can get a 780 Lightning for $529 plus a $15 rebate and 3 games, or $729 for the Shield combo. You can sell the Shield and games for over $300, and you have a Lightning for the same price as a 290 without the heat and noise.
I’m going to wait for the R9 290 with a 3rd party cooler. Forget the 780 – it’s obsolete now unless it drops to $350 tops.
Agreed. NVIDIA's entire lineup from the 770 to the Titan is virtually obsolete now; glad I didn't waste my money on any of those NVIDIA GPUs.
Funny how it's “obsolete” when the AMD card is 6 months late to the game. Funny how a year ago one of the first things people jumped on as a difference between NVIDIA and AMD cards was that AMD had a game bundle; yet NOW it's NVIDIA that has the bundled games and AMD has none, and not a peep is said about that. Sadly, the biggest missed part is that the heat made by this card is terrible for a gamer. Yeah, AMD says it's made to run at that temp, but I'm sure a lot of people agree with me that just because it can run at 95C all day long doesn't mean we want that kind of heat in our computers. If it hit 95C on an open-air test bench, it's going to be quite a bit worse when you stick it in an enclosed case; the temperature difference is usually a good 5-10C warmer even with good airflow, because some of that air is stuck in the case for a short time.
You demonstrate several times in that one paragraph how little you understand about the fundamentals of building a solid system. System heat is defined by the amount of power going into the GPU, not by the temperature said GPU runs at… It's also going to run at 95C in a case because it is thermally limited.
So, yeah, keep attacking those imaginary fanboys flying around your head all day. It’s fun to watch. 😀
Ryan, as the chart says, I think with a Corsair AX860 I will be fine for two of these bad boys.
I really want to see how third-party vendors improve on the fan, heat and noise aspects of these cards.
I could see myself switching my GTX 680 out for a 290 or a 290X, but not a reference version.
The hope at the moment is that the AMD reference cooler is just that bad, and it's not that the GPU makes that much heat. It would be interesting to see how well it tests in an enclosed case with more typical airflow, rather than on an open-air bench, which does kind of skew results compared to real-world use.
Tom’s Hardware showed that the cards for press are MUCH faster than the retail cards… what do you think?
If it was ever proven that they sent out press cards that were faster than retail, it would be a pretty big PR nightmare later.
Edit: I am looking at Tom's review now, and that is very suspect to me. The difference between the press card and retail is around 15-20%. If it was ~5% you could call it within spec for slight differences between cards, but that is a little too much. Since the 290 in their test is almost the same, I wonder if that is a press card as well? I'd love to get Ryan's thoughts on that data, though.
I purchased some retail cards last night to investigate just this. We will find the truth!
Thank you.
AMD deserves to be shamed if the story is truly as it seems.
Make sure to run a LONG test inside a case also, please. Part of their findings shows it tanked more in speed as it got hotter (well, duh, I guess). Running open air is one thing, but how hot does it get running loops in a PC (or, say, playing BF3/4 for hours)? Does it get so hot that it's dropping core speed to stay at 95C? Tom's saw speeds drop into the 700s (~750MHz if memory serves), and he wasn't running his retail cards in a closed case either.
We need to start seeing tests run in a closed box if this is really what happens. I'm also wondering how a card like this affects your OC'ed CPU. If everything inside the box is hotter, wouldn't it affect your OC at least some (for those not on water, I'd guess)?
One more point: it seems to me all NV has to do is release the same type of driver, adding 10-12dB of noise with a higher default clock speed, and call it a day. That is all AMD did here, right? I wouldn't be surprised to see them do this to get 780 speeds up just enough to top AMD, then the 780 Ti on top of everything, though there are those 3 AAA games and lower temps/noise that will mean something to some.
http://www.chiphell.com/thread-891408-1-1.html
780 Ti scores above: 2880 cores and faster clocks take down a 690 in some tests. Clock frequencies of 876MHz/928MHz for the core/boost and 7GHz for the memory are specified. OCing the 780 Ti will produce some fantastic performance with the FULL set of SMXs on. I rather like NV's way here of adding more resources rather than clocking it to death to win.
Not that confusing: Rory Read wants to HURT NVIDIA's high-end gaming position, and he now has the tools to do it.
Oxide Games -> “which lets us see dramatic increases in performance on Mantle-enabled systems” – is another indication Mantle is indeed going to bring, well, dramatic performance increases.
Eidos Montreal is implementing Mantle and is part of Square Enix …
Just to stay competitive with the 290, NVIDIA will need to drop the price of the 780 to $400.
What's NVIDIA going to do in a week when APU13 reveals the full extent of AAA Mantle games in the pipeline and a working demo shows what Mantle is really capable of?
And when aftermarket 290 and 290X cards start arriving to unlock even more power with much less noise and better temps?
NVIDIA will be in a position where it cannot be competitive with AMD 2xx cards without selling at a loss.
Rory Read is about to put a 4×4 up against NVIDIA's head.
The 780/Titan OC'ed to 1GHz is on about the same level as the AMD card.
At some point, with all the driver issues and the bandwidth constraints of communicating data from GPU to CPU over a 64-bit bus (or even uHSA over a 64-bit, general-purpose data bus to regular memory), things are going to become too restrictive for future games. At what point will GPUs with much larger data buses, more available bandwidth, and much faster dedicated GPU memory have to include a CPU on die with the GPU, a CPU that shares the same fat data bus with the GPU? Would a dedicated CPU/GPU gaming box on a PCI card be the way to go in the future? Imagine if AMD had the freedom to do with its x86 IP license and its GPU IP the same thing it does with dedicated console hardware, unconstrained by the limited gaming compute/hardware restraints of the motherboard's general-purpose CPU.
Stupid question, but do any of those DVI ports output to old VGA, just in case you need to hook up to a monitor with VGA (temporarily)?
It looks like the DVI ports are DVI-D, which does not output the analog signal needed for a VGA cable…
It’s time to upgrade your benchmarks with Battlefield 4.
It is odd to me that one of the arguments is how much cooler 80C is… 176F is still toasty 😛
Tom's has an R9 290 with an aftermarket cooler modded to fit. Temps weren't discussed, but noise was, and it was quiet and overclocked to 1.15GHz. That quiet R9 290 destroyed the GTX 780.
I'll make a prediction that the non-reference cards are going to be better and that the GTX 780 Ti is going to need a price drop when the numbers post. NV has too much riding on the 780 Ti. It can't just “beat” the 290 and 290X; it has to rip them apart. Not likely.
Edit: also, G-Sync is NEVER going to work with AMD cards, EVER.
It really is about time that graphics cards came down to reasonable prices. $1000 for a single-GPU configuration?! It seems crazy now, and it has seemed crazy for the last 5 years at least. The highest-end cards in the early 2000s rarely passed $600, and it was just four years ago in 2009 that AMD released its highest-end single-GPU card, the Radeon HD 5870, at $379. Now imagine a 290X at that price instead: it would compete handsomely with the consoles. It seems odd, but now AMD will be competing with itself as soon as the PS4 and XBone are released.
Think competition with consoles, and general GPU computing versus Intel, not necessarily competition with NVIDIA. By stealing market share from NVIDIA they can only gain a relatively small and well-defined number of new customers; however, there is a much larger market and millions of new customers waiting to be tapped in the console market.
The consoles have weak hardware compared to even a modestly specced PC. A high-end PC blows all current (and next-gen) consoles out of the water.
So why do console games look so impressive? One of the big reasons is a streamlined graphics API.
So just wait for AMD's Mantle. I think quite a number of people are going to be truly shocked by the improved performance of a high-end PC due to Mantle. Developers are jumping on board. Not long to go now for the first game (BF4) to show us what Mantle can do!
WTF? How did I get 3 replies? LOL!
An NVidiot's stupidity is infinite. Why?
Before the release of the AMD R9 290X, GTX 780s were selling in the $650 to $820 price range. The NV boys were happily buying GTX 780s and Titans.
In a few games, the AMD R9 290 at $399 beats the GTX 780s and Titans. The NV boys are crying and making excuses now.
“Both the R9 290X and the R9 290 are showing faster results than the GTX 780, as we would expect based on the past pages of gaming benchmarks.” – from Ryan's review. The PCPer, HardOCP and TechPowerUp reviews are in agreement.
AMD brings lower prices to the GTX 780s, and the NV boys are not happy.
One thing I haven't really seen discussed is that this card basically cannot be overclocked. It already will not run at its nominal clock speed in real-world gaming situations unless you are willing to tolerate a jet engine under your desk. On the other hand, GTX 780s and Titans are clocked very conservatively, and almost all of them can easily run 200+ MHz over their nominal clock.
What would the price-to-performance be for a CrossFire pair of HD 7870 / R9 270X cards compared to the R9 290? At $200 each, that should make for a nice comparison.
The main reason I want to know is that I have an HD 7870 and next year I was planning on upgrading with a second GPU, but if the R9 290 outperforms that pair at the same price I will probably get one instead.
The thread below adds a bit more insight on the two-GPU question, but sometimes it is more confusing to compare if you have an old system: http://forums.techarena.in/monitor-video-cards/1472739.htm