Clock Variations
AMD claims that the variations we see in clock speeds with the R9 290X are part of what makes the GPU great. Are they right?
When AMD released the Radeon R9 290X last month, I came away from the review very impressed with the performance and price point of the new flagship graphics card. My review showed that the 290X was clearly faster than the NVIDIA GeForce GTX 780 and, at that time, considerably less expensive as well – a win-win for AMD without a doubt.
But there were concerns over a couple of aspects of the card's design. The first was temperature and, specifically, how AMD was okay with this rather large piece of silicon sustaining 95C. The second was the switch AMD included at the top of the R9 290X to toggle between fan profiles. This switch essentially creates two reference defaults and makes it impossible for us to set a single baseline of performance. The two modes only change the maximum fan speed the card is allowed to reach, yet performance changes because of that setting thanks to the newly revised (and updated) AMD PowerTune technology.
We also saw, in our initial review, a large variation in clock speeds both from one game to another as well as over time (after giving the card a chance to heat up). This led me to create the following graph showing average clock speeds 5-7 minutes into a gaming session with the card set to the default, "quiet" state. Each test is over a 60 second span.
Clearly there is variance here, which led us to more questions about AMD's stance. Remember when the Kepler GPUs launched? AMD was very clear that variance from card to card, silicon to silicon, was bad for the consumer because it created random performance deltas between cards with otherwise identical specifications.
When it comes to the R9 290X, though, AMD claims that both the GPU and the card itself form a customizable graphics solution. The customization is based around the maximum fan speed, a setting the user can adjust inside the Catalyst Control Center. This setting lets you lower the fan speed if you are a gamer who wants a quieter configuration while still getting great gaming performance. If you are comfortable with a louder fan, because headphones are magic, then you have the option to simply turn up the maximum fan speed and gain additional performance (a higher average clock rate) without any actual overclocking.
PowerTune Update
This is now possible thanks to the updated AMD PowerTune technology. It no longer enforces a fixed "boost" clock for the Hawaii GPU; instead, the algorithm varies performance based on multiple parameters. AMD has set a thermal limit of 95C on the R9 290X, so fan speed, clock speed, and voltage all adjust to maintain that maximum temperature. Obviously, performance follows.
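AMD's actual PowerTune algorithm is proprietary, but the feedback behavior described above can be sketched in a few lines. This is a hypothetical illustration only: the temperature target and clock floor come from the article's observations, while the step size and the bang-bang control style are assumptions, not AMD's implementation.

```python
# Hypothetical sketch of a PowerTune-style thermal governor.
# Idea: the fan is capped by the user, the temperature target is fixed
# at 95C, and the core clock floats to hold that target.

TEMP_TARGET_C = 95.0    # thermal limit AMD sets for the R9 290X
CLOCK_MAX_MHZ = 1000.0  # advertised "up to" clock
CLOCK_MIN_MHZ = 727.0   # illustrative floor under the quiet profile
STEP_MHZ = 13.0         # illustrative adjustment step, not AMD's value

def next_clock(current_clock_mhz, gpu_temp_c, fan_at_cap):
    """One step of a simple bang-bang governor.

    If the die is at or over the 95C target and the fan is already at
    its user-set maximum, shed clock speed; if there is thermal
    headroom, climb back toward the maximum clock.
    """
    if gpu_temp_c >= TEMP_TARGET_C and fan_at_cap:
        return max(CLOCK_MIN_MHZ, current_clock_mhz - STEP_MHZ)
    if gpu_temp_c < TEMP_TARGET_C:
        return min(CLOCK_MAX_MHZ, current_clock_mhz + STEP_MHZ)
    return current_clock_mhz
```

Raising the maximum fan speed means `fan_at_cap` is reached later (or never), so the governor spends more time climbing toward the maximum clock, which is exactly the "free performance" trade-off the article describes.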
Now you can also adjust clock speed and power levels in the driver, but for the sake of this conversation and story we are leaving those out to see what kind of flexibility you get from fan speed alone. PowerTune has a lot more functionality (and a lot more potential upside), but it also creates some questionable performance variance.
What AMD is doing here isn't unique of course; NVIDIA's Kepler architecture introduced GPU Boost technology in early 2012 and Intel's Turbo Boost technology on the CPU side attempts to do something similar as well. But with those two technologies we generally have well defined levels of scaling to expect, and the clock speed deltas are fairly low. With the latest PowerTune from AMD on the 290X, that isn't the case.
The purpose of this story, then, is twofold. First, I wanted to see how flexible the PowerTune technology was and how configurable the R9 290X card and the Hawaii GPU actually are. Does it live up to AMD's claims that this is a feature and not simply a desire to maintain yields and profit margins? Second, we wanted to know how much performance you can actually gain by increasing the maximum fan speed, and how much you lose by decreasing it.
As it turns out there is quite a lot of variance. This, of course, leads to quite a few topics we will discuss as we go through the data.
- AMD Radeon R9 290X 4GB – $549 (Newegg.com)
- AMD Radeon R9 280X 3GB – $299 (Newegg.com)
- NVIDIA GeForce GTX TITAN 6GB – $999 (Newegg.com)
- NVIDIA GeForce GTX 780 3GB – $649 (Newegg.com)
Testing Setup – Very Important!
You will see we are doing TWO RUNS for our benchmarking. You will see labels for "Run 1" and "Run 2". Run 1 is very simple. We start with a cold GPU, making sure the GPU temperature is in the low 40C area at the desktop. We open up GPU-Z and begin recording data (this captures our clocks, temperatures, fan speeds, etc.). As quickly as possible we start our benchmark (Crysis 3 in this case) and load into the testing area as soon as possible, usually within 30 seconds. I then immediately do our standard 60 second benchmark run through.
Then I just play the game for 5-7 minutes. Not idling, not at a menu, actively playing the game. This heats up the GPU and gets it into a real world scenario. I then reload the map after the minimum of 5 minutes has passed and do the exact same 60 second benchmark run through, capturing our results through our Frame Rating methodology. That is Run 2.
So if you see differences between Run 1 and Run 2 results you are seeing the difference between a cold GPU benchmark and a warm GPU benchmark. Or, this is what we consider "unfair" performance levels against actual, real-world performance levels.
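The cold-versus-warm methodology above boils down to averaging the GPU-Z sensor log over each 60-second benchmark window. A minimal helper for that summary might look like the following; note that GPU-Z's CSV column names and timestamp format vary by version, so the `"Date"` and `"GPU Core Clock [MHz]"` names used here are assumptions about the exported log, not a guaranteed schema.

```python
# Hypothetical helper for summarizing a GPU-Z sensor log (CSV export):
# average core clock over a benchmark window, the way the article
# compares Run 1 (cold GPU) against Run 2 (warmed-up GPU).
import csv
from datetime import datetime

def average_clock(log_path, start, end, clock_col="GPU Core Clock [MHz]"):
    """Average the core-clock column between two datetimes (inclusive)."""
    samples = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.strptime(row["Date"].strip(), "%Y-%m-%d %H:%M:%S")
            if start <= ts <= end:
                samples.append(float(row[clock_col].strip()))
    return sum(samples) / len(samples) if samples else 0.0
```

Running this once over the first 60-second window and once over the window starting 5–7 minutes in gives the cold/warm clock comparison directly from the same log file.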
We are only using Crysis 3 here because it is the most demanding single game on GPUs today. This phenomenon does occur in all games to some degree, though not always in such dramatic fashion.
Leaving everything else untouched, we tried out five different fan speed settings. Using the Catalyst 13.11 V8 driver, I ran this same scenario three times at each of five maximum fan speeds: 20%, 30%, 40%, 50%, and 60%. 40% is the default "quiet" setting for the Radeon R9 290X, while 55% is the "Uber" mode that AMD offers through the switch on top of the reference design. I wanted to see if I could get the card to an even lower noise level with the 20% and 30% options, then push the fan speed up to a very high (and loud) level to see what additional performance users could get out of Hawaii by sacrificing acoustics.
I think the results are going to be quite interesting.
Alright, call me crazy, but is there anything at all scientific about setting a fan speed at 40%? Nothing, it's just simplicity. How about setting the fan RPM or the fan's decibel level – something actually performance related? 40% is arbitrary between two different fans; the size and blade pitch could dictate wildly different CFM values.
Regardless, it's a pretty crazy comparison when it's clear AMD put almost zero effort into the reference cooler. They planned to give consumers a good value, not the best of everything. It's loud, we all get it, but how about an actually useful review of someone strapping a water block on it, or another cooler? How many early adopters of $500 hardware leave well enough alone? If you buy a 290X you know what you are buying and likely have water cooling, so why pay more money for a great cooler you will just take off?
Seen that way, it also offers configurable performance in FPS, temperatures, and sound levels.
I am not sure what to make of my R9 290, to be honest. My water block arrives tomorrow so my opinions are likely to change, but from the testing I have done with this card over the last two weeks I can say that overclocking is totally pointless with the stock cooling. My average clock in BF4 with the fan on auto is around 820MHz with a low of 720MHz… I quickly hit 94C. The latest 13.11 beta 9.4 Catalyst drivers have no performance tab on my test machine (a known bug), so I'm not bothering to test with the power limit increased until this is fixed and my water block is installed.
I am sure that once adequately cooled this problem will disappear.
Still, I maintain that AMD should be blamed for shoddy marketing and a lousy stock HSF.
They should have been honest enough to state that the clocks on this card are 800MHz, boosting to 950MHz depending on heat and power… just my opinion.
I sold my GTX 780 for a stock R9 290 and believe that this was a good deal. I got $500 for my GTX 780 and this allowed me to buy the R9 290 and a new EK Waterblock. Fully cooled I will be able to get better performance from the R9 290 and as I only play BF4 I am excited to see what Mantle brings to the table later in December.
Nice review as always.