Sapphire Radeon R9 285 and Testing Setup
For our testing AMD sent along a non-reference card – a slightly overclocked Sapphire Radeon R9 285 2GB.
The card looks identical to previous Sapphire offerings, as it uses the Dual-X cooler the company has been promoting for some time. The GPU clock is raised to 965 MHz while the memory clock gets a slight bump from 1375 MHz to 1400 MHz.
The output configuration mirrors that of most NVIDIA GeForce cards on the market with a pair of dual-link DVI ports, a full-sized HDMI and a full-sized DisplayPort.
Power is provided by a pair of 6-pin connections, which are able to supply much more than the rated 190 watt TDP.
The only outward sign that we are dealing with a new piece of silicon is the lack of any CrossFire connectors. Because this GPU handles multi-GPU communication over the PCI Express bus alone (AMD's XDMA technology), R9 285 graphics cards will be connector-less just like the 290/290X.
Testing Configuration
The specifications for our testing system haven't changed.
| Test System Setup | |
| --- | --- |
| CPU | Intel Core i7-3960X Sandy Bridge-E |
| Motherboard | ASUS P9X79 Deluxe |
| Memory | Corsair Dominator DDR3-1600 16GB |
| Hard Drive | OCZ Agility 4 256GB SSD |
| Sound Card | On-board |
| Graphics Card | Sapphire Radeon R9 285 2GB, MSI Radeon R9 280X Gaming 3GB, MSI Radeon R9 280 Gaming 3GB, NVIDIA GeForce GTX 760 2GB |
| Graphics Drivers | AMD: 14.7 Beta, NVIDIA: 340.53 |
| Power Supply | Corsair AX1200i |
| Operating System | Windows 8 Pro x64 |
What you should be watching for
- R9 285 vs R9 280 – The most direct comparison here is between the new R9 285 and the R9 280 that is being end-of-lifed with this release. Can a card with 50% less memory and a slightly lower GPU clock speed maintain or improve performance thanks to architectural changes made in Tonga?
- R9 285 vs R9 280X – There is a chance, if AMD's performance claims are trusted, that the new Radeon R9 285 will make up ground on the Radeon R9 280X, a card that sells for only $20-40 more today.
- R9 285 vs GTX 760 – NVIDIA's GTX 760 has remained the only competition against AMD in this price bracket for some time, and we already know that on performance alone it doesn't really match up well with the Radeon R9 280. Does the R9 285 change anything?
Frame Rating: Our Testing Process
If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we are using a secondary PC to capture the output from the tested graphics card directly, and then post processing the resulting video to determine frame rates, frame times, frame variance and much more.

This amount of data can be pretty confusing if you are attempting to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!!
While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we are generating with additional code of our own.
If you don't need the example graphs and explanations below, you can jump straight to the benchmark results now!!
The PCPER FRAPS File

While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a sub-set of the data from the RUN file above, the average frame rate over time as defined by FRAPS, though we are combining all of the GPUs we are comparing into a single graph. This will basically emulate the data we have been showing you for the past several years.
The PCPER Observed FPS File

This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file above. This takes out the dropped and runt frames, giving you the performance metric that actually matters: how many frames are actually being shown to the gamer to improve the animation sequences.
As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
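To make the distinction concrete, here is a minimal Python sketch of the two calculations, assuming you already have a list of per-frame display times (in milliseconds) pulled from the captured video; the 25%-of-average runt threshold is an illustrative assumption on our part, not the exact rule the extraction scripts use.

```python
# Minimal sketch: FRAPS-style FPS vs. "observed" FPS from per-frame display times.
# Assumes frame_times_ms holds each captured frame's on-screen time in milliseconds.

def fraps_fps(frame_times_ms):
    """Average FPS the traditional way: every frame counts, runts and drops included."""
    total_s = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_s

def observed_fps(frame_times_ms, runt_fraction=0.25):
    """Average FPS after discarding runts, frames too short to contribute to the animation."""
    avg = sum(frame_times_ms) / len(frame_times_ms)
    useful = [t for t in frame_times_ms if t >= runt_fraction * avg]
    total_s = sum(frame_times_ms) / 1000.0  # the wall-clock span of the run is unchanged
    return len(useful) / total_s

times = [16.7, 2.1, 30.5, 16.9, 1.8, 31.0, 16.6]  # contrived per-frame display times in ms
print(f"FRAPS-style FPS: {fraps_fps(times):.1f}")
print(f"Observed FPS:    {observed_fps(times):.1f}")
```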
The PLOT File

The primary file that is generated from the extracted data is a plot of calculated frame times, including runts. The numbers here represent the amount of time that frames appear on the screen for the user; a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer, while a “wider” line, or one with a lot of peaks and valleys, indicates a lot more variance and is likely caused by a lot of runts being displayed.
The RUN File
While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card for that particular test. It is in this graph that you can see interesting data about runts, drops, average frame rate and the actual frame rate of your gaming experience.

For tests that show no runts or drops, the data is pretty clean. This is the familiar frames-per-second-over-time graph that has long been the standard for performance evaluation of graphics cards.

A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as FRAPS FPS measurement does. Any area in red is a dropped frame – the wider the amount of red you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.
The wide yellow area is the representation of runts, the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen. The larger the area of yellow, the more often those runts are appearing.
Finally, the blue line is the measured FPS over each second after removing the runts and drops. We are going to be calling this metric the “observed frame rate” as it measures the actual speed of the animation that the gamer experiences.
The PERcentile File

Scott introduced the idea of frame time percentiles months ago, but now that we have different data using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run. The 50th percentile should be very close to the average total frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.
The closer this line is to being perfectly flat the better as that would mean we are running at a constant frame rate the entire time. A steep decline on the right hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
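As a rough illustration of how that percentile line is built, the sketch below converts each frame time to an instantaneous FPS and reports the frame rate exceeded for a given fraction of frames. It treats every frame equally rather than weighting by wall-clock seconds, which is a simplification of what FCAT actually outputs; the function name is ours.

```python
def fps_percentiles(frame_times_ms, marks=(50, 90, 95, 99)):
    """Frame rate exceeded for the given percentage of frames in the run."""
    inst_fps = sorted((1000.0 / t for t in frame_times_ms), reverse=True)
    results = {}
    for p in marks:
        idx = min(int(len(inst_fps) * p / 100.0), len(inst_fps) - 1)
        results[p] = inst_fps[idx]
    return results

times = [16.7] * 90 + [33.3] * 8 + [66.7] * 2  # mostly ~60 FPS with a handful of slow frames
for p, fps in fps_percentiles(times).items():
    print(f"{p}th percentile: {fps:.1f} FPS")
```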
The PCPER Frame Time Variance File
Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?
We define a single frame variance as the difference between the current frame time and the previous frame time, a measure of how consistently consecutive frames are presented to the gamer. However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.
Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these frame times and plotting them in a percentile form we can get an interesting look at potential stutter. Comparing the frame times to a running average rather than just to the previous frame should prevent potential problems from legitimate performance peaks or valleys found when moving from a highly compute intensive scene to a lower one.

While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our game play testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU, an acronym (International Stutter Units) that beer fans will appreciate.
To compare these results you want to see a line that is as close to the 0ms mark as possible, indicating very little frame time variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any game play sequence that varies from scene to scene. What we do not want to see is a sharper upward slope, which would indicate higher frame variance (ISU) and could be an indication that the game suffers from microstuttering and hitching problems.
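For readers who want to see the arithmetic behind the ISU idea, this is a rough sketch of the running-average comparison described above; the function and variable names are ours, and the exact windowing and weighting used by the real scripts may differ.

```python
def isu_percentiles(frame_times_ms, window=20, marks=(50, 75, 90, 95, 99)):
    """Frame time variance vs. the running average of the previous `window` frames,
    reported at a few percentile marks (higher values suggest more potential stutter)."""
    variances = []
    for i in range(window, len(frame_times_ms)):
        running_avg = sum(frame_times_ms[i - window:i]) / window
        variances.append(abs(frame_times_ms[i] - running_avg))
    variances.sort()
    return {p: variances[min(int(len(variances) * p / 100.0), len(variances) - 1)]
            for p in marks}

times = [16.7] * 200 + [16.7, 45.0] * 10 + [16.7] * 200  # a smooth run with a hitchy stretch
for p, v in isu_percentiles(times).items():
    print(f"{p}th percentile variance: {v:.1f} ms")
```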

Ryan – is the 250w TDP listed in the table on page one, for the 280 correct? I thought it was 200w?
As soon as the Asus Strix version of this hits, I think I’m finally upgrading.
Ooops, yep, you're right!
A few days back I was really confused with power consumption of 280 with others mentioning 250W and others 200W. From a little search 280 is indeed 250W, 7950 Boost was 225W and the first 7950 was 200W.
Maybe have another look about these numbers?
AMD’s slide
http://cdn.videocardz.com/1/2014/03/AMD-Radeon-R9-280-4.jpg
Also in the same table the values listed for the “Peak Compute” seem potentially misleading. The quoted values for Tahiti GPUs are for the minimum/base core clock speeds, not the “Rated Clock”, boost, “up to”, max, or whatever you want to call them clock speeds. I would question using “Peak” to classify these values. Do you happen to know if the compute value for the R9 285 is also at its base/minimum clock speed? If so, what is its minimum clock speed?
If we interpret “Peak Compute” to be theoretical single-precision FLOPS at max/”up to” clocks, then a Tahiti PRO, aka. 7950/R9 280, @827MHz = 2.964 TFLOPS, but @933MHz = 3.344 TFLOPS. Similarly, a Tahiti XT, aka. 7970/R9 280X @850 MHz = 3.481 TFLOPS, but @1000MHz = 4.096 TFLOPS.
This points to a bit of detail that is not usually provided in GPU reviews even after the whole R9 290X/290 launch drama over thermal/power throttling, UBER mode, etc. This was a real problem with my Sapphire 7950 Boost card too. Do you know if the R9 285 was throttling during testing or if it was able to maintain boost clocks throughout? Do you know what voltages are being used?
My 7950 Boost BIOS pumped 1.25V into the GPU in boost mode which caused obvious clock speed throttling under stock settings. Maxing out Power Tune cured most of the throttling but it still ran hot due to the (excessive) voltage. Switching to a non-Boost BIOS allows me to specify a lower 1.169V and run 150MHz over stock (1075MHz) without any throttling. This also lowers max temperatures and noise.
http://www.techpowerup.com/reviews/Sapphire/R9_285_Dual-X_OC/26.html
Tonga is very inefficient.
Is it? According to GURU3D, it consumes 52 Watts less than the 280x despite being almost as fast.
http://www.guru3d.com/articles_pages/amd_radeon_r9_285_review,5.html
This is the problem with power consumption numbers. Depending the game and HOW you measure the power draw, the variance in results can be HUGE.
I think AMD model numbers and re-branding are as much to blame as anything for the confusion. The R9 285 (190W TDP spec) performs on par with (and is meant to replace) the R9 280 (250W TDP spec). A 60W drop in TDP would appear to be a vast improvement in efficiency, but reviews seem to imply that the difference at the wall is much smaller (~20W) once you account for OC vs stock clock versions.
In general, TDP is proportional to electrical power used, not equal to it. Could this difference in TDP spec compared to power at the wall imply that Tonga generates proportionately less waste heat rather than consumes less electrical power compared to Tahiti?
Few review sites have comparable numbers for all incarnations of Tahiti-based products. For example, there is a noticeable difference in power usage between a 7950, a 7950 Boost, and an R9 280 due to voltages and clock speeds. At least they are all based on the same silicon with the same amount of VRAM so power comparisons are easier to make. Now add Tonga to the mix which has a similar die size, 700M more transistors, 33% smaller memory system, and requires lower voltages to achieve higher clock speeds from what I have seen. There are many variables. Some should reduce power needs and some increase power needs.
AMD seemed in no hurry to release the R9 280 in the first place. With Tonga, AMD is essentially back filling their product stack to bring feature parity (FreeSync, XDMA, TrueAudio, etc.) to this price point. It is disingenuous for AMD to promote these features and then keep re-branding old GPU’s that don’t support them. Being stuck at 28nm for this long probably forced some awkward/bad release cycles. Tonga feels like a careful balance of compromises which are unfortunately late to the party. At least it provides a way to fill some pot holes and test out additional tweaks to the GCN architecture.
tomshardware reports around 178w on typical gaming and maintains below 200w when overclocked
Why isn't it listed whether it supports DP 1.2a???
DisplayPort 1.2a was a thing for AMD’s adaptive sync, which nvidia won’t support.
While it might be slightly more power efficient, it’s nowhere near what Nvidia achieved with the 750 Ti. Granted these are two different performance classes so it remains to be seen if Nvidia will achieve the same in a performance class card.
The bandwidth efficiency and tessellation performance increase is nice to see, and promising for future cards. But if AMD can’t improve power efficiency I don’t think that’s going to help much.
That said, Nvidia’s offerings at this price point suck pretty bad, so I guess 285 wins by default. I say get a 280 while there’s still stock left. I’m guessing they’ll also bring out the full Tonga 285X soon that’ll beat the 280X in a similar way.
Unless you are dying for a gpu now, it's likely best to wait for nvidia's new cards to come out and see what performance comes outta them before making a choice. Worst case, it drives prices down some.
3 years later and we are still getting the same 28nm parts. Meanwhile Intel has stopped increasing in performance because AMD can offer nothing anymore. This has got to be the worst stretch of stagnation for the PC hardware world I can remember…
It is true that intel and amd have stopped reducing the size of chips, but this is mostly due to the fact that at the 8-10 nm scale quantum tunneling occurs and chips become more and more inefficient. Until they come up with another material like graphene to replace silicon we will not see chips much smaller than 10-15 nm. This being said, AMD still has some way to go before 18nm. The main reason they have not done so yet is the fact that amd chips (gpu and cpu) are underrated by most people, meaning they don't have as much funds to pay for R&D, and nvidia as well as intel don't have a reason to push R&D as AMD is still trying to catch up. (AMD made 83 mil last year in the cpu and gpu market, intel made 10 bil.) Though nvidia is planning an 18nm architecture for 2015-16 ish, this being maxwell. IBM is doing R&D right now for graphene chips, although it seems it's a far way off, 2020 maybe.
sorry for all the shitty grammar.
intel is working on smaller dies; AMD, well, not so much, they rely on the big fabs to do it.
Dude double patterning is hard, give them time. It is not conspiracy, it is just that hard to manufacture at that level when key technologies as UV lithography are not available.
Even Intel will hit the Gigabucks process node wall before the laws of physics put an end to Moore’s law/observation, it’s costing almost geometrically more with each new process node shrink, so the R&D costs curve verses the ability to amortize these costs over total units sold, is going to rapidly approach the untenable, even for Intel’s big wallet. Why do you think Intel is slowing the Tick Tock, well that and some lack of competition in the x86 desktop market. Going below 14nm, and not having a large enough market for its x86 based parts in the mobile market, is making those amortization curves say no, that and Intel’s Internal bean counters/quants in the accounting division, and stockholders/investors, and the big institutional investors, that hold Intel’s stock(big institutional investors that are accustomed to high dividends at the expense of R&D/whatever, or they will leave for greener pastures).
Intel is coping with an x86 based ISA ecosystem that has hardly any share of the Mobile devices market, and the future licensed Power8s coming in the server market, watch Intel’s share price, once Google announces a firm commitment with the Power8 systems that Google is currently evaluating, that along with the ARM server SKUs beginning to arrive, and compete(no worries there for the ARM based server SKU makers) with Intel’s Avoton(Discontinued/rebranded). Intel is facing a loss of market domination(more so because of different ISAs, used in mobile, and future high power non x86 ISA competition coming to the server room, along with non high powered ARM based densely packed server SKUs), and the x86 based unit sales, that allowed Intel to spend its way to process node leadership are now stagnating, see the empty chip fab buildings.
Intel’s x86 still dominates in the PC/Laptop market, but the ISA that will/continue to dominate the netbook/chromebook, and tablet market is non x86, and comes with better graphics, and Apple, Nvidia, and soon AMD will be introducing more powerful Custom wide order ARMv8 SOCs, with better graphics, the Tegra K1’s graphics is the leading example currently, of why Intel will not be able to sell enough of x86 to make the investments in below 14nm process node pay off as quickly as 22nm, or 32nm before that. The entire silicon based economy is rearranging around dedicated foundries providing fab services for the entire industry, and Intel is one of the last CPU manufacturer holdouts that will still be holding its own fab capacity, and the affordable process node lead for even Intel is drying up fast, without the total unit sales to keep those expensive chip fabs (Uber expensive at 14nm and below) running at full capacity. The GPU makers will be getting the most out of 28nm, before they go to 20nm, as the costs to go below will have to be shared by more companies, than just AMD, or Nvidia, to make it cost effective to go below 20nm. Look for more FINFET and die stacking, and less shrinking, going forward.
zzzzZZZZzzzZZZzzzzzzzZZZZZZZzzzzz
missing old Anand
bzzzzZZZ DIE AMD just die frrrrrfrrrr
AMD will never die, they will just go 3 ISAs, ARM, x86, and Power8(Rory still has that IBM time under his belt), And SeaMicro(AMD owned) sells Xeon server SKUs, to go along with the Opteron(ARM, and x86 based), and SeaMicro will be selling Power8 based systems too, once those licensed Power8s start hitting the market, Most likely AMD will license the Power8, like the ARM, and profit more than 3 ways. AMD’s SeaMicro selling Xeon based Kit! business is funny that way, when that’s what the customer wants, and there’s money to be made.
Thanks for the review. I was just looking for a card to upgrade from my old HD7770, and this seems like a reasonable choice, good value for the money. But I'm a bit concerned about the somewhat poor performance of this card with Mantle that I've seen in other reviews; hope it is only a driver issue that will be resolved. Have you done any testing with Mantle?
Wait for Nvidia Maxwell GTX 960 in October.
well, rumors say end of this month; we'll see if that holds true.
http://www.fudzilla.com/home/item/35657-amd%E2%80%99s-freesync-only-for-new-gpus
Ryan Shrout
You do not know how to ask the right questions
” AMD said that the only the newest GPU silicon from AMD will support FreeSync displays ”
” the Hawaii GPU that drives the Radeon R9 290 and 290X will be compatible with FreeSync monitors ”
what is compatible ???
http://support.amd.com/en-us/search/faq/219
All AMD Radeon™ graphics cards in the AMD Radeon™ HD 7000, HD 8000, R7 or R9 Series will support Project FreeSync for video playback and power-saving purposes. The AMD Radeon™ R9 295X2, 290X, R9 290, R7 260X and R7 260 GPUs additionally feature updated display controllers that will support dynamic refresh rates during gaming.
AMD APUs codenamed “Kaveri,” “Kabini,” “Temash,” “Beema” and “Mullins” also feature the necessary hardware capabilities to enable dynamic refresh rates for video playback, gaming and power-saving purposes. All products must be connected to a display that supports DisplayPort Adaptive-Sync.
Tonga will support it too.
I asked AMD Customer Service and they say it does not support DP 1.2a.
The video card can connect to the screen with DP 1.2a,
but it does not work to the full standard.
what is compatible ???
Sit on a chair is compatible
What lenovo laptop is that Ryan is using?
Intel makes approx 60% margin on chips, so why eat into that to salve a few noddy enthusiasts worth at most 100m in sales?
they have to fill the coffers to pay big fines
Are you guys gonna test this card in crossfire?
Ryan, don't forget that according to AMD, the R9 280 will have DirectX 12 support and more hardware support for future updates. AMD was implying something more was coming during the interview with ORIGIN PC and AMD.
Direct X what???
Just sayin…..
Direct X?? lol haha *cough *ahem ahhhhh lol
Edit.. Im happy that Devs are finally gonna START using Dx11. Thx M$, Sony, AMD, x86, and the entire Console Ecosystem as a whole lol. Preciate the love….FINALLY
Sorry the R9 285 will have DirectX 12 support and more goodies are coming.
FreeSync…the Sync that Sunk
Ryan.
I was wondering if you are thinking of doing a review on AMD R9 285 with 2 cards in XDMA mode.
To compare similar cards in Crossfire and Nvidia SLI, when you get more samples of R9 285.
And when or if there’s going to be any monitors with FreeSync, or at least of any rumors.
Also, I've seen a few photos of a new AMD socket & CPU at 5.233 GHz with 6 cores due next year,
with 6 MB L2 and 24 MB L3 cache, rated at 190 W and thermals at 24 C, and it is not a Piledriver.
Is this just a rumor or is it in the works?
What should I buy, the R9 270X @ $169.99 or the R9 285 @ $249.99?
I play Diablo 3 and I am upgrading from HD 5770 🙂
Please help.
Thanks
My R9 285 is constantly dropping GPU core clock and GPU usage when gaming or testing, is this normal?