Polaris 10 Specifications
The teased and talked about Polaris GPU is finally here. Does our review show the RX 480 is the new $200 king?
It would be hard at this point to NOT know about the Radeon RX 480 graphics card. AMD and the Radeon Technologies Group have been talking publicly about the Polaris architecture since December of 2015, with lofty ambitions. Given the precarious position the company is in, well behind in market share and struggling to compete with the dominant player in the market (NVIDIA), the team was willing to sacrifice sales of current-generation parts (the 300-series) in order to excite the user base for the upcoming move to Polaris. It is a risky bet, and one that will play out over the next few months in the market.
Since then, AMD has continued to release information a bit at a time. First there were details on the new display support, then information about the advantages of the 14nm process technology. We then saw demos of working silicon at CES with targeted form factors, and at events in Macau the company walked press through the full architectural details. At Computex it announced rough performance metrics and a price point. Finally, at E3, AMD discussed the RX 460 and RX 470 cousins and the release date of…today. It’s been quite a whirlwind.
Today the rubber meets the road: is the Radeon RX 480 the groundbreaking and stunning graphics card that we have been promised? Or does it struggle again to keep up with the behemoth that is NVIDIA’s GeForce product line? AMD’s marketing team would have you believe that the RX 480 is the start of some kind of graphics revolution – but will the coup be successful?
Join us for our second major graphics architecture release of the summer and learn for yourself if the Radeon RX 480 is your next GPU.
Polaris 10 – Radeon RX 480 Specifications
First things first, let’s see how the raw specifications of the RX 480 compare to other AMD and NVIDIA products.
| | RX 480 | R9 390 | R9 380 | GTX 980 | GTX 970 | GTX 960 | R9 Nano | GTX 1070 |
|---|---|---|---|---|---|---|---|---|
| GPU | Polaris 10 | Grenada | Tonga | GM204 | GM204 | GM206 | Fiji XT | GP104 |
| GPU Cores | 2304 | 2560 | 1792 | 2048 | 1664 | 1024 | 4096 | 1920 |
| Rated Clock | 1120 MHz | 1000 MHz | 970 MHz | 1126 MHz | 1050 MHz | 1126 MHz | up to 1000 MHz | 1506 MHz |
| Texture Units | 144 | 160 | 112 | 128 | 104 | 64 | 256 | 120 |
| ROP Units | 32 | 64 | 32 | 64 | 56 | 32 | 64 | 64 |
| Memory | 4GB / 8GB | 8GB | 4GB | 4GB | 4GB | 2GB | 4GB | 8GB |
| Memory Clock | 7000 MHz / 8000 MHz | 6000 MHz | 5700 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 8000 MHz |
| Memory Interface | 256-bit | 512-bit | 256-bit | 256-bit | 256-bit | 128-bit | 4096-bit (HBM) | 256-bit |
| Memory Bandwidth | 224 GB/s / 256 GB/s | 384 GB/s | 182.4 GB/s | 224 GB/s | 196 GB/s | 112 GB/s | 512 GB/s | 256 GB/s |
| TDP | 150 watts | 275 watts | 190 watts | 165 watts | 145 watts | 120 watts | 275 watts | 150 watts |
| Peak Compute | 5.1 TFLOPS | 5.1 TFLOPS | 3.48 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 2.3 TFLOPS | 8.19 TFLOPS | 5.7 TFLOPS |
| Transistor Count | 5.7B | 6.2B | 5.0B | 5.2B | 5.2B | 2.94B | 8.9B | 7.2B |
| Process Tech | 14nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 16nm |
| MSRP (current) | $199 | $299 | $199 | $379 | $329 | $279 | $499 | $379 |
A lot of this data was given to us earlier in the month at the product’s official unveiling at Computex, but it is interesting to see it in the context of other hardware on the market today. The Radeon RX 480 has 36 CUs with 2304 stream processors, a count that falls between the Radeon R9 380 and the R9 390. But you must also consider the clock speeds made possible by production on the 14nm process node at GlobalFoundries. While the R9 390 ran at just 1000 MHz, the new RX 480 has a “base” clock speed of 1120 MHz and a “boost” clock speed of 1266 MHz. I put those in quotes for a reason – we’ll discuss that important note below.
The immediate comparison to NVIDIA’s GTX 1070 and GTX 1080 clock speeds will happen, even though their pricing puts them in a very different class of product. AMD is only able to run the Polaris GPUs at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, a difference of roughly 37%. That is substantial, and even though we know that you can’t directly compare the clock speeds of differing architectures, there has to be some debate as to why the move from 28nm to 14nm (GlobalFoundries) does not result in the same immediate clock speed advantages that NVIDIA saw moving from 28nm to 16nm (TSMC). We knew that AMD and NVIDIA were going to be building competing GPUs on different process technologies for the first time in modern PC gaming history, and we knew that would likely result in some delta; I just did not expect it to be this wide. Is it an issue with GlobalFoundries or with AMD’s GCN architecture? It is hard to tell, and neither party in this relationship is willing to tell us much on the issue. For now.
At the boost clock speed of 1266 MHz, the RX 480 is capable of 5.8 TFLOPS of compute, running well past the 5.1 TFLOPS of the Radeon R9 390 and getting close to the Radeon R9 390X, both of which are based on the Hawaii/Grenada chips. There are obviously some efficiency and performance improvements in the CUs themselves with Polaris, as I will note below, but much as we saw with NVIDIA’s Pascal architecture, the fundamental design remains the same coming from the 28nm generation. But as you will soon see in our performance testing, the RX 480 doesn’t really overtake the R9 390 consistently – but why not?
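For reference, peak FP32 throughput on these GPUs is simply the shader count times two operations per clock (a fused multiply-add) times the clock speed. A quick back-of-the-envelope sketch in Python, using only the spec-sheet numbers above rather than measured clocks:

```python
# Peak FP32 throughput = shaders x 2 ops/clock (FMA) x clock, expressed in TFLOPS.
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz / 1_000_000

print(peak_tflops(2304, 1120))  # RX 480 at its "base" clock:  ~5.16 TFLOPS
print(peak_tflops(2304, 1266))  # RX 480 at its "boost" clock: ~5.83 TFLOPS
print(peak_tflops(2560, 1000))  # R9 390 for comparison:       ~5.12 TFLOPS
```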
With Polaris, AMD is getting into the game of variable clock speeds on its GPUs. When NVIDIA introduced GPU Boost with the GTX 680 cards in 2012, it was able to dramatically improve the relative gaming performance of its products. AMD attempted to follow suit with cards that could scale by 50 MHz or so, but in reality the dynamic clocking of its products never behaved as expected. They just kind of ran at top speed all of the time, which sounds great but defeats the purpose of the technology. Polaris gets it right this time, though with some changes in branding.
For the RX 480, the “base” clock of 1120 MHz is not its minimum clock while gaming, but instead is an “average” expected clock speed that the GPU will run at, computed by AMD in a mix of games, resolutions and synthetic tests. The “boost” clock of 1266 MHz is actually the maximum clock speed of the GPU (without overclocking) and its highest voltage state. Contrast this with what NVIDIA does with base and boost clocks: base is the minimum clock rate that they guarantee the GPU will not run under in a real-world gaming scenario while boost is the “typical” or “average” clock you should expect to see in games in a “typical” chassis environment.
The differences are subtle but important. AMD is advertising the base clock as the frequency you should expect to see in gaming, while the boost clock is the frequency you would hit in the absolute best-case scenario (good case cooling, a good quality ASIC, etc.). NVIDIA doesn’t publicize that maximum clock at all. There is a “bottom base” clock below which the RX 480 should not fall (the fan will increase speed to make sure it doesn’t in any circumstance), but it is hidden in the WattMan overclocking utility as the “Min Acoustic Limit” and was set at 910 MHz on my sample.
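To make the distinction concrete, here is a small illustrative sketch; the clock trace is hypothetical, not data from our testing, and the labels simply restate the definitions above:

```python
# Hypothetical in-game clock samples (MHz) for an RX 480 in a warm case.
observed = [1266, 1240, 1190, 1150, 1120, 1098, 1266, 1210]

avg_clock = sum(observed) / len(observed)
min_clock = min(observed)
max_clock = max(observed)

# AMD's labels: "base" (1120 MHz) is roughly the average clock expected while
# gaming, "boost" (1266 MHz) is the maximum stock clock / highest voltage state,
# and the "Min Acoustic Limit" (910 MHz here) is the floor the fan defends.
print(f"average {avg_clock:.0f} MHz vs AMD 'base' 1120 MHz")
print(f"maximum {max_clock} MHz vs AMD 'boost' 1266 MHz")

# NVIDIA's labels flip this around: "base" is the guaranteed minimum in real
# games, while "boost" is the typical/average clock in a typical chassis.
print(f"minimum {min_clock} MHz is closest to what NVIDIA would call 'base'")
```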
That is a lot of discussion around clock speeds, but I thought it was important to get that information out in the open before diving into anything else. AMD clearly wanted to be able to claim higher clock rates in its marketing and product lines than it could have had it followed NVIDIA’s convention exactly. We’ll see how that plays out in our testing – but as you might infer from my wording above, this is why the card doesn’t bolt past the Radeon R9 390 in our testing.
The RX 480 sample we received for testing was configured with 8GB of memory running at 8 Gbps (8 GHz effective) for a total memory bandwidth of 256 GB/s. That’s pretty damned impressive – a 256-bit GDDR5 memory bus running at 8.0 GHz gets us those numbers, matching the throughput of the 256-bit bus on the GeForce GTX 1070. There are some more caveats here though. The 4GB reference model of the Radeon RX 480 will ship with GDDR5 memory running at 7.0 GHz, for a bandwidth of 224 GB/s - still very reasonable considering the price point. AMD tells me that while the 8GB reference models will ship with 8 Gbps memory, most of the partner cards may not, and will instead fall in the range of 7 Gbps to 8 Gbps. It was an odd conversation, to be frank; AMD was basically alluding to the fact that many of the custom-built partner cards would default to something close to 7.8 Gbps rather than 8 Gbps.
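The bandwidth figures follow directly from bus width and data rate. A minimal sketch of the math, using only the numbers above (the 7.8 Gbps entry reflects AMD’s comment about partner cards, not a measured value):

```python
def gddr5_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Each data pin moves data_rate_gbps gigabits per second; divide by 8 for bytes.
    return bus_width_bits * data_rate_gbps / 8

print(gddr5_bandwidth_gbs(256, 8.0))  # RX 480 8GB reference: 256.0 GB/s
print(gddr5_bandwidth_gbs(256, 7.0))  # RX 480 4GB reference: 224.0 GB/s
print(gddr5_bandwidth_gbs(256, 7.8))  # likely partner cards: 249.6 GB/s
```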
Another oddity – and one that may make enthusiasts a bit more cheery – is that AMD built only a single reference design card for this release to cover both the 4GB and 8GB varieties. That means that the cards that go on sale today listed as 4GB models will actually have 8GB of memory on them! With half of the DRAM disabled, it seems likely that someone will soon find a way to share a VBIOS that enables the additional VRAM. In fact, AMD provided me with a 4GB and an 8GB VBIOS for testing purposes. It’s a cost-saving measure on AMD’s part – this way they only have to validate and build a single PCB.
The 150 watt TDP is another one of those data points we have known about for a while, but it is still impressive when compared to the R9 380 and R9 390 cards that use 190 watts and 275 watts (!!) respectively. Even if the move to 14nm didn’t result in clock speeds as impressively high as we saw with Pascal, Polaris definitely gets a dramatic jump in efficiency that allows the RX 480 to compete very well with a card from AMD’s own previous generation that used nearly 2x the power!
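On paper, that efficiency jump is easy to quantify using the rated peak compute and TDP figures from the table above; keep in mind this is a spec-sheet comparison, not measured gaming power draw:

```python
# Rated peak compute (TFLOPS) and TDP (watts) from the spec table.
cards = {
    "RX 480": (5.8, 150),   # peak compute at the boost clock
    "R9 390": (5.1, 275),
    "R9 380": (3.48, 190),
}

for name, (tflops, tdp) in cards.items():
    print(f"{name}: {tflops / tdp * 1000:.1f} GFLOPS per watt")
# RX 480 ~38.7, R9 390 ~18.5, R9 380 ~18.3 -> roughly a 2x jump on paper
```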
You get all of this with a new Radeon RX 480 for just $199 for the 4GB model or $239 for the 8GB model. Though prices on the NVIDIA 900-series have been varying since the launch of the GeForce GTX 1080 and GTX 1070, you will find in our testing that NVIDIA just doesn’t have a competitive product at this time to match what AMD is releasing today.
Before even coming to PCPER, I already knew their review would do everything in its power to make AMD look bad and nvidia look good…PCPER is, afterall, so nvidia biased it’s ridiculous…
What are you babbling about. The numbers are the numbers. The card is good. They said it was good. You are insane.
Yea, all against AMD, that’s why Raja decided to be interviewed there…
You are making yourself ridiculous; if PCPer wanted to make AMD look bad they would have simply put a GTX 1070 in the review and not awarded the gold award…
you are a shame for all the people named Anonymous!
the review was mostly positive and matches what the other sites are saying, and the Raja is coming, deal with it!
Well, still happy with my 390X card bought over a year ago. Disappointing that PCPER never actually reviewed a 390X.
Hey Rayan, nice review, it’s a pity there is not a complete VR comparison, especially since the card was advertised for that use by AMD.
I can understand why there isn’t a GTX 1070, but why put in a GTX 960 with only 2GB? I see that on Amazon it costs $10 more and is actually the very same price with a rebate. I ask this because in some games the difference (both in performance and frame times) between the GTX 970 and 960 looks a bit weird, and I was wondering if it could be due to video memory saturation.
Sorry, mistyped your name Ryan 🙂
This isn’t a bad card for the price. Granted, once you get to high end/enthusiast it isn’t a great option, but for budget/midrange/casual gamers, it’s a hell of a choice.
Look at this objectively. With this performance level and the advantages of async compute, this becomes the first recommendation for upgrading your rig for VR. For the price, this targets what is 80+% of the actual card market and offers massive performance for 1080p, which is still (BY A HUGE MARGIN) the largest gaming segment.
For a company that needs a bottom-line win for revenue and long-term sustainability, this is a home run.
That looks like a typical case of “reckoning without one’s host”.
Do you expect its main competitor to overlook this segment?
I expected more. I allowed the hype to get to me. I expected 980 and 390X performance. Now I’m disappointed.
I don’t blame AMD. I clearly remember Raja claiming only >5 TFLOPS, and that is in the ballpark of the 390, so he was honest from the start. It was my own exaggerated expectations.
Anyhow, the card is still good value for money. I think I’ll get one to replace my old 285 when AIBs come out with custom coolers.
Is HDMI 2.0 no longer backwards compatible with DVI? You should be able to get an HDMI to DVI cable to use this card with DVI monitors without an additional adapter from DP as long as this is still supported. I use one on my HD 7950 and it works just the same as a DVI cable as far as clarity and performance.
Wow! This is the disruptive GPU Mr. Koduri was talking about? Awesome!
GG AMD.
Has anyone noticed the ranking of this card in Passmark? Their chart tells a very different story about this card’s performance. RX 480 scores 6369 G3D marks, just a hair above GTX 770 (6145) and GTX 960 (5925). It doesn’t even come close to the GTX 970 (8661). How do you explain the large 2300 G3D mark discrepancy, when your chart claims that the RX 480 beats the 970?
Where have you seen the Passmark benchmark you are talking about?
Here you are.
http://www.videocardbenchmark.net/high_end_gpus.html
Thanks, but I would advise that comparing results from different sources is not good practice. Now I’m really curious about the GTX 1060; it looks to be a really interesting card, SMP, power consumption, clock speeds… looks like it will be a hot summer 😉
Comparing results from other sources is called corroboration and validates your findings. Anyone can find one biased source. The more sources you can find to prove your point, the better.
Yeah the 1060 should be a good card for the masses. I’m guessing price will be $250 or less. I figured Nvidia would keep an ace in the hole. They have the budget to sit on things until they need to reveal them.
After watching reviews today I missed out by minutes on obtaining one. They sold out within an hour of the store opening. Now I just have to sit back and wait until more come in stock.
You are lucky. These cards draw more wattage than the PCI Express spec allows. They could possibly fry the motherboard connector or the PCI Express power connector. They also run hot, around 84C. If you’re set on getting one, wait a week or two for a non-reference version with better cooling and/or power delivery.
Exactly.
Do not get a model with 6-pin only.
PCPER was up to 200 Watts with a small overclock!
Even if your motherboard is not damaged it could corrupt the audio output. Ever moved a mouse and heard your audio squealing?
Lots of texture units @u@ i cant wait to start streaming with it
Thunder… Thunder… Thundercats, OOoohH….!!!
Why aren’t we testing with Battlefield 4?!? Frostbite is still one of the most technically advanced game engines out there.
That’s because they don’t want Polaris to be shown losing to the GTX 970. HardOCP dropped it from their benches because they said they didn’t have time, but added Hitman 2016. They said they don’t plan to bring it back until Battlefield 1 comes out. Of course, draw your own inferences here.
Surely the 480 would win in Battlefield 4 since it has Mantle. Maybe that’s why they don’t include it.
Mantle has been behind DX for a long time now. It was only useful on weak CPUs (AMD’s lineup and the Intel i3).
Here is one source with Rx 480.
http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-5.html
Another.
https://www.techpowerup.com/reviews/AMD/RX_480/9.html
An example of what an AMD-biased site would show.
http://www.anandtech.com/show/10446/the-amd-radeon-rx-480-preview/4
Notice carefully that Anandtech included a GTX 970 in the Ashes bench, where the 970 loses, but in Battlefield 4 the 970 is curiously absent. LOL
Tom’s shows 54.5 fps for the 970 and 47.7 fps for the RX 480 at 1440p. The 970 is slightly under the 390 at this resolution. It would still be ahead if Anandtech had included it as well.
So an R9 290 / GTX 970 class card for $240?
And the power efficiency doesn’t seem much better, if at all, compared to a GTX 970?
NVIDIA seem to be kings of GPU architecture, optimization and design.
An NVIDIA 28nm design beats AMD’s 14nm FinFET! This is not looking good for Vega, because the RX 480 is half the speed of the 1070 yet consumes about the same power.
When NVIDIA releases the smaller Pascal chip for the 1060/1050,
AMD’s window of opportunity will be closed.
I think $240 is too high a price for an R9 290 class card,
but $199 seems just about OK for the 4GB model.
But… nobody sells the 4GB model.
Please someone make a version where the power does not come in at the top! There has to be a better way!
I like it
The card is drawing more power from the PCI-E slot than the slot is designed for. Plus, the RX 480 is using more power than its listed TDP. If you have a cheap OEM motherboard, I bet the card will burn out the PCI-E slot on it. Classic AMD, making promises that they can’t keep when it comes to power usage.
That is some pretty desperate trolling. In PCPer’s very precise power measurements, it draws almost exactly 150 watts, which is the rated TDP. This is also exactly what the PCI-e slot and a 6-pin power connector are specified to deliver.
Yes, that’s true. I’m glad PCPer tested this.
But with the smallest of overclocks it’s out of spec. I think it’s necessary to see an aftermarket 8-pin model.
It depends on what applications you use, however. Other sites have shown almost 90W from the PCIe slot when NOT overclocked.
And yes, people will overclock, so 200W is crazy. AMD should have locked down the card and prevented overclocking with only a 6-pin power connector.
200W means that either the motherboard slot or the PCIe power cable is going at least 25W over spec.
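For reference, the arithmetic behind these comments (spec limits only, not measurements from this review): a PCIe x16 slot is specified for 75W and a 6-pin connector for 75W, so a 6-pin-only card has a 150W budget.

```python
SLOT_LIMIT_W = 75     # PCIe x16 slot specification
SIX_PIN_LIMIT_W = 75  # 6-pin PCIe power connector specification

budget = SLOT_LIMIT_W + SIX_PIN_LIMIT_W  # 150 W total for a 6-pin-only card
overclocked_draw = 200                   # figure cited in the comments above

print(overclocked_draw - budget)  # 50 W over the combined budget, so at least
# one of the two sources must exceed its own 75 W limit by at least 25 W
```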
Agreed, even more so considering the cost difference between a 6-pin and an 8-pin power connector.
What a bummer.
Oh, you expected the performance of a $700 card for $200, right? So just beating everything else in the price range is a disappointment? Complete BS. This is almost certainly going to be my next video card.
I understand it isn’t a direct competitor to this card by design, but I would have really liked to see the 1070 added to the charts here. I am currently debating spending the extra money (once the prices come down to anywhere near MSRP) and getting a 1070 instead of a 480 and visualizing the performance difference would have been handy.
If you use MSRP as the price (the RX 480 will also be overpriced) and use the 8GB model for a fair comparison, then the FPS per dollar is identical (on average).
$380 / $240 = 58% more expensive
The GTX 1070 averages 53% faster:
https://www.techpowerup.com/reviews/AMD/RX_480/24.html
Neither seems to overclock much more so far.
I understand why they didn’t add the GTX 1070 to the review, but if you think for a moment that the GTX 1060 is the natural competitor of this card, does this mean we will not see a GTX 1070 vs GTX 1060 review? Seems odd…
I’m confused now 🙂
It would be nice to be able to have a discussion without all of the obvious FUD that occurs in any Nvidia/AMD story. This card is about what I expected, and is the clear winner in this price segment, at least until Nvidia has a new card in this segment. Nvidia does not offer any competition to this card right now. The absolute performance and performance-per-dollar graphs paint a very clear picture unless you are a troll.
Anyway, I find it interesting that this card seems to have 4 ACEs instead of 8. The Xbox One has 2, I believe, and the PS4 has the full 8 used in most GCN cards after the first implementation. I missed the interview, so I don’t know if Ryan discussed this with Raja. I guess this was probably done to save power? Although it is unclear how the ACE units interact with the hardware scheduler. It would be great if someone could write an article about this, if the information is available.