Polaris 10 Specifications
The teased and talked about Polaris GPU is finally here. Does our review show the RX 480 is the new $200 king?
It would be hard at this point NOT to know about the Radeon RX 480 graphics card. AMD and the Radeon Technologies Group have been talking publicly about the Polaris architecture since December of 2015, with lofty ambitions. Given the precarious position the company is in, well behind in market share and struggling to compete with the dominant player in the market (NVIDIA), the team was willing to sacrifice sales of current-generation parts (the 300-series) in order to excite the user base for the upcoming move to Polaris. It is a risky bet, and one that will play out in the market over the next few months.
Since then, AMD has continued to release bits of information a little at a time. First there were details on the new display support, then information about the advantages of the 14nm process technology. We then saw demos of working silicon at CES with targeted form factors, and then, at events in Macau, AMD showed the press the full details of the architecture. At Computex the company announced rough performance metrics and a price point. Finally, at E3, AMD discussed the RX 460 and RX 470 cousins and the release date of…today. It’s been quite a whirlwind.
Today the rubber meets the road: is the Radeon RX 480 the groundbreaking and stunning graphics card that we have been promised? Or does it struggle again to keep up with the behemoth that is NVIDIA’s GeForce product line? AMD’s marketing team would have you believe that the RX 480 is the start of some kind of graphics revolution – but will the coup be successful?
Join us for our second major graphics architecture release of the summer and learn for yourself if the Radeon RX 480 is your next GPU.
Polaris 10 – Radeon RX 480 Specifications
First things first, let’s see how the raw specifications of the RX 480 compare to other AMD and NVIDIA products.
| | RX 480 | R9 390 | R9 380 | GTX 980 | GTX 970 | GTX 960 | R9 Nano | GTX 1070 |
|---|---|---|---|---|---|---|---|---|
| GPU | Polaris 10 | Grenada | Tonga | GM204 | GM204 | GM206 | Fiji XT | GP104 |
| GPU Cores | 2304 | 2560 | 1792 | 2048 | 1664 | 1024 | 4096 | 1920 |
| Rated Clock | 1120 MHz | 1000 MHz | 970 MHz | 1126 MHz | 1050 MHz | 1126 MHz | up to 1000 MHz | 1506 MHz |
| Texture Units | 144 | 160 | 112 | 128 | 104 | 64 | 256 | 120 |
| ROP Units | 32 | 64 | 32 | 64 | 56 | 32 | 64 | 64 |
| Memory | 4GB / 8GB | 8GB | 4GB | 4GB | 4GB | 2GB | 4GB | 8GB |
| Memory Clock | 7000 MHz / 8000 MHz | 6000 MHz | 5700 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 8000 MHz |
| Memory Interface | 256-bit | 512-bit | 256-bit | 256-bit | 256-bit | 128-bit | 4096-bit (HBM) | 256-bit |
| Memory Bandwidth | 224 GB/s / 256 GB/s | 384 GB/s | 182.4 GB/s | 224 GB/s | 196 GB/s | 112 GB/s | 512 GB/s | 256 GB/s |
| TDP | 150 watts | 275 watts | 190 watts | 165 watts | 145 watts | 120 watts | 275 watts | 150 watts |
| Peak Compute | 5.1 TFLOPS | 5.1 TFLOPS | 3.48 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 2.3 TFLOPS | 8.19 TFLOPS | 5.7 TFLOPS |
| Transistor Count | 5.7B | 6.2B | 5.0B | 5.2B | 5.2B | 2.94B | 8.9B | 7.2B |
| Process Tech | 14nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 16nm |
| MSRP (current) | $199 | $299 | $199 | $379 | $329 | $279 | $499 | $379 |
A lot of this data was given to us earlier in the month at the product’s official unveiling at Computex, but it is interesting to see it in the context of other hardware on the market today. The Radeon RX 480 has 36 CUs with 2304 stream processors, a count that falls between the Radeon R9 380 and the R9 390. But you must also consider the clock speeds, which get a lift from production on the 14nm process node at Global Foundries. While the R9 390 ran at just 1000 MHz, the new RX 480 has a “base” clock speed of 1120 MHz and a “boost” clock speed of 1266 MHz. I put those in quotes for a reason – we’ll discuss that important note below.
The immediate comparison to NVIDIA’s GTX 1070 and GTX 1080 clock speeds will happen, even though the pricing on them puts them in a very different class of product. AMD is only able to run the Polaris GPUs at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, a difference of 36%. That is substantial, and even though we know that you can’t directly compare the clock speeds of differing architectures, there has to be some debate as to why the move from 28nm to 14nm (Global Foundries) does not result in the same immediate clock speed advantages that NVIDIA saw moving from 28nm to 16nm (TSMC). We knew that AMD and NVIDIA were going to be building competing GPUs on different process technologies for the first time in modern PC gaming history, and we knew that would likely result in some delta; I just did not expect it to be this wide. Is it an issue with Global Foundries or with AMD’s GCN architecture? Hard to tell, and neither party in this relationship is willing to tell us much on the issue. For now.
At the boost clock speed of 1266 MHz, the RX 480 is capable of 5.8 TFLOPS of compute, running well past the 5.1 TFLOPS of the Radeon R9 390 and getting close to the Radeon R9 390X, both of which are based on the Hawaii/Grenada chips. There are obviously some efficiency and performance improvements in the CUs themselves with Polaris, as I will note below, but much as we saw with NVIDIA’s Pascal architecture, the fundamental design remains the same coming from the 28nm generation. But as you will soon see in our performance testing, the RX 480 doesn’t really overtake the R9 390 consistently – but why not?
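As a quick sanity check on those compute figures: peak FP32 throughput is just stream processors × 2 operations per clock (a fused multiply-add) × clock speed. A minimal back-of-the-envelope sketch in Python, using the numbers from the table above:

```python
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak FP32 compute: shaders x 2 ops/clock (FMA) x clock speed."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

# RX 480: 36 CUs x 64 stream processors per CU = 2304 shaders
print(peak_tflops(2304, 1120))  # ~5.2 TFLOPS at the 1120 MHz base clock
print(peak_tflops(2304, 1266))  # ~5.8 TFLOPS at the 1266 MHz boost clock
print(peak_tflops(2560, 1000))  # ~5.1 TFLOPS for the R9 390, for comparison
```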
With Polaris, AMD is getting into the game of variable clock speeds on its GPUs. When NVIDIA introduced GPU Boost with the GTX 680 cards in 2012, it was able to improve the relative gaming performance of its products dramatically. AMD attempted to follow suit with cards that could scale by 50 MHz or so, but in reality the dynamic clocking of its products never acted as we expected. They just kind of ran at top speed all of the time, which sounds great but defeats the purpose of the technology. Polaris gets it right this time, though with some changes in branding.
For the RX 480, the “base” clock of 1120 MHz is not its minimum clock while gaming, but instead an “average” expected clock speed that the GPU will run at, computed by AMD across a mix of games, resolutions and synthetic tests. The “boost” clock of 1266 MHz is actually the maximum clock speed of the GPU (without overclocking) and its highest voltage state. Contrast this with what NVIDIA does with base and boost clocks: base is the minimum clock rate that it guarantees the GPU will not run under in a real-world gaming scenario, while boost is the “typical” or “average” clock you should expect to see in games in a “typical” chassis environment.
The differences are subtle but important. AMD is advertising the base clock as the frequency you should typically see in gaming, while the boost clock is the frequency you would hit in the absolute best case scenario (good case cooling, good quality ASIC, etc.). NVIDIA doesn’t publicize that maximum clock at all. There is also a floor clock that the RX 480 should not go under (the fan will increase its speed to make sure it doesn’t under any circumstance), but it is hidden in the WattMan overclocking utility as the “Min Acoustic Limit” and was set at 910 MHz on my sample.
That is a lot of discussion around clock speeds, but I thought it was important to get that information out in the open before diving into anything else. AMD clearly wanted to be able to claim higher clock rates in its marketing and product lines than it might have been able to had it followed NVIDIA’s direction exactly. We’ll see how that plays out in our testing – but as you might infer from my wording above, this is why the card doesn’t bolt past the Radeon R9 390 in our testing.
The RX 480 sample we received for testing was configured with 8GB of memory running at 8 Gbps (8 GHz effective) for a total memory bandwidth of 256 GB/s. That’s pretty damned impressive – a 256-bit GDDR5 memory bus running at 8.0 GHz to get us those numbers, matching the performance of the 256-bit bus on the GeForce GTX 1070. There are some more caveats here though. The 4GB reference model of the Radeon RX 480 will ship with GDDR5 memory running at 7.0 GHz, for a bandwidth of 224 GB/s – still very reasonable considering the price point. AMD tells me that while the 8GB reference models will ship with 8 Gbps memory, most of the partner cards may not, and will instead fall in the range of 7 Gbps to 8 Gbps. It was an odd conversation, to be frank; AMD was basically alluding to the fact that many of the custom-built partner cards will default to something close to 7.8 Gbps rather than 8 Gbps.
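Those bandwidth figures fall straight out of the bus width and per-pin data rate; a quick check, again sketched in Python with the numbers quoted above:

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Memory bandwidth: (bus width / 8 bits per byte) x per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 8.0))  # 256.0 GB/s - the 8GB review sample
print(bandwidth_gb_s(256, 7.0))  # 224.0 GB/s - the 4GB reference model
print(bandwidth_gb_s(256, 7.8))  # ~249.6 GB/s - a likely partner-card config
```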
Another oddity – and one that may make enthusiasts a bit more cheery – is that AMD built only a single reference design card for this release to cover both the 4GB and 8GB varieties. That means the cards that go on sale today listed as 4GB models will actually have 8GB of memory on them! With half of the DRAM disabled, it seems likely that someone will soon find a way to share a VBIOS that enables the additional VRAM. In fact, AMD provided me with a 4GB and an 8GB VBIOS for testing purposes. It’s a cost saving measure on AMD’s part – this way the company only has to validate and build a single PCB.
The 150 watt TDP is another one of those data points we have known about for a while, but it is still impressive when compared to the R9 380 and R9 390 cards that use 190 watts and 275 watts (!!) respectively. Even if the move to 14nm didn’t result in clock speeds as impressively high as we saw with Pascal, Polaris definitely gets a dramatic jump in efficiency that allows the RX 480 to compete very well with a card from AMD’s own previous generation that used nearly 2x the power!
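Put in rough performance-per-watt terms, using the peak compute and TDP figures from the table above (a crude metric, but illustrative of the efficiency jump):

```python
# (peak TFLOPS, TDP in watts) taken from the specifications table above
cards = {"RX 480": (5.1, 150), "R9 390": (5.1, 275), "R9 380": (3.48, 190)}

for name, (tflops, tdp) in cards.items():
    print(f"{name}: {tflops * 1000 / tdp:.0f} GFLOPS per watt")
# RX 480 ~34 GFLOPS/W vs R9 390 ~19 GFLOPS/W: the same peak compute
# from a card rated at a bit more than half the power
```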
You get all of this with a new Radeon RX 480 for just $199 for the 4GB model or $239 for the 8GB model. Though prices on the NVIDIA 900-series have been varying since the launch of the GeForce GTX 1080 and GTX 1070, you will find in our testing that NVIDIA just doesn’t have a competitive product at this time to match what AMD is releasing today.
It’s not quite what I expected – but I’m okay with it. Hopefully new drivers and OCing will push 5-10% performance gains in the non-reference cards.
This review is way off from a certain trusted 3D website that shows it on par with the GTX 980 at all times with the exception of that Tomb Raider garbage. hmmm
Can’t believe a word you say when you won’t even name your “trusted 3D website”.
pcper is one of the best and most respected hardware sites out there (alongside TechReport and ComputerBase)
saw who posted this.. made my day!
Really? I distinctly remember PC Per reporting Polaris being delayed because it can’t hit above 850 MHz. That kind of quality reporting sure builds respect.
You should really take a look at the benchmarks out on YouTube; it doesn’t even beat a 970 in all the games, let alone a 980.
Hello,
I think that unlike the green team, with proper cooling, we may be able to see higher clocks. I like what RTG is doing, and if you leave everything on Auto, for most gamers, it should be OK.
ATI has always been about “damn the temps and power draw, give me power!”, which I have always been about. I don’t mind 80% fan speed; that is what closed back headphones are for =)
I still feel there is potential for upping performance in this architecture. What remains to be seen is what the AIB partners do with this chip. I am hopeful, from what I have been seeing in reviews, that with proper cooling there is headroom to get higher clocks.
#MakeAMDgreatAgain
A very welcome return to form from AMD, this will be a huge seller by all accounts. At the current price point there is no competition from AMD.
Importantly it sets a new performance threshold that is very positive for gamers who can’t justify the 1070 cost of entry. A very solid release indeed.
Any idea when the 470/460 reviews will be coming? I’m really looking forward to seeing what performance can be had from a GPU powered only by the PCIe bus.
that should read ‘competition for AMD’
The RX 480 comes in at an excellent price and sets a new standard for its relative performance level. Even if you were leaning towards the added Nvidia technologies (PhysX, G-SYNC), the 970 is simply an older card that doesn’t have some of the features current-gen cards have (also true with the last-gen AMD offerings). It would be, in my opinion, a bad decision to buy a GTX 970 over an RX 480, even if the price was identical.
But comparing it to a GTX 1070 and saying “can’t justify the cost of entry” really makes no sense. Either you need the power for your intended usage that the 1070 provides or you don’t. If you do, the RX 480 is a bad choice because it won’t do what you need it to do. If you don’t, the GTX 1070 is a waste of money. If you need an Nvidia card, at least wait until the GTX 1060 comes out and hope it will have a similar price/performance.
I want to do 4K. I am not picky on frame rates, things don’t have to be 60+ all the time, and I don’t need every setting sky high (I especially can’t imagine needing AA at 4K), but I haven’t seen any reviews so far with any games at 4K.
And where the heck can we buy this? Of course Newegg is already showing out of stock!
I see the PowerColor card in stock at Newegg as of now.
But future drivers will fix everything, right? The secret sauce?
What a disappointment, and it sure didn’t help that there was so much hype with false expectations – partially from fanboys and FUD articles out there, but also from AMD themselves.
Remember when they posted that one slide that said 2×480 would beat a single 1080? Yeah, right! Of course the smart ones could see past it.
lmao no they didn’t show it beating the 1080, lmao, are you blind? It was behind the 1080, but not by much
http://cdn2.pcadvisor.co.uk/cmsdata/reviews/3641216/gtx_1080__amp__1070_vs_rx_480_-_performance_comparison_thumb.jpg
Pretty clear that they said 2×480 > gtx 1080
That was in Ashes of the Singularity. That is one DX12 game. AMD did not say that 2x480s would beat the 1080 in every game and scenario.
Yeah, they CLAIMED it in AotS, but looking at this review, beating a GTX 1070 looks like it will be iffy in most games.
Between you and other websites (one major one) only doing/focusing on the DX11 benchmarks, folks can see where the money is going to FUD up on AMD while spinning positive towards Nvidia.
Watch the websites that practice those lies of omission, by only testing on the older graphics APIs and not even attempting any DX12/Vulkan games. So once the fully optimized DX12/Vulkan games are out, then there can be more benchmarks done.
When all the RX 480 features are tweaked and more and better games make use of explicit GPU multi-adaptor and the new graphics APIs, then 2 RX 480s may just be a very good deal in getting GTX 1080 levels of performance at a very nice RX 480 price savings (even for 2 RX 480s). DX11 is not the way forward for gaming, as DX12/Vulkan are out and being developed for! And programming of the Polaris HWS units is in microcode, so the HWS units can be re-programmed and their scheduling algorithms can be improved over time with new microcode/firmware updates.
At least Charlie over at S/A is doing a point by point comparison of each of the Polaris execution units’ new feature tweaks/improvements, for shaders, tessellation, compression, sound, scheduling etc.
Nvidia has more money to hire in astroturf land, and sends those turfing squads out in force! Nvidia is sure making DX11 its focus still, but DX11 is now on the way out.
You, arbiter, are looking for that spin against AMD and for your green masters, including one other prominent Nvidia-favoring poster on one tech website in particular going over to another prominent Linux OS/Linux test suite based testing website and spinning for Nvidia there.
DX12 has been tested, it’s just that not many titles exist. Of the few that do, AotS may be heavily optimized towards AMD’s RX-480 (thus not representative of most DX12 titles) and most of the others if not all only have some DX12 code tacked on.
DX11 is on the way out?
Sort of, but how many games will the average person have in their Steam library that use DX12 by the end of 2017?
In fact, how many DX12 titles will ship relative to DX11 in the next year?
It’s not tough for me to recommend cards now though (aside from waiting for prices to get closer to MSRP). If you have about $260 then get an RX-480 8GB.
If you have $240 or so and can’t swing any more get the 4GB RX-480. (after-market like Asus Strix or similar with 8-pin or 2×6-pin recommended for RX-480).
In the $400 plus it’s simply GTX1070 or GTX1080, again once prices stabilize.
There’s really no overlap, nor any great DX12 data except that you should avoid NVidia 900 series if your budget is in the RX-480 range.
“Sort of, but how many games will the average person have in their Steam library that use DX12 by the end of 2017?”
Not everyone who plays games uses Steam. It’s terrible.
DirectX 11 on the way out? LOL. Why are video cards backwards compatible all the way to DirectX 9? Blizzard still makes games DirectX 9/10 compatible, such as StarCraft 2, Diablo 3 and World of Warcraft, and they are a very profitable company. DirectX 12 isn’t viable yet with only a 350 million user base on Windows 10, while all other Windows platforms number a billion or two or more. So yeah, there’s that and other hurdles to jump. If Microsoft makes DirectX 12 available to Windows 7/8 users via a patch, then you can talk about DX11 declining. Games sometimes take a few years to make, and you’re not going to scrap one and go with a new DirectX right away; true DirectX 12 games are still a year or two away and will still have DirectX 11 versions. When you see mostly DX12-only versions of games coming out, you can make your statement confidently.
Yes, they said two RX 480s would beat one GTX 1080 in Ashes of the Singularity.
And they do.
Just buy a used 970 and forget you ever had any hope for the 480.
970 is considerably slower in DX12 titles than the 480
I’d rather drink a pint of my own diarrhea, on the hour, every hour than buy anything from nVidia.
Eww.
He only has that much s**t because he’s an extreme AMD fanboy.
“The immediate comparison to NVIDIA’s GTX 1070 and GTX 1080 clock speeds will happen, even though the pricing on them puts them in a very different class of product. AMD is only able to run the Polaris GPUs at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, and difference of 36%. That is substantial, and even though we know that you can’t directly compare the clock speeds of differing architectures, there has to be some debate as to why the move from 28nm to 14nm (Global Foundries) does not result in the same immediate clock speed advantages that NVIDIA saw moving from 28nm to 16nm (TSMC). We knew that AMD and NVIDIA were going to be building competing GPUs on different process technologies for the first time in modern PC gaming history and we knew that would likely result in some delta, I just did not expect it to be this wide. Is it issues with Global Foundries or with AMD’s GCN architecture? Hard to tell and neither party in this relationship is willing to tell us much on the issue. For now.”
14nm is more densely packed than 16nm, and maybe AMD was going for more cores per die with a higher density design library process tweak to get more dies per wafer and price them lower to grab that mainstream market share. Maybe AMD went with more layers and a denser circuit structure that cannot be clocked as high. But maybe with a little better cooling solution the part’s clocks can go higher on the AIO-cooled and custom boards.
There are higher density design/library variations even among the different GPU SKUs, with some designs made to achieve smaller dies at the expense of higher clocking ability, for more dies per wafer and better pricing metrics. With other designs AMD will be able to have the circuit pitch increased, or GF will have a high performance tweak of the 14nm process it licensed from Samsung; Samsung is sure to be tweaking that 14nm process for higher performance in future offerings, so GF can license any newer Samsung 14nm LPP processes.
“14nm is more densely packed than 16nm”
By only a marginal amount: these processes from TSMC and GloFo are a bit misleadingly named. They’ve each picked one particular element of the patterning process at which they perform exceptionally well, and chosen that as their naming metric. In practical terms, they are almost identical in feature size, as can be seen from the Chipworks comparison of the two A9 dies.
Yes, BUT GPUs use high density automated design/layout libraries and have more layers than CPUs, which use low density automated design libraries with fewer layers and pack transistors less densely to allow CPUs to be clocked higher. At 14nm, using higher density automated layout libraries to achieve more massively parallel processing units per unit area, GPUs cannot be clocked as high as CPUs! And there are even variations among the GPU style high density design libraries that allow some GPUs to be clocked higher than others!
The smaller a process node gets, the fewer substrate atoms per unit area there are to absorb heat phonons and transfer them efficiently away from the transistors. Even Intel had problems at 14nm, but CPUs are laid out less densely by design; GPUs are designed/laid out denser, and that means less heat can be allowed to be generated. When you cut a process node size by half (28nm to 14nm), the density goes up by four times, and depending on the process node/circuit pitch and the overall layout (done by the automated design libraries/software), GPUs can be very densely packed and unable to be clocked very high. So the design engineers and the yield engineers, along with the bean counters, design the GPUs for the intended market, and the trade-offs are calculated well in advance of the final design being frozen and then certified and brought to market.
I forgot to add, AMD’s GPUs have all the async compute implemented in their hardware, so those extra circuits add to the heat budget but increase the performance relative to any GPU designs that implement async compute in software/firmware rather than in hardware.
Why do you think Nvidia went with 16nm and higher clocks – to make up for some of that async-compute disadvantage that they have!
Look at the power draw: a GTX 1070 consumes about the same as the RX 480. They went up to that level of clock simply because they were able to do so.
Designing with high clock targets is a bit of a risk. Designs aimed at higher clock speed targets require deeper pipelines. The number of pipeline stages unfortunately cannot be changed easily, so those decisions may have been made a very long time ago. Making the pipelines too deep can result in a huge amount of extra power consumption, and so can having too high a clock speed target for your process technology. Nvidia certainly spent a lot of man hours optimizing power and clock speed for the 1080/1070. That isn’t much of an issue for a $700 card, but it isn’t exactly optimal for a mid-range $200 card. The 480 design can probably be tweaked a lot considering it is both a new design and a very new process. We might get a more optimized design later for a 480X or something with a higher clock. I think the 480 represents a good value as is, though, especially with DX12. I don’t think Nvidia’s DX12 hardware support is equivalent yet; AMD has worked on DX12-like hardware and drivers much longer due to their invention of Mantle, which is very close to DX12.
Bah, maybe apart from the new process, the design is not a great departure from prior GCN iterations.
Also, Polaris should be compared to Pascal before deciding how good it is; of course this will be possible only when NVIDIA releases a card with a similarly sized GPU.
This card is being sold wholesale. I think AMD is doing this to boost their fab while gaining press and market share.
Developing fab technologies is an ART. In order to ramp up production to produce more cheaply, you have to spend money one way or another so that the fab can get the practice and expertise it needs to meet future targets.
One way is to produce parts and just trash them, only to bundle their cost into the products that pass spec. The other is to sell lower spec parts in bulk.
AMD chose a lower frequency part that matches the previous generation from nVidia at a low price point to help their fab master the art of production. They are logically going where AMD always goes: to the value market, where most of the money is to be made.
If AMD wanted, they could easily produce a GTX 1080 class card, and they likely do have internal models. It would just cost two to three times what nVidia charges.
The practice AMD gives their fab now will also help with their future CPU runs.
I want a GTX 1080. What I can afford is two RX 480s.
Without the proper testing mule there is no way of knowing if that is the GPU/die itself or the card’s other power drawing features, so it may not be GF’s 14nm process node’s standard performance that is to blame. And remember, AMD’s async compute is going to keep more execution resources running, with fewer remaining idle, because of better scheduling, so expect more power usage simply because the execution units are being fully utilized. That smaller 14nm die is going to get hotter, and if the tested SKU is a reference design, the cooling may not be there to stop some of the heat-related extra power draw issues.
Hot circuits have higher leakage, and if the cooling solution is not on top of things then the heat feeds back into the circuits, causing more leakage leading to more heat. It’s a vicious feedback cycle sort of thing.
How much larger is the 1070’s die? Is the 1070 a binned 1080 part with some units disabled, giving it more dead/unpowered silicon with which to absorb and dissipate heat relative to the RX 480’s die size? The RX 480’s 14nm node is about 14% smaller than the 16nm node and is more densely packed over the same unit area. And what is the circuit pitch on TSMC’s 16nm node compared to GF’s 14nm node? The RX 480’s die is much smaller, so less heat transference can happen over a unit of time compared to a larger die on a larger process node.
Maybe the AIB RX 480 boards will have better cooling; more testing needs to be done. More testing needs to be done with fully optimized DX12/Vulkan games, and maybe there need to be some driver tweaks over the next few weeks also. It’s a brand new card, so teething problems come with any new GPU release.
AMD answered this at PCPER.
The GPU in the RX-480 was started almost three years ago. They had MOBILE customers in mind for launch. Then VR came along and they needed something to compete.
AMD was forced to increase the frequency on a design not optimized for it in order to hit the VR Ready target, which they just BARELY did.
What’s nice about this admission is that it’s likely VEGA has been designed for higher frequencies, so it should be closer to NVidia’s frequencies on the GTX 1080.
I don’t think they pushed the frequency to keep up with VR; quite clearly their main concern was to be able to compete against (albeit older) NVIDIA products
“Competing” is not the right term here. They wanted to reclaim a big portion of the market share they have lost in the past few years, as well as build hype for their future cards. By putting out such a huge value card they are cementing their name out there again and grabbing the biggest part of the market share. Contrary to what everyone thinks, the flagship cards make up very little of the market share as a whole. For a sub-$250 card there is no comparison for the 480; it’s a no brainer, a complete wash, not a competition. Nobody in their right mind would buy anything but a 480 if they have sub $250 right now for a video card.
Yep, depends what you want out of a graphics card. For me, VR bare minimum will not do; my GTX 970, which I have had for over a year, is not enough for demanding games in VR. Need more POWER. Not for me.
something’s not right with the “Polaris 10 – Radeon RX 480 Specifications” table: it says that the R9 380 has three times more ROPs than the R9 390, and it also states that the RX 480 is on 16nm
Fixed!
So it has about the same power target as an Nvidia GTX 970, and performance wise it’s trading blows with it, usually coming out on top. Given that there are massive discounts now on the 970 (I even got mine for 220 euros 3 weeks ago), I fail to see how AMD can win the graphics market. Sure, it has DP 1.3 & 1.4, but for most people that will not matter as much as the Nvidia branding which has been on most people’s minds for the last year.
I really wanted AMD to have a win here, but I fail to see how they can conquer the market for the time being. Maybe I’m not seeing something, but in my eyes this is a flop in terms of excitement in the way it’s positioned now.
It will win with the VR games and newer games using async compute, and the RX 480 has more async-compute future proofing ahead. There will be more fully optimized DX12/Vulkan titles putting more gaming compute onto the GPU in the future, so look for those users’ systems with less powerful CPUs to benefit even more from Polaris. Let’s start testing the games with weaker CPUs, and as time goes on and games become able (via DX12/Vulkan) to do more gaming compute acceleration on async compute enabled Polaris GPUs, let’s see how the RX 480 improves with time. Those GTX 970/Maxwell GPUs are not going to do well on future fully DX12/Vulkan enabled game titles. Let’s test for 6 months to a year with all manner of CPU SKUs and with the RX 480 and GTX 970; Nvidia cannot program their way out of that async compute deficiency in their GPUs’ hardware.
This is merely “OK”, not the big win I was hoping for from AMD. It’s certainly the card to get right now at $200-250, but it probably won’t be after the 1060 launches.
It looks to me like either GloFo 14nm is very limited, or GCN just doesn’t have the legs to clock much past this without considerable changes to the architecture. Perhaps if Vega is done on TSMC we’ll have something to compare to, but it’s alarming that Nvidia can get 2 GHz clocks while AMD’s chips have a hard time reaching 1.3 GHz.
It doesn’t look too good in terms of AMD gaining much market share back with this. Given AMD’s resources, perhaps it’s wishful thinking that they could surpass their competitors at this point.
Basically, anyone who doesn’t have a G-Sync monitor or depend upon Nvidia proprietary software such as Shadowplay would be foolish to spend the same or more money on a 970.
Better to wait some weeks then
I would like to know what happened to the RX 460 and when it will be released.
AMD needs a 14nm process to match what Nvidia did with 28nm two years ago? Where I live, the asking prices of the RX 480 seem to be on par with 970 custom designs. So great job AMD, you have done well… Trololollololooo.
Sincerely, the Internet. (Some of us liked your ads though – we give you that much.)
Nice!
Getting one 8GB as soon as they are on sale in my country! 😀
Is the GTX 970 at stock?
Because that’s a tad low.
I own both an R9 290X and a GTX 970.
In Witcher 3, the GTX 970 butchered the R9 290X by 10-15 fps at 1080p.
Honestly, at this price point it’s quite a good upgrade for people who own a card that is LESS powerful than an R9 290 or GTX 970. Remember this is a $199 card, folks; don’t get your expectations too high or you’ll wind up with a broken heart 😀
As for me, I hope Vega won’t disappoint next!
Yep, the GTX 970 is at stock clock; you can read the clock speeds of each card in every chart
This.
I’ll be slapping one of these in to replace my venerable 7970 GHz. It’s such a steal, I’d be crazy not to.
Later I’ll buy an AIB Vega part when those are available, since things are looking good for Vega: 2nd gen GloFo 14nm, and a shitload of ROPs/CUs. Bring it, AMD!
Would have been a cool card a year ago…
You nailed it.
https://media.licdn.com/mpr/mpr/p/7/005/06b/2b8/00c69eb.jpg
True to my word. Barely better than a stock 970. A partner 970 or an overclocker’s reference one will beat it comfortably for comparable or less wattage. ROFL. Who was right, anonymous? You can suck one. Must cite source. I can give you a bunch if you want, but we both know it isn’t necessary. I’m not the a**hole Nvidia fanboy you claim I was. It is everything I said it would be, except possibly worse. It’s a good value if you only look at price. In fact I feel sorry for AMD. It is what it is. Maybe I should change my user name to RX 480 1080 gtx killer not. Hype was over the top for this. Who told everyone not to get excited over it? If you’re an AMD fanboy this is a good buy for you. Everyone else, not so much. I should rub it in more, but Nvidia might raise their prices even more because of this fail.
Anyone who declares a winner this early in the game, with the new fully DX12/Vulkan enabled titles on the way and the benchmarking software still needing to fully catch up with the new gaming/graphics API ecosystems, is truly an egregious fool, or a paid astroturfer, or in your case both!
Never trust any reviewer who declares an ultimate winner this early in the competition, especially with the new DX12/Vulkan APIs, games, gaming engines, and benchmarking software all needing time to properly adapt to and measure the hardware/games after so much rapid software/hardware technological change in such a short amount of time.
That’s rich. I’m paid by no one. I wish I could start my own tech site though. I call it like I see it. I’ve been involved in PC gaming since 1990. I’ve seen quite a bit and maybe know a thing or two. By the time any volume of DX12/Vulkan games hits the market, these cards will be obsolete, as Nvidia will have designed a card that well supports these APIs. That’s assuming DX12 doesn’t die from lack of support. No company is going to risk its financial livelihood by only making a DX12 version of a game. I notice you don’t mention OpenGL, which is just as viable – is it because AMD cards don’t do it as well?
So you would rather be stuck with the underperforming DX11 version of games for the next 2 years? Regardless of budget, I wouldn’t buy any Nvidia card right now; they are behind in hardware support. Although, I consider paying any more than about $300 for a video card to be foolish. DX11 will die quickly. DX12 is here to stay and will be adopted very quickly due to the console market. Both major consoles can support DX12 titles, and DX12 is strongly favored due to the improved multi-threading capabilities. Both have low power 8-core CPUs. I hope you aren’t recommending to your friends that they buy obsolete Nvidia cards.
I sure don’t want the larger power draws of DX12 either. Basically the CPU’s overhead is decreased a little and its wattage drops a little. However, the GPU consumption increases beyond what the CPU loses, and this is with or without asynchronous compute enabled. This should not be happening. A lot of sites tout that CPU wattage can drop 50%, which is nothing compared to the video card wattage increase those sites don’t tell you about. For the little performance gain, I’d rather not have DirectX 12 or asynchronous compute.
http://www.tomshardware.co.uk/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,review-33476-5.html
Or consider this one. Basically, huge gains from DirectX 12 can be had if you have a weak dual or quad core processor.
http://www.legitreviews.com/ashes-of-the-singularity-directx-12-vs-directx-11-benchmark-performance_170787/2
I know both of these feature AotS, but it’s the de facto AMD standard bearer for DirectX 12. Guess you’re stuck with it. Maybe AotS isn’t coded well and shouldn’t be considered a valid benchmark.
Awww, yaaaay! The little fanboy liar is back! I’m so glad to see you, I missed slapping you around the comments.
And you’ve already started right in with the lies! “I’m not the a**hole Nvidia fanboy you claim I was.” And yet every comment you’ve made since you came back proves that, yes, you really are.
Now, in that big long comment thread a week ago where you consistently embarrassed yourself and didn’t even know it, the only claim you made about the RX 480 was that two of them in Crossfire wouldn’t come close to a single 1080. But, wouldn’t ya know it, they do a pretty decent job keeping up at least. And the difference in price is bigger than the difference in performance. Oh, and those are reference cards, too. A pair of AIB cards with significantly better cooling and power delivery will only get better.
No, two reference RX 480s do not beat a single 1080, as things stand right this minute. (See what I did there? That’s called “accepting the facts”. I hope you learn to do that soon, because you’re wrong REALLY OFTEN.)
You’re still wrong about your claim that Freesync only works up to 90Hz, and so far everything you’ve shown to support that claim has been on one 144Hz monitor with a Freesync range up to 90Hz. Show me something that doesn’t rely on the Asus MG279Q that supports your claim and we’ll talk.
Then there was this gem: “There is a lot bigger difference between AMD review samples and retail samples than 1-1.5%.” And then you pointed to two completely different review sites, using two completely different hardware platforms, and referenced the overall FireStrike scores instead of the graphics scores, I guess because you somehow thought that was proof enough.
What about your claim that buying the less expensive AMD card would wind up being more expensive once you include the added power cost on one’s electric bill? Hint – it would take about 18 to 20 years to make up the difference on one’s electric bill. Are you going to admit you were wrong about that? (No, of course you’re not.)
Remember saying this? “Do you really think a $200 video card is going to clock at 1500 mhz when their Furyx enthusiast card maybe could reach what 1150-1200 mhz. A 380 core clock is 970 mhz. The base of Rx 480 is presumed to be 1266 mhz is 30% boost and 1500 mhz is 55% boost. Doesn’t seem too likely.”
We’ll see when the AIB cards come out – if some of them can get over 1500MHz, will you admit you were wrong? (Or will you turn around and cling to, “nuh uh, that’s not $200!” instead?)
I also remember this little nugget of cowpie: “AMD fanboys use all the excuses in the world. I’ve read so much BS from them such as I still get 60fps with my ancient HD (insert model number of your choice). No need to upgrade yet. Well you fanboys are the reason AMD is in such dire straits. Buy a product once in a while instead of bragging.”
Amusing that someone trying to power a 4k monitor with his GTX 760 would think his own argument didn’t apply to him. I’m shocked, I tell you. Shocked.
Oh, don’t forget about this one: “Nvidia cards have more built in limiters and protections in their cards versus AMD.” I’m still hoping you’re actually going to talk about what “limiters and protections” Nvidia has that AMD doesn’t have. Because it sounds more like a claim that you pulled out of your bottom and stated it as a fact in the hopes that everyone would think you knew what you were talking about and accept it. I think you made it up.
In fact, I know you made it up. You said so yourself – you were “assuming” that AMD’s marginally higher failure rate was “probably” related to temps and “possibly” less protections in place. Just admit that it was just your fanboy bullshit and move on. Clinging to a lie when you’ve been proven wrong is just sad.
Oh I’m so glad you’re back, princess. I’m gonna have so much fun with you.
If you call this “fun” you’re a very sad pathetic individual
Yeah, you haven’t proven me wrong either, with facts and cites. Yours is largely opinion as well. You cherry pick things; a lot of the stuff I posted was right, but you didn’t address it at all, such as Radeon’s horrible power consumption in 1080p video playback. You glossed over most of it, picked a few things out, and said I was only right about maybe one thing. OK. I could post more but don’t really care to.
About Freesync, you may be right about it going up to 144Hz, but its most effective range is between 40Hz and 90Hz, meaning it performs best there. Yes, its supported range can be 9Hz to 200Hz(?) but thus far it only goes down to about 30Hz. I don’t think either Freesync or G-Sync has to do much beyond 90Hz, because you need one or two powerful cards to tap that.
You are entitled to your opinion as well. Is going over the PCI Express power spec fake then? AMD has admitted to the problem already. I think this is more serious than Nvidia’s DVI not working above a certain range on overclockable Korean monitors. So I’m an a**hole for trying to spare AMD fanboys from harming their systems. OK.
About review samples versus retail: there isn’t much to go on, as most tech sites do not buy retail samples to compare to the press samples, so no surprise you won’t find much. It was disappointed purchasers who said they didn’t get anywhere near the numbers reviewers got. Although that may be due in part to the Fury X performing better with a stronger $1000+ CPU that most people don’t have. But shopping a few cards around to the reviewers because of “limited” supply doesn’t look on the up and up. As far as I know Nvidia hasn’t done this, but both companies probably cherry pick cards for review.
Electricity costs about $0.07 where I live for 1 kilowatt hour. This is cheap compared to Europe and other places. A difference of only 60 watts of power over 18 years at 8 hrs a day is going to cost $220 more, assuming the rate holds the same, which it won’t. It’s around $12.25 a year more. Want me to prove my math? A comparable Nvidia card doesn’t cost that much more. LOL. It’s $37 to $61 over a card’s average lifetime of 3 to 5 years. More time playing, or a much higher rate, as well as increases in the rate over time, affect this. Maybe drastically. Wattage of comparable AMD versus Nvidia cards can go 100+ watts more for a few more frames or with asynchronous compute enabled. Is asynchronous compute “free” performance then? If it’s worth it to you for the average increase of 5-10% then OK, but it isn’t free. Another point you glossed over. So do you exaggerate much?
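For what it’s worth, that arithmetic does check out; a quick sketch under the commenter’s stated assumptions ($0.07/kWh, a 60 W draw difference, 8 hours of use a day):

```python
delta_watts = 60       # assumed extra draw of one card vs the other
hours_per_day = 8
rate_per_kwh = 0.07    # USD, the commenter's quoted local rate

cost_per_year = delta_watts / 1000 * hours_per_day * 365 * rate_per_kwh
print(f"${cost_per_year:.2f} per year")            # ~$12.26
print(f"${cost_per_year * 18:.0f} over 18 years")  # ~$221
print(f"${cost_per_year * 3:.0f}-${cost_per_year * 5:.0f} over a 3-5 year lifetime")  # ~$37-$61
```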
Exactly. The $239 (8GB) reference hits what, near 1400 MHz at best so far? The $200 (4GB) model is supposed to be weaker, with slower RAM. Not confirmed yet, however.
As for the GTX 760, I bought that when it was new 3 years ago, and I bought my 4K monitor less than 6 months ago. I gave my old system away to a needy person at work and gave him my 1080p monitor as well. My card will do 4K at higher detail than the consoles do at 720-900p, at 30 frames at least. It’s usually well over 30 fps but doesn’t hit 60 unless I compromise settings a bit. The only exception is AC Unity, where I get the same frame rate at higher settings, as it uses my card’s entire 4 gigs even at 1080p. Older games, of course. I always have the option to play at 1440p or 1080p as well. The monitor is future looking and has amazing detail. I am also looking to upgrade my card to play newer games better. No rush; it’s adequate for now.
More may have been a less than optimal choice of word. Maybe it implied number. I should have said “better” protection.
What do you presume the higher failure rate correlates to? Maybe cheaper parts on a cheaper quality video card from a deep-in-debt company possibly cutting corners. You usually get what you pay for. I say that because it could be the reason, or something else entirely. I’m not an engineer. You are entitled to your opinion as well. Lie is a strong word; tell me I’m 100% wrong because you know the real reason. I’m waiting for your proof. Get real.
Resorting to childish name calling. Who is the one having fun? I’ll just have to use better word selection and prove the littlest things to counter your nitpicking.
“Microsoft’s Chas Boyd was on-hand at AMD’s editor’s day for Polaris and previewed ideas that MS is implementing to help improve multi-GPU scaling. The best news surrounded a new abstraction layer for game developers to utilize for multiple GPU support that MS would be releasing on GitHub very shortly. According to Microsoft, with only “very little” code adjustment, DX12 titles should be able to implement basic multi-GPU support.”
The way to go is with the DX12 and Vulkan (more open than DX12) graphics APIs in charge of multi-GPU load balancing. Let this multi-adaptor work be done in the graphics APIs/OSs, with more folks involved in creating the multi-graphics-adaptor load balancing algorithms. This is the way multi-GPU load balancing and support should have been done in the first place, with the OS/graphics APIs in charge of sending the work to the GPU/s of any make or model that are plugged into any PC/laptop or other computing device.
Keep the hardware drivers simple and close to the metal, move most of the multi-GPU load balancing support into the graphics APIs/OS, and standardize the way that a computing system accesses its available processing resources. Vulkan lets the GPU ODMs register extensions to the API to handle any new feature sets, with the graphics API more in charge of the workloads given to each GPU/processor, and it’s probably the same for DX12. As far as load balancing between multi-processors, GPUs, CPUs and others goes, it’s better to have the entire computing industry in on developing the load balancing algorithms for GPU/other processor multi-adaptor support, instead of just the companies that make the GPU/other processor hardware.
So M$ is releasing some middleware, but I’m more in favor of standardizing things more formally in the graphics APIs and in the OSs, for the proper management of any processing hardware installed on a computing system.
Not amazing, even a little disappointing considering it’s 14nm with 2304 SPs and it’s pretty close to the 970 (28nm, 1664 SPs) in almost everything (power usage and performance).
Also, the 970 overclocks like a champ and will leave the RX 480 (which doesn’t overclock well) far behind…
BUT as a $200 card it’s pretty decent – a huge improvement over the old stuff like the 960, for sure…
Also, as a first GPU at GloFo it looks pretty OK, I guess? It can only get better!?
Anyway, I hope the best for AMD, and I’m looking forward to the 470 and 460, since those would probably be more suitable for me.
Dirty Rally – 1920×1080 – Frame Variance
The R9 390 still takes a commanding lead, running 16% faster than the new Polaris 10 GPU, but obviously at lower power.
* higher power consumption