The new Radeon R9 300-series
It’s finally time to take a look at our first retail R9 300-series card, the Sapphire Nitro R9 390 8GB!
The new AMD Radeon R9 and R7 300-series of graphics cards are coming into the world with a rocky start. We have seen rumors and speculation for what seems like months about which GPUs would be included, what changes would be made, and what prices they would ship at, and in truth it has been months. AMD's Radeon R9 290 and R9 290X, based on the then-new Hawaii GPU, launched nearly two years ago, while the rest of the 200-series lineup was mostly a transition of existing products from the HD 7000 family. The lone exception was the Radeon R9 285, a card based on a mysterious new GPU called Tonga that showed up late to the game to fill a gap in AMD's performance and pricing window.
AMD's R9 300-series, and the R7 300-series in particular, follows a very similar path. The R9 390 and R9 390X are still based on the Hawaii architecture. Tahiti is finally retired and put out to pasture, though Tonga lives on as the Radeon R9 380. Below that you have the Radeon R7 370 and 360, the former based on the aging GCN 1.0 Curacao GPU and the latter based on Bonaire. On the surface it's easy to refer to these cards with the dreaded "R-word"…rebrands. And though that seems to be the case, there are some interesting performance changes, at least at the high end of this stack, that warrant discussion.
And of course, AMD partners like Sapphire are using this opportunity of familiarity with the GPU and its properties to release newer product stacks. In this case Sapphire is launching the new Nitro brand for a series of cards aimed at what it considers the most common type of gamer: one who is cost conscious and craves performance above everything else.
The result is a stack of GPUs with prices ranging from about $110 up to ~$400 that target the "gamer" group of GPU buyers without the added price tag that some other lines include. Obviously it seems a little crazy to be talking about a line of graphics cards that is built for gamers (aren't they all??) but the emphasis is to build a fast card that is cool and quiet without the additional cost of overly glamorous coolers, LEDs or dip switches.
Today I am taking a look at the new Sapphire Nitro R9 390 8GB card, but before we dive head first into that card and its performance, let's first go over the changes to the R9-level of AMD's product stack.
The New Radeon R9 300-Series Lineup
Many of you are going to be interested in the specifications and numbers first, so let's start off with a handy little table that details the Radeon R9 300-series against the relevant Radeon R9 200-series products.
| | R9 390X | R9 390 | R9 380 | R9 290X | R9 290 | R9 285 |
|---|---|---|---|---|---|---|
GPU Code name | Grenada (Hawaii) | Grenada (Hawaii) | Antigua (Tonga) | Hawaii | Hawaii | Tonga |
GPU Cores | 2816 | 2560 | 1792 | 2816 | 2560 | 1792 |
Rated Clock | 1050 MHz | 1000 MHz | 970 MHz | 1000 MHz | 947 MHz | 918 MHz |
Texture Units | 176 | 160 | 112 | 176 | 160 | 112 |
ROP Units | 64 | 64 | 32 | 64 | 64 | 32 |
Memory | 8GB | 8GB | 4GB | 4GB | 4GB | 2GB |
Memory Clock | 6000 MHz | 6000 MHz | 5700 MHz | 5000 MHz | 5000 MHz | 5500 MHz |
Memory Interface | 512-bit | 512-bit | 256-bit | 512-bit | 512-bit | 256-bit |
Memory Bandwidth | 384 GB/s | 384 GB/s | 182.4 GB/s | 320 GB/s | 320 GB/s | 176 GB/s |
TDP | 275 watts | 275 watts | 190 watts | 290 watts | 275 watts | 190 watts |
Peak Compute | 5.9 TFLOPS | 5.1 TFLOPS | 3.48 TFLOPS | 5.6 TFLOPS | 4.84 TFLOPS | 3.29 TFLOPS |
MSRP (current) | $429 | $329 | $199 | $329 | $269 | $229 |
There are quite a few changes that need to be noted, starting with the new GPU code names given to the R9 300-series. Both the R9 390 and 390X are using a new spin of the Hawaii GPU, now called Grenada, while the updated Tonga GPU is being dubbed Antigua. Though not in the table above, the R7 370 uses Trinidad (updated Curacao, which is itself an updated Pitcairn) while the R7 360 uses the Tobago GPU (an updated Bonaire). Confused yet? I am, and it's going to take some time for me to really get those new names in my head. To be fair, AMD isn't out there trumpeting the new code names as gospel, and they would probably be fine with them not being a part of the discussion, but technical users want technical answers.
So what changes were made in these new spins of GPUs? AMD was quick to comment on the term "rebrand" that will no doubt be associated by many with the Radeon R9 300-series. They insist that engineers have been working on these GPU re-spins for over a year and that simply calling them "rebrands" takes away from the work the teams did. These GPUs (the 390 and 390X at least) have a "ground up" redesign of the software microcontroller that handles the clocks and gating to improve GPU power efficiency. As you would expect for a GPU built on the same 28nm process technology that has been around for many years, AMD has tweaked the design somewhat to better take advantage of evolutions in TSMC's 28nm process. And, thanks to higher clocks on both the GPU and the memory, performance increases will be seen over the existing R9 200-series as well. Being able to run around 50 MHz higher on the GPU and 250 MHz (1.0 GHz effective) higher on the memory inside the same power envelope shows that AMD has done SOMETHING, though how much that means for consumers is up in the air.
Obviously we need to judge all of that for ourselves.
The second and most obvious change in these cards is the move from 4GB of memory by default to 8GB of memory on both the 390 and the 390X. Obviously that is a huge jump in memory capacity and is surely a welcome change, but the benefit of that added memory, even at single display 4K resolutions, is questionable. NVIDIA's flagship GTX 980 Ti has 6GB of memory and thus far we haven't been able to max that out. Putting 8GB on both the R9 390 and 390X gives AMD a bullet point at the very least and the potential for better performance in future games that may require it. It's an interesting contrast though, knowing that the AMD Radeon R9 Fury X will peak at 4GB of memory while two cards that are lower in the stack than it feature double that.
With the increase in memory capacity comes a sizable increase in memory speed - moving from 5000 MHz on the reference design of the R9 290X/290 to 6000 MHz on the R9 390X/390. This boosts the memory bandwidth from 320 GB/s to 384 GB/s, an increase of 20%! In areas where memory was the bottleneck the new GPUs should see noticeable performance advantages.
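If you want to double-check those bandwidth figures yourself, peak memory bandwidth is just the bus width (in bytes) multiplied by the effective memory data rate. A quick sketch using the numbers from the table above (the function name is just illustrative):

```python
# Peak memory bandwidth (GB/s) = (bus width in bits / 8) * effective memory clock (GT/s)
def mem_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    return (bus_width_bits / 8) * (effective_clock_mhz / 1000.0)

print(mem_bandwidth_gbs(512, 6000))  # R9 390 / 390X: 384.0 GB/s
print(mem_bandwidth_gbs(512, 5000))  # R9 290 / 290X: 320.0 GB/s
print(mem_bandwidth_gbs(256, 5700))  # R9 380:        182.4 GB/s
```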
XFX R9 290 DD, ASUS R9 290X DC2, Sapphire Nitro R9 390
In terms of raw GPU compute though, AMD rates the new Radeon R9 390X at 5.9 TFLOPS, up from 5.6 TFLOPS on the R9 290X (+5.3%), and the R9 390 at 5.1 TFLOPS, up from 4.84 TFLOPS on the R9 290 (+5.3%). Those are pretty modest gains, and it's obvious that any retail cards that overclock the R9 290X/290 today are going to get pretty close to matching or beating those compute rates.
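Those TFLOPS ratings fall straight out of the shader counts and rated clocks using the usual peak-FLOPS formula: each shader can issue one fused multiply-add (two floating point operations) per clock. A quick sketch to reproduce the numbers in the table above (again, the helper name is just for illustration):

```python
# Peak single-precision compute = shaders * 2 FLOPs per clock (FMA) * clock speed
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(round(peak_tflops(2816, 1050), 2))  # R9 390X: 5.91 TFLOPS
print(round(peak_tflops(2816, 1000), 2))  # R9 290X: 5.63 TFLOPS
print(round(peak_tflops(2560, 1000), 2))  # R9 390:  5.12 TFLOPS
print(round(peak_tflops(2560, 947),  2))  # R9 290:  4.85 TFLOPS
```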
Finally, let's look at pricing. AMD is setting the MSRP of the R9 390X at a surprisingly high $429. The R9 390 follows at $329 and the R9 380 drops all the way down to $199. Compare that to the R9 290X (currently selling for $329), the R9 290 (selling for $269) and the R9 285 (selling for $229) and you have confusion running rampant. Is an R9 390X really going to be worth $100 more than a Radeon R9 290X? How can the R9 390, with fewer stream processors and nearly identical GPU clocks, cost the same as the R9 290X at current prices? At first glance it's easy to get irate about those prices; they look like increases for new branding rather than drops. But let's see how the performance picture plays out first.
“AMD is struggling in the market and they need to win in dominant fashion to take back market share, not eke out victories with older, but refreshed, technology.” Why all of this negativity?
It’s consistently faster than the GTX 970 in your testing, it has (more than) double the VRAM, it runs very cool and quiet and it costs the same as the GTX 970.
If anything this card should be recommended.
Don't pick out one line and turn this into a negative review. It is a positive result on something that I think everyone expected to be pretty bland. The truth is that AMD has always had a perf/dollar advantage over GeForce cards – the 290X was exactly the same thing at exactly the same price. And it wasn't enough for AMD to gain market share. They need new GPUs, new features, excitement, etc. I think Fury X can bring all of that.
I know you're limited on time, but it would have been nice to see games that are VRAM hungry in the test other than GTA V,
to see if the additional 4GB of RAM benefits those games.
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_Titan_X/images/memory.gif
I certainly expected more game tests; I just have to assume time was against you.
Just hope this doesn’t rub off into your Fury X/Nano reviews.
Time was definitely tight on this one, though we may revisit some results using the 15.15 driver on the older card (making it a true apples to apples comparison).
That would be great, some guys on OCN had qualms with reviews because of drivers… This would strip out any of those concerns.
We certainly raised an eyebrow at the seemingly artificial limitation in the driver, which is why it was mentioned in the review and we are looking into a more accurate test.
The problem is perceived value. Even with a new chip revision, the performance is pretty much the same, +5-10% because of the RAM and factory OC, but is that worth the extra cost over the old 8GB 290 series? The answer is no, it isn't.
So what is the best card for maxed-out 1440p gaming? It would still be the 980 or the Ti, right?
It’s ok, Ryan. Keep up the good work. 🙂
BTW, for some reason the word ‘word verification’ is on top of the picture that is the word verification (using Chrome).
This review shows that the AMD 390 is superior to the GTX 970 and has double the memory. Unless there is some niche feature on the 970 there is no longer a reason to consider it. Anything else is just brand bias.
Better drivers, lower temps, lower power draw which helps overclocking. It took AMD 2 years to do what, make it 5-10% better? Nah, I will take the 970 all day.
People are getting many BSODs with the latest Nvidia drivers, after version 347, especially when browsing. That “better driver” stuff is looking more stupid by the day.
People react very differently depending on which company they are talking about.
If Nvidia's drivers are crap in games, it is a problem with the game. If AMD's drivers are crap specifically in GameWorks games, they blame AMD. Did Nvidia fix the performance issues with 700-series cards?
If AMD produces stable beta drivers, they complain about the lack of WHQL drivers. If Nvidia produces a new *hotfix* driver every week that still crashes like the last one, and I am talking about the 900-series models, people ignore the instability problems of those drivers and just feel happy they have to clean install a new driver every week.
As for what AMD did in 2 years, you have to look at the Fury cards, not the rebrands that are necessary for a company that fights at the same time a giant like Intel and a financially much stronger company like Nvidia and still survives, and not only that, but offers strong competition, at least against Nvidia. Nvidia still sells Fermi (GT 620) cards, not to mention that ancient G210.
I haven’t had any trouble at all with Nvidia drivers for years.
Same; with a GTX 680, 780, 970 and now a 980 Ti I've never experienced any fucking driver issues when using Chrome, Firefox and video editing apps like Sony Vegas, Adobe Premiere and other dev tools on a daily basis.
Just take care of your pc with regular maintenance if you value your machine and want tip top performance noobs.
Don’t forget the GTX970’s can overclock a hell of a lot higher than any R9 290x and this one.
I have a simple reference 970 and I’m running 24/7 @1.5Ghz Core & 8Ghz Memory blowing way past all the benchmarks much higher than 980’s with factory overclocks.
why 24/7? are u trying to warm the climate? if u have an ssd it boots in like 10s
You’ve got to head over to the AVSIM’s Prepar3D forum and count how many people there talk about nVidia driver issues. It’s great’n all that nVidia releases new drivers, but every release seems to bring new headaches for Prepar3D fans. nVidia’s cards seem to have the consensus that they are the best for Prepar3D even though their driver support is awful.
AMD’s drivers work great. The only negatives are Crossfire support and not being able to tweak individual game settings to the extent you can with nVidia cards and the Nvidia Inspector application.
Nah you’re so wrong. I already have 3 friends that moved over to a GTX 970 due to having all sorts of driver issues on their 7970s, R9 290, R9 280s, and they don't take much care of their PCs and programs, running a shit ton of useless crap on their PCs. 1 is on Windows 7, the others on Win 8.1.
Since they popped in that 970 and installed the latest drivers all has been sound, so that's real proof I've seen for myself. Believe me or not, I don't rly give 2 shits.
Man, I have a 270X, my roommates have a 7870 and a 290, and we don't have problems with drivers. Only I got the flickering problem (with an MSI card which has a known issue, and the shop sent me an Asus one because they didn't have MSI in stock).
It’s very wrong to say EVERYBODY has problems with AMD drivers and NO ONE has problems with nVidia drivers. In my practice I see people do crazy shit with their PC and blame the hardware or software.
If you put some TLC to your PC it will return the favor.
I wish you remain problem free and enjoy whatever works for you (3.5 GB… fck, if it works for you and you are happy 🙂).
Best regards
Cherniq
Yep. I don’t have these driver issues with my 290, and I have several friends who don’t either. I call BS.
I run a 960 in an old HP Z600 and a 740 in a z97 based HTPC and have NEVER had driver related issues. I’m using the very latest WHQL drivers from NVidia and I keep them updated.
I also have nearly 600 gaming titles through Steam and have seen no issues.
From My experience – NVidia rocks.
I have run a number of AMD cards in the past too (all the way up to the R9 285). I had some driver issues in games with them, but no BSODs.
I don’t overclock…
Perhaps pushing components a bit too far is part of the “driver” issues.
“I haven’t had any issues with drivers thus no issues exist”
What a compelling argument!
I have zero trouble at all. I bet something else is going on. When a car breaks down somewhere, it doesn't mean all cars will stop functioning.
It’s really hard to understand you with AMD’s dick in your mouth. Perhaps you should pull it out before speaking.
You seriously need to tone down the fanboy. This whole AMD vs Intel vs NVIDIA is getting pretty fucking old. From you and the other AMD fanboys.
Nobody makes perfect products, they all have pros and cons. Intel makes mistakes, NVIDIA makes mistakes, AMD makes mistakes.
“rebrands that are necessary for a company that fights at the same time a giant like Intel and a much stronger financially company like Nvidia”
AMD and NVIDIA are rebranding GPUs because they are waiting for smaller process nodes from TSMC and GLOBALFOUNDRIES. We knew this last year. Quit fooling yourself.
two words: anger management
I have a 980; I can confirm you're full of shite. Keep fighting the good fight though.
I am not a gamer; I have been using Nvidia for years as a photographer, and just now I am going to switch to AMD video cards. The reason is that AMD works better with OpenCL/OpenGL in Adobe, and as you can see below, far more AMD cards are supported than Nvidia ones.
Below is the reason:
AMD Adobe support:
AMD A10-7800 APU
AMD Radeon HD 6650M
AMD Radeon HD 6730M
AMD Radeon HD 6750
AMD Radeon HD 6750M
AMD Radeon HD 6770
AMD Radeon HD 6770M
AMD Radeon HD 6950
AMD Radeon HD 6970
AMD Radeon HD 7480D
AMD Radeon HD 7510M
AMD Radeon HD 7530M
AMD Radeon HD 7540D
AMD Radeon HD 7550M
AMD Radeon HD 7560D
AMD Radeon HD 7570
AMD Radeon HD 7570M
AMD Radeon HD 7590M
AMD Radeon HD 7610M
AMD Radeon HD 7630M
AMD Radeon HD 7650M
AMD Radeon HD 7660D
AMD Radeon HD 7670
AMD Radeon HD 7670M
AMD Radeon HD 7690M
AMD Radeon HD 7730M
AMD Radeon HD 7750
AMD Radeon HD 7750M
AMD Radeon HD 7770
AMD Radeon HD 7770M
AMD Radeon HD 7850
AMD Radeon HD 7850M
AMD Radeon HD 7870
AMD Radeon HD 7870M
AMD Radeon HD 7950
AMD Radeon HD 7970
AMD Radeon HD 7970M
AMD Radeon HD 8470
AMD Radeon HD 8550M
AMD Radeon HD 8570
AMD Radeon HD 8570M
AMD Radeon HD 8670
AMD Radeon HD 8670M
AMD Radeon HD 8690M
AMD Radeon HD 8730M
AMD Radeon HD 8740
AMD Radeon HD 8750M
AMD Radeon HD 8760
AMD Radeon HD 8770M
AMD Radeon HD 8790M
AMD Radeon HD 8870
AMD Radeon HD 8950
AMD Radeon HD 8970
AMD Radeon R7 265
AMD Radeon R7 APU
AMD Radeon R7260X
AMD Radeon R7M260
AMD Radeon R9 280
AMD Radeon R9 280X
AMD Radeon R9 285
AMD Radeon R9 290
AMD Radeon R9 290X
AMD Radeon R9 295X2
Nvidia below:
GeForce GTX 285 (Windows and Mac OS)
GeForce GTX 470 (Windows)
GeForce GTX 570 (Windows)
GeForce GTX 580 (Windows)
NVIDIA® Tesla C2075 card (Windows) when paired with a Quadro card as part of an NVIDIA Maximus™ configuration
Quadro FX 3700M (Windows)
Quadro FX 3800 (Windows)
Quadro FX 3800M (Windows)
Quadro FX 4800 (Windows and Mac OS)
Quadro FX 5800 (Windows)
Quadro 2000 (Windows)
Quadro 2000D (Windows)
Quadro 2000M (Windows)
Quadro 3000M (Windows)
Quadro 4000 (Windows and Mac OS)
Quadro 4000M (Windows)
Quadro 5000 (Windows)
Quadro 5000M (Windows)
Quadro 5010M (Windows)
Quadro 6000 (Windows)
Quadro CX (Windows)
Tesla C2075** (Windows)
Quadro will be the better ones, but some photographers have said that AMD has worked much better with Adobe because Adobe is using OpenCL/OpenGL, and that goes for Apple Macs as well. For example, the Final Cut video editing program uses OpenCL, so you are better off using AMD.
So what it comes down to is: if you are a gamer and Nvidia does it for you, then use it if it does what you want, but if AMD does the same for less $$$$ then go for it. I only use what works for me in my profession; each card has minuses and pluses. If my photo editing works better on Nvidia I will use it, but now that I know it works better on AMD I will try it.
But sometimes it is not the card driver, it is Windows that apparently doesn't support OpenCL; as always, Microsoft doesn't like open source, it only pretends to like it.
But having said all this, Nvidia is catching up to AMD on OpenCL/OpenGL. So if Nvidia works for you, use it; if AMD does the same, use it. My processor is an AM3+ FX 8-core, no overclocking, but for the same performance I would pay more for Intel; I have been using AMD CPUs since my first computer. Bang for the buck.
So guys, don't argue; put it on the scale and use whichever is better for you.
I also want PhysX, Gameworks enhancements, and Shadowplay.
PhysX works with AMD or Nvidia, on the CPU. It is built into every Unreal Engine game.
AMD has ShadowPlay when you install the AMD raptr client.
Most, not all, Gameworks enhancements work on both cards.
PhysX on the CPU is absolutely unplayable; you will get 10-15 FPS.
AMD does not have ShadowPlay. I don't think you know this, but Nvidia cards have an encoder built in that allows for recording while playing with minimal performance hit.
GameWorks enhancements work better on Nvidia cards… google "AMD GameWorks on Witcher 3"…
So basically you dont know anything about anything. Grab a gun and kill yourself
So, all Unreal Engine 4 games are unplayable? Really?
AMD has the same feature. It’s not called ShadowPlay, since that is just a marketing name, but it is the same feature. AMD was touting putting these nifty new encoders into their GPUs a year before Nvidia.
So, I should base all my knowledge of a subject on one game because it is representative of all games? I won’t, even if you want me to.
Now, as to my knowledge, I have been working with the Unreal Engine for a few months, and have signed up for the Nvidia NDA stuff, as well as looking at AMD's offerings.
Does this mean I know everything? Yes, yes it does.
Same goes for you: GameWorks as used in Witcher 3 uses DX11 tessellation. AMD cards are known to suck at that STANDARD. Funny how Nvidia used a standard, which AMD fans whined they should do; they did with HairWorks, and yet AMD fans still whine about it because their cards suck at tessellation. So Nvidia doesn't use a standard, they get chewed out. They use a standard and still get chewed out. Funny how there is a double standard in that.
No, not strange. People are idiots. All of us are biased. I try to combat my bias, I don’t succeed all the time.
But, frankly, I dislike all fanboys and the hate. I like technology, and think it's all cool. Use the tech that works best for you, and don't worry about keeping up with the Joneses.
No.
“Physics” works on the CPU; “PhysX” is something else, features beyond what you find in normal physics rendering. “Shadowplay” is an Nvidia exclusive – you can do something similar on AMD cards, but it isn’t Shadowplay. “Gameworks” –
Gotcha! You fell into my “Gameworks” trap. The fact is, yes, most of what you see in games can be done on AMD graphics cards without issue. But not if you listen to AMD, a company that would rather whine and cry about what Nvidia does than fix their drivers. I’m sick of it, and I’ve started saying so. I’m sick of AMD playing the “Oh, feel sorry for me, I’m just a poor underdog with a heart of gold” game. AMD is better than that. And they demonstrated what I’m saying (and what I said before they fixed their Witcher drivers) quite clearly when they cried about Gameworks hair in Witcher 3 – and then released a fix for their drivers that resolved the performance issues.
I’m no Nvidia fanboy, I’ve recommended the 290X over even the GTX 980 on Borderlands forums because, at the time, Borderlands 2 ran better on it than on the 980 – and Borderlands is an Nvidia optimized game. I think the price/performance value of the 290X is excellent and I still encourage people to take a look at their particular application, their particular games, and buy whichever graphics card suits them the best.
So, for me, it isn’t a matter of what people say on forums, it’s a matter of performance (and by “performance” I mean visual quality, not just frame rates) in particular games (the ones I play; the “average performance across all games and resolutions” number doesn’t do me much good). And it is to a certain degree about PhysX (which, again, is NOT physics – but it IS something AMD could provide if they chose to).
AMD needs to man up and stop complaining, both on the CPU and GPU front. Sure they don’t have the money Intel does – but the fact is they beat Intel at their own game before; they can do it again – but not if they keep up this “poor me we can’t compete because we’re too small” attitude. The fact is they forced Intel into a cross-licensing agreement when they created AMD-64 and brought us into the 64-bit computing world on the desktop.
Maxwell is a monster; if Fiji can beat it, even if it’s “only” by an “average of” 10% that proves my point again – AMD CAN if AMD WILL. Overcoming Maxwell is a feat to be admired, I don’t care what anyone else says. But the people that most need to stand up and take notice of AMD’s successes are the folks at AMD themselves.
PhysX is a Physics engine, that has a CPU implementation and a GPU implementation. The CPU implementation is integrated into the Unreal Engine 4 source code, among others. The only “feature” the GPU implementation has over the CPU implementation is speed. This may involve using different algorithms, but the end result is speed, not features.
Mark, you should only look at the performance #s and draw your own conclusions. I think everyone underestimates the cult-like devotion and brand loyalty that's rife in this industry.
I’m going to assume the negativity Mark notes is drawn from little game data and a lot of talk about the industry, so one can’t really blame him for feeling that way.
Wholeheartedly agree with your statement about taking the data you need (across multiple reviews) and drawing your own conclusions.
The negativity is because it's basically a 290X but much more expensive.
Not to mention the 970 is a fantastic overclocker and has beaten even the 980 before with a stable overclock.
The only AMD cards worth mentioning are the Fury range.
Interesting that it's pretty much neck and neck with the R9 290X, meaning that if these older cards fall dramatically in price they may represent the better buy.
Also note the higher frame variance in GTA V. I'm not sure why the Nvidia card did so much better in this title; it may just be a driver thing.
Nonetheless, the moral of the story is the GTX 970 is going to have to drop in price or lose market share, which is, rebrand arguments aside, what was expected of the new R9 390.
Also worth noting with 8gb Vram and all those cores, this new card should be a beast for productivity tasks on workstations, especially at this price point.
No reason nVidia needs to drop the GTX970 price. AMD’s product is 5-10% faster for the same price but that isn’t the reason to lower theirs. nVidia offers additional advantages with their software over AMD and as the market leader they can continue to use this as a competitive advantage and keep the price intact.
Gamestream, Grid, and Gameworks compatibility are reasons to choose nVidia over AMD when the price is equal.
I have seen a lot of people saying that they needed to turn HairWorks off for Nvidia cards because it causes crashes with Witcher 3. If it doesn't cause crashes, it seems to lower performance to unacceptable levels whether it is Nvidia or AMD cards. HardOCP turned it off in their test, saying it was not worth the performance hit. Regardless of how powerful the tessellation unit is, the rest of the GPU still needs to keep up with all of the extra triangles generated by the tessellation unit. I don't know what else is included in GameWorks, but it does not seem to be a selling point to me. I suspect we will get a few games with Nvidia's proprietary solutions, but they will mostly have open standards as a backup. Gamestream and Grid seem irrelevant also.
Gameworks with 0x AA looks horrible indeed, but only on Geralt and is a mere 5-7fps drop for me. So I swapped the hairstyles so that gameworks on also uses default non-gameworks hairstyles. Now the monsters look great at a distance with 0x AA unless you look at them under a magnifying scope. With a little Gaussian blur you can sort that sharpness shimmering and voila.
I think I would rather have 8 GB of memory on the card rather than 3.5 GB. I would like to see some more detailed analysis of the power consumption. They are listing the 390 and 390X as the same TDP even though they have different clocks.
Finally some sense.
Lol at all those 3.5GB owners defending their cards. A 1200 MHz R9 390 can get a 13.8k score in Firestrike, which is slightly higher than a 970 at 1500 MHz, so any game not destroyed by Nvidia should run fine on the R9 390. The card in my eyes is a clear win, a successor like the FX-8350 was to the FX-8150.
Since it comes with 8GB at $329 with no double-VRAM edition planned later, the price in third world countries like mine should not be gouged much. We can finally have more VRAM than 4GB at cheap prices. Bring it on Skyrim!!
970 has 4GB…sorry, no 3.5. 512mb runs a bit slower than the other 3.5, but it’s 4GB total period.
On top of that you can OC the crap out of the 970 (all NV cards) and blow away what is here for fewer watts. 4K is useless for these things (too slow, turning down crap all day), so there's no reason not to buy a 970. The last WHQL drivers from AMD were Dec 8, meanwhile NV releases them monthly (sometimes 2 in a month, like May) and day-1 drivers for every major release. Gameworks, G-Sync (which actually gets rid of all the issues it is supposed to; FreeSync still blurs etc.), CUDA and workstation perf blow these away. That extra 100W here can cost you a lot over the life of the card too, especially if you have more than one person gaming on it. You can easily spend a few hundred over 3-5 yrs in many states in the USA or many places around the globe like AU (25c/kWh; 15 states in the USA over 15c). Punch that into a calc and realize how much you save! Kids are the worst in the summer months while off and can really blow up your bill (or WOW users etc…LOL).
http://www.tomshardware.com/reviews/amd-radeon-r9-390x-r9-380-r7-370,4178-7.html
Victories in workstation stuff are pretty massive in 3/5 apps here and not even showing Adobe AE/Premiere which use CUDA (along with another 200+ pro apps).
Price is a non-issue when you consider the extra 100w bulb all the time you game over years (total loser), so all things considered, NV is still a better decision. Maybe fury changes this, but it doesn’t seem like a watt saver either, but it will at least add perf.
Regarding compute performance, THG is comparing the 980 Ti vs. the 390X. Well, they are not comparing it, you appear to be. But the 390X must obviously lose this match. There is no non-Ti 980 or 970 in those charts.
There is one relevant comparison, though: Radeon 380 vs. GeForce 960. The 380 wins 3 tests (18%, 8.5% and 12% margins), the 960 wins one (3.5% margin), and one is basically a tie.
Also see the charts; they do not always start from zero. Therefore graphically perceived differences are much larger than the real ones.
Regarding the 970, the 0.5GB is MUCH slower than the 3.5GB, though I would agree nobody was really able to point out that it would matter. Actually, I wanted to buy it. But since AMD users can enjoy Hairworks with a much lower performance impact due to the tessellation override, I expect to go with the AMD team again.
Yes, that was the point 980ti smokes 390x by a large margin. You’re also missing my main point, this is the worst case scenario for NV, as it completely ignores CUDA, which is HUGE in a workstation scenario as almost every pro app out there can use it (either built-in or plugin). IE, there’s a reason NV owns 75-80% of this market.
960 is terribly hobbled, while 970 has double the bandwidth and can oc like mad. The 970 can OC to match 980, so dismissing these is not exactly apples/apples. We’re comparing the top end stuff here, and 960 is not going to be used the way 970 and above would be. You can do pro stuff on the 970 and still enjoy it, while 960 is kind of a joke for that stuff. Pixel rate is nearly doubled also (86% faster). There aren’t that many reviews to point out workstation stuff, I chose the best I could get to compare to a 390 vs. NV stuff. The charts are showing massively leaning 980ti right? You can do the math. The 3/5 I mentioned are 0-120, and victory is massive (66 to 100 is a big deal, 59 to 91 is a big deal). The two you’re choosing to use, yeah, not a big deal. As noted it doesn’t show what happens in CUDA enabled apps which number well over 200. If you don’t use pro apps I guess this means nothing, but to anyone who does, it’s HUGE.
The 380 (not 380x, or 390 for that matter and 390x worse) hits more watts than a 970, and 970 doesn’t exactly rocket up when OCed (about 15w). For many 970 vs. 390 story will be watts and heat. Not to mention Day1 drivers, Cuda, ocing, gsync (that works, freesync still a work in progress so far, ghosting etc) etc.
But hey, Buy AMD if you want. I own one right now myself 😉 Though probably not once 16nm NV hits Q2 next year. I expect AMD’s situation financially to get worse (and R&D also, dropping for the last 4yrs) so I expect drivers etc to also degrade further (project cars, witcher 3 episodes etc show the trend, last WHQL dec8 2014 etc). ZEN may turn things around, I don’t expect fury to do much with so few models using fiji and yields looking to be low on the chip itself and mem probably too (price of both will make profits nil). They are doing a 550-580mm^2 chip on a new process and new HBM mem, so won’t likely take in the profits of NV on 980ti/titan on a refined process with old hat memory. No financial person expects anything from AMD until mid 2016 at the earliest. I have to consider that when thinking about buying a card and the support I’d expect to receive especially as we see it now before it gets even worse $$$ wise. I might think differently if I upgraded gpus yearly, but I don’t these days so long term support is important (and the cost of that long term wattage too).
You’re also not recognizing how NV uses the 512MB that is slower. It would be MUCH slower if it wasn’t programmed to only be used for less used data. In that sense it isn’t as slow as you (or the specs) imply. They keep the heavily used data OUT of that part, so it’s not a huge hit (3% larger hit than a 980 takes in same situations, so not a big deal and they had to look to find even that). NV does a great driver job of hiding it, and it’s better to have it in there than the other way which was to just cop out and remove it entirely. I like the fact they left it in and took the time to program to use it effectively. You can do that when you have the R&D cash. That is why as you note, nobody can point it out 😉
I don't know if it's because of the new CEO, but AMD is actually looking competitive in performance. I just need confirmation on quality drivers and I am sold. Nvidia better watch out lol.
They are certainly getting very good results with the improvements to the existing GPUs, but Fury is where we’ll probably really start to see significant gains. I’m with you on driver support, but I haven’t tried out 15.15 yet – definitely needed after so long on 14.12
If game companies worked towards the DX standards there shouldn't be a need for "optimization" from either side.
It should work from the day it's sold at its optimal level, only needing updates to hotfix bugs, not "optimization" per game.
For all we know there could be a whole lot of tricks going on inside the driver.
That would be an Ideal world. Unfortunately, you, the gamer, exist, and you want good performance from every game. And Game Developers write game engines that do unplanned for things, push technology in a way no one thought possible. So it was never planned for in DX9, 10 or 11.
DX12 and Vulkan change that by making the driver dumber, thus allowing the Engine Developer to treat it like the General Purpose compute engine they already do.
Also, once the GPU companies realized that if they optimized their driver for the best titles, they could get an advantage in the market place, all bets were off.
The hardware architecture is different between companies so there will always be a need for optimization for drivers.
why are you still on 14.12? Amd’s beta drivers are just as stable as nvidia’s whql or amd’s past whql drivers.
Because it is the current release, and what AMD recommends for any 200-series card. Yes I’m aware of beta drivers, and no they haven’t gone through the validation to be release drivers. For any 300-series card AMD provides the 15.15 download page. I imagine this will be WHQL certified soon and replace 14.12, but it isn’t at the moment and my machine has 14.12.
Just my own compulsive behavior perhaps, but it’s the same reason I’m staying with Windows 8.1 until Windows 10 launches officially – even though many people are happy with the beta.
I'm using the 15.5 betas on my R9 280; very stable for me (Win 7 64).
Give them a try; you can always roll back. It won't blow up your PC 😉
Or I could throw caution to the wind (and do a full backup) and install the modded 15.15 driver found in a couple of forums out there… Very tempted by this
We tried it here a few hours ago. No smoking guns to speak of. 15.15 does help GTAV performance a bit on the older card though (to be expected as optimizations were no doubt added for that title). Aside from GTAV, the driver mismatch should not disqualify these particular results.
One article on TechPowerUp and all the internet talks about AMD drivers. You have beta drivers that work from AMD. Nvidia's drivers after version 347 are a mess, with crashes all over the place and 700-series performance problems in many cases. Forget the placebo effect when installing new WHQL/beta/hotfix drivers every week and look at reality, which is that all the latest Nvidia drivers are problematic.
I've had (nor heard of) no issues after 347. Comparing the stability and quality of AMD and Nvidia drivers is a bit of a joke – AMD has a while to go before they can build up a good track record in that respect. Even AMD fans complain about their drivers (except for you, of course).
The Omega driver seems to be rock solid. And that is the driver they recommend. Except for new cards, of course.
Unfortunately I am on a Macbook Pro, so I am supposed to use the official driver. When I don’t, it is very unstable. Oh well.
perfect reply, Allyn, you are a good man.
https://forums.geforce.com/default/topic/836914/geforce-drivers/official-nvidia-353-06-whql-game-ready-display-driver-feedback-thread-released-5-31-15-/2/
This is like the whole Windows vs OS X debate. A certain group of people keep saying Windows is full of security flaws and OS X is completely secure. The point I am making, for people that are idiots: look at the market share numbers, they are massively in favor of Nvidia, so any bug there is will be found. But like the Windows vs OS X war, Nvidia is working on FIXING them, and AMD, well, they will when they get around to it, like Apple.
Allyn, I usually don't reply to people directly, because crap gets out of hand very quickly. But to say something isn't happening because you haven't experienced it does not mean it's not happening. Why do I say this? Because I have a GTX 970 and I personally have had these problems, and have been to numerous forums with 900-series card owners having the same problems. I went all the way back to driver 347.88 and have not had any problems since.
I like PCPER and the content you guys provide. You seem like a nice guy, but damn, you come off as a smug a-hole sometimes. You might not like that, but it is what I see sometimes. Anyway, keep up the good work.
Sorry, that's what happens when I'm in 'shut down the troll' mode. If you can point me to some of those threads and give me some more info, I'd be happy to look into it further. If we can reproduce the issue, we can lean on the appropriate folks and try and get some action.
Ok Allyn, here are a few of those threads that you requested. I don't understand how you can't just type in "Nvidia driver 353.06 issues" and get several forum threads on the first page. You can type in the driver before that and a few threads come up on the first page as well, but here you go.
http://linustechtips.com/main/topic/378474-goddamnit-nvidia-again-driver-issues-35306/
http://www.tomshardware.com/answers/id-2673336/display-driver-driver-stopped-responding-recovered-nvidia-driver-353.html
http://steamcommunity.com/app/24010/discussions/8/617336568080451762/?insideModal=1
https://www.reddit.com/r/nvidia/comments/38k6o5/has_anyone_figured_out_how_to_stop_the_35306/
In researching those threads, it appears to be something specific to a few of the recent driver versions *and* some other software (afterburner, etc) that may not be meshing nicely. Has anyone seen the crashes happening with today's driver (353.30)?
I have not installed it yet. I might give it a go to see what happens. I have had no stability issues ever since I went back to 347.88. I even installed the hot fix driver 350.05 and still had problems. As I said I will install the latest one and get back to you.
Cut the Nvidia PR BS and read this http://forums.overclockers.co.uk/showthread.php?t=18675135
If that doesn’t satisfy you I have plenty more to prove the last few nvidia drivers have been a disaster vs AMD solid drivers. The same is true in the past with 6 series nvidia drivers. BTW these are ALL nvidia GPU owners.
Am not surprised one bit you guys weren’t given AMD Fiji samples, alongside your partner in crime TechReport and a few others. Nvidia PR sites thought they were actually too big to ignore lol
I suppose the FCAT finally took its toll eh – Double standards on calling nvidia refresh and AMD re-brands or refusing to use FCAT when Nvidia are worse. Half the time you guys have no idea what AMD architectures are, and neither does Josh the so called AMD pro.
So who's the joke now?
Check the attitude. We are not on any sample blacklist. Feel free to check back and see who has Fiji reviews once they are published.
what do you mean driver support?.. you’re an apple hipster gtfo
Awesome! When it goes down this road we have the ultimate sophistication. Don’t currently own a Mac, even if a 2014 MacBook Pro 13 is what I had with me the one time I was in studio for the podcast, but they are solid computers with really accurate screens for photo editing if nothing else. Main system is Windows 8.1. It is “PC” Perspective after all 🙂
Don’t listen to shit from other users. Yes Nvidia has more draw calls in DirectX11. This is an unchanged and true fact.
However DirectX12 AMD WDDM2.0 driver brings improvement even in DirectX11 games. So if you buy AMD, switch over to Windows 10 and enjoy the true power of the card.
As much as the 3.5 GB burned me and still burns me, i guess it really isn’t an issue as Oculus is obviously going to need next gen horsepower by winter of 2017.
If i didn’t have a card and i was desperate, i’d probably just go in cheap and get an amd 290, 280 on sale to hold me over for a year.
I doubt we’ll even get to true 4k gaming until 2018
gtaV a 2 year old game isn’t a reason to buy a video card for the pc master waits
we’ll see what facebook releases i guess…
Nice review, one question though:
What advice would you give for someone picking between the 390 and the 970? I was hoping it might be in the conclusion.
970 – 3.5gb vram, r9 390 – 8gb vram which can be filled to the last megabyte. R9 390 is a clear victor. 1200mhz r9 390 can beat 1500mhz GTX970.
Watt consumption? Oh snap…
Makes me think of that lonely go-kart with an F-16 engine. It’s fast, but it needs an extra pick-up for fuel.
Where I come from, the difference in the electricity bill between these two brands covers the difference in their prices within 5-6 months.
You do know that a 100 W difference is only $20-30 over 2000 h of gameplay? That's 83 straight days at maximum power = $30.
stop the bullshit.
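For anyone who wants to check that electricity arithmetic themselves, here is a minimal sketch; the 100 W delta, hours and per-kWh rates are just illustrative assumptions pulled from the comments above, not measured figures:

```python
# Extra electricity cost of a higher power draw: watts -> kWh -> dollars
def extra_cost(delta_watts, hours, price_per_kwh):
    return delta_watts / 1000.0 * hours * price_per_kwh

print(extra_cost(100, 2000, 0.15))  # 2000 h of gaming at $0.15/kWh: $30
print(extra_cost(100, 2000, 0.25))  # same hours at $0.25/kWh:      $50
```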
Numbers aren’t the only stat which a consumer uses to decide which card to buy. After purchase support, additional software and features factor in. This is especially true when two products are virtually the same performance wise.
The fact is that nVidia has the money to continue to invest in the software and features for their platform that will continue their huge market share advantage over AMD.
The cards are fairly even in the results, so it's up to your brand preference (or need for >4GB of VRAM).
Can games be patched to force the use of the additional memory? I realize the developers could do so, but they have little incentive for older games.
Game engines already appear to have some intelligence to them in that they query the available assets and tend to avoid going over what is available unless everything is cranked way up in settings. If they can't max an 8GB or 12GB card, then they were likely not configured / loaded with assets sufficient to use that much RAM. A patch would be more like that Shadow of Mordor 6GB texture pack.
Or all the Ultra Super Huge and Big Skyrim addons? 🙂
Another important point to consider is variable refresh rate, that is, if it is something important to you.
Too much to type here about it, but PCPer has some good articles outlining how each works and the pros/cons of each. VRR is something I most definitely want when I get a chance to upgrade in the future.
What’s up with the 7970 and 680 test results on the Frame Rating page? Lol. Typo or old graph?
Those were pre-frame pacing driver days. Quite old and a mis-representation of the current state of the 7970. The 7970 is still holding its own and was the superior choice compared to the lowly 2gb 680 at the time.
You don’t think PCPer would miss an opportunity to throw up a negative graph in what would otherwise be a very positive showing of an AMD product vs. nVidia?
Seriously? It's the example graph – taken from early tests of the very test method that directly resulted in AMD fixing their frame pacing issues. You're welcome.
Those are the same *example* graphs Ryan has used for all frame rating articles since the beginning. If you read the review you’ll notice what specific cards were actually tested.
Those are example graphs made to show and explain how our testing methodology works and how you can interpret the data on them. There is a lot of data in our testing and thus it's important to us to continue to reiterate the analysis portion for any new readers or readers that need a refresher. 🙂
It might be better to use the tested cards' data, rather than this old data, to illustrate the point. Looking over that old data will cause those that really need to see it to dismiss it as irrelevant and not look at it.
Thanks for all the hard work.
Maybe. Or I should just put a big watermark on them that says "EXAMPLE DATA"
Sorry, I guess that’s what I get for skimming the article in the middle of the night. haha. And for the record, I don’t think you guys hate on AMD… at least not their graphics cards. lol.
What I hoped to see was the whole 300 series updated to the latest GCN 1.2 so the whole series gets improved tessellation, FreeSync, TrueAudio, color compression…
Then with Fury they could improve over that.
Question. I have a r7-260, it’s a msi OC edition, and it seems as far as I look, it’s the ONLY r7-260 (not X, just 260) non reference card made. So question, would I be able to crossfire my Bonaire r7-260 with a Bonaire r7-360, or is the world just not that simple.
The world isn’t that simple. Now, that’s not saying you can’t hack together a driver that works, but it won’t be simple.
Once the Catalyst 15.15 drivers get merged into the beta stream and include support for the R-200 and HD7000 series, then you’ll probably be able to Crossfire them. As things stand now, driver support separates the R-300 and R-200 series, so anyone looking to run Crossfire of that nature needs to wait a little bit longer.
So, Ryan and crew, do you have any plans for a benchmark that can take advantage of the larger memory amounts in today’s cards? What would you need for such a benchmark? Could you use something like the Unigine Heaven Benchmark? As in a tech demo designed to stress the highest end cards available, such as the Titan X?
Nothing lined up yet; I still think that actual game benchmarks are what is best for the gamers and the industry. If nothing utilizes 8GB of memory in the real-world then that's what AMD and its partners have to live with.
For education purposes though, I do see the merit in looking for games or synthetic tests that push those boundaries.
I was thinking about that Epic demo from GDC. It was made specifically to push Unreal Engine and the Titan X as hard as it could. In fact, it was targeted at running 30fps at 1080p on a Titan X.
The demo they just released uses 24 GB of system RAM when loaded, it uses 8K textures, and the demo has more than 100 square miles of area, all in one level.
While no current game pushes cards that hard, UE 4 is capable of it, so there may be some benefit. That said it was using cutting edge tech that had no optimizations, and was created in like 3 months, but it should be a good test of these new super cards we are dealing with.
Holy crap, and that's something I can download??
Yes, sign up for an Epic account, download their Epic Games Launcher, the same one that gives you access to the Unreal Tournament Alpha builds. On the first screen the first time you launch, choose Unreal Engine 4, you can always get to the rest of it later.
It should default to the Learn area, and scroll down to the “Engine Feature Samples” and you will see “A Boy and His Kite.” Click that image.
Here’s the fun part. You will have two options to download this.
One is a precompiled 64bit executable. This is 6 GB zip file. For that, scroll down til you see the yellow link “Download this non-development runtime build.”
The other option is to download the Unreal Engine 4.8 release, and Download the Developer build, where they give you access to all the assets for use in Unreal Engine projects. This download is 24 GB.
And my measly MacBook Pro didn’t know what hit it.
You could develop or hire a developer to create a benchmark from the assets, if you wanted.
When you put up charts the way you do then it’s best to start the Y-axis (FPS) at a value greater than zero in order to give a greater separation between results.
Secondly pick bold primary colours so that it’s easier for the reader to distinguish between cards.
Starting at 0 ensures that the gap in performance between the parts is actually FAIRLY represented. Setting it to non-zero tends to exaggerate differences that aren't substantial.
Starting higher than zero artificially inflates the performance delta and is generally a bad way to visualize performance results.
I know, this is more work, but it would be useful to show both types of graphs. The issue with the current style is you don’t really get a sense of the shape of the graph, there isn’t enough resolution.
I do agree for comparison starting at zero is the way to go.
Oh I follow now, you mean a 'zoom in' when the results are very close? I do get that, but one could argue that if the cards are that close, you are within <1 FPS difference and we might be trying too hard to call a winner when it's really just a tie.
Right, but I am talking about the actual frame pacing issue, not the comparison. It's all well and good that the cards are in line with each other, but it is hard to tell the variance at the current level. As in, the cards' results overwrite each other, so you can't really see if there are big dips and things.
Ryan,
I'm hearing rumblings that the performance gain is more due to the 15.15 drivers than the cards themselves. I think Guru3D has hacked 15.15 to work on the 200 series; any chance you could run a quick test and see how the 290 performs with those drivers?
We're looking into it!
I'm looking into it, and that is why I commented on that driver discrepancy in the Test Setup page.
Let's just face it: if you are using an R9 290 it is not worth the upgrade.
Just upgrade to a Fury X or comparable.
If you are 2 grades below that, upgrade. It costs relatively the same for Nvidia or AMD.
But if you are like me and are thinking of your budget, then you will wait.
I have a HIS IceQ 7870 4GB, a HIS IceQ 7850 2GB and also an Asus 7750 3GB.
All of these graphics cards are due for upgrades.
Starting with my daughter's 7850: she will get a pro graphics card.
My son and I will probably get a Fury X; he works with audio production.
I develop software & CAD inside Linux, and the only thing I use Windows for is gaming.
Most people think that drivers will solve all of their problems; not true.
Software compatibility has a lot to do with their problems.
Take your problem with that anti-virus software on the podcast; all it took was a backup program.
That made it unstable, plus your password software: a 47-minute loss of productivity.
And I hope it was not encrypted; you would have lost everything at shutdown.
That is Windows and its dependencies; it makes one compatible, not both.
To the point: drivers are not the only factor in whether a graphics card will work with apps.
It is all the applications working in concert together that will make Windows work or fail.
The last thing anyone would like is to have the anti-virus start running during a game.
Some will do that; there are at least 5 suites that cause problems while gaming if not configured right.
So it is not blue or green or red's fault; it is all the software running together and harmonizing.
If they were GCN 1.2 I think they would have been awesome cards; it's a shame. However, an element of the decision not to release the 390(X) cards as GCN 1.2 could be that the performance gains would distract consumers away from the benefits of Fury and HBM. The gap between the cards would be too low.
Remember ‘Son of Fury’ is coming too, which will make for an interesting choice between 390(X) and R9 Nano.
https://pcper.com/news/Graphics-Cards/AMD-Announces-Radeon-R9-Nano-6-Graphics-Card
Interesting, it appears that the 390 is the best bang-for-the-buck of higher-end cards.
@PCPer – at any point during all of this, has there been any mention of a change to address the “below VRR window” of freesync monitors?
No part of that really is affected by this GPU launch.
AMD did not bring enough to the table with this card to increase marketshare. 390 uses 100w more than GTX 970 while offering marginally better performance and sacrificing the ability to do things like HDMI 2.0 and GSync. 100w higher under load means the 390 will be significantly louder as well.
freesync
After all this hype I can sum up this result with one word-
Yawn.
Um, I don’t remember seeing any mention, or testing of AMD’s implementation of Super Resolution. Did you get a chance to see how it compared to Nvidia’s implementation, as well as how much overhead does it have?
All these benchmarks are also going to be sort of irrelevant in July when Windows 10 comes out. Will there be plans to re-test?
Great job as usual with the review. Thanks, Ryan!
As far as I can see, there is no reason to upgrade my Sapphire Tri-X R9 290X. As your partner on TWIT’s This Week in Computer Hardware Patrick Norton would say: I await your Fury X review with baited breath. 😉