AMD has announced a price cut for the Radeon R9 Nano, which will now have a suggested price of $499, a $150 drop from the original $649 MSRP.
VideoCardz had the story this morning, quoting the official press release from AMD:
"This past September, the AMD Radeon™ R9 Nano graphics card launched to rave reviews, claiming the title of the world’s fastest and most power efficient Mini ITX gaming card, powered by the world’s most advanced and innovative GPU with on-chip High-Bandwidth Memory (HBM) for incredible 4K gaming performance. There was nothing like it ever seen before, and today, it remains in a class of its own, delivering smooth, true-to-life, premium 4K and VR gaming in a small form factor PC.
At a peak power of 175W and in a 6-inch form factor, it drives levels of performance that are on par with larger, more power-hungry GPUs from competitors, and blows away Mini ITX competitors with up to 30 percent better performance than the GTX 970 Mini ITX.
As of today, 11 January, this small card will have an even bigger impact on gamers around the world as AMD announces a change in the AMD Radeon™ R9 Nano graphics card’s SEP from $649 to $499. At the new price, the AMD Radeon™ R9 Nano graphics card will be more accessible than ever before, delivering incredible performance and leading technologies, with unbelievable efficiency in an astoundingly small form factor that puts it in a class all of its own."
The R9 Nano (reviewed here) was, to the team at PC Perspective, the most interesting GPU released in 2015. It was a compelling product for its tiny size, great performance, and high power efficiency, but the dialogue here probably mirrored that of a lot of potential buyers: for the price of a Fury X, did it make sense to buy the Nano? It all came down to need, and very few enclosures on the market fail to support a full-length GPU, as we discovered when testing out small R9 Nano builds.
Now that the price is moving down $150, it becomes an easier choice: $499 buys you a full R9 Fury X core for $150 less. The performance of a Fury X is only a few percentage points higher than the slightly lower-clocked Nano, so you're now getting most of the way there for much less. We have seen some R9 Fury X cards selling for $599, but even at $100 more, would you buy the Fury X over a Nano? If nothing else, the lower price makes the conversation a lot more interesting.
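For readers who want to run the numbers themselves, here is a quick back-of-the-envelope price/performance sketch in Python. The relative-performance figure is a rough assumption based on the "few percentage points" gap described above, not a benchmark result:

```python
# Dollars per unit of Fury X-relative performance: lower is better value.
# The 0.95 relative-performance number is an illustrative assumption.
cards = {
    "R9 Nano (new $499 SEP)":  (499, 0.95),
    "R9 Fury X (street)":      (599, 1.00),
    "R9 Nano (old $649 MSRP)": (649, 0.95),
}

for name, (price, perf) in cards.items():
    print(f"{name}: ${price / perf:.0f} per performance unit")
```

Under that assumption the Nano at $499 becomes the best value of the three, which is exactly why the price cut changes the conversation.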
In before Arbiter arrives to remind us all how much better Nvidia is.
I love the community that has developed in these comments. 🙂
So Mr. Shrout, question: at the end of the day, does the Fury's RAM bandwidth make up for or exceed the performance of cards with more gigs of lower-speed RAM, or is it still "more RAM is more betterer"?
Or that JHH is up to his old CES tricks again! Oh, look at that stage prop JHH is holding; it's 2009 all over again!
This is what the price should have been to begin with. AMD always playing catch up.
I wonder how this will affect NVIDIA’s prices. It’s great to see some competition in the GPU market…
Very nice, I’m really interested in buying one now for my mini ITX build!
AMD dropping prices again, as it’s the only way they can try to compete. Really shows how far behind their architecture and drivers are compared to Nvidia’s.
Also, we shouldn’t believe this price drop until it is tested and confirmed by independent reviewers.
Or maybe it's because the NEW GPU generation is incoming from AMD, and older technology always comes down in price! At least AMD has fully-in-hardware Asynchronous Compute in the Nano, while Nvidia may not even fix its lack of fully-in-hardware Asynchronous Compute with Pascal!
DX12 and Vulkan are going to run better with AMD's fully-in-hardware Asynchronous Compute, which is better able to keep AMD's GPU core execution resources utilized. VR gaming will place great demands on a GPU's Asynchronous Compute resources, so they need to be fully implemented in the GPU's hardware to keep up with demanding VR games. Nvidia has to implement its GPU processing thread dispatch and management in software, while AMD has its GPU processing thread dispatch and management implemented fully in hardware, for a much faster response to changing GPU loads and thread context switching.
Nvidia will waste available GPU execution resources while they sit idle, waiting for Nvidia's inefficient software-based GPU processing thread dispatch and management to react, and software-based GPU processing thread dispatch and management will never be as fast as GPU, or CPU, processing thread dispatch and management implemented fully in the processor's hardware.
P.S. Reviews are not needed for a price drop, as the Nano's hardware is the same; just check the part at retailers to see if it's available at the new lower price.
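The scheduler-latency claim in the comment above can be illustrated with a toy model. The cycle counts below are invented for illustration and do not describe any real GPU:

```python
# Toy model: when a running task stalls, the scheduler needs
# `dispatch_latency` cycles to switch in ready work from another queue;
# until it reacts, execution units sit idle.
def utilization(work_cycles, stall_every, dispatch_latency):
    """Fraction of total cycles spent doing useful work."""
    stalls = work_cycles // stall_every
    idle = stalls * dispatch_latency  # idle time waiting on the scheduler
    return work_cycles / (work_cycles + idle)

# Hypothetical reaction times: ~1 cycle for a hardware scheduler versus
# tens of cycles for a software-assisted path (assumed, not measured).
hw = utilization(work_cycles=10_000, stall_every=100, dispatch_latency=1)
sw = utilization(work_cycles=10_000, stall_every=100, dispatch_latency=50)
print(f"hardware dispatch: {hw:.1%} of cycles busy")
print(f"software dispatch: {sw:.1%} of cycles busy")
```

Whether the real gap is anywhere near this large is exactly what the thread is arguing about; the model only shows why reaction latency matters at all.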
Hey, how much do you know about hardware GPU processing thread dispatch and management? Can you tell us some more about hardware GPU processing thread dispatch and management versus software GPU processing thread dispatch and management, versus hardware CPU processing thread dispatch and management and software CPU processing thread dispatch and management?
I’m really interested in learning more about processing thread dispatch and management.
now you're just being a dick for not knowing how hardware GPU processing thread dispatch and management works… 😀
but he's got a point: AMD has an advantage on DX12 and Vulkan, IF the game is developed on them. Until now Vulkan has 0, and Microsoft screwed everyone with their PC gaming crap; still no DX12 games, and the future games developed on a new API can be counted on the fingers of my hands. I still wonder how AMD managed to do what they did with Mantle.
so not only does the game need to be on a new API to utilize async, the workload needs to be high, because at lower workloads Nvidia is even faster. And I don't see many demanding VR games, maybe one, Robinson: The Journey, using CryEngine, and even then I'm not sure if it's DX11 or a new API.
"AMD got an advantage on DX12 and vulcan"
Show me the advantage. I don’t see anything right now. The early DX12 benchmarks just show how bad AMD was with DX11.
Nvidia, like AMD, is still working on DX12 optimization.
you can read these 2 articles:
1. An interview with Oxide Games, who are working on the DX12 game Ashes of the Singularity: http://wccftech.com/async-shaders-give-amd-big-advantage-dx12-performance-oxide-games/
2. A follow-up investigation of what the first article started, by ExtremeTech: http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far
And btw, yes, one of the reasons why Nvidia cards perform better than AMD: the latter has more pipelines (up to 8) to allow simultaneous work at high loads, but the pipes are narrow, while Nvidia has only 2 pipelines, but wider ones. And since DX11 is a crappy API in a lot of things like multithreading, Nvidia often gets the upper hand with easier optimization. My guess is that's one of the reasons AMD started Mantle; DX11 was going nowhere, and I bet DX12 will be the same, just a feeling of déjà vu. PC gaming went out the window when the Windows Store failed, so now I don't think Microsoft gives a crap about PC gaming or evolving APIs; unless they see a threat somewhere to their monopoly they won't bother.
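The "many narrow queues versus few wide queues" point above can be sketched with a toy calculation. The queue counts echo the commenter's 8-versus-2 figures, but the widths and the stall model are made up for illustration:

```python
# Toy model: a stalled stream of work blocks its whole queue. With many
# narrow queues that is a small slice of total issue width; with a few
# wide queues it is a large one. Real hardware is far more complicated.
def remaining_issue_width(queues, width_per_queue, stalled_streams):
    total = queues * width_per_queue
    blocked = min(stalled_streams, queues) * width_per_queue
    return (total - blocked) / total

# One stalled stream out of equal total width (8x1 vs 2x4, both width 8):
print(remaining_issue_width(queues=8, width_per_queue=1, stalled_streams=1))
print(remaining_issue_width(queues=2, width_per_queue=4, stalled_streams=1))
```

With the same total width, the 8-queue layout keeps 87.5% of its issue width available after one stall, while the 2-queue layout keeps only 50%, which is the shape of the argument being made.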
Simultaneous multithreading: it's done on the CPU, and in the GPU hardware on AMD's ACE units! (1)
From ACM Queue, a professional trade journal:
“GPUs: A Closer Look
As the line between GPUs and CPUs begins to blur, it’s important to understand what makes GPUs tick.
KAYVON FATAHALIAN and MIKE HOUSTON, STANFORD UNIVERSITY”
Asynchronous Shaders White Paper.
I don’t understand. Are you even the same person? You didn’t say a single thing about hardware GPU processing thread dispatch and management. Every time you post you talk about hardware GPU processing thread dispatch and management. You talk about it so much it makes me wonder if you just copy the phrase hardware GPU processing thread dispatch and management and then paste hardware GPU processing thread dispatch and management anytime you want to talk about how much better AMD is at hardware GPU processing thread dispatch and management and how Nvidia does their GPU processing thread dispatch and management in software.
So I wanted to know more about hardware GPU processing thread dispatch and management. Because you’re clearly the expert on hardware GPU processing thread dispatch and management. But you didn’t talk about hardware GPU processing thread dispatch and management at all. You just posted links, and excerpts from those links that don’t talk about hardware GPU processing thread dispatch and management.
Please, tell me more about hardware GPU processing thread dispatch and management.
I'm having a hardware GPU processing thread dispatch and management headache.
You need to ask your SMOM to help you with a headache pill, but it's bad for the gene pool for your Daddy to procreate with his daughter to have you, and that makes your sister your mom, or SMOM as it is known! You Game-Necks are a threat to civilization!
Wow, look at you. Just falling apart in your anger. Can’t make logical arguments so you resort to insults. Poor child.
Go back to talking about hardware GPU processing thread dispatch and management.
You keep using that word, but it doesn’t mean what you think it means.
Maybe bring it up again when there is a single worthwhile application that uses it – other than a benchmark made in partnership with AMD.
No, I know what it means! And why did Nvidia hire away AMD's top HSA person? Nvidia knows it will not be able to gimp the hardware that will be running any VR games, but the Green team is behind with the proper hardware-based resources! Also read this:
“AMD Dives Deep On Asynchronous Shading”
Did Nvidia hire away AMD's top HSA person so that they could do hardware GPU processing thread dispatch and management? Because it would be great if Nvidia could do hardware GPU processing thread dispatch and management instead of software GPU processing thread dispatch and management.
That AT article you linked – do they talk about hardware GPU processing thread dispatch and management?
You are very angry that Nvidia does not have GPU processor thread dispatch and management fully in its GPU's hardware, and Nvidia GPUs are going to have some seriously idle GPU execution resources while waiting for things to be done in software!
Just imagine Intel's hyper-threading (SMT) only managed in software and see what a train wreck that would be. Those fully-in-hardware SMT scheduling units on Intel's CPU cores can manage/dispatch many instructions in the time it takes to fetch one instruction from memory, so a whole hyper-threading (SMT) thread-management-in-software solution would not be workable, and the same goes for SMT on a GPU's cores, as the hardware method is much more responsive than any software-based Simultaneous Multi-Threading (SMT) dispatch/management scheme! Those SMT thread dispatch and scheduling mechanisms are implemented in hardware blocks clocked in multiples of the processor's core clock speed on Intel's CPUs, so doing SMT in software would not work to keep the execution pipelines fully loaded; things operate on the processor's core so quickly that the only way to manage a processor's hardware threads is in the CPU core's hardware!
Now take Intel's hyper-threading (SMT) and expand that many times, to hundreds/thousands of threads operating on the GPU, and things are going to be even more problematic for any GPU cores trying to manage processing threads in software, even the partially-managed-in-software processor threads on Nvidia's current GPU SKUs. So SMT is an example of asynchronous compute, and the hardware units that manage the SMT for a CPU's cores need to be fast, to keep those instructions fed into the CPU core's execution pipelines for one or more processing threads that can be managed by an SMT-enabled CPU (2 processor threads per core on Intel's consumer SKUs).
GPUs are going to need their many more processor threads managed in hardware, with the GPU's thread-management hardware able to keep the GPU's thousands of FP/INT/other execution units fed with instructions SMT-style, or execution resources will sit idle. The very reason hyper-threading (SMT) works so well to keep a CPU core's execution resources loaded and working efficiently is that if one processor thread stalls for a memory fetch, the other processing thread can quickly be started up and its instruction queue managed and dispatched while the first thread waits for a memory or other dependency to be resolved (memory fetches, waiting for intermediate results, and so on). None of Intel's hyper-threading (SMT) thread dispatch/management is done in software; that is too slow, it's all done fully in hardware. The same sorts of thread-management-in-hardware rules apply for a GPU's execution units, especially for the DX12/Vulkan APIs and VR gaming that needs lots more processing done to maintain the appearance of reality.
“That AT article you linked – do they talk about hardware GPU processing thread dispatch and management?” Sure it did; you just don't understand computing hardware concepts, CPU or GPU, and that's your problem!
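The latency-hiding argument in the long reply above can be put into rough numbers with a toy two-thread model. All cycle counts here are arbitrary, and a real core's behavior is far more complex:

```python
# One thread: every memory miss idles the core for the full latency.
# Two SMT threads: while one waits, the other's ready instructions issue,
# so only the latency that compute cannot cover is exposed as idle time.
COMPUTE = 10   # cycles of useful work between misses (arbitrary)
MISS = 30      # memory fetch latency in cycles (arbitrary)
BURSTS = 100   # work bursts per thread

useful = 2 * BURSTS * COMPUTE               # total useful cycles, two threads
serial = 2 * BURSTS * (COMPUTE + MISS)      # no SMT: run them back to back
exposed = max(0, MISS - COMPUTE)            # latency one thread cannot hide
smt = 2 * BURSTS * COMPUTE + BURSTS * exposed

print(f"without SMT: {useful / serial:.0%} utilization")
print(f"with SMT:    {useful / smt:.0%} utilization")
```

With these numbers, interleaving two threads doubles utilization from 25% to 50%; a GPU applies the same idea across far more threads to keep its execution units busy.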
You really, really can’t tell when you’re being mocked, can you?
I’ll make it easy then. I’m seriously making fun of you. Like, laughing my ass off mocking you.
Yes I can, just by your repeating the “GPU processing thread dispatch and management.” So yes I see that you are a Game-Neck and like those other -Necks your lack of understanding of technology and everything else technology related makes you angry. You would rather have things simple like sports, but that’s not how things work in the high tech industry. You -Necks with your Waffen SS mentality are what makes gaming and gamers look so bad, that and the knuckle draggers that think GPU/CPU makers are sports teams really brings things down lower than whale poop for computer gaming!
Oh and fix in the long Annoying(Intentionally) reply’s spelling of Simulations Multi-Threading(SMT), it should be Simultaneous Multi-Threading(SMT), damn LibreOffice spell checker! LibreOffice you folks need to devote more time to your dictionary and fixing the glaring omissions to its database. And that includes adding the prefix multi to the dictionary, and the proper ability to spell check plurals of English words!
lol I’m not angry. I’m actually quite amused and entertained. You, however, are clearly melting down. Maybe you need to get back to talking about GPU processing thread dispatch and management.
You missed a prime opportunity to say “Mantling down” 🙁
I didn’t think he’d get it.
They should have priced down the whole Fury lineup; now it just doesn't make sense. But still good: the 390 rules the 970 segment, and now the Nano rules the 980 segment.
I hope for AMD's sake this shows up in sales, because they have great products in that range.
Have to admit, at this price it is almost a better choice than either Fury… and almost better than the Fury X…
Hope Nvidia responds with a nice price cut.
If the reports of the price cut are correct, AMD management have shown why they are totally incompetent yet again. They should have dropped the Fury X to $600, used the full 4096-stream-processor Fiji chip in the Fury priced at $550, and then put the cut-down Fiji chip in the Nano and priced that at $500.
A $150 cut is simply not enough. Still not quite there yet. Even though I've already bought one Nano for my fresh ITX build, I won't be buying another until the price drops to $280 TOPS. I seriously believe that's the actual evaluation price for this piece of hardware, no less or more than just that. The Nano should be priced at exactly $280; anything above that is simply overpriced relative to its actual value.
Goodness, you’re dumb.
You want a $280 R9 Nano, and I’M the scrub?
Go away, loser. Grownups are talking.
AMD has got to recoup its engineering costs for HBM and other improvements, plus the costs of manufacturing, so the price you want may be a little too low; it's going to take more than one HBM-based SKU from AMD for AMD and its HBM partner to recoup HBM's total development costs. Better driver support and maintenance of AMD's driver software stack is costing more also, so the price after the $150 reduction is just about right for AMD to stay in business and keep the innovations coming. The Fury line of SKUs is about to be replaced by the Polaris-based lines on the 14nm/16nm process node, but AMD has invested millions in HBM's R&D, and those costs have to be paid for with higher-cost SKUs for those that want the HBM memory technology.
I see AMD maybe letting the aftermarket GPU makers bump up the Nano's wattage/other features as the newer Polaris parts start coming online, to give users of Nano-based mini PC builds more value for the money and better Nano performance. The Nano being the same hardware as the Fury X will give AMD a lot of room to adjust the Nano's performance to compete with Nvidia's SKUs at a better price/performance metric than Nvidia can offer with any comparable competing product.
It's not low, since I'm taking into consideration further cuts. I'm not saying they should have priced it like this right from the get-go; I'm saying that AMD shouldn't stop at one price cut, and should keep slashing the price until it stops at $280. It doesn't mean this should happen right away; quite frankly I'll be fine if the Nano gets to that price point within two or three years from now.
I could see it if the Nano was a limited edition with maybe a thousand units made. But a full-height, double-wide card costing $200 more than a GTX 970? That's insanity.
At best this card should be $400, not $500.
BTW, you can see AMD's problem: this card has almost no reviews on sites like Newegg, and it's always in stock.
The AMD Nano release was/is a total bust…
AMD seems to have 'fired' the wrong people; no one at AMD seems to understand the GPU market dynamics and pricing.
Hey Ryan, is this good for rendering, like in the Octane renderer? I'm thinking of buying 4 of these and using them for rendering.
One thing pisses me off about the Nano: AMD not letting other manufacturers like Sapphire make a custom one. And all the Fury (Pro) cards have super-big coolers, too big for the PCB, and I guess this is forced on them by AMD. Yes, we understand that AMD made a small-form-factor card; that doesn't mean you have to make all the other offerings too big to show the contrast. I think this is an extremely poor choice by AMD. Yes, their reference cooling solution isn't bad, but it isn't great either; why force people to pick average? This stupid policy I would have expected from a competitor, but not AMD. It's so out of character and just plain stupid; I hope they change that.
I, too, would love to see some good custom Nano cards. I think the AIB’s could do some pretty cool stuff with them. But if you’ll allow me, I’m going to play devil’s advocate for a minute. Because I think I understand why it is they did it that way, and it’s entirely due to what AMD intended with the three Fiji cards.
The Fury X is also a reference-only card. Full-on Fiji chip, watercooled, power-hungry. It wasn’t designed to run cool or be efficient. It was supposed to be brute-force powerful, and the closed-loop cooler was a signature part of that. AMD didn’t want the board partners to toss the watercooling and slap on a huge air cooler, the whole point of the watercooling was to allow “dream” overclocks. (Whether they were successful in this or not doesn’t matter, it’s what they were trying to do.)
The Fury (Pro), on the other hand, is what AMD gave to the board partners. No reference, guys, do whatever you want. This was the slightly-cut-down Fiji, still running at high clocks and high power. It wasn’t designed to be power efficient, either. It was supposed to be just-slightly-less-than the Fury X but still badass – and the AIB’s can customize them. XFX took two easy routes – they used the reference Fury X PCB on both, put a 3-fan air cooler on one and a Fury X watercooler on the other. Sapphire also used the reference Fury X PCB and a 3-fan air cooler. ASUS and Gigabyte actually made full-on custom PCBs for their 3-fan Fury cards. (Sapphire also later came out with a custom PCB, slightly longer than the reference but still much shorter than the ASUS and Gigabyte cards.)
But the Nano was different – it was supposed to show that the full-on Fiji chip could also be quiet and ITX-compact. While they basically had to throttle it back based on temperature to keep it from overheating, they managed to make it work, and pretty successfully. Here, again, the reference design is the signature feature – that short PCB and single-fan cooler are what make it the Nano. AMD doesn't want the AIB partners to build longer-but-still-ITX-friendly PCBs and dual-fan coolers and whatnot. They need that little heatsink and fan on it.
The really cynical way to see it would be that Fiji was a proof-of-concept for Polaris or something, and the Fury X and the Nano were just flagship concepts.
The more optimistic way to see it would be that there *is* going to be a Nano-equivalent in the Polaris/Arctic Islands/400-series/whatever it is lineup that AMD will be releasing this year, and it’s probably more likely to have custom AIB variants, now that the concept is proved.
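The temperature/power-capped throttling described for the Nano above can be sketched as a simple control loop. Every constant below is made up for illustration, and AMD's actual PowerTune logic is far more sophisticated:

```python
# Step the core clock down until estimated board power fits under the cap.
POWER_CAP_W = 175   # the Nano's stated peak board power
BOOST_MHZ = 1000    # the Nano's advertised "up to" clock
STEP_MHZ = 25       # hypothetical throttle step size

def estimated_power(clock_mhz):
    # Crude linear power model; the 185 W full-boost figure is invented.
    # Real silicon scales worse than linearly once voltage rises with clock.
    return 185 * clock_mhz / BOOST_MHZ

clock = BOOST_MHZ
while estimated_power(clock) > POWER_CAP_W:
    clock -= STEP_MHZ   # throttle one step, then re-check the cap
print(f"settled at {clock} MHz, ~{estimated_power(clock):.0f} W")
```

The point of the sketch is only that the clock, not the cooler, is the variable being adjusted, which is why the reference heatsink and fan are integral to the Nano concept.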