During today's 2015 AMD Financial Analyst Day, CEO Dr. Lisa Su discussed some of the details of the upcoming enthusiast Radeon graphics product. Though it wasn't given a name, she repeatedly said that the product would be announced "in the coming weeks…at upcoming industry events."
You won't find specifications here, but understanding the goals and targets that AMD has for this new flagship product will help tell the story of this new Radeon product. Dr. Su sees AMD investing at very specific inflection points, the most recent of which are DirectX 12, 4K displays and VR technology. With adoption of HBM (high bandwidth memory) that sits on-die with the GPU, rather than across a physical PCB, we will see both a reduction in power consumption and a significant increase in GPU memory bandwidth.
HBM will accelerate the performance improvements at those key inflection points Dr. Su mentioned. Additional memory bandwidth will help discrete GPUs push out 4K resolutions and beyond, no longer limited by texture sizes. AMD's LiquidVR software, in conjunction with HBM, will be able to improve latency and reduce performance concerns on current and future generations of virtual reality hardware.
One interesting comment made during the conference was that HBM would enable new form factors for GPUs now that you no longer need to have memory spread out on a PCB. While there isn't much room in the add-in card market for differentiation, in the mobile space that could mean some very interesting things for higher performance gaming notebooks.
Mark Papermaster, AMD CTO, said earlier in the conference that HBM would aid performance but, maybe more importantly, will lower power and improve total GPU efficiency. HBM will offer more than 3x improved performance/watt compared to GDDR5 while also drawing more than 50% less power than GDDR5. Lower power and higher performance upgrades don't happen often, so I am really excited to see what AMD does with it.
There weren't any more details on the next flagship Radeon GPU but it doesn't look like we'll have to wait much longer.
Release it now AMD, want to benchmark it so bad, and want Zen in early 2016 please AMD
Just to temper the >3x perf/watt slide quote so there isn’t a mass wave of misconceptions….
Not going to be anywhere close to those kinds of actual performance increases.
Performance = Bandwidth
GDDR5 on Hawaii = 320GB/s
HBM is quoted at = 1024GB/s
So the ratio is 1024/320 = 3.2x the bandwidth.
So even being conservative and saying power is the same (watt), you are looking at 3x perf/w in terms of bandwidth.
However, this will not translate to anything close to this in real world frame rate performance.
Still will be interested to see how much power consumption is reduced by HBM and how the parts fare overall!
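That bandwidth-as-performance arithmetic can be sketched quickly. The 320GB/s (Hawaii) and 1024GB/s (HBM) figures are from the comment above; the 30 W budget is just a placeholder standing in for the "same watts" assumption:

```python
# Rough perf/watt comparison using memory bandwidth as the performance proxy,
# per the comment's conservative assumption that power draw is equal.
def perf_per_watt(bandwidth_gbs: float, watts: float) -> float:
    """Treat bandwidth (GB/s) per watt as the 'performance/watt' figure."""
    return bandwidth_gbs / watts

GDDR5_BW = 320.0   # GB/s, Hawaii (R9 290X) memory bandwidth
HBM_BW = 1024.0    # GB/s, figure quoted in the comment
WATTS = 30.0       # hypothetical equal power budget for both memory systems

ratio = perf_per_watt(HBM_BW, WATTS) / perf_per_watt(GDDR5_BW, WATTS)
print(f"{ratio:.1f}x perf/watt on a bandwidth basis")  # 3.2x
```

With equal power assumed, the perf/watt ratio collapses to the plain bandwidth ratio, which is the commenter's point: it is a bandwidth figure, not a frame-rate figure.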
nah fuck power saving, i want MORE POWER! i wanna pump so much juice through that card it should cook bacon
and you didn’t buy an R9 295X2 already because?
im poor, i can’t afford an R9 295X2; if i could i would
This new HBM product probably isn’t going to be cheap either. They may have lower-cost versions later, but I am not sure how much lower. Adding an interposer increases assembly complexity, which could increase price.
Since when has top-of-the-line technology been cheap? This is why HBM is only going to be in the R9 390, if rumors are true.
It might have a bit of an advantage in games with big open worlds like GTA and Watch Dogs, where fast texture swapping seems to play an important role. Thinking that it might help smooth out fps and minimize texture pop-in in conjunction with a maxed-out draw distance. If nothing else, it’s wishful thinking 😉
I think the answers are all right there. 3X the perf per watt of GDDR5, and a 50% power reduction vs GDDR5. So it sounds like about 1.5 times as fast as GDDR5.
If a GDDR5 board used 10 watts and got 10GB/s, at 10 watts HBM could do 30GB/s. However, since it’s also a 50% power reduction, I think you’re looking at a 5 watt HBM board competing with a 10 watt GDDR5 board. So there’d be 15GB/s HBM vs 10GB/s GDDR5. Not an overall earth-shattering performance increase, but still better.
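Worked through in code, using the comment's illustrative 10 watt / 10GB/s baseline (the >3x perf/watt and 50% power figures are the slide claims; everything else is the commenter's hypothetical):

```python
# Combine the two slide claims: 3x perf/watt AND 50% lower power.
# The 10 W / 10 GB/s GDDR5 baseline is purely illustrative.
GDDR5_WATTS, GDDR5_BW = 10.0, 10.0   # hypothetical baseline board
PERF_PER_WATT_GAIN = 3.0             # ">3x perf/watt" slide claim
POWER_RATIO = 0.5                    # "50% lower power" slide claim

hbm_watts = GDDR5_WATTS * POWER_RATIO                                # 5 W
gddr5_bw_per_watt = GDDR5_BW / GDDR5_WATTS                           # 1 GB/s per W
hbm_bw = gddr5_bw_per_watt * PERF_PER_WATT_GAIN * hbm_watts          # 15 GB/s
print(f"HBM: {hbm_bw:.0f} GB/s at {hbm_watts:.0f} W "
      f"vs GDDR5: {GDDR5_BW:.0f} GB/s at {GDDR5_WATTS:.0f} W")
```

The 3x perf/watt gain and the halved power budget multiply out to 1.5x the delivered bandwidth, which is where the "about 1.5 times as fast" reading comes from.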
This is how I read it as well. Expect 50% improvement in memory BW at half the power. The former is good, the latter is pretty *meh*. Especially when you further temper it with the fact that the GPU will share heat with the memory where the memory used to have its own cooling.
Well, currently memory doesn’t truly have its own cooling, but it’s away from the GPU and doesn’t really interfere with it thermally.
If that is what it is, the problem is that the way AMD words it is very deceptive. Which isn’t outside the norm for AMD and their marketing department.
All marketing is deceptive, from every marketing person that ever uttered a single word about any product. Why do you think there is such a market for paid technology consultants, or technology websites behind expensive paywalls with highly paid engineer/technology writers?
Marketing is there to fool the fools, and any company with an expensive potential investment in computing hardware is going to hire an expert, PhD-level consultant to look things over and disregard any marketing garbage.
YOU know that, but you have an agenda. And how about that Nvidia marketing department? Just as disgusting! But you are one of those brand fanatics that feigns disgust at your hated competitor’s marketing while not finding any fault with your own brand’s marketing shysters. Both AMD’s and Nvidia’s marketing is deceptive, as is all marketing! Get over your pathological love for a brand and get some of the real information that is out there; it can be obtained for free at any college library that subscribes to the professional journals, and most libraries have bound copies of those journals, or subscription access to their online articles from the library’s terminals.
Marketing is all about fooling the slackjaws and knuckle draggers.
LOL, you make me laugh. Nvidia is just as bad, if not worse, about deceptive marketing. I like how you conveniently left that part out. 😉
So AMD has no mention of project SkyBridge on its new roadmap, and AMD has gotten out of the server OEM business with the mothballing of its SeaMicro business. Some commenters on other websites are assuming that just because AMD has closed SeaMicro to new customers and is winding down its SeaMicro OEM division, AMD cannot sell mainboards and custom ARMv8 and x86 server SKUs to third-party OEMs who are still in the dense server and other server markets.
AMD has not canceled K12; K12 is very much alive as a custom ARMv8-A microarchitecture that can be sold to those same third-party OEMs that remain in the dense server market, ARM and x86 based. AMD’s K12 will borrow heavily from the Zen x86 microarchitecture’s feature set, so maybe it will be the first custom ARMv8-based microarchitecture with SMT capabilities, giving the OEMs that choose K12-based CPUs/APUs an advantage that even Apple has yet to add to its A-series Cyclone microarchitecture; as of yet there is no indication that Apple’s A9 will support SMT.
Remember, AMD still owns all of the SeaMicro IP, including the very fast and fully coherent Freedom Fabric, so expect third-party server OEMs to still benefit from much of the SeaMicro IP that AMD will be baking into its server mainboards and server CPUs/APUs of both the x86 and custom ARMv8 variety. AMD may no longer be a server OEM, but that does not stop AMD from providing server products based on AMD IP to the OEM market.
I was hoping for a little more information on K12. AMD may not be interested in the low-end tablet market, but with Zen and K12, AMD should have SKUs for high-end Windows-based tablets (Zen) and high-performance Android/Linux-based tablets (K12), a high-end custom ARMv8-A SKU that tablet OEMs can use to compete with Apple’s A-series iPads. AMD does need to get its graphics into the ARM-based tablet market, so a custom K12 would give Apple something to worry about there, while Zen-based tablet APUs compete with Intel’s Core i/m series in the Windows-based tablet market.
There is no reason to believe that the valuable SeaMicro IP cannot or will not be used by AMD to make products for the third-party OEM server market, and AMD’s management has been saying that AMD will be focusing on its core business model. AMD may be out as a server OEM, but it is in no way out as a supplier of server parts, dense or high-performance, to the third-party OEM server makers.
Well, it’s always awkward to be both a component supplier and a complete product manufacturer. You end up competing with your own customers. I can see wanting to get out of the complete server market, since any real success there would be a source of friction with the OEMs.
It was more the lack of any revenue/profits that made AMD shutter its SeaMicro division than any seriously competitive market share against the other server OEMs, and AMD still has SeaMicro customers to support until they can be moved over to an alternative OEM. AMD did not have enough cash flow to pump into SeaMicro, or a compelling server CPU product to keep SeaMicro on a growth curve toward better profitability. Zen/K12 may change the picture, but Zen is still a ways off and K12 even further, so it was better to cut the losses by getting out of being a server OEM.
AMD will probably do fine with Zen and K12, and the OEMs will certainly be looking at the customer base that cannot afford Intel’s high-priced products. If Zen can get into Haswell’s ballpark on IPC with more affordable pricing, like AMD had with its Opterons in the past, then AMD will be much better off financially. AMD’s CTO is already mentioning a next-generation Zen+ with even more IPC than the first-generation Zen microarchitecture, with the added benefit of HBM for future Zen CPUs/APUs, so the entire AMD server line looks set to become more competitive, with additional IPC gains to close the gap with Intel’s product line or maybe even reach parity. Either way, AMD will have to price to compete and hope Intel cannot drastically cut prices over too long a time frame. AMD’s HPC-accelerated APUs with Greenland GPUs will probably give Intel some headaches in the workstation/server market for big number crunching at an affordable price for those who need all the teraflops they can get for their money.
Did you see K12 slipped to 2017? (and this is AMD, two years out may as well be three or four)
Also, this market for “High performance Android/Linux based tablets” you keep dreaming about simply doesn’t exist.
This is off topic in a thread about future GPU releases.
In team games, when a certain team is favored by the system, the opponent will not only have to beat that team on the field, but also the referees. Let’s hope that AMD will have something good enough to not only be competitive, but also beat the press that will do everything to find something negative and try to concentrate readers’ attention on it.
Believe me, after damn near a decade of performance dominance by Intel, pretty much all of the press wants AMD to have a highly competitive and successful part. AMD was still pretty competitive through the Phenom II years, but Bulldozer has been damn near a disaster. Having a healthy competition between those two leads to a lot more reviews and stories and interest. Having a strong AMD is in the best interest of consumers and reviewers alike.
Am I reading the comments for the wrong article? I thought this was about GPUs. AMD has been doing pretty well in that field. They currently own the $/performance crown in every market segment.
AMD had 35% of the discrete GPU market as of the 4th quarter of 2013. One year later it was 25% and they underwent massive layoffs. This is despite cutting prices to the point where they are selling products below cost and losing money… $180 million in the 1st quarter of this year.
Owning the price/performance crown doesn’t mean that they do well as a business.
Used to own the 280X and it gave me artifacting and graphical glitches from day 1; went through 2 replacements and the issue still persisted.
Swapped to a GTX780 and couldn’t be happier, now I’m running on GTX980 and I don’t see myself changing back to AMD anytime soon or ever.
I’m really interested to see the GPU landscape this summer. I’m itching to upgrade, hoping to find a sweet spot in the $200-250 range for a 1080p build. I realize that AMD probably won’t have anything new in that price range for a long time..
Likely anything in that range will be rebranded last-gen silicon.
if only there could be a new FX CPU to go with it 🙁
One more year, sometime in 2016. Let’s just hope they can live up to the long-built hype, unlike the failure that was Bulldozer.
I still remember before Bulldozer’s debut, AMD claimed the flagship would be 50% faster than the Core i7 (Nehalem microarchitecture).
What a claim.
40% IPC? This is a kinda unrealistic claim. Does AMD actually have a product ready before making this claim?
The last time AMD made such a claim, we got the Bulldozer disappointment.
And for the HBM?
The big problem? The GCN design doesn’t scale properly with high memory bandwidth. Not to mention GCN consumes quite a lot of power.
I can see AMD is trying to reduce VRAM power consumption so that the power saving can be used to add more GCN cores.
Can’t be sure this will work since GCN chips run quite hot.
Can you please back up the statement about GCN not scaling properly with high memory bandwidth? Furthermore, the biggest chip in the Nvidia 700 series (GK110) consumes almost as much power as, for example, Hawaii. People tend to look at which runs hotter and point to AMD’s reference design of the R9 290X, which basically has an insufficient cooler.
Hmm…..
http://www.eteknix.com/memory-scaling-amd-kaveri-a10-7850k-apu/7/
https://pcper.com/reviews/Memory/Ultra-Speed-DDR3-Revisited-AMD-APU-Memory-Scaling/Graphics-Benchmarks
It will be easier if I select an AMD APU to explain.
DDR3 speed scaling will do for my explanation of why I said GCN can’t scale well. (After all, it doesn’t matter what type of memory, right? We just want to check the impact of memory scaling.)
In the eteknix and PCPer articles themselves, you can notice the diminishing return for every increase in frequency.
Some discrete GPU reviews have also shown this memory-scaling diminishing return.
*However, this is not an apples-to-apples comparison. I admit the APU memory controller might be different from a discrete GPU memory controller, so my assumption might be wrong.
*I notice AMD’s slides and presentations keep mentioning HBM’s high bandwidth and low power consumption, but there isn’t any concrete info on what the high bandwidth’s impact on GCN is.
One slide mentions >3X performance/watt compared to GDDR5 with 50% power saving vs GDDR5. Seems to me GCN hits the wall of diminishing returns faster than I expected.
As for the cooling issue… I was referring to AIB aftermarket designs. Even with aftermarket cooling solutions, GCN is still hotter than Kepler (flagship product comparison).
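One simple way to see why frame rates show diminishing returns as memory speeds up is an Amdahl-style model; this is an illustration under an assumed memory-bound fraction (the 0.3 is hypothetical, not a figure from the linked articles):

```python
# Amdahl-style sketch: if only a fraction f of frame time is
# memory-bandwidth-bound, speeding memory up by a factor s gives
# an overall speedup that flattens out well below s.
def frame_speedup(f: float, s: float) -> float:
    """Overall speedup when only fraction f of the work scales with memory speed s."""
    return 1.0 / ((1.0 - f) + f / s)

MEM_BOUND_FRACTION = 0.3  # hypothetical share of frame time limited by bandwidth
for s in (1.5, 2.0, 3.2):
    print(f"{s}x bandwidth -> {frame_speedup(MEM_BOUND_FRACTION, s):.2f}x frame rate")
```

Even a 3.2x bandwidth jump lifts the modeled frame rate by well under 30% at f = 0.3, which matches the intuition that big bandwidth numbers don't translate directly into fps.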
On APUs the bandwidth is the limiting factor, so they shouldn’t be used as an example of why GCN allegedly scales worse with memory speed.
For the memory tech to show a true benefit, Nvidia will also have to use it.
Memory bandwidth, for most current games, does not have much of an impact, as they are not bandwidth-intensive enough to bottleneck the memory. But if there is a massive boost in performance, then you may see game developers change the way they use memory so that it uses additional bandwidth if that reduces some of the computational workload on the GPU.
The issue is that such a design cannot take place if it will mean that one half of the market will suffer massive memory bandwidth bottlenecks.
You can always have higher resolution textures for those with the bandwidth to use them.
“With adoption of HBM (high bandwidth memory) that sits on-die with the GPU, rather than across a physical PCB, […]”
To be pedantic, HBM isn’t on-die, which I assume that you know. It is closer to being on-die than PCB mounted memory, but it is not equivalent to on-die memory at all. Also, a “physical” PCB? What meaning is “physical” supposed to add here?
There is a good picture of it in the included slides, although I doubt it will look much like that. You do not want the memory sticking up above the gpu die.
“You do not want the memory sticking up above the gpu die.”
Well, actually, that’s what we’re moving to next.
Just don’t expect any driver updates.
The current driver was released before Xmas.
(Not counting betas.)
The most interesting thing I see here is “GPUs are just the start” and “Opportunities to extend across AMD product portfolio”. What does this mean?
It would definitely be interesting to have HBM used with a powerful APU/SoC for mobile solutions. You could have the performance of a dedicated GPU solution with “integrated” graphics; it would be closer to a GPU with an integrated CPU. Nvidia and Intel do not have direct competition for this. Nvidia can deliver an ARM core, not x86. Intel obviously has the CPU, but they do not have a good GPU yet.
It would also be interesting to have a socketed version of this. You would essentially have a giant L4 cache. The CPU portion of the APU would also have access to the HBM memory, although not all applications would benefit from this. HBM isn’t going to be lower latency than on-die caches, but it may be significantly lower than standard DDR3 system memory.
An APU with HBM would be similar to Intel’s Crystalwell part (128 MB on-package eDRAM L4). Intel does not want to sell this for anything but mobile. If Intel made a socketed version of it, it could eat into their high-end, large-cache Xeon parts, which cost several thousand dollars. There are server and workstation applications that could probably benefit from such a part due to the large L4, but Intel does not want to sell it for the price of a mobile CPU. A Xeon with >=30 MB of cache costs around $2000 or more; looking at the list of Xeons on Wikipedia, some are listed at close to $7000.
Really, it isn’t in Intel’s best interest to bring out HBM or HMC memory until they have a competitive GPU; they still lose to AMD IGPs in graphics applications. It seems like AMD is really driving the technology forward here with things like Mantle, HSA, and HBM, but in their position, I guess they have to. Intel is just profiting from the status quo. It would be nice if AMD could pull off a situation similar to what they had with the original Opteron. At the Opteron launch, AMD simply had the more advanced system architecture, with on-die memory controllers and a point-to-point processor interconnect, but it didn’t take long for Intel to catch up. Intel obviously has the money to put into GPU R&D, so I don’t expect the window of opportunity to be that big.