The rumours are flying today, with some purportedly leaked performance results of AMD’s upcoming Fiji XT based card, the Fury X. The leak at Videocardz shows 3DMark Fire Strike Ultra and Extreme results for an AMD Radeon Graphics Processor in a single card configuration, plus Crossfire results for Extreme only. The results show a card that can keep up with the Titan X, and by extension the new GTX 980 Ti as well. At 1440p in the Fire Strike Extreme benchmark the new AMD card seems to lag slightly behind NVIDIA in both single and dual GPU configurations, though not by much, while in the Ultra test at 4K the AMD GPU pulls ahead, likely thanks to the new HBM-1 memory.
They also claim to have a source who has run the new GPU through the CompuBench suite, which gives us more information about the general architecture. The tests show a card with 64 Compute Units, which translates into 4096 Stream Cores if it is designed similarly to current Radeons. The tests also confirm the 1050MHz core clock and, more interestingly, that the 4GB of HBM-1 will run at a 500MHz memory clock on a 4096-bit bus, which is good news for those who like their resolutions as high as they can go. Nothing is confirmed yet, but if these numbers are true they bode well for the new Radeon architecture.
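If those figures are accurate, the quoted memory clock and bus width work out to roughly 512GB/s of theoretical bandwidth, since HBM transfers data on both edges of the clock. A quick back-of-the-envelope check, treating the leaked 500MHz and 4096-bit numbers as assumptions rather than confirmed specs:

```python
# Rough peak-bandwidth estimate from the leaked figures (assumptions, not confirmed specs).
mem_clock_hz = 500e6        # reported HBM memory clock
bus_width_bits = 4096       # reported interface width (e.g. four 1024-bit stacks)
transfers_per_clock = 2     # HBM is double data rate

bandwidth_gb_s = mem_clock_hz * transfers_per_clock * bus_width_bits / 8 / 1e9
print(f"Theoretical peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~512 GB/s
```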
(Image credit: VideoCardz.com)
Isn’t that supposed to be HBM-1 and not HMB-1 memory?
taken from your article “…Ultra test at 4K the AMD GPU pulls ahead, likely thanks to the new HMB-1 memory.”
If this thing costs less than the 980 Ti then look out for a price war!
Funandjam, you really should get a life, dude, it’s a tech article. And yes, yes, I know I’m commenting on you, but damn it’s annoying to always see these schoolteachers correcting things. Tech articles are fun and interesting news, not a damn English course.
Fixed, and necessary. Spelling is important, kids.
costs less… NoOOoo way.. they just made new technology. HA cost less
There’s a difference: I wasn’t acting like a total douche-nozzle to Jeremy or anyone else by pointing it out. In fact, I don’t see any other way to easily let the writer and/or editor know of a ‘typo’ other than to comment on the article itself.
If you had taken the time to notice, I posed it as a question. That implies I am not totally sure of what it is supposed to be and since the tech writers will usually know more about what they are writing about than the people reading, I thought it was a fitting question because maybe he knew something that I didn’t and wanted him to share.
Also, it wasn’t a simple case of something like typing the word “teh” instead of “the”, where what it really should be is easily understood. We are talking about a typo in some brand new tech, and again I wanted him to clarify whether it was a typo or something else entirely. Turns out it was a typo, and Jeremy is usually really prompt and cool about fixing it (even if the word in question was a minor word like “the” instead of the very subject of the article itself, like “HBM”).
And lastly, Mr. Anonymous, perhaps it is you who should “get a life” instead of complaining about things you didn’t take enough time to fully understand.
We can get away with a lot of “error” in speech, but written words work better when the “rules” are followed as closely as possible.
And it certainly makes a difference in the naming and labeling of technical matters. As an example, someone writing “gb” instead of “GB” is simply wrong, as “b” means bit and “B” means byte. (A lower-case “g” has no meaning; it’s just sloppy.) Encouraging people to write more accurately helps them avoid these kinds of mistakes – the ones that do matter – and I think that is very important for good communication. 🙂
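To put a number on that distinction: a lower-case “b” (bit) is one eighth of an upper-case “B” (byte), so mixing them up changes a figure by a factor of eight. A trivial illustration:

```python
# Why the b/B distinction matters: the same number means very different things.
link_speed_gb_per_s = 10                       # a 10 Gb/s link (gigabits per second)
throughput_GB_per_s = link_speed_gb_per_s / 8  # = 1.25 GB/s (gigabytes per second)
print(f"{link_speed_gb_per_s} Gb/s is only {throughput_GB_per_s} GB/s")
```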
Oh wow, 2% faster than the 980Ti. I would not quite call that “pulling ahead”
Depends on the price. If it’s a hundred or two cheaper then it pulls ahead on the price/performance level.
Be fair, GPUs have gotten so fast that fighting over 1-2% at the top is a big deal, especially considering how far behind they (AMD) were.
Faster is faster, it doesn’t matter if it’s 2% or 100%.
If the price rumors for Fiji are true, it would be $100 per % for that increase. The rumored price was going to be $850, though with the 980 Ti at $650 I don’t see AMD being able to get away with more than $700 at this point.
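To make that arithmetic explicit, using the rumored prices and the roughly 2% leaked delta from this thread as assumptions rather than confirmed figures:

```python
# Rough price-per-percentage-point calculation using rumored figures from the thread.
price_980ti = 650.0          # USD, GTX 980 Ti launch price
price_fury_rumored = 850.0   # USD, rumored Fury X price (unconfirmed)
perf_delta_pct = 2.0         # approximate leaked Fire Strike advantage

extra_cost = price_fury_rumored - price_980ti
print(f"${extra_cost:.0f} extra for ~{perf_delta_pct:.0f}% more performance "
      f"= ${extra_cost / perf_delta_pct:.0f} per percentage point")
```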
AMD doesn’t need a 2% win, they need a 20% win. The reason is that, software-wise, Nvidia’s package is well above what AMD has.
Dream on, shill…
Nope, the card was said to be released at the $850 price point by multiple outlets. Obviously, that’s impossible with the launch of the 980 Ti, but if AMD has to slash the price a ton, it’s possible they could lose money on it, which they simply can’t afford.
LegitReviews also confirmed the suspicion that the 390X is just a rebranded 290X with extra VRAM and a clock bump:
http://www.legitreviews.com/radeon-r9-390x-taken-apart-to-reveal-radeon-r9-290x_166065
Doesn’t matter if you win by an inch or a mile, winning is winning – still, I’d rather wait for the results from actual reviewers than this piss rumour that ain’t worth the digital ink used to shop it.
Well, it means you’ll have to pull a lot less money out of your wallet to get the same amount of GPU power from AMD, so the price/performance metric on the Fury will certainly pull ahead with the AMD kit. And the engineering on the AMD Fury is already done for 8GB of HBM 2: just make the stacks higher. It’s still a 1024-bit bus running to each HBM memory stack, so a larger stack will not be too difficult, but that’s more on Hynix to get those stacks higher, and then 8GB of HBM memory will be on the way. That’s a 4GB card doing damn well against a 12GB card from the competition. And the memory clocks are going up for HBM 2 along with the larger memory stacks. Hopefully a furious price war is in order and the consumer will be the winner. Remember, if one side of the great GPU war emerges as the outright winner then the consumer loses big time.
Keep those dirty filthy GPU dogs fighting with each other; they are just suppliers of parts, they are not anything more! The technology is what is important, technology produced by engineers, not the filthy CEOs or marketing monkeys. The more they fight/compete, the more the consumer wins! And 2% is definitely inside the margin of error on any benchmark, so let the price war begin anew.
What I’d like to know is how much single precision and double precision performance per dollar I am getting, as well as the overall price/performance metric. I want the GPU I’m getting to be usable for more than just gaming workloads, so SP/DP performance is important too.
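For anyone wanting to run that comparison, peak single-precision throughput for a GCN-style part is roughly 2 × shaders × clock (one fused multiply-add per shader per cycle), and double precision is some fraction of that depending on the chip. A rough sketch, where the shader count and clock follow the leaked figures but the price and FP64 ratio are placeholders, not confirmed specs:

```python
# Sketch of a compute price/performance metric. Shader count and clock follow the
# leaked CompuBench data; the price and FP64 ratio below are illustrative guesses.
def peak_gflops(shaders, clock_mhz, rate_ratio=1.0):
    """Peak throughput assuming one FMA (2 ops) per shader per clock."""
    return 2 * shaders * clock_mhz / 1000.0 * rate_ratio

shaders, clock_mhz, price_usd = 4096, 1050, 700.0       # price is a guess for illustration
sp = peak_gflops(shaders, clock_mhz)                    # ~8602 GFLOPS FP32
dp = peak_gflops(shaders, clock_mhz, rate_ratio=1/16)   # if FP64 ran at 1/16 rate (assumption)
print(f"FP32: {sp:.0f} GFLOPS, {sp / price_usd:.1f} GFLOPS per dollar")
print(f"FP64: {dp:.0f} GFLOPS, {dp / price_usd:.1f} GFLOPS per dollar")
```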
So it’s fine for this card to be a pathetic 2% faster with HBM because somewhere down the line they’ll have another card with HBM2? Keep justifying… you do know Nvidia will have HBM2 as well.
AMD: “Coming Soon” “Problem doesn’t exist” “We’ll fix it later, just buy it now”
Oh boy… we have a badass over here… no wait, I think it’s actually called a fanboy/girl.
While we should wait for reviewers’ benchmarks and not rumors, if it’s true, 2% is still ahead and you’re dismissing it. When it’s Nvidia ahead by 2% they get standing ovations. You aren’t objective critics, you’re assholes.
The fact of the matter is that if any of Intel/AMD/Nvidia went out of business the others would raise prices, and not by a small margin I would expect. So yeah, keep at it and see what happens.
I didn’t see this kind of comment when the i7 was pulling just a few % in front of an FX… It was all handy dandy even if it cost twice as much…
I have people coming to me with under-$500 budgets for gaming PCs, in 2015, asking which is the best new Pentium that came out… A significant number of them end up spending half of that on an i5 or whatever the local shop has in stock or prebuilt, and can’t use it for shit, instead of just facing the fact that competition is good for the consumer and that throwing an AMD part in there would help them a lot.
Being a performance elitist is one thing, being a fanboy is another, and it helps no one.
Thinking about it: 4GB of RAM vs 12GB and they are neck and neck. If the speed of the memory is what allows for this then holy shit, I can’t wait for HBM-2. Nice job AMD, nice job indeed!
Mind you, this test is not memory intensive. Look out for game benchmarks with titles that use more than 4GB of vidmem to see the problem with this product.
I honestly wonder what will happen in that exact situation. Perhaps the bandwidth/speed will compensate, perhaps it will be crippled, who knows.
Bandwidth can’t make up for a lack of memory. If, say, a game is using 4-4.5GB, it probably won’t have much of an impact. Push the usage to 5GB, 6GB and up, though, and more and more system memory has to be used, and it will impact performance more and more.
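The scale of that penalty is the key point: anything spilled past local VRAM has to come back across PCI Express, which is far slower than the card’s own memory. A rough comparison, assuming PCIe 3.0 x16 and the bandwidth implied by the leaked HBM figures:

```python
# Why spilling past local VRAM hurts: local bandwidth vs. the bus to system memory.
hbm_bandwidth_gb_s = 512.0   # ~512 GB/s implied by the leaked 500MHz x 4096-bit HBM setup
pcie3_x16_gb_s = 15.75       # theoretical peak for PCIe 3.0 x16

ratio = hbm_bandwidth_gb_s / pcie3_x16_gb_s
print(f"Local HBM is roughly {ratio:.0f}x faster than fetching spilled data over PCIe")
```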
Not really, all that extra bandwidth from HBM gives the texture compression units plenty of room to compress those textures down and make them fit in that 4GB of HBM, and all it takes is the proper game engine/driver and graphics APIs to properly stage the active scenes ahead of the action. Compression and proper memory staging from system RAM to HBM can and does keep the action going. Those fat 1024-bit memory busses to each HBM stack can provide all the uninterrupted bandwidth that the compression units need, all while not gumming up the bandwidth available to the GPU’s SPs, ROPs, tessellation units, etc.
HBM marks the beginning of the end for PCB-based RAM, as RAM on the PCB sits off the package and is starved for wide busses.
Interposer memory will probably get even wider in future HBM generations beyond HBM 2; look for 2048-bit-wide busses for each stack on future silicon interposers.
Interposer or CoWoS memory is the future, but off-package HMC and DDR4 are going to be necessary for applications where you need massive amounts of memory.
I think GP100 will have up to 32GB of HBM. I think it is really impressive that they were the first to get a product you can buy with HBM, and it has decent performance, but this is definitely another 7970.
It’s the first 2.5D GPU with 3D RAM you can buy, like the 7970 was the first 28nm GPU.
However, Titan X and GM200 and 204 were hugely compromised architectures. Nvidia basically did a Haswell refresh with GM200 and 204. They stripped out all the real compute (FP64) and didn’t increase memory amount or bandwidth at all. Instead they used color compression to play games slightly better than the now ancient GK110.
GK110 was at least double to triple the performance of GF100. GM200 is 50% faster than GK110 for games and infinitely slower for anything requiring DP.
If AMD has something bigger, with more memory and significantly better compute performance than Nvidia’s GK110 and GK210, then this would be more than just a cool memory configuration. Matching GM200 is probably going to win fans back if they price it less than the 980 Ti for gamers. If so, hopefully they can make more interesting products based on this first generation.
It is REALLY nice to see a company doing something different though.
If I were in charge, I’d still have used 16GB of HMC at 480GB/s though. They could have made the card the same size but the water block would be a little more complex. Maybe a vapor chamber instead, depending on how they laid out the HMCs. With 2×8 pins I don’t think this is any more power efficient than a vapor-chamber-cooled, off-package HMC design would be.
Hope they are aggressive ($150-250 cheaper) on equivalent card pricing.
Clearly this graphic was created by someone who favored Nvidia (assuming it isn’t fake). OC numbers are only on Nvidia cards. As if you can’t OC AMD parts…
That is the standard response from AMD fanboyz: making claims like that with zero proof whatsoever. Just because their hyped-up GPU isn’t showing the performance that has been hyped, they claim the graph was made by someone on Nvidia’s team. The numbers for the AMD card might have been pulled from 3DMark’s site, so they didn’t need to own the card; the ones with the 980 Ti overclocked are there because reviewers have that card and were able to test it overclocked.
That is the standard response from Nvidia fanboyz to the standard response from AMD fanboyz. The card that provides the best price/performance, as well as the most single and double precision for other graphics/GPGPU uses for the money invested in the hardware, will be the winner for the consumer. So enjoy your king-of-the-hill antics, your pathological need for domination, and your empty wallet.
Poor ATI. All this time and new technology and they can only match the 3dfx Titan X? Really?
I was going to take the plunge, but given their drivers, combined with it only being roughly as powerful as the 3dfx stuff, it had better be decently cheaper =/
That was one of the poorest troll attempts I have ever seen on the internet.
And your reply is like all the useless dime-a-dozen replies on the internet…
Disappointing performance from the 390X if those are the actual release clock speeds. It would have been nice if it could have outperformed the 980. Fiji might be really interesting if you can turn the memory clock up a bit.
http://www.legitreviews.com/radeon-r9-390x-taken-apart-to-reveal-radeon-r9-290x_166065
The 390X and its brethren are simply rebranded cards. This has been known for a while now, and it isn’t shocking that the performance is basically identical versus the 200 series cards.
I’m more upset at AMD claiming the cards support DX12 when they fail to meet Microsoft’s FL_12_0 feature level. That’s borderline false advertising, and AMD is really opening itself up to legal action when people complain their cards actually can’t do DX12.
Long-time reader, second-time poster, just to say it’s depressing to see all the in-fighting and insults. I really don’t understand it; it’s much more satisfying in my mind to have a legitimate debate or discussion. Just because someone doesn’t share your point of view doesn’t necessarily mean they’re hating on you.
Anyway, looking forward to the show in a few days. I hope the reveal coincides with a release and that the rumors of a delay are false. Though I guess with HBM being so new, maybe yields are gonna be low and so availability may be limited for a bit (I think Josh touched on this in a recent podcast). Seems like a real possibility; it must be a lot of work getting all these parts onto the PCB and working, being that this is all new.
I’m not in the market for a new card atm, but I hope Fiji is competitive. I’d love to see it beat the 980 Ti, or even the Titan X, just to see how Nvidia reacts. I’m also really hoping the re-brands, if the rumors are true, are at least enhanced in some meaningful way; Hawaii is looking sort of dated, but we’ll see.
I’m sort of thinking to myself, how many times can a brand, any brand, re-brand something before it’s a real problem? Nvidia did this, as we all know, a few years ago; they got a lot of life out of Tesla.
Hawaii is missing some performance-enhancing features like higher-throughput tessellation and the new color compression, but it isn’t that far behind otherwise. The color compression just saves memory space and bandwidth. The importance of the tessellation is a bit over-hyped, I think. Regardless of how powerful the tessellator is, the rest of the GPU still needs to be able to handle all of the extra triangles generated by it. With the Witcher HairWorks mess, it seems like Nvidia GPUs mostly can’t handle the highest levels of tessellation while maintaining a reasonable frame rate. It isn’t necessarily useful to have a more powerful tessellator except for marketing and artificial benchmarks.
Skewed results? Why not include the 295×2 in the chart? Something tells me it’s Green Team trolling.
Well, they did show the Fury X in Crossfire, so leaving out the 295 X2 isn’t a big deal in my opinion.
But I don’t really pay attention to these things. When PCPer and the various other tech sites I respect give us more in-depth analyses I’ll be paying more attention, but for now, it’s just gossip across the back yard fence. 🙂
You can just double what the 290X does and subtract some, which would put it around an overclocked 980 Ti.
The key point of this “leak” is not mentioned in the article.
And that is that no benchmarks were run by these guys.
That’s why the data seems skewed.
All results except the Fury are actual results.
The Fury results are purely extrapolated from “known” data, such as the number of stream units and clock speeds.
It’s obliquely mentioned by videocardz themselves, but no one who’s reposted their info seems to get that they never actually fired up 3DMark in this instance.
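For what it’s worth, that sort of extrapolation boils down to scaling a known card’s score by the ratio of shader count × clock, which ignores memory bandwidth, ROP counts and architectural changes entirely. A sketch of the idea, where the baseline score below is a placeholder rather than a real result:

```python
# Naive score extrapolation of the kind described above: scale a known card's result
# by shader count x clock. The baseline score here is a placeholder, not real data.
def extrapolate_score(base_score, base_shaders, base_clock_mhz, new_shaders, new_clock_mhz):
    """Assumes performance scales linearly with shaders * clock (it usually doesn't)."""
    return base_score * (new_shaders * new_clock_mhz) / (base_shaders * base_clock_mhz)

# Scaling up from an R9 290X-class configuration (2816 shaders at 1000MHz)
guess = extrapolate_score(base_score=3500,            # hypothetical Fire Strike Ultra score
                          base_shaders=2816, base_clock_mhz=1000,
                          new_shaders=4096, new_clock_mhz=1050)
print(f"Extrapolated score: {guess:.0f}")
```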
Well then, the extrapolator should be congratulated on a fine job. If it was just extrapolation, why didn’t the extrapolator extrapolate both the Extreme and the Ultra results for the Fury in Crossfire?