After announcing the Radeon VII this week at CES, AMD has quietly released its own internal benchmarks showing how the upcoming card potentially compares to the Radeon RX Vega 64, AMD's current flagship desktop GPU released in August 2017.
The internal benchmarks, compiled by AMD Performance Labs earlier this month, were released as a footnote in AMD's official Radeon VII press release and first noticed by HardOCP. AMD tested 25 games and 4 media creation applications, with the Radeon VII averaging around a 29 percent improvement in games and 36 percent improvement in professional apps.
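For clarity, those per-title and average uplift figures are straightforward percentage deltas; a minimal sketch with hypothetical frame rates (not AMD's actual data) shows the arithmetic:

```python
# Hypothetical example (not AMD's actual figures): how a per-title uplift
# and the suite-wide average are derived from raw frame rates.
vega64_fps = {"Game A": 40.0, "Game B": 62.0, "Game C": 55.0}   # assumed baseline FPS
radeon7_fps = {"Game A": 52.0, "Game B": 70.0, "Game C": 68.0}  # assumed Radeon VII FPS

uplifts = {
    game: (radeon7_fps[game] - vega64_fps[game]) / vega64_fps[game] * 100
    for game in vega64_fps
}
average_uplift = sum(uplifts.values()) / len(uplifts)

for game, pct in uplifts.items():
    print(f"{game}: {pct:+.1f}%")
print(f"Average uplift: {average_uplift:.1f}%")
```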
AMD's test platform for its gaming Radeon VII benchmarks was an Intel Core i7-7700K with 16GB of DDR4 memory clocked at 3000MHz running Windows 10 with AMD Driver version 18.50. CPU frequencies and exact Windows 10 version were not disclosed. AMD states that all games were run at "4K max settings" with reported frame rate results based on the average of three separate runs each.
For games, the Radeon VII benchmarks show a wide performance delta compared to RX Vega 64, from as little as 7.5 percent in Hitman 2 to as much as 68.4 percent for Fallout 76. Below is a chart created by PC Perspective from AMD's data of the frame rate results from all 25 games.
In terms of media creation applications, AMD changed its testing platform to the Ryzen 7 2700X, also paired with 16GB of DDR4 at 3000MHz. Again, exact processor frequencies and other details were not disclosed. The results reveal between a 27 and 62 percent improvement:
It is important to reiterate that the data presented in the above charts is from AMD's own internal testing, and should therefore be viewed skeptically until third party Radeon VII benchmarks are available. However, these benchmarks do provide an interesting first look at potential Radeon VII performance compared to its predecessor.
Radeon VII is scheduled to launch February 7, 2019 with an MSRP of $699. In addition to the reference design showcased at CES, AMD has confirmed that third party Radeon VII boards will be available from the company's GPU partners.
I am willing to take a chance
I am willing to take a chance on this card once they release waterblocks. I have had great success over the years squeezing insane levels of performance out of my AMD cards, so I cannot wait. If this card can hit 2,000MHz under water and it scales well with voltage, those numbers are going to look a lot better. My 1080 Ti with the XOC BIOS under water at 2,101/12,000 averages 62 fps in Far Cry 5; this card is apparently doing that at 1,800MHz, so any kind of OC will push this card well past that.
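For what it's worth, the rough math behind that expectation assumes frame rate scales linearly with core clock, which is optimistic; a quick sketch using the clocks mentioned above:

```python
# Back-of-the-envelope only: assumes FPS scales linearly with core clock,
# which real games rarely do (memory bandwidth, CPU limits, etc.).
baseline_clock_mhz = 1800   # clock assumed above for the stock Radeon VII result
baseline_fps = 62.0         # the Far Cry 5 average mentioned above
target_clock_mhz = 2000     # hoped-for under-water overclock

estimated_fps = baseline_fps * (target_clock_mhz / baseline_clock_mhz)
print(f"Optimistic estimate at {target_clock_mhz} MHz: {estimated_fps:.0f} fps")
# -> roughly 69 fps, i.e. about an 11% bump in the best case
```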
You’d better buy an electric
You’d better buy an electric radiator, it’s more efficient to warm cavemen! 😉
Won’t matter much with my
Won’t matter much with my current setup (360mm, 240mm, 140mm rads); it can cool down a tank. Hopefully EK has blocks available in February.
That’s a nice side grade I
That’s a nice side grade I guess lol
That’s some right pretty
That’s some right pretty improvements for Radeon VII, Uncle Jed! It darn sure looks like AMD is Tick-Tocking with its Vega Micro-Arch, and that there is as good as a whole mess of collard greens with pot liquor floating on top from them ham hocks, if you look at them better results!
Now if we could just get Granny to explain the finer details of xGMI and those AI ISA extensions on the Vega 20 Tapeout. What’s that, Uncle Jed? Granny done signed an NDA!
What’d they give to Granny to sweeten the deal? Jed: they done gave Granny a dump truck full of lard and plenty of sacks of quicklime for making soap. Jed: you be sure to scrub behind your ears, boy, else I’ll have Ellie May dip you like her critters.
Jed: Ho doggy! That JHH over at Nvidia’s got smoke coming out of his ears like Mr. Drysdale when he thinks I’m ’bout to withdraw my Money from his piggy bank! That’s some billions in Nvidia Share Holder Value that went up the smokestack quicker than the Ozark express on a downgrade straightaway!
Ok, so you are a massive
Ok, so you are a massive asshole, trolling for no reason, whatever.
I must say, however, your troll delivery was spectacular. Paraphrasing a show most people under 40 have never heard of, the clear feeling of superiority as you call us all rednecks, the total, complete dickishness… it’s beautiful.
Well done sir!
Jed is not a RedNeck, Jed is
Jed is not a RedNeck, Jed is a hillbilly, but most Gamers like you are, sure as the sun comes up, egregious RedNecks.
Jed’s got some redeeming qualities, but not (Rednecks)/You.
You are that duck wanker guy; you fit the bill of a Redneck (Gamer). Even Jethro, compared to you, is as erudite as Miss Jane.
You are just too emotionally attached to your Games/Gaming Hardware, you Redneck Mother. Looks like you are one with the r/Amd or r/Nvidia fandroids who are a bit too fanatical, without a single cell of grey among the whole lot!
But Ho Doggy! That JHH is done gone all meltdown after all that shareholder value went up like a puff of Bull Durham roll-your-own at the Saturday night hoedown!
What’s it take to buy your brand loyalty, a box of Biz detergent with the free towel inside or a MoonPie and an RC Cola? You simpleton, you can not even take a pop culture reference from way back without getting offended!
Just look at Nvidia’s CEO and that smoke shooting out of them ears, like 2 Wabash Cannonballs on a collision course.
They could take away all the Gamers’ GPUs and use them for something more productive and that would be fine by me!
Ear to ear smile, that was
Ear to ear smile, that was spectacular. I’m gonna keep checking the site a few times today, see what else ya got.
And for the record, I’m not offended, I’m entertained. You’ve honestly made my day, it’s the most creative example I’ve seen in a long time of someone calling everyone at PCPER a shill, and there have been A LOT OF YOU over the years, so thanks dude, that was a sweet little giggle you gave me as I sat on the crapper this morning.
Never said that PCPer were
Never said that PCPer were all shills, and one has to have moved on to a cushy corporate job to be suspected as a Shill/Marketing type. You’re just a damn fool without a single brain cell to be found! You fit in nicely with both the rabid r/Amd and r/Nvidia fanzoid bumpkins.
You move way more crap out of your mouth than out of your lower sphincter hole. You are still butthurt from the first post that got you so Triggered. But you will be forever known as the duck wanker dude with that infamous ode based on that corkscrewed-up logic and reasoning of yours.
The best of the best is of no concern to me; what matters is the most affordable card with a high enough frame rate and a low enough frame variance to not have any noticeable frame Delivery/Quality issues. Really, the RTX 2080 Ti is still much too costly, and all that new and shiny RTX is not currently made use of in the majority of gaming titles.
You really need to stop huffing that Toluene, but the damage is already done! Now the only thing that remains between your hairy ears is fully lipid based and encased in hard bone.
This right here? This is good
This right here? This is good news for everyone. For the first time in a long time there is a choice at the very top end. Sure, if you want the best of the best of the best there is only the 2080 Ti, but in that oh-so-coveted #2 spot there is an actual choice, and to my memory it hasn’t been that way since… I wanna say 7970 vs 480? But my memory is pretty hazy, as y’all know. Still WAY outta my price range, but this should create a few ripples and get team green to play a little more aggressively with pricing rather than focusing on proprietary hardware and software (G-Sync, GameWorks, etc.).
Fair competition is good for everyone as far as I am concerned, I am optimistic.
480 released in 2010 with
The 480 released in 2010 with fantastic (sarcasm) thermals and was soon replaced by the GTX 500 series a few months later.
The 7970, which released in Dec 2011 for a retail price of $549, was the most expensive and most powerful single-GPU solution available at the time, handily beating the GTX 680 when it was released a few months later.
It wasn’t until the release of the GTX 700 series in 2013 that we had the competition to which you are referring. It was in this climate that we saw the most parity between these two companies. Every subsequent generation since, Nvidia has been slowly pulling away, to the point where we find ourselves now, where the most expensive, powerful, and sensible (Titans for data scientists who also game) single-GPU solution is 3x more costly than just five years prior. It’s good to be king.
Yeah, I was thinking 580,
Yeah, I was thinking 580; mists (pot smoke) of time and all…
Lisa Su loves that “sensible
Lisa Su loves that “sensible (Titans for data scientists who also game) single-GPU solution is 3x more costly than just five years prior. It’s good to be king.”
So That’s Including Nvidia’s more costly consumer/gaming variants.
And that’s because if Nvidia increases its RTX GPUs’ MSRPs/ASPs, that makes it more likely that AMD can still undercut Nvidia’s higher pricing and have some profits remaining after the HBM2/Vega 20 DIEs’ BOMs are factored in. The Radeon VII’s BOM is still going to be rather high, but the RTX 2080’s MSRPs/ASPs are good for AMD also. AMD can get higher GPU MSRPs/ASPs and still undercut Nvidia’s even higher pricing. That’s a big win for RTG’s continued viability. Lisa Su says Woo Hoo to Nvidia’s higher MSRPs/ASPs because she is a CEO also and the shareholders are always watching!
You gotta love Nvidia’s JHH for upping those RTX GPUs’ MSRPs/ASPs, because that makes Lisa Su’s/RTG’s job that much easier to get sufficient Consumer/Professional GPU revenues to fund RTG’s competition with Nvidia. It’s a battle of the Gigabucks between AMD’s RTG and Nvidia, and currently Nvidia is outspending AMD’s RTG.
Lisa Su in the executive washroom singing this tune:
ASPs ASPs!
Nvidia’s gone crazy with those ASPs!
ASPs ASPs!
Nvidia’s Gone full on Mad with its RTX GPUs’ ASPs!
Higher ASPs Higher ASPs!
Via those Mad Nvidia GPU MSRPs.
It’s good for Nvidia and it’s good for me!
Because that’s higher MSRPs for RTG!
To the PCPer staff moderating
To the PCPer staff moderating the comment section:
Please don’t delete CamelCaseGuy’s comments. We should feel lucky to witness first-hand the making of a modern epic. These comments are unique cultural artifacts of high import. You don’t want to destroy such irreplaceable pieces of art, which will undoubtedly become a centerpiece of the cultural heritage of our era. You don’t want to be the Taliban.
This is a sincere plea.
Ha ha! Taliban, and calling
Ha ha! Taliban, and calling folks CamelCaseGuy; those Taliban kooks are just the Rednecks of their region, a bit too obsessed with religion and a little too dependent on Guns, Violence, and other trappings of 12th century philosophy.
Most Gamers are very close to the Taliban at heart, what with their education levels consisting of mostly hand-to-eye coordination training and little else.
ColossalCollapse is just what Nvidia’s share prices experienced, and JHH is mad as a hornet because his hornet’s nest egg has lost more than 50% of its value with that punch right in Nvidia’s market CAP.
So I’ll create this little portmanteau of the words Gamer and Redneck! And that will become Gameneck, and that’s what the majority of gamers consist of.
Just go over to the WCCF-T primate house and watch the Green and Red Gamenecks fling that freshly pooped defecation at each other. And that partisan nonsense is mostly based around that Maximum FPS metric and who has the fastest GPU. It’s more of a GPU drag race with those Bumpkins, with their little ability to understand anything technology related!
Taliban, and CamelCaseGuy in the form of your piss poor retorts, and that Taliban special kind of kook exists wherever the fairy tales can be found that offer salvation as a service with those crazy perks in some “Afterlife”.
Ha ha! Ha HA! Speed Racer, you could not reason your way out of a wet paper bag with the help of a bazooka!
WCCF-T comments truly are
WCCF-T comments truly are some of the worst things the internet has to offer. It’s a sea of idiots bitching that some other idiots with different graphics cards are bitching about their favoured graphics cards, and therefore the solution is to bitch harder and louder than the other lot. Bitch^3
wtf! are you kidding me bruh?
wtf! are you kidding me bruh? (insert tech site here) comments are WAAAAY worse than WCCF-T!
They’re an ocean of morons, crying about some other fool with more RGB than them, and therefore the only solution is to build a wall…WITH EVEN MOAR RGB!!!
…selective memory there.
…selective memory there. The Fury X and 980 Ti went toe to toe back in 2015. That wasn’t even the second best for either of them, unless you count the Titan (which I don’t).
Back then the best of the best only set you back £550, and the prosumer Titan cost like £800. I think Nvidia are hoping everyone’s forgotten that.
It’s disappointing because it’s price matching a card that has been lambasted since its launch for being overpriced to the point of not being worth buying, and it’s matching its performance too; but it’s not at feature parity. You’re paying the same amount for less. To offset the lack of DLSS and Ray Tracing, it has to be cheaper than the 2080, or outperform it.
indeed. they never shouldve
Indeed. They never should’ve dropped the ‘Fury’ branding, IMO.
There was also the GTX 780 vs
There was also the GTX 780 vs R9-290 vs GTX 780 Ti vs R9-290X tit-for-tat before Maxwell launched.
But is it really a choice?
But is it really a choice? Same price for a die-shrunk Vega 64 to match RTX 2080 performance, but without RTX or DLSS technologies? It would be a different story if Radeon VII brought something new to the table as well. But even if Ray Tracing and DLSS are not ready yet, they are most definitely the type of technologies that are the future of gaming.
So if the choice is between Radeon VII or RTX 2080 with new technologies; there really isn’t a choice. Especially with Nvidia announcing they are going to start supporting FreeSync.
It’s one thing to be a fan of
It’s one thing to be a fan of something and another to be taken advantage of to the point where people will accept the card AMD is releasing for gaming, which is not even impressive and could be argued is horrible on so many points. AMD fans can’t even admit when a card AMD releases is horrible. All AMD is doing here is dangling a card in front of their fan base like a fresh bone in front of an overweight dog, to hold them over for the video cards they’re going to release later this year, which will in essence be crap anyway. Furthermore, AMD is pulling their little magic trick of putting 16GB of VRAM on the card, and again the AMD fans drool. Bottom line is the Nvidia cards are better and have been better for several years. Forget about the pricing; it is what it is, but you have to give credit where credit is due, and AMD deserves no credit in their video card department. Instead of wasting money putting 16 gigs of VRAM on the card, they could have partnered with Asus and put an all-in-one water cooler on it and overclocked it to death. The card would still be trash, but at least it would be much more interesting than what they released now. I used to be an AMD fan and an ATI fan way back when. I switched over to Nvidia for obvious reasons, and on the CPU side I had to switch over to Intel because AMD again could not keep up. They still can’t keep up, but the fans still follow like puppy dogs. That might sound harsh but it’s the truth. I’m the type of person that will go to either side, but I like performance and good graphics and features. If AMD actually brings out products that really give Intel and Nvidia a significant shot in the performance area, then I will gladly consider switching. But I’m not just going to keep waiting and making excuses and bashing the competition. That’s childish and unreasonable. AMD is not going to save the day. They just don’t have the money to spend.
The card that AMD is
The card that AMD is releasing is close to the RTX 2080 in raster gaming performance, and it’s also on a 7nm process node shrink with all of Vega 20’s extended AI “Vega-2” ISA extensions. That and it’s xGMI capable, with other tweaks that will have to wait for the NDA to expire before all the questions are answered.
So that Vega 20 (like the Vega 10) Tapeout was tweaked for the professional markets, and so were the GP102/TU102 Base Die Tapeouts that were/are used by Nvidia to bin out the GTX 1080 Ti/RTX 2080 Ti.
So both AMD’s and Nvidia’s Flagship Gaming offerings come from GPU DIE/Tapeouts that are designed first and foremost for the Professional GPU market. DIE harvesting for Flagship GPUs from Professional Market designed GPU Base Die Tapeouts has been the norm, so there is no difference there.
AMD’s Vega 20 diffusion lines over at TSMC have been running for more than 6 months, and by the very nature of that imperfect diffusion process there are plenty of built-up DIE/Bins of Vega 20 DIEs that are not performant enough for the professional market and are now being used for the consumer Radeon VII.
Most definitely, as a result of that 7nm process node, the Vega 20 Base Die Tapeout is rather smaller area-wise. So its DIE/Wafer yield numbers are going to be better than the Vega 10 Base Die Tapeout’s on the GF/licensed-from-Samsung 14nm process node.
It’s a very logical thing from a business/die production standpoint for Vega 10 production to be replaced with Vega 20 production on TSMC’s 7nm node. There are other obvious reasons for AMD to move away from 14nm to 7nm, but you get the gist of AMD’s reasoning.
There will be a lower binned part derived from Vega 20 that will have a closer complement of Shaders:TMUs:ROPs to AMD’s current Vega 10 based Vega 56. So all the desktop Vega GPU production will be shifted over to Vega 20 based 7nm production, with plenty of perks for AMD and any Vega 20 consumer variant gaming customers. AMD will also get improved DIE/Wafer yields that will get better over time, and any Vega consumer customers will get Radeon VII performance in the range of the RTX 2080 on raster oriented gaming titles.
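As a rough illustration of that dies-per-wafer point: the die sizes below are approximate public figures and the defect density is an arbitrary placeholder (not a real foundry number), but a standard dies-per-wafer estimate plus a simple Poisson yield model shows why a smaller die helps:

```python
import math

# Rough illustration only; die sizes are approximate public figures and the
# defect density is an arbitrary placeholder, not a real TSMC/GF number.
WAFER_DIAMETER_MM = 300.0
DEFECTS_PER_CM2 = 0.2          # assumed defect density

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic dies-per-wafer approximation (ignores scribe lines, edge effects)."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-A * D0), A in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

for name, area in [("Vega 10 (~495 mm^2)", 495.0), ("Vega 20 (~331 mm^2)", 331.0)]:
    dpw = dies_per_wafer(area)
    y = poisson_yield(area, DEFECTS_PER_CM2)
    print(f"{name}: ~{dpw} candidate dies/wafer, ~{y:.0%} fully-good-die yield")
```

The smaller die gets both more die candidates per wafer and a higher fraction of defect-free dies, and the partially defective ones are exactly what gets binned down into lower SKUs.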
Vega 20 has an extended ISA that supports some new AI oriented operations that will make for better AI based upscaling and other AI based Denoising, Filtering, etc. operations. TSMC’s 7nm node and whatever Memory Controller/xGMI/other Vega 20/Radeon VII additions/tweaks information will have to wait for closer to the release date.
It’s not “All AMD is doing here is dangling a card in front of their fan base like a fresh bone,” as it’s a valid business decision for AMD to go to TSMC/7nm ASAP and get the DIE area space savings for better DIE/Wafer yields. So Vega 10/14nm, both Consumer and Professional SKUs, are being transitioned to TSMC 7nm/Vega 20 replacements, and there are other Vega 20 features that will improve gaming that are still under NDA.
If you have the funds then you are just looking for reasons to stay with Nvidia, and you are free to do so. But others are fine with Raster Oriented gaming, and RTX/Turing has to get a large enough game title adoption rate to make sense currently. The Vega 20 based Radeon VII looks just fine in the context of competing with the RTX 2080 this generation, and Vega 10 GF/14nm production will be slowly phased out anyway. The Vega 10 based Vega 64/56 are now older designs that lack the newer feature sets and abilities of Vega 20. There is no telling just what features may have been added/fixed/tweaked on Vega 20 because the NDA is still in effect.
AMD is not in the market to defeat Nvidia for the Flagship gaming title, and maybe there will be a Dual Vega 20/single PCIe card variant based on a Vega 20 DIE bin that closely matches the current Vega 10 based Vega 56 in Shaders:TMUs:ROPs metrics. AMD already markets a Dual Vega 10 (Vega 56 like) GPU DIEs/Single PCIe card variant for the cloud gaming/streaming market, the Radeon Pro V340, and that’s getting replaced with some Dual Vega 20 binned DIEs variant also. And by extension AMD could probably market a Vega 20 based Dual GPU/Single PCIe Card consumer/Gaming variant that would give the RTX 2080 Ti some competition in raster oriented gaming titles.
TSMC’s 7nm node has plenty of advantages over GF 14nm, and TSMC may just have some excess 7nm wafer capacity as the result of Apple’s disappointing iPhone/iPad sales numbers. Who really knows for sure, but Vega/7nm is what is being transitioned over to, with no turning back.
Why spend so much time
Why spend so much time discussing the fans you profess to dislike so much? Leave them to their thing, you stick to what’s what. As it is your comment is not “the truth”, it’s your opinion, and clearly a biased one at that (only fanboys complain about some other brand’s fanboys, nobody else cares).
The truth is that not everything outside the absolute top-end is “trash” and while AMD’s current products might not interest you personally, don’t pretend your perspective has some greater universal applicability.
Releasing this is a win-win for AMD. They get to recycle imperfect dies manufactured on a new process node, improving their profit margins while simultaneously re-introducing some notion of competition at the high end. In business terms it’s a smart move and it was quite an unexpected one, too.
I do agree that the card is overpriced given the feature deficit vs. Nvidia’s best, but then we’ll see how that shakes out in practice. You can’t easily raise a product’s price after release to protect profit margins, but you can certainly drop it to increase sales. This at least gives us *something* to provoke competition and drop prices back to more sane levels, even if the competitive situation is still less than ideal.
A truly smart consumer will wait and buy whatever best suits them at the price they’re willing to pay.
So AMD’s 7nm GPU tech is what
So AMD’s 7nm GPU tech is what it takes to match Nvidia’s 16nm best from last gen? That’s a big yikes from me.
AMD’s CPU division is incredible; however, their GPU side has been a laughing stock. If they priced this at $499 I could dig it; otherwise I’d still jump on the green ship at this price point.
Really, the RTX 2080 is
Really, the RTX 2080 is Turing, not Pascal, and Volta is for Professional usage.
“AMD’s CPU division is incredible; however, their GPU side has been a laughing stock”
AMD’s GPU/RTG side is targeting mainstream mostly, where the most unit sales volumes come from. So it’s RX 590s at 12nm and now Vega 20 DIEs harvested and used for the Radeon VII at 7nm. You just Know that AMD will replace any Vega 10 DIE based production at 14nm with Vega 20 based DIE production at 7nm! And with a smaller DIE area size for Vega 20/7nm, that’s more Vega 20 DIEs per Wafer.
Hey Jethro, you do Know that on that 16nm TSMC process node Nvidia is probably using 9.5T libraries with more FinFETs per Cell. So Nvidia is trading away die space savings for more 4 fin (FinFET) transistors instead of the 3 or 2 fin transistors that can not be driven to higher clocks. Nvidia’s Pascal and Turing DIEs are rather large compared to AMD’s 14nm, 12nm, and now 7nm GPU Die Tapeouts.
So if AMD is using 7.5T libraries at 10 fins per cell instead of 9.5T libraries at 12 fins per cell, on whatever TSMC 7nm pitch metrics are utilized, it is getting even more die space saved. A smaller GPU die area results in more DIEs per Wafer at 7nm, and that improves die/wafer yields.
So maybe that’s why AMD’s all in with 7nm, for the DIE/Wafer yield increases. Vega 20/Radeon VII is closer to the RTX 2080’s raster gaming metrics, so that’s where the competition is. It’s also dependent on what automated layout libraries were used for the 7nm Vega 20 tapeout that AMD chose to utilize. And any TDP (a heat dissipation metric, not a power usage metric) for Vega 20 may be higher because AMD may have chosen to go with higher transistor density than Nvidia normally chooses. That TDP metric that AMD uses is for the cooling solution that is necessary to cool the die! So more transistors packed into less area increases the TDP metric required to properly cool the processor DIE.
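To put a rough number on that density point (purely illustrative: the board power and die areas below are approximate/assumed figures, not official AMD specifications), the heat flux works out something like this:

```python
# Illustration of the power-density argument above; board power and die areas
# are placeholder/approximate values, not official AMD specifications.
def power_density_w_per_mm2(board_power_w: float, die_area_mm2: float) -> float:
    return board_power_w / die_area_mm2

vega10 = power_density_w_per_mm2(295, 495)   # ~0.60 W/mm^2 (assumed ~295 W, ~495 mm^2)
vega20 = power_density_w_per_mm2(295, 331)   # ~0.89 W/mm^2 (same power, smaller die)

print(f"Vega 10: {vega10:.2f} W/mm^2, Vega 20: {vega20:.2f} W/mm^2")
# Pushing similar power through a smaller die raises the heat flux per unit area,
# which is why the cooler rating (the "TDP" described above) has to go up.
```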
There will be some Vega 20 DIEs coming off of TSMC’s chip diffusion lines that may not have sufficient working Shader cores or nCUs to even bin down to become Radeon VIIs! So look for the Vega 10 based Vega 56 to be replaced with some Vega 20 derived variant also, with slightly better performance than Vega 56, that will probably compete better with the RTX 2070.
AMD will be replacing its Radeon Pro V340, the Dual Vega 10 (each DIE binned similar to Vega 56’s complement of Shaders:TMUs:ROPs)/Single PCIe card Cloud Gaming variant, with a Vega 20 based replacement. It’s not going to be hard for AMD to also produce a consumer/prosumer gaming (and other uses) Dual Vega 20 DIEs/Single PCIe card variant. So that may be at RTX 2080 Ti levels of raster gaming performance and then some.
“AMD’s CPU division is incredible,” and yes, in not too many more business quarters AMD will be making enough from Zen Epyc/Naples and Zen-2 Epyc/Rome server market revenues to surpass Nvidia’s GPU-only market revenues.
So the effective relative markups to DIE cost on Zen/Epyc and Zen/Rome are even higher relative to what those Nvidia server GPU SKUs cost to produce, and that more costly GPU Die BOM eats into any of Nvidia’s Pro GPU markups.
So even though the Zen/Epyc and Zen/Rome SKUs’ MSRPs are smaller than Nvidia’s top end Pro GPU prices, that Server CPU Die/Chiplet production for AMD costs a fraction of what it costs Nvidia to produce one of its V100 or TU102 DIEs for professional GPUs. AMD’s CPU server market share will get large enough to produce revenue streams that are larger than Nvidia’s GPU revenue streams, and that’s on top of AMD’s Consumer CPU revenues, which are also growing.
Navi will not compete with any RTX 2080 Ti, as Navi will replace the RX 590/lower mainstream market segment.
AMD is not in a pissing contest with Nvidia over Flagship gaming, as the unit sales volumes are too low. The only way for AMD to try and compete currently is by creating another Dual GPU/Single PCIe card SKU from any remaining Vega 20 die production where the working number of shaders/CUs is not sufficient to bin a Radeon VII. So remember that Radeon Pro Duo, and maybe something like that with some Dual Vega 20 based DIEs that are each binned similar to Vega 56’s complement of Shaders:TMUs:ROPs.
The Vega 20 Die based Radeon VII is AMD’s new King that’s still not able to match Nvidia’s King, but watch out for any Dual Vega 20 Binned DIEs/Single PCIe card variant, because there are bound to be plenty of Vega 20 Die Bins that do not make the grade to be used in servers/workstations or even the Radeon VII. It’s better than throwing away any Vega 20 DIEs that lack enough working Shaders/nCUs for even Radeon VII production.
You Know that Vega 20’s DP FP to SP FP ratio is back to 1:2, so that’s a lot more DP FP for some prosumer’s GPU usage in addition to gaming usage. It all depends on the Vega 20 chip line production that’s been ongoing for 6+ months. So that’s probably a sufficient number of partially defective dies for Radeon VII production and for one lower binned Vega 20 based consumer variant also.
It’s pretty clear that GCN
It’s pretty clear that GCN does not have the legs they’d hoped it would as an architecture; it just doesn’t seem to scale well.
Given that Vega 20 is pretty much just a die-shrink of Vega 10, which was itself not a whole lot more than a tweaked die-shrink of Fiji, I’m hoping the newer stuff that they’ve actually designed up-front for 7nm will be a little more competitive.
Stop blaming this on GCN
Stop blaming this on GCN, because AMD can not afford 5 different GPU base die tapeouts that are each targeted to a specific GPU market segment like Nvidia can afford!
Nvidia, for example, on its Pascal Micro-Arch has GP100, GP102, GP104, GP106, and GP108. So that’s 5 different GPU (Base DIE) tapeouts, each with varying numbers of Shaders:TMUs:ROPs, while for comparison AMD had at the time that one Base Die tapeout for Vega (the Vega 10 BIG Die).
The GP102 (also a pro market focused die/tapeout) has a total of 96 ROPs max compared to the Vega 10 based die tapeout’s 64 max. And Nvidia has 4 more Base Die tapeouts with lesser amounts of total Shaders:TMUs:ROPs! So Nvidia can more finely target all the Top to Bottom GPU market segments.
These 5 Tapeouts probably cost above 1 Billion dollars for Nvidia, but it gives Nvidia greater latitude to target specific GPU markets with a specific GPU Tapeout.
It’s not GCN, it’s AMD’s lack of RTG resources to afford to compete with Nvidia on a specific die tapeout to specific die tapeout level. Nvidia makes more use of GP104, GP106 and GP108 for mostly gaming oriented SKUs. GP102 can be said to be a professional market oriented Base Die Tapeout because GP102 has more Quadro variants and only one Consumer variant, the GTX 1080 Ti. So Nvidia’s Flagship GPUs come from a binned Professional focused Base die tapeout, GP102.
So AMD could only afford One Base DIE tapeout, the Vega 10 (Big Die), and it’s the same for Vega 20 (with added AI ISA extensions and other tweaks for 2nd generation Vega) on TSMC’s 7nm node.
You are not correct in blaming AMD’s GCN, when if AMD had only chosen to increase the render back-end ROP resources a little more it could have gone head to head with Nvidia’s GP102 based GTX 1080 Ti (88 ROPs). Just go over to TechPowerUp’s GPU database and look at the GTX 1080 Ti’s Pixel Fill Rate in GPixels/s; that was the highest of any consumer market Nvidia or AMD Gaming GPU.
The Vega 20 based Radeon VII has more Tweaks and higher clocks, so its Pixel fill rate has gone up even with only 64 ROPs max. AMD has done more tweaking than just that TSMC 7nm process node shrink.
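The arithmetic behind those fill-rate comparisons is just ROP count times clock; a quick sketch with approximate boost clocks (ballpark values, not spec-sheet numbers):

```python
# Peak pixel fill rate ~ ROPs x clock. Clocks below are approximate/assumed,
# so treat the outputs as ballpark figures rather than spec-sheet values.
def peak_fill_rate_gpixels(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

cards = {
    "Vega 64 (64 ROPs @ ~1.55 GHz)":      (64, 1.55),
    "Radeon VII (64 ROPs @ ~1.75 GHz)":   (64, 1.75),
    "GTX 1080 Ti (88 ROPs @ ~1.58 GHz)":  (88, 1.58),
}
for name, (rops, clk) in cards.items():
    print(f"{name}: ~{peak_fill_rate_gpixels(rops, clk):.0f} GPixel/s")
# The Radeon VII's higher clocks lift its fill rate over Vega 64 despite the
# same 64 ROPs, while the 1080 Ti's 88 ROPs keep it on top in this one metric.
```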
Also, AMD’s TDP metric is still misquoted/misunderstood and misreported by the Technology press. I’m sure that Vega 20 is going to be a very transistor dense design on TSMC’s 7nm, and also AMD uses denser/smaller layout design libraries on its GPUs in order to save more space. So 7.5T 10 fins per cell libraries compared to 9.5T 12 fins per cell libraries, with the 7.5T libraries able to pack more Transistors per mm^2. Vega 20 is going to need a higher TDP (Recommended Cooler TDP Number) because it’s on 7nm with more Transistors per mm^2, so the heat per unit area generated is higher on Vega 20/7nm than Vega 10/14nm.
The power used by Vega 20 is still unknown because the NDA is still in effect. But Vega 20’s small size means that the cooling solution’s TDP metric has to go up in order to keep that smaller, more densely packed Vega 20 die properly cooled. Vega 20’s actual power usage figures will have to wait for independent testing.
GCN is not at fault for RTG’s/AMD’s lack of a billion+ dollars just laying around that can be used to target more gaming/GPU oriented base die tapeouts. And that is the true reason that AMD/RTG can not compete with Nvidia: Base Die Tapeouts (5+) to Base Die Tapeout (1) on the desktop. Vega 20 is still using Vega 10’s Shaders:TMUs:ROPs numbers, so it’s up to more Tweaks on Vega 20, Vega 20’s Micro-Arch Extensions, and whatever little Tweaks/Improvements AMD had the time to bake into Vega 20’s Shader cores, TMUs, and ROPs.
Radeon VII is also only 60 nCUs, and that’s less than Vega 64’s 64 nCUs, so Vega 20 has some more Tweaks than was revealed at CES 2019. I’m sure that 16GB of HBM2 at 1TB/s is also helping things, and even for tasks like compression and decoding that excess bandwidth comes in handy during heavy gaming scenes.
I have always thought that AMD’s ROPs may be just slightly more efficient than Nvidia’s ROPs, if one takes the GPixel fill rate, divides by the number of ROPs, and factors in the difference in clock speeds.
But for Vega the ROPs are a direct client of the L2 cache, so maybe there are improved/larger L2 caches on Vega 20 compared to Vega 10. So at TSMC 7nm and with any other Tweaks, Vega 20 is right up there competing with the RTX 2080 for raster oriented gaming titles.
Folks, you need to go watch Buildzoid’s latest video on CPU density, where he makes direct reference to Nvidia’s GPU density compared to AMD’s GPU density. It’s a little aside comment that Buildzoid slips in there, about how apparently Nvidia likes lower density libraries with larger numbers of Fins (FinFETs) per cell than AMD, who uses denser layout libraries with fewer fins per cell. Nvidia makes use of more 4 fin transistors that can be clocked higher at the cost of density, while AMD is using denser libraries and more 3 fin transistors that can not be clocked/driven to higher clock rates efficiently. Nvidia’s GPU dies are less dense per mm^2 and they are larger, so the TDP/thermal dissipation issues are less on larger dies that are using less dense libraries.
There is nothing AMD can do currently to compete any better with Nvidia’s billions, other than gamers pooling their resources and loaning/giving AMD’s RTG at least 3 Billion dollars that’s earmarked for the tapeout of gaming only focused GPUs. Nvidia leads there by 5 to 1 against AMD.
Cut the ram in half, drop the
Cut the ram in half, drop the price and it will be worth it.
These are very good numbers
These are very good numbers there on 4K.
OMG, EVGA raised the price
OMG, EVGA raised the price across the board on their 2080 Tis; even the entry-level $999 Black went up by 10% to $1,099. Thanks, competition.
Unless a company can get high
Unless a company can get high sales volumes, graphics cards are loss leaders.
It is likely that AMD, with current sales volumes, is not making big money.
Nvidia, like Intel, slowed down R&D to reap the benefits of past development. The underdog has a great product at a competitive price. Nvidia now has to trim the fat and watch their pricing.
Because the Radeon VII is
Because the Radeon VII is priced as high as a 1.5-year-old 1080 Ti with the same performance, I don’t believe Nvidia will lower their prices on anything that performs better than the Radeon VII. Just look at EVGA: they just raised the price of their 2080 Tis across the board, although that could be due to tariffs too.
So far, 2019 is still a bad year for upgrading to enthusiast or high-end PC gaming. I’m hoping the end of the year will be more competitive, not just on the CPU front but on the GPU front, where it’s long overdue, as well.