It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!
So, yeah, we missed a few weeks. We're going to blame it on Jim and his lack of pneumonia-proof lungs. What a loser. But, hey, they say whatever doesn't kill you makes you stronger. Turns out that's not true at all.
Without further ado, the role of Ryan Shrout in today's performance will be played by plucky young up-and-comer Josh Walrath:
00:56 – Do you think we'll see AMD GPUs with ray tracing capabilities in the next few months to compete with NVIDIA?
06:06 – Has NVIDIA changed the yardstick for measuring GPU performance? The metrics until now have been higher frame rates at higher resolutions, but it seems we’re about to start prioritizing ray tracing performance instead. Would a gamer playing today at 4K or 144Hz consider it a downgrade to switch to 1080p with ray tracing? Will consumers who invested in 4K and high refresh rate displays feel cheated by the shift to ray tracing, even though there are only a handful of supported titles scheduled for the near future?
11:46 – Would you rather see companies push to truly achieve mainstream 4K HDR 60fps performance with better textures and polygon counts instead of this new shift to ray tracing?
14:11 – I recently added a Samsung 970 EVO to my system, but I wasn’t able to install Windows on it unless I disconnected all of my other drives first. What could have caused this?
17:01 – Is anyone else unable to map a network drive in Windows 10 after upgrading to version 1803? Did Microsoft kill off HomeGroup without sufficient testing? Help!
18:48 – Why do different types of RAM work better on Intel or AMD platforms?
20:57 – Do you expect the new Intel HEDT refresh parts to hold the higher frequencies at the same power level as before? And if so, will that be down to the soldered IHS, or more to actual improvements in the cores/chip? Will the “optimization” provide an IPC increase?
23:16 – Do you think Zen 2 / Ryzen 3 will bring memory controller performance into parity with their Intel counterparts?
Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos (usually) each week!
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
Will you treat DLSS as lower-quality but faster 4K (since it renders about half as many pixels) or as higher-quality but slower 1440p (since it renders more pixels and will look much better) in your reviews?
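For context, a quick pixel-count comparison supports the “about half” framing, assuming (as the question does) that DLSS’s internal render resolution for 4K output is near 1440p; a minimal sketch:

```cpp
// Back-of-the-envelope pixel counts behind the DLSS question. The premise
// that DLSS renders near 1440p internally for a 4K output is the commenter's
// assumption, not a confirmed spec.
#include <cstdio>

int main() {
    long p4k   = 3840L * 2160;   // 8,294,400 pixels at 4K
    long p1440 = 2560L * 1440;   // 3,686,400 pixels at 1440p
    std::printf("1440p is %.0f%% of 4K's pixel count\n",
                100.0 * p1440 / p4k);   // ~44%, i.e. roughly "half"
    return 0;
}
```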
Also, with Nvidia, MSFT, Sony, Steam, EA, and Ubisoft (among others) talking about the move to streaming games over the next 5 years, wouldn’t this be the best way for gamers to see the benefits of ray tracing instead of spending $1200 on a GPU?
If Nvidia allowed you to game on a 2080ti on the Shield console via streaming, wouldn’t that be revolutionary? Same thing for the next Xbox, which is rumored to have a big streaming focus.
“Do you think we’ll see AMD GPUs with ray tracing capabilities in the next few months to compete with NVIDIA?”
All GPUs, current and going back more than three generations, can run ray tracing workloads, as can CPUs, where ray tracing was originally done.
But really, look at how long it takes to design and certify/validate a new CPU design, and GPUs take even longer, so AMD would need years of lead time for any hardware-based ray tracing design to be certified/validated. AMD is going to have more AI-related GPU micro-arch tweaks in Vega 20, so maybe Navi can also have some Tensor Core-like functionality similar to what Nvidia makes use of for its DLSS. That said, ray tracing is a compute workload, and compute workloads can be accelerated on any GPU’s spare shader cores for more limited ray tracing work.
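To make that concrete, here is a minimal CPU-side sketch of the core of any ray tracer, a ray-sphere intersection test. It is plain arithmetic, which is why it can run on a CPU or on any GPU’s shader cores; all names and values here are illustrative, not any vendor’s API:

```cpp
// Minimal sketch: ray tracing is "just" arithmetic. This traces one ray
// against one sphere by solving the quadratic |o + t*d - c|^2 = r^2 for t.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the distance along the ray to the first hit, or -1 on a miss.
static float raySphere(Vec3 o, Vec3 d, Vec3 c, float r) {
    Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
    float b    = 2.0f * dot(oc, d);
    float cc   = dot(oc, oc) - r * r;
    float disc = b * b - 4.0f * dot(d, d) * cc;
    if (disc < 0.0f) return -1.0f;            // ray misses the sphere
    return (-b - std::sqrt(disc)) / (2.0f * dot(d, d));
}

int main() {
    Vec3 origin = { 0, 0, 0 };
    Vec3 dir    = { 0, 0, 1 };                // unit-length ray direction
    Vec3 center = { 0, 0, 5 };
    float t = raySphere(origin, dir, center, 1.0f);
    std::printf("hit distance: %f\n", t);     // prints 4.0: one radius short of center
    return 0;
}
```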
GPU hardware-based hybrid ray tracing IP similar to Nvidia’s was already implemented years ago by the Imagination Technologies folks in their PowerVR Wizard mobile line of GPUs. So maybe AMD can license that, but more than likely AMD will design some of its own, and that can take years to end up in ASIC form.
Nvidia’s “real-time” hybrid ray tracing output is currently so limited that Nvidia has to make use of a Tensor Core-based AI denoising algorithm just to clean up the noisy ray tracing output that comes from its RT cores.
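As a toy illustration of what denoising buys you when there are too few rays per pixel, consider smoothing a speckled low-sample image. This is a naive 3x3 box filter with made-up values, nothing like Nvidia’s actual Tensor Core AI denoiser:

```cpp
// With very few rays per pixel the image is speckled; even a dumb 3x3 box
// average pulls the noise spikes back toward the true signal. Toy data only.
#include <cstdio>

const int W = 6, H = 4;

int main() {
    // Fake "1 sample per pixel" output: a flat 100 signal with noise spikes.
    float noisy[H][W] = {
        {100, 100, 240, 100, 100, 100},
        {100,   0, 100, 100, 230, 100},
        {100, 100, 100,  10, 100, 100},
        {220, 100, 100, 100, 100, 100},
    };
    float clean[H][W];
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float sum = 0; int n = 0;
            for (int dy = -1; dy <= 1; ++dy)        // average the 3x3 neighborhood,
                for (int dx = -1; dx <= 1; ++dx) {  // clamped at the image edges
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= H || xx < 0 || xx >= W) continue;
                    sum += noisy[yy][xx]; ++n;
                }
            clean[y][x] = sum / n;
        }
    }
    std::printf("center pixel before: %.0f  after: %.0f\n",
                noisy[1][1], clean[1][1]);  // the 0 spike recovers toward 100
    return 0;
}
```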
Really, Nvidia is developing ray tracing for the professional markets and making use of binned professional dies for its consumer gaming SKUs. Just look at the top-end TU104 variant: the full die is going to a Quadro, with the RTX 2080 making use of a lesser-binned TU104-based die. The TU102 base die tapeout (like GP102 before it) has always been used for Quadros, with the lowest-binned TU102 going to the RTX 2080 Ti (just as the lowest-binned GP102 went to the 1080 Ti), and the TU106 base die tapeout being used for the RTX 2070 this generation.
So with both the TU102 and TU104 base die tapeouts, Nvidia bins for Quadros first, not for any consumer variants; any dies that cannot be binned as Quadro variants become RTX 2080s (from TU104) or RTX 2080 Tis (from the lowest-binned TU102 dies).
Nvidia did not raise its ROP counts this generation: the TU102 base die tapeout still maxes out at 96 ROPs, so the RTX 2080 Ti still only gets 88 ROPs and little change in pixel fill rate, and the TU104 base die tapeout still maxes out at the same 64 ROPs:
[TU104 base die tapeout, full die (Quadro first)]
Shading Units: 3072
TMUs: 192
ROPs: 64
Tensor Cores: 384
RT Cores: 48
But any TU104-based RTX 2080 consumer die bins get only:
Shading Units: 2944
TMUs: 184
ROPs: 64
Tensor Cores: 368
RT Cores: 46
So Nvidia’s full TU104 tapeout is for Quadros, and the lesser-binned TU104 variant gets fewer resources for consumer/gaming use.
Nvidia is really pushing RTX for the Quadro market, where that ray tracing and other AI IP will be popular with the professional 3D animation and pro graphics design folks. And they will pay more to get Quadros.
If only AMD would re-tapeout a Vega variant with 96 available ROPs, AMD could have really matched the GTX 1080 Ti in the FPS metric for raster gaming workloads. But AMD wanted Vega for the professional compute/AI markets too, just like Nvidia, and lacked the funds for the large number of base die tapeout variants that Nvidia could afford.
Just look at AMD’s financials from a few years back: on an engineering level AMD can compete, but more tapeouts require more engineering teams and more millions per base die tapeout variant, and AMD did not have the funds to match Nvidia’s spending. Even today, AMD still has to grow its market cap and revenues before it can match the big players dollar for dollar in the billions spent on R&D and engineering.
When AMD starts earning revenues like Nvidia’s and gets its market cap up into the larger billions, it will have the funds to compete head to head with Nvidia in GPUs. Competition in the technology market really requires funding, especially for high-technology IP like GPUs and CPUs.
Per-GPU base die tapeout mask sets run in the millions of dollars, and that adds up if, like Nvidia, you have 5+ different base die tapeouts per generation compared to AMD’s one or few. When Vega was first released, AMD had only that one big “Vega 10” base die tapeout, used both for the professional MI25/Radeon Pro WX AI/compute variants and for the consumer Vega 56/64 variants.
It’s just too bad that the Vega 10 base die tapeout was designed only to compete with GP104 (64 ROPs max) and not GP102 (96 ROPs max). AMD’s Vega 10-based RX Vega 56 (64 ROPs) matches the GTX 1080 Ti in every unit metric except ROPs, and the 1080 Ti’s 88 ROPs are what give it the lead in pixel fill rate (GPixel/s) and FPS in raster gaming workloads; clock speeds help Nvidia too, but not as much as the higher ROP count.
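For a rough sense of how much those ROP counts matter, here is a back-of-the-envelope fill-rate calculation. The clocks are approximate boost clocks, so treat the outputs as ballpark figures:

```cpp
// Theoretical pixel fill rate = ROPs * clock (pixels per second).
// Clock figures below are approximate published boost clocks.
#include <cstdio>

int main() {
    double ti   = 88 * 1.58e9;   // GTX 1080 Ti: 88 ROPs at ~1.58 GHz boost
    double vega = 64 * 1.47e9;   // RX Vega 56:  64 ROPs at ~1.47 GHz boost
    std::printf("1080 Ti: ~%.0f GPixel/s, Vega 56: ~%.0f GPixel/s\n",
                ti / 1e9, vega / 1e9);   // ~139 vs ~94: the ROP gap dominates
    return 0;
}
```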
Does higher L3 per core increase single-threaded performance? Does this mean the i7-8700K has faster single-threaded performance than the i5-8600K if the frequency is the same and HT is off?
Why can’t Nvidia release a version of their 2080ti without all of that extra expensive crap? I mean, sure, I understand the need to pay for R&D; however, why do we need to pay for, supposedly, 10 years’ worth of research all at the same time? And where is that moon landing demo with ray tracing they were showing off a couple of years ago?
“Why can’t Nvidia release a version of their 2080ti without all of that extra expensive crap?…”
Doing a 2080ti minus the RTX tech would mean spending multi-millions of dollars to create another GPU design.
“…I mean, sure, I understand the need to pay for R&D; however, why do we need to pay for, supposedly, 10 years’ worth of research all at the same time…”
Nvidia charges that much for their RTX cards because they can.
“…And where is that Moon Landing Demo with Ray Tracing they were showing off a couple of years ago?”
That demo is still stuck back in 2014; it was used to debunk the moon landing debunkers. Since the mystery is already solved, why do it again?
Funny, no sooner do I post the above than the next day Nvidia releases the moon landing demo with RTX:
https://www.youtube.com/watch?v=QIap1jL14WU