Today at GTC, NVIDIA announced a few things of particular interest to gamers, including GameWorks RTX and the implementation of real-time ray tracing in upcoming versions of both Unreal Engine and Unity (we already posted the news that CRYENGINE will be supporting real-time ray tracing as well). But there is something else… NVIDIA is bringing ray tracing support to GeForce GTX graphics cards.
This surprising turn means that ray tracing support won’t be limited to RTX cards after all: the install base of NVIDIA ray-tracing-capable GPUs “grows to tens of millions” with a simple driver update next month, adding the feature both to previous-gen Pascal and to the new Turing GTX GPUs.
How is this possible? It’s all about the programmable shaders:
“NVIDIA GeForce GTX GPUs powered by Pascal and Turing architectures will be able to take advantage of ray tracing-supported games via a driver expected in April. The new driver will enable tens of millions of GPUs for games that support real-time ray tracing, accelerating the growth of the technology and giving game developers a massive installed base.
With this driver, GeForce GTX GPUs will execute ray traced effects on shader cores. Game performance will vary based on the ray-traced effects and on the number of rays cast in the game, along with GPU model and game resolution. Games that support the Microsoft DXR and Vulkan APIs are all supported.
However, GeForce RTX GPUs, which have dedicated ray tracing cores built directly into the GPU, deliver the ultimate ray tracing experience. They provide up to 2-3x faster ray tracing performance with a more visually immersive gaming environment than GPUs without dedicated ray tracing cores.”
A very important caveat is the “2-3x faster ray tracing performance” for GeForce RTX graphics cards mentioned in that last paragraph: expectations will need to be tempered, since RT effects run considerably less efficiently on shader cores (Pascal and GTX Turing) than they do on dedicated RT cores, as demonstrated by the charts NVIDIA shared.
It's going to be a busy April.
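For game developers, the practical question is simply whether the driver exposes DXR at all; a title discovers this through the standard D3D12 feature query. Below is a minimal sketch of that check (our own illustration, not NVIDIA’s code), assuming a Windows/D3D12 environment and, for brevity, using the default adapter instead of enumerating adapters explicitly. Note that the reported tier does not say whether rays will be traced on dedicated RT cores or executed on shader cores; that distinction lives entirely in the driver.

```cpp
// Minimal sketch: ask D3D12 whether the current driver exposes DXR.
// Windows only; compile with a recent Windows 10 SDK and link d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter (a real engine would enumerate
    // adapters through DXGI and pick one explicitly).
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // OPTIONS5 carries the ray tracing tier. With the April driver, Pascal and
    // GTX Turing cards are expected to report TIER_1_0 even though the rays
    // are executed on shader cores -- the tier alone does not tell you which.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("DXR is supported (tier %d).\n",
                    static_cast<int>(options5.RaytracingTier));
    } else {
        std::printf("DXR is not supported by this device/driver.\n");
    }
    return 0;
}
```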
Those performance figures are inflated, as they compare the 1080 Ti running at full resolution against the 2080 upscaling from a lower resolution. Comparing RT Turing without DLSS with Pascal doesn’t look good for Turing at all.
What do you mean?
If you’re talking about running non-ray-tracing applications (traditional rasterization), yes, the 2080 isn’t much of an upgrade over the 1080 Ti.
But if you enable DXR/ray tracing on the 1080 Ti it gets crippled hard, while the RT cores of the 2080 can finally flex their muscles. It’s even shown in the graph as Turing RT.
Though most people should ignore the results of Turing RTX, since that uses DLSS.
That’s because Nvidia’s GTX 1080 Ti has the exact same number of shader cores as the Vega 56, and Pascal’s SMs cannot dual-issue INT and FP instructions the way both RTX and GTX Turing SMs can. AMD’s GCN SKUs with async compute should do relatively well, using their excess shader cores to accelerate ray tracing calculations. Radeon VII even has some extra FP64 units that can still be utilized for ray calculations: you can do 32-bit math on a 64-bit FP unit and discard any values larger than a 32-bit register could hold.
None of the GPUs without dedicated RT hardware are going to best RTX/Turing, while GTX/Turing will do nicely compared to Pascal. AMD’s shader-heavy designs are really going to earn their keep, with both DX12/DXR and Vulkan’s ray tracing extensions providing alternative code paths that run ray tracing on the GPU’s shader cores for older hardware that lacks any dedicated ray tracing/BVH units.
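For anyone curious what that alternative Vulkan code path looks like from the application side, here is a rough sketch (an illustration, not code from the article) of probing a device for VK_NV_ray_tracing, the NVIDIA Vulkan ray tracing extension available today. It assumes a Vulkan 1.1 SDK is installed and, for brevity, simply takes the first physical device rather than selecting one properly.

```cpp
// Rough sketch: check whether a physical device exposes VK_NV_ray_tracing.
// Requires the Vulkan SDK headers and loader; link against vulkan-1.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    // Create a minimal instance (no layers or instance extensions).
    VkApplicationInfo app = {VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici = {VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) {
        std::printf("Failed to create a Vulkan instance.\n");
        return 1;
    }

    // Grab the first physical device (a real engine would score and choose).
    uint32_t deviceCount = 0;
    vkEnumeratePhysicalDevices(instance, &deviceCount, nullptr);
    if (deviceCount == 0) {
        std::printf("No Vulkan devices found.\n");
        vkDestroyInstance(instance, nullptr);
        return 1;
    }
    std::vector<VkPhysicalDevice> devices(deviceCount);
    vkEnumeratePhysicalDevices(instance, &deviceCount, devices.data());

    // List the device extensions and look for the ray tracing extension.
    uint32_t extCount = 0;
    vkEnumerateDeviceExtensionProperties(devices[0], nullptr, &extCount, nullptr);
    std::vector<VkExtensionProperties> exts(extCount);
    vkEnumerateDeviceExtensionProperties(devices[0], nullptr, &extCount, exts.data());

    bool hasRayTracing = false;
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, VK_NV_RAY_TRACING_EXTENSION_NAME) == 0)
            hasRayTracing = true;

    std::printf("VK_NV_ray_tracing %s\n", hasRayTracing ? "available" : "not available");
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```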
I mean that these huge 1.6x, 3x increases rely on Pascal running the game at full resolution while Turing runs at a lower resolution and upscales, introducing hideous artefacts in the process. Turing with RT but without DLSS shows a modest increase, but it’s nothing like what is being sold by the graphs.
They can keep their Vaseline-smeared monitors and reflections in puddles. Report back when they knock a couple hundred bucks off the prices. File it all under PhysX, HairWorks and SLI: stuff nobody wanted and that should have been open sourced.
Most of GameWorks is open source now. They started making it open source years ago.
I don’t really see the hate SLI gets either. Two 1080s (non-Ti) are more capable than a 2080 Ti for much less money. It’s Nvidia who wants SLI to be as hated and unsupported as possible.
It’s not totally Nvidia’s or AMD’s fault that SLI/CrossFire is dying. Game publishers, especially the “AAA” publishers, are in the business of hitting deadlines, and devs have to cut corners. Guess what is one of the first things to go?
If anything, it’s a combined effort: game companies don’t want to make the effort, so they save the time, and GPU makers want the extra potential sales or upsells.
Your dual-1080 scenario is flawed. The issue is that SLI/CrossFire is only supported in a subset of games, and even then it only scales well in a smaller subset.
If one is OK with that subset of games for SLI and that’s all that person plays, then by all means go for it. Otherwise, IMO, it’s not a good idea.