There is something about this phrase, describing a feature of NVIDIA's newly announced VR SLI, that excites the kid in me: "multiple GPUs can be assigned a specific eye to dramatically accelerate stereo rendering". Maybe you can't afford two GPUs per eye, but the fact that it would work if you could manage it is rather impressive. NVIDIA has announced new SDKs specifically aimed at VR design and performance: GameWorks VR and DesignWorks VR. Epic has announced that Unreal Engine 4.3 will support these new tools, and you can grab them from NVIDIA's developer website right now if you so desire. You can read more about the specific features and optimizations these SDKs provide in this article at The Inquirer.
"The company said at the release of version 1.0 of GameWorks VR and DesignWorks VR that the SDKs will solve the power-guzzling problems associated with complex, immersive VR graphics processing."
Here is some more Tech News from around the web:
- Move aside Google Maps, the future of navigation is just three words @ The Inquirer
- Microsoft makes Raspberry Pi its preferred IoT dev board @ The Register
- Banking trojan Dyreza is targeting Windows 10 and Microsoft Edge users @ The Inquirer
- Linksys LCAB03VLNOD 1080p 3MP Outdoor Night Vision Bullet Camera Review @ NikKTech
VR SLI (and AMD’s Liquid VR equivalent) aren’t an automatic boost. Unlike with regular SLI/CrossFire, you can’t apply it to an existing game. It needs to be built into the game engine in order to properly distribute and dispatch jobs to one or the other (or both) GPUs. Like everything to do with VR, latency is paramount, and with two GPUs you need to be VERY careful in how jobs are distributed to avoid one GPU delaying the other.
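The scheduling concern above can be put in numbers. A minimal sketch (plain Python arithmetic, not NVIDIA's API; the sync overhead figure is an assumption for illustration): with one GPU per eye the stereo pair is ready only when the slower GPU finishes, so an imbalanced split erodes the win.

```python
# Sketch, not NVIDIA's API: with one GPU per eye, the frame is complete
# only when BOTH eyes have rendered, so the slower GPU sets the pace.

def stereo_frame_time(left_ms, right_ms, sync_overhead_ms=0.2):
    """Frame time when each eye renders on its own GPU in parallel.

    left_ms / right_ms: hypothetical per-eye render times.
    sync_overhead_ms: assumed cost of synchronizing the two GPUs.
    """
    return max(left_ms, right_ms) + sync_overhead_ms

def single_gpu_frame_time(left_ms, right_ms):
    """Baseline: one GPU renders both eyes back to back."""
    return left_ms + right_ms

# Balanced eyes: close to a 2x speedup (8.2 ms vs 16 ms on one GPU).
balanced = stereo_frame_time(8.0, 8.0)
# Imbalanced eyes: the fast GPU idles waiting, and the win shrinks
# (11.2 ms vs 17 ms).
imbalanced = stereo_frame_time(6.0, 11.0)
```

The takeaway matches the comment: the engine has to keep the two eyes' workloads balanced, or one GPU stalls the other at the sync point.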
Multi-res Shading is the really interesting API, something actually new from NVIDIA: hardware acceleration of multiple viewports. Currently it's aimed at cutting down on wasted pixels around the periphery, but it could be used to break the 180° rectilinear rendering barrier for future HMDs (and existing CAVEs).
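To see where the pixel savings come from, here is a rough back-of-the-envelope sketch (the 3x3 layout matches the general multi-viewport idea, but the center size, periphery scale, and panel resolution are illustrative assumptions, not figures from NVIDIA's SDK):

```python
# Hypothetical multi-res shading arithmetic: split the view into a 3x3
# grid of viewports, keep the center at full resolution, and shade the
# eight peripheral viewports at a reduced resolution.

def multires_pixels(width, height, center_frac=0.6, periph_scale=0.5):
    """Approximate shaded-pixel count for a 3x3 multi-res layout.

    center_frac: fraction of each axis covered by the full-res center.
    periph_scale: resolution scale applied to the peripheral viewports.
    """
    full = width * height
    center = (width * center_frac) * (height * center_frac)
    periphery = (full - center) * periph_scale ** 2
    return center + periphery

full = 2160 * 1200                   # assumed HMD-class panel resolution
reduced = multires_pixels(2160, 1200)
savings = 1 - reduced / full         # roughly 48% fewer pixels shaded
```

With these assumed numbers, nearly half the shading work goes away before the lens distortion pass throws those peripheral pixels' detail out anyway.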
The naive engineer in me thinks you can just assign one GPU per eye, issue a rendering job to both GPUs, wait for both of them to complete, and display the results to the individual eyes.
The only situation this system would handle badly is one eye suddenly having a drastically different workload from the other, but I’m struggling to think of such a case.
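The commenter's scheme can be sketched with a toy dispatcher (pure Python stand-in using threads, not a real GPU API; `render_eye` and the scene tag are made up for illustration). The wait-for-both step is an implicit barrier, which is exactly why a drastically slower eye would hold back the whole frame:

```python
# Toy version of the "one GPU per eye" scheme: dispatch each eye to its
# own worker, then block until both finish before presenting the pair.
from concurrent.futures import ThreadPoolExecutor

def render_eye(eye, scene):
    # Stand-in for a per-eye render job: just tags the scene with the eye.
    return f"{scene}:{eye}"

def render_stereo_frame(scene):
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(render_eye, "left", scene)
        right = pool.submit(render_eye, "right", scene)
        # Implicit barrier: present only once BOTH eyes are done, so the
        # frame is always as late as the slower of the two.
        return left.result(), right.result()

frame = render_stereo_frame("scene42")  # ('scene42:left', 'scene42:right')
```

The two `result()` calls are the whole synchronization story here: nothing is displayed until the slower worker returns, which is the stall the earlier comment warns about.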