Simultaneous Multi-Projection: Big changes for Surround and VR gaming
I mentioned previously that the primary architectural change from Maxwell to Pascal was the addition of the multi-projection block inside the PolyMorph 4.0 engine. The ability to support more than one projection is brand new to GPU hardware, and it can improve both performance and quality for gamers on multiple panels and in VR, but the subject is complex and needs a bit of background to understand. Here is an excerpt from NVIDIA documentation to help set the stage.
Since the early days of 3D rendering, the graphics pipeline has been designed with a simple assumption that the render target is a single, flat display screen. However, in recent years advances in display technology have led to many new types of display scenarios that do not fit the classical assumption. Surround multi-monitor setups are an excellent solution to give a sense of immersive realism in 3D games, and curved single-display monitors are also becoming popular. VR display systems put a lens between the viewer and the screen, requiring a new type of projection that is different from the standard flat planar projection that traditional GPUs support.
Traditional GPUs can support these types of displays, but only with significant inefficiencies—either requiring multiple rendering passes, or rendering with overdraw and then warping the image to match the display, or both.
The notion of “projection” has been fundamental since the dawn of 3D computer graphics. Geometric objects in the scene are modeled in three dimensions. However, in order to display a view of the scene on a flat display, the scene needs to be projected onto the screen, a process referred to as perspective projection. Projection is the computer graphics equivalent of drawing a picture on a window that exactly matches the view of the real world that you saw when looking through the window.
Single monitors and single projections are a solved problem, but there are numerous display technologies, available today and in development, that complicate the projection model. NVIDIA provided a couple of examples as demonstrations.
When using a three-monitor gaming setup, called NVIDIA Surround or AMD Eyefinity, the game presents an increased field of view but does so with a single, flat-plane projection. While that would be geometrically correct if the user kept all three monitors on the same flat plane, most gamers will want to tilt and angle the side monitors inward. Doing so fundamentally changes the mapping between the world, the projection, and your display setup, warping the image at the edges even further.
A quick fix for this issue would be to render each monitor separately, each with a viewport angled to match how you have set up your monitors. With previous architectures this would require three times the work for scene management and setup, including OS runtime, driver processing, front end and geometry processing. With Pascal and Simultaneous Multi-Projection, the GPU can render that scene in a single pass, avoiding the duplication of geometry and setup work. The same number of display pixels still needs to be processed, of course, but Pascal can now improve the correctness of the image without decreasing performance.
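To make the per-monitor viewport idea concrete, here is a minimal sketch of the view setup it implies. It is not NVIDIA's API; the panel angle, function names, and the simplification to pure yaw rotations (ignoring translation and the perspective projection matrix itself) are all illustrative assumptions.

```python
import math

def yaw_matrix(theta):
    """3x3 rotation about the vertical (Y) axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[  c, 0.0,   s],
            [0.0, 1.0, 0.0],
            [ -s, 0.0,   c]]

def surround_views(panel_angle_deg):
    """One view rotation per monitor: the left panel is toed in by
    +angle, the center looks straight ahead, the right by -angle.
    With SMP all three share one geometry pass; each projects that
    geometry through its own rotated plane."""
    a = math.radians(panel_angle_deg)
    return [yaw_matrix(+a), yaw_matrix(0.0), yaw_matrix(-a)]

left, center, right = surround_views(30.0)  # 30 degrees is a hypothetical tilt
```

On pre-Pascal hardware each of these three views would mean a separate rendering pass; the point of SMP is that the geometry work behind all three matrices is submitted and processed once.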
Single Pass Stereo
One of the big performance hurdles of VR gaming today is the penalty incurred by rendering the stereoscopic image for each eye independently. That doubles the amount of work from driver to setup to rendering and rasterization. SMP can remove that penalty by using two projections, one from each eye's location, cutting in half the geometry processing, scene submission and OS scheduling work.
In Single Pass Stereo mode, the application runs vertex processing only once, outputting two positions (rather than one) for each vertex processed. The two positions represent the locations of the vertex as viewed from the left and the right eye. The SMP hardware takes care of picking the right version of the vertex and routing it to the appropriate eye.
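Conceptually, the flow described above can be sketched as follows. This is a toy model, not the actual driver or shader interface: the interpupillary distance value and the trivial "projection" (a plain horizontal eye offset instead of full view/projection matrices) are assumptions made for illustration only.

```python
IPD = 0.064  # typical interpupillary distance in meters (an assumed value)

def project(vertex, eye_x):
    """Toy 'projection': express the vertex relative to an eye offset
    along X. A real pipeline would apply full per-eye view and
    projection matrices here."""
    x, y, z = vertex
    return (x - eye_x, y, z)

def single_pass_stereo(vertex):
    # Heavy per-vertex work (skinning, animation, lighting setup)
    # runs exactly once...
    processed = vertex
    # ...then one position is emitted per eye, as SMP hardware does
    # when routing the vertex to the left and right views.
    left_pos  = project(processed, -IPD / 2)
    right_pos = project(processed, +IPD / 2)
    return left_pos, right_pos

l, r = single_pass_stereo((0.0, 1.0, -2.0))
```

The key point the sketch captures is that the expensive part (everything before the two `project` calls) is not duplicated; only the final, cheap position output happens twice.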
Lens Matched Shading
Another early SMP feature, Lens Matched Shading, is applicable to VR applications today as well. It allows the GTX 1080 and other Pascal GPUs to more closely match the rendering workload to the final image required by a VR headset.
Most enthusiasts today understand that when a GPU renders a scene for a VR head-mounted display, several steps are involved. First the GPU renders the scene to a traditional flat plane, and then that rendered output is mapped to pixel locations on the output display surface, creating a skewed image that is “unskewed” by the lens of the Oculus Rift or HTC Vive. Based on Oculus’ own parameters, the GPU is asked to render a 2.1 Mpix image that is then mapped to a surface of just 1.1 Mpix, meaning the GPU is rendering 86% more pixels than are necessary.
You might remember that NVIDIA released a technology alongside the GTX 980 Ti launch called Multi-res Shading, which enabled different sections of the scene to render at different effective resolutions, lowering the workload on the game engine.
LMS improves on this by creating four quadrants with projections that attempt to closely match the lens-distorted view requested by the VR runtime (Oculus or SteamVR). These parameters are adjustable to better match current VR headsets (and any upcoming ones), and developers will have the ability to make the quadrants larger or smaller to balance performance and image quality. NVIDIA estimates that its first implementation of Lens Matched Shading can reduce the GPU workload from the 2.1 Mpix listed above down to 1.4 Mpix. That is still higher than the 1.1 Mpix final mapping done by Oculus, but it amounts to a 1.5x (50%) improvement in effective pixel shading throughput.
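The arithmetic behind those figures is worth spelling out, using the rounded megapixel numbers quoted above:

```python
# Pixel-shading arithmetic for Lens Matched Shading, using the
# rounded figures NVIDIA quotes for the Oculus Rift path.
baseline_mpix = 2.1  # flat-plane render target fed into the lens warp
lms_mpix      = 1.4  # estimated target with NVIDIA's first LMS settings
display_mpix  = 1.1  # pixels actually shown after the warp

# 2.1 / 1.4 = 1.5x effective pixel shading throughput
throughput_gain = baseline_mpix / lms_mpix

# 1.4 / 1.1 ~= 1.27x: LMS still renders more pixels than are displayed
remaining_overdraw = lms_mpix / display_mpix
```

In other words, LMS does not eliminate the overdraw inherent in rendering for a warped lens output; it narrows the gap between what is rendered and what is displayed.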
By combining Single Pass Stereo and Lens Matched Shading, NVIDIA claims to deliver 2x the performance of a GPU without Simultaneous Multi-Projection, aka Maxwell.
The benefits of SMP will likely extend past the technology integrations mentioned here, with implications for curved monitors and Surround configurations of more than three displays. It’s also important to note that this isn’t a driver-level feature that can simply be switched on to fix VR or Surround output automatically; it requires developer integration. So what we might at first have thought was an instant fix for the dreaded fish-eye problem of NVIDIA Surround gaming is in fact a “wait and see” proposition, depending on which games and game engines implement the technology. The advantages it provides are obvious, and hopefully developers will see them as well.