Ray Tracing and Gaming – One Year Later

Source: PC Perspective
In case some of you guys aren’t checking out the RSS feeds of our articles or just skip over the icons above, I really need to make sure you see the new ray tracing article from Daniel Pohl.  As a follow-up to Daniel’s previous article that first introduced ray tracing to our readers, this newest iteration dives into some examples of how ray tracing can be more efficient than rasterization when it comes to rendering games. 

There are lots of examples, diagrams and explanations for readers of all levels of ray tracing experience, and even a video to demonstrate how ray tracing could affect gameplay.  Be sure to check it out!

[Figure 1] Artistic representation of a building with 4 levels, each with 4 rooms

When ray tracing, we want to figure out whether a piece of geometry is hit or not. In ray tracing speak we talk about a “camera”, and the rays we shoot through it to determine what is visible are called “eye rays” or “primary rays”. Naively, we could just shoot eye rays everywhere and see what gets hit and what does not. Clearly, in the above example 15/16 of these rays would be wasted. Thinking about this situation a little, it seems obvious that the camera is very unlikely to “see” anything in the right-most room on the lowest level, so why even bother checking whether a ray hits any of the geometry in that area?
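In practice, renderers make exactly this kind of skip decision by testing the ray against a cheap bounding box around each room (or a whole hierarchy of them) before touching any triangles. Here is a minimal sketch of the standard ray/AABB slab test in Python (the function names are illustrative, not from the article): if the ray misses the room's box, everything inside it can be skipped.

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does a ray starting at origin, travelling along
    direction, ever enter the axis-aligned box [box_min, box_max]?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray is parallel to this slab: it must already be inside it.
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            tmin, tmax = max(tmin, t1), min(tmax, t2)
            if tmin > tmax:
                return False
    return True
```

A ray pointing toward a room's box passes the test; one pointing away fails immediately, and none of that room's geometry ever needs to be checked.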

Manufacturer: Intel
Ray tracing faster than rasterization: Example 2
The second example is a case study on multiple reflections in reflections on spheres and tori.

[Figure 2] A torus is the mathematical term for a geometric object that has the shape of a donut; the plural of torus is tori.
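For the curious, a torus has a simple parametric description: with major radius R (distance from the center of the torus to the center of the tube) and minor radius r (the tube's radius), every surface point comes from two angles. A small sketch (the function name and parameters are illustrative):

```python
import math

def torus_point(R, r, u, v):
    """Point on a torus: u sweeps around the central axis,
    v sweeps around the tube; both angles in radians."""
    x = (R + r * math.cos(v)) * math.cos(u)
    y = (R + r * math.cos(v)) * math.sin(u)
    z = r * math.sin(v)
    return (x, y, z)
```

At u = v = 0 this gives the outermost point of the donut, at distance R + r from the center.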

Mmm… Donuts!

[Figure 3] Multiple reflections in reflections. Click here for 80MB PNG file (10200×6000)

If we pretend for a moment and ignore the fact that this scene would be impossible to render correctly using rasterization, not only is it possible to do this correctly with ray tracing, it’s actually much cheaper than you might think! Ray tracing efficiently and accurately calculates all reflections on an as-needed basis: only what is actually visible gets calculated. Consider the following sequence of images to see how the rays are actually used:

Step 1: “Eye” rays – at least one per pixel – (Tinted Red)

[Figure 4]

Step 2: Primary reflections (Tinted Green)

[Figure 5]

Step 3: Secondary reflections (Tinted Blue)

[Figure 6]

Step 4: 3rd-order reflections (Tinted Yellow)

[Figure 7]

Step 5: …

So clearly, with each iteration the number of subsequent reflection rays decreases dramatically; even a complex scene with many reflections therefore represents only a small incremental cost with ray tracing.
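That fall-off is easy to quantify with a back-of-the-envelope model (the resolution and the `reflective_fraction` parameter below are illustrative assumptions, not numbers from the article): if only a fraction of hit points spawn a reflection ray, the ray count shrinks geometrically with each bounce.

```python
def reflection_ray_counts(primary_rays, reflective_fraction, max_depth):
    """Approximate rays traced at each reflection depth, assuming a
    fixed fraction of hit points spawn one reflection ray each."""
    counts = [primary_rays]
    for _ in range(max_depth):
        counts.append(round(counts[-1] * reflective_fraction))
    return counts

# e.g. a 1280x720 image where 30% of hit points are reflective:
rays = reflection_ray_counts(1280 * 720, 0.3, 3)
```

By the third bounce only a few percent of the original ray count remains, which is why even deep reflection chains stay cheap.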

How are reflections handled in rasterization?

In order to simulate reflections, the scene is pre-rendered from at least one camera position and saved to a texture map (called a reflection map, or sometimes a cube map), which is then applied as a layer of texturing in a subsequent rendering of the scene. A similar method is used to create shadow maps. Each of these passes requires rendering the scene before you really render the scene!

There are different cases and methods:
  • Below is an example of a one-pass reflection map texture; notice the fish-eye lens distortion effect.

    [Figure 8]

  • Here is a 2-pass reflection mapping, which results in parabolic distortion.

    [Figure 9]

  • Here we have 6 passes to create a cube map.

  • There are several more algorithms that result in only minor improvements over these techniques.
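However the reflection map is produced, sampling a cube map at shading time amounts to mirroring the view direction about the surface normal and using the result's dominant axis to pick one of the six faces. A minimal sketch in Python (the helper names are hypothetical, not from any particular engine):

```python
def reflect(d, n):
    """Mirror direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def cube_face(r):
    """Pick the cube-map face that a direction vector r points into,
    by its largest-magnitude component."""
    axis = max(range(3), key=lambda i: abs(r[i]))
    return ("+" if r[axis] >= 0 else "-") + "xyz"[axis]
```

For example, a view ray looking straight down at a floor with normal (0, 0, 1) reflects straight up and samples the +z face. Note that this lookup only ever returns what was baked into the map, which is the root of the limitations below.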

Limitations of the rasterization approaches:

  • Rendering into a texture can lead to visible jaggies due to sampling issues: an object that was far away from the camera during the reflection map pass, and hence occupies only a tiny part of the reflection map, might end up much closer to the camera during game play and get distorted.

  • Rendering into a texture consumes time. Worst case: in the final image only one pixel of the reflection is visible, but for the cube map approach the scene has been rendered an additional six times.

  • How to handle multiple reflections? Very difficult to achieve, and in most cases practically impossible: it requires pre-calculating the reflections, then rendering the scene, then evaluating the reflections in reflections, then re-rendering the scene, and…

  • These reflection maps are not based on physical laws, so the reflections are not correct anyway.

What all these approaches are really trying to do is approximate ray tracing.

As mentioned in the first example, the speed of ray tracing scales logarithmically with the number of triangles. This, of course, also applies to all reflection rays. So in a scene with a very high poly count and with reflections, ray tracing can take even greater advantage of this situation (i.e., a single rendering pass, paying for reflections only where they actually happen).
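The logarithmic claim can be made concrete with a rough model: with a balanced binary acceleration structure (such as an idealized BVH or kd-tree) over n triangles, a ray visits on the order of log2(n) levels rather than touching every triangle. A sketch under that balanced-tree idealization:

```python
import math

def approx_traversal_depth(triangle_count):
    """Levels a ray visits in an idealized balanced binary
    acceleration structure built over triangle_count triangles."""
    return math.ceil(math.log2(triangle_count))
```

Going from a thousand triangles to a billion only triples the traversal depth (10 levels to 30), which is why high poly counts are comparatively cheap for ray tracing.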

