Ray Tracing and Gaming – One Year Later

Source: PC Perspective
In case some of you guys aren’t checking out the RSS feeds of our articles or just skip over the icons above, I really need to make sure you see the new ray tracing article from Daniel Pohl.  As a follow-up to Daniel’s previous article, which first introduced ray tracing to our readers, this newest iteration dives into some examples of how ray tracing can be more efficient than rasterization when it comes to rendering games. 

There are lots of examples, diagrams and explanations for readers at every level of ray tracing experience, and even a video to demonstrate how ray tracing could affect gameplay.  Be sure to check it out!

Artistic representation of a building with four levels, each with four rooms

When ray tracing, we want to figure out whether a piece of geometry is hit or not. In ray tracing terms, we talk about a “camera”, and the rays we shoot through it to determine what is visible are referred to as “eye rays” or “primary rays”. Naively, we could just shoot eye rays everywhere and see what gets hit and what does not. Clearly, in the above example 15/16 of these rays would be wasted. Thinking about this situation a little, it seems obvious that it is very unlikely that the camera will “see” anything in the right-most room on the lowest level, so why even bother checking if a ray hits any of the geometry from that area?
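One common way to skip such regions is to test a ray against a room’s axis-aligned bounding box before testing any geometry inside it: if the ray misses the box, everything inside the box can be ignored. The slab test below is a minimal sketch of that idea; the function name and conventions are assumptions for illustration, not from the article.

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: True if the ray enters the axis-aligned box.

    origin, direction, box_min, box_max are 3-tuples of floats.
    """
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:                 # ray parallel to this slab pair
            if o < lo or o > hi:
                return False              # outside the slab: can never enter
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False                  # entry/exit intervals don't overlap
    return t_far >= 0.0                   # box not entirely behind the ray
```

If this test fails for the bounding box of the right-most room on the lowest level, none of the triangles in that room need to be checked at all.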


Manufacturer: Intel

One Year Later

Daniel Pohl’s latest work takes a look at how ray tracing engines have developed over the past year and discusses how ray tracing can be advantageous compared to current rasterization techniques. The possibility of utilizing ray tracing and rasterization together is also explored.


This article is a follow-up to one released about a year ago through PC Perspective. A lot has changed since then. Real-time ray tracing for desktop machines is just around the corner.

Introduction

If you are new to the topic of ray tracing you might want to read through this section.

Ray tracing is a rendering technique that generates a 2D image out of a given 3D scene. This is done by simulating the physics of light propagation using rays. The algorithm shoots, for every pixel on the screen, a so-called “primary ray” from the perspective of the eye of the viewer. The ray tracing algorithm then determines which object is hit first on the path of the ray.
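The per-pixel generation of primary rays described above can be sketched as follows. The pinhole-camera setup, field of view, and function name are illustrative assumptions, not details from the article.

```python
import math

def primary_ray(x, y, width, height, fov_deg=90.0):
    """Return a normalized primary ray direction for pixel (x, y).

    Assumes a pinhole camera at the origin looking down -z.
    """
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2.0)
    # Map the pixel center into [-1, 1] screen space.
    px = (2.0 * (x + 0.5) / width - 1.0) * aspect * scale
    py = (1.0 - 2.0 * (y + 0.5) / height) * scale
    dx, dy, dz = px, py, -1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)
```

Shooting one such ray per pixel and finding the first object each ray hits is exactly the step the paragraph above describes.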


At that hit point, a shader program is invoked which could, for example, cast another ray to simulate a reflection in a mirror.
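Such a mirror reflection uses the standard reflection formula r = d − 2(d·n)n, where d is the incoming ray direction and n the surface normal. The helper below is a sketch with assumed names, not code from the article.

```python
def reflect(direction, normal):
    """Mirror an incoming ray direction about a surface normal.

    Both arguments are unit-length 3-tuples; implements r = d - 2(d.n)n.
    """
    d_dot_n = sum(di * ni for di, ni in zip(direction, normal))
    return tuple(di - 2.0 * d_dot_n * ni for di, ni in zip(direction, normal))
```

A shader at the hit point would shoot a new ray in this reflected direction and shade with whatever that ray hits.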


Through so-called “shadow rays” it can easily be determined whether a given pixel is lit or in shadow. If a ray from the point in question can be shot to the light source without being blocked, then light reaches that point; when it is blocked, the point is in shadow.
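The shadow-ray test can be sketched as follows, here assuming simple sphere occluders; all names and the sphere representation are illustrative assumptions, not from the article.

```python
def sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - disc ** 0.5
    return t if t > 0 else None

def point_is_lit(point, light_pos, spheres):
    """Shoot a shadow ray from `point` toward `light_pos`.

    `spheres` is a list of (center, radius) occluders. If any occluder is
    hit closer than the light, the point is in shadow.
    """
    direction = tuple(l - p for l, p in zip(light_pos, point))
    dist = sum(c * c for c in direction) ** 0.5
    direction = tuple(c / dist for c in direction)
    # Offset the origin slightly to avoid self-intersection at the surface.
    origin = tuple(p + 1e-4 * d for p, d in zip(point, direction))
    for center, radius in spheres:
        t = sphere_hit(origin, direction, center, radius)
        if t is not None and t < dist:
            return False          # blocked: point is in shadow
    return True                   # unobstructed: light reaches the point
```

Note the shadow ray only needs a yes/no answer, so the loop can stop at the first blocking hit rather than finding the closest one.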


The ray tracing approach I am reporting on in this article is calculated completely on the CPU. No graphics card is needed to create the image. (Once created, we merely transfer the pixels to the graphics card to have it paint the image onto the monitor.)

Another approach to generating a 2D image from a 3D scene is called “rasterization” and is performed by special-purpose hardware graphics cards. Currently, this is the standard way that games “render” the images you see, using standard libraries like DirectX or OpenGL.


