Introduction to Ray Tracing
Guest author Daniel Pohl shares his insight into using ray tracing for game rendering and how the increasing power of multicore processors is helping bring it to reality.
Imagine playing a computer game with the quality seen in the movies of the ‘Lord of the Rings’ series: absolutely realistic lighting, detail like in real life, believable skin and much more.
(‘Lord of the Rings’ movie: image rendered with ray tracing – Copyright New Line Cinema)
One technique used to create the images for those movies is called ‘ray tracing’. It is an alternative rendering technology to what current graphics cards in modern PCs and consoles use. For many years ray tracing was used only for offline rendering, and generating the pictures for movies often took many days of calculation. Realtime ray tracing was first made possible with the OpenRT ray tracing library: by combining many PCs over an Ethernet network, interactive frame rates could be rendered at high resolutions. Now, four years later, CPUs have progressed a lot and ray tracing runs at small resolutions on a single PC in realtime – but more on this later.
OpenRT
OpenRT is a ray tracing library developed by the Computer Graphics Group of Saarland University. The syntax of its API is nearly identical to that of OpenGL. A free Linux test version for programmers can be downloaded.
Webpage: http://www.openrt.de/
Ray tracing
So, how does ray tracing work? The algorithm is quite simple.
A basic ray tracing scenario.

1. From a virtual camera (denoted by the eye), primary rays are shot through every pixel on the screen.
2. For every ray the intersection hitpoint_primary with the object closest to the camera is calculated. In the example this is the green sphere.
3. The shader program of the green sphere is called.
4. In this example the material of the green sphere is totally reflective, so the shader shoots a reflection ray from hitpoint_primary into the reflected direction.
5. The reflection ray hits an object and hitpoint_reflected is calculated. In the example it is the red triangle.
6. The shader program of the red triangle is called.
7. A shadow ray is shot from hitpoint_reflected in the direction of the light source (not shown in the figure).
8. There is an object between hitpoint_reflected on the triangle and the light source.
9. Nothing is added to the triangle shader's color; the point therefore lies in shadow.
10. The triangle shader returns its color, which is added to the frame buffer.
11. A shadow ray is shot from hitpoint_primary in the direction of the light source.
12. There is no object between hitpoint_primary on the sphere and the light source.
13. From the attributes of the light source a color value is calculated and added to the frame buffer.
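The walkthrough above can be sketched in a few lines of Python. This is not OpenRT code: the scene (two spheres instead of a sphere and a triangle), all positions, colors and function names are simplifying assumptions made for illustration.

```python
import math

# Toy scene: a reflective green sphere and a diffuse red sphere,
# lit by a single point light.  All values are illustrative.
LIGHT = (5.0, 5.0, 0.0)
SPHERES = [
    {"center": (0.0, 0.0, 5.0), "radius": 1.0,
     "color": (0.0, 1.0, 0.0), "reflective": True},
    {"center": (0.0, 3.0, 5.0), "radius": 1.0,
     "color": (1.0, 0.0, 0.0), "reflective": False},
]

def dot(a, b):    return sum(x * y for x, y in zip(a, b))
def sub(a, b):    return tuple(x - y for x, y in zip(a, b))
def add(a, b):    return tuple(x + y for x, y in zip(a, b))
def scale(v, s):  return tuple(x * s for x in v)
def normalize(v): return scale(v, 1.0 / math.sqrt(dot(v, v)))

def intersect_sphere(origin, direction, center, radius):
    """Distance t along a unit-length ray to the nearest hit, or None."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None      # small epsilon avoids self-hits

def closest_hit(origin, direction):
    """Find the object closest to the ray origin (step 2)."""
    best = None
    for s in SPHERES:
        t = intersect_sphere(origin, direction, s["center"], s["radius"])
        if t is not None and (best is None or t < best[0]):
            best = (t, s)
    return best

def in_shadow(point):
    """Shadow ray: is anything between the point and the light?"""
    to_light = sub(LIGHT, point)
    dist = math.sqrt(dot(to_light, to_light))
    hit = closest_hit(point, normalize(to_light))
    return hit is not None and hit[0] < dist

def trace(origin, direction, depth=0):
    hit = closest_hit(origin, direction)
    if hit is None:
        return (0.0, 0.0, 0.0)                  # background
    t, obj = hit
    point = add(origin, scale(direction, t))    # the hitpoint
    normal = normalize(sub(point, obj["center"]))
    if obj["reflective"] and depth < 2:
        # reflection ray: r = d - 2 (d . n) n
        refl = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
        color = trace(point, normalize(refl), depth + 1)
    else:
        color = obj["color"]
    # keep the color only if the shadow ray reaches the light
    return (0.0, 0.0, 0.0) if in_shadow(point) else color
```

The shading here is deliberately crude – a point is either fully lit or fully black – whereas a real shader would scale the color by the light source's attributes, as the last step describes.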
For efficient ray tracing a spatial acceleration structure (e.g. a BSP tree, http://en.wikipedia.org/wiki/Binary_space_partitioning) is used, which holds all the geometry of the 3D scene. Thanks to this structure, the blue cube from the figure above was never touched during the calculation of the final color.
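A full BSP build and traversal is beyond a short sketch, but the effect it achieves – never touching geometry the ray cannot reach, like the blue cube – can be illustrated with a simpler, hypothetical bounding-box culling step in Python (the slab test and the box coordinates below are illustrative assumptions, not the article's method):

```python
import math

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray intersect the axis-aligned box?"""
    t_near, t_far = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                  # ray parallel to this slab
            if o < lo or o > hi:
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:                  # slab intervals do not overlap
            return False
    return True

def candidates(origin, direction, boxes):
    """Indices of boxes whose contents still need exact intersection
    tests; everything else is culled without touching its geometry."""
    return [i for i, (lo, hi) in enumerate(boxes)
            if ray_hits_aabb(origin, direction, lo, hi)]
```

A BSP tree goes further by organizing space hierarchically, so a single test can reject a whole subtree of geometry at once instead of one box at a time.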
Next Page – Quake 3: Ray Traced