Caustic Graphics is formally announcing availability of the CausticOne ray tracing accelerator card today, and we are finally allowed to unveil the technology we learned about at GDC last month. If you think the ray tracing and rasterization debate was interesting before, wait until you see what this new player has to say.
Ray tracing.  Many have said that rendering graphics by ray tracing is the only way to truly achieve the imagery end-game we all seek.  Many have disagreed.  PC Perspective has been covering both sides of this debate for quite some time.

We have hosted articles from ray tracing visionaries; we have interviewed prominent game developers and tech minds on the topic; we have seen what ray tracing developers envision; we have looked at Intel’s team of minds on ray tracing; and now we investigate another claim on the path of ray tracing’s move to the mainstream.

Who is Caustic Graphics?

In March of this year, Caustic Graphics surfaced as a breakthrough hardware and software design company with claims of solving the problem of ray tracing.  Founded by a trio of individuals from Apple Computer, one of whom I met with in their offices in San Francisco, the company was originally formed in 2006 and has been operating quietly until now.  The company CTO, James McCombe, was “number 3” in the world of OpenGL at Apple and joined the company in 2001, when there was basically no support for 3D technology on the Mac.  He was one of three people to design the software stack for OpenGL on the platform; he helped develop the languages used for fragment shading, worked on the specifications for HLSL (high level shading language) and ended his run at Apple working in the embedded group on low-power, efficient rasterization for the iPod and iPhone.

Caustic Graphics was formed after the three founders spent upwards of eight months together building models of ray tracing, testing out algorithms and deciding what direction the new company would eventually take.  Today I can finally discuss the technical details behind the hardware and software that I learned about in my recent visit.

Where is ray tracing used today?

I think few people would argue that in today’s world of rasterization-based rendering, there is a tremendous amount of effort involved in creating appearances and effects such as shadows, transparency and reflections.  Rasterization itself is really a complex collection of hacks accumulated over 20+ years of computing – effective and efficient but perhaps not ideal.  There are many areas of graphical design in which ray tracing would simplify a designer’s life; lights, shadows, anti-aliasing – all of these can be programmed more simply with ray tracing than with rasterization, if only ray tracing were fast enough.

There are some places today where ray tracing is already used and developers realize the benefits of the technology.  The obvious area is in films – much of the Pixar film work is ray traced, as are most of the final high-quality visual effects done in Hollywood.  Industrial designers, architects and others are already sold on ray tracing and would surely love to see acceleration hardware for a slow process they deal with every day.

Sample of ray tracing work by Intel – from our “Ray Tracing and Gaming – One Year Later” article

Another interesting place where ray tracing is used is in game development itself, even if it isn’t used during game playback on consoles or PCs.  Game studios will very often use ray tracing to produce the “pre-baked” data sets used in rasterization for visual effects like spherical harmonics or light maps.  Caustic Graphics hopes to entice these developers to at least start playing with its hardware by saving them time in the creative process.

Rasterization and Ray Tracing: that age-old debate

I have discussed the basic differences between rasterization and ray tracing many times before, but the basics of the debate bear repeating.  In rasterization, triangles are created and then broken up into sets of screen pixels that are threaded and processed on a GPU.  The beauty of rasterization is that these screen pixels are processed in small groups, and adjacent pixels will likely be running the same, or similar, shader code – you have a lot of threads doing the same thing to each pixel.  Because of this, the SIMD nature of a GPU is well utilized and the smaller on-board caches perform very well thanks to a strong locality of reference – GPUs are very efficient at rasterization.
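To make the coherence argument concrete, here is a minimal, hypothetical sketch of half-space (edge-function) rasterization – not any particular GPU's pipeline.  Every pixel in a tile is tested with the same three edge equations, which is exactly why a GPU can shade adjacent pixels in lockstep SIMD groups:

```python
# Hypothetical sketch: half-space (edge-function) triangle rasterization.
# Adjacent pixels evaluate the same three edge equations, so a GPU can
# run them in lockstep SIMD groups with excellent cache locality.

def edge(ax, ay, bx, by, px, py):
    """Signed-area test: >= 0 when (px, py) is on the left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2, width, height):
    """Return the pixel coordinates covered by a counter-clockwise triangle."""
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            if (edge(*v0, *v1, px, py) >= 0 and
                edge(*v1, *v2, px, py) >= 0 and
                edge(*v2, *v0, px, py) >= 0):
                covered.append((x, y))
    return covered
```

The key point for the debate: every iteration of that inner loop does identical work on neighboring data, which is the access pattern GPUs were built for.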

Ray tracing works quite differently in that rays are “shot” from the camera (traditionally, at least) toward every pixel on the screen.  When a ray intersects with an object in 3D space (the scene is still built from triangles, just as with rasterization), a shader runs.  That shader has the ability to spawn additional rays and will sometimes create a LOT of new rays bouncing off in different directions.  The more rays a shader is allowed to create, the more detailed the rendered image will be.  But of course, as the number of rays increases, the performance cost of ray tracing climbs quickly.  If each of the rays created by the first shader hits another object that runs yet another shader creating its own rays, it is easy to see how complex this tree of ray tracing data can become.  The problem for GPUs processing this type of data set is that the strong locality of reference of rasterization no longer exists – rays that bounced off the first object will likely not follow the same path, and thus subsequent shaders diverge into randomness.  GPUs do not handle randomness very efficiently.

In the past, and even today, there has been a lot of discussion about solving ray tracing on current-generation hardware with existing mainstream algorithms.  Most current ray tracing implementations are based on the idea of “packet tracing,” where blocks of pixels are grouped together to trace a set of rays that start out adjacent to each other.  That definitely increases the efficiency of ray tracing for the first bounce or so, but as the number of bounces increases and the randomness returns, the packets aren’t able to “stay together” in terms of their memory locality.  Even with advancements in shader processor design to include inter-thread communication (DX9 required this move), modern GPUs do not have enough bandwidth to handle the data shuffling required for RTRT (real-time ray tracing) processing, at least according to Caustic.
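To illustrate what a packet looks like, here is a hypothetical sketch of building the primary-ray packet for one screen tile – a generic illustration of the technique, not Caustic's or any specific tracer's implementation.  The coherence that makes the first bounce fast is visible in the code: every ray shares an origin and the directions differ only slightly.  After a bounce, nothing like this structure survives.

```python
# Hypothetical sketch of packet tracing: the primary rays for one 8x8
# tile of pixels share a camera origin and near-parallel directions, so
# they tend to visit the same acceleration-structure nodes and can be
# traversed together. After a bounce the directions scatter, and the
# packet loses this memory coherence.

def make_primary_packet(tile_x, tile_y, tile_size=8):
    """Build (origin, direction) pairs for one screen tile's primary rays."""
    packet = []
    for y in range(tile_size):
        for x in range(tile_size):
            origin = (0.0, 0.0, 0.0)  # all rays share the camera position
            direction = (tile_x * tile_size + x + 0.5,   # through the
                         tile_y * tile_size + y + 0.5,   # pixel center
                         1.0)
            packet.append((origin, direction))
    return packet
```

The whole packet can be marched through the scene's acceleration structure at once for the first hit; the argument in the paragraph above is that this trick stops paying off once the rays have bounced and diverged.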

In most cases for rasterization, pixels are completely independent of each other, but in ray tracing that is not the case – with many rays being traced inside each pixel, they become massively DEpendent.
