Hybrid rendering, graphics APIs and mobile ray tracing

PCPER: Based on your new data structure method using ray tracing, could you couple this with current rasterization methods for hybrid rendering?

CARMACK: I saw the quote from Intel about a hybrid approach making no sense, and I disagree with that.  I think that if you had, basically, a routine that ray traces an area of the screen in the sparse voxel octree, it’s going to spit out fragments, and it’s going to wind up having a depth value on there that you could intermix with anything else.  Even if you had a ray trace against a conventional architecture, you would still want to have a fragment program there that would look almost exactly like the fragment programs we’ve got right now.  I couldn’t imagine wanting to do something that didn’t have a back end like that.  I mean, you might even have vertex processors – with the stuff that Intel is doing right now, ray tracing into the geometry, it’s very likely that in the end you would want to be able to run the triangles you are ray tracing against through vertex and fragment processors, and you’re just getting the barycentric coordinates of your ray trace stab.  You have to know what you hit, but then you have to know what you want to do there.  In addition, you would want some ability to send dependent rays out from there as extra elements.
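The mixing he describes hinges on ray-traced fragments carrying a depth value. A minimal sketch of that idea, with purely hypothetical buffer layouts and function names (nothing here is id Tech 6 or any real API): each ray-traced fragment is depth-tested against the rasterized frame, so whichever surface is closer wins.

```python
# Hypothetical sketch: merging ray-traced fragments into a rasterized frame
# through a shared depth buffer. Layout and names are illustrative only.

def composite_rt_fragments(color_buf, depth_buf, rt_fragments):
    """Depth-test each ray-traced fragment (x, y, depth, color) against
    the rasterized buffers; the closer surface wins the pixel."""
    for x, y, depth, color in rt_fragments:
        if depth < depth_buf[y][x]:   # ray-traced hit is nearer: overwrite
            depth_buf[y][x] = depth
            color_buf[y][x] = color
    return color_buf, depth_buf
```

Because the test is the same one rasterized fragments already pass, the ray tracer can feed the identical back end Carmack describes.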

John Carmack on id Tech 6, Ray Tracing, Consoles, Physics and more - Graphics Cards 11
Quake / id Tech 1, 1996 – courtesy of MobyGames.com


It’s reasonably likely that, if my little data structure direction pans out, you’ll still want to do characters as skinned and boned with traditional animation methods.  While you could go ahead and work out a voxel method for characters, using refraction skeletons around characters, and you could do animation, you probably wouldn’t want to, because we can make characters that look pretty damn good with the existing stuff, and if everything continues to get 10x faster without us doing anything, you’ll probably want to do characters conventionally.  But if you can do the world and most of the static objects at the incredible level of detail that you would get with the sparse voxel octree approach, that seems like a completely reasonable way to mix and match. 
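The sparse voxel octree itself can be sketched in miniature. This toy version (the dict layout and names are my own, not Carmack's format) allocates children only along occupied paths, which is the sparsity that makes enormous static worlds plausible: empty space costs nothing because absent children simply mean "nothing here".

```python
# Toy sparse voxel octree over a size^3 cube (size a power of two).
# A node is a dict mapping child index 0..7 to a subtree; empty space
# is represented by absent children. Illustrative only.

def svo_insert(node, x, y, z, size):
    """Mark voxel (x, y, z) occupied, allocating nodes along the path."""
    if size == 1:
        node["occupied"] = True
        return
    half = size // 2
    child = (x >= half) + 2 * (y >= half) + 4 * (z >= half)
    sub = node.setdefault(child, {})
    svo_insert(sub, x % half, y % half, z % half, half)

def svo_query(node, x, y, z, size):
    """Is voxel (x, y, z) occupied? Absent children are empty space."""
    if size == 1:
        return node.get("occupied", False)
    half = size // 2
    child = (x >= half) + 2 * (y >= half) + 4 * (z >= half)
    if child not in node:
        return False
    return svo_query(node[child], x % half, y % half, z % half, half)
```

A real ray traversal would descend the same structure front to back along the ray, skipping absent children wholesale, which is where the speed comes from.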


Now, there are aspects where mixing and matching would work poorly.  It would be nice to be able to solve the shadowing problem really directly by ray tracing.  You could do that in a completely ray traced world: you just send the shadow rays out and jitter them and do all the nice things that let you solve the aliasing problem nicely.  But if you rasterized characters traditionally with hardware skinning, the voxel ray tracer wouldn’t find any intersections with the characters, and they wouldn’t cast any shadows.  So there are downsides to that.  But what I want to get out of ray tracing here is not a lot of what would be considered the traditional benefits of ray tracing: perfect shadows – shadowing would be damn nice to solve, but we can live without it – things like refraction and multiple mirror bounces.  Those just aren’t that important, and we have all the evidence in the world about that, because in the real world where people make production renderings, even when they have almost infinite resources on movie budgets, very little of it is ray traced.  There are spectacular offline ray tracers, but even when production companies have rooms and rooms of servers, they choose not to use ray tracing very often, because in the vast majority of cases it doesn’t matter.  It doesn’t matter for what they are trying to do, and it’s not worth the extra cost.  And that’s going to stay fairly similar throughout the next-generation gaming hardware models.
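The jittered shadow rays he mentions can be sketched as follows, assuming a single spherical occluder and a small box-shaped area light; every name here is illustrative. Each shadow ray aims at a jittered point on the light, and the averaged visibility gives a soft shadow term instead of a hard, aliased edge.

```python
# Sketch of jittered shadow rays against one spherical occluder.
# Assumed scene setup, not any engine's shadowing code.
import math
import random

def _ray_hits_sphere(origin, target, center, radius):
    """True if the ray from origin toward target intersects the sphere."""
    d = [t - o for t, o in zip(target, origin)]
    n = math.sqrt(sum(x * x for x in d))
    d = [x / n for x in d]                      # normalized direction
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(di * oi for di, oi in zip(d, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    return disc >= 0 and (-b - math.sqrt(disc)) > 1e-6

def soft_shadow(point, light_pos, light_radius,
                occ_center, occ_radius, n=64, seed=1):
    """Fraction of jittered shadow rays reaching the light (1.0 = fully lit)."""
    rng = random.Random(seed)
    unblocked = 0
    for _ in range(n):
        # jitter the target within a small box around the light center
        target = [c + rng.uniform(-light_radius, light_radius)
                  for c in light_pos]
        if not _ray_hits_sphere(point, target, occ_center, occ_radius):
            unblocked += 1
    return unblocked / n
```

Points in the penumbra see some rays blocked and some not, so the average lands between 0 and 1 – exactly the anti-aliasing benefit he is after.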


What I really want to get out of ray tracing is this infinite geometry, which is driven more by the data structure that you have to use ray tracing to access than by the fact that you’re bouncing multiple rays around.  I could do something next generation with this, and I hope that it pans out that way – we may not have dependent rays at all, and may just use ray tracing to solve the geometry problem.  Then you can also solve the aliasing problem by stochastically jittering all the sample centers, which is something that I’ve been pushing to have integrated into current rasterization approaches.  It’s obvious how you do it in a ray tracing approach: you jitter all the samples, and you have some dependent refinement approach going on there. 
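Stochastically jittering sample centers is easy to show in miniature. This sketch (toy scene, illustrative names) averages randomly jittered samples across a pixel footprint, so a pixel straddling an edge lands at a fractional coverage value instead of the hard 0-or-1 of a single center sample.

```python
# Stochastic supersampling of one pixel: jitter the sample position
# inside the pixel footprint and average. Toy scene, illustrative names.
import random

def shade(x, y):
    """Toy scene: white on one side of the diagonal edge x = y."""
    return 1.0 if x < y else 0.0

def pixel_value(px, py, samples=64, seed=1):
    """Average jittered samples over the unit pixel footprint at (px, py)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += shade(px + rng.random(), py + rng.random())
    return total / samples
```

A pixel well inside a region averages to exactly 0 or 1; a pixel on the edge converges toward its true coverage, which is what turns jaggies into smooth gradients.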

Quake II / id Tech 2, 1997 – courtesy of MobyGames.com


I think that we can have huge benefits completely ignoring the traditional ray tracing demos of “look at these shiny reflective curved surfaces that make three bounces around and you can look up at yourself”.  That’s neat, but that’s an artifact shader – something that you look at one tenth of one percent of the time in a game.  And you can do a pretty damn good job of hacking that up just with a bunch of environment map effects.  It won’t be right, but it will look cool, and that’s all that really matters when you’re looking at something like that.  We are not doing light transport simulation here; we are doing something that is supposed to look good.
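The environment map hack he endorses needs little more than the classic reflection vector, r = d − 2(d·n)n, to index a prefiltered cubemap instead of tracing real bounces. A sketch of just that vector math (no real engine API implied):

```python
# Reflection vector for an environment-map lookup: r = d - 2(d.n)n,
# with n a unit surface normal. Illustrative sketch only.

def reflect(d, n):
    """Reflect incident direction d about unit normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return [a - 2.0 * dot * b for a, b in zip(d, n)]
```

Sampling a cubemap along this vector gives the “wrong but cool-looking” mirror effect for a tiny fraction of the cost of multi-bounce rays.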

PCPER: So current generation consoles and PC graphics cards aren’t going to be capable of running this new type of sparse voxel octree-based technology?  And do you think vendors adding support for it in next-generation hardware would be sacrificing any speed or benefits on the rasterization side?


CARMACK: Right, not at all.  You could certainly do it (sparse voxel octrees), but it’s not going to be competitive.  The number of pixels that you could generate with that would be less than a tenth of what you could do with a rasterization approach.  But the hope would be that in the coming generation we might have the technology for it.  


No matter who does what, the next generation is going to be really good at rasterization; that is a foregone conclusion.  Intel is spending lots of effort to make sure Larrabee is a competitive rasterizer.  And it’s going to be ballpark competitive – we’ll see how things work out, but a factor of two plus or minus is most likely.  But everything is going to be a good rasterizer.  We should have enough general purpose computational ability to also be able to do some of these other novel architectures, and while everybody thinks it’s going to be great, I have to reiterate that nobody has actually shown exactly how it’s going to be great.  I have my ideas, and I’m sure other people have their ideas, but it’s completely possible that the next generation of high end graphics is just going to be rasterizing like we do today, with a little more flexibility and 10x the speed. 


PCPER: Do you think DirectX or OpenGL will have to be modified for this?


CARMACK: They are almost irrelevant in a general purpose computation environment.  They are clearly rasterization-based APIs given their heritage, but there is a lot of head room left in the programming we can do with this.  Almost any problem that you ask for can be decomposed into these data-parallel sorts of approaches, and it’s not like we’re capping out what we can do with rasterization-based graphics.  But when you get these general purpose computing things going, they will look like different environments for how you would program things.  It could look like CUDA, or with Larrabee you could just program them as a bunch of different computers with SIMD units.

Quake III: Arena / id Tech 3, 1999 – courtesy of MobyGames.com  


PCPER: Intel has discussed the benefit of ray tracing’s ability to scale with the hardware when they showed off the Q4: Ray Traced engine on a UMPC recently.  What are your thoughts on that possible advantage?


CARMACK: Speaking as someone who is both a mobile developer and a high-end console developer, that’s a ridiculous argument. 


PCPER: Rasterization can scale just as easily?


CARMACK: Yeah.  The idea of moving ray tracing onto the mobile platforms makes no sense at all. 


PCPER:  What are your thoughts on Intel’s purchase of Havok and Project Offset?  One theory is that Intel is going to be making a game engine either for demos or to sell.  Do you think this is their hope in addressing the ability to “show a win” as you mentioned before?


CARMACK:  That’s what they have to do; that’s always been my argument to Intel and, to a lesser degree, the other companies.  The best way to evangelize your technology is to show somebody something – to show an existence proof for it, to kind of eat your own dog food in terms of working with everything.  Instead of just telling everyone “you should be able to do great things with this”, the right thing to do is for them to produce something that is spectacular and then say “OK, everybody that wants this, here’s the code”.  The best way to lead anybody is by example.  They’ll learn the pros and cons of everything directly there, and I very much endorse that direction for them. 
