Hybrid rendering, graphics APIs and mobile ray tracing
PCPER: Based on your new data structure method using ray tracing, could you couple this with current rasterization methods for hybrid rendering?
[Image: Quake / id Tech 1, 1996 – courtesy of MobyGames.com]
[Image: Quake II / id Tech 2, 1997 – courtesy of MobyGames.com]
PCPER: So current generation consoles and PC graphics cards aren’t going to be capable of running this new type of sparse voxel octree based technology? And do you think vendors adding in support for it for next-generation hardware would be sacrificing any speed or benefits to rasterization?
PCPER: Do you think DirectX or OpenGL will have to be modified for this?
CARMACK: They are almost irrelevant in a general purpose computation environment. They are clearly rasterization-based APIs given their heritage, but there is a lot of headroom left in the programming we can do with this. Almost any problem that you ask for can be decomposed into these data-parallel sorts of approaches, and it's not like we're capping out what we can do with rasterization-based graphics. But when you get these general purpose computing things going, they will look like different environments for how you would program things. It could look like CUDA, or with Larrabee you could just program them as a bunch of different computers with SIMD units.
[Image: Quake III: Arena / id Tech 3, 1999 – courtesy of MobyGames.com]
PCPER: Intel has discussed the benefit of ray tracing’s ability to scale with the hardware when they showed off the Q4: Ray Traced engine on a UMPC recently. What are your thoughts on that possible advantage?
PCPER: Rasterization can scale just as easily?
PCPER: What are your thoughts on Intel’s purchase of Havok and Project Offset? One theory is that Intel is going to be making a game engine either for demos or to sell. Do you think this is their hope in addressing the ability to “show a win” as you mentioned before?