Ryan Shrout: One of the interesting topics going around the graphics world now is that of the “infinite detail” engines, Voxels, and these types of things. I know that you have played around with Voxels, written little Voxel engines, and that kind of stuff. What do you think about this debate on the possibility of the “Infinite Detail” engine?
John Carmack: Okay, for one thing, I think it’s important to separate the notion of infinite detail and voxels. You can have voxels [that are not infinitely detailed] because many of the voxel engines that I’ve written have been at finite, coarse levels of detail. The fact that you can instance detail in [voxels]… in many ways it sounds awesomely cool: “infinite detail,” but if we look at all of the trends that we’ve been doing, and that Rage epitomizes in many ways, procedurally generated detail is usually not what you want. This has been an argument going back decades: “now is the year of procedurally generated textures and geometry.” We’ve heard that for a decade and it has never come true. What has won is being able to manage the real data that we want. Proceduralism is really just a truly crappy form of data compression. You know, you have the data that you really want, and proceduralism makes you something that might resemble what you really want, but it’s a form of extraordinarily lossy data compression that lets you produce something there.
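The “proceduralism is lossy compression” point can be made concrete with a toy sketch: a procedural texture is just a few bytes of code and parameters that regenerate plausible detail on demand, rather than storing the texels an artist actually authored. This hash-noise example is purely illustrative; it is not how Rage or MegaTexture works, and all names are invented.

```python
def hash_noise(x, y, seed=0):
    """Deterministic pseudo-random value in [0, 1) per integer lattice point."""
    h = (x * 374761393 + y * 668265263 + seed * 1274126177) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF  # integer mix step
    return (h ^ (h >> 16)) / 2**32

# A whole "texture" tile is reproduced from just the seed: same inputs,
# same output -- but it can only contain what the formula can express,
# never the specific data an artist would have authored.
tile = [[hash_noise(x, y, seed=42) for x in range(4)] for y in range(4)]
```

The storage cost is a seed and a formula instead of a texel array, which is exactly the compression trade-off described above: tiny, but incapable of reproducing arbitrary authored data.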
And that really is the problem with the voxels. Infinite detail is basically proceduralism, and you can do that with polygons, voxels, atoms, splats, whatever. They are tied together now with people talking about the recent demo that came out, but they are really orthogonal topics. I don’t think the notion of infinite detail is actually all that important. It is more important to be able to get the broad strokes of the artistic vision in there, and if you take uninspired content and look at it at the molecular level, it is still uninspired content. It’s not really going to make a big difference. What is potentially useful about voxels is that they allow you to access them in a certain way that may be more efficient than, say, a REYES pipeline of dicing polygons, doing displacement mapping, and things like that. They are both ways to give huge amounts of geometric detail, which is obviously the next frontier after we uniquely texture everything. We want to uniquely “geometrify” everything. We all want to go there, but it’s also important to realize that I was showing MegaTexture demos five years ago. It took five years for it to turn into a production-quality game, and you could make a cool, flashy demo of, like, “look, isn’t this amazing? We can stamp down stuff and everything looks totally different. There’s no tiling, and we can use procedural information to generate all of this,” but there is a huge amount of work that goes into building a robust, production-quality system that you can build real game worlds with.
I’ve revisited voxels at least a half dozen times in my career, and they’ve never quite won. I am confident in saying now that ray tracing of some form will eventually win, because there are too many things that we’ve suffered with rasterization for, especially for shadows and environment mapping. We live with hacks that ray tracing can let us do much better. For years I was thinking that traditional analytic ray tracing, intersecting with an analytic primitive, couldn’t possibly be the right solution, and it would have to be something like voxels or metaballs or something. I’m less certain of that now because the analytic tracing is closer than I thought it would be. I think it’s an interesting battle between potentially ray tracing into dense polygonal geometry versus ray tracing into voxels and things like that. The appeal of voxels, like bitmaps, [is that] a lot of things can be done with filtering operations. You can stream more things in, and there are still very definite appeals to that. You start to look at them as little light field transformers rather than hard surfaces that you bounce things off of. I still wouldn’t say that the smart money is on voxels, because lots of smart people have been trying it for a long time. It’s possible now with our current, modern-generation graphics cards to do incredible full-screen voxel rendering into hyper-detailed environments, and especially as we look towards the next generation I’m sure some people will take a stab at it. I think it’s less likely to be something that is a cornerstone of a top-of-the-line triple-A title. It’s in the mix, but not a foregone conclusion right now.
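For readers unfamiliar with what “ray tracing into voxels” means in practice, the inner loop of a grid-based voxel ray tracer is usually a DDA traversal that steps a ray from cell to cell until it hits a solid voxel. The following is a minimal sketch of that idea over a dense boolean grid; the function name and the toy grid are invented for illustration and have nothing to do with any of Carmack’s actual research engines.

```python
def raycast_voxels(grid, origin, direction, max_steps=64):
    """Step a ray through a dense boolean voxel grid (DDA-style traversal).
    Returns the first solid voxel's (x, y, z) index, or None if no hit.
    Assumes a cubic grid and non-negative coordinates for simplicity."""
    pos = [int(origin[0]), int(origin[1]), int(origin[2])]
    step = [1 if d > 0 else -1 for d in direction]
    # t_max: ray distance to the next voxel boundary on each axis.
    # t_delta: ray distance to cross one whole voxel on each axis.
    t_max, t_delta = [], []
    for p, d in zip(origin, direction):
        if d == 0:
            t_max.append(float("inf"))
            t_delta.append(float("inf"))
        else:
            next_boundary = int(p) + (1 if d > 0 else 0)
            t_max.append((next_boundary - p) / d)
            t_delta.append(abs(1.0 / d))
    n = len(grid)
    for _ in range(max_steps):
        if all(0 <= c < n for c in pos) and grid[pos[0]][pos[1]][pos[2]]:
            return tuple(pos)
        axis = t_max.index(min(t_max))  # advance across the nearest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None
```

The filtering appeal mentioned above comes from the grid structure itself: because voxels live on a regular lattice like texels, mipmap-style averaging and streaming apply directly, which is much harder to do with irregular triangle soups.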
Ryan Shrout: In the near-term, what are your thoughts on tessellation? Does id Tech 5 implement any tessellation?
John Carmack: No, we don’t have any tessellation. Tessellation is one of those things where bolting it on after the fact is not going to do anything for anybody, really. It’s a feature that you go up and look at, specifically to look at the feature you saw on the bullet point, rather than something that impacts the game experience. But if you take it into account from your very early design, and this means how you create the models, how you process the data, how you decimate to your final distribution form, and where you filter things, all of these very early decisions (which we definitely did not make on this generation), then I think tessellation has some value now. I think it’s interesting that we are right now at polygon density levels in a no-man’s land for tessellation, because tessellation is at its best when doing a RenderMan-like thing, going down to micro-polygon levels. Current-generation graphics hardware really kind of falls apart at the tiny levels, because everything is built around dealing with quads of texels so you can get derivatives for your texture mapping. You always deal with four pixels, and it gets worse when you turn on multi-sample anti-aliasing (AA), where in many cases if you do tessellate down to micro-polygon sizes, the fragment processor may be operating at less than 10% of its peak efficiency. When people do tessellation right now, what it gets you is smoother things that approach curves. You can go ahead and have the curve of a skull, or the curve of a sphere. Tessellation will do a great job of that right now. It does not do a good job at the level of detail that we currently capture with normal maps: the tiny little bumps in pores and dimples in pebbles.
Tessellation is not very good at doing that right now, because that is a pixel-level, fragment-level amount of detail, and while you can crank it up (although current tessellation is kind of a pain to use because of the fixed buffer sizes on the input and output [hardware]), it is a significant amount of effort to set an engine up to do that down to an arbitrary level of detail. Current hardware is not really quite fast enough to take that down to the micro-polygon level.
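The cost explosion Carmack describes is easy to demonstrate: dicing geometry down toward micro-polygon size multiplies triangle counts roughly fourfold every time the target edge length halves. Here is a toy longest-edge bisection sketch of that dicing idea; the function names and the 2D triangle are made up for the example and are not how any REYES or hardware tessellator actually works.

```python
import math

def edge_len(a, b):
    """Euclidean distance between two 2D points."""
    return math.dist(a, b)

def dice(tri, max_edge):
    """Recursively split a triangle's longest edge at its midpoint until
    every edge is <= max_edge; returns the list of resulting triangles."""
    a, b, c = tri
    edges = [(edge_len(a, b), a, b, c),
             (edge_len(b, c), b, c, a),
             (edge_len(c, a), c, a, b)]
    longest, p, q, r = max(edges, key=lambda e: e[0])
    if longest <= max_edge:
        return [tri]
    mid = tuple((u + v) / 2 for u, v in zip(p, q))
    return dice((p, mid, r), max_edge) + dice((mid, q, r), max_edge)
```

Halving `max_edge` roughly quadruples the output count, which is why pushing a whole scene to sub-pixel triangles overwhelms hardware that shades 2x2 pixel quads: a triangle smaller than a quad wastes most of the fragment work.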
It’s almost like procedural data, where we’ve heard tessellation is going to be the “big thing” since the N-patches from the ATI stuff, and there are reasons why it never caught on. Because, in the early days of putting shells on things, when you would say “well, we’ve got a Bézier spline, or a NURB, or something like that,” what we would find is that if you are going to have this net of 16 vertexes around here, you can do cooler game art by making those 16 vertexes into triangles. You’ll have cooler protrusions rather than your smooth Gumby shape. Now that we have the ability to go ahead and do texture sampling and real bump mapping, it becomes interesting from a content standpoint, but we don’t quite have the power to do the entire world. You can run a character down like that with current-generation stuff, and that’s probably a useful direction, but you can’t yet go ahead and render your 2-million-pixel world at sub-pixel micro-polygon triangles (and certainly not at 60fps). That’s the type of performance that we are going to get to no matter what, and I think the smart money bet for next-generation consoles is that early on they are just going to be hyped-up versions of our current technologies, but when people build technologies from scratch for them, the smart money would be on a tessellation-based approach, you know, all the way down to micro-polygon levels, with outside bets on voxel, ray tracing, and hybrid engines.
Ryan Shrout: I think you mentioned in your keynote yesterday that Rage was not going to have support for Eyefinity? What is your long-term view on those technologies (multi-displays, 3D technologies, how they can differentiate from consoles)? Is that something that you think is going to, or should, catch on?
John Carmack: Historically id is known for having a lot of patches for weird, quirky things, and I hope we can follow that up with this [after getting] the game out the door. The great thing is we still have another month on the PC, as the console certification process takes longer than the PC, so the PC is at parity with the consoles now. You crank up the AA and all this, but we hope that we can do some extra things for the PC. Not much is going to go in initially, but I have research engines that I hope we can release, at least as novelty patches, which is what I used to do in the GLQuake, QuakeWorld days. It’s not clear exactly what we are going to have with that, but I’ve promised myself that after Rage is done I get to buy a bunch of toys on the PC. I’ve got my Kinect SDK, ordered a new head-mounted display, and probably will set up the multi-monitor stuff and run through all of that. A lot of that is legitimate research where I need to gauge “how important are these things that we can do, and what benefits do you get by adding these additional layers?” I’ve been saying this for years, but my money is still eventually on direct ocular scanning. I think mobile devices will probably be the thing that drives it home, where we are carrying supercomputers around in our pockets that are crippled by the lack of IO devices on there. I think the breakthrough comes once we get the thing that clips onto your glasses and laser-scans into your eye to give you very high resolution.
I really strongly believe at this point that the big-impact changes, where people are going to say “wow, this game is so much different than what we’ve had before,” are going to come from IO devices. We’ve got the rendering ability to do this for sure; we can do incredible virtual reality world rendering, but if you are just looking at it on a TV set there is a limit to what the extra quality will do. But if we can get below the perceptible response level of looking around and experiencing the world, even if we took the worlds that we have today and were able to get that extra level of immersion, I think that’s going to make games really go to the next level. It’s not really clear what that is: is it a consumer head-mounted display, is it a free-form display, looking around at things? I think [the reason why] we haven’t gotten enough vibe from 3D displays and head-tracking 3D displays is that you are still looking at a window into the world. The next big step has to be from something that attaches to and moves with you.
Ryan Shrout: Something that came up: Rage is going to be… you’re targeting 60fps. Is there going to be any sort of benchmarking or testing modes?
John Carmack: That’s something we don’t have in yet. In fact, we need to implement, at least in the next month, some sort of micro-benchmark just to tell, if you crank the settings to a specific level, what quality it is. We don’t have timedemo runs through all of this, but we probably will have some sort of artificial scenes set up just to benchmark in the video settings. We’ll then spit out a number so that you can see “can I crank AA up or down on this?” It’s frustrating on the PC that while you might have the hardware capability to run at these extremely high resolutions and all of this, we get tripped up a lot on the drivers, texture management, some of the fencing, and other resource management.
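The kind of micro-benchmark described above boils down to rendering a fixed artificial scene repeatedly, recording per-frame times, and reporting whether a given settings level holds the 60fps budget. Here is a minimal sketch of the reporting side only; the function name, the 16.7ms budget constant, and the sample numbers are all invented for illustration and are not from id Tech 5.

```python
def summarize_frame_times(frame_times_ms, budget_ms=16.7):
    """Summarize a benchmark run: returns (average frame time,
    worst frame time, fraction of frames within the 60fps budget)."""
    n = len(frame_times_ms)
    avg = sum(frame_times_ms) / n
    worst = max(frame_times_ms)
    within = sum(1 for t in frame_times_ms if t <= budget_ms) / n
    return avg, worst, within

# Hypothetical captured frame times (ms) with one dropped frame:
times = [14.1, 15.0, 16.2, 15.5, 33.4, 14.8]
avg, worst, within = summarize_frame_times(times)
```

Reporting the worst frame and the within-budget fraction, not just the average, matters for a locked-60 target: a run can average under 16.7ms while still hitching visibly on individual frames.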
Ryan Shrout: Okay. id Tech 5, in terms of the licensing, what sort of tools do you have available for the developers?
John Carmack: It’s a high-level corporate decision that there is no external licensing. Only companies in the ZeniMax family have access to the id Tech 5 technology, and there are a couple of other teams working with it right now. For the user community, all of our tools are only available in the 64-bit version. We ship the 32-bit version that we treat as a sort of console platform. We will be releasing the 64-bit version after the fact, but honestly there is going to be a limit to what people can do with it, because there is so much infrastructure that goes into building a MegaTextured world. I suspect that there will be one or two people who go through the trouble to figure out how to really build a MegaTextured world, or do some of the stamping effects on there. Mostly it will be for changing gameplay stuff. You can set up new layers, build a new multiplayer layout, and build a nightmare difficulty level going through it. Unfortunately, there is a terabyte of source art that goes into building the game, and we are certainly not going to be pushing all that out for download.
Ryan Shrout: Now that you are part of the ZeniMax family, you have been seeing what other developers such as Bethesda have done with Skyrim. How has that affected what you have done with Rage and what you think you’ll be doing in the future?
John Carmack: The great thing about ZeniMax is we have these Christmas get-togethers where all the teams get up and show their product to just the family, not worrying about how it’s going to be taken by the press or public. It’s really neat to be part of a family like this, where it’s not id against the world. It’s our team here, and we can cheer for Skyrim. There have been some specific things that I’ve looked at [after] hearing Todd Howard talk about design decisions in Skyrim [concerning] what an adventure game is, and what people get out of it. There are some very specific things, like how people feel about your loot and items, how you want to fondle your items, and scroll through and look at the different things. A lot of those I have as specific bullet points for Rage too, things that I want to go into and do a better job on. I want to play this up. There are different genres, and there are things that you choose to do one way that detract from certain other aspects, but there is low-hanging fruit that is just better: making items cooler, making more things that you can look at, things that you can completely ignore, or, if that’s your thing, you can go in there and drool over all of your stuff and plot your acquisition strategy for all of this. We can add this level of things to our game, and we started to with Rage, but there is a lot more we can get out of it. We can add this and it can make the game much better for lots of people. Some people might not ever notice that extra stuff is there, but for some people it’s going to double the fun that they have with the game. It is great to be here with masters of the craft. There is a lot that we can learn from them, and it’s a good relationship.
Ryan Shrout: Thanks for talking with us.
John Carmack: Thank you.