Activision recently showed off its Next-Generation Character Rendering technology, a new method for rendering realistic, high-quality 3D faces. The technique has been in the works for some time and has reached the point where faces are detailed down to the pores, freckles, wrinkles, and eyelashes.
In addition to Lauren, Activision also showed off its own take on the face used in NVIDIA's Ira FaceWorks tech demo, except that instead of NVIDIA's renderer, the face was produced with Activision's own Next-Generation Character Rendering technology, a method that is allegedly more efficient and "completely different" from the one used for Ira. In a video showing off the technology (embedded below), the Activision method produces some impressive 3D renders in real time, but the faces look a bit creepy and unnatural when they talk. Perhaps Activision and NVIDIA should find a way to combine the emotional improvements of Ira with the graphical prowess of NGCR (and while we are making a wish list, I might as well add TressFX support… heh).
The high resolution faces are not quite ready for the next Call of Duty, but the research team has managed to render the models at 180 FPS on a PC with a single GTX 680 graphics card. That leaves too little headroom to use the technology in an actual game, where multiple characters, the environment, physics, AI, and all manner of other work must share the frame budget at acceptable frame rates, but it is nice to see this kind of forward-looking research being done now. Perhaps in a few graphics card generations the hardware will catch up to the face rendering technology that Activision (and others) are working on, which will be rather satisfying to see. It is amazing how far the graphics world has come since I got into PC gaming with Wolfenstein 3D, to say the least!
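To put that 180 FPS figure in perspective, here is a minimal back-of-the-envelope frame-time sketch in Python. The 60 FPS target and the per-frame budgeting are my own assumptions for illustration, not numbers from Activision's demo:

```python
# Frame-time budget sketch: how much headroom a single head rendered at
# 180 FPS leaves inside a hypothetical 60 FPS game frame. Only the 180 FPS
# figure comes from the demo; the 60 FPS target is an assumption here.

def frame_time_ms(fps: float) -> float:
    """Convert a frame rate into milliseconds spent per frame."""
    return 1000.0 / fps

head_ms = frame_time_ms(180)    # ~5.6 ms to render the face in isolation
budget_ms = frame_time_ms(60)   # ~16.7 ms total budget at 60 FPS

remaining_ms = budget_ms - head_ms
print(f"Face alone: {head_ms:.1f} ms of a {budget_ms:.1f} ms frame")
print(f"Left for world, physics, AI, etc.: {remaining_ms:.1f} ms")
# Roughly 11 ms remain for the entire rest of the game, and that is with a
# single character on screen -- hence the "not quite ready for a game" caveat.
```

In other words, one talking head would already consume about a third of a 60 FPS frame on that hardware, which is why the team frames this as future-looking work rather than something shipping in the next title.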
The team behind Activision's Next-Generation Character Rendering technology includes:
| Team Member | Role |
| --- | --- |
| Javier Von Der Pahlen | Director of Research and Development |
| Etienne Donvoye | Technical Director |
| Bernardo Antoniazzi | Technical Art Director |
| Zbyněk Kysela | Modeler and Texture Artist |
| Mike Eheler | Programming and Support |
| Jorge Jimenez | Real-Time Graphics Research and Development |
Jorge Jimenez has posted several more screenshots of the GDC tech demo on his blog that are worth checking out if you are interested in the new rendering tech.
I’m a 3D artist and I will admit, I’m not that amazed by this. Granted, real time is an achievement, but then how real time was it? What is the polygon count? And where is the hair? And where is the environment?
We’re still a ways off before that is going to be available in production. I just hope the “ways off” will be proven wrong in a short time.
A step in the right direction, just not a major leap.
I agree with Prodeous.
I was disappointed to see the old mouth effect, a technique that was used in the ’90s.
I do agree that a combination of corporate teamwork would greatly improve the technology to where it should stand today.
Nice share!
A little bit higher resolution mesh, with better texture mapping! I cannot wait until we can 3D model with particles that simulate real clay and then convert the outer particles to a mesh; it can be a pain in the A$$ fighting with mesh geometry! What I really want is a Kickstarter project that takes the AMD FirePro A300/A320 series APU (APUs with professional-level integrated graphics and certified drivers) and puts it in a laptop with the Google Chromebook Pixel’s screen resolution! PcPer, please ask AMD why I cannot buy this APU for a home build; it is only available to OEMs at the current time! My letters to Santa have gone unfulfilled these last few years, asking for a MacBook Pro Retina with a professional-level graphics CPU/GPU and professional-level drivers and driver support!
This is all well and good, but… will their future games look as good as these graphics?
Impressive for a DEMO? Yes.
Try a full-fledged game where EVERYTHING is up to this level.
On a side note, no one has gotten mouth sync with speech and ear movement at the same time.
I’m a physician, and the muscle connections are not accurate. For example, when someone gives a big smile, the muscles near the mandible also contract, which in turn moves the ears to a very small degree. Also missing is the dilation/contraction of the pupils with changes in light and head bobbing.
These minute details matter. To a trained eye such as mine, it’s very noticeable. The untrained eye just senses that something is “off,” and that breaks the illusion of the image being real.
Was it good? Yes. Could it fool the average person? Hell no.
Still uncanny valley.