NVIDIA has introduced new research at the NeurIPS AI conference in Montreal that allows 3D environments to be rendered from models trained on real-world videos. It's a complex topic with potential beyond scientific research, including possible applications for game developers, though it hasn't reached the "product" stage just yet. A video accompanying today's press release shows how the researchers have implemented the technology so far:

"Company researchers used a neural network to apply visual elements from existing videos to new 3D environments. Currently, every object in a virtual world needs to be modeled. The NVIDIA research uses models trained from video to render buildings, trees, vehicles and objects."

The AI-generated city in a simple driving game demo shown at the conference gives us an early look at the sort of 3D environment the neural network can render: "the generative neural network learned to model the appearance of the world, including lighting, materials and their dynamics" from video footage, and the result was rendered as a playable game environment using Unreal Engine 4.
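For readers curious about the mechanics, the core idea is a conditional generator: the game engine supplies a label map describing what is where in each frame, and a network trained on real-world video fills in the appearance. The sketch below is purely illustrative and is not NVIDIA's code; the class count, layer sizes, and resolution are assumptions made for the example.

```python
# Minimal sketch (assumptions, not NVIDIA's implementation): a generator that
# takes a semantic segmentation map, as a game engine could export per frame,
# and synthesizes an RGB image with learned real-world appearance.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed number of semantic labels (road, car, tree, ...)

class SegToImageGenerator(nn.Module):
    """Encoder-decoder mapping a one-hot segmentation map to an RGB frame."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),           # downsample
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # upsample
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=7, padding=3),
            nn.Tanh(),  # RGB output in [-1, 1]
        )

    def forward(self, seg_one_hot: torch.Tensor) -> torch.Tensor:
        return self.net(seg_one_hot)

# Each frame, the engine's label map (H x W of class ids) is one-hot encoded
# and passed through the trained generator to produce a photorealistic frame.
labels = torch.randint(0, NUM_CLASSES, (1, 256, 512))  # stand-in label map
one_hot = torch.nn.functional.one_hot(labels, NUM_CLASSES).permute(0, 3, 1, 2).float()
frame = SegToImageGenerator()(one_hot)  # shape (1, 3, 256, 512)
```

In the research demo, the real system is trained adversarially on dashcam-style footage so that the output matches real-world lighting and materials; the untrained toy network above only shows where the engine's labels plug in.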

"The technology offers the potential to quickly create virtual worlds for gaming, automotive, architecture, robotics or virtual reality. The network can, for example, generate interactive scenes based on real-world locations or show consumers dancing like their favorite pop stars."

Beyond video-to-video, this research can also be applied to still images, with models providing the basis for movement that is eventually rendered (the video embedded above includes a demonstration of this aspect of the research, and yes, dancing is involved). While all of this might be a year or two away from appearing in a new game release, the possibilities are fascinating to contemplate, to say the least.
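To give a flavor of how the "dancing" demo could work in principle, here is a hedged sketch of pose-conditioned generation: a source video supplies body keypoints per frame, and those keypoints condition a generator that produces frames of the target person. The joint count, rasterization, and stand-in generator below are assumptions for illustration, not NVIDIA's implementation.

```python
# Illustrative sketch of pose-conditioned frame generation (assumptions only).
import torch
import torch.nn as nn

NUM_JOINTS = 18    # assumed number of skeleton keypoints
H, W = 256, 256    # assumed frame resolution

def rasterize_pose(keypoints: torch.Tensor) -> torch.Tensor:
    """Turn (NUM_JOINTS, 2) pixel coordinates into a per-joint channel map."""
    canvas = torch.zeros(1, NUM_JOINTS, H, W)
    for j, (x, y) in enumerate(keypoints.long().tolist()):
        if 0 <= x < W and 0 <= y < H:
            canvas[0, j, y, x] = 1.0  # mark each joint's location
    return canvas

# Stand-in generator: in the real system this would be a network trained on
# footage of the target person; a single conv layer keeps the sketch runnable.
generator = nn.Conv2d(NUM_JOINTS, 3, kernel_size=3, padding=1)

# Poses extracted from the source (driving) video, one set of joints per frame.
driving_poses = [torch.rand(NUM_JOINTS, 2) * torch.tensor([W, H]) for _ in range(4)]

frames = [generator(rasterize_pose(p)) for p in driving_poses]  # (1, 3, H, W) each
```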