SIGGRAPH 2017 is still a few months away, but we’re already starting to see demos get published as groups try to get them accepted to various parts of the trade show. In this case, Physics Forests published a two-minute video where they perform fluid simulations without actually simulating fluid dynamics. Instead, they use a deep-learning AI to hallucinate a convincing fluid dynamics result given their inputs.
We’re seeing a lot of research into deep-learning AIs for complex graphics effects lately. The goal of most of these simulations, whether they are for movies or video games, is to create an effect that convinces the viewer that what they see is realistic. The goal is not to create an actually realistic effect. The question then becomes, “Is it easier to actually solve the problem? Or is it easier to have an AI, trained on a pile of data sorted into successes and failures, come up with an answer that looks correct to the viewer?”
In a lot of cases, like global illumination and even possibly anti-aliasing, it might be faster to have an AI trick you. Fluid dynamics is just one example.
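To make the idea concrete, here is a minimal toy sketch of the general approach: generate training pairs from a real (here, trivially simple) solver, fit a learned operator to those pairs, and then use the learned operator in place of the solver at playback time. This is purely illustrative and assumes nothing about the actual Physics Forests method beyond the broad idea of a learned surrogate; every name below is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def solver_step(u):
    # Stand-in "expensive solver": a periodic shift, i.e. trivial
    # 1-D advection. A real fluid solver would go here.
    return np.roll(u, 1)

# Generate training pairs (state, next state) by running the solver.
X = rng.standard_normal((500, 32))
Y = np.array([solver_step(u) for u in X])

# Fit a linear surrogate W minimizing ||X W - Y|| with least squares.
# A deep network would play this role for a genuinely nonlinear solver.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def learned_step(u):
    # At runtime the learned operator replaces the solver entirely.
    return u @ W

u0 = rng.standard_normal(32)
err = np.max(np.abs(learned_step(u0) - solver_step(u0)))
print(f"max error vs. solver: {err:.2e}")
```

Because the toy dynamics are linear, least squares recovers them almost exactly; the interesting point is the structure of the pipeline, not the model. The real systems swap in a deep network and accept results that merely *look* right rather than match the solver.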
That’s real pretty; being able to interrupt the water flow is amazing.
Here is a video talking about an older paper using similar ideas: https://www.youtube.com/watch?v=iOWamCtnwTc
It wouldn’t surprise me if this sort of thing becomes the next big revolution for games in a few years. There are components here that could be used for pretty much any part of a game, from physics simulation and animation to asset creation, character AI, and actual quest creation.