Compare and Contrast Mapping

A sizable portion of NVIDIA's presentation discusses how things are typically done, with each situation compared to the way Ansel operates. I'll go over three of these:

  1. Problems stitching game images
  2. Inconsistent detail when taking high-resolution shots of game assets
  3. Depth perception in 360-degree images

The first problem is that, when you stitch game images by hand, the engine has no idea what you are doing. Most titles over the last decade have switched to HDR rendering, which allows the game camera to adapt to different light intensities. Intensities beyond the displayable range produce bloom, which, at the time, gamers thought was HDR's only purpose, but it also allowed artists to create scenes without tuning them to a specific lighting profile.

To see what I mean, play the outdoor mission of Doom 3 (2004) or the indoor missions of Battlefield 2: Special Forces (2005). The lighting looks terrible, and a good portion of the reason is that the artists were fighting the game's baked profile. Halo 2 was funny: each level had one artist-defined light profile for outdoors and another for indoors, and the engine would switch between the two based on where the character was. Of course, the correct way to do this is to render all possible lighting values, which is why we call it High Dynamic Range Rendering, and crush the result in post-process just before it's output to your display. This allows the lighting profile to transition smoothly as you move around the environment.
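To make that "crushing" step concrete, here is a minimal sketch of exposure-based tone mapping in Python with NumPy. The Reinhard-style operator and the log-average auto-exposure are my own illustrative choices, not anything NVIDIA or a specific engine has confirmed:

```python
import numpy as np

def tone_map(hdr, exposure):
    """Crush an HDR frame (unbounded positive intensities) into [0, 1)
    for display, using a simple Reinhard-style operator."""
    scaled = hdr * exposure
    return scaled / (1.0 + scaled)

def auto_exposure(hdr, key=0.18):
    """Derive an exposure from the frame's log-average luminance, so the
    lighting profile follows the camera instead of being baked per level."""
    log_avg = float(np.exp(np.mean(np.log(hdr + 1e-6))))
    return key / log_avg

# A dim indoor pixel and a bright outdoor pixel in the same HDR frame;
# a single adaptive exposure brings both into displayable range.
frame = np.array([0.05, 40.0])
print(tone_map(frame, auto_exposure(frame)))
```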

If you are stitching a photograph, however, you will need to move the camera. Again, moving the camera changes the lighting profile, and you do not want the lighting profile to change within a single image. At this point, you are stuck. The engine has already crushed the information you require into a presentable frame, and it's gone. In Ansel's case, post-processing doesn't need to happen on each tile individually. As such, NVIDIA has programmed it to wait until all of the tiles are stitched together, so this processing happens only once.
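In sketch form, the difference is just an order of operations. Here the randomly generated tiles are a hypothetical stand-in for HDR tiles rendered at different camera orientations:

```python
import numpy as np

def tone_map(hdr, exposure):
    scaled = hdr * exposure
    return scaled / (1.0 + scaled)

def auto_exposure(hdr, key=0.18):
    return key / float(np.exp(np.mean(np.log(hdr + 1e-6))))

# Hypothetical stand-in for rendering: each tile of the panorama sees a
# different brightness distribution (some face the sun, some face shadow).
rng = np.random.default_rng(0)
tiles = [rng.uniform(0.01, 40.0, size=16) for _ in range(4)]

# Naive approach: tone map each tile with its own exposure, then stitch.
# The exposure differs per tile, so brightness jumps at every seam.
naive = np.concatenate([tone_map(t, auto_exposure(t)) for t in tiles])

# Ansel-style ordering: stitch the raw HDR tiles first, then tone map the
# whole panorama once with a single exposure. No seams in the lighting.
panorama = np.concatenate(tiles)
consistent = tone_map(panorama, auto_exposure(panorama))
```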

The second issue is that, when you blow up an image, the game doesn't really know that you don't care about real-time performance. If your camera is a few hundred yards away from a tiny object, the game engine will drop that object down to its lowest level of detail; why load a 2048×2048 texture for something that probably takes up two pixels of a 1080p screen? Well, at 4.5 gigapixels, that could matter. Ansel solves this by signaling the game to always use its highest-detail assets. This will dramatically decrease your performance, but who cares? You've paused time to compute a 4.5-gigapixel image! You might as well make it a good one!
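As a rough sketch of the arithmetic, here is standard screen-coverage mip selection in Python. The numbers come from the example above; treating Ansel's hint as "force mip level 0" is my simplification of whatever signal the SDK actually sends:

```python
import math

def mip_level(texture_size, pixels_across):
    """Standard mip selection: pick the level where one texel covers
    roughly one screen pixel. Level 0 is the full-resolution texture."""
    max_level = int(math.log2(texture_size))
    if pixels_across <= 0:
        return max_level
    level = round(math.log2(texture_size / pixels_across))
    return max(0, min(max_level, level))

texture = 2048   # the 2048x2048 texture from the example above
at_1080p = 2     # object spans ~2 pixels across on a 1920x1080 frame

# 4.5 gigapixels vs ~2.07 megapixels (1080p) is a ~2170x jump in area,
# or ~47x per side, so the same object now spans ~93 pixels across.
per_side = math.sqrt(4.5e9 / (1920 * 1080))
at_gigapixel = at_1080p * per_side

print(mip_level(texture, at_1080p))      # 10: a tiny fallback mip
print(mip_level(texture, at_gigapixel))  # 4: real texture detail matters
```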

The third issue is that, when creating a stereoscopic 3D image, you need good separation between the left and right cameras. In a 360-degree photosphere, this means that each camera needs to move as it captures detail that isn't directly forward, and it needs to do so with mathematical precision. The fix is to lock both cameras to a pivot at the center of the head; the result looks wrong if you don't, and the pivot probably complicates the stitch (because neither camera's center is a fixed point in space).
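Here is a minimal sketch of that pivot geometry in Python. The 64 mm eye separation and the coordinate conventions are assumptions for illustration; this is the general omni-directional stereo setup, not necessarily Ansel's exact math:

```python
import math

IPD = 0.064  # assumed interpupillary distance in meters (~64 mm)

def eye_positions(pivot, yaw):
    """For a viewing direction given by yaw (radians), place the left and
    right cameras on a circle of radius IPD/2 around a central pivot.
    Both eyes orbit the pivot, so neither has a fixed center of projection."""
    px, py, pz = pivot
    # Offset along the axis perpendicular to the view direction (ground plane).
    ox = math.cos(yaw) * IPD / 2.0
    oz = -math.sin(yaw) * IPD / 2.0
    left = (px - ox, py, pz - oz)
    right = (px + ox, py, pz + oz)
    return left, right

# Sweep the photosphere: at every yaw slice both cameras move, which is
# exactly what complicates the stitch relative to a single fixed camera.
head = (0.0, 1.7, 0.0)  # hypothetical head position, 1.7 m off the ground
for step in range(4):
    print(eye_positions(head, step * math.pi / 2.0))
```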

Looking Forward (and Conclusion)

There is a lot of room for NVIDIA to improve upon Ansel in the future, especially if it becomes a success. Near the end of the presentation, a guest asked whether Ansel could be integrated with ShadowPlay. The NVIDIA presenter responded, “So the question is can we implement it with ShadowPlay and that's a great idea. Stay tuned.” This was followed by what I would describe as slightly devious laughter.

But let's think about that for a second. It would clearly be impossible to stream 4.5 gigapixels, let alone render it, at all times. What might be possible, and they're strongly suggesting that they've thought of something, is that the SDK might give them the ability to record enough information to re-generate a block of time. Will this mean that Ansel can be extended to video itself? Who knows.

As for game support, NVIDIA is comfortable announcing nine titles:

  • Tom Clancy's The Division by Ubisoft
  • The Witness by Thekla Inc. (Jonathan Blow)
  • LawBreakers by Boss Key Productions (Cliff Bleszinski's new company)
  • The Witcher 3: Wild Hunt by CD Projekt Red
  • Paragon by Epic Games
  • Fortnite by Epic Games
  • Unreal Tournament by Epic Games
  • Obduction by Cyan Worlds
  • No Man's Sky by Hello Games

As of this moment, I do not have access to Ansel. All we have to go on is NVIDIA's presentation, which included a handful of controlled demos. As with everything, we should avoid setting expectations until it ships and can be used, or tested by third parties (if applicable). As I said at the start, though, in-game photography is interesting. Even in Unreal Engine 4, I had some fun with the high-resolution screenshot console command (HighResShot) to take 8K stills of their high-quality environments, like the Elemental Demo scene. I could see myself playing around with that in a few shipped video games and, more importantly, seeing what others create and share from their personal experiences.
