Compare and Contrast Mapping
A sizable portion of NVIDIA's presentation discusses how things are typically done, with each situation compared to the way Ansel operates. I'll go over three of these:
- Problems stitching game images
- Inconsistent detail when taking high-resolution shots of game assets
- Depth perception in 360-degree images
The first problem is that, when you stitch game images by hand, the engine has no idea what you are doing. Most titles over the last decade have switched to HDR rendering, which allows the game camera to adapt to different light intensities. Pushing outside this range produces bloom, which, at the time, gamers thought was HDR's only purpose, but it also allowed artists to create scenes without tuning them to a specific lighting profile.
To see what I mean, play the outdoor mission of Doom 3 (2004) or the indoor missions of Battlefield 2: Special Forces (2005). The lighting looks terrible, and a good portion of the reason is that the artists were fighting the game's baked profile. Halo 2 was funny: each level had an artist-defined light profile for outdoors and another for indoors, and the engine would switch between the two based on where the character was. Of course, the correct way to do this is by rendering all possible lighting values, which is why we call it High Dynamic Range rendering, and crushing the result in a post-process just before it's output to your display. This allows the lighting profile to smoothly transition as you move around the environment.
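To make that concrete, here is a minimal sketch of the crush step as I understand it (my own illustration, not any engine's actual code): the scene is lit in unbounded linear floats, the exposure smoothly chases the scene's brightness, and only the last step quantizes down to the 8-bit range a display expects.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Chase the scene's luminance gradually, so the "lighting profile"
// transitions smoothly as the player moves between areas.
float adapt_exposure(float current, float sceneAvgLum, float dt) {
    const float target = 0.18f / std::max(sceneAvgLum, 1e-4f); // mid-gray key
    const float speed  = 1.5f;                                 // adaptation rate
    return current + (target - current) * (1.0f - std::exp(-speed * dt));
}

// Reinhard tone map + quantize: the lossy step that throws away the
// out-of-range detail a hand-stitched panorama would still need.
uint8_t crush(float channel, float exposure) {
    const float exposed = channel * exposure;
    const float mapped  = exposed / (1.0f + exposed);  // [0, inf) -> [0, 1)
    return static_cast<uint8_t>(std::clamp(mapped, 0.0f, 1.0f) * 255.0f + 0.5f);
}
```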
If you are stitching a photograph, however, you will need to move the camera. Again, moving the camera changes the lighting profile. You do not want the lighting profile to change within a single image. At this point, you are stuck. The engine has already crushed the information you require into a presentable frame, and it's gone. In Ansel's case, post-processing doesn't need to happen on every tile individually. As such, NVIDIA has programmed it to wait until all of the images are stitched together, and this processing only happens once.
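The ordering is the whole trick, so here is a hedged sketch of the idea; the HdrImage type and the captureTile hook are mine, not NVIDIA's SDK. Tiles are captured and stitched in linear HDR floats, and tone mapping runs exactly once on the assembled canvas, so no seam can show two different lighting profiles.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct HdrImage {
    int width = 0, height = 0;
    std::vector<float> rgb;   // linear light, 3 floats per pixel
};

HdrImage stitch_hdr(int tilesX, int tilesY, int tileW, int tileH,
                    const std::function<HdrImage(int, int)>& captureTile) {
    HdrImage canvas;
    canvas.width  = tilesX * tileW;
    canvas.height = tilesY * tileH;
    canvas.rgb.assign(std::size_t(canvas.width) * canvas.height * 3, 0.0f);

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            const HdrImage tile = captureTile(tx, ty);
            for (int y = 0; y < tileH; ++y) {          // copy the tile row by row
                for (int x = 0; x < tileW; ++x) {
                    for (int c = 0; c < 3; ++c) {
                        const std::size_t dst =
                            ((std::size_t(ty * tileH + y) * canvas.width)
                             + std::size_t(tx * tileW + x)) * 3 + c;
                        const std::size_t src =
                            (std::size_t(y) * tileW + x) * 3 + c;
                        canvas.rgb[dst] = tile.rgb[src];
                    }
                }
            }
        }
    }
    // Exposure, tone mapping, and color grading run ONCE on `canvas`
    // after this returns, never per tile.
    return canvas;
}
```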
The second issue is that, when you blow up an image, the game doesn't really know that you don't care about real-time performance. If your camera is a few hundred yards away from a tiny object, the game engine will drop it down to its lowest priority — why load a 2048×2048 texture for something that probably takes up two pixels of a 1080p screen? Well, at 4.5 gigapixels, that could matter. Ansel solves this by signaling to the game to always use its highest detail assets. This will dramatically decrease your performance, but who cares? You've paused time to compute a 4.5 gigapixel image! You might as well make it a good one!
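For illustration, here is my own sketch (not Ansel's actual API) of a typical distance-based mip pick, plus the override a capture mode would flip on so even a two-pixel speck gets its full 2048×2048 texture:

```cpp
#include <algorithm>
#include <cmath>

int pick_mip_level(float distance, int mipCount, bool forceFullDetail) {
    if (forceFullDetail)
        return 0;                                  // mip 0 = highest-detail texture
    // Real-time path: roughly one mip per doubling of camera distance.
    const int level = static_cast<int>(std::log2(std::max(distance, 1.0f)));
    return std::min(level, mipCount - 1);
}
```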
The third issue is that, when creating a stereoscopic 3D image, you need good separation between the left and right cameras. In a 360-degree photosphere, this means that each camera needs to move as it captures detail that's not directly forward, and it needs to do so with mathematical precision. The fix is to orbit both cameras around a pivot at the center of the head; it looks wrong if you don't, and it probably complicates the stitch (because each camera's center isn't a fixed point in space).
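Here is the geometry as I understand it, sketched with my own math rather than Ansel's implementation: each eye orbits a pivot at the center of the head, offset perpendicular to the current capture direction, which is exactly why the camera centers move during the sweep.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// yaw: capture direction in radians; ipd: interpupillary distance
// (roughly 0.064 m); eye: -1 for the left camera, +1 for the right.
Vec3 eye_position(const Vec3& pivot, float yaw, float ipd, int eye) {
    const float half = 0.5f * ipd * static_cast<float>(eye);
    // The view direction is (sin yaw, 0, cos yaw); the offset is along
    // the perpendicular axis (cos yaw, 0, -sin yaw) in the XZ plane.
    return { pivot.x + half * std::cos(yaw),
             pivot.y,
             pivot.z - half * std::sin(yaw) };
}
```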
Looking Forward (and Conclusion)
Especially if it becomes a success, there is a lot of room for NVIDIA to improve upon Ansel in the future. Near the end of the presentation, a guest asked whether Ansel could be integrated with ShadowPlay. The NVIDIA presenter responded, “So the question is can we implement it with ShadowPlay and that's a great idea. Stay tuned.” This was followed by what I would describe as slightly devious laughter.
But let's think about that for a second. It would clearly be impossible to stream 4.5 gigapixels, let alone render it, at all times. What might be possible, and they're strongly suggesting that they've thought of something, is that the SDK might give them the ability to record the information required to re-generate a block of time. Would this mean that Ansel could be extended to video itself? Who knows.
As for game support — NVIDIA is comfortable announcing nine titles:
- Tom Clancy's The Division by Ubisoft
- The Witness by Thekla Inc. (Jonathan Blow)
- LawBreakers by Boss Key Productions (Cliff Bleszinski's new company)
- The Witcher III: Wild Hunt by CD Projekt Red
- Paragon by Epic Games
- Fortnite by Epic Games
- Unreal Tournament by Epic Games
- Obduction by Cyan Worlds
- No Man's Sky by Hello Games
As of this moment, I do not have access to Ansel. All we have to go on is NVIDIA's presentation, which included a handful of controlled demos. As with everything, we should hold off on expectations until it ships and can be used, or tested by third parties (if applicable). As I said at the start, though, in-game photography is interesting. Even in Unreal Engine 4, I had some fun with the high-resolution screenshot console command, taking 8K stills of their high-quality environments, like the Elemental Demo scene. I could see myself playing around with that in a few shipped video games and, more importantly, seeing what others create and share from their own experiences.
So will this feature be available across OS platforms?
“Windows Tests Of The GTX 1080 Tip Up, But No Linux”
https://www.phoronix.com/scan.php?page=news_item&px=GTX-1080-Embargo-Lift
Notice the Epic Games titles? The UE4 engine has a feature that can record your session to recreate a playthrough. It is currently used in Unreal Tournament to create high-quality videos, and I could see this being a natural fit for Ansel (a rough sketch of the general idea follows below).
I haven’t played Paragon, but I imagine it has the same feature, since it is available to every game made with UE4.
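For what it's worth, the general idea behind that kind of replay system looks something like this (a generic sketch of deterministic input recording, not UE4's actual replay code):

```cpp
#include <cstdint>
#include <vector>

struct InputFrame {
    uint32_t tick;        // fixed simulation tick this input belongs to
    uint32_t buttons;     // bitmask of pressed buttons
    float lookX, lookY;   // camera/aim deltas
};

struct Replay {
    uint64_t rngSeed = 0;             // makes the simulation repeatable
    std::vector<InputFrame> inputs;   // everything the player did
};

// Re-running the same simulation with the same seed and inputs yields
// the same world state at every tick, ready to be re-rendered at any
// quality, which is what would make an Ansel tie-in plausible.
template <typename Sim>
void playback(Sim& sim, const Replay& replay) {
    sim.seed(replay.rngSeed);
    for (const InputFrame& frame : replay.inputs)
        sim.step(frame);              // advance exactly one fixed tick
}
```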
Also, in Mad Max the player can take screenshots with similar functionality. Actually, it might have inspired NVIDIA to implement this universally across games.
1. HDR – what kind? Dolby Vision? HDR10? Or just Standard Dynamic Range (SDR)?
Does the card's HDR support 1000 nits or 4000 nits?
As usual, is this supported by the card's hardware or by software?
2. In this picture:
http://www.guru3d.com/index.php?ct=a…=file&id=21784
it says contrast over 10,000:1 with the NVIDIA 1080.
In other words, on previous video cards we only got 2000:1 contrast???
3. What color system does the NVIDIA 1080 video card support?
The Rec. 2020 color gamut, DCI-P3, or only Rec. 709?
http://www.guru3d.com/articles_pages…_review,2.html
If anyone has an answer, please bring links.
It is not a question of logic.
It isn’t HDR video capture. Ansel is for single-frame image capture, like using the game engine to render a single picture.
The HDR format used is listed as EXR. It is like making a RAW image capture from a camera, not like any of the video standards you are asking about. Here is a description:
EXR Capture
EXR is an open image file format created by Industrial Light and Magic that provides higher dynamic range and depth. It allows for 16- or 32-bit floating point storage per channel, compared to the traditional 8-bit integers in most formats. Capturing in this format enables you to choose your exposure in post, as well as apply extreme color correction without banding artifacts.
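As a toy example of what "choosing your exposure in post" buys you, assuming only that the source pixels are stored as linear floats (no EXR library involved): a highlight that an 8-bit capture would have clipped to pure white can still be pulled back down two stops.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Apply an exposure adjustment (in photographic stops) to a linear
// float pixel, then quantize to the 8-bit output range.
uint8_t develop(float linear, float exposureStops) {
    const float exposed = linear * std::pow(2.0f, exposureStops);
    return static_cast<uint8_t>(std::clamp(exposed, 0.0f, 1.0f) * 255.0f + 0.5f);
}

int main() {
    const float highlight = 3.2f;  // 3.2x brighter than display white
    std::printf(" 0 EV: %3d\n", develop(highlight, 0.0f));   // 255: clipped
    std::printf("-2 EV: %3d\n", develop(highlight, -2.0f));  // 204: detail recovered
    return 0;
}
```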
The capability of the new GTX 1080 and GTX 1070 to output an HDR signal to a display isn't limited to a peak brightness level like 1000 nits or 4000 nits. Neither is it contrast-limited, nor confined by a color gamut standard like BT.709, DCI-P3, or BT.2020, nor by whatever HDR video metadata support there is, be it Dolby Vision or HDR10.
Those are limits of a display, not a GPU’s output.
All the GPU needs to do is support HDMI 2.0a and HDCP 2.2 to pass the HDR signal and the metadata to the display, and both cards do that.
The monitor or display has to be the device that meets the spec, and so does the media being fed to it: the games or videos you play have to be encoded as an HDR signal or pass the metadata to the display. The GPU just has to support the output.
As for Ansel doing HDR stuff using the EXR container: it means you can edit the output from a much larger image source than the data in a JPEG. It is the way Pixar stores frames when it renders films, and it is common in the industry.
Overall, it is a pretty smart way to save the images, and it won't be just “fake” HDR tone mapping from an 8-bit color source. 16-bit or 32-bit is even more than what HDR video asks for, which is 10-bit or 12-bit.
I love the “All Games” claim by NVIDIA for ShadowPlay. So does it include the new DOOM? No, OpenGL is not supported. “All Games”? Yes, if DirectX, apparently. Not sure about Vulkan.