Let’s take a closer look than we got at the GTX 1080 announcement
NVIDIA's Ansel Technology
- NVIDIA blog claims "GTX 600-series and up"
- UI/UX is NVIDIA-controlled
  - Allows NVIDIA to provide a consistent UI across all supported games
  - Game developers don't need to spend UX and QA effort on their own
- Can signal the game to use its highest-quality assets during the shot
- NVIDIA will provide an API for users to create their own post-process shaders
  - Will allow access to Color, Normal, Depth, Geometry, (etc.) buffers
- When asked about implementing Ansel with ShadowPlay: "Stay tuned."
“In-game photography” is an interesting concept. Not too long ago, it was difficult just to capture the user's direct experience with a title. Print Screen could only hold a single screenshot at a time, which left room for Steam and FRAPS to provide a better user experience. FRAPS also made video more accessible to the end-user, but it output huge files and, while it wasn't too expensive, it needed to be purchased online, which was a big hurdle ten-or-so years ago.
Seeing that their audience would enjoy video captures, NVIDIA introduced ShadowPlay a couple of years ago. The feature allowed users not only to record video, but also to retroactively capture the last few minutes. It did this with hardware acceleration, and it did this for free (for compatible GPUs). While I don't use ShadowPlay, preferring the control of OBS, it's a good example of how NVIDIA wants to support their users. They see these features as a value-add that draws people to their hardware.
Ansel comes from that same mindset, but with still photography as the starting point. When activated, Ansel freezes time and allows the user to set up the perfect camera angle with a free-flying camera, add filters, and save a high-resolution image, currently up to 4.5 gigapixels, though NVIDIA could raise that if users honestly need more. (1080p is ~2 megapixels.) This can be 2D, stereoscopic 3D, or a whole 360-degree photo-bubble (with an equirectangular projection). It can also be saved as an HDR image (in the EXR format) so the exposure can be adjusted in an image editing program like Photoshop. Images can also be modified from within Ansel with filters, which could be provided either by NVIDIA or custom-made from the game's many buffers.
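NVIDIA hasn't published how Ansel builds its photo-bubbles internally, but the named projection is a standard one. As a minimal sketch (all names here are my own, not Ansel's), an equirectangular image maps each pixel's horizontal position to longitude and vertical position to latitude, so every pixel corresponds to one view direction on a sphere:

```python
import math

def equirect_direction(u, v):
    """Map normalized image coordinates (u, v) in [0, 1] to a unit
    view direction on the sphere via an equirectangular projection.
    u sweeps longitude (-pi..+pi), v sweeps latitude (+pi/2..-pi/2)."""
    lon = (u - 0.5) * 2.0 * math.pi   # -pi at left edge, +pi at right
    lat = (0.5 - v) * math.pi         # +pi/2 at top, -pi/2 at bottom
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center of the image looks straight ahead (+z):
print(equirect_direction(0.5, 0.5))  # → (0.0, 0.0, 1.0)
```

A capture tool would render the scene along many such directions and write each result into the corresponding pixel, which is why the poles of the bubble look stretched when the image is viewed flat.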
Say what you like about Instagram, but it shows that the urge to share expressive still images is not limited to those who were called to photography. Not everyone will be good at it, but more good content is produced as the tools get easier and cheaper. I don't know whether NVIDIA is attempting to pull existing members of this subculture toward their hardware with these tools, but it might compel existing users to get attached… and to keep choosing green whenever they want more performance.
This brings me to “who can use it?” According to the press briefing, this feature will be available to both Maxwell and Pascal. According to their blog, however, it “will be available on supported games for NVIDIA GeForce GTX 600-series and up.” This means that it will also work on Kepler, barring some terrible typo. Either way, NVIDIA is not locking it down to Pascal, which makes sense because older cards might be due for an upgrade.
Ansel will not be available for every game. It is an SDK, which means that it must be compiled in by the game developer. NVIDIA originally targeted “one line of code” to implement Ansel, but that later expanded to the order of about a hundred lines, give or take a few dozen. That's still a minuscule burden, which is necessary for such a niche feature (otherwise, basically no one would implement it). It should be just enough for the developer to prevent Ansel from being used to cheat.
NVIDIA provides two examples of the effort required to integrate Ansel. The Witness, a puzzle game from indie developer Jonathan Blow, added the feature with about 40 lines of code. The Witcher 3 needed a little more effort, with a reported ~150 extra lines required for Ansel. This metric cannot quantify the burden on QA, which is something to look at post-launch, but making the feature painless is clearly a top-priority design goal.
Part of the reason for such a small line-of-code count is that NVIDIA controls the user interface. All of the sliders and options will be consistent from game to game. This is good for the game developer, because programming (and testing) a user interface takes a lot of time and effort, and designing a special one for a handful of GeForce gamers would be as low priority as it gets. This is also good for NVIDIA, because their users will have basically no learning curve as they travel between titles. So long as NVIDIA's implementation is complete and intuitive, it's a win-win.
Ansel's user interface as seen within The Witcher 3
Again, this should be consistent across all supported titles, as it's NVIDIA's UX.
As I briefly mentioned earlier, Ansel will be able to access more than just the final image. Many games are based on a deferred rendering method, which means that the engine generates several buffers independently. This is very fast, especially for scenes with complex, dynamic lighting, although it doesn't play nice with anti-aliasing and a few other effects.
Ansel can access some or all of these buffers, which lets it know about each object's depth, surface normal orientation, base color, lighting, and so forth. This can drive much more complex filters than would otherwise be possible in Photoshop or other editors, because it retains information about the scene. NVIDIA lists a bunch of effects that will ship with Ansel, such as color correction and light shafts, but they are also implementing an SDK for users to add their own. It is unclear whether this will be HLSL, whatever shading language the game compiles down into, or a special language just for Ansel, but I'm excited to see what we can achieve when we have access to the game's intermediate buffers.
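To show why per-pixel scene data matters, here is a hypothetical depth-based fog filter sketched in plain Python (a real Ansel filter would be a GPU shader; the function and parameter names are mine, not part of any SDK). It needs both the color buffer and the depth buffer, which is exactly the kind of combination a flat screenshot cannot provide:

```python
import math

def depth_fog(color, depth, fog_color=(0.7, 0.8, 0.9), density=1.5):
    """Blend a fog color over each pixel according to its depth.
    color: list of (r, g, b) floats in [0, 1]; depth: list of floats
    in [0, 1], where 1.0 is the far plane."""
    out = []
    for (r, g, b), d in zip(color, depth):
        f = 1.0 - math.exp(-density * d)   # exponential fog factor
        out.append(tuple(c * (1.0 - f) + fc * f
                         for c, fc in zip((r, g, b), fog_color)))
    return out

# Two red pixels: one at the near plane, one at the far plane.
near, far = depth_fog([(1, 0, 0), (1, 0, 0)], [0.0, 1.0])
```

The near pixel keeps its original red; the far pixel fades toward the pale fog color. A 2D editor would have to guess at depth from the flattened image, while a buffer-aware filter reads it directly.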
Only a handful of games support capturing their depth buffer at the moment, so NVIDIA isn't announcing anything yet, but users should eventually be able to adjust even properties like focal length and depth of field. These are things that you cannot do correctly from a screenshot.
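The reason a depth buffer makes refocusing possible comes straight from lens optics: for each pixel, the standard thin-lens model gives the diameter of the blur circle (the "circle of confusion") as a function of that pixel's depth and the chosen focus distance. A sketch, with hypothetical names and example camera parameters of my own choosing:

```python
def circle_of_confusion(depth_m, focus_m, focal_mm=50.0, f_stop=2.8):
    """Diameter (in mm, on the sensor) of the blur circle for a point
    at depth_m meters when the lens is focused at focus_m meters,
    via the thin-lens model. A per-pixel depth buffer supplies
    depth_m, which is what lets a post-process pass refocus a frame."""
    f = focal_mm / 1000.0            # focal length in meters
    aperture = f / f_stop            # aperture diameter in meters
    coc = aperture * f * abs(depth_m - focus_m) / (depth_m * (focus_m - f))
    return coc * 1000.0              # back to millimeters

# A point exactly at the focus distance is perfectly sharp:
print(circle_of_confusion(2.0, 2.0))  # → 0.0
```

A depth-of-field filter would blur each pixel by its computed circle of confusion, so points far from the focus plane smear out while the focus plane stays crisp; changing `focus_m` after the fact is the refocusing trick.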
Of course, while we now know that Ansel is an SDK, that wasn't clear until a few minutes into that section of their GTX 1080 press event. Intercepting 3D data from within video games has been done before, such as with Adobe's Acrobat 3D Capture, but it would be a ridiculous task nowadays, with games as programmable as they are. So, in the moment, when NVIDIA announced a technology like this with no background, I started to question my understanding of reality. Did they figure something out? Are they doing some crazy, per-game profile?
Nope! The developer is involved, and that's a good thing.
So will this feature be available across OS platforms?
“Windows Tests Of The GTX 1080 Tip Up, But No Linux”
Notice the Epic Games titles? The UE4 engine has a feature that can record your session to recreate a playthrough. It is currently used in Unreal Tournament to create high-quality videos, and I could see this being a natural fit for Ansel.
I haven’t played Paragon, but I imagine they have the same feature, since it is available to every game made with UE4.
Also, in Mad Max the player can take screenshots with similar functionality. Actually, it might have inspired NVIDIA to implement this universally across games.
1. HDR – what kind? Dolby Vision? Standard Dynamic Range (SDR)? Or HDR10?
Does the card support 1000 nits or 4000 nits?
As usual, is this supported in the card's hardware or in software?
2. In the picture it says contrast over 10,000:1 with the NVIDIA 1080.
In other words, with previous video cards we received 2000:1 contrast???
3. What color system does the NVIDIA 1080 video card support?
The Rec.2020 color gamut, or DCI-P3? Or only Rec.709?
If anyone has an answer, please bring links.
It is not a question of logic.
It isn't HDR video capture. Ansel is for single-frame image capture, like using the game engine to render a single picture.
The HDR format used is listed as EXR. It is like making a RAW image capture from a camera, not like any of the video standards you are asking about. Here is a description:
EXR is an open image file format created by Industrial Light and Magic that provides higher dynamic range and depth. It allows for 16 or 32 bit floating point storage per channel, compared to the traditional 8-bit integers in most formats. Capturing in this format enables you to choose your exposure in post, as well as apply extreme color correction without banding artifacts.
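The "choose your exposure in post" point can be shown with a tiny sketch (illustrative values and function names of my own, not any EXR library's API): an 8-bit store clips everything above white at save time, while a float store keeps the headroom, so pulling the exposure down later recovers the true value:

```python
def to_8bit(value):
    """Quantize a linear float pixel value to an 8-bit integer,
    clipping anything above 1.0 — highlight detail is destroyed."""
    return min(255, max(0, round(value * 255)))

def expose(value, stops):
    """Apply an exposure adjustment of +/- stops to a linear value."""
    return value * (2.0 ** stops)

# A bright highlight, 4x over "white" (say, the sun in a game sky):
highlight = 4.0

# Float (EXR-style) pipeline: pull exposure down 2 stops in post
# and the original relative brightness is still there.
recovered = expose(highlight, -2)              # 4.0 * 2**-2 = 1.0

# 8-bit pipeline: the value was clipped to 255 at save time, so the
# same adjustment just gives flat gray — the headroom is gone.
clipped = expose(to_8bit(highlight) / 255.0, -2)

print(recovered, clipped)  # → 1.0 0.25
```

With many such pixels at different over-white levels, the float version preserves the differences between them (actual highlight detail), while the 8-bit version maps them all to the same clipped gray.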
The capability of the new GTX 1080 and GTX 1070 to output an HDR signal to a display isn't limited to a peak brightness level like 1000 nits or 4000 nits. Nor is it contrast-limited, confined by a color gamut standard like BT.709, DCI-P3, or BT.2020, or tied to whatever HDR video metadata support there is, be it Dolby Vision or HDR10.
Those are limits of a display, not a GPU’s output.
All the GPU needs to do is support HDMI 2.0a and HDCP 2.2 to pass the HDR signal and the metadata to the display, and both cards do that.
The monitor or display has to be the device that meets the spec, and the media being fed to it (the games or videos you play) has to be encoded as an HDR signal or pass the metadata through to the display; the GPU just has to support the output.
As for Ansel doing HDR, using the EXR container means you can edit the output from a much richer image source than the data in a JPEG. It is the way Pixar stores frames when it renders films, and it is common in the industry.
Overall it is a pretty smart way to save the images, and it won't just be "fake" HDR tone-mapping from an 8-bit color source. 16-bit or 32-bit is even more than what HDR video asks for, which is only 10-bit or 12-bit.
I love the “All Games” claim by NVIDIA for ShadowPlay. So does it include the new DOOM? No: OpenGL is not supported. “All games?” Yes, if DirectX, apparently. Not sure about Vulkan.