Shedding a little light on Monday’s GDC announcement.
Most of our readers should have some familiarity with GameWorks, NVIDIA's branding for a suite of libraries and utilities that help game developers (and others) create software. Many hardware and platform vendors provide samples and frameworks that shoulder much of the work required to solve complex problems; GameWorks is NVIDIA's take on that. Their hope is that it pushes the industry forward, which in turn drives GPU sales as users see the benefits of upgrading.
This release, GameWorks SDK 3.1, contains three complete features and two “beta” ones. We will start with the first three, each of which targets a portion of the lighting and shadowing problem. The last two, which we will discuss at the end, are the experimental ones and fall under the blanket of physics and visual effects.
The first technology is Volumetric Lighting, which simulates the way light scatters off dust in the atmosphere. Game developers have been approximating this effect for a long time. In fact, I remember a particular section of Resident Evil 4 where you walk down a dim hallway that has light rays spilling in from the windows. GameCube-era graphics could only do so much, though, and certain camera positions show that the effect was just a translucent, one-sided, decorative plane. It was a cheat that was hand-placed by a clever artist.
GameWorks' Volumetric Lighting goes after the same effect, but with a much different implementation. It looks at the generated shadow maps and, using hardware tessellation, extrudes geometry from the unshadowed portions toward the light. These bits of geometry are summed, and the deeper the accumulated volume along a view ray, the brighter the resulting highlight. Also, since it is hardware tessellated, the performance impact should be smaller: the GPU only needs to store enough information to generate the geometry, not store (and update) the geometry data for every possible light shaft, and it needs to keep those shadow maps around anyway.
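The summing step is easy to picture off the GPU. Below is a minimal sketch of the idea, assuming a simple single-scattering model; the function name, the segment representation, and the scattering coefficient are all illustrative, not anything from the GameWorks API.

```python
# Minimal sketch, not NVIDIA's implementation: each view ray accumulates
# in-scattered light in proportion to the distance it travels through lit
# (unshadowed) volume. The coefficient below is an arbitrary example value.

def inscatter_along_ray(lit_segments, scattering_coeff=0.05):
    """lit_segments: (enter_depth, exit_depth) pairs where the view ray
    passes through extruded, unshadowed volume, in world units."""
    total = 0.0
    for enter, exit_ in lit_segments:
        total += (exit_ - enter) * scattering_coeff  # deeper volume, brighter shaft
    return min(total, 1.0)  # clamp the additive contribution

# A ray crossing two light shafts, 3.0 and 1.5 units deep:
print(inscatter_along_ray([(10.0, 13.0), (20.0, 21.5)]))  # 0.225
```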
Even though it seemed like this effect was independent of render method, since it basically just adds geometry to the scene, I asked whether it was locked to deferred rendering. NVIDIA said that it should be unrelated, as I suspected, which is good for VR. Forward rendering is easier to anti-alias, which makes the uneven pixel distribution (after lens distortion) appear smoother.
Read on to see the other four technologies, and a little announcement about source access.
The second technology is Voxel Accelerated Ambient Occlusion (VXAO). Currently, ambient occlusion is typically a “screen-space” effect, which means that it is applied to the rendered image buffers. It can only use the information that is available within those buffers, which is based on the camera's 2.5D projection of the world. Voxel Accelerated Ambient Occlusion instead calculates ambient occlusion on a grid of voxels, in world space. It is not limited to the camera's view of the world.
This is the current technology (SSAO), which collects data from the camera's buffers.
The actual occlusion information is gathered by ray tracing from points within this voxel grid, outward in a hemisphere, to other points in the grid. Axis-aligned voxels are highly efficient to ray trace, especially compared to triangles. Volume elements (vo… x el…) that are very close to other objects tend to appear darker, because indirect light has fewer unobstructed directions to arrive from.
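To make that concrete, here is a toy, CPU-side sketch of hemisphere ray marching through a boolean occupancy grid. It illustrates the general technique, not NVIDIA's actual VXAO code; every name, constant, and the test scene are made up for illustration.

```python
import math, random

def random_hemisphere_dir(normal):
    """Random unit direction, flipped into the hemisphere around `normal`."""
    d = [random.gauss(0.0, 1.0) for _ in range(3)]
    length = math.sqrt(sum(a * a for a in d)) or 1.0
    d = [a / length for a in d]
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = [-a for a in d]
    return d

def voxel_ao(grid, pos, normal, rays=64, max_steps=8):
    """Fraction of short hemisphere rays that escape without hitting a solid
    voxel: 1.0 = fully open, 0.0 = fully occluded."""
    size = len(grid)
    open_rays = 0
    for _ in range(rays):
        d = random_hemisphere_dir(normal)
        p = list(pos)
        blocked = False
        for _ in range(max_steps):
            p = [a + b for a, b in zip(p, d)]
            i, j, k = int(p[0]), int(p[1]), int(p[2])
            if not (0 <= i < size and 0 <= j < size and 0 <= k < size):
                break            # left the grid: open sky in this direction
            if grid[i][j][k]:
                blocked = True   # hit a nearby solid voxel
                break
        if not blocked:
            open_rays += 1
    return open_rays / rays

# Toy scene: a solid wall fills the x == 8 slice of a 16^3 grid. A floor
# point beside the wall comes out darker than one out in the open.
size = 16
grid = [[[x == 8 for z in range(size)] for y in range(size)] for x in range(size)]
print(voxel_ao(grid, [7.5, 2.5, 8.5], [0, 1, 0]))  # beside the wall: lower
print(voxel_ao(grid, [2.5, 2.5, 8.5], [0, 1, 0]))  # in the open: higher
```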
VXAO uses world-space data, which properly shades the ground under the tank.
SSAO, on the other hand, has no way of knowing how deep the tank is.
In a truly realistic simulation, global illumination would be computed directly, rather than dimming your added “indirect” light term by some AO value at various points in space. That is slow, though… like, “too slow for Pixar” levels of slow. SSAO does a pretty good job considering its limitations, but VXAO takes the approximation further by accounting for the actual environment (rather than the camera's slice of it, as we've mentioned). This should be a major improvement for moving cameras, although you can definitely see the difference even in screenshots.
The buffer NVIDIA creates. Partially filled voxels are visualized in blue, full voxels in red, and empty voxels are clear.
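To place AO in the context of that “indirect light term”: in the standard shading convention, occlusion scales only the ambient contribution, while direct light is handled by shadows. A two-line sketch with made-up scalar values:

```python
# Standard convention (not quoted from GameWorks): AO dims only the indirect
# "ambient" term; direct light is handled by shadow techniques instead.

def shade(albedo, direct, ambient, ao):
    """All inputs are scalars in [0, 1]; ao = 1.0 means unoccluded."""
    return albedo * (direct + ao * ambient)

print(shade(0.8, 0.6, 0.3, 1.0))  # out in the open: 0.72
print(shade(0.8, 0.6, 0.3, 0.2))  # under the tank:  0.528
```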
The third technology is called Hybrid Frustum Traced Shadows (HFTS), which increases the quality of dynamic shadows. Rather than relying on shadow maps alone, shadows are also computed by rasterizing geometry in light space and determining whether each relevant screen pixel is occluded by it. The two results, frustum-traced shadows and the soft shadows calculated by PCSS (Percentage-Closer Soft Shadows), are interpolated by distance from the occluding object. This gives sharp, accurate, high-quality shadows up close that smoothly blur with distance.
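The blend itself is simple to express. Here is a sketch of that interpolation; the blend distances and the linear falloff are illustrative guesses, not NVIDIA's parameters.

```python
# Near the occluder, trust the sharp frustum-traced result; farther away,
# fade toward the soft PCSS result. The blend range values are made up.

def hybrid_shadow(frustum_traced, pcss_soft, occluder_distance,
                  blend_start=0.5, blend_end=4.0):
    """Both shadow inputs are visibility values in [0, 1]."""
    t = (occluder_distance - blend_start) / (blend_end - blend_start)
    t = max(0.0, min(1.0, t))
    return frustum_traced * (1.0 - t) + pcss_soft * t

print(hybrid_shadow(0.0, 0.35, 0.2))  # at contact: hard and fully dark
print(hybrid_shadow(0.0, 0.35, 3.0))  # far away: mostly the soft result
```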
Some sites reported, based on Monday's original press release, that NVIDIA is ray tracing these shadows. That is incorrect. NVIDIA states that the algorithm is two-stage. First, the geometry shader constructs four planes for each primitive in the coordinate system that the light sees when it projects upon the world. Basically, imagine that the light is a camera. The pixel shader then tests every (applicable) screen pixel, converted into the light's coordinate system, to see where it is. If it overlaps with a primitive, and that primitive is closer to the light than that screen pixel is, then that screen pixel is shadowed from that light. Unless I'm horribly mistaken, this looks like an application of the Irregular Z-Buffer algorithm that NVIDIA published in a white paper last year. They have not yet responded to my inquiry about whether this is the case.
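For the curious, here is a simplified, CPU-side sketch of that light-space test. In the real pipeline this work is split between the geometry and pixel shaders; the structure below (three edge tests plus the face plane for depth) mirrors the “four planes per primitive” description, but all of the math and names are illustrative.

```python
def edge(a, b, p):
    """2D edge function: the sign tells which side of segment a->b point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def plane_depth(a, b, c, px, py):
    """Depth of the triangle's plane at (px, py): the 'fourth plane'."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    if abs(n[2]) < 1e-9:
        return None  # triangle is edge-on to the light
    return a[2] - (n[0] * (px - a[0]) + n[1] * (py - a[1])) / n[2]

def shadowed(pixel, triangles):
    """pixel: (x, y, depth) of a screen pixel already projected into the
    light's coordinate system. Each triangle is three (x, y, z) points,
    also in light space (think of the light as a camera)."""
    px, py, pd = pixel
    for a, b, c in triangles:
        w0 = edge(b, c, (px, py))
        w1 = edge(c, a, (px, py))
        w2 = edge(a, b, (px, py))
        inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                 (w0 <= 0 and w1 <= 0 and w2 <= 0)
        if not inside:
            continue  # pixel is outside this primitive's three edge planes
        d = plane_depth(a, b, c, px, py)
        if d is not None and d < pd:
            return True  # primitive sits between the light and the pixel
    return False

tri = ((0.0, 0.0, 1.0), (4.0, 0.0, 1.0), (0.0, 4.0, 1.0))
print(shadowed((1.0, 1.0, 5.0), [tri]))  # True: pixel is behind the triangle
print(shadowed((3.5, 3.5, 5.0), [tri]))  # False: pixel misses the triangle
```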
Those were the three released features. The last two are classified as experimental betas.
The first of these is NVIDIA Flow. This technology simulates combustible fluid, fire, and smoke with an adaptive voxel simulation. This version allows the simulation to spill outside its original bounding box, and it now handles memory properly when that happens. It will be added to Unreal Engine 4 in Q2 of this year, although they did not specify whether it would be available in Epic's binary version in that timeframe, or just the GitHub source.
The second technology is PhysX-GRB. This is their popular rigid-body physics simulation, which has been given a major speed boost in this (experimental) version. NVIDIA claims that it is about two- to six-fold faster when measured under heavy load. They show a huge coliseum being reduced to rubble as balls from space crash upon it, managing ~40 FPS on whatever GPU they used. NVIDIA also claims that both CPU and GPU solvers should now produce identical results. “Flipping the switch” should just be a performance consideration.
NVIDIA closed their presentation with a few announcements of GameWorks source code being released to the public. PhysX, PhysX Clothing, and PhysX Destruction are already available, and have been for quite some time. Two new technologies are being opened up at GDC as well, though. The first is their Volumetric Lighting implementation that we discussed at the top of this article, and the second is their “FaceWorks” demo, which models skin and eye shading with sub-surface scattering and eye refraction.
NVIDIA has also announced plans, albeit not at GDC, to release the source for HairWorks, HBAO+, and WaveWorks. They are not ready to announce a timeline yet, but their intentions have been declared; in fact, they intend to open up “most or all technologies over time.” This is promising because, while registered developers can access source code privately, the community at large benefits when the public gains access. They say that they do not want to open up projects until they've matured, and that makes sense. Both Mozilla and The Khronos Group do the same, holding some projects close to their chest until they believe they are ready for the public.
The part that counts is whether the projects actually are released once they are complete.
Are these libraries being released under the same EULA as previously? Does Nvidia still have final approval over any modifications to their code included in shipping content? That would make this more of a merging of their two GameWorks licensing levels, as opposed to an actual public release of their proprietary source code.
Any restrictive EULA is just more proof of vendor lock-in, and in Nvidia’s case it’s their GameWorks middleware ecosystem! Any restrictions, or need for Nvidia’s blessing beyond normal (truly) open-source code licensing, are just not going to be accepted. It looks like Nvidia is trying to do with middleware what it cannot do through hardware-based competition against AMD’s asynchronous compute abilities, at least until Nvidia has time to fix their hardware deficiencies on the asynchronous front!
It’s just a ruse to say you are “opening the source code up” when that opening comes with NDA/other EULA restrictions that negate any real ability to use the code freely, unlike the reasonable terms that many truly open-source projects are licensed under.
I honestly think this move is to prep devs for their game engine that Tom Peterson was telling Ryan about. I am speculating that the announcement of this engine may be what is listed as “stay tuned” in their roadmap, March 2017 = GDC 2017.
And then finally in 2018 we will see them launch their DirectN API =D bahaha
And look at this, and it’s not coming from WCCFT! So very interesting!
“Rumor: NVIDIA Working On Their Own Distribution For Linux Gamers”
https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-NLinux-Distro-Rumor
Lol, not going to be accepted? Just because you are a fanboy doesn’t mean everyone is. While I do think no one really likes the fact that GameWorks is proprietary, remember that GameWorks costs money to develop. Would you rather have them do the same thing AMD has always done: hype the shit out of their open projects, only to abandon them later on? At least with GameWorks we are getting new effects, and if you don’t like them, don’t use them. AMD can only offer something to rival HairWorks, and that’s it.
FanBoy for what!
GimpWorse…
This is awesome. I don’t think it is wise to be completely open source, as that brings its share of issues. Number one is investment: when something is entirely open, investment drops as everyone waits for someone else to invest so they can reap the benefits. Keeping it closed or semi-open at least allows you to get some return on your investment. Based on my impressions, I think GameWorks is now semi-open, which is a good thing.
It’s interesting when it comes to the shadows (AO is interesting as well, but shadows in particular).
I PREFER the shadows shown before HFTS. They LOOK better. What NVIDIA is trying to do is look more realistic, though whether that’s achieved is debatable. In reality, though, the extra detail in the original shadows is a better effect, and HFTS is not worth the massive performance loss. Totally pointless.
The ambient occlusion is hit or miss. The darkening looks better sometimes, but it’s definitely not accurate, especially when the sunlight is supposed to be shining right on the objects. Shadows are shadows; we do not see darkening IRL on objects unless a light source is absent or causing a shadow. AO in itself is iffy, but a useful effect without proper lighting. It adds depth.
Hmm. Yeah, HFTS is apparently a heavy operation, but it does solve issues like shadow detachment and fine variations in geometry.
As for Ambient Occlusion, eh. It shouldn't operate on the portion of light received directly from a source. It should only operate on the ambient term, which could be calculated in several different ways. We do see a darkening in the physical world outside of "light source is absent or causing a shadow," though. For a somewhat extreme example, look at the ground under a car on a day with severe overcast clouds.
It is obviously in Nvidia’s best interest to get developers to use effects which take a lot of GPU power since they are in the business of selling more GPU power. I have seen people complaining about low frame rates in some games, even with high end hardware, but it is unclear whether this is due to bad optimization or adding effects which take a lot of processing but are not actually worth it for the visual quality. Many people are fine playing games with limited graphics if they provide good gameplay. GPU makers are really going to be depending on VR to drive more powerful GPU sales.
If you asked anyone why it is darker under a car, I think that they would just say that it is in shadow. Ambient occlusion is not a term that would (usually) be used to describe things in the real world. It is a bit of a semantic issue, though. On a cloudy day, the entire sky is a diffuse light source. You also have the light from reflections, refractions, and other light effects. At what point do you stop modeling a diffuse light source and just lump it all together as “global illumination”? If they are modeling a sunny day and actually casting shadows, then you would not need ambient occlusion to produce darker shadow under a car. It would already be in shadow because of other effects.
For most of the screen images I have seen from real games, ambient occlusion didn’t make much difference to me. The example shown in the article appears to not be using any other shadowing, so it is an extreme case. It isn’t an unrealistic case, though, because of the previously mentioned cloudy day. I am not sure what developers are using to compute shadows under these circumstances, if they are using any shadows at all. For a sunny day or anything indoors, you should already have shadowing based on the light sources. Any ambient occlusion effects will be subtle, and most people wouldn’t notice them much.