Depth of Field
The second cinematic effect Valve hopes to implement in Source is an artificial depth of field. Real-life cameras (both still and video) inherently have a focal distance used to bring the subject of the image into the sharpest focus. Points nearer or farther than that focal plane grow progressively blurrier the farther they are from it. This effect is often used in still and video photography to draw attention to particular points in a scene or to add artistic drama to an object.
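The way blur grows with distance from the focal plane can be sketched with the thin-lens circle-of-confusion formula. This is a standard optics approximation, not anything from Source; the function name and all parameter values below are illustrative.

```python
# Hedged sketch: thin-lens circle-of-confusion (CoC) diameter. The CoC is
# the blur spot an out-of-focus point projects onto the sensor; it is zero
# at the focal plane and grows with distance from it.

def coc_diameter(obj_dist, focus_dist, focal_len, aperture_diam):
    """Approximate CoC diameter for a thin lens.

    obj_dist, focus_dist, focal_len, and aperture_diam are all in the
    same units (e.g. mm).
    """
    # |obj_dist - focus_dist| drives the blur: points exactly at the
    # focus distance project to a single point (zero blur).
    return (aperture_diam * focal_len * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal_len)))

# A subject at the focus distance is perfectly sharp; nearer or farther
# points get a progressively larger blur spot.
print(coc_diameter(2000.0, 2000.0, 50.0, 25.0))  # 0.0 at the focal plane
print(coc_diameter(1000.0, 2000.0, 50.0, 25.0))  # larger CoC off-plane
```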
[Screenshots: No effects vs. Depth of Field]
Today’s video games don’t readily implement this feature; instead, we are used to seeing the entire scene in complete focus, from the nearest object to the farthest. This is mostly because the developer has no reliable way of knowing where the gamer will be looking at any given instant, so moving the focal point could cause more harm than good. Valve believes there is a valid use for depth of field effects in game engines to help draw the user’s attention to particular points of interest, especially in single-player gameplay.
Video link – No Effects (~ 45 MB)
Video Link – Split Screen – No effects vs With Depth of Field (~ 33 MB)
Video Link – Split Screen – No effects vs With Motion Blur and Depth of Field (~ 30 MB)
The depth of field effect is also implemented using the same accumulation buffer method discussed for motion blur, and because of that, real-time frame rates with depth of field enabled in the Source engine aren’t achievable yet.
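The accumulation-buffer approach can be sketched as follows: render the scene several times with the camera jittered across the lens aperture (rays still converging on the focal plane) and average the results. Everything here is a toy stand-in for a real renderer; the function names and the tiny 1-D "image" are ours, not Valve's.

```python
import numpy as np

# Hedged sketch of accumulation-buffer depth of field. render_scene is a
# placeholder for a full engine render pass.

def render_scene(camera_offset):
    # Toy "renderer": an 8-pixel image. An out-of-focus object shifts with
    # the camera offset (parallax); the in-focus object does not.
    img = np.zeros(8)
    img[int(3 + round(camera_offset * 2)) % 8] = 1.0  # off-focus object moves
    img[5] = 1.0                                       # in-focus object is stable
    return img

def dof_accumulate(num_passes=8, aperture=1.0):
    # Jitter the camera across the aperture. In-focus geometry lands on the
    # same pixels every pass and stays sharp; out-of-focus geometry lands on
    # different pixels and averages into a blur. The cost is obvious: the
    # whole scene is rendered num_passes times per frame.
    acc = np.zeros(8)
    for i in range(num_passes):
        offset = aperture * (i / (num_passes - 1) - 0.5)  # sample the aperture
        acc += render_scene(offset)
    return acc / num_passes
```

The many full render passes per frame are exactly why the article says real-time frame rates aren’t there yet.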
[Screenshots: No effects vs. Motion Blur and Depth of Field]
Current Day Approximations
There have been attempts by other developers to implement both depth of field and motion blur in games and demos in real time on current hardware using approximations. Both ATI and NVIDIA have shown such demos at separate GPU launches, but all of the currently used methods have artifacting issues that Valve wasn’t comfortable including in their engine.
Depth of field approximations have mostly been limited to image-space techniques. Notice the artifacts around the characters, which definitely detract from the realism of the effect.
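The image-space family of approximations can be sketched like this: blur each pixel by an amount driven by its depth-buffer value. The sketch below (our own simplified 1-D version, not any vendor's shader) also shows where the halo artifacts come from.

```python
import numpy as np

# Hedged sketch of an image-space depth-of-field approximation: a single
# post-process pass over the rendered image plus its depth buffer.

def image_space_dof(color, depth, focus_depth, blur_scale=2.0):
    n = len(color)
    out = np.zeros_like(color)
    for x in range(n):
        # Per-pixel blur radius from the distance to the focal plane.
        radius = int(blur_scale * abs(depth[x] - focus_depth))
        lo, hi = max(0, x - radius), min(n, x + radius + 1)
        # Naive box filter over neighbors. Because the filter ignores the
        # neighbors' own depths, sharp in-focus geometry bleeds into blurred
        # regions -- the halo artifacts visible around the characters.
        out[x] = color[lo:hi].mean()
    return out
```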
Motion blur has traditionally been implemented with a frame-feedback method that reuses images from older frames, slightly dimming them instead of removing them from the screen, to suggest a blurring effect. Valve provided a video (linked below) showing how frame feedback could be implemented in Source and how poor it made the game look and feel.
Video Link – Frame Feedback Motion Blur (~ 7 MB)
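The frame-feedback method amounts to blending each new frame with a dimmed copy of what is already on screen. A minimal sketch, with an illustrative feedback weight of our choosing:

```python
import numpy as np

# Hedged sketch of frame-feedback motion blur: cheap (no extra render
# passes), but stale frames linger on screen as the smearing trails
# Valve's video demonstrates.

def feedback_blur(frames, feedback=0.6):
    # feedback is the fraction kept from the previous on-screen image;
    # the higher it is, the longer the ghosting persists.
    screen = np.zeros_like(frames[0])
    history = []
    for frame in frames:
        screen = feedback * screen + (1.0 - feedback) * frame
        history.append(screen.copy())
    return history
```

Note that a bright object decays exponentially rather than disappearing, which is exactly the ghost-trail look the video shows.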
Vector motion blur is another method that actually produces decent motion blur effects. The problem with this method, though, is that it requires predefined parameters telling the GPU in which direction to blur. While it works well for demos and games with very linear, scripted camera paths, a game like Day of Defeat or HL2, where the user has completely open control, raises too many issues for developers to tackle.
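The idea behind vector motion blur can be sketched as smearing the image along a supplied screen-space motion vector. In the toy version below (our own simplification), the vector must be known up front, which is the limitation the article describes for free-look games.

```python
import numpy as np

# Hedged sketch of vector motion blur: average the image at several sample
# points along a predefined motion vector, approximating the object's path
# across the screen during one frame's exposure.

def vector_blur(image, motion, samples=4):
    # motion: screen-space offset in pixels for this frame; in a real
    # engine this would come from per-object or per-pixel velocity data
    # that has to be supplied in advance.
    n = len(image)
    out = np.zeros_like(image)
    for s in range(samples):
        t = s / (samples - 1)            # 0..1 along the motion vector
        shift = int(round(t * motion))
        for x in range(n):
            out[x] += image[(x - shift) % n]
    return out / samples
```

A single bright pixel gets spread evenly along the motion vector, with total brightness preserved.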
So even though other methods exist to achieve nearly the same effect, Valve is looking to take these effects to the next level; unfortunately, in this instance we will have to wait for the hardware to catch up to the software.