When VR started to take off, developers began to realize that audio is worth some attention. Historically, audio has been difficult to market, but that’s par for the course with VR technology, so I guess that’s no excuse to pass it up anymore. Now Valve, owner of the leading VR platform on the PC, has just released an API for audio processing: the Steam Audio SDK.

Image Credit: Valve Software

First, I should mention that the SDK is not quite open. The GitHub page (and the source-code ZIP on its releases tab) contains only the license (which is an EULA) and the readme. That said, Valve is under no obligation to open-source this sort of technology (even though it would be nice), and it is maintaining builds for Windows, Mac, Linux, and Android. The SDK is currently available as a C API and as a plug-in for Unity; Unreal Engine 4, FMOD, and Wwise plug-ins are “coming soon”.

As for the technology itself, it has quite a few interesting features. As you might expect, it supports HRTF (head-related transfer function) rendering out of the box, which filters a sound so that it appears to come from a particular direction. The filters are based on experimentally measured data rather than a simulation of the underlying physics.
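The core idea behind HRTF rendering is simple: each direction has a pair of measured impulse responses (one per ear), and spatializing a mono source is just convolving it with that pair. Steam Audio’s actual implementation isn’t public, so this is only a toy sketch; the tiny `hrir_l`/`hrir_r` arrays below are made-up stand-ins for real measured data, chosen so the right ear hears a delayed, quieter copy, as it would for a source off to the listener’s left.

```python
import numpy as np

def apply_hrtf(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with the left- and
    right-ear head-related impulse responses for one direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, len(mono) + len(hrir) - 1)

# Toy stand-ins for measured HRIRs: the right ear gets the signal two
# samples later and attenuated, mimicking a source to the listener's left.
hrir_l = np.array([1.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.6])

mono = np.sin(2 * np.pi * 440 * np.arange(1024) / 44100.0)
stereo = apply_hrtf(mono, hrir_l, hrir_r)
```

A real engine would interpolate between HRIR measurements as the source moves, and do the convolution in the frequency domain for speed, but the signal path is the same.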

More interesting are the sound propagation and occlusion calculations. Valve claims these are raycast, and that static scenes can bake some of the work ahead of time, which reduces runtime overhead. Unlike VRWorks Audio or TrueAudio Next, though, it looks like they’re doing it on the CPU. My guess is that this means the raycasts are mostly used to fade between versions of the audio, rather than summing contributions from thousands of individual rays at runtime (or an equivalent algorithm, like voxel leakage).
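To make the fading idea concrete, here’s a minimal sketch of raycast occlusion under my assumptions above; none of this is Steam Audio’s actual code. A handful of rays go from the listener to points scattered around the source, and the fraction that arrive unblocked becomes a 0–1 factor you can use to crossfade between the “dry” and “muffled” versions of a sound. The `segment_hits_box` slab test and the axis-aligned-box scene are illustrative helpers I made up for the example.

```python
import numpy as np

def segment_hits_box(p0, p1, box):
    """Slab test: does the segment p0 -> p1 intersect the AABB (lo, hi)?"""
    lo, hi = box
    d = p1 - p0
    tmin, tmax = 0.0, 1.0
    for axis in range(3):
        if abs(d[axis]) < 1e-9:
            if p0[axis] < lo[axis] or p0[axis] > hi[axis]:
                return False  # parallel to this slab and outside it
        else:
            t1 = (lo[axis] - p0[axis]) / d[axis]
            t2 = (hi[axis] - p0[axis]) / d[axis]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
            if tmin > tmax:
                return False
    return True

def occlusion_factor(source, listener, blockers, n_rays=16, radius=0.5):
    """Cast rays from the listener to points jittered around the source;
    return the fraction that reach it unblocked (0 = fully occluded)."""
    rng = np.random.default_rng(0)
    clear = 0
    for _ in range(n_rays):
        target = source + rng.uniform(-radius, radius, 3)
        if not any(segment_hits_box(listener, target, b) for b in blockers):
            clear += 1
    return clear / n_rays

listener = np.zeros(3)
source = np.array([0.0, 0.0, 10.0])
wall = (np.array([-5.0, -5.0, 4.9]), np.array([5.0, 5.0, 5.1]))

open_air = occlusion_factor(source, listener, [])       # nothing in the way
behind_wall = occlusion_factor(source, listener, [wall])  # wall between them
```

The per-frame audio mix would then be something like `f * dry + (1 - f) * muffled`, which is far cheaper than accumulating energy from thousands of rays every audio tick; jittering several rays (instead of one) smooths the transition at occluder edges.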

Still, all of this is available now as a C API and a Unity plug-in, because Valve really likes Unity lately.