Using a GPU for audio makes a lot of sense. That said, the original TrueAudio wasn't really about that, and it never took off. The API was implemented in only a handful of titles, and it required dedicated hardware that AMD has since removed from its latest architectures. It was not about using the extra horsepower of the GPU to simulate sound, although AMD did have ideas for “sound shaders” in the original TrueAudio.
TrueAudio Next, on the other hand, is an SDK that is part of AMD's LiquidVR package. It is based around OpenCL; specifically, it uses AMD's open-source FireRays library to trace the ways that audio can move from source to receiver, including reflections. For high-frequency audio, treating sound as rays is a good approximation, and that range of frequencies is more useful for positional awareness in VR, anyway.
Basically, TrueAudio Next has very little to do with the original.
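To make the geometric-acoustics idea concrete, here is a minimal Python sketch (my own illustration, not AMD's or FireRays' code) of what a single traced path contributes to the output: an arrival delay proportional to path length, and a 1/distance amplitude falloff from spherical spreading. The constants and function name are mine.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def path_contribution(path_length_m, sample_rate=48000):
    """Delay (in samples) and amplitude gain for one propagation path.

    In geometric (ray) acoustics, energy spreads spherically, so
    amplitude falls off as 1/distance, and the arrival is delayed
    by distance / speed of sound. A first-order reflection is just
    another path with a longer length (mirror the source across the
    reflecting surface) and an extra absorption factor.
    """
    delay_samples = int(round(path_length_m / SPEED_OF_SOUND * sample_rate))
    gain = 1.0 / max(path_length_m, 1.0)  # clamp to avoid blow-up near the source
    return delay_samples, gain

# Direct path of 3.43 m -> ~10 ms of delay -> 480 samples at 48 kHz
print(path_contribution(3.43))
```

Summing many such delayed, attenuated copies of the source signal (direct path plus reflections) is what produces the environmental effect.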
Interestingly, AMD is providing an interface for TrueAudio Next to reserve compute units, although optionally and under NDA. This allows audio processing to be decoupled from the video frame rate, provided that the CPU can keep both fed with actual game data. Since audio is typically a secondary thread, it could be ready to send sound calls at any moment. Existing asynchronous compute features could help with this, but allowing developers to wholly reserve a fraction of the GPU should remove the issue entirely. That said, when I was working on a similar project in WebCL, I was looking to the integrated GPU, because it's there and it's idle, so why not? I would assume that, in actual usage, CU reservation would only be enabled if an AMD GPU is the only device installed.
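The decoupling described above is, at heart, a producer/consumer pattern: the game thread fires sound events whenever it likes, and a separate audio loop drains them at its own cadence, independent of the render loop. A toy Python sketch (the event names and structure are mine, purely illustrative):

```python
import queue
import threading

events = queue.Queue()  # game thread -> audio thread (thread-safe FIFO)
mixed = []              # stand-in for work submitted to the audio device
stop = threading.Event()

def audio_loop():
    # Runs at its own cadence; never blocks the render loop.
    # Keep draining until asked to stop AND the queue is empty.
    while not stop.is_set() or not events.empty():
        try:
            evt = events.get(timeout=0.005)
        except queue.Empty:
            continue
        mixed.append(f"play:{evt}")

t = threading.Thread(target=audio_loop)
t.start()
for name in ("footstep", "gunshot", "reload"):
    events.put(name)    # the "game thread" fires events at any moment
stop.set()
t.join()
print(mixed)  # ['play:footstep', 'play:gunshot', 'play:reload']
```

With reserved compute units, the GPU side of this loop would have guaranteed capacity rather than competing with rendering work.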
Anywho, if you're interested, then be sure to check out AMD's other post on it, too.
just as useless as anything before it
the reason being that Microsoft killed off DirectSound3D in DirectX – the most asinine decision MS made regarding DX
the alternative, OpenAL, never truly took off
Nope. Unless AMD's specific implementation is doing something funky, these libraries send raw sample buffers to the sound devices, implementing, in software, everything that DirectSound would have provided. It has to be implemented in software, though.
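For illustration, here is roughly what “implementing it in software” means at its simplest: a mixer that sums gain-scaled source buffers into one 16-bit signed output buffer, clipping rather than wrapping on overflow, before the raw result is handed to the device. This is my own minimal sketch, not any particular library's code:

```python
import array

def mix(sources, gains):
    """Software mixer: weighted sum of 16-bit sample buffers.

    Each output sample is the running sum of gain-scaled source
    samples, clamped to the signed 16-bit range (clip, don't wrap).
    """
    n = len(sources[0])
    out = array.array("h", [0] * n)  # "h" = signed 16-bit
    for buf, gain in zip(sources, gains):
        for i in range(n):
            s = out[i] + int(buf[i] * gain)
            out[i] = max(-32768, min(32767, s))
    return out

a = array.array("h", [1000, -2000, 30000, 100])
b = array.array("h", [500, 500, 10000, -100])
print(list(mix([a, b], [1.0, 0.5])))  # [1250, -1750, 32767, 50]
```

A real mixer would also resample, apply per-voice effects, and work in floating point internally, but the core job is the same weighted sum.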
that’s exactly the problem: AMD’s implementation needs to be coded separately through OpenCL – it will never take off
this is why OpenAL never took off: it’s not a standard library of Windows or DirectX, even though OpenAL is cross-platform – so developers go with DirectX 9 DirectSound even today
Marketing the API to developers doesn't really have anything to do with DirectSound 3D. We'll need to wait and see whether AMD can convince middleware (FMOD and Wwise) and engines (UE4, Frostbite, Unity, and so forth).
marketing!?!?
it’s a matter of development time and money
they need to code it in separately, rather than how they used to do it with the DX DirectSound library
since it’s OpenCL, it should work with Intel and/or nVidia; for that they have to be sure it doesn’t create issues with how either of them handle OpenCL
then there’s the problem of GPU usage – even if it’s asynchronous, you could end up with all sorts of issues, including desynchronization; and to alleviate that you need a legacy path, DX DirectSound
and then you get back to the original problem – why!?
Because they need to implement some audio subsystem, and one that provides physics-based environmental effects is interesting, especially for VR.
If this is available on consoles, and can use the same OpenCL code, then it probably has a much better chance of being more widely used. If it can take advantage of asynchronous compute, then that could be a big win also; why wouldn’t it run via asynchronous compute though? It seems like that would be the best way to implement it. I don’t know if specialized hardware like the original TrueAudio is necessary. It probably doesn’t actually take that much power compared to graphics processing.
I have wondered if integrated GPUs could be used for general asynchronous compute jobs. These jobs should be able to run anywhere the resources are available, but the system would need to be aware of the asymmetry of the GPUs and be able to schedule the jobs accordingly. Many systems will be going to APUs in the next few years anyway though. With how powerful 14 nm APUs will be, especially if they use HBM, most of the mainstream market will use APUs.
“since it’s OpenCL, it should work with Intel and/or nVidia; for that they have to be sure it doesn’t create issues with how either of them handle OpenCL”
Why? Why does AMD have to be sure it doesn’t create issues with how Intel and/or Nvidia handle OpenCL? Isn’t that up to Intel and/or Nvidia?
not sure specifically what the above quote was MEANT to mean, but perhaps it refers to OpenCL being a standard, and in general everybody is supposed to play nice together.
That said, even when using OpenCL code, they are under no obligation to actually make it work on other hardware, so I’m a bit confused on this.
I can understand wanting a “just works” across the PC ecosystem to assure it is adopted though. That’s the type of OPEN software we really need.
I just want games to include support for this audio processing coupled with support for this headset, and all will be well:
https://www.youtube.com/watch?v=Kp7HGs46lt0
I want a headphone recreation of the sort of Dolby Atmos million-dollar theater speaker setups when I play games and watch movies. And, sans the bass, it seems like these speakers might be able to deliver that. Same for VR, though I’m jumping in after we get better displays.
“I was looking to the integrated GPU, because it’s there and it’s idle, so why not?”
Of course, but you know this part will probably be branded by Intel, so … still, you’re right: today’s computers come with plenty of unused resources that should be put to work. However, I really think the market could benefit from a generic, cross-platform OpenCL audio API supported by a big brand like AMD, so maybe OpenAL can also get something out of this.
I did my audio processing experiments on an Intel HD 4600 (~20,000 – 30,000 stereo sounds). They're good little OpenCL devices, and Iris Pro is much better than that. While AMD does graphics very well, Intel is also quite capable.
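For a rough sense of scale, here is a back-of-envelope calculation (my own assumed numbers, not measured figures): mixing 25,000 stereo voices at 48 kHz, at a handful of floating-point operations per sample, lands around 10 GFLOPS – a small fraction of the few hundred GFLOPS of peak compute that even an integrated GPU of that class offers.

```python
sample_rate = 48_000     # Hz
channels = 2             # stereo
voices = 25_000          # midpoint of the 20,000-30,000 range above
flops_per_sample = 4     # assumed: gain multiply + accumulate, plus overhead

samples_per_sec = sample_rate * channels * voices
gflops = samples_per_sec * flops_per_sample / 1e9
print(samples_per_sec, round(gflops, 1))  # 2400000000 9.6
```

Even doubling or tripling the per-sample cost for effects leaves plenty of headroom, which supports the point that plain mixing doesn't need much GPU.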
Could this be used in phones for anything audio-related? If so, there would be no need for a specific audio processor in an SoC – just use a portion of the semi-programmable GPU. A big leap forward for a design win in an i-Phoney.
Or am I viewing it in a manner that is too simplistic?
Audio processed like magnetism? OK, I like the idea, but there is a problem with this idea. Go in shower 3 at the performance stopping center in North Carolina, I-95 exit 106. You’ll notice that geometry can amplify sound – kind of like a mirror that makes you look fat. The same thing happens here. It’s all a nice and good idea if it’s wanted, but it adds a lot of everything. Also, do they plan to use a fast Fourier transform from 2014 or newer, or do they plan to stick to the old way? I’m all for better sound, but geez, we’ve been doing sound the classic way for a while, and if physics ignores sound, we should probably too. There is a lot going on in sound that might surprise a ton of scientists: physical properties, quantum properties, galactic properties. Just because it isn’t visible (like magnetism, electricity, inertia, etc.) doesn’t mean it doesn’t exist, lol. If I didn’t know better, I would call sound a force. So, AMD, you might want to ask science first before going on with this.
This is spam wrapped in poop! Worded as if it was run through Google Translate three times, starting with Pig Latin and progressing through Chinese, Gaelic, and finally English words that appear to be some form of gibberish! Although it could just be a snippet of the US tax code!
Yeah, I wasted a few seconds of my life trying to understand that post. Audio is quite well understood. Understanding how it works and replicating it in headphones are different things.
I can easily understand why pressure difference makes an airplane fly, but building one is a lot harder.
I meant 3D audio working with a game, VR app in conjunction with headphones.
Just support and utilize DTS:X and do it real time. Forget all this other BS.
This has nothing to do with the problems that DTS:X attempts to solve.
What will the next version be named? TrueAudio New Next? Why not simply call it TrueAudio 2.0? The “Next” name is great for stuff that is in development, but for a released product it’s really bad.
It's an interesting challenge. TrueAudio was a decently well known trademark, but the technology didn't take off. Calling it TrueAudio 2.0 would be confusing, because it doesn't do what TrueAudio 1.0 did.