Introduction
We explore the technology and performance of aptX audio.
Bluetooth has come a long way since the technology was introduced in 1998. The addition of the Advanced Audio Distribution Profile (A2DP) in 2003 brought support for high-quality audio streaming, but Bluetooth still didn’t offer anywhere near the quality of a wired connection. This unfortunate fact is often overlooked in favor of the technology's convenience factor, but what if we could have the best of both worlds? This is where Qualcomm's aptX comes in, and it is a departure from the methods in place since the introduction of Bluetooth audio.
What is aptX audio? It's actually a codec that compresses audio in a very different manner than that of the standard Bluetooth codec, and the result is as close to uncompressed audio as the bandwidth-constrained Bluetooth technology can possibly allow. Qualcomm describes aptX audio as "a bit-rate efficiency technology that ensures you receive the highest possible sound quality from your Bluetooth audio device," and there is actual science to back up this claim. After doing quite a bit of reading on the subject as I prepared for this review, I found that the technology behind aptX audio, and its history, is very interesting.
A Brief History of aptX Audio
The aptX codec has actually been around since long before Bluetooth, with its invention in the 1980s and first commercial applications beginning in the 1990s. The version now found in compatible Bluetooth devices is 4th-generation aptX, and in the very beginning it was actually a hardware product (the APTX100ED chip). The technology has had a continued presence in pro audio for three decades now, with a wider reach than I had ever imagined when I started researching the topic. For example, aptX is used for ISDN line connections for remote voice work (voice over, ADR, foreign language dubs, etc.) in movie production, and even for mix approvals on film soundtracks. In fact, aptX was also the compression technology behind DTS theater sound, which had its introduction in 1993 with Jurassic Park. It is in use in over 30,000 radio stations around the world, where it has long been used for digital music playback.
So, while it is clear that aptX is a respected technology with a long history in the audio industry, how exactly does this translate into improvements for someone who just wants to listen to music over a bandwidth-constrained Bluetooth connection? The nature of the codec and its differences/advantages vs. A2DP is a complex topic, but I will attempt to explain in plain language how it actually can make Bluetooth audio sound better. Having science behind the claim of better sound goes a long way in legitimizing perceptual improvements in audio quality, particularly as the high-end audio industry is full of dubious – and often ridiculous – claims. There is no snake-oil to be sold here, as we are simply talking about a different way to compress and uncompress an audio signal – which is the purpose of a codec (code, decode) to begin with.
How aptX can improve wireless sound quality
Standard Bluetooth audio streams are handled by a codec called SBC (low-complexity subband codec), which applies lossy compression as it encodes the signal to be decoded at the receiving end. What do I mean by “lossy”? With this type of compression, a significant amount of the original signal is removed from the audio source using psychoacoustic auditory masking, a technique in which certain frequencies are omitted on the premise that they would be “masked” by other frequencies anyway. This is the same method an MP3 encoder uses to reduce file size without causing significant noticeable quality loss. I will not attempt to convince anyone who claims to hear no audible difference between an MP3 and a CD source that there is anything wrong with this; I am only pointing out that such compression discards enough of the original signal to be considered “lossy” compression.
To understand how aptX is different, I'll begin with a look at standard pulse code modulation, which is the simplest form of digital audio (Compact Disc audio, Blu-ray soundtracks, and WAV/AIFF files are common examples of PCM). The process begins by capturing the original analog sound wave at intervals defined by the sample rate. For 44.1 kHz audio the waveform is sampled 44,100 times per second, with each sample’s amplitude recorded at a particular bit depth (16-bit in the case of CD audio). The higher the sample rate and bit depth, the more accurate the resulting representation of the original analog wave will be.
A quantized waveform (image credit: Digital Sound & Music)
That’s it. It is an extremely accurate method of digitizing sound, but it requires a significant amount of data for high resolution. The terminology might be a little daunting, with words like “quantization” used to describe not only PCM but aptX streaming as well, yet these are not especially difficult concepts. With PCM, each sample taken of the analog signal must be quantized – simply, turned into the ones and zeros your device will store to represent the audio signal (16-bit, for example, is the bit depth used for quantization on CD). In the 'high-resolution' music world many PCM files are losslessly compressed using formats such as FLAC or ALAC to achieve a smaller file size, but to go beyond what these codecs can provide in bandwidth reduction – a necessity when streaming audio over Bluetooth – more is required.
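To make the sampling and quantization steps concrete, here is a minimal sketch in Python. The 440 Hz test tone, the function names, and the five-sample printout are purely illustrative choices for this explanation – nothing here comes from any audio specification:

```python
import math

SAMPLE_RATE = 44_100   # samples per second (CD quality)
BIT_DEPTH = 16         # bits per sample (CD quality)

def quantize(value, bits=BIT_DEPTH):
    """Map an analog amplitude in [-1.0, 1.0] to a signed integer code."""
    levels = 2 ** (bits - 1) - 1          # 32767 steps for 16-bit audio
    return round(value * levels)

def sample_sine(freq_hz, n_samples, rate=SAMPLE_RATE):
    """'Capture' a pure tone: sample at the given rate, quantize each sample."""
    return [quantize(math.sin(2 * math.pi * freq_hz * n / rate))
            for n in range(n_samples)]

samples = sample_sine(440.0, 5)           # first five samples of an A440 tone
print(samples)

# The cost of this accuracy: stereo CD audio requires
# 44,100 samples/s x 16 bits x 2 channels = 1,411,200 bits per second.
print(SAMPLE_RATE * BIT_DEPTH * 2)        # 1411200
```

That ~1,411 Kbps figure is what makes the Bluetooth budget discussed below such a tight fit, and why some form of compression is unavoidable for wireless streaming.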
So how can aptX make a difference, considering it still operates within the restriction of the ~345 Kbps Bluetooth audio streaming specification? The difference is in the way the signal is compressed before being transmitted: aptX does not use psychoacoustic techniques to achieve its compression. It is still considered “lossy”, but it works in the time domain rather than the frequency domain, using ADPCM (adaptive differential pulse-code modulation).
"As its name implies, ADPCM is a technique which re-codes the difference between two digital audio samples, using quantisation step-sizes that adapt to the energy of the input audio signal. In this way ADPCM can provide a similar audio quality to linear PCM but at a much reduced bit-rate."
With the ADPCM compression used in aptX Bluetooth streams, the difference between the quantized samples can be transmitted and used to reconstruct the original signal on the receiving end, saving data, and therefore valuable bandwidth.
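The difference-coding idea can be sketched with a toy encoder/decoder pair. To be clear, this is an illustrative ADPCM-style coder I wrote for this explanation – not Qualcomm's actual aptX algorithm, which also splits the signal into frequency subbands before prediction:

```python
def adpcm_encode(samples, bits=4):
    """Toy ADPCM: code each sample as a quantized difference from a running
    prediction, with a step size that adapts to the signal's energy.
    Illustrative only -- not the real aptX codec."""
    step, predicted, codes = 16, 0, []
    max_code = 2 ** (bits - 1) - 1            # e.g. -7..7 for 4-bit codes
    for s in samples:
        diff = s - predicted
        code = max(-max_code, min(max_code, round(diff / step)))
        codes.append(code)
        predicted += code * step              # the decoder tracks this same state
        # adapt: saturated codes grow the step, tiny codes shrink it
        step = max(1, step * 2 if abs(code) == max_code else
                      step // 2 if abs(code) <= 1 else step)
    return codes

def adpcm_decode(codes, bits=4):
    """Mirror of the encoder: rebuild samples from the transmitted differences."""
    step, predicted, out = 16, 0, []
    max_code = 2 ** (bits - 1) - 1
    for code in codes:
        predicted += code * step
        out.append(predicted)
        step = max(1, step * 2 if abs(code) == max_code else
                      step // 2 if abs(code) <= 1 else step)
    return out

original = [0, 30, 65, 95, 120, 130, 125, 100]
codes = adpcm_encode(original)    # 4 bits per sample instead of 16
print(adpcm_decode(codes))        # [0, 32, 64, 96, 120, 128, 124, 110]
```

Each 16-bit sample is replaced by a 4-bit difference code – a 4:1 reduction, which happens to match the compression ratio commonly quoted for aptX – while the adaptive step size keeps the reconstruction close to the original waveform.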
Without going into any more detail from my research into this subject (those links are all PDF files, just to warn you!), I will simply say that there is a lot of dense reading to be found on the subject of ADPCM. And anyway, that's enough technical mumbo-jumbo! None of this means much if aptX doesn't improve the sound of a Bluetooth stream. So, how does aptX sound? I will discuss my findings on the next page.
Nice article Sebastian, thanks!
I’d be interested in your thoughts (an article) on MQA should you get the chance and have the desire to write about it.
Thanks! MQA is an interesting idea, but there are some big questions about it – even though audiophile magazines such as Stereophile and The Absolute Sound have praised it. But TAS, for example, has also said a particular $945 USB cable "revealed an even larger and more dimensional spatial perspective" (the full 'review' is in their 2016 buyer's guide). MQA can certainly be questioned, as it operates in rather mysterious ways that are not fully explained. While not an independent source, as they sell their own ADC/DAC products (which are very highly respected, by the way), it's interesting nonetheless to read what Benchmark has to say about MQA.
Hans Beekhuyzen did a YouTube video that explained MQA well, I think. Sounds like an interesting methodology – and one he claims reduces jitter to an almost insignificant level.
https://www.youtube.com/watch?v=T5o6XHVK2HA
Whether or not it’s something that will improve the listening experience for most people – I really doubt it; most people don’t hear the kind of faults in sound reproduction he does.
Tidal (and I think a couple of others) have started streaming MQA music. As I understand it there is a software-level implementation which allows for some benefit even if you don’t have a DAC with the technology built in. Of course a lot of their music files are just converted files from lower quality methodologies, and no improvement in compression or streaming can correct the work of a heavy-handed “sound engineer” that’s stomped all over the music (as in the loudness problem you mentioned), which I think is a bigger problem than whatever compression type is used.
The hardware certification process is a serious issue. Schiit has said they won’t use MQA because it means letting the parties responsible for the qualification a look into their proprietary hardware – and of course there will be licensing fees passed on to the customer. They might re-evaluate if it becomes wide-spread, but right now they have no plans to implement it.
Anyhoo, as you well know discussions about sound quality tend to degrade into trash-talking and are of no educational value. Personally I like to hear from a variety of sources about products and technologies. I respect your opinion, which is why I’d like your treatment on the subject. 🙂
ADPCM in this day and age? That was outdated decades ago. It was popular because it was fairly simple to implement and gave reasonable compression. But that was in the 1980’s!
Now, we have vastly more DSP embedded in all of our devices and ADPCM is not a good choice.
But, I’m sure Qualcomm has it patented up and down, so of course they’ll push it to broad adoption.
It's not outdated at all, and it is used professionally for good reason. The focus of the article was its advantage in situations where we don't have the bandwidth for lossless compression, which still requires far more than the ~328-345 Kbps available via Bluetooth. What other method would you suggest to prevent the signal loss of psychoacoustic auditory masking codecs such as SBC? Re-compression is a major problem with that technique, and can only be mitigated if you listen exclusively to uncompressed/lossless files.
Qualcomm purchased it in 2015 from CSR, and it has been an industry standard in compression since the 1990s. No professional application is going to use frequency-domain compression. Now that we have the bandwidth for PCM tracks with Blu-ray, I understand that it seems redundant for that use-case – though note that aptX was never used for the home version of even the DTS sound system it powered in commercial theaters, as AAC became the standard for DVD soundtracks. It is a pro-level form of compression that is now being implemented in consumer Bluetooth devices, and that's all.
You mention DSPs, and while a proprietary solution could certainly be 'baked in' to an SoC, how would it be transmitted without a proprietary connection? Bluetooth's limitations are what they are. Sony, for example, offers a very high quality wireless solution (LDAC) with their digital Walkman players, but it requires the use of – again – proprietary wireless technology only found in select Sony headphones and those compatible players. Without an alternative wireless connection, no DSP is going to circumvent the bandwidth limitation of Bluetooth.
And speaking of DSPs, now that Qualcomm owns aptX audio, it won't be long until Snapdragon processors can offer the tech in their audio DSP, which means that more and more smartphones going forward could potentially offer an aptX Bluetooth connection with compatible headphones (and the number of available, compatible headphones will obviously increase). Apple chose not to include aptX with the current iPhone, which means you are stuck using AirPods for the best wireless sound. Again, proprietary tech, and only one option on the market.
As aptX audio is actually gaining traction in hi-fi audio with some of the best wireless headphones and speakers adopting it, I again call to question the assertion that it is "outdated" technology.
ADPCM and the variant known as AptX are quite old. The most recent commercial use of AptX was in 2007. ADPCM *was* popular in the late 80’s and early 90’s because it could be used to encode FM radio quality audio. I know this because *I was there doing it* commercially. We used ADPCM because it was the best sounding CODEC that could run in real time on DSP chips available to us. That changed in the early/mid 90’s when other CODECs became practical because of the increased power of PCs and of DSPs.
SBC is a very simple CODEC but has way fewer artifacts than ADPCM until you get to very high bit rates. I might point out that AptX is not a strict time-domain ADPCM – instead it is a subband filter, and the resulting subbands are then predictively coded. At the bit rates used in BT, SBC does very little masking that you would be able to detect. The reason they do psychoacoustic masking is *because you can’t hear it*. SBC is perfectly fine for the application for which it’s being used.
If I were to offer a different CODEC in its place, the last thing I would do would be to suggest something like AptX or any other ADPCM-derived system. I’d suggest Opus. It’s low latency, very high quality at low bit rates, resistant to packet loss, and very near lossless at reasonably low bit rates – rates well below what AptX needs. This lower bandwidth need could go into more FEC or into lowering the transmitted bit rate to increase the Eb/N0 of the RF link.
The bandwidth available on BT is significantly above what you need for near-lossless audio. SBC and Opus are both easily implemented on the DSPs that are *already* inside of our phones. Every little BT module *is* a DSP. That’s how most modern RF processing is done. In the context of Qualcomm, every single one of their phone chips has a DSP built in that is well larger than needed for SBC or Opus.
There is no need for a proprietary CODEC to solve this problem, be it from Qualcomm or Sony.
ADPCM and AptX have as little place in this day and age as tube amps.
Tube amps are the shit, yo.
They are, indeed, shit.
See you throw these big words around dude but you haven’t even experienced the difference. If you had you wouldn’t be arguing.
It sounds like it’s plugged in. Period. And I’m an audiophile freak.
Normal Bluetooth sounds OK, but not good like this.
What’s your deal anyway? Trying to defend your iPhone 7 or something lol
Sebastian, is that you?
great article, learned a lot about the history of this tech. thanks~!
Thank you!
Is there a relation with the Audio-Technica tech that was (I think) disclosed at CES?
[edit] I think, the answer is yes: https://www.pcper.com/news/General-Tech/CES-2017-Audio-Technica-Expands-Wireless-Headphone-Lineup-aptX-Bluetooth [/edit]
Yep! AT is expanding its wireless headphone lineup and includes aptX audio support.
The problem is aptX is not supported on the platforms that matter: pure Google (Nexus AOSP), iOS (iPhone/iPad), and Windows (without CSR Bluetooth dongles).
I’m not investing hundreds of dollars into headphones that won’t be aptX-enabled on most of my devices.
$30 Bluetooth headphones on Amazon support it. In fact most headphones support it. You can also clearly hear the difference. My old $200 Android phone from 2 years ago supports it as well.
The only way it won’t be supported is if you’re in the Apple ecosystem, but then you don’t care about quality anyway.
There’s a nice budget audiophile niche this appeals to. A future iteration of that little 25W stereo tube amp with aptX/Bluetooth sold by Monoprice would be awesome plugged into nice full-range driver speakers. Madisound sells a Fostex kit that’s supposed to be lights out. If Bluetooth audio were truly improved, hi-fidelity audio would be a mainstream thing again.
The Monoprice amp:
https://www.monoprice.com/mobile/product/details/13194?gclid=CjwKEAiA2abEBRCdx7PqqunM1CYSJABf3qva9HXkiuCyj_qeQ6F1N9GNV9TZFucITsSG7wxZ_Hi7axoCELHw_wcB
The Madisound kit:
https://www.madisoundspeakerstore.com/full-range-speaker-kits/fostex-p1000-bh-4-full-range-back-loaded-horn-kit-pair/
Samsung has something called UHQ audio which is supposed to be better than aptX. Is it possible that you evaluate UHQ audio? I thoroughly enjoyed reading this one, thanks!
345 Kbps is actually quite a bit for audio — that’s enough room for a 320 Kbps MP3 or even something like Ogg Vorbis, which is not wrapped up in a bunch of licensing BS. Not sure why they went with AptX instead. Hrmm
Because Qualcomm bought CSR, and CSR makes *all the BT chipsets*. Well, a very large chunk of the ones that aren’t in Qualcomm chipsets already.
Why is aptX audio not in the LG V20???
Been using Logitech G933 headphones and am not happy – all sorts of trouble, even after a free replacement set. Bought Sennheiser Momentum, which support aptX, and a Telme2 Toslink-to-Bluetooth adapter attached to the S/PDIF port on my motherboard. I am amazed with the audio. Best I have ever had for wireless audio by far.