If you’ve started down the rabbit hole of audiophile gear, you’ve probably come across folks out there imploring you to buy a digital-to-analog converter (DAC). It can be a little startling to be told that you don’t have the right equipment, but before you go racing off to figure out how much money you’re going to be blowing: read this article first to know if you actually need one. Chances are good that you’re completely fine without it.

This is a long article where I try to be as complete as possible, so feel free to skip around. I just don’t want anyone to see this and feel like they were misled or I glossed over something important.

So what is a DAC?

A DAC simply converts a digital signal into an analog one so that your headphones can then create sound. It’s that simple! Most DAC chips are found in the sources of whatever you’re listening to, and generally run a manufacturer from $3 to $30. At this point, it’s a very basic component of any smartphone, though the headphone jack seems to be a dying feature (Editors’ note: a pox on your house, Apple).


Much like headphone amplifiers, standalone DACs came about as a response to poor audio quality at the consumer level. Back in the day, it was a lot tougher to find good hardware, and nerds like me had to deal with devices that couldn’t keep up with higher-end headphones and speakers. Sometimes the DAC assembly would be improperly shielded—introducing staticky noise—or it’d be a little too cheap, making the output kinda crappy. Lower sample rates, badly encoded MP3s… there were tons of things that children of the 80s and 90s had to deal with when it came to audio. Who wants to listen to low-quality tunes?

But digital music has come a long way since then. Better tech has made the shortcomings of even the cheapest chips almost nonexistent, while digital music has exploded in quality past the point of diminishing returns. Where it used to be true that your Walkman’s or laptop’s internal DAC chip wouldn’t be suitable for high-bitrate listening, nowadays even compact budget units can keep up.

How does a DAC work?

A diagram showing the difference between high and low bitrate.

Low bitrates (a) can mangle the waveform a bit, but higher bitrates (b) can sound better in certain circumstances.

Now that you know the why of DACs, let’s delve into the how.

All audio, whether it’s stored on vinyl or in an MP3, is a compression wave when it’s played back. When computers record an analog signal, it’s typically displayed in what’s called a waveform: a representation of the wave where the Y axis is amplitude (how powerful the wave is) and the X axis is time. Each wave will have a crest and valley—called a period—and the number of periods per second is called the frequency (displayed in Hz). If you’ve heard that word before, you know that a sound’s frequency also corresponds to its note. The higher the frequency, the higher the note.
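
If you’re the type who likes to see this stuff as code, here’s a minimal sketch (in Python with NumPy, purely my choice of tools) of a waveform as a computer stores it: the position in the array is the time axis, the value at each position is the amplitude, and the frequency is how many crest-and-valley cycles squeeze into each second.

```python
import numpy as np

SAMPLE_RATE = 44100  # points recorded per second
FREQUENCY = 440      # Hz; the note A4

# Time axis: one second of evenly spaced instants.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

# Amplitude at each instant -- the waveform itself.
# One full crest-and-valley (a period) repeats 440 times per second.
waveform = np.sin(2 * np.pi * FREQUENCY * t)

print(waveform[:5])  # the first few amplitude values
```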

The job of the DAC is to take a digitally stored recording and turn it back into an analog signal. To do that, it needs to translate the bits of data from digital files into an analog electrical signal at tens of thousands of set times per second; each of those data points is a sample. The unit then outputs a wave that passes through all those points. Now, because DACs aren’t perfect, sometimes this leads to problems. These problems are jitter, narrow dynamic range, and limited bitrate.

Before launching into the nuts and bolts of how everything works, you need to know three terms: bitrate, bit depth, and sample rate. Bitrate simply refers to how much data is expressed per second. Sample rate refers to how many samples of data are taken in a second, and bit depth refers to how much data is recorded per sample.
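
Those three terms tie together with simple arithmetic: for uncompressed audio, bitrate is just sample rate times bit depth times the number of channels. A quick back-of-the-envelope sketch (the CD-standard numbers here are facts; the little script is just mine):

```python
# For uncompressed (PCM) audio:
#   bitrate = sample rate x bit depth x channels
sample_rate = 44100  # samples per second
bit_depth = 16       # bits recorded per sample
channels = 2         # stereo

bitrate = sample_rate * bit_depth * channels
print(f"{bitrate} bits per second, or about {bitrate / 1000:.0f} kbps")
# -> 1411200 bits per second, or about 1411 kbps
# That's where the 1400+kbps figure for lossless files comes from;
# a 320kbps MP3 gets there by throwing away data you're unlikely to hear.
```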


What is jitter?

I’m going to preface this section just like I addressed it in the audio cable myths article: Jitter is mostly a theoretical problem at this point, and extremely unlikely to rear its head in any equipment made in the last ten years. However, it’s still useful to know what it is and when it might be an issue, so let’s dive in.

A diagram showing jitter.

A demonstration of jitter: waveforms a and b are identical, but the low sample rate of b has fooled the DAC into thinking the frequency is halved.

So remember how I said that sample rate can lead to some problems? Jitter is one that gets a lot of attention, but not much understanding. Basically, sometimes a sound that’s really, really high in frequency (like a cymbal shimmer, harmonic, or other high note) will have a strange warbling or oscillating sound that wasn’t in the original recording. What’s happening is that the DAC is accidentally creating a lower-frequency note, either because the signal is just close enough in frequency to the sample rate, or because the samples are taken at inaccurate times by an older, crappier clock mechanism.

How do you avoid this problem? Increase the sample rate, of course! The more data points you have, the less likely an error will happen in a given set of frequencies. However, there is a point where this simply does not audibly help anymore. Essentially, you can eliminate this problem if you’re able to sample at least twice per period (a threshold known as the Nyquist rate), thereby forcing sampling errors to exist only in the highest frequencies that you’d likely be unable to hear anyway. Considering that the uppermost limits of human hearing range from 12-22kHz (as in, 12,000 to 22,000 periods per second), doubling that range nets you somewhere between 24,000 and 44,000 samples per second. That last number sound familiar? It should: 44.1kHz is the most common sample rate for MP3 files!
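
If you want to see that twice-per-period rule bite, here’s a rough Python/NumPy sketch (the tone and rates are just numbers I picked for illustration): an 18kHz tone sampled at 44.1kHz comes through intact, but sampled at 20kHz (less than twice its frequency) it folds down into a phantom 2kHz note, exactly the kind of artifact described above.

```python
import numpy as np

def dominant_frequency(tone_hz, sample_rate, duration=1.0):
    """Sample a pure tone, then find the strongest frequency via FFT."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    samples = np.sin(2 * np.pi * tone_hz * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1 / sample_rate)
    return freqs[np.argmax(spectrum)]

# An 18kHz tone sampled well above twice its frequency: faithful.
print(dominant_frequency(18000, 44100))  # ~18000 Hz

# The same tone sampled at only 20kHz (less than 2 x 18kHz):
print(dominant_frequency(18000, 20000))  # ~2000 Hz -- a phantom low note
```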

What are bit depth and dynamic range?

If you’ve listened to really old MP3 files or crappy MIDI music from your old consoles, you’ve probably noticed that they can’t really ramp up volume in a given music track all that well, or that competing instruments are really, really difficult to pick out if they’re all going at once. This is what bad dynamic range sounds like. Dynamic range in this instance simply refers to the difference between all possible volumes of sounds in a given file.

What governs the theoretical limits of the dynamic range of an audio file is the bit depth. Basically, every single sample (discussed above) contains information, and the more information each sample holds, the more potential output values it has. In layman’s terms, the greater the bit depth, the wider the range of possible loudness values there are. A low bit depth, whether at the recording stage or in the file itself, will necessarily result in low dynamic range, making many sounds incorrectly emphasized (or muted altogether). Because there are only so many possible loudness values that a sound could have inside a digital file, the lower the bit depth, the crappier the file will sound however you listen to it. So the greater the bit depth, the better, right?
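
To make that concrete, here’s a rough sketch (Python again, with a toy quantize helper of my own invention) of what happens when you squash a smooth sine wave down to a handful of loudness levels: the leftover rounding error is exactly the noise and distortion that a low bit depth bakes into a file.

```python
import numpy as np

def quantize(signal, bits):
    """Round each sample to the nearest of 2**bits possible levels."""
    steps = 2 ** (bits - 1) - 1       # integer steps on each side of zero
    return np.round(signal * steps) / steps

t = np.arange(44100) / 44100
sine = np.sin(2 * np.pi * 440 * t)    # a clean 440Hz tone

for bits in (4, 8, 16):
    error = sine - quantize(sine, bits)
    print(f"{bits:2d}-bit: worst-case rounding error {np.max(np.abs(error)):.6f}")
# Every bit you add roughly halves the error -- which is why each
# bit of depth is worth about 6dB of dynamic range.
```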

A photo of the Sennheiser HD 800 with a Headroom DAC, Headroom amplifier, and Headroom power supply.

Adapted from: Flickr user chunso. That’s certainly an impressive amount of equipment, but quite overkill.

Well, this is where we run into the limits of human perception once again. The most common bit depth is 16, meaning: for every sample, there’s a possible 16 bits of information, or 65,536 integer values. In terms of audio, that’s a dynamic range of 96.33dB. In theory, that means that no sound under 96ish dB should be deleted or incorrectly assigned a loudness value.
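
That 96.33dB figure isn’t pulled out of thin air; it falls straight out of the bit depth, since every added bit doubles the number of possible values, and every doubling is worth about 6dB. A quick check (in Python, my choice of calculator):

```python
import math

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range of linear PCM: 20 * log10(2 ** bits)."""
    return 20 * math.log10(2 ** bit_depth)

print(f"16-bit: {dynamic_range_db(16):.2f} dB")  # 96.33 dB
print(f"24-bit: {dynamic_range_db(24):.2f} dB")  # 144.49 dB -- see below
```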

While that may not sound terribly impressive, you really need to think hard about how you listen to music. If you’re like me, that comes from headphones 99+% of the time, and you’re going to be listening to your music at a volume much lower than that. For example, I try to limit my sessions to about 75dB so I don’t cook my ears prematurely. At that level, added dynamic range isn’t going to be perceptible, and anyone telling you otherwise is simply wrong. Additionally, your hearing isn’t equally sensitive across all frequencies either, so your ears are the bottleneck here.


So why do so many people swear by 24-bit audio when 16-bit is just fine? Because that’s the bit depth where there theoretically shouldn’t be any problems ever for human ears. If you like to listen to recordings that are super quiet (think: orchestral music)—and you need to really crank the volume in order for everything to be heard—you need a lot more dynamic range than you would with an over-produced, too-loud pop song. While you’d never crank your amp to 144dB, 24-bit encoding would allow you to approach that.

Additionally, if you record music, it’s always better to record at a high bit depth and sample rate, then downsample, instead of the other way around. That way, you avoid having a high-bitrate file with low-bit-depth dynamic range, or worse: added noise. While I’m a super big crank when it comes to silly-ass excesses in audio tech, this is one point I’m forced to concede. However, the necessity of 24-bit files for casual listeners is dramatically overstated.

What’s a good bitrate?

While bit depth is important, what most people are familiar with in terms of bad-sounding audio is limited bitrate. Ever listen to music on YouTube, then immediately notice the difference when switching to an iTunes track or high-quality streaming service? You’re hearing a difference in bitrate.

If you’ve made it this far, you’re probably aware that the greater the bit depth is, the more information the DAC has to convert and output at once. This is why bitrate—the speed at which your music data is decoded—is important. If the bitrate is low, not enough data will be converted to create the analog wave, meaning less information is converted, meaning you hear crappier audio. It’s really as simple as that.


So how much is enough? I usually tell people that 320kbps is perfectly fine for most applications (assuming you’re listening to 16-bit files). Hell, it’s what Amazon uses for its store, and truth be told most people can’t tell the difference. Some of you out there like FLAC files—and that’s fine for archival purposes—but for mobile listening? Just use a 320kbps MP3 or Ogg Vorbis file. The amount of space “lossless” files like FLAC take up is enormous, for little to no perceptible benefit when you’re on the go.

If you’ve got space to spare, maybe you don’t care as much how big your files are—but smartphones generally don’t come with 128GB standard… yet. But if you can’t tell the difference between a 320kbps MP3 and a 1400+kbps FLAC, why would you burn 45MB of space when you could get away with 15?
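
Those numbers are just bitrate multiplied out over the length of a song. Here’s a rough sketch that ignores container overhead and FLAC’s variable compression, so it lands in the same ballpark as the figures above rather than exactly on them:

```python
def file_size_mb(bitrate_kbps, duration_seconds):
    """Rough file size: bits per second x seconds, converted to megabytes."""
    return bitrate_kbps * 1000 * duration_seconds / 8 / 1_000_000

song_length = 4.5 * 60  # a typical-ish track length, in seconds

print(f"320kbps MP3:   {file_size_mb(320, song_length):.1f} MB")   # ~10.8 MB
print(f"1411kbps FLAC: {file_size_mb(1411, song_length):.1f} MB")  # ~47.6 MB
```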

When do I need a DAC?

The reason you’d get a DAC today is that your source—be it your computer, smartphone, or home system—is introducing noise or is incapable of outputting sound at the bitrate of your files. That’s it. I know that’s a really anticlimactic tl;dr, but that’s really the long and short of it. The only other time you could possibly want something super high-end is if you’re recording audio for professional applications, but even then, relatively inexpensive equipment can handle the processing.

Because DACs are a largely spec-driven item, you can almost always pick out the one you need simply by looking at the packaging. FiiO makes plenty of good products for cheap, and if you want an amplifier to go along with the DAC so you never have to worry about either, their E10K is a solid pickup for under $100. You could also decide to throw money at the problem by picking up an ODAC or O2 amp + ODAC combo, but that may be overkill. But seriously, don’t sink too much money into this. It’s just not worth it.

