Hacker News

Even if the lower sample rate of 48 kHz is entirely reasonable and 96 kHz is overkill, 24 bits still makes an audible difference for the material I listen to (modernist classical music and ECM jazz), which is why some labels offer 24/48. For pop music, which of course is distinguished by little dynamic range, 16 bits would be fine, just as on CD.



The only issue with low bit depths is the noise floor, so if you're hearing other distortions, they're not caused by the bit depth but perhaps by your room treatment or headphone drivers. A recording would have to have a hilariously large amount of headroom for the 16-bit noise floor to be noticeable when the music is played at a desired level. And while symphonic and jazz recordings have very high dynamic range, it's not 100 dB of headroom, maybe 60, so 16-bit should be fine.

If you still think it's a problem, applying good dithering via ffmpeg's resampler flags will lower the perceived noise floor by another 6-10 dB.
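For the curious, here's a minimal sketch of what TPDF (triangular) dither does before truncation to 16 bits. This is plain illustrative Python, not ffmpeg's actual implementation, and `quantize_16bit` is a name invented for the example:

```python
import random

def quantize_16bit(samples, dither=True):
    """Quantize float samples in [-1.0, 1.0] to 16-bit integers.

    TPDF dither (sum of two uniform noises, 2 LSB peak-to-peak) is
    added before rounding, which decorrelates the quantization error
    from the signal: instead of level-dependent distortion you get a
    constant, benign noise floor.
    """
    out = []
    for x in samples:
        scaled = x * 32767.0
        if dither:
            # two uniforms summed -> triangular probability density
            scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
        out.append(max(-32768, min(32767, round(scaled))))
    return out
```

The dithered output lands within about one extra LSB of the undithered value, trading a sliver of broadband noise for freedom from truncation distortion.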


> 24 bits still makes an audible difference

The article (and numerous other sources I've seen over the years) disagrees with you, so I'm curious why you're so certain.


As the article notes:

"It's true that 16 bit linear PCM audio does not quite cover the entire theoretical dynamic range of the human ear in ideal conditions."

Now, 24-bit may be overkill, but it is the next step up from 16-bit among standard encoding formats, and as the article notes, there are no drawbacks to 24-bit encoding except greater use of disk space.


The article says:

"[...] does not quite cover the entire theoretical dynamic range of the human ear in ideal conditions."

Note the words "theoretical" and "ideal".

In your post it sounds like you're claiming that you can regularly hear a difference under normal listening conditions - which contradicts my reading of that sentence.

My gut feeling is that the difference you're hearing is placebo.

To put it another way: either the article is making an inaccurate statement, you're mistaken, or you've got golden ears and only ever listen to music in specially prepared environments.


The article makes numerous inaccurate statements, because it has an agenda: the author is heavily invested in lossy media encoding. To a degree it's relative: in the car with the windows open you'll not be hearing 16 bits of audio resolution.

Monty's gotta monty, and this argument has been going on since the very earliest days of digital: back then, people behaved exactly the same way over digital recordings that are now commonly accepted to be excruciatingly bad for a variety of reasons (generally bad process and wrong technical choices).

You can get a HELL of a lot out of 16/44.1 these days if you really work at it. I do that for a living and continue to push the boundaries of common practice: most recently, Alexey Lukin of iZotope and I hammered out a method of dithering the mantissa of 32-bit floating point (which equates to around 24-bit fixed for only the outer half of the sample range, and gets progressively higher precision as loudness diminishes). Monty is not useful in these discussions, nor is anyone who just dismisses the whole concept of digital audio quality.
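That property (precision growing as loudness diminishes) is inherent to floating point itself. As a rough sketch, not the method described above: Python's `math.ulp` shows how the spacing between adjacent representable values shrinks with magnitude. This uses 64-bit doubles, but 32-bit floats behave the same way with their 24-bit mantissa:

```python
import math

# "1 ulp" is the gap between a value and the next representable one.
# It halves each time the magnitude drops past a power of two, so
# quiet passages are encoded with proportionally finer steps.
for level in (1.0, 0.5, 0.25, 0.125):
    print(f"level {level:7.3f}  ulp = {math.ulp(level):.3e}")
```

This is why fixed-point bit-depth comparisons only describe floating point accurately near full scale.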


I'm not dismissing anything. I'm arguing for the power of human self-deception. I feel the same way about connoisseurship in most other realms; food and wine being the obvious examples.

I believe it's a combination of imagined differences and barely perceptible differences elevated to implausible heights of significance.

Even if one can hear the difference between 16 and 24 bits, it will be almost imperceptible in most listening conditions; when it is perceptible it will be on the threshold, and certainly too subtle to affect the quality of the experience in any meaningful way.


To put things in perspective, 16-bit PCM audio has a noise floor around -96 dBFS, i.e. the difference between the loudest sound the format can contain and the noise floor is 96 dB. That's what the bit depth determines: the level of the noise floor relative to the loudest reproducible sound. It does not add any more detail. It's not like the resolution of an image file; added bit depth does not allow for finer-grained details, because audio doesn't work like that.
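That 96 dB figure falls out of the bit depth directly: each bit doubles the ratio between full scale and one quantization step, about 6.02 dB per bit. A quick check:

```python
import math

def dynamic_range_db(bits):
    """Ratio between full scale and one quantization step, in dB."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
# 16-bit: 96.3 dB
# 24-bit: 144.5 dB
```

(The function name is my own; the formula is the standard one for unweighted quantization dynamic range, before dither is accounted for.)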

96 dB is a lot more than you probably think: it's like the difference between an anechoic chamber (nominally ~0 dB) and someone jackhammering concrete right next to you (~90-100 dB). Add to this that even a quiet room has a noise floor around 20-30 dB, so to even hear the noise floor in CD-quality audio, a full-scale peak would have to hit roughly 120-130 dB!

Try generating a sound at 0 dBFS, then attenuate it in steps of 10 dB and note when you can't really hear it anymore. At -50 dB the sound is already extremely quiet and barely audible, and there would still be 46 dB of attenuation left before the 16-bit noise floor.
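To see how small those steps are in linear terms, the dB-to-amplitude conversion is just 10^(dB/20). A small sketch (the function name is my own):

```python
def db_to_amplitude(db):
    """Convert a level in dB relative to full scale to a linear gain."""
    return 10 ** (db / 20)

# Stepping down from 0 dBFS in 10 dB increments, as in the
# listening experiment described above.
for db in range(0, -100, -10):
    print(f"{db:4d} dBFS -> amplitude {db_to_amplitude(db):.6f}")
```

At -50 dB the amplitude is already about 0.003 of full scale, which is why the sound is barely audible long before the format runs out of range.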

In addition to this, noise-shaped dither can push the noise floor towards frequencies where the human ear is less sensitive, giving a perceived noise floor around -120 dBFS. In other words, 24-bit audio for distribution and listening is pointless and has no audible difference compared to 16-bit audio.



