
Dithering is a horrible thing to be doing, and 44.1 kHz is an awkward rate. So while I agree that 192 kHz is dumb, 24/48 is a better standard than CD.



No, dithering (done properly) is usually what you should be doing when you quantize.

See Vanderkooy and Lipshitz 1987 for why.


Paper seems to be paywalled. I can't imagine any possible purpose for dithering before encoding that wouldn't be better served by dithering on playback.


At 16 bits dithering is probably pointless for listening purposes.

What dithering does is decorrelate the quantization noise from the signal. Absent it, quantization generates harmonic spurs. In theory, on a very clean and noiseless signal these harmonic spurs might be more audible than you'd expect from the overall quantization noise level.

In practice, 16 bits is enough precision that these harmonics are inaudible even in fairly pathological cases. But dithering eliminates the potential problem by replacing the harmonic content with white noise.

Adding noise on playback just adds noise; it would not remove the harmonic distortion.
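
Here's a rough sketch of the effect in NumPy (the tone level, sample rate, and FFT size are arbitrary picks for illustration, not anything canonical):

    import numpy as np

    fs = 48000                     # sample rate, Hz
    n = 1 << 16                    # FFT length
    t = np.arange(n) / fs
    x = 1e-3 * np.sin(2 * np.pi * 997 * t)   # tone ~60 dB below full scale

    q = 2.0 / (1 << 16)            # 16-bit step, full scale = +/-1.0
    rng = np.random.default_rng(0)

    def quantize(sig, dither):
        if dither:
            # TPDF dither: sum of two uniforms, triangular over +/-1 LSB
            sig = sig + rng.uniform(-q/2, q/2, n) + rng.uniform(-q/2, q/2, n)
        return np.round(sig / q) * q

    for dither in (False, True):
        spec = np.abs(np.fft.rfft(quantize(x, dither) * np.hanning(n)))
        k = int(np.argmax(spec))       # fundamental bin
        peak = spec[k]
        spec[max(k - 8, 0):k + 9] = 0  # mask the fundamental's skirt
        print(f"dither={dither}: worst remaining bin is "
              f"{20 * np.log10(spec.max() / peak):.1f} dB below the tone")

Without dither the worst bins are discrete harmonic spurs; with TPDF dither they flatten into a much lower, signal-independent noise floor (exact numbers depend on the tone and the seed).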

The _best_ kind of dithering scheme is a subtractive dither, where noise is added before quantization and then the _same_ noise is subtracted from the dequantized signal on playback. This is best in the sense that it's the scheme that completely eliminates the distortion with the least amount of additional noise power. But it's never used for audio applications due to the additional complexity of managing the synchronized noise on each side.


> But it's not ever used for audio applications due to the additional complexity of managing the synchronized noise on each side.

Mersenne twister with a shared seed in metadata?
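
Something along those lines could work in principle. A toy sketch, assuming the seed travels in metadata (NumPy's legacy RandomState happens to be a Mersenne Twister; the seed value here is made up):

    import numpy as np

    SEED = 12345                 # hypothetical value carried in metadata
    q = 2.0 / (1 << 16)          # 16-bit quantization step

    def encode(x):
        d = np.random.RandomState(SEED).uniform(-q/2, q/2, len(x))
        return np.round((x + d) / q).astype(np.int32)

    def decode(codes):
        d = np.random.RandomState(SEED).uniform(-q/2, q/2, len(codes))
        return codes * q - d     # subtract the identical noise sequence

    x = 1e-3 * np.sin(2 * np.pi * 997 * np.arange(48000) / 48000)
    err = decode(encode(x)) - x
    print(f"peak error: {np.abs(err).max() / q:.2f} LSB")  # <= 0.5 LSB

The residual error is uniform, signal-independent, and never exceeds half an LSB, which is the "least additional noise" property described above.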


The article is saying you can use dithering to represent sounds quieter than your 16-bit "0000000000000001". That's what I'm objecting to.


Consider the case of a 1-bit image. Let's say the "signal" is a smooth gradient of black to white from one side of the image to the other. If you simply quantize to the nearest value, one half of the image ends up solid black and the other half, solid white. No amount of after-the-fact "dithering" of this image will recover the original gradient - it is lost forever.

Now suppose we add noise to our signal before we quantize. A given pixel at 25% gray (which under the previous scheme would always end up solid black) now has a 25% chance of ending up white. A contiguous block of such pixels will have an average value of 25% gray, even though an individual pixel can only be black or white. Thus, by flip-flopping between the two closest values ("dithering") in statistical proportion to the original signal, information is preserved.

https://en.wikipedia.org/wiki/File:1_bit.png
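
A quick sketch of the same experiment in NumPy (image size and column choice are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    h, w = 64, 256
    ramp = np.tile(np.linspace(0, 1, w), (h, 1))  # black..white gradient

    hard = (ramp >= 0.5).astype(float)            # plain threshold
    dith = (ramp + rng.uniform(-0.5, 0.5, ramp.shape) >= 0.5).astype(float)

    col = w // 4                                  # a ~25%-gray column
    print(f"hard threshold: {hard[:, col].mean():.2f}, "
          f"dithered: {dith[:, col].mean():.2f}")

The hard-thresholded column is all black (0.00), while the dithered column averages out near 0.25, recovering the gradient's level from 1-bit pixels.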


Sure, I know how it works. But it sacrifices resolution (spatial in this example, temporal in the case of audio) and compresses poorly. Rather than dithering, you should use a higher bit depth so that you can represent your original gradient (with the desired smoothness) directly.


You dither before quantization in order to decorrelate the quantization error from the signal. If you don't do this, you risk artifacts in any digital filtering (or for that matter, playback) done afterwards. This includes any requantization.

In audio, if I recall correctly, it is also important for avoiding obvious noise modulation.
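
Noise modulation is easy to show with the same kind of sketch (NumPy again, arbitrary parameters): quantize a fading tone and measure the error power in short blocks.

    import numpy as np

    rng = np.random.default_rng(1)
    q = 2.0 / (1 << 16)           # 16-bit step
    n = 48000
    t = np.arange(n) / 48000.0
    x = 1e-3 * np.linspace(1, 0, n) * np.sin(2 * np.pi * 997 * t)  # fade-out

    def quantize(sig, dither):
        if dither:  # TPDF dither, triangular over +/-1 LSB
            sig = sig + rng.uniform(-q/2, q/2, n) + rng.uniform(-q/2, q/2, n)
        return np.round(sig / q) * q

    for dither in (False, True):
        err = (quantize(x, dither) - x).reshape(100, -1)  # 10 ms blocks
        rms = np.sqrt((err ** 2).mean(axis=1))
        print(f"dither={dither}: block error RMS varies "
              f"{rms.max() / rms.min():.1f}x across the fade")

Undithered, the error power rides up and down with the signal envelope (it pumps audibly on fades); with TPDF dither it stays essentially flat.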



