Let's not resort to comparing credentials here. For the record, I've designed and built audio hardware, and I'm the author of a sample-rate conversion library that does SIMD band-limited conversion, along with its accompanying test suite. I'm not just some dude who read a blog post about audio.
Let's talk about bandwidth. If you have a pure sine wave and want to measure its amplitude, you can take a DFT of your signal and read the appropriate bin. Let's assume the sine wave lands exactly in one bin. If your data is 16-bit with dithering, quantization and dithering will add noise to all of the bins, but that noise power is spread roughly evenly across them. As you increase the length of the sample you're analyzing, the bandwidth of each bin decreases, and the amount of noise in each bin decreases with it: each doubling of the analysis length halves the per-bin noise power, about 3 dB. The signal, however, stays concentrated in that one bin.
So, as you decrease the bandwidth, the quantization noise within that bandwidth decreases as well. This is equivalent to saying that you have increased resolution.
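To make this concrete, here's a rough numpy sketch; the sample rate, tone frequency, amplitude, and DFT lengths are arbitrary choices of mine. It quantizes a TPDF-dithered sine to 16 bits and compares the signal bin against the average noise bin for increasing DFT lengths:

```python
import numpy as np

def measure(n, fs=48000, f=1000.0, bits=16):
    """Quantize a dithered sine to `bits` bits, then compare the
    signal bin to the average noise bin in an n-point DFT."""
    # Pick the DFT bin closest to f so the sine lands exactly on a bin.
    k = round(f * n / fs)
    x = 0.5 * np.sin(2 * np.pi * k * np.arange(n) / n)

    # TPDF dither spanning +/- 1 LSB, then round to the quantizer grid.
    lsb = 2.0 ** -(bits - 1)
    dither = (np.random.rand(n) - np.random.rand(n)) * lsb
    q = np.round((x + dither) / lsb) * lsb

    spec = np.abs(np.fft.rfft(q)) / n
    signal_db = 20 * np.log10(spec[k])
    noise = np.delete(spec, [0, k])          # drop DC and the signal bin
    noise_db = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
    return signal_db, noise_db

for n in (1024, 4096, 16384, 65536):
    s, nz = measure(n)
    print(f"N={n:6d}  signal {s:6.1f} dB   avg noise bin {nz:7.1f} dB")
```

The signal bin stays put while the per-bin noise floor keeps dropping about 3 dB per doubling of N, far below the roughly -98 dB full-scale figure people quote for 16 bits.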
I know this is counterintuitive. However, it's the foundation of how most modern ADCs work. It's called delta-sigma modulation: a low-resolution quantizer runs at a heavily oversampled rate, and a feedback loop shapes the quantization noise out of the band of interest, yielding a high-resolution digital output. The same technique is used in DACs. For an extreme example, look at DSD, which delivers high-resolution audio from a 1-bit stream running at 2.8224 MHz, 64 times the CD sample rate.
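As a toy illustration of the principle, nothing like a production modulator, here's a first-order delta-sigma loop with a 1-bit quantizer; the ramp input, oversampling length, and filter length are arbitrary picks of mine:

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order delta-sigma modulator: an integrator wrapped around
    a 1-bit quantizer, feeding back the previous output. Returns a
    +/-1 bitstream whose local average tracks the input."""
    acc, out = 0.0, np.empty_like(x)
    for i, v in enumerate(x):
        acc += v - (out[i - 1] if i else 0.0)
        out[i] = 1.0 if acc >= 0.0 else -1.0
    return out

# A slow ramp, heavily oversampled relative to how fast it changes.
n = 1 << 16
x = np.linspace(-0.8, 0.8, n)
bits = delta_sigma_1bit(x)

# Crude decimation filter: a 256-tap moving average.
rec = np.convolve(bits, np.ones(256) / 256, mode="same")

err = rec[1000:-1000] - x[1000:-1000]        # skip the filter edges
print("max error after filtering:", np.max(np.abs(err)))
```

The raw stream only ever takes the values -1 and +1, yet after the low-pass filter the reconstruction tracks the ramp to within a couple of hundredths: the loop pushes the quantization noise up in frequency, where the filter removes it. That noise shaping, plus heavy oversampling, is the whole trick.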
The argument that "if 16 bits is enough, why do we need dithering" is kind of pointless, because we don't use 16-bit audio without dithering. It's like asking, "if this amplifier is good enough, why does it use negative feedback?" The answer, of course, is that negative feedback increases linearity, flattens the amplifier's response, and makes it less sensitive to variations in manufacturing and temperature.
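And to show what dithering actually buys: a quick sketch, with 8 bits and an arbitrary tone of my choosing so the effect is easy to see. Undithered quantization of a pure tone concentrates the error into harmonic spurs that are correlated with the signal; TPDF dither trades them for a lower, signal-independent noise floor:

```python
import numpy as np

n, bits = 65536, 8                   # 8 bits makes the effect obvious
k = 997                              # tone lands exactly on bin k
x = 0.4 * np.sin(2 * np.pi * k * np.arange(n) / n)
lsb = 2.0 ** -(bits - 1)

plain = np.round(x / lsb) * lsb                        # no dither
tpdf = (np.random.rand(n) - np.random.rand(n)) * lsb   # +/- 1 LSB triangular
dithered = np.round((x + tpdf) / lsb) * lsb

def worst_spur_db(y):
    """Largest spectral component other than DC and the tone itself."""
    spec = np.abs(np.fft.rfft(y)) / n
    spec[0] = spec[k] = 0.0
    return 20 * np.log10(spec.max())

print("worst spur, undithered:", worst_spur_db(plain))
print("worst spur, dithered:  ", worst_spur_db(dithered))
```

Dither raises the total noise power slightly, but it decorrelates the error from the signal, so the distortion spurs vanish into a flat floor. That's the trade, and it's why 16-bit audio is always dithered in practice.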