
The 16-bit v/s 8-bit Blind Listening Test, Part 2 - nkurz
http://www.audiocheck.net/blindtests_16vs8bit_NeilYoung.php
======
dietrichepp
Oh boy, this test is going to ruffle some feathers. However, the choice of
audio clip ensures that you can't tell the difference easily.

It's known that the only difference between well-prepared files of different
bit depths is the noise floor. Basically, in any PCM audio file, you can
expect noise of about 1 ULP. So in a 16-bit file you get a noise floor of -90
dB, and in an 8-bit file it's at -42 dB. This is the ONLY DIFFERENCE.
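
Those figures come from comparing 1 ULP against a full-scale (-1..+1) signal; a quick sanity check in Python, assuming that convention:

```python
from math import log10

# Noise floor of ~1 ULP of noise, relative to a signal spanning -1..+1:
# at n bits the step size is 2 / 2**n == 2**-(n - 1).
def noise_floor_db(bits):
    ulp = 2.0 ** -(bits - 1)
    return 20 * log10(ulp)

print(round(noise_floor_db(16)))  # -90
print(round(noise_floor_db(8)))   # -42
```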

People who are here listening for distortion artifacts aren't going to find
any, and they're going to be surprised. That's because you can always choose
to introduce noise instead of distortion artifacts when you do the conversion.
The noise is actually added on purpose: it's "dithering noise", and it's not
there to mask the distortion, but to replace it entirely.

The math works like this. Take a high-resolution input, say, 16-bit. You're
converting it to 8-bit. If you just convert, you'll introduce a distortion
signal of about 1 ULP. However, if you introduce 1 ULP of uncorrelated
rectangular noise, the distortion signal is now also uncorrelated noise, which
is less objectionable than correlated distortion. It's also harder to hear.
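
A toy NumPy sketch of that argument (not the processing the test site used; the tone, level, and dither choice here are mine): quantize a sine to 8 bits with and without 1 ULP of rectangular dither, and look at where the error energy lands.

```python
import numpy as np

rng = np.random.default_rng(0)

# One second of a 440 Hz tone at 48 kHz, at half of full scale.
fs, f0 = 48000, 440
x = 0.5 * np.sin(2 * np.pi * f0 * np.arange(fs) / fs)

def quantize(x, bits, dither=False):
    """Round to a `bits`-bit grid, optionally adding 1 ULP of
    rectangular (uniform) dither before rounding."""
    scale = 2.0 ** (bits - 1)  # 1 ULP == 1/scale on the +/-1 scale
    y = x * scale
    if dither:
        y = y + rng.uniform(-0.5, 0.5, y.shape)
    return np.round(y) / scale

err_plain = quantize(x, 8) - x
err_dith = quantize(x, 8, dither=True) - x

# The undithered error is a deterministic function of the periodic input,
# so its energy sits on harmonics of the tone's 40 Hz repetition rate
# (440 Hz repeats exactly every 1200 samples at 48 kHz). With 1 s of
# audio, bin k of the FFT is k Hz, so those harmonics are bins 0, 40, ...
def harmonic_energy_fraction(e):
    spec = np.abs(np.fft.rfft(e)) ** 2
    return spec[::40].sum() / spec.sum()

frac_plain = harmonic_energy_fraction(err_plain)
frac_dith = harmonic_energy_fraction(err_dith)
print(f"plain: {frac_plain:.3f}  dithered: {frac_dith:.3f}")
```

Without dither, nearly all of the error energy lands on correlated tones; with dither it spreads flat across the band, which is exactly the hiss people in this thread used to pick out the 8-bit clips.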

Then, how do you tell the difference between 16-bit and 8-bit audio files? You
can only do it if there's a quiet part somewhere in there. As soon as there's
a quiet part, you can hear a "hissing" sound in the 8-bit file at -42 dB. If
you're listening to the middle of a rock song like this, the hissing sound is
going to be buried in the mix. Heck, a typical guitar amp will already be
putting out uncorrelated noise.

~~~
massaman_yams
10/10, but I had to listen closely. Sennheiser 650s & external DAC/amp, quiet
room, hearing is still good to ~16 kHz.

The primary place I picked out to hear this was not the quiet parts, but
rather a hiss above 10 kHz, which was audible even at the beginning - and particularly
at the beginning. It's almost mistakable for brighter cymbals, if you're not
listening closely. It also changes in perceptibility over the duration of the
sample.

~~~
CarVac
I tested myself using that same trace: I got 10/10 with the sound of the
cymbals, then confirmed with the fade-out at the end.

This is with HD-598's with 16 feet of cheapo headphone extension cords, loud
old refrigerator running in tiled studio apartment, and motherboard built-in
sound output.

------
aurelian15
I scored 10/10 in the test, but only because I had experimented with this
myself after reading Christopher Montgomery's article in 2012. As explained
there, with active dithering, bit depth and signal-to-noise ratio are
interchangeable: a signal with a small bit depth has a small signal-to-noise
ratio. So one can clearly hear a subtle noise in the 8-bit samples, especially
in the fade-out at the end.

In my experiments I found that for almost anything that is not orchestral
music and that is newer than 1990, an 8-bit sample depth is enough. That was
really depressing.

By the way, a nice prank is to tell some random bystander that you're going to
present them "8-bit" music and then play the 8-bit version of a song that fell
victim to the loudness war. Apparently, the 8-bit times were better than most
people think. ;-)

------
scelerat
I tested myself using my MacBook Pro's speakers. 7/10 correct.

I'm a musician and have a large CD, vinyl record, and digital audio
collection. Honestly the knots some people twist themselves into over formats
amuse me. I tend to focus on the music and the rhythm rather than the
recording quality, at least above a minimum quality level. I prefer vinyl just
because I do. :-)

[edit] the test page suggested my results were close to random, and I agree. I
couldn't really tell a difference through the laptop speakers. It felt like I
was guessing for each sample.

~~~
mod
Wait a couple of hours, to forget your responses, and try again.

~~~
scelerat
I honestly couldn't tell. I'm guessing if I tried again I'd get anything
between 4/10 and 7/10.

~~~
ericfrederich
Read the "comprehensive article" linked to from the main story. It is worth
the read.

------
hatsunearu
Regardless of whether this test is bunk, this person's website has a lot of
cool audio samples:

Subwoofer Harmonic Distortion test:
[http://www.audiocheck.net/testtones_subwooferharmonicdistort...](http://www.audiocheck.net/testtones_subwooferharmonicdistortion.php)

Polarity test:
[http://www.audiocheck.net/audiotests_polaritycheck.php](http://www.audiocheck.net/audiotests_polaritycheck.php)

Practical effects of bit depth:
[http://www.audiocheck.net/audiotests_dithering.php](http://www.audiocheck.net/audiotests_dithering.php)

Headphone Ultimate Test (the binaural audio one is pretty awesome):
[http://www.audiocheck.net/soundtests_headphones.php](http://www.audiocheck.net/soundtests_headphones.php)

------
ubercore
10/10 using the fade-out at the end. The choice of clip is a poor example of
the benefits of higher sample depth, because the dynamics are relatively
constant and there are a lot of noisy overtones.

Having $700 headphones helps, too.

~~~
morsch
Using the fade-out makes it trivial to detect. Listening closely with
headphones it's fairly easy (i.e. getting 9+ right) to hear it in the first
couple of seconds and throughout the song as well. I guess there's a reason we use 16
bits. Now I want to try it with 24 bits, maybe with a song that has more quiet
parts in it.

Edit: The other test using Gangnam Style is _much_ harder to detect (for me),
I scored 6/10 on my first attempt (and I won't try again, hearing that riff 10
times in a row is plenty):
[http://www.audiocheck.net/blindtests_16vs8bit.php](http://www.audiocheck.net/blindtests_16vs8bit.php)

~~~
ubercore
Yeah. The whole thing about these tests is that it's like testing whether you
can tell a solid black box apart in an 8-bit versus a 16-bit PNG. Yeah, there
are cases where you'll see the difference and ones where you won't. Why pick
such limited examples, when it effectively undercuts _any_ value the test
might have in terms of people's perception of bit depth?

------
SwellJoe
This is terrible source material for this kind of test. The biggest audible
difference between 8 bit and 16 bit, assuming the same sample rate, is the
noise floor. But, this source material is LOUD throughout, which masks the
noise floor. It is in dynamic material that bit depth makes an audible
difference. The prior listening test used Gangnam Style, which is also loud
and lacking in dynamics that would reveal audible distortion due to low bit
depth.

The thing is that I agree with Monty on the (lack of) need for 24/192 audio
for consumers. My degree is in audio, and I (mostly) understand the physics of
the thing (and have done tons of blind listening experiments in controlled
environments using professional equipment, as well as using high quality
measurement tools for spectrum analysis, etc.). But, this particular test
isn't really a useful proof that we don't need 24 bit and 192k. It only proves
that loud material can mask noise, even at 8 bits.

Using a more dynamic Neil Young song would have been a better choice. Maybe
_Old Man_ , or similar. Acoustic guitar is good for allowing one to discern
audio equipment flaws. It is commonly the source material people use when
trying to rate good pieces of equipment that have very small differences
across a very broad price spectrum (microphones, preamps, etc.).

Nonetheless, I got 8/10 right using crappy ear buds in a noisy place, which
isn't much better than random chance. I'm pretty confident I'd be able to
recognize 8 bit vs 16 bit on any reasonably dynamic source played on good
equipment in a good listening environment. But, I also _know_ I can't
recognize 16 bit vs 24 bit (I've tested), and I know Neil Young can't either.
Neil Young was around for the era when ADC/DAC quality was very poor; it
actually was true that many early digital recordings were inferior to the very
high end analog recordings of the time. And, it's even true that when
recording, mixing, and mastering, it is useful to work at higher bit depths
(because there's a lot of summing, raising the noise floor each time; though
that may just mean you need to process at 24 or 32 bits, rather than actually
record at higher depths).

It seems obvious, to me, that the reason most music sounds like shit is
because of the loudness wars (dynamics reduction), lossy compression (cramming
a lot of music down tiny network pipes), and the low quality of most people's
listening equipment and environment. Those factors utterly dwarf the noise
floor of 16 bit audio, and completely nullify any frequency advantages of
higher bit rates (even if we could hear them).

In short: Pono is elitist bullshit, as most audiophile stuff is.

~~~
semi-extrinsic
> Acoustic guitar is good for allowing one to discern audio equipment flaws.
> It is commonly the source material people use when trying to rate good
> pieces of equipment.

My go-to track for testing, recommended if rock is your preference, is Muse's
"Undisclosed Desires". The beginning of the second verse has a "hidden" bit
you will only hear on good headphones. First time I heard that song on some
Grado SR80s I literally jumped from being spooked.

~~~
veli_joza
Are you talking about the whispered part underneath the sung lyrics? I can
hear it on my cheapo headphones, but I never paid attention to it before.

~~~
semi-extrinsic
Huh, maybe you can hear it on cheap headphones if you know there is something
there to listen for.

------
4ad
I did 10/10, but that's because I knew exactly what to look for, and how to
listen to it. I didn't use my expensive open headphones, but some cheaper
closed headphones that passively block external noise, and I used a far louder
volume than usual.

Of course the source is specially selected to make the test hard. With music
with real dynamics, that hasn't been compressed to death, the test becomes
trivial. 8-bit gives you 48 dB of SNR, which is lower than vinyl or tape.

~~~
emn13
However, lots of music is meant to be played on the radio and not to be too
dynamic, since the dynamics would just get lost. The clip is likely less
compressed than a considerable portion of radio play, which in turn is likely
how most recorded music is listened to, even today.

------
jdalgetty
Could we have had 8-bit audio of that quality back in the Soundblaster days
assuming the computer behind it was capable of processing it?

~~~
boomlinde
The original Soundblaster was a pretty noisy thing and was limited to 22.5
kHz, AFAIK. You could use noise dithering to improve the perceived quality,
but it wouldn't be as effective at that sample rate.

~~~
raverbashing
Yeah, but later models (the Sound Blaster Pro) added 44.1 kHz (though only in
mono; stereo would be 22.5 kHz), then it went 16-bit and that was it.

Then of course it became obsolete for 99.9% of use cases.

------
ericfrederich
Why would you need 4K video when your only display is 1080p? The same goes for
audio. Your ears are only capable of hearing a subset of what is reproducible
with 16-bit audio, so there's no need for 24-bit.

It is all well explained in this article:
[https://xiph.org/~xiphmont/demo/neil-young.html](https://xiph.org/~xiphmont/demo/neil-young.html)

The only time you'd ever need more than 16 bits is if you're going to do more
than just listen to the music. If you're mixing/mastering that is another
story.

------
raverbashing
Interesting concept (scored 4/10 on the first test, there's also a similar
test with a different song)

But to be honest I don't think the snippets are good candidates for the
comparison

~~~
Nadya
_> Don't get me wrong, differences between 16-bit and 8-bit are clearly
audible, and are demonstrated on my Dynamic Range, Dithering and Noise Shaping
page. However, because contemporary popular music has such a limited dynamic
range, these differences become subtle, if not inaudible, when tested on it._

From the summary above the blind test. They're aware of this; the test was
mostly done for irony (as also explained in the blurb).

~~~
aidenn0
Unfortunately for them (as others noted), the clip fades out at the end, which
makes the noise audible on even a decent set of headphones.

------
wronskian
10/10, but perhaps my sensing method would be considered cheating: the
underlying noise throughout the whole track was noticeably more
dithery/glitchy in one set of clips than in the other, so I presumed that
identified the 8-bit clips. My speakers are tinny laptop speakers, which I
think probably overemphasised the effect.

------
Eiriksmal
I linked to this from my comment on the "24-bit is overkill" article; some
other people's results are mentioned there.

> [https://news.ycombinator.com/item?id=10522850](https://news.ycombinator.com/item?id=10522850)

------
jonny_eh
I guessed randomly and got "8/10, definitely not a random guess".

Someone needs to check their p-values.

~~~
machrider
I think it's sarcastic. Elsewhere he says unless you get 9 or more correct,
you probably can't tell.

------
thanatropism
I had 6/10 on Neil Young and 5/10 on Gangnam Style.

That said, he claims "less than 9 is not meaningful". But by my calculations
an 8/10 is significantly different from chance alone; to some extent, 7/10 is
as well.

Suppose random guesses are drawn from a binomial distribution -- each sample
is an urn containing black and white balls (both equally likely) and we count
how many white balls we get.

This is the CDF -- people who get 1 answer right score better than only ~1% of
random guessers, and so on:

    1: 0.0107422    2: 0.0546875    3: 0.171875
    4: 0.376953     5: 0.623047     6: 0.828125
    7: 0.945313     8: 0.989258     9: 0.999023

A "7" is almost at 95% significance.
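
For reference, the one-sided tail probabilities (the chance that a pure guesser scores k or better out of 10) take a couple of lines of stdlib Python to check:

```python
from math import comb

# P(score >= k) out of n fair coin-flip guesses.
def p_at_least(k, n=10):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

for k in (7, 8, 9):
    print(k, p_at_least(k))
# 7 0.171875
# 8 0.0546875
# 9 0.0107421875
```

On this reading, 9/10 is comfortably under the usual 5% bar while 8/10 sits just above it, so where the cutoff lands depends on how strict you want to be.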

------
ericfrederich
I love this link (also linked in the main article).
[https://xiph.org/~xiphmont/demo/neil-young.html](https://xiph.org/~xiphmont/demo/neil-young.html)

Whether a person prefers 8-, 16-, or 24-bit is beside the point and has to do
with their tastes. That article clearly debunks any argument that 24-bit is
superior at re-creating reality. In fact, he makes some points against it.

I love the analogy about a television that can display UV rays.

------
mansr
MP3 files, seriously? A lossy audio encoder will remove much of a static noise
floor (and sacrifice some of the signal in the process). That's the point.

------
donlzx
Speaking of listening tests, I would recommend the following website for those
interested:

[http://www.klippel.de/listeningtest/lt/](http://www.klippel.de/listeningtest/lt/)

A couple of years ago, we asked our colleagues to do the tests and the results
were quite interesting.

(The tests require Microsoft Silverlight, which might be a little hard to set
up these days.)

------
kazinator
I didn't bother with the test, because the 16-bit version of the audio clip
sounds bad. It has some of the qualities of a demo that had been recorded on a
cassette-based four-track.

16/44.1 can reproduce a shimmeringly deep, richly textured audio experience. A
valid 16-vs-8 test would take a sample _of that sort_: a great-sounding 16-bit
clip, converted down to 8.

This article is a veiled _ad hominem_: it says that Neil Young is wrong about
audio because he once recorded some music that sounds like shit. The
underlying fallacy is the idea that whenever an artist promotes higher sample
rates or wider sample sizes, we should criticize that claim based on some poor
example of their own music, rather than using science.

~~~
semi-extrinsic
> a shimmeringly deep, richly textured audio experience

... not sure if trolling, or "audiophile".

~~~
kazinator
My guitar rig includes a 31 band EQ. Does that answer your question?

------
Zitrax
In contrast to all the 10/10 posts I couldn't hear the slightest difference.

Even after reading all the hints in this thread about what I should listen for.

------
anon4
The hover text for the links gives it away unfortunately.

~~~
pieterr
You are supposed to do the test "blind". :-)

~~~
mattl
My screenreader read them to me?

------
nerd_stuff
This article is bordering on pseudoscience.

When you do experiments you _must_ understand your experimental design well
enough to know exactly what you're testing and what conclusions can be drawn
from it. If you do A/B testing of 8-bit vs. 16-bit (which is a good idea) you
have to understand you're doing the test _through your current audio
hardware_.

The whole point of the Pono player is to have higher quality hardware
_everywhere_, including pre-amps and DACs, so you have a chance to hear subtle
differences.

If you do this test through your laptop speakers you really might not be able
to tell the difference. If you'd like to "debunk" the Pono player then do this
test through a Pono player pushing music through quality studio headphones.

Making things worse, they're using Neil Young's _Rockin' In The Free World_,
which contains, get this, large amounts of harmonic distortion to begin with.
The pre-existing harmonic distortion will only serve to mask any distortion
and fidelity loss from truncating to 8-bits.

If you don't own high-fidelity audio equipment then this test is like proving
that high definition television is pointless because you can't tell the
difference between a standard and a high definition signal on your standard
definition TV.

~~~
dietrichepp
> The pre-existing harmonic distortion will only serve to mask any distortion
> and fidelity loss from truncating to 8-bits.

This is blatantly incorrect. The 8-bit conversion is not truncation, it is
dithering. Dithering to 8-bit does not introduce _any_ distortion at all,
whatsoever, of any kind. If you don't understand the mechanisms and the
science of how bit depths work, then you're going to come to false
conclusions, like the conclusion that there's any point to the Pono player at
all. We're not talking about just "subtle differences" here. In order for it
to be even _theoretically possible_ to hear the difference between 16-bit and
24-bit audio, you have to bring your audio system into a quiet room and then
crank the volume well past the threshold at which you can damage
your ears, and even then, you still won't be able to tell the difference with
the most dynamic music.

So, suppose you have a quiet room in your house, with an ambient noise level
of about 30 dB. If you raise the 16-bit audio level so the noise floor is
above 20 dB, then the peaks are going to be well into the 120 dB range. That's
like having a symphony orchestra in the room with you, at the very peak of
their performance, with all the instruments playing at once. If you've ever
listened to a symphony orchestra, you know that the background noise is NOT 30
dB, but somewhat higher. So even at the peak of a symphony, your CD recording
should still be able to reproduce the various unwelcome bits of noise that the
musicians produce (stomachs gurgling, breathing, shuffling in their chairs,
etc.)
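
The arithmetic behind that, using the usual ~6.02 dB-per-bit rule of thumb (the SPL numbers are illustrative, matching the ones above):

```python
# Each bit of depth buys roughly 6.02 dB of dynamic range.
def dynamic_range_db(bits):
    return 6.02 * bits

# Turn up a 16-bit recording until its noise floor sits at ~20 dB SPL:
noise_floor_spl = 20
peak_spl = noise_floor_spl + dynamic_range_db(16)
print(f"16-bit range: {dynamic_range_db(16):.0f} dB -> peaks near {peak_spl:.0f} dB SPL")
```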

~~~
audiosampling
Here is my take on dithering, with online demos:
[http://www.audiocheck.net/audiotests_dithering.php](http://www.audiocheck.net/audiotests_dithering.php)
(still 8-bit though, because at 16-bit, it is barely noticeable, and it won't
serve educational purposes very well)

~~~
Pyxl101
Interesting demo! Thank you for sharing.

