
24-Bit vs. 16-Bit Audio Test – Part II: Results and Conclusions - signa11
http://archimago.blogspot.com/2014/06/24-bit-vs-16-bit-audio-test-part-ii.html
======
taeric
More people really need to watch this video.
[http://xiph.org/video/vid2.shtml](http://xiph.org/video/vid2.shtml)

~~~
robert_tweed
This is such an awesome video. I really hope there's a follow-up at some
point, because as someone with a decent amount of experience with computer
graphics, but only a fairly general understanding of the sampling theorem, the
stuff that would naturally come next is what most interests me, namely:

- What happens when you start combining waves together to create more complex
signals? This is pretty important, since any real instrument produces hundreds
of harmonics with complex attack and decay properties, so decomposition of the
fundamental frequencies won't be as accurate as with toy examples.

- Following on from that: the effects of aliasing. It definitely exists and I
have a very good understanding of aliasing with respect to computer graphics,
but what effect does aliasing really have on an audio signal? In CG it's
something that's talked about all the time and there are tons of papers about
it, but it seems (from the outside at least) that audio guys only ever talk
about aliasing in very hand-wavy terms.

- Although we can't hear sounds (much) above 20KHz, we can detect artefacts
such as "beats" produced when harmonics are slightly mismatched. Is it
possible to show that such information either isn't lost, or that what is
lost is either below the noise floor or outside the audible frequency range?
This particular one is a fairly common complaint made by audiophiles about
44KHz/Nyquist, so it would be nice to see it addressed head-on.

FWIW, I'm a bit of an audiophile myself, but not one of those people who
thinks they can hear a difference between 192kbps MP3 and uncompressed, let
alone 16/44 vs 24/192. Generally the noise floor in the original recording is
too high to tell the difference, even if you could hear a difference in
artificially constructed pathological cases. But I am interested in really
understanding what information is lost, what isn't, and why that may or may
not make any difference. In other words, what are those pathological cases?
This video is a really good start and clears up a lot, but in some ways it
only scratches the surface.

~~~
rspeer
I can point you to a follow-up on one of those: An example on Wikipedia [1]
lets you hear the effect of aliasing on a sawtooth wave.

When audio is aliased, the high harmonics will wrap around the Nyquist
frequency and come back as audible, inharmonic tones. Aliased audio really
sounds quite bad, in a way that would make any listener say "oh god make that
horrible noise stop".
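
The wrap-around is easy to sketch numerically; `alias_frequency` here is a
hypothetical helper, not something from the linked example:

```python
def alias_frequency(f, fs):
    """Frequency (Hz) that a tone at f lands on after sampling at fs
    with no anti-alias filter: it folds around the Nyquist frequency fs/2."""
    f = f % fs
    return fs - f if f > fs / 2 else f

# A 30 kHz harmonic of a sawtooth, sampled at 44.1 kHz, comes back
# as an inharmonic 14.1 kHz tone -- the "horrible noise" above.
print(alias_frequency(30_000, 44_100))  # 14100
```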

This is probably why it's easier to recognize the effects of aliasing in video
than in audio -- you're not hearing much aliased audio. Aliased video is
fairly common and seen as acceptable in some contexts, while anyone who
produces aliased audio is going to fix their mistake before inflicting it on
listeners.

[1]
[https://en.wikipedia.org/wiki/Aliasing#Online_audio_example](https://en.wikipedia.org/wiki/Aliasing#Online_audio_example)

~~~
robert_tweed
Presumably, though, the effects of aliasing will also produce distortions
at a smaller scale, which might not be as obvious. That would result in a loss
of information, but would not be quite so easy to spot and filter out. The
pathological case is really just a good illustration of what happens and why,
whereas what's trickier is understanding the true impact it has on real-world
data.

For instance, you won't notice Moiré patterns in photos very often, but it's
easy to construct a test image that demonstrates the problem. The underlying
issue of sampling error hasn't gone away in the photo: it just gets lost in
the background noise or among smooth-shaded surfaces without hard edges, which
makes it harder to see. But maybe now and then you'll have a sharp edge in a
real-world image where the effect is noticeable. Depending on how close it is
to the pathological case, it might not jump out and make itself immediately
obvious.

I mention Moiré patterns because these are (I believe) precisely the same
effect as the given audio aliasing example. Moiré patterns aren't a form of
sampling error per se, but they can be caused by uniform sampling (which is
how digital quantisation typically works for both audio and images).

~~~
aidenn0
Moiré patterns are the same thing, but if you put your video signal through an
analog low-pass filter before sampling, then you wouldn't get Moiré patterns.
Audio is typically low-passed before sampling for this reason.

------
casion
I'd like to point out the potentially non-obvious here. This is a test of
delivery format.

There still are benefits to using 24-bit audio in the recording and processing
stages. This is in large part due to most recording systems expecting 0 dBVU =
-18 dBFS, and the subsequent processing that can bring the noise floor well
into the audible range (dynamic range processors are notoriously effective at
this, and heavily used in modern music). Take the simple example of a snare
drum recorded at 16-bit, EQ'd with a +6 dB boost anywhere on the spectrum,
then compressed with gain reduction peaking around 10 dB (not uncommon). After
brickwall limiting in the final mix, this track will easily have a noise floor
(via quantization error only) above -68 dBFS best case (-96 dB starting,
-12 dBFS peaking snare, +6 + 10 to the noise floor, assuming limiting with no
gain reduction). -68 dBFS is already audible in a critical listening scenario.
With dozens (sometimes hundreds) of uncorrelated signals subjected to similar
processing, this noise floor rises well into the audible range for even a
modest playback system.
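
The arithmetic above is easy to follow as code (all values are the comment's
own worked example, not measurements):

```python
# Worked version of the snare-drum example; figures from the comment above.
quant_floor = -96.0    # 16-bit quantization noise floor, dBFS
eq_boost = 6.0         # +6 dB EQ boost raises the floor with the signal
comp_makeup = 10.0     # ~10 dB of compression make-up gain
limit_gain = 12.0      # limiting brings the -12 dBFS snare peak to 0 dBFS

print(quant_floor + eq_boost + comp_makeup + limit_gain)  # -68.0 dBFS
```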

While I realize that delivery format is the only thing important to most
people, it is important to differentiate since the article does make a point
to separate out musicians, sound engineers and hardware reviewers. These are
groups of people that _should_ be aware of the benefits of higher sample
resolution. Since it's fairly obvious that most people in these categories are
confused about their ability to discern delivery formats, it's not beneficial
to confuse them even further about working formats.

To be more succinct, the difference between 16bit and 24bit is largely
inaudible when the source material is worked in a higher resolution format and
properly converted.

~~~
zamalek
> There still are benefits to using 24-bit audio in the recording and
> processing stages.

An analogy of what you said:

It's similar to HDR for audio[1] (but not exactly like it). HDR can be used
for photography that, once composed and edited, will present more realistic
information to our eyes. For example, with HDR you wouldn't have an
overexposed sky - however the HDR is only used in order to _get to_ that final
16 bit image (and even with 16 bit your eyes have a hard time discerning
different colors).

The same applies to audio. _Listening_ to 24-bit is pointless; however, if
you are editing something, you want to retain as much information as possible
until the final render so that you don't run into clipping issues as you
described.

Therefore, sites that provide 192/24 downloads are valuable. If I'm a DJ
getting music for my gig I do want those production quality files, as I cross-
fade between two songs I don't want artefacts popping (excuse the pun) up.

On to my own opinion: 24-bit is still not good enough. DAWs should be working
in floats. Audio needs to go true HDR; 24-bit is a cop-out. Why would you even
use a 24-bit int when floats are there and ready to go? Imagery went floating
point, what, 10 years ago? Why can't audio catch up? Being able to exceed the
clip point in one channel of my DAW, and then wrangle it back down in another,
would be awesome.
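
A toy sketch of the float-headroom point (hypothetical code, not any DAW's
actual engine):

```python
def to_int16(x):
    """Quantize a [-1.0, 1.0] sample to int16, hard-clipping any overshoot."""
    return max(-32768, min(32767, round(x * 32767)))

hot = 1.5                      # a mix-bus sample ~3.5 dB over full scale
clipped = to_int16(hot)        # int path: the overshoot is destroyed
rescued = to_int16(hot * 0.5)  # float path: pull the fader down first
print(clipped, rescued)        # 32767 24575
```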

 _Unrelated: That Xiph video really amazes in terms of what nature does. We
rarely care about it (we do in terms of e.g. intercontinental fibre cables),
but nature does all of this when we send a signal to a speaker. Even normal
sound does actually have a band limit and does behave (albeit, far higher
dynamic range) exactly the same way automatically. Shoot a signal down a fibre
cable that can't handle it, and you'll get Nyquist. Too high frequency for
RTP air? Expect distortion (that we can't hear). You don't even have to
include electronics to get nature to impose these limitations for you, you
have to do no extra work. Completely amazing - a deeper level of logic that is
mind boggling._

[1]: [http://www.slideshare.net/DICEStudio/audio-for-multiplayer-beyond-mixing-case-studies-from-battlefield-bad-company-frostbite?from=ss_embed](http://www.slideshare.net/DICEStudio/audio-for-multiplayer-beyond-mixing-case-studies-from-battlefield-bad-company-frostbite?from=ss_embed)

~~~
casion
192kHz, however, is not beneficial to the processing. There is an argument to
be made for 96kHz in a limited set of processing cases, as a form of implicit
pre-process upsampling, but it can actually be detrimental. (See for instance:
[https://www.gearslutz.com/board/mastering-forum/968641-some-thoughts-high-resolution-audio-processing.html](https://www.gearslutz.com/board/mastering-forum/968641-some-thoughts-high-resolution-audio-processing.html))

Since this is a discussion about bit depth, I don't see much of a reason to
clutter it with a discussion about sample rate. This subject is already
difficult enough for most people to understand, it seems.

~~~
zamalek
I might add that it's factually impossible to determine what it means, from
the listener's perspective.

As far as "limited cases" go, I work at an ISV, so I am preconditioned to not
accept limited cases - as demonstrated by excessive overtime just this very
week. Our customers do some really crazy shit with our software.

------
TheOtherHobbes
Slight problem - DXD is not PCM. It's downsampled DSD, which isn't a true PCM
format and is of debatable value in a bit-depth test.

DSD uses single-bit delta-sigma modulation at a very high sample rate. You
have to downconvert it before you can hear it, and this adds
noise/dither/distortion. One of the problems with DSD is that it's not
entirely clear what useful bit-depth you're left with after downsampling,
because there are theoretical reasons for criticising one-bit sampling. See
e.g.

[http://sjeng.org/ftp/SACD.pdf](http://sjeng.org/ftp/SACD.pdf)

A useful test would start with high quality unmastered and unprocessed 24-bit
PCM recordings and A/B them with 16-bit downconversions. (Remember, even
orchestral recordings are mixed in a studio and the individual stems usually
have some dynamic processing and gain riding, even if it's not as obvious as
dance music pumping.)

I'd expect a test like this to use a bit meter like Bitter to confirm there's
useful information in the lower bits, and not just rely on a vague estimate of
the dynamic range.

[http://www.stillwellaudio.com/plugins/bitter/](http://www.stillwellaudio.com/plugins/bitter/)

Ironically, all of the reviews of the Bozza track say that the BluRay audio
version sounds cleaner than the SACD source used here. (I have no idea if this
is true. But if someone has both and wants to do a blind A/B, that would be
interesting.)

It's also worth mentioning there are easy-to-find test tones you can use to
check how clean your audio hardware is at extreme sample rates. They're not
directly relevant to bit-depth tests, but they're a good torture test for
audio.

[http://www.audiocheck.net/testtones_highdefinitionaudio.php](http://www.audiocheck.net/testtones_highdefinitionaudio.php)

~~~
hvidgaard
It doesn't make sense for it to be unmastered. We want to test the difference
between a properly made 24-bit output downsampled to 16-bit and the original.

There is a difference between 16-, 24-, and 32-bit recordings, but it just so
happens that 16 bits are enough to give an effective 120dB dynamic range, if
you use proper dithering. If you do not want to use dithering "because it's
not pure", you still have 96dB of dynamic range. So with dithering, 16 bits
allow you to represent a mosquito and a jackhammer in the same room. And even
without it (and that isn't a good idea, as the noise then consists of harmonic
distortion, which is a lot easier to hear than dithered noise) I really doubt
anyone can tell the difference; 96dB is still quite a bit.
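
The 96dB figure is just the bit count restated; the ~120dB effective figure
additionally depends on noise-shaped dither pushing the noise out of the most
audible band. A quick check of the raw numbers:

```python
import math

def undithered_range_db(bits):
    """Peak-signal to quantization-noise ratio of a plain n-bit quantizer."""
    return 20 * math.log10(2 ** bits)

print(round(undithered_range_db(16), 1))  # 96.3
print(round(undithered_range_db(24), 1))  # 144.5
```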

------
shmerl
This was posted here before, but it's a great article if you haven't read it
yet: [https://people.xiph.org/~xiphmont/demo/neil-young.html](https://people.xiph.org/~xiphmont/demo/neil-young.html)

------
S_A_P
I'm really glad there have been more audio related posts on HN lately. Maybe
it's just my own bias, but it seems that print/online media related to audio
is dying off and is now limited to forums and hobbyist sites. The few that
are still around are difficult to take seriously with some of the snake oil
that gets reviewed. (Carbon fiber disc stabilizers, anyone?) I would love to
see a serious attempt at an enthusiast "magazine" done again...

~~~
thirdsun
I agree - audio technology, music production tools and of course music itself
have always been topics of interest for me. However, there's a whole jungle of
misconceptions and half-knowledge to browse through before you find good content.

For audio production tools and other related niche topics, I can recommend
createdigitalmusic.com

~~~
MrJagil
I have long had the idea for an HN for music. I have found few places that
resemble it, such as the Muff Wiggler forum and some Stack Exchanges...

~~~
S_A_P
Ha! I used to chat all the time with Muff Wiggler on KVR... I still lurk there
but rarely post. My handle is Stupid American Pig.

------
mark-r
There's a lot of great data here, and the author obviously tried to cover all
of the bases. Unfortunately I'm still bothered by a couple of aspects.

Firstly, the question of "can you hear a difference" is completely orthogonal
to the question of "which do you think is 24 bit". By using the answer to the
second question to infer an answer to the first, you're entangling them. If
someone could reliably hear the difference but on half the songs they
preferred 16 bit and on the other half preferred 24 bit, their own answers
would cancel each other out.

Secondly, all it takes is ONE PERSON who can reliably tell the difference [1]
to prove that the difference is audible, even if it's only to a very small
subset of the population. The test was structured to detect the abilities of a
group, not a single person. I'm perfectly willing to believe that as a group,
people on average can't tell a difference, but that doesn't tell me whether
_I_ can tell a difference.

[1] Reliably telling the difference would mean being consistent on double-
blind A/B testing, repeated enough times to achieve statistical significance.

------
kmike84
I think the conclusion ("there was no evidence that 24-bit audio could be
appreciably differentiated from the same music dithered down to 16-bits") is
not correct.

EDIT: I'm not sure if the conclusion is correct or not, but the logic that
lead to the conclusion has flaws.

50%/50% accuracy means a random guess - people can't distinguish 24-bit from
16-bit.

But if accuracy is significantly below 50% for a large enough sample, it means
the difference between 24-bit and 16-bit is heard.

Article said: "As a subgroup (total of 31 respondents), the self identified
respondents with a "good amount" of musical background did not do well. In
fact, this group of respondents consistently scored worse than the combined
result."

People with presumably better ears ("musicians" and "hardware reviewers") were
less accurate than regular people, especially on Vivaldi. I think this means
they did well: they heard the difference - it is not possible to be
significantly less accurate than 50% without hearing a difference. They failed
at deciding which one is "better", but they were able to differentiate 16-bit
music from 24-bit.

~~~
danbruc
_This means they heard the difference - it is not possible to be significantly
less accurate than 50% without hearing a difference._

For small sample sizes, being far off is quite likely; for example, the chance
of getting at most 1/3 of 31 (11 or fewer) 50/50 guesses right is roughly 1 in 14.
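
That probability can be checked with an exact binomial sum, assuming pure
50/50 guessing:

```python
from math import comb

# Probability of 11 or fewer correct out of 31 fair 50/50 guesses
p = sum(comb(31, k) for k in range(12)) / 2 ** 31
print(round(p, 3))  # 0.075, i.e. about 1 in 13
```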

~~~
kmike84
I agree that the sample sizes are small, and there is no proper analysis done -
the difference could be due to chance alone.

But the article states the difference is statistically significant, and then
draws a wrong conclusion from it (not accurate => can't differentiate).

------
mark-r
Perhaps the 16-bit sample really did sound better in some cases?

The lowest bits of a D/A converter are the most non-linear. By avoiding them
you might get a more accurate waveform overall.

This would explain the people who were confident that they knew which was
which, even when they got it consistently wrong. It would depend greatly on
the specific D/A converter so you'd expect it to go both ways.

~~~
cnvogel
Sorry, but no!

    1.0 1.1 1.9 2.0 2.1 2.9 3.0

is still a better approximation of a linearly increasing list of numbers than

    1.0 1.0 2.0 2.0 2.0 3.0 3.0

and I doubt that any 24-bit DAC or ADC in existence will interpolate as badly
as I did in this example. Whatever distortion due to quantisation the latter
creates, the former will create less of it.

~~~
mark-r
Your example is overly simplistic. Dithering is what allows a low bit sample
to emulate a high bit one, and I don't see any evidence that you applied any;
even if you did you wouldn't get a feel for how it operates with such a short
sample.

~~~
cnvogel
The initial statement was: "lowest bits of a D/A converter are the most non-
linear. By avoiding them you might get a more accurate waveform overall."

You can of course apply dithering to any signal to mask quantization noise.

It's just that a perfect N-bit DAC will require less dithering to mask its
quantization distortion/noise than a non-perfect/not-quite-linear N-bit DAC
which again will introduce less quantization distortion/noise than the (N-k)
bit DAC.

In fact, you could say that an N-bit DAC is just "perfect reproduction" plus a
non-linearity that amounts to the step-width of 1 LSB, i.e. its resolution.

And a 24-bit DAC that unfortunately is off by 8 LSB (2^3) along its 0...2^N-1
to 0V..Uref curve is still as good as or better than a 21-bit DAC (24 bits
minus 3 bits) in reproducing a waveform, and still better than a 16-bit DAC. A
graphical representation of this is commonly found in datasheets and called
"integral non-linearity."

------
Zigurd
Quality reports on audio depend, as far as I know, entirely on conscious
reporting of quality that's accessible to a test subject through
introspection.

By definition, this makes it easy to debunk "golden ears." Because loudness
(energy) determines what we pay attention to in sound, sonic detail that's low
in energy compared to the total can be dropped without test subjects being
able to report the missing information. And maybe this is valid: if we can't
report our experiences, are they really experiences?

But I find this unsatisfactory if only from the point of view of experimental
design. Does the brain really throw this information away at a low level? Does
our ear "compress" audition on the way to other parts of the brain? Or does
our subconscious experience uncompressed music differently?

------
vitoreiji
While the conclusion is in accordance with what I would expect, I think the
study suffers a lot from not having a control group. From these results, there
is no telling if participants screwed up in replay, or if they were all just
guessing anyway, or whatever. This should be redone with at least one sample
pair where one of the samples is deliberately reduced in quality and delivered
to a subset of the participants.

------
keenerd
Shoot, most people won't be able to tell 8 bit audio from 16 bit audio. Try
the following:

    sox highres.flac --bits 8 lowres.wav dither

Wave is required because flac doesn't do 8 bit and we want to be 100% certain
nothing sneaky is going on. You might be able to notice a slight increase in
background hiss if you are in a very quiet room.

~~~
casion
That background hiss IS the difference. I'm not sure what you think the
difference would be otherwise, but the increase in the noise floor caused by
quantization error will be the difference between the formats.

I also don't know what 'lowres.wav' is (is this linked in the article?), but
on classical or jazz recordings the difference is very noticeable due to the
lower 'average' amplitude of the recordings. If you did this on a modern pop
recording that's smashed to hell and back... then yeah, many people won't even
notice the noise.

~~~
keenerd
No, most people think that if you reduce something to 8 bits it will sound like
NES-powered voice mail being played back through a piezo buzzer.

If you converted something to 8 bits and used proper dithering, no one who
heard it would exclaim "You monster! That is only 8 bit audio!"

Whereas if you reduced something to a low bitrate mp3, people would notice
immediately and call you out on it.

~~~
casion
That is an issue. People don't understand what bit-depth reduction sounds
like. That doesn't change the fact that the hiss is the difference. That is
the quantization noise floor, and the primary artifact of the conversion.

Rather than try to goad people into thinking that there is no difference, it
is better to educate them on what the difference actually is (or could be).
From there an honest interaction can be had regarding the potential perception
of these differences.

Quite simply, just because some people are misinformed about what qualitative
effect is occurring, that does not discount the fact that there is a
qualitative effect occurring.

------
woah
Why are there so few women in the audiophile community? Do female audiophiles
face the same adversities as women in tech?

~~~
wyager
> Do female audiophiles face the same adversities as women in tech?

How would that work? There isn't even any opportunity to be excluded from
being an audiophile.

It's the exact same thing that accounts for a lot of the discrepancy between
men and women in professional tech: the fact that men, on average, like
gadgets more than women.

~~~
lentil_soup
I really doubt men like gadgets more than women per se; it's a
social/cultural/educational thing that leads women away from those interests.

~~~
renaudg
No it's not. [http://www.livescience.com/22677-girls-dolls-boys-toy-trucks.html](http://www.livescience.com/22677-girls-dolls-boys-toy-trucks.html)

------
xlayn
It's all about obsessive compulsion. I count myself in that group... I bought
every recommended pair of headphones promising more and more magic, but one day
you come to notice that it's just a sound equalizer and some people like it
one way... some another. So: 192kbps, 44kHz, and MDR-V6 headphones into a $30
player; that's it.

~~~
gcb0
Not really. Headphones vary a lot within the frequencies you DO hear.

Also, the weight vs. outside-sound-isolation ratio varies with price.

Those are all observable and measurable things.

Speakers also vary in those audible frequencies, but past $150 per speaker
you are only dealing with quality at very loud volumes.

Over $3000 for a home system? Just be honest with yourself and confess you
are buying the prettiest furniture that matches your decor.

~~~
aycangulez
True. What often separates a good speaker from a mediocre one is how it
behaves at high volumes. Most speakers made for home use, regardless of price,
lose composure when cranked high. The main culprit is usually the dome
tweeters (a type of high-frequency driver) that start to compress at high
volumes. The solution is to use a compression driver (CD) with a waveguide,
but mostly for aesthetic reasons there are virtually no commercial speakers on
the market that use CDs.

The alternative is to use public address (PA) speakers. Virtually all of
them are equipped with CDs, but they are relatively large, and unless you have
a dedicated listening/home theater room, they won't really fit a home's decor
well.

~~~
JonnieCache
The other option is ribbon tweeters:
[https://en.wikipedia.org/wiki/Tweeter#Ribbon_tweeter](https://en.wikipedia.org/wiki/Tweeter#Ribbon_tweeter)

They sound great. Sometimes too great.

------
ChrisGranger
As I said in the other recent 24-bit audio thread, any improvement in sound
quality offered by using 24-bit will be inaudible for the vast majority of
listeners, with the extra low-level detail of the increased dynamic range
being lost in the noise floor of a typical room.

~~~
steven2012
You mean it's inaudible for ALL the listeners. There is no subset of listeners
that would be able to tell the difference between 24-bit and 16-bit audio
with any type of statistical significance.

~~~
ChrisGranger
Downvoted for saying "vast majority" instead of ALL? Sigh.

For ALL the listeners in this specific test? Yes, almost certainly. Naturally,
just randomly picking a song, even one that subjectively sounds really good,
and listening to 16-bit or 24-bit versions of it at moderate volume in a
typical room will _absolutely_ prevent anyone from choosing correctly with any
statistical significance. The OP's test was doomed to fail from the outset.

That doesn't mean it's impossible to detect any difference under any
circumstances.

~~~
hvidgaard
16-bit gives you an effective dynamic range of 120dB - no one, and I challenge
you to prove me wrong, can detect differences beyond that. To quote from the
last article, that is enough to record the difference between a jackhammer and
a mosquito in the same room.

~~~
retrogradeorbit
> that is enough to record the difference between a jackhammer and a mosquito
> in the same room.

That is utterly laughable. Stated like someone who has never actually tried to
record sounds with large dynamic range. But I suppose it's how you define
"record". So if a mosquito is 40dBA, and a jackhammer is 130dBA, that's a
difference of 90dB. Now I don't know of any preamp that has such a low noise
floor, but assuming one existed, if we set the gain staging such that the
jackhammer is 0dBFS, then the mosquito is peaking at -90dBFS. That's 6dB above
the noise floor, or ONE BIT. So your "recording" of the mosquito is one bit flipping on
and off.

Quite the recording! Statements like these are what make recording engineers
roll their eyes and think here we go again.

> 16bit gives you an effective dynamic range of 120dB

16-bit gives you 96dB of dynamic range, and less than that of usable dynamic
range. You may say "effectively", as in once dithered, but then you're
accepting that the recording is done at a higher bit depth and then dithered
down, thus refuting the original statement that 16 bits is enough dynamic
range to record said sounds.
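
The ONE BIT claim checks out arithmetically (a sketch using the comment's own
numbers):

```python
# Peak amplitude, in 16-bit quantization steps, of a signal 90 dB
# below full scale (the mosquito, with the jackhammer at 0 dBFS).
full_scale = 2 ** 15 - 1                       # 32767 steps in 16-bit
mosquito_peak = full_scale * 10 ** (-90 / 20)  # convert dBFS to amplitude
print(round(mosquito_peak, 2))                 # ~1.04 steps: one bit toggling
```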

~~~
hvidgaard
Dithering gives roughly 120dB of dynamic range in 16-bit audio; that puts the
mosquito 30dB above the noise floor. That isn't ideal, but I struggle to find
a situation where it matters. If the volume is low enough for the jackhammer
to not damage your ears, you wouldn't be able to hear the mosquito anyway. One
could argue that if we had the audio equipment it would be nice to represent
the mosquito and the jackhammer without gain control, but with all the dynamic
compression we have seen in the last decade, I doubt it.

------
Derbasti
Some confidence intervals for the hypothesis would be really handy. Still,
great article!

------
r721
Discussion of this test at hydrogenaud.io:
[http://www.hydrogenaud.io/forums/index.php?showtopic=106156](http://www.hydrogenaud.io/forums/index.php?showtopic=106156)

------
Joeboy
What I would like to know is:

Is there any individual human who can reliably distinguish between 16 and 24
bit audio? If somebody believes they can, where can I send them to establish
whether it's true or not?

~~~
throwawayaway
You can hear the difference between summing 16 channels of 24-bit audio vs.
16-bit audio in a DAW; 24 sounds better. Then when you render it, you can't
tell the difference between a 16-bit dump and a 24-bit dump.

I think the recent Aphex Twin release was out in both 24-bit and 16-bit; it
would be great test subject matter for the foobar ABX plugin.

With all types of music, you can train your ear to listen for what mp3
hiccups on. I know nothing about classical music, but I can spot a 320kbps
mp3 a mile off due to terrible sounding hi-hats and crashes in genres where
they are prominent. Disco records also suffer very badly, just something
about how they were recorded. I wouldn't know what to listen for in
classical.

~~~
swift
Hi-hats and crashes are exactly the thing I noticed improved when switching
my music from MP3 to FLAC many years ago. The difference is _very_ obvious to
me; I really don't think it's placebo.

That said, I've heard that AAC handles them much better, and that modern MP3
encoders do a better job. I haven't had a chance to do an A/B test to check
that. I'd love to confirm that AAC has solved this problem, because my music
collection is taking up way more space than I'd like!

~~~
emn13
Try Opus. I was amazed how good it sounds at ridiculously low bitrates. I'd
be surprised if you can tell the difference between 80kbps Opus and FLAC (i.e.
lossless) files outside of certain very rare corner cases. And unlike mp3,
those corner cases don't sound terrible, though that's subjective. Even at
64kbps the difference wasn't obvious without careful listening - to me, YMMV :-).

If you do try it, make sure to use the 1.1 encoder (which deals with difficult
samples by detecting them and upping the bitrate more aggressively than
previous versions), and you might as well increase the maximum frame size to
the maximum (60ms), since you're not interested in low-latency applications.

------
antonios
> Furthermore, 20% used an ABX utility in the evaluation process suggesting
> good effort in trying to discern sonic differences.

Take those results with a (large) grain of salt.

~~~
chronial
Doesn’t matter - the respondents didn’t know which sample was which.

------
stefantalpalaru
> biological gender

This is getting ridiculous. Just call it "sex" already.

------
retrogradeorbit
Try repeating the experiment without the dither.

~~~
spankalee
That would allow the subject to easily tell the difference between the two
samples by looking to see which had all 0's in the 4 lowest bits.

~~~
retrogradeorbit
This makes no sense to me. Please explain. The difference between 24 and 16 is
8, not 4, for one.

Are you saying the "16-bit" sample files actually had noise added to the least
significant 4 (8?) bits? This is not dithering. Dithering is adding noise
BEFORE truncation. A dithered 16 bit rendering of 24 bit audio will only be 16
bit. An undithered 16 bit rendering of 24 bit audio will also only be 16 bit.
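
For what it's worth, the peeking concern is mechanical: if a truncated
"16-bit" version were delivered in a 24-bit container without dither, the
giveaway would be an all-zero low byte (a sketch, not the survey's actual
file layout):

```python
def truncate_16_in_24(sample):
    """Truncate a 24-bit sample to 16 bits, then pad it back to 24 bits."""
    return (sample >> 8) << 8

samples = [0x123456, 0x00ABCD, 0x7FFFFF]
padded = [truncate_16_in_24(s) for s in samples]
# A bit meter (or one line of code) spots the all-zero low byte instantly.
print(all(s & 0xFF == 0 for s in padded))  # True
```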

~~~
mng2
I think the GP is basically saying, how would you propose to run an internet
survey asking anonymous audiophiles to blindly A/B (no peeking at the bits)
full and truncated files? The dithering is probably the least questionable
part of the methodology.

