
24/192 Music Downloads and why they make no sense (2012) - zpiman
https://people.xiph.org/~xiphmont/demo/neil-young.html
======
mruts
Problem with music sounding bad doesn’t really have much to do with the
distributed format: V0, V1, or 320 mp3s should sound pretty much the same
compared to 16-bit flac. You can only hear the difference between mp3 and flac
at shitty bitrates no one uses anymore (like 120).

The reason why a lot of recent digital music sounds bad is the intentionally
terrible mastering. Since everyone is listening on crappy earbuds, they
compress the hell out of it and destroy all dynamic range. This is why, when
downloading music, you should avoid remasters (there are some exceptions, like
the Beatles mono and stereo boxed sets that came out a while ago) and go for
the first edition presses.
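For intuition, the "compress the hell out of it" step can be caricatured in a few lines (a toy hard limiter with made-up numbers, nothing like a real mastering chain):

```python
def master_loud(sample, gain=4.0, ceiling=1.0):
    # crank the gain, then hard-clip anything over the ceiling
    boosted = sample * gain
    return max(-ceiling, min(ceiling, boosted))

quiet_peak, loud_peak = 0.2, 1.0
print(master_loud(quiet_peak), master_loud(loud_peak))
# the 5:1 contrast between quiet and loud passages collapses to 1.25:1
```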

This is also why modern vinyl releases sound a lot better than digital: they
are mastered differently, since it's assumed everyone is going to be listening
on good equipment.

That being said, I think flac is generally a good choice for a music
collection. You can’t transcode mp3s without killing the quality, so if you
ever want to convert formats (like for an mp3 player), you should stick with
flac (16-bit, 48 kHz).

The original idea of 24-bit/192 kHz flac was for vinyl rips, where
hypothetically you might be getting more information.

~~~
martin_a
But who is using MP3 players anymore these days?

I found myself buying an iPod in... like... 2011 or so. Converted all the CDs
I had to FLAC because lossless was the way to go.

Two or three years (let it be 5, doesn't matter) passed by; I got a better
smartphone and Spotify Premium, and I don't touch my 1xx GB of FLAC music
anymore, because I don't want to carry around another device etc.

I'm not sure, but "owning" music, as in "I got some files here on my drive",
seems dead to me. That obviously has downsides, but I feel lucky to use
Spotify these days, being able to discover new music every day and listen to
all of it on the go without buying something, converting it and more.

~~~
mark-r
I rip my CDs in a two-step process: first to FLAC, then convert to mp3. The
mp3s go in my phone, I have 33GB so far and my collection isn't even half
ripped. I haven't checked how big the FLACs are lately but I'm sure they'd be
a much bigger burden.

~~~
nirvdrum
Slightly off topic, but do you use something other than iTunes for this
process? I'm looking for a good way to manage a FLAC library.

~~~
AsyncAwait
As far as I am aware, iTunes is not even able to play back FLACs, so when I
am on a Mac, I use Clementine
([https://www.clementine-player.org](https://www.clementine-player.org)) or
cmus ([https://cmus.github.io](https://cmus.github.io)).

Converting etc. I do exclusively on my Linux desktop, so can't help you there.

~~~
ken
iTunes doesn't support FLAC, but it does support ALAC, whose implementation is
also open-source. And it has a neat feature where it can store ALAC on your
computer, and automatically transcode to a (much smaller) lossy format when
syncing to a mobile device.

~~~
AsyncAwait
Yeah, I'm aware, I just feel like FLAC is the more popular, more
multi-platform-friendly option; it definitely seems to have more momentum
behind it, and it's easier to buy FLAC than ALAC, for one.

~~~
thirdsun
Buying options shouldn't be a concern at all as long as they are lossless -
you simply convert them to the lossless format of your choice. There won't be
any quality loss. Lossless to lossless is still lossless.

Your format of choice should be dictated by your mobile platform - if you use
an iOS device or simply like iTunes, go for ALAC. Any decent player will
handle FLAC and ALAC, but Apple requires ALAC. If Apple isn't a concern for
you, there's no reason to use anything but FLAC.

Personally, I use ALAC since I use iOS. So far there haven't been any
downsides.

~~~
AsyncAwait
> Buying options shouldn't be a concern at all as long as they are lossless -
> you simply convert them to the lossless format of your choice. There won't
> be any quality loss. Lossless to lossless is still lossless.

Absolutely, but it's an extra step that to me brings little practical benefit,
since FLAC is already the source format & is more widely used practically
everywhere outside Apple's ecosystem.

> Your format of choice should be dictated by your mobile platform - if you
> use an iOS device or simply like iTunes, go for ALAC. Any decent player will
> handle FLAC and ALAC, but Apple requires ALAC. If Apple isn't a concern for
> you, there's no reason to use anything but FLAC.

I use iOS as my smartphone platform for now, (waiting for the Librem 5), but
Linux on the desktop, so that's why I prefer FLAC. It's worth noting however
that iOS itself _does_ support FLACs perfectly well, just iTunes doesn't, (I
prefer not to deal with iTunes at all, so not a concern for me), but if you
use something like Airsonic, you're set.

I do have a set of AirPlay speakers however, since I wanted something
wireless, but still lossless, which kind of means AirPlay is the only option &
that does transcode my FLACs to ALAC on the fly, so there's definitely an area
where I use ALAC, even if indirectly.

------
teknico
Already discussed. Main posts:

[2012]
[https://news.ycombinator.com/item?id=3668310](https://news.ycombinator.com/item?id=3668310)

[2014]
[https://news.ycombinator.com/item?id=8689231](https://news.ycombinator.com/item?id=8689231)

[2015]
[https://news.ycombinator.com/item?id=10520639](https://news.ycombinator.com/item?id=10520639)

[2017]
[https://news.ycombinator.com/item?id=15127633](https://news.ycombinator.com/item?id=15127633)

~~~
diminish
It's a very good article which shows up again and again. Think it's 2040,
singularity reached. AI runs the world and on HN we have this article popping
up very frequently, like every hundred Planck time units.

~~~
xoa
It is a good article, and since the misunderstandings are as persistent as a
lot of other commercially exploited mysticism, it remains a relevant one as
well. Having said that, and to try to add something new to these discussions,
since you brought up this:

> _Think it's 2040, singularity reached. AI runs the world and on HN we have
> this article popping up very frequently, like every hundred Planck time
> units._

One argument I can see in principle for 24/192+ _sound_ (not music) recordings
would be if someone was a serious transhumanist and honestly did anticipate
that some humans will move beyond baseline human sensory limitations in the
foreseeable future (by 2040 would certainly count). Combine that with the sort
of incredible environmental destruction we're seeing right now, with enormous
numbers of species going extinct, forests being destroyed, insect/bird levels
plummeting/moving even if they aren't going extinct entirely, etc. It doesn't
seem entirely unreasonable to imagine that in 2040 somebody with genetically
enhanced or bionic ears who really could hear ultrasonics (and had grown up
with that, so their brain had developed from the start with that input) would
find themselves not being able to ever hear "what it was really like" back in
the 2010s even for a simple walk in the woods. If they had been here in
person they'd be able to hear all sorts of things, but our standard recordings
wouldn't have any of that, and by then the whole character of forests may be
different forever, à la the silent spring. It's similar I think to one of the
obvious guiding principles of modern archaeology, which is to try to disturb
as little as possible, precisely because we recognize there will be superior
tools and sensors in the future which could pick up things we can't right now.
Saving as much raw data as feasible in many experiments is also like that:
even if we can't process it all now, decades down the line new insights might
be found.

None of that has anything to do with music which is a subjective human
artistic creation. Even though instruments give off sounds beyond our
perception, by definition we aren't taking those sounds into account in the
creative process. Future transhumans would undoubtedly create transhumanist
art taking full advantage of any enhanced senses, but that wouldn't apply
retroactively.

~~~
sjwright
> One argument I can see in principle for 24/192+ sound (not music) recordings
> would be if someone was a serious transhumanist and honestly did anticipate
> that some humans will move beyond baseline human sensory limitations in the
> foreseeable future

True, except that few microphones provide a useful signal over 20 kHz, and in
the case of produced music, that segment of the signal was never heard or
"signed off" by the original artists/engineers and therefore can't be
considered part of the _artist's intent_.

------
deevious
This reminds me of Monty's A Digital Media Primer for Geeks[0] and Digital
Show & Tell[1] - the delivery, the explanations and the way the experiments
are set up are superb.

[0] [https://xiph.org/video/vid1.shtml](https://xiph.org/video/vid1.shtml) [1]
[https://xiph.org/video/vid2.shtml](https://xiph.org/video/vid2.shtml)

~~~
teknico
The article's author, Chris "Monty" Montgomery, is one of the authors of Ogg
Vorbis [1] and Opus [2].

It puzzles me that many people don't yet know about Opus. Let me quote the FAQ
[3]:

"Does Opus make all those other lossy codecs obsolete?

Yes.

From a technical point of view (loss, delay, bitrates, ...) Opus renders Speex
obsolete and should also replace Vorbis and the common proprietary codecs too
(e.g. AAC, MP3, ...)."

[1] [https://xiph.org/vorbis/](https://xiph.org/vorbis/)

[2] [http://www.opus-codec.org/comparison/](http://www.opus-codec.org/comparison/)

[3]
[https://wiki.xiph.org/OpusFAQ#Does_Opus_make_all_those_other...](https://wiki.xiph.org/OpusFAQ#Does_Opus_make_all_those_other_lossy_codecs_obsolete.3F)

~~~
ValentineC
I thought for a moment that Spotify uses Opus, but it turns out that they use
Vorbis. Wonder why a switch isn't on their roadmap.

~~~
kingosticks
Do they publish their roadmap?

I'd imagine they consider what they have good enough, given the backwards
compatibility issues a switch would likely introduce.

~~~
ValentineC
They don't publish their roadmap, but there have been threads on their
community forums suggesting this, and an official "Not Right Now" response:

[https://community.spotify.com/t5/Live-Ideas/Music-Use-Opus-C...](https://community.spotify.com/t5/Live-Ideas/Music-Use-Opus-Codec-instead-of-OGG-Vorbis/idi-p/1207548)

~~~
kingosticks
I've yet to see them implement anything suggested on their forum (or github).

------
pjc50
The choice of colour analogy is unfortunate, because there really _are_
colours that are "out of gamut" and cannot be accurately reproduced on normal
monitors. If you ever have the opportunity to look at one of the IKB works in
person, you'll see what I mean.

[https://www.tate.org.uk/art/artworks/klein-ikb-79-t01513](https://www.tate.org.uk/art/artworks/klein-ikb-79-t01513)

[https://contemporaryartetc.wordpress.com/2007/09/13/fact-of-...](https://contemporaryartetc.wordpress.com/2007/09/13/fact-of-the-day-61/)

~~~
xoa
I don't quite agree with you, taking into account it's an analogy designed to
help illustrate the issue for general audiences. It's not as if we don't have
ProPhoto RGB or other wide gamuts or don't understand the issues of rendition
accuracy and resolution within the visual spectrum. There was never any debate
that sRGB alone in particular was quite limited, or that dynamic range was an
issue. It's just that it represents a ton more data and is technologically and
commercially much, much harder. As tech has caught up displays have continued
to chase human visual limits, starting with resolution, then frame rate, and
finally major industry wide improvements to gamut and range with
BT.2020/2100. I mean heck, it wasn't _that_ long ago that we barely had color
at all. I still remember well the first 8-bit system I ever got, or back when
I regularly had to manually change between 16-color/256/16k to prioritize
resolution or color because my system just didn't have enough VRAM to handle
both at once. Audio did far better at matching human limits much, much longer
ago.

But colours within the visual spectrum that don't show on the screen are
still _within the visual spectrum_. The article's examples refer to infrared
and UV+ for
contrast, and that's entirely correct. Monitors displaying either of those
would make no difference (well, beaming ionizing EM at your face raises
significant concerns audio doesn't at any level) at any point. They're simply
beyond human eyes, period. It's an accurate analogy. Failing to reproduce
something _within_ human limits would be what you're talking about, but that's
a solved problem and not something 24/192 offers you anything with.

------
KerrickStaley
Question: Is 192 kHz better when you want to slow down (or speed up) a track
significantly while keeping pitches the same? Does it produce less noticeable
artifacts?

When DJing, I often speed up or slow down a track I'm cueing in order to match
the tempo of the playing song. So having 192 kHz tracks might be better
(although usually you try not to change a song's tempo too far from the
original anyway).

~~~
penagwin
I personally read the article as addressing 192 kHz from the perspective of a
consumer of the music; I have a feeling that for those producing (or mixing,
etc.) it's a bit different.

It's kinda like how there are advantages to recording at 8k: better cropping,
supersampling, etc. But for the average consumer there's no perceivable
difference between the pixel density of 8k footage and 1080p footage on their
7" screen anyway.

~~~
meatmanek
Yeah, the author is not arguing against using 24 bits when recording, just
when distributing to end users.

If the producer is planning to slow down the audio (and wants the ultrasonic
components to become audible), then recording at higher sample rates makes
sense, and the author doesn't address this; probably this is pretty rare in
practice. You'd also need ultrasonic-capable microphones.

The much more common operation is to filter or amplify the signal, and for
that, more bits per sample is better to avoid amplifying your quantization
error. The author covers this in the "When does 24 bit matter?" section.
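The "amplifying your quantization error" point can be sketched numerically (a toy model with my own numbers, not from the article): quantize before applying gain, and the rounding error gets boosted along with the signal.

```python
def quantize(x, bits=16):
    # round to the nearest representable level at the given bit depth
    steps = 2 ** (bits - 1)
    return round(x * steps) / steps

signal, gain = 0.1234567, 8
target = signal * gain

# boost first and quantize once at the end, vs. quantize first and then boost
err_boost_then_quantize = abs(quantize(target) - target)
err_quantize_then_boost = abs(quantize(signal) * gain - target)
print(err_boost_then_quantize < err_quantize_then_boost)  # True
```

More bits per sample shrink the step size, which is why processing happens at higher depths (or in float) and only the final mix gets reduced to 16 bits.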

~~~
sjwright
> and the author doesn't address this

No, it's literally excluded from consideration in the article's title. This is
about music downloads, not music production.

------
aidenn0
The only rebuttal to this that I have found compelling is that 24/192
downloads make sense if you are going to sample the music in your own
creations. Recording and mixing with extra dynamic range, combined with only
needing to low-pass once at the end has demonstrable advantages. Of course
this was a response to marketing that was definitely _not_ targeted at
samplers, so it's not so much a rebuttal as arguing at cross purposes.

~~~
vortico
Yes, adding any sort of nonlinear distortion to audio will make frequencies
depend on other frequencies, i.e. audible frequencies in the output of an
effect can depend on supersonic frequencies in its input. For example, if you
run a 100 kHz sine wave through a high-gain guitar amplifier, you'll
definitely be able to hear it.

I didn't really see a mention of this point in the article, since there was no
"So when _do_ you need 192 kHz?" section, but in its defense, DACs,
amplifiers, speakers, and room ambiance are all incredibly linear in 2019, so
for music listening, most super-sonic frequency content doesn't turn into
lower frequencies. It _does_ matter when you're using the very nonlinear Apple
earbuds, but if you were doing that, you wouldn't care about audio quality in
the first place.
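That frequency-mixing effect is easy to demonstrate with a toy nonlinearity (my own sketch: squaring stands in for distortion, and I use 30/31 kHz tones at a 192 kHz rate rather than the 100 kHz example above):

```python
import cmath, math

fs = 192_000   # sample rate
n = 19_200     # 0.1 s of audio
f1, f2 = 30_000, 31_000  # two ultrasonic tones, inaudible on their own
x = [math.sin(2 * math.pi * f1 * i / fs) + math.sin(2 * math.pi * f2 * i / fs)
     for i in range(n)]
# pass them through a crude nonlinearity (squaring stands in for distortion)
y = [s + 0.5 * s * s for s in x]

def dft_mag(signal, freq):
    # magnitude of a single DFT bin at `freq`
    return abs(sum(s * cmath.exp(-2j * math.pi * freq * i / fs)
                   for i, s in enumerate(signal)))

print(dft_mag(y, f2 - f1))  # strong 1 kHz difference tone, now audible
print(dft_mag(y, 1_500))    # control bin with no tone: essentially zero
```

The difference tone at f2 - f1 = 1 kHz only exists because of the nonlinearity; a perfectly linear playback chain would leave both tones inaudible.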

~~~
sjwright
He mentions this in the section "192kHz considered harmful" without the
misleading rubbishing of Apple earbuds (which are among the best regular
earbuds on the market, for what it's worth).

In most sensible systems, super-sonic content should be filtered out before
it reaches the output, since it does nothing other than risk the fidelity of
the final result.

As for your quip about a 100 kHz sine wave sent through a guitar amp, what
you'd be able to hear are the distortions and subharmonics which are below 20
kHz—and if they're desirable in the recording they would need to be captured
as their sub-20 kHz components. Capturing the >20 kHz components will do
nothing but make the sound wildly and randomly inconsistent depending on the
consumer's system.

------
gwbas1c
When I casually researched the upper limit of human hearing, I came across
something that mentioned that some people can detect lowpass filtering up to
27khz.

That's less than half an octave over the "traditional" 20khz limit. Even the
20khz limit is more of an average than a strict biological limit.

It also means that a sampling rate of around 54khz is the "ideal" limit when
trying to pick a sampling frequency that is completely transparent to
everyone.

This is less than half an octave higher than the traditional 44.1khz rate,
just 22% more data.

That's the thing that really drives me nuts about high sampling rates. The
minute improvement really only needs a very slight boost in sampling rates,
not 96khz or higher.
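The arithmetic checks out; a quick sanity check of the numbers above (my own, following the comment's figures):

```python
import math

audible_limit_hz = 27_000            # highest reported detection threshold
nyquist_rate_hz = 2 * audible_limit_hz
cd_rate_hz = 44_100

extra_data = nyquist_rate_hz / cd_rate_hz - 1
octaves_above = math.log2(nyquist_rate_hz / cd_rate_hz)
print(f"{nyquist_rate_hz} Hz is {extra_data:.0%} more data and "
      f"{octaves_above:.2f} octaves above {cd_rate_hz} Hz")
```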

~~~
justin66
> When I casually researched the upper limit of human hearing, I came across
> something that mentioned that some people can detect lowpass filtering up to
> 27khz.

Link?

~~~
gwbas1c
[https://en.wikipedia.org/wiki/Hearing_range#Humans](https://en.wikipedia.org/wiki/Hearing_range#Humans)

~~~
justin66
Thanks. It's Wikipedia, but there's actually some good stuff there. The
important link from that page seems to be:

[https://asa.scitation.org/doi/full/10.1121/1.2761883](https://asa.scitation.org/doi/full/10.1121/1.2761883)

A decade has passed and it would be interesting to know how many people have
reproduced the results detailed in the abstract. I gave it a quick read and at
first glance it looks like an impressive experiment:

 _Hearing thresholds for pure tones between 16 and 30kHz were measured by an
adaptive method. The maximum presentation level at the entrance of the outer
ear was about 110dB SPL. To prevent the listeners from detecting subharmonic
distortions in the lower frequencies, pink noise was presented as a masker.
Even at 28kHz, threshold values were obtained from 3 out of 32 ears. No
thresholds were obtained for 30kHz tone. Between 20 and 28kHz, the threshold
tended to increase rather gradually, whereas it increased abruptly between 16
and 20kHz._

------
stevewillows
If you use foobar2000, you can use ABX Comparator to compare between various
bitrates and formats. Start with a lossless format and convert it.

[1]
[https://www.foobar2000.org/components/view/foo_abx](https://www.foobar2000.org/components/view/foo_abx)

------
musicale
I'd be happy with CD quality - usually I have more than enough download
bandwidth and storage space for it. Apple has had Apple Lossless for years but
Apple Music (and the iTunes store) still use(s) lossy compression. Movies are
now 4K, but Apple has been stuck on 256Kbps AAC since 2009. :(

Though as others have noted CD quality won't improve a terribly mastered
recording from the loudness wars.

------
gpjanik
I wonder about one thing. Sure, you can't hear above/under certain
frequencies, but these frequencies still resonate with parts of your body
(that are not ears), so you might feel them in ways other than hearing, and
their presence also generates harmonics. Not sure if it is observable to a
human, but just because you don't hear N hertz doesn't mean you can't hear its
harmonics, or that it doesn't affect your _perception_ of the rest of the
signal at all. Cutting off some frequencies can create patterns that are not
hearable per se, but might induce unwanted sensory feelings (*opinion, not a
fact). I think using physics to break this down doesn't make much sense, and
that the most practical way to settle the debate would be a double-blind test
on a statistical group of the so-called audiophiles.

------
rurp
I have a related question that someone here probably has a good answer for. I
recently heard a song I like on the radio while driving. Shortly after it
played I pulled up the same song on Spotify with my phone, plugged my phone
into the car stereo through the headphone jack, and played it. The quality was
MUCH worse. What's the likely reason for that?

I know very little about audio but my best guesses are:

1. The media cable was poor quality and/or playing music through the
headphone jack is worse quality than radio station airwaves.

2. Spotify was sending back poor quality audio, possibly because I was not on
wifi.

I'm sure the particulars matter but does anyone have a best guess as to why
the quality would be so much worse? I don't really expect mainstream radio
stations to serve up the highest quality audio, but maybe my assumptions are
way off.

~~~
ecocentrik
3. The A/D converter used by your car stereo headphone jack is low quality
and introduced sampling artifacts.

4. You have a high definition radio and were listening to a high quality
digital signal over FM as opposed to an FM analog signal.

It's probably a combination of all of these.

~~~
hunter2_
There is no "high definition radio." In the context of FM radio the H stands
for hybrid and the D stands for digital. The digital often sounds worse when
you compare them. Less noise for sure, but synthetic treble, almost as bad as
Sirius XM.

------
jrace
This same argument can be made for many things

Why have an engine in my car that can exceed all speed limits?

Why have a heating and cooling system in my house that can exceed any
comfortable level?

Why have lights that get brighter than I need?

Why have an internet connection that exceeds what I need now?

I keep all my music rips in uncompressed FLAC - 1) because I can, 2) because I
have the most flexibility (transcodes), 3) because it is capable of capturing
_more_ signal than the original contains.

No point in bottlenecking my audio just because _other_ people are unable to
appreciate it.

~~~
function_seven
Your examples all have good reasons though.

> _Why have an engine in my car that can exceed all speed limits?_

So I can drive faster than the speed limit if I want to. (And I do)

> _Why have a heating and cooling system in my house that can exceed any
> comfortable level?_

Well, you shouldn't oversize your HVAC system if you want to save money. But
it's nice to be able to achieve your target temp in a reasonable time period.
Any system that can heat your house by 10°F in 20 minutes will—as a side
effect—also be able to heat it to 90°F if you were to set it there.

> _Why have lights that get brighter than I need?_

Other people may need that extra brightness. You can choose dimmer lights if
you want. In any case, there's a clear difference between the two choices.

> _Why have an internet connection that exceeds what I need now?_

Again, other people may need that extra bandwidth. If you can choose a slower
one, then do so.

The point of this article is that 24/192 downloads do not improve anything.
It's like having a car engine with blue anodized cylinder heads. Nothing about
the performance will benefit from the color change of the heads. Or using gold
plated ducts for your heating system. The quality of the air is not affected
by that.

Our ears are not capable of hearing the differences when they affect only
frequencies above our range. Imagine if those lights boasted that they
rendered 200nm light more faithfully. That improvement is wasted on your eyes.

~~~
sjwright
More analogies—

It's like printing your brochures at 160,000 DPI instead of 2,400 DPI. The
difference is entirely imperceptible by the human sensory system without
artificial augmentation.

It's like capturing the invisible infrared light spectrum in a cinematic movie
camera so it can be projected back to cinemagoers as infrared light in the
theatre.

~~~
function_seven
Yours are much better than mine. Closer to the real issue with 24/192.

------
dusted
Mastering should be done for studio monitors. No, studio monitors do not all
sound the same, but they are somewhat neutral; they sound somewhat in the same
ballpark, which is the point of them to begin with: a flat frequency response
(which does not imply that they "sound flat", just that music mastered for
active subwoofers sounds flat on them).

This way, those who wish to hear how the music was intended to sound, will
have a somewhat decent chance of coming near to what it sounds like, and
people who want other flavours can still simply buy equipment which colors it
in the direction they desire.

------
tohnjitor
High-resolution audio is important to me as a sound designer because of the
ability to severely slow down a piece of audio without any aliasing or
stuttering.

At 96 kHz and higher, with certain samples I can slow down by 80% and it will
still sound good.

~~~
ZoomZoomZoom
Do you mean slowing down while lowering pitch (without resampling)? If so,
you're correct, as you bring harmonics from out of limit of human hearing back
and the result sounds natural.

But if you mean just changing the speed of the sound, than you need to change
the algorithm you're using. There should be no difference in quality due to
sources having different sample rates.
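A concrete number for the first case (my own made-up example): slowing playback without pitch correction divides every frequency by the slowdown factor, which is exactly how ultrasonic content in a high-rate recording becomes audible.

```python
def slowed_frequency_hz(original_hz, slowdown_factor):
    # playing back at 1/slowdown_factor speed scales every frequency down
    return original_hz / slowdown_factor

# a 40 kHz partial, inaudible at normal speed, lands at an audible 10 kHz
print(slowed_frequency_hz(40_000, 4))  # 10000.0
```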

------
gregf
I store stuff in FLAC; I got a large NAS at my house. Then I can down-convert
to any other format I might need. I enjoy the FLACs when I'm home.

------
elihu
Storage space is cheap, and we have the ability to record and store music in
24/192 or any other format we want. Even if it's useless to us now, it may be
of value some day when our genetically-engineered descendants can hear up to
40khz or when someone invents a direct condenser-microphone-to-brain
interface.

------
mixmastamyk
TL;DR: 24/192 is useful at the mastering stage, to avoid error creep from
mixing and effects. This is way beyond human hearing however, so scaling down
to 16/44.1 (CD quality) in the final mix for playback won't result in
noticeable degradation. CD quality was chosen on principles of human limits.

Perhaps we could give a bit of extra headroom for kicks, to widen the
envelope at the extremes. A useful amount would look more like 20/48 rather
than quadruple or sextuple the resolution. No one produces in this format
though; the next one up is typically 24/96.
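For reference, the rule of thumb behind these figures (standard PCM arithmetic, not from the comment): each bit of depth buys roughly 6.02 dB of dynamic range.

```python
import math

def dynamic_range_db(bits):
    # full-scale range of a linear PCM quantizer: 20 * log10(2 ** bits)
    return 20 * math.log10(2 ** bits)

for bits in (16, 20, 24):
    print(bits, round(dynamic_range_db(bits), 1))
# 16 bits ~ 96.3 dB, 20 bits ~ 120.4 dB, 24 bits ~ 144.5 dB
```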

~~~
mywittyname
Like how banks keep track of fractional pennies when calculating interest,
because rounding them off at time of calculation would introduce cumulative
error. Instead, they round them off at payment time.
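A sketch of that ledger effect (hypothetical rate and balance, purely illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")
rate = Decimal("0.00013")            # hypothetical daily interest rate
balance_rounded = Decimal("100.00")
balance_exact = Decimal("100.00")

for _day in range(365):
    # rounding to whole cents at every step quietly discards interest...
    balance_rounded = (balance_rounded * (1 + rate)).quantize(CENT, ROUND_HALF_UP)
    # ...while carrying the fractional pennies keeps it
    balance_exact *= 1 + rate

# the per-step-rounded balance ends up noticeably short of the exact one
print(balance_rounded, balance_exact.quantize(CENT, ROUND_HALF_UP))
```

Same fix as in audio: keep extra precision through the intermediate steps and round once at the end.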

~~~
mixmastamyk
Yes, as is done in image compositing pipelines as well, with higher
resolutions and more bits per pixel of color information until final output.

------
ehutch79
Please do not confuse the usefulness of 24/192 for playback and listening
enjoyment with its usefulness for recording and heavy 'in the box'
processing.

~~~
TheOtherHobbes
In the box processing uses 32-bit or 64-bit float. Fixed-point DSP processing
was a thing maybe ten years ago, and even then the standard was 56-bit. 24-bit
is nowhere close to good enough for ITB DSP.

That aside - the bit depth part of this article is silly and wrong. With an
unprocessed acoustic recording, the difference between 16-bit and 24-bit
sources is fairly easy to hear on professional equipment.

By the time rock/pop/IDM/etc has been mixed and mastered, the dynamic range
can be so limited you might as well distribute it at 8-bits. (Barely an
exaggeration, BTW.)

This is not even close to being true of jazz, orchestral, and folk recordings.
Typically recording engineers allow somewhere between 10dB and 20dB for peaks,
which means the actual recorded resolution of sustained non-peaky instruments
and quiet sections is somewhere around 12-bits - comfortably low enough to
hear quantisation errors, even with dither.
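That "around 12-bits" figure follows from the same ~6 dB-per-bit rule (my arithmetic, plugging in the headroom range given above):

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ≈ 6.02 dB of range per bit

def effective_bits(total_bits, headroom_db):
    # headroom reserved for peaks is depth the quieter material never uses
    return total_bits - headroom_db / DB_PER_BIT

print(round(effective_bits(16, 10), 1))  # 14.3
print(round(effective_bits(16, 20), 1))  # 12.7
```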

So for some genres, 16-bits is plenty. For others it's nowhere near good
enough.

In 2019, there's really no practical reason not to distribute music as 24-bit
FLAC for high-end use. If you're listening on mobile you may as well use one
of the better compressed formats. But for home playback, 24-bit is master-tape
quality with no significant downside.

Sampling rate is a more complex issue. 48k is significantly better than 44.1k
for the reasons mentioned.

Vinyl can go up to 100k or so, although not very accurately, and some people -
including some very highly respected professional audio equipment designers,
like Rupert Neve - believe that makes a difference.

But it's very hard to record ultrasonics "just in case" because the
microphone->preamp->ADC chain has to handle them accurately, and that rarely
happens. So there's very little of value up there in most recordings anyway -
although maybe more on vintage tape masters than on modern digital recordings.

Personally I'm equally happy with 48k or 96k. The 192k recordings I've heard
have been disappointing, possibly because of the intermodulation effects, but
also because jitter becomes more of a problem at high rates.

~~~
Applejinx
Very inadequately. There was a quadrophonic vinyl system that failed
commercially, which played back surround speakers using modulation of a 30K
carrier tone. You had to use a special (aka 'good') stylus, and it sort of
worked. The resulting carrier tones would go from 18 kHz to 45 kHz and the
fact that this worked at all is evidence that vinyl goes up that far if you
let it: wear will tend to scrub off that information unless it's a high energy
transient, in which case there's a big chunk of plastic refusing to be worn
off (but you'll dull it).

------
perfmode
My favorite music these days is left-field and lo-fi hip hop. High-fidelity is
pretty irrelevant. Music is produced, mixed, and mastered at home.

e.g. [https://pitchfork.com/reviews/albums/earl-sweatshirt-some-ra...](https://pitchfork.com/reviews/albums/earl-sweatshirt-some-rap-songs/)

------
Mediterraneo10
Thanks to this article, when I torrent music that is shared in 24/192 format,
I always resample to 24/96:

ffmpeg -i foo.flac -ar 96000 -acodec flac bar.flac

~~~
badfrog
I'm curious why you still torrent music when streaming is so widely available
and free/cheap?

I did a lot of torrenting back in the 2000s, but thinking back on it I spent a
ton of time finding things, organizing my file system, transcoding, editing
metadata, etc. I do not miss that hassle at all now.

~~~
Mediterraneo10
I spend about half of every year traveling, often in particularly undeveloped
countries and/or far from a mobile signal. Having my entire music collection
on a portable hard drive is more convenient for me personally than being bound
to streaming.

~~~
badfrog
Makes sense, thanks for answering!

------
bronlund
This is a good example of the Dunning–Kruger effect: a guy reads some books
on a subject, thinks he understands all there is to know about it, and thinks
everybody else is stupid.

------
turdnagel
Should this have a (2012) tag? Neil Young did end up releasing music in this
format with his Pono player, which failed[0].

[0]: [http://www.noise11.com/news/r-i-p-pono-neil-young-kills-off-...](http://www.noise11.com/news/r-i-p-pono-neil-young-kills-off-his-digital-player-20170423)

~~~
floatingatoll
Footer of the page suggests 2024, not 2012, as the latest edit date. You can
ask the mods to add it by emailing them (link in HN footer).

~~~
floatingatoll
20 _1_ 4, sigh.

------
pointe
Title should have [2012]

------
Ohren
I'm not a big believer in audiophile stuff, but when I'm listening to 24k
music I'm hearing new instruments, new sounds. It's not the case with
everything though. Am I retarded?

~~~
teilo
Confirmation bias. If you listened to the same track, but at 44.1/16, and were
_told_ it was hi-res, you would have the same reaction.

------
lfmunoz4
The author doesn't take into account that although you cannot hear above
20kHz or below 20Hz, that doesn't mean you cannot sense it. Sound after all is
just air vibrating, so obviously there must be an effect on the body.

------
jmull
Just to be a Devil's advocate (just a little)...

Sure, 24/192 doesn't physically improve the sound you are able to perceive when
listening to it.

But listening to music is a highly subjective emotional experience.

If a listener _cares_ about getting the best possible quality listening
experience and _feels_ downloading 24/192 music will achieve that, then the
listener _will actually enjoy_ music more knowing it is playing from a 24/192
source.

Listening to music is all about the feels.

Of course, I get how this can be abused. Next thing you know someone will be
selling 32/320 for twice as much, then 64/480 for three times as much, etc.

Not that this kind of article isn't still really important. It is. It provides
_a lot of reassurance_ to audiophiles that they can enjoy their music to the
maximum without buying into the 24/192 hype.

And that's what it's really all about: the best enjoyment of the music.

~~~
aw3c2
And like homeopathy, this should be scrutinized scientifically. There is
nothing wrong with eating tiny sugared balls, but don't tell others they
somehow have special powers.

------
PascLeRasc
I have a degree in electrical engineering, and I'm currently in a graduate
course on computer music systems, so I hope that qualifies me enough to avoid
the author's ad hominem attacks he seasons this stinkpiece with.

I can't stand seeing frequency response charts and scientific measurements in
articles about audio. Like my favorite audio reviewer says [1], I listen to
music for enjoyment and I talk about audio in subjective terms like "warm",
"lush", "wide soundstage" - not "unexpected 14.5kHz falloff". I don't go to a
restaurant and demand to see pH tests or measure the temperature of my steak
myself. I'm not going to do blind A/B listening tests because I don't care
about that. If you told me you liked one wine, would it be appropriate for me
to say "No you didn't. You don't have taste buds that can tell the difference
between that and any other wine."? Of course not.

Music is an entirely subjective experience and trying to distill it down to
data is both condescending and telling of how little an author cares about
music. Even if you don't care about subjective experiences of audio, why are
you so bothered by letting people like what they like? How does it affect your
life that I listen to music encoded at 24/192?

[1] [https://www.youtube.com/watch?v=RlCG2fK-abo](https://www.youtube.com/watch?v=RlCG2fK-abo)

~~~
vortico
So you think our senses transcend what tools have the ability to measure?
Maybe that was the case in 1970, but in 2000+, hearing (and vision) is
completely understood scientifically and far surpassed by measuring apparatus
at every frequency range. Saying otherwise is an appeal to what is called
_audio mysticism_ and is caused by placebo and confirmation bias, which was
mentioned in the article.

~~~
IWeldMelons
If that were true, the entire hi-fi industry would not exist. I myself have
built a number of amplifiers, and past a certain threshold, roughly 0.02% THD
(total harmonic distortion) at 20 kHz, there is very little correlation between
the THD numbers (which is what is usually measured) and the perceived quality
of the sound. Which means that while everything could in principle be measured,
no one measures the right thing (perhaps some subtle phase shifts, or some
almost immeasurable frequency-response deficiencies).

~~~
mrguyorama
Are you suggesting that things like healing crystals work, because the market
for them exists?

~~~
IWeldMelons
You are putting words in my mouth. I am telling you what I actually verified:
amplifiers with lower THD and IMD often sound worse than those with less
impressive measured parameters. Ergo, the measured parameters are not the
relevant ones. We need to find the parameters that actually matter and measure
them.

~~~
ovao
But “sounds worse” in this case is relevant only to your preferences and to
the preferences of some N of listeners, where N is totally unknown. The
parameters you believe amp manufacturers should measure are therefore only
relevant to _your idea_ about them.

THD, SNR, frequency response and many other metrics are easy: they define
either accuracy or precision. If you want more than that, add external effects
hardware or DSP. The purpose of the amplifier is to amplify.

~~~
IWeldMelons
It is relevant to a very large number of people, and that is why it is
reflected in the price of the equipment. A very well-known engineer who has
never peddled snake oil, Nelson Pass, sells his rather primitive audio
equipment at high prices.

You seem to be highly inexperienced in the subject. THD, IMD, etc. are
parameters measured on highly artificial signals: single sine waves, or mixes
of a small number of sine waves. Real music signals resemble white noise more
than clean sine waves, and there may be hundreds of different, very subtle
modes of distortion that are very difficult to measure but which have a very
significant influence on the perceived quality of the sound.
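To make the methodology concrete: a THD figure is produced by driving the device with a single sine wave and summing the harmonic energy in the output spectrum. Here is a toy sketch, with a made-up quadratic nonlinearity standing in for an amplifier (real measurements use calibrated hardware, not NumPy):

```python
import numpy as np

fs, f0 = 48_000, 1_000
t = np.arange(fs) / fs                 # one second -> bins land exactly on Hz
x = np.sin(2 * np.pi * f0 * t)         # the artificial single-sine test signal
y = x + 0.001 * x**2                   # toy nonlinearity: adds a 2nd harmonic

spectrum = np.abs(np.fft.rfft(y)) / (len(y) / 2)   # scaled to amplitudes
fundamental = spectrum[f0]
harmonics = np.sqrt(sum(spectrum[k * f0] ** 2 for k in range(2, 6)))
thd = harmonics / fundamental
print(f"THD = {thd:.4%}")
```

Note what the measurement does and does not capture: it characterizes the response to one steady tone, which is the parent comment's point that a broadband, noise-like music signal can excite distortion mechanisms this procedure never sees.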

~~~
ovao
It’s not clear to me how one person selling boutique equipment at high prices
is indicative of any flaw in measurement methodology. There are many companies
doing this, of dubious real value to consumers. How is Nelson Pass different,
and how is that relevant to what we’re discussing?

