24/192 Music Downloads and why they make no sense (2012) (xiph.org)
295 points by zpiman 42 days ago | 314 comments

The problem with music sounding bad doesn't really have much to do with the distribution format: V0, V1, or 320 kbps MP3s should sound pretty much the same as 16-bit FLAC. You can only hear the difference between MP3 and FLAC at shitty bitrates no one uses anymore (like 120).

The reason a lot of recent digital music sounds bad is the intentionally terrible mastering. Since everyone is listening on crappy earbuds, they compress the hell out of it and destroy all dynamic range. This is why, when downloading music, you should avoid remasters (there are some exceptions, like the Beatles mono and stereo boxed sets that came out a while ago) and go for the first-edition presses.

This is also why modern vinyl releases sound a lot better than digital: they are mastered differently, since it's assumed everyone is going to be listening on good equipment.

That being said, I think FLAC is generally a good choice for a music collection. You can't transcode MP3s without killing the quality, so if you ever want to convert formats (like for an MP3 player), you should stick with FLAC (16-bit, 48 kHz).

The original idea of 24-bit/192 kHz FLAC was for vinyl rips, where hypothetically you might be getting more information.

> Since everyone is listening on crappy earbuds, they compress the hell out of it and destroy all dynamic range.

More compression and less dynamic range is beneficial for certain environments. Noisy subways. Watching TV in a noisy downtown apartment. Basically, crappy, noisy environments. In those, compression will help you actually hear the music and speech. However, the fact that this should be done in the master is an artifact of an earlier time. Now that signal processing is small and cheap enough to be ubiquitous, music should be mastered for the best equipment, then appropriate signal processing should be done by playback.
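The "signal processing done at playback" idea is basically a dynamic range compressor running on the device. A toy sketch of a feed-forward compressor follows; all parameter values here are made-up illustrations, not any product's actual settings:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0, attack=0.1, release=0.005):
    """Toy feed-forward compressor for samples in [-1, 1]. The parameter
    values are illustrative defaults, not any product's real settings."""
    env = 0.0
    out = []
    for x in samples:
        # one-pole envelope follower: fast when the level rises, slow when it falls
        coeff = attack if abs(x) > env else release
        env += coeff * (abs(x) - env)
        level_db = 20 * math.log10(max(env, 1e-9))
        # above the threshold, shave off (1 - 1/ratio) of the overshoot
        gain_db = min(0.0, (threshold_db - level_db) * (1 - 1 / ratio))
        out.append(x * 10 ** (gain_db / 20))
    return out
```

Loud passages get pulled down toward the threshold; anything below it passes through untouched, which is exactly why a noisy-subway listener would want this running locally rather than baked into the master.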

The problem is that there is a lot of older equipment out there that wouldn't be able to do this. So the signal gets compressed before distribution, as a compromise for the least common denominator of equipment out there. Otherwise, a big chunk of the population would think the master sounds like crap. To them, in their particular situation, it would.

EDIT: Come to think of it, the current system, where most music is more compressed, but where the people who care can still get a high dynamic range version, is a very good compromise. The problem is that the latter group's selection isn't quite filled out by the market.

> More compression and less dynamic range is beneficial for certain environments. Noisy subways. Watching TV in a noisy downtown apartment. Basically, crappy, noisy environments.

Good point, I think particularly for movies or such this makes sense. I want to be able to watch a movie such that I hear what the characters are speaking, without blowing my windows out of their frames during some action scene. Yes, I realize in real life explosions, guns etc. are really loud, and this makes the movie less realistic.

> Yes, I realize in real life explosions, guns etc. are really loud, and this makes the movie less realistic.

Really loud? Ear-damaging loud! When the realism becomes actually endangering to your health, your escapist media has gone a bit too far.

Film is mixed with much more dynamic range than music, mostly due to the fact that the playback environment is typically quite controlled.

This means usually -6 dBFS is the maximum loudness, with a short-term maximum of -1 dBFS and dialogue at -9 dBFS.

Music today should be mixed according to EBU R128 (at least in Europe and for radio), which is a serious win against loudness maximizers and limiters.

I’d argue for in-device DSP compression for the small-earbud people, and give the whole dynamic range to the rest of us : )

But there are easy ways to kill dynamic range with an algorithm; on Windows this is called "loudness equalization." On the other hand, there is no way to go back from little dynamic range to more dynamic range.

So I think it makes sense that records are mastered with a lot of dynamic range, so the people who actually enjoy music can enjoy it, and the people who don't can just equalize it themselves.

> But there are easy ways to kill dynamic range with an algorithm; on Windows this is called "loudness equalization." On the other hand, there is no way to go back from little dynamic range to more dynamic range.
>
> So I think it makes sense that records are mastered with a lot of dynamic range, so the people who actually enjoy music can enjoy it, and the people who don't can just equalize it themselves.

You do realize that you just restated my comment, but left out the analysis of the current day situation? BTW, equalization doesn't directly change dynamic range. Equalization is meant to change frequency response. It can change dynamic range by causing clipping.

You're right except for one thing: nobody here was talking about altering frequency response. Windows loudness equalization is not equalization, despite the silly name. Ironically, I imagine Microsoft specifically didn't call it compression because most consumers only think of the other compression. Good grief.

> You're right except for one thing: nobody here was talking about altering frequency response.

If you're talking generally about "loudness equalization", then in many cases it really is equalization. I don't know about anyone else, but I've been talking generally about loudness equalization the whole time.

> Windows loudness equalization is not equalization, despite the silly name. Ironically, I imagine Microsoft specifically didn't call it compression because most consumers only think of the other compression. Good grief.

Well, you learn something new every day. In this case, it's yet another time marketers have completely diluted the technical meaning of terminology.

This reminds me of how accessible the equalizer was in Winamp. I spent a lot of time creating custom configurations for my music. I had no expertise and the results were questionable but it was fun. I wish Spotify was more fun.

Dynamic range would be a solved problem if people cared.

You attach some metadata to the audio file that says certain parts should be level boosted in a noisy environment and there you go.

Just give everybody the full range and use DSP compression on devices, with well-defined, sensible defaults.

Similar to what we did with vinyl back in the day, where we wanted to fit more music onto the disc and applied the standardized RIAA filter when cutting the master; every phono preamp reverses this effect.

The thing is, people need to have specs they can mix and master for. Making something up makes mixing unpredictable, and that is bad.

Or, just turn on software compression on a modern device with such a feature implemented, and there you go.

And this is not a theoretical solution, this is literally just ReplayGain operating in track-level mode.

ReplayGain only changes track gain, not dynamic range.
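To make the distinction concrete: a ReplayGain-style adjustment is a single scalar gain, so the peak-to-RMS ratio (one crude measure of dynamic range) is preserved exactly. A quick sketch with made-up sample values:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: one crude proxy for dynamic range."""
    peak = max(abs(x) for x in samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# hypothetical track excerpt, normalized to [-1, 1]
track = [0.9, 0.1, -0.2, 0.05, 0.8, -0.9, 0.02, 0.1]

# apply a ReplayGain-style fixed -6 dB gain to the whole track
gained = [x * 10 ** (-6.0 / 20) for x in track]

# the crest factor is unchanged: gain shifts level, not dynamics
assert abs(crest_factor_db(track) - crest_factor_db(gained)) < 1e-9
```

A compressor, by contrast, applies a level-dependent gain, which is precisely what changes the crest factor.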

In the 1990s, car stereos sometimes had a "loudness" button which did exactly as you suggest.

The loudness buttons were more to equalize than to affect the dynamic range.

At lower volumes, we perceive mid-range frequencies to be more prominent than at higher volumes. The loudness buttons would add lows and highs and/or lower mids so that the music would "sound better" at lower volumes.
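A crude sketch of that idea: mix a low-passed copy of the signal back in, so bass is lifted at low listening volumes while the highs pass through roughly unchanged. The cutoff and gain values are arbitrary illustrations, not any real loudness-button spec:

```python
import math

def loudness_boost(samples, sample_rate=44100, bass_gain_db=6.0, cutoff_hz=200.0):
    """Crude 'loudness button': add a low-passed copy back in to lift bass.
    One-pole low-pass; boost approaches bass_gain_db at DC and ~0 dB well
    above the cutoff. Values are illustrative, not from any real device."""
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    g = 10 ** (bass_gain_db / 20) - 1
    lp = 0.0
    out = []
    for x in samples:
        lp = (1 - a) * x + a * lp   # one-pole low-pass state
        out.append(x + g * lp)      # dry signal plus boosted lows
    return out
```

Note this is pure equalization: it changes the tonal balance, not the dynamic range, which is exactly the distinction being made about loudness buttons.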


Moreover, stereos had this in 1980 and probably a lot earlier.

Going back, seemingly forever, home stereos also had a "loudness" button. Many still do. Usually, there's just some equalization involved, so it's not exactly what I'm suggesting.

Correct, the "loudness" function is a compensation in the lower frequencies relative to the volume level; most modern DSPs have that.

Isn’t radio a big factor in this? Broadcast radio is noisy and has pretty limited dynamic range. This may be a cause.

Radio is actually compressed in real-time by use of broadcast compressors, so they solve the problem rather directly.

Exactly, radio would be relatively unaffected by masters having high dynamic range.

Oh I had no idea. Fantastic.

For those interested: if you produce anything for radio in Europe, you need to follow the EBU R128 guidelines, which are actually quite well thought out: https://www.iconnectivity.com/blog/2017/6/10/ebu-r128-the-im...
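For a rough feel of what an R128-style meter reports: integrated loudness is essentially mean-square power expressed in dB. The sketch below deliberately omits the K-weighting filter and the gating stages the real spec requires, so a compliant meter will read somewhat differently:

```python
import math

def integrated_loudness_lufs(samples):
    """Very rough stand-in for EBU R128 integrated loudness: mean-square
    power in dB with the BS.1770 offset, but WITHOUT the required
    K-weighting filter and gating, so real meters will differ."""
    mean_square = sum(x * x for x in samples) / len(samples)
    return -0.691 + 10 * math.log10(mean_square)
```

A full-scale sine comes out around -3.7 here; a real K-weighted meter reads -3.01 LUFS for the same signal, which gives a sense of how much the omitted filter matters.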

> then appropriate signal processing should be done by playback.

Microchips for leveling audio gain existed in the 1980s and were found in consumer equipment like TVs.

> This is also why modern vinyl releases sound a lot better than digital: they are mastered differently, since it's assumed everyone is going to be listening on good equipment.

I'm going to disagree here. They are mastered differently because the physical limitations of the media require them to be mastered differently _and_ it just so happens that the physical limitations help limit mastering tricks in a way that produces less fatigue-inducing, brick-wall-limited mastering output.

A heavily compressed master creates huge peak-to-trough cuts in the vinyl which can cause the needle to literally jump out of the groove, even with RIAA limiting applied.

The assumption of the gear is definitely not true in any mixing or mastering experience I've had. Mastering tries to balance the final product across a range of listening devices, not some unobtainable ideal system. NS10s are kicking around because they sound like arse and make for mastering results that work well on car stereos and other "inferior" systems.

You can put brickwalled audio on a vinyl record and have it play just fine if you cut it at a lower volume. This negates the reason for mastering it that way in the first place, but it's cheaper than redoing the mastering, and many people buying vinyl only do so for the image.


That's a fair point. And I totally agree on that second statement. :D

NS10s are kicking around because they're unusually good at time domain performance. For instance, they have miserable bass not only because they're small boxes and smallish drivers, but because they're an infinite baffle design, which is significantly better for time domain performance than bass reflex. The enclosures also dissipate energy quite well, and it's well established that this contributes to being able to 'translate' mixes: you get a better sense of what's actually in the track using NS10s than you might with many 'better sounding' speakers.

They spotlight midrange with a presence peak right where the ear's most sensitive, and this is in part because the woofer is actually designed more like a midrange: thin paper, conical rather than curved cross-section, both of which also contribute to 'sounding bad' tonally while delivering energy more unforgivingly.

They're not really about mastering, though, they're about mixing because if you have elements out of balance it will be screamingly, annoyingly obvious on NS10s. That's not down to their bad-soundingness, it's down to their ability to be incredibly unforgiving.

That may be true, but I've seen some vinyl mastering jobs that looked as bad as digital. I won't claim to be a mastering engineer or anything, but after comparing many vinyl releases and digital releases, it seems like there is something going on besides the physical limitations of the medium.

> modern vinyl releases sound a lot better than digital

Look, I actually grew up with vinyl, 4-track tape, and audio cassettes. Unlike most folks being all trendy and hip nowadays, I have years of experience using that stuff.

Analog is shit. It's noisy, has a ton of distortion, and it gets shittier every time you copy it. Oh, and if you just keep it in storage, guess what, it decays just by sitting there (vinyl collects dust and scratches when used, slightly different).

In 2002 I built my DAW (digital audio workstation) and recorded my first tracks in 24 bit digital. Zero noise, zero distortion, no generation loss. It was like alien technology.

Digital is better in every way, by a wide margin. Period.


Current mastering practices prevailing in the industry make no difference on this matter. Analog is still garbage. Find digital copies that are mastered properly and you'll be fine.

Couldn’t agree more. I grew up with cassette and LP. First time I heard a CD, specifically Pink Floyd’s Money with the cash register, it was jaw dropping. LPs are cool for the artwork, but that’s it.

That being said, I still only buy music in CD, due to all the hassle of DRM and playback. I just want to drop in a CD and listen to the entire album, not futz with computers, encoders, and software.

I have a simple CD player, kit built tube amp, and homemade single driver speakers.

> I still only buy music in CD

That's what I do. I tend to favor old master copies.

I rip them to both FLAC and MP3. The former is for listening at home, the latter for mobile scenarios. I store everything on the Linux server at home, and share via UPnP. VPN into the home network gives me access from anywhere.

Foobar2000 is my preferred player on Windows, BubbleUPNP on Android.

> kit built tube amp

What's the distortion on that thing?

The vast majority of tube kit schematics are very old tech, stuff that engineers from the 1930s would recognize. Their THD (total harmonic distortion) is very high. What is known as "tube sound" is basically just huge THD, along with a specific distribution of energy across the harmonic orders.

It's fun as a hobby, and for the satisfaction of building stuff on your own, but even the simplest schematics built on modern principles vastly outperform these things by essentially all metrics.

Some tube amps are built specifically for low THD, but unfortunately they are rare. When in doubt, use solid state.
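THD itself is straightforward to estimate from a capture: measure the level at the fundamental and at its first few harmonics, then take the ratio. A minimal sketch, assuming the capture holds a whole number of cycles of the test tone:

```python
import math

def thd_percent(samples, sample_rate, fundamental_hz, n_harmonics=5):
    """Estimate total harmonic distortion (%) by probing the DFT at the
    fundamental and its first few harmonics. Assumes the sample buffer
    contains a whole number of cycles of the test tone."""
    n = len(samples)

    def amplitude(freq):
        # single-bin DFT magnitude, scaled to peak amplitude
        k = 2 * math.pi * freq / sample_rate
        re = sum(x * math.cos(k * i) for i, x in enumerate(samples))
        im = sum(x * math.sin(k * i) for i, x in enumerate(samples))
        return 2 * math.sqrt(re * re + im * im) / n

    fund = amplitude(fundamental_hz)
    harm = math.sqrt(sum(amplitude(fundamental_hz * m) ** 2
                         for m in range(2, n_harmonics + 1)))
    return 100 * harm / fund
```

Feeding it a pure sine gives roughly 0%; adding a 10%-amplitude second harmonic reads 10%, which is the kind of single-number figure a THD spec quotes.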

> homemade single driver speakers

I used to build everything myself back in the day. Speakers and amps were just part of it. I also did automation, radio frequency (I'm a licensed ham radio operator), digital circuits from logic gates to DAC/ADC to systems with microprocessors to small computers. It's a small miracle I didn't actually go into electrical engineering.

You really need multiple drivers and likely a subwoofer also, to cover the whole audible spectrum.

KD4HSO here. I do RF circuit design for a living, but built the tube kit just as I have never messed with any non-microwave tubes. It’s a single ended class A with 300B output tubes. Have not measured the THD yet.


I’m not worried about the whole spectrum as I only listen to classical on it, primarily string quartets. The speakers are folded horns, so bass response is reasonable.

Anything like Dream Theater or Iron Maiden is in the car. Would definitely be solid state with a subwoofer for that.

> mp3s should sound pretty much the same compared to 16-bit flac

I did a blind test between 128 kbps MP3, 320 kbps MP3 and FLAC with classical music. While it's true that the 128 kbps MP3 is obvious to spot, it also isn't too difficult to pick out the 320 kbps MP3. FLAC just sounds better. Described as a feeling, FLAC is more voluminous and doesn't feel cut short. For fun, I also let my parents take this test, and they could tell, too.

That's why I converted all our CDs to FLAC. Storage is cheap anyway.
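For anyone wanting to sanity-check such a blind test: an ABX run is a series of coin-flip trials, so the probability of a given score arising by pure guessing is a one-sided binomial tail. A small helper:

```python
import math

def abx_p_value(correct, trials):
    """Chance of scoring at least `correct` out of `trials` ABX rounds by
    pure guessing (one-sided binomial tail, p = 0.5 per trial)."""
    return sum(math.comb(trials, k)
               for k in range(correct, trials + 1)) / 2 ** trials
```

For example, 14 correct out of 16 has about a 0.2% chance of being luck, while 10 out of 16 is around 23%, i.e. nowhere near proof of an audible difference.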

There is some definite perceptible loss of accuracy in the treble even at V0 or 320. There's a song by Planet Funk with a synthesized treble effect that sounds quite different on MP3 vs. FLAC; I think it was "Who Said? (Stuck in the UK)".

Other than that MP3 (or Vorbis or Opus, which would probably do better on that song) is great for portability, but I'd still use FLAC for storage.

Interesting. Are you sure you were using the right settings and a recent version of LAME for this?

The only artifact I can reliably hear in 320 kbps MP3s is pre-echo, for instance with castanets, and only in a few very specific situations. Apart from this, V2 and above sounds completely indistinguishable from the original to me.

Which is great... for you.

But what if in the future you could hear the differences? And now your entire collection is in MP3 V2. Then what?

There's no reason not to rip everything to lossless FLAC these days.

But if you're happy with your audio now, great!

The reverse is overwhelmingly more likely. For 8 kHz+ frequencies (where most compression takes place), your hearing declines pretty quickly once you hit 30. Whatever real compression differences you're hearing at 20 will be largely gone 10 years later, and almost certainly 20 years later.

Notch Inhibition Induces Cochlear Hair Cell Regeneration and Recovery of Hearing after Acoustic Trauma


Stick to FLAC and keep your fingers crossed for cochlear regeneration tech ;)

But in 20 years there will be new 20-year-olds. And one day there may be 40-year-olds that didn't damage their hearing with leaf blowers and rock concerts. And there may be a future compression algorithm so good that no one can tell, but you need a lossless original to take advantage of it.

Not quite: 55+, not 30, is where we would expect to see presbycusis.

And even though you may have diminished hearing, you still have awareness of frequencies above 8 kHz.

At 30 we're looking at typically 5 dB loss above 8 kHz, at 40 more than 10 dB. Sure, we're not talking about total hearing loss, but the reduced sensitivity is going to overwhelm any real artifacts from 16/44 that one might have genuinely detected at 20.

Imagined artifacts will probably remain or might even increase though, since people typically have much higher spending power at 40. :)

Any evidence of this?

I worked in audiology for my entire 30's and tested my own hearing (biocalibration) once a week.

Never once saw a reduction in my hearing thresholds (250 Hz to 8 kHz).

Nor have I read any supporting documentation that agrees with a reduction of 8 kHz thresholds before the age of 55.

There was never a reason to rip in anything less than a lossless codec! Lossy codecs are only for consumption in technically restricted conditions; lossless is for storing and listening.

You also lose some of the fullness on the extreme low end, it's noticeable even with a fairly low end subwoofer.

Something also ends up missing in the midrange. I was working on a track once where all I had was a 320 kbps MP3 version of the vocals. At some point I replaced it with a FLAC copy of the same vocal recording, from the same original WAV source, and the difference was noticeable right away without changing any of my equalizer settings or anything. It just punched through more, and the clarity improved.

The low end thing just isn't true at all, and I don't know where that myth comes from.

I have a room-corrected setup with two properly adjusted subs, and MP3 does just fine on deep bass content.

Regarding solo vocals, the history of the MP3 format says: "The song "Tom's Diner" by Suzanne Vega was the first song used by Karlheinz Brandenburg to develop the MP3. Brandenburg adopted the song for testing purposes, listening to it again and again each time refining the scheme, making sure it did not adversely affect the subtlety of Vega's voice".

That's not to say that they did a perfect job, but human voice was a very high priority.

And the encoders have continued to improve. So an early encoder may have messed with the voices, but a reasonably recent version of LAME does much better.

MP3's real weakness is fast sharp transients, such as castanets and harpsichord in sparse recordings, where no other sounds can mask them. It's a fundamental weakness in the format, and cannot be completely solved.

Newer formats such as Ogg Vorbis, Opus and AAC do not suffer from this weakness.

>The low end thing just isn't true at all, and I don't know where that myth comes from.

Well for me it came from running multiple copies of the same bass heavy tracks encoded in different formats through spectrum analysers. But I guess those lie?

>Regarding solo vocals, the history of the MP3 format says: "The song "Tom's Diner" by Suzanne Vega was the first song used by Karlheinz Brandenburg to develop the MP3. Brandenburg adopted the song for testing purposes, listening to it again and again each time refining the scheme, making sure it did not adversely affect the subtlety of Vega's voice".

Human voices come in a wide range of tones and frequencies. Optimizing something for one voice doesn't mean all voices will benefit from the same optimizations. The specific track I was referring to had a lot of variation in high and low notes. You can tell me all you want what I did and didn't hear.

>"Well for me it came from running multiple copies of the same bass heavy tracks encoded in different formats through spectrum analysers. But I guess those lie?"

Of course it's going to look different in a spectrum analyzer, the whole point of lossy compression is to discard parts of the audio to save space.

You can't evaluate the quality of a lossy codec by looking at spectrograms. They're designed to fool human ears, not measurement software.

"Since everyone is listening on crappy earbuds, they compress the hell out of it and destroy all dynamic range."

For readers of your comment and your child comments, it is important to note that the compression you are talking about in that sentence is not the same as the compression that most people are thinking of when discussing digital file formats (mp3, etc.).


Here is a link[0] to the CD dynamic range database, where you can check how a particular mastering fares.

[0] http://dr.loudness-war.info

There are of course multitude of other factors impacting mastering quality, but as far as DR goes, this DB is a pretty good source.

This website is a useful resource, but it has some limitations. The algorithm used does not take into account the frequency response of the human ear. If a track contains a lot of very deep bass, it's possible for it to have a low DR score but still sound like it has a high dynamic range. The measurement can also be fooled by surface noise and filtering when measuring vinyl.
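For intuition, the DR figure is at heart a crest-factor measurement. Below is a deliberately simplified stand-in; the official algorithm works per channel on 3-second blocks and averages the loudest 20%, so its numbers will not match this exactly:

```python
import math

def dr_estimate(samples):
    """Crude stand-in for a DR-style metric: peak level minus RMS level in
    dB over the whole buffer. The real dr.loudness-war.info algorithm uses
    per-channel 3 s blocks and the loudest 20%, so results will differ."""
    peak = max(abs(x) for x in samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20 * math.log10(peak / rms)
```

A sine wave scores about 3 dB, a fully brickwalled square wave scores 0, and dynamic material scores well into double digits, which matches the intuition behind the database's numbers.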


MP3s at any bitrate cannot properly reproduce certain sounds: pre-echo can happen, and it is fairly easy to train yourself to notice it. Pretty much any modern lossy format can be made transparent at high enough bitrates, though.


Pre-echo will appear on very fast attacks; clapping, some types of drums, and castanets are usually cited as the worst. Some styles of electronic music have fast attacks as well.

With older encoders, cymbals were terrible, but LAME's psychoacoustic model is pretty good at masking those artifacts these days (at least at high bitrates).

For examples, some quick searching on Hydrogenaudio found a couple of songs reported to be ABX-distinguishable with LAME, and after previewing them, they do in fact have a lot of quick attacks:

Human Disease by Slayer (Some very fast and cleanly played drum parts; also it's mostly not snares; snares sound terrible at low bitrates, but I personally can't distinguish high-bitrate snares from uncompressed snares)

Show Me Your Spine by PTP (the "instrument" used for the bass rhythm has an unnaturally short attack).

Any sound that starts or ends very suddenly. Drums of any kind will sound kind of weird in MP3. Applause used to be the bane of MP3. Encoders got better at it, but it's apparently impossible to get rid of completely. As far as I understand it, it's the acoustic equivalent of the ringing artifacts you can see in low-quality JPEGs or old MPEG videos.

Playing a set of high-frequency pure sine waves is the failure point for MP3, AAC, Vorbis, and Opus. Dial-up noises are close to this, which you can try encoding/decoding. And this is no surprise, since the point of the V.90/92 protocols is to cram as much information as possible into analog frequencies, and the point of psychoacoustic lossy codecs is to remove the least efficient frequency information of our log-frequency scaled ears.

But this is kind of a pedagogical example. Not the point of who you're responding to.

So, a high frequency DTMF[0] construction should render a pre-echo? What’s the lowest definition of “high” pairs that would satisfy showing this encoding breakage but be hearable by a majority of people[1]?

[0] https://en.wikipedia.org/wiki/Dual-tone_multi-frequency_sign...

[1] https://en.m.wikipedia.org/wiki/Presbycusis

No, pre-echos are unrelated to high-frequency sine waves. And DTMF aren't what I'd consider high-frequency. I'm talking about 4-20kHz.

Ahhh... I think I misunderstood your “dialup noises” as dial tone noises. You’re talking about the high pitched squelch-y initial negotiation of a modem.

V.90 and V.92 both sound largely like spectrally shaped noise. Even if you give very regular inputs, the data is compressed and scrambled, then spectrally shaped and then output as 8-bit 8kHz log-weighted PCM.

https://www.youtube.com/watch?v=MtiRBFWkRKs beginning at 54 seconds.

I don't really hear the pre-echo.

Actually I love that I don't hear compression artifacts in music or see them in JPEGs. Makes my gear so much cheaper. :-)

My anecdotal experience is that music sounds the best to me currently via my good headphones connected to either my phone's good DAC, or to my PC's separate sound card.

Phone = LG V20 which has an [ES9218](https://www.androidauthority.com/lg-v20-quad-dac-explained-7...) chip for its DAC.

Headphones = Sennheiser HD380 pro, pretty good for under $200.

Soundcard = "ASUS Xonar DGX PCI-E GX2.5".

Sound source = FLAC, Google Play Music subscription

I'd like to upgrade to a really nice DAC and headphone amp to connect to the PC via USB, but that's way down the list of spending priorities.

I know that I'd probably have trouble distinguishing between audio components and sources in a blind listening test, and of course I have tinnitus, but I think my current "setup" if you can call it that is good enough for most stuff.

I am absolutely with you on the loudness wars though. It's a joy to listen to stuff that has real dynamic range, but it's not something I obsess over when I'm listening to music in the car for instance.

Vinyl releases are mastered to be less loud than digital releases because vinyl cannot reproduce mixes that digital systems can. The side effect is that lots of times they sound better. I think in a perfect world an artist would offer you vinyl if you want it, along with a digital version of the vinyl master. You could skip the whole ripping vinyl process entirely.

One of the "nice" things about being hard of hearing is that I can't hear any difference between flac and mp3s down to around 96 or lower for most music, so hypothetically I don't have to worry about this stuff.

Of course in practice I do still keep flac rips around because I'm a data hoarder and what if I decide I want to reencode all my music to opus or something? But at least I have the option to stop caring.

So vinyl has only about the equivalent of 10-14 bits of resolution (I don't remember the exact number I heard, and it has been a while), and 192 kHz can capture far more than the waveforms within our hearing range. The only use I've found for such high-resolution audio is as base material for further effects processing... certain distortion units and whatnot that operate on a sample level can sometimes give nicer output when fed super hi-res audio.

No, not at all. Vinyl has a wildly inconsistent noise level where rumble predominates, and people conflate this with bits of resolution. Vinyl's behavior is not easily pinned down relative to 'bits of resolution', because the noise floor is skewed so intensely towards low frequencies.
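For reference, the usual way "bits of resolution" gets assigned to an analog medium is by inverting the textbook full-scale-sine relation SNR ≈ 6.02·bits + 1.76 dB, which, as noted, glosses over vinyl's frequency-skewed noise floor entirely:

```python
def equivalent_bits(snr_db):
    """Textbook mapping between SNR and PCM bit depth for a full-scale
    sine wave: SNR ≈ 6.02 * bits + 1.76 dB, inverted. A single-number
    simplification that ignores how the noise is distributed in frequency."""
    return (snr_db - 1.76) / 6.02
```

Plugging in a hypothetical ~60 dB vinyl SNR gives roughly 9.7 "bits", which is where figures like "10-14 bits" come from, and also why they are misleading when the noise is concentrated in low-frequency rumble rather than spread flat.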

To say nothing of how generally available vinyl records (especially old ones) have wildly different rms/peak measurements than generally available CDs and digital recordings have. This is partly 'Loudness War' and partly vinyl's inability to even do the loudness war thing and cope with blocks of heavily limited audio in the first place.

So you'll end up with a record where you can play it, and the peaks are 30 freaking dB over the RMS and it sounds amazingly open and uncompressed… while there's also groove noise that is every bit as loud as the music is (admittedly annoying).

A person arguing the vinyl/CD dynamic range thing would make the claim that the record was equivalent to maybe TWO bit digital audio, or four bit. The most cursory listen to such a comparison will show how inadequate it is.

2-bit digital audio? Like only four total values of dynamic range total? 4-bit meaning only 16 total possible amplitudes? Is that even physically possible? ;)

I agree that the quality of the record -- AND its playback equipment -- among other physical factors will dramatically affect the numbers. My "10-14" quote only applies under ideal conditions: a newly minted, unplayed disc on a high-quality preamp which, together with the turntable and clean needles, can produce a very low noise floor. Obviously I'm never going to get this with my dad's old Dead vinyl that he played to death, or with cheap needles, or with those crappy Crosley turntables at Target...

Anecdotally, on my home system with clean records, I can make nearly-CD-quality recordings, with the differences only really apparent on flat studio monitors or a good Hi-Fi.

> 2-bit digital audio? Like only four total values of dynamic range total? 4-bit meaning only 16 total possible amplitudes? Is that even physically possible? ;)

Surprisingly, yes. With noise shaping (https://en.wikipedia.org/wiki/Noise_shaping), very coarsely-quantized digital audio can produce high signal to noise ratios in the audible frequencies, via quantization techniques that push the error towards ultrasonic frequencies.

This doesn't violate information-theoretic limits because noise shaping requires very high sampling rates. The 1-bit Sony DSD format (https://en.wikipedia.org/wiki/Direct_Stream_Digital) used a 2.8 MHz sample rate.

In the case of vinyl, the effective sample rate is physically limited by the (linear) record speed divided by the vinyl grain size, and to a rough approximation the bit depth would be log of the maximum groove amplitude divided by the grain size. However, the analog cutting mechanism would greatly limit the opportunity for dithering and noise shaping -- for example a needle cannot cut a wave shorter than the tip size.
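The noise-shaping trick can be shown with a toy first-order delta-sigma modulator: the output is only ever ±1, yet its running average tracks the input, because the quantization error is accumulated and fed back, i.e. pushed toward high frequencies where a later low-pass filter can remove it:

```python
def delta_sigma_1bit(samples):
    """Toy first-order delta-sigma modulator: every output sample is +/-1,
    but the quantization error is integrated and fed back, so the low-
    frequency content of the input survives and the error is shaped
    toward high frequencies."""
    acc = 0.0
    out = []
    for x in samples:
        acc += x - (out[-1] if out else 0.0)  # integrate input minus feedback
        out.append(1.0 if acc >= 0 else -1.0)  # 1-bit quantizer
    return out
```

A constant input of 0.3 produces a ±1 pulse stream whose mean converges on 0.3, which is the essence of how a 1-bit format like DSD represents high-SNR audio.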

Plus, the groove on the outside is moving past the needle faster than the inside. The best sounding track on an album is going to be the first one.
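That speed difference is easy to put numbers on. Assuming typical LP groove radii of roughly 14.6 cm (outer) and 6 cm (inner), which vary by pressing, at 33⅓ rpm:

```python
import math

# Linear groove speed past the needle at 33 1/3 rpm.
# Radii are typical LP values and vary by pressing.
rpm = 100 / 3
rev_per_sec = rpm / 60
outer_speed = 2 * math.pi * 0.146 * rev_per_sec  # m/s at the first track
inner_speed = 2 * math.pi * 0.060 * rev_per_sec  # m/s at the last track

print(round(outer_speed * 100, 1), "cm/s at the outer groove")
print(round(inner_speed * 100, 1), "cm/s at the inner groove")
```

The outer groove moves roughly 2.4 times faster than the inner one, so the same physical groove wiggle encodes proportionally more bandwidth at the start of a side.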

Yes, but we are not talking about the mixing/mastering sample rate, but the distribution sample rate/resolution.

High resolution is absolutely important in some mixing scenarios to prevent pre-ringing and aliasing in the effects chain (distortion effects or otherwise). But once you have your hi-res master, there is zero advantage to distributing it that way. At that point, a 48 kHz/16-bit FLAC is as good as it gets.

> This is also why modern vinyl releases sound a lot better than digital: they are mastered differently, since it's assumed everyone is going to be listening on good equipment.

I had always assumed they were taking the same master and just carving it into vinyl. I wonder what percentage of "modern vinyl releases" are actually remastering before pressing...

At minimum they have to move all the bass to the center and apply RIAA EQ.

You're certainly right, many aren't (I always check). A lot are though. I'd give it maybe like 50/50.

The problem is that they aren't being mastered differently. There's a website that lists vinyl releases (can't find the link) and compares them to the CD masters, and they're often the same thing. Older CD masters from the '80s or '90s are re-released compressed to drive sales. The latest vinyl fad has just become a new means for record companies to exploit a "new" medium and race to the bottom: they know many new listeners on cheap players just want to hear what they'd get out of earbuds anyway.

Related: https://thevinylfactory.com/features/analogue-digital-vinyl-...

There are, of course, those brands that care about remasters, but I don't think they're a majority of the market unless you're looking at classical and older jazz.

Some time ago, I stumbled across a YT channel of some guy, a self-professed studio expert, who "remasters" some 80's metal albums to give them a big, "modern" sound. The uploads are heavily commented with positive reviews.

Basically, to my ears, it just sounds like a bunch of early reflection reverbs were added (an effect that was mature in the 1980's in its high-end implementations and used in studios to get "bigger" guitar sounds and whatnot.)

Of course, it sounds great for all the viewers who are using cheap (or even not-so-cheap) earbuds, or computer speakers.

What these nincompoops don't get is that these albums were made to be cranked up on a powerful stereo, with full-sized speakers, in some kind of room. That guy is basically just ruining great albums that were actually recorded and mastered by people who did know what they were doing. Like, oh, Detonator by RATT and whatnot.

Back up for a second with the last paragraph there: if the record was mastered for a room sized stereo, it assumes that the room adds its reverb to the sound. With loudspeakers, the room is a distorting filter in the signal path. This and the HRTF distortion are skipped over when listening to headphones/earbuds. So it does make a lot of sense to add these effects to the audio signal in the headphones case. Done right, the headphone playback is indistinguishable from a stereo in a room - mounted to your head, because the spatialized speakers are relative to your head, no matter where you look.

So, there is a case to be made for this kind of processing. But I won't trust a random mastering "guru" with unknown credentials to get that right.

Right. Only, the accompanying obnoxious rhetoric was along the lines that those studio engineers didn't have the techniques and equipment for this modern sound, and that these old albums need a face lift.

Urgh, this is just crazy talk.

> This is also why modern vinyl releases sound a lot better than digital: they are mastered differently since its assumed everyone is going to be listening on good equipment.

In my opinion that's a myth and certainly not a given. There are plenty of subpar vinyl masters and terrible pressings out there. And it's not that difficult to find good digital masters these days. More important than the medium is the genre, label and target audience - I have a pretty obscure and diverse taste, including rarities from past decades which are finally being re-issued for the first time, and while mixdowns certainly vary in quality it's mostly fine and the result of a careful process these days.

However things might be worse when it comes to mainstream music.

Exactly. If it's mastered wrong, the bitrate has nothing to do with the issue; this music would sound just as awful cut to vinyl from a bad master. Now, you just can't hear beyond 22050 Hz, so 192 is insanely wasteful. But poor mastering is absolutely the core issue, not encoding algorithms.

Actually, no, it might sound better cut to vinyl. Remember, vinyl doesn't have the frequency range or dynamic range that digital audio does, and it has to be mastered using the RIAA Curve because of the properties of the medium. One factor here is that the stereo separation on vinyl can't be too large, or else the needle will literally jump out of the groove! In short, you can't just take CD music (no matter how well or poorly mastered) and cut it to vinyl as-is.

> it might sound better cut to vinyl

There's nothing extra in a 192kHz signal that would help with the vinyl mastering process. You could make a technical argument for the benefits of a 24-bit source, but in practice even those benefits would be utterly swamped by the SNR of vinyl.

You're completely misunderstanding. No, there's nothing extra in the 192kHz signal that would help, but that's not the point. Just remastering it for vinyl, which has certain limitations, might make it sound better than the digital format. As I said, you cannot just cut CD audio to vinyl; the medium won't allow it. You'll probably have the needle bouncing out of the track. So remastering for vinyl might actually make it sound better than the abomination that is the sound of modern CD-audio (because of the Loudness War and extreme compression), not because the fidelity is better (in fact, quite the opposite), but because vinyl's limitations will prevent them from making the audio sound as horrible as it does on CD.

I responded to what you wrote. You’re making a different argument here. If you want to complain about how popular music is being mastered, do that. Don’t conflate that with unrelated arguments over distribution formats.

And you might even have a point, as long as you acknowledge that “remastering for vinyl” doesn’t actually necessitate distributing on vinyl, and what you describe as “the sound of modern CD audio” is entirely the fault of human decisions and not the CD format itself.

Also, you’d need to acknowledge that your description of what “sounds better” is a subjective assessment. It is fair to say that vinyl sounds better to you if what you like is that RIAA processed, variable noise floor sound.

> This is also why modern vinyl releases sound a lot better than digital: they are mastered differently since its assumed everyone is going to be listening on good equipment.

I have just downloaded "Radiohead - The Bends" and "Smashing Pumpkins - Mellon Collie and the Infinite Sadness", both apparently from vinyl and in the highest quality, but I don't hear any difference from the CDs I bought and ripped years ago (using "Beyerdynamic DT 770 Pro" headphones directly connected to a Lenovo P71 notebook).

Maybe you meant some more modern music or something else...? Thx

But who is using MP3 players anymore these days?

I found myself buying an iPod in... like... 2011 or so. Converted all the CDs I had to FLAC because lossless was the way to go.

Two or three years (let it be 5, doesn't matter) passed by; I got a better smartphone and Spotify Premium, and I don't touch my 1xx GB of FLAC music anymore, because I don't want to carry around another device etc.

I'm not sure but I think "owning" music like in "I got some files here on my drive" seems dead to me. That obviously has downsides but I feel lucky to use Spotify these days and being able to discover new music every day and listen to all of it on the go without buying something, converting it and more.

I hike a lot and hate using my phone's battery power for music. On top of needing that power for other things, it just feels wasteful. I bought a cheap MP3 player to try out in 2016 and have been hooked ever since. These devices are smaller and lighter than spare phone batteries or power banks.

In addition, I find that I use the MP3 player when I'm out running normal errands precisely because I've organized my music by hand and even edited tracks by hand in some cases. Examples would be things like rare covers that can only be found on YouTube, or favorite songs from niche internet music communities which were poorly mastered.

It's also a bit of a gear hobby now since there are so many MP3 players on the market. Prices are low and performance is great.

I have to agree about the iPod though, as I found the need for proprietary software, and really annoying software at that, made me use it less and less until my 32GB iTouch was mostly used as an ebook reader. I also prefer physical buttons for my mp3-listening while on the go.

But aren't you worried you'll lose access to your music? I have to own it! I can't have it at the whim of multiple third parties to take down as they see fit. It's too important.

Nothing in my Spotify is rare, I can hunt it all down again. If Spotify pulls the plug then the biggest hassle will be recovering the track names of all the music in my sprawling playlists (which I should probably start backing up now). The benefit of Spotify to me is spending $10/mo on the >$10 of new music I listen to each month.

So where do you store these files that you'll never lose access?

CDs? People with a room full of 8 tracks or cassettes would like to have a word.

HDDs? Those fail all the time, plus any sort of natural disaster could wipe out your collection.

Online backup? This seems like the only real option, but for me the risk/reward just doesn't fit.

At least for now, the record companies and the service providers are both incentivized to have as much of their catalogs as possible on streaming services. Until that changes, streaming works for many.

One copy on each of:

-My desktop at home

-My server in the basement

-My work laptop's external hard drive

-An external hard drive in a fireproof lockbox (server backup)

-An external hard drive on a shelf at work (server backup)

-An external hard drive in my parent's house 150 miles away (server backup)

Try prying my files from my cold dead hands.

How do you keep them all in sync when you rip new music?

I put it on my desktop. I back it up to the server. Over the next few weeks/months, I connect an external drive to the server, update, and swap with the other 2. The work laptop drive gets updated from the server over rsync/SSH when I want to listen to my new stuff (almost right away).
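The rsync/SSH step can be a one-liner; here's a sketch of what such a pull might look like, with a hypothetical hostname and paths (the command is printed rather than executed, so nothing here depends on my actual setup):

```shell
#!/bin/sh
# Sketch: pull new music from the home server onto the laptop's external
# drive over SSH. Hostname and paths are hypothetical; the command is
# printed, not run, so neither rsync nor the server needs to exist here.
# -a preserves metadata, -v is verbose, -u skips files that are newer at
# the destination, --delete mirrors removals from the server.
src="homeserver:/srv/music/"
dest="/media/external/music/"
cmd="rsync -avu --delete $src $dest"
echo "$cmd"
```

Run for real, the same command only transfers what changed, which is why the swap-and-update routine stays quick.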

At least for now, the record companies and the service providers are both incentivized to have as much of their catalogs as possible on streaming services. Until that changes, streaming works for many.

There is a solution for the rest: let me mix songs from Spotify, my own library, and any other services I pay for in a single playlist.

On my server at home, which has redundant data drives (drivepool) and is backed up locally.

I can access this from every device in my house, and from outside my network.

I can put anything I want onto my phone, USB stick or iPod and play in most any modern car.

Your beef with CDs is that they'll become out of date? Did you not read the article? CDs are already pretty much the pinnacle of audio formats.

You don't back up your hard drive?

I had an album (albeit free) on Bandcamp disappear from my library.

Luckily it's on a backed-up RAID6 array on a private server, streamable whenever I want.

I mirror all my purchases onto equipment I own, and so I guess I get the benefits of both.

> Online backup? This seems like the only real option, but for me the risk/reward just doesn't fit.

What risk? You can privately store your music anywhere, it's completely legal to do so.

I'm considering which online backup service to use for my music collection, is there a particular one you'd recommend, other than the obvious players like Dropbox, Google Drive and OneDrive?

Preferably not hosted in the US, for privacy/bandwidth reasons.

Backblaze, no question about it. Use encryption and you won't have to worry about privacy. It's expensive to recover your data, though, so you'll need other backup methods too unless you're made of money.

Backblaze is very lacking in Linux support, unfortunately.

For now, I've decided on pCloud, in addition to an on-site copy on my NAS and a copy on a portable drive that I store at work and update semi-regularly. A couple of rsync scripts take care of everything.

I know it's a cloud storage service and not an actual proper backup service, but they offer 15 days of rewind as standard, and you can get a full year of rewind as an add-on, which I am considering. That should hopefully protect me from accidental deletes, and give me enough time to restore if my house burns down.

The thing that has really sold me on pCloud is that their Linux client is absolutely amazing. Compared to the barely functional Dropbox client and the non-existent Google Drive client[¤], it is an absolute joy to use. At the moment it's an Ubuntu-only AppImage, but they're working on an improved Electron version.

One additional nice thing is that pCloud is a Swiss company, so their privacy laws (and the GDPR) apply. They do host their servers in the US, so you're not completely free from theoretical NSA/PRISM snooping, but in my case I'm primarily storing my music library. They can go ahead and snoop through the tags of 300+GB of music for The Anarchist's Cookbook or whatever.

[¤]InSync is pretty nice, and I did buy a license for it a while ago, but it's still not as good as pCloud's client.

I use my own 10€/month 2TB server with OVH but also have it synced to Google Drive.

I wasn't clear. The risk of a streaming service turning off some music vs. storing and backing up everything in some lossless format like FLAC.

Not OP, but my collection is

- stored on the desktop for fast and performant access

- synced to a NAS daily for central access around the house/network

- uploaded offsite to cloud storage daily as backup

Do you feel the same about cable TV or Netflix?


> I'm not sure but I think "owning" music like in "I got some files here on my drive" seems dead to me.

I really don't think that's true. I think the "listening market" looks a lot like it did before: a large number of casual listeners and a smaller number of people who are into their music enough to care about details. The second category does things like talk about differences in mastering between different releases, for instance, and Spotify or Apple are not going to offer you that 1973 Berlin recording or whatever. Tidal tries to cater to this market, but they don't have a massive amount of stuff. And then you get to bootleg collecting and people who record performances, old music that didn't make the digital jump, and all sorts of other recordings that will never make it to commercial services.

I'm not a "real audiophile" or obsessive about collecting things, but I do have a lot of music (last I looked, about 60k distinct artifacts - mostly individual songs, but some of those are albums or nonmusical, also some dupes and garbage). And a lot of that is not on commercial services.

> But who is using MP3 players these days any more?

I use my iPod Shuffle exclusively for portable music listening. Cannot beat the form factor, only have to charge it once a week or two (and sometime far longer between charges), and helps me relegate my mobile surveillance/communications device to phone-duties-only as much as possible.

I rip my CDs in a two-step process: first to FLAC, then convert to mp3. The mp3s go in my phone, I have 33GB so far and my collection isn't even half ripped. I haven't checked how big the FLACs are lately but I'm sure they'd be a much bigger burden.

If you are concerned about space, consider vorbis, AAC, or opus. They all will achieve a higher quality at a given bitrate (or equivalently a lower bitrate for a given quality).

Note that the difference is not large. A 128 kbps Opus or AAC might be comparable to a 160 or 192 kbps MP3, so it's less than a 2x improvement in file size.

AAC has an additional advantage though, which is that many phones and receivers can transmit AAC files over Bluetooth without reencoding. This is technically possible for MP3 too, but very few devices implement it.

The loss of quality from transcoding lossy to lossy is usually a lot worse than the difference in quality between codecs and bitrates (within reason).

Interesting, didn't know the Bluetooth fact. I don't usually deal with AAC myself, since opus is so close in every quality/feature, and the AAC patent license is sometimes costly to use commercially ($0.98 per software sale).

Even though it's theoretically possible to send over Bluetooth without reencoding, I wonder if it happens in practice. The audio pipeline has too many stages and each of them would have to retain the encoding.

Good point. The developer settings on my Pixel 2 allow me to set the preferred codec, but I've never dug into it enough to know whether the setting is actually honored. All my music is MP3 anyway, so it's going to sound awful over Bluetooth no matter what.

Similar developer settings on the S8+/Note 9 - as soon as you connect to a device that doesn't support your chosen codec, it'll reset. I can tell the difference between APT-X and AAC, but I've got no idea if the AAC is being re-encoded.

I'm not that concerned, 256K mp3 has been good enough. Although it wouldn't be hard to automate a conversion to another format for my entire collection, given that I have lossless originals.

If I have a large FLAC collection and want to export the whole thing to MP3 or AAC copies, what would I do to automate that?

For AAC or MP3:

ffmpeg and a makefile with a pattern rule is pretty reasonable (substitute any make replacement if you prefer). If you are doing AAC, make sure you use the Fraunhofer FDK AAC, not the builtin one (the builtin one used to be terrible, but is now somewhere between "okay" and "pretty good"; the FDK is still considered better last I checked, and your distro may not have an up-to-date ffmpeg).

ffmpeg is pretty good about preserving metadata.

If you want ID3v1 tags for MP3 (only needed for older players), then pass -write_id3v1; there's little downside to putting the id3v1 tag on there as it's quite small.

Links for basic ffmpeg encoding; they show .wav input, but ffmpeg can read FLAC just fine and should preserve tags: 1, 2

For Ogg output, oggenc can read flac directly and preserve tags, so I've never tried using ffmpeg.

I, however, ripped my CD collection to a single FLAC per disc plus a TOC, and abcde[3] will automate that, including a MusicBrainz or CDDB lookup for tagging.

1: https://trac.ffmpeg.org/wiki/Encode/MP3

2: https://trac.ffmpeg.org/wiki/Encode/AAC

3: https://abcde.einval.com/wiki/
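For reference, a minimal sketch of the makefile-with-a-pattern-rule approach (the directory layout and the V0 quality setting are assumptions; it needs an ffmpeg built with libmp3lame on PATH):

```make
# Sketch: mirror flac/foo.flac -> mp3/foo.mp3 with a pattern rule.
FLACS := $(wildcard flac/*.flac)
MP3S  := $(patsubst flac/%.flac,mp3/%.mp3,$(FLACS))

all: $(MP3S)

# -qscale:a 0 is roughly LAME V0; ffmpeg carries tags over by default.
mp3/%.mp3: flac/%.flac
	mkdir -p $(dir $@)
	ffmpeg -i $< -codec:a libmp3lame -qscale:a 0 $@
```

The nice property is incremental builds: `make -j4` transcodes in parallel and only re-encodes FLACs that are new or changed.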

It's point and click with Foobar 2000, though there might be a plugin needed. Certainly not anything that isn't on their download page. You should be able to populate the playlist, right click, and convert to whatever format you'd like. I've done this for batches of thousands of tracks without much difficulty.

Slightly off topic, but do you use something other than iTunes for this process? I'm looking for a good way to manage a FLAC library.

As far as I am aware, iTunes is not even able to play back FLACs, so when I am on a Mac, I use Clementine (https://www.clementine-player.org) or cmus (https://cmus.github.io).

Converting etc. I do exclusively on my Linux desktop, so can't help you there.

iTunes doesn't support FLAC, but it does support ALAC, whose implementation is also open-source. And it has a neat feature where it can store ALAC on your computer, and automatically transcode to a (much smaller) lossy format when syncing to a mobile device.

Yeah, I'm aware, I just feel like FLAC is the more popular, more multi-platform-friendly option; it definitely seems to have more momentum behind it, and it's easier to buy FLAC than ALAC, for one.

Buying options shouldn't be a concern at all as long as they are lossless - you simply convert them to the lossless format of your choice. There won't be any quality lost. Lossless to lossless is still lossless.

Your format of choice should be dictated by your mobile platform - if you use an iOS device or simply like iTunes, go for ALAC. Any decent player will handle FLAC and ALAC, but Apple requires ALAC. If Apple isn't a concern for you, there's no reason to use anything but FLAC.

Personally, I use ALAC since I use iOS. So far there haven't been any downsides.

> Buying options shouldn't be a concern at all as long as they are lossless - you simply convert them to the lossless format of your choice. There won't be any quality lost. Lossless to lossless is still lossless.

Absolutely, but it's an extra step that to me brings little practical benefit, since FLAC is already the source format & is more widely used practically everywhere outside Apple's ecosystem.

> Your format of choice should be dictated by your mobile platform - if you use an iOS device or simply like iTunes, go for ALAC. Any decent player will handle FLAC and ALAC, but Apple requires ALAC. If Apple isn't a concern for you, there's no reason to use anything but FLAC.

I use iOS as my smartphone platform for now (waiting for the Librem 5), but Linux on the desktop, so that's why I prefer FLAC. It's worth noting however that iOS itself does support FLAC perfectly well, just iTunes doesn't (I prefer not to deal with iTunes at all, so not a concern for me), but if you use something like Airsonic, you're set.

I do have a set of AirPlay speakers however, since I wanted something wireless, but still lossless, which kind of means AirPlay is the only option & that does transcode my FLACs to ALAC on the fly, so there's definitely an area where I use ALAC, even if indirectly.

Heh. I'm actually on a Linux desktop but figured most people would reply with an iTunes-based solution. Cmus looks interesting. I'd love to hear what your workflow is for converting, naming, tagging, getting artwork, etc.

Heh, nice :-) Yeah, cmus is incredibly convenient for rapid playlist management once you learn the shortcuts, (there's an excellent quick tutorial $ man cmus-tutorial).

I mostly use 7digital & HDTracks to acquire FLACs these days, but when I rip from CDs, I use https://github.com/whipper-team/whipper to do the job.

FLACs from 7d/HDTracks are already named & tagged properly so I only deal with it occasionally and when I do, https://picard.musicbrainz.org works well for acquiring tags & artwork.

When I need to rename/tag manually, https://kid3.sourceforge.io has been working nicely.

Also I haven't used it myself, but there's a lot of positive chatter around https://github.com/beetbox/beets for tagging etc. I just prefer not to have my files touched in such an automated way :-)

I rarely actually convert from FLACs these days, since I have set up Airsonic, (https://github.com/airsonic/airsonic), on my home server. I now have access to the lossless files directly, from anywhere.

When I do convert, I usually just use https://github.com/kassoulet/soundconverter - nothing fancy, but does the job. I do not maintain my whole library in both, lossless & lossy formats since I have set up Airsonic, but when I do want to save data & do not have access to WiFi, I just let Airsonic use lame to transcode to MP3s on the fly, (rare). If you cannot do that, don't have regular access to data on the go etc. I'd honestly just use https://ecasound.seul.org/ecasound/Documentation/examples.ht... and put it in a script that checks if a .flac file in a folder or subfolder has a corresponding .mp3/.ogg file and convert if not, then just use find to filter out the format I don't want to copy over. :-)
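That "convert only what's missing" script might look something like this (a sketch: the ffmpeg invocation and the throwaway file tree are purely illustrative, and the commands are printed rather than run, so nothing needs ffmpeg installed):

```shell
#!/bin/sh
# Sketch of the incremental-transcode idea: for every .flac in the tree
# without a matching .mp3, emit the ffmpeg command that would create it.
# A throwaway tree stands in for a real library here.
tmp=$(mktemp -d)
mkdir -p "$tmp/album"
: > "$tmp/album/01.flac"; : > "$tmp/album/01.mp3"   # already transcoded
: > "$tmp/album/02.flac"                            # mp3 missing

todo=""
for f in $(find "$tmp" -name '*.flac' | sort); do
    mp3="${f%.flac}.mp3"
    # Only queue a conversion when the target doesn't exist yet.
    [ -e "$mp3" ] || todo="$todo ffmpeg -i $f -codec:a libmp3lame -qscale:a 0 $mp3"
done
echo "$todo"
rm -rf "$tmp"
```

Pipe the output through `sh` (or drop the `echo` indirection) once you trust it, and pair it with a `find`-based copy filter to sync only the lossy files to a player.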

Awesome. Thank you for a thorough response. Airsonic looks like just what I want, too. I have a FreeNAS system and would love to centralize my music catalog there.

Over the years I've ripped my CDs maybe 4 or 5 times. I used to have a PowerBook G4 and an early iPod, so I ripped to M4A/AAC. Nothing else played that, so then I went MP3 with storage limitations of the day dictating bitrate. Now, I just want to rip to FLAC and never deal with that again.

On Windows, "foobar2000" is fantastic for playback and transcoding. It looks pretty basic, but performs well and has lots of plugins to modify look and feel as well as add extra functionality.

On Mac, XLD is great for ripping and transcoding, but I'm not sure what's the hot favourite for playback these days.

Thanks for the pointer. I'm largely on Linux and my wife largely on Mac, but I'm sure I can spin up a VM somewhere if the software is worth it.

Oh snap! Haven't used Windows or Mac outside of occasional work use for >10 years - all BSD or Linux (mostly Debian since then)... I've heard good reports from people using foobar2000 under Wine, but on Linux there's many fine options depending on your preferences - Audacious or Deadbeef are more like foobar, or there's Quod Libet or Ex Falso if you prefer something "bigger". Personally I haven't used these GUI players for a while as I tend to have a terminal window always open and just point `mpv` to a directory, playlist or file. I use `ffmpeg` (compiled with recent codecs) for transcoding and for ripping there's `RubyRipper` or `abcde`.

I use an older version of Media Monkey on the PC. I would have upgraded to a newer version but they removed the interface to the LAME encoder. This was before the patents expired so I should check them out again, but the old version does everything I need. I quite like it.

I made a comment a little further up before I saw this, but there's a lossless format that iTunes and Apple devices support called ALAC. You can convert to and from FLAC files with avconv.

I've been ripping to FLAC, and then converting to ALAC via avconv. The ALAC files go into iTunes; the FLAC files stay on my server as an "archive". I then let iTunes convert the files it syncs to my phone/iPad to whichever size I need for that device, and I can still listen to uncompressed songs when I'm at my desk.

I keep the FLAC around in case sometime in the future I want to change formats for whatever reason.

Why do you think FLAC is better than ALAC for your archive? They are both open source, lossless formats.

This is pretty reasonable. 16/44.1 FLACs aren't that large, especially considering that 4TB HDDs are available for $70 these days.

I do the same. MP3s of my entire CD collection sit on an SD card in my car and on a SanDisk Ultra Fit USB drive in my wife's car. The FLAC files live on an external USB drive in my home.

I still have Spotify for the times I want to listen to something I don't own or want to listen to one specific song without drilling down multiple menus to find it.

Do you manually convert to MP3? iTunes has an option to convert lossless audio to lossy, space-saving AAC at a bitrate of your choice on the fly when syncing to an Apple device. I'm sure there are similar solutions in Android land.

Well I never used CDs. Unfortunately what.cd got taken down, but a couple years ago, it was probably the biggest and most complete collection of music in the world.

Nowadays, I also just use spotify since I don’t have a quality source for music. But if what.cd was still around, I would dump spotify in a second.

what.cd was continued by redacted.ch, and the community is currently quite strong.

There is also Orpheus (nee Apollo, Xanax) and notwhat.cd, which spread the community out a bit, but also helps increase the bus factor.

I use my phone as an MP3 (Opus, actually) player, with a selection of music from my ~20K track collection. This works better for me than unlimited access to all music, because it makes me have to listen to a smaller selection of content, so I give each album more attention.

While I do also have a Spotify Premium subscription, I am using it a lot less now than I used to. At least 10% of the albums I have simply aren't available on Spotify, and possibly never will be. Underground self-released artists very often don't bother with streaming services, or are outright against the entire concept in the first place, claiming that it devalues the music. It certainly doesn't pay very well. There's also the issue of music disappearing because of rightsholder disputes, such as most of the Motörhead discography being unavailable for an extended period of time. That sort of thing just isn't acceptable.

Honestly I've come to realize that I prefer a smaller nicely curated collection over a massive unwieldy semi-unlimited library, with questionable curation. I have reported hundreds of curation errors to Spotify, but they keep popping up, especially errors involving two identically-named artists being mixed together.

I will admit that I am very particular about tagging, labeling and sorting by genre. Spotify is woefully inadequate in this regard. For my own collection, I am in full control, which makes it much easier to sort and handle.

Your smartphone or laptop is like an MP3 player with respect to mastering, not like an expensive amplifier and speakers. Your smartphone/laptop has an amplifier that's optimised for low energy usage, not fidelity, and loudspeakers optimised for size. Music which has been mixed and mastered without regard for how it sounds on your smartphone is sold as "24/192" or "vinyl" or such. The 192 does not matter technically; it's just an identifying mark, and some sort of identifying mark is necessary.

I don't think this advice is aimed at your typical Spotify user (i.e. the majority of people).

Spotify is fine for casual listening, but if you're picky about quality, you're going to DIY it, and if you're DIYing, 24/192 is pointless.

Oh, another cool thing about vinyl is that the needle can couple to the environment too: try driving its case with another speaker, or putting it in front of its own big amp for feedback.

Also, I'm a little bit surprised that nobody focuses on more "out of the box" perception of sound. One can absolutely sense high frequencies; personally it feels kind of like pressure where you can't pop your ears to equalize. Playing around with this feeling adds emotional tension and color to tracks.

Also, interference patterns are perceptible, and they sound kind of... Different from pure tones, idk.

> This is also why modern vinyl releases sound a lot better than digital: they are mastered differently since its assumed everyone is going to be listening on good equipment.

Sorry, I don't know much about sound so here comes probably the most stupid question of the day (but hope never dies):

does this mean that I might get better sound if I would buy a vinyl & one of those turntables which can directly digitize to USB, then if I would buy & download the digital song directly (or maybe even the CD)? Thx

(Assuming you're okay with piracy…) You're better off searching for vinyl rips where people with good equipment have done the heavy lifting for you.

Actually you're right (what shall I say, for some mysterious reason I didn't think about it) => I'll first do a comparison with whatever I find and then if it does sound better I'll try to achieve the same results on my own (I'm obsessed with owning originals). Thx :)

> That being said, I think flac is generally a good choice for a music collection.

One other consideration for a music collection from CDs is getting a good rip in the first place. I've had some horrible rips in iTunes, even with error correction enabled. I have much more confidence using a tool like XLD that supports AccurateRip, which probably doesn't work with a lossy format.

If you want to transcode after the rip, fine, but you may as well hang on to the FLAC.

IIRC, XLD rips to WAV first anyway, then compresses it to FLAC (I know EAC does).

The reason why a lot of recent digital music sounds bad is because of the intentionally terrible mastering.

I guess that's why the vinyl versions of my wife's albums always sound better than the downloaded versions. Even to my really quite bad ears.

Most LPs these days are made from the same masters as the CDs (or downloads/streaming), with only the bare minimum of processing done to make them viable to pressing to vinyl, ie. mono bass and RIAA equalization. Only releases marketed specifically to audiophiles tend to get any extra effort put into them, and that is a vanishingly small segment of customers.

The loudness war isn't happening because of "crappy earbuds", the earbuds included with smartphones have been rather good for a long time now. The ones that came with my Samsung S8 were designed partially by AKG (Samsung owns the Harman Group, including AKG) and are really damn good. Apple's included earbuds are also very good now, a far cry from the original iPod earbuds, which were decidedly mediocre.

The real issue is radio and YouTube/streaming services from before they implemented loudness targets, and it's been going on since the 50s at least; just listen to some old singles from back then, they're mastered as loud as they possibly could be with the technology of the day. The objective has always been to make your song sound louder than the next song, because louder music sounds more impressive to a casual listener; it's simply more attention-grabbing.

In the beginning of the digital era, there was actually some hope that better dynamics would happen. In the guidelines for Sony's earliest digital recording equipment, the recommendation was to target an average level of -20dBFS, to use very little or no compression, and "let peaks fall where they may". Just imagine that, 20dB headroom!

In the worst days of the loudness war (~early 2000s) a lot of music was mastered with barely 3-4dB of dynamic range, with peaks banging hard against 0dBFS. I have some CDs from that era, and they clip and distort like crazy, because everything was just pushed to 11, to be as loud as possible. "Californication" by Red Hot Chili Peppers is an excellent example; it's absolutely horrid.

Since then, two major things have happened to improve sound quality somewhat. Firstly, the compression devices and plugins have improved massively; modern sidechain compression is really impressive, and entire genres like EDM/dubstep simply wouldn't exist if not for the improvements in compression tech. Secondly, all of the streaming services use volume normalization now, with a set average sound level. Songs can peak over this average value, but the average must be in line with the target. This also results in brickwalled "turn everything to 11" tracks sounding a lot quieter, because they have no peaks to use the additional dynamic range available.
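That normalization step can be sketched as a simple gain calculation toward a fixed loudness target. A toy sketch only: real services measure loudness per ITU-R BS.1770 (LUFS) rather than plain RMS, and the -14 dB target here is just an illustrative value:

```python
import math

def normalization_gain_db(samples, target_db=-14.0):
    # gain (in dB) that brings a track's average level to the target;
    # a brickwalled track with a high average level gets turned down the most
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return target_db - 20 * math.log10(rms)

# a full-scale sine averages about -3 dBFS, so it gets roughly -11 dB of gain
tone = [math.sin(2 * math.pi * 440 * t / 48000) for t in range(48000)]
print(normalization_gain_db(tone))
```

A track squashed to a constant near-full-scale level gets a large negative gain, which is exactly why brickwalled masters end up sounding quieter than dynamic ones under normalization.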

> (there are some exceptions, like the Beatles mono and stereo boxed sets that came out awhile ago)

Didn't the Beatles famously create their music to be listenable on the terrible radios of the time?

So this is why streaming Google Music on my fairly nice sound system ends up sounding like total crap?

In 2019 music streaming should be more like video streaming, in that different bitrates should be user selectable, and processing ("cinema" vs. "night mode") is done by the playback equipment.

It's a very good article which shows up again and again. Think it's 2040, singularity reached. AI runs the world, and on HN we have this article popping up very frequently, like every hundred Planck time units.

It is a good article, and since the misunderstandings are persistent in the same way a lot of other commercially exploited mysticism is it remains a relevant one as well. Having said that and to try to add on something new to these discussions, since you brought up this:

>Think it's 2040, singularity reached. AI runs the world and on HN we have this article popping up very frequently like every hundred Planck time unit.

One argument I can see in principle for 24/192+ sound (not music) recordings would be if someone was a serious transhumanist and honestly did anticipate that some humans will move beyond baseline human sensory limitations in the foreseeable future (by 2040 would certainly count). Combine that with the sort of incredible environmental destruction we're seeing right now, with enormous numbers of species going extinct, forests being destroyed, insect/bird levels plummeting or shifting even where they aren't going extinct entirely, etc. It doesn't seem entirely unreasonable to imagine that in 2040 somebody with genetically enhanced or bionic ears who really could hear ultrasonics (and had grown up with that, so their brain had developed from the start with that input) would find themselves never able to hear "what it was really like" back in the 2010s, even for a simple walk in the woods. If they had been here in person they'd have been able to hear all sorts of things, but our standard recordings wouldn't have any of that, and by then the whole character of forests may be different forever, a la the silent spring.

It's similar, I think, to one of the guiding principles of modern archaeology, which is to disturb as little as possible precisely because we recognize there will be superior tools and sensors in the future which could pick up things we can't right now. Saving as much raw data as feasible in many experiments is also like that: even if we can't process it all now, decades down the line new insights might be found.

None of that has anything to do with music which is a subjective human artistic creation. Even though instruments give off sounds beyond our perception, by definition we aren't taking those sounds into account in the creative process. Future transhumans would undoubtedly create transhumanist art taking full advantage of any enhanced senses, but that wouldn't apply retroactively.

> One argument I can see in principle for 24/192+ sound (not music) recordings would be if someone was a serious transhumanist and honestly did anticipate that some humans will move beyond baseline human sensory limitations in the foreseeable future

True, except that few microphones provide a useful signal over 20kHz, and in the case of produced music, that segment of the signal was never heard or "signed off" by the original artists/engineers and therefore can't be considered part of the artist's intent.

Could it be also because HN Front page algorithm (my speculation) of favoring domains (with something like Domain Authority) based on previous votes?

No. Some domains are penalized because of too many lightweight or off-topic posts, but no domains get a boost.

Feels like we should have spaced repetition/Anki for highly-upvoted articles

It’s supposed to have (2012) in the title if it’s an old article.

This reminds me of Monty's A Digital Media Primer for Geeks[0] and Digital Show & Tell[1] - the delivery, the explanations and the way the experiments are set up is superb.

[0] https://xiph.org/video/vid1.shtml [1] https://xiph.org/video/vid2.shtml

The article's author, Chris "Monty" Montgomery, is one of the authors of Ogg Vorbis [1] and Opus [2].

It puzzles me that many people don't yet know about Opus. Let me quote the FAQ [3]:

"Does Opus make all those other lossy codecs obsolete?


From a technical point of view (loss, delay, bitrates, ...) Opus renders Speex obsolete and should also replace Vorbis and the common proprietary codecs too (e.g. AAC, MP3, ...)."

[1] https://xiph.org/vorbis/

[2] http://www.opus-codec.org/comparison/

[3] https://wiki.xiph.org/OpusFAQ#Does_Opus_make_all_those_other...

I use Opus for music playback for all my archived music. The reason it's not more widespread was opposition from the likes of Apple to free codecs. Today they are losing this fight, and Opus is making its way even onto Apple's systems.

Is there a particular reason you don't opt for lossless formats (e.g. FLAC) for your music archive? I imagine the only constraint would be space, though storage gets cheaper by the year.

I use FLAC for storage, and Opus for playback. I.e. in essence I use both. The benefit of lossless is ability to re-encode later to any new codec (Opus-next?) if it will be useful. For playback, transparent Opus is good since it takes less space.

I.e. in practice: in my main archive I use FLAC. On portable players etc. I use Opus encoded from that FLAC.

That's why I always try to buy music in FLAC when possible and stores like Bandcamp are great for it.

I love Opus just as much as I loved Musepack and Vorbis; the one thing all of them lack to one degree or another is support and hardware acceleration. If I throw an Opus file on my Android 8.1 phone, it has no idea what to do with it unless I manually open it with VLC or foobar2000. For the regular user the support needs to be seamless, otherwise they are not going to bother.

I thought for a moment that Spotify uses Opus, but it turns out that they use Vorbis. Wonder why a switch isn't on their roadmap.

Do they publish their roadmap?

I'd imagine they consider what they have good enough, given the backwards compatibility issues a switch would likely introduce.

They don't publish their roadmap, but there have been threads on their community forums suggesting this, and an official "Not Right Now" response:


I've yet to see them implement anything suggested on their forum (or github).

> Wonder why a switch isn't on their roadmap.

Just a guess but I bet it's because the cost would be higher than the extra revenue it would generate.

The use of analogue gear in #2 is one of those things that as someone who _already believed what Monty is showing here_ I wouldn't have thought to do. But it really heads off a bunch of arguments.

And twenty years from now it's going to be hard because you'll have to scrounge the gear from a museum instead of it being available for a reasonable price from eBay or borrowing it off somebody who kept it in the cupboard after upgrading to modern digital gear. So I'm glad Monty did it in that era where the gear was still available.

Honestly it is remarkable how many engineers (self-proclaimed or otherwise) in audio don't understand the basics of sampled systems and quantization. You'd think that anyone making broad claims about these kinds of systems would have at least a rough understanding of the foundational principles, but no.

The choice of colour analogy is unfortunate, because there really are colours that are "out of gamut" and cannot be accurately reproduced on normal monitors. If you have the opportunity to go and look at one of the IKB works in person, you'll see what I mean.



I don't quite agree with you, taking into account that it's an analogy designed to help illustrate the issue for general audiences. It's not as if we don't have ProPhoto RGB or other wide gamuts, or don't understand the issues of rendition accuracy and resolution within the visual spectrum. There was never any debate that sRGB in particular was quite limited, or that dynamic range was an issue. It's just that it represents a ton more data and is technologically and commercially much, much harder. As tech has caught up, displays have continued to chase human visual limits, starting with resolution, then frame rate, and finally major industry-wide improvements to gamut and range with BT.2020/2100. I mean heck, it wasn't that long ago that we barely had color at all. I still remember well the first 8-bit system I ever got, or back when I regularly had to switch between 16-color/256/16k modes to trade off resolution against color because my system just didn't have enough VRAM to handle both at once. Audio did far better at matching human limits much, much longer ago.

But colours within the visual spectrum that don't show on the screen are still within the visual spectrum. The article's examples refer to infrared and UV for contrast, and that's entirely correct. Monitors displaying either of those would make no difference at any point (well, beaming ionizing EM at your face raises significant concerns audio doesn't at any level); they're simply beyond human eyes, period. It's an accurate analogy. Failing to reproduce something within human limits is what you're talking about, but that's a solved problem and not something 24/192 offers you anything with.

Those are very nice examples. I always fell back on "go stare at the sun for a while for a shot of color that your monitor can't handle" but admittedly it was pure laziness.

Question: Is 192 kHz better when you want to slow down (or speed up) a track significantly while keeping pitches the same? Does it produce less noticeable artifacts?

When DJing, I often speed up or slow down a track I'm cueing in order to match the tempo of the playing song. So having 192 kHz tracks might be better (although usually you try not to change a song's tempo too far from the original anyway).

No, it's not. You answered your own question here:

> while keeping pitches the same

All 192khz does is preserve higher frequencies. If you're keeping pitches the same, there's no advantage to using an extremely high sampling rate for your source material. The advantage comes if you're going to lower pitches.

(Note that some algorithms need higher sampling rates to avoid aliasing. That shouldn't be the case anymore, but if you're hearing a substantial increase in quality just from going up to 192 kHz, most likely one of your algorithms is faulty.)

(Note 2: I say "substantial increase" because some people can detect up to 27khz.)

I personally read the article as addressing 192 kHz for consumers of the music; I have a feeling that for those producing (or mixing, etc.) it's a bit different.

It's kinda like how there are advantages to recording at 8K: better cropping, supersampling, etc. But for the average consumer there's no perceivable difference between the pixel density of 8K footage and 1080p footage on their 7" screen anyway.

Yeah, the author is not arguing against using 24 bits when recording, just when distributing to end users.

If the producer is planning to slow down the audio (and wants the ultrasonic components to become audible), then recording at higher sample rates makes sense, and the author doesn't address this; probably this is pretty rare in practice. You'd also need ultrasonic-capable microphones.

The much more common operation is to filter or amplify the signal, and for that, more bits per sample is better to avoid amplifying your quantization error. The author covers this in the "When does 24 bit matter?" section.

> and the author doesn't address this

No, it's literally excluded from consideration in the article's title. This is about music downloads, not music production.

I record and mix metal bands and have stuck with 48 kHz for years like many of the engineers I know. 96 kHz sounded better to my ears last time I checked in the studio (it's been years, maybe I wouldn't notice now that I'm older) but it's not worth the heavier storage and processing impact when nobody is actually going to use my stuff that way. I certainly don't feel limited by working at 48 kHz, either, but the hit to my workflow would be significant. Additionally, a lot of converters start imposing track limits when you go beyond 48 kHz, so that's one more reason to stay put.

More important than sample rate is AD/DA quality. I'll trust a new high-end converter at 48 kHz more than an old prosumer device at 192 kHz.

Plenty of the albums we love as listeners were recorded at 44.1 or 48. Plenty were recorded with absolutely horrendous equipment but played and mixed by professionals who created magic. MANY modern vinyl releases where people brag about superior sound quality are just the CD master in all its 16/44.1 glory remastered for vinyl. Little of it matters when the end result is special.

When slowing down the track, you're changing the effective sampling rate (e.g. 192 kHz turns into 96 kHz at half the speed). This article is about regular playback, so in your case it might make sense to have a higher rate.

Not likely. The DAC will almost invariably oversample anyway, and even though stretching may not happen in a high-sampling-frequency domain, the result eventually does.

The only rebuttal to this that I have found compelling is that 24/192 downloads make sense if you are going to sample the music in your own creations. Recording and mixing with extra dynamic range, combined with only needing to low-pass once at the end has demonstrable advantages. Of course this was a response to marketing that was definitely not targeted at samplers, so it's not so much of a rebuttal as arguing at cross points.

Yes, adding any sort of nonlinear distortion to audio will make frequencies depend on other frequencies, i.e. audible frequencies in the output of an effect can depend on supersonic frequencies in its input. For example, if you run a 100 kHz sine wave through a high-gain guitar amplifier, you'll definitely be able to hear it.

I didn't really see a mention of this point in the article since there was no "So when do you need 192 kHz?" section, but in its defense, DACs, amplifiers, speakers, and room ambiance are all incredibly linear in 2019, so for music listening, most super-sonic frequency content doesn't turn into lower frequencies. It does matter when you're using the very nonlinear Apple earbuds, but if you were doing that, you wouldn't care about audio quality in the first place.
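That folding-down effect is easy to demonstrate numerically. A minimal sketch, with a made-up second-order nonlinearity standing in for a distorting playback chain: two ultrasonic tones go in, and an audible 1 kHz difference tone comes out:

```python
import math

fs = 96000
n = fs // 10  # 100 ms of audio; every tone used lands on an exact DFT bin
# two ultrasonic tones that are inaudible on their own
x = [0.5 * math.sin(2 * math.pi * 24000 * t / fs) +
     0.5 * math.sin(2 * math.pi * 25000 * t / fs) for t in range(n)]
# mild second-order nonlinearity (made up for illustration)
y = [s + 0.1 * s * s for s in x]

def bin_mag(sig, freq):
    # magnitude of a single DFT bin, i.e. how much of `freq` is present
    re = sum(s * math.cos(2 * math.pi * freq * t / fs) for t, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * freq * t / fs) for t, s in enumerate(sig))
    return 2 * math.hypot(re, im) / len(sig)

# the clean signal has nothing at 1 kHz; the distorted one has an
# audible 25 kHz - 24 kHz = 1 kHz intermodulation product
print(bin_mag(x, 1000), bin_mag(y, 1000))
```

This is exactly the intermodulation-distortion mechanism the article's "192kHz considered harmful" section warns about: ultrasonic content buys you nothing when the chain is linear, and creates audible garbage when it isn't.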

He mentions this in the section "192kHz considered harmful" without the misleading rubbishing of Apple earbuds (which are among the best regular earbuds on the market, for what it's worth).

In most sensible systems, super-sonic content should be filtered out before it has a chance of doing nothing other than risking the fidelity of the final output.

As for your quip about a 100 kHz sine wave sent through a guitar amp, what you'd be able to hear are the distortions and subharmonics which are below 20 kHz—and if they're desirable in the recording they would need to be captured as their sub-20 kHz components. Capturing the >20 kHz components will do nothing but make the sound wildly and randomly inconsistent depending on the consumer's system.

Cymbals specifically produce a lot of ultrasonic audio, and some high sample rate recordings actually capture it. If you slow them down enough you can hear the difference.

yup; from the piece: "An engineer also requires more than 16 bits during mixing and mastering"

It's also not so likely that a given recording would actually have the larger dynamic range.

When I casually researched the upper limit of human hearing, I came across something that mentioned that some people can detect lowpass filtering up to 27khz.

That's less than half an octave over the "traditional" 20kHz limit. Even the 20kHz limit is more of an average than a strict biological limit.

It also means that a sampling rate of around 54kHz is the "ideal" limit when trying to pick a sampling frequency that is completely transparent to everyone.

This is less than half an octave higher than the traditional 44.1khz rate, just 22% more data.

That's the thing that really drives me nuts about high sampling rates. The minute improvement really only needs a very slight boost in sampling rates, not 96khz or higher.
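The arithmetic here is just the Nyquist criterion, double the highest frequency you want to capture:

```python
# Nyquist: the sampling rate must be at least twice the highest frequency
highest_detectable_hz = 27000            # the casual-research figure above
ideal_rate_hz = 2 * highest_detectable_hz
extra_vs_cd = ideal_rate_hz / 44100 - 1  # overhead relative to CD's 44.1 kHz
print(ideal_rate_hz, round(extra_vs_cd * 100))
```

So even granting the most generous hearing claims, the "transparent to everyone" rate is 54 kHz, about 22% above CD, nowhere near 96 or 192 kHz.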

I've been in an internet argument among very serious digital audio experts (such as from Bell Labs) where the consensus reached was this: for properly done audio export as a final stage to be heard by the most critical listeners (and by properly done I mean the output is dithered, not simply truncated, and everything else is done right):

20, possibly 22 bits, and 60 to 80 kHz.

Given that people screw that up by failing to dither to fixed-point formats, you could push it to 24 bits, which is a generally supported word length. Since multiples of the common lower sample rates (44.1 and 48) give us 96K, that is also good 'extra padding' to be certain of never encountering an issue.

I'm with Dan Lavry w.r.t 192K being unnecessary. Done properly, 96K gets everything, including extreme phenomena or artificial sound (for instance, I have a Farfisa organ that's capable of producing reedy thin sounds of extraordinary clarity, from simple electric tone generator circuits). I use 24/96 for my music stream recordings, while also streaming to YouTube at a much lower quality.
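The dither-versus-truncation point can be illustrated with a toy quantizer. A sketch only: real mastering uses noise-shaped dither rather than the plain TPDF used here:

```python
import math, random

def quantize(sample, bits, dither=True):
    # round to the given bit depth; TPDF dither turns the rounding error
    # into uncorrelated noise instead of signal-correlated distortion
    steps = 2 ** (bits - 1) - 1
    d = (random.random() - random.random()) if dither else 0.0  # triangular, +/-1 LSB
    return round(sample * steps + d) / steps

random.seed(1)
fs = 48000
tone = [0.3 * math.sin(2 * math.pi * 997 * t / fs) for t in range(fs)]

def error_rms_db(bits, dither):
    err = [quantize(s, bits, dither) - s for s in tone]
    return 20 * math.log10(math.sqrt(sum(e * e for e in err) / len(err)))

# dithered 16-bit error noise sits near -96 dBFS; 24-bit is far lower still
print(error_rms_db(16, True), error_rms_db(24, True))
```

The dithered error floor is already noise-like and far below the noise floor of any real listening room, which is the article's core argument for 16 bits being enough at the delivery stage.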

> When I casually researched the upper limit of human hearing, I came across something that mentioned that some people can detect lowpass filtering up to 27khz.


I'd want to know (a) if it was an analog or digital filter, (b) if the >20kHz signal intensity was normal/plausible and (c) how they ensured that the playback system wasn't generating intermodulation distortion products.

Thanks. It's wikipedia but there's actually some good stuff there. The important link from that page seems to be:


A decade has passed and it would be interesting to know how many people have reproduced the results detailed in the abstract. I gave it a quick read and at first glance it looks like an impressive experiment:

Hearing thresholds for pure tones between 16 and 30kHz were measured by an adaptive method. The maximum presentation level at the entrance of the outer ear was about 110dB SPL. To prevent the listeners from detecting subharmonic distortions in the lower frequencies, pink noise was presented as a masker. Even at 28kHz, threshold values were obtained from 3 out of 32 ears. No thresholds were obtained for 30kHz tone. Between 20 and 28kHz, the threshold tended to increase rather gradually, whereas it increased abruptly between 16 and 20kHz.

If you use foobar2000, you can use ABX Comparator to compare between various bitrates and formats. Start with a lossless format and convert it.

[1] https://www.foobar2000.org/components/view/foo_abx
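For interpreting an ABX run like the one that component logs, the usual yardstick is the binomial probability of scoring at least that well by pure guessing; a minimal sketch:

```python
from math import comb

def abx_p_value(correct, trials):
    # probability of scoring at least `correct` out of `trials` by coin-flipping
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 14/16 correct is very unlikely by chance; 8/16 is what guessing gives
print(abx_p_value(14, 16), abx_p_value(8, 16))
```

If the p-value isn't small (say, under 0.05) across a decent number of trials, you haven't demonstrated you can hear a difference between the formats.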

I'd be happy with CD quality - usually I have more than enough download bandwidth and storage space for it. Apple has had Apple Lossless for years but Apple Music (and the iTunes store) still use(s) lossy compression. Movies are now 4K, but Apple has been stuck on 256Kbps AAC since 2009. :(

Though as others have noted CD quality won't improve a terribly mastered recording from the loudness wars.

I wonder about one thing. Sure, you can't hear above/below certain frequencies, but those frequencies still resonate with parts of your body (that are not ears), so you might feel them in ways other than hearing, and their presence also generates harmonics. Not sure if it is observable to a human, but just because you don't hear N hertz doesn't mean you can't hear its harmonics, or that it doesn't affect your _perception_ of the rest of the signal at all. Cutting off some frequencies could create patterns that are not hearable per se, but might induce unwanted sensory feelings (*opinion, not a fact). I think using physics to break this down doesn't settle much, and that the most practical way to resolve the debate would be a double-blind test on a statistical group of the so-called audiophiles.

I have a related question that someone here probably has a good answer for. I recently heard a song I like on the radio while driving. Shortly after it played I pulled up the same song on Spotify with my phone, plugged my phone into the car stereo through the headphone jack, and played it. The quality was MUCH worse. What's the likely reason for that?

I know very little about audio but my best guesses are:

1. The media cable was poor quality and/or playing music through the headphone jack is worse quality than radio station airwaves.

2. Spotify was sending back poor quality audio, possibly because I was not on wifi.

I'm sure the particulars matter but does anyone have a best guess as to why the quality would be so much worse? I don't really expect mainstream radio stations to serve up the highest quality audio, but maybe my assumptions are way off.

Radio stations often do additional processing of music to make it louder and more crisp when played on a car stereo, often by using techniques such as multi-band compression: the sound is decomposed into several bands, and each band has dynamic range compression applied with different parameters to maximize the perceived sharpness/loudness.

It destroys a lot of subtlety and sonic detail in the original, but in exchange you get an overall louder, more in-your-face sound, with highs that come through even on bad audio systems. On car stereos, where you have a lot of low-frequency rumbling sounds, this especially makes a difference. And if you ask a random person to give a subjective quality assessment of original vs that processed audio, they'll almost always feel as if the latter is of higher quality.

For more info see e.g. [1].

[1]: https://www.soundonsound.com/techniques/multi-band-compressi..., section "Broadcast Applications for Multi-Band Compression."
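The scheme described above can be sketched in a few lines. This is a toy version, with a one-pole filter as the band splitter and static gain curves; real broadcast processors use proper crossover filters plus attack/release smoothing:

```python
import math

fs = 48000

def onepole_lowpass(x, cutoff_hz):
    # crude one-pole low-pass used as the band splitter
    a = math.exp(-2 * math.pi * cutoff_hz / fs)
    y, out = 0.0, []
    for s in x:
        y = (1 - a) * s + a * y
        out.append(y)
    return out

def compress(x, threshold, ratio):
    # static gain compression, sample by sample (no attack/release, for brevity)
    out = []
    for s in x:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(math.copysign(mag, s) if s else 0.0)
    return out

# bass-heavy test signal: loud 100 Hz rumble plus a quieter 5 kHz tone
x = [0.9 * math.sin(2 * math.pi * 100 * t / fs) +
     0.2 * math.sin(2 * math.pi * 5000 * t / fs) for t in range(fs)]
low = onepole_lowpass(x, 500)
high = [s - l for s, l in zip(x, low)]  # complementary high band
# squash the rumble hard, the highs gently, then sum the bands back
y = [a + b for a, b in zip(compress(low, 0.5, 4), compress(high, 0.1, 2))]
```

After per-band compression the peaks are tamed, so the whole mix can be turned up, which is exactly the louder, in-your-face radio sound described above.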

Spotify definitely sends you low quality audio at times. Most people won't notice on common speakers and headphones, but even my "bad" speakers revealed the difference.

Amazon Music seems to be pretty good as far as quality is concerned. I think they download the MP3s onto the phone's local storage so they don't have bandwidth issues? Either way, I could hear the difference between Spotify and Amazon music. The difference between Amazon music and my own MP3s was not as apparent.

Pandora seems to sound "fine", although I seldom play it loud enough to notice. Spotify was the only one where I noticed the quality being notably bad. It's possible it's due to a low bandwidth fallback. And maybe they throttle their own servers at peak times, in addition to detecting the lack of local wi-fi.

The other thing to note is that Spotify will send you lower quality audio on the mobile vs. desktop client.

When I last moved, I plugged my phone (running Spotify) into my receiver to check that I'd gotten my speakers set up right. It was so muffled-sounding that I was worried I'd somehow damaged my speakers!

3. The A/D converter used by your car stereo headphone jack is low quality and introduced sampling artifacts.

4. You have a high definition radio and were listening to a high quality digital signal over FM as opposed to an FM analog signal.

It's probably a combination of all of these.

There is no "high definition radio." In the context of FM radio the H stands for hybrid and the D stands for digital. The digital often sounds worse when you compare them. Less noise for sure, but synthetic treble, almost as bad as Sirius XM.

There's a quality setting in the app settings where you can choose the quality. Choices are automatic, low, normal, high, very high. I guess "automatic" can adjust the quality based on connection.

I don't know if this is still common practice, but radio edits were often mastered differently back when I briefly studied music production. The station also likely uses signal processing to compress the dynamic range and increase the "loudness". Listening in a car is quite different than listening on home audio equipment, hence the different processing.

This same argument can be made for many things

Why have an engine in my car that can exceed all speed limits?

Why have a heating and cooling system in my house that can exceed any comfortable level?

Why have lights that get brighter than I need?

Why have an internet connection that exceeds what I need now?

I keep all my music rips in uncompressed FLAC - 1) because I can, 2) because I have the most flexibility (transcodes), 3) because it is capable of capturing _more_ signal than the original contains.

No point in bottlenecking my audio just because _other_ people are unable to appreciate it.

Your examples all have good reasons though.

> Why have an engine in my car that can exceed all speed limits?

So I can drive faster than the speed limit if I want to. (And I do)

> Why have a heating and cooling system in my house that can exceed any comfortable level?

Well, you shouldn't oversize your HVAC system if you want to save money. But it's nice to be able to achieve your target temp in a reasonable time period. Any system that can heat your house by 10°F in 20 minutes will—as a side effect—also be able to heat it to 90°F if you were to set it there.

> Why have lights that get brighter than I need?

Other people may need that extra brightness. You can choose dimmer lights if you want. In any case, there's a clear difference between the two choices.

> Why have an internet connection that exceeds what I need now?

Again, other people may need that extra bandwidth. If you can choose a slower one, then do so.

The point of this article is that 24/192 downloads do not improve anything. It's like having a car engine with blue anodized cylinder heads. Nothing about the performance will benefit from the color change of the heads. Or using gold plated ducts for your heating system. The quality of the air is not affected by that.

Our ears are not capable of hearing the differences when they affect only frequencies above our range. Imagine if those lights boasted that they rendered 200nm light more faithfully. That improvement is wasted on your eyes.

More analogies—

It's like printing your brochures at 160,000 DPI instead of 2,400 DPI. The difference is entirely imperceptible by the human sensory system without artificial augmentation.

It's like capturing the invisible infrared light spectrum in a cinematic movie camera so it can be projected back to cinemagoers as infrared light in the theatre.

Yours are much better than mine. Closer to the real issue with 24/192.

> No point in bottlenecking my audio just because _other_ people are unable to appreciate it.

The entire point of the post is that _nobody_ can appreciate it. It is entirely a waste of space at best, and a cynical marketing ploy at worst.

Mastering should be done for studio monitors. No, studio monitors do not sound the same, but they are somewhat neutral; they sound somewhat in the same ballpark, which is the point of them to begin with: a flat frequency response (which does not imply that they "sound flat", just that music mastered for active subwoofers sounds flat).

This way, those who wish to hear how the music was intended to sound, will have a somewhat decent chance of coming near to what it sounds like, and people who want other flavours can still simply buy equipment which colors it in the direction they desire.

High resolution audio is important to me as a sound designer because of the ability to severely slow down a piece of audio without any aliasing or stuttering.

At 96KHz and higher with certain samples I can slow down by 80% and it will still sound good.

Do you mean slowing down while lowering pitch (without resampling)? If so, you're correct, as you bring harmonics from out of limit of human hearing back and the result sounds natural.

But if you mean just changing the speed of the sound, then you need to change the algorithm you're using. There should be no difference in quality due to sources having different sample rates.

I store stuff in FLAC; I've got a large NAS at my house. Then I can down-convert to any other format I might need. I enjoy the FLACs when I'm home.

Storage space is cheap, and we have the ability to record and store music in 24/192 or any other format we want. Even if it's useless to us now, it may be of value some day when our genetically-engineered descendants can hear up to 40khz or when someone invents a direct condenser-microphone-to-brain interface.

TL;DR: 24/192 is useful at the mastering stage, to avoid error creep from mixing and effects. This is way beyond human hearing however, so scaling down to 16/44.1 (CD quality) in the final mix for playback won't result in noticeable degradation. CD quality was chosen on principles of human limits.

Perhaps we could give a bit of extra headroom for kicks, to widen the envelope at the extremes. A useful amount would look more like 20/48 rather than quadruple or sextuple the resolution. No one produces in this format though; the next step up is typically 24/96.

Like how banks keep track of fractional pennies when calculating interest, because rounding them off at time of calculation would introduce cumulative error. Instead, they round them off at payment time.
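A toy illustration of that cumulative-error point (the rate and balance are hypothetical; truncating to whole cents at every step biases the result downward, much like truncating samples instead of dithering biases audio):

```python
from decimal import Decimal, ROUND_DOWN

daily_factor = Decimal("1.00045")   # hypothetical daily interest factor
exact = running = Decimal("100.00")
for _ in range(365):
    exact *= daily_factor           # keep the fractional pennies all year
    # round to whole cents at every step instead: the error accumulates
    running = (running * daily_factor).quantize(Decimal("0.01"), ROUND_DOWN)

print(exact.quantize(Decimal("0.01"), ROUND_DOWN), running)
```

Rounding once at payout loses at most a fraction of a cent; rounding at every step compounds into a visible shortfall, which is the same reason you mix at high precision and quantize only at the end.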

Yes, as done in an image compositing pipelines as well, with higher resolutions and bits per pixel of color information until final output.

Please do not confuse the usefulness of 24/192 for playback and listening enjoyment with its usefulness for recording and heavy 'in the box' processing.

In the box processing uses 32-bit or 64-bit float. Fixed-point DSP processing was a thing maybe ten years ago, and even then the standard was 56-bit. 24-bit is nowhere close to good enough for ITB DSP.

That aside - the bit depth part of this article is silly and wrong. With an unprocessed acoustic recording, the difference between 16-bit and 24-bit sources is fairly easy to hear on professional equipment.

By the time rock/pop/IDM/etc has been mixed and mastered, the dynamic range can be so limited you might as well distribute it at 8-bits. (Barely an exaggeration, BTW.)

This is not even close to being true of jazz, orchestral, and folk recordings. Typically recording engineers allow somewhere between 10dB and 20dB for peaks, which means the actual recorded resolution of sustained non-peaky instruments and quiet sections is somewhere around 12-bits - comfortably low enough to hear quantisation errors, even with dither.

So for some genres, 16-bits is plenty. For others it's nowhere near good enough.
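The ~12-bit figure above follows from the roughly 6 dB-per-bit rule for linear PCM; a quick back-of-the-envelope check (the function name is mine):

```python
import math

def effective_bits(bit_depth, headroom_db):
    """Resolution left for the quiet parts after reserving headroom for peaks."""
    db_per_bit = 20 * math.log10(2)  # ~6.02 dB of dynamic range per bit
    return bit_depth - headroom_db / db_per_bit

# 16-bit source with 20 dB reserved for peaks:
print(round(effective_bits(16, 20), 1))  # -> 12.7
```

With 10 dB of headroom instead, you'd get about 14.3 effective bits, which is why peaky genres fare better.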

In 2019, there's really no practical reason not to distribute music as 24-bit FLAC for high-end use. If you're listening on mobile you may as well use one of the better compressed formats. But for home playback, 24-bit is master-tape quality with no significant downside.

Sampling rate is a more complex issue. 48k is significantly better than 44.1k for the reasons mentioned.

Vinyl can go up to 100k or so, although not very accurately, and some people - including some very highly respected professional audio equipment designers, like Rupert Neve - believe that makes a difference.

But it's very hard to record ultrasonics "just in case" because the microphone->preamp->ADC chain has to handle them accurately, and that rarely happens. So there's very little of value up there in most recordings anyway - although maybe more on vintage tape masters than on modern digital recordings.

Personally I'm equally happy with 48k or 96k. The 192k recordings I've heard have been disappointing, possibly because of the intermodulation effects, but also because jitter becomes more of a problem at high rates.

Very inadequately. There was a quadraphonic vinyl system that failed commercially, which played back the surround channels via modulation of a 30 kHz carrier tone. You had to use a special (i.e. 'good') stylus, and it sort of worked. The carrier and its sidebands ran from 18 kHz to 45 kHz, and the fact that this worked at all is evidence that vinyl reaches that high if you let it. Wear tends to scrub that information off unless it's a high-energy transient, in which case there's a big chunk of plastic refusing to be worn away (though you'll dull it).

> That aside - the bit depth part of this article is silly and wrong. With an unprocessed acoustic recording

It's neither silly nor wrong—the article's title literally excludes it from consideration. This is about music downloads, not music production.

My favorite music these days is left-field and lo-fi hip hop. High-fidelity is pretty irrelevant. Music is produced, mixed, and mastered at home.

e.g. https://pitchfork.com/reviews/albums/earl-sweatshirt-some-ra...

Thanks to this article, when I torrent music that is shared in 24/192 format, I always resample to 24/96:

ffmpeg -i foo.flac -ar 96000 -acodec flac bar.flac

To thank the article even more, you could resample to 16/48kHz.

Even though the lower sample rate of 48kHz would be entirely reasonable and 96kHz is overkill, 24 bits still makes an audible difference for the material I listen to (modernist classical music and ECM jazz), which is why some labels offer 24/48. For pop music, which is of course distinguished by its minimal dynamic range, 16 bits would be fine, just as on CD.

The only issue with low bit depths is the noise floor, so if you're hearing other distortions, they're not caused by the bit depth but perhaps by your room treatment or headphone drivers. A recording would have to have a hilariously large amount of headroom for the 16-bit noise floor to be noticeable when the music is played at a desired level, and while symphonic and jazz recordings have ridiculously wide dynamic range, it's not 100dB of headroom, maybe 60, so 16-bit should be fine.

If you still think it's a problem, adding good dithering with ffmpeg's quantizer/resampler flags will push the noise floor another 6-10dB lower.
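For example (a sketch, not a verified one-liner; `triangular_hf` is one of several `dither_method` values ffmpeg's aresample filter accepts):

```shell
# Downsample to 16/48 with high-passed triangular dither applied
# during the bit-depth reduction
ffmpeg -i in.flac -af "aresample=osr=48000:dither_method=triangular_hf" \
       -sample_fmt s16 out.flac
```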

> 24 bits still makes an audible difference

The article (and numerous other sources I've seen over the years) disagrees with you so I'm curious why you're so certain?

As the article notes:

"It's true that 16 bit linear PCM audio does not quite cover the entire theoretical dynamic range of the human ear in ideal conditions."

Now, 24-bit may be overkill, but 24-bit is the next step up from 16-bit among standard encoding formats, and as the article notes, there are no drawbacks with 24-bit encoding except greater use of disk space.

The article says:

"[...] does not quite cover the entire theoretical dynamic range of the human ear in ideal conditions."

Note the words "theoretical" and "ideal".

In your post it sounds like you're claiming that you can regularly hear a difference under normal listening conditions - which contradicts my reading of that sentence.

My gut feeling is that the difference you're hearing is placebo.

To put it another way - either the article is making an inaccurate statement, you're mistaken - or you've got golden ears and only ever listen to music in specially prepared environments.

The article is making numerous inaccurate statements, because it's got an agenda and the author is heavily invested in lossy media encoding. To a degree it's relative: in the car with the windows open, you won't be hearing 16 bits of audio resolution.

Monty's gotta monty, and this argument has been going on from the very earliest days of digital: back when people behaved exactly the same way over digital recordings that are now commonly accepted to be excruciatingly bad for a variety of reasons (generally having to do with bad process and wrong technical choices).

You can get a HELL of a lot out of 16/44.1 these days if you really work at it. I do that for a living and continue to push the boundaries of what's common practice (most recently, Alexey Lukin of iZotope and I hammered out a method of dithering the mantissa of 32-bit floating point, which equates to around 24-bit fixed for only the outer half of the sample range and gets progressively higher precision as loudness diminishes). Monty is not useful in these discussions, nor is anyone who just dismisses the whole concept of digital audio quality.

I'm not dismissing anything. I'm arguing for the power of human self-deception. I feel the same way about connoisseurship in most other realms; food and wine being the obvious examples.

I believe it's a combination of imagined differences and barely perceptible differences elevated to implausible heights of significance.

Even if one can hear the difference between 16 and 24 bits it will be almost imperceptible in most listening conditions and when it is perceptible it will on the threshold - and certainly too subtle to affect the quality of the experience in any meaningful way.

To put things in perspective, 16-bit PCM audio has a noise floor around -96dBFS, i.e. the difference between the loudest possible sound the format can contain and the noise floor is 96dB. That's what the bit depth determines: the level of the noise floor relative to the loudest reproducible sound. It does not add any more detail; audio bit depth is not like the resolution of an image file, where more bits mean finer-grained detail. Audio doesn't work like that.
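The 96dB figure is just 20·log10 of the number of quantization levels; a quick check (assuming plain undithered linear PCM):

```python
import math

def dynamic_range_db(bits):
    # Ratio of full scale to one quantization step, in dB
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # -> 96.3
print(round(dynamic_range_db(24), 1))  # -> 144.5
```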

96dB is a lot more than you probably think: it's like the difference between an anechoic chamber (nominally ~0dB) and someone jackhammering concrete right next to you (~90-100dB). Add to this that even a quiet room has a noise floor around 20-30dB, so just to hear the noise floor of CD-quality audio over it, a full-scale peak would have to hit roughly 126dB!

Try generating a sound at 0dBFS, then attenuate it in steps of 10dB and note when you can no longer hear it. At -50dB the sound is already extremely quiet and barely audible, and there would still be 46dB of attenuation available.

In addition to this, noise-shaped dither can push the noise floor towards frequencies where the human ear is less sensitive, giving a perceived noise floor of around -120dBFS. In other words, 24-bit audio for distribution and listening is absolutely pointless and has absolutely no audible difference when compared to 16-bit audio.
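As an aside, plain (not noise-shaped) TPDF dither is easy to demonstrate with the standard library: a tone below one LSB quantizes to pure silence without dither, but survives (buried in noise) with it. A toy sketch; the 8-bit depth and signal amplitude are arbitrary choices that make the effect obvious:

```python
import math
import random

random.seed(0)

def quantize(x, bits, dither=False):
    scale = 2 ** (bits - 1)
    if dither:
        # TPDF dither: triangular-distributed noise spanning +/-1 LSB
        x += (random.random() - random.random()) / scale
    return round(x * scale) / scale

# A 1 kHz sine whose peak sits below half an LSB at 8 bits
sig = [0.003 * math.sin(2 * math.pi * 1000 * n / 48000) for n in range(48000)]

plain = [quantize(s, 8) for s in sig]
dithered = [quantize(s, 8, dither=True) for s in sig]

print(any(plain))     # False: the tone vanished entirely
print(any(dithered))  # True: the tone is still encoded, plus noise
```

Noise shaping goes one step further by filtering that noise toward less audible frequencies, which is where the perceived ~-120dBFS floor comes from.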

I'm curious why you torrent music still when streaming is so widely available and free/cheap?

I did a lot of torrenting back in the 2000s, but thinking back on it I spent a ton of time finding things, organizing my file system, transcoding, editing metadata, etc. I do not miss that hassle at all now.

I spend about half of every year traveling, often in particularly undeveloped countries and/or far from a mobile signal. Having my entire music collection on a portable hard drive is more convenient for me personally than being bound to streaming.

Makes sense, thanks for answering!

Streaming music services lack all the options in foobar2000 that I've grown accustomed to over the last 10+ years.

Personally, I buy music rather than torrent, but the pace at which I buy new music (either on bandcamp or physical CDs) costs me about the same as a Spotify premium subscription anyways, only I get to keep the music forever.

>I'm curious why you torrent music still when streaming is so widely available and free/cheap?

To provide you with another answer: most of the artists I listen to aren't on any of the music streaming services. Local underground bands who only hand out CDs at their shows rarely exist outside the pirating scene, which has a knack for distributing limited-release material. A small percentage of those bands/artists are on Spotify or Bandcamp, but most aren't.

I buy what I can because I enjoy having the album art, but most of my music cannot be purchased or streamed.

There's also no guarantee that the streaming services will still exist in 10, 20, 30+ years - but there is an almost 100% chance that the hardware and software necessary to listen to or convert .flac will exist for me to continue to listen to my music.

I refuse to pay streaming subs, I buy second hand CDs for pennies and rip to flac. I'll always own my content and play it whenever/wherever I want at the best quality.

Some bands still refuse to be available on streaming (ex: Tool). Some will never be on streaming.
