Hacker News
24/192 music downloads are silly (xiph.org)
604 points by tosh on Dec 2, 2014 | 424 comments



I once spent ~2 hours explaining all of this (well maybe not all of it) to a friend of mine who was studying sound design at the time. He didn't believe me.

I even transcoded some 24/192 FLAC Pink Floyd I had lying around and made him do a double blind test to show him that he'd prefer the slightly louder song every time, even if the louder song was 192kbps vs the FLAC. He did. He still doesn't believe me.

He still thinks he can hear the difference between FLAC and MP3 to this day. He works as a sound engineer now.

I don't think any amount of reasoning will make some people change their minds. Some people buy $500 wooden knobs to make their volume pots sound better. (or was that a hoax? i can't tell anymore)


> Some people buy $500 wooden knobs to make their volume pots sound better. (or was that a hoax? i can't tell anymore)

Some people buy small pyramids to elevate their cables off the floor, some people buy mats to put onto your CDs before putting the CD in a player (http://dagogo.com/millenniums-m-cd-mat-carbon-cd-damper-revi...), some people buy $1000+/meter digital interconnect cables (http://www.theabsolutesound.com/articles/transparent-referen...), some people buy $7200 power cords (http://www.theabsolutesound.com/articles/crystal-cable-absol...) and $350/m HDMI cables (http://www.theabsolutesound.com/articles/nordost-releases-fi...).

Self-styled audiophiles are, by and large, idiots with way too much money plagued by magical thinking. Developer bullshit has nothing on them.


> some people buy mats to put onto your CDs before putting the CD in a player

An 80-minute, 700 MB CD-R fits 80 * 60 * 44100 * 2 * 2 / 2^20 ~= 807 MB of audio.

Why is that?

The 100 MB difference is not just due to the audio TOC being smaller than the ISO9660 or UDF file system metadata. It's also because of differences in error correction. I don't have the spec on hand, but I recall from when I was investigating this that CD-ROMs use more bits for error correction than audio CDs do. That's why you can fit more audio data than "filesystem data" on a CD-R. Ripping an audio CD digitally will likely produce a different audio file every time, since the error correction is not that strong; it's merely good enough for audio.

I read into this when I was wondering why my CD-DA extracted .wavs came out with a different checksum every time. Vibration is one of the factors that would make the same audio CD, read with the same CD player, produce different digital signals some of the time or even every time.

CD-ROMs, however, which store digital data, need better correction: you definitely don't want a bit flip in your .exe, while a minor amplitude difference (an uncorrected bit flip in the lower bits of a 16-bit PCM sample) is no biggie.

So… I'm not saying that the people using CD mats are informed (or have tested whether the mat makes a difference, or would even know how to go about testing this scientifically), but there's more to it than what I originally thought, which was "it's digital, so it never degrades". I wouldn't have known without checking the md5sum of my .wav, though.
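The parent's 807 MB figure and the missing ~100 MB both fall out of the sector layout; a quick sketch (the per-sector figures are the standard published Red Book / Yellow Book numbers, worth double-checking against the specs):

```python
# Capacity arithmetic for an 80-minute disc. Assumed figures: 75 sectors
# per second, 2352 raw bytes per sector; Mode 1 CD-ROM keeps only 2048 of
# those for user data, the rest going to sync, header, EDC and extra ECC.
sectors = 80 * 60 * 75            # 360,000 sectors on an 80-minute disc

audio_bytes = sectors * 2352      # CD-DA: all 2352 bytes carry samples
data_bytes = sectors * 2048       # Mode 1: only the user-data portion

print(f"audio: {audio_bytes / 2**20:.1f} MiB")   # 807.5 MiB
print(f"data:  {data_bytes / 2**20:.1f} MiB")    # 703.1 MiB
```

The ~104 MiB gap is exactly the per-sector overhead that audio discs spend on samples instead.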


Uh, no. Bit-perfect ripping is trivial and routine, and tools like the AccurateRip DB (which has checksums for around three million different titles you can use to verify the checksums on your own rips) and the CUEtools database (which has recovery records you can use to correct bit errors on your own rips) prove it. I routinely get bit-accurate single-pass high-speed rips--no "paranoid" settings or re-reads--of discs dating back thirty years or more, and so do hundreds of thousands of other people. If you get different checksums on successive rips of the same CD, either the disc is damaged or the drive you're using is failing.


Oh sure, your rips may be perfect at the bit level, but how do you know that they're free of sub-bit quantization that isn't detectable by electronic circuits but can be heard by the human ear?

This sub-bit jitter and interference can travel along with a digital file and sneak right past your ordinary bit-level error detection and correction, no matter how lossless you make it. That's because these errors aren't visible in the bits. They occur at a deeper and more subtle level, in between the bits.

Even if you prove mathematically that two files contain the exact same bits, you can't prove that the human ear won't hear any difference, can you?


We've discovered digital homeopathy.


Funniest reply I've read all day.


The decoder/player doesn't know how to read between the bits.

Same file -> same playback.

If you hear the same sound file twice (or two identical files) and hear something different, your software is broken or you're imagining things.


Ah, well, the human ear is a much more finely tuned instrument than your decoders and players. Think of the feelings you get when you hear the ocean waves, the birds sing, a thunderclap!

Can you turn this into mere "bits"? Of course not!

That's why it is so important to protect against sub-bit quantization errors, and this can only be done with proper interconnects. Ordinary cables allow the bits to travel willy-nilly until they jam up against each other creating a brittle, edgy soundstage. Quality interconnects are tuned, aligned, and harmonically shielded to keep those precious bits - and the all-important spaces between them - in a smooth flow.

And then, we can hear all of the things that make us human.


I'm very glad you stuck with the bit (har har) and didn't resort to just telling him he missed the joke. Well done.


Whooosh.


For a second I was terrified at the thought that you're being serious.

That comment is just perfect.


Flawless satire.


Something about a post being on HN makes you assume it isn't a joke starting out, so I read for a lot longer before I realized what was happening.


That's also between the bits. See?


Thank you so muuuuuuuch for the uncontrollable laugh I'm having now


Interesting. I'll have to check those projects out. I have the same problem as the GP -- I have a script that rips CDs, taking multiple reads until it gets two bit-for-bit identical copies. And just about every time at least one track is silently "corrupted."

(I put the scare quotes on because I haven't actually bothered to check if there is an audible difference. But it does confirm the GP's experience.)


> bit-accurate single-pass high-speed rips

"military-grade encryption"


CD-audio "Red Book" data does have error correction (Cross-Interleaved Reed-Solomon Code). Whether you get a bit-perfect audio rip depends on how much error correction and retrying you do.

I remember experimenting with writing a CD ripping program in the 90s, using Windows APIs, and I found, like you, that I got different data each time. But modern rippers such as EAC do this stuff much better and will for the most part give you bit-perfect rips.

That mat does nothing. And if you read that linked page, you will see that he claims it drastically improves audio quality (bass, etc.), which is pure nonsense.
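The "error correction and retrying" idea can be sketched in miniature. This is a toy majority-vote model of what "secure"/"paranoid" ripping modes conceptually do (real rippers like EAC and cdparanoia are far more sophisticated; the error model here is made up):

```python
import random
from collections import Counter

def noisy_read(data: bytes, error_rate: float = 0.001) -> bytes:
    """Simulate one pass over a sector: each byte is occasionally corrupted."""
    return bytes(b ^ 0xFF if random.random() < error_rate else b for b in data)

def majority_rip(data: bytes, passes: int = 5) -> bytes:
    """Re-read several times and keep the per-byte majority value.
    Independent errors almost never line up across passes."""
    reads = [noisy_read(data) for _ in range(passes)]
    return bytes(Counter(col).most_common(1)[0][0] for col in zip(*reads))

random.seed(0)
sector = bytes(range(256)) * 8           # 2048 bytes of dummy sector data
print(majority_rip(sector) == sector)    # recovered despite per-pass errors
```

The same intuition explains why single-pass rips on a healthy drive and clean disc are usually already bit-perfect: the raw error rate is low enough that the built-in CIRC handles it.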


> Vibration is one of the factors that would make the same audio CD, read with the same CD player, produce different digital signals some of the time or even every time.

Err, no.

That makes sense when you have an analog version of the audio picked up by an analog transducer (i.e. a vinyl record) but makes no sense with an isochronous stream of quantized samples.

I suppose a vibration could cause a small phase shift in when the sample physically appears under the laser, but since the D->A conversion is clocked by a PLL it is irrelevant.

If you have extreme warping or shaking (e.g fling your discman onto the floor or stick your finger on the disk while it's spinning) then a sample might not appear at all, but that's something different than you are talking about.

I suppose it's theoretically possible that some extreme warping or vibration could cause a bit flip, but that's what the ECC is for.


From Wikipedia (the Compact_disc article):

"[…] The change in height between pits and lands results in a difference in the way the light is reflected. By measuring the intensity change with a photodiode […]"

I'm no signals expert. Are you saying that there is no quantization in that intensity change measurement?

Regardless of quantization, maybe you're right on vibration not being a major source of errors (I know little about electronics and PLLs).

But then, what are the error sources that made the engineers put an extra 276 bytes of Reed-Solomon error correction per 2352-byte sector on a mode-1 data CD-ROM (vs. none extra on an audio CD, which just has the per-frame CIRC)? See https://en.wikipedia.org/wiki/CD-ROM#CD-ROM_format .
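For reference, the Mode 1 sector budget from that page; the extra 276 parity bytes (plus sync, header and EDC) come straight out of what would otherwise be payload:

```python
# Mode 1 CD-ROM sector: 2352 raw bytes, per the Yellow Book layout
# summarized on the linked Wikipedia page.
sector_bytes = {
    "sync": 12,
    "header": 4,
    "user data": 2048,
    "EDC (error detection)": 4,
    "reserved": 8,
    "ECC (extra Reed-Solomon parity)": 276,
}
print(sum(sector_bytes.values()))  # 2352
```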


There was a day when the clock going in to the D/A converter could be affected by the bitstream coming off the CD. Those days are long gone I'm sure. Everything is buffered in RAM, overclocked, and digitally processed before it hits the D/A.


On a decent drive, cdparanoia should successfully rip with no errors on a clean disc. I have done the md5sum test before.



"Oh, and virtually no PC on earth has that kind of I/O throughput; a Sun Enterprise server might, but a PC does not. Most don't come within a factor of five, assuming perfect realtime behavior."

Some statements just don't age well ...


It's generally the lack of synchronization and positioning information compared to data CDs that gets you. In particular, on many older drives you can't reliably start the rip at the same place each time, so even if all the corrections and fixups work perfectly and you get a bit-exact rip (which isn't hard) you still won't get the same file twice.


> I read into this when I was wondering why my CD-DA extracted .wavs came out with a different checksum every time.

You sure there isn't something in the wavs like a creation date field that would always cause the checksum to be different? That would make way more sense than "vibration"....


I did check by diffing the RIFF headers. Here's some info on the metadata: https://ccrma.stanford.edu/courses/422/projects/WaveFormat/ . There's little to no room for variance.


FYI the difference is almost certainly due to the seeks not being sample-accurate, so your rips are bit identical for each sample, but you are starting in very slightly different places. Either that or you have a really broken CD-ROM drive (which is also possible).
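A toy illustration of that offset effect: the samples below are bit-identical, but the files (and their checksums) are not, because one "rip" starts a couple of samples late. Everything here is made-up data, not a real rip:

```python
import hashlib

# Fake 1 second of 16-bit stereo audio (44100 frames * 4 bytes).
track = bytes(i % 251 for i in range(44100 * 4))
rip_a = track
rip_b = track[8:] + b"\x00" * 8   # same samples, shifted by 2 frames

# Whole-file checksums disagree even though the audio content is identical.
print(hashlib.md5(rip_a).hexdigest() == hashlib.md5(rip_b).hexdigest())  # False

# Align for the offset first, and the overlapping region matches exactly.
offset = 8
print(rip_a[offset:] == rip_b[:-offset])  # True
```

This is why AccurateRip-style databases store offset-corrected checksums: they compare the samples, not the raw file bytes.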


>Self-styled audiophiles are, by and large, idiots with way too much money plagued by magical thinking.

I agree on the "plagued by magical thinking" part, but not all these people are idiots. Some of them are quite intelligent, in fact. I think they just want to be "in the know", and are able to suspend their normal skepticism in order to belong.

One of the smartest and most productive programmers I ever met was taken in by this nonsense. He replaced all the metal bolts in his power supply with teflon because the metal bolts disrupt the magnetic field around the transformer and you can hear that, maaaan!

He did have a nice sounding system, for which he spent about $10k more than one that would have sounded the same.


> but not all these people are idiots

Not to be picky, but it bugs me when people talk about two different things and don't understand each other.

Intelligence is not a linear value that can be compared like "person1 intelligence > person2 intelligence". Both of these terms always carry an implied, unstated domain of intelligence.

By "idiots" he meant "small amount of/incorrect knowledge in the area of audio quality and human hearing", and by "intelligent" you mean "big amount of knowledge and efficiency in the area of writing computer code".


If you're going to be "picky", please be picky about something you understand. A "small amount of/incorrect knowledge" is ignorance, not a lack of intelligence. When you label someone an idiot you're not talking about his lack of knowledge. You're talking about his intelligence.

Now, if by "idiots" he means "ignorant people" then he's using the word incorrectly. But there's no actual indication that's what he meant. At some point you just have to assume people mean what they say.

And despite what people want to believe, the last fifty years of psychometrics research indicates there really is such a thing as "basic intelligence" (which they call "g"), and people with more of it do better on a wide range of intellectual tasks. So you really can say "person1 intelligence > person2 intelligence".


What you are referring to as "basic intelligence" is actually a combination of neuroplasticity and general knowledge. Neuroplasticity is the speed at which the brain can learn new things, but even then you can't say "person1 neuroplasticity > person2 neuroplasticity". That is because the brain is composed of many parts that can have different plasticity. Also, neuroplasticity (i.e. "intelligence") is not static and can change over time depending on which parts of the brain are most active.


Yet again, none of what you've written here is actually true. I suggest you peruse the wiki page.


James Randi has a speech (you can probably find it on Youtube) about how it is easier to fool smart people than average people. Smart people think they can't be fooled.


The most generous approach to audiophiles is to allow for a placebo effect. i.e. they get greater enjoyment from listening to what they believe is a perfect sound system, regardless of whether the gold-plated cables actually do anything.


I think that's exactly right.


> not all these people are idiots. Some of them are quite intelligent

You can be very intelligent in one domain and be a complete idiot in every other domain. The result is that, unless we're talking about that one domain, they're an idiot.


I would refer to that as knowledgeable and ignorant rather than intelligent and idiotic. It doesn't really make sense to say that Gary Kasparov is an idiot regarding the construction of log cabins. Rather, he is an intelligent person who is ignorant of the construction of log cabins.


> it doesn't really make sense to say that Gary Kasparov is an idiot regarding the construction of log cabins

It does if he starts voicing 'idiotic' opinions and beliefs as to the construction of log cabins.


I have to laugh at this, really

But I suspect the metal ones are better. More magnetic shielding

So, IF your audio system uses a linear power supply (and it should) AND it is badly filtered (it should have good filtering), you can hear the 60 Hz/50 Hz hum from the power network (assuming it's not creeping into your system through other means as well; most likely it is).


$130 USB cable, with 8 non-ironic/satirical reviews: http://www.amazon.com/AudioQuest-Carbon-75m-2-5-Cable/produc...

(Also, the reviews on this $15,000 speaker cable are amazing: http://www.amazon.com/AudioQuest-Terminated-Speaker-Cable-Di... )


Has anyone mentioned the DVD Rewinder? http://www.dvdrewinder.com/index.html The demagnetizer is a cool device too: http://www.acoustic-revive.com/english/rd3/rd3_01.html


> Self-styled audiophiles are, by and large, idiots with way too much money plagued by magical thinking.

Not just that. But the arrogance and fanboyism is rampant.

God forbid you ever consider buying a Bose or Beats product.


Bose and Beats* are, by every /objective/ measure, shitty products.

Subjectively, you might like them, but the faithfulness of audio reproduction is not a subjective matter. You can play a tone and measure how well that tone is actually played back.

You can then also objectively compare things that produce that playback quality at various price points and figure out if they're priced competitively.

There is plenty of fanboyism in high end audio, but that's not why they say Bose and Beats are shitty. It's because Bose and Beats ARE shitty.

*The Solo 2 Beats actually measure very well. They're even competitively priced... with other overpriced fashion statement headphones. They're still overpriced vs. headphones that are just meant to play music well.
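"Play a tone and measure how well that tone is actually played back" can be made concrete as a total harmonic distortion (THD) measurement. A sketch, with tanh soft clipping standing in for a transducer's nonlinearity (purely illustrative, not a model of any actual product):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
tone = 0.8 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone, 1 second

# Toy "speaker": soft clipping distorts the tone, creating harmonics
# that were not in the input signal.
played = np.tanh(2 * tone)

# THD: energy at the harmonics relative to the fundamental.
spectrum = np.abs(np.fft.rfft(played))
fundamental = spectrum[1000]                 # bin spacing is exactly 1 Hz here
harmonics = np.sqrt(sum(spectrum[k * 1000] ** 2 for k in range(2, 6)))
print(f"THD: {harmonics / fundamental * 100:.1f}%")
```

A perfect transducer would put all its output energy at 1 kHz; the gap between that ideal and the measurement is the objective part of "shitty" vs. "faithful".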


Are Beats shitty or just expensive? How little would I have to pay to get the same quality?

I am finding it hard to believe that they are actually shitty, while I find it very easy to believe that they are way overpriced.

I have never listened to Beats headphones, but I imagine they have a lot of bass boost (based on absolutely nothing); that is not the same as shitty, though.


So, we have to define, in your opinion, what would make a pair of headphones shitty.

If you are going to reduce them to the basest level of what the purpose for a speaker or headphone is, to reproduce the input sound, then yes, they are shitty, because they are not good at that.

From a purely objective standpoint, you are going to have to judge them based on that. Why would you want the speakers or headphones to make a different sound than what the signal is?

If you want to move away from an objective measurement of what makes a headphone good or not to something that's purely subjective (i.e. 'I like how they sound'), it's impossible to answer that question.

The Solo2 are a pair of Beats headphones that actually measure really well - they're good at the base purpose of a transducer. But they're $250. You could buy a pair of Sony MDR-7506 that measure similarly (IIRC, a bit better, even) for $85.


> You could buy a pair of Sony MDR-7506 that measure similarly (IIRC, a bit better, even) for $85.

Or Superlux HD668B which can be found for $30 last time I checked.


Or if you're willing to spend more, Beyerdynamic DT-770/880/990 family stuff.

Or if you want IEMs, you're not going to beat Hifiman RE-400 for any IEM under $300.


>If you are going to reduce them to the basest level of what the purpose for a speaker or headphone is, to reproduce the input sound, then yes, they are shitty, because they are not good at that.

Though I don't personally like the cold, bass-heavy sound of Beats, I don't really get how you could know this, because most people have no idea what a piece of music should sound like. They know how they think it should sound, and they know how they like it to sound, but very few know how it should sound. The only real exception is music with "real" instruments like pianos, whose sound is familiar to enough people that its reproduction can be reliably judged. Even then, unless you know the piece well, most of us aren't in a good position to judge the speaker's quality.

So what factors are you using to determine if the sound is reproduced correctly?


So, uh, we can measure the frequency response made by headphones.

http://www.innerfidelity.com/content/headphone-measurement-p...

http://www.headphone.com/pages/evaluating-headphones

This is a pretty scientific matter - when I say "the purpose is to reproduce the input sound", we can tell exactly what is supposed to be reproduced, and we can tell exactly how capable the speaker is of reproducing it.

Some exceptions have to be made due to how having headphones on your head causes the sound to change, but again, these are pretty much known quantities - to get the equivalent of a flat response from a speaker, you will see change X in bass response, change Y in treble response, etc for headphones.

It's not a question of esoteric "The artist and recording engineer meant for this to be played on Kef blades powered by a Cary tube pre-amp feeding into a Mcintosh amp setup using a rail to rail ladder DAC", but a "We know how frequency response should look when measuring equipment and if it doesn't look like that then the sound you are getting out of it is different than the source material"
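One crude way to turn "how frequency response should look" into a single number is RMS deviation from a flat target. A sketch with made-up octave-band measurements (the numbers are invented for illustration; real measurement rigs also apply compensation curves for on-head acoustics, as noted above):

```python
import numpy as np

# Hypothetical responses at octave-band centres, in dB relative to 1 kHz.
freqs = [63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
flat_ish = [-2.0, -1.0, -0.5, 0.0, 0.0, 0.5, -0.5, -1.0, -3.0]
bass_hyped = [8.0, 9.0, 6.0, 2.0, 0.0, -1.0, -2.0, -4.0, -8.0]

def flatness(response_db):
    """RMS deviation from a flat (0 dB) target: smaller is more faithful."""
    r = np.asarray(response_db)
    return float(np.sqrt(np.mean(r ** 2)))

print(f"flat-ish:   {flatness(flat_ish):.1f} dB RMS deviation")    # 1.3
print(f"bass-hyped: {flatness(bass_hyped):.1f} dB RMS deviation")  # 5.5
```

Published headphone measurements are essentially finer-grained versions of this comparison.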


> This is a pretty scientific matter - when I say "the purpose is to reproduce the input sound", we can tell exactly what is supposed to be reproduced, and we can tell exactly how capable the speaker is of reproducing it.

You are assuming the song was mixed by someone wearing headphones that perfectly reproduce the input sound. Suppose the person who mixed a song was using beats headphones or other headphones that audiophiles consider inferior but that they know the majority of people use to listen to music. Wouldn't that then mean Beats headphones actually provide the listener with the actual, intended experience?


So, headphone use in studios is not generally for creating the final mix. Monitor speakers are used nearly exclusively in professional studios as what you are mixing for. Headphones have multiple places in the production process where they are used, but they're not the final target.

There's a few reasons for this. The most pragmatic is that doing so will produce the track that sounds the best on the widest variety of setups - EQed or not. There's also not any single headphone out there that is used so predominantly that it would make sense to cater to it specifically. The closest might be apple earbuds, but people using those probably aren't too concerned about sound quality anyway, so it doesn't make sense to mix with those in mind either.

From a theoretical standpoint, you're not necessarily wrong, but it's just not how things currently work, and there's not really any reason why it ever would work that way in a professional studio.

I make no claim as to what the people making music exclusively in their bedroom are doing, though.


Interesting. I guess the main concept I'm exploring is that if you don't control for what the person mixing it (or, more importantly, the person approving the mix) was listening on, then it's hard to make any claims about how the sound was "meant to be heard".


Given that whenever I stand near someone on a train wearing them I can hear a fair amount of their music (nowhere near as bad as Apple earbuds, though), I assume they can't be that great - that, or the listener has very bad hearing.

I have a set of Sennheiser HD 202 which don't have anywhere near the same leakage and cost £35. I haven't tried Beats so can't say much about audio quality, but in my experience high leakage usually means that the audio is poor too. It also means you will listen to music louder to compensate, which leads to more distortion.

> I am finding it hard to believe that they are actually shitty

In that case the marketing team have done a good job :-)


>Given whenever I stand near someone on a train with them I can hear a fair amount of their music (not anywhere near as bad as Apple earbuds though), I assume they can't be that great - that or the listener has very bad hearing.

This makes the mistaken assumption that isolation and good sound are related, which -- as open headphones and speakers can attest -- is not true. The goal of a speaker or headphone is to reproduce music faithfully. Unless you are familiar with the music's origin or it has real instruments whose sounds you can easily identify, it's impossible for most people to tell if the music is reproduced "faithfully". So there are a couple of general rules that most "audiophiles" will consider when dealing with volume:

1. Music played at louder volumes generally sounds better than that at lower volumes. You can hear more of what you are intended to hear.

2. Music often goes up and down in volume, so you want to hear the broadest range of volume.

3. The best listening devices both allow high volumes without clipping and low volumes with clarity.

The point is, just because you can hear it, doesn't mean they are bad headphones.

It also doesn't mean they are good headphones or that the people aren't inconsiderate. It simply means that "sound leakage" isn't really a decent criteria unless it's something that important to you.


Leakage is sometimes intended so it's not necessarily an indicator of quality. See the HD800's. You'll hear them in any open-plan office, for sure. There's no attempt to keep the music from leaking, their only priority is sound quality (which is, at this price, a matter of taste and preference).

http://en-us.sennheiser.com/dynamic-headphones-high-end-arou...


Yeah, Dr. Dre is on record saying that he's not an audio engineer but he knows what makes hip-hop music sound good. So he never claimed they had "flat response" or anything.


In response to your and your parent’s blanket claims

> Bose and beats* are by, every /objective/ measure, shitty products.

> God forbid you ever consider buying a Bose or Beats product.

If you need faithful audio reproduction, start with the room. There are reasons for buying a portable Bluetooth-enabled speaker, and also reasons one may consider specifically Bose SoundLink. Sound quality, in this sense, is not among them.


As far as I've tried NC headphones, nothing comes even close to what Bose offers with the QC25; every other brand I've tried cancels out less noise than the Bose. Sound quality might not be the best, but the intended environment is the limiting factor anyway, and they do a great job of dealing with environmental noise.


The biggest problem "audiophiles" don't seem to get is that accurate reproduction is not the end goal of music. Enjoyment is. Audiophiles have convinced themselves to find enjoyment from accurate reproduction, and that's OK. But the majority of the world does not see it that way.


Bose knows how to coax a bass note or two out of a small plastic box.


Bose knows marketing.

I've got excellent bookshelf speakers that were cheaper than the equivalent from Bose, but the reviews and tests showed them to be way better.

My (small!) speakers end up producing way too much bass for the room they're in, in fact, and I use Foobar 2000 with the "MathAudio Room EQ" plug-in to get a flatter speaker response from them. But their problem isn't that they can't produce bass notes.


Whoosh ... I think my comment went over some people's heads. A "one note bass" isn't a good thing, technically. I did not say that Bose is great at making speakers with excellent bass.


A little too subtle, I think. Sounded too much like you were using understatement.


Those are low-budget audiophile products, i.e. they are still marketed mainly to boost the ego of purchasers.

There are three brands of headphones that pros use: Beyerdynamic (typically DT-100 or DT-770), Sennheiser (typically HD-5/650) and Sony (typically MDR-7506/9). Beyerdynamics have somewhat better isolation so they're more popular in music studios, Sonys are more comfortable when you have to wear them all day so they're more popular on film sets; I favor the 7506 and am on my 4th or 5th pair. Some people love Sennheisers but I personally don't care for the ergonomics.

They're not beautiful, lightweight, or fashionable, but they're a lot nicer to listen to - which is why one or other of them was almost certainly used at the recording stage. If it was good enough for the people who made the recording, it's good enough for you. Also, you'll save money compared to the 'quality' consumer brands.


There are a few more brands and models that pros use. The AKG K240 has been in use for decades, and I think they're quite charming. Audio Technica has made a lot of in-roads into mid-level pro studios in particular (i.e. not million dollar rooms, but still quite good studios that make good records regularly). I've also seen Shure SRH series headphones in professional contexts.

But, your statements about "pro" headphones are accurate. They aren't the nicest looking, but they are really good, and I always recommend a good pro set of headphones over the marketed crap from Beats, Bose, Monster, etc. $250 will buy a lot of headphone quality from one of the pro audio manufacturers.


I have to try the Beyers some time. Agreed on the Sennheiser ergonomics - and the price.

I can't stand the Sonys. They're specifically designed for tracking & editing - all that screech points up Bad Things Happening. But they fatigue my ears.

Laugh now, but I landed on the Koss KTXPRO1 ( which are $20 to $40 ) and have basically stopped looking :) Most comfortable thing I've ever used and they're actually pretty flat, except for a little bass bump and a smidgen of upper mid. I think I'm on my tenth pair. They're a bit too lightweight - if you catch the cable on something they'll fly off your head.

And yeah - I bet the $500 vs $20 figures into my perception of things.

I can mix on 'em, at least to the rough stage.


That's nothing laugh-worthy. Audio equipment faces massive diminishing returns. If you're looking for midrange sound you might as well pay attention to the people recommending sub-$50 headphones as to the ones talking of $150. For instance, the $30 Panasonic HTF600s is better-sounding than the $160 ATH-M50x frequently recommended as an entry-level audiophile headset. As for studio-quality headphones, chinese Superlux/Takstar models are as good as the $200-300 range.


As a die hard fan of the M50x (and no qualifications whatsoever for judging headphones) I'll have to explore those Panasonics you mention. I haven't considered that company as a quality maker of audio gear since the portable CD player era. Did you do the test yourself, or are you going off of a review site? I typically rely on head-fi, but I'm always looking for a recommendation in this field.


From all the reviews I've seen, Bose noise-cancelling headphones are pretty much the best you can buy. Especially if you want earbuds (the QC20s). They're extremely expensive though. Do you (or anyone) have suggestions for alternatives?


For the price of Bose noise-cancelling headphones you can get headphones from the three brands the parent mentioned (throw in AKG for good measure) that sound better in 'lab conditions'. But if your discerning feature is 'noise-cancelling', i.e. headphones that sound excellent in noisy environments like trains, coffeeshops or open work environments I believe Bose is the king and will be as long as their patents are enforced.


I had an in-ear Noise Cancelling Philips, that's around $20

And for heavy noise cancelling goals it was very good (like, being able to work with someone with a lawnmower or a drill next to you)

Granted, half of the noise isolation is passive, half active, still, very good


I believe they're good as far as they go because Bose has the strongest patent portfolio in this area, but wearing any kind of noise canceling headphones immediately gives me the unpleasant sensation of having my eardrums sucked outwards. I'm not sure why; I think it's a side effect of the tiny latency inherent in the design. It's so unpleasant to me that I stopped paying attention to new products in that category so I'm a bad person to ask.


tomc1985: hope you see this - your account has been hellbanned for over two years (about 850 days, with one comment visible 270 days ago - not sure how that happened).

Your comments over those years don't seem bad at all - sometimes perhaps a little confrontational but not aggressively so. Perhaps HN could allow users above a certain karma threshold vote on [dead] posts, with those scores going towards a "repeal fund" - make decent comments over a certain period and get temporarily un-banned.


Bose QC15s and QC20s are the best active noise-cancelling headphones out there... the problem is, they still have very mediocre sound. They do the noise-cancelling part well, but they are also massively overpriced.

Sennheiser HD280 Pros have extremely good passive isolation, and will beat QC15s at a fraction of the price, in both isolation and audio quality.

So yes, Bose still loses when you look at the big picture. Bose is very good at marketing, they are not very good at making quality audio.


I use the Klipsch x7i, and they allow you to listen to audiobooks on a low volume setting in an underground train. Which, in Moscow, seems like perfection.

The sound is very clear, but not balanced. Neither an expert nor a musician though.


There are a fair number of pros who utilize Audeze and HiFiMan gear as well. Planar magnetics are popular.

Erik Larson is pretty vocal about his use of LCD-2s for mastering. Which... Honestly, I'm not generally a fan of his work, so it's not necessarily a ringing endorsement.

Also kind of surprised at your lack of mention of AKGs - they're another very popular brand for studio work.


Honestly I forgot about AKG. They're pretty good, although I don't see them a lot in commercial environments (~15 year window).


Almost every studio used to have nothing but AKG K240 phones for monitors in the room. I haven't worked professionally directly in the field for years (I work in live sound occasionally now, and do interact with recording engineers occasionally), but I still see them discussed regularly enough online to assume they are still common. I love the look of them, and always have. To me, they are the definition of "studio headphone". (They aren't what I use in my home studio, as there are better phones, if you're willing to spend more money, but they are a good headphone for a good price.)


These Yamaha headphones are excellent, http://www.musiciansfriend.com/pro-audio/yamaha-rh3c-profess.... Durable, they collapse and sound great. I have another set of open phones with foam surrounds; the foam is not stable and collects gunk if used in a backpack. The Yamaha phones are very respectable replacements for the MDR unit and stay clean while taking up little space.


What's different between the $80 7506 and $200 7509?


Slightly larger driver, slightly heavier, supposedly has a greatly extended frequency range of 5 Hz to 80 kHz vs 10 Hz to 20 kHz in the 7506. Of course your typical D/A converter won't even render such low frequencies due to AC coupling, and if they were there you'd want to EQ them away pronto as they would eat all your headroom. While I continue to enjoy excellent high-frequency hearing even in my mid-40s (to my surprise), neither I nor anyone else needs a tweeter that goes to 80 kHz.

The MDR-7509 and its successors the 7910 and 7920 have a lower impedance than the 7506 (24 vs 63 ohms) so if you plug them into the same sound source the higher-numbered models will be a bit louder - and as we all know, 'louder = better' for most people. This plus the larger driver is somewhat helpful for DJs, who work in very loud environments, but that's a fast track to hearing damage.
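The loudness difference from impedance alone is easy to ballpark. Assuming the same source voltage and similar driver sensitivity (a simplification; real sensitivity specs differ between models), power into the load scales as V²/Z:

```python
import math

# Rough level difference from impedance alone, assuming the same source
# voltage and comparable driver sensitivity (a simplification).
def level_gain_db(z_high_ohms, z_low_ohms):
    # Power into the load scales as V^2/Z, so the lower-impedance
    # headphone draws more power from the same voltage source.
    return 10 * math.log10(z_high_ohms / z_low_ohms)

print(round(level_gain_db(63, 24), 1))  # ~4.2 dB louder for the 24-ohm model
```

About 4 dB is a clearly audible difference, plenty to bias a casual "which sounds better" comparison.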

http://en.wikipedia.org/wiki/Sony_MDR-V6

Why I like the 7506 so much: on film sets I give them to people to listen in and they say 'is it on? I don't hear anything.' Then I turn the volume down or make a small noise next to the boom microphone and jaws drop. Plugged into a quality microphone like a Schoeps, which has a very flat frequency response, it's like there's nothing there. I always have two pairs now because if one gets damaged I can't deal with other brands at all.


thanks for the info; I'm considering upgrading my office headphones (I've got some random $20 over-the-ear pair right now). $80 is certainly reasonable, and I prefer transparent speakers in general.


>God forbid you ever consider buying a Bose or Beats product.

This has little to do with the article. If you've got 100 bucks to spend on a pair of headphones, it's only fair to point out that with certain products you're not getting the best sound out of your money.


Well, Bose / Beats are clearly overpriced for what they deliver.


> God forbid you ever consider buying a Bose or Beats product.

Meh, they're okay, but there are better choices out there.

I've a pair of Sennheiser HD600 and it's one of those things that make you go "holy cow, all the hype is justified".

And no, I'm not one of those folks who think gold-plated cables make a difference. Right now I'm listening to MP3 Internet radio on a pair of cheap behind the neck street cans.


I have a pair of Sennheiser 280 HD Pros - they've lasted me about 7 years, an excellent set of headphones. I've used them to help critique the production of lots of artists' music, and I know lots of artists who use them as a cheap pair of mixing headphones.

Work bought me a pair of 380 HD Pros, and I'm impressed on how much of an upgrade they are over the 280s - I can only imagine how good the other Sennheisers are.


Open back vs closed back.

The 280s are closed back. Great for isolation, for not letting ambient sound interfere with the music. It also changes the way the transducers work, a little bit.

The 600s are open back. Obviously there's no isolation, but the transducers work more freely. It's a bit easier to distinguish tiny sounds from a huge background.

I've both the 600 and the 280. Great phones both, in different ways.


A friend has had a set of these for ages. We found a difference between two source setups. A particular Sony DVD player sounded incredible - each note seemed perceptible in a 3D space [1]. A CD player he had, didn't. [2] We tried with Yamaha amp, without, different configs of widgets. That DVD player with nothing added was the best. He gave it to his sister and I haven't heard anything like that since.

I nearly went off on a tangent and bought an amp etc, but I'm happy with my much cheaper HD380's - great price/performance :) But those 600's are awesome.

[1] I've since learned it's called soundstage

[2] How would the source influence soundstage? Sounds irrational to me. Hey, one sounded better than the other and I don't know why.


If there are psychological phenomena that make them enjoy it more, what's the issue?


And there are jobs created for engineers who design that equipment - simultaneously recycling more money in the economy and contributing to the statistics of STEM job prospects! Everybody wins!


And I get cheap high-definition AD and DA converters to use in my projects!

I'm certainly not complaining.


Broken window fallacy.


Sure, you need to take externalities into account. One can imagine an audiophile who would rather spend $100K on a sound system that sustains 3 jobs at a niche sound-system-design business, instead of angel-investing that money in a growth company that would eventually create 300 jobs. But what if the audiophile's daughter is then inspired to build another growth company when she grows up, because she was intrigued by how much her dad would geek out about the electronics in her sound system? Nothing is clear, it's incredibly hard to quantify probabilities about any of these things. As long as the audiophile isn't neglecting responsibilities or breaking windows to obtain his sound system... that is, as long as negative externalities are not a foregone conclusion... we should let him enjoy his passion.


Certainly, there are valid moral arguments to allow, even encourage, this state of affairs. I was just pointing out that the economic argument that was offered is a broken window fallacy. Refuting an argument in favor of X is not an argument against X.


Wouldn't most of the digital cable costs be due to analog interference in the machines they are connecting and them acting like antennas and not that the signal is corrupted on the way?

Unless it's an insulator (like fiber optic cables), you are hooking both a 50 foot antenna and a digital transmission line to your box; if you want just the digital transmission line, you have to insulate the hell out of the antenna part of it.


The first paragraph in the power cord "review" sounds like it is rapidly approaching Poe's Law.


On the other side of their spectrum: I went "near" an audiophile shop test booth and was sucked in by the clarity and density of the sound in the air (this was a drum solo track). Some audiophileness is good.

I'll head back to my $40 crappy mp3 player now.


I actually think you'll find many "audiophiles" enamored of the Sansa Clip Plus and Zip, which fall into that price range. Probably the most highly thought of MP3 players after the iPod classic 5.5.


Now that Android supports USB DACs, you can just get portable DAC+Amp combos that you can stick in your pocket with your phone. No need for a dedicated device anymore.


$350/m HDMI cables... so, Monster?


Far be it from me to defend them, but $350/m is about 3 times higher than Monster's most bullshitty bullshit cable. Even their "2000HD HyperSpeed HDMI cable" only had a $115/m MSRP in 5ft, falling to $32/m in 35ft: http://www.monsterproducts.com/Monster_Video_ISF_2000HD_Hype...


My great business idea was to produce a line of "organic" cables.

Our company would go to the remote places of the earth to hunt down copper dragons (as in DnD) and harvest their veins to make audio cables.

The "natural", "organic" copper has a warmth to the audio signals flowing through it that artificially produced cables just can't provide (they have harsher undertones).

Then we'd also have silver and gold cables, harvested from, you guessed it, silver and gold dragons.

These would truly be "monster" cables.


I plan to disrupt the audiophile business (and crush you in passing) with my homeopathic cables.

As we melt the gold we mix in a few atoms of "rare earth" elements (rare == expensive == so good "they" don't want you to have access) which is then diluted by adding more melted gold until only the imprint of the rare earth atom remains.

The gold will then be hand-drawn by virgins (in truth these will be strong, hairy, 50-year-old virgins with dreadful hygiene and B.O., though strong enough to pull, but we need not add all that confusing stuff; we'll just say "virgins"). The wire will be lovingly laid into hand-made insulation made from organic pinniped leather.

I see a variety of future applications both in the home (connect your cable modem to your WiFi access point) and business (data centers). To quote Rony Abovitz, we'll soon be "the size of Apple".

Invest now, while you still can!


dragoncopper, have the website default to some non-existent northern european language/font with a translation button (british flag). Burled wood with reds and yellows. will buy.


PETA's going to bust your ass.


Well, you might be surprised by Apple's Store page - especially the accessories section. Or do you like that?


When someone spends money on something in a silly way, do you consider them to be an idiot? Would you make the same statement about someone who spends $200 on a bottle of scotch versus buying a $35 bottle?

Let the 'idiots' spend their money driving an industry that is combining the creation of electronics with functional art. I'm not sad to see a $40,000 DAC. I don't have to buy it, and it's cool to know someone built something of silly 'value'.

For example, look at this thing: https://www.naimaudio.com/statement It's absolutely silly, and the cost is outrageous. I'm happy that they built it though. It was actually built as part of the acquisition that Focal made of Naim. It seems that they allowed the engineers at Naim to go nuts once the company was acquired.

I like seeing silly things that people build. It doesn't make me sad that someone spends thousands on ridiculous items that from an engineering perspective don't make a difference.


You are mixing two things. There is a difference between Intel charging you >$1K for a CPU that is 5% faster (in games) than an overclocked $300 one and a scammer selling magical power cables (made by gypsy virgins in a Romanian monastery on top of the highest mountain, under a full moon) to some rich retards.

A $40,000 DAC? Did someone really go ahead and fab their own silicon (~$1M for a low-volume run)? Or did they maybe pick up a $100 (at most) part, put it in a shiny box and start looking for suckers?

Do you even realize what $40K gets you? We are talking military-grade Agile^^key'hole/RohdeShwartz multi-gigahertz arbitrary waveform generators here, not some pitiful audio stuff.

Here are some examples of multi thousand dollar scams: http://www.lampizator.eu/LAMPIZATOR/REFERENCES/wadia%20WT%20... http://www.lampizator.eu/LAMPIZATOR/REFERENCES/Goldmund/gold... http://www.lampizator.eu/LAMPIZATOR/REFERENCES/THETA%20Unive...

Audiophoolery is on the same level as creationism; it only works on uneducated, simple minds with no metacognition.


It costs a good deal to make a discrete R2R ladder DAC instead of buying a chip from TI (http://www.msbtech.com/products/dac4.php), however I've now seen some cool DIY projects that are doing even this on the cheap: http://www.diyaudio.com/forums/vendors-bazaar/259488-referen...


>Agile^^key'hole/RohdeShwartz multi gigahertz arbitrary waveform generators here

I don't even know what that is. But that's what I'm calling my next breakcore track.


HP was known for building generally good test equipment, including arbitrary waveform generators. The OP is humorously referring to the fact that the test and measurement division was spun off first as Agilent, then Keysight (seriously?), and probably something else by tomorrow (marketing is furiously brainstorming new meaningless names). They only need to merge with Danaher and rename themselves Flukeronix for the circle of life to be complete.


It's one thing when you plop down a ton of cash for something where you embrace the "silliness" or whatever makes it special (e.g. buying exotic cars with monstrous engines to drive them in traffic). It's another, however, to believe something is objectively "better" because it cost more; e.g. spending money on Monster Cables thinking you can hear a difference.

In audio, you have people who love vinyl because they enjoy the distortion it makes, and that's perfectly fine; but you have others who somehow believe it sounds closer to the original, which is demonstrably ridiculous.


I completely get that as I personally prefer to collect Old Stock vacuum tubes for my listening purposes (http://imgur.com/a/INXVX). I just think folks should be left to their own devices to enjoy an avocation as they please.

Trying to defend something through a completely subjective argument is silly. I've a hard time discussing 'objectively better' technology with audiophool folks, but if someone said 'I like this more' I really can't hope to debunk that through any sort of mathematical characterization of performance.


>I just think folks should be left to their own devices to enjoy an avocation as they please.

Only if they shut up and never tell other people that they should be listening at 24/192. However, the people being complained about here spout nonsense like that all the time.


Wait! Who's telling who what?

I think you'll find you have it backwards. I don't hear anyone here telling you that you should be listening at 24/192. Go look at all the comments and count them up. All I hear is people saying that you should be listening at 16/44, because it sounds exactly the same, or even sounds better, and if you think otherwise you're obviously an idiot, stupid, audiophool who spends $5000 on a power cable.

I sure know who I think should shut up. It's all those armchair experts who don't even own any decent hifi gear. Why would they? It's all crap and my second-hand iPod headphones beat it all hands down anyway. Right?


The guy that wrote this article didn't write it because snobs were being reasonable...


This article makes points better than I did with my post: http://www.johngineer.com/blog/?p=1741


If you enjoy music (who doesn't) and are leaning toward learning electronics, I highly recommend projects like the Twisted Pear kits: build yourself a DAC. http://www.twistedpearaudio.com/landing.aspx

The other fun stuff is building your own Speaker kits, hook all this up with a Pi Music Box and you have yourself a kind of home made Sonos. http://www.woutervanwijk.nl/pimusicbox/


I would make the argument that if some of the silliness were stated for what it was, gullible rich people would spend their money on something equally frivolous that did more to drive innovation.


That's rather a wide-ranging bit of character assassination. Some of us just enjoy listing to music on decent equipment (If you can buy it at your local big box store, it's probably not sufficiently "decent") properly set up (which doesn't mean expensively - just basic proper speaker placement and the like).


> Some of us just enjoy list[en]ing to music on decent equipment

The best definition of audiophile I've heard is somebody who listens to equipment, rather than music.

Of course your favourite Pink Floyd sounds better on a decent stereo rather than a clock-radio, but if somebody is forever chasing the proper "colour" for their speakers, or swapping amps for the perfect tone... they might be an audiophile.

edit: s/you might be/they might be/


>The best definition of audiophile I've heard is somebody who listens to equipment, rather than music.

Audiophiles are an easy target. There are a lot that do stupid shit like buy $5000 power cables, expensive risers to lift cables off the ground, etc. A lot are pretentious, even if they're not insane or dumb.

But that's a rather inflammatory, and in my opinion, unfair position to take. I would probably be considered an audiophile - I have put quite a bit of money into audio equipment. But I love music. I listen to it basically constantly. It's one of my primary sources of entertainment - and I don't just mean 'I have music on when I do other shit.'

Each week I spend probably 20 hours doing nothing but relaxing with a bit to drink and some music on. Not reading, not surfing the net, not doing anything but closing my eyes and enjoying the sound. I'm listening to the music.

At times, yes, when I have been testing out new equipment before deciding if I want to buy it I go through and I do blind ABX tests with level matching. In this case, yes, I am listening to equipment. But this is a very minor portion of my total listening time.

I know you're probably not being totally serious with the post, but I do think it's a bit unfair towards those of us that love music, but also have invested time and money into getting a setup that sounds better for increased enjoyment.


You're using "audiophile" as if it only represents, say, the 5% wackiest participants in the hobby.


You've completely misunderstood the article. It's discussing differences in the PCM sampling rate, not the MP3 encoding bitrate. Or in other words, all the samples they compare are FLAC.

192 KHz has nothing to do with 192 kbps.

I'm one of those audio engineers who mistakenly thinks I can tell between an MP3 and a FLAC. Yet somehow I understand the difference between sample rate and encoding bitrate, and you do not.

All of this is beside the point. I'd much prefer a 24/44 sample to a 16/96 or 16/192. Bit depth has a much larger impact on the sound than sample rate.


Bit depth affects dynamic range, and that's it. The only thing a 24-bit sample can do better than a 16-bit sample is accurately reproduce the difference between very loud sounds and very quiet ones. That's all. For the vast, vast majority of music listeners, the difference will be insignificant, as they don't listen in an environment where a dynamic range of 144 dB can be used to anywhere near its full effect.
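The dynamic-range numbers follow directly from the bit depth: each extra bit doubles the number of quantization levels, which is worth about 6 dB. A quick check:

```python
import math

def dynamic_range_db(bits):
    # Each bit doubles the number of quantization levels, adding
    # 20*log10(2) ~= 6.02 dB of dynamic range.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```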


This is slightly incorrect. The combination of bit length and sampling rate determines both dynamic range and frequency fidelity. Although it's common to hear the two values used to represent these two separate physical measurements, it's just another case of explaining new tech (80s CDs) to old technical consumers (70s hi-fi types). You can measure a reduction in dynamic range by reducing either sampling rate or bit length.

You'll frequently find 1-bit A/Ds and D/As at >5 MHz in high-fidelity systems. That 1-bit signal is converted to/from a higher bit depth, lower sampling rate signal without loss of fidelity. If you're interested in looking at alternative encodings you should look at the Super Audio CD format https://en.wikipedia.org/wiki/Super_Audio_CD

On your second point, I agree, we live in a noisy world and hearing 144dB of dynamic range would require serious isolation.

What most people should be able to hear with 90 dB of dynamic range are the harmonics created by undersampling a high-frequency signal. To quickly explain I'll use a 1-bit, lower-frequency scenario. Let's say we have a 2 kHz sine wave and a 1-bit 5 kHz sampling rate. The 2 kHz signal is going to be represented by a different 2 samples every cycle. The result will be a signal that is no longer a sine wave and closer in frequency to 800 Hz (wild approximation) than 2 kHz. Low-pass filters are used to keep those harmonics from being too pervasive, but they still sneak into the signal near the high-frequency range. Transpose this example to our current audio standard and you might realize that in order to accurately represent the high end we need a little more than 16-bit 48 kHz.
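The folding behaviour at play here is ordinary aliasing, and a quick numpy check makes it concrete. Borrowing the 5 kHz sampling rate but using a 3 kHz tone (which, unlike 2 kHz, is above the 2.5 kHz Nyquist limit), the samples come out identical to those of a folded-down 2 kHz tone:

```python
import numpy as np

fs = 5000            # sampling rate (Hz)
t = np.arange(64) / fs

# A 3 kHz tone is above the 2.5 kHz Nyquist limit, so its samples are
# indistinguishable from a tone folded down to fs - f = 2000 Hz,
# up to a sign flip.
high = np.sin(2 * np.pi * 3000 * t)
alias = -np.sin(2 * np.pi * 2000 * t)

print(np.allclose(high, alias))  # True: the sampler cannot tell them apart
```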


In your last example about the harmonics, I can see that being so in the simplest case, but surely that's a limitation of the playback system, not the storage medium. And isn't that addressed by the oversampling that is used almost universally in DACs now?


It's a limitation of the recording/storage medium and is addressed by oversampling and low-pass filters in ADCs in acoustical recording. Most audio recording, processing and mixing is performed in 24 or 32 bit. Once the data is downsampled for distribution in 16-bit 44.1 kHz you run into the limitation again, where you have fewer samples to represent higher frequencies. The only remedy is to attenuate those frequencies before downsampling. I'm unsure of the role oversampling plays in DACs so I can't speak to that.


My understanding is that oversampling is done in DACs so that digital filters can be applied without introducing the effects that you describe at the top end of the frequency range. Basically it interpolates to a higher sample rate so that there's more play in filter selection/application.

In terms of attenuating the high frequencies before downsampling - have I misinterpreted Nyquist? I thought that there was no loss in fidelity, right up to half the sampling rate.


I figured it had something to do with the application of digital filters. There are only two samples per cycle at half the sampling rate, so it should be able to represent a frequency at exactly half the sampling rate, but I can't see how it could accurately represent a frequency just a few Hz below half the sampling rate. Notice I'm using the word frequency. A sine wave with a frequency of 22,050 Hz encoded at 16-bit 44 kHz is not going to look anything like a sine wave.
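For what it's worth, a band-limited tone anywhere below fs/2 is still pinned down uniquely by its samples, even when the raw sample values look nothing like a sine. A small numpy sketch (the tone length is chosen so a whole number of cycles fits the FFT window, avoiding leakage):

```python
import numpy as np

fs = 44100
f = 21000          # just below the 22050 Hz Nyquist limit
N = 4410           # exactly 2100 cycles fit, so no spectral leakage
t = np.arange(N) / fs
samples = np.sin(2 * np.pi * f * t)

# Zoomed in, the raw samples trace a beating, triangle-ish shape, but the
# spectrum contains a single component at 21 kHz -- which is all an ideal
# (sinc) reconstruction filter needs to rebuild the sine exactly.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(N, d=1 / fs)
print(round(freqs[np.argmax(spectrum)]))  # 21000
```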


Right, 22,050hz looks exactly like a triangle wave on a computer screen. The thing is that a triangle wave is composed of a fundamental sine wave (22kHz), and a series of ascending odd harmonics above it. So after the filters nix everything above 22kHz it looks exactly like a sine wave on a scope.

So you're right that you lose information as you go higher in frequency, but there is also commensurately less need for information to recreate it precisely because the filters remove the detail anyway (and if not the filters, the human ear).


Sure, 22,050 Hz and 11,025 Hz get smoothed out into perfect sine waves, and the human ear can't hear 22 kHz anyway. But the Nyquist frequency isn't some magical threshold that you cross and suddenly everything is perfectly preserved. It's a folding frequency that determines where aliasing is going to occur, or rather where it's not going to occur. A 44 kHz sampling rate is based roughly on western tuning (440 Hz A) and makes no attempt to accurately capture sounds and frequencies that are not tuned to western music. As you move away from the folding frequency, there are frequencies in the human audible range that cannot be represented, so they're discarded or attenuated by anti-aliasing filters. As far as I'm concerned CD audio is outdated tech that most of the world just doesn't care enough to drop. It's ridiculous in a world of 5K retina screens that people can't see the value in higher-resolution audio.


>It's ridiculous in a world of 5K retina screens that people can't see the value in higher resolution audio.

Not if they can't hear the value in higher resolution audio. For many people the only difference in HD audio over 16-bit 44.1 Khz is that the files are bigger. If someone can't hear the difference, it's no surprise that they don't care to move to a new format.

The screen analogy isn't perfect as most people can still readily tell the difference between an HD image and a significantly lower resolution one. (Though yeah, we're getting closer to pixel densities surpassing people's ability to resolve pixels as well, provided they're not putting their nose to the screen. It won't be too long now.)


There's a difference between not being able to hear and not knowing what to listen for. Listen to the highs on a well tuned hi-fi system and you can hear the difference between CD and SACD. Listen for the sound of a singer taking a breath or the slow ring of a cymbal. If you've never heard these things in person to begin with then you're at a disadvantage trying to hear how badly they are represented in recording technology from the 1980s.

Don't argue from a position of ignorance. Make friends with a recording engineer and have them play you a 32bit mix followed by a 16 bit mixdown.


Yes, and also remember that as you approach the Nyquist frequency, the ability to encode phase is lost. People always talk about sampling capturing different frequencies but they forget to think about phase.


Which gets back to your original point about both sample rate and bit depth contributing to the dynamic range. I'm very curious: can you give me some search keywords that would get me to the math behind that? Also, I've never heard the claim that the 44.1kHz rate biases toward 440Hz tuning, is there somewhere I can read more about that?


pulse code modulation, pulse density modulation...

44.1kHz/100 = 441 Hz. But that's nonsense, in the same way that saying a signal at the Nyquist frequency can be accurately encoded is nonsense. @diroussle pointed out that you lose phase, but there's another consideration: sync. If your signal at Nyquist is not in sync with the sampling frequency, then it's going to be represented as a signal offset, an out-of-phase line.


I understand PCM, 1 bit DACs, et al. What I'd love to see are some equations relating the three quantities of bit rate, sampling frequency, and distortion. Turns out to be very hard to Google.

In any case, thanks for your patience, I'm glad to have cause to reconsider my position on this topic.
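The usual textbook starting point relating the three quantities: for uniform quantization of a full-scale sine, SNR ≈ 6.02·N + 1.76 dB, and plain oversampling adds 10·log10(OSR) dB once the out-of-band noise is filtered away (delta-sigma converters add noise shaping on top, which is what makes 1-bit at >5 MHz viable). A sketch:

```python
import math

def quantization_snr_db(bits, oversampling_ratio=1):
    # 6.02*N + 1.76 dB for a full-scale sine over uniform quantization
    # noise, plus 10*log10(OSR) from spreading that noise over a wider
    # band and low-passing to the audio band (no noise shaping assumed).
    return (20 * math.log10(2 ** bits) + 1.76
            + 10 * math.log10(oversampling_ratio))

print(round(quantization_snr_db(16), 1))     # 98.1 dB: plain 16-bit
print(round(quantization_snr_db(1, 64), 1))  # 25.8 dB: 1-bit, 64x oversampled
```

The second number shows why 1-bit converters need noise shaping on top of oversampling to be useful.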


Yes. And 16-bit cannot, by itself, represent the full dynamic range of a lot of music without distortion. Most samples, most of the time, do not use the full 16 bits. This is why dithering is used during CD mastering.

Take it from me: when you master 24-bit stereo tracks and you don't dither, huge amounts of low-level detail disappear. The detail in the quiets is there in 24-bit, and lost when it's truncated to 16 bits. Add the dithering, and you get increased noise, but the detail comes back.

One could suggest that with dithering 16 bits can represent it. But that's with a whole bunch of noise added to the signal. You can argue that noise is not audible, but it is _just_, and when mastering you can audition the different dither spectrums to find which dither least impacts the music.

http://www.digido.com/articles-and-demos12/13-bob-katz/16-di...
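The truncation-versus-dither effect is easy to reproduce. A minimal numpy sketch, using a synthetic tone near the 16-bit noise floor (not real program material):

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 44100
lsb = 1.0 / 2**15                                  # one 16-bit step (full scale = 1.0)
t = np.arange(fs) / fs
quiet = 1.5 * lsb * np.sin(2 * np.pi * 1000 * t)   # a tone near the 16-bit floor

# Plain truncation collapses the tone onto 4 output levels: the waveform
# turns into a stepped, harmonically distorted shape.
truncated = np.floor(quiet / lsb) * lsb

# TPDF dither (difference of two uniforms, +/- 1 LSB) randomizes the
# rounding, trading that distortion for a small amount of benign noise.
dither = (rng.random(fs) - rng.random(fs)) * lsb
dithered = np.floor((quiet + dither) / lsb) * lsb

print(len(np.unique(truncated)))   # 4 levels: the low-level detail is destroyed
print(len(np.unique(dithered)))    # 6 levels: the tone survives, on average
```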


I certainly won't argue that 16-bit is just as good as 24-bit from an objective standpoint, as 24-bit is obviously superior, full stop. I'm just saying that for most listeners (everyone except those who listen at high levels in dedicated, treated listening rooms in very quiet environments) the difference will be inaudible almost all the time. Extremely low level detail doesn't really matter if it's lost in the >20 dB of natural noise in your room.

At that point the issue may become moot as other problems like standing waves, harmonic distortion, inaccurate speaker frequency response and so on creep in and affect music playback to a subjectively larger degree than 16-bit versus 24-bit does, IMO.

All that said, 24-bit is definitely the way to go since we might as well do it right even if only x percent of listeners will notice.

As an aside, thank you for being one of the conscientious 'good guys' in the studio. I collect music and wish I had a nickel for every sloppy recording I've heard.


Yes. I completely agree. From my perspective, even if one person in a hundred can hear a difference, then I'm going to pay attention. I don't want to boil everything down to meet the average. I think it's fine to release 16/44. I think well produced music sounds excellent in that format. It's one of the reasons the CD has done so well. It's just hifi enough to capture everything. And it's amazing to think this technology had its début in 1982!

But for so long, for those of us who want a higher quality (hearing it exactly as they would have heard in the studio during the production) there was nothing we could do. Willing to pay more for it, doesn't matter. You just can't get it. It's still that way.

What gripes me is the attitude of many, including this xiph article, that hi-res versions "make no sense", that "there is no point" and thus everyone should just be happy with what they've got, and that anyone protesting is an "audiophool" or believes in magic fairies or something. We all get lumped in with those people buying $3000 IEC power cables. For many people it's all black and white, there is no room for grey. You either think that 128kbps mp3s sound identical to the analogue master tape, or you are a fool spending $20,000 on magical stickers to increase the light speed in your CD player.

All I want is to be able to buy the mix and hear it as the engineer heard it in the studio. That would be nice. I know it's not for everyone, but it doesn't make me crazy.

As food for thought, have a read of what Rupert Neve said about Geoff Emerick's hearing ability (being able to discern a 3dB rise at 54kHz) here: http://poonshead.com/Reading/Articles.aspx

"The danger here is that the more qualified you are, the more you 'know' that something can't be true, so you don't believe it. Or you 'know' a design can't be done, so you don't try it."


What's the argument in favor of using extremely high sampling rates though? Using 48 kHz instead of 44.1 seems reasonable (as in the Philips digital compact cassette that never really caught on), giving a little bit of headroom for wider frequency response, moving the filters a little higher or whatever, but I've seen D/A converters that use 384 kHz, and I just can't fathom what the point is... It smacks of the "if some is good, more must be better" mentality.

There's definitely nothing crazy about wanting to hear a recording with as much fidelity to the master as possible. Yeah, I do remember people saying that 128 kbps MP3 was "CD quality" in the early days of the format, and that was a laughable claim indeed. One would have to be pretty tin-eared to think 128 kbps was hi-fi, although I'd say there were valid use cases for it, at least back when portable music players had storage in the megabyte range instead of the gigabytes we have today.

So many of those audiophile tweaks are just outright scams, and a fool and his money are soon parted. I guess education is the only way to combat that.

As for Emerick's ability to hear anything at 54 kHz, much less discern a 3 dB difference there, well, I am really, really skeptical. I'm obviously not in a position to say it's impossible, but it strikes me as an outright superhuman ability that should have been tested scientifically.


I'm not sure there is a compelling reason to distribute final music pieces in 192kHz.

I can only speak from my own experiences: I record and mix in 24/96, but for reasons that don't really relate to music distribution. When doing further processing, some plugins sound better with their algorithms fed 96k instead of 44k. Every plugin has been written with compromises. And I find I can push hi-res audio further in the digital domain before unpleasant artefacts arise.

It's very much like image processing. If you take a picture with a cheap, basic 1-megapixel camera and then play with the curves and sharpness, at a certain point smooth graduated colour becomes "posterised". If you take the shot with a DSLR (with 12 bits of each primary colour) then you can push the image a lot further before the posterisation occurs.

I have found the same occurs for audio. I can manipulate the sound with fewer artefacts when it's hi-res. The plugins sound more transparent and smoother. I tend not to go above 96kHz because this effect is achieved at 96, and 192 (to my ears) sounds no better and I'd just have bigger files and more CPU load from the plugins processing the extra data.

The bandwidth of 96kHz is just short of 50kHz, so if as an added benefit I satisfy the one-in-a-million Geoff Emericks, then all the better.

But then once the final mix is rendered and no more processing needs to be done, ie for distribution, then this hires advantage seems moot. Maybe there is still some advantage for people or devices that may post process the sound digitally in some way, like a digital equaliser in your playback device, or something like that. But then again, that device could always upsample before processing.

I tend to use 88kHz if the final destination is intended to be CD, and 96kHz otherwise (so there is less aliasing when sample rate converting to 44kHz).

The reason I harp on about the bit depth is because in my experience that is where we are falling short. If I take my hires sources and convert to 44 or 48 with a high quality SRC I hear no difference at all. But when I change to 16 bit the difference is enormous. There is always a degradation. And it's never a good thing. It seems silly to just be throwing away that bit depth because of a 1982 format that people aren't even listening on anymore.

Also on the topic of SRCs, this site has some interesting comparisons. For the record I do my SRC conversion with iZotope RX 64 bit SRC. http://src.infinitewave.ca/

So in conclusion, I want 24 bit tracks. If they're given to me as 44, 96, 192... whatever. As long as they're 24 bit. Enough with the 16 bit! :D


I think you're misreading what the OP said. He wasn't making the mistake of associating 192 kbps with 192 kHz. It was just a coincidence that he used 192 kbps MP3 transcodes. He could just as easily have used 224 kbps MP3s versus 24/192 FLACs in the test he described. The point was that his friend couldn't tell the difference between MP3s and FLACs.


Fair enough. Then the OP comment is completely off topic. What has encoding bitrates got to do with the original article?


Wait until he gets some of'em Shakti Stones! [1]

One reviewer [2] notes: "The effect of the first few Shakti products was not as apparent as when the effect became compounded. Each built on the others' ability to eliminate EMI in the component on or under which it was placed. Music became more relaxed, with greater clarity. Space and ambience increased. The soundfield became considerably more open and defined. At a certain point, the effect became quite startling as another Stone or On-Line was added. Shazaam!"

[1]: http://www.shakti-innovations.com/audiovideo.htm

[2]: http://www.audaud.com/audaud/DEC01/EQUIP/equip3DEC01.html

PS: Sorry for bringing these up. They're quite the recurring joke in audiophile discussion.


Curious - what's the difference between Shakti and Ferrite beads?

http://en.wikipedia.org/wiki/Ferrite_bead

I used ferrite beads to remove EMI in a pair of self-powered speakers. I was hearing the local college station broadcast at a very low level when nothing was playing. I added them to the speaker wires, along with a basic EMI power strip, and the interference was gone.


A perfect example of Poe's law.

https://en.wikipedia.org/wiki/Poe%27s_law


Indeed.. I actually had to do some research to convince me otherwise [1].

It could still be parody though, even though they actually sell this stuff... :P

[1]: http://www.thecableco.com/Manufacturer/Shakti-Innovations


It is baffling to me that people even talk about 24/192. There are such vast differences in audio quality related to speakers, loudness, amplifier, mastering, and EQ, before you even get to the source format.

For some reason people seem to latch on to the format thing, before being able to make judgements about the more important factors.


It doesn't end there. For most people, I strongly suspect moving around your furniture would affect your sound reproduction more than changing your equipment.


Most people listen to music using headphones nowadays.


I'm interested to know the source of this assertion?


I have no source either, but I'd bet a very small sum of somebody else's dollars that "most people" in the US, at least, listen to most of their music in the car.


I'm reminded of the Alan Parsons interview - a sound engineer whose own music is favoured by many audiophiles - who says, guess what: fix the room first.

http://boingboing.net/2012/02/10/alan-parsons-on-audiophiles...

"I do think in the domestic environment, the people that have sufficient equipment don’t pay enough attention to room acoustics. The pro audio guy will prioritize room acoustics and do the necessary treatments to make the room sound right. The hi-fi world attaches less importance to room acoustics, and prioritizes equipment; they are looking more at brand names and reputation."


This, absolutely. The single most important thing in the audio chain from the electrical plug at the wall to your ears is the room...

An actual live musician can still sound poor in an acoustically-terrible room.


Quality headphones are how most people even get close to it mattering.


If it's any consolation, I'm a pro sound engineer and I entirely agree with you. I do like 24 bits (although I'd be satisfied with 20 bits) but I can't be bothered recording at anything higher than 96khz and even that is mainly wiggle room in case I want to do extreme pitch-shifting or suchlike. Most of the time I use 24 bits/48Khz.

Sometimes I think I'd like to buy some expensive measurement microphones and record at 192Khz for Science, eg to find out if there are tunes in cricket stridulations or whatnot. But then I get over it.


Preface: I consider myself an audiophile, but keep reading before you judge. I completely, wholeheartedly agree with the original article and the science behind it. There's no question that our hearing just isn't good enough to discern the minute differences between sound of sufficient quality (which is well defined in the Xiph article).

However, all I'll say is, it's very different to hear or feel a difference, than to prove it 100% without any doubt in the exacting conditions of an ABX test. You behave differently and aim your listening at different things in the special case of critical testing, than when normally listening.

You have to know and respect this to make good arguments against hardcore audiophiles. Only once you give credence to the possibility can you bring on the real science: the true bandwidth of the ear and the Nyquist theorem really do mean that any signal within our range of hearing can be encoded perfectly by sampling at double its highest frequency, with some 65 thousand amplitude steps, assuming an ideal decoding, of course, which means, yes, you should respect the idea of DAC design.
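
(As a sanity check, those numbers are easy to verify; a quick sketch in plain Python, using only the standard formulas:)

```python
import math

# Nyquist: sampling at rate fs perfectly captures any band-limited
# signal with no content above fs / 2.
fs = 44100
nyquist = fs / 2          # 22050.0 Hz, past the ~20 kHz limit of human hearing

# 16-bit PCM: 2^16 amplitude steps.
steps = 2 ** 16           # 65536, the "some 65 thousand steps"

# Theoretical dynamic range of N-bit quantization: 20 * log10(2^N).
dynamic_range_db = 20 * math.log10(steps)
print(round(dynamic_range_db, 1))  # 96.3 dB
```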

The world is full of idiots who are easily parted with their money. But don't throw the baby out with the bathwater. Pursue good quality audio equipment, to a point, because damn, it is enjoyable.


>However, all I'll say is, it's very different to hear or feel a difference, than to prove it 100% without any doubt in the exacting conditions of an ABX test. You behave differently and aim your listening at different things in the special case of critical testing, than when normally listening.

so what you are saying is you know what sounds better if you read the label BEFORE listening to it? :) There is almost always a difference, no one is claiming otherwise. Pointing out which one is closer to the original (not "better", because "better" might mean louder/overdriven bass) is the real test, and EVERY SINGLE audiophool to date fails at this point - the Randi foundation at one point ran a $1M pot for anyone who could spot a difference between 'audiophile grade' power/speaker cables and a coat hanger.

http://www.bostonaudiosociety.org/bas_speaker/wishful_thinki...


>He still thinks he can hear the difference between FLAC and MP3 to this day. He works as a sound engineer now.

It's really not fair to compare FLAC vs MP3 to "hi-rez" FLAC vs regular FLAC

There are legitimately some instruments that do not compress well. The harpsichord is a particular example; you should be able to hear the difference on any sort of decent equipment.

But hi-res vs regular FLAC is something that I don't think can really be detected by humans. I've gone through the Philips golden ears challenge to completion, have very high-end equipment, and can blind ABX FLAC vs MP3 on a lot of songs I am familiar with, but I have never once been able to successfully ABX a 24/192 FLAC against a regular one.


What the heck is "hi-rez" FLAC? My understanding is that FLAC is lossless. How would you have varying degrees of lossless?


By '"hi-rez" FLAC vs regular FLAC"' the parent post means something like 24/192 vs 16/44.1, not the amount of compression. It has a higher resolution than the other example.


24/192 flac as opposed to 16 bit flac. flac/mp3 is about compression, bits and khz are about sampling/representation.


Greater than 44.1khz sampling rate and/or 16bit depth.


Wasn't a cello (I keep remembering hearing "cello attack") supposed to be problematic on mp3?


MP3 has specific algorithmic weaknesses above 16khz[0] which even the highest legal bitrate can't cover up… sometimes. It's actually easiest to hear it with cymbals, or the old LAME sample "fatboy".

You can just not use MP3 though. It's 2014! Use AAC!

[0] http://wiki.hydrogenaud.io/index.php?title=LAME_Y_switch


I don't understand why everyone doesn't use Opus for everything. Damn thing is free, beats AAC for psychoacoustics, beats Speex for latency...


The early encoder releases were tuned for voice chat, not music, so the best rate control mode is CBR and it hasn't seriously been tested otherwise. It is pretty amazing considering it's only so good by accident!

Also, everyone is satisfied with AAC already, so there's no good reason to throw out your music collection or your HW accelerated decoding platform.


> There's legitimately some instruments that do not compress well.

Are you sure? I thought that was just a problem particular to early encoders for the Vorbis codec, which were alleviated by altering the tuning parameters of the encoder.


I'm basing it purely on modern (Within the past couple of years) encoded 320kbps/V0 MP3s.

I have not done any personal blind ABX tests on AAC or modern ogg vorbis, so I can't really speak to them.

I'm going to keep 'archival' quality stuff in FLAC anyway, just so I'm covered for any advances in compression tech or whatever, and I stream to my mobile stuff, so size concerns aren't a huge deal for me. My ABX testing has just been for the sake of the mp3 vs FLAC argument.

So, AAC and Vorbis might have very well solved the problem of compressing some of these instruments.


If it is no better, then the person who thinks it is better benefits from it.

If it is better, then the person who thinks it is better benefits from it.

If it is no better, then the person who thinks it is no better doesn't benefit from it.

If it is better, then the person who thinks it is no better doesn't benefit from it.

If the objective is subjective benefit, then placebo is a benefit; assuming your bank account is large enough and you don't care to give your money to someone who really needs it.

Edit: An answer to this is the Carl Sagan quote at the end of the article:

"For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring."

Of course, it isn't really possible to not 'persist in delusion'. One can try, but he won't know if by trying he is perpetuating a grander delusion.


Did you read the article?

He explores (and technically explains) how higher sampling rates can actually be much worse due to equipment.


I should add the combination:

If it is worse, then the person who thinks it is better benefits from it.

If it is worse, then the person who thinks it is worse loses value with it.


If it is no better, then the person who thinks it is better benefits from it.

Only if everything else is equal. It's rare that there's no downside to the benefit - for example, something costing more because it's "better".


The DECT phone standard had to be marketed as "6.0" for the US market because non-technical people were trained to believe that 5.8GHz was better than 2.4GHz and a phone that ran at 1.9GHz would never sell in the US. This is the same thought process driving the audiophools' desire for bigger numbers regardless of reason. It doesn't help that the fundamentals of sampling theory aren't particularly intuitive.


The article is correct, but it's not true that nobody can tell the difference between an MP3 and a FLAC.

I've personally done blind A/B testing in my (then) studio to discover the point at which I can't distinguish between MP3 and uncompressed audio. These days the encoders are really good, so it gets real hard at around 256kbps. I'm confident I could reliably pick out 192kbps though.


MP3 is a really old format with some known flaws even at the highest bitrates - try using something newer like Vorbis or Opus.


Why expect audio clarity to be uniform among everyone? Not everyone has 20/20 vision. Some have 20/100, others 20/10.


While I think that there are people who actually can tell the difference between lossless and lossy audio (with a decent bitrate), I'm not counting me among them.

Yet, I only buy lossless music since I plan to keep my music library around for ages and this allows me to change to a different format in the future if needed. This is an aspect of lossless audio, which is often overlooked.


The article specifically mentions that lossless formats offer advantages over lossy formats. The argument is 24/192 lossless files vs 16/48 lossless files, which I feel the author soundly settles.


In my personal testing a few years ago, I couldn't tell the difference between source and 192kbps MP3s either.

I still rip CDs to FLAC but only to transcode them to lossy formats for later listening. I do this in case I decide to switch lossy formats in the future (note: due to differences in psychoacoustical models, you should never transcode from one lossy format to another).


> He still thinks he can hear the difference between FLAC and MP3 to this day

I don't know if I would tell the difference in your test, but where I have noticed it the cause might be bad MP3 encoding - MP3 encoding quality varies widely... The difference between good and bad encoding may be far greater than between good MP3 and FLAC.


Maybe it depends on the song as well? Classical music is supposed to have a much greater frequency spectrum - meaning, the effects of MP3 encoding becomes apparent when played on high fidelity equipment.


It has greater dynamic range (think volume; this is the domain of what is properly called bit depth) than most compressed studio-produced music, but the frequency range or spectrum (think pitch, which is limited on the high end to sampling rate/2 by the Nyquist-Shannon theorem) is no different, unless it has also been clipped off in mixing.


mp3 sucks for bass and sub-bass physical response. It's incredibly easy for a non-audiophile (like my girlfriend back in the day) to tell the woolly, muffled bass of a quantized mp3 in my car vs the actual CD on the same system. Perceptual coding is not perfect.


I don't know about comparing digital to digital, but I know I can tell the difference between a 320kbps mp3 and a good quality LP record. It's in the treble. I guess any kind of digital just ruins that.


Yeah, that's an easy difference to spot -- LP playback is just so lossy, with so much artifacting, that it's just not worth the bother.


MP3 in particular has issues above 16kHz. This is solved in modern formats such as AAC and Ogg Vorbis. It has nothing to do with "any kind of digital".


You can't hear the difference between MP3 and FLAC? I'm sorry but either you're deaf or your equipment is very poor. Bad example.


I keep FLAC copies just so that I have lossless versions that I can convert into the lossy format du jour, but for actually listening, I mass-convert to lossy. I've listened to FLAC and properly encoded MP3 files (@192 and 160) on $50,000 audio equipment, and I can't tell the difference.


I'm curious, did you listen in a properly treated room? As in with bass traps, panel traps, diffusers, a cloud, first reflection points covered, etc? Because I can hear a difference on less than $10,000 of playback equipment, but in a fully treated listening room. And I know from the days when I used an untreated room, there would be no way I could tell in an untreated room, even with a million dollars of equipment.

It's the most overlooked part of the listening chain, and is in fact the most important part. In fact it always shocks me how many "audiophiles" will pump tens of thousands into audio equipment for their reflective, untreated, boxy listening space. A $1000 pair of speakers in a room with $5000 in room treatment will totally blow away a $20,000 pair of speakers in a room with $0 in room treatment. Every time.


It was an audiophile who'd spent a small fortune decking out his "listening room". I imagine that he knew what he was doing.

I humored him by teasing out which one was which without him noticing, and then saying the lossless one sounded better. Didn't want to hurt his feelings. And really, the sound system as a whole sounded awesome. I just couldn't tell the difference between the formats.


To put it simply and honestly, you are not paying enough attention. There is no human-discernible difference between FLAC and V0/320kbps. But implying that 192kbps and 160kbps are transparent is somewhat ludicrous. You can still hear noticeable artifacts on cymbals at 192kbps. They will sound like an absolute mess of warbling, and all of the audio will be imbued with a slight tinge of white noise.

There is a reason the mp3 scene moved away from 192kbps, and it doesn't have anything to do with bandwidth availability. It's because 192kbps sounds terrible.


>There is no human-discernible difference between FLAC and V0/320kbps

I can't on almost any modern music (which is the majority of what I listen to), but when I was going through the philips golden ears course, I did a fair amount of blind ABX on harpsichords, cymbals, and a few other instruments at v0/320kbps and didn't have much trouble identifying them.

Granted, at that point I had been going through something specifically intended to help train you for discerning differences in audio, but they were distinct enough I don't think I would have had any trouble beforehand, either.

On some stuff I couldn't immediately tell that some sounded better - just different. Though on some of the samples the FLAC was easily better to my ears.

(My criteria for a 'successful' ABX was accuracy of at least 8 out of 10 using the foobar ABX comparator plugin)
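
(For what it's worth, that 8-out-of-10 criterion corresponds to roughly a 5.5% chance of passing by pure guessing; a quick sketch, function name mine:)

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: probability of getting at least `correct`
    of `trials` right by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(8, 10), 4))   # 0.0547
print(round(abx_p_value(12, 16), 4))  # more trials make the bar stricter
```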


> He still thinks he can hear the difference between FLAC and MP3

If we're to take Tidal and Spotify (at highest quality) as representative of those two (please correct me if I'm wrong, no expert) then the difference is night and day. Perhaps Spotify could use a higher quality mp3 encoding?


I agree with the silliness of 192kHz, but not 24-bits. Here is why:

In typical PCM recordings, like CDs, mid-range frequencies (e.g. 1kHz to 4kHz) are recorded with lower amplitudes because our ears are more sensitive to them.

Sampling theory is correct and 16-bits can reproduce any waveform with ~100dB of range, however, in a complex waveform consisting of low, mid and high frequencies, the mid- and hi-range frequencies quite simply get shortchanged.

Imagine a recording of a bass sinusoid and a mid-range sinusoid of equal volume. It might use e.g. 10 bits to store the bass and only 6 to store the high frequencies (2^10·sin(200ωt) + 2^6·sin(4000ωt)). That means the resolution of the high frequencies is less than the lower frequencies. When the volume of those frequencies changes dynamically, the high frequencies' amplitudes are more quantized. That is quite simply why 16 bits are not enough.

This is similar to the problem with storing waveforms unprecompensated on vinyl. The precompensation makes up for the non-uniformity of the medium. It could be done with 16-bit digital as well. Or alternatively, larger sample sizes like 24 can be used.

I haven't A/B tested this. The A/B test in the article compares CD with SACD. SACD isn't PCM, so its artifacts are going to be totally different from 24-bit PCM.


If you're playing a 16 bit PCM at a reasonable listening level of 85dB SPL, then your 6 bit sinusoid is at 25dB SPL, which is quieter than a whisper at 6 feet away in a library. The quantization noise floor of a 6 bit recording is a further ~30dB quieter.

So, the noise of that signal is -5dB SPL. 0dB SPL was set to be the lowest possible perceivable level of a single sound in an anechoic chamber. And that's not even considering other sounds in the recording, or ambient noise levels in a typical living room, etc.

In your example, moving to 24 bit would be a long way from having any effect (other than a 50% increase in file size). And if you use, say, an 8 bit signal as an example, then things are even less noisy. Note that the noise is the only consideration here: any fidelity loss is represented in that figure.

The audio engineers of yore who (among other things) decided that 16 bits was more than enough for final mixdown were much more competent than they get credit for (many were downright amazing at what they did, in fact). They thought of stuff like this.
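
Those SPL figures follow from the ~6 dB-per-bit rule; a quick sketch of the arithmetic (playback level as assumed above):

```python
import math

db_per_bit = 20 * math.log10(2)   # ≈ 6.02 dB of range per bit

playback_spl = 85.0               # full-scale 16-bit playback, dB SPL
# A component occupying only the bottom 6 of 16 bits sits 10 bits down:
signal_spl = playback_spl - 10 * db_per_bit
print(round(signal_spl, 1))       # 24.8 dB SPL, quieter than a whisper

# The ~30 dB gap down to the 6-bit quantization noise floor:
noise_spl = signal_spl - 30
print(round(noise_spl, 1))        # -5.2 dB SPL, below the threshold of hearing
```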


The correct way to attack this isn't by attacking the theory. It's to gather a lot of people and ask them to press a button indicating whether the audio they hear is 16-bit or 24-bit.

If the results are no better than chance, then 24-bit doesn't matter, regardless of how sound the underlying argument is.

EDIT: The experiment would also be extremely difficult to design. For example, you'd need to run this test with music, not simple sounds. So the question is, which music? I think whatever is most popular at the time would be a good candidate, because if people are listening to music they hate, they won't care about the fine details of the audio. But that introduces an element of uncertainty and noise into the results which is hard to control for.

Some people might deliver accurate results with https://www.youtube.com/watch?v=2zNSgSzhBfM but not with https://www.youtube.com/watch?v=4Tr0otuiQuU whereas for others it's the opposite.

Or, it could be the exact opposite: Maybe you can only detect whether a sound is 24-bit when it's a simple tone, and not music.

Age is also a factor. My hearing is worse than a decade ago.

The headphones used by the test are another factor. If you feed 24-bit input to headphones, there's no guarantee that the speakers are performing with 24-bit resolution. In fact, this may be the source of most of the confusion in the debate. I'm not sure how you'd even check whether speakers are physically moving back and forth "at 24-bit resolution" rather than a 16-bit resolution.


For example, you'd need to run this test with music, not simple sounds. So the question is, which music? I think whatever is most popular at the time would be a good candidate

A quick summary would be that most "popular" music has been mastered with the following goal: the song should be recognizable and listenable on an FM radio with only a limited-bandwidth midrange speaker. One of the many things they do to achieve this is eliminating almost all dynamic range through a process called "compression" (dynamic compression, not digital compression).

They also limit the spectral range to not have "unheard" sounds cause distortion when played through limited bandwidth amplifiers and speakers.

This means that the kinds of musical pieces which could benefit from the increased dynamic range of 24-bit would be thoroughly excluded from the test.

And then you'd probably get the "expected" result, but only because you now test whether music mastered specifically not to have dynamic range benefits from having increased dynamic range. For which the answer is given.

Note: I'm not claiming 24-bit end-user audio has merits, of which I have little opinion. I'm just pointing out the flaw in the proposed experiment.

If you feed 24-bit input to headphones, there's no guarantee that the speakers are performing with 24-bit resolution.

Not sure if you're just imprecise in your language here or if you're genuinely confusing things. Speaker elements, as found in both speakers and headphones, are analogue. They operate according to the laws of physics, and respond to changes in magnetic fields, for which there is practically no lower limit.

They have no digital resolution. A quick example: Take your 16-bit music, halve the volume and voila! You are now operating at "17-bit resolution". Halve it again. 18-bit resolution. Etc.

There's probably some minimum levels of accuracy, yes, but it just doesn't make sense to measure it in bits.

If you're aware of this and were just trying to adjust the language to the problem at hand, I'm sorry for being patronizing, but I just wanted to make sure we keep things factual here.


24 bit resolution is important for capture, because it leaves headroom for mistakes. 16 bits is enough for mastering.


There's also headroom for the signal processing in the equipment. Equalization or volume control done poorly can lower your dynamic range, for example when turning the volume down on windows then turning it up on an external amp.


The experiment would also be extremely difficult to design.

I disagree. I think all the factors you are concerned about can be eliminated with a large enough sample size, like in the thousands (or maybe 10s of thousands).

You allow each person to select the genre of music they like, and you play a few clips from a few songs of each bitrate. Then they guess which is 24-bit and which is 16-bit.

I'm not paying to set it up. But it could all be done online without too much grief. It would be good to track the other statistics (age, headphone brand, etc.) as well, and see if something falls out of that.


Pretend your headphones only moved with 8-bit resolution. There is no possible way the experiment could derive a useful conclusion, but you might trick yourself into thinking it did. Especially if your sample size was 10,000 people.

More realistically, the participant might choose music for which no 24-bit recording exists.

It's very important to control for every variable. It's actually not possible to gather info about what headphones the listener is wearing. Even if it was, it wouldn't be possible to know whether they're doing the experiment in a quiet room, or whether there's a traffic jam just outside their apartment window, or whether their dog is barking during the test. Stuff like that.

Crowdsourcing this is an incredibly cool idea, but it'd just be so easy to believe you've performed a reliable test even though some variable undermined it.

I forgot another variable: Whether the music was recorded at 16-bit resolution. Most musicians use 24-bit, but it's easy to imagine that some of their samples might've been quantized to 16-bit without them realizing it.


It's very important to control for every variable.

It's not, actually. Say you have 10,000 listeners and you randomly assign each one to 16-bit vs 24-bit listening. You have enough listeners that any differences between the groups are due to chance and will very nearly even out. Now, if you find people are unable to distinguish between 16-bit and 24-bit you might want to try the test again with more control over the environment, but if you find a substantial difference in a large blind randomized test that's a real finding.


More realistically, the participant might choose music for which no 24-bit recording exists.

Well, obviously we'd need to have a limited set of music selections for which we have 24-bit recordings.

As you suggest, I expect the biggest impact on playback fidelity is going to be other factors like the noise in the system (likely a PC) and such.

But the flip side of it is that's also a good real world test. If the only time you can tell a difference is to be in an acoustically dead room with top end equipment, then the higher sample rate really isn't worth it.


But the flip side of it is that's also a good real world test. If the only time you can tell a difference is to be in an acoustically dead room with top end equipment, then the higher sample rate really isn't worth it

Hey, that's a great point! Hadn't considered that.

Proving "most people can't tell the difference between 24-bit and 16-bit in real-world settings" is less compelling than proving "no one can ever tell the difference," but it's still very relevant.


If the results are no better than chance, then it remains possible that a small subset of the test group actually can appreciate an improvement. Content providers may like to cater for that small subset. disclaimer: I am not in that hypothetical subset.


If the results are no better than chance, it means the study methodology is flawed OR there's no effect.


No, it means the study methodology is flawed OR the effect is too small to be detected with the sample size.

So you'd have to decide in advance what difference is meaningful and choose your sample size to ensure you can detect it.


There is another reason 24-bit music is desirable: it's good for remix culture (silly IP laws notwithstanding).

If I pay for music, I want to truly own it, including the possibility of someday making a mashup, a music video, a hip-hop beat, et cetera. A 24-bit source gives casual creatives the same quality material as the original masters, for a relatively paltry 1.5x increase in file size.


There's also the fact that you can FEEL some sounds that you can't hear.

Chris Randall of Sister Machine Gun (at least used to?) use a low-frequency generator at live shows to produce a sound that the audience could feel but not hear in order to make the music more intense. I suspect that you'd gain some of that effect with a larger bit size.

...or I could be completely wrong. Whatever.


The increase in sample rate to 192kHz only allows frequencies above 22kHz to be represented (i.e. no effect whatsoever on the low frequencies that you mention). Pushing the bits to 24 only lowers the noise levels (which at 16 are already demonstrably imperceptible).

What you mention, though, points directly to what /will/ improve the quality of sound reproduction: speakers. It gets harder and harder to move that much air with precision as you get lower and lower in frequency. It's a definite technical limitation, but it's to do with very high-power amps and giant speakers, not the recording format.

We have (to the degree that humans can prove that they can perceive), perfect reproduction from digital recordings, perfect amplifiers for reasonable prices (at lower-than-concert-power-levels at least), but we haven't yet developed good enough speakers to cover the whole perceptible range of frequencies to anywhere near the same degree.

Audiophiles love to try and improve the whole chain, but really the only place it matters is at the very end.
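
The first point is easy to demonstrate numerically: content above fs/2 doesn't simply vanish, it folds back down (aliasing). A sketch, assuming NumPy:

```python
import numpy as np

fs = 44100
n = np.arange(1000)  # sample indices

# A 30 kHz tone sampled at 44.1 kHz lands on exactly the same sample
# values as a (phase-flipped) 14.1 kHz tone: fs - 30000 = 14100.
f_ultra = 30000
f_alias = fs - f_ultra

ultra = np.sin(2 * np.pi * f_ultra * n / fs)
alias = -np.sin(2 * np.pi * f_alias * n / fs)

print(np.allclose(ultra, alias))  # True: the samples are indistinguishable
```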


According to this interesting-looking article here http://www.theregister.co.uk/2014/07/02/feature_the_future_l... the problem isn't frequency response but things like time delay, and it's not due simply to the very end but also to systems leading up to and around it like inaccurate crossovers and speaker cabinets that introduce time delay.


Agreed - my definition of "the end" being anything past the far end of the speaker cable.


The entire sub-bass genre is based around that concept. Case in point James Blake https://www.youtube.com/watch?v=oOT2-OTebx0&hd=1


Well, first of all there is no problem with encoding low frequencies here (we do all the time! like the fact that the notes are not all played at the same time...).

What you feel is parts of your body resonating (because the low-frequency sound is exciting modes of your body). This is unlikely to happen at high frequencies, partly because it would necessarily involve much smaller parts of your body (see [0] for a diagram of typical body resonance frequencies), which we probably can't feel, and partly because the attenuation of sound greatly increases at higher frequencies (see [1] for air), making it likely impractical to excite any such modes. My guess is that you might cause some tissue damage if you had significant ultrasonic excitation in your body (see [2] for something that may or may not be true...).

[0] http://physics.stackexchange.com/questions/37543/does-the-hu... [1] http://www.kayelaby.npl.co.uk/general_physics/2_4/2_4_1.html [2] http://www.tovatech.com/blog/4376/ultrasonic-cleaner/ultraso...


You hypothesized about tissue damage from ultrasonic excitation, so this tangential post may be of interest. Tissue damage from ultrasound is an intentionally-caused phenomenon being used (and experimented with) in some non-invasive medical procedures, where focused energy can locally ablate a tumor for instance. HIFU, high-intensity focused ultrasound, is the technique: http://en.wikipedia.org/wiki/High-intensity_focused_ultrasou...


I don't understand what you mean, so I can't say you're wrong, but higher frequencies are not more quantized than low.

In fact, you should be very careful about how you think about quantization in digital sound reproduction, because it can easily lead you astray. Think of bit depth as a measure of dynamic range. Do look at the digital music primer for geeks posted elsewhere in this thread; it makes for awesome reading.


He was referring to the specific example he gave, in which the higher-frequency sinusoid had an amplitude of 2^6 and the lower-frequency sinusoid had an amplitude of 2^10.


And that is not how digital sampling works - you don't allocate bits to different frequencies.


I don't think anyone is arguing that you do. The high-frequency sinusoid he described has a peak-to-peak amplitude of 2 * UINT6_MAX (although the intention appears to have been a sinusoid with a p-p amplitude of UINT6_MAX), which on its own (in terms of the integer values given in his example) can be represented in a 6-bit system. This isn't really relevant, though, because that signal would be 0dBFS in a 6-bit system. The upper 10 bits are far from unused by that sinusoid in a 16-bit system: they are what keep its amplitude relative to the other sinusoid unchanged. The little stairstep digitally-sampled sinusoid picture might look a little rougher for the "6-bit" sinusoid than for the "10-bit" sinusoid, but that's A) kind of a pathological case and B) not at all representative of what gets sent to the amplifier after the DAC. (Think about the spectrum of all those little "stairsteps" and what happens to them once they pass through a shunt capacitor...)


I guess it doesn't matter to human ears with well-mastered 16-bit audio, but the video linked in the OP explains that dithering noise is typically shaped (pushed toward frequencies we're less attuned to). The models used for lossy compression also typically put more noise into some of the higher frequencies.


On the other hand, with perceptual audio coding (e.g. mp3, m4a, ogg) you do.


Different frequencies aren't stored in different bits. It's all mixed together. If you mix -10dB of 100 Hz and -20dB of 1000 Hz, the composite will be the same at 8 bits, just much, much noisier. There is, curiously enough, nothing at all like "vertical resolution" in digital audio - resolution in the amplitude dimension.

24 bits is great for recording, but the end distribution medium only needs to be 16 bits, after you normalize the thing you tracked at 24 bits.

You can get libsndfile and FFTW and do the tests yourself.
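If you'd rather not set up libsndfile, a minimal NumPy sketch of the same experiment (quantize the mixed signal onto different grids and measure the error) shows that both tones survive at 8 bits and only the noise floor moves:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs  # one second of samples
# -10 dBFS of 100 Hz mixed with -20 dBFS of 1000 Hz
signal = (10 ** (-10 / 20) * np.sin(2 * np.pi * 100 * t)
          + 10 ** (-20 / 20) * np.sin(2 * np.pi * 1000 * t))

def quantize(x, bits):
    """Round onto an n-bit integer grid and scale back to float."""
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

for bits in (8, 16):
    err = quantize(signal, bits) - signal
    rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits} bits -> quantization error RMS: {rms_db:.1f} dBFS")
```

An FFT of `err` would show broadband noise, not a missing frequency: reducing bit depth raises the noise floor by roughly 6 dB per bit removed, but doesn't drop either component.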


The resolution of both sinusoids in the example you suggest is exactly 16 bits.

Edit: While each of those sinusoids could individually be represented in 6 and 10 bits respectively, the signal you describe has the high-frequency signal "riding on" the low-frequency signal. You need 16 bits to represent the amplitude of that signal at e.g. t = pi/4.


2^10 + 2^6 = 1088. You only need 11 bits to not clip with that signal.


You'd need 11 bits to encode the signal at 0dBFS. There's amplitude information stored in those other 5 zeros...
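Both replies check out arithmetically; a quick sketch:

```python
import math

peak = 2 ** 10 + 2 ** 6    # combined peak amplitude from the example
print(peak)                # 1088
print(peak.bit_length())   # 11 -- bits needed if 1088 were full scale
# In a 16-bit container the same peak sits about 30 dB below full
# scale; the "spare" high-order bits carry level, not wasted detail:
print(20 * math.log10(peak / 2 ** 15))
```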


192kHz may not be as silly as you think: http://www.ncbi.nlm.nih.gov/pubmed/10848570


Except that high frequencies attenuate fairly quickly in air.

Sound attenuation in air is proportional to f^2, so a 10-fold increase in frequency causes a 100-fold increase in attenuation: a 100kHz signal suffers 10,000 times the attenuation of a 1kHz signal over the same distance.
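The scaling can be sanity-checked in one line (this simple f^2 rule ignores the humidity-dependent relaxation terms in full atmospheric-absorption models, which make ultrasound attenuate even faster in practice):

```python
def attenuation_ratio(f_high_hz: float, f_low_hz: float) -> float:
    """Relative attenuation under the simple alpha ~ f^2 scaling."""
    return (f_high_hz / f_low_hz) ** 2

print(attenuation_ratio(100_000, 1_000))  # 10000.0
```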

In addition, attenuation due to water vapor is particularly bad above about 40KHz. Much of that study cannot be replicated, and people have tried: http://en.wikipedia.org/wiki/Hypersonic_effect

In addition, the effect went away when using headphones.

However, I can certainly believe that if you can pump enough energy to rattle things at ultrasonic frequencies you are going to get a result. Especially since ultrasonic frequencies rattle in water particularly effectively.

As an example, if I pump enough energy into an ultraviolet or infrared signal at your eye I will eventually get a detection result in your brain. However, pain and a burned retina are not what we think about when we consider a brain response.


Thanks for posting that! However, it's worth pointing out that that study has its detractors: http://en.wikipedia.org/wiki/Hypersonic_effect

I find the intermodulation argument a convincing one: it's hard to keep it from affecting the test, and unless the experimenters took specific steps to avoid it, this study could easily join the large number of tests that have fallen prey to it.

I note, however, that Wikipedia doesn't mention any specifically brain-scan-based studies that counter it. If you know of any more, I'd be very interested in hearing about them.


What gets me about the intermodulation effect is that some people want to have it both ways. They are so stuck in the 'there is no difference' camp they fail to see the self contradiction in the argument.

On one hand, higher frequencies above 20KHz can't be heard at all, so there's no point having them! You can't hear them!

Then on the other hand, higher frequencies above 20kHz affect the audible region of the sound (intermodulation distortion), so you can hear them, so make sure they aren't there!

What if the presence of the higher frequencies in a spectrum that shares a harmonic relation to the audible region causes intermodulation distortion that is pleasing and musical to the ear. What if the complete absence of this high frequency information, or alternatively a non-harmonic higher frequency signal (say some kind of switching or power supply noise) causes the audible region to be perceived in a less pleasant manner?


Intermodulation in this case is distortion that wasn't in the source material. It's a product of the failings of the playback system to perfectly recreate high frequencies without distorting the lower ones, and will vary depending on which system it is played on.

Certainly some people find certain kinds of distortion pleasant, but the people arguing for 192kHz claim increased fidelity, not pleasing distortion - when it is just the opposite for any stereo that introduces these artefacts.


There's a special irony to the fact that this high fidelity audio format is being promoted by Neil Young. Young's a rock musician. He's been around loud noises (e.g. rock concerts) most of his life. He's also 69 years old. Our ability to hear high frequencies decreases dramatically with age and exposure [1]. If anyone were able to discriminate 24/192 from 16/44.1, it sure as heck wouldn't be an elderly rock musician.

[1]: http://www.patient.co.uk/health/presbyacusis-hearing-loss-of...


To be fair, I have worked with older sound engineers, and they can hear a lot of audio artifacts that I miss, just because they've been paying closer attention for a lot longer than I have, much in the same way my wife (who plays violin 6+ hours most days) can hear tuning and pitch problems better than I can.

High frequency limiting is not the only artifact that results from data compression.


Yeah. I worked for a long time as a professional musician in an orchestra. I fucked up my hands from practising too much, so I switched career.

I can reliably hear a pitch difference of ~0.2hz at this site: http://tonometric.com/adaptivepitch/

and that is after 15 years in a symphony orchestra having my ears blasted by the brass and percussion section (with a demonstrated hearing impairment from my time in the orchestra).


Neil Young is an enigma, and has never been very consistent. (I say this as a HUGE fan of his -- one of my favorite musicians who has ever lived).

For example, he just released an album that was recorded in what is basically a phone booth. http://www.clashmusic.com/news/neil-young-makes-entire-album...


The industry wants to be able to sell you something "better" and 24/192 is clearly bigger and therefore better than 16/48.

This is the same reason I'm convinced we're going to get 8k phone displays someday.

If the recording industry wants to sell me a "platinum" version of recordings, what I'd really like to have is different masterings of an album: at least one for noisy environments like the car, and one for higher-quality environments like my home theater. If you're familiar with "The Loudness Wars", this is a reaction to that. NiN tried to do this with their "audiophile" mix of Hesitation Marks (although a lot of people think they did not succeed, http://www.metal-fi.com/terrible-lie/ )

On the other hand, I don't need to buy any new equipment to support that, so the equipment guys aren't going to be happy. I don't know if there's any silver bullet for them--if there is a hypothetical advancement that would cause me to upgrade my system, I can't envision it.


Until phones are 600dpi like paper, I'm fine with display resolution continuing to increase, thanks.


4k would exceed that significantly on a 6" display, which is why the parent compares an 8k phone to the 24/192 discussion - the benefits are nothing more than being able to advertise a larger number.


Ah. Sorry. I didn't do the math.


Let's do the math. If I didn't screw it up (and I probably did), an 8k iphone 6 would be around 1600 dpi.
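Redoing the arithmetic (assuming a 4.7-inch 16:9 panel like the iPhone 6, and 8K meaning 7680x4320), the "I probably did" caveat turns out to be warranted: the figure comes out closer to 1900 dpi.

```python
import math

def ppi(h_px: int, v_px: int, diagonal_in: float) -> float:
    """Pixels per inch given resolution and diagonal screen size."""
    return math.hypot(h_px, v_px) / diagonal_in

# 8K (7680x4320) on a 4.7-inch panel:
print(round(ppi(7680, 4320, 4.7)))  # 1875
```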


Imagine being able to use a physical magnifier to see more information on the screen. Wouldn't that be great, at least for a curious child?


I'm imagining the computer consoles in Brazil. https://imgur.com/pFUTYH2


I kind of had that effect at a trade show a while ago, where they had huge demo displays, something like 12 feet / 6000 pixels across. I couldn't see all the detail at a usual viewing distance, but could make it out in small areas by wandering up close. It was kinda cool. I'm not sure it would have that much value in a domestic setting, but in something like a museum or gallery it could be good.


I'd think a curious child with a magnifier would be more interested in seeing what the display is made of.


It would also increase the effective color count, even if it didn't help resolution.


so, carrying that to its logical conclusion, does that mean that at some point we will create displays w/ such a high resolution that they will, in some sense, be creating a "universe" which is indistinguishable from that which they are displaying? or, i guess that would mean the display essentially is what it is displaying. is that even possible in our universe, or would that be akin to creating energy / matter from nothingness?

clearly the display would blink "let there be light"* at startup.

* http://www.multivax.com/last_question.html


Like microfiche?


It's not like a smartphone has that much extra processor and battery to waste.

