In controlled listening tests, most people have trouble distinguishing MP3 at 128-160kbps VBR from the uncompressed original. 320kbps is mostly a waste of bandwidth, but people assume more is better.
In a previous life I was a sound engineer. Under controlled conditions, using my own listening equipment and lossless source files with which I am familiar, I can identify 64kbps vs 128kbps (p = .01), 128kbps vs 192kbps (p = .01), 192kbps vs 256kbps (p = .03), and probably 256kbps vs 320kbps (p = .07); n = 30, LAME 3.something for all tests. If you are in Austin, you can come over and watch me do this in person.
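For the curious, those p-values fall out of a one-tailed binomial test against 50/50 guessing. A minimal sketch in Python; the 22-of-30 score is just a made-up example:

    # One-tailed binomial test for an ABX run: could this score be chance?
    from scipy.stats import binomtest

    result = binomtest(k=22, n=30, p=0.5, alternative="greater")
    print(f"p = {result.pvalue:.3f}")  # probability of scoring >= 22/30 by guessing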
I have no doubt that the general population may be (statistically) unable to distinguish 128kbps vs 256kbps, but that says nothing about a minority of individuals, many of whom are large music purchasers.
Fellow audio engineer here. MP3 has a noticeable frequency dropoff at 16kHz, which makes it detectable regardless of the bitrate. Did you test MP4 (or whatever), which does not have that?
I seem to remember scientific studies claiming that 192kbit MP4 was indistinguishable from uncompressed sound.
That said, I still prefer uncompressed audio for mindful listening. For casual listening, I frankly don't care.
I did a similar test with AAC, which as I understand it (I'm not a compression engineer) doesn't suffer from the same 16kHz problem.
64kbps vs 128kbps (p = .01)
128kbps vs 192kbps (p = .02)
192kbps vs 256kbps (p = .05)
256kbps vs 320kbps (p = .16)
This test taught me that (for my ears) AAC 256kbps is a good all-around codec for my music. You should do your own test (you might hear differently than I do; apparently I hear differently than everyone in the Gizmodo and Maximum PC "studies"). But I would be surprised if it were simply a coincidence that Apple chose to standardize on 256kbps AAC, exactly the point where I have serious trouble distinguishing bitrates.
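If you do want to run your own test, the ABX protocol is simple enough to script. A minimal sketch; the file names are hypothetical, and "afplay" is just an assumption (substitute whatever command-line player you have):

    # Minimal ABX trial harness: X is secretly A or B each trial; you guess which.
    import random
    import subprocess

    FILES = {"A": "original.wav", "B": "encoded_128.wav"}  # hypothetical names
    TRIALS = 16
    correct = 0
    for trial in range(1, TRIALS + 1):
        x = random.choice("AB")  # hidden assignment for this trial
        while True:
            cmd = input(f"Trial {trial} [a/b/x = play, g = guess]: ").strip().lower()
            if cmd in ("a", "b"):
                subprocess.run(["afplay", FILES[cmd.upper()]])
            elif cmd == "x":
                subprocess.run(["afplay", FILES[x]])
            elif cmd == "g":
                guess = input("X was... (A/B)? ").strip().upper()
                correct += guess == x
                break
    print(f"{correct}/{TRIALS} correct")

Feed the final score into a binomial test (as above) to see whether you actually beat chance.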
The current LAME encoder uses a variable cutoff frequency depending on the quality setting. At the recommended "transparent" settings (-V2 or -V3) it uses a polyphase filter with a transition band of 18671-19205 Hz.
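You can spot that lowpass yourself by comparing spectra. A rough sketch, assuming you've already decoded the MP3 back to WAV at the same sample rate (file names are hypothetical):

    # Fraction of spectral energy above a cutoff; the MP3 round-trip should
    # show almost nothing above the encoder's lowpass. Names are hypothetical.
    import numpy as np
    from scipy.io import wavfile

    def hf_energy(path, cutoff_hz=18000):
        rate, data = wavfile.read(path)
        mono = data.mean(axis=1) if data.ndim > 1 else data.astype(float)
        spectrum = np.abs(np.fft.rfft(mono))
        freqs = np.fft.rfftfreq(len(mono), d=1.0 / rate)
        return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()

    print("original :", hf_energy("original.wav"))
    print("mp3 round:", hf_energy("decoded_from_mp3.wav"))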
You should be able to discern most encoding effects on £1k-each speakers and a £500 sound card; if not, you have wasted your money.
OTOH, as anyone who has tried to write music cheaply knows, distortions much larger than those typically caused by lossy compression quickly vanish on "normal" listening gear.
Of course, the people who care about encoding artifacts are much much more likely to have an expensive signal chain.
There are a small handful of "golden ears" testers who can ABX samples at much higher bitrates than average. You might be one of them. Most people who think they can do this fail in a real test, though.
And yet it is trivial to teach every one of them to identify 128kbit (at least) from uncompressed in just a few short minutes. And it's the kind of thing you pretty much can't un-hear. Those who want better quality have a legitimate gripe, though for the business purposes related to your statement it's best to offer a higher-bitrate stream as a configuration option rather than the default.
I think the author is on the wrong track discussing encoding times for 320kbit; it's much more likely that Spotify is interested in keeping down their bandwidth costs.
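Back-of-envelope, the delta adds up fast at streaming scale. A quick sketch (the stream count is made up):

    # Rough daily bandwidth delta between two stream bitrates; numbers made up.
    KBPS_LOW, KBPS_HIGH = 160, 320
    STREAMS = 1_000_000  # hypothetical concurrent listeners

    def tb_per_day(kbps):
        return kbps * 1000 * 86400 * STREAMS / 8 / 1e12  # bits -> terabytes

    print(f"{tb_per_day(KBPS_HIGH) - tb_per_day(KBPS_LOW):,.0f} TB/day extra")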
A major streaming provider that I'm familiar with actually delivered streams that were 15%-20% under the quoted bitrate on many popular tracks for a few years, but only during periods of peak bandwidth consumption. It saved a significant amount of money and afaik was never detected (they no longer do so).
That said, Spotify's baseline is -q5 Ogg Vorbis (true VBR at an average of around 160kbps), not 128k MP3. Huge difference. VBR mode is a huge win for a codec, and Vorbis in general is less "obvious" than MP3. In particular, drums aren't a dead giveaway that the file is compressed, unlike any MP3 below 256kbps.
That depends on what you're listening to. For all my progressive/power metal, 128 just sounds terrible and empty. I'm not very well versed in the terminology, so I don't know how else to describe it. By contrast, 320 sounds full.
People make a lot of those kinds of qualitative statements about sound quality, but when they actually do a rigorous A/B test they usually can't tell the difference.
I've done quite a bit of A/B testing on metal at ~128 kbits and it's very difficult to spot differences on most tracks. Modern lossy audio encoders are very, very good.
Using decent in-ear headphones (I like the Etymotic HF2), listening to Justice, I can without a doubt tell the difference between 128kbit and 256kbit and up. Or more specifically, 128kbit and lower for specific music makes me feel nauseous.
I'm guessing this type of music simply doesn't compress as well as say... Red Hot Chili Peppers.
I'm not performing a rigorous A/B test, and I can believe that I would fail said A/B test given other conditions (other speakers, other music, etc). I would love for this to be true for all conditions and save all that storage space. Unfortunately, in my personal real-life conditions, better quality does make for a better listening experience.
Are you doing the encoding and performing the A/B tests yourself? There are all kinds of things that can hurt audio fidelity a lot worse than bitrate. Some MP3s are poorly encoded by some crappy shareware application. Some are transcoded from an already-lossy source. Some productions will compress better than others (supposedly, some producers actually mix and master with the inevitable compression in mind).
My original source was pirated music at 128kbits. Since then, I bought it and have the 320kbit version, which immediately sounded incredibly better. Occasionally I'll hear it at 128kbits or 192kbits on Pandora and such, and the difference is very noticeable to me.
My data is purely anecdotal but I feel strongly about it and would be willing to put money where my mouth is if someone wants to call me on it.
Most tracks is not enough. I don't want to waste time fiddling with the optimal quality/size ratio per song. 256kbps AAC, clickety click, fits anything and makes a nice, small rip with zero overhead.
Maybe it's a technology difference, but that's the general feel I have across most tracks I've listened to. I'm not A/B testing specifically, but I was complaining about 128 kbit tracks and asking why they sounded empty, even before I knew about bit rates. Systematic Chaos, on 320 for example, sounds amazing.
I'll do some specific testing before I say anything else. :)
Audio perception is highly subject to the placebo effect. If you're curious, the community at http://www.hydrogenaudio.org/forums/ has worked very hard to make this a more rigorous scientific process.
It's pretty remarkable how good modern lossy encoders really are. I consider it one of the more impressive feats of software engineering of the last decade.
To my [untrained] ears, the difference between lossless and lossy compression on Ravel's Bolero is remarkably stark. The delta between 128/160 and 320 is not as clear as above, but still noticeable.
In sum, I suppose this would depend on the nature of the music being listened to.
At 128kbps (even VBR) I have numerous songs which end up simply distorted and blocky. 160kbps MP3 is a minimum for those. There's simply too much information to pack into certain passages.
> only at frequencies outside the hearing range of most people.
That is the idea, but the reality is that below a certain bitrate, songs begin sounding weak/metallic. That bitrate depends on the listener, the equipment, the song, and the codec.
I cannot give you any nice, objective numbers here since sound quality is heavily subjective. But you cannot discount subjective experience here simply because a study found that x% of the general population cannot discern the difference between 128kbps MP3s and 320kbps MP3s. My own experience is that many songs suffer with 128kbps MP3s, particularly classical. I've used at least 192kbps MP3s since I started storing my music collection on a computer.
Also consider: if bitrate were irrelevant, why are content providers tending toward higher bitrates? We can safely assume they'd prefer to act in their own interests and keep bandwidth as low as possible.
Did you do a well-controlled blind test? That’s, I guess, the relevant question. I don’t think you can trust your ears if you know what you are listening to.
I do actually recommend doing just that. It doesn’t matter who can and cannot hear what, what matters is whether you can hear the difference in a blind test. I did just that before I started buying compressed music. (I didn’t try 128kbps MP3s. I consequently don’t know whether I can hear the difference. I tried 256kbps AAC files – those were the ones I was planning on buying – and I most certainly couldn’t hear the difference.)
MP3 certainly is limited; it even has some problems that are inherent to it, which not even a higher bitrate can fix. Short, sharp sounds (think castanets), for example, are a problem.
Because of the way human hearing works, loud sounds mask quiet sounds. MP3 (and other lossy compression algorithms) uses this masking to hide the noise that results from compressing the audio. To do that, the algorithm has to figure out where the mask is, and there is, of course, a time dimension to that mask. MP3 can’t use arbitrarily short masks, so it’s possible for the noise that’s supposed to be hidden under a loud sound to spill over into sections that are actually quiet. This happens when a short, loud sound is preceded by silence (the classic "pre-echo" artifact). You know, castanets.
No bitrate, however high, can solve that problem (it can only reduce the overall noise that has to be hidden), but newer compression algorithms (like, for example, AAC) are more flexible with their masks and don’t necessarily have the same problem.
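You can make this easy to hear by encoding an artificial test signal. A minimal sketch that writes a silence-then-click WAV to run through an encoder of your choice (all parameters are arbitrary):

    # Generate a castanet-like test signal (silence, sharp click, silence) so
    # pre-echo is easy to hear after lossy encoding. Parameters are arbitrary.
    import numpy as np
    from scipy.io import wavfile

    RATE = 44100
    signal = np.zeros(RATE * 2)                  # two seconds of silence
    t = np.arange(int(0.01 * RATE))              # a 10 ms burst
    click = np.sin(2 * np.pi * 3000 * t / RATE) * np.exp(-t / (0.002 * RATE))
    signal[RATE : RATE + len(click)] = click     # click at the 1 s mark
    wavfile.write("click.wav", RATE, (signal * 32767).astype(np.int16))
    # Encode click.wav, decode it back, and listen for noise *before* the click.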
Go to http://www.hydrogenaudio.org/forums/ and you can find all the nice, objective numbers you want. The reality is that they now do listening tests at <96kbits because the encoders are too good above that threshold. Subjective impression is no more useful here than it is in any other quantifiable, scientific application.
Providers push higher bitrates because customers think they're better and demand them.
"When all 12 trials were tabulated across all listeners, the high school students preferred the lossless CD format over the MP3 version in 67% of the trials (slide 16). The CD format was preferred in 145 of 216 trials (p<0.001)."