This was surprising. With all that poking fun of audiophiles, I expected there would not be much of a difference.
I got 5 out of 6 correct, and the one I missed was a pretty near miss (I picked at random between the 320 kbps and uncompressed samples). And these were quite clear choices; many times I just needed a few seconds: 128 kbps sounded worse every single time, and 320 kbps vs uncompressed was a bit harder, but still pretty noticeable if I paid attention.
It probably wouldn't make a big practical difference for typical "background noise" listening, but it may have an impact if you just want to sit back, relax, and focus on the music (lower bitrates sounded "muddled" to me, losing detail in the high frequencies).
BTW I'm no audiophile, no special audio gear, just cheap (but decent) 9 EUR in-ear headphones plugged into a notebook.
One person getting 5 out of 6 could also be attributed to chance.
The average for random guessing over a large number of people and 2 options would be 3 out of 6 (like a coin toss). But the samples would also include several people scoring 4, 5, and even 6 out of 6. In this case it's something like 15/18, which, while very good, is still possible by chance.
> With all that poking fun of audiophiles, I expected
> there would not be much of a difference.
The fun-poking is mostly about things like 192 kHz sample rates and thousand-dollar power cables. It's uncontroversial that people can hear MP3 artifacts.
> 96kHz is common for live too (as it is half the latency of a 48kHz system).
I don't really understand this, but I guess it means that the first sample of a digital signal gets from one place to another a ninety-six-thousandth of a second more quickly. I'm not sure why that would matter. It's the time it takes sound to travel about 4 mm, which seems inconsequential, and the overall signal will have the same phase.
In many cases (like, if there's a computer involved anywhere), a high sample rate means you need higher latency to avoid the risk of underruns.
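For what it's worth, the "half the latency" claim is usually about buffer sizes rather than single-sample travel time: round-trip delay is dominated by buffer size in samples divided by the sample rate. A rough sketch (the 64-sample buffer is just an illustrative figure):

```python
# Latency contributed by one audio buffer: buffer_samples / sample_rate.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return 1000.0 * buffer_samples / sample_rate_hz

# The same fixed-size buffer at two rates: doubling the rate halves the delay...
print(buffer_latency_ms(64, 48_000))  # ~1.33 ms
print(buffer_latency_ms(64, 96_000))  # ~0.67 ms
# ...but only if the hardware can fill the buffer in half the wall-clock time;
# otherwise the buffer has to grow and the advantage disappears (the underrun
# risk mentioned above).
```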
If you've got to send it from the stage to the mixing console at the back and then back to the stage, or wherever the amps powering the line array are, then latency becomes more important? What if the console is half a km away?
Also, if the sound processors are performing calculations in the idle time between samples (calculating FIR filters or something like that), or in the idle processor time left after processing each sample, a higher sample rate will mean the calculation gets done faster (and is therefore audible sooner). Otherwise you'd change a setting and have to wait to hear it (which would perhaps be noticeable), I guess?
Then a ninety-six thousandth of a second's worth of latency seems pretty irrelevant? Even if you were decreasing the latency by using a higher sample rate, which you aren't.
The signal coming out of a FIR filter will come out at the same time whatever the sampling rate. I guess it's conceivable, if you have no buffering whatsoever, that the very first sample will come out slightly quicker, but that is honestly irrelevant. The overall signal will have the same timing at either sample rate. Unless you've had to introduce more latency to cope with the demands of the higher sample rate.
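The "same timing at either sample rate" point can be made concrete with the group delay of a linear-phase FIR filter, (N - 1) / (2 * fs): keeping the same frequency response at double the sample rate needs roughly double the taps, so the delay in wall-clock time is unchanged. The tap counts below are illustrative, not from any real console:

```python
# Group delay of a linear-phase FIR filter with num_taps coefficients.
def fir_group_delay_ms(num_taps: int, sample_rate_hz: int) -> float:
    return 1000.0 * (num_taps - 1) / (2 * sample_rate_hz)

print(fir_group_delay_ms(481, 48_000))  # 5.0 ms
print(fir_group_delay_ms(961, 96_000))  # 5.0 ms -- twice the rate, twice the
                                        # taps, identical delay in seconds
```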
> This was surprising. With all that poking fun of audiophiles,
> I expected there would not be much of a difference.
What I see more and more online is that people fall into a pattern of poking fun at groups without really understanding the scientific, factual, or ideological basis for doing so. Furthermore, most people fall into the pattern after a semantics-free pattern match, quickly making a decision without substance. (One might suppose that the true priority in these situations is the opportunity to have fun at someone's expense, not the ideological or scientific issue at hand.)
When I was college-aged, we called such jumping to conclusions "prejudice." One is coming prematurely to a conclusion, possibly contrary to a properly informed decision. Even in such a vaunted forum as HN, I see people proudly announcing how they have jumped to a conclusion based on signalling. How is this any different from a Mad Men character judging another's credibility by their alma mater and the cut of their jacket?
When it comes down to it, the "audiophile" set has myths and disinformation floating around within it mixed in with actual science. Note that this is true for any set of people derived from a shallow labeling, like "programmer."
My results were similar. I got 4 out of 6, and on the 2 I missed I picked the 320 kbps file. It was more difficult to tell the difference between uncompressed and 320 kbps when the composition was busier, as with the Coldplay track. The piano track at the end was clear as day: the uncompressed version was warmer and fuller, with clearer reverb. Same with the Vega track.
I am guilty of poking fun at audiophiles. I got 5/6 correct; the one I missed was the 320 kbps sample. 128 kbps was clearly inferior every time. And I was listening on my Mac's speakers with the volume at 60%. Now I'm going to have to re-rip my CDs into FLAC. I'll admit I was wrong... a little.