Hacker News

To be fair, his style was highly argumentative. He left no room for subjectivity, or even for talking about subjectivity, in anything. That leaves out a fair portion of what audio really is to the human brain, regardless of whether you perfect its reproduction mathematically.

His point of view, and his way of presenting it, were far too unbalanced and inhumane, regardless of their correctness. That, not the facts themselves, is why he was not received well.

This fact generalizes to a great many technical problems.




From what I've seen, NwAvGuy and other objective-audio types tend to have a better understanding of the subjective parts of audio, though. It helps if you don't treat it as a mystery that can be affected by changes in the audio we can't measure. (Also, they're rather less likely to design amplifiers that destroy the equipment they're connected to than certain audiophile companies...)


> Which leaves out a fair portion about what audio really is to the human brain, whether or not you perfect its reproduction mathematically.

Hey I have a question, it's kinda off-topic but maybe someone reading this thread can answer this for me. Your comment made me think about this because I'm not sure if the effect I'm hearing is subjective, psycho-acoustic, physical or mathematical in nature.

I've been coding my own audio-synthesis toys on and off as a hobby for over a decade now, I like to believe I understand a thing or two about it :)

A little while back I wrote some very simple code to synthesize waveforms from the summation of sine-waves.

Summing sine waves at harmonic frequencies n = 1..N with amplitudes 1/n produces a very nice bandlimited sawtooth waveform. This works; it sounds crisp, like a sawtooth, exactly what you'd expect.
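A minimal sketch of that summation, assuming NumPy; the sample rate, fundamental, and harmonic count here are arbitrary choices of mine, not the original code:

```python
import numpy as np

SR = 44100   # sample rate (Hz)
F0 = 220.0   # fundamental frequency (Hz); keep N * F0 below Nyquist
N = 40       # number of harmonics

t = np.arange(SR) / SR  # one second of time samples

# Harmonics n = 1..N with amplitude 1/n sum to a bandlimited sawtooth.
saw = sum(np.sin(2 * np.pi * n * F0 * t) / n for n in range(1, N + 1))
```

Scaling `saw` into range and writing it out as audio should give the crisp sawtooth described above.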

Then I started thinking about phases. This is the important part: I have always understood that the human ear cannot perceive phase. I'm specifically talking about a mono signal (no L/R phase differences), heard through headphones (no interfering room acoustics), and a continuous waveform (so any phase shift is only meaningful modulo 2pi).

Obviously shifting the phases linearly with respect to the frequency just results in the same sawtooth wave shifted in time. Works exactly as expected, no audible differences.

So I continued with other ways of meddling with the phases, which resulted in radically different-looking waveforms. Adding a constant amount gives you a kind of comb-like impulse train with rounded bottoms. Setting the phase to the frequency squared (times pi/3 or pi/5) gives you some really funky, pointy-looking shapes. Frequency cubed (again times pi over some integer constant) looks like a sum of a series of sloped square waves. Picking a (fixed) random phase for every component gives something noisy-looking (but periodic, of course).

Pretty cool, really. FFT analysis of these waveforms confirms they still have exactly the same frequency components as a regular sawtooth wave. I triple-checked; it works exactly as intended.
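The whole experiment fits in a short NumPy sketch (the constants, the rng seed, and the `additive` helper are illustrative choices of mine, not the original code):

```python
import numpy as np

SR, F0, N = 44100, 220.0, 40
t = np.arange(SR) / SR        # one second of samples
n = np.arange(1, N + 1)       # harmonic numbers

def additive(phases):
    """Sum harmonics 1..N with amplitude 1/n and one phase offset per harmonic."""
    return (np.sin(2 * np.pi * np.outer(t, n) * F0 + phases) / n).sum(axis=1)

saw       = additive(np.zeros(N))                  # plain sawtooth
combish   = additive(np.full(N, np.pi / 2))        # constant phase offset
quadratic = additive((np.pi / 3) * n ** 2)         # phase proportional to n^2
scrambled = additive(np.random.default_rng(0).uniform(0, 2 * np.pi, N))

# Time-domain shapes differ radically...
assert not np.allclose(saw, scrambled, atol=0.1)

# ...yet the magnitude spectra are numerically identical:
mag = lambda x: np.abs(np.fft.rfft(x))
for variant in (combish, quadratic, scrambled):
    assert np.allclose(mag(saw), mag(variant), atol=1e-6)
```

With F0 dividing SR evenly, each harmonic lands exactly on an FFT bin, so the magnitude comparison is exact up to floating-point error rather than smeared by spectral leakage.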

But here comes the twist: they sound different!

I checked on headphones and speakers, at varying levels of volume/amplitude, to see if there were any non-linearities causing the difference somewhere along the signal chain (nope, they sound different in exactly the same manner regardless of amplitude). If I had to describe the difference, I'd say they sound a bit more "hollow", in the sense that a square wave sounds hollow, but not quite (and they obviously still have all their harmonics). It's not a subtle difference though; I can hear it very clearly.

What gives? I thought we couldn't hear phase, only phase differences, and phase cancellations? As in, only when one phase interferes with another, along the same frequency.

It kinda upsets the very foundations of my whole mental model of audio DSP: phase is irrelevant, unless A) the phase shift is large enough to cause an audible delay, B) there's a phase difference between the left and right ear, C) two signals with components at the same frequencies interfere, or D) you're doing some non-linear post-processing. And I've never seen anything in the literature suggesting otherwise.

Anyone got an idea? How can a steady periodic signal sound different when the frequency components' amplitudes stay the same and only the phases are shifted? And in what way does this affect the sound? I have a fairly good intuition for how amplitude changes affect the sound (EQ, filtering), but not for phase.

It's a mystery! (well, to me)


Yep, I'd have to see waveforms.

My instinct is that the phase is causing some distortion somewhere along the audio path, between the ideal digital signal and the analog output. In between sit the DAC, the amplifier, the wiring, and the transducers, all of which can induce varying degrees of distortion depending on properties of the input.
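For what it's worth, the mechanism behind that hypothesis is easy to demonstrate in isolation. This is a toy NumPy sketch with a tanh soft-clipper standing in for whatever a real signal chain might do; it illustrates how a non-linearity can betray phase, not the actual hardware:

```python
import numpy as np

SR, F0, N = 44100, 220.0, 20
t = np.arange(SR) / SR
n = np.arange(1, N + 1)

def additive(phases):
    # Harmonics 1..N, amplitude 1/n, one arbitrary phase per harmonic.
    return (np.sin(2 * np.pi * np.outer(t, n) * F0 + phases) / n).sum(axis=1)

saw = additive(np.zeros(N))
scrambled = additive(np.random.default_rng(1).uniform(0, 2 * np.pi, N))
mag = lambda x: np.abs(np.fft.rfft(x))

# Identical magnitude spectra going in...
assert np.allclose(mag(saw), mag(scrambled), atol=1e-6)

# ...but a memoryless non-linearity (soft clipping) reacts to the waveform's
# shape, so the spectra coming out are measurably different.
clip = lambda x: np.tanh(2.0 * x)
assert np.abs(mag(clip(saw)) - mag(clip(scrambled))).max() > 1.0
```

That said, the parent reports the effect persists unchanged across playback levels, which weighs against a simple level-dependent non-linearity like this one.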

But your experiment validates the simple concept that small, seemingly insignificant differences in audio can cause unexpected and perceptible effects! Awesome.


This is interesting, but it would help if you could provide .wav files for others to hear and see for themselves.


I read him differently. He only criticized objective claims which were scientifically implausible or found to be untrue, e.g., that vinyl had higher fidelity than CD as a medium, that humans could hear a difference between cheap and expensive cables, or when his measurements differed remarkably from manufacturers' specs, i.e., false advertising. He did not criticize subjective preferences such as vinyl over CD, fancy cables over cheap ones or tube amps over op-amps. Only when false/implausible objective claims came into play did he refute them.


This is true; he never argued against simple preference.

His bedside manner in most discussions left a lot to be desired, is all.



