
Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle - X4
http://arxiv.org/abs/1208.4611
======
jstanley
Intriguing title and abstract, but I don't really understand what this means,
or its implications. Can anyone summarise for a layman?

~~~
dekhn
Sure. In general, if you have a time-domain signal (like a music file) you can
break it up into little pieces and compute the frequencies in each piece. For
example, you could take a small part of a song and say "this part of the song
is a 400 Hz sound for 0.3 seconds". This is typically done with an algorithm
called the Fourier transform, which converts between time-domain and
frequency-domain signals. However, as you make smaller and smaller time
fragments, you pay a cost: less accuracy in the estimate of the frequency
(because you have fewer samples to work with). In general, this relationship
can be expressed as "accuracy of frequency * accuracy of time = a constant".

Presumably, what they found is that the human auditory system is not doing a
simple operation like that, but has the ability to pick out frequencies in
short time segments (i.e., while listening in real time) with better resolution
than a short-time Fourier transform. I.e., somebody could estimate the
frequency of a pitch faster than you'd expect from the equation I gave above.

None of this violates the laws of physics; the ear and brain would just need
to do a different type of analysis. Probably more like wavelets.
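You can see the trade-off numerically with a quick NumPy sketch (the 8 kHz
sample rate and 410 Hz test tone here are arbitrary choices, not anything from
the paper): the DFT's frequency grid has spacing fs/N, so a shorter window
forces the peak estimate onto a much coarser grid.

```python
import numpy as np

fs = 8000          # sample rate in Hz (arbitrary, for illustration)
true_freq = 410.0  # tone we try to identify (arbitrary)

for duration in (0.3, 0.03):  # longer vs. shorter analysis window
    n = int(fs * duration)
    t = np.arange(n) / fs
    signal = np.sin(2 * np.pi * true_freq * t)

    # Magnitude spectrum and the frequency of its largest bin
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    peak = freqs[np.argmax(spectrum)]

    # DFT bin spacing is fs/n: shorter window -> coarser frequency grid
    print(f"{duration:.2f} s window: peak at {peak:.1f} Hz, "
          f"bin spacing {fs / n:.1f} Hz")
```

With the 0.3 s window the grid is fine enough to land within a few Hz of the
tone; with the 0.03 s window the bins are ~33 Hz apart, so the estimate snaps
to a bin well away from 410 Hz.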

~~~
jstanley
Gotcha, thanks.

Is there a possibility that overlapping DFT windows could get you increased
accuracy? I mean, instead of looking at each 0.3s individually, you could look
at 0.3s segments, starting each segment 0.1s after the previous segment
started so that there is some overlap.
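I.e., something like this rough NumPy sketch (the sample rate and 410 Hz test
tone are just placeholders):

```python
import numpy as np

fs = 8000                                # sample rate in Hz (placeholder)
hop, win = int(0.1 * fs), int(0.3 * fs)  # 0.1 s hop, 0.3 s windows
t = np.arange(fs) / fs                   # one second of audio
signal = np.sin(2 * np.pi * 410.0 * t)   # placeholder test tone

# Overlapping short-time DFTs: each window starts 0.1 s after the previous
starts = range(0, len(signal) - win + 1, hop)
freqs = np.fft.rfftfreq(win, d=1 / fs)
peaks = [freqs[np.argmax(np.abs(np.fft.rfft(signal[s:s + win])))]
         for s in starts]
print(peaks)  # one frequency estimate per overlapped window
```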

~~~
dekhn
I wish I knew.

