
Can't you just turn up the volume? - varunsrin
https://medium.com/@Amp/cant-you-just-turn-up-the-volume-4ecb7fc422a
======
korethr
This is nifty.

My father has hearing loss and it's bad enough that not only does he use
hearing aids, he is constantly turning them up until they feedback and start
ringing. He doesn't hear the ringing, but I do, and I have some hearing damage
of my own. And he wonders why his hearing aids go through batteries so fast.

Their description of how hearing loss works gives me some ideas on how I can
help my father manage his hearing loss better than just constantly buying
batteries.

However, I do have one complaint with the article, and that's their (mis)use
of terminology, specifically, dynamic range. Dynamic range is not, as they
claim, the range of frequencies one can hear, from lowest to highest (e.g.
20Hz-20kHz). That's bandwidth. Dynamic range is the ratio between the loudest
and quietest sounds possible, often expressed in dB.[1]

For example, as they mention, human hearing has about 120dB of dynamic range.
An audio CD can encode a dynamic range of 96dB. The 24-bit files professional
audio studios work with can represent up to 144dB of dynamic range.
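Those figures follow directly from bit depth: each bit of linear PCM adds
about 6 dB of dynamic range. A quick sketch (illustrative only):

```python
import math

def pcm_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: 20 * log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

print(round(pcm_dynamic_range_db(16), 1))  # CD audio -> 96.3
print(round(pcm_dynamic_range_db(24), 1))  # 24-bit studio files -> 144.5
```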

Perhaps it's a pedantic distinction, but using already existing terms for what
you mean to say is less likely to cause confusion than misusing one that means
something else.

1\.
[https://en.wikipedia.org/wiki/Dynamic_range](https://en.wikipedia.org/wiki/Dynamic_range)

~~~
analog31
It seems to me that a simple way to eliminate the feedback issue is to
separate the microphone from the speaker. Hearing loss runs in my family, so
I've watched my relatives struggle with hearing aids, and I've decided that
when it's time for me:

1) I will build it myself and figure out the details, to hell with the
audiologists and their racket.

2) I will carry the microphone and amplifier in my shirt pocket. No more
feedback.

Ironically, with the advent of personal electronics, everybody wears some sort
of gizmo on their body, so I think we could just persuade the elderly that
hearing aids don't need to be invisible any more.

~~~
Dwolb
I'm not a hearing aid expert but the microphone and earpiece may currently be
tightly coupled because it has to be. That is, human ears are very sensitive
to phase shifts and time delays. Placing the microphone somewhere other than
your ear may throw off your hearing somewhat (possibly the ability to locate
where a sound originated).

~~~
roberthahn
I can vouch for this. I used to have an FM system - this is a mic that
transmits on a very specific FM frequency to a receiver that's paired with my
hearing aids. I loved it because I could easily use it to hear people near me
in loud rooms, but it sucked in meeting rooms. The reason why is that I would
hear someone speaking through my hearing aids and through the mic - at
slightly different times. This is the phase shift Dwolb speaks of. This
reduced clarity significantly.

That said, if analog31 wants to wear the mic on his shirt, that distance is
actually good enough for most 1 to 1 conversations. Just understand that
you're trading off "spatial perception" (no left/right balance if only one
mic).

------
Jemaclus
I'm completely deaf in my left ear, and I wear a hearing aid in my right ear.
What's really cool is that my hearing aid has Bluetooth, and starting with the
iPhone 5S, Apple supports direct-to-hearing-aid technology. That means when I
get a phone call, it streams directly from the phone to my hearing aid -- not
out of a speaker, _directly_ to my ear. Very, very cool.

Here's more info: [https://www.apple.com/accessibility/ios/hearing-
aids/](https://www.apple.com/accessibility/ios/hearing-aids/)

That said, if I had an older hearing aid or didn't have this one, I'd
definitely use this app. They are spot on about hearing loss and how it's more
than just a volume thing. In fact, most of hearing loss is really an
_understanding_ thing. I can hear your voice just fine -- I just can't hear
100% of it, so the words don't make sense to me right away.

------
carlob
I once went to a 3h long blackboard talk given by James Hudspeth [1] on the
physics of hearing.

It was one of the most fascinating things I've ever heard: it turns out that
not only does the cochlea perform a Fourier transform of the sounds we're
hearing, it can also selectively amplify some frequencies by vibrating the
very same hairs that detect the sounds.
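The frequency decomposition described above can be played with directly
(NumPy assumed); an FFT is only a loose analogy for what the cochlea does,
but it shows how a mixture of tones separates into its components:

```python
import numpy as np

fs = 8000                                  # sample rate in Hz
t = np.arange(fs) / fs                     # one second of samples
# Two pure tones standing in for a complex sound
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# The FFT separates the mixture back into its frequency components,
# roughly as the cochlea maps frequencies to positions along its length
spectrum = np.abs(np.fft.rfft(x)) / (len(x) / 2)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

print(freqs[spectrum > 0.25])  # the two tones reappear: [ 440. 1000.]
```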

Sometimes the mechanism that amplifies some sounds goes wrong; that's why old
dogs sometimes seem to emit a high-pitched sound from their ears, and it's
also the cause of some forms of tinnitus.

If you have some time to kill do go read the wikipedia pages of the cochlea
and hair cells, it's really fascinating stuff!

[1]
[http://www.rockefeller.edu/research/faculty/labheads/JamesHu...](http://www.rockefeller.edu/research/faculty/labheads/JamesHudspeth/)

~~~
HCIdivision17
I have always wondered if that high pitched sound was real! No one else heard
it, and it makes so little sense I always figured it was my hearing.

~~~
72deluxe
Do you hear that high-pitched sound all the time? That's tinnitus.

Or do you only hear it from dogs?

~~~
HCIdivision17
It turns out I have really decent hearing. So it's unlikely to be tinnitus; at
worst I have a sort of hearing after-image when a tonal frequency suddenly
cuts out (and now I'm guessing that's an effect of the ear actively filtering
- I can't wait to watch that video).

But yeah, I did hear it on the dog. It's so anomalous to hear a high pitched
noise coming _from_ an ear that I was willing to question my own
hearing/sanity. The best I could manage to guess was that there was some sort
of small gas pocket leaking, which made a little bit of sense since my dog
(samoyed) had a bubble on his ear (that eventually gave him a sadly adorable
floppy ear when it eventually drained - prolly an aural hematoma); the bump
never actually seemed to deflate due to the noise, though. At the time I was
an early teen just sort of beginning to learn analytical techniques, so that
reasoning was the best I could muster. These days I tend to trust/understand
my senses more.

------
roberthahn
Interesting article. Thanks for posting, varunsrin! I've been following the
development of SoundFocus for a while.

I'm profoundly deaf. This is a technical term classifying the degree of
hearing loss; to give you a sense of where this fits, the typical
classification range is mild, moderate, severe, profound, total.

Between a combination of hearing aids and lip-reading, I've done a reasonable
job of integrating into a hearing society. Not perfect, but ok.

I've often wished for a different approach to correcting hearing. It
crystallized for me after I read this article by Jon Udell:
[http://blog.jonudell.net/2014/12/09/why-shouting-wont-
help-y...](http://blog.jonudell.net/2014/12/09/why-shouting-wont-help-you-
talk-to-a-person-with-hearing-loss/)

In that article, what Jon found was that his mom would hear best if you spoke
at a low to medium volume close to her ear - this worked better than any
shouting at a greater distance could accomplish.

And it should be easy for you to simulate - get a friend to talk to you from
50' away - you can still hear them, but there's some detail loss that wouldn't
happen if they're 3' away.

I still benefit - a lot - from MBC, but if someone could come up with a way to
make the incoming sound sound as if it were right beside me, man, that would
really help me understand people clearly.

One non-technical solution that people use to ensure that deaf people can
understand them clearly is to enunciate consonants audibly. An example of this
is the word "red" \- it becomes "erREDdead". I don't know if there's a name
for this so I can't point you to a page describing how to extra-enunciate all
the letters. As useful as it is, people speaking to me like that always makes
me feel like I'm dumb, because they sound dumb saying it. Clearly I have
issues :-)

~~~
72deluxe
I don't think you have issues. I wonder if they sound dumb because of the
difference to "normal" speech, where we typically say words in a daft manner
to young children or those learning to speak? We associate them with reduced
capabilities (in a sense) because they are children and are still learning.
That isn't meant offensively, more that we know the child needs to learn?

But if it helps you hear, I think it's great!

That article you linked to was interesting. Thanks.

------
kabouseng
My wife is an audiologist, and she and her colleagues found this excellent.
One suggestion, if I may (actually one my wife made): where you have the
SoundCloud files demonstrating MBC, add one demonstrating what a person with
hearing loss would hear, before the one with MBC applied.

That way a person can judge the improvement that MBC gives to a person with
hearing loss, instead of just judging the reduction of quality to that of a
person with perfect hearing.

But again, excellent article!

------
dghughes
My mother as a teenager listened to her transistor radio all the time; it was
a new thing when she was young. She held it up to her ear with the volume
turned up very loud. Now she suffers from fairly profound hearing loss, but
only in a specific high range; she can hear low bass normally.

If you talk to her and then turn on a tap to get a glass of water, the
conversation is over; the fridge motor comes on, conversation over. Any
non-verbal sound is noise that obscures all words to her. "What?" is the
response to nearly everything anyone says, which has to be repeated twice
except in a dead silent room. She listens to the TV at level 20, and it's
very draining to everyone around her.

But she won't get a hearing aid! She's 70 years old but refuses to even
discuss it. It's odd: if you tell a person who can't see that they may need
glasses, it's OK, but if you tell a person who is hard of hearing that they
may need a hearing aid, it's like you said the most obnoxious thing you could
ever say to anyone.

~~~
72deluxe
"very draining to everyone" hahaha that's made me chuckle. Thanks! I think
you're right about "it's like you said the most obnoxious thing ever" hahaha

Is there a way of highlighting how often she says "what?" like a tally chart?
I would try that with my mum to get a message across. She'd likely be deeply
offended, but I think the message would get across.

~~~
dghughes
I would have to ask my dad, but when I am there I swear it happens every time
I say something, except in an incredibly quiet room, so tracking isn't
necessary; from my perspective it always occurs.

It's probably partly due to hearing loss and partly just habit: I'll say
something like "Are you going to go for your walk now or later?", she says
"What?", I repeat "Are you go..", and she interrupts as if she knew all along
what I had said. Ugh! Although context helps in many situations.

I also find she mistakenly interrupts conversations between other people: if
they are discussing something, she breaks in with a new topic since she can't
hear them. She also has a hard time with conversation flow, as on a phone
conference call where everyone interrupts each other because they can't
follow the flow of the conversation.

------
gwern
For people who have hearing loss & run Linux: if you have your audiogram and
an idea of how your hearing loss varies by frequency, you can try doing
selective boosts by frequency through a PulseAudio filter. I discuss it a bit
in
[https://plus.google.com/103530621949492999968/posts/32qSkcQP...](https://plus.google.com/103530621949492999968/posts/32qSkcQPbmQ)
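For readers curious what such a filter does, the idea can be sketched without
PulseAudio at all: apply a per-band gain derived from an audiogram. A crude
FFT-domain sketch (NumPy assumed; the audiogram values here are invented for
illustration, and this is nothing like a clinically fitted filter):

```python
import numpy as np

def boost_by_audiogram(x, fs, audiogram):
    """Boost each frequency band by the loss (in dB) an audiogram reports.
    `audiogram` maps a band's upper edge in Hz to the loss in dB.
    Crude FFT-domain EQ; real hearing-aid fitting is far more careful."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    gain = np.ones_like(freqs)
    lower = 0.0
    for upper, loss_db in sorted(audiogram.items()):
        band = (freqs >= lower) & (freqs < upper)
        gain[band] = 10 ** (loss_db / 20)  # dB -> linear amplitude
        lower = upper
    return np.fft.irfft(spectrum * gain, n=len(x))

# Hypothetical audiogram: no loss below 2 kHz, 20 dB loss from 2-4 kHz
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 3000 * t)            # a tone in the damaged band
y = boost_by_audiogram(x, fs, {2000: 0.0, 4001: 20.0})
print(round(max(abs(y)) / max(abs(x)), 1))  # 20 dB boost = 10x amplitude
```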

~~~
jkot
Thank you! This is pretty useful for testing on grandma ☺

------
zenocon
"Why can't I just turn up the volume on my iPhone?" is something I ask myself
every day while shaking my fist toward Cupertino. Seriously, the gain on the
phone is severely limited. Try listening to a voice call on speakerphone in
even a moderately quiet environment with just a wee bit of ambient noise. It
is
maddening that I can't get any more volume out of this device without
jailbreaking it.

~~~
darklajid
I'm not trying to pull your leg here, but ..

why would you try to use the speaker if there's noise around you? I mean, why
wouldn't you - like - put it up against your ear instead?

~~~
ics
Any scenario where you'd want to use the speakerphone has little to do with
noise. Sometimes I want it while driving because I don't have a headset (and
don't plan on getting one, since I don't own a car and drive rarely). Or in
the shop while drilling, soldering, or whatever. Or outside working on
something. It's about using the phone while doing something else, often
something that you couldn't do with something blocking one ear or dangling
anything from your head. Then of course there's the case where multiple people
are listening/speaking through the same phone.

~~~
soperj
Seriously.. just don't talk while driving. It's dangerous for everyone.

~~~
ics
I would love to not talk while driving. I would also like to not listen while
driving. In fact, I would like to not drive while driving! That being said,
I'm much less concerned about spending a minute talking to a dispatcher than
the guy/gal flying down I-95 while breaking up with someone over the phone.
(Speaking of which, at least with a phone you can hang up... but some
passengers, man...)

Edit: I would also posit that in some cases, a short phone call can actually
do a great deal to remove distraction if it is a settleable matter. The
conversation you have in your head while driving may be just as bad as the one
you'd have on the phone, except that it may dwell longer.

~~~
72deluxe
Very true about the conversation in your head. You can sometimes drive great
distances and not remember any part of the journey, which is worrying in case
you missed dangerous road conditions etc.

This is particularly true if you have a "lot on your plate".

Perhaps putting on an irritating radio station would help?

------
marquis
While I appreciate the article, having hearing loss is not like losing context
of an image such as not being able to see the bear on the tricycle. It's more
like the image is fuzzy and depending on the factor of loss, it might be a
bear or it might just be some fuzz:

[http://i.imgur.com/vKn7oTf.png](http://i.imgur.com/vKn7oTf.png)

Audio compression, especially when using psychoacoustic principles, helps by
lowering the level of the unwanted sounds, e.g. "probably not a human voice"
or "not a bear" in this case, and by increasing certain frequencies for a
person's particular hearing range so they can "see" the image better.

~~~
dghughes
I'd agree the fuzzy analogy is better, since a hearing-impaired person may
think they understand when they don't, and are sometimes totally wrong.

I recall reading that vowels are easier to hear than consonants or maybe it is
vice versa? "Hello how are you today" may seem like "Hll hw r tdy" which to
the hearing impaired person may seem like "How am I tidy?" or something
totally incomprehensible but their brain makes up something close
(incorrectly) by filling in the blanks.

The Monty Python sketch "I'd like to buy a hearing aid" feels like what I go
through daily when trying to communicate with my mother.

I showed it to my mother, who thought it was funny. Sometimes when she thinks
she knows what I said but it's not even close, it's just like the sketch.

[https://www.youtube.com/watch?v=T7UqhDs8zj4](https://www.youtube.com/watch?v=T7UqhDs8zj4)

------
pizza
This article would be great if they replaced their frequency-domain 'dynamic
range' terminology with the standard word for it, bandwidth.

~~~
korethr
Thank you. This was the word I was looking for, but it escaped me when I
wrote my own comment.

------
anigbrowl
_What’s the solution? Multi-Band Compression (MBC), a technique that’s been
used by the $6 billion hearing aid industry to solve this specific problem.

An MBC uses intelligent design instead of a one-size-fits-all method. With the
right data about your hearing pattern, it can mash the full sound into your
range so that you get all the information you need._

Audio engineer here. That is patently untrue. MBC is a super-useful technique
and is indeed helpful for mitigating hearing loss in relatively transparent
fashion, but it does not and cannot bring sounds from outside someone's
audible hearing range back within it. It will dynamically rebalance incoming
audio in inverse proportion to the degree of hearing loss within a set of
frequency ranges, but many kinds of sensorineural hearing loss involve the
death of cilia cells (the tiny hairs that vibrate at particular frequencies,
much like the bins of an FFT), which can result in a total loss of perception
at or above certain frequencies.

[http://en.wikipedia.org/wiki/Sensorineural_hearing_loss](http://en.wikipedia.org/wiki/Sensorineural_hearing_loss)

To 'mash the full sound into your range' requires a technique known as
frequency shifting, but that's problematic because it destroys the harmonic
relationships of the incoming material and sounds disorienting, at best.

In any case, I think the illustration of the bear on the tricycle is absurdly
simplistic and makes me wonder to what degree the app designers really grasp
the underlying concept. A much more appropriate parallel would have been
to show an image with a severe Gaussian blur, which more closely parallels the
actual experience of hearing loss in terms of both empirical measurement
(higher frequencies tend to be more severely attenuated in cases of induced
hearing loss) and subjective experience (blurring hinders edge detection,
which is analogous to transient detection in audio, and which has a large role
in speech intelligibility).

[http://en.wikipedia.org/wiki/Gaussian_blur](http://en.wikipedia.org/wiki/Gaussian_blur)

If you're struggling with hearing loss, then you should really, _really_
consult an audiologist, work out the basis of your hearing loss (which is
sometimes as simple as impacted earwax), and work out a treatment strategy. If
you're suffering from degenerative hearing loss then listening to overly-
compressed music could actually accelerate it, and listening on headphones or
earbuds (many of which bias the sound for increased impact) could also
contribute to the problem. It's a truism in the pro audio world that most
people are _awful_ at self-measurement and tend to over-equalize in the
absence of proper experimental control protocols.

I apologize for the rather negative tone of the post; I appreciate the people
at SoundFocus are trying to provide people with something useful and helpful
at minimal cost, by leveraging the pretty good audio hardware in their phone.
However, hearing loss tends to be a one-way thing, and I think that offering a
product to that market without a clinician on the team is a bad idea. There's
a lot more to being an 'audio ninja' than understanding the fundamentals of
DSP.

~~~
pistle
I was about to question everything until I read this. Thanks AE. My
understanding of MBC is that it's a crossover network (a mix of lowpass,
bandpass, and highpass filters) followed by compression per band within each
section of the frequency range.
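That crossover-plus-per-band-compression structure can be sketched in a few
lines. A toy static version (NumPy/SciPy assumed; band edges, threshold, and
ratio are invented for illustration; real hearing-aid MBC uses time-varying
envelopes and individually fitted gains):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def multiband_compress(x, fs, edges=(300, 2000), threshold=0.1, ratio=4.0):
    """Toy MBC: split into bands, apply one static gain per band, sum."""
    out = np.zeros_like(x)
    lower = 20.0
    for upper in list(edges) + [fs / 2 - 1]:
        sos = butter(4, [lower, upper], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)
        rms = np.sqrt(np.mean(band ** 2)) + 1e-12
        if rms > threshold:
            # Above threshold, level grows 1/ratio as fast (downward compression)
            gain = threshold * (rms / threshold) ** (1 / ratio) / rms
        else:
            gain = 1.0
        out += band * gain
        lower = upper
    return out

fs = 8000
t = np.arange(fs) / fs
loud = np.sin(2 * np.pi * 1000 * t)        # RMS ~0.71, well over threshold
tamed = multiband_compress(loud, fs)
print(np.sqrt(np.mean(tamed ** 2)) < np.sqrt(np.mean(loud ** 2)))  # True
```

Quiet bands pass through with unity gain, so soft detail survives while loud
bands are pulled down, which is the point of MBC.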

Now, I want to go test out what it would be like to 'compress' frequencies.
Something like a notch filter that shifts nearby frequencies around the target
frequency away into regions above and below. It adds noise, essentially,
within the compressed range, but maybe it's tolerable and is useful for
someone with a narrow band hearing loss. It could potentially be interesting
musically.

Maybe such a filter exists, but I am not familiar with it.

~~~
anigbrowl
If you're interested in frequency shifting, Harald Bode was the leading
engineer in this area. You can read a gentle introduction here: and if you
look around there are some VST plugins that emulate the Bode designs.

I haven't tried using this for precision stuff; over a small range it might
well improve intelligibility at the expense of only minor distortion. I tend
to reach for it when I want to give sounds an extra weird dimension; it sounds
somewhat orthogonal to the normal harmonic distributions we're familiar with.

~~~
wglb
Looks like you were going to paste a link but it didn't stick. . . .

------
larrybolt
This was an interesting read, but what's personally more interesting to me is
how to prevent hearing loss over the years.

I might be wrong on this, but I recall a hearing specialist advising me not to
use earbuds at all, or at least to limit their use to a max of 1 hour at a
time. Dynamic headphones, such as the good old Superlux HD 681, would be
"better" in the long term. (Not trying to advertise that headphone; it's just
one of the few that is cheap and good, and that I can rewire myself and even
add a plug to so I can easily buy new aux cables.)

However, I cannot wear my headphones for longer than 6 hours without them
getting annoying. And running with big headphones is a big no, but then
again, I'm nowhere near running longer than 30 minutes in a row.

Anyone have their own thoughts on this topic?

~~~
Anechoic
_I might be wrong on this, but I recall a hearing specialist advising me not
to use earbuds at all, or at least limit the use to max 1 hour at a time._

The issue isn't necessarily about all "earbuds," the issue is that many
earbuds (including the ones included with Apple iOS products) don't seal the
ear canal very well, so a listener is exposed to outside sounds in addition to
sounds from the player. Since the outside sounds have a tendency to mask the
sounds coming from the earbuds, the listener will often turn up the volume to
better hear the audio material and therefore be exposed to SPLs that can
cause hearing damage over long exposure times.

The rationale for using something like the Superlux HD 681 is that circumaural
headphones offer some (not a lot, but some) shielding from outside noise, so a
user won't be tempted to increase the volume level as much. Active noise
canceling headsets and in-ear-monitors (like Etymotics-brand) provide better
sound isolation so that users can keep the volume at more moderate levels.

~~~
bashinator
Exactly. With my Etymotics in-canal buds, I can set the volume on my music
player to about 25-30%. With the stock buds or even non-isolating over-the-ear
headphones, the same subjective loudness requires about 60-75% on the volume
control.

~~~
markrages
This is not a good measure, because different earphones have different
efficiencies.

And the type of earphone changes the efficiency as well: the sound from the
Etymotics all goes into your ear, but the others leak some sound out of your
ear.

------
Scaevolus
A spectrogram would show what MBC is doing far more directly than the waveform
plots.

------
soj
Hi folks,

To summarize what I've read so far: this promotional article about SoundFocus
was clearly not written with the help of a professional from within the
hearing aid industry, nor by someone with clinical experience. The author
appears to be good with language, probably an engineer who drops in technical
terms as if he knows what he is talking about.

I find this article very misleading and not a help to the hearing disabled or
their relatives. It reminds me of a very useful course I once followed:
'physiology of the ear for physicists'. It would be good if the author or
developers found something similar.

I realize that my post breathes some arrogance, and of course it is easy to
burn something down. But yes, I know better. And yes, I could have written an
article that would market SoundFocus properly (in a similar style, if you
like) with only useful and correct information.

Maybe I should... ?

Cheers -a professional-

~~~
soj
My bro points out I did not mention a single flaw/example. (I am busy enough
as it is)

Besides the things already mentioned by others, talking about dead regions,
upward spread of masking, perhaps temporal scatter, and tuning curves would
have made it a lot juicier.

One flaw: time and volume are NOT self-explanatory. Think about recruitment
and such. Another flaw: multi-band compression does not do what is suggested
by the picture of the bear where the bicycle is missing. That visual example
fits much better for a person with a dead region for whom frequency
compression is applied. And this is not a one-size-fits-all method; different
techniques are available for this particular phenomenon (there is frequency
shifting as well).

Anyway... let's mention a positive side. I appreciate the attempt at
communicating on the topic, and it was a good try at making things clearer
for some. Better luck next time.

Happy bro?

------
sandworm
I wish people would put half the effort into prevention that this article puts
into mitigation.

Work beside a machine humming at a particular frequency and you will lose
that frequency, even if the sound doesn't seem loud at the time. And simply
jamming in a pair of earplugs doesn't make you immune. They have limits.

------
adamnemecek
I might be wrong here, but isn't what you call "dynamic range" usually
referred to as "hearing range"? Dynamic range has a slightly different
connotation, AFAIK.

~~~
baddox
A lot of the concepts in this article appear to be deliberately simplified and
terminology deliberately abused, and I would argue too much so. Take this
paragraph for instance:

> As you look at the waveform, the problem should become apparent. Sound is a
> 3-dimensional construct, but we can only represent 2 dimensions on a
> textbook or a monitor. In the waveform representation, we see Time on the
> x-axis plotted against Volume on the y-axis.

The two axes in most waveform plots are sound pressure and time, not volume
and time. The fact that the waveform depicted is nearly symmetrical reflected
across the x axis should hint that this is the case.
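The symmetry point is easy to check numerically (NumPy assumed): the
instantaneous sound pressure of a tone averages to roughly zero, while a
loudness measure such as RMS is strictly positive:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
pressure = np.sin(2 * np.pi * 440 * t)        # signed pressure waveform

mean_pressure = pressure.mean()               # ~0: symmetric about the axis
rms_volume = np.sqrt((pressure ** 2).mean())  # always positive

print(abs(mean_pressure) < 1e-9)  # True
print(round(rms_volume, 3))       # 0.707
```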

------
dlandis
> If you know people who have hearing loss, you’ve noticed that they can’t
> tolerate loud noises that you’re fine with, but they also can’t hear some of
> things that you can hear perfectly well.

So what is the explanation of why they can't tolerate certain loud noises? I
feel like the article was going to address that aspect of hearing loss as well
but never did.

~~~
cheng1
Hearing is a range, so they have a reduced range on both sides.

~~~
dlandis
Hmm, I'm not sure you understand it any better than I do.

The article says:

> _Well, if you get hearing damage at a specific frequency, you’ll start to
> lose sensitivity to the quiet sounds at this frequency. However, your
> sensitivity to loud sounds remains the same._

If their sensitivity to loud sounds remains the same on the one hand, why
would they be unable to tolerate certain sounds on the other? Seems
contradictory.

~~~
LnxPrgr3
I don't have an exact answer, but it's a common problem:
[http://en.wikipedia.org/wiki/Hyperacusis](http://en.wikipedia.org/wiki/Hyperacusis)

Hearing is complex: you have the cochlea acting as both sensor and first pass
signal processing. There's a muscle that acts as a built-in gain control that
can cut sounds by about 20dB, partly so your own voice doesn't deafen you.

Hearing damage isn't just loss of sensitivity—apparently it can alter the
shape of cochlear filters' response too, changing the way masking works. And
I'd imagine the brain tries to compensate as hearing loss progresses, which
could have interesting effects.

More info:

[http://en.wikipedia.org/wiki/Auditory_masking](http://en.wikipedia.org/wiki/Auditory_masking)
[http://en.wikipedia.org/wiki/Acoustic_reflex](http://en.wikipedia.org/wiki/Acoustic_reflex)

Edit: Less related, but a similar phenomenon that happens to me:
[http://en.wikipedia.org/wiki/Misophonia](http://en.wikipedia.org/wiki/Misophonia).
Hearing has odd failure modes.

------
itgoon
Could you add a "hearing test" to your app, which does at least rudimentary
tuning? Call it "headphone calibration" and I bet you'd improve the listening
experience of people who don't know they are hearing impaired.

------
eccstartup
There is drift in the SoundCloud audio: the audio runs ahead of the visual
wave, so I missed the last visual beep in both the compressed and uncompressed
versions.

------
cheng1
So it's like a CPU that can't process fast enough and starts losing signals.

Just like the choppy Adobe Flash player on OS X!

------
s0rce
Spatial frequencies in the image might have been a much better analogy than
simply cropping the image.

------
danielsamuels
I really wish this app was available on OS X, I would very happily pay for it
and use it every day.

------
shreyas056
tl;dr "hearing loss is caused by an uneven or abnormal frequency response of
the ear compared to that of a normal human"

------
alphabetsoup
the music world already uses copious amounts of multiband compression, as if
all listeners have hearing loss

------
tempodox
Why is this post on top?

------
a2kadet
> "Sound expresses itself in three dimensions: time (seconds), volume
> (decibels) and frequency (Hertz)."

Is anyone else as irked by the author's choice of the word "dimensions" as I
am? I can't read past it. Wouldn't "factors" be a better fit?

~~~
jeremysmyth
No. "Dimensions" is the right word, because they are three orthogonal scales
along which musical notes can be measured[1]. The author could also have
suggested timbre and other possible dimensions, but the three stated apply to
all sound, including (importantly) sine waves, the simplest type of sound.

[1] Technically, frequency is a function of time too (and timbre a function of
the interaction of multiple frequencies and envelope changes, another function
of time) but these are all independent uses of time.

~~~
tjradcliffe
Technically there are two complementary sets of dimensions: time and amplitude
vs frequency and phase. Both are complete encodings of the waveform.
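That completeness claim is easy to verify numerically (NumPy assumed): encode
a waveform as one magnitude and phase per frequency, then invert, and the
original samples come back up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)            # an arbitrary real "waveform"

X = np.fft.rfft(x)                       # frequency-domain encoding
mags, phases = np.abs(X), np.angle(X)    # one magnitude + phase per frequency

# Rebuild the complex spectrum from magnitude and phase, then invert
x_back = np.fft.irfft(mags * np.exp(1j * phases), n=len(x))
print(np.allclose(x, x_back))  # True: both encodings are complete
```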

The article is extremely muddled from a technical point of view. When dealing
with perceptions it is _extremely_ important to distinguish physics and
physiology. In optics we have radiometric (physical) vs photometric
(perceived) values:
[https://en.wikipedia.org/wiki/Photometry_%28optics%29#Photom...](https://en.wikipedia.org/wiki/Photometry_%28optics%29#Photometric_versus_radiometric_quantities)

It appears in the article they are doing some kind of implicit averaging over
the ear's response function at each frequency, which may make sense in terms
of perceptions but makes very little sense in terms of physics.

A much better visual analog would be a blurred photograph rather than a
cropped one. "Turning up the volume" simply increases the brightness of the
image, which doesn't do a damned thing to reduce the blurring.
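The brightness-versus-blur point has a direct audio analog: amplifying a
low-pass-filtered signal raises its level but cannot restore the high
frequencies the filter removed. A toy sketch (NumPy assumed; the
moving-average "blur" and tone choices are invented for illustration):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)

# "Blur": an 8-tap moving average (applied circularly via the FFT for a
# clean comparison); it happens to null 3 kHz at this sample rate
kernel = np.fft.rfft(np.ones(8) / 8, n=len(x))
blurred = np.fft.irfft(np.fft.rfft(x) * kernel, n=len(x))
louder = blurred * 10                    # "turning up the volume"

def level_at(sig, hz):
    return np.abs(np.fft.rfft(sig))[hz]  # 1 Hz bins for a 1 s signal

# 10x gain raises everything, but the removed 3 kHz detail stays gone
print(level_at(x, 3000) > 1000)                            # True
print(level_at(louder, 3000) / level_at(x, 3000) < 0.01)   # True
```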

One thing that people with normal hearing don't get is how much information is
in the high frequencies, which are where the most loss normally occurs,
although there are also "notch" losses that happen to people whose ears are
routinely subject to loud noises in narrow bands.

We tend to think of "high frequency" sounds in terms of single notes, but in
speech the high frequencies are most important in the unvoiced consonants, the
"s" and "th" sounds and similar. Losing the high frequencies blurs the edges
of speech, often making the shape of it unrecognizable. Frequency-dependent
enhancement sharpens the edges and brings it back into useful focus.

~~~
ssalazar
> Technically there are two complementary sets of dimensions: time and
> amplitude vs frequency and phase. Both are complete encodings of the
> waveform.

Minor note, the "frequency and phase" is actually frequency and complex
amplitude, which encompasses both phase and scalar amplitude as we think of it
intuitively.

In the mathematical theory there is also provision for complex amplitude in
the time-domain, but this is rarely needed in practice (and never found in
real-world signals).

