Oh buddy, do I have a xiph rant for you. I am more or less entirely willing to chalk this up to what I shall politely call “top-down effects”.
Frequency tuning and rapid readjustment of it is the key feature of modern hearing aids and we’re just barely scratching the surface of what we can and should do.
Here’s why: presbycusis, or age-related hearing loss, comes with a cruel twist. As the threshold of audibility rises for a particular frequency, the threshold of pain lowers. Rather often, the reason people don’t use hearing aids is that they create a situation in which you can hear, but every sound is also painful!
Precise digital control of per-frequency intensity amplification can make that a lot less terrible. If it hurts to hear at 4 kHz, we can squelch that frequency but boost the 100–400 Hz range, and speech audibility should still get the boost you need to talk to your grandchildren. Throw a BLE connection to a nice multicore GPU smartphone into the mix and we can start offloading really complicated processing to the thing in your pocket.
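As a rough illustration of what per-frequency gain shaping looks like, here's a minimal FFT-based sketch in Python. The band edges and gain values are made up for illustration, and real hearing aids use low-latency filter banks rather than block FFTs:

```python
import numpy as np

def apply_band_gains(x, fs, band_gains_db):
    """Scale each frequency band by a prescribed gain in dB (illustrative only)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    gain = np.ones_like(freqs)
    for (lo, hi), g_db in band_gains_db.items():
        band = (freqs >= lo) & (freqs < hi)
        gain[band] = 10 ** (g_db / 20)
    return np.fft.irfft(X * gain, n=len(x))

fs = 16_000
t = np.arange(fs) / fs
# one second of a 300 Hz "speech" tone plus a painful 4 kHz component
x = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 4000 * t)
# squelch the region around 4 kHz, boost the 100-400 Hz speech band
y = apply_band_gains(x, fs, {(100, 400): +12, (3500, 4500): -30})
```

The dictionary of `(low, high): gain_dB` bands is a stand-in for whatever prescription the audiologist programs in.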
> Throw a BLE connection to a nice multicore GPU smartphone into the mix and we can start offloading really complicated processing to the thing in your pocket.
In principle yes, but latency is the nasty problem in this scenario. The wearer still hears some ambient sound in addition to the processed sound, and if processing latency exceeds about 10 ms (IIRC), it drives the listener nuts and they won't wear the aid. The problem with smartphones is that they have too many layers of general-purpose OS cruft sitting on top of the A/D and D/A. I've read that iOS is better than Android in this regard, but for the lowest latency you really need a dedicated realtime processor and a realtime OS.
For context, sound travels about 3.4 metres (11 feet) in 10 milliseconds. Of course, this is added to the time taken for the sound to reach the microphone.
Our brains apparently correct for this sort of lag without us noticing, but maybe (I have no idea) this correction is calibrated to perceived distance.
The obvious stopgap measure is the neck-torc approach. Offload computation to a bespoke thing and use an app for tuning, or the like.
I've seen them compensate for things like movies; I assume they send audio ahead of time and hold back the video until it's in sync.
QC35s buffer about 75 ms of ambient noise, phase-invert it, and play it back. This destructively interferes with the outside noise and cancels it. But you have to sample your environment over some time window to get it to work, so when ANC is on, there's lag. This is also why ANC only works well for slowly varying noise like airplane drone but can't handle impulsive sounds like gunshots.
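The drone-vs-gunshot point is easy to see numerically: an inverted copy of the noise played back with even a tiny lag cancels a slowly varying tone well but makes an impulse worse. A toy sketch (the ~0.1 ms lag is purely illustrative, not the QC35's actual figure):

```python
import numpy as np

fs = 48_000
lag = 5  # ~0.1 ms of processing lag at 48 kHz, purely illustrative

def residual_db(noise, lag):
    """RMS residual (in dB) after adding a delayed, phase-inverted copy."""
    anti = -np.concatenate([np.zeros(lag), noise[:-lag]])
    res = noise + anti
    return 20 * np.log10(np.sqrt(np.mean(res**2)) / np.sqrt(np.mean(noise**2)))

t = np.arange(fs) / fs
drone = np.sin(2 * np.pi * 100 * t)   # airplane-style 100 Hz drone
impulse = np.zeros(fs)
impulse[fs // 2] = 1.0                # gunshot-style click
```

For the drone the residual comes out around -20 dB (good attenuation); for the impulse the "anti-noise" just adds a second click, making things about 3 dB louder than doing nothing.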
Gahhh. This just makes me want to cry.
Look, sonny-me-lad, you might want an analogue amp in your ear that has a terrible frequency response, is prone to feedback, and will most likely destroy what frequency range you have left, but let's not blame this on a technical problem. (Don't start me on battery life.)
You like the "warm" sound of "analogue". Hand in hand with this claim goes the idea that one can somehow tell digitally recorded sound from analogue. As an ex recording engineer, I can tell you: you're full of crap. I could make a very convincing "analogue" sound using a Mackie d8b digital desk and a 24-track digital recorder.
I can tell you now that if I played a CD, housed it in a box that looked like it had tubes in it, attached a record player, and piped it into a decent amp with some NS-10s, it would sound great. To convince you, of course, I'd need to clip off the top frequencies (8 kHz+) and add some clicks and hiss (raise the noise floor); you'd then be espousing the wonders of its "warm analogue sound".
Why have digital hearing aids conquered the world? Because they are better. One can steer the microphones, they use less power, the frequency response is trivially tuned to your hearing range, there are limiters in there to stop damage, and both the speaker and the microphone can be calibrated.
You can add a directional handheld mic to point at people so you can hear them talking. You can add background noise reduction trivially.
None of this can be done with frankly shitty analogue hearing aids.
Don't fuck up other people's lives with your nonsense.
You know, you may have a product idea there. Not so much for hearing aids, but as a standalone device.
Being able to audibly isolate a speaker in a room full of other speakers would be a boon for his case (and I would presume many others). Anyone on HN working in this area?
A quite interesting further development of these is this paper, which uses brain activity to determine which particular speaker the listener is trying to hear, and then optimises for that.
I expect to see full medical devices that do this in the next decade or so.
DeLiang Wang's group at Ohio State is doing great stuff for speech extraction in complex environments, as are some of my colleagues in other universities in collaboration with Starkey.
It’s coming along!
Here are some examples of new research with neural networks: https://www.youtube.com/watch?v=FMEk8cHF-OA and https://www.youtube.com/watch?v=zL6ltnSKf9k
It's no coincidence that louder often means spending more on drinks and quicker table turnovers. So the noisy places are out-competing the quieter places.
Sad. I find between half and 75% of places are unacceptably loud during normal peak hours. Fortunately Google displays how popular a place is at different times (it often varies by business), so I can go off-peak.
That said, I had a classmate with a hearing impairment. His device had a tiny wireless microphone that preferentially fed his hearing aid (i.e., he could hear other stuff too, but the mic was loudest). The teacher usually wore it, but it would be given to you if you were working with him on something. It was certainly a little clunky, but it did seem to help him a lot. Perhaps something similar could work for your dad?
Unfortunately, I have no idea what the device was called, and it was a long time ago, but maybe this is enough of a clue for his audiologist.
I’ll pass that tip along to him. Thx!
Alternatively, let me know and I can walk you through the process.
Yes, directionality is an issue. But the brain's post-processing ability has also dwindled.
So I guess getting a hearing aid sooner might be a good approach?
1) Hearing loss in specific volume/frequency notches instead of loss strictly at the high/low end (sometimes referred to as "hidden hearing loss", which is more a statement about the capabilities of routine screenings than it is a statement about the nature of the hearing loss)
2) Deficits in attention regulation, which are seen in numerous conditions (most obviously ADHD, but also autism spectrum, PTSD, traumatic brain injury, schizophrenia and "schizophrenia-like" conditions, and some forms of dementia)
It’s actually hard to separate the two. On the one hand, the hearing aid might not faithfully reproduce cues that let you separate different sources. On the other, “higher” brain areas can modulate early sensory areas (maybe even the cochlea) to make sounds relevant to your current behaviors more salient while attenuating irrelevant signals.
The big picture is that, sadly, we don't really have any way to objectively measure hearing quality and its improvement due to hearing aids. We can push buttons to indicate whether we hear tones or not. We can count understood/misunderstood words. But the results don't have a clear relationship with real-life hearing situations. For example, in my case, tinnitus appears in a silent room and interferes with testing. And sometimes I hear better, sometimes worse.
Unlike vision, it's very hard and time-consuming to test with any repeatable/reproducible patterns, or to accurately explain a problem with sound quality to another person.
Also, the hearing aid market is very tightly controlled by an oligopoly of manufacturers and doctors. There is no way to self-adjust them. I'd very much buy a user-programmable BTE package capable of the 80 dB gain I need at some frequencies, but there is plainly no such thing available without strings attached. I have played with jackd under Linux too; I don't remember clearly, but it was a problem to set the equalizer with 60 dB differences.
It's also infuriating how sub-par the accessories market is. I have a ComPilot Air II (Bluetooth add-on) that lets my hearing aids connect to my phone, car, computer, etc. It's great, but at $300 I really expected it to work for more than a couple of hours... It also has to be clipped within 12 inches of BOTH hearing aids? My phone communicates over Bluetooth across the house to my computer... Is there really no way around having to clip this dongle on the neck of my shirt? It doesn't bother me now, but damn. People in meetings and in public constantly think I'm recording them or something. Never mind the disgusted look I've gotten at a urinal. It's just... not ideal.
If you're interested, this place should get you started, or let me know and I'll assist you. https://forum.hearingtracker.com/c/hearing-aid-self-fitting-...
And you're going to need a wireless programmer, too. Unfortunately, you're a little out of luck, since Widex does not use the industry-standard hardware but its own proprietary programmer, the Widex Pro Link: https://forum.hearingtracker.com/t/widex-beyond-programming/..., which seems to be available on eBay.
There is a one-word description of that sentence, and the word is "bullshit." It might be the case that most digital hearing aids today do not reproduce sound as faithfully as older analog ones. But it is always possible to build a digital signal processing path that is audibly indistinguishable from an analog one in a double-blind test. It might cost more or require better engineers than those who currently design hearing aids, but blaming the problem on "digital technology" is utter nonsense.
- Improved battery life.
- Better tailoring to specific hearing loss (e.g. certain frequencies).
- Telecoil compatibility.
- Improved feedback control and better safety features.
The entire article could be boiled down to an argument that "analogue is natural, digital is artificial." It compares digital hearing aids to MP3s. The problem is, there's nothing inherently natural about the way analogue audio works, and it can manipulate and alter the sound profile just like digital (including compression and artifacting, particularly when space-constrained or dealing with cross-interference).
Just because a technology is older doesn't automatically make it more "pure." If they argued for a way to configure a hearing aid or remove artificial filtering, that's fine (I agree), but to suggest that digital and analogue audio fundamentally work differently from a perceptual perspective isn't really factual.
The good point though is the one about the industry lacking participation from people with hearing loss themselves. The "nothing about us without us" argument. Plus a sub point about the US healthcare system being expensive and not actually consumer friendly.
(Hearing aid wearer who works for a digital audio company here! But I get mine from the NHS, so I pay nothing and expect no choice. They work well but not perfectly; the multiple-sources problem is still big.)
That's just false and shows how little the author understands about audio systems. The microphone and driver in a hearing aid (and their interaction with the ear canal) alone will change the sound drastically, regardless of whether the signal processing between the two is analog or digital.
They're wrong. Vinyl is back because it's hipster and other reasons, and it's in absolutely no way better.
In order for analog to be better, either:
1) The (analog) filters on the hearing aids are bad. Not unique to digital.
2) You can hear above 22 kHz. I doubt it, mate. No offense, but you're not young anymore, so that's very unlikely even within an already rare population.
3) The math is wrong. Fourier analysis shows that digital doesn't "approximate" the frequencies of analog; it completely contains the exact same information.
"It turns the world into mp3": digital quality is not inherently bad, though some models might have issues, of course.
- Digital hearing aids are smaller, lower power and easier to configure than analog hearing aids (think "fits in your ear canal" vs. "you wear the original iPod on your belt")
- The benefits of full-analog audio are contentious at best and almost certainly imperceptible for people with hearing loss
If you want further convincing that A->D->A conversion is perfectly fine, https://xiph.org/video/vid1.shtml is excellent.
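The central claim there is that a band-limited signal sampled above its Nyquist rate can be reconstructed exactly between the samples, not merely approximated. A quick numerical sketch using truncated Whittaker-Shannon (sinc) interpolation; the truncation of the infinite sum is why the error is merely tiny rather than zero:

```python
import numpy as np

fs = 44_100                           # CD sample rate
n = 2_205                             # 50 ms of samples
ts = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * ts)     # band-limited 1 kHz "analog" signal

def reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation at an arbitrary time t."""
    k = np.arange(len(samples))
    return float(np.sum(samples * np.sinc(t * fs - k)))

t_mid = 0.025 + 0.37 / fs             # a point in between two samples
err = abs(reconstruct(x, fs, t_mid) - np.sin(2 * np.pi * 1000 * t_mid))
# the samples contain the waveform between them too, so err is tiny
```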
Why do writers so often accuse experts of "hubris" while arguing that their own poorly informed opinion is superior to the well-informed opinion of said experts?
That said, most digital systems don't make it easy (certainly not easy enough to be relevant to a casual user) to obtain the many noise / filtering / distortion effects you get with an analog setup, and that's a legitimate complaint. While the audiophile scene is rife with placebo effects and woo, I've long wondered if some UX lessons couldn't be salvaged. Could hiss+pop+lowpass+distortion+hardware_pickiness settings be packaged into a form that would make people willing to consume them in a digital context?
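For what it's worth, those "analogue character" knobs are straightforward to fake in a digital chain. A toy sketch of such a coloration effect; all parameter values here are invented, and a real product would need far more careful modeling:

```python
import numpy as np

def analogify(x, fs, cutoff=8_000, drive=2.0, hiss_db=-50, seed=0):
    """Toy 'warmth' chain: lowpass roll-off + soft clipping + tape hiss."""
    a = np.exp(-2 * np.pi * cutoff / fs)       # one-pole lowpass coefficient
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1 - a) * s + a * acc            # roll off the top end
        y[i] = acc
    y = np.tanh(drive * y) / np.tanh(drive)    # gentle tube-style saturation
    rng = np.random.default_rng(seed)
    y += 10 ** (hiss_db / 20) * rng.standard_normal(len(y))  # raise noise floor
    return y
```

Each stage maps onto one of the usual "analogue" traits: the lowpass rolls off the highs, the tanh adds harmonic distortion, and the added noise is the hiss. Packaging these as user-facing presets is exactly the kind of UX salvage the paragraph above is wondering about.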
Is the purpose of this article anything other than to generate clicks from hipsters who will click on anything that paints digital in a negative light?
Duh no. And this article does a disservice to itself, as it looks like there could be a real issue of fidelity in the digital chain. Bluetooth is known to cause issues (lag, bandwidth variations, …), and the processing chain might be problematic:
> the digital processor samples incoming sound at a rate far lower than that of an old CD player
I don't know if there's any truth to it, but it should certainly be investigated and if true fixed. It might also be that the perceptual models used in these hearing aids are not correct for, well, hearing aids.
Though some of the complaining could be a question of habit: the wearer's old hearing aids had a coloration they'd gotten more and more used to, so the different profile of the new ones is jarring even if it might be more correct.
22 kHz sampling isn't a problem in practice because:
- HAs focus on speech comprehension, not music. Speech doesn't have much content above 5 kHz, so even 22 kHz is more than double the Nyquist rate.
- Music is perfectly enjoyable at 22 kHz (though obviously better at 44 kHz).
- People getting hearing aids have hearing loss, and the majority of hearing loss is in the high frequencies.
People often tell stories about playing a sound in a lab etc. that they think they heard above 20 kHz, but most likely what happened is that some inevitable non-linearity in their playback chain acted as a mixer, creating frequencies well within the easily audible range.
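That mixer effect is easy to demonstrate: run two purely ultrasonic tones through a mild quadratic non-linearity and a difference tone pops out in the audible range. A sketch (the 10% second-order term is an arbitrary stand-in for whatever distortion a real playback chain has):

```python
import numpy as np

fs = 192_000
t = np.arange(fs) / fs
f1, f2 = 24_000, 27_000                  # both above the audible range
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.1 * x**2                       # mild non-linearity in the chain

spec = np.abs(np.fft.rfft(y)) / (fs / 2)
freqs = np.fft.rfftfreq(fs, 1 / fs)
# the quadratic term mixes f1 and f2 down to f2 - f1 = 3 kHz, easily audible
audible = spec[np.argmin(np.abs(freqs - (f2 - f1)))]
```

The linear part of the chain leaves nothing below 20 kHz; the squared term alone produces the 3 kHz component you'd actually hear.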