>That seemed incredibly expensive. I haven’t done much electronics, but digital signal processing doesn’t seem that expensive?
The cost is in R&D, not just for human trials but for DSP engineer salaries. It's an extremely specialized skillset and the companies that hire for it look for PhDs most of the time.
In particular, optimizing adaptive filtering to run in dozens of cycles per buffer on fixed-point DSPs to maximize battery life, while providing feedback reduction, noise cancellation, workarounds for the infamous "cocktail party" problem, and modern features like Bluetooth... all with audio fidelity on par with serious audiophile gear.
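To make the fixed-point constraint concrete, here's a minimal sketch of one step of LMS adaptive filtering in Q15 arithmetic, the kind of primitive feedback cancellers are built on. Everything here is illustrative only; real hearing-aid firmware is hand-optimized for dedicated fixed-point DSP cores, not written in Python:

```python
# Minimal sketch of one LMS adaptive-filter step in Q15 fixed point.
# Illustrative only - not how production hearing-aid firmware is written.

Q15 = 1 << 15  # scale: 1.0 is represented as 32768

def to_q15(x):
    """Convert a float in [-1, 1) to a saturated Q15 integer."""
    return max(-Q15, min(Q15 - 1, int(round(x * Q15))))

def q15_mul(a, b):
    """16x16 -> 32-bit multiply, shifted back down to Q15."""
    return (a * b) >> 15

def lms_step(weights, history, desired, mu_q15):
    """One sample of least-mean-squares adaptation:
    y = w . x ; e = d - y ; w += mu * e * x  (all in Q15)."""
    y = 0
    for w, x in zip(weights, history):
        y += q15_mul(w, x)
    y = max(-Q15, min(Q15 - 1, y))  # saturate the accumulator
    e = desired - y
    for i, x in enumerate(history):
        weights[i] += q15_mul(q15_mul(mu_q15, e), x)
    return y, e
```

Run over a stream of samples, the weights converge toward the unknown path (in a feedback canceller, "desired" would be the mic signal containing the feedback); the hard engineering is doing this in a handful of cycles per tap.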
My understanding is that there are also some other market forces too like very long lifecycle management and low volumes for new products, but idk how real that is.
Point is, I don't think those devices are really overpriced - at least no more than any other medical device paid for by insurance companies.
I'm reading some of their documents and a few things jump out at me. They mention supporting FFT/IFFTs on the platform. That stands out, since doing FFTs on a battery-powered device is unusual: firstly because they aren't cheap in cycles, and secondly because useful FFTs require rather large buffers of memory, which hurts your latency and may come at a premium. When I've seen them required (e.g., in a codec), it was typically on a dedicated FPGA with the algorithm burned in, not running on the DSP.
That chip is doing graphic equalisation, multi-band dynamic range compression, noise cancellation, feedback cancellation, data logging and wireless communication with an absolute maximum power dissipation of 50mW. It's a $100 chip for good reason.
A big part of why hearing aids haven't faced much disruption, as I understand it, is their qualification as a medical device. A major competitor of my current employer had a policy of many years that they would not do hearing aids, in spite of extensive audio and audio processing experience internally. The reason? Amar Bose wanted nothing to do with medical device certifications.
All but one - low power. A typical pair of true wireless headphones runs for three hours on a charge; a typical hearing aid runs for a week on a zinc-air cell. Any Turing-complete processor will do whatever kind of DSP you need it to do, but the trick is doing it incredibly efficiently.
Even that part is becoming less true. I've seen multiple datasheets in the last year that promise worst-case FFT/IFFT performance at 2mA. Measured performance has been less than a quarter of that.
Easier said than done, though.
Do you know of other hearing aid options cheap enough to not require being covered by insurance?
How many people have hearing aids, though? Seems like the market for this would be much smaller than other things and therefore the price higher.
Sure, it's simple economics, but in this case it is too simple.
It turns out that the hearing aid market has inelastic demand [2]. In 2010, market penetration for hearing aids in the U.S. was around 24% (8.2 million users). Amlani estimated that with a complete subsidy of hearing aids, market penetration would only increase to 34% (11.2 million users) [1].
Combined with the lifespan of 5-8 years, that is quite a small scale for an ASIC. (Apple will sell 40+ million AirPods this year alone.)
[1]: Amlani, A. M. (2010). Will government subsidies increase the US hearing aid market penetration rate? Audiology Today, 22(2), 40-46.
[2]: Lee, K. & Lotz, P. (1998). Noise and silence in the hearing instrument industry. Working Paper, Department of Industrial Economics & Strategy, Copenhagen Business School.
That's an interesting use of the word 'only': you're looking at a roughly 30% increase, and at that scale this would not have a huge effect on the price of the chip itself, because once the development and start-up costs have been borne the rest is marginal. These chips probably cost < $2 to produce even at this quantity.
What you seem to forget is that once it is worth doing an ASIC that is pretty much proof that the economies of scale are there. The very rare cases where an ASIC is still expensive is when they are top of the line in switching speed, density or pin count and these devices have none of that.
A ~thousand to several thousand dollar product with a market penetration of 24% (low) becomes completely free, and the market penetration shifts to 34% (still low). I think 'only' is justified.
> once the development and start-up costs have been borne the rest is marginal
The NRE costs have not been fully borne until the product is EOL. They are distributed across the price of each unit. My hypothesis is that this chip has a sufficiently low volume that the NRE cost per chip is substantial.
> What you seem to forget is that once it is worth doing an ASIC that is pretty much proof that the economies of scale are there
When you have strict power requirements (i.e. need a battery life of a month), it can still make sense to go for an ASIC even with relatively low volume. Add to this an inelastic demand curve (i.e. you will sell the same number independent of price), and there isn't a compelling reason to try to do it with a DSP or FPGA.
> These chips probably cost < $2 to produce even at this quantity
If we assume that OnSemi could design this chip for $10M, then they would have to sell 5M of them to have your proposed unit cost of < $2 (assuming a wafer cost of zero, which is obviously wrong). I would guess that $10M is a lowball for the total development cost, and that 5M is way optimistic for volume (that would pretty much require this random chip is in 100% of hearing aids sold in the U.S. in the last few years). They've probably sold an order of magnitude less than that.
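As a back-of-envelope check on those numbers (both the $10M development cost and the < $2 unit cost are assumptions from the comments above, not published figures):

```python
# Back-of-envelope NRE amortization, using numbers assumed in this thread:
# a $10M total development cost, and the parent's "< $2 per chip" claim.
nre_cost = 10_000_000                        # assumed development cost, USD
target_unit_cost = 2.0                       # parent comment's claim

units_needed = nre_cost / target_unit_cost   # ignores wafer/packaging cost
# -> 5,000,000 units just to amortize the NRE down to $2/chip

# At an order of magnitude lower volume, NRE dominates the unit cost:
volume = 500_000
nre_per_chip = nre_cost / volume             # -> $20 of NRE per chip
print(units_needed, nre_per_chip)
```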
Maybe I'm wrong here, but it isn't as obvious to me as it apparently is to you.
Whilst I wish the OS hearing aid project every success, what we really need is open source implants - the pricing is even more insane than for hearing aid technology, and the technology is relatively well understood. The existing, certified implants could be used, but with an OS/DIY processor - circumventing many of the potential certification issues.
I went to bed with perfect hearing and woke up with 90% hearing loss in one ear (sudden nerve deafness). Some of it has come back, but the gain is not flat across frequencies. It is incredibly disorienting. I was almost hit by a car the other day because I now look in the opposite direction when I hear a car accelerate. Music sounds weird and I no longer enjoy listening to it, and the loss limits my ability to be witty in casual conversation since I now often second-guess what was said.
I know a hearing aid would help but I don't want anyone to know I have this problem. Hopefully my brain can just recalibrate the frequencies especially since I have one pretty normal ear.
Regarding your suggestion: I believe a good cochlear implant has about 22 channels, while a healthy human ear can discern 300,000+ frequencies. This is why cochlear implants are only given to totally deaf patients. The technology is nowhere near where it would need to be for use on patients with partial hearing loss.
He does not wish to have his body hostage to any company, and I can understand that.
And most are under $10.
This is absolutely not true. I would bet that > 50% of all devices shipping with a digital signal processor are doing FFT/IFFT at some point.
> useful FFTs require rather large buffers of memory
Loads of realtime FIR filtering is implemented with overlap-add or overlap-save - i.e. block-based FFT/IFFT.
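For anyone unfamiliar, here's a sketch of overlap-add in Python/numpy: each block is zero-padded to the linear-convolution length, FFT-multiplied by the filter spectrum, and the overlapping tails are summed. A hearing aid would do this in fixed point on a DSP, but the structure is the same:

```python
# Overlap-add block convolution: filter a long signal with an FIR filter
# by FFT-multiplying fixed-size blocks. Sketch only; realtime DSP code
# would do this in fixed point with in-place transforms.
import numpy as np

def overlap_add_fir(x, h, block_len=64):
    m = len(h)
    n_fft = block_len + m - 1              # linear-convolution length per block
    H = np.fft.rfft(h, n_fft)              # filter spectrum, computed once
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        Y = np.fft.rfft(block, n_fft) * H  # pointwise multiply == convolution
        seg = np.fft.irfft(Y, n_fft)
        end = min(start + n_fft, len(y))
        y[start:end] += seg[:end - start]  # overlapping tails add up
    return y[:len(x)]
```

The memory-vs-latency trade-off mentioned upthread is visible here: a bigger `block_len` amortizes the FFT cost better, but you can't emit the first output sample until a whole block has arrived.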
However, I agree with the OP that it does have a fairly large number of shortcomings:
-- The number of man-years that have been spent on commercial products is huge, and they provide a highly optimised solution in a competitive domain. My late grandfather had, in the 1970s, a hearing aid with feedback reduction powered by an analogue computer (aka "electronics") that hung around his neck.
-- A lot of the tuning parameters of these algorithms really are hardware specific and would require quite a lot of tuning /iteration
-- At the end of the day, a Teensy is a moderately large rectangular board that will not fit behind your ear, has nonzero power requirements, and is a general-purpose CPU. A 3D-printed case is an expensive way of making a plastic box to put it in. If you were going to go down the open hardware route, you'd start somewhere very different: power-efficient dedicated DSP units on a small, thick multilayer board milled to be a bit more ergonomic. A modern hearing aid needs a new battery every month or so, and is powered by a 0.54 g, 1.4 V, 180 mAh battery (a 4x6 mm cylinder [h x d]). You're not going to get anything like that with a general-purpose CPU.
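A rough power budget follows from the cell figures above (1.4 V, 180 mAh); the ~16 hours of wear per day is my assumption:

```python
# Rough power budget from a 1.4 V, 180 mAh zinc-air cell, assuming
# ~16 hours of wear per day (the hours figure is an assumption).
capacity_mah = 180.0
voltage_v = 1.4
hours_per_day = 16

for days in (7, 30):  # a week vs. a month per battery
    avg_current_ma = capacity_mah / (days * hours_per_day)
    avg_power_mw = avg_current_ma * voltage_v
    print(f"{days} days -> {avg_current_ma:.2f} mA avg, {avg_power_mw:.2f} mW avg")
```

Either way, the average budget is a couple of milliwatts at most, which is why the 50 mW figure mentioned upthread is an absolute maximum dissipation, not a typical draw.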
Still, this is a fun project, and I commend people doing it. As ever with anything to do with the USA and healthcare products, however, I can't help but think that their efforts would be better spent trying to get universal healthcare. The cost to the NHS for two hearing aids, multiple fitting appointments included, is around £400.
And in terms of battery life, I don't know. I do know that most battery-powered audio devices with DSP throw floating-point math out the window from the get-go, and I haven't seen a job opening for DSP in hearing aids that didn't mention fixed-point math in a while. I don't know of any processors that fit the bill there, however; those things usually have a proprietary IDE/debugger/flasher you need to pay for.
Traditional aids are aimed at (1) old people who can't handle more than 4 buttons on a TV remote, who (2) are willing to accept whatever they're given and put up with inconveniences, and (3) don't notice their audiologist and aid vendor are kinda in cahoots to their detriment.
There's all kinds of interesting things aid users might want to experiment with, once an audiologist gives them a profile of their hearing loss. Aids have programs that help for different situations, like crowded room, quiet conversation, etc, and the user may want to adjust those settings themselves.
The Apple AirPods with the "live listen" feature on iPhones (and iPads) is already being used by some people as a substitute hearing aid. You can use "live listen" with an actual hearing aid as well, but I've seen people using it with just AirPods and they seem to find it quite helpful. What's missing is the software to do the audio processing specific to that person's hearing loss. Plus overcoming the regulatory burden.
I've got a set of Bose Hearphones, about $500, and which use the Bose Hear app. You can modify treble and bass but that's about it; even so if you are hard of hearing then the default iPhone sound profiles can be used to fine tune the in-ear audio and they work well as a low cost hearing aid. Same deal with my AirPods. Today Tim Cook could wave a magic wand and literally cure deafness for hard of hearing iPhone + AirPod users; ain't happening.
No question Bose Hearphones with the Bose Hear app, or AirPods with a complementary iPhone app, could easily perform spectral analysis with an easy-to-use equalizer to boost and/or cut specific areas of the auditory spectrum. This would be ideal for tinnitus patients and would without question put the entire hearing aid industry out of business. Current-generation hearing aids are about $15 worth of analog parts selling for thousands of dollars each, just by virtue of having invested in some horseshit FDA regulatory process, when infinitely more capable technology has been available to hearing-impaired individuals for well over two decades at this point.
This is not complex, and what sucks about tinnitus is that it affects each person differently, which in turn requires specific auditory tuning for each individual. But companies such as Bose or Apple, which have the technology and more than sufficient computational horsepower to replace hearing aids, simply refuse to do so, for reasons that are likely FDA-regulatory in nature.
On a side note as to OP, $300 for a Teensy-based hardware platform with BT stack is f'ing outrageous, probably worse than what the actual hearing aid vendors are ripping off.
I use a site called MyNoise to play nature sounds to drown out background noise (neighbors, traffic). The thing that sets MyNoise apart from the other ones I've tried is that it has a EQ you can tune, and specific instructions for tuning it: for each frequency range you find the lowest volume where you can still hear it. The result is a curve matched to your hearing curve -- the tuning process accounts for hearing loss and tinnitus. Now, if that curve could be combined with the "live listen" / Bose Hear... then you got yourself a hearing aid, using hardware you already own!
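A sketch of what that tuning step computes, with made-up band centers and thresholds: boost each band relative to the listener's most sensitive one. (Real fitting formulas like NAL-NL2 are far more sophisticated; this is only to illustrate the idea.)

```python
# Hypothetical sketch of the MyNoise-style tuning step: per-band audibility
# thresholds (the lowest volume at which you can still hear each band)
# turned into relative EQ gains. All the numbers here are made up.
bands_hz = [125, 250, 500, 1000, 2000, 4000, 8000]
threshold_db = [10, 8, 12, 15, 25, 40, 45]   # hypothetical per-band thresholds

best = min(threshold_db)                     # the most sensitive band
gains_db = [t - best for t in threshold_db]  # boost each band relative to it
print(list(zip(bands_hz, gains_db)))
```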
Had a relative who tried that with an app. The problem is that most folks still assume that if you've got headphones in your ears, you're listening to something else (not them.) Reactions varied from impatience (people who thought he was impolite for refusing to remove his headphones to focus on them) to anger (refusing to interact with him until he removed the headphones.)
You probably don't want to give hearing-impaired people another barrier in communication.
1. As others have mentioned in this thread, people are reluctant to talk to people wearing earphones. Although perhaps this is less of a problem than the stigma of "hearing aids".
2. Earphones don't usually contain microphones. If you want directionality, the mikes have to be located in or on the ear, not on your phone. Some earphones do contain microphones, and they are the place to start.
3. iOS has pretty low latency in its audio channel, but Android is (or was) terrible. There were so many layers of software in the Android audio path that there was no way to process live sound without so much delay that a hearing aid wearer hearing both live and processed sound together would just hear mush. I read somewhere that Android has fixed this, but I don't know.
4. Bluetooth induces some latency into the live audio path as well. I don't know how much.
The 2000-buck ones are the type with a dozen advanced effects, a Bluetooth connection so they double as music-player headphones, and adjustability with a remote or an app.
So, people in the US might find it cheaper to order European aids or smuggle something from Canada. (BTW, afaik plane tickets cost way less if you buy them in the airport just before the flight.)
Also, personally I'd be wary of twiddling the params of a hearing aid since I'm not an audiologist nor any kind of doctor at all and don't know if my choice would cause more damage in the long term. And if audiologists mostly reside at hearing aid dealerships then I'm not sure they will just give you a chart of your hearing so you can walk home and tune your aid by it.
Ah, and azalemeth's comment reminds me: if the board just has jacks for headphones then you can't even use the hearing charts without also knowing the frequency response for the headphones.
The technology is probably not the most expensive part of high end hearing aids. It's the service.
Glasses are more like a hearing aid that just makes everything louder. Which will work in most cases. But many people need or want more.
The problem is, it's not good to put too much sound into the ears.
Why is a human tester needed? I'm not seeing any reason that couldn't be automated.
Firstly, people already have to be told to not play music too loud with headphones. And secondly, when you buy an aid, the audiologist tells you that you won't be comfortable with it for some time until you get used to it—even though the aid is supposedly tuned to the exact profile of your hearing loss. People aren't good at getting used to uncomfortable things without adjusting them to their short-term liking.
On top of that, audio engineers, musicians and graphic artists know that it's difficult to do fine adjustments of audio or graphics for long, because the senses become tired and ‘burned out’ after a while and you don't see or hear the same (even just five to ten minutes is enough sometimes). Novices are likely to be unfamiliar with these effects, have less stamina for them, and be unable to counteract them without overcompensating.
Being able to hear even every other word would improve quality of life of many people. Probably one in three words is the acceptable limit of "good enough" where suddenly they are worthwhile to hassle with. My grandparents now hate family gatherings because they might understand only one in five sentences spoken directly to them because of hearing problems and the quality of the hearing aids they're able to afford with their insurance.
I'm curious if the Tympan solution has done this testing? What is the perceived downside to not doing human testing on what is presumably going to be a hobby project?
Those two powerful partners are working on relaxing the laws some so that small companies/teams could create something like the Tympan.
I'm still not sure if the Tympan is a "human use approved hearing aid" or if it's just for research purposes. Hopefully I'll have discovered that before I publish part 2.