Deep Learning enables hearing aid wearers to pick out a voice in a crowded room (ieee.org)
188 points by WheelsAtLarge on March 10, 2017 | 59 comments



> The greatest frustration among potential users is that a hearing aid cannot distinguish between, for example, a voice and the sound of a passing car if those sounds occur at the same time. The device cranks up the volume on both, creating an incoherent din.

It may be a simplification of the article that I'm misinterpreting, but as someone who got a hearing aid in early 2016, that's not how (modern) hearing aids work.

I got my hearing tested, which enabled a frequency-response plot of my hearing loss to be made (my hearing at low frequencies is fine; at higher frequencies I have moderate loss). My hearing aid is then tuned to match the inverse of that plot (i.e. boost the volume of high frequencies, leave the low frequencies alone).

You don't actually want an HA that arbitrarily boosts 'speech', since that won't be matched to your needs and has unintended side effects (like music sounding overly harsh/bright) because unneeded frequencies are being boosted or suppressed.
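For the curious, the fitting boils down to something like this (a toy numpy sketch, not my audiologist's actual software; the audiogram numbers and the half-gain rule here are purely illustrative assumptions):

    import numpy as np

    # Illustrative audiogram: hearing thresholds in dB HL at standard frequencies.
    # Low frequencies fine; moderate-to-severe loss at high frequencies.
    audiogram_freqs = np.array([250, 500, 1000, 2000, 4000, 8000])   # Hz
    audiogram_loss  = np.array([  5,  10,   15,   40,   55,   60])   # dB HL

    def fit_gain(loss_db):
        """Half-gain rule: prescribe roughly half the measured loss as gain."""
        return 0.5 * loss_db

    def apply_hearing_aid(signal, fs):
        """Boost each frequency region according to the inverse of the audiogram."""
        spectrum = np.fft.rfft(signal)
        bin_freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        # Interpolate the prescribed gain (in dB) across all FFT bins.
        gain_db = np.interp(bin_freqs, audiogram_freqs, fit_gain(audiogram_loss))
        spectrum *= 10 ** (gain_db / 20.0)          # dB -> linear amplitude
        return np.fft.irfft(spectrum, n=len(signal))

    fs = 16000
    t = np.arange(fs) / fs
    mixed = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 4000 * t)
    aided = apply_hearing_aid(mixed, fs)   # the 4 kHz component comes out ~27 dB louder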

-- On a tangent, after I got my new HAs, I complained to the audiologist that they didn't sound very good. Everything sounded far too crisp. She pointed out that having lived with hearing loss for 5-6 years, I actually had almost no idea what something should sound like since my brain had got used to a world with muted high frequency sounds.

That blows my mind ... a bit like how do you know the color green is green. Maybe it's purple, but you have been told by someone else that it's green.

After a few weeks, my brain re-learnt what sound should sound like and now it sounds 'normal' with HA in. Without HA, everything is a little more muffled (as you would expect) and I really notice how much I used to struggle understanding people (I believe my untreated hearing loss contributed to me losing my job a couple of years ago).

Hearing aids have changed my quality of life (at age 40).


>Hearing aids have changed my quality of life (at age 40).

I've had mine for 6 months. I'm 63 now. They have changed my life as well. I still have some tinniness, but the tech has been adjusting the curve and other factors each time I visit her, and this last time, a few weeks ago, the sound quality got much better.

There are still places where they don't work or get overwhelmed by background noise, like at a live basketball or football game, but I can live with that. I'm now missing only a few pieces of most conversations, and music sounds better as well. I also have audio dyslexia, so that accounts for some inability to perceive parts of some conversations. The HAs don't help with that.

One place they work for me that has some background noise is in my car. I listen to tech podcasts during my commute and most times don't have any issues unless the podcast has problems with too much dynamic range. My car (a 2014 Honda Civic) does a decent job blocking some road noise.

I was very resistant to getting hearing aids until I read "Hearing Loss Linked to Accelerated Brain Tissue Loss" [1], and that did it. I'm trying to take reasonable steps with diet, exercise, and nutrients to prevent or delay dementia. Like most of us here on HN, I make a living with my brain, and my quality of life would suffer if I couldn't think through complex problems.

At Thanksgiving I was talking to my father-in-law about my hearing aids, and he was also resistant. He got his first pair a month ago and is extremely happy. He is 81 years old.

My doctor sells hearing aids but actually recommended that I use one of the big-box stores' services. With a 12-month warranty you can't really go wrong, and the price was about half of what my doctor could sell them for.

[1] http://www.hopkinsmedicine.org/news/media/releases/hearing_l...


The deeply robust elder Deaf community is itself evidence that the brain tissue loss described in the article is probably not due to the hearing loss itself. I have never heard of a culturally Deaf individual experiencing this kind of degeneration of the brain. If I had to hazard a hypothesis, it would be on the social end of things, such as social isolation causing decreased function, similar to the social contexts that encourage addiction.

In other words, when one loses hearing, the culture around them fails to accommodate it, leading to increased anxiety, stress, and other kinds of undesirable outcomes that are shown to impact health. It is immeasurably tragic in many ways that this phenomenon is being used to sell hearing aids and more anxiety.


>It is immeasurably tragic in many ways that this phenomenon is being used to sell hearing aids and more anxiety.

I think it is plausible that areas of the brain that are used for processing sound atrophy when used less. I'm not experienced with the deaf community so can't comment on that aspect of your comment.

I came upon the article on my own. The article was not used to sell me hearing aids. But it helped me get over the stigma of having hearing aids, so for that I am grateful for having read it and having it help my motivation. Having hearing aids for 6 months now has been valuable beyond description.


Good point; maybe it's the sound-processing areas. A greater question is whether areas of the brain atrophy at all due to fewer stimuli coming through a specific organ.

All the evidence I've seen points to brain plasticity: areas used for specific things get re-mapped to other, similar functions. For example, Deaf individuals experience a strong inner voice that activates the same centers in the brain as actual voices.


It depends on how the ear was damaged. The cochlea has two types of hair cells: inner and outer. The outer hair cells amplify the incoming sounds by vibrating in tune with them, and the inner hair cells actually pass the signals along to your brain.

If the outer hair cells get damaged, your ear can still perceive all the frequencies; they just aren't amplified enough. In that case, you're correct that boosting the frequencies back up according to your audiogram (with some nonlinear compression to account for loudness recruitment) can bring back your hearing. The same holds if you have a conductive loss (poor transmission of sounds between your eardrum and your cochlea).

On the other hand, if the inner hair cells get damaged, you can no longer hear at the frequencies corresponding with the hair cells that were damaged. The same holds if you damage the connections between the inner hair cells and the nerves, or if you damage the nerves themselves (in which case you may have normal hearing thresholds but still have trouble hearing). In these cases, even if you amplify to match the loss, you can't bring back normal hearing; hence the need for signal processing to make the best of what you have left.


I'm 35 and I've had HAs for 20 years now. My hearing loss is quite severe (without aids I don't understand face-to-face conversation unless the other person is close to shouting).

Modern devices employ another trick: they "compress" frequencies you hear less well into a part of the spectrum that's not as damaged (this is mostly done if your hearing loss affects the frequencies used for speech).

The side effect is that music I knew from before my hearing loss got so severe sounds strange now, but the aids also have a "music" mode.
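For anyone curious, that frequency-lowering trick is roughly the following (a toy numpy sketch assuming a simple linear remapping above a cutoff; real devices use proprietary, more sophisticated schemes):

    import numpy as np

    def frequency_compress(signal, fs, cutoff_hz=2000, ratio=2.0):
        """Map spectral content above cutoff_hz into a narrower band just above it.

        E.g. with ratio=2, energy at 2-8 kHz lands in 2-5 kHz, where the
        (hypothetical) wearer still has usable hearing.
        """
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        out = np.zeros_like(spectrum)

        below = freqs <= cutoff_hz
        out[below] = spectrum[below]                 # leave the healthy band alone

        for i in np.flatnonzero(~below):             # remap each high-frequency bin
            target_freq = cutoff_hz + (freqs[i] - cutoff_hz) / ratio
            j = int(round(target_freq * len(signal) / fs))
            out[j] += spectrum[i]                    # energy piles up in the compressed band

        return np.fft.irfft(out, n=len(signal))

    fs = 16000
    compressed = frequency_compress(np.random.randn(fs), fs)

The piling-up of remapped energy is also a rough intuition for why familiar music can sound strange afterwards.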


I got a new pair of Starkey HAs recently that have a separate processor for music. The music mode turns off compression completely. Even in normal mode, it appears to me that the music processor is being used, because music sounds so much better. I recently had a pair of Altec speakers that I've owned since the early 70s rebuilt, and was listening to Pandora via Chromecast Audio. The song Spill the Wine by War came on, and I could actually hear the strings of the bass guitar vibrate. Before I had these new HAs, bass guitar just sounded like vague, low notes in the background. I was amazed and overjoyed.


I probably need a hearing aid, but I am very reluctant to go for testing because the last time I had a test (very long ago and very far from Canada, I admit) I thought it was incredibly imprecise: they asked me to press a button when I could hear a sound, and I was absolutely unsure whether I had really heard something or just imagined it. Is it still so subjective?


Ah, I see.. interesting interpretation. It's a shame they didn't explain the process to you better.

This is how psychometric testing works. It's inherently difficult because in order to estimate the point of subjective "loss", which we call the "just noticeable difference" (or JND), one has to sample more in the area of the variable (amplitude, frequency, etc) that is more difficult for you to distinguish. Consequently, one will always walk away from such an experiment with an impression of having "guessed" and being really not sure if you gave the right answers. But that's because they're trying to estimate exactly that: they're trying to find the point at which you really aren't sure whether you hear something or not.

Basically this: if you guessed perfectly every time that you heard something, that would be a 100% recognition rate. If you always said with 100% certainty that you didn't hear anything, that would be a 0% recognition rate. So logically, the point of hearing loss occurs somewhere between those two extremes.

In order to determine more precisely where, the procedure has to "zoom in" on the point at which you answer correctly 50% of the time, a bit like a binary search. (Or sometimes they want 75% of the time, etc.) In any case, to do so, they need to sample the probability of you answering correctly or incorrectly in that region. This sketches out a probability curve, and then they can fit that curve and figure out the 50% or 75% point on the curve.

They'll sample using either a constant spacing method, random sampling, or a staircase method that adaptively moves towards the 50% point. The latter is more efficient, in the sense that it requires fewer answers from you, so that is what is often used in practice. However, by its nature it is also much more frustrating for the patient, because it will be sampling much more frequently in the region where you are "not sure" of the answer.
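If it helps, here is a toy simulation of a simple 1-up/1-down staircase converging on the 50% point (not a clinical protocol; the simulated listener and all numbers are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    TRUE_THRESHOLD_DB = 35.0          # the listener's (unknown) 50% point

    def listener_hears(level_db):
        """Toy psychometric function: probability of 'yes' rises smoothly
        through 50% at the true threshold."""
        p_yes = 1.0 / (1.0 + np.exp(-(level_db - TRUE_THRESHOLD_DB) / 3.0))
        return rng.random() < p_yes

    def staircase(start_db=60.0, step_db=2.0, n_trials=60):
        """1-up/1-down staircase: quieter after a hit, louder after a miss.
        This homes in on the level heard about 50% of the time."""
        level = start_db
        reversals, last_response = [], None
        for _ in range(n_trials):
            heard = listener_hears(level)
            if last_response is not None and heard != last_response:
                reversals.append(level)           # direction changed: note the level
            level += -step_db if heard else step_db
            last_response = heard
        # Threshold estimate: average level at the last few reversals.
        return np.mean(reversals[-6:])

    print(f"estimated threshold: {staircase():.1f} dB (true: {TRUE_THRESHOLD_DB} dB)")

Note how, once the staircase is near the threshold, almost every trial lands in the "not sure" region: that is exactly the frustration described above.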

I'm really sorry they didn't explain this stuff to you, and allowed you to walk away thinking it was a badly done experiment!


Thanks for that fantastic explanation! It wasn't explained to me either when I had my hearing tests - I just assumed they were doing multiple tests to get an 'average' answer.


Moderately deaf Brit here. The NHS used this technique on me a year ago when I got my hearing aid. Whether it is imprecise or not I don't know, but it generated a graph plot showing, for each ear, a plot of my hearing response against a range of different frequencies, which was annotated with the position of specific phonemes - so I could see which vowels, etc were hardest to hear. This seemed to match my experience, although I didn't test it rigorously.

If you are reluctant about getting a hearing aid, I would totally totally recommend it. It turned my social and professional life around - I was beginning to avoid conversation with certain softly-spoken people and couldn't follow arguments in conference rooms that had any sort of noisy aircon. As a Brit, I have access to free NHS hearing aids and consumables (batteries and the tiny tubes that go into the ear), which helps. I had been planning to buy a smaller in-ear device, but in fact the external device is acceptable in terms of size and appearance (silvery grey).

The device I use has an external control button (a tiny stud) which controls volume but which could alternatively be programmed to trigger different modes (e.g. noisy room vs quiet room). The fact that I almost never have to change the settings also suggests that the initial test (which is programmed into the hearing aid) was somewhat accurate.


You might want to reconsider. http://www.hopkinsmedicine.org/news/media/releases/hearing_l...

It's been great for me. I've had mine for 6 months. They are not perfect, but they are a big improvement. See my other comment above.


Yup - it's still done by that method. Although they do run through multiple frequencies multiple times to try and get consistency.

I know what you mean though ... especially if you suffer a bit from tinnitus, it's sometimes really difficult to tell whether you're hearing a high-frequency test tone or just the tinnitus 'noise'.


You might be interested in https://hearingtest.online/ -- it's obviously not medical quality, but with a decent pair of headphones should help you to see what sort of issues you might have.

I'll re-iterate the point other commenters have made, though: even if you're not entirely sure you heard a sound, press the button. It's all useful data: if you're pushing the button during a high-frequency part of the test when there's actually no sound, that's a sign things are not entirely right. And if you only push the button when there is a sound, but don't push it half the time the sound is played, that's also useful data.


> That blows my mind ... a bit like how do you know the color green is green. Maybe it's purple, but you have been told by someone else that it's green.

You may be interested in the "the map is not the territory" idea. "Green" is not a property of an object but of an observer, though "emits light at wavelength N" is a property of an object.


Also qualia and the knowledge argument:

https://en.m.wikipedia.org/wiki/Knowledge_argument


Though I don't understand why it's in principle impossible for Mary to deduce what colour feels like to a standard human. Just because humans are not smart enough to model a being's internal state completely, given just the total physical information, doesn't mean it's impossible.


Mary supposedly illustrates the difference between knowing (or modelling) and experiencing.


"Green" is not a property of an object but of an observer,

But "Green" happens to be a property of other large families of objects -- especially animate objects (foliage, certain insects, birds, and fish) and, more rarely, certain inanimate but nonetheless "special" objects in nature (features in geology; the sky at certain times; and of course, rainbows).

So in that sense -- while "Green" by itself doesn't seem to have intrinsic properties besides an association with a certain band of the electromagnetic spectrum -- it does have a strong (extrinsic) association with objects which do have interesting intrinsic properties.


I don't really understand your point. "Green" is a label we apply to things which fall into a certain category: namely those which under ordinary circumstances emit or reflect light of a certain wavelength. The observer-dependent things here are:

- the definition of the category (which varies depending on what "ordinary circumstances" are for the observer: people draw the boundaries of light-wavelengths differently), and

- making the judgement "this object is in/is absent from the Green category" for any given object (since our information is imperfect, and [for instance] we may only ever see an object while it is bathed in blue light).

My post was mainly intended to say "there's no paradox involved if you experience green objects differently to me: the word 'green' corresponds to a quale which isn't an inherent property of things in the universe, but an artefact of our experience". Additionally, in this comment, I point out that 'green' can indicate different qualia to different people anyway.


I've gone through periods where I didn't wear glasses for 6 months to a year, and every time I would get new glasses, I had exactly that sensation of everything looking too crisp, because I had gotten used to everything looking fuzzy.


>Everything sounded far too crisp.

That's the way I felt when I started wearing glasses at age 33. Everything was suddenly in HD.


This approach surprised me. Why are they doing feature extraction and then feeding that into a DNN? It seems much more straightforward to have the input of the network be noisy samples and the output be clean samples a la super resolution[0] in images. They probably wouldn't want to use fully-connected layers in that instance, but I don't see any fundamental barriers if they have enough computational power to run a neural network already. Am I missing something?

[0] https://arxiv.org/pdf/1603.08155.pdf


The filter bands they're talking about are Bark bands (https://en.wikipedia.org/wiki/Bark_scale) and are actually representative of the way the ear perceives loudness. In a traditional hearing aid, you might have a compressor for each of these Bark bands to counteract the effects of loudness recruitment (http://www.sens.com/helps/helps_d03.htm).
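For reference, a static per-band compressor boils down to something like this (a toy sketch using the Zwicker & Terhardt approximation of the Bark scale; the knee, ratio, and level values are made up for illustration, not any device's actual fitting):

    import numpy as np

    def hz_to_bark(f_hz):
        """Zwicker & Terhardt approximation of the Bark scale."""
        return 13.0 * np.arctan(0.00076 * f_hz) + 3.5 * np.arctan((f_hz / 7500.0) ** 2)

    def compress_band(level_db, knee_db=50.0, ratio=3.0):
        """Above the knee, each extra dB of input yields only 1/ratio dB of output,
        squeezing a wide input range into a reduced dynamic range (the point of
        counteracting loudness recruitment)."""
        over = np.maximum(level_db - knee_db, 0.0)
        return level_db - over * (1.0 - 1.0 / ratio)

    # Per-band input levels for one frame (values are illustrative).
    freqs = np.array([250, 500, 1000, 2000, 4000, 8000])
    bands = np.floor(hz_to_bark(freqs)).astype(int)      # Bark band index per centre frequency
    levels_in = np.array([45.0, 55.0, 70.0, 80.0, 65.0, 60.0])
    levels_out = compress_band(levels_in)
    for b, f, li, lo in zip(bands, freqs, levels_in, levels_out):
        print(f"band {b:2d} ({f:5d} Hz): {li:.0f} dB in -> {lo:.1f} dB out")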


That might work, although I think there are two limitations:

1) Hearing aids have a 10ms latency budget. So no matter how much processing they can do, they're limited by how many samples they can look ahead and that limits the design of the filters. The brain can presumably look ahead further to separate sound streams so I think it's pretty impressive that ideal binary masking works.

2) Hearing aids have a power budget. The ones I've looked at achieve low power by running a FIR filter in hardware to shape the sound while a DSP classifies the sound and adjusts the filter taps. The DSP doesn't have to run at the same rate as the filter. That seems well matched to the binary filter approach. Likewise, feature extraction might not run at the same rate as the DNN.
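For context, the ideal binary mask mentioned in (1) is just a per-time-frequency-unit keep/discard decision made with oracle knowledge of the clean speech and the noise; the hard part is estimating it from the mixture alone. A toy numpy/scipy sketch (the 0 dB local-SNR criterion is the conventional choice; the signals here are placeholders):

    import numpy as np
    from scipy.signal import stft, istft

    def ideal_binary_mask(speech, noise, fs, nperseg=256, lc_db=0.0):
        """Keep a time-frequency unit if its local SNR exceeds lc_db, else zero it.
        Requires oracle access to the separate speech and noise signals, which is
        why the real problem is estimating this mask from the mixture alone."""
        _, _, S = stft(speech, fs, nperseg=nperseg)
        _, _, N = stft(noise,  fs, nperseg=nperseg)
        local_snr_db = 20 * np.log10(np.abs(S) + 1e-12) - 20 * np.log10(np.abs(N) + 1e-12)
        return (local_snr_db > lc_db).astype(float)

    def apply_mask(mixture, mask, fs, nperseg=256):
        _, _, X = stft(mixture, fs, nperseg=nperseg)
        _, cleaned = istft(X * mask, fs, nperseg=nperseg)
        return cleaned

    # Toy example: a speech-band tone buried in white noise.
    fs = 16000
    t = np.arange(fs) / fs
    speech = np.sin(2 * np.pi * 300 * t)
    noise = 0.5 * np.random.randn(fs)
    mask = ideal_binary_mask(speech, noise, fs)
    enhanced = apply_mask(speech + noise, mask, fs)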


The latency and power issues can probably be fixed, assuming a good end-to-end model, by distilling it into a wide, shallow net that uses low-precision or even binary operations. I don't know if that would be enough - we've seen multiple order-of-magnitude decreases in compute requirements (think of style transfer going from hours on top-end Titan GPUs to realtime on mobile phones), but the usual target is mobile smartphones, which at least have a GPU, while it seems unlikely any hearing aids will have GPUs anytime soon... I suppose a good enough squashed low-precision model could be turned into an ASIC.
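For what it's worth, the distillation objective itself is simple. A toy numpy sketch of the standard soft-target loss (Hinton-style), independent of whatever a hearing-aid DSP would actually run; all shapes and values are illustrative:

    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Match the teacher's temperature-softened output distribution,
        plus ordinary cross-entropy on the hard labels."""
        p_teacher = softmax(teacher_logits, T)
        p_student = softmax(student_logits, T)
        soft = -np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1).mean() * T * T
        hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
        return alpha * soft + (1 - alpha) * hard

    rng = np.random.default_rng(0)
    teacher = rng.normal(size=(32, 10))            # big-model logits over 10 classes
    student = rng.normal(size=(32, 10))            # small-model logits
    labels = rng.integers(0, 10, size=32)
    loss = distillation_loss(student, teacher, labels)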


Not to detract from your larger point but AFAIK the style transfer thing is different. If you're willing to hardcode the style into the net you can go realtime, but the original style transfer paper is able to do different styles without retraining. So they're different algorithms. Unless the SOTA has changed recently.


You shouldn't need to hardcode the style if you provide the style as an additional datapoint for it to condition on. But this doesn't really matter since for fun mobile applications it's fine to pick from 20 or 50 pretrained styles, and likewise for hearing aids.


This is the paper they submitted: http://web.cse.ohio-state.edu/~dwang/papers/CWYWH.jasa16.pdf

I'm not an expert in the hearing field, but it makes sense to me: they probably know that some filters work (and are actually used) for hearing, and feeding in that preprocessed data would save them training and hyperparameter-tuning time.


The only things I've seen run on raw audio are WaveNet models, and those are way too expensive to get onto an embedded chip; no public real-time implementations exist, though Baidu had a paper last week claiming real-time execution speed on a server-class Intel CPU. They do mention that their CPU implementation could be parallelized too.


This sounds like something that could have a potential use for non-hearing-impaired people who have sensory overload issues (e.g. autism).


Is deep learning useful for noise-cancellation headphones/earphones? It's extremely hard to design great noise cancelling; only a very few do it well, and hence the prices are very high. If deep learning can reduce costs and increase competition here, I think this sector could really grow.


Good read! Is it possible to know which hearing aid brand has this? It's a pity that the HA manufacturers are so secretive about what their buzzwords mean. It's impossible to get a good comparison of the features they offer.


How is this method better than independent component analysis?


For one thing, independent component analysis needs to process signals from as many microphones as there are sources to work properly.


This is true for vanilla ICA.

Independent Subspace ICA models can be applied when there are fewer microphones than sources. It's also possible to use different decomposition methods, or to subtract already-detected signals.
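To illustrate the vanilla case, here is a toy scikit-learn sketch with exactly as many mixture channels (microphones) as sources; the sources and the mixing matrix are made up for illustration:

    import numpy as np
    from sklearn.decomposition import FastICA

    n = 2000
    t = np.linspace(0, 8, n)

    # Two independent, non-Gaussian sources (as ICA requires).
    s1 = np.sin(2 * np.pi * 5 * t)
    s2 = np.sign(np.sin(2 * np.pi * 3 * t))
    S = np.c_[s1, s2]

    # Two microphones, each recording a different mixture of the two sources.
    A = np.array([[1.0, 0.5],
                  [0.7, 1.0]])                     # illustrative mixing matrix
    X = S @ A.T                                    # shape (n_samples, n_mics)

    # With 2 mics and 2 sources, FastICA recovers the sources (up to scale/order).
    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)

    # Drop one microphone and the problem becomes underdetermined:
    # vanilla ICA cannot recover two sources from X[:, :1] alone.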


Really happy to have this guy as our neural nets professor at Ohio State :)


I can't pick out a voice in a crowded room, or indeed separate speech from any sort of background noise. In ideal listening conditions I miss words making sentences not make sense, and often I don't realize someone has started talking until I've already missed the first sentence. However I don't have any physical hearing problem. Each time I've gotten my hearing tested I've been told my tonal hearing is perfectly normal. Yet the problems I have with picking out and understanding speech are absolutely debilitating, and I can't get anyone to understand that it is a disability and that it is real.

At my insistence my audiologist administered a speech processing test, but I was nonplussed to discover this test is completely unrealistic and did not at all match the situations I have trouble with. The way it worked was that it would mix a perfectly clear speech track with white noise, or a repetitive loop of background speech or cafeteria noise. But since the sound streams were mixed together so artificially, my brain could separate the audio streams based on source track, words or no. And since the "interruption" loops were repetitive, my brain could learn the pattern and discount it. So of course I passed that test, too. The speech processing problems I have occur in real environments when executing functions of daily life.

In the end the audiologist told me that maybe my problem is that I have ADHD and that my attention isn't able to lock-on or stay with a conversation well. He didn't know of anyone in my area who treated adults with ADHD, but promised to send me a referral. I'm guessing he never found anyone, because that referral never came. However it eventually led to me getting diagnosed and treated for ADHD on my own. (Although it took almost 2 years to even get an appointment.) I've found that getting a diagnosis and medication for ADHD has improved my life immensely. However it has not helped with the original problem; I still can't separate speech from other noises.

I resonate with the commenter who says he thinks his undiagnosed (physical) hearing loss once contributed to him losing a job. At work, I find excuses to hide/disconnect my phone because I have so much trouble making out what people are saying over a phone. I use chat and IM, and write everything down or ask for things written down. Still, sometimes I'll miss or not understand some verbal instruction and get in trouble. It also causes relationship problems - so many misunderstandings, misheard words, doing the opposite of what my spouse asked or not realizing she said something to me. I avoid some social activities because I know that background noise there will prevent me from participating, or because mishearing people might lead to a social gaffe or a dangerous misinterpretation of safety instructions.

I can't read lips to get by, either - whatever it is in my brain that affects speech processing affects lip reading equally, if not worse, and sometimes when I'm receiving the "all circuits are down" message from my speech centers, I can't even understand someone's sentence no matter how many times they repeat it. But if they write it down on a note I can understand it. In a way, it's like the inverse of dyslexia.

Anyway, not that I have much hope of an answer, but anyone know where I can go to talk about it or what kind of doctor would be actually interested and not just brush this off?


The tech starts 1/3 of the way down the article (ctrl+f "clean speech").

> My lab was the first, in 2001, to design such a filter, which labels sound streams as dominated by either speech or noise. With this filter, we would later develop a machine-learning program that separates speech from other sounds based on a few distinguishing features, such as amplitude (loudness), harmonic structure (the particular arrangement of tones), and onset (when a particular sound begins relative to others).

> Next, we trained the deep neural network to use these 85 attributes to distinguish speech from noise.

> One important refinement along the way was to build a second deep neural network that would be fed by the first one and fine-tune its results. While that first network had focused on labeling attributes within each individual time-frequency unit, the second network would examine the attributes of several units near a particular one

> Even people with normal hearing were able to better understand noisy sentences, which means our program could someday help far more people than we originally anticipated

> There are, of course, limits to the program’s abilities. For example, in our samples, the type of noise that obscured speech was still quite similar to the type of noise the program had been trained to classify. To function in real life, a program will need to quickly learn to filter out many types of noise, including types different from the ones it has already encountered

oh
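For what it's worth, here is a very rough sketch of the two-stage idea in those excerpts, with random features and toy labels standing in for the paper's 85 attributes and ideal-binary-mask targets (nothing here is the authors' actual implementation):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical setup: X1 holds per-time-frequency-unit features, y holds
    # the speech/noise labels (ideal binary mask) used for training.
    n_units, n_features = 5000, 85
    rng = np.random.default_rng(0)
    X1 = rng.normal(size=(n_units, n_features))
    y = (X1[:, 0] + 0.3 * rng.normal(size=n_units) > 0).astype(int)   # toy labels

    # Stage 1: label each unit as speech-dominated (1) or noise-dominated (0).
    net1 = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
    net1.fit(X1, y)
    p1 = net1.predict_proba(X1)[:, 1]

    # Stage 2: refine each unit's label using its neighbours' stage-1 outputs
    # (a +/-2 unit context window; wrap-around at the edges is ignored here).
    ctx = 2
    X2 = np.stack([np.roll(p1, k) for k in range(-ctx, ctx + 1)], axis=1)
    net2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    net2.fit(X2, y)
    mask = net2.predict(X2)       # the estimated binary mask applied to the mixture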


As someone with a cochlear implant who lives with the consequences of overly clever programmers who thought they'd "help" by filtering out noise and volume and whatever else... I really wish they wouldn't. This is a technology that makes me so angry some days that I sometimes wonder if it was worth getting implanted, even though I know it was.


This is something I do wonder about in this context. I don't have a CI myself, but my 6-year-old son does, and I am somewhat concerned that he is, or might be, experiencing partial sound "blindness" (meaning: sure, speech processing is adequate, but there are surely some things that get processed away). I have a fair amount of experience in music/sound-recording environments and it makes me somewhat sad for him that he's still "missing out" (although obviously this is outweighed by the fact that he can actually hear and communicate now, but I'm sure you get what I mean).

I'd get into the area myself if I were in any way useful with DSP code or C++...

May I ask, were your hearing issues (leading to the CI) a recent thing, or long-term? My main interest here is about using machine learning to assist people who do not know sign-language to understand signers rather than to "improve" the actual hearing process (because - personally - my S/L skills are abysmal).


Interesting idea on using ML to help non-signers understand sign language. In this thread's context, the ML is designed to help people hear better. In visual contexts (which sign language lives in) would this hypothetical ML help low vision or blind people see better?

Because people with 20/20 vision just need a sign language dictionary handy and some patience.


> Interesting idea on using ML to help non-signers understand sign language.

The dream for me is something like Google Glass with an app that can subtitle spoken, written, and signed language.

> Because people with 20/20 vision just need a sign language dictionary handy and some patience.

I would think a LOT of patience... the easiest way at that point would just be to have the other person fingerspell or write what they're saying; if you're watching something where that's not possible, then the dictionary will just be an exercise in frustration.


> and some patience.

Yeah, well that would be one way of handling it, but unfortunately the real world has terrible issues with not impeding my progress on that front. Not that I'm anti-learning, at all, but - personally - I'm fighting a losing battle against learning German, Swiss-German and Swiss-German Sign Language whilst also being a walking-talking English lesson :D

Taking the slow way, with dictionary in hand, is as you point out, an exercise in frustration (especially if the talker/signer is 6 years old).

Yes, I share your dream of something google-glass-like that can add subtitles. There are people working on this (mostly in the UAE, if memory serves). Interesting times ahead - hopefully I won't have to wait long, otherwise I'll have to do it myself and that really would take a while ;)


Long term; I have progressive sensorineural hearing loss that was noticed when I was about two and had reached "profound" levels by the time I was about ten. Implanted in my left ear at 16.

I actually made the decision to be a part of the normal public school system and never learned sign language, partially out of stubbornness, so I can't really help with SL-related questions, though I'd be more than glad to answer hearing/CI-related questions.


Thanks for the offer, but I actually have access to plenty of CI wearers (son's at a deaf school; lots have CIs... go figure ;) )


Which one have you got? I know what you mean with the "helpful" bullshit. My conventional hearing aid, on my left ear, has this "smart" mode where it tries to detect speech vs noise, and change the volume or the directionality of the microphone to compensate. You end up with this wildly fluctuating volume all the time where it feels like stationary objects are coming at you. I had them turn that feature off asap.

On the other hand, my cochlear implant (right side), has a directional microphone that's actually incredibly useful in noisy situations. Combined with the directional mic on the hearing aid, I can actually hear almost as well as a normal person in a crowded bar, after 20+ years of avoiding them because of how impossible they were to cope with.

I strongly recommend it if you can get it - the Nucleus Freedom 6. I'm saving my pennies up to get a second one.


I have the Advanced Bionics Harmony BTE. Since my implant is AB, I wouldn't be able to get the Nucleus Freedom 6.

I have an in-ear mic, which does wonders for reducing surrounding noises and also for letting me use a phone normally, but my main issue is with the software itself; I've had issues with it since implantation and they've always been pooh-poohed by audiologists at Hopkins, Tokyo University, and Toranomon. The biggest problem is that it seems to operate on some kind of averaging system -- when there's a noise that's louder than the recent average, everything just cuts out for a few seconds. This is especially noticeable in the morning, where I've just woken up and am trying to get to work, but there are cars and trains etc. making noise and making my hearing cut in and out constantly, which not only drives me up the wall but gives me a terrible headache.


Yep, that's exactly the thing I'm talking about. You'd think they could hire one deaf person at their labs to road test the things, but...

On my hearing aid it's a "feature" that can be turned off. Too bad you're stuck with it.


I have had so many problems with the implant in general that are just brushed off as "well, you're unusual." It won't even stay on my head without me putting a few extra magnets on the headpiece.


Weird, it sounds defective. Have they tried replacing the unit?


I'm on my 4th or 5th BTE now (since 2002), so... yeah. :)


That definitely sucks. Thanks for the heads-up not to get one of those, though!


> overly clever programmers who thought they'd "help" by filtering out noise and volume and whatever else

I used to work at a CI manufacturer. Just thought you should know that this isn't what happens: all the features are developed by researchers or experts before even getting to the engineers, and go through clinical trials (usually several) to prove their effectiveness. They don't, and can't, add new sound processing on a whim to be 'clever'.

If the sound processing on your device is making you frustrated you should definitely discuss it with your audiologist. A lot of features can be configured or disabled completely.


> I used to work at a CI manufacturer. Just thought you should know that this isn't what happens

I didn't actually think it was; it's just a bit of annoyed snark because this has severely impacted my QOL for a decade. :)

> If the sound processing on your device is making you frustrated you should definitely discuss it with your audiologist. A lot of features can be configured or disabled completely.

Tried it several times; she was unconvinced that it was actually affecting me. Got a new audiologist; she was convinced that the setting couldn't be changed. Moved to Japan, got a new audiologist: convinced that I'm imagining it and doesn't think the setting can be changed anyway.

I hear that a long time ago, people used to be able to buy the adapters to program their processors themselves.


That last part seems to really put a damper on things, since the problem the author describes is that a person with a hearing aid requires speakers to take turns. Apparently, when people speak together, the multiple voices clash. Even if the hearing aid amplifies voices only, that problem remains.

Still, cooler than a lot of things Deep Learning is being applied to these days.


Reinvents is such an awful term.

They didn't 'reinvent' anything, they improved upon an existing shortcoming.


I was thinking they were on their way to making a Photoshop for audio, until I heard the before-and-after samples.



