Hacker News
Paralyzed Man Uses Brain Implant to Type Eight Words per Minute (ieee.org)
220 points by sohkamyung on Feb 22, 2017 | 28 comments



I'm impressed they get that much accuracy from the motor cortex. I posted this[1] before - I imagine detecting signals at the spinal cord would provide even greater accuracy (not necessarily in this case). I'm really excited either way.

I'd like to know more about the implanted array though. My impression was that all arrays induce scarring and lose effectiveness over time.

Also, isn't paralysis in ALS caused by the death of motor neurons? If so, what exactly was the array sampling? Is it just that there are too few neurons to control muscles, but still enough to type?

[1] http://journal.frontiersin.org/article/10.3389/fnins.2014.00...


> A motor neuron (or motoneuron) is a nerve cell (neuron) whose cell body is located in the spinal cord and whose fiber (axon) projects outside the spinal cord to directly or indirectly control effector organs, mainly muscles and glands.

The neurons in the motor cortex are not motor neurons.


That was my question as well, can anybody provide an answer? Will they need to clear out the scar tissue periodically?

Also, I'm unsure, but this might help prevent scarring and provide a closer to natural interface. http://www.sciencealert.com/scientists-build-an-artificial-n...

I like to keep tabs on this stuff, and would like to one day have an interfaced helper like Siri (an open-source alternative, of course) that I could use to help my memory.


I wonder if after training, these paralyzed users would develop a sort of language of brain primitives (almost a "thought alphabet") which are easily recognizable by the software and easy signals for the human brain to create. It may vary by user, but I wonder what the general set of easy "letters" would look like.


From what I've heard, this is actually how it happens. Roughly speaking, upon first connection of the implant the subjects basically can't do anything at all. They have to train themselves, and to some extent the software, to work together, so that different firing patterns of the neurons get matched to different letters (or movements, or whatever is attached at the output side of the software).


I doubt we have the signal fidelity for that. These are just electrodes that detect voltage changes. They can't see 'thoughtwaves' or some such. They see very basic information, and with a little training you can set off a voltage threshold. These voltages are really aggregates, and the tools used here just aren't granular enough to be very exacting. You can do this right now at home with the Neurosky headset or the Neurosky Jedi game. The former comes with a programmable API. The stuff in clinical labs isn't typically more complex than this. This team went with an implant, which is going to provide a higher level of accuracy, but it's not going to read thoughtforms directly.
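
A toy illustration of that kind of threshold detection, on simulated data (the numbers and the signal here are made up; this isn't the Neurosky API or the study's actual pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1000                                # samples per second
    signal = rng.normal(0.0, 1.0, 5 * fs)    # 5 s of baseline "noise"
    signal[2000:2100] += 4.0                 # a deliberate, trained voltage deflection

    # Smooth with a short moving average, then flag upward threshold crossings.
    window = 50
    smoothed = np.convolve(signal, np.ones(window) / window, mode="same")
    threshold = smoothed.mean() + 3 * smoothed.std()
    events = np.flatnonzero((smoothed[1:] >= threshold) & (smoothed[:-1] < threshold))
    print(f"detected {len(events)} event(s) around sample(s) {events}")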

I imagine there's a more efficient way to handle typing for these low bandwidth cases. Maybe chording of common syllables. The layout in the article looks inefficient. You can probably lose accuracy in text to have easier 'speaking.' If you wanted to say 'father' you'd have to hunt and peck six letters. With chording you could click on 'fa' and 'der' and it should be understandable via context. Toss in some predictive logic and you can chord words at once or even entire sentences. Probably easier said than done, of course.
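
A rough sketch of what that chord lookup plus prediction could look like (the chord table and word frequencies are invented for illustration; a real system would learn these from a corpus):

    CHORDS = {"fa": "fa", "der": "ther", "mo": "mo"}       # chord -> letter cluster
    WORDS = {"father": 100, "feather": 40, "mother": 90}   # toy unigram counts

    def decode(chords):
        """Expand a chord sequence into the most frequent matching word."""
        stem = "".join(CHORDS[c] for c in chords)
        candidates = [w for w in WORDS if w.startswith(stem)]
        return max(candidates, key=WORDS.get) if candidates else stem

    print(decode(["fa", "der"]))   # -> "father"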


It's like having an SDR next to a digital computer picking up unintentional emissions to guess what the computer is doing, except the computer isn't designed by anyone and the design isn't understood at all yet.

It's a miracle every time someone manages to put the signals together to get something meaningful done.


Speaking as a neuroengineer working in the field, this is quite accurate. Understanding the compute architecture goes a loooong way - after all, acoustic RSA key extraction (https://www.tau.ac.il/~tromer/acoustic/) is possible. Whereas we're not even sure exactly how the brain is supposed to compute in theory, other than that it's tremendously parallelized to a degree we don't quite fathom. The electronics explosion has primarily come out of computational motifs that rely on the sheer speed of semiconductor gates and lean heavily on sequential processing, but the brain doesn't work this way AT ALL.

An important concept here is the 100-steps rule (https://www.teco.edu/~albrecht/neuro/html/node7.html) - neurons are SLOW! You can out-jog most non-myelinated neural signals, and the vast majority of sensory and motor computations finish on the order of 100 "clock cycles".
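
A back-of-envelope version of that argument, with assumed round numbers (a few milliseconds per synaptic/spiking stage, a couple hundred milliseconds for a fast recognition task):

    step_ms = 2.0            # assumed time per neural "stage"
    reaction_ms = 200.0      # assumed time for a fast visual recognition task
    print(reaction_ms / step_ms)        # ~100 serial steps available

    # A 3 GHz CPU gets ~600 million cycles in the same window, so the brain
    # has to buy its performance with massive parallelism, not serial depth.
    print(3e9 * reaction_ms / 1000)     # ~6e8 cycles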

Write me a computer vision algorithm that has enough parallelism to complete in 100 cycles, and we can talk about understanding the biological brain compute structure and true brain-computer interfaces.


Interesting. Some stuff I've stumbled upon in the past which is kinda related to that idea:

https://en.wikipedia.org/wiki/Language_of_thought_hypothesis

https://en.wikipedia.org/wiki/Private_language_argument


But that's a stopgap measure at best, at least until the "personal calibrations" can be done programmatically. It would be a neat thing.


I suspect this might be a candidate for PCA, or a similarly driven approach.
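
For what it's worth, here's roughly what that would look like on per-trial firing-rate vectors; the data is simulated, and the channel count is just a guess at a typical array size:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_trials, n_channels = 200, 96             # e.g. a 96-electrode array
    X = rng.normal(size=(n_trials, n_channels))
    X[:100, :10] += 2.0                        # pretend half the trials are one "primitive"

    pca = PCA(n_components=3)
    Z = pca.fit_transform(X)                   # low-dimensional trial representation
    print(pca.explained_variance_ratio_)       # how much structure the top PCs capture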


I've been thinking along similar lines recently, but don't have fully formed enough thoughts to write. Would love to brainstorm if you're in the bay area at some point. (email in profile)


Will definitely let you know next time I'm around the Bay Area. I'm also interested in ML and education (my startup analyzes feedback comments, and we've been working with student surveys in education recently), so looking forward to chatting sometime


It seems that something like an entropy-based helper that predicts the probabilities of the next letters and makes those letters larger could increase input speed. One of the first examples of such a helper came out of David MacKay's lab: https://en.wikipedia.org/wiki/Dasher_(software)
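
A toy version of that idea, using a bigram model to size up likely next letters (the "corpus" here is a stand-in; Dasher itself uses a proper adaptive language model):

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the man ran to the van"
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    def next_letter_sizes(prev):
        """Relative display sizes proportional to P(next letter | previous letter)."""
        counts = bigrams[prev]
        total = sum(counts.values()) or 1
        return {ch: n / total for ch, n in counts.most_common()}

    print(next_letter_sizes("t"))   # 'h' dominates after 't' in this tiny corpus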


Nowadays everybody has something akin to this in their pocket: a smartphone keyboard.

Or at least SwiftKey does it (but I'm pretty sure stock Android and iOS keyboards have the same feature):

Instead of having to press a specific letter, you can miss and press another nearby letter. For example, if I want to write "hello" but I type "hekki" instead, it correctly proposes "hello" as a correction.

Same for "trsnskstd" and "translate", "ajithdr" and "another".

So instead of having 26 letters to press, you could make, for example, 6 groups of 4-5 letters, and let the computer decide which word you wanted to type.
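
A minimal sketch of that grouped-letter decoding (the groups and the word list are arbitrary choices here; real input would need a big dictionary plus context to break ties):

    GROUPS = ["abcde", "fghi", "jklm", "nopq", "rstu", "vwxyz"]
    KEY_OF = {ch: i for i, grp in enumerate(GROUPS) for ch in grp}
    WORDS = ["hello", "father", "another", "translate"]

    def keys_for(word):
        return tuple(KEY_OF[ch] for ch in word)

    def decode(key_sequence, words=WORDS):
        """Return every word whose letters fall into the pressed groups."""
        return [w for w in words if keys_for(w) == tuple(key_sequence)]

    print(decode(keys_for("hello")))   # -> ['hello'] (only one candidate in this tiny list)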


I am glad I am not the only one thinking "Not fast enough! We need to add more speed!"

Stuff like this is amazing though.


Am I the only one freaked out by the privacy implications of such advancements?

I mean, it's great for the disabled. But if we don't have good privacy protection legislation, is it possible that in the future you'll be asked for an inspection of your brain?


> But if we don't have good privacy protection legislation, is it possible that in the future you'll be asked for an inspection of your brain?

I can already see this kind of thing being abused at border crossings. Borders seem to be a no-man's land, where civil liberties are concerned, so even with privacy protections, I'd expect them to not apply there.


Extensive training and adjustment is required to use this, and that will likely always be the case.


I hope they make a vim plugin for the software.

Edit - This was a joke, but I do wonder if command/movement style interfaces would be much better for this type of thing than a keyboard interface.


They have implemented Dasher, a rapid gestural spelling interface, using the same technique:

https://www.youtube.com/watch?v=nr3s4613DX8


My hand is cramping... I volunteer to write the emacs one first.

But really, this is something I dream about as a spinal disease survivor.


You don't have a foot pedal? (https://www.emacswiki.org/emacs/FootSwitches)


It's so incredibly low-bandwidth! Currently, there are basically two ternary inputs we get from the brain. Lots of work ahead...


I wonder what the future holds for this type of technology. I presume this is going to start out expensive. Perhaps one day some firm will offer to give you this for things like Alexa. But you'll have to have advertising beamed into your mind. And law enforcement will be challenged about whether the data acquired through this is admissible.


Headon! Apply directly to the forehead!


Neural lace: one step (out of a billion) closer.

I'm amazed how fast some sci-fi ideas drop the "fi" part these days.


Still better than the new MacBook Pro keyboard!




