Neural implant lets paralyzed person type by imagining writing (arstechnica.com)
540 points by Engineering-MD on May 12, 2021 | 183 comments



If anybody from the project is reading this, could you tell me where I should send the application, entrance form, fee or even straight up bribes to get my brain and this technology smushed together please?

I'm quadriplegic and since lockdown began last March I've not left this bed once. That's not hyperbole, I literally mean that I haven't left the spot in this bed I'm in right now, writing this comment to you all.

So yeah, connecting with the world like this is something I need to be a part of. I really don't have the words to describe how life-changing this would be.


Francis Willett is the lead author on the paper. Here’s his contact info: https://profiles.stanford.edu/francis-willett


That's not cool, man. I hope you have a good-quality bed at least. I can just imagine what an impact this would have on your life. As you said, no words can describe it, but I feel you, man.


What are you using to type now?


Eye trackers are also used for those who unfortunately cannot move anything but their eyes, which is the case in very bad/late-stage ALS. This technology might really be life-changing for the most advanced cases of ALS: patients often lose the ability to move their eyes vertically, which makes it very, very difficult to work with eye trackers.

Really hope they'll make something like this available to the public.


Depending on whether they can move their neck much, they might have a forehead dot tracked to move a mouse cursor and a tongue/lip device for clicking. Then it's just onscreen keyboards. Other devices I've heard of use cheek movement or even eye tracking.


In addition to the options other commenters have mentioned, it is possible to fully control a computer with dictation, although it can be very arduous.

The term quadriplegia does not always refer to 100% paralysis, and some have limited use of their hands.


Likely a hired personal care assistant.


You might ask if you can test out this one - no surgery required.

https://techcrunch.com/2021/05/04/cognixions-brain-monitorin...


Hope they get back to you. I am optimistic technologies like these will become widely available soon!


The lead author of the paper: fwillett@stanford.edu


This sounds like the cheesy origin story of a supervillain if I ever heard one.


Best of luck to you


I remember when I first learned to touch-type as a child, and for many years after, I would sometimes 'type' things out on my legs, tabletops, notebooks. At first it was to practice, but after I had mastered typing I kept doing it just because it was still novel to be able to type and feel like I knew how to use technology. Doing that made me realize that typing a character is a very specific and atomic action (compared to writing by hand), as well as an automatic one (once you learn how to touch-type). I wonder if the performance of this solution could be improved by training it to detect the mental impulses that occur when a trained typist imagines typing a character, rather than when the person imagines writing out a character.


Heck, touch-typing is so ingrained in my brain that I sometimes recall how to type a word in order to remember how to spell it. For longer words that I don't write by hand very often, it's simply easier for me to recall and transcribe the muscle memory.


It's the same with guitar playing. I have to do the motions with my left hand to remember the notes and chords :)


I can type far more words than I can spell.


> Heck, touch-typing is so ingrained in my brain that I sometimes recall how to type a word to remember how to spell it.

I've often completely forgotten passwords and had to rely on this muscle memory to regain access.


I once was so high on MDMA that I could barely speak let alone type something because I just could not understand anything. Not only could I not remember my password, I could not keep my eyes steady enough to see the letters on the keyboard.

I had to unlock my laptop to play some music so I just tried to relax and let my fingers do the thing. I did it.


I once had to leave a meeting to find a keyboard to be able to relay a password to someone over the phone by looking at what I was typing.


It’s weird we call it muscle memory. It’s still a brain memory.


Neuroscientists think that motor action is primary, and all sensory processing is in the service of motor action. [1]

Motor movements are some of the deepest knowledge we have.

[1] https://youtu.be/I0JuMyGclr0?t=1257


> Motor movements are some of the deepest knowledge we have.

Don't know if it's a real thing, but in some CSI-type show years ago there was a witness with amnesia (couldn't remember their name, etc.) whose identity they figured out by engaging him in conversation to distract his consciousness from the "I can't remember" thoughts and then putting a form in front of him that he was supposed to sign (IIRC it could have been a witness statement). The movement of signing something was so ingrained that it persisted even through the amnesia.

Anyone know if this is a real thing or just fiction?


I can't comment on the specific example of signing a form, but I did read about a case like this in Robert Sapolsky's "Why Zebras Don't Get Ulcers".

It's a famous case in neurology, a man called "H.M."

Here's what's written in the chapter "Stress and memory" about H.M.:

    "H.M." had a severe form of epilepsy that was entered in his hippocampus and was resistant to drug treatments available at that time. In a desperate move, a famous neurosurgeon removed a large part of H.M.'s hippocampus, along with much of the surrounding tissue. The seizures mostly abated, and in the aftermath, H.M. was left with a virtually complete inability to turn new short-term memories into long-term ones -- mentally utterly frozen in time.

    Zillions of studies of H.M. have been carried out since, and it has slowly become apparent that despite this profound amnesia, H.M. can still learn how to do some things. Give him some mechanical puzzle to master day after day, and he learns to put it together at the same speed as anyone else, while steadfastly denying each time that he has ever seen it before. Hippocampus and explicit memory are shot; the rest of the brain is intact, as is his ability to acquire a procedural memory.

If you ask me, the CSI episode is definitely plausible, considering that H.M. was real.


It is real, not fiction. Oliver Sacks' "Musicophilia: Tales of Music and the Brain" contains a similar case; read "In the Moment: Music and Amnesia". In one of Sacks's talks on YouTube, I heard his hypothesis to account for this phenomenon: not all learning is equal; some learning is so ingrained that it is stored in the lower brain, in the brainstem area.


Another good example of this is that musicians will often deliberately work to get parts of pieces fully into muscle memory. For fast sequences, if your brain has to get involved, you've already missed a few notes.


Just to note again, the brain is always involved in playing instruments.


I like watching prank shows. For some reason it's entertaining to see what the body does instantly/unconsciously in response to stimuli when there is no time for thought. Below some threshold it's like people have no control over what their legs, hands, face, or vocal cords will do...


I still do that (tap things out in qwerty when not at a keyboard) constantly. I type well over 100wpm, and qwerty is very, very deep in my brain. On occasions when I am significantly intoxicated (pretty rare for me), I can still type accurately at >50wpm even if my words are slurred.

There have been times I wanted to switch layouts but qwerty is just too far in there for too many decades.


I've been practicing touch typing for over 5 years now. I seem to be capped at ~65 WPM. I sometimes wonder whether it will stay there or improve.

I take this to test my speed: https://10fastfingers.com/advanced-typing-test/english

Now I've tried to take the test again after roughly a year, out of curiosity, and I got around 75 several times. Maybe it will improve very slowly past this level?

Do you perhaps remember what the improvement trend was like for you?


I played Everquest and would be running from monsters while typing as a young teenager. So I’d have to type as fast as possible. I touch type around 120 wpm without ever being formally trained.

Maybe try something that forces you to type quickly like the game The Typing of the Dead or The Typing of the Dead 2? They’re silly games but might help since they add urgency to the typing.


You get better over time if you intentionally practice improvement. If you don't, 60-70wpm is a common threshold.

My gut feeling is that this is simply the speed range where the benefits of faster typing while composing quickly fall off. If you're thinking about what to say, you'll write a burst, think, write, think, change a bit, write some more; the limiting factor isn't really WPM.

Transcribing, of course, is another matter, but that is a specialized case.


Online chatting, too. My typing speed increased dramatically when I started having conversations online.


Yes, you might be on to something here. I tried to reflect back on it after reading your comment, and I think I "feel" content with my current writing speed.

I will try to feel less satisfied with my writing speed and see if it helps. Thank you for taking the time to share!


Do you play an instrument? I have a feeling that I can type quickly (well over 100wpm) because I play the piano as well.


I play guitar, and I am certain that practicing the fine-motor sync skill of using both hands at once has helped my typing, and vice-versa.

For reference, my typing-while-transcribing rate is in the 100-110 range. My daily-use composition rate is likely half that, simply because I spend most of my time composing in my head.

I will say that the big advantage comes from being completely comfortable touch-typing, without needing to look at the keyboard. Once you've achieved that, the mental load of typing fades in to the background, and you can spend more time considering content instead of the mechanics of creating it.


> You get better over time if you intentionally practice improvement.

Are there more effective techniques than just typing? I mean like measured techniques for identifying and improving problematic aspects of typing rather than just going at it repeatedly in the hopes that I'll improve overall?


Yes. Feedback from a professional typist, who watches and corrects your form as you type. When I was younger, my father had a secretary who helped me practice on an IBM Selectric, and it helped a tremendous amount compared to the time investment.


It’s also possible that your keyboard is slowing you down somewhat. I started using a super crappy keyboard a while ago with wobbly keycaps, poor actuation pressure, and excess key travel, and my typing speed & accuracy both dropped by over 5% just because the keyboard didn’t work well. It probably won’t take you from 75 to 100+, but if you are looking for something that might give you a modest improvement, you might look at the keyboard you’re using and see if you can find one that works better with your hands.


I have been over 100wpm (on qwerty) since a teenager, and I'm in my late 30s now, so unfortunately no. (My scores are routinely 115-125 these days.)


This makes me suspect it might be like playing an instrument: a lot easier to learn while young. I'm 29 right now and I started practicing around 23-24. A bit too late for this kind of muscle memory, I guess.

Thank you for sharing; it was useful to compare!


Plato said the prime of life doesn't even start until you're 25 and Schopenhauer said something similar (can't recall his exact number). You can still learn and grow a lot way later into life than most people think. If you really care, just try different techniques, measure/test yourself, and continue the cycle. If you can't find the motivation, maybe it's just actually not that important to you, and deep down you have greater interest in other things than learning how to type fast.


I don't have a good timeline for going from qwerty to dvorak to tell you (first change), but earlier this year I went from dvorak to workman and was back up to 100wpm at least after about two months. I was in the 140s with dvorak after a year and a half or so. My best with qwerty was 160. I recently hit 115 with workman. It doesn't cause the same right hand strain dvorak did, so I'll likely stick with it longer and hopefully hit my old speeds.

If you do change layouts, research them a bit first. I liked dvorak, but it used the right hand more than the left by a fair bit, like the opposite of qwerty. My right hand didn't deal with the strain as well as my left hand did in qwerty. Workman is about 50/50 left and right hand usage, and also avoids putting common letters in the middle two columns, which reduces how often stretching is needed.

Colemak seems a bit better than Dvorak on paper, but still has some flaws that led to the creation of a "mod DH" variant, which moves D and H to a less straining spot since they're common letters.

Workman solves the same issue and is a lot more efficient than even Colemak, plus the 50/50 hand balance seems entirely unique to Workman. That balance was why I chose it over the Carpalx QGMLWB layout or the AI-generated Halmak layout, which are even more efficient in some cases than Workman. I have a friend who uses QGMLWB, but as far as I know, he doesn't get any pain in his hands. He also doesn't type quite as fast as I have, which may be related.

Lastly, I want to mention that I built an ergonomic keyboard and used it with dvorak for a couple months without my hand pain going away, which is why I bothered changing layouts again. The journey has been educational. I never used to care about stuff like the pinky being weak and overused. Now I have stuff like backspace and enter on thumb keys, I'm thinking about how much I have to stretch to certain keys, etc.

Some of these ergo keyboards take some keys off the right side compared to something like a 60%, but then you realize your right pinky was covering a huge amount of keys, and maybe it's worth having an extra inner column where you move the = sign to. I use a Pinky4 after using a Pok3r in the past, and it seemed annoying at first that the top row didn't fit the - and = right of the 0, but I prefer it now that I'm used to it. I've tweaked the key layout several times in QMK and it's been a lot of fun. So, I'd recommend both changing layouts and replacing your keyboard if you have any interest in this stuff.


You may be interested in the recently published https://engram.dev layout :)


Have you ever tried to label a blank keyboard from memory? Even though you know where all of the letters are and use them all the time without thinking it’s almost impossible.


I’ve found it’s quite doable if you just “type” out different words on a blank keyboard and then use where your fingers land as the label. If you type “animal” you then know the position of all the constituent letters. So all you need to do is type a sentence with all characters, like “the quick brown fox jumps over the lazy dog.”

Interesting effect though. This reminds me that it’s almost impossible to draw a bicycle from memory. https://www.amusingplanet.com/2016/04/can-you-draw-bicycle-f...


Or even just type out the alphabet. But labeling it from left to right, top to bottom, would indeed be tricky once you got past 'qwerty'.


I'm bilingual. Phone numbers and such that I've committed to memory are stored in one language only. To translate them, I need to write the number down and read it in the other language.


For me numbers are stored in one language or the other, but each is always recalled in the same language. For example, my SSN is in English but my garage door opener code is in Chinese.


For some reason, I remember numbers like that as though I were typing them on a numpad. I automatically visualize a numpad and remember the sequence of numbers as distinct locations on it. Funnily enough, even phone numbers use this computer numpad rather than the number pad on a phone.


You can't visualize the numbers? Or translate the "audio" in your mind?


I can, but it's easier to write it if I'm at a desk. Audio-translating is harder than it seems it should be... especially if I'm struggling to recall the number. I suspect that recall and translation are like lead guitar and vocals.


I had to re-key my keyboard recently and managed to get all keys right, EXCEPT for a swapped right alt/ctrl. I don’t think I ever typed on those two, so maybe it’s a blank in my muscle memory map :)


I realised a short while ago, that no matter which keyboard I'm sitting in front of, I simply do not use the right alt-gr, windows, menu, control or shift keys. They could just not be there and I wouldn't notice.

I also never use the numeric keypad, but I think that's more to do with being left handed.


Indeed! It is like entering a PIN code on a keypad for the office door every single day. Then the new guy calls and asks you about the PIN, and you have NO idea. This happened to me and I had to put my hand on something flat and watch myself typing.

I suspect this is in some part due to the arrangement of the numbers. Keypads on these kinds of devices often have "1 2 3" on the top row, whereas on keyboards and calculators it is reversed: the top row there goes "7 8 9".


In the early days of card payment terminals my mum couldn't remember her PIN to pay at a shop. She had to run over to an ATM, do a pretend transaction, and take note of what she entered.


I'm like that with my phone lock PIN. I use it visuospatially rather than numerically.


I can do it no problem (have changed all the keycaps on a keyboard a few times), but it is indeed a different kind of memorization. I can recite the qwerty letters all in a row, but for dvorak and workman, I didn't spend as many years with them (or actually ever see keycaps arranged that way on a real keyboard), so I would have to imagine myself typing for those.


I totally can do that with the Russian layout. Probably can with Latin too, but I won't be so sure.


Even the free-form mode's 75 characters per minute sounds amazingly fast for a thought interface.

I wonder what it's like to use. I know there have been many attempts at this, and they're steadily improving, but I watched a talk at EvoMusArt 2007 where researchers used a smaller array of cranial electrodes to have the user control a mouse. That it worked blew my mind. But they talked about how slow and noisy it was to get the cursor from one side to the other... maybe a minute IIRC. What I remember most distinctly was the researcher said it was physically draining to do, that after 5-7 minutes of this activity they would be sweating and exhausted, without moving.


Can someone explain what is new about this generation of the tech?

People have been able to move mouse cursors and type using only their brain and tiny implants for decades... so far Neuralink seems to just be repeating these experiments, but receives a lot more hype.

Here's an article from 2006[1] with someone moving a mouse cursor and clicking things with a similar tiny brain implant.

Is there something new here, or just the Musk train?

[1] https://www.nytimes.com/2006/07/13/science/13brain.html


From the Ars Technica article:

>the participant was able to produce about 90 characters per minute, easily topping the previous record for implant-driven typing, which was about 25 characters per minute.

So it appears to enable typing at > 3x the speed of previous efforts. At least for this one particular person.


For me, having also seen the cursor-and-clicking type stuff in the past, what seems new (although I don't know for sure that it actually is new) is that they can now read minds at the fidelity of individual letters rather than just simpler "directions" like up, down, in, out.


Previous efforts were more inaccurate; you may remember hearing jargon like “alpha waves” or “beta waves”. Participants would learn to move cursors by learning to create the neurological activity that was being listened for.

This, on the other hand, is watching for complex neural activity, in that it learns what pattern appears when a participant pictures drawing an A.

Think of the difference like: before the input was controlled by turning your whole body, and now the input can take individual sign language letters.
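
To make the contrast concrete, here's a toy sketch of the two decoding problems (random data, made-up feature counts, not either system's actual pipeline): the old style regresses a 2D cursor direction out of the neural features, while the newer style classifies which character was being imagined.

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    n_trials, n_features = 300, 192             # made-up sizes
    X = np.random.randn(n_trials, n_features)   # stand-in for neural features

    # Old style: decode a continuous 2D cursor direction.
    directions = np.random.randn(n_trials, 2)
    cursor_decoder = LinearRegression().fit(X, directions)

    # This work's flavor: classify which of 26 letters was being "written".
    letters = np.random.randint(0, 26, size=n_trials)
    letter_decoder = LogisticRegression(max_iter=1000).fit(X, letters)

    print(cursor_decoder.predict(X[:1]), letter_decoder.predict(X[:1]))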


>hearing jargon like “alpha waves” or “beta waves”. Participants would learn to move cursors by learning to create the neurological activity that was being listened for.

Previous efforts included moving a cursor with the implanted Utah array, which I don't think was just based on broad brain waves was it?

Are you thinking of non-invasive devices? Neuralink didn't invent implanted microelectrode arrays. Some of their press has been around being the first one wirelessly connected to a receiver, but it wasn't first there either. Mainly their roadmap for scaling to more and more electrodes and their custom chip for processing is what's new, I think, but they aren't sure whether the polymer coating they are using will hold up long term.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3715131/


You can do that without implants, with just an EEG headset. I am still not sure why people think it's a great idea to put things in the brain for non-life-threatening reasons.


Maybe this comment will give you some perspective.

https://news.ycombinator.com/item?id=27138124


Define life-threatening.

I would risk my life to be able to speak at normal speeds, versus incredibly slowly.

I would risk my life to be able to walk, even slowly, rather than be in a wheelchair (I imagine.)


Agreed. The divorce rate of married people who get these regulatory-approved BCIs is also extremely high. There are also other extremely severe problems that such interfaces cause.

EDIT: To the downvoters, it’s true, and there are other severe issues with these implants. Have a read, since many of you are laymen:

1. Audio Narrated Version of New Yorker Article: https://share.audm.com/share/newyorker/mind-machines-kenneal...

2. New Yorker Article: https://www.newyorker.com/magazine/2021/04/26/do-brain-impla...


The divorce rate for people who sustain life changing injuries is also high. I wonder how much overlap you’re seeing here


Don’t act like you can’t compare a group of married people with the condition who don’t get the implant against a group of married people with the condition who do. Did you honestly think they drew the conclusion simply by comparing the divorce rate of individuals with the condition to groups without the condition? It sounds like you concluded the other side knows nothing about science because they mentioned the divorce rate in a negative light, while lacking core details of their study.


It is believed that a lot of this is due to a shift in identity. Look at people with epilepsy (medically stable for years, with no progression), for example. The divorce rates are similar for other users of regulatory-approved BCIs.


A local publication had one of the best first person accounts of TBI and the resulting rehab I’ve ever read and I always think about this whenever the subject comes up.

https://www.sandiegoreader.com/news/2010/apr/21/cover/


When I was a teenager, my mother had the court transcript of a bicyclist who had been in an accident without a helmet.

Quite horrifying to read. Absolutely nothing he said made any sort of sense.


> The divorce rate of married people who get these government regulatory approved BCI interfaces is also extremely high.

Divorce rates spike after diagnosis of, and successful treatment/recovery from, all kinds of chronic health issues, as they do for all kinds of major life changes. Disruption in living patterns changes the context and patterns of relationships.


Very nice article. I liked the ending of it:

Leggett’s identity changed again once the device was gone. Now she knew great loss, but she also knew things that had been impossible to understand before the device. Like many people with epilepsy, she had often found herself fuzzy for a considerable amount of time after a seizure. That state made it very difficult to notice the signs that preceded seizures which could act as a natural warning light. These days, when she gets a funny, flip-floppy feeling inside, she takes anti-seizure medication. She’s not always sure. Sometimes she gets her husband to weigh in. He says, “Go with your first instinct,” and usually she takes a pill. She is now seizure-free.


Just FYI: the article mentioned Neuralink as context for the general public (and with a bit of snark if I’m reading it right) but this was done by an independent academic research group.

To answer your question, it is merely a continuation of research in the field.


I see the promise of Neuralink as being able to read and write to computers/AI directly at high bandwidth. It's a long way from that, but it's what it's going for.

Probably a good analogy is that we had letters and fax machines before the internet, the internet is just much faster with much higher bandwidth.


It should be noted that TFA has nothing to do with Neuralink, this is university research.


I am not a BCI engineer (IANABCIE?), but I think the selling point of Neuralink is the robot-surgery aspect.

Designing the full stack to be minimally intrusive seems unique.


> Designing the full stack to be minimally intrusive seems unique.

Using the words "minimally intrusive" in the context of open brain surgery is quite amusing to me.

No matter who or what performs the surgical intervention, it still involves cracking open and replacing part of the skull and inserting foreign bodies into cerebral tissue.

That's as intrusive as it gets.


The basic idea of the older EEG setups is you have a net of 128 or 256 electrodes on your head, and then the program signal processes all of the waves from them. Problem is your skull makes the waves bounce around, adding a lot of noise and making it hard to parse much signal from the data.

Neuralink uses an implant, inside your skull, so that this noise isn’t a problem.


Not true: you ideally still need as many electrodes as feasibly possible for a more complete BCI, implanted or not, regardless of the resolution (constant bitrate) of the electrodes. It is true that certain implanted electrodes have higher resolution than traditional EEG caps. But at this point it is not that big of a deal, because neurosurgeons can only implant a very limited number of electrodes. Even if they could implant 256 electrodes, processing them would be extremely limited. Even a standard 19-channel EEG cap can require a top-of-the-line computer (think i9 processor, 64 GB RAM, or better) for some use cases, due to latency issues involved with the number of signals being used.

Also: With EEG caps, we have filters that remove the noise from movement and muscle activity very well.
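
To give a flavor of what that filtering looks like in practice, here's a minimal sketch of the usual band-pass plus mains-notch step on raw EEG, using SciPy (the channel count, sampling rate, and cutoffs are just assumed values, not any particular lab's pipeline):

    import numpy as np
    from scipy.signal import butter, filtfilt, iirnotch

    fs = 256          # assumed sampling rate in Hz
    n_channels = 19   # standard 10-20 cap

    # Fake raw EEG: channels x samples (replace with real recordings)
    raw = np.random.randn(n_channels, 10 * fs)

    # Band-pass 1-40 Hz to drop slow drift and high-frequency muscle noise
    b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
    eeg = filtfilt(b, a, raw, axis=1)

    # Notch out 50 Hz mains hum (60 Hz in the US)
    b_notch, a_notch = iirnotch(50.0, Q=30.0, fs=fs)
    eeg = filtfilt(b_notch, a_notch, eeg, axis=1)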


The published paper says they used RNNs to decode handwriting. Similar research from what seems to be the same team used a ReFIT Kalman filter around 2011, but this approach looks to produce better results.
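
For anyone wondering what an RNN handwriting decoder even looks like structurally, here's a minimal PyTorch sketch (layer sizes and character count are made up; this is not the authors' actual architecture):

    import torch
    import torch.nn as nn

    class HandwritingDecoder(nn.Module):
        """Map binned firing rates from ~200 electrodes to per-timestep character logits."""
        def __init__(self, n_electrodes=200, hidden=256, n_chars=31):
            super().__init__()
            self.rnn = nn.GRU(n_electrodes, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, n_chars)   # e.g. 26 letters plus punctuation

        def forward(self, x):                # x: (batch, time_bins, n_electrodes)
            h, _ = self.rnn(x)
            return self.head(h)              # (batch, time_bins, n_chars)

    decoder = HandwritingDecoder()
    fake_activity = torch.randn(1, 500, 200)     # 500 time bins of fake data
    char_logits = decoder(fake_activity)         # argmax over the last axis gives characters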


Ready for next year’s paper to be the same but using transformers or some attention mechanism instead!


As another poster said, they used an RNN to decode handwriting. While the observed character rate is a substantial increase compared to other such systems, that is not necessarily what is groundbreaking here. You would probably need a very powerful desktop (or a laptop with a desktop processor, >$4,000) with the best imaginable specifications to pull this off. So it is of extremely limited use for the disabled individual.

With brain-computer interfaces you use motor imagery [1], where you imagine motor movements (e.g. moving your left arm so that your palm faces outward, and imagining the position of your shoulder, arm, and fingers) to control the BCI, as in the case of prosthetics or, here, handwriting. I imagine that since these are fine motor movements (a different brain circuit, too), they are much more difficult to visualize.

Motor imagery is also used in stroke rehabilitation. It is also used for sports performance.

Graded motor imagery [2] is a variant of motor imagery and there are apps for that. It basically rewires your brain’s pain circuits and makes pain more manageable and more controllable. It works for chronic pain, of any origin.

For example, in graded motor imagery, apps show you images of arms/legs/etc. in contorted (twisted) positions and at various angles in the picture, and based on the positions of the fingers/toes/etc. you are supposed to identify if the arm/leg/etc. is a “left arm” or “right arm” or “left leg” or “right leg” [3][4].

I have tried graded motor imagery (via apps), and even though I have excellent spatial skills, it is quite a difficult exercise.

There currently is no huge difference between electrode caps and implanted electrodes (there are multiple types) except that electrode caps are super dorky. The resolution (constant bitrate) is higher for some implanted electrodes, but not by a ton compared to an electrode cap. Generally, a ton of electrodes are not implanted, either.

So, everyone drooling about Neuralink really has no idea what they are talking about here. They have a tremendously long way to go with Neuralink and Musk is making claims that may never even be technically possible.

Also, the divorce rate of people getting these neural brain-computer interface implants (the ones that are already regulatory approved) is extremely high. People should think about this and other social matters, instead of getting all excited about concepts (not technology) such as Neuralink.

[1] Motor Imagery: https://www.sciencedirect.com/topics/medicine-and-dentistry/...

[2] Graded Motor Imagery: http://www.gradedmotorimagery.com/

[3] Graded Motor Imagery apps [iPhone and Android]: https://www.noigroup.com/product/recogniseapp/


> You would probably need a very powerful desktop (or a laptop with a desktop processor, >$4,000) with the best imaginable specifications to pull this off. So, it is of extremely limited use for the disabled individual.

Not necessarily. They have an extremely low-dimensional input and a pretty straightforward problem. The fact that they can train a good model with just a few hundred sentences suggests that it's an easy problem...

For other context, I do neural speech generation on phones for my day job, using an RNN that makes 4000 inferences per second. It works fine on a single thread with most phones produced in the last few years. Another helpful point of context might be 'swipe'-style phone keyboards, which are often RNN based, and turn paths into words.

The focus on giant-model work I think hides how effective small models can be, and how much progress has been made on making models run faster in limited resource environments.

(Do you have a reference on the divorce rate? Not sure I understand the causal link there...)


I cannot pull up studies now (on mobile), but it’s believed to be more of an issue about change of identity than disability or brain injury.

1. Audio Narrated Version of New Yorker Article: https://share.audm.com/share/newyorker/mind-machines-kenneal...

2. New Yorker Article: https://www.newyorker.com/magazine/2021/04/26/do-brain-impla...

My work in AI revolves around biophysical signals. I do not use AI generally in this case but I use a 19 channel EEG (using an EEG cap) for experiments in closed-loop controls. It requires a lot of RAM (ideally 32 GB if not more) to prevent latency.


Thanks for the article; it was a super interesting read. My takeaway was that poking brains can lead to major personality changes, which can lead to divorce and other bad outcomes. (Just hearing 'leads to divorce' made me wonder if it was due to previously non-communicative people expressing themselves in ways they couldn't before... but sounds more like a 'sometimes you get subtle kinds of brain damage' problem.)


People still lived largely offline in 2006. The motivation for BCIs is much higher today than it was 15 years ago.


Where did you live in 2006? Nowhere near me, that is for sure.


(Should be characters rather than words.) Very impressive; I wonder if future versions will eventually even exceed the speed of speech or typing.


This artificial interactive knowledge entity wants access to your attention and emotional state. Think "Accept" to allow permanent access [*]. Think "More info" to instantly know our privacy policy.

This is fine.

[*] We will add your technological and biological distinctiveness to our own.


In the same way that irreversible actions on legacy keyboard systems are confirmed by typing in a series of words, MentalKeyboard will require you to think of a complex thought.

Delete all funds from your bank account? To confirm, think of a polka-dotted elephant. Otherwise, just think of anything else.


I'm not sure if you did it on purpose, but when I read:

> To confirm, think of a polka-dotted elephant.

I immediately thought: what does a polka-dotted elephant look like???

I guess my bank account is blank now :( .


This would pose a different kind of problem for people with aphantasia.


"I tried to think of the most harmless thing. Something I loved from my childhood, something that could never, ever possibly destroy us: Mr. Stay-Puft."


You would want to offer a specific something else for the user's executive function to latch onto immediately, even if you continue to interpret anything other than the narrow confirmation pattern as "cancel", just so there isn't only one immediate, recent attractor for "think of something".


thatwasthejoke


To confirm, think of a $thing continuously for 10 seconds


> To confirm, think of a polka-dotted elephant.

You monster! :D


Resistance Is Futile


You're reminding me of the augmented 16-fingered prosthetic hands from Ghost in the Shell, much safer than connecting your cyberbrain to a random computer system.

I imagine a future version of this technology would move beyond converting mental keystrokes into input and instead work on more abstract ideas. Imagine if your IDE could auto-generate certain design patterns as you think about them, instead of forcing you to manually type them out.


Yes, I just came to add an addendum: characters, not words. But as you said, very, very impressive. This could massively improve patients' quality of life by letting them communicate much more quickly.


I think so, and it will probably require some skill to fully utilize; kids in the future will start learning to use it from an early age.

I imagine this is how I will probably become 'obsolete' with regard to technology, just like my parents can't catch up with today's technology.


I've worked alongside a group researching similar techniques; it doesn't actually need to be implants: http://bretl.csl.illinois.edu/projects

I think we were able to get up to something like 20 characters per minute, but much of that was UX design; you can go from 6 to 20 characters per minute that way. I wouldn't be surprised if the implants improve the speed, but UX could probably have an equally large impact.


I think I believe that. What is the easiest thought abstraction that can be captured by our sensors? Well, the abstraction is largely defined by the UI. I like to think of it like language: UI components (words) come together to enable complex actions (sentences or thoughts). It evokes questions, like what language does the brain speak in certain contexts for certain outputs? That's going to be interesting to follow. What if we all think super differently and that makes it hard? I can't imagine why, but I don't have a background in real brains.

It may be that our current "AI" tools might be helpful: they're really good at composing "languages" for tying together different types of data. Tying noisy brain-sensor data to our English alphabet seems like it might be an example of that.


Neurosignal decoding was the topic of my PhD (which, to be fair, I quit after a year).

Every time an article comes up on the topic, I read through the comments section and realise how full of shit we all are here at HN. Then I read another article and assume the commenters are all smart people who understand the topic at hand.

I can't remember what the law that describes this is called, but it is absolutely real.


Ok, but rather than posting a meta putdown, why not share some of what you know about the topic? That way we can all learn something. A comment about neurosignal decoding would be much more interesting than yet another complaint about people being wrong on the internet, which we're all in a position to supply.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

https://news.ycombinator.com/newsguidelines.html

To be fair, this problem is a co-creation between comment and upvotes, and upvotes deserve more of the blame. But if you post an on-topic, substantive comment in the first place, then you're contributing to improving this place rather than worsening it. Recent comments on similar cases:

https://news.ycombinator.com/item?id=27110515

https://news.ycombinator.com/item?id=26894739


Hmm, perhaps there was some sort of cultural misunderstanding (I've always assumed you're American, and perhaps the choice of words on my part was quite Australian), but what I posted wasn't intended as an insult or flamebait - I included myself in the group of people who are "full of shit"!

It was more intended as a gentle reminder that there are a lot of comments on HN posted in an authoritative tone by people who are not experts in the field, and perhaps we should all stop and think before contributing to the problem in search of validation.

This is something I honestly care more about than BCI technology, and genuinely felt it was a positive contribution to the discussion as a whole. Based on others' reactions, it didn't seem to be taken negatively either.

Regardless of all that, in one of my comments below I did actually share both a short summary of the field and my experience as a researcher, as well as a 20-page long unpublished literature review. Hopefully that can help enlighten people somewhat on the terminology used in the BCI field and point out some of the limitations and issues with the current state of the art.


I hadn't seen your other comment when I wrote that. Thanks for the kind reply (and for teaching us about BCI technology!)


Thanks for the work you put in as a moderator, and I mean that genuinely. Very few people seem to be able to walk the tightrope between freedom of expression and civility and HN would be a worse place without you.


Oblig. XKCD: https://xkcd.com/386/

(My wife says this particular XKCD reflects me the most!)


Who are you, the master of articulation _/||\_


In a manner of speaking, he is, as a moderator on HN.



Gell-Mann amnesia effect[1] by Michael Crichton.

“Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.”

[1] https://www.goodreads.com/quotes/65213-briefly-stated-the-ge...


With zero sarcasm, I truly want to know what your opinion is on the subject. Generally, I find this kind of stuff interesting, with wonderful potential for abuse. Someone more in the know, boots on the ground if you will, is worth infinitely many metric fuck tonnes more than anyone else's opinion... to me at least.


My honest opinion: anyone reading this will be dead by the time technology advances to the stage where BCI ethics and abuse is a topic even worth considering.

These articles are designed to make it seem like the technology is developing rapidly, similar to the early CPU and memory chips. A more apt analogy is the progress of electrical systems in the 18th century: we can observe some odd effects but have no idea what it is we're actually looking at. This won't change until we gain a proper understanding of the human brain, just as electronics didn't take off until we understood the atom.

If you want my opinion on the current state of the art (as of 2018), as well as a quick introduction to some of the technology, here is an old draft of my research proposal (You can skip almost all of it except Appendix 1; that's by far the most important argument for why I think the current paradigm cannot deliver significant improvement): https://docs.google.com/document/d/1pmgCpDLEfHlWDu6OoHuoTOQ4...

As you can probably tell, I gave up because I could identify gaping flaws with the current paradigm but wasn't able to formulate a new one out of nothing.

On a side note, I suspect this is why impostor syndrome is so prevalent amongst PhD students. You're not actually expected to contribute worthwhile knowledge to society, but instead expand the body of knowledge in your field. In a static field, unless you are a hyper-genius capable of spinning a whole new paradigm, this means churning out papers which you know are bullshit but are supported well enough by the existing body of research to appear reasonable. In short, grad students feel like impostors because they are.


Exact same reason why I got out of academia, and my field was a lot more "realistic" than yours. (Post-Silicon Semiconductor physics)


I don't think it's fair to expect to revolutionize the field as a PhD student. The goal of a PhD is to produce a good researcher, which is probably more important than producing new knowledge. Most papers are maybe boring and have little new in them, but I still think they are worthwhile.


Clearly it's not ready for commercialisation (needs weekly recalibration, only a single individual tested, etc.), but it's still amazing: we're decoding signals from the brain! Isn't that worthy of discussion?


So then what's your take on Neuralink? Are they going to hit a plateau within the current paradigm? And what do you think that plateau will look like technologically? Maybe we could still get pretty far within the current paradigm.


Gell-Mann amnesia effect: https://en.wikipedia.org/wiki/Michael_Crichton#GellMannAmnes...

> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.


It baffles me that the Gell-Mann Amnesia effect still doesn’t have its own Wikipedia page given how pervasive it is.


Add in Dunning-Kruger and you get one hell of a shitshow.


Hmm, since they are having someone imagine writing a character out and using recognition on it, I wonder if there are gains to be had by switching to a simplified writing system, like Palm did for their PDAs with Graffiti.[1]

1: https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)


With autocompletion and advanced prediction, 90 characters a minute could effectively be much more.

https://www.tabnine.com/
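
As a toy illustration of how much a completion engine can stretch a fixed character budget (this is not how TabNine works internally; the vocabulary and prefix are made up):

    # Toy prefix completion: turn a few "imagined" characters into a full word.
    vocab = ["implant", "imagine", "paralyzed", "keyboard", "character", "minute"]

    def complete(prefix, words=vocab):
        """Return the word matching the typed prefix, if it is unambiguous."""
        matches = [w for w in words if w.startswith(prefix)]
        return matches[0] if len(matches) == 1 else None

    typed = "para"                 # 4 "imagined" characters
    word = complete(typed)         # -> "paralyzed" (9 characters of output)
    saving = len(word) - len(typed)
    print(word, f"saved {saving} keystrokes")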


I'd love a neural interface to Dasher, which adaptively learns as you use it which letter combinations are the most popular, and adjusts over time to make it easier and faster to input common text.

It would be wonderful integrated with a context and language sensitive IDE.

https://en.wikipedia.org/wiki/Dasher_(software)

https://github.com/dasher-project

http://www.inference.org.uk/dasher/

>To make the interface efficient, we use the predictions of a language model to determine how much of the world is devoted to each piece of text. Probable pieces of text are given more space, so they are quick and easy to select. Improbable pieces of text (for example, text with spelling mistakes) are given less space, so they are harder to write. The language model learns all the time: if you use a novel word once, it is easier to write next time. [...]

>Imagine a library containing all possible books, ordered alphabetically on a single shelf. Books in which the first letter is "a" are at the left hand side. Books in which the first letter is "z" are at the right. In picture (i) below, the shelf is shown vertically with "left" (a) at the top and "right" (z) at the bottom. The first book in the "a" section reads "aaaaaaaaaaaa..."; somewhere to its right are books that start "all good things must come to an end..."; a tiny bit further to the right are books that start "all good things must come to an enema...". [...]

>.... This is exactly how Dasher works, except for one crucial point: we alter the SIZE of the shelf space devoted to each book in proportion to the probability of the corresponding text. For example, not very many books start with an "x", so we devote less space to "x..." books, and more to the more plausible books, thus making it easier to find books that contain probable text.

The classic Google Tech Talk by the late David MacKay, the inventor of Dasher:

https://www.youtube.com/watch?v=wpOxbesRNBc&ab_channel=Googl...

It's based on the concept of arithmetic coding from information theory.

http://www.inference.org.uk/mackay/dasher/
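
A minimal sketch of that core idea (not the real Dasher implementation; the probabilities here are made up): give each candidate next character a slice of the display proportional to its probability under the language model, so likely continuations are big, easy targets.

    # Hypothetical next-character probabilities from a language model
    next_char_probs = {"e": 0.35, "a": 0.25, "t": 0.20, "x": 0.01, "z": 0.005}

    def allocate(probs, height=1.0):
        """Split a vertical strip of the screen into per-character slices."""
        total = sum(probs.values())
        slices, top = {}, 0.0
        for ch, p in sorted(probs.items(), key=lambda kv: -kv[1]):
            share = height * p / total
            slices[ch] = (top, top + share)   # (top edge, bottom edge)
            top += share
        return slices

    for ch, (lo, hi) in allocate(next_char_probs).items():
        print(f"{ch}: {hi - lo:.3f} of the screen")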

Ada Majorek, who has ALS, uses it with a Headmouse to program (and worked on developing a new open source version of Dasher) and communicate in multiple languages:

https://www.youtube.com/watch?v=LvHQ83pMLQQ&ab_channel=Dashe...


This is amazing. Think of how liberating implants like this will be to people with locked-in syndrome or with less-serious diseases!

Personally I've struggled with some RSI. Fortunately I've figured out a good way to manage it, but the thought of losing my ability to type terrifies me. I could see a mature version of this technology being safe and common enough to be an elective thing; then I wouldn't have to worry about hurting my hands!


The FDA approved the first BCI, from Neurolutions, a few weeks ago. Other companies like Synchron are going to beat Neuralink to market as well. https://newatlas.com/medical/first-fda-approved-brain-comput...


I can't comment on who'll come to market first, but invasive BCI (like what's posted) is much harder to justify to the FDA than non-invasive solutions. We have a long way to go before any such solutions are available to the general public.


Having followed the development and trials of various types of brain implants, the realities and side-effects of living with brain implants can be more than most bargained for. Deep brain stimulation implants (which this isn't) in particular can be incredibly nasty even for those for whom implants are a last resort.


Yes, neural implants are invasive, dangerous, prone to side effects, and undesirable. They're just the first, and currently the only, way to get the signal.

Ideally, future generations will use a non-invasive sensor.

If you can invent one with high enough resolution, you will change the world. But first, or simultaneously, the other components of the system will have to be invented.

TFA is about decoding the ill-gotten signal. It's an impressive sign that our information-processing technologies and neuroanatomical understanding are already at the point where the system is viable.

If you could complete development of the non-invasive smart hat by this time next year, the world will be a different place by 2025.


Neural implants would ideally be non-invasive, with high spatial resolution and fast frequency/temporal resolution. The reality is that you can only choose two of these three. I don’t see non-invasive options being feasible with current technology. There is some good research into more biocompatible invasive options, but problems can take years to be found.


This is somewhat off-topic for this particular work.

The advances demonstrated here are in the algorithm and approach, not in the interface hardware (Utah array).


Like what? Infection?


Infection is one aspect. Another aspect is the battery system that powers the implant; the batteries are usually implanted in the chest with a wire that goes up the neck to the skull. There was a case[1] where tension developed over that wire, leading to pain, immobility and destruction of the connection in brain tissue:

> Steve had the surgery at Stanford, in November, 2012. After the surgery, he had “severe cognitive decline” and a slew of physiological adversities. “The leads [wires] were 18 inches longer than they needed to be, so they coiled it up in the chest and at the top of the head; I could feel them externally,” he says. “And the leads were too tight. I could move my ear and my chest would move, too,” he says of a condition called “bowstringing,” whereby scar tissue encapsulates the wires (partly from the body’s natural response to foreign material), which has been documented in DBS cases and can cause permanent complications. Steve also had many symptoms that were ultimately diagnosed as shoulder and jaw muscle atrophy, spinal accessory nerve palsy and occipital nerve palsy. He reported all adverse effects immediately and continuously throughout the first year of the study, but the trial doctors continually told him that they’d never heard of such symptoms with DBS, even though nerve damage and DBS wire-related “hardware” complications were among the potential risks listed on the informed consent document.

Because they're relatively experimental, it's almost impossible to find a doctor/surgeon/etc that will choose to work on you should you run into complications. If you have problems with the programming of the devices themselves, there isn't much you can do as a patient, and even specialists can't help you. The only people who can help you are those who developed the device. That can be a problem for a device that's meant to remain implanted until death. Removal is also a huge issue, because brain tissues grow on and around the implants. There are people who want their implants removed, but can't find a doctor who is willing to remove them because of the potential for brain injury and the resulting liability.

They can also cause personality changes, suicidal behavior and even homicidal behavior[4]. There are documented cases of increased impulsivity and impaired executive function, which have led to pathological gambling and shopping. Breakdowns of relationships and the ability to work are also documented[3].

Here's an article on the subject[1], and there are numerous studies that look into such effects, like this one[2] that aggregates dozens of studies.

[1] https://www.madinamerica.com/2015/09/adverse-effects-perils-...

[2] https://www.frontiersin.org/articles/10.3389/fnsys.2013.0011...

[3] https://www.frontiersin.org/articles/10.3389/fnsys.2013.0011...

[4] https://link.springer.com/article/10.1007/s12152-010-9093-1


Give it ten years and Bluetooth and a consumer-grade version would be pretty awesome.

Then again, countless potential technologies have been "10 years away" for a long time now.


How long until this is used in interrogations of criminals...


> 200 electrodes in the participant's premotor cortex

Is finding the right neurons just luck? Does the person somehow adapt to the interface?


Not really. Lots of neurons in this part of the brain modulate their activity with movement in heterogeneous ways. The algorithm details vary, but at some level you’re trying to find a 2D x/y velocity signal encoded in the 200-dimensional neural signals. This decoder is a bit more sophisticated (using deep-learning-style approaches), but a Kalman filter was state of the art for a long time.

For the adaptation, there’s a rich literature of neuroscientists in this field studying how the participant adapts to the control characteristics of the decoder, and how the decoding algorithm can be designed to adapt after seeing more data during use. Here’s one paper if you’re interested http://www.stat.columbia.edu/~liam/research/pubs/merel-fox-c...
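
To make that concrete, here's a minimal sketch of the classic velocity Kalman filter used in older cursor decoders (not ReFIT and not this paper's RNN; every matrix here is a toy value, and in practice the observation model H would be fit from calibration data):

    import numpy as np

    n_neurons = 200
    A = np.eye(2)                        # velocity carries over between time bins
    W = 0.01 * np.eye(2)                 # process noise on velocity
    H = np.random.randn(n_neurons, 2)    # tuning model: firing rates ~ H @ velocity
    Q = np.eye(n_neurons)                # observation noise

    v = np.zeros(2)                      # state estimate: (vx, vy)
    P = np.eye(2)                        # state covariance

    def kalman_step(v, P, rates):
        # Predict the next velocity from the previous estimate
        v_pred, P_pred = A @ v, A @ P @ A.T + W
        # Correct the prediction using this bin's observed firing rates
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + Q)
        v_new = v_pred + K @ (rates - H @ v_pred)
        P_new = (np.eye(2) - K @ H) @ P_pred
        return v_new, P_new

    rates = np.random.randn(n_neurons)   # fake binned firing rates for one time bin
    v, P = kalman_step(v, P, rates)      # v now holds the decoded cursor velocity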


Great tech for this exact use case.

But I'm dreading that people will start doing it on healthy people who just want an upgrade.

I've read enough sci-fi to have a list of 1000 ways it can end badly, and we only need one.

People are dumb enough to get phished by email, governments are spying on every phone, hackers manage to get access to nuclear plants, and you want to connect your brain to a network?


I’ll give you a positive counterpoint, my immediate thoughts from the headline alone were “great, finally a decent way to ensure I never suffer from locked in syndrome”, “wonder how invasive it is”, “I’m not in the right mood to even open this link”, and finally a small shudder as I try to avoid thinking about locked in syndrome by starting to focus on other tasks.


Sure, I can see plenty of good it can do.

Unfortunately, I think the harm it has the potential to do to humans as a species outweighs that by an order of magnitude.

It's useless to worry, though; we are a careless race, we will use it no matter what, and we will hurt ourselves in the process. I just hope it will not be too much.


All it needs to be able to do is page-down or scroll-down and I want it (I have RSI problems in my hands).


You can already get a gaze-tracking program to do this. (You can prototype one in MIT Scratch – yes, really!)


I think a foot pedal might be an easier solution than a neural implant, at least for a desktop computer


This is amazing! I'm assuming when they say "imagining" they mean visualizing it as a mind picture? I wonder how this works for folks with aphantasia. Do you imagine the motor action of writing? Can a paralyzed person still do that?


This sounds so cool; I wish I could try it out. Usually I never think about letters when typing, only words, and they are executed by my fingers instantly and non-consciously. I'm curious how this system would work for a fast typist like myself.


Reading this, I can't help but think of this great novella, Free Radical, based on System Shock http://www.shamusyoung.com/shocked/


I wonder if this is capable of interpreting more complex characters? I wonder if a patient who previously was able to write in simplified Chinese, for example, might be able to write faster?


Pretty awesome, pretty clear DNNs are the future for decoding/mapping biological signals/inputs. Tied in with reinforcement learning, amazing.


I love how we can do this complex stuff. Though, being a 90s kid, I imagine a T9-based brain-computer interface would be easier and faster.


I'd like to see a followup on that body of research with Chinese. I wonder if we'd have lower, higher or just similar accuracy.


That is seriously creepy. And I REALLY hope, if they make it into a product, that you can turn it off when you are asleep, because I know I'm not the only one who dreams about writing code sometimes, and I don't want to find myself debugging something I dreamed about the previous night with only a vague memory of having written it.


I'd quite like to get dream recordings to be honest, though not to the point of wanting to have an implant just for that purpose.


Why does this article not mention the names or institutions of the authors?


This is awesome, but also terrifying. The way the world is going, it is not hard to predict that in the future this kind of device will be used to tackle thought crime or "zap" you if you think of something government doesn't like.


Does this work on people with aphantasia?


my hands were always too slow for my brain


Title should be updated to 90 characters per minute. 90 wpm would be legitimately fast.


The title shouldn't have added that bit in the first place ("Please use the original title, unless it is misleading or linkbait; don't editorialize." [1]) so we've taken it out. The headline is just fine without it.

(Submitted title was "Neural implant lets paralyzed person type by imagining writing [90wpm]")

[1] https://news.ycombinator.com/newsguidelines.html


90 characters per minute is still "legitimately fast" for someone who cannot type otherwise!


Right, but 90 wpm would be faster than most people could ever do on a conventional keyboard; it's a whole other level.


It would be interesting to see how people compare in speed, assuming that the technology itself is not a bottleneck.

We could directly measure how fast a child thinks, or a CEO vs. a homeless person.

Eventually we would be able to pinpoint mental problems by measuring the time it takes to think about a certain topic, checking whether the mind "locks up" for a couple of seconds on a seemingly unrelated topic that got triggered by the context the mind was thinking about. We could pinpoint the unrelated topic and have a basis for psychotherapy that could be more accurate than just talking around in order to get to know the patient.

Let's say a group of 10 have to think through the ordering process at a Starbucks, where every step has been provided in a list. Doing this takes an average of 20 seconds, ±5 seconds. If there is an outlier, one could start to dig deeper into what exactly is making that mind wander off. Multiple tests in different scenarios could then decide whether the outlier is a slow thinker in general, or whether a certain thing triggers this wandering off.


90 wpm and I would be wanting to get one myself!


Yeah a 90wpm interface to a computer mentally would be incredible. Combine that with something like GPT3, use it to do natural language command line processing… If anyone is working on this I’d pay like 5k for a non invasive solution! ;)


If it helped me think at 90 wpm that'd be even better.


Just a bit of trivia perhaps, but one WPM is _exactly_ 5 CPM: "the definition of each "word" is often standardized to be five characters or keystrokes long in English" [1]

1: https://en.wikipedia.org/wiki/Words_per_minute


So this implant is 18 wpm, then - a bit slower than hunt-and-peck typing (which is ~27 wpm), but still very workable.


What is typical writing speed in terms of characters per minute? Maybe the next version could use a former touch typist and ask them to visualize typing on a keyboard.


The average English word is < 8 chars, so call it 8, and that gives you a bit over 11 WPM. Sent from a plane, I'm not doing more research than this.


Might be good to remove the ?comments=1 from the linked url so it doesn't scroll you past the article.


Done. Thanks!


It's not 90 wpm, it's 90 characters per minute. Big difference. Still awesome.



It seems Vim may finally have a challenger for the title of editing text at the speed of thought at some point.


Interestingly the constraint of only being able to use a simplified alphabet would let Vim "supercharge" this tech.

My assumption is that translating the thought "delete the current line that the cursor is on" to the actual action is still far away. And then expanding that to something like "delete the current line that the cursor is on and all the lines above it" might be even more difficult.

But the equivalent operations in normal mode are "dd" and "dgg"; this interfaces very nicely with the implant.


90 characters per minute is quite slow. I would expect they could get faster by having the user imagine moving a cursor over an onscreen swipe keyboard.


They address this in both the article and the video. Their previous version was exactly that and this is twice as fast.


I suggest you watch the video.



