Implant lets those with severe paralysis send texts with just their minds (pcgamer.com)
185 points by ohjeez on Feb 23, 2023 | 67 comments



I've done some testing with EEG sensors adhered to the scalp and know what noisy, limited signals those give you, but man am I looking forward to the day when people can send data from their brain directly to a computer. I don't mind shaving my head (I'm male in a family with a history of male pattern baldness, if that matters), but I'd 100x prefer a device that lived outside the scalp to one with probes beneath the skull.

The amplifier fidelity and data processing rate of e.g. the TI ADS1299 chips that I worked with 10 years ago are only going up slowly, but (now that I'm outside the field) it feels like the data processing potential, turning extremely noisy and hard-to-measure waveforms into letters and words, could move forward extremely fast with AI.
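
The first-stage cleanup on signals like this is fairly standard; here's a minimal sketch (raw_samples is an assumed 1-D NumPy array from an ADS1299-class front end at 250 Hz, and the exact bands are illustrative):

    import numpy as np
    from scipy.signal import butter, filtfilt, iirnotch, sosfiltfilt

    FS = 250  # common sample rate for ADS1299-class EEG front ends

    def bandpass(x, lo=1.0, hi=40.0, fs=FS):
        # keep the 1-40 Hz band where most usable EEG rhythms live
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    def notch(x, mains=60.0, fs=FS):
        # remove mains hum (use 50.0 outside North America),
        # usually the single biggest artifact
        b, a = iirnotch(mains, Q=30.0, fs=fs)
        return filtfilt(b, a, x)

    cleaned = notch(bandpass(raw_samples))

The hard part (waveforms to letters) sits on top of this stage, and that's where the AI bet is.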


One thing that concerns me a bit is stacking machine learning onto this. Imagine a 100% paralyzed person in the hospital. We attach some probes to their head and what comes back is a very noisy signal, but we've got a machine learning model that takes in a huge amount of context about interacting with computers and text, and uses that context to clean up the signal, just like phones today use machine learning to de-noise camera input.

Then let's say that our paralyzed person says something very impactful like "please turn off the life support" or "I will all of my money to my kids." Can you imagine the lawsuit over to what degree the machine learning was driving the boat?


Require them to spell it out letter by letter. Which, btw, is almost certainly how current brain reading works: you think hard to stop a pointer walking across a screen.


Or just ask a lot of questions which prove mindfulness.


How can you design questions that can't be answered by an LLM?


Ask them something specific that only they would know. Classic anti-cold reading technique.


Dumas has fun with this in "The Count of Monte Cristo".


A lot of them are Viterbi estimators; you lose predictive power when you decode at the character level only.
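
A toy sketch of why: a Viterbi decoder scores whole sequences under a bigram language prior, so evidence from neighboring characters can rescue a noisy one (the emissions/trans/init arrays here are hypothetical log-probabilities, not from any real system):

    import numpy as np

    def viterbi(emissions, trans, init):
        # emissions: (T, K) log P(signal_t | char k)
        # trans:     (K, K) log P(char j | char i), the bigram prior
        # init:      (K,)   log P(char k) at t = 0
        T, K = emissions.shape
        score = init + emissions[0]
        back = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            cand = score[:, None] + trans  # every (prev, next) pairing
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0) + emissions[t]
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

Set trans to all zeros (a uniform prior) and this degenerates to per-character argmax, i.e. the character-level-only decoding the parent is asking for.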


Most systems I have seen allow you to delete characters. This was close to 20 years ago now, though, so perhaps it has gone through a worse-is-better phase.

Now, the issues you state are very real, terrifying, and depressing. Still, anything that allows the person to communicate directly, rather than through an intermediary holding an eye board, will be a major step toward greater independence and the ability to hold those who abuse the disabled accountable.


In research, the behavioral tasks are typically randomized to avoid this sort of thing.

For practical purposes and legal decision-making, I have always envisioned a protocol where the patient is required to repeat back a random string in order to authenticate the system's output.

That is, the patient is shown, or played a recording of, the authentication sequence (by a separate device), and then must express it through the BCI before any device output from that session can be used as communication of intent for legal purposes.
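
A minimal sketch of that protocol (names and parameters are illustrative, not from any real system):

    import secrets

    ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"  # avoids look-alike glyphs

    def issue_challenge(n=6):
        # generated and presented by a device independent of the BCI stack
        return "".join(secrets.choice(ALPHABET) for _ in range(n))

    def session_authenticated(challenge, bci_response):
        # only treat this session's output as intent if the patient
        # echoed the challenge back through the BCI itself
        return secrets.compare_digest(challenge, bci_response)

The point is that a decoder's language model can't "helpfully" complete a random string, so a correct echo is evidence the channel is carrying the patient, not the prior.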


I loved what CTRL-labs was doing: they could read what your fingers wanted to do from an arm sensor. Not only was it supposed to be a great virtual keyboard, it also allowed some users to sprout a third virtual arm.

Unfortunately they were bought by Facebook and then... nothing. And if they do come out with something, you know it will be locked into their platform.


At Meta Connect they showed someone playing a mobile game (maybe Temple Run or Subway Surfers) with their mind.


Hot take:

The metaverse will take off when bi-directional BCI takes off.

I also think the smart players are trying to stake claims in the software landscape right now and hold on until the hardware improves. At least I hope no one actually thinks the current head-mounted display system is the way forward! I'd look at HMD VR as the equivalent of black-and-white resistive-screen Palm Pilots that communicated over IR: the foundational ideas were there, but smartphones were the revolution.


I think this as well, and I'm putting the proverbial 'money where my mouth is' by starting a company around it [0]. The signals one can get from a 'BCI' are quite magical. BCI is in quotes because I, like CTRL-labs, use the term loosely for control that is non-obvious to bystanders around you (no vocalization, no obvious hand movements, etc.).

Extending the idea here: much of the world is context-based. Kitchens house a few main functions across cultures, and I would bet money that on an 'invocation count' metric for Google Home/Siri, the highest-ranked function is the simple timer. If BCI companies finally got this around their heads (yes), then the skies would clear. That's what we think, and what we're proving with some pretty insane results. The common refrain from experts in the field is 'well, your metric is only one bit', and to that I say, 'well, yes, but it's the key to acting within the context.'

There's a lot of ink I could spill about this, but I'm excited by everyone getting it so wrong; it just proves that, with time, we will get it right. The only sad part is I have no idea how it turns into the kind of empire the personal computer became through its app-store distribution model. But honestly, I don't care: the user stories are gobsmacking.

And to add to your hot take, here's a bit more: AR won't be glasses or contacts. It will be Dynamicland plus interactive projection mappings of the world. Wearing your compute will be seen like mainframes on mini wheels: oh cute, but I think you're missing the point.

[0] An old demo of the first embodiments of this idea: https://www.youtube.com/watch?v=VUbJl_xiDFU


That's going to happen only when you have a million nanowires worming their way throughout your brain. Of course you can't insert that many, so it will need to be a device capable of self-assembly.


You've just predicted a sci-fi dystopia featuring actual brain worms.

Thanks, I hate it.


Don't worry, nobody would want the massive extra complexity to make them self-sustaining. They'll all lead back to the same implanted surface and if anything starts to go wrong you can unplug it. And it would be the opposite of contagious.


You need a feed into the visual cortex and then some form of input.

Full immersion? Yeah, maybe centuries away, but that isn't needed for a revolution. Scientists never perfected Smell-O-Vision, but cinema still changed the world!


Totally agree that we are decades away from either.


Based on the news coming out about them lately, Neuralink seems like a lost opportunity to accelerate progress in this space.

A ton of money was dropped into improving HMDs, and a lot of improvements to surrounding technical fields were made along the way; it's sort of unfortunate that physics fundamentally limits what can be done there.

If a similar investment had been made in BCI over 10 years, I suspect we'd be halfway to success.

However it is important that there are multiple companies dumping money into R&D, so that the best methodology wins. Even in HMDs, different devices learned from each other and everyone benefitted.

Of course implantable BCIs have more ethics concerns, and that likely is why the big tech companies didn't even try.

But the first company to get consumer level bi-directional BCI onto the market is going to become the largest tech company ever, orders of magnitude larger than anything we see today.

Based on your job, I'm guessing you are also sad that we've reached a local maximum regarding input devices! If machines were designed from scratch nowadays, meaning everyone had no prior expectations of keyboards/mice/touch/etc., I suspect we'd do everything very differently.


Neuralink wasn't a lost opportunity, because it wasn't a possible opportunity in the first place. We are so far off from BCI input to the brain that it's actually laughable. I'm willing to bet it's well over 60-80 years away.

Output is also a joke but a far easier problem to solve.


> We are so far off from BCI input to the brain that it's actually laughable. I'm willing to bet it's well over 60-80 years away

Progress depends on discoveries in surrounding fields having already been made, and on resources invested. To an extent, you can help spur surrounding fields with lots of $, but it seemingly isn't possible to force brilliant theoretical breakthroughs.

We have direct input for audio (cochlear implants) that needs iterating on; what needs tons of investment is visual.

Figuring out how to do research in a humane fashion, that'll be the tricky part, and it isn't something corporations have a good history of.


I also tried working with EEG data from an OpenBCI device. My goal was to get an accurate enough signal to move a cursor on an X/Y axis and use that to control a predictive text input tool like Dasher.

https://en.m.wikipedia.org/wiki/Dasher_(software)

The X/Y BCI signal goal seems fairly practical and could open up critical pathways for interaction and communication.

If anyone is interested in pursuing this idea, please send me a message. My previous attempts were derailed due to technical difficulties with the BCI hardware/software link, but I'd like to see this idea come to light.
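
For anyone curious what the X/Y mapping could look like, here's a minimal sketch under common motor-imagery assumptions (left_ch and right_ch are hypothetical one-second windows from sensorimotor channels; 250 Hz matches the OpenBCI Cyton):

    import numpy as np

    FS = 250  # OpenBCI Cyton sample rate

    def band_power(x, lo, hi, fs=FS):
        # mean spectral power of x in [lo, hi] Hz via a plain FFT
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2
        return psd[(freqs >= lo) & (freqs <= hi)].mean()

    def cursor_velocity(left_ch, right_ch):
        # toy mapping: lateralized mu-band (8-13 Hz) power drives X,
        # overall beta-band (13-30 Hz) power drives Y
        dx = band_power(right_ch, 8, 13) - band_power(left_ch, 8, 13)
        dy = band_power(left_ch, 13, 30) + band_power(right_ch, 13, 30)
        return dx, dy  # needs per-user calibration and scaling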


> but I'd 100x prefer a device that lived outside the scalp to one with probes beneath the skull.

So would everyone, but it's just too limited. It's going to be implants.


Would you mind giving me an idea of how I might go about procuring some sensors like this for myself? I'm at a loss for how to even search for one, especially one that gives signals I can parse myself rather than through some special software or proprietary machine. I'd love to be able to feed this stuff into code and mess around with it!


Try searching for OpenBCI, you can also look on AliExpress.


I really want to know more about the control the patient has over the interface.

i.e. if I think something stupid, I still control whether my mouth lets that out (typically). Do they have control over the mode of operation, like they think "OK Synchron!" and then issue some speech?


My understanding from their original paper is that Synchron’s device (known as the stentrode since its electrodes are on a stent scaffold) decodes only a binary signal for this trial, that is “intent to move” or “no intent to move” in a period of time (~1 second). Their paper mentions the decoder outputting no click, short click, or long click where a short click is movement intent followed by no movement intent, and long click is something like 3 consecutive movement intents followed by no movement intent.

The person types either by using eye tracking to move the cursor and clicking with the BCI device, or with a custom interface that cycles through characters one at a time, using only the BCI device to say yes to the current character.

So the decoding of intent isn’t at the level that your thought experiment is concerned about, but in general, you definitely could implement something that decodes an initial intent before subsequent recording (e.g., think about waking up the device). Trivially for Synchron’s device this could be X number of consecutive movement intents. For intracortical BCI devices with single neuron resolution, you could imagine more precise neural activity correlated with the intent to begin decoding.
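
Going by that description, the click logic itself is tiny; a sketch (the per-second booleans and the threshold of 3 are from the paper summary above; the function is made up):

    def decode_clicks(intents, long_run=3):
        # intents: iterable of per-second booleans, True = movement
        # intent detected. A run of intent that then stops becomes a
        # click: one intent = short, >= 3 consecutive intents = long.
        run = 0
        for intent in intents:
            if intent:
                run += 1
            else:
                if run >= long_run:
                    yield "long"
                elif run >= 1:
                    yield "short"
                run = 0

    list(decode_clicks([1, 0, 1, 1, 1, 0]))  # ['short', 'long']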


God, I hope someone implements a binary tree for these poor people; I can't imagine how frustrating it must be to type like that.
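
For what it's worth, the bisection version is simple; a sketch where answer_yes stands in for whatever single binary input the BCI provides:

    def type_char(alphabet, answer_yes):
        # pick one character with ~log2(len(alphabet)) yes/no answers
        chars = sorted(alphabet)
        while len(chars) > 1:
            mid = len(chars) // 2
            left = chars[:mid]
            if answer_yes("Is it one of: " + "".join(left) + "?"):
                chars = left
            else:
                chars = chars[mid:]
        return chars[0]

With 26 letters that's 5 answers per character instead of up to 26 with linear cycling, and a frequency-ordered (Huffman-style) tree would do better still.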


There is a simpler device: a glass plate with the alphabet, held between the patient and the other person. Humans are extraordinarily good at following someone's glance, and this is how a quadriplegic patient can spell out words. Franz Rosenzweig used such a thing in his last years.

It's surprising that no one has used a camera plus ML whizzo stuff, including predictive text, to speed up the process.


Haha, for typical use with Synchron's device they are using eye tracking. The BCI-only mode is just for research purposes/a baseline. It's also just what's in the paper; they may implement other UIs in practice.


> they may implement other UIs in practice

for the first time in my life I'm thinking "now here's something that should be called UX"


Couldn't something pretty close to that be done with eye movement and blinking?


There are a variety of Eye Tracking Communication Devices on the market.

I am not listing the manufacturers, since most of them are also involved in military and/or marketing applications, and I am done supporting surveillance and murder capitalism.

But the eye-movement interfacing tech is there and becoming more and more widespread. The major players have pilot studies at hospitals and R&D medical facilities across the world.

With the implant, the concept is that with further development it can be used for connection to locomotion etc. The proposed future potential of direct interfacing is larger, so to speak.

An exoskeleton with direct input from a fully paralyzed wearer can significantly contribute to rehab, just one scenario.


This is a really great point.

I feel like I have three levels of thought - unconscious, unprompted and active. Unconscious are what you'd imagine - I don't actively think them (like I don't hear the thought in my head), but clearly something's happening in my brain that's affecting my actions.

Unprompted and active are both things that I hear in my inner monologue or picture in my head. The former, as the name would suggest, are things that I'm not trying to think about - intrusive thoughts are certainly an example. The latter are things that I am purposely shaping my thoughts around.

Active thoughts are almost the only things that come out of my mouth (if I'm very surprised, an unprompted thought might come out). Would I have that same level of control here?


In past experiments that I'm aware of, you steered the cursor, or typed a letter, by spending a few seconds _visualizing_ a sort of "command image". Recalling the command image would generate a recognizable signal in a sensor array in a cap the patient wears.


When I'm trying to be funny, I'll sometimes feel a funny joke coming, but I have no idea what it'll be. Then it pops out, fully formed, seemingly out of nowhere. Sometimes I don't feel it coming.

It's strange, but I feel like this is what you're talking about with unprompted thoughts.


Or just try to sit still for a minute and not do anything, and you'll notice that the vast majority of your thoughts are unprompted and just arise from nowhere.


You would definitely have that level of control. These signals don't map to your inner monologue but they can pick up your intention to physically do something.

So if you think LEFT, you can move the selector on the keyboard to the next key to the left. And if you think CLICK, you can trigger a click. But you are focusing on asking your hand to physically make that click, which your hand obviously can't do, but they can track those neurons and simulate the action for you.
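
In other words, the interface reduces to a tiny state machine over decoded intent labels; a toy sketch (the labels and layout are illustrative):

    ROW = "ABCDEFGHIJKLMNOPQRSTUVWXYZ_"

    def run_keyboard(decoded_intents):
        # decoded_intents: stream of 'LEFT', 'RIGHT', 'CLICK' labels
        # from the motor-intent classifier; returns the typed text
        pos, typed = 0, []
        for intent in decoded_intents:
            if intent == "LEFT":
                pos = (pos - 1) % len(ROW)
            elif intent == "RIGHT":
                pos = (pos + 1) % len(ROW)
            elif intent == "CLICK":
                typed.append(ROW[pos])
        return "".join(typed).replace("_", " ")

Nothing in that loop ever sees an inner monologue; only the few classified motor intents get through.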


I wonder how it works for those of us without an internal monologue.


Just yesterday I was rewatching an old Star Trek episode, "The Menagerie," where the story centers around this poor victim of an accident ("delta rays") who is confined to a wheelchair, in a near-vegetable state, with only the ability to trigger a light: once for YES, twice for NO.

https://en.wikipedia.org/wiki/The_Menagerie_(Star_Trek:_The_...

Okay, still no flying cars, but we really ought to take stock of the many ways our present has already exceeded our recent past's expectations of even our distant future.


If you liked that episode, you might want to watch "Star Trek: Strange New Worlds", which tells the story of the crew before Kirk and how the captain of that crew (Pike) got into that position.


The episode always struck me as very strange: all that advanced technology, but all they can do is give this guy a single light.


Most of the Star Trek writers really didn’t care about technology. The space environment was just a backdrop for human and human vs. alien drama.

Evidence of that is how they used to add something along the lines of “insert technobabble here” in the scripts, for someone to fill in later. But that didn’t allow for any technical input on the nature of the scripts.

The exceptions might be some of the scripts written by sci-fi writers, like the one written by Harlan Ellison - but he hated that episode. When he won the Hugo Award for it, he dedicated the award to “the memory of the script they butchered, and in respect to those parts of it that had the vitality to shine through the evisceration.”


It is strange. I chalked it up to the idea that perhaps his injuries were so severe that, even with advanced technology, that single light was all they could manage for him; that if he were injured in the same way today he'd be brain dead, at the very least.


Yeah, if you can trigger a light you can send Morse code, to say the least (I don't remember the episode, so I don't recall the mechanism for triggering the light; maybe Morse wouldn't work)...
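
A sketch of the one-light encoding, since the mapping really is this small:

    MORSE = {
        "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
        "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
        "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
        "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
        "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-",
        "Y": "-.--", "Z": "--..",
    }

    def to_blinks(text):
        # one light: short blink = dot, long blink = dash,
        # a pause marks the gap between letters
        return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

    print(to_blinks("NO"))  # -. ---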


I love to see news like this. I had a serious brain injury that could have left me paralyzed (or dead). Thankfully I'm okay, but it's made me very empathetic to everyone out there living with severe paralysis.


Do you have brain fog? If so, is there anything you could recommend? I was attacked and have had post-concussion syndrome for almost a year, with no sign of improvement for about 6 months, and it's more or less destroyed my academic and intellectual potential.


Might be worth having a look at the Strange Parts YouTuber: he suffered a severe concussion after an accident, with serious brain fog, and has managed to come out the other side. I believe he used these guys:

https://www.cognitivefxusa.com/ or maybe these guys: https://neuraleffects.com/

No idea if this would be covered by your health insurance (assuming you're USA).

https://www.youtube.com/watch?v=Gs790JOeN3Y


Thomas Oxley? Endovascular? Looks like they finally commercialised the stentrode!

I remember seeing the initial trials of this back in the day. It basically functions like a slightly worse ECoG sensor (since the signal still gets filtered by the dura), but it's much easier to install. Think best-case performance of about 80 binary inputs per minute, or a really noisy cursor input.
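
Rough back-of-envelope on what 80 binary inputs per minute buys you, assuming an ideal binary code over 26 letters (real cycling interfaces do much worse):

    import math

    inputs_per_min = 80                    # best-case figure above
    bits_per_char = math.log2(26)          # ~4.7 bits to pick one letter
    print(inputs_per_min / 60)             # ~1.33 bit/s channel
    print(inputs_per_min / bits_per_char)  # ~17 characters/minute, tops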

Honestly, without the improvement in algorithms they were hoping for, there's not a snowball's chance in hell of this playing Doom within 5 years, let alone one. There are still fundamental unsolved problems (like the fact that ~20% of people simply can't use motor imagery BCIs no matter how hard they try, and a massive chunk of those who can produce incredibly noisy signals), but new sensors are always a step in the right direction (as opposed to new algorithms, which, in this field, are almost always complete bullshit).


Just be careful not to accidentally leak your passwords or 12-word crypto keys. It's like a hot mic, but worse. If such technology becomes more commonplace, and not just for paralyzed people, this will be a problem.


The article doesn't specify how it works, but my guess is that you're controlling some typing or word selection in a learned way, like how Stephen Hawking talked but with some EEG-style input instead of muscle movement. You're not broadcasting your thoughts.


It only registers a binary input (e.g. a mouse click); their YouTube video [0] shows a patient using it in conjunction with eye tracking. [0]: https://www.youtube.com/watch?v=mm95r05hui0


I can't help but think "there goes the last remaining bit of human privacy, namely that between your own ears". Ten years from now, all the Presidents for Life around the world will know with a reasonable level of certainty, who actually loves them and who does not. Off to the reeducation/extermination camps with the latter.


> The Syncron Switch is meant to be less invasive than other BCIs like Neuralink

While it is a direct brain implant, Neuralink's goal is to have it done 100% by robots, no neurosurgeons involved (with their hands, at least). Their device is also so tiny that it's hard to say it's more invasive than pushing something through your veins plus a chest implant…


As of right now, there is a lot more safety data in humans supporting the use of stents in vasculature, and because the device sits outside the brain (in the vessels), the blood-brain barrier is never broken. There is also a lot of safety data regarding putting devices in the fleshy parts of your chest/abdomen.

So I think it’s fair to say it’s less invasive than opening the skull and breaking the blood-brain barrier. That may or may not translate to riskier/safer, but with the data that exists right now I’d say it does carry less risk than a more invasive technology. At the end of the day, it will come down to risk vs. benefit, where invasive technologies have demonstrated far greater efficacy thus far.


> The year-long human trials for Synchron's BCI system have been peer-reviewed by a neuroscience medical journal in Australia, where the study found the technology safe and signal quality didn't degrade for its Australian patients. The study also concludes that "the favorable safety profile could promote wider and more rapid translation of BCI to people with paralysis." We'll give it about another year before someone's running Doom on it.

Could not find the full text for free, unfortunately.


https://jamanetwork.com/journals/jamaneurology/fullarticle/2...

Worked for me, but perhaps it's paywalled for some.


This is great news. I am suffering myself from some kind of neurological issue which hasn't been diagnosed yet, but a dirty EMG, brisk reflexes, and hand muscle wasting might already foreshadow paralysis. Ending up completely locked in must be one of the worst fates for a human being, and this technology gives hope.


Sort of thread cross-posting this, but it's a bit odd to combine the idea of this post with the one on human mind control of rats. Paralyzed people controlling other creatures via their minds. Sort of a cyberpunk familiar / spirit animal.


A YouTube video I found with more info: https://www.youtube.com/watch?v=mm95r05hui0


Still don't understand how this company got FDA approval before Neuralink.


There are a lot of factors going into an approval — a couple that may be relevant here:

(i) Synchron had clinical (in human) results from Australia before submitting to the FDA.

(ii) Synchron’s underlying mechanical device is a stent. A million Americans per year get a stent installed, so there is a lot of existing data demonstrating the safety of stents.

(iii) The procedure is basically a typical stent installation and is in many ways less risky than open brain surgery. Notably, Neuralink built a surgical robot to aid in the installation of their implant, but that is yet another product that needs to be approved. Thus, I'd imagine Neuralink's submission is vastly more complicated.


Is it FDA approved? I thought it was Australian?


Yes, they have IDE approval in the US for a clinical trial, after first conducting a trial in Australia.


Site hijacks the back button.


So, how long before the company goes out of business and leaves patients stranded?

See what happened to the ocular implants from Second Sight: https://www.bbc.com/news/technology-60416058



