“Neuroprosthesis” restores words to man with paralysis (ucsf.edu)
278 points by porsager on July 18, 2021 | 118 comments



Before we get ahead of ourselves:

>The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms.

Fifty words, that's it. For comparison, a nurse, a laminated piece of A4 paper, and a patient who can blink get about 15 characters per minute from the English alphabet.

Great PR, but not a substantial improvement over the BCIs from the past 15 years.


Is this really sound criticism? The first iterations of lots of technology were worse than their contemporaries. You could easily out-ride the first car with a horse in both speed and distance. You could out-calculate the first computers. The point is the potential the technology has. Here they are translating actual brain patterns to words. That's pretty awesome.


I should point out that I studied for a PhD in the application of Machine Learning to neurosignal decoding back in 2017-2018 (dropped out after one year), so this isn't just criticism I've plucked out of the air.

I spent a solid 6 months intimately learning the pros and cons of various existing BCI systems, as well as the exact methods researchers use to make their technology look like a breakthrough when it isn't.

Specifically, here, they've created a system that can decode one of 50 symbols - a mere 1 bit more than the English alphabet - and described the symbols as "words" so that the reader thinks each symbol carries much more information than it actually does. They've also cherry-picked the patient that responded the best.

When you peel back the hype, this is about the same performance we've been getting since '08 for an invasive, subdural, non-penetrative array.


From your top level comment:

> a nurse, a laminated piece of A4 paper, and a patient who can blink get about 15 characters

How is this a relevant comparison? 15 characters per minute is much less than the 15 WORDS per minute performance this work demonstrates

> I spent a solid 6 months intimately learning the pros and cons of various existing BCI systems, as well as the exact methods researchers use to make their technology look like a breakthrough when it isn't.

Then you should know this is a huge deal.

> Specifically, here, they've created a system that can decode one of 50 symbols

Previous motor and speech neuroprosthetic systems focused on pointing (controlling a cursor) and, more recently, handwriting (https://www.nature.com/articles/s41586-021-03506-2). This work gives a communication rate similar to the handwriting work, but decoding speech from the motor cortex has been much less understood than that of simple motor movements such as cursor position and velocity control.

Even more impressive, the test subject is not even a native English speaker.

> They've also cherry-picked the patient that responded the best.

Bravo-1 is the only subject they had.

> When you peel back the hype, this is about the same performance we've been getting since '08 for an invasive, subdural, non-penetrative array.

This is completely, utterly false. See (https://stacks.stanford.edu/file/druid:jx921pv3255/Technical...) for a survey of the performance of typing BCIs.

You seem to love to cite your background as a BCI PhD dropout. I should also point out that I have a completed PhD in invasive neuroprosthetics and still work in the field (not that it matters, when anyone can look up the sources and judge for themselves).


>How is this a relevant comparison? 15 characters per minute is much less than the 15 WORDS per minute performance this work demonstrates

15 words from a set of 50 (6 bits per "word" vs. 5 per letter in the English alphabet). It's like saying a 100 baud telegraph machine can decode 300 words per second, just so long as those words come from the set of "dot" and "dash".
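To put rough numbers on that, here's a crude back-of-the-envelope sketch (it assumes error-free selection and a uniformly-used symbol set, which flatters both systems equally):

  import math

  def raw_bits_per_minute(symbols_per_minute, alphabet_size):
      # Information per symbol is log2(|alphabet|) when every symbol is equally likely.
      return symbols_per_minute * math.log2(alphabet_size)

  blink_board  = raw_bits_per_minute(15, 26)  # 15 letters/min from a 26-letter alphabet
  word_decoder = raw_bits_per_minute(15, 50)  # 15 "words"/min from a 50-word vocabulary

  print(f"letter board: {blink_board:.0f} bits/min, 50-word decoder: {word_decoder:.0f} bits/min")
  # ~70 vs ~85 bits/min -- the same order of magnitude, not a 5x leap.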

>but decoding speech from the motor cortex has been much less understood than that of simple motor movements such as cursor position and velocity control.

Cursor position and velocity are outputs, the input is still a self-paced motor imagery task.

>Bravo-1 is the only subject they had.

Fair cop. Maybe they'll get genuinely impressive results with their next patient.

>This is completely, utterly false. See (https://stacks.stanford.edu/file/druid:jx921pv3255/Technical...) for a survey of the performance of typing BCIs.

I posted a link in a comment below showing an ITR of 35bpm from scalp EEG from pre-2010.

>I have a completed PhD in invasive neuroprosthetics and still work in the field

I am truly sorry for your loss.


krishna shenoy has some nice papers that apply information theory to bmis (albeit fused to a T9 style text entry task). either way, they attempt to quantify, in bits, the bitrate of the bmi. (which was, back then, quite low... a few bps)

i think it gets rather muddy as there isn't really a good metric for raw signal quality (afaik). there's cell tuning and number of spiking channels, but still not a great measure of snr for bmi work (afaik). often times people will apply measures to the outputs of their systems, like task performance, but part of the problem there is that often the state model has varying quality and suitability to task, so it can be difficult to disambiguate signal quality from state model performance.

(of course, in speech recognition they don't care, the game is to minimize WER and maximize decoding speed and whether language or acoustics (at least when they were separate) get you there, it doesn't matter)


>i think it gets rather muddy as there isn't really a good metric for raw signal quality (afaik). there's cell tuning and number of spiking channels, but still not a great measure of snr for bmi work (afaik). often times people will apply measures to the outputs of their systems, like task performance, but part of the problem there is that often the state model has varying quality and suitability to task, so it can be difficult to disambiguate signal quality from state model performance.

This was something I picked up on back in 2017. I did manage to come up with a definition of SNR that made some sense (basically the euclidean distance between symbol means, divided by the noise level along the vector connecting the two symbols, assuming the feature space was basically an N-dimensional QAM signal using features 1...N instead of amplitude and phase) - but even then that didn't take into account the fact that the noise was neither well-approximated by AWGN nor even constant...
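Roughly this, in numpy terms (a toy reconstruction from memory, not the actual analysis code):

  import numpy as np

  def pairwise_snr(features_a, features_b):
      # features_*: (trials, n_features) arrays of feature vectors for two symbols
      mu_a, mu_b = features_a.mean(axis=0), features_b.mean(axis=0)
      signal = np.linalg.norm(mu_b - mu_a)           # distance between the symbol means
      axis = (mu_b - mu_a) / signal                  # unit vector joining the two means
      noise = np.concatenate([(features_a - mu_a) @ axis,
                              (features_b - mu_b) @ axis]).std()  # spread along that axis
      return signal / noise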

And of course, as you said, you could get a bad SNR just because you're extracting the wrong features (although, to be fair, the same problem can exist in telecoms too).


hah, it's funny. i come at all of this from a sensorimotor control view (even though i was very interested in speech and language, as for a cs person discrete stuff is easier to reason about) and while i can appreciate ideas like this and the shenoy lab stuff, when i worked on this stuff in practice we had no discrete symbols, as we decoded continuous variables like positions, velocities, angles and torques - which didn't seem to have clean mappings into comms/noisy channel theory/info theory.


Haha, I get where you're coming from completely. There was a reason I didn't follow through with my research! Just too many unknowns.


Any chance you can please share a reference for that earlier comparable work? Not challenging your view, just curious


Oh God, I started looking up the papers and my brain started melting. I'd genuinely forgotten about SSVEP.

Anyway, knock yourself out with research: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&as_yhi...

Here's one from 2006 that got 35bpm without an implant (the one in the article would be closer to 60, but that's to be expected as it's invasive): https://d1wqtxts1xzle7.cloudfront.net/46069183/the_berlin_br...

Here's another from 2010 with a similar result under similar conditions: https://pure.ulster.ac.uk/ws/files/11410334/cecotti_tnsre.pd...


> Here's another from 2010 with a similar result under similar conditions: https://pure.ulster.ac.uk/ws/files/11410334/cecotti_tnsre.pd...

Within the abstract:

> The average accuracy and information transfer rate are 92.25% and 37.62 bits per minute, which is translated in the speller with an average speed of 5.51 letters per minute.

5.51 letters per minute, not words. The work you cited is not comparable to the UCSF work at all.

It seems you are measuring performance in terms of bit-rate (i.e. based on how many symbols per minute), which makes sense when you are using a cursor-based speller.

This approach is not a correct measurement of bitrate with this speech-motor decoder, however, as words are being decoded based on the syllables contained within them. The decoding model is trained to recognize 50 specific combinations of syllables, and the total number of unique single-syllable phonemes is about 44.
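As a toy illustration of the distinction (invented lexicon and probabilities, not the UCSF pipeline, and ignoring the temporal alignment a real model has to handle), the decoder scores each candidate word by its constituent sub-word units rather than selecting one flat symbol per step:

  import math

  lexicon = {                      # hypothetical word -> phoneme-like units
      "water":  ["w", "ao", "t", "er"],
      "family": ["f", "ae", "m", "ih", "l", "iy"],
      "good":   ["g", "uh", "d"],
  }

  def decode_word(unit_logprobs, lexicon):
      # unit_logprobs: unit -> log-probability from some upstream neural classifier
      def score(units):
          return sum(unit_logprobs.get(u, math.log(1e-6)) for u in units) / len(units)
      return max(lexicon, key=lambda word: score(lexicon[word]))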


>5.51 letters per minute, not words

Again, it is misleading at best to use a measure of "words per minute" when you're restricted to a set of 50 of them. A keyboard that had both English and Cyrillic characters in it would have 59 unique symbols.

>It seems you are measuring performance in terms of bit-rate (i.e. based on how many symbols per minute), which makes sense when you are using a cursor-based speller.

I genuinely fail to see what the cursor has to do with anything. Communication speed is communication speed.

>This approach is not a correct measurement of bitrate with this speech-motor decoder, however, as words are being decoded based on the syllables contained within them.

Just as symbols in cursor tasks are decoded based on relative position?


For comparison purposes, 15 characters per minute equals 3 words per minute^, so this would be about six times as fast as that approach.

^by convention


BRAVO-1's vocabulary was limited to 50 words; basically 6 bits per symbol instead of 5.


I still think it's good progress regardless.


It is not a substantial improvement over previous experiments from the late 00's.


It is a substantial change in how the intended word is selected. This new process lends itself to huge amounts of training and machine learning to vastly expand the vocabulary.


>It is a substantial change in how the intended word is selected.

Detecting (in)activity in M1 during imagined movement tasks has been the standard decoding method for 30+ years.

>This new process lends itself to huge amounts of training and machine learning to vastly expand the vocabulary.

Not without degrading accuracy. The fundamental limit to these systems is the human operator; not the decoding algorithms. ML has advanced dramatically since the turn of the century but almost all BCI performance gains have been from new sensors.


And it requires a brain implant.


Well this is what is public now.


What about people who can't make their eyes blink?


BCI could be a useful technology for them in future. My laminated paper example was just to point out how basic the current stuff is.


Incredible!

This evokes the feeling that it'll be impossible to keep computers out of our bodies at some point, which is scary on so many levels.

But like any tool, if I look at all the good that can come out of it - simply amazing things are possible.


You know how sometimes you have to fight auto correct while typing on your phone?

Imagine that with a brain interface...


Duck. No, Duck. I meant duck. Duck that. Ducking hell.

I could imagine easily going insane if I were forced to interact through a piece of technology that would not do what I wanted. It's bad enough when my phone won't do what I want.


Not being able to type fuck is such a weird cultural quirk America has forced down our throats.


The puritanism is real, and it's still a thing; people self-censoring on platforms like TikTok (even going as far as 'censoring' words like sex or even more innocuous words). Not sure if it's fear of The Man, or because they don't want to miss out on views from kids.


TikTok rapidly de-ranks any mention of suicide (a word Apple’s keyboard also avoids), so the kids have taken to saying they want to make themselves past-tense.


I wonder: Is creativity and innovation reduced in strongly puritan areas because of this?


I never have this issue. Is it a difference between iOS and Android keyboards, or did I add it to my dictionary long ago and forget all about it?


GBoard, at least, has a "block offensive words" option under "text correction" options. I think it makes sense - as bad as it is to accidentally type "duck", it's worse the other way round.

Context helps - if it looks like a duck, quacks like a duck etc...

Edit: this is a possible alternative meaning for "Duck Typing"...


Just wait until your interface doesn't allow you to defend Julian Assange, or deny the efficacy of masks.


A lot of people have experience with fighting mental BS, so hopefully that translates. OTOH they don't have experience fighting both at the same time...


The same kind of statistical models used for voice recognition and autocorrect on touch keyboards are also used for these electrode-in-cortex interfaces, so you can expect the same limitations in accuracy. I don’t think we appreciate how good our hands and fingers are, not just in terms of dexterity, but also on the information-theoretic level as a decoding system.


I mean... who thinks in individual characters? I would assume it would type the words out correctly based on what I'm thinking.

But to your point, would the system understand thought?

Knowing the difference between:

  > "I'm sending two to you" or "I'm sending to Two One Smith Lane." 
requires a lot of knowledge/intent if indeed you can't think in terms of individual characters.


It will likely be something like imagining typing, or swiping. It won't work at the level of words. You'd have to correlate between 50k and 250k individual words per person, which would be cumbersome and obnoxious. You have to correlate word components, and the easiest way to train is tapping into a feedback loop that already achieves your intent, like typing.

Systems like the palm pilot graffiti interface, or some optimized symbolic gesture system will become prevalent, but imagining typing will probably carry us sufficiently until better systems are worked out.


I'm more worried about privacy intrusions.


My muscular memory of Vim is so strong, I’d wager I could code even faster and more powerfully with a proper brain interface.


This always puzzles me a bit.

Is it the typing speed that limits the amount and quality of code one can write in a day? To me, it's mostly finding out things about the subject area, finding the right APIs, code search.

Of course, when typing is fast enough so that you never break your flow, it is important.


I think that in order to not break your flow, you should be able to translate your thoughts (the answer you found) to code or text as fast as possible. Typing slowly when you know what you want to write can be frustrating for one (lowering your motivation), but more importantly it is too trivial a task to keep your mind occupied (and thus parasitic thoughts can break your flow). A HCI that can help you write your code as you think of it (and maybe log the thought process for future reference!) would be awesome on that front.


The whole point of vim is normal mode, not insert mode. Insert mode is just typing, but normal mode is code or file navigation. It's not about typing speed but deliberate action and precision - with vim you need only a couple of keystrokes. In this context, speed refers not to typing code but to doing what you mentioned quicker (at least for code search, jump to definition, go back in cursor history), but there is Vimium for the web, which helps with research. My hands are slowly failing me and vim is a godsend.


I doubt it; I think that editing text via the brain would take a lot of focus to think the right things at the right time. In practice I think your brain would go one direction and the interpreter would type that, so then you'd think "no wait go back I was just distracted by a squirrel" and the whole document is deleted. Or something.


I think it could be cool being born with a symbiote AI.


This is computationally significant: it places human thought unambiguously into the digital realm, because it shows the data protocol between thought and output is machine-learnable.

The protocol for sensory input has likewise been prosthesised by Paul Bach-y-Rita’s tongue grid array for vision.

https://www.researchgate.net/scientific-contributions/Paul-B...

https://en.m.wikipedia.org/wiki/Paul_Bach-y-Rita


That is not what is happening in this study, which is tapping into the motor system that controls the muscles used in speech. That is part of the externalisation of language, not the system of thought (which we know very little about).


> That is part of the externalisation of language, not the system of thought (which we know very little about).

Actually, we have some idea, and one theory is that they are not as distinct as we’d like to think; at least they work bidirectionally and reciprocally: language in turn structures the thoughts themselves, so there is a way that externalization actually feeds “inwards”. See distributed/embodied cognition.


Linguistic determinism is really popular in academic circles at the moment, but it has been disproven. For example, there is a South American tribe, the Pirahã, who have no words for colors, but they have the cognitive ability to perceive them and draw analogies to objects of similar colors.


I don't think that is about linguistic determinism in the sense of the Sapir-Whorf hypothesis; it seems to be more about Chomsky's theory that language is an internal thought mechanism first, and a communication mechanism second. That is, all thought is internally represented in language-like structures - not in Chinese or French, but in internal language trees which can, if you decide to externalize them, be translated to an external language phrase that you know.


I'm pretty sure that some of the more outlandish claims of Everett (the big Pirahã guy) have been cut down to size by his peers, especially the brouhaha over the Pirahã's lack of recursion. If you're interested I have a few articles saved.

As for the popularity of linguistic determinism, who knows. I wouldn't trust a linguist who took the strong Sapir-Whorf hypothesis seriously.


Invoking linguistic determinism here is a strawman. I never said all cognition is exclusively shaped by language.


Your assertion of disproof isn't supported by the example you've provided. Please explain further, because not having words for colours in no way means the tribe isn't substituting some other property of linguistic determinism in place of the words we use for colours.


This 1000x. It's not the scientific miracle the press makes it out to be. While it is incredible that it restores speech, it's not restoring language, as some sources have claimed. This ain't gonna fix most aphasias.


Yes, this is just output. And Bach-y-Rita showed input is digital.

What’s left is all the logic between input and output.


Yes, we have sophisticated digital systems that interface with our (analogue) sensory systems - a screen and keyboard are examples of this. Maybe these sensory interfaces involve some machine learning (maybe they don't). In the future, we might stimulate the sensory systems directly. But with any advances in technology in this domain we won't get closer to understanding the "logic between input and output" as you put it - i.e. what are the atoms of thought, how do they combine, and how these internal thoughts relate to external language. We can only meaningfully tackle these questions by linguistic theorising and experiments.


> But with any advances in technology in this domain we won't get closer to understanding the "logic between input and output"

We gain a pathway for investigation backwards. What neural substrate is active in generating the speech data? Work backwards from there.


As a proponent of biolinguistics, my bet is that as we study those neural substrates more closely, we'll probably just learn more about the sensorimotor systems (which we largely share in common with other mammals) that have been coopted for expression of language in humans.

We already have a useful pathway for investigation: data from speech/sign of a speaker of a particular language. In fact this already yields a kind of overabundance of data. The tricky part is finding the right kind of data via careful experimentation and organising it, by building explanatory theories of language that meet the conditions of evolvability and learnability.

(Which is not to say that cognitive sciences can't shed any light on this, but studying the brain independently of linguistic experimentation is a dead end IMO. Brain imaging studies coupled with linguistic studies have yielded interesting results - see Andrea Moro's work on "impossible languages").


Part of me thinks it would be demoralizing to have gained speech again, only to lose it once more at the end of the study once the electrode array is removed.


Since neural networks are universal function approximators, they can learn any protocol.


I've read this notion before, but whenever I've actually looked into it, neural networks only seem able to represent differentiable/integrable functions, not any computable function.

More importantly, the fact that any function can be represented as a NN does NOT mean that any function can be learned through known NN training mechanisms, not even in principle (i.e. given arbitrarily many, but finitely many, examples and unbounded but finite time).

And of course, there is always still the vague possibility (which I don't personally subscribe to) that the actual function is not Turing computable, which would mean it certainly can't be approximated by an NN.


I am really really tired of this stupid, half-educated claim that gets repeated again and again by NN fanboys.

There has never been a dearth of universal function approximators: polynomials can do it, splines can do it, sines/cosines can do it. Being a universal approximator is hardly unique or special.

There is absolutely something special about DNNs, but being universal approximators is not it.

Being able to learn a function from data is very different from being able to represent that function.
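For instance, a plain least-squares polynomial fit already drives approximation error down on a smooth target without any neural network involved (a minimal sketch; the target function here is an arbitrary example I picked for illustration):

  import numpy as np

  x = np.linspace(0.0, 1.0, 500)
  target = np.sin(6 * x) * np.exp(-x)      # some smooth "unknown" function

  # Ordinary least-squares polynomial fit -- universal approximation, no NN.
  poly = np.polynomial.Polynomial.fit(x, target, deg=12)
  print(f"max abs error: {np.max(np.abs(poly(x) - target)):.1e}")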


Well maybe, but good luck breaking encryption this way.

Being a universal function approximator doesn't magically solve every problem.


> Funding: Supported by a research contract under Facebook’s Sponsored Academic Research Agreement…

Big tech isn’t far away from this. Scary to think what kinds of applications could be derived from this technology. Imagine a hi-tech polygraph that maps your brain activity to speech. Do you have plausible deniability if something incriminating blurts out?


There are already pretty advanced methods for crime detection like that one thing where they show pictures of various objects while measuring brain activity to determine if you've seen the specific object before. Not sure if that tech is actually deployed though.


I want Grammarly in my brain. I want to have real time speech improvement of my audible speech. If I could have a privacy respecting brain computer interface to improve my speaking and rhetoric, I would.


Grammarly might seem like an improvement if you have poorly developed grammar, but it’s a limited substitute for educating yourself in the English language. Why? Because if you outsource expression to an app you lose the personal dimension of your thoughts. This is more than nuance: it is a straitjacket, because it stops you from thinking about what you are saying.


I agree. I often notice a strong correlation between a high level of writing skill and those who have considered a subject well and can express a point clearly, and the opposite. With those who seem to have no idea what a comma or full stop is, or the difference between your and you're - once I've finally unpacked what it is they're trying to convey - I find their reasoning is also of a comparable, very low quality.

I don't think that giving everyone the gift of good grammar would create a world of geniuses but it might do what every other useful tool in history has done, help "level up" those at the bottom to a better standard (of, in this case, reasoning) and free those at the upper levels to really create.


You may be right, but could your bias against those who don't use Standard English (orthography) be influencing your judgement of their reasoning skills? I wonder what a study comparing the reasoning skills of a user of SE communication with those of a vernacular or ill-educated user would show if the argument were kept the same.


You make a good point about the legitimacy of vernaculars.

Separate to idiom is the process of rewriting, whereby rough thoughts are honed to sharp points.

“I have rewritten — often several times — every word I have ever published. My pencils outlast their erasers.” ― Vladimir Nabokov

“Revision means throwing out the boring crap and making what’s left sound natural.” ― Laurie Halse Anderson

“Secure writers don't sell first drafts. They patiently rewrite until the script is as director-ready, as actor-ready as possible. Unfinished work invites tampering, while polished, mature work seals its integrity.” ― Robert McKee

“When asked about rewriting, Ernest Hemingway said that he rewrote the ending to A Farewell to Arms thirty-nine times before he was satisfied. Vladimir Nabokov wrote that spontaneous eloquence seemed like a miracle and that he rewrote every word he ever published, and often several times. And Mark Strand, former poet laureate, says that each of his poems sometimes goes through forty to fifty drafts before it is finished.” ― Susan M. Tiberghien, One Year to a Writing Life: Twelve Lessons to Deepen Every Writer's Art and Craft

“I do so much writing. But so much of it never goes anywhere, never sees any light of day. I suppose that's like gardening in the basement. I don't publish so much of what I write. I just seem to plow it back into the soil of what I write after it, rewriting and rewriting, thinking that somehow it gets better after the fifty-second-time around. I need to learn to abandon my writing. To let go of it. Dispose of it, like tissue.” ― J.R. Tompkins

“Writing a first draft is like groping one's way into a dark room, or overhearing a faint conversation, or telling a joke whose punchline you've forgotten. As someone said, one writes mainly to rewrite, for rewriting and revising are how one's mind comes to inhabit the material fully.” ― Ted Solotaroff


It might be an interesting study but I'd have to wonder what kind of bias one could have against non-standard English usage unless one was an English teacher.

Or conversing with an American, the spelling might tip me over the edge! ;-)


>once I've finally unpacked what it is they're trying to convey - I find their reasoning is also of a comparable, very low quality.

One of the most brilliant inventors I know has terrible written grammar due to dyslexia. I'd be careful with such blanket statements.


A strong correlation isn't a blanket statement, and I'm referring there to my experience. How many dyslexic and brilliant inventors would I need to have conversations with before the correlation weakens to nothing?


> If I could have a privacy respecting brain computer

something's gotta give. and im pretty sure it'll be your privacy


Train ML model centrally, sell brain computer that runs it locally and don’t send data back. Privacy.


If you can afford a brain computer sure, but for everyone else, every 20th sentence will be an advertisement for Doritos.


I'm just wondering: are advertisements worth that much?

Surely brain computers will be a pretty big investment for people; would embedding advertisements be worth it for companies? I'd imagine most people are willing to pay another few thousand dollars on top of the very high cost to avoid any sort of obnoxious features, for both personal and potentially socially-influenced reasons.

It's hard for me to see how anything could be worth more to companies than straight profit.


Well I was intentionally misrepresenting how it would likely work for effect. It's really just more monitoring / data gathering so they can draw a more accurate profile of you in order to advertise to you and your relations.

But what I was saying is that even if you could pay for a brain computer, many less fortunate wouldn't be able to, which opens the road for advertising subsidized (surveillance) brain computers.


Being able to beam ads to the wealthiest folks (who can afford brain implants) sounds like a very high value to companies


I wouldn't be surprised if it's the poor that use it the most. I imagine it will be used like a potent drug (like crack) targeted at the lowest common denominator.

Do you think FB will make people pay for it, to receive Like notifications in your brain?


At which point.... is a person a person or are we all just computers?

Maybe we are already all just computers in a simulation. Ghost in the Shell thoughts are rapidly coming back.


A BIG portion of the "point" of the Ghost in the Shell franchise (beyond the original first movie) was that the "line" between human and machine was already blurred beyond usefulness as soon as they started relying on fire and tools to build a civilisation.

My take (apologies in advance for the self-indulgence): A commonly-claimed revelation or drug-assisted-insight is that "we are all one consciousness experiencing itself subjectively, there is no such thing as death, life is only a dream, and we are the imagination of ourselves". This is a foundational concept in the idea that "cybernetically-assisted individuals" is only a minor extension of our current reality, the tiniest blip upon our shared planetary history's march of progress. It should be treated as such, but hopefully better managed than our previous technological accelerants like fossil fuels...


Brains are basically analog computers made of flesh


I genuinely presume this sort of transfer learning will outpace, or even be a prerequisite for, the neural lace used to integrate it with a customer's consciousness.


We imagine the future will be tools like 'grammarly' in our brains; I'm beginning to think the future will be more like 'ads for grammarly' in our brains.


Why stop there, go all the way to NEXUS-6 if possible.


But do you want Clippy in your brain?


Requires brain surgery. But a few more generations of the technology, and people will be texting with it.


Throw in better prediction software, and better application support

for example,

https://copilot.github.com/

and those 18 words can turn into a lot more


I know this was a joke, but this is some sci-fi level plot.


74% median decoding accuracy. Up to 93% best performance.

Is 74% decoding accuracy good for sentences of such short length? Do we know if the misses are at least semantically close or random?

It's very weird and suspicious to me that they mention the best-case performance in the video without any explanation, statistical quantification, or context. It sounds like they really wanna make this sound better than it is, in this video at least (the info is likely in the paper).


How long before thought crime actually becomes a thing?

Imagine poorer people being forced to wear a Facebook Talk device that listens to your thoughts and then projects adverts directly into your eyes. They will also know when your disability check is coming and what your desires and needs are. They'll be sure to show you something tempting to buy...


> How long before thought crime actually becomes a thing?

My first thoughts as well.

While this iteration requires implants and a conscious effort to function, I'd bet a year's worth of salary that future iterations will be able to function without either.

I personally think thought crime is an inevitability at this point. I feel sorry for future generations.


I wonder if there will be something a step further - a device that will prevent forbidden thoughts from occurring...


Google Brain Drive will ban “misleading thought content”.


Interesting that this comment is downvoted - I'm going to save it and come back in 10-20 years, when it will absolutely be the state of affairs.


This isn't mental telepathy, this is using voluntary attempts to activate the vocal cords to produce text using an implanted device.


The ability to read your thoughts at some level from the outside is probably already a reality in primitive ways.


The lab team did an AMA on r/AskScience today. https://www.reddit.com/r/askscience/comments/onbs7v/askscien...


Anyone know the best legal way in the US to prevent yourself from being kept alive if you end up conscious but completely paralyzed? I know living wills can handle some medical scenarios, but I’m having trouble finding an answer on locked-in syndrome. Wondering if I’d need to form a euthanasia pact with a friend or something.


Do you like our Dear Leader? - "NO" detected in brain waves - Off to the reeducation camp you go!


It would be interesting to see followups about these people who had significant brain-computer interfaces. How long the implant stays functional, infections, life-expectancy changes.

The Dobelle Eye, for example: in the early 2000s the Dobelle Eye brain implant allowed a blind man to drive a car in a parking lot using cameras that fed video into his visual cortex. They had to perform the operation outside the US (in Lisbon), and patients had occasional seizures even then.

Wired article: https://www.wired.com/2002/09/vision/

https://en.wikipedia.org/wiki/William_H._Dobelle


Lots of hackers get excited reading about BCI but insist GitHub Copilot will never be useful. BCI like this is Copilot running on a particularly noisy signal from your brain. Want to code with this tech? Expect something like, if not literally, Copilot.


When I first saw the presentations going around regarding the recording and decoding (and reconstruction of vocal cord/oral muscle movements), the one thing I wasn't sure of was how much of the signal was vibration artifact. Neural electrodes (at least single-unit ones) tend to pick up a lot of vibration artifact, which we see when people talk, cough, etc. with those in.

I guess if they are encoding and decoding voice from these...does it prove direct neural encoding/decoding from LFP/single units, if they can generate robot voice from someone who can't generate sound and thus vibrations on their own?


This is the kind of study that makes me feel we are already living in some kind of post-future.


Imagine if someone finds a zero day for that guy's prosthesis and starts putting words in his mouth.


Imagine your phone asking "Can Google Keyboard have access to your Brain?"


Facebook stopped funding for its BCI research after this.


Got any reference for this?


This could be an excellent tool for interrogation.


Makes me wonder if it will work for someone who doesn't speak English, or whether they will have to recreate the vocabulary?


Comatose patients would be so relieved to be able to communicate with the outside world


I don't think that's exactly the condition you're looking for here:

https://en.wikipedia.org/wiki/Locked-in_syndrome

https://en.wikipedia.org/wiki/Coma


A Coma is prolonged unconsciousness. These patients are awake, aware, and yet trapped in their bodies, unable to communicate.

Most previous brain-signal translation has been adaptations of typing in some form. This is directly translating signals intended for the vocal cords, i.e. for vocal speech, into text.


A nice thought, but I feel like this is more likely to be useful for the paralyzed. Comatose indicates minimal brain activity.


Not exactly. Comas just indicate prolonged unconsciousness and inability to react (in terms of body movement) to stimuli, not minimal brain activity.

https://www.brain-injury-law-center.com/blog/brain-activity-...

A complete lack of brain activity is brain death. Brain activity is often slowed in comatose patients, but that varies from patient to patient.

https://www.mayoclinic.org/diseases-conditions/traumatic-bra...


So this means we can think about 18 words per minute? Sounds reasonable. I don't think I think any faster than that. Could there be some other ways to measure it? Like timing myself?


This is a prosthetic that translates brain signals into speech at 18 wpm.

Human speech is generally about 150 wpm in English. I've read that the informational output is relatively consistent between languages, so in a language like German you get bigger words and fewer wpm. Assuming that's correct, 150 English wpm is probably close to the processing speed of the brain, minus the overhead of converting thoughts to speech.


Your message here was 34 words -- did it take you two minutes to think of its content?



