I'm quadriplegic and since lockdown began last March I've not left this bed once. That's not hyperbole, I literally mean that I haven't left the spot in this bed I'm in right now, writing this comment to you all.
So yeah, connecting with the world like this is something I need to be a part of. I really don't have words to describe how life-changing this would be.
Really hope they'll make something like this available to the public.
The term quadriplegia does not always refer to 100% paralysis, and some have limited use of their hands.
I've often completely forgotten passwords and had to rely on this muscle memory to regain access.
I had to unlock my laptop to play some music so I just tried to relax and let my fingers do the thing. I did it.
Motor movements are some of the deepest knowledge we have.
Don't know if it's a real thing, but in some CSI-type show years ago there was a witness with amnesia (couldn't remember their name, etc.). They figured out his identity by engaging him in conversation to distract him from the "I can't remember" thoughts and then putting a form in front of him that he was supposed to sign (IIRC it may have been a witness statement). The movement of signing something was so ingrained that it persisted even through the amnesia.
Anyone know if this is a real thing or just fiction?
It's a famous case in neurology, a man called "H.M."
Here's what's written in the chapter "Stress and memory" about H.M.:
"H.M." had a severe form of epilepsy that was entered in his hippocampus and was resistant to drug treatments available at that time. In a desperate move, a famous neurosurgeon removed a large part of H.M.'s hippocampus, along with much of the surrounding tissue. The seizures mostly abated, and in the aftermath, H.M. was left with a virtually complete inability to turn new short-term memories into long-term ones -- mentally utterly frozen in time.
Zillions of studies of H.M. have been carried out since, and it has slowly become apparent that despite this profound amnesia, H.M. can still learn how to do some things. Give him some mechanical puzzle to master day after day, and he learns to put it together at the same speed as anyone else, while steadfastly denying each time that he has ever seen it before. Hippocampus and explicit memory are shot; the rest of the brain is intact, as is his ability to acquire a procedural memory.
There have been times I wanted to switch layouts but qwerty is just too far in there for too many decades.
I take this to test my speed: https://10fastfingers.com/advanced-typing-test/english
Now I've tried to take the test again after roughly a year, out of curiosity, and I got around 75 several times.
Maybe it will improve very slowly past this level?
Do you perhaps remember how the improvement trend was with you?
Maybe try something that forces you to type quickly like the game The Typing of the Dead or The Typing of the Dead 2? They’re silly games but might help since they add urgency to the typing.
My gut feeling is that this is simply the speed range where the benefits of faster typing during composing quickly fall off. If you're thinking about what to say, you'll write a burst, think, write, think, change a bit, write some more; the limiting factor isn't really WPM.
Transcribing, of course, is another matter, but that is a specialized case.
Will try to feel less satisfied with my writing speed, and see if it helps. Thank you for taking the time to share!
For reference, my typing while transcribing rate is in the 100-110 range. My daily-use composition is likely half, simply because I spend most of my time composing in my head.
I will say that the big advantage comes from being completely comfortable touch-typing, without needing to look at the keyboard. Once you've achieved that, the mental load of typing fades into the background, and you can spend more time considering content instead of the mechanics of creating it.
Are there more effective techniques than just typing? I mean like measured techniques for identifying and improving problematic aspects of typing rather than just going at it repeatedly in the hopes that I'll improve overall?
Thank you for sharing, was useful to compare!
If you do change layouts, research them a bit first. I liked dvorak, but it used the right hand more than the left by a fair bit, like the opposite of qwerty. My right hand didn't deal with the strain as well as my left hand did in qwerty. Workman is about 50/50 left and right hand usage, and also avoids putting common letters in the middle two columns, which reduces how often stretching is needed.
Colemak seems a bit better than Dvorak on paper, but it still has some flaws that led to the creation of a "Mod-DH" variant, which moves D and H to less straining spots since they're common letters.
Workman solves the same issue and is a lot more efficient than even Colemak, plus the 50/50 hand balance seems entirely unique to Workman. That balance was why I chose it over the Carpalx QGMLWB layout or the AI-generated Halmak layout, which are even more efficient in some respects than Workman. I have a friend who uses QGMLWB, but as far as I know, he doesn't get any pain in his hands. He also doesn't type quite as fast as I do, which may be related.
Lastly, I want to mention that I built an ergonomic keyboard and used it with Dvorak for a couple of months without my hand pain going away, which is why I bothered changing layouts again. The journey has been educational. I never used to care about stuff like the pinky being weak and overused. Now I have stuff like backspace and enter on thumb keys, and I think about how far I have to stretch to reach certain keys, etc.
Some of these ergo keyboards take some keys off the right side compared to something like a 60%, but then you realize your right pinky was covering a huge amount of keys, and maybe it's worth having an extra inner column where you move the = sign to. I use a Pinky4 after using a Pok3r in the past, and it seemed annoying at first that the top row didn't fit the - and = right of the 0, but I prefer it now that I'm used to it. I've tweaked the key layout several times in QMK and it's been a lot of fun. So, I'd recommend both changing layouts and replacing your keyboard if you have any interest in this stuff.
Interesting effect though. This reminds me that it’s almost impossible to draw a bicycle from memory. https://www.amusingplanet.com/2016/04/can-you-draw-bicycle-f...
I also never use the numeric keypad, but I think that's more to do with being left handed.
I suspect this is in some part due to the arrangement of the numbers.
Numeric keypads on these kinds of devices often have "1 2 3" on the top row, whereas on keyboards and calculators it is reversed: the top row there goes "7 8 9".
I wonder what it's like to use. I know there have been many attempts at this, and they're steadily improving, but I watched a talk at EvoMusArt 2007 where researchers used a smaller array of cranial electrodes to have the user control a mouse. That it worked blew my mind. But they talked about how slow and noisy it was to get the cursor from one side to the other... maybe a minute IIRC. What I remember most distinctly was the researcher said it was physically draining to do, that after 5-7 minutes of this activity they would be sweating and exhausted, without moving.
People have been able to move mouse cursors and type using only their brain and tiny implants for decades... so far Neuralink seems to just be repeating these experiments, but receives a lot more hype.
Here's an article from 2006 with someone moving a mouse cursor and clicking things with a similar tiny brain implant.
Is there something new here, or just the Musk train?
>the participant was able to produce about 90 characters per minute, easily topping the previous record for implant-driven typing, which was about 25 characters per minute.
So it appears to enable typing at > 3x the speed of previous efforts. At least for this one particular person.
This, on the other hand, is watching for complex neural activity, in that it learns what pattern appears when a participant pictures drawing an A.
Think of the difference like this: before, the input was controlled by turning your whole body; now the input can take individual sign-language letters.
Previous efforts included moving a cursor with the implanted Utah array, which I don't think was just based on broad brain waves was it?
Are you thinking of non-invasive devices? Neuralink didn't invent implanted microelectrode arrays. Some of their press has been around being the first one wirelessly connected to a receiver, but it wasn't first there either. Mainly their roadmap for scaling to more and more electrodes and their custom chip for processing is what's new, I think, but they aren't sure whether the polymer coating they are using will hold up long term.
I would risk my life to be able to speak at normal speeds, versus incredibly slowly.
I would risk my life to be able to walk, even slowly, rather than be in a wheelchair (I imagine.)
EDIT: To the downvoters, it’s true, and there are other severe issues with these implants. Have a read, since many of you are laymen:
1. Audio Narrated Version of New Yorker Article: https://share.audm.com/share/newyorker/mind-machines-kenneal...
2. New Yorker Article: https://www.newyorker.com/magazine/2021/04/26/do-brain-impla...
Quite horrifying to read. Absolutely nothing he said made any sort of sense.
Divorce rates spike after diagnosis of, and successful treatment of or recovery from, all kinds of chronic health issues, as they do after all kinds of major life changes. Disruption in living patterns changes the context and patterns of relationships.
Leggett’s identity changed again once the device was gone. Now she knew great loss, but she also knew things that had been impossible to understand before the device. Like many people with epilepsy, she had often found herself fuzzy for a considerable amount of time after a seizure. That state made it very difficult to notice the signs that preceded seizures which could act as a natural warning light. These days, when she gets a funny, flip-floppy feeling inside, she takes anti-seizure medication. She’s not always sure. Sometimes she gets her husband to weigh in. He says, “Go with your first instinct,” and usually she takes a pill. She is now seizure-free.
To answer your question, it is merely a continuation of research in the field.
Probably a good analogy is that we had letters and fax machines before the internet, the internet is just much faster with much higher bandwidth.
Designing the full stack to be minimally intrusive seems unique.
Using the words "minimally intrusive" in the context of open brain surgery is quite amusing to me.
No matter who or what performs the surgical intervention, it still involves cracking open and replacing part of the skull and inserting foreign bodies into cerebral tissue.
That's as intrusive as it gets.
Neuralink uses an implant, inside your skull, so that this noise isn’t a problem.
Also: With EEG caps, we have filters that remove the noise from movement and muscle activity very well.
With brain-computer interfaces you use motor imagery, where you imagine motor movements (e.g. moving your left arm so that your palm faces outward, and imagining the position of your shoulder, arm, and fingers) to control the BCI, as with prosthetics or, in this case, handwriting. I imagine that since handwriting involves fine motor movements (and a different brain circuit, too), it is much more difficult to visualize.
Motor imagery is also used in stroke rehabilitation. It is also used for sports performance.
Graded motor imagery is a variant of motor imagery, and there are apps for it. It basically rewires your brain's pain circuits and makes pain more manageable and more controllable. It works for chronic pain of any origin.
For example, in graded motor imagery, apps show you images of arms/legs/etc. in contorted (twisted) positions and at various angles, and based on the positions of the fingers/toes/etc. you are supposed to identify whether the limb is a "left arm", "right arm", "left leg", or "right leg".
I have tried graded motor imagery (via apps), and even though I have excellent spatial skills, it is quite a difficult exercise.
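For a rough picture of what those laterality-training apps do, here's a minimal sketch (the image filenames and stimulus set are made up for illustration; real apps use large photo banks and track progress over sessions): show a limb picture at some rotation, ask "left or right?", and record accuracy and response time.

    import random
    import time

    # Hypothetical stimulus set: each entry is a limb photo plus its true laterality.
    # Filenames and rotations are made up purely for illustration.
    STIMULI = [
        {"image": "hand_rotated_90.png", "side": "left"},
        {"image": "hand_rotated_180.png", "side": "right"},
        {"image": "foot_rotated_45.png", "side": "left"},
        {"image": "foot_rotated_270.png", "side": "right"},
    ]

    def run_trial(stimulus):
        # Show one image (here just its name) and time the left/right judgement.
        print(f"Left or right limb? -> {stimulus['image']}")
        start = time.monotonic()
        answer = input("l/r: ").strip().lower()
        reaction_time = time.monotonic() - start
        correct = answer == stimulus["side"][0]
        return correct, reaction_time

    def run_session(n_trials=10):
        results = [run_trial(random.choice(STIMULI)) for _ in range(n_trials)]
        accuracy = sum(c for c, _ in results) / len(results)
        mean_rt = sum(rt for _, rt in results) / len(results)
        print(f"Accuracy: {accuracy:.0%}, mean response time: {mean_rt:.2f}s")

    if __name__ == "__main__":
        run_session()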
There currently is no huge difference between electrode caps and implanted electrodes (there are multiple types), except that electrode caps are super dorky. The resolution (constant bitrate) is higher for some implanted electrodes, but not by a ton compared to an electrode cap. Generally, not that many electrodes are implanted, either.
So, everyone drooling about Neuralink really has no idea what they are talking about here. They have a tremendously long way to go with Neuralink and Musk is making claims that may never even be technically possible.
Also, the (marriage) divorce rate of people getting these neural brain computer interface implants (that are regulatory approved already) is extremely high. People should think about this and other social matters, instead of getting all excited about concepts (not technology) such as Neuralink.
 Motor Imagery: https://www.sciencedirect.com/topics/medicine-and-dentistry/...
 Graded Motor Imagery: http://www.gradedmotorimagery.com/
 Graded Motor Imagery apps [iPhone and Android]: https://www.noigroup.com/product/recogniseapp/
Not necessarily. They have an extremely small-dimensional input, and have a pretty straightforward problem. The fact that they can train a good model with just a few hundred sentences suggests that it's an easy problem...
For other context, I do neural speech generation on phones for my day job, using an RNN that makes 4000 inferences per second. It works fine on a single thread with most phones produced in the last few years. Another helpful point of context might be 'swipe'-style phone keyboards, which are often RNN based, and turn paths into words.
The focus on giant-model work I think hides how effective small models can be, and how much progress has been made on making models run faster in limited resource environments.
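To make "small models are cheap" concrete, here's a back-of-the-envelope sketch (the layer sizes are made up and this is plain NumPy, not the model I actually ship): one GRU step over a ~256-unit hidden state is just a few matrix-vector products, which is why thousands of steps per second fit comfortably on a single phone core.

    import time
    import numpy as np

    # Toy GRU cell with made-up sizes: 80-dim input, 256-dim hidden state.
    rng = np.random.default_rng(0)
    IN, HID = 80, 256
    W = rng.standard_normal((3 * HID, IN)) * 0.01   # input weights for the three gates
    U = rng.standard_normal((3 * HID, HID)) * 0.01  # recurrent weights for the three gates

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x, h):
        # One inference step: a handful of mat-vec products plus elementwise ops.
        zx, rx, nx = np.split(W @ x, 3)
        zh, rh, nh = np.split(U @ h, 3)
        z = sigmoid(zx + zh)       # update gate
        r = sigmoid(rx + rh)       # reset gate
        n = np.tanh(nx + r * nh)   # candidate state
        return (1 - z) * n + z * h

    x = rng.standard_normal(IN)
    h = np.zeros(HID)
    steps = 4000
    t0 = time.perf_counter()
    for _ in range(steps):
        h = gru_step(x, h)
    print(f"{steps} GRU steps in {(time.perf_counter() - t0) * 1e3:.1f} ms on one CPU thread")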
(Do you have a reference on the divorce rate? Not sure I understand the causal link there...)
My work in AI revolves around biophysical signals. I do not use AI generally in this case but I use a 19 channel EEG (using an EEG cap) for experiments in closed-loop controls. It requires a lot of RAM (ideally 32 GB if not more) to prevent latency.
This is fine.
[*] We will add your technological and biological distinctiveness to our own.
Delete all funds from your bank account? To confirm, think of a polka-dotted elephant. Otherwise, just think of anything else.
> To confirm, think of a polka-dotted elephant.
I immediately thought: what does a polka-dotted elephant look like???
I guess my bank account is blank now :( .
You monster! :D
I imagine a future version of this technology would move beyond converting mental keystrokes into input and instead work on more abstract ideas. Imagine if your IDE could auto-generate certain design patterns as you think about them, instead of forcing you to manually type them out.
I imagine this is how I'll probably become 'obsolete' with regard to technology, just like my parents can't catch up with today's technology.
I think we were able to get up to something like 20 characters per minute, but it was largely UX design. You can go from 6 to 20 characters per minute that way. I wouldn't be surprised if the implants improve the speed, but UX could probably have an equally large impact.
It may be that our current "AI" tools might be helpful; they're really good at composing "languages" for tying together different types of data. It seems that tying noisy brain-sensor data to our English alphabet might be an example of that.
Every time an article comes up on the topic, I read through the comments section and realise how full of shit we all are here at HN. Then I read another article and assume the commenters are all smart people who understand the topic at hand.
I can't remember what the law that describes this is called, but it is absolutely real.
To be fair, this problem is a co-creation between comment and upvotes, and upvotes deserve more of the blame. But if you post an on-topic, substantive comment in the first place, then you're contributing to improving this place rather than worsening it. Recent comments on similar cases:
It was more intended as a gentle reminder that there are a lot of comments on HN posted in an authoritative tone by people who are not experts in the field, and perhaps we should all stop and think before contributing to the problem in search of validation.
This is something I honestly care more about than BCI technology, and genuinely felt it was a positive contribution to the discussion as a whole. Based on others' reactions, it didn't seem to be taken negatively either.
Regardless of all that, in one of my comments below I did actually share both a short summary of the field and my experience as a researcher, as well as a 20-page long unpublished literature review. Hopefully that can help enlighten people somewhat on the terminology used in the BCI field and point out some of the limitations and issues with the current state of the art.
(My wife says this particular XKCD reflects me the most!)
“Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.”
These articles are designed to make it seem like the technology is developing rapidly, similar to the early CPU and memory chips. A more apt analogy is the progress of electrical systems in the 18th century: we can observe some odd effects but have no idea what it is we're actually looking at. This won't change until we gain a proper understanding of the human brain, just as electronics didn't take off until we understood the atom.
If you want my opinion on the current state of the art (as of 2018), as well as a quick introduction to some of the technology, here is an old draft of my research proposal (You can skip almost all of it except Appendix 1; that's by far the most important argument for why I think the current paradigm cannot deliver significant improvement):
As you can probably tell, I gave up because I could identify gaping flaws with the current paradigm but wasn't able to formulate a new one out of nothing.
On a side note, I suspect this is why impostor syndrome is so prevalent amongst PhD students. You're not actually expected to contribute worthwhile knowledge to society, but instead expand the body of knowledge in your field. In a static field, unless you are a hyper-genius capable of spinning a whole new paradigm, this means churning out papers which you know are bullshit but are supported well enough by the existing body of research to appear reasonable. In short, grad students feel like impostors because they are.
It would be wonderful integrated with a context- and language-sensitive IDE.
>To make the interface efficient, we use the predictions of a language model to determine how much of the world is devoted to each piece of text. Probable pieces of text are given more space, so they are quick and easy to select. Improbable pieces of text (for example, text with spelling mistakes) are given less space, so they are harder to write. The language model learns all the time: if you use a novel word once, it is easier to write next time. [...]
>Imagine a library containing all possible books, ordered alphabetically on a single shelf. Books in which the first letter is "a" are at the left hand side. Books in which the first letter is "z" are at the right. In picture (i) below, the shelf is shown vertically with "left" (a) at the top and "right" (z) at the bottom. The first book in the "a" section reads "aaaaaaaaaaaa..."; somewhere to its right are books that start "all good things must come to an end..."; a tiny bit further to the right are books that start "all good things must come to an enema...". [...]
>.... This is exactly how Dasher works, except for one crucial point: we alter the SIZE of the shelf space devoted to each book in proportion to the probability of the corresponding text. For example, not very many books start with an "x", so we devote less space to "x..." books, and more to the more plausible books, thus making it easier to find books that contain probable text.
The classic Google Tech Talk by the late David MacKay, the inventor of Dasher:
It's based on the concept of arithmetic coding from information theory.
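Here's a minimal sketch of that idea (not Dasher's actual code; a toy letter-frequency table stands in for its real adaptive language model): every possible next character gets a slice of the current interval proportional to its probability, so likely continuations get big boxes and unlikely ones get slivers, and writing is just zooming into nested boxes.

    import string

    # Toy character weights standing in for a real language model (made up for illustration).
    weights = {c: 1.0 for c in string.ascii_lowercase}
    for common in "etaoinshr":
        weights[common] = 8.0
    for rare in "xzqj":
        weights[rare] = 0.2
    total = sum(weights.values())

    def allocate(low=0.0, high=1.0):
        # Split an interval among possible next characters, arithmetic-coding style.
        slices, cursor, width = {}, low, high - low
        for ch in sorted(weights):
            share = width * weights[ch] / total
            slices[ch] = (cursor, cursor + share)
            cursor += share
        return slices

    boxes = allocate()
    print("'e' occupies", boxes["e"])   # a wide slice: easy to steer into
    print("'x' occupies", boxes["x"])   # a narrow sliver: hard to hit

    # Zooming into the chosen letter's box and re-allocating gives the next level,
    # which is how whole words are written by steering through nested boxes.
    lo, hi = boxes["t"]
    print("after 't', 'h' occupies", allocate(lo, hi)["h"])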
Ada Majorek, who has ALS, uses it with a Headmouse to program (and worked on developing a new open source version of Dasher) and communicate in multiple languages:
Personally I've struggled with some RSI. Fortunately I've figured out a good way to manage it, but the thought of losing my ability to type terrifies me. I could see a mature version of this technology being safe and common enough to be an elective thing; then I wouldn't have to worry about hurting my hands!
Ideally, future generations will use a non-invasive sensor.
If you can invent one with high enough resolution, you will change the world. But first, or simultaneously, the other components of the system will have to be invented.
TFA is about decoding the ill-gotten signal. It's an impressive sign that our information-processing technologies and neuroanatomical understanding are already at the point where the system is viable.
If you could complete development of the non-invasive smart hat by this time next year, the world will be a different place by 2025.
The advances demonstrated here are in the algorithm and approach, not in the interface hardware (Utah array).
> Steve had the surgery at Stanford, in November, 2012. After the surgery, he had “severe cognitive decline” and a slew of physiological adversities. “The leads [wires] were 18 inches longer than they needed to be, so they coiled it up in the chest and at the top of the head; I could feel them externally,” he says. “And the leads were too tight. I could move my ear and my chest would move, too,” he says of a condition called “bowstringing,” whereby scar tissue encapsulates the wires (partly from the body’s natural response to foreign material), which has been documented in DBS cases and can cause permanent complications. Steve also had many symptoms that were ultimately diagnosed as shoulder and jaw muscle atrophy, spinal accessory nerve palsy and occipital nerve palsy. He reported all adverse effects immediately and continuously throughout the first year of the study, but the trial doctors continually told him that they’d never heard of such symptoms with DBS, even though nerve damage and DBS wire-related “hardware” complications were among the potential risks listed on the informed consent document.
Because they're relatively experimental, it's almost impossible to find a doctor/surgeon/etc that will choose to work on you should you run into complications. If you have problems with the programming of the devices themselves, there isn't much you can do as a patient, and even specialists can't help you. The only people who can help you are those who developed the device. That can be a problem for a device that's meant to remain implanted until death. Removal is also a huge issue, because brain tissues grow on and around the implants. There are people who want their implants removed, but can't find a doctor who is willing to remove them because of the potential for brain injury and the resulting liability.
They can also cause personality changes, suicidal behavior and even homicidal behavior. There are documented cases of increased impulsivity and impaired executive function, which have led to pathological gambling and shopping. Breakdowns of relationships and the ability to work are also documented.
Here's an article on the subject, and there are numerous studies that look into such effects, like this one that aggregates dozens of studies.
Then again, countless potential technologies have been "10 years away" for a long time now.
Is finding the right neurons just luck? Does the person somehow adapt to the interface?
For the adaptation, there’s a rich literature of neuroscientists in this field studying how the participant adapts to the control characteristics of the decoder, and how the decoding algorithm can be designed to adapt after seeing more data during use. Here’s one paper if you’re interested http://www.stat.columbia.edu/~liam/research/pubs/merel-fox-c...
But I'm dreading that people will start doing it on healthy people who just want an upgrade.
I've read enough sci-fi to have a list of 1000 ways it can end badly, and we only need one.
People are dumb enough to get phished by email, governments are spying on every phone, hackers manage to get access to nuclear plants, and you want to connect your brain to a network?
Unfortunately, I think the harm it has the potential to do to humans as a species outweighs the good by an order of magnitude.
It's useless to worry, though: we are a careless species, we will use it no matter what, and we will hurt ourselves in the process. I just hope it won't be too much.
(Submitted title was "Neural implant lets paralyzed person type by imagining writing [90wpm]")
We could directly measure how fast a child thinks, or how a CEO compares with a homeless person.
Eventually we would be able to pinpoint mental problems by measuring the time it takes to think about a certain topic, and check whether the mind "locks up" for a couple of seconds on a seemingly unrelated topic that got triggered by the context the mind was thinking about. We could pinpoint the unrelated topic and have a basis for psychotherapy that could be more accurate than just talking around the subject in order to get to know the patient.
Let's say a group of 10 people has to think through the ordering process at a Starbucks, where every step has been provided in a list. On average this takes 20 seconds, give or take 5 seconds. If there is an outlier, one could start to dig deeper into what exactly is making that mind wander off. Multiple tests in different scenarios could then determine whether the outlier is a slow thinker in general, or whether a certain thing triggers this wandering off.
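As a minimal sketch of how that outlier would be flagged (the timings below are invented to match the 20 +/- 5 second example; a real study would need proper statistics and far more data), you can z-score each participant against the group and look at anything more than two standard deviations out:

    import statistics

    # Invented "think through the Starbucks ordering steps" times in seconds for 10 people,
    # roughly 20s +/- 5s, with one participant whose mind locks up on something.
    times = [18.2, 21.5, 19.8, 24.0, 17.1, 22.3, 20.6, 16.9, 23.4, 41.7]

    mean = statistics.mean(times)
    stdev = statistics.stdev(times)

    for i, t in enumerate(times, start=1):
        z = (t - mean) / stdev
        if abs(z) > 2.0:
            print(f"Participant {i}: {t:.1f}s (z = {z:+.1f}) -- worth digging into what made that mind wander")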
My assumption is that translating the thought "delete the current line that the cursor is on" to the actual action is still far away. And then expanding that to something like "delete the current line that the cursor is on and all the lines above it" might be even more difficult.
But the equivalent operations in Vim's normal mode are "dd" and "dgg", which would interface very nicely with the implant.
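A minimal sketch of what that could look like (the intent phrases and dispatch are invented for illustration; the keystroke sequences themselves are real Vim normal-mode commands): a decoded high-level editing intent simply becomes a short keystroke sequence sent to the editor.

    # Hypothetical mapping from decoded editing intents to Vim normal-mode keystrokes.
    INTENT_TO_KEYS = {
        "delete current line": "dd",
        "delete current line and everything above": "dgg",
        "delete current line and everything below": "dG",
        "undo that": "u",
    }

    def intent_to_keystrokes(intent: str) -> str:
        # Translate a decoded intent into the keystrokes an implant-driven editor would send.
        try:
            return INTENT_TO_KEYS[intent]
        except KeyError:
            raise ValueError(f"no keystroke mapping for intent: {intent!r}")

    print(intent_to_keystrokes("delete current line"))                       # dd
    print(intent_to_keystrokes("delete current line and everything above"))  # dgg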