> By comparison, able-bodied people close in age to the study participant can type on a smartphone at about 23 words per minute, the authors say. Adults can type on a full keyboard at an average of about 40 words per minute.
I had no idea average typing speed is this slow, though the extreme slowness of touchscreen typing is no surprise.
My own typing speed ranges from ~40 WPM to ~110 WPM depending on what tool or test method I'm using. For example, some tests draw random words from the whole dictionary, which of course includes very long and complicated ones, while others limit themselves to the 100 most common English words. And even with those factors held constant, I can still get different speeds depending on the UI; the cursor movement, the presentation of the text, and input latency all have a measurable impact on the end result.
I actually spend a few minutes every few weeks taking typing tests etc, as it's a valuable skill to me.
I figure that if the average typing speed is somewhere between 50 and 80 WPM, and I can type 50% faster than that, then in theory I'm responding to emails and comments 50% faster too.
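To make that concrete, here's a quick back-of-envelope calculation; the 60 WPM baseline and 100-word email are made-up numbers for illustration, not figures from the comment:

```python
# Rough math: how much time a 50% typing-speed bump saves on a
# 100-word email. All numbers here are illustrative assumptions.
def minutes_to_type(words, wpm):
    return words / wpm

email_words = 100
baseline = minutes_to_type(email_words, 60)  # assumed 60 WPM baseline
faster = minutes_to_type(email_words, 90)    # 50% faster
print(f"{baseline:.2f} min vs {faster:.2f} min")
```

The per-email saving is small, but multiplied over everything typed in a day it adds up.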
It's just a nicer experience using a computer if you can type fast. Sometimes it's easier to command-backspace and retype an entire line/word than it is to move the cursor and correct the error. It's also just plain nice to have only your thoughts to worry about when typing and not have the cognitive overhead of trying to keep track of where all the letters on the keyboard are.
All in all, I would recommend improving your typing speed if you spend any meaningful amount of time on a computer.
A fun 1-minute test that is somewhat practical here: 10fastfingers.com
A pretty trivial demonstration of this is that I can talk much faster than I can type, and I would expect that you can too.
If this happens to you a lot, you might benefit from training your typing speed up. (or writing, or dictating, or whatever)
As there are also plenty of competent typists and computer users at that age, it's all about one's influences rather than brain-plasticity excuses. I wonder how I can avoid falling into that myself.
This is key. Everything else is an excuse.
My Mum is in her 70s and until she lost feeling in one of her hands (chemotherapy 23-ish years ago) she was an exceedingly fast touch typist. Even without feeling in her hands, she could still type at ~40 WPM, possibly higher depending on what she was typing. Arthritis means she now mostly uses a tablet.
But looking at other people in that age range, I can think of literally zero from church who even know how to touch type. It's probably the difference between someone who had to do secretarial work and someone who did not.
I do think your first statement is absolutely on the mark. e.g., "I'm old, and I don't understand it, so I won't try."
The old adage is still true: whether you think you can or you think you can't, you're probably right.
This is part of why I've never used Facebook, Twitter, or any of the rest of it.
The difference is definitely much less about not finding new technologies useful or irritating; it's a difference, I think, between people who are genuinely curious and interested in learning new things and those who aren't.
I've worked on this question personally. I decided that the obvious first step to gaining the mental flexibility of youth is to mimic youth:
* Explore: When we're young, we constantly try new things, even when there isn't an apparent ROI. We try new arts, new experiences, new ideas, new hobbies, etc. We are not afraid to ignore the established way and invest in something new - often for the novelty (or rebellion) of it. We give the new things time; we play. We are curious, not critical - we wonder why and explore the idea instead of criticizing it and shutting it down. When we are old, we often stick to what we know well and criticize the rest.
* Push yourself: In school or as a junior employee, you can't say 'I've always done it this way' or 'I'm not interested in learning something new'. You have to learn and adapt. When we're old, power corrupts - most people make those excuses and they are generally accepted. Nobody else will push you, as a rule.
There are limits in life; I don't have as much free time now as when I was young, but that's not a deal-breaker (and I use time much more efficiently now, including by prioritizing and by knowing myself much better). Also, I don't 'play' like a 6 year old or even a 25 year old; I do it my way.
I also saw it as an interesting experiment: How much of mental changes were due to changes in practice and how much due to biology. I can't provide empirical data but especially Exploration seems to have changed my life, not only mentally but significantly, emotionally: I'm much more optimistic, less jaded, and more emotionally connected than I was. Life is invigorating. A warning though: I'm challenging some norms of age and therefore peers don't grasp and sometimes reject me. I wish I could get through to them.
Swype is pretty great for writing with a phone. I kind of wish there was an improved Swype though.
I find it more annoying that it can't even attempt to render words that it doesn't know. The reason is exactly the same as its inability to distinguish "isn't" from "orange" -- there just isn't enough information in a swipe to identify any intended letters. But a failure to recognize a novel word means you can't correct the IME - you have no other option but to switch input methods.
It should be way more accurate on the starting and ending characters. If I'm starting with "i" I'm not going for "orange".
One thing that frequently trips me up are names. I'm swyping a regular word and it thinks I want to use a name I've never used before.
The whole point of using a very-low-fidelity input method is that it's faster. How much care and effort are you planning to put into entering each word?
Given how accurate search suggestion algorithms are becoming, I find it surprising that keyboard suggestions are as bad as they are. If swype could have the same predictive abilities as search engines it would be a major boost in speed and comfort.
I was expecting this feature when I got my first Android phone over a decade ago, and I am still waiting for it. Is there some engineering step that I'm missing? Model my typing history and use that for prediction weightings. Is this infeasible on my phone's hardware?
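The "model my typing history" idea above can be sketched in a few lines. This is a toy bigram model, not how any real keyboard implements prediction; the example history string is invented:

```python
from collections import Counter, defaultdict

# Toy sketch: build next-word prediction weights from a user's own
# typing history, then suggest the most frequent followers of a word.
def build_bigrams(history):
    bigrams = defaultdict(Counter)
    words = history.lower().split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1
    return bigrams

def suggest(bigrams, prev_word, k=3):
    # Top-k most frequent words that followed prev_word in the history.
    return [w for w, _ in bigrams[prev_word.lower()].most_common(k)]

model = build_bigrams("see you soon see you later see you soon")
print(suggest(model, "you"))  # "soon" ranks above "later"
```

Even something this crude runs easily on phone hardware; the real engineering cost is presumably in doing it well (smoothing, privacy, blending with a general-purpose model) rather than in the raw computation.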
(You'll know that you're buffering when you take a typing test and by the time you see that you've mistyped something you've already typed several more correct words, so it pays to be accurate before you try to buffer ahead.)
We had to type at least 60 WPM in middle school in order to pass the "typist class" (around 2000), though I did not maintain that level.
edit: clarification downthread https://news.ycombinator.com/item?id=27423762
Ah, so it's previous record for BCI typing speed, not "typing speed" in general. Original headline (on IEEE side) is misleading.
Edit: Found a reference saying that average dictation speed is around 150 WPM, but it also mentions some tests going up to almost 300. So it would seem the average person could be almost 4 times faster speaking than typing.
Currently, to draw a line in AutoCAD, I have to either move the mouse to a "line" button and click it, or type the command "line" if my hands are on the keyboard. Now, if I had a direct interface with the machine, just the thought of "draw a line" would put the software in line-drawing mode. Add eye tracking to that, and I could simply look at the point on the screen where the line should start and say in my head "here"... the opportunities are endless.
The computer will have a difficult time catching up with the number of commands we are capable of issuing to it. BRING IT ON.
It was 2006 when I did speech-to-text data entry of 200+ mobile numbers using good old Windows XP. Progress seems to have been too slow since then.
I challenge you to put your top ten commands on single key or double-same key (e.g. zz) shortcuts on the left side of the keyboard. Your CAD skills will speed up significantly once these become muscle memory.
But comparing speeds isn't really necessary or even relevant. For a person who literally has no other way to communicate, nobody will complain that they can type 20% slower than a person with a smartphone. The fact that they can type at all is what matters.
Now, for the speeds: I type 80 WPM / 400 CPM, but what's more important is that I can also freely think while I do so. I don't even consciously know where the keys are on the keyboard; I need to focus on a character and then watch where the finger goes, as if I were a bystander, a passenger in a mech body observing it do something.
I can only assume that writing with a brain-computer interface requires incredible focus, and most likely engagement of the visual cortex, which would preclude using it for anything else (so no ability to imagine any imagery while typing).
This isn't a comparison between this and typing on a keyboard.
It's a comparison to the prior state of the art in this technology.
EDIT: I mean, on the screen (and perhaps some on their keyboards).
I have wondered about this, and I think what is happening is that you actually learn the QWERTY layout on your phone: the fingers already know more or less where to go, and what you do visually is just aim where your finger needs to go (i.e. find the center of the button); the rest (like reading the letter) is most likely redundant.
I don't know how to or have time to test this, though.
I'm surprised, though, that the most efficient way to do this is still to have the person imagine physically drawing the letters by hand. I know motor neurons are probably our most reliable output, but I would still think that, with all the advances in training from noisy data in the past decade, learning what the thought of "A", "B", etc. looks like in the head would be doable.
Or even what the thought of hearing or saying "A", "B" etc looks like. The auditory cortex is activated when we imagine sounds. Or, if they wanted to stick to motor neurons, could they have the person imagine saying the letters with their mouth?
I'm sure they've thought about this stuff and it's harder than it seems, of course. But I would just predict that brain-computer interfaces 20 years from now won't involve imagining using your hand to write letters.
For context, I did my PhD in the lab that did the work in this article.
Honest question: how so? We should expect a direct neural interface to far exceed the speed of any manual input device, especially after 40-50 years of research.
GPT-3 is also very impressive to me, even though 30 years ago I thought we'd have HAL by now.
Some problems just turn out to be way harder than anyone anticipated, and so when they make advances I'm impressed.
Counterpoint: If this were the case I would have already heard about techies getting brain implants to optimise their communication.
Since that hasn't happened, the only logical assumption is that available neural interfaces are slower than existing manual input methods.
18 WPM would make me feel like a snail. However, I feel fairly confident that within 1 - 3 decades some kind of BCI will let any user surpass the equivalent of 200 - 300 WPM after a bit of training. And hopefully even with a device that sits on your head rather than in your brain.
So I'm just kind of looking at this like ML research circa 1990. We're hardly even in the infancy stage yet.
Greater sensor fidelity, a greater quantity of data, and improved ML approaches combined with alternate detection targets (particularly subvocalisation) mean that getting to faster-than-parity WPM seems like a realistic result. Incremental improvements to tech are something we humans are quite good at.
I could be wrong, though. Maybe people's thinking rates wildly vary. And maybe my 170 WPM was pretty close to my actual speed of thought. It also may depend a lot on if it's something you're thinking about on the fly vs. regurgitating existing thoughts.
On the other hand, this interface is for people who have a current typing speed of approximately 0 WPM (or ~8 WPM if they were lucky enough to have the previous leading BCI technology available to them). So it's all about perspective.
I was quite fast in landscape due to the placement of everything and my ability to also rock the phone while I typed. This increased my speed dramatically.
Sadly, that is not really an option anymore, as I gladly gave it up for the larger real estate of the 6" models.
If I have something long to say and I'm not too concerned about accuracy, I just use speech-to-text.
I've tried to back this up and only found a single, questionable data source on the topic, but I do attend several typing competitions and some of the young competitors are phenomenal. I was beaten a few months ago by a 15-year-old who typed over 220 WPM for 5 minutes.
Also, Macross Plus (spoiler) for the subconscious throwing up images in a high-stress environment: https://youtube.com/watch?t=2077&v=Fgg6p9gSeR0
It might be enough to make people walk again though.
Obviously, that's still sci fi at this point, but that's what you seem to be rejecting here.
To each their own, I guess.
A buddy of mine used some timing app for our typing speeds; you had to backspace repeatedly to retype a word if you misspelled it. I argued that's unrealistic: you can just use spell check, or catch the typos during your editing read-through.
Full words correct with no typos.
If you want to measure this, you're interested in the combination of reading speed and muscle memory while typing. If you're making typos, your muscle memory is not good. Using a spell checker to automatically fix those words... Well, the number is not comparable, then.
A very effective friend of mine once said if your messages are perfect you're spending the oo much time on them.
If you're talking about who can send text messages to another person faster, then the "typos included" metric makes more sense.
Was that intentional? Looks like the other two replies may have read right past that without noticing it...
Typing speed is not about understanding information to its fullest, it's just a somewhat quirky metric for the speed of replication of words.
You can have effective, fast communication with the machine without any fancy technology (and certainly without surgery!). The subjective experience is "think to type". I expect that these days you could use a camera and ML to "read thoughts" directly from facial muscles, with no need for GSR or IMUs on the fingers.
It would be really cool to have a BCI that runs off mental verbalizing and have a chopper-style rapper use it (e.g. Busta Rhymes). Imagine people doing transcription learning chopping as a skill to get an edge in their job!
What could be extra cool is just imagining a loop construct in your head and it just appears in your text editor.
Is there already a scene for this? Sign me up.
But they use toes to control the thumb instead of BMI.
Other tricks we use for noisy touchscreen input could apply here too. For one thing, we've got different modes of collecting input. You could try offering autocompletions/predictions to make typing long words faster (long a thing in e.g. single-switch input tools), if watching a screen doesn't slow the person down much. You could try a swipe-like flow where you collect a chunk of data imprecisely but fast (mentally type or scribble a whole word, say), then offer choices.
(Thinking about other extremes, I wonder if there's a way to get faster input using a T9-like reduced alphabet (3x3 grid, pick row and column?) or something like that. Or whether it could someday work for people to try to speak or visualize words instead of handwriting them.)
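The T9-like reduced-alphabet idea above can be sketched quickly. The 9-way letter grouping and the tiny word list here are arbitrary assumptions, just to show how a dictionary disambiguates the ambiguous codes:

```python
# Toy sketch: bucket the 26 letters into 9 groups (like a 3x3 grid),
# so each "keypress" only picks a group, and let a dictionary
# disambiguate. Groups and word list are invented for illustration.
GROUPS = ["abc", "def", "ghi", "jkl", "mno", "pqr", "stu", "vwx", "yz"]
LETTER_TO_GROUP = {c: i for i, g in enumerate(GROUPS) for c in g}
WORDS = {"cat", "act", "bat", "dog", "fog"}

def encode(word):
    # The low-fidelity signal: one group index per letter.
    return tuple(LETTER_TO_GROUP[c] for c in word)

def decode(code):
    # All dictionary words consistent with the group sequence.
    return sorted(w for w in WORDS if encode(w) == code)

print(decode(encode("cat")))  # 'act', 'bat', 'cat' all share one code
```

The trade-off is the usual T9 one: fewer distinct inputs to produce (good for a noisy channel), more ambiguity for the language model to resolve.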
You might be able to glue the letter decoding and language model together more closely--feed the letter-decoding NN's uncertainty and less-probable guesses (25% chance this 'e' was really an 'l') to the lang model.
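A minimal sketch of that fusion, under the stated assumption that the decoder exposes per-position letter probabilities; all the probabilities and word priors below are invented, not taken from the paper:

```python
import math

# Instead of committing to the decoder's argmax letter, score whole
# candidate words: decoder log-likelihood per letter plus a
# language-model log-prior. All numbers are made up for illustration.
def score_word(word, char_probs, lm_logprob):
    decoder = sum(math.log(char_probs[i].get(c, 1e-6))
                  for i, c in enumerate(word))
    return decoder + lm_logprob

# Decoder is unsure whether the last position is 'l' (0.50) or 'p' (0.40).
char_probs = [
    {"h": 0.90},
    {"e": 0.80},
    {"l": 0.90},
    {"l": 0.50, "p": 0.40},
]
# Invented priors: "help" is a much more common word than "hell".
lm_logprobs = {"help": math.log(0.010), "hell": math.log(0.002)}
best = max(lm_logprobs, key=lambda w: score_word(w, char_probs, lm_logprobs[w]))
print(best)  # the language model outweighs the decoder's slight preference
```

Here the decoder alone would pick "hell", but the language-model prior flips the decision, which is the point of keeping the less-probable guesses around.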
You could learn a person-specific language model, seeding it with a person's writings/speech before the injury or disease (if available), whatever else you think they might need (family/friend names, care requests, etc.), and training the language model as they use the interface (paper already does that with the character-recognizing model).
You could do explicit "mutual training"--while the machine samples how you write letters, it can show you its certainty scores (based on the model it has so far) or maybe something graphical to help you write the letters how it expects (exaggerate certain differences etc.). They already have an "optimized alphabet" that maximizes the machine-visible differences between letters.
From their paper, the existing language model already did very well at producing clean results in this test, but the more you can refine the cleanup, the more you can potentially sacrifice cleanliness of input for greater speed, and maybe get closer to speaking rate.
FWIW some Googling found http://web.stanford.edu/~shenoy/GroupPublications/WillettEtA...
However, considering how much the now-normalized ill effects of the digital privacy dystopia we’re already living in would be multiplied by that development, I really fear for the future.
In re: the technology to take the signals and transduce them to e.g. a byte stream without surgery. First was GSR (Galvanic Skin Response); then, when IMUs (Inertial Measurement Units) were miniaturized enough, you could use those instead of GSR; nowadays (as I mentioned in a sibling comment) you can use a camera and ML to recognize e.g. muscle twitches in the face or whole body.
(I have no idea why people are not more interested in this angle, even BCI enthusiasts seem to have a blindspot here and just go on about surgery and implants. If anyone is interested, let me know and I can explain further. FWIW, I don't bother to do this myself because I can already type as fast as I need to. In fact, I was a hunt-and-peck typist for years! I'm not proud, just explaining why I don't bother to explore hypnotic low-tech BCI.)
> are there any papers on it?
None that I know of, but I have never looked. Hypnosis in general has a hard time being accepted by conventional science (going back to Mesmer and Ben Franklin.)
> the use of hypnosis to create unconscious “hooks” into conscious thought.
Yes, this was literally one of the first things I learned when I started studying and using hypnosis: a binary Boolean signal from my unconscious to my conscious mind. Idiosyncratically, we (my conscious and unconscious minds) settled on a twitch of the right shoulder for yes, left for no. Technically it's a ternary signal, with no twitch indicating a "reformulate the query" or "does not compute" response.
Really, the thing to do is improve communication and rapport between the unconscious and conscious minds, operating with deliberate cooperation (rather than the ad hoc programming you get from life). E.g., when I play chess I don't think about the moves; I look at the board and "just know" which move to make. I can actually decide whether to beat someone or lose, and by what degree! (As you can imagine, it makes chess dull.)
> This could be useful if for whatever reason certain features of conscious are not easy to detect (another comment mentioned that it’s harder to get clear signals from the frontal cortex).
Yes, translating and amplifying signals is trivial, your brain does it all the time. However the entire point of having a "conscious mind" is tied up in being easy to detect. The ego is a communication device.
> Then there is the idea of using muscle twitches / GSR as an information channel. It seems hard to get high bandwidth from this (compared to invasive or EEG-like approaches).
I actually have no clear idea of the bandwidth limits from the unconscious mind to some specific sensor system. Keep in mind that touch typing is already a form of this: the unconscious mind moves the fingers and a stream of text goes into the computer. Achieving that sort of bandwidth should be no problem: set up a binary signal on each finger and an "ACK" on the thumb and you can transmit bytes in parallel, eh? Build something like a "squeezebox" keyboard or a dataglove and use chords for greater bandwidth.
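The parallel-bytes idea above is easy to make concrete. The finger ordering and bit encoding here are arbitrary assumptions for illustration:

```python
# Toy version of the chord scheme: treat eight fingers as eight bits,
# so one simultaneous "chord" carries a full byte. Bit order (which
# finger is bit 0) is an arbitrary assumption.
def chord_to_byte(fingers):
    # fingers: sequence of 8 zeros/ones, least-significant bit first.
    return sum(bit << i for i, bit in enumerate(fingers))

# Chord for ASCII 'A' (0b01000001): bits 0 and 6 pressed.
chord = [1, 0, 0, 0, 0, 0, 1, 0]
print(chr(chord_to_byte(chord)))  # prints "A"
```

In practice chorded keyboards use a dictionary of comfortable chords rather than raw ASCII bit patterns, since some bit combinations are awkward for the hand, but the information-rate argument is the same.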
We also have the face, a high-bandwidth output channel, and I have no doubt that a camera (or two) with a simple neural net could be used to train the system to recognize facial tics and expressions. You would have to be a little sophisticated in how you set up the feedback loops: you want to settle on motions that are easy for the face to make and for the NN to recognize. That's kind of a neat thought experiment. (No pun intended.)
I don't doubt that you could get higher bandwidth from implants, I just don't think it's worth the surgery (for able-bodied folk. For people with paralysis it makes more sense.) Not EEG though (I know a neuroscientist who has experience with high density EEG and from what I gather it's just not that great. Like trying to find whales by analyzing surface waves.)
Sir Francis Galton pioneered the study of psychometrics, which showed that intelligence is positively correlated with reaction speed. So it's not surprising that the average person is slow at a cognitive task like typing.
Uh, that we can even type is in and of itself a wondrous thing. Enjoy your miasma.
And I'm at least partly a victim of the "but your IQ is so high - you could do so much!" expectation trap. At least until I learned to let go of other people's expectations and just be myself.
It's not that IQ is bunk; it's that effort and discipline count more than anything when it comes to getting something done.