A new digital divide: Young people who can’t use keyboards (asahi.com)
432 points by paralelogram 7 months ago | 356 comments



It's worth bearing in mind that the source is Japanese. All input methods for the Japanese language are a compromise. Inputting kanji on a physical keyboard is nowhere near as fluid as inputting Latin characters - you're constantly toggling between inputting kana and selecting the kanji options presented by the IME.

The prediction and correction technologies of smartphone keyboards are a very good match for kanji and hànzì input. As a second-language user of Chinese, I find it considerably faster and easier to input hànzì on a smartphone than with a physical keyboard. The context switching between inputting pīnyīn and selecting hànzì is much less expensive when the hànzì are presented directly above the on-screen keyboard. The prediction and correction algorithms seem to be far more intelligent on mobile, which largely compensates for the slower and more error-prone tactile experience.

It is my understanding that most young Japanese people prefer the flick input method, which is a refinement of the old keitai input method used on featurephones with numeric keypads; they are often startlingly quick at using this method, but it poses a far higher switching cost when moving to a QWERTY-derived physical keyboard. I find it entirely plausible that the flick method could simply be inherently superior.

https://www.youtube.com/watch?v=8V-za9LT_30


As a native Japanese speaker who has to enter a lot of Japanese text daily, both on a PC and on a smartphone, I feel obliged to point out that the quality of an IME (the piece of software that proposes kanji/alphabet sequences for a given phonetic realization) is far better on a PC (its built-in MS-IME, to be precise) than on a smartphone (iOS).

I started looking for an example to show how bad iOS's IME is, and I found one on the first try: it returns a wrong candidate as the first suggestion for "かんじをにゅうりょくする" (to enter kanji), returning "感じを入力する" (to enter sense) rather than the correct "漢字を入力する". Note that "漢字" (kanji) and "感じ" (sense) have exactly the same phonetic realization: both of them are pronounced kanji. It seems as if iOS's IME does not take any context into account at all. If it did, how could it have calculated that entering sense (?) is more likely than entering kanji? This kind of absurd error would rarely happen with Microsoft's IME, and it always stresses me out when entering long texts in Japanese on a smartphone.
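To make the complaint concrete, here is a toy sketch of why context matters for kana-to-kanji conversion. Both candidates share the reading かんじ, so a context-blind IME can only fall back on overall frequency, while even a crude bigram model can use the following word to disambiguate. This is purely illustrative — the counts are made up, and it is not how any real IME is implemented.

```python
# Toy context-aware kana-to-kanji ranking. All counts are invented.
UNIGRAM = {"感じ": 120, "漢字": 80}          # 感じ (sense) is more common overall
BIGRAM = {                                   # co-occurrence with the next word
    ("漢字", "入力"): 50,                    # "enter kanji" is a natural phrase
    ("感じ", "入力"): 2,                     # "enter sense" is not
}

def rank(candidates, next_word=None):
    """Rank homophonous candidates, preferring bigram evidence when available."""
    def score(cand):
        if next_word is not None:
            return (BIGRAM.get((cand, next_word), 0), UNIGRAM.get(cand, 0))
        return (UNIGRAM.get(cand, 0),)
    return sorted(candidates, key=score, reverse=True)

print(rank(["感じ", "漢字"]))                    # ['感じ', '漢字'] — no context
print(rank(["感じ", "漢字"], next_word="入力"))  # ['漢字', '感じ'] — context wins
```

With no context the frequency-only ranking produces exactly the error described above; adding even one word of lookahead flips the order.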


The iOS IME definitely uses context but I think the default training just isn't very good. If I try your example it starts suggesting 感じ, but once I add を it changes to 漢字を.

But this is also why there is a large market for third-party keyboards/IMEs, even on Windows — from the classic ATOK to the modern Google Japanese IME.


There are some better input methods for Japanese, and they are not available on smartphones.

SKK

What you are describing above is phrase-wise conversion. With SKK, however, you can easily distinguish "感じ" and "漢字" by typing "KanJi" versus "Kanji", explicitly specifying where the conversion starts and ends with the Shift key. SKK massively reduces the number of conversion candidates, so people can obtain converted sentences faster. SKK is a good input method, but it doesn't exist for smartphones.

T-Code (or TUT-Code)

We also have the T-Code input method on computers, but not on smartphones. It assigns two keystrokes directly to one character (a kana or a frequently used kanji). For example, "kd" types "の" and "is" types "東". This input method is also very efficient and boosts input speed; however, it is designed for a physical full-size keyboard and eight fingers. Its users can't do the same on a software keyboard because they remember the keystrokes with their fingers.
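The mechanism is simple enough to sketch: every two keystrokes map straight to one character, with no conversion step and no candidate menu. The table below contains only the two pairs mentioned above; a real T-Code table covers the kana plus roughly a thousand kanji.

```python
# Minimal sketch of T-Code-style direct input: keystroke pairs map
# straight to characters. Only the two example pairs are included.
TCODE_TABLE = {
    "kd": "の",
    "is": "東",
}

def tcode_decode(keystrokes):
    """Consume keystrokes two at a time and emit the mapped characters."""
    out = []
    for i in range(0, len(keystrokes) - 1, 2):
        pair = keystrokes[i:i + 2]
        out.append(TCODE_TABLE.get(pair, "?"))  # "?" for unmapped pairs
    return "".join(out)

print(tcode_decode("kdis"))  # の東
```

The appeal is that decoding is deterministic — no homophone menu ever appears — at the cost of memorizing the full table in muscle memory, which is exactly why it doesn't transfer to software keyboards.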


If anybody is interested in a Kanji dataset for improved input methods, please check out the 52,835 characters I gathered.

https://blog.usejournal.com/making-of-a-chinese-characters-d...

I made my own input method for Chinese Hanzi, which decomposes the characters and lets me find characters based on their IDS codes. It also predicts words both forwards and backwards (in case you don't know the first character, but do know the second).

https://pingtype.github.io


I think this might be the training (or lack of) of your device.

I get it right on mine: https://imgur.com/a/83VCSwo

Bear in mind I use a Mac with the same iCloud account, and have years of data on it. Part of it must be shared.

I haven’t used MS's IME in a long time, but I remember it being only marginally better than Apple’s. The main differentiators for me were names (places, stations, people).


iOS input for non-latin or less used languages is clearly lacking. There are third party keyboards which do better predictions, at least for Chinese there is Sogou which apparently also completes things like popular (trending) names and expressions.

But one does not even have to go that far, autocorrect for Slovak is quite a disaster to the point where I'd like to be able to disable it on per-keyboard basis.


I think iOS's IME expects you to segment things yourself, rather than typing several words strung together and then pressing the suggest button?


It would be interesting to compare the experience with a recent and updated Android phone, with the Google Keyboard. My bet is that Google's one is better at it, and improves rapidly.

(I don't speak Japanese, sadly)


If you have also tried the Japanese IME on macOS, how does it compare to the Windows version in your opinion?


The Google IME keyboard for Android also gives "感じ..." as the first suggestion for this phrase.


This is definitely a big factor. When you look at a Japanese computer keyboard [1] you might see lots of Japanese characters on the keys. Almost nobody actually uses these to type. Instead, if you want a か(ka) character then you type k,a on the familiar qwerty keys. These phonetic characters then get converted into the more complex Chinese-origin characters as you go along, allowing you to disambiguate homophones.
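The romaji-to-kana step described above ("k","a" becomes か) amounts to a greedy longest-match over a conversion table. The sketch below uses a tiny hand-made table just to illustrate the idea; real IMEs handle many more sequences (small kana, sokuon, "n" before consonants, and so on).

```python
# Toy romaji-to-kana converter: greedy longest-match over a small table.
ROMAJI_TO_KANA = {
    "ka": "か", "n": "ん", "ji": "じ", "ni": "に",
    "u": "う", "ryo": "りょ", "ku": "く", "su": "す",
}

def to_kana(romaji):
    out, i = [], 0
    while i < len(romaji):
        for length in (3, 2, 1):            # try the longest sequence first
            chunk = romaji[i:i + length]
            if chunk in ROMAJI_TO_KANA:
                out.append(ROMAJI_TO_KANA[chunk])
                i += length
                break
        else:
            i += 1                          # skip anything unmapped
    return "".join(out)

print(to_kana("kanji"))  # かんじ
```

The resulting kana string is what then gets handed to the conversion engine to resolve into kanji, which is where the homophone disambiguation discussed elsewhere in this thread happens.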

The flick keyboard removes the need for typing 2 Roman letters to make one Japanese letter, instead you just have a single flick. Almost all young people use the flick keyboard and I definitely think it is faster.

Cast your mind back to when you first encountered a computer keyboard. I remember hunting for seconds to find letters in that unfamiliar arrangement. This is where many Japanese young people are. There was never a computer in their house, and now they are heavy mobile users. The QWERTY keyboard is not an everyday object for a lot of people.

As to general computer literacy among Japanese teens: I teach a first year general English course at a Japanese university. The students are drawn from all different faculties so I feel it is a pretty good informal sample. I tried to get students to do an online survey by putting a web address on a slide. Over half do not know what a web address is and draw no distinction between search bar and address bar.

[1] https://en.wikipedia.org/wiki/Japanese_input_methods#/media/...


> Cast your mind back to when you first encountered a computer keyboard.

Does an IBM cardpunch keyboard count? :-)

Anyhow, I took a 2 week class in 8th grade to learn to touch type on a mechanical typewriter. It's paid off handsomely ever since.


I found that taking a few college programming courses was entirely sufficient for learning to type. (Keep in mind, I had already written dozens of academic papers at that point, but it was the coding that really solidified my typing abilities...)


> a few

Yeah, but my typing class was only 10 days, one hour a day. It was a marvelous return on investment.

Also, it was on mechanical typewriters. You had to physically hammer the keys to make it work, making for a very positive impression on muscle memory.


> search bar and address bar.

I would say that is because the search and address bar have been the same thing in all major browsers for some time now.


> Cast your mind back to when you first encountered a computer keyboard. I remember hunting for seconds to find letters in this unfamiliar arrangement.

How early did you learn to type? As an American born in 1980, I learned in first grade, and I don't remember it being a struggle. So maybe it's really important to learn early.

FWIW, I'm visually impaired, so looking at the keyboard wasn't a practical option.


I've never learned home-row touch typing, but I have my own style that I learned while playing StarCraft when I was ~10 in the late 90s - you can't spend time hunting for keys if that means your army is getting wiped out.

So while it may have been a struggle for me back then, it was for an entirely different reason. I too don't remember it, though.


I had similar circumstances. I started using computers at a fairly young age and basically developed what I call an "advanced hunt and peck" style. I knew where everything was, but for the life of me I found it incredibly difficult to relearn how to type "properly".

This is actually why I use DVORAK now, the only way I could force myself to learn was to completely disconnect the letters on the keycaps from what they actually represented. It was the most grueling two weeks of my computing life, but it was worth it in the end. Actually, I've since lost my previous typing skills entirely - whenever I'm forced to use QWERTY for some reason I end up typing at a glacial pace, so there's the one downside (thankfully this isn't often, mostly when I pull up the console of some server through iLO/iDRAC to get networking back up so I can just SSH in instead...)


My family got our first computer when I was about 8. I didn't learn to touch type till I was sixteen, but by then I had a really good sense of where all the keys were. I distinctly remember hunting for letters. (But then again, I remember not being able to spell basic words, so perhaps I'm the outlier.)


I started touch typing at around 18, after just forcing myself as I knew it'd be beneficial in the long run.

I could type around 60wpm using two fingers; I knew where the keys were... just not how to use every finger!

Now that I touch type 'properly' I can get close to 100wpm. My speed has kind of plateaued now, though, unfortunately.


Do you really find a benefit much past ~60wpm? Unless you're just transcribing, I would think compositional speed is the limiting factor.


Not OP but I type ~140 and there are definitely situations where you feel a desire for higher WPM. Informal writing is the most common one - for example, with this comment I'm basically just typing out what I'm thinking and it reads more like a conversation than a well-planned chunk of text. I'm going nearly full-speed because, like most people in the world, I can compose casual sentences in my head much faster than 2 or 3 words a second. This becomes even more noticeable when you're in chats and just typing as you think of things to say. It's also useful to type faster in settings with repetitive phrases, like coding. There is definitely downtime where you think of what to write next, but it's fairly common to quickly think out a longer segment of simple words and have to wait for your typing speed to catch up before moving on.


It was an honest question, and I appreciate you taking the time to respond. I can't say that I have ever bumped into my typing speed limit when writing, even informally, and I am certain my max is well below 140. Different strokes and all that. :)


Well, I'm sorry, but that seems like too many words for the idea you expressed in them.

Maybe it's just me, but at least sometimes brevity pays off.


Written text is condensed. It takes more time to compose a good paragraph of written text than the equivalent in conversation. This extra time is used to sort out the logic of the text, reducing redundancy in the process.

The parent comment does sound like verbal communication. Even so, I understand the point. I type 120wpm and, when writing, I form paragraphs mentally, then dump them via keyboard. If this second phase could be faster, I'd write faster.



I find that the closer my typing speed is to my thinking speed (be it fast or slow), the better. If my typing lags behind my thinking, I have to slow down my thoughts or I’ll forget what I’m typing, but if I slow down my thoughts I can lose my way there and either forget what point I’m trying to make as I get bogged down in typing it, or my mind drifts. If my thoughts are slower than my typing, I can slow down my typing without detriment, but if my typing is slower than my thinking, it may be detrimental (not always, of course, but it sometimes is).

Therefore, yes, I benefit from >60wpm typing. Not all the time, maybe not even that often, but it definitely does happen.

In programming, my thoughts are usually slow enough and I only need to type in bursts, but sometimes it takes a lot of code to represent a small idea and I need to turn it into code asap before my mind drifts and the house of cards gets shaken up.

(The above makes it sound like I lack focus, that’s not really true and my mind doesn’t always drift, but in this world of noisy open plan offices, it’s not hard to get distracted in some small way, enough to be detrimental)


A lot of the reason why I personally find >60 wpm difficult is similar to the UI feedback studies that come up on HN from time to time. If I avert my gaze and let my typing become a secondary act it's not as bad, but in some UI situations the display can be more than 6-12 characters behind the keypresses, so watching it gives me the feeling that I have miskeyed something or the software froze. It's part of the reason why I prefer key-chord-driven autocomplete over in-place autocomplete, which can be visually distracting while I'm trying to keep up.


Also not op but I can type fast and it definitely helps with verbose things like a SQL query or while chatting with coworkers on slack... .... But that can frequently be frustrating if the person on the other side isn't nearly as fast....

(edit autocorrect)


Conversely I was born about 20 years earlier and I didn’t touch a keyboard until I was a senior in high school.


The advantage of the "flick" keyboard is probably that the buttons are larger than those on a Latin keyboard. Hitting and sliding the finger seems like a more complicated action than just hitting a letter, but maybe that is just because I don't use it often.


I've known an English variant of the "flick" keyboard (MessagEase) since my first smartphone.

I find the flick/tap-and-drag gestures to be vastly more comfortable and natural than trying to use my thumbs to peck at a QWERTY soft keyboard and relying on predictive algorithms to make up the speed loss.

An additional benefit is that this approach provides more room for additional symbols and layers that are more of a bother to reach from standard soft-keyboards. For example, I can have a full set of programming symbols with Ctrl/Esc modifiers available without explicitly mode-switching the keyboard, it's extremely helpful when I'm ssh-ing from my phone.

For reference, with a physical QWERTY keyboard I average around 95 wpm, with messagease on my phone I run around 60 (without autocorrect/suggestions).


Holy Crap. MessageEase is the keyboard I didn't know I wanted. It's like Minuum but smarter. <3


I've used MessageEase for years now on Android and it was really worth the time investment. I only switch back to Gboard occasionally to get at its far superior emoji input system, but I don't use a lot of emoji in most of my text entry so that's not a massive deal. I can type accurately without having to wrestle with autocorrect all the time, and that'll do for me! And although the letter frequency distribution isn't right, I can happily use it to enter German and Lojban and Welsh on the occasions when I want to use those on my phone.


I haven't gone too far myself, but I'm pretty sure it's possible to customize the symbol layout to better suit a particular language or user preference.

IIRC you can swap out (or add) any of the side/minor triggers, but I don't recall if it's possible to change the big nine major keys.

It is great though, I abhor word prediction when I'm trying to write (the smarter the predictions, the more viscerally disturbing I find it), and ME has been the only way I can keep up a comfortable pace.


>I find it considerably faster and easier to input hànzì on a smartphone than with a physical keyboard

As a native Chinese speaker I find the opposite.

>when the hànzì are presented directly above the on-screen keyboard

It's basically the same for most "proper" PC IMEs. Candidates start to appear while you're typing pinyin, and then you choose with the number keys or space (for the first one).

Showcase: https://i.imgur.com/gQGKw11.gif

So, IMHO there is no obvious advantage that a smartphone's IME has over a PC's here. And the speed of typing on a physical keyboard beats a smartphone by a mile, overall.

---

However, I DID find that most Japanese IMEs I tried on PC (I'm a second-language user of Japanese) have the problem that you have to press some key (normally `enter` or `space`) to start the conversion, and yet another press to start choosing between candidates. This is very tedious, because you have to keep pressing `space`, which IMO shouldn't be necessary.

(Note: this is how most Chinese IMEs (like Zhineng ABC) worked 10 years ago, but they got rid of the redundant inputs later.)

Showcase (MS JP input; I know it's not the best, so feel free to let me know how other IMEs behave in these scenarios!): https://i.imgur.com/hwml1Sf.gif

Notice that I have to press space once first to enter "convert mode" (which breaks your input down into groups), and then press space again (for the first group) to make the candidates appear. I really don't get why it can't be like the Chinese IMEs.


>you have to press some key (normally `enter` or `space`) to start "convert", and even another input to start choose between candidates

This doesn't match my experience, and I just double checked on Windows 10 and OSX 10.13 - I do not believe this experience has changed in at least the past 7-8 years.

I switch to Japanese hiragana input, and type 'sake', and 酒 appears as soon as I hit space or escape. I do not hit space or escape or anything prior to typing 'sake'. If I type 'yoroshikuonegaishimasu' and hit space/escape/enter it becomes 宜しくお願いします, properly converting hiragana to kanji where it should.

I am using the default IMEs that come with Windows and OSX.


I believe I was not clear, but your description is exactly what I said.

* You type in "sake"

* Press space once, it becomes "酒"

* If you want to choose OTHER candidates (such as "鮭"), you have to press space again to show the numbered candidate list.

Please see this showcase: https://i.imgur.com/rXxjMRs.gif

>I do not hit space or escape or anything prior to typing 'sake'

I don't mean you need to hit anything prior to typing romaji. But you need to hit it twice afterwards to get the numbered candidate list.

To be fair, Win10's default IME does provide a predicted suggestion list before you press any spaces, which is nice; but for some reason this list is different from the formal candidate list: it is not numbered, and you have to hit Tab repeatedly to choose from it.


That's why it's so sad this[0] 2016 Google April's Fool stays an April's Fool

0: https://www.google.co.jp/ime/furikku/


What do you mean, "stays an April's Fool"? There are circuit schematics as well as software for it, all available under an open source license: https://github.com/google/mozc-devices/tree/master/mozc-furi...


There's no commercial product? I'd love to be proven wrong though.


It's a kit :)


It sounds like even just the Google flick keyboard on a tablet would be useful as an input device for a computer.

Have a Qwerty keyboard center, a flick keyboard left and mouse right? (Or personal preference of course)


Seems like a DataHand would work well for Japanese.


I can attest that flick-method typing is fantastic for Japanese. I'm over 100 WPM typing in English, so using standard romaji-phonetic input on a keyboard is still the fastest for me, but flick-typing is bar-none the best option for a touch-screen.

You don't even have to bother with using the modifier button (for the ゛or ゜markers) because the prediction will guess what you meant. It's much easier than even typing my native English as far as phone input goes.


This is insightful. I see behaviour which probably has similar roots. In India, I have seen people who never input any Indic-language text on a regular keyboard do so routinely on a phone's soft keypad. Prediction and multiple input methods, including phonetic Latin, make it significantly easier than the older keyboard-based methods. Besides, there were several non-standard keyboard layouts, which didn't help!


This makes a whole lot more sense. I couldn't imagine how a student could get through high school and all its essays without once thinking a physical keyboard might be an easier option.


Reading your comment really put the news article in proper context. Thanks.


I'm unsure how relevant my experience is as someone who has learned (some) 日本語 as a second language, but I find that I do not have any more trouble typing in Japanese on the keyboard than I do English. The IME in Windows and OSX is quite good at determining which kanji makes sense in context. In that case, it's as simple as pressing space/escape/enter to confirm the selection. I've always remapped capslock to escape because I'm a vi user, so I tend to gravitate towards that, but there's definitely choice there.

I do find the flick method to be significantly faster on a touchscreen, however, I'm still quicker with a regular qwerty keyboard.

I'm curious if we would see similar results if swype-style keyboards had as much domination among young people in countries that use Roman characters as the flick method does in Japan.


That's fascinating. Is the video representative of texting speed, or is this the fastest human in the world? Are they making choices that would take effort in a different system, or just clicking the next suggested letter?


>Is the video representative of texting speed, or is this the fastest human in the world?

From what I understand, he's on the faster end of the normal range.

>Are they making choices that would take effort in a different system, or just clicking the next suggested letter?

With Japanese kanji or Chinese hànzì, there's no practical way to directly input such a large range of characters. Users type a phonetic spelling, then the input method editor presents them with a menu of characters with a corresponding pronunciation. Chinese mostly uses a system of phonetic transliteration based on the Latin alphabet (pinyin), whereas Japanese speakers use both a Latin-based system (romaji) and a native Japanese system of syllabic characters (kana). The flick method shown in the video uses directional gestures to input kana.

For example, if I'm trying to type the Chinese word for bread (面包), I'll input the word as it is pronounced, "mianbao". On mobile devices, a list of predicted characters will appear above my keyboard; on a computer, a numbered list will appear beside my cursor. I select the characters I was intending to input by tapping on mobile, or by pressing the corresponding number key or clicking on a computer. The choice of characters invariably requires some amount of human input, because there are many homophones (different words with the same pronunciation).

This method of text input can often be quite slow and cumbersome, so good prediction and correction algorithms are crucial. The input method is constantly guessing which characters you want; if it's not aware of context, it'll make bad guesses and require a lot more manual selection and correction. Good input method software can predict entire phrases and is very resilient to typos.
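The type-spelling-then-select flow described above can be sketched in a few lines: a pinyin string maps to a list of homophonous candidates ranked by frequency, and the user picks one by its number, as on a desktop IME. The candidate lists and their ordering here are illustrative only.

```python
# Toy pinyin candidate selection: numbered list, pick by number.
CANDIDATES = {
    "mianbao": ["面包"],                  # bread
    "shi": ["是", "十", "时", "事"],      # many characters share the reading "shi"
}

def suggest(pinyin):
    """Return the numbered candidate list an IME would display."""
    return list(enumerate(CANDIDATES.get(pinyin, []), start=1))

def pick(pinyin, number):
    """Commit the candidate the user selected by number."""
    return CANDIDATES[pinyin][number - 1]

print(suggest("shi"))       # [(1, '是'), (2, '十'), (3, '时'), (4, '事')]
print(pick("mianbao", 1))   # 面包
```

Everything interesting in a real IME lives in how that candidate ordering is computed — frequency, context, and user history — which is why prediction quality dominates the experience, as the comment above notes.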


For Chinese hanzi, some people also use a stroke input method. You are presented with ~10 possible strokes that make up all hanzi, and by selecting them in the correct order you can write a character.


Correct, that's the 'WuBi' stroke input method. For a trained typist, stroke input is much faster than Latin-letter-based input methods such as 'pinyin', since Chinese characters are structure-based.


I don't know Chinese, but it seems like a more difficult method, because you have to remember how the character is written instead of just typing the spelling and choosing from a list.


You need to know how the character is written regardless because the stroke order is part of the character. There are some basic rules like working left to right and top to bottom. Also, each component has the same order when written out.


I also seem to recall there have been successful input systems like this for Japanese, in particular for handhelds with pen input (think Psion and similar early devices).

Optical recognition of kanji can be tough, but with stroke direction it is easier.

See for example : https://jisho.org/#handwriting


Yes, but reading and writing are different skills. For example, there are some characters I can recognise but won't be able to write correctly.


The video is using simple example sentences where the prediction is always correct, so actual typing means choosing the correct kanji at intervals rather than typing the entire sentence at once, but otherwise it's pretty representative of what a normal person can do.


In my experience the prediction on computers and smartphones is very good for everyday use when inputting Japanese. You do have to make choices of what kanji to use but often you're aware of the homophone confusion as you write (think about writing "they're", "their" and "there") so it's easy to get the hang of.


Sounds like finally a good use for the Mac Touch Bar.


The Touch Bar does support this (for Japanese at least), but I still can't get used to looking down from the screen at the keyboard, so I still use the standard method.


My thoughts exactly. Maybe Apple was a couple of steps ahead and added an amazing feature for the ever growing Chinese market. Who knows?


Feels like it would work better with the Zenbook Pro's touchpad, seeing as it's about the size of a Smartphone.


On a tangent here but curious, if you know: what's the closest input method you can get to flick for Chinese? I realize the languages are quite different (might be botching this, but my fuzzy understanding for Japanese is it has kanji+katakana+kana+romaji, where Chinese has only hanzi + pinyin|zhuyin/etc.) Do you think flick is compatible with Chinese?


They don't have that. They do have handwriting recognition.


Well the question was what do they have that comes closest. They do have a lot more than just pinyin and handwriting. If you don't know, it's fine to just say that, or stay away from the discussion, especially when you seem to be answering with such certainty on something where you didn't read the question.


They have nothing that resembles the tenkey input because they have nothing that resembles kana, unless you want to count bopomofo. If you want to make some correction, rather than just condescending to me, feel free.


Sorry, I just find statements of the form "nothing exists" a bit irksome, especially when the person talking doesn’t know what exists, as can be seen by looking at the matter at hand. Chinese has plenty of ten-key input systems, some alive, some dead, since the mid 1990s at least for phones, and possibly before that for the keypads accompanying computers. It’s been a very fertile space for innovation and I’d be extremely surprised to meet anyone who has kept up with it, so forgive me for being skeptical of your claim of perfect knowledge of a negative. I was asking a different person who sounded like he knew something, and you answered. I did mention zhuyin, which is commonly understood informal shorthand for zhuyinfuhao, the more formal word for bopomofo. There are component-based systems as well that work with radicals, strokes, quadrants of the characters, and all kinds of zany stuff. That being said, I am not very familiar with Japanese and didn’t know about the flick method.


I'm on a bit of a tangent, but it appears that a lot of people are beginning to use voice-recognition to recognize Japanese and Chinese, or are sending the actual voice clips instead of manual input methods such as handwriting recognition.

Granted, this is my very limited experience. Perhaps this could be a competing way to help communicate in foreign languages.


Seems like there's room then for a touch based input system to augment/replace the standard keyboard for these users.


Which IMEs have you tried with a physical keyboard? I don't type much Chinese on a keyboard these days but I recall that the Sogou IME for Windows had far more accurate ranking of suggestions than did the built-in one.


Great, insightful comment. I've never really typed on anything but an IME keyboard on a computer, so I wasn't familiar with how easy it is to type kanji/hanzi on a phone.

IME keyboards (for Japanese) have always confused me, since sometimes if you input hiragana or katakana it will convert to the kanji equivalent — Japanese is kind of weird in that it has three writing systems. That's just a quirk of Japanese itself, though. I'm glad that English doesn't have such an issue. Then you have romaji, the Latin-alphabet phonetic rendering of Japanese.

It's also important to note, when doing these comparisons, that Japanese has a significantly lower informational density than other languages. It also has a low reading rate (information per unit time), but a very high speech rate (i.e., how fast you can talk in the language). English is actually very high across the board.

https://www.quora.com/Whats-the-most-efficient-highest-infor...

The article here has an image of the table I am thinking of (it's the first response), with data from Lyons et al. But it does omit certain languages that are of interest as well, namely Arabic. The rate at which you can write Arabic by hand while capturing the same amount of information is very close to that of English shorthand: https://en.wikipedia.org/wiki/Shorthand. This might not tie directly to texting, but it's worth mentioning, because taking notes efficiently requires quickly capturing information. Some people prefer by hand, others by typing, etc.

I don't really know Japanese slang terminology all that well, but in English we have things like brb, lol, btw, lmfao, roflcopter, fam. I don't really know what the Japanese equivalents of these are, and I wonder: if you were to compare Japanese slang to English slang/Ebonics, which input method would be superior on a phone, the Japanese flick input method or a traditional QWERTY keyboard? My bet is on the latter (QWERTY).

Like, I know that instead of saying わたしは you could just use ぼくは (only if you're male, though), which means "I am ... {{doing something}}". You could also just say "{{doing something}}", which gets the same point across. In Mandarin you would use 我是, which in pinyin is typed "woshi"; that's the equivalent.


Do your children a favor and get them a "real" computer with keyboard and mouse, instead of a tablet.

I've found it helps with several things. For one, I've seen children accustomed to touch interfaces blurring the line between physical and virtual, i.e. swiping at physical objects like books, photos, even walls. I've not seen this behavior in those used to non-touch interfaces.

Also, learning to use a keyboard while learning the alphabet seems like a virtuous cycle, at least in my personal experiences.

And it may sound old fashioned, but making things too easy for kids makes them less independent and less willing to put in the effort required for learning.

Compare for example searching for animal pictures using voice search versus going to the search engine, typing out the term, clicking on "images"... The first is much easier and teaches instant gratification, while the second teaches perseverance and comes with a greater sense of accomplishment.

Disclaimer: purely anecdotal, take the preceding with a salt shaker...


Kids should be weaned off tablets just like infants are weaned off milk and induced to use their teeth.

The thing that really bothers me is, as Alan Kay says: the iPad interface is designed for 2-year-olds and 82-year-olds, and is being forced upon everyone in between. See, e.g.: https://www.fastcompany.com/40435064/what-alan-kay-thinks-ab...

It is a de-evolution of problem-solving culture in the sense that people are discouraged from using more sophisticated tools to step up their game! "There's an app for that" culture implies that you don't need to learn to compose tools to solve problems -- sit back and consume somebody else's hard work. While that does simplify computers so that more people can use it in the short-term, it also strips away the whole purpose of computing, which is to empower people with a more advanced tool. That's what human cultural/civilizational evolution has been about -- from stone tools, to metals, to the industrial revolution, to the information/computing revolution. Forcing people to interact by tapping on graphical interfaces is to step backwards to caveman levels of communication: point and grunt. We're giving up on human language, writing and tool use, just so that people can avoid learning a little!

Not knowing how to use a keyboard is not bad as such, if one's typing speed on a touchscreen can be as effective. But that's hard---at least for someone who hasn't grown up with touchscreens all over the place---and I had to switch from my phone to my laptop to type this long-ish comment! And the amount of typing, editing, reorganizing and adding links that I had to do would have been extremely difficult on a phone interface. Giving in to that barrier can so easily stop one from creating/contributing, and push one into a passive consumption mode!


This kinda captures one of my concerns. While I couldn't care less about the particulars of how people interact with computers, I do find it concerning that modern devices seem to make general-purpose computing less accessible. I love my phone, but I haven't even found a decent calculator for it, much less a tolerable equivalent of Excel, the shell, or a programming environment. Part of me suspects it's an inherent limitation of touch UIs, but that also seems like a cop-out.


>tolerable equivalent of Excel

You can get _actual_ excel. Google Sheets is also pretty decent. I don't think there's a libreoffice implementation, unfortunately.

>the shell

On Android, at least, actual shells are available. Most useful if you have root, but even without they're still shells, just ones without elevated permissions.

>or programming environment

There's, surprisingly, actually a few, though I don't think any are really competitive with x86 environments. There's plenty of good ssh clients if you're happy to remote somewhere else, certainly.

Notably, all of the above options I find basically intolerable on any touch device without an active stylus, and the latter two without a physical keyboard. Such devices certainly exist, though. A galaxy note with a bluetooth keyboard is surprisingly useful in a pinch, though you're always compromising with something that small.


I dunno about android, but on ios I've found a perfectly capable calculator, and it looks like both ms and google have made their spreadsheet apps (or some version thereof) available. As for programming, I have no inclination to program on a tiny screen with an awful keyboard.


This is my calculator of choice on iOS. https://itunes.apple.com/us/app/tydlig/id721606556?mt=8


Sadly, I use an Android and can't say if this is what I would want but it seems on a better path. Most things I've seen for Android are replicating a traditional graphing calculator. Which suffers from 2 major problems. One, graphing calculators already have confusing UIs. Two, the tiny buttons that work tolerably physically are much less tolerable on a touch screen.


Termux for shell on android.


Same as I use. And I love JuiceSSH for ssh.


You can emulate a full graphing calculator using wabbitEMU.


> While that does simplify computers so that more people can use it in the short-term, it also strips away the whole purpose of computing, which is to empower people with a more advanced tool.

Back in Sumeria they used to go on and on about how this newfangled writing thing would destroy civilization because it took the personal element out of interpersonal communication.

Just because the majority of people don't concern themselves with learning the intimate details of how a computer works doesn't in any way imply their lives aren't "empowered" through their interactions with one. Having the world's knowledge at one's fingertips (or voice prompt) is arguably a lot more valuable than having the requirement to construct a complex query to find out where writing was invented.

You can lead a horse to water...


> Back in Sumeria they used to go on and on about how this newfangled writing thing would destroy civilization because it took the personal element out of interpersonal communication.

I remember reading an old rant by someone (I want to say a letter sent to a newspaper) who complained that the gramophone would destroy music.


In a sense it did: I'd wager that being able to play an instrument or sing well is rarer these days than it was before the gramophone.

That said, I'll still take Spotify and access to all the music in the world over random neighbors playing the fiddle any day.


Not me! Sometimes, but not every time.

I grew up in a family of musicians and I really miss the social element of just plunking down next to someone to share in performing a song. The closest experience I’ve had to it is couch multiplayer video games, but it’s still not the same. I still do see the occasional person with a guitar on the porch, but I can’t help but think it would be a lot more common and a lot more fun without recorded music.


We still consume music but we don't play it ourselves anymore. We are missing out on the many benefits (i.e. enjoyment, meaning, mastery). Same with drawing, painting and many other arts.


Speak for yourself.

I'd like to highlight that synthesizers are very easily available now. As for physical instruments... it hasn't really improved or worsened significantly in the last few decades.

There being less easily available music wouldn't lead to more people learning how to make it. Just people coping without it.


Who is this "we"? I know lots of people who play music recreationally. Same with drawing, painting and many other arts.


> I've seen children accustomed to using touch interfaces blurring the line between physical and virtual, i.e swiping at physical objects like books, photos, even walls. I've not seen this behavior with those used to non touch interfaces.

Why do you think this is a bad thing? This is how kids learn how the world works: by trying the same thing in different contexts, like how small children put everything in their mouth. We don't criticize that; it's part of learning.

> Compare for example searching for animal pictures using voice search versus going to the search engine, typing out the term, clicking on "images"

I don't know - when I was a kid I searched for animal pictures by opening an atlas picture book. Was it better? Did I become smarter because I did this instead of using Google or a voice search?

I don't think that making information access easier is ever a bad thing.


My worry with toddlers and touch interfaces is precisely because at that age they should be exploring the world: touching, tasting, smelling, feeling... By exposing them to something that doesn't obey the laws of physics, I fear they get a wrong first impression of the world - that at an early age they are not able to distinguish between a book and a representation of one. I'm not against tablets; I just think they shouldn't be given to toddlers at all, and only to older kids once they're able to use the basics of a keyboard and mouse interface. Like walking before running, so to speak.

Regarding looking things up in a book, it's a great skill to have, even from a purely enjoyment aspect, and I would certainly encourage kids to learn it. It doesn't mean not having access to the internet or voice search. Same way one can teach growing food or making fire with a bow, while still buying groceries every week and cooking on a convection range.

It's important in my view for kids to know where we've come from, to better understand the world as it is today.


> By exposing them to something that doesn't react to the laws of physics, I fear they get a wrong first impression of the world, that at an early age they are not able to distinguish the difference between a book and a representation of one.

I'm not comfortable with how you casually throw the word "wrong" in there, as if it were an unquestionable universal agreement.

Transport yourself (by vessel of imagination) to a future where this toddler is 30 years old, and both you and this toddler-no-more are bidding for a project: a customer is going to write a neat piece of software dealing with technical spec sheets, but they need someone to do the interaction design for it. Of course, it's going to run on the next-next-next-next-next gen touch surfaces which is what is available at the time.

You both present your solutions to the customer, and after a few weeks they call you up to say, "Please don't get us wrong. Your interface was really good. We decided to go with the ex-toddler anyway. They had similar ideas to yours, it's just that yours felt a bit tied to the Newtonian laws of physics. The toddler seemed to think more freely about the medium and used that for good.

"We fear that, given your age, you have gotten a wrong first impression of digital interfaces. You seem to have problems distinguishing the difference between what we call a book these days, and one of those old objects made from dead trees."

I'm not saying it's good either; I'm saying it's different. And while the toddler's perspective may look wrong through your 20th-century eyes, it could very well be that your perspective looks wrong through the toddler's 21st-century eyes.


I work with a designer that has something like 20+ years of print experience. We make web sites. It's true that some habits are hard to break, like wanting pixel perfect shapes and alignments, which of course doesn't work too well with today's needs of responsive design, or mobile first interfaces.

But after a basic introduction to the fundamentals of modern web design, the guy has come up with some great ideas, elegant and easy to navigate... and he can still do posters, flyers, etc.

He is now even getting into 3d stuff for animations.

Point is, learning is incremental, and it's always possible to extend what one knows in new directions, if one is of curious mind and willing to change.

Now, concerning toddlers and tablets... the thing is, we are governed by the physical world in which we live. It's very important on a mental but also motor level to have a good understanding of that. I mean things that adults take for granted like balance, dexterity, hand eye coordination, a subconscious understanding of gravity, etc.

These are actually learned by trial and error, if you've ever seen a toddler stack blocks or learn to throw you'll have seen this in action.

During this period of learning about the world, having regular interactions with objects that do not follow the same rules is confusing to very young children. This is not just my opinion but something which many child psychologists agree on.

Again I'm not against tablets, but better not to introduce them earlier than 3 or 4 years old at the earliest.

I think that still gives them plenty of time to assimilate 21st century technology and think of the next big thing in 20 years. Which I probably won't understand ;-)


It really doesn't matter whether a child learns to stack blocks at ten months or at three years. Childhood comes before the rat race. I was phenomenally uncoordinated at least through the age of ten, and it didn't have anything to do with electronic devices because we didn't have those. I'm now perfectly capable of numerous complex and precise physical tasks.


This is one of those things where I'm 90% confident it isn't actually harmful to a young child...

...and yet if I had a child, I wouldn't expose them to tablets at a young age, just because that 10% is scary.


But I guess it's more because of the content, not because they'll learn that certain surfaces are swipeable.

E.g. it's probably quite bad to just let your kids watch random youtube playlists all the time.


> By exposing them to something that doesn't react to the laws of physics, I fear they get a wrong first impression of the world

By this logic, "cartoon physics" are also a big no-no, because they teach kids that gravity only acts if you look down and notice that you have no support. Next thing you know you'll have kids running off ledges and expecting to float.

And I don't see how a mouse is more "physical" and "logical" than a touchscreen. On a touch screen, the pointer is right under your finger where you touch. The mouse is this strange thing which does action at a distance, through the intermediation of this "cursor" which has no correspondent in the physical world. Which is why I presume it's easy for a toddler to understand a touch-screen unlike a mouse which requires some pretty advanced hand-eye coordination and mental models.

> Nguyen, who is 10, said she has used one before - once - but the clunky desktop computer/monitor/keyboard/mouse setup was too much for her. "It was slow," she recalled, "and there were too many pieces."

> "Human hands and voice, if you use them in the digital world in the same way as the physical world, are incredibly expressive,"

https://www.independent.co.uk/life-style/gadgets-and-tech/th...

> Why the explosion now? For decades, attractive, interactive graphic interfaces have been available on home computers. But young children’s access to these was limited by both their cost [with the cost of hardware, software, and home internet contributing to the “digital divide” (Norris, 2001)] and by the fine motor skills and eye-hand coordination required to manipulate a keyboard and mouse. With the advent of touch screens on less expensive devices – smartphones and tablets – these financial and developmental barriers have been reduced: By their first birthdays, most children can become adept at touching, swiping and pinching on the screen. As a result, children’s access to touch screens has outpaced what we know about its effects – for better or worse – on early development.

https://www.frontiersin.org/articles/10.3389/fpsyg.2016.0107...

I'm not saying we should be giving tablets to toddlers. I don't know about that. But not doing this because of concerns about being able to distinguish between physical and virtual seems pure speculation at this moment, especially when we have as precedent fantastical stories that parents typically say to kids, which are also full of physics defying stories.


It depends on the age of the child. A toddler who is still exploring the world is one thing. A 14 year old who swipes at a book is a problem. (I don't know the age of the children from these anecdotes.) On the other hand, a toddler who explores the world as if it is a touch screen has been given way too much time on a touch screen. So there is definitely a problem there.


I highly doubt you'll find a 14 year old who swipes at a book... kids very quickly learn context and rules of physics. It's much harder when some screens are touch screens and some are not, however, but I see adults trying to swipe non-touch screens all the time (mall displays, ads, iMacs etc.)


I’m in my mid-thirties, and I have definitely tried to swipe my MacBook Pro screen a few times.

I don’t think it’s an indication of anything troubling. Sometimes you just get lost in what you’re reading and momentarily forget what device you’re using. If anything, I’m glad teens are so absorbed by reading. It was a struggle to get the teens of my generation to read anything!


A little older than you (prolly couple of years).

I remember how when I was a teen and coding a lot inside the Turbo C IDE, it became second nature to press F2 frequently (to save).

Around that time, while working on math homework (on paper), I'd occasionally experience brain farts, where I would think of pressing (not reach for) the F2 key.

FWIW, I have decent handwriting today, and even exchange correspondence with some friends using pen-and-paper.

I do agree that touchscreen usage should be restricted for toddlers.


I have, on more than one occasion, reached down and wiggled my pencil in order to avoid getting a screensaver on the sheet of paper I was working with a couple of minutes ago.


Or when handwriting on paper and feeling incomplete without Vim keybindings...


I've recently moved from reading mostly actual books to using e-books. Now when reading physical books I sometimes find myself trying to pinch zoom or select text with my finger. I am 40 years old.


Meanwhile we adults are still occasionally trying to Ctrl-Z real life.


I was constantly running into this during drawing classes in college after having spent countless hours in Photoshop in high school. It’s amazing how firmly the Cmd-Z reflex plants itself as a reaction to making a mistake!

I’ve also caught myself yearning for Cmd-F when hunting for something in large chunks of printed text.


That's why traditional books (especially references and textbooks) have an index at the back.


I understood that as suspend. It still works! Who wouldn't want infinite time for video games/hobbies while time was stopped.


What else would it be? EOF on CP/M? I assume that's not what was intended since it is possible in real life (and the OS will do it sooner or later).


Undo, presumably.


I think that's mostly a function of many touch screens not being obvious about being touch screens, and some non-touch screens having interfaces that look like touch.

We have some older machines for getting train tickets and some newer ones. They probably didn't want to scare anybody off, so the interface still looks almost the same. It used to be buttons next to the screen, now it's a touch screen.

Of course you see people trying to touch the old ones. There's no signifier on the new ones (except the lack of buttons) that shows that they're different, so without a bit of thinking you don't know which affordances there are.


Try being an engineer who works with oscilloscopes and such. Some of them have touch screens with the buttons along the edges of the screen, and some have softkeys so the GUI looks exactly the same, but you press the buttons next to the screen.


I get this with fuel pumps: the screen isn't a touchscreen (on a brand-new device in 2017); the "buttons" are just some subtle screen printing on a static panel (capacitive buttons, I think), with no affordance at all.

So the on-screen text says "touch here" or similar. Took me a while to realise it meant "press inside the screen-printed rectangle in the panel below the screen" ...


> On the other hand, a toddler who explores the world as if it is a touch screen has been given way too much time on a touch screen. So there is definitely a problem there.

You are making quite an assumption here. People of many centuries ago would also say that we raise our kids in a wrong way, because we don't teach them how to survive in the wild or how to work a potato field.


I might agree with them. I'm not saying we should be all out working potato fields, but knowing how to grow your own food is valuable knowledge. Likewise, knowing how to live in the wild, at least for a few days is a basic skill that I believe everyone should know, as it could save your life even today if something goes wrong while you are out hiking/camping.

But that is all just my own personal opinion, not something we need to argue about. However, the idea that toddlers should not have that much screen time is an official recommendation: https://www.cnn.com/2016/10/21/health/screen-time-media-rule...


A computer is a machine of profound alienation from the actual ability to create and the ability to use, unlike anything someone who survives in the wild or farms would do. Many people will die having used computers for 10 hours a day but not being able to really act with computers and create with computers. This is unacceptable.


> A computer is a machine of profound alienation from the actual ability to create and the ability to use

A funny thing to say, given that today a lot of content requires computers to produce, even abstract art.

> Many people will die having used computers for 10 hours a day but not being able to really act with computers and create with computers

How is that different from people who voraciously read countless books, or watch countless movies, yet never produce anything at all, in whatever domain.

Not everybody wants to be a producer. Some people are perfectly happy just consuming, or partying all day.


I've had similar disagreements with people bemoaning VR for being isolating and non-social compared to other interactive media. Nobody decries reading literature for being somehow morally inferior to watching TV in a group.


>Nobody decries reading literature for being somehow morally inferior to watching TV in a group. //

Never heard the term "bookworm"? It's a mild pejorative leveled at people who read books a lot.


This probably varies a lot by culture (and sub-culture), but 'bookworm' doesn't really have pejorative connotations for me. I know it was probably originally intended that way, and it can still be used as an insult, but it feels neutral to me.


How many of these literature readers can't write? Not write well, can't write at all. How many can't take a video? Not take a good video, but take one at all.

But you can use a computer without knowing how to create for it _at all_.


Not sure what your point is.

Coding is significantly harder than writing or pressing two buttons on a video camera, which is also visible in salaries - basic journalist vs. basic camera operator vs. basic coder


Well, the UK day rate for a camera operator (labour only) is around £420 for TV and £600 for film - which compares quite well with contractor rates - and you won't fall foul of IR35.

Journalists, you're right - there are a lot of young people who dream of being the next Woodward and Bernstein, and so newspaper publishers take advantage.


> TV cameraman Joel Shippey: 'although a seasoned cameraman can earn between £300 and £400 a day, you'll only be earning the minimum wage for the first few years. I was doing jobs for free when I began'

> "You have to be very sure you want to do it because it involves years of long hours, challenging conditions and low pay."

https://www.theguardian.com/money/2013/mar/12/how-become-tv-...

I doubt there are minimum wage software developers.


Well, you don't start as an operator, and it also helps to have connections, unfortunately.


Countless people can drive a car but have no clue how to change a tire. If they get stuck in the wilderness during a blizzard their life is in serious peril but most people are just "you really don't know how to change a tire?"

And most computer usage is creating tons of valuable information to feed into the googlebrain.


So, for those on the farm: should they dig the earth by hand, with a spade, or use a horse-drawn plow? Tractors should be a big no-no, by this logic.

The actual ability to create is in the brain of the creator. Anything else is just media and tools.


I live in an agrarian area where potato farming is a major industry. It’s a hard life, and not one that I’d encourage my son to get into. I think he’ll be better off with a computer.


When I was a kid growing up on a farm, we'd have semi trucks come by and pick up our produce, and I thought that would be such an easier life than actually producing the crops. I told my grandpa that I wanted to be a truck driver when I grew up, and a few weeks later he drove into town and bought a computer to keep me from being either a farmer or a truck driver. He had been both, and didn't want me to be either.


I have swiped on books several times now. Just a temporary lapse, I laugh about it and a few years later it may happen again.


When watching movies in the theatre, I catch myself jogging the cursor to see how much time is left.

How I do this without a trackpad and without moving, I don't know. I only notice when it doesn't work.


I find this type of neo-Luddism to be a strange form of gatekeeping for technologists. "Back-in-my-day"-ism for Google and manpages and the computer mouse.


I know plenty of parents who let their 3/4-year-olds "play" away with tablets. Like, really: they give toddlers internet-connected devices and, let's be honest, they're not always supervised - or even most of the time, from what I've seen. It's the modern pacifier. When some of the kids come over to ours to play, their withdrawal from the devices becomes very bad. I won't let my kids near tablets/smartphones. When they're a bit older, they can start their journey on a family desktop in the sitting room. Call me a neo-Luddite all you want; I have a feeling that, all other things being equal, my kids will have a better chance at life than the ones left to explore YouTube on their own before mastering how to ride a bike.


> I don't think that making information access easier is ever a bad thing.

Except when it decreases learning and perseverance, and deemphasises the benefit of careful thinking. The term 'spoonfeeding' is very relevant here.


> deemphasises the benefit of careful thinking.

This may be a bit of a tangent, but that reminds me of what is, by my guess, a core flaw of the US Constitution - it is deliberately written in a difficult to follow style (most notably, double inversions are slammed in everywhere). Even if the intent is to cause people to more carefully consider the subject matter, in the hopes that they come to a more accurate understanding, the literal effect of making information harder to access is that more energy is spent trying to access that information. In turn, the likelihood of errors in the course of accessing that information increases.

addendum: It's good to make things less error-prone.


The way you view personal responsibility is also important here.

Spoonfeeding kinds of assumes no human agency.

If you let yourself be a leaf in the wind, yes, you'll arrive wherever the currents take you.


> For one, I've seen children accustomed to using touch interfaces blurring the line between physical and virtual, i.e swiping at physical objects like books, photos, even walls. I've not seen this behavior with those used to non touch interfaces.

I don't know, I have the urge to tap words in books to bring up definitions after using ereaders for the past ten years.


My urge: let me Cmd-F this - oh right, it's a physical book.


Interesting. I didn't quite realise it, but these may be important computer literacy advantages we can pass on to our children.

Also noting that we're typing in English here on a Latin keyboard; however, my 7-year-old son has taken to his keyboard like a duck to water. We gave him a touch-typing challenge to plant some seeds, and he greatly enjoyed it. I benefit every day from touch typing and want him to have the same advantage - I started far later, in middle school, on the good old typewriters. He's also fully figured out the Windows 10 user interface: alt-tabbing, using copy/paste shortcuts, etc. I'm pretty sure he'd figure out any GUI, as long as there was a reward at the end (starting a game / movie).

He loves his mouse-and-keyboard gaming, but in order to earn gaming time he's also got to do chores - and I'll also give him points for coding. I'm trying to get him used to Python now; it's a bit early and it can be challenging to find the special characters, but he's getting it rather than giving up - and we'll dive into it more later, with the benefit of the Cozmo SDK to make things more interesting. There's a challenge in making the understanding of code an intrinsic reward, versus the low-hanging fruit of playing computer games, but I'm hoping it'll come in the next few years.

I also couldn't help but reflect on the fact that by doing this, he's getting experience with Visual Studio Code, the exact toolset I use for my own work.


Can you point me to that touch typing challenge that your 7 year old enjoyed? Very pertinent to my interests.

Also, Python at age 7 seems pretty advanced. My 7 year old is pretty strong with block-based programming but I'm still trying to figure out when and how to transition to real coding.


Nice! It was actually an iPad app that we used, with the Bluetooth keyboard - he was using it for hours.

I'll dig up the app name when I can find the iPad, presently everyone's asleep and the iPad was hidden somewhere. I think it doesn't matter so much which one you choose though, as long as it tells you what fingers you are allowed to use for what keys. The challenge that we set for him was to perform the exercises and getting the fingers right.

He's still at the absolute beginner level with Python, which we do on the PC, but I figured it was the next logical step, as I'm questioning the engagement with e.g. Tynker. He's quick at dragging the blocks across, but the way they template a lot of the lower levels makes it too easy to blindly drag things across until he suddenly hits a wall with something he can't understand.

I'm struggling with the curriculum part of this myself: how best to get his interest, and how/when to introduce concepts. So hooking up the Cozmo robot and getting it to do simple things is a neat way of doing loops, for example.


Thanks!

I started my kid with Scratch. Then had him do some of the Hour of Code challenges on Code.org and then some of their courses (recommended). Did a couple Arduino projects programmed with ArduBlock. Now we are working with a VEXIQ kit from Vex Robotics and working our way through the RobotC course. All graphical programming so far. Want to start looking under the hood at the actual code soon.


> I've seen children accustomed to using touch interfaces blurring the line between physical and virtual, i.e swiping at physical objects

A friend of mine in college once wrote "date" on a sheet of paper during an exam, expecting to find out what time it was...


College? As in older than 13? What else did they write? Was it the first time they saw paper? I'm flabbergasted but want to know more!


These are just brain farts. Read throughout the sibling threads to see all the other personal accounts.


This was around 20 years ago (!), and yes, he was a fully-grown person. But he probably hadn’t slept in a few days — exam week and all. And he definitely spent more time with a keyboard than a pen.


Typing a word into Google gives you a sense of accomplishment? Wow.

When I was a kid (and by that I mean all the way into university), we had to go to the library (an actual physical place that wasn't our home), look in the card catalogue (an actual physical box of drawers with actual physical cards in them), then find the shelves with actual physical books on them. Then we had to look at the index in the book. Or just read it.

That took time and perseverance. There was no instant gratification. It took hours or days.

Are you advocating a return to that past and that much perseverance?

Typing a word into Google and clicking search is nothing. Trying to make it sound like it's so much better for character building is ridiculous.


> I've seen children accustomed to using touch interfaces blurring the line between physical and virtual, i.e swiping at physical objects like books, photos, even walls. I've not seen this behavior with those used to non touch interfaces.

Recently I started reading actual paperbacks again after a few years doing most of my reading on a tablet; I was quite amused to find that sometimes I'd try to swipe the pages instead of turning them.

I'm in my forties. My first computer was in 1981. I've got decades of interacting via keyboard behind me and this still happened to me. Habit is a powerful thing.


Motor memory working as intended. It can adapt in the other direction.


> Compare for example searching for animal pictures using voice search versus going to the search engine, typing out the term, clicking on "images"... The first is much easier and teaches instant gratification, while the second teaches perseverance and comes with a greater sense of accomplishment.

I find this comparison hilarious. I spent my early childhood without a computer, so if I wanted to see a picture of an animal I probably had to open a book, and maybe even go to the library to get one. So I really doubt either of the methods you mention teaches anything.


Even more important is the nature of PCs vs mobile devices. The mobile platform is designed to commoditize the user and create a stark division between users on one side and content and software creators on the other. You can't easily create anything of any substance on a mobile device and the ecosystem discourages it by e.g. treating user data as unimportant.

PCs were designed to be devices for people to create things.

It's largely a product of when the two platforms were created and the economic forces at play. I wonder what mobile devices designed to empower the user would look like?


PCs were designed to be devices for people to create things.

I think you have an idealized view of how most people use PCs. Even during the first era of PCs in the 80s, most kids used them for playing games.

By the mid 90s, it was all about games and “multimedia” on CDs.

Then Facebook and social media games.

The geeks who looked forward to InCider, Nibble, and whatever the offshoot computer magazine from 3-2-1 Contact was, and who typed in BASIC programs, were the minority.


The geeks who looked forward to InCider, Nibble, and whatever the offshoot computer magazine from 3-2-1 Contact was, and who typed in BASIC programs, were the minority.

But if you look at the PC magazines from the late 80s/early 90s, magazines not even oriented at "developers" but more "power users", you'll find huge chunks of content devoted to programming --- not just BASIC, but Asm (DOS's DEBUG command was the preferred method of creating small utilities), undocumented features, controlling hardware, and the like. Programming was viewed more as a progression/spectrum from novice -> power user -> programmer, with the result that a lot of users knew the basic concepts of how computers worked and would not have much trouble making little modifications to the Asm listings they found in order to customise them to their needs.

Contrast this with the locked-down walled-garden ecosystems where you can't even easily control the behaviour of, much less write programs for, on the device you bought!


> But if you look at the PC magazines from the late 80s/early 90s

What was the reach of those magazines? When I was young I was the only one in my high school class with a computer. There was some self-selection going on.

Today, with $300 (inflated dollars, so much cheaper than in the past) you can get a very nice laptop and program your heart away if you so want.

Those people who would have read those magazines are now on various internet programming forums, hacking Minecraft. It's only the magazines that disappeared, because now there are better ways to disseminate technical info. The absolute number of hackers probably remained similar; it's just that now there are a ton more computer users, so they get diluted.


The Commodore 128 came out in 1985 for $300. Certainly not out of reach of the average middle-class family.

But you are right, in high school, I was one of the few with a computer and even in college in the early 90s most students didn’t have computers.


And for most platforms you need to pay to develop your own programs, a no go for most teenagers.


My first experience programming on computers was on a demo version of VB4 (it could not export .EXE files), and later in JavaScript (copying and pasting things from the Internet).

Nowadays it's even easier to learn how to program without paying anything.


> I think you have an idealized view of how most people use PCs.

I don't think that he has an idealized view per se. It is rather that people who bring up such arguments are often surrounded by similarly minded people, which creates a kind of echo chamber. So people who make these arguments actually have observed lots of people using their PCs/smartphones. Unfortunately, that "lots of people" sample is strongly biased towards their echo chamber.


Of course, but having the technical possibility to create something on the machine is what makes this minority exist in the first place. Now take the same demographic with a locked-down class of devices, and the BASIC-programming crowd falls to 0%.


With iOS 12, Apple will be integrating automation that lets you automate actions within other programs, either visually or (from what I've read) with JavaScript. Just imagine what kids can do when they can use their phone to automate smart-home devices.

There are also apps that let you program robots.

https://www.apple.com/shop/product/HK962VC/A/ubtech-jimu-rob...

Amazon announced an easy way to program Alexa.

https://developer.amazon.com/alexa-skills-kit/alexa-skill-py...

There is also Swift Playgrounds.

https://www.apple.com/swift/playgrounds/


IMHO all very poor substitutes.

Yes, I know Apple wants to police their App Store, but writing something for your own personal use (and maybe to give some copies to friends) shouldn't be a huge bureaucratic hurdle.

(Android has smaller hurdles but still not insignificant --- when the first step in the tutorial is "download and install this gigabyte-sized piece of software", you can be sure a ton of potential users have already been put-off. Compare with early home computers that booted to a BASIC prompt, or PCs where DEBUG was there and ready to create tiny/small "apps" immediately.)


How will it be a huge hurdle to write your own Siri actions that can control other apps on your phone and your smart devices with iOS 12? You can look on the Internet to see what people have been doing with the Workflow app (the app that Apple acquired and is integrating into iOS) without Apple’s hooks.


Well, you are speaking about an OS that isn’t publicly released. I’m speaking about programming capabilities of iPhone et al. as they are right now and have been in the last 10 years.


The OS may not be released, but the Workflow app - the basis of the automation that Apple acquired - has been out for years.

https://workflow.is/

https://www.lifewire.com/best-workflows-ios-app-4153797

The current integrations between third party apps are based on x-callback-url. Third party developers have been using it for at least 5 years.

http://x-callback-url.com/
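For reference, an x-callback-url call has the general shape below, per the spec on that site; the app names and the action here are made up for illustration:

```
exampleapp://x-callback-url/open-item?id=42
    &x-success=callerapp://x-callback-url/success
    &x-error=callerapp://x-callback-url/error
    &x-cancel=callerapp://x-callback-url/cancel
```

The receiving app performs the action, then opens the x-success (or x-error/x-cancel) URL so control returns to the calling app, which is how chains of app-to-app automation were built before Workflow.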


Besides Swift Playgrounds, nothing is really close to real programming. Even worse, it relies on expensive+ external hardware of questionable utility (the "smart" devices) that has repeatedly been shown to be insecure. And Alexa is a whole problem of its own, given the huge privacy threat it poses. I will definitely not teach my kids about happily wasting money on GAFAM/PRISM surveillance tools.

+ I would rather spend 40€ on a Raspberry Pi than on a smart light bulb or anything like that.


Writing my first program in Applesoft BASIC wasn't "real programming" either, even in 1986, but it was my gateway that got me interested.

Programming with Swift Playgrounds or building an Automator action that can control smart-home devices will hold kids' interest way more than "real programming".

I was excited in 1985 at 12 just to be able print something on the screen. More recently I was asked to give a presentation to some kids during career day. Knowing that they wouldn’t be interested in a talk about doing yet another SAAS app, I recommended that they talk to a friend who does game development.

If they were younger, I would definitely recommend a presentation on automating smart home devices activated by Siri or Alexa.


I strongly disagree. I use my tablet for painting and sketching, composing music, and sometimes a bit of writing. These devices are amazing for content creation.

They’re currently not good at programming, which is a small subset of content creation. This problem plagued personal computers for a long while as well. For example, Apple’s Lisa could only be programmed by attaching it to a second Lisa (which was extremely expensive). Being able to program your computer with itself wasn’t always common.

These are early days still. Tablets will get there some day. It's a technical problem, and technical problems have technical solutions. Give it time.


I think it's also a governance problem. The relative inability to program a device from itself reduces the number of bricked devices. A bricked device either becomes a cost to the manufacturer, a cost to the consumer, or can be quickly reflashed by the consumer. Not only does reflashing likely require a second device, but it also works against the locked secure bootloader concept, which is such a popular (albeit controversial) feature for keeping these ecosystems healthy.

The days when you could compile an exe without any kind of signing and distribute it with nobody getting a warning about an unknown developer were the days when everyone was bluescreening monthly; hardly a coincidence.


Programming is getting better. I regularly use my iPad with a Bluetooth keyboard to hack over SSH/MOSH, and often just use the onscreen keyboard for short edits. Vim is surprisingly usable with it, actually; modal editing is a natural paradigm for a touch screen.


> You can't easily create anything of any substance on a mobile device

The Instagram community would like a word with you.


You just made his point a lot stronger.


There is plenty of truly artistic content created on mobile devices and posted on Instagram; it's not all fake vacation selfies.


True. There are also pictures of food.


Only if you take the attitude of "I know what good art is, and this ain't it."

Don't be like these guys: https://en.wikipedia.org/wiki/Fountain_(Duchamp)


There are bluetooth keyboards for tablets. I spend most of my laptop time on a browser and in ssh, both of which I can do effectively on a tablet with a keyboard.


> And it may sound old fashioned, but making things too easy for kids makes them less independent and less willing to put in the effort required for learning.

Yes, agreed. We're building a generation where even pulling the parking brake when you park is seen as "hard", and anything without a discoverable, hand-holding, guided experience is "inconvenient". Netflix even has a hard time getting people to update their cards when they expire or are cancelled.

Life is not a Steve Jobs utopia. Some things require work and persistence.


I'm guessing I'm a generation older than you and I agree, my generation built a lazy, useless generation.

My parents say the same thing about my generation.


While there is a lot of that, it's one thing to "not do X because it's never needed and is supplanted by new technology"

Young people don't know what a VHS player or a record player is, and that is fine.

Another thing is to think everything is solvable with a phone app, that everything is on Google or that everything is learnable through a step by step YouTube video and requires no effort.

That causes frustrations as well.


With A/R, it will get worse.

When I used to work with Motorola in the early-to-mid-90s (it was an actual company, you know), we had some HUD A/R things, and my kids (3 y.o.? 4 y.o?) got pretty accustomed to flicking at books and objects in real space.

Motorola tanked. Kids grew up. Learned how to type. Everything is fine. It was just a weird thing for my wife to try to process at the time.


I've absentmindedly pinched to zoom on books and occasionally mentally ctrl+z when I screw up. Not a major problem.


I've never done that, but I sometimes will pick up a paper book, and instinctively want to press control-F in order to search for a particular word or phrase.


Chromebooks are good devices: inexpensive, equipped with a keyboard, and many can run Android apps.


> And it may sound old fashioned, but making things too easy for kids makes them less independent and less willing to put in the effort required for learning.

I wonder if this makes older programmers better learners. For instance, I learned BASIC and QBASIC before I had internet access. When I ran into problems I only had the provided language documentation to help me and I was largely on my own.

Now when I run into problems I have many resources to help me, but sometimes there is no answer on StackOverflow or anywhere else I can find and I have to rely on myself to find a solution.


I have mostly forgotten how to write with a pen. I am absolutely fine with that.


I use fountain pens exclusively and take pride in my penmanship. I also love my Apple Pencil - it lets me use and practice my skills, and is a very good approximation of several different types of nibs. Even my older Wacom tablets weren’t as good.


Any tips on improving penmanship? Mine has always been absolutely horrible, which stinks as there's lots of things I prefer to write out by hand. Most things, in fact, unless it needs to be electronic.


Learn a different script.

Right now you know one script: whatever scrawl you learned as a youth. Pick up something else, preferably a print hand rather than cursive.

There are a ton of options! Here are some examples: http://medievalwriting.50megs.com/scripts/scrindex.htm

A print hand lets you focus on correctness. Once you have some confidence, if you like, you can tackle a cursive hand, such as Spencerian: https://en.wikipedia.org/wiki/Spencerian_script


I'll look into those, thank you!


I started by changing my writing drastically - I wrote in smallcaps for a while, until I “forgot” my handwriting, then started over.


The first example provided in the article (indeed the first sentence) is “How do I double click?”, and I want to focus on this.

As a kid in the 90s, I just took double clicking as a necessary evil. But after many many opportunities of trying (mostly successfully) to teach it to both the elderly and to kids, I've gradually grown to hate it. It's such a horrible gesture, difficult to perform, complicated to reason about and entirely disconnected from any real-world metaphor.

May double clicking die off and vanish, never to be found again, except for in the annals of bad ideas.


I often encounter people having issues double-clicking.

Actually, a lot of people have trouble just single-clicking, so double-clicking is even more of a challenge.

A lot of people just can't get the knack for clicking twice in quick succession. And the inconsistency between when they should single-click and when they should double-click causes a lot of confusion.

I think it probably would make life easier to get rid of the double-click.


The other week my Dutch language teacher (he's ~55) wrote me an email, "I would like to try Linux, can you please assist?", after learning about open-source and libre software. I was pleasantly surprised to receive it. So last week I went to install Fedora 28 on his new laptop. It all went smoothly. No, that's a lie; I had to do a little scary command-line surgery while he watched me do it (and heard me assure him that he wouldn't need to do anything like that).

I set up all the requisite things, and since it was his first time, we went through a few common desktop workflows tailored to his needs. There I noticed he was single-clicking where he needed to double-click, e.g. while opening a directory. I briefly explained the difference between the two and we tried again. He got better, but still struggled. I biked back home wondering whether he'd continue to struggle with the double-click. :-(


Maybe Ubuntu would be more appropriate as a first-time distro? I use Ubuntu because I end up frustrated having to google a million issues whenever I use any other Linux distro. I grew up on the Windows desktop and can't really see any but the most determined iPad user sticking with Ubuntu, let alone anything else.


I considered it, but Fedora is the beast I know for ~10 years, and I could debug it more confidently if something breaks (and I know I'll inevitably be called for tech support). Also, I've had "great success" with Fedora on my Father's (age: 67) desktop for already 3+ years; so I "built on top of that".

My professor's use cases are quite simple: writing articles in LibreOffice (he's already familiar with it on Winblows), YouTube, and a browser with only one tab at a time; he was blissfully unaware of the concept of tabs until we walked through it!


I changed to single-click (and select-on-hover) a few months after moving to Linux, and won't go back; maybe your prof would like it too?
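For anyone wanting to try the same thing: on GNOME's file manager (Nautilus) it's a one-line setting; this assumes GNOME, and other desktops have their own equivalent option.

```shell
# Switch Nautilus from double-click to single-click activation (GNOME).
gsettings set org.gnome.nautilus.preferences click-policy 'single'
# Revert with:
#   gsettings set org.gnome.nautilus.preferences click-policy 'double'
```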


Hmm, I didn't consider that. I'll write to him to ask how it's going, and will suggest if he would like to try this.

Thanks for the idea.


> And the inconsistency between when they should single-click and when they should double-click causes a lot of confusion.

There's a lot of anecdotes about how uneasy/annoyed/bothered it can make you feel when watching someone unfamiliar with a scroll wheel instead mouse over and slowly click+drag the scrollbar, but the big one for me is seeing someone double-click on a hyperlink...


In my experience, dragging the scrollbar slowly isn't something I've seen. What I have seen people do is drag the scrollbar really quickly and end up at the bottom of the page when what they want is in the middle. They correct this by dragging the scrollbar sharply up, but then they're back at the top of the page. Rinse, repeat.

I also see people use the arrow buttons on the scrollbar. But they just click the button repeatedly instead of holding it down. Which means that for a very long page, it can take them a long time and hundreds of clicks to get to the section of the page they want to see.


I think it was Windows 98 or maybe ME where Microsoft tried to make everything a single click. People hated it so MS went back to the double click method.


The desktop enhancements (Active Desktop?) that came with Internet Explorer 4. You could install that on Windows 95 and get "almost Windows 98" that way. I liked it and kept it this way for years to come. In Windows 10 it's still an option you can turn on but one day I just did not bother to turn it on on a new installation.


Thank Jobs for that. Two physically, visibly distinct buttons were "too complicated", so instead the second action is invisible and undiscoverable.

(Jobs also stuck us with the invisible fragile clipboard in place of the Star copy/move pattern.)


That being said my mum has been using actual computers for 20 years now (without understanding more than just what she needs), and she double-clicks on everything, even when a single-click should apply. “It works that way”. True.


Wouldn't work for something that toggles on a single click, e.g. a checkbox.


I think it was the idea that single-clicking in the desktop is a select action, while a double click is an "open/navigate" action.

It makes sense to me.


Until you have someone double-clicking on browser links (and other things) because that's how you open stuff on a computer, as I've seen multiple less-experienced people do.


I actually added protections in some of our internal software against that recently.

People (professionals with a lot of experience) would open orders or product sheets twice (two new tabs) and it created issues. So now, you can't double click.

Once you click something, it is disabled for a few seconds.

I still hear double clicks, but now it doesn't create any bugs.
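The same idea can be sketched generically as a cooldown guard around an action; this is a hypothetical illustration, not the poster's actual implementation:

```python
import time


class Debounced:
    """Wrap an action so repeat triggers within `cooldown` seconds are ignored."""

    def __init__(self, action, cooldown=3.0, clock=time.monotonic):
        self.action = action
        self.cooldown = cooldown
        self.clock = clock          # injectable for testing
        self._last_fired = None

    def click(self):
        """Run the action unless it already ran within the cooldown window."""
        now = self.clock()
        if self._last_fired is not None and now - self._last_fired < self.cooldown:
            return False            # swallowed: treat as an accidental double click
        self._last_fired = now
        self.action()
        return True
```

On the web the same pattern is usually just the click handler setting the button's disabled attribute and re-enabling it a few seconds later.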


Well, I always found it a hassle to select text in web documents for that reason. Want to search for a word in a hyperlinked headline? Meh, the browser decides to open the link. It's even worse on touch-enabled devices. Double-clicking things to open them does make sense, especially if they are more than just buttons.


Alt + select is a standard now in all major browsers.


This is like showing fire to a caveman. Holy crap. I wish Firefox had a tooltip when I'm "dragging" links to say "Hey, I bet you're trying to select!"


Well today I learned something. Thank you !


This is similar to the stereotype of: young people are good with computers. Browsing Instagram is not the same as creating value through programming and complex problem solving.


I've always felt that young people are seen as good with computers because they're more confident and less afraid of failure, and that even if their only experience is instagram they're more likely to google a scary error message or click around at random in various menus.


I'm not sure that that is the case, any longer. I have a lot of trouble with our college-aged interns, because they will barely make an effort to try doing something before getting stonewalled and bailing out to go ask someone to do it for them. Sometimes I just want to yell at them to run the code and see what happens, and then try to figure out why what happened happened. And these are computer science majors.


Maybe they need someone to yell at them to break them out of their comfort zone and learn something new. They might find they enjoy it or even that they’re good at it once they do it.

It’s human nature to feel like the next generation is coddled, of course. Maybe the real problem is that the current generation isn’t as capable of instruction as previous ones.


> doing something before getting stonewalled and bailing out to go ask someone to do it for them

Funny, because this is the problem. One of my major life lessons was learning not to always persevere, and to ask for help as soon as possible. It solves more problems faster, teaches you more ways of looking at a problem, and scales smoothly to delegation.


This cannot be emphasized enough. In a social+technical work environment, the more you "persevere" on your own, the more you'll become pigeonholed as a tech guru who solves immediate problems but loses influence in the higher-level decisions of your organization. There's still a balance, though, as asking for too much help will lead to resentment and people just straight up thinking you're stupid.


I'm not sure it really is a good thing that being too good at something leads to pigeonholing. It's a shame that what was once (or in other disciplines still is) called mastery is now seen as disqualifying you from the higher levels. Can you be pigeonholed in medicine? Nuclear physics? Professional sports? Cooking? I could go on. I'd frankly rather have people with less of a high-level view and more knowledge of what it really means to do something before deciding to do it.


> being too good at something is leading to pigeonholing

People who are good at something tend to ask for help when they need it. Asking for help is a way to get better. Blindly persevering wastes time and tends to force one into dead ends.


I don't think giving up instantly is the same thing as asking for help when you need it, and I don't think e.g. spending 30 seconds googling the definition of "ENOENT" before calling someone else over is blindly persevering. I think there's a path through the middle of the two extremes, where you try a bit of basic problem solving on your own, but ask for help when you're stuck, and learn from the experience.


> spending 30 seconds googling the definition of "ENOENT" before calling someone else over is blindly persevering

It's not blindly persevering. It is, however, giving up an opportunity to learn from and interact with a colleague. Whatever you were working on related to the query might have additional context filled in or expanded upon through conversation. Some of my most productive and unexpected insights came up as a result of such banter.

When you ask "what does this mean," you're asking for a definition. You're also communicating the problem and hinting at your angle of attack. Possible valuable and unexpected responses include "you're approaching it wrong" or "why are you working on that problem when X looks more lucrative"


I don’t know if it’s because I’m an old fart, but my generation (born early 80s) would teach their parents how to use a computer or the internet when we were teenagers. The next generation might teach me the latest gestures on iOS or Instagram; that doesn’t seem like a new technology or a useful skill to me.


It's because old timers are much more afraid to look bad since you are "supposed" to live up to that external image you put on LinkedIn. :)


The BBC loves to divide the population into “digital immigrants” and “digital natives”, always with the subtext that the “natives” are superior. But their so-called journalists can’t even understand that the “digital immigrants” they sneer at actually built everything, while the “natives” are mere consumers.


Agreed. Though if we think about it, kids who were computer-savvy in the 90s were certainly more technologically sophisticated than computer-savvy kids now (when you have to actually install TCP/IP, it makes you learn what it is!). But the population of kids with daily access to computers was much lower then than it is today. So now you have a much bigger base which is on average less technical, but because of its sheer size, there is probably a much greater absolute population of sophisticated kids now than 20 years ago.



I was just about to post this so I guess I'll just upvote this instead.


Watching TV makes you good with RF.


Watching TV in the 50s made you good with knob adjustment and basic electronics maintenance, just because things tended to break and also tended to break in fixable ways. The same thing is true today, kids that play Minecraft aren't getting any better with computers by clicking on blocks, they are getting better by doing the things necessary to keep the blocks there in front of them (because if you want to install mods or run a server for your friends for example Minecraft becomes about as reliable as an early TV).


Watching TV in the 50s made you good with knob adjustment and basic electronics maintenance, just because things tended to break and also tended to break in fixable ways.

In particular, tube replacement was pretty common DIY and even nonspecialist corner stores would sell tubes and have self-service testers:

http://travelphotobase.com/v/USOK/OKCH66.HTM

In similar analogy, I wonder how many people today know how to replace a lightbulb and will do it themselves, and what that would be like in a few decades...


Embedded computers have done a lot to ruin automotive DIY. Someone should write a short story where nobody can change their low-failure-rate LED light bulbs without calling a technician who has a programmer card with the right IoT private keys. Electric utilities are no longer common-carrier, and underground cyberpunk "Edisons" fabricate incandescents that disguise themselves as toasters on the grid in order to circumvent the expensive licensing deals between lightbulb manufacturers and power companies.


Likewise with the Japanese living in the land of the future in the 90s-00s. I found there that people having access to tech was very different from people being technical. Fancy phones, but few personal computers or the skills that go along with them.


I'm confused here. The article implies that kids aren't learning how to type because they don't have access to a PC at home. But for the majority of the history of computing, it was the case that most people wouldn't have had access to a PC at home as children. My parents learned to type on typewriters at college. I had a mandatory class entirely dedicated to typing in middle school. Have schools stopped offering computer classes, assuming that kids already know how to use them from their experience at home? If so, then that sounds like a problem all on its own given that poorer households might not have had a computer even during the PC golden age.


> Have schools stopped offering computer classes,

Many have stopped or at least severely cut back

> assuming that kids already know how to use them from their experience at home?

Partly this and partly budget cuts. Computer skills aren't on the standardized tests, so they are a "waste of time" like art and music and history and everything else that isn't on the test. So they are deemphasized.

Some schools with money for computers and a computer teacher will at least try to integrate it into their language instruction by doing exercises on the computer.


Typing class and most computer classes are long gone. One of the problems with dedicated computer class is it has most of the characteristics of a vocational class. You need a dedicated teacher, a special room, and special equipment. At least shop classes don't need many updates to the equipment. Computers are now a resource, and most kids get minimum instruction (just enough to type papers or use the educational programs).


Some of my teachers knew how to touch type. We were never taught. Nowadays I work with people who type with two fingers. I'm 30 and live in the UK. The ball has been dropped on computer skills. Anyone can say they "know Microsoft Office" but they won't be tested on it and nobody ever talks about typing as a skill.


>Given that they can write and submit their school reports with smartphones

Would students really do this? I couldn't imagine trying to write an entire essay or a lab report on a smartphone. Maybe Japanese schools don't have to turn in longer things like that?


As someone who recently worked in education... yes. Anecdotally, I tended to see it at the community-college level with students who owned a large-screen phone and didn't own a computer.

I don't think anyone is writing whole research papers or serious lab reports on phones. A two-page reading response is totally doable. Something with footnotes or equations, not so much.


Yes, one can technically vomit out a few hundred words using a small touch screen. But the tools make it damn difficult to edit and re-organize text. Does this mean that all such reports are basically stream-of-consciousness writing instead of a more carefully polished piece of work?


It's not hard at all to edit and reorganise text on a larger-screen smartphone.


It seems like it would be more awkward than with a mouse+keyboard controlled text editor, but I'd be happy to be proven wrong. I'd love to get better with working with touchscreens, but I haven't seen any way of getting the composability I've become accustomed to on the *nix command line and associated editors (be it vim, emacs or even the monoliths of atom and vscode).


Of course it's more awkward. But it's also not hard. And if a phone is all you've got, and you don't have the money to spend on a computer on top of it... then you make do. It's not the end of the world. It's fine.


That's a fair point. I know it's the not the end of the world, but I guess I was just hoping to be shown a particularly efficient method of touchscreen input.


Also, to further the point: it is possible, though even more inconvenient, to fix mistakes and reflow text on a physical page - but that doesn't mean that all text written on paper is stream-of-consciousness.


The more I use my iPhone X, the better I am at it. I actually prefer its keyboard to the iPad’s for prose now, because you can long press the period and select text much more easily. If you can do that with the iPad, I’ve not found it yet.


iPad doesn't have 3D Touch yet but you can press the keyboard with two fingers instead.


> damn difficult to edit and re-organize text

The honest and unfortunate answer is, in my personal experience, that editing and re-organizing text doesn't often happen in students' papers regardless of the tools available to them. Not knowing how to type well is one thing, but a _lot_ of students get to college not knowing how to write a half-decent paper (and then still don't learn squat in their freshman composition class).


There are many writing apps available, for example Microsoft Word's free edition. One does not have to compose in a text box: use the app to write it, then copy and paste the result.


Well the good side is that the UI also makes it very hard to select and copy-paste from wikipedia!


I mean, if you grew up with phone keyboard as your main method of text entry, this might be normal to you.


I used to write my reports on a type writer. A smartphone would have been a dream come true back then.


But didn't you write a longhand draft first? Or at least longhand outlines?


When I first got a typewriter, I wrote shortish papers in two drafts on the typewriter. The outline was in my head, and "editing" was staring at the first draft and rearranging things as I typed the second and final version.

You probably use a different technique.

I changed how I did things a year later when I started typesetting papers in TeX.


Yeah, but I mean, those didn't have spell check or word completion or copy/paste or even backspace. Do you know the pain of typing a wrong character on a typewriter? It gets old.


FWIW I had a typewriter in ~1995 that actually had a backspace. It had a white ribbon, similar to white out and it kept a small history buffer so it knew which key to erase. It was an amazing invention.


Yup, I was also enthusiastic about my grandfather's modern typewriter, which had a one-line alphanumeric LCD: you could type a line, re-read and correct it, and it would print it only when you pressed return.


I think mine had this feature too.


Japanese isn't as well suited to a keyboard as English is; a smartphone and keyboard are about equally fast for typing Japanese.


I have totally had college students turn in essays that were written on their phone.


Yes.
