Mouse and keyboard are dead, "cannibalized" by touch, which is now also dying? This is ridiculous. The entire world of "getting things done" on a computer - business productivity, programming, content creation, science - is still firmly entrenched in keyboard and mouse. Nothing has come even remotely close to challenging that yet.
I'm sure I'm somewhat biased by interacting mostly with tech-savvy people, but even people like my mom - who loves her iPad and is right in that target demographic - still use a mouse and keyboard every day. I just don't understand how anyone can claim that we are even close to replacing those tools.
Yes, lots of touch devices are now being sold, but a lot of them are being sold into a channel in which nobody used a physical keyboard or mouse before (phones) and the rest are being sold into a device category that didn't quite exist before (tablets).
A lot of people seem to be misreading the decline of PC/laptop sales and the rise of tablets and smartphones as being totally interrelated, when I don't believe they are. Certainly some percentage of tablets are purchases that otherwise would have been netbooks, but IMO the bigger reason PC/laptop sales are dropping is that people (granted, I'm only discussing First World people) generally already have one AND just don't need new ones as often anymore. Anything with a Core 2 Duo and 4+ gigabytes of RAM "ought to be enough for anyone" (and in my experience, for non-power-users it certainly is).
I'm a hardcore developer/gamer/"power-user," and even my system-buying and major-upgrade cycle has stretched to about 3-4 years when it used to be 6 months to a year. Combine that lifecycle extension with the fact that most people can get by with just a laptop (because the practical power difference between even a low-end laptop and a desktop is insignificant), and it's no wonder that PC/laptop sales have suffered for everyone but Apple, one of the few smart enough to be selling systems with actual new-system differentiation ("Retina" screens).
tl;dr - I know lots of people (including myself) who have bought new smartphones and tablets over the past few years. I don't know a single one of them who doesn't use a "real" computer with mouse/keyboard daily, and on average much more than their tablet/smartphone (if you exclude phone calls from the smartphone use). But their smartphones are probably less than a year old (because significant practical hardware progress is still being made in this space), while their laptops might be years old and plenty fine for what they use them for.
If I buy an Android device, then within a year there is a new one that's significantly better than the one I have in just about every way. It runs a newer version of the OS, and therefore there is a much wider selection of software I can run.
OTOH my development workstation is mostly built out of scraps and I don't feel much need to upgrade it despite spending ~10 hours a day using it and I probably spend an average of 1 hour a day using the smartphone.
So spending does not necessarily correlate with usage in this case, kind of like how people may have a classic sports car that they pour money into but drive twice a month and a much cheaper daily runner that they actually use more.
Most of these "post touch" options are even more inconvenient, more abstract, and have greater shortcomings than touch or mouse, so I don't see them replacing touch. Now if an interface technology can win across the board: more convenient, more direct/less abstract, more definite/reliable -- then it will win. But these technologies lose across the board, at least in their current state.
If at all possible, I much prefer mouse and keyboard over touch, touchscreen, or in-air input (EyeToy, Wii, Kinect, etc.). It wouldn't be the first time I've plugged my computer keyboard into my phone to be able to type decently (USB keyboard plus a female-USB-to-male-micro-USB adapter).
> you could easily type something different from what you actually want
Lol, the same goes for touchscreens, or accidentally dragging something with a mouse. GUIs aren't much better than typing commands in that respect.
> it'll be "move it where I'm thinking."
I agree with you on that. The problem is that I don't have the hardware to experiment with this (and I don't expect that I could develop anything better than other researchers are doing), so it's not possible yet. In the meantime, I prefer keyboard over everything, including mouse. The problem is that websites are optimized for mice, so many aren't very usable without one. Touchscreen with a stylus is good for drawing, but that's the only application I can think of, and you might as well just use pen and paper for that.
I wouldn't want a laptop without a touchpad, but that doesn't mean I want to use the touchpad most of the time.
I also hope that a touchscreen is a cheap or universal feature next time I make a purchase, because why not.
30 years from now will we be using neural interfaces to enter Unix commands into 80x24 text windows projected onto our retinas?
GUIs, on the other hand, can be used well by humans but poorly by computers.
Why would you want the interface to be usable by computers? So that people can easily delegate common tasks to the computer (i.e. automate things). Once you realize you do something often, automate it, and it will be trivial to do it again. Once it's automated, you can build even more powerful things on top of those automations, and achieve even more with less effort.
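To make the composition point concrete, here's a minimal illustrative sketch (the task and filenames are made up for the example): once a small task is scripted, a second automation can be built directly on top of it.

```python
# Illustrative only: a tiny "automation" and a second one built on top of it.
from datetime import date

def dated_name(name: str, when: date) -> str:
    """First automation: prefix a filename with an ISO date."""
    return f"{when.isoformat()}-{name}"

def archive_names(names: list[str], when: date) -> list[str]:
    """Second automation, composed from the first: rename a whole batch."""
    return [dated_name(n, when) for n in names]

print(archive_names(["notes.txt", "todo.md"], date(2013, 3, 1)))
```

A GUI makes the first rename easy but gives you nothing to build the batch version from; a scriptable interface makes the second step nearly free.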
So to my mind you have to have some notion of what is 'forcing' the change before you can really say that things will change.
To use a current example, 'touch'. The force here is twofold: one, the amount of keyboard interaction you need to consume content is much less than the amount you need to create it; and two, keyboards take up space that could be filled with other features. Touch became credible when you could use it exclusively to operate the device in an acceptable way. It's why it failed to displace keyboards on the original Tablet PCs (you needed the keyboard too often), and it's why the iPad without a keyboard is a lot less productive for processing email.
So 'post touch' needs, by my reasoning, some force behind it if it is going to displace touch. And we can look at those forces and see where they are coming from.
Clearly people talking to their devices is cool, but it's annoying to others on the train and potentially embarrassing. That is an example of a force which doesn't allow voice to displace touch. But the Myo device seems to be operable reasonably privately if it is sensitive enough. The Leap lets you do gestures locally for action at a distance; I could see that having some pull if people continue with large displays at a distance, but being less effective if the trend becomes many touchable displays close to you. I would say Kinect is a sort of mixed bag here: great for games, a huge win for robotic vision, but less durable as a new general-purpose interaction method.
It will be fun to watch. Just hope my toy budget can keep up!
These are clearly pretty arguable statements, but they are here to serve as illustrations of the force pushing change on people rather than a quantitative measure of that force.
Hopefully Subvocal Recognition can improve enough that it will solve this particular problem. They've already created non-invasive forms of electronic signal relay that could be used for this as well.
It definitely will be fun to watch. I'm with you on the skepticism of video capture devices like Kinect being the solution to non-touch interfaces. We'll see though :D
Moreover, I imagine (okay, hope) that intense miniaturization is going to one day produce something like "Google Contact Lenses", which are going to be even more restrictive in the sort of interactions they permit.
And anything that gets popular for Google Glass is probably also going to be good for existing contexts like cars.
I don't know exactly what this is going to be, but it'll be cool.
As you said: "where I'm looking", "where I'm gesturing", and "where I'm thinking" are all possible - but totally different. I think if post-touch gets critical mass, we'll see a much more diverse and dynamic space of interfaces. An implication is that which interfaces to support would be a much more important decision - right now you can just default to building a web app and a touch app.
Thought experiment: I offer to replace your /WORK/ computer with an equivalent computer that has a touch screen and voice recognition. (But no physical keyboard or mouse). Do you take this offer?
I'm guessing most of you would say "no", because you'd get less work done.
I think it's telling that touch interfaces are mostly being used in consumer devices. The "consume" in "consumer" is the key hint there.
Look at the terrible touch-driven applications and you'll probably find that a goodly number of them assume the same sort of interaction paradigm that you'd find with a keyboard and mouse. The excellent touch applications have made good use of the fact that you have more than one finger and that you can perform gestures with them, that fingers get in the way of stuff, and that you don't want to have to tap multiple times to get something to happen (whether that's finding a file or so on).
It'll take a shift in thinking to make post-touch effective (whether that's gesture, vision tracking, thought etc). If we think about a computer as being a desktop with folders on it, or an 80x24 terminal then we're looking at it the wrong way (an extension of the "if you see a stylus they blew it" principle I guess).
There are, though, UIs where item organization may happen without user input, like Genius playlists in Apple iTunes. This might be the future, I guess.
This system would require direct connection to signals from your brain, intercepting and consuming them, never reaching your muscles. Resting comfortably with REM-like paralysis, you control your game character naturally and intuitively, moving it and not yourself, all from the comfort of your couch.
Technology like this already exists in its infancy, such as providing moveable arms and hands to those who have lost them:
"Move Arm" ---> Engage Robotics ---> "See Arm Move"
"Move Arm" ---> Animate Game ---> "Feel Arm Move"
When will this technology be available? What bottlenecks need to be overcome for it to exist? How much will it cost? What other applications could it have? Will it be released before Episode 3? ;)
And this is not to mention the cognitive trouble this could lead to. It isn't hard to imagine a situation like Mal's from Inception happening.
A personal aside — I use a Kinesis Advantage keyboard, as I have found that typing too quickly on other keyboards causes me wrist pain after a few minutes (I'm not affiliated with Kinesis or any other keyboard company for that matter.)
I don't think that keyboards will ever go away. I think that touch will complement keyboards, but will never replace them because keyboards allow you to type more quickly and efficiently and in a more comfortable fashion.
Perhaps when touch technology incorporates tactile feedback (by using electrovibration), there will be touch-based simulacra of keyboards. However, there will still be the problem of getting "iPad neck" when typing if the keyboard is part of the screen.
For most people, voice transcription software allows them to compose documents much faster than typing. But there are many problems with voice: besides mis-transcriptions, it is very annoying to listen to someone else dictating text while trying to work at the same time. Technologies using brain signals or eye blinks to create text are still a long way from competing with keyboards as a primary input device for healthy people.
The question I pose is, how do we determine our intentions? I believe that the idea of a frictionless interaction with devices is impossible for anything worthwhile, because anything worthwhile requires actually thinking about what you want.
I agree with the author that we are heading towards interpreting the user's intentions in better ways. But physical inputs with tactile feedback have yet to be beaten in their domain.
>And here we are, not even a full year after Gabe's talk, and the first post-touch products are already landing on Planet Earth: Glass, Myo, Leap Motion, Kinect, on and on.
I don't think these devices/interfaces are really any better than touch with respect to "do what I really mean". It's still hand eye coordination at the level of our limbs and motor senses.
I think real "do what I really mean" post-touch interfaces are the ones where our minds become the actuators, instead of our limbs. Things like translating neural signals to actions. We just got the first Telepathic Mice, you know?
"For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy. So the longer a technology lives, the longer it can be expected to live." -Taleb
It'll be interesting to see what happens to the keyboard when voice processing gets good enough that we can do purely voice-driven programming for the majority of what we need.
The author also (mostly) missed the most promising post-touch/keyboard input device we've had lately: speech input in the form of Siri and its clones, which has finally reached the masses after decades of development and use in fields where extensive training of the speech recognizer was feasible. It could be used more for games; in the age of MMOs, everyone is already used to shouting into their headset microphone.
A truly Kurzweilian scenario would be that you merely have to expect to find the folder somewhere, and it is already there.
*Yes, I know they didn't "invent" these things. But they were instrumental in refining them and positioning them so as to catalyze the industry toward sea changes.
I think you need to give examples of important content that your father stores as files while, at the same time, your daughter accesses through an abstraction.
Keyboard and mouse allow very rapid and precise text input, rich option and function choice, and very precise selection and movement control. Touchpads do a very good job of replacing the mouse for mobile devices like laptops.
For a long time touch wasn't up to scratch. Pens allowed more precise selection and motion, but were always a kludge because the pens themselves were too easy to misplace or drop while on the go. Once touch's early imprecision was overcome, it took over because you always have your fingers with you. Note that there is one case where touch isn't enough - controlling volume settings for your phone while in your pocket. In this edge case, physical buttons take up the slack. My point is not that touch has limitations (it does), but that you need to take a very long, hard look at any technology intending to replace it to be sure it is even more robust, and even more convenient and precise and has even fewer limitations in a huge range of situations.
Motion controllers like TouchMotion are extremely limited compared to touch. You can't use it in a relaxed posture, you have to have your hands raised and posed in the space that will accept gesture input. For precise selection, you need to have a cursor on screen like a mouse, because you don't have the directness of touch. Also while it's not ideal to use touch on a train or bus that's moving, trying to use something like TouchMotion would be a joke.
Voice control is highly problematic too. People actually find it extremely hard to be precise in verbal communication, to the level that many interactions with computers require. That goes double for describing visual or spatial information verbally. Anyone who's ever worked phone tech support for computer users knows what I'm talking about.
Eye tracking has possibilities, but our eyes wander around and shift focus point all the time. Sometimes we want to look at something other than the thing we're controlling. Also I suspect that maintaining the disciplined and precise eye movements you'd need to replace touch or mouse/trackpad would be pretty onerous.
So I don't see touch going away for a very long time, if ever. I remember in the 90s pundits predicting that keyboards and mice were just placeholders and they'd be gone within a few years. The truth is you'd better get used to them because they're here to stay, and so is touch.