Where The Puck Is Going (clayallsopp.com)
45 points by 10char 1003 days ago | 45 comments

I understand why people want to write about the death of the mouse and keyboard - they've been around unchanged for a while and tech is now a hype-based "what's next" culture - but it's always just a bunch of hand-waving and name-dropping of experimental technologies.

Mouse and keyboard is dead, "cannibalized" by touch, which is now also dying? This is ridiculous. The entire world of "getting things done" on a computer - business productivity, programming, content creation, science - is still firmly entrenched in KB-mouse. Nothing has come even close to challenging that yet.

I'm sure I'm somewhat biased by interacting mostly with tech-savvy people, but even people like my mom - who loves her iPad and is right in that target demographic - still use a mouse and keyboard every day. I just don't understand how anyone can claim that we are even close to replacing those tools.


I don't think this is necessarily about replacing stuff like the keyboard and mouse, though. If you look at where touch and gesture-based interfaces have really made inroads, it isn't with the traditional computer power user; it's been in casual use (including gaming). Just because we still have good use cases for keyboard + mouse doesn't mean the impact of good touch-based displays hasn't been enormous. I don't think it's out of the question that other interfaces that overcome some of the issues with touch will come to prevalence in the near future.


Indeed. In any case, things like Leap Motion will make certain specialized accessories like 3D mice obsolete, but a regular mouse? That's going to be around for ages.


Because if technology X exists, someone wants to write a blog post or article about how X is dead.


I can't help but roll my eyes at the idea that "mouse and keyboard" are dead.

Yes, lots of touch devices are now being sold, but a lot of them are being sold into a channel in which nobody used a physical keyboard or mouse before (phones) and the rest are being sold into a device category that didn't quite exist before (tablets).

A lot of people seem to be misreading the decline of PC/laptop sales and the rise of tablets and smartphones as being totally interrelated, when I don't believe they are. Certainly some percentage of tablets are purchases that otherwise would have been netbooks, but IMO the bigger issue with PC/laptop sales dropping is that people (granted, I'm only discussing First World people) generally already have one AND just don't need new ones as often anymore. Anything with a Core 2 Duo and 4+ gigabytes of RAM "ought to be enough for anyone" (and in my experience, for non-power-users it certainly is).

I'm a hardcore developer/gamer/"power-user" and even my system buying and major upgrade lifecycle has stretched to about 3-4 years when it used to be 6 months to a year. Combine that lifecycle extension with the fact that most people can get by with just a laptop (because the practical power difference between even a low-end laptop and a desktop is insignificant), and it isn't any wonder that PC/laptop sales have suffered for everyone but Apple, who is one of the few smart enough to be selling systems with actual new-system differentiation ("Retina" screens).

tl; dr - I know lots of people (including myself) who have bought new smartphones and tablets over the past few years. I don't know a single one of them who doesn't use a "real" computer with mouse/keyboard daily and on average much more than their tablet/smartphone (if you exclude phone talking from the smartphone use). But their smartphones are probably like <1 year old (because significant practical hardware progress is still being made in this space) while their laptops might be years old and plenty fine for what they use them for.


One thing I've noticed is that I feel pressure to spend more money on touch-based devices than I do on keyboard/mouse-driven devices.

If I buy an Android device, then within a year there is a new one that's significantly better than the one I have in just about every way. It runs a newer version of the OS, and therefore there is a much wider selection of software I can have.

OTOH my development workstation is mostly built out of scraps and I don't feel much need to upgrade it, despite spending ~10 hours a day using it, while I probably spend an average of 1 hour a day using the smartphone.

So spending does not necessarily correlate with usage in this case, kind of like how people may have a classic sports car that they pour money into but drive twice a month and a much cheaper daily runner that they actually use more.


Most of the "post touch" technologies discussed are, in my opinion, blind alleys. Touch has supplanted the mouse -- to some extent -- because it's less abstract and more convenient than the mouse. These are big wins. (Consider the finer-grained hierarchy: touch - stylus on display - stylus on tablet - mouse). Stylus on tablet is only barely less abstract and generally more inconvenient and expensive than mouse so mouse never really got supplanted by stylus on tablet. Stylus on display has disadvantages as well as advantages over mouse, and it's even more expensive, so again it didn't reach a tipping point.

Most of these "post touch" options are even more inconvenient, more abstract, and have greater shortcomings than touch or mouse, so I don't see them replacing touch. Now if an interface technology can win across the board: more convenient, more direct/less abstract, more definite/reliable -- then it will win. But these technologies lose across the board, at least in their current state.


I'm sorry, what was wrong with mouse and keyboard again?

If in any way possible, I much prefer mouse and keyboard over touch, touchscreen, or air (EyeToy, Wii, Kinect, etc.). It wouldn't be the first time I've used my computer keyboard with my phone to be able to type decently. (USB keyboard, female USB -> male micro-USB adapter.)

> you could easily type something different from what you actually want

Lol, talk about touchscreen, or accidentally dragging something with a mouse. GUIs aren't much better than typing commands.

> it'll be "move it where I'm thinking."

I agree with you on that. The problem is that I don't have the hardware to experiment with this (and I don't expect that I could develop anything better than other researchers are doing), so it's not possible yet. In the meantime, I prefer keyboard over everything, including mouse. The trouble is that websites are optimized for mice, so many aren't very usable without one. Touchscreen with a stylus is good for drawing, but that's the only application I can think of, and you might as well just use pen and paper for that.


Touch has already beaten the mouse. The majority of PCs sold (notebooks) are sold with touchpads, not mice…


What do you classify as a mouse when speaking in terms of notebooks? Are you referring to the nub that, say, Thinkpads have? If so, it all comes down to personal preference, but I'd say I'm way more accurate with the nub on my Thinkpad than with the touchpad. I can move to any part of the screen faster with the nub, and if, for example, I'm using the lasso tool in Photoshop, I'm way more accurate with the nub. But maybe you're referring to something else?


I will probably continue to buy laptops with touchpads and mostly use them with a mouse.

I wouldn't want a laptop without a touchpad, that doesn't mean I want to use the touchpad most of the time.

I also hope that a touchscreen is a cheap or universal feature next time I make a purchase, because why not.


Oh crap, I just accidentally thought about deleting my document.


It seems crazy that with all these innovations in input devices, so many of us control computers via an 80x24 window emulating a 1970s terminal - 80 columns to match IBM punch cards from 1928(!). And I type into these windows the exact same Unix commands I was typing in 1985. It just seems to me there are huge opportunities for progress that are being missed - surely 80-column text windows aren't a global maximum.

30 years from now will we be using neural interfaces to enter Unix commands into 80x24 text windows projected onto our retinas?


I think the main reason the CLI remains popular is that it has the property of remaining usable by both humans and computers.

GUIs, on the other hand, can be used well by humans but poorly by computers.

Why would you want the interface to be usable by computers? So that people can easily delegate common tasks to the computer (i.e. automate things). Once you realize you do something often, automate it, and it will be trivial to do it again. Once it's automated, you can build even more powerful things on top of those automations, and achieve even more with less effort.
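
A minimal sketch of that layering, in Python rather than shell (every name and the TODO-counting task are hypothetical, just for illustration): the first function captures a command you found yourself typing by hand; the second automation is built on top of the first.

    #!/usr/bin/env python3
    # Toy illustration of layered CLI automation.
    import subprocess

    def todo_counts(path="."):
        # The same grep you'd type by hand, now callable from other code.
        out = subprocess.run(["grep", "-rc", "TODO", path],
                             capture_output=True, text=True).stdout
        counts = {}
        for line in out.splitlines():
            name, _, count = line.rpartition(":")
            if count.isdigit() and int(count) > 0:
                counts[name] = int(count)
        return counts

    def todo_report(path="."):
        # A second automation composed from the first -- the layering
        # described above.
        for name, n in sorted(todo_counts(path).items(),
                              key=lambda kv: -kv[1]):
            print(f"{n:4d}  {name}")

    if __name__ == "__main__":
        todo_report()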


This comment is extremely well put. The terminal is an amazingly pragmatic and honest interface. After all, almost all manual interactions leave you wanting a way to automate. Meanwhile, almost all automated processes need the occasional manual intervention. A defined library of specific and unambiguous commands really allows for both.


I've observed that change has its own sort of Newtonian laws that regulate it, and the one where people do the same thing they always did before unless acted upon by an external force is perhaps one of the stronger ones.

So to my mind you have to have some notion of what is 'forcing' the change before you can really say that things will change.

To use a current example, 'touch'. The force here is twofold: one, the amount of keyboard interaction you need to consume content is much less than the amount you need to create it; and two, keyboards take up space that could be filled with other features. Touch became credible when you could use it exclusively to operate the device in an acceptable way. It's why it failed to displace keyboards on the original Tablet PCs (you needed the keyboard too often) and it's why the iPad without a keyboard is a lot less productive to process email on [1].

So 'post touch' needs, by my reasoning, some force behind it if it is going to displace touch. And we can look at those forces and see where they are coming from.

Clearly people talking to their devices is cool, but annoying to others on the train and potentially embarrassing. That's an example of a force which doesn't allow voice to displace touch. But the Myo device seems to be operable reasonably privately if it is sensitive enough. The Leap lets you do gestures locally for action at a distance; I could see that having some pull if people continue with large displays at a distance, but being less effective if the trend becomes many touchable displays close to you. I would say Kinect is a sort of mixed bag here: great for games, a huge win for robotic vision, but less durable as a new general-purpose interaction method.

It will be fun to watch. Just hope my toy budget can keep up!

[1] These are clearly pretty arguable statements, but they are here to serve as an illustration of the force pushing change on people rather than a quantitative measure of that force.


>Clearly people talking to their devices is cool, but annoying to others on the train and potentially embarrassing

Hopefully Subvocal Recognition[1] can improve enough that it will solve this particular problem. They've already created non-invasive forms[2] of electronic signal relay that could be used for this as well.

It definitely will be fun to watch. I'm with you on the skepticism of video capture devices like Kinect being the solution to non-touch interfaces. We'll see though :D




One "force" away from touch is Google Glass. I'm not quite sure what the dominant UI for devices like that is going to be, but it certainly won't be touching the display!

Moreover, I imagine (okay, hope) that intense miniaturization is going to one day produce something like "Google Contact Lenses", which are going to be even more restrictive in the sort of interactions they permit.

And anything that gets popular for Google Glass is probably also going to be good for existing contexts like cars.

I don't know exactly what this is going to be, but it'll be cool.


I don't agree with Gabe's conclusion that "post-touch, and we’ll be stable for a really long time".

As you said: "where I'm looking", "where I'm gesturing", and "where I'm thinking" are all possible - but totally different. I think if post-touch gets critical mass, we'll see a much more diverse and dynamic space of interfaces. One implication is that choosing which interfaces to support would become a much more important decision - right now you can just default to building a web app and a touch app.


The notion that touch has/is replacing the mouse and keyboard strikes me as silly. It will supplement, not replace.

Thought experiment: I offer to replace your /WORK/ computer with an equivalent computer that has a touch screen and voice recognition. (But no physical keyboard or mouse). Do you take this offer?

I'm guessing most of you would say "no", because you'd get less work done.

I think it's telling that touch interfaces are mostly being used in consumer devices. The "consume" in "consumer" is the key hint there.


It's partly a consequence of people seeing smartphones/tablets as more intuitive and easier to use devices than traditional PCs (which they are) and imagining that the absence of a mouse/keyboard is a fundamental part of that (which is not entirely true). Mostly I think this is because people find it difficult to fully articulate all of the little reasons why mobile OSes are more usable.


The arrow of progress in human-computer interfaces has pointed toward steadily simpler interaction. Most people have no idea what a computer is capable of doing, much less how to make the computer do it. So the dual challenge has been to present the user with the available functionality and then make it as simple as possible to put that functionality into use. Thus, the modern smartphone. The command line is in a sense a much simpler interface, but absent a cumbersome series of menus it requires the user to know what can be done. (Touch has an analogous mode with complex gestures, but few people go beyond swipe and tap.) Even if we have thought-directed cursors, that only addresses half of the interface problem. Users still need to know what their options are, and there is no corresponding "thought-display" to go along with the thought-mouse.


It's interesting that when we think "post-touch" we still think about files and folders ("move the folder where I'm looking"). Maybe we won't need to think about organization -- the computer can handle that, and just play the songs we want, pull up the project we want, play the video we want.


I wish I could upvote this more.

Look at the terrible touch-driven applications and you'll probably find that a goodly number of them assume the same sort of interaction paradigm that you'd find with a keyboard and mouse. The excellent touch applications have made good use of the fact that you have more than one finger and that you can perform gestures with them, that fingers get in the way of stuff, and that you don't want to have to tap multiple times to get something to happen (whether that's finding a file or so on).
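
To make the multi-finger point concrete, here's a minimal sketch (in Python, with made-up coordinates) of the arithmetic behind a pinch-to-zoom gesture - an input a single mouse pointer simply can't express:

    import math

    def pinch_scale(a0, b0, a1, b1):
        # Each argument is an (x, y) touch point: two fingers at the
        # start of the gesture, then the same two fingers now. The ratio
        # of finger separations is the zoom factor.
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        start = dist(a0, b0)
        return dist(a1, b1) / start if start else 1.0

    # Fingers spread from 100px apart to 200px apart: zoom in 2x.
    print(pinch_scale((100, 100), (200, 100), (50, 100), (250, 100)))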

It'll take a shift in thinking to make post-touch effective (whether that's gesture, vision tracking, thought, etc). If we think about a computer as being a desktop with folders on it, or an 80x24 terminal, then we're looking at it the wrong way (an extension of the "if you see a stylus, they blew it" principle, I guess).


Well, in this case “folder” could be substituted with a generic “item”, I guess. We still have items: songs, projects, videos. And in order to let the computer know which items we want, we somehow have to organize them. This is where experimentation happens in UI, but it's all based on “moving” items between folders (playlists, etc.). It's familiar to us from the real world, which makes it convenient and easy to learn.

There are, though, UIs where item organization may happen without user input, like Genius playlists in Apple iTunes. This might be the future, I guess.


Yeah, I'm talking about things like Genius playlists, or Google Now, where the computer's algorithms can figure out what you need without you needing to organize.


Imagine a future where you play a video game in the first person, using virtual reality, controlling your avatar with your own movements. The most natural way for someone to walk, run, aim, and jump, is to actually do so. Angling control sticks, pressing buttons, and waving gestures in-air does not map one-to-one in the same way your brain maps to your muscles and limbs. Flow with the wiring of your brain rather than against it.

This system would require direct connection to signals from your brain, intercepting and consuming them, never reaching your muscles. Resting comfortably with REM-like paralysis, you control your game character naturally and intuitively, moving it and not yourself, all from the comfort of your couch.

Technology like this already exists in its infancy, such as providing moveable arms and hands to those who have lost them:

    "Move Arm" ---> Engage Robotics ---> "See Arm Move"
    "Move Arm" ---> Animate Game ---> "Feel Arm Move"
The current feedback loop in robotics is limited; we can't make Luke Skywalker's robotic hand flinch with needles yet. For things to work well in a video game, you'll need to feel your actions, just as a runner feels their foot-strike flick off the ground.
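
Purely speculative, but the loop being described might look something like this skeleton (every name below is a placeholder; no such decoder or haptic channel exists off the shelf):

    import time

    def decode_intent(frame):
        # Placeholder: map a window of intercepted neural/EMG samples
        # to an action label like "walk", "jump", or None.
        raise NotImplementedError

    def game_loop(sensor, avatar, haptics, hz=60):
        while True:
            frame = sensor.read()              # "Move Arm", intercepted
            action = decode_intent(frame)
            if action:
                result = avatar.apply(action)  # Animate Game
                haptics.send(result.feedback)  # "Feel Arm Move" -- the
                                               # part we can't do yet
            time.sleep(1.0 / hz)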

When will this technology be available? What bottlenecks need to be overcome for it to exist? How much will it cost? What other applications could it have? Will it be released before Episode 3? ;)


I don't know, this seems sort of silly. Working through abstractions such as metaphor and imagery (both visual and textual) is essentially working with the wiring of the brain already. Consider: plenty of people have explored such wondrous places as Middle Earth. None have literally walked around in it. Would it be neat? Sure. Especially in an instant-gratification sort of way.

And this is not to mention the cognitive trouble this could lead to. It isn't hard to imagine a situation such as Mal's from Inception happening.


One thing that always surprises me is how little keyboards have changed in the past thirty years as computers became more mainstream. The majority of keyboards use cheap rubber-dome keys in a horribly unergonomic setup. It's no wonder people dislike keyboards and think of them as dated.

A personal aside — I use a Kinesis Advantage keyboard, as I have found that typing too quickly on other keyboards causes me wrist pain after a few minutes (I'm not affiliated with Kinesis or any other keyboard company for that matter.)

I don't think that keyboards will ever go away. I think that touch will complement keyboards, but will never replace them, because keyboards allow you to type more quickly, efficiently, and comfortably. Perhaps when touch technology incorporates tactile feedback (by using electrovibration [1]), there will be touch-based simulacra of keyboards. However, there will still be the problem of "iPad neck" when typing if the keyboard is part of the screen.

For most people voice transcription software allows them to compose documents much faster than typing. But there are many problems with voice: besides mis-transcriptions, it is very annoying to listen to someone else dictating text and to try to work at the same time. Technologies using brain currents to create text or eye blinks are still a long way from competing with keyboards as a primary input device for healthy people.



My issue with this post comes down to his last line "The abstractions between our intentions and how we execute them in computing are eroding. That's where the puck is going."

The question I pose is, how do we determine our intentions? I believe that the idea of a frictionless interaction with devices is impossible for anything worthwhile, because anything worthwhile requires actually thinking about what you want.


I think Bret Victor's article on the direction of interaction design is relevant here.


I agree with the author that we are heading towards interpreting the user's intentions in better ways. But physical inputs with tactile feedback are yet to be beaten in their domain.


>In my mind, it clicked that we're always inching closer to computing being "do what I really mean."

>And here we are, not even a full year after Gabe's talk, and the first post-touch products are already landing on Planet Earth: Glass, Myo, Leap Motion, Kinect, on and on.

I don't think these devices/interfaces are really any better than touch with respect to "do what I really mean". It's still hand eye coordination at the level of our limbs and motor senses.

I think real "do what I really mean" post-touch interfaces are the ones where our minds become the actuators, instead of our limbs. Things like translating neural signals to actions. We just got the first Telepathic Mice [1], you know?

[1] http://jezebel.com/5987583/brain-implant-allows-rats-to-comm...


There is a bit of neomania in this view. The keyboard has been with us for many years, which only strengthens the case that it will continue to be with us for many to come. Even touch screens use a tactile keyboard. If you take that further and think about the history of humans using their hands to express themselves, it goes back through all of written history (further if you include cave paintings, etc.). I'm sure non-tactile manipulation will be very important, but it doesn't have the history necessary to make solid predictions about its level of importance in the future.

"For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy. So the longer a technology lives, the longer it can be expected to live." -Taleb


Possibly. But look at the prevalence of voice-driven command support in mobile devices. Once machines get better at determining context and things like proper nouns we'll see where that goes.

It'll be interesting to see what happens to the keyboard when voice processing gets good enough that we can do purely voice-driven programming for the majority of what we need.
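
Even a toy version shows where the hard part is. A sketch (against a hypothetical editor API; none of these names come from a real tool): recognized text is matched against a tiny command grammar, and identifiers are exactly where it breaks down:

    import re

    COMMANDS = [
        (re.compile(r"go to line (\d+)"),
         lambda m, ed: ed.goto(int(m.group(1)))),
        (re.compile(r"delete line"),
         lambda m, ed: ed.delete_line()),
        (re.compile(r"rename (\w+) to (\w+)"),
         lambda m, ed: ed.rename(m.group(1), m.group(2))),
    ]

    def dispatch(utterance, editor):
        text = utterance.strip().lower()
        for pattern, action in COMMANDS:
            m = pattern.fullmatch(text)
            if m:
                return action(m, editor)
        # "rename fooBar to foo_bar" is already mangled by the
        # lowercasing above: proper nouns and identifiers are exactly
        # where recognition needs context.
        raise ValueError(f"unrecognized command: {utterance!r}")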


Where did the 3D (VR) glove go? We've always had such alternative input devices; their existence doesn't mean that keyboard and touch are suddenly doomed. Keyboard and mouse are probably the main reason why PC gaming still thrives next to consoles.

The author also (mostly) missed the most promising post-touch/keyboard input device we've had lately: speech input in the form of Siri & clones, which has finally reached the masses after decades of development and use in some fields where the required extensive training of the speech recognition was feasible. It could be used more for games; in the age of MMOs, everyone is already used to shouting into their headset microphone.


I have to agree. A good 3D headset with gloves could easily replace my graphic design setup. Need to type? Just use the virtual keyboard. My hand can remain lying on my armchair. Fingertips projected above the keys, a little move downwards, and lit-up keys (and perhaps key sounds) to provide feedback. Always felt stupid using Photoshop with a mouse. Wacom is better, but virtual reality - hell, that'd be great IMO.


"Now, in the touch-era, it's more like "move the folder where my finger goes," without any mouse-indirection happening. But with post-touch, the status quo will be "move the folder where I'm looking" or "where I say"; and one day (as Kurzweilian as this sounds) it'll be "move it where I'm thinking." Or simply "move it.""

A truly Kurzweilian scenario would be that you merely have to expect to find the folder somewhere, and it is already there.


I'm curious as to whether Apple without Jobs will continue to be the company that redefines* computing (PC, GUI, touch, tablet). Or will change come from another giant like Google or from an upstart like LeapMotion?

*Yes, I know they didn't "invent" these things. But they were instrumental in refining them and positioning them so as to catalyze the industry toward sea changes.


Current users of a paradigm tend to stick with it. It takes a generation to raise a new set of users open to something new. My father is a hunt-and-peck typist obsessed with knowing where his files are; my daughter's second word was "iPad", and she will have little notion of files.


Those two aren't exclusive - does your father know what file a given email is stored in? (Good for him if so :)

I think you need to give examples of important content which, at the same time, your father stores with files and your daughter uses an abstraction.


Let's see how well LeapMotion works and what people build with it.


The dominant control interface for any device has to be incredibly robust. It has to be useable in almost every conceivable situation, and for almost every conceivable purpose the device is intended to be used for. The edge cases it doesn't cover have to be covered by an even more robust, but perhaps less sophisticated alternate.

Keyboard and mouse allow very rapid and precise text input, rich option and function choice, and very precise selection and movement control. Touchpads do a very good job of replacing the mouse for mobile devices like laptops.

For a long time touch wasn't up to scratch. Pens allowed more precise selection and motion, but were always a kludge because the pens themselves were too easy to misplace or drop while on the go. Once touch's early imprecision was overcome, it took over because you always have your fingers with you. Note that there is one case where touch isn't enough - controlling volume settings for your phone while in your pocket. In this edge case, physical buttons take up the slack. My point is not that touch has limitations (it does), but that you need to take a very long, hard look at any technology intending to replace it to be sure it is even more robust, and even more convenient and precise and has even fewer limitations in a huge range of situations.

Motion controllers like TouchMotion are extremely limited compared to touch. You can't use it in a relaxed posture, you have to have your hands raised and posed in the space that will accept gesture input. For precise selection, you need to have a cursor on screen like a mouse, because you don't have the directness of touch. Also while it's not ideal to use touch on a train or bus that's moving, trying to use something like TouchMotion would be a joke.

Voice control is highly problematic too. People actually find it extremely hard to be precise in verbal communication, to the level that many interactions with computers require. That goes double for describing visual or spatial information verbally. Anyone that's ever worked phone tech support for computer users knows what I'm talking about.

Eye tracking has possibilities, but our eyes wander around and shift focus point all the time. Sometimes we want to look at something other than the thing we're controlling. Also I suspect that maintaining the disciplined and precise eye movements you'd need to replace touch or mouse/trackpad would be pretty onerous.
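
The standard workaround is dwell-based selection: only treat a gaze target as a "click" once the gaze has held still long enough. A rough sketch (thresholds invented for illustration) that also shows why it feels onerous - you have to hold your eyes deliberately still:

    import math

    DWELL_SECONDS = 0.8   # how long the gaze must hold
    RADIUS_PX = 40        # how still it must be

    def detect_dwell(samples):
        # samples: time-ordered list of (timestamp, x, y) gaze points.
        # Returns the (x, y) of a dwell, or None if the gaze wandered.
        start = 0
        for i, (t, x, y) in enumerate(samples):
            t0, x0, y0 = samples[start]
            if math.hypot(x - x0, y - y0) > RADIUS_PX:
                start = i                 # wandered: restart the window
            elif t - t0 >= DWELL_SECONDS:
                return (x, y)             # held steady: select
        return None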

So I don't see touch going away for a very long time, if ever. I remember in the 90s pundits predicting that keyboards and mice were just placeholders and they'd be gone within a few years. The truth is you'd better get used to them because they're here to stay, and so is touch.


I totally agree. Until you can browse Facebook, play video games (this one's easier), write an email, and access/create media as easily, comfortably, and precisely as you can with a mouse and a keyboard or with touch, they won't go anywhere.


Regardless of how technologies evolve, the world is going to be a creepier place.


Telepathy is next.

