The future of the human-computer interface? (hpe.com)
63 points by ohjeez 101 days ago | 32 comments



If mere thought is input to the computer, then why should its output be presented to you as a physical stimulus? Like, why would there be a hologram telling you how to cook a recipe when it could just present itself as a hallucination? Better yet, save the time and make it seem like a memory of something you already know how to do. Or why not have the program operate your physical body directly? Or why not give you nourishment and produce the food experience completely virtually, a la The Matrix?


So this article wants us to believe that we're going to find revolutionary new ways of communicating, but meanwhile we'll all still be cooking in the same inefficient way.

To me that sounds like 1950s predictions of the future showing us how mailmen will deliver letters with jetpacks. Anyone can do that kind of stupid extrapolation. But what's the equivalent of email in that scenario?


I'm just waiting for Transmetropolitan's TV bombs. Spending the nights dreaming with advertisements.


Anybody willing to take a bet that my keyboard is not going anywhere in my lifetime?

The first time I was told that voice recognition is here and my keyboard is going away was about 20 years ago. Here I am today, still struggling to get Alexa to understand simple commands.

Thought controlled input? Forget about it in this lifetime.


I wrote a book about speech recognition -- and I _wrote_ it, I didn't speak it. I discovered that speaking uses different mental muscles than writing does. That's fine, because some people are more comfortable with one than another (such as doctors who are happy dictating but don't want to type).

Also I remember when computer mice were new, and plenty of people said, "I can't imagine using that."

I think that's true of other types of user interface as well, from haptics to whatever comes next. Sometimes you have to envision what _could_ be, and discover from experience what reaches people... or a subset thereof.


I'm more on your side than not when it comes to these bombastic claims. That said, I think consumer-level mind interfaces will happen, and be useful, but much more so as a way to augment traditional input.

It's a bit boring to think about, but given some of the research that has been able to reconstruct a (poor) image from brain activity, I could imagine using something like that to open applications by imagining their icon.

Continue down this line of thinking, and it is entirely possible that we will begin to construct an entirely new symbolic language to interact with these interfaces. I mean, if a picture is worth a thousand words, maybe that's a more effective path to follow than obsessing over perfect speech recognition.


No, because you seem biased to keep it and can control the outcome of the bet. Anybody willing to take a bet that my keyboard is going to evolve radically or disappear altogether in my lifetime?


How old are you? It’s really not that hard to imagine the keyboard being replaced by one, or a combination, of technologies in the next decade or two.

Voice has gotten dramatically better in the past few years. Another decade? There are also gesture technologies:

https://m.youtube.com/watch?v=0QNiZfSsPc0

Anyway, if you’ve got 3 or 4 decades of life, you still think we won’t have sufficient advances to replace the keyboard?


I can't imagine what a replacement would look or behave like. You can replace your keyboard with almost anything that proxies human to computer, but nothing else is the same or as capable.

The radar video is impressive (as were the Leap Motion videos, which didn't live up to the hype), and sure, it makes sense for extending a watch or phone, but the problem with autocorrect and voice recognition and predictive keyboards and spell checkers is that, much of the time, I'm not writing clear, coherent English sentences.

Sufficient advances to write foreign words and jargon and deliberate misspellings and usernames and hostnames and slang and quotes which are deliberately not spelled normally?

A lot of the claims about "you'll just think it" hide the reality - e.g. when I try to speak a password and I can't, because it's just a pattern of motion (and you can't escape this overall problem by saying you won't need passwords in the future).

Human hands are dexterous and sensitive; almost no other body part comes close. Voice is laborious and prone to problems with pronunciation, tiredness, background noise, and homonyms, and anything else isn't going to come close until you can have brain-surgery implants. Even then, a) no thanks, and b) I bet that still underestimates the complexity of reading clear intent.


If sub-vocal recognition [1] (together with suggestions) gets as good as voice recognition I think we could finally have a viable alternative to the keyboard. [1] https://en.m.wikipedia.org/wiki/Subvocal_recognition


Mice work well. Voice recognition works well for me. I still use the keyboard for most real work.

Similarly, I think we will see some degree of brain reading, but at least the first generation or two won't replace other input methods.


I'm sitting on the couch on a Saturday morning with...an iPad. My keyboard is still at the office, but I don't use it as much as I used to.


I'm calling bull.

I would not trust some device to read my thoughts to write them to a Word document. Random thoughts are too uncontrollable.

I would not trust my phone to connect automatically to any device; I see that as a huge risk.

I would not trust my computer when I talk to it. Anyone can hear the conversation, and in public spaces it's disturbing to other people.

We saw this happen with Google Glass. People need interfaces they can trust, that are not intrusive.


I would love to be able to write code just with my thoughts. With the keyboard it's not that bad, but at certain moments it feels like the main bottleneck.


I recommend reading an article about a dude who was unable to code with his hands, so he uses voice commands to code faster than with a keyboard [1].

[1]: http://ergoemacs.org/emacs/using_voice_to_code.html


Switch to a more terse/dense/expressive language?

If you're typing filler, maybe the language is just wasting your time?
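
To make that concrete, here's a minimal, purely illustrative Python sketch (not from the thread): the same filtering task written as an explicit loop and as a comprehension, showing how much filler typing a more expressive construct removes.

    # Hypothetical example: keep the even numbers from a list.

    # Verbose form: explicit loop with bookkeeping ("filler" typing).
    def evens_verbose(numbers):
        result = []
        for n in numbers:
            if n % 2 == 0:
                result.append(n)
        return result

    # Terse form: a comprehension says the same thing in one line.
    def evens_terse(numbers):
        return [n for n in numbers if n % 2 == 0]

    print(evens_verbose([1, 2, 3, 4]))  # [2, 4]
    print(evens_terse([1, 2, 3, 4]))    # [2, 4]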


"Expect the way you interact with your laptop, tablet, and smartphone to change a lot in the next five to 10 years."

I am going to bet the opposite: in ten years we will interact with laptops, phones, and tablets roughly the same way we do now. Voice control, thought control, and VR headsets will be mostly a dud for anything beyond games and novelty.


We said that about touch, and now most human-computer interaction is via touch screen.


Tablets and smartphones are a new device class which increased the amount of human-computer interaction. You're implying we also mostly use touch screens on previously existing platforms like desktops, laptops, gaming consoles and TVs. Those platforms haven't changed their primary input method to touch.


Nintendo DS, 3DS, 2DS, WiiU, Switch, and Sony PS Vita are all touch screen.


They also all have directional pads, joysticks, and tactile buttons.


I don't think "we" did. And I would also argue that going to touch from, say, a mouse or light pen (which I think predates mouse-based interfaces) is essentially an incremental improvement on the same kind of user interface, compared to going to the things suggested in this article (voice control, thought control).


Most human-computer interaction involves games and novelty-seeking.


But it actually took 30 years to get there, not 5


There's a lot bandied about regarding "thought control" or brain-controlled devices replacing existing HCIs, but many fail to realize that our input devices aren't just used for input; they are tools that help us amplify our thought process.


This reminds me of this 'Big Head' scene from Silicon Valley: https://www.youtube.com/watch?v=jroQCyWwEgE


When a device can do all that, it won't need you.


A whole lot of talk with nothing to show for it.

The first iPhone was released 10 years ago and it was an actual product you could buy.

The only concrete device they linked was https://www.technologyreview.com/s/534206/a-brain-computer-i....

"Blackrock has begun selling the wireless processor, which it calls “Cereplex-W” and costs about $15,000, to research labs that study primates. Tests in humans could happen quickly, says Florian Solzbacher, a University of Utah professor who is the owner and president of Blackrock. The Brown scientists have plans to try it on paralyzed patients, but haven’t yet done so."

It's incredibly expensive and they are primarily using it for animal tests.

If it's so obvious that this is going to replace keyboards, etc within 10 years then show me at least one consumer product that I can buy today for less than $1000.


Great, I can't wait for the voice controlled reimplementation of IRC, the augmented reality reimplementation of IRC, the holographic reimplementation of IRC, the node.js JavaScript deep learning in-browser reimplementation of holographic IRC via websockets, etc.

You've been able to voice control a device and speak reminders for literally years.

Plugging your smartphone into a TV to use it as a display has been a thing for at least five years since the Galaxy Note 2, probably long before that.

Devices 'sending information to nearby stores' has been a thing for years with Bluetooth and WiFi sensing beacons tracking individuals.

The steak/salad/spyware problem was exemplified at least 4 years ago here: https://www.youtube.com/watch?v=Xn7N2UiOYCk

"Every time I interact with a new piece of technology, I don't want to have to tell it who I am and type in all my password information. My phone will tell these devices who I am and that I'm trustworthy."

Every time I interact with a new piece of technology, I don't want to have to tell it who I am, so I lie and write some bullshit in the form. My phone will not tell these devices who I am and that I'm trustworthy, whatever that means, but it probably won't have any way to lie on my behalf because it's not really "my" user agent at all, but my proxy in a for-profit exploitative system I increasingly dislike.

"Would devices, which connect to our advanced smartphones, share our information with the company that made them as well as advertisers?"

Because it's not even worth questioning whether they could, should or would share the information with advertisers, that's just implicitly assumed and accepted as fine. It's the scary "manufacturer" we should be concerned about.

If a senior citizen is having memory problems, for example, instead of immediately being placed in assisted living or a nursing facility, they could use a smart headset or ear piece that act as cognitive assistants, reminding them to take their medications or to turn off the stove. [..] "At the end of the day, one of the most profoundly helpful use cases will be in elder care," says Satyanarayanan.

For some definition of 'care' which involves +1 rugged American independence (tm) and absolutely no human interaction if at all possible.

"Your friend might say, 'Wait! You have to wait till the oil is sizzling.' It will guide you in real time."

It will criticize in real time. (Hi mom).

"We'll get to a point where I don't have a smartphone or a laptop. The computers will just be all around us."

And each one requiring an account, a login, a micropayment, a license agreement, an AUP, and tracking every interaction with it. Sweet.

No, just no to all this future.


I have little doubt that decades from now I will still be mashing code on a keyboard. Maybe not a physical keyboard, but something out of Tony Stark's lab.


The interesting assumption is that people control their thoughts.


Just more shit to break.




