Ask HN: What might future interaction technology look like?
1 point by going_ham 51 days ago | 1 comment
I happened to stumble upon some UI/UX history, and it seems that certain groups in the past were capable of demonstrating future interactions: for example the Memex, the Mother of All Demos, the Xerox Star, etc. It was quite extraordinary given their circumstances and their willingness to think ahead. Now, after so much revolution, we are still following the same old methods in the case of web development. Our tech stack has changed, but our interactions can still be traced back to the old days.

Even on mobile devices, we can see how these function. The touch screen has enabled a new form of interaction. However, the fundamentals can still be traced back to what those pioneers did. We are still pressing buttons on our phones. We are still scrolling up and down with a finger instead of a mouse. And finally, we type using virtual on-screen keyboards that mimic the physical keyboard.

Slowly there has been a shift toward speech recognition systems and audio-based interaction. But our VR technology still makes use of pointing and clicking in some form. Why aren't there other forms of interaction like grasping, pushing, pulling (grasping and pulling a button), pinching (actually pinching and moving things instead of gestures), etc.? Instead of creating controllers, why not create a wearable glove that can precisely capture 3D hand position? That alone would open up new possibilities. Why limit ourselves to typing when we could use hand gestures, or grab a pen and write things down? (This is entirely fictional.)

Is it really that these ideas weren't thought of, or is it more likely that they never took off because we are already used to the old ways? I am quite curious about this and would like to hear your opinions. I may be naive to ask without researching first, but it's quite difficult to find information on these things.




1. We use screens.

2. Screens are flat.

3. Gorilla arms.

4. Typing remains preferred for text input (I'm using an external keyboard on a tablet as I write this).

5. Speech interfaces seem to be the current hawtness, though they're still of limited suitability. Privacy and unauthorised / unwanted inputs (and surveillance) are all issues --- keyboards offer specificity, intentionality, and a pronounced limitation on unwanted information capture (though keylogging remains a concern).

6. If information-enhanced systems become more widespread, I expect to see predictive systems (anticipating needs from behaviour), standardised interfaces (discovery is expensive, especially for widely-used / high-traffic systems), a return to physical or at least physically-indicated interfaces (same reasons), and the like.

7. VR is immersive but also exclusive --- the individual wearing a VR headset (goggles, earphones) is isolated from the environment they're physically present in. VR is not locally shareable in the way that other technologies (screens, whiteboards, dashboards, keyboards, voice inputs, audio outputs) are. Yes, you can share a VR environment with others present within it, but that makes use in a given physical space more cumbersome and limited.

8. What informational problems are you actually hoping to solve?

One of the key strengths of MOAD was that it actually demonstrated useful activities. Novel, yes, but useful and familiar in terms of their nondigital analogues. And in that context --- reading and creating texts, email, communications, interacting with graphics --- MOAD anticipated virtually all our present use-cases. Principal changes have been scale, performance, and ubiquity, where scale applies both to the size of computing systems and their number. Pervasive computing --- carrying the Internet in your pocket --- has been the big change of the past decade. For better ... or worse.



