Ask HN: Wearable/Ubiquitous computing. How will it affect our daily lives?
8 points by srlake on Dec 28, 2012 | 4 comments
With sensors, processors, and displays becoming increasingly integrated into the world around us, I believe the barriers between physical computing devices and our daily lives will break down.

How do you think this integration of technology into our lives will progress from where we are today? Google Glass is one next step, but how will we use it? What comes next? How do you imagine our (technological) world 5 years from now?




I'd like to see a decrease in the number of external devices we rely on. An input/output device that is integrated into or onto the body somehow is what I'm waiting for!


I had always hoped that cochlear implants would go mainstream. Imagine a Bluetooth cochlear implant...


One thing I'd love to see is an improvement in personal analytics. Fitbit is a good start, but the data is only useful up to a certain point.


I would encourage you not to think in terms of "imagine our (technological) world 5 years from now" and instead in terms of "imagine our (quality of life) world 5 years from now," because the issue five years from now is the same issue as ten years ago: there's no "why."

Ten years ago, wearable computing and versions of Google Glass existed in hobbyist garages and university labs. Steve Mann's been toting around a wearable computer for thirty or forty years now. Half a dozen graduate schools had wearable computing programs, usually funded by Nokia or Intel. Here was mine:

http://mavra.perilith.com/~vito/photos/wearable1.jpg

But all of the wearable programs were research-level technology explorations into theoretical utility. No one was looking at use cases for normal people. Related efforts, like the lifelogging work of pioneers such as Gordon Bell, had the same problem: the question was always "can we" and never "why."

I stopped experimenting with wearable computing in 2006, and left these rants on the wear-hard mailing list:

"Will the next Jeff Hawkins please stand up?" http://www.eyetap.org/wearables/wear-hard-06/2006492.html

"It's the year 2006. But where are the better UIs? I was promised a better UI. I don't see any better UIs. Why? Why? Why?" http://www.eyetap.org/wearables/wear-hard-06/2006494.html

"You're trying to do something new and better and just tweaking an existing modal UI isn't going to cut it." http://www.eyetap.org/wearables/wear-hard-06/2006498.html

I did two more things in 2006: I presented a proposal to the R&D department I worked in to invent something like the iPhone, and I designed an aural PDA as part of my college coursework. Then I was done.

I got back into it this past spring, when I had an epiphany around "why," which I've talked about in previous comments. I still think ubiquitous computing can dramatically improve our quality of life, but I don't think it'll happen through heads-up displays and chording keyboards.

This PDF of a presentation I gave in May touches on it a little: http://s3.amazonaws.com/vitorio/Automated%20Storytelling%20M...

Google Glass is being designed by the same people I ranted against in 2006. There's a very humane, social, intimate aspect it has lacked so far, with the exception of one photograph, which Robin Sloan talks about here:

http://www.robinsloan.com/note/pictures-and-vision/

Everything about Google Glass is fraught with legal peril, because videotaping, audio recording, wiretapping, and personal privacy laws differ from state to state. I've discussed this here before, too: academics stick to still photos for this reason, unless it's their own family inside their own house (Deb Roy) or for military use.

I continue to believe that wearable computing, ubiquitous computing, ambient intelligence, ambient information, quiet computing, lifelogging, quantified self, the internet of things, natural interfaces, immersive I/O, etc., are all facets of the same "next step" in technology, and that "storytelling" is how it will make sense to us, and how we will make sense of it.

Normal people don't want a bar graph; they want to know they did a good job today, that they're definitely making progress toward their goal, and that if they get off the computer and leave in the next two minutes they'll beat traffic and have time to pick up flowers for their wife on the way home. Underneath it will be a million sensors and real-time 24/7 video and audio recording and teraflops of traffic prediction and monitoring of your wife's mood, and all you will know -- all that will matter -- is that you two haven't fought in three years and she thinks you are just so thoughtful.
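To make the "storytelling" point concrete, here's a minimal sketch in Python of what such a layer might do. Every name, input, and threshold below is hypothetical, purely an illustration of raw metrics in, a sentence out:

    # Illustrative sketch only: a "storytelling" layer that turns raw
    # quantified-self metrics into a sentence instead of a bar graph.
    # All names and thresholds are hypothetical, not from any real product.
    def tell_story(steps_today, steps_goal, minutes_until_must_leave):
        parts = []
        if steps_today >= steps_goal:
            parts.append("you did a good job today")
        else:
            pct = round(100.0 * steps_today / steps_goal)
            parts.append("you're %d%% of the way to your goal" % pct)
        if minutes_until_must_leave <= 2:
            parts.append("leave in the next two minutes and you'll beat traffic")
        story = "; ".join(parts)
        return story[0].upper() + story[1:] + "."

    print(tell_story(steps_today=9500, steps_goal=8000,
                     minutes_until_must_leave=2))
    # -> You did a good job today; leave in the next two minutes
    #    and you'll beat traffic.

The point of the sketch is that the sensors and the prediction math stay invisible; the only surface the user ever sees is the sentence.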





