Steve Mann: Old-fashioned welding helmets use darkened glass for this. More modern ones use electronic shutters. Either way, the person welding merely gets a uniformly filtered view. The arc still looks uncomfortably bright, and the surrounding areas remain frustratingly dim.
Me: Hasn't this guy ever heard of HDR? He could have just used a couple of video cameras with some processing.
SM: A few years before this, I had returned to my original inspiration—better welding helmets—and built some that incorporated vision-enhancing technology. [...] These helmets exploit an image-processing technique I invented that is now commonly used to produce HDR (high-dynamic-range) photos.
Steve is the kind of person that makes you question your assumptions about just about anything.
Doing a 'why doesn't he just' on him means you're going to have to do the equivalent of six months of continuous reading first if you want to avoid making a fool of yourself, so I wouldn't worry about not knowing about his connection to HDR.
Steve and I had some interesting exchanges back in '95 or so when video on the web was still a novelty. Steve went on to make history with his series of inventions.
What's extremely impressive to me is Steve's incredible faith in his own inventions, no clinical trials on others but dog-fooding in the extreme. Bolting things onto (and into) his body for attachment and augmenting his world. He's a real pioneer in every sense of the word.
From Wikipedia: Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.
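For the curious, the core idea behind combining multiple exposures can be sketched in a few lines. This is a toy illustration of exposure merging in general, not Mann's actual method: each frame contributes a log-radiance estimate, weighted by how well exposed each pixel is.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Toy multi-exposure merge: estimate per-pixel radiance as a
    weighted average of log(pixel / exposure_time) across frames,
    trusting mid-tone pixels most (a triangular "hat" weight).

    images: list of float arrays in [0, 1], all the same shape.
    """
    eps = 1e-6
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)           # 1 at mid-gray, 0 at the extremes
        num += w * (np.log(img + eps) - np.log(t))  # log radiance estimate for this frame
        den += w
    return np.exp(num / (den + eps))                # combined radiance map
```

A real pipeline would also recover the camera response curve and tone-map the result back to a displayable range; this only shows the weighted-merge step.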
I don't understand why Google hasn't hired Steve Mann yet, at least as a consultant on the project. Seems like hubris to me: this guy has been testing wearable systems for 20 years or so and knows more about the experience than anyone on the planet.
Google has been working with one of Steve Mann's previous collaborators[1], Thad Starner[2]. Thad's actually the Tech Lead on the Glass project. As far as I know, he's been wearing AR systems for just as long and does the majority of his research in this domain. I must admit, I'm extremely jealous when I see Thad walking around campus with his Glass.
I'm not surprised at all to find out Thad's been working on Glass, considering I never saw him on campus without his personal creation. Also, taking a special topics mobile/embedded computing course he taught when I was there was definitely worth it.
Maybe because the biggest hurdle Google is trying to overcome with Glass is the "fashion" side of it, which Steve Mann hasn't cracked in those 20 years and isn't going to if he has 20 more.
Also - does he really have so much experience with that tech? There seems to be some controversy around how much self-promotion he's doing via Wikipedia.
Academics have been driven largely by hubris since time immemorial. It's a mistake to think that this conflicts in principle with a passion for the subject (although it can in practice, of course). If you got rid of all the academics whose self-promotion made you nauseous, you'd pretty much just be left with Erdős and some teachers.
Incidentally, for those who haven't read it, pg's analysis of the incredible technology of the Segway being completely trumped by human fashion is great: http://www.paulgraham.com/segway.html
As a graduate student, I can tell you, what you are saying is just plain untrue.
Surprisingly many professors I know seem to be genuinely motivated by what they see as a service to society. Typically, more through teaching than research.
Yes, there are a few big jerks who are clearly driven by wanting to show off.
Becoming an academic in CS for the sake of showing off would be an extremely dumb move. You get paid less than your students after just a few years, and nobody knows about you except some small clique of international researchers.
I do know of one or two in my area who seem this way, and I think they're very shallow people. And I know of some profs who are just kind of slackers. Maybe they thought academia was something different than what it turned out to be.
But most profs are smart enough to know what kind of life it is going in. (A modicum of intelligence and societal savvy are required to get the job.)
It's clear that profs aren't in it for the money or the worldwide fame, but that hardly shows that they are motivated by societal altruism or an intrinsic love for the subject.
I didn't rule out showing off, I ruled out worldwide fame. People find communities that they believe to be important and then seek to gain status in those communities. You don't care about who won the best rhubarb pie in the greater Kansas bake-off, but there is a small contingent of people who care greatly about this prize. (After all, Kansas is center stage for fruit-based pies worldwide, and everyone knows that rhubarb is the true test of the purist.) This effect is even stronger in academic fields, where it's easier to convince oneself of the importance of your community even when most people haven't heard of you. (The masses just don't understand the importance of our work, obviously.)
It's not always easy to distinguish between status motivations and the more socially desirable motivations. In fact, one measure of how well our institutions are constructed is how well aligned these motivations are; ideally, they would be indistinguishable. But there are ways to tease this out.
For example, when some exogenous input (e.g., a federal funding cut) knocks these people out of a high-status career, do they still work long hours in their spare time on the subject? Very rarely, especially as it becomes clear they won't regain their former respect.
The incredible power of fads in academia, even among those with tenure, is more evidence. The best group strategies for maximizing impact call for diversifying research routes, but academic researchers clump very strongly, because this is often in the best interest of an individual's status.
Human motivations are complicated and this makes it very difficult to compile indisputable evidence of anything. This is compounded by the fact that we want to appear to have other motivations than our actual ones. However, I think if you take a careful look and try to model academics as robots you'll find the "tries to maximize status" is a better first order approximation than "tries to maximize academic contribution".
For the record, I've been strongly influenced by the views of economist Robin Hanson on this subject.
I'm not saying that self-promotion is entirely and absolutely bad. Just that the post I replied to seemed to think that Mann was the be-all and end-all of the field, and I'm not sure how much that was based off his own propaganda.
> Also - does he really have so much experience with that tech?
Well, I, for one, was fascinated with wearable computers for a time after reading an article about Steve Mann (in MIT Technology Review, IIRC). That was about fifteen years ago. Based on my memory of that article, I'd say that 20 years is a bit of an understatement.
How do you know they haven't hired him, or that they approached him but couldn't come to terms? I'd say Google has been most un-hubristic with Glass, by revealing it early on to build acceptance and gain feedback.
Because Steve Mann is into wearable computers and Google Glass is not a wearable computer project. Your wondering this is understandable, because Google has misled you. But Glass has no CPU; there's not enough power or space for one. The intelligence lives on Google servers, where the speech recognition is done, and everything else, and Glass is useless without a net connection (so you have to be in an area with wifi or have a smartphone handy to tether to).
Steve Mann has been working on head mounted displays, true, but he's been focusing on local horsepower wearable computers.
Ultimately, I don't think google glass is anything more than a PR project to remove the stigma of google as ripoff artists and to make it look like they're innovative.
Given current technology, glass on wifi should have about 20 minutes of battery life, maybe an hour. Which makes them pretty useless.
There's really quite a difference between a wearable computer (what Mann works on) and a bluetooth headset with integrated display (what glass is.)
> But Glass has no CPU; there's not enough power or space for one. The intelligence lives on Google servers, where the speech recognition is done, and everything else, and Glass is useless without a net connection (so you have to be in an area with wifi or have a smartphone handy to tether to).
I guess this CPU-less device talks to a Wifi connection via... magic.
For me, a CPU is something that executes instructions and has its own instruction set. It can even be built on a breadboard, and you could eventually invent your own instruction set.
Even low-powered microcontrollers have a CPU (microcontrollers are small, low-powered computers), and microcontrollers come in many sizes[1].
Your point could be that it's just a head-mounted display on top of an ASIC, which I doubt.
Right now I think Glass is just a display for smartphones and a way to use Google services, which I think is quite limited (you said it's useless without the net, and I agree). Right now we don't even have the tech to run sophisticated speech recognition on a smartphone without a couple of servers crunching statistical formulas, so why would you think it would be different with a low-powered device?
EDIT: Basically my last paragraph is saying that I agree with you, but without being too harsh in the comments. This could be the beginning of the wearable computer revolution, along with an iWatch.
Android has offline speech recognition (introduced in Jelly Bean). From my limited testing it works really well. So, I don't think an external server is as required as some people say.
It works insanely fast, transcribing what I say in near real time which feels like black magic compared to Siri on my iPhone that has to record an audio clip in its entirety, send it up to their servers, process it, then send a response back.
Glass could use the Android handset as a remote server--it shouldn't matter if the Android crunches voice on its own or with a data connection, all that matters is Glass getting a reply from the API call.
>Ultimately, I don't think google glass is anything more than a PR project to remove the stigma of google as ripoff artists and to make it look like they're innovative.
I disagree with everything about your comment, but especially this part. What on earth are you talking about? Labeling Google as non-innovative and Glass a mere "PR project" is shortsighted (oops, forgot about self-driving cars) and quite frankly, something I'd expect from an Engadget comment thread, not someone who managed to snag "nirvana" on HN.
I don't see how the details of the hardware platform have anything to do with the advice he could give on the effects on the viewer's eyesight, optics, etc.
I agree with your post in its entirety, except for the bit on battery life, as I spoke to an engineer using Glass a few weeks ago and 5-6 hours of real-world use was the touted number (for a mix of Wi-Fi and Bluetooth use).
I still feel that is very poor though, for Glass to really be useful it should last a long working day, and ideally all of your average waking hours.
It remains a bluetooth headset with integrated display and camera. The phone and some remote servers do the real work.
Pure bullshit. If you seriously believe that Glass doesn't have a CPU or has under 20 minutes of battery life, your raging Apple partisanship is showing through even more than normal.
I'm willing to bet that you haven't used Glass, but don't let that stop you from desperately trying to portray it as vaporware, a PR stunt, or some kind of scheme masterminded by Satan himself on EVERY HN thread you can plausibly cram that nonsense into.
Between your recent posts about Glass and taligent's constant downplaying of Google's maps and autonomous cars, I have to wonder what makes you two work so hard at spewing blind hate for them here.
I like your comment, but would it really be so difficult for the next generation of Glass to tether to a smartphone in your pocket that did all the computing and only used Google's cloud services for supplemental input when available? I.e., the voice recognition gets better when you've got a good data connection. The phone could be larger than average because you'd rarely need to pull it out of your pocket.
I don't see why you think there's such a large difference between local horsepower and cloud computing. They complement each other, and the ultimate technology will be a mixture of both. User interaction is the much more interesting and difficult problem.
Does google glass not use the wearer's smartphone?
Regardless of whether the computation is done locally or not, he has been working on "augmented reality" using glasses; he may even prove to be a great asset for Google.
To me, Google Glass is worth a buy for just one reason: sousveillance (or inverse surveillance).
When somebody crashes into your house, or you witness a crime in progress, or need to remember an important detail of a business dealing, it just may be worth the 1500 dollars you spent to get a Google Glass.
Other applications of Google Glass may provide utility on a daily basis. I can imagine getting 10 dollars' worth of useful service from Google Glass every day, and 5 dollars in security benefit for the surrounding society. Multiply that by 365 days and that's 5,475 USD in economic benefit every year. Not to mention high-value recordings, such as records of criminal activity or abuse of authority by cops.
(Of course, if you're too poor, then Google Glass isn't worth 1500 USD, even if it may someday be worth 1500 USD of value to you.)
Recording in a public place is generally legal, but recording a private conversation, like a business meeting, is more complex. The laws vary by state (in the U.S.), and some states require the consent of all participants. I'm not sure that I'd want to wear a device that would expose me to felony charges if I inadvertently recorded some sensitive conversation that I didn't have permission to record.
Also, private property owners (like store owners) can eject you for trying to record video on their premises. There are some places where recording devices will never be welcome, such as movie theaters, sporting events and workplaces that deal with confidential information (e.g., a doctor's or lawyer's office, or even a start-up company whose product wasn't yet announced).
And I'm pretty sure that even Google wouldn't be happy if all their employees wore these to work every day. Would your work colleagues or managers speak candidly with you if they knew that their every word might be getting recorded?
>Would your work colleagues or managers speak candidly with you if they knew that their every word might be getting recorded?
If I recall correctly, there's a light visible on the outside of the eyepiece (the lens part) that's on whenever recording is happening. There will be no question if recording is going on or not.
It sounds like an inverse-need situation. Those who can afford Google Glass probably live in neighbourhoods where they aren't attacked at a high rate, whereas those who live in higher-crime neighbourhoods can't afford it.
Will it help? Once criminals recognize the device they will either steal them or smash you in the face to destroy the evidence.
>>Once criminals recognize the device they will either steal them or smash you in the face to destroy the evidence
Such a system for security ought to have video storage inside a hard shell with a high speed 4G link, able to send the last minute of video/sound (with gps position) to the police at the push of a button.
The high speed link is probably a bit bulky (batteries) and will probably always be put into a pocket or something anyway, even without security considerations.
But sure, criminals could get some equipment to disturb mobile connections.
If it becomes standard for that video to be continuously uploaded to a remote server, then hunting down the bystander would be entirely pointless. I think that's what we have to hope for.
1. Fair point. I think I had in mind a future where these devices were in such common use that any bystander could be assumed to be wearing one. That might lead to greater targeting of all bystanders, but at some point I suspect criminals (or other privacy seekers) would just adapt and factor "avoid bystanders" into their plans in order to settle at about the same level of violence they do today (rather than routinely attacking every bystander they see). This may be over-optimistic.
2. I find it to be an intriguing idea, but one that does admittedly scare the bejesus out of me. Unfortunately, I consider it all too probable that near-ubiquitous camera surveillance is in our future whether we like it or not, and in that scenario I'd vastly prefer for individuals to have their own recordings as well rather than blindly trust that the governments and/or corporations running the other cameras will always act with my best interests at heart.
Regarding 2, I'd get it to constantly upload to my home server. Seems easy to write an app for and set up on my server with php or maybe a custom service.
>The impact and fall injured my leg and also broke my wearable computing system, which normally overwrites its memory buffers and doesn’t permanently record images. But as a result of the damage, it retained pictures of the car’s license plate and driver, who was later identified and arrested thanks to this record of the incident.
Yes, it seemed to me like he'd already used this excuse.
Yeah, you don't have to worry, my Glass never records anything... well, except when it's super convenient for me, then it magically breaks just at the right moment.
I've been a follower of Steve's work since meeting him in '96 at work, and I can only echo your sentiments here. The EyeTap would be my preferred solution (compared to Glass, for example).
I've been planning to make one myself once the Microvision laser scanning displays got reasonable, but I've been waiting about... 15 years for that? So it's probably a lost cause.
The area which I think would be super-interesting and easy would be pure audio mediated reality. Vision is hard, but I could do audio for $500. I have shooting earmuffs which essentially do this already -- they have microphones and speakers, and amplify soft sounds while attenuating loud sounds.
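The earmuff behaviour described above is essentially per-sample dynamic range compression. A minimal sketch, with the threshold and ratio values picked arbitrarily for illustration:

```python
import numpy as np

def hearing_protector(samples, threshold=0.1, ratio=4.0):
    """Crude dynamic range compression, like electronic shooting
    earmuffs: quiet sounds pass through unchanged, loud sounds are
    attenuated. samples: float array in [-1, 1]."""
    mag = np.abs(samples)
    # above the threshold, the signal grows at 1/ratio instead of linearly
    compressed = np.where(mag > threshold,
                          threshold + (mag - threshold) / ratio,
                          mag)
    return np.sign(samples) * compressed
```

Real devices add attack/release smoothing so the gain doesn't pump on every sample, and could also boost quiet sounds with makeup gain; this only shows the core loud-sound attenuation.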
The second issue, the eyestrain from trying to focus both eyes at different distances, is also one I overcame—more than 20 years ago! The trick is to arrange things so that the eye behind the mirror can focus at any distance and still see the display clearly.
Sounds like he has some great insights here. He's also known as 'the world's first cyborg' (http://en.wikipedia.org/wiki/Steve_Mann), and the lonely trail he seemed to be on is now shifting to the mainstream.
I think eyestrain would be a factor if you're using these systems as an augmented display that's constantly on, but I don't think the point of these devices is to be constantly looking at them; rather, you engage them as needed and otherwise let them get out of the way.
His devices in the pictures are shown to get in between the eye and the external world, whereas, if you look at glass, the screen is up and out of your line of sight.
I think if your wearable tech display is always on and continuously visible, it'll be a problem, battery life will be negatively impacted, and the device will distract you constantly.
I think that the problems you mentioned are either obvious (clearly being on always will reduce battery life) or just require better design: if the device is distracting, it needs to be adjusted so that it isn't (as an example, it could usually be pass-through displaying only a tiny information panel out of the line of sight like Glass, and only expanding over the whole field when some special feature is activated. Being artificially limited to never being an overlay isn't better than being able to be both an overlay and also just an info panel.)
The author posits that Google placed the screen out of the line of sight to avoid vision misalignment and misadjustment problems.
(Also, the eyestrain due to focus distances was mentioned, and apparently solved by using an "aremac": a pinhole camera in reverse which means the video is focused at every distance.)
That alone would be worth the money to me. Such things already exist for skiing goggles, but those aren't extensible and only really fit one sport. So that's no good.
Not exactly what you want, but there is Sportiiis, that gives you pace, hr, cadence etc with a visual led bar and audio prompts. http://4iiii.com/2012/product-sportiiiis/
What you want is coming; what exists now is very basic. It just takes time until the technology hits the price point that makes consumer products possible.
All I really want is a network connection, an 80 chars wide Emacs terminal in the window and a chording keyboard strapped to my hand. This assumes a small Linux (BSD?) distro installed. The rest is trivial.
(Iirc, this is a setup one of Mann's students had.)
Edit: OK, I do want a camera too. And video log. And... But 80+% of usability would come from Emacs lisp (or short scripts run from shell)
Edit 2: Love HN. I comment about a setup I read about years ago and have been waiting to buy the hardware for -- and of course get answers (I assume that without employment contracts, they would have been more detailed). Thanks.
I'm not sure if he still uses it, but when I met him some years ago, Thad Starner (http://www.cc.gatech.edu/~thad/) had a chording keyboard hooked up to a HUD emacs.
I remember thinking at the time that it was a very "MIT" take on Ubicomp, a community that is otherwise pretty strongly infused with an Apple-esque "intuitive interface" ethos.
Remembrance Agent http://www.remem.org/ was created by Bradley Rhodes, one of the group of "cyborgs" doing wearable computing research at MIT at that time.
... and after a quick search, it looks like he works at the Google, most likely on Glass.
I've actually worked with Thad Starner and the Remembrance Agent before. Let me tell you, it's even cooler than it looks.
Of note, there's a version of the RA that has additions that make it more suitable for use on wearable computers. The first item in the papers section (Using Physical Context..., 2003) describes all the extra stuff that the wearable version does.
Goal-wise this reminds me of the Memex [1], albeit with automated creation and display of associative trails.
I wonder, though, if the manner in which it presents information in the Emacs UI (namely, as a list of headlines over a part of the screen that is continuously updated) could worsen understanding and natural memorization in the way some research shows news tickers on television do [2].
emacs sucked, though -- it was a pretty good case for vi, due to the control keys not working well with the versions I used. but it looks like they fixed that.
Visual perception is truly an amazing thing! The author's anecdotes about vision alteration and the brain's ability to adapt were very interesting to me.

I have nystagmus: my eyes move back and forth quickly all the time. I've often wondered what it looks like to see without the movement; however, that is how I see: I don't notice the movement at all. My vision with contacts doesn't get much better than 20/40, so I do experience the effects of the movement.

I tend to think of my vision as if it's an example of two-point wave interference: http://en.wikipedia.org/wiki/Interference_(wave_propagation) The further away an image is from my focal points, the more the interference from movement affects my brain's ability to piece it all together; it's similar to tunnel vision, but instead of darkness on the periphery, it's progressively more blur. To see most clearly, I have to tilt my head to the side, to my "null point" where my eyes move the least.

Not to mention my head often moves in some sort of sync with my eyes, especially while reading; once in school, a substitute teacher raised his voice angrily, thinking I was shaking my head at his work on the board!
I'm curious how Google and other developers of high-tech eyewear will account for us with out-of-the-ordinary eye conditions. If the glasses or certain apps rely on eye movements for communication, we probably couldn't use them.
> But as a result of the damage, it retained pictures of the car’s license plate and driver, who was later identified and arrested thanks to this record of the incident.
[McDonald's assault]:
> when the computer is damaged, e.g. by falling and hitting the ground (or by a physical assault), buffered pictures for processing remain in its memory, and are not overwritten with new ones by the then non-functioning computer vision system.
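The behaviour described in the quote (continually overwriting, so nothing survives unless the overwrite loop stops) is just a ring buffer. A minimal sketch, with the capacity chosen arbitrarily:

```python
from collections import deque

class FrameBuffer:
    """Rolling buffer of the most recent frames. Nothing is permanently
    recorded: each new frame silently evicts the oldest one. Only if the
    overwrite loop stops (say, the device is damaged) does whatever is
    currently buffered simply remain."""

    def __init__(self, seconds=60, fps=30):
        self.frames = deque(maxlen=seconds * fps)

    def push(self, frame):
        self.frames.append(frame)  # oldest frame dropped once full

    def dump(self):
        return list(self.frames)   # what would survive a crash
```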
Reliant: Steve Mann wrote "Physical assault by McDonald's for wearing Digital Eye Glass" [1], back in July 2012. Speaks to the stigma Google Glass is likely to face.
I wonder how much wearing a contraption on his head contributed to his getting hit by the car. Even state of the art high-end viewfinders with millions of pixels have frustratingly long lag, enough to easily take away the reaction speed edge gained from millions of years of evolution.
I don't know why Google doesn't team up with some lead designers in the eyeglasses and sunglasses business and come up with some actually stylish shades for Google Glass. Maybe they just aren't at that stage yet.
(In the far future) if Google Glass could automatically upload video of what you're seeing to your cloud storage, you could have a searchable log of your entire life.
Maybe a V1 of this could have Google Glass take a photo every minute. You could upload it automatically to Evernote or your private G+ photo feed. Then, you could occasionally review and "star" the important moments of your life (and maybe even delete/summarize chunks that are less important).
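Such a v1 is little more than a timed capture loop. A sketch, where `capture` and `upload` are hypothetical stand-ins for the device camera and whatever storage service you pick:

```python
import time

def lifelog(capture, upload, shots, interval_s=60, sleep=time.sleep):
    """Toy v1 lifelogger: grab a frame every interval and hand it to an
    upload callback. `capture` and `upload` are stand-ins for whatever
    camera and storage APIs (Evernote, a private photo feed) you use."""
    for _ in range(shots):
        upload(capture())
        sleep(interval_s)
```

Injecting `sleep` as a parameter keeps the loop testable; the starring and summarizing would live on the service side, not in the capture loop.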
As much as I enjoyed watching Black Mirror, bringing a piece of fiction into this kind of discussion early on is potentially highly problematic [1].
Consider what mentioning The Matrix or Terminator does to a discussion about AI. What Black Mirror's "The Entire History of You" [2] does to lifelogging resembles what those films do to artificial intelligence for dramatic purposes. I highly recommend reading Less Wrong's article [1] for an in-depth discussion of this issue.
I loved Black Mirror (so happy it's getting a 2nd season!), but most of the issues in that episode are due to the main character's shitty life. The neuroticism about his interview, the issues with his relationship, his wife throwing his poorly chosen words back at him, all those would be possible with regular memory. The removal of the grain and "play it back for us!" reaction to his bad interview seem like unusual cases, although I think only time will be able to tell how social norms will form with the second part.
Couldn't the fashion problem be solved by making hats cool again? Some hipsters are already wearing them, and they'd provide plenty of space for hiding computing gear. Perhaps even the projector into the eye could be hidden in the brim of the hat?
The pinhole aremac idea is so elegant! Infinite depth of focus and no need to measure the eye's lens. I wonder if video games and head-mounted displays designed for gaming will one day take advantage of that.
Apart from the potential physiological issues, the author briefly touches on the sociological impact this may have. In my mind that will be even more profound.
If you haven't seen it, 'Black Mirror' on TV here in the UK has an excellent episode where nearly everyone (voluntarily) has an implant which records everything they see.
This guy looks amazing, though he can hardly lament that lessons were not learned if he did not participate in the commercialization of the technology.
Why is this guy not consulting for Google? And I'm not sure if I'm more astounded or thankful that he has not patented his research.
Interesting idea, but rally drivers probably wouldn't want that level of distraction in their eye line. Vehicular HUDs have been around for a while now, so they could have adopted them a while back.
Only if the FIA allows GPS for navigation. Co-drivers still exist because it is up to the crew to create their own pace notes. I suppose the driver could record his own pace notes during the recce, but there would need to be a tight integration of the notes with the GPS coordinates so the data is absolutely spot on during the stages themselves.
I spent several thousand dollars testing a few "eyesight for the blind" products by taking video with a head-mounted camera and encoding the images as a one-image-per-second audio stream transmitted to the ears. I was actually able to get it to work as advertised, and I believe that, given 10 hours a day of practice for a month, you could develop a sense of depth perception and make out attributes of your environment through your auditory cortex well enough to walk around slowly without bumping into things. I had a blind friend test out the best I could do, and although it was a technological marvel, he actually didn't like it, because it made people ostracise him even MORE than being blind did. He can move around slowly without bumping into things much more fashionably with the system he already has: a stick, good hearing, touch, and memory.
So a few insights:
1. If you put something in front of your eyes, or on your hat brim, that looks like a hacked-together bunch of cameras and wires, and you wear it in public, there are millions of years of evolution causing people to ostracise you. It's so bad that a blind person told me: "The ostracism from wearing it is worse than the ostracism from them realizing you're blind."
2. You think you're confident and can handle it? You aren't; inside you are millions of years of evolution pushing you to remove whatever is causing the ostracism. If you are the kind of person who could choose to remain single and lonely for life while burning with passion for the opposite sex, then you have the kind of mettle it takes to wear cameras and wires on your head in public.
3. The experience I had with converting vision to audio and using my auditory cortex was tremendous. For example, objects that "popped out" at me during audio-vision were completely different from those in normal vision. Take a brick wall, for instance: I could hear that the distance between the bricks (the cement) was smaller in one spot and larger in another because of an anomalous blip in the audio file. When looking at it visually, you think "meh", it's just a brick wall. With the audio file, the different brick leaps out at you as an anomaly, exposing the data-structure/algorithmic differences between the visual cortex and the auditory cortex.
Doing vision as audio makes you an infant again; the tiniest changes in things leap out as fascinating. This experience could probably be sold to people bored to tears with life. A billion-dollar idea! Be an infant again.
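The parent never names the product tested, but a column-sweep encoding in the spirit of systems like The vOICe can be sketched as follows (all parameters are illustrative assumptions): each image column becomes a slice of time, each row a pitch, and pixel brightness the loudness of that pitch.

```python
import numpy as np

def image_to_audio(img, duration=1.0, rate=8000, f_lo=200.0, f_hi=2000.0):
    """Sweep a grayscale image left to right over `duration` seconds,
    mapping each row to a sine at its own pitch (top rows = high pitch)
    and pixel brightness to that sine's loudness.
    img: 2-D float array in [0, 1]. Returns a 1-D sample array."""
    h, w = img.shape
    col_len = int(duration * rate / w)            # samples per column
    t = np.arange(col_len) / rate
    freqs = np.linspace(f_hi, f_lo, h)            # one frequency per row
    out = []
    for x in range(w):                            # one time slice per column
        col = img[:, x][:, None]                  # brightness per row
        tones = np.sin(2 * np.pi * freqs[:, None] * t)
        out.append((col * tones).mean(axis=0))    # mix the rows together
    return np.concatenate(out)
```

An anomalous gap between bricks would show up as a brightness change in one column, hence an audible blip at one moment of the sweep.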
My intuition is that this is only true insofar as sensory augmentation up to this point has been mostly useless. The reason you don't see delivery people or mechanics or whoever with headgear isn't because it's dorky (although of course it is), but because it just doesn't help that much. We live in a world that has been very well designed for people with the usual basic senses, so adding more on doesn't give you that much more information.[0]
So miniaturization is important, but I think the real improvements to be made are in software. In the coming era, these devices are going to start offering real-world superpowers. People who never forget a face, or where they put their keys, or anything really. People who can have a quiet conversation in a noisy room. People who can do basic computing tasks subconsciously while having a conversation. People with "spider-sense" who never seem surprised by anything.
These tools will still look dorky, but the advantages they offer will be so great that the people who do use them will be very cool regardless.
[0] And I'd augment this claim by noting two examples off the top of my head of people who do use these technologies professionally: surgeons and fighter pilots. Both of those jobs involve doing unfathomably difficult things human beings are incredibly unsuited to do, so they will take every advantage they can get, hang the cost and the aesthetic.
Brainport seems to be the most useful kind of sensory technology available. Giving you the ability to 'see' all kinds of different inputs, such as radar, sonar, UV, vision, balance, etc.
The only problem is that it requires your tongue. And your tongue is where you talk and eat. Once we can overcome this problem, there are huge implications.
I find this really interesting, as I built an augmented-reality headset that used only sound as output to the wearer for an undergrad project. It tracked the wearer's orientation and position, and used an ultrasonic sensor to detect obstacles, then conveyed this information to the wearer through 3D audio. It was built as a framework on top of which multiple applications could be built. The three demos we made (I did this project together with another guy) were: a guidance application that used 3D-positioned audio as waypoints; a virtual band playing music, so you could walk between the instruments as if the band were in the room; and an application that used the ultrasonic sensor to detect walls. The wall detection played blips in quick succession, with the tempo relative to the distance to the obstacle. The cool thing was that you could turn your head and get a general sense of the shape of the room from how the blips sped up or slowed down as you scanned along the wall.
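The distance-to-tempo mapping at the heart of that wall-detection demo can be sketched in a few lines. This is a hypothetical reconstruction, not the project's actual code; the sensor range and blip rates here are assumptions:

```python
MAX_RANGE_M = 4.0        # assumed maximum range of the ultrasonic sensor
MIN_INTERVAL_S = 0.05    # blip interval when the obstacle is point-blank (assumed)
MAX_INTERVAL_S = 1.0     # blip interval at or beyond maximum range (assumed)

def blip_interval(distance_m: float) -> float:
    """Map obstacle distance to the pause between blips (closer = faster)."""
    clamped = max(0.0, min(distance_m, MAX_RANGE_M))
    fraction = clamped / MAX_RANGE_M
    return MIN_INTERVAL_S + fraction * (MAX_INTERVAL_S - MIN_INTERVAL_S)
```

As the wearer sweeps their head along a wall, the measured distance changes continuously and the blip tempo ramps up and down with it, which is how the room's shape becomes audible.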
Presumably the implication is that by not succumbing to the social pressure, you are demonstrating the fortitude required to also not succumb to the desire for intimacy with other humans. As the OP states, this is extreme outlier behaviour: regardless of how successful a person may be with human intimacy, it's another thing altogether to purposely choose to disregard that intense natural urge.
TL;DR: it's not about not having game, it's about having the guts to tell the entire rest of the world to get fucked. Big difference.
To transcend your genetic programming to create more of you takes a kind of self-control few people in this world can muster; I don't think even I can do it. The problem is that the moment you achieve it, you remove yourself from the gene pool, so your kids don't inherit your hardware. However, you do augment the fitness of your collective as if they were your kids, so your ability to transcend the desire to procreate is somewhat selected back into the genome, because your presence enables human life (and continued reproduction) where it was previously not possible.
Wearing these electronics in front of your face makes you un-datable, and if you are completely OK with that, then you're a candidate for being a very-early adopter of the next thing that's going to be bigger than the invention of the Internet.
For the record, this guy has been wearing actual computers.
Google Glass is an accessory: essentially a Bluetooth headset, display, and camera built into glasses. The intelligence lives on the servers, and Glass needs a Bluetooth or WiFi connection to talk to the net.
I think Google is engaging in a bit of a PR swindle by making people think Google Glass is like an iPhone. It isn't; it needs an iPhone or Android phone to connect to the net.
Consequently it can't replace a smartphone.
I'm also pretty dubious about the battery time it will get, even without having to run a local CPU.
When reading comments by nirvana you need to realise that anything done by Apple is good, and anything done by anyone not Apple (but especially MS, Google, and Samsung) is stupid, or evil, or crooked, or dumb.
"Google is trying to swindle people with dishonest PR stunts" translates into "Google is doing the normal PR stunts that every company attempts; there are problems with most PR."
In 1998, researchers with 1,000 subjects found they could predict with 93% confidence whether a comment was made by nirvana just from reading the post, based on phrasing such as "the real reality distortion field".
Yes, of course, technology will never progress beyond what we have today; we won't get better CPUs or batteries... ever!! /sarcasm. No, seriously, you should check out the Osborne 1.
Me: Hasn't this guy ever heard of HDR? He could have just used a couple of video cameras with some processing.
SM: A few years before this, I had returned to my original inspiration—better welding helmets—and built some that incorporated vision-enhancing technology. [...] These helmets exploit an image-processing technique I invented that is now commonly used to produce HDR (high-dynamic-range) photos.
Me: Oh. Right.