Me: Hasn't this guy ever heard of HDR? He could have just used a couple of video cameras with some processing.
SM: A few years before this, I had returned to my original inspiration—better welding helmets—and built some that incorporated vision-enhancing technology. [...] These helmets exploit an image-processing technique I invented that is now commonly used to produce HDR (high-dynamic-range) photos.
Me: Oh. Right.
Doing a 'why doesn't he just' on him means you're going to have to do the equivalent of 6 months of continuous reading first if you want to avoid making a fool of yourself, so I wouldn't worry about not knowing about his connection to HDR.
Steve and I had some interesting exchanges back in '95 or so when video on the web was still a novelty. Steve went on to make history with his series of inventions.
What's extremely impressive to me is Steve's incredible faith in his own inventions, no clinical trials on others but dog-fooding in the extreme. Bolting things onto (and into) his body for attachment and augmenting his world. He's a real pioneer in every sense of the word.
From Wikipedia: Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.
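For anyone wondering what "extended dynamic range" from multiple exposures boils down to in practice, here's a minimal sketch (to be clear: this is not Mann's patented method, and the function name, fixed gamma, and triangle weight are my own simplifying assumptions): convert each bracketed shot to approximate linear light, divide by its exposure time to estimate radiance, and average with a weight that trusts mid-range pixels most.

```python
import numpy as np

def merge_exposures(images, exposure_times, gamma=2.2):
    """Merge bracketed uint8 exposures into one radiance estimate.

    Each frame is undone from display gamma to approximate linear light,
    divided by its exposure time to estimate scene radiance, and combined
    with a triangle weight that discounts clipped shadows and highlights.
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64) / 255.0
        linear = z ** gamma                   # rough inverse of display gamma
        weight = 1.0 - np.abs(2.0 * z - 1.0)  # peaks at mid-gray, zero at clipping
        num += weight * linear / t            # per-frame radiance estimate
        den += weight
    return num / np.maximum(den, 1e-6)
```

Real pipelines recover the camera's actual response curve instead of assuming a fixed gamma, but the weighted-average structure is the same.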
Also - does he really have so much experience with that tech? There seems to be some controversy around how much self-promotion he's doing via Wikipedia.
He's both self-promotional and has legitimately done all of the stuff he's promoting.
Incidentally, for those who haven't read it, pg's analysis of the incredible technology of the Segway being completely trumped by human fashion is great: http://www.paulgraham.com/segway.html
Surprisingly many professors I know seem to be genuinely motivated by what they see as a service to society. Typically, more through teaching than research.
Yes, there are a few big jerks who are clearly driven by wanting to show off.
I do know of one or two in my area who seem this way, and I think they're very shallow people. And I know of some profs who are just kind of slackers. Maybe they thought academia was something different from what it turned out to be.
But most profs are smart enough to know what kind of life it is going in. (A modicum of intelligence and societal savvy are required to get the job.)
Then what the heck do you think they're in it for?
It's not always easy to distinguish between status motivations and the more socially desirable motivations. In fact, one measure of how well our institutions are constructed is how well aligned these motivations are; ideally, they would be indistinguishable. But there are ways to tease this out.
For example, when some exogenous input (e.g. a federal funding cut) knocks these people out of a high-status career, do they still work long hours in their spare time on the subject? Very rarely, especially as it becomes clear they won't regain their former respect.
The incredible power of fads in academia, even among those with tenure, is more evidence. The best group strategies for maximizing impact call for diversifying research routes, but academic researchers clump very strongly because this is often in the best interest of an individual's status.
Human motivations are complicated, and this makes it very difficult to compile indisputable evidence of anything. This is compounded by the fact that we want to appear to have other motivations than our actual ones. However, I think if you take a careful look and try to model academics as robots, you'll find that "tries to maximize status" is a better first-order approximation than "tries to maximize academic contribution".
For the record, I've been strongly influenced by the views of economist Robin Hanson on this subject.
Implying an educated person.
> ... I can tell you, what your [sic] saying is just plain untrue.
Did that education include English literacy? s/your/you're/
I'm actually far more literate than most of my colleagues. Making this mistake honestly makes me want to harm myself.
Well, I, for one, was fascinated with wearable computers for a time after reading an article about Steve Mann (in MIT Technology Review, IIRC). That was about fifteen years ago. Based on my memory of that article, I'd say that 20 years is a bit of an understatement.
Steve Mann has been working on head mounted displays, true, but he's been focusing on local horsepower wearable computers.
Ultimately, I don't think google glass is anything more than a PR project to remove the stigma of google as ripoff artists and to make it look like they're innovative.
Given current technology, glass on wifi should have about 20 minutes of battery life, maybe an hour. Which makes them pretty useless.
There's really quite a difference between a wearable computer (what Mann works on) and a bluetooth headset with integrated display (what glass is.)
I guess this CPU-less device talks to a Wifi connection via... magic.
For me it's something that executes instructions and has its own instruction set. It can even be built on a breadboard, and you could eventually invent your own instruction set.
Even low-powered microcontrollers have a CPU (microcontrollers are small, low-powered computers), and microcontrollers come in many sizes.
Your point could be that it's just a head-mounted display on top of an ASIC, which I doubt it is.
Right now I think Glass is just a display for smartphones and a way to use Google services, which I think is quite limited (you said it's useless without the net, and I agree). We don't even have the tech to run sophisticated speech recognition on a smartphone without a couple of servers crunching statistical formulas, so why would you expect it to be different with a low-powered device?
EDIT: Basically my last paragraph is saying that I agree with you, but without being too harsh in the comments. This could be the beginning of the wearable computers revolution, along with an iWatch.
It works insanely fast, transcribing what I say in near real time, which feels like black magic compared to Siri on my iPhone, which has to record an audio clip in its entirety, send it up to their servers, process it, then send a response back.
I disagree with everything about your comment, but especially this part. What on earth are you talking about? Labeling Google as non-innovative and Glass a mere "PR project" is shortsighted (oops, forgot about self-driving cars) and quite frankly, something I'd expect from an Engadget comment thread, not someone who managed to snag "nirvana" on HN.
I still feel that is very poor, though; for Glass to really be useful it should last a long working day, and ideally all of your average waking hours.
It remains a bluetooth headset with integrated display and camera. The phone and some remote servers do the real work.
I'm willing to bet that you haven't used Glass, but don't let that stop you from desperately trying to portray it as vaporware, a PR stunt, or some kind of scheme masterminded by Satan himself on EVERY HN thread you can plausibly cram that nonsense into.
Between your recent posts about Glass and taligent's constant downplaying of Google's maps and autonomous cars, I have to wonder what makes you two work so hard at spewing blind hate for them here.
I don't see why you think there's such a large difference between local horsepower and cloud computing. They complement each other, and the ultimate technology will be a mixture of both. User interaction is the much more interesting and difficult problem.
(And your battery life comment is misplaced.)
Regardless of whether the computation is done locally or not, he has been working on "augmented reality" using glasses; he may even prove to be a great asset for Google.
When somebody crashes into your house, or you witness a crime in progress, or you need to remember an important detail from a business dealing, it just may be worth the 1500 dollars you spent on Google Glass.
Other applications of Google Glass may provide utility on a daily basis. I can imagine getting 10 dollars' worth of useful service from Google Glass every day, plus 5 dollars in security benefit for the surrounding society. Multiply that by 365 days and you get 5475 USD in economic benefit every year. And don't forget high-value recordings, such as records of criminal activity or abuse of authority by police.
(Of course, if you're too poor, then Google Glass isn't worth 1500 USD to you, even if it may someday deliver 1500 USD of value.)
Also, private property owners (like store owners) can eject you for trying to record video on their premises. There are some places where recording devices will never be welcome, such as movie theaters, sporting events and workplaces that deal with confidential information (e.g., a doctor's or lawyer's office, or even a start-up company whose product wasn't yet announced).
And I'm pretty sure that even Google wouldn't be happy if all their employees wore these to work every day. Would your work colleagues or managers speak candidly with you if they knew that their every word might be getting recorded?
If I recall correctly, there's a light visible on the outside of the eyepiece (the lens part) that's on whenever recording is happening. There will be no question if recording is going on or not.
Will it help? Once criminals recognize the device, they will either steal it or smash you in the face to destroy the evidence.
Such a system for security ought to have video storage inside a hard shell with a high speed 4G link, able to send the last minute of video/sound (with gps position) to the police at the push of a button.
The high speed link is probably a bit bulky (batteries) and will probably always be put into a pocket or something anyway, even without security considerations.
But sure, criminals could get some equipment to disturb mobile connections.
2. Do you really hope for each and everything you see to be uploaded to a remote server?
2. I find it to be an intriguing idea, but one that does admittedly scare the bejesus out of me. Unfortunately, I consider it all too probable that near-ubiquitous camera surveillance is in our future whether we like it or not, and in that scenario I'd vastly prefer for individuals to have their own recordings as well rather than blindly trust that the governments and/or corporations running the other cameras will always act with my best interests at heart.
This image retention also happened when he was attacked at a McDonald's in Paris last summer: http://www.huffingtonpost.com/2012/07/17/steve-mann-attacked...
Here's his account: http://eyetap.blogspot.com/2012/07/physical-assault-by-mcdon...
As far as I know, nothing ever came of it (i.e. there were no charges or settlements).
Yeah, you don't have to worry, my Glass never records anything... well, except when it's super convenient to me, and then it magically breaks just at the right moment.
Would I pay cost plus 50% for one? Absolutely. Up to and including car-level prices. This is HN, a startup, anyone?
The area which I think would be super-interesting and easy would be pure audio mediated reality. Vision is hard, but I could do audio for $500. I have shooting earmuffs which essentially do this already -- they have microphones and speakers, and amplify soft sounds while attenuating loud sounds.
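The earmuff trick is basically dynamic-range compression applied to live audio. A toy per-sample version (the threshold and ratio values here are arbitrary choices of mine, not from any actual product):

```python
import numpy as np

def compress_dynamics(samples, threshold=0.1, ratio=4.0):
    """Crude dynamic-range compression on a float waveform in [-1, 1].

    Magnitudes above `threshold` are attenuated by `ratio`, then makeup
    gain brings the peak back to ~1.0, so quiet sounds gain relatively
    more than loud ones and the overall dynamic range shrinks.
    """
    mag = np.abs(samples)
    over = mag > threshold
    out = mag.copy()
    out[over] = threshold + (mag[over] - threshold) / ratio
    makeup = 1.0 / (threshold + (1.0 - threshold) / ratio)  # peak back to ~1.0
    return np.sign(samples) * out * makeup
```

Real electronic earmuffs also need fast attack/release envelopes so a gunshot is clamped within milliseconds, but the gain curve is the same idea.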
Sounds like he has some great insights here. He's also known as 'the world's first cyborg' (http://en.wikipedia.org/wiki/Steve_Mann), and the lonely trail he seemed to be on is now shifting to the mainstream.
His devices in the pictures are shown sitting between the eye and the external world, whereas, if you look at Glass, the screen is up and out of your line of sight.
I think if your wearable tech display is always on and continuously visible, it'll be a problem, battery life will be negatively impacted, and the device will distract you constantly.
The author posits that Google placed the screen out of the line of sight to avoid vision misalignment and misadjustment problems.
(Also, the eyestrain due to focus distances was mentioned, and apparently solved by using an "aremac": a pinhole camera in reverse, which means the video is in focus at every distance.)
Probably because the driver was like "AAHHHHH!!!! A FREAKIN' CYBORG!!!!"
Current time/pace/distance/route/whatever when I'm running.
Current speed, next corner severity/distance when I'm longboarding. (http://swizec.com/blog/ifihadglass-the-app-i-want-to-build/s...)
That alone would be worth the money to me. Such things already exist for skiing goggles, but those aren't extensible and only really fit one sport. So that's no good.
There is also the o-synce screeneye x that is a visor that gives you a heads up display. http://www.o-synce.com/en/products/running-fitness/data4visi...
What you want is coming; what exists now is very basic. It just takes time until the technology hits the price point that makes consumer products possible.
(Iirc, this is a setup one of Mann's students had.)
Edit: OK, I do want a camera too. And video log. And... But 80+% of usability would come from Emacs lisp (or short scripts run from shell)
Edit 2: Love HN. I comment about a setup I read about years ago and have been waiting for buying the hardware -- and of course get answers (I assume that w/out employment contracts, they would have been more detailed). Thanks.
I remember thinking at the time that it was a very "MIT" take on Ubicomp, a community that is otherwise pretty strongly infused with an Apple-esque "intuitive interface" ethos.
... and after a quick search, it looks like he works at Google, most likely on Glass.
Of note, there's a version of the RA that has additions that make it more suitable for use on wearable computers. The first item in the papers section (Using Physical Context..., 2003) describes all the extra stuff that the wearable version does.
I wonder, though, if the manner in which it presents information in the Emacs UI (namely, as a list of headlines over a part of the screen that is continuously updated) could worsen understanding and natural memorization in the way some research shows news tickers on television do.
 http://www.tandfonline.com/doi/pdf/10.1080/08838158509386593, http://blog.lib.umn.edu/stgeorge/artofscientificpresentation...
Emacs sucked, though -- it was a pretty good case for vi, due to the control keys not working well with the versions I used. But it looks like they fixed that.
I'm curious how Google and other developers of high-tech eyewear will account for us with out-of-the-ordinary eye conditions. If the glasses or certain apps rely on eye movements for communication, we probably couldn't use them.
> when the computer is damaged, e.g. by falling and hitting the ground (or by a physical assault), buffered pictures for processing remain in its memory, and are not overwritten with new ones by the then non-functioning computer vision system.
Fragile design or, say, accelerometer-controlled backup memory?
Apparently nothing about fashion?
Reminds me of these projects:
Maybe a V1 of this could have Google Glass take a photo every minute. You could upload it automatically to Evernote or your private G+ photo feed. Then, you could occasionally review and "star" the important moments of your life (and maybe even delete/summarize chunks that are less important).
Obligatory sci-fi cautionary tale: http://www.channel4.com/programmes/black-mirror/episode-guid...
A circular buffer and explicit save is the furthest I'd want to take it.
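That model is easy to state precisely: frames fall off the back of a fixed-size ring unless you explicitly save, so nothing persists by default. A sketch (`RingRecorder` is my own hypothetical name, not anything Glass actually ships):

```python
from collections import deque

class RingRecorder:
    """Circular buffer plus explicit save: only the last `capacity`
    frames are ever retained, and a snapshot happens only when the
    wearer deliberately asks for one."""

    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)

    def record(self, frame):
        self.frames.append(frame)   # oldest frame is silently overwritten

    def save(self):
        return list(self.frames)    # snapshot of the retained window
```

`deque(maxlen=...)` does the overwrite-oldest behavior for free, which is why it's the idiomatic Python ring buffer.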
Consider what mentioning The Matrix or Terminator does to a discussion about AI. What Black Mirror's "The Entire History of You" does to lifelogging resembles what those films do to artificial intelligence for dramatic purposes. I highly recommend Less Wrong's article on the subject for an in-depth discussion.
 Trailer: https://www.youtube.com/watch?v=3bFCqK81s7Y, plot summary (spoilers): https://en.wikipedia.org/wiki/Black_Mirror_%28TV_series%29#S....
(I'm taking this from Transmetropolitan if anyone's in the know)
I would pay for this. I would pay a lot.
It seems the Oculus Rift does not have anything similar.
If you haven't seen it, 'Black Mirror' on TV here in the UK has an excellent episode where nearly everyone (voluntarily) has an implant which records everything they see.
Well worth a watch: http://www.channel4.com/programmes/black-mirror/4od#3327868
not sure how available this is outside the UK. It's called 'The Entire History of You'
Why is this guy not consulting for Google? And I'm not sure if I'm more astounded or thankful that he has not patented his research.
So a few insights:
1. If you are putting something in front of your eyes, or on your hat brim, that looks like a hacked-together bunch of cameras and wires, and you wear it in public, there are millions of years of evolution causing people to ostracise you. It's so bad that a blind person told me: "The ostracism from wearing it is worse than the ostracism from them realizing you're blind."
2. You think you're confident and can handle it? You aren't; inside you are millions of years of evolution pushing you to remove whatever is causing the ostracism. If you are the kind of person who can choose to remain single and lonely for life while you burn with passion for the opposite sex, then you have the kind of mettle it takes to wear cameras and wires on your head in public.
3. The experience I had with converting visual to audio and using my audio cortex was tremendous. For example objects that "popped out" at me during audio-vision were completely different than normal vision. Take a brick wall for instance: I could hear the distance between the bricks (cement) was smaller in one spot, and larger in another spot because of an anomalous blip in the audio file. When looking at it visually, you think "meh", it's just a brick wall. With the audio file, the different brick leaps out at you as an anomaly. Thus exposing the data structure/algorithmic differences between the visual cortex and audio cortex.
Doing visual as audio makes you an infant again, the tiniest changes in things leap out as fascinating. This experience I had could probably be sold to people bored to tears with life. A billion dollar idea! Be an infant again.
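For the curious, a minimal version of that visual-to-audio mapping looks something like this: pitch encodes vertical position, loudness encodes brightness, and the image is scanned column by column, left to right (roughly the scheme systems like The vOICe use; the function names, sample rate, and frequency range here are my guesses, not the actual rig):

```python
import numpy as np

def sonify_column(column, duration=0.05, rate=8000, f_lo=200.0, f_hi=2000.0):
    """Turn one image column (top-to-bottom brightness in [0, 1]) into a
    short clip: each row drives a sine whose pitch encodes vertical
    position and whose loudness encodes brightness."""
    t = np.arange(int(duration * rate)) / rate
    freqs = np.linspace(f_hi, f_lo, len(column))   # top of image = high pitch
    tones = column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
    clip = tones.sum(axis=0)
    peak = np.abs(clip).max()
    return clip / peak if peak > 0 else clip       # avoid dividing by silence

def sonify_image(image):
    """Scan left to right, one clip per column, like a slow audio raster."""
    return np.concatenate([sonify_column(image[:, x])
                           for x in range(image.shape[1])])
```

With a mapping like this, the anomalous mortar gap in a brick wall really does become a blip in an otherwise periodic sound, which is the effect described above.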
So miniaturization is important, but I think the real improvements to be made are in software. In the coming era, these devices are going to start offering real-world superpowers. People who never forget a face, or where they put their keys, or anything really. People who can have a quiet conversation in a noisy room. People who can do basic computing tasks subconsciously while having a conversation. People with "spider-sense" who never seem surprised by anything.
These tools will still look dorky, but the advantages they offer will be so great that the people who do use them will be very cool regardless.
 And I'd augment this claim by noting two examples off the top of my head of people who do use these technologies professionally: surgeons and fighter pilots. Both of those jobs involve doing unfathomably difficult things human beings are incredibly unsuited to do, so they will take every advantage they can get, hang the cost and the aesthetic.
The only problem is that it requires your tongue. And your tongue is where you talk and eat. Once we can overcome this problem, there are huge implications.
Who said anything about choosing?
TL;DR: it's not about not having game, it's about having the guts to tell the entire rest of the world to get fucked. Big difference.
Wearing these electronics in front of your face makes you un-datable, and if you are completely OK with that, then you're a candidate for being a very-early adopter of the next thing that's going to be bigger than the invention of the Internet.
Google glass is an accessory- essentially a bluetooth headset, display and camera built into glasses. The intelligence lives on the servers, and glass needs a bluetooth or wifi connection to talk to the net.
I think Google's engaging in a bit of a PR swindle by making people think Google Glass is like an iPhone. It isn't; it needs an iPhone or Android phone to connect to the net.
Consequently it can't replace a smartphone.
I'm also pretty dubious about the battery time it will get, even without having to run a local CPU.
"Google is trying to swindle people with dishonest PR stunts" translates into "Google is doing the normal pr stunts that every company attempt; there are problems with most pr."
In 1998, researchers with 1000 subjects found they could predict with 93% confidence whether a comment was made by nirvana or not just from reading the post, based on phrasing such as "the real reality distortion field".