Glass (dcurt.is)
401 points by nate on May 30, 2013 | 174 comments



After trying Google Glass last week, I think it still has a long way to go. Many tech people lined up to try it out. All of us failed to do anything useful with it. We had to be told how to activate it. We had to be told what we could do with it, and what phrases it keys off of ('Okay, Glass'). These were all highly technical people, and we couldn't figure it out on our own.

Contrast that with a few Christmases ago, when I left my iPad out and found my dad watching aeroplane videos on YouTube. He didn't know what an iPad was (or YouTube, for that matter). Yet he still managed to find something he could appreciate with it, with no guidance.

We tech-savvy people couldn't figure out how to turn Google Glass on. Those with glasses could barely see it even when it was.


>All of us failed to do anything useful with it. We had to be told how to activate it. We had to be told what we could do with it.

If I'm going to wear something on my head like a nutcase, I would hope that it has been designed for _usability_, not _discoverability_. Making everything idiot-proof, or at least "didn't bother to read the manual"-proof, leaves us with only devices suited for idiots and people too lazy to read the manual.

EDIT: That said... I'm not going to wear something on my head like a nutcase, because no matter how cool it is, I'm too shy/vain/self-conscious to do so. The only people willing to wear Glass, I would bet, are people who are willing to learn to use something to get the most benefit out of it (I would bet on a huge overlap with emacs and vim users).


> "leaves us with only devices suited for idiots and people too lazy to read the manual."

Devices "suited for idiots and people too lazy to read the manual" have, in the past decade, driven massive growth in computing, made it more accessible to more people, changed the landscape of the entire world, and even fueled a few revolutions.

But by all means, when people need to read a book just to use a device, we'll consider it a badge of honor instead of a failure of design.


> Devices "suited for idiots and people too lazy to read the manual" have, in the past decade, driven massive growths in computing, made it more accessible to more people, changed the landscape of the entire world, and even fueled a few revolutions

The touch devices I assume you're referring to are intuitive because as a human you're naturally good at using your hands to make things move around.

Push a button and it acts like a button; push a thing that looks like a sheet of paper and it moves the way you would expect a sheet of paper on the "real" desk in front of you to move.

They're "idiot proof" because we've been trained to use them most of our lives.

>But by all means, when people need to read a book just to use a device, we'll consider it a badge of honor instead of a failure of design.

Glass is something we interact with through an entirely alien interface, and it is potentially an extremely powerful tool. I'm not talking about a podcast app, but a head-mounted computer you might potentially be using for hours at a time.

Unless you eat a lot of acid, you're probably not familiar with how to interact with imaginary things that float in space before you.

To me, "they had to tell me how to use it" is a much weaker complaint than "it was easy but tedious to use" for something that sits on the side of your head.

This blog post, and the ones it links to, do a much better job of exploring this than I could here:

http://haacked.com/archive/2008/11/06/usability-vs-discovera...

(somewhat relevant and fun top gear segment on the evolution of car interfaces) http://www.streetfire.net/video/125-top-gear-first-modern-ca...


And yet, a whole series of such "unintuitive" devices is what led to the iPad.


And note that Apple was not the inventor of any of those unintuitive devices. Which makes one wonder who will ultimately make the "iPad of Glass." :)


Yep, the Newton is nothing of the sort :-P


FWIW, a good quote from Jeff Atwood: "Think of Glass as the Apple Newton of wearable tech". In ten years' time we'll be chatting to these glasses like it's the most natural thing in the world.


The UI may initially take some time, but not any more than learning how to use a pencil the first time. Glass feels unlike anything you've used before because it is. That said, in my experience at Google I/O, I figured out how to use it very quickly, and I thought the interface was intuitive as I navigated. Using the touchpad on Glass is more like using a smartphone before touchscreens. It feels a little disconnected in that it is indirect control, but when Blackberries only had a jog wheel, millions of users had no problem using those devices.

Given that Glass doesn't have a launcher like most Android devices, the real challenge will be in actively using Glass. Passive use, whereby you are connected to applications using the Mirror API, means that you just receive notifications pushed by remote services, and if you don't respond to a card, it just gets pushed down the stack. You can navigate backwards to find those cards again, but just as you don't respond to every Facebook post, you can just let them flush out and choose to ignore them.

The two problems that have to be resolved are how you control "native" applications built using the GDK, and how you control the flood of extraneous information when you might subscribe to tens or hundreds of services that send you messages through the Mirror API.

While you can control which notifications you receive, it would be very easy to subscribe to "important" services and, just to get the notifications you do care about, still feel overwhelmed by ones that aren't particularly interesting. Culling this stream into something more useful is where some of the AI research may also pay off. It is a difficult balance to strike today, but as this tech evolves, I have no doubt this problem can be solved too.
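The passive card-stack behavior described above can be sketched as a toy model (this is just an illustration of the timeline semantics, not the actual Mirror API or its data types):

```python
# Toy model of the Glass timeline: remote services push cards, new cards
# land at the front, unanswered cards sink down the stack until they
# flush out, and you can scroll backwards to find older ones.
class Timeline:
    def __init__(self, max_cards=10):
        self.cards = []          # index 0 = most recent card
        self.max_cards = max_cards

    def push(self, card):
        """A remote service pushes a notification card."""
        self.cards.insert(0, card)
        del self.cards[self.max_cards:]   # old cards flush out

    def scroll_back(self, n):
        """Navigate backwards n cards; None if it has flushed out."""
        return self.cards[n] if n < len(self.cards) else None

timeline = Timeline(max_cards=3)
for msg in ["weather", "news", "email", "sports"]:
    timeline.push({"text": msg})

print([c["text"] for c in timeline.cards])  # → ['sports', 'email', 'news']
```

A notification filter would then just be a predicate applied before `push`, which is where the AI-driven culling mentioned above would slot in.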


When mice first came out for PCs, they included training software for how to use them. Earlier versions of Android used to include intro software for the same reason.

In retrospect drag and drop, pinching, double clicks, long holds, swipes, acceleration etc are all obvious. But at one point they weren't.


As someone who just got their Glass last night, I have a FEW conflicting points, although a lot of it is spot on.

-I disagree that the screen isn't bright enough, and I haven't had the issue where you have to look at something dark to see it.

-I also disagree about "dangerous while driving." In fact, I would argue quite the opposite: it's SO much safer than looking at your phone for directions. The fact that any information I need is a simple, fast glance away is much better than holding up my phone and looking back and forth between it and the road.

(Side note- it's also fantastic for directions while biking. Previously I'd have to do this scary "check my phone while biking" thing, which was no good at all.)

-Looking at the screen has felt incredibly natural to me. You're not looking at the extreme top right. I don't have a lot more to say here other than YMMV, I guess.


How does it compare to a windshield mounted TomTom?


I'd still argue less distracting. Taking my eyes off the road vs changing the focus point feels a lot different.


I agree 100% with this statement. It is safer to look at Glass while driving than any dash-mounted device, let alone your mobile phone in your hand or lap. It may even be safer than glancing at your speedometer.


I'm interested in a second opinion on the speaker. I think the bone conduction was more a privacy than a convenience choice. He mentions others can hear garbled sound. What did it seem like to you?


It is too quiet, and folks around can hear _something_, but it isn't clear. Sound needs to improve; otherwise it is only useful in very quiet places.


I can hear it fine in quiet places. If I'm on the road (biking or driving) there's too much ambient noise, and I can hear notifications or dings, but no detailed audio.

As far as what other people can hear, definitely something, but not super well. Also, depends on how quiet it is.


I just got my invite to order, and his comment about it being dangerous while driving is a non-starter... that's the main reason I'd find this useful (while cycling).


For cycling and other sports activities the Recon Jet glasses (http://jet.reconinstruments.com) might be a much better fit. I'm watching them closely, hoping to order once they become available.

Judging from the preliminary specs and information, it's a much nicer product: full developer docs available, an open system where you can deploy any apps you like, a very nice hardware platform and to top it all off, a much nicer price than Google Glass.

I also find the explicit design much more appealing — they don't try to hide the fact that there is a display on your face.


Hmm. I can't see how it would be much more dangerous than using a mirror. I ride with a mirror mounted on my glasses, and I glance up at it briefly then back at the road again. The same would be true of the glass. Sure, when you're first getting used to it you will probably linger on it for longer due to the novelty and getting used to the exact focus point. But once you get the hang of it, I can't imagine it would be much different than glancing at a mirror, which is perfectly safe.


The mirror is there to increase your situational awareness while cycling. Glass has a strong potential to do the opposite. If its use is limited to HUD for information useful to you for cycling, fine, but if you're getting text messages or anything else you have to think about, it's distracting.


Those do not pop up while navigation is on.


And, it's worth pointing out, even navigation isn't displayed all the time. It only pops up when giving instructions.


I'd be more concerned about trashing $1500-worth of hardware if I fall off my bike.


I've literally never fallen off of my road bike in 20 years of cycling, and if I get hit by a car or something I have bigger problems...


I do it about once every 2-3 years. Just last month I misjudged a curb cut while on a bike trail after dark. Before that, a railroad crossing I hit at an angle grabbed my tire. For me this is not all that rare...



I like it - but I'd worry about whether it'd be useful for warning oncoming traffic, and if not, how well it'd perform in the presence of another, more traditional headlight.

Still a spiffy idea.


I'd personally settle for a voice-output-only Glass (maybe with chording for text input to avoid the "talking to yourself crazy person" problem, and with a camera/microphone built in for ambient data gathering).

Hardware we could have built for the last 10 years, and really unobtrusive. You'd need better UI and software than with video, since information needs to be more closely tailored for a lower bitrate channel, but I think Google (or a smart startup) could do it.

99% of the reasons I use video are that I need to do the filtering and postprocessing in my head. If I had complete trust in a great software agent, I could just let it tell me what to do, vs. showing me enough data to make a decision. E.g., for driving: a talented codriver can give directions by voice that do NOT require any visual information from the driver, even during a high-speed rally. Almost no car nav systems are that smart, but if Google can build a self-driving car, they should be able to make an awesome codriver/navigator.

Extends to almost anything. I don't need to see a picture of someone and a dossier; just remind me of the most critical facts as needed, by voice.

This should be just as good as having a clone of yourself, or an entire team of ops people, watching/listening to what you do, and giving you voice prompts, just like a video game, or being the President on TV in a live debate, or whatever.

The hardware is trivial; it's all software and back-end processing, which Apple sucks at and Google/Amazon/Startup should rock at.


The interesting thing is that Google's recent high profile AI hires like Hinton and Kurzweil have said they'll be working on NLP systems, which leads to some interesting possibilities along the lines of what you're saying.


Yes, with such high-profile hires as Kurzweil, we might even see the singularity emerge in Google Glass first. /s


I'd be following Taleb here and would be very skeptical about the positive or negative outcome of hiring a Kurzweil at Google.

To expand: Big companies may need very talented individuals, like Peter Norvig, who can drive innovation across big teams and have a very precise mind, and all the humility required. But in the case of Kurzweil, it looks more like he is invited to play the clown and amuse the audience than to lead anything concrete.


It's easy to knock Mr. Kurzweil, given his supranormal convictions about the relatively near-term future of our society. However, check out some of the names on this list: http://en.wikipedia.org/wiki/Grace_Murray_Hopper_Award

   ...it looks more like he is invited
   to play the clown and amuse the [audience],
   [rather] than to lead [to] anything concrete.
It may be a bit premature to discount him so readily.


That looks like a very bad list.

Some great scientists, some ho-hum and some that are there mostly for political or other reasons. I mean Wozniak in 1979? What had he done at that point to be in the same list with Knuth or even Bill Joy?

Since you invoke the list, are we to expect the same great stuff from Kurzweil in Google, as we got from Wozniak post 1979, Richard Stallman past 1990 and Stroustrup past 1993?


> I mean Wozniak in 1979? What had he done at that point to be in the same list with Knuth or even Bill Joy?

Woz was largely responsible for designing the Apple I & Apple II, both of which played a pretty significant part in starting the entire PC industry. Seems like a pretty spectacular achievement to me.


... and those hardware designs are generally very well respected. He didn't just kludge those boards together.


You don't even _need_ Glass for that -- just an unobtrusive bluetooth headset, and a good backend.


Video and ambient mic would be nice, and data. We could have had this 10+ years ago, although it really only became practical with 3G I think.


Yeah something like the Pebble watch but as a Bluetooth-style headset. (Come on Kickstarter community)

It should support something more reliable than SCO and an input touchpad like Glass. It would also make sense to have some of the initial NLP processing in the device, or even a full ARM processor like the Glass running arbitrary applications.

You would be able to make calls through an offloaded cell phone using a more advanced profile than SCO HS.

If I could remove the device and place it on the car visor to use as a noise-cancelling speakerphone like the Motorola Bluetooth unit I have, even better.


> I'd personally settle for a voice-output-only Glass (maybe with chording for text input to avoid the "talking to yourself crazy person" problem, and with a camera/microphone built in for ambient data gathering).

I pitched exactly that idea to Logitech & Inria in Switzerland about 15 years ago and I agree with you, we could have done that long ago and it is one of the best interfaces that you could have. Think 'personal assistant'.

The general idea was to simply open a voice channel with a server farm somewhere that listens to your ambient audio, picks up cues/clicks, and responds with audio.

The project foundered on privacy concerns, and on speech-decoding issues that Inria foresaw would be hard to overcome.


> The industrial design is solid, and though it is being manufactured in small batches, it has the build quality you might expect from something being mass-produced.

Not sure what to make of this statement. The design is beautiful, but I don't see why that would be an artifact of mass production. The early prototypes for Glass were much sexier in terms of build integrity (optics, for example) than the current mass-produced models. One of the big challenges, in fact, was lowering expenses for mass-producibility. However, yes, the design is wonderfully intelligent. You'll notice some optical tricks when you look at Glass. For example, it looks as if the frame takes up a portion of the sides, but in reality there is circuitry hidden behind it; by looking at it you'd never know :)

> The interface is not intuitive. It is actually very difficult to use the first time, for seemingly no reason. ... I would have expected more design attention to have been spent on interacting with the software.

I really feel this is a non-issue. Yes, you don't know how to use it when you first get it, but after a few instructions from a friend, I navigated the entire interface just fine for about an hour without needing additional help. Mostly, I just had to learn about the well-disguised touchpad. I've used a lot of silly software and hardware, and Glass is not one of them. It's a new product class, and it's going to have a new interface.

> Glass doesn't communicate with you very much, and when it does, it doesn't use audio. It makes heavy use of the screen when possible. When navigating Glass, you can rarely speak selections. The only way to fully navigate the interface is to use the touchpad by holding your hand up near your face.

Navigation is definitely an issue. However, the Glass team planned on external devices being used to navigate Glass. Once Thalmic's MYO is fully operational, I don't think navigation will be much of a problem.

> The battery life is dreadful. After ten minutes of use, the battery level reported went down by at least 8%. The owner told me that it would probably last about two hours with constant use. (This is hopefully a temporary handicap that will be improved in the future, but I find it hard to consider even this level of battery life good enough for a device that is sold.)

This is supposed to be fixed. If you use video recording especially, the battery will drain really quickly. The idea is that by the final product, it will last about 12 hours with passive use (kind of like a cell phone).


> "I really feel this is a non-issue."

For geeks, probably not. For normals? It's a pretty big deal. Normals weren't entirely convinced by the mouse. They learned it. Sort of. But many never even grokked the whole 'click vs double-click vs right-click' thing.

And the mouse was at least a fairly consistently-behaved indirect pointing device. Tapping and swiping an inconsistent, indirect touch surface, particularly after having learned direct-manipulation touch-screens, is not going to go over well.

Keep in mind that to make a Glass-style wearable make sense to a casual user, it has to be more efficient than simply taking out their cell phone. And for people for whom HUDs are a natural advantage, it needs to be efficiently usable without requiring that person's hands.

So seemingly minor annoyances can add up quickly to a determination of "not worth it". You've probably got about a half-second of grace before people go back to their phone.

> "Once Thalmic's MYO is fully operational, I don't think navigation will be much of a problem."

Hand navigation might be easier, but the social problems will get massively amplified. Nodding/tapping/talking is 'weird' enough. Throw in some finger/hand/arm movements and this thing's never leaving the den of specialized technologists.

And requiring hand gestures can kill usefulness for those (hands-full) people that most naturally benefit from a HUD.

To me, Glass is looking more and more like Microsoft's stab at tablets. It's an early attempt that's going to make a class of specialized users very happy. But it's not going to be in casual use on trains, in coffee shops, etc. Its supposed efficiency gains are largely hamstrung by interactivity problems that are just annoying enough to send most users back to the alternative tools.

The real test for Google is whether they address these problems or pretend they don't exist -- as Microsoft did -- until someone else comes along and eats their lunch with a far more modest solution.


Glass needs to be an overlay for your entire vision, so that active UI could be shown overlaid on any surface. Tapping out a private message on your sleeve wouldn't be that weird once people are used to it. It wouldn't even need to drive up the hardware requirements much.


> For example, you'll see as if the frame takes up a portion of the sides, but in reality, there is circuitry hidden behind it--by looking at it you'd never know.

What?


Re: "The industrial design is solid, and though it is being manufactured in small batches, it has the build quality you might expect from something being mass-produced."

I expect high quality from hand produced jewelry, art, etc. Knowing nothing else, I wouldn't expect high build quality and beauty from handmade consumer electronics. I assumed the OP was just saying that it doesn't look like a prototype or hack.


I spent an hour tonight debating someone on how the Oculus Rift is revolutionary while Glass is not. The first-hand reviews of the Rift vs. Glass are night and day. One is the future; the other is meh.


At the risk of starting a flame war, I would categorize the Oculus Rift as the meh, and Glass as the future.

I was able to use a head-mounted (and head-tracking) 3D display with a computer back when ROTT was popular (circa 1994). It's a novelty at best, and a headache-inducing nightmare at worst. I didn't see the magic then, and I certainly don't see the magic now.

A seamless hud, on the other hand, has the potential to do great things for your interaction with the environment. Based on reviews, however, it's still years out.


Seems a bit unfair to dismiss the Oculus Rift based on the state of head mounted displays from ~20 years ago. Yes, they used to suck. That's exactly what the Oculus Rift is trying to solve. Have you tried one?


> Yes, they used to suck.

Actually, no, this one didn't. It was fantastic technology. The resolution was the same as the monitors on the other computers, and the tracking was very fast. The only downside to the technology itself was that it was heavy.

The problem was that it was only a window, and a relatively small window at that. It's a glorified 3d monitor that denies you vision of your surroundings (and works very poorly with anyone with glasses).

Worse? You still need some other sort of controller (which you can't see when you're wearing them), and unless you're standing up, you can only look a limited amount around you. In any environment other than wandering around a virtual world, it's a curiosity at best.

FPS games - the head moves too slowly to make an accurate method of aiming (and think of your neck muscles afterwards), so you still need to use a mouse as your primary aiming device, and a keyboard to move.

MMO games (perhaps the ideal target for these) require extensive use of the keyboard and mouse (neither of which you can see with the device on), and are rarely played in first person view.

I just can't really see a market outside of VR, and then not without a whole new class of controllers and tactile feedback methods. It's the first (and arguably the easiest) part of a new class of technology, which when combined could be interesting. Until then... meh.


I've tried one (the dev one) and the resolution's crap - 1280 × 800 split across both eyes. It's shocking how bad it was.

The final one will have full HD, but even then I think it needs a quadrupling in res before it's even acceptable.


Have you actually used the Oculus? I agree that the 3d stuff from 20 years ago was shit, but that's not really relevant.


I haven't tried Glass, but I tried Oculus Rift a few weeks ago and it made me want them both more.

They're very different devices. The Rift immerses you in 3D worlds. Every way you tilt your head immediately changes your viewpoint. My perception of Glass is that it tries to overlay information on top of reality and add more senses to humans. They are similar means to completely different ends.


Agreed. Things like the Rift have been all over for a decade. Sony Glasstron, etc..


Interesting, so we're finally getting virtual reality helmets? I've been waiting since the early 1990s. They seemed to be right around the corner and were featured in several films like Lawnmower Man (1992), that Aerosmith video (Amazing, 1993), Disclosure (1994), Virtuosity (1995), Johnny Mnemonic (1995), Hackers (1995).

I suppose it makes sense, every technology seems to take about 20 years to reach the consumer market. I wonder if it has anything to do with patents lasting for 20 years.


I have one. It's amazing. It's everything you imagined it could be, and more.

It has the drawback of not integrating very well with most of the current games because the game's UI itself needs to be drawn in 3D perspective. For example if you don't do that then you'll see two crosshairs when you focus your vision in the distance. And if you focus on the crosshair, you'll see everything else in double vision.
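Roughly why that happens: a crosshair drawn at the same pixel in both eyes has zero disparity, which the visual system reads as "at infinity." Converge on something nearer and the overlay splits into two images. A back-of-the-envelope sketch (the 64 mm interpupillary distance is just a typical assumed value):

```python
import math

# A 2D-drawn crosshair is rendered identically to both eyes (zero
# disparity), i.e. as if it were at infinity. When the eyes converge
# on a point at distance d, that overlay lands on non-corresponding
# retinal points and appears doubled. The angular separation of the
# two images is twice the half-vergence angle atan(ipd / (2 * d)).
def double_image_angle(ipd_m, focus_dist_m):
    """Angular separation (degrees) of the doubled crosshair images
    when the eyes converge at focus_dist_m."""
    return math.degrees(2 * math.atan(ipd_m / (2 * focus_dist_m)))

# Typical interpupillary distance ~64 mm, focusing on a wall 5 m away:
print(round(double_image_angle(0.064, 5.0), 2))  # → 0.73 (degrees)
```

Rendering the crosshair at the depth of whatever it points at makes the disparities match and the doubling disappear, which is what drawing the UI "in 3D perspective" buys you.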

But that's minor. You should experience Google Streetview with it. You can google Eiffel Tower, look up, and almost feel like you're there.


Oculus Rift is the real deal. Not sure why people compare it with Glass, though. They are not even remotely competing with each other.


Agreed. Google Glass is more about showing some useful information while you're on the go and not about immersive 3d gaming.


Neither is revolutionary; both are evolutionary, because products in both categories existed long ago.

I remember watching members of the sailing world cup wearing glass equivalents like what, 10 years ago?

First Rift equivalents are like 30 years old or older.

Rift is now riding the wave of affordable, high-quality solid-state accelerometers and gyros created by the car and cell phone industries.


Those products were never successful. They didn't ever matter.

Turning a novelty that is unappealing to most people into a mass market product is damn well revolutionary. Not many companies are able to pull that off. (Sometimes that transition is more gradual and evolutionary - I would argue that was the case with digital cameras - but sometimes it's actually a single product that makes this transformation happen. I would say the iPad is the prime example for that. Sometimes it's something between evolution and revolution.)

Yes, inventions do matter - but turning mere technology into products people actually want also matters. Both can be evolutionary, both can be revolutionary. Sometimes (though probably rarely) it's possible to do both in one step, often it's not.

If the Rift catches on with more than a handful of gamers and starts a sustained era of affordable, high-quality, low-latency head-tracking 3D HMDs, it will be a revolutionary product - and it doesn't matter even a little bit if something a bit like it existed 30 years ago.


Both products are amazing but that's a silly comparison.

The Rift gives you an instant 3D experience whereas Glass sits there idle most of the time and waits for it to be used as a tool when needed.

So to really assess the value of Glass you would have to use it for a few weeks, while the Rift only has to be used for a few minutes to really experience what it is all about.


See, now if the Oculus had an HD camera and could overlay its information on what you're looking at while walking around, that would be pretty fucking cool.


Anyone want an unsolicited opinion? waits for everyone to leave

I hope Google Glass is an experiment, because voice reco isn't particularly important outside of when you're driving. When I get tired of texting and consider voice reco, two things hit me (and others; I've done solid research on this):

1. Voice reco would be slightly better now that I'm tired of texting, but not much.

2. It is nowhere near socially acceptable to be speaking to no one. That's why you only see old people using bluetooth - how is that not an unbeatable sign? You can't just be speaking to no one.

The next step in communication has to be either low-input "aware" communication (aka the check-in) or communication that somehow otherwise reads your mind.


(I agree with you, and want to be clear that I do not bring up this example with the goal of showing you are "wrong", but someone I know brought up the example of "while cooking, when your hands are messy", and it made me think that there are at least a few other non-biking/driving examples where voice recognition would be valuable. Still, I agree with you.)


> someone I know brought up the example of "while cooking, and your hands are messy"

Voice control isn't there yet. You could have a card deck in your timeline that consists of the individual steps of a recipe, but you can't use your voice to advance to the next card; you have to swipe. Hopefully that will change.


I've found I can organize my thoughts better when I talk out loud; I have more than once wished I were wearing a Bluetooth headset, so it would be more socially acceptable for me to appear to be talking to myself.


Me too. I've drafted plenty of marketing documents speaking into Dragon Dictation on my phone, earphones in, walking down the street. I suspect it looks just like I'm on a phone call - nothing awkward for me about walking down the street on a phone call.


Sitting at a table messing with a computer in your hand used to be socially unacceptable.


And speaking to a live person via cellphone, in many places and contexts, is still considered socially unacceptable. And this hasn't changed all that much after a decade-plus of near-ubiquitous cell phone ownership.

So I can't see how replacing the phone-and-person with a wearable-and-computer-agent is going to move social change along any further or faster.


IT IS.

It is not about the thing in itself. It is about the "f*ck you, my computer is more interesting than you" attitude.

With a tablet or big phone you could integrate everything into a social context. You could search something socially for the entire group to benefit. Show pictures for the entire group. Whatever.

Or you could look like a social retard that uses the computer to isolate herself.


I sincerely hope you missed the word "about" in that sentence; otherwise we're going to need a whole box of tissues.


I still find the same situations unacceptable.


Wasn't it just an extension of operating a typewriter?


Glass is cool as shit though. I think it's a step in learning what boundaries we can push. I might even wear it if I almost never needed to talk to it.


Voice reco isn't very important now, but it is important to the future of computing. It enables a lot of things in fields like wearable computing that aren't really doable right now, because the HID becomes a microphone instead of a keyboard or other input device, which is much easier to stuff into small awkward places.


Voice recognition is NOT the only thing that could do that, and when something else comes along that is better, THAT will really change things.


Reading through this post, I was put in mind of just the opposite: how so many computers in sci-fi movies use voice interaction. In particular, D.Curtis' point #3 -- looking at something near your face and/or looking at a dark background, maybe while driving -- brings this, dare I say, into focus.


As you note, the problem with voice is social.

And I find that I am frequently alone - and there you don't have to worry about the taboo. I think that voice interfaces have a lot of potential in non-social settings. Personally, I dictate much of my morning email while out walking the dog.


I agree with the idea that talking to yourself is a social taboo. However, there are startups working to solve the data-input problem (Thalmic Labs' Myo being the one I'm most excited about). Combined with Glass, I think the future is bright.


Voice recognition is also useful when it's really cold out and you have gloves on, or when you can't or don't want to be looking at your phone (e.g. because you're navigating a crowded or dangerous place).


Can you retitle this to "Google Glass"? I know the policy is to copy the original title, but "Glass" is too generic.


Given the topicality of the word, I very much doubt that many HN users will be confused.


I was confused. Not a big issue, but a fair request.


Well, he had a long post about choosing the perfect flatware before, I thought this might be about finding the perfect glass to go with it.


I figured it was about Google Glass from "Glass" alone.


The mod should edit the title shortly.


>"Google Now is integrated, and it makes a lot of sense with Glass. The more intelligent Now becomes, the less actual interaction you need to do with the interface."

This is the "big thing". The battery issues will be a minor footnote if the majority of your use of Glass is it providing information as you need it, rather than you fussing with it trying to get the information you need. Google Now allows for this to happen - the better it becomes, the better Glass becomes.


> Google Now allows for this to happen

I've never quite figured out what's up with Google Now—I have it on my phone, and I hear it's supposed to be some sort of (semi) intelligent assistant type of thing, but so far it doesn't seem to do very much at all, much less anything "intelligent."

On my phone, it gives me weather info every day (which is nice), and dutifully tells me how to get to work in the morning and return home at night (thanks, Google Now!) but ... other than that, it basically does nothing.

Am I doing something wrong? Is there some setting I need to set ("intelligence: on") that will make Google Now suddenly start telling me what upcoming movies I might like, or interrupting my dinner to tell me about that great TV show I'm about to miss?


You might need to sign up for the Gmail search field trial to get some of that stuff: https://www.google.com/experimental/gmailfieldtrial/ . You also need search history turned on for it to learn from your searches.


"This trial is only accessible on https://www.google.com in the U.S. in English for @gmail.com addresses (not available on Google Apps accounts)."

There are three points in that sentence that exclude me.


Do you have multiple Google accounts? If you're signed in to one you don't use for things, that will cripple Now, because it won't have all the data it needs to do things. Same problem if you do all your searching and Youtube watching and whatnot while signed out.


I don't know if there is a way for the person next to a Glass wearer to tell whether the device is on or off, but if there isn't, there needs to be one. Otherwise there are privacy issues: the wearer could be recording a video or taking a picture of the person they're talking to without that person knowing.


This is a perfect example of a primary difference between apple and google.

Apple makes sure, before you ever use it, that almost every single thing akin to what Dustin pointed out is a non-issue.

Google gives you an alpha/beta product and wants you to find the flaws for them, and maybe help shape it.

Using bread as an analogy, Apple gives you a beautiful tasty loaf ready to eat; Google gives you some dough and tells you to start kneading if you want some bread. With Apple you know you are getting a fantastic loaf and with Google you get to help bake.

Each has their benefits and drawbacks.


I think ice cream shops might make for a better analogy.

Apple is like one that makes the absolute best vanilla and chocolate ice creams you've ever had. And maybe in the back, they're working on an amazing mint-chocolate-chip, but they haven't quite found just the right supplier of peppermint extract, and until then, nobody gets to try it.

Google is like that crazy experimental ice cream shop that has all sorts of weird flavors available at any time, and sometimes they don't quite work out (like "pancakes") but sometimes they're huge successful breakthroughs (like "bourbon and cornflakes"), and they have no way of knowing which is going to be which until the public starts to taste them.


And at times they would, out of the blue, cancel a flavor you really like.


Just like my ice cream shop :(


Not really fair to reference Google here.

After all, the flip side is, "or announce that a flavor that you paid $600 for barely 14 months ago is end-of-life and will not be getting any updates".


Is that really the flip side? With an Android you don't even have to wait 14 months to be guaranteed no updates.


Also, the Apple ice cream shop is staffed by friendly, well dressed servers who are knowledgeable and passionate about ice cream and all of the relevant subfields.

The Google ice cream shop is actually a vending machine.


I don't know about that; both companies actually have really good customer service for paying customers. The difference is that Google also offers free products to the public, which generally don't come with customer service, whereas Apple doesn't really offer free products at all.


That is indeed a very good analogy.


A refinement of that analogy is that Apple gives you one particular type of bread, and you damn well better like exactly that kind of bread. This is mostly okay, because it is really tasty, but if you happen to want sourdough instead, too bad.


Actually, sourdough is the only bread worth eating.


In many of the reviews - the camera is noted as one of the most useful or interesting current features. As unique as the camera perspective offered by Glass may be - virtually none of my favorite photos using traditional cameras were taken at head-level.


I don't think that Glass will take too many beautiful photos. The lack of framing, the uncertainty of the moment of capture, and the compromised size of the optics and sensor all argue against it.

But it gives you the opportunity to take photos that you'd have missed with less available hardware.


Cameras built into electronics have become so cheap that they are being dedicated to specific functions (things like a front Skype/FaceTime camera and a back "take pictures of things" camera on an iOS device).

It does feel like the head mounted camera in Glass is hardware that hasn't found its killer app yet. Yet it is easy to imagine lots of possibilities (auto face recognition so you never forget a name, photo search with commands like: "Identify this leaf" and it takes a picture and tells you the plant name, the input side of a WordLens style app, etc), which I think is exactly why it is early adopters and developers that have them now.


Google Glass seems like a solution looking for a problem. Impressive tech, but what is the use case that makes up for awkward controls and two-hour battery life? You can strap your smartphone to your wrist for $30 and get hands-free operation.

I am much more excited about self-driving cars. Maybe Google could transfer some of the tech to heads-up displays for drivers.


Could you somehow hook it into Ingress? That could be interesting.


> 15. If Glass is “on” and anyone near you says “OK Glass,” they can control what you see, take a picture, etc.

That's what I thought, though oddly, I haven't seen this obvious fact mentioned anywhere else. It can be fixed in the future, I'm sure, but in the short term it might cause some problems:

"OK Glass. Tweet 'I'm an idiot'." - someone shouting in the subway


If it turns out to be a big enough problem, Google could probably add a way for people to run Glass off of a throat mike [0], simultaneously solving the "crazy bluetooth headset person" and "hostile takeover" problems.

0: http://en.wikipedia.org/wiki/Laryngophone


Given that there's a bone conduction speaker, I rather assumed that Glass was going to pick up on bone-conducted vibrations to make sure the wearer is the one speaking.


"OK Glass. Google goatse."


R.I.P Goatse


A bigger problem with that: two people using Glass within voice range of each other will step on each other's commands.


You might want to check which of your eyes is dominant. At Google I/O they mentioned that if Glass shows on your dominant eye that it might be a safety issue - but if it's on your non-dominant eye, it'll be more of an overlay and less of an obstruction to your vision. They're working on a mirrored version that goes on the other eye.


Interesting that Sergey Brin called Glass basically done when something as major as a mirrored version is in the works. That doesn't bode well for the project.


A mirrored version is relatively trivial. It does not introduce any new technical challenges. From a technical point of view, the hardware is done and that's what google[x] focuses on.


Oddly enough, the thing that's disappointed me the most about Glass so far is that there's no real SDK for it that I can find. There's something called "Glassware" that lets you build views in HTML and throw them up on the screen, but that's it.

Am I just missing something? I suppose Google could be mirroring what Apple did with the first iPhone and letting the tech stack settle down before opening things up to third-party development.

(C'mon, who doesn't want to make a game where you turn your head to aim and say "pew pew pew" to fire?)


At I/O they said the SDK is on the way.


The big question is: Can I trade in my Segway for it?


Only if you also offer a hypercolor t-shirt and some crocs as loot.


Google needs to make the camera very good in low-light and they desperately need to add Optical Image Stabilization, like we've seen in some of the latest phones. Watching some of the Glass videos makes me sick, because people move their heads even more than they move their hands when recording with their phones. So OIS is that much more important. Even if it makes the product cost $50 more at retail, I think it's worth it.

I'm not sure what they will do about battery life. Maybe if they'll use more efficient chips like Cortex A7 or Cortex A53 instead of Cortex A9, battery life would increase. But I think the main reason is the small battery, and the reason they can't increase it much more is because they don't want it to be heavy. So they can only really try to optimize the device with a more efficient chip, more efficient voice recognition, more efficient video recording, and so on.


Or big.LITTLE, which would allow selecting which ARM core to use based on current needs, switching to the more power-efficient core when the device is idle.


> 3. When you look at the screen, your eyes have to focus on something extremely close to your face, which leaves everything else in your field of vision totally blurred.

Not true! Completely wrong! You don't have to focus on the glass device itself. I doubt anyone here even can. It would hurt very soon and you would have an extreme cross-eyed look when trying to see the display.

The display is set up for a focal plane somewhere in front of your eyes. Google mentioned it is equivalent to a 25" screen at 8 feet distance.
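For intuition, the quoted "25-inch screen at 8 feet" figure works out to roughly a 15-degree diagonal field of view; what matters to the eye is the angle subtended, not the physical distance to the prism. A quick back-of-the-envelope check:

```python
import math

def angular_size_deg(size_inches, distance_feet):
    """Angle (in degrees) subtended by an object of the given
    size viewed from the given distance."""
    distance_inches = distance_feet * 12
    return math.degrees(2 * math.atan((size_inches / 2) / distance_inches))

# Google's figure: a 25" diagonal seen from 8 feet.
diagonal = angular_size_deg(25, 8)  # ~14.8 degrees
```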


If the issues with having to look at dark areas are common and persistent in the consumer release, it'll kill mainstream adoption. A geek might dislike having to do it, but they'll do it, but explaining to most people "oh sure, you just look at a shadow and hey presto" isn't going to convince them.

I'm cautiously optimistic about the product; I just hope that Google can nail these issues before they release it, otherwise wearable tech might suffer a bit of a setback in terms of adoption.


There is a sunglasses extension. Put that on and the issue he described disappears; you can see the screen in direct sunlight. This is the problem with someone giving such a detailed review when they don't fully understand the product features (the sunglasses extension is a feature).


So there's an additional widget required to use it in direct sunlight: you need to have the sunglasses extension for it to work correctly. I understand there's a technical reason for this, but that's not a solution that's going to work for most consumers.


Most people wear sunglasses in the sun.


I think it is mildly ridiculous how much attention Glass receives. The concept behind the product seems vaguely OK, but judging from all the initial reviews this looks like yet another failed Google experiment: poor, buggy software and bulky hardware that looks like a joke. I'd be embarrassed to wear it. I think people really need to stop spinning the whole Google Glass topic; it doesn't deserve the attention.


1) give your product to every tech journo and blogger

2) ???

3) coverage!


Basic question - is the little screen focused at infinity?


It is several feet out at least. I certainly don't have to focus on it as I would if I held my finger in the same place. Heck, if I'm not wearing contacts I need my distance glasses in front to see it. I don't think Dustin got much time with it; he has a lot of mistakes in his review.


Google describes it as equivalent of a 25" screen at 8 feet distance. I haven't made any exact measurements, but I wouldn't be surprised if the focal plane is in fact 8 feet.

You certainly don't focus at the glass cuboid right in front of your eyes.


Clearly not, if this article is to be believed.

Yes, that's a problem!


Don't believe this article. You do not focus on the glass cuboid and I doubt you even can. Just hold your finger up close and try to focus on it. You probably can't at this distance.


Ah yes, not sure how I missed point 3. Not sure how easy it would be to have some kind of focus adjustment, so you could choose near/far.


This reminds me a lot of PDAs, etc, 10 years ago. Not sure how long until we solve the ubicomp issues of something like glass, but I'm bullish we will.


I suspect that one day we will have to make a step away from general purpose computing. Not every device has to be a little PC.

PCs, due to their general purpose nature, cannot be shaped to fit any one situation perfectly. They also require a lot of interaction to tell them which particular function they're supposed to perform in a particular moment.

I'm sure there will still be wearable PCs, but it's going to be transitional.


Sorry, but this sounds like a pre-alpha gizmo.

And not of the "beta coming in Fall" kind, the check-back-in-ten-years kind.

If, say, Apple were also presenting such pre-demo quality stuff, they would have flooded the media with BS. E.g., they could have shown their iPad prototypes in 2004, in a similarly crappy form.

Google only presents this half-baked shit to show us it "innovates". I'd rather know if they can deliver.


Google and Apple are different. They have different cultures, but also quite different businesses. If someone else takes the Google Glass as it is now and makes a more attractive product out of it, Google still wins, as they make their money from usage, not hardware sales. Therefore it often makes sense for Google to throw half-baked products out there and get help from others to gain traction, Android being the shining example.


>Therefore it often makes sense for Google to throw half-baked products out there and get help from others to gain traction, Android being the shining example.

I'm not sure Android is a good example of "Google making their money from usage".

For one, have the Android development costs even been recouped, including the Motorola buyout?

Second, the more successful companies (Amazon with the Kindle, and Samsung, which is 95% of all Android sales IIRC) either already have locked Google out of it or are in the process of doing so. The Fire is already doing it, and Samsung is investigating its own path there too.

And if they could replace the web search with something else, they probably would, but there's nothing much at the moment, which is why even iOS uses Google as the default.


Is the image from glass projected into your eye or onto a screen? If it's projected directly into your eye then wouldn't it possible to set the focal distance to be roughly the same as where you're already looking?

NB: given I have never heard of this happening, I suspect there's a hole in my knowledge of optics, but it would be awesome if it were possible.


There are some "virtual retinal displays", but they're mostly vapourware rather than actively developed and sold devices. They use a laser beam into the eye.

(http://www.hitl.washington.edu/projects/vrd/)

(http://ascentlookout.atos.net/en-us/enabling_information_tec...)

(http://eclecti.cc/hardware/blinded-by-the-light-diy-retinal-...)

That last link is a DIY (terrifying) project. Your eyes, your choice. Retinas are delicate.

EDIT: One company doing stuff is "Microvision" - (http://www.microvision.com/index2.html) but they've changed to pico-projectors.


Glass uses a small screen.


To be fair, it is in alpha, so this is the kind of feedback they are looking for.


> Nowhere in nature does moving your head also move the thing you want to look at

It's not nature, but everyone who already wears glasses knows what this is like. It's just part of my software by now.


I'm sorry if I'm being a bit dense, but I don't understand your comment. :(

If I physically move my head, nothing I am looking at is moving with me, i.e. the computer screen does not move to where I look.

The point is that you look through glasses, yet the Google Glass screen makes you focus on it, not through it, and hence it moves with your general head movement.

Or have I misunderstood what you said?


If you want to look at your glasses themselves, bespectacled people learn to flick their eyes, not their whole head.


> If you want to look at your glasses themselves, bespectacled people learn to flick their eyes

I still fail to see the point. Maybe because I never try to look at my own glasses...

edit: When I want to look over the rim of my glasses, I still move my head while looking up with my eyes, otherwise I wouldn't be able to look at what I was looking at before.


What problem does Google Glass solve? Is it the best way to solve that problem?

Would Google have been smarter to not include a camera, at least initially?

Is the tech there to make this a viable product yet?


Anyone can take control of your Glass? Wow, this needs fixing. I guess this will become annoying for owners as people discover this kind of trolling.


"Before you even turn it on, Glass feels like something from the future that is worth at least $1,000."

taken from Google PR?


I can totally see myself using something like Glass while working, but in my private life I believe a smartphone + maybe a smartwatch is more than enough.


I guess that's the approach that many people are going to have. Glass (or something similar) might be a worthy addition for lots of professionals.


The camera takes the photo when you tell it to. I'm not sure where the author gets the idea that Google Glass can somehow go back in time to take a photo...


Dustin Curtis does not even own Glass; he is using a friend's unit.

He just wants readership/attention/validation, hence the opinionated and slightly snarky blog post.


This comment adds zero value to any meaningful discussion. Just because he doesn't own it himself doesn't mean he is incapable of a thoughtful analysis of his experience with the product. Keep your jealousy to yourself.


Actually, the author got quite a few key facts wrong, so in this case I think not owning the device does factor into the equation. For example, he complains about not being able to use them in direct sunlight; Google provides a sunglass add-on to all Explorers, which is a critical component for using Glass in direct sunlight. Sergey Brin himself told me as much the day I picked them up. He got a few other things wrong as well. All in all, he has a lot of valid points I agree with, but with some false information mixed in, it could end up leading to a lot of folks further misunderstanding this device.


So basically if I want to have an opinion about a product, I must have purchased it?

Given confirmation bias, that'd basically skew all reviews about products to the positive - hardly useful or objective.


His review was pretty off, though. I've worn mine for a month and was only asked to take it off once, at a company with a bunch of private meetings going on. Meanwhile he claims the most important flaw is that it is not socially acceptable. You actually get treated like a minor celebrity for wearing one, with people on the street often asking about it: how much it costs, how to get one, if they can try it, if they can get a picture of themselves wearing it, etc. No one has batted an eye at me giving it voice commands either; they are all used to people with Bluetooth headsets.

I've never had any trouble using it in sunlight myself either. I half suspect he just saw the screen was a little faded and went, "woot, another complaint I can make," instead of actually using it. A lot of his other issues are similar. Other people voice-controlling it is almost never an issue, since you are the one who taps it or looks up right beforehand; it isn't on all the time. He doesn't know the shortcuts to skip to Googling or skip to the menu either.


OK GLASS FORMAT C:


If you think Glass is the end product that matters, you're not paying any attention. Tim Cook wants to start shit-talking Glass and talking about how they're going to bring out watches? Cool. So are three other Android manufacturers.

Who is the only person with the AI and data sets to power things like Google Now and really make form factors like Glass or Watch practical? You don't hear anyone talking about that. They're talking about SDKs or users or app stores or whether or not I can set my own default browser on my phone.

Google Now is the big deal here that everyone glazes over when talking about Glass. They're already putting it into Google's web interface (for the desktop, yeah), Chrome (though it makes more sense in Google.com rather than Chrome to me), Glass and I'm sure more and more services will continue to feed into it. They announce more every few weeks.


I wouldn't exactly say Tim Cook's "shit-talking" it. His quote from D11 on Glass is:

"There are some positives in the product. It's probably likely to appeal to certain vertical markets. The likelihood that it has broad appeals is hard to see."

I think that's actually a fairly well-balanced statement. He also says this about glasses and watches:

"Nothing that's going to convince a kid that's never worn glasses or a band or a watch or whatever to wear one. At least I haven't seen it. So there's lots of things to solve in this space."

This isn't saying that Glass will never work or that it's a bad idea, just that as it currently stands it needs a lot of work to have mass appeal, which is probably true. I, for one, am looking forward to seeing what Glass becomes in two or three iterations' time.


Look, I'm not a fashionable guy, but come on:

"Nothing that's going to convince a kid that's never worn glasses or a band or a watch or whatever to wear one."

I'm kind of embarrassed by Tim's comments. He won't even make a non-pessimistic comment about watches, which, I guess everyone could be wrong, but it seems they'll introduce one soon; it's as if he's already pushing the notion "the Apple iWatch is magical and people want to wear it, not the three competing Android watches that better integrate with your phone and have more features".

Who knows, it's taken HTC years (and losing tons of market share) to design fashionable looking hardware. Maybe Apple will win purely on design and Tim's comments will be vindicated. Either way, I hope he's not putting all his eggs in that iPhone basket, unless iOS7 just prints money.


> with the AI and data sets to power things like Google Now

Microsoft, Amazon, IBM and Yahoo, to name but a few.

And I've used Google Now and I fail to see what makes it so incredibly advanced that no one else could possibly replicate it. Maybe I am missing something but it doesn't seem to hold a candle to say Watson.


Maybe you're right, but Google has the personal data from me that they don't, across a variety of services that allow for integration and knowledge about me that is hard for me to really wrap my head around.

I've used this example before, but Google Now already recommends destinations based on what I Google Maps at my desk before I head out. It also knows what is on my "grab when I go by the grocery store" list before I quickly use Google Now and say "Google: note to self, pick up milk" and it's in there.

[soon] So now, when Google's autonomous car is driving me home and I'm watching the road attentively, Google Now pops up in Glass and offers to redirect the car to the grocery store that least diverts me from my route (or the one that has the best deals listed with Google Shopping, or that has the best coupons, or that Google Wallet has my rewards card for). And it reminds me to grab my milk. [/not yet]

And then on the way out of the store it again offers directions back to my house.

This is all there (except the Keep intuition bit that I sincerely hope is coming and will really prove Google Now). (And sadly, and I'm whining, but I didn't get into the super special Glass program :[)

Sorry, I've rambled, back on point: I don't doubt that those you list have the power and smart people to build the tech, but I don't think they have the data. I think Apple Maps was but a small example of the advantages Google has in areas of data.


Thanks for the downvotes, sorry I said anything bad about poor ole Tim Cook. Yeesh.


There's a problem with Glass, as far as I can tell.

What Google is hoping for is to tease us into wanting something that no one has asked for.

Google is not alone in this historically.

Let's step back a decade, when we were all told that Microsoft (by Microsoft) had the biggest R&D budget in the world.

What products came out of that R&D? ZERO.

Apple on the other hand spent years developing the iPhone with iPod profits and never let on what they were going to reveal. What happened? A mobile revolution.

Google is telegraphing R&D to try to leapfrog Apple in the public mind while having no viable consumer product in the pipeline, much less on retail shelves. That's tantamount to product suicide.

I think Glass, or something like it, could be important, but I think that Google's approach of pushing alphaware in front of the masses will, if Glass really is viable, force Google to play catch-up to Apple or someone else once again.

Google has never, not once, in its history, had a successful launch of a hardware product.


"Google has never, not once, in its history, had a successful launch of a hardware product"

Sure, Apple brought us the iPad Mini because the Nexus 7 failed so hard. The Nexus 4 and 10 are also known as big failures that customers hate.


You are correct: no Nexus device, from the time it was released to today, has sold as much as the iPad or iPhone sells in one day.

Google has never had a successful launch of a hardware product, not even once.


Supposedly Apple sold 23M iPads in Q1 2013, or 250K/day [1] but Google sold 4.6M Nexus 7s in 2012 [2].

[1] http://techcrunch.com/2013/01/23/apple13q1-iphone-ipad-ipod/

[2] http://techcrunch.com/2013/02/19/analyst-estimates-peg-total...
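A back-of-the-envelope check on the figures quoted above (taking Q1 as roughly 90 days):

```python
# Figures as quoted in the linked TechCrunch articles.
ipads_q1_2013 = 23_000_000   # reported iPad sales, Q1 2013
nexus7_2012 = 4_600_000      # analyst estimate of Nexus 7 sales, all of 2012

per_day = ipads_q1_2013 / 90            # ~255,000 iPads per day
days_to_match = nexus7_2012 / per_day   # ~18 days of iPad sales
```

So on those numbers, roughly 18 days of iPad sales matched a full year of Nexus 7 sales, which supports the "not in the same league" point without making the Nexus 7 a failure in absolute terms.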


So what? Has it ever been their purpose to sell as many Nexus devices as their competitors, thereby risking alienating their partners (Samsung, HTC, LG and so on)?

It was nevertheless a successful hardware product because users and reviewers loved the Nexus devices and they sold everything they produced.


So, you joined this community nine days ago and thought a great way to begin would be with a ridiculous and absurd lie about Nexus sales?

Regardless, it's hardly clear why a community of people intent on founding new businesses would consider a history of successful product launches all that important.


"There's a problem with glass. As far as I can tell. What Google is hoping for is to tease us into wanting something, that no one has asked for."

Uhh, what, huh?

You mean to tell me all those movies, cartoons, thousands of sci fi books where people explore a future with such objects, all these people exploring and creating, or describing what a future might be like...

... are not actually asking for it at all.

Instead, they're just waiting for what Apple will tell them they wanted all along?

Sorry, even as someone who has an MBP, iPhone, iPad, I find this perspective hilarious and borderline offensive.


> What products came out of that R&D? ZERO

The point of research isn't products (you can ask famous labs like PARC and Bell if you don't believe me), but MSR has done a bunch of stuff: http://en.wikipedia.org/wiki/Microsoft_Research



