Check out the A/B test of a technician with / without the software referenced in the article:
Then here's a video that gives a sense of the software's interface:
Very workflow oriented with nice communication and lookup features.
This is the kind of small optimization stuff that is going to be revolutionary to driving macro productivity.
And as far as the software goes, there's a lot more on display than just Glass. For instance, there are at least two cameras in this screenshot that are just magically implied to exist - one taking high-quality video of the supervisor, and one taking high-quality video of the employee: http://i.imgur.com/qLjCoou.png
I've never worked on planes, but quite a bit on cars and semi-trucks. You can get yourself into some tight spots. There may not be room to bring a manual with you, let alone turn your head to look at it. The closest I've had to glasses is bringing a smartphone to look at a manual or photos - but having it right on your head, hands-free, would be a huge help.
Pretty lame comparison.
I do like sticking my smartphone into holes to take photos I can't otherwise get.
Edit: this is for personal automotive work
The worker presumably has to do the same work many times, so he isn't doing the work in the video for the first time regardless; it's something he does regularly. They could do the work with Glass a couple of times and compare it with non-Glass work times. I think this (as in the video) is the best way to A/B test it.
I'd imagine most of the gain comes from companies with a high number of ESL workers - but it sounds like it may be a tool that a bureaucratic, control obsessed, management culture likes but is largely just busy work parading as productivity gains.
It could also make technical manual writers lazy, or lead them to skimp on editing, because they lean on the controlled language instead of investing real thought into making their communication effective and easy to understand.
As machine translation of English continues to improve, I'm curious how useful it will be. And the feature of "reducing ambiguity" (according to Wikipedia) is something that can be solved in many different ways without having to invent a whole new simplified language subset for all communication.
Either way, that's an interesting example of how seriously they take this stuff.
I don't work in aviation, but you can produce much better documentation by following at least the spirit of this vs. the specific grammars and vocabulary.
Engineers and IT people are often not very strong in writing ability, and there are many non-native speakers in technology. I've seen situations where "cute" documentation, full of TV references (infrastructure placement was encoded as Disney vs. Looney Tunes characters) and lots of implied context, caused real confusion. Having a style guide that forces simplicity can add a lot of value.
But the bay pictured looks more like a mechanic's shop than an assembly line. You can't pull a car into a vehicle bay and refer to a big picture of "how to fix car" on the wall - the information is in dozens of 3-ring-binders, vehicle repair documentation, and on the computer. On that stand and for that jet engine, there's far too much information to put it on the wall.
Seems like if the mechanic just had to turn around to access the ring-binder, instead of walking down a flight of stairs, the efficiency difference would be less.
Or maybe it's as simple as them having determined in the past that making people move to shift tasks makes it less likely that they will attempt multiple at once from memory, and mess it up. Sometimes what looks like an inefficient process is actually serving another need you haven't considered, and is more efficient in the long run. Having technicians document each task as done and take a picture to confirm it might yield the same benefits, without the forced short-term inefficiency.
Most of those infomercial products are actually designed for the disabled. I don't bring that up just because it's a really important lesson (which I think it is), but it also has a parallel here. Minor issues for people in one situation may be huge encumbrances to someone else. What's more likely is products have their audience exaggerated and widened, but very often a real problem is being solved. It just may not be a problem all or even most of us have.
Sure. I'm not trying to suggest it definitely is an enhancement to the work process, just that it may be one even given the GP's sentiment of "Seems like if the mechanic just had to turn around to access the ring-binder, instead of walking down a flight of stairs, the efficiency difference would be less," by providing additional information that may not have been considered.
Rinse and repeat through a few different business areas in my career, and you can't help but learn a little humility and come to respect the power of truly understanding the problem space before embarking on a project of any magnitude. A powerful lesson, but unfortunately easy to forget.
That said, one of the replies to my comment (subsequently deleted) was from an HN regular who said "Kaizen principles tell us that there are many optimisations that are known to the people doing the job, but not to the people who have the power to implement them." I wasn't familiar with the term, but I am familiar with what it refers to, and my bet is you'll find good information branches along that path.
I don't think it's overcomplicating an issue to look at a situation that appears to be sub-optimal and search for reasons why apparently simple solutions might not actually yield the benefits you would assume, because in real life they've often been tried, and they don't. It could be that Google is purposefully presenting an unrealistic scenario, or that this large multi-billion-dollar company hasn't bothered to optimize this integral process in this simple way; but when I see what appears to be an easily fixed problem in an unfamiliar area of expertise, I prefer not to immediately assume that everyone else who looked at the same scene wasn't capable of seeing what appears obvious to me.
You're right -- you're not overcomplicating things so much as trying to give this video the benefit of the doubt. But it looks to me like the benefit of the glass is greatly exaggerated by having the binder with instructions be located off the platform. Obviously glass is a superior solution here, but not as much as presented IMO.
To be able to check the torque spec on a bolt while getting it started and tightening it up would be so nice.
Yes, the example video about the time going up and down the ladder is stupid - but I think tech like this has a huge future in shops and I'm sure a bunch of other work places.
One of the most useful smartwatch apps I got was a metronome app, simply because it's hands-free. I wouldn't spring for consumer Glass (simply because it was an order of magnitude more expensive than a smartwatch), but it's even more useful for musicians. Imagine never needing to do a page-turn again!
Smartwatch metronomes are also great in that they can tap you on the wrist instead of making an audible click. It's sometimes hard to hear the click if you're playing loud, and it limits their use to practice since you don't want anyone else hearing the click.
The alternative with Glass is, read out loud the reference you have in your hand and it tells you where to plug it right where you are, hands-free.
Point being that Google isn't quite breaking new ground by selling augmented reality to businesses. Rather, it seems that they're trying to make the most out of their Glass product now that it has tanked as a consumer product.
It seems that for low-knowledge/low-skill tasks the gains would be smaller (because there is likely less need for the operator to switch their information context during the tasks).
Tightening a screw or riveting a steel frame for example require little knowledge, but on the other hand they're easily automated. Or rather, more easily automated than reassembling and disassembling an engine.
To be fair, there might be some more productivity gains lurking around the corner if augmented reality allows a tighter integration of each process in the factory line.
Any time lost involving searching anything in the factory floor could be reduced with augmented reality by peppering the world with "quest markers".
Besides the 10% gain on searching which parts of the engine you're supposed to either screw in or unscrew out, there might be a gain when searching your wrench, when searching Joe, when trying to find out where exactly the new parts are and so on.
Later there's running and screaming...
Consumer tech may be where the glamour and scale are, but it's not always the best market entry point.
Needless to say, I didn't get the job. It was really funny to watch Glass (which is a cool technology) totally fail because it's a stupid consumer product. And, I'm no Steve Jobs. It was absolutely predictable that Glass would never work as a consumer product.
And, if you want to make it work for manufacturing, I think you'll need to scale up the form factor. Why wouldn't you make it a safety goggle too? Think about all of the engineering effort that went into shrinking this into a small, completely impractical, package.
Anyway, I think that this was not only predictable but it also shows a company culture where people can't tell the emperor that he's naked. It's not like I was the only person that took one look at glass and knew you couldn't leave the house wearing one. But, that message was pretty actively suppressed at Google.
That said, this is a company that has massively succeeded with other bets that were "absolutely predictable they would never work." So just because something looks questionable doesn't mean it gets shot down.
(Now, IMO, Glass was a mistake - but for entirely different reasons than you describe. An elite fashion product cuts against Google's reputation as something for everybody, even IF it had succeeded.)
Sometimes it's the pressure "if you can't sell them in millions, we don't want to do it at Google". This means Google is a great company to make acquisitions that they can scale e.g docs, maps, YouTube, Android.
Google has more money than they know what to do with. They can't experiment like a startup because failed products hurt their brand.
Their best bet is to invest in startups that use their stack and if successful, acquire and scale the shit out of them.
This is a good point. I bet the team that made glass looks at what a technical achievement Glass is and rate the project as a complete success. I don't know how many millions they spent to get the form factor so small but I really would be surprised if anyone caught any flack for it.
Unless you believe that he's an omniscient psychic who somehow has insider knowledge of his hiring process.
"We have this cool thing, but we're not sure where it's most applicable. Let's just put it out in the world and see what people do with it!"
Which is what they did. The data they gained from that experiment no doubt led to this.
It's easy to forget how hard Google pushed Glass to consumers. They parachuted people out of a helicopter onto the roof of the Moscone Center wearing them. They invested a vast amount of money building out massive floating barges to use as showrooms, which they quietly mothballed and sold. They allowed Robert Scoble to take a picture of himself in the shower with one (some might say this was the worst crime of all).
If Google wanted to "just put it out in the world" they wouldn't have invested so much money in their consumer push. Now that I think about it, that Google I/O in 2012 was a bit of a disaster all around for consumer hardware, because it had both the Glass and the Nexus Q. At least Glass actually shipped.
Robert Scoble is not really a consumer tech reporter either. He's more like a futurist. Most of the things he likes to talk about are things you can't buy.
Maybe because you were marketed to, you think it was consumer marketing; but consider that you were being marketed to as a developer, not a consumer.
They tried going lux consumer and failed. But if they went the manufacturing route first, they would've burned any chance of the other.
They could've used the manufacturing side to enhance the tech, get the components to a cheaper price point, maybe even slim down the whole thing and then when they felt ready to enter the consumer market, spin it off as something new under another consumer-only brand.
As they say, you can spot the pioneers because they're the ones with the arrows sticking out of their backs.
- People were initially apprehensive of the high price, but a small dedicated fan base bought it up
- Shortly after launch, Google drops the price by $200
- People deride its feature set, commenting on the things it "should" have included
- Despite that, the interface is very good
- A year later, the next version is released, fixing the problems and adding the missing features people complained about
- The item becomes a must-have tech, front page of Wired articles are written about it, etc, etc
Apple spent many years creating the iPhone. First they created a tablet computer; Jobs rejected it as not ready, not good enough. Then they built a phone based on the iPod's touch wheel; Jobs rejected it as not good enough. He let the tablet team try to build a phone and, after much work and improvement, finally shipped it because it actually worked well for consumers and could do many things consumers wanted (phone, texts, music, web, etc.).
Google got Glass working and said: what will consumers use it for? We don't know, so let's dump prototypes on developers and have them figure it out!
Eventually some company, probably Apple, will build something like Glass that consumers will want, but it won't be Google. They don't get consumers (except for high functioning types) and their product development/approval process is a mess.
I always thought the privacy concerns were overblown:
If it's about users surreptitiously recording others, wearing a really conspicuous gadget known to contain a camera on your head is a really ineffectual way to do that. A smartphone in a shirt pocket would be better, and surveillance devices designed to be concealed would be better (and cheaper) still.
If it's about Google or app makers getting recordings, that doesn't seem nearly as bad as the various always-listening voice recognition tech in popular use today. Perhaps it might if people were wearing them 24/7, but they don't have the battery life for that.
One might argue that people should get used to that, and at some point they probably will, but it turns out it's somewhat more difficult to adapt people to your product than vice versa.
The former would make a lot of sense to me. That it has a camera is obvious, and knowing how the light works requires a modicum of research. The latter, not so much, as we're back in the realm of secret recording.
I do agree that ubiquitous smartphones are just as bad.
You've put the cart before the horse. The price point is not fixed. You don't make something and then look around for a convenient market. Google glass was also designed backwards, they maximized screen quality and watched it fail in the market due to high price and shit battery life when what they should have done is designed up from a 12+ hour battery life.
I think they saw the difference between Apple and Blackberry and believed that penetrating the consumer market first would drive demand within the business market.
It’s like carrying one 12-pack of beer rather than two.
Design requires fitting form to function, and that often means you add non-functional, i.e. "useless", parts to ensure the interface between human and device is ideal. Aesthetic appeal is a huge part of that; one could argue any component put on a device strictly for aesthetic appeal is "useless". Yet, in many cases, this "useless" component is strictly necessary for a successful design (i.e., one that people actually use).
For something you wear on your face, Glass screams out that it was under-designed. Humans are hardwired to prefer symmetric human faces, studies have shown this. So why would you design a device you wear on your face to be asymmetric? Seems obvious.
People walked into bars and clubs with these. Then bouncers kicked them out.
If people use your product and they are labelled assholes, clearly you have not done market research.
Apple would have made it aesthetic as hell before a rumour even got out.
My primary care doctor has a human scribe. The scribe is a recent graduate (BS), planning on going to med school next year. Being physically in the room, watching the doctor work is a great benefit to her. I'm not sure she'd benefit as much from watching a live stream.
Additionally, as a patient I wouldn't be comfortable being recorded.
I mean, who's to say any given doctor's office doesn't have a hidden camera/microphone somewhere anyway.
If you're having paranoid delusions, visiting a doctor's office may be a good start, despite the small possibility of hidden cameras and microphones.
Well, I say that, but thinking long term, it does kind of seem inevitable. Somehow we're going to have to come to terms with having every embarrassing moment of our lives outside of our home being recorded. Unless the UN comes up with an effective digital bill of rights this is an espionage disaster waiting to happen.
I do share your concern about being recorded as a patient though.
As an aside, is this a common thing? I had an appointment to get a tdap booster a few weeks ago and spent a good 30 mins. chatting with my doctor.
Here at least (UK), yes. All of the practices I've visited have strict 10-minute appointments and the doctor will cut you off at 10 minutes. If I go in for a flu jab (I'm asthmatic and at high risk), I expect to be in and out in about 2 minutes, and I don't see the doctor; it's normally a practice nurse that does it. You then get sent back to the waiting room and told to sit tight for 10 minutes in case you feel weak, but that's it.
For consultants in hospitals it's different, though.
However it is perhaps worth noting that the practice charges a $250/year "concierge fee" and from what I've read that's a very low fee comparable to other offices that use that model. Perhaps I would be less likely to go there if I had a high premium to pay in addition.
But even with previous employers and plans across the US, I can't recall a doctor who has ever rushed me out of appointments.
I do have private health cover but it doesn't help much if I have a chest infection. If I need an MRI or to see a consultant, I can effectively skip the waiting lists and go to a private hospital, but if I have a heart attack or any immediate emergency, I'll be going to an NHS hospital regardless.
It's always going to vary wildly based on what you need to discuss - a shot takes a minute or two of lead up, being diagnosed with a chronic ailment may take 3+ visits with 30 minutes of discussion each time.
Either way, I'm not comfortable with my doctor wearing a wire into the room. It's hard enough to trust that the notes they take in confidence will stay confidential without throwing faulty computer security into the mix; yes, I am already wary of electronic medical records for exactly the same reasons.
You can outsource the data entry parts of the job to someone in a centralized location, who can be assisted by voice-recognition on an initial pass. This saves time for the doctor and patient, and lets the doctor focus on the patient rather than data entry.
Incidentally, you might not currently be recorded on video in the doctor's office, but there is already a very good chance your doctor is dictating notes describing your medical history using scribes in India or elsewhere.
As a patient I would be quite happy for doctors and nurses to wear body cameras if it was part of a systematic approach to eliminating errors (similar to the way airplane cockpit voice and data recorders have been instrumental in reducing plane crashes).
Also sent to google, so then when you get home you get relevant ads for your medical issue. Yay!
So when you go to another hospital, your records should be unavailable?
More data = better outcomes.
Just because you're fine with it doesn't mean there aren't people with sensitive medical conditions who would rather not have a video recording of themselves being examined.
The article makes it sound like Google glass is the first to do anything like this, and it was all paper manuals before that. In fact, aircraft manufacturers have been using smart glasses for years to augment workers.
Maybe Glass is a significant improvement, but it's not unprecedented.
From a pretty good NPR article from a while back.
Obviously the AI part isn't there, but we now have a fabulous interface for having complex tasks aided/guided by AI. Combine this with what's already going on in Amazon warehouses and we are pretty close to the description of how fast food restaurants are run in the story.
And that also means you need 30% fewer employees to manage the same workload. That's going to be the trade-off here. How many people will have to go just to offset the hardware and software costs?
I don't know what I'm arguing here... I'm finding it hard to avoid quoting Ian Malcolm in the context but I think we have to remember there are definite downsides to treating people like underutilised machinery.
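The cost trade-off in the comment above can be sketched with rough numbers. This is a toy break-even calculation; every figure (hardware cost, software subscription, salary) is an assumption for illustration, not from the article.

```python
# Rough break-even sketch: how many positions would have to be cut
# just to offset hardware and software costs? All numbers assumed.

def breakeven_headcount(hardware_cost, software_cost_per_year,
                        fully_loaded_salary, years=1):
    """Positions that must be eliminated to pay for the system over `years`."""
    total_cost = hardware_cost + software_cost_per_year * years
    total_salary = fully_loaded_salary * years
    # Ceiling division: you can't cut a fraction of a person.
    return -(-total_cost // total_salary)

# Say: $50k of headsets, $20k/yr of software, $70k fully loaded salary.
print(breakeven_headcount(50_000, 20_000, 70_000))  # 1 position in year one
```

The point of the sketch is only that the "30% fewer employees" framing and the "offset the costs" framing are different questions: the former scales with headcount, the latter is a fixed cost that amortizes over time.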
I don't buy this argument, honestly. By extension, you're saying we should go back to manual book-keeping instead of computers to do accounting, because it would employ more accountants, and let them work at a more relaxed pace.
Looking up documentation, for example, is not an intellectually demanding task. Instead of going back and forth between printed reference, being distracted, forgetting what you read and having to re-check it, this helps reduce the feedback loop and that makes work more intellectually engaging and interesting because the worker can focus on things that actually require thought, rather than menial tasks.
People will still take breaks and slack off; it's human nature, and they should be able to. If you demand continued unbroken attention, I expect you will end up with people who make a lot more mistakes (citation needed).
I am much more worried about automated performance metrics and gamification of work, those do offer levers for the employers to push the employees beyond sane limits.
To put it briefly, the utopian "we get to go home and see our kids on time" world Augmedix is selling is just the sort of stuff that gets bought by the NHS to make GPs handle double their workload.
There supposedly are protections in place (contracted and EUWTD) to stop doctors and nurses working without protected breaks, but you find me a single competent (eg) med-reg who manages to regularly take theirs.
There is a ton of quite low-hanging fruit... But half of it is a poison in some professions and most of that relies on the reason it was bought.
Really, chasing this thought process can get pretty philosophical, especially when you consider the widespread skills we have already lost because we outsource and automate. They are things to consider too but immediately I'd worry about the people being told their workload is doubling because they've got a fancy gadget now.
The development of advanced calculators (computers) removed the need for rooms of hundreds of engineers fiddling with slide rules.
Microsoft Excel increased accountant efficiency by ungodly amounts, which makes their lives... harder... because they have less time to think and physically rest?
Yea I hate to say it but I have no idea what you're arguing either.
God forbid. Didn't we learn our lesson from the plow, the cotton gin, or the combine? All those jobs needlessly lost. If we don't learn from history, we're doomed to repeat it.
A huge percentage of business software is built for this exact reason though.
I honestly thought Glass would have done better if it had no recording capabilities built in. It would have substantially reduced the creepiness factor.
It's sad that no one has come in and tried to tackle the heads-up wearable market. Sony has some glasses that looked terrible, and I guess the battery life issues are still too big for many manufacturers to overcome?
Are they completely unmaintained? There was a story here recently how it just got an update for the first time in a few years. I'm not sure if that was a fluke, or a renewed commitment...
Sure. When many thousands of these devices have been produced on a growing economy of scale, when there are multiple options available from various vendors at various price points, when ordinary cellular network speeds and capacities are closer to the wifi in these businesses, when there's education, experience and toolchains for building apps that run on them, and the hardware has been battle-hardened, I'm sure they'll make more sense for consumers.
Computers were first affordable to businesses only - first as mainframes, then as PCs, then again as laptops, and once more as tablets. Consumers didn't buy the first cellular telephones and car phones, businesspeople did. CNC milling machines and rapid prototyping machines were once reserved for specialized, high-technology machine shops, now hobbyists can put a CNC router or 3D printer in their workshop. You used to have to go to a dealer with an expensive computer console when your check engine light came on, now a $10 OBDII reader can read and clear codes for you.
> I guess the battery life issues are still too big for many manufacturers to overcome?
They probably will be a problem for a long time. Unfortunately, battery capacity seems like one of those areas where we're not simply too low on the technology pyramid, it's just a question of raw physics. I would love to see this problem sidestepped by the compromise of making the batteries on these devices easily swappable. Even if they only got 6-8 hours of battery life, I'd happily unclip the battery on the earpiece at lunch or when I get home, swapping in the freshly charged one from my bag. What I don't want is an undersized battery that's glued in and decays down to 60% capacity after 18 months.
Outside of the tech community this is a distinction I imagine few would make - most people upon seeing a camera are going to not unreasonably assume it can record. I still see large numbers of people taping over their laptop webcams even when they are turned off.
I think the simpler explanation is that strapping a camera to your face is simply inappropriate or off-putting in some social settings for many people.
Didn't help that the only people who did/could/would buy the first glass prototypes were nerds who apparently started using them in bars and clubs. That solidified its image as "creepy".
Contrast it with Spectacles, a product designed to record as much as possible. But since it was marketed/targeted towards "cool" people, it never got the creepy trait.
The approach with Google Glass was very different: Google were arguably prototyping a device intended to be worn all the time, including indoors and in scenarios where people would not normally expect to be photographed or filmed.
Good. That isn't a demonstration of technical illiteracy. There have been plenty of vulnerabilities shown in webcams and microphones, and even the indicator LEDs can be disabled remotely.
Like Zuckerberg does for his mic and webcam:
Or direct image link: https://i.imgur.com/OxWY3FV_d.jpg
I can't count the number of times I've had to extract both my arms from inside a machine (in doing so losing track of precisely where I'm holding things, and the bearings that gives you), wipe off all the grease, dust, grime, etc., thumb through a pile of papers that still get dirty, and then mentally translate a 2D drawing to what I'm working on, only to then lose my place and have to work it all out again. Having something voice-controlled and right there in front of my eyes would be invaluable.
Industry really is the perfect environment for this. Safety issues notwithstanding (which you can work through), it's really the best application of this technology and you can quickly quantify a RoI from its implementation.
I was able to snag a Glass for a good price when they killed support (before selling it off again after a couple months). I enjoyed using it, and being on a college campus at the time reduced some of the social awkwardness. I could push notifications to my face with IFTTT and the voice recognition worked reasonably well. Ironically, I found the most useful feature to be the camera. It's liberating to be able to wink and get a snapshot of whatever is in your field of view, whether it's some info you want to remember or a small moment you want to share. I'm on vacation now and find myself fumbling with my phone to take snaps of interesting things I want to share way too often.
I just got back from vacation, and I agree. The whole process of taking pictures with my phone feels so cumbersome and annoying, especially for short lived scenes. It's locked and in my pocket for security and safety because I don't like to walk around holding my phone when I'm trying to enjoy the moment. It feels like I could either have the phone be an appendage and then have fairly easy picture capability, or actually be present in the moment but then it's substandard for taking impromptu pictures.
I wouldn't have as much of a problem with a dedicated device, as I wouldn't necessarily be as worried about breaking or giving someone access to the device that contains details about every aspect of my life.
Alternately, something designed to be an appendage but in an unobtrusive way (to the user, at least) might be just as good or better, as you suggest.
http://spectrum.ieee.org/geek-life/profiles/steve-mann-my-au... (For discussion on Glass design, look for the paragraph starting: "I have mixed feelings")
FWIW it appears Mann is working with a company on a different system for mediated reality:
We used to buy the glasses for $1,500 apiece and had probably 50 pairs of them lying around by early 2015.
The engineering team was great - while I was there it felt like we were flying blind wrt Google’s official support. From a business perspective, I’m not sure product market fit was really ever achieved, though after I left the company expanded its horizons beyond healthcare / telemedicine.
Good luck to Upskill :)
Now, with a full VR headset, it might make sense to be able to place such things outside of the normal field of vision (where your text/code editing resides), but so far I don't think working full time in VR with text is a great idea, with the current generation of headsets.
can't have other folks knowing you're looking things up on SO. it'd ruin your mystique.
In the modern era there is a "Mirror API", a very limited web-based API for serving "cards" to the device, and a native SDK with a few hooks for registering for pre-selected voice commands and showing activities with various amounts of liveliness or graphics capability vs. static cards.
So if you are just publishing cards with standard menu actions you can write server side only in Python and several other languages and just communicate with Google's servers. If you want a more native experience you write in Java using an SDK originally based on Android.
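To give a feel for the card-publishing path described above, here is a minimal sketch of building a Mirror API timeline card payload. The field names (`text`, `menuItems`, `action`) follow the v1 timeline item format as I recall it; treat the exact endpoint and field shapes as assumptions to verify against the official docs.

```python
import json

# Build a minimal Mirror API timeline card payload (shape modeled on the
# v1 timeline item format; field names are illustrative).
def make_card(text, actions=("REPLY", "DELETE")):
    return {
        "text": text,
        "menuItems": [{"action": a} for a in actions],
    }

card = make_card("Torque spec: 45 ft-lb")
print(json.dumps(card))

# The card would then be POSTed to the Mirror API's timeline endpoint
# with an OAuth-authorized HTTP client, along the lines of:
#   requests.post("https://www.googleapis.com/mirror/v1/timeline",
#                 headers={"Authorization": "Bearer <token>"}, json=card)
```

Because the whole interaction is plain JSON over HTTP, the server side really can be written in any language, as the comment says; the SDK is only needed for the richer on-device experiences.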
Basically they pushed an update out a couple weeks ago (after almost 3 years of nothing) which added Bluetooth device support.
However, there were some other issues I encountered in previous (early) Google Glass development that I wonder if they have been resolved:
- Google Glass devices were prone to overheating to the point of shutting down, so it was not possible to do anything terribly computationally intensive.
- Support for more corporate-oriented wifi protocols (e.g. LEAP) was marginal; the Glass even had trouble making an initial connection to a hidden SSID on a standard WPA2 network.
- Barcode scanner support is based on camera image capture; unfortunately, small barcodes were difficult or impossible to scan with the Glass as a result (both due to the need to position the code close to the camera, and the nature of the camera that made closeups of small barcodes very blurry).
Did glass have cellular? Dropping that could do a lot too.
It's a good question.
Honestly, I'd love to use it all the time, instead of my smartphone. There are so many cases in which you want to have the screen in your FOV, but your hands free, not just at work...
> For the doctor example, why is this better than a simple body cam, or even a camera that is wall or desk mounted?
It probably isn't, unless the doctor can also utilize a display for something. Otherwise it's just an expensive (and quite likely crappy, compared to other available options) camera.
To be clear, I'm not saying there isn't a problem to be solved, I am questioning that "small screen that you can look through" is the right form factor for any of these problems.
- Can only businesses buy Glass?
- What is the pricing model? Is this sold as a product, or is it paid for as a subscription service?
- Will there be a "play store" equivalent for software for Glass?
It's probably also way more expensive to design and build the experience and content for HoloLens's interface.
Glass just needs video, text and a 2D navigation interface (I'm being reductive here but you get my point)..
You could of course do that kind of interface for HoloLens too, but then why not just use Glass?
Glass is basically just a tiny phone on your glasses.
Can these provide eye protection from bits of flying metal while one is drilling?
Will keep digging through the page.
You know what else looks bad? Hard Hats.
HN has probably mentioned this before but is there a reason Google makes announcements on Medium and not on Blogger?
I would do the same thing if I had a choice, but Google could just make the formatting on Blogger better.
So some tech to aid recognition and the ability to add a few notes such as their line of business would be something I'd happily spend a lot of money on.
But early on Glass said "no access to facial recognition APIs", which killed my interest in its first incarnation.
If it's done in real time, i.e. the scanned images aren't saved, then surely a "this is person <X>, whom you've met and tagged as X" is less creepy than actually videoing someone? You wouldn't get any information you haven't yourself added to the device (although it would probably need to lean on external data-sets for the training).
I wouldn't suggest facial recognition should recognise anyone you haven't met yet; I think that would be a bit weird (although I think it is the future anyway). But a way to effectively add a tag to someone you know would be great. Most people can do this without the technology, to varying levels of accuracy and breadth of acquaintance.
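The tag-lookup idea described above can be sketched as a nearest-neighbor match against embeddings of people you have already tagged, with nothing recorded. Everything here is hypothetical: the toy 3-element embeddings, the distance threshold, and the names; a real system would get embeddings from a face-recognition model.

```python
import math

# Hypothetical on-device tag lookup: match a live face embedding against
# embeddings of people the user has already tagged; no video is stored.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def lookup_tag(embedding, tagged, threshold=0.5):
    """Return the closest known tag, or None if nobody is close enough."""
    name, known = min(tagged.items(),
                      key=lambda kv: euclidean(embedding, kv[1]))
    return name if euclidean(embedding, known) <= threshold else None

tagged = {
    "Joe (plumber)": [0.1, 0.9, 0.3],
    "Sam (accountant)": [0.8, 0.2, 0.5],
}
print(lookup_tag([0.12, 0.88, 0.31], tagged))  # Joe (plumber)
print(lookup_tag([0.5, 0.5, 0.9], tagged))     # None: no close-enough match
```

The key property is the one the comment asks for: the device can only ever answer with tags the user added, because the lookup table contains nothing else.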
They mix it up: the "Transfer Appliance" announcement also on the HN front page is on googleblog.com. Different teams of course.
Not just different teams, different companies. Glass is X, and they don’t use Google Blogger. Transfer Appliance is Google, and they do use Google Blogger.
Both Google and X are Alphabet subsidiaries, but they aren’t the same company.
X’s blog is on Medium, not Blogger, but X is not Google, it’s a separate subsidiary of Alphabet.