A killer app for me would be floating name tags above people's heads, but I imagine that would require better HW than what is available here.
I'd love to see what this looks like actually installed on glasses, in different configurations.
Some simple AR use cases that I'd like to see:
* Turn by turn directions
* Incoming high priority notifications
* Weather alerts
* Front door notifications
* Upcoming meetings
* Interior building navigation
* Shopping Lists (hard to do the interactions on this)
The issue here is none of these are a killer feature, so I wouldn't be willing to pay more than a small premium to have them added on top of what I normally pay for frames, but since I already am paying $300-$500 for frames (before insurance), if someone offers the above for another $100 or $150, sure why not.
This is a societal issue, more than a HW/SW issue. Google Glass explicitly bans this type of feature:
> Don't use the camera or microphone to cross-reference and immediately present personal information identifying anyone other than the user, including use cases such as facial recognition and voice print. Glassware that do this will not be approved at this time.
This is in response to Congress raising privacy concerns.
I wouldn't expect any "official" facial recognition on any of these upcoming AR devices any time soon.
Which is too bad, since I see AR as potentially being a massively useful device for those with memory issues, dementia, or who generally need some help with things like this.
I'd have given anything for that 35 years ago. First thing on any contract job, I'd make a map of desks & names - but it always tripped me up when people slipped over to work at a different desk!
What's particularly odd to me is that many people seem to be sensitive about, specifically, cameras in glasses. I sometimes literally wear a bodycam, not subtle at all, and it's mostly ignored, while with no-camera HUD glasses people get nervous that there's a camera there.
I hope that we can eventually get away with stuff like real-time facial ID as an adaptive technology feature - though the only way that will work is if we can get it pretty much invisible. Or we finally get used to all being recorded all the time anyway and stop freaking out about it.
Unfortunately, that's also why "auto-name-tagging" is never going to happen. It doesn't help the gigantocorps who are already tracking us through wifi and IR and seismic patterns in public places. They've had data that could make the individual experience immeasurably better for nearly 15 years now, but they conveniently use "privacy concerns" to avoid having to acknowledge it in public. As long as there is a modicum of suspension of disbelief, they can continue to collect all the data they want for their own purposes and to hell with anyone who wants to use their own data for their own purposes.
Like, I could conceivably take photos of people I meet, store those photos along with notes in the iOS notes app (or even a contact app) then create a panel of those photos so that I can tap the right person and bring up those notes when I see them again. AR would simply make this easier and faster. Unless the AR camera was hidden it would not make it different from a privacy perspective.
That's the thing. The privacy implications are different at scale.
A world of FOSS tools, where centralized ID databases are heavily restricted or banned, and people are using facial recognition technology on people they encounter in their lives and tag, that would be fairly harmless, perhaps even doing much more good than harm.
But it's not going to play out like that. How many of us run speech transcription software on our own machines, and how many of us use a remote service?
But that's, like, 90% of what I'd want something like that for. Not for everyone in my field of view, of course, but when I'm out in the field I meet a lot of people in a short time and I can never remember everyone's names. Work shirts with embroidered names are an absolute godsend.
Self driving cars will track everywhere you go and when and ultimately decide where you'll be allowed to go and when, augmented reality will make it so you can't see anything without an ad playing, robots in your home will record you and catalog everything you own, the phone in your pocket already tracks you and everyone you care about, etc. It's hard to think of a single popular tech product that isn't collecting data and working for someone other than the "owner" who paid to bring it into their home.
Seems like one of those things that can't be stopped. It can be slowed, as Google has done, but eventually OSS will allow for it. If it is massively outlawed then it can be reduced, but even then it'll still be on the level of hidden cameras.
Which is ridiculous; it gives an advantage to people who were born with a good memory for faces.
Do not allow for lookups in any sort of online database, all tagging has to be done locally by the user.
I would, just not for the general public. This would be very helpful for police to immediately do a check on people around them and spot people with active warrants.
I get there's a balance, but at some point, it's a problem that holds individuals back and supports the elite.
It can 100% be done in a non-sketchy way by simply limiting the names to the names you've manually entered for people you've seen and tagged before.
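A minimal sketch of that local-only approach, with toy embeddings and a made-up similarity threshold (a real system would get embeddings from an on-device face model):

```python
import math

# Sketch of "local tags only": names live in a plain structure on the
# device, keyed by face embeddings the user explicitly saved.  The
# embeddings here are toy 3-vectors; the threshold is a guess.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class LocalFaceTags:
    def __init__(self, threshold=0.9):
        self.tags = []          # (embedding, name), user-entered only
        self.threshold = threshold

    def tag(self, embedding, name):
        self.tags.append((embedding, name))

    def lookup(self, embedding):
        # No online database: only matches against what the user tagged.
        best = max(self.tags, key=lambda t: cosine(t[0], embedding), default=None)
        if best and cosine(best[0], embedding) >= self.threshold:
            return best[1]
        return None  # unknown faces stay unknown

tags = LocalFaceTags()
tags.tag([0.9, 0.1, 0.2], "Alice from accounting")
print(tags.lookup([0.88, 0.12, 0.2]))  # close embedding -> "Alice from accounting"
print(tags.lookup([0.1, 0.9, 0.1]))    # stranger -> None
```

The key property is that `lookup` can never return a name the user didn't enter themselves.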
folks need to stop ignoring disabled people and / or going "yeah well who cares about disabled people's needs".
Where this device has a unique advantage is overlay AR. Turn by turn is somewhat there but I think a better way to look at this is passive information. You could walk by a business and see hours on a virtual sign. You could make an app to leave virtual art you could stumble on. I'd like to see more ideas like that. Just a new way to show notifications doesn't cut it for me.
Most stores have those posted already. :-D
> They can almost entirely be replaced with a smart watch.
Agreed, unfortunately current smart watches are tasked with doing a lot, and as a result they are not very good at blending into the background of life.
I'd also love a life logger, being able to see snapshots of my day and quickly create journal entries. An automatic record of everyone I talked to, summaries of important conversations.
Human potential could be greatly expanded with current day technology but sadly we are limiting ourselves.
Ok but it could be 3rd party data instead. It could be live open seating data, etc etc.
Seriously though, I'd love to have a wearable adblock. Detect anything that resembles an ad, blank it out. Obviously not while you're doing anything safety critical, but most of the time - good trade.
Also, anything the AR can see you put down/drop can be remembered; anything you're looking for can be highlighted (in theory).
Would be fun to work on some other ideas.
with a companion app, you could scan the barcodes of the items as you place them in the cart, and then remove the item from the visible list in the AR HUD. shit, tie that into the actual store's check out system so you don't have to use their stupid POS self-checkout lines. just bag it up and be on your way. other than the products that are sold by weight, this seems like something we're missing out on as a species.
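The scan-to-check-off flow could be sketched like this (the barcodes, the barcode-to-product mapping, and the function name are all made up for illustration):

```python
# The HUD shows whatever is left on the list; scanning a barcode
# crosses the item off.  Data below is invented for the example.
PRODUCTS = {
    "0123456789012": "milk",
    "0987654321098": "eggs",
}

shopping_list = {"milk", "eggs", "bread"}

def on_barcode_scanned(barcode):
    """Called by the phone app when the camera decodes a barcode."""
    item = PRODUCTS.get(barcode)
    if item in shopping_list:
        shopping_list.discard(item)
    return sorted(shopping_list)   # what the HUD should now display

print(on_barcode_scanned("0123456789012"))  # milk comes off the list
```

Tying the same scan events into the store's checkout backend is the harder, non-technical part.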
https://www.youtube.com/watch?v=5t_-Na63kUY, https://youtu.be/ZMVbwVpBkKc?t=115 for the tech
Disclosure: I worked at one of the companies spending big on AR and worked on these exact use cases. Sadly, the company was not excited by these cases in the short term, so the projects were effectively cancelled. I fully expect them to be revived at some point, but not any time soon.
I wouldn't touch any kind of AR use case while driving. Seems ripe for lawsuits when you're obscuring a driver's view (especially with an eyepiece compared to putting a HUD on the windshield)
Cities, walking. :)
Walking around a city pulling out my phone to check directions is actively stupid; a HUD with arrows telling me which intersection to turn down would improve safety on multiple fronts (both theft risk and situational awareness).
Though honestly on vacation there, I enjoyed getting lost in Shinjuku station. :D
Office complexes, finding someone's desk.
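The walking turn-by-turn case mostly boils down to "which way should the arrow point?" A minimal sketch using the standard initial-bearing (forward azimuth) formula, with made-up coordinates:

```python
import math

# Compute the initial compass bearing from the walker's position to
# the next intersection.  Great-circle formula; coordinates invented.
def bearing_deg(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360) % 360

# Target due east at the same latitude -> bearing of ~90 degrees.
print(round(bearing_deg(35.6896, 139.6917, 35.6896, 139.7006)))  # -> 90
```

Subtract the wearer's compass heading from the result and you have the angle to rotate the HUD arrow by.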
Also, there is an ongoing trend to ban mobiles for pedestrians crossing the streets (Honolulu, followed by Jiaxing, Lithuania, Poland, ...). It would be great if AR was regulated from the start to disappear in such semi-risky situations.
But if you say this is safe to use while driving, you need to handle situations where "this AR app had a bug which drew a solid color over the entire view, obstructing my view, and caused me to miss the pedestrian", while a projected HUD has a limited area it can draw on.
If you can't/won't say it's safe to use while driving, then why develop the feature for driving?
This being an eye piece which has software control over your entire field of view has potential for danger.
Then you will have to make sure all of that doesn't replicate these results from the outset: https://arstechnica.com/cars/2015/07/heads-up-displays-in-ca...
- chord changes
- a visual metronome
It supports Bluetooth 5.2, up to 2Mb/s.
So video compression is going to have to get more than ~10x reduction to fit into 2Mb/s and that's just 1 frame/sec.
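A back-of-envelope check on that claim (my own numbers: uncompressed 24-bit RGB at 720p, ignoring Bluetooth protocol overhead):

```python
# Raw 720p RGB frame vs. a 2 Mbit/s Bluetooth LE link.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 1280, 720, 3

raw_bits_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL * 8   # ~22.1 Mbit
link_bits_per_second = 2_000_000                            # 2 Mb/s, best case

# Compression ratio needed just to push ONE frame per second:
ratio = raw_bits_per_frame / link_bits_per_second
print(f"raw frame: {raw_bits_per_frame / 1e6:.1f} Mbit, "
      f"compression needed at 1 fps: {ratio:.1f}x")
```

That comes out to roughly 11x, consistent with the "more than ~10x" figure above, and real throughput will be below the 2 Mb/s PHY rate.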
When 802.11n at 300 Mbps was the hot thing that supposedly made wired LAN obsolete, you'd be happy to actually get 60 megabits (which is still plenty for most purposes, don't get me wrong, but it isn't 300). Bluetooth is more resilient thanks to frequency hopping, which I wish newer Wi-Fi generations had adopted (no more channel madness where you either coordinate with your neighbors in 3D or, on auto, have good days and bad days; everything would just use the whole spectrum as efficiently as possible), but I still doubt you'll get a stable one frame per second.
I'm not uninterested, but at this price point, selling me a ~400 euro device I didn't know existed until 30 seconds ago takes a bit more convincing, especially with a gif showing 1.5x the original size and calling it 8x zoom.
Seriously, the only two images I could find are 500x333 (the monocle comes in at 61x66 pixels if I'm being generous with the edges) where the person literally has to hold it up https://uploads-ssl.webflow.com/623cc6cc56889b045032bfc1/62a... and 319x306 (crops to 20x20 pixels(!)) https://uploads-ssl.webflow.com/623cc6cc56889b045032bfc1/637... where there's again a hand hovering. It looks neither stable nor comfortable, to me, but I don't know; it's not like there's a video of someone casually wearing it while talking to the camera or doing something.
Open source being listed as something you can toggle off is also a bit ironic to this FOSS advocate :-) but the spirit is there and it's definitely a selling point that it says the platform is under an MIT license! Let's get that app on f-droid and I'm curious to see it actually in any user's proverbial hands!
Would prefer the clip-on module for existing glasses and get extra battery life, perhaps more vision angle since the monocle seems to cut 30% of the vision field.
/me realizes something
Wait, the camera is advertised as being 720p. No mention of optical zoom (and smartphones (which are bigger and heavier!) don't have that either). I'm allowed to drive, so I'd say my eyesight ought to be better than 720p (at least after the brain's post-processing, as eyeballs are allegedly pretty terrible). In that case, "zooming" cannot help to see more. If this crops 720 pixels by a factor of 16, you get 45 pixels blown up to the whole screen. Plus motion blur. I can understand they didn't show what 16x would look like!
What a scam of a feature: neither being 16x, nor being useful at all, nor being zoom by any of Merriam-Webster's definitions. Either that, or I am majorly missing something here.
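For concreteness, here's the crop arithmetic behind that, assuming the "zoom" is a straight center crop of the sensor:

```python
# Digital "zoom" is just cropping and upscaling: a 16x zoom of a
# 1280x720 sensor keeps only an 80x45 pixel window, which then gets
# blown up to fill the whole display.
SENSOR_W, SENSOR_H = 1280, 720
ZOOM = 16

crop_w, crop_h = SENSOR_W // ZOOM, SENSOR_H // ZOOM
print(f"{ZOOM}x digital zoom source region: {crop_w}x{crop_h} pixels")
```

45 vertical pixels of source data, plus motion blur, is why the marketing material doesn't show the 16x result.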
If you want, you could do it offboard but there's a reason wireless VR headsets don't work, and it's latency.
I, like many of you, am very good at telling a computer what to do with letters and symbols. I spend a lot of time in voice calls and I often write little scripts while I'm talking to answer questions, ballpark numbers etc. That's not to mention the power of tools like Wolfram Alpha and Google.
I would like that to be everpresent in my life. I know we'll get some semblance of this with conversational AI at some point soon, but for me I crave the determinism of real programming. I want to be able to summon a quake-style REPL from the sky at will, and while looking someone face to face google facts or compute probabilities. I want to be able to sketch algorithms while walking around in nature.
(Intel was working on something like this at one point but sadly the project was abandoned)
Presumably the university eventually sold the patents off and they're sitting in some dusty corporate vault now, but maybe they'll expire soon.
I think it would be quite fun to have a HUD like that for daily life.
I have been seriously considering building a keyboard along these lines for in person meetings, so I can essentially have it on the table in front of me and take notes without a display.
You might enjoy Ben Valack's videos on the topic (18 key keyboard)
Edit: never mind above TIL lasers are light and can be dimmed
If you completely obscure one eye's vision that no longer works and you've lost your depth perception.
Even having one eye blurry is sufficient for parallax to work. (I have very different prescriptions for each eye and if I take my glasses off I can still do distance vision even though one eye can't read any text further away than about 2 feet)
So try it - put an eye patch on for a day and see how many door frames you walk into... or the headache you may get from trying to focus each eye at a different distance (and that's fairly tiring for the lens muscle to be doing all day if it isn't properly exercised).
The "Open source platform" part of the slideshow shows object detection at various distances. If it can box a car 20 feet away, a building 400 yards away, near objects on my breakfast plate, etc., then the plane of projection has to vary over a huge range as you look around (?). In general many of the demos I see seem to have remarkable things going on, in terms of what parts of the UI and world are in focus simultaneously.
While I'm making up some numbers... the sensor for that camera is 3.6mm x 2.7mm. A 5mm lens (that's a made up number) at f/3.5 (that's a guess) would have everything from 5 feet away to infinity be in acceptable focus. If it's a 3mm lens (again, made up number), then everything from 2 feet to infinity is in acceptable focus.
So having the stuff on your plate and the building in the distance both be in focus - yea, that's something reasonable.
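Those depth-of-field guesses can be sanity-checked with the hyperfocal-distance formula; using a common circle-of-confusion rule of thumb (sensor diagonal / 1500, which is my assumption), the made-up lens numbers above land in the same ballpark (~4 ft and ~1.4 ft rather than 5 ft and 2 ft):

```python
import math

# Hyperfocal-distance sketch for the made-up numbers above.
SENSOR_W_MM, SENSOR_H_MM = 3.6, 2.7
coc = math.hypot(SENSOR_W_MM, SENSOR_H_MM) / 1500  # ~0.003 mm, rule of thumb

def near_focus_limit_mm(focal_mm, f_number):
    """Nearest distance in acceptable focus when the lens is set to its
    hyperfocal distance (everything from here to infinity is sharp)."""
    hyperfocal = focal_mm**2 / (f_number * coc) + focal_mm
    return hyperfocal / 2

for focal in (5.0, 3.0):
    ft = near_focus_limit_mm(focal, 3.5) / 304.8  # mm -> feet
    print(f"{focal} mm lens at f/3.5: sharp from ~{ft:.1f} ft to infinity")
```

Different circle-of-confusion assumptions shift these numbers, but the conclusion holds: a tiny sensor with a short lens keeps nearly everything in focus.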
But I am still interested in the overlay and how distracting that actually becomes.
So by plane of projection I mean the apparent distance of the virtual image of the UI. If your eyes are focused "through" the display on something a few yards away vs. relatively close, the light from the UI needs to come into your eye (or possibly glasses etc.) as though it is tracing rays in parallel from roughly the same distance away.
Quake style consoles and other HUDs work in video games because in reality the entire scene is coming from the plane of the display, some inches in front of you. If you tried to really focus on a game object 20 yards away, instead of on the screen in front of you, the HUD wouldn't be visible anymore.
In VR optics I believe the virtual screen is something like ~6 feet out in front of you. It is a compromise and still causes eye strain, but is workable perceptually. The issues for transparent AR seem much more complex.
Many of the far-out concepts and ideas that are mocked up for AR seem actually very achievable right now, or yesterday, if the display works like a phone or laptop or VR goggle's does, by re-projecting camera input from a plane a few (possibly virtual) inches or yards away from your eye. The value added though is pretty niche, because people have mobile phones anyway. But if the iPhone hadn't happened, a little display in front of the eye, a whole visor, or a pop-up wrist computer might have been possible to sell. It sounds kind of silly now, but that's what the expectation was in the 80s-90s. The idea of putting a computer on your head still seemed cool then.
Obviously a full color display that could overlay reality at will with a wide FoV would be better, but I think the monochrome vector laser thing is achievable today and for me personally would be invaluable.
Maybe a roll-down giant screen like for a projector display would work better, to give you a big virtual "hole" in a wall, but I've only seen it on those smallish tables. Possibly due to projection power/angle limits?
(Even demand for the turn-by-turn I think is largely driven by people allowing turn-by-turn to erode their natural sense of direction and ability to recall directions, but that's a separate point).
There are some useful cases I can imagine, but they would require so much collaboration by other contributors who aren't traditionally inclined towards good UX and software, and would be so niche that I don't see them driving a successful consumer product. In those cases the hardware isn't really the hard question it's "how do we deliver great experiences that can accomplish this without a hitch and without getting in the way?"
For instance, presenting overlays is very useful for something like an inspector to be able to easily cross reference maps, blueprints, and schematics. Uploading an instruction manual for building flat-pack furniture and have it able to literally tell you what to do. Meal prep services overlaying recipes. A tool for guided tours at museums (in which case you'd rent the AR device instead of owning your own). Maybe as a bike or running computer to overlay your time, speed, splits, or whatever.
But these are all such niche use cases I can't imagine a company like Apple, that generally aims to have product lines that sell in the hundreds of millions of units, would ever be in that market.
And why do they have a patent on it if it's open source? Do they mean just the code? (It only looks like it has the MIT license, so assuming just the code).
IF it's real, then it's pretty neat.
As soon as one has a patent, then they are incentivized to sell their patented thing, not to switch to the thing that is better for their customers.
Patents are a way for lawyers to inject themselves where they don't belong to extract rent, making us all worse off. Patents are a negative sum game.
File a new patent?
Maybe. That has non-zero cost and if the patent office is doing its job the new thing may not be patentable because it is better and there is prior art etc. etc.
The original concern is valid w.r.t. patents, in addition to the many, many other concerns surrounding how they are used nowadays, and indeed how much BS there has always been, going back to when Alexander Graham Bell stole the telephone idea from someone else at the patent office and then used patent enforcement to good effect. It is extremely widely believed around here that things are much worse with respect to patents now.
"Patents encourage innovation" seems to have become an idea that marks one out as unbelievably naive. Or a lawyer.
The ideal time for my personal daily commute/schedule to go on a 45 min daily hike/walk is 8am-9am. However at exactly that time some important automated business processes occur (markets related) that it’s best I keep an eye on.
If I could wear some glasses that safely display the information required, I could probably lose 15 lbs easily, as I'd be able to exercise regularly by hiking each morning, weather permitting. I also carry a MicroPC, LTE connectivity, and a phone, so if I saw a serious issue on the AR glasses I could just stop somewhere and get access.
Weird question I guess.
Have you considered putting the dashboards on a smart watch?
The Nreal Airs are definitely interesting, thank you. Could work for the while doing dishes example mentioned in another comment.
I don't know if you can offload processing onto your phone with this; either the lag is too high or it's just not possible. Using the phone's IMU wouldn't make sense (it's in your pocket), but yeah. Not sure how you'd do the VIO without the I.
What I am hoping for is a single tracker that can absolutely position itself in the environment.
Basically a self contained SLAM in a box
Google Glass, and its many Chinese clones
So only 12 hours of total use? Hoping it's a typo or not what it sounds like.
Translation… we made some easy hardware, but software is hard and we’re not even sure what our device will be for, maybe this will be cool later?
But I wonder what do I use it for?
Looks like you can program it yourself so that's great.
Get more familiarity with FPGA.
I'm thinking about it.
Well... I've supported SVR and Pine64 why not this.
If nothing else, I can use it to record videos of my stuff working at the workbench (not a lot of onboard storage, though).
Ordered one, will see how it goes. It definitely blocks a significant part of your vision but I figure I can toy around with it while I'm cooking food. 3D print some glasses for it.
I would be curious whether, if you bought two, they could talk to each other... but the hardware kind of blocks your vision / doesn't seem ideal... still, it's something to tinker with.
Do you have a blog, where it’s possible to see what you learn?
looks like once bought they will ship from NY so I should have it in a week or so
There are tons of these dev hardware projects you can buy, and they usually center around having a camera. “Did you ever want to see in slow motion? Now you can.” “Never miss a moment.” Go ahead and miss the moment. Bigfoot is fake. There is not much point to this anymore.
I see a lot of projects that target developers as customers, but I can't get excited about most of them. I think the Playdate is one of the highest-quality products of this type. Behind that are some of the holographic 3D displays being made now that use 3D pixels. They can go for $200. Pretty cheap.
Augmented reality is very lame to me. I think most of the hope for useful AR applications was snuffed out long ago. It's the same question asked a different way: “Why can't you just google the answer? Why do you need to do x, y, z (go to college, call a plumber, whatever)?” The answer is, invariably, that nobody has the data! A lot of good data is being held hostage or destroyed by greedy corporations. It's a winner-take-all world. Companies are reluctant to start a brand-new game.
“But the main reason plumbers are well paid is because they know the arcane secrets of plumbing fittings.”
Modern wearable displays have a focal distance between 1m and infinity to improve comfort.
> The result is a transparent floating display with a 20° field of view. About the size of a table display at arms length.
I can't find anything that actually states the focal distance, though I agree there's a decent chance it's at infinity.
But in any case - without 6dof then this isn't really AR in any usual sense of the term. It's just half a Google Glass, surely?
The documentation shows something like this where the projection is off axis from the actual mirror doing the projection.
The FPGA+micropython means that there's a lot you could do on device if you put in enough development time and converted your code to run directly on the FPGA, but it probably works best if paired with a smart phone. Someone like google could probably get this doing real time text translation off that FPGA, but I sure can't.
When paired with a phone you could use this to do real-time subtitles for the hard of hearing, display bike directions; there are all kinds of uses. Really all it has is Bluetooth, the camera, some touch controls along the rim, and the display.
In the show-and-tell channel
Obviously suboptimal, really needs to be integrated into some old-timey welding goggles for the steampunk vibe.
At $349 it’s almost in the range where I’d buy it to play around with, but I suspect it’d end up in a drawer like a bunch of other things that looked cool but that I never got around to hacking on. Once the technology gets good enough that they can fit it into a regular pair of glasses, then we can talk. Though, looking at the Bluetooth Aftershokz headset I use, that day probably isn’t too far off.
The other issue is that the projection display is a bit low-res for what I'd really like to see, but 1 pixel works out to ~0.5mm at 1m, so it might be ok. That might be precise enough for (for instance) identifying particular through-hole connections on a circuit board at a rework station. It's certainly adequate for identifying components on an engine.
There doesn't seem to be much on the physical hardware though.
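That ~0.5 mm figure checks out if you assume the stated 20° field of view is spread across 640 horizontal pixels (the resolution is my assumption, not from a spec sheet):

```python
import math

# Apparent size of one display pixel projected onto a surface 1 m away,
# assuming a 20-degree field of view over 640 horizontal pixels.
FOV_DEG = 20
H_PIXELS = 640
DISTANCE_MM = 1000

span_mm = 2 * DISTANCE_MM * math.tan(math.radians(FOV_DEG / 2))
mm_per_pixel = span_mm / H_PIXELS
print(f"~{mm_per_pixel:.2f} mm per pixel at 1 m")  # ~0.55 mm
```

At rework-station distances (say 30 cm) the same math gives ~0.17 mm per pixel, which is the scale that matters for pointing at through-hole pads.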
a sentence to get Californians and Texans both excited
What you describe is something I also want/need.
So I'm going to see about somehow feeding text into the monocle via phone/BT... like an OBS situation. Then you could use your phone's screen as a mouse/input method. I know it seems pointless, just use your phone... but you know... it's cool.
I have to read the manual on this too.
I’m thinking a custom Textual Python terminal app in a transparent terminal emulator on a black background, with a tiling window manager. I have a GPD MicroPC, so it seems I could add the Nreal adapter to connect over HDMI. Then all that’s really missing is some type of input mechanism, maybe via Bluetooth.
The Nreals look potentially good enough to literally walk around with safely, if the Textual TUI is designed in a way that's not too distracting.
Edit: found this for input mechanism.
This idea looks fairly realistic to me.
I was thinking about how to make something like that; I'd be curious about its accuracy. I'll see if there are videos of it.
hmm it does not work the way I think it would, I wanted a qwerty layout
- Humans aren't fast: using AR while also functioning as a person is too hard for us dumb meat-computers.
- AR devices still aren't as fast, sophisticated, and low-power as the hardware we take for granted in mobile devices, e.g. the iPhone and MacBook.
Can I choose a different shape for it? For example, a rectangular one?
guess I didn't really want to contribute anyway
The MIT license does not provide license to use patents. It's really open source in the letter but not in the spirit.