The first time you see virtual objects linked to a real-world place, it's magical - but that magic quickly goes away when everything suddenly shifts 10 (or 50) meters to the east because your device got updated GPS info.
I've become much more aware of how much "cheating" happens in driving / map apps to cover up these hiccups - ever take an exit and your map still shows you driving down the highway for a while? That kind of cheating probably won't work in an AR space.
This is technology that will no doubt improve, but it's definitely one of those "final 5% is 50% of the work" nuisances where just a small amount of inaccuracy can wreck the illusion.
GPS positioning does not provide a fixed reference frame, even when it works as advertised, as it assumes some properties of reality are constant that are actually variable. But let's assume that it does provide a fixed reference frame for the sake of argument.
Physical objects are not fixed in any global reference frame. They can move quite a bit throughout the day, exhibiting significant Brownian and regular displacement relative to their mean position. No big deal, we'll just use a local reference frame, like the geometry of buildings and objects, right?
Local geometric relationships we treat as fixed are also quasi-randomized throughout the day. For example, the distance between two buildings can vary by centimeters over a day. With enough measurements you can sort of average out the local noise, but the precision is much worse than people find desirable.
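A toy simulation (all numbers invented for illustration) shows why averaging helps with instrument noise but not with this environmental drift: readings taken in one burst share the same environmental offset and average down nicely, while readings spread across days each carry a different offset, leaving a centimeter-level floor no matter how precise the instrument is.

```python
import random

random.seed(42)

TRUE_DISTANCE = 25.0       # nominal distance between two buildings, meters
ENV_DRIFT_CM = 2.0         # hypothetical daily environmental variation (std dev, cm)
INSTRUMENT_NOISE_CM = 0.5  # hypothetical instrument noise (std dev, cm)

def measure(env_offset_cm):
    """One reading: true value + environmental offset + instrument noise."""
    noise_cm = random.gauss(0.0, INSTRUMENT_NOISE_CM)
    return TRUE_DISTANCE + (env_offset_cm + noise_cm) / 100.0

# A burst of readings at one moment shares one environmental offset,
# so instrument noise averages down by ~1/sqrt(n).
same_moment_offset = random.gauss(0.0, ENV_DRIFT_CM)
burst = [measure(same_moment_offset) for _ in range(1000)]
burst_mean = sum(burst) / len(burst)

# Readings spread over many days each get a fresh environmental offset,
# so the scatter never drops below the ~2 cm environmental floor.
daily = [measure(random.gauss(0.0, ENV_DRIFT_CM)) for _ in range(1000)]
daily_mean = sum(daily) / len(daily)
daily_std = (sum((d - daily_mean) ** 2 for d in daily) / len(daily)) ** 0.5

print(f"burst mean error: {abs(burst_mean - TRUE_DISTANCE - same_moment_offset / 100):.4f} m")
print(f"daily scatter:    {daily_std * 100:.2f} cm")  # stays near the environmental floor
```

The point of the sketch: more samples shrink the first number toward zero, but the second number is stuck at roughly `ENV_DRIFT_CM` no matter how large `n` gets.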
We can't precision measure our way out of this problem because the things we measure don't sit still.
High-precision registration in physical reality is generally believed to be an AI-complete problem. This is a major hurdle for the vision of AR most companies have. You have a huge number of contradictory positioning cues, all of which are constantly changing, from which you need to synthesize a coherent positioning model that matches the one humans naturally perceive.
If you measure the environment with high precision and use that to construct a geometric model of the space, and then come back a week later and measure it with the same instruments, the two spaces won't be congruent even for objects we normally think of as invariant, and the variability is sometimes surprising in magnitude. The noise floor for repeatable measurement out in the physical world is centimeters in most cases, regardless of the instrument precision used to measure it. This isn't a problem if you don't need particularly high precision, but people are inventing applications that do.
The software challenge is trying to position relative to previous measurements of the same space when the myriad positioning cues are contradictory. Knowing which of the totality of cues are relevant in context so that the software can appropriately adapt its positioning behavior to the change in geometry is the part that is usually deemed AI-complete by the people I know that have been working in the space a long time. There are many infamous example cases of humans being able to correctly register contradictory positioning information in context that we don't know how to algorithm our way out of currently.
Some of the drone work we did was actually measuring how the geometry of "fixed" spaces varies over time. The world around us moves a lot more than humans can perceive.
Then you will be able to triangulate and locate the user quite accurately as long as they are above ground and outside.
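The geometry behind "triangulate the user" can be sketched as trilateration against landmarks with known positions (the landmark coordinates and distances below are made up; a visual positioning system would derive the ranges from imagery rather than measure them directly):

```python
import numpy as np

# Hypothetical landmark positions (e.g. previously mapped facades), in meters,
# plus the measured distances from the user to each one.
landmarks = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 30.0], [40.0, 30.0]])
true_pos = np.array([12.0, 17.0])
dists = np.linalg.norm(landmarks - true_pos, axis=1)

# Each range equation is |p - p_i|^2 = d_i^2. Subtracting the first equation
# from the others cancels the quadratic |p|^2 term, leaving a linear system:
#   2 (p_i - p_0) . p = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2
A = 2.0 * (landmarks[1:] - landmarks[0])
b = (dists[0] ** 2 - dists[1:] ** 2) + (
    np.sum(landmarks[1:] ** 2, axis=1) - np.sum(landmarks[0] ** 2)
)
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)  # recovers the user position
```

With noisy ranges the least-squares solve still works; the estimate just degrades with the range noise, which is exactly where the precision complaints above come from.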
And you get this: https://www.youtube.com/watch?v=XWbY5jdJnHg
(Interestingly, this just came out today)
With the indoor-mapping work Google is doing, this might also work indoors, and perhaps even underground; I don't know.
But it seems like the location accuracy for location-based AR is being "solved" right now. Unfortunately, the way it's done can only be done by somebody like Google or a company who can afford to collect streetview level data (maybe Apple can afford to do the same here.)
[Edit] Plus, if you're talking about parallel worlds, Google could even use the Streetview data to pre-render the alternate world over the real world and only send the data back to the user after they manage to triangulate them. This way they don't need to do that in real-time, reducing the latency of rendering something over the real world structures.
"The principle of generating small amounts of finite improbability by simply hooking the logic circuits of a Bambleweeny 57 Sub-Meson Brain to an atomic vector plotter suspended in a strong Brownian Motion producer (say a nice hot cup of tea) were of course long understood – and such generators were often used to break the ice at parties by making all the molecules in the hostess’s undergarments leap simultaneously one foot to the left, in accordance with the Theory of Indeterminacy."
Have you tried turning off the Bambleweeny 57 Sub-Meson Brain?
This product uses an auxiliary GNSS antenna and software to provide centimeter level precision:
Just about any modern GPS receiver in a dedicated device has this capability. This may not be the case for mobile devices for cost reasons. The Ublox NEO-M8N, for example, can concurrently receive signals from three GNSS constellations and has a single-unit price of around $15.
The thing is, all those satellites don't necessarily help precision as they tend to cover different geographic regions. There aren't a lot of Beidou sats passing over the US, for example.
What does help are satellite-based augmentation systems (SBAS) such as WAAS or EGNOS, which provide regional GPS correction data originating from known, fixed points in the region.
Differential GPS works similarly and can offer greater precision, but requires a different receiver in a whole other part of the RF spectrum.
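The differential principle itself is simple enough to show in a few lines (1-D, all numbers invented): a base station at a surveyed position observes the same atmospheric and satellite-clock error a nearby rover sees, so broadcasting its observed error lets the rover cancel most of it.

```python
# Toy 1-D illustration of differential correction. Numbers are made up.
base_known = 100.000      # surveyed position of the base station, meters
shared_error = 3.42       # error common to both receivers (atmosphere, sat clocks)
rover_local_noise = 0.02  # small residual error unique to the rover

base_measured = base_known + shared_error
rover_true = 250.000
rover_measured = rover_true + shared_error + rover_local_noise

correction = base_known - base_measured        # = -shared_error
rover_corrected = rover_measured + correction  # shared error cancels

print(round(rover_corrected - rover_true, 3))  # only the rover's local noise remains
```

Real DGPS corrects per-satellite pseudoranges rather than a final position, but the cancellation idea is the same.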
Finally, all this precision comes at a cost, that being time. If you want centimeter level accuracy it can take a fair bit of time to get a good fix. In my experience, a couple minutes with a good modern receiver.
I'm thinking along the lines of "Want to be a Pokemon gym? Place these devices around your public space and we'll overlay our AR reality while people are there!"
Seems like you could 'debounce' GPS updates where you know it hurts to make changes. Not that different from maps' cheating. But the resolution of GPS and its challenges related to ground clutter will always be a thorn in the side of AR applications like this one.
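A minimal sketch of what that debouncing might look like, with made-up thresholds: small moves are treated as jitter and ignored, and a large jump is only accepted after it persists for several consecutive fixes.

```python
JITTER_M = 3.0     # moves smaller than this are treated as noise (made-up)
CONFIRM_FIXES = 5  # a big jump must persist this many fixes to be believed

class DebouncedPosition:
    def __init__(self, initial):
        self.accepted = initial
        self.pending = None
        self.pending_count = 0

    def update(self, fix):
        # 1-D distance for brevity; a real app would use haversine on lat/lon.
        if abs(fix - self.accepted) < JITTER_M:
            self.pending, self.pending_count = None, 0
            return self.accepted
        # Large jump: only accept it once it has persisted.
        if self.pending is not None and abs(fix - self.pending) < JITTER_M:
            self.pending_count += 1
            if self.pending_count >= CONFIRM_FIXES:
                self.accepted = fix
                self.pending, self.pending_count = None, 0
        else:
            self.pending, self.pending_count = fix, 1
        return self.accepted

pos = DebouncedPosition(0.0)
print(pos.update(1.5))   # jitter: position holds at 0.0
print(pos.update(50.0))  # big jump: not yet trusted, still 0.0
for f in [50.2, 49.8, 50.1, 50.0]:
    last = pos.update(f)
print(last)              # jump confirmed after persisting: 50.0
```

This trades latency for stability, which is exactly the maps-style "cheating" described above: the anchor stays put through a GPS hiccup at the cost of reacting slowly to a genuine move.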
In the real world: Burger King is not allowed to enter a McDonalds property and plaster their ads all over the place!
But what about AR? Let's assume there is a really popular AR app and it supports placing ads in certain spaces. Would BK be allowed to place virtual ads on real-world McDonalds property? Or would McDonalds have grounds for suing because their real-world property extends into AR?
The only entity that has jurisdiction over that particular digital space is the app itself, so they get to make the rules.
I think this notion, one I identify as an issue of semiotics as much as a technical challenge, is a huge problem / business opportunity. I did a bit of a deep dive on the topic in '15 in this Medium post:
... been meaning to post it to HN and never got to it ... will have to do that now.
A little disappointed in KK because I sent him this article in '16, and, although he didn't respond, he did crib a number of my points for this piece, as well as the Borges allusion.
In your specific example, McDonalds might have both copyright and trademark claims, or even publicity claims: the restaurant design, the arches, and the recognizable nature of the brand are all distinctive. Placing BK logos over them would be valuable precisely because it's McDonalds.
Should the neighbor have any rights over their virtual AR property?
The way I hope to see it is that all the different features are apps that you can either enable or disable, and then the AR platform (e.g. the OS on your headset) just mixes them all together. For example, there could be an app for placing sculptures anywhere, and it becomes a matter of that app's policy. If you don't like an app that lets you place sculptures anywhere, just don't install it and use the one that ties in with the property register. And so on.
This would also reduce the BK ads in MCDs example to what we have today. It's not illegal to show BK ads on people's phones while they are in MCDs.
Maps: I use a few apps whose sole purpose is to provide location-based info (eg we have a govt app that gives fuel prices). This could just be a plugin layer that could be enabled in the Maps app. Property searches, your favourite fast food place, etc.
Camera: plugins could provide more effects or manual camera settings control UI, or indeed... AR overlays.
I think what allows "winner" apps to exist in the modern world is that you can really use only one app at a time. This is especially true for smartphones. The way I hope AR develops is that all of the small components (the apps) get composited together at the OS level, so the "winner" platform would be the OS itself.
Admittedly, I'm being extremely optimistic here.
Or the situation where you (with or without attribution) cause a virtual sign saying 'idiot' to appear over their head when they are in a public place?
Looks like Google has started regulating this, just from a quick look. I'd expect winners in the AR space to follow a similar trajectory, coming up with rules for this when it becomes a problem.
Should be the same thing, no?
All the infrastructure needed for this already exists in a form which offers barely adequate performance. Good performance should be just a generation or two away, and then things should get interesting.
 This is my dearest hope for AR, that someday all the people around me will have MMO-style nameplates, because I have many strengths but remembering the names of people I've just met is not one of them.
No, thank you, kind user. Just because you are unable to remember the names of relevant people around you, there is no particular reason why anyone sensible should agree to have their personal data transmitted to random people around them at all times.
Such name identification on the fly then contributes to a (still fictional for the public) global real time people location service. Great, because 5 people are in line-of-sight of George McGeorgeface he is now confirmed to be at the dildo store in downtown Nairobi although he claimed to be sick for work, thank you face recognition APIs!
It's bad enough that we get tracked and screwed for ad data nowadays which just happens to not even stay in the ad space.....
The 'Mirrorworld' described here is not just an AR application. It's an indexable, queryable database of "everything" that is built.
Don't think about nameplates above people.
Think about service crews coming to inspect the local sewer network, who get an "xray" view of the pipes running under the street.
AR : Mirrorworld :: The Internet : Google Maps
AR is augmented reality and the pipes example is a great example of that.
Which leads me to my next prediction: There will not be a singular mirrorworld. There will be many. It would undoubtedly be cool if there were a "standard" mirrorworld that people refer to (just like they refer to sites on the one and only internet), but unless some big entity creates something compelling early on (think Wikipedia) and gains a big first-mover advantage, I don't see it happening.
Good example; if there's going to be a 'shared' mirrorworld it probably can't be an end in itself the way games are.
It would have to be at least one of informative (about the real world), social (so your network is all there), or common-ground experience (like watching popular shows and sports playoffs). And since 'informative' is fungible, it might not suffice unless there's a clear leader.
Or, I suppose, some major power might roll all the popular offerings into subsections of a top-level experience. It's not that hard to imagine Google or Apple setting up an AR 'appstore' from which you can open games, information, and so on. Google Glass was premature, but the idea made some sense as a way to use existing accounts and image-analysis abilities.
For a user with a passion for fashion, what might a nifty AR future expert experience look like? Heads-up tooling for where their head is at as they walk down the street. Collaborative tooling for their interactions with friends of similar interests, while walking and retrospectively. What about for software, or for startups?
The AR vignettes I've seen have been generic. Refrigerator contents, street navigation, business travel, etc. Broadly accessible, but most of it mundane. Not targeted, not trying to inspire individuals with "O.M.G. Want future now.".
Sort of like demos of Unreal VR Editor for game devs, or EXA for bands, or PRIMITIVE for software devs. But synthetic, rather than app demos.
HYPER-REALITY: https://www.youtube.com/watch?v=YJg02ivYzSs
Unreal: https://www.youtube.com/watch?v=JKO9fEjNiio
EXA twinkle: https://www.youtube.com/watch?v=WB22jF9cRko , multiplayer: https://www.youtube.com/watch?v=bdZsUZGuCeI&t=9
Java: https://www.viveport.com/apps/675c92c6-7df2-4ee3-b919-1bfbb6...
And I'm sure that won't have any security issues
And businesses should get ready to optimize it for social and financial objectives.
What could possibly go wrong?
It'd also be interesting to replace ads with other stuff in general. Imagine being able to replace a print ad with a random article from the publication's website, or a TV commercial with a YouTube video in your subscriptions list.
We're certainly going to make this as easy as possible to use rather than targeting just blockchain enthusiasts.
Perhaps in the future we'll rebrand our website to not focus on the blockchain aspect as much.