AR Will Spark the Next Big Tech Platform: Call It Mirrorworld (wired.com)
97 points by longdefeat 36 days ago | 72 comments



I've done some work in "parallel worlds" overlaid on top of the real one by latitude/longitude, using the cool "ARCL" library - and the biggest headache I've found for location-based AR (instead of the type that just scans the room or a table) is that even little variations in GPS positioning can really interfere with the experience.

The first time you see virtual objects linked to a real-world place, it's magical - but that magic quickly goes away when everything suddenly shifts 10 (or 50) meters to the east because your device got updated GPS info.

I've become much more aware of how much "cheating" happens in driving / map apps to cover up these hiccups - ever take an exit and your map still shows you driving down the highway for a while? That kind of cheating probably won't work in an AR space.

This is technology that will no doubt improve, but it's definitely one of those "final 5% is 50% of the work" nuisances where just a small amount of inaccuracy can wreck the illusion.
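For a sense of scale: a jitter of one ten-thousandth of a degree of longitude is already several meters of on-screen drift for an anchored object. A rough Python sketch, with made-up coordinates, just to show the conversion:

    import math

    # Rough sketch: how far a virtual anchor appears to jump when the GPS fix
    # shifts slightly (equirectangular approximation, fine over tens of meters).
    # The coordinates below are arbitrary.
    def offset_meters(lat1, lon1, lat2, lon2):
        r = 6371000.0  # mean Earth radius in meters
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        return r * math.hypot(dlat, dlon)

    # a 0.0001-degree shift in longitude at ~49 degrees latitude:
    print(offset_meters(48.8738, 2.2950, 48.8738, 2.2951))  # ~7 meters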


This is a Hard Problem because no fixed reference frame exists for registering position. For many applications we pretend one exists, but at sufficient resolution (centimeters) the illusion of a fixed reference frame is shattered. Many companies, like Microsoft, want/need near pixel-perfect registration. The challenge is worse than people imagine.

GPS positioning does not provide a fixed reference frame, even when it works as advertised, as it assumes some properties of reality are constant that are actually variable. But let's assume that it does provide a fixed reference frame for the sake of argument.

Physical objects are not fixed in any global reference frame. They can move quite a bit throughout the day, exhibiting significant Brownian and regular displacement relative to their mean position. No big deal, we'll just use a local reference frame, like the geometry of buildings and objects, right?

Local geometric relationships we treat as fixed are also quasi-randomized throughout the day. For example, the distance between two buildings can vary by centimeters over a day. With enough measurements you can sort of average out the local noise, but the precision is much worse than people find desirable.

We can't precision measure our way out of this problem because the things we measure don't sit still.

High-precision registration in physical reality is generally believed to be an AI-complete problem. This is a major hurdle for the vision of AR most companies have. You have a huge number of contradictory positioning cues, all of which are constantly changing, from which you need to synthesize a coherent positioning model that matches the one humans naturally perceive.


No, it's not an "AI-complete problem". It's just hard. With GPS for coarse position, inertial sensors for movement, depth sensing, and SLAM for fine position, it can work.[1] The drone industry and DARPA are working hard on this.[2] Right now, you can do it, but not with cell phone grade hardware.

[1] https://www.youtube.com/watch?v=iZ1psxcMvrQ [2] https://www.spar3d.com/blogs/the-other-dimension/nanomap-sla...
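To make the "coarse plus fine" idea concrete, here's a toy complementary filter in Python - not a real EKF/SLAM pipeline, and the gain is an arbitrary constant, but it shows why a fused estimate doesn't jump the way raw GPS does:

    # Toy complementary filter: trust dead reckoning (IMU / visual odometry) for
    # high-frequency motion and slowly pull toward the absolute GPS fix to cancel
    # drift. Real systems use an EKF or factor graph; the gain here is made up.
    class CoarseFinePositioner:
        def __init__(self, x=0.0, y=0.0, gps_gain=0.02):
            self.x, self.y = x, y
            self.gps_gain = gps_gain  # how hard each GPS update pulls the estimate

        def on_odometry(self, dx, dy):
            # relative motion from inertial sensors / visual odometry
            self.x += dx
            self.y += dy

        def on_gps(self, gx, gy):
            # nudge toward the absolute fix instead of snapping to it, which is
            # what makes objects "jump" in naive location-based AR
            self.x += self.gps_gain * (gx - self.x)
            self.y += self.gps_gain * (gy - self.y)

    p = CoarseFinePositioner()
    p.on_odometry(0.5, 0.0)  # walked half a meter
    p.on_gps(10.0, 0.0)      # GPS disagrees wildly; the estimate moves only ~19 cm
    print(p.x, p.y)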


You can't build repeatable models of space for high-accuracy registration with big drone or car hardware either, I've worked with both. The geometry of space may rhyme but it never repeats. Those links don't address registration.

If you measure the environment with high precision and use that to construct a geometric model of the space, and then come back a week later and measure it with the same instruments, the two spaces won't be congruent even for objects we normally think of as invariant, and the variability is sometimes surprising in magnitude. The noise floor for repeatable measurement out in the physical world is centimeters in most cases, regardless of the instrument precision used to measure it. This isn't a problem if you don't need particularly high precision, but people are inventing applications that do.

The software challenge is trying to position relative to previous measurements of the same space when the myriad positioning cues are contradictory. Knowing which of the totality of cues are relevant in context so that the software can appropriately adapt its positioning behavior to the change in geometry is the part that is usually deemed AI-complete by the people I know that have been working in the space a long time. There are many infamous example cases of humans being able to correctly register contradictory positioning information in context that we don't know how to algorithm our way out of currently.

Some of the drone work we did was actually measuring how the geometry of "fixed" spaces varies over time. The world around us moves a lot more than humans can perceive.


OK. I've never worked tighter than 15cm, for automatic driving, so I haven't seen that.


If you are able to: 1. narrow down your GPS coordinate to within a 50-meter radius (really crappy GPS signal), but 2. use the physical structures surrounding the user to pinpoint the actual precise coordinate, which Google can do using Street View data,

Then you will be able to triangulate and locate the user quite accurately as long as they are above ground and outside.

And you get this: https://www.youtube.com/watch?v=XWbY5jdJnHg (Interestingly, this just came out today)

With the work that Google is doing for in-door-mapping, this might also work indoors as well as underground, I don't know.

But it seems like the location accuracy for location-based AR is being "solved" right now. Unfortunately, it can only be done by somebody like Google or another company that can afford to collect Street View-level data (maybe Apple can afford to do the same here).

[Edit] Plus, if you're talking about parallel worlds, Google could even use the Street View data to pre-render the alternate world over the real world and only send the data back to the user after they manage to triangulate them. This way they don't need to do that in real time, reducing the latency of rendering something over the real-world structures.
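The rough shape of that pipeline - coarse GPS fix to shortlist reference imagery, then visual matching to refine - might look something like this (Python sketch; the "matching" is a stand-in dot product and every name and number here is invented):

    import math

    # Hypothetical sketch of VPS-style localization: use a crappy GPS fix only to
    # shortlist candidate reference views, then refine position by matching the
    # live camera frame against them.
    def distance_m(a, b):
        # flat-earth approximation, fine within a ~50 m search radius
        dlat = math.radians(a[0] - b[0])
        dlon = math.radians(a[1] - b[1]) * math.cos(math.radians(a[0]))
        return 6371000.0 * math.hypot(dlat, dlon)

    def localize(coarse_fix, camera_descriptor, reference_views, radius_m=50.0):
        # 1. shortlist reference views (e.g. Street View captures) near the fix
        candidates = [v for v in reference_views
                      if distance_m(coarse_fix, v["latlon"]) <= radius_m]
        if not candidates:
            return coarse_fix  # nothing to refine against, keep the GPS guess
        # 2. "match" the camera frame to each candidate; a dot product stands in
        #    for real feature matching + pose estimation
        best = max(candidates,
                   key=lambda v: sum(a * b for a, b in
                                     zip(camera_descriptor, v["descriptor"])))
        return best["latlon"]

    refs = [{"latlon": (40.7580, -73.9855), "descriptor": [0.9, 0.1, 0.0]},
            {"latlon": (40.7582, -73.9851), "descriptor": [0.1, 0.8, 0.1]}]
    print(localize((40.7581, -73.9853), [0.85, 0.2, 0.05], refs))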


> but that magic quickly goes away when everything suddenly shifts 10 (or 50) meters to the east because your device got updated GPS info.

"The principle of generating small amounts of finite improbability by simply hooking the logic circuits of a Bambleweeny 57 Sub-Meson Brain to an atomic vector plotter suspended in a strong Brownian Motion producer (say a nice hot cup of tea) were of course long understood – and such generators were often used to break the ice at parties by making all the molecules in the hostess’s undergarments leap simultaneously one foot to the left, in accordance with the Theory of Indeterminacy."

Have you tried turning off the Bambleweeny 57 Sub-Meson Brain?


Yeah, GPS alone is not enough.

This product uses an auxiliary GNSS antenna and software to provide centimeter level precision: https://sitevision.trimble.com/


This doesn't really make sense. GNSS is simply a reference to a receiver's ability to access any navigation constellation, that is, GPS, plus GLONASS, Beidou, and Galileo, operated by Russia, China, and the EU respectively.

Just about any modern GPS receiver in a dedicated device has this capability. This may not be the case for mobile devices for cost reasons. The Ublox NEO-M8N, for example, can concurrently receive signals from 3 GNSS constellations and has a single-unit price of around $15.

The thing is, all those satellites don't necessarily help precision as they tend to cover different geographic regions. There aren't a lot of Beidou sats passing over the US, for example.

What does help are satellite-based augmentation services (SBAS) like WAAS or EGNOS, which provide regional GPS correction data originating from known, fixed points in the region.

Differential GPS works similarly and can offer greater precision, but requires a different receiver in a whole other part of the RF spectrum.

Finally, all this precision comes at a cost, that being time. If you want centimeter level accuracy it can take a fair bit of time to get a good fix. In my experience, a couple minutes with a good modern receiver.
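The core trick behind augmentation and differential schemes is simple even if the radio plumbing isn't: a receiver at a precisely surveyed point measures how wrong GPS is right now, and nearby receivers subtract that error. A toy sketch with invented numbers (real DGPS/RTK corrects per-satellite pseudoranges, not final positions):

    # Toy differential correction: a base station at a precisely surveyed point
    # computes the current GPS error and broadcasts it; a nearby rover subtracts
    # it, since it sees roughly the same atmospheric/orbit errors.
    base_surveyed = (1000.00, 2000.00)   # known truth (local meters)
    base_measured = (1002.10, 1998.70)   # what GPS says right now

    correction = (base_surveyed[0] - base_measured[0],
                  base_surveyed[1] - base_measured[1])

    rover_measured = (1050.80, 2101.10)
    rover_corrected = (rover_measured[0] + correction[0],
                       rover_measured[1] + correction[1])
    print(correction, rover_corrected)   # error drops from meters toward decimeters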


That's very cool - thanks for sharing!


I might be oversimplifying this - couldn't companies that want to take part in the parallel world set up local sensors around their building to estimate position to a much more accurate degree?

I'm thinking along the lines of "Want to be a Pokemon gym? Place these devices around your public space and we'll overlay our AR reality while people are there!"


> everything suddenly shifts 10 (or 50) meters to the east because your device got updated GPS info.

Seems like you could 'debounce' GPS updates where you know it hurts to make changes. Not that different from maps' cheating. But the resolution of GPS and its challenges related to ground clutter will always be a thorn in the side of AR applications like this one.
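Even something as dumb as this would help - only re-anchor the scene when a big jump has persisted for a while, instead of on every fix. A rough sketch; the thresholds are pulled out of thin air:

    import time

    # Crude "debounce" for GPS fixes: follow small moves immediately, but ignore
    # big jumps unless the new position keeps reporting from roughly the same
    # place for a few seconds.
    class DebouncedPosition:
        def __init__(self, jump_m=15.0, hold_s=3.0):
            self.accepted = None        # position the AR scene is anchored to
            self.pending = None         # candidate new position after a big jump
            self.pending_since = None
            self.jump_m = jump_m
            self.hold_s = hold_s

        def update(self, pos, dist_fn, now=None):
            now = time.monotonic() if now is None else now
            if self.accepted is None or dist_fn(pos, self.accepted) < self.jump_m:
                self.accepted = pos      # small move: follow it immediately
                self.pending = None
            else:
                # big jump: only accept it once it has persisted for hold_s
                if self.pending is None or dist_fn(pos, self.pending) > self.jump_m:
                    self.pending, self.pending_since = pos, now
                elif now - self.pending_since >= self.hold_s:
                    self.accepted, self.pending = pos, None
            return self.accepted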


There was an anime series called Dennou Coil from 2007 that explored a lot of these themes, set in a world with a very detailed AR model of reality. Being a TV show, it had lots of anthropomorphism, but it asked questions such as: what if self-driving cars relied only on this modeled mirror world, even for real-time information? What happens with power dynamics when an authoritarian corporation owns the mirror world? How do assets age and become dilapidated if they are virtual?


I was going to post this, but you beat me to it. One thing I remember was how drab the real world was, because all signage and art had migrated to AR.


That's actually a feature.



This will bring up very interesting questions about who owns real-life property in AR:

In the real world: Burger King is not allowed to enter a McDonalds property and plaster their ads all over the place!

But what about AR? Let's assume there is a really popular AR app and it supports placing ads in certain spaces. Would BK be allowed to place virtual ads on real-world McDonalds property? Or would McDonalds have grounds for suing because their real-world property extends into AR?


Can I load a banner ad from McDonalds on a webpage from my phone if I'm in BK? I don't think legislation that might prevent that would be reasonable. Just because this is projected onto the real world doesn't make it different. They're both layers in digital space.

The only entity that has jurisdiction over that particular digital space is the app itself, so they get to make the rules.


Not the same thing if the ad space is tied to geographic coordinates and served by a few major ad providers. Geographic ad placement could be huge. Especially “this is cheaper on amazon” ads.


What if Google used location services to detect I was in a Burger King, and showed me sponsored ads at the top of my search page on my phone, and McDonalds happened to be the highest bidder? I don't think the trigger for the ads changes things.


Burger King is already (kind of) advertising in McDonalds by offering $0.01 Whoppers if you order them from inside a McDonalds: http://fortune.com/2018/12/06/burger-king-is-offering-01-who...


This is the big question, and one Kelly glosses over in this piece, perhaps even obfuscates, by banging on the 1:1 mapping thing while ignoring that a 1:1 mapping implies consensus on the map.

I think this notion, one I identify as an issue of semiotics as much as a technical challenge, is a huge problem / business opportunity. I did a bit of a deep dive on the topic in '15 in this Medium post: https://medium.com/@doctorhandshake/the-map-and-the-territor... ... been meaning to post it to HN and never got to it ... will have to do that now.

A little disappointed in KK because I sent him this article in '16, and, although he didn't respond, he did crib a number of my points for this piece, as well as the Borges allusion.


Perhaps it'll be like buying the top search results in Google Ads: the AR space will be for sale to anyone and McDonalds will have to buy ads in their own stores to prevent Burger King from doing the same.


Yes! Although that assumes a third party managing the digital space, which McDonalds might still have to allow.


This could be a relevant take for US: https://www.lexology.com/library/detail.aspx?g=ca4af3d4-867a...

In your specific example, McDonalds might have both copyright and trademark claims, or even publicity claims - the design of the restaurant, the arches, and the recognizable nature of the brand are all very distinctive. Placing BK logos over them would be valuable precisely because it's McDonalds.


What about the situation where I don't like my neighbor and I put an ugly virtual sculpture on their property? Nothing insulting or offensive, just something ugly.

Should the neighbor have any rights over their virtual AR property?


I personally don't see this becoming a problem. At least the way I have imagined AR being.

The way I hope to see it is that all the different features are apps that you can either enable or disable, and then the AR platform (e.g. the OS on your headset) just mixes them all together. So for example there could be an app for placing sculptures anywhere, and it becomes a matter of that app's policy. If you don't like an app that lets you place sculptures anywhere, just don't install it and use one that ties in with the property register. And so on.

This would also reduce the BK Ads in MCDs example to what we have today. It's not illegal to show BK Ads on people's phones while they are in MCDs.


I've wished we had this same concept of OS-provided base layer in other areas as well, where your app provides plugins that could be enabled in the core app. It came to my mind when Apple released their share extension method.

Maps: I use a few apps whose sole purpose is to provide location-based info (eg we have a govt app that gives fuel prices). This could just be a plugin layer that could be enabled in the Maps app. Property searches, your favourite fast food place, etc.

Camera: plugins could provide more effects or manual camera settings control UI, or indeed... AR overlays.


I'm pretty sure that eventually, one "winner" app will appear and grab almost all market share, similar to how Facebook has it for social media.


Not necessarily.

I think what allows "winner" apps to exist in the modern world is that you can really only use one app at a time. Especially true for smartphones. The way I hope AR develops is that all of the small components (the apps) get composited together at the OS level. So that "winner" platform would be the OS itself.

Albeit, I'm being extremely optimistic here.


I would like for a more decentralized scenario to win, but it seems to me that people tend to gravitate towards centralization. It happened so many times already that it would be surprising if it doesn't happen again.


> where I don't like my neighbor and I put an ugly virtual sculpture on their property

Or the situation where you (with or without attribution) cause a virtual sign saying 'idiot' to appear over their head when they are in a public place?


first of all, i'm not going to have ads in my fucking AR experience.

https://vimeo.com/166807261


So far it’s totally legal to advertise on direct searches for your competitor. As of a few years ago it wasn’t uncommon to spend five figures or more monthly just to protect your brand.

From a quick look, it seems Google has started regulating this. I'd expect winners in the AR space to follow a similar trajectory, coming up with rules for this when it becomes a problem.


Can you navigate to the Burger King website in a McDonalds?

Should be the same thing, no?


May not be quite the same. Navigating to the Burger King website in a McDonalds requires the user to take an explicit action to do so; always-on geo-located AR may not require user consent (unless it's designed that way, which may defeat the purpose of ads).


How far away is the hardware? I think AR is truly the next evolution in human-computer interaction, but every time I explore the hardware it still seems like an entirely immature area. We haven't even really reached the "Palm Pilot" stage, as I like to call it. In that stage the hardware and software are production-ready and usable, but only power users are truly interested. There's HoloLens, there's Google's attempts, there's those glasses by that company called North. All of those are nowhere near mainstream-ready. There are heads-up displays, which are interesting and I think have lots of opportunities. I think realistically we need a proper set of glasses with a solid brain-computer interface in order for AR to take off and become mainstream. I personally don't want to try interacting with a pair of smart glasses using a hand-held controller or some motion-detection sensors.


I'd guesstimate that it is tied to 60GHz femtocells and ISPs. Graphics at 4K+ resolution and 120Hz+ refresh rates do not seem viable for many years on mobile platforms, but indoor positioning and high-bandwidth streaming provided by a device that takes the place of your WAN/LAN router could possibly work. Time-sharing high-end GPUs could also be quite economical, and give ISPs yet another revenue stream.

All the infrastructure needed for this already exists in a form which offers barely adequate performance. Good performance should be just a generation or two away, and then things should get interesting.


Why will AR require a persistent full-scale model of the globe? Isn't the whole point of AR that it takes the existing environment, generates a local map on the fly, and overlays something useful on it? My 2025-model AR sunglasses don't need to know nor care what shape the Arc de Triomphe is, they just need to know the names of the people in my field of view[1], or where the walls and floor of my house are, or maybe the next 500m of the road I'm driving on.

[1] This is my dearest hope for AR, that someday all the people around me will have MMO-style nameplates, because I have many strengths but remembering the names of people I've just met is not one of them.


> someday all the people around me will have MMO-style nameplates

No, thank you, kind user. Just because you are unable to remember the names of relevant people around you, there is no particular reason why anyone sensible should agree to have their personal data transmitted to random people around them at all times.

Such name identification on the fly then contributes to a (still fictional for the public) global real-time people location service. Great: because 5 people are in line of sight of George McGeorgeface, he is now confirmed to be at the dildo store in downtown Nairobi although he claimed to be sick for work. Thank you, face recognition APIs!

It's bad enough that we get tracked and screwed for ad data nowadays which just happens to not even stay in the ad space.....


You bring up some good points, but I would be happy with having that info for people in my Contacts list. I would see it like this: each user has an ID, and I would be able to tie that ID to a name. And it would only be visible if they were in eyesight.


Woah, OK, we clearly picture this service working very differently. I'm not asking for everyone to be geotagged in some publicly accessible database. Ew. I just want locally running facial recognition paired with speech-to-text which is smart enough to recognize introductions, so that when five people introduce themselves to me in quick succession after I stumble off a red-eye flight, and then three weeks later one of them comes up to me acting like they know me, I have some chance of knowing who the hell they are.
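Something in that spirit could probably be hacked together today with an off-the-shelf face-embedding library. A rough sketch using the Python face_recognition package, with the speech side reduced to a stub (the "introduction detector" and all file names are hypothetical):

    import face_recognition  # off-the-shelf face embedding library (dlib-based)

    known = {}  # name -> face encoding, built up as people introduce themselves

    def remember_introduction(frame_path, spoken_name):
        # call this when the (hypothetical) speech pipeline hears "Hi, I'm <name>"
        image = face_recognition.load_image_file(frame_path)
        encodings = face_recognition.face_encodings(image)
        if encodings:
            known[spoken_name] = encodings[0]

    def nameplates(frame_path, tolerance=0.6):
        # return (name, face_location) pairs for recognized people in a frame
        image = face_recognition.load_image_file(frame_path)
        locations = face_recognition.face_locations(image)
        results = []
        for loc, enc in zip(locations, face_recognition.face_encodings(image, locations)):
            for name, known_enc in known.items():
                if face_recognition.compare_faces([known_enc], enc, tolerance)[0]:
                    results.append((name, loc))
                    break
        return results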


It’s amazing how much anxiety this would remove from my life. A very significant amount, like, I’m ready for this now please.


To circumvent this, I think it would be great if the user had the option to make a geofence. For example, they could enable their contact details being detectable at a conference/meetup networking event, since the person will benefit from this themselves. Then, outside that particular location, your ID gets locked down, sort of like location-based Smart Lock on smartphones.


Almost certainly this is already possible with Facebook's graph and the current state of facial recognition in photos.


Speak for yourself. I would permit and embrace it.


"Why will AR require a persistent full-scale model of the globe?"

The 'Mirrorworld' described here is not just an AR application. It's an indexable, queryable database of "everything" that is built.

Don't think about nameplates above people.

Think about service crews coming to inspect the local sewer network, who get an "xray" view of the pipes running under the street.


That "xray view" could actually be created from the cameras that were worn by the service crew that installed the pipes in the first place. That way it could be accurate even if the pipe maps aren't, and e.g. it might be able to warn about other nearby pipes that aren't present on the map.


That's pretty cool and all, but that's not "AR" (although it could be used for it).

AR : Mirrorworld :: The Internet : Google Maps


You don't get to choose the definition. Mirrorworld is just one usage of AR.

AR is augmented reality and the pipes example is a great example of that.


We already have some of these mirror worlds. The AR games Ingress and Pokemon Go are two well-known examples.

Which leads me to my next prediction: there will not be a singular mirrorworld. There will be many. It would undoubtedly be cool if there were a "standard" mirrorworld that people refer to (just like they refer to sites on the one and only internet), but unless some big entity creates something compelling early on (think Wikipedia) and gains a big first-mover advantage, I don't see it happening.


> some big entity creates something compelling early on (think Wikipedia)

Good example; if there's going to be a 'shared' mirrorworld it probably can't be an end in itself the way games are.

It would have to be at least one of informative (about the real world), social (so your network is all there), or common-ground experience (like watching popular shows and sports playoffs). And since 'informative' is fungible, it might not suffice unless there's a clear leader.

Or, I suppose, some major power might roll all the popular offerings into subsections of a top-level experience. It's not that hard to imagine Google or Apple setting up an AR 'appstore' from which you can open games, information, and so on. Google Glass was premature, but the idea made some sense as a way to use existing accounts and image analysis abilities.


I always think the next surfing experience (dialling through radio frequencies, surfing the early web, etc) will be dialling/browsing between "mirrorworlds". I doubt there'll be one. Although the fear is we're already being segmented into alternative realities. Imagine the ramifications we're already experiencing in that kind of context.


That might make for an interesting project (individual or community): photoshopping AR vignettes for people's various passions. Sort of speculative storyboarding and market exploration for future apps.

For a user with a passion for fashion, what might a nifty AR future expert experience look like? Heads-up tooling for where their head is at as they walk down the street. Collaborative tooling for their interactions with friends of similar interests, while walking and retrospectively. What about for software, or for startups?

The AR vignettes I've seen have been generic. Refrigerator contents, street navigation, business travel, etc[1]. Broadly accessible, but most of it mundane. Not targeted, not trying to inspire individuals with "O.M.G. Want future now.".

Sort of like demos of Unreal VR Editor[2] for game devs, or EXA[3] for bands, or PRIMITIVE[4] for software devs. But synthetic, rather than app demos.

[1] HYPER-REALITY https://www.youtube.com/watch?v=YJg02ivYzSs [2] Unreal https://www.youtube.com/watch?v=JKO9fEjNiio [3] EXA twinkle https://www.youtube.com/watch?v=WB22jF9cRko , multiplayer https://www.youtube.com/watch?v=bdZsUZGuCeI&t=9 [4] Java https://www.viveport.com/apps/675c92c6-7df2-4ee3-b919-1bfbb6...


First paragraphs and I just had to think of this: https://www.youtube.com/watch?v=YJg02ivYzSs


Anyone interested in a fully imagined setting where a "mirrorworld" is firmly in place should check out Vernor Vinge's book Rainbows End.


" To recreate a map that is as big as the globe—in 3D, no less—you need to photograph all places and things from every possible angle, all the time, which means you need to have a planet full of cameras that are always on."

And I'm sure that won't have any security issues


Security issues are our chaos ally, whereas Political issues are what make such a thought stupid.


Most of these "twins" have already been created. The majority of all components for cars, buildings, tech, etc have CAD and 3D models before even being manufactured so it's just a matter of the creators open sourcing a public version.


Cities should proactively legislate that ad platforms must lease from them the geographic coordinates for ads that are tied to specific places.

And businesses should get ready to optimize it for social and financial objectives.


AR will only work when you have an always-on system. People holding up phones is more like an experiment than anything useful, because of the UX of needing to hold the phone up.


Here's what "AR" would look like if it worked. Watch "Hyperreality".[1] Best depiction of AR so far.

[1] https://vimeo.com/166807261


Ray Kurzweil, is that you?


Close, it's Kevin Kelly


"...Next Big Ad Platform"

What could possibly go wrong?


I would love it if we could regulate advertisements to only exist in AR space.


How about an AR ad blocker for real life? Removes billboards, commercials, branding and ads from real-life spaces and publications.

It'd also be interesting to replace ads with other stuff in general. Imagine being able to replace a print ad with a random article from the publication's website, or a TV commercial with a YouTube video in your subscriptions list.


Relevant to what we're building: https://www.infiniverse.net/


If you don't use open standards you'll look like a pyramid scheme.


Could you elaborate, what possibly looks like a pyramid scheme?


Immediately checked out when it mentioned blockchain.


I'm sorry that some people have such negative connotations towards that word, though I can understand to an extent.

We're certainly going to make this as easy as possible to use rather than targeting just blockchain enthusiasts.

Perhaps in the future we'll rebrand our website to not focus on the blockchain aspect as much.



