It looks like Pioneer already shipped something much cruder along similar lines, and I swear I saw a Microsoft concept video with something more refined.
For navigation, I think this may come across as having better precision than it really has.
I’m looking forward to when AR libraries use their point-cloud technology to also do object reconstruction and generate virtual occluders from real-world objects!
Also, I think there are other ways to build up reasonable occlusion nodes manually. For example, it's probable that the Google Maps iOS team is currently adding an AR directions view that uses the Street View point cloud to build up occlusion areas around most streets. Likewise, some areas of the OSM dataset include building footprints with height attributes, which could be used as well (see the sketch below). Nowhere near perfect, but I think it would help in the situation you described above.
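A minimal sketch of the OSM idea in SceneKit (the helper name and its inputs are hypothetical, and it assumes the footprint coordinates are already projected into local meters around the session origin): extrude the footprint polygon by the building's height attribute and give it a material that writes depth but no color, so the real building hides virtual content without the occluder itself ever being visible.

```swift
import SceneKit
import UIKit

// A sketch only: `makeOccluderNode` is a hypothetical helper, assuming
// the footprint is already in local meters around the AR session origin.
func makeOccluderNode(footprint: [CGPoint], height: CGFloat) -> SCNNode {
    // Trace the building footprint as a closed 2D path.
    let path = UIBezierPath()
    guard let first = footprint.first else { return SCNNode() }
    path.move(to: first)
    footprint.dropFirst().forEach { path.addLine(to: $0) }
    path.close()

    // Extrude the footprint by the OSM height attribute.
    let shape = SCNShape(path: path, extrusionDepth: height)

    // Write depth but no color: the geometry stays invisible while
    // still hiding any virtual content rendered behind it.
    let material = SCNMaterial()
    material.colorBufferWriteMask = []
    material.writesToDepthBuffer = true
    shape.materials = [material]

    let node = SCNNode(geometry: shape)
    node.renderingOrder = -1            // draw before visible AR content
    node.eulerAngles.x = -.pi / 2       // stand the extrusion up (z -> y)
    node.position.y = Float(height) / 2 // extrusion is centered on z = 0
    return node
}
```

The negative renderingOrder makes the occluders fill the depth buffer before the visible nodes draw, which is what makes the masking work.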
Here is a video upload from a July 4th ARKit experience I made. With small movements and brief occlusion, ARKit does well. You can see in the video that after larger movements and occlusion it loses its bearings.
But what's crazy is that when you return the view to where it was, ARKit fixes itself.
The second part of the OP tweet is where the real magic happens: https://twitter.com/AndrewProjDent/status/888380207962443777
That said, when using a worldAlignment of .gravityAndHeading, any location inaccuracy when the ARSCNView starts up will throw off the AR illusion, sometimes considerably. I hope apps will be able to correct mid-ARSession when better location data is detected.
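A minimal sketch of how that correction could work today (the class and the `contentRoot`/`assumedOrigin` names are mine, and the degree-to-meter math is a flat-earth approximation): keep all geo-placed nodes under one root node and shift that root when Core Location reports a more accurate fix, rather than restarting the session.

```swift
import ARKit
import CoreLocation
import Foundation

// A sketch only: shift geo-placed content when a better location fix
// arrives, instead of tearing down and rerunning the ARSession.
final class GeoAlignedSession: NSObject, CLLocationManagerDelegate {
    let sceneView = ARSCNView()
    let locationManager = CLLocationManager()
    let contentRoot = SCNNode()    // all geo-anchored content hangs here
    let assumedOrigin: CLLocation  // fix we trusted when the session began
    private var bestAccuracy = CLLocationAccuracy.greatestFiniteMagnitude

    init(startingAt origin: CLLocation) {
        assumedOrigin = origin
        super.init()
    }

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.worldAlignment = .gravityAndHeading // x = east, -z = north
        sceneView.session.run(config)
        sceneView.scene.rootNode.addChildNode(contentRoot)

        locationManager.delegate = self
        locationManager.desiredAccuracy = kCLLocationAccuracyBest
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let fix = locations.last,
              fix.horizontalAccuracy < bestAccuracy else { return }
        bestAccuracy = fix.horizontalAccuracy

        // Approximate degrees -> meters east/north of the assumed origin.
        let metersPerDegree = 111_320.0
        let north = (fix.coordinate.latitude - assumedOrigin.coordinate.latitude)
                  * metersPerDegree
        let east  = (fix.coordinate.longitude - assumedOrigin.coordinate.longitude)
                  * metersPerDegree
                  * cos(assumedOrigin.coordinate.latitude * .pi / 180)

        // The device is actually `east`/`north` meters from where we
        // assumed, so slide the content the opposite way to compensate.
        contentRoot.position = SCNVector3(Float(-east), 0, Float(north))
    }
}
```

Shifting a root node preserves ARKit's tracking state; rerunning the session with a .resetTracking option would throw away the map it has built so far.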
where is the iBeacon in the parking lot?