Google Launches ‘Live View’ AR Walking Directions for Google Maps (techcrunch.com)
86 points by jbredeche 72 days ago | 50 comments



Whenever I come out of an unfamiliar subway station in NYC and am trying to get my bearings, my iPhone's compass direction is often off by 45° or more -- the blue arrow in Google Maps can be useless for knowing which way I'm pointing, and I always have to verify with street names.

Curious if this will be a problem for AR.

(When I'm consistently above-ground, however, e.g. on a car trip, compass orientation remains perfect -- I'm assuming it calibrates itself to GPS movement or something.)


If you know that you're getting out at 33rd and that 33rd is a 1-way street that travels west you can orient yourself pretty well if you can see any street-signs.

What's infuriating is that nearly all map applications don't show you the pertinent street names or which directions one-way streets are going. You have to zoom and pan and whatever else to find out which street you're on.

This AR thing may be a nice upgrade that would help but please for the love of god just show me all street-names that I might be on and which direction one-way traffic goes.


My phone compass is pretty consistently wrong (probably because I use a case with magnets in it), but the AR navigation feature works well for me. It asks you to point your phone camera at "buildings and signs across the street", so I think it's using Street View-based image recognition.


This is exactly the scenario that AR is best suited for. Assuming that it works at all, the orientation data from the camera + gyro will be essentially perfect.


This is right: the information can be combined, weighted by the uncertainty of each observation, through a process known as sensor fusion. [1] An article that pops up on HN from time to time describes one way to combine such information, known as a Kalman filter. [2]

[1] https://en.wikipedia.org/wiki/Sensor_fusion

[2] https://www.bzarg.com/p/how-a-kalman-filter-works-in-picture...
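As an illustration, a scalar Kalman-style fusion of a gyro rate with a noisy compass heading might look like this (the noise values and readings are made-up assumptions for the sketch, not real sensor specs):

```python
# Minimal 1D Kalman-style fusion of gyro rate and a noisy compass heading.
# gyro_var and compass_var are illustrative assumptions, not sensor specs.

def fuse_heading(heading, variance, gyro_rate, dt, compass_meas,
                 gyro_var=0.01, compass_var=25.0):
    """One predict/update step; angles in degrees, variances in deg^2."""
    # Predict: integrate the gyro; uncertainty grows over time.
    heading = (heading + gyro_rate * dt) % 360.0
    variance = variance + gyro_var * dt

    # Update: blend in the compass, weighted by relative uncertainty.
    gain = variance / (variance + compass_var)
    innovation = ((compass_meas - heading + 180.0) % 360.0) - 180.0  # wrap to [-180, 180)
    heading = (heading + gain * innovation) % 360.0
    variance = (1.0 - gain) * variance
    return heading, variance

heading, var = 90.0, 100.0          # poor initial guess, high uncertainty
for z in [47.0, 44.0, 46.0, 45.0]:  # noisy compass readings near 45 degrees
    heading, var = fuse_heading(heading, var, gyro_rate=0.0, dt=0.1,
                                compass_meas=z)
```

After a few updates the estimate pulls toward the true ~45° heading and the variance shrinks, which is the basic behaviour the linked articles describe.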


You can virtually instantly calibrate the iPhone compass by waving it in a three-dimensional figure-8 pattern. It's hard to describe, but this YouTube video (for the original generation-1 iPhone (!)) shows it quickly: https://youtu.be/sP3d00Hr14o

It still works and it's quick and easy to do.


This works for any phone.


Usually the exits will have cardinal directions (e.g. NE corner of x st and y ave). I don’t see why you need a smartphone for this.


Counter-intuitively (or not?), my sense of cardinal direction while using the subways in New York was way better before the smartphone era.


Frustratingly, they don't explain how it works, or why it's useful.

1) It's a visual search. Using Street View data, Google has crunched a sparse 3D point cloud. Each of these points represents a globally unique hash of a point (a global descriptor).

2) This is better than GPS because it can resolve a position to ~20 cm and, crucially, determine where true north is. It also doesn't need an unobstructed view of the sky to resolve a position.

In urban canyons, where the view of the sky is limited, there are loads of unique visual points, more than enough to calculate an accurate position. It also works inside, in places like malls.

The drawback is that it requires constant updating to remain accurate, since buildings and hoardings change. There is a patent for self-updating, but Google doesn't own it.
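The lookup step described above could be sketched roughly like this (the descriptors, index contents, and coordinates are all hypothetical; a real system would use learned feature vectors, approximate nearest-neighbour search, and would solve a full 6-DoF pose rather than averaging matched points):

```python
# Toy sketch of "visual search" localisation: match globally unique
# descriptors seen by the camera against a prebuilt Street View index.
# Descriptors here are plain strings standing in for feature hashes.

# Hypothetical prebuilt index: descriptor -> known 3D point (x, y, z) in metres.
POINT_INDEX = {
    "desc_cornice_a": (10.0, 2.0, 8.0),
    "desc_shop_sign": (12.0, 0.0, 3.0),
    "desc_window_17": (11.0, 1.0, 5.0),
}

def localise(observed_descriptors):
    """Return the centroid of matched landmarks as a crude position fix."""
    matches = [POINT_INDEX[d] for d in observed_descriptors if d in POINT_INDEX]
    if not matches:
        return None  # no visual match: fall back to GPS
    n = len(matches)
    return tuple(sum(axis) / n for axis in zip(*matches))

# Two of three observed descriptors are in the index; the unknown one is ignored.
fix = localise(["desc_shop_sign", "desc_window_17", "desc_unknown"])
```

The point of the index is that a match is unambiguous: one recognised landmark pins you to a known place on Earth, with no sky view required.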

Also, they don't appear to be offering this service to third parties, which is a shame.


This is great for big cities. I live in a medium-sized city and GPS works pretty well in most places. I was surprised just how bad it is in NYC (with much taller and denser buildings): unless you can orient yourself manually with a business, the location is not accurate.

This does the work for you.


Nice that this is finally live. I was in the beta group for about a year and really liked the feature. The only shortcoming was that at night it didn't do well at figuring out your direction, since it uses visual cues to get the exact direction you are pointing. It was great for walking around cities, as I travel a lot.


There's a number of things that are alarming me about this, but I'll focus on one thing: the privacy implications. Not just for the user, but for those who will appear in the pictures that Google will be - for sure - ingesting for the purpose of crowdsourcing street view imagery. Suddenly we have this platform that will be capturing millions of faces and potentially know where people have been even if they don't use the app.

Not to mention that this type of app is treating people as incapable of following directions and it opens up the possibility of new avenues of advertising. Google already keeps an archive of your whereabouts and can even tell when you get in and out of a car. Just imagine wielding your phone around and getting "special offers" for that store you go by but never walk into.


> There's a number of things that are alarming me about this, but I'll focus on one thing: the privacy implications. Not just for the user, but for those who will appear in the pictures that Google will be - for sure - ingesting for the purpose of crowdsourcing street view imagery. Suddenly we have this platform that will be capturing millions of faces and potentially know where people have been even if they don't use the app.

Umm, the camera stream of AR isn't transmitted from your phone. The 3D objects are rendered on top of detected features on device.


How do you know? And even if it doesn't for starters, there's no telling it'll stay that way.

Very legitimate concern.


> How do you know? And even if it doesn't for starters, there's no telling it'll stay that way.

Same way I know there's no pagan god hiding in my phone whispering my secrets to evil aliens: by actually working with the technology, observing the behaviour of my device and reading the privacy policies and activity log published by Google and not finding any proof.

Of course you can argue that Google might lie for whatever reason... but then you probably shouldn't be using Google Maps at all. Or any product built by a corporation.


Also by watching the upload bandwidth. If it streams the live video feed off the device at anywhere near being useful for facial recognition it would be very noticeable.


It doesn't have to be the entire video feed. It could be snapshots from a specific direction. If millions use it, there's a good chance that even a handful of snapshots from each user, at a location that thousands of people pass by, could get it mapped better than how they currently do.


Sundar repeatedly stated in the Google I/O 2019 presentation that they were condensing AI and ML techniques and bringing them on-device to provide better responsiveness and reduce the security issues of taking private data into their cloud.


A very good result from all the bad press they've gotten lately. If Google can improve its stance on privacy, then I think it solves its biggest problem.


Couldn’t you say the same thing about the standard camera app on Android and iOS? People have it open all the time.


> Not to mention that this type of app is treating people as incapable of following directions

Seems like a legit solution to the common problem of people standing around trying to work out which direction they're actually pointing.


Let's split this into three:

1) Ingesting for the purpose of crowdsourcing Street View imagery.

If they are doing that (which I strongly suspect they are not, well, at least not yet), then having multiple views allows Google to remove things that move (people/cars).

2) Depending on how they do this, if it's using "visual landmarks" then people are a massive pain. They move, and cause all sorts of problems when doing positioning. So they'll need to be removed for the system to work properly.

3) The actual scary part is not the faces, per se. It's the fact that any picture taken where Google has Street View can now be precisely located, regardless of who, when, or how it was taken.


Are they comparing live data with Street View data to ensure accuracy, or is it just simple direction overlays via GPS? What algorithm is best for comparing images in 3D space while taking into account differences due to littering, vehicles, street signs, etc.? Also, this is a great way to crowdsource Street View data gathering automatically. It would need a strong image-stabilization algorithm, but nothing a big enough neural network can't handle, in theory at least.


It seems to use image recognition of buildings and other features. When I tried it a few days ago, it shows sparkly dots on surrounding things and even shows an image overlay of a building and suggests you line it up with surrounding buildings.


Super cool. It even worked for me at night, although I was walking down a brightly lit major street.

I wonder if it could work in a dark alley or do indoor navigation in malls.


Too bad the compass still doesn't know which way I'm facing 90% of the time.


The whole point of AR directions is that it uses data from your camera to figure out your orientation.


Google uses AI along with the camera to correct the compass. If they could integrate this correction with their standard compass API it would be a game changer.

As a side note, the API docs in both Android and iOS fail to mention how unreliable the compass outputs are!


Or they don't know about it. It is pretty apparent that the Google Maps team have never tried to navigate via compass on an actual phone.


Compass operation is clearly dependent on the hardware and your physical surroundings. My phone compass (LG V30+) works well on a suburban street or in open bushland, showing high sensitivity. Inside my steel-framed house, or 5 metres from a 40-storey steel-reinforced skyscraper, not so much. (And not much different from my 40-year-old Silva compass.) That's why Google is using the camera, Street View and AR to align the orientation.


Yes. And why, for the love of everything that is holy, can you not disable compass navigation then?!?

Even GPS-derived direction is vastly superior during walks, despite the obvious issues with that.


Perhaps it's time to disrupt the world of compass physics by inventing one that doesn't get influenced by other magnetized objects?

Because that's the only way you'll fix that.


It's only the electronic compasses in mobile devices that are garbage. I've experimented with holding a regular small mechanical compass in one hand and a cell phone in the other. The mechanical compass is steady and accurate whereas the cell phone compass jumps all around. It's just poor design.


Sure, but that's because the compass in your phone is smaller than your nail is thick, which obviously makes it significantly more sensitive to any disruption.

How would you design it better?


You could design a separate compass that transmits data to the smartphone. There are many people who would trade $50 and some additional weight in their bags for accuracy.


A friend of mine and I attempted something similar, but in an indoor setting. The key problems we faced were the compass in the phone and the positioning system. We overcame the orientation problem by using the camera to combine visual odometry data with the compass measurements. We also had issues with localisation, as being indoors meant we had to rely on Wi-Fi location, which was not very good. Again, we got around this by using visual odometry and relying on users' turns. https://github.com/chelseyong/Android-App-AR-Map-Navigation


Had this feature on my Google Maps a while back, thought it was already released and had some fun using it, but you need a constant internet connection.


I think they beta tested on Pixel phones.


That must be it.

I rarely use walking directions due to the awful accuracy of GPS between buildings. AR walking directions are fantastic.


I also had it on my LG, because of my Local Guide status


How much of a battery drain is current AR tech? I would imagine it's decently computationally expensive.


Why is this a thing? Why would Google invest money to create this? To use it, you must hold your phone in front of your face, with camera on, while walking. It is a regression in my opinion, because it adds no value to their current walking navigation system, and also encumbers the user with having to hold their phone up at eye level. Is it a cool novelty? Yes. Is it useful or productive (other than creating a new surface for advertising)? No. I strongly advise against using this gimmick. Please don't feed the bear. That is all.


It actually does not allow you to walk and use it at the same time. It is meant for stopping, getting your bearings, and then putting your phone down and continuing on your way.

It definitely would be useful in unfamiliar, dense urban environments where maybe the streets are a bit too close together to easily distinguish on an overview map.


It does allow you to walk and use it at the same time. But since it auto switches to normal view when you hold the phone flat it is easy to swap between both modes.


I routinely navigate in London, where I walk out of a tube station and have absolutely no clue which way is which. The roads are unlabelled, all look the same, and the GPS direction dot isn't quite good enough.

Last week, with this feature, I pointed my camera across the street and it told me to go left. It was really really useful.


Yes, I was in London last week and had the same issue with the tube exits. The tall buildings also make block level GPS pretty useless. I hope this is as good as it sounds


The roads are not unlabelled. There are street signs mounted on walls at almost every intersection, and additionally a huge number of local area maps for pedestrians: https://tfl.gov.uk/info-for/boroughs/maps-and-signs Every single tube stop has "exit maps" that show you where you'll be emerging at street level before you exit the station.

Being able to read a map and orient yourself in your environment appears to be a dying art.


> Every single tube stop has "exit maps" that show you where you'll be emerging at street level before you exit the station.

Not at every exit, and you can't just casually stroll over to one in the middle of rush hour.

> The roads are not unlabelled

Sure they are, half the time the tiny road sign is obscured by adverts, construction, or just at an awkward angle not visible from the tube exit.

> and additionally a huge number of local area maps for pedestrians:

Again, not at every station, mainly the tourist ones.


It was extremely useful for me in Tokyo, where the streets are completely unlabeled. The big win is using image recognition of buildings over pure GPS direction finding, since that can get turned around in the caverns.



