This is not a technology problem, it's a process problem. The vast majority of submitters could be cheaply verified, and for the small number of potential spammers there are several options:
1. Require submitters to be legally traceable and identifiable. Establish fines/fees for misuse of your property and charge their bank accounts for cleaning fees. Blacklist them for life.
2. Once a submitter is identifiable and legally traceable, charge exponentially higher fees as their number of submissions grows, unless they want to register as a chain business, in which case charge a reasonable fee for validation and background checks.
3. Detect and block automated mass submissions, except from highly trusted entities such as big chains (a rough sketch of ideas 2 and 3 follows below).
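To make ideas 2 and 3 concrete, here is a minimal sketch of what such a policy could look like. It is purely illustrative: the fee constants, the TRUSTED_CHAINS allowlist, and the helper names are assumptions for this example, not anything Google (or anyone else) actually runs.

```python
BASE_FEE = 5.00          # assumed flat fee for a single listing from a verified submitter
FEE_GROWTH = 2.0         # each additional listing doubles the per-listing fee
BULK_THRESHOLD = 20      # above this many submissions/day, assume automation
TRUSTED_CHAINS = {"acme-coffee", "example-burgers"}  # hypothetical vetted chain accounts


def submission_fee(num_prior_submissions: int, is_verified_chain: bool) -> float:
    """Exponentially rising fee for individuals; flat validation fee for vetted chains."""
    if is_verified_chain:
        return BASE_FEE
    return BASE_FEE * (FEE_GROWTH ** num_prior_submissions)


def looks_like_mass_submission(submitter_id: str, submissions_today: int) -> bool:
    """Flag automated bulk submissions unless the account is a trusted chain."""
    if submitter_id in TRUSTED_CHAINS:
        return False
    return submissions_today > BULK_THRESHOLD


if __name__ == "__main__":
    # 11th listing from an unverified individual: 5.00 * 2**10 = 5120.00
    print(submission_fee(10, is_verified_chain=False))
    # The same volume from a vetted chain stays at the flat fee
    print(submission_fee(10, is_verified_chain=True))
    # 50 submissions in a day from an unknown account gets flagged
    print(looks_like_mass_submission("random-user-42", submissions_today=50))
```

The point of the exponential fee is that spamming a handful of fake listings becomes prohibitively expensive for an individual account, while a background-checked chain pays the same reasonable flat fee per listing.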
Google Maps is not optimizing for "percent of listings that were completely accurate at time of submission". That's how paper maps (and to a lesser degree, the Yellow Pages) mostly worked. There's a reason we stopped using them.
What Google's doing is harder and more nuanced -- they want to accurately capture the world as it exists around you right now. That means understanding what parts of the world around you you care about and understanding what level of accuracy (as opposed to promptness of updates) they should optimize for. Given Maps' extraordinary success as a product, they've done this quite well. But, yes, it has failure modes that users should be aware of and that Google tries to combat without compromising the main goal.
Google is trying to build maps without human verification, because humans are expensive, and to some degree because humans can be tricked too.
Accepting content without verification certainly has a side effect of getting businesses on board the instant they open, too, but that's just a happy accident.
Google is fine with expensive (they've sunk many billions into Maps over 10+ years and only just started monetizing it). But humans don't scale. If you want to have ten thousand data points about every business on Earth, you can't get there by relying on humans. You need to automate everything from top to bottom.
Humans can scale, it's just that Google chooses not to use them. Anything that requires a human in the loop is something they won't do.
Customer service, not going to happen.
Having YouTube Kids actually have appropriate content for kids, unlikely.
Being a trustworthy source for businesses on maps, no way.
Google products have up to 2 billion users. It's nearly impossible to scale any human support to this size.
Because it's actually billions of users. It's a billion users for Google Maps. It's two billion users for YouTube. It's nearly a billion for Gmail. And the requests for those different products have to go to different support personnel.
In areas where there are significantly fewer users they do provide human support, for example in GCP.
I work for a product with big numbers too. We have humans reading the support queue (with a lot of engineering to prefilter and group for bulk replies and what not). People may or may not like our support, or may complain it's not as good as it used to be, but it's there.
Frankly, we couldn't have as many users as we do without a human connection in support. Users tell us what we're doing wrong, and what we need to do that we're not doing, and where we need to improve --- but only if you listen to them.
If it's impossible to provide human support for a system, and if it's possible to automatically exploit the system to affect a nontrivial fraction of users, then perhaps the system shouldn't have billions of users?
Google Maps does allow any user to submit edit requests for place details (opening hours etc.) and flag suspect entries. I guess one could be upset about this attempt to get users to work for free, but personally I don't consider it exploitative because (a) verifying a local business is actually far easier for local people than for anyone Google could hire and (b) Google Maps is such a useful product that the distinction from, for example, Wikipedia is less clear-cut than the simple for-profit/non-profit designation makes it out to be.
> What Google's doing is harder and more nuanced -- they want to accurately capture the world as it exists around you right now.
I challenge that. I do not believe it is true. Google wants to be the go-to directory for spatial queries so that they can display their ads and make tons of money. The quality of the data is irrelevant after a certain level, as long as there is no viable alternative or people just blindly trust Google. Google Maps' data is quantity over quality by a wide margin.
Google is a huge organization. It is possible for Larry or Sundar to have a 30 year plan that involves Maps making money while every engineer working on Maps day to day (including Director and VP-level people) cares deeply about product quality and has the freedom to actually put a lot of resources towards that.
I agree about quantity over quality -- the product strategy boils down to "collect lots of data and build models with it". In the short term, each new feature has middling quality data with middling quality models. But over time they get better and better. It's just that Maps keeps adding new features that are at that early, kind-of-crappy stage, and you forget how much the really polished ones used to suck.
Why not have both? Still allow users to submit their own listings, but (like Twitter/Instagram) have an additional verification process that gives a business a "blue tick" if verified.
> The Maps mobile app has had this for at least 3 years -- they ask you questions about places you know. You can also suggest edits on Google Maps which are sent to others for review. But not that many people use either feature.
This is laughably ridiculous and not scalable at all. Stores close and pop up all the time. Not everyone thinks of signing up for Google Maps. And besides, where is the manpower to do verification of millions of stores every day? It's very easy to think something is easy until you actually think through the real-world details.
> The vast majority of submitters could be cheaply verified, and for the small number of potential spammers
Yeah, okay, sure. I'm sure all the engineers working on this product at Google have never considered anything you wrote before.
If there was a cheap, easy way to block spammers and meet all their other constraints, I'm sure they'd have done it by now.
No matter what you do, if there is money involved, the scammers will eventually work around whatever you have in place to stop them. It is a constant, never ending game of whack-a-mole.