I'm reminded of the time Google Translate autodetected "Gesundheit" as Spanish. And Gmail kindly offering to translate "hahaha" from Portuguese, putting an ad for coconuts next to it.
Data science is improving, but you might be surprised how slowly. Especially in the consumer space, because the metrics on effectiveness are so warped.
Voice recognition of numbers only, over a phone connection, can be below 40% accuracy! Much of the perceived success of these systems comes not from the core machine algorithm, but from clever human tweaks around it. The rest comes from end users who are happy with what they get, not quite realizing how goofy it all would look if they got a glimpse of the raw output.
We have machines that can categorise pictures better than humans. In 2011 that seemed completely impossible.
A Google Image search for "Wonder Wheel" (the famous Coney Island Ferris Wheel) shows this spoked diagram within the first page of results:
Also this year, Google Photos classified black people as gorillas.
Consumers are rarely exposed to the raw machine output -- for good reason. My experience building these sorts of systems is that they're pretty goofy and they fail unexpectedly. After chasing audio and video problems using custom software as well as three major toolkits, I find myself hyper-aware of the flaws in public systems.
Also common sense dictates that it's more about the data scientist on the way in and the UX person on the way out than the machine.
The first result for me is https://upload.wikimedia.org/wikipedia/commons/2/27/Wonder_W...
Looks good to me?
As for the gorilla incident, I don't think anyone is claiming that errors don't occur, and it's fair to say that particular error was very embarrassing for Google. It's interesting how children make the same kind of embarrassing mistakes, e.g.: https://www.reddit.com/r/Parenting/comments/24me24/embarrass...
That's not true: age and wisdom are related, just not directly causal. They are correlated. Older people are generally wiser; it's just not a guarantee, and it's not impossible for young people to be wise either, merely far less common. Age brings experience, and with experience, wisdom has fertile ground to grow, though it doesn't always.
"neither general nor personal wisdom have a positive linear relationship to age... age is not only not related to personal wisdom (as is the case for general wisdom) but even negatively related..."
[Mickler & Staudinger (2008), Sneed & Whitbourne (2003)]
Source: The Scientific Study of Personal Wisdom: From Contemplative Traditions to Neuroscience
The gorilla example is more to the point. Google pushed a quick fix, but couldn't actually fix it, so they blocked the gorilla tag altogether. Why? Because a data scientist couldn't figure out how to fix the problem properly and deploy a solution. To me, that says quite a bit about the current realities of machine learning systems.
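Google never published how the block was implemented, but a plausible minimal sketch of that kind of stopgap is just a post-processing filter that suppresses offending labels on the way out, rather than retraining the model. Everything here (the label names, the `filter_predictions` helper, the prediction format) is a hypothetical illustration, not Google's actual code:

```python
# Hypothetical label names to suppress; the real blocklist is unknown.
BLOCKED_LABELS = {"gorilla", "chimpanzee"}

def filter_predictions(predictions):
    """Drop blocked labels from classifier output.

    `predictions` is assumed to be a list of (label, confidence)
    pairs produced by the image classifier.
    """
    return [(label, conf) for label, conf in predictions
            if label.lower() not in BLOCKED_LABELS]

print(filter_predictions([("gorilla", 0.91), ("dog", 0.40)]))  # [('dog', 0.4)]
```

The point of the sketch is that this is a UX-layer patch, not a model fix: the classifier still makes the error internally, and the system just hides it.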
1) An AI researcher decides that problem X is hard enough that solving it represents intelligence
2) Researcher develops an algorithm to solve X
3) Now that we have that algorithm, it's no longer an AI problem
When you teach a computer to play chess, it's just depth-first search with a few heuristics added in. When you write an expert system to evaluate mortgage loans, it's just following an if-then script.
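The chess case really is that mundane at its core. A minimal sketch of the idea (this is textbook minimax over a toy game tree, not any real engine's code; real engines add alpha-beta pruning and far better evaluation heuristics):

```python
def minimax(node, maximizing=True):
    """Exhaustive depth-first search of a game tree.

    A leaf is a number (the heuristic score of that position);
    an internal node is a list of child positions. The two players
    alternate: one maximizes the score, the other minimizes it.
    """
    if isinstance(node, (int, float)):  # leaf: heuristic evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Tiny two-ply tree: the maximizer moves first, the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # 3
```

That's the whole trick: search the moves, score the leaves, back the values up. Once you see it written out, it stops looking like "intelligence".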
Machine learning techniques, though, seem to have stayed solidly in the AI camp.