
As for "spectacularly well" -- well, the person behind the curtain wiggling the levers retains a lot of influence. Garbage in, garbage out, remember?

I'm reminded of the time Google Translate autodetected "Gesundheit" as Spanish, and of Gmail kindly offering to translate "hahaha" from Portuguese, putting an ad for coconuts next to it.

Data science is improving, but you might be surprised how slowly. Especially in the consumer space, because the metrics on effectiveness are so warped.

Voice recognition of numbers only, over a phone connection, can be below 40% accuracy! Much of the perceived success of these systems comes not from the core machine algorithm but from clever human tweaks around it, and from end-users who are happy with what they get, not quite realizing how goofy it all would look if they got a glimpse of the raw data.




I'm not sure how old you are, but what exactly are your expectations when you state "Data science is improving, but you might be surprised how slowly"?

We have machines that can categorise pictures better than humans. In 2011 that seemed completely impossible.


Age is unrelated to wisdom and I'm talking about the full experience.

A Google Image search for "Wonder Wheel" (the famous Coney Island Ferris Wheel) shows this spoked diagram within the first page of results:

http://searchengineland.com/figz/wp-content/seloads/2011/07/...

Also this year, Google Photos classified black people as gorillas.

http://www.usatoday.com/story/tech/2015/07/01/google-apologi...

Consumers are rarely exposed to the raw machine output -- for good reason. My experience building these sorts of systems is that they're pretty goofy and they fail unexpectedly. After chasing audio and video problems using custom software as well as three major toolkits, I find myself hyper-aware of the flaws in public systems.

Also common sense dictates that it's more about the data scientist on the way in and the UX person on the way out than the machine.


I'm not commenting at all on wisdom, merely on how quickly time passes for people of different ages. 5 years seems like a long time for a 15 year old, and hardly anything for a 50 year old.

The first result for me is https://upload.wikimedia.org/wikipedia/commons/2/27/Wonder_W...

Looks good to me?

As for the gorilla incident, I don't think anyone is claiming that errors don't occur, and it's very fair to say that particular error was very embarrassing for Google. It's interesting how children make the same kind of embarrassing mistakes, eg: https://www.reddit.com/r/Parenting/comments/24me24/embarrass...


> Age is unrelated to wisdom and I'm talking about the full experience.

That's not true; age and wisdom are quite related, just not directly causal. They are correlated: older people are generally wiser. It's not a guarantee, nor is it impossible for young people to be wise, but that's certainly far less common. With age comes experience, and with experience, wisdom has fertile ground to grow, though it doesn't always.


Notwithstanding the fact that "age is unrelated to wisdom" was a gentle contextual device used to dismiss the commenter's odd assumption about my age, a search for "age is unrelated to wisdom" turns up actual research papers.

"neither general nor personal wisdom have a positive linear relationship to age... age is not only not related to personal wisdom (as is the case for general wisdom) but even negatively related..."

[Mickler & Staudinger (2008), Sneed & Whitbourne (2003)]

Source: The Scientific Study of Personal Wisdom: From Contemplative Traditions to Neuroscience

http://www.amazon.com/Scientific-Study-Personal-Wisdom-Conte...


If you actually searched for that, then you know there are far more studies showing the opposite. You can find a research paper supporting just about any position; that doesn't make it the majority opinion of the field. You had to ignore a lot of material saying the opposite to find something saying age and wisdom weren't related. That's OK, you're young, you'll get wiser with age.


That spoked diagram's image is named "google-wonder-wheel" and is linked from an article referring to Google's Wonder Wheel. That doesn't seem like a spurious result to me...


It's clearly an outlier. Try an image search for "Google Wonder Wheel" and you'll see a few Ferris wheels among lots of spoked diagrams -- but not the Wonder Wheel.

The gorilla example is more to the point. Google pushed a quick fix, and when that didn't work, they blocked the gorilla tag altogether. Why? Because a data scientist couldn't figure out how to fix the problem properly and deploy a solution. To me, that says quite a bit about the current realities of machine learning systems.


We also have history books. The history of AI is funny.


And for the first fifty years of AI research, the process more or less goes:

1) An AI researcher decides that problem X is hard enough that solving it represents intelligence

2) Researcher develops an algorithm to solve X

3) Now that we have that algorithm, it's no longer an AI problem

When you teach a computer to play chess, it's just depth-first search with a few heuristics added in. When you write an expert system to evaluate mortgage loans, it's just following an if-then script.

Machine learning techniques, though, seem to have stayed solidly in the AI camp.



