

Big Data is the new Artificial Intelligence - mark_l_watson
http://www.cringely.com/2014/04/15/big-data-new-artificial-intelligence/

======
mathattack
I like this article, but I think he misses that part of the problem with early
AI (in addition to not enough processing power) was poor mental models. AI
became LISP programming.

Another comment of his is a stretch: _The system just figures it out. And that
means there’s no need for theory._

It isn't that there is no theory behind what Google is doing. There is a
theory behind creating a pattern recognizer. It just doesn't force the pattern
recognizer to conform to an existing theory of how humans process information.

Building the black box still requires some knowledge of what you're trying to
solve.

I'm a big believer in the power of Big Data, but history is littered with
failed projects that tried to build machines to replace the thinker.

~~~
chazu
Thanks for this. I cringed when I read "there was no underlying theory." That
dog don't hunt.

------
mark_l_watson
Cringely's article is a lighter treatment of the recent "debate" between Noam
Chomsky and Peter Norvig.

~~~
mathattack
Yes - very clear that he lands on the side of Norvig, but he takes it to a
further extreme than Norvig does, no?

~~~
mark_l_watson
I think Cringely is more on Chomsky's side, since he is concerned about not
being able to understand the internals of models created with machine
learning.

It may sound like a contradiction, but I agree with both Norvig's and
Chomsky's positions, depending on context.

I would like to see a model for real AI developed from an understanding of how
our brains work, but real AI will probably not be created that way. When/if we
see real AI, we probably won't understand how it works.

~~~
hollerith
>When/if we see real AI, we probably won't understand how it works.

If that's true, then it's very distressing since a smarter-than-us AI whose
workings we do not understand would probably _kill us all_.

~~~
ableal
Maybe the AI will be sufficiently intelligent to just become catatonic in
short order.

(I believe I'm filching this from Larry Niven, no precise quotation handy.)

