
All Models of Machine Learning Have Flaws (2007) - walterbell
http://hunch.net/?p=224
======
peter303
The resurgence of AI is just as hyped as expert systems and logic computers
in the 1980s. I just hope the inevitable AI bubble crash is not as damaging as
it was then.

~~~
dannypgh
What were the highly profitable industries built around expert systems and
logic computers in the 1980s?

The fact that AI seems to be proving itself with real-world, commercially
interesting results in fields as disparate as advertising, facial
recognition, and self-driving automobiles makes me think the comparisons with
the 80s are premature.

~~~
YeGoblynQueenne
>> What were the highly profitable industries built around expert systems and
logic computers in the 1980s?

I'm not sure there were any industries built around expert systems as such,
but don't forget that once there's a practical application of an AI system,
it's not considered AI anymore.

Accordingly, expert systems are still widely used in industry today, first
of all because there's tons of legacy software created in the '80s and '90s
that is still lying around and being maintained. But also because the
definition of an expert system is a database of rules encoding expert
knowledge about a given domain, plus an inference engine to select the
appropriate rules in any situation - and that is exactly what most enterprise
applications that manage business rules are, for instance fraud detection at
financial orgs etc. (which I'm familiar with).
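
The rules-plus-inference-engine structure described above can be sketched in a few lines. This is a toy illustration, not any real fraud system: the rules, fact names, and thresholds are all made up for the example.

```python
# A minimal expert-system sketch: a database of if-then rules encoding
# (hypothetical) fraud-detection knowledge, plus a forward-chaining
# inference engine that fires whichever rules match the current facts.

rules = [
    # (condition over the facts, conclusion added when the condition holds)
    (lambda f: f.get("amount", 0) > 10_000, "large_transaction"),
    (lambda f: f.get("country") != f.get("home_country"), "foreign_transaction"),
    (lambda f: "large_transaction" in f["flags"]
               and "foreign_transaction" in f["flags"], "flag_for_review"),
]

def infer(facts):
    """Keep firing rules until no rule adds a new conclusion."""
    facts = dict(facts, flags=set())
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if conclusion not in facts["flags"] and condition(facts):
                facts["flags"].add(conclusion)
                changed = True
    return facts["flags"]

# A large foreign transaction triggers all three rules, including the
# derived "flag_for_review" conclusion, via chained inference.
print(infer({"amount": 25_000, "country": "FR", "home_country": "US"}))
```

Real business-rules engines add conflict resolution, rule priorities, and efficient matching (e.g. the Rete algorithm), but the shape is the same: knowledge lives in the rule base, and the engine is generic.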

And of course there are industries where you can't afford to use a machine
learning system to do your work. Think of jet engine maintenance, for
instance. You don't want a system with a 10% error rate doing that. So I'm
told that this is the kind of job that the industry, even today, leaves up to
a good old-fashioned expert system.

------
joantune
One should also read this: [https://www.quantamagazine.org/20151203-big-datas-
mathematic...](https://www.quantamagazine.org/20151203-big-datas-mathematical-
mysteries/)

~~~
walterbell
A comment on that article references
[http://www.i-programmer.info/news/105-artificial-
intelligenc...](http://www.i-programmer.info/news/105-artificial-
intelligence/9090-the-flaw-in-every-neural-network-just-got-a-little-
worse.html),

_"Early in 2014 a paper "Intriguing properties of neural networks" by ...a
team that includes authors from Google's deep learning research, showed that
neural networks aren't quite like us - and perhaps we aren't quite like us, as
well. What was found was that with a little optimization algorithm you could
find examples of images that were very close to the original image that the
network would misclassify. This was shocking ... somehow complex classifiers
such as neural networks create classification regions which are more complex,
more folded perhaps, and this allows pockets of adversarial images to exist."_

