
The AI Revolution Hasn’t Happened Yet - wei_jok
https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7
======
visarga
Well said:

> Thus, just as humans built buildings and bridges before there was civil
> engineering, humans are proceeding with the building of societal-scale,
> inference and decision making systems that involve machines, humans and the
> environment. Just as early buildings and bridges sometimes fell to the
> ground — in unforeseen ways and with tragic consequences — many of our early
> societal scale inference and decision making systems are already exposing
> serious conceptual flaws.

~~~
return1
That's a trivial statement unless he can suggest a different way to do
things. Should people have held off building any bridges until the 20th
century arrived? Building things and science have always evolved side by side
in a feedback loop.

~~~
rdlecler1
Actually it’s not a trivial statement. He’s saying that we don’t have enough
theory guiding us and instead we just spin up a tensorflow library without
having any clue what’s going on underneath the hood. And I don’t mean
understanding linear algebra in neural networks, I mean we don’t have a good
theory of computation for Artificial neural networks like we do for Boolean
electronic circuits. There is a little bit of work out there, but no one seems
to be interested in the theory and why these work. The fact that we still
represent networks as fully connected when many of those weights w_ij are
completely spurious tells me that we don’t spend enough time figuring out
what’s going on and instead focus on the result. And this lack of attention
will be why we’ll hit another wall. We should be reverse engineering
intelligence and comparing that to AI so that we can build up the theoretical
equivalent of aerodynamics.
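The point about spurious weights can be made concrete with magnitude pruning, a simple empirical probe: zero out the smallest weights and see how much of the "fully connected" structure actually matters. A minimal sketch, assuming a toy weight matrix and a made-up pruning fraction:

```python
def prune_by_magnitude(weights, fraction):
    """Return a copy of `weights` with the smallest `fraction` of entries
    (by absolute value) set to zero."""
    flat = sorted(abs(w) for row in weights for w in row)
    cutoff = int(len(flat) * fraction)
    threshold = flat[cutoff] if cutoff < len(flat) else float("inf")
    return [[0.0 if abs(w) < threshold else w for w in row]
            for row in weights]

w_ij = [[0.01, -0.9, 0.02],
        [0.8, -0.03, 0.7]]
pruned = prune_by_magnitude(w_ij, 0.5)  # drop the smallest half of the weights
# pruned == [[0.0, -0.9, 0.0], [0.8, 0.0, 0.7]]
```

In practice, pruned networks often retain most of their accuracy, which is one piece of evidence that the dense parameterization is wasteful.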

~~~
visarga
We already hit a wall - neural nets are easily hacked. The field is called
Adversarial Attacks and Defences of Neural Nets. There's no perfect fix yet
and it is a major security flaw.
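The kind of hack being described can be sketched with the fast gradient sign method (FGSM), one of the simplest adversarial attacks: nudge each input feature by a small amount in the direction that most changes the model's output. The linear "model" and numbers below are illustrative toys, not a real attack on a deep net:

```python
def sign(x):
    return (x > 0) - (x < 0)

def fgsm_perturb(x, grad, eps):
    """Fast gradient sign method: shift each feature by eps in the
    direction given by the sign of the gradient."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# For a linear score w.x, the gradient of the score w.r.t. x is just w,
# so the gradient of "minus the score" is -w.
w = [0.5, -1.0, 2.0]
x = [1.0, 1.0, 1.0]
score = sum(wi * xi for wi, xi in zip(w, x))          # 1.5
x_adv = fgsm_perturb(x, [-wi for wi in w], eps=0.1)   # push the score down
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))  # ~1.15
```

The perturbation is tiny per feature, yet it reliably moves the output; against deep image classifiers the same idea flips predictions with changes invisible to humans.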

~~~
rdlecler1
Your example doesn’t prove they hit a wall, only that they’re fallible.
Humans ‘get hacked’ every day. I wouldn’t exactly say human intelligence has
hit a wall because of it, rather that human intelligence may have weaknesses
and can be exploited. Frankly I wouldn’t expect neural networks to be any
different.

------
eanzenberg
I saw Prof. Jordan give this talk in person, and the 2nd paragraph resonated
with me. I wish there were more critical thinking involved in
science/research/higher ed/industry, but unfortunately there's a lot of
cheap money floating around with "experts" slurping it up without care for
outcome.

"But the episode troubled me, particularly after a back-of-the-envelope
calculation convinced me that many thousands of people had gotten that
diagnosis that same day worldwide, that many of them had opted for
amniocentesis, and that a number of babies had died needlessly."

~~~
tlarkworthy
Like we have defensive programming: assert this field is not a stupidly large
number. Perhaps we can encode our assumptions as probabilistic asserts, and
defensively check that our assumptions are not violated. E.g. the current
distribution looks like my training data.
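One way such a probabilistic assert might look is a drift check against statistics recorded at training time. The training samples and 4-sigma cutoff below are illustrative assumptions, not a standard API:

```python
import math

def fit_stats(samples):
    """Mean and standard deviation of a feature seen during training."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, math.sqrt(var)

def prob_assert(value, mean, std, max_sigmas=4.0):
    """Fail loudly if `value` is implausibly far from the training mean."""
    if std > 0 and abs(value - mean) / std > max_sigmas:
        raise AssertionError(
            f"{value} is {abs(value - mean) / std:.1f} sigma from the "
            f"training mean {mean}; don't trust the model here")

mean, std = fit_stats([9.0, 10.0, 11.0, 10.0])  # stand-in training data
prob_assert(10.5, mean, std)  # plausible input: passes silently
```

An input like `prob_assert(100.0, mean, std)` would raise instead of silently producing a garbage prediction, which is exactly the defensive behavior being suggested.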

------
Talyen42
The biggest challenge yet unconquered is getting your "average" business
($1-100m revenue, zero AI knowledge) using ML to help with _literally anything
they do_.

I'd wager less than 1% of businesses outside of SV even have a clue where to
begin, or what to use it for, my employer included. "Do we hire some AI guys?"

We'll need to crack the 1%-using-it mark for me to consider the revolution
"begun"...

~~~
gliese1337
I work for an actual tech company, which actually told me to look into a
couple of ML projects to improve our operating efficiency, which actually
produced good results and had an obvious and short path to being put into
production... and which were promptly shelved.

Business people make no sense.

~~~
czbond
Can you go into why they were shelved? Were they normal technology and
business reasons - or related to ML directly? I ask so that potentially others
can learn to either navigate around those in the future.

~~~
gliese1337
I wish I could. I have very little insight. I presume it's some sort of
"normal business reasons", but from my point of view the decision process
went something like this:

Me: "It's finished, here are the areas of strength and weakness, and here's
where we can deploy the system for maximum effectiveness."

Business: "We're thinking about the best way to deploy this."

Me: "This is how you deploy it."

Business: "We'll think about it and get back to you. Don't do anything until
we tell you."

Me: "..."

Business: "..."

~~~
xy55
Perhaps they needed you to elucidate the risks or possible downsides?

~~~
gliese1337
That's the "weaknesses" part. And I did in fact spend a good bit of time with
my manager going over exactly what the model _does not_ mean, and what you
_should not_ conclude from it or use it for.

------
deft
I'm not an AI researcher (although from my internet reading that's not a hard
title to claim ;)) but I feel that ML/DL can't go much farther than they
already have. The concept of "we just need more power" is an obvious fallacy
to me.

~~~
randyrand
Humans are proof that machine intelligence can be improved quite a bit. We are
just complicated machines, no?

~~~
goatlover
We're animals who have to worry about surviving and passing our genes on in a
variety of social settings.

Machines are the tools we make to aid the above.

~~~
fvdessen
And your body and worries are just tools that help your genes reproduce.

~~~
danharaj
And your genes are just tools that help the chemical environment they regulate
reproduce.

~~~
dingo_bat
And the chemical environment is just trying to maximize entropy.

------
throwaway84742
The main problem with “AI revolution” today is that 95% of things that
practitioners tell you it can do are just nowhere close to sufficient accuracy
under the range of conditions they’d need to work in in the real world.
Easily more than half of them don’t even work at all outside a demo with
hand-picked examples. And nobody seems to care. This narrows the scope of
applicability to a tiny sliver where you can tolerate high error rates, and
even there it’s a hard slog to get anything done, and AI is relegated to the
role of the “cherry on top”, rather than baked into the pie.

There are a few areas where performance is good enough to be practically
useful. One of those (facial recognition and tracking) is currently causing a
“revolution” in China. I’m just not sure it’s the kind of revolution we want.

~~~
TwoBit
I have this idea that the only way to create true AI necessarily results in
something that has feelings.

~~~
throwaway84742
I’d settle for probabilistic, context aware cognition, myself. Sadly (or
thankfully, depending on one’s perspective) this won’t happen in my lifetime.

~~~
jon_richards
What do we want?

Context aware language processing!

When do we want it?

When do we want what?

------
mexus
A related talk that Michael Jordan gave at a machine learning conference
called SysML last month.

[https://youtu.be/4inIBmY8dQI](https://youtu.be/4inIBmY8dQI)

~~~
cbHXBY1D
To anyone upset by MJ's Medium post, please watch the video. It's a much
better representation of his thoughts on the current state of machine learning
than the blog post.

------
xvilka
Most certainly we will see the comeback of "logical" AI too, including systems
built on Prolog.

~~~
skjerns
Prolog _horror intensifies_

------
msteffen
> The problem had to do not just with data analysis per se, but with what
> database researchers call “provenance” — broadly, where did data arise, what
> inferences were drawn from the data, and how relevant are those inferences
> to the present situation?

Plug: I work at a company
([https://www.pachyderm.com](https://www.pachyderm.com)) whose product is
designed precisely to track data provenance across pipelines and through a
company's larger data-processing operation for this reason.
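As a sketch of what provenance tracking means in code (an illustration only, not Pachyderm's actual API): each derived artifact records its inputs and the transformation that produced it, so any inference can be traced back to its raw sources.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    inputs: list = field(default_factory=list)  # upstream Artifacts
    transform: str = "raw"                      # how it was produced

def lineage(artifact):
    """Every artifact name reachable upstream of `artifact`."""
    seen, stack = [], [artifact]
    while stack:
        node = stack.pop()
        if node.name not in seen:
            seen.append(node.name)
            stack.extend(node.inputs)
    return seen

# Hypothetical pipeline, echoing the article's medical example:
raw = Artifact("hospital_ultrasounds")
cleaned = Artifact("cleaned_scans", [raw], "deduplicate")
scores = Artifact("risk_scores", [cleaned], "score_model_v1")
# lineage(scores) == ["risk_scores", "cleaned_scans", "hospital_ultrasounds"]
```

When a downstream number looks wrong, the lineage answers the article's question directly: where did this data arise, and through what steps?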

~~~
TTPrograms
You all should really play that up more in your messaging - "provenance" is
one of the hardest and least-addressed components of building AI/ML/data
science systems that actually have measurable impact (rather than analysts
making plots and speculating). In general having a structured, centralized
representation of business processes is super valuable I'm sure.

If you write a blog post describing how critical that is to practical data
science efficacy with some examples I bet you'll end up in a bunch of VP
inboxes.

~~~
msteffen
It may be time for a refresh, but our CEO has such a blog post from about a
year ago: [https://medium.com/pachyderm-data/provenance-the-missing-
fea...](https://medium.com/pachyderm-data/provenance-the-missing-feature-for-
good-data-science-now-in-pachyderm-1-1-2bd9d376a7eb)

~~~
TTPrograms
Nice - I went to the blog page on your site and this wasn't there. Good stuff.

------
prolikewhoa
What are we defining as AI now? It seems like AI has had a lot of hype in the
past few years, but I really don't think we're even close unless it's created
by mistake. We don't understand the brain fully, we don't understand
consciousness, we just know so little in the grand scheme of things.

Do we have the computing power to come anywhere close to what we need, will we
have that computing power any time soon without a major breakthrough?

~~~
Ar-Curunir
Did you read the article? Some of your questions would have been answered.

------
mgeorgoulo
This sort of distraction arises whenever we try to adjust logic to words and
not the other way around. It is said that this kind of distinction is only
relevant to scientists. The article is very convincing, though, that in the
case of AI the general public must be able to make those distinctions.

------
laythea
Where is the end game? The problem with this kind of AI revolution is that it
relies on data about us. And this is responsible for the feverish attitude
towards collecting or scraping data, sometimes from the public domain but
more usually from us. I for one am having none of it.

~~~
paganel
It will probably not end very soon. The author did mention some concerns
about privacy, but only once, and then moved forward with his reasoning like
nothing had happened. Also, it’s disheartening to see that his reaction to a
piece of technology that could probably have killed his unborn kid (the
medical device at the beginning which was fed bad data) was to say “we should
use more technology!” He did not question the underlying premises behind the
whole operation at all: like, is it OK to “let” a machine help a
technologically untrained person (the doctor) decide whether my unborn kid
should undergo a life-threatening intervention or not?

------
deepaks4077
Does anyone who's working or has worked in the healthcare "Intelligence
Augmentation" space have something to say about the challenges they've faced
in setting up an "Intelligent Infrastructure"? Maybe someone who's worked
with the DeepMind Streams application?

~~~
dontreact
A lot of people don't think about how hard validation can be. You need a lot
of high quality labeled data to confidently say that your algorithm works. I
don't see a way around this even if learning algorithms become very
data-efficient.
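The need for lots of labeled data can be made concrete with a back-of-the-envelope confidence interval: the uncertainty in a measured accuracy shrinks only as 1/sqrt(n). The numbers below are illustrative.

```python
import math

def accuracy_ci_halfwidth(acc, n, z=1.96):
    """Normal-approximation half-width of a 95% confidence interval
    for an accuracy `acc` measured on `n` labeled examples."""
    return z * math.sqrt(acc * (1 - acc) / n)

# At 90% measured accuracy:
small = accuracy_ci_halfwidth(0.9, 100)    # ~ +/- 0.059 (5.9 points)
large = accuracy_ci_halfwidth(0.9, 10000)  # ~ +/- 0.0059 (0.59 points)
```

Narrowing the interval by a factor of 10 costs a factor of 100 in labeled examples, which is why clinical-grade validation is so expensive.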

------
stevehiehn
An A.I. revolution may not be happening, but an 'Augmented Intelligence'
revolution is.

------
taurath
I’ve been finding it difficult to find concrete examples of where AI is
useful. Suggestion engines? Alg optimization? What would be something my
parents would recognize that is being helped by AI?

~~~
choxi
I like the category of "augmenting human skills". Google's Quick Draw is an
example of augmented drawing, but you could use the same concept for writing,
creating music, even programming. DeepFakes are basically the beginning of
augmented video production. Imagine if you didn't need a green screen and body
tracking suits to replace any person or scene in a movie with whatever you
want. The whole field of generative deep learning is interesting and, to me, a
really unexpected result of AI research.

------
Bye_Felicia
...AND IT WON'T for a VERY LONG TIME. Those of us who know the origins of Ray
Kurzweil knew back then that he was nothing special, and his ideas were
derived from much smarter people around him.

As I recall, the article in WIRED from like 20 years ago talked about how he
missed his dad, and how in the future you would be able to take a room full of
all his dad's old crap, and AI would be able to recompile his dad into dad 2.0
in the cloud!

(he didn't use these terms but that was the gist of the ridiculous and absurd
WIRED article interview with him.)

Why do I bring up Ray? It's because circus acts like Ray and Singularity
University sold a bunch of people certain goods, which were VASTLY
overestimated in terms of delivery times and in terms of the goods themselves.

First of all, many folks don't want to admit this, but WE MAY NEVER BE ABLE TO
CREATE A SYNTHETIC CONSCIOUSNESS! I think we will eventually, but it's
definitely not a certainty. It just may not be possible for a completely
synthetic 'consciousness' to exist.

This type of hype has become almost religious in nature, just like the folks
who really think Moore's Law is an actual scientific law and not just an
observation and prediction based on very limited data.

In conclusion, you'll get your AI just like those folks in the 50's, or was it
60's? ...well anyway, back when they had those world's fairs and predicted
flying cars.

We're just not there yet. We don't even have basic foundations to build this
yet. There is always a possibility of a brilliant individual, who may leapfrog
humanity, but for right now, it's just vaporware.

AND YES. Even IBM's Watson and Google's Go-winning "AI" are just vaporware:
machine learning and big data sets. It's not AI even a little bit.

Sorry folks, I love futurizing, and inventing words ;) , just like everyone
else, but... if AI is in the stadium, we aren't even in the parking lot.

---

And now, after my rant, I shall read this article since it looks really
interesting. I'm surprised it's on Medium.

~~~
MattRix
Not sure how you can claim things like DeepMind are "vaporware" since it
achieved exactly what it was intended to do. If you disagree that it should be
called "AI", well that just comes down to semantics, but I think "Artificial
Intelligence" is exactly what it is.

Of course it will be possible to create synthetic consciousness eventually,
why wouldn't it be? It may take 10 years or 10,000 years, but if you think it
will never happen for the rest of human existence, that is nonsense. If it
already exists in nature, then there is absolutely no reason why it can't be
done synthetically.

~~~
Bye_Felicia
This came as no surprise to me, and hence, I was spared the pangs of
disillusionment, but there are countless people who have actually dealt with
IBM's marketing gimmick, Watson, and have come away with a serious hangover:

---

“Watson is a joke,” Chamath Palihapitiya, an influential tech investor who
founded the VC firm Social Capital, said on CNBC in May.

A Reality Check for IBM’s AI Ambitions - MIT Technology Review

[https://www.technologyreview.com/s/607965/a-reality-check-
fo...](https://www.technologyreview.com/s/607965/a-reality-check-for-ibms-ai-
ambitions/)

Why Everyone Is Hating on IBM Watson—Including the People Who Helped Make It

[https://gizmodo.com/why-everyone-is-hating-on-watson-
includi...](https://gizmodo.com/why-everyone-is-hating-on-watson-including-
the-people-w-1797510888)

Is IBM Watson A 'Joke'?

[https://www.forbes.com/sites/jasonbloomberg/2017/07/02/is-
ib...](https://www.forbes.com/sites/jasonbloomberg/2017/07/02/is-ibm-watson-a-
joke/)

---

The final article is the most telling in its revelations. Of course, you must
speak the language to understand IBM's Gartner-magic-quadrant-esque
reasoning: their argument is that because a bunch of businesses they
convinced to sign up have 'Watson' stamped on their engagement contracts,
it's PROOF that Watson is breakthrough technology, catapulting research into
a new era of big data and machine learning... lol

I'm not saying some of the things IBM claims it does cannot currently be
done, but I have no doubt at all that IBM is not doing them, and that behind
closed doors Watson is most certainly a complete joke.

~~~
MattRix
I didn't even mention Watson, since I know nothing about it. Even if Watson is
a complete sham, it doesn't mean ALL AI projects are shams!

For example, you can't say that DeepMind is vaporware. AlphaGo is real, and it
works. Self-driving cars are real, and they work, etc.

------
aaronsnoswell
Can someone with more karma modify the title to be less click-bait-ey?

------
nicodjimenez
Michael Jordan is just mad he missed the boat on deep learning. Must be tough
being that brilliant and at the same time get left in the dust by algorithms
from the 90's and just sheer brute force. The AI revolution is here, from
Google search to Uber pool to auto correct to recommendation engines, it is
just a slow process.

~~~
Ar-Curunir
Lol Michael Jordan is at the forefront of ML, Stats, and yes, deep learning.
He's not missed any boat.

~~~
nicodjimenez
Michael Jordan is a mathematician far more than an engineer. I've never heard
of any deep learning papers or open source software coming from his lab. Like
a lot of mathematicians though working in stats and ML, he's been forced to
adapt. There is lots of resentment amongst mathematicians about the current AI
revolution, to me this reeks of that. I could be wrong!

------
John_KZ
The AI Revolution is happening right now. There's enormous proven potential
which is materializing at ever-increasing rates. The first generations of
hardware accelerators are slowly coming out, and they're going to leave
everyone astounded.

It's a bit like the early days of the internet, it's not always the most
practical solution, but it definitely works.

~~~
crististm
If it were, there would be no one denying it. I don't see _that_, though, so
my conclusion is that, no, there is no revolution yet.

