
Google DeepMind Founder Demis Hassabis: Three Truths about AI - cpeterso
https://www.techrepublic.com/article/google-deepmind-founder-demis-hassabis-three-truths-about-ai/
======
mindcrime
I've been spending a lot of time lately digging into "outdated" 80's era
approaches to AI. Some people think I'm wasting my time and ask "why aren't
you doing Deep Learning?" This is the reason why:

 _" You can think about deep learning as it currently is today as the
equivalent in the brain to our sensory cortices: our visual cortex or auditory
cortex. But, of course, true intelligence is a lot more than just that, you
have to recombine it into higher-level thinking and symbolic reasoning, a lot
of the things classical AI tried to deal with in the 80s. One way you can
think about our research program is [that it's investigating] 'Can we build
out from our perception, using deep-learning systems and learning from first
principles? Can we build out all the way to high-level thinking and symbolic
thinking?'. In order to do that we need to crack problems like learning
concepts, things that humans find effortless but our current learning systems
can't do."_

I basically completely agree with Demis on this. A lot of the stuff that was
being done in the 80's wasn't "wrong" so much as it was ahead of its time. And
just like advances in data, algorithms, and compute power helped propel Neural
Networks (which are actually a _really_ old idea) to another level, I suspect
some other "old" approaches will also experience a similar resurgence due to
that same confluence of factors.

~~~
qbilius
What would be a good starting point for digging into that 80s stuff, in your
opinion?

~~~
mindcrime
Good question. There's a LOT of material out there, and it could take a
lifetime to parse through it all. So my own approach has been to skim a lot
and latch onto whatever superficially seems interesting at first blush, do a
deeper dive, and decide whether or not to keep digging. As it happens, one of
the first things I latched onto was abductive inference, and an approach to
automating it called "parsimonious covering theory" (PCT). I've been working
on my own implementation of PCT as a way to learn it.
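
To make that concrete, here's a minimal sketch of the core idea behind parsimonious covering: given causal associations from disorders to the manifestations they can produce, find the smallest sets of disorders that jointly account for everything observed. The toy network and all the names below are my own illustration, and I'm using minimum cardinality as the parsimony criterion (the full theory also studies weaker criteria like irredundancy):

```python
from itertools import combinations

# Toy causal network: disorder -> manifestations it can cause.
# Illustrative names only, not a real diagnostic knowledge base.
CAUSES = {
    "flu":     {"fever", "cough", "fatigue"},
    "cold":    {"cough", "sneezing"},
    "allergy": {"sneezing", "itchy_eyes"},
}

def minimal_covers(observed):
    """Return all smallest sets of disorders whose combined
    manifestations cover every observed manifestation."""
    disorders = list(CAUSES)
    for size in range(1, len(disorders) + 1):
        covers = [set(combo) for combo in combinations(disorders, size)
                  if observed <= set().union(*(CAUSES[d] for d in combo))]
        if covers:
            return covers  # parsimony: stop at the smallest cover size
    return []

# Neither disorder alone explains both symptoms, so two-element
# covers come back as competing explanations.
print(minimal_covers({"fever", "sneezing"}))
```

The real theory is richer than this brute-force search, but the set-covering intuition is the same.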

I also think there are some interesting ideas in Minsky's _The Society of
Mind_ approach, so I've been going through some of his stuff. That and the old
"Blackboard Architecture" model, which I see as having a tenuous sort of
connection.

And while it's more "90's" than "80's", I've also been reading a lot of older
literature on Multi-Agent Systems. There's been a lot of talk lately about
"Multi-Agent Reinforcement Learning", so I'm interested in that entire space.

Really, there's a thin, tenuous thread connecting most of my interests in this
area (with the PCT stuff being sort of unconnected w/r/t the rest)... MAS's,
Blackboard Architecture, Society of Mind, all deal with the idea of multiple
"things" (call them modules, agents, components, whatever) working together in
some fashion. I suspect there's "some there there" in terms of unifying all of
these various AI approaches and achieving something useful. I could be wrong,
but this is where I've been directing a lot of energy lately.
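
Since the common thread is multiple modules cooperating through shared state, here's a deliberately tiny blackboard-style sketch. Everything in it (the task, the names, the two knowledge sources) is invented for illustration; real blackboard systems like Hearsay-II used much more sophisticated scheduling of knowledge sources:

```python
# Minimal blackboard sketch: independent "knowledge sources" watch a
# shared store and contribute whenever they recognize a gap they can fill.
blackboard = {"raw": "hello world", "tokens": None, "count": None}

def tokenizer(bb):
    # Fires when raw text exists but hasn't been tokenized yet.
    if bb["raw"] is not None and bb["tokens"] is None:
        bb["tokens"] = bb["raw"].split()
        return True
    return False

def counter(bb):
    # Fires when tokens exist but haven't been counted yet.
    if bb["tokens"] is not None and bb["count"] is None:
        bb["count"] = len(bb["tokens"])
        return True
    return False

# Listed "out of order" on purpose; the control loop below keeps
# cycling until no knowledge source can contribute anything.
knowledge_sources = [counter, tokenizer]

progress = True
while progress:
    progress = any(ks(blackboard) for ks in knowledge_sources)

print(blackboard["count"])  # prints 2
```

The point of the pattern is that neither module knows about the other; they coordinate only through the blackboard, which is also roughly how agents in a MAS can coordinate through a shared environment.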

------
adwhit
I always find it strange to hear people talk about climate change as if it is
some sort of tricky challenge we need to solve, like improving education or
curing cancer. It is a 'problem' of completely different magnitude. It is on
track to end civilization as we know it within two generations. A rational
society would think about nothing else.

Our children will be astonished and mystified - and furious - to see all the
things we got up to instead of dealing with this thing that is the only thing
that matters.

~~~
exceptione
> It is on track to end civilization as we know it within two generations. A
> rational society would think about nothing else.

If you mean by "to end" something like "to extinguish" than I think this
hyperbole is enormous.

If you mean by "to end" that the civilization will change, than there is
nothing new under the sun. If I look at pictures from 19th century, which is
very recent, I am always baffled at how much have changed in the passing of
time.

~~~
DennisP
A while back I read _Six Degrees_ by Mark Lynas, who read 3000 peer-reviewed
papers on the effects of climate change and summarized them, one chapter per
degree, with extensive references.

My takeaway from that book was that at three degrees things would be pretty
terrible but life would go on, while at four degrees modern civilization would
have a hard time surviving. Go all the way to six and there won't be many
humans left on the planet.

I'm not saying we'll get to four degrees in two generations, but at the rate
we're going we'll be well on the way, and enough feedbacks will have kicked in
by then to make it inevitable.

~~~
gowld
How is that possible? From the Amazon blurb, it seems that the book talks
about what would be _destroyed_ by climate change, but says _nothing_ about
what would be _created_. If temperatures increase, some currently cold areas
would become more habitable. If sea levels rise, some inland areas would
become coastal.
These changes would happen over generations, so populations would move. There
would be tumult in human and animal life, which is bad but not a novel part of
human experience (we've always had hurricanes, tsunamis, earthquakes, and
plenty of manmade disasters), but a new equilibrium would be reached. There
was once an Ice Age that supported human life! I don't think the overall human
environment 6 degrees in the future would be worse than, say, 200 years ago.

Also, it's been about a decade since the book was published. Have temperatures
risen? Have the predictions come to pass?

[https://www.amazon.com/Six-Degrees-Future-Hotter-Planet/dp/1...](https://www.amazon.com/Six-Degrees-Future-Hotter-Planet/dp/1426203853)

~~~
DennisP
We're currently at one degree above preindustrial, and the world looks a lot
like he described for that level.

The book was published in 2009 but most of the news since has not been
encouraging. A recent book that covers similar ground, also with a lot of
references, is _Unprecedented_ by David Ray Griffin; the conclusions are
similar. If you want more detail on the geologic evidence, see _Storms of My
Grandchildren_ by James Hansen.

You're surely right that there will be things that improve. However, we know
from geology and paleontology what's happened in previous major warming
periods: mass extinction, and a huge loss of biomass, with most of the
survivors clustered around the poles. That doesn't bode well for us.

~~~
antognini
> However, we know from geology and paleontology what's happened in previous
> major warming periods: mass extinction, and a huge loss of biomass, with
> most of the survivors clustered around the poles.

I thought that warming events were generally associated with speciation
events, at least on land. (Marine life is a different story.) The last event,
the Paleocene-Eocene thermal maximum, coincided with a major speciation event
for mammals.

------
dmreedy
Great, I hope he's right.

However, the AI that has panned out so far (whatever subset of the Hard
Problem of Consciousness we've been able to...I don't really want to say
solve. Address? Attempt?) in the current summer has done very little to
further any of these goals, and has done quite a bit to advance many of the
aspects he's worrying about.

So when are we going to start seeing evidence that these kinds of lofty
prophecies will pan out? Or is the idea that it's going to arrive all at once
like the Messiah and save us from ourselves? I'm very excited about and
deeply interested in Artificial Intelligence the discipline. Artificial
Intelligence the Religion, I could do without.

------
davidhyde
Replace AI with cake :)

"Either we need an exponential improvement in human behavior — less
selfishness, less short-termism, more collaboration, more generosity — or we
need more cake.

"If you look at current geopolitics, I don't think we're going to be getting
an exponential improvement in human behavior any time soon.

"That's why we need more cake"

------
mattnewport
What I'm most excited for is Demis using his AI to finally finish his Infinite
Polygon Engine. I expect he'll get to that right after solving mass inequality
and creating world peace.

~~~
cpeterso
What is the Infinite Polygon Engine?

~~~
mattnewport
Before Demis Hassabis was a master of AI hype, he was a master of video game
hype. The misleadingly named Infinite Polygon Engine was one over-hyped aspect
of the over-hyped and much-delayed game Republic: The Revolution. I think it
actually turned out to be an OK game, but it's a notorious example of
over-promising and under-delivering in the games industry.

~~~
ohopton
I remember reading about the Infinite Polygon Engine in Edge (UK video games
mag).

Pleasantly described as twaddle on Eurogamer in 2003.
[https://www.eurogamer.net/articles/r_republic_pc](https://www.eurogamer.net/articles/r_republic_pc)

Still disappointed!

~~~
cpeterso
Interesting. Thanks!

There is so little information about the Infinite Polygon Engine that my
comment above is now the #1 Google search result for "Infinite Polygon Engine"
(with quotes), even above that Eurogamer article. Without quotes, my comment
is only #2. :)

~~~
mattnewport
Here's Demis hyping up the AI (plus ça change...) to IGN:
[http://m.ca.ign.com/articles/2001/05/17/republic-the-revolut...](http://m.ca.ign.com/articles/2001/05/17/republic-the-revolution-interview-2)

Compare those claims to the game as reviewed when it finally shipped some two
years later. It probably did do some reasonably clever and even innovative
things, but it never really lived up to the hype.

He's a smart guy, but his real genius appears to be drumming up breathless and
relatively unskeptical media coverage, something he probably learned from
Peter Molyneux, who also made a career out of genuinely innovative games that
nonetheless rarely lived up to the extravagant claims made about them years
before they shipped.

------
vinayms
This seems to be a bit of oversell, similar to how a gifted musician might
declare that music can cure all diseases, including cancer.

We must take care to ensure that AI doesn't end up atrophying human
intelligence and making us its slaves, as technology already seems to be
doing. I mean, look at how people have lost the capacity to do basic maths
mentally, or to follow a route based on landmarks alone. Technology is useful
as long as it's _our_ slave, like the slide rule or the compass. The same goes
for AI.

The combinatorial example about chemical compounds is poor because, if
anything, it only reflects the imperfect methodology adopted rather than
something inherently insurmountable. Maybe they need more insight to help
prune the irrelevant combinations, and to actually act like scientists
instead of tinkering inventors. If scientists rely on technology or AI to
solve these sorts of problems, either through brute force or clever
algorithms, instead of engaging their intelligence, it would begin the
deterioration of humanity as a whole. Though most significant scientific
discoveries have been accidental, our progress has come from applying our
minds to understanding and harnessing them. The same goes for the evolution
of society through philosophy. AI-induced atrophied intelligence will push us
back to the dark ages, only this time the god would be the AI, the oracles
would be the software, and the heretics would be those propounding reliance
on innate human intelligence.

It's ironic that a bunch of brilliant scientists, by creating a brilliant
system on par with the human mind, risk destroying the collective brilliance
of humans.
We must tread carefully.

------
jokoon
The problem with current AI right now is that it's complicated, recent work
(meaning the required tools are not for everyone), so it's more research than
anything else. Most programmers don't really have a clue how machine learning
works, even when it works well enough, so that leaves very few people.

And so far, what real-world applications are we seeing that are good enough
and actually useful enough to justify the investment? You can't really trust
self-driving cars yet, and researchers don't have much insight into what a
deep neural network is doing or how to use its results.

I have the feeling deep learning is another excuse to justify selling
hardware.

I wish there were more research done at a higher level in neuroscience, to
arrive at a better definition of general intelligence.

------
drabiger
It's sad that a well-paid and probably smart person like Demis Hassabis is so
wrong in his view of how the world is.

To quote the article quoting Hassabis, "The reason I say that is that if you
look at the challenges that confront society: climate change, sustainability,
mass inequality — which is getting worse — diseases, and healthcare, we're not
making progress anywhere near fast enough in any of these areas."

Fortunately, that's not true, as is shown in Factfulness:
[http://a.co/d/cau5vUv](http://a.co/d/cau5vUv)

~~~
adrianN
For climate change we're not even close to making enough progress. Global CO2
emissions are expected to continue rising for the foreseeable future, so we're
not even getting the second derivative right. The chances are close to zero
that we'll manage to limit warming to two degrees and imho it's likely that
we'll set off some feedback loops (melting permafrost, methane clathrates,
albedo changes at the poles, etc.) that will lead to catastrophic warming in
the next century.

~~~
drabiger
Agree.

------
wsy
AI has been a marketing term since its invention, and it is very misleading.

Whenever you read something about 'Artificial Intelligence', try to replace
this term with an actual technology, such as 'Deep Learning'. If the sentence
doesn't make sense with any such replacement, it is most probably just
marketing bullshit.

~~~
meh2frdf
Replace deep learning with ANN; it was branded DL in order to get grant
funding more easily!

~~~
wsy
You are right, even DL is still quite fuzzy, ANN would be a much better
example.

------
paganel
To be honest, I can't really understand how there are still people who, when
confronted with a world that has been irremediably changed by humans' use of
technology (for the worse, they say), still believe that improved technology
used by the same humans will make things better. I'm looking at these quotes
from the article:

> The reason I say that is that if you look at the challenges that confront
> society: climate change, sustainability, mass inequality — which is getting
> worse — diseases, and healthcare, we're not making progress anywhere near
> fast enough in any of these areas.

followed by

> Either we need an exponential improvement in human behavior — less
> selfishness, less short-termism, more collaboration, more generosity — or we
> need an exponential improvement in technology.

~~~
TeMPOraL
Frankly, I can't really understand how people believe what you just wrote.
Yes, technology created many problems, but we can't solve them with _less_
technology without reintroducing problems we've already solved. We can either
_not_ solve them, or try to solve them with more progress (likely introducing
new problems as well).

In other words: we dug a hole for ourselves with technology, but the only way
we can dig ourselves out of it is with _more_ technology. This is pretty much
obvious.

~~~
glitchc
Let's play with that notion for a bit. Say technology is a tool, akin to a
spade. A spade gets you into a hole. Will more spades ever get you out of one?
Sounds like you need a ladder instead.

~~~
TeMPOraL
Technology is all the tools. With a spade you got into a hole. With a ladder
you'll climb out of it. Hopefully with treasure. Then that spade might help
you fill the hole back in.

Particular technologies are highly intertwined, both directly and causally,
so you can't really micromanage this. You have to push for technological
development as a whole, trimming developments here and encouraging them
there. For instance, we wouldn't be talking about renewable energy sources
were it not for the coal furnaces and internal combustion engines that
allowed the world to progress to the point where we even _can_ exploit
renewables. In this case, not only do we need the ladder because of the
spade, we also _can have_ a ladder because of the spade.

~~~
glitchc
I think this is what you need to realize: technology is only one kind of tool
in our toolbox for shaping humanity. Other tools include:

\- Policy: With regard to technology, who's in charge and how we use it.

\- Laws: How we protect vulnerable (defenseless) people from the negative
repercussions of technology.

\- Philosophy: How we think about things, treat other humans, and regard our
environment.

And I'm saying all of this as a technophile. The same tool that got you into
the hole is likely not the tool that will be useful for getting you out.

~~~
TeMPOraL
The tools you mention are also intertwined, with technology in particular
being in the driver seat of changes in the other areas you mention. Therefore,
it'll still be a necessary component of the solution, even if it won't be the
only one.

~~~
glitchc
I firmly disagree. While technology is certainly an enabler, either to
implement a particular solution or to enforce it, it is not the tool from
which the solution must be derived. The intertwining you envision emerges
when decision-making bodies look at a particular technology and decide how it
should influence humanity.

------
tomp
> mass inequality — which is getting worse

How can presumably smart people still keep saying this? Inequality is
decreasing by all measures! And then I'm supposed to believe any other
argument this person makes...

~~~
credit_guy
Some people think inequality has gone down significantly (I'm in that camp),
but others point to things like Bezos and a few others owning more assets
than 80% of the world combined.

Regardless, AI is bound to increase inequality. Just yesterday we discussed
this book [1] on probabilistic programming here on HN [2]. That book is, by
any definition, an advanced book about AI. How many people without a PhD can
read it? How many people _with_ a PhD can read it? Well, guess what: people
who can and do read this book earn more than people who don't. That's AI
driving inequality. Before AI creates the mythical HAL or other self-aware
machines, it will enable humongous productivity for the top 0.01% most highly
educated engineers.

[1] [https://arxiv.org/abs/1809.10756](https://arxiv.org/abs/1809.10756)

[2]
[https://news.ycombinator.com/item?id=18109260](https://news.ycombinator.com/item?id=18109260)

~~~
vorg
Perhaps humankind needs a "Butlerian Jihad", as defined in the Dune novels,
where all machines made in the image of a human brain are banned for all time.

~~~
adrianN
Arguably we don't have machines that are made in the image of a human brain,
so that wouldn't change things right now.

