
Shane Legg: DeepMind's third cofounder - lyavin
http://www.businessinsider.com/shane-legg-google-deepmind-third-cofounder-artificial-intelligence-2017-1
======
AdamFernandez
To me the biggest problem isn't 'evil' AI (SkyNet/Matrix scenarios) since I
don't think it is possible to predict the intrinsic motivations of non-
biological intelligence (no evolved neurochemical dependencies like we
have/infinite capacity to grow beyond our limitations). That makes way too
many assumptions about AI remaining like us (curious, jealous, afraid, etc.)
when all of our behaviors evolved in us and are limited by our biology. I'm
more worried about something emerging unintentionally that does harm (as in
the overused 'paperclip maximizer'). In that instance, the question isn't
whether the AI is on 'our side'.

~~~
psbp
Even the most optimistic AI scenario seems pretty depressing to me. Ideally we
would be able to merge or at least fully cooperate with a vastly more
intelligent entity. At that point, its consciousness would dwarf our own and
we would be pretty meaningless. Maybe our conscious experience would live on
as some vestigial relic, but I doubt it (we?) would even bother to preserve
that. It would be like a droplet of water landing in the ocean.

~~~
chriswarbo
> Even the most optimistic AI scenario seems pretty depressing to me... its
> consciousness would dwarf our own and we would be pretty meaningless

This seems like a classic optimist/pessimist situation, like when we find out
the world/universe is bigger than we imagined. I think our brains are poor at
dealing with absolute values, so we use relative ones instead: when we
discover some unfathomable new landscape, our minds "zoom out" to encompass
it, which makes the previous stuff look small _in comparison_.

That's just (another) limitation of the meat in our heads though. The world
doesn't _actually_ get smaller when our transportation improves: it's just as
vast as it was for any explorer; the Earth didn't become less special when we
discovered that the planets were worlds; Sol didn't become less immense and
powerful when we discovered that the stars are other suns; our solar system
didn't become less intriguing when we found exoplanets; the Milky Way didn't
lose significance when we discovered other galaxies; baryons didn't lose their
complexity when we discovered dark matter; etc.

~~~
psbp
Right, but there is a commonality in our ability to perceive those different
contexts.

The example that I heard years ago (I think from Kurzweil?) framed the
difference between us and future AIs as comparable to an ant's perception of
its surroundings versus a human's. In his example, the human has no regard for
an ant; we'd just as soon step on one ("evil" AI). To re-frame that, if you
could magically turn an ant into a human, why would it even matter if it was
once an ant? The transition would be a complete disconnect. Maybe if you told
the person that they were once an ant they would have some sort of strange
reverence for ants, but it wouldn't be very rational.

------
abecedarius
His blog from before DeepMind was interesting back then, though he seems to
have stopped posting. [http://www.vetta.org/](http://www.vetta.org/)

~~~
lyavin
Interestingly, he once called MIRI[1] (formerly SIAI) “the best hope that we
have”[2].

[1] [https://intelligence.org/](https://intelligence.org/)

[2] [http://www.vetta.org/2009/08/funding-safe-agi/](http://www.vetta.org/2009/08/funding-safe-agi/)

------
stillsut
I believe the political class will use "AI risk" as a rationalization for
imposing tyranny, poverty, and enforced ignorance on the population, long
before any actual risk materializes from an AGI.

