
Interview with Ex Machina’s Scientific Advisor - craigcannon
https://blog.ycombinator.com/ex-machinas-scientific-advisor-murray-shanahan/?=hn2
======
jgrahamc
Fun video I made about the code Easter Egg in Ex Machina:
[https://www.youtube.com/watch?v=JsDt1laSRzU](https://www.youtube.com/watch?v=JsDt1laSRzU)

~~~
ufo
When I watched the movie I was actually annoyed at that part because they show
the code front and center but it clearly doesn't do what the plot says it
does. Turns out they showed it clearly like that so viewers would be able to
copy the source code and run it to find the easter egg :)

------
kangman
the podcast feed looks to be broken.

This page contains the following errors:

error on line 52 at column 224: EntityRef: expecting ';'

Below is a rendering of the page up to the first error.

    
    
      http://www.ycombinator.com Thu, 29 Jun 2017 15:00:57 +0000 Mon, 15 May 2017 02:20:47 +0000 hourly 1 60 https://backtracks.fm en Y Combinator http://www.ycombinator.com http://feeds.backtracks.fm/feeds/series/cb81757a-3054-11e7-89cf-0e1b887eb36a/images/main.jpg  http://www.ycombinator.com  Y Combinator Craig@YCombinator.com (Craig Cannon)  Craig@YCombinator.com no   Wed, 28 Jun 2017 09:00:00 +0000 00:52:58 no 1

~~~
craigcannon
huh. is it not playing for you in the embed?

------
unspecified
Funny transcription error with the professional Go player names from the
DeepMind series:

> So it’s been interesting watching the reactions of the top Go players, like
> Lisa Doll and KGA

These should be Lee Sedol and Ke Jie.

edit: Odd that it's correct two paragraphs later

~~~
craigcannon
haha. thanks for spotting that. I really should proof these things :)

------
ericjang
Murray makes a number of nuanced comments about AI philosophy that I enjoyed.
I've reproduced them below, but please see the full transcript in case I have
taken quotes out of context:

\- _Is she conscious at all? Is she conscious just like we are? Is she
conscious in some kind of weird alien way? You never really know._

\- _we’re pretty close to human brain scale computing already in the world’s
fastest supercomputers. We’ll get there within the next couple of years. But
that doesn’t mean to say we know how to build human level intelligence. That’s
an altogether different thing. And also there’s controversy about how you make
that calculation as well. I mean do you, what do you count? Do you count a
neuron? How do you count the computational power of one neuron? Or one
synapse? And some people, you know, it may be that some of the immense
complexity in the synapse is functionally irrelevant._

\- _They’re just the process of turning the raw waveform into text. So that’s
been cracked. But then again, real understanding of the words, that’s a whole
other story. And with all today’s personal assistants, you know it can be cool
and they’re gonna get better and better, they’re still way off displaying any
genuine understanding of the words that are being used._

\- _But let’s suppose that it was a world where we were minded to put Asimov’s
Laws into Ava. Well maybe Ava might reason that she is human. You know what is
the difference between herself and a human? And maybe she would reason that
she shouldn’t allow herself to come to harm. And therefore she was justified
in what she was doing. Who knows?_

\- _Of course there are people who want to build autonomous weapons and all
kinds of things like that and you might say to yourself well I would very much
like it if somebody was trying to pay attention to things a bit like Asimov’s
Laws and say well you know, you shouldn’t build a robot that is capable of
killing people. But that’s a law that the designers, or that would be a
principle, if we were to have it, that the designers and engineers would be
exercising, not one that the robot itself was exercising. So that’s the sense
in which it’s not relevant today because we don’t know how to today make an AI
that is capable of even comprehending those laws._

\- _Nathan’s criticizing Caleb over going with his like gut reaction and his
ego and not in like if he were to think through every logical possibility for
every action he would never do anything. Which is kind of like directly
against all these laws. Ava would never do anything if she could harm someone
possibly down the road by you know burning fossil fuel, by being in a
helicopter._

(My own commentary) This last point may be particularly relevant to
researchers in Deep RL. Often we formulate off-policy RL problems as computing
value functions that represent expected reward over optimal policies _until
the end of time_. But maybe long-horizon reasoning is too computationally
expensive (requiring agents to solve every trolley problem in their future and
model the effect of burning fossil fuels on the environment), so some kind of
rational ignorance is actually optimal in a compute-resource-restricted
regime.
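
To make that concrete, here is a toy sketch of the horizon trade-off (the
reward stream, discount, and horizon are made-up illustrative numbers, not
anything from the interview or from a particular RL codebase):

    # Toy comparison: infinite-horizon discounted return vs. a hard horizon cutoff.
    def discounted_return(rewards, gamma=0.99):
        # Standard objective: sum of gamma^t * r_t over the whole trajectory,
        # i.e. accounting for consequences "until the end of time".
        return sum(gamma ** t * r for t, r in enumerate(rewards))

    def truncated_return(rewards, gamma=0.99, horizon=10):
        # A "rationally ignorant" agent: same sum, but it refuses to reason
        # more than `horizon` steps into the future.
        return sum(gamma ** t * r for t, r in enumerate(rewards[:horizon]))

    rewards = [1.0] * 1000  # a long stream of small future consequences
    print(discounted_return(rewards))  # ~100.0: every distant consequence counts
    print(truncated_return(rewards))   # ~9.6: the far future is simply ignored

The truncated estimate is biased, but it is far cheaper to compute and to plan
against, which is roughly the compute-versus-foresight trade-off described
above.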

------
madarcho
Anecdote: Got taught an Introductory AI course by the gentleman at Uni.

~~~
radicality
He was my master's thesis advisor at Imperial :)

