
The science of Westworld - mike_hearn
https://blog.plan99.net/the-science-of-westworld-ec624585e47#.th6i2yuzb
======
chvid
I really enjoyed Westworld and found the coverage of AI amazingly not stupid,
as opposed to most other sci-fi shows.

However, the biggest idea in the show was the relationship between god (Anthony
Hopkins), self (the robots), and consciousness as an extension of inner
conversation.

I had to look up the idea:

[https://en.m.wikipedia.org/wiki/Bicameralism_(psychology)](https://en.m.wikipedia.org/wiki/Bicameralism_\(psychology\))

~~~
rubber_duck
>coverage of ai amazingly not stupid as opposed to most other sci-fi shows

If they were capable of that level of AI they would have reached exponential
growth shortly afterwards - to suggest someone would use it to build theme
parks is absurd.

Like you said - AI is a plot device in that show, just another form of "magic"
that lets them ramble on about consciousness while pretending to be sciency.

~~~
sp332
Why would a robot inherently want to make a better robot, or change itself?

~~~
api
I love "stupid" questions like this.

We can say that the vast majority of living beings on Earth do not seem to
seek any form of radical self improvement beyond ordinary developmental
learning and mastery of survival skills. There is no intrinsic reason that
there must be an impulse to exceed oneself. Why would AI be different?

~~~
JackFr
Why indeed would an AI's motivations even be comprehensible to us?

~~~
enraged_camel
Because an AI would be created by humans, and would therefore be modeled after
human intelligence.

An alien AI, on the other hand, would most likely be incomprehensible to us,
at least until we understood the aliens that created it.

------
ygra
> This sort of “escape hack” isn’t possible for human players because you have
> to do too many precise actions too quickly, so in the video it’s performed
> by a separate computer wired up to the gameport.

Well, someone did the credits warp manually, at least:
[https://www.youtube.com/watch?v=HxFh1CJOrTU](https://www.youtube.com/watch?v=HxFh1CJOrTU)

The video in the article shows basically shellcode injection, but that's not
timing-sensitive, it'd just take longer for a human. And, as seen above,
similar things _are_ possible for humans, if just less convenient to do so.

~~~
oh_teh_meows
> The video in the article shows basically shellcode injection, but that's not
> timing-sensitive, it'd just take longer for a human. And, as seen above,
> similar things are possible for humans, if just less convenient to do so.

Wow, that just gave me an idea for a story. Humans find out they are mere
pawns in a simulated reality and discover a 'hack' to alter it, but the
'hack' would take hundreds of years to complete. So generations of humans
toil away at completing the 'hack', passing the baton from one generation to
the next.

There could be so many possible storylines from this - corruption/destruction
of reality, a dictator wanting to change the past, a cult hell-bent on
changing reality, sorcerers practicing 'magic', or a lone protagonist who's
on the verge of completing the hack after hundreds of years but suffers from
ethical conflict and existential crisis.

~~~
IgorPartola
Check out _Off to Be the Wizard_. You will enjoy it.

~~~
oh_teh_meows
Sweet! Thanks I'll check it out.

------
sssparkkk
"Lily is a swan. Lily is white. Bernhard is green. Greg is a swan." -> "What
color is Greg? Answer: white"

At the risk of sounding somewhat stupid: shouldn't this contain "Swans are
white" for that to be a correct answer?

~~~
tfm
If we're limiting ourselves to deductive reasoning, then yes – the facts as
stated do not give enough information to deduce that Greg must be white.

If instead we use abductive inference, we might seek the simplest and most
likely explanation given our universe of observations. Sherlock Holmes was a
big fan of abduction!

Much of real-world reasoning is abductive to a greater or lesser extent. There
is a well-known joke about some motley band of engineers, logicians,
mathematicians, statisticians, etc etc catching a train through the Highlands.
They see a black sheep, the engineer says "look, all sheep in Scotland are
black!", the statistician says "no, you can't say that – just that MOST sheep
in Scotland are black", another says "no, we can only say that at least ONE
sheep is black", another says "no, it's only black on at least one side", then
the one you're stuck next to at the party says "you're all wrong, we can only
say that at least one sheep in Scotland is black on at least one side at least
some of the time". The last statement is fully deductive; the rest of them are
abductive, and more-or-less useful.
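The "simplest and most likely explanation" step can be sketched as a toy
maximum-a-posteriori choice over candidate hypotheses. All priors and
likelihoods below are invented purely for illustration; real abductive
systems would learn or elicit them.

```python
def abduce(observation, hypotheses):
    """Return the hypothesis maximizing P(observation | h) * P(h)."""
    return max(hypotheses,
               key=lambda h: h["likelihood"].get(observation, 0.0) * h["prior"])

# Made-up numbers: "all sheep are black" explains the sighting perfectly
# but is a priori very unlikely; "some are black" wins on balance.
hypotheses = [
    {"name": "all Scottish sheep are black",
     "prior": 0.01, "likelihood": {"saw a black sheep": 1.0}},
    {"name": "some Scottish sheep are black",
     "prior": 0.60, "likelihood": {"saw a black sheep": 0.5}},
    {"name": "no Scottish sheep are black",
     "prior": 0.39, "likelihood": {"saw a black sheep": 0.0}},
]

best = abduce("saw a black sheep", hypotheses)
print(best["name"])  # -> some Scottish sheep are black
```

The fully deductive statement at the end of the joke corresponds to the
degenerate case where only hypotheses with likelihood 1.0 are admitted.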

~~~
Cybiote
This is why I think the ability to ask good questions is a better indication
of understanding and intelligence than the ability to generate answers.

As a gauge of how far we are from AI, consider what sort of modeling
capacity is required before an AI, presented with such a sequence, can ask:
"What country is the swan from?" or, even more impressively: "Do you know
where this took place and what country the swan's parents were from?" From the
first question it would then _abduce_ a color. Same for the second, but perhaps
it could include probabilities based on the estimated numbers of each color and
the genetics of swan color.

This post is a rotation meant to provide a better sense of scale for the
problem at hand.

~~~
tfm
> This is why I think the ability to ask good questions is a better indication
> of understanding and intelligence than the ability to generate answers.

Certainly! Synthesis rather than reformatting (or, more commonly,
regurgitation). Analysis and abduction are more than just "put it in your own
words". More useful too.

There is something of a rush on at the moment to generate chat-bots to replace
FAQs. Every Slack/Fleep/Blern/Crank channel appears to have five or six
memoisation bots. Seems to be largely a solved problem!

When we can start having bots that can be sensibly interrogated for a summary
(or even a "hey, you've been away for several hours: here are the key points"),
we can finally abandon the chatrooms and let the generative bots flood them
with abductive content, and the précis bots can then ping you every couple of
weeks when something important comes up.

------
k__
Westworld reminded me a bit of Asimov's stories.

Both couple robotics and AI. In Asimov's stories they first built the robots,
which then got smarter and smarter, but that's not how reality worked out.
Real robots are rather specialized, and most AI/automation we have today is
about data, which is virtual.

They're all like, let's build a robot, then make it intelligent, but it should
also move like a human, oh, and why not replace its internals with life-like
organs?

It's like they run through all of science and engineering in about 30 years of
development and pretend it's mostly the work of one mastermind, which is
ridiculous. The whole "technical" side of Westworld is trash.

It's more a philosophical story than technical or psychological, with a few
deus ex machina to get the ball rolling.

~~~
cydonian_monk
> Westworld reminded me a bit of Asimovs stories.

I suspect this isn't an accident, given (some of) the same writers/producers
are developing the Foundation series for HBO. There are MANY Asimovian themes
scattered throughout the series, and even a few things I'm reasonably certain
are direct references. (Ex: "Someday".)

> It's more a philosophical story than technical or psychological

And that's why I enjoyed it. My favorite science fiction always fits that
description, especially if the big philosophical question comes around for
technical reasons. (Such as in "The Cold Equations.") That, and the first
season was very much a complete story. They could stop here and I'd be happy
with it.

The series had its flaws, but I'm very optimistic if this is the shape of TV
scifi to come. Even if it's all adaptations or of a derivative form.

~~~
k__
I don't know... I think it's a nice show, the storytelling is really awesome,
and the philosophical questions and their answers are really interesting.

But the rest feels a bit... meh.

The characters don't have any depth.

Bernard and Ford had the most, but the rest?

The Maeve storyline was utter crap, and the people around it were basically
imbeciles, Maeve included.

William had something going for him, and when everything came together I was
blown away - but only for a moment, because some of the storytelling puzzles
were solved, not because he's a good character; his development is simply
implausible.

Teddy is just... empty?

Dolores was okay, but since the whole story was about her uncovering her past
and with it her personality, it took until the end of the season for her to
get some depth.

The sci-fi aspect is also minimal and basically a huge plot hole; it's more
fantasy to me.

~~~
joshvm
_Spoilers if you haven't watched._

The tech I can deal with, because they don't even try to explain it, and there
was a nice moment where Maeve goes bonkers when confronted by the fact that
she doesn't really have free will.

But my biggest gripe was that park security was a joke. It has a kind of Star
Wars stupidity - "There's a problem down on the planet that could be hostile?
Let's send the captain, first officer and chief medical officer". There's a
problem in the park, so they send the head of security alone who conveniently
can't get a signal back to base. And then there are the personnel who are
wearing armour so ineffectual that they all drop like flies with a single
bullet. You'd think that they might design weapons that were biometrically (or
at least RFID-tagged to be realistic) linked to real people so they couldn't
be stolen.

It seems like the entire park is staffed by about 100 people. And until the
final twist was revealed, I did wonder how the hell any of the chronology
actually made sense - the starting town was being reset so often that it would
be an insane clean-up job every night. Not to mention that parallel storylines
with dependent characters - e.g. Dolores and young William, Teddy and old
William - were being aired at the same time even though those characters
seemed to be apart during resets.

~~~
joshvm
Ahem, I meant Star Trek...!

------
OJFord
> _Here’s a harder one that tests basic logical induction: Lily is a swan.
> Lily is white. Bernhard is green. Greg is a swan. What color is Greg?
> Answer: white_

That's not right, is it? At least, not logical induction?

Supposing the second premise were 'Lily is female'; the answer to 'What gender
is Greg' should obviously not be 'female'.

~~~
ssanders82
Well a strict logic question would be "All swans are white. Lily is a swan.
What color is Lily?"

But induction is more like learning about the world by observing it, and
making probable guesses. So it may be correct for a machine to observe that if
one swan is white, the best guess for the color of another swan (in the
absence of other info) is white.

~~~
OJFord
Thanks, your first sentence is more like what I was expecting before reading
the example.

I'm not familiar with that definition of induction, but I haven't studied ML
at all.

------
wonko1
If I'm honest with myself I have to admit that a general purpose human level,
strong AI would shake my world view significantly.

For that reason, I find it difficult to have a completely unbiased opinion on
our current state of progress.

But... while we seem to be making great progress, it seems like we're a long
way off understanding how the human mind works.

AlphaGo was an amazing achievement, but I think it's unlikely that the human
mind tackles Go in the same way.

It's obviously possible that there are multiple routes to a general human
level intelligence. But I think it's still unclear if the way AI is currently
being developed is one of them.

~~~
cel1ne
What I find puzzling is that many people seem to assume that AI can be
generated using a specific uniform neurological structure, while the human
brain is actually made of many different parts, some older, some newer, some
more connected, some more isolated, some inhibiting others, some potentiating
others, some mostly signalling with this neurotransmitter, some using that
etc.

~~~
WJW
I would assume it might actually be easier to get there with a uniform
neurological structure, as you don't have the "legacy" infrastructure left
over from millions of years of evolution.

However, I think that the first AI humanity manages to build will be more or
less a copy of a human mind and only later will we learn how to construct
minds "from scratch". Akin to how a beginning programmer will often scrape
together bits from various sources to build his/her first program and only
later can make original work.

~~~
TeMPOraL
Question is whether those older parts are best viewed as legacy
infrastructure, or as ASICs - parts that sit at local optima for the functions
they perform, doing better there than a uniform architecture would.

~~~
jimktrains2
I mean, a lot of the "older" brain is control circuitry that keeps everything
running and regulated without our conscious thought. Everything will have that
-- at some level. It might just be actual control circuitry and not anything
tied to the "brain" (e.g. a PSU in a computer), but the function would need to
be performed, and if it's not part of the brain, the feedback we get (e.g.
about stress or pleasure) might be lost?

~~~
WJW
I think that a lot is control circuitry, like the bits that regulate digestion
or body temperature. I don't see any reason that for a superintelligence these
could not be consciously controlled. The main reason we have so many
unconscious processes and heuristics in our brains is to limit the total power
consumption, as that was super important back in the days when food was
scarce. If power consumption becomes less important, you could do more and
better thinking.

~~~
cel1ne
It doesn't require a superintelligence. It's possible to gain a certain amount
of control over various unconscious processes by means of mental training.

------
codewithcheese
I think the physiological and logistical aspects of Westworld are more
interesting than the AI.

~~~
katpas
Agreed - given the amount of damage the Hosts seem to receive each day, it
doesn't feel plausible that they could collect them all and perform what seems
to be a manual operation on each to return them to perfect condition overnight.

~~~
ghaff
That's one of the aspects of Westworld that I think you have to just more or
less accept and move on. The nightly reset is implausible on all sorts of
levels: economically, in scale and effort (even down to buildings burning
down), and just logistically (does the park simply shut down for a few hours
every night?). And we're shown that, as you say, this all seems to be a very
manual process for the most part.

Something else you pretty much just have to accept is that through all sorts
of mayhem, the humans stay safe (well, until the end at least). There are some
rather inconsistent hand-waves around guns and bullets, but one has to believe
there are still ample opportunities for serious injury in some of the scenes
we see.

~~~
fao_
> the humans stay safe

[SPOILERS]Dolores kills a human outright by aiming at their head mid-season.
Then it isn't mentioned again.[/SPOILERS]

I always assumed something in the suit lining made the guns faux-fire, and the
suit responded by exploding a pocket of air or something. But then you can see
The Guy In The Black Suit (I've forgotten his name) load his revolver by
manually inserting bullets, so I have no idea how that'd work. It's probably
one of the only things that bothered me about the series.

~~~
simonw
According to Jonathan Nolan it's the bullets:
[https://editorial.rottentomatoes.com/article/11-rules-of-westworld-hbos-killer-robot-theme-park-series/](https://editorial.rottentomatoes.com/article/11-rules-of-westworld-hbos-killer-robot-theme-park-series/)

~~~
ghaff
Which is a reasonable handwave until you start asking about the physical
damage that the bullets do to [EDIT] hosts and ignore the mayhem involving
explosives, fires, etc.

It doesn't especially bother me though. We know that TV and movies generally
have a convention that trauma that would put someone in the ICU in the real
world is brushed off as a flesh wound. And I'll accept technobabble about the
bullets. I'm also happy to accept that maybe Westworld takes place in a
culture where theme park risks on par with base jumping are considered fine
and proper.

------
hellofunk
Personally I found the recent season of Humans a lot more interesting than
Westworld, which (to my taste) tried too hard to be philosophical, while
Humans explored similar philosophy more subtly, without being on-the-nose.

But we're all critics, so YMMV.

~~~
johnohara
I tried watching season 1 of Humans but was jaded by having first watched Äkta
Människor (Real Humans), the Swedish series it is based upon.

It seems that sometimes a lower production budget yields a better story,
forced as it is to rely on the viewer's imagination and subject interest -
especially if it's done with aplomb and intelligence.

I found the dialogue in Humans to be too "explanatory." Haven't seen a full
episode of Westworld yet, but imagine the same is true. Big budgets tend to do
that.

~~~
akvadrako
I haven't seen the US Humans, but I did enjoy the original, and I found
Westworld to be even better in the subtlety category.

That's probably due in part to it being an original, and to even higher
production values, so you get actually good writers. Maybe also a higher-brow
target audience.

~~~
hellofunk
Is there a US version of Humans? It's a recent show and on BBC, I wouldn't
think a US version existed yet.

~~~
akvadrako
The series is produced jointly by AMC in the United States, and Channel 4 and
Kudos in Britain.

------
throw2016
I loved Westworld. I think it shows imagination and elevates TV. But it starts
at a point where the AI is already real. The robots are able to parse reality
with all five senses in real time and respond. They are physically
sophisticated, are supposed to be bio-chemical, and match humans in dexterity
and motion.

Season 1 focuses on the next step, which is consciousness and choice, and the
two critical theories used to explain their emergence in the show are
bicameralism and memory. It doesn't dwell too much on the how and focuses more
on the consequences, which is a fascinating journey.

~~~
akvadrako
They are apparently not having the same kind of inner thoughts that we have
and most of what they do is scripted. I couldn't give evidence without some
spoilers.

------
splicer
My answers to those example bAbI questions don't match the "correct" answers.
I guess I'm not human...

------
ArkyBeagle
Most of the plot of Westworld revolved around the complete absence of any
rigorous release strategy - or at least Ford's privilege in thwarting it. That
was necessary to expose the development arc of the hosts.

The writers also appear to have used a variation on recursive plot, which is
nice to see.

------
mklim
>Put another way, there’s a risk of AIs learning to achieve their assigned
task better by preventing humans from shutting them down.

Thought experiment/short story that goes into this in depth:
[https://gist.github.com/deanmarano/142df7a8a824ab05fc777d8e0...](https://gist.github.com/deanmarano/142df7a8a824ab05fc777d8e054ab0f3)

The crux of the story hinges on the magical spontaneous development of general
intelligence, so it's pretty unconvincing as a specific plausible scenario
IMO. But the general idea, that an AI may take unethical/unprecedented actions
to maximize a harmless goal, is a good one.

------
foxhop
I'm a huge westworld fan so I made a fan site:
[http://westworld2.com](http://westworld2.com)

------
njharman
Lily is a swan. Lily is white. Bernhard is green. Greg is a swan.

What color is Greg? Answer: white

How the hell is that logical? Greg may be white, or he may be black or any
other color. You could guess Greg is likely white, but the correct answer is
"can't tell". I'd expect an intelligent responder to reply with questions and
complaints about unanswerability, like I have done here.

~~~
mike_hearn
AI that insists on hard logic even in inappropriate situations is a staple of
science fiction, but there's no reason we'd build real machines that way.

The cited example is taken from the bAbI papers, and is a case of inductive
reasoning:

[https://en.wikipedia.org/wiki/Inductive_reasoning](https://en.wikipedia.org/wiki/Inductive_reasoning)

 _Inductive reasoning (as opposed to deductive reasoning or abductive
reasoning) is reasoning in which the premises are viewed as supplying strong
evidence for the truth of the conclusion. While the conclusion of a deductive
argument is certain, the truth of the conclusion of an inductive argument is
probable, based upon the evidence given._

In reality this is often the best you can do, as all you have to work on in
the end is the input from your own sensors.
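The inductive step can be sketched in a few lines. To be clear, this is a
hand-rolled toy, not the learned memory-network models actually evaluated on
bAbI, and the color vocabulary is an assumption of the sketch: an attribute
observed on one member of a class is generalized to other members of that
class.

```python
from collections import defaultdict

COLORS = {"white", "green", "black"}  # toy color vocabulary, assumed known

def parse(facts):
    """Split facts like "Lily is white." into kind and color tables."""
    kind, color = {}, {}
    for fact in facts:
        tokens = fact.rstrip(".").split()
        name, value = tokens[0], tokens[-1]  # "Greg is a swan" -> Greg, swan
        (color if value in COLORS else kind)[name] = value
    return kind, color

def induce_color(name, kind, color):
    if name in color:             # stated outright: no induction needed
        return color[name]
    votes = defaultdict(int)      # otherwise: generalize from same-kind peers
    for other, c in color.items():
        if kind.get(other) == kind.get(name):
            votes[c] += 1
    return max(votes, key=votes.get) if votes else None

facts = ["Lily is a swan.", "Lily is white.",
         "Bernhard is green.", "Greg is a swan."]
kind, color = parse(facts)
print(induce_color("Greg", kind, color))  # -> white
```

Greg gets "white" because the only other known swan, Lily, is white; Bernhard
is ignored because he isn't known to be a swan. The conclusion is probable
rather than certain, which is exactly the inductive character of the answer.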

------
faragon
Westworld (2016) is an amazing show. In my opinion it is more about how real
it feels than about the theme itself. I'm convinced the same approach would
work for other genres, not just sci-fi. It is a kind of hyper-realist cinema,
where the plot and acting are so superb that it feels credible and
breathtaking.

------
mcguire
As far as I can see, most of the points of this article were brought up in
James P. Hogan's 1979 _The Two Faces of Tomorrow_.

------
ianamartin
Meh, I think that human intelligence is vastly overrated. We've been moving
the goalposts for centuries to safeguard our place not only at the top of the
intellectual food chain, but also to maintain the idea that we are on an
altogether different chain.

We've been doing this with regard to various animals since forever. Every time
a chimp learns sign language, an elephant cares about its mother, or a dolphin
has sex for fun, everyone loses their minds falling all over themselves to
"prove" that we are qualitatively different. That there's something else going
on with us that is special.

I'm not passing judgment here. I do it too. It's extremely convenient for me
to do so. If you start thinking of intelligence as a spectrum with some
species closer to one end of it than others, it gets a lot harder to justify
most of what we do to animals. And there's a dark place I don't want to go
that suggests that some people are far enough down the spectrum that maybe you
could justify doing bad things to them.

It gets really messy and ugly in both directions when you think about
intelligence as a spectrum. At what point should an animal be considered smart
enough to merit "human" rights? At what point should a human be considered so
dumb that it doesn't?

We, as a society, are not ready to have that conversation. We lack the moral
fortitude to do so, which is why I happily participate in this artificial
segregation.

But we are going to be forced into dealing with it much sooner than we are
ready to. I fully expect that within my lifetime, there will be Bladerunner
scenarios with lifelike robots who are practically indistinguishable from
actual people.

We live in a bubble here, where all the women are beautiful, all the men are
above average, and all the children are FBI agents.

Human memory is far more corrupt and fallacious than we want to think it is.
Weird social interactions with slightly (or very) dysfunctional people are far
more common than we tend to think they are. Spend a day on the subway in NYC.
Spend a day in my hometown in Texas (population: 498).

Many of these people could easily be simulated with a high degree of
believability. The real hypocrisy here is that no one wants to believe that
you, as an individual, could be simulated with believability. I'll go on the
record and say that it would be trivial to simulate me. I'm not that special.

The problem we have with AI tests is that we are testing an AI's ability to be
anyone. We're checking to see if an AI could impersonate absolutely anyone as
well as one of the top .00001% of human character actors.

We aren't checking the lower bounds. Because that's extremely uncomfortable
for us. We're maintaining goals and standards that are designed to make the
tests fail.

Again, we are doing it for good reasons. We haven't yet solved the problem of
how to treat each other when we know that we're only dealing with humans. We
aren't ready to talk about bringing other entities into our world yet.

Bladerunner was prescient; Westworld is near future. We need to get our shit
together because these issues are going to come up far sooner than we expect.
And when we're talking about an entity that speaks to us in our own language,
with our own idioms, with our own concepts of feelings and emotions--it's
going to be a lot harder to maintain the pretense that we are somehow
qualitatively different.

On the other hand, this could be really convenient for us. A moment of
solidarity, if you will. We could create believable robot characters that we
unite against and focus all our hate, violence, racism, and abuse towards.
Maybe we all get along better after that.

But what does that say about us? And have we really solved our problems? I
think that's the question Westworld is asking, similar to the question some
open-world games ask, like EVE Online: in a universe where everything is
permissible, who are you?

