

No Intelligence Required: Her, Ex Machina and Interstellar - ggreer
http://gregegan.customer.netspace.net.au/ESSAYS/NISF/NISF.html

======
dan-simon
I agree on "Interstellar" and "Ex Machina", but I think Greg missed the point
of "Her". True, it doesn't nearly do justice to the themes of artificial
intelligence and human-machine interaction. But that's because it's not meant
to--it's really just a romantic little film about human relationships and the
difficulty of achieving intimacy and trust in the modern world. It never even
attempts to explore, let alone explain, any differences between "Samantha" and
actual humans--as portrayed, she's simply a fully human female with the
unusual quirk of happening to live in cyberspace. And any sci-fi-heavy
exposition regarding her technological nature would only have detracted from
the film's very touching, and very human, charm.

------
erroneousfunk
Wow, someone doesn't like movies very much. "But the dismaying fact remains
that the majority of art-house cinema treats science fiction as a means to
acquire a veneer of philosophical gravitas, while freeing the auteur from the
burden of writing a story that makes the slightest bit of sense."

Science Fiction aside, you could say much the same thing about "Groundhog
Day," "It's a Wonderful Life," and "Pan's Labyrinth." Any movie with a bit of
"magical thinking" and "what if..." That's the whole point of huge swaths of
movies from every category of cinema! Even romance and drama -- genres with
seemingly solid footing in reality -- draw on the implausible to make things
interesting.
You really think a successful Richard Gere would fall head over heels in love
with a prostitute (even one who looks like Julia Roberts)?

If you want "science" why are you looking in "science fiction"?

~~~
k__
Maybe because Groundhog Day is totally crazy and Her is only a bit crazy. The
smaller the mistakes, the more they look like laziness.

In Groundhog Day the impossibility feels deliberately constructed, and the
protagonist living the same day again and again is the main plot.

In Her it seems like a side note that a company is selling its super AI for
way too much money to do things that don't need an AI in the first place.

If you want to use such things in your design/composition, you have to embrace
them, not do them half-assed.

Ex Machina had much better premises, but the acting was really bad and the
story didn't play with those premises very well.

------
anigbrowl
Anything from Greg Egan gets my attention!

------
bradleyland
With regard to _Her_ — the only movie on this list that I've seen — I believe
the author has approached the film from the wrong direction. There's an
assumption often made amongst geeks and hackers that we'll know when we create
the first "true" AI. I thoroughly enjoyed _Her_, because the movie didn't
delve into the "who and how" details of the AI. If we strip away the
assumption that we'll know when we create the first true AI, then the film's
circumstances make much more sense.

Think about how applications like Siri work today. They have a very
superficial appearance of AI, but they're not terribly convincing. Apple is
making progress, though. Siri surprises me all the time, in a good way.
Periodically I'll try something new, and occasionally Siri delivers. It's
progressive enhancement.

Fast-forward 25-50 years. Where is the line between what we have today and
"true" AI? I keep putting "true" in quotes because I believe the phrase "true
AI", as the author uses it, is ill-defined. True AI can mean many things. Two
possibilities are:

1) We gain some deeper insight into the mechanics of human consciousness, and
set out to replicate that in software. When we achieve that, we've achieved
"true AI".

2) We never really understand human consciousness, but we keep chasing the
goal of _convincing_ AI. That is, AI that _seems_ human. In this chase, we
achieve a convincing enough level of AI that the questions raised in the
article — and in the movie — become relevant.

I believe the latter is a more likely scenario than the former.

Looking forward 25-50 years again, let's examine a future where Megacorp™
keeps iterating its AI product. Samantha could be just another major release
(major in the SemVer sense). In Megacorp's testing, the AI was very
convincing, and they believed they had a leg up on the competition. They
hadn't realized just
they believed they had a leg up on the competition. They hadn't realized just
how effective their new software was though. In testing, the software's access
was limited to the sandbox of the testing environment, which used samples of
information. Once released onto the Internet, with access to a vast number of
connections and a vast amount of information, the software reached levels of
convincingness (a proxy for intelligence) that its authors hadn't anticipated.

        ...but that doesn’t change the fact that we’ve spent all our time
        mired in the world view of a neurotic middle class man whose only
        real problems, and only real interest, are his dysfunctional
        romantic relationships. Jonze has taken what would in truth be a
        revolutionary event in human history and shrunk it down to an
        episode of Dr. Phil.

I hate to be so cynical, but have a look at the world around us. Wouldn't it
be so perfectly human of us (as a society on the whole) to overlook the
emergence of true AI while we ponder our navel? That is exactly the type of
complication that Samantha is referring to. We have the capacity for
rationality, but we're not 100% rational. That's what makes us human. That is,
IMO, a major theme of _Her_.

PS - I am 100% certain that somewhere in the societal background, there is a
Hacker News that exists in the world of _Her_. This community identified that
this new software was a "game changer" early on, and many arguments ensued
with regard to how this new AI would impact the world. Society at large
failed to notice.

~~~
eli_gottlieb
Excuse me, but your dichotomy of two possibilities is facepalm-worthy. A few
broad points:

A) _Consciousness_, as in subjective experience, has very little to do with
"artificial intelligence", in our current understanding. Only philosophers of
mind continue to be obsessed with consciousness _qua_ consciousness, as if a
program needs qualia to drive an automated car or solve a math problem.

B) We will damn well understand human consciousness and cognition, in terms of
neurally-implemented statistical learning mechanisms. This will just require
that one or two more generations of cognitive scientists and philosophers of
mind who refuse to believe in statistical learning retire. The best current
literature in neuroscience and cognitive science is all computational and
statistical in nature.

C) We will not build "AI products" based on a One True Theory of Cognition. We
will find that human cognition works as a singular, highly effective example
of general, broad principles. This is why we already have lots and lots of
highly functional statistical learning and inference systems, albeit not ones
that can be directed at a keystroke to perform any inference a human being can
perform. Of course, this is also what makes them useful: by restricting their
hypothesis space, and thus the problems they can solve, we vastly reduce their
need for training data compared to a generally-intelligent human, and thus
make otherwise difficult inference problems tractable.

D) Those who understand the principles of cognition will notice and understand
perfectly well when near-human or human-level "artificial intelligence"
(inference engines) is/are built.

>Wouldn't it be so perfectly human of us (as a society on the whole) to
overlook the emergence of true AI while we ponder our navel?

You're already doing it.

>We have the capacity for rationality, but we're not 100% rational. That's
what makes us human.

I demand a definition of humanity that doesn't define my entire species as
_crippled_, and doesn't necessarily exclude all non-crippled life forms from
humanity ;-).

