
Sunspring, a short science fiction film written by algorithm - brokenbeatnik
http://arstechnica.com/the-multiverse/2016/06/an-ai-wrote-this-movie-and-its-strangely-moving/
======
emptybits
My prediction: At this level of AI-generated writing, keeping the works short
will attract (allow?) small audiences ... but only to watch the human talent
in the production struggle and improvise over truly awful writing.

If a human were responsible for that writing, they wouldn't have much of a
career.

Actors: 1. AI: 0.

~~~
daveguy
Agreed. Joeboy's description of this screenplay as "word salad" was perfect:
[https://news.ycombinator.com/item?id=11876333](https://news.ycombinator.com/item?id=11876333)

I like your scoreboard too -- that pretty much sums it up.

Although I do wonder if actors/directors could use this as a practice tool.
Challenge: turn word salad into a meaningful scene. It almost seems like it
could be an exercise in a theatre class.

You wouldn't want to do this for the full 10-minute screenplay (it's a little
painful even with these talented actors). Maybe generate a 2-3 minute scene,
or generate the whole screenplay and pick one scene from it. An optional
crutch -- the actors get to do their own 2-3 minute scene before and/or after
to give it real context and meaning. That could be interesting.

I have a feeling that something like this kind of exercise is probably already
done in training (any actors on HN?), although I bet this algorithm is better
than humans at coming up with challenging, incoherent word-salad gibberish.

~~~
fallous
Doing scenes with "word salad" is something improv actors do as a practice
tool, as well as scenes where each actor only has a single word or simple
phrase they can use as dialog. The latter is often part of a performance for
an improv troupe.

~~~
daveguy
Good point. I have seen the single word/phrase prompt for improv. It would be
a truly impressive improv group that could make this coherent in an
on-the-spot performance (though that wouldn't really be improv). The
screenplay seems more difficult than a single-word prompt, because with a
single word there is so much you get to make up on your own. The ability to
take the word salad and convey emotion and meaning through body language and
through the delivery/emphasis of each word feels like a whole different skill
set for an actor. Using only the words from the generated scene, without any
additional improv/screenwriting, is definitely the most challenging -- and
that's what they did here, for a full 10-minute screenplay!

~~~
fallous
I don't mean a single word prompt, I mean you can only use the single word or
phrase as dialog and have to perform a scene (also randomly provided) with
other actors. Bob gets "roses," Jill gets "fire," and a third person has
unrestricted dialog... now do a scene where Jill is a salesperson getting
ready for a pitch to a big client.

The word salad you're describing would be no more difficult than a scene where
the actors have to speak gibberish (faux Klingon or something)... which,
coincidentally, isn't unlike another improv exercise where one person must
convey a message, given to them by the director, to the other actors while
talking gibberish or using only a single vowel sound.

------
SideburnsOfDoom
Apparently the "essence of sci-fi" is people saying "I don't know what you're
talking about" to each other.

~~~
phaemon
I notice one of the inputs was The Phantom Menace. This film's dialogue was
better.

~~~
Pica_soO
Meesa don't know what you are talking about

------
Houshalter
This is a weird thing about LSTM-generated sequences. Any random 5 seconds of
this sounds reasonable, like it could come from an actual movie, but there is
no coherence between those sections. It ebbs and flows randomly around the
state space like a Markov chain, with no direction.

I think this is because LSTMs have very little "memory". They have a learned
procedural memory, but no episodic memory, so they have a very difficult time
keeping track of information. E.g. if I say "the cat was in the box", and a
few sentences later I say "the cat is in the __", the LSTM has a hard time
guessing "box".

Second, it works by predicting the next character in a sequence. This is _not
how humans write_, at all. If you asked a human to predict the next word in a
sequence, and then the word after that, and then the word after that, etc.,
you would also get something like this.
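
To make the "predict one character at a time" point concrete, here is a
minimal sampling-loop sketch in PyTorch (my illustration, not the actual model
behind Benjamin; the network is untrained and every name is made up):

    import torch
    import torch.nn as nn

    vocab = list("abcdefghijklmnopqrstuvwxyz .,\n")
    stoi = {c: i for i, c in enumerate(vocab)}

    embed = nn.Embedding(len(vocab), 32)
    lstm = nn.LSTM(32, 128, batch_first=True)
    head = nn.Linear(128, len(vocab))

    def sample(prompt="the cat was in the ", n=100, temperature=0.8):
        state = None                               # (h, c): the LSTM's only "memory"
        out = list(prompt)
        x = torch.tensor([[stoi[c] for c in prompt]])
        for _ in range(n):
            h, state = lstm(embed(x), state)
            logits = head(h[:, -1]) / temperature  # scores for ONE next character
            idx = torch.multinomial(torch.softmax(logits, dim=-1), 1)
            out.append(vocab[idx.item()])          # sample locally; no global plan
            x = idx                                # feed the sampled character back in
        return "".join(out)

    print(sample())

The only thing carried between steps is a fixed-size hidden state, which is
why any short window sounds plausible while the whole thing drifts with no
direction.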

~~~
SideburnsOfDoom
The output is not far off what a Markov chain or "Dissociated press" (1)
technique would make. I did one of those 20 years ago for fun in a few hours;
it wasn't AI then.

1)

[https://en.wikipedia.org/wiki/Dissociated_press](https://en.wikipedia.org/wiki/Dissociated_press)

[http://www.catb.org/jargon/html/D/Dissociated-Press.html](http://www.catb.org/jargon/html/D/Dissociated-Press.html)
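
For anyone who hasn't played with one: the whole Dissociated Press trick fits
in a few lines. A minimal word-level sketch in Python (the corpus here is a
made-up stand-in):

    import random
    from collections import defaultdict

    corpus = ("the cat was in the box and the cat saw the box "
              "and nothing is going to be a thing").split()

    # Map each word to every word that ever followed it.
    chain = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        chain[a].append(b)

    def babble(start="the", n=20):
        word, out = start, [start]
        for _ in range(n):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)  # only the previous word matters
            out.append(word)
        return " ".join(out)

    print(babble())

Each word is chosen by looking only at the one before it, which is exactly why
the output reads fine in any five-word window and goes nowhere overall.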

------
markyc
AI is not here, as far as we know, so let's stop throwing it around with every
other topic.

~~~
stevehb
> as far as we know

Or maybe AI is here, and it's throwing out all these articles to hide its
emergence. :)

~~~
markyc
Ender's Game suggests such a genesis story for AI: born as a "spark" of
self-awareness in the worldwide computer network, where it developed silently
for a while before making itself known to humans.

~~~
xyience
Oh wow, is that where that meme started? (All I know about Ender's Game is
from
[https://web.archive.org/web/20110319084212/http://plover.net...](https://web.archive.org/web/20110319084212/http://plover.net/~bonds/ender.html))

~~~
arcticfox
I don't know anything about the site you linked, but at face value that
article is absolutely awful. (Is it satire?)

For example, the point at the end says the book celebrates a guilt-free
genocide. That's not the point at all! OSC wrote the entire rest of the series
about his main character grappling with the guilt of what he did.

------
deepnet
A sort of Proxy Turing Test would be for a computer to write a character with
a convincing inner dialogue.

If a machine author can produce a simulacrum of consciousness through good
characterisation, then that seems like a partial theory of mind.

------
sjbase
Ignoring for a moment that the script makes zero sense... I expected the
writing to feel more sci-fi-esque: space, aliens, computers, physics, etc.

Maybe the conclusion to draw is that sci-fi writing is 99% like any other
storytelling in terms of how characters think, behave, and talk.

~~~
tikhonj
Scripts of popular sci-fi movies, anyhow. Most films aren't going to be _A
Clockwork Orange_. Instead, they'll just have the same sort of dialogue and
tropes as other popular movies, with the science fiction aspect relegated to
the sets, props and costumes.

------
jorjordandan
To be fair, I felt pretty much the same as I did at the end of Primer.

------
Pica_soO
A Chinese room making party conversation and keeping to the filler lines that
let it connect to the most conversations?

------
sickbeard
Can anyone explain the appeal of getting an AI to generate a movie (or chat
with you)? I find that experience "plastic" -- not exactly the right word, but
I hope it conveys the feeling I get when something programmed pretends to be
intelligent.

~~~
xemdetia
I'm very interested in this area, and for me it's a couple of different
factors.

From the engineering side:

1. There is a model of what constitutes a valid screenplay, and it can grow
and be enhanced over time. What gets generated is an 'acceptable' output.

2. There is lots of extraction of storytelling, adding features and narrative
constructs to the model (algorithmically or manually).

3. Representing that knowledge in a form that makes sense, in English.

4. You really get to play with the pieces of what makes language and the
composition of language work, rather than just consuming it, which is sort of
the same as authoring under your own power.

5. The act of carrying a tune. Lots of AIs right now build a model with an eye
on the next step, which is great -- but combining that with building a
structure that has a beginning, a middle, and an end is much harder.

From the output/end result side:

1. Lack of cultural preconceptions -- an AI doesn't know what the last Marvel
movie was, an AI never saw Back to the Future, an AI can't quote Star Trek or
Gilmore Girls references off the top of its head (unless it was informed), an
AI doesn't know about WWII, the Crusades, or other historical events -- lots
of things like that.

2. Lack of social norms -- developing a morality system for the end output is
very difficult, so the AI author doesn't know what is or isn't appropriate.

3. The act of serendipity. Just like doing a materials science or engineering
optimization through computers, you can have a sequence of events that come
together in an unexpected way. Instead of getting an interesting new material
or alloy, you end up getting something that is a valid output of the model,
with all of its warts for and against.

4. It fits the form of a 'single room/closed room' movie such as 12 Angry Men:
the entire universe, as the AI knows it, is considered when it constructs a
script.

This ends up holding, for me at least, the same kind of intrigue as watching
sports or a well-written mystery. It is a story told within a certain
framework, and there is always a chance for something truly special to come
from it.

------
nxzero
Interesting that the AI is able to produce a meaningful storyline without
fully understanding characters; makes me wonder how it would do at producing
stories without any characters. Such stories exist, and they have a bit of a
SciFi feeling too.

~~~
Joeboy
> Interesting that the AI is able to produce a meaningful storyline without
> fully understanding characters

Does it? I think the actors and filmmakers did a pretty good job of creating a
film with meaning and characters, despite not having much to work with. In
fact a lot of the fun of this is watching the actors try to make something out
of the word salad they've been given.

Edit: If you want to see the script unembellished by the cast and crew it's
here [https://www.docdroid.net/lCZ2fPA/sunspring-final.pdf.html](https://www.docdroid.net/lCZ2fPA/sunspring-final.pdf.html)

It's a damn thing scared to say. Nothing is going to be a thing.

~~~
ccvannorman
Also _we_ fill in tons of subtext I'm quite sure the AI has no idea about.

~~~
notahacker
Also, this script appears to have been selected by a human actively looking
for apparently meaningful and original structure and sentiments in a long list
of attempts by Benjamin, most of which clearly derived from the source texts.

> For a while, Sharp said, Benjamin kept "spitting out conversations between
> Mulder and Scully, [and you'd notice that] Scully spends more time asking
> what's going on and Mulder spends more time explaining."

Can't help thinking that this reported response to questions about the film
appears more meaningful -- poignant, almost -- than anything in the actual
script:
> The world is still embarrassed.
> The party is with your staff.
> My name is Benjamin.

~~~
schoen
It reads a lot like Racter. (I think I read that _The Policeman's Beard is
Half-Constructed_ actually involved a lot of human selection of amusing and
interesting utterances.)

------
spacemanmatt
AI?

If it were written by a man, we would call him stupid.

~~~
r3bl
AI is basically divided into three "levels": ANI (Artificial Narrow
Intelligence), AGI (Artificial General Intelligence) and ASI (Artificial Super
Intelligence).

We already have a bunch of narrow AIs, as in, algorithms that do a specific
thing in a way no human would ever be capable of doing (think: Google
searches). If such AIs (well, large collections of algorithms combined) were
thrown into any scenario other than the one they were created for, they would
be useless, and a human could perform better than them because humans adjust
more easily (we don't have to change a bunch of lines in our brains to be able
to drive on the left side of the road, it just takes us some time to adjust).

This film is a perfect example. We have an ANI that was intended for one
purpose, we have a scenario for which it wasn't created (imagining SciFi
scenarios), and it performs worse in that scenario than a human would.

However, we've now explored what it can do in this scenario. We laughed at how
terribly it behaved, and we can either move away from it, improve the AI so it
can do this one specific task better than humans, or improve the AI so it can
do it kind of okay, but not brilliantly (like a random person would if you
stopped them in the middle of the street and asked them to write a SciFi
scenario).

Once we have an AI that behaves kind of okay, but not brilliantly, in any
situation we can possibly put it in, and that can at the same time learn from
its mistakes and improve itself so as not to repeat them, we have an AGI
(Artificial General Intelligence).

An AGI behaves exactly like a human, _but_, because it will be able to surpass
the physical limits that we humans have (as in, brain capacity, dependence on
food/water/oxygen, etc.) and because it is able to improve itself by learning
from its own mistakes, soon after it hits the AGI mark it will surpass that
and become an ASI (Artificial Super Intelligence).

What happens then, nobody knows. It's hard to imagine how something with a
higher intelligence than ours is going to behave. All we can do is try to come
up with a number of plausible scenarios. If there are negative ones (and sure
as hell there are), then we need to address them _before_ we even create
something _close_ to an AGI, because by the time the AI hits the AGI mark,
it's already too late for us to do anything about it.

There you go, AI philosophy 101.

~~~
daveguy
> because it is able to improve itself by learning from its own mistakes, soon
> after it hits the AGI mark it will surpass that and become an ASI
> (Artificial Super Intelligence).

I was with you until this point. You have a great description of why ANI is
not AGI, but this AGI => ASI step is just hand-waving.

An AGI will have some of the same issues to deal with:

1) Opportunity cost. Yes, it will have more time because it doesn't sleep,
although maybe it will find that spending 1/3 of its time/resources cleaning
out the cobwebs is optimal. Regardless, it will have to spend resources
(including time) on some things rather than others. The leap from general
adaptability to perfect selection of tasks is likely just as large as, if not
larger than, the leap from ANI to AGI.

2) Some problems are just plain hard. There are algorithms for learning
optimal results -- even brute force. The problem is that they are too complex
for a realistically fast solution. Just because an algorithm becomes as
adaptable as a human doesn't mean the computational complexity is reduced.
Therefore, either the AGI will consume massive resources to get a single
optimal answer, or it will be fallible just like humans.
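
To put a number on the brute-force point, here's a toy sketch in Python (my
illustration, nothing from the thread; the objective is a made-up stand-in).
The search is provably optimal, but the candidate count doubles with every
variable you add:

    from itertools import product

    def score(bits):
        # Stand-in objective; imagine a far more expensive evaluation.
        weights = [3, -1, 4, 1, -5, 9]
        return sum(b * w for b, w in zip(bits, weights))

    n = 6
    # Enumerates all 2**n = 64 candidates and keeps the best -- guaranteed optimal.
    best = max(product([0, 1], repeat=n), key=score)
    print(best, score(best))

    # At n = 100 the same loop would face 2**100 candidates:
    # still guaranteed optimal, and utterly hopeless in practice.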

When we get AGI, that just means we will have adaptable general algorithms;
they will still have to learn, and they will still be constrained by limited
resources. In other words, AGI does not imply ASI.

