Here's the description from the slide:
• Ilya Sutskever (2011) trained a special type of recurrent neural net to predict the next character in a sequence.
• After training for a long time on a string of half a billion characters from English Wikipedia, he got it to generate new text.
  – It generates by predicting the probability distribution for the next character and then sampling a character from that distribution.
  – The next slide shows an example of the kind of text it generates. Notice how much it knows!
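The generation loop the slide describes — predict a distribution over the next character, sample from it, append the sample, repeat — can be sketched as below. This is only an illustrative toy, not Sutskever's actual model (he trained a multiplicative RNN on half a billion characters); here a simple bigram table stands in for the trained network, and all names (`train_bigram`, `next_char_distribution`, `generate`) are hypothetical.

```python
import random
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each character, which characters follow it.
    A stand-in for the trained recurrent net."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def next_char_distribution(counts, context):
    """Predict a probability distribution over the next character."""
    followers = counts.get(context[-1])
    if not followers:                       # unseen context: back off to uniform
        alphabet = list(counts)
        return {c: 1.0 / len(alphabet) for c in alphabet}
    total = sum(followers.values())
    return {c: n / total for c, n in followers.items()}

def generate(counts, seed_text, length):
    """Generate text one character at a time: predict the distribution
    for the next character, then sample a character from it."""
    text = seed_text
    for _ in range(length):
        dist = next_char_distribution(counts, text)
        chars, weights = zip(*dist.items())
        text += random.choices(chars, weights=weights)[0]
    return text

random.seed(0)
counts = train_bigram("the cat sat on the mat. the cat ate the rat.")
print(generate(counts, "the", 40))
```

Swapping the bigram table for a recurrent net changes only the prediction step; the sampling loop is the same, which is why the net's output below reads as fluent character-by-character text.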
Some text generated one character at a time by Ilya Sutskever’s
recurrent neural network:
In 1974 Northern Denver had been overshadowed by CNL, and several
Irish intelligence agencies in the Mediterranean region. However,
on the Victoria, Kings Hebrew stated that Charles decided to
escape during an alliance. The mansion house was completed in
1882, the second in its bridge are omitted, while closing is the
proton reticulum composed below it aims, such that it is the
blurring of appearing on any well-paid type of box printer.
This is especially important, as neural-net knowledge seems to be evolving quickly. And perhaps someone can explain why this paper matters relative to the plethora of papers and approaches "out there".