
The Unreasonable Effectiveness of Recurrent Neural Networks (2015) - sidcool
https://karpathy.github.io/2015/05/21/rnn-effectiveness/
======
gavinpc
Good article, but ugh, please leave Shakespeare alone if (as you freely admit)
you don't know anything about Shakespeare. Just because the real thing sounds
strange to you doesn't mean that your "samples" are anything but nonsense.

 _edit_ To (preemptively) stress that I am not trolling: I find the topic and
the writeup interesting and useful, and I appreciate the work that people do
to share their experiences. But this is not a good example of the RNN working;
it's a good example of the RNN _not_ working, and should either be omitted or
presented at least neutrally, instead of glibly saying

> I can barely recognize these samples from actual Shakespeare :)

~~~
scott_s
It works about as well/poorly as the other examples (Paul Graham, Wikipedia,
math in LaTeX). The samples from all of the examples are nonsense. What's
interesting is that they, mostly, follow the _form_ of the original.

~~~
gavinpc
Fair enough. My objection was that the OP implied this is a passable
substitute, but okay, in the larger context it's clearly a joke.

------
okket
Previous discussion:
[https://news.ycombinator.com/item?id=9584325](https://news.ycombinator.com/item?id=9584325)
(458 days ago, 211 comments)

------
adamwi
Really interesting article (missed the previous post)!

How far is this from being implementable in a practical application, such as
a "spellchecker and improvement ideas" tool for coding? E.g. train on "good"
code from open source projects, then have it running while you code,
highlighting areas with potential errors or room for improvement (not
counting trivial errors already caught by e.g. lint tools).
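The core idea could be sketched roughly like this: score new code by how likely it is under a language model trained on "good" code, and flag low-scoring spans. A real tool would use a char-RNN as in the article; this hypothetical sketch substitutes a tiny character-bigram model (with made-up function names) just to keep it self-contained.

```python
# Sketch of the idea above: train a character-level bigram model on
# "good" code, then flag lines whose average log-probability is low.
# A bigram model stands in for the RNN purely for illustration.
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count character-bigram frequencies in the training corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def line_score(counts, line):
    """Average log-probability per bigram of a line under the model,
    with add-one smoothing over an assumed 128-character alphabet."""
    V = 128
    total = 0.0
    for a, b in zip(line, line[1:]):
        follows = counts.get(a, {})
        total += math.log((follows.get(b, 0) + 1) / (sum(follows.values()) + V))
    return total / max(len(line) - 1, 1)

# Toy "good code" corpus; real training data would be large projects.
good = "for i in range(10):\n    print(i)\n" * 50
counts = train_bigram(good)

# A plausible line scores higher than a garbled one.
ok = line_score(counts, "for j in range(20):")
odd = line_score(counts, "f0r j !n r@nge(2O;:")
print(ok > odd)  # prints True
```

Lines whose score falls below some threshold relative to the rest of the file would be the candidates to highlight in an editor.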

