
Show HN: LSTMs for deep neural sentiment analysis, running in your browser - struct
http://dracula.sentimentron.co.uk/sentiment-demo/
======
struct
Now also available as a Node.js module:
[https://www.npmjs.com/package/dracula-sentiment](https://www.npmjs.com/package/dracula-sentiment)

Feedback welcome!

------
samblr
Are you planning a Chrome extension of this anytime soon? It could read the
content of a link and show a sentiment emoji or symbol for Google search
results or a Facebook feed.

~~~
struct
It's not on my radar at the moment but it might be a fun experiment! I've
often thought that sentiment might be useful in the context of parental
controls: whether you could de-emphasise links or messages which might contain
distressing content.

~~~
samblr
I see you haven't used any standard NLP or deep learning libraries like
TensorFlow, Theano, etc.! What linear algebra and stats skills came into play
in writing your own ML code? I'm new to ML and learning the prerequisites for
it right now - curious to know.

~~~
struct
To answer the spirit of your question: the really interesting thing about deep
models is that once you've developed and trained them, the forward pass is
really easy to deploy. The forward-only version I did here relies on
numeric.js[1] for the basic matrix operations (addition, dot-product, etc.).
The really useful and amazing thing about both TensorFlow and Theano is that
they can automatically differentiate the model, so training becomes tractable
without you deriving the gradients by hand; that means the linear algebra
concepts you actually need aren't too hard to grasp. Aside from that, you'll
need lots of time, a bit of money, and patience: I first started adapting
Dracula from a tutorial[2] (although the structure of the model is different)
about a year ago, but things didn't really get going until November last year,
after I'd bought a decent GPU (a GTX 980). Happy to answer any more questions
:D

[1] [http://numericjs.com](http://numericjs.com)

[2] [http://deeplearning.net/tutorial/lstm.html](http://deeplearning.net/tutorial/lstm.html)
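To illustrate how simple the forward pass is once the weights are trained, here is a minimal sketch of a single LSTM step in plain JavaScript. This is not the actual Dracula code: it uses hand-rolled helpers in place of numeric.js, and the weight values in the example are arbitrary placeholders.

```javascript
// Basic vector/matrix helpers of the kind numeric.js provides.
const dot = (M, v) => M.map(row => row.reduce((s, w, i) => s + w * v[i], 0));
const add = (a, b) => a.map((x, i) => x + b[i]);
const mul = (a, b) => a.map((x, i) => x * b[i]); // elementwise (Hadamard)
const sigmoid = v => v.map(x => 1 / (1 + Math.exp(-x)));
const tanhv = v => v.map(x => Math.tanh(x));

// One LSTM forward step: input x, previous hidden state hPrev, cell state cPrev.
function lstmStep(p, x, hPrev, cPrev) {
  // Each gate combines the current input with the previous hidden state.
  const gate = (Wx, Wh, b, act) => act(add(add(dot(Wx, x), dot(Wh, hPrev)), b));
  const i = gate(p.Wxi, p.Whi, p.bi, sigmoid); // input gate
  const f = gate(p.Wxf, p.Whf, p.bf, sigmoid); // forget gate
  const o = gate(p.Wxo, p.Who, p.bo, sigmoid); // output gate
  const g = gate(p.Wxg, p.Whg, p.bg, tanhv);   // candidate cell update
  const c = add(mul(f, cPrev), mul(i, g));     // new cell state
  const h = mul(o, tanhv(c));                  // new hidden state
  return { h, c };
}

// Tiny worked example: 2-dimensional input and state, arbitrary weights.
const I2 = [[0.5, 0.1], [0.1, 0.5]];
const p = {
  Wxi: I2, Whi: I2, bi: [0, 0],
  Wxf: I2, Whf: I2, bf: [1, 1],
  Wxo: I2, Who: I2, bo: [0, 0],
  Wxg: I2, Whg: I2, bg: [0, 0],
};
const out = lstmStep(p, [1, -1], [0, 0], [0, 0]);
```

Deploying the model is then just a matter of shipping the learned weights and looping `lstmStep` over the input sequence; no autodiff machinery is needed at inference time.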

~~~
samblr
Really appreciate your answer. :)

how much was performance difference b/w cpu vs gpu ? what is kind of insight
you gained from this experience ? Say for eg - a newbie like me would do
sentiment analysis using say IBM watson api ? So doing this by yourself must
have helped in some ways - a blog post kind of thing will help newbies. Thank
you.

~~~
struct
The performance difference between CPU and GPU is very substantial: the
consumer-class Nvidia GTX 980 I use can deliver about 4612 GFLOPS [2] of raw
performance, whereas the i7-6700k CPU I've got can only deliver about 113
GFLOPS [1]. Whilst those numbers aren't really comparable (and hence the
difference is not as large in practice), you still get a substantial speedup,
maybe 10x-15x (this mostly matters during the training phase). The insight
I've gained is that the kinds of APIs that Google/Microsoft/IBM give you are
definitely not magic and they _can_ be replicated and tweaked to a reasonable
degree. I also think the concentration of natural language understanding,
visual recognition and machine learning, and the datasets needed to make them
work, into the hands of a few very well-financed corporations is both a good
thing (because it lets you get up and running quickly and the predictions are
continuously updated) and a bit of a bad thing (for reasons of cost,
performance and customisability, since these types of applications are going
to become a more essential part of computing). I've also written a bit about
my experiences with Theano versus TensorFlow[3].

[1] [http://techgage.com/article/intels-skylake-core-i7-6700k-a-p...](http://techgage.com/article/intels-skylake-core-i7-6700k-a-performance-look/)

[2]
[https://en.wikipedia.org/wiki/GeForce_900_series](https://en.wikipedia.org/wiki/GeForce_900_series)

[3] [https://medium.com/@sentimentron/faceoff-theano-vs-tensorflo...](https://medium.com/@sentimentron/faceoff-theano-vs-tensorflow-e25648c31800)

~~~
samblr
Thank you - this interaction will help me as I move further in ML.

