
Keras.js – Run trained Keras models in your browser - transcranial
https://github.com/transcranial/keras-js
======
zan2434
This is awesome! Can you describe how you implemented the WebGL ops a bit
more? Did you have to write your own convolution kernel with GLSL for example?

~~~
transcranial
Thanks! For WebGL, credit goes to
[https://github.com/waylonflinn/weblas](https://github.com/waylonflinn/weblas).
I only really use GEMM, but it works quite well. In keras.js, convolution is
implemented with the oft-used im2col transformation, which turns it into a
matrix multiply followed by a reshape. Convolution kernels written directly in
GLSL could potentially provide speed gains, I'm sure, but I can't even imagine
writing them for tensors of arbitrary shape.
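The im2col trick mentioned above can be sketched in a few lines. This is a minimal single-channel, stride-1, valid-padding illustration (the names `im2col` and `conv2d_via_gemm` are mine, not keras.js's); the real implementation handles channels, strides, and padding, and runs the GEMM on the GPU via weblas:

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll each kh x kw patch of a single-channel image into one column."""
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_via_gemm(x, kernel):
    """Convolution (cross-correlation) as a single matrix multiply + reshape."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = kernel.ravel() @ im2col(x, kh, kw)  # the GEMM step
    return out.reshape(oh, ow)
```

The point is that all the irregular patch-gathering happens in `im2col`; what remains is one dense GEMM, which is exactly the primitive weblas accelerates.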

~~~
dharma1
Check out the Winograd optimisations used in Nervana's neon - very fast

[https://www.nervanasys.com/winograd-2/](https://www.nervanasys.com/winograd-2/)
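The smallest Winograd minimal-filtering case, F(2,3), is compact enough to sketch: it produces two outputs of a 3-tap filter from four inputs using 4 multiplications instead of the naive 6. (This is the 1-D building block; Neon's fast convolutions tile the 2-D version of this over the image.)

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap correlation from four inputs,
    using 4 multiplies (m1..m4) instead of the naive 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]
```

In practice the filter-side transform (the terms involving only `g`) is precomputed once per kernel, so the per-pixel cost is just the 4 multiplies and a few adds.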

------
aab0
What sort of performance can be expected compared to running in the terminal?
How large an NN will this scale to in practice? I see a 50-layer ResNet is
mentioned, but not 1000 layers?

~~~
dharma1
On these demos, I'm getting several seconds for ImageNet Inception v3
recognition on an i7 MacBook Pro (NVIDIA GPU), in both GPU and CPU modes.

I've built TensorFlow for Android, running Inception v3 trained on ImageNet,
and it's much faster: pretty much realtime on just the mobile CPU, around
5 fps. On a desktop CPU/GPU it's obviously even faster.

------
dguest
This is awesome. We're trying to do something similar but moving in the
opposite direction language-wise by implementing the models in C++:

[https://github.com/dguest/lwtnn](https://github.com/dguest/lwtnn)

The idea is to have something lightweight that we can easily copy into our
analysis framework (which is written in C++).

If anyone reading this knows of a library that already does this it could save
us some time.

------
matt4077
Wonderful! And because all praise comes with work in OSS: I wish the network
diagram would show intermediate states where possible. I've seen some examples
where – with the right presentations – they gave fantastic insights into the
network's "thinking".

------
dharma1
Very cool. Didn't work on Android (Chrome) in either GPU or CPU mode.

Usual tricks like pruning the model and quantising to 8-bit should get the
model sizes down significantly from 100 MB. Or using an architecture like
SqueezeNet.
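The 8-bit quantisation mentioned above can be sketched simply: store one byte per weight plus a per-tensor scale and offset, roughly a 4x size reduction from float32. This is a generic linear-quantisation illustration (function names are mine), not how any particular Keras exporter does it:

```python
import numpy as np

def quantize_uint8(w):
    """Affine-quantize a float32 weight tensor to uint8 + (scale, offset)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant tensors
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float32 weights at load time."""
    return q.astype(np.float32) * scale + lo
```

Round-trip error is bounded by half a quantisation step (`scale / 2`), which convnets typically tolerate well at inference time.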

------
oelmekki
This is cool, especially given how JavaScript is everywhere. Is it browser
specific, or could it run in other JavaScript environments, like Node.js?

------
fbreduc
cool but..

> Offload computation entirely to client browsers

i'm not sure that's a big benefit really

~~~
f00_
They're using pre-trained models though, like

[https://github.com/heuritech/convnets-keras](https://github.com/heuritech/convnets-keras)

I don't think they expect people to train them in the browser, just run the
pretrained ones for image recognition or something

~~~
jayhack0
Training may not yet be available, but how about inference?

~~~
dharma1
Inference is what this does

------
dimatura
This is great! The network visualizations are also pretty sweet. How are those
generated?

~~~
transcranial
Thanks. Nothing fancy with the network architecture diagrams. The layers are
just div elements with the layer name as the id. Inbound/outbound connections
between layers are defined by layer name, extracted from the Keras JSON
config, and used to draw the SVG paths.
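The extraction step described above can be sketched as follows. This assumes the classic functional-API JSON layout, where each entry in `config.layers` carries its `inbound_nodes` as lists of `[layer_name, node_index, tensor_index, ...]` triples; the exact schema varies between Keras versions:

```python
import json

def layer_edges(model_json):
    """Return (source, target) layer-name pairs from a Keras model JSON,
    i.e. the edges a diagram would draw as SVG paths between layer divs."""
    config = json.loads(model_json)["config"]
    edges = []
    for layer in config["layers"]:
        for node in layer.get("inbound_nodes", []):
            for inbound in node:
                edges.append((inbound[0], layer["name"]))
    return edges
```

Each edge then maps directly to one SVG path between the div with `id = source` and the div with `id = target`.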

------
stared
It does not work on my Firefox (but works on Chrome).

------
kiechu
That's really good stuff!

------
visarga
This has teaching potential.

------
botw
How do I get the demos running? They're just images now.

------
lucidrains
Thank you thank you! I love you

