
Show HN: Layered – Neural Networks in Python 3 - danijar
https://github.com/danijar/layered
======
albertzeyer
Such projects are nice for educational use, but not much beyond that when
there is no GPU implementation.

The readme points out that this is for education, but it's still not clear
whether it's supposed to have practical use beyond that (some educational
projects do develop into something useful). Either way, it should include a
small comparison to other frameworks (maybe Python frameworks only), e.g.
Theano, Blocks, Keras, Brainstorm, Neon, .... I think Brainstorm
(https://github.com/IDSIA/brainstorm) and Neon
(https://github.com/NervanaSystems/neon) are somewhat close to your
framework, because they are not based on Theano and thus do not do automatic
symbolic gradient calculation, but have explicit backprop code instead. You
should also point out what GPU implementation you have (if any), whether it
supports multiple GPUs, and/or maybe other distributed setups.
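For readers unfamiliar with the distinction: "explicit backprop" means the
gradient of each layer is written out by hand rather than derived
symbolically by the framework. A minimal standalone sketch of that style for
a single dense sigmoid layer (all names here are illustrative, not taken
from Layered, Brainstorm, or Neon):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))       # batch of 4 inputs, 3 features
w = rng.standard_normal((3, 2))       # weights of one dense layer
target = rng.standard_normal((4, 2))  # dummy regression targets

# Forward pass.
z = x @ w
a = 1.0 / (1.0 + np.exp(-z))          # sigmoid activation
loss_before = 0.5 * np.sum((a - target) ** 2)

# Backward pass, written out by hand (no symbolic differentiation):
grad_a = a - target                   # d loss / d a
grad_z = grad_a * a * (1.0 - a)       # chain rule through the sigmoid
grad_w = x.T @ grad_z                 # d loss / d w

# One gradient-descent step; the loss should decrease.
w -= 0.05 * grad_w
a_new = 1.0 / (1.0 + np.exp(-(x @ w)))
loss_after = 0.5 * np.sum((a_new - target) ** 2)
print(loss_before, loss_after)
```

A Theano-based framework would instead build `loss` as a symbolic expression
and derive the gradients automatically.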

------
lqdc13
Interesting, but really slow to train.

Partially because of the class __call__ instead of plain function calls,
among other things.
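That hunch is easy to measure with timeit. A standalone micro-benchmark
(the toy body is illustrative, not Layered's actual code):

```python
import timeit

# Calling an instance through __call__ adds a type-slot dispatch on top of
# the plain function call, so it is typically somewhat slower.
class Activation:
    def __call__(self, x):
        return x * x

def activation(x):
    return x * x

obj = Activation()

t_obj = timeit.timeit(lambda: obj(2.0), number=500_000)
t_fn = timeit.timeit(lambda: activation(2.0), number=500_000)
print(f"__call__: {t_obj:.3f}s  plain function: {t_fn:.3f}s")
```

The per-call difference is small, so it only matters on hot paths that are
called millions of times per epoch.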

~~~
danijar
Thanks for taking a look. Do you really think __call__ affects performance
that much? I'll look into OpenCL to improve performance.

~~~
lqdc13
The lib looks good otherwise. Might be a useful educational tool if anything.

In my tests it's 2x slower, but that might not be the main reason. I didn't
profile it at all.

Another thing is that it doesn't seem to use ATLAS to scale across all the
cores, even though my Python is linked against it.

