
Yes, Hebel doesn't have a ton of features or a kitchen sink of different models yet, but I hope that's going to change. There are lots of things that are quite easy to implement in the current framework, such as:
- Neural net regression
- Autoencoders
- Restricted Boltzmann machines
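To illustrate why something like an autoencoder is a small addition, here is a minimal sketch in plain NumPy (not Hebel's actual API, just the idea): a tied-weight encoder/decoder pair and its reconstruction loss.

```python
import numpy as np

# Minimal tied-weight autoencoder sketch, assuming plain NumPy rather
# than Hebel's real classes: encode, decode with the transposed weight
# matrix, and measure squared reconstruction error.

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))   # input dim 8, hidden dim 4
b_h = np.zeros(4)                        # hidden bias
b_v = np.zeros(8)                        # visible bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(x @ W + b_h)             # encoder
    x_hat = sigmoid(h @ W.T + b_v)       # tied-weight decoder
    return h, x_hat

x = rng.random((16, 8))                  # a small batch of inputs
h, x_hat = forward(x)
loss = np.mean((x - x_hat) ** 2)         # reconstruction error
```

Training would then just be gradient descent on `loss`, which is the same machinery a framework already needs for plain feed-forward nets.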

There's a lot of interest in convolutional networks, and the best way to implement them will be to wrap Alex Krizhevsky's cuda-convnet, like DeepNet and PyLearn2 have, but this will require a bit more effort.

With respect to other deep learning packages, Hebel doesn't necessarily do everything differently, but depending on your needs it may be the best choice for a particular job.

PyLearn2 is big and monumental, and although I haven't used it much personally, it seems to be excellent. But as you mentioned, it's not necessarily easy to use, and if you want to extend it, you have to learn the Theano development model, which takes some time to grok.

DeepNet is quite similar to Hebel in its approach (even though it offers more models right now). However, DeepNet is based on cudamat and gnumpy, which I have often found to be quite unstable and slow. Hebel is based on PyCUDA, which is very stable and, according to some preliminary tests I did, runs about twice as fast as cudamat.

So, the idea of Hebel is that it should make it easy to train the most important deep learning models without much setup or having to write much code. It is also supposed to make it easy to implement new models through a modular design that lets you subclass existing layers or models to implement variations of them.
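The subclassing idea can be sketched like this (hypothetical class names, not Hebel's real API): a base layer owns the weights and forward pass, and a variation overrides only the piece that differs.

```python
import numpy as np

# Illustration of the modular design idea: a base layer with a forward
# pass, and a subclass that changes only the nonlinearity. These class
# names are made up for illustration and are not Hebel's actual API.

class LinearLayer:
    def __init__(self, n_in, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def activation(self, z):
        return z                          # identity by default

    def forward(self, x):
        return self.activation(x @ self.W + self.b)

class ReluLayer(LinearLayer):
    # A model variation implemented by overriding a single method.
    def activation(self, z):
        return np.maximum(z, 0.0)

x = np.random.default_rng(1).normal(size=(5, 3))
out = ReluLayer(3, 2).forward(x)          # shape (5, 2), all non-negative
```

The appeal of this design is that a new model reuses the parent's parameter handling and training loop for free.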

Question: do you plan (now or in the future) to support converting GPU nets to CPU, perhaps by keeping the weights and architecture definition separate from PyCUDA-dependent structures during serialization?

I have found that using a trained net for preprocessing can be accomplished using very limited resources (read: Core 2 Duo laptop). This is one of the very nice features of DeCAF, which could allow for some interesting applications on embedded devices.
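The separation suggested above could look something like this (a sketch with NumPy and pickle; the dict layout is an assumption for illustration, not an actual Hebel or DeCAF format): only plain arrays and metadata are serialized, so loading and inference need neither PyCUDA nor a GPU.

```python
import pickle
import numpy as np

# Sketch: store weights and architecture as plain NumPy/Python objects,
# so a model trained on a GPU can be loaded and run on a CPU-only
# machine. The state-dict layout here is hypothetical.

def save_model(path, weights, architecture):
    # weights: list of (W, b) NumPy pairs; architecture: plain metadata
    state = {"architecture": architecture,
             "weights": [(np.asarray(W), np.asarray(b)) for W, b in weights]}
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_and_run(path, x):
    with open(path, "rb") as f:
        state = pickle.load(f)
    for W, b in state["weights"]:         # plain CPU forward pass
        x = np.maximum(x @ W + b, 0.0)    # ReLU layers, for illustration
    return x

rng = np.random.default_rng(0)
weights = [(rng.normal(size=(4, 3)), np.zeros(3)),
           (rng.normal(size=(3, 2)), np.zeros(2))]
save_model("model.pkl", weights, {"layers": [4, 3, 2], "activation": "relu"})
y = load_and_run("model.pkl", rng.normal(size=(6, 4)))
```

For GPU training, the framework would convert these arrays to device buffers at load time; the on-disk format stays device-agnostic.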

Great work by the way - I look forward to testing it out soon!

That would be possible, but since Hebel is mainly meant to be used in research I don't think it's a big priority now. The most important reason to do this would be to allow development on laptops and workstations without NVIDIA cards and to run the finished model on CUDA hardware later.

As far as embedded devices go (I assume you're talking about ARM CPUs, etc.), they are probably too underpowered to run neural nets anyway, or the models would have to be rewritten in highly specialized C.

Yup, I didn't mean to belittle Hebel. I actually just meant that the lack of features is likely why I hadn't heard of it. From the looks of things, it's on a nice path. Your philosophy about what Hebel should be sounds similar to what's been done with MORB for building RBMs, and that's one of the reasons I've always liked that library. Although MORB still incurs the 'working with Theano' conceptual overhead.

The reason you haven't heard of it is probably that I only put the code on GitHub less than two weeks ago ;) - I've really been blown away by the response, though.
