- Google DeepMind, and Google Brain is slowly moving over as well.
- Certain people at IBM
- LISA LAB (not exclusively, but some students started using it)
- Purdue e-lab
- Several smaller companies (somewhere in the range of 10-100)
There are definitely a few commonly asked questions, and this is my personal perspective on them.
Why torch/lua, why not python?
No reason. Just because. Mostly because LuaJIT is awesome (with its quirks) and extremely portable (we routinely embed torch in tiny devices; afaik that's not practically possible with python).
Is Torch better than Theano, etc.?
Better and worse. Every framework has its oddities.
I like the super-simple design and the compactness of traversing from high-level easy-to-use API to bare-metal C/assembly.
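To give a flavor of what I mean, here's a minimal sketch with the stock tensor API (just an illustration: the high-level call dispatches down to the C/BLAS backend, and the same object exposes its raw storage directly):

    require 'torch'

    -- high-level: create tensors and multiply them
    local a = torch.randn(3, 4)
    local b = torch.randn(4, 2)
    local c = torch.mm(a, b)      -- dispatches to the C/BLAS backend

    -- a couple of layers down, the same object is just raw storage
    print(c:size())               -- 3x2
    print(c:storage():size())     -- 6 contiguous doubles
    local row = c[1]              -- a view into that storage, not a copy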
Also, torch's ecosystem wasn't grown with only lab experiments in mind: with Yann's strong robotics research, packages were always developed with practicality in mind. Custom chips are being developed for convnets (TeraDeep), and they use Torch.
Where are the docs???
Wherever there's documentation, I've tried to make people aware of it, mostly by consolidating everything Torch onto this one page:
What about Julia?
I like Julia a lot, and it's definitely cool, but its packages for NNs and GPUs aren't very strong, so Torch's advantage over Julia is simply the code that's already written.
If there are any more questions, feel free to ask them here or just open an issue on the github package.
Thanks for reading.
Edit: apologies for the formatting; I'm not very good at Hacker News markup.
Are there any plans for Torch7 to support something similar? I suppose it would be nontrivial, since to my knowledge Torch7 does not use symbolic representations of the computations internally, which is what enables Theano to do automatic differentiation in the first place.
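For reference, this is roughly what the current approach looks like in Torch7's nn package, where each module hand-implements its backward pass rather than deriving it from a symbolic graph (a minimal sketch):

    require 'nn'

    -- gradients come from each module's explicit backward,
    -- not from differentiating a symbolic graph as in Theano
    local mlp = nn.Sequential()
    mlp:add(nn.Linear(10, 5))
    mlp:add(nn.Tanh())
    mlp:add(nn.Linear(5, 1))
    local criterion = nn.MSECriterion()

    local input  = torch.randn(10)
    local target = torch.randn(1)

    -- forward, then backward: explicit calls, no compilation step
    local loss = criterion:forward(mlp:forward(input), target)
    local gradOutput = criterion:backward(mlp.output, target)
    mlp:zeroGradParameters()
    mlp:backward(input, gradOutput)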
The comparison helps quite a bit. Mind if I lift this for our site? I'd link back to this thread as well.
Could you give some examples of its quirks?
Edit: to clarify, this does not mean torch can't use more than 2GB; it's native lua allocations that can't exceed 2GB, so ffi-based or C-based allocations (which is how torch tensors are allocated) have no such limit.
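A minimal sketch of the distinction, using LuaJIT's ffi (plain malloc here stands in for the C allocation path that torch storages use; assumes a 64-bit build with enough RAM):

    local ffi = require('ffi')

    ffi.cdef[[
    void *malloc(size_t size);
    void free(void *ptr);
    ]]

    -- 4GB: too big for the native lua heap, fine as a C allocation,
    -- because malloc'd memory lives outside the GC-managed heap
    local nbytes = 4 * 1024 * 1024 * 1024
    local raw = ffi.C.malloc(nbytes)
    assert(raw ~= nil, 'malloc failed')
    local buf = ffi.gc(ffi.cast('double *', raw), ffi.C.free)

    buf[0] = 1.0   -- indexable like a plain array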
I am the lead engineer at ArrayFire. Is there any way I can get in touch with you?
What's the license for the free edition? I'd be sticking to open-source code until we secure capital, but would be very interested in your commercial services down the line.