
Torch7 – Scientific computing for LuaJIT
http://torch.ch/?repost=1
======
ot
[Note: this is a repost from
[https://news.ycombinator.com/item?id=7927077](https://news.ycombinator.com/item?id=7927077).
The original set off the voting ring detector and was demoted]

I came across Torch in this FB comment [1] by Yann LeCun:

    
    
        I have moved to Torch7. My NYU lab uses Torch7, Facebook AI
        Research uses Torch7, DeepMind and other folks at Google use
        Torch7.
    

Apparently it is used in many deep learning labs. I think that in Toronto they
mostly used Matlab and Python; does anybody know if this is still true?

[1]
[https://www.facebook.com/yann.lecun/posts/10152077631217143?...](https://www.facebook.com/yann.lecun/posts/10152077631217143?comment_id=10152089275552143&offset=0&total_comments=6)

~~~
smhx
I am one of the maintainers. From first-hand information, Torch is used by:

- Facebook

- Google DeepMind, and slowly Google Brain is moving over as well

- Certain people at IBM

- NYU

- IDIAP

- LISA lab (not exclusively, but some students started using it)

- Purdue e-lab

- Several smaller companies (10-100 companies)

There will definitely be several commonly asked questions, and this is my
personal perspective on them.

 _Why torch/lua, why not python+__?_

No reason in particular. Mostly because LuaJIT is awesome (with its quirks)
and extremely portable (we embed torch routinely in tiny devices; afaik
that's not practically possible with Python).

 _Is Torch better than Theano/etc.?_

Better and worse. Every framework has its oddities. I like the super-simple
design and the compactness of traversing from the high-level, easy-to-use API
down to bare-metal C/assembly.

Also, torch’s ecosystem wasn’t grown with exclusively lab experiments in
mind: thanks to Yann’s strong robotics research, packages were always
developed with practicality in mind. Custom chips are being developed for
convnets (TeraDeep), and they use Torch.

 _Where’s the doxx???_

Where there’s documentation, I’ve tried to make people aware of it, mostly by
consolidating everything torch-related onto this one page:
[https://github.com/torch/torch7/wiki/Cheatsheet](https://github.com/torch/torch7/wiki/Cheatsheet)

 _What about Julia?_

I like Julia a lot, and it’s definitely cool, but its packages for NNs and
GPUs aren’t very strong, so Torch’s advantage over Julia is simply the code
that’s already written.

If there are any more questions, feel free to ask them here or just open an
issue on the github package.

Thanks for reading.

Edit: Apologies for the formatting, I'm not very good at hackernews markup.

~~~
pavanky
Hi,

I am the lead engineer at ArrayFire[1]. Is there any way I can get in touch
with you?

[1][http://www.arrayfire.com/docs/index.htm](http://www.arrayfire.com/docs/index.htm)

~~~
ludamad
It may seem obvious, but I feel like the page you linked could use an
occurrence of the word 'C++'. The code snippet was obvious to me, but just a
thought.

~~~
pavanky
Thanks for the feedback! While the code snippet is from C++, we have language
wrappers for Java, R and Fortran. We'll try to be more clear about this in our
documentation.

------
cr4zy
I found [https://github.com/BVLC/caffe](https://github.com/BVLC/caffe) to be
20 to 40x faster for image classification when comparing with Overfeat, which
uses Torch - YMMV (The type of BLAS you use makes a gigantic difference. MKL
was 2x faster than ATLAS and 5x faster than OpenBLAS). Caffe also has a more
active community and cleaner code IMO.
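As the parenthetical suggests, the BLAS backend often matters more than the framework. A hedged NumPy sketch (not Torch or Caffe code) of how one might check which BLAS a build links against and get a rough feel for its throughput:

```python
import time

import numpy as np

# Print the BLAS/LAPACK implementation this NumPy build links against
# (MKL, OpenBLAS, ATLAS, ...). The backend alone can account for a
# several-fold difference in dense linear algebra throughput.
np.show_config()

# Rough sanity benchmark of the linked BLAS: time one large sgemm.
a = np.random.rand(1000, 1000).astype(np.float32)
t0 = time.perf_counter()
b = a @ a
print("1000x1000 sgemm: %.3fs" % (time.perf_counter() - t0))
```

The matrix size and single iteration here are arbitrary; for a real comparison you would warm up and average several runs per backend.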

~~~
smhx
caffe definitely grew in popularity very quickly, and it does a narrow set of
tasks very well, but I don't agree that the code is cleaner (in fact, I think
the opposite), and I don't see the design as particularly broad-minded (i.e.
generic neural networks, or a general scientific computing framework).

------
benjaminva
I haven't used Torch7 yet, but from heavy usage I can recommend
[http://www.nongnu.org/gsl-shell/](http://www.nongnu.org/gsl-shell/). GSL
Shell combines Lua + LuaJIT + the GSL (GNU Scientific Library) + additional
syntactic sugar that helps with matrix/vector calculations. I use it for all
kinds of linear algebra projects, and gsl-shell does an excellent job.

------
fit2rule
I'd love to try this out, but it seems that it uses some luarocks that are
unavailable (sundown, cwrap). Or, at least, my attempt to do a standard
install with the instructions as given has resulted in luarocks not finding
any of the required rocks. Does anyone know what might be causing this?

(EDIT: Never mind, I figured it out. If you have a luarocks installed
previously, you must do:

    
    
        $ mv /usr/local/etc/luarocks/config-5.1.lua /usr/local/etc/luarocks/config-5.1-old.lua
        $ mv /usr/local/etc/luarocks/config.lua /usr/local/etc/luarocks/config-5.1.lua
)

------
conistonwater
I don't understand: how is this better than/different from python (with numpy,
scipy, theano and FFI)? Is this a language preference thing where people just
want to avoid python?

~~~
benjaminva
Thanks to LuaJIT, the code you write is compiled to efficient bytecode which
runs very fast. Lua as a language is simple enough to produce light bytecode
that can run in the CPU's cache. Numpy will always do the job, but when speed
is really critical, you might want to look into it.

~~~
deanjones
I'm not an expert in LuaJIT, but it sounds unlikely that the performance
characteristics of torch7 are due to the efficiency of Lua bytecode. The speed
with which you can train NNs will be dominated by the performance of the
linear algebra libraries which are utilised by the numerical optimisation
algorithms (SGD, L-BFGS and the like). Almost everyone ends up using some
variant of the BLAS libraries for this.
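To make that concrete, here is a hedged Python sketch (not Torch code) of the usual shape of such training loops: the L-BFGS driver itself is cheap, and essentially all of the time goes into BLAS-backed matrix products inside the objective and gradient. The toy least-squares problem and sizes are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy least-squares problem. The optimizer's own bookkeeping is tiny;
# the work is in the A @ w and A.T @ r products, which land in BLAS.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)

def f(w):
    r = A @ w - b              # BLAS gemv
    return 0.5 * (r @ r)

def grad(w):
    return A.T @ (A @ w - b)   # two BLAS gemv calls

res = minimize(f, np.zeros(50), jac=grad, method="L-BFGS-B")
print("converged:", res.success, "final loss:", f(res.x))
```

The same division of labor holds whether the driver loop is written in Lua, Python, or anything else, which is the point being made above.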

~~~
smhx
Someone I met recently said this (in rough words): _"When I'm writing rough
code in lua, it's completely acceptable to do a couple of for loops here and
there without a disaster in speed. With python, this was a complete meltdown."_

I'm not saying it all boils down to just this quote, but I thought it was
interesting and wanted to relay it here.
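The Python half of that comparison is easy to demonstrate. A small illustrative sketch (CPython interpreter loop vs. one vectorized NumPy call; the array size is arbitrary):

```python
import time

import numpy as np

n = 1_000_000
xs = [float(i) for i in range(n)]
arr = np.arange(n, dtype=np.float64)

# Plain interpreter loop: every iteration pays Python's dispatch overhead.
t0 = time.perf_counter()
total = 0.0
for x in xs:
    total += x * x
loop_time = time.perf_counter() - t0

# Vectorized: one call, and the loop runs in compiled C.
t0 = time.perf_counter()
total_np = float(np.dot(arr, arr))
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.4f}s")
```

LuaJIT's trace compiler narrows exactly this gap for plain loops, which is what the quote is getting at.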

~~~
ravich2_7183
This is true, but there are a couple of easy workarounds for it: Cython and
scipy.weave

[http://wiki.scipy.org/PerformancePython](http://wiki.scipy.org/PerformancePython)

------
scythe
Wow, this looks awesome (I usually use C). I had added Lua to a previous
project which had been entirely C and it simplified the "outer bits"
drastically, as well as being relatively painless to set up.

How is the support for complex numbers?

~~~
smhx
One thing that isn't really supported yet :-( We have support for doing
complex FFTs etc. in the _signal_ package, but otherwise, it is non-existent.

------
Udo
It seems to be written mostly in C, so I wonder how difficult it would be to
port this library into an interpreted Lua 5.2 environment. Has anyone tried
that yet?

~~~
smhx
It should be mostly compatible already, but torch9 (the next version, which
is in preparation) should be fully compatible, I think.

------
joelthelion
Looks cool. I couldn't find any real documentation, though. Did I miss it?

~~~
tristanz
There is a good tutorial here:
[http://code.cogbits.com/wiki/doku.php?id=start](http://code.cogbits.com/wiki/doku.php?id=start)

------
aton
What are the advantages of using Torch7 over Julia?

~~~
bufo
Julia doesn't yet have strong support for running stuff on GPUs.

~~~
fyolnish
Torch/LuaJIT does?

~~~
justincormack
LuaJIT certainly doesn't at this point; not sure whether torch uses some
libraries that do...

