
PyTorch, a year in - ghosthamlet
http://pytorch.org/2018/01/19/a-year-in.html?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=The%20Wild%20Week%20in%20AI
======
cs702
Great update, it's been an exciting year for the project.

I _love_ PyTorch for tinkering and experimenting.

In my experience, there's very little 'impedance mismatch' with PyTorch,
meaning the framework rarely gets in my way. I _never_ find myself 'wrestling'
with the API. I expect this is only going to get better now that one of the
project's explicit goals is to match numpy's API and semantics as much as
possible over time.

Congratulations to the PyTorch community. You guys have done a _great_ job!

~~~
tachyonbeam
In terms of impedance mismatch, I wish the PyTorch API were more similar to
numpy's, e.g. using .shape instead of .size and using the same method names
where possible. It seems like a small detail, but it could make PyTorch a
little bit more intuitive.
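
A quick illustration of the mismatch, assuming numpy and torch are installed:
numpy exposes dimensions as the .shape attribute, while a torch tensor reports
them via the .size() method (later releases add a .shape alias, but various
method names still differ).

    import numpy as np
    import torch

    a = np.zeros((3, 4))
    t = torch.zeros(3, 4)

    print(a.shape)   # (3, 4)
    print(t.size())  # torch.Size([3, 4])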

~~~
amelius
I'd love to see the foundational layer of PyTorch integrated into Numpy, so
that e.g. Numpy matrix-multiplications can be performed on the GPU without
rewriting (much) code.

~~~
deepGem
Perhaps numba might help, accelerated by gputoolkit

[http://numba.pydata.org/](http://numba.pydata.org/)
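
For what it's worth, numba's CUDA target covers custom elementwise kernels
well; full numpy matmul offload is closer to what CuPy offers. A minimal
sketch, assuming numba with CUDA support and a compatible GPU (names are
illustrative):

    import numpy as np
    from numba import vectorize

    # Compile a numpy-style elementwise function for the GPU.
    @vectorize(['float32(float32, float32)'], target='cuda')
    def add_gpu(a, b):
        return a + b

    x = np.random.rand(1000000).astype(np.float32)
    y = np.random.rand(1000000).astype(np.float32)
    z = add_gpu(x, y)  # runs on the GPU, comes back as a numpy array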

------
JacobiX
It is our framework of choice, especially when prototyping and implementing new
differentiable programs. We use Caffe2 in production right now, though, mainly
because of Windows support for some of our customers. But since both of them
support the ONNX exchange format, we can prototype and train in PyTorch and
then deploy the model using the Caffe2 CPU version.
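
For reference, the PyTorch -> ONNX -> Caffe2 path looks roughly like this. A
minimal sketch, assuming a recent PyTorch (0.4-style API) with the onnx and
caffe2 Python packages installed; the toy model and file name are placeholders:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    dummy_input = torch.randn(1, 10)

    # Trace the model once and serialize the graph to an ONNX file.
    torch.onnx.export(model, dummy_input, "model.onnx")

    # On the deployment side, run the exported graph with the Caffe2 CPU backend.
    import onnx
    import caffe2.python.onnx.backend as backend

    rep = backend.prepare(onnx.load("model.onnx"), device="CPU")
    outputs = rep.run(dummy_input.numpy())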

~~~
agibsonccc
I just want to call out something with onnx. It is slated to improve this
year, but be very careful what you try to do with it. A lot of basic things
don't work yet.

See:
[http://pytorch.org/docs/0.4.0/onnx.html](http://pytorch.org/docs/0.4.0/onnx.html)

The other frameworks are using (in my opinion) slightly misleading language as
to how "ready" it is. onnx to Caffe2 is facebook's use case so I could see
that being supported fine. It's hard to go much beyond that though.

Support for onnx will be bottlenecked by what pytorch can export right now.
The file format just hit 1.0, but it will take some time for the ecosystem
around it (including the export) to mature.

Disclaimer: I am a framework vendor who has spent the last few months messing
with it, writing model import for end users (and doing the same for
tensorflow's file format).

~~~
ipsum2
> Caffe2 to onnx is facebook's use case so I could see that being supported
> fine. It's hard to go much beyond that though.

ONNX to Caffe2*. Though it's possible to go from Caffe2 to ONNX to some custom
hardware accelerator (TensorRT?)

~~~
agibsonccc
Ahh yes sorry for the typo. Fixed!

And it _will_ be possible, but again, support isn't actually implemented by
anyone yet.

They just "signed on".

Tensorflow is the only supported format right now for tensorrt.

------
reacharavindh
Wow. I'm a passerby who had heard of PyTorch on HN and has been on the
sidelines of Machine Learning and Deep Learning. I just read this summary and
feel inspired to kick the tyres and start learning some.

Maybe I will find some use for it in my sysadmin world.

Thanks for the summary.

~~~
make3
machine learning is about the math, not about the frameworks.

~~~
zengid
Downvoted because, while it is true that the math is important, a framework
like PyTorch allows for more idiomatic Python code. Tensorflow is in its own
world, being mostly written in C++ and trying to cater to multiple client
languages. That means the same math (same networks) will be coded differently
in the two libraries/frameworks.
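
To make the difference concrete, here is a rough sketch of the same tiny
computation in both styles, assuming a TF 1.x install and a recent PyTorch
(purely illustrative):

    import tensorflow as tf
    import torch

    # TensorFlow 1.x: declare a graph first, then execute it in a session.
    x_ph = tf.placeholder(tf.float32, shape=[None, 3])
    y_op = tf.reduce_sum(x_ph * 2.0, axis=1)
    with tf.Session() as sess:
        y_tf = sess.run(y_op, feed_dict={x_ph: [[1.0, 2.0, 3.0]]})

    # PyTorch: operations execute immediately, like ordinary Python/numpy code.
    x = torch.tensor([[1.0, 2.0, 3.0]])
    y_pt = (x * 2.0).sum(dim=1)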

------
minimaxir
That said, there's been a religious flamewar over which-is-better-
TensorFlow/Keras-or-PyTorch that's dragged down a lot of productive discussion
about ML/DL frameworks.

~~~
curuinor
chainer is the best. Plain numpy means that somebody else thinks about
broadcasting rules and other things that have nothing specific to do with
neural nets, and cupy just implements them.

~~~
amelius
Imho, as a user, it is rather silly to find that a framework requires me to
define a dependency graph instead of just coding the operations directly. In
my career so far, no compiler has asked me to do the same. So why would neural
networks be an exception?

~~~
cjalmeida
Optimization and embedded devices. In theory, knowing the full graph upfront
allows the framework to optimize and fuse some operations.

For embedded devices you may not have access to Python. You can precompile the
graph to the target device in such cases.

Note that in practice, PyTorch is as fast as or faster than Tensorflow, and
newer versions allow you to "post compute" the graph and export it to ONNX,
allowing embedded inference using Caffe2.

------
etiene
As a member of the Lua community I cry every time...

But congrats on the hard work! I've been planning to try it out for a while
now and the amount of resources and docs is great.

~~~
stealthcat
Convince me to use Lua. While Lua gets the job done, as an outsider who has
dabbled in Lua (Torch, Love2D, ESP8266) I think the Lua ecosystem has two
major problems:

- 5.1 vs 5.2 vs LuaJIT. This is worse than Python 2 vs 3.

- Lack of a universal standard for OOP, etc. Python has PEP8.

Sometimes coding in Lua feels like Javascript, only somewhat worse.

I can cope with 1-based indexing (think of it like using Matlab), but the
other problems are holding me back.

~~~
etiene
It's not my job to convince you of anything. But while I agree about the lack
of a universal standard (PEP8 is great), I disagree with all the rest.

The changes between Lua versions are _nothing_ like the abysmal difference
between Python 2 and Python 3. Not only is 5.1 code very likely to run on 5.3
and LuaJIT without issues, but if there are ever any issues they are always
very minor and solvable by requiring a simple compat library.

As someone who has done a lot of Lua and Javascript, I have always had the
feeling that coding in Lua feels better by orders of magnitude. It's a far
more consistent language.

~~~
stealthcat
Thanks for the reply. Since I am learning deep learning, I still need to read
and run Lua Torch code every now and then. But when I have the choice, I
choose PyTorch.

P.S. I did not downvote you. Somebody else did.

------
ajeet_dhaliwal
That image-to-image transform of the horse and zebra is something I’ve never
seen before. Soon (or perhaps already) we won’t be able to trust anything we
see with our own eyes anymore; the potential for manipulation is scary. The
repercussions are unthinkable with respect to mob mentality and gullibility.

~~~
blt
We live in a post-truth society now [1]. OK, maybe we're not all the way yet,
but it's interesting (and scary) to speculate about what society might look
like if it becomes much harder to verify if a given statement is true or
false.

[1] [https://en.wikipedia.org/wiki/Post-truth_politics](https://en.wikipedia.org/wiki/Post-truth_politics)

------
sabertoothed
Why is this a campaign link and not a direct link?

~~~
greglindahl
I wish HN would strip tracking cgi args in software.

------
singularity2001
I wonder how anyone can even install torch:

    luarocks install torch
    Error: No results matching query were found.

    /opt/torch$ ./install.sh
    /opt/torch/extra/cutorch/lib/THC/generic/THCTensorMath.cu(393): error: more than one operator "==" matches these operands:

~~~
Smerity
PyTorch was born from Lua Torch but they're not interchangeable and noting
installation issues regarding Lua Torch when the article is discussing PyTorch
is likely confusing.

PyTorch helpfully provides clear installation instructions for each platform
and package manager at [http://pytorch.org/](http://pytorch.org/) and the team
have consistently been careful to ensure they work simply.

