
PyTorch Mobile - jayyhu
https://pytorch.org/mobile/home/
======
crubier
This is amazing! It might be a personal feeling, but my opinion is that
Facebook is SO much better than Google at delivering open source libraries
that people want, and supporting them in the best way.

React vs Angular, PyTorch vs TensorFlow. These are just two examples among
many where the Facebook framework arrives a bit later on the market but is
then supported awesomely and improved continuously, while the Google framework
arrives earlier but then becomes a hot mess of non-backwards-compatible
upgrades and deprecations...

My “loyalty” to Facebook open source libraries just keeps growing.

~~~
m90
Not to dismiss anything you mentioned, but looking at the big picture Facebook
has also published things that went awry or never really gained adoption:
Flow, Buck, and Hack, to name a few.

I mean, the cool things are cool (even if I have a really hard time with
Facebook as a company) but I think it's a bit of a stretch to say Facebook has
"the recipe".

~~~
umanwizard
FB has thrown a lot of random stuff over the wall that they use internally and
thought might be useful to other people. I think Hack and Buck fit into this
category. Facebook will maintain them forever, because they are core parts of
internal infrastructure. Their value to Facebook is completely independent of
wider industry adoption.

Whereas PyTorch is intended, from the ground up, to be a widely useful
project, and the dev team weights open-source issues at least as much as
internal ones.

(Full disclosure: I used to work at Facebook, including, briefly, on PyTorch)

~~~
a-b
That's because people at FB have to contribute to OSS to help with leveling,
especially after level 5.

~~~
crubier
This might explain the difference I perceive between FB and Google on OSS.
I’ve heard that Google rewards the creation of new projects more.

------
Nimitz14
Yes yes yes! I've been anticipating this for so long.

However, I'm a bit skeptical about doing quantization after training; in my
experience you have to do quantization-aware training for there not to be a
large performance decrease. I guess it works, though, otherwise they wouldn't
have released it?

~~~
jayyhu
According to their docs[1], three different ways of quantizing your model are
supported - one of which is quantization-aware training (3).

[1] -
[https://pytorch.org/docs/master/quantization.html#quantizati...](https://pytorch.org/docs/master/quantization.html#quantization-workflows)
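
For reference, a minimal sketch of the simplest of the three workflows
(post-training dynamic quantization), with a toy model purely for
illustration; the quantization-aware training path that the parent prefers
uses a separate prepare/convert API instead:

    import torch
    import torch.nn as nn

    # Toy model standing in for whatever you plan to ship to mobile
    # (purely illustrative, not from the docs).
    model = nn.Sequential(
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
    ).eval()

    # Post-training dynamic quantization: weights are stored as int8 and
    # activations are quantized on the fly at inference time, so no
    # retraining or calibration data is needed.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    # Drop-in replacement for inference.
    x = torch.randn(1, 128)
    print(quantized(x).shape)  # torch.Size([1, 10])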

------
orf
Does anyone else have to maintain backend PyTorch-based services? Is it just
me or is it a complete mess?

Members of my team have spent literal months tracking down memory leaks, the
performance of these services is always sub-par compared to TensorFlow-based
ones, and the less said about the atrocious memory/CPU usage the better.

What's the advantage of using PyTorch when you have things like TensorFlow
Serving ready to productionize any model with ease?

------
reducesuffering
Does this leverage the Neural Engine part of the chip on iOS devices like Core
ML does? Can anyone compare using this to Apple's Core ML APIs?

~~~
sanxiyn
As far as I know, there is no way to use Neural Engine without going through
Core ML. This does not go through Core ML, hence this can't use Neural Engine
hardware acceleration.

------
spicyramen
This is a common trend when you're second to market. Looking at PyTorch and
TensorFlow 2.0, TF 2.0 was created to compete directly with PyTorch's pythonic
approach (Keras-based, eager execution). Facebook, at least with PyTorch,
has been delivering a quality product. Although for us running production
pipelines TF is still ahead in many areas (GPU and TPU support, TensorRT,
TFX and other pipeline tools), I can see PyTorch catching up over the next
couple of years, by which point my prediction is that many companies will be
running serious and advanced workflows and we may be able to see a winner there.

------
faceshapeapp
This is super exciting! I wish they also had browser support without having to
go through ONNX.

------
umanwizard
If anyone from the team is reading, can you comment on how much code size this
adds to iOS apps?

~~~
dzhulgakov
In this experimental release with prebuilt binaries it's about 5 MB per
architecture. This includes all operators for inference (that is, forward
only). We're working on selective compilation so that you can build a smaller
bundle with only the subset of ops that you use. With that, for common CNNs it
should get into the 1-2 MB range or even smaller.

------
tmoot
Well, there goes my weekend :)

------
mikkelam
Might be a stupid question, but what's the advantage of using PyTorch Mobile
compared to converting via ONNX -> TF -> TF Lite (or something similar)?

~~~
sinkpoint
It's non-trivial; certain models like GRUs have fundamentally different
implementations, so they are not cross-compatible. TF also has no incentive to
merge any PR back into its code base for compatibility, and dragged its feet
for months. I have spent a long time investigating this and hit brick wall
after brick wall.
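
For context, that conversion path starts with an ONNX export, which works by
tracing. A rough sketch (model and shapes are made up for illustration):

    import torch
    import torch.nn as nn

    # Throwaway CNN just to show the export step; any traceable model works.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),
    ).eval()

    # ONNX export traces the model with a dummy input, so data-dependent
    # control flow (e.g. inside some RNN/GRU variants) may not survive the
    # conversion, which is where the cross-compatibility problems show up.
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(
        model, dummy, "model.onnx",
        input_names=["input"], output_names=["logits"],
    )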

------
diffset
What advantage does this provide over Apple's CoreML?

~~~
ipsum2
Support for running the same model on both Android and iOS, for one.

------
wil421
As a side point, looking at Maven and org.pytorch gave me a cold shudder
remembering my Java days.

How does CocoaPods compare to Maven, NuGet, or NPM?

~~~
Traubenfuchs
Maven is simple, works extremely well, and behaves pretty sanely. IDE
integration is amazing, and unless you write a build/deploy pipeline you don't
even need to call a single mvn command. What do you not like about it?

~~~
woolinsilver
I can't believe he's comparing Maven to npm.

I wouldn't piss on npm if it was on fire.

~~~
wil421
Where did I say that? I asked about CocoaPods. I’ve used Maven, NPM and NuGet.
No one is even answering my question.

> How does CocoaPods compare to Maven, NuGet, or NPM?

~~~
xta0
CocoaPods is a dependency manager for iOS/macOS projects. It works more like NPM.

------
bjornjaja
Curious: what does PyTorch do for embedded when Torch uses Lua? I can't
imagine Python for embedded is better than Lua?

~~~
reubenmorais
It's supposed to go through their JIT first, so there's no Python running on
embedded devices. Of course, this means if your model can't easily be made
compatible with the JIT, you can't use PyTorch Mobile. But that's not
surprising.
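
For anyone curious, the basic flow is roughly this, using a torchvision model
purely as an example:

    import torch
    import torchvision

    # Convert an eager-mode model to TorchScript via tracing so it can run
    # without a Python interpreter. Models with data-dependent control flow
    # would need torch.jit.script instead of tracing.
    model = torchvision.models.mobilenet_v2(pretrained=True).eval()

    example = torch.rand(1, 3, 224, 224)
    traced = torch.jit.trace(model, example)

    # The saved file is what the mobile (Java / Objective-C) runtime loads.
    traced.save("mobilenet_v2.pt")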

