I've tried to work out how to do similar things in Torch and TensorFlow, but all they really offer is pre-packaged layers like LSTM. If you want to make your own, it's difficult, undocumented, and not at all ergonomic.
How does Caffe2 compare?
output = do_stuff
List of functions:
We use it for machine translation so the perf is nice.
If you have design feedback please let us know - we'll be really grateful.
Will Caffe2 support dynamic graphs (perhaps down the line)? What is your reasoning behind whichever decision?
Cheers and congrats on the release!
> Caffe2 is built to excel at mobile and at large-scale deployments. Multi-GPU support is new in Caffe2, bringing it to the same level of GPU support as Torch, and Caffe2 is built to excel at utilizing both multiple GPUs on a single host and multiple hosts with GPUs. PyTorch is great for research, experimentation and trying out exotic neural networks, while Caffe2 is headed towards supporting more industrial-strength applications with a heavy focus on mobile. This is not to say that PyTorch doesn't do mobile or doesn't scale, or that you can't use Caffe2 with some awesome new paradigm of neural network; we're just highlighting some of the current characteristics and directions for these two projects. We plan to have plenty of interoperability and methods of converting back and forth so you can experience the best of both worlds.
Or it actually means Caffe2 was designed for both research and production.
Sometimes the line gets a bit blurred - for research that focuses on relatively fixed patterns, such as Mask RCNN, both PyTorch and Caffe2 work great. In fact, Mask RCNN is trained in Caffe2, and that also makes things much easier when we put it on mobile - what our CTO Mike Schroepfer showed in his keynote was a Mask RCNN model trained and then deployed onto mobile with Caffe2.
I'm trying to implement the RoIAlign layer in TensorFlow, and I have a few doubts; having the author's code would definitely help with implementing it.
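In case it helps while the official code is unavailable, here is a rough NumPy sketch of the core idea behind RoIAlign - averaging bilinearly interpolated samples inside each output bin instead of quantized max pooling. The function names, the single-channel simplification, and the fixed sampling grid are my own choices for illustration, not the authors' implementation:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly interpolate a (H, W) feature map at continuous coords (y, x)."""
    H, W = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    ly, lx = y - y0, x - x0  # fractional offsets within the cell
    y0c, y1c = np.clip(y0, 0, H - 1), np.clip(y0 + 1, 0, H - 1)
    x0c, x1c = np.clip(x0, 0, W - 1), np.clip(x0 + 1, 0, W - 1)
    return (feat[y0c, x0c] * (1 - ly) * (1 - lx)
            + feat[y0c, x1c] * (1 - ly) * lx
            + feat[y1c, x0c] * ly * (1 - lx)
            + feat[y1c, x1c] * ly * lx)

def roi_align(feat, roi, out_size=2, samples=2):
    """RoIAlign for one RoI on a single-channel feature map.

    feat: (H, W) array; roi: (y1, x1, y2, x2) in feature-map coordinates
    (no rounding of the RoI boundaries - that is the point of RoIAlign).
    Each output bin averages samples x samples bilinear samples.
    """
    y1, x1, y2, x2 = roi
    bin_h = (y2 - y1) / out_size
    bin_w = (x2 - x1) / out_size
    out = np.zeros((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            acc = 0.0
            for sy in range(samples):
                for sx in range(samples):
                    # regularly spaced sample points inside bin (i, j)
                    y = y1 + bin_h * (i + (sy + 0.5) / samples)
                    x = x1 + bin_w * (j + (sx + 0.5) / samples)
                    acc += bilinear_sample(feat, y, x)
            out[i, j] = acc / (samples * samples)
    return out
```

A TensorFlow version would express the same sampling with gather ops so it stays differentiable with respect to the feature map.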
And thanks so much for the awesome work from Intel!
I am currently exporting models from an existing Theano system running on a Linux server.
I'm currently running inference using tiny-dnn and an alternative direct implementation using Eigen (which is much faster) but in the long run it would be nice to have stuff like quantization and GPU available.
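For what it's worth, the simplest form of the quantization mentioned above is just symmetric rescaling of weights to int8. A minimal NumPy sketch of that idea (the function names and the per-tensor scheme are my own illustration, not any framework's API):

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Symmetric per-tensor quantization: float weights -> (int8 array, scale)."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized representation."""
    return q.astype(np.float32) * scale
```

Real deployments add per-channel scales and activation quantization on top of this, but even this per-tensor version cuts weight storage 4x versus float32, which is part of the mobile appeal.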
Better to create a proprietary license which allows you to protect your predatory patent claims instead of acting benevolent and releasing with this kind of corporate malarkey.
Quick question: why did you develop and release this as a completely separate codebase from the original Caffe you created at UC Berkeley?
I think Caffe2 is especially suited to machine learning that runs on mobile devices, so I wouldn't be surprised to see it become more popular as that mode of machine learning becomes more popular.
We also did a lot of optimizations on the mobile side - like using NEON, the mobile GPU and such for optimized speed.
I'm thinking something along the lines of
The framework itself allows fine-tuning and training of the model on the mobile device too, but more work is required to enable particular use cases.
Q: I've never used Caffe - based on the examples provided, I would say it's best for images and videos? I'm interested in NLP (eg seeing patterns in science papers) or in studying wearables data (gps, heart rate etc.) to predict user activity.
Over the next weeks/months we'll share more examples on other applications such as RNNs.
I would have preferred them to choose another name.