Docs here: http://docs.fast.ai . GitHub repo here: https://github.com/fastai/fastai . It's now also available on Google Cloud Platform, including example notebooks and datasets (Viacheslav Kovalevskyi from Google has posted a walk-through here: https://blog.kovalevskyi.com/google-compute-engine-now-has-i... ). AWS support coming soon.
I suggest you create a new conda environment and then follow the instructions in the readme to install both.
I remember it being a pain to get all the drivers working and there was a lot of version incompatibility in the Python ecosystem.
I'd rather not mess with the bootloader and just have an external drive I can select to boot to in the BIOS. For some reason Windows wiped out GRUB and I can't boot my old Ubuntu distro anymore.
P.S. I'm considering building a DL box based on a Ryzen 2600 w/ GTX 1070 Ti, so could you please share your experience with Ryzen and Linux?
conda install -c pytorch -c fastai fastai pytorch-nightly cuda92
Fast.ai instructions here: https://learn.spell.run/fast_ai
For a little more latitude, I believe there are PyTorch VMs available on the other cloud providers as well.
P.S. I'm the creator of Salamander.
What is the main difference from Keras+TF? I've only worked with those (and still haven't mastered them), so why should I consider fastai?
> It’s important to understand that these improved results over Keras in no way suggest that Keras isn’t an excellent piece of software. Quite the contrary! If you tried to complete this task with almost any other library, you would need to write far more code, and would be unlikely to see better speed or accuracy than Keras. That’s why we’re showing Keras in this comparison - because we’re admirers of it, and it’s the strongest benchmark we know of!
So I can't wait to try the new release out. I think Fast.ai has set a new bar for deep learning frameworks in terms of speed and ease of use. Thank you for all your great work!
You can certainly train a YOLO or SSD pytorch model with fastai, however.
One question: is there any best practice for transforming video into [n] frames and then using model.predict(n) for live classification / object detection?
Kind regards from Dublin!
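Regarding the video question above: one common pattern is to sample every n-th frame and group frames into batches before calling a batched predict. The sketch below covers only the sampling/batching logic; actual frame decoding (e.g. via OpenCV's cv2.VideoCapture) and the model.predict call are assumed to exist outside it.

```python
# Sketch: sample every n-th frame from a stream and group into batches.
# The `frames` iterable would come from a video decoder such as
# cv2.VideoCapture (not shown); each yielded batch can then be passed
# to a batched predict call.
def sample_and_batch(frames, every_n=5, batch_size=8):
    batch = []
    for i, frame in enumerate(frames):
        if i % every_n:          # keep only frames 0, n, 2n, ...
            continue
        batch.append(frame)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:                    # flush the final partial batch
        yield batch
```

For example, with every_n=2 and batch_size=3, twenty frames yield batches of frame indices [0, 2, 4], [6, 8, 10], [12, 14, 16], [18].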
A few questions:
1. What is the benefit of using fast.ai to someone well acquainted with PyTorch (for academic use)?
2. How well does fast.ai interface with PyTorch itself? Can parts of a program be in fastai and other parts in PyTorch?
3. Am I correct in assuming that despite being very fast, fast.ai is still slower (even if marginally so) than PyTorch itself?
2. Very close integration with PyTorch. fastai is designed to extend PyTorch, not hide it. E.g. fastai uses standard PyTorch Datasets for data, but then provides a number of pre-defined Datasets for common tasks
3. fastai is not slower than PyTorch, since PyTorch is handling all the computation. It'll often be faster than hand-written PyTorch, however, since we went to a lot of effort to use performant algorithms.
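To make the "standard PyTorch Datasets" point concrete: a map-style dataset is just any object implementing `__len__` and `__getitem__`. This is a generic sketch of that contract (no torch import is needed to show it, and the class name is hypothetical, not fastai code):

```python
# Minimal map-style dataset: any object implementing __len__ and
# __getitem__ satisfies the PyTorch Dataset contract that fastai
# builds on. PairDataset is a hypothetical name, not a library class.
class PairDataset:
    def __init__(self, xs, ys):
        assert len(xs) == len(ys)
        self.xs, self.ys = xs, ys

    def __len__(self):
        return len(self.xs)

    def __getitem__(self, i):
        return self.xs[i], self.ys[i]
```

Because the contract is this small, an object like this can be consumed by a standard PyTorch DataLoader or wrapped by higher-level library code without modification.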
From appearance alone, everything about fast.ai seems to be positive with no obvious negatives over native pytorch.
I often need to train baselines for my projects. That will be the perfect place to start using fast.ai. I am sold.
I have heard a few people lament the lack of a Keras equivalent for PyTorch. Fast.ai also appears to take care of that.
Thanks for the great work.
2. Yes. You can make everything project-specific if you want to. e.g. Image segmentation datasets are just slightly modified Dataset classes.
3. fastai extends PyTorch in a very Pythonic OO sense, so I think the only speed issues could come from that, and maybe from maintaining a few extra dicts in memory. If it's about I/O, probably not; if it's about parallelization (for NLP), definitely not. In fact, I can't think of a way in which there's a significant speed penalty.
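To illustrate the "slightly modified Dataset classes" remark: a segmentation dataset differs from a classification one only in that the target is a mask loaded from disk rather than a label. The class and the injected loader below are hypothetical sketches, not fastai's actual API:

```python
# Hypothetical sketch: same map-style contract as any dataset, but
# __getitem__ pairs an image with its mask. The `load` callable is
# injected to keep the sketch framework-agnostic (in practice it
# would decode image files).
class MaskDataset:
    def __init__(self, image_paths, mask_paths, load):
        assert len(image_paths) == len(mask_paths)
        self.image_paths = image_paths
        self.mask_paths = mask_paths
        self.load = load

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, i):
        return self.load(self.image_paths[i]), self.load(self.mask_paths[i])
```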
On the side, I am also taking Andrew Ng's Coursera course for a theoretical grounding, Goodfellow and Bengio's Deep Learning book, and Hands-On Machine Learning with Scikit-Learn and TensorFlow (the O'Reilly book).
We are not really doing cutting edge or research stuff. Just developing a big dumb resnet and hoping to scale up to 10s of GPUs over 2019.
I was really happy to read a comment here about how this framework was used to reproduce a 6-month project in 2 weeks :)
For inference, new features being announced today at the conference will help a lot too.
Most of the functions in there are predefined in fastai, so in practice you can remove much of the code there.
We'll be adding a proper segmentation example soon.
@Topic: What is the difference from Keras, PyTorch, etc.? They are already pretty high-level, and the basic models for common tasks are available in all libraries at this point.
And AI is commonly used as basically a synonym for Machine Learning in the ML community. Yes, the word has multiple definitions and connotations, and some people prefer to avoid using the term AI at all because of that. But that doesn't mean that someone using the term AI to refer to ML is wrong or up to something.
Furthermore, just because non-legitimate organizations want to ride the hype train, it doesn't mean legitimate ones shouldn't benefit from that effect as well. fast.ai is an organization actually writing software for doing machine learning, as opposed to a completely unrelated company (like an electric toothbrush company) claiming their latest product "uses AI" whatever that means.
The funniest part is that fast.ai's slogan is "Making neural nets uncool again". If that isn't explicitly going against the hype and saying this is _actual_ neural nets and not just fluff, then I don't know what is.
There are so many neural network libraries now that I don't find the time to test all of them - dismissing libraries that use .ai as their website's TLD (or in their name) has served me very well so far. Sure, if it really makes neural networks that much simpler to train, I will take a look, but for now it does not seem to be more high-level than Keras or PyTorch.
It does not matter if you can describe your network in 50, 100, or 200 lines of code, as the hard part is making it learn: choosing hyperparameters, changing loss functions, etc. - this is possible in all frameworks.
And I don't think AI is used as a synonym for machine learning in the community; rather, it is used to describe AGI.