If you're interested in diving deeper into the papers and math behind the scenes, as well as coding from the lowest levels (right down to the compiler level), you'll be interested in "Deep Learning from the Foundations", which is coming out in 2 weeks. The last two lessons are co-taught with Chris Lattner (creator of Swift, LLVM, and Clang).
If you want to understand the underlying linear algebra implementation details, have a look at "Computational Linear Algebra for Coders": https://github.com/fastai/numerical-linear-algebra
If you want to learn about decision trees, random forests, linear regression, validation sets, etc, try "Introduction to Machine Learning for Coders": https://course18.fast.ai/ml
(All are free and have no ads. They are provided as a service to the community.)
Let me know if you have any questions!
I'm still chipping away at AI applications to mental health and the law, which sadly remains relevant: https://www.oregonlive.com/pacific-northwest-news/2019/06/ju...
The course has been a fantastic resource. Thanks again, and keep up the good work. Looking forward to what comes out of the work with Swift.
Thanks again to both you and Rachel for your contributions, and attention to the ethics of AI as a central concern that's every bit as relevant as the technical details.
I think there's also a need for a very low-level course in deep learning, i.e. at the level of someone who wishes to write their own deep learning library. Because from high up, sure, it all looks like the chain rule, but down low it gets messy quickly if you want to write a high-performance library on your own.
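To make the "it all looks like the chain rule" point concrete, here is a toy sketch (illustrative only, plain NumPy, not code from any course): a one-hidden-layer network where the backward pass is literally the chain rule applied layer by layer, verified against a finite-difference estimate.

```python
import numpy as np

# Toy network: y = W2 @ relu(W1 @ x), loss = 0.5 * ||y - t||^2
rng = np.random.default_rng(0)
x = rng.normal(size=3)
t = rng.normal(size=2)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

# Forward pass, keeping intermediates for the backward pass
z = W1 @ x              # pre-activation
h = np.maximum(z, 0)    # ReLU
y = W2 @ h
loss = 0.5 * np.sum((y - t) ** 2)

# Backward pass: the chain rule, one layer at a time
dy = y - t              # dL/dy
dW2 = np.outer(dy, h)   # dL/dW2
dh = W2.T @ dy          # dL/dh
dz = dh * (z > 0)       # through the ReLU
dW1 = np.outer(dz, x)   # dL/dW1

# Sanity check one entry against a finite-difference estimate
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
loss_p = 0.5 * np.sum((W2 @ np.maximum(W1p @ x, 0) - t) ** 2)
assert abs((loss_p - loss) / eps - dW1[0, 0]) < 1e-3
```

The messiness the comment refers to appears once you go beyond this: batching, memory reuse, fused kernels, and numerically stable implementations of each op.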
I also started working on a book titled "Deep Learning for Programmers" that takes it even further! You can subscribe today and start reading the drafts as they are written.
However, what worries me a lot is the complete breakage of the API between versions, and some of the discussion about Swift. As someone who isn't a full-time expert, I need a robust and stable framework to keep learning with, so that I can keep building knowledge without worrying that whatever I've learned about how to use some framework (or which language to use!) will be obsolete in a couple of months, forcing me to start from scratch again.
So, does anyone know if fast.ai is going to stabilize anytime soon, or is it better for me to just spend the little time I have playing with ML and deep learning directly in e.g. PyTorch?
Having said that, v1 isn't changing much now - v2 (out soon) will be where new ideas go, and only bug fixes will go to v1, so you can stick with that as long as you like.
Obviously it is moving fast at the bleeding edge. But the basics of backprop have not changed for a while, and I would expect a toolchain meant as a serious tool for practical purposes, rather than just a showcase of the bleeding edge, to at least be backwards compatible with previous versions.
(I hope I am not writing this in too harsh a way. I actually liked your videos so much that I am seriously considering applying for your on-site course, if one day I just get all the other issues in my life sorted out so that I can relocate to SF for a couple of months.)
I’ve gone through this guide and other guides before, as I often teach courses like “Introduction to Deep Learning”.
The guide from fast.ai is very good and I highly recommend it. If you want to get into deep learning, I also recommend taking a numerical methods course or guide. Once you understand the basics, the hard part is understanding the pitfalls introduced by hardware (and some software) limitations.
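One classic example of the hardware pitfalls mentioned above (an illustration, not from the guide): float32 has a 24-bit significand, so once an accumulator reaches 2^24 the gap between representable numbers exceeds 1, and small increments are silently rounded away. This is one reason libraries accumulate sums and losses in higher precision.

```python
import numpy as np

# float32 has a 24-bit significand: at 2**24 the spacing between
# representable values is already 2, so adding 1.0 has no effect.
big = np.float32(2.0 ** 24)           # 16777216.0
assert big + np.float32(1.0) == big   # the 1.0 is rounded away

# The same effect silently stalls a naive running sum: once the
# total reaches 2**24, further +1.0 increments are lost entirely.
total = np.float32(2.0 ** 24)
for _ in range(1000):
    total += np.float32(1.0)
assert total == np.float32(2.0 ** 24)  # the sum never advances
```

The same mechanism degrades long loss accumulations and running statistics, which is why frameworks commonly keep such accumulators in float64 or use compensated summation.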
What is a good course that focuses on "tabular data", in particular predicting continuous outputs from continuous inputs, aka regression?
There is only one global in the library, which is a singleton called `defaults`, and it contains 2 things: the default GPU to use, and the number of CPUs to use.
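A minimal sketch of what such a singleton can look like (illustrative only, not the actual fastai source; the attribute names and the `"cuda:0"` device string here are assumptions): one mutable namespace, importable everywhere, holding the default device and worker count.

```python
import os
from types import SimpleNamespace

# Sketch of a library-wide defaults singleton (hypothetical names,
# not fastai's real implementation): a single shared namespace that
# every module imports instead of passing settings around.
defaults = SimpleNamespace(
    device="cuda:0",            # default GPU; callers may override
    cpus=os.cpu_count() or 1,   # number of CPU workers to use
)

# Any caller can read or override the shared defaults:
defaults.cpus = 4
print(defaults.device, defaults.cpus)
```

Keeping all mutable global state in one named object makes it easy to audit and to override in tests, which is presumably the appeal of having exactly one such global.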
So basically, nothing promised for v4.
Edit: this appears to be a link to Part 1 which was released in January (and is excellent). The s4tf portion was in part 2 which should be released publicly sometime this month.
IIRC fastai transitioned from Keras/TF to PyTorch a while back, so I don't understand what's happening now.
- the 2018 course is PyTorch.
- the 2019 course (part 2) is vanilla Python, and then the last couple of classes re-build everything using Swift.
The lectures will be released soon.
"The combination of Python, PyTorch, and fastai is working really well for us, and for our community. We have many ongoing projects using fastai for PyTorch [...] This stack will remain the main focus of our teaching and development.
It is very early days for Swift for TensorFlow. We definitely don’t recommend anyone tries to switch all their deep learning projects [...]"
Implementation in vanilla Python sounds exciting. Do you have any idea when the lectures will be released?
I was a bit put off by the use of the fastai library in the 2019 part 1 course, mostly because the library makes it all sound too simple.