BTW the 2018 version of the course is being discussed in this forum, for those interested: http://forums.fast.ai/c/part1-v2
I've been a ML/DL practitioner for the past five-plus years, and first watched one of your lectures a bit over a year ago. All I remember thinking is that a wealth of practical knowledge that had taken me years to acquire was there for the taking, for free, for anyone who cared to look.
Since then, I have been recommending these courses to anyone who asks me for advice for learning about deep learning. You -- and Rachel Thomas -- have created by far the easiest and fastest path for a wide range of people to gain deep learning expertise.
In fact, I'm so sure the new lectures will contain valuable nuggets of know-how that even though I consider myself pretty knowledgeable about deep learning (and an expert in my narrow domain of interest), I will make it a point to find time to watch all the updated lectures.
I am a web dev, JS being my first language of choice :-), and I have been trying to get into machine learning/deep learning/AI, but I am having information overload.
I have zero knowledge of this field, so my question is: should I just start with machine learning instead of jumping right into deep learning? Or is it OK to jump right into deep learning, and is it possible to do everything machine learning does with deep learning?
Just as I refer a person who shows an interest in learning to code to Python instead of C++, what do you recommend?
You can then go back through them a 2nd time and do a deeper dive. By that time, our Intro to Machine Learning course will be out too :)
The concepts and skills you learned from the 2017 course are entirely transferable - learning the software packages is the easiest part, and the various libraries are similar enough that switching between them doesn't take much time.
I too want to say thank you, even though I have only started with the material. Your philosophy on how to teach the subject I feel will be much more useful to me than the classes I was signed up for recently (and lost interest in rather quickly). It's a lot easier to get excited about the practical things surrounding the topic which is why I'm looking forward to diving into the new content.
On a related note, one thing I do see a heavy emphasis on with the material is on the Computer Vision / Image Processing side of things which is certainly understandable considering how popular that area is specifically right now.
Something not really computer-vision related that I'm curious about (and I'm not sure if it's covered in the new/existing lessons) is how to craft a data set from data I might have accessible to me but which isn't necessarily image-based, and how to apply these techniques to that sort of data set to come up with predictions. I bring this up because one of my goals in learning about this topic is to see how I might apply it back to my job at a community college: if I can pull historical data related to our students, I'd like to use it for forecasting/recommendation purposes and create some useful applications our students can utilize. As a simple example, I might use historical data about the current student, and maybe data from other similar students, to predict success in a student's upcoming courses.
Thank you and keep up the awesome work (and for sharing it freely :-)!
Just wanted to say thanks for all your work. I'm taking some time off from my regular job as an iOS developer to pursue an ML project of mine. I wouldn't have had the confidence to do that without your online course.
I wrote up a project I did after last year's class on Medium, also because of your guidance.
1. You've made an off-hand comment on one of your videos that a sequential dense network is just a generalization of any other type of neural network architecture. In theory you could re-create an RNN or CNN through just Dense layers. But obviously it's not practical.
Why isn't it practical? Is it because the network would have to be too deep, or too wide? Would the optimizer just get stuck in a local minimum, or would overfitting be inevitable? Or perhaps some combination of issues?
What do you think is the best hope for a generalized network architecture, most similar to our brain?
2. On a somewhat related note, do you have strong enough faith in the current machine learning algorithms and architectures (RNNs, CNNs, capsule networks) that, given infinite resources (training time and network size), we would be able to create a meaningful general AI? Or do you think that our current approach is just incremental and a truly different approach would be required to achieve meaningful AI?
Schmidhuber did a paper a few years ago showing near SoTA performance on computer vision using just a fully connected net. One of our students showed how a convolution is just a weight-tied matrix multiply here: https://medium.com/impactai/cnns-from-different-viewpoints-f...
So the issue is that without the weight-tying, you've got more parameters to regularize (which can decrease performance) and train (which takes longer). So you should use weight tying where you can - e.g. by using convolutions.
In general, domain-specific architectures try to find structure in the underlying data and problem, and use that to decrease the number of parameters we need. The use of implicit factorizations in the inception and xception architectures is a good example.
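The "convolution is a weight-tied matrix multiply" point above can be illustrated in a few lines. This is a minimal NumPy sketch (not from the linked article): a 1D convolution with a 3-tap kernel is written out as multiplication by a dense matrix in which every row reuses the same three weights, shifted by one position. Untying the weights would turn those shared three parameters into one free parameter per matrix entry, which is exactly the regularization/training cost described above.

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0, 11.0])  # input signal
k = np.array([0.5, -1.0, 0.5])            # 3-tap kernel

# Direct "valid" convolution (cross-correlation, as deep learning libraries do it)
direct = np.array([x[i:i + 3] @ k for i in range(len(x) - 2)])

# The same operation as a matrix multiply: each row of W holds the same
# three kernel weights, shifted one column to the right. The weight tying
# is that all rows share k instead of having independent parameters.
W = np.zeros((3, 5))
for row in range(3):
    W[row, row:row + 3] = k
as_matmul = W @ x

assert np.allclose(direct, as_matmul)
```

A fully connected layer of the same shape would have 15 free parameters here; the tied version has 3, independent of the input length.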
Also, any chance of https on the forum?
PS: The focus of fastai is training, not production. The models you end up with are largely standard pytorch models, so standard pytorch approaches to production work fine. For most people, a simple flask endpoint with CPU inference is generally the best approach for this (e.g. all the deep learning web apps at http://allenai.org/ use this approach AFAIK.)
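A minimal sketch of the "simple flask endpoint with CPU inference" approach mentioned above. The route name and the `run_model` stand-in are illustrative assumptions, not part of fastai; in practice `run_model` would load a trained PyTorch model once at startup, call `.eval()`, and run the forward pass on CPU inside `torch.no_grad()`. Here it just returns the input length so the sketch is self-contained.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_model(text):
    # Stand-in for real CPU inference: model.eval(), then a forward pass
    # under torch.no_grad(). Returns a JSON-serializable result.
    return {"length": len(text)}

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"text": "..."} and return the prediction.
    payload = request.get_json(force=True)
    return jsonify(run_model(payload["text"]))
```

Serve it with any WSGI server (e.g. `gunicorn app:app`); for light traffic, `flask run` is enough.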
Thanks Jeremy for putting this together.
Will you release a new version of the part 2 course this year? I haven't started part 2 yet, but I heard that the part 2 course already uses the new PyTorch-based library.
It'd be great if I could switch to the new lessons but with AWS.
I'm having trouble finding the (old) 2017 material. I would like to see it for my reference. Do you know where this is located?
The old forum is still at the same place http://forums.fast.ai/c/part1
This is again going to take up my weekends just like their previous course. :D
Congrats on the new course! Thank you. :)
Why thank you :) Being uncool is our specialty.
(I was trying to be sarcastic. Thankful for the courses :)).
This includes setup of fastai, pytorch, cuda90, cudnn, opencv, bcolz, and much more!
BTW I just had a thought. What if instead of a txt file like you have:
hi everybody welcome to practical deep
[hi everybody welcome to practical deep]
Didn’t know punctuator existed, I’ll look into that!
(this is a genuine question, and not a meta-comment)
(I'd still suggest doing the first few lessons on Paperspace so you can focus on the deep learning, rather than the setup. It's only $0.45/hour and 20 hours is plenty enough to get going. Sometimes getting your computer set up can be distracting and frustrating at first!)
(It's for dummies that are prepared to work hard over a 7 week period and that have been coding for at least a year.)
If you're considering the investment of weekly flights, you should probably first ask the students at http://forums.fast.ai whether they would recommend it.
OTOH, I've done it remotely just watching the videos and I've found it great too.
I'm only down the coast in SD so it wouldn't be too bad... but certainly something to consider.