1. How exactly is it different from Andrew Ng's course, other than Octave/Matlab vs Python? As someone who is new to ML and wants an introduction, I'm now confused about which course I should take.
2. What are the math prerequisites? Do you also cover them in the course, or is there a list of materials available to prep?
Sometimes I'll assume some understanding of basic linear algebra or probability distributions when explaining why something works, particularly in the naive Bayes section.
The first half of the course assumes only high school math. If you get to a bit where I mention some math you're less confident of, just search for that term on Khan Academy for a lesson. Or just skip that explanation in the course - it won't stop you from being able to apply the concepts even if you don't always follow the math.
After completing this course (in addition to Andrew Ng's), will I be able to implement state-of-the-art models for real-world problems, where "just ML" (as opposed to Deep Learning) can be used? In other words, would I be competent enough for an entry-level ML engineer position?
Also, the machine learning course was originally the introduction to a masters program, so it is a little less intensive and fast-moving. But it also assumes a little more math background in parts, since everyone in the program was already pretty familiar with linear algebra, probability, and statistics.
In the end, the two courses go together pretty well, so I don't think it matters too much what order you do them in.
Do you think DL techniques are going to be better than traditional techniques for tabular data in the future?
This is in reference to your article: http://www.fast.ai/2018/04/29/categorical-embeddings/ and lessons 3 and 4 from the DL course.
Although decision tree approaches may well continue to be faster to train in many cases. At the moment, it really helps to be familiar with both approaches.
Thanks for making this available.
On Udemy, there's a particular course author I like a great deal (Anthony Alicea iirc) who made a huge difference when I was first learning nodejs & angularjs. Unfortunately, it looks like he only gets to work on these courses here and there - the final project of the Nodejs course has been in progress for years at this point.
I can't help but wish that he were able to make more money from these courses so that he would be motivated to develop another 2-3 of them. As-is, it seems like a lot of good teachers end up teaching on a part-time basis only (there are some YouTube tutorial creators I have in mind here too).
The course I am working on has been in development for over a year.
Quick query - both lessons 1 and 7 are titled “Introduction to Random Forests”, is that intentional?
(In fact, we have a major software release coming up next week - and then nothing planned until well into 2019.)
With that said, I'm so excited to learn from your new course, and can't wait to start.
There is some truth to Octave having a faster turnaround than Python, if you're new to programming. I feel like with Deep Learning you really have to bite the bullet, but his courses are just fine with Octave/Matlab - with respect to getting an intuitive feel for the algorithms.
Slightly OT question: At some point after DL1 pt 2 was released and before DL2 pt 1 was released, I recall you saying it was probably better to wait on starting the DL series since the new DL series (at the time) was going to be completely revamped with just PyTorch.
Would you say something similar about what I presume to be DL3 pt 1 coming out soon-ish? If so, when would you say that threshold is (i.e. if you start before this date, do DL2 pt 1, if you start after, wait for DL3 pt 1 to come out).
Hopefully that made sense.
Shouldn't we start with something like linear regression?
Decision trees are much easier to use correctly, and can be easily implemented from scratch without relying on any external libraries (as we do in a later lesson).
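To give a flavor of the "from scratch" point: this is not the course's implementation, just a minimal sketch of a regression tree on a single feature, where each split greedily minimizes squared error and each leaf predicts the mean target.

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on one feature that minimizes total squared error."""
    best_t, best_err = None, float("inf")
    for t in np.unique(x)[:-1]:                      # candidate thresholds
        left, right = y[x <= t], y[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def build_tree(x, y, depth=2):
    """Recursively split; leaves predict the mean target."""
    if depth == 0 or len(np.unique(x)) < 2:
        return y.mean()
    t = best_split(x, y)
    return (t, build_tree(x[x <= t], y[x <= t], depth - 1),
               build_tree(x[x > t], y[x > t], depth - 1))

def predict(tree, xi):
    while isinstance(tree, tuple):                   # descend until we hit a leaf
        t, left, right = tree
        tree = left if xi <= t else right
    return tree

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 1.0, 10.0, 10.0])
tree = build_tree(x, y)
print(predict(tree, 1.5), predict(tree, 3.5))  # 1.0 10.0
```

A real implementation handles multiple features, stopping criteria, and classification, but the recursive split-then-predict-the-mean structure is the whole idea.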
I actually wrote most of the machine learning content at Dataquest (where I work), and I started with k-nearest neighbors because it's way more approachable (https://www.dataquest.io/course/machine-learning-fundamental...). Little math, very visual, easy to program, etc. I used this easier-to-approach algorithm to teach the other key ideas in ML (train/test splits, cross validation, error metrics, etc.).
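The "easy to program" claim is easy to back up: here's a minimal k-nearest-neighbors regressor (illustrative names, not taken from the Dataquest course) that predicts by averaging the targets of the k closest training points.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict by averaging the targets of the k closest training points."""
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # Euclidean distance to each row
    nearest = np.argsort(dists)[:k]                    # indices of the k closest points
    return y_train[nearest].mean()                     # average their targets (regression)

X = np.array([[1.0], [2.0], [3.0], [10.0]])
y = np.array([1.0, 2.0, 3.0, 10.0])
print(knn_predict(X, y, np.array([2.1]), k=3))  # 2.0 (mean of the 3 nearest targets)
```

That's the entire algorithm, which is exactly why it works well as a vehicle for teaching train/test splits and error metrics before tackling anything with more math.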
Happy to chat more about different pedagogical approaches to teach machine learning!
That's a great idea. It's actually what I did in an earlier version of the USF course, and it worked great. Especially because a decision tree is basically just KNN with a different distance measure (loosely defined), so it can flow well.
However it turned out that jumping straight to decision trees worked out well too, so I'm happy with the change.
There is no need to know gradient descent to learn OLS. All you need to be able to do is take derivatives (and "matrix" derivatives).
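Concretely: setting the derivative of the squared error to zero gives the normal equations, which can be solved directly with no iterative optimization. A quick numpy sketch (the data here is made up for illustration):

```python
import numpy as np

# d/db ||y - Xb||^2 = 0 gives the normal equations:
#   X^T X b = X^T y   ->   b = (X^T X)^(-1) X^T y
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])  # intercept + one feature
true_b = np.array([2.0, 3.0])
y = X @ true_b                                           # noise-free for clarity

b = np.linalg.solve(X.T @ X, X.T @ y)  # solve, rather than inverting explicitly
print(b)  # recovers [2.0, 3.0]
```

Using `np.linalg.solve` rather than forming the inverse is the standard numerically safer choice, but either way there's no gradient descent anywhere in sight.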