I took that course from the pre-Coursera Stanford videos, when someone from Black Rock Capital taught the course at Hacker Dojo. Did the homework in Octave, although it was intended to be done in Matlab.

It was painful. Those videos are just Ng at a physical chalkboard, with marginally legible writing. All math, little motivation, and, in particular, few graphics, although most of the concepts have a graphical representation.


Spot on. I respect the depth of Ng's knowledge, but for 99% of people, knowing how to implement a linear regression algorithm is completely useless. Hardly anyone is trying to write a better ML algorithm; the rest of us just need to import code that was written by PhDs. So it's far better to understand higher-level concepts like when you should use a certain ML method, what assumptions go into it, and generally how the underlying algorithms work.


Well, not if you want to be a data scientist, I think.

If you don't do a class where you build things from first principles, you'll never know how to tweak code you imported.

The linear regression algorithm he teaches is a stepping stone to neural networks: it's a neural network with no hidden layer and no nonlinearity. True, you would probably never use that in the field, but you have to start with something simple.
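
To make that concrete, the course's batch-gradient-descent linear regression is only a few lines of numpy (a minimal sketch with made-up data, not the actual assignment code):

    import numpy as np

    # Made-up data: y = 3x + 4 plus noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, (100, 1))
    y = 3 * X[:, 0] + 4 + rng.normal(0, 0.5, 100)

    # The "network": prediction = X @ w + b, no hidden layer,
    # no nonlinearity, trained by batch gradient descent on MSE.
    w, b, lr = np.zeros(1), 0.0, 0.01
    for _ in range(5000):
        err = X @ w + b - y
        w -= lr * X.T @ err / len(y)   # gradient of (half) MSE w.r.t. w
        b -= lr * err.mean()           # gradient w.r.t. b

    print(w, b)   # should land near [3.] and 4.0

Swap the identity prediction for a sigmoid and you have logistic regression; stack two of these and you have a neural network. That's why it's the stepping stone.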

After I took the Ng course and put a couple of algos built from examples in the course into production, I said, "oh, let me use R or scikit-learn instead of this hacky Octave." And off the shelf, using default parameters, none of them performed nearly as well. You need to understand the algorithm pretty granularly to be able to cross-validate and tune its parameters.
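
The tuning step itself is easy to write down in scikit-learn; it might look something like this (a sketch, with placeholder grid values, not recommendations):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    # Don't trust the defaults: cross-validate over a small grid
    # and keep whatever scores best on the held-out folds.
    search = GridSearchCV(
        SVC(),
        param_grid={"C": [0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]},
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)

The hard part is knowing which parameters matter and what a sensible grid looks like, and that's exactly what implementing the algorithm once teaches you.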

The field is sufficiently new that for anything interesting, an off-the-shelf import from scikit-learn is not going to be anywhere near state of the art; you should have the ability to roll your own.

It would be interesting to re-implement Ng's examples and assignments in TensorFlow.
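
Even the first assignment (univariate linear regression) translates nicely. With TensorFlow's Keras API it might be something like this (a sketch with stand-in data, not the course's dataset):

    import numpy as np
    import tensorflow as tf

    # Stand-in data: y = 1.5x + 2 plus noise.
    X = np.random.uniform(0, 2, (97, 1)).astype("float32")
    y = (1.5 * X[:, 0] + 2 + np.random.normal(0, 0.1, 97)).astype("float32")

    # Linear regression is a Dense(1) layer with no activation,
    # trained by SGD on mean squared error.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer=tf.keras.optimizers.SGD(0.1), loss="mse")
    model.fit(X, y, epochs=200, verbose=0)

    print(model.layers[0].get_weights())   # ~[[1.5]] and ~[2.]

You'd lose the pedagogical value of writing the gradient update yourself, but it would show how the same model scales up to the later assignments.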


For people getting started with ML, do you think it is more important to learn first principles and the "boring" math like this, or is it more important to give the learner some quick wins and keep the excitement and interest levels up?


Do what feels good for you :)

Ng is a fine place to start: you get some pretty quick wins, doing MNIST from first principles within a month or two. You just need to know, or get comfortable with, matrix multiplication. It strikes a reasonable balance between being rigorous and being approachable for a committed student at an undergrad level.
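
"Just matrix multiplication" is only a slight exaggeration: a forward pass for an MNIST-sized network is a couple of matmuls. A toy sketch in numpy (the sizes and the ReLU are my choices, not the course's):

    import numpy as np

    # MNIST-sized toy network: 784 inputs -> 64 hidden -> 10 classes.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 0.01, (784, 64)), np.zeros(64)
    W2, b2 = rng.normal(0, 0.01, (64, 10)), np.zeros(10)

    def forward(X):                        # X: (batch, 784) pixel values
        h = np.maximum(X @ W1 + b1, 0)     # matmul, then ReLU
        logits = h @ W2 + b2               # matmul again
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)   # softmax probabilities

    print(forward(rng.random((5, 784))).shape)    # (5, 10)

Backprop is the same story run in reverse, which is why comfort with matrices gets you most of the way.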

Stanford's Statistical Learning course is easier https://lagunita.stanford.edu/courses/HumanitiesandScience/S...

LAFF (Linear Algebra: Foundations to Frontiers) is just starting http://www.ulaff.net/

Hinton's Neural Networks for Machine Learning is offered in the fall https://www.coursera.org/learn/neural-networks

For my money, I wouldn't do something like Practical Machine Learning in R, because I think you'll learn more R than machine learning. I wouldn't do the Udacity TensorFlow course because I think it assumes a lot of stuff you would learn in Ng's class ... I think Ng is a fine place to start.


This feels like a pretty loaded question. It seems like you can have math with quick wins, keeping excitement and interest. When you say "boring" math, are you referring to the overall content or the way it's taught?

Most of my experiences with "boring" math came about because it was taught poorly or I wasn't ready for it.

ML is such a broad canopy that it probably includes many who aren't ready for the math, and will find it boring. It's the same with the distinction between appliers and "methodologists" in statistics.

Breaking down "people getting started with ML" into what they want to do with it feels more tractable. Maybe it's an issue of courses signaling who they are geared for.


I'm really glad it's out there though. I may be in the minority, but I would love to write better ML algorithms!


Can you recommend courses/books/etc that take that approach?


Agreed. Though this is pretty consistent with college CS/Math courses in general (at least in my experience). A lot of dense theoretical content covered in scribbles and slides. You don't really learn anything until you just do practice problems or research the same topics independently.


> You don't really learn anything until you just do practice problems or research the same topics independently.

This is to be expected. As my Linear Systems textbook says, "math is a contact sport."


You're right -- the class is intended to be a primer for your learning or, ideally, something you come into having already read about the material, ready to gain insights.


The current Coursera course's videos are pretty unadorned, but he's not using a physical chalkboard anymore. I also found that for most of them I can use the subtitles instead of the audio and play them back at about 2x speed.


That alone would be a big help. Reading his chalkboard work is hard.


I wrote up the course a few years ago for easy reference. I need to update these notes, as I have about a year's worth of (minor) typos that people have pointed out [which is hugely appreciated], but in general they seem to have been well received.

http://holehouse.org/mlclass/


Heh, this is how my CS classes were.



