Hacker News
Stanford Machine Learning Course (youtube.com)
79 points by helwr on June 4, 2010 | hide | past | favorite | 17 comments



There's a pile of great AI/ML content up on Youtube as well. IIT has posted two cool AI series that HN'ers might like. See:

http://www.youtube.com/watch?v=eLbMPyrw4rw&feature=PlayL...

and/or

http://www.youtube.com/watch?v=fV2k2ivttL0&feature=PlayL...


These lectures follow Russell and Norvig's AI: A Modern Approach. If you're self-studying, I find watching these lectures helps a great deal to reinforce what you've read.


Additional content (transcripts/handouts/assignments) is available at the Stanford website.

http://see.stanford.edu/see/lecturelist.aspx?coll=348ca38a-3...


Andrew Ng is great. He's worked on some really cool and practical stuff. Check out his projects:

http://www.cs.stanford.edu/people/ang/research.html

I helped build parts of the hardware and control software for Retiarius and the Snake robot:

http://www.cs.stanford.edu/people/ang/rl-videos/


Some people have been working through this class together on Curious Reef. It's loosely organized, with people posting questions and ideas in the class forum: http://curiousreef.com/class/stanford-cs229-machine-learning... Might be a useful resource to people learning this material. (Disclosure: it's my website)


Thanks for posting this. The course is also on iTunes U, if anyone wants to download the files to sync to an iPod or iPhone.


I was hoping for a video dense with concrete material I could apply, or terms I could Google to learn more.


This is good but not excellent. Too much theory motivated by theory motivating more theory, whereas in the real world theory is usually motivated by, and invented after, practice.


Are you watching the same thing I'm watching?

"So I have a friend who teaches math at a different university, not at Stanford, and when you talk to him about his work and what he's really out to do, this friend of mine will — he's a math professor, right? — this friend of mine will sort of get the look of wonder in his eyes, and he'll tell you about how in his mathematical work, he feels like he's discovering truth and beauty in the universe. And he says it in sort of a really touching, sincere way, and then he has this — you can see it in his eyes — he has this deep appreciation of the truth and beauty in the universe as revealed to him by the math he does.

"In this class, I'm not gonna do any truth and beauty. In this class, I'm gonna talk about learning theory to try to convey to you an understanding of how and why learning algorithms work so that we can apply these learning algorithms as effectively as possible."


Huh? You use the theory to figure out what to implement in practice.

The idea of just randomly hacking some shit together then backfitting a theory onto it is absurd. That's the same strategy that led to many of the past failures of "AI" -- approaches based too much on intuition that wasn't theoretically well grounded.

Probability theory and statistical models are foundational material for anyone interested in machine learning.


There's a great applicable quote from Nikola Tesla here: “If Edison had a needle to find in a haystack, he would proceed at once with the diligence of the bee to examine straw after straw until he found the object of his search. I was a sorry witness of such doings, knowing that a little theory and calculation would have saved him ninety per cent of his labor.”


As I just pointed out to someone else who quoted the exact same thing elsewhere on HN, that approach seemed to work out pretty well for Edison.


I don't agree with the parent, but I also don't think backfitting the theory is absurd.

What I've discovered from using machine learning in practice is that it's far more important to degrade gracefully when you have little data than to do the theoretically best thing when you have a lot of data. What this ends up meaning is that a hacky thing that is somewhat reasonable but based on the realities of the data will usually perform better than something more sophisticated that made too many simplifying assumptions along the way.

(That said, stats is amazing, and is the most important thing to learn for anyone getting into machine learning.)
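As a toy illustration of the graceful-degradation point above (my own sketch, not from the thread): with only a couple of observations, the theoretically clean maximum-likelihood estimate of a rate swings to extremes, while a "hacky" smoothed estimate with made-up pseudo-counts stays sensible, and the two converge once data is plentiful.

```python
def mle_rate(successes, trials):
    # Maximum-likelihood estimate of a success rate:
    # extreme (0% or 100%) when trials are scarce.
    return successes / trials if trials else 0.0

def smoothed_rate(successes, trials, prior=0.5, strength=2):
    # Add pseudo-counts (a crude Beta-style prior): pulls tiny
    # samples toward a sensible default instead of 0% or 100%.
    return (successes + prior * strength) / (trials + strength)

# One success in one trial: MLE jumps straight to 100%.
print(mle_rate(1, 1))         # 1.0
print(smoothed_rate(1, 1))    # 0.666... -- degrades more gracefully

# With lots of data, the hack and the MLE agree.
print(mle_rate(300, 1000))      # 0.3
print(smoothed_rate(300, 1000)) # ~0.300
```

The prior and strength values here are arbitrary defaults chosen for the example; the point is only that a small, data-aware correction often beats the textbook-optimal estimator when the data is thin.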


I disagree. What you just described is the definition of a crackpot. If you want to achieve real results you need a firm understanding of the fundamentals; then you learn how to extrapolate. Besides, when learning theory you often undertake case studies that allow you to see which real-world problems these theories actually apply to.

If you watch the lectures, Professor Ng does a good job of showcasing projects that benefit from various algorithms.


Can you please suggest alternative university courses / reading material / progression?


Do you know of any better lectures?


Well, there are loads of lectures on videolectures.net that are definitely worth a view, in particular those from the Machine Learning Summer School (primarily because audio quality tends to be pretty sucky in some of the other ones).




