I have only the best possible things to say about the video lectures, but the lab assignments are just too easy. For something that is supposed to be the advanced track, I was hoping that they would do something more than "take this description of the function, implement it in Octave, make it pass our tests".
Of course, I know it's their first class and you can't get everything perfect right off the bat. But I'd love to see the lab assignments be one larger project: some kind of task where you need to apply the principles learned to solve different problems. It would also be more effective for measuring and grading students.
On an unrelated note: I wonder what the license is for the classes/videos. I'm now working on an adaptive e-learning platform, and it would be cool to put their videos and questions into our system and see how effective the system is.
Unfortunately the "difficulty" of the ML class assignments was not in applying the material learned in the lectures but in the gotchas of Matlab/Octave vectorization. Most of the assignments were in the format: "Here's a formula that applies to a single element; generalize it to the matrix so there are no for-loops." That is challenging for those not accustomed to thinking that way, but it's not a challenge of applying what one learned from the lectures.
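To make that concrete, here's the kind of loop-to-vectorization transformation I mean, in Octave. This is an entirely made-up illustration, not one of the actual assignments, and the variable names and sizes are my own:

    % Toy setup: m examples, n features, a parameter vector theta
    m = 5; n = 3;
    X = rand(m, n);            % feature matrix
    theta = rand(n, 1);        % parameter vector

    % Element-wise version: h(i) = sum over j of X(i,j) * theta(j)
    h = zeros(m, 1);
    for i = 1:m
      for j = 1:n
        h(i) = h(i) + X(i, j) * theta(j);
      end
    end

    % Vectorized version: the same computation as one matrix-vector product
    h = X * theta;

The gotcha was never the formula itself; it was noticing that the double loop collapses into a single matrix multiply.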
The course was awesome for what it was: a chance for the average programmer to get their feet wet in ML. But there's a somewhat humorous irony in its being hosted by Stanford: the class was pretty accessible, whereas Stanford's real-life rigor is supposed to be nothing of the sort.
I agree the programming assignments have been very easy.
However, I doubt they have the bandwidth to manually grade thousands of assignments. Given the scale of the class, the grading is forced to be automated. It's hard to come up with such assignments, and even harder to make them challenging, given an automated grading script.
The grading doesn't need to be manual. Think of the lab assignments as something like a smaller-scale Netflix challenge:
1) They provide some set of data and establish rules for the competition.
2) They implement their own solution to the challenge, and that is the benchmark.
3) A "passing grade" is obtained by getting any working system.
4) The actual grading is then given on a curve, compared against their benchmark.
If your project is better than the benchmark, you get an A+; 95%-100% of the benchmark gets you an A; 85%-95% a B; and so on. (See the sketch below.)
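Here's a rough Octave sketch of how that curve could be scored automatically. The metric, variable names, and exact cutoffs are my own assumptions, just to make the idea concrete:

    % Hypothetical: compare a submission's RMSE against the benchmark's.
    benchmark_rmse = 0.95;                   % instructors' reference solution
    student_rmse   = 0.97;                   % submitted solution
    score = benchmark_rmse / student_rmse;   % >= 1.0 means beat the benchmark

    if score >= 1.00
      grade = "A+";
    elseif score >= 0.95
      grade = "A";
    elseif score >= 0.85
      grade = "B";
    else
      grade = "pass";                        % any working submission passes
    end
    printf("relative score %.2f -> %s\n", score, grade);

Since everything reduces to comparing one number against the benchmark, this scales to thousands of submissions with no manual grading at all.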
I think this is a brilliant idea, and it's already being implemented by many training organisations through kaggle.com's "kaggle in class" program: http://inclass.kaggle.com/
Using this system for ml-class (and presumably the forthcoming pgm-class and nlp-class) would be extremely beneficial for real-world application of the information presented.
That said, learning what the algos are and how they work is one thing; learning how to actually apply them to real-life situations is another. I think the class leans quite heavily towards the former, but I really love the few glimpses of the latter.
Personally, as someone who is new to the field (I didn't do maths at college) and is barely fitting the classes and exercises around a full-time workload and other things, I am glad that the programming exercises are "easy". Some of them are ridiculously easy, agreed (half the solution is given basically verbatim in the PDF notes, and the other half in the code comments), but for most of them I think it's enough to wrap my head around what's actually happening, especially in the multiclass neural network assignment. That gives me enough foundation to try to apply them to real-world situations on my own time.
Granted, it wouldn't work in many other classes, but my assembly language teacher did something like this. First, your code had to work or you got nothing. There was also a time limit, to rule out ridiculously slow but working code. Finally, each working submission was graded by the number of bytes it used above the reference implementation.
And he knew all the tricks. I don't think anyone ever beat him. And he didn't show anyone any of the solutions until after the final.
I felt like I learned more from the few minutes I spent reading those solutions than I did during the rest of the course.
Yes. The online ML class format is extremely effective, and efficient too: after two months I'm able to apply ML to the most common problems, despite having no previous background in the topic.
I don't know the numbers for the ML class, and the format is a bit different, but for the DB class, of the many people who signed up, only 9180 took the midterm[1]. I originally wrote that only 908 got scores >= 18/20, but that's wrong: the final had 20 questions, the midterm had 18. So that's 908 people who scored a perfect 18, 2082 with 17 or over, and 3428 with 16 or over.
So if the ML numbers are at all similar, maybe it wouldn't be so bad. That is, if there were such a letter.
I'll add that Professor Widom's style is quite engaging as well.
I wonder about this. Apparently the letter we get at the end contains our ranking. I wonder what the distribution of scores would be; I'd assume only a small fraction of enrolments actually do the exercises, with a big spike at 100% and most falling in the range of 80-100%. Personally, I forgot that I hadn't done a couple of the quizzes, so I did them late, and I didn't complete the optional part of the first week's programming assignment on time. So although I've got a "good mark" on paper, I'm probably nowhere near the top 10% of students.
That's at least partly due to an extremely effective teaching system and style, so I'd say it's a worthwhile tradeoff.