What struck me was how much importance he gave to this metric, which isn't that hard to game on an online offering.
I know a few people (who shall remain nameless) who collaborate and check each other's answers before submission, in direct violation of the Stanford policy (and have 100s, or close to it), and so have probably received this mail, whereas more "deserving" (note the quotes) people who honestly work through the course material may not, because they have, say, an 85% or 90% score.
That said, my key takeaway from this is that professors are very impressed by perfect scores irrespective of how you got them. There must be something magical about that row of 100s. Once you set up a grading/ranking system, it is psychologically very hard not to admire people who end up at the top.
I am personally a little dubious that the people with the highest scores would make the best pool of employees, especially given that this is an online course without the programming component, but what do I know?
I wrote Java code for most of the algorithms in AIMA as a side project a few years ago. After I read an online post by Peter Norvig saying a few of his students had tried and failed to implement the code in Java (Common Lisp code existed, and the Python version was in its infancy), I sent him my code, and it became the "official" Java distribution for AIMA (though I don't maintain it anymore; the immensely talented Ciaran O'Reilly of the Stanford Research Institute does). And no one ever invited me to Stanford or offered me a cool AI job, sob! :-P
No, I am not bitter, I tell you, not even the teeniest bit :-p
I wonder how this signalling will play into the upcoming courses. If there are tangential real-world benefits to be gained by attempting a "perfect" score, then you can expect a lot more game playing with respect to scores and exams.
More about how Peter Norvig shredded my initial code, etc., here: http://news.ycombinator.com/item?id=2405277
Though eventually, after a lot more work, it did lead to my working on good ML/robotics projects from Bangalore, which is a hard thing to do in the Great Outsourcing Wasteland.
I am really not bitter.
I wrote the code for the hell of it, not to get a job. AIMA was my introduction to the fascinating field of AI. It is a great, great book and it has a lot more material than is covered in the course.
I once did want to go to Stanford and learn from the great profs there, but now, in "mountain comes to Mohammed" fashion, Stanford is coming to me. I don't care about the credentialling; I just want to learn. I took the AI online course and enjoyed Peter's and Sebastian's teaching immensely. FWIW, I should have a high-90s score (I didn't add it all up), but nowhere near a perfect score.
Second, I registered for the Machine Learning course (I am not sure if the same applies to the AI course), and I compared it with the actual ML course at Stanford (CS229); I mainly looked at YouTube videos of Andrew's lectures as well as the assignments/midterm. The latter is by far more advanced and theoretical. Its assignments test more than basic comprehension of the material presented in the lectures, which is all that the online course's review questions tend to evaluate. They require strong mathematical knowledge and, obviously, a minimum level of creativity/intelligence.
I don't. That part of the post was written with tongue firmly in cheek. If that tone didn't come through, it means I have to improve my writing.
The online ML course is CS 229A, which is also an actual course at Stanford; the online version is close to the on-campus one.
The "tough" version is CS 229 (no 'A' at the end). I registered for the ML course thinking it was an online version of CS 229 and dropped out when it was confirmed to be 229A. In my politically incorrect opinion, 229A is close to worthless. The math is important in real-world ML, and this course included gems such as "if you don't know what a derivative is, that is fine".
The online AI course is almost exactly the same as the Stanford course (CS 221), minus, of course, the programming assignments. It is an introductory, broad-based course, and it does the job well (imo).
The online DB course is almost (if not exactly) the same as Stanford CS 145. I think this was the best course of the three.
All courses track the corresponding Stanford courses.
It also included other gems like debugging models with learning curves, stochastic gradient descent, artificial data and ceiling analysis. I have not come across practical things like these in more mathematically oriented ML books that I have tried reading in the past.
Interestingly, your arrogance is in sharp contrast with the humility of the professor, who admits in places that he went around using tools (like SVMs) for a long time without fully understanding the mathematical details.
I'd hardly call it worthless myself. It lacks a deeper analysis of all the methods that are used, but using them can sometimes be a greater challenge.
I did the AI course and the ML course and found them a great way of getting a little overview of the subjects, so that when I study on my own, I have some direction.
A bit of me died when I heard Prof. Ng say that. However, I had committed to finishing ml-class, and I did. As of now, I'm glad I went through with it. I felt like I was learning all these cool AI techniques that I hadn't heard about before. However, the proof is in the pudding. The question is whether I will be able to take a real-world problem and apply what I learned in that class to come up with something interesting. If I can't, you are probably right: my perfect record would only be worth the paper it's printed on and the money I paid for the course!
I'm not pointing fingers at Prof. Ng. or anyone here. It was an experiment for Stanford and an experiment for me. I know I am looking forward to the courses next year :).
The programming projects that ML class had were a slightly better metric of performance, as there's more work that would have to be plagiarized; and if you're just going to go through life outsourcing all of your work, then I guess that's your prerogative. However, I think that if you wanted to be very serious about actually testing knowledge of the material, then adding some sort of interview component (a phone/Skype session), while time-consuming, could help.
1) You immediately know whether you got it right or wrong when you submit, so you can, to some extent, brute-force the correct answer.
2) With the exception of maybe the first assignment, they are all "fill in the blank" programming assignments. You basically just have to find the equations they give you in the PDF, translate them directly to Octave, and bam, you're done.
I personally have scored straight 100%s in only a single course (Python programming), and that was only because I was already relatively expert in the material before the course began.
Well, the only two people I personally know who would get all 100s are Peter Norvig and Sebastian Thrun, and I personally wouldn't mind hiring them!
Of course, in reality, Peter Norvig and Sebastian Thrun are working on projects that have long time horizons, e.g. self-driving cars and search. So perhaps you're still correct: The people you would hire to bang out code to meet a short deadline are probably different from the people you would want to work on your long-term technology bets.
In general, I disagree that knowing a topic incredibly well is necessarily overfitting. Deep knowledge can only aid new insights. You often hear about mathematicians and physicists who think by inhabiting their own mental world, composed of insights that they hold so deeply that they are _intuitive_.
The work you did strikes me as a far better measure of character and subject understanding.
Why would someone do this for a free online course that gives no credit for a degree? I mean, the whole point is to learn, not get the highest grade. I really fail to understand people sometimes.
* rather than just giving up on a problem, you can talk it out and learn together
* you get the opportunity to teach material that you think you know but that others find hard (a good heuristic for spotting problems you may have only barely understood, but gotten correct anyway). Teaching material is a great way to learn it and to expose any gaps in your knowledge.
* instant feedback on problems while they are still fresh in your memory
Your final score will be calculated as 30% of the score on the top 6 of your 8 homework assignments, 30% your score on the midterm exam, and 40% your score on the final exam. For those completing the advanced track you will receive your final score as a percentage as well as your percentile ranking within all those who completed the advanced track, and this will appear on your statement of accomplishment. The statement of accomplishment will be sent via e-mail and signed by Sebastian Thrun and Peter Norvig. We hope to have them digitally signed to verify their authenticity. It will not be issued by Stanford University.
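For concreteness, here is a minimal sketch of how that stated weighting works out, assuming all scores are percentages out of 100; the function name and the example numbers are my own, not from the course:

```python
def final_score(homeworks, midterm, final_exam):
    """Compute the final score per the stated policy:
    30% from the best 6 of 8 homework scores, 30% midterm, 40% final.
    All inputs are percentages (0-100); homeworks is a list of 8 scores."""
    assert len(homeworks) == 8
    best_six = sorted(homeworks, reverse=True)[:6]  # drop the two lowest
    hw_avg = sum(best_six) / 6
    return 0.30 * hw_avg + 0.30 * midterm + 0.40 * final_exam

# Example: the two lowest homework scores (40 and 0) get dropped,
# so two bad weeks don't sink the overall grade.
print(final_score([100, 95, 90, 100, 85, 100, 40, 0], 92, 88))
```

Note that dropping the two lowest homeworks means a couple of missed weeks barely dents the final score, which is another reason the raw percentage is a noisy signal.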
As with the homeworks, exams must be completed individually without the help of other people.
I think that lots of people probably did collaborate on the homeworks and the midterm, and will likely do so again on the final; it was, and will be, cheating.
It's a shame, especially given that the instructors do seem to be attaching some importance to students' scores and rankings, but I'm not letting it detract too much from my enjoyment of the class.