Unfortunately, the "difficulty" of the ML class assignments was not in applying the material from the lectures but in the gotchas of Matlab/Octave vectorization. Most of the assignments took the form: "Here's a formula that applies to a single element; generalize it to the whole matrix so there are no for-loops." That's challenging for those not accustomed to thinking that way, but it isn't a test of what one actually learned from the lectures.
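To give a flavor of it (an illustrative sketch, not an actual assignment; X, theta, and h are placeholder names): you'd get a per-example formula like h_i = sum_j theta_j * x_ij, which is natural to write as a loop, and the exercise was to collapse it into a single matrix operation.

    % Dummy data just to make the snippet runnable
    X = rand(5, 3);            % 5 examples, 3 features
    theta = rand(3, 1);        % parameter vector

    % Loop version: apply the formula one example at a time
    m = size(X, 1);
    h = zeros(m, 1);
    for i = 1:m
      h(i) = X(i, :) * theta;  % dot product of row i with theta
    end

    % Vectorized version the grader wanted: one matrix-vector product, no loop
    h = X * theta;

The math is identical either way; the "gotcha" was purely in seeing the loop as a matrix product and keeping the dimensions straight in Octave.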
The course was awesome for what it was: a chance for the average programmer to get their feet wet in ML. There's a somewhat humorous irony, though, in that a course hosted by Stanford turned out to be so accessible, when Stanford's real-life rigor is supposed to be nothing of the sort.