They say you can "audit" the course for free, but they employ a ton of dark patterns to get you to pay for it. I haven't been able to find out where to audit it yet.
Update: You have to go into the individual courses within the specialization and the enroll popup will have an audit option.
All videos of all courses on Coursera are free. You can watch them in full without providing your credit card info.
There are two types of courses on Coursera: free and paid.
For paid courses, you can go to the course, navigate to the "Buy Subscription" page, and click "audit the course". You can watch all the videos for free, but you don't get access to quizzes and programming assignments (you never know what a web search will turn up ;)) ⊕. You do not get a certificate for completing a course or for completing all courses of a "Specialization".
In the case of a free course, you get access to all the videos, quizzes, and assignments, but you don't get any kind of certificate. Instead of going to the subscription page, you can just click "Enroll" and choose the no-certificate option.
There are some great courses in the free tier (videos + assignments, no certs) as well. Dan Boneh's Cryptography and Grossman's Programming Languages A, B, C come to mind. Also Model Thinking by Scott Page.
There were some great discussions on HN in the past. [0][1][2]
⊕ There are courses where duplicates of the paid assignments and quizzes are provided as "Practice Assignments" rather than "Graded Assignments", like Martin Odersky's Functional Programming Principles in Scala MOOC.
I'll ask the opposite question: how much do these courses cost? Some quick googling has led me to Coursera, but their pricing model seems a bit obtuse. So if they're going to try and dark-pattern me into paying, I'm trying to understand how much I would pay. I don't care about a degree from these places, I'd just like to learn.
(specifically the crypto course sounds interesting)
Coursera has a monthly $45 fee for the whole specialization, but specializations also include a 7-day trial with access to all course material and assignments across all courses. Back in my college days (2-3 years ago), I would start a specialization and blaze through it in 6-7 days. Money saved and time well spent. Ofc now that I am working, it's not going to be that easy.
I thought it was useful but awfully low-level. For example, I hope to never, ever implement backpropagation again; I'm going to use whatever code is in TensorFlow or PyTorch or whatever. But as a student I'm glad I implemented it myself, once, so I understand what is going on. More broadly, it demystifies the black box of machine learning methods, and you can see it for the giant pile of statistical categorizing functions that it is.
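(For anyone curious what "implementing it myself, once" amounts to, here is a minimal sketch of backprop for a tiny two-layer network in plain numpy. The toy data, architecture, and hyperparameters are all invented for illustration; this is not the course's code.)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                        # toy inputs
    y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # XOR-like labels

    # Small two-layer network: 2 -> 8 (tanh) -> 1 (sigmoid)
    W1, b1 = 0.5 * rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = 0.5 * rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(2000):
        # Forward pass
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass: chain rule, layer by layer
        dz2 = (p - y) / len(X)           # grad of mean cross-entropy wrt logits
        dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
        dh = dz2 @ W2.T * (1.0 - h**2)   # tanh derivative
        dW1, db1 = X.T @ dh, dh.sum(axis=0)
        # Gradient step
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("train accuracy:", ((p > 0.5) == y).mean())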
The most practical takeaway I got from Ng's course was the danger of underfitting and overfitting your data, and techniques for detecting when you've made that mistake.
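(To make that concrete, here is a sketch of the basic detection technique: hold out some data and compare training error with validation error as model capacity grows. Both errors high suggests underfitting; training error that keeps falling while validation error climbs suggests overfitting. The toy data and degrees below are mine, not the course's.)

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=60)
    y = np.sin(3 * x) + rng.normal(scale=0.2, size=60)
    x_tr, y_tr, x_va, y_va = x[:40], y[:40], x[40:], y[40:]

    # Increase model capacity (polynomial degree) and watch both errors.
    for degree in (1, 3, 9):
        coeffs = np.polyfit(x_tr, y_tr, degree)
        err_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
        err_va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
        print(f"degree {degree}: train MSE {err_tr:.3f}, val MSE {err_va:.3f}")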
I still remember a talk by a woman from Google at a fairly long-ago O'Reilly conference (R.I.P.). Part of what she discussed was Research AI vs. Applied AI. The gist of it was that a lot of the material in university courses, graduate programs, etc. is tilted towards Research AI, and you can get away without a lot of that stuff by using pre-built tooling for practical machine learning applications.
Of course, you want to have some understanding of what's going on under the covers but, for a lot of people, starting from first principles is quite hard and isn't really necessary.
Not the course I took. It relies on basic linear algebra like matrix multiplication. You can probably get through it by just coding and not understanding the math, but it wouldn't be much fun.
I took this course as a defensive mechanism against BS at work, especially when the consulting Data Scientists were around. In that sense it's super practical.
ML is dominated by gigantic datasets and massive computing power, something individuals will not have a lot of.
It is unlikely that you could build a major product with it, but it could teach you neat tricks to speed up some parts of your work. Also, similar to CS101, it is a necessary first step towards a career in ML. So you might as well do it.
I know a bunch of business analysts and data analysts who have gotten a job based on what they learnt in this course. Ofc, they also got a STEM degree alongside it, but this course made a difference.
In 2012 I did Andrew's original machine learning course and implemented a bespoke OCR engine for iOS, which was released in a banking app for scanning utility bills. Back then deep learning was just taking off, so I did my own backprop training in Matlab, based on Andrew's code as well. It was a pretty fun end-to-end experience, much better than just throwing stuff at TensorFlow like we do nowadays.
Do you think deep learning doesn't require much math nowadays? If so, how much math is enough to be a truly good deep learning specialist? By deep learning specialist I mean a person who builds commercial software that uses deep learning, not tools for deep learning.
The things I learned here helped me gain a solid foundation, which, in turn, helped me learn Deep Learning.
And Deep Learning feeds me now.
The good thing about this course is that it is not math-shy. It is not rigorous in terms of math, in that there are no proofs and so on, but math is omnipresent here.
Andrew Ng's MOOC is the best game in town. Ng is among the best teachers I have ever seen.
No. The ugly truth is that these courses will be useless to 99% of people. Machine learning is dominated by big corporations with gigantic amounts of data and processing power. If you want to work at one of them or create a competing ML company, you need pedigree (a PhD from a well-known university), and those people aren't taking courses with fake credentials.
You could use ML in your job/company, but then you don't need this course; you just use an ML product.
See this course as a hobby thing, or take it if you are in HS and want to start preparing for college; otherwise there are better uses of your time.
There's a lot of ML happening outside of big corporations, which you can confirm by just searching 'machine learning' on any job site. While it's true that you can often use ready-made ML solutions, you will often benefit from additional knowledge when improving or adjusting them for your company's specific problem, and when interviewing you will often be asked the kind of questions those courses cover.
You can get almost unlimited GPU time on Google Colab for $50 a month. I don't know why or how they pay for this, but it does bring "real research" into the reach of individuals.
You can get more processing power and truly unlimited time with any semi-competent graphics card (probably costing less than 1 year of Colab Pro+). Pro+ is a scam: you are not told what kind of instances you will be running on, and you don't have any guaranteed continuous running time. And even if you were given full 24/7 access to a top-of-the-line card, that would be like 0.001% of the power used to train any big modern ML model.
For example, the Google Vision API can do some out-of-the-box classification on arbitrary images with no training needed. It covers super common cases such as explicit content detection and object detection.
There are more customisable products within Google where you can provide training examples and labels using a UI (AutoML, I think it's called). The result is an endpoint you can use to do inference, based on the model created behind the scenes.
I just mention these examples because I've spent a little time researching them at a high level.
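(For a flavor of how little code the out-of-the-box route takes, here is a sketch using the Cloud Vision Python client. It assumes google-cloud-vision is installed and credentials are configured; photo.jpg is a placeholder path.)

    from google.cloud import vision

    # Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account key.
    client = vision.ImageAnnotatorClient()
    with open("photo.jpg", "rb") as f:   # placeholder image
        image = vision.Image(content=f.read())

    # No training needed: request labels for an arbitrary image.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, label.score)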
Though I had almost zero ways to actually use what I learned from this course (and indeed really never did any ML after, and have probably forgotten it all), it was still a really fun brain exercise to revisit some math and then see how ML thinking worked! I have recommended it quite a few times.
I highly recommend https://course.fast.ai/. It's much more top down: in the first lesson or two, you train a NN image classifier, rather than starting with first principles and linear algebra. I found this structure to be more motivating and effective.
Fast AI teaches you a little bit more about being a practitioner, dealing with datasets, pointing the right algorithms at the right data and checking whether you get good results.
Andrew Ng, for me, did a lot more to demystify how the stuff actually works.
I didn't do these particular courses but I found it a lot easier to stay motivated with the top down approach. First demonstrate usefulness, then deepen fundamentals.
When I was younger and didn't work full time + have other commitments, the bottom-up approach appealed to me more, I think partially because I had bigger time blocks to allocate. I.e., I could spend a whole weekend just learning the fundamentals of some particular thing I was interested in and reach the first levels of usefulness in that one "session".
These days, smaller time blocks mean that I need to walk away with something to keep the spark going for most curiosities.
Jeremy Howard came off as anti-intellectual to me. He is always saying "oh, math is nothing... you do not need math... math is not needed" and stuff like that.
Other than that, fast.ai is a great resource, and Jeremy Howard is a great instructor.
You will learn very practical tools and tricks, and a lot of recent research is demystified, but don't expect to achieve deep, general insights.
Also, fast.ai is a very very limited and poor library compared to PyTorch, JAX, TF, etc.
Programming, design, and architecture decisions are outright terrible.
I got paid to write fast.ai code at one job. I still have nightmares. I never did it again.
I wouldn't say Jeremy is anti-intellectual, but he does know that a lot of people get turned off from the AI field because they are afraid of the math, and a lot of other courses (used to?) start with the math. So he makes sure early and often to tell people that you don't need to understand the math that is happening deep under the hood in order to do productive, even state-of-the-art, research with AI.
> When I was younger and didn't work full time + have other commitments
I second this - while both are great courses, I found that recently I could only dedicate very short amounts of time to any kind of study, and going from the ground up more thoroughly made it seem like I was making no progress. The fast.ai top-down approach worked a bit better for me for those reasons; otherwise it would have been interesting to start with the deep dive.
Yeah, this is probably the reason it worked better for me as well. I had bounced off of some other courses because I didn't have the time to dedicate 16+ hours of lectures before I would get to the fruit of all that foundational knowledge. Starting with some high-level abstractions, then digging down into how each of those abstractions ticked, kept the number of concepts I had to remember at once to something more manageable while dealing with distractions like a full-time job.
I did (old versions of) both of these and liked both. What I liked about the top-down approach of fast.ai is that it worked the way I approach working with other programming systems. You have a thing you want to do and APIs that promise to do that thing for you, and you plug them together. Then you decide you want to change it from the default behavior, so you tweak the parameters, then you need to learn why they're set up the way they are, and how they work, etc.
Similarly, when I learned web development with Rails over a decade ago, I didn't start by building an HTTP stack. I started by doing the build-a-blog-in-fifteen-minutes tutorial. Now I had a working project. Eventually I needed to learn all of the underlying technologies, but it's much easier and more rewarding to have something running first.
I found that starting with the big picture and a tangible result made it easier to stay engaged. At the end of the fast.ai course, however, I felt there were some gaps in my understanding, especially on the low-level side. Andrew Ng's course helped fill in those gaps.
Can't tell you about Andrew Ng's course as I haven't done that, but for Michael Nielsen's course it was matrices and partial derivatives. I'm assuming it's quite similar.
It's a recorded version of a real Caltech undergrad course, and it's focused on understanding the math behind these algorithms, not just applying black-box ML libraries.
It's much less practical, but I feel like it teaches you more.
OK, since it is for a teenager, I would assume just basic computer competency (how to install programs and stuff like that) but nothing else, so apologies if some of these things are pretty obvious/basic...
Now, for any of those 3 options a little (or a lot) of guidance and patience will be needed so hopefully you or some friend/peer can help with that. Good luck!
I took this when it was mlcourse, along with the aicourse by Peter Norvig. I was in research at the time. They were entertaining, but certainly mainly an intellectual curiosity for both academics and practitioners. Nowadays, practitioners would most likely use an ML library.
I'll say that the waitlist registration form is very sketchy. Agreeing to receive marketing updates is required to join a waitlist for a course? Classy move.
Is the registration broken? I am getting "Please complete this required field" errors on two fields that I cannot see (or fish out of the div soup that is this signup page).
Question: Why was the original version in Matlab? I am familiar with Python, R, and others... I get that, until recently, those languages might not have been much better than the ancient predecessors (Lisp, etc.) for ML-related work.
But I've never seen actual production anything in Matlab. Did Matlab provide something at the time that others did not? If so, how did they transfer Matlab code to running production models? Or did they create a model with basic outcomes and then code a representation of it in C++, etc.?
Before Python (Numpy/Scipy) really came into its own (which didn't quite happen until the early 2010s), Matlab was among the easiest-to-use scripting languages for writing scientific computing programs. I was in university (Bachelor's + Master's) from 2007-2012 and learned Matlab extensively in my numerical computing classes (I was a Physics major, for what that's worth). When you were ready to run your scientific computing code on a supercomputer cluster, you'd usually rewrite it in C++ or Fortran (my research group used the latter), but to develop and debug at a small scale, you'd usually use Matlab, although younger people like me might use Numpy, and there were one or two people who used Mathematica even for numerical computing (as opposed to symbolic algebra, which everybody used Mathematica for).
It was around the time I was in university that Python really matured for numerical computing, but professors (as opposed to grad students) were likely to be already familiar with Matlab, so there wasn't much reason for them to learn Python. Andrew Ng was already a mid-career researcher when he made his course, which was probably based on older materials (I also learned basic neural networks in my numerical computing class in 2008), so it made sense for him to continue to use Matlab, especially because Octave exists as an open-source reimplementation of the basic functionality.
These days, you wouldn't use anything but Python for ML, at least until you really productionize the implementation at a large scale, in which case you might rewrite it in C++ or Rust (I don't know if they even bother rewriting these days, when most of the computation happens on GPUs or TPUs). And it's my understanding, although I'm not really too familiar these days, that Matlab has mostly pivoted into providing a toolbox of all sorts of esoteric numerical methods for engineering-related tasks like finite element analysis, as well as hardware simulation (using Simulink).
Well, Matlab didn't pivot into that esoteric toolbox of numerical methods for engineering tasks; it has always been that. That's probably the very reason ML researchers picked it up in the first place: everyone was already using it anyway for scientific (esp. linear algebra) calculations.
I did my degree a bit before you (in Electrical Engineering; I also learned all I could about NNs and other AI methods back then), and most people would use MATLAB for whatever scientific algorithms/calculations they needed to do. We had free student licences at the university so that we could use it for lab work and for our theses. I remember it had all kinds of numerical optimization algorithms/packages, control theory algorithms, etc.
@telotortium - hey, thank you so much for adding that context; it is incredibly helpful. Now I can internalize why it was Matlab. The update to Python is going to scale it to a new level of learners!
The programming assignments were one or two lines in Octave. They'll turn into 10 lines of Python with indentation errors. Python is a worse pedagogical language for any course in applied linear algebra.
OTOH, the time I spent learning Octave/Matlab for Andrew Ng's course was 100% wasted time, because I've never used it again in the 10+ years since I took the class, whereas time spent learning Python would've been useful to me in myriad other ways.
So sad to hear Matlab disparaged. I'm a Gen X engineer, and Matlab is one of our first languages. I still use it today in aerospace, but I would imagine Python suits software shops much better. Does Python handle matrix math as well?
If you're playing around interactively, it's a bit easier to write (in Matlab)
m = [1 0 0 ; 0 0 -1 ; 0 1 0]
than (in Python)
m = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
Also a bit longer example:
m = rand(3,4)
a = [0.1 0.2 0.3]
m \ a'
versus
m = np.random.rand(3,4)
a = np.array([0.1, 0.2, 0.3])
np.linalg.lstsq(m, a.T)
wtf?
google...
fine!
a = np.array([[0.1, 0.2, 0.3]])
np.linalg.lstsq(m, a.T)
But if you're developing software, you can't really easily and reliably deploy Matlab or Octave to run in the cloud in your production systems, whereas Python you can.
Matrix math in Python is a bit clunkier, because matrices are not native to the language. That said, numpy, the standard for matrix math in Python, is quite nice. Its documentation is, IMO, miles ahead of Matlab's, and the APIs are a bit more sane.
This brings up a good point. If the goal is to understand the underlying concepts, it's quite possible Octave is a better tool. Matrix math is fairly clunky in any mainstream programming language.
It's fairly common that I have a little dataset I need to run some FFTs on and draw a graph, or some other similar one-off task. MATLAB still wins for getting the job done.
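(For comparison, the numpy/matplotlib version of that one-off task looks something like this; the signal and sample rate are invented for the example.)

    import numpy as np
    import matplotlib.pyplot as plt

    fs = 1000                         # sample rate in Hz (made up)
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    # FFT of a real signal, then plot the magnitude spectrum.
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    plt.plot(freqs, np.abs(spectrum))
    plt.xlabel("frequency (Hz)")
    plt.show()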
I wish someone would make "MATLAB with all its toolboxes, but with python syntax, in a colab-like IDE".
Is that not what numpy/scipy with Jupyter (as well as the various distributions that package them) essentially provide? If you just want an all-in-one that has a supported Matlab FFI interface, there's Julia.
The programming assignments in the original course were mostly useless. They provided you with a template with 90% of the problem solved, and you just had to enter an equation to solve the problem.
> But you also had to understand the rest of the code in the template.
Not really. They had lots of comments that explained what the code did. You didn't need to read most of the code.
My point is that, compared to real university courses, the homework in this course would be labeled "trivial". Writing those few lines of code was no more instructive than an in-class paper test. It's more comparable to answering simple questions than to building anything.
I don't think I had to debug even once in that course. It was that easy.
I mean, Python has issues (indentation errors isn't one I would list), but that's beside the point, isn't it? The lingua franca for ML is currently Python. Teaching Octave, when most things learners search for will be in Python, just seems unnecessarily stubborn. Some day it might be Julia, but we aren't there yet.
Indentation errors are a big problem for pedagogy. Imports are a big problem for pedagogy. When you are teaching how algorithms work, you want to implement them in a language as close as possible to the language of the domain. Hence, Python is a terrible pedagogical language for linear algebra, and Octave is a reasonable one.
1) Python is the lingua franca for ML. You WILL need to learn Python. All other resources are in Python. Matlab you'll likely never use again, so it's kind of a waste.
2) Probably more people have existing Python knowledge than Matlab knowledge. And if you already know Python, and you know Python is the lingua franca, it's annoying having to learn Matlab knowing that in the real world you'd be better off with Python.
1. You'll use Matlab in any other linear algebra or numerical methods course, as well as in any digital signals course.
2. Optimization algorithms, of which gradient descent is a subset, are deployed in production in many languages, very often not Python.
3. There is almost nothing to learn. For the programming assignments in the course, Octave is used as a succinct DSL for matrix math. The assignments were simply to write the math on a computer and watch what happens when you run the computations.
4. You wouldn't learn Python by completing the programming assignments because you're just calling numerical routines, not dealing with anything else. Writing the code in Python simply adds more opportunity for error with no pedagogical benefit.
Octave is an easy language for beginners and has excellent support for linear algebra out of the box (less ceremony than numpy), without having to learn any libraries. The point of the class isn't to teach you how to use libraries but to teach you, at a high level, how to use gradient descent to optimize parameterized models. Once you understand how it works, it is easy to translate what you know to run well on different systems or to use existing frameworks already implemented on different systems.
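(As a rough illustration of what those assignments boil down to: gradient descent on a parameterized model, here plain least-squares regression, written in Python/numpy rather than Octave. The data and names are made up.)

    import numpy as np

    rng = np.random.default_rng(42)
    X = np.c_[np.ones(100), rng.normal(size=(100, 2))]   # design matrix with bias column
    true_theta = np.array([1.0, 2.0, -3.0])
    y = X @ true_theta + rng.normal(scale=0.1, size=100)

    # Minimize mean squared error with plain batch gradient descent.
    theta = np.zeros(3)
    lr = 0.1
    for _ in range(500):
        grad = X.T @ (X @ theta - y) / len(y)   # gradient of (1/2m)||X theta - y||^2
        theta -= lr * grad

    print(theta)   # should land close to [1, 2, -3]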
Numpy is kind of a funky library with some weird (but good!) syntactic sugar that doesn't translate to the rest of Python. Scipy is a different beast. And pandas. I could go on. Making and using matrices feels weird in Python, and interoperability/efficiency doesn't come for free.
Compare that to Matlab, where matrices are first-class and the syntactic sugar is consistent and rather lovely. But then the rest of the language is detestable.
I took this course and Dan Boneh's cryptography course and both were truly excellent.