Even if you don't want to use the frameworks, you'll still have access to fast linear algebra routines.
I completed it and can't recommend it more highly. It is a really excellent, dense course and Ng is a very good teacher.
His choice of Octave/MATLAB simplifies issues of dependencies, particularly the soft ones of documentation and community. This is something a lot of academic contexts get wrong with software: the tools are either too open-ended, and students wind up manipulating matrices with for-loops; or there's an inflexible stack of professional tools that requires massive effort to learn and comes with an orthogonal community; or there's a toy IDE based on a senior thesis.
Octave more or less follows the Unix philosophy of doing one thing well, and thus can meet many people where they are rather than imposing one true way.
Once you step outside it, you need one library for the machine learning part of your project, another for the computer vision aspects, yet another for reading different data files. When you get to the 20th library you need, plus all the classes and wrappers you have to write on your own to achieve the same result, you start seeing MATLAB in a softer light. That's even more true for those who are not programming-inclined.
Browse JSTOR at the library versus Googling any historical, scientific or research topic and you'll quickly learn that "internet" offers the shitty version of surprisingly many things. (Shh... it's a secret.)
This is a bit presumptuous. I know people for whom $200 is a month's income. If you only consider their disposable income, $200 would probably take six months.
And as others have pointed out, there's also GNU Octave, which is reasonably compatible (or used to be), and free. And where it's not source-compatible, it still largely retains the same semantics and underlying models, so knowledge of MATLAB transfers easily.
In summary, in the age of open access, FLOSS science tools become a must -- and a collective responsibility.
If those tools make it impossible to share data, or publish results reproducibly, then I'd agree that those tools suck. However Matlab reads and writes every damn format under the sun.
Don't confuse "create, invent and build" with "export and publish." They're fundamentally different tasks.
Yup, startups have costs. Go figure. And sometimes you get what you pay for.
I wish I'd learned Matlab sooner. I still love the Python ecosystem, but Matlab's replaced a LOT of dicking around in Python for me. It does completely different things, and certain things are trivial or impossible in each place. Worth learning both.
I believe ANNs are Turing Complete, meaning they should be able to compute anything (EDIT: + "that is computable by any other Turing Machine"). The questions are, can a training regimen be created to create the right ANN to solve "any" problem, and if so, is it an efficient means to solve that problem?
For example, it's fairly trivial to build an ANN to spit out the right results for a given polynomial function, e.g. "f(x, y, z) = ax + by + cz". Knowing the function ahead of time, you just generate a ton of input/output sets and feed them into the training of the ANN, and from then on the ANN will spit them back out.
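A minimal sketch of that setup, in pure Python with no framework: a single linear unit trained by gradient descent recovers the coefficients of a hidden f(x, y, z) = ax + by + cz from generated input/output pairs. The target coefficients (2, -3, 0.5), the learning rate, and the sample counts are all made up for illustration.

```python
import random

random.seed(0)

def target(x, y, z):
    # the "known ahead of time" function we generate training data from
    return 2.0 * x - 3.0 * y + 0.5 * z

# generate a training set of input/output pairs
samples = []
for _ in range(200):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    samples.append(((x, y, z), target(x, y, z)))

# one linear unit: prediction = w0*x + w1*y + w2*z
w = [0.0, 0.0, 0.0]
lr = 0.1
for epoch in range(500):
    for (x, y, z), t in samples:
        pred = w[0] * x + w[1] * y + w[2] * z
        err = pred - t
        # gradient of the squared error with respect to each weight
        w[0] -= lr * err * x
        w[1] -= lr * err * y
        w[2] -= lr * err * z

print([round(wi, 3) for wi in w])  # converges toward [2.0, -3.0, 0.5]
```

The point of the toy: the "network" learns nothing you didn't already bake into the data generator.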
The problem with that is, you didn't learn anything new. You didn't learn how to solve a new problem. It's somewhat useful for teaching people how to program ANNs, but I personally think it's garbage for teaching how to understand ANNs.
ANNs make more sense when we already have the training data, but we don't know the underlying function that maps input to said outputs. In the trivial case of the polynomial function, if someone were to hand us the training set, we could use an ANN to figure out what the polynomial must be.
Except--for this particular example of a polynomial function--this isn't very efficient. For a polynomial of N terms, you only need N+1 sets of IO to trivially use algebra to determine the function. You can use any of the readily available linear algebra libraries to do such a thing. In fact, I wrote a project for a client that does just that: it uses a basic matrix library to crunch a set of GPS data to create a quadratic formula estimation of curves in roads, so that model can then be resampled, continuously, sans noise.
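To make the algebra route concrete: a quadratic y = ax^2 + bx + c has three unknown coefficients, so three samples pin it down exactly. A sketch in pure Python (the sample points and hidden quadratic are invented for illustration, and the naive Gaussian elimination stands in for whatever matrix library you'd actually use):

```python
def solve3(A, b):
    """Solve a 3x3 system A @ coef = b by Gaussian elimination with pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        # pick the row with the largest pivot for numerical stability
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# three samples of a hidden quadratic (here y = 0.5*x^2 - 2*x + 3)
pts = [(0.0, 3.0), (2.0, 1.0), (4.0, 3.0)]
A = [[x * x, x, 1.0] for x, _ in pts]
b = [y for _, y in pts]
coef = solve3(A, b)
print(coef)  # recovers [0.5, -2.0, 3.0]
```

Three points, one exact solve, no training loop -- which is the efficiency argument in a nutshell.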
And if that function is not just a simple polynomial--if, say, it includes sines and cosines and square roots, etc.-- then the ANN is going to have to be large enough to include in it ad-hoc, arithmetic estimations of sine and cosine and square roots sufficient to give the right answers. It might even include several different estimating functions just for sine just because our mystery function requires more than one sine operation. It might even have corner cases where it gets the answer wrong, because you didn't have a sufficiently large data set for it to "figure out" things like the fact that sin(x) is approximately x for small values of x. If one knew the right formula (and yes, that's a big if), it'd be significantly more efficient to write a program that computed the values correctly.
All of this is not to pooh-pooh ANNs. ANNs are great tools for when we don't know the function and when the function is sufficiently non-trivial to discover. The polynomial example is like trying to kill a fly on the wall with a swarm of nanomachines designed to evolve and learn how to construct a flyswatter (which is part of the reason I dislike it as a learning tool). But write traditional code to do Optical Character Recognition, I dare you. ANNs are just highly specialized.

Think of setting up your ANN like defining the full width and depth of the space of all possible programs that you'd like to search for the program that solves your problem. You then use feedback to "walk" across that space until you find something that looks like your desired program. We're entering an era where we have the memory and distributed processing capabilities to crank out some rather large ANNs. For some problems, we end up training a computer to write programs for us that we could have written on our own. This can impact the number of requests you can handle in a given amount of time.
Of course, that is not necessarily bad, either. "Throwing money at the problem" is not the wrong solution when you have a lot more money than time. Technology is supposed to serve us, not the other way around. Why spend a week discovering a formula to map your data when you can train an ANN in a few hours? And perhaps you don't have very high requirements for request handling. Maybe you only need to process one image a minute on your particular system. Have at it.
But you really, really need to know that is the case before you jump on the ANN bandwagon. You have to know what you want out of the ANN. If you don't have that ability to look at a set of inputs and express a desired set of outputs, then ANN isn't magic pixie dust that will solve that for you. If you have experts in your particular field telling you that your particular problem cannot be easily modeled, then ANNs might be helpful for you. If you are new to your field and you think "let's try an ANN", you're probably going to have a bad time. If you end up with an ANN that is estimating a relatively trivial program, and you're trying to provide a SaaS offering that is meant to scale to thousands or millions of concurrent users, the ANN approach could seriously harm your ability to scale.
This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of less than 10^5 synchronously evolving processors, interconnected linearly. High-order connections are not required.
You spent several paragraphs criticizing ANNs, but regular ANNs are not deep learning at all.
Also, I have no idea what you're talking about when you say ANNs are not involved with deep learning. Recurrent and Convolutional Networks are types of ANN. https://en.wikipedia.org/wiki/Deep_learning
Their mcc compiler is even crappier; it has so many memory leaks that even valgrind gives up and freezes.
I do not want to run a MATLABbed ANN over large datasets, no way.
MATLAB schwag: "Do you speak MATLAB?"
me: "No, I don't speak MATLAB, and I don't want to"
I find that having a goal of what I want to solve or create motivates me to learn. Whereas if I'm studying statistics but don't have a clear goal that motivates me (e.g., calculating sports betting odds), then it's that much harder to master and appreciate its applications.
I guess to me, knowing the application of something before I dive both feet into learning it is actually the most important truth for beginners.