I'm an intermediate-level programmer and would like to dip my toes into AI, starting with the simple stuff (linear regression, etc.) and progressing to neural networks and the like. What's the best way to get started online?
A few months ago I stumbled upon the amazing YouTube channel 3Blue1Brown, which explains math in a very accessible way. Watching it, I got the feeling that I was finally starting to understand the core ideas behind linear algebra and calculus.
Just recently he published 4 videos about deep neural networks:
So my fear of ML went away, and I'm very excited to explore the whole new world of neural networks and other things like support vector machines.
Even if you think you grok matrices, have a go at the first few videos of that playlist, if only for the visualizations. It really helped me see what matrices (and operations on them) represent!
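To make that concrete, here's a tiny NumPy sketch of the "matrix as transformation" idea those videos visualize (my own toy example, not from the videos):

```python
import numpy as np

# A 2x2 matrix is a linear map: its columns show where the basis vectors land.
# This one is a 90-degree counterclockwise rotation:
# e1 = (1,0) -> (0,1) and e2 = (0,1) -> (-1,0).
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([1.0, 0.0])  # the x-axis basis vector

print(R @ v)      # one quarter turn: lands on the y-axis, [0. 1.]
print(R @ R @ v)  # composing maps = multiplying matrices: two quarter turns, [-1. 0.]
```

Playing with small examples like this makes "matrix multiplication is composition of transformations" click much faster than symbol-pushing did for me.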
I believe it was developed for Brown's linear algebra course for CS undergrads.
I would specifically recommend Machine Learning Foundations: A Case Study Approach - it's fantastic and was a great help in starting my ML journey last year.
Turi is awesome, I hope Apple is doing something great with it.
If the lecturers aren't very engaging, Coursera courses can be as hard a slog as any other lectures. I gave up on the Scala functional programming course and, disappointingly, have stalled on Geoffrey Hinton's Neural Networks course.
But I really can't overstate how good Andrew Ng is: he has a very relaxed manner and manages to make some very complex topics seem almost trivial.
The worst of the mathematics is derivatives and matrix multiplication. You can mostly avoid matrix multiplication in the ML course, but in his Deep Learning course he takes you through the ~300x performance benefit you get from using NumPy and matrix multiplication instead of loops.
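For what it's worth, you can see that speedup yourself in a few lines (the exact factor depends on your machine and BLAS library; the ~300x figure is from the course):

```python
import time
import numpy as np

n = 1000
A = np.random.rand(n, n)
x = np.random.rand(n)

# Matrix-vector product with explicit Python loops
t0 = time.perf_counter()
y_loop = np.zeros(n)
for i in range(n):
    s = 0.0
    for j in range(n):
        s += A[i, j] * x[j]
    y_loop[i] = s
loop_time = time.perf_counter() - t0

# The same product, vectorized
t0 = time.perf_counter()
y_vec = A @ x
vec_time = time.perf_counter() - t0

print(f"loops: {loop_time:.4f}s  numpy: {vec_time:.6f}s  "
      f"speedup: {loop_time / vec_time:.0f}x")
```

Both compute the same numbers; only the inner loops move from interpreted Python into optimized native code.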
The two most important things to remember, since the courses are challenging: 1) don't be in a hurry, and 2) don't give up! Take the time to learn every detail presented, do the optional exercises, and dig deep.
It's specifically geared towards visual recognition, but it starts with the basics of machine learning, moves on to feed-forward nets and convnets, and covers RNNs and attention towards the end.
The assignments are a great set of Jupyter notebooks that really get your hands on the material, and you can find a number of people's completed assignments on GitHub just by searching.
The lectures are available online as well https://www.youtube.com/playlist?list=PL3FW7Lu3i5JvHM8ljYj-z...
I've done Hinton's and Ng's courses, and as someone who already has a non-AI development background I found this to be the best introduction. It's really an extension of Andrej Karpathy's Hacker's Guide to Neural Networks (http://karpathy.github.io/neuralnets/)
The good news is that compared to other technical fields, the math is also relatively shallow. Here are some good resources that you don't need more than calculus/linalg for (I've used all of them and they got me off the ground):
Once you feel confident, the Deep Learning book is more math-heavy, but it is really very good. The authors are more or less deep learning gods. It'll teach you a tremendous amount about how and why neural nets work and the principles used to discover new architectures, and give you a strong intuition for how to use neural nets as a tool. Read it slowly; unless you're already good at math, it takes a while to get through. Don't skip the first five chapters. Use Google and Wikipedia to pick up concepts you don't understand along the way instead of skipping over them (it will bite you later).
I can speak to what "AI" means for most businesses outside Top Tech, which more frequently work with tabular, relational, or log data rather than images and text. For these companies, this is what you need to learn how to do:
1. Define a prediction problem and extract labels
2. Organize and clean the data for prediction
3. Perform feature engineering by applying domain expertise
4. Apply an off-the-shelf open source machine learning algorithm like a random forest
At my company, we've noticed a lot of programmers are intimidated by the feature engineering step in particular, so we tried to make it easier by creating an open source library called Featuretools.
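As a rough sketch of steps 1-4 (the toy data, column names, and churn task are made up for illustration, not from any real dataset):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical purchase log: will a customer churn?
df = pd.DataFrame({
    "customer_id":     [1, 1, 2, 2, 3, 3, 4, 4],
    "purchase_amount": [20.0, 35.0, 5.0, None, 50.0, 45.0, 10.0, 5.0],
    "churned":         [0, 0, 1, 1, 0, 0, 1, 1],
})

# 1 + 2. Define the prediction target and clean the raw data
df["purchase_amount"] = df["purchase_amount"].fillna(0.0)

# 3. Feature engineering: aggregate the raw log per customer
feats = df.groupby("customer_id").agg(
    total_spend=("purchase_amount", "sum"),
    n_purchases=("purchase_amount", "count"),
)
labels = df.groupby("customer_id")["churned"].max()

# 4. Apply an off-the-shelf model
X_train, X_test, y_train, y_test = train_test_split(
    feats, labels, test_size=0.5, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

In real projects step 3 is where the work (and the domain expertise) concentrates; that's the step libraries like Featuretools try to automate.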
Maybe you could start an AI study group online? The Silicon Valley study group was great, but I was just visiting.
it has Py2, Py3, R, F#, anaconda, TF, CNTK, etc. pre-installed.
There are some ML tutorials on it already, plus you can use the "load from github" feature to load, run, and edit many of the great tutorials already on GitHub.
Other similar environments include Colab by Google and CoCalc.
If you already know Python, you could dive straight into machine learning (https://www.dataquest.io/course/machine-learning-fundamental...) and work your way up to calculus/linear algebra, linear regression, decision trees, neural nets, etc.
If you want to get a taste without signing up, you can check out our blog posts that preview the course (like this one: https://www.dataquest.io/blog/machine-learning-tutorial/)
Happy to answer any questions over DM or email (srini@ourdomain).
For AI, I would take the Udacity AI courses.
For ML, I would take the Udacity ML courses.
I take a lot of different online courses. I have no affiliation with Udacity; their courses are just too good.
I studied AI (focused on ML) in a decent grad school (and I like to think I had the best teachers there), and I think the quality of the courses is comparable.
Or is it an operant/classical conditioning sort of thing, where AI is specifically about training programs to act rather than to perceive/categorize things?
I suppose you can have AI that incorporates no ML (like most video game AI), but I'd imagine that will become vanishingly rare in the future.
Traditionally, AI has been divided into distinct subfields (e.g. search, planning, natural language and speech processing, game playing, computer vision, robotics, knowledge representation, expert systems, logic, and ML). Today, ML is employed in all AI subfields, but until recently, most subject matter in each AI subfield had been unrelated to ML. In the past decade especially, that's changed as deep learning and probabilistic methods have gained mindshare and now are largely unavoidable when tackling AI-related problems.
In general, AI's subfields have focused on identifying fundamental obstacles and important features in their own problem domain and developing appropriate techniques that operate on those features when solving problems (such as using object recognition and localization to solve vision problems like autonomous driving). I suspect AI's past emphasis on feature engineering has faded as NN-based ML has risen.
Don’t have strong feelings about these definitions.
Statistical Rethinking by Richard McElreath gives a good introduction to Bayesian approaches to statistical analysis
People like you are our primary audience :) It should start you exactly where you want to begin and take you a good chunk of the way to where you want to get.
Please check it out
Here's a recent video where he talks about how to create new Pokemon with Generative Adversarial Networks (https://en.wikipedia.org/wiki/Generative_adversarial_network).
Nice contrast from the usual MNIST dataset, especially if you want to be inspired to think about novel ways to apply this stuff:
-  https://www.coursera.org/learn/machine-learning/
-  http://ocdevel.com/podcasts/machine-learning
-  https://www.goodreads.com/book/show/24612233-the-master-algo...
-  https://www.robinwieruch.de/linear-regression-gradient-desce...
I have two engineering degrees and studied neural networks in college. He condenses about 3 months' worth of math into a couple of lines of code in a way that makes sense AND is productive. It may just be that his teaching style matches my learning style really well, but I'm enjoying going through it.
His course inspired me to create a cryptocurrency trading bot, which I spun into a business offering forecasting for altcoin markets: http://BitBank.nz - Crypto Market Predictions with Machine Learning
I managed to make much more successful forecasts by understanding the fundamentals taught in that course, like under-fitting and over-fitting, and how to visualize what's happening by plotting a learning curve, etc.
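If you want to try that diagnosis yourself, scikit-learn's `learning_curve` does the bookkeeping for you. A minimal sketch on synthetic data (not our actual pipeline):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import learning_curve

# Synthetic noisy regression problem
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# Train/validation scores at increasing training-set sizes
train_sizes, train_scores, val_scores = learning_curve(
    Ridge(alpha=1.0), X, y,
    train_sizes=np.linspace(0.2, 1.0, 5), cv=5)

for n, tr, va in zip(train_sizes,
                     train_scores.mean(axis=1),
                     val_scores.mean(axis=1)):
    print(f"n={n:3d}  train R^2={tr:.3f}  val R^2={va:.3f}")
```

Reading the curve: a large persistent gap between train and validation scores suggests over-fitting (get more data or regularize); both scores low and converged suggests under-fitting (use a richer model or better features).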
The forecasting algorithm really just applies the fundamentals thoroughly, in perhaps a novel way; e.g. one feature we compute at the current time is the linear regression of trades over time, weighted by their amount.
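A toy version of that kind of feature, using NumPy's weighted polynomial fit (the trade data here is invented for illustration, and this is not our actual feature code):

```python
import numpy as np

# Hypothetical trades in the current window
times   = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # seconds into the window
prices  = np.array([100.0, 100.5, 100.3, 101.0, 101.2])
amounts = np.array([0.5, 2.0, 0.1, 3.0, 1.5])      # trade sizes, used as weights

# Amount-weighted degree-1 fit of price vs. time; the slope is the feature,
# so big trades pull the trend line more than tiny ones.
slope, intercept = np.polyfit(times, prices, deg=1, w=amounts)
print(slope)  # positive slope -> amount-weighted upward price trend
```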
So it's definitely worth the investment, I think :) Try to apply what you learn to a real-world problem, which I think is the interesting part. Although you'll end up doing a lot of data engineering, you'll savor the AI/ML part even more, and you'll start to appreciate (and test out) strategies for improving performance on your particular problem.
Having a play around with the create-your-own deep neural net at playground.tensorflow.org is pretty helpful. Try to conceptualize what you've been taught in the courses by playing around with it, e.g. add more layers/breadth to your network to watch it get more and more powerful, and watch it begin to overfit when you add noise to your data, etc.
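You can reproduce that same effect offline. A rough sketch with scikit-learn's MLP (the dataset, sizes, and noise level are arbitrary choices of mine): an oversized net memorizes noisy labels, so train accuracy soars while test accuracy lags.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Tiny 2D "XOR quadrants" problem with ~20% label noise
X = rng.normal(size=(60, 2))
y_noisy = (X[:, 0] * X[:, 1] > 0).astype(int)
flip = rng.choice(60, size=12, replace=False)
y_noisy[flip] = 1 - y_noisy[flip]

# A large clean test set drawn from the same distribution
X_test = rng.normal(size=(500, 2))
y_test = (X_test[:, 0] * X_test[:, 1] > 0).astype(int)

# An oversized network relative to 60 training points
big = MLPClassifier(hidden_layer_sizes=(100, 100),
                    max_iter=5000, random_state=0)
big.fit(X, y_noisy)
print("train:", big.score(X, y_noisy), "test:", big.score(X_test, y_test))
```

The gap between the two scores is the same over-fitting you see in the playground when you crank up depth/breadth with noisy data.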
The Pacman programming exercises in Python
And the Kaggle Titanic Survivability dataset
But if you want an even gentler intro, try Daniel Shiffman's Nature of Code in p5.js
best of luck ;)
Andrew Ng's ML course quickly provides a base in theory.
Ideally you couple that with some empirical work.
For that, I think sklearn is the best starting point (assuming you go down the python path). Modify some sample code and make a few simple models. Sklearn provides an excellent framework across all kinds of models (including deep learning if you use say keras.wrappers.scikit_learn), and can play well with pandas.
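For example, a first sklearn model is only a few lines, and because the estimator API is uniform you can swap models freely. A minimal sketch (the dataset choice here is just for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Load a built-in toy dataset and cross-validate a simple model
X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())
```

Because every sklearn estimator exposes the same `fit`/`predict` interface, replacing `LogisticRegression(max_iter=1000)` with, say, `RandomForestClassifier()` is a one-line change, which makes comparing model families very cheap.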
There are lots of practical concerns that come up that are not covered in intro ML courses.
It's a good structured way to learn the core of ML while learning about neural networks, and without having to become a linear algebra expert, which for most people, including me, was a deal breaker with other courses. The timing is great too, as ML now is so much different than it was 2-3 years ago.
Linear Algebra (which is what you really need): Gilbert Strang - Linear Algebra and its applications
These two are all you need, with which you'll get a solid base. Then you're good to go on your own. These two combined are about 4 semesters worth of work. But if you really focus, I think you can get them done in a little less than 6 months.
If you want a 'just what I need' approach, Khan Academy.
I don't remember what intro linear algebra books I used, but my college uses this: https://www.math.ucdavis.edu/~linear/ (I took the class before this free textbook was developed).
The next one starts in January, and is taught by an MIT grad that taught a similar course at MIT.
What percentage would you do for $100k?
Or paying up-front/in monthly payments is $1041/month for 12 months.
It's a bit more cursory and mostly just a collection of articles/papers, but it has the benefit of not being paced like a university course.
After you've got a grasp of what these things are doing then you can move into the how. For that you will need some math background, with emphasis in calculus and probability.
After that, you can take a look at PRML. https://www.amazon.com/Pattern-Recognition-Learning-Informat...
Some people might prefer seeing things from another approach. http://pgm.stanford.edu/
1. Hugo Larochelle's Deep Learning course available on YouTube
2. Depending on how much math you like, Nando de Freitas's Deep Learning course (also on YouTube) is also superb.
It teaches you the foundational theory behind ML, and shows how the fancier stuff is built on it. Good to know the foundations, so you can branch outside of predefined ML techniques.
As other posts note, there are many resources available for teaching the concepts. But they don't teach the limits of AI, and the rise of MOOCs is setting novice programmers up for a shock when they encounter real-world data that is not as nice as the Titanic dataset and requires making smart decisions to handle it, in a way that does not invalidate the results.
Many romanticize ML/AI as something that can solve any problem, which is a dangerous approach.
Courses You MUST Take:
Machine Learning by Andrew Ng (https://www.coursera.org/learn/machine-learning) /// Class notes: (http://holehouse.org/mlclass/index.html)
Yaser Abu-Mostafa’s Machine Learning course which focuses much more on theory than the Coursera class but it is still relevant for beginners.
Neural Networks and Deep Learning (Recommended by Google Brain Team) (http://neuralnetworksanddeeplearning.com/)
Probabilistic Graphical Models (https://www.coursera.org/learn/probabilistic-graphical-model...)
Computational Neuroscience (https://www.coursera.org/learn/computational-neuroscience)
Statistical Machine Learning
From OpenAI CTO Greg Brockman on Quora:
Deep Learning Book (http://www.deeplearningbook.org/) ( Also Recommended by Google Brain Team )
"It contains essentially all the concepts and intuition needed for deep learning engineering (except reinforcement learning)." - Greg
2. If you’d like to take courses: Linear Algebra — Stephen Boyd’s EE263 (Stanford) (http://ee263.stanford.edu/) or Linear Algebra (MIT)
Neural Networks for Machine Learning — Geoff Hinton
Neural Nets — Andrej Karpathy’s CS231N (Stanford)
Advanced Robotics (the MDP / optimal control lectures) — Pieter Abbeel’s CS287 (Berkeley)
Deep RL — John Schulman’s CS294–112 (Berkeley) http://rll.berkeley.edu/deeprlcourse/