1. AI: A Modern Approach by Stuart Russell and Peter Norvig.
2. Deep Learning by Ian Goodfellow and Yoshua Bengio.
It is amazing how approachable both books are for beginners, but you will be diving into increasingly academic material as you go along.
While some will argue it is dated, I think it presents many timeless ideas that will come back into vogue soon with small tweaks to their inference schemes.
Same for The Art of Prolog.
AIMA provides a better introduction to a wider range of subjects, but PAIP is one of the most elegant and timeless books on both programming and old-school AI.
There's basically nothing numerical in that book, nothing that would pass muster at NIPS or ICML nowadays or would be shipped by one of the big corporate AI labs, I'm sorry to say.
David Silver's Reinforcement Learning course is based on Sutton & Barto.
- The Master Algorithm: made for a general audience, gives you a lay of the land
- Python Machine Learning by Sebastian Raschka: gives you practical skills using python, scikit-learn, numpy, jupyter notebooks, pandas etc. From zero to kaggle in 4 chapters, goes deeper after that. Also goes into enough theory you aren't flying completely blind.
After that, I'm afraid I think you do need to go "academic", if by that you mean learning some of the underlying math to approach AI / ML from a more rigorous probabilistic perspective. I'd recommend studying probability theory and then working your way through Bishop's Pattern Recognition and Machine Learning. After that, a lot more doors open up to more specialized topics like computer vision, reinforcement learning etc.
I've written up a lot more about this here:
I agree that basic statistics + Bishop's book is a great way to start getting into machine learning -- but AI is a much broader field than that.
"new-school" AI (machine learning) is just a more pretentious name for statistics/control theory/randomized algorithms.
Some of the old ideas with enough computation power are actually pretty amazing.
If I can extract any advice from this, it's that if you put yourself out there and let the world / your network know that you are interested in something or working towards something (in my case: a transition to a career applying ML), things might turn up. Also: if you have more experience, you should feel comfortable completely customizing your resume to the role so you have one page jam-packed with relevance; it's ok if they don't see (or won't care about) some of your experience.
I also noticed that in some of your previous HN discussions you lament companies not really looking at your open source work; it's annoying that they wouldn't take the chance to look. But if I were you, I might highlight specific projects on your resume relevant to the role you are applying to, if you haven't been doing this already; this could elevate your open source work to job experience in its emphasis. Assume 99% of people will only see your resume; everything else should be supporting resources in case they get interested enough to look (or wish to validate your claims).
You can read the rest of the book if you want. You probably should, but I'll assume you know all of it.
Take Andrew Ng's Coursera. Do all the exercises in Matlab and python and R. Make sure you get the same answers with all of them.
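The "same answers in every language" check is easier than it sounds, because most of those exercises reduce to a few linear algebra calls. As a sketch of the kind of cross-check I mean (the toy data here is made up), ordinary least squares via the normal equation in numpy:

```python
import numpy as np

# Toy data generated from y = 1 + 2x, with a bias column prepended.
X = np.array([[1.0, 1], [1.0, 2], [1.0, 3], [1.0, 4]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Normal equation: solve (X^T X) theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # ~[1., 2.]: intercept 1, slope 2
```

If your Matlab and R versions of the same fit don't print the same intercept and slope, you've found a bug in your understanding, which is the whole point of the exercise.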
Now forget all of that and read the deep learning book. Put tensorflow or torch on a Linux box and run examples until you get it. Do stuff with CNNs and RNNs and just feed forward NNs.
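Before reaching for tensorflow or torch, it helps to see that a feedforward net is only a few matrix multiplies. A minimal numpy sketch (layer sizes and the random data are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer feedforward net, forward pass only.
def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU
    return h @ W2 + b2                # linear output layer

W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 2)); b2 = np.zeros(2)

batch = rng.normal(size=(16, 4))      # 16 examples, 4 features each
out = forward(batch, W1, b1, W2, b2)
print(out.shape)  # (16, 2)
```

Everything the frameworks add on top of this (autodiff, GPU kernels, optimizers) is machinery around that forward pass.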
Once you do all of that, go on arXiv and read the most recent useful papers. The literature changes every few months, so keep up.
There. Now you can probably be hired most places. If you need resume filler, do some Kaggle competitions. If you have debugging questions, use StackOverflow. If you have math questions, read more. If you have life questions, I have no idea.
source: fizixer https://news.ycombinator.com/item?id=13890952
FWIW, a "super harsh" guide to (learning) ML was posted on reddit a few days ago.
Edit: The entire Reddit discussion feels slightly similar to this one, if more snarky. The first reply there also links all the resources listed above. I don't really know enough to add anything.
AI is academic (as a synonym for 'theoretical' and 'math-intensive'). Once you look beyond purely symbolic AI, which proved to be infeasible as @curuinor pointed out somewhere here, you will need to build up at least basic knowledge in probability theory and linear algebra.
The path I'm following at the moment is a quite rigorous one and is outlined here (http://www.deeplearningweekly.com/pages/open_source_deep_lea...).
If you've never had any exposure to probability theory or statistics, I recommend having a look at the course "MIT 6.041 Probabilistic Systems Analysis and Applied Probability" taught by John Tsitsiklis at MIT (video lectures are available through YouTube and MIT OpenCourseWare for free). Both the course and Tsitsiklis' book are superb learning materials to get into probabilistic thinking.
Edit: Link was broken. Thanks to @blauditore.
A field that inspires a lot of deep learning folks and never gets mentioned in this sort of thing is the theory of physical dynamical systems. "Attractor" is a term that came from there, for example, and much of the mathematics behind the numerical fuckery inside deep nets is dynamical in nature. RNNs are entirely dynamical systems. The classic there is Strogatz's book (https://www.amazon.com/Nonlinear-Dynamics-Chaos-Applications...).
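The attractor idea fits in a few lines. A standard toy example (not from Strogatz specifically, just the usual logistic map) is x -> r*x*(1-x): for r = 2.5, every starting point in (0, 1) gets pulled to the fixed point x* = 1 - 1/r = 0.6, which is exactly what "attractor" means:

```python
# Iterate the logistic map x -> r*x*(1-x); for r = 2.5 the fixed
# point 0.6 attracts every starting point in (0, 1).
r = 2.5
x = 0.123  # arbitrary starting point
for _ in range(100):
    x = r * x * (1 - x)
print(round(x, 6))  # 0.6
```

The analysis of RNN behavior (vanishing/exploding gradients, fixed points of hidden state) is this same style of reasoning in many dimensions.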
There is also information theory, of course, which is part of the MacKay source.
Many of the earlier papers in deep learning-land are really nontrivial to read, because the terminology and worldview of everybody has changed so much. So reading original Werbos or Rumelhart is really difficult. This is really not the case for Sutton and Barto, "RL: An Introduction" (http://webdocs.cs.ualberta.ca/~sutton/book/the-book.html). Two editions, apparently the second edition is basically getting with the program on shoving DL into everything.
Schmidhuber often mentions that Gauss was the original shallow learner. This is a technically correct statement (best kind of statement), but you definitely should probably know linear and logistic regression like the back of your hand before starting on DL too much.
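"Back of your hand" for logistic regression means being able to write it down without a library. A from-scratch numpy sketch on a made-up, linearly separable toy set:

```python
import numpy as np

# Four points on a line, labeled by whether x >= 1.5.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
Xb = np.hstack([np.ones((4, 1)), X])     # prepend a bias column

w = np.zeros(2)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))  # sigmoid
    w -= 0.1 * Xb.T @ (p - y) / len(y)   # gradient of the log-loss

preds = (1.0 / (1.0 + np.exp(-(Xb @ w))) > 0.5).astype(int)
print(preds)  # [0 0 1 1]
```

A one-layer neural net with a sigmoid output trained on cross-entropy is literally this, which is why the regression-first ordering makes sense.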
Now, from the link: "Few universities offer an education that is on par with what you can find online these days. The people pioneering the field from industry and academia so openly and competently share their knowledge that the best curriculum is an open source one."
On the one hand, it is true there are a ton of resources where the largest cost is the time it takes to go through the learning process. And I'm awestruck that research papers are so openly available and practitioners are so willing to share their knowledge to others both in posting their books as PDFs/HTML files and creating online courses.
On the other hand, how feasible is it for an individual to work at notable AI companies or on notable projects without a Masters or PhD in a related field? Can that gap be crossed merely by becoming fluent in the various disciplines involved in AI, and then contributing non-formal research/experiments you've conducted on your own?
A great place to study math is www.khanacademy.org; they have courses on calculus, probability/statistics and linear algebra.
The first chapter in the book provides a detailed analysis of how other disciplines contribute to the idea of AI - from Philosophy to Psychology, Biology to Computer Science. Makes for an interesting read, even for a non-tech reader.
If you're also looking for a course that goes alongside the book, I highly recommend UC Berkeley's CS188 (you can find it at http://ai.berkeley.edu).
The lecturer Pieter Abbeel does such a good job explaining stuff and the programming exercises are really neat.
This alongside Andrew Ng's Machine Learning course was my first exposure to the field. https://www.coursera.org/learn/machine-learning
I can also recommend Sebastian Thrun's Artificial Intelligence for Robotics course: https://www.udacity.com/course/artificial-intelligence-for-r...
* http://cs231n.stanford.edu/ (the course notes are excellent)
Use Tensorflow to train a few small neural nets. Move on to CNNs and RNNs. Make sure you actually do this. By this point you'll have read a lot, and retain none of it if you don't put it to use. Look at reinforcement learning. Use the book by Sutton and Barto, the new edition: https://webdocs.cs.ualberta.ca/~sutton/book/the-book-2nd.htm... Read the first 4-5 chapters, then go online and read about Deep Q learning, policy gradients, DDPG, etc. Then try to solve some problems on OpenAI Gym.
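Before Deep Q learning, it's worth writing tabular Q-learning once by hand; the deep variants just swap the table for a network. A sketch on a made-up 5-state chain (environment and hyperparameters are mine, not from the book: action 1 moves right, action 0 moves left, reaching state 4 pays +1 and ends the episode):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

for _ in range(500):
    s = 0
    while s != 4:
        a = int(rng.integers(n_actions))              # random behavior policy (Q-learning is off-policy)
        s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == 4 else 0.0
        target = r if s2 == 4 else r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])         # TD update toward the bootstrapped target
        s = s2

# The greedy policy should be "always go right".
print([int(Q[s].argmax()) for s in range(4)])  # [1, 1, 1, 1]
```

DQN, policy gradients, etc. make more sense once you've seen this update converge on something you can print in full.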
Once you have an idea of the kinds of problems you can solve, and have a couple you're interested in, go back and learn the foundational math, and start reading research papers.
In general, start with modern books that mention deep learning. With older books or high-level-overview books, you'll get frustrated when you see something cool on /r/machinelearning and can't find any mention of it in the book.
Cycorp still exists, dating from 1984. However, D. Lenat's approach to AI via ontology engineering has been basically infertile since about the early '90s.
Feigenbaum's expert systems work was basically a bust; it led to the Japanese eventually throwing that stuff away. People spent an incredible amount of effort and time systematizing expert knowledge and building expert systems, and it was not a happy time. Much of that knowledge went into probabilistic forms, culminating in the Bayes net. The most famous application of the Bayes net: Clippy (there are plenty of more successful applications, but still...).
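The shift from hand-coded rules to probabilistic forms is easy to make concrete. A minimal Bayes net sketch with made-up numbers (Rain -> WetGrass), queried by direct application of Bayes' rule rather than an if-then rule:

```python
# Two-node Bayes net: Rain -> WetGrass, all probabilities invented.
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}

# P(rain | wet) = P(wet | rain) P(rain) / P(wet)
p_wet = (p_wet_given_rain[True] * p_rain
         + p_wet_given_rain[False] * (1 - p_rain))
p_rain_given_wet = p_wet_given_rain[True] * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.692
```

Where an expert system would need a brittle rule like "if grass is wet then it rained", the net returns a graded belief that updates cleanly as evidence changes.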
Shortly after the Dartmouth conference in the '50s, it was believed that computer vision could be solved by a summer project. That didn't happen.
Minsky and Papert published a criticism of single-layer perceptrons in '69, where they proved that they could only produce linear discriminators and therefore couldn't even solve the XOR problem, let alone anything practically harder. The conclusion the field drew from it was wrong, though, given that what we call neural networks today are multi-layer perceptrons.
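The XOR limitation is small enough to see directly. This is not Minsky and Papert's proof, just a brute-force illustration over a grid of weights I picked: no linear threshold unit gets more than 3 of the 4 XOR points right.

```python
import numpy as np
from itertools import product

# XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Try every linear classifier sign(w1*x1 + w2*x2 + b > 0) on a coarse grid.
best = 0
for w1, w2, b in product(np.linspace(-2, 2, 21), repeat=3):
    preds = (X @ np.array([w1, w2]) + b > 0).astype(int)
    best = max(best, int((preds == y).sum()))
print(best)  # 3 -- never all 4
```

One hidden layer of two units fixes it, which is exactly the single-layer vs multi-layer distinction that mattered.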
Simon and Newell built their model and thought that models like theirs, built on production rules, would show how human thought could be systematized. That didn't happen, although they wrote some cool papers.
People saw ELIZA and SHRDLU and thought that good NLP was coming in only a decade or so.... in the 60's.
The ALPAC report on machine translation. "The spirit was willing, but the flesh was weak" came back as "The vodka was good, but the meat was rotten." (that last one's a bit apocryphal, but still)
There was a huge and abiding torrent of neural net work on evolving topologies in the late '90s. I see very little of it in any way, shape or form in industry or academia today, because it's a lot of computation for basically no gain.
In 2006 people thought that layerwise pretraining of neural nets was the way to go, before realizing that good initializations, normalization, and better activations were the better way.
A disgusting amount of why Watson won Jeopardy was because it could buzz faster than Jennings and Rutter. Ain't that nice?
Lisp machines (ok that's symbolic again).
The skill-cap in Jeopardy is sort of low. The top players can all answer almost all questions, so victory comes down to the buzzer even between Jennings and Rutter.
The important thing is that Watson hit that skill cap. From there it wins on tie-breaks every time. I think we'll see this dynamic in many human/AI contests. If both competitors' skills are at the saturation point, the contest is decided either by luck, or some strategically unsatisfying thing like diligence or mechanics. I don't see why humans will ever have an advantage at this.
Tldr: it's good conditioner but you can do better ab initio
Gedankenexperiment as a methodology has had considerable success in physics and miserable, complete, ridiculous, awful failure in psychology and cognitive science.
Much of AI philosophy is done by people who are decidedly non-practitioners. John Searle can't code. Nick Bostrom came to coding extremely late in life. Geoffrey Hinton and the other ex-PDP folks did write some philosophy papers, though, which are of interest if you like the philosophy.
"There's no sense in being precise when you don't even know what you're talking about." - von Neumann
This is a good getting started book for TensorFlow:
Gives a great run through of the history of AI research. Understanding the approaches that have been tried before gives you a sense of why the state of the field is what it is today. It is worth bearing in mind that AI research expands far beyond computer science into psychology, philosophy, linguistics etc.
You'd think there would have been 100 "How To Make a Computer Chess Engine in BASIC" books back in the 80s, and continuing to the present day, but I can't find them. Lots of papers and online tutorials, and some stuff in textbooks, but no accessible hands-on books.
The canonical text is by Daphne Koller; a course I took used Martin Wainwright's monograph though - the book is briefer and dives into the math quicker.
It also depends on what you're going to focus on. Are you looking to implement a game-playing agent? An object recognition algorithm? More of a logic focus?
If you just want Deep Learning and statistical methods, then Bishop's Pattern Recognition and Machine Learning is a good start. Otherwise, Russell and Norvig's Artificial Intelligence or Patrick Winston's similarly titled book are great starting points. For more big-picture stuff,
Marvin Minsky's Society of Mind is great, and Hofstader's Gödel, Escher, Bach is a classic too. Both are a lot less practical though, which seems to be what you're looking for.
Are you simply curious or is there something more pressing?
For example, do you want some light reading or have you perhaps been asked to implement machine learning for your company?
Most answers here assume you want to jump into the ML swamp and start analyzing your trove of "big data" ASAP. But is that so?
I'd also recommend:
Gödel, Escher, Bach: an Eternal Golden Braid
by Douglas Hofstadter. Might not be exactly what you're looking for (it's all over the place, touching music theory, math, art, philosophy...), but it's fun and enjoyable to read. Also very dense.
But it could be a bit too theoretical - it provides a foundational mathematical framework and got me thinking about problems in a better way.
One of the best books on AI and Programming ever.
- The Emotion Machine by Minsky
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom