Further, PGMs have an advantage over deep networks in that they are highly explainable: you can go back and inspect the chain of reasoning. For some problem domains, that matters more than prediction accuracy.
If you are building a robot, even if the mechanics are not strictly Newtonian, modeling the system mechanically can get you much closer to the underlying manifold, shrink the required training set, and improve generalization. So I don't think the old way of doing things should be thrown out. It got pretty near the right answers, and we should use the newer methods to fill in the gap between theory and practice.
E.g. pre-train a DBN on data generated by an analytical model and later fine-tune it on real data.
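A minimal sketch of that simulate-then-fine-tune idea, using a plain MLP regressor as a stand-in for a DBN and an idealized damped pendulum as the analytical model (the model, constants, and "real" data below are all invented for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Analytical (idealized) model: angular acceleration of a damped pendulum.
def analytic_accel(theta, omega, g=9.81, length=1.0, damping=0.05):
    return -(g / length) * np.sin(theta) - damping * omega

# 1) Pre-train on cheap synthetic data drawn from the analytical model.
theta = rng.uniform(-np.pi, np.pi, 5000)
omega = rng.uniform(-5.0, 5.0, 5000)
X_sim = np.column_stack([theta, omega])
y_sim = analytic_accel(theta, omega)

net = MLPRegressor(hidden_layer_sizes=(64, 64), warm_start=True,
                   max_iter=500, random_state=0)
net.fit(X_sim, y_sim)

# 2) Fine-tune on a small batch of "real" measurements (here simulated as the
#    analytical model plus an unmodeled friction term and sensor noise).
theta_r = rng.uniform(-np.pi, np.pi, 200)
omega_r = rng.uniform(-5.0, 5.0, 200)
X_real = np.column_stack([theta_r, omega_r])
y_real = (analytic_accel(theta_r, omega_r)
          - 0.2 * np.sign(omega_r)
          + rng.normal(0, 0.05, 200))

net.set_params(max_iter=200)   # warm_start=True keeps the pre-trained weights
net.fit(X_real, y_real)
```

The point is only that the analytical model supplies a prior over the function's shape, so the small real dataset only has to correct the mismatch rather than learn the dynamics from scratch.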
1. Where you learnt PGMs
2. How you made it part of your 'personal everyday' toolkit
As far as everyday reasoning goes, it made me somewhat more skeptical of long chains of the "A --> B, !B, therefore !A" type of thing. It's easy enough to model that kind of logic as a special case of a PGM. And the causal stuff is extremely useful for making me skeptical of arguments of the sort "if we did X, then Y would happen," and for thinking about how and when correlation actually is causation. I don't have any pat examples though; it's just something that infuses my thinking, much the way learning about biological evolution did.
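To make that first point concrete, here is a tiny two-node example (all the numbers are invented): strict modus tollens is the limiting case where P(B|A) = 1, and with anything softer the observation !B merely lowers P(A) instead of refuting it.

```python
# Soft modus tollens in a two-node Bayesian network: A -> B.
p_a = 0.5            # prior P(A)
p_b_given_a = 0.9    # a "leaky" implication: A usually, but not always, produces B
p_b_given_not_a = 0.3

# Observe !B and apply Bayes' rule to get P(A | !B).
p_not_b = (1 - p_b_given_a) * p_a + (1 - p_b_given_not_a) * (1 - p_a)
p_a_given_not_b = (1 - p_b_given_a) * p_a / p_not_b

print(p_a_given_not_b)  # ~0.125: A is now less likely, but not ruled out
```

Chain several such soft links together and the conclusion at the end can be far less certain than the crisp logical argument makes it sound.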
Hinton's Neural Network class was very challenging for me too, mostly because many of the concepts were unfamiliar to me. But again, I could re-watch whatever I needed to in order to get it.
Too bad they haven't offered it since 2013. I didn't finish it back then for personal reasons :c
1) It is fully web-based, and thus it is technically "on" the web, and accessible by everyone
2) It does not deal with any "modern web appy"-type meta-frameworks, and thus it is not, so to speak, "of" [what most of today's web developers would call] the web, and it therefore has no dependency issues to hold back its development
3) It is essentially a working Unix-like development environment, complete with a standard(ish) shell.
If you use Chrome, you can find it at https://www.urdesk.net
To go directly to the AI, just follow this link: https://www.urdesk.net/desk?intro=bertie
As far as the issue of data vs. rules is concerned, I don't know what it would even mean for a system to be purely one or the other.
I love "AI A Modern Approach" but the chapter on PGMs wasn't the best in my opinion. I think the dentist example just bothered me/it wasn't all that obvious how useful they really are. Thankfully the book is amazing and they provide plenty of references to move on :)
That being said I think PGMs are immensely powerful and my gut says this approach is the one that I like the best.
In the vein of the papers "From machine learning to machine reasoning" and "Text understanding from scratch" I expect a "First-order logic understanding from scratch" to follow naturally.
Jaynes' fundamental metaphor through the book is building a "reasoning robot" so anyone interested in the intersection of logic, probability and AI will get many interesting insights from this book.
 PDF of the preprint: http://bayes.wustl.edu/etj/prob/book.pdf
My personal prediction is that once we get good at learning whole probabilistic programs from data, rather than just inferring their free numerical parameters, this is going to become the dominant mode of machine reasoning.
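For contrast, here is roughly what the "inferring free numerical parameters" mode looks like today: the program's structure (a coin with some unknown bias) is written by hand, and only the parameter is inferred from data, here via a crude grid approximation to the posterior. Learning the program itself, rather than this one number, is the part we are not yet good at. (Everything below is an illustrative toy, not anyone's actual system.)

```python
import numpy as np

# Hand-written probabilistic program:
#   theta ~ Uniform(0, 1);  each flip ~ Bernoulli(theta)
# Only theta, a free numerical parameter, is learned from the data.
flips = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])   # observed data

grid = np.linspace(0.001, 0.999, 999)               # candidate values of theta
log_lik = (flips.sum() * np.log(grid)
           + (len(flips) - flips.sum()) * np.log(1 - grid))
posterior = np.exp(log_lik - log_lik.max())         # flat prior, so posterior ∝ likelihood
posterior /= posterior.sum()

print("posterior mean of theta:", (grid * posterior).sum())   # ~0.67 for this data
```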
There is a tool called Eureqa which was specifically designed to produce understandable models, in the form of mathematical equations. A biologist used it on some data from an experiment of his, and it produced a very simple equation that fit the data perfectly. But he couldn't publish it, because he couldn't understand or explain why the equation worked or what it meant.
That is one of the advantages of a PGM: it tells you why it thinks something. Combining this with domain experts is the killer advantage of PGMs. For the soundbite: PGMs help the domain expert figure out where to go next.
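A toy illustration of that "where to go next" idea (the structure and all numbers are made up): in a small disease-and-two-tests network, after seeing one test result you can both read off which factor drove the posterior and ask how much each still-unobserved variable could move it.

```python
# Toy diagnosis net: D -> T1, D -> T2, with T1 and T2 conditionally independent given D.
p_d = 0.05                            # prior P(disease)
p_t1_pos = {True: 0.9, False: 0.1}    # P(T1 = + | D) and P(T1 = + | not D)
p_t2_pos = {True: 0.8, False: 0.2}    # P(T2 = + | D) and P(T2 = + | not D)

def posterior_d(prior, lik_d, lik_not_d):
    """Bayes' rule for the binary disease node."""
    num = lik_d * prior
    return num / (num + lik_not_d * (1 - prior))

# "Why it thinks something": T1 = + moves P(D) through an explicit, inspectable factor.
p_d_given_t1 = posterior_d(p_d, p_t1_pos[True], p_t1_pos[False])
print("P(D | T1=+):", round(p_d_given_t1, 3))          # ~0.321, driven by the 0.9-vs-0.1 factor

# "Where to go next": how far would each possible T2 outcome move the posterior?
for outcome, lik_d, lik_not_d in [("+", p_t2_pos[True], p_t2_pos[False]),
                                  ("-", 1 - p_t2_pos[True], 1 - p_t2_pos[False])]:
    p = posterior_d(p_d_given_t1, lik_d, lik_not_d)
    print(f"P(D | T1=+, T2={outcome}):", round(p, 3))   # ~0.65 or ~0.11
```

A domain expert can sanity-check each conditional table directly and see at a glance which additional observation is worth paying for, which is exactly the kind of conversation a black-box model doesn't support.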
But there are many logics that can be used to reason about stochastic and probabilistic dynamics.