Hacker News
“The Book of Why” by Pearl and Mackenzie (andrewgelman.com)
154 points by nkurz 10 days ago | 71 comments
My comment on the blog was (at least initially) eaten by the spam filter, so I reproduce it here.

---

JP: There is no way to answer causal questions without snapping out of statistical vocabulary.

AG: We disagree. That’s fine. Science is full of disagreements, and there’s lots of room for progress using different methods.

I'm only an amateur, but from the outside it sure doesn't feel "fine" for the two of you to disagree on what seems like such a fundamental issue. Instead, this seems like a case where two extremely smart individuals should be able to reach a common understanding rather than accept disagreement as the final outcome.

JP: I have tried to demonstrate it to you in the past several years, but was not able to get you to solve ONE toy problem from beginning to end.

AG: For me and many others, one can indeed answer causal questions within statistical vocabulary.

Pearl obviously disagrees that standard statistical vocabulary is sufficient to answer all simple causal questions. You seem to think he's wrong. I think you'd be doing a great service to encourage him to formulate such a "toy" question that he thinks is unanswerable without resorting to the do-calculus, which you then try to answer to the audience's satisfaction using more standard techniques. Maybe the two of you turn out to be in agreement but using different terminology, maybe you are right that his tools are optional, or maybe he's right that they are essential. Any of these outcomes would feel much more satisfying and productive than agreeing to disagree. Please consider offering him a platform with which to make his case.


I think you are overstating Judea Pearl's argument. He doesn't claim it is impossible to answer causal questions (on observational data) without resorting to do-calculus. What he does claim is that to answer causal (interventional or counterfactual) questions you need to define a model and reason on that model in a way that transforms the question into conditional probabilities. Do-calculus provides a framework to help you perform that reasoning, but certainly people have been using things like the backdoor criterion before the invention of do-calculus to answer interventional questions.

He might claim that any other method can be reduced to do-calculus; I'm not sure. I do believe that at the core of his argument is the need for an explicit model.


That’s my core understanding too: the model is essential, and that not only rubs statisticians the wrong way, the current luminaries in deep learning get visibly irritated by it too. The irony is that JP was very successful within statistics, having invented Bayesian networks, and has now moved on.

Here he asks a very simple question and look at the body language from the panel: https://www.youtube.com/watch?v=mFYM9j8bGtg&t=50m47s


Yes, that's a better and more accurate phrasing than I used. The issue is that the correctness of the answer depends on whether the causal structure of the model matches the underlying generative process. Graphs and do-calculus aren't required for this, but they can help to make things clearer. In a later comment on the blog post, Pearl links to a paper that describes one of his "toy" examples: http://ftp.cs.ucla.edu/pub/stat_ser/r400-reprint.pdf. The example begins in Section 3, on page 584. I'm sad to say that I'm sufficiently amateur at this that I found even the "toy" example to be at the limits of my reasoning ability, but I thought it still illustrated the argument Pearl is making.

I absolutely agree that if your model is wrong, any causal inferences you draw will be wrong. However, his framework does provide mechanisms for determining whether your data are inconsistent with your model (i.e., certain independences should or should not exist). So hopefully, if your model is wrong, your data will tell you so and you can change your model.

"you'd be doing a great service to encourage him to formulate such a "toy" question that he thinks is unanswerable without resorting to the do-calculus, which you then try to answer to the audiences' satisfaction using more standard techniques."

This would be very helpful indeed.

I think a lot of the issue comes down to how idiosyncratic Pearl's work is. It will have to accumulate quite a lot of victories before enough people will bother with it.

Until then I suspect Causality will remain something of a statistical Finnegans Wake.

I'll probably read The Book of Why to try and get a better handle on motivation for the technical material.


Thankfully, "Causal Inference in Statistics: A Primer" is sort of like the CliffsNotes to Finnegans Wake ;)

Thanks, may check it out. This stuff has long been on my to-do list.

I'm not an expert on this stuff at all, but are the toy problems basically formulations of Simpson's paradox? My sense was that if you have a set of data where each of two slices suggests the same conclusion, but the data looked at as a whole suggest a different conclusion, how do you know which approach is correct? And it seems the only way to know is to use your judgment about the causal factors that generated the data, and the do-calculus helps make that explicit. For instance, when thinking about gender ratios and grad programs, it's more plausible that the choice of what grad program to apply to is affected by what gender you are than that the choice of your grad program affects your gender.

Well, I just reviewed Section 3 (page 584) of the PDF that was shared, and it seems different, so I probably have a long way to go before I understand this stuff.
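For concreteness, the classic kidney-stone treatment numbers (a textbook Simpson's-paradox dataset, with the same shape as the grad-program example) can be worked through in a few lines:

```python
# Classic Simpson's-paradox numbers (kidney-stone treatment study):
# (recovered, total) for each treatment arm within each severity stratum.
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

rate = lambda recovered, total: recovered / total

# Within EVERY stratum, treatment A has the higher recovery rate...
for stratum, arms in data.items():
    print(stratum, rate(*arms["A"]) > rate(*arms["B"]))  # True, True

# ...yet pooled over strata, treatment B looks better.
pool = lambda arm: [sum(v) for v in zip(*(a[arm] for a in data.values()))]
print("pooled", rate(*pool("A")) > rate(*pool("B")))  # False
```

Which comparison is "correct" depends on the causal story: here severity affects both the choice of treatment and the outcome, so the stratified comparison is the right one, and no amount of staring at the table alone can tell you that.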


Not all "paradoxes" are similar to Simpson's paradox. See this PDF for more such "paradoxes":

https://ftp.cs.ucla.edu/pub/stat_ser/r409-corrected-reprint....

(I quote the word paradox, because they aren't really paradoxes once we have an understanding of what's going on.)


> it sure doesn't feel "fine" for the two of you to disagree on what seems like such a fundamental issue. Instead, this seems like a case where two extremely smart individuals should be able to reach an common understanding instead of accepting disagreement as a final outcome.

Once you've reached the limits of what you can know (after examining all the data, working through all the arguments, etc.), this is pretty much the only possible outcome. What works in one situation might not work in others. One person's interpretation of our limited knowledge might very well appear implausible to someone else.

I'll admit to being turned off by Pearl's insistence on working with "toy problems". That might be fine for a philosophical discussion, but it's not of much practical value. I want Pearl to write a few empirical papers attacking important issues, then let's have a discussion about putting ideas into practice.


I have much more sympathy for Pearl. He's constructed "toy" problems that yield different answers depending on the hidden causal structure chosen. He then shows that unless one takes account of the causal structure, it's impossible to correctly answer the problem. The logical conclusion is that any technique that doesn't use the causal structure as an explicit assumption is instead using it as a hidden implicit assumption. His hope is that anyone who works through his exercise will come to the same conclusion. Starting with real-world problems is tricky, because one doesn't know the correct answer in advance, which makes it much harder to show the reliance on the assumptions.

Consider a parallel with computer programming. A user complains that they fear a program is giving them the wrong answer on a complex real-world problem. They report this and get back the unhelpful answer "Works for me, will not fix". Unable to shake the feeling that the answer is unreliable, they reduce the problem down to a proof of concept that serves as a simple self-contained test case. Two different inputs produce the same answer, but only one of them can be right! But they are unable to convince the maintainer to even look at the test case, because the maintainer now says "I need to focus on the real world, and don't want to waste my time on toy examples".

It's a discouraging position to find oneself in.


Oh, I'm completely on board with being explicit about any assumptions you're making, and for thinking deeply about the causal relations, and for thinking about the reliability of the analysis. I've been critical of economists for not thinking things through. And I'm definitely on board with teaching using toy problems.

The problem is that, as far as I can tell, Pearl doesn't go beyond making points with toy problems. He hasn't done much empirical work (has he published a single empirical paper?) or even read much of the empirical literature he's criticizing as worthless. Ultimately, the question is whether policy and other decisions will be better using a particular framework. The fact that Pearl writes with the aggressive confidence of a Hacker News commenter does not mean he's right.


But from a mathematical point of view this insistence of focusing on empirical work is a little bit disheartening isn’t it?

And frankly, Judea Pearl is hardly a random internet tinfoil-hat guy, nor is he asking people to invest a massive effort in checking a long piece of work (as we often see with random NP-completeness “proofs”, or as was discussed with the time investment needed to check Mochizuki’s ABC proof). No, he just asks Gelman to apply his own familiar techniques to a toy problem. That does not sound unreasonable at all.


> But from a mathematical point of view this insistence of focusing on empirical work is a little bit disheartening isn’t it?

Why? The only thing that matters is the quality of the empirical work. You can get caught up in philosophical debates about the best way to do research. If it has no impact on empirical work, it's useless.

That's not to say Pearl's arguments are wrong or that his work is useless. The problem is the incompleteness of his arguments. You can't arrogantly dismiss empirical work just because it's imperfect. There's no reason a priori to expect that Pearl's approach will lead to better decisions.

There are many self-proclaimed experts who can point out the flaws in programming language designs, but that doesn't mean they can design a better language, and it doesn't mean existing programming languages are useless. Pearl's approach is not some kind of magic pixie dust that suddenly guarantees your empirical work is more trustworthy. It's unfortunate that Pearl thinks it is, and it prevents him from having a reasonable conversation about the topic.


Well, from my point of view, when I read the link I see (with a little bit of caricature) two academics: one of them points at an issue with the standard statistical approach and tools and provides a reproducible way to showcase it, and the other one answers “not interesting; I’ll go do real work now”. And I found that avoidance of a discussion of a shared example disheartening.

And Pearl isn’t just complaining: he provided an alternative. So it seems that one side identified a problem, curated examples, and delivered a solution, and now the other side doesn’t care about the problem; and causality is not a trivial problem. As an academic discussion, not a business meeting, that’s subpar to me.

That’s only my view from the outside; it may well be the case that it is wrong and I just need to review Pearl in more depth to see that he doesn’t bring anything useful.


They've been debating this for many years, and Gelman is a coauthor and student of Rubin, who has been debating with Pearl for decades. Gelman is saying he doesn't want to debate the same issues yet again.

The primary message of Pearl is "all other methods are trash and all empirical work done using those methods is trash." My interpretation of the post is that Gelman is arguing other methods are not trash.

I probably agree more with Pearl than Gelman on the details, but Pearl's approach is just not appropriate for an academic setting.


What they are really fighting about is who gets to reference colloquial notions of causality when discussing their work. I don't think it's fair to say that Pearl treats the rest of stats as trash, but he is saying that they are misleading folks by describing their inference as evidence of cause and effect, which is about as serious an accusation as it gets for most academics, and especially for the Rubin crew, who have often policed the integrity of other inference regimes (like less rigorous ML).

Debating who gets to use which words is a great way to make sure your debate only matters to other academics. I'd love to see the causality camp make their point by unlocking some great new applied results instead!


Refusing to provide a demonstration of the statistical technique on a toy example constructed for pedagogic purposes hardly constitutes a "debate". I would file that under: using a lot of words to avoid answering a direct and fundamental question.

http://causality.cs.ucla.edu/blog/index.php/2016/02/01/works...

Is this sufficiently applied for you? Pearl may not apply his work on causality often, but there are a lot of people who are on the applied side who do.


Why does JP have to be the one to write the empirical papers? Why is that a prerequisite here?

Judea Pearl is attempting to develop (or has developed) and evangelize the approach for others to use it on empirical problems.

So it seems the response is to be in favor of anyone (not just JP) to use it for empirical work.

Do we expect every developer of a theory to put it into practice before it is found convincing? Shouldn't the reasoned explanation of a theory be sufficient for someone else to understand and attempt it?


Some of these "toy" problems have had pretty major practical effects. For example, reconciling the disagreement between cohort studies and RCTs, in many medical contexts, comes down to remarkably simple causal diagrams.

I don't disagree. I'm not debating the value of Pearl's approach. I'm saying other methods can also be helpful. Pearl does not accept that.

What methods?

His main point is that "To properly define causal problems, let alone solve them, requires a vocabulary that resides outside the language of probability theory. This means that all the smart and brilliant statisticians who used joint density functions, correlation analysis, contingency tables, ANOVA, Entropy, Risk Ratios, etc., etc., and did not enrich them with either diagrams or counterfactual symbols have been laboring in vain — orthogonally to the question — you can’t answer a question if you have no words to ask it." http://causality.cs.ucla.edu/blog/index.php/2018/06/11/stati...


I think Pearl is more a theoretician. I believe some of his descendants have done some work that might qualify.

But, I agree that it would be nice if Pearl used real world problems using his methodology.


To counter some of the comments here, I absolutely loved the book and went on to recommend it to all my scientist friends. While it may get a bit technical for the lay audience, it should be within reach for a typical scientist or IT person. I wish our society had a better understanding of causality—that would raise the level of many important discussions.

Being a long-time fan of Gelman (and having studied his Bayesian Data Analysis textbook), I am baffled and disappointed that he doesn't seem to understand Pearl's ideas. In his linked 2009 post[1], he wrote: "I’ve never been able to understand Pearl’s notation: notions such as a “collider of an M-structure” remain completely opaque to me." I wonder whether, after reading this book accessible even to non-statisticians, he still doesn't understand it.

[1]: https://statmodeling.stat.columbia.edu/2009/07/05/disputes_a...


To counter some of the comments here, I absolutely loved the book and went on to recommend it to all my scientist friends.

Likewise (well, other than not really having any "scientist friends"). I loved this book, think Pearl has some amazingly valuable ideas, and found the book relatively accessible even though I'm not a statistician. I won't claim to have understood every detail on the first reading, but I got enough out of it to feel like I'll understand it all after a couple of follow-on readings, plus consulting Pearl's other books.

I wish our society had a better understanding of causality—that would raise the level of many important discussions.

Absolutely.


To be fair, Pearl does not speak in the same terms as the statisticians. "Collider" is a weird word, and an "M-structure" is even weirder. Graphical models suffer similarly.

For anyone interested in this book: I'm going through it now, and it addresses an important question, namely how to identify causality.

We've all become familiar with the refrain 'correlation does not imply causation'. This book attempts to answer: what DOES imply causation? He introduces a framework for how one can answer this question. It's not very mathematically rigorous, but following the framework does appear to let one discover non-intuitive causal conclusions.

Understanding causation will have important implications for the advancement of A.I. Finding a correlation with the causes hidden in a black box (current state of deep learning) isn't enough for many disciplines. Doctors for example will likely need to know WHY an algorithm made a decision, instead of simply running correlations and telling the operator that a patient has 80% chance of some diagnosis.


It's possible to infer causation from correlation without experiments if you add some general assumptions.

One trick in causal discovery is additive noise. If X and Y are noisy correlated variables and X is causing Y, the assumption that the noise in X is present in Y but not vice versa may reveal the direction of the causal arrow.

Causal Discovery with Continuous Additive Noise Models http://jmlr.org/papers/volume15/peters14a/peters14a.pdf

Nonlinear causal discovery with additive noise models https://papers.nips.cc/paper/3548-nonlinear-causal-discovery...
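As a rough illustration of the residual-asymmetry idea (a sketch with invented data and a deliberately crude dependence score, not the actual test statistics used in those papers):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Invented ground truth: X causes Y, with uniform (non-Gaussian) noise.
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x + rng.uniform(-0.5, 0.5, n)

def anm_score(cause, effect):
    """Fit effect ~ cause by least squares, then score how much the
    residual spread varies with the regressor.  In the true causal
    direction the residuals are just the independent noise, so the
    score should be near zero."""
    slope = np.cov(cause, effect)[0, 1] / np.var(cause)
    resid = effect - slope * cause
    order = np.argsort(np.abs(cause))
    low, high = resid[order[: n // 4]], resid[order[-(n // 4):]]
    return abs(np.log(np.var(high) / np.var(low)))

forward = anm_score(x, y)   # candidate direction X -> Y (true)
backward = anm_score(y, x)  # candidate direction Y -> X (false)
print(forward < backward)   # the true direction scores lower
```

With Gaussian noise in place of the uniform noise, this asymmetry vanishes, which is exactly the identifiability caveat for the linear case.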

Humans seem to have a causal reasoning ability that is very ad hoc. It works well in practice, but it's not principled. There is not enough time to do experiments to establish facts. "Correlation is causality" seems to be a good heuristic.

I think AI will eventually learn to build causal models in the same way: build quick-and-dirty causal models with unfounded assumptions and see what works; hold multiple effective but conflicting causal theories that apply in different situations, without any consistent overall model.


Another neat example of how assumptions about noise and functional forms can let you do causal inference is an exercise in "Elements of Causal Inference":

Consider a linear model. The true model is Y ~ aX + ϵ, i.e. X causes Y. You want to distinguish this, using observational data, from the case where Y causes X.

If the noise ϵ is Gaussian, there's no way to do this: there are reasonable models going in both directions.

If you instead assume ϵ is uniformly distributed on some interval, then it becomes really obvious which way is correct.

The exercise recommends drawing little pictures with error bars to convince yourself of this, which is worth doing.
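A quick numeric check of the Gaussian case (invented data; the "dependence score" compares residual spread across the regressor's range, a crude stand-in for a real independence test):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Linear model with GAUSSIAN noise: Y = 1.5 X + eps.
x = rng.normal(0.0, 1.0, n)
y = 1.5 * x + rng.normal(0.0, 0.5, n)

def dependence_score(cause, effect):
    """Least-squares residuals, then log-ratio of residual variance for
    large vs small |regressor|; near zero means 'independent-looking'."""
    slope = np.cov(cause, effect)[0, 1] / np.var(cause)
    resid = effect - slope * cause
    order = np.argsort(np.abs(cause))
    low, high = resid[order[: n // 4]], resid[order[-(n // 4):]]
    return abs(np.log(np.var(high) / np.var(low)))

# With jointly Gaussian variables, BOTH regression directions yield
# residuals independent of the regressor, so the data cannot tell the
# two directions apart, matching the exercise's claim.
print(dependence_score(x, y) < 0.15, dependence_score(y, x) < 0.15)
```

Rerunning the same check with uniform noise makes the backward score blow up, which is the little-pictures-with-error-bars intuition in numeric form.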


Well, it's more that the noise may be in the data already, so you use that in lieu of randomization, since the noise is by definition random. You are assuming that the causal factor's noisiness is different from that of the affected variable.

I feel like I don't have the background to fully understand what you're saying. Could you explain this in a more lay way?

> X and Y are noisy correlating variables


On average, events X and Y are positively correlated if they usually occur together even though occasionally they do not. This lack of perfect correlation is due to A) the natural variation of other (less important) causal factors, or B) imprecise measurement of their values. A and B are also known as 'noise'.

All causation implies temporal separation -- causal event X occurs before caused event Y. The trick is to identify which occurred first AND changed the frequency of the second.

An example is the assertion: "The presence of rain causes people to carry an umbrella". Of course, people carry umbrellas even when it doesn't rain, or don't carry umbrellas when it does rain, but on average, on a day when more people carry umbrellas than usual, it's usually a rainy day. The scientific question is: does people carrying umbrellas cause rain? Or does rain cause people to carry umbrellas?

If the natural variation of rain occurs in some detectable manner (e.g. light rain vs heavy rain) and you see direct variation in how people carry umbrellas (less rain, thus fewer umbrellas), then it's more likely that rain causes umbrellas, because rain variation correlates positively with umbrella variation. This is effectively confirmed if, on several days, you see more people carrying umbrellas than usual even though it's NOT raining harder: then the carrying of umbrellas probably does not cause it to rain. (Maybe umbrellas were being given away for free that day, or the weather forecast threatened more rain than actually arrived, causing more umbrellas to be carried.)

Thus when rain amount rises or falls (due to natural variation or noise), you should see the amount of umbrella carrying follow accordingly. However if the reverse relationship occurs less often or not at all, this implies that rain does indeed cause umbrellas, and not the reverse.


Wouldn't that line of argument lead to believing that a drop in barometer readings causes storms?

If you disregard noise/variation as an indicator of which event is cause or effect, then neither of the events you propose is clearly the cause of the other. Because variation in barometer pressure is likely to be perfectly correlated with variation in storms, there's no noise/variation in either event that isn't also present in the other, so neither emerges as more likely to be the cause of the other.

This strategy of identifying the causal event works only for pairs of positively correlated events whose variations/noise sometimes do not occur together, like an increase in umbrellas without an increase in rain.

Can barometric pressure rise or fall due to causes other than storms? Can storms arise without a drop in pressure? I'd say maybe yes to the first (an elevation change of the meter, or a storm front that passes you very quickly but whose clouds don't pass directly overhead, maybe). But I'd say a definite no to the second. If you are hit with rain from a storm, your barometric pressure will drop. Thus storms cause pressure to drop, but a pressure drop does not cause storms.


As a non-statistician with a lot of interest in statistics, I found the Book of Why frustrating. Modeling causation seems like an undeniably important step towards understanding the world better. But the biggest question I had was: how can you actually verify that your causal model is true? This is not clearly explained, or wasn't before I gave up on the book. Models are only useful if we can have some confidence that they correspond to reality.

I was especially interested in the answer to this question, because my only exposure to the language of "causal chains" has been on Twitter, where they seemed to serve a distinctly ideological purpose. One (non-mathematical) person says "I think X is caused by Y", and then a statistician chimes in and says "you're missing other parts of the causal chain, the real causes are Z and Q." Where of course, Z and Q are things that one political perspective prefers to blame, and Y are things blamed by the other side.

For example: https://twitter.com/gztstatistics/status/1000914269188296709. Here's a great comment from today about the difficulty of establishing causality in practice: https://news.ycombinator.com/item?id=18886275

I want to know how causal chains can be actually proven or falsified, to be convinced that this isn't just highbrow ideological woo.


> how can you actually verify that your causal model is true?

This is addressed in the introduction. See box 4 in the flow-chart (“testable implications”).

“The listening pattern prescribed by the paths of the causal model usually results in observable patterns or dependencies in the data. [...] If the data contradict this implication, then we need to revise our model.”
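For example, a testable implication of a hypothetical chain D → M → L (my own toy simulation, linear Gaussian for simplicity) can be checked directly against data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50000

# Toy chain model D -> M -> L.  Its testable implication: D and L are
# dependent marginally, but independent once we condition on M.
D = rng.normal(size=n)
M = D + rng.normal(size=n)
L = M + rng.normal(size=n)

def partial_corr(a, b, given):
    """Correlation of a and b after linearly regressing out `given`."""
    ra = a - (np.cov(a, given)[0, 1] / np.var(given)) * given
    rb = b - (np.cov(b, given)[0, 1] / np.var(given)) * given
    return np.corrcoef(ra, rb)[0, 1]

print(abs(np.corrcoef(D, L)[0, 1]) > 0.3)  # marginally dependent
print(abs(partial_corr(D, L, M)) < 0.05)   # independent given M
```

If the data failed a check like this, you would revise the model; note that such independence tests constrain the graph's structure but do not by themselves pin down arrow directions.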


The part you removed with ellipses undermines this point:

"These patterns are called "testable implications" because they can be used for testing the model. These are statements like "There is no path connecting D and L," which translates to a statistical statement, "D and L are independent," that is, finding D does not change the likelihood of L."

This says nothing about testing causality, or the direction of causality. If two things are uncorrelated, then there is probably not a causal relationship between them, granted. But this is not a very novel or useful observation.

However if D and L are correlated, the test above says nothing about how to validate whether D caused L, L caused D, both were caused by a third thing (or set of things), or the correlation is just coincidence.

For a book whose entire thesis is "causality is rigorous," I expect a much more rigorous treatment of how to validate causality using more than mere correlation.


From your previous comment I understand you didn’t read the whole book so I don’t know if you got to chapter 4, section ”the skillful interrogation of nature: why RCTs work.” In short, you can use interventions (i.e. a properly designed experiment) to verify that the “cause” does indeed produce the “effect”.

RCTs indeed seem like a good way of establishing causality. But RCTs are well established, so what is "The New Science of Cause and Effect" (as claimed by Pearl and Mackenzie) bringing to the table?

Intuitively I might guess that RCTs are the only way of rigorously establishing cause and effect. I would have been very interested if the book had confirmed or denied this intuitive conjecture of mine.

Another comment in this thread claims that you can infer causality without intervention: https://news.ycombinator.com/item?id=18884104 Perhaps this is true?

This is the kind of discussion that I wish the book had focused on. I want to probe at the line between belief and established fact, and understand what we can rigorously say given the evidence we have. I have a strong aversion to reading extended flowery descriptions of big ideas if the speaker has not rigorously shown that the model maps to the real world. Otherwise it's like listening to just-so stories.


This is the kind of discussion the book focuses on, you should try to read it. RCTs are not the only way to answer these questions and observational data can be used in some cases (but note that the validity of the inference is conditional on the model being correctly specified).

Maybe I should put some more effort into the book. But statements such as this make me extremely wary:

> but note that the validity of the inference is conditional on the model being correctly specified

This strikes me as begging the question. The model is exactly what I don't trust unless it is rigorously justified, so anything conditional on the model being correctly specified I also don't trust.

It all feels like a house of cards.


The sense I've gotten so far is that given a causal model with non-controversial causal assumptions, you can do algebra in some cases to come up with conclusions that otherwise (in the absence of do-calculus) would have required experimentation. And in other cases, you're still stuck.

What kind of answer do you expect?

You can get no causality from data alone. You always need additional assumptions.

If you can do an intervention and manipulate a variable as you wish, the assumption of its independence is warranted. A correlation with the outcome indicates a causal path (or you're being [un]lucky). Even in that case a more complex causal model is useful to get better estimates, distinguish direct and mediated effects, etc.

If you have observational data only there is not much that can be done without a causal model. Given a model, the causal effect of one variable on another may be estimated in some cases. But if your model is wrong you may conclude that there is an effect when none exists or deny the existence of a real effect.
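As a sketch of that "given a model, estimate the effect" step (all variable names and coefficients invented; Z confounds X and Y, and the true effect of X on Y is 1.0):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed confounded model: Z -> X, Z -> Y, and X -> Y with effect 1.0.
Z = rng.normal(size=n)
X = 2.0 * Z + rng.normal(size=n)
Y = 1.0 * X + 3.0 * Z + rng.normal(size=n)

# Naive regression of Y on X is biased by the open backdoor path via Z.
naive = np.cov(X, Y)[0, 1] / np.var(X)

# Adjusting for Z (the backdoor criterion applied to this graph)
# closes that path and recovers the causal effect.
design = np.column_stack([X, Z, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
adjusted = coef[0]

print(f"naive ~= {naive:.2f}, adjusted ~= {adjusted:.2f}")  # ~2.2 vs ~1.0
```

The adjusted estimate is only right because the assumed graph is: if Z were, say, a consequence rather than a cause of X, the same adjustment would introduce bias instead of removing it, which is exactly the "conditional on the model being correctly specified" caveat.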


I think I expected the book to better live up to its billing. Here is an excerpt from the in-sleeve summary:

"Beginning with simple scenarios, like whether it was rain or a lawn sprinkler that got a sidewalk wet, the authors show how causal thinking solves some of today's hardest problems, including: whether a drug cured an illness; whether an employer has discriminated against some applicants; and whether we can blame a heat wave on global warming."

In all of these "hard problems", it is the model itself that is the most contentious piece, and the most ideological. Some people have a mental model where CO2 produced by humans is causing climate change (which I agree with), and others believe that the changes can be explained by natural fluctuation. These beliefs are undoubtedly influenced by a person's biases. It's not very useful to say "once you have accepted a causal model, you can draw lots of useful inferences." Because the main point of contention is over what is causing what.

I found this statement of yours honestly more useful than anything I read in the book so far: "You can get no causality from data alone. You always need additional assumptions." The downside of this is that different people can make different assumptions, and so this implies that this kind of causal analysis can't mediate disagreements between different groups of people who see the world very differently.


> It's not very useful to say "once you have accepted a causal model, you can draw lots of useful inferences." Because the main point of contention is over what is causing what.

Well, accepting a causal model and drawing lots of useful inferences seems better than drawing lots of misleading inferences because no attention is paid to the model (or being unable to make any inference because it's not obvious how the data can be used).

Even if people may not agree on what the right model is, at least this approach makes the model explicit. And in many cases there is no reason for disagreement, but without a careful analysis the wrong model may be used by mistake. For example, chapter 8 has an extensive discussion of the potential outcomes approach in the context of salary as a function of education and experience.


As someone who does social science causal inference for a living, I have to say that I didn't really enjoy "The Book of Why". Full disclosure: I mostly practice the Neyman-Rubin potential outcomes form of causal inference rather than the Pearl do-calculus / DAG ("directed acyclic graph") form of causal inference, but the two are in many cases equivalent.

The reason I didn't like the book is that I found it insufficiently rigorous to really engage with the "how" of doing causal inference, but excessively mathematical as a theoretical introduction to causality.

"Causal Inference in Statistics: A Primer" (also written by Pearl) is a very short book that I think does a good job of surfacing some of the same theoretical background while also explaining how to use Pearl's causality. If you exhaust that, I'd recommend moving on to the full "Causality" book.

But otherwise I'd recommend actually looking into the counterfactual / potential outcomes view of causality. The sets of questions the two frameworks answer are about 80% overlapping (although both Pearl and POs have their own 20%), but I find the PO vocabulary a little more intuitive. Canonical books include Morgan and Winship's "Counterfactuals and Causal Inference" and Imbens and Rubin's "Causal Inference for Statistics, Social, and Biomedical Sciences".

As to the blog post, Pearl is correct that causal inference requires qualitative knowledge about design to justify its identifying assumptions. In Pearl's work this is often motivated as qualitative knowledge informing the structure of the DAG before any estimation. But recent advances in causal discovery have actually rendered it possible to black box the structure of a DAG from data -- happy to provide citations if this is down the rabbit hole. By contrast, I agree with Gelman that Pearl is an irritating writer and that in "The Book of Why" he gives a sloppy intellectual history of causation.


> But recent advances in causal discovery have actually rendered it possible to black box the structure of a DAG from data -- happy to provide citations if this is down the rabbit hole.

I would be very interested in these references.


I should say this is not my wing of the world since in social science typically theory precedes estimation and there would be a strong disciplinary norm against "I have no idea what causes what". So I don't actually use this stuff. That being said, I have played with a few of the packages and read a few pieces on causal discovery.

Jonas Peters et al. - Elements of Causal Inference is a textbook that covers a little bit of what they called "learning cause-effect models". For algorithms, check SGS (Spirtes-Glymour-Scheines) and PC (Peter Spirtes and Clark Glymour). I believe both these algorithms are implemented in R in the package `pcalg`. There's another R package on BioConductor that implements them too, but I'm far enough afield from biostats I don't remember the name or have any notes I can find.

Some recent cites of note: Peters and Buhlmann - "Identifiability of Gaussian structural equation models" (2014), which led to Ghoshal and Honorio - "Learning linear structural equation models in polynomial time" (2018) who generalize the Peters/Buhlmann claim.

Other authors to Google: Dominik Janzing; Joris Mooij; Patrik Hoyer -- all of these people write papers with the above people, so you should be able to map out the network.

What the pieces all have in common is that they're trying to establish empirical differences in the joint distributions of X and Y between scenarios where X -> Y and where Y -> X. This is only possible in some cases.
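To illustrate (a toy sketch of my own, in the spirit of the additive-noise / LiNGAM-style identifiability results above): with linear structure and non-Gaussian noise, the regression residual is independent of the regressor only in the true causal direction, and that asymmetry is detectable from the joint distribution alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True model: X -> Y, with uniform (non-Gaussian) noise.
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.uniform(-1, 1, n)

def residual_dependence(cause, effect):
    """OLS-regress effect on cause, then score how much the residual
    still depends on the regressor (correlation of squares; ~0 under
    independence)."""
    slope = np.cov(cause, effect)[0, 1] / np.var(cause)
    resid = effect - slope * cause
    return abs(np.corrcoef(resid**2, cause**2)[0, 1])

causal = residual_dependence(x, y)  # residual is just the noise: independent of X
anti = residual_dependence(y, x)    # residual still carries information about Y

print(causal < anti)  # the true direction has the (near-)independent residual
```

This only works because the noise is non-Gaussian; with Gaussian noise the two directions are indistinguishable, which is exactly the "only possible in some cases" caveat.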

Hope this helps.


Is this directly related to learning the structure of PGMs?

eg. https://arxiv.org/abs/1111.6925 and practical example at https://github.com/jmschrei/pomegranate/blob/master/tutorial...


Yes.

I think that is unfair. No single book, short of a tome, will cover all the bases. On the other hand, if one does write such a tome, it sets the author up for a lowbrow dismissal: the author is incapable of giving a brief description and is too technical... TL;DR.

A discussion about this book on the statistics StackExchange, with some interesting answers: "The Book of Why by Judea Pearl: Why is he bashing statistics?" https://stats.stackexchange.com/questions/376920/the-book-of...

Yes, that has lots of great stuff in the comments. One of them (which, somewhat circuitously, was just added based on a link provided in a comment on the Gelman blog post) was to this review: https://www.kdnuggets.com/2018/06/gray-pearl-book-of-why.htm.... The review, and the back-and-forth at the bottom and in the comments between Gray and Pearl, are wonderful context.

The irony in this comment is priceless:

“Simpson’s paradox in its various forms is something that generations of researchers and statisticians have been trained to look out for. And we do. There is nothing mysterious about it. (This debate regarding Simpsons, which appeared in The American Statistician in 2014, and which I link in the article, hopefully will be visible to readers who are not ASA members.)”

There is nothing mysterious about Simpson’s paradox but the proper answer is still being debated!
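For anyone who hasn't seen the reversal in numbers, the classic kidney-stone figures reproduce it in a few lines (success counts from the well-known Charig et al. study; the code is just arithmetic):

```python
from fractions import Fraction

# Success rates by treatment and stone size: A wins in every subgroup...
small = {"A": Fraction(81, 87), "B": Fraction(234, 270)}    # ~93% vs ~87%
large = {"A": Fraction(192, 263), "B": Fraction(55, 80)}    # ~73% vs ~69%

assert small["A"] > small["B"]
assert large["A"] > large["B"]

# ...yet pooling the subgroups reverses the ranking, because A was given
# mostly to the harder (large-stone) cases:
overall = {
    "A": Fraction(81 + 192, 87 + 263),  # 273/350, ~78%
    "B": Fraction(234 + 55, 270 + 80),  # 289/350, ~83%
}
assert overall["A"] < overall["B"]
print("Simpson reversal reproduced")
```

The arithmetic is trivial; the debated part is which of the two answers (pooled or stratified) is the right one to act on, and that depends on the causal story about how treatments were assigned.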

Pearl’s response ends as follows:

“The next step is to let the community explore:

1) How many statisticians can actually answer Simpson’s question, and

2) How to make that number reach 90%.

I believe The Book of Why has already doubled that number, which is some progress. It is in fact something that I was not able to do in the past thirty years through laborious discussions with the leading statisticians of our time.

It is some progress, let’s continue.”

http://causality.cs.ucla.edu/blog/index.php/2018/06/15/a-sta...


Pearl's causal model is a decent didactic tool, but it seems to have little relevance when it comes to trying to figure out real-world problems. (If any actual scientific breakthroughs have come from Pearl's approach, I'd love to hear about them. I don't think there are any.) I believe the disagreement between Gelman and Pearl may come down to the fact that Gelman often deals with actual estimation problems where you have to find causal parameters using scanty and imperfect data, with no strong theory to guide decisions, while Pearl is focused on toy models where all the limitations and uncertainties of real problems are assumed away.

Donald Rubin has said, surely correctly, that design trumps analysis in causal inference. Pearl's approach seems to be the opposite--all focus is on the analytical details. For practicing scientists, I think this article provides a much more useful model for causal inference: https://academic.oup.com/ije/article/45/6/1787/2617188 See Textbox 3, where different approaches to studying the relation between smoking and low birthweight are described. The various approaches rely on different assumptions, and any one study design may not be convincing by itself, but the way their results converge ("triangulation") is very convincing. AFAIK, none of the studies used DAGs, yet the causal evidence provided is stronger than any DAG could provide.


The formal approach to quantitative causal inference in epidemiology: misguided or misrepresented?

https://academic.oup.com/ije/article/45/6/1817/2960059

There are other comments, and the authors’ reply:

https://academic.oup.com/ije/issue/45/6


> "I’ve never been able to understand Pearl’s notation:" -- Gelman.

That should not be surprising:

"It is difficult to get a man to understand something when his salary depends upon his not understanding it." -- Upton Sinclair

Accepting Pearl would amount to stating that some of the procedures we (statisticians) have been using, championing, and sourcing funds for, for half a century, are seriously flawed. That's going to have consequences for future funding. Of course there will be resistance.

Tony Hoare is worth paraphrasing -- some methods are so crisp and small that they are obviously correct, while others are so complex that one cannot find obvious errors. Piling a hierarchy of random variables upon random variables and parameters upon parameters lies firmly in the latter class.

This is actually a charitable analogy, because some uses of statistical methods are incorrect but the error lies in incorrect use -- using a tool or a technique to answer a question that it cannot answer. Smothering it with complexity and phrases like 'but real world' and 'but noisy big data' helps muddy the waters enough to deflect attention from the fundamental difference between conditioning and intervening.

I can be sympathetic to the claim that a method is more effective at solving a complicated problem than a simple one. On the other hand, if the body of theory on which the proposed method is built -- the same method that is presumably correct for the complicated case -- cannot deal with a pedagogic toy scenario correctly, that raises my eyebrows.
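To make the conditioning/intervening difference concrete, here is a toy simulation of my own: a confounder Z drives both X and Y, and X has no effect on Y at all. Conditioning on X moves Y; intervening on X does not:

```python
import random

random.seed(1)
n = 200_000

def flip(p):
    return 1 if random.random() < p else 0

def simulate(do_x=None):
    """One draw from a toy SCM: Z -> X and Z -> Y, with NO X -> Y edge.
    Passing do_x overrides X's structural equation (Pearl's graph surgery)."""
    z = flip(0.5)
    x = do_x if do_x is not None else (z if random.random() < 0.9 else 1 - z)
    y = z if random.random() < 0.9 else 1 - z  # Y never looks at X
    return x, y

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Conditioning: seeing X = 1 is evidence about Z, so it moves Y.
obs = [simulate() for _ in range(n)]
cond_gap = (mean(y for x, y in obs if x == 1)
            - mean(y for x, y in obs if x == 0))  # ~0.82 - 0.18 = ~0.64

# Intervening: setting X by fiat tells us nothing about Z, so Y is unmoved.
do_gap = (mean(y for _, y in (simulate(do_x=1) for _ in range(n)))
          - mean(y for _, y in (simulate(do_x=0) for _ in range(n))))  # ~0

print(round(cond_gap, 2), round(do_gap, 2))
```

A method that only speaks the language of conditioning will report the 0.64 gap and call it an "effect"; the interventional question has the answer 0, and you need the model (here, the simulate function itself) to see that.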


I tried reading Pearl once. I couldn't get over his tone.

Andrew Gelman's take summarizes it pretty nicely.

Coming from a statistics background: causal inference is a growing thing now, and several government-sponsored research programs have been pushing for it.

Causal inference from the statistics point of view is based on missing data, basically Rubin's stuff. It's pretty dang interesting to me. I'm sure there are many ways of looking at the same thing. You can view linear regression as an optimization problem with a cost function, or statistically via maximum likelihood estimation. Both have their pros and cons; with MLE you get a confidence interval. In my biased opinion, statistics is all about data, and it's a great domain for causal inference.
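To make that concrete, here's a small sketch of my own (toy data, made-up parameters) of the two views of linear regression: least squares and Gaussian MLE give the same slope, but the statistical model additionally hands you a standard error and hence a confidence interval:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 3.0 * x + 1.0 + rng.normal(size=n)  # true slope 3, intercept 1

# Optimization view: the coefficients that minimize the squared-error cost.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # same numbers the Gaussian MLE gives

# Statistical view: under Gaussian noise this IS the MLE, and the model
# also yields standard errors, hence confidence intervals for the slope.
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
ci = (beta[1] - 1.96 * se[1], beta[1] + 1.96 * se[1])

print(beta[1], ci)  # slope estimate near 3, with an interval around it
```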

There's no need to put a field down to make yours better. But constructive criticism (pros/cons, contrasts) makes both fields better. Pearl's attitude is off-putting when you try to read his stuff. We're all human and have varying degrees of ego; if you're going to try to convince us that do-calculus and your ways are better, be objective about it or word things better. If you don't want to convince people, then just be blunt as hell.


Judea's work and "The Book of Why" ought to be required reading for anyone who draws conclusions from data. People who do not understand statistics well enough to understand the book need to study statistical thinking until they do.

Michael Nielsen has a nice post (circa 2012) on the topic at http://www.michaelnielsen.org/ddi/if-correlation-doesnt-impl... with comments at http://www.michaelnielsen.org/ddi/guest-post-judea-pearl-on-....


So, AG writes a sloppy review to complain about a sloppy mischaracterization of (mostly classic) statisticians.

"About the exposition of causal inference, I have little to say."

That would have been interesting though.


Why is not a physical, mathematical or scientific question, but rather a philosophical one.

Personally I believe there is no why (no causality at all). Rather we love to think it exists because it reduces our uncertainty. It's too much to accept our whole reality is just a bunch of random coincidences.


I tried to listen to the audiobook of this. Terrible idea. One of the few audiobooks I’ve abandoned.

I can see why you abandoned it, but I enjoyed the audiobook. It helped that I had a little bit of prior exposure to the subject, and there were quite a few places where I had to pause until I could sit down with pen and paper and reproduce what Pearl was saying. But on the plus side, I finished the audiobook in under two months, whereas it would have taken me ages to find the time to finish reading it on paper/kindle.

All this is to say, it was probably a mistake for the publisher to order the narration, but I am really glad they did.


Why was it bad? I'm not an audiobook listener so I don't know what's considered bad.

It's a user-experience problem: for an audiobook, The Book of Why is quite dense in formulas that are hard to visualize when spoken as words.

For example, consider what happens if we try to describe a causal diagram in words

"A points to B, A points to X, B points to Y, and X points to Y. Now, if we apply do(X) to the diagram, we see that we can Y is now no longer a child of..."

or even simple formulas in words:

"P of A given B times P of B is equal to P of B given A times P of A"

For most of us, this sort of deal is hard to "get" and would be much better served if we just looked at a visual diagram or saw the equation.
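Contrast that with how little it takes to write the same intervention down; a minimal sketch (the representation is mine) of the do(X) graph surgery on that spoken diagram:

```python
def do(parents, node):
    """Graph surgery: return a copy of the DAG (node -> set of parents)
    with every edge INTO `node` deleted, so the intervened variable no
    longer listens to its usual causes."""
    cut = {k: set(v) for k, v in parents.items()}
    cut[node] = set()
    return cut

# "A points to B, A points to X, B points to Y, and X points to Y"
parents = {"A": set(), "B": {"A"}, "X": {"A"}, "Y": {"B", "X"}}

after = do(parents, "X")
print(after["X"])  # set(): the A -> X edge is gone
print(after["Y"])  # Y's mechanism is untouched: still {'B', 'X'}
```

Ten seconds to read on a page, but a real slog to absorb by ear.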

I personally had to repeat many sections over and over again with a notebook and pencil in hand to truly understand what was being read to me... but if I'm taking notes and creating visuals for myself, then I might as well have just gotten the paper variant of this book lol.


There are a lot of diagrams and a bit of math, which don't naturally translate to a purely auditory experience very well. Having read The Book of Why I can't imagine how it would translate to a reasonable audio-book. Frankly I'm surprised they even released an audio-book version at all.

Audiobooks for dense non-fiction material are usually a bad idea.

I'd recommend audio for fiction, or non-fiction with an engaging storyline (ex: Bad Blood), which this is not.

Nonetheless, I'd still recommend the book.



"Fisher would lager argue"

I can't tell if this is a typo in the original text, or a typo from the person complaining about a lack of care.



