Causal Nets Cause Common Confounding (gwern.net)
51 points by colinprince on Aug 26, 2016 | 5 comments



This problem is called causal discovery.

Learning causality from observational data is hard, but there are statistical methods to help with that. Of course, you want to verify statistical findings with experiments, but first you want to find the most likely causal relations from a large number of correlations.

The causal direction between two variables can sometimes be identified from observation alone, because the data contain more information than the correlation coefficient, and that extra information can be asymmetric between the two directions.

Here is a nice intro:

Causal discovery and inference: concepts and recent methodological advances, Peter Spirtes and Kun Zhang: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4841209/

A nice paper and method:

Distinguishing Causes from Effects using Nonlinear Acyclic Causal Models https://www.cs.helsinki.fi/u/ahyvarin/papers/Zhang09NIPSwork...
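The asymmetry idea above can be sketched in code. This is a toy illustration of the additive-noise intuition (not the actual method from the Zhang & Hyvärinen paper): if y = f(x) + noise with the noise independent of x, then after regressing y on x the residual spread is roughly constant across values of x, while the reverse regression generally shows residual spread that varies with y. The `dependence_score` helper and the bin-mean "regression" are my own crude stand-ins for a proper nonparametric fit and independence test.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 5000)
y = x**3 + rng.uniform(-1, 1, 5000)  # true direction: x -> y, independent noise

def dependence_score(cause, effect, bins=50):
    """Crude proxy for residual dependence: bin the data by the putative
    cause, take per-bin variance of the effect around the bin mean
    (i.e. residuals of a bin-mean regression), and measure how much that
    residual variance fluctuates across bins. Independent additive noise
    gives roughly constant residual variance, hence a small score."""
    order = np.argsort(cause)
    chunks = np.array_split(effect[order], bins)
    residual_vars = np.array([chunk.var() for chunk in chunks])
    return residual_vars.std() / residual_vars.mean()

forward = dependence_score(x, y)   # residuals of y given x: near-constant spread
backward = dependence_score(y, x)  # residuals of x given y: spread varies a lot
print(forward, backward)           # forward should come out markedly smaller
```

On this synthetic data the forward score is much smaller than the backward one, hinting at x -> y; the real paper replaces the bin trick with nonlinear regression plus a proper statistical independence test on the residuals.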


Good paper. Aapo is at the ML group at UCL now, starting this semester.


> We’d expect that the a priori odds are good, by the principle of indifference: After all, you can divvy up the possibilities as: 1) A causes B 2) B causes A 3) both A and B are caused by a C

> If it’s either #1 or #2, we’re good and we’ve found a causal relationship; it’s only outcome #3 which leaves us baffled & frustrated. If we were guessing at random, you’d expect us to still be right at least 33% of the time. And we can draw on all sorts of knowledge to do better

Generally, I'm not a fan of argumentation or logic that references the principle of indifference. Sure, if you insist on making an assumption without any prior information, there is an argument for choosing P = 1 / N. But I don't think the article needs to make such an assumption about the prior -- it does not seem to be essential to the article's point.

In statistics (and perhaps in life), it is often better to be honest about your ignorance.


I think most of the time the metaphor of the drunkard under the streetlight searching for his keys applies to us.

You can see this most clearly in other individuals, but it is also writ large in our institutions' various efforts in welfare and justice.

In my view, what is required to get to the Next Level is vastly improved metacognition of the sort gwern and other inhabitants of Less Wrong and SSC, notably CFAR, are attempting to develop. However, I am not confident we can do this 'manually' on a consistent basis, or introduce good habits without discovering new ways to be stupid (much of Scott's and Gwern's effort is directed towards observing dead ends being trod by other relatively bright individuals or reasonably sophisticated meme complexes). The main enemy at present is not just 'knowing how to think' but also being consistent. The best of us are at best temporarily rational, which is too haphazard to build any serious foundation on.

This is where I am convinced computation comes in. Humans are creative and slow, computers are consistent but stupid. A truly augmented human would receive 'assists' from a computer AI.

This would require full surveillance of your outputs (text typed, speech, etc.) and inputs (websites visited, books read), plus some manner of at least rudimentary brain monitoring.

From these sources it should be possible to construct an apparatus that acts as a personal assistant, pointing out your forgetfulness and the contradictions in your thought patterns or behavior, like a Compiler for the Mind. We could personify such a thing as a living entity in its own right, like Lyra's daemon.

Who wouldn't want a daemon?


I was reading this today; somewhat related and quite interesting:

https://arxiv.org/pdf/1509.03580.pdf



