Estimating the probabilities of causation via deep monotonic twin networks (arxiv.org)
40 points by Anon84 40 days ago | 5 comments

TL;DR: this looks like it is about methods to answer questions like "Was it event X that caused event Y?"

This feels difficult for a layman like me. Let's try to clear that up.

> However, as noted by Pearl, interventional queries only form part of a larger hierarchy of causal queries, with counterfactuals sitting at the top.

Regarding "Pearl", paper text cites:

> Tian, J.; and Pearl, J. 2000. Probabilities of causation: Bounds and identification. Annals of Mathematics and Artificial Intelligence, 28(1): 287–313.

which appears to be this: https://link.springer.com/article/10.1023/A:1018912507879

and appears to provide mathematical foundations for answering questions like "did event A cause event B?" from observational data.
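To make that concrete: one result from that paper gives bounds on the "probability of necessity" (PN), i.e. the probability that Y would not have occurred had X not occurred, given that both did occur. Here's a minimal sketch of the bounds for the special case of exogeneity (no confounding); the toy numbers are made up, and `pn_bounds` is just an illustrative helper, not anything from the paper's code:

```python
# Bounds on the Probability of Necessity (PN): the probability that
# outcome y would NOT have occurred had exposure x not occurred,
# given that both x and y did occur. Under exogeneity (no confounding),
# Tian & Pearl (2000) bound PN by:
#   max(0, (P(y|x) - P(y|x')) / P(y|x))  <=  PN  <=  min(1, P(y'|x') / P(y|x))

def pn_bounds(p_y_given_x: float, p_y_given_not_x: float):
    """Return (lower, upper) bounds on PN, assuming exogeneity."""
    lower = max(0.0, (p_y_given_x - p_y_given_not_x) / p_y_given_x)
    upper = min(1.0, (1.0 - p_y_given_not_x) / p_y_given_x)
    return lower, upper

# Toy (invented) numbers: P(cancer | smoke) = 0.3, P(cancer | no smoke) = 0.1
low, high = pn_bounds(0.3, 0.1)
print(low, high)  # roughly 0.667 and 1.0
```

So even without identifying PN exactly, observational data alone can pin it into an interval; the paper linked above shows when and how that interval collapses to a point.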

Regarding the "hierarchy", this looks like a clear and very short introduction: http://web.cs.ucla.edu/~kaoru/3-layer-causal-hierarchy.pdf

From the very end of the paper:

> Causal inference is a tool that can have significant impact on society depending on its use. As such the authors are adamant that all uses of causal inference that could have negative societal impacts should be accompanied with the proper due diligence and fail-safes in order to minimize and even eliminate said negative impacts.

Does this translate to "Applications of this research might destroy society, please refrain"?

> Does this translate to "Applications of this research might destroy society, please refrain"?

The biggest potential use of causal inference, from what I remember in school, is social policy prediction: large-scale changes that you simply cannot randomly test, things like "does prosecuting minor crimes more heavily lower overall crime?" or "does privatizing schools increase student test scores?" Even if you get the causality right on the main metric, there may be secondary effects you don't even test for.

The main point about most causality questions is to answer the question “what if X did not happen / did happen”.

Normally you can’t get that information, since you are looking at past events where X happened and you can’t change that. It also helps answer questions where you could in principle test a different behavior (as in a randomized controlled trial), but doing so is simply too expensive / unethical / … (is smoking really causing cancer; would X have had cancer if they didn’t smoke?). Answering these kinds of questions in a mathematical way gives a lot of power, since it removes, or at least mitigates, the "correlation doesn’t equal causation" problem.
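The standard recipe for answering "would X have had cancer if they didn't smoke?" is Pearl's three steps: abduction (infer the hidden state of the world from what actually happened), action (intervene to change X), prediction (recompute Y). Here's a minimal sketch on an invented toy model; the structural equation and all numbers are assumptions for illustration, and the twin-network idea in the submitted paper is essentially a way to do this inside one network rather than by rejection sampling:

```python
import random

# Toy structural causal model (invented for illustration):
#   U ~ Bernoulli(0.2)   hidden susceptibility (exogenous noise)
#   Y = X or U           cancer occurs if smoking OR susceptible
def outcome(x: int, u: int) -> int:
    return int(x or u)  # structural equation for Y

random.seed(0)
worlds = [(1, int(random.random() < 0.2)) for _ in range(100_000)]  # (X, U)

# 1. Abduction: keep only worlds consistent with the evidence X=1, Y=1
consistent = [u for x, u in worlds if x == 1 and outcome(x, u) == 1]

# 2. Action + 3. Prediction: set do(X=0) in those same worlds, recompute Y
pn = sum(1 for u in consistent if outcome(0, u) == 0) / len(consistent)
print(round(pn, 3))  # close to 0.8 = P(U=0): cancer avoided in ~80% of worlds
```

The point is that the answer depends on the hidden `U` inferred from the factual evidence, which is exactly the information a plain interventional query P(Y | do(X=0)) throws away.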

One application I can think of would be dialog systems. The meaning of a conversation between two strangers is determined by statements and responses, and by the degree of unexpectedness of each utterance. Say a person asks a yes-or-no question but gets a response along the lines of "I'm talking about X, not Y"; it would be useful to go back, retrace the conversation at that point, and try different responses in an effort to understand what's being said.

> Does this translate to "Applications of this research might destroy society, please refrain"?

This is boilerplate acknowledgement of ethics concerns that often appears in DL/ML papers these days. Doesn't really mean anything.
