
So you know when you believe something and then you update your belief because you get some evidence?

Yeah, and then you stack some beliefs on top of that.

And then you discover the evidence wasn’t actually true. Remind me again what the normative Bayesian update looks like in that instance.

Unfortunately it’s turtles all the way down.




    P(B | I saw E, P) = P(I saw E | B, P) * P(B | P) / P(I saw E | P)

    P(B | E was false, I saw E, P) = P(E was false | B, I saw E, P) * P(B | I saw E, P) / P(E was false | I saw E, P)
This is a pretty basic application of Bayes' theorem.
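
A quick numeric sketch of those two updates in Python (all likelihoods made up, just to show the mechanics):

    # Stage 1: update on "I saw E". Stage 2: update on "E was false".
    prior_B = 0.5                # P(B | P)
    p_saw_given_B = 0.8          # P(I saw E | B, P)
    p_saw_given_notB = 0.3       # P(I saw E | not B, P)

    # P(I saw E | P) by the law of total probability
    p_saw = p_saw_given_B * prior_B + p_saw_given_notB * (1 - prior_B)
    post1 = p_saw_given_B * prior_B / p_saw   # P(B | I saw E, P)

    # Stage 2: treat post1 as the prior and condition on "E was false".
    p_false_given_B = 0.1        # P(E was false | B, I saw E, P)
    p_false_given_notB = 0.6     # P(E was false | not B, I saw E, P)
    p_false = p_false_given_B * post1 + p_false_given_notB * (1 - post1)
    post2 = p_false_given_B * post1 / p_false  # P(B | E was false, I saw E, P)

    print(post1, post2)  # ~0.73 after "I saw E", back down to ~0.31 after "E was false"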


Love it: P(I saw E) and P(I didn’t really see E).

Just move the argument one level down: suppose “I saw E” is false, and it turns out so is “E was false”. So then what? Add “E was false was false”?

Turtles all the way down.

At some point something has to be “true” in order to conditionalise on it.


I believe you can condition on the probability of a proposition.

For example, suppose you are in a fairly dark room and you observe, with 90% confidence, a red object. Then you can do (iirc) P(X | 90% confident I see a red object) = 90% * P(X | see red object) + 10% * P(X | do not see red object)

I would think that, in principle, this allows all observations to be treated as fallible without any kind of “infinite regress” problem? You just apply the same kind of process each time.
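
For what it's worth, that mixture rule is essentially Jeffrey conditionalization. A minimal Python sketch, with made-up numbers:

    # Jeffrey conditionalization: condition on an uncertain observation.
    p_X_given_red = 0.7        # P(X | I see a red object)
    p_X_given_not_red = 0.05   # P(X | I do not see a red object)
    confidence_red = 0.9       # how sure I am that I really saw red

    # Weight the two conditional probabilities by confidence in the observation.
    p_X = confidence_red * p_X_given_red + (1 - confidence_red) * p_X_given_not_red
    print(p_X)  # 0.635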


Yes, sure. Here are a few truths that never disappointed me:

There is an absolute universal truth.

Absolute universal truth, as a whole, is unreachable even to the most intelligent and resourceful human that will ever exist.


Real-world systems are complicated. In theory, you could do belief propagation to update your beliefs through the whole network, if your brain worked something like a Bayesian network.
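
To make that concrete, here is a toy Python sketch of evidence propagating through a three-node chain A -> B -> C (exact inference by brute-force enumeration, not a scalable algorithm; all numbers made up):

    # Observing C ripples an update back through B to A.
    p_A = {True: 0.3, False: 0.7}              # prior on A
    p_B_given_A = {True: 0.9, False: 0.2}      # P(B=True | A)
    p_C_given_B = {True: 0.8, False: 0.1}      # P(C=True | B)

    def joint(a, b, c):
        pb = p_B_given_A[a] if b else 1 - p_B_given_A[a]
        pc = p_C_given_B[b] if c else 1 - p_C_given_B[b]
        return p_A[a] * pb * pc

    # P(A=True | C=True): marginalize over B, then normalize.
    num = sum(joint(True, b, True) for b in (True, False))
    den = sum(joint(a, b, True) for a in (True, False) for b in (True, False))
    print(num / den)  # ~0.57, up from the 0.3 prior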


Natural selection didn't wire our brains to work like a Bayesian network. If it had, wouldn't it be easier to make converts to the Church of Reverend Bayes? /s

Alternatively, brains ARE Bayesian networks, with hard-coded priors that cannot be changed without CRISPR.


> you discover the evidence wasn’t actually true

Not really going to vouch for the normative Bayesian approach, but you might just consider this new (strong) evidence for applying an update.


The precise claim (I believe) is that the prior update you made relied on some assumptions about the correct way to phrase your perceptions.

That is, for the update you compare "the probability that this trial came out with X successes given everything else that I take for granted, and also that the hypothesis is true" vs. "the probability that this trial came out with X successes given everything else that I take for granted, and also that the hypothesis is false." So in both cases you actually assert the fragment "this trial came out with X successes."

What happens if it didn't really? Well, the proper Bayesian approach is to state that you phrased this fragment wrong. You actually needed to qualify it as "the probability that I saw this trial come out with X successes given ...", and those probabilities might have been different from the probabilities of the trial actually coming out with X successes.

OK but what happens if that didn't really, either. Well, the proper Bayesian approach is to state that you phrased the fragment doubly wrong. You actually needed to qualify it as "the probability that I thought I saw this trial come out with X successes given...". So now you are properly guarded, like a good Bayesian, against the possibility that maybe you sneezed while you were reading the experiment results and even though you saw 51, it got scrambled in your head and you thought you saw 15.

OK but what happens if that didn't really, either either. You thought that you thought that you saw something, but actually you didn't think you saw anything, because you were in The Matrix or had dementia or any number of other things that mess with our perceptions of ourselves. So you, good Bayesian that you wish to be, needed to qualify this thing extra!
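
To make the first level of that qualification concrete, here is a sketch that treats "I saw E" as a noisy report of the true outcome and marginalizes over what actually happened (noise rates made up; each further "I thought I saw..." layer nests the same way):

    # P(H | I saw E), with "I saw E" modeled as a noisy channel over E.
    p_H = 0.5                                   # prior on the hypothesis
    p_E_given_H = {True: 0.8, False: 0.3}       # P(E happened | H)
    p_saw_given_E = {True: 0.95, False: 0.05}   # P(I report seeing E | E)

    def p_saw_given_h(h):
        # Marginalize over whether E actually happened.
        return sum(p_saw_given_E[e] * (p_E_given_H[h] if e else 1 - p_E_given_H[h])
                   for e in (True, False))

    num = p_saw_given_h(True) * p_H
    den = num + p_saw_given_h(False) * (1 - p_H)
    print(num / den)  # ~0.71, softer than the ~0.73 you'd get conditioning on E itself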

The idea is that Bayesianism is one of those "if all you have is a hammer you see everything as a nail" type of things. It's not that you can't see a screw as a really inefficient nail; that is totally one valid perspective on screwness. It's also not that the hammer doesn't have any valid uses. It does, it's very useful, but when you start trying to chase all of human rationality with it, you start to run into some really weird issues.

For instance, the proper Bayesian view of intuitions is that they are a form of evidence (because what else would they be), and that they are extremely reliable when they point to lawlike metaphysical statements (otherwise we have trouble with "1 + 1 = 2" and "reality is not self-contradictory" and other metaphysical laws that we take for granted), but correspondingly unreliable when, say, we intuit things other than metaphysical laws, such as the existence of a monster in the closet, or a murderer hiding under the bed, or that the only explanation for our missing (actually misplaced) laptop is that someone must have stolen it in the middle of the night. You need to do this to build up the "ground truth" that allows you to get to the vanilla epistemology stuff that you then take for granted, like "okay, we can run experiments to try to figure out stuff about the world, and those experiments say that the monster in the closet isn't actually there."


This just sounds like logical Tetris.



