
Pascal's Mugging: Tiny Probabilities of Vast Utilities - jamesbritt
http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/
======
roryokane
I once read another formulation of this problem, a PDF by Nick Bostrom:
<http://www.nickbostrom.com/papers/pascal.pdf>. It depicts this problem as a
dialogue between the mugger and the mugged, which I found more entertaining to
read than the HN link, and the PDF’s description still preserves all the
important attributes of the problem. If you find the HN link boring, try the
PDF.

~~~
jamesbkel
Thanks, I enjoyed that. I suppose the one issue with the dialogue is that as
the mugger increases the payoff, it would only make sense to decrease the
likelihood that he delivers on the promise.

Put differently, the expected value is P(Keeps promise|Amount of
promise)*(Amount of promise). But the two multiplicands move in opposite
directions: as the promised amount grows, the probability of delivery shrinks.
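To make that concrete, here is a minimal sketch. The 1/N² credibility decay is an arbitrary assumption for illustration, not anything claimed in the thread:

```python
# Sketch: if P(keeps promise | amount N) decays faster than N grows,
# the expected value of paying shrinks as the promised payoff increases.
# The decay rate (1 / N^2) is a made-up prior, chosen only to illustrate.

def p_keeps_promise(amount: float) -> float:
    """Assumed prior: credibility falls off as the square of the claim."""
    return min(1.0, 1.0 / amount**2)

def expected_value(amount: float) -> float:
    # P(keeps promise | amount) * amount
    return p_keeps_promise(amount) * amount

for amount in (10.0, 1_000.0, 1_000_000.0):
    print(amount, expected_value(amount))  # expected value decreases
```

Under any prior that decays faster than linearly in the promised amount, bigger promises make the deal strictly worse.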

------
powera
There are two problems with this:

1) The mugger is obviously lying. 100% chance. The "you must admit there's a
small probability, because anything is possible" argument is irrelevant,
because there is an equally small probability that the opposite of what he
says would happen.

2) There's no such thing as a 4 trillion trillion positive outcome, much less
3^^^^3. Just because you can say the number doesn't mean it's meaningful.

That said, this seems more like the
<http://en.wikipedia.org/wiki/St._Petersburg_paradox> which already has many
explanations.
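For anyone unfamiliar with the St. Petersburg game: flip a fair coin until tails, doubling the pot on each head. A quick Monte Carlo sketch shows the gap between the infinite theoretical expectation and what sampling actually produces:

```python
import random

# St. Petersburg game: flip a fair coin until tails; payoff doubles on
# each head. Theoretical expected value is infinite, yet sampled averages
# stay modest -- the heart of the paradox.

def play_once(rng: random.Random) -> int:
    payoff = 1
    while rng.random() < 0.5:  # heads: double the pot, flip again
        payoff *= 2
    return payoff

def sample_mean(n: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    return sum(play_once(rng) for _ in range(n)) / n

print(sample_mean(100_000))  # finite despite the infinite expectation
```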

~~~
roryokane
Some commenters in the link address your claim #1 that it is equally probable
for the mugger to be telling the truth and lying.

[Why would we give even a small probability of the mugger doing what he says?]
“Because he said so, and people tend to be true to their word more often than
dictated by chance.” –
[http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilitie...](http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/2t7o)

[But the mugger isn't human, so our experience in proportions of truth-telling
is irrelevant.] “They claim to not be a human. They're still a person, in the
sense of a sapient being. As a larger class, you'd expect lower correlation,
but it would still be above zero.” –
[http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilitie...](http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/4o1w)

As for your claim #2, “There's no such thing as a 4 trillion trillion positive
outcome”, indeed there isn’t – I can’t imagine what that would mean – but I
see no mention of “[number] positive outcome” in the link. The link just says
the mugger will “run a Turing machine that simulates and kills 3^^^^3 people.”
Which part of the link do you consider impossible, and how did you paraphrase
it into a “positive outcome”?

------
Natsu
Presumably the utility of anti-manipulation features is higher than that of
exact computation of the payoff of events with sufficiently small
probabilities. So there's some sort of filtering effect that drops all
sufficiently low probabilities down to zero.

And yet a theist can use some variant of that argument to say that belief in
their god is above this threshold while all others are below it, zeroing out
the infinity of deities that no one can even name.
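The filtering effect described above can be sketched as a probability floor; the cutoff value is an arbitrary assumption for illustration:

```python
# Sketch of the "filtering" idea: treat any probability below a floor EPS
# as exactly zero before computing expected utility, so astronomically
# unlikely threats contribute nothing. EPS is a made-up cutoff.

EPS = 1e-12

def filtered_expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes if p >= EPS)

# The mugger's claim carries vast utility but a probability below the floor,
# so only the ordinary outcome (losing your wallet) survives the filter:
outcomes = [(1e-30, 3.0**33), (0.99, -5.0)]
print(filtered_expected_utility(outcomes))  # -4.95
```

The theist's objection then becomes a question of where EPS sits relative to each claim, which the filter itself can't answer.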

------
jarin
You could take the first 2/3 of this, and title it "Why simple mathematical
notation might turn you into an atheist"

------
tocomment
Can anyone explain this as if to a five year old? I'm not understanding it.

------
ldar15
Surely it would immediately fail the test, because which is more likely: the
guy is a nutter, or there exists _another_ universe with _another_ Turing
machine which includes its own states, all the states required for ours, and
all the states required for the 3^^^3 people?

~~~
bdr
It doesn't matter which is more likely, only that there is some positive
probability of the second option.

~~~
ldar15
_It doesn't matter which is more likely, only that there is some positive
probability of the second option._

No, it requires that the weighting (probability * consequence) is greater in
the second case than the first.

FTA:

1\. _But, small as this probability is, it isn't anywhere near as small as
3^^^^3 is large._

2\. _If the probabilities of various scenarios considered did not exactly
cancel out, the AI's action in the case of Pascal's Mugging would be
overwhelmingly dominated by whatever tiny differentials existed in the various
tiny probabilities under which 3^^^^3 units of expected utility were actually
at stake._

I disagree with statement 1. If the author's definition of Occam's razor is
believed by the AI, then the probability that the mugger is telling the truth
is given by the inverse of the complexity of the system that incorporates our
world, the other world, and in addition the ability to simulate 3^^^^3 deaths.
So it doesn't matter if X is 3^^^3 or 3^^^^^3, because the very probability of
it being so is inversely proportional to the damage done.

    Our world: A
    Outer world only: B
    Temporary 3^^^3 death world: C

Complexity when mugger telling truth = ABC

Potential consequence: kC

Weighting for mugger telling truth: kC/ABC = k/AB, i.e. proportional to 1/AB

Complexity when mugger is lying = A.

Weighting for mugger lying: 1/A

Since A and B are staggeringly large numbers, the AI must choose 1/A over
1/AB.
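The comparison can be sketched numerically. The magnitudes of A, B, and C below are made-up placeholders, chosen only so the inequality is visible; exact rationals avoid float underflow:

```python
from fractions import Fraction

# Sketch of the weighting argument: under this reading of Occam's razor,
# P(truth) ~ 1/(A*B*C) while the consequence scales with C, so the C
# terms cancel and the weight for "truth" is 1/(A*B) vs 1/A for "lying".
# All three magnitudes are hypothetical placeholders.

A = Fraction(10**50)   # complexity of our world (made up)
B = Fraction(10**40)   # complexity of the outer world (made up)
C = Fraction(10**100)  # size of the threatened harm (made up)

weight_truth = (Fraction(1) / (A * B * C)) * C  # = 1/(A*B)
weight_lying = Fraction(1) / A                  # = 1/A

print(weight_lying > weight_truth)  # True: 1/A dominates 1/(A*B)
```

Note how C drops out entirely, which is the commenter's point: inflating the threat doesn't change which branch wins.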

Then there's a whole bunch of other stuff, such as, even if the AI was certain
that the mugger could carry out its threat, should the AI yield to threats of
violence? Another question: if life can so trivially be created and destroyed,
what does it matter? Can we trust the mugger?

But the main issue, and this should be the response of anyone threatened with
some claimed ability to cause harm and suffering in "the next world", is
"prove it".

