Pascal's Mugging (2009) [pdf] (nickbostrom.com)
55 points by srimukh 18 days ago | 69 comments



This can be easily resolved by considering that the victim of the mugging has finite resources. This is the usual remedy for problems where expected value alone gives stupid results (such as Pascal's Mugging). Something similar happens in lotteries: even if the expected value of buying a ticket is positive, it is still not rational for an ordinary person to buy one.

If I have $400 I can't afford to take 1:1000000 risks that cost $200 each. I will go bankrupt with overwhelming likelihood whatever the payoff. There is a minimum cutoff involving cost/probability below which it does not make sense to take up the opportunity.

There are links to similar theoretical ideas from the Pascal's Mugging wiki page - although from the casino's perspective rather than the gambler's - https://en.wikipedia.org/wiki/St._Petersburg_paradox#Finite_... and then https://en.wikipedia.org/wiki/Gambler%27s_ruin for example.

Most people will not take a 99% risk of going bankrupt in a game that will consume all their resource reserves; expected value as a statistic does not meaningfully capture the risk. Positive expectation, losing strategy.
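A quick way to see the "positive expectation, losing strategy" point is to simulate it. A minimal sketch, with illustrative numbers of my own (stake, payout, and win probability are not from the comment):

  # Sketch: a bet with positive expected value that still ruins nearly everyone.
  # Illustrative numbers: bankroll 400, stake 200, win probability 1e-6,
  # payout 4e8 (so EV per bet = 1e-6 * 4e8 - 200 = +200).
  import random

  def play(bankroll=400, stake=200, p_win=1e-6, payout=4e8, max_rounds=10_000):
      for _ in range(max_rounds):
          if bankroll < stake:
              return "broke"
          bankroll -= stake
          if random.random() < p_win:
              bankroll += payout
      return "survived"

  trials = [play() for _ in range(1_000)]
  print(trials.count("broke") / len(trials))  # ~1.0 despite positive EV per bet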


That's a good argument against seemingly plausible but risky approaches.

This particular situation, of someone just pulling "it could be true!" out of their arse, can also be solved by framing things as "the more utility you claim, the less likely it seems and disproportionately so".

I.e., if the chance of getting X from the scoundrel is less than 1/(X^2*C), even the integral of all the offers together winds up very small.


Yes, but it is still not trivial to formalize mathematically without running into trouble: https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-m...


It is pretty simple to formalise - if there is a pool of people who routinely accept positive-expected-value gambles where there is a P = 99.999999999% chance of going broke then we expect everyone in that pool will be broke unless the size of the pool is comparable to 1/(1-P). Tighten up the bounds on 'comparable' a bit and that is formalised.

The mistake is accepting uncritically that expected value is the best metric to optimise. Nobody ever proved that expected value is a strategically superior metric. In fact it would be quite hard to prove, since it is not true. It leaves people vulnerable to making very stupid decisions, as illustrated in Pascal's Mugging.

The optimal strategy involves, at a minimum, considering your available opportunities and your available resources. Opportunity alone is not enough.
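Here is a minimal sketch of the pool argument above, assuming independent gamblers (P is the number from the comment; the pool sizes are illustrative):

  # Sketch: if each gambler independently ends up broke with probability P,
  # the expected number of survivors in a pool of N gamblers is N * (1 - P).
  P = 0.99999999999            # chance an individual gambler goes broke
  for N in (10**3, 10**6, 10**11, 10**12):
      print(N, N * (1 - P))    # expected survivors
  # 1/(1-P) is about 1e11, so only pools of that order contain any survivors.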


> Tighten up the bounds on 'comparable' a bit and that is formalised.

Is this like "draw the rest of the owl"?


What owl?


reddit.com/r/restofthefuckingowl


Linking to examples of the meme without linking the actual meme doesn't really explain what "draw the rest of the owl" means. https://i.kym-cdn.com/photos/images/newsfeed/000/572/078/d6d...


You need to maximize expected utility. To avoid the mugging, you need utility to be bounded: there must be some multiple of your current utility that's impossible to ever attain, regardless of what happens.


Agreed that most people could reject the mugging on those grounds but I think that is more than is needed.

Even if someone being mugged had an unbounded utility function the finite resources argument forces them to rationally reject the deal. We've reintroduced infinities; so now our rationalist must accept that there are infinitely many situations with the same properties as the Mugging (positive expected value, Almost Sure to lose the stake). Their strategy would have to be to reject deals like the Mugging and seek out deals that have positive expected return and probably keep the stake/have a tiny stake compared to their reserves.

I'm basically saying a rationalist with finite wealth is probably using the Kelly criterion [0] and would reject the Mugging on that basis.

[0] https://en.wikipedia.org/wiki/Kelly_criterion
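For reference, a minimal sketch of the Kelly fraction for a one-shot binary bet, using the standard formula from the linked article (the mugging-style probability p below is an illustrative assumption, not from the thread):

  # Sketch: Kelly fraction for a bet paying net odds b:1 with win probability p.
  # Standard result: f* = p - (1 - p) / b; a negative f* means "don't bet at all".
  def kelly_fraction(p, b):
      return p - (1 - p) / b

  p = 1e-6                      # illustrative credence that the mugger pays up
  for b in (10, 1e6, 1e100):    # ever more extravagant promised odds
      print(b, kelly_fraction(p, b))
  # Even at 1:1e100 odds the Kelly bettor stakes only about p = 0.0001% of their wealth.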


>We've reintroduced infinities; so now our rationalist must accept that there are infinitely many situations with the same properties as the Mugging (positive expected value, Almost Sure to lose the stake).

Yes, in the presence of infinities the decision function is inconsistent.

>Their strategy would have to be to reject deals like the Mugging and seek out deals that have positive expected return and probably keep the stake/have a tiny stake compared to their reserves.

Try formalizing that.

Kelly criterion doesn't work. Your link says it maximizes log wealth. If potential wealth is unbounded, you will still take bets that are positive E(log utility).


> Kelly criterion doesn't work. Your link says it maximizes log wealth. If potential wealth is unbounded, you will still take bets that are positive E(log utility).

Actually Kelly criterion works well, since it would limit your exposure to the game.

In this situation, Pascal as a Kelly bettor would bet about 1 denier out of 10 livres (= 0.05% of his wealth), which is roughly a penny.


How are you getting those numbers?

If you're trying to maximize log wealth, someone promising an insane amount of wealth for a trivial investment is a positive value for log wealth. The amount may need to be slightly more insane than it would be if you were just maximizing wealth, but with massive numbers such as used in Pascal's mugging this is easy enough.


> If you're trying to maximize log wealth, someone promising an insane amount of wealth for a trivial investment is a positive value for log wealth.

Nah it does not work in such way.

Mathematically speaking, even if the mugger can propose an infinitely large return in this situation, a Kelly bettor would not bet more than 0.1% (= p = 1 / 1000) of his wealth.

It's very sound mathematics and nothing mystifying.

Edit: A Kelly bettor thinks in terms of a long-running sequence of bets. If you bet all of your wealth just because the game is favorable, you'll eventually lose all of your capital with probability one (in other words, betting everything almost surely ruins you in the long run). The natural conclusion is that you need to bet a fraction of your wealth to maximize your long-term wealth. But how much? The Kelly criterion answers this question formally. Read the Wikipedia article (or better, read Kelly 1956; it's a good paper) for how it handles the question.


The link says it's trying to maximize log wealth. If it's not actually doing that, then sure, it can limit loss. What exactly is being optimized?


> The link says it's trying to maximize log wealth

Yes it is maximizing log wealth.

Think of it this way: if you give all of your fortune to the mugger, your capital will end up being 1) 0 livres (99.9%) or 2) 20000 livres (0.1%). So the expected log wealth is:

  log(0) * 0.999 + log(20000) * 0.001 = -inf
On the other hand, if you bet 10% of your wealth, you will end up having 1) 9 livres (99.9%) or 2) 2009 livres (0.1%)

  log(9) * 0.999 + log(2009) * 0.001 ≈ 2.2
So you will prefer to bet 10% over 100%. The math does not tell you to "bet all of your fortune!" even if the odds are 1:100000000.
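A minimal sketch of the calculation above, sweeping the fraction staked (same illustrative numbers: 10 livres of wealth, win probability 0.001, a win paying 2000 per livre staked):

  # Sketch: expected log wealth as a function of the fraction f of wealth staked.
  from math import log

  def expected_log_wealth(f, wealth=10.0, p=0.001, payout=2000.0):
      lose = wealth * (1 - f)                      # the stake is simply gone
      win = wealth * (1 - f) + wealth * f * payout
      if lose <= 0:
          return float("-inf")                     # log(0): ruin swamps everything
      return (1 - p) * log(lose) + p * log(win)

  for f in (1.0, 0.1, 0.01, 0.001):
      print(f, expected_log_wealth(f))
  # Betting everything gives -inf; small fractions do strictly better, as above.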


Yes, maximizing the log implies you shouldn't bet everything. But you should still bet 99.99%, if the odds are a googol to 1.


> Yes, maximizing the log implies you shouldn't bet everything. But you should still bet 99.99%, if the odds are a googol to 1.

Not quite. If the odds are infinitely favorable, the log wealth is maximized when you bet 0.1% of your wealth. Any fractions other than that produce inferior results. This might be counter-intuitive but actually can be easily proven by basic calculus.

If you still doubt it, you can just compute it to be sure! For example, if the odds are indeed 1:googol (1:10^100), the log wealth for betting 99.99% is -6.67, less than betting 0.1%, which produces 2.52.


I figured out the issue. You're assuming you can bet any fraction you want, while Pascal's mugging requires a specific bet, take it or leave it.

Maximizing log wealth will require taking such bets at less than 100% of your current wealth, provided the payout is high enough. That's easily proven with simple algebra: if you start with X and take a bet requiring 99.99% of X, paying Y:1, with a probability Z of paying out, then the expected log value of taking it is (1-Z)*log(X/10,000) + Z*log(Y*X*9999/10,000 + X/10,000). This goes to infinity as Y goes to infinity.
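A minimal sketch of that algebra (X and Z here are illustrative; note that Z is held fixed while Y grows, which is exactly the assumption challenged in the reply below):

  # Sketch: expected log wealth for a take-it-or-leave-it bet of 99.99% of
  # wealth X, paying Y:1, with probability Z of paying out.
  from math import log

  def e_log(X, Y, Z):
      keep = X / 10_000                        # the 0.01% held back
      win = Y * X * 9_999 / 10_000 + keep
      return (1 - Z) * log(keep) + Z * log(win)

  X, Z = 100.0, 0.01                           # illustrative values
  for Y in (1e3, 1e12, 1e100, 1e300):
      print(Y, e_log(X, Y, Z))
  # Increases without bound (roughly like Z * log(Y)) as Y grows.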


You are right in a narrow sense. Yes, you can construct a situation that makes "betting 99.99% of my wealth" profitable. But:

* You're assuming the winning probability p does not decrease as the odds Y goes up. This is a silly assumption. Do you really believe that the probability is the same for "Hey, I will pay you 2000 livres tomorrow" and "Hey, I think I can pay you an infinite amount of livres tomorrow"?

* For example, if p = 1 / (odds), then E(log) never goes to infinity.

If you really assume there is a 1/10000 chance that the mugger can pay you an infinite amount of money, then... why not? You could just start a hedge fund on that premise. Gather 100000 people and have them bet independently with muggers; then there is a 99.99% chance that someone actually gets an infinite return. Now everyone is happy, receiving an infinite amount of money.


>You're assuming the winning probability p does not decrease as the odds Y goes up. This is a silly assumption.

No, the only assumption required is that the probability decreases much more slowly than the odds go up. That is self-evidently true; the complexity of the claim doesn't grow nearly as fast as the odds being offered.

If the probability is 1/googol (in reality it's much higher than that, 1/googol epistemic probabilities never show up) but the odds being offered is 3^^^3, then you should take the bet, whether you're trying to maximize wealth or log wealth.


I thought the "solution" to the St Petersburg paradox was to consider utility as a non-linear function of wealth, which makes the series converge.

Also, I don't find the Pascal's Mugger example convincing, as the probability that the mugger will return with the money is inversely proportional to the multiple they are promising (for very large multiples this is because they have finite resources, but even at lower multiples this intuitively feels true).


> as the probability that the mugger will return with the money is...

That can't be reasonably estimated though. Putting aside the fact that we can't really assert the relation you posit, there is also a finite probability that the mugger is some sort of illuminati member with the ability to create an arbitrary amount of money. I.e., there is some tiny-but-positive probability that he can create an arbitrary amount of money.

At that point, the expected return can be made large compared to the probability that the mugger is lying.


They can't really offer you more than the amount of resources in the entire world, though. Money past some point in the trillions stops being money, because you can't exchange it for anything. So even for an illuminati member, the expected value can only go so high, and I don't know if that value is higher or lower than a dollar.


It seems to me that the set of worlds where the mugger can return $X is strictly a subset of the set of worlds where the mugger can return $X+epsilon.

So the probability won't be "inversely proportional" in the strict sense, but it will be decreasing with X increasing.


At some point it converges to some constant probability. The probability of providing an "almost impossible" amount - say $9^^^^9 - is almost precisely equal to the probability of providing $9^^^^9+1. This is because any mechanism that can provide the first can also provide the second.

Technically one will be a subset of the other, but it will be like an infinite series whose remaining terms are so small that they sum to essentially nothing. The probability will converge to the value of "the probability that an entity can provide arbitrary compensation".


Oh it’s a completely ridiculous, thoroughly flawed and easily refuted argument, for exactly the same reasons Pascal’s Wager is flawed. I think that’s the point.


This is likely trending because John Carmack's recent post saying he was moving to AI references Pascal's Mugging: https://www.facebook.com/100006735798590/posts/2547632585471...

His Facebook post was also discussed in detail on Hacker News here: https://news.ycombinator.com/item?id=21530860


I really love how this could be seen as a demonstration that HN (or any other similar interest community, or probably the internet in general) functions like a hive mind, a collective consciousness. I read that post by Carmack and wondered exactly that: "I wonder what Pascal's Mugging is, interesting".

Sometimes you go around and wonder various things, but don't look them up or act on them in the moment, and then sometime later your subconscious mind serves up an answer: perhaps when you are more relaxed you just think of the answer, or it happens to come up in a certain context and the subconscious lights it up there. It might be a word that you see randomly in a newspaper, or you think of a person that was related when you meet them, etc.

And here the subconscious did the same wondrous thing, except it wasn't even strictly my personal subconscious; it was the group subconscious that found the information and presented it.


I have noticed this pattern before: Someone mentions a topic deep inside a thread on HN and next thing you know that topic is on the front page.


It could also be confirmation bias, though. The Baader-Meinhof thing.


Looking forward to seeing the Baader-Meinhof phenomenon on the front page later today or tomorrow.


I only had to wait for the next comment.


But how useful is it? Pascal's Mugging was submitted to HN and discussed 8 years ago. If the collective consciousness keeps needing reminders of what it once knew, this is probably still inferior to a single intelligent person who reads a lot and remembers it all.


The population using HN takes in new members continuously, and the fraction who read everything posted eight years ago is very small.


As a somewhat intelligent person, I still need reminders often too. The trick is to mostly forget the unneeded stuff, maximally retain the useful stuff, and accurately discern between the two. The goal is not to remember everything.

Also it would be strange to expect this group consciousness to never need reminders when it continuously has new people added to it, who are unfamiliar with old things.

I think consciousness is more about actively rehashing information and updating it, readjusting the worldview to a constantly changing environment - not so much about building one single model that would somehow know everything. Such models tend to be stale, or abstract and philosophical to the point of uselessness.


It’s new to me :)


I hereby declare that I will expose everyone who gives in to Pascal's mugger to a negative utility so great compared to whatever the mugger promises that it is always best to keep the wallet. I could be telling the truth. You're welcome.


I don't know the background to this but if I understand correctly, it kind of pivots on the probability that the mugger is indeed an Operator from the Seventh Dimension, that Pascal places at 1 in a quadrillion.

In that case, I have to wonder where this estimate comes from? I get that it's just an arbitrary number and that any number would do, as long as it wasn't zero, but that's exactly the point: why can't Pascal place the probability of his mugger being an Operator from the Seventh Dimension at zero?

Is there any evidence at all to support the mugger's claim? Is there any evidence at all that there is such a thing as a "Seventh Dimension" for which the only thing we know is that its "Operators" have magickal, utility-maximising powers?

And does the whole thing only work if we assume that the probability that there is such a place and such people is more than 0?


If you set the probability at zero, you won't be convinced when they actually are an operator from the seventh dimension. That is to say, you run into the opposite problem of being Pascal's Muggle [1].

1: https://www.lesswrong.com/posts/Ap4KfkHyxjYPDiqh2/pascal-s-m...

> A wind begins to blow about the alley, whipping the Mugger's loose clothes about him as they shift from ill-fitting shirt and jeans into robes of infinite blackness, within whose depths tiny galaxies and stranger things seem to twinkle. In the sky above, a gap edged by blue fire opens with a horrendous tearing sound - you can hear people on the nearby street yelling in sudden shock and terror, implying that they can see it too - and displays the image of the Mugger himself, wearing the same robes that now adorn his body, seated before a keyboard and a monitor.

> [...] "Unfortunately, you haven't offered me enough evidence," you explain.


In no particular order:

* Helping a googolplex of people immediately vs over a period of time are two different complexities of action.

* Recall that hypotheses are selected from an ambient pool of possibilities. Then we might imagine that some hypotheses dominate others, so that regardless of how much evidence is offered, we always insist that the evidence supports a simpler alternative. To wit:

"Well, if I'm not a Matrix Lord, then how do you explain my amazing powers?" asks the Mugger.

"Street magic," you say. "Very impressive sleight of hand. Perhaps some smoke, mirrors, lasers, assistants."

* A Matrix Lord asking $5 of a person on the street in order to commit miracles is inherently irrational. If they just wanted $5, or wanted to deprive the person of $5, or wanted to humiliate and embarrass the person, or force them to accept certain philosophical truths, then those all could be achieved via Matrix Lordery. Therefore the Lord in this story is being a pointless dick, and it's silly to expect rational arguments to be part of the conversation. To wit:

"Just give yourself $5. Give yourself any reward you like, for helping people; it's not my place to set or fulfill the price of such powerful entities, is it?" you ask.

"But...but don't you want the feeling of doing good?" asks the Mugger.

"Not really, no," you reply. "I have investments and equity already, and those dollars already have ripples that affect people far beyond my direct control. I don't feel much of anything about those investments. And it would be irrational for me to value a $5 investment more than $5. Really, if you can do all of this good, then you should turn yourself into an exchange-traded fund, and let people buy your time to do good in the world," you muse.

"But...but this offer is for you, and you alone," the Mugger insists.

"Okay, but why me? Let's talk about the Self-Sampling Assumption!" you say. The Mugger groans.


That's a good point: if the Mugger is an Operator from the Seventh Dimension and he has such great magickal powers, why does he need the 10lb in Pascal's wallet? Or, if he does need them, why can't he just get them?

Like, we are asked to believe that, given that the Mugger is an Operator from the Seventh Dimension, he has the power to offer 10 quintillion Utils to Pascal, but not the power to just take the 10lb from his wallet.

I think the whole paradox can still stand, given that the Mugger can then just offer an amount of Utils that compensates for the much smaller conditional probability of the Mugger being only sorta omnipotent.

On the other hand, I think we can easily resolve the paradox by inserting the Crowbar of Cynical Jadedness: if it sounds too good to be true, then the probability of it being both good and true is zero (it can be one or the other with a non-zero probability, but not both). 10 quintillion utils (or however many) sounds too good to be true, so it can't be true. A Used Car Salesman will never offer you a good deal. The Mugger is only lying to get Pascal's money.


Thanks, that's an interesting read. But I don't think it addresses my question: why shouldn't Pascal place the probability that the Mugger is an Operator from the Seventh Dimension at _zero_ (rather than an infinitesimally small number)?

The point is that, at the time when the Mugger declares himself to be an Operator from the Seventh Dimension who can offer large rewards etc., there is no evidence to suggest he's telling the truth. No evidence at all. Accordingly, the probability that he's telling the truth must be zero. Where does a non-zero probability value come from?

Are you then saying that the probability of any reward should never be placed to zero because that would not maximise rewards?


Probability zero is the same as saying that it would take infinite evidence to convince you. Even if someone provides amazingly convincing evidence, better than you've ever seen, a flat 0 or 1 eats it.

> there is no evidence to suggest he's telling the truth. No evidence at all. Accordingly, the probability that he's telling the truth must be zero.

I don't think that logic works. What if the claim was "I have a five dollar bill in my pocket"?


>> Probability zero is the same as saying that it would take infinite evidence to convince you.

That assumes I can't go back and change my earlier beliefs. But I don't see why that's necessary. If I have no evidence that X is true at time t, I assign a probability of 0 to it. If I acquire evidence that X is true at time t+1, I throw out the 0 and assign a higher probability to X.

The world changes all the time. Why am I condemned to hold on to obviously unsound beliefs for all eternity?

>> I don't think that logic works. What if the claim was "I have a five dollar bill in my pocket"?

That depends. I've seen five dollar bills coming out of peoples' pockets before (actually, I haven't, because dollars are not common where I live, but OK). I don't have to assign a zero probability to that. I have some evidence that it's possible.

But I have no evidence that there even exists such a thing as a Seventh Dimension etc.


> If I acquire evidence that X is true at time t+1, I throw out the 0 and assign a higher probability to X.

> The world changes all the time. Why am I condemned to hold on to obviously unsound beliefs for all eternity?

Normally when you update a probability, how much you change it is based on the strength of the evidence. If your probability of something is ultra-low, and you see an event that's a million times more likely if that thing is true, your new probability is roughly a million times higher. And for a probability that's sufficiently close to 0 or 1, that pit is basically impossible to climb out of.
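A minimal sketch of that update rule in odds form (the prior and likelihood ratio below are illustrative placeholders):

  # Sketch: Bayesian updating in odds form. Posterior odds = prior odds times
  # the likelihood ratio, so a prior of exactly 0 never moves, whatever the evidence.
  def update(prior, likelihood_ratio):
      prior_odds = prior / (1 - prior)
      posterior_odds = prior_odds * likelihood_ratio
      return posterior_odds / (1 + posterior_odds)

  print(update(1e-9, 1e6))  # ~1e-3: strong evidence lifts even a tiny prior
  print(update(0.0, 1e6))   # 0.0: no finite evidence ever moves a zero prior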

Do you have an alternate method to suggest? What's the calculation you would use? Note that "I'm seeing this with my own eyes" should only give you so much change, because you might have accidentally taken a whole bunch of hallucinogens.

> But I have no evidence that there even exists such a thing as a Seventh Dimension etc.

If you're setting a hard cutoff based on the silly Seventh Dimension stuff, then you still fall for the version where I come to your house and sign a document giving you a giant pile of money. That's how mortgages and business deals work every day after all.

> How about the statement "Hillary Clinton is the President of the United States"? What probability should I assign to that? I know that the PotUS is Donald Trump. Does Cromwell's Rule mean that I have to believe that Hillary Clinton is the PotUS at least a little, because otherwise I will never be able to believe it if she ever gets elected president?

Not for that reason. But you have to factor in the chance that you got confused, or your brain is failing to make new memories and it's actually 2022, or you just woke up from a really detailed dream about the wrong president.


>> Do you have an alternate method to suggest? What's the calculation you would use? Note that "I'm seeing this with my own eyes" should only give you so much change, because you might have accidentally taken a whole bunch of hallucinogens.

I don't understand. How would it happen that I've accidentally taken a whole bunch of hallucinogens? I never go near that kind of stuff.

>> Not for that reason. But you have to factor in the chance that you got confused, or your brain is failing to make new memories and it's actually 2022, or you just woke up from a really detailed dream about the wrong president.

I don't see how that would happen either. Why would my brain fail to make new memories? Why are you saying that this might be the case?

I think this is just enhancing the deep unreality of what you are proposing. If we need to assume that I'm in some kind of weird mental state that I have no reason to be in for your whole proposition to make sense then I really don't see the point of it, other than perhaps an interesting theoretical game.


You can't come up with a one in a billion scenario that you would accidentally take a hallucinogen?

You never ever have a dream that seems real for a few moments?

And failing to make new memories would be a specific but possible injury.

We're supposed to be working with very low probabilities here. That's the whole point of the thought experiment. If you're going to round anything below one-in-a-million to exactly zero then that's your prerogative, and it works in everyday life, but it's objectively wrong; it would falsely reject the idea of lightning strikes and winning the lottery.

> I think this is just enhancing the deep unreality of what you are proposing.

You didn't even reply to the part about removing all the silly stuff and cutting it down to just "guy offers to sign a document for lots of money"...


But I'm not hallucinating and I'm not dreaming either.

Also, I don't know why you're saying I'd round anything below one-in-a-million to zero. I wouldn't. But I would assign zero probability to a mugger being an Operator from the Seventh Dimension, because that's a patently absurd idea that I see no good reason to grace with even the slightest degree of belief.

I mean, if you take what you are saying here at face value I actually have to assume that there is a probability that there exists a Seventh Dimension with magickally powerful Operators inhabiting it. In real life, not just in the context of Pascal's Wager. Because I can't assign zero probability to anything.

That just doesn't make any sense at all.

>> You didn't even reply to the part about removing all the silly stuff and cutting it down to just "guy offers to sign a document for lots of money"...

Apologies. I didn't understand what you meant with that and I didn't want to clutter the comment space with more clarifying questions.


> But I'm not hallucinating and I'm not dreaming either.

You've never been unsure if something actually happened for a moment? Because even if you only spend a few moments like that per month, we can assign it a probability.

> I mean, if you take what you are saying here at face value I actually have to assume that there is a probability that there exists a Seventh Dimension with magickally powerful Operators inhabiting it. In real life, not just in the context of Pascal's Wager. Because I can't assign zero probability to anything.

You don't think there's any chance that you fundamentally misunderstand the universe and that there are powerful secrets being actively hidden from you? It doesn't have to be real 'magic', just something too beyond your understanding. I think there's some chance of that. I'd say less than 1% and more than 1 in a googolplex, to put some amusingly loose bounds on it. And then you have to factor in the chance the guy picks you in particular to mess with, but that's not an unreasonably large number.

> Apologies. I didn't understand what you meant with that and I didn't want to clutter the comment space with more clarifying questions.

You're objecting so specifically to the seventh dimension stuff, I thought it would be simpler to cut all that out. The point of the thought experiment is just a very likely but very positive act. And the way to have a productive conversation is to respond to the strongest form of the argument. So in that version, you can't just declare that the person in front of you isn't a rich guy screwing around and giving money to people that accept, because the probability of that is clearly not zero.


>> The point of the thought experiment is just a very likely but very positive act.

(You mean very _un_likely eh?)

The reason I'm objecting specifically to the seventh dimension stuff is that it's just something fanciful that someone came up with, so it's obviously fake and I don't have to believe it even a little bit.

The probability of someone just handing out money (if I read you correctly this time) is very low, but not zero, yes. But I'm contesting the claim that I'm never allowed to assign 0 probability to anything, because then I'm at risk of losing out. Sometimes, you don't risk being wrong by disbelieving something.

Anyway I'm getting more and more confused by this conversation. I don't think it's getting anywhere. Thanks for your patience- you have the floor.


As the saying goes, "zero and one are not probabilities". Like 'Dylan16807 says, they eat evidence. When doing maths, when transforming to log probabilities, 0 becomes -Infinity; when transforming to odds ratios, 1 goes to infinity.

A longer explanation: https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-ar....

See also https://en.wikipedia.org/wiki/Cromwell%27s_rule, mentioned by 'edflsafoiewq.


Yes, I get the arithmetic, thank you. What I don't get is why I'm forced to perform it in the way that you say. Why do I have to hold on to that 0 probability no matter what happens? Clearly it's much more reasonable to change my mind given that the world has changed and assign a non-zero probability to an event for which I now have evidence. Why would I not?

>> See also https://en.wikipedia.org/wiki/Cromwell%27s_rule, mentioned by 'edflsafoiewq.

How about the statement "Hillary Clinton is the President of the United States"? What probability should I assign to that? I know that the PotUS is Donald Trump. Does Cromwell's Rule mean that I have to believe that Hillary Clinton is the PotUS at least a little, because otherwise I will never be able to believe it if she ever gets elected president?


I'd say it's because the process of updating probabilities in light of new evidence involves multiplication, which yields you nothing new for 0 and 1. It's not resetting values of variables.

> Does Cromwell's Rule mean that I have to believe that Hillary Clinton is the PotUS at least a little, because otherwise I will never be able to believe it if she ever gets elected president?

Yes. And the justification for that is that there's a tiny but non-zero possibility that she may really be the president, and it's your senses that deceive you. Perhaps you're the protagonist of your own Truman Show. Or perhaps it's some peculiarity in your brain that prevents you from accepting who the real president is. Integrating other evidence around you, you can assign ridiculously low probabilities to these scenarios, but you can't assume zero probability. After all, there exist people with such problems, and they tend to end up in treatment when someone realizes what's going on with them.

And the nice thing is that not using 0 and 1 makes the whole thing add up to reality in an elegant fashion. Allowing 0 and 1 breaks that.


>> And the nice thing is that not using 0 and 1 makes the whole thing add up to reality in an elegant fashion. Allowing 0 and 1 breaks that.

I really don't see the "elegance" in having to accept that I may be in a strange mental state where reality is unknowable, in order to describe reality.

I mean, at the end of the day, if you follow down that path you find yourself having to argue that Hillary Clinton might, actually, be the PotUS, and I may be brain damaged or something, you never know. That's just absurd and there's no practical point in it. It's just a waste of time.


I would agree, that is not enough evidence. Some sort of advanced display technology causing the apparition provides exactly the same explanatory power, and would require no changes to our understanding of the universe and the laws of physics.


Why, instead of the probability being 0 or 1 in a quadrillion, is the probability not simply undefined?

Maybe I am not well-versed in Bayesian thinking, but I am unable to understand assigning probabilities to events that have not occurred before, and for which there is no related numerical data.

Making the probability in Pascal's scenario undefined renders any calculation of the risk involved null and solves the problem, while making it possible for future evidence to assign a defined probability (say you were previously approached by 10 Pascal's muggers and 2 turned out to be telling the truth).


> I beseech you, in the bowels of Christ, think it possible that you may be mistaken.

https://en.wikipedia.org/wiki/Cromwell%27s_rule


I think this is the same idea as in another comment above, about Pascal's Muggle. I think there's a bit of confusion here though. I can adjust the probabilities of any event given new evidence.

For example, at this point in time I believe that the probability that I can fly if I flap my arms up and down is zero. I have no evidence that this is possible and I understand enough of the relevant physics to know that this is not just improbable, it is impossible.

However, if tomorrow I flapped my arms and found that I could fly, there would be nothing stopping me from re-evaluating my belief and assigning a higher probability to the chance that I can fly if I flap my arms.

But I think the problems begin with the misguided ambition to be able to predict the future even when there is no evidence to support any prediction. You can't know what you can't know. You can assign any probability you like to what you can't know, but even if you end up assigning the right probability that will be the result of blind chance, not the result of correct reasoning.

Anyway this is why I prefer logical inference to probabilistic inference. I understand that I'm in a minority on this, but for me it makes a lot more sense to maintain a state of provisional belief with an absolute value (in {0,1}), provisional in the sense that new evidence can always change your belief, than to live in a perpetual state of uncertainty which never resolves itself no matter how much evidence you see, because there is always a chance that you're wrong. There always _is_ a chance that you're wrong but it just seems cumbersome to have to maintain a ledger of competing probabilities for everything that has happened, and everything that hasn't yet happened, just on the off chance that anything can happen, including mutually exclusive events.

In principle, anything might happen. In practice, not everything will. There must be a sensible way to figure out what we need to prepare for and what we can safely ignore. And the whole Pascal's Mugger paradox, while it's meant to attack Pascal's Wager's logic, ends up for me as an illustration of why probabilistic inference is deeply borked.


Wow, I always thought of Pascal's Mugging as a satirical illustration of how stupid it is to take enormous (or extremely small) numbers seriously. Turns out people like John Carmack (see other comments in this thread) aren't picking up on the satire. Or am I reading him wrong? I think he's smarter than me, so what am I missing?

Given Bostrom's general love for mixing tiny probabilities with enormous outcomes, what's the point of this article? It seems delightfully self-critical. How can the conclusion be anything other than that we should not be taking the AI/singularity crowd too seriously, as doing so would be akin to voluntarily handing over a wallet to a mugger?


Sounds more like you are taking the argument too seriously and are trying to read more into it than what it is.

It is just a philosophical story created to provide a certain line of reasoning, a certain possible structure of an argument. People are free to apply this argument however they want; it doesn't prove anything by itself, it doesn't say anything about the world, it's up to the user of it. It does not make any conclusions, it's just a story. Carmack used it to illustrate his own beliefs (which are, therefore: AI is possible and the payoff from AI is extremely high, even if the probability of it during the next couple of years is low).

Carmack did not mean that you should believe or not believe in AI or anything else based on this argument. He just used it to illustrate what he himself chose to do. He did not base it on this argument alone, he did not just hear the argument and suddenly decide "now because of that I have to work on AI", he based it on his experience and knowledge of the actual field. The mugging argument is just a cute way of quickly explaining it.

> I think he's smarter than me, so what am I missing?

Wisdom and context.


Paradoxes such as this one point at some rarely discussed assumptions about the utility function. The major one is that it exists. Or exists as something more than a model which is only good for a range of payoffs.


The point of paradoxes like this is to demonstrate that even in our incredibly simple toy models of agents, we still run into issues. These paradoxes help point out weaknesses of the models and act as new desiderata for better models.


Expected value != expected utility.

Just taking this seriously pretty much resolves all these problems.

It makes sense to take high cost/reward, low probability events seriously if the expected utility works out. Examples include reducing existential risk substantially, even at cost to short-term utility (say, in the form of well-being) to increase the probability that we can eventually figure out how to optimally arrange matter and energy and trigger a utilitronium shockwave.


In case someone is not familiar with Pascal's wager, here is a table that shows the expected values: https://en.wikipedia.org/wiki/Pascal%27s_wager#Analysis_with... (the appearance of infinity breaks the decision theory...)
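A minimal sketch of why the infinity in that table is the problem (the prior q and cost c below are illustrative placeholders, not from the article):

  # Sketch: with an infinite payoff, any nonzero prior q makes "believe" dominate
  # regardless of the finite cost c - which is what breaks the decision theory.
  from math import inf
  q, c = 1e-15, 1.0
  print(q * inf + (1 - q) * (-c))   # inf  -> expected value of believing
  print(q * -inf + (1 - q) * 0.0)   # -inf -> expected value of not believing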


Pascal's Wager fails for the simpler reason that it ignores the possibility that believing in god could send you to hell.

I see a similar problem here. Pascal should consider the possibility that the mugger will use his (Pascal's) silly action as the basis for punishment, in place of the promised reward. Slim probability, potentially very high cost. The fact that this risk has gone unstated doesn't mean it isn't there.


I agree. This problem is still useful though, because to analyse a lot of counterfactual muggers, you must be able to analyse one mugger!


I agree that both problems are worth analysing, even if they're both absurd. Same goes for the Hangman's Paradox, a personal favourite. [0]

I gather that Pascal had never intended Pascal's Wager to be a watertight argument, it was intended more as a plaything for showing how unlikely it was anyone could make a watertight case for the existence of God.

[0] https://en.wikipedia.org/wiki/Hangman%27s_Paradox




