
A Primer on the Doomsday Argument - Tomte
http://www.anthropic-principle.com/?q=anthropic_principle/doomsday_argument
======
dogecoinbase
Other than the fact that you can't coherently reason based on your own
existence (existence is not a predicate), one should rightfully be wary of an
argument that would have been valid _and false_ at every point in the past.

The Doomsday Argument is not an argument in favor of its conclusion -- it's a
reductio ad absurdum that demonstrates why this type of reasoning is invalid.

~~~
andybak
Is this not a weakness in many (all?) deductive arguments? i.e. if we throw
out this argument then we have to throw out much of science.

~~~
dogecoinbase
In a general sense, I don't believe so. While "there exists an experimenter"
is an inherent assumption in many deductive chains, there's no attempt to use
the existence of the experimenter as a prior in a Bayesian calculation. The
logical flaw in the DA comes when you say "I, the reasoner, am likely to exist
at a random point in the history of humanity", because the reasoning could not
take place without your existence, and therefore the hypothesis is not
falsifiable (no matter when in the history of humanity the reasoning
occurred).

------
spacehome
>Now we modify the thought experiment a bit. We still have the hundred
cubicles but this time they are not painted blue or red. Instead they are
numbered from 1 to 100. The numbers are painted on the outside. Then a fair
coin is tossed (by God perhaps). If the coin falls heads, one person is
created in each cubicle. If the coin falls tails, then persons are only
created in cubicles 1 through 10.

>You find yourself in one of the cubicles and are asked to guess whether there
are ten or one hundred people? Since the number was determined by the flip of
a fair coin, and since you haven’t seen how the coin fell and you don’t have
any other relevant information, it seems you should believe with 50%
probability that it fell heads (and thus that there are a hundred people).

The conclusion in the last sentence is incorrect in an incredibly subtle way.
Since 10 times more people are in cubicles in the heads case, the probability
that you find yourself in a cubicle at all is ten times higher in that case,
which shifts your posterior. By Bayes' theorem:

    
    
        P(HEADS | WAKE UP)
          = (P(WAKE UP|HEADS) * P(HEADS)) / ( P(WAKE UP|HEADS) * P(HEADS) + P(WAKE UP|TAILS) * P(TAILS) )
          = (10 * P(WAKE UP|TAILS) * P(HEADS)) / ( 10 * P(WAKE UP|TAILS) * P(HEADS) + P(WAKE UP|TAILS) * P(TAILS) )
          = 10 / (10 + 1)
          = 10 / 11
    

or about a 91% chance of heads given that you find yourself in a cubicle.
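A quick Monte Carlo sanity check of that arithmetic (a sketch; it models "you" as a random observer drawn from everyone who gets created, which is precisely the assumption under dispute elsewhere in this thread):

```python
import random

def observer_fraction_heads(trials=100_000):
    # Each trial: God flips a fair coin; heads creates 100 people,
    # tails creates 10. Tally which kind of world the created
    # observers find themselves in.
    heads_observers = total_observers = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        created = 100 if heads else 10
        total_observers += created
        if heads:
            heads_observers += created
    return heads_observers / total_observers

print(observer_fraction_heads())  # ~0.909, i.e. 10/11
```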

~~~
lisper
> the probability that you find yourself in a cubicle at all is ten times
> higher in that case

No. That you are in a cubicle is a given fact, so the probability that you
find yourself in a cubicle is unconditionally 1.

~~~
bluecalm
>>No. That you are in a cubicle is a given fact, so the probability that you
find yourself in a cubicle is unconditionally 1.

But it's not probability 1 that you would be in a cubicle at all. The paradox
disappears if the problem were formulated like this: "God shuffles DNA, and
for every cubicle there is a 0.0000000000001 chance that it shuffles up you."
Or: "God creates you first, then flips a coin and puts you at random in one of
the created cubicles."

The formulation of the problem should mention which one it is. If it doesn't,
we are back to guessing what God does (similar to the two-envelope problem,
which comes down to guessing the sponsor's preferences for amounts). As there
are infinitely many ways God could decide to create humans in cubicles, you
can't answer that without giving some guess (a prior) for the probability
distribution over those choices.

~~~
spacehome
There are only finitely many ways to arrange matter in a finite volume under a
finite temperature. If it's safe to assume that anything we could classify as
"human" will fit inside a sphere of radius 1 light-year and has a temperature
under 10^20 K, there are finitely many humans.

~~~
bluecalm
Yeah, but it's not given that God chooses at random who goes into a cubicle.
He might, for example, have created you first and then put you in a cubicle
regardless of whether there are 100 of them or 10. I agree with your
resolution of the paradox though. It's reasonable to assume - having no
information about the process - that God creates humans randomly, so it's 10x
more likely that you are created in a world with 100 cubicles than in a world
with 10 cubicles. In my mind there is no paradox at all there.

------
kazinator
Is there a strong, formal version of the Doomsday argument not based on
cubicles and such? The obvious flaw in that version is that it is based on
finite possibilities: that is, finding oneself in this or that kind of
universe out of so many possible ones.

For starters, we have no idea how likely our universe is; we know the
numerator is 1, but what goes in the denominator? Is there even such a number?
If so, it is something so huge that the probability is vanishingly close to
zero.

Then what about the parallel-universes hypotheses: what if the universe is
constantly fragmenting into multiple futures? In one future, I will submit
this comment and close the tab, then check my e-mail. In another, I keep the
tab open and go make a coffee.

This proliferation of multiple futures from any moment would tend to provide a
way for doomsdays to be circumvented.

If you are cloned into N futures in this moment, and you're killed in N-1 of
them, then you don't know it; by the anthropic principle itself, only the
surviving future matters; there is no consciousness in the others. The
surviving future has no idea about the size of N, either.

~~~
saalweachter
The big gaping hole I see is that the blog post doesn't apply the same
question to earlier epochs. If a person in 1000 CE took up the same question,
would they have predicted Doom Early or Doom Late? If everyone for the last
ten thousand years would have predicted Doom Early, that seems like a
piss-poor predictor.

~~~
AlexMennen
That's just a special case of a general feature of probabilistic arguments: a
small but positive fraction of the times they are used, they will give wildly
misleading predictions. If we assume that everyone uses the doomsday argument
with a uniform improper prior on the total number of people who will ever
live, then all of them will conclude with 95% probability that they are not
one of the first 5% of people to ever live (that is, that no more than 20
times the past population will be created in the future), and 5% of them will
be wrong, which is exactly how probabilities are supposed to work.
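A minimal illustration of that bookkeeping, with a hypothetical fixed total population N (the number is arbitrary):

```python
# Each person of birth rank b reasons: "with 95% probability I am not
# among the first 5% of all humans ever born." With true total N,
# exactly those with b <= 0.05 * N turn out to be wrong.
N = 1_000_000  # hypothetical total number of humans ever born

wrong = sum(1 for b in range(1, N + 1) if b <= 0.05 * N)
print(wrong / N)  # 0.05: exactly 5% of the reasoners are mistaken
```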

------
baddox
This is a much better discussion of the Doomsday Argument and the
self-indication and self-sampling assumptions:

[http://www.scottaaronson.com/democritus/lec17.html](http://www.scottaaronson.com/democritus/lec17.html)

The whole course is also very good, and will probably be interesting to many
HN users.

~~~
bluecalm
Very nice read, from this lecture:

>>So you're in the room. Conditioned on that fact, how worried should you be?
How likely is it that you're going to die?

    
    
        A: 1/36.
        Scott: OK. That would be one guess. 
    

>>One answer is that the dice have a 1/36 chance of landing snake-eyes, so you
should be only a "little bit" worried (considering). A second reflection you
could make is to consider, of people who enter the room, what the fraction is
of people who ever get out. Let's say that it ends at 1,000. Then, 110 people
get out and 1,000 die. If it ends at 10,000, then 1,110 people get out and
10,000 die. In either case, about 8/9 of the people who ever go into the room
will die.

But it's not really a problem. If you flip a coin, then 10 coins, then 100
coins, and continue until you get all heads, then the vast majority of flipped
coins will be heads, while any particular coin still has a 1/2 probability of
landing heads. There just isn't any paradox, or even anything surprising.
Similarly, here it's in my opinion obvious that the chances of dying are 1/36,
and they are the same for all people in the situation. The fact that most
people die isn't a paradox at all.
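The same point can be checked against the quoted snake-eyes setup itself (a sketch; the batch sizes 10, 100, 1000, ... and the 1/36 chance per round are taken from the quote above):

```python
import random

def play():
    # Batches of 10, 100, 1000, ... enter the room. Each round two
    # dice are rolled; snake-eyes (probability 1/36) kills the
    # current batch and ends the game.
    batch, got_out = 10, 0
    while True:
        if random.randrange(36) == 0:   # snake-eyes
            return got_out, batch       # (survivors, deaths)
        got_out += batch
        batch *= 10

games = [play() for _ in range(10_000)]
avg_died = sum(d / (s + d) for s, d in games) / len(games)
print(avg_died)  # ~0.9: most people who ever enter the room die,
                 # even though each round is fatal with chance 1/36
```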

~~~
baddox
I think it's still a "problem." Which one is the "correct probability" that
you're going to die? If, instead of dying, a red light in the room might turn
on after a minute, how would you place a bet about whether the red light would
turn on?

------
amalcon
So, there are a few problems here:

The argument assumes that the only possible end-state for humanity is
doomsday; that is, there are a finite number of humans, and when there are no
more doomsday has occurred. The article touches on this with the infinite
humans / transhuman stuff at the end.

The argument assumes exponential growth in the number of humans until the
terminal state, but this is not a foregone conclusion. While this doesn't
change the reasoning in a strict sense, it may put the "soon" cutoff millions
of years in the future.

Finally, the prior probability is chosen arbitrarily. It does matter quite a
bit. Take the exponential argument for evidence as to why: For any given
probability p that doomsday will occur on a given day, it is more likely today
than on any single other day! This is simply because if doomsday occurs today,
it cannot occur on any future day, making it less likely on each successive
day.

Except that if p=1E-10, doomsday is fantastically unlikely to occur this year.
The Bayesian argument has the same problem, you just start seeing it at a
different value of p.
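A sketch of the geometric model behind that point (the day count and the value of p are illustrative):

```python
# Constant per-day doom chance p: P(doom on day n) = p * (1 - p)**(n - 1),
# which is maximized at n = 1 ("today") for any 0 < p < 1.
p = 1e-10

def p_doom_on_day(n):
    return p * (1 - p) ** (n - 1)

assert p_doom_on_day(1) > p_doom_on_day(2) > p_doom_on_day(365)

# Yet doom within the next year is still fantastically unlikely:
print(1 - (1 - p) ** 365)  # ~3.65e-08
```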

------
typomatic
Let's go back to step II:

Imagine that the coin that we're flipping isn't fair, and comes up heads with
prior probability p. Further, suppose that the ratio between the small
population and the large population is R (in the article this is R = 10 / 100
= .1). In this situation, upon discovering that you're inside the small
population, the posterior probability of heads ends up as

    
    
        p / (R + p * (1 - R))
    

So what if the coin is very unlikely to be heads instead of 50/50? If p = .001
(and we leave R = .1), our estimate of the probability of heads after we
observe that we are in the small population only comes up to about 1%.
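Under that reading (p is the prior of the small-population outcome, R the ratio of population sizes), Bayes' rule gives a posterior we can check numerically; this is a sketch of that calculation:

```python
def posterior_small(p, R=0.1):
    # P(small world | you're in cubicles 1-10). In the small world
    # every observer is in cubicles 1-10 (likelihood 1); in the
    # large world only a fraction R of observers are.
    return p * 1.0 / (p * 1.0 + (1 - p) * R)

print(posterior_small(0.5))    # ~0.909: the fair-coin 10/11 case
print(posterior_small(0.001))  # ~0.0099: about 1%, as above
```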

Thus, the real fallacy in the argument is the assumption that the choice of
prior is unimportant. (If anyone tells you the prior is unimportant in any
situation, they are wrong.) With a suitable estimate of the prior likelihood
of Doom Soon, the posterior likelihood of Doom Soon remains low.

~~~
titanomachy
I played around with the model a bit, and based on a couple of assumptions it
can be reduced to

    
    
        P(Doom Soon) = x/(1+x) where
        x = B/b * d
    

where B is the number of humans born in the Doom Late scenario, b is the
number of humans born so far, and d is the prior probability of Doom Soon.

Assumptions: a) the number of humans born between now and Doom Soon is
negligible and b) the Doom Late scenario has many more humans than Doom Soon
(B + b ~= B).

Notice I made no assumptions about the prior, d. Of course d does matter, but
the point is that as long as the number of humans in Doom Late is assumed to
be large enough, the probability gets close to one even for very small d.

For example, if we use a Doom Late population of 200 trillion like in the post
and roughly 100 billion births so far, we get a 95% probability of Doom Soon
when d is around 0.01.

That being said, I am still fairly unconvinced by this argument. It would have
every intelligent species continually concluding that they are about to go
extinct, right up until the moment that they either _do_ go extinct or they
achieve immortality and stop reproducing.
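A sketch of that scaling, using the formula above (the b and d values are illustrative):

```python
def p_doom_soon(B, b, d):
    # P(Doom Soon) = x / (1 + x), with x = (B / b) * d
    x = (B / b) * d
    return x / (1 + x)

b, d = 100e9, 0.0001  # ~100 billion born so far, small prior (assumed)
for B in (200e12, 2e16, 2e19):
    print(B, p_doom_soon(B, b, d))
# The posterior climbs toward 1 as the hypothetical Doom Late
# population B grows, even with the prior d held fixed.
```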

~~~
typomatic
As B -> \infty, also t -> \infty.

I'm reminded of the Fight Club quote: On a long enough timeline, the survival
rate for everyone drops to zero. :)

~~~
titanomachy
Haha nice. Although, in this case we are looking ahead, basing our short-term
survival on what a long-term survival would theoretically look like. So the
more people there are in the hypothetical Doom Late, the more likely Doom Soon
becomes. The more I think about this the more absurd it seems.

~~~
typomatic
Yeah, all this analysis (both what we've done and what's in the original
article) is facile -- a proper Bayesian treatment would have continuous
priors/posteriors that would be a little more informative than an either-or.

The problem you're seeing here is that if you make your possibilities "Humans
live forever (even past the heat death of the universe)" or "All humans die in
the next 10 minutes", you'll find that the chance that humans die in the next
10 minutes is really absurdly high.

------
lisper
Here's a simpler way to see what is going on. Number all the humans that will
ever live in order of their birth, from 1 to N. The odds of your birth-order
number (call it B) being in the range 1..M, for M < N, are M/N. The larger N
is, the less likely it is that your B is as small as it is. So the odds of N
being large (relative to B) are small.

Another way to look at it: half of the humans who ever live will have fewer
people born after them than before them. Your a priori odds of being such a
person are exactly 1/2.

------
bicknergseng
"If the Doomsday argument is correct, what precisely does it show?"

That philosophers are not statisticians?

~~~
AnimalMuppet
That philosophers take their arguments too seriously?

I mean, this is fine as an interesting thought exercise. To the degree that
you take it seriously as applying to the real world, to that degree you
need... something. Perspective? To not take fine-sounding argument so
seriously? To take a long walk? To get a life?

Not everything that cannot be logically disproven is true. Not everything that
cannot be logically disproven should be acted upon as if it were true. It is
useful to know when to smile at the philosophical earnestness and then just go
on with your life.

------
altcognito
Is this a restatement of the Fermi Paradox?

[http://en.wikipedia.org/wiki/Fermi_paradox](http://en.wikipedia.org/wiki/Fermi_paradox)

~~~
cjslep
No, it attempts to quantify how many humans will be alive at a point in the
future.

[http://en.wikipedia.org/wiki/Doomsday_argument](http://en.wikipedia.org/wiki/Doomsday_argument)

------
abruzzi
Ok, this may be mathematically naive, but...

In the cubicle example, discovering that you are in cubes 1-10 makes the 1-10
scenario much more likely than it was before you made that discovery, by a
simple application of Bayes' theorem.

With the 100 billion or 100 trillion people example, you are person 60
billion, theoretically making 100 billion much more likely than if you didn't
know where you fell. That probability approaches one if you work off the
assumption that Doom Late means hundreds of millennia of humans spreading
across the galaxy at our current growth rate: the higher the total possible
humans in Doom Late, the higher the probability that being in the first 100
billion indicates there will only be 100 billion (or similar numbers).

One difference I see between the two scenarios is time. The 100 cubes are not
filled in sequence. In the 100 billion / 100 trillion example, every single
person that has ever lived is in the first 100 billion, until they're not, and
then everyone is not in the first 100 billion. I don't know whether that
affects the math, but it affects my thinking about the problem.

------
livingparadox
Perhaps I'm ignorant of some deeper concept... but it seems that this argument
is deeply flawed, on the basis that it presumes that finding yourself in
cubicles 1-10 somehow indicates that cubicles 11+ are probably vacant.

In both doom soon and doom late, cubicles 1-10 are occupied. There's no aspect
of doom late that would be made less likely as a result of cubicles 1-10 being
occupied.

The reverse works well, though. Being in one of cubicles 11+ /does/ eliminate
the possibility of doom soon. So only passing the cubicle threshold gives any
meaningful information about which of the two scenarios is true.

~~~
rm445
It's probably not worth worrying about the fate of humanity till you've
figured out the rooms example. Since you're on this site, I'm sure you can
trivially knock up a script to simulate what happens. Or just draw the
probability tree. In any case, the relevant question is: out of those who find
their room number is less than ten, what is the probability that the coin toss
meant only ten rooms are occupied? The answer is 90.9% - we could go round in
circles trying to choose words that everyone understands, but the numbers
speak for themselves.
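For instance, a simulation along those lines might look like this (a sketch; it encodes the self-sampling step by treating "you" as a uniformly random occupant, which is exactly the assumption under dispute):

```python
import random

def p_ten_rooms_given_first_ten(trials=200_000):
    ten_rooms = in_first_ten = 0
    for _ in range(trials):
        rooms = 10 if random.random() < 0.5 else 100  # coin toss
        you = random.randint(1, rooms)                # your room number
        if you <= 10:
            in_first_ten += 1
            if rooms == 10:
                ten_rooms += 1
    return ten_rooms / in_first_ten

print(p_ten_rooms_given_first_ten())  # ~0.909, i.e. the 90.9% above
```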

~~~
livingparadox
Okay, here's a simplified probability tree.

Scenario 1: Doom soon

10 people have a room number less than or equal to 10.

Scenario 2: Doom late

10 people have a room number less than or equal to 10.

\---

Given that each scenario has an equal probability in isolation, there are 20
possible positions for the <=10 individual to find themselves in (10 in doom
soon, 10 in doom late).

Of those 20 possibilities, 10 of those are in the doom soon scenario. The
remaining 10 are in the doom late scenario.

10 doom soon to 10 doom late is a 50/50 probability.

Edit: writing up a script as well, right now. Will add a link to it in my
original comment when complete.

~~~
rm445
You're wrong. Not being mean, you've just arrived at the wrong numerical
answer to the maths question.

(Let's leave aside mentions of doom and stick to the rooms: according to the
article no-one has quite nailed down yet whether there are subtleties about
unbounded future populations). Rather than try to draw a tree I'll write it
out flat:

10 rooms (0.5) & you're in 1-10 (1.0): Probability 0.5

10 rooms (0.5) & you're in 11-100 (0.0): Probability 0.0

100 rooms (0.5) & you're in 1-10 (0.1): Probability 0.05

100 rooms (0.5) & you're in 11-100 (0.9): Probability 0.45

Work out any relative probabilities using the right-hand column. For instance,
when you're in rooms 1-10, the chance that only ten rooms are occupied is
0.5/0.55, about 90.9%. Even if this bugs you, this approach of using the tree
will let you get the right answers to tricky teasers about medical tests with
false positives and false negatives.

~~~
livingparadox
Just to confirm my understanding...

The situation you gave me was "out of those who find their room number is less
than ten, what is the probability that the coin toss meant only ten rooms are
occupied".

In my example, I composed my tree solely of those with a room number less than
or equal to 10, since in your question we're only concerned about those who
are in rooms 1-10.

In your example, it seems you've included those who are in rooms 11-100... If
we are only concerned with rooms 1-10, what relevance does the probability of
being in rooms 11-100 have here?

More specifically, yours seems to be the odds of /being in/ rooms 1-10, rather
than the odds of each scenario occurring, assuming you are in rooms 1-10.

------
scotty79
Imagine you have 100 ordered rooms, all initially filled with gold and ponies.
God tosses a coin and, depending on the result of the flip, strips either 10
or 100 rooms of their gold and ponies.

You are in one of the rooms. You open your eyes and see that there is no gold
and no ponies around. You look at your room number and see 7. Then you know
the best decision is to operate on the assumption that there is a 91% chance
that only 10 rooms were stripped of gold and ponies. So lots of gold and
ponies should await you very soon as you move to further rooms of increasing
numbers.

Reassured by that bulletproof argument, we may look to the future with hope
that sooner rather than later there will come times when we won't be confined
to this dull rock we evolved on, or to these lame, rapidly decaying bodies
with machines that push clunky electrons around and can barely add 0 to 1
while overheating immensely.

Because the near future is (likely) full of gold and ponies.

I hope this illustrates why purely philosophical arguments are empty of any
utility. If you reason without experimental data to anchor your reasoning to
something real, you end up with reasoning that works just as well when applied
to the thing you had in mind as to its opposite, yielding opposite conclusions
and teaching us absolutely nothing about the world.

Ah. First two downvotes and no comment from fans of the real-world importance
of philosophy in this day and age. How classy!

~~~
BoppreH
The Doomsday Argument works because the result (Doom Soon or Doom Late)
affects the number of observers, and we can work backwards to update our
beliefs on which one it will be.

That is why replacing the words with ponies and gold won't work unless you are
a pony or sentient gold.

~~~
scotty79
For my argument I just replaced "existing" with "not having gold and ponies",
and drew a parallel not between "specific rooms being occupied" and our
civilization existing at a specific time, but between "specific rooms not
being emptied of gold and ponies" and "our civilization having decent tech at
a specific time".

It works. Use your own substitutions.

Philosophical reasonings often latch onto the inexplicit meanings of words. If
you replace them with different words that fulfill the same role in language
but have a different meaning, the argument becomes much less apparently
profound, or even false.

------
saluk
"Corresponding to the prior probability (50%) of the coin falling heads or
tails, we now have some prior probability of Doom Soon or Doom Late."

Nice to just magically know this probability. I get that the actual number
doesn't matter, but I worry about a predictor that doesn't care about any real
probabilities based on observation.

------
3pt14159
My favourite answer to the Doomsday Argument: the many-worlds hypothesis is
correct and we haven't created our "world ending" P=NP computers yet.
(Literally: halt if the randomly generated solution is incorrect, and set fire
to every sentient being in the world.)

Another possibility is that we're very close to building realistic
simulations, and by induction, it makes sense that we're at this stage of
development.

------
protonfish
> The last step is to transpose these results to our actual situation here on
> Earth

Yeah, that's the trick, isn't it? Is the metaphor truly valid? This is how
philosophers differ from scientists: the former see no need to test their
hypotheses with real-world measurement. This is also why philosophy needs to
go the way of alchemy and astrology, as an amusing but archaic belief.

