
In Great Silence there is Great Hope (2007) [pdf] - arethuza
https://www.nickbostrom.com/papers/fermi.pdf
======
joe_the_user
Another thing to consider is that Fermi paradox arguments assume something like
a steady-state universe. An alternative hypothesis would be this: suppose life
appears in many places once the universe gets cool enough, and thus whatever
life has appeared on Earth has been developing at a similar pace to whatever
life exists elsewhere. It would then be reasonable to think societies as
advanced as us exist, but not reasonable to expect societies a whole lot more
advanced than us. Again, how practical and how time-consuming the hypothetical
"colonization of a galaxy" would be is a relevant and very hard-to-answer
question.

~~~
marcosdumay
There aren't many plausible reasons for life not to have appeared ~2 billion
years earlier than it did in our own galaxy.

~~~
joe_the_user
A) Some level of residual radiation or lack of complex chemicals might have
delayed life ~2 billion years.

B) Proto-life might indeed have evolved ~2 billion years earlier than on Earth
and have been brought here by asteroids. Which is to say, it took ~2 billion
years for a primordial soup to become single-celled organisms (with viruses
being an intermediate stage).

C) If life evolves "everywhere", then the odds are we are "typical life"
(axiom of mediocrity), so the odds are that, for whatever reason, life arose
~2 billion years later than some estimates suggest.

~~~
Reelin
Dinosaurs existed in the Triassic ~250 Mya. Following multiple extinction
events primates showed up something like ~80 Mya. Australopithecus only dates
to something like 3 Mya, Homo habilis to only ~2 Mya, and humans to a mere
~0.3 Mya.

250 vs 3 (or even 80) is a huge difference, and it's unreasonable to assume
that all planets would experience extinction events at anywhere near the same
rates. To put it in terms of your axiom of mediocrity, it looks like the
"typical" timeline we followed has a _lot_ of wiggle room so why don't we see
anyone else?

~~~
joe_the_user
_To put it in terms of your axiom of mediocrity, it looks like the "typical"
timeline we followed has a lot of wiggle room so why don't we see anyone
else?_

Arguably, on a log scale, the total complexity of organisms is increasing
fairly regularly. Extinction events may just be ways the increased complexity
manifests.

Further, there might be wiggle room, but maybe our current explosion in
scientific knowledge doesn't yet put us near the jumping-off point. The sci-fi
paradigm imagines galaxy colonization as akin to the colonization of islands
and continents around the world by seafaring, but the scales aren't
comparable, the effort isn't comparable, etc.

~~~
Reelin
I dunno, dinosaurs seem pretty biologically complex to me. We've got examples
of just about all the features you'd come across today (possibly even venom).

I suppose if we take intelligence as a sort of measure of overall complexity
it could work; the obvious issue being that we can only make the roughest of
estimates for it.

~~~
nradov
If the killer meteorite had waited a few million more years some intelligent
descendants of theropod dinosaurs might have sent up a rocket to stop it.

------
at_a_remove
_A_ Great Filter? Just the one?

Unaddressed is the scenario wherein multiple filters exist. One or more could
be behind us; one or more ahead. And so, not finding life on Mars or Titan or
anything nearby would be nothing to celebrate.

~~~
ByteJockey
That's covered on page 6:

> Nothing in the above reasoning precludes the Great Filter from being located
> both behind us and ahead of us. It might both be extremely improbable that
> intelligent life should arise on any given planet, and very improbable that
> intelligent life, once evolved, should succeed in becoming advanced enough
> to colonize space.

~~~
at_a_remove
Oh good, I was wondering if I had missed it.

I tend to think of the Drake equation as a series of hurdles, myself. It's
unpopular but each level of "advancement" doesn't seem to be a given to me.
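
That hurdle framing is easy to sketch numerically. A minimal illustration
(every factor value below is an invented placeholder, not an estimate from the
paper or this thread): the Drake equation is a product, so a single
sufficiently improbable stage dominates everything else.

```python
# The Drake equation as a chain of hurdles: N = R* * fp * ne * fl * fi * fc * L.
# All numbers below are made-up placeholders, purely to show the structure.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Product of per-stage factors; one tiny factor (one hard hurdle)
    collapses the whole estimate, however generous the rest are."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Generous factors throughout:
generous = drake(r_star=10, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.5, lifetime=1e4)
# Same, but with a single very hard hurdle at the intelligence stage:
one_hard_hurdle = drake(r_star=10, f_p=0.5, n_e=2, f_l=0.5, f_i=1e-9, f_c=0.5, lifetime=1e4)
print(generous)         # 2500.0 civilizations
print(one_hard_hurdle)  # 2.5e-05 -- effectively zero
```

Dropping one stage from 0.1 to 1e-9 takes the count from thousands of
civilizations to effectively none, which is exactly the "each level of
advancement isn't a given" intuition.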

------
ncmncm
Any of the following suffices to account for silence. All three could be true.

1\. Advanced societies develop controlled fusion energy, and lose interest in
terrestrial planets. Future activity occurs in the resource-rich Kuiper belts
of their and neighboring stars. (Perhaps the most valuable resource there is
low temperature.) Extreme primitives stuck on rocky inner planets have nothing
of interest to offer, or to say. Aliens who happen by get no closer than
Neptune.

2\. Expansionist civilizations soon encounter other expansionist
civilizations, and annihilate one another. Remaining civilizations are not
expansionist, and therefore do not arrive. Humanity will likely be
expansionist unless long communication delays make participating in society
too difficult. If not, we will eventually encounter another, and either join
it, annihilate it, be annihilated, or both of the latter. Having joined,
expansion continues until the next such encounter.

3\. Life we would recognize develops only on an inner terrestrial planet with
a large moon. A planet otherwise like ours with no large moon develops like
Venus, as solar tides nearly halt its rotation, and thence its tectonic
processes and magnetic field. Earth-like planets equipped with a large moon
could be vanishingly rare.

------
mannykannot
_"If – as I hope is the case – we are the only intelligent species that has
ever evolved in our galaxy, and perhaps in the entire observable universe, it
does not follow that our survival is not in danger. Nothing in the above
reasoning precludes the Great Filter from being located both behind us and
ahead of us. It might both be extremely improbable that intelligent life
should arise on any given planet, and very improbable that intelligent life,
once evolved, should succeed in becoming advanced enough to colonize space."_

_"But we would have some grounds for hope that all or most of the Great
Filter is in our past if Mars is indeed found to be barren."_

It would not give grounds for that hope; it would merely, and at most, fail to
give grounds for suspecting that the hope is in vain. Bostrom cannot quantify
how likely his hope is, and what he is praying for here is to maintain that
ignorance - to not learn anything that might push the estimated probability in
a direction inconsistent with his hope.

This strikes me as contrary to the ideals of philosophy.

~~~
ralphstodomingo
I do not see how you came to that conclusion - nowhere in the piece does he
hope or pray to maintain such ignorance.

~~~
mannykannot
He does not put it that way explicitly, but what he is saying is that he hopes
we do not gain knowledge that he would regard as diminishing the likelihood of
his hope for humanity being realized. Note that gaining that knowledge would
not actually change humanity's chances of colonizing the galaxy; it would only
make that probability clearer to Bostrom.

------
stupidcar
The idea that the "great filter", if there really is one, remains in our
future seems to me to be extremely improbable. The fact is, we _already_ have
the ability, just about, to colonise space. That we haven't yet has more to do
with economic reasons than technological ones, and it's entirely conceivable
that we'll see substantial non-Earth settlements in our lifetime.

Now, that isn't to say we _won't_ nuke ourselves or turn into grey goo or
something before that happens, but the point is, we're close enough to it
that, even being pessimistic, there's got to be a reasonable chance we'll make
it (over 0.01, say). And that's not good enough for a great filter. Any
catastrophe that happens so close to the point of becoming a multi-planetary
civilisation simply wouldn't catch enough species. There'd be too many who
would make it through.

~~~
naasking
> Any catastrophe that happens so close to the point of becoming a multi-
> planetary civilisation simply wouldn't catch enough species.

Unless you're overlooking some sort of fundamental serialization of events,
like a technology A that enables colonization also being so destructive that
it wipes them out. Nuclear power could have been an example.

Perhaps we would need to master genetic engineering in order to survive the
ravages of long-term space exposure, and that might inevitably lead to an
outbreak that wipes out most of the species.

Or perhaps sufficient automation is needed because biological brains are too
slow and imprecise, but sufficient intelligence and automation inevitably
yields an AI that supplants them.

~~~
stupidcar
As I said, we already have the technology to colonise space. Not to spread
throughout the galaxy, no, but at least to get to multiple places in our solar
system. At that point, a single event like a genetic accident _always_
occurring and _always_ wiping out the entirety of every species seems
unlikely.

Not impossible, of course, but my argument wasn't about possibility vs
impossibility, but probability.

~~~
naasking
> Not to spread throughout the galaxy, no, but at least to get to multiple
> places in our solar system.

To get there, yes. Not necessarily to _live_ there. I was saying that we might
need to actually engineer ourselves to live in these other environments, and
the path to that might itself lead to the great filter.

------
api
I think there are just too many unknowns around topics like the existence of
intelligent aliens, etc., to draw any firm conclusions about anything.

We don't seem to see conclusive evidence for aliens. Okay, _maybe_ a few
anomalous UFOs, but none of that stuff is anywhere near firm enough to
conclude anything as significant as aliens. We have lights in the sky and
funny radar blips, and those could have many explanations.

That doesn't mean there aren't any aliens. It just means _if_ they're around
they are not advertising their presence. There are many rational reasons not
to do so, some altruistic and some self-interested.

An ET studying us might want to avoid contamination just like we do when
landing probes on other planets. It might also fear that we'd be hostile, and
looking around at how humans behave that would not be an irrational concern.
If we did respond with hostility or fear, our ETs might face an awful moral
conundrum: risk letting us continue developing to the point where we become a
danger to other life (including them), or exterminate us preemptively. Might
be best just to not make contact and avoid that situation.

What about SETI's great silence? It's pretty meaningless. Radiation diminishes
with the square of distance. It would take an incredibly powerful directed
radio signal to be detectable even a few light years away, and it would have
to have simple modulation to have any chance of being noticed.

That means an intentional transmission, and one of incredible power. I don't
remember exactly but to reach stars dozens of light years away I recall seeing
numbers in the hundreds of gigawatts of radiated signal power. That would be
on the order of the entire output of the USA power grid fed into a transmitter
array to send e.g. Fibonacci numbers and say "yes we are here." That's
unlikely for many reasons: it's expensive, a literal shot in the dark, and
potentially dangerous. You don't want to get an answer in the form of a
relativistic velocity impactor in a few thousand years.
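
The inverse-square arithmetic behind that is easy to make concrete. A rough
sketch with assumed numbers (the 100 GW beacon and 50 light-year range are
illustrative, not figures from any source): the flux from an isotropic
radiator of power P at distance d is P / (4πd²).

```python
import math

LIGHT_YEAR_M = 9.461e15  # metres per light-year

def flux_w_per_m2(power_w, distance_ly):
    """Flux from an isotropic radiator of power_w watts at distance_ly
    light-years, via the inverse-square law: P / (4 * pi * d^2)."""
    d = distance_ly * LIGHT_YEAR_M
    return power_w / (4 * math.pi * d ** 2)

# A hypothetical 100 GW isotropic beacon seen from 50 light-years away:
f = flux_w_per_m2(100e9, 50)
print(f)  # roughly 3.6e-26 W/m^2
```

Even at that power, a square metre of collecting area intercepts on the order
of 1e-26 watts, which is why detection requires enormous dishes, long
integration times, and a simple, anticipated modulation. A directional antenna
improves on the isotropic case by its gain, but only along the beam.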

Incidental radiation is just not going to be detected at any range. It's not
powerful enough. Not only that but as we evolve toward more advanced and
efficient technology we are abandoning powerful transmitters in favor of low
power cellular systems with lots of small transmitters or mesh networks. Our
transmissions are becoming quite a bit harder to detect over time. You
couldn't detect an 802.11 network from the Moon, let alone another solar
system.

There are just so many unknowns. Nothing firm can be said. If people want to
get apocalyptic when reasoning from the Fermi paradox, it says more about
their attitude than the universe.

~~~
joe_the_user
_What about SETI's great silence? It's pretty meaningless. Radiation
diminishes with the square of distance. It would take an incredibly powerful
directed radio signal to be detectable even a few light years away, and it
would have to have simple modulation to have any chance of being noticed._

Absolutely. It's worth highlighting that the Fermi question amounts to
something like "why aren't societies we don't understand, using technologies
we don't know of, communicating with us?"

------
joe_the_user
_There must be some kind of barrier that prevents the rise of intelligent,
self-aware, technologically advanced, space-colonizing civilizations._

Speculation of this sort is interesting, but I think it's important to realize
it's extremely hypothetical. It can't help but have many hidden assumptions
and miss many "unknown unknowns".

One key unstated assumption is that an interstellar spacefaring civilization
could be achieved through technological advancement on Earth in short or
medium order. That a "sci-fi" world is just around the corner.

Our current society has indeed been based on constantly, even exponentially
increasing technology. But there's no visible way to sustain that exponential
growth to the scale of the stars. Indeed, our society's growth is
increasingly, visibly unsustainable and our ability to counter our destructive
tendencies isn't making progress. So maybe what has to replace our
unsustainable trajectory is a society with no technological progress.

You could frame this as "maybe the 'great filter' lies ahead of us". Or you
could say that the imagery of both the Fermi paradox and science fiction rests
on a belief that Earth's future has a strong potential, a strong tendency, to
look like the previous 300 years of the increasingly technological expansion
of European capitalist society over the globe. And that belief, for good or
ill, may not be justified at all.

And the thing is, such "filtering" might not imply that a given technological
species fails, just that it reaches stability and isn't part of any stellar
colonization effort.

------
cgrealy
If humanity turns out to be the most intelligent thing the galaxy has
produced... then the galaxy really needs to do better.

