Hacker News
In Great Silence there is Great Hope (2007) [pdf] (nickbostrom.com)
28 points by arethuza 31 days ago | 34 comments



Another thing to consider is that Fermi paradox arguments assume something like a steady-state universe. Another hypothetical would be this: suppose life appears in many places once the universe gets cool enough, so whatever life has appeared on Earth has been developing at a similar speed to whatever life exists elsewhere. It would then be reasonable to think societies as advanced as ours exist, but it may not be reasonable to expect societies a whole lot more advanced than us. Again, how practical and how time-consuming the hypothetical "colonization of a galaxy" would be is a relevant and very hard-to-answer question.


There aren't many reasonable reasons for life not to have appeared ~2 billion years earlier than we did in our own galaxy.


A) Some level of residual radiation or lack of complex chemicals might have delayed life ~2 billion years.

B) Proto-life might indeed have evolved ~2 billion years earlier than on Earth and have been brought to Earth by asteroids. Which is to say it took ~2 billion years for a primordial soup to become single-celled organisms (with viruses being an intermediate stage).

C) If life evolves "everywhere", then the odds are we are "typical life" (axiom of mediocrity), so the odds are that, for whatever reason, life here was ~2 billion years later than some estimates.


Dinosaurs existed in the Triassic ~250 Mya. Following multiple extinction events primates showed up something like ~80 Mya. Australopithecus only dates to something like 3 Mya, Homo habilis to only ~2 Mya, and humans to a mere ~0.3 Mya.

250 vs 3 (or even 80) is a huge difference, and it's unreasonable to assume that all planets would experience extinction events at anywhere near the same rates. To put it in terms of your axiom of mediocrity, it looks like the "typical" timeline we followed has a lot of wiggle room so why don't we see anyone else?


> To put it in terms of your axiom of mediocrity, it looks like the "typical" timeline we followed has a lot of wiggle room so why don't we see anyone else?

Arguably, on a log scale, the total complexity of organisms is increasing fairly regularly. Extinction events may just be ways the increased complexity manifests.

Further, there might be wiggle room, but maybe our current explosion in scientific knowledge doesn't yet put us near the jumping-off point. The sci-fi paradigm imagines galaxy colonization as akin to the colonization of islands and continents around the world by seafaring, but the scales aren't comparable, the effort isn't comparable, etc.


I dunno, dinosaurs seem pretty biologically complex to me. We've got examples of just about all the features you'd come across today (possibly even venom).

I suppose if we take intelligence as a sort of measure of overall complexity it could work, the obvious issue being that we can only make the roughest of estimates for it.


If the killer meteorite had waited a few million more years some intelligent descendants of theropod dinosaurs might have sent up a rocket to stop it.


I am trying to figure out how extinction events caused by a meteorite impact, massive volcanism or global glaciation could be manifestations of increased complexity. Are you saying that worlds populated only by simple organisms are effectively immune from mass extinctions? Even if that were so (which I doubt), it would seem to be irrelevant to the Fermi 'paradox', which concerns the non-arrival, here, of complex organisms or their machines.


"I am trying to figure out how extinction events caused by a meteorite impact, massive volcanism or global glaciation could be manifestations of increased complexity."

A multitude of organisms evolves over time, with more complex organisms appearing in little ecological niches not occupied by existing species.

When an extinction event occurs, the entire ecology changes and the most fit, complex and well-adapted organisms can now fill many more niches.

> Are you saying that worlds populated only by simple organisms are effectively immune from mass extinctions?

No, nothing of the sort.


Mass extinctions roll the dice on complexity. They redefine the yardstick against which fitness is measured, at least temporarily (but long enough to have a lasting effect). If the K-T meteorite had been somewhat larger, it might have bombed Earth back to the Archean.

On the other hand, this is one aspect of the Fermi puzzle where we do have more than one example - and they suggest that mass extinction events alone are not sufficient to answer it.


One can always posit possible causes, but is there evidence for any of them? For example, is there evidence that, two billion years before the origin of life on Earth, radiation levels throughout the galaxy were such as to prevent the same thing happening anywhere, or that nowhere in the galaxy could there have been the chemistry which was vital for the emergence of life on Earth?


Anything to do with the Fermi paradox is going to be very broad speculation. We're discussing the probability of X happening when we have exactly one instance available.


Yet, here we are without even a reasonable speculative explanation.

Your A) is against all the evidence we have. And we have quite a bit of it.

Your B) is a well-discussed possibility, but we have evidence of life evolving on Earth from very simple creatures. That places a severe ceiling on the complexity of the panspermia subject, and removes most of the power from that hypothesis. Granted, it's still a possibility, but we know that nearly all of our evolution happened here on Earth, in roughly the same time that was available for those very simple organisms to appear.

Your C) is literally begging the question.


A Great Filter? Just the one?

Unaddressed is the scenario wherein multiple filters exist. One or more could be behind us; one or more ahead. And so, not finding life on Mars or Titan or anything nearby would be nothing to celebrate.


That's covered on page 6:

> Nothing in the above reasoning precludes the Great Filter from being located both behind us and ahead of us. It might both be extremely improbable that intelligent life should arise on any given planet, and very improbable that intelligent life, once evolved, should succeed in becoming advanced enough to colonize space.


Oh good, I was wondering if I had missed it.

I tend to think of the Drake equation as a series of hurdles, myself. It's unpopular but each level of "advancement" doesn't seem to be a given to me.
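The "series of hurdles" reading can be made concrete: the Drake equation is a straight product of factors, so one hard hurdle dominates everything else. A minimal sketch, with all parameter values purely illustrative placeholders (not estimates from this thread or elsewhere):

```python
# Hedged sketch: the Drake equation as a product of "hurdles".
# All parameter values below are illustrative placeholders, not estimates.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Because N is a product, a single near-zero factor (one failed hurdle)
# collapses the whole estimate, no matter how generous the other terms are.
optimistic = drake(R_star=2, f_p=0.9, n_e=0.4, f_l=0.5, f_i=0.5, f_c=0.5, L=10_000)
one_hard_hurdle = drake(R_star=2, f_p=0.9, n_e=0.4, f_l=1e-9, f_i=0.5, f_c=0.5, L=10_000)
print(optimistic)       # hundreds of civilizations
print(one_hard_hurdle)  # effectively zero
```

The point of the sketch is structural, not numerical: no level of "advancement" being a given corresponds to no factor being safely near 1.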


Any of the following suffices to account for silence. All three could be true.

1. Advanced societies develop controlled fusion energy, and lose interest in terrestrial planets. Future activity occurs in the resource-rich Kuiper belts of their and neighboring stars. (Perhaps the most valuable resource there is low temperature.) Extreme primitives stuck on rocky inner planets have nothing of interest to offer, or to say. Aliens who happen by get no closer than Neptune.

2. Expansionist civilizations soon encounter other expansionist civilizations, and annihilate one another. Remaining civilizations are not expansionist, and therefore do not arrive. Humanity will likely be expansionist unless long communication delays make participating in society too difficult. If not, we will eventually encounter another, and either join it, annihilate it, be annihilated, or both of the latter. Having joined, expansion continues until the next such encounter.

3. Life we would recognize develops only on an inner terrestrial planet with a large moon. A planet otherwise like ours with no large moon develops like Venus, as solar tides nearly halt its rotation, and thence its tectonic processes and magnetic field. Earth-like planets equipped with a large moon could be vanishingly rare.


"If – as I hope is the case – we are the only intelligent species that has ever evolved in our galaxy, and perhaps in the entire observable universe, it does not follow that our survival is not in danger. Nothing in the above reasoning precludes the Great Filter from being located both behind us and ahead of us. It might both be extremely improbable that intelligent life should arise on any given planet, and very improbable that intelligent life, once evolved, should succeed in becoming advanced enough to colonize space.

"But we would have some grounds for hope that all or most of the Great Filter is in our past if Mars is indeed found to be barren."

It would not give grounds for that hope; it would merely, and at most, not give grounds for suspecting that the hope is in vain. Bostrom cannot quantify how likely his hope is, and what he is praying for here is to maintain that ignorance - to not learn anything that might help quantify the probability in a direction inconsistent with his hope.

This strikes me as contrary to the ideals of philosophy.


I do not see how you came to that conclusion - nowhere in the piece does he hope or pray to maintain such ignorance.


He does not put it that way explicitly, but what he is saying is that he hopes we do not gain knowledge that he would regard as diminishing the likelihood of his hope for humanity being realized. Note that gaining that knowledge would not actually change humanity's chances of colonizing the galaxy; it would only make that probability more clear to Bostrom.


The idea that the "great filter", if there really is one, remains in our future seems to me to be extremely improbable. The fact is, we already have the ability, just about, to colonise space. That we haven't yet has more to do with economic reasons than technological ones, and it's entirely conceivable that we'll see substantial non-Earth settlements in our lifetime.

Now, that isn't to say we won't nuke ourselves or get turned into grey goo or something before that happens, but the point is, we're close enough to it that, even being pessimistic, there's got to be a reasonable chance we'll make it (over 0.01, say). And that's not good enough for a great filter. Any catastrophe that happens so close to the point of becoming a multi-planetary civilisation simply wouldn't catch enough species. There'd be too many who would make it through.


> Any catastrophe that happens so close to the point of becoming a multi-planetary civilisation simply wouldn't catch enough species.

Unless you're missing some sort of fundamental serialization of events, like technology A that enables colonization is also so destructive that it wipes them out. Nuclear power could have been an example.

Perhaps we would need to master genetic engineering in order to survive the ravages of long-term space exposure, and that might inevitably lead to an outbreak that wipes out most of the species.

Or perhaps sufficient automation is needed because biological brains are too slow and imprecise, but sufficient intelligence and automation inevitably yields an AI that supplants them.


As I said, we already have technology to colonise space. Not to spread throughout the galaxy, no, but at least to get to multiple places in our solar system. At that point the feasibility of a single event like a genetic accident always occurring and always wiping out the entirety of every species seems unlikely.

Not impossible, of course, but my argument wasn't about possibility vs impossibility, but probability.


> Not to spread throughout the galaxy, no, but at least to get to multiple places in our solar system.

To get there, yes. Not necessarily to live there. I was saying that we might need to actually engineer ourselves to live in these other environments, and the path to that might itself lead to the great filter.


> The idea that the "great filter", if there really is one, remains in our future seems to me to be extremely improbable. The fact is, we already have the ability, just about, to colonise space.

Semi-plausible plans exist to colonize the asteroid belt, and there's a plan to send a microscopic probe to the nearest star. Those plans are a far, far cry from any colonization of the stars, or any real impact on solar systems other than our own.

It's worth noting that if an Earth-like society existed around every star in the galaxy, none would be able to detect the others - all communication fades to quantum noise before it gets to the nearest star.


If you're arguing it may not be possible to colonise space at all, that may be true but isn't germane to this discussion. The idea of Fermi paradox and the "great filter" presupposes that it is possible, and that there is some other, civilisation-ending event that stops it.

My argument is that, within the context of a debate that accepts the precepts of the Fermi paradox and great filter, the likelihood of where the great filter can be placed is constrained by probability. Within that context, even a limited colonisation of our own solar system would be sufficient to drastically reduce the likelihood of a single event to wipe out humanity entirely.


> If you're arguing it may not be possible to colonise space at all, that may be true but isn't germane to this discussion. The idea of Fermi paradox and the "great filter" presupposes that it is possible, and that there is some other, civilisation-ending event that stops it.

Sure, this is the presupposition of the Fermi paradox, but it is generally presented as an obvious thing: obviously, an advanced civilization can and will colonize not just its own solar system but some larger area, even a galaxy. I don't think that's either obviously possible or obviously what an advanced society would do.

The "filter" could be that a society based on quick growth either wipes itself out or reaches a situation of stability such that "colonizing the galaxy" doesn't hold any great appeal.

And human beings so far are extraordinarily dependent on Earth. It would be easier to survive even on an Earth degraded by global warming or nuclear war than it would be to live in space. Space colonies, if they ever exist, will be extraordinarily fragile, and their ability to survive a disaster rendering them uninhabitable seems minute.


There's nothing which says that the Great Filter needs to be caused by endogenous factors. Let's say that these are all relatively "easy" to achieve: planets, life, complex life, technological civilisation. Even if they're "easy" now, there will have been a time, billions of years ago, when they weren't. Let's say that all of this has only been achievable since the emergence of the first high-metallicity stars, maybe 5 or 6 billion years ago.

At that time, there will have been a number of "first" civilisations. Imagine that one of those civilisations grew up to be even more paranoid than us. "Nature, red in tooth and claw" shaped their psychology profoundly. For them, to meet any strange species is to utterly fear it. They know, to the core of their being, that if they do not annihilate the stranger, the stranger will annihilate them. Let's call this species "the Fearers".

So when The Fearers become aware of the potential for species elsewhere in the galaxy to exist, they decide that the best defence is a good offence. They throw all of their civilisational resources into developing technology which ensures that no alien species will ever bother them.

Assembler probes are dispatched to a large metal asteroid. They pull it apart, re-assembling it into an autonomous listening post attached to a 10-million-kilometer-long solar-powered mass driver. If the listening post detects signals from any species other than the Fearers, then the mass driver will spend a few years tweaking its orbit, take careful aim, and dispatch a stream of projectiles towards the source of the signal. Each projectile only weighs a few hundred kg, but travels at 99.9% of the speed of light, making it a gigaton-class impactor. Not quite a planet-killer, but the mass driver can take several shots per hour, and has trillions of tons worth of ammunition at its disposal. It spends a few years shelling the source of the signals, plus any other planetary body in the target system, just in case the emerging technological civilisation was able to develop interplanetary capabilities before the first impactor arrived.
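The "gigaton-class" figure checks out on the back of an envelope. A sketch using the relativistic kinetic energy formula, with the 300 kg mass an illustrative stand-in for "a few hundred kg":

```python
import math

# Back-of-envelope check: is a few-hundred-kg projectile at 99.9% c really
# "gigaton-class"? The 0.999c figure is from the scenario above; the 300 kg
# mass is an illustrative assumption standing in for "a few hundred kg".
c = 2.998e8                  # speed of light, m/s
m = 300.0                    # projectile mass, kg (assumed)
beta = 0.999                 # fraction of light speed

gamma = 1.0 / math.sqrt(1.0 - beta**2)      # Lorentz factor, ~22.4
ke_joules = (gamma - 1.0) * m * c**2        # relativistic kinetic energy

gigaton_tnt = 4.184e18                      # joules per gigaton of TNT
print(ke_joules / gigaton_tnt)              # on the order of 100 gigatons
```

So at 0.999c the relativistic correction alone multiplies the rest energy budget by ~21, and "gigaton-class" is, if anything, an understatement: the sketch gives on the order of a hundred gigatons per projectile.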

The Fearers realise that although this first mass driver is sufficient to clear their local neighbourhood of potential adversaries, they are still, in the long term, vulnerable. What if some species on the far side of the galaxy were able to develop interstellar travel, making them potentially resilient to such an attack? The Fearers then conceive of a next-generation defense grid: self-reproducing probes which spread out through the galaxy, establishing listening posts and mass drivers every 100 light-years or so. It only takes a few hundred thousand years to saturate the galaxy with these defence stations. There, they lurk indefinitely, completely radio-silent, waiting to snuff out signals as soon as they arrive.

Unfortunately for the Fearers, their early foray into self-replication causes them to succumb to a Grey Goo catastrophe, shortly after launching the first interstellar defence probes. But the damage was done. The Fearers have now been gone for billions of years, but their defence grid remains active. In that time, hundreds of thousands of technological civilisations have emerged, but within a century of their first long-range radio broadcasts, all of them have been snuffed out by a fusillade of near-lightspeed projectiles. That is too short a window for any species to develop the class of interstellar capabilities needed to survive such an event.

Maybe this story seems implausible. But these kinds of capabilities will probably be within humanity's grasp within a couple hundred years -- and the galaxy has been around for billions of years, during which it only takes a single species being that much of an asshole to construct a Great Filter like this.

It's scenarios like this which really make me hope that the Great Filter is behind us.


The least plausible thing about it, rationally, is that the Fearers would have anything to be afraid of, with a billion-year head start and asymptotically accelerating technological capabilities.

It is of course possible that they could be bugnuts paranoid - although they would have more to fear from their own rivals amongst themselves than from any life which has just started radio communication. MAD could keep the peace amongst themselves. Of course, xenophobes also tend to go fractal with ever more arbitrary divisions of "the other", so they could have just wiped themselves out.


I'm assuming that they are pre-Filter, but not the first. From the moment they develop radio technology, they immediately start receiving signals from dozens of existing alien civilisations. This causes them to think "shit, we really have to do something about this."

Some of those civilisations may already be far more advanced, and have taken to the stars tens of thousands of years previously. No matter. The interstellar mass driver is a weapon against which no defense can be erected. No level of technology can track a 1-meter-wide iron sphere as it moves through interstellar space at near-lightspeed. If you're on the receiving end of a volley, the insanely blueshifted, tiny speck of light from the projectile won't arrive until shortly before the projectile itself. To an observer, it would appear as if all the planets in the system suddenly started exploding. Few systems would even be able to get a warning cry out, and those who did would be unable to provide actionable intelligence on where the attack came from. Even if they could provide actionable intelligence, their warning would spread out at light-speed. If there were listeners to receive it, then those listeners would also presumably have attracted the attention of the Fearers -- and a fusillade of projectiles would already be inbound towards them, almost immediately behind the warning itself.

Imagining such a scenario, the Fearers say to themselves: "If we don't do something like this, then surely it's only a matter of time before somebody else does. And we'll have no way to see it coming. So the first species to build interstellar mass drivers will, inevitably, be the last species left standing. Therefore, we have no choice but to build and use them ourselves."

In other words, they do unto others not because they're bugnuts paranoid, but because they have a not-altogether-unrealistic apprehension that if they don't, others will do unto them.

This isn't Mutual Assured Destruction as we know it. What made MAD work was the fact that enemies could observe each other in realtime, and that retaliatory strikes would be possible. Neither is the case here. The Fearers are convinced that the only rational course of action is to assume others will attack someday -- if they haven't already -- and therefore their attack needs to be launched immediately.


I think there are just too many unknowns around topics like the existence of intelligent aliens, etc., to draw any firm conclusions about anything.

We don't seem to see conclusive evidence for aliens. Okay, maybe a few anomalous UFOs, but none of that stuff is anywhere near firm enough to conclude anything as significant as aliens. We have lights in the sky and funny radar blips, and those could have many explanations.

That doesn't mean there aren't any aliens. It just means if they're around they are not advertising their presence. There are many rational reasons not to do so, some altruistic and some self-interested.

An ET studying us might want to avoid contamination just like we do when landing probes on other planets. It might also fear that we'd be hostile, and looking around at how humans behave that would not be an irrational concern. If we did respond with hostility or fear, our ETs might face an awful moral conundrum: risk letting us continue developing to the point where we become a danger to other life (including them), or exterminate us preemptively. Might be best just to not make contact and avoid that situation.

What about SETI's great silence? It's pretty meaningless. Radiation diminishes with the square of distance. It would take an incredibly powerful directed radio signal to be detectable even a few light years away, and it would have to have simple modulation to have any chance of being noticed.

That means an intentional transmission, and one of incredible power. I don't remember exactly but to reach stars dozens of light years away I recall seeing numbers in the hundreds of gigawatts of radiated signal power. That would be on the order of the entire output of the USA power grid fed into a transmitter array to send e.g. Fibonacci numbers and say "yes we are here." That's unlikely for many reasons: it's expensive, a literal shot in the dark, and potentially dangerous. You don't want to get an answer in the form of a relativistic velocity impactor in a few thousand years.
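The scale of that claim can be sketched with a simple inverse-square link budget. Every number below (beacon power, distance, receiver aperture, system temperature, bandwidth) is an illustrative assumption, not a figure from the comment or from SETI:

```python
import math

# Hedged link-budget sketch: how much of a ~200 GW beacon survives 50 light
# years of inverse-square spreading? All parameter values are assumptions.
EIRP = 2e11                          # effective radiated power, W (~200 GW)
ly = 9.461e15                        # metres per light-year
d = 50 * ly                          # assumed distance to the target star
flux = EIRP / (4 * math.pi * d**2)   # inverse-square spreading, W/m^2

A_eff = 7e4                          # receiver aperture, m^2 (Arecibo-scale)
P_rx = flux * A_eff                  # received power, W

k_B = 1.38e-23                       # Boltzmann constant, J/K
T_sys, B = 20.0, 1.0                 # system temperature (K), bandwidth (Hz)
P_noise = k_B * T_sys * B            # thermal noise floor, W

# Even a ~200 GW beacon is only modestly above the noise floor at 50 ly,
# and only in a 1 Hz channel -- hence the need for simple modulation.
print(P_rx / P_noise)
```

Under these assumptions the signal clears the noise floor by barely an order of magnitude, and only because the bandwidth is squeezed to 1 Hz, which is exactly why anything detectable has to be a deliberate, simply modulated beacon.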

Incidental radiation is just not going to be detected at any range. It's not powerful enough. Not only that but as we evolve toward more advanced and efficient technology we are abandoning powerful transmitters in favor of low power cellular systems with lots of small transmitters or mesh networks. Our transmissions are becoming quite a bit harder to detect over time. You couldn't detect an 802.11 network from the Moon, let alone another solar system.

There are just so many unknowns. Nothing firm can be said. If people want to get apocalyptic when reasoning from the Fermi paradox, it says more about their attitude than the universe.


> What about SETI's great silence? It's pretty meaningless. Radiation diminishes with the square of distance. It would take an incredibly powerful directed radio signal to be detectable even a few light years away, and it would have to have simple modulation to have any chance of being noticed.

Absolutely. It's worth highlighting that the Fermi question amounts to asking why societies we don't understand aren't using technologies we don't know of to communicate with us.


> There must be some kind of barrier that prevents the rise of intelligent, self-aware, technologically advanced, space-colonizing civilizations.

Speculation of this sort is interesting, but I think it's important to realize it's extremely hypothetical. It can't help but carry many hidden assumptions and miss many "unknown unknowns".

One key unstated assumption is that an interstellar space-faring civilization could be achieved through technological advancement in short or medium order on Earth. That a "sci-fi" world is just around the corner.

Our current society has indeed been based on constantly, even exponentially increasing technology. But there's no visible way to sustain that exponential growth to the scale of the stars. Indeed, our society's growth is increasingly, visibly unsustainable and our ability to counter our destructive tendencies isn't making progress. So maybe what has to replace our unsustainable trajectory is a society with no technological progress.

You could read this as "maybe the 'great filter' lies ahead of us", or you could say that the image behind both the Fermi Paradox and science fiction rests on a belief that Earth's future has a strong tendency to look like the previous 300 years of the increasingly technological expansion of European capitalist society over the globe. And that belief, for good or ill, may not be justified at all.

And the thing is that such "filtering" might not imply that a given technological species fails - just that it reaches stability and isn't part of any stellar colonization effort.


If humanity turns out to be the most intelligent thing the galaxy has produced... then the galaxy really needs to do better.



