B) Proto-life might indeed have evolved ~2 billion years earlier than on Earth and have been brought here by asteroids. Which is to say it took ~2 billion years for a primordial soup to become single-celled organisms (with viruses being an intermediate stage).
C) If life evolves "everywhere", then the odds are we are "typical life" (axiom of mediocrity), so the odds are that, for whatever reason, life here arose ~2 billion years later than some estimates suggest.
250 vs 3 (or even 80) is a huge difference, and it's unreasonable to assume that all planets would experience extinction events at anywhere near the same rates. To put it in terms of your axiom of mediocrity, it looks like the "typical" timeline we followed has a lot of wiggle room so why don't we see anyone else?
Arguably, on a log scale, the total complexity of organisms is increasing fairly regularly. Extinction events may just be ways the increased complexity manifests.
Further, there might be wiggle room, but maybe our current explosion in scientific knowledge doesn't yet put us near the jumping-off point. The sci-fi paradigm imagines galaxy colonization as akin to the seafaring colonization of islands and continents around the world, but the scales aren't comparable, the effort isn't comparable, etc.
I suppose if we take intelligence as a sort of measure of overall complexity it could work, the obvious issue being that we can only make the roughest of estimates for it.
A multitude of organisms evolve over time, with more complex organisms appearing in little ecological niches not occupied by existing species.
When an extinction event occurs, the entire ecology changes, and the fittest, most complex, best-adapted organisms can now fill many more niches.
Are you saying that worlds populated only by simple organisms are effectively immune from mass extinctions?
No, nothing of the sort.
On the other hand, this is one aspect of the Fermi puzzle where we do have more than one example - and they suggest that mass extinction events alone are not sufficient to answer it.
Your #1 is against all the evidence we have. And we have quite a bit of it.
Your #2 is a well-discussed possibility, but we have evidence of life evolving on Earth from very simple creatures. That places a severe ceiling on the complexity of any panspermia subject, removing most of the power from that hypothesis. Granted, it's still a possibility, but we know that nearly all of our evolution happened here on Earth, in roughly the same time that was available for those very simple organisms to appear.
Your #3 is literally begging the question.
Unaddressed is the scenario wherein multiple filters exist. One or more could be behind us; one or more ahead. And so, not finding life on Mars or Titan or anything nearby would be nothing to celebrate.
> Nothing in the above reasoning precludes the Great Filter from being located both behind us and ahead of us. It might both be extremely improbable that intelligent life should arise on any given planet, and very improbable that intelligent life, once evolved, should succeed in becoming advanced enough to colonize space.
I tend to think of the Drake equation as a series of hurdles, myself. It's unpopular but each level of "advancement" doesn't seem to be a given to me.
1. Advanced societies develop controlled fusion energy, and lose interest in terrestrial planets. Future activity occurs in the resource-rich Kuiper belts of their and neighboring stars. (Perhaps the most valuable resource there is low temperature.) Extreme primitives stuck on rocky inner planets have nothing of interest to offer, or to say. Aliens who happen by get no closer than Neptune.
2. Expansionist civilizations soon encounter other expansionist civilizations, and annihilate one another. Remaining civilizations are not expansionist, and therefore do not arrive. Humanity will likely be expansionist unless long communication delays make participating in society too difficult. If not, we will eventually encounter another, and either join it, annihilate it, be annihilated, or both of the latter. Having joined, expansion continues until the next such encounter.
3. Life we would recognize develops only on an inner terrestrial planet with a large moon. A planet otherwise like ours with no large moon develops like Venus, as solar tides nearly halt its rotation, and thence its tectonic processes and magnetic field. Earth-like planets equipped with a large moon could be vanishingly rare.
"But we would have some grounds for hope that all or most of the Great Filter is in our past if Mars is indeed found to be barren."
It would not give grounds for that hope; it would merely, and at most, not give grounds for suspecting that the hope is in vain. Bostrom cannot quantify how likely his hope is, and what he is praying for here is to maintain that ignorance - to not learn anything that might help quantify the probability in a direction inconsistent with his hope.
This strikes me as contrary to the ideals of philosophy.
Now, that isn't to say we won't nuke ourselves or get turned into grey goo or something before that happens, but the point is, we're close enough to it that, even being pessimistic, there's got to be a reasonable chance we'll make it (over 0.01, say). And that's not good enough for a great filter. Any catastrophe that happens so close to the point of becoming a multi-planetary civilisation simply wouldn't catch enough species. There'd be too many who would make it through.
Unless you're missing some sort of fundamental serialization of events, like technology A that enables colonization is also so destructive that it wipes them out. Nuclear power could have been an example.
Perhaps we would need to master genetic engineering in order to survive the ravages of long-term space exposure, and that might inevitably lead to an outbreak that wipes out most of the species.
Or perhaps sufficient automation is needed because biological brains are too slow and imprecise, but sufficient intelligence and automation inevitably yields an AI that supplants them.
Not impossible, of course, but my argument wasn't about possibility vs impossibility, but probability.
To get there, yes. Not necessarily to live there. I was saying that we might need to actually engineer ourselves to live in these other environments, and the path to that might itself lead to the great filter.
Semi-plausible plans exist to colonize the asteroid belt, and there's a plan to send a microscopic probe to the nearest star. Those plans are a far, far cry from any colonization of the stars, from any real impact on solar systems other than our own.
It's worth noting that if an Earth-like society existed around every star in the galaxy, none would be able to detect the others - all communication fades to quantum noise before it gets to the nearest star.
My argument is that, within the context of a debate that accepts the precepts of the Fermi paradox and great filter, the likelihood of where the great filter can be placed is constrained by probability. Within that context, even a limited colonisation of our own solar system would be sufficient to drastically reduce the likelihood of a single event to wipe out humanity entirely.
Sure, this is the presupposition of the Fermi paradox, but it is generally presented as an obvious thing. Obviously, an advanced civilization can and will colonize not just its own solar system but some larger area, even a galaxy. I don't think that's either obviously possible or obviously what an advanced society would do.
The "filter" could be that a society based on quick growth either wipes itself out or reaches a situation of stability such that "colonizing the galaxy" doesn't hold any great appeal.
And human beings so far are extraordinarily dependent on Earth. It would be easier to survive even on an Earth degraded by global warming or nuclear war than it would be to live in space. Space colonies, if they ever exist, will be extraordinarily fragile, and their ability to survive a disaster rendering them uninhabitable seems minute.
At that time, there will have been a number of "first" civilisations. Imagine that one of those civilisations grew up to be even more paranoid than us. "Nature, red in tooth and claw" shaped their psychology profoundly. For them, to meet any strange species is to utterly fear it. They know, to the core of their being, that if they do not annihilate the stranger, the stranger will annihilate them. Let's call this species "the Fearers".
So when The Fearers become aware of the potential for species elsewhere in the galaxy to exist, they decide that the best defence is a good offence. They throw all of their civilisational resources into developing technology which ensures that no alien species will ever bother them.
Assembler probes are dispatched to a large metal asteroid. They pull it apart, re-assembling it into an autonomous listening post attached to a 10-million-kilometer-long solar-powered mass driver. If the listening post detects signals from any species other than the Fearers, then the mass driver will spend a few years tweaking its orbit, take careful aim, and dispatch a stream of projectiles towards the source of the signal. Each projectile only weighs a few hundred kg, but travels at 99.9% the speed of light, making it a gigaton-class impactor. Not quite a planet-killer, but the mass driver can take several shots per hour, and has trillions of tons worth of ammunition at its disposal. It spends a few years shelling the source of the signals, plus any other planetary body in the target system, just in case the emerging technological civilisation was able to develop interplanetary capabilities before the first impactor arrived.
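For what it's worth, the energy figures in this scenario roughly check out. A quick back-of-the-envelope sketch in Python (all numbers hypothetical, taken from the scenario above):

```python
import math

C = 2.998e8              # speed of light, m/s
GIGATON_TNT = 4.184e18   # joules per gigaton of TNT equivalent

def impactor_yield_gigatons(mass_kg, beta):
    """Relativistic kinetic energy, KE = (gamma - 1) * m * c^2,
    expressed in gigatons of TNT."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2 / GIGATON_TNT

# A few-hundred-kg projectile at 99.9% of c, per the scenario:
yield_gt = impactor_yield_gigatons(300, 0.999)
print(f"{yield_gt:.0f} gigatons")  # on the order of a hundred gigatons
```

If anything, "gigaton-class" undersells it: at 0.999c the kinetic energy of a 300 kg slug works out to roughly a hundred gigatons - devastating, though still orders of magnitude short of a dinosaur-killer like Chicxulub (~10^23 J).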
The Fearers realise that although this first mass driver is sufficient to clear their local neighbourhood of potential adversaries, they are still, in the long term, vulnerable. What if some species on the far side of the galaxy was able to develop interstellar travel, making them potentially resilient to such an attack? The Fearers then conceive of a next-generation defense grid: self-reproducing probes which spread out through the galaxy, establishing listening posts and mass drivers every 100 light-years or so. It only takes a few hundred thousand years to saturate the galaxy with these defence stations. There, they lurk indefinitely, completely radio-silent, waiting to snuff out signals as soon as they arrive.
Unfortunately for the Fearers, their early foray into self-replication causes them to succumb to a Grey Goo catastrophe, shortly after launching the first interstellar defence probes. But the damage was done. The Fearers have now been gone for billions of years, but their defence grid remains active. In that time, hundreds of thousands of technological civilisations have emerged, but within a century of their first long-range radio broadcasts, all of them have been snuffed out by a fusillade of near-lightspeed projectiles. That is too short of a window for any species to develop the class of interstellar capabilities needed to survive such an event.
Maybe this story seems implausible. But this kind of capability will probably be within humanity's grasp within a couple hundred years -- and the galaxy has been around for billions of years, during which it only takes a single species being that much of an asshole to construct a Great Filter like this.
It's scenarios like this which really make me hope that the Great Filter is behind us.
It is of course possible that they could be bug nuts paranoid - although they would have more to fear from their own rivals amongst themselves than any life which just started radio communication. MAD could keep the peace amongst themselves. Of course xenophobes also tend to go fractal with more arbitrary divisions of "the other" so they could have just wiped themselves out.
Some of those civilisations may already be far more advanced, and have taken to the stars tens of thousands of years previously. No matter. The interstellar mass driver is a weapon against which no defense can be erected. No level of technology can track a 1-meter-wide iron sphere as it moves through interstellar space at near-lightspeed. If you're on the receiving end of a volley, the insanely blueshifted, tiny speck of light from the projectile won't arrive until shortly before the projectile itself. To an observer, it would appear as if all the planets in the system suddenly started exploding. Few systems would even be able to get a warning cry out, and those who did would be unable to provide actionable intelligence on where the attack came from. Even if they could provide actionable intelligence, their warning would spread out at light-speed. If there were listeners to receive it, then those listeners would also presumably have attracted the attention of the Fearers -- and a fusillade of projectiles would already be inbound towards them, almost immediately behind the warning itself.
Imagining such a scenario, the Fearers say to themselves: "If we don't do something like this, then surely it's only a matter of time before somebody else does. And we'll have no way to see it coming. So the first species to build interstellar mass drivers will, inevitably, be the last species left standing. Therefore, we have no choice but to build and use them ourselves."
In other words, they do unto others not because they're bugnuts paranoid, but because they have a not-altogether-unrealistic apprehension that if they don't, others will do unto them.
This isn't Mutual Assured Destruction as we know it. What made MAD work was the fact that enemies could observe each other in realtime, and that retaliatory strikes would be possible. Neither is the case here. The Fearers are convinced that the only rational course of action is to assume others will attack someday -- if they haven't already -- and therefore their attack needs to be launched immediately.
We don't seem to see conclusive evidence for aliens. Okay, maybe a few anomalous UFOs, but none of that stuff is anywhere near firm enough to conclude anything as significant as aliens. We have lights in the sky and funny radar blips, and those could have many explanations.
That doesn't mean there aren't any aliens. It just means if they're around they are not advertising their presence. There are many rational reasons not to do so, some altruistic and some self-interested.
An ET studying us might want to avoid contamination just like we do when landing probes on other planets. It might also fear that we'd be hostile, and looking around at how humans behave that would not be an irrational concern. If we did respond with hostility or fear, our ETs might face an awful moral conundrum: risk letting us continue developing to the point where we become a danger to other life (including them), or exterminate us preemptively. Might be best just to not make contact and avoid that situation.
What about SETI's great silence? It's pretty meaningless. Radiation diminishes with the square of distance. It would take an incredibly powerful directed radio signal to be detectable even a few light years away, and it would have to have simple modulation to have any chance of being noticed.
That means an intentional transmission, and one of incredible power. I don't remember exactly but to reach stars dozens of light years away I recall seeing numbers in the hundreds of gigawatts of radiated signal power. That would be on the order of the entire output of the USA power grid fed into a transmitter array to send e.g. Fibonacci numbers and say "yes we are here." That's unlikely for many reasons: it's expensive, a literal shot in the dark, and potentially dangerous. You don't want to get an answer in the form of a relativistic velocity impactor in a few thousand years.
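A rough inverse-square sketch gives a feel for why only a deliberate, enormously powerful, narrowband transmission stands a chance. All parameters here are hypothetical - a 300 GW effective beacon, a 50-light-year range, a 100 m receiving dish, a cryogenic receiver with a 1 Hz channel:

```python
import math

K_B = 1.381e-23        # Boltzmann constant, J/K
LIGHT_YEAR = 9.461e15  # metres

def received_power_w(eirp_w, distance_ly, dish_area_m2):
    """Power collected by a dish: inverse-square flux times collecting area."""
    d = distance_ly * LIGHT_YEAR
    flux = eirp_w / (4 * math.pi * d ** 2)  # W/m^2
    return flux * dish_area_m2

dish_area = math.pi * 50 ** 2    # 100 m diameter dish
p_rx = received_power_w(300e9, 50, dish_area)
noise = K_B * 20 * 1.0           # kTB noise floor: 20 K receiver, 1 Hz bandwidth
print(p_rx > noise)              # marginally detectable, and only in a 1 Hz channel
```

Even at hundreds of gigawatts, the margin over the thermal noise floor is only a few-fold, and only because the bandwidth is absurdly narrow; broadband incidental leakage at a few kilowatts is hopeless at these distances.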
Incidental radiation is just not going to be detected at any range. It's not powerful enough. Not only that but as we evolve toward more advanced and efficient technology we are abandoning powerful transmitters in favor of low power cellular systems with lots of small transmitters or mesh networks. Our transmissions are becoming quite a bit harder to detect over time. You couldn't detect an 802.11 network from the Moon, let alone another solar system.
There are just so many unknowns. Nothing firm can be said. If people want to get apocalyptic when reasoning from the Fermi paradox, it says more about their attitude than the universe.
Absolutely. It's worth highlighting that the Fermi question amounts to something like "why aren't societies we don't understand using technologies we don't know of to communicate with us?"
Speculation of this sort is interesting, but I think it's important to realize it's extremely hypothetical. It can't help but carry many hidden assumptions and miss many "unknown unknowns".
One key unstated assumption is that an interstellar spacefaring civilization could be achieved through technological advancement in short or medium order on Earth. That a "sci-fi" world is just around the corner.
Our current society has indeed been based on constantly, even exponentially increasing technology. But there's no visible way to sustain that exponential growth to the scale of the stars. Indeed, our society's growth is increasingly, visibly unsustainable and our ability to counter our destructive tendencies isn't making progress. So maybe what has to replace our unsustainable trajectory is a society with no technological progress.
You could read this as "maybe the 'great filter' lies ahead of us", or you could say the image behind both the Fermi Paradox and science fiction rests on a belief that Earth's future has a strong potential, a strong tendency, to look like the previous 300 years of the increasingly technological expansion of European capitalist society over the globe. And that belief, for good or ill, may not be justified at all.
And the thing is that such "filtering" might not imply that a given technological species fails, just that it reaches stability and isn't part of any stellar colonization effort.