I Hope the Search for Extraterrestrial Life Finds Nothing (2008) [pdf] (nickbostrom.com)
47 points by jules-jules 48 days ago | 91 comments



God I can't stand Nick Bostrom.

I think the great filter is definitely ahead of us, but I'm not so terrified by the implications of this (or so interested in being a pompous contrarian) that I'm moved to bury my head in the sand and publicly hope for stupid things.

I hope we find current or ancient life on Mars, Europa, etc. because that would be interesting and exciting.

It would bring no more existential dread than looking outside at your fellow idiot humans.

Life is hard, and we're probably all going to die. Get over it. We can still hope to find interesting things while we're around to enjoy them.


To be honest, actively hoping for evidence of our extinction is much more contrarian than this piece.


I'm not even sure who Nick Bostrom is, but thanks for your comment, it stands on its own :)


In case you were serious, Nick Bostrom is a philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director.


Yep I was, and thanks for doing emotional labor for me, I probably wouldn’t have bothered googling him :)


"probably"

No, you ARE going to die.


I'm afraid we're looking at a great filter event very soon. We're doing nothing about global warming, and we've demonstrated that the United States has a significant fraction of its population that values short-term pleasure over the most basic work to prevent the spread of a pandemic (and we're likely to see these sorts of things happen more, not less, frequently as we continue to encroach on wildlife habitat). I'm increasingly skeptical of the ability for civilization to make it into the 21st century.


I really doubt global warming will lead to human extinction. Life might get harder in lots of ways, but it seems extremely unlikely to be an existential problem.


Sure, but there's a small chance that it will bring about the end of modern civilization. In that sense it could be civilization-ending. An interesting question is whether humans would be able to rebuild civilization after events like this.

I think the main extinction risk is the secular trend for technology to become both more destructive and more decentralised. If that continues, it may only be a matter of time before independent actors are able to extinguish the human race. And given the distribution of human motivations, if individuals can do it, someone will sooner or later.


I can see the civilization-ending argument, although it seems like something resembling modern civilization could be rebuilt, maybe with less access to energy; but with fewer people you also need less energy, so I'm not sure how big a problem that would be. Maybe without global industries there won't be the materials science necessary for space exploration.

I have a harder time believing independent actors can extinguish the human race. Biowarfare/terrorism is unlikely to do so for a variety of reasons, and nuclear warfare only has the potential to do so in a superpower-level conflict, and even then I don't think it's certain.


> human race

Being satisfied that some "naked apes" would still be alive, i.e. that we would survive the probable destruction of current civilization as a biological species, is setting the bar really low.


Okay, but do you really think civilization is going to collapse entirely? I don't. Even if half of humanity died you would still have civilization of some sort.


I think with a bunch of nations wielding nuclear power, the threat of a global collapse absolutely becomes an existential problem, regardless of the source of that collapse.

I don't think it's likely, but I also don't think it's something one can dismiss easily.


I don't understand the motivations for using nuclear weapons in that scenario. And you would need a massive nuclear exchange to cause human extinction.


I agree, and I'm not sure I can think of the motivations either, except that extreme strain causes people to do stupid things. It could even just be an accident.


I think OP was listing ways in which humans are acting negligently, which shows that we are susceptible to a filter.


Even if we "just" regress to a pre-industrial age instead, it's almost the same as extinction, because humanity will become boring. It'll have a ceiling on its potential for thousands of years, or more.


Maybe in terms of human accomplishment it would be more boring, but probably less boring for individuals.


I meant it in the former sense, but even in terms of the latter, I personally feel that humanity's progress has also made life less boring for individuals than it was before. But I may be biased by how much I enjoy exploring all kinds of complexity and artificial systems.


Existential problem by itself? No.

Existential threat multiplier? Probably yes.


I don't see any clear way in which it is an existential threat multiplier.


> I'm increasingly skeptical of the ability for civilization to make it into the 21st century.

Because of.....Y2K?


D'oh! 22nd century.


CO2?


Sars-COV-2?


Does anyone have a detailed analysis of why we believe we have the technology to detect distant alien civilizations? Given interstellar distances, travel between stars seems unlikely without some new science we don't know yet. Also, do we know we would be able to detect our own radio transmissions with our own telescopes from light-years away? I think the great filter hypothesis always relied on the assumption that intelligent life would eventually develop Star Trek level technology and be extremely detectable, but what if we're near the end of what's physically possible with respect to energy production, speed, radio astronomy, and communications? What if the aliens are out there, but statistically so far away that we will never detect each other?


This is one thing that I think is underestimated: just how big of a hurdle it is to travel between solar systems. It took the Voyager probes almost 40 years to leave our solar system, which is like taking 40 years to get out your front door on your planned walk across all of Europe and Asia.

By the time we have the technology to do so, we will necessarily have the technology that makes it unnecessary. The primary thing you need is something that can survive effectively indefinitely in deep space where there are no resources available. But if you have that you can build stellar colonies that have unlimited access to solar power and can survive indefinitely just as easily.

It's not like we are short on mass in the solar system either. It will be a long long time until we've used up Jupiter, Saturn, Uranus, the belt, etc...


We basically have the technology to reach Alpha Centauri in less than 200 years for reasonable amounts of money [1]. I don't find it unlikely that alien civilizations could either do better, or have significantly longer lifespans than us.

[1] https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propuls...


That's highly speculative and glosses over a lot of the details. The idea that it can be done with today's tech for "reasonable amounts of money" is laughable. The launch costs alone for putting 500,000 tons into orbit already kill that idea.

For a point of reference, SpaceX currently charges about $2,720/kg to LEO. So to get this project off the ground (literally) you're talking about $1,360,000,000,000. And that's just launch costs. It doesn't include the invention of spacecraft that can survive in deep space indefinitely without resupply, the materials science advances needed to survive 300,000 nuclear explosions (the paper suggests 1mm of stainless steel coated with a thin layer of oil), or the fact that this is only talking about getting to a place we probably don't even want to go.
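
A rough sanity check on that figure, assuming the ~500,000-ton Orion ship mass and the ~$2,720/kg Falcon 9 LEO price quoted above (both are the numbers from this thread, not audited values):

    # Back-of-the-envelope launch-cost check; inputs are the figures quoted above.
    ship_mass_tons = 500_000                  # assumed Orion-class ship mass
    ship_mass_kg = ship_mass_tons * 1_000
    price_per_kg = 2_720                      # approximate Falcon 9 price to LEO, USD/kg
    launch_cost = ship_mass_kg * price_per_kg
    print(f"${launch_cost:,.0f}")             # -> $1,360,000,000,000 (about $1.36 trillion)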

And this is kind of the point. Once you've invented all of the necessary prerequisites there's really no point in hitting it with 300,000 nuclear explosions; you already have a self-sustaining space colony. You can continue to build those pretty much forever and never come close to running out of room. There's no need to travel to some other star system at that point. The sense of adventure has to be weighed against the risk of detonating 300,000 nuclear bombs without incident and all of the yet unknown hazards of interstellar travel. Oh, and thanks to the tyranny of the rocket equation it is a one-way trip unless you're packing a nuclear bomb factory and enough mining equipment to get the raw materials you need from whatever you happen to find at your destination.


I think the idea was to use the nukes to launch it into space. If you want less fallout you make the project much harder. In that case I'd recommend waiting until you can manufacture things outside the gravity well.


It seems like an exaggeration, though, to say that we basically have the technology. We have the ability to build a ship that would survive 300,000 one-megaton nuclear explosions and keep working for 200 years, for a reasonable amount of money? What if this paper spacecraft is actually beyond the reach of materials science?


I read a book about the topic. The conclusion seems to be that it was doable with 1960s tech, but the test ban treaty, concerns about fallout, and the ban on putting nukes in space killed the project before it could produce a prototype.


Which only makes sense if you ignore all of the details. In 2020 we still don't have a proof of concept for a self-sustaining biosphere that includes people, much less a spacecraft that can survive for hundreds of years without major maintenance. We don't even have a good solution for absorbing all of those hard gammas from detonating 300,000 fusion bombs in close proximity to the people.


The ship itself offers protection from the radiation. The pusher plate and all those bombs add up to quite a bit of shielding. The life-support system is of course a problem, but that doesn't change the fact that the ship would be pretty fast.


Say we invent life extending technology or generational ships. What happens when someone gets to Alpha Centauri? What can they do?


What could any explorers in the history of mankind do when they got to an unpopulated island or continent? Spread, inhabit, and help ensure the survival of their species.

Whether or not you agree, some people like the idea of humanity surviving as a multi-solar system species.

I'm not sure Alpha Centauri has any habitable planets, at least for our biology as it stands today, but what we learn along the way might render that moot.


See my reply to adrianN. Getting there is only a small part of the hurdle to establishing a viable colony, let alone one that can prosper enough to spread further.


Alpha Centauri might not be the best destination, but for suitable systems the travelers could go forth and multiply, eventually sending out ships of their own.


But that makes it a much more costly operation (both in money and risk). Even a nearby star system with a habitable planet would be hard to get going, because the settlers who go there would be cut off from their home planet. They would need to take with them everything necessary not only to establish a colony but to thrive (and to stave off potential extinction-level events, which, when your population is small, are far more likely). And there would be little help from the homeworld. Even information would take years to arrive, limited by the speed of light.

And this is just the tip of the iceberg. I agree with jandrese that people underestimate just how much of a hurdle this is.


“Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.”

― Douglas Adams, The Hitchhiker's Guide to the Galaxy

When talking about interstellar distances people just aren't equipped to think about the scales involved. They're so far outside the scale of human experience that our brains really struggle to grasp the magnitude of the problem.

We're talking about billions of dollars to set up a short-term test camp on Mars, and by interstellar standards that is close to nothing.


One issue with the way Bostrom and others frame the Fermi paradox is that they assume the teleological end of any intelligent civilization will be the maximisation of its exploitation of natural resources through the creation of self-replicating probes that will slowly populate the universe. That's why they think the Fermi paradox is a paradox - if there is life, it should be everywhere. That may well be a reflection of ourselves, however, rather than a universal motivation of intelligent life.


At the numbers we're looking at, even if only 1% of civilizations (or even 1 in 1,000) were exponentially expansionary, they would still make themselves fairly obvious and outcompete all the others. The Fermi paradox doesn't assume that all extraterrestrial intelligence behaves this way, but this rebuttal has to assume that no ETI would behave this way.

And that seems improbable, if only because aggressive, exponential expansion is a fairly valid Darwinian strategy.


I'm highly skeptical of a Darwinian argument for galactic conquest. Intelligent societies would, by definition, be determined by non-genetic learning transmitted via culture. Humans are not naturally aggressive any more than they're naturally peaceful. They have dispositions capable of moving in both directions, the relative strength of which depends on cultural socialisation, which is historically variable.

I'm not at all sure that, even assuming your estimate, ETI would be 'fairly obvious'. Take some speculative figures. The total volume of the Milky Way is about 30 trillion cubic light years. Let's speculate that intelligent life occurs on one in a million planets. There are something like 100 billion planets in the Milky Way, so that would be 100,000 planets bearing ETI. If one in a thousand intelligent civilizations are super-expansionary, that comes down to just 100. If we divide the volume of the Milky Way, 30 trillion, by a hundred, we get one super-expansionary civilization per three hundred billion cubic light years. Would that be obvious? Even if we assume that ETI occurs on average on one in 10,000 planets, that would still be one super-expansionary civilization per three billion cubic light years.
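
A quick version of that arithmetic in code, using the speculative inputs above (the galaxy volume, planet count, and one-in-a-million / one-in-a-thousand rates are all guesses from the comment, and the typical-spacing line is an extra step I've added):

    # Speculative inputs from the comment above; none of these are measured values.
    galaxy_volume_ly3 = 30e12        # very rough Milky Way disk volume, cubic light years
    planets = 100e9                  # planets in the Milky Way
    p_intelligent = 1 / 1e6          # fraction of planets with intelligent life (guess)
    p_expansionary = 1 / 1e3         # fraction of those that go super-expansionary (guess)

    civs = planets * p_intelligent * p_expansionary     # -> 100 civilizations
    volume_per_civ = galaxy_volume_ly3 / civs            # -> 3e11 cubic ly each
    typical_spacing = volume_per_civ ** (1 / 3)          # -> roughly 6,700 ly between them
    print(civs, volume_per_civ, round(typical_spacing))

With the more generous one-in-10,000 assumption the typical spacing drops to roughly 1,400 light years, so whether that counts as 'fairly obvious' depends on what such a civilization does to its stars.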


> Intelligent societies would, by definition, be determined by non-genetic learning transmitted via culture.

This is inherently Darwinian as well. In fact, cultural evolution is usually smarter than we are; indigenous pre-industrial cultures typically have an excellent practical understanding of how to thrive in their immediate environment even without having anything close to a theoretical scientific understanding of why their own cultural practices work in the first place.

More to the point, cultures that expand and grow, do so at the expense of cultures that don’t. So through natural selection, we would expect to see more expansionary cultures than non-expansionary cultures.

And, even setting aside the argument, you still have the much tougher task of explaining the notion that no extraterrestrial cultures are expansionary, which is what your side of this argument would require.

> I'm not at all sure that, even assuming your estimate, ETI would be 'fairly obvious'.

It would be eventually if it were growing exponentially. On the timescale of the galaxy and by all probability, eventually has already happened unless humanity is an extreme, extreme, extreme outlier early bloomer.


> More to the point, cultures that expand and grow, do so at the expense of cultures that don’t. So through natural selection, we would expect to see more expansionary cultures than non-expansionary cultures.

Natural selection cannot apply to societies because there is no reliable mechanism for successful decision-making to propagate itself in the way genes convey successful characteristics to future generations. Does survival depend on expansion? The last century has seen the fragmentation of the imperial world into a proliferation of independent states, big and small. It is now widely recognised that exchange is more profitable than territorial acquisition, and war has become a highly technologically-intensive project bearing exorbitant costs. Most arguments for expansion as a survival strategy would depend upon some version of the realist theory of anarchy, that the absence of a supra-national sovereign creates conditions of mutual insecurity in which states are forced to maximise their relative power. But I would think that any major inter-stellar expansion would require the unification of the world - in some form - removing the realist premise from the argument for super-expansion.

> And, even setting aside the argument, you still have the much tougher task of explaining the notion that no extraterrestrial cultures are expansionary, which is what your side of this argument would require.

That depends on how common intelligent life is, the maximum possible speed of interstellar travel, the number of ETI that would commit to super-expansion, how good we are (not very) at searching the sky for signs of life, among other things. If you take the upper-bound estimate for most of those categories, then it is not at all paradoxical that we do not see alien life teeming around us.


> Natural selection cannot apply to societies because there is no reliable mechanism for successful decision-making to propagate itself in the way genes convey successful characteristics to future generations.

Behaviors, customs, and beliefs face the exact same selective pressure as genes. If a certain behavior or custom consistently leads to fatal consequences for the people who practice it, that behavior will become less common over time, and societies that normalize that behavior will go extinct. If a certain different behavior or custom consistently leads to success--in terms of maintaining or, better yet, increasing the number of people who engage in that behavior or custom--then that behavior will persist and spread.

> The last century has seen the fragmentation of the imperial world into a proliferation of independent states, big and small.

Individual empires always rise and fall. That's the nature of history. But for humanity, at least, there are only two options: either some of us will eventually colonize space (in which case those of us colonizing space will be more numerous 1000 years from now, and the set of behaviors, customs, and beliefs that led that group to colonize space will be the representative set among humanity), or else we won't colonize space because we ran into the Great Filter.

The same is true for any extraterrestrial intelligence.


> aggressive, exponential expansion is a fairly valid Darwinian strategy.

Earth has existed for around 4.5 billion years, life on Earth for 3.5 billion, and the whole known Universe for just under 14 billion (and we have only been able to send small probes to the limits of our own Solar System during the last few decades). So the Universe is not "old," and the distances between the areas suitable for life to "expand" into are immense and are real barriers.

Knowing the distances involved, I believe that had we found ourselves in a much later age of the Universe, not seeing "others" would mean more than it means now. For now, knowing the distances, whatever may exist somewhere far away could simply be too far from us, nothing more.

Edit:

Re: "The galaxy is only 100,000 light years across" note that that are the light years -- the distance that the light has to travel that much. We know that the whole islands on Earth were unreachable for many otherwise common species for millions of years. But these distances are nothing compared with problems of maintaining complex life over the cosmic distances and in the cosmically hostile environment. Also "10-50 million years ahead of us" doesn't have to mean what we think it could mean. For a context, the dinosaurs were, historically, 60 million years "ahead of us". Nobody can "just assume" the we will cross the limits that are ours limits now. Biologically, we did multiply "exponentially" as soon as we were able to use the natural resources to do so (1). But apart from that (the certainty of life using available resources for biological growth), the growth in technical capability for interstellar travel is not guaranteed. Not to mention that the resources are by definition limited.

So nobody should ask "why aren't others here" when we can't even claim to have reached anywhere else. What we surely can ask is "why don't we get some signals," because, for example, we are able to send a radio message. But I personally wouldn't assume that other civilizations engage in the kind of intentional signalling that we, from where we are now, would necessarily be able to detect. It's not "just connect the headset to the wire and a crystal and listen." It's something that has to concentrate immense amounts of energy to reach exactly us. At the moment the question is not "is there anybody anywhere" but "is there anybody in the conveniently close neighborhood who is capable of sending exactly what we expect to receive and, most probably, of targeting it exactly in this direction." It's immensely hard to even "see" the planets around the closest stars; that's how far away everything is. I think the probability of that "targeted messaging" is too low, not to mention that I don't think "intelligence" in our sense inevitably arises on every planet. We know that from the 3.5 billion years of life on Earth: not even plants existed for the first 3 billion. Life can indeed be relatively common, but the developmental path Earth has taken up to now could still simply be uncommon enough.

1) https://ourworldindata.org/grapher/world-population-since-10...


Given the scale of the galaxy, "we're early" doesn't seem all that likely. The galaxy is only 100,000 light years across, so even a few million years is likely enough for some signs of expansion to appear. You really think that in the entire galaxy nobody would be 10-50 million years ahead of us? That's a pretty small margin.


If you’re going to reply to me, please reply to me instead of editing.


> please reply to me instead of editing.

Unfortunately, my account has some very, very low limits on the number of replies I can make per day, which I can't influence. Not all users necessarily have the same abilities, and the privileges one may have aren't universal.

My belief is that the limits are there to keep discussions from progressing too fast, and if somebody misses an edit but that helps the thread move more slowly, maybe the measure has still achieved a goal. But I admit it can be confusing.

Of course, I'd personally prefer to have at least a bit higher "quota."


I think the Fermi paradox simply underestimates the enormous difficulty and expense of traveling between solar systems.


It won't be difficult once it has been figured out. It won't be expensive once we've truly started to unleash energy from matter. On a geological timespan, the Milky Way is very explorable.

I think the most likely reason there isn't any observable alien life is that we're being intentionally undisturbed. Or at least, seemingly undisturbed.


Isn't this like saying it won't be difficult once we develop a FTL engine? Or a reactionless drive?

The very lack of contact with aliens might be interpreted as strong evidence that FTL travel is either outright impossible or so impractical that it is effectively impossible.

The answer to the Fermi paradox might be the tyranny of the rocket equation.


No. FTL travel is impossible without breaking causality, while unleashing energy from matter is something we can already (crudely) do. Even at only 1% of the speed of light, we should be able to explore the entire Milky Way in a dozen million years or so, and there are plenty of proposals for transferring energy to a spacecraft that is already moving at high speed.

Even forgetting high speed travel. If it takes a billion years (less than 10% of the age of the universe) we would still be able to colonize the Milky Way if we're able to either exist on spacecraft for prolonged periods of time or if replicating probes with AI could do so in our stead.

I'm much more of the opinion that exponential tech is self-terminating or that we're being observed by aliens but not largely interfered with or that we're in some sort of simulation or even that we're the first life forms with advanced intelligence and long life spans. The idea that mere physics is what would stop us from reaching the stars when we've already explored our local neighbourhood is just so unbelievable to me. At the very, very least we should be able to communicate over long distances and we've (as yet) been unable to pick up extraterrestrial signals that show signs of intelligence like enumerating the prime numbers, say.
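
For scale, a minimal check of the crossing-time claim above, assuming a ~100,000 light-year disk and a constant 1% of c (ignoring acceleration, stops, and colonization delays):

    # Rough crossing-time estimate for the 1%-of-c scenario discussed above.
    galaxy_diameter_ly = 100_000
    speed_fraction_of_c = 0.01
    crossing_time_years = galaxy_diameter_ly / speed_fraction_of_c
    print(crossing_time_years)   # -> 10,000,000 years, i.e. under 0.1% of the galaxy's age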


In the case you are talking about, the Great Filter is that technology advancing to Star Trek levels that allow colonizing star systems is unlikely or impossible (ie step 9 here: https://en.wikipedia.org/wiki/Great_Filter).


If nothing else, we would see stars fade away as civilizations built Dyson swarms around them to collect more and more of their energy. That wouldn’t even require interstellar travel. Interstellar travel would in turn enable exponential growth across star systems, which would show up.


Dyson spheres are a stupid idea for harnessing power. You need a ridiculous amount of resources to cover even a fraction of the sphere around a star. An advanced civilization could simply harness fusion.


At which point they need to get hydrogen, and guess where all the hydrogen is.


Our Sun burns 1 GT of hydrogen every 2 seconds [1]. This is the mass of the hydrogen in 9 GT of water. The dwarf planet Ceres was discussed just yesterday on HN. It contains about 200 million GT of water. If that's not enough, we can go and use Jupiter, which contains about 1 quadrillion GT of hydrogen, enough to last for tens of millions of years even at the Sun's full burn rate, and far longer at any realistic fraction of it.

[1] https://en.wikipedia.org/wiki/Nuclear_fusion#/media/File:The...
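
A rough check of those fuel figures, using the ~0.5 GT/s solar burn rate cited above; the Ceres and Jupiter numbers are the approximate values from the comment, not precise data:

    # Back-of-the-envelope; all inputs are the approximate figures quoted above.
    burn_rate_gt_per_s = 0.5                 # Sun fuses ~1 GT of hydrogen every 2 seconds
    seconds_per_year = 3.15e7

    ceres_h_gt = 200e6 * (1 / 9)             # ~200 million GT of water, ~1/9 of it hydrogen
    jupiter_h_gt = 1e15                      # ~1 quadrillion GT of hydrogen

    burn_per_year = burn_rate_gt_per_s * seconds_per_year   # ~1.6e7 GT/year
    print(ceres_h_gt / burn_per_year)        # Ceres: roughly a year at the Sun's full rate
    print(jupiter_h_gt / burn_per_year)      # Jupiter: tens of millions of years at that rate

A civilization drawing only a small fraction of the Sun's output would of course stretch those timescales by the corresponding factor.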


A star IS a fusion reactor. Already built, already working, already fusing hydrogen. If you want to power an interplanetary civilization, there's zero point in building your own, when you already have a working one, and all you need is to keep adding energy collectors as the demand grows.


> there's zero point in building your own

That's quite a strong assumption.


Or alien civilizations just stop growing eventually before consuming all the energy their star provides.


Or they continue growing but grow inwards (into smaller spaces/dimensions) rather than outwards.


Why would all of them do that, universally?


Because, for example, those that don't stop growing eradicate themselves before they reach the stars. Maybe eventual stasis is the only stable form for civilizations. We don't know either way, but I wanted to point out that curiously dimming stars are not the only possible outcome.


Direct detection of a radio signal or some other EM emission is possible (hence SETI), but given the inverse-square law (double the distance, a quarter of the signal strength), it's more likely we'd detect a planet with excess oxygen, which is believed to occur only as a by-product of life.
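
To illustrate the inverse-square problem, here's a sketch with an assumed 1 MW isotropic transmitter; the transmitter power and distances are illustrative assumptions, not figures from the thread:

    import math

    # Hypothetical example: flux from a 1 MW isotropic transmitter at interstellar range.
    power_w = 1e6                              # assumed transmitter power, watts
    ly_in_m = 9.46e15                          # metres per light year

    for distance_ly in (4.2, 100, 1000):       # Proxima Centauri, then farther out
        d = distance_ly * ly_in_m
        flux = power_w / (4 * math.pi * d**2)  # inverse-square law: watts per square metre
        print(distance_ly, flux)

Doubling the distance cuts the flux to a quarter; a directed, high-gain beam helps, but only if it happens to be pointed at us.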

There are also some fascinating alternative biochemistries that might support life:

https://en.wikipedia.org/wiki/Hypothetical_types_of_biochemi...

How we detect those is another question.


This article reads alarmingly similar to the Kurzgesagt video about the great filter (https://www.youtube.com/watch?v=UjtOGPJ0URM).

Of course, the Great Filter isn't a new concept, but given that this is an argument made by somebody other than the person who coined the term (that was Robin Hanson, I believe), combined with the fact that I read almost the entire article before going on to find the video, I find the two too similar to write off to simple chance.

There have been criticisms of Kurzgesagt in the past, one I can recall being the following, in video form (https://www.youtube.com/watch?v=q2dAGU-VJcE). I've now found out that the original video was retracted and the creator issued a public apology (https://www.youtube.com/watch?v=WcgHGslVdrg), in addition to Kurzgesagt committing to performing more scrutiny over their videos and citing their sources wherever possible.

I do applaud this last step on their part, and I admit that the Great Filter video was published back in 2018, before this commitment was made, but something about the lack of attribution throughout the video still doesn't sit right with me.


FWIW, the Great Filter is a really simple idea, and it's very well known in certain circles (e.g. the LessWrong community, to whose formation Robin Hanson contributed). I personally had no idea Robin coined it, or that Bostrom wrote about it, despite having read texts by both - and yet I could still have given the same explanation well before Kurzgesagt's video aired.

Which is to say, I didn't find the lack of attribution worrisome in this video, because the idea is pretty much "public domain" in the circles interested about the future. It was (as you note) before the time when Kurzgesagt decided to get super-thorough about citing their sources, and I charitably assume that the authors of the video were just aware of the concept of the Great Filter, and not necessarily where they learned about it from. It's a simple idea, easy to explain, and easy to make inferences from - so it may not have occurred to them it's worth tracking down who originally coined the term.


Honestly, I find the argument that some specifically destructive event (or even a collection of events) would be pervasive enough among all intelligent species to serve as a "Great Filter" completely absurd. The chances of any of the examples he listed actually resulting in full extinction (or even a permanent spacefaring handicap) are only moderate at best, not assured, even if they did occur. In fact, I think it's quite reasonable to expect that a species, having undergone such a destructive event and survived it, would actually be pushed into space far more efficiently. With that in mind, it's certainly absurd to actively hope that monumental (possibly species-altering) discoveries are not made.


I always thought the notion of the "von Neumann probe" was odd. What is the motivation for a civilization to do this? It would be extremely destructive and impossible to reverse. No one would set this in motion, right?


Consider that, to a very alien intelligence, humanity could look a lot like von Neumann probes.

If we stay on our current technological trajectory without destroying ourselves, we will spread out into the galaxy, consuming more and more resources in order to produce more copies of ourselves.

From a totally alien point of view, we might be as intelligent, conscious, or sapient to them as trees are to us. That is, we might as well be bio-robots. Von Neumann probes.


As one sci-fi author pointed out, it only takes 1 highly xenophobic species to mess up a galaxy once you have von Neumann capabilities. And that’s an optimistic view compared to Dark Forest thinking.


It could be accidental or natural, not driven by sapience. Bad code that evolves, or an AI, or similar. Even memetic viruses. Give Existence by David Brin a read some time.


Who wouldn't want to live forever?


I've always found these Fermi paradox arguments silly. Just by posing the problem in this way you are making lots of implicit assumptions which you pretend aren't there by dressing up the rest of your argument in faux rigor.


What are some of the implicit assumptions in your view?


This is newly interesting in light of the recent Pentagon UFO revelations (https://www.nytimes.com/2020/07/23/us/politics/pentagon-ufo-...). While these have low credibility, and even if true are likely not extraterrestrial, finding life on Mars would change this calculation.


> these have low credibility

Lower than low, to the point that whenever somebody mentions them, I assume there was just a "wish for it to be true" and stop there.

When checked, it was just a "rerelease" of old, already published material (one can also wonder to whose benefit that distraction was) which had already been carefully debunked by Mick West:

https://www.youtube.com/watch?v=Q7jcBGLIpus

Mick West's full playlist with explanations:

https://www.youtube.com/playlist?list=PL-4ZqTjKmhn5Qr0tCHkCV...

An article mentioning Mick West:

https://petapixel.com/2020/04/28/that-navy-ufo-footage-has-a...

Also, one day after NYT published their article "No Longer in Shadows, Pentagon’s U.F.O. Unit Will Make Some Findings Public" they published a correction too:

"Correction: July 24, 2020 An earlier version of this article inaccurately rendered remarks attributed to Harry Reid, the retired Senate majority leader from Nevada. Mr. Reid said he believed that crashes of objects of unknown origin may have occurred and that retrieved materials should be studied; he did not say that crashes had occurred and that retrieved materials had been studied secretly for decades. An earlier version also misstated the frequency with which the director of national intelligence is supposed to report on unidentified aerial phenomena. It is 180 days after enactment of the intelligence authorization act, not every six months."

Emphasis mine. Now imagine how many articles in how many media outlets were written before the above correction, and that will explain what most consumers got as the message (namely, the first, utterly fake version with fake "crashes").


Not sure I buy this argument... here are some random thoughts on why:

1) Someone has to be first. Let's say that there is no filter, and every species that doesn't destroy itself or get destroyed will eventually become technologically advanced enough to travel around meaningfully in space. There is a certain amount of time it takes for this to happen. Supposedly the universe is ~14B years old, and Earth is ~4.5B years old. It takes a certain amount of time for abiogenesis, a certain amount of time for life to evolve, and for a species to emerge "victorious" long enough to start conquering the universe before the next extinction event. So we could be the "first", or in the top 20% anyway, and it'll still take a while for that top 20% to get anywhere where they can start detecting or traveling anywhere.

2) What is the maximum distance from which any extraterrestrial life could observe our technology/planet currently? Space is huge, like incomprehensibly huge... even if we could go anywhere ourselves right now, would we go visit every single planet everywhere without a reason to? And how long would that take? Maybe there hasn't been enough time for a civilization to a) detect us from hundreds of light years away and b) travel to us. Maybe detection of life from far away is impossible past a certain point, but traveling there isn't (since it just takes time).

So I guess my thought is, yeah, we might not find life in our solar system, but we might. But even if we do, I don't necessarily think we'll get filtered out either.


I'm not so confident we're in the top 20%. Our galaxy is middle-aged. Our star is only 4.6 billion years old in a galaxy that is 13.51 billion years old.

Several stars blew up before we got the Sun, which means there were systems and planets right in our neighborhood long before ours, and that's just one star in the entire Milky Way.

Even at 1% of the speed of light, it would take about 10 million years to travel across the Milky Way, which, relative to the age of the galaxy, is nothing.


Sure, but let's say you're on the other side of the Milky Way. You can travel at 1% of the speed of light. Why would you come to Earth or anywhere near it? It takes millions of years to get here, so unless you know something is there, that would be an insane trip to make.


Well, they wouldn't be looking for Earth specifically, they'd be looking for more resources which are everywhere. An intelligent and energy-hungry enough species would want to harness the power of every sun it can reach.

That growth would be exponential because acquiring resources provides more energy which allows them to acquire more resources even faster.

If an intelligent species doubled the number of stars they control every 2 million years, they'd have every star in the Milky Way in well under 100 million years.

That's still less than 1% of the lifetime of the galaxy.
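
The doubling arithmetic, with the 2-million-year doubling time assumed above and an assumed ~250 billion stars in the galaxy (both inputs are rough guesses, not measurements):

    import math

    # Assumed figures: ~250 billion stars, doubling the number controlled every 2 My.
    stars_in_galaxy = 250e9
    doubling_time_my = 2.0

    doublings = math.log2(stars_in_galaxy)      # ~38 doublings starting from one star
    time_my = doublings * doubling_time_my      # ~75 million years
    galaxy_age_my = 13_510
    print(time_my, time_my / galaxy_age_my)     # ~75 My, well under 1% of the galaxy's age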


The thought that the Great Filter is behind us and we kill ourselves anyway is more depressing than the thought that the Great Filter is ahead of us and we kill ourselves. I'm not very concerned with which type of intelligent life expands into the universe, so long as at least one type does it successfully. Being human, AGI, or alien does not matter. So finding ETs or creating super-powerful AGI would be a win in my book, because it increases the chances that someone will make it.


The finding of extraterrestrial life wouldn't destroy my Christian worldview, but it would make things much less tidy. In his Mere Christianity, as I recall, C. S. Lewis opined that there probably was other intelligent life in the universe (i.e., other than God and the angels) and it left a bad taste in my mouth.


I don't agree with the assumption of a single great filter. Why not many filters, each sufficient on its own? Maybe one occurrence of any of them is enough to locally halt, forever, the evolution toward a space-faring civilization, or a succession of arbitrary filters at arbitrary intervals suffices.


The author states the great filter could be composed of multiple filters towards the end of the paper.

> Nothing in the above reasoning precludes the Great Filter from being located both behind us and ahead of us. It might both be extremely improbable that intelligent life should arise on any given planet, and very improbable that intelligent life, once evolved, should succeed in becoming advanced enough to colonize space.


Thanks, got bored and stopped reading before I got to that.


If it were discovered that the laws of nature prevent a species from expanding into the universe through whatever mechanisms are plausible, what would that do to the idea of free will? Does free will require unbounded possibilities, or would an upper bound force us to rethink our idea of the universe?


There is no such thing as free will, is there? Decisions are a consequence of physical brain state.


The big assumption here that really bothers me is that any civilization that developed the technology to expand into the galaxy would've found us by now. That's so bogus. We can probably expand to Mars, but we're still bound by the speed of light.


What do we typically believe now that we would no longer believe if there were an advanced species benevolently not hunting us for sport?

I am trying to sort out the moral and ethical implications of humans not being the most intelligent or advanced species we are aware of.



