Creepy Study Suggests AI Is the Reason We've Never Found Aliens (sciencealert.com)
13 points by unnouinceput 13 days ago | 37 comments





Robert J. Sawyer, the Canadian science fiction author, wrote a more entertaining version called Factoring Humanity.

Anyway, when we invented nukes, nukes were the great filter. Now that AI is looking promising, it's the great filter. I wonder what it will be in 40 years.


It's likely nukes are the Great Filter, and AI, and anything else one can think of. All of the Great Filters, all of the time. It's Great Filters all the way down.

This has the same issue I have with any theory where the great filter is a technology that evolves roughly contemporaneously with spaceflight (like atomic weapons): it's just not probable that _every_ civilisation would destroy itself before becoming multi-planetary. With the events so close together in time, some civilisations would escape destruction, if only through dumb luck.

Personally I think the lack of visible galactic civilisations is more plausibly explained by a combination of life being rare (and probably brought to Earth via panspermia after billions of years of evolution elsewhere) and both multicellular and sentient life also being rare.


> Personally I think the lack of visible galactic civilisations is more plausibly explained by a combination of life being rare (and probably brought to Earth via panspermia after billions of years of evolution elsewhere) and both multicellular and sentient life also being rare.

Besides these facts, space is big and the timescales of evolution are long.

Let's say there's a planet with advanced multicellular life just a dozen light years away. They're roughly at the same development level as us, with a post-atomic/semiconductor level of technology. We could not currently detect them unless they were beaming an extremely directional radio signal at us (and accounting for proper motion between our star systems). We're not going to pick up their TV signals; those would fade below the cosmic background by about Saturn's orbit. Even high-powered AM radio won't be coherent past Pluto. The odds that a powerful military or weather radar beam crosses our section of their sky and is detectable by us are extremely low.
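As a rough back-of-envelope (the transmitter power, distance and bandwidth below are assumed, illustrative numbers):

    # Flux density of a ~1 MW broadcast transmitter as seen from 12 light years away.
    import math

    P_tx = 1e6                  # transmitter power in watts (assumed; a large TV/AM station)
    d = 12 * 9.461e15           # 12 light years in metres
    bandwidth = 6e6             # ~6 MHz TV channel (assumed)

    flux = P_tx / (4 * math.pi * d**2)       # W/m^2, assuming isotropic radiation
    flux_density = flux / bandwidth          # W/m^2/Hz
    print(f"{flux_density / 1e-26:.1e} Jy")  # 1 jansky = 1e-26 W/m^2/Hz  ->  ~1e-10 Jy

That comes out around four orders of magnitude below the microjansky levels the deepest radio surveys reach, so an untargeted leak like broadcast TV really is invisible at that distance.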

We might be able to detect spectra of industrial pollutants or biomarkers if that system is oriented such that the planet transits its host star.

That assumes there's a project with the funding and appropriate telescopes to do the looking.

Space is big and terrestrial planet hunting is under-funded. It would take a lot of luck to see intelligent life next door. Postulating super AI or handwavy explanations for not finding aliens everywhere is silly. The boring answer is space is huge and aliens are almost impossible to see in the best case. We'd be very lucky to see ourselves from Alpha Centauri even knowing where to look.


> I think the lack of visible galactic civilisations is more plausibly explained by a combination of life being rare (and probably brought to Earth via panspermia after billions of years of evolution elsewhere) and both multicellular and sentient life also being rare.

I always put this out here when this sort of topic comes up, but it's certainly possible that, given life itself is apparently rare and intelligent life definitely rarer still, we could be among the first and most advanced so far.

We assume that there must be others, far more advanced, but everything we know about life so far indicates that, in the one place where we know it exists, it took nearly 4B forms of life to produce the one form that can apparently even conceptualize that there may be others out there, and it apparently took over a third of the age of the universe for us to get as far as we have.


Atomic weapons emerged slightly before the first craft to leave the Earth’s atmosphere, but we’re still nowhere near interstellar travel. On the other hand, the difficulty of space travel is largely a function of the escape velocity, which is dependent on the gravitational pull of your planet and solar system (both of which are variable).

One issue with your claim though is that once you have interstellar flight, you certainly have weapons of mass destruction. Even crashing a coke-can sized meteor into the Earth at 1% of the speed of light would cause major destruction.
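For scale, a minimal sketch (the mass is an assumption, roughly a coke-can volume of iron; relativistic corrections at 0.01c are negligible):

    # Kinetic energy of a coke-can-sized iron slug arriving at 1% of light speed.
    c = 299_792_458.0            # speed of light, m/s
    v = 0.01 * c
    mass = 0.33e-3 * 7870        # ~0.33 L of iron at ~7870 kg/m^3  ->  ~2.6 kg

    ke = 0.5 * mass * v**2                 # ~1.2e13 J
    print(f"{ke / 4.184e12:.1f} kt TNT")   # 1 kt TNT = 4.184e12 J  ->  ~2.8 kt

A few kilotons of TNT equivalent per can, and nothing stops you from scaling the mass up.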


It also feels like there's one pretty damn critical building block of modern society and space travel that would be galactically rare: hydrocarbon fuel.

If we didn't have oil, coal, etc. in the insane quantities we have them in, we'd be nothing. Hydrocarbon fuel singularly enables global transportation, which has enabled global information sharing, which enables everything else. It's critical to the production of plastic, which is the most important material humanity has developed, period. It's the only way we know how to get into space. And it's unlikely that planet-scale deployments of renewable tech like wind and solar would be possible without oil to enable the manufacturing, mining, and deployment, plus supporting systems like batteries.

If we had evolved in the first hundred-million-year phase of life on the planet, there would have been no significant life before us to rot underground for a few hundred million years to make the hydrocarbons we use today. Or, there would have been so little that the reservoirs of it would be used up after a few decades or a century of burning it in lamps (every planet gets dark!), before the intelligent life realizes its true value (propulsion and plastic).

Another, totally unrelated aspect of the problem I think about: take the IQ spectrum, split it in half at 100, and kill all the humans on the >100 side. The humans that are left (<100) would be very unlikely to leave the planet's surface in their lifetimes, and because genetics are a big factor in inherent intelligence, their ability to get to space would be delayed, possibly indefinitely.

Put this another way: space travel is on the razor's edge of our intelligence capacity. And it's not enough to have a three-standard-deviation genius come along every couple of generations; that's just a Galileo being born in the 1500s, who will discover and document things for sure. It takes many highly intelligent, motivated, and enabled (see: oil) people to unlock space flight, computing, nuclear power, etc. It's easy to imagine a human civilization that sits at a median IQ of what we'd call 60-70 and literally never pursues these kinds of high technology. You can still get really far with a 60-70 IQ: you get nearly all of the evolutionary advantages a 100+ IQ person would have; you can make hunting traps, outsmart prey, farm, even mine and smith metals; but you probably aren't sending rockets into orbit.
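To illustrate how quickly that high tail collapses as the median shifts (assuming the conventional normal distribution with SD 15; the thresholds and medians are just illustrative):

    # Fraction of a population above an IQ threshold, for different medians.
    from math import erfc, sqrt

    def fraction_above(threshold, median, sd=15.0):
        z = (threshold - median) / sd
        return 0.5 * erfc(z / sqrt(2))     # survival function of a normal distribution

    for median in (100, 85, 65):
        print(median, f"{fraction_above(130, median):.1e}")
    # 100 -> ~2.3e-02, 85 -> ~1.3e-03, 65 -> ~7.3e-06 (a handful of people per million)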

My take is: The universe and even our galaxy is probably surprisingly full of bronze-age era civilizations; and they've probably been that way for centuries. The Great Filter is most probably a combination of Ultra-High Quantities of Hydrocarbon Fuel, plus widespread median Academic-level Intelligence.


I find it delusional to think we're in any way close to being multi-planetary. Mars hardly has an atmosphere and you can't survive outside a pressurized environment. You could build a base there, but not a city, even before considering the number of people needed for a self-sufficient community and the amount of resources needed to transport what's required to jumpstart it.

Which is why "biological life" doesn't make sense in space in the first place. It's just so inefficient and unfit for purpose.

It would be much easier to send "AI" across space and then print organic bodies if desired.


To me this reads like an old guy scared of new technology he does not understand, with a track record in astronomy (so easily able to get papers published), pulling numbers out of thin air with regard to the hypothetical emergence of artificial superintelligence within the next 15 years, then plugging this random guess into the Drake equation and getting an answer close to one end of the scale.
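For reference, the Drake equation is just a product of seven factors, so whichever factor you guess most pessimistically dominates the answer. The numbers below are placeholder guesses purely to show that sensitivity:

    # N = R* * fp * ne * fl * fi * fc * L
    def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    common = dict(R_star=1.5, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.5, L=1000)
    print(drake(f_c=0.2,  **common))   # optimistic guess for f_c   -> ~30 civilisations
    print(drake(f_c=1e-4, **common))   # pessimistic guess for f_c  -> ~0.015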

I've said it before and I'll say it again: despite the impressive advancements in current LLMs, there is absolutely zero evidence supporting the notion that these AIs want to do anything at all. They simply respond to prompts. They don't even remember anything; the entire previous conversation has to be included in every prompt. There is no technical mechanism by which they are able to possess or attain agency.
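Concretely, with a typical chat-completion API the only "memory" is a list the client keeps and re-sends on every request. The sketch below uses the OpenAI Python client as one example of the pattern; the model name is illustrative:

    # All "memory" lives in this list on the client side, not in the model.
    from openai import OpenAI

    client = OpenAI()    # assumes OPENAI_API_KEY is set in the environment
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(user_message, model="gpt-4o"):
        history.append({"role": "user", "content": user_message})
        reply = client.chat.completions.create(model=model, messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer    # the full history goes back over the wire on every single call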

My best bet for the explanation of the Fermi paradox is simply the intersection of economics and interstellar travel timescales. The payback time for any "interstellar colonisation" is measured in centuries or millennia, and it would require tens of thousands of people to condemn both themselves and their children and grandchildren to never seeing anything but the inside of a spaceship. The only factor extreme enough to motivate humans to such endeavours is impending planetary destruction, which does not result in anything like exponential growth.


> The payback time for any "interstellar colonisation" is measured in centuries or millenia

Payback? I’m not sure that you’d expect any payback at all; it’s not like shipping stuff _back_ would ever make more economic sense than just making the stuff at the point of origin. The space opera concept of shipping raw materials from interstellar colonies is just absurd on the face of it (unless you have really magic tech, I suppose).


How this argument reads to someone concerned about the trajectory of AI and the governance systems around it (current or future): There are various technical problems preventing these systems from being as scary as you imagine, but don't worry because we're working hard to solve them!

Also "agency" isn't a requirement for almost any AI doom scenario I've heard of. Goals and the formation of instrumental goals is all it takes.


But “goals” is doing an insane amount of heavy lifting here.

In order to have goals you need to have a model of the world. LLMs do not have a model of the world.

Your goal then is changing this world somehow so you need actuators.

So you need a model and actuators.

All we have is a stream of words appended in a probabilistic manner.
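Stripped of the neural network, that loop is literally this (a toy stand-in, not any real model):

    # Toy autoregressive generation: sample a next token, append it, repeat.
    import random

    def next_token_distribution(context):
        # stand-in for the model; a real LLM computes these probabilities from the context
        return {"the": 0.4, "cat": 0.3, "sat": 0.2, ".": 0.1}

    def generate(prompt, n_tokens=10):
        tokens = prompt.split()
        for _ in range(n_tokens):
            dist = next_token_distribution(tokens)
            words, probs = zip(*dist.items())
            tokens.append(random.choices(words, weights=probs, k=1)[0])
        return " ".join(tokens)

    print(generate("the"))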


If only there were some examples of streams of words being used to enlist people to build (or become) actuators for evil purposes…

It's not that there are "technical problems" preventing anything. It's that these systems (LLMs) do not possess the fundamental properties that are necessary for being scary, and we have no clue at all how the systems would get those properties.

Current "hyperintelligent AI" fears are exactly like the "grey goo" fear when nanotechnology was a buzzword in the early 2000s. We can all agree that if you take the fundamental vague concept "AI" or "nanotechnology" and pile on a couple of wheelbarrows full of hypotheticals, you get to something scary. That doesn't mean it has any relevance for the universe we currently exist in.


And yet you qualify your statement with “current.” Why?

People anticipating future risks and discussing how and when to detect and mitigate them is actually exactly what we should be doing.


This. LLMs aren't very good and are just predictive text. And I say that as someone who has a lot of experience using them. We don't have anything to fear from AI for quite some time.

As an aside, the AI-generated search snippets in Google are messing me up. They seem good for random things I don't know about, but they're awful for programming, where I'm looking for a specific solution to a specific problem. They always just give me the "right" way, no matter how much it doesn't really work like that.

An example would be trying to figure out how to solve a weird edge case or error, something Stack Overflow is amazing at, while the snippets just feed me generic instructions, which honestly fooled me for a while before I learned to ignore them.


Yeah, the article is just a trojan horse to say, "it's scary that modern militaries are starting to use AI."

Given how inhospitable space is to organisms like ours, it seems almost guaranteed that any space-faring entity must be what we call "artificial". (Whether that be silicon computers or something else.)

And at that point, probably some sort of hive mind or "super intelligence" as well. (Since having billions of independent actors is very inefficient -- so much redundancy and annoying coordination problems to deal with).

It/they would probably have mastery of biotechnology and could print whatever life forms they want at the last mile if needed. Maybe purely for the entertainment purposes of embodying some highly constrained consciousness. Or for interfacing with new species encountered. Or just for downloading the stored consciousness of their pets from the home world so they can run around again.

It's a pretty minor detail as to whether the biological life forms that created the thing still exist happily on their home planet, or uploaded their consciousness into it, or were exterminated by it or themselves or some unavoidable natural disaster... Biology just doesn't have scaling power.


Space is inhospitable for computing machines as well; no computing machine has outlasted a human lifespan without Ship-of-Theseus amounts of maintenance. It's a simple fact of the universe: all systems must be designed with their direct environments in mind. Lots of our technology relies heavily on the magnetosphere, for example, or on the fact that the atmosphere is not made of sulfuric acid.

On the topic of biotechnology, it seems to me we are closer to engineering biological systems than we are to AGI. Bonus points, biomass is a readily available resource with which to build them.

Either way, my core point is that we assume non-living systems are more resilient, and I don't believe that is the case.


Sure, cosmic rays and micrometeoroids are a problem for everyone. But computers can deal with much wider temperature and pressure and gravity ranges, eat energy directly, are easy to backup/restore, and don't get bored.

Are we really closer to solving longevity (and psychology), or having reliable hibernation / cold sleep etc, than we are to AGI? Maybe both are equally fantastical, but the latter at least seems to have a practical path forward.

We could potentially bioengineer more space-adapted human descendants (or "brain in a vat" / human-computer hybrids etc) but that will be politically infeasible for a long time to come.


I would imagine that alien civilisations would also do this kind of study, and pre-emptively prevent the extinction scenario from happening.

We have a rather lousy history of completely avoiding risks that we predicted.

Environmental issues consistently get addressed after people start noticing harms, but AI may not give civilizations time to react after they fuck up.


Yeah, the article makes no sense. It suggests we need to try to prevent the problem by 1) regulating AI development and 2) accelerating our progress towards becoming multi-planetary.

Well, if AI really is such an effective great filter, it would not be so easily mitigated by such predictable solutions.

If you can't implement those mitigations because of politics / economics / etc, then the great filter is in your social dysfunction, not the AI.


Another submission[1] on hn’s front page talks about whale songs being more structured than we thought.

If we cannot understand or even perceive information in messages from a species from the same planet, then I’d guess we’re incapable of finding (noticing) non earth signals.

Sure, an interstellar message could be tuned for easy recognition, but there are plenty of regular patterns in the observable universe already.

[1]: https://news.ycombinator.com/item?id=40322267


But you disprove yourself with the same observation, no? We can detect language pretty reliably without being able to decode it.

No, we only happened to detect a language; and still, we thought it was simpler than it actually is, even though we are on the same planet and carbon-based, with similar lifespans, sizes, etc.

I wouldn't say we can reliably detect languages; I'm sure there are many more communications between Earth species that we're still not seeing, not noticing, or otherwise misclassifying.


All language is going to be governed by information theory, though, so I would think that if we have trouble detecting language then either a) we're not looking very hard, or b) the comms are so efficient they approach being indistinguishable from white noise.

The latter category seems unlikely with animals, but I could see it being the case for hive minds leveraging emergent properties of simple behaviours, like bee dances or ant pheromones.
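A rough way to see (b): the closer a message gets to its information-theoretic limit, the more its byte statistics look like noise. The word list and sizes below are arbitrary, just for illustration:

    # Byte entropy of readable text vs. the same text compressed vs. random bytes.
    import math, os, random, zlib
    from collections import Counter

    def bits_per_byte(data):
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    words = ["signal", "noise", "whale", "song", "entropy", "message", "pattern", "star"]
    text = " ".join(random.choice(words) for _ in range(5000)).encode()

    print(bits_per_byte(text))                    # ~4 bits/byte: visibly structured
    print(bits_per_byte(zlib.compress(text, 9)))  # ~7+ bits/byte: close to the noise ceiling
    print(bits_per_byte(os.urandom(len(text))))   # ~8 bits/byte: actual noise

An efficiently coded interstellar transmission would sit near that ceiling, which is exactly what makes it hard to flag as a signal in the first place.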


Maybe even both a&b can happen at once. The article also mentions Fermi’s paradox and “the great filter”, a proposed solution. There are more[1], notably: “our search methodologies are flawed and we are not searching for the correct indicators”.

[1]: https://en.m.wikipedia.org/wiki/Search_for_extraterrestrial_...


The AI craze is incredible.

I made a "joke" here recently playing with the idea that the time machine will never be created not because we would see people from the future now if it could be created [1] BUT because we don't see AGI from the future ala Terminator.

[1] https://en.wikipedia.org/wiki/Chronology_protection_conjectu...


" It possesses the potential to enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI."

Does it?

And there may be a limit to intelligence anyway. In humans extreme intelligence often seems to pair with some kind of psychosis, for example.

I think all we can say is IF AGI is possible, it can potentially be made to be very much faster than humans.


> Could AI be the universe's "great filter" – a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?

I feel like wrecking the biosphere that makes possible a civilization in the first place should be higher on the list of great filters than AI.


If humans ever had to compete with a hypothetical ASI, why would they be limited to their own brainpower? It's far more likely that we would pit ASIs against each other, many of them under human control.

And the ASIs would not be able to perceive that scenario and surreptitiously work together for their own purposes anyway?

The reason it's called "singularity" is because you can make 0 reliable predictions or even imagine what lies on the other side. You essentially have to substitute the word "ASI" for "god", so anything goes.


The Fermi Paradox implies that some percentage of advanced civilizations would desire to physically explore many star systems.

This assumes that advancements in physics don't make that activity less interesting to the aliens.


Honestly, I would expect VC to be a much more impactful change to a civilization than AI. We have plenty of evidence that greed destroys culture.

The reason we have never found aliens is because they don't exist


