
Oh! I think I know what's going on here. This isn't a logical argument so much as it is a heuristic - and I think it's a very functional one.

Consider Pascal's Wager. Should you follow God? Consider the consequences. If there is no God, following him might waste some time. But if there is a God, the consequences are as stark as they could be! Heaven and hell wipe out every other consideration, no matter what (nonzero) odds you assign God's existence.

There is something wrong with this argument: whoever sets the terms of the payoff schedule can manipulate you. A made-up heaven can be as nice, and a made-up hell as nasty, as necessary to tilt the reader a certain way.

Of course, it's hardly just religion that does this. Authorities of every stripe do it! But most obviously, politicians, propagandists, and similar influencers do it. They talk up bogeymen to scare and manipulate people.

Now, if a bogeyman hails from your field, it is absolutely your responsibility to know about it and warn everyone. But if it's not your field, the heuristic is to... not care about the fire until you see smoke.

This seems like an entirely irrational strategy, and I agree that it is if you look narrowly at determining truth. But I think the attitude is entirely correct and functional when you take attention, effort, and adversarial behavior into account. Obviously, refusing to acknowledge potential catastrophes makes you much harder to manipulate. But there's an even more basic reason: if you educated yourself on every possible catastrophe, you would magnify their impact to the point that you never lived your life.

The heuristic you're running into when you ask "Aren't you worried about the AI apocalypse?" is probably easier to relate to if I ask, "Aren't you worried about going to hell?" Hell sounds very bad, but then again, you're the one who drew up the payoff chart. There are, like, so many possible religions to think about, and someone with an apocalyptic issue wants to make it my issue every five minutes, never mind all the stuff my government wants me afraid of. Call me when you have some brimstone I can actually smell myself.

This applies doubly when the catastrophe in question is reasoned about in a style that resembles Pascal's Wager (as is often, but not always, the case with AI Apocalypse stuff). You want something less philosophically Bayesian and more tangible - the equivalent of Germany invading Poland, so that anyone can see the problem is real.

Older people often phrase this heuristic as, "Well, if that awful thing happens, they can come get me here in the garden." Meaning: I refuse to let the bad times steal even one minute of the good ones.

The heuristic is timeless because the technique is timeless. To the political move Bogeyman, we have the counter-jitsu Refuse To Care About Bogeymen. I don't think the people pointing to non-catastrophes explicitly think of it in those terms - I think the behavior is more instinctive than that - but I think the intuition behind the argument is less "I know this potential catastrophe isn't real" and more "worrying about potential catastrophes is a costly and losing game".



> the argument is less "I know this potential catastrophe isn't real" and more "worrying about potential catastrophes is a costly and losing game".

Thanks, this succinctly puts what I have circled around a few times when discussing this with friends.


This is excellent analysis and expresses what I was thinking on the subject better than I could have. Thanks!


I sometimes draw an analogy between low-probability catastrophizing and the Drake Equation.

The Drake Equation attempts to estimate how many civilizations with intelligent life there should be in our galaxy, by multiplying together a chain of rates and probabilities. But if you put error bars on the various estimates, what you find is that the overall calculation can yield anything from "intelligent life should be practically everywhere" to "humanity is some kind of fluke; even we shouldn't exist." Which means that using the Drake Equation to make arguments about what we "should" be doing about intelligent alien life is useless. It doesn't tell us anything meaningful; it's just a way for some people to justify their own priors.
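A minimal sketch of that point, using made-up pessimistic/optimistic ranges for each factor (these numbers are illustrative, not sourced astronomical estimates): multiplying the low ends versus the high ends shows the answer swinging across tens of orders of magnitude, which is exactly why the result carries no actionable information.

    # Sketch: error bars on each Drake Equation factor swamp the conclusion.
    # All (pessimistic, optimistic) ranges below are illustrative assumptions.
    from math import prod

    factors = {
        "R* (star formation rate / yr)":           (1.0,   10.0),
        "fp (fraction of stars with planets)":     (0.1,   1.0),
        "ne (habitable planets per system)":       (0.01,  2.0),
        "fl (fraction developing life)":           (1e-6,  1.0),
        "fi (fraction developing intelligence)":   (1e-6,  1.0),
        "fc (fraction that communicate)":          (0.01,  1.0),
        "L  (years a civilization is detectable)": (1e2,   1e8),
    }

    low  = prod(lo for lo, hi in factors.values())
    high = prod(hi for lo, hi in factors.values())

    print(f"pessimistic estimate: {low:.1e} civilizations")   # ~1e-15
    print(f"optimistic  estimate: {high:.1e} civilizations")  # ~2e+09
    # Roughly 24 orders of magnitude apart: the "expected" number is
    # whatever your priors want it to be.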

There are a lot of things I can spend my time, money, energy, and attention on. Some of them are entertainment (sports, video games, TV/movies, music). Some are serious day-to-day life (family, parenting, work, chores). Some are trying to have a positive influence on the broader world (political/religious advocacy, voting, charity, counseling, encouragement). Some are planning for negative outcomes to protect myself and my family (good insurance, canned food, filtered water storage, ways to create winter heat, an evacuation plan or two).

Someone using Drake-Equation-type reasoning can suggest that their particular negative-outcome scenario has such high costs that I should expend literally everything to mitigate it -- but as soon as I allow for error bars, and for the alternatives they might get me to ignore if I go all-in on their issue, my whole thought process changes. The AI singularity might be so dangerous that I should invest everything to stop it, or it might be such a nothingburger that it's not worth the Doritos I ate while writing this comment. Without a clear way to distinguish, I should just filter it out. If they can't do enough in the time they've already had to convince me it's actually a real issue, they haven't earned the right to my attention.


I suppose there's a numerics take on the situation: Epsilon times negative infinity is so poorly conditioned that we procedurally set it to zero until we can get some real numbers.
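A minimal sketch of that arithmetic in IEEE-754 floats (the probabilities and the infinite payoff are illustrative, not anyone's actual estimates): exactly-zero times negative infinity is NaN, and any nonzero epsilon times negative infinity is negative infinity, so whoever picks the payoff picks the answer.

    # Sketch: "epsilon times negative infinity" in IEEE-754 arithmetic.
    import math

    payoff = -math.inf  # "as bad as the chart-maker wants it to be"

    for p in (0.0, 1e-300, 1e-10, 0.01):
        print(f"{p!r:8} * -inf = {p * payoff}")

    # 0.0      * -inf = nan   (indeterminate: no information at all)
    # 1e-300   * -inf = -inf  (any nonzero epsilon forces the conclusion)
    # 1e-10    * -inf = -inf
    # 0.01     * -inf = -inf
    # Treating the product as zero until finite numbers show up is the
    # only stable policy.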


Indeterminate. There's no information to be gleaned. And we don't turn our lives upside down on no information.



