Hacker News

>Maybe it's actually going to be rather benign and more boring than expected

Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.

Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.

That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.

Above all I suspect that the Internet rationalists are basically a 30-year-long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.






> I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.

I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response taught to the generation that lived through the Cold War, so that's how they act. That was in regard to climate change, but I can easily see it applying to AI as well (even though I personally believe the whole "AI eats the world" arc is only so popular due to marketing efforts by the corresponding industry).


It's possible, but I think that's just a general human response when you feel like you're trapped between a rock and a hard place.

I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.


It's also reasonable as a Pascal's wager type of thing. If you can't affect the outcome, just prepare for the eventuality that it will work out because if it doesn't you'll be dead anyway.

> like a burger chain advertising itself on the brutality of its factory farms

It’s rather more like the burger chain decrying the brutality as a reason for other burger chains to be heavily regulated (don’t worry about them; they’re the guys you can trust and/or they are practically already holding themselves to strict ethical standards) while talking about how delicious and juicy their meat patties are.

I agree about the general sentiment that the technology is dangerous, especially from a “oops, our agent stopped all of the power plants” angle. Just... the messaging from the big AI services is both that and marketing hype. It seems to get people to disregard real dangers as “marketing” and I think that’s because the actual marketing puts an outsized emphasis on the dangers. (Don’t hook your agent up to your power plant controls, please and thank you. But I somehow doubt that OpenAI and Anthropic will not be there, ready and willing, despite the dangers they are oh so aware of.)


That is how I normally hear the marketing theory described when people go into it in more detail.

I'm glad you ran with my burger chain metaphor, because it illustrates why I think it doesn't work for an AI company to intentionally advertise itself with this kind of strategy, let alone for ~all the big players in an industry to do so. Any ordinary member of the burger-eating public would be turned off by such an advertisement. Many would quickly notice the unsaid thing; those not sharp enough to would probably just see the descriptions of torture and be less likely, on the margin, to eat there instead of just, like, safe happy McDonald's. Analogously, we have to ask ourselves why there seems to be no Andreessen-esque major AI lab that just says loud and proud, "Ignore those lunatics. Everything's going to be fine. Buy from us." That seems like it would be an excellent counterpositioning strategy in the 2025 ecosystem.

Moreover, if the marketing theory is to be believed, these kinds of pseudo-ads are not targeted at the lowest common denominator of society. Their target is people with sway over actual regulation. Such an audience is going to be much more discerning, for the same reason a machinist vets his CNC machine advertisements much more aggressively than, say, the TVs on display at Best Buy. The more skin you have in the game, the more sense it makes to stop and analyze.

Some would argue the AI companies know all this, and are gambling on the chance that they are able to get regulation through and get enshrined as some state-mandated AI monopoly. A well-owner does well in a desert, after all. I grant this is a possibility. I do not think the likelihood of success here is very high. It was higher back when OpenAI was the only game in town, and I had more sympathy for this theory back in 2020-2021, but each serious new entrant cuts this chance down multiplicatively across the board, and by now I don't think anyone could seriously pitch that to their investors as their exit strategy and expect a round of applause for their brilliance.


Do you think opposing the Manhattan Project would have led to a better world?

Note: my assumption is not that the bomb would not have been developed, only that by opposing the Manhattan Project the USA would not have developed it first.


My answer is yes, with low-moderate certainty. I still think the USA would have developed it first, and I think this is what is suggested to us by the GDP trends of the US versus basically everywhere else post-WW2.

Take this all with more than a few grains of salt. I am by no means an expert in this territory, but I don't shy away from thinking about something just because I start out sounding like an idiot. Also take into account that this is post hoc: 1940 Manhattan Project me would obviously have had much, much less information to work with about how things actually panned out. My answer to this question should be seen as separate from the question of whether I think dodging the Manhattan Project would have been a good bet, so to speak.

Most historians agree that Japan was going to lose one way or another by that point in the war. Truman argued that dropping the bomb killed fewer people in Japan than continuing the war would have, which I agree with, but that's a relatively small factor in the calculation.

The much bigger factor is that the success of the Manhattan Project as an ultimate existence proof for the possibility of such weaponry almost certainly galvanized the Soviet Union to get on the path of building it themselves much more aggressively. A Cold War where one side takes substantially longer to get to nukes is mostly an obvious x-risk win. Counterfactual worlds can never be seen with certainty, but it wouldn't surprise me if the mere existence proof led the USSR to actually create their own atomic weapons a decade faster than they would have otherwise, by e.g. motivating Stalin to actually care about what all those eggheads were up to (much to the terror of said eggheads).

This is a bad argument to advance when we're arguing about, e.g., the invention of calculus, which as you'll recall was coinvented in at least two places (Newton with fluxions, Leibniz with infinitesimals, I think), but calculus was the kind of thing that could be invented by one smart guy in his home office. It's a much more believable argument when the only actors who could have made the bomb were huge state-sponsored laboratories in the US and the USSR.

If you buy that, that's 5 to 10 extra years the US would have had to do something like the Manhattan Project, but in a much more controlled, peacetime environment. The atmosphere-ignition prior would have been stamped out pretty quickly by physicists' later calculations to the contrary, and after that, research would have gone back to full steam ahead. I think the counterfactual US would have had the atom bomb by the early 1950s at the absolute latest, given the talent it had in a Manhattan-Project-less world, just with much greater safety protocols, and without the Russians learning of it in such blatant fashion. Our ability to detect such weapons being developed elsewhere would likely also have stayed far ahead of the Russians'. You could easily imagine a situation where the Russians finally create a weapon in 1960 that was almost as powerful as what we had cooked up by 1950.

Then you're more or less back to an old-fashioned deterrence model, with the twist that the Russians don't actually know exactly how powerful the weapons the US has developed are. This is an absolute good: you can always choose to reveal just a lower bound of how powerful your side is, if you think you need to, or you can choose to remain totally cloaked in darkness. If you buy the narrative that the US were "the good guys" (I do!) and wouldn't risk Armageddon just because they had the upper hand, then this seems like it can only make the future arc of the (already shorter) Cold War all the safer.

I am assuming Gorbachev or someone still called this whole circus off around the late 80s-early 90s. Gotta trim the butterfly effect somewhere.



