I re-read your comment and it was clearer than I first thought, so I edited my response accordingly.

Please also respond to the main arguments I made and linked to, if you can.



Oppenheimer at one point believed there was some possibility the atomic bomb would set the atmosphere on fire and kill all humans. At least that particular fear was falsifiable, though: other physicists ran the calculations and concluded it was impossible.

Do these beliefs about the dangerousness of AI possess even that quality? Are they falsifiable? No.

These arguments beg the question. They assume as a given something that cannot be disproven, and are thus pure statements of belief.


Lack of falsifiability (even if these claims really are unfalsifiable, which is not a given) is not a license for inaction.

The world is not a science experiment.

And we know it's plausible that the emergence of Homo sapiens helped cause the extinction of the Neanderthals.


Problem is, AFAIK the math tells us rather unambiguously that AI alignment is a real problem, and that safe AI is a very, very tiny point in the space of possible AIs. So it's the other way around: it's as if scientists had calculated six ways to Sunday that the hydrogen bomb test would ignite the atmosphere, and Oppenheimer called it sci-fi nonsense and proceeded anyway.



