
All kinds of negative outcomes are possible, at all times. What matters is their probability.

If you (or anyone else) can present a well-structured argument that AI poses, say, a 1-in-100 existential risk to humanity in the next 500 years, then you'll have my attention. Without those kinds of numbers, there are substantially more likely risks that have my attention first.
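
To make that kind of number concrete, here's a rough back-of-the-envelope in Python (my own illustrative arithmetic, assuming a constant and independent per-year risk, which no real forecast would):

    # If existential risk is 1-in-100 over 500 years, and the per-year risk p
    # is constant and independent, then (1 - p)**500 = 0.99.
    total_risk, years = 1 / 100, 500
    annual_risk = 1 - (1 - total_risk) ** (1 / years)
    print(annual_risk)           # ~2.0e-05, roughly 1-in-50,000 per year

    # Other direction: a 1-in-100,000 annual risk compounded over 500 years.
    p = 1e-5
    print(1 - (1 - p) ** years)  # ~0.005, about 1-in-200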



Shouldn't uncharted territory come with a risk multiplier of some kind? Right now it's an estimate at best: maybe 1-in-20, maybe 1-in-a-million, over the next 2 years. The OP's point in this thread still stands: scientists shouldn't be so confident.
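
One way to see why that range matters (again, my own illustrative arithmetic, not the commenter's numbers): if you split your credence evenly between the two guesses, the optimistic one barely moves the total.

    # Split credence 50/50 between a 1-in-20 and a 1-in-1,000,000 risk estimate.
    blended = 0.5 * (1 / 20) + 0.5 * (1 / 1_000_000)
    print(blended)  # ~0.025, i.e. about 1-in-40, dominated by the pessimistic estimate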



