> The problem AI alignment is trying to solve at a most basic level is "don't kill everyone", and even that much isn't solved yet

That the set of things that could hypothetically lead to human extinction is unbounded and (since we're not extrapolating from present harms) unpredictable is very convenient for people who are paid for their time "solving" this problem.
