> The problem AI alignment is trying to solve at a most basic level is "don't kill everyone", and even that much isn't solved yet
That the set of things that could hypothetically lead to human extinction is unbounded and (since we're not extrapolating from present harms) unpredictable is a very convenient fact for people who are paid for their time "solving" this problem.