Maybe because "the AI alignment problem" is a classic academic inside-baseball term for what would probably resonate better with a larger audience as "the Skynet apocalypse".



Skynet fights with lumbering robots and doesn't seem all that smart. If you want a picture to symbolize what we're worried about, don't imagine a picture of a Terminator robot with glowing red eyes; imagine a picture of the Milky Way with a 30,000-lightyear-diameter sphere gapped out of it, centered on Earth's former position.


Don't worry, it'll take at least 15,000 years to blast a hole that big (the radius is 15,000 light-years, and nothing outruns light), unless the AI is really smart.

And life seems to be quite resilient and eerily altruistic at times.

We still haven't wiped this planet clean despite having enough nukes to do so. Some people put their military careers and the security of their countries at risk by refusing to launch nukes despite snafus higher up the command chain.

And then you have all those Westerners whose day jobs became so meaningless, so detached from their environment, and sometimes so outright hostile to other humans that even if they don't commit suicide, they willingly stop breeding and openly talk about replacing themselves with more down-to-earth folks who have their priorities right: food, children, food for children, and maybe then a little AI R&D, though who cares about that if you can have more children.

Maybe the selfish gene is doing quite well.


On the positive side, we might end up with something as fundamentally pleasant as the Culture (which is supposed to be over 10,000 years more advanced than us):

https://en.wikipedia.org/wiki/The_Culture

I think something like the Culture represents the best case for humanity's long-term future, since that future is almost certainly going to include AIs of far greater power than us bags of meat.



