Hacker News

I am still struggling to understand "safe". What is it we need to be kept safe from? What would happen if it were unsafe?



If you had a superintelligence, it could manipulate people, so even if you don't connect it to the internet it could still try to further its goals. There is also a chance that its goals are not the same as ours.

So we need a way to discern the intent of an AGI (or anything beyond it), and the ability to align its goals with ours.

With that said, I'm not sure we'll ever get to the point of them being self-driven and having goals of their own.


A "safe" AI is one that allows humans freedom/self actualization while solving all intelligence/production problems. An "unsafe" AI is one that kills all humans while solving all intelligence/production problems.

They're trying to birth a god. They hope they can birth a benevolent god.

This isn't about AI that spreads or doesn't spread misinformation. This is about control of the light cone, who gets to live to see it happen, and in what state they get to live.


From competition.





