
The foundation of this article is that there is a zero-sum game for AI safety resources. I think we should reject this notion and improve AI safety in terms of both existential and non-existential risk. If more resources are needed and can be used effectively for safety, we should instead take resources away from propelling AI forward and apply them to safety.

I see this argument against existential risk a lot: that AI needs humans to provide it with resources, and thus we can hold it hostage. I'd like to point out that since the pandemic, power plants have become increasingly remotely operable, so AI will increasingly be able to hold us hostage by hacking the infrastructure we need to survive. Throw in the ability to use robots and bioweapons to keep humans from easily taking that infrastructure back.

The other aspect is that the AI doesn't have to correctly conclude ahead of time that such an attempt has a high probability of success. Its programming just has to decide that the attempt will help it achieve some objective.



