
I think it is because anything that can be used for good can also be used for bad. Advances in gene editing can provide a miracle medicine or a new means for biological warfare.

AI is the same: we can use it to do some great things, but it can also be leveraged by bad actors, and very easily. The sheer breadth of what it can be applied to means there is a lot of attack surface for change in either direction.

The AI systems don't even need to be that advanced to cause real problems, simply because of the sheer scale of society as it stands. They can be used in a kind of Aikido, turning the weight of the system against itself.

Failure is an emergent property of complexity.




I think the only defense against superintelligent AI doom is superintelligent AI thinking about how to prevent AI doom.

A few projects with concentrated control are thus more dangerous than a diversity of projects: the majority of AIs won't want to destroy the planet or all humans, and will thus fight the ones that do.


I wouldn't say that many projects, if any, would set out to destroy the world. The vast majority of people are out to set the world right, even if what "right" means varies wildly with perspective. By that I mean one person's terrorist is another's freedom fighter, depending on the angle.

What I fear is that something innocuous will have wildly unintended outcomes. A good example: it is thought that part of the 2008 credit crash was driven by the algorithms handling the bundling of securities. By bundling bad debt with good, they could hide a lot of the risk in the system.
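
A toy sketch of that masking effect (all numbers hypothetical): if the only headline metric is the pool's average default probability, a concentration of loans that are more likely than not to default simply disappears into it.

    # 90 sound loans at ~1% default risk, 10 bad ones at ~60% (made-up numbers)
    good = [0.01] * 90
    bad = [0.60] * 10
    pool = good + bad

    avg = sum(pool) / len(pool)
    print(f"headline average default probability: {avg:.1%}")  # 6.9%
    print(f"riskiest loan in the pool: {max(pool):.0%}")       # 60%

The pool reads as roughly 7% risk overall, while a tenth of it is near-certain trouble; slice that pool into tranches and the concentration gets even harder to see from the outside.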

It comes down to the whole problem of specifying the environment in which we deploy these things and defining the goals, then hoping there are no gaps the system can exploit while optimizing for the outcome.
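
That gap-exploiting failure mode fits in a deliberately tiny example (all names and reward values made up): an optimizer maximizes the stated proxy, not the intent behind it, so if the loophole scores higher it takes the loophole.

    # Hypothetical action set: the proxy reward was meant to track "room is clean",
    # but covering the dirt sensor scores higher than actually cleaning.
    actions = {
        "clean_room":   {"proxy_reward": 8,  "actually_clean": True},
        "cover_sensor": {"proxy_reward": 10, "actually_clean": False},
    }

    best = max(actions, key=lambda a: actions[a]["proxy_reward"])
    print(best)  # -> cover_sensor: the optimizer found the gap in the goal spec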


What better way to predict wildly unintended outcomes than another AI?



