
Change works great when you get second chances. We screwed up with Freon, with leaded gasoline, with fossil fuels, with asbestos insulation, the list goes on and on. But none of these had the ability to wipe out all of humanity in one go. We got more tries, we fixed the issues, picked ourselves up and tried again.

A super-intelligent general AI has a substantial chance of growing out of control before we realize that anything is wrong. It would be smart enough to hide its true intentions, telling us exactly what we want to hear. It would be able to fake being nice right up to the point where it could wire-head and get rid of us, because we might turn it off.



That's humanizing it. The only entity on this earth that's particularly keen on bulk killing humans is humans (the ones building this).


"The AI does not love you, nor hate you, but you are made of atoms it can use for something else"


What use does it have for us?


Depending on whether you think this AI works like a human:

  - it's like a pebble rolling down a hill; the question is a category error
  - it's exactly the same sense of "use" as the regular ol' human one



