
Still, we would presumably see Darwinian effects in strong AI progression. My guess is that our ability to control AI value systems would end once we could no longer control the environment in which the AIs exist. And once that environment changes (perhaps because AIs get installed in "actuators" like automobiles or industrial robots, or through any comparable change to a "software environment"), Darwinian selection might favor a different value system than the one we intended.



Interesting concept; I'd never thought about it that way. I think separate AIs in "actuators" could (and should) be limited to sub-human level (there are also some interesting ethical reasons for that[0]) and denied any significant self-modification capability, to limit the risk.

I believe that merely building an intelligence will not prove to be the most difficult task; it seems much harder to build it in a way that does not end with humanity dead (or worse).

Friendly AI[1] is a harder problem than General AI, and unfortunately it needs to be solved first.

[0] - http://lesswrong.com/lw/x7/cant_unbirth_a_child/

[1] - http://en.wikipedia.org/wiki/Friendly_artificial_intelligence



