

Slowing Moore's Law - kiba
http://www.gwern.net/Slowing%20Moore%27s%20Law

======
Eliezer
Gwern, the main surprise coming out of the post-Summit workshops was that
literally _everyone_ wished that uploads could come before AGIs. The problem
isn't that high-fidelity uploads are less trustworthy. The problem is that the
neuroscience enabling uploads seems extremely liable to enable "neuromorphic"
or "neurally inspired" AGIs which have neither a human upbringing, nor high-
fidelity human emotions. In other words, the problem with uploading is that
it's a difficult technology whose easier prerequisites branch into unFriendly
AI. We'd _take_ high-fidelity uploads if we could get them; the problem is we
don't see how to get them without getting unFriendly "neuromorphic" AI first.

~~~
TheEzEzz
There is a risk associated with emulation research, but I'm not convinced it
would lead to unfriendly AI as certainly as you suggest. Moreover, you
yourself have stated how difficult FAI research is. The real question, then,
is whether the risk from emulation research is small enough to be outweighed
by the small chance of friendly AI succeeding _before_ other AI projects
succeed. I have yet to see a comparative analysis of these two possibilities.

~~~
nextstep
How do you quantify friendliness in a meaningful way? If AI systems were in
control of human systems, and their involvement would directly harm or
benefit humans, wouldn't the friendly/unfriendly designation vary based on
perspective?

What I mean is, let's say the AI is taught morality from a purely utilitarian
standpoint. Then certain ethical decisions might harm a minority to benefit
humanity overall. Is this AI friendly or not? Ethical systems, like any
first-order logic, are inconsistent, and there will be ambiguities. Good and
bad are human concepts that are deeply rooted in a given observer's
perspective.

~~~
TheEzEzz
_Good and bad are human concepts that are deeply rooted in a given observer's
perspective._

Of course. By friendly I mean friendly from my own perspective, or perhaps
also from the perspectives of other people whose conceptions of good
sufficiently overlap with my own.

------
eof
LessWrong link with existing discussion:
<http://lesswrong.com/lw/apm/how_would_you_stop_moores_law/>

