
I fear people who fear artificial intelligence - wellpast
https://medium.com/@micsolana/artificial-intelligence-is-humanitys-rorschach-test-6fb1ef9c0ce4
======
gcommer
The crux of the article is that we have no clue what AGI will actually be
like, and therefore it is irrational to try to reason about it. I don't buy
this, but even more importantly, the initial premise is wrong. We do have good
reasons to believe an AGI's intelligence will be at least passingly similar to
our own:

1. One path to AGI is straight up simulating a human brain in software.

2. Humans are guiding AGI research; even if we don't know how we'll build it,
we always build things incrementally based on whatever knowledge we already
have. And we ourselves are the only high-level intelligence we have to compare
against. Therefore, it's likely we'll build AGI inherently aiming to make it
similar to ourselves.

3. The author subscribes to the runaway intelligence singularity idea to make
the case that once we have an AGI, it will likely be near-instantly at least
1000x smarter than us, and thus incomprehensible to us. That strikes me as
unlikely; it is more likely that the first AGI will be built in some lab where
it is already maximally utilizing the computing resources it is being run on,
and will take at least a reasonable amount of time to gain enough knowledge
and control of the outside world to begin its exponential self-improvement.

------
wellpast
I was disappointed with the author's reason for this fear; his idea is too
simplistic and, I think, incorrect: "In imagining the motivations of this
amplified intelligence, we naturally imagine ourselves." \-- he's worried that
those who think the robots will kill us are simply projecting that _that's
what they would do_ if equipped with superior intelligence. But that's a
silly conclusion; _fear of the unknown_ is a far more common and obvious
explanation for the frequent reaction to some AI superintelligence.

