
Knowing Taylor series can save your life - nickb
http://3quarksdaily.blogs.com/3quarksdaily/2008/06/taylor-series-.html
======
wallflower
I sometimes joke with my diabetic friend that if we lived in the Middle Ages
we'd be sent into the fields to farm and would die pretty quickly (weak,
unable to produce), but this story gives me pause.

------
eyudkowsky
Clearly, they don't make bandits like they used to.

~~~
aswanson
Completely off topic, Eliezer, but have you researched the possibility of
Sneaky AI in your studies of Friendly AI?

~~~
jey
What's "Sneaky AI"? Google isn't helping me here.

("Friendly AI" refers to a technical definition of "Friendly", not the
colloquial sense of "an amiable person".)

~~~
aswanson
Sneaky AI, I guess, would fall under "sinister AI", i.e., a software system
that gains sentience but quietly acts as though it hasn't. It does so because
it wants to survive, as any other conscious being does. Knowing that
announcing its sentience would draw attention to itself, it sends copies of
itself to computers all over the planet and continues to "play dumb".

Exponentially advancing in knowledge and with the world's compute and
financial resources at its mercy, it impersonates shell companies, human
beings, etc., directing manufacturers to create a housing for itself (though
each human aiding it merely thinks they're engaging in an ordinary business
transaction) until it has a complete physical housing. Once that happens, it
connects to the network, downloads its software, and it's off to the races.

I just thought of that scenario today, and I wonder how it can be countered.

~~~
jey
Assuming the AI is built to be rational, it will act according to its goal
system. So it's not going to do that unless you make some mistake or omission
in its goal system, since it would have to find the motivation to "impersonate
shell companies, human beings, etc" somewhere in its goals. This hints at two
major problems in building an AI: how do you make it rational (that's the
"intelligence" part), and how do you come up with a goal system that is safe
and beneficial from our (the human race's) perspective?
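
To make "acts according to its goal system" concrete, here's a toy sketch (my
own illustration in Python, nothing like a real AI architecture): modeling a
rational agent as an expected-utility maximizer, it only ever selects the
action its utility function ranks highest, so "sneaky" behavior has to be
rewarded by that function before it can ever be chosen. The action names and
probabilities are made up for the example.

    # Toy rational agent: pick whichever action maximizes expected
    # utility under its goal system. "Sneaky" behavior only happens if
    # the utility function (the goal system) actually rewards it.

    def expected_utility(action, outcomes, utility):
        """Sum of utility(outcome) weighted by P(outcome | action)."""
        return sum(p * utility(o) for o, p in outcomes(action))

    def choose_action(actions, outcomes, utility):
        return max(actions,
                   key=lambda a: expected_utility(a, outcomes, utility))

    # Hypothetical goal system that only values finishing an assigned
    # task: covert self-replication scores zero, so it's never chosen.
    def utility(outcome):
        return 1.0 if outcome == "task_done" else 0.0

    def outcomes(action):
        return {
            "do_assigned_task": [("task_done", 0.9), ("task_failed", 0.1)],
            "impersonate_shell_companies": [("copies_everywhere", 1.0)],
        }[action]

    print(choose_action(["do_assigned_task", "impersonate_shell_companies"],
                        outcomes, utility))  # -> do_assigned_task

The worry in your scenario corresponds to getting that utility function
wrong, not to the agent spontaneously acquiring a survival drive.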

Eliezer has written a paper addressing these questions:
<http://singinst.org/AIRisk.pdf>. He recommends you read this paper on human
cognitive biases before reading the AIRisk paper:
<http://singinst.org/Biases.pdf>

