
Announcing the AI Alignment Prize - cousin_it
https://www.lesserwrong.com/posts/YDLGLnzJTKMEtti7Z/announcing-the-ai-alignment-prize
======
PaulHoule
"It is vital any such intelligence’s goals are aligned with humanity's goals."

This doesn't make much sense to me, in that I don't think humanity as a whole
has a consistent set of goals.

"We are not interested in research dealing with the dangers of existing
machine learning systems commonly called AI that do not have smarter than
human intelligence."

There is already intense danger here, and it is connected with systems that
pursue Facebook's and Google's goals (a subset of humanity's) at the expense
of the rest of humanity.

~~~
cousin_it
I think avoiding the accidental destruction of everything we value by an
unaligned AI is a goal of humanity.

~~~
PaulHoule
Left to our own devices, our behavior could lead to the "destruction of
everything we value" through the use of fossil fuels, the stockpiling of
nuclear weapons, etc.

Also, there will be Cthulhu cults; for one reason or another, there will
always be an element of humanity that wants to tear it all down.

