First Draft of the IEEE AI Ethics Paper: “Ethically Aligned Design” [pdf] (ieee.org)
32 points by spacehacker on Dec 14, 2016 | 10 comments



I really dislike the increasingly common use of the terms "ethics" and "ethically". In particular, ethics isn't something that's universally set in stone; it's a matter of individuals' philosophies (i.e. preferences), but the term gets treated like a universal thing.


It'd better be universal, or else in the presence of a superhuman AI we're surely fucked (as opposed to probably fucked).

Fortunately, humans do seem to have some shared ethics core in their firmware.


Sure, we (mostly) share a core set of ethical values, and for that reason I'm not overly concerned about us creating a master race of killer androids (the Note 7 aside). My concern is with the more subtle cases, where different perspectives will see different choices as being ethically sound.

Ask a truck driver and the CEO of a haulage firm what they think about the ethical considerations of self-driving trucks, and you'll get two completely different answers, both with valid points.


Regular humans have, for over half a century, possessed the ability to annihilate the entirety of civilization by basically pushing a button. So it always amazes me when people feel the need to make science-fictional assumptions like AGI in their doomsday scenarios.


AGI isn't a replacement doomsday scenario; it's a set of additional scenarios to be considered together with the usual ones - nuclear war, biological war, global pandemic, gamma ray bursts, etc.


A malevolent AI is a particularly terrifying prospect.

Although humanity has possessed the ability to destroy itself for quite so long, we are fortunately still flesh-and-blood, biological entities, and our evolution has led to some more or less universal truths about us. We tend to love our families. We tend to want what we consider to be the best for our offspring. We tend to feel some obligation to protect our parents when they can no longer do so for themselves.

All of these (and many other) things that act to mediate our civilization-destroying traits wouldn't necessarily apply to an AI.


This is part of an ongoing discussion that the public is having about algorithms influencing our lives, from policymaking to customized user interactions. A growing number of people in academia are proposing varying degrees of regulation. Facebook's influence on the recent presidential election may serve as a catalyst for what is to come.

References

[1] http://hkspolicycast.libsyn.com/how-technology-governs-us

[2] https://www.youtube.com/playlist?list=PLJkLD_s9pYaY_WD6emzzq...

[3] http://www.econtalk.org/archives/2016/10/cathy_oneil_on_1.ht...

[4] https://www.youtube.com/watch?v=f_PFhJrPxoU


I think Section 2 – Business Practices and AI is particularly relevant and is something I've thought about a lot.

> Engineers and design teams are neither socialized nor empowered to raise ethical concerns regarding their designs, or design specifications, within their organizations. Considering the widespread use of AI/AS and the unique ethical questions it raises, these need to be identified and addressed from their inception.

Has anyone got stories of trying to raise ethical concerns within their organisation? Were you listened to? Were you happy with the results or left frustrated?


Seems to be a one-way street. At some point, don't we have to treat AIs ethically if they are approaching sentience?

Maybe that should be a separate topic, and this one is just about more basic AI.


There is a Star Trek episode that revolves around your first question, which I feel compelled to share: https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Tre...



