Fortunately, humans do seem to have some shared ethics core in their firmware.
Ask a truck driver and the CEO of a haulage firm what they think about the ethical considerations of self-driving trucks, and you'll get two completely different answers, both with valid points.
Despite the fact that humanity has possessed the ability to destroy itself for quite a long time, fortunately, because we're still flesh-and-blood biological entities, our evolution has led to some more or less universal truths about us. We tend to love our families. We tend to want what we consider to be the best for our offspring. We tend to have some sense of obligation to protect our parents when they can no longer do so for themselves.
All of these (and many other) things that act to mediate our civilization-destroying traits wouldn't necessarily apply to an AI.
> Engineers and design teams are neither socialized nor empowered to raise ethical concerns regarding their designs, or design specifications, within their organizations. Considering the widespread use of AI/AS and the unique ethical questions it raises, these need to be identified and addressed from their inception.
Has anyone got stories of trying to raise ethical concerns within their organisation? Were you listened to? Were you happy with the results or left frustrated?
Maybe that should be a separate topic, and this one is just about more basic AI.