
How do you feel about computers deciding who lives and dies? - RichardHeart
Self-driving cars already must decide between running you into whatever is in front of you, or hopping up on the sidewalk and running some people over, but saving you.

The phone-record analysis of who called whose cell phone in Afghanistan could convince a computer to drop a bomb on you or your wedding party, with the approval of a human. I hear they don't know much more than what the cell phone was doing, and the ever-watchful computer does most of the analysis on its own.

It's a great thing smart men like Elon Musk have put their own money and time into things like the Future of Life Institute and OpenAI, to increase the chances that machines have good morals and ethics and are hopefully less likely to see us as enemies.

There's the funny story of the paperclip maximizer. A machine doesn't need to be evil, or dislike you at all, to still cause your extinction. Imagine a poorly designed but quite powerful machine, which is programmed to maximize the number of paperclips in the world.

It discovers that many atoms that could make good paperclips are trapped in human bodies, and that the humans have to die in order to become what it thinks is the best thing in the world... paperclips.
======
gus_massa
Except for the paperclip maximizer case, all the others can be implemented by
humans without the help of a computer. How do you feel about human morons
deciding who lives and dies?

~~~
RichardHeart
I feel that until computers write their own code, the ethics and programming
skill of humans will live vicariously through the machine, until a bug or edge
case hits. Garbage in, garbage out. Hopefully what we put in is the highest and
best ethics we can discern. The "trolley problem" is now real life.

------
saluki
Focusing on self-driving cars . . .

I expect these situations will be rare; self-driving cars won't allow
themselves to be in a position where there isn't a safe way out or a way to
stop. There will be edge cases, of course. For those, yes, there will be some
predetermination in the programming on whether the car crashes itself to avoid
pedestrians. But even then, every effort will be made to save as many lives as
possible.

If it were me dying versus 20 pedestrians or a bus full of kids, I'd be glad
it crashed me rather than taking them out.

One thought: will there be luxury brands that would take out the pedestrians,
protecting the driver at all costs? Will programs take into account the
person in the car (doctor, celebrity, elected official)? Will that be part of
the equation?

But then there will be advances like this, where car fatalities will
(hopefully) become extremely rare.

Demolition Man | Secure Foam

[https://www.youtube.com/watch?v=qc1cX0abcJc](https://www.youtube.com/watch?v=qc1cX0abcJc)

------
joeclark77
The problem is this: a human can make a mistake, but a machine cannot. For
example, a human driver may need to choose between swerving left or swerving
right, and, in hindsight, _either_ choice could be defended because we know he
had to make a split-second judgment, the outcome wasn't premeditated, etc.

With a self-driving car, though, some programmer sitting in an office in the
cold light of day had to make a conscious decision to kill the driver or to
kill the pedestrian. The outcome is programmed, and there are no mitigating
factors like instinct or accident that can excuse it. This poses a very
different moral problem for the creator of the self-driving car.
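To make the point concrete, here is a minimal, entirely hypothetical sketch (not any real vendor's code; the function name and risk estimates are invented for illustration) of what such a premeditated tradeoff looks like once it is written down:

```python
# Hypothetical sketch: the tradeoff exists as an explicit,
# reviewable line of code, chosen in advance by a programmer.

def choose_maneuver(occupant_risk: float, pedestrian_risk: float) -> str:
    """Pick between two unavoidable-collision maneuvers.

    occupant_risk: estimated fatality probability for the car's
                   occupants if the car swerves away from pedestrians.
    pedestrian_risk: estimated fatality probability for pedestrians
                     if the car protects its occupants instead.
    """
    # This comparison is the "conscious decision" made in the cold
    # light of day: a policy, not a split-second human reflex.
    if occupant_risk <= pedestrian_risk:
        return "swerve"       # accept the risk to the occupants
    return "brake_straight"   # accept the risk to the pedestrians

print(choose_maneuver(0.2, 0.9))  # here the policy sacrifices the occupants
```

Whatever threshold or comparison goes on that `if` line, someone chose it deliberately, which is exactly what distinguishes it from a driver's instinctive swerve.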

------
ankurdhama
Remember: whatever a computer does, a human can do too, using paper and pen.

~~~
RichardHeart
Nope. A human runs out of life long before he can find quite large prime
numbers, or a good containment shape for a fusion plasma, or look at all the
photos on Flickr, etc. Heck, my computer can even generate pure test tones.
human != computer.

~~~
ankurdhama
You are confusing two concepts that are completely different from each other:
"what computation can be done" vs. "how fast the computation can be done".
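The distinction can be shown with a toy example (my own illustration, not from the thread): trial-division primality testing is simple enough to run with paper and pen, yet a machine runs it enormously faster. The *set* of answers reachable is the same; only the speed differs.

```python
# A paper-and-pen algorithm: test divisors of n up to sqrt(n).
# A human could carry this out by hand for a modest n; a computer
# does it in microseconds. Computability is identical, speed is not.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(104729))  # 104729 is the 10,000th prime
```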

