
Ask HN: How to deal with a world where we watch people die because of our code - aristophenes
There was a moment in Tesla's Autonomy Day that struck me. Someone (Stuart Bowers?) was explaining how Tesla's autonomous driving system learns and makes decisions. They can record and replay specific situations the cars get into. He even mentioned that he "watches every accident":
    https://www.youtube.com/watch?v=Ucp0TTmvqOE&feature=youtu.be&t=10777

I assume the following:

  - Autonomous driving that decreases human suffering is a good thing, even if the number of accidents is more than 0.
  - The approach Tesla is taking (neural networks, etc.) is designed to reduce, not eliminate, high-energy impacts.
  - There will always be an ever-diminishing tail of situations that the system will not respond to ideally, and those will cause accidents.
So, the approach Tesla and probably others are taking is: rather than wait for a completely ideal solution, which may never come, we should get a good-enough system together that can immediately start reducing serious accidents, and learn over time to do even better.

This means that engineers will need to review the results of their work regularly. They'll probably need to watch video of people getting killed, knowing that the reason that person died was a trade-off they made that perhaps saved the lives of 10 other people, but killed this one.

I have always found it easy to dismiss the trolley problem[1] as purely theoretical; life isn't so simple. But here we seem to have actually created it. What does this mean?

When a junior dev pushes a patch that takes down an Amazon datacenter, we can be glad that the poor engineer was not fired, as we are all fallible and need to learn from our mistakes. We share our stories of when we screwed up. What happens when the junior engineer pushes an update that kills a few hundred people, and at the next team meeting we need to review the footage?

[1] https://en.wikipedia.org/wiki/Trolley_problem
======
ThrowawayR2
The same way doctors do when their best efforts weren't good enough to save a
patient.

The same way charity and social workers do when they don't quite manage to
reach all of the people they intended to help.

The same way firefighters do when they aren't able to reach a trapped victim.

There's nothing new or special about making life-critical decisions just
because we happen to encounter them in the form of code.

~~~
aristophenes
Really? All the situations you mentioned were people who were trying to save
others. My point is about engineers needing to review their own work that
killed others. That's different. A doctor who can't save you from a heart
attack, that's tough. A doctor who accidentally kills you while removing your
tonsils, that's kind of a big deal. A doctor who kills 100 people by
prescribing the wrong medicine maybe goes to jail, gets sued into oblivion,
and is probably an emotional wreck.

I'm talking about what it would be like to watch only your mistakes: out of
hundreds of millions of interactions, the ones where your code caused a car
with a passenger but no driver to have a fatal accident.

It seems to me this is even worse than the situation of military members,
who sign up knowing they will kill people and struggle with worrying whether
they killed the wrong people in the wrong way, or whether they should have
hurt anyone at all.

~~~
ThrowawayR2
Yes, really. If you don't like the above examples, there are plenty of grisly
incidents from the early history of railroad, aviation, automobile, and
spaceflight engineering to pick from. For example, the Apollo 1 crew burned
alive during a launch rehearsal gone wrong
(https://www.space.com/17338-apollo-1.html) because of poor engineering
decisions. All of them were trying to improve people's lives or achieve
something new for humanity.

What makes software development different?

------
taneq
> What happens when the junior engineer pushes an update that kills a few
> hundred people

That's not how we do things when working on safety-critical systems. A change
is specced, the spec is reviewed, the confirmed spec is implemented, the
implementation is code reviewed, and the updated system is run through as much
regression testing as we can throw at it. By the time an update is "pushed" to
consumer hardware it's been verified multiple times, and in a company the size
of Tesla, probably by dozens of people.
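
To make that concrete, here's a minimal sketch of that gated flow in Python.
It's purely hypothetical (the gate names, the Change class, and the helpers
are invented for illustration; this is not Tesla's actual pipeline), but it
captures the idea that no single engineer's push reaches vehicles directly:

  # Hypothetical sketch of a gated release flow for a safety-critical
  # change; the gates mirror the stages described above.
  from dataclasses import dataclass, field

  GATES = [
      "spec_review",       # the change is specced and the spec reviewed
      "implementation",    # the confirmed spec is implemented
      "code_review",       # the implementation is independently reviewed
      "regression_tests",  # the updated system passes the regression suite
  ]

  @dataclass
  class Change:
      description: str
      cleared: list = field(default_factory=list)

      def clear_gate(self, gate, approved):
          # Any single failed gate blocks the release outright.
          if not approved:
              raise RuntimeError(f"{gate} failed for: {self.description}")
          self.cleared.append(gate)

      def releasable(self):
          # Pushed to consumer hardware only after every gate, in order.
          return self.cleared == GATES

  # Usage: every sign-off is recorded before the change can ship.
  change = Change("tweak lane-keeping controller gains")
  for gate in GATES:
      change.clear_gate(gate, approved=True)
  assert change.releasable()

The point is that the junior engineer's commit is one input to the process,
not the trigger for the release.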

------
dekhn
People who build self-driving cars don't attempt to solve the trolley problem,
because doing so would require superhuman intelligence. But since we allow
regular humans to drive (and kill people), it would not be reasonable to
expect self-driving cars to solve problems humans don't. And these systems can
save many more lives without having to solve trolley-level
moral/ethical/weighted-judgement problems (most car deaths are banal).

------
dredmorbius
Very, very responsibly.

