
Elon Musk's push for autopilot unnerves some Tesla employees - etendue
http://money.cnn.com/2016/07/28/technology/elon-musk-tesla-autopilot/index.html
======
etendue
> When Rajkumar has raised concerns with those Tesla employees about
> autopilot's technical limitations, the response is they have to "wash their
> hands of it" because "it's a business decision."

I've wondered about Tesla's engineering management before, and this would
support the view that it is inadequate. Engineers cannot "wash their hands" of
safety-critical projects. This is potentially a serious ethical lapse: the
health, safety, and welfare of the public must be the foremost concern:

    
        If engineers' judgment is overruled under circumstances
        that endanger life or property, they shall notify their
        employer or client and such other authority as may be
        appropriate.[1]
    

> In another anecdote recounted by two sources, Musk was told that the sensors
> used for Tesla's self-parking feature might have difficulty recognizing
> something as small as a cat. Musk is said to have responded that given how
> slow the car moves in this parking mode, it would only be dangerous to "a
> comatose cat."

Outrageous: a baby can be the size and speed of a "comatose cat". Hysterical
example aside, a comatose cat demands no less consideration with respect to
risk analysis. Handwaving that the feature poses no safety concern, rather
than submitting it to the rigors of risk management, is an absolutely improper
response.

[1] https://www.nspe.org/resources/ethics/code-ethics

~~~
Aqueous
But is it really risk analysis if any negative eventuality, no matter how
unlikely or infrequent, is a deal-breaker? Risk analysis is exactly that -
figuring out what the risks are, their likelihoods and potential impacts, and
then figuring out which ones are worth taking and which are not.
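To make that concrete, risk analysis in this sense is just expected-harm accounting: each risk is a (likelihood, impact) pair, and you compare the totals across alternatives. A minimal sketch, with entirely made-up illustrative numbers (none of these rates come from the article or from Tesla):

```python
def expected_harm(risks):
    """Sum likelihood * impact over a list of (likelihood, impact) risks."""
    return sum(likelihood * impact for likelihood, impact in risks)

# Hypothetical risks per million miles: (likelihood, impact) pairs.
# These figures are invented purely to illustrate the comparison.
human_driver = [(1.0, 1.0),   # frequent minor incidents
                (0.5, 4.0)]   # rarer serious incidents

autopilot = [(0.5, 1.0),      # fewer minor incidents
             (0.25, 4.0),     # fewer serious incidents
             (0.05, 10.0)]    # plus a rare new failure mode the human lacks

print(expected_harm(human_driver))  # 3.0
print(expected_harm(autopilot))     # 2.0
```

The point of the sketch is that the autopilot column can come out ahead overall even though it introduces a failure mode that didn't exist before; whether that trade is acceptable is exactly the judgment risk analysis is supposed to make explicit.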

So the salient fact remains the same: there are fewer deaths with autonomous
driving than without it. Isn't it unethical to prevent such a feature from
going out into the world when it could be saving lives, due to concerns about
some unlikely / infrequent eventuality or driver error? Is it sound risk
analysis to allow hundreds, thousands of people to die when you have a feature
that could save their lives?

Cars are inherently unsafe, and yet we let people drive them. Clearly we
already (ethically) accept some risk in our products.

~~~
quesera
This is all valid analysis, and I agree.

But we as engineers have to recognize that the socialization of technology is
squishy and illogical.

Even if we're correct, and dispassionate analysis would move society forward,
not everyone will agree. And they will have strong arguments, made stronger if
we aren't contextually aware.

Demonization of tech is very possible. If we want driverless cars and a host
of other things in our lifetimes, we have to approach the socialization with
some sensitivity.

Brashness is fine, but risky. If it's just a website going down in flames, no
one cares. But cars kill people. Brashness will be rewarded with regulations.
Possibly shortsighted regulations that do more long term harm than good,
because politics.

Today, _drivers_ (and mechanical failures) kill people and we have a social
structure to deal with that. Tomorrow, _cars_ will kill people in situations
where drivers would not have, and where everything is operating according to
spec.

That's a big change, which not everyone is ready for. And snotty tech
billionaires will not be sympathetic characters in the ensuing press/political
discourse.

