Has anyone considered the first option? I.e. not actually touching the lever. At least, in the human example of the problem, and not the one related to autonomous vehicles. If so, what do you think of it?
I'm of the opinion that touching the lever would make you liable for the death/harm of the single individual. Sure, you "saved" 5 people and prevented them from coming to harm. But you knowingly decided to harm someone completely innocent in all of this. In my book, that's as good as letting go of the trolley in the first place: it's no longer an accident, but rather a deliberate (and arguably malicious) act.
This is precisely what the "trolley problem" aims to highlight, and why the question is interesting rather than just being answered "obviously you'll choose one death rather than five" by everyone.
The Wikipedia article says that 90% choose to touch the lever, and 10% choose the first option.
But people are more in agreement with you than it seems, because changing the formulation to the (ridiculously impossible) "fat man" version of the thought experiment causes most people to change their mind, even though the morally salient features of the experiment (choosing to kill one innocent person in order to save five innocent people) have not changed.
The problem with the trolley problem is that it discounts time. In the time between throwing or not throwing the switch and the trolley reaching the location of the potential victim[s], our experience gives us some small hope that the victim[s] might be rescued by others, rescue themselves, or be saved by a trolley malfunction. There is still time for good fortune to replace the bad. The problem does not account for our ordinary intuition in this regard.
This explains the apparent conundrum of the divergence in the Fat Man version. If we push him off the bridge, there is no time for good fortune to save him, but there is time for the group to escape, preserve themselves, or get lucky.
We're far far away from autonomous driving software being at a state where this is an interesting question. And, PR aside, Tesla's latest update is the same sort of assistive driving tech that's already in a number of high-end cars. It's not self-driving in that the driver is supposed to remain alert (and remains fully responsible for being in control of the vehicle).
We do seem to be rapidly heading toward an "interesting" place where assistive driving technologies are good enough that a lot of people realistically won't pay attention while driving (given that plenty of people text and use their phones even without such technologies).
You seem overly pessimistic about autonomous driving software; it seems to me like Google's driving software could easily already be at a point where it might be able to consider the choice between hitting pedestrians crossing the street, or purposefully crashing the car.
It's my understanding that Google's self-driving car technology, as shown to the public, cannot currently handle a crew repairing potholes without advance notice, or drive to an arbitrary location using a route that hasn't been pre-run to record navigational waypoints.
In other words, the disclosed technologies can drive in areas for which extremely large amounts of data have been collected and only so long as conditions have not changed. If a new traffic light or stop sign is installed, the self-driving car is likely to miss it.
As a general approach, I can certainly see assistive driving algorithms making decisions about whether to veer off the road vs. possibly hitting something ahead, i.e. how do you prioritize staying in the lane vs. avoiding a known/likely collision. One implication is that you'd sort of like to know what "off the road" means in a given location.
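As a toy illustration of what that kind of prioritization might look like (purely a sketch for this comment: the function name, the cost weights, and the numbers are all made up, and a real driving stack is obviously far more involved), you could imagine a simple cost comparison:

```python
# Toy sketch of a cost-based "stay in lane vs. veer off the road" decision.
# All names, weights, and numbers are invented purely for illustration.

def choose_maneuver(collision_probability, off_road_risk,
                    w_collision=10.0, w_off_road=3.0, w_lane_keep=1.0):
    """Pick whichever maneuver has the lower estimated cost.

    collision_probability: estimated chance of hitting the obstacle ahead
        if the car stays in its lane (0.0 to 1.0).
    off_road_risk: estimated danger of leaving the lane at this spot
        (low for an empty shoulder, high for a ditch or a sidewalk).
    """
    stay_cost = w_collision * collision_probability
    swerve_cost = w_off_road * off_road_risk + w_lane_keep
    return "stay_in_lane" if stay_cost <= swerve_cost else "veer_off_road"


# Likely collision ahead but a clear shoulder: the toy numbers say swerve.
print(choose_maneuver(collision_probability=0.8, off_road_risk=0.2))
```

The hard part, of course, is estimating those inputs and picking the weights, which is exactly where knowing what "off the road" means at a given location comes in.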
It'll be interesting to see what happens the first time a self-driving car fucks up and ploughs through a bus queue. Who would be responsible? The "driver" who was chilling out reading their kindle? The manufacturer?
Presumably it depends on what "self-driving" means in the context of that particular car and what the laws are. Today, "self-driving" cars are legally just assistive technology like cruise control and you're responsible. If you have an accident while texting you're in pretty much the same situation as if your car didn't have those technologies.
Presumably at some point we'll have cars that can legally operate fully autonomously (under some defined set of conditions), and the driver would not be expected to be in control. In that scenario, I'd expect a lawsuit naming the manufacturer among others.
As with an autopilot in a plane, I imagine there are some problems that a self-driving car cannot or should not attempt to solve.
This shifts the ethical and moral burden to the human driver, as it should.
Though Armstrong in "Smarter Than Us: The Rise of Machine Intelligence" makes an interesting point:
"these programs [autopilots and stock trading programs] occasionally encounter unexpected situations where humans must override, correct or rewrite them. But these overseers, who haven't been following the intricacies of the algorithm's decision process and who don't have the hands-on experience of the situation, are often at a complete loss as to what to do - and the plane or the stock market crashes."
tl;dr: ".... Sacrificial dilemmas therefore tell us little about utilitarian decision-making". The paper proposes "studying proto-utilitarian tendencies in everyday moral thinking", which seems a good option to choose.