I hate this logic. No, it didn’t need to be programmed ahead of time. It should try to avoid crashing; it shouldn’t try to solve a philosophical problem. We don’t want to require our cars to observe their surroundings and count the number of things identified as humans in order to minimize expected loss of life. That’s difficult and error-prone.
If you’re going down a street and something jumps in your way, the program should just try not to hit anything, ideally by braking: given your environment, you shouldn’t have been going so fast that you couldn’t stop in time.
1. If the car sees a police car flash its lights for it to pull over, should it pull over unconditionally, not allowing the driver to override?
2. If there is an Amber alert broadcast specifying the car, should it automatically try to alert the authorities to its location and/or flag down the nearest police vehicle?
3. Should a car be able to be programmed to automatically avoid going to places that would violate a restriction order on its owner or occupant?
4. If the owner/occupant experiences an emergency (heart attack, impending childbirth, etc.), should the car have a button that allows it to break traffic laws to try to get to the hospital faster? For example, at 2 AM on a deserted street, do you really need to wait out the full red light if you can see that there are no other vehicles?
We do, in fact, make cars with audible / visible / tactile alerts when the driver exceeds the speed limit for the road they are on, but we seem to have no appetite for buying cars that take the final step to actually automatically brake, never mind requiring all cars to do this.
I take this as a very strong indication that overriding driver preferences as in 1-3 will not happen any time soon.
Most restriction orders aren't worded in a way that could easily be encoded beforehand; they could include language restricting autonomous vehicles...
For the emergency situation, if there is a button, it could just interact with the traffic control systems. If there's no traffic at the light, the light could let the car through.
People talking about self driving ethics should spend less time reading philosophy and more time reading the road rules.
I think it gets traction with self-driving cars because people, stupidly, expect computers to make perfect decisions 100% of the time. This idea runs into a logical inconsistency when presented with a scenario in which there is no perfect decision. Rather than confront the fact that it's unreasonable to expect perfect decisions 100% of the time, people try to come up with a way to declare a perfect decision in a no-win scenario.
I don't think it's that people expect perfect decisions.
You are right that it essentially never happens with humans. But why doesn't it happen with humans? I think it is because if we are driving down a highway at high enough speed for this issue to arise, we probably aren't going to be aware of what is in our swerve path, and even if we are we probably don't have the time to recognize that we have to make a choice, nor the processing power to make such a choice.
Hence, there isn't really any need to consider this issue with human drivers because humans cannot make a decision in such a situation.
Self-driving cars, on the other hand, should have the sensors, the attention, and the processing power to take into account everything to the side of the road in addition to what is on the road. Unlike with human drivers, it is actually possible for them to make a decision in these situations.
I think it is getting traction simply because with self-driving cars, unlike with human driven cars, it is actually a meaningful question. It's meaningful even if you assume that computers don't make perfect decisions--they at least have the time and data to make a decision, unlike humans.
At some point it will have to be regulated.
And if the purpose of autonomous vehicles, as is often stated, is to reduce the number of deaths (40,100 in 2017 in the US) and injuries in such scenarios, then we need at least a tacit understanding of what we are implicitly prioritizing, if nothing else. If, however, the point is just to apply an ideology in the form of a technology, until everything in our lives is computerized and considered in terms of computation, or if (as I've talked about before on here) it's about removing the need for God and Man to make the choice, then avoiding such questions is the actual, never-stated goal: we can admit neither that we want to create the black box nor that it is a black box.
The only way to truly not make the decision is to actually not make it, that is, for the motor vehicle to never be traveling in the first place.
In any case, you lost me when you got to "the driver has a potential for an increased chance of death." How exactly do you propose a computer will calculate this? Even today's best supercomputers would take multiple orders of magnitude longer to model the odds than it would require to act on them, and even then it would just be a guess. Hell, we judge car safety today by ramming them into objects repeatedly to see how they perform on average, and it still has only a vague relationship to what might happen to you in an actual collision.
My point is that it is 100% unfeasible for the computer to model these probabilities, and I see no reason to think it will ever be feasible. Even without getting into the weird things like "what if the pedestrian is 99 years old, or a 20 year old pregnant woman?" the complexities are basically infinite.
So the only correct answer I see is for the computer to do exactly what we would expect a competent driver to do today: stop the car as quickly as possible. And assuming it's not overdriving its sensors (something humans are not supposed to do either), that should work damn near 100% of the time.
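To put some numbers on "not overdriving its sensors": it's just physics. Here's a minimal sketch (the deceleration rate and latency figures are my own assumptions, not anyone's real spec) of the highest speed from which a car can still stop within its clear sensor range:

```python
import math

def max_safe_speed(sensor_range_m: float,
                   decel_mps2: float = 6.0,      # assumed hard-braking rate on dry pavement
                   reaction_time_s: float = 0.2  # assumed sensing/actuation latency
                   ) -> float:
    """Highest speed (m/s) from which the car can stop within sensor range.

    Solves d = v*t_react + v^2 / (2*a) for v.
    """
    a, t = decel_mps2, reaction_time_s
    # Quadratic in v: v^2/(2a) + v*t - d = 0  ->  v = a*(-t + sqrt(t^2 + 2d/a))
    return a * (-t + math.sqrt(t * t + 2 * sensor_range_m / a))

# With 60 m of clear sensor range: ~25.7 m/s, about 92 km/h.
print(round(max_safe_speed(60.0), 1))
```

Drive faster than that figure and "brake as hard as possible" is no longer guaranteed to work, which is the whole point of the rule.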
I would agree.
But my point is that making the decision, and then trying to say we're not making a decision, is disingenuous. So the question becomes: disingenuous to what end? We are making statements about what is or isn't permissible. Under the usual human laws, we (generally) find it impermissible for someone in such a situation to save their own life by ending someone else's, in the same way that if A puts a gun to B's head and says shoot C or die, B shooting C would be regarded in most jurisdictions as murder, regardless of any part A plays in the scenario. Replicating these laws in machines is perfectly fine. They, like us, have no chance of fully understanding the consequences of anything either of us experiences, so we can probably only operate in a pragmatic, functional fashion. However, in building a machine that replicates what is permissible in this way, as already applied to humans, we must also admit that we are instantiating in hardware a set of rules that will, in certain circumstances, kill or injure people, one way or the other. We do this for the purpose of preventing such harm where it is impermissible (and perhaps even reducing it overall). Yet it still remains that we must construct the thing that does that; we must enact, in a very precalculated fashion, a predefined, concrete expression of what is permissible and therefore of what is not. And here, I will contend, is the heart of the argument, and why the debate is as contentious as it is: we very much do not want to admit to that expression. Only by saying the question can't be answered, and therefore here's an answer, can we avoid admitting it.
The result of your choice to brake or 'just stop' is that there will be situations where passengers die to save real or theoretical pedestrians. As soon as this is established in the media, sales of these cars will plummet, and your decision will have repercussions, increasing the number of human-caused accidents across the board.
> The car cannot calculate the probabilities
I think we may be talking about different things. Probabilities are the foundation of self-driving algorithms; there is never a 100% right answer, just adjustments and corrections, calculations that lead to percentages that lead to choices. Maybe you are thinking about the older cruise control (like in some BMWs): it tried to anticipate things on the road and would brake faster than a human could. Saved a few bumps, caused a few.
Think about how stupid it is to program trolley problem logic. It suggests you program the car to purposefully crash into something. We will be absolutely fine just trying not to crash.
That means the car is constantly asking itself, “should I give up and just choose what to hit instead?” No thanks.
The most basic rule of driving is to adjust your distance & speed to the surroundings/conditions. You should always be able to come to a complete stop without hitting anything. Obviously we humans suck at taking everything into account, but would the same hold true for self driving cars?
These decisions seem unavoidable.
Not sure I'm so comfortable with this: it will result in a number of people killed who never accepted this risk and might have been safe, in order to save a number of people who accepted the risk in the first place.
Given that there will be collisions, it seems prudent to have the car try to figure out what's better to collide with: the tree or the child. Given that there will be collisions, it also seems prudent to think carefully about other, more difficult moral dilemmas.
Refusing to code for an eventuality is itself a moral decision; inaction is an action that has an impact on the world.
This would be rare, but would still kill a few thousand people every year. Wouldn't it be natural then to tweak the algorithm to reduce the death toll? I can imagine the public demanding it, and the software engineers to start writing code that deals with the rare case where an accident is inevitable. It could start very simple (avoid large groups of people), but could get more advanced over time.
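That "start very simple" tweak could be nothing more than a cost term on each candidate trajectory. Here's a toy sketch of the idea; the weights, the energy-proportional penalty, and the maneuver names are all made up for illustration:

```python
def trajectory_cost(est_people_in_path: int,
                    collision_speed_mps: float,
                    people_weight: float = 10.0) -> float:
    """Hypothetical cost for an unavoidable-collision trajectory.

    Collision energy grows with v^2, and each person estimated to be in
    the path multiplies the penalty, so large groups are avoided first.
    All weights here are invented, not from any real system.
    """
    return (collision_speed_mps ** 2) * (1 + people_weight * est_people_in_path)

# Between two bad options, the planner picks the lower-cost one:
# a faster hit on an empty path beats a slower hit with people in it.
options = {"swerve_left": trajectory_cost(0, 12.0),   # empty path, 12 m/s impact
           "stay_course": trajectory_cost(3, 8.0)}    # 3 people, 8 m/s impact
print(min(options, key=options.get))
```

"Getting more advanced over time" then just means adding terms and tuning weights, which is exactly why the public-pressure dynamic seems plausible.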
Absolutely. However, when answering that question, I don’t think it’s unreasonable to thoughtfully consider this stuff.
Clearly, even flawed autonomous driving, if widely adopted, would save many lives. But I can’t think of many things that would slow that adoption more than a public perception of “killer robots” roaming the streets.
If we seriously want wide adoption, it’ll be hard to avoid addressing the qualitative perceptual difference between a death caused by a human driver and one caused by a machine that we’ve engineered.
Personifying self-driving systems is a great disservice and misleading. A self-driving car detecting a sudden obstacle will not have time to classify anything to the extent that these stories have in mind; and humans, for that matter, would not have time to classify anything at a "sudden" obstacle either.
That being said, there are very real ethical questions in the self driving world:
1. Just because you can commute farther, should you?
2. Should more expensive cars be able to disrupt traffic for individual benefit, or should all self driving systems follow a consistent set of rules of the road to maximize throughput?
2.1 Should I be able to tweak my car to be a little more dangerous, but also quicker?
3. Should carpooling be mandated? Encouraged through tax breaks? subsidies?
4. Should self driving cars be allowed to cut through neighborhoods or be required to follow major roads around them? What about parking lots?
I'm sure there are other really interesting questions as well, but the trolley problem is not one of them.
For 1 and 2, dynamic tolls are the smart solution; the same applies to 2.1. If people want to go faster, let's tax them for it.
For example, a more reasonable metric for a machine to use is the probability of injury/death. If swerving is 90% likely to kill a person and staying straight is only 89% likely, then staying straight is a better choice. I don't see how attributes of the person would ever trump the probability of harm. The cases where probability is roughly equal for multiple actions will be incredibly rare.
> I don't see how attributes of the person would ever trump the probability of harm.
Maybe they shouldn't, maybe they should. How do we know? What are the terminal values? Societal effectiveness? Or equality of all humans? Or minimizing public outcry in cases where an autonomous car kills someone?
> The cases where probability is roughly equal for multiple actions will be incredibly rare.
With millions of autonomous cars on streets rare outcomes would happen all the time.
Just fat, or pregnant? We need to know! Get onto it, engineers!
The car's algorithm for deciding this question is a matter of life or death!
/s of course, because I think this whole philosophical argument about self-driving cars is ridiculous when our current tech can't even reliably determine if an object is a stationary barrier or not.
If your child died because there was a 1% higher projected chance that two people would be killed in a collision... would that seem just?
I think the insistence on experimenting with autonomous driving outside of controlled access highways is insane.
I don't understand the insistence of bringing everything back to "what if it was your child." Pretty much everybody has family, and friends, and all that. I'm not sure how somebody being a child of somebody is supposed to be a rational argument for adjusting any of our thinking. As far as I can tell, it's basically the ultimate appeal to emotion.
When you’re talking about selecting who dies on a 1% margin from some ML process with an unknown margin of error, I question your judgement.
And obviously, in the real world, an ML-given estimate with a 1% variance is probably entirely useless. In these sorts of hypotheticals, I'm not sure why that really matters. You can play with the numbers as you want, or even move the whole question far enough into the future that the margin of error can be considered very low. The question remains the same.
I think it is easy enough to predict that all the systems will do is prioritize avoiding pedestrians while minimizing collision energy otherwise. A 30 mph collision will mess up a pedestrian. It won't be a big deal for restrained passengers.
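That prediction amounts to a lexicographic rule rather than a philosophy engine. A toy sketch of what I mean (the maneuver names and numbers are invented for illustration): avoiding pedestrians always wins; among the remaining options, minimize collision energy, which goes as v^2.

```python
def maneuver_rank(hits_pedestrian: bool, impact_speed_mps: float) -> tuple:
    """Lexicographic priority: any maneuver that avoids pedestrians beats
    any that doesn't (False sorts before True); among those, prefer the
    lowest collision energy, proportional to impact speed squared."""
    return (hits_pedestrian, impact_speed_mps ** 2)

maneuvers = {
    "hit_barrier_at_13mps": (False, 13.0),   # ~30 mph into a fixed object
    "hit_pedestrian_at_5mps": (True, 5.0),   # slower, but a person is in the path
}
best = min(maneuvers, key=lambda m: maneuver_rank(*maneuvers[m]))
print(best)  # the barrier, even though it's the higher-energy impact
```

The 30 mph barrier hit messes up nothing for restrained passengers, which is exactly why the ordering comes out this way.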
If it is frequently the case that autonomous systems have to pick between pedestrians, that points to traffic control changes, not a philosophy engine in the car.
(I would hold this view even if I were the 99-year old.)
“Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man's life for the lives of five.”
The whole thing starts with a pretty elitist framework, that the best way to run society is to lie to the public because they can’t be trusted. The problem then ends with the infamous trolley problem which is so stripped down of any context it’s basically a dark pattern. You are expected to choose who is to die instead of wondering why do people keep getting run over by trolleys? Why does anyone have to die? Why can’t the track workers have safe working conditions?
The car systems won't have elaborate neural nets for deciding when to jump over a pylon onto a train track.
Not on purpose, but "machine learning" will have the same outcome:
Unless the software is opensource how are we supposed to know? Are we so naive that we’re going to take any company’s word for it? ...and then are we so naive that we’re going to fall for the obvious plant arguments in forums against it? If the internet has taught me anything it’s that you can’t trust anyone.
A. Human error causes cars to collide and kills people
B. The cars don't collide
If "caught", the liability exposure is the same, so there's no downside. You can't incarcerate an algorithm or even take its license.
>Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability.
Real autonomous cars are already causing real problems. I believe with Waymo it's the safety driver's job to take over left turns if the waiting line gets too long.
Would you tell a human driver to stay off the roads until they decide in advance what they'd do in such a ludicrous situation?