A Study on Driverless-Car Ethics (newyorker.com)
39 points by theBashShell 19 days ago | 69 comments



“ if a car detects a sudden obstacle—say, a jackknifed truck—should it hit the truck and kill its own driver, or should it swerve onto a crowded sidewalk and kill pedestrians? A human driver might react randomly (if she has time to react at all), but the response of an autonomous vehicle would have to be programmed ahead of time. ”

I hate this logic. No, it doesn't need to be programmed ahead of time. It should try to avoid crashing; it shouldn't try to solve a philosophical problem. We don't want to require our cars to observe their surroundings and count the things identified as humans to minimize expected loss of life. That's difficult and error-prone.

If you're going down a street and something jumps in your way, the program should try not to hit anything, ideally by braking; you shouldn't be going so fast that you can't stop in time, given your environment.


I agree with you that this example is so theoretical as to be useless. In addition, focusing on this masks some other more relevant ethical/legal questions.

1. If the car sees a police car flash its lights for it to pull over, should it pull over unconditionally, not allowing the driver to override?

2. If there is an Amber alert broadcast specifying the car, should it automatically try to alert the authorities to its location and/or flag down the nearest police vehicle?

3. Should a car be able to be programmed to automatically avoid going to places that would violate a restraining order on its owner or occupant?

4. If the owner/occupant experiences an emergency (heart attack, impending childbirth, etc.), should the car have a button that allows it to break traffic laws to get to the hospital faster? For example, at 2 AM on a deserted street, do you really need to wait out the full red light if you can see that there are no other vehicles?


We have the capability right now to make cars that enforce speed limits on their drivers.

We do, in fact, make cars with audible / visible / tactile alerts when the driver exceeds the speed limit for the road they are on, but we seem to have no appetite for buying cars that take the final step to actually automatically brake, never mind requiring all cars to do this.

I take this as a very strong indication that overriding driver preferences as in 1-3 will not happen any time soon.


Vehicles with telematics are already broadcasting their location anyway, and pretty much all recent vehicles have telematics.

Most restraining orders aren't worded in a way that could easily be encoded beforehand; they could include language restricting autonomous vehicles...

For the emergency situation, if there is a button, it could just interact with the traffic control systems. If there's no traffic at the light, the light could let the car through.


I would not buy a car that acts against my own interests, or allows the government to deprive me of freedom (the first 3). I don't have a meaningful opinion on #4.


Agreed. Swerving onto sidewalks or deliberately hitting pedestrians or vehicles is illegal anyway.

People talking about self driving ethics should spend less time reading philosophy and more time reading the road rules.


Should a car generally prioritize the survival of its occupant or of a pedestrian? Not answering this is the same as answering it with 'surprise me'. And either answer is wrong.


You could ask the same question about human drivers, yet zero attention is paid to this question, and zero time is spent on it during driver training.


Is that perhaps why the question is so nettlesome? It reveals something we've been living with all along and would have preferred to keep ignoring.


I really don't think so. It's ignored because it essentially never happens. The odds that you'll ever be presented with such a scenario are so low that it's pointless to spend any time on it. The odds that you'll be presented with such a scenario and the obvious best answer is something other than "maximum effort braking" are even lower.

I think it gets traction with self-driving cars because people, stupidly, expect computers to make perfect decisions 100% of the time. This idea runs into a logical inconsistency when presented with a scenario in which there is no perfect decision. Rather than confront the fact that it's unreasonable to expect perfect decisions 100% of the time, people try to come up with a way to declare a perfect decision in a no-win scenario.


> I think it gets traction with self-driving cars because people, stupidly, expect computers to make perfect decisions 100% of the time

I don't think it's that people expect perfect decisions.

You are right that it essentially never happens with humans. But why doesn't it happen with humans? I think it is because if we are driving down a highway at high enough speed for this issue to arise, we probably aren't going to be aware of what is in our swerve path, and even if we are we probably don't have the time to recognize that we have to make a choice, nor the processing power to make such a choice.

Hence, there isn't really any need to consider this issue with human drivers because humans cannot make a decision in such a situation.

Self-driving cars, on the other hand, should have the sensors and the attention and the processing power to take into account everything to the side of the road in addition to what is on the road. With them, unlike with human drivers, it is actually possible to make a decision in these situations.

I think it is getting traction simply because with self-driving cars, unlike with human driven cars, it is actually a meaningful question. It's meaningful even if you assume that computers don't make perfect decisions--they at least have the time and data to make a decision, unlike humans.


This sort of thing can happen to humans in a way where they have time to make a choice. A stuck throttle with brakes that can't overcome it could get you into a situation like that, for example. You're right that computers would be able to make that kind of decision in a much wider range of scenarios. However, computers will also be much less likely to get into those scenarios in the first place. I'm not at all convinced that the net result is computers encountering situations where they can make a choice more often than humans do. I think both will be so rare that any effort spent on them would be better spent on avoiding crashes altogether.


We spend a hell of a lot of time in court deciding what was right and wrong, and when we should brake or swerve. But we have the category of 'mistake' that the program will not be able to hide behind.


Neither. It should just try to stop. Calculating fatality probabilities is way out of scope.


Years down the line, a brand of self-driving cars will market itself as the safest car you can buy, exactly because it prioritizes the occupants of the car.

At some point it will have to be regulated.


That would be saying that it will aim for pedestrians. If someone designs a car with that intent, then yeah, it will either be regulated or sued out of existence.


But this is just making the choice and then pretending it hasn't been made.


There is no choice to be made, that is the problem with this line of thinking.


(In this simple case) the vehicle is traveling toward a stationary object. It has two sensible courses of action (it could also just continue at full speed, but I'll ignore that): A) attempt to stop before colliding with the object, or B) swerve around it. Obviously, as stated, under A the driver faces a potentially increased chance of death or injury, and under B a third party does.

Defaulting to A makes a statement of priorities: of all the unknowns, the unknowns about the driver (and passengers) and the states that might result from A are acceptable relative to the alternative of B. If this were not so, we could just as easily say B is equally acceptable, since we can no more guarantee a reduction in the potential for injury or death for one party than for the other; both scenarios would be equal, unless they're not. And since most people are unwilling to weight B as acceptable as A and, say, flip a coin (or its electronic equivalent), it's clear that a judgment call is being made, based upon values, about what is to be prioritized in the situation (or equivalent situations). [The problem is people then take this specific instance of trolley-like problems, dismiss it as an abstract construction, and go on to dismiss equivalent problems, regardless of real-world applicability.]

And if the purpose of autonomous vehicles, as is often stated, is to reduce the number of deaths (40,100 in 2017 in the US) and injuries in such scenarios, we have to have at least a tacit understanding of what we are implicitly prioritizing, if nothing else. If, however, the point is just to apply an ideology in the form of a technology, until everything in our lives is computerized and considered in terms of computation, or if (as I've talked about before on here) it's about removing the need for God and Man to make the choice, then avoiding such questions is the actual (inherently unstated) goal, since we can admit neither to wanting to create the black box nor that it is a black box.

The only way to truly not make the decision is to actually not make it, that is, for the motor vehicle to never be traveling in the first place.


Humans are advised not to swerve around things, as a general rule, so I imagine similar logic works for self-driving cars. You're almost always better off -- for everyone concerned -- to dynamite the brakes and hope for the best. Swerving is a good way to lose control.

In any case, you lost me when you got to "the driver has a potential for an increased chance of death." How exactly do you propose a computer will calculate this? Even today's best supercomputers would take multiple orders of magnitude longer to model the odds than it would require to act on them, and even then it would just be a guess. Hell, we judge car safety today by ramming them into objects repeatedly to see how they perform on average, and it still has only a vague relationship to what might happen to you in an actual collision.

My point is that it is 100% unfeasible for the computer to model these probabilities, and I see no reason to think it will ever be feasible. Even without getting into the weird things like "what if the pedestrian is 99 years old, or a 20 year old pregnant woman?" the complexities are basically infinite.

So the only correct answer I see is for the computer to do exactly what we would expect a competent driver to do today: stop the car as quickly as possible. And assuming it's not overdriving its sensors (something humans are not supposed to do either), that should work damn near 100% of the time.


>My point is that it is 100% unfeasible for the computer to model these probabilities

I would agree.

But my point is that making the decision, and then trying to say we're not making a decision, is disingenuous. So the question becomes: disingenuous to what end?

We are making statements about what is or isn't permissible. Under existing human laws, we (generally) find it impermissible for someone in such a situation to save their own life by ending someone else's, in the same way that if A puts a gun to B's head and says shoot C or die, B shooting C would be regarded in most jurisdictions as murder, regardless of any part A plays in the scenario. Replicating these laws in machines is perfectly fine. Machines, like us, have no chance of fully understanding the consequences of anything either of us experiences, so we can probably only operate in a pragmatic, functional fashion.

However, in building a machine that replicates what is permissible in this way, as already applied to humans, we must also admit that we are instantiating in hardware a set of rules that will, in certain circumstances, kill or injure people one way or the other. We do this for the purpose of preventing such harm where impermissible (and perhaps even reducing it overall). Yet it remains that we must construct the thing that does it; we must enact, in a very precalculated fashion, a predefined, concrete expression of what is permissible, and therefore of what is not. And this, I will contend, is the heart of the argument and the reason the debate is as contentious as it is: we very much do not want to admit to that expression. Only by saying the question can't be answered, and therefore here's an answer, can we avoid it.


So, 'surprise me' then. I'm sure it will.


No, I said 'Neither.' You presented a false dichotomy.


It's not false: you get to choose, or a choice will be made on your behalf.


There is no choice. The car cannot calculate the probabilities, which is a prerequisite to being presented with a choice. No question, no choice, no decision. And nobody is going to try too hard to make it feasible, either, because the answer to that hypothetical question would be the same -- occupants and outsiders will both be safer if the car stops moving. So ... just stop the car.


Don't look now but you just made a decision.

The result of your choice to brake or 'just stop' is that there will be situations where passengers die to save real or theoretical pedestrians. As soon as this is established in the media, sales of these cars will plummet, and your decision will have repercussions, increasing the number of human-caused accidents across the board.

> The car cannot calculate the probabilities

I think we may be talking about different things. Probabilities are the foundation of self-driving algorithms; there is never a 100% right answer, just adjustments and corrections, calculations that lead to percentages that lead to choices. Maybe you are thinking about the older cruise control (like in some BMWs), which tried to anticipate things on the road and would brake faster than a human could; it saved a few bumps and caused a few.


Fundamentally irrelevant. It should prioritize not crashing. If it is in a situation where the only possible outcomes involve crashing, it should try to avoid crashing anyway.

Think about how stupid it is to program trolley problem logic. It suggests you program the car to purposefully crash into something. We will be absolutely fine just trying not to crash.

That means the car is constantly asking itself, "should I give up and just choose what to hit instead?" No thanks.


Aside from a plane dropping out of the sky onto the direct road ahead, would a sufficiently well implemented self driving car ever end up in such a scenario?

The most basic rule of driving is to adjust your distance & speed to the surroundings/conditions. You should always be able to come to a complete stop without hitting anything. Obviously we humans suck at taking everything into account, but would the same hold true for self driving cars?


Swerve (with a 60% chance of survival) or brake (with a 40% chance of survival)? This may sound like sci-fi, but it's simple math if you know the distance to the hazard, the coefficient of friction, and the speed; and since the computer can crunch numbers really fast, these percentages become part of the decision process. One option puts the public at risk and the other puts the driver at risk. Now even if you had no more information and a utilitarian programmer decided to go with the numbers and swerve: swerving off-side will generally endanger the driver, and swerving near-side will generally endanger the public.

These decisions seem unavoidable.
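The "simple math" the comment refers to is just constant-deceleration kinematics. As a minimal sketch (illustrative numbers only, not from any real system), whether maximum braking can beat the hazard reduces to comparing the stopping distance v²/(2μg) with the distance to the hazard:

```python
# Sketch of the braking math above: can the car stop before the hazard?
# Illustrative only -- real systems model far more than this.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms: float, friction_coeff: float) -> float:
    """Distance needed to brake to a halt: v^2 / (2 * mu * g)."""
    return speed_ms ** 2 / (2 * friction_coeff * G)

def can_stop_in_time(speed_ms: float, hazard_distance_m: float,
                     friction_coeff: float = 0.7) -> bool:
    """True if maximum braking halts the car short of the hazard."""
    return stopping_distance(speed_ms, friction_coeff) <= hazard_distance_m

# ~50 km/h (13.9 m/s) on dry asphalt (mu ~ 0.7) needs roughly 14 m to stop.
print(stopping_distance(13.9, 0.7))  # ~14.1
print(can_stop_in_time(13.9, 20.0))  # True
print(can_stop_in_time(13.9, 10.0))  # False
```

Turning those distances into the survival percentages the comment quotes is the hard, contested part; the kinematics themselves are trivial.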


These conundrums are irrelevant anyway. Over one million people die every year on the roads. The instant AI becomes better than humans, it's a moral imperative to adopt it, no matter what it does in these rare and contrived circumstances.


If this is your motivation then you will want the car to always default to saving the driver over pedestrian. To do otherwise would discourage adoption.

I'm not sure I'm so comfortable with this; it will result in some people being killed who never accepted this risk and might have been safe, in order to save people who accepted the risk in the first place.


And even if a car was, implausibly, in the same situation with the same options, it could of course still choose a response randomly or arbitrarily, based on how the sorting algorithm happened to rank equal outcomes.


There is currently no such thing as reducing the risk of killing yourself or someone else with your car to zero. Adjusting braking distance and speed can only do so much, given people's preferences about how quickly they want to reach their destination. This will very likely also be the case when autonomous vehicles are deployed. There will be collisions; that is simply an unfortunate fact.

Given that there will be collisions, it seems prudent to have the car try to figure out what's better to collide with- the tree or the child. Given that there will be collisions, it also seems prudent to think carefully about other, more difficult moral dilemmas.

Refusing to code for an eventuality is itself a moral decision; inaction is an action that has an impact on the world.


There is no probability of an impending crash at which it becomes moral to give up trying not to crash and instead choose whom to kill. The probability is never 1.


I agree that it's mostly a theoretical problem. But in the long term, if I imagine a world where all cars are self-driving, there would still be accidents from time to time, from things that can't be predicted: a rock falls on the road, an earthquake destroys the road, etc.

This would be rare, but would still kill a few thousand people every year. Wouldn't it be natural then to tweak the algorithm to reduce the death toll? I can imagine the public demanding it, and the software engineers starting to write code that deals with the rare case where an accident is inevitable. It could start very simple (avoid large groups of people), but could get more advanced over time.


Is this whole "question" really just a mental exercise, like the trolley problem? Surely the more salient question is how much senseless death could be prevented by the adoption of self driving technology, isn't it? This 'conundrum' almost seems to be intended to undercut trust in what could surely be a massive improvement to vehicle safety.


> Surely the more salient question is how much senseless death could be prevented by the adoption of self driving technology, isn't it?

Absolutely. However, when answering that question, I don’t think it’s unreasonable to thoughtfully consider this stuff.

Clearly, even flawed autonomous driving— if widely adopted— would save many lives. But I can’t think of many things that would slow that adoption more than a public perception of “killer robots” roaming the streets.

If we seriously want wide adoption, it’ll be hard to avoid addressing the qualitative perceptual difference between a death caused by a human driver and one caused by a machine that we’ve engineered.


Speaking un-ethically, the person in the car likely has fewer rights, having agreed to some Terms of Service or other, whereas the people on the street likely have not agreed to any. It may be better to kill the person in the car.


I think philosophy and culture are important, but this meme is very silly.

Personifying self-driving systems is a great disservice and misleading. A self driving car detecting a sudden obstacle will not have time to classify anything to the extent that these stories have in mind, and certainly not to the point that humans would (leaving aside the fact that humans also would not have time to classify the items at a "sudden" obstacle).

---

That being said, there are very real ethical questions in the self driving world:

1. Just because you can commute farther, should you?

2. Should more expensive cars be able to disrupt traffic for individual benefit, or should all self driving systems follow a consistent set of rules of the road to maximize throughput?

2.1 Should I be able to tweak my car to be a little more dangerous, but also quicker?

3. Should carpooling be mandated? Encouraged through tax breaks? subsidies?

4. Should self driving cars be allowed to cut through neighborhoods or be required to follow major roads around them? What about parking lots?

I'm sure there are other really interesting questions as well, but the trolley problem is not one of them.


Well in the case of point 2, when you have a lot of vehicles disrupting traffic for individual benefit, what you end up with is traffic as we have it today--horribly inefficient and slow for everyone. When everyone follows a consistent set of rules to maximize throughput, that's what actually leads to the maximum individual benefit.


The answers to 2, 2.1 and 4 are pretty clearly no.

For 1 and 2, dynamic tolls are the smart solution. That also applies to 2.1: if people want to go faster, let's tax them for it.


Is it just me or does solving the moral dilemma of prioritization based on attributes of the person seem like a worthless endeavor? In practice, I can't come up with a case where it would matter.

For example, a more reasonable metric for a machine to use is the probability of injury/death. If swerving is 90% likely to kill a person and staying straight is only 89% likely, then staying straight is a better choice. I don't see how attributes of the person would ever trump the probability of harm. The cases where probability is roughly equal for multiple actions will be incredibly rare.
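The metric described above reduces to a one-line comparison. A toy sketch (the action names and probabilities are hypothetical, purely for illustration):

```python
# Toy sketch of the "probability of harm" metric above: pick the action
# with the lowest estimated harm probability. The numbers are hypothetical;
# no claim is made that real systems expose estimates like these.

def least_harmful(actions: dict) -> str:
    """Return the action whose estimated harm probability is lowest."""
    return min(actions, key=actions.get)

choice = least_harmful({"swerve": 0.90, "stay_straight": 0.89})
print(choice)  # stay_straight
```

Under this metric, attributes of the people involved never enter the comparison; only the harm estimates do, which is the comment's point.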


An 89% chance to kill a homeless person, or a 90% chance to kill a pregnant woman?

> I don't see how attributes of the person would ever trump the probability of harm.

Maybe they shouldn't, maybe they should. How do we know? What are the terminal values? Society effectiveness? Or equality of all humans? Or minimizing public outcry in cases when autonomous car killed someone?

> The cases where probability is roughly equal for multiple actions will be incredibly rare.

With millions of autonomous cars on streets rare outcomes would happen all the time.


>89% percent chance to kill homeless person or 90% chance to kill pregnant woman?

Just fat, or pregnant? We need to know! Get onto it, engineers!

The car's algorithm for deciding this question is a matter of life or death!

/s of course, because I think this whole philosophical argument about self-driving cars is ridiculous when our current tech can't even reliably determine if an object is a stationary barrier or not.


Do you expect a person to be able to solve that problem? Would you fault a person for not solving it the way you want it solved? If not, why should a driverless car?


What’s the reliability of the risk assessment?

If your child died because there was a 1% higher projected chance that two people would be killed in a collision... would that seem just?

I think the insistence on experimenting with autonomous driving outside of controlled access highways is insane.


And what about the family of those two people?

I don't understand the insistence of bringing everything back to "what if it was your child." Pretty much everybody has family, and friends, and all that. I'm not sure how somebody being a child of somebody is supposed to be a rational argument for adjusting any of our thinking. As far as I can tell, it's basically the ultimate appeal to emotion.


It’s a call to humanity, not emotion.

When you’re talking about selecting who dies with a 1% margin with some ML process with an unknown margin of error, I question your judgement.


I don't see how it's a call to humanity. "Think if it was your kid vs two strangers" isn't calling on humanity. It's calling on familial tribalism and selfishness if anything.

And obviously, in the real world, an ML-given estimate with a 1% variance is probably entirely useless. In these sorts of hypotheticals, I'm not sure why that really matters. You can play with the numbers as you want, or even move the whole question far enough into the future that the margin of error can be considered very low. The question remains the same.


We let teenagers drive.

I think it is easy enough to predict that all the systems will do is prioritize avoiding pedestrians while minimizing collision energy otherwise. A 30 mph collision will mess up a pedestrian. It won't be a big deal for restrained passengers.

If it is frequently the case that autonomous systems have to pick between pedestrians, that points to traffic control changes, not a philosophy engine in the car.


It's the ultimate bikeshed. The only winning move is not to play.


I will posit that killing a 9-year old or a 29-year old is actually worse than killing a 99-year old, all else being equal.

(I would hold this view even if I were the 99-year old.)


These discussions always bring up the Trolley Problem which I find to be pretty insidious. The trolley problem is designed to hide the real moral issue. Here is the original Trolley Problem for comparison.

“Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man's life for the lives of five.”

The whole thing starts with a pretty elitist framework, that the best way to run society is to lie to the public because they can’t be trusted. The problem then ends with the infamous trolley problem which is so stripped down of any context it’s basically a dark pattern. You are expected to choose who is to die instead of wondering why do people keep getting run over by trolleys? Why does anyone have to die? Why can’t the track workers have safe working conditions?


Does this have anything to do with driverless cars? Driverless cars will not be programmed to target victims by race, and driverless cars' main benefit is reaction time to take simple safe actions like stopping the vehicle or turning into an open space.

The car systems won't have elaborate neural nets for deciding when to jump over a pylon onto a train track.


>Driverless cars will not be programmed to target victims by race

Not on purpose, but "machine learning" will have the same outcome:

https://theoutline.com/post/7022/ai-trolley-problem-ethics

https://twitter.com/wef/status/1058675216027660288


You say this and I want to agree... but then again who are the people building driverless cars?

Unless the software is opensource how are we supposed to know? Are we so naive that we’re going to take any company’s word for it? ...and then are we so naive that we’re going to fall for the obvious plant arguments in forums against it? If the internet has taught me anything it’s that you can’t trust anyone.


These problems are always brought as a drawback to driverless cars but are completely irrelevant. How many "trolley" situations are there compared to the thousands of times where the choice is:

A. Human error causes cars to collide and kills people

B. The cars don't collide


Can the car communicate over local network and check the pedestrians' credit scores?


A concern of mine is that in a situation where an autonomous vehicle is "at fault", its owner has a strong incentive to have it flee the scene.

If "caught", the liability exposure is the same, so there's no downside. You can't incarcerate an algorithm or even take its license.


https://idlewords.com/talks/sase_panel.htm

>Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability.


It used to be that Ivy League colleges laundered privilege into elite degrees for people who would become consultants who gave an aura of legitimacy to biased decisions. Maybe the robots will put them out of a job.


Consultants are the ones legitimizing the robots


The real ethical question is not asked: should a driverless car be allowed on the street if it waits 10 minutes at a left turn, causing a traffic jam?

Real autonomous cars are already causing real problems. I believe it's Waymo's job to take over the left turns if the waiting line gets too long.


The Trolley Problem is the most boring part of the ethics of autonomous driving.

Would you tell a human driver to stay off the roads until they decide in advance what they'd do in such a ludicrous situation?


The difference with a human driver is that there is no point asking them to decide ahead of time. In all but the most highly-trained soldiers, what a person will do in the case of immediate fatal personal threat can't be preprogrammed.


Ah, the perfect way for Tesla to segment their different product lines — make the expensive ones more “selfish” in their accident mitigation algorithms.


Hope they will be taking minutes in all the meetings where the business requirements are collected.


Ahh, a pseudo-science article from The New Yorker. A Dilbert comic strip would give more insightful information.


Well, he's recently started talking about self-driving cars, so it's on the cards: https://dilbert.com/strip/2019-01-25



