A self-driving car might decide you should die (medium.com/backchannel)
15 points by hmsimha on July 21, 2015 | 17 comments


There's actually a much more exciting corollary to this, though: the car can choose to crash in whatever way minimizes the risk of injury to you and others. This may entail deliberately crashing into the known best crumple zones, even at the optimal angle to reduce the risk of impingement of your limbs, say - perhaps even tailored to knowledge of your shape. It would know you have no passengers, perhaps, and so choose to cannibalize your rear by either spinning sharply (if this is safe, which admittedly it's probably not, unless perhaps the road surface has lost grip and you cannot decelerate effectively using the brakes) or - if a pile-up is likely - braking more sharply to begin the crash with the vehicle behind ahead of schedule. There could be cooperative crashing, with all vehicles minimizing the occurrence of collisions and maximising optimal crumple-zone utilisation. I have no idea whatsoever what is or isn't safe or possible, but the cars can be made to know.
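
For instance, a rough sketch of how the car might pick among candidate maneuvers, just by minimizing total estimated injury risk - every maneuver name and risk number below is invented purely for illustration:

    # Purely hypothetical sketch: choose the candidate crash maneuver with the
    # lowest combined estimated injury risk. Names and numbers are made up.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        occupant_risk: float  # estimated injury probability for the car's occupants
        others_risk: float    # estimated injury probability for everyone else

    def least_harmful(candidates: list[Maneuver]) -> Maneuver:
        # Minimize total estimated injury risk across all candidate maneuvers.
        return min(candidates, key=lambda m: m.occupant_risk + m.others_risk)

    candidates = [
        Maneuver("brake hard, stay in lane", occupant_risk=0.30, others_risk=0.10),
        Maneuver("angle impact into rear crumple zone", occupant_risk=0.15, others_risk=0.10),
        Maneuver("swerve onto shoulder", occupant_risk=0.15, others_risk=0.05),
    ]
    print(least_harmful(candidates).name)  # -> "swerve onto shoulder"

The hard part is obviously producing those risk estimates in real time, not the selection itself.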

These things are all a long time away, but it's actually far more exciting than terrifying.


Scenario: two passengers in your car, four in the other. Head-on crash incoming. Let's say the AI has two options:

A - You both die, one passenger in the other car dies.

B - You survive, all four in the other car die.

What should the AI do?

Should the AI be set to value the driver's and occupants' lives above all others, no matter what? Or do we set an equation? 1 driver = 2 others? 3 others? 4? Maybe the AI should ascertain fault and make a decision on that ("if you're to blame you get a x2 weighting to die")?
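
To make the question concrete, here's what such a weighting could look like in code. Every constant is an arbitrary assumption, which is exactly the problem:

    # Purely illustrative sketch of the "1 driver = N others" weighting question.
    # Every constant here is an arbitrary assumption, not a recommendation.
    OCCUPANT_WEIGHT = 2.0    # value one occupant's death as much as two others'
    AT_FAULT_DIVISOR = 2.0   # the "x2 weighting to die" if the occupants are to blame

    def moral_cost(occupant_deaths, other_deaths, occupants_at_fault):
        occupant_term = occupant_deaths * OCCUPANT_WEIGHT
        if occupants_at_fault:
            occupant_term /= AT_FAULT_DIVISOR  # at-fault deaths count for less
        return occupant_term + other_deaths

    # The scenario above, assuming our car is not at fault:
    cost_a = moral_cost(occupant_deaths=2, other_deaths=1, occupants_at_fault=False)  # 5.0
    cost_b = moral_cost(occupant_deaths=0, other_deaths=4, occupants_at_fault=False)  # 4.0
    print("A" if cost_a < cost_b else "B")  # -> "B"

Whoever picks those constants has effectively answered the moral question long before the crash ever happens.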

My guess is Google is ignoring this possibility and working to the car never crashing. But at 55mph on ice a loss of traction and control is always a possibility.

See, once we have AI making decisions, then we must also implement responsibility chains. Who takes the fall when decision-making AI goes bad? The code writer? The code-owning corp? The vehicle owner? The CEO?

In general: AI = good to go, decision-making AI = we are not ready.


I understand that this is the problem posited by this and many other articles. But right now we are almost certainly not even considering asking these questions of a self-driving car.

It doesn't have remotely the situational awareness and data to make an informed decision, let alone the capacity. I would be surprised if any of the first several generations do anything beyond trying to avoid a collision, including lowering velocity well below what a human might in adverse conditions.

After that, I expect that mitigating collision impact on the vehicles will be the next problem, since that is an engineering one - optimal crash angles and so on.

We are a long way from our cars making deliberate life-or-death moral decisions. Before we get there we have many decisions they can make very effectively that have no moral component. Once their decisions begin taking into account morality, society will no doubt legislate, but let's worry about that then.


We already have rather detailed rules (product liability law) for responsibility for products; and I've yet to see anything that points to any specific shortcoming of them when the product is an "AI making decisions".

I don't really understand why people act like "this product caused harm when used for its intended purpose, should someone be accountable and if so who?" is a question that society hasn't thoroughly considered.


Dumb products (for which the established laws apply) don't make decisions. AI does. That's what makes it AI, and it's gonna need new rules and regulations.


Even to the extent that the ways putative future AIs might "make decisions" are more like the way humans do than the way existing products respond to stimuli and produce results, our body of liability law, thanks in part to the legacy of less egalitarian times, already has ample precedent for the treatment of responsibility for the results of decisions made by entities that are legally property rather than legal persons.

To the extent that more enlightened times might see actors with that kind of independence as legal persons even if they are manufactured, well, we have personal liability law already, which also addresses agency relations, etc. where one legal person might be responsible for the acts of another.

So, again, rather than vague handwaves at poorly defined distinctions, I'd like to hear those arguing that liability law is a real imminent problem with AI that will require major and fundamental change to do something to specify the specific ways in which existing law is inadequate.


To me, it's closer to the terrifying end of the spectrum, because we reading this will all be dead[1] before cars will actually be able to make the kind of deeply informed and complex decisions you describe.

It's still exciting, but in a pretty abstract way.

[1]: barring the advent of Kim Stanley Robinson's gerontological treatment, but I'm assuming that's an even tougher nut to crack


I kind of suspect that within a small number of generations of self-driving cars the decision of how best to utilise crumple zones will be incorporated into crash decision making.

Coordinated crashing, however, will probably take several decades due to lack of interoperability, like so much tech progress.

_Moral_ decisions are almost certainly a long long time away.


This is a clickbait meme that has been done to death.

There is no reason to assume that self-driving cars won't behave as selfishly as human drivers. That is, act in the self-interest of the passengers in all but the most extreme cases, most of which will be beyond the capability of the AI (or human driver) to understand. Like a human, an AI might be expected to stop suddenly if a child runs into the street, even at the risk of being rear-ended, because the risk to the passengers is minor. In the same circumstance, one would not expect an AI to drive off a cliff to avoid the child.


Previous discussion (it doesn't have too many comments, but they're interesting): https://news.ycombinator.com/item?id=9584026 (9 points, 60 days ago, 5 comments)


The problem with this article is that it does not take into account consequences. With human drivers, the consequences of hitting another driver or pedestrian fall on the driver, while for automated vehicles the consequences will fall on a few companies. Companies are rarely held accountable for their actions beyond a few fines and lawsuits, as seen with the GM ignition switch deaths, whereas a driver can be arrested and tried under criminal law for their actions. A person also has to live with the fact that they killed someone, while for a company it will just be a statistic.


Ah, the endless protectionist argument: let's take all control from your life so that we can save lives.

The road to hell is paved with good intentions.

I'm not saying people dying is a good thing and that nothing should be done to prevent it, but we have to be reasonable about it.

No matter what we do, people will eventually die, so let's not transform our lives into something so boring that the only non-natural cause of death will be suicide.


Banks get hacked, governments get hacked, purveyors of malware get hacked, sleazy websites get hacked.... any reason to expect that self-driving cars won't get hacked? Something as complex as autonomous pilot software surely has plenty of attack surface, which will be promptly exploited by scum like HackingTeam to provide remote control/monitoring abilities to shady customers.


I honestly don't think this will ever be the case because it relies on two major assumptions: (1) there is some parent process in control of both cars involved in an accident. (2) all cars implement the same AI.

What is far more likely is a future in which there are many different types of self-driving AIs on the road from many different manufacturers, each of which will have different algorithms for all aspects of autonomous driving. Ultimately, this diversity will lead to carmakers designing algorithms that speculate on what another car might do. This is likely because the coordination costs between n different automakers' autonomous systems will rise with the square of n. In other words, unless we end up with a monoculture or only two or three different systems on the road, coordination just isn't going to happen.

The second issue is that even if the automakers could coordinate, such coordination would rely on vehicle-to-vehicle communication of the actions each car wants to take. Coding the decision-making of other cars into each car, so it could anticipate them onboard, makes no sense, because your vehicle would not have access to the sensor data needed to resolve the other car's decision.

Lastly, even with V2V communication, negotiation, and decision making, the impact of latency is likely to be so costly to making the best decision that any attempt at V2V negotiation will either be impractically complex or limited to an extremely simple model, such as each car proposing n strategies (where n is something like 2-3 solutions) and each car deciding on the best compatible option. This could keep negotiation fast enough to still make a good decision.
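
A minimal sketch of that extremely simple negotiation model, assuming only two cars; the strategy names, the compatibility rule, and all risk scores are made up for illustration:

    # Hypothetical sketch of the simple V2V model above: each car proposes 2-3
    # strategies, and a compatible pair with the lowest combined risk is chosen.
    # Strategy names, the compatibility rule, and all scores are assumptions.
    from itertools import product

    # (strategy name, estimated risk to that car's own passengers)
    car_a_proposals = [("brake straight", 0.4), ("swerve left", 0.3)]
    car_b_proposals = [("brake straight", 0.5), ("swerve right", 0.2)]

    # Pairs that would put the two cars on a collision course are ruled out.
    incompatible = {("swerve left", "swerve right")}

    def best_pair(a_proposals, b_proposals):
        compatible = [
            (a, b) for a, b in product(a_proposals, b_proposals)
            if (a[0], b[0]) not in incompatible
        ]
        # Pick the compatible pair that minimizes combined passenger risk.
        return min(compatible, key=lambda pair: pair[0][1] + pair[1][1])

    a_choice, b_choice = best_pair(car_a_proposals, car_b_proposals)
    print(a_choice[0], "/", b_choice[0])  # -> "brake straight / swerve right"

Even something this crude assumes both cars agree on the message format and the selection rule, which is exactly the coordination problem described above.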

Regardless of whether there is negotiation or merely real-time guessing about what the other car might do, the only thing each autonomous system designer can consider is the well-being of their passengers. All vehicles involved in an imminent accident acting in the self-interest of their passengers is likely to be the optimal solution. The only improvement I can see is some negotiation in which having an idea of what decisions the other car might make allows each vehicle to make slightly better decisions to preserve the well-being of its passengers.

People just like bringing this up because it lets them discuss this interesting philosophical question about morality. Once these discussions encounter the realities of how autonomous cars are likely to evolve and penetrate the market, it's going to be painfully obvious that this is a futile thought exercise to even waste your time with.


I'll go with a custom firmware which saves driver's life unconditionally.


How does the firmware save you from the prison shank?


Pity the title refers to an interesting discussion point that isn't discussed till quite late in the article, and then only fleetingly.



