
Former Lyft eng here. From my vantage point, as an industry, we're nowhere near where we should be on the safety side. The tech companies are developing driving tech privately instead of openly. Why can a private for profit company "test" their systems on the public roads? The public is at serious risk of getting run over by hacked and buggy guidance / decision systems. Even when a human operator has his hands hovering 1 inch off the steering wheel and his foot on the brake, if the car decides to gas it and swerve into a person, it is probably too late for the human crash test driver to take over. This is going to keep happening. The counterargument *for* this is that it is overall a good idea for the transportation system if the number of crashes & deaths is statistically lower than for human-operated cars. I see this as the collision of what's possible with what's feasible, and we are years away from any of this being close to a good idea. :( Very sad for the family and friends.



Perhaps companies need to test their safety devices first. I.e., first prove that their LiDAR correctly identifies pedestrians, cyclists, etc. From there, build test-vehicles with redundancy, e.g. with multiple LiDAR devices. Then prove that the vehicles actually stop in case of emergency. And only then actually hit the road.

Of course, the US department of transportation should have set up proper certification for all of this. They could have easily done so because they can arbitrarily choose the certification costs.


What you're describing is a driving test that every human needs to go through before they're allowed to drive on public roads. Something that can be revoked temporarily or permanently.

I would be very interested in a 3rd party (government or private) creating a rigorous test (obstacles, weather conditions, etc.) for self-driving vehicles. Becoming "XXX Safety Certified with a Y Score" for all major updates to the AI could help restore confidence in the system and eliminate bad actors.


How about if we start with a test no human driver is given:

Identify the objects in pictures.

We take our biological vision systems for granted, but it seems one autopilot system couldn't identify a semi crossing in front of the vehicle...


In some countries you have to pass a medical examination which includes a vision test.

The driving schools also have theoretical tests where one has to identify objects in a picture, interpret the situation, and propose the correct action. Of course, these tests are on a higher level: "this is a car, this is a pedestrian" vs. "you're approaching an intersection, that car is turning left, a pedestrian is about to cross the street, another car is coming from there etc."

Not to mention the road and track tests a driver has to pass which include practicing controlling the car in difficult conditions: driving in the dark, evasive actions on slippery surfaces and so on.

Edit: In my opinion it's insane to allow autonomous vehicles on the roads without proper testing by a neutral third party.


>the US department of transportation should have set up proper certification for all of this

I think you're severely underestimating the path that something like this would have to take. The certification itself would be under so much scrutiny and oversight that it would take years for that to get done. Unfortunately, the technology is far more readily available and easy to get working than the political capital required to create a certification for this.


if we wait for the gov to set up a certification for this, we'll delay the whole industry 10 years.


And?


It would cost thousands if not millions of lives. You understand over a million people die every year due to driving? The system is not working.


A "million" people do not die in the U.S. every year from driving. Not even close:

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Not that 37,000+ is a great number, but I don't think many of the detractors here are arguing that Uber et al. have a perfect record. Just that it's possible that progress is being made in a more reckless way than necessary. Just because space flight is inherently difficult and risky and ambitious doesn't mean we don't investigate the possibly preventable factors behind the Challenger disaster.

edit: You seem to be referencing the worldwide estimate. Fair, but we're not even close to having self-driven cars in the most afflicted countries. Nevermind AI, we're not even close to having clean potable water worldwide, and diarrhea-related deaths outnumber road accident deaths according to WHO: http://www.who.int/mediacentre/factsheets/fs310/en/


Yeah, but the tech will spread there fairly soon after it's established in the US. In places like Africa the most common cars are not some African brand; they seem to mostly be Toyotas, which will probably implement self-driving once it's proven.


For what value of “soon after” is very expensive automation going to reach Africa, India, and other places in numbers sufficient to put a dent in those fatalities? The slow march of other tech, safety included, suggests decades. Meanwhile the safety gains of automation are so far hypothetical, and until they’re well demonstrated, potentially a distant pipe dream. Nothing about ML/AI today suggests a near-future of ultra-safe cars.


Wow, let's just put people in bubble suits so they don't hurt themselves. It's ridiculous to say people shouldn't drive cars because it's possible to hurt themselves or others. We might as well outlaw pregnancy for all the harm that can come to people as a result of being born.


> if we wait for the gov to set up a certification for this, we'll delay the whole industry 10 years.

That's not a particularly convincing argument, given that (so far), Uber's self-driving cars have a fatality rate of 50 times the baseline, per mile driven[0].

Having to wait an extra ten years to make sure that everything is done properly doesn't sound like the worst price to pay.

[0] Nationwide, we have 1.25 deaths per 100 million miles driven. Uber's only driven about 2 million miles so far: https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self...
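A rough sketch of that arithmetic, assuming roughly 2 million autonomous miles and one fatality (both figures approximate):

  # Back-of-envelope for the "50x" figure; all numbers are rough.
  baseline_per_mile = 1.25 / 100_000_000   # US deaths per vehicle mile
  uber_per_mile = 1 / 2_000_000            # one fatality over ~2M autonomous miles
  print(uber_per_mile / baseline_per_mile) # ~40; closer to 50 if the baseline is taken as 1.0

Whether you get ~40 or ~50 depends on which baseline year you use; the order of magnitude is the point.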


In those 10 years ~350,000 people will die in car accidents in the US alone.

Let's say that halving the death rate is what we can reasonably expect from the first generation of self driving cars. Every year we delay that is 15,000 people dead. This woman dying is a personal tragedy for her and those that knew her. However, as society we should be willing to accept thousands of deaths like hers if it gets us closer to safer self driving cars.


> Let's say that halving the death rate is what we can reasonably expect from the first generation of self driving cars.

What's your evidence for why this is a reasonable expectation? The fatalities compared to the amount of miles driven by autonomous vehicles so far shows that this is not possible at the moment. What evidence is there that this will radically improve soon?


Why should we accept those deaths? This is like saying we should let doctors spring untested and possibly fatal therapies on patients during routine check-ups if their research might lead to a cure for cancer.


This is a silly interpretation of the data. You can tell because up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. Which also would've been a silly thing to say. If a single data point takes you from infinitely better to 50x worse, the correct interpretation is: Not enough data.


> You can tell because up to now, Uber could've been characterized as having an infinitely better fatality rate than the baseline. If a single data point takes you from infinitely better to 50x worse, the correct interpretation is: Not enough data.

No, you couldn't have characterized Uber as having an "infinitely better" fatality rate than the baseline, because that would have resulted in a division by zero when calculating the standard error. Assuming a frequentist interpretation of probability, of course; the Bayesian form is more complicated but arrives at the same end result.

It's true that the variance is higher when the sample size is lower, but that doesn't change the underlying fact that Uber's fatality rate per mile driven is empirically staggeringly higher than the status quo. Assigning zero weight to our priors, that's the story the data tells.


You're talking statistics. I'm talking common sense. Your interpretation of the data is true, but it isn't honest. As a response to scdc I find it silly.


Nope. Error bars do exist, and with those attached, the interpretation of the data before/after is consistent. Before it was an upper bound, after it is a range. Every driven mile makes the error on it smaller.
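For what it's worth, you can put rough numbers on those error bars. A minimal sketch, assuming the same figures as above (1 fatality in ~2 million miles) and using the standard exact Poisson interval:

  # 95% exact (Garwood) Poisson confidence interval for the fatality rate,
  # assuming 1 observed fatality over ~2 million autonomous miles.
  from scipy.stats import chi2

  events, miles = 1, 2_000_000
  lo = chi2.ppf(0.025, 2 * events) / 2        # lower bound on expected fatalities
  hi = chi2.ppf(0.975, 2 * (events + 1)) / 2  # upper bound on expected fatalities
  scale = 100_000_000 / miles                 # convert to deaths per 100M miles
  print(lo * scale, hi * scale)               # roughly 1.3 to 280 deaths per 100M miles

So the data is consistent with anything from roughly the human baseline up to a couple of hundred times worse, which is another way of saying "error bars exist" and "not enough data" are both fair readings.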


Under 50 times. Still horrible, of course.

https://www.androidheadlines.com/2017/12/ubers-autonomous-ve...

Presumably a bit more since December.

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Fluctuates just over 1 per 100 million.

(1 death per 2+ million miles) / (1+ deaths per 100 million miles) ~ slightly under 50


You've hit upon one of the most obvious ways to improve the safety of these systems. Deploy the systems more broadly, without giving them active control.

Then, you can start to identify situations where the driver's actions were outside of a predicted acceptable range, and investigate what happened.

Additionally, if you have a large pool of equipped vehicles you can identify every crash (or even more minor events, like hitting potholes or road debris) and see what the self-driving system would have done.
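Something like this, as a minimal sketch of that shadow-mode idea (names are illustrative, not any vendor's actual API):

  # Shadow mode: the self-driving stack plans on every frame but never
  # actuates; large planner-vs-human disagreements are logged for review.
  from dataclasses import dataclass

  @dataclass
  class Action:
      steering: float  # normalized steering command, -1..1
      braking: float   # normalized brake command, 0..1

  disagreements = []

  def shadow_step(planned: Action, human: Action, frame_id: int, threshold: float = 0.5):
      # 'planned' is what the stack would have done; 'human' is what the
      # safety driver actually did. The planner output is only recorded
      # here, never sent to the actuators.
      error = abs(planned.steering - human.steering) + abs(planned.braking - human.braking)
      if error > threshold:
          disagreements.append((frame_id, planned, human))

  # e.g. the stack would have kept rolling while the driver braked hard:
  shadow_step(Action(steering=0.0, braking=0.0), Action(steering=0.0, braking=0.9), frame_id=1042)
  print(disagreements)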

The realistic problem is that Uber doesn't give a shit. As such, deployment will never be optimized for public safety. It will be optimized for Uber's speed to market.


Even worse: There are people who test their DIY self-driving hacks on public roads:

https://youtu.be/GzrHNI6eCHo?t=100

Using https://github.com/commaai/openpilot , which is cool, but not something to be testing on public roads.


That is indeed reckless, but that guy is testing open source self-driving technology.

Given that all the main car companies are keeping their technology private I don't see how open-source systems are supposed to keep up without people doing this.

I think you can also contrast this to the many more people who decide to drink alcohol and then drive.


Unless things have changed, all the important parts aren't open source -- neither the vision nor the decision pipeline.


As far as I'm aware, https://github.com/commaai/openpilot is the bit that needs training.


> I think you can also contrast this to the many more people who decide to drink alcohol and then drive.

Sure, but drinking (more than a little) and driving is already illegal, so they're not exactly comparable.


In Germany you would get punished for doing something like that. (Well, somebody would need to catch you first, of course.)


> Why can a private for profit company "test" their systems on the public roads?

I mean, this is the way cars have always operated. Is it somehow any different to let them "test" their brake system, their acceleration systems? Car companies have always been able to do whatever they want, who cares about the deaths.

For some reason, when it comes to roads, we all just accept a certain amount of deaths each year. Over a million people are killed every year around the world, up to 40,000 in the US alone. That is acceptable according to every car driver.


Car companies have (multiple) private tracks that they stress test their vehicles on. Car companies are extremely risk averse. Public road testing for something safety related is going to be very late stage in development/testing, if at all.


At some level, integration testing always has to meet the real world. Maybe in IT we would call it a "canary" instead of a test. I once had a colleague who had been a test driver for a major European car firm. His job was to drive cars around in real traffic.

Self-driving cars need this all the more, since most of the hard and scary stuff is the integration of many systems and how they deal with unexpected scenarios. And worse still -- in a traditional car, you had an expert test-driver like my friend who could, say, take evasive action if the brakes didn't work.

But in these cars, the driver is the system under test.


> Car companies have (multiple) private tracks that they stress test their vehicles on. Car companies are extremely risk averse. Public road testing for something safety related is going to be very late stage in development/testing, if at all.

Yes, and so do these guys. They might be somewhat risk averse, but let's not pretend they haven't in the past made trade-offs that cost lives (Pinto, etc...), and there are constant recalls going on.


Google, at least from what I've read, has all of that.


> That is acceptable according to every car driver.

All of them? I mean, you took a poll? That must have taken a while.

My understanding (as a non-US resident) is that parts of the US are built around the assumption you drive.


> parts of the US are built around the assumption you drive

Yes, this is 100% true. Arizona included.

It's difficult, usually impossible, to buy your way out of driving (or being driven) in most places in the USA. Ask DART why rapid transit ridership in the Dallas area is falling despite a growing population and a billion-dollar annual budget.


> All of them? I mean, you took a poll? That must have taken a while.

If someone wants to pretend it is not acceptable while still driving, that is just silly.


The FLOSS side has argued for openness in critical systems for ages. Medical devices such as pacemakers, airplane systems, automatic brakes on cars, voting machines, and the list just goes on. If the device gets hacked or has any bugs then people will likely die (except for the voting machine), but the code is kept private even for devices which literally get implanted into your chest and which a person has to trust with their own life. Governments all over the world seem to be very much against the idea of requiring that such critical code be made open for the sake of safety, as that would leak the oh-so-important trade secrets.


When regular cars first appeared, were they not tested "live" immediately? And yes, there was FUD even back then, and in parts of the world a backlash which resulted in sometimes silly safety rules.

The largest real difference seems to be the speed of communication nowadays, the rate at which public opinion can be manufactured and globally spread.


When regular cars first appeared, vehicles had to be led by a pedestrian waving a red flag or carrying a lantern to warn bystanders of the vehicle's approach.

https://en.wikipedia.org/wiki/Red_flag_traffic_laws


And a countless number of people have been killed.


Real cars were an evolution of an existing technology (the carriage). We replaced a horse with an engine and a steering wheel.

In this case we have a machine replacing a human and using non-deterministic algorithms to do so.

Automated driving *only* makes sense on specific, well-mapped, sensor-enabled, pedestrian-free roads. Let’s focus on that use case before we let these things take over our towns and cities.


Such roads exist today; they utilise two steel rails and are generally called railroads.


I'm for a middle-of-the-road way of doing things at first: make some highway lanes for autonomous cars only. Separate those lanes physically. Remove the speed limit there to encourage people to get cars which can use those lanes.

Implement a way for the highway operator to affect the cars on it: make way for emergency vehicles, adapt to info the operator can aggregate and transmit, change the speed of cars to the slowest one around, etc.

Once the majority of people have self-driving cars and those have been proven in this controlled environment, you can think about allowing them on open roads. Or not. Or adapt those open roads to what self-driving cars need.


> make some highway lanes for autonomous cars only. Separate those lane physically

sure, makes sense to me.

But God, that would cost a fortune.


> But God, that would cost a fortune.

It would. It's an infrastructure-scale job. I mean, it would be like railways for individual trains.

But that's kinda-sorta the job expected from government and what taxes are for: paying for things the individual would have problems doing.


What do you mean? Everyone making these comments is fairly certain that this is a completely trivial endeavor.


GM's "super cruise" fills the exact role you're describing aside from removing the speed limit, and aside from the billions of dollars in public infrastructure spending and increased traffic for everyone else.


> Why can a private for profit company "test" their systems on the public roads?

Because at some point, it's literally not possible to test something that's ultimately intended for use on public roads anywhere but public roads. Or, to put it in terms that would be more familiar to developers, "you can test all you want in QA, but prod is the ultimate test of whether your code really works".

As mentioned downthread, the problem here is that Arizona basically gave companies full leeway, without having to demonstrate that they had done sufficient tests internally before putting the cars on public roads. Apparently they're not even required to report data on accidents to the state, which is absurd.

I wouldn't be surprised if Uber were ultimately found to be cutting corners, but Arizona is also responsible for rushing to allow self-driving cars on their roads without doing their own due diligence first.


This is what it means when politicians say they’re cutting red tape. They’re letting industry do whatever the fuck they want at the expense of everyone.


When it is deemed necessary to test the vehicles on public roads after they have been shown to be mostly safe, ALL of these vehicles should have flashing lights like any emergency vehicle.

I'm stunned that when I drive in Texas, there don't seem to be regulations over which vehicles can have flashing lights and with which colors. Yet in these other cities you have experimental robots that are designed to blend in with other cars as much as possible. NO! Make them stick out.


Which is hilarious to me because AZ is the bastion of those "Don't tread on me" stickers and all this anti-regulation sentiment...until someone gets killed by a lack of regulations and then everyone wants to lock everything down and blame the government for not doing their due diligence.


>Why can a private for profit company "test" their systems on the public roads? The public is at serious risk of getting run over by hacked and buggy guidance / decision systems.

Because the for profit company has a say in government and lawmaking, the person getting run over doesn't.


>Why can a private for profit company "test" their systems on the public roads?

Some phase of testing will necessarily be on public roads. The fact that they started that phase prematurely doesn't somehow mean that "private for profit companies" should never test on public roads.

Boeing tests new airplanes in the public airspace too, over public areas.


Boeing certainly tests in public airspace, but only as part of a standardized certification process overseen by the US government - a process and set of regulations that are written in blood.

Uber abides by no such testing standardization.


Which would be a different standard than the one I was criticizing.


I strongly agree with this sentiment. If this technology is going to be tested on the public then all the datasets from these vehicles should be openly shared and development should be done cooperatively. Let the market decide winners based on metrics like cabin comfort and leasing terms rather than odds of causing grievous injury.


> Why can a private for profit company "test" their systems on the public roads?

My thoughts exactly. It is wrong. We have a chance to do this properly, openly and together, but instead our governments are letting private companies put our lives in danger.


You are right on the money. Also, why must self driving cars be on the roads with human drivers at all? We're an unpredictable bunch, and lives are at stake! We should simplify the problem domain and keep self driving cars away from other human drivers and pedestrians (even once the technology reaches a very advanced stage). When they are actually ready to go mainstream, then think about putting them on roads with humans (and even then, dedicated routes and infrastructure specifically for self driving cars may be the safer way to go).


To be fair though, it's not exactly like we're where we need to be on the safety side from a human standpoint. I'd take my chances with a robo-car over a 16 year old texting any day.


That's why many unnamed companies in the self-driving space are testing solely on enormous closed-off tracks in the South Bay.


It is simple: Tech companies are allowed to test their systems on public roads because the communities/states have given them permission to do so (with restrictions like roads allowable, max/min speed allowable, safety driver present behind wheel, etc.).


I can't imagine you would want any major new transportation technologies released without being tested on public roads?


There shouldn't be any testing on public roads until the executives and engineers are willing to put on blindfolds and spend six hours running back and forth across a track on which a hundred of their autonomous cars are maintaining 60 MPH in both directions.


> Why can a private for profit company "test" their systems on the public roads? The public is at serious risk of getting run over by hacked and buggy guidance / decision systems.

Interesting way of putting it. Kinda echoes the "privatize the profits, socialize the losses" sentiment from the financial crisis.


We already have a system for privatizing negative externalities in traffic accidents (and in 95% of the rest of life). It's called tort law, and it will certainly be used in this case if Uber was at fault.

There are of course situations where externalities fail to be privatized (e.g., traffic congestion), but I don't think auto accidents are one of them.


Someone died. There is no way to truly privatize that externality. Money won't bring the woman back.


Do you drive? How can you drive and not be aware of the 40,000 deaths a year in the US, and pretend you aren't part of that system.


Do you program? How can you program and not be aware of the X deaths, Y privacy violations and pretend you aren't part of that system?

Is every programmer in the world partly responsible for every death caused by every programming bug ever? Should we blame all civil engineers every time a bridge collapses?

I am an individual, not "part of a system". Simply being a driver doesn't make me a reckless or dangerous one. For example, 10K of those deaths involve DUI, and I am certainly not part of that system. I'm sure 95% of drivers never kill anyone.


>I am an individual, not "part of a system". Simply being a driver doesn't make me a reckless or dangerous one. For example, 10K of those deaths involve DUI, and I am certainly not part of that system. I'm sure 95% of drivers never kill anyone.

Massive misunderstanding. As a driver your actions form a part of the traffic around you; even responsible drivers are "attached" to the system and put pressure on it. Maybe one day somebody rear-ends you and you are knocked forward into a crosswalk, or you brake suddenly to avoid hitting a pedestrian and cause an accident behind you. Or maybe your very safe driving leaves enough room for somebody to take a foolish risk cutting in front of you and lose control of their car. Collectively society accepts that getting places quickly is worth the cost that we pay in human lives. And each road user agrees to the small risk of death/injury beyond their control in order to get somewhere.


I'd guess that in the US, at least, 99.2% of drivers never kill anyone while driving (including themselves).

(back of the envelope, assuming 300,000,000 drivers driving for 60 years of their lives at a 40,000 per year death rate)
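Spelling that envelope out (same assumptions: ~300M drivers, ~60 driving years each, ~40,000 deaths per year, roughly one driver involved per death):

  drivers = 300_000_000
  deaths_per_year = 40_000
  driving_years = 60
  share_ever_involved = deaths_per_year * driving_years / drivers
  print(1 - share_ever_involved)  # ~0.992, i.e. about 99.2% of drivers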


It's almost impossible to drive a car without threatening small children and other people with lethal force. People regularly wait on the sides of roads when every traditional custom, convention and current law demands that car drivers stop, but they never do it. "I want to get to the shops quicker" is not an excuse for levelling a gun at someone and asking them to please step aside.

Moreover, it is illegal to drive a car safely. If you try it, you will be pulled over. You may be fined or lose your licence. Try it and see. So yes, being a car driver does make you a dangerous one.


Unless you define what you mean by "drive a car safely" I don't see how one can even begin to evaluate that claim meaningfully.


Perhaps the same standard should be applied to human drivers: if a human makes a mistake and kills somebody, the rest should be taken off the road for retraining.

I wonder how the issue developed when the automobile was first introduced. I remember something about needing to have a guy walking in front of the vehicle waving a warning flag, but apparently that didn't last long.

Found it - https://en.wikipedia.org/wiki/Red_flag_traffic_laws


It seems like a faulty analogy, unless you want to tell me that every Uber car is running unique software.


Yeah, possibly. On one hand, the Uber setup should be examined for flaws. On the other hand, we shouldn't get paranoid about the occasional traffic death when we've already decided (for human drivers) that this is acceptable collateral damage in the transport system.


The entire pitch for autonomous driving is that it's safer than human drivers. If that's false it's not clear why we should continue to allow Uber to test their work on public roads.


It could be safer than human drivers and still cause quite a few deaths. We can't evaluate such statistics from one accident. We don't even know if this particular accident would have been avoided by a typical human driver (there was apparently one behind the wheel of the vehicle).


I think the world can probably stand to wait a bit of time while precisely this question is investigated.


That's reasonable while the system is in testing and in small scale use. If it ever becomes a major part of the transportation system, it won't be desirable to shut it all down every time there's an accident.


In my view, it should not become a major part of the transportation system until we are confident that it's actually safer.


Not a meaningful counterargument here. Someone can not only drive, but even have a history of being an at-fault driver who caused a death, and still justifiably not believe that public roads should be used for private testing where the profit is private but the harm is socialized, or that money compensates for lost lives.


And yet, people and companies undertake activities all the time that involve risks to life, and we do indeed attach a monetary value when those lives are harmed.

Or, alternatively, we do indeed have a cap to the amount of resources we are willing to expend to save a life - lives are not of infinite value.


What a completely terrible thing to say. This isn't about saving a life, this is about not killing somebody. Just because a company like Uber thinks it can make a lot of money doesn't mean it can simply take risks like these.


Every time you are selling food you take the risk of killing people if something goes wrong. And what about carrying people in planes? These risks are taken continuously, for profit. How is that different?


Both of those industries have tremendous regulations in place to prevent accidents and injuries. If someone gets salmonella poisoning and it is traced back to a company, there is a massive recall at the company's expense. Air travel is one of the safest modes of transportation available (statistically) because of the NTSB and the rules/regulations put in place after each and every accident.

That's how it is profoundly different.


Air travel is only safe because companies have taken these risks with people's life. A society that takes no risk is a society that will achieve nothing new.


Air travel and eating food at a restaurant are opt-in actions. To avoid this risk, you would have to opt out of using the public road system that you are required to use.


We wouldn't accept it if people got killed by selling them poisoned food, at least not where I'm from. Plane crashes are investigated, licenses are suspended, and blacklists are kept; furthermore, software is tested and verified before it is used in production. We shouldn't accept excessive risks just because a profit can be made; see the regulations on truck and bus drivers.


But what makes you think Uber didn't test their software? When Boeing introduces carbon fuselage it is taking risks with people's life. They do reasonable testing but a technology isn't proven until it has been widely used for a long time. No risk = no innovation.


Since there is absolutely no binding federal regulation I don't have a lot of confidence that the level of testing is comparable to what's done for airplanes.


It's all well and good to not like the choice, but the choices still have to be made - how much are we willing to give up economically in order to reduce immediate risk to lives? Included in this must be the consideration that economic value can be used to save lives, through higher living standards and better health care.

Every regulatory system in the world has to consider these things, explicitly or (more commonly) implicitly. See e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1633324/ for the application of the idea to healthcare, or http://www.nytimes.com/2011/02/17/business/economy/17regulat... for the public policy implications for environmental regulation.

Or more relevant yet, this "guidance on valuing reduction of fatalities and injuries by regulations or investments" by the Department of Transportation: https://www.transportation.gov/sites/dot.gov/files/docs/VSL_...


Did you ever take an engineering ethics course? One of the things talked about is the monetary value placed on a human life. You can't make that value infinitely high or literally nothing can happen. You also don't want it super low.


We get it, it's sad and money isn't everything, but the world can't stop because of a single death. If there is legal liability here then it'll be handled.


The world doesn't have to stop, just testing self-driving cars live on public roads with very little evidence that the tech is ready.


More evidence than human drivers, the same human drivers who are legally responsible/liable for this incident.


This attitude went great for Ford with the Pinto, as I recall.


Courts are the equivalent of exceptions in computer programming. You should have them for exceptional circumstances, but you shouldn't use them in your program for routine control flow (because they are slow, among other things).

Unless Python.


If you believe that then you need to change the entire economy.


Other countries don't work the way the US does. Instead of suing the company you just notify the regulator and they do an audit. If the company is at fault it is fined and ordered to correct itself, and monitored to see that it does. You can get some money from the company but it won't be a payday for you.


Manslaughter doesn’t fall under tort law, IIRC.


Tort law covers death. Manslaughter is a criminal charge but wrongful death, etc. are how tort law handles the situation. You may recall O.J. Simpson was sued for wrongful death and lost.

http://www.nytimes.com/1997/02/11/us/jury-decides-simpson-mu...


Wouldn't one typically press both charges? Wrongful death to allow the family to be compensated, corporate manslaughter to punish the company and incentivize not killing people tomorrow. It doesn't seem like it's necessary to choose the non-criminal case.

To state the obvious, I am not a lawyer.


Civil suits are intended to serve both purposes - compensation and deterrent (punitive vs compensatory damages). E.g. if you sue a company for doing you damage, the compensatory damages are based only on the injury you suffered, but punitive damages' presence and severity is affected by things like the level of negligence of the company.

The theoretical difference with criminal charges is that criminal charges deal with harms to a different sovereign (i.e. the state and the public) that need to be punished regardless of the wishes of the immediate victim. E.g. you're not allowed to settle a murder charge out of court.


It depends on the details of the case. (IANAL either.)

Criminal law applies to criminal acts. It's possible for do something wrong that "injures" someone (physically or in some other way) without committing a crime.

Criminality generally requires wrongful intent. If you simply screwed up while otherwise obeying the law then you haven't committed a crime (negligence can be a crime, e.g. if you are more careless than a "reasonable man" would be this is in itself a form of bad intent -- so swinging swords around in public places while blind-folded isn't OK).

Also, tort law has a different standard of proof -- it is resolved based on preponderance of evidence rather than proof beyond reasonable doubt, so there's a lower bar than for throwing someone in jail.

Oh, and it's harder to prove criminal charges against nebulous entities. (Who's the criminal in this case? Uber's CEO? The head of the software team? The person who gave uber a permit to test their crap on public streets?)

Uber's overall conduct might approach the point of "criminal enterprise" at which point RICO statutes might be invoked. Not likely though.


The definition of manslaughter varies between states, and I am not a lawyer, but in general, criminal manslaughter charges only apply if there is an element of extreme negligence or doing something unlawful (like driving in an illegal manner) and causing a death.

For instance, there are accidents that cause deaths all the time; but they would only be prosecuted as manslaughter if someone were doing something considered inherently unsafe and unlawful.

If you run a red light, and kill someone in the process, it's possible you could be prosecuted for manslaughter, though even then you might not be if there could have been extenuating circumstances (sun in your eyes, etc).

In this case, there would only be a chance of a manslaughter charge if Uber were somehow being particularly negligent. In this case, it sounds like there was a human operator in the car, though the car was in autonomous mode. It's possible that the operator wasn't paying the attention that he or she should have been, or it's possible that even the human operator didn't see the pedestrian in time. It mentions that the pedestrian was crossing outside of any crosswalk, so it's possible that this was a tragic accident of them trying to cross a street with traffic without having been sufficiently careful.

On the other hand, it mentions a bicycle, but also says the victim was walking, which I find odd:

  "Elaine Herzberg, 49, was walking outside the crosswalk on a four-lane 
  road in the Phoenix suburb of Tempe about 10 p.m. MST Sunday (0400 GMT 
  Monday) when she was struck by the Uber vehicle, police said." 

  Local television footage of the scene showed a crumpled bike and a Volvo 
  XC90 SUV with a smashed-in front. It was unknown whether Herzberg was on 
  foot or on a bike.

Anyhow, it is relatively uncommon for drivers to be charged with manslaughter in this country; they generally have to be doing something quite egregious, like driving drunk or driving at 90 in a 30 mph zone. Most of the time, the response to people being killed in traffic accidents is just that accidents happen.

Unless the operator was being particularly negligent, or there was some serious and known problem with the self driving car but it was put on the road anyhow, I doubt any manslaughter charges will be filed.

Remember, Arizona has explicitly been encouraging the testing of self-driving cars, so I expect that testing a self driving car, with an operator to take over in cases which it can't handle, would not be considered unlawful or extremely negligent. Maybe what the operator was doing could be, but we'd need more information before it would be possible to tell.


> On the other hand, it mentions a bicycle, but also says the victim was walking, which I find odd

The police clarified that the victim was walking her bike across the street: https://twitter.com/AngieKoehle/status/975824484409077760


Do auto accidents fall under criminal law in Arizona? In NY where I live, unless it's DUI or otherwise reckless and/or intentional in some way, it's just a traffic violation. You could take the offender to civil court, but that's still crazy. In this particular case, I think Uber, if they are at fault, would have to pay an arm and a leg to get out of this.

While the woman's death is tragic, I hope this doesn't set us back on whatever progress made in autonomous driving.

EDIT: removed "While this a serious f*up by Uber and"


Not sure why you think that taking the offender to civil court is crazy? It happens literally every day. And in the case of death you're going to get major damages. Normally these cases settle, but like it or not many, many people sue over serious and fatal auto accidents.


>While this a serious f*up by Uber

How do you know that? From my initial reading of the police report, the woman was jaywalking and ran out into the middle of the street. I'm not even sure a human or autonomous driver would have been able to stop in time. The only tech that might have been able to is Waymo's since, based on the video they released, they have a radar lock on every pedestrian within a mile of the vehicle and they do predictive tracking to determine their position. Even then, it might have still not stopped in time.


Uber still has a responsibility to avoid hitting jaywalkers. I would be more sympathetic on a freeway, but this seems negligent. If they can’t avoid hitting jaywalkers they need to keep testing in less dangerous circumstances.


You're making the mistaken assumption that this accident could have been avoided at all. While autonomous vehicles should have a higher threshold for responsibility than human drivers, it's not possible to expect them to never be involved in accidents. For all we know, the jaywalker ran right out into the road.


I guess you are right. What I meant to say was it's a terrible PR for Uber, and a serious f*up if Uber was at fault.


If self driving cars are only as safe as human drivers, why bother with spending billions to achieve status quo?

Unless it is a pure capitalist move.


They're far better than human drivers, but that depends on the system. In my personal opinion, based on the sensor video that Waymo released a few weeks ago, their self-driving tech is far more focused on safety than Uber's and their vehicles are likely far safer than a human driver.


Have you watched the video provided by the Tempe police? The woman was jaywalking, but she was far from running out. This is something that Uber should have picked up.


Interesting question here: did Uber have authorization to operate autonomously in AZ? If not, and they had a human operator behind the wheel as a backup, is that human liable for manslaughter charges for not being in control of the vehicle?


  Did Uber have authorization to operate autonomously in AZ?
According to my reading of AZ Executive Order #2018-04 [0], presuming they meet the guidelines established in provisions #2 and #3, they do:

2) Testing or operation of self-driving vehicles equipped with an automated driving system on public roads with, or without, a person present in the vehicle are required to follow all federal laws, Arizona State Statutes, Title 28 of the Arizona Revised Statutes, all regulations and policies set forth by the Arizona Department of Transportation, and this Order.

3) Testing or operation of vehicles on public roads that do not have a person present in the vehicle shall be allowed only if such vehicles are fully autonomous, provided that a person prior to commencing testing or operation of fully autonomous vehicles, has submitted a written statement to the Arizona Department of Transportation, or if already begun, has submitted a statement to the Arizona Department of Transportation within 60 days of the issuance of this Order acknowledging that:

a. Unless an exemption or waiver has been granted by the National Highway Traffic Safety Administration, the fully autonomous vehicle is equipped with an automated driving system that is in compliance with all applicable federal law and federal motor vehicle safety standards and bears the required certification label(s) including reference to any exemption granted under applicable federal law;

b. If a failure of the automated driving system occurs that renders that system unable to perform the entire dynamic driving task relevant to its intended operational design domain, the fully autonomous vehicle will achieve a minimal risk condition;

c. The fully autonomous vehicle is capable of complying with all applicable traffic and motor vehicle safety laws and regulations of the State of Arizona, and the person testing or operating the fully autonomous vehicle may be issued a traffic citation or other applicable penalty in the event the vehicle fails to comply with traffic and/or motor vehicle laws; and

d. The fully autonomous vehicle meets all applicable certificate, title registration, licensing and insurance requirements.

-----

[0] "Executive Order: 2018-04 - Advancing Autonomous Vehicle Testing And Operating; Prioritizing Public Safety" - https://azgovernor.gov/file/12514/download


Jesus Christ, man! This was a person, not a "negative externality."


And conservative politicians continue to make moves to erode tort law practice. See "See You In Court" by Thomas Geoghegan


Theoretically the reputation cost would discourage for-profit organizations from releasing unsafe self-driving vehicles for testing on public roads.

The ironic part here is that Uber already has a strong reputation for skirting the law and not caring. Their response and the following audits will be worth following.


> Theoretically the reputation cost would discourage for-profit organizations from releasing unsafe self-driving vehicles for testing on public roads.

This assumes that the reputation costs are comparable in weight to the acts that they commit. Based on the past year and what I hear about Uber's growth[0], I'd argue that their reputation and their valuation are quite uncorrelated, but it's hard to know the truth.

0: https://www.forbes.com/sites/miguelhelft/2017/08/23/despite-...


> This assumes that the reputation costs are comparable in weight to the acts that they commit.

Rather, this assumes that the reputation cost, times the risk of incurring that cost while undertaking some action, is comparable to the gain from the action.


The problem for Uber is that if they lose this race, then there is no Uber. The outcome of this is that they will be willing to risk a lot in order to succeed. This seems to be collateral damage for them. Completely unsurprising, as it seems they lied about running the red light in autonomous mode and they ran their program without proper licensing (and reporting). Nobody should be surprised that Uber was the first company to rack up an autonomous driving death.


>Theoretically the reputation cost would discourage for-profit organizations from releasing unsafe self-driving vehicles for testing on public roads.

And theoretically the only people who can regulate banks are the banks themselves.

Funny how you hear that argument every time a horrible industry is trying to keep profits high by externalizing costs: meat packers in 1900, tobacco in 1950, oil and gas till today.


What if you already have a sh*t reputation like UBER?


I doubt it. If you exclusively kill people outside the vehicle who are walking or riding bikes, you encourage people who don't have a car to hire the vehicle.

The only possible costs would be if they're made to pay a price in a court of law, which seems unlikely since the woman who was killed had a bike with her, and it's unknown if she was on the bike at the time, but she's been described as a pedestrian outside of a crosswalk. Clearly Arizona authorities just don't care about safety.


> Theoretically the reputation cost would discourage for-profit organizations from releasing unsafe (...)

I wonder whether food companies (like the meat packing industry in Chicago) said the same thing before the FDA was created in 1906; i.e. that reputation alone would keep the industry from selling mislabeled products, or spoiled or adulterated food.


Really it's "socialize the costs" as well, considering that autonomous vehicle development was funded by taxpayers.[1] Like most high tech.

[1] https://en.wikipedia.org/wiki/DARPA_Grand_Challenge


The public spent a few million dollars on the Grand Challenge, while Waymo alone is worth 10's of billions of dollars.

https://www.cnbc.com/2017/05/23/alphabets-self-driving-waymo...

Self-driving car companies will have massive net positive externalities on the public, to the tune of hundreds of billions of dollars per year when fully deployed.


> The public spent a few million dollars on the Grand Challenge, while Waymo alone is worth 10's of billions of dollars.

False comparison. First of all DARPA has invested a lot more than that in all sorts of related AV technologies (e.g. LIDAR, AI, robotics). But the real question is when were those "few million dollars" invested. Seed investments generally are much smaller than what companies end up being worth.

In our high tech system, taxpayers generally take on the riskiest stage of very early development, where it takes billions of dollars in bets spread over a very large area over 10+ years. And you're right, there are some socialized benefits, but the statement stands that the costs are socialized as well.

Imagine if you told Google's earliest investor that their investment was a tiny percentage of what it's worth today, and they enjoy the benefit of using Google now, so it's fair that Google's later investors kept all the equity.


> First of all DARPA has invested a lot more than that in all sorts of related AV technologies

I was responding to your link about the Grand Challenge.

> Seed investments generally are much smaller than what companies end up being worth.

Of course. Are you really going to make someone else go point out the hundreds of millions of dollars of seed-level funding by Google, by Uber, etc., to demonstrate the obvious point that the DARPA Grand Challenge is terrible evidence for the claim "autonomous vehicle development was funded by taxpayers"?

> so it's fair that Google's later investors kept all the equity.

People who invest in Google get to keep the equity they purchase, but they don't get a claim to the equity of companies that exist because of Google's search product. Likewise, when the government invests in public goods, they don't then get to claim everything that is built on top of the public good.

Indeed, the government provides the rule of law, without which almost no modern economic development could take place. But that obviously doesn't mean the government has claim to all economic value. Likewise, none of us could work without eating, but that doesn't mean we owe all our income to farmers.


DARPA is waaay before what's called "seed" level funding in the commercial sector. They are the pre-pre-pre-seed. Taxpayers fund a lot of military procurement for nascent technologies as well.

Again you make a false comparison. Nobody claimed that early stage investors are entitled to "all economic value". The statement stands that costs are socialized. Silicon Valley is greatly subsidized by early stage taxpayer investment.

> Are you really going to make someone else go point out the hundreds of millions in dollars of seed-level funding by Google, by Uber, etc., to demonstrate the obvious point that the DARPA Grand Challenge is terrible evidence for the claim "autonomous vehicle development was funded by taxpayers".

"Autonomous vehicle development was funded by taxpayers" is a factually correct statement. You again fundamentally misunderstand the distinction of when investments are made vs. quantity of investment. Earlier stage investments are riskier; Google et al did not start pouring money in until the technology started showing some promise after many years of taxpayer investment. As is commonly the case with our high tech system.


> They are the pre-pre-pre-seed.

I guess we should credit Andrew Carnegie with pre^12-seed funding since he founded Carnegie-Mellon and a lot of relevant robotics research has taken place there.

Again, I am not discussing all of DARPA's activities, I am talking about the Grand Challenge you brought up, which I continue to maintain is minuscule compared to a million other sources and thus is terrible evidence that "autonomous vehicle development was funded by taxpayers" in any non-trivial sense.

> Nobody claimed that early stage investors are entitled to "all economic value".

You misinterpret my analogy. The point is that your approach, if taken seriously, would mean the government would have a claim on every single bit of economic value in the US, not that it would have full ownership of all of it.

> The statement stands that costs are socialized.

> "Autonomous vehicle development was funded by taxpayers" is a factually correct statement

Ha, yes, and we can also conclude that autonomous vehicle development was funded by Carnegie, Roomba, and my buddy Alex who runs a robotic delivery startup.


Once again you have confused the timing and thus risk of investment with quantity. Furthermore, and trivially, donations by private citizens such as Carnegie are not a socialization of costs.

The simple fact remains that taxpayers have made significant and critical investments in nurturing AV technology, like many other technologies that Silicon Valley has commercialized once they bore fruit. Your fundamental premise that investments that are "minuscule" in size are necessarily minuscule in significance is trivially wrong.


You've repeatedly attributed multiple claims to me I'm not making. I can't tell how much of that is willful, but either way I won't continue the discussion.


>when the government invests in public goods, they don't then get to claim everything that is built on top of the public good.

If they used the right licensing for the information that they created then they might.


Not if they are gasoline cars, particulate pollution alone kills tens of thousands.

An electric rail network on the other hand would have real positive externalities, but isn't as easily privatizable so won't be built.


Even electric cars aren't pollution free, and there are still other negative externalities (like health costs associated with car-centric life styles).


The latest EPA Tier 3 gasoline vehicles have extremely low particulate emissions. If we're talking about building new vehicles anyway then your fatality estimate is way too high.


You have no reliable basis for quantifying the net positive impact. The technology looks promising but at this point we don't even know whether it can be made to work reliably outside of a few limited areas.


Yeah, no.

A good example is pharmaceutical research. The NIH budget is ~$30B, and arguably only a fraction of that is pure drug development.

R&D spending by the top pharma companies (ignoring VC investment in startups) is over $70B.


That cost has long since been dwarfed by private investments. IMHO that was tax payer money well spent.


Self driving cars could potentially save tens of thousands of lives a year not to mention all of the other social benefits and quality of life improvements. No doubt whoever gets it working first will make a ton of money, but society will see a massive benefit as well.


It provides a benefit to the owners of such a company, as the labor costs are reduced. In other respects this is like the same nonsense about how we would live better lives and work less as our jobs were replaced by robots. It turns out that an economic "system" based upon dog-eat-dog principles will not allow such altruistic results to accrue for society. Like someone said above: "socialize the risks, privatize the profits"


I'm not quite sure how you think this will work. Is Google going to commission hitmen to kill 10s of thousands of people each year to make up for the deaths they prevent? Are they going to prevent those who can't drive now from using their cars? Or play irritating sounds in their cars to replace the stress of driving?


That's a strawman argument.

Automation will indeed do some harm to a certain group of people. The question is who should be responsible for bailing those people out (if at all). Is it society (aka, the govt), or the companies reaping the benefits of said automation?


Of course, economically, taxi drivers and others will lose while programmers/shareholders will win. The point is that for the 99% (or whatever) of us that aren't in either category, self driving cars will be a massive improvement to our lives. Especially to the 10s of thousands each year that won't die.


>Self driving cars could potentially save tens

The problem is that we should be 100% sure that there will be fewer deaths before we allow self-driving cars on public roads.

I don't think we have the real numbers: how many km each car drives, and how many times the human had to get involved to prevent a big crash. If there are laws and checks in place so that the actual numbers are reported, then I want to see them; I read all the self-driving topics here and have never seen such laws.

Until we have the real numbers, the claim that self-driving cars will save more lives is just a hope for a faraway future.


"Could potentially save tens of thousands of lives a year" is not the same as "will save tens of thousands of lives a year right now". The fact that a technology has the potential for huge benefits does not give it a free pass to kill people while it's being developed.


Based on what? Where does your conjecture hold evidence that “society will see a massive benefit”?


I presume saving tens of thousands of lives is a benefit to society?


It's a bit of wishful thinking now, isn't it?

We don't really know if there will be any safety benefits at all, and while I believe we might get to that point, it's a bit farther away than some tech-giants would like us to believe.

If you listen closely to who say what, you'll notice that a lot of entrepreneurs claim that we will have full AI within a couple of years, while a lot of boring engineers huff and puff and are generally pessimistic.

I'm sure there are great savings involved in self-driving transport - especially long-haul - so it will happen sooner rather than later, but this will initially mean more deaths, not fewer.


I'm not sure how wishful thinking it is. AFAIK no self driving car has been at fault in an accident. They have always been caused by a human driver and, at worst, the self driving car failed to avoid the collision.

For better or worse, I don't think a self driving car will be commercially successful until it shows it is at least as safe as a human driver. More likely it will have to demonstrate that it is significantly safer. It might be the case that whichever company first launches doesn't correctly gauge the safety of their car, but I definitely think the goal is to be safer from day one.


> AFAIK no self driving car has been at fault in an accident.

Nobody is saying the car itself was at fault. People are saying (justifiably) that the people who put the car out on public roads with faulty engineering are at fault.

> I don't think a self driving car will be commercially successful until it shows it is at least as safe as a human driver. More likely it will have to demonstrate that it is significantly safer.

I agree. And the incident under discussion illustrates that the technology has not yet reached that point.

Furthermore, "commercially successful" is not the first objective that needs to be met. The first objective is "safe enough to be allowed on public roads". The incident under discussion illustrates that the technology has not yet reached that point either.


My old comment from https://news.ycombinator.com/item?id=15076613

So much talk about LIDAR and other sensors. Why does nobody talk about the obvious idea of a Road Object Message Bus? ROMB is a protocol where each road object (a traffic light, a sign, a car, a bicycle, etc.) transmits info about itself. A car could broadcast its direction vector, its intention to turn, and any non-ROMB moving object it sees. A traffic light could broadcast its current state and when it is going to change. That information would greatly enhance overall safety, especially in rain and snow conditions, when even LIDAR fails.

Self-driving is so important (just after eliminating combustion engines) that we could upgrade existing cars with cheap ROMB boxes. A vehicle GPS tracking system costs about $30. A ROMB box would cost about $60. Let's say that from 2027 all cars have to have a ROMB box to enter a downtown ...

Let's say your car's ROMB receives info about the white truck, while your car's cameras and vision recognition systems see just a cloud and no truck within 100 m.

ROMB's purpose is not to replace cameras or LIDARs, but to extend the gathered info.
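To make it concrete, here's a minimal sketch of what a single ROMB broadcast might look like (purely illustrative field names; this isn't any existing V2X standard):

  import json, time

  def romb_message(object_id, kind, position, heading_deg, speed_mps,
                   intent=None, observed_objects=()):
      # One broadcast frame from a ROMB-equipped road object.
      return json.dumps({
          "id": object_id,             # unique identifier of the broadcasting object
          "kind": kind,                # "car", "bicycle", "traffic_light", "sign", ...
          "ts": time.time(),           # time of broadcast
          "pos": position,             # (lat, lon) or local map coordinates
          "heading_deg": heading_deg,  # direction of travel
          "speed_mps": speed_mps,
          "intent": intent,            # e.g. "turn_left", or light state + seconds to change
          "observed": list(observed_objects),  # non-ROMB movers this object can see
      })

  # A car announcing a left turn and a pedestrian its cameras can see:
  print(romb_message("car-17", "car", (33.43, -111.94), 270.0, 12.5,
                     intent="turn_left",
                     observed_objects=[{"kind": "pedestrian", "pos": [33.4301, -111.9405]}]))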


Please don't paste old comments into new conversations. Especially not twice in the same thread!


I deleted the old comment because I thought it was connected to another deleted comment. You can check your web server logs and see that I deleted the old one at the same time I posted the new one.

And I think that the comment is very relevant to the current discussion. It talks about how to increase the safety of self-driving cars.

Please explain why it is not relevant in your opinion. Because from my point of view it looks like you didn't read it before downvoting.

BTW, HN is not compliant with GDPR. After some time I cannot delete my comments.


The counterargument you propose is invalid because it would be DECADES until >90% of the entire current automotive market is converted to fully autonomous, MEANING cars currently on the road would either be retrofitted or completely removed from the road and replaced by an AV.

So we are all in a lose-lose. I'm anticipating severe backlash from Congress this week, if not today.


Because it does not require handing over control to the car to get a good test. The cars can simply record their decision-making process and immediately flag all exceptions, which includes hitting anything. It would be very interesting to see all exceptions, because it is to be expected that all these systems must fail.

We don't need active testing on our streets. Automakers would never have dared to test ABS or "safety" systems in such a manner, for the simple reason of liability.


Could you please not use allcaps for emphasis in HN comments? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.



