Of course, the US department of transportation should have set up proper certification for all of this. They could have easily done so because they can arbitrarily choose the certification costs.
I would be very interested in a 3rd party (government or private) creating a rigorous test (obstacles, weather conditions, etc.) for self-driving vehicles. Becoming "XXX Safety Certified with a Y Score" for all major updates to the AI could help restore confidence in the system and eliminate bad actors.
Identify the objects in pictures.
We take our biological vision systems for granted, but it seems one autopilot system couldn't identify a semi crossing in front of the vehicle...
The driving schools also have theoretical tests where one has to identify objects in a picture, interpret the situation, and propose the correct action. Of course, these tests are on a higher level: "this is a car, this is a pedestrian" vs. "you're approaching an intersection, that car is turning left, a pedestrian is about to cross the street, another car is coming from there etc."
Not to mention the road and track tests a driver has to pass which include practicing controlling the car in difficult conditions: driving in the dark, evasive actions on slippery surfaces and so on.
Edit: In my opinion it's insane to allow autonomous vehicles on the roads without proper testing by a neutral third party.
I think you're severely underestimating the path that something like this would have to take. The certification itself would be under so much scrutiny and oversight that it would take years for that to get done. Unfortunately, the technology is far more readily available and easy to get working than the political capital required to create a certification for this.
Not that 37,000+ is a great number, but I don't think many of the detractors here are arguing that Uber et al. have a perfect record. Just that it's possible that progress is being made in a more reckless way than necessary. Just because space flight is inherently difficult and risky and ambitious doesn't mean we don't investigate the possibly preventable factors behind the Challenger disaster.
edit: You seem to be referencing the worldwide estimate. Fair, but we're not even close to having self-driven cars in the most afflicted countries. Nevermind AI, we're not even close to having clean potable water worldwide, and diarrhea-related deaths outnumber road accident deaths according to WHO: http://www.who.int/mediacentre/factsheets/fs310/en/
That's not a particularly convincing argument, given that (so far), Uber's self-driving cars have a fatality rate of 50 times the baseline, per mile driven.
Having to wait an extra ten years to make sure that everything is done properly doesn't sound like the worst price to pay.
 Nationwide, we have 1.25 deaths per 100 million miles driven. Uber's only driven about 2 million miles so far: https://www.forbes.com/sites/bizcarson/2017/12/22/ubers-self...
Let's say that halving the death rate is what we can reasonably expect from the first generation of self-driving cars. Every year we delay that is 15,000 people dead. This woman dying is a personal tragedy for her and those that knew her. However, as a society we should be willing to accept thousands of deaths like hers if it gets us closer to safer self-driving cars.
What's your evidence for why this is a reasonable expectation? The fatalities compared to the amount of miles driven by autonomous vehicles so far shows that this is not possible at the moment. What evidence is there that this will radically improve soon?
No, you couldn't have characterized Uber as having an "infinitely better" fatality rate than the baseline, because that would have resulted in a division-by-zero to calculate the standard error. Assuming a frequentist interpretation of probability, of course; the Bayesian form is more complicated but arrives at the same end result.
It's true that the variance is higher when the sample size is lower, but that doesn't change the underlying fact that Uber's fatality rate per mile driven is empirically staggeringly higher than the status quo. Assigning zero weight to our priors, that's the story the data tells.
Presumably a bit more since December.
Fluctuates just over 1 per 100 million.
(1 death / 2+ million miles) / (1+ deaths / 100 million miles) ≈ 50×
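Spelled out, using the rough figures from this thread (one fatality in roughly 2 million autonomous miles for Uber, versus a bit over 1 fatality per 100 million miles nationwide; back-of-the-envelope inputs, not authoritative statistics):

```python
# Rough figures from the thread above, not authoritative statistics.
uber_deaths = 1
uber_miles = 2_000_000            # Uber's reported autonomous miles so far
us_rate = 1.0 / 100_000_000       # a bit over 1 death per 100M miles nationwide

uber_rate = uber_deaths / uber_miles
print(round(uber_rate / us_rate))  # → 50 (about 40x if you use 1.25 instead)
```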
Then, you can start to identify situations where the driver's actions were outside of a predicted acceptable range, and investigate what happened.
Additionally, if you have a large pool of equipped vehicles you can identify every crash (or even more minor events, like hitting potholes or road debris) and see what the self-driving system would have done.
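A minimal sketch of that idea, sometimes called "shadow mode": log what the human driver did alongside what the autonomous stack would have done, and flag large divergences for investigation. All field names and thresholds here are invented for illustration, not any vendor's actual telemetry format.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    human_steering: float   # degrees
    model_steering: float   # degrees
    human_brake: float      # 0..1
    model_brake: float      # 0..1

def flag_divergence(frame: Frame, steer_tol=5.0, brake_tol=0.3) -> bool:
    """True if the human's action fell outside the model's predicted
    acceptable range and the event should be investigated."""
    return (abs(frame.human_steering - frame.model_steering) > steer_tol
            or abs(frame.human_brake - frame.model_brake) > brake_tol)

log = [Frame(0.5, 0.7, 0.0, 0.0),    # normal driving: no flag
       Frame(20.0, 1.0, 0.9, 0.1)]   # sudden swerve plus hard braking: flag
incidents = [f for f in log if flag_divergence(f)]
```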
The realistic problem is that Uber doesn't give a shit. As such, deployment will never be optimized for public safety. It will be optimized for Uber's speed to market.
Using https://github.com/commaai/openpilot , which is cool but not on public roads.
Given that all the main car companies are keeping their technology private I don't see how open-source systems are supposed to keep up without people doing this.
I think you can also contrast this to the many more people who decide to drink alcohol and then drive.
Sure, but drinking (more than a little) and driving is already illegal, so they're not exactly comparable.
I mean, this is the way cars have always operated. Is it somehow any different to let them "test" their brake systems, their acceleration systems? Car companies have always been able to do whatever they want, who cares about the deaths.
For some reason, when it comes to roads, we all just accept a certain amount of deaths each year. Over a million people are killed every year around the world, up to 40,000 in the US alone. That is acceptable according to every car driver.
Self-driving cars need this all the more, since most of the hard and scary stuff is the integration of many systems and how they deal with unexpected scenarios. And worse still -- in a traditional car, you had an expert test-driver like my friend who could, say, take evasive action if the brakes didn't work.
But in these cars, the driver is the system under test.
Yes, and so do these guys. They might be somewhat risk averse, but let's not pretend they haven't in the past made tradeoffs that cost lives (Pinto, etc.), and there are constant recalls going on.
All of them? I mean, you took a poll? That must have taken a while.
My understanding (as a non-US resident) is that parts of the US are built around the assumption you drive.
Yes, this is 100% true. Arizona included.
It's difficult, usually impossible, to opt out of driving (or being driven) in most places in the USA. Ask DART why rapid transit ridership in the Dallas area is falling despite a growing population and a billion-dollar annual budget.
If someone wants to pretend it is not acceptable while still driving, that is just silly.
The largest real difference seems to be the speed of communication nowadays, the rate at which public opinion can be manufactured and globally spread.
In this case we have a machine replacing a human, using non-deterministic algorithms to do so.
Automated driving ONLY makes sense on specific, well mapped, sensor enabled, pedestrian free roads. Let’s focus on that use case before we let these things take over our towns and cities.
Implement a way for the highway operator to affect the cars on it: make way for emergency vehicles, adapt to information the operator can aggregate and transmit, change the speed of cars to match the slowest one around, etc.
Once the majority of people have self-driving cars and those have been proven in this controlled environment, you can think about allowing them on open roads. Or not. Or adapt those open roads to what self-driving cars need.
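As a toy illustration of what such operator-to-vehicle messages could look like (the schema, segment IDs, and action names are all hypothetical; no such standard is implied):

```python
import json

def make_command(segment_id, action, value=None):
    """Build a hypothetical operator-to-vehicle message for a managed
    road segment. The schema is invented for illustration."""
    return json.dumps({"segment": segment_id, "action": action, "value": value})

# Harmonize traffic speed on a segment, and clear a corridor for an
# emergency vehicle (segment ID and actions are made up):
msgs = [
    make_command("I10-W-042", "set_speed_kmh", 70),
    make_command("I10-W-042", "clear_lane_for_emergency", "left"),
]
```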
sure, makes sense to me.
But God, that would cost a fortune.
It would. Like an infrastructural job. I mean, it would be like railways for individual trains.
But that's kinda-sorta the job expected from government and what taxes are for: paying for things the individual would have problems doing.
Because at some point, it's literally not possible to test something that's ultimately intended for use on public roads anywhere but public roads. Or, to put it in terms that would be more familiar to developers, "you can test all you want in QA, but prod is the ultimate test of whether your code really works".
As mentioned downthread, the problem here is that Arizona basically gave companies full leeway, without having to demonstrate that they had done sufficient tests internally before putting the cars on public roads. Apparently they're not even required to report data on accidents to the state, which is absurd.
I wouldn't be surprised if Uber were ultimately found to be cutting corners, but Arizona is also responsible for rushing to allow self-driving cars on their roads without doing their own due diligence first.
I'm stunned that when I drive in Texas, there don't seem to be regulations over which vehicles can have flashing lights and with which colors. Yet in these other cities you have experimental robots that are designed to blend in with other cars as much as possible. NO! Make them stick out.
Because the for profit company has a say in government and lawmaking, the person getting run over doesn't.
Some phase of testing will necessarily be of public roads. The fact that they started that phase prematurely doesn't somehow mean that "private for profit companies" should never test on public roads.
Boeing tests new airplanes in the public airspace too, over public areas.
Uber abides by no such testing standardization.
My thoughts exactly. It is wrong. We have a chance to do this properly, openly and together, but instead our governments are letting private companies put our lives in danger.
Interesting way of putting it. Kinda echos the "privatize the profits, socialize the losses" sentiment from the financial crisis.
There are of course situations where externalities fail to be privatized (e.g., traffic congestion), but I don't think auto accidents are one of them.
Is every programmer in the world partly responsible for every death caused by every programming bug ever? Should we blame all civic engineers every time a bridge collapses?
I am an individual, not "part of a system". Simply being a driver doesn't make me a reckless or dangerous one. For example, 10K of those deaths involve DUI, and I am certainly not part of that system. I'm sure 95% of drivers never kill anyone.
Massive misunderstanding. As a driver your actions form a part of the traffic around you; even responsible drivers are "attached" to the system and put pressure on it. Maybe one day somebody rear-ends you and you are knocked forward into a crosswalk, or you brake suddenly to avoid hitting a pedestrian and cause an accident behind you. Or maybe your very safe driving leaves enough room for somebody to take a foolish risk cutting in front of you and lose control of their car. Collectively, society accepts that getting places quickly is worth the cost that we pay in human lives. And each road user agrees to the small risk of death/injury beyond their control in order to get somewhere.
(back of the envelope, assuming 300,000,000 drivers driving for 60 years of their lives at a 40,000 per year death rate)
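Working that envelope out (the inputs are the parent comment's rough assumptions, not measured data):

```python
# Parent comment's rough assumptions, not measured data.
drivers = 300_000_000
deaths_per_year = 40_000
driving_years = 60

lifetime_risk = deaths_per_year * driving_years / drivers
print(f"{lifetime_risk:.1%}")  # → 0.8% rough lifetime chance of dying on the road
```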
Moreover, it is illegal to drive a car safely. If you try it, you will be pulled over. You may be fined or lose your licence. Try it and see. So yes, being a car driver does make you a dangerous one.
I wonder how the issue developed when the automobile was first introduced. I remember something about needing to have a guy walking in front of the vehicle waving a warning flag, but apparently that didn't last long.
Found it - https://en.wikipedia.org/wiki/Red_flag_traffic_laws
Or, alternatively, we do indeed have a cap to the amount of resources we are willing to expend to save a life - lives are not of infinite value.
That's how it is profoundly different.
Every regulatory system in the world has to consider these things, explicitly or (more commonly) implicitly. See e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1633324/ for the application of the idea to healthcare, or http://www.nytimes.com/2011/02/17/business/economy/17regulat... for the public policy implications for environmental regulation.
Or more relevant yet, this "guidance on valuing reduction of fatalities and injuries by regulations or investments" by the Department of Transportation: https://www.transportation.gov/sites/dot.gov/files/docs/VSL_...
To state the obvious, I am not a lawyer.
The theoretical difference with criminal charges is that criminal charges deal with harms to the sovereign (i.e. the state and the public) that need to be punished regardless of the wishes of the immediate victim, e.g. you're not allowed to settle a murder charge out of court.
Criminal law applies to criminal acts. It's possible to do something wrong that "injures" someone (physically or in some other way) without committing a crime.
Criminality generally requires wrongful intent. If you simply screwed up while otherwise obeying the law then you haven't committed a crime (negligence can be a crime, e.g. if you are more careless than a "reasonable man" would be this is in itself a form of bad intent -- so swinging swords around in public places while blind-folded isn't OK).
Also, tort law has a different standard of proof -- it is resolved based on preponderance of evidence rather than proof beyond reasonable doubt, so there's a lower bar than for throwing someone in jail.
Oh, and it's harder to prove criminal charges against nebulous entities. (Who's the criminal in this case? Uber's CEO? The head of the software team? The person who gave uber a permit to test their crap on public streets?)
Uber's overall conduct might approach the point of "criminal enterprise" at which point RICO statutes might be invoked. Not likely though.
For instance, there are accidents that cause deaths all the time; but they would only be prosecuted as manslaughter if someone were doing something considered inherently unsafe and unlawful.
If you run a red light, and kill someone in the process, it's possible you could be prosecuted for manslaughter, though even then you might not be if there could have been extenuating circumstances (sun in your eyes, etc).
In this case, there would only be a chance of a manslaughter charge if Uber were somehow being particularly negligent. In this case, it sounds like there was a human operator in the car, though the car was in autonomous mode. It's possible that the operator wasn't paying the attention that he or she should have been, or it's possible that even the human operator didn't see the pedestrian in time. It mentions that the pedestrian was crossing outside of any crosswalk, so it's possible that this was a tragic accident of them trying to cross a street with traffic without having been sufficiently careful.
On the other hand, it mentions a bicycle, but also says the victim was walking, which I find odd:
"Elaine Herzberg, 49, was walking outside the crosswalk on a four-lane road in the Phoenix suburb of Tempe about 10 p.m. MST Sunday (0400 GMT Monday) when she was struck by the Uber vehicle, police said."

"Local television footage of the scene showed a crumpled bike and a Volvo XC90 SUV with a smashed-in front. It was unknown whether Herzberg was on foot or on a bike."
Unless the operator was being particularly negligent, or there was some serious and known problem with the self driving car but it was put on the road anyhow, I doubt any manslaughter charges will be filed.
Remember, Arizona has explicitly been encouraging the testing of self-driving cars, so I expect that testing a self driving car, with an operator to take over in cases which it can't handle, would not be considered unlawful or extremely negligent. Maybe what the operator was doing could be, but we'd need more information before it would be possible to tell.
The police clarified that the victim was walking her bike across the street: https://twitter.com/AngieKoehle/status/975824484409077760
While the woman's death is tragic, I hope this doesn't set us back on whatever progress made in autonomous driving.
EDIT: removed "While this a serious f*up by Uber and"
How do you know that? From my initial reading of the police report, the woman was jaywalking and ran out into the middle of the street. I'm not even sure a human or autonomous driver would have been able to stop in time. The only tech that might have been able to is Waymo's since, based on the video they released, they have a radar lock on every pedestrian within a mile of the vehicle and they do predictive tracking to determine their position. Even then, it might have still not stopped in time.
Unless it is a pure capitalist move.
Did Uber have authorization to operate autonomously in AZ?
2) Testing or operation of self-driving vehicles equipped with an automated driving system on public roads with, or without, a person present in the vehicle are required to follow all federal laws, Arizona State Statutes, Title 28 of the Arizona Revised Statutes, all regulations and policies set forth by the Arizona Department of Transportation, and this Order.
3) Testing or operation of vehicles on public roads that do not have a person present in the vehicle shall be allowed only if such vehicles are fully autonomous, provided that a person prior to commencing testing or operation of fully autonomous vehicles, has submitted a written statement to the Arizona Department of Transportation, or if already begun, has submitted a statement to the Arizona Department of Transportation within 60 days of the issuance of this Order acknowledging that:
a. Unless an exemption or waiver has been granted by the National Highway Traffic Safety Administration, the fully autonomous vehicle is equipped with an automated driving system that is in compliance with all applicable federal law and federal motor vehicle safety standards and bears the required certification label(s) including reference to any exemption granted under applicable federal law;
b. If a failure of the automated driving system occurs that renders that system unable to perform the entire dynamic driving task relevant to its intended operational design domain, the fully autonomous vehicle will achieve a minimal risk condition;
c. The fully autonomous vehicle is capable of complying with all applicable traffic and motor vehicle safety laws and regulations of the State of Arizona, and the person testing or operating the fully autonomous vehicle may be issued a traffic citation or other applicable penalty in the event the vehicle fails to comply with traffic and/or motor vehicle laws; and
d. The fully autonomous vehicle meets all applicable certificate, title registration, licensing and insurance requirements.
 "Executive Order: 2018-04 - Advancing Autonomous Vehicle Testing And Operating; Prioritizing Public Safety" - https://azgovernor.gov/file/12514/download
The ironic part here is that Uber already has a strong reputation for skirting the law and not caring. Their response and the following audits will be worth following.
This assumes that the reputation costs are comparable in weight to the acts that they commit. Based on the past year and what I hear about Uber's growth, I'd argue that their reputation and their valuation are quite uncorrelated, but it's hard to know the truth.
Rather, this assumes that reputation costs, multiplied by the risk of incurring those costs while undertaking some action, are comparable to the gain from that action.
And theoretically the only people who can regulate banks are the banks themselves.
Funny how you hear that argument every time a horrible industry is trying to keep profits high by externalizing costs: meat packers in 1900, tobacco in 1950, oil and gas to this day.
The only possible costs would be if they're made to pay a price in a court of law, which seems unlikely since the woman who was killed had a bike with her, and it's unknown if she was on the bike at the time, but she's been described as a pedestrian outside of a crosswalk. Clearly Arizona authorities just don't care about safety.
I wonder whether food companies (like the meat packing industry in Chicago) said the same thing before the FDA was created in 1906; i.e. that reputation alone would regulate the industry not to sell mislabeled products, or spoiled or adulterated food.
Self-driving car companies will have massive net positive externalities on the public, to the tune of hundreds of billions of dollars per year when fully deployed.
False comparison. First of all DARPA has invested a lot more than that in all sorts of related AV technologies (e.g. LIDAR, AI, robotics). But the real question is when were those "few million dollars" invested. Seed investments generally are much smaller than what companies end up being worth.
In our high tech system, taxpayers generally take on the riskiest stage of very early development, where it takes billions of dollars in bets spread over a very large area over 10+ years. And you're right, there are some socialized benefits, but the statement stands that the costs are socialized as well.
Imagine if you told Google's earliest investor that their investment was a tiny percentage of what it's worth today, and they enjoy the benefit of using Google now, so it's fair that Google's later investors kept all the equity.
I was responding to your link about the Grand Challenge.
> Seed investments generally are much smaller than what companies end up being worth.
Of course. Are you really going to make someone else go point out the hundreds of millions in dollars of seed-level funding by Google, by Uber, etc., to demonstrate the obvious point that the DARPA Grand Challenge is terrible evidence for the claim "autonomous vehicle development was funded by taxpayers".
> so it's fair that Google's later investors kept all the equity.
People who invest in Google get to keep the equity they purchase, but they don't get a claim to the equity of companies that exist because of Google's search product.
Likewise, when the government invests in public goods, they don't then get to claim everything that is built on top of the public good.
Indeed, the government provides the rule of law, without which almost no modern economic development could take place. But that obviously doesn't mean the government has claim to all economic value. Likewise, none of us could work without eating, but that doesn't mean we owe all our income to farmers.
Again you make a false comparison. Nobody claimed that early stage investors are entitled to "all economic value". The statement stands that costs are socialized. Silicon Valley is greatly subsidized by early stage taxpayer investment.
> Are you really going to make someone else go point out the hundreds of millions in dollars of seed-level funding by Google, by Uber, etc., to demonstrate the obvious point that the DARPA Grand Challenge is terrible evidence for the claim "autonomous vehicle development was funded by taxpayers".
"Autonomous vehicle development was funded by taxpayers" is a factually correct statement. You again fundamentally misunderstand the distinction of when investments are made vs. quantity of investment. Earlier stage investments are riskier; Google et al did not start pouring money in until the technology started showing some promise after many years of taxpayer investment. As is commonly the case with our high tech system.
I guess we should credit Andrew Carnegie with pre^12-seed funding since he founded Carnegie-Mellon and a lot of relevant robotics research has taken place there.
Again, I am not discussing all of DARPA's activities, I am talking about the Grand Challenge you brought up, which I continue to maintain is minuscule compared to a million other sources and thus is terrible evidence that "autonomous vehicle development was funded by taxpayers" in any non-trivial sense.
> Nobody claimed that early stage investors are entitled to "all economic value".
You misinterpret my analogy. The point is that your approach, if taken seriously, would mean the government would have a claim on every single bit of economic value in the US, not that it would have full ownership of all of it.
> The statement stands that costs are socialized.
> "Autonomous vehicle development was funded by taxpayers" is a factually correct statement
Ha, yes, and we can also conclude that autonomous vehicle development was funded by Carnegie, Roomba, and my buddy Alex who runs a robotic delivery startup.
The simple fact remains that taxpayers have made significant and critical investments in nurturing AV technology, like many other technologies that Silicon Valley has commercialized once they bore fruit. Your fundamental premise that investments that are "minuscule" in size are necessarily minuscule in significance is trivially wrong.
If they used the right licensing for the information that they created, then they might.
An electric rail network, on the other hand, would have real positive externalities, but isn't as easily privatizable, so it won't be built.
A good example is pharmaceutical research. NIH budget is ~$30B and arguably a fraction of that is pure drug development.
The top pharma companies' combined R&D spending (ignoring VC investment in startups) is over $70B.
Automation will indeed do some harm to a certain group of people. The question is who should be responsible for bailing those people out (if at all). Is it society (aka, the govt), or the companies reaping the benefits of said automation?
The problem is that we should be 100% sure that there will be fewer deaths before we allow self-driving cars on public roads, not after.
I don't think we have the real numbers: how many km each car drives, and how many times the human had to intervene to prevent a big crash. If there are laws and checks in place so that the actual numbers are reported, then I want to see them; I read all the self-driving topics here and have never seen such laws.
Until we have the real numbers, the claim that self-driving cars will save more lives is just a hope for a far-away future.
We don't really know if there will be any safety benefits at all, and while I believe we might get to that point, it's a bit farther away than some tech-giants would like us to believe.
If you listen closely to who says what, you'll notice that a lot of entrepreneurs claim that we will have full AI within a couple of years, while a lot of boring engineers huff and puff and are generally pessimistic.
I'm sure there are great savings involved in self-driving transport, especially long-haul, so it will happen sooner rather than later, but this will initially mean more deaths, not fewer.
For better or worse, I don't think a self driving car will be commercially successful until it shows it is at least as safe as a human driver. More likely it will have to demonstrate that it is significantly safer. It might be the case that whichever company first launches doesn't correctly gauge the safety of their car, but I definitely think the goal is to be safer from day one.
Nobody is saying the car itself was at fault. People are saying (justifiably) that the people who put the car out on public roads with faulty engineering are at fault.
> I don't think a self driving car will be commercially successful until it shows it is at least as safe as a human driver. More likely it will have to demonstrate that it is significantly safer.
I agree. And the incident under discussion illustrates that the technology has not yet reached that point.
Furthermore, "commercially successful" is not the first objective that needs to be met. The first objective is "safe enough to be allowed on public roads". The incident under discussion illustrates that the technology has not yet reached that point either.
So much talk about LIDAR and other sensors. Why does nobody talk about the obvious idea of a Road Object Message Bus? ROMB is a protocol where each road object (a traffic light, a sign, a car, a bicycle, etc.) transmits info about itself. A car could broadcast its direction vector, its intention to turn, and any non-ROMB moving object it sees. A traffic light could broadcast its current state and when it is going to change. That information would greatly enhance overall safety, especially in rain and snow conditions, when even LIDAR fails.
Self-driving is so important (just after eliminating combustion engines) that we could upgrade existing cars with cheap ROMB boxes. A vehicle GPS tracking system costs about $30; a ROMB box would cost about $60. Let's say that from 2027 all cars have to have a ROMB box to enter a downtown ...
Let's say your car's ROMB received info about the white truck, while your car's cameras and vision recognition systems see just a cloud and no truck within 100 m.
ROMB purpose is not to replace cameras or LIDARs, but to extend gathered info.
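A minimal sketch of what a ROMB broadcast might look like. ROMB is the parent comment's proposal, not an existing protocol, so every field and message shape here is an assumption for illustration only.

```python
import json, time

def romb_message(obj_type, position, heading, extra=None):
    """Each road object periodically broadcasts its state.
    All fields are hypothetical; ROMB is a proposal, not a standard."""
    msg = {"type": obj_type, "pos": position, "heading": heading,
           "ts": time.time()}
    if extra:
        msg.update(extra)
    return json.dumps(msg)

# A car announcing an intended left turn, and a traffic light announcing
# its state and when it will change:
car = romb_message("car", (33.43, -111.94), 270.0, {"intent": "turn_left"})
light = romb_message("traffic_light", (33.43, -111.94), None,
                     {"state": "green", "changes_in_s": 12})
```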
And I think that the comment is very relevant to the current discussion. It talks about how to increase the safety of self-driving cars.
Please explain why it is not relevant in your opinion, because from my point of view it looks like you didn't read it before downvoting.
BTW HN is not compliant with GDPR. After some time I cannot delete my comments.
So we are all in a lose-lose. I'm anticipating severe backlash from Congress this week, if not today.
We don't need active testing on our streets. Automakers would never have dared to test ABS or other "safety" systems in such a manner, for the simple reason of liability.