Self-driving Uber car kills Arizona woman crossing street (reuters.com)
2361 points by kgwgk 5 months ago | 1766 comments



One aspect that comes from this is that now car crashes can be treated more like aircraft crashes. Each self-driving car now has a black box in it with a ton of telemetry.

So it's not just a matter of telling people "don't drink and drive" while knowing they'll probably reoffend soon anyway. Every crash, and especially every fatality, can be thoroughly investigated and should be prevented from ever happening again.

Hopefully there's enough data in the investigation so that Tesla / Waymo and all other car companies can include the circumstances of the failure in their tests.


Every lesson from aviation is earned in blood. This death wasn't necessary, though. The Otto/Uber guys have been informed about their cars' difficulty sensing and stopping for pedestrians. I know this because I informed them myself when one almost ran me down in a crosswalk in SF. You can't learn anything from your lessons unless you listen. Maybe they can pause and figure out how to actually listen to reports of unsafe vehicle behavior.


Police are reporting that it never slowed down, hitting the pedestrian at 40 mph.


It's interesting that the car was exceeding the speed limit of 35 mph. I would assume the car would stay at or below the speed limit. Who gets the speeding ticket in this case? Does 5 mph affect the reaction time such that it could have noticed and taken evasive action?


>>Who gets the speeding ticket in this case?

Whoever owns the algorithm. Or at least whoever the license/permission was issued to. If it's an organization, the top management that signed off on this has to take the blame.


Legally, the person behind the wheel was still the driver. They are responsible for both the speeding and for killing a pedestrian. At this stage it's no different than using cruise control - you are still responsible for what happens.


I really hope you're wrong. If the legal system doesn't distinguish between cruise control and SAE level 3 autonomy, the legal system needs to get its shit together.


IMO as long as there is a human in the driver’s seat who is expected to intervene, they should bear the consequences of failing to do so.


No, that's bullshit. It's physically impossible for a human to intervene on the timescales involved in motor accidents. Autonomy that requires an ever-vigilant driver to be ready to intervene at any second is literally worse than no autonomy at all; because if the driver isn't actively driving most of the time, their attention is guaranteed to stray.


I agree with you - but that's literally the stage we're at. What we have right now is like "advanced" cruise control - the person behind the wheel is still legally defined as the driver and bears responsibility for what happens. The law "allows" these systems on the road, but there is no framework out there which would shift the responsibility to anyone else but the person behind the wheel.

>> It's physically impossible for a human to intervene on the timescales involved in motor accidents.

That remains true even without any automatic driving tech - you are responsible even for accidents which happen too quickly for anyone to intervene. Obviously if you have some evidence(dashcam) showing that you couldn't avoid the accident you should be found not guilty, but the person going to court will be you - not the maker of your car's cruise control/radar system/whatever.


Currently I have two cars: a '14 Mazda3 with AEB, lane departure alert, radar cruise, BLIS, and rear cross-traffic alert - and an '11 Outback with none of that (but DSC and ABS, as well as AWD).

The assists are certainly helping more than anything, so I feel that the Mazda is much safer to drive in heavy traffic than the older Outback.

The cruise has autonomy over controlling the speed only, and applying brakes, but it is still autonomy. Of course since my hands never leave the wheel it may not fit with what you have in mind.

Having said that, Mazda (or Bosch?) really nailed their radar, having never failed to pick up motorbike riders even though the manual warns us to not expect it to work.

I feel more confident in a system where the ambition is smaller, yet execution more solid.

FWIW I also tested the AEB against cardboard boxes, driving into them at 30 km/h without easing off the accelerator at all, and came away very impressed by the system. It intervened so late that I felt sure it wasn't going to work, but it did - the first time was a very slight impact, the next two were complete stops with small margins.

This stuff is guaranteed to save lives and prevent costly crashes (I generally refuse to use the word "accident") on a grander scale.


The latest top-end Toyota RAV4s have that too. It's quite amazing how well they are able to hold cruise speed and maintain distance behind a car.

I do love that even though they have a ton of driver alerting features, hands have to be on the wheel at all times.

Either you have full autonomy without hands on the wheel or you don't. There is no middle ground; anything in between is a recipe for disaster.


Bullshit?? It may be autonomous, but these cars are still far from driverless. YOU get in the car, you know the limitations, you just said you consider yourself physically incapable of responding in time to motor accidents, and that the safety will be worse than in a non-autonomous car. Sounds to me like what's bullshit is your entitlement to step into an autonomous vehicle when you know it diminishes road safety. Autonomous vehicles can in theory become safer than human drivers; what is bullshit is that you want to drive them now, when they are strictly not yet safer than a human, but do so without consequences.


I attended an Intelligent Transport Systems (ITS) summit last year in Australia. The theme very much centred around Autonomous Cars and the legality, insurance/liabilities and enhancements.

There are several states in the USA that are more progressive than others (namely CA). But with many working groups in and around the legal side, it hopefully will be a thing of the past.

In Australia, they are mandating that by some year soon (don't have it on hand), to achieve a 5-star safety rating, some level of automation needs to exist - features such as lane departure warning or ABS will become as standard as aircon.


Assuming ABS means "Anti-Lock Braking System" in this context, isn't that already standard? I can't think of a (recent) car with an ANCAP rating of 5 that doesn't have ABS. I'm not sure I would even classify ABS as automation in the same way that something like lane departure is automation. ABS has been around (in some form) since the 1950s, and works by just adjusting braking based on the relative turning rates of each wheel. Compared to lane departure, ABS is more like a tire pressure sensor.
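For anyone curious what "adjusting braking based on the relative turning rates of each wheel" might look like, here is a deliberately naive sketch (Python; the slip threshold and pressure step are invented, not any manufacturer's control loop):

    def abs_step(wheel_speeds, brake_pressures, slip_threshold=0.2):
        """One control tick of a toy four-channel ABS."""
        vehicle_speed = max(wheel_speeds)  # crude reference: the fastest wheel
        new_pressures = []
        for speed, pressure in zip(wheel_speeds, brake_pressures):
            slip = (1.0 - speed / vehicle_speed) if vehicle_speed > 0 else 0.0
            if slip > slip_threshold:                 # this wheel is locking up
                new_pressures.append(pressure * 0.5)  # momentarily release pressure
            else:
                new_pressures.append(pressure)        # keep applying the brake
        return new_pressures

    # The nearly-locked wheel (5 m/s vs ~20 m/s for the others) gets released:
    print(abs_step([5.0, 20.0, 19.5, 20.0], [100, 100, 100, 100]))
    # -> [50.0, 100, 100, 100]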


Generally ABS does mean anti-lock braking system, but my guess is that they meant "Automatic Braking System"?


It creates an incentive to buy autonomous cars that are well programmed.


Does this responsibility stay with the driver, despite this clearly being an Uber operation? Aside from the victim, did self-driving tech just get its first, uhm, "martyr"?


By law (and please correct me if I'm wrong), the driver of the vehicle is responsible for everything that happens with the vehicle. Why would it matter if the vehicle is owned by UPS, FedEx, Pizza Hut or Uber? Is a truck driver not responsible for an accident just because they drive for a larger corporation?

Let me put it this way - my Mercedes has an emergency stop feature when it detects pedestrians in front of the car. If I'm on cruise control and the car hits someone, could I possibly blame it on Mercedes? Of course not. I'm still the driver behind the wheel and those systems are meant to help - not replace my attention.

What we have now in these semi-autonomous vehicles is nothing more than a glorified cruise control - and I don't think the law treats it any differently (at least not yet).

Now, if Uber(or anyone else) builds cars with no driver at all - sure, we can start talking about shifting the responsibility to the corporation. But for now, the driver is behind the wheel for a reason.


From the article:

The San Francisco Chronicle late Monday reported that Tempe Police Chief Sylvia Moir said that from viewing videos taken from the vehicle “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway." (bit.ly/2IADRUF)

Moir told the Chronicle, “I suspect preliminarily it appears that the Uber would likely not be at fault in this accident,” but she did not rule out that charges could be filed against the operator in the Uber vehicle, the paper reported.


I would be interested in hearing more about how she qualified that statement. Are shadows a known limitation of only Uber's systems, or of such systems in general?


The measured speed was 38mph. That is within 10% of the posted speed.


Driving rules in the UK have changed, since at least a decade ago, so that there is no 10% margin. Speedometers are required by law to read on or under and they are more reliable now. So if you're going 36mph then you'd be fined.

On top of the speedometer it has the GPS speed to compare as well, I can't see how there is any excuse for being over the limit.

The quoted stats from UK advertising were that at 40mph 80% of pedestrians will die from the crash, at 30mph 20% will die.

Had the car been doing just under the limit, e.g. 33mph, then there's a much better chance that the woman would have survived.


I cannot find a reference to backup your claim of the 10% + 2mph margin having been axed. In fact I remembered the Chief Constable calling for the end of it recently (implying it was still being used):

http://www.dailymail.co.uk/news/article-5332443/Police-chief... [30 January 2018]

https://www.telegraph.co.uk/news/2018/01/31/motorists-should...

Can you explain why you think this rule changed a decade ago?


Just what my partner was told when she was caught speeding and offered a course on how to avoid speeding instead of getting points on her licence.


That's not how posted speed limits work in Tempe though. Traffic flows an average of 5-10mph above the posted limit.


Isn't that still too fast? Maybe not worth ticketing for, but still relevant after an incident like this?


So when the road sign says 35mph it means the official speed limit is exactly 38.5mph?

Because sometimes that 10% is argued as a margin of error for humans supposedly not paying attention how fast they're going, but if that's the case then there's really no reason why the robot shouldn't drive strictly under the speed limit.

If you explicitly programmed a fleet of robots to deliberately break the law, then I think fining the first robot that gets caught, while the programmers adjust the fleet's code so it doesn't get caught again, is not enough of a consequence.

Consequences should be more severe if there's a whole fleet of robots programmed to break the law, even if the law catches the first robot right away and the rest of the fleet is paused immediately.


Should be noted that speedometers display a higher number than actual speed. So if cop flags driver at 38.5 mph, there's a good chance their speedometer showed 40+ mph.


It's said she was hit immediately after stepping into a traffic lane outside a crosswalk. Quite possibly there was no time for the autopilot to react at all. I hope all video footage from self-driving crashes is mandatorily released.


From the other post on this:[0]

> Chief of Police Sylvia Moir told the San Francisco Chronicle on Monday that video footage taken from cameras equipped to the autonomous Volvo SUV potentially shift the blame to the victim herself, 49-year-old Elaine Herzberg, rather than the vehicle.

> “It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the paper, adding that the incident occurred roughly 100 yards from a crosswalk. “It is dangerous to cross roadways in the evening hour when well-illuminated managed crosswalks are available,” she said.

0) http://fortune.com/2018/03/19/uber-self-driving-car-crash/


Non-driving advocates have pointed out that many investigations of car crashes with pedestrians and cyclists tend to blame the pedestrian/cyclist by reflex and generally refuse to search for exculpatory evidence.

Based on the layout of the presumed crash site (namely, the median had a paved section that would effectively make this an unmarked crosswalk), and on the fact that the damage was all on the passenger's side (which is to say, the pedestrian would have had to cross most of the lane before being struck), I would expect that there is rather a lot that could have been done on the driver's side (whether human or autonomous) to avoid the crash.


Your passenger's side comment didn't make sense to me until I read the Fortune article linked above:

> Herzberg is said to have abruptly walked from a center median into a lane with traffic

So that explains that. However, contrary to the thrust of your argument, the experience of the sober driver, who was ready to intervene if needed, is hard to dismiss:

> “The driver said it was like a flash, the person walked out in front of them,” Moir said. “His first alert to the collision was the sound of the collision.”

And also:

> “It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the paper, adding that the incident occurred roughly 100 yards from a crosswalk. “It is dangerous to cross roadways in the evening hour when well-illuminated managed crosswalks are available,” she said.


Yes, I see. So she walked maybe 2 meters into the lane before being hit. At a slow walk (1 meter/second) that's 2 seconds. At 17 meters/second, that's 34 meters. And it's about twice nominal disengagement time. So yes, it's iffy.


And at a moderate sprint, which is what most adults do when they try to cross a roadway with vehicular traffic, that's 4-5 m/s, giving the vehicle 0.4-0.5 seconds to stop. At 40 mph (~18 m/s), that gives the vehicle 7-9 meters to stop.

No human could brake that well, and simply jamming the brakes would engage the ABS leading to a longer stopping distance. Not to mention the human reaction time of 0.5 - 0.75 seconds would have prevented most people from even lifting the foot off the accelerator pedal before the collision, even if they were perfectly focused on driving.
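A quick back-of-envelope check of those figures (the reaction time and deceleration below are assumed round numbers, not measurements):

    v = 40 * 0.447             # 40 mph in m/s (~17.9)
    reaction_time = 0.6        # assumed perception-reaction time, seconds
    decel = 7.0                # assumed hard-braking deceleration, m/s^2 (dry road)

    reaction_distance = v * reaction_time     # distance covered before braking starts
    braking_distance = v ** 2 / (2 * decel)   # d = v^2 / (2a)
    print(round(reaction_distance, 1), round(braking_distance, 1))
    # -> 10.7 22.8, i.e. roughly 34 m needed in total, versus the 7-9 m available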


> simply jamming the brakes would engage the ABS leading to a longer stopping distance

I was taught that the entire point of ABS is so that you can just jam the brake and have the shortest stopping time instead of modulating it yourself to avoid skidding. Do you have any source to the contrary?


ABS is intended to enable steering by increasing static road friction. It is not intended to decrease stopping distance, and in many cases increases stopping distance by keeping the negative G's away from the hard limit in anticipation of lateral G's due to steering.

Older, dumb ABS systems would simply "pump the pedal" for the driver, and would increase stopping distance in almost all conditions, especially single-channel systems. Newer systems determine braking performance via the conventional ABS sensors plus accelerometers. These systems will back off N g's, then increase the g's, bisecting between the known-locked and known-unlocked conditions, trying to find the optimum. These systems _will_ stop the car in the minimum distance possible, but very few cars use them.
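A rough sketch of that bisecting search, with wheel_locks() standing in for the real sensor feedback (purely hypothetical, not any vendor's algorithm):

    def find_max_braking(wheel_locks, low=0.0, high=1.0, iterations=12):
        """Bisect toward the highest brake level that does not lock the wheels."""
        for _ in range(iterations):
            mid = (low + high) / 2
            if wheel_locks(mid):
                high = mid   # known-locked: back off
            else:
                low = mid    # known-unlocked: push harder
        return low

    # Toy feedback: pretend the wheels lock above 78% of maximum pressure.
    print(round(find_max_braking(lambda level: level > 0.78), 3))   # -> 0.78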


I was taught the point of ABS was to keep control over steering while stepping on the brakes instead of skidding out of control into god knows what/who

Wikipedia backs me up but adds that it also decreases stopping distance on dry and slippery surfaces, while significantly increasing stopping distances in snow and gravel. I’m from a country with a lot of snow so that makes sense.


That's correct. The ABS basically releases brake pressure as soon as the wheels lock. On most surfaces this will shorten your stopping distance versus a human locking the tires. It is never the optimal stopping distance though.

In terms of split second reactions, it's pretty much optimal still to just jam the brakes if you have ABS. It's much better than braking too little, which is what most non-ABS drivers would have done.


When you lock the wheels in loose snow or gravel, it piles up in front of the tire and provides a fair amount of friction. This is usually the fastest way to stop, and one of the reasons that gravel traps along corners in motor racing are so effective.

That said, the point of ABS is that in the rare event you have to brake at full power, the system automatically helps you do it near-optimally (slightly skidding) without additional input, and you retain full steering ability.

If you don't have ABS you'd need to train that emergency stop ability on a daily basis to even come close.


> Wikipedia backs me up but adds that it also decreases stopping distance on dry and slippery surfaces,

Many cars, such as my POS Ford Focus, use a single-channel ABS system. These systems will oscillate all four brakes even if only one is locked. Combined with the rear-wheel drum brakes, the ABS considerably increased stopping distances on dry road.


From my experience of walking my bicycle, you are slower than usual when doing so, and it's pretty difficult to abruptly change direction in that situation. I would be curious to know what is the FOV of the camera that recorded her.


Yes, I also assumed that. Back when I rode a lot, I don't recall sprinting across roadways with my bike. Also, from the photo, she had a heavy bike, with a front basket. And yes, they ought to release the car's video.


I don't think we have grounds to just assume it was a slow walk.


er... LIDAR needs ambient light now? Also if you look on Google Street View, the pedestrian entered the road from a median crossing that you can see straight down the middle of from the road hundreds of feet away. I bet they don't release the footage from the car though ;)


Sensor Fusion typically merges LiDAR with stereoscopic camera feeds, the latter requiring ambient light.


This is just victim blaming.


Isn't that an overly broad use of the term? I mean, if someone steps in front of a moving vehicle, from between parked vehicles, the driver may have only a few msec to react. Whose fault is it then?

Maybe it's society's fault, for building open-access roadways where vehicles exceed a few km/h.


I think you’re right about the street design being the main cause in this case. A street with people on it should be designed so that drivers naturally drive at slow, safe speeds. The intersection in question is designed for high speed. https://www.strongtowns.org/journal/2018/2/2/forgiving-desig...


I don't remember reading about parked vehicles. The accident location seems too narrow to park any vehicles.

As others have said in the comments, the whole point of having the technology is defeated if it performs worse than humans. Assuming vehicles were parked, a sane human driver would evaluate the possibility of someone suddenly coming out from between them and would not drive at 40 miles an hour.


>a sane human driver would evaluate the possibility of someone suddenly coming out from between them and would not drive at 40 miles an hour

If that's the case most drivers on the road are very far from "sane drivers." I've been illegally passed, on narrow residential streets, many times, because I was going a speed that took into account the fact someone may jump out between parked cars.


Do you want AI to simulate insanity?


"no time for the autopilot to react" - that may be technically true but humans tend to slow before if they recognize a situation they don't understand

http://fortune.com/2018/03/19/uber-self-driving-car-crash/

> she came from the shadows right into the roadway

Also, we were told radar would have solved exactly this limitation of humans.

> Uber car was driving at 38 mph in a 35 mph zone

Also, we were told these cars would be inherently safer because they would always respect limits and signage.

> she is said to have abruptly walked from a center median into a lane with traffic

I don't know about other drivers, but when someone is on the median or close to the road I usually slow down on principle, because it doesn't match the usual expectations of a typical 'safe' situation.

I've been advocating against public testing for a long time, because it just treats people's safety as an externality. Uber is cutting corners; not all companies are that sloppy, but this is, overall, unacceptable.


Isn't the point of smart vehicles that they have superhuman reaction speed?


E = ½mv², whether the driver is a superhuman robot or a human.

This means that there's a fixed minimum distance within which even the optimal driver can stop a car doing x mph. Yes, an autonomous vehicle has a faster reaction time to begin the stop, but no matter the reaction time, a stop cannot be instantaneous from any substantial speed.

If it takes 20 feet to stop a car doing 20MPH, it will take 80 feet to stop a car doing 40mph. If there's a human between the initial brake point and 80 feet from it, that human will be hit, no matter who or what the driver is.
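That 20 ft / 80 ft example is just braking distance scaling with the square of speed. As a sanity check (speeds in mph, distances in feet, same deceleration assumed):

    def braking_distance(speed, ref_speed=20.0, ref_distance=20.0):
        """Scale a known braking distance to another speed (same deceleration)."""
        return ref_distance * (speed / ref_speed) ** 2

    print(braking_distance(20))   # 20.0 ft
    print(braking_distance(40))   # 80.0 ft - double the speed, four times the distance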


The promise of self-driving cars is (was) that they're much better than humans at predicting the behavior of other moving entities. A pedestrian doesn't suddenly materialize on the road in front of the car. She comes from somewhere, and the radar could have detected her (even "in the shadows") and slowed down in anticipation of an uncertainty.

Or maybe it couldn’t, but then the whole « narrative « of the experiment is in serious jeopardy.


> whole "narrative" of the experiment is in serious jeopardy.

Not really. Self driving cars are supposed to be better than average human driver. That does not imply that they NEVER make mistakes.

I do not know the specifics of this case, but a general comment: if somebody is hiding behind a bush and (deliberately or by mistake) runs in front of the car, there is no way the car can anticipate that. There is no way to avoid accidents in 100% of cases.


We have some corners where old houses are even intruding a bit on the road. When passing these corners you will have to slow down so you can stop in case a child runs out behind the corner. You can't just blame the victim if you are in control of your own speed.

I can think of many situations where I have avoided hitting pedestrians because of my awareness of the situation. E.g.: a pedestrian with earphones, looking at their phone, crossing against a red light just because the left-turning vehicle in the left lane stopped for a red arrow while I had green going straight - the pedestrian mostly hidden behind that car, just visible through its windows.

Another: a pedestrian behind high snow banks heading towards a normal pedestrian crossing, no lights, almost completely hidden by the snow banks and by a bus parked at a bus stop 50 m from the crossing, on a 50 km/h road. Since I had already seen the pedestrian from far away, I knew someone would show up there by the time I arrived. On the other hand, I would never pass a bus like that at high speed anyway - pedestrians like to just run across in front of the bus - and high snow banks next to a crossing are a big red flag too.

I live in Sweden though, where pedestrians are supposed to be first-class citizens who have no armor.


When you are driving you should be prepared to stop. If you're turning into a street you cannot see into and you're going faster than you can stop from, you're not prepared to stop - you're just hoping that no one is there. This is, in Denmark at least, fully expected and enforced. It is not the same as driving along a street and having someone jump out in front of you.


I have now actually seen the video of the crash and I agree that it most likely was hard to avoid for a human. What surprises me is that the LiDAR completely missed her: she didn't run, she didn't jump, she was slowly walking across the road. I can't say if the light was too bad - a camera often looks much darker than what you see with the naked eye when not blinded by other lights. The driver was looking down at the instrument panel at the time of the crash; does he have some view there of what the car sees?

This looks like the exact situation the self-driving cars are supposed to be able to avoid: a big object in the middle of the street. I expect the car to try to avoid this even though the bike didn't seem to have any reflectors. If the LiDAR doesn't catch this, I don't think they should be out in traffic at all.


> We have some corners where old houses are even intruding a bit on the road. When passing these corners you will have to slow down so you can stop in case a child runs out behind the corner. You can't just blame the victim if you are in control of your own speed.

Yes, but this is a 4-lane roadway. I can totally imagine driving cautiously and slowing down near residential areas where houses are close to the road. However, this seems like a different case.


> to begin the stop

It, or the driver, could do more than just stop though. You can change directions, even at 38mph.

Then we have to get into other questions, would I as a driver willingly sideswipe a car next to me to avoid hitting a pedestrian? Is it reasonable to expect an AI to make the same value decision?


It's not unknown for people to crash and burn to avoid hitting squirrels. And with modern airbag systems, it's arguably safer for all concerned for cars to risk hitting poles, and even trees. But on the other hand, once leaving the roadway there's the risk of hitting other pedestrians.


This is a major ethical decision to make. What if the airbags don't deploy? What if there are other unseen consequences of crashing one's car to save somebody else's life? I honestly believe that, given a split-second reaction time, any decision made by a human should be considered right.

An algorithm, however, is a different deal: what should happen is decided in advance, so in some way it's already settled who gets killed.


When driving on surface streets, I do my best to track what's happening wherever in front, not just on the roadway. Given all the sensors on a self-driving car, why can't it detect all moving objects, including those off the roadway, but approaching?


It's all about what is hiding behind the occlusion. Blind spots are one thing, but if you jump right in front of a car out of nowhere, from a place totally invisible to a sensor, it's a totally different case.

You can't avoid what is hiding.


Yes, of course. But that isn't what happened here, right? A woman and bicycle on a median should have been quite obvious. I don't even see substantial landscaping on the median.[0]

0) https://www.nytimes.com/2018/03/19/technology/uber-driverles...


It appears that's pretty close to what happened here.

http://fortune.com/2018/03/19/uber-self-driving-car-crash/


We can try. In theory, each self-driving vehicle doesn't have to drive in isolation; they can be networked together and take advantage of each other's sensors and other sensors permanently installed as part of the road infrastructure.

That would increase the chance a particular vehicle could avoid an accident which it couldn't, on its own, anticipate.
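A very rough sketch of that idea (the message shape and names are invented for illustration, not any real V2X protocol):

    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        x: float        # position in a shared map frame, meters
        y: float
        kind: str       # e.g. "pedestrian", "vehicle", "unknown"
        source: str     # which car or roadside unit reported it

    shared_detections = []   # everything published by other cars / infrastructure

    def publish(obstacles):
        """Other vehicles and roadside sensors add what they see to the shared pool."""
        shared_detections.extend(obstacles)

    def obstacles_for_planning(own_detections):
        """A vehicle plans against its own detections plus everything shared."""
        return list(own_detections) + shared_detections

    publish([Obstacle(12.0, 3.5, "pedestrian", "roadside-cam-7")])
    print(obstacles_for_planning([]))   # known before the car's own sensors see her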


Also the fact that most people now carry tracking devices. And that more and more, there are cameras everywhere. So there's potential for self-driving vehicles to know where everyone is.

It would be much safer if all roadways with speed limits over a few km/h were fenced, with tunnels or bridges for pedestrian crossing. Arguably, we would have that now, but for legislative efforts by vehicle manufacturers many decades ago. Maybe we'll get there with The Boring Company.


TL;DR: Nope.

"Most people" (which is, in reality, "most of my geek friends with high disposable income") shifts to "everyone" by the end of sentence. Also, my devices seem to know where I am...within a few city blocks: I do not like your requirement of always-on mandatory tracking, both from privacy and battery life POVs.

Even worse, this has major false negatives: it's not a probation tracking device - if I leave it at home, am I now fair game for AVs? And even if I have it with me and a fine position is requested, rarely do I get beyond "close to a corner of X and Y Streets"; usually the precision tops out at tens of feet: worse than useless for real-time traffic detection.

Moreover, your proposal for car-only roadways is only reasonable for high-speed, separated ways; I sure hope you are not proposing fencing off streets (as would be the case here: 50 km/h > "a few km/h", this is the usual city speed limit).


OK, it was a dumb idea. Mostly dark humor. Reflecting my concerns about smartphones etc tracking location. But I do see that GPS accuracy for smartphones is about 5 meters,[0] which is close to being useful to alert vehicles to be cautious. And yes, it wouldn't protect everyone, and would probably cause too many false positives. And it's a privacy nightmare.

Some do argue that speed limits for unfenced roadways within cities ought to be 30 km/h or less. And although fatalities are uncommon at 30 km/h, severe injuries aren't. I live in a community where the speed limit is half that. But there's no easy solution, given how prevalent motor vehicles have become. Except perhaps self-driving ones.

0) https://www.gps.gov/systems/gps/performance/accuracy/#how-ac...

Edit: spelling


Tempe police have said that the car didn't brake ("significantly") prior to the collision[1]. So it does not seem that the car reacted optimally but simply had too much speed to halt in time.

[1]http://www.phoenixnewtimes.com/news/cops-uber-self-driving-c...


I haven't any insight to whether or not the car did or didn't attempt to brake, but it's necessary to respond to the "superhuman reaction speed" remark as, even if the reaction speed is superhuman, that doesn't necessarily mean that it's enough.

Accidents can still occur even if the car is perfect.


There are many instances in Asia of people committing suicide by car (jumping in front of a car with no chance for the driver to stop).

Not saying this is the case here, but it could be. As others have said, we need to wait until we know more before jumping to conclusions.


Actually, you can't go from 40 mph to 0 mph in an instant - at least not with a passenger car. Stopping typically takes a few seconds. If someone throws themselves in front of a car, the car would need those few seconds to stop. Based on the speed, distance and other parameters, I don't think any car would be able to stop in an instant.

Thinking about it seriously, maybe it shouldn't. These things could also lead to a pile-up with other vehicles coming in from behind.


Superhuman reaction speed does not mean that the laws of physics stop applying. A vehicle at 40 mph will still take the same distance to stop.


It is; actuators still have their lag, probably not much different from the foot-pedal combo.


The speed limit on that street is 45mph. The 35mph speed limit being quoted is in the other direction.


This comment section should be read in front of Congress the next time regulating the tech industry is on the table. These people literally think it's OK to perform experiments that kill people.


Although it's comforting that this exact situation shouldn't happen again in an Uber autonomous car... there is no mechanism to share that learning with the other car companies. There seriously fucking needs to be a consortium for exactly this purpose: sharing system failures.

Also my problem with this is that a human death is functionally treated as finding edge cases that are missing a unit test, and progressing the testing rate of the code... and that really bothers me somehow. We need to avoid treating deaths as progress in the pursuit of better things


> We need to avoid treating deaths as progress in the pursuit of better things

Au contraire. Go read building codes some time. There's a saying that they're "written in blood" - every bit, no matter how obvious or arbitrary seeming, was earned through some real-world failure.

The death itself isn't progress, of course. But we owe it to the person to who died to learn from what happened.


The Federal Aviation Administration's Joint Order 7110.65 is exactly what you are talking about. It is the manual that air traffic controllers live by, covering situations involving diverging aircraft, taxiing instructions, and how to handle them. The entire manual was practically written from real-world experience.


This reminds me of the story behind the iron ring and the collapse of the second narrows bridge in Vancouver, BC.

http://www2.ensc.sfu.ca/undergrad/euss/enscquire/Vol10No2/pa...


It used to be the case that those “codes” were also written in the blood of the builders – as per the Code of Hammurabi.


You might enjoy _The History of Fire Escapes: How to Fail_, a talk Tanya Reilly gave at a DevOps Days conference earlier this year.

https://www.youtube.com/watch?v=02KEKtc-5Dc


Not just building codes, but a lot of car safety regulations and practices too. Stuff like crumple zones, ABS, breakaway road signs, safety glass, etc.


You seriously think Uber wouldn't try to claim "commercial in confidence" or "trade secrets" rights to all the data from every single death?


Clearly I think Uber is a benevolent entity that has all of our best interests at heart. Also, I eat babies.

Or, you know, don't jump on comments for their not explicitly addressing the hobby horse you're riding. Frankly I just wanted to express a better engineering context around the loss of life without getting into the political bullshit for once.


>There seriously fucking needs to be a consortium for exactly this purpose: sharing system failures.

This implies they're using the same systems and underlying models. If one model hit a pedestrian because of a weakness in the training data plus a sub-optimal hyperparameter, and therefore classified a pedestrian in that specific pose as some trash on the street, how do you share that conclusion with other companies' models?


I guess it depends on the data available, but I don't think the hyperparameters are what's needed. You just need to see what the car sensed and how it responded. Then the other car companies can try to replicate the similar circumstances and see what their models would do.


> You just need to see what the car sensed

I don't think all self-driving cars have the same sensors.

If you have LIDAR model X, and they were using LIDAR model Y, will your system "magically figure it out"?

If your car has cameras at 5ft high, and the data is from a camera 6ft high?

Sure, someone could release the data, but will it screw up your models more than it fixes them?

(I totally agree the data should be released, I'm just not sure other self-driving cars will directly benefit. Certainly they can indirectly benefit from it.)


Is the design of self driving cars so limited that we will have to go through this every time carmakers must redesign hardware and/or vehicles themselves? Will the experiences of a sedan be impossible to transport to the experience of a semi truck? And vice versa?

The idea you present is possible, but I have to wonder how viable it makes the idea of self driving cars.


When self-driving cars become ubiquitous, it will be through a rental model, so the fleet can be updated accordingly by the companies owning them.


I understand, but the error may be in how it interpreted what it sensed. This is callous language, but if the models interpreted the pedestrian as trash on the street, then how it responded (driving over it) is not inappropriate.


> I understand, but the error may be in how it interpreted what it sensed.

It may, it also might not. If sharing data fails, further methods would be needed - but it's a good start for figuring out what data should be recorded for comparison. If the data is entirely incompatible, then we should have regulation to require companies to at minimum transcribe the data into a consumable format after an event such as this.

> ... if the models interpreted the pedestrian as trash on the street, then how it responded (driving over it) is not inappropriate.

If the models saw anything at all, they should not have driven over it. Even human drivers are taught that driving over road debris is dangerous. At minimum it puts the car/occupants at risk - in extreme cases, the driver may not recognize what it is they are driving over.

If this isn't a case where the car was physically unable to stop - it's more likely the telemetry didn't identify the person as an obstacle to avoid at all.


The idea is to come up with a certification test suite, and the cars have to pass it. Add/modify the tests as experience requires.


I would expect nowadays you could fairly trivially produce near life-like footage and allow self driving cars to 'play a video game' that tests tens of thousands of situations as a pre-requisite for certification.

The great thing about computer generated test cases is you don't need real footage of every hypothetical awful thing that could happen i.e. a truck losing control and rolling over sideways. These could be a stage 1 test -- a pre-requisite to real testing. Like the hazards test before you are allowed to sit the drive test with a real drive test employee [1] to make sure you're not a retard.

[1]: https://www.vicroads.vic.gov.au/~/media/images/licences/hpt_...
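Something along these lines, where run_scenario() is an assumed simulator hook and the scenario fields are invented:

    SCENARIOS = [
        {"name": "pedestrian_from_median",  "ego_speed_mps": 17, "lighting": "night"},
        {"name": "truck_rollover_ahead",    "ego_speed_mps": 25, "lighting": "day"},
        {"name": "child_behind_parked_car", "ego_speed_mps": 8,  "lighting": "day"},
    ]

    def certify(policy, run_scenario, scenarios=SCENARIOS):
        """Pass only if the driving policy avoids a collision in every scenario."""
        failures = [s["name"] for s in scenarios
                    if run_scenario(policy, s)["collided"]]
        if failures:
            print("failed scenarios:", failures)
        return not failures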


That may lead to algorithms being optimized for the “video game” instead of real life.


Next step is you put the system on a test track where you throw a wide variety of cases at it.


What happens if it uses lidar (or any other sensor)? I don’t think it’s easy.


Airplane certification uses a combination of simulator exercises and real world flying through prescribed maneuvers.


Mumble mumble Dieselgate.


I think there is likely a common need for a set of standard training models. This may hinder innovation for some time, but it's a cost we should accept when releasing a potentially dangerous technology to the public. It would have the added benefit of multiple companies contributing to a common self-driving standard which could accelerate its development.

That being said, when the first real cars were introduced to the world and later improved upon there were many more fatalities than we're likely to experience with self-driving technology.


As commented in another thread here [0], HAL at Duke University [1] is putting forward proposals for this.

[0]: https://news.ycombinator.com/item?id=16620439

[1]: https://hal.pratt.duke.edu/research


I'm sorry, but this is silly.

Every technology is dangerous. Every technology costs lives to some extent when spread across billions of people. I'm sure forks take more lives each year than self driving cars.

Weigh this against the potential lives saved. I posit you'd be killing more people with drunk drivers by slowing innovation than you'd be saving via luddism.


The first real cars, dangerous as they were, were far safer than horses.


Just sensor data to replay the scenario as fully as possible should be sufficient. Whatever it looks like to humans, there's clearly something in there that can be difficult for AI systems and so everyone should be using it as a regression test.


They need to share situations that made systems fail, and ensure that it doesn't happen with their specific system.


Well, for one thing, I think I have misgivings in part because Uber hasn't really demonstrated an attitude that makes me think they'll be very careful to reduce accidents. (Also, the extent to which they've bet the farm on autonomous driving presents a lot of temptation to cut corners)


Circumstances basically force them into betting the farm. Whether they want SDCs or not, if someone scoops them on SDCs, it's an existential threat to their business.


Sure, and also paying the drivers is a big part of the reason they aren't making money yet. Nevertheless, I don't think it changes the fact that there is a big temptation for someone to start cutting corners if it's not moving along fast enough.


Oh yeah, I'm not disagreeing with you. I think it actually makes the temptation much worse. They've built this business with billions in revenue, and they will probably lose it all if this SDC doesn't succeed.

What's more, a lot of people seem to think Waymo's tech is further along. So not only does this project have to succeed, it is also the underdog. So no wonder they're aggressive.


It makes me uncomfortable too. I think it's because it's a real world trolley problem.

We all make similar decisions all our lives, but nearly always at some remove. (With rare exceptions in fields like medicine and military operations.) But autonomous vehicles are a very visceral and direct implementation. The difference between the trolley problem and autonomous vehicles is in the time delay and the amount of effort and skill required in execution.

Plus, we're taking what is pretty clearly a moral decision and putting it into a system that doesn't do moral decision-making.


That raises an interesting question: Should there be the equivalent of the FAA for self-driving cars? (Perhaps this could be a function of DoT.)


Although the FAA is of course involved, it's generally the NTSB that is the lead investigator in airplane crashes. The NTSB already has jurisdiction to investigate highway accidents.

There's also NHTSA (which is indeed part of the DoT).

It looks to me like we don't need any new agency at all, just a very small expansion of NHTSA's mandate to specifically address the "self-driving" part of self-driving cars.


Anyone know how this is handled between Boeing and Airbus? Can we mandate the same mechanism between Uber and Waymo?


Thinking about different scenarios as unit tests, it shouldn't be hard for them to simulate all sorts of different scenarios and share those tests. Perhaps that would become part of a new standard for safety measures, in addition to crash tests with dummies. In fact, I really think this will become the norm in the near future. It might even be out there already in some form.


> We need to avoid treating deaths as progress in the pursuit of better things

Then by all means let's stay at home and avoid putting humans in rockets ever again, because if you think space exploration will be done without deaths you are in for a surprise.


Notoriously criminal company killing pedestrians with robo-cars to make money for themselves != Space exploration by volunteer scientists and service members


I hope we see the day when every car crash makes national news and there’s a big NTSB investigation into what happened.


This is by far the most insightful comment in the entire thread.

The real news isn't "Self-Driving Car Kills Pedestrian", the real news is "Self-Driving Car Fatalities are Rare Enough to be World News". I'm one-third a class M planet away from the incident and reading about it.


> The real news isn't "Self-Driving Car Kills Pedestrian", the real news is "Self-Driving Car Fatalities are Rare Enough to be World News".

They are rare only because self-driving cars are; I don't think the total driven miles of all self-driving cars are enough that even 1 fatality would be expected if they were human driven; certainly Uber is orders of magnitude below that point taken alone.

There are lots of fatalities from human-driven cars, sure, but that's over a truly stupendous number of miles driven.


Neither of us has the data, but I'd bet that whatever the miles/fatalities metric is, the self-driving cars are still in the lead right now.


> Neither of us has the data, but I'd bet that whatever the miles/fatalities metric is, the self-driving cars are still in the lead right now

There's different estimates from different sources using slightly different methodology, but they are all in the neighborhood of road fatalities of 1 per 100 million miles traveled. [0]

Waymo claims to have reached 5 million miles in February [1], Uber (from other posts in this thread) is around 1 million miles; the whole self-driving industry is nowhere near 100 million, and has one fatality. So it's way worse, as of today, than human driving for fatalities.

Of course, it's also way too little data (on the self-driving side) to treat as meaningful in terms of predicting the current hazard rather than simply measuring the outcomes to date.

[0] see, e.g., https://en.m.wikipedia.org/wiki/Transportation_safety_in_the...

[1] https://waymo.com/ontheroad/
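Using the round numbers above (~5M Waymo + ~1M Uber miles, one fatality, ~1 fatality per 100M human-driven miles), the back-of-envelope comparison looks like:

    human_rate = 1 / 100e6     # fatalities per mile, human drivers (approx.)
    av_miles = 6e6             # rough industry total to date
    av_fatalities = 1

    print(av_fatalities / av_miles * 100e6)   # ~16.7 per 100M miles for AVs so far
    print(human_rate * av_miles)              # ~0.06 fatalities expected over the same miles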


Uber reached 2M miles in November. Going from 1 to 2M in only 100 days.


wow, ok. i was very wrong! thank you for explaining!


> This is by far the most insightful comment in the entire thread.

lol


NTSB are investigating this incident including electronic recorders https://www.ntsb.gov/investigations/Pages/HWY18FH010.aspx and they have another investigation from 2017 still to report https://www.ntsb.gov/investigations/Pages/HWY18FH001.aspx


With Uber's reputation, I wouldn't be surprised if they try to write an app to falsify black box telemetry in the event of a crash to put the liability on the victim. Maybe they'll call it "grey box" or "black mirror".

Does the NTSB have regulation on how black boxes are allowed to function?


> "Every crash and especially fatality can be thoroughly investigated"

Would be better if the code and data was open for public review.


That's a knee-jerk reaction that may open a can of worms. Do you need the personal details of the victim as well as of the driver? Say the victim had attempted suicide before - at the same crossing? Or the driver had a history of depression? Would that be a violation of their privacy? Would that cause a witch hunt?


Code and data, not people's personal details.


Sure. But if the result is inconclusive, do you leave it at that or demand to know more? What exactly makes releasing the data to the public better than entrusting it to a competent organization such as the NTSB?


Yes, you leave it at that, why would personal details be relevant?

Making it public means more eyes on the data, which can lead to a better understanding of what went wrong.


https://news.ycombinator.com/item?id=16547215

This comment by Animats in the context of self-driving trucks is quite telling. He warns precisely of this danger.


I'm hijacking my own post, but this is a very relevant MIT lecture on 'Technology, Policy and Vehicle Safety in the Age of AI' [0].

[0]: https://www.youtube.com/watch?v=LDprUza7yT4


> and should be prevented from ever happening again.

Take a look at the cost of airplanes, even small ones.


Black box with a ton of telemetry being piped into black box models though.


To ensure that all automotive software incorporates lessons learned from such fatalities, it would be beneficial to develop a common data set of (mostly synthetic) scenarios replicating accidents and 'near misses'.

As we understand more about the risks associated with autonomous driving, we should expand and enrich this data-set, and to ensure public safety, testing against such a dataset should be part of NHTSA / Euro NCAP testing.

I.e. NHTSA and Euro NCAP should start getting into the business of software testing.
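One possible shape for such a shared record, purely illustrative (the field names are invented, not an NHTSA format), loosely based on the circumstances discussed in this thread:

    scenario_record = {
        "id": "tempe-2018-pedestrian-from-median",
        "source": "fatal collision, reconstructed",
        "road": {"lanes": 4, "speed_limit_mph": 35, "lighting": "night"},
        "actors": [
            {"type": "pedestrian_with_bicycle", "entry": "center_median",
             "speed_mps": 1.4, "time_to_conflict_s": 2.0},
        ],
        "expected_behavior": "detect, brake, and stop or yield without contact",
    }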


Dr. Mary Cummings has been working on talking to NHTSA about implementing V and V for autopilots/AI in unmanned vehicles for a few years now. She's also been compiling a dataset exactly like what you are talking about.

I think the idea is to build a "Traincar of Reasonability" to test future autonomous vehicles with.

You might want to check out her research https://hal.pratt.duke.edu/research


Thank you for that link. I will pass it along to my former colleagues who I suspect will be very interested in her work.


No problem!

I'm sure Dr. Cummings would be more than happy to talk about issues facing validation and verification in the context of NHTSA/FAA.


Why would Uber agree to such regulations?

They were unwilling to legally obtain a self-driving license in California because they did not want to report "disengagements" (situations in which a human driver has to intervene).

Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.


> Why would Uber agree to such regulations?

This is a strange question to ask. The regulation is not there to benefit Uber; it is there to benefit the public good. Very few companies would follow regulation if it were a choice. The point of such regulation would be to make non-compliance criminal. And if Uber could not operate in California (or the USA) without complying, it would be in their interest to provide the requested information.


Uber has shown very often that they are willing to break the law. It seems within their modus operandi to just ignore these rules.

Essentially, Uber engages in regulatory arbitrage, taking into account the cost-benefit of breaking the law. I.e., if breaking the law is profitable for them, they seem to do it.


Sure, so make the regulation expensive. For example, if a company is not in compliance then the executive team can be charged for any crime their self-driving toy committed under their guidance.


I don't believe this will be effective. Thinking back to the VW scandal, did any executive get punished for this? Same question for the Equifax breach and the insider trading issue.

My 'money' is on people with money figuring out loopholes, like plausible deniability.


Yes, it means that we need to write new regulations with real teeth, and vote out the politicians on all sides of the aisle that continue to punt on this issue.

One of my biggest complaints about the self-driving car space is that real lives are at stake; light-touch "voluntary" rules suitable for companies that publish solitaire clones aren't going to cut it here.


> Why would Uber agree to such regulations?

Uber doesn't get to pick and choose what regulations they wish to follow.

> Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.

That sounds quite negligent and a cause for heightened repercussions if anything happens.

The strange attitude you display is the _reason_ there are regulations.


Very cynical, but - if your self-driving program is way behind your competitors', wouldn't it help to have your lousy car in an accident, so that your competitors get hit with over-regulation and you thus kill a market on which you can't compete?


I'm quite sure this would backfire A LOT in terms of brand damage. Uber in a sense made history today and now has actual blood on its hands. And if such a strategy should EVER leak (Dieselgate, anyone?), people are going to prison.


GM killed 124 people with faulty ignition switches[1], yet the brand still survived. It's a cost calculation: will the brand damage outweigh the benefit to the company? Sadly, human lives don't factor into that equation.

[1] http://money.cnn.com/2015/12/10/news/companies/gm-recall-ign...


Sadly, that’s a common occurrence with big automakers.

I can't say anything about GM's rep in the US, but here in Europe they are not doing so well. Chevrolet was killed off in 2015, and Vauxhall/Opel are doing only OK-ish. Chevy had SO many recalls in the years before they killed it.


Opel got bought back to Europe by PSA, the owners of Citroën and Peugeot, in 2017, so they have a chance to turn it around.


No, because what you risk is associating your brand with death, rather than with autonomous driving.

Uber already has a terrible reputation with everyone in the tech industry for the sexism, bullying, law breaking, and IP theft. Do they really want to be the self-driving car company with a reputation for killing people?

It doesn't take a lot for people to think "Maybe I'll take a lyft" or "Maybe I'll ban uber from London because of their safety record"[0].

They aren't going to kill the market for this - the other players not only have big incentives to make sure they look safe, but you've got a really unique problem when your biggest competitor is a company that controls access to news and which adverts your customers will see.

[0] https://www.theguardian.com/politics/2017/oct/01/sadiq-khan-...


It's a cynical approach, but they could be playing both angles. Take enough risks that maybe you do succeed and you can cut your costs enormously by actually having SDCs. But if you fail, you also protect yourself by taking the competition down with you.


Uber ultimately backed down and applied for a California permit: http://fortune.com/2017/03/08/uber-permit-self-driving-cars-...


Then start tossing executives in jail.

It’s the only real solution to corporations misbehaving.


Corporations will just start paying compensation for executive jail time - and replace executives at an accelerated rate.

The only working regulation is one that is an existential threat to the company, which means huge financial punishments.


Something I'm surprised no one has considered seriously is revoking business licenses. That is a much more existential threat, literally.


Corporate death penalty. It's the only way to be sure.


Phoenixing :(


...from orbit.


Maybe they start paying compensation to the family, making sure that they're set even if you fail - firmly stepping into mafia territory.

Still, corporations, while being essentially a different kind of life, are not entirely separate entities - they are composed of people. Fear of jail might just be enough to get some high-level executives to exert proper influence on the direction of the corporation.


You can’t return a decade sitting in a miserable cell.


Au contraire - if the alternative is a decade sitting unemployed in a room, and you get a chance, by taking on all the responsibilities and liabilities, to live your golden years in the sun - what would you have to lose?


I guess my hopeful answer would be they start getting treated like Enron and someone high up goes to prison until they start to comply.


Did anyone actually go to prison from Enron?


You mean, besides the CEO, Jeffrey Skilling?

https://www.reuters.com/article/us-enron-skilling/former-enr...

Enron's founder, Kenneth Lay, was also convicted and faced as much as life in prison. However, he died before sentencing: http://www.nytimes.com/2006/07/05/business/05cnd-lay.html


Yes, their CEO is still in prison: https://en.wikipedia.org/wiki/Jeffrey_Skilling .


It would be required by law. Violating the law would hold corporate officers criminally responsible.


Possibly there's enough business risk that if Uber doesn't, someone else will, and then they will have SDCs but Uber won't, and then Uber will go bankrupt just about instantly.


There may eventually be standard test suites that can be applied to any of the self-driving systems in simulation. This would give us a basis of comparison for safety, but also for speed and efficiency.

As well as some core set of tests that define minimum competence, these tests could include sensor failure, equipment failure (tire blowout, the gas pedal gets stuck, the brakes stop working) and unexpected environmental changes (ice on the road, a swerving bus).

Manufacturers could even let the public develop and run their own test cases.
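A sketch of how such a test matrix might be generated, with scenario and fault names invented for illustration:

    import itertools

    BASE_SCENARIOS = ["pedestrian_crossing", "merging_truck", "stalled_car_in_lane"]
    FAULTS = [None, "lidar_dropout", "tire_blowout", "stuck_throttle", "ice_on_road"]

    def test_matrix():
        """Every base scenario is run clean and with each injected fault."""
        return list(itertools.product(BASE_SCENARIOS, FAULTS))

    for scenario, fault in test_matrix():
        print(f"run {scenario!r} with fault={fault}")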


How about not testing it on the unpaid public in incremental patches like this age of software "engineering" has decided it was a good idea to do?


You ultimately have to at some stage, since any test track is a biased test by its nature.

It is more an issue of how sophisticated these vehicles should be before they're let loose on public roads. At some stage they have to be allowed onto public roads or they'd literally never make it into production.


"Never making it into production" sounds like the perfect outcome for this technology.


Then make the officers of the company stake their lives on this, not the lives of innocent pedestrians.

If they're not willing to dogfood their own potential murder machines then why should the public trust them on the public roads?


This is what's going to happen. If you've ever seen a machine learning algorithm in action, this isn't surprising at all. Basically, they'll behave as expected some well known percentage of the time. But when they don't, the result will not be just a slight deviation from the normal algorithm, but a very unexpected one.

So we will have overall a much smaller number of deaths caused by self driving cars, but ones that do happen will be completely unexpected and scary and shitty. You can't really get away from this without putting these cars on rails.

Moreover, the human brain won't like processing these freak accidents. People die in car crashes every damn day. But we have become really accustomed to rationalizing that: "they were struck by a drunk driver", "they were texting", "they didn't see the red light", etc. These are "normal" reasons for bad accidents and we can not only rationalize them, but also rationalize how it wouldn't happen to us: "I don't drive near colleges where young kids are likely to drive drunk", "I don't text (much) while I drive", "I pay attention".

But these algorithms will not fail like that. Each accident will be unique and weird and scary. I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road, and tries to explicitly chase them down until they are under the wheels. Or if the car suddenly decides that the road continues at a 90 degree angle off a bridge. Or that the splashes from a puddle in front is actually an oncoming car and it must swerve into the school kids crossing the perpendicular road. It'll always be tragic, unpredictable and one-off.


Very little of what goes into a current-generation self-driving car is based on machine learning [1]. The reason is exactly your point -- algorithmic approaches to self-driving are much safer and more predictable than machine learning algorithms.

Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road. The extent to which machine learning is used is to classify whether each obstacle is a pedestrian, bicyclist, another car, or something else. By doing so, the self-driving car can improve its ability to plan, e.g., if it predicts that an obstacle is a pedestrian, it can plan for the event that the pedestrian is considering crossing the road, and can reduce speed accordingly.

However, the only purpose of this reliance on the machine learning classification should be to improve the comfort of the drive (e.g., avoid abrupt braking). I believe we can reasonably expect that, within reason, the self-driving car nevertheless maintains an absolute safety guarantee (i.e., it doesn't run into an obstacle). I say "within reason" because of course if a person jumps in front of a fast-moving car, there is no way the car can react. I think it is highly unlikely that this is what happened in the accident -- pedestrians typically exercise reasonable precautions when crossing the road.
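
To make that division of labor concrete, here is a rough sketch of the structure I'm describing (purely illustrative - the names and fields are hypothetical, not anyone's actual stack):

    # Illustrative only: the hard stop is triggered by LIDAR geometry alone; the
    # learned classifier merely tunes comfort margins and can never override it.
    def plan_speed(ego, obstacles, classifier):
        target = ego.cruise_speed_mps
        for obs in obstacles:                        # obstacles come from LIDAR clustering
            # v^2 / (2a) with a ~= 6 m/s^2: a purely geometric stop check, no ML involved
            if obs.in_ego_path and obs.distance_m < ego.speed_mps ** 2 / (2 * 6.0):
                return 0.0
            label = classifier.predict(obs)          # "pedestrian", "cyclist", "vehicle", ...
            if label == "pedestrian" and obs.near_roadside:
                target = min(target, 0.6 * ego.cruise_speed_mps)   # ease off early for comfort
        return target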

[1] https://www.cs.cmu.edu/~zkolter/pubs/levinson-iv2011.pdf


Actually, because there's a severe shortage of LIDAR sensors (much like video cards & crypto currencies, self driving efforts have outstripped supply by a long shot), machine learning is being used quite broadly in concert with cameras to provide the model of the road ahead of the vehicle.


That is what the comment is saying. Of course the vision stuff is done with machine learning - that is after all the state of the art. But that is a tiny part of the self-driving problem. So you can recognize pedestrians, other cars, lanes, signs, maybe even infer velocity and direction from samples over time. But then the high-level planning phase isn't typically a machine learning model, and so if you record all the state (Uber had better, or that's a billion-dollar lawsuit right there) you can go back and determine whether the high-level logic was faulty, the environment model was incomplete, etc.


I was responding specifically to "Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road." - LIDAR isn't economically viable in many self driving car applications (for example: Tesla, TuSimple) right now.


Then your comment is off-topic, because the realm of discussion was explicitly "self-driving cars equipped with LIDAR". Uber's self-driving vehicles are all equipped with LIDAR, as are basically all other prototype fully-autonomous vehicles.


How is it off topic when we're discussing "current-generation self-driving" vehicles?

It's a point of clarification that the originally listed study doesn't take into account, but which could be important to the broader discussion. Especially considering that while this vehicle had LIDAR, the other autonomous vehicle fatality case did not.

> as are basically all other prototype fully-autonomous vehicles

As I pointed out with examples above, no, they are not.


The vehicle involved in the accident has an HDL64 on the roof.


Is it true?

You can get a depth-sensing (time of flight) 2D camera, the Orbbec Astra, for $150, or a 1D laser scanner, the RPLIDAR, for $300. Of course they are probably not suited for automotive use, but to me even an extra $2000 for self-driving car sensors isn't that much.


But that's the issue: identifying a pedestrian vs a snowman or a mailbox or a cardboard cutout is important when deciding whether to swerve left or right. It's an asymptotic problem: you'll never get 100% identification, and based on that, even the rigid algorithms will make mistakes.

LIDAR is also not perfect when the road is covered in 5 inches of snow and you can't tell where the lanes are. Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.

With erratic input, you will get erratic output. Even the best ML vision algorithm will sometimes produce shit output, which will become input to the actual driving algorithm.


> Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.

Neither are humans, and a self-driving car can react much faster than any human ever could.


I can see when the car in front of me is acting erratically or notice when the driver next to me is talking on their phone and adjust my following distance automatically. I don't think self-driving cars are at that point yet. The rules for driving a car on a road are fairly straightforward - predicting what humans will do -- that's far from trivial and we've had many generations of genetic algorithms working on that problem.


Self-driving cars could compensate for that with reaction time. Think of it this way: you trying to predict what the other driver will do is partly compensating for your lack of reaction time. A self-driving car could, in the worst-case scenario, treat the other car as a randomly-moving, car-shaped object, compute the envelope of its possible moves, and make sure to stay out of it.
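
A minimal sketch of that worst-case-envelope idea (illustrative only; assumes a fixed ego plan and a simple disc-shaped reachable set):

    # Treat the other vehicle as able to be anywhere within v_max * t of its
    # current position, and reject any ego plan that enters that disc.
    import math

    def plan_is_safe(ego_plan, other_pos, other_vmax, dt=0.1, margin_m=1.5):
        """ego_plan: list of (x, y) planned ego positions, one every dt seconds."""
        for i, (x, y) in enumerate(ego_plan):
            reachable_radius = other_vmax * (i * dt) + margin_m
            if math.hypot(x - other_pos[0], y - other_pos[1]) < reachable_radius:
                return False       # the other car could occupy this point by then
        return True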


Normal cars could do this too. Higher end luxury cars already started using the parking sensors to automatically apply the brakes way before you do if something is in front of the car and approaching fast. If this was really that easy, then we wouldn't have all these accidents reported about self driving cars: the first line of your event loop would just be `if (sensors.front.speed < -10m/s) {brakes.apply()}` and Teslas and Ubers wouldn't hit slow moving objects ever. I suspect that's not really how this works though.
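
Even a slightly less naive version - a time-to-collision trigger - is still only the easy part (toy sketch, illustrative only; real systems spend most of their effort deciding which sensor returns to trust before a rule like this is allowed to fire):

    # Toy time-to-collision (TTC) check, illustrative only. The hard part is
    # upstream: deciding which radar/LIDAR returns are real, relevant obstacles.
    def should_emergency_brake(range_m, closing_speed_mps, ttc_threshold_s=1.5):
        if closing_speed_mps <= 0:          # not closing on the object
            return False
        return range_m / closing_speed_mps < ttc_threshold_s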


Exactly - with LIDAR the logic isn't very tricky: if something is in front, stop.


More than that - if something approaching from the side at intercept velocity, slow to avoid collision.


You're handwaving away the crux of the matter: while for a human the condition seems straightforward (we understand that "in front" means "in the set of locations we might occupy in the near future, determined by a many-dimensional vector set"), expressing this in code is nontrivial.


> I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road, and tries to explicitly chase them down until they are under the wheels. Or if the car suddenly decides that the road continues at a 90 degree angle off a bridge. Or that the splashes from a puddle in front is actually an oncoming car and it must swerve into the school kids crossing the perpendicular road.

Are you working on the next season of Black Mirror?

In all seriousness, my fear (and maybe not fear, maybe it's happy expectation in light of the nightmare scenarios) is that if a couple of the "weird and terrifying" accidents happen, the gov't would shut down self-driving car usage immediately.


I am definitely not. Their version of the future is too damn bleak for me.

Your fear is very much grounded in reality. US lawmakers tend to be very reactionary, except in rare cases like gun laws. So it won't take much to have restrictions imposed like this. Granted, I believe some regulation is good; after all the reason today's cars are safer than those built 20 years ago isn't because the free market decided so, but because of regulation. But self driving cars are so new and our lawmakers are by and large so ignorant, that I wouldn't trust them to create good regulation from the get go.


> except in rare cases like gun laws

They're still very reactionary in that, which is precisely why it isn't very effective when a subset of them do react: there are plenty of smart things that could get proposed, but the overlap between people who know what they're talking about and people that want the laws is exceptionally small, so consequently dumb, ineffective stuff that has no chance of passing anyway gets proposed. What does get proposed is a knee-jerk reaction to what just happened, and rarely actually looks systemically at the current laws and gun violence as a whole. Example: the Las Vegas shooting prompted a lot of talk of bump stock bans. Bump stocks are so rarely used at all, never mind in violence, and they will generally ruin guns that weren't originally made to be fully-automatic very quickly if they're actually used for sustained automatic fire. A silly point to focus on suddenly. After the Florida shooting last month, so much of the focus was on why rifles are easier to obtain than handguns. And that's because overwhelmingly most gun violence involves handguns. Easily concealable rifles are already heavily regulated at the federal level for that very reason.


> Example: the Las Vegas shooting prompted a lot of talk of bump stock bans. Bump stocks are so rarely used at all, nevermind in violence, and they will generally ruin guns that weren't originally made to be fully-automatic very quickly if they're actually used for sustained automatic fire.

<off-topic> This is nonsense. Typical semi-autos are way over-built. Barring mechanical wear or explicit tampering with the disconnector, there is no risk whatsoever in firing thousands of rounds with a bump stock. Actually, the plastic/wood furniture is more likely to burn/melt before the mechanical parts actually fail. At worst, you might bend a gas piston, but the rifle will otherwise be fine.

The underlying reasoning behind the push against the bump stock ban is that it was basically a semi-auto ban, as with a bit of training you can trivially bump fire any semi-auto without a bump stock, from either the shoulder or the hip, with a mere finger. </off-topic>


>> you might bend a gas piston

Tubes on low- to mid-range civilian DE guns can burn out very quickly, and are in fact designed to do so long before you get damage to the more expensive parts of the gun - I've seen it happen in most of the cases (which are admittedly quite few in number despite how often I'm there) where I've seen someone using a bump stock at a range. In the most recent case I think the guy was on his 3rd mag and it ruptured. It was an M&P 15 Sport II, if I recall. Not a cheap no-name brand, but about as low-cost as you can get, and missing all the upgrades in the version they market to cops. High-end ARs would fare better, I'd expect, but high-end ARs are again so rarely used for actual violence because they're usually only purchased by people shooting for a serious hobby in stable life situations. And honestly I feel the same people buying those probably feel bump stocks are tacky and gaudy, like I do.

Even in the most liberal interpretation of the proposed law, I don't think any bump stock ban would become a semi-auto ban. I could see the vague language getting applied to after-market triggers, especially ones like Franklin Armory, but you've gotta have some added device for any of the proposals I've seen to even remotely apply.


Largely because of Ralph Nader.

It's amazing how reformers get demonized even after their platform is accepted wholesale by the rest of the world.


US lawmakers are very reactionary in the case of gun laws too, it's just that gun owners usually have enough political pull to block them from successfully getting laws passed. (The current campaign for gun control is 100% a reactionary response to whatever's been making the biggest news headlines. For example, the vast majority of US gun homicides are carried out with handguns, yet gun control supporters seem to think it's absurd they're more tightly regulated than AR-15s - which are relatively rarely used to kill anyone and have more mundane uses for things like hunting - just because the AR-15s are in the headlines. The US's most deadly school shooting was done with handguns too.) In fact, I'd argue the reactionary nature of US lawmaking is important to understanding why "sensible", "common-sense" gun control laws are so strongly opposed in the first place.


> US lawmakers tend to be very reactionary, except in rare cases like gun laws.

And for a good reason: they are constitutionally prohibited from doing so.


> the reason today's cars are safer than those built 20 years ago isn't because the free market decided so, but because of regulation.

All safety functionality was introduced and used way before regulators even knew it was possible.

Edit: please explain the downvotes, ideally with examples


> Are you working on the next season of Black Mirror?

In Black Mirror, all cars in the world will simultaneously swerve into nearby pedestrians, buildings, or other cars.


It doesn't even need to be that. Imagine the shit show a city would be when its entire transportation fleet is immobilized because someone has messed with their safety features.


That could happen during a massive, remotely-triggered software update.


Just imagine a batch of cars with malfunctioning inertial sensors (it's brought down more than a few of my drones). GPS and perception (through ML or LIDAR) will work most of the time to override such errors, but if there was a second malfunction... "The car is swerving left at 1m/s; correct right."
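
A toy version of the kind of cross-check I mean (illustrative only, not a real fusion filter): flag the IMU as suspect when its integrated yaw disagrees with the GPS heading change over a short window.

    # Illustrative plausibility check; ignores heading wrap-around and GPS noise.
    def imu_plausible(window, tol_deg=10.0):
        """window: list of (dt_s, imu_yaw_rate_dps, gps_heading_deg) samples."""
        integrated_deg = sum(dt * rate for dt, rate, _ in window)
        gps_change_deg = window[-1][2] - window[0][2]
        return abs(integrated_deg - gps_change_deg) < tol_deg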


Like the one that happened a few days ago with Oculus headsets.


That could happen because of a malicious 'time-bomb' placed by a hostile state actor in such an update.


More likely it would be something about dissidents being killed in their cars, or cars going after people who aren't liked.


If that's bad, what happens when the robocars get hacked?


Like we have botnets made out of thousands (millions?) of compromised computers, we could have entire fleets of compromised cars, rentable on the black market using cryptocurrency, that could be used to commit crimes (e.g. homicide) while keeping the killer anonymous.

Scary stuff. I hope these self-driving cars will be able and designed to work while completely offline, with no built-in way to ever connect to a network. But given the biggest player in the field seems to be Google, they'll probably be always connected in order to send data to the mothership and receive ads to show you.


"I hope these self-driving cars will be able and designed to work while completely offline, with no built-in way to ever connect to a network."

Don't hold your breath about that. There will be a huge load of data ready to be sold to advertising companies just by listening to what passengers talk about when passing near areas/stores/billboards/events etc.


> receive ads to show you

I'm just envisioning a scenario where the car automatically pulls to the side of the highway, locks the doors and dishes out a 15-second ad, and then the doors unlock and the journey resumes as normal.


Or just (virtually) replace billboards with personalized content


Using these cars to commit homicide was actually one of the plot points in the second book of the Three-Body Problem trilogy by Liu Cixin. Very much recommended if you are into sci-fi.


Those books are interesting in that Liu seems to get away with seriously portraying narratives that would be out of bounds in the "approved" popular culture of the West. Murder by robocar is a minor example, but others include portrayal of the inherent weakness of societies in which men are effeminate and the superiority of leaving strategic decisions to military authorities. (I don't particularly agree with those propositions, but they are certainly present in the books.)


I'm halfway done with the last book in the trilogy and I am finding the assumptions and viewpoints of the world from a Chinese perspective quite interesting. The one that struck me most was how he presents humanity's greatest strength over the technologically superior aliens as the human ability to conceal their true thoughts and the possibility of deception. Quite different from Christianity's high value placed on honesty.


The great stories of paganism and animism were composed by artists, and they all feature trickery and uncertainty. The Christian Bible has some of that (I like Job), but the majority was written by humorless unimaginative prudes. I suppose some of the Chinese philosophers are a little better than St. Paul, but mostly when they're being playful.


Confused by the downvote here. This is a perfectly legitimate question, and indeed it should be asked more, not less, often.


This is a stupid question. The answer is the same as if someone were to physically mess with your car.


Physical hacking can only happen to one car at a time.


What if someone hacks the assembly line for Ford?


What if someone puts a bomb in your car or messes with your brakes?


Right, but what if someone could put a bomb in 100k cars or mess with 100k cars' brakes remotely over the internet? A large enough quantitative change becomes a qualitative change.


To be fair, all the examples you gave could also happen to a human driver.


Will never happen - at least not with any explicit depth sensors. Any car with LIDAR has depth perception orders of magnitude better than yours and would never chase an object merely because it resembles road markings.


I mean, maybe the self-driving car shouldn't exist if it's just going to run people over.


>So we will have overall a much smaller number of deaths caused by self driving cars

Why? This is what the self-driving cars industry insists on, but has nowhere near been proven (only BS stats, under ideal conditions, no rain, no snow, selected roads, etc. -- and those as reported by the companies themselves).

I can very well imagine an AI that drives better than the average human. But I can also imagine that being able to build it anytime soon is not a law of nature.

It might take decades or centuries to get out of some local maxima.

General AI research also promised the moon back in the 60s and 70s, and it all died with little to show for it in the 80s. It was always "a few years down the line".

I'm not so certain that we're gonna get this good car AI anytime soon.


If self-driving cars 1. don't read texts whilst driving, 2. don't drink alcohol, 3. stick to the speed limit, 4. keep a 3-4s distance to the car in front, 5. don't drive whilst tired, 6. don't jump stop signs / red lights, it will solve a majority of crashes and deaths. [0]

The solutions to not killing people whilst driving aren't rocket science but too many humans seem to be incapable of respecting the rules.
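
Most of those rules are also trivially checkable in software. A toy sketch of rule 4 (illustrative only): the required gap is just speed times the time gap, so at 35 mph (~15.6 m/s) a 3 s gap means staying roughly 47 m back.

    # Toy check of rule 4: keep at least a 3 second gap to the lead vehicle.
    def following_gap_ok(ego_speed_mps, gap_m, min_gap_s=3.0):
        return gap_m >= ego_speed_mps * min_gap_s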

[0]: http://www.slate.com/articles/technology/future_tense/2017/1...


But it doesn't work like that. You can't just say "If they don't do X, Y or Z", because while they may not do X, Y or Z, that doesn't mean they won't do A, B or C, which are equally bad or worse. Human and self-driving are two completely separate categories; you can't just assume that things one does well the other also does well, and so just subtract the negatives. You could easily flip your comment to go the other way: "If human drivers don't mistake trucks for clouds or take sharp 90° turns for no reason, then they're safer".

I do think that self-driving cars will be safer, but it's on its proponents to prove that.


As the sibling comment says, it does depend on self-driving cars matching human level performance. But with all AI/Neural Networks it is very possible to match human performance because most of the time you can throw more human-level performance data at it.

Each of the crashes that self-driving cars have can be fixed and prevented from happening again. The list I gave are human flaws that will almost certainly never be fixed.

I further agree with you it's up to the proponents to prove that. It's a good thing to force a really high bar for self-driving cars. Then assuming the technology is maintained once AI passes the bar it should only ever get better.


> Each of the crashes that self-driving cars have can be fixed and prevented from happening again. The list I gave are human flaws that will almost certainly never be fixed.

Not if you put neural networks / deep learning in the equation. This stuff is black boxes connected to black boxes, that work fine until they don't, and then nobody knows why they failed - because all you have is bunch of numbers with zero semantic information attached to them.


Neural Networks are only a small part of self driving car algorithms. The planning and sensor fusion etc. is usually not done with deep learning (for this reason). Only visual detection, because we have nothing else working better in this realm. But lidar, radar, sonar, what have you all work without any deep learning. The decision making on a high level is also without deep learning.

The only questionable parts will be where the vision system fails, and those are similar actually to human problems. Because human vision also often fails (sunlight on windshield, lack of attention, darkness, etc.)


> But with all AI/Neural Networks it is very possible to match human performance because most of the time you can throw more human-level performance data at it.

Are you in very vague words implying that AGI has been invented? AI might have matched humans in image recognition, but it is far away in general decision making.

And finally, I am tired of listening to "safer than a human". That should never be the comparison, but rather a human at the helm and an AI running in the background which will take over when the human makes an obvious mistake -- you know, like an emergency braking system.


"Each of the crashes that self-driving cars can be fixed and prevented from happening again."

If those situations recur exactly as they happened the first time, sure they can be prevented from happening again.

That is, if a car approaches the exact same intersection at the exact same time of day, and a pedestrian who looks exactly like the pedestrian in this accident crosses the street in exactly the same way, with exactly the same other variables (like all the other pedestrians and cars around that the sensors can see), the data could be similar enough that the algorithm will recognize it as close enough to the original situation to avoid the accident this time.

But it's not at all clear how well their improvements will generalize to other situations which humans would consider to be "the same" (ie. when any pedestrian in any intersection crosses any street).


You missed a rather important point 0:

If self-driving cars are, at their best, roughly as capable as a human driver.

This is a big 'if'.

The solution to not killing people is a kind of rocket science. In fact, it's probably harder than rocket science[0]. It's predicated on a lot of things that are very, very, very hard. The fact is that humans, who are already pretty capable of most of these very very hard things, often choose to reduce their own capabilities.

If the best self-driving tech is no better than a drunk human, however, then we haven't gained much.

---

[0] though perhaps not harder than brain surgery.


I really don't think it's a big 'if'. As long as there is human level performance data, neural networks can be trained to match that level of performance. So it's a matter of time. It is indeed very, very hard, but also solvable.


I agree that it's solvable.

However, the process you're describing, of collecting human-level performance data, requires the ability to gather all of the data relevant to the act of driving in a manner consumable by the algorithm in question. This is the simulation problem, and it's very, very, very hard (it's why genetic algorithms have traditionally not gotten much further than toy examples, in spite of being a cool idea). Perhaps it is the case that it is very important to have an accurate model of the intentions of other agents (e.g., pedestrians) in order to take preventative action rather than pure reaction. Perhaps it is very important to have a model of what time of day it is, or the neighborhood you're driving in. The likelihood that it is going to rain some time in the next hour. Whether the stock market closed up or down that day.

It also assumes that neural networks (or the more traditional systems used elsewhere) are sufficiently complex to model these behaviors accurately. Which we do not yet have an answer to.

So, when I say, 'a big if', I mean for the foreseeable future, barring some massive technological/biological breakthrough. That could be a very long time.


For those who do not get the reference: https://www.youtube.com/watch?v=THNPmhBl-8I


Well, one answer is either it will be positively demonstrated to be statistically safer, or the industry won't exist. So once you start talking about what the industry is going to look like, you can assume average safety higher than manual driving.


> This is what the self-driving cars industry insists on, but has nowhere near been proven

Because machines have orders of magnitude fewer failure modes than humans, but with greater efficiency. It's why so much human labour has been automated. There's little reason to think driving will be any different.

You can insist all you like that the existing evidence is under "ideal conditions", but a) that's how humans pass their driving tests too, and b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.

It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.


> It's why so much human labour has been automated. There's little reason to think driving will be any different.

Repetitive, blunt, manual labor now, and probably much basic legal/administrative/medical work in the near future. But we still pay migrant workers to harvest fruit, and I don't imagine a robot jockey winning a horse race anytime soon.

Driving a car under non-ideal conditions is incredibly complex, and relies upon human communication. For example: eye contact between a driver and pedestrian; one driver waving at another to go ahead; anticipating the behavior of an old lady in an Oldsmobile. Oh, the robots will be better drivers eventually, but it will be a while. We humans currently manage about one death per hundred million miles; Uber made it all of two million. I expect we'll have level 5 self-driving cars about the same time we pass the Turing test.


> But we still pay migrant workers to harvest fruit

Harvesting fruit is far more complex than driving. It's a 3D search through a complex space.

> Driving a car under non-ideal conditions is incredibly complex, and relies upon human communication.

No it doesn't. The rules of the road detail precisely how cars interact with each other and with pedestrians.

> We humans currently manage about one death per hundred million miles; Uber made it all of two million.

Incorrect use of statistics.


> Harvesting fruit is far more complex than driving. It's a 3D search through a complex space.

Are you making a joke? "a 3D search [for a path that reaches the destination safely and legally] through complex space" is exactly how I would describe driving. (Also, driving is an online problem.)


Cars don't leave the road which is a 2D surface. In what way is that a 3D problem?


Ever heard of an elevated highway, ramp, flying junction, bridge, or tunnel?

I mean, yeah the topology is not as complex as a pure unrestricted 3d space but it's also more complex than pure 2d space. It's a search through a space, and it's complex, I don't know if nitpicking about the topology adds a lot here?


That's still 2D space. A car simply can't move along the z axis, so the fact that the road itself moves in 3 dimensions is irrelevant.

Even navigational paths that consider all of the junctions, ramps, etc. are simply reduced to a weighted graph with no notion of any dimensions beyond forward and backwards.
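
To make that concrete, a toy sketch (illustrative only) of route planning over such a graph - junctions, ramps and bridges are all just weighted edges, and no z axis ever appears:

    # Plain Dijkstra over a road network stored as {node: [(neighbor, cost_m), ...]}.
    import heapq

    def shortest_path_cost(graph, start, goal):
        best = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            cost, node = heapq.heappop(heap)
            if node == goal:
                return cost
            for nxt, edge_cost in graph.get(node, []):
                new_cost = cost + edge_cost
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(heap, (new_cost, nxt))
        return float("inf")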


Your comment is just random hopeful assertions though...

>It's why so much human labour has been automated.

But how much human labour that is as complicated as driving has been automated? As far as I can tell automation is very, very bad when it needs to interact with humans who may behave unexpectedly.

>b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.

>It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.

Actually plenty of experts within the field disagree with you.

“I tell adult audiences not to expect it in their lifetimes. And I say the same thing to students,” he says. “Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting. Nobody even has the ability to verify and validate the software. I estimate that the challenge of fully automated cars is 10 orders of magnitude more complicated than [fully automated] commercial aviation.”

Steve Shladover, transportation researcher at the University of California, Berkeley

http://www.automobilemag.com/news/the-hurdles-facing-autonom...

With autonomous cars, you see these videos from Google and Uber showing a car driving around, but people have not taken it past 80 percent. It's one of those problems where it's easy to get to the first 80 percent, but it's incredibly difficult to solve the last 20 percent. If you have a good GPS, nicely marked roads like in California, and nice weather without snow or rain, it's actually not that hard. But guess what? To solve the real problem, for you or me to buy a car that can drive autonomously from point A to point B—it's not even close. There are fundamental problems that need to be solved.

Herman Herman, Director of the National Robotics Engineering Center @ CMU

https://motherboard.vice.com/en_us/article/d7y49y/robotics-l...


>>But how much human labour that is as complicated as driving has been automated?

Quite a lot actually.

These days you can produce food for several thousand people using a few hundred people and plenty of machines.

Part of the reason why we haven't yet reached a Malthusian catastrophe is this.


Automated food production is very much simpler, because you're usually only producing one food item at large scale. That's the super easy stuff to automate.

Automated driving is more like a fully automated chef, that can create new dishes from what his clients tell him they like. Without the clients being able to properly express themselves. That's a lot more complicated than following a recipe.

Difficulty of automation goes roughly trains < planes << cars.

Automated trains are simple, but don't provide much value. Automating planes provided value because it's safer than just with human pilots. Automated cars are a different league of complexity.


> But how much human labour that is as complicated as driving has been automated?

Driving is not complicated at its core. Travel along vectors that intersect at well-defined angles. Stop to avoid obstacles whose vectors intersect with yours.

Sometimes those obstacles will intersect with your vector faster than you can stop, which is probably what happened to this woman. As long as the autonomous car was following the prescribed laws, it's not at fault, and a human definitely would not have been able to stop either.
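
For a sense of the scale involved, a toy stopping-distance calculation (illustrative; assumes roughly 6 m/s^2 of braking and half a second of sensing/actuation latency):

    # At 40 mph (~17.9 m/s) this gives roughly 36 m of clear road needed to stop.
    def stopping_distance_m(speed_mps, decel_mps2=6.0, latency_s=0.5):
        return speed_mps * latency_s + speed_mps ** 2 / (2 * decel_mps2)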

> Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting.

Which is why self-driving cars don't depend on visual light, and why prototypes are being tested in regions without inclement weather. Being on HN, I'm sure you're well familiar with the product development cycle: start with the easiest problem that does something useful, then generalize as needed.

> With autonomous cars, you see these videos from Google and Uber showing a car driving around, but people have not taken it past 80 percent. It's one of those problems where it's easy to get to the first 80 percent, but it's incredibly difficult to solve the last 20 percent. If you have a good GPS, nicely marked roads like in California, and nice weather without snow or rain, it's actually not that hard.

Right, so the experts agree with me that the problem the pilot projects are addressing is readily solvable, and that general deployment will take a number of years of further research, but isn't beyond our reach. This past year I've already read about sensors that can peer through ice and snow. 15 years is not at all out of the question.


Driving isn't just travel along a vector. Maybe for trains, but not urban roads. Urban roads are full of people and animals.

If a ball bounces in front of me, I slow down, expecting a dog or a child running after it. No self-driving car now, or in 30 years, is going to be able to infer that.

Driving is essentially interacting with the environment, reading hand signals from people, understanding intent of pedestrians, bicycles and other drivers. No way any AI can do that now.


> Driving isn't just travel along a vector. Maybe trains, but not urban roads.

Trains travel along a straight line, not a vector in 2D space.

> If a ball bounces in front of me, I slow down, expecting a dog or a child running after it. No self-driving car now, or in 30 years, is going to be able to infer that.

Incorrect. I don't know why you think humans are so special that they're the only system capable of inferring such correlations.


Or take a slow approach like Google's Waymo, which has been almost perfect on the roads so far. Uber is rushing it, and this could cost the whole industry.


exactly. I would personally never trust an Uber self-driving car, specifically because I've lost trust in the company itself.


Seems like you're jumping to conclusions here. Let's wait to see what exactly happened. I highly doubt that any of these companies just use "straight" ML. For complex applications, there's generally a combination of rules-based algorithms and statistical ML ones applied to solve problems. So to simplify: highly suspect predictions aren't just blindly followed.


Totally agree on your premise that we can rationalize humans killing humans - but we cannot do so with machines killing humans.

If self-driving cars really are safer in the long-run for drivers and pedestrians - maybe what people need is a better grasp on probability and statistics? And self-driving car companies need to show and publicize the data that backs this claim up to win the trust of the population.


It's a sense of control. I as a pedestrian (or driver) can understand and take precautions against human drivers. If I'm alert I can instantly see in my peripheral vision if a car is behaving oddly. That way I can seriously reduce the risk of an accident and reduce the consequences in the very most cases.

If the road was filled with self-driving cars there would be fewer accidents, but I wouldn't understand them, and with that comes distrust.

Freak accidents without explanations are not going to cut it.

Also, my gut feeling says this was a preventable accident that only happened because of many layers of poor judgement. I hope I'm wrong but that is seriously what I think of self-driving attempts in public so far. Irresponsible.


If you ask me which one is coming first, Quantum computing or "better grasp of probability and statistics" among the general public - I take the first with 99% confidence.


If a human kills a human, we have someone in the direct chain of command that we can punish. If an algorithm kills a person... who do we punish? How do we punish them in a severe enough way to encourage making things better?

Perhaps, similar to airline crashes, we should expect Uber to pay out to the family, plus a penalty fine. 1m per death? 2? What price do we put on a life?


>How do we punish them in a severe enough way to encourage making things better?

This is a tough problem, but if we stop or limit the allocation of resources to protect the secretive intellectual property that is autonomously running people over, that is the most effective incentive I can see. Plus it's pretty easy to do.

We don't even have to force them to disclose anything, by affording less legal protection, their employees will open-source it for us.


It's not just about punishment though. It's about our brains being (mostly) good at saying "well they were unlucky, but I'd never find myself in that situation, so I'm OK." And with some brains wired for anxiety doing the opposite and instead only thinking about how they are the ones that would be in all those situations.


> maybe what people need is a better grasp on probability and statistics

Definitely, though my interpretation of your statement is "self driving cars have only killed a couple people ever but human cars have killed hundreds of thousands". If that's correct, that's not going to win anyone over nor is it necessarily correct.

While the state of AZ definitely has some responsibility for allowing testing of the cars on their roads, Uber needs (imo) to be able to prove the bug that caused the accident was so much of an edge case that they couldn't easily have been able to foresee it.

Are they even testing this shit on private tracks as much as possible before releasing anything on public roads? How much are they ensuring a human driver is paying attention?


Hm, people are fine with folks getting mauled in an industrial accident, or killed misusing power equipment. So its not purely a machine thing.

Maybe because it's unexpected - the victim is not involved until they are dead?


The innocent bystander is indeed the reason. People working with machinery accept certain risks.


Pedestrians and cyclists (and car occupants, for that matter) accept risks near or on roads. You expect that, at a minimum, 1. drivers are vigilant and cars are maintained so that 2. the brakes don't fail and 3. the wheels don't fall off. Yet in my life I have personally witnessed all 3 of those assumptions being wrong at least once.


I'd be surprised if you could educate this problem away just by publishing statistics. Generally, people don't seem to integrate statistics well on an emotional level, but do make decisions based on emotional considerations.

I mean, people play the lottery. That's a guaranteed loss, statistically speaking. In fact, it's my understanding that, where I live, you're more likely to get hit by a (human-operated) car on your way to get your lottery ticket than you are to win any significant amount of money. But still people brave death for a barely-existent chance at winning money!


> These are "normal" reasons for bad accidents and we can not only rationalize them, but also rationalize how it wouldn't happen to us: "I don't drive near colleges where young kids are likely to drive drunk", "I don't text (much) while I drive", "I pay attention".

Tangent: is there a land vehicle designed for redundant control, the way planes are? I've always wondered how many accidents would have been prevented if there were classes of vehicles (e.g. large trucks) that required two drivers, where control could be transferred (either by push or pull) between the "pilot" and "copilot" of the vehicle. Like a driving-school car, but where both drivers are assumed equally fallible.


Pilots don't share control of an aircraft; the copilot may help with some tasks, but unless the captain relinquishes control of the yoke (etc.) he's flying it. So you'd still have issues where a cars "pilot" gets distracted, or makes a poor decision.


It's even more complex - there's the temporary designation "pilot flying" and "pilot not flying", with a handover protocol and whatnot: https://aviation.stackexchange.com/questions/5078/how-is-air...


> This is what's going to happen. If you've ever seen a machine learning algorithm in action, this isn't surprising at all. Basically, they'll behave as expected some well known percentage of the time. But when they don't, the result will not be just a slight deviation from the normal algorithm, but a very unexpected one.

Do we even know yet what's happened?

It seems rather in bad taste to take someone's death, not know the circumstances, and then wax lyrical about how it matches what you'd expect.


Maybe for now, autonomous driving should be limited to freeways and roads that don't have pedestrian crossings/pavements.


This is a great point. Solve it one step at a time.

But the problem is Uber's business plan is to replace drivers with autonomous vehicles ferrying passengers. i.e. take the driver cost out of the equation. Same goes for Waymo and others trying to enter/play in this game. It's always about monetization which kills/slows innovation.

Just highway-mode is not going to make a lot of money except in the trucking business and I bet they will succeed soon enough and reduce transportation costs. But passenger vehicles, not so much. May help in reducing fatigue related accidents but not a money making business for a multi-billion dollar company.

That being said, really sad for the victim in this incident.


From what I could tell, it never was a money-making business. Uber, now and forever, operates at a loss.


Another quirk of people, particularly when acting via "People in Positions of Authority", is that they will need to do something to prevent a next time.

Why did this happen? What steps have we taken to make sure it will never happen again? These are both methods of analysing & fixing problems and methods of preserving decision-making authority. Sometimes this degrades into a cynical "something must be done" for the sake of doing something, but... it's not all (or even mostly) cynical. It just feels wrong going forward without correction, and we won't tolerate this from our decision makers. Even if we would, they will assume (out of habit) that we won't.

We can't know how this happened, there is nothing to do, and this will happen again - but at a rate lower than human drivers' more-or-less opaque accidents... I'm not sure how that works as an alternative to finding out what went wrong and doing something.

Your comment is easily translated into "you knew there was a glitch in the software, but you let this happen anyway." Something will need to be done.


Even if we assume that we wanted to address this for real, I fear that it will be next to impossible to actually assess whether whatever mistake caused this has actually been addressed when all the technology behind it is proprietary. I can easily see people being swayed by a well-written PR speech about how "human safety" is their "top priority" without anything substantial actually being done behind the scenes.

I think any attempts to address such issues have to come with far-ranging transparency regulations on companies, possibly including open-sourcing (most of) their code. I don't think regulatory agencies alone would have the right incentives to actually check up on this properly.


It's amazing how quickly things can happen after an accident.

In a nearby town, people have petitioned for a speed limit for a long time. Nothing happened until a 6 year old boy was killed. Within a few weeks a speed limit was in place.


Safety rules are written in blood. Often someone has to die before action is taken. But eventually people forget, get sloppy, or consider the rules completely useless. See: nightclub fires.


> So we will have overall a much smaller number of deaths caused by self driving cars, but ones that do happen will be completely unexpected and scary and shitty. You can't really get away from this without putting these cars on rails.

One of the big questions I have about autonomous driving is if it's really a better solution to the problems it's meant to solve than more public transportation.


Do you have any experience developing autonomous driving algorithms? Because you are making a lot of broad claims about their characteristics that only someone with a fairly deep level of expertise could speculate about.


Interesting insight. While self-driving cars should reduce the number of accidents, there is going to be a subset of people who are excellent drivers for whom self-driving cars will increase their accident rates (for example, the kind of person who stays home when it's icy, but decides that their new self-driving car can cope with the conditions).


Your comment reminded me of the One Pixel Attack [1] and my joke about wearing a giant yellow square costume...

[1] https://github.com/Hyperparticle/one-pixel-attack-keras


> Moreover, the human brain won't like processing these freak accidents.

I think this is really key. The ability to put the blame on something tangible, like the mistakes of another person, somehow allows for more closure than if it was a random technical failure.


Very well put.

It boggles my mind that a forum full of computer programmers can look at autonomous cars and think "this is a good idea".

They are either delusional and think their code is a gift to humanity or they haven't put much thought into it.


I don't believe I'm delusional, my code is certainly medium-to-okay. I've put a lot of thought into this. I think autonomous cars are a very good idea, I want to work on building them and I want to own one as soon as safely possible.

Autonomous cars, as they exist right now, are not up to the task at hand.

That's why they should still have safety drivers and other safeguards in place. I don't know enough to understand their reasoning, but I was very surprised when Waymo removed safety drivers in some cases. This accident is doubly surprising, since there WAS a safety driver in the car in this case. I'll be interested to see the analysis of what happened and what failures occurred to let this happen.

Saying that future accidents will be "unexpected" and therefore scary is FUD in its purest form, fear based on uncertainty and doubt. It will be very clear exactly what happened and what the failure case was. Even as the parent stated, "it saw a person with stripes and thought they were road" - that's incredibly stupid, but very simple and explainable. It will also be explainable (and expect-able) the other failures that had to occur for that failure to cause a death.

What set of systems (multiple cameras, LIDAR, RADAR, accelerometers, maps, GPS, etc.) had to fail in what combined way for such a failure? Which one of N different individual failures could have prevented the entire failure cascade? What change needs to take place to prevent future failures of this sort - even down to equally stupid reactions to failure as "ban striped clothing"? Obviously any changes should take place in the car itself, either via software or hardware modifications, or operational changes i.e. maximum speed, minimum tolerances / safe zones, even physical modifications to configuration of redundant systems. After that should any laws or norms be changed, should roads be designed with better marking or wider lanes? Should humans have to press a button to continue driving when stopped at a crosswalk, even if they don't have to otherwise operate the car?

Lots of people have put a lot of thought into these scenarios. There is even an entire discipline around these questions and answers, functional safety. There's no one answer, but autonomy engineers are not unthinking and delusional.


We look at the alternative which is our co-workers, and people giving us our specs, and our marketing teams and think 'putting these people in charge of a large metal box travelling at 100kmh interacting with people just like them - that is a good idea'...

It is not that we think that software is particularly good, it is that we have a VERY dim view of humanity's ability to do better.


    I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road

Shouldn't be an issue with 3D cameras


You can't prove a negative and should be careful about promising what may turn out to be false. There is potentially quite a bit of money to be made by people with the auto version of slipping on a pickle jar. When there is money to be made, talented but otherwise misguided people apply their efforts.


Well, in that case, non-textured or reflective clothing could have a similar effect.


Wouldn't LIDAR pick it up still?


Imagine someone carrying a couple of two by fours they are holding vertically. They then stop on the sidewalk to check their phone at just the right angle. I am not really giving specific examples, as much as trying to illustrate that the way ML systems fail isn't by being slightly off from the intended programming, but by being really off.
