Police Say Video Shows Woman Stepped Suddenly in Front of Self-Driving Uber (bloomberg.com)
262 points by samcampbell on Mar 20, 2018 | 375 comments



There are at least two other trends that are, IMO, more to blame: SUVs in general for one, and the proliferation of a particular mixed-use road antipattern.

SUVs is an easy one. When an SUV hits a bicyclist, motorcyclist, or pedestrian, the unfortunate person tends to go under rather than over, as they would with a sedan. Motorcycle accident stats bear this out - the overall accident rate has gone down as people have started riding more safely, but the fatality rate has gone up as SUVs have taken over market share.

The roads issue is a bit more complicated, but basically roads need to either be slow enough to safely share space (25 mph or below), separated out so that only cars can use them (freeways), or kill an alarming number of pedestrians and bicyclists. Four lanes and a 35 MPH speed limit with infrequent crossings is pretty much going to have a body count.

I'm not trying to excuse Uber here, just trying to maybe convince urban planners to stop building things that convince people to try to cross four lanes of traffic going 35 MPH.


I'm a dreamer, but I'm dreaming that self-driving cars, or even just sensors on normal cars, will be able to record the data and make this kind of thing indisputable.

Then maybe we'll have corporate pressure aligned with social pressure to change the laws and move traffic patterns beyond whatever random luck has 'evolved', toward safe and sensible defaults.


Well...there's supposed to be a video. The police claim to have seen it, so at least the NTSB should have it.

The proliferation of dashcams makes your dream far more likely.


Why do people blame SUVs when pickup trucks are some of the highest selling vehicles in America and vans/minivans have existed since forever and are still popular? And they’re both the same size as SUVs?


If you have five children, you're likely buying a car with seven seats. If you regularly transport thousands of pounds of stuff, you're getting a pickup truck. SUVs, on the other hand, often get purchased in lieu of a sedan; pickup trucks and minivans have use-value that often justifies their place on the road.


SUVs exist largely because fuel economy standards effectively outlawed station wagons: http://www.thetruthaboutcars.com/2012/10/how-cafe-killed-com...


SUVs have use value as well: driving in difficult conditions.


Very few people actually use SUVs for this and many modern ones aren't even that good at anything beyond a rainy day.


What sort of little bubble do you live in? Where I live people get pretty good utility out of their sport utility vehicles! We have had a ton of snow this winter, including a couple of storms where having ground clearance was key to getting around. The snow was coming down so hard that the plows couldn't keep up. I saw who got stuck and who didn't in that big storm. The interstates were a mess, secondary roads had 2' banks where plows had only gone through in one direction. Front-wheel drive cars and minivans were stuck everywhere, as were tractor trailers, RWD cars without snow tires, and so forth. Everything AWD was still moving pretty well.

Now if you're saying people with true low and high range 4wd trucks and SUVs don't go rock crawling with them that much, that's probably true, but those vehicles have a lot of utility doing things like pulling a boat trailer up a wet boat ramp, driving on a frozen lake to go ice fishing, and driving on farm roads or other unimproved surfaces.


Which only means they have additional value off road, not on the road. The entire premise of having a road is to avoid difficult driving conditions.


Having lived in both places, commuting in northern New England this time of year often means that you're not driving on a surface a Silicon Valleyite would consider a "road".


Yeah, living in the Midwest I've actually encountered potholes that, if I drove my Fiat over them at any speed, would literally swallow the entire wheel and likely render the car undrivable without major repairs. Not only does an SUV/pickup have bigger tires, they're also made for driving over incredibly rough terrain. Solid axles and four-wheel drive mitigate a lot of those issues.


It's a relatively flat and level surface. An AWD car is just fine.

Yes, SV lives in its own bubble in many ways, but you don't need an SUV to commute on paved roads. I too live in New England.

Also, MA isn't northern New England.


AWD cars are great, I own a Subaru that does fine about 90% of the time.

There are still days that it stays at home and the trucks and SUVs come out because the snow depth is above the bumper.

Who said I live in MA?


>Who said I live in MA?

Statistics/stereotyping said so. If I guess that every New Englander on HN is from MA and lives east of 495 I'll be right far more often than if I pick some other place in New England.

>There are still days that it stays at home and the trucks and SUVs come out because the snow depth is above the bumper.

I've lived in all three of the northern New England states and currently live in MA because money is more important than happiness right now.

I was kind of looking forward to telling someone from MA that you don't need more ground clearance than an AWD car on roads that never get more than 5" of snow between plows. Even in northern New England it's very rare for more than 6" of snow to accumulate on public roads, except maybe a few days a year, and even then only in the most rural areas.


I think it's also social pressure to some extent. In the US, cars appear to be on average about 1.5x the size (in all dimensions) of cars in Europe. If you see one on the streets in Europe between other cars it often looks comically huge. I assume that if you drive a regular-sized car in the US (or a smaller one like a Toyota Aygo), it looks comically tiny among the rest of the giant traffic.


If this piece of road is the one I’m thinking of, there’s nary a crossing area for at least a mile. Phoenix isn’t known for walkability, and this isn’t downtown Tempe.


Seems to be a bit down this road (location chosen because of the speed limit sign): https://www.google.com/maps/@33.4350531,-111.941492,3a,75y,3...


how are SUVs even remotely related here? makes the rest of your argument difficult to take seriously


Because when an SUV hits someone, they get pushed under the wheels. When a sedan hits someone, they get pushed over the hood.

This is because a person's center of gravity is above where a sedan's front would hit them.


The self-driving car involved in the accident is an SUV.


Some of the commenters here think they can do a better job reconstructing the incident from newspaper cartoons than actual investigators who are working at the scene. You guys can just as well make up a story in which a murderous Uber robot chased a pedestrian off of the sidewalk and onto the road and then intentionally ran her over.


Why are we resorting to reconstructing a scene when Uber's Robot caught the whole thing on camera and LIDAR? The police statement seems quite premature, especially when the NTSB is running an investigation into the accident.


+1; Time will tell what happened as there should be enough data onboard. These systems are essentially in training mode and new scenarios will need to be accounted for.

I am curious as to why the driver didn't react. I wonder if the system keeps track of the driver as well, as I would imagine it has to be mind-numbing to sit behind a wheel for hours on end and not do anything... So in this situation, how attentive and alert was the driver who is supposed to be the fail-safe mechanism?


There's video from inside too: https://t.co/2dVP72TziQ


I think part of the problem is that almost all of the news articles have painted a picture of the biker being at fault. And there seems to be no one defending her, and it's ridiculous that the news articles are already one-sided here considering that this is a brand-spanking-new type of thing that has happened on this planet.

That being said, I'm totally open to this being the fault of either Uber or the woman, but there are too many questions in my head that suggest it wasn't simply the fault of the woman.

Are we saying that at this point we're going to let the initial police statement win because they are "experts" at driving? Are they also experts at the software in SDVs? I think at this point we shouldn't assume either story - but at the same time we can't let Uber, the police, or governments cover this story up for the sake of money and politics.

I think there are some pretty massive implications that should come from this - even if a human driver may very well have killed this person too (which I don't personally believe), don't we need to take a minute and ask a few questions about why the vehicle reportedly didn't even slow down after the hit? Why is the dent on the RIGHT-hand side of the car (.2 seconds seems false)? Would a human driver have slowed a lot earlier - was the pedestrian/biker expecting the car to slow because she clearly had a hand signal up, seeing a fake driver in the car?

There are too many questions and I hope that this specific story is treated with the utmost careful consideration. I for one can't seem to let it go in my head.

Someone died and it very well could have been because of our collective ego that we can accomplish this (SDVs) at this point in our history. We'll never hear from this woman again.


The NYT just published a graphic that purports to show the location of the vehicle and the victim: https://www.nytimes.com/interactive/2018/03/20/us/self-drivi...

This is the first report I've seen that indicates the Uber AV was in the right lane. Remember that the victim was crossing from left to right, and that all of the visible damage on the AV is on the right-side bumper. It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react, in a location with good street lighting and with clear weather. I would think most humans would be able to at least hit the brakes, if not completely avoid a collision.


Please stop spreading rumors and speculation. The graphic is an illustration and does not necessarily place the car in the actual lane. It clearly contradicts the police chief’s statement that the victim walked right into traffic from the center median. It even contradicts the left arrow pointing to the impact location in the same picture.


Please don't accuse me or other commenters of spreading "rumors and speculations", without evidence or logical arguments.

I didn't state a bunch of claims, I linked to a NYT published article. If it turns out that the NYT depicted the incorrect lane, then I expect we'll see a correction. But they published this graphic staked on their reputation for accuracy, not on just "rumors".

Moreover, it does not contradict what the police chief said; here is her initial and only interview:

https://www.sfchronicle.com/business/article/Exclusive-Tempe...

> Pushing a bicycle laden with plastic shopping bags, a woman abruptly walked from a center median into a lane of traffic and was struck by a self-driving Uber operating in autonomous mode.

> “The driver said it was like a flash, the person walked out in front of them,” said Sylvia Moir, police chief in Tempe, Ariz., the location for the first pedestrian fatality involving a self-driving car. “His first alert to the collision was the sound of the collision.”

The police chief also said a number of things that have subsequently been changed. In the Chronicle article, she's reported to have said that the Uber was driving 38 in a 35mph zone. The NYT graphic and other stories from today have said it was a 45mph zone.


"It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react, in a location with good street lighting and with clear weather. I would think most humans would be able to at least hit the brakes, if not completely avoid a collision."

Imagining an approximate version of the scenario and thinking that most people would be able to do what the known unimpaired driver could not is literal speculation. You are reading more into the graphic than can be reasonably derived from the graphic, and the -text- of the graphic is itself explicitly uncertain about the victim's location before and after the collision. Your comment expresses more credulity than the very source you linked to.

If you want to speculate that the Uber driver and police are covering up negligence, that's fine. Trying to claim some sort of logical high ground in a known uncertain scenario is not.


The parent didn't say that he is imagining anything. Read the post again.

They are saying that based on the information in the NYT article it does not seem credible that the woman came out of nowhere, that there was no time to react, and that the first thing the driver noticed was the sound of hitting the woman before even seeing her. The driver is a party in this story and may well be at fault for causing her death.


> It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react, in a location with good street lighting and with clear weather

Explaining the imagining is the salient issue.

> The parent didn't say that he is imagining anything

Splitting hairs over the phrasing is not compelling. The poster was proposing a scenario as the most reasonable interpretation while couching it to minimize criticism (the equivalent of the "just sayin" trope).

The premise is flawed. It never sounded reasonable to assume the pedestrian did not see the vehicle at all. I can imagine a perfectly reasonable scenario where the pedestrian tried (and failed) to clear the vehicular path before being struck on the attempted destination side of the road.


Sincere question: how did I couch it to minimize criticism? And I don't know exactly what the "just sayin" trope is, is it this one:

https://www.urbandictionary.com/define.php?term=just%20sayin...

Not sure how it applies. But anyway, in my defense, I didn't say that I was imagining scenarios. I said the exact opposite, that I could not (or, it was "very hard to") imagine a scenario in which someone crossing left-to-right is mostly unseen by a right-lane driver. By saying I can't imagine, I'm limiting myself to my own experience and observations.


I'm not splitting hairs. The person I replied to, and you, are splitting hairs.


Literally every discussion of note involves reading more into the original article, so yes, I guess you have me there, I am speculating. I didn't realize it was against the rules of HN discussion to bring in evidence external to the OP and state our premises.

Is it speculation to point out that the police and OP misidentified the gender and name of the driver (it's Rafaela Vasquez, not Rafael)? Or that newer articles have disputed the 35 mph limit?

edit: I'm not reading too much into the graphic by stating explicitly what it shows. I'm making the assumption that the NYT designer isn't being loose on the facts here and that the Uber vehicle was in the right lane.

So why can't I claim a logical basis based on the other agreed facts, such as the victim being a woman who is walking her bike across the street, and our general observation of how fast people are able to cross a lane of traffic? What set of physical laws and constants should we be using here?


"If you want to speculate that the Uber driver and police are covering up negligence, that's fine" - the conclusion of the comment you're replying to.

My beef is with 'Please don't accuse me or other commenters of spreading "rumors and speculations", without evidence or logical arguments.' The problem is fooling ourselves into being more certain than we can be. While this is common human behavior, especially in emotionally fraught situations, as people attempting to engage in reasonable discussion on a discussion board, we need to do better. Acknowledging our biases and limited perspective while researching and before replying improves the discussion for everyone.

addendum: what everyone agrees and states is uncertain - where on the road the victim was hit. Another area of uncertainty among us non-witnesses is the immediate traffic at the time of collision.


"The problem is fooling ourselves into being more certain than we can be. "

I want to thank you for saying something I have been having trouble putting into words. I'm not sure if it is me or whether something has changed recently with discourse on the internet, but I am coming across more and more people speaking as though they are an authority on a subject, while simultaneously posting completely incorrect information.


I can barely even stand to read threads here anymore, if the topic is something I'm very knowledgeable about. So many intermediate-level folks making way overconfident assertions that are way off base, or completely unaware of the state of the art. And it's virtually guaranteed that if I was foolish enough to spend time replying with corrections, they'd try to waste my whole afternoon arguing with me. I wish some of these overconfident statement-makers understood they and this community would be better off if they asked questions instead.


As someone who is doing exactly what you recommend: most of my questions go unanswered. It seems people are going to perform actions which bring them attention.


If you are an expert/outlier for a given topic, wouldn't you generally expect (in an open forum) that the majority of people speaking/opining on the topic will sound dumb/basic relative to what you know?


(I'm not the person you're replying to)

It's not their level of knowledge, it's them having a high level of conviction when they have no grounds to have such a high level of conviction.


Exactly. Their confidence not only helps to spread misinformation, but also signals to me that if I offer corrections, they'll argue with me down to the last pedantic toe-hold they can win. No thank you.


That's fair. I interpreted the other commenter as saying that what I linked to was just rumors and speculation. I don't see the NYT graphic (even with its caveats) as being just rumors, but yes, I'm clearly engaged in speculation and should be called out when I'm out of line. I only disagreed that my speculation was based on rumor (e.g. "Tempe Police have a history of mistreating and oppressing homeless people, so it's likely they are overstating factors favorable to Uber")


OK, but I already explained my reaction was interpreting the user describing my comment as "rumors". I thought it was an unfair accusation and wanted more "evidence", e.g. updated reports that contradicted the NYT graphic. I wasn't aspiring to this being any more than a usual discussion of news events.

But I agree, yes, it is easy to fool ourselves into being more certain than justified. I like having that pointed out during HN discussions :)


This is a hard conversation to balance. You were right to state that OP added more to the scenario than is known. But people tend to react defensively (which often turns into offense) when they’re personally called out, as you did there.

The nonviolent thing to do, I think, is to simply describe what is actually known and point out how the accounts differ.

For what it’s worth, none of us were there and it sounds like definitive information is lacking. My hope is for this to be used to ensure safer driving for everyone (autonomous or not).


News articles frequently post artistic renditions of far-away star systems, but it would obviously be foolish to try to draw any scientific conclusions from those graphics.

As far as I know, there hasn't been any official acknowledgement that the accident occurred in the right lane (or any specific lane for that matter). In fact, IIRC, there was a mention of the accident happening around 100 yards from a crosswalk, whereas the stretch of road depicted in the picture is around 200 ft away. The graphic also uses weasel words like "somewhere in this area". I mean, somewhere where? 100 ft south? 10 ft to the left? Only someone who has seen the actual video would be able to make an informed comment, and so far the police have not released it to the public.

I'm not convinced this picture can be reliably used for armchair forensics.


Agreed. As I mentioned in an earlier thread, it would make a lot more sense if the pedestrian was crossing from the right side (https://news.ycombinator.com/item?id=16625829). But it's all speculation until we see the video, or at least get more detailed and reliable reports.


>I didn't state a bunch of claims, I linked to a NYT published article. If it turns out that the NYT depicted the incorrect lane, then I expect we'll see a correction. But they published this graphic staked on their reputation for accuracy, not on just "rumors".

My experience with the NYT (and basically every news outlet) on technical issues that I'm familiar with is that they are not particularly concerned with getting the details 100% correct.

I don't expect better from them in the discipline of auto accident reconstruction.


Not just crossing all those lanes: since the damage is to the right side, she crossed in front of the vehicle. And yet it never braked.

That's a failure on every level, from prediction to just sheer radar obstacle detection. You don't hit things right in front of you; mid-range vehicles you can buy today already have sensors to emergency brake when the driver won't but an obstruction is ahead.


Right, so now she's crossing three lanes. So that's about 9 meters. At 1-3 meters per second for her, and 17 meters/second for the Uber SUV, it should have seen her 3-9 seconds (50-150 meters) before impact.

That's a pretty clear fail.
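
If you want to sanity-check that arithmetic, here's a quick sketch using the numbers above (all of them my rough assumptions - 3 m lanes, 1-3 m/s walking pace, ~17 m/s for the car - not measured data):

    # Back-of-envelope check with assumed numbers, not measurements
    lane_width_m = 3.0
    lanes_crossed = 3
    car_speed_mps = 17.0              # roughly 38-40 mph, per the estimate above

    crossing_distance_m = lane_width_m * lanes_crossed   # ~9 m

    for ped_speed_mps in (1.0, 3.0):  # slow walk to brisk walk
        t_exposed = crossing_distance_m / ped_speed_mps  # seconds she is on the roadway
        d_car = car_speed_mps * t_exposed                # car's distance when she left the median
        print("%.0f m/s: exposed %.0f s, car ~%.0f m away" % (ped_speed_mps, t_exposed, d_car))

    # 1 m/s: exposed 9 s, car ~153 m away
    # 3 m/s: exposed 3 s, car ~51 m away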

I wonder what the Uber's field of view is. At what distance should it have detected stuff about 9 meters off track?

But she did likely walk out from landscaping. So perhaps she initially "looked" like waving branches or whatever. And there must be some mechanism to suppress such false positives, or vehicles would brake when it's windy.


> That's a pretty clear fail.

And yet the police were so quick to say that camera footage indicates that Uber was likely not at fault. It was so quick that it makes me inclined to think it's pretty obvious. However, after the SF Chronicle published its exclusive interview with the Tempe's chief of police, the Tempe Police PR person had to issue a statement that, "Tempe Police Department does not determine fault in vehicular collisions."

I'm not one to believe in conspiracies or to automatically suspect shadowy influence, so I want to believe that the camera footage seems to argue that this was an unavoidable accident. But the evidence released so far argues against that, regardless of whether the victim's crossing was illegal or not. And why are the police making a judgment on this so soon in the first place, especially when the decision will be made by county law officials, and ostensibly after referring to Uber's full suite of sensor data?

https://www.engadget.com/2018/03/20/uber-fault-pedestrian-fa...


I mean... the police just make shit up all the time. And Arizona was very happy to attract Uber with their regulation-free state law.

The fact that they came out with a statement instead of saying "we don't comment on ongoing investigations" makes it even more shady.


I don't agree with the sentiment that it happens all the time (but don't disagree that it's happened more than a few times), but I do think police are much less likely to make something up if they know (and have seen) the camera evidence. No point in saying something (the day after the incident) that can so easily be contradicted later.


The first bombing victim in Austin was one of the first suspects of the police in his own bombing.


http://foxsanantonio.com/news/local/first-bombing-victims-fa...

https://www.thedailybeast.com/police-treated-first-austin-bo...

The police in Georgetown, about half an hour north of Austin, went with their gut in one murder case - "it has to be someone the victim knew" - and ignored physical evidence at the scene and the eyewitness testimony of the murder victim's son. Then the prosecutor kept working against the accused for the next 25 years and suppressed evidence. https://www.texasmonthly.com/politics/the-innocent-man-part-...


> And yet the police were so quick to say that camera footage indicates that Uber was likely not at fault.

I was surprised at that -- it doesn't seem to me that the police would have the technical knowledge of Uber's hardware and algorithms to categorically state that, let alone so quickly.


Well, if you saw that video and it really did show a < 1 second time between the pedestrian becoming visible and being hit, then the autonomous systems don't matter at all. That accident is physically impossible to prevent in that car.

So whether or not the police understand self-driving systems is not relevant to the fundamental physics problem. The key question is how long was the ped visible.


By the time that pedestrian has even traversed a single lane, the LIDAR on top has generated gigabytes of point cloud data showing her moving, assuming constant velocity of both car and ped, on a straightforward collision path.

I don't see how you can deduce from a single forward-looking camera that this crash was "physically impossible". That isn't even half of what we expect from human drivers. We expect them to swivel their head.


Where did I say just "camera"? By visible I mean visible in any way to the systems.


"The pedestrian becoming visible" would be to the video cameras. If the vehicle has other sensors (LIDAR, RADAR, etc), how sooner/later would the pedestrian be visible to these sensors? Would it be possible to judge that by looking solely at the dashcam? Would the sensors ignore the pedestrian as a false positive in these circumstances?


That's what I wonder. Yes, it has LIDAR etc. But there was this comment yesterday by arbie:[0]

> Sensor Fusion typically merges LiDAR with stereoscopic camera feeds, the latter requiring ambient light.

So might LIDAR data get de-emphasized when there's too little light for the cameras?

0) https://news.ycombinator.com/item?id=16619917


If there’s not enough light for the cameras, LIDAR would get emphasized, not de-emphasized.


Thanks. So in this case, how come the woman wasn't identified based on the LIDAR data?


No idea


I don't understand that comment. "Sensor fusion" is the integration of data from two or more different sensors. It doesn't matter what those sensors are. They don't even have to be different types of sensors.

It may well be that the Uber cars only have LIDAR and optical, but that doesn't have anything to do with sensor fusion.


I didn't mean just visible as in visible light visible - I mean detected in any way by the systems.


<1 second between becoming visible (on the left side of the vehicle) and being hit by the right side of the vehicle doesn't seem likely.


An accident may be physically impossible to prevent, but that's not the only standard. Initial reports (from the police) say that Uber AV did not even brake. Is not braking at all within a 1 second window the performance touted by AV's detection and braking system?


"ped"?


pedestrian


[flagged]


Which you follow up with a vulgarity? Come on. Be better than that. If your goal is to improve online discourse, lead by example.


[flagged]


I don't see how "ped" is necessarily an offensive abbreviation (though it's one that normally isn't used), so I don't think you're justified in believing that the commenter is trying to dehumanize the victim.


[flagged]


There are "PED XING" signs in the US for pedestrian crossings. I don't think it's considered disparaging.

https://www.google.com.au/search?q=ped+xing&source=lnms&tbm=...


[flagged]


As a non-native speaker I'm reacting to the use of "ped" as well, paired with the obvious victim blaming that seems to be thrown around a lot, though to a much lesser extent here than in other forums discussing this issue.

Driving is a sensitive subject; people take it personally. Trying to make generalizations about drivers is just going to land you in a world of trouble, almost worse than profanity.


lol seems like your sense of humor is not being well received here


Well it's more sideways, perhaps even caustic, something like that, I'd say, rather than humorous per se.

Still... stiff upper lip. Though I can't believe I'm getting downvoted for reminding people not to kill anybody while behind the wheel :-o


You're upset for someone using the word "ped" for pedestrian while using a word that's got a dictionary definition of being vulgar and offensive for women? I have to say I'm rather confused. I feel I'm being trolled.


Actually, in UK slang, that refers to someone who's being stupid. And still, everyone knows what it means in the US, I believe.

Edit: Sorry about the ambiguity. I wasn't referring to "ped", but rather to the slang that's offensive to women.


Upon reflection, I wonder where that British term comes from. Maybe it basically means "woman". Reflecting the slur that women are stupid. So it's still sexist, albeit not as vulgar as the US term.


Derived from pedant I believe.

noun: a person who is excessively concerned with minor details and rules or with displaying academic learning.


Sorry, I was ambiguous. I meant the other slang.


Bah... my intent was certainly to be vulgar, and I stand by that. But if it's going to be interpreted as specifically offensive towards women (an implication not present in UK English) then I'll certainly apologise to all that were thus offended, because that was not part of the plan.

Looks like it's too late to delete it, so we're stuck with it forever.

Please still do drive carefully!


I'm a male, but I do think it's an offensive term for women in the U.S., despite its more casual usage in the UK. The average female reader would be inclined to think that such slurs -- and their anti-female sentiment -- were considered OK here.

It's too late to delete but I think it's worth it (for future readers) to say that offense isn't intended, and that that word is frowned upon by most HN users and mods.


You should consider that they might be downvoting you for reasons other than the hard-to-believe one.


Arguably, you're getting downvoted for abrasiveness and name calling.


I fall more into the "AZ wants the SDC companies in state and having a death due to a SDC would hamper that with the public" camp. Better to find the SDC not at fault and save face.


If it falls to a question of stopping distances, self driving vehicles are not really practically different from any other car, save perhaps for the ability to reduce the reaction time phase of stopping. I can certainly imagine there will be scenarios in which experienced traffic cops can make a reasonable determination, especially if the speed, distances, locations etc of the car and pedestrian are known.
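
To put a rough number on how much the reaction-time phase matters, here's a sketch with generic round-figure assumptions (~17.9 m/s for 40 mph, 7 m/s^2 of braking on dry pavement, 1.5 s human reaction vs. 0.2 s for an automated system - none of this is Uber- or Volvo-specific):

    # Stopping distance = reaction distance (v * t_react) + braking distance (v^2 / (2*a))
    # All figures below are generic assumptions, not vendor specs.
    v = 17.9          # m/s, roughly 40 mph
    a = 7.0           # m/s^2, hard braking on dry pavement (assumed)

    for label, t_react in (("attentive human", 1.5), ("automated system", 0.2)):
        d = v * t_react + v ** 2 / (2 * a)
        print("%s: ~%.0f m to stop" % (label, d))

    # attentive human: ~50 m to stop
    # automated system: ~26 m to stop

So the braking phase is the same physics either way; the main thing automation could buy is the couple of dozen meters a human spends reacting.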


It's all speculation until we get a release of the actual video footage from the car. This is ridiculous. The NTSB/police need to release the actual footage to dispel speculation. We have actual evidence of exactly what happened and yet no one outside of the investigators and Uber can see it.


The NTSB investigation has begun and it may take a few weeks:

https://twitter.com/NTSB_Newsroom/status/976215176323194880

I do think discussion is warranted because the Tempe Police have gone out of their way to share details and preliminary judgments about the case. Some of these details have been inaccurate, both for and against Uber's favor. And Tempe police officials are already downplaying what their chief claimed:

http://www.phoenixnewtimes.com/news/federal-agencies-investi...

> Glover downplayed Moir's statements, saying some were taken "out of context" by the Chronicle. The chief disagrees with the Chronicle's headline, "Tempe police chief says early probe shows no fault by Uber," Glover said.


It's probably just PR.


My uncle bought a car with a proximity sensor, a basic thing for walls during parking. He, like many, was annoyed at the regular beeps etc.

I'm sure a system that warns more often for the sake of caution would be rejected by people.


edit: I agree with other commenters that the NYT's labeling is a little too vague. My assumption is that the NYT wouldn't arbitrarily show the Uber vehicle in the far-right lane unless there was some factual basis. If it turns out the NYT did make an arbitrary choice, and Uber AV was actually in the left lane, then NYT needs to be called out to make a correction.

Frustratingly, so many stories make no mention of the lane. This is the only one I've found so far, and it's from the local news-weekly:

http://www.phoenixnewtimes.com/news/federal-agencies-investi...

> The Volvo was in the lane nearest the curb, about 100 yards south of Curry Road, and going about 40 mph at the time of the collision. Initial evidence shows the vehicle didn't brake "significantly" before the impact.

My assumption is that the writer thinks "lane nearest the curb" is unambiguous to the average reader, which would mean the right-most lane. Because...why would the average reader (who isn't looking at the accident scene right now) assume that the left-most lane on any road is "nearest the curb"?

However, from Street View, we see that there are curbs on both sides of the street -- i.e. the median itself has a curb: https://www.google.com/maps/@33.4368426,-111.9428855,3a,75y,...

¯\_(ツ)_/¯


Good find.

And if the Uber had changed lanes quickly, to avoid hitting the woman, the backup driver would presumably have mentioned it.


Err, what? I don't follow at all. Don't roads have curbs on both sides?


When someone thinks of an average road/street, they (probably) think of opposing lanes of traffic. Curbside would be right-side. Yes, there is a curb on the left-side of the street, but it is on the far side of the opposing lane.

Example: if I tell you to drive two blocks down a "normal" street and then pull over to the curb, you will most likely not pull over to the curb on the opposite side of the street.

Also, isn't "curbside" used when talking about valet services, which generally have you pull over to a curb on the right when meeting the valet? (I don't know, I don't have the opportunity very often)


If the car was really in the right lane, I don't think an autonomous vehicle has any excuse for not reacting in this situation. A real person might not see the crossing pedestrian in bad light, but a self-driving car should have plenty of sensors capable of detecting a pedestrian crossing three lanes until they're actually in the fourth lane where the car was driving.


> Elaine Herzberg was struck while walking her bike across the street somewhere in this area

How do I know this graphic isn't entirely speculative?


I think that's a fair question, and that includes the text that says "Body seen in this area". I had tweeted at the author about it but hadn't gotten a reply (note: this may be because they don't see/reply to tweets from nobodies like me).

The distances, as depicted, are a little strange. If Uber AV hit the victim at 40mph, you would expect her body to be seen farther away from where it says "Body seen in this area".

It is incontestable that the Uber hit her on its right-front side. Here's a clear picture recently tweeted by NTSB: https://twitter.com/NTSB_Newsroom/status/976215176323194880


If this was indeed the overall scenario, there are a few factors I could think of that would excuse a human driver here:

1. Another car was in the lane to the left of the fatal car, blocking its view of the pedestrian.

2. The pedestrian fell back after reaching the curb.

But it looks like both the artificial and the human driver were asleep.


1. Wouldn't the pedestrian have to walk through said car then?

2. Quite plausible, especially trying to heft what looks like a pretty heavily loaded bike onto a curb.


Maybe that car had stopped for the pedestrian. We're not going to actually know what happened until the video is released.


Random note: drivers, when you are on a two-plus lane road, do not randomly stop (when there is no designated yield) to let pedestrians cross or other drivers turn. Cars in the other lanes may (probably, even) not stop, causing an accident.

It probably is completely unrelated to this event, but it’s an anti-pattern by well-meaning drivers.


The question that comes to my mind then is.... why didn't the human driver that was sitting behind the wheel react?

Could this be a demonstration of what people talk about with regards to partial autonomy being a detriment to human attention?


> Could this be a demonstration of what people talk about with regards to partial autonomy being a detriment to human attention?

Definitely. By the time the human realises something is wrong and has to take control, it's already too late.

Incidentally, this is also why autopilots in planes have been more successful --- when something goes wrong high up in the sky, there's still relatively much time to assess the situation and react. With cars, even if the car suddenly told the driver to take over, the driver would essentially have to already be fully attentive to the situation in order to make a decision in time. It's the difference between seconds to a minute in a plane vs. fractions of a second in a car.


Plus pilots in planes with autopilot engaged still have their hands on the controls and still monitor the activity of the autopilot.


> The NYT just published a graphic that purports to show the location of the vehicle and the victim:

It has a vague location for where the victim's body ended up, yes, but I don't interpret that graphic as purporting to show the actual location of the vehicle or victim at the time of the crash.

> This is the first report I've seen that indicates the Uber AV was in the right lane.

I'm not sure I'd go that far. I think the arrow represents the direction of travel, but I would assume they just placed it near the body, and aren't explicitly trying to suggest which lane the car was in. And the notation "somewhere in this area", in my eyes, confirms this.

Further, if they do know which lane it was in, that would be an excellent bit of investigative journalism that no one else has reported, and you'd expect it to be in the story, if not the actual lead. It's not, so...


I think the police are incorrect in saying that the woman walked out from the median. If you look at the photo (in the NYT) with the bike and car, it shows (and is captioned as) the car damaged on the right side and the bike on the sidewalk. It seems most likely that the woman stepped out from behind the trees on the right side of the street.

If people are actually correct about the median thing then I can't imagine how this is not a colossal failure of software. But again, it makes way more sense that she walked out from the right side of the road.


Well, now the video has been released. She was walking from the median. So yes, that seems like a colossal failure of the system.


> The NYT just published a graphic that purports to show the location of the vehicle and the victim:

Emphasis on ”purports”. The NYT graphic doesn’t indicate the vehicle was in the right lane, nor does it indicate a precise location for Elaine. I don’t think you have bad intentions here, but extrapolating a hypothesis from this graphic seems unfounded.

There isn’t enough information to go on to develop reasonable hypotheses from any angle, unfortunately. The investigation needs to complete and footage needs to be released.


> It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react [...]

(Pure speculation follows)

I wonder if it is possible that the problem is that she was moving too slowly for it, not too quickly?

If for some reason it could not get a good read on the component of her velocity parallel to the lane, it might mistake her for another vehicle moving at similar speed to the Uber, and then see her transverse velocity component as being due to normal drift within the lane. It would read the situation as a normal passing situation, not someone about to enter its lane.
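
(Still speculating) A toy constant-velocity check on the geometry, with made-up numbers and a made-up function name, would flag a conflict from the raw cross-lane component alone - so this hypothesis really hinges on the prediction layer discounting that drift as normal in-lane noise, not on the geometry itself:

    # Toy constant-velocity conflict check -- illustrative only, NOT Uber's actual logic.
    def crosses_our_path(ego_speed, obj_ahead_m, obj_lateral_offset_m, obj_cross_speed):
        """True if the object's lateral drift puts it into our lane before we reach it."""
        if obj_cross_speed <= 0:
            return False
        t_to_our_lane = obj_lateral_offset_m / obj_cross_speed   # time until it enters our lane
        t_to_reach_it = obj_ahead_m / ego_speed                  # time until we reach its position
        return t_to_our_lane < t_to_reach_it

    # Pedestrian ~50 m ahead, one 3 m lane to our left, drifting across at 1.5 m/s (assumed):
    print(crosses_our_path(ego_speed=17.0, obj_ahead_m=50.0,
                           obj_lateral_offset_m=3.0, obj_cross_speed=1.5))   # True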


The NYT graphic shows an empty road. Do we know that to be the case?

Is it possible the road had other vehicles blocking the view?


The OP article (from bloomberg, not nytimes) says that the woman moved from the median into traffic:

> The Uber had a forward-facing video recorder, which showed the woman was walking a bike at about 10 p.m. and moved into traffic from a dark center median. "It’s very clear it would have been difficult to avoid this collision in any kind of mode,” Sylvia Moir, the police chief in Tempe, Arizona, told the San Francisco Chronicle.


It's not hard to imagine at all. Did you watch the embedded video? The pedestrian is almost invisible. On my first viewing I didn't see her at all. On repeat viewing, and with foreknowledge, I could just make out her sneaker by reflected light, and her torso in silhouette. She has no reflective gear on her body or her bicycle, and the street lighting is nonexistent where she chose to cross (between street lights, and not on a marked crosswalk or corner). I'm pretty sure I would have hit this woman under these conditions, as horrifying as that is.


There's no blame for the car driving above the speed limit even though the driver is unable to see the risks ahead?


I'm pretty amazed the SDV didn't "see" her, and yet the photograph shows the SDV and the bicycle in the same frame. I mean, yes, I'm sure it can stop from 38mph in, say, 60 feet under perfect conditions, but there's no way a human reacted to the collision and slammed on the brakes or hit an emergency stop button quickly enough to bring it to a stop with the bike in frame like that. I wonder if the vehicle is programmed to do a full panic stop on collision -- which, in and of itself seems a bit dicey, in terms of determining when to slam on the brakes (bird strike? dog? deer?)


> Remember that the victim was crossing from left to right

I don't remember reading this, where was that reported? (Most of the blurbs I read were pretty scarce on the details...)


From the article:

“It’s possible that Uber’s automated driving system did not detect the pedestrian, did not classify her as a pedestrian, or did not predict her departure from the median,” Smith said in an email. “I don’t know whether these steps occurred too late to prevent or lessen the collision or whether they never occurred at all, but the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision.”


Maybe she stepped back into traffic after crossing? Perhaps the car thought she would be on the sidewalk by the time it arrived at that position but she stepped back or fell back into the lane.


The police are comparing what would have happened if the driver had been human. But AI cars are supposed to have a larger range of view. I personally feel a Waymo car would probably have noticed and avoided the woman. And with lidar, its being dark is actually no excuse, as it wouldn't matter.


I mean, there was a human in the car too, and the human's statement was that the first sign that there might be a collision was after the collision occurred.


Right, because "I was bored out of my mind and wasn't paying attention until an actual crash happened" is a statement that the human would have totally made if that were the case.


I haven't seen anyone post this yet, but I highly suspect the fact that she was pushing a bicycle impacted the machine-learning-driven AI's ability to determine she was a pedestrian. This reminds me of the kangaroos messing up the AV programs being tested in Australia. A human paying attention may have been wary of a person pushing a bike in the shadows, but for all we know the algorithm thought she was a bush or something because her profile was altered by the bike. There is a lot to speculate about, but machine learning isn't as smart as we humans tend to believe it is, and nowhere does it yet approach the form of general intelligence required to respond appropriately to all of its inputs the way a human paying attention could.


Weird scenarios that can't be predicted are important to consider. Waymo presentations like to talk about a situation they ran into where the car needed to stop because it encountered an old woman in a wheelchair with a broom chasing a turkey. As I understand it, most of these companies sensibly say "unknown weird thing in/near road means stop the car."

ooo, there's video of the turkey wheelchair broom chase: https://www.theguardian.com/technology/video/2017/mar/16/goo...


A person pushing a bike is far, far from a "weird scenario", though.


It may depend on whether the test cars are used on routes with lots of homeless people or not. For the Mountain View test center, I know of a handful of regularly seen homeless people on loaded bikes nearby in downtown Los Altos/Mountain View, but it's not close to the level of San Francisco/San Jose/Oakland/Los Angeles/Anaheim/San Diego.


A person pushing a bike is quite a common scenario, and it falls to the Uber engineers to account for it.


To a computer that’s never encountered this scenario before it might be.


Then that is the fault of Uber for not exposing it to this extremely common scenario beforehand.


She had a bunch of shopping bags hanging off the front. I imagine that could change how it was identified.


> As I understand it, most of these companies sensibly say "unknown weird thing in/near road means stop the car."

That's not entirely true. A Tesla, for instance, won't stop if it detects a stationary object on the highway, assuming it's a false positive of the algorithm.

https://www.wired.com/story/tesla-autopilot-why-crash-radar
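
The usual rationale (as the Wired piece describes) is that radar alone can't tell an overpass or a roadside sign from a stopped car in your lane, so returns with roughly zero ground speed get filtered out at highway speed. A heavily simplified sketch of that kind of filter (made-up function name and threshold, not Tesla's actual code):

    # Heavily simplified radar clutter filter -- assumed threshold, not any vendor's implementation.
    def keep_radar_target(ego_speed_mps, closing_speed_mps):
        """Drop targets whose ground speed is ~0 (overpasses, signs, parked cars);
        keep anything that is actually moving relative to the ground."""
        ground_speed = ego_speed_mps - closing_speed_mps
        return abs(ground_speed) > 1.0

    print(keep_radar_target(30.0, 30.0))   # False: stationary object, filtered out
    print(keep_radar_target(30.0, 25.0))   # True: slower vehicle ahead, kept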


That sounds rather dangerous considering stationary objects do appear on highways from time to time.


Given that another comment says she "suddenly" crossed three lanes of traffic and was on the far side of the car when she was hit, my suspicion is that the car thought she was riding the bike. Of course all of us computers know that bikes always go over 5 mph so surely she'd make it across. Why brake?

I want safe self-driving cars, when they are safe. But Uber's clearly-established cavalier attitude toward human beings apparently can't be trusted with self-driving cars. I'm disappointed that politicians thought they could.


I don't understand this line of reasoning at all, but it's getting repeated a lot on HN. Is the AI expected to hit cyclists, but not pedestrians? Shouldn't it consider any moving object to be a hazard?


It may not have had good enough information to tell that she was a cyclist at all, and instead interpreted her to be something static/non-human altogether (in parent's example, a bush). Of course, without seeing the data its sensors gathered, and what that data registered as, it's all speculation. The problem is with misidentification, not bad priorities.


I wouldn't be surprised if there was some kind of classification problem involved. But wouldn't the AV by default treat a solid object on the road as a hazard to brake for? There's no possible way to specifically train for every variation of thing that could be encountered on a city street (imagine Halloween, for starters).


It was a 3-lane highway. It wouldn't really make sense to brake if the system detected an immobile object in a different lane. The problem is more that the system failed to detect the woman moving between lanes.


But the system has the ability to detect whether an object is moving in parallel (i.e. within its own lane) vs. across lanes, right? That would seem to be fundamentally necessary in everyday conditions, e.g. day traffic in which cars are switching/merging lanes.

If the problem is that the system didn't detect the woman moving between lanes, then that seemingly contradicts the police statement that the victim moved quickly enough to surprise the AV and its driver.


I don't see how this is relevant. The car should avoid hitting ANYTHING. It shouldn't have to recognise what the object is if it's in the path of the car.


When I was getting my learner's permit I was practicing on a curvy mountain road. I saw a person with a camera phone on the "drop off" side of the road, taking a picture of what, from my angle, would've been a boring cliff face. But I knew that a camera-phone photographer was probably actually photographing a human, who would likely be on the other side of the blind corner. So I slowed down, despite my instructor chastising me for doing so. When we came around and there was a person in the road she asked, "How did you know?" If a self-driving vehicle can't make predictions about "irrational" pedestrian behaviour like this and slow down, it shouldn't be on the road.


Your own story disproves your conclusion. Your driving instructor, like many (many) other drivers, didn't make the inference that you did. People don't drive well. Cars are dangerous.

The criterion (really the only criterion) for whether or not automatic vehicles "should be on the road" is whether or not they are safer than the alternative. And that certainly doesn't include "being as good a driver as jdavis703 was in this particular anecdote".


The instructor, though, wasn't the one driving.

Being engaged in a particular role relative to a situation can (I believe) alter one's capacity to perceive and respond to it.

This isn't to say that for definite the instructor would have had the insight described in the parent post, but it is to say that you can't rule this out just based on the information that as a passenger they didn't have the insight.


> This isn't to say that for definite the instructor would have had the insight described in the parent post, but it is to say that you can't rule this out just based on the information that as a passenger they didn't have the insight.

That's a senseless digression though. Again, the criterion to use when deciding whether autonomous vehicles are safe is whether autonomous vehicles are measurably as safe as human drivers and very much not whether we can "rule out" the possibility that the machine might be subject to failure modes that human driver are already known to have anyway.

For every scenario and hypothetical like this one you can imagine where an autonomous vehicle would fail, I can come up with an equally hypothetical reason why they're better (hell, just read any of the media coverage of them). Both arguments are meaningless without numbers and analysis, c.f. this very article we're discussing.


> That's a senseless digression though.

In terms of your agenda, it might make no sense. It just irks me when I see simplistic assumptions about human information processing that don't take the "situatedness" of the processor into account.

I have no views, and did not attempt to comment, on machine information processing in this instance. I just wanted to point out that you can't simply go from the instructor's actually missing the inference as a passenger to the conclusion that they would have missed it as a driver. That's it :)


But I think we can say that many human drivers would not have slowed down in that situation.


The self driving accidents might be the easiest for humans to avoid, but if accident rates are lower for self-driving cars, it's still worth it. Let's look at overall numbers and severity, not how avoidable it would be for a human.


I am a massive proponent of self-driving vehicles for exactly the reasons you indicated. However, I do have a huge reservation: this tech can only peak if it is the only thing on the road.

My outlook on the transition period is not very optimistic. Humans are good at anticipating crazy behavior. Fatalities increasing (before they eventually decrease) is currently just as likely as the promised miracle in my mind - we don't have enough data to sway my mind either direction just yet.


Should your driving instructor also not be on the road?


Probably not. But I also failed my first driving test for going too slow (I was within the minimum and maximum speed range, but I guess a new driver going slow is a sign of an unconfident driver).


It’s been a while since I took a driving test, but I recall the test to be more than staying within the speed limit.

There was also staying in between the lines, following road signs, and integrating with traffic. Do you think you might have failed not because you weren’t pegged at the speed limit, but instead because you weren’t capable of driving safely?

I wasn’t there, so it could’ve been because you really were just driving too slowly. But maybe it was more than that.


They tell you why you failed when the test is over. And it was because of that.


Maybe shouldn't be a driving instructor.


I wouldn't particularly blame your instructor; a lot of people would have reacted the same way, which to me proves that it's not just AI - it's the roads and the driving that are unsafe. As pedestrians, we learn what's dangerous and what we can't do, and as drivers, we assume pedestrians' behavior from our own knowledge. But both AI and humans are subject to occasional 'irrational behavior'. Humans are not safe either in that situation. Even if in some cases a human might have an advantage over AI, AI has many other advantages over humans. It would be a big mistake to reject AI just because there are situations where its performance can be inferior to some humans.

I agree there's still a lot of work to be done, but my point is that the problem is the driving system itself, and a lot of people die because of it. AI can't directly change that, but if it can still do better than humans (in a few years), then it's worth it.


A proper (e.g. non-Uber) self driving car would never take a blind corner at a speed that doesn't allow it to come to a complete stop before it reaches its vision range.


As a side note, I find - as always - this part disturbing:

>The driver, Rafael Vasquez, 44, served time in prison for armed robbery and other charges in the early 2000s, according to Arizona prison and Maricopa County Superior Court records.

He served time in prison for armed robbery more than 15 years ago; what relevance does this have?

Presumably he has a valid driving license issued by the State, and was not under the effect of alcohol or drugs.

Having committed armed robbery has seemingly no connection (I mean it isn't like he was convicted for having killed someone while driving a car or something like that), and even if that were the case, some State must have issued (or renewed) his driving license, meaning that he was legally authorized to drive the car.


In the US, once you commit a crime, it'll follow you for the rest of your life, even after you paid your time and money.


>In the US, once you commit a crime, it'll follow you for the rest of your life, even after you paid your time and money.

I know, and understand how this (right or wrong as it might be[1]) could be of use to the police or the judiciary system; what I am highlighting is just how gratuitous it is in a piece of news.

[1] this is a (lesser known) writing by G.K Chesterton: "The perpetuation of punishment" (page 504 of "On Lying in Bed and Other Essays") circa 1907 https://books.google.it/books?id=QtWvMclbR9YC&pg=PA504


I imagine they're trying to imply that someone who was convicted of a serious crime could still be an unreliable, irresponsible, or untrustworthy person.

It's another question whether it's fair to assume that. I don't think it is, though personally I think it's reasonable to take the information into account as long as you restrain yourself from jumping to any conclusions.


>I imagine they're trying to imply that someone who was convicted of a serious crime could still be an unreliable, irresponsible, or untrustworthy person.

Sure, the whole point being that it is meaningless and IMHO disturbing to imply that.

Not that I am familiar with bank robbers, and of course I have no idea of the specific crime for which the driver was earlier convicted, but I believe that bank robbers actually need to be rather punctual, have planning capabilities, and usually be exceptionally good drivers; at least, that is what Hollywood usually shows us.

>I don't think it is, though personally I think it's reasonable to take the information into account as long as you restrain yourself from jumping to any conclusions.

I wonder how you can take that into account without drawing any conclusion, or without letting it influence your opinion; this kind of info is IMHO either relevant or it isn't (i.e. gratuitous).

It is a car accident in which the automated system failed, and the driver who was supposed to take over in case of such a failure didn't or couldn't. Unless and until it is established that the accident was caused deliberately, or by "deliberate inaction", the criminal past of the guy has no relevance; it is more likely that the whole thing happened too fast, or that he was distracted.

What if the driver was differently "marked" by society?

Like - say - known to belong to the Communist Party or to the Neo-Nazis?

Or if he was mentioned as being (choose one) gay, transgender, black, latino, illegally immigrated?

What kind of added info is that in the context of a car accident?


Call me crazy but isn't a big part of the promise of these systems supposed to be they see things humans wouldn't or couldn't? If they're just as surprised as the human behind the wheel, that feels like a problem.


Yes, to me, this is the most relevant part of the posted article:

> “It’s possible that Uber’s automated driving system did not detect the pedestrian, did not classify her as a pedestrian, or did not predict her departure from the median,” Smith said in an email. “I don’t know whether these steps occurred too late to prevent or lessen the collision or whether they never occurred at all, but the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision.”

We can accept that by the time the Uber AV "saw" the victim, it was too late to brake. But that doesn't mean we can't ask questions about what the AI actually saw and classified. And even if it was a "sudden" event for humans, was it "sudden" relative to computer reaction time? How fast did the victim move into the road that the Uber AV couldn't even brake?


The most relevant part is from someone with zero knowledge of what happened?


Yeah, but my comment was made back when there were 2 stories about the Uber accident. This one, and the original one. The Bloomberg story has little more information than what the police chief claimed to the Chronicle, and at that point, the Tempe PD was downplaying the importance of her determination.

But everything the police claimed was in context of witness interview and camera footage. Since this is a self-driving car, we know there is more data and more ways to assess performance besides camera vision.


It could be that the system never saw someone pushing a bike across a road in tests before; maybe it confused that with someone riding on the road at a different angle.


But why is accurate classification needed, when the overriding decision is about how the car should react now that an unidentified, moving object is moving directly into the car's path? Even if someone were riding at an angle, it's still in a direction that may intersect with the car's path.


Exactly, it shouldn't matter what the thing is, if it's in front of my car and bigger than a plastic shopping bag I don't want to hit it.


If either of those two cases is true, then the AI is not ready to be driving on public roads.


If the autonomous vehicle performed just as well as the human did (meaning a human would've hit her also), then yes, although it can be improved, I don't see how it is a problem.


If they are just robot versions of humans then I would argue they are far too underdeveloped to be on real roads with real people.


Agreed! This seems like an instance where even if the car knew it needed to stop, it couldn't do so in time.


Every millisecond of braking reduces the severity of the collision.


It is astonishing how many conflicting bits of information are slowly trickling out about this person's death.


It seems pretty common after a newsworthy crash for there to be a lot of uninformed speculation.

Investigations take time. Just like with airplane crashes, it's probably best to ignore everything about this incident until we see the results of the investigation.


Especially when there is video of it. Please tell me there is dashcam footage, since that's the first thing that should be available as evidence with a self-driving car.


Well, considering the whole thing is in a test phase, they must have a recording of all the sensor data, including what the cameras saw.


I wonder if they have a camera pointed at the safety driver? I'm curious if he is telling the truth, or was just distracted and could have theoretically braked for the woman.


Interestingly, pointing the finger at the driver is a get-out-of-"jail" card for Uber, but will ruin his life.

They can claim he's an integral safety measure for the test of the whole self-driving system, and that he failed to do his job.

Which would make other safety drivers either think "I have to pay 100% attention" (which they should, anyway), or "I'm outta here!"


Yes, the report you'd hope to be hearing from Uber is along the lines of "0.2 seconds after the pedestrian entered the travel lane the automated system identified her as a hazard and initiated an avoidance maneuver. Despite braking and beginning to swerve away, the car impacted the pedestrian 1.2 seconds later." That's the promise the automated vehicle people are selling. But in this case it seems the car was clueless.


If the woman suddenly walked in front of the car at the last minute, I don't see how the situation could have been avoided, no matter how fast a computer is able to process the information from the sensors.

There was also a driver behind the wheel, so it would suggest that humans are just as clueless.


”If the woman suddenly walked in front of the car at the last minute, I don't see how the situation could have been avoided”

Likely, but not necessarily. Depending (hugely!) on what actually happened, it might be that the car should have started slowing down before the woman changed course.

As an extreme example, if you’re driving at 50 mph and come up behind a kid cycling at the side of the road with a foot separation between the extrapolated trajectories of the bike and your car, should it be OK to continue your course at full speed, or should drivers take into account that kids are kids, and may move erratically?

Also (again purely hypothetical), if the car had already had several similar events on that road that resulted in near misses, should it have slowed down before it even entered the street, knowing the road segment to be particularly dangerous?

I think human drivers, even though they are horrible at attending to the road for extended periods, get into accidents relatively rarely because they know when they really need to pay attention.

Finally, I do not rule out that that “driver behind the wheel” reacted slower than would have happened if (s)he was actually driving the car.

Disclaimer: I’m a layman, and haven’t seen the video ⇒ Let’s wait and see what the NTSB will say about this.


There definitely exists a sort of pre-cognitive human quality to traffic.

Sometimes you just slow down on a subconscious hunch without any information on that particular situation. Or you might be very conscious not to slow down if you're driving a motorcycle in Asia in heavy traffic trying to make a turn with a huge truck barreling behind you.

There are massive differences in road cultures between countries, and it takes a while to adjust to local habits. Somebody who doesn't know these invisible rules is a very accident prone driver even if they can drive safely in another country.


A couple of personal examples.

1. I've noticed that adaptive cruise control doesn't account for vehicles entering my lane in front of me, even with their turn signal on. As a human driver, I see them entering and know to slow down (at least take my foot off the gas), but the system only performs a hard brake AFTER the car has actually moved into my lane. Disclaimer: I don't know if autopilot systems suffer from the same limitation.

2. I was once waiting to make a protected left turn at a busy intersection. There were two protected left turn lanes, and I was in the farther left lane. Light turns green, I take my foot off the brake, and prepare to accelerate. The car on my right, in the other left turn lane, is also barely starting to move forward, when it suddenly rocks to a halt, indicating they applied the brake, hard. Without thinking, I also brake hard. Out of nowhere, a car appears speeding right through where I would have been if I hadn't stopped. Apparently, this car was entering the intersection on the cross street, going right to left. He wanted to beat the red light (he didn't), and also make his left turn, right down my street, in the opposite direction I was headed. I guess he didn't realize there were two left turn lanes. I never saw him. If I hadn't gone with the herd instinct and hit the brakes, I'm sure there would have been a serious impact. I wonder if autopilot would have caught that one, but it probably depends on camera placement.


A single self driving car probably wouldn't (might do the same thing you did at best).

It'll take actual inter-vehicle communication to extend the sensor net, much like humans presently do by proxying through other cars' deviations from anticipated behavior (e.g. whether freeway traffic keeps going at proper speed around corners).


It's referred to in the motorcycle community as 'Spidey Sense' - the way your subconscious picks up dangerous situations before you consciously become aware of them, giving you a chance to slow down and become more vigilant.


This is a really great point. I wonder if at some point self driving cars will be taught with these regional anomalies in mind. And will they be activated/changed depending on where the car is shipped/activated (Asia vs. America)?


Apart from regional differences, there are a host of other, similar matters. Cruising through your city's bar district at 3am? I'm sure you'll drive more defensively and be prepared for erratic behaviour on the part of others.

However, one thing that I haven't seen discussed anywhere, but that strikes me as a hard problem: by having a non-human driver, you're literally taking out the human element of communication.

Everybody is taught about the importance of eye contact in driver's ed. What will we replace it with? I have no clue, and I doubt anyone else does. Yet it's a crucial technique we all use to negotiate traffic every day, regardless of our mode of transportation.

This is a hard problem, and it's less technical than cultural. It will be a bitch to solve.


Eye contact is another thing that doesn't apply in some Asian countries where by default everyone has dark tinted windows making it virtually impossible to see inside other vehicles.


> some Asian countries where by default everyone has dark tinted windows making it virtually impossible to see inside other vehicles.

Hmm. Perhaps. Personally, I've never noticed this.

Anyway, the concrete example doesn't matter.

I guess you know what I was trying to say: implicit communication between humans. Even without eye contact, there will be a host of minuscule actions (or inactions) that we use to convey intent. There will always be humans on roads, e.g. as pedestrians. Vehicles will be forced to interact with them, and this communication barrier makes it very difficult.

Don't think of highways, think of supermarket parking lots.


Well actually the parking lots are especially difficult due to lack of eye contact.

But I get your point.

People compensate by being very careful when parking and of course you can always roll down your window if the situation requires it.

A robotic vehicle simply couldn't manage here. There are way too many dynamic human exceptions. For example, it's common to drive against traffic on the wrong side of the road for short distances, since the intersections are so far apart.


That’s just human intuition, which is trained over time through experience. Which is also how machine learning works, so it’s not quite clear that humans have a sustainable advantage here.


Machine learning also has a tendency to unlearn things and corrupt its own models when left unaided.


So do people ;)


Well, actually, we have not proven that ML works the same way, or that our ML has the same input parameters as a human, have we?


I have had vehicles blow by me on a bicycle at 50 mph, on fairly narrow county roads.


Same here during my road biking years (I switched to jogging and unicycling). Even though I disliked the risk, I enjoyed the wind as it allowed me to ride faster.


or that at 50mph, a foot away, you're likely to blow the cyclist over ....


The woman cannot go from being "not in path" to "in path" instantaneously unless you believe in teleportation. Supposing the woman got about 1 foot into the path of the car before being struck, the car would have had a minimum of 1 ft / <woman's speed in ft/s> seconds to react, assuming total blindness until the woman was "in path" (I can't think of a real-world scenario where this assumption would be strictly true). Supposing a speed of 12 ft/s (~8 mph), the car would have had a minimum of 8/100ths of a second to react. Supposing 3 ft of visibility before she entered the path (about the distance from the edge of the car to the edge of the lane), that's 4 ft of lateral travel in total, giving the car a minimum of ~1/3 of a second to react. That's assuming the woman was biking along at a good clip already, which is unlikely given that she was crossing a road. So, in all likelihood, the car had over 1/3 of a second to do something. That's just above the typical reaction time of a human (~1/4 second), but I don't think it's an unreasonable expectation for an autonomous vehicle.
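
A minimal sketch of that arithmetic (the lateral speed, the 3 ft of visibility, and the 1 ft of intrusion are all assumptions carried over from the paragraph above, not figures from the incident; the function name is purely illustrative):

    # Rough estimate of how long a laterally-moving obstacle is visible
    # before impact, given an assumed lateral speed and visibility distance.
    def time_available_s(visibility_ft, intrusion_ft, lateral_speed_ftps):
        # Seconds between the obstacle becoming visible and it reaching
        # the point of impact, assuming a constant lateral speed.
        return (visibility_ft + intrusion_ft) / lateral_speed_ftps

    # A brisk cycling pace (~12 ft/s) vs. an ordinary walking pace (~4 ft/s).
    for speed_ftps in (12.0, 4.0):
        t = time_available_s(visibility_ft=3.0, intrusion_ft=1.0,
                             lateral_speed_ftps=speed_ftps)
        print(f"{speed_ftps:4.1f} ft/s lateral -> {t:.2f} s available")
    # 12 ft/s gives ~0.33 s; a 4 ft/s walking pace gives a full second.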


I don't see how this is at all accurate without taking into account the speed of the car. If human reaction time is 1/4 of a second, how much time does that actually leave for the car to move as well? The car can't teleport either.


We know the car was going roughly 40 mph, so that puts some constraints on the minimum response time that was available. Unless this woman literally catapulted in front of the car, there were at least 4 feet of lateral travel, at walking pace, worth of time to react. You do need to make assumptions about how fast she was moving, of course, and as has been noted elsewhere in the thread you have to assume that the driver was in the left lane for this scenario to even be remotely plausible. Even in this sequence, the car should have been able to substantially decelerate but, looking at the pictures, that doesn't seem to have happened.
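
For a rough sense of how much speed the car could have shed in that window, here is a hedged sketch assuming hard braking at about 0.7 g (a generic dry-road figure, not anything measured for this vehicle):

    # Speed remaining after braking for a given time at an assumed deceleration.
    G_FTPS2 = 32.2          # gravitational acceleration, ft/s^2
    MPH_TO_FTPS = 1.46667   # 1 mph is about 1.46667 ft/s

    def speed_after_braking_mph(initial_mph, braking_time_s, decel_g=0.7):
        v0 = initial_mph * MPH_TO_FTPS
        v = max(0.0, v0 - decel_g * G_FTPS2 * braking_time_s)
        return v / MPH_TO_FTPS

    print(speed_after_braking_mph(40, 1.0))   # ~24.6 mph after 1 s of braking
    print(speed_after_braking_mph(40, 0.33))  # ~34.9 mph after 1/3 s of braking

Even a third of a second of hard braking takes a meaningful bite out of the impact speed.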


If I were driving a car and saw the pedestrian ahead of time walking or trying to cross, I would have slowed down and changed lanes away from them, or at least moved far right if safe. Same thing when you pass bicycles: change lanes away if possible, or at the very least give more buffer. You can never tell what someone may do, so you should anticipate the unexpected.


I haven't read the words "defensive driving" in any of the articles about this incident.


Yes, and they thought it was a good idea to make cars go at 38mph in a 30mph zone.

Edit: The limit seems to be 35 mph, sorry; thanks to all of you for pointing that out. (38 is still over the limit, so why allow a robot to do that?)


This article said 35mph. Other articles have suggested it was a 45mph zone.

On Google Streetview, you can see that the road changes from 35mph to 45mph at the underpass before the site of the accident.


"The vehicle was doing about 40 miles per hour on a street with a 45 m.p.h. speed limit when it struck Ms. Herzberg, who was walking her bicycle across the street, according to the Tempe police"


> "The vehicle was doing about 40 miles per hour on a street with a 45 m.p.h. speed limit when it struck Ms. Herzberg, who was walking her bicycle across the street, according to the Tempe police"

According to the article page that you are commenting, "The speed limit where the accident occurred is 35 mph, police spokeswoman Lily Duran said."

Even so, defensive driving is more than simply driving the speed limit. Also from this article, "... the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision."


It might be an error? If you travel that road on Street View, you see a sign for 30 mph, then 35 mph before the bridge over the water, then 45 mph at the underpass. The sign is mounted above the road, on the underpass itself.

Of course, something might have changed since street view went through.


It was a 35 mph zone.


Are you sure? Per Street View, there's a 45 mph sign where the road goes under the underpass, prior to the site of the accident. Unless that's since changed, of course.


Another possibility: the car was relying on outdated speed limit data. Which is another potential failing of such systems.


The human in the car said that the first sign they got of a potential issue was the sound of the collision itself. Unless they were paying no attention, that might indicate that this scenario was likely to be very difficult to avoid?


Humans are terrible at remembering specific events, particularly after something so traumatic as hitting and killing a pedestrian. And given that a woman was killed, driver inattention doesn’t seem at all unlikely.


Yep, after a friend had someone try to use her van to commit suicide, I am paranoid and get ready to avoid every pedestrian. I don't know what I'd do if I lived somewhere with many pedestrians.


You should be paranoid about pedestrians it turns out, even if they aren’t suicidal. You are in a pedestrian-killing machine, after all. Being paranoid about this is no less reasonable than being sure to keep your chemicals and sharp knives out of the reach of toddlers if you have them both in your home.


This is an absolutely terrible idea. Don't do this. Your job as traffic is to be as predictable as possible. You're just going to cause more accidents by driving erratically.


An accident might have been unavoidable, but fatality is the big question. The risk of pedestrian fatality jumps by ~4x between 30mph and 40mph.[0] So the question is - did the car start breaking as soon as possible?

As an aside, that statistic is why I never speed in pedestrian zones.

https://nacto.org/docs/usdg/relationship_between_speed_risk_...


> did the car start breaking as soon as possible?

“braking”; “breaking” is something else, and not a desirable response.


"breaking" implies it was "working" in the first place.


Based on the NTSB pictures of the vehicle as well as police statements, it doesn’t seem that the vehicle tried to brake.


If the woman suddenly walked in front of the car at the last minute, I don't see how the situation could have been avoided

From /r/roadcam yesterday: https://www.youtube.com/watch?v=wYvKPMaz9rI

Human driver avoided hitting the pedestrian. There are lots of videos of human drivers successfully not hitting people who "suddenly" pop out "at the last minute". Self-driving cars should be able to match that. If they can't, they don't belong on public roads yet.


But, ideally, the machinery should be much, much faster at becoming less clueless than the human driver; as the parent said, the ideal is that it should at least successfully identify the impossible situation, even if it then fails to resolve it.

It should not have no awareness that it's in an impossible situation (because that implies it will have no awareness in similar, possible, situations).


The fact that in most cases the human has to do nothing means that it is unlikely that the human is anywhere close to being as vigilante as an actual human driver. The fact that the human is clueless isn't surprising at all. In fact this is the real danger with systems like Tesla's autopilot. You still ned to be vigilante, but you are so bored that it is hard to be.

The safety humans here might as well be hood ornaments.


The word you're looking for is "vigilant". "Vigilante" means "a member of a self-appointed group of citizens who undertake law enforcement in their community without legal authority, typically because the legal agencies are thought to be inadequate".


Correct. I also meant “need” and not “ned”.


I think the parent commenter to your comment was alluding to this kind of stuff, a lack of depth in fallback strategies and systems trained to focus on objects in motion over stationary ones.

https://www.theatlantic.com/technology/archive/2018/03/uber-...


I'm really skeptical about swerving as an evasive maneuver. It's complicated to get that right. But yes, if it didn't at least slam the brakes then something went badly wrong.

Maybe you can't prevent an accident, but you can at least reduce momentum and demonstrate that your system was working correctly.


You can (and should!) brake and swerve at the same time. Unless you are driving a really old car, the car is capable of doing that.

In my driving lessons I got told to swerve for humans. Pedestrians and cyclists are the least protected participants in traffic. If you swerve and hit another car, the resulting crash will cause a lot of damage, but injuries to humans will likely be much less severe, if they even happen.

Animals, especially small ones, are a different matter. Using the same argument, it is better to risk hitting the animal than to risk swerving into other traffic.


Yes and no. The very action of braking will reduce available grip for the swerve. However, ABS and stability control mean you are at least quite unlikely to lose control if you try to do both at the same time. And of course, in this case, only a mild swerve would have been required, and adding braking would help scrub speed in the event that the obstacle moves in a manner that your swerve does not avoid.
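
To put a rough number on that trade-off, here is a simple sketch using the textbook "friction circle" approximation (the 0.9 friction coefficient is an assumed dry-asphalt value, not anything specific to this car):

    import math

    # Friction-circle approximation: combined longitudinal (braking) and
    # lateral (swerving) acceleration cannot exceed mu * g.
    MU = 0.9    # assumed tyre/road friction coefficient (dry asphalt)
    G = 9.81    # m/s^2

    def lateral_grip_left(braking_accel_ms2):
        # Lateral acceleration still available while braking at the given rate.
        total = MU * G
        return math.sqrt(max(0.0, total**2 - braking_accel_ms2**2))

    print(lateral_grip_left(0.0))           # ~8.8 m/s^2: full grip available for swerving
    print(lateral_grip_left(0.5 * MU * G))  # ~7.6 m/s^2: moderate braking leaves most of it
    print(lateral_grip_left(MU * G))        # 0.0: threshold braking leaves nothing to steer with

So moderate braking costs surprisingly little steering grip, which is roughly the behaviour ABS and stability control try to preserve.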


There are so many variables here. If you are travelling at 30 km/h and swerve into the path of a car going the opposite direction at 80 km/h, that's a massive crash.


Why didn't the human hit the brake? I don't understand how this is even news to be honest. There was a human in the driving seat and he failed to stop just like the computer.


Probably the human wasn't paying close attention. That is the problem with level 3-4 automated vehicles: a human who is not in primary control of the vehicle has trouble paying attention and is unable to take over control quickly if it is needed.


That's why I think vehicle automation development should follow the path where the computer is not in primary control of the vehicle but can take over quickly when needed. Unlike humans, computers don't have trouble paying attention.


Wait, are you now blaming level 3-4 automated vehicles for the inattention of the human?


It’s a problem people knew about back when “autopilot” was just for aircraft, not cars.

https://blog.prototypr.io/automation-is-making-us-dumber-and...


Ultimately, we have to design systems for the humans that actually exist, not some hypothetical superhuman. If it's not safe for actual humans to use then it's not safe full stop. It's not like this is some new issue; it's one of the main reasons why experts have been pessimistic about the prospects of self-driving cars and why existing car companies have been reluctant to deploy heavy automation in the field.


I think he's blaming the folks who put level 3-4 vehicles on the road presumably knowing full well that said human inattention would put people at serious risk of injury and death in the car's edge cases.


Computers have better reaction times than humans. Forcibly braking is a simple maneuver.


> Computers have better reaction times than humans. Forcibly braking is a simple maneuver.

The question isn't about the MHz speed of a computer or the theoretical reaction time, but the lack of reaction by the computer in the actual real world. It doesn't matter if this is a sensor failure, a software bug, or a slow reaction by a computer. The result is the same: a person died.

From the article:

*"... the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision.”


Swerving is a better way to avoid a collision, but braking would have at least reduced the severity.

Of course, this all presumes you can quickly analyze the situation and conclude that it is safe to swerve (which, in theory, the computer can do far faster than a human can).


Even if the reaction is within 0.1 s for the machine, you cannot brake and stop immediately. This is physics, and a mechanical problem.

Please set aside the software mindset when looking at this.
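
A minimal back-of-the-envelope sketch of that point, assuming an optimistic 0.1 s machine reaction time and hard braking at about 0.7 g (both are assumptions, not data from this incident):

    # Total stopping distance = distance covered during the reaction time
    #                         + braking distance v^2 / (2 * a).
    MPH_TO_MPS = 0.44704
    G = 9.81

    def stopping_distance_m(speed_mph, reaction_s=0.1, decel_g=0.7):
        v = speed_mph * MPH_TO_MPS
        return v * reaction_s + v**2 / (2 * decel_g * G)

    print(round(stopping_distance_m(40), 1))  # ~25 m to a full stop from 40 mph

So even a near-instant reaction doesn't guarantee a miss at 40 mph; what it buys is the speed shed before impact.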


The car can begin to brake right away, and must certainly be expected to have a faster reaction time than a human. Whether it started to brake seems like the most important question here.


I'm not in the field, but I am assuming that the AI/ML cannot match the human ability to spot possible road hazards in the peripheral vision area.

For example, a kid attempting to chase down a ball that is rolling towards the road (an object on a collision vector), especially with the ball and/or kid suddenly hidden from view by a parked car (the real environment vs. the visible environment). Or a group playing basketball in a driveway. Both are cases where slowing down is always the safer bet.


Sure, but I don't think you can expect that kind of information the next day or even the next month.

Someone is dead and there may well be one or more trials as a result. Facts will take as long as they take to surface.


As was the driver, mind you


A 4000 lb SUV, traveling nearly 40mph, at night, on a 4 lane divided highway, hits a pedestrian walking a bike.

The kinetic energy mismatch is the real problem, and at the very least, these companies should be testing at only 20-25mph, with _much_ lighter vehicles.

We'll have to wait for the NTSB to check in, but I'd be surprised if Uber isn't shut down (at least in Tempe) for a good long while.


Agreed. While I can understand that it's much easier to strap sensors/equipment onto an already available car, it makes more sense to me from a safety point of view to test with some lightweight shell of a vehicle, to cause the least damage to other entities.


Oh, it's her fault, she was jaywalking, and the robots now take precedence. No need to ticket jaywalkers! Uber's fleet will take care of these lawbreakers!

Ridiculous. And this company, after everything it's done, is still around, and now it's killing people!


It'll only be a matter of time before autonomous-vehicle manufacturers push to outlaw pedestrians and bicyclists from roadways as a 'safety measure'.


I assure you that if you jump in front of a moving train you will be killed. How is this different?


Cars are not trains, for one.

Road usage is shared among cars, motorcycles and scooters, trucks, pedestrians, bicycles, mobility scooters, and other things.


Train tracks are also shared at train crossings. Yet if a car went there after the warning lights went on, the car would be to blame.


So autonomous vehicles should come with flashing warning lights every where they go?

Also, trains don't change lanes, or stop quickly. It's simply a bad analogy.


You are right, a better analogy would be a person trying to cross in the middle of a random track segment. The woman didn't cross at a crosswalk, so that part of the road was not designated for pedestrians at all.

Cars (at least autonomous ones) signal before changing lanes and slowing down.

I think that flashing lights is too annoying, but forcing cars that drive in the night to have lights sounds like a reasonable (and existent) rule.


The main difference is that a train, even under emergency braking, can't stop quickly, and it also can't swerve to avoid an obstacle. If you jump in front of a moving train, even if the train driver could see you before you jumped and could predict you would jump, you will get hit.

