SUVs are an easy one. When an SUV hits a bicyclist, motorcyclist, or pedestrian, the unfortunate person tends to go under the vehicle rather than over the hood, as they would with a sedan. Motorcycle accident stats bear this out: the overall accident rate has gone down as people have started riding more safely, but the fatality rate has gone up as SUVs have taken over market share.
The roads issue is a bit more complicated, but basically roads need to either be slow enough to safely share space (25 mph or below), separated out so that only cars can use them (freeways), or they will kill an alarming number of pedestrians and bicyclists. Four lanes and a 35 mph speed limit with infrequent crossings is pretty much guaranteed to have a body count.
I'm not trying to excuse Uber here, just trying to maybe convince urban planners to stop building things that convince people to try to cross four lanes of traffic going 35 MPH.
Then maybe we'll have corporate pressure aligned with social pressure to change the laws and move traffic patterns beyond what random luck has 'evolved' and toward safe and sensible defaults.
The proliferation of dashcams makes your dream far more likely.
Now if you're saying people with true low and high range 4wd trucks and SUVs don't go rock crawling with them that much, that's probably true, but those vehicles have a lot of utility doing things like pulling a boat trailer up a wet boat ramp, driving on a frozen lake to go ice fishing, and driving on farm roads or other unimproved surfaces.
Yes, SV lives in its own bubble in many ways, but you don't need an SUV to commute on paved roads. I too live in New England.
Also, MA isn't northern New England.
There are still days that it stays at home and the trucks and SUVs come out because the snow depth is above the bumper.
Who said I live in MA?
Statistics/stereotyping said so. If I guess that every New Englander on HN is from MA and lives east of 495 I'll be right far more often than if I pick some other place in New England.
>There are still days that it stays at home and the trucks and SUVs come out because the snow depth is above the bumper.
I've lived in all three of the northern New England states and currently live in MA because money is more important than happiness right now.
I was kind of looking forward to telling someone from MA that you don't need more than AWD-car amounts of ground clearance on roads that never get more than 5" of snow between plows. Even in northern New England it's very rare for more than 6" of snow to accumulate on public roads, except maybe a few days a year, and even then only in the most rural areas.
This is because a person's center of gravity is above where a sedan's front would hit them.
I am curious as to why the driver didn't react. I wonder if the system keeps track of the driver as well; I would imagine it has to be mind-numbing to sit behind a wheel for hours on end and not do anything. So in this situation, how attentive and alert really was the driver who is supposed to be the fail-safe mechanism?
That being said, I'm totally open to this being the fault of either Uber or the woman, but there are too many questions in my head that suggest it was the fault of the woman.
Are we saying that at this point we're going to let the initial police statement win because they are "experts" at driving? Are they also experts at the software in SDVs? I think at this point we shouldn't assume either story, but at the same time we can't let Uber, the police, or governments cover this story up for the sake of money and politics.
I think there are some pretty massive implications that should come from this. Even if a human driver may very well have killed this person too (which I don't personally believe), don't we need to take a minute and ask a few questions about why the vehicle reportedly didn't even slow down after the hit? Why the dent is on the RIGHT hand side of the car (.2 seconds seems false)? Would a human driver have slowed a lot earlier? Was the pedestrian/biker expecting the car to slow, since she clearly had a hand signal up, because she saw what looked like an attentive driver in the car?
There are too many questions and I hope that this specific story is treated with the utmost careful consideration. I for one can't seem to let it go in my head.
Someone died and it very well could have been because of our collective ego that we can accomplish this (SDVs) at this point in our history. We'll never hear from this woman again.
This is the first report I've seen that indicates the Uber AV was in the right lane. Remember that the victim was crossing from left to right, and that all of the visible damage on the AV is on the right-side bumper. It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react, in a location with good street lighting and with clear weather. I would think most humans would be able to at least hit the brakes, if not completely avoid a collision.
I didn't state a bunch of claims, I linked to a NYT published article. If it turns out that the NYT depicted the incorrect lane, then I expect we'll see a correction. But they published this graphic staked on their reputation for accuracy, not on just "rumors".
Moreover, it does not contradict what the police chief said; here is her initial and only interview:
> Pushing a bicycle laden with plastic shopping bags, a woman abruptly walked from a center median into a lane of traffic and was struck by a self-driving Uber operating in autonomous mode.
> “The driver said it was like a flash, the person walked out in front of them,” said Sylvia Moir, police chief in Tempe, Ariz., the location for the first pedestrian fatality involving a self-driving car. “His first alert to the collision was the sound of the collision.”
The police chief also said a number of things that have subsequently been changed. In the Chronicle article, she's reported to have said that the Uber was driving 38 in a 35mph zone. The NYT graphic and other stories from today have said it was a 45mph zone.
Imagining an approximate version of the scenario and thinking that most people would be able to do what the known unimpaired driver could not is literal speculation. You are reading more into the graphic than can be reasonably derived from the graphic, and the -text- of the graphic is itself explicitly uncertain about the victim's location before and after the collision. Your comment expresses more credulity than the very source you linked to.
If you want to speculate that the Uber driver and police are covering up negligence, that's fine. Trying to claim some sort of logical high ground in a known uncertain scenario is not.
They are saying that based on the information in the NYT article it does not seem credible that the woman came out of nowhere, that there was no time to react, and that the first thing the driver noticed was the sound of hitting the woman before even seeing her. The driver is a party in this story and may well be at fault for causing her death.
Explaining the imagining is the salient issue.
> The parent didn't say that he is imagining anything
Splitting hairs over the phrasing is not compelling. The poster was proposing a scenario as the most reasonable interpretation while couching it to minimize criticism (the equivalent of the "just sayin" trope).
The premise is flawed. It never sounded reasonable to assume the pedestrian did not see the vehicle at all. I can imagine a perfectly reasonable scenario where the pedestrian tried (and failed) to clear the vehicular path before being struck on the attempted destination side of the road.
Not sure how it applies. But anyway, in my defense, I didn't say that I was imagining scenarios. I said the exact opposite, that I could not (or, it was "very hard to") imagine a scenario in which someone crossing left-to-right is mostly unseen by a right-lane driver. By saying I can't imagine, I'm limiting myself to my own experience and observations.
Is it speculation to point out that the police and OP misidentified the gender and name of the driver (it's Rafaela Vasquez, not Rafael)? Or that newer articles have disputed the 35 mph limit?
edit: I'm not reading too much into the graphic by stating explicitly what it shows. I'm making the assumption that the NYT designer isn't being loose on the facts here and that the Uber vehicle was in the right lane.
So why can't I claim a logical basis based on the other agreed facts, such as the victim being a woman who is walking her bike across the street, and our general observation of how fast people are able to cross a lane of traffic? What set of physical laws and constants should we be using here?
My beef is with 'Please don't accuse me or other commenters of spreading "rumors and speculations", without evidence or logical arguments.' The problem is fooling ourselves into being more certain than we can be. While this is common human behavior, especially in emotionally fraught situations, as people attempting to engage in reasonable discussion on a discussion board, we need to do better. Acknowledging our biases and limited perspective while researching and before replying improves the discussion for everyone.
addendum: what everyone agrees and states is uncertain - where on the road the victim was hit. Another area of uncertainty among us non-witnesses is the immediate traffic at the time of collision.
I want to thank you for saying something I have been having trouble putting into words. I'm not sure if it is me or whether something has changed recently with discourse on the internet, but I am coming across more and more people speaking as though they are an authority on a subject while simultaneously posting completely incorrect information.
It's not their level of knowledge, it's them having a high level of conviction when they have no grounds to have such a high level of conviction.
But I agree, yes, it is easy to fool ourselves into being more certain than justified. I like having that pointed out during HN discussions :)
The nonviolent thing to do, I think, is to simply describe what is actually known and point out how the accounts differ.
For what it’s worth, none of us were there and it sounds like definitive information is lacking. My hope is for this to be put to valuable use in ensuring safer driving for everyone (autonomous or not).
As far as I know, there hasn't been any official acknowledgement that the accident occurred in the right lane (or any specific lane for that matter). In fact, IIRC, there was a mention of the accident happening around 100 yards from a crosswalk, whereas the stretch of road depicted in the picture is around 200 ft away. The graphic also uses weasel words like "somewhere in this area". I mean, somewhere where? 100 ft south? 10 ft to the left? Only someone who has seen the actual video would be able to make an informed comment, and so far the police have not released it to the public.
I'm not convinced this picture can be reliably used for armchair forensics.
My experience with the NYT (and basically every news outlet) on technical issues that I'm familiar with is that they are not particularly concerned with getting the details 100% correct.
I don't expect better from them in the discipline of auto accident reconstruction.
That's a failure on every level, from prediction down to sheer radar obstacle detection. You don't hit things right in front of you; mid-range vehicles you can buy today already have sensors that emergency brake when the driver won't but an obstruction is ahead.
That's a pretty clear fail.
I wonder what the Uber's field of view is. At what distance should it have detected stuff about 9 meters off track?
But she did likely walk out from landscaping. So perhaps she initially "looked" like waving branches or whatever. And there must be some mechanism to suppress such false positives, or vehicles would brake when it's windy.
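One common way to implement that kind of false-positive suppression (purely my own sketch, with invented names and thresholds, not anything known about Uber's pipeline) is to require a detection to persist across several consecutive sensor frames before the planner treats it as a real obstacle. The trade-off is exactly the failure mode under discussion: the persistence window adds latency before the car can brake.

```python
# Hypothetical persistence filter: a detection must appear in N consecutive
# frames before it is treated as a real obstacle. Class name and thresholds
# are illustrative assumptions only.

class PersistenceFilter:
    def __init__(self, frames_required=5):
        self.frames_required = frames_required
        self.consecutive_hits = 0

    def update(self, detected_this_frame):
        """Returns True once the detection has persisted long enough."""
        if detected_this_frame:
            self.consecutive_hits += 1
        else:
            self.consecutive_hits = 0  # waving branches come and go
        return self.consecutive_hits >= self.frames_required

# At 10 frames/second, requiring 5 consecutive frames adds ~0.5 s of
# latency -- a car at 40 mph (~17.9 m/s) travels about 9 m in that time.
```

At 40 mph that half-second of debounce is a meaningful fraction of the total reaction budget, which is presumably why such thresholds have to be tuned very carefully.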
And yet the police were so quick to say that camera footage indicates that Uber was likely not at fault. It was so quick that it makes me inclined to think it's pretty obvious. However, after the SF Chronicle published its exclusive interview with the Tempe's chief of police, the Tempe Police PR person had to issue a statement that, "Tempe Police Department does not determine fault in vehicular collisions."
I'm not one to believe in conspiracies or to automatically suspect shadowy influence, so I want to believe that the camera footage seems to argue that this was an unavoidable accident. But the evidence released so far argues against that, regardless of whether the victim's crossing was illegal or not. And why are the police making a judgment on this so soon in the first place, especially when the decision will be made by county law officials, and ostensibly after referring to Uber's full suite of sensor data?
The fact that they came out with a statement instead of saying "we don't comment on ongoing investigations" makes it even more shady.
The police in Georgetown, about half an hour north of Austin, went with their gut in one murder case ("it has to be someone the victim knew") and ignored physical evidence at the scene and the eyewitness testimony of the murder victim's son. Then the prosecutor kept working against the accused for the next 25 years and suppressed evidence.
I was surprised at that -- it doesn't seem to me that the police would have the technical knowledge of Uber's hardware and algorithms to categorically state that, let alone so quickly.
So whether or not the police understand self-driving systems is not relevant to the fundamental physics problem. The key question is how long was the ped visible.
I don't see how from a single forward-looking camera you want to deduce this crash was "physically impossible". That isn't even half of what we expect from human drivers. We expect them to swivel their head.
> Sensor Fusion typically merges LiDAR with stereoscopic camera feeds, the latter requiring ambient light.
So might LIDAR data get de-emphasized when there's too little light for the cameras?
It may well be that the Uber cars only have LIDAR and optical, but that doesn't have anything to do with sensor fusion.
Driving is a sensitive subject; people take it personally. Trying to generalize about drivers is just going to land you in a world of trouble, almost worse than profanity.
Still... stiff upper lip. Though I can't believe I'm getting downvoted for reminding people not to kill anybody while behind the wheel :-o
Edit: Sorry about the ambiguity. I wasn't referring to "ped", but rather to the slang that's offensive to women.
a person who is excessively concerned with minor details and rules or with displaying academic learning.
Looks like it's too late to delete it, so we're stuck with it forever.
Please still do drive carefully!
It's too late to delete but I think it's worth it (for future readers) to say that offense isn't intended, and that that word is frowned upon by most HN users and mods.
I do think discussion is warranted because the Tempe Police have gone out of their way to share details and preliminary judgments about the case. Some of these details have been inaccurate, both for and against Uber's favor. But Tempe police officials are already going out of their way to downplay what their chief claimed:
> Glover downplayed Moir's statements, saying some were taken "out of context" by the Chronicle. The chief disagrees with the Chronicle's headline, "Tempe police chief says early probe shows no fault by Uber," Glover said.
I'm sure a system that warns more often for the sake of caution would be rejected by people.
Frustratingly, so many stories make no mention of the lane. This is the only one I've found so far, and it's from the local news-weekly:
> The Volvo was in the lane nearest the curb, about 100 yards south of Curry Road, and going about 40 mph at the time of the collision. Initial evidence shows the vehicle didn't brake "significantly" before the impact.
My assumption is that the writer thinks "lane nearest the curb" is unambiguous to the average reader, which would mean the right-most lane. After all, why would the average reader (who isn't looking at the accident scene right now) assume that the left-most lane on any road is "nearest the curb"?
However, from Street View, we see that there are curbs on both sides of the street -- i.e. the median itself has a curb: https://email@example.com,-111.9428855,3a,75y,...
And if the Uber had changed lanes quickly, to avoid hitting the woman, the backup driver would presumably have mentioned it.
Example: if I tell you to drive two blocks down a "normal" street and then pull over to the curb, you will most likely not pull over to the curb on the opposite side of the street.
Also, isn't "curbside" used when talking about valet services, which generally have you pull over to a curb on the right when meeting the valet? (I don't know, I don't have the opportunity very often)
How do I know this graphic isn't entirely speculative?
The distances, as depicted, are a little strange. If Uber AV hit the victim at 40mph, you would expect her body to be seen farther away from where it says "Body seen in this area".
It is incontestable that the Uber hit her on its right-front side. Here's a clear picture recently tweeted by NTSB: https://twitter.com/NTSB_Newsroom/status/976215176323194880
1. Another car was in the lane left of the fatal car, blocking its view of the pedestrian.
2. The pedestrian fell back after reaching the curb.
But it looks like both the artificial and the human driver were asleep.
It probably is completely unrelated to this event, but it’s an anti-pattern by well-meaning drivers.
Could this be a demonstration of what people talk about with regards to partial autonomy being a detriment to human attention?
Definitely. By the time the human realises something is wrong and has to take control, it's already too late.
Incidentally, this is also why autopilots in planes have been more successful --- when something goes wrong high up in the sky, there's still relatively much time to assess the situation and react. With cars, even if the car suddenly told the driver to take over, the driver would essentially have to already be fully attentive to the situation in order to make a decision in time. It's the difference between seconds to a minute in a plane vs. fractions of a second in a car.
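To put rough numbers on that time budget (my own back-of-envelope figures, not from the article), here is the kind of arithmetic involved: time to impact at the reported ~40 mph, and the distance needed to stop given ordinary human reaction time and dry-pavement braking.

```python
# Back-of-envelope reaction budget: how long does a driver at 40 mph have
# if a pedestrian becomes visible D metres ahead? All numbers illustrative.

MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def time_to_impact(distance_m, speed_mph):
    """Seconds until impact at constant speed."""
    return distance_m / (speed_mph * MPH_TO_MS)

def stopping_distance(speed_mph, reaction_s=1.5, decel_ms2=7.0):
    """Reaction distance plus braking distance (~0.7 g on dry pavement)."""
    v = speed_mph * MPH_TO_MS
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

# A pedestrian first visible 25 m ahead of a car doing 40 mph:
print(round(time_to_impact(25, 40), 2))  # 1.4  -- seconds to perceive and act
print(round(stopping_distance(40), 1))   # 49.7 -- metres needed to fully stop
```

The 25 m visibility distance is an assumption for illustration; the point is that at these speeds the window between "fully attentive" and "too late" is on the order of a second.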
It has a vague location for where the victim's body ended up, yes, but I don't interpret that graphic as purporting to show the actual location of the vehicle or victim at the time of the crash.
> This is the first report I've seen that indicates the Uber AV was in the right lane.
I'm not sure I'd go that far. I think the arrow represents the direction of travel, but I would assume they just placed it near the body, and aren't explicitly trying to suggest which lane the car was in. And the notation "somewhere in this area", in my eyes, confirms this.
Further, if they do know which lane it was in, that would be an excellent bit of investigative journalism that no one else has reported, and you'd expect it to be in the story, if not the actual lead. It's not, so...
If people are actually correct about the median thing then I can't imagine how this is not a colossal failure of software. But again, it makes way more sense that she walked out from the right side of the road.
Emphasis on ”purports”. The NYT graphic doesn’t indicate the vehicle was in the right lane, nor does it indicate a precise location for Elaine. I don’t think you have bad intentions here, but extrapolating a hypothesis from this graphic seems unfounded.
There isn’t enough information to go on to develop reasonable hypotheses from any angle, unfortunately. The investigation needs to complete and footage needs to be released.
(Pure speculation follows)
I wonder if it is possible that the problem is that she was moving too slowly for it, not too quickly?
If for some reason it could not get a good read on the component of her velocity parallel to the lane, it might mistake her for another vehicle moving at similar speed to the Uber, and then see her transverse velocity component as being due to normal drift within the lane. It would read the situation as a normal passing situation, not someone about to enter its lane.
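As a toy model of the misread being hypothesized here (entirely my own invention, not Uber's actual tracker), suppose a classifier keys on an object's velocity relative to the ego car. With a bad estimate of the along-lane component, a slow-crossing pedestrian could look like a neighboring vehicle drifting within its lane:

```python
# Toy model of the hypothesized failure: classify a tracked object from its
# velocity relative to the ego car. Thresholds and labels are invented for
# illustration; this is not Uber's actual system.

def classify_track(rel_long_ms, lat_ms):
    """rel_long_ms: object's along-lane speed minus the ego car's speed;
    lat_ms: object's speed across the lane (both metres/second)."""
    if abs(rel_long_ms) < 3.0 and abs(lat_ms) < 1.5:
        # Roughly pacing us with only small sideways motion: reads like a
        # neighbouring vehicle drifting within its lane -- no action taken.
        return "vehicle drifting in lane"
    if abs(lat_ms) > 0.5:
        return "crossing object: brake"
    return "unclassified"

walk_speed = 1.3  # m/s, a typical walking pace across the lane

# With an accurate longitudinal read (ego ~17.9 m/s, pedestrian ~0 along
# the lane), the large closing speed flags her as a crossing hazard:
print(classify_track(-17.9, walk_speed))  # crossing object: brake

# But if the parallel component is misestimated as matching the ego speed,
# the very same lateral motion reads as harmless in-lane drift:
print(classify_track(0.0, walk_speed))    # vehicle drifting in lane
```

Again, this is speculation made concrete, not a claim about the real software; it just shows how one bad velocity component could flip the classification from "brake" to "ignore".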
Is it possible the road had other vehicles blocking the view?
> The Uber had a forward-facing video recorder, which showed the woman was walking a bike at about 10 p.m. and moved into traffic from a dark center median. "It’s very clear it would have been difficult to avoid this collision in any kind of mode,” Sylvia Moir, the police chief in Tempe, Arizona, told the San Francisco Chronicle.
I don't remember reading this, where was that reported? (Most of the blurbs I read were pretty scarce on the details...)
“It’s possible that Uber’s automated driving system did not detect the pedestrian, did not classify her as a pedestrian, or did not predict her departure from the median,” Smith said in an email. “I don’t know whether these steps occurred too late to prevent or lessen the collision or whether they never occurred at all, but the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision.”
ooo, there's video of the turkey wheelchair broom chase: https://www.theguardian.com/technology/video/2017/mar/16/goo...
That's not entirely true. A Tesla, for instance, won't stop if it detects a stationary object on the highway, assuming it's a false positive of the algorithm.
I want safe self-driving cars, when they are safe. But Uber's clearly-established cavalier attitude toward human beings apparently can't be trusted with self-driving cars. I'm disappointed that politicians thought they could.
If the problem is that the system didn't detect the woman moving between lanes, then that seemingly contradicts the police statement that the victim moved quickly enough to surprise the AV and its driver.
The criterion (really the only criterion) for whether or not automatic vehicles "should be on the road" is whether or not they are safer than the alternative. And that certainly doesn't include "being as good a driver as jdavis703 was in this particular anecdote".
Being engaged in a particular role relative to a situation can (I believe) alter one's capacity to perceive and respond to it.
This isn't to say that for definite the instructor would have had the insight described in the parent post, but it is to say that you can't rule this out just based on the information that as a passenger they didn't have the insight.
That's a senseless digression though. Again, the criterion to use when deciding whether autonomous vehicles are safe is whether autonomous vehicles are measurably as safe as human drivers and very much not whether we can "rule out" the possibility that the machine might be subject to failure modes that human driver are already known to have anyway.
For every scenario and hypothetical like this one you can imagine where an autonomous vehicle would fail, I can come up with an equally hypothetical reason why they're better (hell, just read any of the media coverage of them). Both arguments are meaningless without numbers and analysis, c.f. this very article we're discussing.
In terms of your agenda, it might make no sense. It just irks me when I see simplistic assumptions about human information processing that don't take the "situatedness" of the processor into account.
I have no views, and did not attempt to comment, on machine information processing in this instance. I just wanted to point out that you can't simply go from the instructor's actually missing the inference as a passenger to the conclusion that they would have missed it as a driver. Thats it :)
My outlook on the transition period is not very optimistic. Humans are good at anticipating crazy behavior. Fatalities increasing (before they eventually decrease) is currently just as likely as the promised miracle in my mind - we don't have enough data to sway my mind either direction just yet.
There was also staying in between the lines, following road signs, and integrating with traffic. Do you think you might have failed not because you weren’t pegged at the speed limit, but instead because you weren’t capable of driving safely?
I wasn’t there, so it could’ve been because you really were just driving too slowly. But maybe it was more than that.
I agree there's still a lot of work to be done, but my point is that the problem is the driving system itself, and a lot of people die because of it. AI can't directly change that, but if it can still do better than humans (in a few years), then it's worth it.
>The driver, Rafael Vasquez, 44, served time in prison for armed robbery and other charges in the early 2000s, according to Arizona prison and Maricopa County Superior Court records.
He served time in prison for armed robbery more than 15 years ago; what relevance would this have?
Presumably he has a valid driving license issued by the State, and was not under the influence of alcohol or drugs.
Having committed armed robbery has seemingly no connection (I mean it isn't like he was condemned for having killed someone while driving a car or something like that), and even if that was the case some State must have issued (or renewed) his driving license, meaning that he was legally authorized to drive the car.
I know, and understand, how this (right or wrong as it might be) could be of use to the police or the judiciary system; what I am highlighting is just how gratuitous it is in a piece of news.
this is a (lesser known) writing by G.K. Chesterton:
"The perpetuation of punishment"
(page 504 of "On Lying in Bed and Other Essays") circa 1907
It's another question whether it's fair to assume that. I don't think it is, though personally I think it's reasonable to take the information into account as long as you restrain yourself from jumping to any conclusions.
Sure, the whole point being that it is meaningless and IMHO disturbing to imply that.
Not that I am familiar with bank robbers, and of course I have no idea of the specific crime for which the driver was convicted, but I believe that bank robbers actually need to be rather punctual, have planning capabilities, and usually be exceptionally good drivers; at least, this is what Hollywood usually shows us.
>I don't think it is, though personally I think it's reasonable to take the information into account as long as you restrain yourself from jumping to any conclusions.
I wonder how you take that into account without drawing any conclusion or letting it influence your opinion; this kind of info is IMHO either relevant or not (gratuitous).
It is a car accident where the automated system failed and the driver who was supposed to take the controls in case of such a failure didn't or couldn't. It is more likely that the whole thing happened too fast, or that he was distracted; the criminal past of the guy has no relevance unless and until it is established that the accident was caused voluntarily or by "voluntary inaction".
What if the driver was differently "marked" by society?
Like, say, known to belong to the Communist Party or to the Neo-Nazis?
Or if he was mentioned as being (choose one) gay, transgender, black, latino, illegally immigrated?
What kind of added info is that in the context of a car accident?
> “It’s possible that Uber’s automated driving system did not detect the pedestrian, did not classify her as a pedestrian, or did not predict her departure from the median,” Smith said in an email. “I don’t know whether these steps occurred too late to prevent or lessen the collision or whether they never occurred at all, but the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision.”
We can accept that by the time the Uber AV "saw" the victim, it was too late to brake. But that doesn't mean we can't ask questions about what the AI actually saw and classified. And even if it was a "sudden" event for humans, was it "sudden" relative to computer reaction time? How fast did the victim move into the road that the Uber AV couldn't even brake?
But everything the police claimed was in context of witness interview and camera footage. Since this is a self-driving car, we know there is more data and more ways to assess performance besides camera vision.
Investigations take time. Just like with airplane crashes, it's probably best to ignore everything about this incident until we see the results of the investigation.
They can claim he's an integral safety component of the whole self-driving test system, and he failed to do his job.
Which would make other safety drivers either think "I have to pay 100% attention" (which they should, anyway), or "I'm outta here!"
There was also a driver behind the wheel, so it would suggest that humans are just as clueless.
Likely, but not necessarily. Depending (hugely!) on what actually happened, it might be that the car should have started slowing down before the woman changed course.
As an extreme example, if you’re driving at 50 mph and come up behind a kid cycling at the side of the road with a foot separation between the extrapolated trajectories of the bike and your car, should it be OK to continue your course at full speed, or should drivers take into account that kids are kids, and may move erratically?
Also (again purely hypothetical), if the car had already had several similar events on that road that resulted in near misses, should it have slowed down before it even entered the street, knowing the road segment to be particularly dangerous?
I think human drivers, even though they are horrible at attending to the road for extended periods, get into accidents relatively rarely because they know when they really need to pay attention.
Finally, I do not rule out that that “driver behind the wheel” reacted slower than would have happened if (s)he was actually driving the car.
Disclaimer: I’m a layman, and haven’t seen the video ⇒ Let’s wait and see what the NTSB will say about this.
Sometimes you just slow down on a subconscious hunch without any information on that particular situation. Or you might be very conscious not to slow down if you're driving a motorcycle in Asia in heavy traffic trying to make a turn with a huge truck barreling behind you.
There are massive differences in road cultures between countries, and it takes a while to adjust to local habits. Somebody who doesn't know these invisible rules is a very accident prone driver even if they can drive safely in another country.
1. I've noticed that adaptive cruise control doesn't account for vehicles entering my lane in front of me, even with their turn signal on. As a human driver, I see them entering and know to slow down (at least take my foot off the gas), but the system only performs a hard brake AFTER the car has actually moved into my lane. Disclaimer: I don't know if autopilot systems suffer from the same limitation.
2. I was once waiting to make a protected left turn at a busy intersection. There were two protected left turn lanes, and I was in the farther left lane. Light turns green, I take my foot off the brake, and prepare to accelerate. The car on my right, in the other left turn lane, is also barely starting to move forward, when it suddenly rocks to a halt, indicating they applied the brake, hard. Without thinking, I also brake hard. Out of nowhere, a car appears speeding right through where I would have been if I hadn't stopped. Apparently, this car was entering the intersection on the cross street, going right to left. He wanted to beat the red light (he didn't), and also make his left turn, right down my street, in the opposite direction I was headed. I guess he didn't realize there were two left turn lanes. I never saw him. If I hadn't gone with the herd instinct and hit the brakes, I'm sure there would have been a serious impact. I wonder if autopilot would have caught that one, but it probably depends on camera placement.
It'll take actual inter-vehicle communication to extend the sensor net, much like humans presently do by proxying through other cars' deviations from anticipated behavior (e.g., if the freeway traffic keeps going at proper speed around corners).
However, one thing that I haven't seen discussed anywhere, but that strikes me as a hard problem: by having a non-human driver, you're literally taking out the human element of communication.
Everybody is taught about the importance of eye contact in driver's ed. What will we replace it with? I have no clue, and I doubt anyone else does. Yet it's a crucial technique we all use to negotiate traffic every day, regardless of our mode of transportation.
This is a hard problem, and it's less technical than cultural. It will be a bitch to solve.
Hmm. Perhaps. Personally, I've never noticed this.
Anyway, the concrete example doesn't matter.
I guess you know what I was trying to say: implicit communication between humans. Even without eye contact, there will be a host of minuscule actions (or inactions) that we use to convey intent. There will always be humans on roads, e.g. as pedestrians. Vehicles will be forced to interact with them, and this communication barrier makes it very difficult.
Don't think of highways, think of supermarket parking lots.
But I get your point.
People compensate by being very careful when parking and of course you can always roll down your window if the situation requires it.
A robotic vehicle simply couldn't manage here. There are way too many dynamic human exceptions. For example, it's common to drive against traffic on the wrong side of the road for short distances since the intersections are so far apart.
Edit: The limit seems to be 35 mph, sorry; thanks to all of you for pointing that out. (38 is still over the limit, so why allow a robot to do that?)
On Google Street View, you can see that the road changes from 35 mph to 45 mph at the underpass before the site of the accident.
According to the article you are commenting on, "The speed limit where the accident occurred is 35 mph, police spokeswoman Lily Duran said."
Even so, defensive driving is more than simply driving the speed limit. Also from this article, "... the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision."
Of course, something might have changed since street view went through.
As an aside, that statistic is why I never speed in pedestrian zones.
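The 35-vs-45 difference above is bigger than it looks, because stopping distance grows with the square of speed. A minimal back-of-the-envelope sketch (my own assumed values for reaction time and friction, not figures from the article):

```python
# Rough illustration: total stopping distance = reaction distance + braking
# distance, with braking distance v^2 / (2 * mu * g). All constants below
# are common textbook assumptions, not measured values.

MPH_TO_MS = 0.44704   # meters per second per mph
REACTION_TIME = 1.5   # seconds, a typical assumed human perception-reaction time
MU = 0.7              # assumed dry-pavement friction coefficient
G = 9.81              # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mph):
    """Approximate total distance (meters) to stop from a given speed."""
    v = speed_mph * MPH_TO_MS
    reaction = v * REACTION_TIME          # distance covered before braking starts
    braking = v ** 2 / (2 * MU * G)       # distance covered while braking
    return reaction + braking

for mph in (25, 35, 45):
    print(f"{mph} mph -> about {stopping_distance_m(mph):.0f} m to stop")
```

Under these assumptions, 25 mph needs roughly 26 m, 35 mph roughly 41 m, and 45 mph roughly 60 m, which is part of why a few extra mph matters so much around pedestrians.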
“braking”; “breaking” is something else, and not a desirable response.
From /r/roadcam yesterday: https://www.youtube.com/watch?v=wYvKPMaz9rI
Human driver avoided hitting the pedestrian. There are lots of videos of human drivers successfully not hitting people who "suddenly" pop out "at the last minute". Self-driving cars should be able to match that. If they can't, they don't belong on public roads yet.
It should not have been unaware that it was in an impossible situation (because that implies it will also be unaware in similar, possible situations).
The safety humans here might as well be hood ornaments.
Maybe you can't prevent an accident, but you can at least reduce momentum and demonstrate that your system was working correctly.
In my driving lessons I was told to swerve for humans. Pedestrians and cyclists are the least protected participants in traffic. If you swerve and hit another car, the resulting crash will cause a lot of damage, but injuries to humans will likely be much less severe, if they happen at all.
Animals, especially small ones, are a different matter. By the same argument, it is better to risk hitting the animal than to risk swerving into other traffic.
The question isn't about the MHz speed of a computer or the theoretical reaction time but the lack of reaction by the computer in the actual real world. It doesn't matter if this is a sensor failure, a software bug, or a slow reaction by a computer. The result is the same. A person died.
From the article:
"... the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision."
Of course, this all presumes you can quickly analyze the situation and conclude that it is safe to swerve (which, in theory, the computer can do far faster than a human can).
Please set aside the software mindset when looking into this.
Such as a kid chasing a ball that is rolling toward the road (object vector path collision), especially with the ball or kid suddenly hidden from view by a parked car (real environment vs. visual environment). Or a group playing basketball in a driveway. Both are cases where slowing down is always the safer bet.
Someone is dead, and there may well be one or more trials as a result. The facts will take as long as they take to surface.
The kinetic energy mismatch is the real problem, and at the very least, these companies should be testing at only 20-25mph, with _much_ lighter vehicles.
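The kinetic-energy point can be made concrete: KE = ½mv², so both vehicle mass and the square of speed matter. A quick sketch with illustrative (assumed) masses, comparing a heavy SUV at 38 mph against a lighter vehicle at 25 mph:

```python
# Kinetic energy KE = 0.5 * m * v^2. The masses below are illustrative
# assumptions, not figures from the article.

MPH_TO_MS = 0.44704  # meters per second per mph

def kinetic_energy_kj(mass_kg, speed_mph):
    """Kinetic energy in kilojoules for a given mass and speed."""
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v ** 2 / 1000.0

suv = kinetic_energy_kj(2000, 38)    # a ~2,000 kg SUV at 38 mph
light = kinetic_energy_kj(1200, 25)  # a ~1,200 kg car at 25 mph
print(f"SUV at 38 mph: {suv:.0f} kJ; light car at 25 mph: {light:.0f} kJ")
```

Under these assumptions the SUV carries nearly four times the kinetic energy, which is the mismatch the comment above is pointing at.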
We'll have to wait for the NTSB to weigh in, but I'd be surprised if Uber isn't shut down (at least in Tempe) for a good long while.
Ridiculous. After all it's done, this company is still around, and now it's killing people!
Road usage is shared among cars, motorcycles and scooters, trucks, pedestrians, bicycles, mobility scooters, and other things.
Also, trains don't change lanes or stop quickly. It's simply a bad analogy.
Cars (at least autonomous ones) signal before changing lanes and slowing down.
I think that flashing lights are too annoying, but requiring cars that drive at night to have lights on sounds like a reasonable (and existing) rule.