System can't decide what's happening.
> It wasn’t until 1.2 seconds before the impact that the system recognized that the SUV was going to hit Herzberg
System is too slow to realize something serious is happening.
> That triggered what Uber called “action suppression,” in which the system held off braking for one second
A hardcoded 1 second delay during a potential emergency situation. Horrifying.
I bet they added it because the system kept randomly thinking something serious was going to happen for a few milliseconds when everything was going fine. If you ever find yourself doing that for a safety critical piece of software, you should stop and reconsider what you are doing. This is a hacky patch over a serious underlying classification issue. You need to fix the underlying problem, not hackily patch over it.
How is this not the title of the story? This is so much worse than the "it couldn't see her as a person, only as a bicycle". At least the car would still try to avoid a bicycle, in principle, instead of blindly gliding into it while hoping for the best.
> with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.
And then top it off with systemic issues around the backup driver not actually being ready to react.
The system should have started applying brake at this point. If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).
This is borderline criminal negligence.
Why were there no alarms going off at 5.6 seconds when the vehicle was confused!!!??
SMH. This is just ... I'm flabbergasted.
> Why were there no alarms going off at 5.6 seconds when the vehicle was confused!!!??
That sort of depends on the specifics of how their obstacle prediction and avoidance works - having a fuzzy view of what exactly something is at 5.6 seconds out is probably OK. The important bit is that it notices the obstacle is moving out into the road and stops for it. Classification is not needed to avoid objects. The key words here are actually "at each change, the object's tracking history is unavailable" and "without a tracking history, ADS predicts the bicycle's path as static", which is a horrible oversight.
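To make the "predicts the path as static" failure concrete, here's a minimal sketch (illustrative only, not Uber's code, with made-up positions): if the tracking history is discarded on every reclassification, the estimated velocity collapses to zero and the predicted path never enters the lane, whereas keeping the history shows the object drifting across.

    # Illustrative sketch only -- not Uber's ADS. Shows how discarding tracking
    # history on reclassification turns a crossing pedestrian into a "static" object.

    def predict_path(history, horizon_s=2.0, dt=0.5):
        """Extrapolate future positions from the last two observations."""
        if len(history) < 2:
            # No history -> velocity unknown, treated as zero: a "static" object.
            x, y = history[-1]
            return [(x, y)] * int(horizon_s / dt)
        (x0, y0), (x1, y1) = history[-2], history[-1]
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        return [(x1 + vx * dt * k, y1 + vy * dt * k)
                for k in range(1, int(horizon_s / dt) + 1)]

    track = []
    for pos, label in [((10.0, 40.0), "vehicle"),
                       ((9.0, 38.0), "unknown"),   # reclassified
                       ((8.0, 36.0), "bicycle")]:  # reclassified again
        track.append(pos)
        # The reported behaviour: a classification change starts a fresh track,
        # so the prior observations are not used for prediction.
        fresh_track = [pos]
        print(label, "prediction:", predict_path(fresh_track))
        # Keeping the full history instead would show the object moving
        # laterally toward the lane:
        print(label, "with history:", predict_path(track))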
>> That triggered what Uber called “action suppression,” in which the system held off braking for one second
> This is borderline criminal negligence.
Yeah, this is. Even though it was too late to swerve, there was still enough time to slow down and attempt to reduce the speed of impact. _This_ is probably where an alarm should have fired - the car is now in a really unsafe state and the safety driver should absolutely know about it.
Disclaimer - Up until a few months ago I worked at a major (non Uber) self driving car company.
The big picture is so overwhelmingly positive that even if the engineers were purposefully running people over to gain testing data it would still probably be a large net win for greater society. Thankfully there is no call for such reckless data gathering.
If anything, punishments in this case should be much more lenient than normal rather than breaking out the cruel and unusual measures.
How is it cruel or unusual to subject someone to risks posed by a system they're building? The Wright brothers didn't hit up bars to find guys willing to drunkenly try and soar into the sky on an oversized kite. The vast majority of things that were ever invented involved huge risks and often injuries to the inventor. Why do people writing code get exemptions from that?
I agree with the GP: the hardcoded 1 second wait time sounds exactly like some hack I'd put in a bit of code, the key difference being, my code makes fun little applications on an iPhone work, it does not DRIVE A 3500 POUND VEHICLE IN PUBLIC.
I bet if that engineer thought it would be him in front of that thing's bumper, he would've put in a bit more work to figure out what was going on.
More unusual than cruel, I must admit. We just don't do that anywhere.
If I were to guess why - if we did, people would either not work on experimental systems or would gold-plate the safety features way beyond what was reasonable. That sounds nice if you don't think about it too hard but in practice it would shave % off how much we produce in goods and services for no reason.
Aeroplanes are a great example. It sounds nice to say they are a super-safe form of transport, but actually thinking through the risks people face on their commute to work each day, the amount of money we spend doesn't make sense. I mean, what if the risk of being in a plane crash was only as rare as being struck by lightning? And the ticket was much cheaper? That wouldn't be so terrible. I don't know a lot of people who were struck by lightning, but I know a lot of people who don't have much money but take overseas holidays anyway.
> I bet if that engineer thought it would be him in front of that things bumper, he would've put in a bit more work to figure out what was going on.
Probably not in my experience. It is quite hard to get better performance out of a knowledge worker by threatening them. From what I've seen they tend to either resign or just do what they were already doing and hope. It isn't like anyone can stop writing buggy code by trying harder.
Again I disagree. Perhaps it would be unusual to do it via some sort of mandate after the fact, but the long history of human innovation involves tons of inventors risking life and/or limb to test safety systems of their own design. I'm reminded of the person who invented that special brake for circular saws: when it detected human flesh at the blade, it clamped down so hard it would often destroy the tool. The thing is, it also prevented you from losing a finger.
> if we did, people would either not work on experimental systems or would gold-plate the safety features way beyond what was reasonable.
I mean, maybe? I just think there's something lost there when you have a software engineer working on the code in a large company that's then testing on the unwitting public. I'm not saying it has to be a standing threat of "we're testing this on you" but, I mean, look what happened. An ugly hack that has no business in a codebase like this went in and someone was killed. I'm not saying the engineer necessarily deserved to be hurt in their stead, but the victim surely didn't have any role in this; they were just in the wrong place at the wrong time.
> It isn't like anyone can stop writing buggy code by trying harder.
One person, no, but an organization can. Talking of airplanes, when you look into the testing and QA setups for something like Boeing, where the software is literally keeping people in the air alive, it's layer after layer of review, review, review, testing, analysis, on and on. Something like "if you think there's a pedestrian ahead, wait one second and check again" would NEVER have made it through a system like that.
You know I'm all for innovations but Silicon Valley's VC firms have a nasty tendency to treat everything like it's "just another codebase" forgetting that some of this stuff is doing real important shit. Elizabeth Holmes and Theranos come to mind.
Boeing has a corporate lineage that extends back for more than a century and for most of that they did not have the levels of engineering safety excellence they can manage today. The culture that achieves near-perfect performance every flight is a different culture to the one that got planes off the ground back in the day.
And that goes to what I'm trying to communicate in this thread - people keep bringing up examples of deviating from standard practice in mature, well-developed industries where highly safe alternatives exist.
This is a different industry. Today in 2019 mankind knows how to fly a plane safely but does not know how to drive safely - I've worked in a high-safety environment, and they were notching speeds down to 30km/h and 40km/h, down from 100km/h, on public roads because the risk of moving any faster than that just isn't acceptable. People were substantially more likely to die on the way to work than at it. They'd probably have brought in 20km/h if the workers would reliably follow it. Driving is the single highest-risk activity we participate in. Developing self-driving technology has an obvious and imminent potential to save a lot of lives. Now, we aren't about to suspend the normal legal processes, but anyone who is contributing to the effort is probably reducing the overall body count, even if the codebase is buggy today and even if there are a few casualties along the way.
What matters is the speed with which we get safe self driving cars. Speed of improvement, and all that jazz. Making mistakes and learning from them quickly is just too effective; that is how all the high safety systems we enjoy today started out.
It is unfortunate if people don't like weighing up pros and cons, but slow-and-steady every step of the way is going to have a higher body count than a few risks and a few deaths learnt from quickly. We should minimise total deaths, not minimise deaths caused by Uber cars. Real-world experience with level 3/4 autonomous technology is probably worth more than a few tens of human lives, because it will very quickly save hundreds if not thousands of people as it is proven and deployed widely.
But those lessons were learned. We know how to do it now. Just like...
> Today in 2019 mankind ... does not know how to drive safely
Yes we do, and by and large we do it correctly. It's easy to think nobody knows how to drive if you spend 5 minutes on r/idiotsincars, but that's selection bias. For every moron getting into an avoidable accident each day there are millions of drivers who left home and returned completely safely.
You can make the argument that people sometimes engage in too-risky behaviors while driving, that I'd agree with, but people know how. Just like people know how to develop safety systems that don't compromise safety, even when they choose to not, as I believe happened here.
> Making mistakes and learning from them quickly is just too effective; that is how all the high safety systems we enjoy today started out.
But again, we know how to do this already. And again, my issue isn't even that someone got hurt while we perfected the tech, all of our safe transit systems today are built on top of a pile of bodies, because that's how we learned- my issue is the person hurt was not an engineer, was not even a test subject. Uber jumped the gun. They have autonomous vehicles out in the wild, with human moderators who are not paying attention. That is unacceptable.
There was a whole chain of errors here:
* Ugly hacks in safety control software
* Lack of alerts passed to the moderator of the vehicle
* The moderator not paying attention
All of these are varying levels and kinds of negligence. And someone got hurt, not because the technology isn't perfect, but because Uber isn't taking it seriously. The way you hear them talk these are like a month away from being a thing, and have been for years. It's the type of talk you expect from a Move Fast Break Things company, and that kind of attitude has NO BUSINESS in the realm of safety, full stop.
This is true, but those lessons that got them to that culture are written in blood.
Like the old saying, "Experience is learning from one's own mistakes. Wisdom is learning from others' mistakes." I don't think re-learning the same mistakes (process mistakes, not technological) is something that a mature engineering organization does.
One of my worries is that SV brings a very different "move fast and break things" mindset that doesn't translate well to safety-critical systems.
As for the rest of your post, what you're talking about is assessment of risk. Expecting higher levels of risk of an experimental system is fine, but there's a difference when the person assuming that risk (in this case, the bicyclist) doesn't have a say in whether that level of risk is acceptable
Certified engineers absolutely can and do go to prison for structural mistakes in their projects, especially if those mistakes lead to loss of human life, but sometimes even without it - the chief engineer of what was the tallest human-made structure on Earth (the radio mast near Warsaw) went to jail for 2 years after the mast collapsed due to incorrect design of the support structures.
Does it stop people from becoming engineers or from embarking on difficult projects? Of course not. If anything, it shows that maybe programmers working on projects like the autonomous cars should not only be certified the same way engineers are, but also held liable for what they do.
I personally have not come across a single software engineer or controls engineer who has had to certify a design.
If someone is so stupid that they don’t realize that a car needs to be able to avoid people in the road, even outside crosswalks, they have no business doing development of self-driving cars. They simply aren’t qualified. Ditto if that person simply doesn’t care. We’re not learning anything from this that we didn’t already know. Of course if you do nothing to avoid a person or a bicycle at 40 MPH, you’re gonna kill someone eventually.
This kind of thinking reeks of the “it works with expected input, problem solved” that causes so many issues in critical software.
This is like Boeing with the 737 Max. They didn’t want to deal with failure conditions, so they avoided detecting them so they could pretend like they don’t exist. Subsequently, those failure conditions then caused two airplanes to crash and kill the people aboard.
If (and it is a big if) there is one engineer who was grossly negligent then yes they shouldn't be working on self driving cars.
But it is far more likely that this was totally normal corporate ineptitude and will be fixed in the due course of normal engineering processes.
A safe culture is not built by people making wild guesses about what and why on forums. It is likely that the engineer responsible for this code is also going to be responsible, on net, for a very large number of lives saved, and judgement of who may have been culpable for what should be left to the courts. Morally I'm happy to say that I believe not only is he or she in the clear but probably deserves a pat on the back for helping to push forward the technology most likely to save young lives that I've ever seen. I've had young friends who died from car crashes and ... not much else. Maybe drugs and diseases in a few rare cases. I want technology that gets people taking their hands off the wheel and I want it ASAP. It doesn't need to be perfect; it just needs to be about as good as what we have now and consistently improving. Anyone halfway competent who is working on that as a goal has my near total support.
This is not the time to be discouraging people who work on self driving cars. This is a time to do things properly, non-judgmentally and encouragingly while acknowledging that we can't have cars running people over if we can possibly avoid it and fixing what mistakes get found.
We have over a century of accident statistics for normal cars. Anyone pretending that pedestrians will not jaywalk on a well-lit street with low traffic should not develop self-driving cars and most certainly should not have a driver's license.
> it is likely that the engineer responsible for this code is also going to be responsible on net for a very large number of lives saved and judgement of who may have been culpable for what should be left to the courts.
Citation needed. Claims like that can be used to proclaim the most hardened serial killer a saint. I mean if we just let them continue they might start saving people at some point, point me to any "currently applicable" evidence that says otherwise.
> I've had young friends who died from car crashes and ... not much else.
In my area people die from old age and cancer. Maybe you should move to a place with sane traffic laws that doesn't let the mental equivalent of a three year old loose on the local population.
> Anyone halfway competent who is working on that as a goal has my near total support.
As a test obstacle on the road?
Normal engineering processes were built on bodies. We do not want to minimize outrage here. Public anger and pressure is the only way to keep engineering process going.
I'm going to strongly oppose this, and to support my perspective I present to you the Boeing / FAA fiasco.
There is no call to get riled up because Uber as a corporate entity let a mistake slip through. The justice system will come up with something that is fair enough and we will all be better off for the engineering learnings. This is young technology, which is different from aircraft.
Intentionally programming a system to ignore object history each time reclassification happens seems like a glaring oversight.
Sitting behind the wheel as a test driver using your phone seems like a glaring oversight.
I’ll continue with my rile until change occurs, cheers.
As a species, we've long accepted that living around each other poses some hazards and we've made our peace with it.
But to stretch that agreement to a multi-billion dollar corporation that only wants to make money off it? That's too much to ask for.
It is such a trite and overused argument.
Why does it feel so different to you if the accident was caused by negligent human texting, versus an engineer making decisions in code?
If anything that engineer was most likely operating in much better faith than the texting, perhaps drunk human driver.
Can code be "accidental"? Surely not. Someone had an intent to write and implement it. If it fails, it's not "accidental"; the system was just programmed that way.
So the question is: are we okay with for-profit companies intentionally writing software that can lead to deaths?
The problem here is that if I'm doing 70mph+ on the motorway and I see a plastic bag flying in my path, the correct course of action isn't to brake hard or swerve in a panic to avoid it - but a computer cannot tell the difference between a soft plastic bag and something more rigid. If the only classifier is "there's an object in my way, I need to stop", then that's also super dangerous. In fact, there are cases where, unfortunately, the correct course of action is to keep going straight and hit the object - you shouldn't try to avoid any kind of small animal, especially at higher speeds: yes, it will wreck your car, but a panicked pull to the left or right can actually kill you. As a human you should do this instinctively - another human = avoid at all cost, small animal = brake hard but keep going straight, plastic bag = don't brake, don't swerve, keep going straight. How do you teach that to a machine if the machine can't reliably tell what it's looking at?
Edit: I will say that "size and relative speed of the object" is an excellent start, though.
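For what it's worth, a classification-free "should I react?" rule along those lines might look roughly like this. The thresholds and the response helper are invented for illustration, not anyone's production logic:

    # Hypothetical sketch: decide a response from size and closing speed alone,
    # without needing to know whether the object is a bag, a deer, or a person.

    def response(size_m, closing_speed_mps, time_to_collision_s):
        if size_m < 0.3 and closing_speed_mps < 1.0:
            return "ignore"           # small and drifting: likely debris or a bag
        if time_to_collision_s > 4.0:
            return "coast and watch"  # uncertain but far away: start shedding speed
        if size_m < 0.5:
            return "brake moderately, stay straight"  # small-animal case
        return "brake hard, stay in lane"             # anything person/vehicle sized

    print(response(size_m=0.2, closing_speed_mps=0.5, time_to_collision_s=3.0))
    print(response(size_m=1.5, closing_speed_mps=1.4, time_to_collision_s=5.6))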
Unfortunately that also means that when you have a person walking a bicycle that might be recognized as either one, this situation happens, and they initialize the velocity of a newly tracked object as zero.
Even this though shouldn't have been enough to sink them, because the track should have had a mean-zero velocity, but with high uncertainty. If the uncertainty were carried forward into the planner, it would have noted a fairly high collision probability no matter what lane the vehicle was in, and the only solution would be to hit the brakes or disconnect.
Furthermore, if you've initiated a track on a bicycle, and there's no nearby occlusion, and suddenly the bicycle disappears, this should be cause for concern in the car and lead to a disconnect, because bicycles don't just suddenly cease to exist. They can go out of frame or go behind something but they don't just disappear.
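A rough sketch of what "mean-zero velocity but high uncertainty" buys you in the planner. The lane geometry, the sigma, and the Monte Carlo approach here are all assumptions for illustration:

    # Illustrative sketch: initialize a new track with zero mean velocity but
    # large uncertainty, then see how much collision probability that implies.
    import random

    def collision_probability(x0, lane_min, lane_max, horizon_s,
                              v_mean=0.0, v_sigma=2.0, samples=10_000):
        """Monte Carlo: fraction of plausible lateral velocities that put the
        object inside the ego lane within the horizon."""
        hits = 0
        for _ in range(samples):
            v = random.gauss(v_mean, v_sigma)   # lateral velocity hypothesis (m/s)
            x = x0 + v * horizon_s              # lateral position at the horizon
            if lane_min <= x <= lane_max:
                hits += 1
        return hits / samples

    # Object 3 m to the left of a 3.5 m wide ego lane, 3 s out:
    p = collision_probability(x0=-3.0, lane_min=0.0, lane_max=3.5, horizon_s=3.0)
    print(f"collision probability under uncertainty: {p:.0%}")
    # A point estimate of "velocity = exactly 0" would report 0% instead.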
Taking your disclaimer into account, that is a very wrong assumption. If you classify an object wrongly, your model will not be able to properly estimate the object's capabilities, including its potential acceleration and most likely trajectory from a standstill. And that information comes in very handy in object avoidance, because a stationary object could start to move at any time.
Was it an oversight or were they having trouble with the algorithm(s) by keeping the object history and decided to purge it?
In any case, the history clearing seems to have been a bug; they updated the software to keep the position history even when the classification changes.
Also, we can detect human gait already can't we? There has to be better signifiers for human life that we can override these computer vision systems with to prevent these tragedies.
What about external safety features, such as an external airbag that deploys on impact, cushioning the object being struck if the system senses immediate collision is unavoidable. Some food for thought for anyone working in this space, would love to hear your thoughts and ideas.
This basically is being done already - recorded data is hand labeled as ground truth for classifiers. "human in darkness entering the street" might be a rare case, though. Humans in the dark on the sidewalk should be pretty common.
> Also, we can detect human gait already can't we? There has to be better signifiers for human life that we can override these computer vision systems with to prevent these tragedies.
I'm not a perception expert - it looks like in this case the victim was mostly classified by LIDAR capture. I don't know if the cameras were not good enough for conditions or if their tech was not up to the task. This is sort of a red herring though - even if the victim was classified as "bicycle" or "car" she still should have been avoided.
> What about external safety features, such as an external airbag that deploys on impact, cushioning the object being struck if the system senses immediate collision is unavoidable. Some food for thought for anyone working in this space, would love to hear your thoughts and ideas.
Why not do this on all cars?
That'd be a disclosure. (When you disclose something, it's a disclosure.)
It's a disclosure that is (among other things) disclaiming neutrality, so also a disclaimer.
That being the purpose of a disclosure, my point stands. What was yours?
The whole point is that the person in question was not incorrect at all; they simply used one of two perfectly accurate descriptions when you preferred the other. Your description of them as having been incorrect was incorrect. (I note, though, that you have since edited out the personal insults about the fitness of the poster you purported to correct for their prior job, which would be commendable were it not directly linked to your dishonest pretense that that insult did not exist and that a reference to an insult in your comment must be to the mere act of proposing a correction.)
Also, the person in question was not me, I'm just the one who called you out on the error in your “correction”. So for someone so critical (incorrectly) of others’ failure to pay close attention to detail, you are doing a poor job of demonstrating attention to detail.
Thank you for flatly and plainly stating this. That is unacceptable and anyone involved with approving that decision should be prosecuted in the death.
Its pretty clear that Uber's techbros have some serious culture issues. They have shown they are incompetent and should be disqualified by regulators.
I've noticed when I am riding with a good (human) driver who obeys pretty much all rules of the road and drives slower than the posted speed limit (at a speed they are comfortable driving) that there are always a few drivers behind us who will honk their horns, flash their lights, or even pass into the next lane just to pass back a little too close for comfort. I don't know if they are always horrible humans, but that's beside the point.
How would a self-driving car react in such a situation? Would human drivers be better behaved around a Waymo vehicle because the Waymo car has a lot of cameras and sensors and can pretty much show a clear cut case of road rage?
The studies have been done, they've been replicated, and the statistics are very clear. Google the '85th percentile rule'.
Can you expand on that? I'm not sure I follow what you're saying here.
Human drivers in fatal accidents don't have their licenses permanently revoked and there's no outrage.
Humans have their licenses temporarily revoked because it's assumed that humans can change their ways (maybe or maybe not a reasonable assumption).
With a computer, you can't expect it will get better unless its programming is improved. So isn't that reasonable grounds for "permanent" revocation, at least in the sense of indefinite, where you wouldn't do the same thing to a human?
Criminal negligence resulting in death by motor vehicle is a minimum of six months license revocation in NY.
It's that this quality of software was allowed to operate in a capacity where that could happen.
I've asked him a few questions, gently probing his experience and skills. Aside from his arrogance there was nothing special. No love of technology. No depth. Nothing. We didn't hire him.
Companies have different cultures. Companies hire and promote differently. And it matters.
Also, Uber's business plan is 100% to maintain share in the rideshare market until their cars drive themselves. That is the end game here. To shut down their self-driving research is to doom the company entirely.
Not that I'd be against that. Would be great to see Uber die. It's already caused enough headache and pain.
They don't have to shut down their research. I don't care if they want to write software and play on private tracks.
As I see it, Uber should be explaining exactly why their privilege to endanger the public by operating unsafe robots on public roads should not be revoked for criminal incompetence.
Uber's business is clearly more important than lost human lives due to negligence of proper software safety measures. /s
Yes, but on the border between “criminal negligence” and “depraved indifference”, not the border between “criminal negligence” and “innocent conduct”.
Well, also, any of those categories should trigger braking themselves. You can't hit vehicles or bicycles. You can't hit "other" either. If you plan to hit something, it needs to be a piece of paper or some such. As a_t48 points out, classification is not needed to avoid objects, so being confused about what the object is is totally irrelevant.
Basically, absent vehicles behind you and to each side, a driver should stop until the object can be identified and determined if it will do damage or be avoided.
A bicycle on the side of the same lane will typically move in the same direction and stay on the side. Thus the car has to overtake.
That is different from a person crossing the street.
You always have to make assumptions about elements in the path. And you have to have an option to handle the worst case.
I wonder if it's because their system is not reliable enough yet; a continuous alarm isn't useful.
There are a lot of commenters here who seem to be operating on a subconscious presumption that being "the best we could do" is some sort of defense. It's not. If the best you can do is not good enough, don't deploy it.
Don't be so sure. What if it's a bird, and you're traveling at 70 mph on a busy freeway? Slowing down or stopping could result in getting rear-ended or worse.
Clearly more testing is needed, and more work needs to be put into identifying objects early.
If you're traveling at 70 mph (which the vehicle wasn't) and you see something up ahead you're not sure about, you can slow down a bit. If you get rear ended when you take your foot off the accelerator, that's not your fault.
If you're still confused, light braking is appropriate. At some point, you'll know to either fully brake or go on through.
This system doesn't need more testing. It needs to be designed so it can stop safely and so that it will stop safely and alert the operator in case it is unable to determine if it's safe to proceed.
At least this doesn't seem to be that hard of a line to draw in terms of competency before allowing on public roads
And admit we're going to continue to need human safety drivers...
Meanwhile, you let humans who are far less than 100% accurate (and are often distracted) keep killing people? I went into a panic stop one rainy night when a billowing piece of plastic blew across the road in front of me, it looked exactly like a jogging person to myself and my passenger.
We don't need self-driving cars to be perfect before they are released, just better than people.
People are not going to be swayed by statistics; they are going to be swayed by their emotions. If the self-driving car attempts that 'hit the road', so to speak, are sub-par in the public perception, even if they might be a statistical win, then they will get banned. So if you are really gung-ho on having self-driving happen, this should concern you too, because otherwise you will never reach that goal.
Self driving cars need to be - right from the get go - objectively obviously better than human drivers in all everyday situations. Every time that you end up in a situation where a self driving car kills someone human drivers - and all their relatives and the press - are going to go 'well, that was stupid, that would not have happened to me' and then it will very soon be game over. And you won't get to play again for the next 50 years.
See also: AI winter.
So if you identify a failure mode that contributes to a catastrophic hazard, for instance, you had better build your system to drive the probability down. The resultant severity × probability score you end up with has to fall within the risk parameters deemed acceptable by management/safety.
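A toy version of that bookkeeping, with invented scales and an invented acceptance threshold rather than any particular standard's values:

    # Toy hazard-risk sketch (invented scales, not ISO 26262 / MIL-STD-882 values).
    SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
    PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}
    ACCEPTABLE_SCORE = 6  # whatever management/safety has signed off on

    def risk_score(severity, probability):
        return SEVERITY[severity] * PROBABILITY[probability]

    def must_mitigate(severity, probability):
        return risk_score(severity, probability) > ACCEPTABLE_SCORE

    # "Fails to brake for a crossing pedestrian" is catastrophic, so even a
    # "remote" probability lands above the acceptable line and must be driven down.
    print(must_mitigate("catastrophic", "remote"))   # True -> redesign/mitigate
    print(must_mitigate("marginal", "occasional"))   # False -> accept and monitor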
Well, no. Has any of the systems been tested in heavy snows on icy roads or on a road without maps?
A person died.
Lots of people die in car accidents.
Nearly 40,000 people die each year in car accidents. At least 8 of the top 10 causes of accidents would be improved with self-driving cars.
2. Drunk Driving
4. Reckless Driving
6. Running Red Lights
7. Night Driving
8. Design Defects
10. Wrong-Way Driving/ Improper Turns
> Well, no. Has any of the systems been tested in heavy snows on icy roads or on a road without maps?
Aside from figuring out where the edge of the road is, the biggest accident risk that I've seen with driving in heavy snow is speed -- no one wants to drive 15mph for 2 hours through a snowstorm to stay safe, so they drive 30 - 50mph instead.
And I'm not sure how to solve the road-visibility issue with self-driving cars, but presumably the same heuristics that humans use could be emulated in software (which I suppose is primarily following the car ahead or its tracks, or looking for roadside signs that mark the edge of the road).
I doubt anyone would refer to those as silly mistakes either.
My point with the second part is that humans have proven driving in snow storm and places that aren't fully mapped is possible, something that self-driving cars have not.
Who goes to jail when the self driving car kills people?
The interesting moral dilemma is what to do if the car decided it was better to run into a pedestrian and protect the driver than to run into a brick wall and protect the pedestrian.
There's no easy answer to that dilemma.
"I can text because my car will warn me if I run off the road or stop me before I hit the car in front of me"
"I don't need to slow down on this snowy road, ABS will help me stop safely"
"Sure, I'm driving too fast for this road, but my airbags and seatbelts will protect me if I crash"
- unknown object detected ahead, collision possible- likely
- dangerous object (car) detected approaching from rear with likely trajectory intercepting this vehicle (people easily forget multitasking/sensing like this is something an autonomous car should be able to do better than a human who can only do relatively intermittent serial scans of its environment)
- initiate partial slow down and possibly change path: make some decision weighting the two detected likely collision obstacles.
You do not have to slam on brakes and be rear ended, but speed is a major factor in fatal crashes, so even if you can drop 30% of your momentum by the time of impact and avoid the rear end, that's still a responsible decision.
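Rough numbers on that, assuming ~40 mph (the speed mentioned elsewhere in the thread) and roughly full braking at 9 m/s²; both figures are assumptions for the back-of-the-envelope:

    # Back-of-the-envelope: impact speed if braking starts t seconds before impact.
    V0 = 17.9          # ~40 mph in m/s (assumed initial speed)
    DECEL = 9.0        # m/s^2, roughly full braking on dry pavement (assumed)

    def impact_speed(brake_lead_time_s, decel=DECEL):
        return max(0.0, V0 - decel * brake_lead_time_s)

    for t in (0.2, 1.2, 2.5):
        v = impact_speed(t)
        print(f"braking {t:>3} s before impact -> hit at {v:4.1f} m/s "
              f"({v * 2.237:4.1f} mph)")
    # Even a single second of braking removes roughly 20 mph, which is broadly
    # the difference between a likely-fatal and a far more survivable pedestrian impact.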
And we can accept that sometimes cars are put in potential no-win situations (collision with two incoming objects unavoidable).
What's a negligent/borderline insane decision? Put a one second hard-coded delay in there because otherwise we have to admit we don't have self-driving cars since we can't get the software to move the vehicle reliably if it's trying to avoid its own predicted collisions.
(Another issue is an inability to maintain object identity/history and its interaction with trajectory prediction... personally, IMO, it is negligent to put an autonomous car on a public road that displays that behaviour, but that's just me)
Conversely, there are many things that are bird-sized that can do significant damage to a car and even be fatal for the car behind you. E.g., a trailer hitch caused one of the first Tesla battery fires; loose pieces of pavement have been known to be kicked up and kill someone in a following car.
* Classification: Vehicle (by radar). Path prediction: none; not on the path of the SUV. Radar makes the first detection of the pedestrian and estimates its speed.

* Classification: Unknown (by lidar). Path prediction: static; partially on the path of the SUV. Lidar detects an unknown object; since this is a changed classification, and an unknown object, it lacks tracking history and is not assigned a goal. ADS predicts the object's path as static. Although the detected object is partially in the SUV's lane of travel, the ADS generates a motion plan around the object (maneuver to the right of the object); this motion plan remains valid—avoiding the object—for the next two data points.

* Classification: Bicycle (by lidar). Path prediction: the travel lane of the SUV; fully on the path of the SUV. Lidar detects a bicycle; although this is a changed classification and without a tracking history, it was assigned a goal. ADS predicts the bicycle to be on the path of the SUV. The ADS motion plan—generated 300 msec earlier—for steering around the bicycle was no longer possible; as such, this situation becomes hazardous. Action suppression begins.
* The vehicle started decelerating due to the approaching intersection, where the pre-planned route includes a right turn at Curry Road. The deceleration plan was generated 3.6 seconds before impact
In this particular case I would be perfectly ok with that. If you can't operate a vehicle safely then coming to a stop or failing to get moving is fine.
In this case, we know there weren't many vehicles around.
This very scenario is a great example where I'd want a car to stop if it saw a deer or even a dog or armadillo.
> If the car braked every time it spotted another vehicle it would almost never move.
In defensive driving it's often taught that you are supposed to slow whenever you're approaching another vehicle and don't know what it's doing. You're supposed to exercise caution at intersections, and definitely supposed to exercise caution when passing, being passed, there are things in the road or other people or vehicles on the side of the road.
Counterargument: the safety driver is supposed to be prepared to take control at any time, should the vehicle do something unsafe.
It seems to me to be clearly criminal to use this code to run a fully automated environment (i.e. no safety driver). It's not clear to me what the expectations should be of the code when there is supposed to be an attentive safety driver in the vehicle.
I think the safety driver is going to jail, because they were watching videos instead of watching the road at the time their vehicle killed a pedestrian.
That's impossible. This has been tested in countless studies with train drivers and pilots, and it is absolutely impossible for a human to stay alert if an 'almost good enough' computer system is in the driver's seat.
By the time you're needed your situation awareness will be nil.
Even if this was constantly happening, it would give the safety driver some sense of purpose - their job would be constantly figuring out "is this a real thing or not" - and then they wouldn't be bored out of their mind and be watching videos.
I agree with that, and the NTSB should consider adding this to their requirements when approving test programs of this sort.
But stepping back, I think there's a very significant difference in culpability between "safety driver couldn't react in time because they zoned out" and "safety driver was watching a sitcom". In the first case, the driver was trying to do their job, and the nature of the ask made it difficult/impossible. In the second case, the driver was knowingly not doing their job, and was knowingly engaging in unsafe behaviour. We don't have any examples of a fatal accident involving the first kind of error, and this case is an example of the second kind.
> "something is moving closer to the car in a way we're not expecting" would be a great thing to show the safety driver.
Isn't that what you get by looking through the windshield using your eyeballs?
Maybe, but the bigger picture is if you hire people for low wages and give them impossible tasks, you're not paying enough for them to make a good attempt or be a scapegoat. The problem is management.
If they are truly trying to stick to the guns of "well the 'driver' should have been prepared to take over at zero notice", then their (potentially criminal) negligence is in how they present and market their product and not in the software. But it's one or the other.
In some cases in the most literal sense of that.
Yes, if this code was in a Tesla, it would be criminally negligent, because in the driver's seat of a Tesla is a consumer that bought a product called "autopilot".
Uber, and Waymo, and Cruise, are all _testing_ their systems, and that's why they installed safety drivers. There's no marketing here, and there is no customer -- the person in the driver's seat was employed to sit there and pay attention to what's going on, for the explicit reason that the car's computer is NOT yet capable of driving itself.
That's in marketing materials rather than the legal ones. Which is great for making regular people interpret it as "look, it can drive itself, I'll just fiddle with my phone", but doesn't put any of the legal responsibility on the manufacturer's shoulders. I'm guessing regulators will eventually have to weigh in on this.
It’s not reasonable for a human to sit around for hours doing absolutely nothing, then suddenly be thrust into an emergency with seconds to respond without warning. Most humans aren’t capable of that level of vigilance. If they aren’t watching a video, they’ll be daydreaming. It’s very unlikely they will be able to recover from any kind of situation requiring an immediate response. At a system design level, by placing a human in that role you have already failed.
Maybe if you take the most qualified humans with the best reaction times, they would have a chance, but the qualifications for that job would be more like those of a fighter pilot. This job is not recruiting our best and brightest, it’s recruiting people who want to sit around doing nothing.
This person’s role in the system isn’t to provide safety - it’s to absorb liability. Ergo, a fall guy.
> This is borderline criminal negligence.
Uber has told the world who they are: criminals. They have flouted many laws since inception.
Will they ever stop? Who will stop them?
I suspect there's a big problem in the other direction, too. If the system starts to brake every time it thinks it might need to, it will happen all the time. It might see false positives like this all the time and that's why it doesn't act on them right away.
If it's (appropriately) conservative about starting to brake it will be braking/slowing all the time. If it's not conservative, people or cars will occasionally get hit. The former could make people uncomfortable or carsick or create some subtler danger by stopping short needlessly. The latter might mostly work and mostly not kill people, except for when it does.
"They had to not react quickly because they keep mis-detecting things" is really not an acceptable justification. If that's the why (and it probably is) that car should never have been on the street.
The scariest thing is that it's so fucking familiar, this idea of technologists solving a range of simple cases and assuming the solution extrapolates to success in the real world. We cannot get into these cars if that kind of software development feels anything like everyday software development, where random web site bugs can be tolerated.
As a preliminary step to any regulatory approval Tesla should release every byte of data from their tests so we can analyze the scenarios and events that the software has dealt with, so we can second-guess them. I seem to remember a common criticism of Tesla is that it's kinda shitty to work there and I don't think the best work comes out of an environment like that.
We should know for sure what they/you mean by "seem to" and "very low." Trade secret protection is insignificant when public safety is involved.
Oh no, it doesn't work and Travis & friends have been lying for years. Wake me when there are consequences for the follies of rich white guys.
If your car always sees empty space ahead as non-empty space, you probably shouldn't let the car on the road until you fix that. Once you've fixed that, if the car sees that the space ahead is non-empty, even if it can't classify it, it should slow down well beforehand and warn the driver, and continue braking if the driver is asleep/watching YouTube. It is AZ; there is no rain or snow falling that would mislead the lidar. An object ahead - slow down, and stop if you can't navigate around it.
Presenting it as the AI-hard issue of misclassification is just attention misdirection from, and whitewashing/laundering of, the foundational issue of knowingly letting a car on the road without the basic safety level of "don't hit objects in front of the car". Similar to Boeing blaming the 737 MAX crashes on a failed sensor.
This is so understated. This car was basically driving in ideal conditions at night for a self-driving car. If it can't avoid a hit under these conditions, it clearly wasn't ready to be on the road.
One thing that self driving will "give back" is time to the occupant.
And the thing that correlates most with damage in human-driven accidents: speed.
This reeks of a type of thinking where you are relying on other parts of the system to compensate. You might expect to hear things like "it's okay, the safety driver will catch it". Speaking for myself personally, this type of thinking comes very naturally. I like to come up with little proofs that a problem is handled by one part of a program, and then forget about that problem. But in my experience (which does not involve writing anything safety critical) this strategy kinda sucks at getting things right. Dependencies change, assumptions become invalid, holes in your intuitive proof become apparent, etc, etc, etc, and the thing falls over.
If you are designing a safety critical system, something you really want to work, I don't think you should be thinking in the mode where each problem is handled by one part of the system. You need to be thinking in terms of defense in depth. When something goes wrong, many parts of the system should all be able to detect and correct the problem. And then when something bad does come up, and 9 out of 10 of those defensive layers each individually were sufficient to save the day so there was no disaster, you should go figure out what the hell went wrong in the tenth.
I apply encryption to storage. I can't tell you how often people try to push back on encrypting storage with stories like, "But we have access controls and auditing in place. And we have a deprovisioning process for our drives. Encryption is costly and redundant, so why should we do it?"
Through the years I can recount several after-the-fact incidents where encryption ended up saving their bacon because of weird and entirely unanticipated events. One notable one was where a hypervisor bug caused memory to persist to an unintended location during suspend/resume, and the only reason customer data wasn't exposed in production was because the storage target was encrypted. In another case the "streams were crossed" when assigning virtual disks to virtual machines. The (older) base disk images weren't encrypted in that case, but because the newer machines were applying encryption in the backend before the blocks were exposed to the guest OS, the "unencrypted" disk content came across as garbage (plaintext was "decrypted," which with the algorithms we were using was equivalent to encrypting), again preserving the confidentiality of the original disk images.
The concept of "belt and suspenders" is often lost on people when it comes to safety and security systems.
Oh, you have access controls in place? Great. What happens if they fail?
Oh, you have a deprovisioning process in place? Great. What happens when someone doesn't follow it?
Systems fail all the time. If your defense only has one layer, when that layer fails (and it will, eventually) you're SOL. Multiple layers of defense give you resiliency.
This is what Boeing did with Max. The airframe wasn’t stable in and of itself, and they relied on software to compensate. Terrible idea.
And there should be a definitive priority established between those layers so that, if one fails, the other 9 don't attempt to correct in different ways. It should fail from the most conservative to the least so that a false positive results in erring towards stopping the vehicle.
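A sketch of that most-conservative-wins arbitration (illustrative layer names and numbers, not any real stack):

    # Illustrative arbitration: every independent safety layer proposes a braking
    # level, and the most conservative (hardest braking) proposal wins.

    def arbitrate(proposals):
        """proposals: dict of layer name -> requested deceleration in m/s^2."""
        layer, decel = max(proposals.items(), key=lambda kv: kv[1])
        return layer, decel

    proposals = {
        "path_planner":         0.0,  # thinks the lane is clear
        "lidar_obstacle_check": 4.0,  # unclassified object partially in lane
        "radar_aeb":            8.0,  # closing object, time-to-collision low
    }
    layer, decel = arbitrate(proposals)
    print(f"braking at {decel} m/s^2 (requested by {layer})")
    # Any single failed layer can only make the car *more* cautious, never less;
    # a disagreement like this one is also worth logging and reviewing.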
I think one of the biggest lessons here is about the difficulty of relying on humans to maintain attention while covering an autonomous vehicle. Yes, this particular driver was actively negligent by apparently watching videos when they should have been monitoring the road. But even a more earnest driver could easily space out after long hours of uneventful driving with no "events" or inputs. And that could be enough that their delay in taking over could lead to the worst.
Certainly not defending the safety driver here - or Uber. But I think there's a bit of a paradox in that the better an AV system performs, and the more the human driver trusts it, the easier it is for that human to mentally disengage. Even if only subconsciously. This seems like a difficult problem to overcome, especially if AV development is counting on tracking driver interventions to further train the models for new, unexpected, or fringe driving situations.
One advantage the self-driving cars have over a human driver is that NTSB and Uber can yank the memory and replay the logs to see what went wrong, correct the problem, and push the correction to the next generation of vehicles. That's not a trick you can pull off with our current fleet of human drivers, unfortunately(1).
(1) This is not a universal problem with human operators, per se... The airline industry has a great culture of observing air accidents and learning from them as a responsibility of individual pilots. We don't have a similar process for individual drivers, and there are far, far more car crashes than air crashes so the time commitment would be impractical at 100% of accidents.
Why not run these systems in shadow mode to collect data, rather than active? Have the human completely in control and compare system's proposed response to human's. At my last job running a new algorithm in shadow mode against the current one was a common way to approach (financially) risky changes.
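A minimal sketch of that shadow-mode idea; the interfaces (candidate_planner, the brake-command dict, the divergence threshold) are hypothetical, and the point is only that the candidate's output is logged and compared but never actuated:

    # Hypothetical shadow-mode harness: the human drives, the autonomy stack only
    # proposes, and disagreements are logged for offline analysis.

    def shadow_step(sensor_frame, human_action, candidate_planner, log):
        proposed = candidate_planner(sensor_frame)       # never sent to actuators
        divergence = abs(proposed["brake"] - human_action["brake"])
        if divergence > 0.3:                             # tunable threshold
            log.append({"frame": sensor_frame["id"],
                        "human": human_action,
                        "proposed": proposed,
                        "divergence": divergence})
        return human_action                              # only the human acts

    # Run over recorded or live drives, then triage the log: every large
    # divergence is either a planner bug or an interesting edge case, and
    # nobody gets hurt finding out which.
    log = []
    dummy_planner = lambda frame: {"brake": 0.8}         # stand-in candidate
    shadow_step({"id": 1}, {"brake": 0.0}, dummy_planner, log)
    print(log)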
As somebody who used to regularly drive the I-5 between SF and LA, I can wholeheartedly vouch for this statement.
> A hardcoded 1 second delay during a potential emergency situation. Horrifying.
Also laughable, if it wasn't so horrifying. The self driving car evangelists always argue how much faster their cars could react than humans. It's basically their main selling point and the reason why these things ought to be so much safer than humans.
Sorry, but I as a human don't have a one second delay built in. That's an absurdly slow reaction time for which I would have to be seriously drunk to not beat it.
The average human apparently has a 2.3 second delay to unexpected, interruptive stimulus while driving (https://copradar.com/redlight/factors/IEA2000_ABS51.pdf). We almost never perceive it as such because we tend to measure our own reaction times from the point we are conscious of stimulus to the point we take willful action to respond to it, but the hard numbers appear to show that critical information can take 1+ seconds to percolate to conscious observation (remember, the brain spends a lot of time figuring out what in the soup of sensory nonsense is worthy of attention).
Whether this time is shorter or longer for humans is another question entirely (though human intelligence's ability to deduce intent from behavior and forecast the actions of other humans in traffic should give robocars a good challenge in that department as well). But in terms of raw reaction time after determining "I have to brake NOW", a human is definitely faster than one second.
To put things into perspective, the Uber spent 4 seconds after actually detecting the incursion trying to figure out whether it needed to respond, doing nothing, then a further second of enforced pause after concluding it did need to do something, until finally starting to reduce speed 0.2 seconds before impact.
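Putting the same timeline in distance terms, assuming the ~40 mph figure quoted elsewhere in the thread (all numbers approximate):

    # Rough distance bookkeeping for the published timeline (speed assumed ~40 mph).
    MPS = 17.9  # ~40 mph in m/s

    events = [
        ("first detection to collision recognized", 5.6 - 1.2),
        ("action suppression (deliberate 1 s hold)", 1.0),
        ("alarm to impact",                          0.2),
    ]
    for label, seconds in events:
        print(f"{label:45s} {seconds:3.1f} s  ~{MPS * seconds:5.1f} m")
    # The one-second suppression alone is ~18 m of travel, several car lengths,
    # with the system already convinced a collision was coming.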
> the time from incursion start to throttle release included the reaction time of the tow vehicle driver pulling the foam car (which was consistently less than 200 milliseconds)
> Handling of Emergency Situations. ATG changed the way the ADS manages emergency situations (as described in section 1.6.2) by no longer implementing action suppression. The updated system does not suppress system response after detection of an emergency situation, even when the resolution of such situation—prevention of the crash—exceeds the design specifications. In such situations, the system allows braking even when such action would not prevent a crash; emergency braking is engaged to mitigate the crash. ATG increased the jerk (the rate of deceleration) limit to 20 m/s³.
>Path Prediction. ATG changed the way the ADS generates possible trajectories—predicts the path—of detected objects (as described in section 1.6.1). Previous locations of a tracked object are incorporated into decision process when generating possible trajectories, even when object’s classification changes. Trajectories are generated based on both (1) the classification of the object–possible goals associated with such object, and (2) the all previous locations.
edit: This was also improved, clearly it was not ready when it rolled-out before these changes:
>Volvo provided ATG with an access to its ADAS to allow seamless automatic deactivation when engaging the ADS. According to ATG and Volvo, simultaneous operation of Volvo ADAS and ATG ADS was viewed as incompatible because (1) of high likelihood of misinterpretation of signals between Volvo and ATG radars due to the use of same frequencies; (2) the vehicle’s brake module had not been designed to assign priority if it were to receive braking commands from both the Volvo AEB and ATG ADS.
>Volvo ADAS. Several Volvo ADAS remain active during the operation of the ADS; specifically, the FCW and the AEB with pedestrian-detection capabilities are engaged during both manual driving and testing with the UBER ATG autonomous mode. ATG changed the frequency at which ATG-installed radars supporting the ADS operate; at the new frequency, these radars do not interfere with the functioning of Volvo ADAS.
>ATG also worked with Volvo to assign prioritization for the ADS and Volvo AEB in situations when both systems issue command for braking. The decision of which system is assigned priority is dependent on the specific circumstance at that time.
Even 20m/s³ doesn't seem all that aggressive to me. A good car can brake with around 9m/s² (depending on the state of the road) which means it's going to take 0.45s to go from 0 to full braking.
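For anyone curious what the 20 m/s³ limit actually costs, a quick numeric check using the same assumed 9 m/s² peak deceleration and ~40 mph starting speed:

    # Quick numeric check of what a 20 m/s^3 jerk limit costs versus applying
    # full braking instantly (assumed: ~40 mph initial speed, 9 m/s^2 peak decel).

    def stopping_distance(v0=17.9, a_max=9.0, jerk=20.0, dt=0.001):
        v, x, a, t = v0, 0.0, 0.0, 0.0
        while v > 0:
            a = min(a_max, a + jerk * dt)   # deceleration ramps up at the jerk limit
            v = max(0.0, v - a * dt)
            x += v * dt
            t += dt
        return x, t

    x_ramped, t_ramped = stopping_distance()
    x_instant, t_instant = stopping_distance(jerk=float("inf"))
    print(f"jerk-limited:  {x_ramped:5.1f} m in {t_ramped:4.2f} s")
    print(f"instant decel: {x_instant:5.1f} m in {t_instant:4.2f} s")
    # The ramp adds only a fraction of a second and a few metres; the one-second
    # action suppression dominates by comparison.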
"In these 37 incidents, all of the robo-vehicles were driving in autonomous mode, and in 33, self-driving cars crashed into other vehicles."
This is saying the self-driving cars crashed into other vehicles.
Edit: I emailed the register and they fixed it immediately. Nice!
Those still all seem to fall into the category "thing you should avoid hitting", though, right?
Why delete the history on a classification change?
Shouldn't classifications be tiered? In this case, while the system was struggling to PERFECTLY classify the object, it was clearly thinking it was something that should be avoided (oscillating between car, bike, other).
In this case, I would expect the system to keep its motion history. IMO, this could have prevented the accident because, although it didn't determine it was a bicycle/person until "too late", it had determined with plenty of time that it was maybe a car, maybe a bike.
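A sketch of what "keep the motion history regardless of the label" could look like; real trackers associate detections by position and appearance, but the persistence point is the same (class names and numbers here are illustrative):

    # Illustrative tracker: the motion history belongs to the track, and the class
    # label is just an attribute that's allowed to change without resetting it.

    class Track:
        def __init__(self, track_id, position, label):
            self.track_id = track_id
            self.history = [position]     # survives any reclassification
            self.label = label

        def update(self, position, label):
            self.history.append(position)
            # Tiered view: car/bike/unknown are all "things not to hit",
            # so changing our mind about the label keeps the history.
            self.label = label

        def estimated_velocity(self, dt=0.5):
            if len(self.history) < 2:
                return (0.0, 0.0)
            (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
            return ((x1 - x0) / dt, (y1 - y0) / dt)

    t = Track(1, (10.0, 40.0), "vehicle")
    t.update((9.0, 38.0), "unknown")
    t.update((8.0, 36.0), "bicycle")
    print(t.label, t.estimated_velocity())   # still knows it's moving toward the lane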
A) Should I avoid driving this car into this thing?
and then there are subsidiary questions that help to answer that, but would be fully obviated if you already had a good answer for A):
B1) Is this a human?
B2) Is this a vehicle?
B3) Is this unrecognizable?
The "natural category" here is "drive into" vs "don't drive into", not "human vs vehicle vs fruit stand vs other".
In the article, the A)-type question is whether the object has vanadium, and the B)-type questions are whether the object is a blegg (blue round thing) or a rube (red square thing). The distinction becomes stark when you know it has vanadium, but it doesn't neatly fall on the blegg/rube continuum.
That is, it doesn't really matter if the object history is "deleted" or not; if it can't associate a new data point with a previous history (by identification or predicted position), the practical result is the same as if there is no object history.
This could be a result of using velocity based tracking, which I don't know that Uber uses, but is a fairly standard method, as it's what raw GPS measurements are based on.
This sounds a bit strange, given that the entire technology around which autonomous driving is built is about identifying and recognising patterns in extremely noisy and variable data.
If so, how come these categories were not encompassed by a more abstract "generic object" whose relative position has been getting closer since first detection? That ought to have triggered the same braking scheme as, say, detecting an unmoving car ahead.
I'd go for : utter engineering malpractice.
But regardless, taking sensor classification (inherently error-prone) at face value is engineering malpractice.
This might be a little too nitpicky but it doesn't get wiped. It's simply no longer associated with that object because it's considered a different object. It's still a huge, glaring issue, obviously, but all the data is still there.
In this particular case, the "object" was identified as one type of object and so all of the data related to that was classified as "car" info, for example, and then, when it's reclassified, that additional data starts recording to the "bike" info bucket. The software should have been keeping track of certain data regardless of that classification but it's only seeing the latest bucket of data. If the tracking history got wiped, we wouldn't have the data to look back on to see how this was all happening.
That alone should have been grounds for immediate cessation of operation until a driver could take over, and for the system to be declared unworthy of operation on public roads until this problem was fixed. The differences between 'pedestrian with bicycle', 'vehicle' and 'bicycle' are so large that any system that wants to steer a vehicle should be able to tell the three apart at 50 yards of distance or even more.
That is the reason why regular drivers have to pass an eye test before they are allowed behind the wheel. If you can't see (or understand what you are seeing) you should not drive.
> And then top it off with systemic issues around the backup driver not actually being ready to react.
It's even worse than that! Once the human does take the wheel the computer stops doing anything.
So from when the human is alerted and grabs the wheel, until the human can react, the car isn't even slowing down!
That's like the worst of both worlds.
> Employees also said the car was not always able to predict the path of a pedestrian.
The brake inhibition was very intentional and was the result of in-fighting as well as engineers trying to make the Dara demo:
> Two days after the product team distributed the document discussing "rider-experience metrics" and limiting "bad experiences" to one per ride, another email went out. This one was from several ATG engineers. It said they were turning off the car's ability to make emergency decisions on its own like slamming on the brakes or swerving hard. ...
> The subtext was clear: The car's software wasn't good enough to make emergency decisions. And, one employee pointed out to us, by restricting the car to gentler responses, it might also produce a smoother ride. ...
> A few weeks later, they gave the car back more of its ability to swerve but did not return its ability to brake hard.
And then they hit Herzberg.
The UberATG leaders who made it through the Dara / Softbank demo likely vested (or are slated to vest) millions of dollars.
It takes about 300ms for your brain to react to an unexpected stimulus, so the alarm is useless in this case. Sad.
I'm increasingly convinced that virtually every unstructured problem in the physical world is an AI-hard problem and we won't be seeing fully autonomous driving for decades.
The steering mechanism would have to be modified obviously, but surely steering is a trivial part of the problem compared to actually figuring out where to steer to?
Would it have killed the developers to make the car sound its horn when it gets into this absurd 1s "action suppression" mode?
Based on news stories I found, she was glancing at a television show on her phone.
> make the car sound its horn when it gets into this absurd 1s "action suppression" mode?
If they added the suppression because there were too many false positives, that would just have resulted in the car honking at apparently arbitrary times. It's just converting the garbage signal from one form into another. It's still too noisy to be reliable.
1: https://www.azcentral.com/story/news/local/tempe/2019/03/17/...
> Vasquez looked down 166 times when the vehicle was in motion, not including times she appeared to be checking gauges [...] In all, they concluded, Vasquez traveled about 3.67 miles total while looking away from the road. [...] starting at 9:16, Vasquez played an episode of “The Voice,” The Blind Auditions, Part 5, on her phone.
I love how they went from "our vision system is too unreliable to have warning signals every time it doesn't know what's in front of it" to "okay let's do it anyway but just not have warning signals". Like it didn't make them stop and think "well maybe basing a self-driving car off of this isn't a good idea".
/* issue 84646282 */
sleep(60 * 1000)
1.2 seconds after hitting a pedestrian. That's a pretty poor reaction time. Typically you want to apply the breaks before you come into contact with a person.
Usually breaks are applied shortly after the moment of contact, but the brakes should certainly be applied earlier.
(I'm genuinely sorry, but I couldn't resist.)
Edit: ok, I was just making a bit of a joke at first, but I looked it up. Reaction times vary by person, but tend to be between 0.75 and 3 seconds. 2.5s is used as a standard, so I guess I have to concede that 1.25s is pretty good... I guess, for whatever that's worth.
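For a rough sense of what those reaction times mean in distance (back-of-the-envelope only; the ~40 mph figure is just for illustration, not a number from the report):

    MPH_TO_MPS = 0.44704
    speed_mps = 40 * MPH_TO_MPS                # ~17.9 m/s, illustrative speed

    for reaction_s in (1.0, 1.25, 2.5):
        distance_m = speed_mps * reaction_s    # road covered before any braking starts
        print(f"{reaction_s:>4}s of no response = {distance_m:.0f} m of travel")
    # ~18 m, ~22 m and ~45 m respectively -- a lot of road either way.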
So if my reaction was 5.7 seconds, I'd definitely have applied the brakes far too late. I conclude the total time from classifying the object moving into my way to applying the brakes was less than a second. (And btw, my car has an emergency braking / pedestrian avoidance system and it didn't trigger, so I was faster.)
We have a rare sight here: someone not being right, learning more about the situation, changing their opinion, and then making an edit about it all.
Kudos, and thanks for making this a better place for discussions :)
And understand why the designers felt this was okay... (assuming, of course, this was the actual reason for the delay. They may have had a legitimate reason?)
I hope it's not the case that the hazard analysis stated that the human in the loop was adequate no matter what haywire thing the software did.
As a controls engineer in the automotive industry, I can tell you that a 1-second delay for safety-critical systems is not atypical.
The expectation is that the normal software avoids unsafe operation. Bounding "safe operation" is difficult, so if an excursion is detected, there's essentially a debounce period (up to 1 second) to let the normal software correct itself before override measures are taken.
This helps prevent occasional random glitches or temporary edge cases from resulting in a system over-reaction, like applying the brakes or removing torque unnecessarily, that would annoy the driver and could itself cause unsafe operation.
Obviously there are still gaps with that approach. But there is supposed to be a driver in charge; and the intent is to prevent run-away unsafe behavior. It essentially boils down to due-diligence during development.
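The debounce pattern being described is roughly this (illustrative Python; real implementations live in ECU firmware, and the 1-second window here is just the bound mentioned above, not any particular product's value):

    class Override:
        # An "unsafe" excursion has to persist for the whole debounce window
        # before the override fires, so a few-millisecond glitch doesn't cause
        # an unnecessary hard brake or torque cut.
        def __init__(self, debounce_s=1.0):
            self.debounce_s = debounce_s
            self.unsafe_since = None

        def step(self, t, unsafe):
            if not unsafe:
                self.unsafe_since = None       # excursion cleared itself; reset
                return False
            if self.unsafe_since is None:
                self.unsafe_since = t          # excursion just started; start the clock
            return (t - self.unsafe_since) >= self.debounce_s

The obvious gap is what happens inside that window: as written, nothing milder (an alarm, gentle speed reduction) happens while the clock runs, which is exactly what people upthread are objecting to.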
So Volvo's engineers didn't think at all like you do / say. Their system was 0.1 seconds faster than Uber's at detecting it, 1.1 seconds faster if you factor in Uber's active suppression, and it would have braked 2.1 seconds sooner than the Uber did.
Why didn't it? Because Uber had deactivated its braking ability.
This is exactly why I keep saying that autonomous vehicles are not 10 or even 20 years away. More like 50-100 years away, if that. Same reason that, famously, a group of researchers in the 60s thought solving computer recognition of objects would take a few months at most, and yet in 2019 our best algorithms still think that a sofa covered in a zebra print is actually a zebra with 99% confidence.
Had a human actually been paying attention to the road, I can bet they would have started braking/swerving as soon as they saw something, even if they weren't immediately 100% certain it was a human. A computer won't until it's 99%+ certain, which is too risky an assumption given the state of visual object recognition.
> At least the car would still try to avoid a bicycle, in principle, instead of blindly gliding into it while hoping for the best.
"Don't know what it is, let's ram it."
Never mind not detecting a pedestrian, that in itself is terrifyingly incompetent and negligent.
A better question than why this happened: how often do these cars "see" a bicycle and decide to glide on by? How often do they see things horribly incorrectly, and we're all just lucky nothing happens?
Smoothing bugs out via temporal integration. The oldest trick in the book.
Elaine Marie Wood-Herzberg, 49, of Tempe, AZ passed away on March 18, 2018. A graveside service took place on Saturday, April 21, 2018 at 2:00pm at Resthaven/Carr-Tenney Memorial Gardens in Phoenix.

Elaine was born on August 2, 1969 in Phoenix, AZ to Danny Wood and Sharon Daly. Elaine grew up in the Mesa/Apache Junction area and attended Apache Junction High School. Elaine was married to Mike Herzberg until he passed away.

She was very creative and enjoyed drawing, coloring, and writing poems. She always had a smile on her face for everyone she met and loved to make people laugh. She would do anything she could to help you, and was there to listen when you needed it.

Elaine is survived by her mother Sharon, of Apache Junction, AZ; her father Danny, of Mayor, AZ; her son Cody, of Apache Junction, AZ; her daughter Christine, of Tempe, AZ; her grandchildren Charlie, Adrian, and Madelyn; her sister Sheila, of Apache Junction, AZ; and many other
That doesn't imply Uber has the right chops to solve the problem, but I hope someone does.
Look at Europe (or heck, NYC) for alternatives to a car-dominated society:
* walkable, mixed-use, dense neighborhoods
* public transportation (rail-based, in particular)
* car-free streets (and cities)
The solution to traffic deaths is not self-driving cars. It's moving away from the Levittown-style suburbs that have proliferated across the US since WWII.
Moving the world to traffic-free societies is not happening in a hurry, if ever. I'm writing this from a pedestrianised area in London but we still have plenty of cars.
Self-driving-type tech, however, has the potential to transform things in the next decade or so. Even if we don't get actual self-driving, the collision avoidance tech is getting quite good, e.g. https://news.ycombinator.com/item?id=21388790
Not impossible, but less than likely in the short horizon. We'll probably get working SDCs sooner.
We used to have streetcars. We tore them down. We can put them back.
And I haven't met anybody working on the problem who thinks it is easy. But a lot of them do think it's worth the hassle if it will save even a fraction of tens of thousands of lives a year (even if fully autonomous operation is decades out, the semi-autonomous driving enhancements that have come from the research are already saving lives). Adopting and enhancing mass transit is also an excellent idea, but I think it's unrealistic to assume that will work exclusively. America has had quite a few decades to decide mass transit is something that everyone will jump on to, and it seems to not be happening.
Public transport doesn't work in the US because of how garbage it is. If they spent half an effort building a proper transportation system, adoption rates would be different. I live in CA and we have some of the better transportation systems in the country. It takes me 1 hr to get to work via car or 2-3 hours via public transport. Solve that problem and you'll see people adopt it real quick.
Further, the real problem is people not obeying traffic laws. Not that they can't; they're choosing not to. An easy solution is to put a device in vehicles that automatically cites a person for things like crossing solid lines, merging with no signal, speeding, etc.
... And SDCs can be programmed to obey traffic laws.
As for jobs, we're not going to see jobs taken away by self-driving cars in the near future. More likely, we'll see a fleet of vehicles with driver assist technology derived from attempting to solve the self-driving problem that will make the lives of career drivers, such as those in the trucking industry, that much better and safer.
I agree that improvements to mass transit would also help. But those aren't mutually exclusive problems.
We have to keep in mind, it's not an either-or story, it's a statistics and numbers story. On the same day this woman was tragically killed, nine other people were killed in traffic accidents. They just don't make the news, because that tragedy has become so ubiquitous that we are utterly desensitized to it.
We shouldn't be.
How many deaths would be prevented by automatically citing a vehicle for breaking traffic laws? This is a much cheaper and simpler option than an SDC.
SDCs can be programmed to obey laws, I never said they couldn’t. However a software system will not reach the same level of reasoning a human brain can in our lifetimes. Compute capacity isn’t even close yet. So again, an SDC / Software System / AI is no replacement for a human brain.
We have driver assist technology now. No company in the world is going to keep a human driver on the books when they can be replaced by a machine. History is there as proof. It’s not a question of if they will be replaced, it’s how long it will take to make the technology robust enough.
They don’t make the news because death is a normal, acceptable thing for us. People die, accidents happen. Covering every person that dies would not at all be time efficient.
There’s also something to be said for darwinism here, but I’m not going to get into that.
That's how human cognition deals with things we don't think we have control over. There was a time when smallpox deaths were just "part of God's plan."
I disagree that the car crash body count is inevitable or needs to stay "normal", any more than some people's children just naturally succumbing to smallpox.
> There’s also something to be said for darwinism here, but I’m not going to get into that.
Yes, I definitely wouldn't. It's an offensive attitude to have around people who have lost friends and loved ones this way to imply they weren't good enough to live.
Suppose the CEO of Uber was a cannibal, and you framed letting him eat people as a necessary perk in order to keep him happy and the self-driving program on track. Would it be valid to say the number of people it's permissible for him to eat is exactly zero, even if it slows down the production of a truly self-driving car? I mean, what's one or two lives compared to 40,000 a year or whatever? There's a lot of uncertainty about the costs and benefits though, even if you strictly adhere to a utilitarian viewpoint.
I'm deeply excited by the possibilities of self-driving cars, and I would agree that it's necessary to take risks to make them a reality. The question is always if we're taking a necessary risk or just being reckless.
Uber has taken unnecessary risks and learned relatively little from them. They didn't need a fleet on public roads to tell them that object detection was terribly broken.