Former Uber Backup Driver: 'We Saw This Coming' (citylab.com)
121 points by mgiannopoulos on April 1, 2018 | 137 comments



This isn't just about Uber -- all semi-autonomous cars are going to have this problem. The people behind the wheel will become distracted, increasingly so as the technology becomes better and they need to intervene less and less often. Even people who are trained to pay attention while testing self-driving cars become distracted, so we can assume the general public will become distracted too.

The only viable autonomous cars are ones that will never require human intervention, because humans aren't able to intervene effectively and may make situations worse as they snap out of a distracted state. But we can't get to full autonomy without first testing the technology, and no artificial test conditions could replicate the complexity of the real world. Society will either learn to accept some avoidable deaths during the testing phase or will ban the technology. I suspect a few more incidents will result in a ban, and we will not have fully autonomous cars for a very long time.


Also known as the "Level 3 is inherently unsafe" argument. Waymo noticed this back in 2012: https://driverless.wonderhowto.com/news/waymo-was-right-why-...


Granted, but this wasn't some grand discovery by Waymo.

The human side of automation [0] is an issue that has been known and written about since at least 1983, first by Lisanne Bainbridge and later by Don Norman in 1990.

[0] https://www.jnd.org/dn.mss/the_human_side_of_au.html


In case others are wondering about the details of the levels:

https://en.wikipedia.org/wiki/Autonomous_car#Classification


Well, level 3 can be safe under certain conditions and still be world-changing.

For example, imagine a self driving truck that works perfectly, but only during the day, on highways, when it is not raining.

Such a system would still put millions out of work: firing half of the truck drivers becomes possible if something works when it is sunny, on the highway.


In the physical world there's no such thing as a system with preconditions that can "work perfectly". There will always be a boundary region where it is hard to say whether the precondition is present and the system is unpredictable.

What we really mean when we say a system is reliable/perfect is that the conditions which can cause failure are so rare that they can be neglected. Any system that depends on things that are inside normal experience - rain, day/night, errant pedestrians, earthquakes, and so on - can never be trusted to act reliably/perfectly. A "reliable" system might be excused for messing up under conditions such as meteor strikes or falling space stations.


There is going to be a higher threshold for self-driving vehicles, because random accidents by random people are not narrative-inspiring, but for self-driving vehicles all accidents will be the fault of The Corporation. Every single accident will be the focus of world media fascination and speculation to derive how the motives of the corporation's singular supermind allowed it to happen, which will turn out to be that it was too greedy, scheming, uncaring, lazy, etc. There will be intense focus on a trickle of rare accidents because they are caused by a powerful corporate overmind that is potentially betraying us all, and little interest in a greater number of random accidents caused by no one of particular recognition for no discernible reason. Which is exactly what we've seen so far.


> random accidents by random people is not narrative-inspiring but for self-driving vehicles all accidents will be the fault of The Corporation

I think we can conclude, absent conditions of wilful neglect, that this hypothesis is false. I thought this, too. But people are reacting reasonably to Tesla and Uber’s, erm, faux pas.


I'm inclined to say "too reasonably", given that the statistics seem to imply a much higher risk from the self-driving software than from a human driver (Uber doesn't have many miles driven, and Tesla is using overall-Tesla statistics to hide the Autopilot statistics).


The pedanty here is insane.

Imagine if instead of saying the word "perfectly" I said "works better than humans".

Do you reject the idea that there are situations, such as sunny highways, that are much much easier than everything else? And that if we can solve this much easier problem, it would still be world changing?

Why couldn't that be trusted to work better than humans? It is a much easier problem.


> The pedanty here is insane.

One person's pedant is another person's clear conscience. There's massive disagreement in the self-driving debate as to the extent to which we are dealing with safety critical systems. As you can no doubt tell, I'm firmly on the safety critical side.

I've a fair bit of experience with designing and shipping critical infrastructure for emergency services. One has to ship, but in my experience the majority of people don't have the necessary eye for detail to make a safe system. This is why we have process: to allow fallible people to ship safe systems (and to avoid delusion in the minority who do have an eye for detail). The self-driving field is still developing their processes, so they have to be conservative with their shipping.

Many of those arguing for self-driving are underestimating the safety of a typical human driver. The bar that a self-driving system has to meet is very high. If conditions are easy for a self-driving car then they are probably easy and safe for a human as well.

Accidents are skewed towards a subset of poor conditions involving speed, alcohol and fatigue. Is it fair to replace a safe driver who avoids speed, alcohol and fatigue with a system that, whilst safer than average, is still less safe than that driver in those specific circumstances?

Maybe an interim use of self-driving technology is to have the person driving, and the self-driving system in a co-driver/critique mode acting as an "unsafe driving detector"?


One can just look at the state of autonomous robots in industrial plants: they perform a handful of functions, they have limited reach, and they have predictable patterns, and yet when you add humans to the mix all kinds of dangerous conditions arise, to the point that most robots sharing space with humans need to be caged.

This is our current state of the art with automation and safety, and yet we allow companies to slap on a radar and a pretend safety driver and hurl a ton of metal down public roads.

Autonomous vehicles will undoubtedly at some point be safer than humans, but shortcutting known limitations, like radars' trouble with dodging static objects, because 'drivers are the backup' is lunacy. The recent deaths are just a symptom of people who have never worked in safety system design playing fast and loose with human lives, and the companies that enable that should be made an example of by the courts.

Some years ago serious automakers did their analysis and came up with the concept of designing the anticollision system first and using it as a backup to the autonomous driving technology; that has somehow been lost in the race for profitability.


"this is our current state of the art with automation and safety and yet we allow companies to slap a radar and a pretend safety driver and hurl a ton of metal down the pubblic roads"

This is what bugs me most: Who on earth ever thought it's a good idea to allow a company like Uber with their history of corner cutting and shady business to test such dangerous technology on public roads?

The mind boggles.


I’m confused about this hypothetical: when it starts raining, or in any other adverse condition, thousands of truckloads are just parked and delayed until it improves? I don’t see how this would put anyone out of work.


Imagine this on the highway in the desert. Is it ever going to start raining then?

Or imagine if you just look up the weather ahead of time and only run the self driving trucks if there is a less than 5% chance of rain.

These situations are still very common. You could replace all trucks on the highway in any desert climate, for example. And that's still many billions of dollars.


What's the point of a technology that only works under the least challenging conditions, that is probably less than one percent of the actual occurring conditions - and where the current solution, human beings, work better anyway because they don't have to stop for rain?

What you're imagining is a car on a fixed track - a railcar. We already have trains, but the US does need to build out its railways, yeah.


Because it is not 1% of circumstances. It would be more like 50% of circumstances.

There are lots of places where it is sunny, not raining, and daytime.

Just don't run the trucks in self-driving mode during the other times, when they don't work, and we could still save up to 50% in labor costs.

Obviously, though, the market for self-driving taxis is different than for self-driving trucks, but once again, we could just use them only for trucks and still save trillions.


What's an acceptable time period for autonomous cars to fail catastrophically on the road until they are better than humans at driving safely?


> The pedanty here is insane.

You mean pedantry :). -- Admiral Pedant.


Truck drivers do a lot more than just drive trucks: http://marginalrevolution.com/marginalrevolution/2018/02/wil...


That wasn't convincing. Obviously someone else (or a machine) would have to load/unload. Sure, there will be routes that don't automate well, where the destination doesn't have the knowledge or scale to take care of loading etc., but for the most part it would be trivial compared to getting the truck to drive.


This kind of reaction is totally baffling to me.

I’ve done quite a bit of manual inventory work during college/university holidays, and even with such limited experience I can’t even begin to explain how wrong you are, because it would go on for hours.

In a neighboring country which is pretty much considered a leader in industrial automation (Germany), it’s almost mandatory for engineers to spend months in low-level jobs to get a grasp of the practical real-world problems of the work.

This is pure Silicon Valley overconfidence. The software "move fast and break things" approach really doesn’t translate well to hardware. And in this case I think Uber will learn this lesson the hard way.


Could you at least try to make your point?

All I said was that the act of loading/unloading etc. would be done by different people than the truck driver.

Also, we already have semi-automated warehouses; it is not unimaginable that the drones driving around with boxes could be adapted to be able to help load a truck. Probably with the help of another machine. Not saying that this process needs to be void of any human at all - just that the truck driver wouldn't be part of it.

I really didn't say that the inventory work would just disappear - it would just be moved from the truck driver.


Well, first, the research field of automated warehouses is huge, so it’s very reductive to think that what happens after the truck is the easiest part.

Secondly, truck drivers are usually in charge of the final delivery to the shop or customer, and have to deliver goods to buildings that are not by any means standardized.

Having a human on board who knows how to operate the truck and how to properly unload it is a feature that will probably be missed. It would be pretty unwise IMHO to rely on external people to "not fuck up" when handling heavy objects around a costly automated truck.

So maybe I am totally wrong, but if we concentrate our current R&D on building expensive automated trucks so that we can reduce costs by firing drivers, instead of focusing our R&D on producing more economical and ecological trucks and alternative transportation... we might as well be fucked as a species.


Well, obviously the truck driver won't be in charge of the final delivery, and if the buildings are not by any means standardized then don't use an automated truck, or do standardize that location. Simple as that.

> It would be pretty unwise IMHO to rely on external people to "not fuck up" when handling heavy objects around a costly automated truck.

I agree that this will be an issue and the people on the ground would need to be educated. But I hope you don't think this problem is unsolvable in practice. We somehow do it for aircraft (I don't think the pilot does the actual leg work there).


"standardize that location. Simple as that."

Again, that’s neither simple nor cost-effective by any means. You are dealing with a lot of externalities here.

Logistics is a chain: you can’t replace a link and expect that the others will magically adapt to the new element. It’s like an API: you can’t expect users to be happy if you break all compatibility for the sake of progress. And again, with an API only software has to be adapted, which is far easier than hardware.

I don’t doubt that sometime in the future, when all the caveats have been addressed and some standards have been defined, yes, you will have automated trucks. But it’s not simple, and I don’t see them coming for at least 15 years, and in any case after automated passenger cars.


If the economics don't work out, don't do it. This won't be an over night transformation but rather decades. You start where the circumstances are beneficial.

Perhaps the first to adopt this would be postal services and large stores with many outlets throughout a country, with their own trucks that do the same route every day. They have control over each destination, it would most likely be easier to work alongside existing logistics, and everything is kept within the same company.


On that we agree; I was just pointing out that it’s not as simple as you initially thought.

And sorry if I’ve been boring; it’s an occupational habit from geography, my first field of study. Geographers tend to focus on systemic interactions rather than on individual components.


The ice road truckers will still have a job!


I’ve been pretty convinced for a while that this is effectively how it will play out: the "I, Robot" model of designated areas/highways that are autonomous and highly controlled/protected, while surface streets remain human-controlled for a long time.


> For example, imagine a self driving truck that works perfectly, but only during the day, on highways, when it is not raining.

LOL. The other day I was in Mexico on a cave diving trip. It rained, short sudden downpour, out of nowhere. The carnage on the highway just in the few miles from the dive shop to the cenote was insane. No fair-weather robot would stand a chance in those conditions.


At least level 4 is required for that.


> But we can't get to full autonomy without first testing the technology, and no artificial test conditions could replicate the complexity of the real world.

Actually, I think a test environment could be a lot more complex.

It's mind blowing to me that there doesn't seem to be some big track somewhere with automated crash test dummies riding their bikes across the street in the dark. Automated cars running red lights and stop signs. Automated accidents in front of self driving cars. Automated kids chasing a football across the road.

It's mind blowing to me that governments do not require self driving cars to do well on all these tests before they allow the cars on the road. Governments should prepare these tests, and car technology should pass the tests before it is allowed on the road.

And when a car fails any real world situation, we can step back and create new tests.

Really: We can test to any complexity we want. A plane landing on the road? A flood? A herd of sheep or a flock of birds? A dust storm or other weather that might prevent the car's sensors from working?


There are test tracks. Both Waymo and Uber have publicly said that they have fake cities to test the code https://www.theatlantic.com/technology/archive/2017/08/insid...

http://www.businessinsider.com/ubers-fake-city-pittsburgh-se...


I guess there are no cyclists crossing the road unexpectedly in Uber's fake city.

Do they have crash test dummy kids running out on the road? Do they have road work with giant potholes?


Not that I know too much about machine learning, but even in my very limited experience with homework we came across the problem of overfitting: where our model fits the noise of the training data too well but will not perform well against other real world data.

I can't even imagine the amount of data being generated by the fleets at Waymo and others, but compared to all the infinite possibilities I must assume their data set is tiny.
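
A minimal sketch of that idea (a hypothetical toy example, nothing to do with Waymo's actual pipeline): fit the same noisy data with a straight line and with a high-degree polynomial, and the flexible model wins on the training set but loses on fresh data.

  import numpy as np

  rng = np.random.default_rng(0)

  # True relationship is linear; measurements carry noise.
  def sample(n):
      x = rng.uniform(-1, 1, n)
      return x, 2 * x + rng.normal(0, 0.3, n)

  x_train, y_train = sample(10)    # small training set, like a test track
  x_test, y_test = sample(1000)    # "real world" data

  for degree in (1, 7):
      coeffs = np.polyfit(x_train, y_train, degree)
      train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
      test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
      print(degree, round(train_mse, 3), round(test_mse, 3))

  # The degree-7 fit has the lower training error but the higher test
  # error: it has memorised the noise in the 10 training points.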


I can't imagine test tracks data being in the training set, so overfitting is irrelevant here.


> It has a giant roundabout, fake cars, and roaming mannequins that jump out into the street without warning

Well, there you go.


You can test all you want on a test track, but there is no substitute for real-world driving conditions. With AI and ML, you'll get what you measure -- a bunch of self-driving cars that pass the tests perfectly but are unprepared for the real world.

Normally it's fine to test by using models of the world rather than the real thing. But in the case of self-driving cars, driving conditions are too complex to simulate and people could die if things go wrong. My point is that we are left with a problem where the only viable test case is the real world, but it's likely that after a few more incidents people will favor banning self-driving cars rather than letting them improve at the cost of human lives.


With enough money and effort, I believe test tracks could be made nearly indistinguishable from real world conditions.

At the very least a test track should be able to teach the AI how to react to a cyclist crossing the road unexpectedly.


Proper development requires a conservative and methodical approach. Uber is racing to be the first, rushing to launch before properly testing cases like those you enumerated. As they would say, "Move Fast and Break Things"


People


fyi -- people are working on this: https://mcity.umich.edu/


Most autonomous vehicle companies are approaching the problem backwards. They're trying to build a vehicle where the computer is driving until it gets confused, and then asks a human (either in the vehicle or remote operator) to take over with little notice in an emergency. But we could get greater safety benefits sooner by having the human driver always drive with the computer overriding control inputs when it detects an unsafe situation. The basic concept is already proven with stability control systems and, more recently, front collision avoidance (automatic braking) systems. We should focus on expanding and extending those systems to cover more situations and the full range of vehicle control axes.
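
A toy sketch of that architecture (all numbers and names hypothetical; production collision-avoidance systems are far more sophisticated): the human's inputs pass through untouched unless the machine predicts an imminent collision.

  # Hypothetical supervisory-override loop: human drives, computer vetoes.

  def time_to_collision(speed_mps, gap_m):
      # Crude constant-speed estimate; real systems fuse radar/camera tracks.
      return float("inf") if speed_mps <= 0 else gap_m / speed_mps

  def control_step(human_throttle, human_brake, speed_mps, gap_m,
                   ttc_threshold_s=2.0):
      """Return the (throttle, brake) actually sent to the actuators."""
      if time_to_collision(speed_mps, gap_m) < ttc_threshold_s:
          return 0.0, 1.0                 # override: cut power, full brake
      return human_throttle, human_brake  # otherwise the human drives

  # Closing on a stopped car 20 m ahead at 15 m/s gives a TTC of ~1.3 s,
  # so the computer overrides:
  print(control_step(0.5, 0.0, speed_mps=15.0, gap_m=20.0))  # -> (0.0, 1.0)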


That's impossible in reality, because the computer doesn't have a God's-eye view of the environment - it can only predict things with a degree of confidence. It can't say for sure whether you'll get T-boned at an intersection you enter at 35MPH the moment the light turns green, so it won't reliably save you from a collision. A fully self-driving system, on the other hand, won't enter the intersection without waiting 1-2 seconds, and would lower its speed below the speed limit when visibility is limited. A system that constantly keeps braking 'for no reason' won't get very far in the real world, because people would be very frustrated with it.


It's not at all impossible. It's already being done. Existing vehicles with forward collision avoidance systems are working well today.

We'll never reach zero risk and you can always contrive some scenarios where automated systems wouldn't help. But it's still worth pursuing incremental improvements. For example, street sign recognition is getting pretty good. In a few years it should be possible to build a system that detects when the driver is about to run through a stop light or sign, and automatically brake.


There's more money and power to be had by taking control away from people.

Fundamentally that is what's going on here. This is a massive power grab. This is convincing people to give up control of their transportation under a facade of improved safety and convenience.

Imagine streets full of powerful multi-ton wheeled drones whose operating software relatively few people have control over.

Ignoring conspiracy scenarios, just imagine how 0-day hacks of that system manifest.

None of these situations are possible until you convince everyone to take ownership of one of these robots, or at least accept and utilize a ridesharing fleet of them.


As other people sometimes say, there can be slow self-driving vehicles: for the elderly, for tourism, for moving things near home, or just for parking/unparking.

In the meantime, level 5 automation can mature in test areas.


> there can be slow self-driving vehicles.

This is not a safe option, particularly on certain roadways. Accidents increase when you have higher levels of relative speed difference between lanes of traffic. Having "turtle mode AI cars" isn't going to help us.

Speaking of the elderly... they often end up in fatal accidents even at low speeds. Their frail bodies combined with strong safety restraints are not a good combination, particularly in the rear seats, where their level of risk increases to nearly 2x that of the front seats.


I meant reaaally slow. Think not-road-accepted slow. Golf cart speeds at 'worst'. Maybe these aren't cars. But you can still transport some stuff and people in specific places and usage.

Otherwise I agree with you.


You're referring to Neighborhood Electric Vehicles. Those operate under special rules in many areas.

https://en.wikipedia.org/wiki/Neighborhood_Electric_Vehicle


Great comment. Uber's behavior shows it is interested in securing a monopoly of local transportation markets, not self-driving technology. Self-driving technology is a means to that end. If they cared about self-driving technology itself they would be starting with much smaller applications and working their way up rather than forcing themselves on to our streets as quickly as possible before their funding runs out.


While I agree with much of what you say, I think we can have autonomous vehicles without having to be 100% independent of human intervention.

It will require an effective "early warning system" which foresees issues it may not be able to process -- however, this will require more inter-automobile and extravehicular communication (GPS, other inputs). Relying only on what the vehicle has on-board will probably not be good enough, for now.

These supporting systems would communicate "trouble ahead" (ambiguous information, incomplete information, lack of information, unknown information, etc.)


I think you're describing SAE level 4, while the parent is arguing that level 3 is unsafe. Level 3 means the user has to take over from an active driving situation within a pre-specified time period, whereas level 4 means the car will always be able to hand over in a "safe" state (and thus with unbounded response time).


Human attention vs low-probability events is a known and solved problem. I've heard the TSA does this. You simply introduce fake events into the system at a frequency judged to keep human operators alert and responsive.
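
A rough sketch of what such an injection scheme might look like (entirely hypothetical, modelled loosely on the TSA's threat image projection, not on any real AV stack): synthetic take-over alerts fire at random intervals and the operator's acknowledgement time is logged.

  import random
  import time

  def run_shift(mean_gap_min=20, shift_min=240):
      """Schedule fake 'take over' probes across a (simulated) shift clock."""
      t = 0.0
      while True:
          t += random.expovariate(1 / mean_gap_min)  # Poisson-like gaps
          if t >= shift_min:
              break
          started = time.monotonic()
          print(f"[{t:5.1f} min] SYNTHETIC ALERT: confirm you have the wheel")
          input("press Enter to acknowledge > ")
          print(f"acknowledged in {time.monotonic() - started:.2f} s")

  run_shift()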


I think, however, that the other way round does work. That is, computers do stay alert and are able to take over on very short notice.

This is essentially human augmentation. The regular collision detection in many cars now is a good example. It's a relatively simple system that should stop a distracted driver from crashing.

Musk said something similar for the Tesla trucks: the autopilot can take over if the driver falls asleep or becomes unwell. Again, this is augmentation.

The only way to keep the driver alert is to have them driving all the time.


They don't >have< to have this problem. The problem is that the robot should be watching the person, not vice-versa.

This however, can't be sold as 'almost self-driving' to a gullible public.


I wonder if we can gain any insights from experiences with airplane autopilots.


As I understand it, even if an autopilot drops out completely and without warning, you have a reasonable amount of time to take back over -- worst case scenario you just glide, and you have a reasonable separation from anything else in the skies. With a car, if you go even 5 seconds without control, there is a good chance you'll have hit something. It's a very different environment with different constraints.


Unfortunately, the situation there isn't as good as it should be. It isn't always clear which autopilot mode the aircraft is in, and the pilots no longer get enough flight hours in the degraded modes.

This contributed significantly to the Air France 447 crash in 2009.

see: https://99percentinvisible.org/episode/children-of-the-magen...

and more broadly https://www.youtube.com/watch?v=pN41LvuSz10


Airplane autopilots operate in a completely different environment, in which planes' flight paths are set and monitored by Air Traffic Control. A plane on autopilot would never have to dodge a pedestrian, and would be separated from other planes by thousands of feet.


> in which planes' flight paths are set

They're 'set' in the sense that if communication is lost between the pilot and ATC, there's a pre-determined and, more importantly, pre-authorized path for the aircraft to fly.

> A plane on autopilot would never have to dodge a pedestrian

It happens quite a bit, which is why we invented TCAS. [1] A common misconception is that _all_ traffic is required to file a flight plan; that's only true for IFR (instrument) traffic and not for VFR (visual) traffic. So an IFR flight can, and will, encounter VFR flights that interfere with its flight.

> and would be separated from other planes by thousands of feet.

In certain situations, in transit regions and during clearance/delivery, this is not true.

1: https://en.wikipedia.org/wiki/Traffic_collision_avoidance_sy...


But it's not the autopilot that's taking evasive action, as would be the case in a self driving car. TCAS merely advises the pilot that there's traffic that's too close, and instructs the pilot to take evasive action (climb, descend, etc.).


An airplane has never had to dodge a pedestrian.


Since you said "never", I'll just mention that there are videos on Youtube from skydivers who would disagree with you.


Are skydivers really pedestrians though?


If skydivers aren't pedestrians, then what exactly would be considered a pedestrian in the sky?


Skydiver on a treadmill, maybe? Not all words have to make sense next to each other.


Au contraire.

https://flightsafety.org/runway-incursions-down/

But realistically it's not like all flights could be done completely on autopilot. It's not uncommon for a flight plan to change due to delays at an airport.


Surely this is a solved problem in several other contexts/industries?


If it'd been a Waymo or Tesla car I'd have been more likely to give some benefit of the doubt. Driving is hard and most of these systems already make fewer mistakes than human drivers.

But Uber never has been and never will be trustworthy. It's a toxic organization that will continue moving fast and breaking things - sometimes intentionally - until it's stopped by either the market or the government.


> Driving is hard and most of these systems already make fewer mistakes than human drivers.

When driven in a small number of states that have beneficial weather conditions. I don't think we've done _nearly_ enough testing to come to this conclusion yet.


Even so, the places that Tesla and Waymo primarily drive - while "easy" for an autonomous car - should then also be easier for human drivers. His point is that they make fewer mistakes than human drivers. In the end that is all we need from these cars. As long as they make fewer mistakes than us they are a success.


> Even so, the places that Tesla and Waymo primarily drive - while "easy" for an autonomous car - should then also be easier for human drivers. His point is that they make fewer mistakes than human drivers.

Genuinely curious if you've come across accident statistics that break down human miles into categories similar to the kind of miles autonomous cars drive.


Interesting to see how little customers can do to stop them. As they are always in the red, they have never attracted enough customers for their business case. Nevertheless they are always able to get fresh money. The customer is irrelevant and left out of the equation. It is the shareholders who bet on the future no matter how dire the present is. That really speaks volumes about how capitalism works, and it is depressing that there is nothing you can do to boycott this company.


Why should we need to boycott a company like this? The government should prevent them from operating; otherwise it has failed at its job. In this case, this has led to a death, which is still probably not enough for the government to intervene permanently. Seems to me like the failure here is completely with the government failing to do its job of regulating and protecting people. Given enough cash, I'm not surprised, and am prepared to see many more people die at the hands of companies like Uber without any serious consequences whatsoever for the company.


Capitalism has resulted in us drowning in capital. This seems to me to be a statement about its success, not its failure(s). Would the alternative (scarce, expensive capital) be better?


I am pretty sure you are not talking about the majority here when referring to "us".


The numbers so far, including human intervention rates, do not support the claim that autonomy beats humans. With a limited dataset, the suggestion is very much in the other direction. These vehicles may someday be better than us, but “someday” isn’t today. In particular, I think Uber had a human intervention on average every 13 miles.

Edit: I’d love to hear a counterpoint, although of course I accept that’s not required. Like the discussion about fusion, people seem to argue from a point in the future when all of the problems and limitations of today are gone. Let’s try arguing from what’s possible now instead.


Not today, but don't worry: after these robot cars kill thousands more innocent people they will be safer. All those killed will just help us learn to improve and make progress.


If you haven't heard of it, you may be interested in the trolley dilemma.

https://theconversation.com/the-trolley-dilemma-would-you-ki...


The trolley dilemma is something that people are not very receptive to in real life, because the very fact that someone presents such a dilemma leads to suspicion about the accuracy of the number of lives saved vs. destroyed.

The type of mentality that would sacrifice one life to save five is instinctively assumed by a lot of people to be the sort of mentality that would deceive themselves and/or others about the correct statistics and the uncertainty thereof. Assuming the ratio of 1:5 to be accurate and worthwhile is largely missing the real issue, in my opinion.


"noncompliant actors" is the Uber term for pedestrians in the road.

I understand the computer science urge to abstract the term, but when you're talking to the drivers of the cars this is, in retrospect, an awful term.

Rather depressingly, all I can think of is the ED-209 from RoboCop that shoots citizens who don't comply.


Where did you hear this?

Caltrain calls incidents where a train hits a pedestrian "trespasser strikes". I always thought this was awful.


From the article:

> In it, they are trained to keep hands hovering near the wheel at all times so that they can quickly take control when the car does not safely respond to dangerous road conditions or “noncompliant actors,” such as people walking in the roadway


In Japanese rail, conductors and drivers periodically point with their hand [1][2] to indicate awareness of the task. Reading the article, perhaps these distracted test drivers could use a similar approach.

[1] https://en.wikipedia.org/wiki/Pointing_and_calling

[2] https://www.youtube.com/watch?v=9LmdUz3rOQU


Huh, I had a bus driver in Seattle (not Japanese) who did something similar. He would point at and call out all the major intersections, speed limit signs, etc. I honestly thought it was some kind of neurosis/obsessive compulsive disorder, but seeing this exact same behavior makes me wonder if he actually picked it up as a trick to increase his awareness while driving.


This is one of the strategies they use on some of the drivers in the TV show "Canada's Worst Driver", to get the ones scared of driving to regain control, convince the distracted ones to watch the road, and teach the ones who don't know where to look how to look at everything.

(They don't do any pointing though, because that would require taking their hands off the wheel)


Autonomous cars are not coming.

A certain amount of overfamiliarity with Western driving conditions blinds these attempts to the reality that there are two worlds on the roads: the legal and socially manufactured layer, and reality.

That is why looking at the actual problem space, like nations where road rules tend to be ignored, makes it clear what you actually have to solve for.

It has been highlighted before, but the autonomous car can only work if the road itself is active/aware.


Societies (and therefore road systems) will adapt, because speed and safety. Roads will be changed to cater to the needs of self-driving cars: smart traffic lights, clear lane markings, clear signage, fewer level intersections, dedicated lanes. I can think of dozens more changes which make it easier and safer and more deterministic for computers to drive cars.

Development of these changes will be spurred once self-driving cars can make more efficient use of the road; the immense pressure of society to get rid of traffic jams will make sure of that.

Maybe some nations won't implement these things nationwide with 100% coverage, but they certainly will implement changes for certain roads. Highway roads first, then collector roads, and lastly neighborhood roads.


In America, at least, many (most?) roads and highways are already neglected and getting worse. This is, so we hear, due to a lack of money to keep our roads properly maintained. If we don't have the money to do that, where is the money to retrofit every road in the country to accommodate autonomous vehicles going to come from?


It's a good question since most of those cars are electric.

Taxes typically taken in at the gas pump are being skirted by electric cars. Do we just start charging much more for registration? Especially because these cars have the potential to operate most of the day (except when charging), so they will be causing more wear than a normal car.


Passenger vehicles (ICE or electric) cause negligible wear on roads. The vast majority of road wear is caused by heavy trucks and other large vehicles like buses. There's a non-linear relationship between vehicle weight and road damage.


Proportional to the fourth power, actually

http://www.pavementinteractive.org/equivalent-single-axle-lo...

An 18-wheeler causes 10,000 times the wear of a car, approximately.
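
Back-of-the-envelope version of that estimate (the axle loads are rough assumptions: roughly 2,000 lb per car axle versus the 18,000 lb reference axle used in the ESAL model):

  # Fourth-power law, back of the envelope.
  car_axle_lb, truck_axle_lb = 2_000, 18_000

  per_axle_ratio = (truck_axle_lb / car_axle_lb) ** 4   # 9**4 = 6561
  print(per_axle_ratio)

  # Per vehicle: a loaded 5-axle semi vs a 2-axle car. Treating every truck
  # axle as fully loaded overstates things, but the order of magnitude holds:
  print(5 * truck_axle_lb ** 4 / (2 * car_axle_lb ** 4))  # ~16,000x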


It's not just about road wear. Just installing and maintaining the support infrastructure (traffic lights, road signs, cameras, etc.) and planning the whole system would also count for a big chunk of expenses.


Assuming 40 mpg equivalent, gas taxes for 2 years (the period of a registration in NY) would come to about $300 for 15K miles. That may seem like a lot compared to some states, but I know some charge substantial property taxes on cars anyway. And some states already have special charges of $100 or more for hybrids and electric cars.
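
Spelling out that arithmetic (the per-gallon rate is my assumption, chosen to match the parent's figure; NY state plus federal gas taxes were roughly in that range):

  miles, mpg = 15_000, 40
  tax_per_gallon = 0.80            # assumed combined state + federal, $/gal

  gallons = miles / mpg            # 375 gallons over the 2-year registration
  print(gallons * tax_per_gallon)  # -> 300.0, i.e. about $300 in gas taxes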

I'm not sure what you are referring to when you say electric cars have the potential to operate most of the day. How is it that gasoline cars don't?


Referring more to the self-driving aspect. Gas cars can as well but most services are trying to go electric.


In New Zealand, Road User Charges (RUC) are charged separately on diesel vehicles, where petrol vehicles typically pay for them at the pump.

Presumably, you would pay the RUC in the same manner for electric vehicles at some point, if it isn't already in place. (I don't own one so don't know if they're already required.)


In America, I predict society will realize it needs far fewer roads. It's much better to have fewer but much better quality roads.

I'm unsure of how this change will get traction. I suspect states will eventually explain their infra budget cuts.


If everybody were to give up driving, wouldn't we still need highways for freight?


This is a great idea, then your city could be "smart car certified" and if your city doesn't meet the requirements, then your smart car doesn't allow its autopilot to be turned on.


If incidents only occur when the driver is distracted then it will always look like the cause was driver distraction even if they were diligent for 99.99% of the trip.

There will likely be a bias in the future for incidents like this to show some level of driver distraction. This makes it easy to blame the driver when the idea of a human as a supervisor is fundamentally flawed.
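
A quick simulation of that selection effect (all numbers made up): if attentive drivers always catch the near-miss and distracted ones never do, then every recorded incident shows a distracted driver, even though the driver was attentive 99.99% of the time.

  import random

  random.seed(1)
  p_distracted = 0.0001          # distracted 0.01% of the time
  near_misses = 1_000_000        # moments where intervention was needed

  # A near-miss becomes an incident only if it lands on a distracted moment.
  incidents = sum(random.random() < p_distracted for _ in range(near_misses))

  print(f"{incidents} incidents out of {near_misses} near-misses")
  print("share of incident footage showing distraction: 100%")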


The two people interviewed were fired for cause from this same program; of course they will have a negative opinion. One was even fired for the same thing this safety driver failed to do.

>Both Kelley and the former driver in Tempe were dismissed from their jobs with Uber earlier this year for safety infractions: Kelley said he was let go after rolling through a stop sign while he was operating the car, which he disputes; the individual in Tempe said he was dismissed for using his phone while the vehicle was in motion.

The bit about level 3 being considered harmful makes a lot of sense and isn't something that I would have intuitively thought of.


If trained and well disciplined airline pilots and railway engineers have problems staying alert in situations like this you can bet that your average Uber 'backup driver' (what a job title) isn't going to be any better.


I guess you missed the point of TFA. It is impossible for humans to perform the tasks these two persons were hired to do. 100% of people who attempt to do that job will eventually fail, likely sooner rather than later.


Wrong. At least 95% of the people who made the cut and didn't get fired after a few weeks handled being the only operator of the vehicle with ease.


And they will continue to handle it with ease right up to the moment when they hit another cyclist.

What makes you so confident that the mere two years of running the program is enough to reliably calculate that number, when drivers like the one in the article managed to last more than a year until they got fired?


It would seem reasonable to me that once someone stepped off the curb, the human driver should get notified that there is someone in the street on their left. It would heighten their awareness and make it more likely they'd react if the car did not. Something as simple as right and left buzzers. Of course, if the lidar didn't detect the person at all, then Uber's problems are much worse.


Trucks already have this kind of audible warning when vehicles or cyclists are in their blind spot.

The problem in Tempe was that the car did not recognise the pedestrian, so how could it be relied upon to warn the driver? Perhaps a better option would be to buzz when the driver is not looking at the road, as was the case in the Tempe accident. This technology is already in use [0].

[0] https://en.wikipedia.org/wiki/Driver_drowsiness_detection


Heck that sounds pretty useful even for current cars today.

I imagine it isn't cheap to add, though, and even if it were reasonably priced I imagine they would make it optional, in case people didn't want to add it to their car's price; in which case many would simply say "Nah, I don't need that. I'm a great driver!"


These kinds of things have existed for a while now https://newatlas.com/volvo-s60-pedestrian-safety-system/1358...


I think this is going to turn out to be the fundamental issue -- that the computer wasn't programmed to address illegal actions.

The car had the right of way; it didn't consider that she might enter the space anyway.


Shouldn't test vehicles have at least the same level of attention monitors as current auto pilot/assist vehicles do?

For these specific vehicles shouldn't they have some hand placement sensors or wifi strength detectors or mobile signal detectors or eyeball monitoring at a minimum?

Or use a custom app to "lock" the phone while driving; exit the app and the car pulls over.

Not that the driver would have prevented this incident, but seems like a logical requirement.


In my opinion, automation is a way better use case for mass transit than 2-5 passenger consumer vehicles. These companies ought to be focusing on cities where population is growing but density is low (one example would be Jacksonville, FL). Start automating buses that run predictable routes and fire all those city employee bus drivers who are driving up property taxes because they need a pension.


Seems like they need another system to monitor the backup driver to make sure their eyes are on the road.


These people are basically giving driving lessons, just to cars instead of people. Regular driving instructors can do their job full-time in a safe way without getting distracted. They don't control the car most of the time, yet they are still able to stop it in case of a student error.

So to me the question is: What is making this difference? Is it the level of training for the instructor? Human interaction? Is it the sign on their car that shows it's a lesson car to the other drivers? Or is it something else?

PS, this post is based on the way driving lessons happen in the Netherlands. I'm not sure if it's completely comparable to the US.


> What is making this difference?

Driving instructors are constantly graduating their most experienced students and recruiting zero-experience students.

These zero-experience students need the instructor to intervene from time to time, thus justifying and reinforcing the instructor's habit of vigilance.

On the other hand, if an autonomous car has one near-accident every 100,000 miles, that's not often enough to maintain a strong habit of vigilance.


Though impractical to implement, this is an interesting thought experiment. What if the autonomous driving system has a "chaos monkey" that required intervention by the trainer every so often?

Note that we already do this to keep people vigilant in rote tasks, e.g. for airport x-ray scanner operators or train drivers.


Self-driving car companies are "newbies" who could learn from railway companies, which have decades of experience with keeping drivers alert in vehicles that keep moving without driver action. For example, locomotives have "dead man" switches to make sure the driver is still on the job. Japanese drivers are trained to point and vocalise. Why shouldn't the car drivers be required to call and point out every hazard whilst in autonomous mode?

Maybe part of the problem is that the self-driving companies view their drivers as temporary measures, so choose to invest their resources into the machine component rather than the human component?


The question is how we do that in a way that is good enough to get engagement without actually endangering anyone. Imagine the uproar when one of these chaos monkeys causes a crash because the driver didn't react in time and its evaluation of 'safe but seemingly dangerous' was just a bit off. For other systems this is easy, because the system and the response are both totally contained inside the software, so fake events are easy to generate, and the user responds back through the system, so the response can be caught and evaluated.


The two roles are not comparable in my opinion. In most cases an instructor of humans has no capability to control the car. I've heard of some cases where they have brakes installed on the passenger side, but I'm not sure how common that is. I've never heard of a teaching car with full duplicate controls, though it's certainly possible they exist. I think even a bad new human driver is still better than an Uber automated system at the moment.


Every student car I've seen had a brake on the passenger side; some even have a steering wheel.


Driving instructors probably don't drive nearly as long without a break (each lesson is 30-60 minutes). They also probably have much more mental stimulation, since (1) they're talking to the student, (2) the student probably needs much more frequent intervention than a self-driving car does, and (3) they have a variety of different students each day, each of which has a different skill level and personality.


The teacher also asks the student to do increasingly difficult challenges and actively provides feedback on their performance. The same should be done for self driving cars, millions of miles in ideal conditions is worthless. These cars need to be driven by bank robbers, gang members and stunt drivers for a Quentin Tarantino movie.


I'd say it was human interaction. The AV is a black box that calmly goes about its business, after a few hours you're probably inclined to trust it. A human learner, on the other hand, is a bundle of shakes and nerves and jerks and danger that you wouldn't completely trust until several months after they passed their driving exam.


For my driving lessons, the instructor was actively telling me which way to go (e.g. "I want you to take the next left"). That conveys a certain level of road awareness even if they aren't actually driving.

They are also probably expecting a student to make mistakes, which puts them in a different mental state than with a car that is right almost all of the time.


There is a very different kind of learning at work in autonomous cars vs humans. These people are "backups", not instructors giving explicit instruction to a student.

And have you seen the video of the "instructor" during the fatal Uber crash? Completely zoned out, compared to what a human-instructing instructor would do.


The student is also the client and supervisor.


Rather than audit dashcam footage after the fact for infractions related to attention and phone use, these companies need realtime remote monitoring of the drivers (and realtime AI-driven monitoring to cover areas without signal, and to catch attention lapses by the remote monitoring employees).


Lmao, so now you need two people to drive a car.


While developing immature technology, yes. Maybe three people total (driver, and someone monitoring the driver, and someone else monitoring the scene from a camera up on a small boom pole).


What I don’t understand: if Uber was firing people for using their mobiles in the car, why did they allow them to have a mobile phone while on the job?


- News reports have mentioned that Uber was using some of their driverless cars to pick up passengers, for which the driver would need access to the Uber app.

- You want the driver to be able to call 911 if they're involved in an accident, or are the victim of a crime, or have a medical emergency.

Presumably, they'd only be firing people for using their phones while the car was driving itself, and they were supposed to be supervising it. It would be OK to use a phone while the car wasn't moving.


It's pretty ironic that Uber's driver app almost (or does) require you to use it in motion, e.g. when you are driving and another fare comes up. Nearly every Uber driver I see reaches over to the phone and slides to accept the fare.


They don't need to access any other apps though. Uber could provide a locked device allowing only emergency calls and the Uber app.


It's plainly stated in the article: for emergencies.


Surely they can provide a mobile phone with just the Uber app and ability to call though.




