A common saying around here is that we have two seasons: winter and (road) construction.
Construction zones have pretty much every obstacle to automated driving you can think of:
* painted lanes that don't correlate to the temporary lanes marked by cones
* lanes that don't correspond to pre-programmed maps/GPS
* irregular and unpredictable vehicle and pedestrian entrances and exits (construction workers and trucks)
* areas where traffic is reduced to a single lane for both directions, and must take turns coordinated by humans with signs at each end of the lane
* speed limits marked by temporary signs
* rough, temporary transitions between pavement and gravel
Unless we can somehow get every state to compel every road construction company and every autonomous vehicle maker to use a single communication protocol, and to implement it at every construction site (so autonomous cars are made aware of these dangers), it's not going to happen.
Oh, and said protocol has to be hack-proof so trouble-makers can't start convincing cars that they're in the middle of a construction zone and force them out of their lanes on normal roads.
It's conceivable that the coordinated effort could happen, but I'm not going to hold my breath (due to the sheer increase in cost to the government) nor will I trust that said protocol will have fail-proof security.
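For illustration, message authentication is only the easy half of that problem. Here's a minimal sketch assuming a JSON broadcast format and a shared HMAC key (both hypothetical); a real deployment would need asymmetric signatures and a PKI, since a single leaked shared key would let anyone spoof every car:

```python
import hashlib
import hmac
import json

# Hypothetical message format and shared key -- NOT a real protocol.
SHARED_KEY = b"demo-key-not-for-production"

def sign_zone_message(zone: dict, key: bytes = SHARED_KEY) -> dict:
    """Attach an HMAC-SHA256 tag so receivers can reject forged broadcasts."""
    payload = json.dumps(zone, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"zone": zone, "tag": tag}

def verify_zone_message(msg: dict, key: bytes = SHARED_KEY) -> bool:
    payload = json.dumps(msg["zone"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(expected, msg["tag"])

zone = {"id": "I-94-mm-243", "speed_limit_mph": 45, "issued": 1700000000}
msg = sign_zone_message(zone)
assert verify_zone_message(msg)

# A tampered broadcast (e.g. an attacker changing the speed limit) fails:
forged = {"zone": {**zone, "speed_limit_mph": 5}, "tag": msg["tag"]}
assert not verify_zone_message(forged)
```

Even with perfect signing, distributing and rotating keys across every state DOT, contractor, and manufacturer is exactly the coordination problem described above.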
Why would it be easier for trouble-makers to fool autonomous cars? As a human driver, I'd be fooled by pretty much any road marking or guy in an orange vest.
It's amazing how forgiving we can be of human error (accidents every year) but absolutely not of machines/autonomous vehicles, even when, statistically speaking, machines may make better decisions much faster (or at least no worse than human judgment)... I guess the feeling/perception of being in control is more important to us...
Another interesting observation in every autonomous vehicle discussion is how we focus only on edge cases... when in reality every tool we use today (including the cars we've been driving) is built for the general use case and operates in a mostly controlled environment.
Rather, if we think of the autonomous car as an additional pair of eyes and hands when we need it most, it might serve us well in the short run, before the technology matures over the next decade or two.
I'll be really happy and relaxed if my car can mostly (70-80%) drive itself on my daily commute or the next trip to LA; expecting it to be my chauffeur is a bit too much, personally.
Maybe it works for long haul trucking though.
> It's amazing how forgiving we can be of human error ... but absolutely not of machines
Also, when a human driver's negligence results in injury or severe damage, criminal charges result. That's a deterrent. With autonomous driving, you can't prosecute an algorithm.
Would "use at your own risk" vindicate the company behind an autonomous vehicle? Or is the owner responsible for his vehicle's actions? I guess never in history have we had such advanced automation directly in the hands of consumers...
As for the failure, I have reason to disagree... if autonomous cars are working under "unsupervised learning", my assumption is that they will most likely make different decisions in the same scenario based on the data at hand... so thousands of failure events, though they may look similar, may or may not end in the same result... similar to how we would react when faced with some unknown situation on the road... your scenario is more likely to play out for a bad batch of hardware (sensors/lidar/cameras) in an autonomous system...
If it's sold as fully autonomous, i.e. significantly beyond Tesla's system today, I don't see how the manufacturer could not have the liability. How comfortable would you be to use a car that could expose you to severe criminal liability because some company made a mistake with their software?
The company responsible would also have a clear incentive to alter/destroy any damning evidence gathered in telemetry.
Not saying it doesn't happen. But now you've gone from a product liability case which rarely has individual criminal consequences to actions that clearly do.
If/when we get to this point, it will be "interesting" though. Outside of maybe the medical area, there aren't many examples of consumer-facing products that, when used as directed, kill people because sometimes "stuff happens." And people generally understand that's just the way it is.
It's not out of the realm of possibility to imagine government-approved autonomous driving systems that insulate everyone involved from liability so long as they're used and maintained as directed. See e.g. National Vaccine Injury Compensation Program. I'm not sure it's likely but it might become a possibility if manufacturers find they're too exposed.
There's a caveat here that this 70-80% must be contiguous and the car must be superhuman-level reliable in that segment. Otherwise, the "additional pair of eyes and hands" significantly increases the danger. If your car suddenly decides that it can't handle something and asks you to take over at the last second, you won't be able to handle it either.
Which is actually a big win, as long highway drives are boring and probably account for a decent chunk of the more serious accidents.
It doesn't give you the robo-taxi use cases that are what a lot of urbanites care about the most. But it would be a nice safety and comfort add-on for how a lot of people spend many hours of their weeks.
Like any risk, you also need to consider the impact of getting it wrong. If an audio assistant gives you the wrong answer to the population of your hometown, no big deal. But if your car thinks everything is okay and drives you into a stationary fire truck on the shoulder of a freeway when you are travelling at 70mph, the downside of that edge case is infinitely worse.
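To put the 70 mph figure in perspective, a quick back-of-envelope calculation (the reaction time and braking rate are assumed round numbers, not measured values):

```python
# Distance covered at highway speed before and during braking.
# Assumptions (hypothetical): 1.5 s driver reaction time, 7 m/s^2 braking.
MPH_TO_MS = 0.44704

def stopping_distance_m(speed_mph: float,
                        reaction_s: float = 1.5,
                        decel_ms2: float = 7.0) -> float:
    v = speed_mph * MPH_TO_MS            # speed in m/s
    reaction = v * reaction_s            # travelled before brakes engage
    braking = v * v / (2 * decel_ms2)    # kinematics: v^2 / (2a)
    return reaction + braking

d = stopping_distance_m(70)   # roughly 117 m -- longer than a football field
```

So by the time anyone (human or machine) notices the stationary fire truck, most of the margin for error is already gone.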
Sure, humans can make these mistakes, too. But the fact is that your notional world where computers are able to make smarter decisions than humans about how to drive doesn't actually exist. No one has figured out how to make it work. And they won't anytime soon. They've solved all the easy parts. But it turns out there's a lot more involved in driving than all the billions of dollars poured into the problem so far can figure out.
My point is, a computer with:
- more data (historical data on how to act in certain situations, plus live event data: sensor readings, lidar/radar data, images) vs. a human driver, who has neither access to these nor the ability to process them
- faster, parallel processing vs. a human driver
- a single focus/goal (driving from x to y safely and making the appropriate decisions to achieve it) vs. a human driver, who (with "physical limitations", "emotions", "hormones" and the other things that make up "life") is more likely to be distracted...
A computer with all of the above advantages may be able to make better-informed decisions much faster than a human driver can (and when it doesn't, it's hard to know/prove whether a human driver would consistently make a better decision every time in the same situation).
Having said that, I agree the tech is in its infancy and it's going to take a decade or two to mature; even after that, just-in-time human intervention will be needed in some cases. But for the most controlled/learned environments (which are 70-80% of total day-to-day driving), these systems would be immensely helpful.
Note that self-driving vehicles aren't different from humans in that respect, except they see much farther.
So, it takes a lot more work on the programming side to compensate.
Imagine someone hacking the 'construction zone protocol' and spoofing thousands of cars into thinking they're in a construction zone at once. You'd be hard pressed to fool thousands of geographically separated human drivers at the same time.
That only works if a police car doesn't come by and catch the perpetrator in the act.
With wireless communication to automated drivers, someone could plausibly feed bad information from a hidden or otherwise remote location.
Beyond that, just as automation allows human-intensive processes to scale by removing the humans, fooling automated drivers can scale much more readily than fooling human drivers.
As soon as it becomes a robot, a lot of the social pressure to be a good person falls away. Less so if there are people inside, but I can see empty autonomous cars being given a pretty hard time just for kicks.
Once you have autonomous cars that drive safely but can't manage complex situations like you describe, you delegate those situations to remote pilots who are allowed to operate the car at low speeds. You need 5G network coverage with mission-critical features (mcMTC) to achieve that: BLER of 10^-6 and E2E latency < 5-10 ms. Construction crews might be required to erect a 5G mini cell tower before they can start working, to make sure that traffic flows smoothly.
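As a rough sanity check on why that latency target matters (a sketch with assumed speeds, not real 5G figures): how far does the car actually travel during the control-loop delay?

```python
# Distance a car covers during one network round trip, in centimetres.
def travel_cm(speed_kmh: float, latency_ms: float) -> float:
    return speed_kmh / 3.6 * latency_ms / 1000 * 100

# At the 10 ms bound and low "remote pilot" speeds, the car barely moves:
travel_cm(20, 10)    # ~5.6 cm at 20 km/h
# At an assumed consumer-LTE-like latency of 100 ms, the gap grows tenfold:
travel_cm(20, 100)   # ~56 cm
```

Half a metre of blind travel per command is the kind of error that makes crawling through a coned-off lane untenable, which is presumably why the mission-critical latency class is invoked here.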
A taxi fleet of 10,000 vehicles might need only 100-200 remote operators to manage the fleet. That kind of reduction in workforce provides huge savings.
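A back-of-envelope check on that staffing ratio, with assumed (hypothetical) intervention rates:

```python
# Sanity check on "100-200 operators per 10,000 vehicles".
# Assumption (hypothetical): each car needs a remote takeover for about
# 2 minutes out of every 2 hours of operation.
fleet = 10_000
takeover_fraction = 2 / 120                   # ~1.7% of driving time
busy_operators = fleet * takeover_fraction    # ~167 busy at any moment
```

An average of ~167 simultaneously busy operators lands inside the quoted band, though peaks (rush hour, bad weather) would need headroom well above the mean.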
I do wonder if that's a factor behind Musk's push into low-orbit satellite Internet.
> Taxi fleet of 10,000 vehicles might need only 100-200 remote operators to manage the fleet. That kind of reduction in workforce provides huge savings.
Even if all mileage were human-driven, there would be very large benefits if you could really consolidate taxi drivers into call centres for remote driving. No need to transport or preposition drivers, and much less trouble estimating demand.
> fleet of 10,000 vehicles might need only 100-200 remote operators
I was just pointing out that, if you can't guarantee you won't need to handoff to a physically present driver, then there are a lot of things you can't do with the car even if needed interventions are just an occasional thing.
Getting to absolute 100% will require either AGI or an incredible infrastructure investment. Now personally I think FSD is worth on the order of $1 trillion per year to the economy, so it’s the next great Moon Shot, and totally worth every bit of infrastructure investment we can throw at it.
But it makes sense to see how much further we can get with in-car algorithmic driving before the infra investments start coming in earnest to fill in the gaps.
Another possibility is there could be ways for a passenger to assist the algorithm without actually using a steering wheel and pedals as input.
I believe the level before truly perfect FSD allows the car to get stuck as long as it does so safely. Approaching and stopping at a single lane construction zone, for instance.
The current Tesla AP does remarkably well on highways with missing lane markings. A stretch I drive every day is ground down in prep for new pavement and just has the occasional white square marking, but it’s enough for AP to lock in on. It also seems to do fine with cones.
It’s worth noting that construction zones aren’t even particularly safe for human drivers (accident rate skyrockets). So technology to make construction zones more passable overall is important, even if it just enables self driving as a side effect.
The remote control can be to tell the car "drive this path" instead of direct control of the vehicle over a high latency link.
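For illustration, such a command could be a short waypoint plan rather than a live steering stream; every name and field here is hypothetical, not from any real vehicle API:

```python
# Sketch of a "drive this path" command: the operator sends a plan,
# and the car's own control loop executes it locally.
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    lat: float
    lon: float
    max_speed_mps: float = 3.0   # crawl speed through the work zone

@dataclass
class PathCommand:
    command_id: str
    waypoints: list = field(default_factory=list)
    expires_after_s: float = 60.0   # car stops safely if the plan goes stale

plan = PathCommand(
    command_id="wz-1",
    waypoints=[Waypoint(44.97, -93.26), Waypoint(44.971, -93.262)],
)
# Link latency of 100+ ms only delays the *next* plan; the tight control
# loop that keeps the car on the path never leaves the vehicle.
```

The expiry field is the key design choice: if the link drops, the car degrades to "stop safely" instead of continuing on a stale plan.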
Essentially a POC that the car can see, but is so brief that the human cannot see anything.