> before it becomes stable and measurably successful; nobody dies
It’s already there for non-freeway driving. (Nobody dying is a perfectionist metric; Waymo’s record is already better than the human baseline.)
What’s limiting it are capital costs. Once Waymo finds a non-Jaguar form factor it can mass manufacture, I imagine this would get rolled out rapidly. (To the extent Tesla has a shot, it’s in its mass manufacturing expertise.)
> Why is freeway driving more difficult (genuine question)?
“Stopping in lane becomes much more dangerous with the possibility of a rear-end collision at high speed. All stopping should be planned well in advance, ideally exiting at the next ramp, or at least driving to the closest shoulder with enough room to park.
This greatly increases the scope of edge cases that need to be handled autonomously and at freeway speeds.
…
The features that make freeways simpler — controlled access, no intersections, one-way traffic — also make ‘interesting’ events more rare. This is a double-edged sword. While the simpler environment reduces the number of software features to be developed, it also increases the iteration time and cost.
During development, ‘interesting’ events are needed to train data-hungry ML models. For validation, each new software version to be qualified for driverless operation needs to encounter a minimum number of ‘interesting’ events before comparisons to a human safety level can have statistical significance. Overall, iteration becomes more expensive when it takes more vehicle-hours to collect each event.”
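For a sense of the scale this implies, here's a minimal back-of-the-envelope sketch. It assumes events follow a Poisson process and uses an approximate US human fatality rate (~1.3 deaths per 100 million vehicle miles); the "rule of three" gives the 95% upper confidence bound when zero events are observed. These numbers are illustrative, not Waymo's actual validation methodology:

    # Back-of-the-envelope validation math (illustrative only, not Waymo's
    # actual methodology). Assumes fatal events follow a Poisson process
    # and uses an approximate US human baseline.
    HUMAN_FATALITY_RATE = 1.3 / 100e6  # fatalities per vehicle mile (approx.)

    # "Rule of three": with zero events observed over n miles, the 95%
    # upper confidence bound on the true event rate is roughly 3 / n.
    miles_needed = 3 / HUMAN_FATALITY_RATE
    print(f"Fatality-free miles to claim parity at 95%: {miles_needed:,.0f}")
    # ~230 million miles -- and the rarer 'interesting' events are on
    # freeways, the less each vehicle-hour of data collection is worth.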
> Would they delay their rollout over the capital costs of a few thousand $60,000 Jaguars?
Yes. Google, better than many others, gets scaling economics.
(And the cost is likely $100+ thousand per vehicle after sensors and compute, with costs rising rapidly if you try to order too many too fast, thanks to supply-chain bottlenecks nobody has bothered optimizing yet.)
Surely they need the same sensors and compute no matter what car they attach them to - they're either strapping $75k of sensors to a $60k luxury EV, or a $30k economy EV.
Unless they're planning 20mph golf carts to save on long-distance sensors, which as far as I know they aren't.
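Rough arithmetic on the thread's own figures (illustrative, not real bill-of-materials numbers) shows why the vehicle itself isn't the dominant cost:

    # Illustrative cost split using the rough figures from this thread;
    # these are not real BOM numbers.
    sensors_and_compute = 75_000
    luxury_ev, economy_ev = 60_000, 30_000

    total_luxury = sensors_and_compute + luxury_ev    # $135k per unit
    total_economy = sensors_and_compute + economy_ev  # $105k per unit
    saving = 1 - total_economy / total_luxury
    print(f"Economy EV saves only {saving:.0%} per unit")  # ~22%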
IMHO it's more likely they have capacity constraints because of all the different parts involved in a roll-out: you can't just double the capacity of a production facility without taking on lots of inexperienced new hires, so they can only build the things so fast. And you can't roll out to a new city without places to park the cars, places to charge them, people to repair them, places to store the spare parts, supply routes to replenish those parts, and so on.
> Surely they need the same sensors and compute no matter what car they attach them to
Sure. But they may not have decided precisely what mix they need where, yet. More importantly, those suppliers may not have invested in factories that can produce millions of Waymos a year.
Ships, like planes, are fairly autonomous today and usually have only a very small crew. There are, however, multiple good reasons you don't want fully autonomous ships: they operate continuously in salt water for long durations away from ports, the maps they have are far from perfect, the sea bottom shifts unpredictably, GPS is not always reliable, and a broken-down ship at sea is a massive safety and environmental risk.
It is hard to overstate how troublesome sea water is to complex machinery.
Because of the responsibility. In my country, if you kill someone on the road, you go to jail and pay a huge fine. Which Google exec is going to jail if a Waymo kills someone?
You prefer more dead people and more people in prison, rather than fewer of both?
Also, the same event (e.g. someone dying in a car crash) doesn't always have the same responsibility behind it. If I kill someone by driving recklessly, I bear more responsibility than if I kill someone because a bird crashes into my windshield. There are extreme cases where someone bears full responsibility, and extreme cases where an accident is just an accident and nobody is responsible. It may be that with self-driving, a larger percentage of cases lean toward the "true accident" side. (It's just an idea, though; I agree there's an important question here that merits careful consideration.)
I don't prefer anything; I'm saying you have to assign responsibility, and that will slow adoption.
If the car causes an accident because it fails to spot something, do you chalk that up to 'mechanical error'? Because in my country that would mean the code has to be audited, just as a mechanic has to 'audit' my car every X kilometers to prevent mechanical failures, and takes on the responsibility if something breaks and kills someone.
I think Waymo won't accept code audits, so the company has to take on the responsibility if a car kills someone. The only way to be sure it ends well is if Waymo is 100% sure their cars can't cause any accidents.
I think it's two to four years if it's a driving mistake (distracted driving, failure to yield, reckless speed, or drivers too old to see properly), a bit more for DUI (up to 10, but often around 6); an intentional act (road rage) can climb to 20, but in the only case I've heard of, the sentence was 12.
"Nobody dies" is the metric for as long as companies are not held responsible for their self-driving cars the same way people are. People are fallible, but for that reason they are also held responsible. If your company cannot be held responsible, it must not be fallible. And no, a court settlement is not being held responsible, that's just paying your way out of the justice system.
Then what? How soon until trucks, ships, etc. are autonomous too?