Yes, I agree: Waymo is real and very nice tech. But if there is such high confidence in the system, couldn't we find drivers to remotely monitor the rides for safety?
It could potentially resolve the doublespeak of saying "oh no no no, we'd never be willing to be the legal driver, it's not a safe system," while at the same time saying "oh, but this system is very safe, we can put it in real traffic with real lives on the line."
Even if the system is safer than human drivers (I can’t speak to this), why would the lowest paid workers at the company voluntarily sign up for _any_ increased liability? If one of the programmers pushes out a bug that kills someone, why should an employee who had nothing to do with this be charged with a crime?
You raise a good point - if the system is so safe, the people who make such claims should accept some liability - but those are not the same people as the ones who sometimes intervene in the driving.
You're asking for... what? For the secretary to be liable because a random car they have never seen or interacted with broke a law? How is that in any way, shape, or form reasonable? And why would anybody ever sign that contract except under duress or exceptional circumstances when they can get a job... literally anywhere else?
And even if that was a viable approach right now, the point of AI/self-driving technology is that the number of cars will vastly outnumber the number of employees once fully deployed and not in limited pilot programs.
This safety driver's job is to monitor the ride and take responsibility in case of an accident.
It's his main task, like for any driver, except he has to pay attention to fewer things than on a standard ride, since the car is very smart (like when you sit in a Tesla with FSD).
The only difference is that he is not sitting in the car, but in his office or at home.
I basically covered this already in my previous comment, so I'm just going to restate it.
> why would anybody ever sign that contract except under duress or exceptional circumstances when they can get a job... literally anywhere else?
> And even if that was a viable approach right now, the point of AI/self-driving technology is that the number of cars will vastly outnumber the number of employees once fully deployed and not in limited pilot programs.
I understand what you mean, though drivers already take personal risks on behalf of corporations.
If you are an Uber driver and enable FSD on your Tesla, it's neither Uber nor Tesla who will get in trouble if you have an accident.
For now, the situation with Waymo can work like a driving lesson: the remote driver is the instructor, and the Waymo AI is the student doing the driving while being monitored.
The driver is responsible, and they can then potentially sue Waymo (or have an agreement with it) if the system caused damage beyond their own responsibility (for example, if the driver hits the brakes and Waymo overrides their manual commands).
Once this setup proves safe, drivers will get comfortable babysitting more than one car at a time, essentially solving the scale issue you mention.
And, as I said at the start, it would resolve the doublespeak of claiming the system isn't safe enough for anyone to be its legal driver while also claiming it's safe enough to put in real traffic with real lives on the line.