Still don't see fully automated self-driving cars happening any time soon:
1) Heavy steel boxes running at high speed in built-up areas will be the very last thing that we trust to robots. There are so many other things that will be automated first. It's reasonable to assume that we will see fully automated trains before fully automated cars.
2) Although a lot is being made of the incremental improvements to self-driving software, there is a lot of research about the danger of part-time autopilot. Autopilot in aircraft generally works well until it encounters an emergency, in which case a pilot has to go from daydreaming/eating/doing-something-else to dealing with catastrophe in a matter of seconds. Full automation or no automation is often safer.
3) The unresolved/unresolvable issue of liability in an accident: is it the owner or the AI who is at fault?
4) The various "easy" problems that remain somewhat hard for driving AI to solve in a consistent way. Large stationary objects on motorways, small kids running into the road, cyclists, etc.
5) The legislative issues: at some point legislators have to say "self driving cars are now allowed", and create good governance around this. The general non-car-buying public has to get on board. These are non-trivial issues.
My alternative timeline is that two forces collide and make self-driving inevitable.
The first force is the insurance industry. It's really hard to argue that humans are less fallible than even today's self-driving setups, and at some point the underwriters will take note and start premium-blasting human drivers into the history books.
The second force is the power of numbers; as more and more self-driving cars come online, it becomes more and more practical to connect them together into a giant mesh network that can cooperate to share the roads and alert each other to dangers. Today's self-driving cars are cowboy loners that don't play well with others. This will evolve, especially with the 5G rollout.
1) Teslas crash much less often, mostly due to autopilot.
2) Tesla can harvest an incredible amount of data from each of their cars, and so they can calculate risk better.
Does Tesla see when you speed and increase your premiums?
I want to understand why being in a high-speed steel/plastic box with humans (overrated in some views) controlled by a computer scares you so much. Is it primal or are you working off data I do not have? Please share. I am being 100% sincere - I need to understand your perspective.
To re-state in brief: (individual) autonomous self-driving tech today tests anywhere from "as safe as" to "2-10x safer than" a typical human driver. This statistic will likely improve reliably over the next 5-10 years.
However, I am talking about an entire societal mesh network infrastructure of cars, communicating in real-time with each other and making decisions as a hive. As the ratio flips quickly from humans to machines, I absolutely believe that you would have to be quantifiably insane to want to continue endangering the lives of yourself, your loved ones, and the people in your community by continuing to believe that you have more eyes, better reactions, and can see further ahead than a mesh of AIs that are constantly improving.
So yeah... I don't understand your skepticism. Help me.
Look: I don't want my loved ones in the car that gets hacked, and I'm not volunteering yours, either. Sad things are sad, but progress is inevitable and I refuse to live in fear of something scary possibly happening.
It is with that logic that I can fly on planes, ride my bike, deposit my money in banks, have sex, try new foods and generally support Enlightenment ideals.
I would rather trust a mesh of cars than obsess over the interior design of a bunker.
If all the cars in the area know one of the cars is about to do something and can adjust accordingly then it will be so much safer than what we have now it is almost unimaginable.
It would seem at some point in the future, people are not going to even want to be on the road with a human driver who is not part of the network.
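The mesh argument above can be made concrete with a toy sketch: each networked car broadcasts a hazard alert, and every car in radio range that is approaching the hazard slows down before its own sensors could possibly have reacted. All class names, fields, and the 50% slowdown rule here are hypothetical, purely for illustration of the idea, not any real V2V protocol.

```python
# Toy model of cooperative hazard broadcasting in a car mesh.
# Positions are along a 1-D road in metres; speeds in metres per second.
from dataclasses import dataclass


@dataclass
class Car:
    car_id: str
    position_m: float
    speed_mps: float


class Mesh:
    """Relays hazard alerts to every joined car within radio range."""

    def __init__(self, range_m: float = 500.0):
        self.range_m = range_m
        self.cars: list[Car] = []

    def join(self, car: Car) -> None:
        self.cars.append(car)

    def broadcast_hazard(self, sender: Car, hazard_pos_m: float) -> list[str]:
        """Every networked car approaching the hazard slows down pre-emptively."""
        alerted = []
        for car in self.cars:
            if car is sender:
                continue
            in_range = abs(car.position_m - sender.position_m) <= self.range_m
            approaching = car.position_m < hazard_pos_m
            if in_range and approaching:
                car.speed_mps *= 0.5  # crude precautionary slowdown
                alerted.append(car.car_id)
        return alerted


mesh = Mesh()
a = Car("a", position_m=0.0, speed_mps=30.0)
b = Car("b", position_m=200.0, speed_mps=30.0)
mesh.join(a)
mesh.join(b)

# Car "b" detects a stopped vehicle at the 400 m mark and alerts the mesh;
# car "a" is still 400 m from the hazard but slows down immediately.
alerted = mesh.broadcast_hazard(b, hazard_pos_m=400.0)
print(alerted, a.speed_mps)  # ['a'] 15.0
```

The point of the sketch is only that a networked car reacts at radio speed to hazards it cannot yet see, which is what makes the "unimaginably safer" claim at least plausible.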
In 2017 AlphaGo could probably give a world champion somewhere between 1 and 3 stones.
From an algorithmic perspective, the range between "unacceptably bad" and superhuman doesn't have to be all that wide, and it isn't really possible to judge until we have the benefit of hindsight and it is clear who had what technology available. 15-20 years is realistic given the embarrassingly slow rate of progress by regulators, but we should all feel bad about that.
We should be swapping blobs of meat designed for a world of <10 km/h for systems that are actually designed to move quickly and safely. I've lost more friends to car accidents than to any other cause - there needs to be some acknowledgment that humans are statistically unsafe drivers.
I can drive myself home relatively safely in conditions where the computer can't even find the road. We're still infinitely more flexible and adaptable than computers.
It will be at least 20 years before my car will drive me home on a leaf- or snow-covered road. Should I drive on those roads? Most likely not, but my brain, designed for <10 km/h speeds, will cope with the conditions in the vast majority of cases.
Fully automated since 1998, and very successful.
I have driven in snow a few times when I was not sure I was even on the road, or when the only way I knew I was going the right direction was that I could vaguely see the brake lights of the car going 15 mph in front of me through the snow.
That is an easy problem to solve, though, because I simply should not have been driving in those conditions.
I am optimistic about solving those problems. Regulation always comes after the tech is invented. Cars have more opportunity to fail gracefully in an emergency: pull off onto the shoulder and coast to a stop, or bump into an inanimate object.
If/when we get fully automated cars, this kind of driverless Uber will become extremely common. Who bears the risk then? This is a complicated situation that can't be boiled down to "of course the owner is to blame."
Not so in car-sharing pools, and there it's already materializing as a problem. How do you solve that with your 'robo-cab'? Tapping "dirty/smelly" in your app and sending it back to the garage? What if you notice only 5 minutes after you started the trip, already robo-riding along? What if you are allergic to something the former customer had on or around them? Or the former customer was so high on opioids that even a touch of the skin could make you drop? As can, and did, happen.
How do you solve for that without massive privacy intrusions? Or will they be the "new normal" because of all that Covid-19 trace app crap?
To go contrary to this is to invite outright bans of the tech.