The big issue is still processing speed and sensor fusion, and Tesla isn't leading the pack in either.
For wide-scale autonomous driving to be good enough, there needs to be an agreed-upon and verifiable decision-making model for all manufacturers to follow and for regulators to validate. This could be as simple as "never perform an action that would cause a collision", but that's not exactly an ideal model either, because you might then get a silly decision such as not engaging the brakes to slow down, since you'll end up hitting the car in front of you in either case.
Sadly I'm seeing too many people who tend to complicate things and chase red herrings when it comes to autonomous driving, such as "ethics". While it's an interesting philosophical exercise, in the real world it's not the important part: when it comes to accidents, the vast majority of drivers act on basic instinct and don't weigh the orphan vs the disabled veteran, simply because they don't have that data nor the capacity to incorporate it into their decision-making process.
First we need sensors that don't get confused by oddly placed traffic cones, birds, and shadows; the rest is pretty much irrelevant at this point.
Have you ever driven in a city? People don't follow right-of-way rules and road signs. Pedestrians jump into traffic randomly. You absolutely have to be looking around trying to gauge people's intentions in order to drive safely.
And the "theory of mind" comes down to assuming the other driver doesn't want to cause an accident either and will slow down, at which point the estimation can be reduced to math: would they be able to slow down in time, given the average response time and their speed?
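A back-of-the-envelope sketch of that estimation. The reaction time and braking deceleration below are assumed illustrative figures, not measured values:

```python
# Rough check: can the other driver stop before the gap closes?
# Assumptions (illustrative only): ~1.5 s average reaction time,
# ~7 m/s^2 braking deceleration on dry asphalt.

def can_stop_in_time(speed_mps: float, gap_m: float,
                     reaction_s: float = 1.5,
                     decel_mps2: float = 7.0) -> bool:
    """Return True if a driver travelling at speed_mps can stop
    within gap_m metres, given reaction time and braking decel."""
    reaction_dist = speed_mps * reaction_s            # distance covered before braking starts
    braking_dist = speed_mps ** 2 / (2 * decel_mps2)  # v^2 / (2a)
    return reaction_dist + braking_dist <= gap_m

# 50 km/h is ~13.9 m/s: total stopping distance ~34.6 m with these numbers
print(can_stop_in_time(13.9, 40.0))  # True
print(can_stop_in_time(13.9, 30.0))  # False
```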
As a person you have no idea either whether they are drunk or paying attention; the only thing you can see is how fast they are going. Usually, if they are going well above the speed limit, you'll assume they are jerks and won't let you merge, which luckily is simple enough for the car to do as well.
You can infer motivations and intentions based on how they’re driving. If you turn on your blinker and the guy in the lane next to you speeds up to close your available space, you have a pretty good idea of what they’re trying to do.
I’m not sure if there’s an algorithm that will figure out if other people on the road are complete fucking assholes that you want to avoid.
If you have enough time to stop when, say, a kid jumps in front of you, you will stop. If you don't, you don't have enough time to process all the information around you and gauge the most optimal outcome: if you swerve, you swerve, but that is likely going to happen regardless of what jumps out in front of you, simply because that's what your instincts are telling you to do.
The best thing an autonomous vehicle can offer is a better response time, so you'll be more likely to stop, or to slow down to a non-fatal or less-than-fatal velocity.
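To make "better response time" concrete, a small sketch comparing impact speed for a human-like vs machine-like reaction window. All numbers here (reaction times, deceleration) are assumptions for illustration:

```python
import math

def impact_speed(initial_mps: float, obstacle_m: float,
                 reaction_s: float, decel_mps2: float = 7.0) -> float:
    """Speed (m/s) at which the vehicle reaches the obstacle;
    0.0 means it stopped in time. Assumes constant braking decel."""
    braking_dist = obstacle_m - initial_mps * reaction_s  # distance left once braking starts
    if braking_dist <= 0:
        return initial_mps  # still inside the reaction window at impact
    v_sq = initial_mps ** 2 - 2 * decel_mps2 * braking_dist
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

# Same obstacle 25 m ahead at ~50 km/h (13.9 m/s):
human = impact_speed(13.9, 25.0, reaction_s=1.5)  # hits at roughly 11-12 m/s
robot = impact_speed(13.9, 25.0, reaction_s=0.2)  # stops in time (0.0)
```

With these assumed numbers, shaving the reaction window from ~1.5 s to ~0.2 s is the difference between a serious collision and a full stop.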
Drivers are drivers, whether human or robot; distracting them with nonsense wouldn't make them better or safer, but rather quite the opposite.
There is no right answer to that question, because I didn't specify what the animal is. Whether the animal is a baby deer or a human child will influence your choice. This isn't contrived. When I hit a deer, I made a choice to stay on the road. Had it been a child, I would have swerved, endangering myself in the process.
- Are they zooming by, then suddenly moving to the right and slowing down? They probably got a phone call; get ready to overtake them. And watch out, they might get really slow.
- Is the car in front stuck behind a truck and slowly moving to the left? The driver probably plans to overtake the truck soon. Reduce speed so you don't crash into their rear end.
Looking forward to when these things will be included in the reasoning model of self-driving cars.
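Heuristics like the ones above could in principle be encoded as simple rules over observed behaviour. A toy sketch; the signal names and thresholds are invented for illustration, not taken from any real self-driving stack:

```python
from dataclasses import dataclass

@dataclass
class ObservedCar:
    speed_mps: float     # current speed
    accel_mps2: float    # positive = speeding up, negative = slowing down
    lateral_drift: float # m/s toward lane edge; + = right, - = left
    blocked_ahead: bool  # slower vehicle directly in front of them

def infer_intent(car: ObservedCar) -> str:
    """Toy rule-based guess at another driver's intention."""
    # Drifting right while decelerating: pulling over / distracted
    if car.lateral_drift > 0.3 and car.accel_mps2 < -0.5:
        return "pulling over / distracted, expect sudden slowdown"
    # Stuck behind a truck and edging left: about to overtake
    if car.blocked_ahead and car.lateral_drift < -0.3:
        return "preparing to overtake, keep distance"
    return "no strong signal"

print(infer_intent(ObservedCar(30.0, -1.0, 0.5, False)))
# pulling over / distracted, expect sudden slowdown
```

Real systems would of course use learned models over far richer signals, but the point stands that the cues in the list above are expressible as checks on observable quantities, not mind-reading.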