And it's not contrived, since we've seen situations of Tesla Autopilot behaving weirdly when it sees images of people on the sides of billboards, trucks, etc.
Andrej Karpathy's most recent presentation showed how his team trained a custom detector for stop signs with "Except right turn" text underneath them. How are they going to scale that to a system that understands any text sign in any human language? The answer is that they're not even trying, which tells you that Tesla is not building a self-driving system.
Even so, it is quite possible to train for this in general. Some human drivers will notice the sign and will override Autopilot when it attempts to stop; this triggers a training-data upload to Tesla. Even if the neural net does not 'understand' the words on the sign, it will learn that a stop is not necessary when that sign is present in conjunction with a stop sign.
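To make the mechanism concrete, here is a minimal sketch of what that fleet-side trigger could look like. Everything here (the `Detection` type, the labels, the threshold) is invented for illustration; it is not Tesla's actual system or API.

```python
# Hypothetical fleet-side trigger: when the driver overrides a planned
# stop at a stop sign that has an auxiliary text plaque, flag the clip
# as a training example. All names and thresholds are made up.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "stop_sign", "text_plaque"
    confidence: float

def should_upload(detections: list[Detection], driver_overrode_stop: bool) -> bool:
    """Flag a clip for upload when a stop sign plus a text plaque was seen
    and the driver overrode the car's attempt to stop."""
    saw_stop_sign = any(d.label == "stop_sign" and d.confidence > 0.8 for d in detections)
    saw_plaque = any(d.label == "text_plaque" for d in detections)
    return driver_overrode_stop and saw_stop_sign and saw_plaque

frame = [Detection("stop_sign", 0.95), Detection("text_plaque", 0.7)]
print(should_upload(frame, driver_overrode_stop=True))  # True -> flag clip for upload
```

Over many such uploads, a retrained network can pick up the correlation "stop sign + plaque + right turn = no stop required" without ever reading the words on the plaque.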
LiDAR also isn't a silver bullet. Similar attacks are possible, from simply shining a bright light to overwhelm the sensor to more advanced ones such as spoofing an adversarial return signal.
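As a toy illustration of the saturation attack, here is a small simulation (with invented numbers, not any real sensor's datasheet) showing how a constant bright light compresses the dynamic range a time-of-flight estimator has to work with:

```python
# Toy model: genuine returns are narrow pulses; an attacker's bright
# light adds a constant offset, and the reading clips at full scale.
import numpy as np

SENSOR_MAX = 1.0  # full-scale reading after clipping

def received_signal(echo_times, attack_level=0.0, n=1000):
    t = np.linspace(0.0, 1.0, n)
    signal = np.zeros(n)
    for t0 in echo_times:  # genuine returns: narrow Gaussian pulses
        signal += 0.6 * np.exp(-((t - t0) / 0.005) ** 2)
    signal += attack_level  # attacker's constant bright light
    return np.clip(signal, 0.0, SENSOR_MAX)

clean = received_signal([0.3, 0.7])
jammed = received_signal([0.3, 0.7], attack_level=0.9)
print("clean peak-to-floor:", round(clean.max() - clean.min(), 2))    # ~0.6
print("jammed peak-to-floor:", round(jammed.max() - jammed.min(), 2)) # ~0.1
# The echo contrast collapses from 0.6 to 0.1 and the pulse tops are
# flattened by clipping, leaving little for range estimation to lock onto.
```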
Going from 3 nines of safety to 7 nines is going to be the real challenge.
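For a sense of scale, here is the arithmetic behind those nines. The comment doesn't say failures per what (per trip, per mile, per intervention), so "per mile" below is an assumed example:

```python
# Convert "N nines" of reliability into a failure rate, assuming one
# failure opportunity per mile (an assumption, not from the comment).
for nines in (3, 7):
    reliability = 1 - 10 ** -nines
    print(f"{nines} nines = {reliability:.7%} -> 1 failure per {10 ** nines:,} miles")
# 3 nines: one failure every 1,000 miles; 7 nines: one every 10,000,000.
# That last step is a 10,000x improvement, which is why it's the hard part.
```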
Also, we have a pattern- and object-detection computer behind our eyes that nothing on this planet even remotely comes close to.
Mantis shrimp have us beat when it comes to quickly detecting colors, since they have twelve photoreceptor types vs. our three.
Insects have us beat when it comes to anything in the UV spectrum (we're completely blind to it). Many insects also cannot move their eyes but still have to use vision for collision detection and navigation.
Birds have us beat when it comes to visual acuity. Most of them also do not move their eyeballs around the way we do, yet they still have excellent visual navigation skills.
Human color discrimination is about six orders of magnitude finer than the mantis shrimp's.
Not defending those who say that LIDAR isn't useful/important in self-driving cars, but this assertion is only marginally true today and won't be true at all for much longer. See https://arxiv.org/pdf/1706.06969 (2017), for instance.
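The linked paper compares humans and DNNs on progressively degraded images. Here is a rough sketch of that style of experiment, using an off-the-shelf torchvision ResNet rather than the paper's exact models and protocol; the image file and contrast levels are my own placeholders:

```python
# Degrade an image (reduced contrast) and watch a pretrained
# classifier's confidence fall off, in the spirit of Geirhos et al. 2017.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def top1_confidence(img: Image.Image) -> float:
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return torch.softmax(logits, dim=1).max().item()

img = Image.open("stop_sign.jpg").convert("RGB")  # any test image
for contrast in (1.0, 0.5, 0.2, 0.1):
    degraded = TF.adjust_contrast(img, contrast)
    print(f"contrast {contrast}: top-1 confidence {top1_confidence(degraded):.2f}")
# The 2017 finding was that humans degrade gracefully under this kind of
# manipulation while DNNs fall off much earlier; newer models have since
# closed a good part of that gap.
```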
I suspect if you crunch the numbers, accidents are going to be above normal for a while after Covid-19 reopenings.
Anecdotally, I'm seeing people doing mind-blowingly stupid things on the roadways right now. It seems like people have forgotten how to drive. I suspect the issue is that people rely too much on other cars to cue them how to behave, and right now the concentration of cars on the road is too low for that.
(It could also be that a constant accident rate regularly weeds out the worst drivers as they get into accidents and wind up out of circulation. I really hope that isn't why ... that would be really depressing.)
And we know we shouldn't drive tired or angry or intoxicated but obviously it still happens.
As soon as you get experience-sharing (culture, as humans call it, but updatable in real time, as fast as data networks allow) you can build an AI mesh that is aware of local driving conditions and learns all the specific local "map" features it experiences, and then generalises from those.
So instead of point-and-hope rule inference, you get local learning of global invariants, modified by specific local exceptions that change in real time.
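As a back-of-the-envelope sketch of that idea, here is one way a shared store of "local exceptions to global invariants" could look. Every name and parameter below is invented for illustration; a real fleet would use a replicated, authenticated service with proper spatial indexing, not an in-memory list.

```python
# Hypothetical fleet-wide store of local exceptions: one car's
# observation becomes every car's prior the next time they pass by.
import time
from dataclasses import dataclass, field

@dataclass
class LocalException:
    location: tuple[float, float]  # (lat, lon)
    rule: str                      # the global invariant being overridden
    override: str                  # the locally learned behavior
    reported_at: float = field(default_factory=time.time)

class SharedExceptionMap:
    def __init__(self, ttl_seconds: float = 86400.0):
        self.ttl = ttl_seconds     # stale exceptions expire automatically
        self._entries: list[LocalException] = []

    def report(self, exc: LocalException) -> None:
        self._entries.append(exc)  # a car uploads what it observed

    def lookup(self, location, radius_deg: float = 0.001):
        """Return fresh exceptions near a location (crude box search)."""
        now = time.time()
        return [e for e in self._entries
                if now - e.reported_at < self.ttl
                and abs(e.location[0] - location[0]) < radius_deg
                and abs(e.location[1] - location[1]) < radius_deg]

fleet_map = SharedExceptionMap()
fleet_map.report(LocalException((37.77, -122.42),
                                rule="stop_at_stop_sign",
                                override="no_stop_for_right_turn"))
print(fleet_map.lookup((37.7705, -122.4203)))  # nearby cars see the exception
```

The design point is the time-to-live: local exceptions decay unless re-observed, so the mesh tracks conditions that change in real time instead of fossilizing them.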