To my understanding, this is why they don't have a real model of the world. They don't understand that the road sign right ahead of you poses no danger, because you're actually in the middle of a curve and your trajectory won't intersect it. Or that driving under a bridge is fine, even though you're pointing straight at it.
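The geometric check involved is simple once you have a planned trajectory rather than just a heading vector. A minimal sketch (hypothetical function and parameter names, not any vendor's actual planner): an obstacle straight ahead can be ignored if no point on the planned path comes within some clearance of it.

```python
import math

def clears_obstacle(path, obstacle, clearance=1.5):
    """Return True if every point on the planned path stays at least
    `clearance` metres away from a static obstacle at (x, y)."""
    ox, oy = obstacle
    return all(math.hypot(px - ox, py - oy) >= clearance for px, py in path)

# A sign sits 10 m dead ahead at (10, 0).
sign = (10, 0)

# Planned path curving away: the car points at the sign initially,
# but the trajectory never comes near it -> safe to ignore.
curve = [(t, 0.05 * t * t) for t in range(21)]
print(clears_obstacle(curve, sign))     # True

# A straight path actually drives through it -> brake.
straight = [(t, 0.0) for t in range(21)]
print(clears_obstacle(straight, sign))  # False
```

The point is that the decision depends on the trajectory, not on what the forward-facing sensor currently sees in front of the bumper; without that model, a perception-only system has to treat both cases the same.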
Once they are sophisticated enough to have an actual world model (something which, to my knowledge, only Waymo actually has ATM), they can stop ignoring static objects. Until then, they'll keep slamming on the brakes every 30 seconds for all the stuff they don't understand.
They probably bet on having the DNN figure this out from the data.
Of course this is a really hard problem. With enough layers and the right learning tasks, it might eventually get there. But the potential for noise and various learning artifacts is enormous when you're trying to learn a model of everything road-related.