But if the radar just sees a static object and can't tell if it's an overhead sign or a car, and the camera vision is too washed out, how would sensor fusion help in your example?
Perhaps stop cheaping out on the cameras and procure ones with high dynamic range. Then again, those may be "expensive and complicate the supply chain for a small delta".
A human driver slows down and moves their head around to get a better view when the glare from the sun is too strong to see well. I’d expect a self-driving car to similarly compromise on speed for the sake of safety when presented with uncertainty.
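A minimal sketch of that compromise, assuming perception exposes some confidence score in [0, 1]; every name here is hypothetical, not from any real AV stack:

```python
# Toy policy: back off on speed as confidence in the scene drops, the way a
# human eases off when glare makes vision unreliable. Names and numbers are
# illustrative assumptions only.

def safe_target_speed(cruise_speed_mps: float, perception_confidence: float,
                      min_speed_mps: float = 2.0) -> float:
    """Reduce target speed as confidence in the scene interpretation drops."""
    confidence = max(0.0, min(1.0, perception_confidence))
    # Linear back-off: full speed at confidence 1.0, a crawl at confidence 0.0.
    return min_speed_mps + confidence * (cruise_speed_mps - min_speed_mps)

# Washed-out camera + ambiguous radar return -> low confidence -> slow down.
print(safe_target_speed(cruise_speed_mps=30.0, perception_confidence=0.3))  # ~10.4 m/s
```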
Lidar would make it pretty obvious whether it's a sign or a car, even if the camera didn't tell you. The absence of returns at vehicle height would be a dead giveaway.
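A toy sketch of that height check, assuming already-clustered lidar points in an (x, y, z) frame with z up; the point format and thresholds are illustrative guesses, not any real perception stack's API:

```python
# If a cluster ahead has no returns at vehicle height, it's far more likely an
# overhead gantry sign than a stopped car. Thresholds are assumptions.

from typing import Iterable, Tuple

Point = Tuple[float, float, float]  # (x forward, y lateral, z up), meters

def looks_like_overhead_sign(points: Iterable[Point],
                             vehicle_height_m: float = 2.5,
                             min_vehicle_hits: int = 5) -> bool:
    """True if the cluster sits entirely above plausible vehicle height."""
    vehicle_level_hits = sum(1 for (_, _, z) in points if z <= vehicle_height_m)
    # A car produces many returns below ~2.5 m; a gantry sign produces none.
    return vehicle_level_hits < min_vehicle_hits

# A cluster whose lowest return is at z = 5 m -> overhead sign, don't brake.
sign_cluster = [(80.0, 0.0, 5.0 + 0.1 * i) for i in range(20)]
print(looks_like_overhead_sign(sign_cluster))  # True
```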