Death actually can be the price of being wrong. Just wait for someone to use an AI tool for something it wasn't meant for, and for the AI to spit out the worst possible "hallucination" (in terms of outcome).
What you say is true; however, with self-driving cars, death, personal injury, and property damage are much more immediate and much more visible, and many of the errors are of a kind where most people are qualified to understand at a glance what the machine did wrong.
An LLM giving someone a detailed plan for removing a stubborn toilet stain that involves mixing the wrong combination of drain cleaners and accidentally releasing chlorine gas is going to happen, if it hasn't already. But a lot of people will read about it, go "oh, I didn't know you could gas yourself like that", and then keep asking the same model for recipes or Norwegian wedding poetry, because "what could possibly go wrong?"
And if you wonder how anyone could read such a story and react that way, remember that Yann LeCun says this kind of thing despite (a) working for Facebook and (b) Facebook's algorithm taking flak not only over the current teen depression epidemic, but also from the UN for not doing enough to stop the (ongoing) genocide in Myanmar.
It's a cognitive blind spot of some kind: plenty smart, yet still unable to recognise the connection.