I guess you're focused on languages that don't have the following two constraints from C: (1) be as fast as possible, and (2) compile ahead of time. The "weird" behavior in case of UB in C/C++ is a direct result of these two constraints and the fact that the transformations you're referring to only affect code that already has _programming bugs._ The standard already specifies that optimizations don't change semantics -- and really they don't -- for bug-free code. Adding runtime error checks only helps buggy programs and pessimizes the performance of correct ones. This is a tradeoff C/C++ does not make.
That's a good analogy, except that with traffic you have factors outside your control that can cause a crash. Avoiding bugs in code is, in theory, completely within the programmer's control. There are various tools that help with this in different languages to compensate for the fact that programmers are human: compiler warnings, best practices and linters, sanitizers, testing, static typing, runtime checks, etc. Different safety mechanisms offer different tradeoffs, both for cars and for programming.
One could argue, with your analogy, that UB in C is silly because it assumes programmers who make no mistakes. In reality, writing C code can be economical (adequate safety, low cost, high performance) even after pricing in all the human errors, while the proposed change would also cost performance -- and possibly money -- for some. That's why there is pushback on such proposals.
In the utopian world where C developers are all 10x, have 100% control over the complete source code of their applications, and don't depend on third-party C libraries.
What I implied by "programmers are human" was exactly that they are 1x and make mistakes. The tools help in the situations where those "100%" assumptions cannot be made. But you can't put a seat belt on everything and contain every issue, because it would cost more.
But you must also accept that you have to trust at least part of the environment you're operating in. There will never be a language or a tool that can make a program useful and correct when nothing can be trusted. Regarding UB, C/C++ is just explicit about trusting the code that is currently being compiled, for the sake of optimizing the code that actually is trustworthy. I get where you're coming from, but this is one of the possible solutions, and a very widely accepted one given that the whole world runs on C.
Managed languages might avoid some classes of bugs, but they will always retain others, and I'm not sure it is possible to guard against all of them without solving the halting problem. Some C/C++ people might, on the contrary, find it bizarre that your languages do absolutely unnecessary bounds checks. Neither approach is better than the other on an absolute scale; it all depends on what you want to use the tool for.
Ada is not a managed language for example, yet it avoids such errors.
That is the typical C hand-waving: every language has errors, so it's no big issue.
Of course there are no perfect programming languages; however, it is quite different having to handle "Logic Errors" versus "Logic Errors + Memory Corruption Errors + UB Errors + Implicit Conversion Errors".
Ada's overflow semantics are precisely what programmers assumed C's overflow semantics were on architectures that roll over on signed arithmetic (i.e., basically every processor architecture in current wide use). The original intent of the C "undefined behavior" category, as far as I can tell, was to permit compilers on dissenting architectures to make another choice in this case. For example, on the Itanium you might want a trap on overflow, because C is comfortable with "it works differently on different processors". So my intention is to repair the obscurity in the language by spelling out what I am 100% sure would have been obvious to C programmers and language developers at one time.
It's probably an accurate comparison, given the amount of code out there that really doesn't need to be written in C but nevertheless is (and the majority of people writing it aren't capable of writing bug-free C).
Just think about the memory corruption issues that occasionally pop up in Linux, in spite of its static analyzers and the review process required to get anything into the mainline kernel.