For the most part, Classic was designed by committee, with many concessions to every corporate partner (retaining compatibility with their IrDA stacks, etc.), while LE was designed by a few smart guys working together.
This is about the newer LE 2M PHY, which was added for "replacing Classic" use cases (Classic with EDR runs at 2-3 Mbit/s). It's not surprising that it's not as efficient as the more widely used 1M PHY.
In Swift, if cow is a reference type, you all share one cow that moos for whoever asks; but if cow is a value type, you each get your own copy of a cow that only moos for you.
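A minimal Swift sketch of that distinction, using made-up Cow types (nothing here is from the original comment beyond the metaphor):

    // Reference type: everyone shares the one instance.
    class SharedCow {
        var lastCaller = ""
        func moo(for caller: String) { lastCaller = caller }
    }

    // Value type: assignment copies, so each owner gets an independent cow.
    struct OwnCow {
        var lastCaller = ""
        mutating func moo(for caller: String) { lastCaller = caller }
    }

    let shared = SharedCow()
    let alsoShared = shared        // same object, not a copy
    alsoShared.moo(for: "Alice")
    print(shared.lastCaller)       // "Alice" -- the one shared cow mooed

    var mine = OwnCow()
    var yours = mine               // independent copy
    yours.moo(for: "Bob")
    print(mine.lastCaller)         // "" -- my cow never mooed
    print(yours.lastCaller)        // "Bob"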
We had a great plan somewhere around Ubuntu Intrepid or Jaunty: rather than have apps rewrite /etc/resolv.conf, we'd leave it as a static file and have a "nameserver dynamic" type of config in there.
i.e. a sensible default for many people, but super easy to override.
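To make the idea concrete: the scheme never shipped, and real resolv.conf has no "dynamic" keyword, so this is a purely hypothetical sketch of what the proposal describes:

    # /etc/resolv.conf -- hypothetical, never-shipped syntax from the proposal.
    # "dynamic" would tell the resolver to use whatever nameservers the
    # network stack currently knows about, instead of a hardcoded address.
    nameserver dynamic

    # Overriding would be as simple as replacing that line with a real one:
    # nameserver 192.0.2.53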
This particular bit of undefined behavior is a really interesting one; it seems utterly non-obvious why it exists, until you remember one thing...
The most popular architectures today are 64-bit, with ABIs specifying that the int type is 32 bits wide.
So when faced with performing an operation on two 32-bit signed integers on a 64-bit platform, you have to do one of three things:
1) Perform the operation in 32-bit registers, if the hardware has them - and even when it does, they're usually far fewer in number, so code takes a massive performance and resource penalty.
2) Perform the operation in 64-bit registers, then add code to check the result; if it would overflow in 32 bits, compute what the overflowed result would have looked like in a 32-bit register and return that instead - again, quite a performance hit.
3) Perform the operation in 64-bit registers and just return the lower 32 bits, whatever they might be.
The standard declined to pick one, which is why we have UB here.
(For unsigned integers, you can just truncate the result and you get the correct modulo-2^32 answer every time, which is why the standard defines the result.)
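A minimal C sketch of the contrast - the unsigned wrap is guaranteed by the standard, while the widened-then-truncated arithmetic shows the bits option 3 would produce:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* Unsigned overflow is defined: the result wraps modulo 2^32,
           which is exactly what truncating a wider result gives you. */
        uint32_t u = UINT32_MAX;
        u = u + 1;                                  /* well-defined: 0 */
        printf("unsigned wrap: %" PRIu32 "\n", u);

        int32_t s = INT32_MAX;
        /* s + 1 here would be signed overflow: undefined behavior.
           The compiler may assume it never happens, so the result is
           not guaranteed to wrap, trap, or do anything in particular. */

        /* Doing the arithmetic in a wider type and truncating shows
           what option 3 would leave in the low 32 bits. */
        int64_t wide = (int64_t)s + 1;              /* 2147483648 */
        printf("low 32 bits: 0x%08" PRIX32 "\n", (uint32_t)wide);
        return 0;
    }

Incidentally, GCC and Clang's -fwrapv flag defines signed overflow as two's-complement wrapping, which is essentially option 3 made official.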