fastmath is bad in general. In Julia it's not as bad as in many other languages because the effect is kept local: the @fastmath macro is essentially a find-and-replace macro that finds operations like ^ and replaces them with fast counterparts such as Base.FastMath.pow_fast, which drops the 1 ulp accuracy requirement. However, the global fastmath option was really the one piece left in Julia where it could creep in non-locally, hence the reason to drop it.
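For example, a quick sketch at the REPL (the exact printed expansion can differ slightly between Julia versions) shows the rewrite in action:

    julia> @macroexpand @fastmath x ^ y
    :(Base.FastMath.pow_fast(x, y))

    julia> @macroexpand @fastmath a + b * c
    :(Base.FastMath.add_fast(a, Base.FastMath.mul_fast(b, c)))

Because it is just a syntactic rewrite, the relaxed semantics apply only to the expressions you annotate, not to code in callees or in other modules.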
There are still some other difficulties of course, since fastmath in the C ABI is quite wild (or rather, it's really the GCC implementation, up to GCC 13, that is the problem (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55522#c45)). Simon wrote a nice piece about the difficulties in general: https://simonbyrne.github.io/notes/fastmath/. In a general sense there is still the potential vulnerability that affects the Python ecosystem: if any package ships binaries built with fastmath, it can change the floating point behavior of other calculations in the same process, in a completely non-local way (https://moyix.blogspot.com/2022/09/someones-been-messing-wit...):
> It turns out (somewhat insanely) that when -ffast-math is enabled, the compiler will link in a constructor that sets the FTZ/DAZ flags whenever the library is loaded — even on shared libraries, which means that any application that loads that library will have its floating point behavior changed for the whole process. And -Ofast, which sounds appealingly like a "make my program go fast" flag, automatically enables -ffast-math, so some projects may unwittingly turn it on without realizing the implications.
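To make the FTZ/DAZ part concrete: those bits make the hardware flush subnormal ("denormal") floats to zero. Julia exposes roughly the same hardware toggle via set_zero_subnormals, so a small sketch of what such a constructor silently does to the whole process (behavior shown for x86-style FTZ/DAZ) looks like:

    # set_zero_subnormals flips roughly the same hardware flag that a
    # -ffast-math constructor sets process-wide when the library is loaded
    x = 1.0e-310                 # a subnormal Float64
    set_zero_subnormals(false)
    @show x * 1.0                # 1.0e-310, subnormals handled correctly
    set_zero_subnormals(true)
    @show x * 1.0                # 0.0 on hardware with FTZ/DAZ (e.g. x86)
    set_zero_subnormals(false)   # restore the default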
With Julia, there are a few advantages here: (a) most libraries don't have binary artifacts built in another language, (b) the Julia core math library is written in Julia and is thus not a shared library affected by this, and (c) those packages that do ship binaries have them built and hosted in https://github.com/JuliaPackaging/Yggdrasil. In that binary building and delivery system you can see patches that forcibly strip fastmath from the binaries being built, precisely to avoid this problem (https://github.com/search?q=repo%3AJuliaPackaging%2FYggdrasi...). Part (b) is what the removal of the global flag in Julia itself makes safe, so with these safeguards in place Julia should generally be well-guarded against this kind of issue.
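As a rough way to check whether some binary dependency has clobbered the process-wide FP state, you can look at the flag before and after loading it; the library name below is purely hypothetical:

    using Libdl
    @show get_zero_subnormals()   # false in a fresh Julia session
    # hypothetical shared library built with -ffast-math; GCC's injected
    # constructor would set FTZ/DAZ for the whole process on load
    handle = dlopen("libexample_built_with_fastmath.so")
    @show get_zero_subnormals()   # true if the constructor flipped the flags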