This is essentially Swift’s problem on the server. If you have a service running and serving hundreds of clients, any one of those threads can hit a trivial overflow, panic, and take the entire service down.
As a result, a number of use cases for Swift on the server actually use heavyweight out-of-process workers (i.e. one process per client) so that the failure of one of them doesn’t take down the rest.
Unfortunately there’s no way of overriding the panic (it’s implemented as an inline ud2 instruction rather than an overridable function), and while you can install a signal handler to catch it, resuming isn’t straightforward and there are some cases where you really can’t do anything about it anyway (e.g. malloc failure).
This works great for single-user devices, where only one app is running at a time and the user simply restarts it when it crashes, but it doesn’t work in the server case unless you are using a cgi-bin/lambda pattern where each request is handled by a fresh process, which is what TF and the AWS Lambda approaches do.
This reminds me of Rust unconditionally ignoring SIGPIPE. Except in this case there's a non-trivial amount of pre-existing (and even future) code that assumes a process will be terminated if it attempts to write to a closed pipe. One of Rust's selling points is ease of FFI, so it's largely irrelevant that Rust APIs wouldn't let you ignore write errors. Rust breaks the environment for libraries that rely on the behavior.
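For anyone who hasn't run into this: the whole difference comes down to the SIGPIPE disposition at the time of the write. A minimal plain-POSIX sketch in C (nothing Rust-specific, just illustrating the two behaviours):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];

        if (pipe(fds) != 0)
            return 1;
        close(fds[0]);                    /* drop the read end: no reader left */

        signal(SIGPIPE, SIG_IGN);         /* with SIG_IGN, write() returns -1/EPIPE;
                                             comment this out and the default
                                             disposition kills the process instead */
        if (write(fds[1], "x", 1) < 0)
            fprintf(stderr, "write failed: %s\n", strerror(errno));
        return 0;
    }

Code that was written assuming the second behaviour (silent termination) never checks that write() result, which is exactly what breaks when something else in the process flips the disposition.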
It's a bad library design to rely on a process-global setting, because one day you'll have two libraries in your project that rely on opposite values of such a global setting, and then you'll start to see very funny things.
And a library design that relies on the application it's used in being killed by the OS, so that the library doesn't have to tidy up after itself or properly handle error conditions, is really bad. Don't you just love it when one of your random indirect dependencies, thrice removed, decides to call std::abort() just because it's tired of living?
I agree libraries shouldn't rely on process-global state. It's also bad library design to trigger signed overflow. In my libraries I try to handle both situations cleanly, but that's all beside the point.
There's also the problem that SIG_IGN handlers are inherited across exec. So Rust programs that exec other programs without restoring the default handler could cause problems in programs where relying on SIGPIPE is more reasonable. This is also true of posix_spawn: SIG_IGN handlers aren't implicitly reset.
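For what it's worth, posix_spawn does let the parent ask for specific signals to be restored to SIG_DFL in the child. A sketch of that (the spawned command and its path here are just placeholders):

    #include <signal.h>
    #include <spawn.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        posix_spawnattr_t attr;
        sigset_t def;
        pid_t pid;
        char *child_argv[] = { "head", "-n", "1", NULL };   /* placeholder command */

        posix_spawnattr_init(&attr);
        sigemptyset(&def);
        sigaddset(&def, SIGPIPE);                       /* reset just SIGPIPE...      */
        posix_spawnattr_setsigdefault(&attr, &def);     /* ...back to SIG_DFL in the  */
        posix_spawnattr_setflags(&attr, POSIX_SPAWN_SETSIGDEF);  /* child             */

        if (posix_spawn(&pid, "/usr/bin/head", NULL, &attr, child_argv, environ) != 0) {
            perror("posix_spawn");
            return 1;
        }
        posix_spawnattr_destroy(&attr);
        return waitpid(pid, NULL, 0) < 0;
    }

But that only helps the parent that remembers to do it; nothing resets the ignored handler implicitly, which is the point above.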
Note that Rust doesn't restore any other handlers to their default disposition as far as I can tell. Do you write all your programs to reset signal disposition on startup? I'll admit I don't normally do that unless it's a daemon.
If I were to write a program that relied on "writing to a file might terminate me and that's exactly what I want" behaviour, then yeah, I absolutely would include a call to "signal(SIGPIPE, SIG_DFL)" in the main().
Of course, if I were to actually write a filter-like program (a la cat, head, grep, etc.), I would rather have just checked the return values of my write()s. It's -1? Time to call exit(errno).
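Roughly what that looks like in C, assuming SIGPIPE is ignored so write() actually returns -1 with EPIPE instead of the process being killed mid-loop:

    #include <errno.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        char buf[4096];
        ssize_t n;

        signal(SIGPIPE, SIG_IGN);          /* we'd rather see EPIPE from write() */

        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
            ssize_t off = 0;
            while (off < n) {
                ssize_t w = write(STDOUT_FILENO, buf + off, (size_t)(n - off));
                if (w < 0)
                    exit(errno);           /* e.g. EPIPE once the reader has gone away */
                off += w;
            }
        }
        return n < 0 ? errno : 0;
    }

No reliance on the signal killing the process, and the exit status actually tells you why the filter stopped.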
One of the post hoc rationales that seems to have cropped up is the one you mentioned. But that logic only makes sense if you expect platform-specific code to be rare. I doubt that's the case: from my limited experience there seems to be quite a lot of Rust code written specifically for the Unix environment. Rust is a "systems" language after all; writing platform-specific applications, not simply platform-specific modules or wrappers, is to be expected.
It's a legitimate glitch in Rust. It happens. I don't understand the urge people have to defend Rust here. It violates a well-accepted norm in the world of systems programming: don't f'up the global environment without being asked. Indeed, part of the reason green threading failed was that it required too much magic interposed between the application and the native platform environment. Rust wasn't trying to go the route of Go.
I’m curious whether a mildly modified Swift compiler that converts all the different ways of aborting execution into some sort of overridable panic handler would help work around this issue.
As someone who doesn't know Swift, is there a reason it doesn't handle these errors by throwing exceptions? That approach works well for Java and .NET.
There is a compiler option to disable overflow/underflow checks, so unless the overflow is in some library outside your control, it's not actually an issue.