Rust is really nice, and I like it a lot. It's probably also one of the easier languages in which to write performant code with reasonable latencies in most cases (almost all, really).
However, automatic resource freeing and deallocation can still bite you if you need really reliable low latencies and assume the language will handle it all for you.
Since automatic deallocation in Rust happens, by default, at the point where a value goes out of scope, a resource whose destruction could block, or perform expensive operations, cannot be allowed to go out of scope in a latency-sensitive thread.
Usually this is not much of an issue unless you are chasing really low latencies.
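One common way to keep an expensive destructor off the hot path is to move the value to a dedicated "reaper" thread and let it run `Drop` there. Here is a minimal sketch of that pattern; the `Connection` type is hypothetical and stands in for any resource whose teardown might block or syscall:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical resource whose Drop does expensive work
// (imagine flushing buffers or closing a socket).
struct Connection {
    name: String,
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Pretend this blocks; we don't want it on the latency-sensitive thread.
        println!("tearing down {}", self.name);
    }
}

fn main() {
    // The reaper thread owns destruction: values sent here are
    // dropped off the hot path.
    let (tx, rx) = mpsc::channel::<Connection>();
    let reaper = thread::spawn(move || {
        for conn in rx {
            drop(conn); // the expensive Drop runs on this thread
        }
    });

    // Hot path: instead of letting `conn` fall out of scope here,
    // move it to the reaper. The send is cheap; the Drop is not.
    let conn = Connection { name: "fast-path socket".into() };
    tx.send(conn).expect("reaper alive");

    drop(tx); // closing the channel lets the reaper loop end
    reaper.join().unwrap();
}
```

The key point is that `send` transfers ownership, so the value's scope ends on the reaper thread, not on the sender. A real implementation would want a lock-free or bounded queue rather than `mpsc::channel` if the send itself must never block or allocate.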
But things can still happen in Rust that catch you off guard. Say a value goes out of scope and, for whatever reason, its destructor makes a logging call that has accidentally become blocking on a socket send. It became blocking because nobody realized that the AWS/GCP/whatever logger adapter didn't actually perform all of its IO on a separate thread communicated with locklessly, which nobody noticed before because it only happened when a buffer was full, which only happened today because ....
Not a big deal; these are almost all the same things that mess up latencies in C++ code. And that's the thing: it's not necessarily easier to get low latency in Rust than in C++, but the work required to hit a given quality/performance/latency target in Rust is probably still lower than in C++, unless you are lucky enough to have a very mature C++ low-latency stack, together with all the utility functionality you need, which seems to be exceedingly rare. Is the work required lower than for Java on a custom/tuned JVM?
We'll have to wait and see. It probably is, but it's a complex balance between access to utilities, language complexity, and several other parameters that ultimately decide which platform provides the best environment for low-latency code, especially when the complexity is non-trivial.