This is missing a few useful ones, like Box<str>, Arc<str>, Cow<'a, str>, SmallVec<u8>, transmuted newtype references like &UserId, and of course the string type you implemented yourself because the previous ones were not good enough.
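To make the zoo concrete, here is a minimal std-only sketch of a few of these (SmallVec is an external crate, so it's omitted); the `UserId` newtype and its cast are illustrative, not from any particular library:

```rust
use std::borrow::Cow;
use std::sync::Arc;

// A newtype over str; repr(transparent) guarantees identical layout,
// which is what makes the pointer cast below sound.
#[repr(transparent)]
struct UserId(str);

impl UserId {
    fn new(s: &str) -> &UserId {
        // Sound only because UserId is repr(transparent) over str.
        unsafe { &*(s as *const str as *const UserId) }
    }
}

fn main() {
    let boxed: Box<str> = "owned, immutable, no capacity field".into();
    let shared: Arc<str> = Arc::from("cheaply clonable shared string");
    let maybe_owned: Cow<'_, str> = Cow::Borrowed("borrowed until you need to own it");
    let id: &UserId = UserId::new("user-42");
    println!("{boxed} / {shared} / {maybe_owned} / {}", &id.0);
}
```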
I agree with this, but I don't think dynamic typing is the only solution. Something like Roc[0] strikes a better balance: it gives you a development flag which, when enabled, turns all compilation errors into warnings. The compiler substitutes every function it couldn't compile with one that panics at runtime.
I'd recommend taking inspiration from Nim, which uses some really clever metaprogramming techniques to allow for very terse representation of code.
Idiomatic Python is the way it is because it has really good defaults, but to achieve the speed of Rust you often have to go against them. Metaprogramming lets one function examine another and change the defaults in use to suit that particular use case, optimizing the code in the process.
See also Mojo, which adds some C or Rust-like concepts to Python.
Not the parent, but my own opinion is that if a C/C++ passion project becomes successful, it burdens other people with yet another source of security vulnerabilities.
This is even more true for small projects that lack the decades of security hardening behind Firefox/Chrome, yet people adopt them assuming their security is on par with Firefox/Chrome.
Why is your imaginary burden more important than someone's passion?
I find Rust people really annoying these days, if you can't even write software in the language you want to write in without being called a burden to society.
As the original article noted, one of the biggest problems with "thread per core" is the name itself, because it confuses people. It does not mean "one thread per one core" in the literal sense. Rather, it names a specific architecture in which message passing between threads (as is very common in Erlang) is avoided entirely, or kept to the minimum possible. Instead, the processing for a single request happens, from beginning to end, on one single core.
This is done to minimize the movement of data between cores' L1 caches, and to keep each core's cache warm with the data for one request, and not much else (at least, to the extent possible).
In the context of Rust async runtimes, this is very similar to Tokio without work-stealing, where all futures spawn tasks only on their local thread. The aim is to make coding easier (no Send + Sync + 'static constraints) while also making code more performant (which the article argues it does not).
For examples of thread-per-core runtimes, see glommio and monoio.
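As a std-only sketch of the core idea (the `route` helper and the hash-based dispatch are my own illustrative choices, not how glommio or monoio actually work): each worker thread owns its queue, and a request is dispatched once and then handled start-to-finish on that thread, with no stealing.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::mpsc;
use std::thread;

// Pick a worker for a request key; the same key always maps to the
// same worker, so a request is handled start-to-finish on one thread.
fn route(key: &str, workers: usize) -> usize {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    (h.finish() as usize) % workers
}

fn main() {
    let workers = 4; // imagine one thread pinned per core
    let mut senders = Vec::new();
    let mut handles = Vec::new();
    for id in 0..workers {
        let (tx, rx) = mpsc::channel::<String>();
        senders.push(tx);
        handles.push(thread::spawn(move || {
            // No work-stealing: everything this worker receives is
            // processed here, keeping its working set in one core's cache.
            for req in rx {
                println!("worker {id} handled {req}");
            }
        }));
    }
    for req in ["alpha", "beta", "gamma"] {
        senders[route(req, workers)].send(req.to_string()).unwrap();
    }
    drop(senders); // close the channels so the workers exit
    for h in handles {
        h.join().unwrap();
    }
}
```

Real thread-per-core runtimes replace the channels with per-core reactors (e.g. io-uring rings), but the dispatch-once, no-stealing shape is the same.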
Would've really liked to see monoio and glommio in here (thread-per-core with io-uring).
I've heard people say io-uring offers improvements for IO-bound workloads, but I haven't seen what that actually means in the context of Rust async frameworks. And Tokio's integration with io-uring is not ideal (doing it properly would require re-architecting Tokio from scratch).