Implementing each of these features has required whole-program refactorings in a large-scale codebase, performed by a few individuals while hundreds of other developers were simultaneously evolving the software and implementing new features.
Having been a C++ programmer for over 10 years, I can say that none of these refactorings would have paid off in C++, because of how time-consuming it would have been to track down all the bugs they introduced.
Yet they do pay off in Rust because of my favourite Rust feature: the ability to refactor large-scale software without breaking anything. If a non-functional change compiles, "it works", for some definition of "works" that's much better than what most other languages give you (no memory unsafety, no data races, no segfaults...).
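As a toy illustration of why "it compiles" carries so much weight during a refactoring: exhaustiveness checking alone turns many refactorings into a checklist of compile errors. The enum and function below are invented for illustration, not taken from any real codebase.

```rust
// Adding a variant to an enum makes every non-exhaustive `match` a
// compile error, so the compiler points at each site the refactoring
// must touch.
enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
    // Adding `Triangle { base: f64, height: f64 }` here would make
    // `area` below fail to compile until the new case is handled.
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Square { side } => side * side,
    }
}

fn main() {
    let shapes = [Shape::Circle { radius: 1.0 }, Shape::Square { side: 2.0 }];
    let total: f64 = shapes.iter().map(area).sum();
    println!("{:.2}", total); // pi + 4
}
```

The same mechanism scales up: change a type's shape, and the compiler enumerates every place the old assumption still lives.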
Very few programming languages have this feature, and no low-level programming language except for Rust has it.
Software design is not an upfront-only task, and Rust lets you iterate on a design as you better understand the problem domain, or as the constraints and requirements change, without having to rewrite things from scratch.
(One language that does some of this is Erlang, but AFAIK the stdlib data structures like digraphs, sets-of-sets, etc. aren’t actually the ones the compiler uses, but are rather there for use by static verification tools like Dialyzer. Which means that the Erlang digraph doesn’t know how to topsort itself, even though there’s a module in the Erlang compiler application that does topsort on digraphs. Still feels like being a second-class citizen relative to the runtime’s favoured compiler.)
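For scale, a generic topsort over a plain adjacency-list digraph is only a couple of dozen lines. Here is a sketch of Kahn's algorithm in Rust; the representation (dense `usize` node ids, an edge list) is illustrative, not any particular stdlib's API.

```rust
// Kahn's algorithm: repeatedly emit a node with no remaining incoming
// edges. Returns None if the graph contains a cycle.
fn topsort(n: usize, edges: &[(usize, usize)]) -> Option<Vec<usize>> {
    let mut indegree = vec![0usize; n];
    let mut succ: Vec<Vec<usize>> = vec![Vec::new(); n];
    for &(u, v) in edges {
        succ[u].push(v);
        indegree[v] += 1;
    }
    // Seed with the nodes that have no incoming edges.
    let mut ready: Vec<usize> = (0..n).filter(|&v| indegree[v] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(u) = ready.pop() {
        order.push(u);
        for &v in &succ[u] {
            indegree[v] -= 1;
            if indegree[v] == 0 {
                ready.push(v);
            }
        }
    }
    // If a cycle remains, some nodes were never emitted.
    if order.len() == n { Some(order) } else { None }
}

fn main() {
    // 0 -> 1 -> 2 and 0 -> 2: a small DAG.
    println!("{:?}", topsort(3, &[(0, 1), (1, 2), (0, 2)]));
}
```

The point is not this particular sketch but that the operation is generic: nothing about it needs to live inside a compiler.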
Rust does give you access to its internal data-structures on nightly. These change quite often, so you will need to update code that uses them pretty much every week.
Why don't more languages do this in some stable form? Because that would set the internal data-structures and APIs of the compiler in stone forever, which makes it vastly harder to improve the compiler and implement new language features.
Then you have Eiffel, Smalltalk and Lisp variants.
Scheme and Lisp, for example, only expose their AST. If you only care about the AST, you can access it from stable Rust, and there are great libraries for working with it, doing AST folds, semi-quoting, etc.
The OP wanted to work on the CFG graph. You could write a library to compute the CFG from the AST, but you don't have to because Rust exposes this as well. The CFG data-structures are much more tied to the intermediate representations of the Rust compiler, and these do change over time as new features are added to the language.
Some tools do use the compiler-internal CFGs though. For example, rust-clippy is a linter built on top of most of the compiler-internal data-structures, not only type checking but also CFGs. And rust-semverver, a tool that detects semver-breaking changes between the last released version of a library and the current one, is built on top of the type-checking data-structures and can deal with all kinds of generics (types, lifetimes, etc.).
These tools are typically tied to particular versions of the nightly compiler and do break often, but rust-clippy, for example, is distributed via rustup, so you always get a version that works with whatever nightly compiler you have, and you also get a version that "magically" works with a stable Rust compiler.
Other people have built all sorts of tools on top of this, from Rust interpreters to instrumentation passes that compute the maximum stack size requirement of embedded applications, e.g., by using the CFG to compute the deepest possible stack frame of a program, and the size of the stack frame for that case.
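That last idea can be sketched in a few lines, under the simplifying assumptions that the call graph is acyclic and a frame size is known per function (the numbers and structure below are made up; real tools work on compiler IR, not toy vectors).

```rust
// Worst-case stack depth on an acyclic call graph: each function
// contributes its own frame plus the deepest of its callees.
// Memoized DFS over the graph.
fn max_stack(
    frame: &[u64],            // stack frame size of each function
    calls: &[Vec<usize>],     // callees of each function
    f: usize,
    memo: &mut Vec<Option<u64>>,
) -> u64 {
    if let Some(v) = memo[f] {
        return v;
    }
    let deepest_callee = calls[f]
        .iter()
        .map(|&c| max_stack(frame, calls, c, memo))
        .max()
        .unwrap_or(0);
    let total = frame[f] + deepest_callee;
    memo[f] = Some(total);
    total
}

fn main() {
    // main (32 bytes) calls f (64) and g (16); f also calls g.
    let frame = vec![32, 64, 16];
    let calls = vec![vec![1, 2], vec![2], vec![]];
    let mut memo = vec![None; frame.len()];
    println!("{}", max_stack(&frame, &calls, 0, &mut memo)); // 32 + 64 + 16 = 112
}
```

Real embedded tools additionally have to handle indirect calls and recursion, which is where having the compiler's actual CFG pays off.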
1. Implementation-specific thing that has no formal equivalent. Structs of structs of structs, with business logic intermingled with the structure. Plenty of examples of this. I’m not suggesting anyone put these in the stdlib; that’d be silly.
2. ADT with no formalism behind it, that implements a particular set of behaviours “for best performance”, with no guarantee about the implementation or the time/space complexity, and a set of exposed operations that only state what they do, not how they do it, such that you can’t guess what algorithm a given operation is going to use.
Example of #2: Objective-C’s NSDictionary. It only guarantees that it “acts like” a dictionary; it doesn’t guarantee that it’s O(1), because being O(1) with a high constant can actually be worse for performance in some cases. So instead, it makes no guarantees, and tries to optimize for performance by actually switching between different implementations as the data held reaches different size thresholds.
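The pattern described for NSDictionary can be sketched in a few lines. This is an illustrative toy, not Apple's actual implementation; the type, the threshold, and the promotion policy are all invented.

```rust
use std::collections::HashMap;

// Below this size, a linear scan over a Vec often beats hashing
// (better constants, better locality). The value is arbitrary here.
const THRESHOLD: usize = 8;

// An "acts like a map" type that makes no guarantees about its
// backing representation.
enum AdaptiveMap {
    Small(Vec<(String, i64)>),
    Large(HashMap<String, i64>),
}

impl AdaptiveMap {
    fn new() -> Self {
        AdaptiveMap::Small(Vec::new())
    }

    fn insert(&mut self, key: String, value: i64) {
        match self {
            AdaptiveMap::Small(pairs) => {
                if let Some(slot) = pairs.iter_mut().find(|(k, _)| *k == key) {
                    slot.1 = value;
                } else if pairs.len() < THRESHOLD {
                    pairs.push((key, value));
                } else {
                    // Past the threshold: promote to the hash-based form.
                    let mut map: HashMap<String, i64> = pairs.drain(..).collect();
                    map.insert(key, value);
                    *self = AdaptiveMap::Large(map);
                }
            }
            AdaptiveMap::Large(map) => {
                map.insert(key, value);
            }
        }
    }

    fn get(&self, key: &str) -> Option<i64> {
        match self {
            AdaptiveMap::Small(pairs) => {
                pairs.iter().find(|(k, _)| k == key).map(|(_, v)| *v)
            }
            AdaptiveMap::Large(map) => map.get(key).copied(),
        }
    }
}

fn main() {
    let mut m = AdaptiveMap::new();
    for i in 0..20 {
        m.insert(format!("k{}", i), i);
    }
    assert_eq!(m.get("k3"), Some(3));
    assert!(matches!(m, AdaptiveMap::Large(_))); // promoted past THRESHOLD
}
```

Because callers only see `insert` and `get`, the representation switch is invisible, which is exactly why such a type can avoid committing to complexity guarantees.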
I’m not suggesting anyone put these in the stdlib either, if they currently live in a specific application, because this widens the scope of their stability requirements. Right now only the compiler maintainers can optimize this ADT into doing something entirely different, and then ensure that the compiler (the only consumer) is happy with the result; if the ADT were available in the stdlib, they wouldn’t be able to test all consumers, so they’d have to be much more conservative with their changes, lest they break an edge-case. (Changing how sorting an array ADT works, for example, can break code that assumed that sorting either nearly-sorted or entirely-randomized sets was optimal.)
And, of course, sometimes in a compiler the Control Flow Graph ADT will be of this type. If it is, fine, don’t export it. But usually—because of how compiler maintainers interact heavily with compiler theorists—it’s instead the third kind:
3. Formal data structures, where the name comes from a paper which defines the expected behaviour and a base-level implementation; where either that implementation, or another paper’s pure improvement on said implementation (without changing any of the semantics) is what you can expect to find. As well, the names of all algorithms implemented against the ADT are also well-defined in papers, such that you can know what algorithm you’re using by the name of the function. (Usually, in these cases, you’ll have multiple algorithms that do the same thing differently living as sibling functions in the same module.)
Examples of #3: the geometric primitives and index search-tree types that PostGIS exposes to C-level code. Any implementation of vector clocks. Or, my favourite: regular expressions (i.e. deterministic and nondeterministic finite automata.)
Also, example to make the naming point: for a particular use-case, the use of a segment tree might be an optimization over use of an interval tree. But in code that uses these types of formal data structures, you’d never find an ADT named “IndexTree” that could happen to be either; instead, you’d expect separate SegmentTree and IntervalTree implementations, and then maybe a strategy-pattern ADT wrapping one or the other. But the concrete implementation of the formalism is very likely to exist, because people tend to just sit down and implement these things by translating pseudocode in papers into modules; and then tend to want things to stay that way, because keeping a 1:1 mapping from the module to the paper, and then linking the paper in module-doc, is one of the only good ways to get new maintainers to understand what the heck has been implemented here.
It’s #3 that I would suggest is a good candidate for stdlib inclusion. These data structures don’t change in a way that breaks their guarantees, because their existence is predicated on a particular formalism, and nobody cares about improvements to the performance of a formalism that break the formalism. (Fun analogy: nobody would care about improvements to speed running a video game that required changing the code of the game. You’re trying to optimize that game, not some other one!)
You started arguing that the Rust compiler should expose its internal data-structures, which it does, and somehow ended arguing that the Rust standard library should expose spatial data-structures for geometry processing.
I have no idea how you got from one to the other, but writing a huge wall of text full of disorganized thoughts shows very little appreciation for the time of those participating in this discussion thread, so I won't interact with you anymore.
The standard library exposes an API for some of them, and you can access the internal data-structures of the Rust compiler by adding a line to your program to import them (requires unstable Rust) or by adding a line to your Cargo.toml to load them from crates.io.
The argument is flawed in assuming that these can be set in stone in any practical way. Rust binaries do not run on a whiteboard; they run on real hardware, which means there are thousands of different ways to efficiently implement these data-structures depending on your precise use case, and most of them affect their API design. For example, while Rust was lucky enough to end up with a hash table whose API is compatible with Google's SwissTable design, C++'s std::unordered_ containers were not, and cannot be upgraded.
The argument is also flawed in assuming that Rust is a static language. It isn't: the language is evolving, and as it evolves, the requirements on the internal data-structures change, resulting in changes to the algorithms and data-structures being used (not only to their implementations).
For example, at the end of last year Rust shipped non-lexical lifetimes, which use a completely different borrow-checking algorithm, data-structures, etc. The old ones are simply not compatible with it.
Then there is also the fact that many people work on improving the performance of the Rust compiler. This means that the graph data-structures used are often not generic, but exploit the graph structure and the precise type of the graph nodes. As these data-structures are parallelized, made cache-friendly and allocation-free, and adapted to exploit platform-specific features like thread-locals, atomics, and so on, their APIs often change. Also, often somebody just "discovers" that there are better algorithms and data-structures for implementing one pass, and they just change them.
Putting these in the standard library incurs a massive cost, since it makes them almost impossible to evolve inside the Rust compiler, adds a huge maintenance burden, etc. And for what value? If you want to use precisely what the Rust compiler uses, you can already add a line to your program to import that from somewhere else that makes it clear that these are not stable. If you want general graph data-structures, chances are that your graph won't look like the ones used by the Rust compiler. There are hundreds of crates for manipulating graphs on crates.io, depending on how big the graphs are, whether you can exploit some structure, the common operations that you want to do with them, etc.
Sure, on the whiteboard, all of them probably have the same or similar complexity-guarantees, but on real hardware constant factors make a big difference for real problems.
Doesn't mean I wouldn't like it myself, but it is a genuine drag on continued compiler development.
The nice thing about these formal data structures (or formal ADTs, I should say, though the papers themselves never tend to think of themselves as introducing an ADT) is that successive papers that find better algorithms, or make small changes to the data structure, pretty much always hold the set of operations required of the data structure constant (since that’s the only way to be able to compare these successive implementations on their performance/complexity.) So a “family” of papers about a given data structure is really a family of papers about an implicit ADT whose defined interface is the set of operations from the first paper that later papers compare themselves on.
Personally, I feel like all “formal” data structures of this type could benefit from stdlib inclusion (e.g. interval trees; ropes; CRDTs; etc.) but that’s just a personal preference.
But when a compiler is a built-in part of a runtime, with its modules already exposed as stdlib modules in a sense (being there as modules in a package namespace that you don’t have to retrieve, and which is always locked to the version of the compiler installed) such that people can actually reach in and use the compiler’s ADT for formal data-structure Foo—and people do!—and yet this growing dependency doesn’t cause the language maintainers to become worried about how they’re creating an implicit stdlib module by exposing the compiler internals in this way, but forgetting to track its usage for breaking changes... then I start to feel like it’s a bit ridiculous.
To me, language projects should be run under the Microsoft philosophy: you don’t get to decide what your stable interfaces are. If your customers are all using and depending on feature X, then feature X should be treated as a stable interface. It’s up to you as a language maintainer to discover your stable interface requirements, and promote the things people are “treating as stable”, into being managed as stable in a language-evolutionary project-management sense.
The philosophy only works because private APIs are hidden enough to reduce the effort required to maintain backward compatibility. The fact that MS worked hard to maintain compatibility even for private dependencies (this was much more vital in the Windows 95 transition than it is now) doesn't mean that "private" doesn't mean anything.
I think a bigger (and certainly more visible) element of the MS philosophy is in maintaining compatibility for public, stable APIs. For example, look at how Windows 10 still has all these old Control Panel dialogs. That's not incidental compatibility with undocumented APIs; it's designed for extensibility, the maintenance of which holds the whole platform back to a certain degree.
I worked on the Delphi compiler and runtime library. Versioning was an exceedingly important concern. Patch releases couldn't break APIs, because that could affect our partner market: if we affected the dependencies of third-party components, those third parties would need to re-release. The ecosystem only worked because there was a clear separation between public and private. You need private APIs because you need the flexibility to change things; if you can't change things, you must create everything perfectly the first time, and that's just not possible.
You might want to get access to the compiler internals, but if you build something nifty with that access, and that nifty thing gets widespread use, you will hold back the entire ecosystem. You will be the cause of warts like Windows 10's 20 year old Control Panel dialogs.
If you'd like to do topological sorting of a fixed set of tasks, you'd use one algorithm. Online topological sorting (that is, delta-based algorithms) as used in databases would use a different algorithm and has different performance constraints.
I could see that in the long run we'll eventually figure out how to abstract all of the "complete/delta" stuff. I just don't think we have figured it out yet.
I think such a type would be less useful than you'd think, for precisely the same reason why a linked list in the stdlib is pretty much useless: almost always, you don't want the stdlib allocating nodes; you want an intrusive data structure instead.
Really, the problem is that the compiler needs to store extra data with the nodes that the stdlib can't know about, but you don't want two separate types `stdlib::GraphNode` and `compiler::ControlFlowNode` because you usually need to be able to convert between these types in both directions. (one direction can be handled by embedding one of the types in the other; but the reverse direction will require overhead for an extra pointer, or horribly unsafe pointer arithmetic)
Of course in Rust, there could still be a digraph trait and the stdlib could still provide generic algorithms.
Though it's also not so rare in compilers that nodes are members of multiple graphs simultaneously (with different edges in each, e.g. control flow nodes are typically not just part of the control flow graph, but also belong to a dominator tree). It's non-trivial to create a graph abstraction that can handle all these cases while remaining efficient (you don't want to put nodes in a HashSet just to check whether a graph algorithm already visited them), so it's not surprising that compiler developers don't bother and just write the algorithm directly for their particular data structures. In the end, most graph algorithms are only about a dozen lines, much simpler than the abstractions that would be required to re-use them across completely different graphs.
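That said, a minimal digraph trait that exposes dense node indices avoids the nodes-in-a-HashSet problem while staying generic. The sketch below uses illustrative names, not any real crate's API: algorithms written against the trait can use a `Vec<bool>` for visited marks.

```rust
// A digraph with dense indices: nodes are 0..num_nodes().
trait Digraph {
    fn num_nodes(&self) -> usize;
    fn successors(&self, node: usize) -> &[usize];
}

// A generic algorithm over any `Digraph`. Because indices are dense,
// the visited set is a flat Vec<bool>, not a HashSet.
fn reachable_from<G: Digraph>(g: &G, start: usize) -> Vec<bool> {
    let mut visited = vec![false; g.num_nodes()];
    let mut stack = vec![start];
    while let Some(n) = stack.pop() {
        if !visited[n] {
            visited[n] = true;
            stack.extend(g.successors(n).iter().copied().filter(|&s| !visited[s]));
        }
    }
    visited
}

// A compiler-style CFG can implement the trait over its own storage.
struct Cfg {
    succ: Vec<Vec<usize>>,
}

impl Digraph for Cfg {
    fn num_nodes(&self) -> usize { self.succ.len() }
    fn successors(&self, node: usize) -> &[usize] { &self.succ[node] }
}

fn main() {
    // Node 3 has an edge into the graph but nothing reaches it.
    let cfg = Cfg { succ: vec![vec![1], vec![2], vec![], vec![2]] };
    println!("{:?}", reachable_from(&cfg, 0));
}
```

Even so, the point above stands: making one abstraction serve the control flow graph, the dominator tree, and every other overlay on the same nodes, without overhead, is where this gets hard.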
I guess that depends on the language; in dynamic, pure-functional languages (like Erlang) there’s nowhere you can go for further efficiency beyond “digraph expressed as labelled V and E sets held in dictionaries”, so the data structures themselves actually are quite generic, and you would never really need a separate specialization of them.
You’re probably right that in systems-level languages where you’re effectively programming a pointer / RAM word machine, you can get better specialized data structures per use-case, so writing the self-hosting compiler isn’t going to just “throw off” a handy reusable stdlib digraph library as a side-effect.
But in the cases where you’re not using a systems-level language—i.e. the cases where the graphwise algorithms you’re writing are being written against the same stdlib ADT anyway, even if you reimplemented it compiler-side—it doesn’t make sense to me to hide these algorithms away inside the compiler application package. They’re generic algorithms! (And since the ADT is a formal one from a paper, they’re probably very stable ones, too!)
As a restatement of my original premise: if your digraphs are generic, and topsort is a generic function on digraphs with only one obvious implementation given the digraph ADT you’re operating against; and given that you’ve implemented topsort for these stdlib digraphs; why aren’t you putting that algorithm in the stdlib, as a function of stdlib digraphs?
(And it’s not just a question of only putting the most common stuff into the stdlib. Erlang has lots of very arcane digraph operations only useful in proof-verification sitting around in the stdlib in the digraph_utils module. There’s no reason that operations only useful in compilers wouldn’t belong in there equally well. But instead, they’re in the compiler. Kinda weird, if you ask me.)
Don't C++'s std::list and Rust's std::collections::LinkedList effectively implement the moral equivalent (insofar as memory layout is concerned) of an intrusive list, though?
You can't call `std::list<Node>::erase()` with a `Node*`, and you can't write a function that converts from `Node*` to `std::list<Node>::iterator` in standard C++, even though it's really just a matter of subtracting a fixed number of bytes from the pointer value. So you instead need to store the `std::list<Node>::iterator` in the `Node`, wasting memory.
You can kind of cheat by always using `std::list<Node>::iterator` instead of `Node*`, but that means you can't use normal methods, because the `this` pointer will lack access to the iterator.
And you always need to pass around a pair of `std::list&` and iterator, because the iterator alone isn't sufficient to delete the node or insert elements before/after it. Usually the code ends up being a lot simpler if class Node just reimplements a linked list.
And that's still the simple case where each node is only contained in a single data structure at a time, which (at least in compilers) is rarely sufficient.
By the same token, I'd like to see dynamic languages with a tracing JIT show what types are used in the actual runtime. There was an article posted to HN a few years back where some researchers noted that most dynamic-language servers seem to go through an initial startup period with some dynamic-type shenanigans, then settle down to a state where the types are basically static.
C# has https://docs.microsoft.com/en-us/dotnet/csharp/programming-g... (only partially what you want I think)
I think crater also deserves some credit here - given how much Rust source is in Cargo, it's very useful to be able to run a refactored compiler against all those packages and see which don't compile or start failing their unit tests.
Ada has had the same characteristics since '83. Thanks to strict typing (among other features), Ada programmers have enjoyed "safe" code refactoring for decades.
But it is nice that new languages like Rust are finally picking similar ideas and design choices.
Rust is still far from Delphi, Eiffel, .NET Native experience though.
Although it is great that it keeps improving.
That would be super interesting to read, because it is mainly a C++ compiler, and some C++ features like two-phase lookup and macros make it quite hard to do things in parallel. You have to do things in a certain order, but I suppose that if it's query-based as well it will work.
The only option I know of is /CGTHREADS, but that only uses multiple threads for optimizations and code generation, which is something the Rust compiler has been able to do for a very long time (in Rust these are called codegen units, and LLVM supports them, so it is quite trivial for a frontend to do so as well).
A compiler needs to do a lot of stuff beyond code generation and optimizations (e.g. in debug builds optimizations are even often disabled). For C++ you have overload resolution, template argument deduction, template instantiation, constexpr evaluation... and well parsing, tokenization, type checking, static analysis (e.g. for warnings), etc.
Parallel optimizations and codegen is trivial when compared with an end-to-end multi-threaded compiler. All LLVM frontends for all programming languages do parallel optimizations and parallel codegen, it's only an LLVM option away. You enable it, and it's done.
> The /MP (Build with Multiple Processes)
This is one process per translation unit, each single translation unit is then compiled with a single thread until optimizations and codegen.
Often you need to finish compiling/linking some translation units before continuing.
The Rust compiler has pipelined compilation: compilation of a translation unit starts as soon as what this requires of its dependencies is already available, before its dependencies have finished compiling. It also compiles each translation unit itself using multiple threads end-to-end, so that if one of your translation units is bigger than the others, you can speed the compilation of that one by throwing more threads at it.
Visual C++ pre-compiled headers are quite amazing and work really well, avoiding this almost completely.
But as far as I remember it isn't fully multi-threaded across all phases.
In an apples-to-apples comparison (i.e., same pre-processed code content) from circa 2015, Visual Studio's compilation was about the same speed as clang and faster than GCC, without the use of incremental compilation / incremental linking / PCH.
It's very easy to dig yourself into a hole when developing for Windows because Microsoft has historically had terrible discipline for minimizing the amount of code brought in through the Windows platform includes as well as the language headers. PCH's ease the penalty significantly but are a bitter pill to swallow if you're trying to cut down on the number of symbols exposed to each translation unit.
Naturally one can do that across UNIXes as well; they just historically seem not to have invested that much in either incremental linking or a proper pre-compiled header model, hence no one really uses them. At least that is my perception.
In any case modules are in now, so let's see how they evolve.
1. You get access to all of the JVM libraries.
2. You don't have to work within the confines of the borrow checker.
3. Kotlin “Common” targets 3 platforms: LLVM, JVM, and JS (whereas, Rust only targets LLVM).
4. Kotlin is probably a more terse language, and suitable for doing algorithm / problem-solving interviews, and therefore a good one to be fluent in.
5. Kotlin's greater industry traction means that it might be more useful professionally. Outside of Android, I've also heard of servers/back-ends being written in Kotlin.
6. ANTLR is well-documented, compared to LALRPOP (the best existing Rust parser generator), and ANTLR targets/generates code for several mainstream languages (versus only Rust with LALRPOP). ANTLR is also probably a more useful skill to have for future jobs/projects.
But despite all the pluses of using Kotlin, I'm still leaning a bit closer to Rust, because of all the good things I'm hearing about it.
The second-order effects are even worse. After a minute, the programmer will start thinking about other things, ruining flow.
If compiles regularly take 5min, devs will leave their desks (and honestly, who can blame them for it).
My editor (emacs) uses `cargo watch -x check -d 0.5` to run `cargo check` (which is blazing fast for incremental edits) to type check all the code and show "red squiggles" with the error messages inline.
So my interactive workflow with Rust is only edit-"type check"-edit-"type check" where "type check" takes often less than a second.
Asynchronously in the background, the whole test suite (or the part that makes sense for what I'm doing) is always being run. So if a test actually fails to run, I discover that a little bit later.
I don't know any language for which running all tests happens instantaneously. With Rust, if anything, I write fewer tests because there are many things I don't need to check (e.g. what happens on out-of-bounds accesses).
This is the best workflow I've ever had. In C++ there was no way to only run type checking, so one always had to run the full compilation and linking stages, where linking took a long time, and you could get template instantiation errors quite late, and well, linking errors. I don't think I've ever seen a Rust linking error. They probably do happen, but in C++ they happen relatively often (at least once per week).
gcc has -fsyntax-only. Despite the option name, this also includes type checking and template instantiation. AFAIK it reports all compiler errors, though it skips some warnings that are computed by the optimizer (e.g. -Wuninitialized).
I never invoke clang or gcc directly. When using cargo, I use `cargo check` instead of `cargo build`. But in C or C++ depending on the project `make check` might not exist, or it might build all tests and run them, or do something else entirely like checking the formatting using clang-format.
Pick one. I'm sure there is though.
It wouldn't be hard with make but then again I'm much more familiar with it than cmake.
-fsyntax-only on GCC and IIRC clang as well.
If you use Emacs you can integrate it w/ flycheck.
Maybe doing `CXXFLAGS="$CXXFLAGS -fsyntax-only" CFLAGS="$CFLAGS -fsyntax-only" make ...` ?
I've found it a helpful practice as a programmer to be intentional about this.
With a little awareness, you can identify situations where it really would be best to busy-wait while something compiles. (If the wait is not all that long and switching tasks harms focus.)
And with a little mental discipline and practice, you can train your mind not to wander. You don't totally blank your mind out, but you also don't let unrelated thoughts distract you. Just continue to think about the same thing you were when the compile started. Don't shift gears mentally, just ease up on the mental gas pedal.
It's so easy to let anxiety or guilt about wasting 2 minutes lead you into giving up focus, which is more precious. It's a false economy, and thus doing something else with those 2 minutes is actually more of a temptation than a smart idea.
(Of course, it's better if the tools are just fast! But sometimes you can't have that.)
It's especially useful when only making minor changes to the code (which is pretty common).
If you compare the compiled artifact (rlib) of a crate with the source code, you'll quickly see that the compiled artifact is much larger: libglutin-0c732c31a1d003fb.rlib is 8.1 MB while glutin-0.22.0-alpha1.crate is 53 KB.
Most people have shitty internet, and often there's nothing you can do about it because you have shitty ISPs. You can buy a computer with a good CPU, and those are usually cheaper than good internet for a year, at least in many rural areas in the US. And it's not just the US; some other countries have it even worse.
Now of course if you have good internet and a bad CPU it's a good deal, so there should definitely be an option to use it, maybe even with autodetection. But I think cargo has too much dependency on the internet, not too little. There should be no manual input required to turn off precompiled crates downloads if it is faster to compile the crates locally.
If I would enjoy compiling everything from scratch I would be using Gentoo.
Will they have a trusted compile farm, only supporting a subset of targets, or involve some kind of distributed trust model supporting whatever people use? Will it be greedily populated with a specific subset of targets / features or lazily populated based on combinations people actually use?
Crates.io is already a security trainwreck in progress. Do we really need to add even more attack vectors?
It's also solving a non-problem. I modify source code downloaded from crates.io zero times per day, so I compile each crate only once. Compile times matter for code I write myself: I modify (and therefore compile) that code dozens of times per day.
The faster the Rust front-end gets, the larger the share of overall compile time LLVM takes.
See https://github.com/bjorn3/rustc_codegen_cranelift#not-yet-su... for progress
For example, on macOS in debug builds, the compiler and linker are reasonably fast, but then 2/3rds of the compilation time is spent in "dsymutil", presumably chewing through megabytes of the debug info.
I think I spent 3-4 days compiling Rust, and had almost forgotten I started it at the time.
They probably just forgot to un-comment
// let compileSpeed = fast;
Their resources were limited and they chose to focus on more important things at the time. Now they're focusing on compilation speed. And those more important things Rust offers aren't really offered by any other mainstream language.
But the Rust compiler is an amazing static analyzer. Running one on C++ to get the same level of memory safety will often take hours on sizeable projects.
But then Rust errors out on a lot of things the C++ analyzer wouldn't, and a lot of those things are actually reasonable code.
For example I discovered recently that you cannot swap two pointer variables in rust without using unsafe. Surely there is some deep reason for that, that is really important until they can figure out how to fix it in some future Rust version, which they undoubtedly will.
It's not that swapping is an obscure operation, many important and fundamental algorithms depend on it.
I'm probably overly cynical there, but I sometimes feel that all these arbitrary restrictions that make code hard to write make people feel better about rust because if they get their code to actually compile it's a sense of achievement that makes them proud. Kind of like a journey to find yourself or something like that.
But then i see that there is actually some progress in Rust to relax some of these restrictions (e.g. non lexical lifetimes in the borrow checker). If my theory above is true that would actually lower Rust's popularity over time. True? Probably not. But it's an interesting thought.
Wouldn't this work?
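For what it's worth, swapping two pointer-typed variables does work in safe Rust via `std::mem::swap`; the `unsafe` lives inside the standard library, not at the call site:

```rust
fn main() {
    let a = 1;
    let b = 2;
    // Two pointer-typed locals (shared references here). Swapping them
    // needs no `unsafe` at the call site.
    let mut p: &i32 = &a;
    let mut q: &i32 = &b;
    std::mem::swap(&mut p, &mut q);
    println!("{} {}", p, q); // prints "2 1"
}
```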
>But then i see that there is actually some progress in Rust to relax some of these restrictions (e.g. non lexical lifetimes in the borrow checker). If my theory above is true that would actually lower Rust's popularity over time. True? Probably not. But it's an interesting thought.
I'm not sure. Sometimes these restrictions are there because it's too difficult to decide what should happen. I don't think it's malevolence on the language creators' side. And I do think usability is always better.
I find that I'm running into way fewer such situations now than last year though, so I think a lot is happening here.
If you click the [src] link on that page, you'll see that the implementation of std::mem::swap uses unsafe. I'm no Rust expert, but I'm also not aware of a way to swap pointers without unsafe.
The goal of unsafe is to reduce the attack surface where you need to check your code more thoroughly. So by encapsulating this in a library function that's well tested you don't need to create such a section yourself. But if you do need unsafe it's not wrong, and if only a small part of your code needs unsafe then it's much easier to verify that your code is doing what it should.
And speaking of C++, it's obvious that Rust programmers take a very adversarial stance towards C++, but you should instead be thankful that it exists, because without it many of those marketing articles would lose their meaning.
"Slow compiler is now less slow, but probably still slower than those of other languages you know" doesn't quite have the same effect.
Any high-reliability, statically checked language requires you to write code a certain way. C++ is just the most prominent and meaningful example here.
Ada would be another good comparison, but I haven't worked with it on large projects so I can't say much about compile speed. It has a lot of similarities in being a safer C++. It felt similarly restrictive to Rust though.
I think the fact of the matter is that if you want safety, you either need some kind of restrictions and a compiler that does extra work (Ada and Rust), or you need to look at the whole toolchain necessary to achieve safety (in C++ this takes very long if you also run your tests with static analyzers, ASan, and MSan, which catch some of the bugs Rust tells you about at compile time).