I don't know how the Rust compiler is built. However, I am implementing an Ownership/Borrowing system for the D programming language, and making it work requires Data Flow Analysis (DFA). DFA is slow. It's normally only done for optimized builds, which is why optimized builds are slow.
But when DFA is a required semantic feature of the language, it has to be done for debug builds, too, and so they're going to be slow.
IIRC Microsoft released a paper a few years back that pioneered some techniques which significantly improved DFA efficiency, and TypeScript used those techniques in its control flow analysis implementation. Maybe I'm imagining all that.
Can you elaborate on why data flow analysis is slow, necessarily? For example, does it need whole-program information, or does it do some sort of fixed point, or is the algorithmic complexity super-linear, or something else?
It tends to be quadratic in the number of variables and the cyclomatic complexity. The DFA equations cannot be solved directly, only iteratively, repeating until a fixed point is reached.
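To make that concrete, here's a rough sketch of the classic iterative (worklist-style) approach, using liveness as the example dataflow problem. It's Python with a made-up CFG layout, not anything from an actual compiler; the point is that the transfer functions get re-run over the whole function until nothing changes, and the number of passes grows with the loop structure and the number of variables being tracked.

    # Iterative liveness analysis: keep applying the dataflow equations
    # until the in/out sets stop changing (a fixed point).
    def liveness(blocks, succs, use, defs):
        live_in  = {b: set() for b in blocks}
        live_out = {b: set() for b in blocks}
        changed = True
        while changed:
            changed = False
            for b in blocks:
                # out[b] = union of in[s] over all successors s of b
                new_out = set().union(*(live_in[s] for s in succs[b]))
                # in[b] = use[b] | (out[b] - def[b])
                new_in = use[b] | (new_out - defs[b])
                if new_in != live_in[b] or new_out != live_out[b]:
                    live_in[b], live_out[b] = new_in, new_out
                    changed = True
        return live_in, live_out

    # A two-block loop: the sets only stabilize after a few full passes.
    blocks = ["entry", "loop"]
    succs  = {"entry": ["loop"], "loop": ["loop"]}
    use    = {"entry": set(),    "loop": {"x"}}
    defs   = {"entry": {"x"},    "loop": set()}
    print(liveness(blocks, succs, use, defs))

A smarter visit order converges in fewer passes, but in the worst case you still re-walk every block several times, which is where the super-linear cost comes from.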
Debug code generation, on the other hand, involves none of that and the compiler just blasts out code expression by expression. The time goes up linearly with the number of expressions.
DFA (and the O/B DFA) is done at the function level. It can be done globally, but I try to keep the compile times reasonable.
Many implementors try to cache the DFA results and patch them incrementally. Anecdotal evidence is that they spend endless hours dealing with weird optimization bugs because the patches were wrong.
I decided to redo the DFA from scratch anytime the AST changed, and have had pretty reliable optimization as a result.
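The policy amounts to "the cache key is the whole function": if the AST changed at all, throw the old result away and rerun the analysis. A toy sketch of that idea (the names here are made up for illustration, not actual compiler code):

    # Hypothetical sketch of "recompute, don't patch".
    _cache = {}                       # function name -> (AST version, DFA result)

    def analyze(name, ast_version):   # stand-in for the full per-function DFA
        return f"dfa({name} @ ast v{ast_version})"

    def dfa_result(name, ast_version):
        hit = _cache.get(name)
        if hit and hit[0] == ast_version:
            return hit[1]             # AST untouched: the old result is still valid
        result = analyze(name, ast_version)   # AST changed: redo it all, never patch
        _cache[name] = (ast_version, result)
        return result

    print(dfa_result("foo", 1))       # computed from scratch
    print(dfa_result("foo", 1))       # reused
    print(dfa_result("foo", 2))       # AST changed -> recomputed from scratch

You give up some rebuild speed, but every result is guaranteed to match the current AST, which is the reliability trade-off described above.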