I figure it's because the desirable situation you describe in your other post doesn't hold: to satisfy the goals of the IDE, its parser tries to keep going past the point where the compiler's parser would stop, since the compiler is more of a batch tool than a responsive one, and sometimes the IDE gets it wrong. As you say, batch to responsive is the difficult direction to go.
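To make that concrete, here's a minimal sketch of the kind of recovery an IDE-leaning parser does (all the names are invented for illustration, this isn't modeled on any particular compiler): instead of stopping at the first unexpected token, it records the error, hands back a placeholder node, and keeps going so the rest of the input can still be analyzed.

```go
// Toy recursive-descent fragment showing error recovery; every name here
// is invented for illustration.
package main

import "fmt"

type token struct {
	kind string // e.g. "IDENT", "+", "EOF"
	text string
}

type parser struct {
	toks []token
	pos  int
	errs []string
}

func (p *parser) peek() token { return p.toks[p.pos] }

func (p *parser) next() token { t := p.toks[p.pos]; p.pos++; return t }

// parsePrimary wants an identifier. A strict batch parser could just stop
// on anything else; this one records the error, skips the offending token,
// and returns a placeholder so the rest of the input still gets parsed.
func (p *parser) parsePrimary() string {
	if p.peek().kind == "IDENT" {
		return p.next().text
	}
	p.errs = append(p.errs, fmt.Sprintf("expected identifier, got %q", p.peek().text))
	if p.peek().kind != "EOF" {
		p.next() // resynchronize past the bad token
	}
	return "<bad expr>" // placeholder node instead of giving up
}

func main() {
	// "a + + b" -- the stray "+" is an error, but parsing continues.
	p := &parser{toks: []token{{"IDENT", "a"}, {"+", "+"}, {"+", "+"}, {"IDENT", "b"}, {"EOF", ""}}}
	left := p.parsePrimary()  // "a"
	p.next()                  // consume "+"
	mid := p.parsePrimary()   // stray "+": recovered as "<bad expr>"
	right := p.parsePrimary() // still reaches "b"
	fmt.Println(left, mid, right, p.errs)
}
```

The "sometimes gets it wrong" part is exactly that placeholder and skip: the guess about where to resynchronize can mislead everything downstream.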
In addition, I suppose there are people hard at work applying ML to tools that help understand incomplete code and mitigate the false-positive problem of traditional static analysis. I can imagine probabilistic parsing being useful there, but not so much in compiling.
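As a toy illustration of what "probabilistic" could buy you (the candidates and numbers below are entirely invented, not output from any real model): rank a few possible completions of an unfinished line and surface the most likely one. That's a fine thing for an IDE to do with a guess, but a compiler has to commit to one answer, so a 70% guess doesn't help it much.

```go
// Toy ranking of candidate repairs for an incomplete line.
// The candidate list and scores are invented for illustration;
// a real tool would get them from a trained model.
package main

import (
	"fmt"
	"sort"
)

type repair struct {
	text string
	prob float64 // model's estimate that this is what the user meant
}

func rankRepairs(incomplete string) []repair {
	// Pretend model output for: "for i := 0; i < len(xs);"
	cands := []repair{
		{incomplete + " i++ {", 0.72},
		{incomplete + " i-- {", 0.05},
		{incomplete + " i += 2 {", 0.11},
	}
	sort.Slice(cands, func(a, b int) bool { return cands[a].prob > cands[b].prob })
	return cands
}

func main() {
	for _, r := range rankRepairs("for i := 0; i < len(xs);") {
		fmt.Printf("%.2f  %s\n", r.prob, r.text)
	}
}
```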
Bad language plugins in an IDE can show you this. Sometimes I'll be using a niche language with someone's side-project plugin that chokes on perfectly correct code, like when its file formatter can't parse a file and bails out with an error even though the compiler accepts that code just fine.
I was a little curious about this too; it's contrary to what I see in the Go and Rust compilers. My understanding was that it's good to have a go at parsing all the input if possible so the end user can batch-fix mistakes, but it's unreasonable to expect the post-parse checks to run when there are parse errors, because the AST is almost certainly incomplete.
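For what it's worth, you can see the Go side of that policy using nothing but the exported go/parser, go/scanner, and go/types packages (this is just a sketch around the public API, not the compiler's internals): the parser recovers and collects as many syntax errors as it can so they can all be fixed in one pass, and type checking is only attempted when the parse came back clean.

```go
// Report every parse error at once, but skip type checking on a broken AST.
package main

import (
	"fmt"
	"go/ast"
	"go/importer"
	"go/parser"
	"go/scanner"
	"go/token"
	"go/types"
)

const src = `package main

func f( {
	x := 1 +
}
`

func main() {
	fset := token.NewFileSet()
	// parser.AllErrors asks the parser to keep recovering and collect
	// as many syntax errors as it can instead of bailing on the first.
	file, err := parser.ParseFile(fset, "example.go", src, parser.AllErrors)
	if err != nil {
		if list, ok := err.(scanner.ErrorList); ok {
			for _, e := range list {
				fmt.Println(e) // the user can batch-fix all of these
			}
		}
		return // the AST is incomplete, so don't run the later checks
	}
	// Only a clean parse reaches the post-parse checks.
	conf := types.Config{Importer: importer.Default()}
	if _, err := conf.Check("main", fset, []*ast.File{file}, nil); err != nil {
		fmt.Println("type error:", err)
	}
}
```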
A compiler will get plenty complicated without IDE scenarios, trust me on that one. Slowness is also never really a thing to worry about here, especially because usage patterns in an IDE vs. a batch process are so different. It's almost always the other way around: someone writes something that's completely fine for a batch process but tanks IDE performance.