This has been one of the most frustrating things about Rust's development experience. A piece of code looks error-free, then you fix an error somewhere else, reach the next compiler phase, and an error pops up where you just were. It makes it really hard to develop something piece by piece.
I'm glad they're aware of these problems and are making a concerted effort to address them.
Isn't this the case with all compilers? The most basic example: I have a syntax error on line 10, and that's the only reported problem, but then I fix the syntax error and now I have an error on line 200.
I agree it is a problem, the world would be better if all languages could avoid it, but I don't expect it ever to go away. Do you believe this problem is better or worse in Rust compared to other languages?
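To make the cascade concrete, here's a minimal (invented) Rust sketch of the experience: while a syntax error stands, nothing after it gets type-checked, so fixing it "creates" a new error further down. The code below is the fixed version, with comments marking where the two errors would appear.

```rust
// A sketch of the two-stage error experience. While the missing semicolon
// described below is unfixed, the parser bails out and any later error is
// never reported; fix the syntax and a "new" error pops up.
fn add_parsed(a: i32, s: &str) -> i32 {
    // imagine this line as `let b: i32 = s.parse().unwrap()` minus the
    // semicolon: parsing stops here and nothing after it is type-checked
    let b: i32 = s.parse().unwrap();
    a + b // a type mismatch here would only surface after the fix above
}

fn main() {
    println!("{}", add_parsed(10, "200")); // prints 210
}
```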
And even inside a method it'll try to massage the AST into a somewhat valid state while you're typing, so it can keep analyzing the code and show errors/make suggestions. A missing bracket on line 15 won't prevent it from soldiering on to spot a misspelled method call on line 40, even though the nesting is totally messed up and, taken at face value, line 40 would be syntactically invalid since it's outside the class body.
Unlike other languages, Rust still has borrow checking to go after all the type errors are gone.
So that "Unlike other languages" isn't quite true.
What Rust has done is prove that it is viable to push such ideas into mainstream computing.
The borrow checker is actually original, though based on known techniques. Specifically, the original part is that it allows the per-use-site choice of aliasing vs. mutability instead of making the decision globally.
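A tiny sketch of what "per-use-site" means in practice: the same value can be borrowed shared (`&T`, any number of readers) at one site and exclusively (`&mut T`, exactly one writer) at another, instead of the aliasing/mutability decision being made once, globally, for the type.

```rust
// Each borrow site picks aliasing XOR mutability for itself.
fn read_len(s: &String) -> usize { s.len() }  // this use site chooses aliasing
fn exclaim(s: &mut String) { s.push('!'); }   // this one chooses mutability

fn main() {
    let mut s = String::from("hi");
    let n = read_len(&s); // shared borrow here...
    exclaim(&mut s);      // ...exclusive borrow here, for the same value
    println!("{} {}", n, s); // 2 hi!
}
```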
So while it is great that Rust is making these concepts mainstream, in about five years' time the other languages will have their New Jersey-style implementations mature and available for their communities.
See "Safe C++ Subset is Vaporware" by Robert O'Callahan: https://robert.ocallahan.org/2016/06/safe-c-subset-is-vapour...
Google and Microsoft are the ones driving that effort, in what concerns C++.
That vapourware is quite welcome when using DriverKit, Unreal, COM/UWP, Project Treble drivers, ...
(borrow checker seems like it is one of the last phases).
I mean, an early warning, before I actually made those mistakes would help a lot, if it can be done that is :)
I have a hard time believing you make this choice often or "usually". This seems like flat out prejudice.
Is this true? The way the D compiler does it is to make a special AST node type called 'error', which allows it to avoid this problem. I have also gotten error messages from C compilers that indicated they were doing something similar.
Now, in the D compiler this is not a panacea, because erroneous code can cause erroneous error messages about unreachable code, but it generally works quite well.
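For illustration, here's a sketch of the "error node" idea (in Rust, and not the actual D implementation): the parser inserts a poison `Error` node where recovery happened and keeps going, and later phases treat it as "already reported" instead of cascading further diagnostics from it.

```rust
// Sketch of error-node recovery. `Expr::Error` stands in for unparseable
// input; evaluation (or type checking) stays quiet when it reaches one,
// because the parser already emitted a diagnostic there.
#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Error, // placeholder inserted during parser recovery
}

fn eval(e: &Expr) -> Option<i64> {
    match e {
        Expr::Num(n) => Some(*n),
        Expr::Add(a, b) => Some(eval(a)? + eval(b)?),
        Expr::Error => None, // poisoned: don't cascade more errors from here
    }
}

fn main() {
    let good = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)));
    let bad = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Error));
    println!("{:?} {:?}", eval(&good), eval(&bad)); // Some(3) None
}
```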
I'm also contributing to https://opencollective.com/rust-analyzer, I want it to be a viable project.
rust-analyzer is definitely alpha-quality at the moment.
BTW, thanks for the project and good luck! I use it daily and it is awesome.
You are not mistaken indeed. More specifically, I do want to maintain the freedom to push completely broken code. It's not necessarily about popularity, it's about user expectations. And I also don't allocate as much time as I could to things that directly increase popularity. For example, the "install extension from the marketplace" feature could have been done almost a month ago, but I still haven't gotten to it. It's not that I am deliberately pushing back against such improvements, I just don't push them forward too actively.
You'd end up sharing a lot of the same infrastructure that you'd want for an LSP server, basically to do as much preprocessing as you could on the initial compilation and then work incrementally on top of that.
You'd also of course still want your globally-optimizing batch mode compiler for production, but for many projects it seems like such a thing could produce good enough binaries for development, and greatly improve engineer productivity.
Here's a good overview of this on Stackoverflow: https://stackoverflow.com/questions/3061654/what-is-the-diff...
For Rust, it looks like they are following a similar approach with rust-analyzer. It is indeed not intended for building production code, which you'd typically do on some CI server, using optimizations that maybe make the process a bit slower but the output a bit faster/smaller.
If each step is incremental, it follows that it's easier to also let each step "integrate" deltas; for example, adding a comment shouldn't need to recompile anything at all, merely modify the AST, because it doesn't lead to any actual code artifacts. The second benefit is that all the information needed to inspect the program — from syntax highlighting to type information — is already there.
If you can keep the entire high-level compilation state in memory, then it's easy to query it for all sorts of IDE features. This same design was used for the TypeScript compiler; Anders Hejlsberg has a great little lecture about it.
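A toy sketch of the memoized-query idea behind rust-analyzer and the TypeScript language server: derived results are cached per revision, so an edit invalidates them, but repeated queries over unchanged input cost nothing. All names here are invented for illustration.

```rust
// Minimal revision-based query cache: `line_count` is a "derived query"
// over the `source` input; editing the source bumps the revision and
// invalidates the cached answer.
struct Db {
    source: String,
    rev: u64,                           // bumped on every edit
    cached_lines: Option<(u64, usize)>, // (revision computed at, value)
}

impl Db {
    fn new(source: &str) -> Db {
        Db { source: source.to_string(), rev: 0, cached_lines: None }
    }
    fn edit(&mut self, source: &str) {
        self.source = source.to_string();
        self.rev += 1; // invalidates every cached query result
    }
    fn line_count(&mut self) -> usize {
        if let Some((r, v)) = self.cached_lines {
            if r == self.rev {
                return v; // cache hit: answer without touching the source
            }
        }
        let v = self.source.lines().count();
        self.cached_lines = Some((self.rev, v));
        v
    }
}
```

Real systems (salsa, which rust-analyzer builds on) track dependencies between many such queries so only the affected ones are recomputed, but the revision-and-cache core is the same shape.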
Eclipse is interesting in that it grew out of IBM's VisualAge for Java, which was written in Smalltalk as an extension of the original VisualAge IDE/compiler, which goes back all the way to the mid-1980s. I wonder if any of the inspiration for this design came out of Smalltalk's runtime state being persistent. If all your state is derived from earlier state, then there's no need to always start processing from scratch, since you can just continue from where you were before.
While it doesn't have the niceties added in later versions and in Free Pascal (like dynamic arrays, generics, anonymous functions and a few other things) it still has more than enough features to create big complex applications (e.g. object oriented language, rich RTTI that allows automatic serialization, properties, native string support, optional reference counting, real module support).
In terms of generated code... well, it isn't exactly great. It placed near the bottom of a benchmark I wrote some time ago (http://runtimeterror.com/tools/raybench/) that compares some C and Pascal compilers - I mainly wrote it for retrocoding, but threw in some modern compilers as well. (EDIT: I misremembered, it didn't come last - Borland C++ 5.0 did; in fact it came slightly above Free Pascal 3.0.4 with its default settings, though by enabling 64-bit and modern instructions FPC was able to generate much faster code.)
But then again, it is written in itself and, as I wrote, it is stupidly fast. So it does the work perfectly fine for most tasks - as long as you don't need brute-force number crunching.
Of course I'm not advocating the use of Delphi 2 (and I do not know how it compares with modern Delphi - last time I tried out the free version they have, the entire IDE felt very weird and sluggish). However, the fact that it exists is proof that you can have a very fast compiler and IDE for a rich language that produces very good code - maybe not the best possible code, but good enough for a large majority of tasks.
Modern Delphi is still quite fast.
I don't know about the latest version of modern Delphi, but as I wrote, when I tried the free version 2-3 years ago it wasn't that fast. Lazarus felt faster and much snappier. Perhaps the non-LLVM Delphi compiler (if it is still available) is faster (the Delphi 2 compiler is certainly faster than FPC), but it is held down by all the bloat piled on top.
I would be quite happy with the compile speed of the latest Delphi version when using Rust.
BTW, I haven't used Delphi since it adopted LLVM (or maybe that wasn't in the free version) - didn't it affect the compiler's speed?
Or version 4 of IBM's Visual Age for C++.
Visual Age for C++ was quite impressive: the whole program was serialized into a database, an approach similar to Energize, and you could edit C++ code just like in a Smalltalk environment.
Sadly not much of it has survived, so save pages like this while they exist
The problem with those two products was that they were ahead of their time, so with their hardware requirements they ended up being commercial failures.
Then, for something more down to earth, there is C++ Builder, the only actual C++ RAD tooling, but its market is only deep-pocketed corporations. Think the VB experience, but using C++ instead.
Finally there is VC++, which also supports edit-and-continue with incremental compilation and linking. With C++/CX I thought it could eventually be like C++ Builder, but I was wrong. And with the C++/WinRT reboot, the tooling is still playing catch-up with C++/CX.
However there is also the option to use interpreters like Cling.
Then you also have the development experience of Common Lisp, Dylan and Eiffel.
All of those have rich IDEs, with interpreter/JIT compilers for development, and then you can also AOT optimize for release builds.
Eiffel to this day uses its JIT (or MELT, as they call it) for rapid development, and then relies on a C or C++ compiler for release builds.
Dylan, while killed by Apple, had a short commercial life, and you can still get hold of its open-source variant, although the IDE is Windows only.
Common Lisp experience is available on LispWorks and Allegro CL.
I know! we need differentiable compilers! :)
I'm surprised that you consider compilation to be a big productivity limitation (at least where current rust or C++ compilers at low optimization levels aren't enough). A faster computer with more cores addresses compilation speed, but also makes tests run fast. :)
Supermicro sells boxes with up to 4 dual socket zen2 boards (512 cores!) that fit in 2U, e.g. 2124BT-HNTR ... :P and you can stick a bunch of developers remotely connected concurrently compiling on it.
It's almost magical, DESPITE Scala being slow as a snail at compilation.
The Scala Metals LSP server also has better-than-enterprise-grade documentation. https://scalameta.org/metals/docs/editors/vim.html
I wish this would be a higher priority generally, developer time being as expensive as it is.
At the same time, I know how hard it is to estimate projects.
As a user, I can say it's pretty good and you should use it today if you're writing Rust code. So to me, it doesn't matter whether the docs recommend it, or if it's distributed by default with the compiler. I just use it and tell everyone I know to use it.
And then there's Delphi, doing both.
No, not at all.