I think you may be underestimating the feasibility of what you propose.
This whole LSP thing is a mind-bogglingly bad idea, brought to you by the same kinds of thought processes that created the disaster that is today’s WWW.
Furthermore, many languages ship with a runtime which does not play nice with other runtimes. Try running a JVM, .NET VM, Golang runtime, and Erlang BEAM VM in the same process and see what happens. Better yet, try debugging it.
The other side of that is that we're not in the 90s with single-core processors, and there's been a lot of hardware & software optimization since then. Most people can run something like VSCode plus a language server and still have multiple cores left over. And since the events that trigger language server interactions are generally produced by a human at a keyboard, you don't need to be in the microsecond range to keep up.
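As a back-of-envelope check on that point (the numbers below are assumptions, not measurements):

```python
# Rough arithmetic, not a benchmark: even a fast typist leaves far more
# time between keystrokes than a localhost IPC round trip costs.
chars_per_second = 10                       # ~120 wpm, assumed
keystroke_gap_ms = 1000 / chars_per_second  # ~100 ms between edit events
ipc_round_trip_ms = 0.5                     # generous estimate for local JSON-RPC
print(f"headroom: ~{keystroke_gap_ms / ipc_round_trip_ms:.0f}x")
```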
A PC full of programs built with these assumptions will probably grind to a halt all the time despite having very high-spec hardware.
I mean, taking your argument seriously would mean everything should be hand-tuned assembly. Obviously we figured out that other factors like time to write, security, portability, flexibility, etc. matter as well and engineering is all about finding acceptable balances between them. Microsoft has been writing developer tools since the 1970s and in the absence of actual evidence I’m going to assume they made a well-reasoned decision.
While you're right that productivity versus performance is a trade-off, and an editor is not necessarily a high-performance application, it's not clear to me whether future optimizations will narrow the gap as much as optimizing compilers did for C versus assembly.
In any case, that aside, whether LSP delivers its core promise of software stability remains to be seen.
I don't follow this conclusion: haven't we already seen it with the way language servers crash and are just restarted without other side effects?
> Do you have profiler data showing that the microseconds needed to pass a message between a process is a significant limiting factor
What? I think you didn't get my point. Let me try again.
You can look at a single operation and say "oh, that's nothing, it's so cheap, it only takes a millisecond", even though there's a way to do the same thing in far less time.
So this kind of measurement gives you a rationale to do things the "wrong" way, or shall we say the "slow" way, because you deem the cost insignificant.
Now imagine that everything in the computer is built that way.
Layers upon layers of abstractions.
Each layer made thousands of decisions with the same mindset.
The mindset of sacrificing performance because "well it's easier for me this way".
And it's exactly because of this mindset that you end up with a supercomputer that's doing busy work all the time. You'd think every program on your machine would start instantly because the hardware is so advanced, but nothing behaves that way. Everything is still slow.
This is not really fear mongering, this is basically the state of software today. _Most_ software runs very slow, without actually doing that much.
The technologies that win are the ones that account for that.
But then maybe the answer should be to provide better wrappers for invoking C from high-level languages, not to use HTTP instead of C.
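For what it's worth, a minimal sketch of what such a wrapper looks like on the high-level side, using Python's ctypes as a stand-in (the library lookup assumes a Unix-style libm, and the specific function is just illustrative):

```python
import ctypes
import ctypes.util

# In-process C call via ctypes: no serialization, no extra process,
# just the usual FFI call overhead.
libm = ctypes.CDLL(ctypes.util.find_library("m"))  # assumes a Unix libm
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```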
How do you support IntelliSense for a language once and have it work across many editors?
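To make the "once, many editors" point concrete, here is a rough sketch of the wire format every LSP client speaks: JSON-RPC 2.0 framed with a Content-Length header, typically over stdio (the URI and position below are made up for illustration):

```python
import json
import sys

# A textDocument/completion request as an editor would frame it for a
# language server. Any editor that can produce this framing can reuse
# the same server implementation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///tmp/example.py"},  # illustrative path
        "position": {"line": 0, "character": 5},
    },
}
body = json.dumps(request)
sys.stdout.write(f"Content-Length: {len(body.encode('utf-8'))}\r\n\r\n{body}")
```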
You are simulating a condition that will almost never happen unless your machine itself is dying in which case you have bigger problems than auto-completion to worry about.
Sure, you could do some hyper-complex setup with multiple VMs etc... or you could just run one command to temporarily increase latency on localhost.
If you know some better way of doing this, feel free to tell me.
Setting up a new interface is pretty easy on most Unix OSes, and then you can increase the latency on just that interface without impacting the interface that most software on your machine expects to be blazing fast. And you can mess with that interface to your heart's content, knowing the only thing affected is the applications you are running on it.
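For concreteness, a rough sketch of that setup on Linux using standard iproute2/tc commands (needs root; the interface name and delay value are arbitrary choices for illustration):

```python
import subprocess

# Create a throwaway dummy interface and add artificial latency to it
# with netem, leaving localhost and the real interfaces untouched.
commands = [
    "ip link add lsp-test type dummy",
    "ip addr add 192.0.2.1/24 dev lsp-test",            # TEST-NET-1, safe for experiments
    "ip link set lsp-test up",
    "tc qdisc add dev lsp-test root netem delay 50ms",
]
for cmd in commands:
    subprocess.run(cmd.split(), check=True)

# Clean up afterwards with:
#   ip link del lsp-test
```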
I'd claim that for most languages you could define a formal grammar in EBNF that would provide some basic utility.
Such a grammar would probably be overly permissive (it doesn't know anything about references, types or host environments) but it would provide you a baseline for syntax checking and autocompletion and could generate an AST that more language-specific rules could evaluate.
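As a rough illustration of that baseline, here's a toy grammar written with Lark (one Python parsing library among many; nothing in the thread prescribes it). It knows nothing about types or scoping, but it already catches syntax errors and produces a tree that later, language-specific passes could inspect:

```python
from lark import Lark  # third-party parsing library, one possible choice

grammar = r"""
    ?start: expr
    ?expr: expr "+" term -> add
         | term
    ?term: NUMBER        -> number
         | NAME          -> name

    %import common.CNAME -> NAME
    %import common.NUMBER
    %import common.WS
    %ignore WS
"""

parser = Lark(grammar)                        # Earley parser by default
print(parser.parse("width + 42").pretty())    # prints the parse tree
# parser.parse("width +") raises a syntax error, which is the "basic utility" part
```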
In particular, having ported the POSIX grammar to ANTLR, I know that using a BNF is not useful for shell.
(Some programming languages are context-sensitive, some programming languages allow ambiguity in the main grammar and have precedence rules or "tie-breakers" to deal with those situations.)
"Universal grammar engines" and "Universal ASTs" are wondrous dreams had by many academics and like the old using RegEx in the wrong place adage: now you have N * M more problems to solve.
I think the OP is looking for something more like this. This paper is from the same author as Nix, so he has some credibility. But I think this paper is not well written, and I'm not sure about the underlying ideas either. (The output of these pure declarative parsers is more complicated to consume as far as I remember.)
Still, the paper does show how much more there is to consider than "BNF". Thinking "BNF" will solve the problem is a naive view of languages and, as you point out, is very similar to the problem of not understanding what languages "regexes" can express.
> Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions.