Sure: DCOM, CORBA, RPC, SOAP. The lesson I learned is that you can't abstract away the network barrier; on the contrary, that barrier must be the central point in the design of the storage and communication components of the app.
I suspect that previous attempts haven't been successful for a number of reasons, but especially because they tend to be synchronous. Node is a great platform for asynchronous RPC because its asynchronous character already forces the programmer to think about where the high-latency barriers are. I ran into this problem with DRb in Ruby for a job-queuing system and had to write my own thread pool and polling system to compensate. Handing a callback off to your remote method does seem to work reasonably well at addressing these common RPC pitfalls.
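To illustrate, here's a minimal TypeScript sketch of that callback style; the JobQueue interface and every name in it are made up for the example, not any real library's API:

```typescript
// A callback-passing remote method, with the "remote" side faked by a
// timer so the sketch actually runs.
type Callback<T> = (err: Error | null, result?: T) => void;

interface JobQueue {
  enqueue(job: string, done: Callback<string>): void;
}

// Fake "remote" queue; a real one would go over the wire here.
const queue: JobQueue = {
  enqueue(job, done) {
    setTimeout(() => done(null, `job-${job}-42`), 100);
  },
};

// The callback makes the latency barrier explicit: the caller cannot
// pretend the result is available synchronously, and the failure path
// is right there at the call site.
queue.enqueue("resize-image", (err, jobId) => {
  if (err) {
    console.error("enqueue failed:", err.message);
    return;
  }
  console.log("queued as", jobId);
});
```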
"I suspect that previous attempts haven't been successful for a number of reasons but especially because they tend to be synchronous."
No. Node.js makes no special contribution to asynchronicity, which has been in common use for something like twenty years now. Every UI-based program you have ever used is asynchronous-event based, even if it has synchronous components, to say nothing of the many, many other Node.js-like libraries written over the past decade in other environments, many with more capable languages than JavaScript. None of these has been able to fix RPC, and many of the biggest and best-funded attempts, like CORBA and COM, came from smack in the middle of some of the largest piles of event-based code in the world.
The reason pretending a network transaction is a function call fails has to do with the different semantics of a function call versus a network transaction. Latency differences, even when everything is working correctly, are certainly a factor, but they are really less interesting than the unreliability of the network and the fact that you are crossing a semantic boundary. Every network transaction can fail. Every network transaction might succeed, but with unacceptable latency. Every network transaction might initially succeed but cut off halfway through, or dribble its results in one byte at a time, or send you a gigabyte unexpectedly, or hit any of a variety of other failure cases you must at least be ready for, even if you can't "handle" them (because in some cases there is no "handling" them).

Furthermore, every network transaction incurs a serialization step, in which the semantics of the local program must be re-imposed on the data, possibly unsuccessfully. For instance, you might get back JSON specifying an integer greater than 5 billion where your environment is still only 32-bit; no local function call can do that. (Which isn't to say local function calls are therefore perfect; it's just that their failures lie in other places. Your overlarge number might get truncated somewhere else in your program, but it wasn't a function call that did it, it was some actual math computation somewhere. The point is that a different sort of semantic failure can occur with an RPC versus a local function call, and this inevitably leaks out of any abstraction you try to wrap around the RPC.) And that is just one tiny example, not the totality of the issues you can encounter, the vast majority of which are far more subtle.
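To make the serialization point concrete: the example above assumes a 32-bit environment, but here is the analogous failure in JavaScript/TypeScript, where integers beyond Number.MAX_SAFE_INTEGER (2^53 - 1) silently lose precision during JSON deserialization:

```typescript
// A perfectly valid integer on the wire that the local number type
// cannot represent exactly.
const wire = '{"accountId": 9007199254740993}'; // 2^53 + 1

const parsed = JSON.parse(wire);
console.log(parsed.accountId);                       // 9007199254740992
console.log(Number.isSafeInteger(parsed.accountId)); // false

// No local function call introduces this corruption; it comes purely
// from crossing the serialization boundary.
```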
RPC (where "procedure" is an old synonym for function) should by design not deal with these issues if it's really going to be "RPC", because by definition of RPC it really ought to look like a function call. Therefore it must handle all of these issues implicitly, and since no one answer is adequate for all cases, usually incorrectly. You can't seal over network issues anywhere near well enough to make dealing with a network as easy as dealing with a function. You must deal with latency issues, but again, these are ultimately the least interesting aspect. You must be able to deal with serialization and semantic issues; if your language forced you to deal with that on every function call you'd never use it. You must have some way of dealing with the various network failures and even throwing various appropriate exceptions only gets you a subset of the actions you might actually want. You must, inevitably, allow these things to poke through somewhere, at which point your are "configuring" your RPC call, at which point it is really no longer a "procedure call" at all, it's something else.
That RPC has historically imposed an additional point of synchronous behavior in some cases is because languages up to this point have also been largely synchronous, but that has nothing to do with RPC's far more fundamental failures as a network communication metaphor. It is also the case that if you've absorbed too much Node.js hype, you may underestimate the world's understanding of the problem; take a moment to search for "asynchronous COM", for instance. The first hit I get is an article from April of 2000. The synchrony problem is easily and trivially solved by turning RPC calls into futures instead, which has been done, and it has not salvaged RPC, because synchrony is not RPC's core problem. There are other solutions too, which also don't work.
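For instance, here's a minimal sketch of the futures approach (all names invented for illustration): wrapping the call in a Promise removes the blocking, but every network failure mode still has to surface through it.

```typescript
// A "remote call" that is sometimes slow and sometimes broken,
// simulated with timers so the sketch runs.
function rpcAsFuture(method: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const latencyMs = Math.random() * 2000;
    setTimeout(() => {
      if (Math.random() < 0.3) reject(new Error("connection reset"));
      else resolve(`${method}: ok`);
    }, latencyMs);
  });
}

// The caller is no longer blocked, but deadlines, retries, and partial
// failure still must be handled explicitly: the future fixed the
// synchrony, not the semantics.
const deadline = new Promise<never>((_, reject) =>
  setTimeout(() => reject(new Error("deadline exceeded")), 1000)
);

Promise.race([rpcAsFuture("getUser"), deadline])
  .then((result) => console.log(result))
  .catch((err: Error) => console.error("still must handle:", err.message));
```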
Regrettably, calling things over a network must be more complicated than a local procedure call; all attempts to make it otherwise have indeed failed to live up to their promises.
I feel bad for you having typed this well-reasoned argument only to have one person appreciate it (besides me; I didn't need any convincing to start with).
A couple of things I'd like to add: versioning problems (over the network, old and new versions of code talk to each other, which almost never happens with local calls), and the time-travel problem, where an operation succeeds only to have all of its effects reversed because the remote server failed and was restored from a day-old backup. A quick sketch of the versioning hazard follows.
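Here the shapes and payload are invented purely for illustration: a v2 server renames a field, and a v1 client deserializes silently wrong data.

```typescript
// What the v1 client was compiled against.
interface UserV1 {
  name: string;
  email: string;
}

// The v2 server split "name" into two fields, so the wire format changed.
const v2Payload =
  '{"firstName":"Ada","lastName":"Lovelace","email":"ada@example.com"}';

const user = JSON.parse(v2Payload) as UserV1;
console.log(user.name);  // undefined -- no exception, just wrong data
console.log(user.email); // still works, which makes the skew easy to miss
```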
The interesting thing about RPC is that it is so very sexy, and it tempts you to think you can add the error handling later. So it hides 99% of the problems from you under a sexy appearance.