Aren't you just describing traditional RPC calls? There are many tools for this: DBus on Linux, Microsoft RPC on Windows, and more that I'm not aware of.
If you've only got a basic IPC system (say, Unix domain sockets), then you could stream a standard serialization format across them (MessagePack, Protobuf, etc.).
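Roughly, something like this (a Python sketch, assuming the third-party msgpack package; the socket path and message shape are made up for illustration):

```python
import socket
import msgpack

SOCKET_PATH = "/tmp/app.sock"  # hypothetical path for this sketch

def send_request(payload: dict) -> dict:
    """Send one MessagePack-encoded request and read one reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall(msgpack.packb(payload, use_bin_type=True))
        sock.shutdown(socket.SHUT_WR)  # signal end of request

        # MessagePack objects are self-delimiting, so the streaming Unpacker
        # can recover whole messages from arbitrary read chunks.
        unpacker = msgpack.Unpacker(raw=False)
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            unpacker.feed(chunk)
            for reply in unpacker:
                return reply
    raise ConnectionError("server closed the socket without replying")

# Example call, assuming the server understands this message shape:
# send_request({"method": "resize", "args": {"width": 800}})
```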
To your idea of gracefully moving to a network-distributed system: If nothing else, couldn't you just actually start with gRPC and connect to localhost?
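As a sketch with the grpcio package, assuming a service already defined in a .proto file and compiled into hypothetical image_pb2/image_pb2_grpc modules:

```python
import grpc

# Hypothetical modules generated by protoc from your .proto definition.
import image_pb2
import image_pb2_grpc

# Start against localhost; moving the server to another machine is "just"
# a different target string.
channel = grpc.insecure_channel("localhost:50051")
stub = image_pb2_grpc.ImageServiceStub(channel)

reply = stub.Resize(image_pb2.ResizeRequest(width=800, height=600))
print(reply)
```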
When you start with gRPC and connect to localhost, usually the worst that can happen with an RPC call is that the process crashes and your call eventually times out.
Other than that, everything seems to work like a local function call.
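With gRPC that failure mode shows up as a deadline: you set a timeout on the call and catch the error, and otherwise it reads like a local function. A sketch, again assuming hypothetical generated image_pb2/image_pb2_grpc modules:

```python
import grpc
import image_pb2       # hypothetical generated module
import image_pb2_grpc  # hypothetical generated module

channel = grpc.insecure_channel("localhost:50051")
stub = image_pb2_grpc.ImageServiceStub(channel)

try:
    # A 2-second deadline: if the server process has crashed or is
    # unreachable, the call fails here instead of hanging forever.
    reply = stub.Resize(
        image_pb2.ResizeRequest(width=800, height=600),
        timeout=2.0,
    )
except grpc.RpcError as err:
    if err.code() == grpc.StatusCode.DEADLINE_EXCEEDED:
        print("server did not answer before the deadline")
    elif err.code() == grpc.StatusCode.UNAVAILABLE:
        print("could not reach the server at all")
    else:
        raise
```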
Now when you move the server onto another computer, maybe it didn't crash at all; it was just a network hiccup, and the reply arrives after the calling process has already timed out and stopped waiting. Or you make two asynchronous calls, but thanks to network latency and how the packets happen to arrive, they get processed out of order.
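A common mitigation is to tag every request with a correlation ID and a sequence number, so late or reordered replies can be detected and dropped. A rough, framework-agnostic sketch:

```python
import itertools
import uuid

_seq = itertools.count(1)
_pending: dict[str, int] = {}   # correlation id -> sequence number
_last_applied_seq = 0

def make_request(payload: dict) -> dict:
    """Tag the outgoing message so its reply can be matched and ordered."""
    request_id = str(uuid.uuid4())
    seq = next(_seq)
    _pending[request_id] = seq
    return {"id": request_id, "seq": seq, **payload}

def handle_reply(reply: dict) -> bool:
    """Apply a reply only if the caller is still waiting and it is not stale."""
    global _last_applied_seq
    seq = _pending.pop(reply.get("id"), None)
    if seq is None:
        return False            # caller already timed out: ignore the late reply
    if seq < _last_applied_seq:
        return False            # an older request answered after a newer one
    _last_applied_seq = seq
    return True
```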
Or eventually one server isn't enough for the load and you decide to add another one, so you put some kind of load-balancing mechanism in place, but now you also have to take care of unprocessed messages that one of the nodes had taken responsibility for, and so forth.
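That last part is usually handled with acknowledgements plus redelivery: a message only disappears once a node confirms it finished, and unacknowledged work goes back in the queue after a timeout. A toy in-memory sketch of the idea (a real setup would lean on a broker such as RabbitMQ or SQS):

```python
import time
from collections import deque

class WorkQueue:
    """Toy queue with at-least-once redelivery of unacknowledged messages."""

    def __init__(self, visibility_timeout: float = 30.0):
        self._ready = deque()
        self._in_flight = {}            # msg_id -> (message, deadline)
        self._timeout = visibility_timeout

    def put(self, msg_id: str, message: dict) -> None:
        self._ready.append((msg_id, message))

    def take(self):
        """Hand a message to a worker node; it stays in flight until acked."""
        self._requeue_expired()
        if not self._ready:
            return None
        msg_id, message = self._ready.popleft()
        self._in_flight[msg_id] = (message, time.monotonic() + self._timeout)
        return msg_id, message

    def ack(self, msg_id: str) -> None:
        """The node finished the work: drop the message for good."""
        self._in_flight.pop(msg_id, None)

    def _requeue_expired(self) -> None:
        """A node took a message but never acked (crash? network?): redeliver."""
        now = time.monotonic()
        for msg_id, (message, deadline) in list(self._in_flight.items()):
            if deadline < now:
                del self._in_flight[msg_id]
                self._ready.append((msg_id, message))
```

Note that redelivery means a message can be processed twice, which is why handlers in such systems need to be idempotent.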
There is a reason why there are so many CS books and papers on distributed systems.
Using distributed systems as a mitigation for teams that don't understand how to write modular code only escalates the problem: you move from spaghetti calls in process to spaghetti RPC calls, and now you have to handle network failures on top of it.
Is there something I'm missing?