When you do this, you have to live by the combined constraints of both transports. That's not necessarily wrong, but having separate mechanisms, each optimized for the specific constraints of its own transport, is also valid.
Let me give a more concrete example. We have protocols designed to take advantage of shared memory, allowing DMA directly from the hardware into the pages backing the file you end up reading from. There are many layers/processes between my program and the driver that ultimately talks to the hardware, so being able to bypass copies of the data is useful. That protocol encodes the expectation of shared memory directly into its design. We later built a mechanism to take existing protocols and use them across network boundaries, but we were unable to use any of the protocols that relied on shared memory. It may be possible to emulate shared memory across a network boundary, but it's hard to do so performantly in all cases. So rather than modify the existing protocol to avoid shared memory, which would hurt the existing use cases, we opted to create a second protocol optimized for the network use case.
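To make the shape of that tradeoff concrete, here's a rough sketch with hypothetical types (not our actual protocol definitions): the local variant replies with a handle into shared pages, while the network variant has to carry the bytes in the message itself.

```rust
/// Stand-in for an OS shared-memory handle (purely illustrative).
struct SharedMemHandle {
    offset: usize,
    len: usize,
}

/// Local reply: zero-copy, but only meaningful when client and server
/// can actually share the pages the hardware DMAs into.
enum LocalReadReply {
    Region(SharedMemHandle),
}

/// Network reply: the data travels inside the message, so it works across
/// machine boundaries at the cost of copies.
enum RemoteReadReply {
    Bytes(Vec<u8>),
}

fn main() {
    // The same logical "read" has two encodings with different constraints.
    let local = LocalReadReply::Region(SharedMemHandle { offset: 0, len: 4096 });
    let remote = RemoteReadReply::Bytes(vec![0u8; 4096]);

    match local {
        LocalReadReply::Region(r) => {
            println!("local: {} bytes at offset {} via shared memory", r.len, r.offset)
        }
    }
    match remote {
        RemoteReadReply::Bytes(b) => println!("remote: {} bytes copied over the wire", b.len()),
    }
}
```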
There are more of these sorts of examples I could enumerate if you find it worthwhile.
It should be possible to encode those constraints as part of the language. For example, the distinction between sync and async calls can be represented by having a Future type and wrapping the return type in it.
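As a minimal sketch of that idea in Rust (nothing framework-specific assumed, just the standard-library `Future`): the only difference between the two calls below is whether the return type is wrapped, which is exactly the constraint the signature surfaces to the caller.

```rust
use std::future::Future;

fn read_sync(n: usize) -> Vec<u8> {
    // Caller blocks until the data is ready.
    vec![0u8; n]
}

fn read_async(n: usize) -> impl Future<Output = Vec<u8>> {
    // Caller gets a Future immediately and decides when/where to await it.
    async move { vec![0u8; n] }
}

fn main() {
    let data = read_sync(8);
    // The future needs an executor of some kind to actually be polled.
    let _pending = read_async(8);
    println!("sync read returned {} bytes", data.len());
}
```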
Sync and async are all about cooperatively yielding control flow. In many cases, though, you may want to yield control on an IPC call, or hold onto control on an RPC call. Yielding control depends on the sender's logic; IPC vs. RPC depends on the receiver's characteristics. The two are orthogonal concepts.
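A small sketch of that orthogonality, with purely illustrative names: the caller's choice to block or to yield type-checks against either kind of receiver, so the return type alone doesn't capture the transport's constraints.

```rust
use std::future::Future;

/// Illustrative transports only; the point is that sync-vs-async (the
/// sender's choice) and IPC-vs-RPC (the receiver's location) vary independently.
trait Transport {
    fn send(&self, msg: &[u8]) -> Vec<u8>;
}

struct LocalIpc;  // e.g. a shared-memory channel on the same machine
struct RemoteRpc; // e.g. a socket to another machine

impl Transport for LocalIpc {
    fn send(&self, msg: &[u8]) -> Vec<u8> {
        msg.to_vec()
    }
}

impl Transport for RemoteRpc {
    fn send(&self, msg: &[u8]) -> Vec<u8> {
        msg.to_vec()
    }
}

// Sender decides to hold onto control...
fn call_blocking<T: Transport>(t: &T, msg: &[u8]) -> Vec<u8> {
    t.send(msg)
}

// ...or to yield it, independent of which transport sits underneath.
fn call_async<T: Transport>(t: T, msg: Vec<u8>) -> impl Future<Output = Vec<u8>> {
    async move { t.send(&msg) }
}

fn main() {
    // All four combinations type-check.
    let a = call_blocking(&LocalIpc, b"hi");
    let b = call_blocking(&RemoteRpc, b"hi");
    let _c = call_async(LocalIpc, b"hi".to_vec());  // poll with any executor
    let _d = call_async(RemoteRpc, b"hi".to_vec());
    println!("{} and {} bytes echoed synchronously", a.len(), b.len());
}
```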