OSC is cool, but I'd say MIDI is often better if networks aren't being used, simply because the routing and connections can be centrally managed with JACK or PipeWire or the like.
If networks are being used, MQTT is my method of choice. It is so much easier to develop and debug when everything is in one place and you have a broker.
Even if there's no network needed, I'd much rather see something embed an MQTT broker than a plain OSC socket, because reliability is often desirable, you often want to know if you are disconnected, etc. MQTT gives you a lot of that stuff for free.
Plus, it gives you retained messages and last will and testament support.
None of these are really perfect for all use cases though. I wish we had something like MQTT but for streaming media.
I've been using an internal MQTT broker as a loose IPC bus on my latest embedded Linux project and it's been awesome. I'm never using DBus again.
The revelation was when I had a developer overseas that needed to test something and didn't have hardware. One firewall change and an exposed port and now he was talking to a live target from halfway around the world.
The only place it kind of falls down as an IPC is confirmation that a specific subscriber saw a message you published. That's fixable with a little bit of shim code to send back an ACK, and that's something I can live with. The protocol wasn't really designed to be a round-trip communication system.
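The ACK-shim idea can be sketched in a few lines. This is a toy in-memory stand-in for the broker (a real deployment would use an MQTT client library); the topic layout (`ipc/cmd/...`, `ipc/ack/<corr>`) and the `corr` correlation field are made up for illustration:

```python
import uuid

# Toy in-memory pub/sub standing in for an MQTT broker, just to show the
# request/ACK pattern. Topic names and payload fields are illustrative.
class FakeBroker:
    def __init__(self):
        self.subs = {}  # topic -> list of callbacks

    def subscribe(self, topic, cb):
        self.subs.setdefault(topic, []).append(cb)

    def publish(self, topic, payload):
        for cb in self.subs.get(topic, []):
            cb(topic, payload)

broker = FakeBroker()
acks = {}

# Subscriber side: handle the command, then publish an ACK on a per-request
# reply topic, echoing the correlation id so the publisher can match it up.
def on_command(topic, payload):
    corr = payload["corr"]
    broker.publish(f"ipc/ack/{corr}", {"corr": corr, "status": "ok"})

broker.subscribe("ipc/cmd/reboot", on_command)

# Publisher side: attach a fresh correlation id and wait for the matching ACK.
def send_with_ack(topic, body):
    corr = str(uuid.uuid4())
    broker.subscribe(f"ipc/ack/{corr}", lambda t, p: acks.__setitem__(corr, p))
    broker.publish(topic, {**body, "corr": corr})
    return acks.get(corr)  # synchronous here; real code would wait with a timeout

result = send_with_ack("ipc/cmd/reboot", {"delay": 0})
```

With a real broker the publish/ACK round trip is asynchronous, so the publisher side would block on the reply topic with a timeout rather than reading the result immediately.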
CV has zero semantics, and so it can't really be compared to MIDI.
What a given CV "signal" does depends on its destination as much as its source, and in that respect, is somewhat similar to OSC, despite being more "universal" yet simultaneously "semantic free".
Semantics sometimes seem to get in the way unless they are fully specified into well-defined profiles. Which is amazing when they are, in the form of "any port of this type can meaningfully connect to any other port and it does what you think it should".
But a lot of the time you wind up with something that STILL needs config on both ends, because programmers hate arbitrary choices and always like to leave room for omitting features or simplifying later.
And of course, programmers also like to break things, so even if semantics DO work, they won't hesitate to make v2.
As much as I really love modular pluggable systems, it seems like semantics free things are unfortunately easier to build an ecosystem around.
Nobody gets excited for a chart of data types it seems.
Many of the MIDI CC's have defined semantics, and another bunch do not. Many MIDI receivers choose to ignore those defined semantics (e.g. bind "volume control" messages to some arbitrary parameter). If you choose to ignore the semantics for the defined CC's, or only use the undefined ones, then MIDI CC does become more like CV. However, that's not the intention of the MIDI specification, and it certainly doesn't cover the rest of the MIDI specification, which dwarfs the CC part.
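To make the "defined semantics" point concrete: CC 7 is Channel Volume in the MIDI 1.0 spec, and a Control Change message is just three bytes on the wire. A minimal sketch:

```python
# Build a MIDI 1.0 Control Change message: status byte 0xB0 OR'd with the
# channel (0-15), then controller number and value (both 0-127).
CC_CHANNEL_VOLUME = 7  # one of the CCs with defined semantics in the spec

def control_change(channel, controller, value):
    assert 0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127
    return bytes([0xB0 | channel, controller, value])

msg = control_change(channel=0, controller=CC_CHANNEL_VOLUME, value=100)
# -> three bytes: 0xB0 0x07 0x64
```

Nothing in the wire format enforces the semantics, of course, which is exactly why receivers can (and do) bind CC 7 to any parameter they like.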
CV (Control Voltage) in the world of analog modular synthesizers is typically any signal within the bounds of the systems power supplies (e.g. -12 to +12 Volts for Eurorack).
CV can typically also be audio (AC), gates (DC), triggers (short DC pulses), or modulation (slowly changing voltages) — or any weird mixture of those.
The difference from "normal" audio inputs is typically just that audio inputs roll off low frequencies roughly at the edge of the audible range, while CV inputs can also accept DC.
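The AC coupling on an audio input is essentially a high-pass filter. A toy one-pole version (coefficient chosen arbitrarily) shows why a DC-coupled CV input matters: feed a constant gate voltage through AC coupling and the DC information bleeds away:

```python
# One-pole high-pass filter, a digital analogue of the AC-coupling capacitor
# on an audio input. A DC-coupled CV input would simply skip this stage.
def ac_couple(samples, a=0.995):
    out, y, x_prev = [], 0.0, 0.0
    for x in samples:
        y = a * (y + x - x_prev)
        x_prev = x
        out.append(y)
    return out

# Feed a constant +5 V "gate": the filtered output decays toward zero,
# i.e. the steady DC level is lost after AC coupling.
filtered = ac_couple([5.0] * 2000)
```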
Another thing: there are certain interface standards like e.g. oscillators accepting 1V/Octave inputs.
That means most Eurorack modules that produce tones have a calibrated input: route in precisely 1 V and the tone goes up an octave, -4 V and it goes down four octaves, 1/12 V and it goes up a semitone. Values in between give you microtonal changes, vibrato, pitch bend, and similar things.
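The 1V/Oct arithmetic is linear in semitones, and the resulting frequency ratio is an exponential of the voltage. A small sketch:

```python
# 1V/octave: one volt per 12 semitones, so a semitone is 1/12 V.
def semitones_to_volts(semitones):
    return semitones / 12.0

def volts_to_ratio(volts):
    # Frequency ratio produced by a CV offset: doubling per volt.
    return 2.0 ** volts

octave_up = semitones_to_volts(12)     # 1.0 V
semitone_up = semitones_to_volts(1)    # ~0.0833 V
four_octaves_down = semitones_to_volts(-48)  # -4.0 V
```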
Years ago I co-developed a C library to connect OSC endpoints together in a decentralized manner and semantically translate messages with arbitrary operations such as scaling and filtering. There are still a few users and developers.
Thanks! One feature that we developed, called "instances", is actually quite interesting; we even ended up publishing a paper just on that topic [1]. It's a proposed way to design mappings that handles the idea of signals that "appear and disappear", or might have multiple instances, as in the context of a multitouch display. So instead of mapping a specific signal (e.g. the position of an active touch), you map a signal "class" which can be "instantiated", and the receiving side can handle this automatically, e.g. a synthesizer can allocate or deallocate voices in response. The protocol handles all events and corner cases, such as the two sides having asymmetric resources, or the associated instances along the chain having asymmetric lifetimes, etc.
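A much-simplified sketch of what the receiving side of such an "instance" scheme might look like: touch instances appear and disappear, and a fixed voice pool allocates and releases in response. All names here are hypothetical, and this deliberately ignores the corner cases (asymmetric resources, asymmetric lifetimes) the real protocol handles:

```python
# Toy receiver for an "instanced" signal: each remote instance id gets a
# voice from a fixed pool while it is alive.
class VoicePool:
    def __init__(self, n_voices):
        self.free = list(range(n_voices))
        self.active = {}  # instance id -> voice number

    def instance_update(self, inst_id, value):
        """Called when an instance appears or changes value."""
        if inst_id not in self.active:
            if not self.free:
                return None  # more instances than voices: drop (or steal)
            self.active[inst_id] = self.free.pop()
        voice = self.active[inst_id]
        # ...drive synthesis voice `voice` with `value` here...
        return voice

    def instance_release(self, inst_id):
        """Called when an instance disappears (e.g. touch lifted)."""
        voice = self.active.pop(inst_id, None)
        if voice is not None:
            self.free.append(voice)
        return voice

pool = VoicePool(n_voices=2)
v1 = pool.instance_update("touch-1", 0.3)
v2 = pool.instance_update("touch-2", 0.7)
v3 = pool.instance_update("touch-3", 0.5)  # pool exhausted -> None
pool.instance_release("touch-1")
v4 = pool.instance_update("touch-3", 0.5)  # now gets the freed voice
```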
This is one thing that we feel is quite unique to this solution and does something that MIDI, or other protocols that we know of, really cannot do (well).
1. There are no standard semantics for the messages. With MIDI, if you want to tell something that makes noise to start making noise, you send it a NoteOn message. While that may or may not do precisely what you want, the semantics are clear. With OSC, no such message exists. Every OSC receiver defines its own set of messages with their own semantics.
2. String parsing. It really does cost quite a lot to parse and dispatch OSC, because everything is string based. You can argue that it's worth it because of the flexibility, and in many cases, that's probably the right verdict. It doesn't get rid of the cost, however.
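To make the parsing cost concrete, here is a minimal hand-rolled OSC 1.0 message encoder/decoder (int32, float32, and string arguments only): every dispatch has to walk null-padded strings before it ever touches the payload.

```python
import struct

def _pad_str(s):
    # OSC strings are null-terminated and padded to a multiple of 4 bytes.
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def _read_str(buf, off):
    end = buf.index(b"\x00", off)
    s = buf[off:end].decode()
    return s, off + ((end - off) // 4 + 1) * 4

def osc_encode(address, *args):
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"; payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"; payload += struct.pack(">f", a)
        else:
            tags += "s"; payload += _pad_str(a)
    return _pad_str(address) + _pad_str(tags) + payload

def osc_decode(buf):
    # Address pattern and type tag string first, then the binary arguments.
    address, off = _read_str(buf, 0)
    tags, off = _read_str(buf, off)
    args = []
    for t in tags[1:]:
        if t == "i":
            args.append(struct.unpack_from(">i", buf, off)[0]); off += 4
        elif t == "f":
            args.append(struct.unpack_from(">f", buf, off)[0]); off += 4
        elif t == "s":
            s, off = _read_str(buf, off); args.append(s)
    return address, args

msg = osc_encode("/synth/freq", 440, "sine")
addr, args = osc_decode(msg)
```

A real dispatcher then still has to pattern-match the address string against its method tree, which is where most of the per-message cost goes.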
You may find this interesting (at least once it gets a little more developed again): https://github.com/wires/wtf type-checked JSON w/ custom sum and product types
I recently migrated some services to uasyncio with Python. I was most impressed by how much code I was able to delete compared to my old implementation. I have yet to try the same approach with Deno (the next-generation Node.js).
Unix domain sockets are comparably performant to shared memory without the need for semaphores and locks. Of course, it only works on single node deployments. UDP/TCP based protocols are usable on multi-node deployments.
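A minimal same-host sketch with Python's stdlib: `socketpair` returns a connected pair of Unix domain sockets, and the kernel serializes the byte stream, so no application-side locking is needed. (For unrelated processes you would bind/connect a filesystem path instead.)

```python
import socket

# A connected pair of Unix domain sockets (AF_UNIX on POSIX systems).
parent, child = socket.socketpair()

# IPC without shared-memory locking: the kernel handles synchronization.
parent.sendall(b"hello from parent")
received = child.recv(1024)

child.sendall(b"ack")
reply = parent.recv(1024)

parent.close()
child.close()
```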
Indeed. Also, when a lot of data needs to be transferred with low latency between two programs on the same computer, a shared-memory approach could be more attractive. Can you transmit a huge block of data in OSC in one go, or is it attribute/value pairs?