Some added context: MIDI 2.0 is still not widely supported (in hosts, plugins, etc.). It's also absent from 99% of the hardware synths and music controllers you can currently buy, and it's available in 0% of older hardware.
There's also MPE, which adds more expressive capabilities to existing MIDI hosts, plugins, and hardware, and has seen some (still small) adoption. MPE is an extension to MIDI 1.0 that adds polyphonic expressivity, and a lot of companies have added it in an ad-hoc fashion (what it offers stays within the bounds of MIDI 1.0 "extended" modes).
So, unless you're living on the cutting edge, MIDI 2.0 will not be of much use for modern music production for the next few years at least.
That was true a few years ago. The next few years are here :) take another look, you’ll be pleasantly surprised.
E: that sounded like snark, which was definitely not the intention. Ableton supports MPE now, and they were lagging behind. Roger Linn keeps a list of MPE/MIDI2-compatible software and hardware, and every time I revisit it, the list has grown: https://www.rogerlinndesign.com/support/support-linnstrument...
coldtea’s point is that MPE is backwards-compatible with MIDI, but MIDI 2.0 is not.
Linn’s list appears to include MPE devices but I can’t find any mention of MIDI 2.0. I’m not sure why you’re calling this a list of “MPE/MIDI2 compatible” devices.
It sounds like you’re saying that MIDI 2.0 is widely supported; do you have a list of MIDI2 devices on the market?
I'm not a native English speaker but the original sentence was equally clear to me. Can you enlighten me why you had to 'fix' it? Does it now match the rules set by Oxford and Cambridge or something?
I hope this means we can have class-compliant USB MIDI devices with jitter correction and "interrupt driven" transfers (as opposed to the low-priority bulk transfer mode used in USB MIDI 1.0). This should mean way better MIDI clock synchronization at the cost of a bit of latency.
Unsure how much work there is for the DAWs, as they will have to not only use the new APIs but also send MIDI events some milliseconds early to take advantage of the jitter correction (see the sketch below). Hopefully there is some way for the ALSA client to determine this automatically so we don't have to manually twiddle with negative track delay or similar "hacks".
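To make the "send events a few millis early" idea concrete, here's a minimal sketch (Python; `send_event` is a hypothetical stand-in for whatever timestamped-send API the driver/ALSA ends up exposing, and the 5 ms lead is an arbitrary assumption): the DAW dispatches each event slightly ahead of its musical time and attaches the target timestamp so the receiving side can play it back free of transport jitter.

```python
import heapq
import time

LEAD_S = 0.005  # hypothetical 5 ms send-ahead window; a real DAW/driver would negotiate this

def schedule_and_send(events, send_event):
    """events: iterable of (target_time_s, midi_bytes), target in time.monotonic() terms.
    send_event(midi_bytes, target_time_s) stands in for an API that attaches a
    timestamp so the receiver can realign the event and cancel out jitter."""
    heap = list(events)
    heapq.heapify(heap)
    while heap:
        target, msg = heapq.heappop(heap)
        wait = (target - LEAD_S) - time.monotonic()
        if wait > 0:
            time.sleep(wait)      # dispatch early...
        send_event(msg, target)   # ...but tell the receiver when it should actually sound
```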
The idea of MIDI is that you can grab any midi controller and any synthesizer manufactured in the last forty years, plug them together, and they just work. (As long as we can keep up the pretense that we're synthesizing something that behaves like a piano, and we aren't trying to use USB which introduces its own set of problems.)
The idea of OSC is that you can send any kind of message at all from any device to any other device and the semantics of those messages are left as an exercise to the reader. There's no guarantee of interoperability. It's just assumed that if you're using OSC you know what you're doing and it's up to you to make sure both devices are speaking the same dialect.
Sometimes people tunnel what is effectively MIDI over OSC, but I think that's the closest thing to a common standard, beyond the basic low-level data layout that OSC messages are expected to conform to.
I think some people use OSC for various things, but it tends to be for custom bespoke installations that were designed to do something specific.
"Behaves like a piano" doesn't do it justice. Piano never had channel/note aftertouch, pitch bends or whatever you assign to the 128 arbitrary controllers.
The only "piano-related" limitations of MIDI that I've run into are that
a) a note-on message is fixed at just three numbers (note, velocity, channel), so while I can map velocity to, say, filter and envelope settings, I can't easily have independent per-note control of them. It doesn't come up that often, but sometimes I do end up with two slight tweaks of the same patch on two channels and it's a bit of a faff compared to just having more numbers per note event.
b) there is poor support for sending articulation (i.e., a number that changes without sending a new note-on event) that references a held note, because CCs refer to a channel, not an existing note. Poly AT exists but isn't universally supported, and at least if you're using hardware it tends to hog bandwidth.
The tuning is assumed to be 12-tone equal temperament. Other temperaments are possible, but you're kind of off in uncharted territory. Especially if you want a temperament that doesn't use a 12 note repeating octave. (MTS exists but it's not widely supported.)
You can't play a unison without resorting to multiple channels.
No way to do the equivalent of guitar hammer-ons and pull-offs in a polyphonic manner without resorting to multiple channels. (MIDI 2.0 at least adds per-note pitch bend.)
(A lot of MIDI limitations can be gotten around by just using multiple monophonic channels instead of one polyphonic channel, and that's what MPE is -- see the sketch below. Sadly, most of the otherwise amazing analog poly synths out there can only receive on one channel.)
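To make the "multiple monophonic channels" trick concrete, here's a minimal MPE-style sketch in Python building raw MIDI 1.0 bytes (the `send` callback is a placeholder for whatever output port you use, and voice stealing is left out): each note grabs its own member channel, so a pitch bend or CC sent on that channel affects only that note.

```python
# Minimal MPE-style allocation: one note per channel, so per-note pitch bend,
# CC74, and channel pressure become possible with plain MIDI 1.0 messages.
# `send(data: bytes)` is a placeholder for your actual MIDI output.

MEMBER_CHANNELS = list(range(1, 16))  # channels 2-16 (0-indexed); channel 1 is the MPE master

class MpeAllocator:
    def __init__(self, send):
        self.send = send
        self.free = list(MEMBER_CHANNELS)
        self.active = {}  # note number -> member channel

    def note_on(self, note, velocity):
        if not self.free:
            return  # out of member channels; a real implementation would steal a voice
        ch = self.free.pop(0)
        self.active[note] = ch
        self.send(bytes([0x90 | ch, note, velocity]))

    def bend_note(self, note, bend14):
        # per-note pitch bend: only this note's channel (hence this note) is affected
        ch = self.active[note]
        self.send(bytes([0xE0 | ch, bend14 & 0x7F, (bend14 >> 7) & 0x7F]))

    def note_off(self, note):
        ch = self.active.pop(note)
        self.send(bytes([0x80 | ch, note, 0]))
        self.free.append(ch)
```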
There are system-exclusive commands and reserved/undefined event codes for rare situations like that. (Apart from non-12-note systems, and it's kinda bizarre to expect everyone's music standard to cater for something that esoteric).
The 12-tone system captures most small natural number ratios and, as such, covers most non-western note systems as well (allowing for temperament differences).
It's safe to say that less than 0.1% of all music and all composers or performers ever used anything else, and they wouldn't be using synthesizers for that task.
> "The 12-tone system captures most small natural number ratios"
Notably absent are any ratios that involve a power of 7, and ratios that involve a power of 5 are noticeably out of tune.
> "and, as such, covers most non-western note systems as well (allowing for temperament differences)."
Temperament differences are pretty significant.
> "It's safe to say that less than 0.1% of all music and all composers or performers ever used anything else"
I don't know how many composers and performers are interested in Carnatic music, Indonesian gamelan, traditional Turkish music, western music as practiced more than a couple hundred years ago, barbershop harmony, lap steel, slide guitar, Appalachian dulcimer, and so on, but I think 0.1% is very much an undercount.
Even if you're 100% focused on mainstream western music, guitarists will do things like tune their G string 15 cents flat or so because it sounds better for some chords. (I haven't tried that one. I prefer to just replace the fingerboard, re-arranging all the frets so I can play guitar in proper just intonation.)
> "and they wouldn't be using synthesizers for that task."
Well, that's the problem right there. Synthesizers should be the first choice for microtonal music, not the last one. There's no technical reason for synthesizers to be biased towards 12-EDO or any other specific tuning, it's just a historical accident that the protocol we use is one that wasn't designed with any other use case in mind.
>Notably absent are any ratios that involve a power of 7
Except the tritone that's 1% off 7/5.
>ratios that involve a power of 5 are noticeably out of tune
Depending on your definition of "tune". Regardless, pitch bends cover that if needed.
>Temperament differences are pretty significant.
And one's free to implement whatever temperament they want in MIDI; in fact, even consumer pianos allow that. If there were any semblance of demand for it, MIDI would spare an event for it.
>Carnatic music
It shows so much variability between players that it's safe to approximate its microtones in 12 notes, with bends if desired.
>barbershop harmony, lap steel, slide guitar, Appalachian dulcimer
All serviceable in the 12 note system.
Even early pianos were tempered for a particular key until common sense prevailed.
>it's just a historical accident that the protocol we use is one that wasn't designed with any other use case in mind
If entire countries cede their musical theory systems for a foreign one it's not a matter of an accident, it's an indication of superiority.
>There's no technical reason for synthesizers to be biased towards 12-EDO
Yes there is, it's exactly because near 100% of their users want just that. Most midi devices don't use anything past velocity and pitch bends, complicating them to include non-12 note systems would be a total flop.
That's technically true. 7/5 is about 99% of the square root of 2, but in terms of pitch intervals, a difference between 582.52 cents and 600 cents is about 17.49 cents. That's not necessarily terrible -- it's not that much worse than the difference between a just major third and an equal tempered major third, but it's not great either.
Other useful intervals like 7/6 and 7/4 don't really have any close equivalent in 12-edo, though dominant seventh chords do sort of imply a 4:5:6:7 relationship even with the 7 being way off.
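For anyone who wants to check the arithmetic, the size of a frequency ratio in cents is just 1200·log2(ratio); a quick sketch:

```python
from math import log2

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * log2(ratio)

print(round(cents(7 / 5), 2))  # ~582.51, vs the 600-cent 12-EDO tritone
print(round(cents(5 / 4), 2))  # ~386.31, vs the 400-cent 12-EDO major third
print(round(cents(7 / 4), 2))  # ~968.83, vs 1000 cents (nearest 12-EDO step)
print(round(cents(7 / 6), 2))  # ~266.87, between the 200- and 300-cent steps
```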
> Depending on your definition of "tune". Regardless, pitch bends cover that if needed.
Pitch bend isn't a general solution unless you go all the way to one-note-per-channel like MPE does. MIDI 1.0 has no individual note pitch bend -- pitch bend affects all notes in a channel at once, which makes it quite a bit less useful for correcting intonation issues than it would otherwise be. (MIDI 2.0 added per note pitch bend.)
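For context, here's roughly how a cents correction gets squeezed into a MIDI 1.0 pitch-bend message -- a sketch assuming the common ±2-semitone bend range (real devices vary): the 14-bit value is centered at 8192, and it shifts every note on the channel at once.

```python
def pitch_bend_message(channel, cents, bend_range_semitones=2):
    """Build a 3-byte MIDI 1.0 pitch-bend message for a given detune in cents.
    Assumes the receiver's bend range is +/- bend_range_semitones (commonly 2)."""
    span = bend_range_semitones * 100            # cents covered by half the bend range
    value = 8192 + round(cents / span * 8192)    # 14-bit value, centered at 8192
    value = max(0, min(16383, value))
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

# e.g. pull a whole channel ~14 cents flat to approximate a just major third:
msg = pitch_bend_message(channel=0, cents=-13.7)
```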
> All serviceable in the 12 note system.
According to your definition of "serviceable". Some of the people who actually practice these forms of music might have other ideas.
> If entire countries cede their musical theory systems for a foreign one it's not a matter of an accident, it's an indication of superiority.
Bad ideas can persist as well as good ideas -- 12-EDO wouldn't have survived as long as it has if it didn't have some useful characteristics, but I think it's important to be aware that 12-EDO is a compromise. Tuning suffers, but you can play equally well in any key, instruments are simpler when you only have to care about 12 notes per octave, and 12-EDO instruments play together.
With electronic music, the barriers to using other tuning systems are largely conceptual. Instruments could have as many notes per octave as you want, you could transpose freely while staying in precise just intonation, and tuning instruments to each other is just changing a number in memory somewhere -- much easier than retuning, say, a piano. So maybe the old tradeoff isn't a good one anymore, and we can make a better one. Or at least make it easy for people to choose what tradeoffs they want to make rather than having their choices made for them.
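To underline how small the technical barrier is, generating an arbitrary equal division of the octave is one line of arithmetic (a sketch; the choice of 19-EDO and of middle C as the base is arbitrary):

```python
def edo_frequency(step, divisions=19, base_hz=261.6256):
    """Frequency of `step` equal-tempered steps above the base pitch in an N-EDO tuning."""
    return base_hz * 2 ** (step / divisions)

one_octave = [round(edo_frequency(s), 2) for s in range(20)]  # 19-EDO scale plus the octave
```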
> Yes there is, it's exactly because near 100% of their users want just that. Most midi devices don't use anything past velocity and pitch bends, complicating them to include non-12 note systems would be a total flop.
I will grant you that 12-EDO is commercially successful and that major music companies largely have no interest in microtonality. That is, however, a very different question than whether non-12-EDO music has artistic and cultural merit.
For what it's worth, I expect MIDI 2.0, which has some features that make it better suited to microtonal music than MIDI 1.0, to be a flop. The microtonal crowd is probably better served by MPE (as kludgy as it is, it does work reasonably well), and device manufacturers seem to have absolutely no interest in supporting it. Maybe it's more popular for software synths?
Interestingly, modular synthesis uses a 1v/octave pitch standard that is agnostic of tuning system -- people are free to use whatever they like. That is as it should be. Most people stick with 12-EDO because that's what they're familiar with, but if you want to do something microtonal, the only roadblocks are the artificial limitations put into individual devices, and you can largely avoid those if it's a problem.
You could make the same argument against unicode. "Why do we need all these weird character sets? The latin alphabet works fine for me, and all the English-speakers I know."
Notes that deviate from exact 12-EDO aren't actually all that esoteric. Even in traditional western music where almost all musicians think in 12-EDO, it's not all that weird to bend notes a little to bring chords better in tune with themselves.
Carla (and maybe Cabbage?) uses it for control messages, but the problem is that it only works if you've got a single big host running a lot of subject plugins; the idea was supposed to be that your sequencer would send MIDI to your sampler and your tracker would send control messages to your sampler over OSC, and all three of these would be peer programs. There's no particular technical reason it doesn't work; it's just that very few programmers built software to take advantage of it.
OSC still exists; it just never became the de facto standard for the industry, primarily because there is no standardized device mapping for interoperability.
MIDI 2.0 fixes a number of issues that OSC was intended to fix, but imposes constraints on vendors, which leads to standardized adoption.
I still use OSC for music production, and it's my go-to protocol for writing code that needs networking, as it's a super simple message-passing protocol that lets me quickly connect various devices and services :)
OSC is basically a serialisation format with a few niceties bolted on for batching and scheduling messages. The massive unsolved problem is discovery – discovery both of endpoints (servers, devices) and of the messages that can be sent to said endpoints. There have been efforts to solve this over the years, but they are largely vendor-specific.
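To give a sense of how thin the format is, here's a minimal OSC message encoder (a sketch handling only int32, float32, and string arguments; in practice you'd reach for an existing library such as python-osc):

```python
import struct

def _pad(b):
    """OSC pads every field to a 4-byte boundary with zero bytes."""
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message: padded address, padded type-tag string, then arguments."""
    tags = ","
    body = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            body += struct.pack(">i", a)      # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            body += struct.pack(">f", a)      # big-endian float32
        elif isinstance(a, str):
            tags += "s"
            body += _pad(a.encode() + b"\x00")
        else:
            raise TypeError("unsupported OSC argument type")
    return _pad(address.encode() + b"\x00") + _pad(tags.encode() + b"\x00") + body

# e.g. send over UDP: sock.sendto(osc_message("/synth/filter/cutoff", 0.42), (host, port))
```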
MIDI has OS-level support for discovering endpoints/devices, and there's enough standardisation of supported controllers that things largely "just work".
MIDI has never been userland in multitasking environments. It's a real-time, low-latency interface between some number of running processes and some number of hardware ports: the timing glitches and weird buffer behaviors you'll get from mixing together different hardware are totally the kernel's wheelhouse.
If you meant General MIDI synthesis, that's a different spec and related only in that the MIDI message encoding is reused to describe sequences.
That's ok. You can talk to USB devices from userspace without issues. That's how many game controllers work. Also most (all?) SDR devices.
In most systems that impacts the minimal latency you can achieve, though, so unless you have a system designed around it, you typically don't want real-time usage crossing into userspace too many times.
What’s a good method for semi real-time, multi-process, multi-hardware-port synchronization in user space? All of the real-time kernel code I’ve worked on wasn’t possible to do in user space 15 or so years ago. Are there some new user-space hooks?
The kernel ain't hard real-time anyway. In userspace, I guess you can pin the task to one or two cores, give it the highest RR priority, and get more or less the same soft real-time we have in the kernel?
For Windows VSTs I use Yabridge, which is amazing with Wine. This week I set up Native Instruments Komplete 14 (Kontakt and Komplete) working with Ardour, connected to my Roland A-88 Mk2. Maybe I should do some videos to explain.
How's the latency when doing live performances or just practicing piano? Because recently I've stopped doing much production and moved from using DAWs to lighter programs like Gig Performer since I'm more of a pianist than a producer, I use like 5 plugins max (a modeled piano with a few effects like reverb and comp) and I'd be interested in a Linux alternative if the delay isn't going to annoy me during practice.
For now, using Native Instruments without any configuration, I don't have good enough latency for performance. However, I haven't yet followed Arch Linux's pro-audio tutorial to use realtime capabilities and improve latency (I didn't have time yet). Before, I used a Yamaha P-125 with its internal sounds redirected directly through the USB connection to Ardour, which was amazing for practicing and live performance (it was still under JACK I think; I switched to Pipewire this year). (Although be careful: the P-125a removed all those nice features.) With fewer, Linux-based virtual instruments/plugins, latency would be less of an issue, but honestly I haven't spent enough time on it yet to give you a viable answer. I am also more of a pianist than a producer; I still prefer a real piano when I can over any virtual appliance for performance, although as I said a Yamaha P-125 is quite alright for that purpose too (I don't know Roland's equivalents well). I know Pianoteq has some Linux-native virtual instruments which I heard are very good. If you only play piano I would look into that; before paying, they must have some demo version.
If you want more information and help, we could stay in contact; I would be pleased to help and see if we can find a way to make it work for your needs. But basically what I would do is: based on the distro you want to use, configure it for realtime and pro-audio according to the community recommendations. Then, depending on your needs for the virtual instruments, see whether specific plugins increase latency to a level that is acceptable or not for your usage. I could do some testing if I have time. Anyway, it was my next step to look into for my new Native Instruments setup.
Forget everything I said.
I just spent an hour trying to fix things because you motivated me.
I found out there was an issue with my Focusrite and my pipewire config.
I fixed it by setting the profile for my interface to "Pro Audio" in PAVU (PulseAudio Volume Control), in the Configuration tab, and disabling all the others; as well as copying `/usr/share/pipewire/jack.conf` to `/etc/pipewire/jack.conf` and adjusting the node latency accordingly, restarting pipewire and wireplumber, disconnecting/reconnecting my interface, then running `pw-metadata -n settings 0 clock.force-quantum 256` (with the quantum value I set in `jack.conf`).
I now have amazing latency (5 ms), even in Native Instruments.
(Of course I overdid it and applied everything I could find on the internet; with more time we could pin down the few things that really fixed the issue. Sorry for sending all that information like this.)
Do you think it'd be possible to make a shell script that anyone (running the same distro as you) could run to get all of this working? Alternatively just a readme detailing every single thing you did (and how you did it) would also make for an interesting read.
From the testing I did, the main culprit for the latency I was experiencing with my USB interface and my MIDI controller was the fact that my Pipewire profile wasn't set as `pro audio` (which can be set in PulseAudio Volume Control software in the `Configuration` tab for your USB audio interface).
However, as it's a full profile, it must be one of the specific settings inside it that fixes the issue, but I don't know which one.
I might need to look into WirePlumber settings (the session manager I use for Pipewire instead of the default one, `pipewire-media-session`).
On another note, I found out that my Focusrite (3rd gen) was held back by factory defaults and needed a procedure to unlock higher sample rates.
Now I am using 1024/192000 Hz and I reach 5.3 ms latency without xruns. (Although when the Pro Audio profile isn't set, Ardour still says the latency is 5.3 ms, even though you can clearly hear that's not the case.)
Good morning! (I am back from the dead.)
I think the best would be for me to first pinpoint which parameter was the culprit (also, given what I read, it might depend on which sound card we use). Then we could see whether a short tutorial or a script would be best. My first idea was to do an installation video of Arch Linux from scratch for pro-audio, but I can understand that you would like to use your own distribution. I will try tonight (if I have time) to see what exactly the issue was and then explain how to solve it. Hopefully I'll have a better understanding of what happened.
May I ask how you installed Kontakt and such? If I try to install some sound libraries using Native Access 1, it complains (something about ISOs that can't be mounted; the data itself is stored within the ISO, but without file access). If I try to use Native Access 2 (the shitty Electron app) it won't even install correctly.
Hey!
Sure, I really think I will do a video about this (should I post it on YouTube or another platform? Any advice?).
Yes you should use Native Access 1.
Basically what I do:
- Edit `/etc/udisks2/mount_options.conf`
- uncomment `[default]`
- search the lines starting with `udf`
- uncomment the one starting with `udf_defaults`, append `,unhide` at the end.
- uncomment the one starting with `udf_allow`
- save&exit
Then, when you download any library which contains ISO files, it will fail, but no worries: the ISO is downloaded into your `~/Downloads` folder!
The issue is related to Wine that can't show hidden files or something like this when mounting disks.
Now that you've reconfigured your default mount options (I need this under Arch Linux because udisks2 auto-mounts after creating a loop device; if you don't have this issue, you could just skip the previous steps and manually mount the loop with the `unhide` option), here's what to do.
You go find your ISO in `~/Downloads`, then you open a terminal and you do:
- `udisksctl loop-setup -f ~/Downloads/LIBRARY.iso`
- Now you will have a newly mounted (if there is auto-mount) disk in your `/run/media/$USER` directory.
- You go there; inside you will see a `LIBRARY_NAME.exe`
- You do `wine LIBRARY_NAME.exe`. It will show some Windows UI; you click next, install, etc.
- After it finishes, you go into Native Access, click the refresh button, and tada, it's installed!
Tips: unmount the disk afterwards. Also, you can restart Native Access between installs if you're installing a lot of them. I don't know why, but sometimes it told me I didn't have enough disk space to install, and after restarting it didn't complain anymore.
Here, I made a quick video to show the process.
Sorry for the video quality, I am a bit tired; it's late in Taiwan.
https://peertube.tw.chapuis.ovh/w/3CPvFXfuZ4i6Y7ea6V6etQ
(Don't mind the audio; I just used the first thing that came to hand on my old hard drive. It's just a microphone test I did, and the pitch isn't great.) (And Native Access didn't like that I set scaling to 200% for the video recording in order to better see the text in the rest of the UI.)
I finally had time to try this, but support for mount_options.conf was added to udisks2 in version 2.9.0. Sadly the Linux distribution I'm using for audio stuff is still Ubuntu 20.04 which ships with udisks2 version 2.8.4.
So I manually remounted the loop device using `mount -o remount,unhide /mountpoint` and now I'm seeing the hidden files.
VST as a format is platform dependent AFAIK. I think you can do something with WINE but other than that the authors of Ardour can't magically remake proprietary Windows VSTs in a compatible format.
As for your third point, I think Linux is improving in this area all the time. Ten years ago it was a black art. There's a lot you can do to set it up, but also a million different things that can be wrong and cause problems, so unfortunately you have to troubleshoot a specific setup.
As a composer and software engineer: all I can personally say is that MIDI is a pretty terrible protocol that's borderline unusable for everything except very limited applications. It's fine to use it for keyboards (piano, organ, celeste, harpsichord, etc.). For literally anything else it's useless and I see nothing in 2.0 that fixes it.
> So why is this expressive touch control important? Because under skilled hands, the ability to capture subtle musical gestures enables more interesting and expressive solo performances than a MIDI keyboard's on/off switches. Consider a guitarist's string bends, a violinist's skillful vibrato, or a wind player's subtle breath control, all of which can't be authentically replicated on a MIDI keyboard.
>In my view, the limitations of MIDI keyboards have resulted in electronically-generated music being used largely for background music
When was this written? Isn't this kind of solved by MPE (which I believe Linn makes use of)?
We're starting to see several MPE-capable instruments hit the market. I have a pre-order out for the Osmose keyboard, which has per-key pressure and per-key (horizontal) pitch bend. If pre-release reviews are to be believed, it does a very good job capturing things like "a guitarist's string bends, [or] a violinist's skillful vibrato".
> For literally anything else it's useless and I see nothing in 2.0 that fixes it.
Sorry, you must be holding it wrong. It is useful and flexible enough for countless applications. It was defined in the 80s and is still used universally today, so it is objectively the opposite of useless and terrible. It is limited, though.
Given that it has been used successfully for 30+ years, for all kinds of musical applications (aside from keyboards -- including all kinds of exotic controllers), in huge productions (Hans Zimmer level huge), I can believe that it is not optimally (or is even badly) designed, but I find the above statement needlessly hyperbolic.
It also (contrary to popular belief) supports far more fine-grained resolution in parameters than 128 levels -- see the sketch below. Also, MPE uses existing MIDI 1.0 protocol capabilities to map the polyphonic messages.
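Concretely: controllers 0-31 have LSB counterparts at 32-63, so sending the pair gives 14-bit (16,384-step) resolution instead of 128 steps. A quick sketch in raw MIDI 1.0 bytes (`send` is a placeholder for your output port):

```python
def send_cc_14bit(send, channel, cc, value14):
    """Send a 14-bit controller value as an MSB/LSB pair.
    cc must be 0-31; its LSB counterpart is cc + 32 per the MIDI 1.0 spec."""
    msb = (value14 >> 7) & 0x7F
    lsb = value14 & 0x7F
    send(bytes([0xB0 | channel, cc, msb]))       # coarse value
    send(bytes([0xB0 | channel, cc + 32, lsb]))  # fine value

# e.g. sweep the mod wheel (CC#1 / CC#33) with 16,384 steps instead of 128
```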
I use MIDI commands from a MIDI-programmable pedal to change the parameters of my amplifier and to command my Fractal Audio amp modeler; for me it is very usable.
I don't understand how you could use it for other categories of instruments unless you mean syncing their clocks?
A MIDI guitar seems fairly nonsensical to me (they exist but aren't perfect), so how could we standardise around something that's likely to completely change in 5-10 years?
For electronic musical instruments, which in theory ought to only be limited by our imagination, it seems rather arbitrary and limiting to have synthesizers that can only be played like a piano. Users that just want an electronic piano are well served by the status quo, but "all music" is a broader category than "all music that is intelligibly playable using a piano-like interface in 12-tone equal temperament."
The ubiquity of MIDI and its piano-centric design has meant that it's been a very uphill battle trying to do any kind of electronic music that isn't plain 12-edo piano music. That's a shame. We've lost out on a lot of interesting music that could have happened if it was just a little easier.
MPE at least has made it reasonably possible for things like the Linnstrument to exist and be commercially successful.
I know it is different from what you want, but Korg lets you use other tunings on the Mono/Minilogue (XD) models. Aphex Twin made several available for download.
Yeah, lots of synthesizers have that feature, but the problem is that it's accomplished "client side", by the synth. It receives a MIDI message that says "Play me a D" and the Minilogue in Werckmeister mode plays a D detuned by 6 cents instead. Crucially, it makes the instrument play something different than what the controller asked for.
Unless your entire setup is composed only of Minilogues all configured with the same tuning settings, your electronic instruments will be out of tune with each other. Plug in an old Yamaha because you like the sound? Now everything is out of tune. This isn't supposed to happen in a MIDI workflow.
It would be much better to be able to specify the tuning system in the MIDI messages themselves, so that any instrument -- hardware, software, or whatever -- knows to make its Ds 6 cents flat, and your whole electronic orchestra can be in Werckmeister tuning together, instead of just the one Minilogue.
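To illustrate what "client side" means here, a sketch of the receiving synth's half of the job: it gets a bare note number and consults its own offset table when choosing a frequency (the offsets below are made up for illustration, not the actual Werckmeister III values).

```python
# The synth receives a plain note number and applies its own per-pitch-class
# offset table when deciding what frequency to play. Offsets are illustrative
# placeholders, NOT the real Werckmeister III deviations.
OFFSETS_CENTS = {0: 0.0, 2: -6.0, 4: -8.0, 5: -2.0, 7: -4.0, 9: -10.0, 11: -6.0}

def note_to_hz(note, offsets=OFFSETS_CENTS, a4_hz=440.0):
    cents = offsets.get(note % 12, 0.0)
    return a4_hz * 2 ** ((note - 69 + cents / 100) / 12)

# Two synths with different tables will render the same note number at different
# frequencies -- exactly the "everything is out of tune with each other" problem.
```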
Here’s a non-exhaustive list of what MIDI (and the standard MIDI file format) does poorly, off the top of my head:
- Control Change msgs are largely a wasteland of unsupported features
- messages are not uniform size, so you can’t read a midi file backwards for instance
- the file format, usb format, uart format, and bluetooth format are all related but not fully compatible. Especially the file format which has meta messages that cannot be sent over bluetooth or uart, etc.
- sysex messages are not packetized in the original spec, needing delay to send large messages
- NRPN and RPN messages are another wasteland
- same with system real-time, song changes, bank changes, SMPTE: all needless complexity no one uses.
- the file format needs a chunk length written in the header, which is annoying to keep rewriting when recording continuously (see the sketch below)
- the file format needs an “end of track” msg that needs to be rewritten again and again when recording to disk
It’s fine for NoteOn and NoteOff. Most of the rest of it is pretty mediocre imo.
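To make the file-format gripes concrete, here's a minimal sketch of a Format-0 Standard MIDI File containing one note. The `MTrk` byte length and the End-of-Track meta event are the two things that have to be rewritten or moved every time you append events while recording.

```python
import struct

# Header chunk: "MThd", length 6, format 0, one track, 480 ticks per quarter note
header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)

events = bytearray()
events += bytes([0x00, 0x90, 60, 100])       # delta 0, note-on C4, velocity 100
events += bytes([0x83, 0x60, 0x80, 60, 64])  # delta 480 as a variable-length quantity (0x83 0x60), note-off
end_of_track = bytes([0x00, 0xFF, 0x2F, 0x00])

# The track chunk length must cover everything up to and including End of Track,
# so a streaming recorder has to keep patching this field (or fix it up at the end).
track = b"MTrk" + struct.pack(">I", len(events) + len(end_of_track)) + bytes(events) + end_of_track

with open("one_note.mid", "wb") as f:
    f.write(header + track)
```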
- no way to play a unison without resorting to multiple channels
- communication is one-way, so no standard way to query what CCs the synth actually supports
- tuning table support wasn't part of the original spec (MTS exists but most synths don't implement it)
- poly aftertouch exists but it's hard to actually use it effectively due to how it's implemented in keyboards and how the protocol is designed
- 128 notes per channel isn't enough for some instruments without resorting to MPE
- MPE addresses an important use case but breaks some MIDI abstractions (users have to know about "zones" instead of "channels")
- it's possible to lose a "note-off" resulting in a stuck note
- 31.25 kbps is slow by modern standards (USB is faster, but then you're stuck with short cables and the "what if neither or both devices are a USB host" problem)
On top of that, I think it's a bad sign that midi.org requires you to login to download the MIDI 1.0 spec. What are they trying to accomplish by making it harder than it needs to be to just read a 40-year-old specification?
I think it may be a good time for a new protocol that isn't based on the same fundamental abstractions as MIDI. Device compatibility isn't necessarily even that big a problem as long as cheap conversion devices to translate between protocols exist (though of course MIDI devices would be limited to what it's possible to express in MIDI).
Instead of note on / note off, I'm imagining a bidirectional protocol where the controller explicitly allocates voice circuits, explicitly tunes them to a particular pitch, and then either triggers envelope generators or takes manual control of the envelope.
For a physical layer I think CAN bus could be a pretty good option. It's meant for low-latency real-time tasks, it's supported by a lot of microcontrollers, the standard speed is 1 Mbps, and cable length can be very long at that speed (40 meters).
> Instead of note on / note off, I'm imagining a bidirectional protocol where the controller explicitly allocates voice circuits, explicitly tunes them to a particular pitch, and then either triggers envelope generators or takes manual control of the envelope.
Sounds way more complicated.
What makes midi great is how simple Note On and Note Off are.
At the end of the day, piano keyboards are just buttons. It’s no different than a qwerty keyboard. Getting the note data should be trivial.
The simplicity of the Note On Note Off api is the only reason midi still exists despite the rest of it.
Well, sort of. Consider though that a polyphonic synthesizer typically has to internally go to the trouble of allocating a free voice circuit, tuning it to some pitch, and triggering the envelope. What I'm suggesting is making that more explicit in the protocol. In some ways it's simpler because it's closer to how the synthesizer actually works.
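A toy sketch of that internal bookkeeping (pure illustration of the idea, not any particular synth's code): on note-on, find a free voice or steal the oldest one, tune it, and trigger its envelope.

```python
import itertools

class Voice:
    """Stand-in for one voice circuit; tune/gate methods are stubs."""
    def __init__(self):
        self.note = None
        self.age = 0
    def tune(self, freq_hz): ...   # set oscillator pitch
    def gate_on(self): ...         # trigger the envelope
    def gate_off(self): ...        # release the envelope

class PolySynth:
    def __init__(self, n_voices=8):
        self.voices = [Voice() for _ in range(n_voices)]
        self.counter = itertools.count()

    def note_on(self, note):
        free = [v for v in self.voices if v.note is None]
        voice = free[0] if free else min(self.voices, key=lambda v: v.age)  # steal the oldest
        voice.note = note
        voice.age = next(self.counter)
        voice.tune(440.0 * 2 ** ((note - 69) / 12))  # 12-EDO here, but nothing forces that
        voice.gate_on()

    def note_off(self, note):
        for v in self.voices:
            if v.note == note:
                v.gate_off()
                v.note = None
```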
> What makes midi great is how simple Note On and Note Off are.
It is nice that you can get a working MIDI controller or synth up and running with very little effort, but if a thing is too simple then it misses a bunch of nuance and becomes an impediment to more sophisticated use cases.
> At the end of the day, piano keyboards are just buttons. It’s no different than a qwerty keyboard. Getting the note data should be trivial.
Technically, that's not true. Most MIDI controllers use a double-switch mechanism, where the time elapsed between activating the first and second switch tells you how fast the key was pressed. That gets sent as velocity in the NOTE-ON message.
Sending high-resolution pitch data should be fairly trivial as well; it's just that we don't have a standard, widely-supported way to do it. (MPE seems to be the current best option.)
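A sketch of the double-switch idea (the linear mapping and the millisecond bounds here are assumptions; real keyboards use their own, often configurable, velocity curves):

```python
def velocity_from_travel(dt_ms, fastest_ms=2.0, slowest_ms=80.0):
    """Map the time between the key's two contact closures to a MIDI velocity of 1-127."""
    dt = max(fastest_ms, min(slowest_ms, dt_ms))
    frac = (slowest_ms - dt) / (slowest_ms - fastest_ms)  # 1.0 = fastest possible press
    return 1 + round(frac * 126)

# e.g. a 10 ms key travel -> velocity ~114; an 80 ms lazy press -> velocity 1
```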
Meh. It's perfect for what it was designed for: lowish-latency key input to/from quasi-analog devices on the cheap. You can't get better than it. CAN bus is newer and worse. MIDI is so simple you can have it anywhere without breaking the bank.
midi is only bad if you try to abuse it for actual digital audio.
I have a hardware hacker friend who did an LED bank that lit up based on MIDI signals with a decoder she wrote in verilog in two days. If it's more complicated than that it's just not going to take off among musicians.
Out of curiosity what limitations do you find yourself running into when composing? I’m a songwriter myself, but my arrangements are probably too basic so I haven’t had any noticeable issues.
A) The full protocol seems to be proprietary to some degree. To access the official specs, I believe you need to be a paying member. There are other sources, but this is not an ideal state of affairs.
B) The current protocol seems to be composed of dozens of random accretions on top of the original protocol. Various extensions are only sporadically supported, so in effect you typically don't have any more capability than the simplest version of the protocol anyway.
A) You just need to sign up with an email address. The docs are free. If you don't want to do that, you probably know someone who has access.
B) There's a new binary protocol and a new pseudo JSON-RPC framework on top. If you read the free documents you would not come away with this conclusion.
We're a long way removed from sending 7-out-of-8 bit messages along 3 feet of cable with optocouplers doing our decoding. It's like a weekend of work to write a MIDI 2.0 decoder for a massive amount of benefit. And it's transport agnostic.
MIDI 1.0 is still in active use, since they didn't change the protocol until 2.0.
It's hard to see what you want: you can't be complaining about physical interfaces not present on your hardware (there are additional specs for MIDI over USB, RTP-MIDI, TRS plugs, and the 3.3 V 5-pin DIN), so are you sad that your drum machine doesn't play bagpipes when you switch to program #110 (General MIDI), or that a Juno 106 doesn't magically add reverb when you crank up controller #91 (the suggested CC mappings), or perhaps you are trying to send samples to a Yamaha DX7 using the DLS standard?
You are not really missing out though. I.e., you can still use the broil setting on your toaster oven even if your microwave doesn't support that. No one is forcing you to use the lowest common denominator on every device.
> MIDI 1.0 is still in active use, since they didn't change the protocol until 2.0
This is not correct at all. There are tons of active extensions on top of the midi 1.0 protocol, even outside of sysex. It's not like they just wrote it all at once and then left it alone. These are the accretions I am referring to. Examples:
MIDI Time Code (MTC) (MMA-001 / RP-004 / RP-008)
General MIDI System Level 1 (GM1) (MMA-007 / RP-003)
General MIDI 2 1.2 (GM2) (RP-024/RP-036/RP-037/RP-045)
File Reference System Exclusive Message (CA-018)
Sample Dump Size, Rate and Name Extensions (CA-019)
MIDI Tuning Updated Specification (CA-020/CA-021/RP-020)
Controller Destination Setting (CA-022)
Key-Based Instrument Controllers (CA-023)
Global Parameter Control (CA-024)
Master Fine/Coarse Tuning (CA-025)
Modulation Depth Range RPN (CA-026)
Extension 00-01 to File Reference Sysex Message (CA-028)
CC #88 High Resolution Velocity Prefix (CA-031)
Response to Data Inc/Dec Controllers (RP-018)
Sound Controller Defaults (RP-021)
Redefinition of RPN 01/02 (RP-022)
Renaming of CC91 and CC93 (RP-023)
Three Dimensional Sound Controllers (RP-049)
MIDI Polyphonic Expression 1.0 (RP-053)
Perhaps I was unclear: none of those additional specs change the wire protocol -- just like IMAP, HTTP, and DNS don't change how TCP/IP works. I.e., MTC is a sysex format; General MIDI is just instrument-name-to-program-number assignments plus CC/RPN mappings. The tuning stuff is typically RPN messages. MPE hogs all the channels for sending poly-expression, but it's all using the basic MIDI 1.0 message types.
you originally wrote:
> ... Various extensions are only sporadically supported, so in effect you typically don't have any more capability than the simplest version of the protocol anyway.
This is just not true -- every device has different capabilities and you control them with whatever MIDI messages the device responds to. Sometimes they follow those standards and other times they don't.
I.e., I can control the reverb level on a Yamaha AN1X through CC#91 (which is a General MIDI-defined controller mapping for Effect 1 level). If I send CC#91 to an Arturia MicroFreak it will change the ARP/SEQ rate. Sure, it's a little annoying that I have to read the MIDI implementation chart for each of my devices, but I'm not going to give up and never control those settings just because they don't align with the published extensions.
OK, sure, I agree that at some level the wire format has been stable, but that's like saying "http and http/2 both use TCP, so they are the same protocol".
Why? I was under the impression midi was used for all composition. Is that wrong? If so, what do composers, EDM artists, etc. use? Something proprietary?
Yes, it’s all MIDI. People complaining about it are ignoring the fact that it’s beyond ubiquitous and has not required a change for literally decades. It’s a smashing success, warts aside.
The dude replying to you with “Sibelius” apparently doesn’t understand that, like all other DAWs and notation programs…Sibelius uses MIDI, because of course it does.
There are probably some composition programs that can do this, but MIDI is more of a hindrance to that task than a help.
MPE is kind of a kludge, but it works decently well and it absolutely addresses a real use case. For a certain kind of user it was a welcome addition to the MIDI spec.
I hacked together a just intonation thing for a friend using MIDI notes and Pure Data a while back. It didn't take long -- less than an hour. Obviously it took advantage of pd's implementation details.
The MPE presets I’ve messed with with some Roli kit have been fun.
Not the parent, but some limitations are that you can only really have events that describe channel/time/value. That's mostly fine for instruments like piano. But on a guitar you can pluck the string with a soft/hard object, close/far from the bridge, add vibrato to an ongoing note, etc. Sometimes it's enough to use multiple instruments with different characteristics to simulate that. Sometimes not.
MIDI is fine for some instruments and for controlling few high level parameters. If you want to make music though, you'd go with a DAW instead which can start with MIDI on some tracks, but otherwise works with waveforms instead.
There's also a difference between when you only compose, want to hear the idea put together, and give musicians the sheets, versus when you want to create the end result fully on your computer.
EDM artists work within the limitations that their tools provide. There's a reason that you hear synthesised/sampled pianos all the time in electronic music, but rarely synthesised/sampled guitars: MIDI sucks at expressing guitars.
There are some really great guitar sample packs out there, I particularly love the Impact Soundworks Shreddage series, but just look at how much they had to mangle MIDI to make it feasible, with NINE (9) sections of the keyboard dedicated to mode switches, and chopping up the velocity space into five different articulation modes, and they still can't model unison bends which are a staple of classic rock guitar: https://www.youtube.com/watch?v=P2h9AmL2BhI
edit: actually, after double-checking, they do support unison bends, but it's a special CC parameter that you have to turn up to do the technique and then turn back down when you don't want it anymore, which is pretty hacky. The point is, you can't use MIDI to actually play these instruments like real guitars live. Even though these sample packs can produce realistic guitar sounds, you need to do butt-tons of MIDI programming, in ways MIDI wasn't intended to be used, to make it sound mostly right -- to the point where it's probably easier to just learn to play the instrument than put in the extreme effort of meticulously programming all this faff.
Reshuffling that sentence for clarity:
The original MIDI protocol is widely-used by musical devices. MIDI 2.0 is a major overhaul to the original protocol.