
MIDI 2.0, first major overhaul to music interface standard from 1983 - davio
https://reverb.com/news/what-will-midi-2-dot-0-mean-for-musicians
======
kazinator
What will Midi 2.0 mean for musicians?

I suspect: frustrations with shit not working with other shit, like it used to
with MIDI, and a big decrement in DIY hackability.

USB is in the mix! Pretty much 'nuff said, but I will say it anyway. USB is a
complex beast with documentation that is a good fraction of a foot high, if
printed out in all its versions on standard letter sized laser paper. If you
bring that into any standard, that is now part of your standard.

USB connectors do not have the optical isolation of MIDI current loops; USB
interconnecting will bring in noisy ground loops that will have musicians and
recording engineers pulling out their hair.

The clever thing in MIDI is that a device which sends messages over MIDI to
another device drives current pulses (not voltage). These current pulses
activate an opto-coupling device in the receiver, such as a phototransistor.
There is no galvanic connection between the devices; they don't share any
ground or anything.

All sorts of talented musicians have done incredible things with MIDI. The
resolution of MIDI has been just fine for people with real chops. MIDI 2.0
isn't going to solve the real problem: talent vacuum.

~~~
Exmoor
> All sorts of talented musicians have done incredible things with MIDI. The
> resolution of MIDI has been just fine for people with real chops. MIDI 2.0
> isn't going to solve the real problem: talent vacuum.

I find it difficult to believe that someone with even a passing knowledge of
what MIDI does would have this opinion. Most of the variables are only 7 bits
of resolution, which produces jarring jumps when you try to adjust parameters
in real time.
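
To put numbers on that: 7 bits gives you 128 steps across a parameter's whole
range. A quick sketch (plain Python; the 20 kHz cutoff mapping is invented
purely for illustration):

```python
# A 7-bit CC gives 128 discrete steps across a parameter's range -
# the "zipper" you hear on a slow filter sweep. The 0-20,000 Hz
# cutoff mapping below is invented purely for illustration.

STEPS_7BIT = 2**7    # 128 values in MIDI 1.0
STEPS_32BIT = 2**32  # what MIDI 2.0 controllers allow

cutoff_range_hz = 20_000.0
print(cutoff_range_hz / STEPS_7BIT)   # ~156 Hz jump per CC tick
print(cutoff_range_hz / STEPS_32BIT)  # ~5e-6 Hz per tick: inaudibly smooth
```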

I remember taking a college class 20 years ago where we talked about the
deficiencies of MIDI and what MIDI 2.0 should look like. It's been 20 years
since that conversation and it's mind boggling to me that MIDI is only getting
updated now.

~~~
kstrauser
Also, MIDI has the nice, round speed of 31,250 bps. Since it uses start and
stop bits, that's 3,125 bytes per second. A "note on" message to start playing
a note is three bytes long: a 4 bit "this is a note on" field, followed by a 4
bit MIDI channel number, then a 7 bit note number, then a 7 bit velocity
("how hard I hit the key") number, each carried in its own byte. A "note off"
message, sent when you want to stop playing a note, is identical except for
the first 4 bit status field. So, if everything's perfect, playing and
releasing one single note takes 6 of the 3,125 bytes available each second, or
1.92ms. That's why a lot of so-called "black MIDI" songs are probably
literally unplayable through an actual hardware MIDI interface.
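
To make the arithmetic concrete, a minimal sketch (plain Python; the only MIDI
facts used are the byte counts and baud rate above):

```python
# Wire time for MIDI 1.0 messages: 31,250 baud, 10 bits per byte
# (8 data bits framed by a start and a stop bit).

BAUD = 31_250
BYTES_PER_SEC = BAUD / 10  # 3,125 bytes per second

def wire_time_ms(num_bytes: int) -> float:
    """How long the cable is busy sending this many bytes."""
    return num_bytes / BYTES_PER_SEC * 1000

print(wire_time_ms(3))   # one 3-byte note-on: 0.96 ms
print(wire_time_ms(6))   # note-on plus note-off: 1.92 ms
print(wire_time_ms(30))  # a ten-note "black MIDI" cluster: 9.6 ms
```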

But forget about playing _and_ releasing notes. Say you're triggering a MIDI
drum machine and a synth. Sounds like violins have a slow "attack" - that is,
you don't go instantly from "no sound" to "full sound", but ramp up over a
short interval. Imagine a violinist that has to start moving their bow, or a
trumpeter that has to start blowing. It doesn't matter if you send a
synthesizer a set of "note on" messages saying "play a middle C major chord"
for violin sounds and they don't all get there simultaneously, because it was
going to take them all a little bit to start playing anyway. Drums are a
different story. If you expect a kick and hi-hat to play at exactly the same
time, you don't have that many milliseconds between their starts before a
normal human listener can start to really notice it.

So, the worst case scenario is that you'd have a piece of sequenced music that
plays two drums, a piano chord, a bass line, and a violin chord at the same
time. This is where sound engineers start getting hyper nitpicky about
stringing the equipment together so that:

- The two drums fire off in adjacent time slices so that they sound as
simultaneous as possible.

- The piano notes come next, from lowest (because if it's a sampled sound,
low notes will be played back more slowly and therefore have a slower attack)
to highest.

- The bass sound comes next because those don't _usually_ have a super
aggressive attack.

- Violins come last, and it doesn't really matter because they're lazy and
they'll take a few hundred milliseconds to really kick in anyway.

The worst ordering, on the other hand, is:

- One drum fires off.

- The rest of the instruments fire off in reverse order of their attacks,
like high piano, bass, high violin; medium piano, medium violin; low piano,
low violin.

- The other drum fires off.

Because MIDI is so glacially slow compared to every other protocol commonly
used, it's going to sound absolutely terrible.
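
A sketch of the kind of ordering pass engineers did by hand (plain Python; the
attack-priority table is invented for illustration, not from any spec):

```python
# Order events that are due at the same tick so percussive sounds go
# out first and slow-attack sounds last. The priority table here is
# invented for illustration; in practice it was tuned by ear.

ATTACK_PRIORITY = {"kick": 0, "hihat": 0, "piano": 1, "bass": 2, "violin": 3}

def serialize_order(events):
    """events: list of (instrument, midi_note) due at the same tick."""
    # Drums first; within a priority tier, lowest note first (sampled
    # low notes play back more slowly, so they need the head start).
    return sorted(events, key=lambda e: (ATTACK_PRIORITY[e[0]], e[1]))

beat = [("violin", 72), ("piano", 64), ("kick", 36),
        ("piano", 48), ("bass", 40), ("hihat", 42)]
print(serialize_order(beat))  # kick, hihat, low piano, high piano, bass, violin
```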

MIDI is amazing in so many ways, but it has some very severe technical
limitations by modern standards. I can't believe it's taken this long for a
replacement to come along.

~~~
phlakaton
We used to deal with the serial problem using a hack: bump events back or
forward by one or two quanta of time to ensure that they go out over the wire
in the order that you want. It's laborious and I am looking forward to the
next generation never having to worry about it. (That _will_ be fixed, right?)

~~~
kazinator
If you really had to send the data from multiple sources into a single MIDI
destination over a single cable, then if a small global delay were acceptable,
a smart scheduling algorithm with a 10-20 millisecond jitter buffer would
probably take pretty good care of it so that the upstream data wouldn't have
to be tweaked.

(Note that if you stand with your guitar 5 meters from your 4x12 stack, you're
hearing a 15 ms delay due to the speed of sound.)

~~~
phlakaton
Unfortunately, because of the differences in instrument attack, which a MIDI
controller would have almost no knowledge of, I think a jitter buffer alone
would not fix the issue.

~~~
kazinator
An interrupt controller has no knowledge of device semantics; it can just
prioritize them based on a simple priority value. The scheduler could do the
same thing. It could even be configuration-free by using some convention, like
lower instrument numbers have higher priority.
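
Something like this, say (a minimal sketch; the class and the 15 ms figure are
mine, and "lower channel number wins" is just the convention suggested above):

```python
import heapq

# A configuration-free merging scheduler: hold every event for a
# fixed jitter-buffer delay, then release events that share a
# deadline in channel order (lower channel = higher priority, purely
# by convention). Names and the 15 ms figure are illustrative.

JITTER_MS = 15  # small global delay traded for stable ordering

class MergingScheduler:
    def __init__(self):
        self.queue = []  # heap of (deadline_ms, channel, message_bytes)

    def submit(self, now_ms, channel, message):
        heapq.heappush(self.queue, (now_ms + JITTER_MS, channel, message))

    def drain(self, now_ms):
        """Pop every message whose deadline has passed, priority first."""
        out = []
        while self.queue and self.queue[0][0] <= now_ms:
            out.append(heapq.heappop(self.queue)[2])
        return out
```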

Also, the physical layer of MIDI could simply be extended into higher baud
rates while all else stays the same.

In the last 15 years, I can't remember using a serial line to an embedded
system that wasn't pegged to 115 kbps. Bit rate is a relatively trivial
parameter in serial communication; it doesn't give rise to a full blown
different protocol.

115 kbps is almost four times faster than MIDI's 31250. Plain serial
communication can go even faster. The current-loop style signaling in MIDI is
robust against noise and good for distance. 400 kbps MIDI seems quite
realistic.

This would just be used for multiplexed traffic, like sequencer to synth; no
need for it from an individual instrument or controller.

------
tartoran
For me, MIDI 1.0 has served my needs. I may look into 2.0 if the need arises.
It is, however, great to know that it will be backwards compatible:

 _MIDI 2.0 will be backwards compatible, meaning all new MIDI 2.0 devices will
be able to use and send MIDI 1.0 data. And if, in time, you create a hybrid
setup of old 1.0- and new 2.0-equipped machines, your rig's MIDI 2.0 models
will interact together with the fresh capabilities of the new spec, while the
MIDI 1.0 models will continue to work as they always have._

~~~
wwweston
This is the most important feature. MIDI lives in a world where decades-old
hardware is still used and loved; the computing industry's conception of
obsolescence would be even more hostile to owners here than it is elsewhere.
Keeping that old gear part of the protocol is key.

~~~
ptah
This. I have switched to hardware and gone DAWless purely because of the
software culture of upgrading for upgrading's sake.

------
PaulDavisThe1st
This quote from the section on the Capability Inquiry:

"The type of instant-matching that is, as of now, still based on proprietary
messages between Ableton hardware and software (or similar systems from other
companies) will instead be universally available through MIDI 2.0"

is misleading. The "matching" between (say) Live and a Push 2 is not based on
proprietary messages sent between them, but merely on both ends knowing which
messages to send. That's why an open source DAW like Ardour can also interact
with a Push 2, in the same "instant-matching" way that Live can.

Since MIDI is an open protocol, it is _always_ possible to determine what
messages are being sent. The capability inquiry is a good idea, but it doesn't
replace the sort of carefully-built match between the hardware controller and
the software that already exists.

------
jacquesm
MIDI works fine for anything keyboard based. As soon as you deviate from that,
it becomes an exercise in frustration.

Much better article:

[https://www.midi.org/articles-old/details-about-
midi-2-0-mid...](https://www.midi.org/articles-old/details-about-
midi-2-0-midi-ci-profiles-and-property-exchange)

~~~
robin_reala
Not that I’m familiar with the subject, but this article[1] suggests that
Roli’s Seaboard would need MIDI >1.0 to transmit per-note pressure and pitch
bend info, and the Seaboard is definitely keyboard based.

[1] [https://reverb.com/news/roland-unveils-first-midi-2-ready-
ke...](https://reverb.com/news/roland-unveils-first-midi-2-ready-keyboard-
controller-a-88mkii)

~~~
PascLeRasc
Pitch bend has been on MIDI controllers since forever, more or less. I'm super
excited for this synth though, with per-note pitch-bend and multiple
instruments reacting to key pressure:
[https://www.youtube.com/watch?v=UjZ6SuWxBFg](https://www.youtube.com/watch?v=UjZ6SuWxBFg)

~~~
52-6F-62
Coming from strings, that makes so much sense. Learning some keyed
instruments, I always found myself attempting to bend/vibrato in that way
without thinking.

Also, that man looks a lot like Garth Hudson...
[https://pbs.twimg.com/media/DjtfCgOUwAA6K7o.jpg:small](https://pbs.twimg.com/media/DjtfCgOUwAA6K7o.jpg:small)

~~~
piva00
Wow, he actually does. That man is Cuckoo; he's a quite well-known synth
YouTuber, and I quite like his thorough reviews of synthesisers :)

------
fortran77
We have an organ in our office warehouse with 8 ranks of real pipes and
several dozen virtual ranks, and it is MIDI controllable. MIDI does a poor job
of mapping organs to its messages. There's no good way to control stops (i.e.,
which stops are selected on which manual/pedal) and couplers without
"overloading" a lot of the meta commands. And there's no way of defining which
stops (which can be on any manual) are under expression. There's no way to
represent a "tutti" stop, etc.

And even for conventional "piano" instruments, which you'd think MIDI would
work well for, it's lacking. It doesn't have pedal velocity or position (often
the damper pedal is held at some in-between position) and it doesn't have key
position. Advanced "player" systems, like the Bosendorfer SE or Disklavier
Pro, overload other MIDI messages to account for this. Anyone who plays a real
piano will see MIDI's problems.

At the very least, every keyboard instrument like Organs and Pianos should be
100% controllable from MIDI.

~~~
6581
> It doesn't have pedal velocity or position (often the damper pedal is held
> at some in between position)

It does have pedal position. Pedal data is transmitted as a control change
message (e.g. CC#64 for damper) with a 7-bit data value. Many recent digital
pianos support half-pedalling including transmitting and receiving the pedal
position via MIDI.
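
For the curious, that's an ordinary three-byte control change on the wire; a
minimal sketch (the byte layout is straight from the MIDI 1.0 spec):

```python
# Damper pedal position as a MIDI 1.0 control change:
# status 0xB0 | channel, controller number 64, 7-bit position.

def damper_cc(channel: int, position: int) -> bytes:
    """channel 0-15; position 0 (pedal up) .. 127 (fully down)."""
    assert 0 <= channel <= 15 and 0 <= position <= 127
    return bytes([0xB0 | channel, 64, position])

print(damper_cc(0, 0).hex())    # 'b04000' - pedal up
print(damper_cc(0, 52).hex())   # 'b04034' - half-pedal
print(damper_cc(0, 127).hex())  # 'b0407f' - fully down
```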

~~~
scarecrowbob
Indeed, this is true.

I've often been super frustrated by things that receive MIDI: they don't
expose certain controls to CC, or they don't respond to multiple channels, or
they don't pass certain information to the Thru port.

But often what is frustrating is that it's doable in MIDI; it just wasn't
implemented well on the device.

------
_delirium
Kind of strange. This seems to be intended for the case where you connect your
devices with something like USB, not with physical MIDI cables, and use MIDI
as just an event/messaging protocol on top of the generic data connection. And
it's true that MIDI has some limitations in that setup. But for that use-case,
Open Sound Control (OSC) already exists, and is supported by almost everything
on the software side, plus a growing number of things on the hardware side.
Why not just use that?

~~~
RossBencina
MIDI is not "just an event/messaging protocol", although it could be used like
that. MIDI describes an application-level schema for "channels" which allow
for the dynamic control of "notes" including pitch, poly after-touch, etc. It
also provides for channel-level control parameters, various special
application-level events, global clock/transport, bulk data transfer, etc.
It's true that you could implement all of these with OSC (indeed there is a
standard embedding of the short MIDI messages into OSC), but OSC is in general
silent on application-level semantics. This makes OSC a much better choice if
you want to implement some other semantic, but you still need an application-
level schema for commercial music devices, and that doesn't exist.
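
For reference, the standard embedding mentioned above is OSC 1.0's "m" type
tag, which carries one short MIDI message as four bytes. A hand-rolled sketch
(the "/midi" address is my own example; OSC assigns it no meaning):

```python
# OSC 1.0's "m" type tag carries a MIDI message as four bytes:
# port id, status, data1, data2. The "/midi" address below is just
# an example; OSC itself attaches no semantics to it.

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to the 4-byte alignment OSC requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_midi_packet(status: int, data1: int, data2: int, port: int = 0) -> bytes:
    return (osc_pad(b"/midi")  # address pattern
            + osc_pad(b",m")   # type tag string: one MIDI argument
            + bytes([port, status, data1, data2]))

# Note-on, channel 1, middle C, velocity 100:
print(osc_midi_packet(0x90, 60, 100).hex())
```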

------
andyjpb
Whenever I talk to friends who are interested in such things, they tell me
that Open Sound Control (
[https://en.wikipedia.org/wiki/Open_Sound_Control](https://en.wikipedia.org/wiki/Open_Sound_Control)
) is the new MIDI.

It seems to support a lot more than traditional MIDI, especially in terms of
control and synchronisation.

Does anyone have any idea how well MIDI 2.0 and OSC will compete with or
complement each other?

~~~
RossBencina
OSC is a message transport protocol. It describes how to packetise messages,
but not what they mean -- it doesn't define application-level semantics. This
is both an advantage (making it flexible, and malleable to the requirements of
ad-hoc projects) and a disadvantage (it limits seamless interoperability
between COTS hardware). In general, OSC needs either (a) the sending and
receiving endpoints to agree a priori on an application-level protocol
(message schema), or (b) some kind of glue/mapping layer in either the sender
or receiver that can translate and map schemas. Such a mapping layer is easy
to construct if you're using a programmable environment. OSC has support in
pretty much every programming language and many music environments and, like
MIDI 1.0, is a viable DIY protocol.

By contrast, MIDI (both 1 and 2) is a flexible application-level protocol. For
example, among other things, MIDI 1.0 describes schemas for musical notes,
parameters, transport control, and time synchronization. Built-in application
schemas allow devices that fit the application model to communicate in a
relatively seamless way. I believe that MIDI 2.0 provides a more extensive
schema that includes (for example) device discovery and capability queries,
and removes some limitations of the old schema. I'm not familiar enough with
the details of the final MIDI 2.0 spec to say much more than that.

As I recall, some of the features of MIDI 2.0 (e.g. capability queries,
discussed elsewhere on this page) were proposed for an "OSC 2.0", however the
fine people at CNMAT who produced the OSC 1.0 spec didn't have the resources
to sponsor 2.0 development, and no one else stepped up. In contrast, the MMA
(MIDI Manufacturers Association), who sponsored the 2.0 spec, have all of the
major music corporations as members (e.g. Roland, Korg, Yamaha). That said, as
I understand, the MIDI 2.0 process was open to anyone, and I know of at least
one independent developer who was involved.

Will they compete? I suspect that the situation will continue largely
unchanged: commercial hardware will support MIDI (1 and/or 2), and, as is
currently the case, few commercial music devices will support OSC. OSC will
likely continue to be the protocol of choice for custom projects using custom
hardware, software and application schemas. Perhaps with time, as the tools
improve and we get API support for MIDI 2.0 in operating systems and embedded
libraries, developing MIDI 2.0 software might become easy enough that it's a
genuine choice between OSC and MIDI 2.0.

Edit: clarity.

~~~
FraKtus
I am working on VJ software; we are missing the capacity to associate preview
icons with music events. I hoped OSC 2.0 would allow that with an
application-level protocol, for example... We tried to convince the music
community 10 years ago but could not make it happen.

------
unlinked_dll
I wish they had removed SysEx messages. They cause way more trouble than
they're worth, now that we have property exchange/profile configuration built
into the spec.

~~~
dkersten
What's the problem that SysEx messages cause? I've only limited experience
with them, having used them only to send config data to a device in a way that
didn't need the device to keep functioning while being updated. From a
programming point of view, I found them quite convenient and simple, except
perhaps for the fact that you only get 7 bits per byte, so you may have to
pack the data.
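
The packing usually looks something like this (a sketch of one common layout;
schemes vary per manufacturer, so this is not a universal rule):

```python
# One common way to squeeze 8-bit data through SysEx's 7-bit bytes:
# for each group of up to 7 data bytes, send one byte carrying their
# MSBs, then the 7-bit remainders. Schemes differ per manufacturer;
# this is just one widely used layout.

def pack_7bit(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 7):
        chunk = data[i:i + 7]
        msbs = 0
        low_bytes = bytearray()
        for j, b in enumerate(chunk):
            msbs |= (b >> 7) << j       # collect each byte's top bit
            low_bytes.append(b & 0x7F)  # keep the low 7 bits
        out.append(msbs)
        out.extend(low_bytes)
    return bytes(out)

print(pack_7bit(bytes([0xFF, 0x01, 0x80])).hex())  # '057f0100'
```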

~~~
SeanLuke
[For the benefit of HN: SysEx is a special manufacturer-specific MIDI message
which is undefined, so a synth manufacturer can use it for whatever he wishes]

I build a lot of open source patch editors for older synthesizers. My beef
with SysEx is that every manufacturer uses a completely different approach to
defining their own proprietary messages with it. For example, nearly every
synth in the universe has a sysex message for dumping a patch (a full set of
parameter settings) from a synth to another or to a computer; but they define
their messages in radically different ways, so I must construct an entirely
different set of parsing and emitting tools for _every single synthesizer_,
even within a given manufacturer. It's a nightmare.

So continuing this example, if the MIDI association had gotten together early
on and said that MIDI dumps _should_ have a header that looks like THIS and
then all the parameters in order, two bytes per parameter with no bit packing
whatsoever, no two's complement, and end with a specific checksum, then I'd
have written 10x more patch editors so far. I wouldn't have to write custom
parsers and dumpers: I'd just provide a list of parameters and their bounds.
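
To show how little code that hypothetical standard would need, a sketch
(everything here - the header bytes, the checksum rule - is invented; nothing
like it exists in the real spec):

```python
# Parser for the hypothetical standardized dump wished for above:
# fixed header, two 7-bit bytes per parameter (MSB first, no bit
# packing), one checksum byte, then end-of-exclusive. The header and
# checksum rule are pure invention.

HEADER = bytes([0xF0, 0x7D, 0x01])  # 0x7D is the non-commercial SysEx ID

def parse_dump(msg: bytes, param_names):
    assert msg.startswith(HEADER) and msg[-1] == 0xF7
    body, checksum = msg[len(HEADER):-2], msg[-2]
    assert sum(body) & 0x7F == checksum, "bad checksum"
    values = [(body[i] << 7) | body[i + 1] for i in range(0, len(body), 2)]
    return dict(zip(param_names, values))  # one parser for every synth
```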

~~~
fit2rule
(Disclaimer: been working in synth industry for decades now..)

Most SYSEX dumps are just dumps of the plain ol' structs that the synth
engines are using to drive their output. A lot of synths don't have the
processing power to do more than just dump the struct.

So, it wouldn't really make much sense to have them all use the same struct -
it can't be enforced too well. Forcing synth manufacturers to all use the same
struct means that, even if they have their own internal plain-old-structs,
they'd need extra code to dump the SysEx according to the standard.

~~~
SeanLuke
> Most SYSEX dumps are just dumps of the plain ol' structs that the synth
> engines are using to drive their output. A lot of synths don't have the
> processing power to do more than just dump the struct.

I don't think it's processing power: it's stingy RAM utilization. Many bad
actors (Kawai, Casio, later Yamaha) did crazy bit-packing of parameters rather
than just keeping them in a simple array, while the more sane (Oberheim, E-mu,
Waldorf, early Yamaha) at least tried to pack in a consistent way. Other bad
actors (ahem Korg, as late as 2000) decided to use, shall we say, creative
parameter encodings, going so far as to embed textified versions of parameter
numbers into sysex byte streams. And many used all sorts of crazy schemes for
banks and patch numbering, most of which are incompatible with one another.

And it's not just encoding: basic synth patch dump features are missing from
different models. There are five basic tasks that most synth editors require:

- Change patch

- Request a dump from working memory

- Dump to working memory

- Dump to patch RAM and save

- Send a single parameter change (for any parameter)

Manufacturers couldn't even agree to make machines which supported all five of
these. Some machines (Yamaha) have no way to write to RAM. Some machines
couldn't do single parameter changes. Some machines can't properly change
patches in a consistent manner. Some machines have no patch request mechanism.
Many machines can't dump to current working memory: only to patch RAM!

The situation is only getting worse. Whereas in the past manufacturers at
least attempted a complement of sysex messages, now many manufacturers can't
even be bothered to allow access to their machines (Korg, Roland). Others
treat their sysex messages as proprietary secrets (Arturia, Digitech, Alesis).

There is only one truly good, shining actor in the open MIDI spec space, and
that is Sequential. Which shouldn't be a surprise given who runs it.

~~~
PaulDavisThe1st
"Send a single parameter change (for any parameter)"

This makes no sense. That would also imply a way to discover (and name, and
probably provide semantics for) all parameters. That's a huge ask if MIDI
(even MIDI 2.0) is the only communication protocol available.

Yes, the first 4 of your list are common. The first one is covered by the core
MIDI spec. The 2nd and 3rd have no standard msg, but your complaint seems to
be about the contents of the message, which is no business of the requestor.
The 4th assumes "patch RAM", which cannot be assumed, as you note, and that
seems correct to me.

~~~
SeanLuke
> This makes no sense.

Why? It's highly standard. About 90% of the synthesizers I've written patch
editors for provide exactly this facility. In fact some (PreenFM2, Korg
Microsampler, Futursonus Parva) provide _only_ this facility.

> The first one is covered by the core MIDI spec.

Actually, it's not. Program Change only works for 128 patches. If a synth has
more than 128 (and many do), it must also rely on Bank Select, but
manufacturers' definitions of "banks" vary because a bank is not a formally
defined concept. Some rationally treat banks as divisions of the patches.
Others treat banks as media choices: cards versus RAM versus ROM. Some require
that Bank Select be _immediately_ before Program Change with nothing
in-between; others do not. Some ignore banks entirely and instead define a
"Program Change Table" of 128 slots pointing to arbitrary patches in memory,
and then Program Change indicates which patch slot to use.
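
For reference, the nominal sequence the spec intends - the part everyone then
interprets differently - looks like this (a sketch of the raw bytes):

```python
# The nominal bank-select sequence: CC#0 (bank MSB), CC#32 (bank
# LSB), then Program Change. What "bank" means on the receiving end
# is exactly the part manufacturers disagree about.

def select_patch(channel: int, bank: int, program: int) -> bytes:
    return bytes([
        0xB0 | channel, 0x00, (bank >> 7) & 0x7F,  # CC#0: bank select MSB
        0xB0 | channel, 0x20, bank & 0x7F,         # CC#32: bank select LSB
        0xC0 | channel, program & 0x7F,            # program change
    ])

print(select_patch(0, 2, 5).hex())  # 'b00000b02002c005'
```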

And there are several _major_ synthesizers (Yamaha TX81Z and DX11 are famous
examples) where Program Change is in fact broken and requires unusual
workaround hacks. Further, most synths require a program change prior to a
patch load: but others (notably the Oberheim Matrix 6 and 1000) require a
program change _after_ a patch load. It's a mess.

~~~
PaulDavisThe1st
It's not standard at all. There is absolutely no MIDI standard for the
contents of a patch. I really don't know what you're thinking of.

Back in the 90's, when things like "MIDI Librarians" were common (and widely
used), each new device needed to be added to the MIDI Librarian's code to deal
with the specifics.

~~~
SeanLuke
> It's not standard at all. There is absolutely no MIDI standard for the
> contents of a patch. I really don't know what you're thinking of.

I think you may have misread what I had said. I didn't say that patches had to
be the same format or content -- that would be insane.

------
sneakernets
I'm so glad this happened; this may put an end to the many, many bespoke MIDI
implementations I've come across.

I participated in a piano competition over a decade ago (I believe it was
sponsored by Yamaha) which recorded all participants through an extended MIDI
format that increased the resolution and bumped almost everything up from a
max of 127 to a max of 1024. With MIDI 2.0 this wouldn't even be required; all
the functionality is included.

~~~
elihu
> I'm so glad this happened; this may put an end to the many, many bespoke
> MIDI implementations I've come across.

Maybe it will, but I'm not terribly optimistic that we'll avoid the scenario
where the various manufacturers implement the parts of MIDI 2.0 that they care
about, and we'll have another mess of partial implementations that aren't
entirely compatible with each other.

It might help if someone puts out an open-source highly portable reference
implementation that everyone can use rather than every manufacturer writing
everything from scratch.

------
hoistbypetard
I really hope the backwards compatibility is idiot-proof, because it's really
been great. My Roland EP-9 from the mid-90s is easily the oldest device I have
that I can connect to my iPad and have it just work with modern software.
(Granted, a dongle or two is involved...)

And it's worked with every computer I've cared to connect it to in years
prior.

That strikes me as a sign of a standard well-done.

------
Jamwinner
I am hopeful, but skeptical.

All you musicians who can't feel the MIDI 1.0 delay need to play on some
acoustic instruments and hear what you have been missing. That couple of ms
between each note makes every chord a rapid arpeggio and every drum hit a
flam, and it gets worse as control data (much less sysex!) is added.

While I was hoping for a timing-agnostic standard, what we seem to be getting
is not terrible from what I can see. Does anyone have a link to an actual spec
sheet or prototype implementation?

~~~
seandougall
Back of envelope: With running status, a note on message is two bytes on a
31250 baud connection. That means the latency from transport is on the order
of half a millisecond. If you’re feeling latency, it’s in the gear, not the
protocol. MIDI implementation quality has long been wildly inconsistent, and I
don’t see a reason to believe that will change with MIDI 2.0.
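
What running status looks like on the wire, for the curious (a sketch; the
byte values are from the MIDI 1.0 spec):

```python
# Running status: after the first status byte, messages of the same
# type on the same channel can omit it, so each further note-on is
# only two data bytes - 0.64 ms at 31,250 baud.

without = bytes([0x90, 60, 100,   # note-on ch 1: status, note, velocity
                 0x90, 64, 100,
                 0x90, 67, 100])

with_rs = bytes([0x90, 60, 100,   # status byte sent once...
                 64, 100,         # ...then just note and velocity
                 67, 100])

print(len(without), len(with_rs))  # 9 vs 7 bytes for a C major triad
```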

~~~
6581
> That means the latency from transport is on the order of half a millisecond.
> If you’re feeling latency, it’s in the gear, not the protocol.

As long as you don't transmit anything else. When you have 16 channels with
many note and controller messages, latency and asynchronicity between channels
can become noticeable.

~~~
noizejoy
I remember using 8x8 MIDI interfaces to get around some of that - not just for
the additional MIDI channels, but also to thin out the traffic for each
connection, i.e. each MIDI channel would get its own cable/connection.

However, that approach didn't work when I wanted to send 16 channels to a
single (non-USB) multi-instrument sound device like my trusty JV-1080. It
became an excuse to buy more synths! :-)

------
lioeters
For me, one of the highlights is the higher resolution of values, for example
for control messages, from 7 bits (0~127) up to 32 bits.
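
The usual way to widen such values without losing the endpoints is bit
replication; a naive sketch of the idea (MIDI 2.0 defines its own exact
translation rules in the spec, so treat this as illustration only):

```python
# Widen a 7-bit value to 32 bits by repeating its bit pattern down
# the word, so 0 stays 0 and 127 becomes 0xFFFFFFFF. MIDI 2.0's spec
# has its own precise scaling rules; this only shows the idea.

def widen_7_to_32(v: int) -> int:
    assert 0 <= v <= 127
    out = 0
    for shift in range(25, -7, -7):  # 25, 18, 11, 4, then a final -3
        out |= (v << shift) if shift >= 0 else (v >> -shift)
    return out

print(hex(widen_7_to_32(0)))    # 0x0
print(hex(widen_7_to_32(127)))  # 0xffffffff
print(hex(widen_7_to_32(64)))   # 0x81020408 - just over half scale
```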

------
FraKtus
Because MIDI 2.0 is bi-directional, will it allow us to exchange preview icons
for each individual MIDI note? I am working on VJ software and we want to
display the video loop associated with each MIDI key...

------
Archit3ch
I got to see it at ADC19.

For me the "2.0" part is pure marketing. However, since the industry has moved
on (e.g. MPE), it's nice to standardize on _something_.

------
ptah
> Is this important to me?

maybe.

> Is this something that's worth potentially getting rid of something I love
> for new capabilities that may or may not be compelling?

never

------
fao_
There's a standard? I googled a few weeks ago and obviously my google-fu was
terrible because I didn't find a standards document for it

~~~
AndrewDucker
[https://www.midi.org/specifications-old/item/the-
midi-1-0-sp...](https://www.midi.org/specifications-old/item/the-
midi-1-0-specification)

~~~
robin_reala
But nothing seemingly available for 2.0?

~~~
PascLeRasc
[https://www.midi.org/articles-old/details-about-
midi-2-0-mid...](https://www.midi.org/articles-old/details-about-
midi-2-0-midi-ci-profiles-and-property-exchange)

The actual spec sheet is yet to be released, but this is a really great
article about it.

