
Details about MIDI 2.0 - dmoreno
https://www.midi.org/articles-old/details-about-midi-2-0-midi-ci-profiles-and-property-exchange
======
diydsp
This angers me: >To implement MIDI-CI and MIDI 2.0, you need a
manufacturer's SysEx ID. A SysEx ID by itself is $250 a year

That would be a re-hash of the hobbyist USB VID/PID fiasco. Building MIDI
synthesizers is one of the main activities that non-professionals have been
enjoying. So many amateurs and educators whip up MIDI synthesizers in a few hours
in a workshop or after work. And that is thanks to MIDI being a dirt-simple,
unlicensed standard.

>You will also have access to the MMA GitHub which has code for MIDI 2.0 to
MIDI 1.0 translation (and vice versa)

~~~
dmoreno
I was just listening to the MIDI 2.0 webinar from the MIDI Association, and
as I understood it there are some free SysEx and CI IDs for non-profits /
hobbyists, but if you make money out of it you should register. I guess mainly
to prevent ID collisions.

~~~
mrob
It's pure money grabbing. They could just as well avoid collisions by
specifying a UUID for the SysEx ID, and then everybody could generate their
own independently.

~~~
dmoreno
I guess you can use the free IDs, and then place the UUID in the stream or
packet. Everybody's happy.
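
For a rough idea of what that could look like, here's a sketch in Python. The
assumptions: 0x7D is the SysEx manufacturer ID reserved for non-commercial
use, and the nibble encoding is just one way to keep the 128-bit UUID 7-bit
clean.

    import uuid

    SYSEX_START, SYSEX_END = 0xF0, 0xF7
    NONCOMMERCIAL_ID = 0x7D  # manufacturer ID reserved for non-commercial use

    # UUID bytes can have bit 7 set, which is illegal inside SysEx data,
    # so split each byte into two 4-bit nibbles.
    uid = uuid.uuid4().bytes
    nibbles = bytes(n for b in uid for n in (b >> 4, b & 0x0F))

    message = bytes([SYSEX_START, NONCOMMERCIAL_ID]) + nibbles + bytes([SYSEX_END])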

------
mathnmusic
I feel MIDI is severely underutilized outside music. It's a simple, standard
protocol that relays switches being turned on/off, and knobs being turned high
or low. The only standard alternative that I know of would be the USB HID
interface, which has its limitations.

What are some cool uses of MIDI you have seen outside music-making?

~~~
PeterisP
I'm not in that industry, but isn't it also used for controlling stage lights
in some cases?

~~~
formercoder
MIDI will often be used to trigger the console to change cues which then uses
DMX512 to control the lights.
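
For the curious, a sketch of what such a trigger can look like on the wire: a
MIDI Show Control "GO" SysEx, as I recall it from the MSC spec (the device ID
and cue number here are made up).

    # MSC "GO": F0 7F <device> 02 <command format> <command> <cue> F7
    DEVICE_ID = 0x01           # hypothetical: console listening as device 1
    LIGHTING, GO = 0x01, 0x01  # command format "Lighting (General)", command "GO"
    cue = b"5"                 # cue numbers travel as ASCII text

    msc_go = bytes([0xF0, 0x7F, DEVICE_ID, 0x02, LIGHTING, GO]) + cue + bytes([0xF7])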

------
habosa
I have just started to play around in the world of electronic music and have
had my first real exposure to MIDI. I just want to say ... wow. Something like
this is so rare. I have pieces of equipment from different decades that can
talk to each other using a $2 cable and no computer in between.

As a programmer I'm used to walled gardens and competing standards. MIDI is a
breath of fresh air. I hope whatever 2.0 brings can keep this spirit alive.

~~~
781
MIDI has competition that is more open and, some say, better; it just never
caught on outside open-source/hacker/maker circles.

[https://en.wikipedia.org/wiki/Open_Sound_Control](https://en.wikipedia.org/wiki/Open_Sound_Control)

~~~
seandougall
OSC is fantastic in many ways, but it makes for a pretty inefficient (and
unnecessary) replacement for the primary use case of MIDI, which is note
on/off messages. It’s much better suited to some of the other layers that got
bolted on top of MIDI, such as MIDI Show Control. For most musical purposes,
MIDI v1 is perfectly sufficient, well-documented, optimized, and open (from
the standpoint of being free to implement).
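
To make the inefficiency concrete, here's a back-of-the-envelope comparison
in Python (the /noteon address is invented; OSC defines no standard one,
which is part of the point):

    import struct

    def osc_pad(s: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return s + b"\x00" * (4 - len(s) % 4)

    midi_note_on = bytes([0x90, 60, 100])          # status, note, velocity: 3 bytes

    osc_note_on = (osc_pad(b"/noteon")             # 8 bytes
                   + osc_pad(b",ii")               # 4 bytes: two int32 arguments
                   + struct.pack(">ii", 60, 100))  # 8 bytes

    print(len(midi_note_on), len(osc_note_on))     # 3 vs 20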

OSC absolutely has caught on in a lot of professional applications, just not
the ones that MIDI was initially designed to serve. It’s huge in the world of
theater, for example.

(Background: I wrote the initial OSC implementation for QLab [a theatrical
show control application].)

~~~
unsatchmo
My main beef with OSC is that there is no “there” there. It is hyper flexible
at the cost of you needing to design your own meta-protocol. Like every single
instrument that supports OSC has a different API with a mess of docs you need
to read. Doesn’t really get me in the mood for making music. The DAW
manufacturers had a very hard time creating user interfaces that studio
engineers could use to configure OSC, and I think that was a main reason
nobody ever used it as a synthesizer control protocol.
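
A quick illustration of the fragmentation (both address schemes below are
invented, which is exactly the point; the python-osc library is assumed):

    from pythonosc.udp_client import SimpleUDPClient

    # The same "filter cutoff" gesture, as two hypothetical devices expect it.
    # Every vendor invents its own address space and value ranges.
    SimpleUDPClient("10.0.0.2", 9000).send_message("/synth/filter/cutoff", 0.5)
    SimpleUDPClient("10.0.0.3", 8000).send_message("/param/74/value", 64)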

~~~
seandougall
Very much so. It’s a useful layer to build an API on top of, but saying a
device speaks OSC is like saying a backend service speaks HTTP.

------
tnolet
I once had a dream that every new synth, sampler or drum machine I brought
into my studio would automatically recognize the local MIDI network, join it,
and pop up as a new device in my Cubase sequencer.

All wireless of course. Like Bluetooth but actually working. A man can dream.

PS: it would also stream multitrack audio wirelessly over ASIO and expose its
inputs and outputs.

~~~
okket
Also:

- perfectly synchronised

- zero latency

This may be hard to realise with lots of devices and wireless connections.

~~~
hnarn
> This may be hard to realise with lots of devices and wireless connections.

Assuming that the latency of the wireless connections is at least relatively
predictable, you could just introduce a suitable delay like with NTP time
syncing, right? Of course, it's possible that the latency will be wildly
unpredictable and in that case it's pretty much impossible, but that doesn't
have to be the case.
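
As a sketch of the idea (the 20 ms margin is an arbitrary assumption):

    SAFETY_MS = 20.0  # assumed to exceed the worst expected network jitter

    def playback_time(sender_timestamp_ms: float) -> float:
        # The sender timestamps each event against a shared clock; the
        # receiver holds it until timestamp + margin, trading a small fixed
        # latency for jitter-free timing.
        return sender_timestamp_ms + SAFETY_MS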

~~~
barryfandango
In the parent's case this latency includes the time from a human key press (a
piano key). The other cases you mention can be compensated for, but of course
the human input is unpredictable.

------
dmoreno
I for one expect MIDI 2.0 to boost adoption of the RTP MIDI protocol (over
Ethernet). It offers much higher speed, longer distances, lower latency,
proper packet-loss recovery (though loss is not much of a problem on local
networks) and an overall much better hardware ecosystem.

Even on WiFi it is useful although latency is very jittery.

Disclaimer: I'm working on an RTP MIDI implementation for Linux
([https://github.com/davidmoreno/rtpmidid](https://github.com/davidmoreno/rtpmidid))

~~~
paranoidrobot
> Even on WiFi it is useful although latency is very jittery.

Could you elaborate on why Wifi latency jitter would be an issue for MIDI?

From an outsider's perspective and with only a vague knowledge of MIDI, it
doesn't seem to be something that should be any more sensitive to latency
jitter than other realtime applications like audio/video.

~~~
dmoreno
Music performance is a synchronized effort with very precise timing. If there
is constant lag, everything is just a bit late and a musician can
(unconsciously) compensate up to a point. But, for example (if my math is
correct), at 120 bpm 8th notes fall every 250 ms.

If the drum beat is sometimes, and only sometimes, 100 ms later than it should
be, the result is not nice at all. And the jitter is random.
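
The math, for reference:

    def note_interval_ms(bpm: float, divisions_per_beat: int) -> float:
        # one beat = one quarter note; 60,000 ms per minute
        return 60_000 / bpm / divisions_per_beat

    print(note_interval_ms(120, 2))  # 8th notes at 120 bpm: 250.0 ms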

~~~
paranoidrobot
Ok, thanks - I wouldn't have thought 100ms variation would be that noticeable.

~~~
edejong
As an amateur musician, I have played with real-time synthesis a lot. Latency,
and especially jitter in latency, is the biggest enemy. 100 ms is an eternity.
5-7 ms is still noticeable, and anything above 10 ms becomes a nuisance. Some
artists, especially drummers, hear 2 ms differences.

And it’s quite logical really. A reasonably fast tempo is 180 bpm. Playing
sixteenth notes separates them by roughly 80 ms. Then you have the separation
within swing-style sixteenths, or funk (which is often ahead of the pulse by a
tiny fraction), and the real scale is around 20 ms. That’s comparable to 50
frames per second.

This is also one reason (among others) why orchestras need conductors. The
right side would hear the left side about 100 ms later due to the speed of
sound.

------
dmoreno
As I read it, in brief, it adds much more resolution, plus an introspection
protocol using bidirectional communication, which allows property exchange and
profiles.

Both very welcome additions! Finally no stepping on CC and note velocity, and
the DAW can really know about the synth / controller capabilities, as it
already does with VSTs.

And it keeps MIDI 1.0 compatibility.
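
As a sketch of what the extra resolution means: one simple way to widen a
7-bit MIDI 1.0 value to 16 bits is bit replication (illustrative only, not
necessarily the MMA's official translation algorithm).

    def widen_7_to_16(v7: int) -> int:
        # repeat the 7-bit pattern down the wider word so that
        # 0 -> 0x0000 and 127 -> 0xFFFF exactly
        return (v7 << 9) | (v7 << 2) | (v7 >> 5)

    assert widen_7_to_16(0) == 0x0000
    assert widen_7_to_16(127) == 0xFFFF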

~~~
warent
Now we just wait a decade for any DAW to support it, plus another million
years if you're an Ableton Live user...

~~~
Kye
Reaper will probably have it quickly. They update it at least once a month and
don't have any need to upsell to the next version or tier.

------
elihu
MIDI 2.0 adds some much-needed features (per-note pitch bend, for instance),
but what I expect to happen is that the major manufacturers will only
implement the parts they care about.

I'd kind of like to see MIDI replaced outright with something built on a
somewhat different (less piano-centric) abstraction; something more voice-
oriented rather than note-oriented. Instead of having some number of fixed
pitch notes that you turn on and off and settings that apply to all notes, you
have voices that you can control independently (set volume, filter cutoff,
control the envelope, disable and enable, and so on). You can do that now with
the one-note-per-channel trick or MPE if it's supported, but it's kind of
kludgy and only works with synths that are multitimbral to begin with.

------
kitotik
> Property Exchange uses JSON inside of the System Exclusive messages.

For some reason this made me lol. The idea of cramming some JSON inside a
SysEx seems crazy.
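
It's less crazy than it sounds: ASCII JSON is already 7-bit clean, so it can
ride inside a SysEx body untouched. A toy sketch (the real MIDI-CI Property
Exchange defines its own headers and chunking, which this ignores):

    import json

    payload = json.dumps({"resource": "DeviceInfo"}).encode("ascii")
    assert all(b < 0x80 for b in payload)  # SysEx data must have bit 7 clear

    message = bytes([0xF0]) + payload + bytes([0xF7])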

Hopefully the timing is actually improved in 2.0. Once you are past “hello
world”, getting MIDI gear to sync up has always been a nightmare.

~~~
VLM
One of ESR's relatively recent rants about protocol design is about optimizing
the right thing vs. scaling a protocol for use over a long time span, so he's
a big fan of using JSON in the next generation of NTP.

One quote summarizes a couple thousand words: "We should be designing to
minimize the cost of human attention."

[http://esr.ibiblio.org/?p=8254](http://esr.ibiblio.org/?p=8254)

When you can run all the world's stratum 1 traffic on a Raspberry Pi, you
shouldn't be optimizing for bandwidth and speed, but for bug-freeness,
security, and ease of use. Likewise, back when a minimal computer system was a
multi-card S100 Z80 system, minimal MIDI made sense, but nowadays it should be
something like ad-hoc WiFi with a REST API or similar.

~~~
Ericson2314
Text is not how we optimize for correctness. I can already see my cheap synth
failing on scientific notation oddities, BOM, and other JSON gotchas.

Please read up on langsec.

------
stevehiehn
I'm trying to understand the 'bi-directional' part. There used to be an idea
of MIDI out & MIDI in. So does this mean you only need one cable connected now?

~~~
ductionist
It sounds that way - but they also say that connections over 5-pin DIN are,
and will remain, MIDI 1.0 only.

------
thefounder
The future of MIDI is AVB and/or AES67 with OSC

~~~
stuntkite
You seem to be the only person commenting that gets it. I think that might be
because anyone who cares just skipped this press release. Even calling this
MIDI 2.0 is a deception. We have MIDI, it's great. The tools that follow exist
already and MIDI 2.0 doesn't appear to add any value.

Whatever this hustle is, it can fuck right off in my book.

------
yardie
Wow. This took so long to happen that I assumed it had already happened, a
decade ago. Early RFCs went out while I was in college.

------
stuntkite
MIDI 2.0 looks like garbage and it barely features what Open Sound Control has
been doing forever. I'm interested, but the demos here were made by people
that don't make music and suck at hardware, software, blogging, and video
presentation.

Whatever this is, I'm hard pressed to care.

------
yc-kraln
Is the connector still only that DIN+USB-Micro hybrid? It looks like it's
expensive and fragile.

~~~
matchagaucho
For backwards compatibility, all MIDI 2.0 features will still work over 5 Pin
DIN cable.

~~~
makomk
According to the linked page, 5-pin DIN still only supports traditional MIDI
1.0, and there's currently no plan to change this.

~~~
elihu
...which means that almost everyone using midi 2.0 between multiple devices
will be doing it over USB, which is a shame because very few hardware
synthesizers or controllers can act as a USB host.

That's fine if you're connecting everything to a computer, but it's kind of a
step backwards from what MIDI used to be, which was an easy way to connect
almost any keyboard to almost any synthesizer made in the last three and a
half decades or so.

I've wondered if CAN bus would be a good modern-ish alternative to DIN-5 and
USB, but I don't know enough about it to say if it has some limitation that's
not immediately apparent but which would become a problem. (On the plus side,
it's much faster than plain DIN-5 midi, allows longer wires than USB, and it
seems to be supported natively on a lot of cheap microcontrollers.)
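
As a sketch of the idea (using the python-can library; the arbitration ID
mapping is pure invention):

    import can

    # Wrap a 3-byte MIDI note-on in a CAN frame. Lower arbitration IDs win
    # bus arbitration, so timing-critical traffic (clock, sync) could be
    # given smaller IDs than note data.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    bus.send(can.Message(arbitration_id=0x100,  # hypothetical "note event" ID
                         data=[0x90, 60, 100],  # note-on, middle C, velocity 100
                         is_extended_id=False))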

~~~
brokenmachine
I reckon CAN would be a great alternative. It has built-in support for message
prioritization, so important stuff like timing sync messages could have higher
priority. Also it's differential, so long cable runs are not a problem and it
has good noise immunity. Oh, and it's a bus, so you get virtually unlimited
devices on the same bus, in any topology.

Oh, and also I've always wanted to use my synths in the car!

It would be nice if the protocol of the future was wireless and could support
the actual audio as well though, but all that adds extra complexity of course.

------
tsegratis
> 16bit note velocity

For electronic drums I would have much preferred at least 24-bit. Volume is by
far and away a drum's most expressive dimension, so it will be limited by a
16-bit velocity range. Adding velocity curves just masks the problem.

Though of course 16-bit is orders of magnitude better than the current 7-bit
range.

~~~
coldtea
> _For electronic drums I would have much preferred at least 24-bit. Volume is
> by far and away a drum's most expressive dimension, so it will be limited by
> a 16-bit velocity range._

It won't be limited at all; real human players have no control as subtle as
256 levels, much less 65K levels... Nobody would even notice anything...

~~~
edejong
Not my experience. The expressive control is exponential, so either you clip
at one end of the velocity spectrum, or you get discrete steps at the lower
end.

~~~
sjwright
I agree with the previous person that 256 levels of amplitude _should_ be
sufficient purely when it comes to velocity as long as these levels are spread
appropriately (i.e. non-linearly). If the expressive control is exponential,
that suggests to me that the data itself should be exponential.
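
For instance, one could spread 256 levels evenly in decibels rather than
linearly in amplitude (the 60 dB range below is an arbitrary assumption):

    def velocity_to_gain(v: int, floor_db: float = -60.0) -> float:
        # v = 255 maps to 0 dB (full scale), v = 1 to just above the floor;
        # equal velocity steps then mean equal steps in dB.
        if v <= 0:
            return 0.0
        db = floor_db * (1 - v / 255)
        return 10 ** (db / 20)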

I know literally nothing about drumming, but I've no doubt there are plenty of
characteristics of a drum strike other than velocity, such as the mass behind
the hit (e.g. is it being hit with the weight of the stick alone or with the
drummer's whole arm), the location of the strike, or the release time.

~~~
teetow
Speaking as a producer who's programmed a lot of natural-sounding drum
tracks...

There are plenty of variables other than velocity -- location on the drum head
being the prime one, but there are others. Because of this, sampling an
acoustic drum kit involves capturing a suitable number of random variations,
and the end result is often gigabytes (i.e. hours of content) in size, even
though each sampled hit is just a few seconds long. Not having enough
variation in your sample set makes the programmed drums sound unnatural,
since excessive repetition doesn't gel with how we experience acoustic drums.

Certain variables are more important than others, though. One notable sound is
the 'rim shot', where the drummer strikes the drum head and rim simultaneously,
causing all kinds of constructive interference and producing a very powerful
sound. It's the holy grail of rock drumming. In drum programming, rim shots are
often a separate stack of samples with its own velocity layers, each layer with
a set of random variations.

~~~
TheOtherHobbes
The new attributes in the spec should make it possible - although not entirely
easy - to include 2D position information with note on messages.

I would have been happier with a more general note spec that left the number
of attributes and their resolution open and system-definable. This would allow
2D/3D/4D/etc control of note events, super-high resolution pitch definitions
for microtonal support, and so on.

Bandwidth really isn't an issue any more, so there's no reason to limit the
spec to the lowest common denominator.

Even so - 2.0 is better than the limitations of 1.0. So that's progress.
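
For reference, my reading of how a MIDI 2.0 note-on UMP carries this: a
64-bit packet with a 16-bit velocity plus a single 16-bit attribute field
(treat the field layout below as illustrative, not authoritative):

    def ump_note_on(group: int, channel: int, note: int,
                    velocity16: int, attr_type: int, attr_data16: int):
        # MIDI 2.0 channel voice messages are message type 0x4 (64-bit)
        word0 = ((0x4 << 28) | (group << 24) | ((0x90 | channel) << 16)
                 | (note << 8) | attr_type)
        word1 = (velocity16 << 16) | attr_data16
        return word0, word1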

~~~
dbtx
3D positioning was added to 1.0, described in RP-049 dated 2009-07-23. The
parameters' MSB is 61 (0x3D, nice)

It bears some resemblance to OpenAL source parameters which is little surprise
as Creative seems to have written it. Some obvious differences:

- sources' positions are sent in azimuth/elevation/distance, i.e. spherical
coordinates instead of rectangular

- the positions are always relative to the listener instead of often having a
listener that moves around in a stationary 3D world

- the source is now allowed to be both spatialized _and_ stereo with extra
parameters for angular distance between the "speakers", the roll angle of the
pair, etc.

I located the PDF maybe on Google, maybe by accident, more than a few years
ago. (I think it was from MIDI.org even then.) I had to make an account at
MIDI.org in January just to look through the specs, and it was there. Now I
can't find a link, so I'm afraid it disappeared behind the MMA member paywall.
<sigh> Here's to progress.

------
vkaku
Not as fun to read outside the music industry. The lack of consistent samples
has been one of MIDI's major issues, and even in games these days nobody uses
it, sadly.

~~~
vnorilo
MIDI is just a control protocol. You are thinking of General MIDI [1], which
defined a vague set of sounds associated with program numbers, allowing a
sequence to be played on different engines with horribly mixed results, as you
said.

MIDI 2.0 brings additional capabilities to device interconnection, and AFAIK
has nothing to do with GM (which not many people care about anymore).

1:
[https://en.m.wikipedia.org/wiki/General_MIDI](https://en.m.wikipedia.org/wiki/General_MIDI)

~~~
burfog
Lack of consistent samples is far worse without General MIDI. You could get a
flute switched with a snare drum.

~~~
coldtea
Lack of consistent samples is not an issue with professional use of MIDI.

Professionals (musicians, engineers, etc) don't use MIDI as a general-purpose
playback method, they use it to control their samplers, synths, external
effects units etc.

They don't need "consistent samples" because they provide their own samples,
different for every song (plus pure synth sounds, etc).

