Hacker News
MIDI 2.0 Prototyping announced (midi.org)
397 points by kristiandupont 58 days ago | 195 comments



Only semi-related, but I still have to share.

The guys at Roland are genius. I have a somewhat old Roland RD-150 digital keyboard. While searching online for the user manual, I found a 'firmware update' on the official site.

I was curious. My keyboard is pretty old school, it doesn't have USB or anything like that. I downloaded the firmware update and opened the zip file.

It contained a readme... and a .midi file. It sort of blew my mind. They were sending firmware updates over MIDI! You had to press a certain key combination on the keyboard and play back the MIDI file into the MIDI in, and the firmware update would be complete.

True hackers.


That's not Roland being genius, though. That's literally one of the things SysEx messages were added to the MIDI 1.0 spec for. It's also why it's taking so effing long for WebMIDI to become a standard: allowing SysEx would mean your browser could in theory tell your MIDI device to brick itself.
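For the curious, the framing that makes this possible is tiny: a SysEx message is anything between a 0xF0 start byte and a 0xF7 end byte, with a manufacturer ID up front and vendor-defined 7-bit data in between. A minimal sketch (the payload bytes here are made up; real Roland messages add a model ID, address, and checksum):

```python
# Sketch of MIDI 1.0 System Exclusive (SysEx) framing. Everything between
# the start and end bytes must be 7-bit data (high bit clear); the layout
# after the manufacturer ID is entirely vendor-defined, which is what
# makes tricks like firmware-over-MIDI possible.

SYSEX_START = 0xF0
SYSEX_END = 0xF7
ROLAND_ID = 0x41  # Roland's assigned manufacturer ID

def make_sysex(manufacturer_id, payload):
    """Wrap a 7-bit payload in SysEx framing bytes."""
    assert all(b < 0x80 for b in payload), "SysEx data bytes must be 7-bit"
    return bytes([SYSEX_START, manufacturer_id, *payload, SYSEX_END])

# hypothetical payload -- not a real Roland message
msg = make_sysex(ROLAND_ID, [0x10, 0x42, 0x12, 0x40, 0x00, 0x7F])
```

A receiver just ignores SysEx messages whose manufacturer ID it doesn't recognise, which is why any device can safely sit on the same MIDI chain.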


In fairness, Roland founded the MMA and wrote most of the MIDI spec. The odds are fairly good that Roland did invent SysEx.


I thought Dave Smith was primarily responsible? Just to put another notch in Silicon Valley’s belt, I guess.


Dave Smith was the first to get on board with the project and made substantial contributions to the design of MIDI, but it was Ikutaro Kakehashi who started the whole endeavour. MIDI was heavily influenced by DCB, a proprietary interface developed by Roland in 1981.


Yep, when I did a little work for the Midifighter[1] controller, I used sysex to send configuration (but not firmware updates) to the device, to be stored in EEPROM. It was a nice and seamless mechanism.

[1] https://www.midifighter.com/ I made the first version of the configuration tool.


I updated my TC Electronic Nova System via MIDI as well. It's a common feature across MIDI devices.


I discovered something similar lately in the KORG Volca Sample - they use audio for data file transfer, like a modem. They even made their implementation free software (BSD-3): https://github.com/korginc/volcasample


Émilie from Mutable Instruments (Eurorack) open sources all her modules and uses audio to update modules with new firmware. It's pretty cool! https://github.com/pichenettes/stm-audio-bootloader


The whole Volca line gets updates this way - it's kinda amazing in its simplicity.


This is also how all communication is handled between Fractal Audio Axe-FX units and the PC/Mac software Axe-Edit. Everything is done over MIDI, and nearly everything within MIDI via SysEx messages. Parameters of FX blocks can be individually controlled with short SysEx messages that target the FX block, then the specific parameter within it, combined with the new value to set and a checksum to wrap it all up. MIDI's low bandwidth (3125 bytes/sec) explains why it's so slow to upgrade, and also why there is such high latency in modifying parameters and syncing the Axe-Edit software with the Axe-FX hardware unit.
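The slowness is easy to quantify. A back-of-the-envelope sketch (the firmware size is a made-up illustrative figure, not Fractal's actual image size):

```python
# Why SysEx firmware updates and editor syncing feel slow: MIDI 1.0's DIN
# transport runs at 31250 baud with 10 bits per byte on the wire
# (1 start + 8 data + 1 stop), i.e. 3125 bytes/s. On top of that, SysEx
# carries only 7 useful bits per data byte.

wire_bytes_per_sec = 31250 // 10           # 3125 bytes/s on the wire
payload_bits_per_byte = 7                  # SysEx data bytes are 7-bit
effective_bytes_per_sec = wire_bytes_per_sec * payload_bits_per_byte / 8

firmware_size = 512 * 1024                 # hypothetical 512 KiB image
seconds = firmware_size / effective_bytes_per_sec
print(f"~{seconds / 60:.1f} minutes")      # ~3.2 minutes
```

And that ignores the per-packet handshaking and acknowledgements most SysEx bulk-dump protocols layer on top, which only make it worse.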


Semi-semi-related, but I have a Xerox Printer-Scanner and I tried to update it using my Mac: the official instructions said to use the terminal to send the firmware update to the printer through the printer spooler, like you would print a document. I don't know if that's standard for printing hardware but I found it quite weird, also.


Speaking of this, I was trying to reverse-engineer the operating system for the DSI Tempest because you can send its binary data as firmware updates over SysEx messages. So you have the binary source of the OS, but not the source code. Unfortunately, I can't figure out what processor it uses without opening the thing...


There are people who are pretty good at guessing the CPU architecture just from a hex dump.

Though CPU architecture is only part of the problem; identifying the actual SoC is easier when you open the device.


Open it! You never know what you will find inside..


likely a void warranty on a $2k glorified synth. just return it and buy one that publishes specs.


The MT-32 sound module (also a Roland creation) was the target device of a lot of PC games circa 1988-1992. The more sophisticated soundtracks would reprogram the built-in patches, also using SysEx for this operation. Not only that, they would put text messages on the MT-32's display!


A huge amount of vintage electronic music gear, from multiple manufacturers, receives firmware updates via MIDI. Keyboards, synths, even electronic drum systems. It is extremely common, though not so much anymore thanks to USB.


Similarly, you can save and load synth presets using MIDI on the microKORG. Surprisingly, it doesn't work with cheaper MIDI cables.


Fractal Audio does the same with their amazing Axe-FX guitar processor. All firmware updates are supplied as a MIDI file containing a SysEx data stream. I was amazed to see that they updated internal firmware the same way they uploaded patches, and even waveform data for impulse responses. Definitely a genius way to maximise the protocol.


Yet they are screwing up with their sound engines and overall hardware :)


I think it's quite astonishing how solid the MIDI standard feels -- it's embraced by the entire industry and even though it's nearly 40 years old, it has worked very well for many types of instruments and software that couldn't even have been imagined at the time.


We tolerate MIDI, but it's really pretty terrible. It's ubiquitous, but it's just barely good enough to have avoided being replaced.

The first major issue is simply the lack of bandwidth. The physical layer operates at 3125 bytes per second, which just isn't enough for anything more than a single instrument with relatively sparse control data. MIDI devices can ostensibly be daisy-chained, but that's a really dumb thing to do because you get horrible timing problems. Back when we still used a lot of hardware synthesisers, it was the norm to have a large multi-port MIDI interface connected to your computer, providing one interface per instrument. That still doesn't solve your problems if you have a multitimbral module with lots of polyphony - if you start sending control change messages, the timing of your note on/off messages will fall apart.

The second major issue is the lack of resolution. MIDI is an 8-bit standard, which is generally acceptable for velocity but grossly inadequate for most control change messages. There are some pretty nasty workarounds being used to avoid zipper noise when you adjust a control parameter; this is most commonly an issue when sweeping a resonant filter. There are various hacks to send 14-bit control change messages, but they're non-standard aside from pitch bend.
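The standard 14-bit mechanism pairs a controller's MSB with a matching LSB controller (MIDI 1.0 reserves CC 0-31 as MSBs and CC 32-63 as their LSBs). A sketch of how a receiver combines the pair:

```python
# Combining a 14-bit Control Change pair per MIDI 1.0: CC n (n in 0-31)
# carries the most significant 7 bits, CC n+32 carries the least
# significant 7 bits, giving 16384 steps instead of 128.

def combine_14bit(msb, lsb):
    """Combine a CC MSB (CC 0-31) with its paired LSB (CC 32-63)."""
    return (msb << 7) | lsb

# e.g. CC#1 (mod wheel) MSB plus CC#33 LSB
assert combine_14bit(0x7F, 0x7F) == 16383  # full scale
assert combine_14bit(0x40, 0x00) == 8192   # midpoint
```

The catch is that very few hardware devices actually transmit or honour the LSB half, which is why in practice this reads as a hack rather than a feature.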

MIDI is also built around the western scale, with no real accommodation for other tuning systems. We're forced to use the crude bodge of sending a note on message immediately followed by a pitch bend message; MIDI doesn't support per-note pitch bend, so you can't have polyphony and microtonality on the same channel. This is a fairly niche issue in the west, but it's a showstopper in a lot of other musical traditions.
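The "note on plus pitch bend" bodge looks something like this in numbers: pitch bend is a 14-bit value centred at 8192, and with the common default range of ±2 semitones each semitone spans 2048 steps (a sketch, assuming that default range):

```python
# Computing the pitch bend value needed to reach a microtonal pitch,
# assuming the widely used default bend range of +/-2 semitones.
# 8192 is the centre (no bend); the full 14-bit range is 0..16383.

def bend_for_offset(semitone_offset, bend_range=2.0):
    """14-bit pitch bend value for a pitch offset in semitones."""
    return round(8192 + semitone_offset * 8192 / bend_range)

quarter_tone_up = bend_for_offset(0.5)  # 8192 + 2048 = 10240
```

Because this bend applies to every note on the channel, each microtonal note needs its own channel - which is exactly the limitation MPE later formalised a workaround for.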

OpenSoundControl addressed these issues and more besides, but it lacks widespread support because it was developed unilaterally rather than as an industry-wide collaboration. The spec is very powerful, but it just isn't very nice to work with. It's exciting to see that the MMA have a number of big players involved with the development of MIDI 2.0. We've been talking about fixing MIDI for a long time, but it seems like there's finally the traction to get a new standard widely adopted.


> There are some pretty nasty workarounds being used to avoid zipper noise when you adjust a control parameter

Workarounds, sure, but I wouldn't call it a nasty hack to simply interpolate between the discrete MIDI steps.
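The usual minimal form of that interpolation is a one-pole smoother that slews toward each new CC value instead of jumping to it (the coefficient here is an illustrative value, not anything standardised):

```python
# De-zippering a stepped 7-bit CC stream: rather than applying each new
# value instantly, move the internal parameter a small fraction of the
# way toward the target on every processing block.

def smooth(current, target, coeff=0.01):
    """Move `current` a fraction of the way toward `target`."""
    return current + coeff * (target - current)

value = 0.0
target = 100.0
for _ in range(500):
    value = smooth(value, target)
# after 500 steps, value has converged most of the way to the target
```

The trade-off is a small lag behind the incoming control data, which is usually inaudible and far preferable to audible stair-steps in a filter sweep.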

> There are various hacks to send 14-bit control change messages

NRPN is part of the standard and allows 14-bit control messages, so no hacking required.
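Concretely, an NRPN write is a sequence of four ordinary Control Change messages: CC#99/98 select the parameter number, CC#6/38 carry the 14-bit value. A sketch of building that sequence (the parameter number and value are arbitrary examples):

```python
# Building one NRPN (Non-Registered Parameter Number) write as the four
# Control Change messages defined by MIDI 1.0: select the parameter with
# CC#99 (NRPN MSB) and CC#98 (NRPN LSB), then send the value with
# CC#6 (Data Entry MSB) and CC#38 (Data Entry LSB).

def nrpn_messages(channel, param, value):
    """Build the four CC messages of one NRPN write (channel 0-15)."""
    status = 0xB0 | channel  # Control Change status byte
    return [
        (status, 99, (param >> 7) & 0x7F),  # NRPN MSB
        (status, 98, param & 0x7F),         # NRPN LSB
        (status, 6,  (value >> 7) & 0x7F),  # Data Entry MSB
        (status, 38, value & 0x7F),         # Data Entry LSB
    ]

msgs = nrpn_messages(0, param=260, value=10000)
```

What parameter number 260 actually controls is entirely up to the receiving device, which is the "non-registered" part - and why NRPN maps have to be looked up per synth.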

> MIDI is also built around the western scale, with no real accommodation for other tuning systems.

There's the MIDI Tuning Standard since 1992, so there is a standardized accommodation for other tuning systems, and a bunch of synthesizers that implement it. Just not part of the MIDI 1.0 standard itself.
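For reference, the MIDI Tuning Standard encodes a target frequency as three 7-bit bytes: a base note number plus a 14-bit fraction of a semitone, giving a resolution of about 0.006 cents. A sketch of the encoding:

```python
import math

# Encoding a frequency per the MIDI Tuning Standard (1992): a base
# semitone number plus a 14-bit fraction of a semitone in units of
# 1/16384 semitone, so essentially arbitrary tunings are expressible.

def hz_to_mts(freq_hz):
    """Encode a frequency as (semitone, msb, lsb) per the MTS format."""
    semitones = 69 + 12 * math.log2(freq_hz / 440.0)  # distance from A440
    base = int(semitones)
    frac = round((semitones - base) * 16384)  # 1/16384-semitone units
    return base, (frac >> 7) & 0x7F, frac & 0x7F

print(hz_to_mts(440.0))  # (69, 0, 0): concert A, no fractional offset
```

So the resolution argument against MIDI tuning doesn't really hold; the practical problem is patchy implementation, not the format.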

> OpenSoundControl addressed these issues and more besides, but it lacks widespread support because it was developed unilaterally rather than as an industry-wide collaboration.

IMO OSC probably lacks widespread support exactly because it doesn't specifically address most of these things. It's a protocol for sending timestamped and namespaced data over a network, and anything much more specific than that is just a matter of ad-hoc convention. It solves getting high precision data to devices in a timely manner, but it doesn't address the issue of getting a client to play a note.

In general I think that MIDI 1.0 just being tolerated is a great way to summarize the situation, and I'm glad that actual MIDI 2.0 implementations are seemingly around the corner.


> MIDI doesn't support per-note pitch bend

"MIDI Polyphonic Expression" is a new MIDI standard created by us, ROLI, Apple, Moog, Haken Audio, Bitwig and others for communicating over MIDI between MPE controllers ... The principal reason for MPE is to get around a limitation of MIDI: Pitch Bend and Control Change messages must apply to all notes on the channel. This prevents polyphonic pitch bends and polyphonic Y-axis control (which uses Control Change messages) over a single MIDI channel. MPE solves this problem by sending each note's messages on a separate MIDI channel, rotating through a defined block of channels. Here's a brief summary of MPE: http://www.rogerlinndesign.com/mpe.html

So there are workarounds today; not that a new MIDI version wouldn't work better, however.
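The channel-rotation scheme at the heart of MPE can be sketched in a few lines (the allocation policy and channel range here are simplified; real implementations handle channel exhaustion, round-robin reuse, and master-channel messages):

```python
# Sketch of an MPE note-to-channel allocator: each sounding note gets
# its own "member" channel so it can carry its own pitch bend and CC
# data. Channel 1 is conventionally reserved as the global master
# channel; channels 2-16 are members in this simplified setup.

class MpeAllocator:
    def __init__(self, member_channels=range(2, 16)):
        self.free = list(member_channels)
        self.held = {}  # note -> channel

    def note_on(self, note):
        ch = self.free.pop(0)  # simplest policy: first free channel
        self.held[note] = ch
        return ch

    def note_off(self, note):
        ch = self.held.pop(note)
        self.free.append(ch)
        return ch

alloc = MpeAllocator()
a = alloc.note_on(60)  # each note lands on its own channel,
b = alloc.note_on(64)  # so each can bend independently
```

This also makes the bandwidth complaint above concrete: every held note now generates its own continuous stream of bend and CC traffic, all multiplexed onto the same 31.25 kbaud link.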


MPE is a bit of a mess. First, it requires a dedicated MIDI “port” per device, and second, you have latency issues since you’re sending a lot of control data over a 31.25 kbaud link.

Then there’s the fact that many manufacturers promise MPE support in their devices, but rarely in 1.0 firmwares... you either ship with it or wait 6 months for either a crummy implementation or a statement from the manufacturer walking back support. It sounds like a non-trivial software implementation task, especially on devices with resource-starved I/O processors.


I think if you're doing MPE, you can afford USB-MIDI


I could be wrong, but USB MIDI is still limited by the same bandwidth and latency constraints. I don't think it's any faster in terms of baud rate, despite being carried over a faster bus.


USB MIDI isn't limited by those constraints, because they only apply to the legacy physical link.


Pretty sure I don't want cumbersome and amazingly terrible MIDI cables anymore in 2019. Give me an updated MIDI protocol, and then talk/listen to my device over USB. There is no reason to use incredibly bad MIDI cables for this. And before we all go "but USB cables are more fragile", don't use a standard consumer or studio cable during a live performance, and done. Let these dumb MIDI ports die with MIDI 2, they are not of this time.


I'd have to disagree with this. I've done lots of stupidly over-complex live music production, and MIDI has been rock solid and reliable whenever I've done any of it. The cables are physically robust, and the protocol works well. It's an incredibly well-designed system. MIDI ports are not dumb - in fact I'd venture that they are one of the reasons that MIDI has become so popular - they're simple and universal, and cables and connectors are easily made yourself - not something that's the case for USB, particularly USB-C - can't imagine trying to solder anything like that together!

Back in the day I made loads of custom switch boxes and so on to allow whatever bizarre system I thought was a good idea into a reality - in fact I only got the last one out of my studio about 3 weeks ago when I finally removed the 19" rack I've had in there for the last 20+ years. The ubiquity and compatibility of MIDI has been one of the reasons that it's still around today. Forgetting that would be a huge mistake, IMO.


Couldn’t agree more.


USB, with its max 5 m cable length unless you use special repeaters etc., wouldn't be a good choice IMHO.

If you wanted to switch to something common, I'd personally say Ethernet (or something "normal", and a specified transport over Ethernet). Handles the distances and data rates more than fine, parts to implement it are widely available, it's already used more and more in stage and recording setups, so specialized components are available too. In the audio sector, standards to distribute precise clocks are around, one could maybe reuse those to get it past P2P connections (although that kind of gear is still kind of expensive right now). Still quite a step up in complexity though...


There's already a lot of professional sound gear that uses Ethernet with a custom layer 3 (iirc) protocol for digital audio, so supporting Ethernet would sure make sense.


Your comment doesn't make any sense to me. The 5-pin DIN plug was specifically chosen for MIDI because it's a rugged, reliable connector suitable for use in professional environments. If you think that MIDI 2.0 should only use USB and that USB cables aren't suitable for use on stage, then you're saying that nobody should use MIDI on stage. That strikes me as profoundly silly.

MIDI is the lingua franca of music equipment. It controls mixing consoles and effects units, it synchronises music to lighting, it provides timecode and controls recorders. You might not use that stuff, but it's an essential part of the spec that is used every day by professional musicians and engineers.


that USB cables aren't suitable for use on stage

I think they're saying one should use rugged USB cables for live performance, rather than a typical consumer-grade cable.


I do not think MIDI should only use USB, but I absolutely want the option to stop using MIDI cables. As for ruggedness, that's a reversed argument. If MIDI over USB-C is a thing, and the industry embraces it, you get rugged enough cables and connectors to work on stage and on the set. There is, right now, without MIDI 2 even being done, literally no reason for anyone to make rugged USB-C cables. So saying "there are no rugged USB cables, MIDI has proven itself" is circular.

If MIDI2 can work with USB-C in a way that devices can be linked, instantly, because the USB-C circuitry they use can speak just enough USB to verify the connection is between two MIDI devices, then I will be more than happy to give my MIDI cables away and never bother with them again.

And yeah of course a stage or studio setup will still need them for the foreseeable future, it's not like decades of midi instruments vanish overnight. But you'll also start seeing USB cables thrown into the mix, and eventually you might not see MIDI cables at all in 20, 30 years.

It has to start somewhere.


USB has a huge problem compared to MIDI cables: it's not optoisolated. That alone almost takes it out of the running for controlling synths via laptops.


To be fair the approach taken by MIDI for optoisolation (and getting essentially differential interface for free with that) is in fact one big ugly hack.


Why is it a problem that it's not optoisolated? Audio hardware (including synths, laptops, keyboards, DAWs etc.) is connected via USB all the time.


Connecting a controller keyboard to a laptop isn't an issue because it shares power and ground with the laptop. But synths on separate power are another issue entirely. For example, the Waldorf Blofeld has a USB MIDI connection; I attach mine to my Macbook Pro and its audio presents a ground hum and an electric buzz whose pattern matches the Macbook's processor utilization. Attach via 5-pin DIN MIDI and the problem is gone.

USB wasn't designed for this use case, and it is a very, very common problem.


Except sometimes it is: I have an audio workstation with two "yellow" USB ports (as opposed to standard USB2's black and USB3's blue) that are dedicated audio USB ports with additional circuitry specifically for connections to audio devices. My Komplete Audio 6 kept buzzing and letting me hear my CPU activity in my old system (it is both amazing and the most annoying thing ever to hear your mouse move as HF signals), and has had perfect audio since switching to those for-audio USB ports on the new motherboard. As far as I know only Gigabyte makes these (my specific board is a Z170X Gaming 5) but they exist, and they're great.

Certainly, USB in general can't be used for this purpose, but if there's true industry buy-in for MIDI over USB then it's not hard to imagine more and more USB hardware will get made that can compensate for that problem. For instance, I wouldn't expect laptops to get isolated USB, but a shiny new post-MIDI-2 audio interface sure would.


Ground loops. Stages are very electromagnetically noisy, mainly because of lighting dimmers. If you connect a mains-powered computer to a mains-powered master keyboard via a USB cable, you've just created a big antenna for all of that RF noise. That noise will travel through the ground plane into your USB audio interface.

In that environment, isolation is a constant concern. Connections that are inherently isolated make fault-finding far more straightforward and obviate the need for external isolation boxes.


Must be a solved problem, since USB is ubiquitous.


Only solved in the sense that you can buy additional hardware for ground loop isolation for USB and audio connections. Ground loops and RF interference are a persistent and recurring problem in pretty much all USB audio setups, except perhaps when you're solely using relatively high-end hardware that has been designed specifically for audio (cables included). Without isolation, all it takes is one bad component to ruin the whole setup.

For even a simple USB audio interface setup connected to a single audio source I would recommend isolation and balanced audio cables.


It's a "solved problem" in the sense that it doesn't matter for the use cases USB was designed for. Most people are using USB to connect a few devices at most and over short distances, so these effects are negligible. But that is not true for stage audio.


Not even remotely.


There are still many reasons to prefer MIDI cables over USB. The main one is that you can plug pretty much any MIDI device into any other and expect it to just work. With USB, you almost always at least need a computer or MIDI host device to act as an intermediary. MIDI cables can also be a lot longer than USB and (if the receiving device implements the spec properly) the devices are optically isolated.

There are a lot of potential technologies that could improve upon the antiquated MIDI cable standard, but USB is actually worse in a lot of ways.

CAN bus might be a good option; it's fast enough, and is already available on a lot of cheap microcontrollers. I haven't worked with it, though; maybe there's a shortcoming I'm not aware of.


Most MIDI controllers support 14-bit resolution using MSB and LSB transmission - IIRC this is part of the core MIDI specification, not a hack.

I agree that there are plenty of things to improve on the old midi spec, but I think for a lot of people the issues are irrelevant - while physical midi is 31k, most people I encounter now use multiple soft synths in their DAWs, and aren't even aware of the original spec and speed of midi (unlike back in the day with multiple synths chained off a single midi output on an atari ST where it was definitely an issue).

I remember ZIPI, which promised similar useful changes and came to nothing - I think for the vast majority of people MIDI doesn't get in the way of their work.

Did mlan also promise similar improvements?


> We tolerate MIDI, but it's really pretty terrible. It's ubiquitous, but it's just barely good enough to have avoided being replaced.

sounds like C


sounds like VBA


The physical layer operates at 3125 bytes per second which just isn't enough for anything more than a single instrument

The typical MIDI 'instruction' requires 3 bytes. So, 1000 non-intense instructions per second. Not good enough for intense controller mods, but if the played 'instruments' do most of the work, that's over 80 ips (notes e.g.) for each of 12 voices. (Also that's 3125 bytes per MIDI input, of which you're welcome to use many.)
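Those figures check out with a couple of integer divisions:

```python
# Verifying the message-rate arithmetic: a typical channel voice message
# (e.g. Note On) is 3 bytes, and the DIN transport delivers 3125 bytes/s
# (31250 baud, 10 wire bits per byte).

bytes_per_sec = 31250 // 10      # 3125
msgs_per_sec = bytes_per_sec // 3
per_voice = msgs_per_sec // 12   # split evenly across 12 voices
# ~1041 messages/s overall, ~86 per voice -- consistent with the
# "over 80 instructions per second for each of 12 voices" figure
```

(Running status, which lets consecutive messages of the same type omit the status byte, pushes the real-world number a bit higher still.)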

Perfectly adequate most of the time ... and one reason that the original, genius design lasted this long.

NOT to say that it isn't insanely great to hear this news.


> MIDI is an 8-bit standard

And considering one of those bits is used for status, it’s effectively 7 bits worth of resolution (128 discrete values is... not great for anything but maybe western scale note pitch values).


not sure that's useful information to highlight how bad MIDI is, given that you can effectively get any frequency out of a hardware device that supports the coarse+fine channel tuning instructions. It's high time MIDI gets an update, but "I can't get my non-western scale out of this device!" is not one of the problems MIDI 1.0 actually suffers from.


Microtuning is a long way from being a solved problem in MIDI 1.0. MIDI Tuning Standard is sufficiently poorly implemented that there are several non-standard alternatives in widespread use.

This is very much a "Falsehoods Programmers Believe About Names" type of situation - what western engineers perceive as an insignificant edge case is a constant frustration for many non-western musicians. The answer to the question "How do I play music from my own culture using electronic instruments?" should not be "Well, it's complicated...". Support for non-western scales should not be a clumsy, optional part of the spec.

https://www.midi.org/articles/microtuning-and-alternative-in...


Agreed. This reminds me of a few discussions I saw about Unicode being silly and over the top and that ASCII is perfectly functional. Non-English languages can work around the spec!


Tell how western-centric they were to those Japanese engineers from Yamaha, Roland, Korg and Kawai who worked on MIDI 1.0 specs.


Japanese traditional music is primarily pentatonic or heptatonic and aligns closely to western 12-tone equal temperament. Contemporary Japanese music is entirely compatible with western music theory. That's a happy coincidence for a Japanese-American project, but it's distinctly inconvenient for Arabic, Persian or Indonesian musicians who use notes that have no useful equivalent in the western scale.


Sorry, I don't think I made my point well here. I was trying to say how 128 discrete values is incredibly coarse-grained for a lot of uses, but I should have mentioned that it seems to be worse for Control Change parameters than Note + Velocity. 7 bits maps to an 88-note western-style keyboard a lot better than it emulates a potentiometer on an analog filter.


The physical layer operates at exactly the maximum throughput then available on an Apple II serial port.


I love MIDI, it's pretty much a universal standard. Every device I need it on has it (my modular is the exception for sound sources), and it's just a standard 5-pin DIN socket that is easy to solder.

Yes, it could be better but I don’t just tolerate it.


> it's just barely good enough to have avoided being replaced

The probability this is true for any system in use (software, natural language, tax code, home appliance...) asymptotically approaches 1 with its age in years.


OTOH if MIDI is so terrible how come there are probably millions of tracks produced using it?

Sure, it's old and outdated, but it does the job.


> We tolerate MIDI, but it's really pretty terrible. It's ubiquitous, but it's just barely good enough to have avoided being replaced.

OK, what I'm going to say is not constructive, but, boy, how I hate this type of comments. Where were you 40 years ago? Why did you not prevent it from being terrible? What did you do to replace it with something better?

Yes, it may be "suboptimal" by the 2019 "standards", but back then, I'm pretty sure, people put a lot of thought in to it and tried to make it as good as possible. Just because it looks childish on the current hardware does not make it "pretty terrible".

EDIT: It's almost like complaining that the 555 is absolute rubbish compared to the 8266 that can do sooo much more, and asking why they just did not come up with something better back then.


>OK, what I'm going to say is not constructive, but, boy, how I hate this type of comments. Where were you 40 years ago? Why did you not prevent it from being terrible? What did you do to replace it with something better?

I'm not saying that they should have come up with something better. MIDI was an incredible breakthrough in 1983, but it was designed in 1983. That's a very, very long time in technology. We've known about and dealt with the shortcomings of MIDI for decades, but it's difficult to replace a deeply-entrenched standard. MIDI has been pushed as far as possible without breaking backwards-compatibility, but it is now long overdue for the industry to move in unison and create something fit for the 21st century.

Imagine if you dealt with lots of devices on a daily basis that had a 31.25 kbaud serial interface. Would that kind of annoy you? Would you say "this archaic serial format is just great!"? No, you'd be kind of cheesed off that you were stuck with it.

MIDI 2.0 looks great. It's a really big deal for the electronic music industry. You can only usefully say why it's a big deal if you acknowledge all of the crummy, annoying parts of MIDI 1.0.


> boy, how I hate this type of comments

And I'm not a fan of this kind, either.

Yes, it might be unfair to poke holes in a standard from the 80s from a 2019 perspective, but it was GP's assertion that the MIDI standard was unsuitable to last as long as it did without an update. _That_ is what is being criticised as pretty terrible, and that's fairly clear from what GP wrote.


When I started work on version 1 of Polychord for iPad (almost 10 years ago, wow!) MIDI wasn’t available yet on iOS as a public framework. I decided to use the MIDI format for the underlying synth I built, regardless. It made sense because it was already a solved problem, and was well-suited to a resource constrained environment.

In the end it was maybe the most impactful decision I made, as MIDI flourished and Polychord found a niche as a controller. Inter-app MIDI became one of the first ways in which iOS apps could work together rather than being little islands unto themselves, which then led to Audiobus, which led Apple themselves to open the door to cross-app features on the platform.

It may be a dated standard, but a standard nonetheless; and that’s valuable in ways that are hard to quantify.

Hacking on top of MIDI is not always easy, but it can be gratifying. You also discover fun Easter eggs — like the names of defunct manufacturers buried in the data bytes of the spec.


Musicians don't throw instruments away like other markets throw their gadgets away. I have a room full of 30-year old synths that still work the same as they did the day I bought them, and they're still awesome - I will never 'upgrade'. They're instruments.

It'd be great if the other realms of electronic gadgetry could accomplish the same degree of uptake as electronic musical instruments - however, the anti-patterns that make modern consumer gadgets so appealing to the people who market and sell them, are a smell to musicians who have very little patience for designed obsolescence. Musicians don't tolerate that much .. anyone attempting to induce it in a product intended for music-making will find that they won't get far in this industry.


Hm, I don't agree fully. There is zero hardware innovation for digital keyboards at least. Some old electronic instruments are 'vintage'; what makes a Moog is the vintage Moog sound, so yes, you can't upgrade from there. But most digital tools are not instruments in that sense and definitely have the corporate gadget-marketing schemes going on.

- Yamaha's Motif XS has some sounds that were adopted as signature sounds for some subgenres of funk, soul and gospel, but these are just samples that the upgraded versions XF, MX, MOXF have as well for this reason, and I don't see anyone with the XS anymore.

- Nord clearly limits the features, storage and processing power of each keyboard they release so that they can upgrade it slightly the following year (the latest 2018 flagship model has 480MB of sample memory vs. the preceding 2015 model's 380MB).

- Korg rereleased the Kronos with an SSD instead of a HDD and marketed it as a new machine. You could open it up and DIY for 80 bucks and save yourself ~$1.5k+ upgrade costs.

- Roland still slaps the Juno brand name on random iterations of digital instruments that just have an extra button for this or that function that could have been added with a software update if they wanted to.

New digital keyboards these days are just software updates that model familiar sounds slightly better, packaged in "new" hardware. The $3.5k price tag for flagship keyboards that have barely changed in size/shape/material for years is a clear sign. I wonder why no manufacturer has just come out and said "this is our flagship keyboard until 2030, buy it for $2k and subscribe to software updates for $10/month" or "new software version at $100 every year". That way what buyers pay for would be much more closely connected to what they are actually getting.


You're just describing stage keyboards. I'll grant you that there is zero innovation there. But that's because the customers in that market want a very simple thing which is essentially a commodity, to which the market has converged. It's like complaining that the market for peanut butter has no innovation.

Some historic digital innovation: it seems to me that almost everything PPG and Waldorf ever made or makes would be considered innovative, including the upcoming Kyra. Other innovative digital synths would include the VS and Wavestation family; the Fizmo, Morpheus, UltraProteus, and Proteus 2000; the FS1R; and perhaps the Prophet X. And I must grudgingly admit that the MicroKorg was innovative given what they crammed in there for the price.

I'll grant you that most of those devices are two decades old. The current incentive for innovation (and hence risk) for digital devices has been destroyed by software synthesizers and laptops. Most digital stuff nowadays has gone for cheap rather than new, in the hopes of going after the poor musician. Monologue, Monotribe, Volca, Reface, Boutique. I don't know if that's bad, but it does make me sad.


Exactly. The stage keyboard is dead innovation wise. The synth market is alive and well. The whole modular boom, Korg's analogs, Moog and smaller vendors are making exciting stuff.


What about Minilogue XD then? Does its extensibility count for being innovative?


It seems to me that the Minilogue XD is just a stripped down Prologue. The Prologue is fairly prosaic (and largely analog, since we were talking about digital) -- being largely 8 Monologues in a box. Except for one item, the programmable oscillator facility. That is definitely innovative. Extensibility is pretty interesting; of course, it's been done before (Mutable, etc.), but not on this scale.


What really flummoxes me is how my Yamaha SY99, which is a digital FM synth from 1991 with 512 KB of memory and, by modern standards, a tiny, tiny CPU, manages to do things that are really CPU intensive on a normal OS (like large plate reverb, tons of polyphony, on 12 tracks at a time) without ever lagging. The only real limit I've hit on it is loading in proper samples, versus waveforms, though this[1] insane project seems to have hacked memory expansion boards for the old SY/TG range. This is a piece of purely digital equipment that is now 27 years old, and yet my phone doesn't last for more than 2 years without feeling obsolete.

Why is this? I've always wondered if these golden-age hardware FM synths provide any benefits over software (people talk about the super HQ DACs, etc., but I'm not sure if I buy that these components are better than modern commodity equipment?). The thing is, to my ear, the SY99/SY77 etc. do sound a lot better than e.g. Dexed or FM8, but is that just because Yamaha patented a bunch of the FM algos and architectures, and, with a retail price of £3,000 in 1991, they put more care into a complete end product than a VST FM developer realistically would? Or, can really low memory, low CPU, super purpose-built hardware somehow win against a modern OS?

[1] https://www.sector101.co.uk/waveblade.html


Modern systems largely sacrifice consistent latency for more throughput and more energy efficiency. This leads to modern systems being able to do much more, but also having frequent little responsiveness hiccups and occasional larger hiccups. If you want to do a smaller job consistently, a simpler embedded system architecture is still the way to go. This is especially true when doing any kind of low-level signal generation, whether it be audio, video, or control.

As for the sound, I don't think Yamaha's implementation of "FM" [1] has been comprehensively reverse-engineered in the way that, say, the Commodore SID has. There are a lot of little quirks and edge cases to take into account when considering the full operational ranges of the various chips. Even MAME's implementation is allegedly still distinguishable after 20 years of tweaking and testing against hundreds of games, and I imagine most VSTs are using significantly less mature code.

[1] I recall reading that the actual implementation is phase modulation because directly doing FM in the digital domain would put quantization error in the frequency domain, i.e. notes would be off-pitch instead of having noise.


Comparing difficulty between FM and SID is a bit apples to oranges. The SID has analog variance in its filter, and no two are exactly the same, so a pure DSP implementation has to do rather heavy math to approximate the filter. As a result most people lean on a handful of SID emulation cores because otherwise their emulation is just wrong.

OTOH the things that tend to differentiate implementations of Yamaha's FM are the sample rates, bit depths, and envelopes used, plus any output distortion (the YM2612, for example, has a well-known distortion in its implementation that adds a harsher edge). The core algorithms they use are something a high school student could pick up and do something with, and emulation quality issues are more a matter of it being easy to write an emulation that gets it 98% correct without covering the last steps, since those details really need error-for-error reproduction of the original ASICs and boards, including any timing issues - things which frustrate emulator writers everywhere.


> manages to do things that are really CPU intensive on a normal OS

Dedicated circuits.

This isn't all in CPU - the CPU on these synths mainly sets parameters for the sound generating chips.

not the same chips, but something like this probably:

https://en.wikipedia.org/wiki/Sound_chip#Yamaha_2


Phase modulation synthesis as Yamaha does it is computationally very cheap. That was one of the things that really boosted its fortunes in the 80s: you could produce very complex timbres (albeit often nasal or harsh sounding) with just a few sine waves pushing each other around. It's literally just a sine wave whose output pushes the phase of the next sine wave (operator in Yamaha's parlance) forward. You don't even need floating point math, and Yamaha got by with 12-bit DACs.
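To make the "operators pushing each other's phase" idea concrete, here's a hedged sketch of a two-operator phase-modulation voice in Python. The sample rate, function name, and parameters are my own assumptions; the real chips used fixed-point lookup tables and 12-bit output rather than floating point, but the core computation really is this small:

```python
import math

SR = 48000  # assumed sample rate; Yamaha's chips used their own fixed rates

def pm_voice(freq, ratio, index, n):
    """Two-operator 'FM' as Yamaha computes it: the modulator's output is
    added to the carrier's phase, not its frequency."""
    out = []
    for t in range(n):
        carrier_phase = 2 * math.pi * freq * t / SR
        modulator = math.sin(2 * math.pi * freq * ratio * t / SR)
        out.append(math.sin(carrier_phase + index * modulator))
    return out

# index=0 degenerates to a plain sine; raising it adds sidebands
# (a brighter, often nasal timbre), all from two sine evaluations.
samples = pm_voice(freq=220.0, ratio=2.0, index=3.0, n=256)
```

Two multiplies and two sine lookups per operator per sample is why the 80s hardware could afford so much polyphony.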

The SY99/77 really are great, though. I sold my TG77 and FS1R recently, but will probably get myself a used SY99 keyboard at some point. Or save up for a Montage.



The best-known Yamaha FM synth chips don't even have analog output; they pair with a separate DAC chip.


Stage keyboards aren't really supposed to be innovative. If you try and compete with Mainstage, you end up with a PC in a weird form factor. Stage keyboards are built to be rugged, reliable and intuitive instruments for performing musicians. These users overwhelmingly don't want frequent software updates and innovative new features, they want a tool that does the job with a minimum of hassle.

There's a reason why so many keyboard players adore their Nord keyboard, despite it having relatively old-fashioned technology. Nord instruments are incredibly ergonomic and sound absolutely killer, largely because of their obsessive attention to detail and their deep relationships with working musicians. A trivial example is the music stand - it's rock solid, it fits in the gig bag, it slots in place in two seconds and it's wide enough to hold four pages. It's a really important feature for a lot of Nord users, but it just wouldn't occur to most engineers.

Pressed steel, Neutrik jacks and keybeds aren't subject to Moore's Law. High-end stage keyboards will always be expensive, because they're built to high standards in relatively small quantities. The lack of innovation just isn't particularly relevant, because old technology does the job perfectly well.


Yeah but then we also fall in the "and we throw the old one away" category. Sure, if we find the perfect one, we'll be sticking with it for a while, but if the new thing is actually better: stage pianos are tools, they're not precious instruments lovingly hand crafted by instrument makers. I will happily throw a Roland RD800 out the window if you give me an RD2000 instead. Same for a Nord Stage3 vs an old Nord Stage. Maybe if they were vintage, as pointed out somewhere else, but they're just not. There isn't a single modern stage piano that isn't just an excellent sampler with ideally a rock solid hardware UI. Zero reason to care about replacing those if the new model has better samples, or better hardware UI, or both.


The original Nord Stage is still an excellent instrument and commands high prices on the used market. There's no reason to throw the old one away and few reasons to buy the new one, because the old one does pretty much everything you need it to do.

We're talking about very mature technology. The difference between Nord's 100MB piano sample sets and a 20GB mega-multisample is extremely subtle. The synth and effects engine on the Nord Stage sounds fantastic. You could add a ton of sample memory and processing power, but the sonic benefits would be absolutely marginal.


None of those devices are being thrown away and many of them will be re-surfaced and cycled as 'classics' within a few years and used in mainstream music again.

You can't say that about our computers or mobile phones, except among the hoarding/collectors scene.


I have to agree. 14 years ago Korg released the OASYS which was meant to be open and expandable. They then released the Kronos which, while based largely on the same technology, is a step backwards for open development and expansion.


What innovation do we need? Technology is probably way beyond what's needed; the real bottleneck is with labels and bands.


Most fields don't throw their gadgets away like we do with computer hardware. I regularly work with electrical equipment older than that.


Even if those synths stop working, it is usually some capacitor that has failed or some other easy-to-replace part and it's good to go.


It's solid as in "it works and has aged well", but the spec has accumulated lots of cruft and is not straightforward at all. I once upgraded a Rust MIDI parser to learn Nom, thinking that MIDI should be an easy format. I was very wrong.


It's the simplest spec I know of, except for the sysex part, which, being "whatever a vendor wants it to be", you can basically ignore entirely. Anything that has to work with sysex is essentially an independent parser that you write in addition to your MIDI parser.
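Since a sysex message is just 0xF0, an arbitrary vendor-defined payload, then 0xF7 (EOX), a generic parser can lift the blobs out and hand them to vendor code untouched. A rough sketch (function name and the Roland-style example bytes are illustrative, not from any real device spec):

```python
def split_out_sysex(data: bytes):
    """Separate vendor-defined sysex blobs (0xF0 ... 0xF7) from the rest."""
    blobs, rest = [], bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0xF0:               # start of system exclusive
            end = data.index(0xF7, i)     # scan forward to the EOX byte
            blobs.append(bytes(data[i:end + 1]))
            i = end + 1
        else:                             # ordinary channel message byte
            rest.append(data[i])
            i += 1
    return blobs, bytes(rest)

# A note on, an opaque vendor blob, then a note off:
stream = bytes([0x90, 60, 100, 0xF0, 0x41, 0x10, 0x42, 0xF7, 0x80, 60, 0])
blobs, rest = split_out_sysex(stream)
```

Everything between the framing bytes is the vendor's problem, which is exactly why firmware-over-MIDI and browser sysex scares both work.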

Getting it cleaned up, though, will be wonderful. It's about a decade late, but hey. Music industry. It moves at a speed parallel to us, just at a 10 year delay. Looking forward to machine learning synths in 2030!


I think the only thing I wish they'd done differently, considering the limitations of the time, is to disambiguate how you turn a note off. I don't know why you would release 1.0 of a spec while specifying that note off and note on with velocity 0 are interchangeable.


It's because of running status. If your status byte doesn't change you don't have to retransmit it. This cuts the latency down by 1/3ms when you're just playing notes which is the most common case. You should only send note off messages (instead of note on velocity 0) when your instrument actually supports release velocity, which few do.


Ah, that makes sense. I'd have said that note on velocity 0 would have made most sense as a single choice because of running status, but I didn't know that release velocity had any practical significance.

That said, I'm sure I've seen cheap controllers send note off messages though I doubt they supported release velocity.


Yeah, I think you're describing a very common experience for developers hoping to work with MIDI. "Hey, a standardized format! How hard can it be? Oh . . ."

Even without achieving all the things outlined above, just clearing away ambiguity in the spec would be a huge leap forward.


Are you sure that you aren't confusing MIDI with SMF? With MIDI my experience was that it was very simple to implement and understand the standard. It took me <100 lines of C. SMF on the other hand looks quite a bit messier.


Ah yes, it's about parsing a MIDI SMF file. I wasn't aware there's such a large difference between that and MIDI-the-wire-protocol.


As successful as MIDI is, I can't help but think that it has something to do with the public's disengagement from music.

If you listen to musicians who were big in the 1970s (say David Bowie or Billy Joel or CSNY), it seems like they banned guitars and drums and other real instruments, and that everything was made with some kind of "music word processor", and pop music became glib and lifeless.


20 years ago plenty of people who should have known better complained about how crappy 'MIDI music' sounded. I liked to point out that MIDI doesn't make any sounds. It's a communications protocol. A Casio is not a Moog. Hollywood managed to work miracles with MIDI.

Similarly, if you find modern music 'glib and lifeless', that's not the fault of the instruments. For some reason the audience hasn't chosen to support something better. And let's just say that a lot of producers like to loop the same sample endlessly, because .... (they can't play? they lzy bches?)


or it's the sound they want - an exactly repeating groove was what James Brown was trying to achieve with a band, and various disco producers with tape loops, before there were sampled loops.


Maybe so. But I can tell instantly whether I'm listening to Jimmy's band or a pasted-together wannabe track. All respect, but there ain't no Bootsy button on the 303.


I don't see what the rise in popularity of electronic music has to do with the specific communication protocol used.


I don't think it was at all MIDI's fault. It was going to happen with or without it. Unless you just consider MIDI a touchstone of the general computerization of the industry.


It already was happening without it. Before MIDI, there was CV.


And drums have never recovered.


What do you mean? Are there not still plenty of rock bands with real drummers?


Why it is important in one paragraph:

“The MIDI 2.0 initiative updates MIDI with auto-configuration, new DAW/web integrations, extended resolution, increased expressiveness, and tighter timing -- all while maintaining a high priority on backward compatibility. This major update of MIDI paves the way for a new generation of advanced interconnected MIDI devices, while still preserving interoperability with the millions of existing MIDI 1.0 devices. One of the core goals of the MIDI 2.0 initiative is to also enhance the MIDI 1.0 feature set whenever possible.”


This is fantastic news, I can’t wait to have CC parameters with more than 128 levels. Does anyone know what the new resolution for CCs will be?


Open Sound Control has that, but support for the format is very selective.


Good point. OSC is pretty good; I should invest some time looking into it.


If you're looking for information on OSC, I'm currently in a computer music class and we have a section on it: http://www.cs.cmu.edu/~music/cmsip/slides/06-networks.pdf


if you want to check out a nice OSC project, I invite you to test https://ossia.io, the OSC sequencer I'm developing


They wrote 32 bits on the website (against 7 in the current spec)


32 seems excessive.


Why? 64 bit chips are dirt cheap these days, if anything it's surprising they went for 32 bit instead of just going for 64 to buy the protocol one or more decades of "no one's going to run into the limitations of that".

Don't design it for what humans expect, design it for what the machines that need to talk to each other operate on. In that sense, 32 bits is the bare minimum you want in a new spec.


Because MIDI has very low bandwidth. People in complex environments doing complex things saturate it already. The new version may somehow allow higher bandwidth but it wants backward compatibility as well. We'll see, I'm sure this version will be better and I hope it becomes more popular again.


Because the intent of a CC is to emulate a knob on a synth.

A synth knob turns a little less than 360 degrees. While 7 bits doesn't give quite enough resolution to have one degree per code, 32 bits would give 11 million codes per degree. Even a thousand codes per degree would suffice... but 11 million?

There would also be no analog to digital converter that could measure much more than 24 bits with any accuracy whatsoever.

And if you were to use your proposal of "just going to 64", it would multiply the existing 11 million values per degree by another 4 billion, giving roughly 44 quadrillion steps per degree of knob travel. No one can sense that. If the knob were used to modulate e.g. LFO speed, it could send changes that wouldn't even be heard for millions of years.
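The arithmetic behind the comparison, as a quick check (360 degrees of travel is the commenter's generous assumption; most knobs turn less):

```python
steps_7bit = 2 ** 7     # 128: MIDI 1.0 CC values
steps_32bit = 2 ** 32   # the resolution MIDI 2.0 reportedly uses
degrees = 360           # assumed full knob travel

per_degree_7bit = steps_7bit / degrees    # well under 1 step per degree
per_degree_32bit = steps_32bit / degrees  # ~11.9 million steps per degree
```

So 7 bits can't even give one step per degree, while 32 bits gives millions; the question is whether anything other than a physical knob wants that range.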


> because the intent of a CC is to emulate a knob on a synth.

The intent of CC is to facilitate control changes. Higher range means greater flexibility in what exactly this means. Just off the top of my head I can think of at least one realistic application where 360000 discrete steps simply wouldn't suffice but a higher range control protocol would be useful: precision audio seeking, because recorded audio used in music is often longer than 360000 samples.


Then you're not being creative enough. Digital turntables are extremely precise. I once hooked up my NS7 (1) to an old Windows laptop, spun the turntable, and crashed the computer. I think highly precise applications like this could find valid uses for 32-bit control.


I know right? I was expecting 16 bit tops for CC.


Plenty of potential applications of control change messages where 16 bits would be insufficient, for example precision seeking through recorded audio.


Interesting, you're right: for this type of thing, "MIDI" controllers usually had to resort to HID (e.g. the platters on most DJ controllers). I'm curious to see how people will use MIDI 2.


Wait until you start applying pitch shifting/note bending. You'll want that 32-bit resolution.


With this spec, my LFO can be on a separate device and spit MIDI to modulate sample-accurate playback of an entire season of a TV show (assuming that it can be loaded into memory) recorded at 96kHz.

Talk about future proofing. No complaints here.
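The "entire season" claim actually checks out; a quick sanity check on how much audio one 32-bit value can address sample-accurately at 96kHz:

```python
samples = 2 ** 32   # positions addressable by a single 32-bit value
rate = 96_000       # samples per second
hours = samples / rate / 3600
# hours comes out around 12.4: roughly a short TV season, sample-accurate
```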


Any chance there'll be a way to get a few channels of audio into the same container stream as MIDI 2.0, or at least down the same cable?


I recently read the biography (for lack of a better term) of Sequential Circuits. Dave Smith (the founder, who still makes awesome synthesizers today) was a primary stakeholder in pushing the standard through. What essentially happened was that every company had its own data/sync standard for electronic instruments. Then Oberheim released a system that many manufacturers saw as a threat to their business. Dave Smith convinced Roland to just adopt something, and the combined momentum of Sequential and Roland would turn adoption in their favor. That pretty much worked. IIRC the MIDI association then made Tom Oberheim president.


I'm currently saving my pennies for a Prophet 6 synth from Dave Smith Instruments (which I believe has now been renamed back to Sequential Circuits). I grew up with the classic sound of the Prophet 5 through all the music I listened to in the 80s.

From memory, Yamaha, who owned the Sequential brand were magnanimous enough to give Dave back all his IP some years ago to allow him to build the next generation of synthesisers based on his old ones, and it looks like they have given him back his original company name now too!


Mind sharing which book/article you're referencing? Sounds interesting.


I’m not OP, but I suspect it was “The Prophet from Silicon Valley: The Complete Story of Sequential Circuits,” for which Dave Smith wrote the foreword.



Interesting how there's a new copy for $40 and two used ones for >$1000! Out-of-control pricing algorithm?!


MIDI is great, and I am so thankful that they had the foresight to require a mandatory opto-coupler in the input to avoid ground loops of interconnected gear. Even today, very few devices have a comparable insulation in their USB connection.


On my guitar pedalboard, I tried to be clever once and hooked all of my digital pedals via a USB hub to a main connector. Horrible ground loops and digital noise were introduced. Does anyone make an electrically isolated USB hub?


A bit harder for USB, because of the power it supplies. USB is also not connected in loops, where MIDI will form loops with audio connections.


Will it allow me to poll the state of the knobs on my keyboard? Many instruments have MIDI extensions to allow you to check the state of the controls, but having one built in to the language would be great, especially for WebMIDI.


nope.


Is the midi connector the oldest one still in active use? It is my prime example of getting things right at the first (or at least first-ish) try.


Depending on what you'd consider a 'connector,' the headphone jack has quite a long lineage. I believe the 3.5mm headphone jack was originally created in the '50s. Of course, on the very distant planet that phone vendors live in, the headphone jack is not in active use, because people there were apparently absolutely begging for phones that were fractions of a millimeter thinner. In any case, if you count the larger variants, it's been around for a fair bit longer than that, even.


Interesting that you went with the 3.5mm jack, and not the 1/4" jack, which predates the 20th century and will be in use until analog electric signals become impossible, rather than being tied to whether or not "cabled portable audio devices" stay a thing.


I mentioned them, but ordinary people don't use 1/4" jacks. In fact, I don't know who still uses 1/4" jacks. Audio professionals seem to use XLR, though I am not one myself. I know 3.5mm connectors are still in active use by the general public.


Every person with a guitar, or an older HiFi system, or people who care just enough about audio to own an audio interface, in addition to everyone who owns, say, a mini keyboard with audio out. It's all 1/4".


Lots of people with studios use 1/4” Jacks (and sockets!). I have hundreds of cables with them. I use XLR for Microphones, speakers, and a few other devices but all my synths and most effect units are 1/4” and some are TRS (balanced) 1/4”.

3.5mm I have a lot of because of my EuroModular.


> In fact, I don't know who still uses 1/4" jacks.

People with electric guitars, at least.


TIL.

Looks like there is at least a bit more variance on the 1/4" jack, with multiple types that are in use. Meanwhile, virtually any consumer 3.5mm jack is going to be TRS or TRRS, though admittedly there are definitely some less-standard things there too.

So the 1/4" jack has been around a lot longer, but may be less 'standard' in that sense. Of course, I also have no idea whether 3.5mm jacks have always been as 'standard' as they are today.


They are standardized, though: EIA RS-453/IEC 60603-11


I think the 3.5mm will still be widely used 20 years from now.

In the wired realm, it doesn't get much better than the 3.5mm jack from a user convenience point of view.


Except maybe the 1/4" TRS jack.


The 1/4" audio jack goes back to about 1878: https://en.wikipedia.org/wiki/Phone_connector_(audio)


Quarter inch jack has to be older than most, as it's pre-1900.

Still standard equipment on hi-fi, guitars, headsets and other uses. Even the screw on 1/4 to 3.5mm adaptor has become standard. Only smartphones decided to be awkward.


Connectors come in all ages, but it surely has to be the oldest digital interface that you might still occasionally find on new devices in the non-food section of a larger supermarket.


I suppose one could dispute "in active use", but D-sub connectors predate DIN connectors and are still in use today. I see them primarily on industrial equipment and, obviously, not on PCs much anymore. Given that MIDI is pretty specialized, too, I'd say that D-sub is at least as common, if not more so.


In addition to the other great suggestions here, I'd like to add some of the various domestic power sockets in use around the world.


I think MIDI over USB is most common nowadays, especially for connecting with a computer. There is no lack of devices with the 5 pin DIN connector though. I recently bought a new keyboard and it has In and Out DIN connectors besides USB. The keyboard model was released in 2016, I think, so nothing vintage.


The U.S. garden-hose thread dates back to 1890.


What about the headphone jack?


Spade connectors probably predate that.

For it to make sense I think you'd have to say specialised connectors.

Phono?

XLR (microphones) are quite old and still widely used.

Car "cigarette lighter" fittings, and lightbulb "Bayonet" fittings are pretty old too; not sure if the latter counts.

https://en.m.wikipedia.org/wiki/XLR_connector


Perhaps Ethernet, or TRS.


RS232, DB9 ..


Interesting, introduction in 1960 by EIA.

Also

> The lack of adherence to the standards produced a thriving industry of breakout boxes, patch boxes, test equipment, books, and other aids for the connection of disparate equipment.

The value of a standard is also when nobody follows it :)


I want to believe this will get decent adoption in my lifetime. MIDI has so many historical limitations that have become ridiculous in a modern context, like only having 16 channels per bus or the very low control precision. These have led to hacky solutions and compromises that often end up creating a poor experience for users.

I've worked on plugins that require per-note tuning and pitch bend, and the current best solution, MPE, is limiting. You're forced to use an entire MIDI bus to control a single instrument. This is a huge hack, and it means you can't create a MIDI plugin that controls multiple instruments.

Vendors are slow to adopt new standards in the audio world (as in many domains), and I hope this will be followed up with good diplomacy. We can learn from the non-adoption of Steinberg's VST3 standard as a path to avoid.


Steinberg is no longer granting licenses for VST2, so legally there can be no more new VST2 developers. But not all the hosts support VST3 yet and VST3 is over 10 years old. It's a giant pain in the ass.

They are trying to force it, but people just don't want it.


MPE (Multi Polyphonic Expression) would be nice to have.


Considering MPE is in MIDI 1.0 as an extension, you can bet it will be in MIDI 2.0


Given Roli is on board, I'm pretty sure this will be included. The Seaboard is pretty much the killer app for MPE.


My impression was that this is included in Midi 2.0


There has been something better than MIDI for a long time; it's called OSC. It's a shame the industry never embraced it. I had some contact with Roland and the MMA, and they explained to me that the industry has invested so much into MIDI in terms of hardware development that it will be tough to replace.


> All companies that develop MIDI products are encouraged to join the MMA to participate in the future development of the specification

The good news: a new enhanced MIDI spec is in the works!

The bad news: you're gonna have to fight for it.


Why aren't all input devices, including mice, (text) keyboards, gamepads, etc., MIDI devices?


The MIDI protocol is very specific to music, it works in the context of "note on, key pressed this hard. other note on. note off. note off." etc.

The "normal" connectors are fairly bulky by modern standards (large DIN-style, like on really old keyboards and mice).

The electrical level would have been a good basis, so one could imagine a world where MIDI was extended to also transport keyboard keypresses etc., but it would have meant extending the protocol quite a bit, or weirdly mapping concepts onto the music-specific basics. (EDIT: actually, it might be kind of overkill, and thus more expensive than simpler methods. What's useful in a large studio or stage environment isn't really needed on my desk.)


In the days of the dinosaurs, some audio cards had a connector that could be used as either a joystick port or a MIDI port (https://en.wikipedia.org/wiki/Game_port ).


I was looking at one yesterday whilst having a clear out...


MIDI's great value is in encoding a domain-specific mapping. That's one of the differences from OSC, which, by being too open-ended, provides no standard interpretation.


Vaguely in this vein - there was a game on the Atari ST called "MIDI Maze" which actually used MIDI as a network protocol!


With the thread about Quake Path Tracing engine, I read about FPS history, and Midi Maze is one of the early famous ones

https://en.wikipedia.org/wiki/First-person_shooter


I'm not an expert so I might be wrong, but probably because a one-size-fits-all solution can be a bad idea. Driver complexity is one of the reasons most devices are not compatible with all existing OSes by default.


MIDI is - and should be with future standards - simple, slow, and reliable: musical instruments don't need a high volume or many types of information, so writing the drivers should be fairly simple.

Having said that, some midi to usb converters are still not up to full compliance with the 80s MIDI 1 standard


Does that mean that the converters implement the protocol themselves? I expected them to be roughly FTDI USB UART chips, tied to the MIDI baudrate.


I'm not entirely sure, but the fact that there are compatibility/compliance issues leads me to believe it's a mixture of both.

Either way, the drivers may be the issue: Parsing the serial correctly etc.


I sometimes wonder whether the ongoing USB-C trainwreck is partly due to trying to be everything for everybody, leading to complex and flaky HW and SW.


I'm not sure I understand your comment. USB-C is still just USB, which was already a universal messaging transport solution. Driver complexity hasn't gone up just because a new physical plug got invented. Presumably you're thinking of the increased number of devices that now use USB with a C connector, but the fact that it can do "even more device types!" now makes literally zero difference when the spec was already designed to allow all devices: the only difference is that HW/SW is now finally capable of transmitting reliably enough to handle the massive loads those now-supported devices require (HDMI, Ethernet, etc.).


But now your USB-C cable can brick your device.


Sysex calls already allowed that since day 1.


Curious, what’s the best method currently for automatically mapping or discovering controls of a connected midi instrument?

Especially if you don’t have a predefined midi map/template.

Or is manually activating controls and defining your map still the best way?


There's stuff like NI's NKS [0], but it's highly vendor-specific and very inflexible. I assume this will remain the case, given the relative variability of MIDI controllers: the position and number of control elements, not to mention their implicit relationships (knobs that are directly above or below a fader, for instance), and their near-universal non-responsiveness; there is no way to, say, label a control for you to see physically.

Interesting issue to solve though, I'd bet my money on modular MIDI controllers becoming more prominent in the future.

0 - https://www.native-instruments.com/en/specials/komplete/this...


Good news!

Hopefully it will be as solid as the current version.


Good news for those of us with steppy filters.


Please make the connector smaller!


I'd have to disagree with this - I've had quite a few devices that have used mini-dins, and they aren't anywhere near as robust. Maybe have options for different sized connectors, but they should all be equal in terms of functionality (a bit like USB-B connectors), and need to be as robust as the current ones. I can think of many times where the simplicity and size of the DIN connectors on MIDI gear has been the difference between 'no worries, I can just about get my arm down the back of this rack and get the plug in' and 'no chance, can't feel where this needs to go'. And DINs are strong enough that you can rotate them to get them to the correct orientation without causing damage.



I wonder if they will also support USB-C just like Thunderbolt 3 got tacked onto the USB-C connector?


MIDI is just a protocol; it doesn't care which cable it gets sent over, as long as both ends of the cable know which message transport system is being used.

As such, pretty much every modern MIDI device can already do MIDI over USB (including the ones that have dedicated MIDI ports on the back).


> MIDI is just a protocol; it doesn't care which cable it gets sent over, as long as both ends of the cable know which message transport system is being used.

Strictly speaking, no. MIDI also specifies the mechanical and electrical connections. There is a MIDI USB device class specification, but that's not an implementation of the MIDI 1.0 standard and just roughly corresponds to its packet protocol.


Ah, I did not know this. Thanks.


MIDI is already well-supported over USB, I don't see why MIDI 2.0 should be any different.


when is the article from?


Friday, 18 January 2019



