The guys at Roland are genius. I have a somewhat old Roland RD-150 digital keyboard. While searching online for the user manual, I found a 'firmware update' on the official site.
I was curious. My keyboard is pretty old school; it doesn't have USB or anything like that. I downloaded the firmware update and opened the zip file.
It contained a readme... and a .midi file. It sort of blew my mind. They were sending firmware updates over MIDI! You had to press a certain key combination on the keyboard and play back the MIDI file into the MIDI in, and the firmware update would be complete.
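For the curious: firmware-over-MIDI rides on System Exclusive messages, since SysEx is the spec's escape hatch for arbitrary vendor data. Here's a rough sketch of the framing involved; the 7-bit packing scheme and checksum are illustrative assumptions in the general Roland style, not the RD-150's actual format (0x41 is Roland's real manufacturer ID, though).

```python
# Sketch of how a firmware chunk might be framed as a MIDI SysEx
# message. The manufacturer ID (0x41, Roland) is real; the packing
# and checksum details here are illustrative assumptions.

def pack_7bit(data: bytes) -> bytes:
    """SysEx data bytes must have the high bit clear, so 8-bit
    firmware data is commonly repacked into 7-bit groups."""
    out = bytearray()
    for i in range(0, len(data), 7):
        chunk = data[i:i + 7]
        msbs = 0
        for j, b in enumerate(chunk):
            msbs |= ((b >> 7) & 1) << j
        out.append(msbs)                    # collected high bits first
        out.extend(b & 0x7F for b in chunk)  # then the low 7 bits
    return bytes(out)

def sysex_packet(payload: bytes) -> bytes:
    body = pack_7bit(payload)
    checksum = (128 - sum(body) % 128) % 128  # Roland-style checksum
    return bytes([0xF0, 0x41]) + body + bytes([checksum, 0xF7])

pkt = sysex_packet(bytes([0xFF, 0x00, 0x80]))
```

Played back from a .mid file, each such packet arrives at the instrument's MIDI In as an ordinary SysEx event, which is why a sequencer from the 80s can deliver a firmware update.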
I made the first version of the configuration tool for the Midifighter: https://www.midifighter.com/
CPU architecture is only part of the problem, though; identifying the actual SoC is easier when you open the device.
The first major issue is simply the lack of bandwidth. The physical layer operates at 3125 bytes per second, which just isn't enough for anything more than a single instrument with relatively sparse control data. MIDI devices can ostensibly be daisy-chained, but that's a really dumb thing to do because you get horrible timing problems. Back when we still used a lot of hardware synthesisers, it was the norm to have a large multi-port MIDI interface connected to your computer, providing one port per instrument. That still doesn't solve your problems if you have a multitimbral module with lots of polyphony - if you start sending control change messages, the timing of your note on/off messages will fall apart.
The second major issue is the lack of resolution. MIDI is an 8-bit standard, which is generally acceptable for velocity but grossly inadequate for most control change messages. There are some pretty nasty workarounds being used to avoid zipper noise when you adjust a control parameter; this is most commonly an issue when sweeping a resonant filter. There are various hacks to send 14-bit control change messages, but they're non-standard; pitch bend is the only 14-bit value in the core spec.
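The usual 14-bit scheme pairs a coarse CC (0-31) with a fine CC 32 numbers higher; whether one calls it a hack or not, the mechanics look like this (the channel and controller numbers below are arbitrary examples):

```python
# Sketch of the 14-bit coarse/fine control change pairing: a CC in
# 0-31 carries the MSB and the CC 32 numbers higher carries the LSB.

def encode_cc14(channel: int, cc: int, value: int) -> list:
    """Split a 14-bit value (0-16383) into two 7-bit CC messages."""
    assert 0 <= value < 1 << 14 and 0 <= cc < 32
    status = 0xB0 | channel                   # control change status byte
    return [(status, cc, value >> 7),         # MSB on CC n
            (status, cc + 32, value & 0x7F)]  # LSB on CC n+32

def decode_cc14(msb: int, lsb: int) -> int:
    return (msb << 7) | lsb

msgs = encode_cc14(0, 1, 10000)  # e.g. a high-resolution mod wheel move
```

The zipper-noise problem is visible in the numbers: 7 bits gives 128 steps across a filter sweep, while the paired scheme gives 16384.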
MIDI is also built around the western scale, with no real accommodation for other tuning systems. We're forced to use the crude bodge of sending a note on message immediately followed by a pitch bend message; MIDI doesn't support per-note pitch bend, so you can't have polyphony and microtonality on the same channel. This is a fairly niche issue in the west, but it's a showstopper in a lot of other musical traditions.
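The note-on-plus-bend bodge mentioned above boils down to splitting a fractional note number into the nearest equal-tempered note and a pitch bend offset. A sketch, assuming the common default bend range of +/-2 semitones:

```python
import math

# Sketch of the note-on-plus-pitch-bend workaround: approximate an
# arbitrary frequency as the nearest equal-tempered note plus a
# pitch bend offset, assuming a +/-2 semitone bend range (the usual
# default, but instrument-dependent).

BEND_RANGE = 2.0    # semitones covered by full bend, an assumption
BEND_CENTER = 8192  # midpoint of the 14-bit pitch bend value

def note_and_bend(freq_hz: float) -> tuple:
    midi = 69 + 12 * math.log2(freq_hz / 440.0)  # fractional note number
    note = round(midi)
    bend = BEND_CENTER + round((midi - note) / BEND_RANGE * 8192)
    return note, bend

# A note 40 cents above A4:
note, bend = note_and_bend(440.0 * 2 ** (0.4 / 12))
```

Since the bend applies to the whole channel, a second simultaneous note with a different offset is impossible, which is exactly the per-note limitation the comment describes.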
OpenSoundControl addressed these issues and more besides, but it lacks widespread support because it was developed unilaterally rather than as an industry-wide collaboration. The spec is very powerful, but it just isn't very nice to work with. It's exciting to see that the MMA have a number of big players involved with the development of MIDI 2.0. We've been talking about fixing MIDI for a long time, but it seems like there's finally the traction to get a new standard widely adopted.
Workarounds, sure, but I wouldn't call it a nasty hack to simply interpolate between the discrete MIDI steps.
> There are various hacks to send 14-bit control change messages
NRPN is part of the standard and allows 14-bit control messages, so no hacking required.
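For reference, an NRPN write is just four control change messages: CC 99/98 select the 14-bit parameter number, then Data Entry CC 6/38 carry the 14-bit value. A sketch with an arbitrary example parameter:

```python
# Sketch of an NRPN write: select the parameter with CC 99/98, then
# send the 14-bit value with Data Entry CC 6/38. The parameter number
# and value here are arbitrary examples; real assignments are
# manufacturer-specific.

def nrpn_messages(channel: int, param: int, value: int) -> list:
    status = 0xB0 | channel
    return [
        (status, 99, param >> 7),    # NRPN parameter MSB
        (status, 98, param & 0x7F),  # NRPN parameter LSB
        (status, 6,  value >> 7),    # Data Entry MSB
        (status, 38, value & 0x7F),  # Data Entry LSB
    ]

msgs = nrpn_messages(0, 0x0105, 12345)
```

The trade-off versus a plain CC is bandwidth: four 3-byte messages per update instead of one, which circles back to the 3125 bytes/second ceiling.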
> MIDI is also built around the western scale, with no real accommodation for other tuning systems.
There's the MIDI Tuning Standard since 1992, so there is a standardized accommodation for other tuning systems, and a bunch of synthesizers that implement it. Just not part of the MIDI 1.0 standard itself.
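The MTS single-note frequency format is pleasantly direct: a base MIDI note plus a 14-bit fraction of a semitone (steps of 1/16384 semitone, about 0.0061 cents). A sketch of the frequency-to-bytes conversion (the 432 Hz example is arbitrary):

```python
import math

# Sketch of the 3-byte frequency encoding used by the MIDI Tuning
# Standard: a base MIDI note plus a 14-bit fraction of a semitone
# above it (units of 1/16384 semitone, ~0.0061 cents resolution).

def mts_bytes(freq_hz: float) -> tuple:
    midi = 69 + 12 * math.log2(freq_hz / 440.0)
    note = int(midi)                     # equal-tempered note at or below
    frac = round((midi - note) * 16384)  # 14-bit fraction of a semitone
    return note, frac >> 7, frac & 0x7F

base, msb, lsb = mts_bytes(432.0)  # a "432 Hz A" sits below A4 = note 69
```

With ~0.006-cent steps, the resolution comfortably exceeds what any listener can discriminate, so the accommodation is real; the catch, as noted, is patchy implementation.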
> OpenSoundControl addressed these issues and more besides, but it lacks widespread support because it was developed unilaterally rather than as an industry-wide collaboration.
IMO OSC probably lacks widespread support exactly because it doesn't specifically address most of these things. It's a protocol for sending timestamped and namespaced data over a network, and anything much more specific than that is just a matter of ad-hoc convention. It solves getting high precision data to devices in a timely manner, but it doesn't address the issue of getting a client to play a note.
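To be fair to OSC, the wire format itself is simple and well-specified; the ad-hoc part is what the addresses mean. A minimal sketch of encoding one message (the address is a made-up example, not a standard name):

```python
import struct

# Sketch of the OSC 1.0 wire format: a null-terminated address
# pattern padded to a 4-byte boundary, a type tag string (",f" for
# one float32), then big-endian arguments.

def pad4(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    return pad4(address.encode()) + pad4(b",f") + struct.pack(">f", value)

msg = osc_message("/synth/freq", 440.0)
```

Nothing in the encoding says what "/synth/freq" should do to a receiver, which is exactly the "ad-hoc convention" gap the comment points at.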
In general I think that MIDI 1.0 just being tolerated is a great way to summarize the situation, and I'm glad that actual MIDI 2.0 implementations are seemingly around the corner.
From Roger Linn's brief summary of MPE (http://www.rogerlinndesign.com/mpe.html):

> MPE stands for "MIDI Polyphonic Expression" and is a new MIDI standard created by us, ROLI, Apple, Moog, Haken Audio, Bitwig and others for communicating over MIDI between MPE controllers ... The principal reason for MPE is to get around a limitation of MIDI: Pitch Bend and Control Change messages must apply to all notes on the channel. This prevents polyphonic pitch bends and polyphonic Y-axis control (which uses Control Change messages) over a single MIDI channel. MPE solves this problem by sending each note's messages on a separate MIDI channel, rotating through a defined block of channels.
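The channel-rotation scheme MPE describes can be sketched as a tiny allocator. Here channels 2-16 are the member channels and channel 1 is reserved as the zone master (the common lower-zone layout; actual zone configuration is negotiable):

```python
# Sketch of MPE's core idea: give each sounding note its own member
# channel, so per-channel pitch bend and CC become per-note. Channel
# numbering here assumes a lower zone with master channel 1 and
# member channels 2-16.

class MpeAllocator:
    def __init__(self, member_channels=range(2, 17)):
        self.free = list(member_channels)
        self.held = {}          # note number -> assigned channel

    def note_on(self, note: int) -> int:
        ch = self.free.pop(0)   # simple FIFO rotation through the block
        self.held[note] = ch
        return ch

    def note_off(self, note: int) -> int:
        ch = self.held.pop(note)
        self.free.append(ch)    # recycle the channel for later notes
        return ch

alloc = MpeAllocator()
a, b = alloc.note_on(60), alloc.note_on(64)  # two notes, two channels
```

The cost is visible immediately: one instrument consumes the whole 16-channel bus, which is the limitation later comments complain about.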
There seem to be workarounds today; not that a new MIDI version wouldn't work better, however.
Then there’s the fact that many manufacturers promise MPE support in their devices, but rarely in 1.0 firmwares... you either ship with it or wait six months for either a crummy implementation or a statement from the manufacturer walking back support. It sounds like a non-trivial software implementation task, especially on devices with resource-starved I/O processors.
Back in the day I made loads of custom switch boxes and so on to turn whatever bizarre system I thought was a good idea into a reality - in fact I only got the last one out of my studio about 3 weeks ago, when I finally removed the 19" rack I've had in there for the last 20+ years. The ubiquity and compatibility of MIDI has been one of the reasons that it's still around today. Forgetting that would be a huge mistake, IMO.
If you wanted to switch to something common, I'd personally say Ethernet (or something "normal", and a specified transport over Ethernet). Handles the distances and data rates more than fine, parts to implement it are widely available, it's already used more and more in stage and recording setups, so specialized components are available too. In the audio sector, standards to distribute precise clocks are around, one could maybe reuse those to get it past P2P connections (although that kind of gear is still kind of expensive right now). Still quite a step up in complexity though...
MIDI is the lingua franca of music equipment. It controls mixing consoles and effects units, it synchronises music to lighting, it provides timecode and controls recorders. You might not use that stuff, but it's an essential part of the spec that is used every day by professional musicians and engineers.
I think they're saying one should use rugged USB cables for live performance, rather than a typical consumer-grade cable.
If MIDI2 can work with USB-C in a way that devices can be linked, instantly, because the USB-C circuitry they use can speak just enough USB to verify the connection is between two MIDI devices, then I will be more than happy to give my MIDI cables away and never bother with them again.
And yeah of course a stage or studio setup will still need them for the foreseeable future, it's not like decades of midi instruments vanish overnight. But you'll also start seeing USB cables thrown into the mix, and eventually you might not see MIDI cables at all in 20, 30 years.
It has to start somewhere.
USB wasn't designed for this use case, and it is a very, very common problem.
Certainly, USB in general can't be used for this purpose, but if there's true industry buy-in for MIDI over USB then it's not hard to imagine more and more USB hardware getting made that can compensate for that problem. For instance, I wouldn't expect laptops to get isolated USB, but a shiny new post-MIDI-2 audio interface sure would.
In that environment, isolation is a constant concern. Connections that are inherently isolated make fault-finding far more straightforward and obviate the need for external isolation boxes.
For even a simple USB audio interface setup connected to a single audio source I would recommend isolation and balanced audio cables.
There are a lot of potential technologies that could improve upon the antiquated MIDI cable standard, but USB is actually worse in a lot of ways.
CAN bus might be a good option; it's fast enough, and is already available on a lot of cheap microcontrollers. I haven't worked with it, though; maybe there's a shortcoming I'm not aware of.
I agree that there are plenty of things to improve on the old MIDI spec, but I think for a lot of people the issues are irrelevant. While physical MIDI runs at 31.25k baud, most people I encounter now use multiple soft synths in their DAWs and aren't even aware of the original spec and speed of MIDI (unlike back in the day with multiple synths chained off a single MIDI output on an Atari ST, where it was definitely an issue).
I remember ZIPI, which promised similar useful changes and came to nothing - I think for the vast majority of people MIDI doesn't get in the way of their work.
Did mLAN also promise similar improvements?
sounds like C
The typical MIDI 'instruction' requires 3 bytes. So, roughly 1000 non-intense instructions per second. Not good enough for intense controller mods, but if the played 'instruments' do most of the work, that's over 80 instructions per second (note events, say) for each of 12 voices. (Also, that's 3125 bytes per MIDI input, of which you're welcome to use many.)
Perfectly adequate most of the time ... and one reason that the original, genius design lasted this long.
NOT to say that it isn't insanely great to hear this news.
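The back-of-the-envelope numbers above, spelled out (running status can shave a byte off consecutive same-status messages, which is ignored here):

```python
# MIDI 1.0 DIN bandwidth arithmetic: 31250 baud with 8N1 framing
# gives 3125 bytes/s, and a typical channel message (note on/off,
# CC) is 3 bytes.

BAUD = 31250
BYTES_PER_SEC = BAUD // 10          # 1 start + 8 data + 1 stop bit
MSGS_PER_SEC = BYTES_PER_SEC // 3   # 3-byte channel messages

# Worst-case serialisation skew for a 10-note chord on one port:
chord_ms = 10 * 3 / BYTES_PER_SEC * 1000  # milliseconds
```

That works out to roughly 1041 messages per second and nearly 10 ms of smear across a big chord, which is why multi-port interfaces (one instrument per port) were the standard fix.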
And considering one of those bits is used for status, it’s effectively 7 bits' worth of resolution (128 discrete values is... not great for anything but maybe western-scale note pitch values).
This is very much a "Falsehoods Programmers Believe About Names" type of situation - what western engineers perceive as an insignificant edge case is a constant frustration for many non-western musicians. The answer to the question "How do I play music from my own culture using electronic instruments?" should not be "Well, it's complicated...". Support for non-western scales should not be a clumsy, optional part of the spec.
Yes, it could be better but I don’t just tolerate it.
The probability this is true for any system in use (software, natural language, tax code, home appliance...) asymptotically approaches 1 with its age in years.
Sure, it's old and outdated, but it does the job.
OK, what I'm going to say is not constructive, but, boy, how I hate this type of comment. Where were you 40 years ago? Why did you not prevent it from being terrible? What did you do to replace it with something better?
Yes, it may be "suboptimal" by the 2019 "standards", but back then, I'm pretty sure, people put a lot of thought into it and tried to make it as good as possible. Just because it looks childish on the current hardware does not make it "pretty terrible".
EDIT: It's almost like complaining that the 555 is absolute rubbish compared to an 8266 that can do sooo much more, and asking why they just didn't come up with something better back then.
I'm not saying that they should have come up with something better. MIDI was an incredible breakthrough in 1983, but it was designed in 1983. That's a very, very long time in technology. We've known about and dealt with the shortcomings of MIDI for decades, but it's difficult to replace a deeply-entrenched standard. MIDI has been pushed as far as possible without breaking backwards-compatibility, but it is now long overdue for the industry to move in unison and create something fit for the 21st century.
Imagine if you dealt with lots of devices on a daily basis that had a 31.25 kbaud serial interface. Would that kind of annoy you? Would you say "this archaic serial format is just great!"? No, you'd be kind of cheesed off that you were stuck with it.
MIDI 2.0 looks great. It's a really big deal for the electronic music industry. You can only usefully say why it's a big deal if you acknowledge all of the crummy, annoying parts of MIDI 1.0.
And I'm not a fan of this kind, either.
Yes, it might be unfair to poke holes in a standard from the 80s from a 2019 perspective, but GP's point was that the MIDI standard went as long as it did without an update despite its shortcomings. _That_ is what was being called pretty terrible, and that's fairly clear from what GP wrote.
In the end it was maybe the most impactful decision I made, as MIDI flourished and Polychord found a niche as a controller. Inter-app MIDI became one of the first ways in which iOS apps could work together rather than being little islands unto themselves, which then led to Audiobus, which led Apple themselves to open the door to cross-app features on the platform.
It may be a dated standard, but a standard nonetheless; and that’s valuable in ways that are hard to quantify.
Hacking on top of MIDI is not always easy, but it can be gratifying. You also discover fun Easter eggs — like the names of defunct manufacturers buried in the data bytes of the spec.
It'd be great if the other realms of electronic gadgetry could achieve the same degree of uptake as electronic musical instruments. However, the anti-patterns that make modern consumer gadgets so appealing to the people who market and sell them are a smell to musicians, who have very little patience for designed obsolescence. Musicians don't tolerate that much; anyone attempting to induce it in a product intended for music-making will find that they won't get far in this industry.
- Yamaha's Motif XS has some sounds that were adopted as signature sounds for some subgenres of funk, soul and gospel, but these are just samples that the upgraded XF, MX and MOXF versions have as well, for this reason, and I don't see anyone with the XS anymore.
- Nord clearly limits the features, storage and processing power of each keyboard they release so that they can upgrade it slightly the following year (the latest 2018 flagship model has 480MB of sample memory vs. the preceding 2015 model's 380MB).
- Korg rereleased the Kronos with an SSD instead of a HDD and marketed it as a new machine. You could open it up and DIY for 80 bucks and save yourself ~$1.5k+ upgrade costs.
- Roland still slaps the Juno brand name on random iterations of digital instruments that just have an extra button for this or that function that could have been added with a software update if they wanted to.
New digital keyboards these days are just software updates that model familiar sounds slightly better, packaged in "new" hardware. The $3.5k pricetag for flagship keyboards that have barely changed in size/shape/material for years is a clear sign. I wonder why no manufacturer has just come out and said "this is our flagship keyboard until 2030, buy it for $2k and subscribe to software updates for $10/month" or "new software version at $100 every year". That way what buyers pay for would be much more closely connected to what they are actually getting.
Some historic digital innovation. It seems to me that almost everything PPG and Waldorf ever made or makes would be considered innovative, including the upcoming Kyra. Other innovative digital synths would include the VS and Wavestation family; the Fizmo, Morpheus, UltraProteus, and Proteus 2000; the FS1R; and perhaps the Prophet X. And I must grudgingly admit that the MicroKorg was innovative given what they crammed in there for the price.
I'll grant you that most of those devices are two decades old. The current incentive for innovation (and hence risk) for digital devices has been destroyed by software synthesizers and laptops. Most digital stuff nowadays has gone for cheap rather than new, in the hopes of going after the poor musician. Monologue, Monotribe, Volca, Reface, Boutique. I don't know if that's bad, but it does make me sad.
Why is this? I've always wondered if these golden-age hardware FM synths provide any benefits over software (people talk about the super HQ DACs, etc., but I'm not sure if I buy that these components are better than modern commodity equipment?). The thing is, to my ear, the SY99/SY77 etc. do sound a lot better than e.g. Dexed or FM8, but is that just because Yamaha patented a bunch of the FM algos and architectures, and, with a retail price of £3,000 in 1991, they put more care into a complete end product than a VST FM developer realistically would? Or, can really low memory, low CPU, super purpose-built hardware somehow win against a modern OS?
As for the sound, I don't think Yamaha's implementation of "FM" has been comprehensively reverse-engineered in the way that, say, the Commodore SID has. There are a lot of little quirks and edge cases to take into account when considering the full operational ranges of the various chips. Even MAME's implementation is allegedly distinguishable after 20 years of tweaking and testing against hundreds of games, and I imagine most VSTs are using significantly less mature code.
I recall reading that the actual implementation is phase modulation, because doing FM directly in the digital domain would put quantization error in the frequency domain, i.e. notes would be off-pitch instead of just having noise.
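The distinction is easy to state in code: phase modulation perturbs the carrier's phase with the modulator, so the carrier's centre frequency (and hence perceived pitch) is untouched by error in the modulator path, whereas true digital FM would accumulate that error into the frequency term. A toy sketch of one PM operator pair:

```python
import math

# Toy sketch of two-operator "FM" as Yamaha-style phase modulation:
# the modulator output is added to the carrier's *phase*, not its
# frequency. fc/fm/index are arbitrary example parameters.

def pm_sample(t: float, fc: float, fm: float, index: float) -> float:
    """One sample of a carrier at fc Hz, phase-modulated by a
    modulator at fm Hz with the given modulation index."""
    mod = index * math.sin(2 * math.pi * fm * t)
    return math.sin(2 * math.pi * fc * t + mod)

# With modulation index 0 this degenerates to a plain sine:
s = pm_sample(0.25, 1.0, 2.0, 0.0)
```

Raising the index adds sidebands around the carrier without shifting its centre, which is the property that keeps notes in tune.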
OTOH the things that tend to differentiate implementations of Yamaha's FM are the sample rates, bit depths, and envelopes used, plus any output distortion(YM2612 for example has a well known distortion in its implementation that adds a harsher edge). The core algorithms they use are something a high school student could pick up and do something with, and emulation quality issues are more a matter of it being easy to write an emulation that gets it 98% correct without covering the last steps, since those are details that really need error-for-error reproduction of the original ASICs and boards, including any timing issues - things which frustrate emulator writers everywhere.
This isn't all in CPU - the CPU on these synths mainly sets parameters for the sound-generating chips.
not the same chips, but something like this probably:
The SY99/77 really are great, though. I sold my TG77 and FS1R recently, but will probably get myself a used SY99 keyboard at some point. Or save up for a Montage.
There's a reason why so many keyboard players adore their Nord keyboard, despite it having relatively old-fashioned technology. Nord instruments are incredibly ergonomic and sound absolutely killer, largely because of their obsessive attention to detail and their deep relationships with working musicians. A trivial example is the music stand - it's rock solid, it fits in the gig bag, it slots in place in two seconds and it's wide enough to hold four pages. It's a really important feature for a lot of Nord users, but it just wouldn't occur to most engineers.
Pressed steel, Neutrik jacks and keybeds aren't subject to Moore's Law. High-end stage keyboards will always be expensive, because they're built to high standards in relatively small quantities. The lack of innovation just isn't particularly relevant, because old technology does the job perfectly well.
We're talking about very mature technology. The difference between Nord's 100MB piano sample sets and a 20GB mega-multisample is extremely subtle. The synth and effects engine on the Nord Stage sounds fantastic. You could add a ton of sample memory and processing power, but the sonic benefits would be absolutely marginal.
You can't say that about our computers or mobile phones, except among the hoarding/collectors scene.
Getting it cleaned up, though, will be wonderful. It's about a decade late, but hey. Music industry. It moves at the same speed as everyone else, just with a 10-year delay. Looking forward to machine learning synths in 2030!
That said, I'm sure I've seen cheap controllers send note off messages, though I doubt they supported release velocity.
Even without achieving all the things outlined above, just clearing away ambiguity in the spec would be a huge leap forward.
If you listen to musicians who were big in the 1970s (say David Bowie or Billy Joel or CSNY), it seemed like they banned guitars and drums and other real instruments and that everything was made with some kind of "music word processor", and pop music became glib and lifeless.
Similarly, if you find modern music 'glib and lifeless', that's not the fault of the instruments. For some reason the audience hasn't chosen to support something better. And let's just say that a lot of producers like to loop the same sample endlessly, because... (they can't play? they're lazy?)
“The MIDI 2.0 initiative updates MIDI with auto-configuration, new DAW/web integrations, extended resolution, increased expressiveness, and tighter timing -- all while maintaining a high priority on backward compatibility. This major update of MIDI paves the way for a new generation of advanced interconnected MIDI devices, while still preserving interoperability with the millions of existing MIDI 1.0 devices. One of the core goals of the MIDI 2.0 initiative is to also enhance the MIDI 1.0 feature set whenever possible.”
Don't design it for what humans expect, design it for what the machines that need to talk to each other operate on. In that sense, 32 bits is the bare minimum you want in a new spec.
A synth knob turns a little less than 360 degrees. While 7 bits doesn't give quite enough resolution for one code per degree, 32 bits would give about 12 million codes per degree. Even a thousand codes per degree would suffice... but 12 million?
There would also be no analog to digital converter that could measure much more than 24 bits with any accuracy whatsoever.
And if you were to use your proposal of "just going to 64", it would multiply those ~12 million values per degree by another 4 billion, giving on the order of 10^16 codes per degree of knob travel. No one can sense that. If the knob was used to modulate e.g. LFO speed, it could send changes that wouldn't even be heard for millions of years.
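The arithmetic in this subthread, made concrete (assuming a full 360 degrees of knob travel, which slightly flatters real knobs):

```python
# Codes per degree of knob travel at various bit depths, assuming
# 360 degrees of rotation.

KNOB_DEGREES = 360

codes_per_degree = {bits: (1 << bits) / KNOB_DEGREES
                    for bits in (7, 14, 32)}
```

At 7 bits there is less than one code per degree (hence zipper noise); 14 bits already gives ~45 per degree; 32 bits gives ~12 million, far past any human or ADC.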
The intent of CC is to facilitate control changes. Higher range means greater flexibility in what exactly this means. Just off the top of my head I can think of at least one realistic application where 360000 discrete steps simply wouldn't suffice but a higher range control protocol would be useful: precision audio seeking, because recorded audio used in music is often longer than 360000 samples.
Talk about future proofing. No complaints here.
From memory, Yamaha, who owned the Sequential brand, were magnanimous enough to give Dave back all his IP some years ago to allow him to build the next generation of synthesisers based on his old ones, and it looks like they have given him back his original company name now too!
link to the book.
3.5mm jacks I have a lot of, because of my EuroModular gear.
People with electric guitars, at least.
Looks like there is at least a bit more variance on the 1/4" jack, with multiple types that are in use. Meanwhile, virtually any consumer 3.5mm jack is going to be TRS or TRRS, though admittedly there are definitely some less-standard things there too.
So the 1/4" jack has been around a lot longer, but may be less 'standard' in that sense. Of course, I also have no idea whether 3.5mm jacks have always been as 'standard' as they are today.
In the wired realm, it doesn't get much better than the 3.5mm jack from a user convenience point of view.
Still standard equipment on hi-fi, guitars, headsets and other uses. Even the screw-on 1/4" to 3.5mm adaptor has become standard. Only smartphones decided to be awkward.
For it to make sense I think you'd have to say specialised connectors.
XLR (microphones) are quite old and still widely used.
Car "cigarette lighter" fittings, and lightbulb "Bayonet" fittings are pretty old too; not sure if the latter counts.
> The lack of adherence to the standards produced a thriving industry of breakout boxes, patch boxes, test equipment, books, and other aids for the connection of disparate equipment.
The value of a standard is also when nobody follows it :)
I've worked on plugins that require per-note tuning and pitch bend, and the current best solution, MPE, is limiting. You're forced to use an entire MIDI bus to control a single instrument. This is a huge hack, and it means you can't create a MIDI plugin that controls multiple instruments.
Vendors are slow to adopt new standards in the audio world (as in many domains), and I hope this will be followed up with good diplomacy. We can learn from the non-adoption of Steinberg's VST3 standard as a path to avoid.
They are trying to force it, but people just don't want it.
The good news: a new enhanced MIDI spec is in the works!
The bad news: you're gonna have to fight for it.
The "normal" connectors are fairly bulky by modern standards (large DIN-style, like on really old keyboards and mice).
The electrical layer would have been a good basis, so one could imagine a world where MIDI was extended to also transport keyboard keypresses etc., but it would have meant extending the protocol quite a bit, or awkwardly mapping concepts onto the music-specific basics. (EDIT: actually, it might be kind of overkill, and thus more expensive than simpler methods. What's useful in a large studio or stage environment isn't really needed on my desk.)
Having said that, some MIDI-to-USB converters are still not up to full compliance with the '80s MIDI 1.0 standard.
Either way, the drivers may be the issue: parsing the serial stream correctly, etc.
Especially if you don’t have a predefined midi map/template.
Or is manually activating controls and defining your map still the best way?
Interesting issue to solve though, I'd bet my money on modular MIDI controllers becoming more prominent in the future.
0 - https://www.native-instruments.com/en/specials/komplete/this...
Hopefully it will be as solid as the current version.
As such, pretty much every modern MIDI device can already do MIDI over USB (including the ones that have dedicated MIDI ports on the back).
Strictly speaking, no. MIDI also specifies the mechanical and electrical connections. There is a MIDI USB device class specification, but that's not an implementation of the MIDI 1.0 standard and just roughly corresponds to its packet protocol.