Bitwig knocked it out of the park coming out the gate with Linux support. Can't say I'm surprised though, it's made by the original team that made Ableton Live.
"Bitwig ... founded by Claes Johanson, Pablo Sara, Nicholas Allen and Volker Schumacher. Our experience in the computer music software industry includes Ableton, where we were all part of the development team behind the successful music software Live ... "
I will add the request for ALSA sequencer devices, which I think should include JACK ones… but I will investigate this further, as maybe I'm wrong. There is a bridge, for sure.
Anyway, not hijacking the MIDI device and playing nicer with the Linux MIDI ecosystem is a must.
It’s honestly one of my favourite bits of software these days. Feels very respectful and responsive. I tried Logic and Ableton; both are great tools but a bit janky.
Never thought I’d say I love a Java program, but they did a really, really good job.
It also has an LLVM-based JIT compiler for Grid devices, which I would love to see a deep dive on, if the Bitwig folks would like to write an engineering blog.
Although the Python API could have been documented, it was never meant to be a public API. And it seems the open source community managed to generate the documentation anyway.
I don't know of any other DAW that gives third-party devs integration as deep as Ableton does through its Max for Live API.
...which looks like it's basically just a representation of the underlying MIDI.
One way they could represent other pitches is for "key" to allow a floating point value rather than an integer. So, for instance, 65.5 would be a quarter-tone (50 cents) above note 65.
According to their reference document, "key" is currently required to be an integer:
There are other ways to support microtuning. They could apply a tuning table to the MIDI notes, for instance. (Ideally they should support more than the 128 notes that MIDI supports, because 128 isn't enough for some use cases.) They might also allow you to apply pitch bend to individual notes (which isn't allowed in MIDI 1.0 but was added in 2.0).
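For what it's worth, the arithmetic for reading such a fractional key would be trivial. A rough sketch (the function name is mine, not anything from the spec):

```python
import math

def key_to_hz(key: float, a4_hz: float = 440.0) -> float:
    """Convert a (possibly fractional) MIDI key number to a frequency.

    A fractional key is read in cents: key 65.5 is 50 cents above
    key 65. Key 69 is A4 (440 Hz by default), 12 keys per octave.
    """
    return a4_hz * 2.0 ** ((key - 69.0) / 12.0)

print(key_to_hz(69))    # A4, 440 Hz
print(key_to_hz(65.5))  # a quarter-tone above F4
```

Existing files with integer keys would read identically, which is the appeal of this approach over a separate tuning-table mechanism.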
At this layer of note information, the fact is that the data will be sent either to a MIDI device or to a plugin as a MIDI event, so you might as well keep it in the native format rather than converting to a frequency.
MPE exists, and can handle microtuning of pitch. MIDI 2.0 supports per-note pitch bend. Old multitimbral MIDI synths can be used in an MPE-like fashion by using the one-note-per-channel and pitch bend trick. Modular synthesizers (and virtual modular synthesizers like VCVRack) use control voltage directly, which has no built-in assumptions about how or if the voltages are quantized.
Microtonal music, or even just plain regular traditional music that's tempered to be more in tune with itself, should be trivially easy to do with electronic instruments, and yet due to some historical decisions to base the one near-universal music protocol we've been using for the last 40 years on a piano-centric representation, it's a lot harder than it ought to be.
I think we should not disregard microtonal music just because it isn't cleanly backwards compatible with a protocol that's over 40 years old.
I don't know what all the options are, but MTS-ESP is one. If I understand it right, it's a sort of out-of-band channel where you can apply tuning tables to all the MIDI synths that use the MTS-ESP API from one place.
I think it's not sent over MIDI so it only works if everything is running on the same computer.
I mean, you could, but that would get really gross when implementing certain scales that are nowhere near the standard MIDI tuning (like working out the fractions for a 13-note scale).
Eh, it's fine. With two decimal places of precision you're basically working in cents, which is a pretty standard way of representing tuning.
This doesn't need to be a trivially human-readable format, but I think maybe it would be nice to allow for multiple pitch representations: floating point frequency, floating point fractions of a semitone in any arbitrary EDO, frequency ratios as used in just intonation, etc.
You could also allow for annotations, like maybe a note pitch is represented as "63.86" in 12-EDO, but it's not just some weird in-between note, it's actually the E that's 5/4 above C, so the file could have an annotation that says that's what the note actually means in this context.
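Where a number like 63.86 comes from: cents are just 1200 times the log2 of the frequency ratio, so the 5/4 third sits about 386.31 cents above the base note. A quick sketch (the helper name is made up):

```python
import math

def ratio_to_key(base_key: int, num: int, den: int) -> float:
    """Fractional MIDI key for a just-intonation ratio above a base note.

    E.g. the 5/4 major third above C4 (key 60) lands at ~63.86 in
    12-EDO terms, rather than at the tempered E of exactly 64.
    """
    cents = 1200.0 * math.log2(num / den)  # ratio -> cents
    return base_key + cents / 100.0        # 100 cents per key step

print(round(ratio_to_key(60, 5, 4), 2))  # 63.86
```

An annotation layer could then store the (60, 5, 4) triple alongside the 63.86, so the intent survives round-tripping.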
Isn't the issue that pitch bends in MIDI apply to every sound on that channel, so if you play a chord with a different temperament and send the bend data for each note, the result is that the whole chord is tempered as normal but with every note shifted the same amount higher or lower?
I've heard the recent MIDI 2.0 can get around this, but it's still a pain with the 40 years of MIDI equipment that surrounds us.
Yes, that's a problem with MIDI 1.0. There are a few ways to work around it. Some synths support custom tuning tables, and there's even a MIDI spec for custom tuning tables called MTS that hardly anyone actually implements as far as I know.
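For the curious, the MTS real-time single-note tuning change is a SysEx that carries a base semitone plus a 14-bit fraction per retuned note. A sketch from memory (the byte layout may be slightly off; check the MIDI Tuning Standard before relying on it):

```python
def mts_single_note_retune(key: int, target_semitones: float,
                           device_id: int = 0x7F, program: int = 0) -> bytes:
    """MTS real-time single-note tuning change SysEx (from memory).

    target_semitones: absolute pitch as a fractional MIDI key number.
    Encoded as a base semitone plus a 14-bit fraction, where one
    step is 100/16384 cents (about 0.006 cents).
    """
    base = int(target_semitones)
    frac = round((target_semitones - base) * 16384)
    if frac == 16384:                      # rounding spilled into next semitone
        base, frac = base + 1, 0
    return bytes([0xF0, 0x7F, device_id,   # universal real-time SysEx
                  0x08, 0x02,              # MIDI tuning, single-note change
                  program, 1,              # tuning program, number of notes
                  key, base,               # key to retune, target semitone
                  frac >> 7, frac & 0x7F,  # 14-bit fraction, MSB first
                  0xF7])
```

As noted above, though, very few synths actually respond to these messages.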
One way that works for multitimbral synths (which a lot of '90s romplers are) is to only play one note at a time on each channel, and use multiple channels for polyphony. This allows you to individually control the pitch of every note.
The one-note-per-channel trick works okay, but it's kind of awkward. If you want to use all 16 channels, it means setting every channel to the exact same patch, which is tedious. Also, you have to know what the pitch bend range of the synth is if you want to bend by the exact right amount. So, eventually MPE was adopted as an official MIDI standard to provide an easier, more user-friendly and standardized way to do one-note-per-channel MIDI.
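To make the "you have to know the bend range" point concrete, here's a rough sketch of the arithmetic, assuming the usual 14-bit bend value with center at 8192:

```python
def bend_for_offset(offset_semitones: float, bend_range: float = 2.0):
    """14-bit MIDI 1.0 pitch bend value for a pitch offset in semitones.

    bend_range is the synth's configured bend range in +/- semitones;
    if you guess it wrong, every bent note lands out of tune.
    """
    value = 8192 + round(offset_semitones / bend_range * 8192)
    value = max(0, min(16383, value))  # clamp to 14 bits
    return value & 0x7F, value >> 7    # (LSB, MSB) as sent in the message

print(bend_for_offset(0.5))  # +50 cents, assuming a +/-2 semitone range
```

The same arithmetic applies under MPE, except MPE negotiates the bend range up front, which is a big part of why it's less fiddly.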
MIDI 2.0 has added per-note pitch bend. For anyone making microtonal instruments right now, though, I'd say MPE is probably the best/easiest option.
I feel like we're only months/years away from software and file formats being obsolete for this sort of thing, and instead songs are composed by humming a tune to ChatGPT and having it add all the other instruments, some nice vocals, and output the MP3.
If you want to make changes, you give ChatGPT the MP3 file and say "switch out the trumpet for a piano and make the singer a guy".
That's an interesting point. I don't get much value from art personally (not that i don't see why people value it, i do!), so the idea of AI generated art isn't disgusting to me.
Music however.. i adore music. A big (but not essential) thing i love about music is the story that got the artist to that point. Pain, joy, emotion is transferred. Yea, often it's not essential so maybe AI Music has a place in my life, but easily 60% of my music is loved partly, if not heavily, because of the emotions that got the artist to that point.
This is a hilarious take and juxtaposition with MP3 (LAME encoders, anyone?) and some far-fetched GPT application. It would be a Muzak equivalent of the Bored Ape NFTs.
Things like MIDI note transcription or waveform modeling will be cool machine learning tools. An end-to-end composition and mastering bot may make something passable, like other current derivative music out there, and I'm being generous by saying passable.
Show me the leaps and bounds in self-driving cars that were only 3-5 years away 8 years ago.
There is still a need for better DAWs, music transcription, and audio file formats in the next century. Lol, humans want to make art in their free time, not play around with pretend chatbot personalities.
P.S. People still record to TAPE and it sounds awesome, even though there are Ableton plugins.
Yeah, MIDI 2.0 has per-note pitch bend. They kept with 7 bits for note numbers, though, which is kind of inconvenient. If you have an instrument with more than 128 keys, you have to do a kind of dynamic allocation thing where you find the nearest unused key and bend it to pitch.
I think for most use cases, MPE is actually simpler. (Also it's probably supported by more instruments and synths at this point.)
All the more reason to include microtuning as a supported feature in this new format.
Not per note pitch bend. Note On/Off messages support an extra 16 bits of "attribute" data that can optionally be used as an unsigned 7.9 fixed point pitch offset in semitones. Note numbers are also expanded to 256 since they can use the full 8 bits of the note number.
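If I'm reading the 7.9 layout right (7 integer bits plus 9 fraction bits in the 16-bit attribute field), one step is 1/512 of a semitone, about 0.2 cents. A sketch of the packing:

```python
def encode_pitch_7_9(semitones: float) -> int:
    """Pack a pitch in semitones as unsigned 7.9 fixed point (16 bits).

    9 fraction bits means one step is 1/512 semitone (~0.2 cents),
    over a range of 0 to just under 128 semitones.
    """
    value = round(semitones * 512)  # 2**9 fraction steps per semitone
    if not 0 <= value <= 0xFFFF:
        raise ValueError("pitch out of unsigned 7.9 range")
    return value

print(hex(encode_pitch_7_9(60.5)))  # middle C plus a quarter-tone
```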
MPE is a super limited hack, I doubt anyone is going to use it once MIDI 2 becomes available in synths (it's a future technology, fwiw - it will be a year or two before you can buy any controller that uses it)
> Not per note pitch bend. Note On/Off messages support an extra 16 bits of "attribute" data that can optionally be used as an unsigned 7.9 fixed point pitch offset in semitones.
> Note numbers are also expanded to 256 since they can use the full 8 bits of the note number.
You might be right on the attribute data, but I thought they had per-note pitch bend as well. I'm skeptical that the 8th note-number bit is available, but I'm going off memory, since the MIDI Association has decided, for whatever reason, to require a login just to see the specs (as if they were some kind of secret), and that section of their website is throwing SQL errors right now.
> MPE is a super limited hack, I doubt anyone is going to use it once MIDI 2 becomes available in synths
Maybe MIDI 2.0 will be adopted, but so far it seems to be getting very little traction, at least in the hardware synth/controller space. I'm less familiar with software synths; maybe it's getting picked up there.
MPE is kind of a gross hack, but it works pretty well and is supported by most of the expressive controllers out there and at least some synths. The only expressive controller I'm aware of that uses MIDI 2.0 is Lumatone. I think Roland also makes a regular keyboard controller with MIDI 2.0 support. Other than that the major music incumbents seem to be staying away, and the smaller expressive instrument manufacturers seem to mostly be sticking to MPE.
I'd be in favor of just ditching MIDI entirely, and use a different protocol that's more like what MPE would be if it didn't have to be mostly backwards-compatible with MIDI 1.0. I'd also be in favor of using CAN-bus instead of 31.25 kbps serial for anything that's not using USB.
Ah, you're right, I missed in the message layout that the MSB of the note number is reserved. That's kind of pointless.
And it is a bit absurd that they're so intent on doing things behind closed doors. But the spec is very comprehensive and decently polished.
It's not really "out" yet in either hardware or software. The last update made some notable changes. Allegedly Korg is releasing a line of MIDI 2 controllers soon. I think NAMM in January is going to have a lot of MIDI 2.0 demos.
In the software world, some people (Steinberg) don't even want to support MIDI in synths at all. VST3 barely supports MIDI 1, and it will not support MIDI 2.0.
This is pretty nice. I've written some tools for my own softsynth that revolved around parsing various DAW formats and having a standard format, even just to export to, would have been very nice to have.
Yes please!
I went for Mixcraft because a friend already has it, but I'm still struggling to get it running alongside Spitfire under Wine. Spitfire needs Win 10, but then there's no sound with the "low latency" driver in Mixcraft. Anyway… an exchange format would help me go native while keeping my friends!
MIDI was enthusiastically supported by Yamaha and Roland, two of the biggest synth players of that era.
So I imagine, for it to become widely successful, that this new standard would need the wholehearted support of at least 2 of Steinberg, Apple, Ableton, and Avid.
Indeed, MIDI 1.0 is one of the oldest implemented standards in existence. Unfortunately, MIDI 2.0 is not as successful. Let's hope Bitwig's initiative takes off.
Maybe I’m misinterpreting the readme but other DAWs would not need to cooperate if you’re developing a tool to transpile between formats. I think most of the proprietary specs could be reverse engineered.
Why would they want to support AU? The only reason to use AU is Logic. Everything else (Pro Tools excepted) supports VST, and all the plugin devs release in VST and AU, so it would be a waste of time. Bitwig is putting their time into CLAP, and for good reason. It's cross-platform, much easier to develop against, and much more advanced than all the alternatives. Even Avid has shown interest in CLAP. So has Image-Line. I expect Studio One to support it in v7.
For starters, AUs are "easier" to run under Rosetta than VSTs, since AUs run out-of-process by default. This means you can use an x86_64 AU without running the entire DAW under Rosetta on Apple Silicon.
It hadn’t occurred to me that AUs run out of process, but this makes sense. Does this mean an AU crashing in theory won’t take down your DAW? (The same thing that Bitwig has its own isolation feature for.)
I always default to AU just because I’m on a Mac and I arbitrarily decided to do so long ago. The Rosetta thing has been a nice bonus. Not sure what the other (dis)advantages are, these days I try to focus on using native Ableton and M4L stuff anyway!
> Does this mean an AU crashing in theory won’t take down your DAW?
Yes, this has been my experience with AUs in Logic Pro, at least. You simply get a message saying a plugin failed and that you can try reloading it if you'd like.
Do you encounter many AU-only plugins? From what I can tell, the limitations of VST/AU/AAX are more pressing and Bitwig are working to address that with CLAP: https://u-he.com/community/clap.
Man i hope more people (plugin developers too) embrace CLAP or something open. I'm writing a Rust program and i really want to allow for Modart's Pianoteq plugin and VST just makes it feel so difficult.
There have always been plug-ins, especially by small developers on the Mac, which are AU only.
Apple Silicon macs can now run iOS / iPadOS AUs as well, so there are a ton of plugins from that world that are not available as VSTs.
Apple Logic can't load VSTs, so typically on the Mac it makes sense to just have AUs. Ableton Live on the Mac can load VSTs or AU, but there isn't really a reason to keep the VSTs around.
epic project, one of my fav DAWs already supported too. So damn happy to see this. Hope Renoise will pick it up too though that one moves a lil slow :P maybe someday! just wanted to say thanks. this is needed sooo much!
Probably not happening any time soon. Bitwig’s potential for growth is in users migrating from Ableton Live and making that easier is likely not in Ableton’s interest.
If the goal is widespread industry adoption, I'd recommend the approach of extending a professional, standards-based interchange format already supported by major DAWs: AAF.
Yes, which is why the Yet Another Standard approach is so mystifying. Bitwig is surely aware that AAF has wide support and could easily be extended to support everything they want to do.