Hacker News
DAWproject: Open exchange format for DAWs (github.com/bitwig)
187 points by anigbrowl on Sept 27, 2023 | 89 comments



Bitwig is quickly becoming my favorite DAW. Big fan of how they develop software.


Bitwig knocked it out of the park coming out the gate with Linux support. Can't say I'm surprised though, it's made by the original team that made Ableton Live.


Bitwig was founded by people who were previously working for Ableton. Some for longer than others.

But they were not the original team that made Live, that is a bit misleading.


"Bitwig ... founded by Claes Johanson, Pablo Sara, Nicholas Allen and Volker Schumacher. Our experience in the computer music software industry includes Ableton, where we were all part of the development team behind the successful music software Live ... "


On Linux it steals/hogs the hardware MIDI, so you can't connect to it with other JACK MIDI apps.


I am also very bothered by this. I developed rtpmidid just to be able to use rtpmidi on bitwig.. and I need to use the virtual midi device hack.

I was just thinking if there is a way to ask for features.. and there is! https://bitwish.top/c/features/5

I will add the request for ALSA sequencer devices, which I think should include Jack ones.. but I will investigate this further, as maybe I'm wrong. There is a bridge for sure.

Anyway, not hijacking the MIDI device and playing nicer with the Linux MIDI ecosystem is a must.


It’s honestly one of my favourite bits of software these days. Feels very respectful and responsive. I tried Logic and Ableton. Both are great tools but a bit janky.

Never thought I’d say I love a Java program but they did a really really good job.


And a very cool project of what can be done in Java based desktop software.

Yes, it also has C++ under the hood, which is a good example of not going all-in on a single stack, but rather mixing and matching.


It also has an LLVM-based JIT compiler for Grid devices. Which I would love to see a deep dive on, if Bitwig folks would like to write an engineering blog.


Although the Spectral Suite drama was a poor move from them.


I found it interesting to discover that an Ableton Live project file is a gzipped xml file as well, so you can get at the xml by gunzipping it.

I've played around with trying to generate a schema file from a project file with mixed results.

I very much wish Ableton would support this format or at least publish their schema somewhere so translation could be done.
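The gunzip trick mentioned above can be sketched in a few lines of Python. Here the project file is simulated with a tiny in-memory blob (real .als files are much larger); for an actual file you'd pass its path to gzip.open directly:

```python
import gzip
import io

# An Ableton Live .als project is plain XML compressed with gzip,
# so gunzipping recovers the project XML. Simulated in memory here.
fake_als = gzip.compress(b'<?xml version="1.0"?><Ableton MajorVersion="5"/>')

with gzip.open(io.BytesIO(fake_als), "rt", encoding="utf-8") as f:
    xml_text = f.read()

print(xml_text)  # -> <?xml version="1.0"?><Ableton MajorVersion="5"/>
```

From there you can feed the text into any XML parser to poke at the (undocumented) schema.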


This. I've been trying to write a VCS for Ableton projects but their schema is way too complex to try to make sense of it.

Ableton is incredibly hostile to third party devs building upon their work. Eg: There is no official documentation for any of their Python APIs.


Although the Python API could be documented, it was not meant to be a public API. And it seems the open source community managed to generate the documentation.

I don't know any other DAW that allows as deep integration for third party devs through their Max for Live API.


If it was not meant to be public, then why do vendors who partner with Ableton and build hardware for Live complain about the lack of documentation?

The documentation generated by the community is severely lacking and it enables only elementary usage of the API


> it seems the open source community managed to generate the documentation

That would be terrific. Do you have more?

A while ago, I tried to customize a Novation Launchpad but couldn’t find coherent docs or an intro.

Some people have mastered it, as witnessed by this great extension [1]. But that’s example code, not documentation.

[1] https://github.com/hdavid/Launchpad95


>I don't know any other DAW that allows as deep integration for third party devs through their Max for Live API.

REAPER?


Ah makes sense. I was just thinking of the "big name" ones. I need to look into Reaper again.


REAPER has unsurpassed scripting and extension, every other DAW looks like a joke in comparison.


Ya, Reaper is the only DAW (I know of) that allows me to import midi, apply fx, and render tracks via scripts.


Not really a DAW, but bipscript is interesting for this use case


FL Studio has a well-documented Python API that allows third-party scripts!


One thing I'd like to see is native support for some notion of pitch other than 12-tone equal temperament.

Looking at their example, I see this:

<Note time="0.000000" duration="0.250000" channel="0" key="65" vel="0.787402" rel="0.787402"/>

...which looks like it's basically just a representation of the underlying midi.

One way they could represent other pitches is for "key" to allow a floating point value rather than an integer. So, for instance, 65.5 would be a quarter-tone (50 cents) above note 65.

According to their reference document, "key" is currently required to be an integer:

https://htmlpreview.github.io/?https://github.com/bitwig/daw...

There are other ways to support microtuning. They could apply a tuning table to the MIDI notes, for instance. (Ideally they should support more than the 128 notes that MIDI supports, because 128 isn't enough for some use cases.) They might also allow you to apply pitch bend to individual notes (which isn't allowed in MIDI 1.0 but was added to 2.0).
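The fractional-key idea above is easy to sketch. Assuming standard 12-tone equal temperament with A4 (key 69) at 440 Hz as the reference, a fractional key number converts to frequency like this:

```python
import math

def key_to_hz(key: float, a4_key: int = 69, a4_hz: float = 440.0) -> float:
    """Convert a (possibly fractional) MIDI-style key number to Hz.

    key=65.5 would be a quarter-tone (50 cents) above F4 (key 65).
    """
    return a4_hz * 2.0 ** ((key - a4_key) / 12.0)

print(key_to_hz(69))              # 440.0 (A4)
print(round(key_to_hz(60), 4))    # 261.6256 (middle C)
print(round(key_to_hz(65.5), 2))  # a quarter-tone above F4
```

The same formula works for any equal division of the octave by replacing the 12 with the number of steps; other tuning systems would need a table lookup instead.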


At this layer of note information, the fact is the data will be sent either to a MIDI device or to a plugin as a MIDI event, so you might as well keep it in the native format and not convert to a frequency.


MPE exists, and can handle microtuning of pitch. MIDI 2.0 supports per-note pitch bend. Old multitimbral MIDI synths can be used in an MPE-like fashion by using the one-note-per-channel and pitch bend trick. Modular synthesizers (and virtual modular synthesizers like VCVRack) use control voltage directly, which has no built-in assumptions about how or if the voltages are quantized.

Microtonal music, or even just plain regular traditional music that's tempered to be more in tune with itself, should be trivially easy to do with electronic instruments, and yet due to some historical decisions to base the one near-universal music protocol we've been using for the last 40 years on a piano-centric representation, it's a lot harder than it ought to be.

I think we should not disregard microtonal music just because it isn't cleanly backwards compatible with a protocol that's over 40 years old.


Idunno, half the instrument VSTs I have seem to support some form of microtuning or alternate tunings...


How do they achieve that?


I don't know the mechanics, my guess is they translate the MIDI notes to the scale given to them by a tuning file

Here's a pretty comprehensive-looking list of software instruments supporting microtuning:

https://en.xen.wiki/w/List_of_microtonal_software_plugins


I don't know what all the options are, but MTS-ESP is one. If I understand it right, it's a sort of out-of-band channel where you can apply tuning tables to all the MIDI synths that use the MTS-ESP API from one place.

I think it's not sent over MIDI so it only works if everything is running on the same computer.

https://oddsound.com/mtsespsuite.php


You don't need to convert to frequency, you can use fractional MIDI pitch values. E.g. 60.5 would be a quarter tone above middle C.

In fact, the VST2 and VST3 SDKs support microtonal offsets per note event (measured in cents). Unfortunately, only a few plugins and hosts support it.


I mean, you could, but that would get really gross when implementing certain scales that are nowhere similar to the MIDI tuning (like working out the fractions for a 13-note scale)


Eh, it's fine. With two decimal points of precision you're basically working in cents, which is a pretty standard way of representing tuning.

This doesn't need to be a trivially human-readable format, but I think maybe it would be nice to allow for multiple pitch representations: floating point frequency, floating point fractions of a semitone in any arbitrary EDO, fractional frequency as used in just intonation, etc..

You could also allow for annotations, like maybe a note pitch is represented as "63.86" in 12-EDO, but it's not just some weird in-between note, it's actually the E that's 5/4 above C, so the file could have an annotation that says that's what the note actually means in this context.


Isn't the issue that pitch bends in midi apply to every sound on that channel, so if you play a chord with a different temperament and send the bend data of each note the result is that the whole chord will be tempered as normal but with every note the same amount higher or lower?

I've heard the recent midi 2 can get around this but it's still a pain with the 40 years of midi equipment that surrounds us.


Yes, that's a problem with MIDI 1.0. There are a few ways to work around it. Some synths support custom tuning tables, and there's even a MIDI spec for custom tuning tables called MTS that hardly anyone actually implements as far as I know.

One way that works for multitimbral synths (which a lot of 90's romplers are) is to only play one note at a time on each channel, and use multiple channels for polyphony. This allows you to individually control the pitch of every note.

The one-note-per-channel trick works okay, but it's kind of awkward. If you want to use all 16 channels, it means setting every channel to the exact same patch, which is tedious. Also, you have to know what the pitch bend range of the synth is if you want to bend by the exact right amount. So, eventually MPE was adopted as an official MIDI standard to provide an easier, more user-friendly and standardized way to do one-note-per-channel MIDI.

MIDI 2.0 has added per-note pitch bend. For anyone making microtonal instruments right now, though, I'd say MPE is probably the best/easiest option.
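The one-note-per-channel trick described above can be sketched as follows. This assumes a ±2 semitone bend range (the common default, which must match the synth's setting), and the message tuples are illustrative rather than a real MIDI library API:

```python
BEND_RANGE = 2.0  # semitones; must match the synth's pitch bend range

def note_messages(pitch: float, velocity: int, channel: int):
    """Emit (pitch bend, note on) messages for a fractional MIDI pitch.

    Each note gets its own channel, so the per-channel pitch bend
    retunes just that note.
    """
    nearest = round(pitch)
    offset = pitch - nearest  # within roughly +/-0.5 semitone
    # 14-bit pitch bend value: 8192 is center (no bend)
    bend = 8192 + int(round(offset / BEND_RANGE * 8192))
    bend = max(0, min(16383, bend))
    return [("pitch_bend", channel, bend),
            ("note_on", channel, nearest, velocity)]

# A quarter-tone above middle C (pitch 60.5) on channel 1:
print(note_messages(60.5, 100, 1))
```

MPE standardizes essentially this scheme, adding channel allocation rules and a way to negotiate how many channels are in use.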


Thanks, that was a helpful response and I'll surely have a fun time looking into these different potential solutions.


I feel like we're only months/years away from software and file formats being obsolete for this sort of thing; instead songs will be composed by humming a tune to ChatGPT and having it add all the other instruments, some nice vocals, and output the MP3.

If you want to make changes, you give ChatGPT the MP3 file and say "switch out the trumpet for a piano and make the singer a guy".


As someone who makes music as a hobby that's about as enticing as outsourcing sex. I don't care if the machine can do it better.


That's an interesting point. I don't get much value from art personally (not that i don't see why people value it, i do!), so the idea of AI generated art isn't disgusting to me.

Music however.. i adore music. A big (but not essential) thing i love about music is the story that got the artist to that point. Pain, joy, emotion is transferred. Yea, often it's not essential so maybe AI Music has a place in my life, but easily 60% of my music is loved partly, if not heavily, because of the emotions that got the artist to that point.

I too am not interested in outsourcing this.


... music is art, no?


In English speaking world "art" can be used in two ways, as a synonym for "visual art" or to mean really any art (including music, cinema etc).


This is a hilarious take, juxtaposing MP3 (LAME encoders, anyone?) with some far-fetched GPT application. It would be the muzak equivalent of the Bored Ape NFTs.

Things like MIDI note transcription or waveform modeling will be cool machine learning tools. An end-to-end composition and mastering bot may make something passable, like other current derivative music out there, and that's being generous by saying passable.

Show me the leaps and bounds in self-driving cars that were only 3-5 years away 8 years ago.

There is still a need for better DAWs, music transcription, and audio file formats in the next century. Lol, humans want to make art in their free time, not play around with pretend chatbot personalities.

P.S. People still record to TAPE and it sounds awesome, even though there are Ableton plugins


We might soon have good auto-generated music, but that won't mean that everyone else who was making music the regular way is going to stop.


ODDSound's MTS-ESP should be something all DAWs support. International music styles are suffering in DAW-land imo :(

https://oddsound.com/


This is built into MIDI 2, fwiw


Yeah, MIDI 2.0 has per-note pitch bend. They kept with 7 bits for note numbers, though, which is kind of inconvenient. If you have an instrument with more than 128 keys, you have to do a kind of dynamic allocation thing where you find the nearest unused key and bend it to pitch.

I think for most use cases, MPE is actually simpler. (Also it's probably supported by more instruments and synths at this point.)

All the more reason to include microtuning as a supported feature in this new format.


Not per note pitch bend. Note On/Off messages support an extra 16 bits of "attribute" data that can optionally be used as an unsigned 7.9 fixed point pitch offset in semitones. Note numbers are also expanded to 256 since they can use the full 8 bits of the note number.
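The "unsigned 7.9 fixed point" attribute described above (7 integer bits, 9 fractional bits, pitch in semitones) can be sketched like this, purely as an illustration of the encoding, not an implementation of the actual UMP message layout:

```python
def encode_pitch_7_9(semitones: float) -> int:
    """Pack a pitch in semitones into unsigned 7.9 fixed point (16 bits).

    9 fractional bits means multiplying by 512, giving a resolution
    of 1/512 of a semitone (~0.2 cents).
    """
    value = int(round(semitones * 512))
    return max(0, min(0xFFFF, value))  # clamp to the 16-bit range

def decode_pitch_7_9(raw: int) -> float:
    return raw / 512.0

raw = encode_pitch_7_9(60.5)  # a quarter-tone above middle C
print(raw, decode_pitch_7_9(raw))  # 30976 60.5
```

Being unsigned and absolute (rather than a bend relative to the note number), this attribute can state the note's pitch outright, which is what makes it usable for arbitrary tuning tables.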

MPE is a super limited hack, I doubt anyone is going to use it once MIDI 2 becomes available in synths (it's a future technology, fwiw - it will be a year or two before you can buy any controller that uses it)


> Not per note pitch bend. Note On/Off messages support an extra 16 bits of "attribute" data that can optionally be used as an unsigned 7.9 fixed point pitch offset in semitones.

> Note numbers are also expanded to 256 since they can use the full 8 bits of the note number.

You might be right on the attribute data, but I thought they had per-note pitch bend as well. I'm skeptical that the 8th note number bit is available, but I'm going off of memory, since the MIDI Association has decided for whatever reason to require a login just to see the specs, as if the MIDI specs are some kind of secret, and that section of their website is throwing SQL errors right now.

> MPE is a super limited hack, I doubt anyone is going to use it once MIDI 2 becomes available in synths

Maybe MIDI 2.0 will be adopted, but so far it seems to be getting very little traction, at least in the hardware synth/controller space. I'm less familiar with software synths; maybe it's getting picked up there.

MPE is kind of a gross hack, but it works pretty well and is supported by most of the expressive controllers out there and at least some synths. The only expressive controller I'm aware of that uses MIDI 2.0 is Lumatone. I think Roland also makes a regular keyboard controller with MIDI 2.0 support. Other than that the major music incumbents seem to be staying away, and the smaller expressive instrument manufacturers seem to mostly be sticking to MPE.

I'd be in favor of just ditching MIDI entirely, and use a different protocol that's more like what MPE would be if it didn't have to be mostly backwards-compatible with MIDI 1.0. I'd also be in favor of using CAN-bus instead of 31.25 kbps serial for anything that's not using USB.


Ah you're right, I missed in the message layout that the MSBit is reserved for note number. That's kind of pointless.

And it is a bit absurd that they're so intent on doing things behind closed doors. But the spec is very comprehensive and decently polished.

It's not really "out" yet in either hardware or software. The last update made some notable changes. Allegedly Korg is releasing a line of MIDI 2 controllers soon. I think NAMM in January is going to have a lot of MIDI 2.0 demos.

In the software world, some people (Steinberg) don't even want to support MIDI in synths at all. VST3 barely supports MIDI 1, and it will not support MIDI 2.0.


This is pretty nice. I've written some tools for my own softsynth that revolved around parsing various DAW formats and having a standard format, even just to export to, would have been very nice to have.


Yes please! I went for Mixcraft because a friend has it already, but I'm still struggling to get it running alongside Spitfire with Wine. Spitfire needs Win 10, but then there's no sound with the "low latency" driver in Mixcraft. Anyway… an exchange format would help me go native while keeping friends!


Nice idea, but not useful unless each DAW manufacturer supports the format. And there are a lot of DAWs.


MIDI was enthusiastically supported by Yamaha and Roland, two of the biggest synth players of that era.

So I imagine, for it to become widely successful, that this new standard would need the wholehearted support of at least 2 of Steinberg, Apple, Ableton, and Avid.


Indeed, MIDI 1.0 is one of the oldest implemented standards in existence. Unfortunately, MIDI 2.0 is not as successful. Let's hope Bitwig's initiative takes off.


Maybe I’m misinterpreting the readme but other DAWs would not need to cooperate if you’re developing a tool to transpile between formats. I think most of the proprietary specs could be reverse engineered.


Well, you have to start somewhere.


It would be nice if Bitwig would support existing plug-in formats like AU.


Why would they want to support AU? The only reason to use AU is Logic. Everything else (Pro Tools excepted) supports VST, and all the plugin devs release in VST and AU, so it would be a waste of time. Bitwig is putting their time into CLAP, and for good reason. It's cross-platform, much easier to develop against, and much more advanced than all the alternatives. Even Avid has shown interest in CLAP. So has Image-Line. I expect Studio One to support it in v7.


For starters, AUs are "easier" to run under Rosetta than VSTs, since AUs run out-of-process by default. This means you can use an x86_64 AU without running the entire DAW under Rosetta on Apple Silicon.


It hadn’t occurred to me that AUs run out of process, but this makes sense. Does this mean an AU crashing in theory won’t take down your DAW? (The same thing that Bitwig has its own isolation feature for)

I always default to AU just because I’m on a Mac and I arbitrarily decided to do so long ago. The Rosetta thing has been a nice bonus. Not sure what the other (dis)advantages are, these days I try to focus on using native Ableton and M4L stuff anyway!


> Does this mean an AU crashing in theory won’t take down your DAW?

Yes, this has been my experience with AUs in Logic Pro, at least. You simply get a message saying a plugin failed and that you can try reloading it if you'd like.


In other words, just like VSTs on Bitwig. Even if the entire audio engine crashes, you just restart it with a click.


bitwig already does this with vsts because plugins are sandboxed


Do you encounter many AU-only plugins? From what I can tell, the limitations of VST/AU/AAX are more pressing and Bitwig are working to address that with CLAP: https://u-he.com/community/clap.


Man i hope more people (plugin developers too) embrace CLAP or something open. I'm writing a Rust program and i really want to allow for Modartt's Pianoteq plugin, and VST just makes it feel so difficult.


There have always been plug-ins, especially by small developers on the Mac, which are AU only.

Apple Silicon macs can now run iOS / iPadOS AUs as well, so there are a ton of plugins from that world that are not available as VSTs.

Apple Logic can't load VSTs, so typically on the Mac it makes sense to just have AUs. Ableton Live on the Mac can load VSTs or AU, but there isn't really a reason to keep the VSTs around.


Or LADSPA/LV2 which is actually free.


Fewer hosts should use AU


It sounds nice but I don't know how they're going to convince other DAW devs (bigger than Bitwig) to adopt this.


If Reaper and Bitwig supported one interchangeable arrangement format it could already be big.


The readme says that PreSonus Studio One supports the format, so I think it's possible others might in the future.


epic project, one of my fav DAWs already supported too. So damn happy to see this. Hope Renoise will pick it up too though that one moves a lil slow :P maybe someday! just wanted to say thanks. this is needed sooo much!


Incredible. If Ableton adopts this it'll be a huge thing.


Probably not happening any time soon. Bitwig’s potential for growth is in users migrating from Ableton Live and making that easier is likely not in Ableton’s interest.


why do applications prefer zipped XMLs?

what are the advantages and disadvantages?


oh my god thank you


XML, why?

And... 99% of the issues will be around plug-in management.


XML is fine. Schema validation is really handy more often than you’d think.


You can do schema validation with other formats too.

For example you have JSON Schema, that many people experience through Kubernetes.


Probably a relic of the musicxml format which still holds some popularity in programs based around sheet music/western notation


On the subject of MusicXML, there is currently ongoing work to develop a successor (currently named MNX): https://w3c.github.io/mnx/docs/


What would you propose? JSON?



Easy to parse, supported by basically anything, and not a shitty format like YAML, for example


Why not?


If the goal is widespread industry adoption, I'd recommend the approach of extending a professional, standards-based interchange format already supported by major DAWs: AAF.

https://en.wikipedia.org/wiki/Advanced_Authoring_Format

https://www.loc.gov/preservation/digital/formats/fdd/fdd0000...


The README extensively references AAF.


Indeed. This is a Bitwig project. I think the default for Bitwig engineering should be "assume extreme competence."


What's wrong with Bitwig engineering?


Nothing, they are praising the engineers.


Yes, which is why the Yet Another Standard approach is so mystifying. Bitwig is surely aware that AAF has wide support and could easily be extended to support everything they want to do.



