This is a project I’ve been working on since the start of the year. We’ve just released 0.4.0, so I figured now would be a nice time to start making it a bit more public. There’s a bunch of interesting tech under the hood which I thought you all would definitely be interested in :)
Axiom’s a project that grew out of my third try at building a realtime software synthesizer for 64k intros in the demoscene. You can’t really fit an mp3 file in an executable that small and expect it to sound any good, so instead we synthesize the audio and play it in realtime. A few other groups have written synthesizers for 4k and 64k productions, but I built this one for two reasons: I wanted to make one myself, and I wanted to try some interesting things combining node graphs with basic scripting. At some point, however, I realized this could actually be a really useful tool for any musician, since it flips things on their head and allows much more control than just stringing together a bunch of plugins (the question is, of course, do people who make music _want_ this control? I'm not sure of the answer yet).
Technology-wise, Axiom compiles 'node surfaces' with LLVM (no interpreters here, the code has to run comfortably 44100 times per second!). The editor, written in C++ with Qt, builds a MIR and passes it to the compiler, written in Rust. This was my first large project in C++ and my first project in Rust... ultimately I think the Rust learning curve has definitely been worth it, as it's by far the most stable part of the program!
Ultimately I’m hoping to somehow be able to turn this into a real product, possibly by offering what you see as the core open-source software and then building on it, into something like a DAW or plugin for procedural audio in game engines (which a few people have suggested to me, and I think would be a really cool application of the technology!).
Check it out, let me know what you think (either here, or shoot me an email, chat on twitter, etc), ask questions, build something cool, have fun!
I suppose you are familiar with analog modular synthesizers. There is a demand for these, even though they quickly become very expensive toys. But musicians seem to like the flexibility and room for experimentation that they allow. Even software simulations of these synths are pretty successful. VCV Rack (http://www.vcvrack.com) is a recent open source emulation of eurorack synthesizers that has quickly found a following.
If you are interested in providing procedural audio for games, you should take a quick look at what Wwise provides in that area. I have not had time to do that myself, but from what I have seen, it provides a lot of features for controlling sound and music based on the current game situation. It is also a pretty expensive package.
Yeah, it works quite well. In my case, I used `ExternalProject_Add` to run the builds through Cargo, then set up a static lib target with that as a dependency, linking against the files built by Cargo and a few system libs it expects. From there you can treat it like any other target in CMake, and it just works(tm).
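For anyone wanting to try the same setup, here's a rough sketch of what that looks like in CMake. This is illustrative, not Axiom's actual build files: the directory layout, library name (`axiom_compiler`), and the exact set of system libs are assumptions, and the system libs shown are for Linux.

```cmake
# Hypothetical sketch: drive a Cargo build from CMake via ExternalProject_Add.
include(ExternalProject)

ExternalProject_Add(cargo_build
    SOURCE_DIR ${CMAKE_SOURCE_DIR}/compiler
    CONFIGURE_COMMAND ""
    BUILD_COMMAND cargo build --release
    BUILD_IN_SOURCE 1
    INSTALL_COMMAND ""
    BUILD_BYPRODUCTS ${CMAKE_SOURCE_DIR}/compiler/target/release/libaxiom_compiler.a
)

# Wrap the Cargo output in an imported static library target.
add_library(axiom_compiler STATIC IMPORTED GLOBAL)
add_dependencies(axiom_compiler cargo_build)
set_target_properties(axiom_compiler PROPERTIES
    IMPORTED_LOCATION ${CMAKE_SOURCE_DIR}/compiler/target/release/libaxiom_compiler.a
)

# Rust staticlibs expect a few system libraries at link time (Linux example).
target_link_libraries(axiom_compiler INTERFACE pthread dl m)
```

After that, `target_link_libraries(your_app PRIVATE axiom_compiler)` works like any other CMake dependency.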
Can you give a little more detail in how this compares architecturally with something like PD? I imagine a tradeoff of generating LLVM MIR and compiling it is that you can get better performance, but don't get the interactive experience of hearing changes immediately as you tweak the object graph.
Does the DSP graph operate on blocks or sample-by-sample?
I haven't actually properly played around with PureData (other than watching a few videos on it back when I started this project), so I can't really comment too much on how that compares.
I was initially worried about how interactive compiling would be, but it turns out (at least with the projects we've tested so far) it's easily able to keep up, with around 3ms from a modification to codegen being complete. The LLVM IR we generate is pretty simple, which helps, and we also try to recompile as little as possible by putting everything in separate LLVM modules (this hurts runtime performance a bit since we can't optimize across modules, but it doesn't end up mattering too much).
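The "recompile as little as possible" part boils down to caching compiled modules and only rebuilding the ones whose source changed. Here's a minimal Rust sketch of that idea, with the expensive JIT step stubbed out; the names (`IncrementalCompiler`, `compile_surface`) are hypothetical, not Axiom's actual API.

```rust
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Stand-in for a JIT-compiled LLVM module.
struct CompiledModule {
    source_hash: u64,
    // ...function pointers from the JIT would live here
}

fn source_hash(src: &str) -> u64 {
    let mut h = std::collections::hash_map::DefaultHasher::new();
    src.hash(&mut h);
    h.finish()
}

struct IncrementalCompiler {
    cache: HashMap<String, CompiledModule>,
    recompile_count: usize, // counts how often the expensive path runs
}

impl IncrementalCompiler {
    fn new() -> Self {
        IncrementalCompiler { cache: HashMap::new(), recompile_count: 0 }
    }

    /// Recompile a surface only if its source changed since last time.
    fn compile_surface(&mut self, name: &str, source: &str) {
        let hash = source_hash(source);
        match self.cache.get(name) {
            Some(m) if m.source_hash == hash => {} // unchanged: reuse cached module
            _ => {
                // The expensive MIR -> LLVM IR -> JIT step would go here.
                self.recompile_count += 1;
                self.cache
                    .insert(name.to_string(), CompiledModule { source_hash: hash });
            }
        }
    }
}
```

Editing one surface then only pays for recompiling that surface (plus relinking), which is why projects that split their graph into groups compile faster.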
The graph operates per sample - we really needed that feature to be able to do feedback loops, for things like FM synthesis.
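To make the per-sample requirement concrete, here's a small Rust sketch (my own illustration, not Axiom's code) of a two-operator FM voice where the modulator feeds back into its own phase. With block processing, that feedback would be delayed by a whole block; per-sample, it's delayed by exactly one sample, which is what feedback FM needs.

```rust
use std::f32::consts::TAU;

const SAMPLE_RATE: f32 = 44100.0;

struct FmVoice {
    carrier_phase: f32,
    mod_phase: f32,
    last_mod: f32, // one-sample feedback state
}

impl FmVoice {
    fn new() -> Self {
        FmVoice { carrier_phase: 0.0, mod_phase: 0.0, last_mod: 0.0 }
    }

    /// Generate one sample. `freq` is the carrier frequency in Hz, `ratio`
    /// the modulator/carrier frequency ratio, `index` the modulation depth,
    /// and `feedback` how much of the modulator's previous output is fed
    /// back into its own phase.
    fn next_sample(&mut self, freq: f32, ratio: f32, index: f32, feedback: f32) -> f32 {
        // The feedback term uses last sample's modulator output: this
        // single-sample dependency is why the graph can't run in blocks.
        let modulator = (self.mod_phase * TAU + self.last_mod * feedback).sin();
        self.last_mod = modulator;
        let out = (self.carrier_phase * TAU + modulator * index).sin();
        self.mod_phase = (self.mod_phase + freq * ratio / SAMPLE_RATE).fract();
        self.carrier_phase = (self.carrier_phase + freq / SAMPLE_RATE).fract();
        out
    }
}
```

Rendering a second of audio is then just calling `next_sample` 44100 times in a loop.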
Edit: I started adding an explanation of Axiom's architecture in the hope that someone with knowledge of similar projects could chime in, but it was getting quite long... I might look at writing a blog post on it sometime soon, if people would find that interesting.
Thanks for replying, I'd definitely be interested in more architectural details.
3ms to complete codegen is orders of magnitude faster than I expected. Does that include everything necessary from making the change through actually synthesizing samples?
Just did some proper testing on a reasonably complex project, and it looks like I was about an order of magnitude off (to be fair, it was 2AM here at the time and I wasn't reading the numbers right haha). 30-40ms is still fast enough that you don't notice it for the most part, though. For this specific project, it breaks down roughly like this:
- ~1ms to build the new MIR based on the editor state
- ~5ms to process/run passes on the MIR (this involves some funky graph traversal which could very likely be optimized a lot)
- ~12ms to generate LLVM IR
- ~14ms in the LLVM JIT
In a project that makes good use of groups (nodes that contain a surface inside and then expose some controls back out) I'd expect this to be lower, since we only need to perform those operations on the surfaces that change.
Just had a look, and it turns out that's including running LLVM's optimizer as well, which takes roughly half the time. It's basically doing the equivalent of -O2 currently. That's probably more aggressive than what's needed, but I haven't taken the time to fine-tune it yet.
Other than that, you're right that it might be a bit high. There are very likely parts of the lowering code which could be optimized; I just haven't had the need to yet.
Awesome, I've always wanted to make something like this myself but never got around to it!
It would be really neat if someone combined this with a very simple sequencer/tracker for creating and manipulating tunes, so that the pair (sequencer + synthesizer) could be dropped directly into projects. It could be a quick and easy way to add music to small-scale retro game projects, for example.
Yes, this was one of the initial inspirations for the design of Axiom! I never actually had the chance to play with it though... would be quite interested in learning how it works.
Looks cool, looking forward to playing around with it!
If it is compiling, can it also export code/libraries that can be embedded in other projects, or is the way to go to embed the entire thing? Since you mention demos as a target group I'd guess the first, but it isn't entirely clear.
Yep, that was one of the original ideas with using LLVM. The actual runtime that's needed is pretty small and the code optimizes well, so the output ends up being very close to what you'd get if you'd written the code yourself in, say, C++.
But that said, I haven't actually gotten to writing the 'exporter' yet (up to this point I've mostly been focused on usability and stability), so I can understand that it's not entirely clear.
Overall looks pretty neat. I’m a big fan of dark themes, but I think this one takes it a bit too far. I’m having trouble reading the text, which doesn’t contrast well against the background. Consider lightening some of the elements.
Yeah, you're not wrong that it's pretty dark. I have done a bit of work recently on lightening it up (it was even darker before!), but I'll definitely look at brightening it further in the near future.