Meanwhile, this thing sounds horrible (sorry), and as I understand it, it's more like a fancy lookup table than anything resembling AI. (I think they needed to precompute parts of the model to make it run in realtime.) Looking at the source code, there's nothing particularly impressive in there. The open source hardware is a nice touch, though.
https://heartofnoise.com/products/galaxynth - morph all types of sounds, laid out on a 2d canvas (made by me)
https://soniccharge.com/synplant - treating synth parameters as genomes, allows for combination and mutation of sounds
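To make the two ideas above concrete, here's a rough sketch (not the actual implementation of either product, and the parameter names are made up): morphing is just a weighted blend of normalized patch parameters, and the Synplant-style approach treats a patch as a genome you can cross over and mutate.

```python
import random

# Hypothetical parameter layout; real synths have far more parameters.
PARAMS = ["osc_pitch", "filter_cutoff", "env_decay", "lfo_rate"]

def morph(patches, weights):
    """Galaxynth-style idea: blend patches by weight (e.g. from a 2D canvas position)."""
    total = sum(weights)
    return {p: sum(w * patch[p] for w, patch in zip(weights, patches)) / total
            for p in PARAMS}

def crossover(a, b):
    """Synplant-style idea: child inherits each 'gene' from one parent at random."""
    return {p: random.choice((a[p], b[p])) for p in PARAMS}

def mutate(genome, amount=0.1):
    """Nudge each gene slightly, clamped to the normalized [0, 1] range."""
    return {p: min(1.0, max(0.0, v + random.uniform(-amount, amount)))
            for p, v in genome.items()}

parent_a = {p: random.random() for p in PARAMS}
parent_b = {p: random.random() for p in PARAMS}
child = mutate(crossover(parent_a, parent_b))
blend = morph([parent_a, parent_b], [0.7, 0.3])
```

The interesting part in practice isn't the arithmetic, it's choosing a parameter space where interpolation and mutation actually sound musical.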
These are commercial projects, but there is also a lot of interesting academic stuff going on, from IRCAM, CCRMA and MTG among others. Unfortunately there's not a strong tradition of open source in audio, although that is starting to change.
I'd also be interested in hearing about other recommended projects. I'm building my first synth at the moment, so I'm pretty keen on the topic right now.
Some things I'd like AI audio researchers to work on:
- High quality isolation of vocals and other instruments in recordings
- Convert audio recordings into multitrack MIDI/VST recordings - so a recording of a jazz quartet could be converted into the notes plus appropriate patches/sample banks
- Convert between styles of music
- Convincing "vocal synthesis" - think text-to-speech but with singing, allowing emulation of famous singers