First and foremost, the old FLTK-based GUI has been completely rewritten from scratch using a new custom toolkit (mruby-zest). This was done to make the interface more usable: easier to navigate, easier to use within plugin hosts, easier to get feedback from, and easier on the eyes (the FLTK UI wasn't exactly the prettiest).
Secondly, the work on the GUI was made possible by earlier work that decoupled the sound synthesis engine from the interface. As of the 2.5.x series, all parameters can be accessed out-of-process (or from a different networked computer) via Open Sound Control (OSC). This decoupling has improved low-latency processing and made it easy to map physical (MIDI) knobs/sliders onto the virtual ones.
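To give a feel for what that out-of-process control looks like, here is a minimal sketch of sending a parameter change to a running synth instance over OSC using liblo. The port number and the parameter path used here are illustrative assumptions, not guaranteed to match any particular Zyn release; check what your running instance actually exposes before relying on them.

```cpp
// Minimal sketch: changing a synth parameter out-of-process via OSC (liblo).
// The port ("7777") and the parameter path ("/part0/Pvolume") are assumptions
// for illustration only.
#include <lo/lo.h>
#include <cstdio>

int main()
{
    // Address of a locally running synth instance (hypothetical port).
    lo_address synth = lo_address_new("localhost", "7777");
    if (!synth)
        return 1;

    // Send an integer parameter change; "i" is the OSC typetag for int32.
    if (lo_send(synth, "/part0/Pvolume", "i", 100) < 0)
        std::fprintf(stderr, "OSC send failed: %s\n", lo_address_errstr(synth));

    lo_address_free(synth);
    return 0;
}
```

The same mechanism is what lets MIDI controllers, scripts, or a GUI running on another machine drive the engine's parameters.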
Overall this work has resulted in a number of child projects, including the RtOSC OSC implementation (https://github.com/fundamental/rtosc), the mruby-zest GUI toolkit (https://github.com/mruby-zest), and the Stoat LLVM static-analysis tool (https://github.com/fundamental/stoat).
I'm currently hoping that the large interface upgrade (supported by some recent crowdfunding efforts) will get more people interested in the project and build momentum for the project as a whole.
Has this funding model for open source (releasing binaries, then waiting some time to release the sources) been a success? Have you been able to generate decent revenue from the crowdfunding efforts?
Paid binaries are going to continue to be available for platforms where users are unlikely to want to compile the software themselves (i.e. Windows/macOS). This will help maintain support on those platforms as well as fund other aspects of development.
Can't hear much (or any) aliasing in the tracks on the page. How does Zyn manage to avoid that?
For both the PADsynth algorithm and the ADsynth engine, users define an oscillator in terms of its frequency components. To play the oscillator, that definition is converted into a time-domain wavetable. For PADsynth, a collection of wavetables is created, each corresponding to a different frequency (and thus a different antialiasing threshold). For ADsynth, when a user hits a note, a new wavetable is built by removing the high frequencies that would alias before performing an inverse FFT.
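As a rough illustration of that bandlimiting idea (not Zyn's actual code), the sketch below builds a single-cycle wavetable from a list of harmonic amplitudes, keeping only the harmonics that stay below Nyquist for the note being played. A real implementation would zero the offending bins and run an inverse FFT rather than using this direct additive sum.

```cpp
// Hypothetical sketch: render one cycle of an oscillator into a wavetable,
// dropping every harmonic that would land at or above Nyquist for this note.
#include <cmath>
#include <vector>

std::vector<float> build_wavetable(const std::vector<float> &harmonic_amps,
                                   float note_freq,    // fundamental in Hz
                                   float sample_rate,  // Hz
                                   std::size_t table_size)
{
    const float two_pi  = 6.283185307f;
    const float nyquist = sample_rate / 2.0f;
    std::vector<float> table(table_size, 0.0f);

    for (std::size_t h = 0; h < harmonic_amps.size(); ++h) {
        const float harmonic_freq = note_freq * float(h + 1);
        if (harmonic_freq >= nyquist)
            break; // everything from here up would alias, so drop it

        for (std::size_t i = 0; i < table_size; ++i) {
            const float phase = two_pi * float(h + 1) * float(i) / float(table_size);
            table[i] += harmonic_amps[h] * std::sin(phase);
        }
    }
    return table;
}
```

In the PADsynth case a set of such tables would be pre-rendered for different pitch ranges; in the ADsynth case a table is rebuilt when a note is triggered, using that note's frequency to set the cutoff.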
There are still some issues with this approach, since the frequency of a note can change over time (e.g. frequency LFOs/envelopes or ADsynth cross-oscillator modulation), so in the future I'd like to add optional oversampling for the cases where aliasing isn't eliminated by the current approach.
Here is a nice breakdown of the original author's thoughts on harmonics and randomness:
It's a very interesting read for those who are interested in such things, and it gives some insight into what has helped elevate this particular synth above the more robotic or sterile-sounding offerings out there.
I wanted to try it on OS X, but I have to install JACK. Will you consider supporting Audio Unit in the future?
As for AU support, that largely depends on it getting implemented in the plugin abstraction layer, which is borrowed from the DISTRHO Plugin Framework: https://github.com/DISTRHO/DPF
If you find the idea of using a Raspberry Pi for live use interesting, I'd recommend taking a look at the work being done at http://zynthian.org/. They're a pretty cool open hardware/software project that does just that (for Zyn and other synths/effects).
Thank you for this wonderful piece of software which has helped me take several tracks out of my head and into reality :-)