

Audio synthesis / processing in JavaScript - quinnirill
http://audiolibjs.org

======
paulnasca
Does it support large (256k samples) array FFTs? I could use it for PADsynth
algorithm ( <http://zynaddsubfx.sourceforge.net/doc/PADsynth/PADsynth.htm> ).
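For readers unfamiliar with it, the heart of PADsynth is building a frequency-domain amplitude profile (a Gaussian "bump" around each harmonic, wider for higher harmonics), attaching random phases, and taking one large IFFT. A minimal sketch of the profile-building step, with my own illustrative names and parameters (not code from the paper):

```javascript
// Sketch of the PADsynth spectrum-building step (not the reference
// implementation): each harmonic contributes a Gaussian bump of
// amplitude to the frequency profile. Random phases plus one large
// IFFT (not shown) then yield a smooth, loopable wavetable.
function padsynthProfile(N, sampleRate, f0, harmonics, bandwidthHz) {
  const amp = new Float64Array(N / 2); // amplitude spectrum, bins 0..N/2-1
  for (let h = 1; h <= harmonics; h++) {
    const centerBin = (f0 * h * N) / sampleRate;
    const bwBins = (bandwidthHz * h * N) / sampleRate; // wider for higher harmonics
    for (let i = 0; i < amp.length; i++) {
      const x = (i - centerBin) / bwBins;
      amp[i] += Math.exp(-x * x) / h; // Gaussian bump with 1/h rolloff
    }
  }
  return amp;
}
```

For a 256k-sample table, the IFFT of this profile is the expensive part, which is why the FFT size question above matters.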

Also, from the API, it seems that I could implement the Paulstretch algorithm (
<http://hypermammut.sourceforge.net/paulstretch/> ) easily. When I have some
free time, I'll try it. EDIT: I looked a bit closer at the EFFECT interface,
and Paulstretch cannot be implemented as an effect, only as a generator. It
needs to be studied more :)

~~~
quinnirill
fft.js supports buffers as large as JS can handle; that depends a bit on the
browser, but 256k samples will probably work. You might have some trouble in
Chrome if you have too much other stuff in memory as well. Would be cool to
see a demo of PADsynth on the web, especially with audiolib.js!!!

Paulstretch would be a perfect addition to the library; it would open whole
new doors, for example for samplers (whoa, the sample is sustained forever!),
of course with some limitations with regard to real time. Send me a message on
GitHub if you decide to take on the task, I'd be glad to give you a hand
with it! :)

~~~
paulnasca
Unfortunately, I'm quite busy at the moment porting Paulstretch to Android. I
have rewritten Paulstretch as a fixed-point (C++) implementation, and now I
have to write the GUI for Android. After I release it, I hope to have some
time to port it to JS. Anyway, feel free to add the task for me (my user is
'paulnasca').

If you want to look at the Paulstretch algorithm, you can have a look at a
simple Python implementation:
[https://github.com/paulnasca/paulstretch_python/blob/master/...](https://github.com/paulnasca/paulstretch_python/blob/master/paulstretch_stereo.py)
The algorithm is pretty fast; the most time-consuming operations are the FFTs
and IFFTs. That's why I think it might work in real time in JS.
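The spectral step in that Python reference (keep each FFT bin's magnitude, discard its phase, substitute a random one, then IFFT and overlap-add the windowed chunks) can be sketched as follows; the FFT/IFFT themselves are assumed to come from a library such as fft.js:

```javascript
// Sketch of Paulstretch's spectral step: preserve each bin's
// magnitude, replace its phase with a uniformly random one. The
// surrounding FFT/IFFT and windowed overlap-add are assumed to be
// provided elsewhere (e.g. by fft.js).
function randomizePhases(re, im) {
  for (let i = 0; i < re.length; i++) {
    const mag = Math.sqrt(re[i] * re[i] + im[i] * im[i]);
    const phase = Math.random() * 2 * Math.PI;
    re[i] = mag * Math.cos(phase);
    im[i] = mag * Math.sin(phase);
  }
}
```

Because this step is independent per chunk, the stretch factor only affects how far the analysis window advances between chunks, which is what makes extreme stretches cheap.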

------
rsiqueira
I wrote an interactive demo that creates sound waves (FM synthesis) using
JavaScript. Now I will try audiolib.js to make it better and add effects.
I was creating effects from scratch (using rand to produce white noise,
etc.): <http://js.do/sound-waves-with-javascript/> (Click each "Interesting
Sound and Waves" to see each effect. Graphics/waves are created using
Processing.JS)
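For readers unfamiliar with the technique, two-operator FM is just a carrier sine whose phase is modulated by a second sine. A minimal sketch with my own parameter names (not code from the demo above):

```javascript
// Minimal two-operator FM synthesis: a carrier sine whose phase is
// modulated by a modulator sine. The modulation index controls
// brightness (how much energy goes into the sidebands).
function fmTone(carrierHz, modHz, index, sampleRate, length) {
  const out = new Float32Array(length);
  for (let n = 0; n < length; n++) {
    const t = n / sampleRate;
    out[n] = Math.sin(2 * Math.PI * carrierHz * t +
                      index * Math.sin(2 * Math.PI * modHz * t));
  }
  return out;
}
```

White noise, by contrast, is just `Math.random() * 2 - 1` per sample, which is presumably what the "rand" effects above do.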

~~~
quinnirill
Wow, that's very cool! I think you could build a great learning resource for FM
synthesis from there, especially if you explained how each of those examples
is put together!

------
bprater
Ooh. This might be the beginning of JS/web-based synths. I'm curious what
kinds of cool synths musicians might dream up, and how having synths on the
web could make them different from anything before. Are browser makers
considering an API that would let me plug my MIDI keyboard into a webpage?

~~~
quinnirill
One of the earlier demos (about a year old and unfinished):
<http://niiden.com/orbisyn/> :) has a virtual MIDI keyboard, to which you can
connect a MIDI synth if you have Java. It isn't exactly native support, and
it's very unpredictable, but it's something that can be used in the meantime.

However, the Device API should allow access to external inputs, such as
microphones and MIDI controllers, so it's coming along!

------
aaronblohowiak
This is interesting, but it seems like they are processing/producing a single
sample at a time (though some effects seem to be buffer-based). While
convenient, that is a very, very inefficient way of doing things.

~~~
quinnirill
Actually, the API allows for both, whichever suits your needs. When doing
buffer-based processing, you can even add advanced automation, etc.

~~~
aaronblohowiak
I've looked at all of the examples in src/generators and at the amplitude
processor, and I don't see the buffer-based generate() -- maybe I'm just
missing something?

~~~
quinnirill
Ah yes, looking at the source code makes it a bit obscure, because the
inheritance patterns are described in the wrappers. For generators, you can
do, for example, osc.append(buffer, channelCount); because generators are
multi-channel by nature. For effects, you have to create a multi-channel
instance, like flt = audioLib.LP12Filter.createBufferBased(channelCount,
sampleRate, cutoff); and then you can use the append function similarly to
generators.

This is a design choice to keep implementing new effects simple, with the
framework providing this extra functionality for you.
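The pattern described here (authors write only a per-sample generate()/getMix() pair, and the framework supplies the buffer-filling append()) might look roughly like this. This is my own illustrative sketch of the shape of the design, not audiolib.js source:

```javascript
// Illustrative sketch of the pattern described above: a generator
// implements per-sample generate() and getMix(), and a shared,
// framework-provided append() loops over the buffer for it.
// This is NOT audiolib.js source, just the shape of the design.
function SineGen(sampleRate, freq) {
  this.phase = 0;
  this.step = 2 * Math.PI * freq / sampleRate;
}
SineGen.prototype.generate = function () {
  this.phase += this.step; // advance one sample
};
SineGen.prototype.getMix = function () {
  return Math.sin(this.phase);
};
// "Framework-provided": fill an interleaved multi-channel buffer.
SineGen.prototype.append = function (buffer, channelCount) {
  for (let i = 0; i < buffer.length; i += channelCount) {
    this.generate();
    for (let ch = 0; ch < channelCount; ch++) {
      buffer[i + ch] += this.getMix();
    }
  }
  return buffer;
};
```

The trade-off is the one debated below: each sample costs a method call, but new generators need only implement the two per-sample methods.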

~~~
aaronblohowiak
I see that you have layered a buffer-style API on top of your per-sample code
(api-buffer-effects.js), and I could still be missing something, but it looks
like that just automates the call to process each sample individually --

    
    
        //api-buffer-effect
        self.effects[n].pushSample(buffer[i + n], 0);
    
        //api-generator
        out[i + n] = this.getMix(n) * this.mix + buffer[i + n];
    

This is _very_ different from being able to process batches of samples within
your effect or osc code -- even if the JIT inlines the function calls and the
this.mix accesses, you will still incur unnecessary setup cost.

Of course, without profiling this is just speculation.

~~~
quinnirill
Yep, I definitely see where you're coming from with this. It was my initial
take as well, but as crazy as it sounds, after some time spent comparing
different approaches, it turned out that this is the fastest way to do it in
the JavaScript world. That holds even in older browsers such as IE8 (I haven't
bothered measuring anything in browsers older than that, because IE8 is the
oldest browser with some way to output generated sound), and it's especially
true with the latest optimizations such as Crankshaft.

However, I should probably set up some public performance tests and compare
the instructions these things compile to, to have some hard data backing my
claims; I wouldn't expect anyone to just take my word for it, as I've only
tested these things on the fly.

However, this approach is more modular and fits more use cases, and that
matters even more to me right now. But trust me, I've spent my time optimizing
every operation, to the point where it doesn't make things harder to use. :)

~~~
aaronblohowiak
Very cool! It is so exciting to see JIT technology improve so much. In C, it
is definitely faster to go in batches -- batching in C also makes it easy to
plug in vector libraries if they're available on the current platform.

I would _love_ to see your performance benchmarks.

