I would like to thank everyone for the kind words. You guys are absolutely amazing! Even in my wildest expectations, I didn't anticipate such a warm response and such a wide audience.
Thank you!
I am sure there are people who deserve that much attention and more, but aren't getting it, so I would like to say something to you guys working tirelessly, day in and day out, on your projects - keep doing what you love!
I don't always talk about synthesizers or music, but you can follow me in these places:
I find the quality of the synth amazing. Over the years I have tried out many synth plugins (for Reason 5, for example), and I have never come across one in which all the settings sound sooo good. I instantly wanna compose something with it. Good job @author!
Thank you!
I also used the Minimoog as my primary source of inspiration. Robert Moog was a brilliant man. And so modest, too... He changed music forever.
I must admit that I was pretty sceptical about it, all the way up to when I wrote a couple of lines to connect a few nodes, then a very simple pair of noteOn/noteOff functions (no envelopes, no nothing) and a tiny function to parse the MIDI key number... And when I started playing that single sine wave, without any flavor or effects on it... I loved it! The latency was great, the oscillator produced such a pleasant sound, and above all... I (couldn't make this "capital-enough") had built this teeny-tiny (for me - uber-cool) instrument. :)
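For readers curious what that "first notes" setup looks like, here is a minimal sketch under my own assumptions - a single sine oscillator, noteOn/noteOff, and MIDI key to frequency conversion. The names (midiToFreq, noteOn, noteOff) are illustrative, not the synth's actual code:

```javascript
// Convert a MIDI key number to frequency (A4 = key 69 = 440 Hz).
function midiToFreq(key) {
  return 440 * Math.pow(2, (key - 69) / 12);
}

// The Web Audio part only runs in a browser, and is never invoked
// at load time, so this file can be inspected outside one too.
let ctx = null;
const active = {}; // key number -> running oscillator

function noteOn(key) {
  ctx = ctx || new AudioContext();
  const osc = ctx.createOscillator();
  osc.type = "sine";
  osc.frequency.value = midiToFreq(key);
  osc.connect(ctx.destination);
  osc.start();
  active[key] = osc;
}

function noteOff(key) {
  const osc = active[key];
  if (osc) {
    osc.stop(); // no envelope yet, so an audible click is expected
    delete active[key];
  }
}
```

Plugging an envelope between the oscillator and the destination is the natural next step, but even this bare version makes sound on a key press.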
The filter does not appear to keytrack, so it sounds more or less powerful depending on which notes you play (this is sometimes desirable, but usually not).
It is possible to make the filter hoot at low cutoff and maximum emphasis.
It's bad if your synth is aiming to be a virtual analog. The correct behavior of extreme resonance, at least in a Moog-type filter, is self-oscillation [0], but actually achieving that through digital subtractive techniques remains CPU-intensive and a cutting-edge filter design challenge, because all the action is taking place at the top of the spectrum, where sampling rates start to matter.
A generic "clean digital" filter sound that can do typical resonant sweeps would be acceptable for a web synth, and a purely additive emulation like that of IL Harmless would be good enough for almost everything. There's no such thing as a totally worthless sound, but some sounds are more immediately useful for musical purposes, and letting the filter break usually indicates a low attention to detail. Low-quality digital resonance tends to have a "dentist's drill" timbre to it, even when it's not broken.
A last thought: I don't know how your envelopes are made right now, but they matter a great deal to the timbre. The curvature matters, as does the speed. You can use a generic filter with poor resonance performance, but the overall result will still sound pretty good with quality oscillators and well-tuned envelopes.
This is very useful. I am not using my own filter right now - it's the default lowpass one in Web Audio. I think it is probably one of the "not-so-good" parts. I don't want to offend any of the guys who made the spec, they did more than anyone expected with this API, but I do notice this "dentist's drill" timbre. To me it always feels like... "pixelated sound".
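For what it's worth, the keytracking observation above can be addressed even with the stock BiquadFilterNode: scale the cutoff with the note's pitch so high and low notes get comparable brightness. A sketch, where keytrackedCutoff and the 0..1 amount knob are my own illustrative names, not part of Web Audio:

```javascript
const BASE_NOTE_FREQ = 261.63; // middle C, used as the reference pitch

// amount = 0 -> fixed cutoff; amount = 1 -> cutoff follows the pitch fully.
function keytrackedCutoff(baseCutoff, noteFreq, amount) {
  return baseCutoff * Math.pow(noteFreq / BASE_NOTE_FREQ, amount);
}

// In the browser you would then apply it per note, something like:
// const filter = ctx.createBiquadFilter();
// filter.type = "lowpass";
// filter.frequency.setValueAtTime(
//   keytrackedCutoff(1000, noteFreq, 0.5), ctx.currentTime);
```

A partial amount (around 0.5) is a common compromise: the filter opens up for high notes without tracking them one-to-one.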
As for the envelopes - I spent quite some time tuning them. Still, this is the first time I've worked on such a project, so I don't know exactly how good or bad they are.
Sounds incredible. Would be awesome if you could add an arpeggiator or a basic step sequencer. It's a bit difficult to fiddle with the knobs while pressing down the keys.
Should be the same latency that you get when working with your own DAW. It's just translating your system's internal audio APIs (so for me on a Mac, that's Core Audio) into something that can be used on the web with JS.
The Web Audio API is pretty awesome. I've been trying to pair up with some folks to see if we can build some kind of web DJ software. Imagine being able to JIT download songs from Spotify, Soundcloud and Youtube and DJ with them :)
It's the amount of time that it takes from the press/release of the key to trigger the actual sound, and then the actual sound to come out of the speakers. Possibly something like this:
http://en.wikipedia.org/wiki/Latency_%28audio%29
And my lame interpretation (which misses important technical aspects and does not give timings for the individual points, but you can imagine them):
1. User presses a key or clicks the mouse.
2. The keyboard/mouse (USB/COM) sends a signal to the OS.
3. The OS sends the signal to the listening windows.
4. The browser gets the key/mouse signal.
5. It sends it for handling to the underlying JavaScript code (guessing here, haven't looked).
6. Some decision/calculation is done (wave signal generated?).
7. The wave is sent to the API (Web Audio API? again, not familiar).
8. The API call is handled by the browser, and possibly each browser has some kind of asynchronous audio processing, maybe relying on a platform-specific audio library (DirectX, OpenAL, I dunno really).
9. This library might further do some pre-mixing, or send it raw to the OS through some other way (write to /dev/dsp? or who knows what).
10. The OS takes it and queues it to the OS audio manager.
11. The OS audio manager picks it up every 1 ms, or every 5 ms, or every 15 ms, or... you guessed it - who knows :)
12. The wave is possibly then processed by the AC97 (or was it AC79? not a hardware guy here).
13. Finally it might come out through the cables to your speakers.
14. Then it has to travel some short distance. I hope you are not under water, as that would actually reduce the travel time.
15. Eventually your ears hear it, possibly with a ~0.5 ms delay for one of your ears if you are sitting sideways (or maybe less/more).
So from 1 to 15 is the amount of time it takes for you to press the key and hear the feedback (i.e. the sound).
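Of the steps above, only roughly 5-7 are under the synth's control, so that is the slice worth measuring in code. A hedged sketch - the property names come from the actual Web Audio spec, while inAppLatency and the handler wiring are my own illustration:

```javascript
// In a browser key handler you could bracket the app-controlled part:
//
// window.addEventListener("keydown", (e) => {
//   const handled = performance.now();   // step 5: JS received the event
//   noteOn(keyFromEvent(e));             // steps 6-7: generate and schedule
//   const scheduled = performance.now();
//   console.log("in-app:", inAppLatency(handled, scheduled), "ms;",
//               "event->handler:", handled - e.timeStamp, "ms");
// });
//
// Browsers also expose estimates for the later steps in the chain:
//   ctx.baseLatency    // processing -> hardware buffer
//   ctx.outputLatency  // hardware buffer -> speakers (where supported)

// The measurable delta itself is trivial:
function inAppLatency(handledAt, scheduledAt) {
  return scheduledAt - handledAt;
}
```

Everything before step 5 and after step 7 can really only be verified with external equipment, as discussed below in the thread.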
Awesome! Now let's see how we make that a CI test... hm... :D
Really? Is this the way? When you asked me, I started thinking - maybe the latency from the keyboard all the way up to my code is known... in some spec. As well as the latency after it. As a matter of fact, I don't have any control over anything before or after, so only the part of the pathway under my control makes sense to measure.
I definitely will give this some thought, thanks for asking.
This is by far the most portable, testable and unbiased way of doing it. If you rely on internal measuring then, well... actually, I don't know how you could do that - when the video/audio signal actually leaves is only verifiable with some external equipment... like a camera ;)
I would also do something like tap your desk sharply with a pen right before you film the test, and make sure the audio track is aligned with video at that point. I have no idea how accurate A/V syncing is on say a smartphone, but that would be a bummer if that was completely throwing off the test.
This is an inspirational thing to see on the web. My, we've come a long way. Thank you for bringing this all together.
How are the waveforms rendered? Are these few- or single-cycle samples? I noted that, with slight detuning, I was getting inconsistent tracking for the detuning. Higher notes yield wider pitch differences than lower notes.
I was unable to get consistent results with really short envelopes. e.g. 0-4-0-0 AEnv for percussive sounds or synth blips. Similarly, fast filter envelope.
Would you consider an alternate data-entry mode? E.g. toggle a mode where you can type parameter values in instead of using the sliders & knobs?
I pulled it up on my Surface Pro 3 and, with some zoom, could play it via touchscreen. It would be nice to be able to get bigger keys, touch spots, etc.
Definitely the sort of thing that makes me want to see it grow and continue. Excellent work. Keep it up!
About the detuning issue - I don't hear it. The detune step is 100 cents (a semitone). I tried it on the Clean Sine patch: pressing C on the 4th octave, with only oscillator 2 enabled and detuned max higher (+8 semitones), sounds the same as pressing G# on the 4th octave with only oscillator 1 enabled and no detuning. The issue here, for me, is that the detune is stepped in semitones, while it should be continuous.
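Continuous detune is actually already available in Web Audio: OscillatorNode.detune is an AudioParam measured in cents, so a slider could map to any fraction of a semitone instead of stepping by 100. A small sketch - the detune property is real spec API, while centsBetween is my own helper for sanity-checking tracking:

```javascript
// In the browser: detune an oscillator by a fraction of a semitone.
// osc.detune.value = 37; // +37 cents, somewhere between unison and a semitone

// Cents between two frequencies - handy for verifying that detune
// tracking stays constant across the keyboard.
function centsBetween(f1, f2) {
  return 1200 * Math.log2(f2 / f1);
}
```

If the detune interval in cents comes out the same for low and high notes, the tracking is consistent.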
For a percussive sound, the primary envelope should probably be more like 0-0-4-2 than 0-4-0-0. My envelope is a bit nonstandard, though: you are probably used to having the Decay always start from 1 (or 1 * noteVelocity), whereas mine starts from wherever the Attack ended.
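That nonstandard decay behavior can be sketched as a list of [time, level] breakpoints, which in the browser you would feed to gainNode.gain.linearRampToValueAtTime. All names here are illustrative assumptions, not the synth's actual code:

```javascript
// ADSR-ish breakpoints where the decay starts from the attack's end
// level (attackPeak) rather than from a fixed 1.0.
function envelopeBreakpoints(attack, decay, sustain, release, attackPeak) {
  return [
    [0, 0],                    // start silent
    [attack, attackPeak],      // rise to wherever the attack ends
    [attack + decay, sustain], // decay from that level, not from 1.0
    // ...hold at sustain until noteOff, then ramp to 0 over `release`
  ];
}
```

With attackPeak below 1.0 this gives the softer onset described above; setting attackPeak to 1.0 (or to the note velocity) recovers the conventional shape.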
Other input modes make sense - I am logging this one.
I want to make a rendering especially for mobile. This is on my backlog already.
Thank you very much for the kind words and the great input.
Good job! I'd add social patch sharing, a record buffer, and upload-to-SoundCloud as your next priorities - this is as good as many starter softsynth plugins, so make the most of it and become a fixture in the inevitable web DAW paradigm.
Eventually someone will build a DAW... Well, I wouldn't expect native performance, but imagine the power of having an open JavaScript interface! Hope something interesting comes out of this...
This has a downside, though - when you try to clean up a sound and it still seems like a couple of oscillators are running... arghhhh, I have another tab playing.
Thank you! I have played on an M-Audio keystation 61es, the whole time I've worked on it.
Just saw how many knobs and sliders you have on that thing... man, I have a lot of work to do. :) On that note, I am thinking about allowing people to assign knobs, and maybe having a set of pre-written "drivers"... Will think about it.
Hi TeeWEE, if you are talking about a QWERTY - sadly I didn't have the time to get to it. But it's coming!
If you mean MIDI - only Chrome (latest, 43) supports Web MIDI without a flag. The procedure is: plug in your keyboard, turn it on, then restart the browser (not just the tab, the whole browser).
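For the curious, the Web MIDI hookup can be sketched like this. navigator.requestMIDIAccess is the real entry point in Chrome; parseMidiMessage is my own illustrative helper for the 3-byte messages a keyboard sends:

```javascript
// Decode a raw MIDI message into a note event.
// Note: per MIDI convention, a note-on with velocity 0 means note-off.
function parseMidiMessage([status, key, velocity]) {
  const command = status & 0xf0;
  if (command === 0x90 && velocity > 0) {
    return { type: "noteOn", key, velocity };
  }
  if (command === 0x80 || command === 0x90) {
    return { type: "noteOff", key };
  }
  return { type: "other" };
}

// Browser-only wiring:
// navigator.requestMIDIAccess().then((midi) => {
//   for (const input of midi.inputs.values()) {
//     input.onmidimessage = (e) => handle(parseMidiMessage(e.data));
//   }
// });
```

From there, the noteOn/noteOff events plug straight into whatever the synth already does for mouse and keyboard input.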
Well, not at the moment, no. Once there is any form of automation allowed, probably yes. Or if it is to become a part of some web DAW, this would not be handled by the instrument, I guess.
I am also looking into job opportunities (or even if you just want to connect), so here is my LinkedIn: https://bg.linkedin.com/pub/nikolay-tsenkov/38/754/955
Cheers!