Hacker News
Show HN: Viktor NV-1 Synthesizer (nicroto.github.io)
127 points by tsenkov on June 1, 2015 | 60 comments

I would like to thank everyone for the kind words. You guys are absolutely amazing! Even in my wildest expectations, I didn't anticipate such a warm response and such a wide audience. Thank you!

I am sure there are people who deserve this much attention and more but aren't getting it, so I would like to say something to all of you working tirelessly, day in and day out, on your projects: keep doing what you love!

I don't always talk about synthesizers or music, but you can follow me in these places:

  * Twitter: https://twitter.com/NikolayTsenkov
  * Ello: https://ello.co/tsenkov
  * Facebook: https://facebook.com/NikolayTsenkov
I am also looking into job opportunities (or reach out even if you just want to connect), so here is my LinkedIn: https://bg.linkedin.com/pub/nikolay-tsenkov/38/754/955


I find the quality of the synth amazing. Over the years I have tried out many synth plugins (for Reason 5, for example) and I have never come across one in which all the settings sound sooo good. I instantly want to compose something with it. Good job, @author!

I was just coming here to say something similar. Really beautiful sound, the vibrato is so milky smooth

Thank you. :)

Thank you so much for these kind words.

This is a solid-sounding synth. From a former sound engineer.

Wow, thanks! :)

That's a very cool piece of work! I recorded a brief demo using a MIDI controller - https://www.youtube.com/watch?v=ZU8FE9xLBdM and the lack of latency is fantastic.

Question - is it possible to do polyphonic sounds?

That's awesome! :)

I am going to implement polyphony; I just decided to release early, since a monosynth is still a playable instrument.

And it will definitely be a switchable option (mono/poly), because I just love the monosynth style of playing.

Did you see you can save patches and Export/Import your entire custom library?

I did :-) If only all plugins exported in JSON...


I just recently looked at the Web Audio API and the Google Moog Doodle (http://www.google.com/doodles/robert-moogs-78th-birthday) and I'm seriously impressed by what you did there. Congrats, Sir, keep up the good work!

Thank you! I also used the Minimoog as my primary source of inspiration. Robert Moog was a brilliant man. And so modest, too... He changed music forever.

My first impression was that this was rendered with WebGL, and I couldn't believe this could be done on the web. Kudos!

I didn't think this was possible, since audio tooling was long claimed to be desktop-only territory (VST, JUCE, Cubase, Pro Tools).

Very cool indeed!

(And good learning material for me, an ex game-tools C++ developer, now doing web backend/frontend in Java/GWT.)

Thank you for the kind words.

I must admit that I was pretty sceptical about it, all the way up to the point when I wrote a couple of lines to connect a few nodes, then a very simple pair of noteOn/noteOff functions (no envelopes, no nothing) and a tiny function to parse the MIDI key number... And when I started playing that single sine wave, without any flavor or effects on it... I loved it! The latency was great, the oscillator produced such a pleasant sound, and above all... I (couldn't make this "capital-enough") had built this teeny-tiny (for me - uber-cool) instrument. :)
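The pieces described above can be sketched like this - a hypothetical reconstruction, not the NV-1's actual code; `midiNoteToFrequency`, `noteOn`, `noteOff` and the `ctx` audio-context object are illustrative names:

```javascript
// Standard equal-temperament mapping: MIDI note 69 = A4 = 440 Hz.
function midiNoteToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Bare-bones noteOn/noteOff (no envelopes, no nothing), assuming a Web Audio
// AudioContext `ctx` is available in the browser; `osc` is an OscillatorNode.
function noteOn(ctx, note) {
  const osc = ctx.createOscillator(); // sine wave by default
  osc.frequency.value = midiNoteToFrequency(note);
  osc.connect(ctx.destination);
  osc.start();
  return osc;
}

function noteOff(osc) {
  osc.stop();
}

console.log(midiNoteToFrequency(69)); // 440
console.log(midiNoteToFrequency(60)); // middle C, ~261.63
```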

The worst part about coding audio tools is getting around the awful host code for your plugins in things such as Pro Tools.

The filter does not appear to keytrack, so it gets more or less powerful depending on which notes you play (this is sometimes desirable, but usually not).

It is possible to make the filter hoot at low cutoff and maximum emphasis.

The filter doesn't have keyboard tracking yet. I've seen it on many synths and I definitely want to have it on the NV-1.
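For anyone wondering what keytracking amounts to, here is a minimal sketch (my own illustration, not NV-1 code; all names are assumptions): scale the cutoff by the ratio of the played pitch to a reference pitch, raised to a tracking amount between 0 and 1.

```javascript
// Keytracking sketch: shift the filter cutoff with the played note so that
// high notes aren't dulled by a fixed cutoff.
function keytrackedCutoff(baseCutoffHz, noteFreqHz, refFreqHz, amount) {
  // amount = 0 -> fixed cutoff; amount = 1 -> cutoff follows pitch 1:1
  return baseCutoffHz * Math.pow(noteFreqHz / refFreqHz, amount);
}

console.log(keytrackedCutoff(1000, 880, 440, 1)); // 2000 - an octave up doubles the cutoff
console.log(keytrackedCutoff(1000, 880, 440, 0)); // 1000 - no tracking, cutoff stays put
```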

Oh, you're right! On a very low C, full emphasis and cutoff at 1. Is this bad?

It's bad if your synth is aiming to be a virtual analog. The correct behavior of extreme resonance, at least in a Moog-type filter, is self-oscillation [0], but actually achieving that through digital subtractive techniques remains CPU-intensive and a cutting-edge filter design challenge, because all the action is taking place at the top of the spectrum, where sampling rates start to matter.

A generic "clean digital" filter sound that can do typical resonant sweeps would be acceptable for a web synth, and a purely additive emulation like that of IL Harmless would be good enough for almost everything. There's no such thing as a totally worthless sound, but some sounds are more immediately useful for musical purposes, and letting the filter break usually indicates a low attention to detail. Low-quality digital resonance tends to have a "dentist's drill" timbre to it, even when it's not broken.

A last thought: I don't know how your envelopes are made right now, but they matter a great deal to the timbre. The curvature matters, as does the speed. You can use a generic filter with poor resonance performance, but the overall result will still sound pretty good with quality oscillators and well-tuned envelopes.
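To illustrate the curvature point: a linear and an exponential decay between the same two levels take very different paths, and the difference is clearly audible. A minimal sketch with illustrative functions, not the NV-1's envelope code:

```javascript
// Two decay shapes between the same start and end levels.
function linearDecay(start, end, t, duration) {
  return start + (end - start) * Math.min(t / duration, 1);
}

// Analog-style RC curve: approaches `end` asymptotically with time constant tau.
function exponentialDecay(start, end, t, tau) {
  return end + (start - end) * Math.exp(-t / tau);
}

// Halfway through a decay from level 1 toward 0:
console.log(linearDecay(1, 0, 0.5, 1));         // 0.5
console.log(exponentialDecay(1, 0, 0.5, 0.25)); // ~0.135 - already much quieter
```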

[0] https://www.youtube.com/watch?v=dVgIf71uWB4

Thank you.

This is very useful. I am not using my own filter right now; it's the default low-pass one in the Web Audio API. I think it is probably one of the "not-so-good" parts of the API. I don't mean to offend any of the people who made the spec - they did more than anyone expected with this API - but I do notice this "dentist's drill" timbre. I always perceive it as... "pixelated sound".

As for the envelopes - I spent quite some time tuning them. Still, this is the first time I've worked on such a project, so I don't know exactly how good or bad they are.

Great info! Very useful.

Sounds incredible. It would be awesome if you could add an arpeggiator or a basic step sequencer. It's a bit difficult to fiddle with the knobs while pressing down the keys.

Thank you!

Arpeggiator is on my radar. Thanks.

Latency seems quite low! Seems to sound quite good too from my laptop speakers :) WELL DONE!

Is latency known / have you measured?

Should be the same latency that you get when working with your own DAW. It's just translating your system's internal audio APIs (so for me on Mac, it's Core Audio) into something that can be used on the Web with JS.

The Web Audio API is pretty awesome. I've been trying to pair up with some folks to see if we can build some kind of web DJ software. Imagine being able to JIT download songs from Spotify, Soundcloud and Youtube and DJ with them :)

This seems like a pretty cool idea! Where can I follow?

Thank you! The truth is I have no idea what the latency is. I give all the credit to the well made WebAudio API. :)

If you have a suggestion for how this could be measured reliably (won't it be relative to the hardware?), please share.

It's the amount of time from the press/release of a key to the sound being triggered, and then for that sound to come out of the speakers. Possibly something like this: http://en.wikipedia.org/wiki/Latency_%28audio%29

And my lame interpretation (which misses important technical aspects and does not give timing for the individual points, but you can imagine them):

   1. User presses a key or clicks the mouse.

   2. The keyboard/mouse USB/COM interface sends a signal to the OS.

   3. The OS sends the signal to the listening windows.

   4. The browser gets the key/mouse signal.

   5. The browser sends it to the underlying JavaScript code for handling (guessing here, haven't looked).

   6. Some decision/calculation is done (wave signal generated?).

   7. The wave is sent to the API (Web Audio API? again, not familiar).

   8. The API call is handled by the browser, and possibly each browser has some kind of asynchronous audio processing, maybe relying on a platform-specific audio library (DirectX, OpenAL, I dunno really).

   9. This library might further do some pre-mixing, or send it raw to the OS some other way (write to /dev/dsp? or who knows what).

   10. The OS takes it and queues it to the OS audio manager.

   11. The OS audio manager picks it up every 1ms, or every 5ms, or every 15ms, or... you guessed it - who knows :)

   12. The wave is possibly then processed by the AC'97 codec, or was it AC79 (not a hardware guy here).

   13. Finally it might come out through cables to your speakers.

   14. Then it has to travel some short amount of space. I hope you are not under water, as this would further reduce the time for the sound to arrive.

   15. Eventually your ears hear it, with possibly a ~0.5ms extra delay for one of your ears if you are sitting sideways (or maybe less/more).

So steps 1 to 15 add up to the amount of time it takes for you to press the key and hear the feedback (i.e. the sound).

I hope this might've helped.
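To put very rough numbers on the chain above - every per-stage figure below is a made-up illustration, not a measurement; only the buffer-size and speed-of-sound arithmetic uses real constants:

```javascript
// Toy latency budget for the chain described above. The "guess" entries are
// illustrative placeholders; real values vary per OS, driver and buffer size.
const stagesMs = {
  inputDevice: 1,                    // key/mouse -> OS (guess)
  osAndBrowser: 2,                   // OS -> browser -> JS handler (guess)
  renderQuantum: 128 / 44100 * 1000, // one 128-sample audio block at 44.1 kHz
  mixerAndDac: 5,                    // OS mixer + audio hardware (guess)
  airTravel: 0.34 / 343 * 1000,      // ~34 cm from speaker to ear at 343 m/s
};

const totalMs = Object.values(stagesMs).reduce((a, b) => a + b, 0);
console.log(`~${totalMs.toFixed(1)} ms end-to-end`); // ~11.9 ms with these guesses
```

The point is not the total (the guesses dominate it) but that the audio buffer and the physical output path alone already contribute several milliseconds that no JavaScript code can remove.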

I meant I know what latency is as a notion, just haven't measured it and I don't know how-to.

I feel a bit lame for misleading you into writing this whole thing. Sorry. :(

Well - here is how: record your keypresses/mouse clicks with a camera, and at the same time record your screen and what comes out of your speakers.

Then with some AV editing software, you can mark the time you've pressed the keyboard/mouse click, and when the sound came out.
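Turning the frame count from such a recording into milliseconds is simple arithmetic; note that at 60 fps the result is only good to about one frame (~17 ms) either way. A sketch of the idea:

```javascript
// Frames counted between the visible keypress and the first audible
// reaction, converted to milliseconds at the camera's frame rate.
function framesToMs(frameCount, fps) {
  return frameCount * 1000 / fps;
}

console.log(framesToMs(3, 60)); // 50 - i.e. roughly 50 ms, +/- one frame
console.log(framesToMs(1, 60)); // ~16.67 - the measurement floor at 60 fps
```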


Awesome! Now let's see how we make that a CI test... hm... :D

Really? Is this the way? When you asked me, I started thinking - maybe the latency of the keyboard all the way up to my code is known... in some spec. As well as the latency after it. As a matter of fact, I don't have any control over anything before or after, so only the part of the pathway under my control makes sense to measure.

I definitely will give this some thought, thanks for asking.

At least in video games, responsiveness is measured mainly with high-speed cameras ;) - (60fps, i.e. ~16ms, is a good enough sampling rate)


This is by far the most portable, testable and unbiased way of doing it. If you rely on internal measurement then, well... actually, I don't know how you could do that - when the audio/video signal actually leaves the device is only verifiable with some external equipment... like a camera ;)

I would also tap your desk sharply with a pen right before filming the test, and make sure the audio track is aligned with the video at that point. I have no idea how accurate A/V syncing is on, say, a smartphone, but it would be a bummer if that completely threw off the test.

This is an inspirational thing to see on the web. My, we've come a long way. Thank you for bringing this all together.

How are the waveforms rendered? Are these few- or single-cycle samples? I noticed that, with slight detuning, I was getting inconsistent tracking for the detuning: higher notes yield wider pitch differences than lower notes.

I was unable to get consistent results with really short envelopes. e.g. 0-4-0-0 AEnv for percussive sounds or synth blips. Similarly, fast filter envelope.

Would you consider an alternate data-entry mode? E.g. a toggle where you can type parameter values in instead of using the sliders & knobs?

I pulled it up on my Surface Pro 3 and, with some zoom, could play it via touchscreen. It would be nice to be able to get bigger keys, touch spots, etc.

Definitely the sort of thing that makes me want to see it grow and continue. Excellent work. Keep it up!

Thank you.

I honestly don't know how they are rendered. I use the four default waves and add a couple of custom waveforms from here: http://chromium.googlecode.com/svn/trunk/samples/audio/wave-...

About the detuning issue - I don't hear it. The step is 100 cents (a semitone). I tried it on the Clean Sine patch: pressing G# in the 4th octave with only oscillator 2 enabled and detuned all the way up (+8 semitones) sounds the same as pressing C in the 4th with only oscillator 1 and no detuning. The issue here, for me, is that the detune is stepped in semitones, while it should be continuous.
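For reference, the cents arithmetic behind this check (the generic equal-temperament formula, not taken from the NV-1 source): 100 cents is a semitone, 1200 cents is an octave, and the frequency ratio is 2^(cents/1200).

```javascript
// Generic cents-to-frequency detune: a ratio of 2^(cents/1200).
function detune(freqHz, cents) {
  return freqHz * Math.pow(2, cents / 1200);
}

console.log(detune(440, 1200)); // 880 - one octave up
console.log(detune(440, 100));  // ~466.16 - one semitone up (A4 -> A#4)
```

Because the ratio is applied multiplicatively, a fixed cents detune gives the same musical interval at every note, which is why a cents-based (rather than Hz-based) detune tracks consistently across the keyboard.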

For a percussive sound, the primary envelope should probably be more like 0-0-4-2 than 0-4-0-0. My envelope is a bit non-standard, though: you are probably used to the Decay always starting from 1 (or 1*1/noteVelocity), whereas mine starts from wherever the Attack ended.
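A sketch of how such an envelope could work - this is my assumption about the described behavior (decay starting from wherever the attack ended), with illustrative names, not the NV-1's actual implementation:

```javascript
// Envelope level at time t (seconds since noteOn). Non-standard in that the
// decay segment starts from attackPeak - wherever the attack ended - rather
// than always from full level 1.
function envelopeLevel(t, attackTime, attackPeak, decayTime, sustainLevel) {
  if (t < attackTime) return attackPeak * (t / attackTime); // ramp up
  const dt = t - attackTime;
  if (dt < decayTime) {
    // linear decay from attackPeak down to sustainLevel
    return attackPeak + (sustainLevel - attackPeak) * (dt / decayTime);
  }
  return sustainLevel; // hold until noteOff
}

// Percussive shape: instant attack to full level, 4 s decay toward a low sustain
console.log(envelopeLevel(0, 0, 1, 4, 0.2));  // 1 - attack is instantaneous
console.log(envelopeLevel(2, 0, 1, 4, 0.2));  // halfway through the decay (~0.6)
console.log(envelopeLevel(10, 0, 1, 4, 0.2)); // 0.2 - sustain level
```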

The other input suggestions make sense; I am logging this one.

I want to make a rendering especially for mobile. This is on my backlog already.

Thank you very much for the kind words and the great input.

Good job! I'd add social patch sharing, a record buffer, and upload-to-SoundCloud as your next priorities - this is as good as many starter softsynth plugins, so make the most of it and become a fixture in the inevitable web DAW paradigm.

Thank you! Haven't even thought about upload to SoundCloud, yet. Interesting idea.


Great work. You should totally bake that into an installable chrome app. Hit me up if you need any help ;)

Thank you. I might hold you to that promise. :)

No problem. I am tinkering with them anyways. My github handle is @pascalopitz

Eventually someone will build a DAW... Well, I wouldn't expect native performance, but imagine the power of having an open JavaScript interface! Hope something interesting comes out of this...

Well, there are a few emerging ones. Soundtrap pops to mind. I'm sure it already does. :)

Heh, you can create layers by opening multiple tabs.

Indeed you can! :)

This has a downside, though - when you try to clean up a sound and it still seems like a couple of oscillators are running... arghhhh, I have another tab playing.

Nice smooth sound, low latency - good job!

Thank you!

This is very nice, has some great sounds!

Thank you! I have played on an M-Audio Keystation 61es the whole time I've worked on it.

Just saw how many knobs and sliders you have on that thing... man, I have a lot of work to do. :) On that note I am thinking about allowing people to assign knobs and maybe have a set of pre-written "drivers"... Will think about it.


How do I enable keyboard input?

Hi TeeWEE, if you are talking about a QWERTY keyboard - sadly, I didn't have the time to get to it. But it's coming!

If you mean MIDI - only Chrome (latest, 43) supports Web MIDI without a flag. The procedure is: plug in your keyboard, turn it on, and restart the browser (not just the tab, the whole browser).

I hope you'll like it.


Waits on loading something from platform.twitter.com for me :/

The Tweet button. :( Did it load?

Proxy issue; probably not a big deal for most of your users :)

Nice job with the synth. I've got lots of recreational software synthesis under my belt (csound) and appreciate what you've built. Well done!

Thank you!

That is goddamn amazing!

Thanks! :)

so awesome


Productivity of HN users worldwide will hit rock bottom thanks to this.

Any intention to allow the import/export of midi files?

Haha. Guilty as charged! Sorry! :)))

Well, not at the moment, no. Once there is any form of automation allowed, probably yes. Or if it is to become a part of some web DAW, this would not be handled by the instrument, I guess.

While I think this is nice work, it's a shame it can't be plugged into a DAW. A nice toy, but kinda useless for some of us.
