[dupe] Google’s NSynth Super is an AI-backed touchscreen synth (theverge.com)
33 points by joeyespo 5 months ago | 7 comments



As a researcher and developer in the audio field, it's disheartening that the only things to get crossover attention are projects from Google with some buzzwords attached. There is so much amazing work being done, but I guess it's hard to summarize for a mainstream audience.

Meanwhile, this thing sounds horrible (sorry), and as I understand it, it's more like a fancy lookup table than anything resembling AI. (I think they needed to precompute parts of the model to make it run in real time.) Looking at the source code, there's nothing particularly impressive in there, but the open source hardware is a nice touch.
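
To make the "fancy lookup table" point concrete, here's a minimal sketch of that reading of the device: assume the neural model was run offline to render a grid of waveforms, so at playback the touchscreen only blends the nearest precomputed entries. The grid size, duration, and bilinear blending are my assumptions for illustration, not the actual NSynth Super scheme:

    import numpy as np

    # Assumed setup: the neural model ran offline to render an 11x11 grid
    # of one-second waveforms (16 kHz, NSynth's sample rate). The random
    # table here is just a stand-in for that precomputed audio.
    GRID, SAMPLES = 11, 16000
    table = np.random.randn(GRID, GRID, SAMPLES).astype(np.float32)

    def lookup(x, y):
        """Bilinear blend of the four nearest precomputed waveforms.

        x, y in [0, 1] are the touch coordinates. No neural network runs
        here -- all the 'AI' happened offline, at table-building time.
        """
        gx, gy = x * (GRID - 1), y * (GRID - 1)
        x0, y0 = int(gx), int(gy)
        x1, y1 = min(x0 + 1, GRID - 1), min(y0 + 1, GRID - 1)
        fx, fy = gx - x0, gy - y0
        top = (1 - fx) * table[x0, y0] + fx * table[x1, y0]
        bottom = (1 - fx) * table[x0, y1] + fx * table[x1, y1]
        return (1 - fy) * top + fy * bottom

    out = lookup(0.3, 0.7)  # one second of audio for this touch position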


Could you share some projects that you like?


Well, in the same spirit as NSynth, but miles ahead, we have:

https://heartofnoise.com/products/galaxynth - morph all types of sounds, laid out on a 2D canvas (made by me)

https://soniccharge.com/synplant - treats synth parameters as genomes, allowing sounds to be combined and mutated
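
For anyone wondering what "parameters as genomes" can mean in practice, here's a hypothetical sketch (not Synplant's actual algorithm): a patch is a set of normalized parameters, and new sounds come from crossover and mutation. The parameter names and rates are invented:

    import random

    # Hypothetical genome: a patch is a dict of parameters in [0, 1].
    PARAMS = ["osc_pitch", "filter_cutoff", "env_attack", "env_decay", "lfo_rate"]

    def crossover(a, b):
        """Each child parameter is inherited from one parent at random."""
        return {p: random.choice((a[p], b[p])) for p in PARAMS}

    def mutate(patch, rate=0.3, strength=0.2):
        """Occasionally nudge a parameter, clamped to the unit range."""
        return {p: min(1.0, max(0.0, v + random.uniform(-strength, strength)))
                if random.random() < rate else v
                for p, v in patch.items()}

    parent_a = {p: random.random() for p in PARAMS}
    parent_b = {p: random.random() for p in PARAMS}
    child = mutate(crossover(parent_a, parent_b))  # audition, keep, repeat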

These are commercial projects, but there is also a lot of interesting academic stuff going on, from IRCAM, CCRMA and MTG among others. Unfortunately there's not a strong tradition of open source in audio, although that is starting to change.


According to @svantana's profile, he makes this cool-looking bit of work, among other things:

https://heartofnoise.com/products/galaxynth/

I'd also be interested in hearing about more recommended projects, though. I'm building my first synth at the moment, so I'm pretty keen on the topic right now.


If it's a neural network, it's AI. The shitty neural network I made in undergrad may not rise to your standard of "particularly impressive", but what does that have to do with it being AI?


Creating new synth sounds does not excite me much...

Some things I'd like AI audio researchers to work on:

- High-quality isolation of vocals and other instruments in recordings (a classical baseline is sketched after this list)

- Convert audio recordings into multitrack MIDI/VST recordings, so a recording of a jazz quartet could be converted into the notes and appropriate patches/sample banks

- Convert between styles of music

- Convincing "vocal synthesis": think text-to-speech, but with singing, allowing emulation of famous singers
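
On the first item: learned models are the interesting frontier, but classical signal processing already gives a taste of source separation. A minimal sketch using librosa's harmonic/percussive separation (this splits harmonic from percussive content, not vocals specifically, and the filenames are placeholders):

    import librosa
    import soundfile as sf

    # Median-filtering HPSS: separates sustained harmonic content from
    # transient percussive content. "mix.wav" is a placeholder path.
    y, sr = librosa.load("mix.wav", sr=None, mono=True)
    y_harmonic, y_percussive = librosa.effects.hpss(y)

    sf.write("harmonic.wav", y_harmonic, sr)
    sf.write("percussive.wav", y_percussive, sr)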


I watched both videos: the one hyping it, dropping all your favorite words like "neural net" and "AI", and the performance video. Even though homeboy said in the first video that it's not just "combining the two sounds", that it's taking them and using machine learning "to draw a new one", isn't that the same thing? Well, if it isn't, it sure sounds like it is; in the demo-play video it sounds like they are just mixing the two voices, and it didn't sound at all groundbreaking. I'd rather just use some old Electribes than some Google synth that looks like it was painted with faery entrails. I could see some Burning Man people using this, or rich noise musicians in the Mission or something. You've been able to change the ADSR on every synth since forever. I would take a rompler like the Roland JV series or the Korg M1; those just combine two weird waveforms. Add some reverb, tweak the ADSR, and you don't need any machine learning or freaky LEDs.



