
With regard to taking an instruction and converting it into a state representation, I think creating music is very similar to programming. Coming from a web background, I've seen experiments like React Music that I find really interesting. If the question becomes "how can we best represent changing state in code?", then I think that's a question that can be solved for both interface programming and music programming at the same time. This is why I find React Music interesting: React's declarative syntax makes it easier to understand what the final product is going to look like. Can the same be done for programming sound, in a way that represents the final product more naturally?
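To make the idea concrete, here's a hypothetical sketch (not React Music's actual API; all names are illustrative) of a song described declaratively as plain data, then "rendered" into the timed events that would actually be played:

```javascript
// Hypothetical sketch: a song described declaratively as data,
// then rendered to a flat list of timed note events.
const song = {
  tempo: 120, // beats per minute
  tracks: [
    { instrument: "kick",  steps: [1, 0, 0, 0, 1, 0, 0, 0] },
    { instrument: "snare", steps: [0, 0, 1, 0, 0, 0, 1, 0] },
  ],
};

// Convert the declarative description into scheduled events.
// Each step is an eighth note: 60 / tempo / 2 seconds long.
function render(song) {
  const stepDur = 60 / song.tempo / 2;
  const events = [];
  for (const track of song.tracks) {
    track.steps.forEach((on, i) => {
      if (on) events.push({ instrument: track.instrument, time: i * stepDur });
    });
  }
  return events.sort((a, b) => a.time - b.time);
}

console.log(render(song)); // four events: kick/snare alternating every 0.5s
```

The point is that the data structure reads like the final product (a grid of steps), while the imperative scheduling is pushed into one place.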

I've been working on a declarative music framework called Dryadic. Like React, it is designed to diff a tree and apply changes while playing.
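I can't speak for Dryadic's internals, but the React-style diffing idea can be sketched in a few lines: compare an old and a new node tree and emit a patch list describing what a running player would need to change. This is a minimal illustration, not the real algorithm.

```javascript
// Hedged sketch of React-style tree diffing: walk two trees of
// { type, props, children } nodes and emit patch operations.
function diff(oldNode, newNode, path = []) {
  const patches = [];
  if (!oldNode && newNode) {
    patches.push({ op: "add", path, node: newNode });
  } else if (oldNode && !newNode) {
    patches.push({ op: "remove", path });
  } else if (oldNode.type !== newNode.type) {
    patches.push({ op: "replace", path, node: newNode });
  } else {
    // Same node type: diff the props, then recurse into children.
    if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props)) {
      patches.push({ op: "update", path, props: newNode.props });
    }
    const oldKids = oldNode.children || [];
    const newKids = newNode.children || [];
    for (let i = 0; i < Math.max(oldKids.length, newKids.length); i++) {
      patches.push(...diff(oldKids[i], newKids[i], path.concat(i)));
    }
  }
  return patches;
}

const before = { type: "synth", props: { freq: 440 }, children: [] };
const after  = { type: "synth", props: { freq: 880 }, children: [] };
console.log(diff(before, after)); // [{ op: "update", path: [], props: { freq: 880 } }]
```

Applying only the patches, instead of rebuilding everything, is what lets you change the description while the audio keeps playing.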

Currently it works with SuperCollider, but I plan to write bindings for Web Audio and other targets.



There will eventually be a formal language that can be parsed, but at this stage it uses plain JavaScript objects.
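Since the actual object format isn't shown here, this is purely illustrative: one way a tree of plain JavaScript objects can stay backend-agnostic is to walk it and dispatch each node to a pluggable backend, so the same tree could target SuperCollider, Web Audio, or a test double.

```javascript
// Illustrative only: interpret a declarative tree of plain objects
// against whatever backend is passed in. A real backend would talk to
// an audio engine; here a recorder stands in for one.
function play(node, backend) {
  backend.spawn(node.type, node.props || {});
  (node.children || []).forEach((child) => play(child, backend));
}

// A recording backend that just logs spawn calls.
const calls = [];
const recorder = { spawn: (type, props) => calls.push({ type, props }) };

play(
  {
    type: "group",
    children: [{ type: "synth", props: { freq: 440 } }],
  },
  recorder
);
console.log(calls); // group spawned first, then the synth child
```

Swapping `recorder` for an object whose `spawn` sends OSC messages or creates Web Audio nodes is then a backend concern, not a change to the tree itself.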
