Thanks for sharing this paper. As a music language designer and developer (https://glicol.org/), I feel a special connection to it, since the research gap mentioned in the abstract is exactly one of my own goals: bridging note representation and sound synthesis. There are other axes to consider as well: real-time audio performance, coding ergonomics, collaboration, etc. There is no doubt that these languages have become musical interfaces and part of instruments (the laptop itself). And they now have another role: the medium between humans and AI.
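
To illustrate what I mean by bridging the two layers, here is a minimal Glicol-style sketch, written from memory rather than copied from the docs, so treat the exact node names and parameters (seq, sawsynth, lpf, mul) as assumptions rather than a definitive reference:

    // one chain: a note-level pattern (MIDI numbers, "_" as rest)
    // feeds straight into the synthesis/DSP nodes that render it
    o: speed 2.0 >> seq 60 _ 67 _ >> sawsynth 0.01 0.1 >> lpf 1000.0 1.0 >> mul 0.5

The point is that a single line of code carries both the "score"-level information and the signal chain that turns it into sound, which is exactly the gap the paper describes.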