Thanks for sharing this paper. As a music language designer and developer (https://glicol.org/), I have a special connection to this paper: the research gap mentioned in the abstract is exactly one of my goals, to bridge note representation and sound synthesis. And there are other axes to consider: real-time audio performance, coding ergonomics, collaboration, etc. There is no doubt that these languages have become musical interfaces and part of instruments (laptops). And they now have another role: the medium between humans and AI.
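To make that bridge concrete, here is a small Glicol sketch, written from memory rather than checked against the current docs, so treat the exact node names as approximate: a seq pattern drives the built-in kick sample at the note level, while an LFO modulates a low-pass filter at the signal level, in the same graph:

    // note level: a step sequencer triggering a sampler
    o: seq 60 _ 60 _ >> sp \bd

    // signal level: an LFO sweeping a filter on a saw oscillator
    b: saw 110 >> lpf ~mod 1.0 >> mul 0.3
    ~mod: sin 0.2 >> mul 1000 >> add 1500

The point is that the note pattern and the synthesis chain live on one textual surface, instead of a score format on one side and a synth patch on the other.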
An issue with music languages is that they are still, metaphorically speaking, at the ASCII stage: focused on encoding the patterns of Western music.
The "UTF revolution" (being able to express both the rich ancient traditions of other cultures and modern electronic music) is still somewhere in the future.
That push to include less stylized musical forms could be a very creative process. It forces us not only to reconsider what the musical primitives are, but also to express them in practical and intuitive tokens.
It may be a matter of inventing a computer interface that is both expressive and interactive enough for music.
Think of the two keyboards: the computer keyboard, where you type abstract notation for music, and the MIDI keyboard, where you type... less abstract notation for music.
So close yet so far :-). The good news is there are more Bachs where he came from. Give it five/ten years.
For those intrigued by music formats, there are a few modern standards that are far simpler than some of those in the paper: ABC and MusicXML. (ABC is designed to be as readable as possible while maintaining a high degree of control over the output.)
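For anyone who hasn't seen ABC, a complete tune fits in a handful of lines. This toy fragment is mine, not from the paper, but it shows the flavor: header fields for index, title, meter, unit note length, and key, followed by the notes themselves:

    X:1
    T:Toy Example
    M:4/4
    L:1/8
    K:G
    GABc d2 e2 | dcBA G4 |]

Letters are pitches (lowercase means an octave higher), trailing numbers multiply the unit note length, and | marks barlines.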
LilyPond had such beautiful output, though the amount of time since I last used it can be measured in years. Is it still setting the typesetting bar above Finale, Sibelius, etc.?