Audio software has strict buffer/latency requirements that usually cannot be met with hard guarantees on interpreted-language platforms.
Doing audio synthesis in JS or any other interpreted language is entirely possible and has been done more or less seriously in several implementations, web toys, etc. But if you need extremely low latency and hard guarantees, you sadly cannot go that route.
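To make the latency point concrete, here's a rough back-of-the-envelope sketch (the 128-sample / 48 kHz figures are typical example values, not from this thread): the engine has to deliver every buffer within a couple of milliseconds, every single time.

```cpp
#include <cstdio>

// Rough illustration of the real-time deadline an audio callback must meet.
// Example numbers: 128-sample buffers at 48 kHz.
int main() {
    const double sample_rate = 48000.0;   // samples per second
    const int    buffer_size = 128;       // samples per callback

    // The callback must finish before the hardware needs the next buffer.
    const double deadline_ms = buffer_size / sample_rate * 1000.0;
    std::printf("per-callback deadline: %.2f ms\n", deadline_ms);  // ~2.67 ms

    // Miss that deadline even once and you get an audible click/dropout,
    // which is why unpredictable pauses (GC, JIT) are such a problem.
    return 0;
}
```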
the UI is the minority of the code. the main engine is gonna need to be implemented in a language with strict performance guarantees. also, not having a garbage collector that fires at seemingly random moments helps a lot too :)
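A minimal sketch of what that looks like in practice (illustrative C++, the names like `Engine::process` are made up and not from any particular plugin API): everything the real-time callback needs is allocated up front, and the callback itself does nothing with an unpredictable worst case.

```cpp
#include <cstddef>

// Sketch of the usual pattern in native audio engines: all state is
// allocated ahead of time, and the audio-thread callback does no
// allocation, locking, or I/O.
struct SawVoice {
    double phase = 0.0;       // 0..1 phase accumulator
    double phase_inc = 0.0;   // frequency / sample_rate
};

class Engine {
public:
    // Called from outside the audio thread (UI/message thread).
    void note_on(double freq, double sample_rate) {
        voice_.phase_inc = freq / sample_rate;
    }

    // Audio-thread callback: pure arithmetic over existing buffers, so its
    // worst-case runtime is predictable -- no GC or allocator can stall it.
    void process(float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            voice_.phase += voice_.phase_inc;
            if (voice_.phase >= 1.0) voice_.phase -= 1.0;
            out[i] = static_cast<float>(voice_.phase * 2.0 - 1.0);  // naive saw wave
        }
    }

private:
    SawVoice voice_;
};
```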
There are a few reasons:
- most of the time they run embedded in a DAW
- they are usually computationally intensive themselves
- they are meant to be instanced as many times as memory/CPU will allow
As to the third point:
Music producers already require and use pretty powerful rigs: 32-128 GB of RAM is not uncommon, and CPUs as fast as they can get.
There's a great benefit to being able to run 100 synthesizer instrument instances in parallel instead of 14 - it's the difference between a fully differentiated orchestra and a rock band.
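A rough illustration of what the instance count means for the real-time budget, again with assumed example numbers (128-sample buffers at 48 kHz, a single core):

```cpp
#include <cstdio>
#include <initializer_list>

// Back-of-the-envelope: how much of one callback deadline each plugin
// instance gets when they all share it. Example numbers only.
int main() {
    const double sample_rate = 48000.0;
    const int    buffer_size = 128;
    const double deadline_us = buffer_size / sample_rate * 1e6;  // ~2667 us per callback

    for (int instances : {14, 100}) {
        std::printf("%3d instances -> %.1f us per instance per buffer\n",
                    instances, deadline_us / instances);
    }
    return 0;
}
```

At 100 instances each one gets under 30 microseconds per buffer, which is why per-sample cost in the engine matters so much.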