It's supported almost everywhere that the Web Audio API is.
(However, one of the scenarios for a version 2.0 was to implement the same API alongside the 'native' one, to be used as a fallback solution. While I had actually implemented this already, I don't think it would be all that useful, and it increases the file size quite a bit.)
Edit: Viable points for this may still be a) reliable performance and interaction, b) known voices (even if they are a bit robotic), and c) use in offline applications. Using an analyser node for animations may be yet another (see the sketch below).
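For illustration, here is a minimal sketch of what the analyser-node idea could look like. It assumes a `sourceNode`, which simply stands in for whatever AudioBufferSourceNode the output is actually played with:

```javascript
// Sketch: route the synthesized audio through an AnalyserNode,
// so an animation loop can react to the current signal level.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioCtx.createAnalyser();
analyser.fftSize = 256;

var data = new Uint8Array(analyser.frequencyBinCount);

// sourceNode -> analyser -> speakers
sourceNode.connect(analyser);
analyser.connect(audioCtx.destination);

function animate() {
  analyser.getByteFrequencyData(data); // current spectrum, 0..255 per bin
  var level = data.reduce(function (a, b) { return a + b; }, 0) / data.length;
  // ...drive e.g. a mouth or waveform animation from `level` here...
  requestAnimationFrame(animate);
}
animate();
```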
However, all of the configuration data, including the phoneme tables, may be overridden (but you would have to install eSpeak on your machine first in order to compile these).
Another approach would be actually porting this to JS (instead of cross-compiling), which would provide full access to the internals. But I simply do not have the resources for this. (Meanwhile, there's the Web Speech Synthesis API. With this being available on most modern clients, it's probably not worth the effort.)
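For comparison, where the Web Speech Synthesis API is available, basic usage is just a few lines (which voices you actually get depends on the client):

```javascript
// Basic Web Speech Synthesis API usage, guarded by a feature check.
if ('speechSynthesis' in window) {
  var utterance = new SpeechSynthesisUtterance('Hello world');
  utterance.lang = 'en-US'; // voice selection is up to the browser/OS
  utterance.rate = 1.0;
  window.speechSynthesis.speak(utterance);
}
```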
Mind that on mobile devices the core won't run concurrently as a worker, but rather as an instance in the main/UI thread. This is because mobile devices will mute playback triggered by a message from a worker, as there is no immediate user interaction. Therefore, longer utterances are likely to block the UI noticeably while the internal sound file is processed. This is a bit sad, but it's how things are.
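To illustrate the restriction: mobile browsers generally only allow audio that is started (or resumed) from a direct user gesture. The sketch below shows the general pattern; `speakText` is just a hypothetical stand-in for whatever call actually triggers the synthesis:

```javascript
// Sketch: unlock/resume the AudioContext inside a click handler, so the
// subsequent playback counts as user-initiated. Playback triggered later
// by a worker message would not satisfy this requirement.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

document.getElementById('speak-button').addEventListener('click', function () {
  audioCtx.resume().then(function () {
    speakText('Hello world'); // hypothetical trigger for the synthesis
  });
});
```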
Can anyone confirm that this is working on Android? (If so, I'll push this to the release.)
If so, I may enable them again for Android-based systems.