> based on flow matching with Diffusion Transformer

Yeah, that's not gonna be realtime. It's really odd that we currently have two options: VITS/Piper, which runs at ludicrous speed on a CPU and sounds okay, and the slightly more natural models a la StyleTTS2, which take two minutes to generate a sentence even with CUDA acceleration.

Like, is there a middle ground? Maybe inverting one of the smaller Whisper models or something.

StyleTTS2 is faster than realtime

To be clear, what I mean by realtime is the full generation finishing in under roughly 200ms, so the audio can be sent to the sound card and start playing immediately. That's different from merely generating faster than the audio takes to play back, which still adds the entire generation time as an unusably long delay in practice.
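A minimal sketch of the distinction, with a hypothetical `synthesize` callable standing in for whatever engine (the name and signature are my assumptions, not any real library's API). The point is that a real-time factor below 1 can still blow a 200ms budget:

    import time

    def realtime_budget_check(synthesize, text, budget_ms=200.0):
        # `synthesize` is a hypothetical stand-in for any TTS engine,
        # assumed to return (samples, sample_rate) for raw mono PCM.
        start = time.perf_counter()
        samples, sample_rate = synthesize(text)
        gen_ms = (time.perf_counter() - start) * 1000.0
        audio_ms = len(samples) / sample_rate * 1000.0
        # RTF < 1 means "faster than realtime", but the full gen_ms is
        # still dead air before playback can start.
        print(f"gen: {gen_ms:.0f} ms, audio: {audio_ms:.0f} ms, "
              f"RTF: {gen_ms / audio_ms:.2f}, "
              f"under {budget_ms:.0f} ms budget: {gen_ms <= budget_ms}")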

I suppose it might be possible with streaming of very short segments, but I haven't seen an implementation that allows for that, and with diffusion-based models it doesn't even work conceptually, since each denoising step operates on the whole segment at once rather than emitting audio left to right.
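For what it's worth, such a chunked pipeline could look roughly like this sketch, assuming hypothetical `synthesize` and blocking `play` functions. Only the first chunk's generation time is audible as delay, but a diffusion model would still have to finish denoising that entire chunk before anything plays:

    import queue
    import threading

    def stream_tts(synthesize, play, sentences, lookahead=2):
        # Hypothetical chunked pipeline: generate short segments ahead
        # of playback so only the first segment's latency is audible.
        buf = queue.Queue(maxsize=lookahead)

        def producer():
            for s in sentences:
                buf.put(synthesize(s))  # blocks while the buffer is full
            buf.put(None)               # end-of-stream marker

        threading.Thread(target=producer, daemon=True).start()
        while (chunk := buf.get()) is not None:
            play(chunk)                 # playback overlaps generation

The `lookahead` buffer just trades a little memory for resilience against chunks that take longer to generate than they take to play.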

Bark?
