markush_'s comments | Hacker News

It’s very cool and inspiring to see the CEO posting here. Keep up the amazing work!

Interesting choice from PyTorch to release yet another DSL. On the positive side it's one more point in the design space; on the other hand it makes it even more difficult to choose the right technology among Triton, Gluon, CuTe, ThunderKittens and a few others.


I think that unlike Gluon/CuTe/ThunderKittens (which distinguish themselves from Triton by being lower level and giving you more control, thus being less performance portable and harder to write), Helion distinguishes itself from Triton by being higher level and easier to write.

IMO, this is something that makes sense for PyTorch to release, as "neutral ground" in the industry.
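
To make the level-of-abstraction point concrete, compare what a trivial elementwise add costs you in each. The Triton half below is the standard tutorial pattern; the Helion half is a sketch going off the announcement examples, so treat the exact decorator and hl.tile spelling as assumptions:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Triton: you manage program ids, offsets and masks yourself.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add_triton(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = out.numel()
        grid = (triton.cdiv(n, 1024),)
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out

    import helion
    import helion.language as hl

    @helion.kernel()
    def add_helion(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Helion: PyTorch-level code; tiling and launch config are autotuned.
        out = torch.empty_like(x)
        for tile in hl.tile(out.size()):
            out[tile] = x[tile] + y[tile]
        return out

The BLOCK_SIZE of 1024 in the Triton wrapper is a placeholder you'd normally autotune by hand; in Helion that search is the compiler's job, which is exactly the "easier to write, still performance portable" trade.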


70% is wild. How does that hold up in monopoly and anti-competition cases?


> This is a _very_ low hanging fruit anyone with a couple of dgx h100 servers can solve in a month and is a real world problem that needs solving.

I am not convinced it is low-hanging fruit: it's something that is super easy for humans but not trivial for machines. You are right, though, that it is being neglected by many. I work for speechmatics.com and we have spent a significant amount of effort on it over the years. We now believe we have the world's best real-time speaker diarization system; you should give it a try.
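
If you want a reference point to compare against, the usual open-source baseline is pyannote.audio. A minimal sketch, assuming its 3.x pipeline name and a Hugging Face access token (the token string below is a placeholder):

    from pyannote.audio import Pipeline

    # Gated model: you need to accept the model card terms on Hugging Face
    # and supply an access token ("HF_TOKEN" is a placeholder).
    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1",
        use_auth_token="HF_TOKEN",
    )

    diarization = pipeline("meeting.wav")
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        print(f"{turn.start:.1f}s-{turn.end:.1f}s {speaker}")

Note that this runs offline over a whole file rather than streaming, which is part of why the real-time variant of the problem is harder than it looks.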


After throwing an average meeting at your system as an mp3: yes, you have diarization solved much better than anyone else I've tried, by far. I'd say you're 95% of the way to being good enough to become the backbone of monolingual corporate meeting transcription, and I'll be buying API tokens the next time I need to do this instead of training a custom model. Your transcription, however, isn't that great, though it's good enough for LLMs to produce minutes of the meeting.

That said, the trick to extracting voices is to work in frequency space. Not sure what your model does, but my homemade version first ran all the audio through an FFT; it then essentially became a vision problem of finding speech patterns that matched in pitch. Finally it output extremely fine-grained timestamps for where they were found, and some Python glue threw those into an open-source Whisper speech-to-text model.
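
For the curious, a minimal sketch of that first stage (the pitch band and threshold here are made-up placeholders; the real pattern-matching step did much more than this):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import stft

    rate, audio = wavfile.read("meeting.wav")
    audio = audio.astype(np.float32)
    if audio.ndim > 1:            # mix down to mono
        audio = audio.mean(axis=1)

    # "Work in frequency space": magnitude spectrogram via short-time FFT.
    freqs, times, Z = stft(audio, fs=rate, nperseg=1024, noverlap=768)
    spec = np.abs(Z)

    # Toy stand-in for the pattern-matching step: flag frames where a
    # rough speech pitch band dominates. 100-300 Hz is a placeholder.
    band = (freqs >= 100) & (freqs <= 300)
    ratio = spec[band].sum(axis=0) / (spec.sum(axis=0) + 1e-9)
    active = ratio > 0.25         # hypothetical threshold

    # Fine-grained timestamps for active frames.
    for t, on in zip(times, active):
        if on:
            print(f"{t:.2f}s")

From there, the matched regions get cut into timestamped segments and handed to Whisper for the actual transcription.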


Exciting project, congrats on the release!


The docs at https://x.ai/ide/docs give away some of the source code. It's interesting that they are running Python in the browser.

