Hacker News

We're working on our roadmap right now and prioritizing support based on user interest. If there's a particular model or set of models you're interested in accelerating, I'd love to hear about it!

If there's a lot of interest in transformers, we'd aim to offer support in the next couple of months.




A lot of SOTA work seems to be gravitating towards transformer-based models. Obviously, I can't speak for the entire field, but you can just go take a look at the most popular HuggingFace repos and see what I mean. They started out focused on language, but because transformers have become so popular, they're expanding quickly into the audio and vision domains. Their library, 'transformers', is, outside of research, most people's go-to high-level framework, as it largely abstracts away the boilerplate that writing in pure TF, PyTorch, or JAX requires.

See:

https://huggingface.co/spaces

https://github.com/huggingface/transformers


Agreed, this is the way things seem to be trending. We'll definitely add support for transformers in the near future; the question is only whether there are other things we should work on first, especially in the edge and embedded domain, where smaller conv models still dominate. Thank you for the links!




