
Model deployment is painful. Running a model on a mobile phone?

Forget it.

The frustration is real. I remember spending nights exporting models to ONNX, only to have them fail anyway. Deploying models on mobile for edge inference used to be complex.

Not anymore.

In this post, I’m going to show you how to pick from over 900 SOTA models on TIMM, train them using best practices with fastai, and deploy them on Android using Flutter.
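To make that concrete, here is a minimal sketch of the pipeline, assuming fastai with timm installed; the backbone name ('convnext_tiny'), the Oxford Pets dataset, and the output filename are illustrative stand-ins, not necessarily what the post uses:

    from fastai.vision.all import *
    import torch
    from torch.utils.mobile_optimizer import optimize_for_mobile

    # Any TIMM backbone can be trained by passing its name to vision_learner.
    path = untar_data(URLs.PETS)/'images'
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=lambda f: f.name[0].isupper(),  # cat images have capitalized names
        item_tfms=Resize(224))
    learn = vision_learner(dls, 'convnext_tiny', metrics=error_rate)
    learn.fine_tune(3)

    # Export to TorchScript and optimize for the PyTorch lite runtime on Android.
    model = learn.model.eval().cpu()
    traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
    optimize_for_mobile(traced)._save_for_lite_interpreter('model.ptl')

The resulting model.ptl is the artifact a PyTorch-lite-style Flutter plugin loads on the device.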




Are there any object detection models? We went with Apple's CoreML, which works great, but this is cool. Running the PyTorch version of our model took way too long for inference.


The PyTorch lite package also supports YOLOv5 models.

I posted about it on LinkedIn a while ago:

https://www.linkedin.com/posts/dickson-neoh_deploying-object...
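For anyone who wants the gist without clicking through, the export step looks roughly like this; a hedged sketch assuming the ultralytics/yolov5 hub entry point ('yolov5s' and the 640px input size are illustrative):

    import torch
    from torch.utils.mobile_optimizer import optimize_for_mobile

    # autoshape=False returns the raw nn.Module so it can be traced on tensors.
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)
    model = model.eval().cpu()

    # strict=False because the Detect head returns a tuple while tracing.
    traced = torch.jit.trace(model, torch.rand(1, 3, 640, 640), strict=False)
    optimize_for_mobile(traced)._save_for_lite_interpreter('yolov5s.ptl')

Note the raw output still needs box decoding and NMS on the consuming side.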


That's really cool. I see you have much faster inference compared to the PlayTorch example. What size YOLOv5 are you using?



MediaPipe looks really cool. I haven't tried it. Have you?


Yes, although I haven't deployed it on mobile. It works well in the browser.
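If anyone wants to kick the tires quickly, the Python Tasks API is an easy starting point; a sketch assuming the mediapipe package and a detection model downloaded from the MediaPipe model zoo (both file names are illustrative):

    import mediapipe as mp
    from mediapipe.tasks import python as mp_python
    from mediapipe.tasks.python import vision

    # Build an object detector from a TFLite model file.
    options = vision.ObjectDetectorOptions(
        base_options=mp_python.BaseOptions(model_asset_path='efficientdet_lite0.tflite'),
        score_threshold=0.5)
    detector = vision.ObjectDetector.create_from_options(options)

    # Run detection on a single image and print the results.
    image = mp.Image.create_from_file('test.jpg')
    for det in detector.detect(image).detections:
        print(det.categories[0].category_name, det.bounding_box)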




