Model deployment is painful. Running a model on a mobile phone?
Forget it.
The frustration is real. I remember spending nights exporting models to ONNX, only to have them fail anyway. Deploying models on mobile for edge inference used to be a complex, error-prone process.
Not anymore.
In this post, I’m going to show you how to pick from more than 900 SOTA models on TIMM, train them using best practices with Fastai, and deploy them on Android using Flutter.
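To make that concrete, here is roughly what the first two steps look like. This is a minimal sketch assuming fastai ≥ 2.6, which accepts TIMM architecture names as plain strings; the Oxford-IIIT Pets dataset and `convnext_tiny` are illustrative placeholders, not necessarily what we’ll use later in the post:

```python
from fastai.vision.all import *

# Download a small sample task to train against (placeholder dataset).
path = untar_data(URLs.PETS) / "images"

def is_cat(f):
    # Oxford-IIIT Pets encodes cats with capitalised filenames.
    return f.name[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path),
    valid_pct=0.2, seed=42,
    label_func=is_cat,
    item_tfms=Resize(224),
)

# Any of TIMM's 900+ architectures can be named here as a string.
learn = vision_learner(dls, "convnext_tiny", metrics=error_rate)
learn.fine_tune(3)  # fastai's transfer-learning recipe in one call
```

From there, the trained `learn.model` is what we’ll export and ship to the Flutter app in the steps below.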