Hello hackers,
I've launched InferenceTime, a platform for tracking and monitoring the performance of machine learning models on mobile devices. I'm sharing it here to gather your feedback and suggestions.
For data scientists and engineers, collecting real-world performance data for machine learning models across a range of edge devices is a significant challenge. I built this platform to get better insight into how each device and model combination performs.
We wanted to answer the following questions:
- How is our model performing on {X} device with {Y} OS version?
- What is the average inference time of our model on edge devices?
- How does the performance change over time?
- How does the performance change with different model versions?
- How does the performance change with different device models?
- How many resources does our model consume on edge devices?
The most important metric I track is inference time, since it varies significantly from device to device.
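For context, the measurement itself is simple: wrap each inference call in a timer and summarize the samples. This is a minimal sketch, not the platform's actual client code; `model_fn` is a hypothetical stand-in for whatever on-device invocation a real app would use (e.g. a TFLite interpreter call):

```python
import time
import statistics

def measure_inference_ms(model_fn, inputs, warmup=3, runs=10):
    """Time repeated calls to model_fn and return summary stats in milliseconds.

    model_fn is any callable that runs one inference; here it is a
    hypothetical placeholder for a real on-device model invocation.
    """
    for _ in range(warmup):
        # Warm-up runs so caches/JIT don't skew the timed samples.
        model_fn(inputs)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        model_fn(inputs)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "mean_ms": statistics.mean(samples),
        "p95_ms": samples[max(0, int(round(0.95 * runs)) - 1)],
    }

# Example with a dummy "model" that just sums its input:
stats = measure_inference_ms(sum, list(range(1000)))
print(stats)
```

Reporting a percentile alongside the mean matters here, because per-device latency distributions on mobile hardware tend to have long tails (thermal throttling, background load).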
P.S. The site requires registration/login, but a free tier is available.