Launch HN: Numericcal (YC S18) – Lifecycle Management for ML Models on the Edge
49 points by radoye 3 months ago | 13 comments
Hey HN,

We’re Ranko and Shaoyi, co-founders of Numericcal (https://numericcal.com). We’re building developer tools to simplify building, optimizing and maintaining Machine Learning models for embedded, mobile and IoT applications.

This platform is the result of our own struggles while helping on two ML-powered projects: one using Computer Vision and one using Voice Recognition. In both cases running models in the cloud was too expensive and slow for a good user experience. Moreover, it was not clear whether the training data our Data Science team had, at the beginning of the project, was representative of the data to be encountered in production.

The solution for cost and speed was to run inference on some end-device. However, we did not know which device would be feasible, nor did we know what model the Data Science team would end up with.

We initially considered two options:

* Postpone working on ML related software tasks until the Data Science team figured out what they wanted to do. Unfortunately, this meant that we could not parallelize the development. Moreover, the Data Science team would not get any feedback on the model runtime performance until we integrated them towards the end.

* Make a bunch of assumptions about models that the Data Science team would use and implement something. This would allow us to work in parallel, but we would be running the risk of having integration issues if those assumptions turned out to be wrong.

We looked for tools online to solve this but we couldn’t find anything comprehensive. So we decided to build our own! We settled on a design that packages ML models into (very primitive) serializable objects, encapsulating all the information and supporting files to use the models. For example, an object detection model would package the DNN graph, shape of the input and output tensor, bounding box anchors, etc.
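
To make that concrete, here is a rough Kotlin sketch of the kind of thing such a package carries. The field names are illustrative, not our actual wire format:

    // Illustrative sketch of a self-describing model package; names are
    // hypothetical, not Numericcal's actual format.
    data class TensorShape(val dims: List<Int>)

    data class BoundingBoxAnchor(val width: Float, val height: Float)

    data class ModelPackage(
        val modelId: String,                  // stable identifier the app codes against
        val version: Int,                     // bumped on every re-training
        val graphBytes: ByteArray,            // the serialized DNN graph itself
        val inputShape: TensorShape,          // e.g. [1, 300, 300, 3] for an SSD detector
        val outputShape: TensorShape,
        val anchors: List<BoundingBoxAnchor>, // detector-specific supporting data
        val labels: List<String>              // class names for decoding outputs
    )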

This gave us what we wanted. On the software side we could write against the “interface” of the package. During initial development, we simply packaged dummy DNN models that always returned the same value. Later, the Data Science team would simply drop in a new package and everything just worked. One more victory for abstraction!
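
As an illustration (made-up interface names, not our actual SDK surface), the pattern looked roughly like this:

    // Minimal sketch of "code against the package interface".
    interface DetectionModel {
        fun detect(pixels: FloatArray): List<Detection>
    }

    data class Detection(val label: String, val score: Float)

    // Stand-in used during early development: always returns the same value,
    // so app code can be written and tested before any real model exists.
    class DummyDetectionModel : DetectionModel {
        override fun detect(pixels: FloatArray): List<Detection> =
            listOf(Detection(label = "placeholder", score = 1.0f))
    }

Later, the real model is dropped in behind the same interface and the rest of the app is untouched.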

Since then, we’ve added a number of features around the packaging system. We can now add models during compilation or remotely through a cloud service and a user-friendly GUI. Models are versioned and report basic performance metrics to the cloud, so Data Science teams can get feedback and guide model exploration. Remote delivery also lets us send different models to devices with different compute capabilities. We also added support for running models on different runtimes (TensorFlow Mobile, TensorFlow Lite, our own library; Caffe2 is coming soon). Finally, we wrapped the app-side code in a ReactiveX API to make it easier to work with multiple models asynchronously.
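
To give a flavor of the Rx wrapping, here is a hedged sketch. loadModel and frames are hypothetical stand-ins (not our actual API), and it reuses the DetectionModel sketch from above:

    import io.reactivex.Flowable
    import io.reactivex.schedulers.Schedulers

    // Hypothetical loader; a real SDK would fetch/compile the packaged model.
    fun loadModel(modelId: String): Flowable<DetectionModel> =
        Flowable.just(DummyDetectionModel())

    // Hypothetical source of camera frames as preprocessed tensors.
    fun frames(): Flowable<FloatArray> =
        Flowable.just(FloatArray(300 * 300 * 3))

    fun detections(): Flowable<List<Detection>> =
        loadModel("vehicle.damage.detector")
            .subscribeOn(Schedulers.io())   // keep loading/compiling off the UI thread
            .switchMap { model ->           // a newly delivered model replaces the old stream
                frames().map { input -> model.detect(input) }
            }

switchMap is what makes remote hot-swapping natural here: when a new model version arrives, the old inference stream is dropped and a fresh one starts against the new model.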

Today we’re releasing the Numericcal Cloud Tools and Android SDKs in Beta. You can read about the system in more detail at overview.workspace.numericcal.com and check out the demos at https://github.com/numericcal/BetaDemoApp.

Machine Learning on mobile and IoT devices is yet to gain wide adoption. Making this transition will bring personalization of mobile apps and automation of business and industrial processes to a whole new level. We’re currently working on projects for automatic damage assessment on vehicles and preventative maintenance of mobile assets.

We hope these tools will add to the momentum by speeding up development iterations for other teams, as they did for us. In the long term, we plan to open source edge integration and packaging SDKs. We will also open up more of our cloud-hosted functionalities, such as automatic model benchmarking, deployment, model compression and runtime optimization as services.

We’re eager to hear comments, ideas and your experiences in building Edge applications with ML. Thanks for reading!




Very cool and congrats on the launch. I see your demo is on Android; are you doing anything with iOS / CoreML? I put up a similar project for OTA updates on iOS using CoreML to hot-swap device-side models: https://github.com/rkirkendall/MLEdgeDeploy


Yeah, as the cloud/edge interface settles we plan to build SDKs for pulling models on iOS and GNU/Linux, through Swift, Python and C#.

Let me check your repo.


Your project sounds really interesting. Machine learning models that can be updated and treated more like packages will really give teams an edge over those who can't manage models that way.

I made a project called mms2concept. Given a picture taken by a phone (or anything that could send an MMS, really), it would attempt to identify what was in the image. I used Clarifai and Twilio to do this.

I think with very resource-constrained edge devices you would end up with a more powerful "basestation"-style deployment, where an Arduino or similar device collects edge data and sends it back to that "basestation", as in the sketch below.
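
A minimal sketch of that split, with hypothetical names and JVM sockets standing in for whatever transport the node actually has:

    import java.net.ServerSocket
    import java.net.Socket

    // Constrained node: ship raw bytes to the basestation; no ML on the node.
    fun sendReading(host: String, port: Int, reading: ByteArray) {
        Socket(host, port).use { sock ->
            sock.getOutputStream().write(reading)
        }
    }

    // Basestation: collect one batch per connection and run inference here.
    fun basestationLoop(port: Int, model: (ByteArray) -> String) {
        ServerSocket(port).use { server ->
            while (true) {
                server.accept().use { conn ->
                    val reading = conn.getInputStream().readBytes()
                    println(model(reading))
                }
            }
        }
    }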


Can you share some writeup or repo for your project?

We internally have an auto-migration tool (currently very primitive) that can learn from a cloud-hosted model and cache the result locally on the device (that's what this infra & packaging was built for ;)).



For some reason, your quick demo reminded me of this awesome MS project: https://www.microsoft.com/en-us/seeing-ai

Though we would be pushing the boundary with 3+ DNNs on mobile, adding speech2text and text2speech to your app would make it an interesting addition to Seeing-AI, IMO.


Congrats to the Numericcal team on the launch! It's great to see new runtimes coming out to improve performance specifically on Android. It's been a real pain for us to get things up to par with Apple devices running Core ML.

We’re building something similar but focused on existing Core ML/TensorFlow runtimes. If anyone is looking for similar management features for iOS, check out https://fritz.ai. (Full disclosure, I’m one of the co-founders.)


Exactly, there is a lot of potential on the Edge, but dealing with fragmentation is really painful.


How easy is it to build custom architectures like segmentation and object detection networks? I am working on segmentation Core ML tooling, and Apple's coremltools only converts a selection of layers.


That depends on your choice of training framework and runtime engine (TF Lite/TF Mobile/Numericcal/Caffe2/etc.). We wrapped multiple runtime engines to provide as much flexibility as possible, quickly. For example, some layers available in TF Mobile are not available in TF Lite (and TF Lite is slower, for now!).

That being said, if there is no converter between the training format and runtime format, you're out of luck (for now). Our runtime (targeted primarily at Qualcomm SnapDragon SoCs) supports the most common layers and some more exotic ones that our users needed. For other engines we simply pull the standard package that vendors provide.
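
As a toy illustration of the kind of fallback this wrapping enables (the op lists below are made up, not a statement about actual TF Lite/TF Mobile coverage):

    enum class Runtime { TF_LITE, TF_MOBILE, NUMERICCAL }

    // Made-up op coverage tables, purely for illustration.
    val supportedOps = mapOf(
        Runtime.TF_LITE to setOf("CONV_2D", "DEPTHWISE_CONV_2D", "SOFTMAX"),
        Runtime.TF_MOBILE to setOf("CONV_2D", "DEPTHWISE_CONV_2D", "SOFTMAX", "CROP_AND_RESIZE"),
        Runtime.NUMERICCAL to setOf("CONV_2D", "DEPTHWISE_CONV_2D", "SOFTMAX", "PRELU")
    )

    // Pick the first runtime (in preference order) that supports every op in
    // the graph; null means no wrapped engine can run this model as-is.
    fun pickRuntime(graphOps: Set<String>, preference: List<Runtime>): Runtime? =
        preference.firstOrNull { rt -> supportedOps[rt]?.containsAll(graphOps) == true }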


Congratulations for launching. The idea is very promising. Good luck!


Congrats on the launch. We’ve pretty much experienced the same pains as you guys and are building a similar platform but targeted more towards web deployment. Best of luck!


I'm quite interested in hearing how things turn out. Let us know if you write/release something.



