Machine Learning on 2KB of RAM [pdf] (manikvarma.org)
53 points by gyre007 5 months ago | 11 comments



I have previously seen Microsoft's Embedded Learning Library [1], which IIRC requires LLVM support for the target arch, and uTensor [2], which runs TensorFlow models on larger ARM Cortex microcontrollers.

Nice to see another group at Microsoft targeting smaller 8/16-bit processors. They can still be useful for very limited power budgets, low-cost devices, and odd applications where they are built into a special-purpose IC (e.g. a flash drive controller). I don't think there are many other alternatives in this space, other than manually porting your model to C and running unit tests against it. Does anyone know of any competitors?

[1] https://github.com/Microsoft/ELL
[2] https://github.com/uTensor/uTensor
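The "manually port your model and unit-test it" route mentioned above can be as simple as freezing trained coefficients into plain code and pinning the decision function to known outputs. A minimal sketch in Python (the weights here are made-up placeholder values, not from any real model):

```python
# Illustrative sketch of a hand-ported model: coefficients hard-coded as
# they would be in a generated C header, plus unit tests against expected
# outputs. All numbers below are made-up placeholders.

WEIGHTS = [0.8, -1.2, 0.5]
BIAS = -0.1

def predict(features):
    """Tiny linear decision function: sign of dot product plus bias."""
    acc = BIAS
    for w, x in zip(WEIGHTS, features):
        acc += w * x
    return 1 if acc >= 0.0 else 0

# unit tests pinning the ported model to reference outputs
assert predict([1.0, 0.0, 0.0]) == 1   # 0.8 - 0.1 = 0.7 >= 0
assert predict([0.0, 1.0, 0.0]) == 0   # -1.2 - 0.1 = -1.3 < 0
```

The same structure translates line-for-line to C, and the asserts become the unit tests run against the reference implementation.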


Also interested in more competitors, as I'm researching and developing machine learning for microcontrollers. Two libraries I have made are:

https://github.com/jonnor/emtrees
https://github.com/jonnor/embayes

These will likely be consolidated into one library/framework in the coming weeks, along with some other models and tools I have lying around, like simple neural networks and audio feature extraction.


My brain dump with links can be found here, https://github.com/jonnor/datascience-master/blob/master/emb...


I've been working on a deep learning library in C, and have been thinking about optimizing it for embedded applications specifically.

https://github.com/siekmanj/sieknet

Other than that, I'm not sure if there are many libraries for machine learning on microcontrollers. Genann comes to mind:

https://github.com/codeplea/genann


Nice. What are your plans for optimizing for embedded use? In my opinion the main challenge for neural networks on microcontrollers is the amount of memory needed for the weights.

- Quantizing the weights to lower precision is an easy gain. CMSIS-NN (which uTensor will use on Cortex-M) uses 8-bit fixed point.

- Utilizing sparse weights from regularization (L1, L0) may also give some gains.

But apart from these I think more innovative things will be needed?
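To illustrate the first point, a symmetric 8-bit fixed-point quantization with a power-of-two scale, roughly in the spirit of CMSIS-NN's q7 format, fits in a few lines. This is a sketch; the function names and scale-selection rule here are my own illustrative choices, not CMSIS-NN's API:

```python
import math

def quantize_q7(weights):
    """Quantize floats to int8 with a power-of-two scale (Qm.n fixed point)."""
    max_abs = max(abs(w) for w in weights)
    # pick the number of fractional bits so the largest weight still fits in int8
    frac_bits = 7 - max(0, math.ceil(math.log2(max_abs))) if max_abs > 0 else 7
    scale = 1 << frac_bits
    q = [max(-128, min(127, round(w * scale))) for w in weights]
    return q, frac_bits

def dequantize_q7(q, frac_bits):
    return [v / (1 << frac_bits) for v in q]

w = [0.52, -0.91, 0.03, 0.76]
q, n = quantize_q7(w)
w_hat = dequantize_q7(q, n)
# round-trip error is bounded by half a quantization step
assert all(abs(a - b) <= 0.5 / (1 << n) for a, b in zip(w, w_hat))
```

The weights then occupy one byte each instead of four, a 4x memory saving before any sparsity is exploited.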


I used quantization of the weights and an adaptation of the training; more information at the link below:

https://www.researchgate.net/publication/304424659_Performan...


Looks like the state of the art can achieve up to 120x compression on CNNs for image classification: https://arxiv.org/abs/1802.02271


sklearn-porter can be used to compile some scikit-learn models to C, for instance support vector machines with linear, RBF, and polynomial kernels.

https://github.com/nok/sklearn-porter/blob/master/readme.md#...
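For a linear model, what such an exporter does boils down to baking the trained coefficients into a C decision function. A hand-rolled sketch of that code-generation step (this is not sklearn-porter's actual output format, and the coefficients are made-up placeholders):

```python
# Sketch of scikit-learn -> C code generation for a linear classifier:
# emit a self-contained C function with the trained parameters inlined.
# Coefficient values below are illustrative placeholders.

def export_linear_c(coef, intercept, name="predict"):
    """Emit a C decision function from trained linear-model parameters."""
    weights = ", ".join(f"{w}f" for w in coef)
    return (
        f"int {name}(const float *x) {{\n"
        f"    static const float w[{len(coef)}] = {{{weights}}};\n"
        f"    float acc = {intercept}f;\n"
        f"    for (int i = 0; i < {len(coef)}; i++) acc += w[i] * x[i];\n"
        f"    return acc >= 0.0f;\n"
        f"}}\n"
    )

c_source = export_linear_c([0.8, -1.2, 0.5], -0.1)
print(c_source)
```

The generated function needs no malloc and no libm, so it drops straight into a bare-metal firmware build.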


ProtoNN is another classifier designed especially for low-resource systems: http://proceedings.mlr.press/v70/gupta17a.html


How could this possibly be faster than a linear classifier? In "Implementation" near Section 2 they claim their implementation is better than a linear classifier, but that seems like it couldn't possibly be true, could it? A single floating-point operation per parameter has to be faster than multiple branches, right?


A linear classifier can only handle linearly separable data, which limits prediction accuracy a lot. I don't think you'll get anywhere close to this level of classification with a linear model; in the experiments it is shown to outperform an SVM with RBF kernel.

The platforms in question don't have hardware floating point, so any floating-point support has to be emulated in software (slow).
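This is where quantized weights pay off beyond memory: with int8 inputs and weights, the whole decision function runs in integer arithmetic that an FPU-less MCU handles natively, with no soft-float library calls. A rough Python model of such a kernel (the Q-format scaling and names are my illustrative choices, not from the paper):

```python
def predict_fixed_point(x_q7, w_q7, bias_q14):
    """Integer-only linear decision: Q7 inputs/weights, Q14 bias/accumulator.

    Each int8 * int8 product is a Q14 value fitting in int16; summing in a
    32-bit accumulator avoids overflow, so no floating point is needed.
    """
    acc = bias_q14
    for x, w in zip(x_q7, w_q7):
        acc += x * w
    return 1 if acc >= 0 else 0

# 0.5 and -0.25 in Q7 are 64 and -32; a bias of 0.1 in Q14 is about 1638
assert predict_fixed_point([64, 64], [64, -32], 1638) == 1
```

In real firmware the inner loop becomes plain integer multiply-accumulate, which even 8/16-bit parts can do in a handful of cycles per term.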



