This project is in its infancy. We are barely able to walk!
As a proof of concept, quantized MNIST is happily running on Mbed-enabled MCUs. Upcoming work includes reference counting, memory abstraction, a TensorFlow-to-Mbed exporter, and more ops.
These should be enough for us to run most DL models out there.
We started with the idea of putting AI everywhere and helping people build cooler things.
Inputs and collaborations are welcome.
Neil Tan, Kazami Hsieh, Dboy Liao, Michael Bartling
Could you share your training scripts? I'd love to look at the piece that does the quantization/SavedModel/FreezeGraph.
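For anyone curious while waiting on the actual scripts: the 8-bit weight quantization in TensorFlow's tooling boils down to an affine map from a float range onto 0..255. A minimal plain-Python sketch of that idea (my own illustration, not the project's code; `quantize`/`dequantize` are made-up names):

```python
# Affine (asymmetric) uint8 quantization sketch:
# map floats in [min, max] linearly onto integer codes 0..255.
def quantize(values):
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # avoid div-by-zero for constant tensors
    codes = [int(round((v - lo) / scale)) for v in values]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    # Recover approximate floats; error is at most ~scale/2 per value.
    return [lo + c * scale for c in codes]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
codes, lo, scale = quantize(weights)
recovered = dequantize(codes, lo, scale)
print(codes)      # integer codes in 0..255
print(recovered)  # approximate reconstruction of the weights
```

The real pipeline stores `lo` and `scale` (or min/max) alongside each quantized tensor so ops on the MCU can work in integer arithmetic and dequantize only where needed.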
Embedded Learning Library, https://github.com/Microsoft/ELL
Edit: Oh, 256KB RAM. Never mind, although 2Mb RAM modules are pretty cheap and have as little as 32 pins...
Good work guys!
Oh... almost forgot... get off my lawn...
(Older readers might be able to guess the make & model from those specs).