Hacker News
Show HN: DeepCamera – Turn a camera into an AI-powered device (embedded/Android/Pi, etc.) (github.com)
14 points by simbaz 12 days ago | 4 comments





Three years ago, we decided to build a product in the hottest area: machine learning/deep learning. We tried several cloud options, but couldn't ship a real product because of cost and other limitations.

So we started developing this product: DeepCamera.

Targeting a single piece of hardware (a camera) and porting deeply to it would have cost us half a year, based on our strong embedded Linux (router) experience. So this is the best approach we found: run the software on a set-top box or on Android (tablet/mobile).

Also, we strongly believe AutoML is the only way for an AI product to succeed. Users should be able to teach/train the AI (the model) as simply as chatting. So we built the application like WeChat/WhatsApp: people talk to the machine to make it smarter. When it recognizes someone incorrectly, you just rename the person, and the machine retrains itself to remember them.
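To make the "rename and retrain" idea concrete, here is a minimal sketch of the kind of loop it implies. This is not DeepCamera's actual code: the embedding function, the in-memory label store, and the use of scikit-learn's KNeighborsClassifier are all assumptions for illustration.

    # Minimal sketch: correcting a wrong label and retraining on-device.
    # Assumptions (not DeepCamera's real API): each face crop has already been
    # turned into a fixed-length embedding vector, and a 1-NN classifier over
    # those embeddings is enough to "remember" a renamed person.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    embeddings = []   # one vector per enrolled face crop
    labels = []       # the name the user assigned to each crop

    def enroll(face_embedding, name):
        """User names (or renames) a face; store the example."""
        embeddings.append(face_embedding)
        labels.append(name)

    def retrain():
        """Rebuild the classifier from all labelled examples (cheap on-device)."""
        clf = KNeighborsClassifier(n_neighbors=1)
        clf.fit(np.stack(embeddings), labels)
        return clf

    def rename(old_name, new_name):
        """The 'just rename it' step: fix the labels, then retrain."""
        for i, label in enumerate(labels):
            if label == old_name:
                labels[i] = new_name
        return retrain()

    # Example: enroll two people, then correct a mistaken name.
    enroll(np.random.rand(128), "alice")
    enroll(np.random.rand(128), "bob")
    clf = rename("bob", "robert")
    print(clf.predict(np.random.rand(1, 128)))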

After deploying it in one of the largest industry-leading data centers to protect their security, we finally reached our ultimate goal: open source.

This is how a platform like Android could be born for AI. That is our dream: an "Android of AI", and that's SharpAI. With your help, we will succeed.


This is super confusing. Questions:

1. What is your unique strength here?

2. Does the user need to call a web API to use this? Can the whole thing work offline? If not, is the web API part open sourced?

3. What are the techniques you are using to bring down power usage?

4. You keep mentioning AutoML, but AutoML is only useful for training and generating models, not inference. For embedded software, the bottleneck is not training but inference. Also, you need to clearly state which exact AutoML techniques you are referring to.

5. In one table you show Nvidia and CPU hardware having the same GFLOPS. Very confusing.

Overall, I'd recommend more rigor and being crystal clear when using technical terminology.


1. No product has implemented the feature we open sourced; that is what is unique. 2. The whole thing (AI inference and classifier training) runs offline. 3. ARM did that for us. 4. We do classifier training on the embedded system itself. 5. I calculated the theoretical and real numbers; ARM comes out better than an Nvidia GPU.
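For what "runs offline" would look like in practice, here is a minimal inference-only sketch using TensorFlow Lite on the device. The model file name and the preprocessing are placeholders for illustration, not DeepCamera's actual pipeline.

    # Minimal sketch of fully offline, on-device inference with TensorFlow Lite.
    # "face_classifier.tflite" is a placeholder model file, not one shipped by
    # DeepCamera; no network calls are made anywhere below.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(model_path="face_classifier.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    def classify(frame):
        """Run one camera frame through the model locally."""
        # Crudely reshape the frame to the model's expected input and scale to
        # [0, 1]; a real pipeline would do proper image resizing here.
        h, w = input_details[0]["shape"][1:3]
        x = np.resize(frame, (1, h, w, 3)).astype(np.float32) / 255.0
        interpreter.set_tensor(input_details[0]["index"], x)
        interpreter.invoke()
        scores = interpreter.get_tensor(output_details[0]["index"])[0]
        return int(np.argmax(scores)), float(np.max(scores))

    # Example: classify a dummy frame (replace with a real camera capture).
    label_id, confidence = classify(np.zeros((480, 640, 3), dtype=np.uint8))
    print(label_id, confidence)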

All your base are belong to us!


