Building a Poor Man’s Deep Learning Camera in Python (makeartwithpython.com)
265 points by burningion on Dec 19, 2017 | 26 comments



This article inspired me to have a play around with Darknet and Darkflow - turns out they're pretty easy to get going on an OS X laptop with Python 3 (installed via Homebrew).

Here's how I got Darkflow working: https://gist.github.com/simonw/0f93bec220be9cf8250533b603bf6...

For Darknet, I just ran "make" as documented here: https://pjreddie.com/darknet/install/ and then followed the instructions on https://pjreddie.com/darknet/yolo/ and https://pjreddie.com/darknet/nightmare/ to try it out.
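For reference, the sequence on those Darknet pages boils down to roughly this (commands as documented there around the YOLOv2 release; file names may differ for newer versions):

```shell
# Clone and build Darknet (CPU-only by default; edit the Makefile for GPU support)
git clone https://github.com/pjreddie/darknet
cd darknet
make

# Download pre-trained YOLO weights and run detection on a bundled sample image
wget https://pjreddie.com/media/files/yolo.weights
./darknet detect cfg/yolo.cfg yolo.weights data/dog.jpg
```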


Off-topic, but I am curious what you are using for your OS X laptop. I'm assuming from your choice of wording that it isn't Apple hardware and am interested in your experience with what is working well for a non-Apple OS X machine and how happy you are with it.


Well, that was a remarkably fast installation & test for something like this. Very fun trying it out directly with my webcam, though rather slow on my laptop CPU; I'll have to get it running on a better machine.


So many cool projects on HN this week!

For OpenCV classification tutorials, this is another great resource for playing around with DIY projects. FYI: avoid his email list unless you enjoy 3-4 sales emails every week.

https://www.pyimagesearch.com/


This is brilliant. Given that even DeepLens is around $250, this poor man's setup is a very good DIY kit for anyone who wants to get started in this new age of image processing.


This is the project I am planning to do with my son as he's getting interested in computers and loves nature. The rough aims are:

> Rasp Pi + Camera/PIR to photograph birds
> Connect to the internet and post to wp, twitter and instagram

The final aim is to add an AI component to see if we can detect birds and keep a count.
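For the counting part, the detector's output can be filtered by label and confidence. A minimal sketch, assuming darkflow-style prediction dicts (the detections list here is hard-coded stand-in data, not real model output):

```python
# Hard-coded stand-ins for detector output (darkflow-style label/confidence dicts)
detections = [
    {"label": "bird", "confidence": 0.87},
    {"label": "bird", "confidence": 0.42},
    {"label": "cat",  "confidence": 0.91},
]

def count_label(detections, label, threshold=0.5):
    """Count detections of one class at or above a confidence threshold."""
    return sum(1 for d in detections
               if d["label"] == label and d["confidence"] >= threshold)

print(count_label(detections, "bird"))  # -> 1 (the 0.42 detection is below threshold)
```

In a real run you'd feed the model's per-frame output into this and accumulate counts over time (with some de-duplication so the same bird isn't counted every frame).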


Do you have hummingbirds in your area? You could set up a high-fps camera pointed at a hummingbird feeder. Hummingbirds aren't afraid to approach a feeder placed just outside the window of a house.


no - starlings and sea gulls mainly - nothing exciting


Sea gulls then. They aggressively go after food with little regard for their safety. They’re like rats with wings.


"Rats with wings" - you mean pigeons?


Fun fact: they are called rock doves.


This is an awesome family Xmas project. I don't have an old PC around to run YOLO / Tiny YOLO constantly, though; can anyone recommend a cheap, suitable server provider for this? AWS EC2?


Don't you have a computer with a decent GPU? I have trained YOLOv2 on a GTX 1050. A night of training (starting from pre-trained lower layers) yields good results, depending on your application.


Inference is cheap. I suspect even a Raspberry Pi may be enough.
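A rough back-of-envelope on the Pi question. Both figures below are assumptions: ~7 GFLOPs per frame is the published Tiny YOLOv2 figure, and a few GFLOPS sustained is optimistic for a Pi 3's CPU, so treat the result as order-of-magnitude only:

```python
tiny_yolo_flops = 7e9   # assumed compute per frame for Tiny YOLOv2 (~7 GFLOPs)
pi_flops_per_sec = 3e9  # assumed (optimistic) sustained throughput on a Pi 3 CPU

seconds_per_frame = tiny_yolo_flops / pi_flops_per_sec
print(f"{seconds_per_frame:.1f} s per frame")  # order-of-magnitude estimate only
```

So no real-time video, but a frame every couple of seconds is plenty for a bird camera.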


He meant for inference.


The AWS EC2 P2 instance is the one used in the fast.ai course, and it seems like quite a good choice for these things.


So this can identify anything at all? That’s pretty amazing. Maybe we’re getting closer to a dish washing robot.


No, it will identify anything it is trained for. I haven't read the article, but these things are usually trained on common datasets with 10, 100, or 1,000 classes of common objects. The 1,000-class dataset covers a giant portion of the distribution of objects you'd see, so it's sort of close to "anything."
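Concretely, whether something is detectable reduces to a set-membership check against the trained label set. A toy sketch (the six labels here are an illustrative stand-in for a real set like VOC's 20 or ImageNet's 1,000 classes):

```python
# Illustrative stand-in for a real trained label set (e.g. VOC or ImageNet classes)
trained_classes = {"person", "bird", "cat", "dog", "bicycle", "car"}

def can_detect(label):
    """A detector can only report classes it was trained on."""
    return label in trained_classes

print(can_detect("bird"))        # -> True
print(can_detect("dishwasher"))  # -> False: never in training, never detected
```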

Love the project.


This is superb. Gonna try this sometime soon. I wanted to do this when Google announced Clips.


How does this compare with https://aiyprojects.withgoogle.com/vision ?

Which one is the cheapest and the most fun to work with?


(EDIT due to wrongly stating that models run on the Raspberry Pi directly)

The Google Vision Kit will run models on a custom neural processing chip connected to the Raspberry Pi Zero. With the DIY setup from the blog post, the neural network runs on a "large PC" (potentially with a GPU). Depending on the hardware you have at your disposal, you can run more complex (and therefore more powerful) neural networks. At the same time, you'll need wifi setup and streaming to work. Completely embedded devices are easier to just put in the wild.

In theory, you should be able to use the models from the Vision Kit if you follow their instructions and just put them on a Raspberry Pi directly, and get an additional Movidius compute stick: https://developer.movidius.com/


Inference doesn't run on the RPi Zero. It runs on the VisionBonnet board which has a Movidius VPU tensor co-processor on it. RPi is just for handling the LEDs, buzzers and buttons. For training a model with custom datasets, you are correct - something bigger's needed.


This is awesome. Some new ideas for my drone.


Pretty cool! Signed up on his website and got a timeout while trying to confirm my email address.


Thank you, it gives me a good starting point.


So cool!



