
Building a Poor Man’s Deep Learning Camera in Python - burningion
https://www.makeartwithpython.com/blog/poor-mans-deep-learning-camera/
======
simonw
This article inspired me to have a play around with Darknet and Darkflow -
turns out they're pretty easy to get going on an OS X laptop with Python 3
(installed via Homebrew).

Here's how I got Darkflow working:
[https://gist.github.com/simonw/0f93bec220be9cf8250533b603bf6...](https://gist.github.com/simonw/0f93bec220be9cf8250533b603bf6dba)

For Darknet, I just ran "make" as documented here:
[https://pjreddie.com/darknet/install/](https://pjreddie.com/darknet/install/)
and then followed the instructions on
[https://pjreddie.com/darknet/yolo/](https://pjreddie.com/darknet/yolo/) and
[https://pjreddie.com/darknet/nightmare/](https://pjreddie.com/darknet/nightmare/)
to try it out.
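A minimal sketch of driving the darknet CLI from Python: the `./darknet detect` command and its `label: NN%` stdout lines are as documented on pjreddie's pages, but the cfg/weights paths, the parser, and the sample output below are illustrative assumptions, not taken from the article.

```python
import re
import subprocess

def parse_darknet_output(text):
    """Parse 'label: NN%' lines from darknet's stdout into (label, confidence) pairs."""
    return [(m.group(1), int(m.group(2)) / 100.0)
            for m in re.finditer(r"^([\w ]+): (\d+)%$", text, re.MULTILINE)]

def detect(image_path):
    """Run the darknet CLI on one image.

    Assumes a built ./darknet binary plus cfg/weights downloaded per the
    YOLO instructions; adjust the paths to whatever you fetched.
    """
    out = subprocess.run(
        ["./darknet", "detect", "cfg/yolo.cfg", "yolo.weights", image_path],
        capture_output=True, text=True).stdout
    return parse_darknet_output(out)

# Illustrative stdout in the documented format:
sample = "data/dog.jpg: Predicted in 0.016287 seconds.\ndog: 82%\ntruck: 64%\nbicycle: 85%"
print(parse_darknet_output(sample))
```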

~~~
patja
Off-topic, but I am curious what you are using for your OS X laptop. I'm
assuming from your choice of wording that it isn't Apple hardware and am
interested in your experience with what is working well for a non-Apple OS X
machine and how happy you are with it.

------
Ryel
So many cool projects on HN this week!

For OpenCV classification tutorials, this is another great resource for
playing around with DIY projects. FYI: avoid his email list unless you enjoy
3-4 sales emails every week.

[https://www.pyimagesearch.com/](https://www.pyimagesearch.com/)

------
amrrs
This is brilliant. Given that even DeepLens is around $250, this poor man's
setup is a very good DIY kit for anyone who wants to get started in this new
age of image processing.

------
fiftyacorn
This is the project I am planning to do with my son as he's getting interested
in computers and loves nature. The rough aims are -

> Rasp Pi + Camera/PIR to photograph birds
> Connect to internet and post to wp, twitter and instagram

The final aim is to add an AI component to see if we can detect birds and keep
a count.
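The counting aim can be sketched in a few lines, assuming detections arrive as (label, confidence) pairs from whatever detector ends up on the Pi side (the labels and thresholds here are illustrative):

```python
from collections import Counter

def count_birds(detections, threshold=0.5):
    """Count detections labelled 'bird' above a confidence threshold.

    `detections` is a list of (label, confidence) pairs, e.g. parsed
    from a detector's output for one photo.
    """
    return sum(1 for label, conf in detections
               if label == "bird" and conf >= threshold)

def tally_by_label(detections, threshold=0.5):
    """Running tally across all labels, handy for the posting step."""
    return Counter(label for label, conf in detections if conf >= threshold)

frame = [("bird", 0.91), ("bird", 0.62), ("bird", 0.31), ("person", 0.88)]
print(count_birds(frame))        # only detections clearing the threshold count
print(tally_by_label(frame))
```

Keeping the tally per label rather than birds-only makes the later "post to wp/twitter" step trivial to extend to other visitors.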

~~~
txsh
Do you have hummingbirds in your area? You could set up a high-fps camera
pointed at a hummingbird feeder. Hummingbirds aren't afraid to approach a
feeder placed just outside the window of a house.

~~~
fiftyacorn
no - starlings and sea gulls mainly - nothing exciting

~~~
txmx2017
Sea gulls then. They aggressively go after food with little regard for their
safety. They’re like rats with wings.

~~~
hcrisp
"Rats with wings" \- you mean pigeons?

~~~
ashelmire
Fun fact, they are called rock doves.

------
njam
This is an awesome family xmas project. I don't have an old pc around to run
YOLO / YOLO Tiny constantly though; can anyone recommend a cheap, suitable
server provider for this? AWS EC2?

~~~
Iv
Don't you have a computer with a decent GPU? I have trained YOLOv2 on a GTX
1050. A night of training (and starting from pre-trained lower layers) yields
good results depending on your application.

~~~
Iv
Inference is cheap. I suspect even a Raspberry Pi may be enough.

------
mrfusion
So this can identify anything at all? That’s pretty amazing. Maybe we’re
getting closer to a dish washing robot.

~~~
TuringNYC
No, it will identify anything it is trained for. I haven't read the article,
but these things are usually trained on common datasets with 10, 100, or 1000
classes of common objects. The 1000-class dataset covers a giant portion of
the distribution of objects you'd see, so it's sort of close to "anything."

Love the project.
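That fixed-label-set point is worth making concrete: a detector only ever reports classes it was trained on, so it pays to check your target label against the model's class list up front. The set below is a tiny illustrative subset of the sort of class list such a model ships with, not the real full list:

```python
# Tiny illustrative subset of the class list a pre-trained detector might have.
TRAINED_CLASSES = {"person", "bicycle", "car", "bird", "cat", "dog", "truck"}

def can_detect(label, trained_classes=TRAINED_CLASSES):
    """A detector can only report classes it was trained on."""
    return label in trained_classes

print(can_detect("bird"))        # in the training set
print(can_detect("dishwasher"))  # not in the set: would need retraining on new data
```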

------
aryamaan
This is superb. Gonna try this sometime soon. I wanted to do this when Google
announced Clips.

------
sandGorgon
how does this compare with
[https://aiyprojects.withgoogle.com/vision](https://aiyprojects.withgoogle.com/vision)
?

which one is the cheapest and the most fun to work with?

~~~
florianletsch
(EDIT due to wrongly stating that models run on the Raspberry Pi directly)

The Google Vision Kit will run models on a custom neural processing chip
connected to the Raspberry Pi Zero. With the DIY setup from the blog post, the
neural network runs on a "large pc" (potentially with GPU). Depending on the
hardware you have at your disposal, you can run more complex (and therefore
more powerful) neural networks. At the same time, you'll need wifi set-up and
streaming to work. Completely embedded devices are easier to just put in the
wild.

In theory, you should be able to use the models from the Vision Kit if you
follow their instructions and just put them on a Raspberry Pi directly, and get
an additional Movidius compute stick:
[https://developer.movidius.com/](https://developer.movidius.com/)
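The wifi-and-streaming requirement of the DIY setup can be as simple as length-prefixed JPEG frames over a TCP socket. This is a generic sketch of that pattern, not the blog post's actual protocol; the loopback demo stands in for a Pi sending frames to the "large pc":

```python
import socket
import struct
import threading

def send_frame(sock, frame_bytes):
    """Send one frame as a 4-byte big-endian length prefix plus the payload."""
    sock.sendall(struct.pack(">I", len(frame_bytes)) + frame_bytes)

def recv_frame(sock):
    """Read one length-prefixed frame; returns None on a cleanly closed socket."""
    header = _recv_exact(sock, 4)
    if header is None:
        return None
    (length,) = struct.unpack(">I", header)
    return _recv_exact(sock, length)

def _recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            return None
        buf += chunk
    return buf

# Loopback demo: the "Pi" side sends a fake JPEG, the "pc" side receives it.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

received = []
def pc_side():
    conn, _ = server.accept()
    received.append(recv_frame(conn))
    conn.close()

t = threading.Thread(target=pc_side)
t.start()
pi = socket.create_connection(("127.0.0.1", port))
send_frame(pi, b"\xff\xd8fake jpeg bytes\xff\xd9")
pi.close()
t.join()
print(received[0])
```

The length prefix matters because TCP is a byte stream: without it the receiver can't tell where one JPEG ends and the next begins.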

~~~
lovelearning
Inference doesn't run on the RPi Zero. It runs on the VisionBonnet board which
has a Movidius VPU tensor co-processor on it. RPi is just for handling the
LEDs, buzzers and buttons. For training a model with custom datasets, you are
correct - something bigger's needed.

------
viktour19
This is awesome. Some new ideas for my drone.

------
ultrasounder
Pretty cool! Signed up on his website and got a timeout while trying to
confirm my email address.

------
ikan
Thank you, it gives me a good starting point.

------
janemanos
So cool!

