
Show HN: Fast.ai Camera App for CNNs - _Tyler
https://github.com/TylerNoblett/fastai-cnn-camera-app/blob/master/README.md
======
moltar
Pretty bad imo. Tried three objects: Diet Coke can, bottle of cleaning fluid,
desktop cactus. Didn’t identify any even remotely close. Was just like a
random guess.

~~~
_Tyler
OP here: I shouldn't have added the word 'etc' to the list of things it
classifies, but I have changed the readme to show the full list of objects that
the demo works with. My intent was not to make a classifier for any and all
objects, but to serve as a demonstration of what someone could use the app for.

------
nl
I'm going to suggest it's pretty bad form to call this "Fast.ai Camera App"
when it's not a Fast.ai product.

~~~
_Tyler
Hi! I can see where you're coming from. However, this app is initially
intended to work tightly in conjunction with Render, which uses the fast.ai
name in its repo
([https://github.com/render-examples/fastai-v3](https://github.com/render-examples/fastai-v3)),
and (to my limited knowledge) it only works with models produced by fast.ai,
so making a clear naming distinction was crucial: someone creating models with
another library should realize they couldn't use this. If you have a
suggestion for how to make that distinction in a clear and concise manner
without coming across as an official product, I would definitely be willing to
hear you out!

~~~
nl
Well I don't see why the fact it uses a fast.ai model is particularly relevant
for the name.

You don't mention PyTorch or ResNet or Python or any of the millions of other
libraries it needs.

~~~
_Tyler
Right, but this library is designed only to be used with a fast.ai model (and
the hope is that fast.ai practitioners will find it helpful). I suppose you
could create your own backend that worked with another library if you wanted
to take the time, but that's not how it's designed.

EDIT: Regardless, I've updated some of the language in the readme to make it
clearer that this is not an official fast.ai product (and made it clearer in
the header that it uses fast.ai models), because I do think your criticism had
some validity and I really do appreciate the feedback.

------
tmchow
I wasn’t familiar with that fastai model until now. I took 6 photos and the
classification was an utter failure.

------
asutekku
Pretty neat, it recognized a chair even though it was obscured with clothes.
Granted, it does not recognize some highly specific objects (as it says in the
description), but it is a start.

------
bwasti
this can be done in browser with tensorflow.js + a pretrained mobilenet in a
couple dozen lines of code

example:
[https://jott.live/html/tfjsmobilenet](https://jott.live/html/tfjsmobilenet)
code:
[https://jott.live/code/tfjsmobilenet](https://jott.live/code/tfjsmobilenet)

------
continuations
> After the picture is taken, it's sent to a fast.ai CNN model running on
> render

Why does the model have to run on render? Can it be run on other servers?

~~~
_Tyler
Good question! As I mentioned in the readme, I'm hoping to add a Flask server
soon; however, Render is recommended on the fast.ai website and is the
simplest way to get a model running
([https://course.fast.ai/deployment_render.html](https://course.fast.ai/deployment_render.html)),
so I thought connecting my app to Render would be the most helpful way for
fast.ai practitioners to get started.
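For anyone curious what a self-hosted backend like the Flask server mentioned
above might look like, here is a minimal sketch. The `/analyze` route, the
`export.pkl` filename, and the fastai v1 calls (`load_learner`, `open_image`)
are illustrative assumptions, not the app's actual API; the app factory takes
a plain prediction function so the model wiring stays swappable:

```python
import io

from flask import Flask, jsonify, request


def create_app(predict_fn):
    """Build a tiny inference server.

    predict_fn: takes raw image bytes, returns (label, confidence).
    """
    app = Flask(__name__)

    @app.route("/analyze", methods=["POST"])
    def analyze():
        # The camera app would POST the captured photo as multipart form data.
        img_bytes = request.files["file"].read()
        label, confidence = predict_fn(img_bytes)
        return jsonify({"result": label, "confidence": confidence})

    return app


def fastai_predict(img_bytes):
    # Hypothetical wiring for a fastai v1 learner saved with learn.export();
    # adjust the import and calls to whichever fastai version is in use.
    from fastai.vision import load_learner, open_image

    learn = load_learner(".", "export.pkl")
    pred_class, _, probs = learn.predict(open_image(io.BytesIO(img_bytes)))
    return str(pred_class), float(probs.max())


if __name__ == "__main__":
    create_app(fastai_predict).run(port=5000)
```

Loading the learner once at startup (rather than per request, as the sketch's
`fastai_predict` does) would be the obvious next step for real deployments.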

