
Show HN: Set of trained deep learning models for computer vision - fchollet
https://github.com/fchollet/deep-learning-models
======
terhechte
Caffe also offers a set of pre-trained models in its "Model Zoo":
[https://github.com/BVLC/caffe/wiki/Model-Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo)

------
Omnipresent
Can the classify-images example be modified to train on other (new) images?
For example, images of screenshots, to identify elements in the images
(such as a word processor, browser, or command prompt).

~~~
fchollet
Yes, you can use these models for fine-tuning (or feature extraction) on a new
dataset. This tutorial would be a good place to start (esp. sections 2 and 3):
[https://blog.keras.io/building-powerful-image-classification...](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html)
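The feature-extraction approach from sections 2 and 3 of that tutorial can be sketched roughly like this (a minimal, hypothetical example using the tf.keras API; the class count and dense-layer size are made up for illustration):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models


def build_finetune_model(num_classes, weights="imagenet"):
    # Load the convolutional base with pre-trained weights and no
    # ImageNet classifier head.
    base = VGG16(weights=weights, include_top=False,
                 input_shape=(224, 224, 3))
    base.trainable = False  # freeze the base: pure feature extraction

    # Stack a small, trainable classifier on top for the new dataset.
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

After the new head converges, you can unfreeze the last few convolutional blocks and continue training with a low learning rate (the fine-tuning step described in the tutorial).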

------
minimaxir
Are you allowed to redistribute the models under the MIT License?

~~~
fchollet
The code is under the MIT license, not the weights. The weights are under
their respective licenses.

The weights are not included in the git tree and are thus not covered by the
LICENSE file. They are automatically downloaded when you run the code.

EDIT: following your comment, I have added a point-by-point breakdown of
licensing information in the README. This will avoid any confusion.

~~~
ma2rten
Don't take my word for it, but as far as I know, in both the US and the EU,
data (including model weights) can't be copyrighted.

~~~
akhilcacharya
I'd like to see more information on this.

I ask because I'm surprised more companies haven't gotten into the market of
licensing the data they collect (unless they do, in which case, sorry).

~~~
praccu
[https://www.ldc.upenn.edu/](https://www.ldc.upenn.edu/)

[http://kingline.speechocean.com/](http://kingline.speechocean.com/)

[http://deeplearning.net/datasets/](http://deeplearning.net/datasets/)

[http://www.elra.info/en/](http://www.elra.info/en/)

------
chris_va
Question:

Does anyone know of a library for loading models/weights from a registry of
some sort?

~~~
nl
Yes - pretty much every deep learning library does: Caffe, Torch, Theano,
TensorFlow, etc. (That's kind of what this link is about?)

Just use Keras on top of TensorFlow as shown at this link.
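Loading one of these pretrained models in Keras is only a couple of lines; the weights are fetched and cached locally on first use (a rough sketch; the image filename is just a placeholder):

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Weights are downloaded automatically on the first call and cached.
model = ResNet50(weights="imagenet")

# Classify a single image (placeholder path).
img = image.load_img("elephant.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])
```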

~~~
chris_va
Well, this is fairly manual. More like:

    my_model = registry.get("tensorflow://github.com/asdf/models/imagine/latest")
    ... my_model.push("...")
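Such a registry could be a thin wrapper over a URL-keyed download cache, similar to what Keras's own weight downloading does internally. A hypothetical sketch (the class, method names, and loader protocol are all invented for illustration):

```python
import hashlib
import os
import urllib.request


class ModelRegistry:
    """Hypothetical registry mapping model names to weight URLs,
    backed by a local download cache."""

    def __init__(self, cache_dir="~/.model_registry"):
        self.cache_dir = os.path.expanduser(cache_dir)
        self.entries = {}  # name -> (url, loader)

    def register(self, name, url, loader):
        # loader is a callable that turns a local file path into a model.
        self.entries[name] = (url, loader)

    def get(self, name):
        url, loader = self.entries[name]
        # Cache by URL hash so re-fetches are skipped.
        fname = hashlib.sha256(url.encode()).hexdigest()
        path = os.path.join(self.cache_dir, fname)
        if not os.path.exists(path):
            os.makedirs(self.cache_dir, exist_ok=True)
            urllib.request.urlretrieve(url, path)
        return loader(path)
```

Versioning ("latest" vs. pinned) and integrity checks would be the hard parts in practice, not the download itself.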

~~~
nl
That's hundreds and hundreds of MBs you are downloading. The weights should
never change, so it hardly seems like a critical piece of functionality.

I guess someone could build it, sort of like the datasets you can download in
SciKit or R, or the trained models in NLTK/Spacy.

In fact, I've almost come full circle on this and think it might be a good
idea.

Weird - I didn't think people on the internet could change their minds.

~~~
chris_va
Heh :).

I was thinking that the current "best" model/architecture may change fairly
frequently. Obviously you wouldn't want to download 100 MB every time the
application starts, but re-downloading whenever there's a significant jump in
quality, amortized over many runs, could be good.

Anyway, I haven't seen anything like this, so was curious.

