
Convolutional neural networks and feature extraction with Python - perone
http://blog.christianperone.com/2015/08/convolutional-neural-networks-and-feature-extraction-with-python/
======
amelius
It would be nice if there existed a set of standard problems, a set of
benchmarks for each of them, and an overview of methods to approach these
problems and corresponding benchmarks. Then for each problem, also a set of
implementations.

Right now, the field of neural networks seems like a maze. It is too easy to
get lost, or to settle on a suboptimal solution.

~~~
kastnerkyle
This is the point of the benchmarks in literature such as MNIST, CIFAR-10,
ImageNet, SVHN, and so on. You can see a pretty comprehensive list here [1]
that also shows papers and their reported performance.

There are a lot of implementations for many of these models out there to be
found with some google-fu, and usually porting a network from one library or
framework to another is not too bad. The main thing is that many modern neural
networks are on the hairy edge of research, so having some nice, easy-to-use
code just lying around is pretty unlikely unless the researcher who published
the model _prioritized_ making things clean and readable.

The good news is that as long as a "suboptimal solution" is in place in your
pipeline, you can always improve later. The hard part is really setting up the
pipeline in the first place, IMO.

Since the state of the art is always moving (day to day at times!), and many
reported SOTA results are not 100% trustworthy, it is much better to set up a
pipeline and test different solutions yourself. One working solution on
production data is worth 1000 papers with "optimal" results.
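To make the "pipeline first, models later" idea concrete, here is a minimal sketch (toy data and toy models, all hypothetical, nothing from the linked post): the evaluation harness stays fixed, and candidate models are interchangeable, so swapping in a better one later is a one-line change.

```python
import random

random.seed(0)

# Toy 1-D two-class data: class 0 centered at 0.0, class 1 at 1.0.
def make_data(n):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = label + random.gauss(0, 0.3)
        data.append((x, label))
    return data

train, test = make_data(200), make_data(100)

# Candidate "models": each takes the training set and returns a predictor.
def majority_baseline(train):
    ones = sum(label for _, label in train)
    majority = 1 if ones * 2 >= len(train) else 0
    return lambda x: majority

def nearest_centroid(train):
    mean = lambda vals: sum(vals) / len(vals)
    c0 = mean([x for x, y in train if y == 0])
    c1 = mean([x for x, y in train if y == 1])
    return lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1

# The fixed part of the pipeline: one evaluation metric, one test set.
def accuracy(predict, test):
    return sum(predict(x) == y for x, y in test) / len(test)

# Trying a new solution means adding one entry to this list.
for fit in (majority_baseline, nearest_centroid):
    model = fit(train)
    print(fit.__name__, accuracy(model, test))
```

The same shape works when the "models" are whole neural networks from different frameworks: as long as each exposes fit/predict to the harness, comparing a suboptimal baseline against a shiny new paper's method is cheap.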

[1]
http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354

