Ask HN: Looking for research done with shallow CNNs and lo-res data - thedragonline
======
p1esk
I trained small networks on low res data.

What are you trying to do? What kind of research are you looking for?

~~~
thedragonline
I've already built a working model (at least it appears that way). I'm looking
for any research involving non-representational, low-resolution imagery (3x3)
- I haven't seen anything that rules out working with data at this scale. But
maybe it's a fool's errand - I don't know. It would be nice to find literature
that catalogs what has/hasn't been done in this space, or better, what works
and what doesn't.

~~~
p1esk
Why do you need this? Again, what are you trying to do?

------
verdverm
Maybe you want AutoML? There are some recent posts about getting YOLO running
on mobile, or squeezing models into a tiny number of bytes.

~~~
thedragonline
I’m looking for actual research. I’m wrapping up some work on a stripped-down
version of the LeNet architecture applied to extremely low-resolution, non-
representational data (3x3 greyscale pixels). The results appear to be better
than chance and I’m quite frankly stunned - I expected complete garbage. Maybe
I’ve missed something, but if the results are simply due to chance, I had an
utterly extraordinary case of bad luck. (edit:clarity)

~~~
verdverm
You can probably avoid convolution entirely for a 3x3 greyscale input.

Have you run your test multiple times with varied training sets? Did it
perform well on validation data or test data?

Honestly, it sounds like a data set that is easily memorized.
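To illustrate the "avoid convolution" point: a 3x3 greyscale patch is just
nine numbers, so a small fully connected net is a natural baseline. Here's a
minimal sketch in plain NumPy - the toy task (is the centre pixel brighter
than the mean of its neighbours?) and the architecture are my own invented
example, not the poster's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3x3 greyscale patches flattened to 9 values. Label is 1 when
# the centre pixel is brighter than the mean of its eight neighbours.
X = rng.random((512, 9))
neighbours = [0, 1, 2, 3, 5, 6, 7, 8]
y = (X[:, 4] > X[:, neighbours].mean(axis=1)).astype(float)

# One hidden layer (9 -> 16 -> 1), trained by full-batch gradient descent
# on the logistic loss. No convolution anywhere.
W1 = rng.normal(0, 0.5, (9, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)                       # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # sigmoid output
    return h, p

lr = 1.0
for _ in range(5000):
    h, p = forward(X)
    g = (p - y) / len(X)                           # d(loss)/d(logit)
    gh = np.outer(g, W2) * (1 - h**2)              # backprop through tanh
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

_, p = forward(X)
acc = ((p > 0.5) == y).mean()                      # training accuracy
```

With only nine inputs there's no spatial structure worth sharing a kernel
over, which is why a dense layer does the job - and also why memorization
is such a live risk at this scale.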

~~~
thedragonline
>Have you run your test multiple times with varied training sets?

Some background - I've been working on this for several months now,
experimenting with various CNNs (settling on a modified LeNet) and their
hyperparameters. The bulk of the experiments have been failures - a typical
scenario is the loss decreasing during training, but the model winding up
unable to correctly predict labels in the test phase. There has been a
progression, however - from predicting either label A or B (but not both), to
predicting both (but no better than chance), to doing a little better than
chance. Maybe I'm fooling myself - I don't know. That's why I've been scouring
the Internet for similar kinds of work (and not finding anything truly useful)
and am now reaching out to HN. If there's work out there that definitively
rules out working in this space, I'm all ears. Otherwise I'm going to keep on
experimenting. (edit:formatting)

