
RCN is much more data efficient than traditional Deep Neural Networks - barbolo
https://www.vicarious.com/2017/10/26/common-sense-cortex-and-captcha/
======
dpandya
It seems that the primary contribution of this technique is that it uses
specific assumptions supported by neuroscience research in order to allow for
composability of learning and better generalization. By introducing these
specific assumptions (e.g., that contours define objects), they are able to
reduce the complexity of what the model has to learn and thereby reduce the
amount of data it needs.

Obviously, the question then becomes: what happens when you have visual
situations that violate or come close to violating the assumptions made?

I'm not familiar enough with the specifics of RCNs to be able to answer this;
maybe someone else can. Very interesting paper and approach regardless.

~~~
joe_the_user
After six or seven click-throughs, I downloaded the PDF.

I haven't read it, but skimming it I could see that there are definitely no
formulas in it at all. Which sort of says that, at best, what it tells you is
"we did this thing, which is kind of like X and kind of like Y with Z changes".
Essentially, there is no way to reproduce or understand it by itself. The first
reference then had a link behind a paywall...

So despite lots of apparent explanation, it seems like what they're actually
doing is essentially unspecified (at least to the interested layman). At best,
an expert in the field of "compositional models" could say what is happening.

Also, the paper is published under the heading of an AI firm in Fremont, CA,
rather than by folks at a university, with the many authors listed by initial
and last name...

PDF for the curious:

[http://science.sciencemag.org/content/sci/early/2017/10/26/s...](http://science.sciencemag.org/content/sci/early/2017/10/26/science.aag2612.full.pdf)

Edit: tracked down a paper that apparently has some "real" math. Whether it's
even what the OP is doing remains to be seen.

[https://staff.fnwi.uva.nl/t.e.j.mensink/zsl2016/zslpubs/lake...](https://staff.fnwi.uva.nl/t.e.j.mensink/zsl2016/zslpubs/lake15science.pdf)

~~~
boltzmannbrain
70-page supplementary material:
[http://science.sciencemag.org/content/sci/suppl/2017/10/25/s...](http://science.sciencemag.org/content/sci/suppl/2017/10/25/science.aag2612.DC1/aag2612_George_SM.pdf)

Reference code:
[https://github.com/vicariousinc/science_rcn](https://github.com/vicariousinc/science_rcn)

~~~
mannigfaltig
I still find it incredibly hard to tell whether this is overblown hype or
legit scientific progress. There is no indication whatsoever that this
approach scales to deep feature hierarchies, and that is likely what you need
to compete on hard tasks like ImageNet classification. Given the amount of
money at play (several hundred million dollars), writing 70 pages and making
the code publishable is certainly an obvious way to get the most out of the
hype.

------
bufo
Again: no one cares about CAPTCHAs in the deep learning world compared to
other, more challenging benchmarks. I wouldn't be surprised if many
optimizations could be made with ANY kind of effort put into it. Still waiting
for Vicarious to go beyond MNIST and text CAPTCHAs.

~~~
nl
This is trueish, but there is more to it than that.

It is true for sure that absolute performance on MNIST isn't the most
interesting thing in the world.

But when introducing a new tool or technique, being able to show competitive
performance on MNIST is a good way to show that it isn't an entirely useless
thing.

I'd note that the recent Sabour, Frosst, and Hinton paper[1] (where they
finally got Hinton's capsules to work) spends most of the paper analyzing how
it performs on MNIST, with only a short section on other datasets.

I assume I don't need to point out that Geoff Hinton does know a little about
deep learning, and if he thinks submitting a NIPS paper on MNIST is acceptable
in 2017 then I'm not going to argue too hard against it.

[1]
[https://arxiv.org/pdf/1710.09829.pdf](https://arxiv.org/pdf/1710.09829.pdf)

~~~
chronice70
And what about the other _boys_ who know a thing or two about deep learning? I
don't see any of these people submitting MNIST to NIPS in 2017: Yoshua Bengio,
Yann LeCun, Ian Goodfellow, Andrew Ng, Ross Girshick, Andrej Karpathy, Pedro
Domingos, and the whole DeepMind crew.

So yes, submitting experiments on MNIST in 2017 should not be taken seriously.

~~~
nl
"boys"

Not sure what this was supposed to mean? Yes, I think Fei-Fei Li's datasets
are much better tests than MNIST, if that is what you were getting at?

 _I don't see any of these people submitting MNIST to NIPS in 2017_

None of them submitted things as entirely new and different as this, either.

Having said that, I think my point holds.

The completely awesome 2017 "Generalization in Deep Learning" paper[0] was co-
authored by Bengio and uses MNIST, because everyone can follow it.

Yann LeCun was a co-author on the 2017 "Adversarially Regularized Autoencoders
for Generating Discrete Structures" paper[1.5], which uses MNIST.

Ian Goodfellow's "Generative Adversarial Nets" NIPS paper[1] used MNIST as one
of its datasets. Yes, it was 2014, but when introducing a new technique, using
familiar datasets isn't a bad thing.

DeepMind's "Bayes by Backprop" (ICML15) used MNIST[2]

Another example: the (June 2017) John Langford (Vowpal Wabbit) et al. paper[3]
on using boosting to learn ResNet blocks used MNIST.

So yes, I agree there are much better datasets to compare performance on. But
to prove something new works, MNIST is a useful dataset.

[0]
[https://arxiv.org/pdf/1710.05468.pdf](https://arxiv.org/pdf/1710.05468.pdf)

[1] [http://papers.nips.cc/paper/5423-generative-adversarial-nets](http://papers.nips.cc/paper/5423-generative-adversarial-nets)

[1.5]
[https://arxiv.org/pdf/1706.04223.pdf](https://arxiv.org/pdf/1706.04223.pdf)

[2] [https://deepmind.com/research/publications/weight-uncertaint...](https://deepmind.com/research/publications/weight-uncertainty-neural-networks/)

[3]
[https://arxiv.org/pdf/1706.04964.pdf](https://arxiv.org/pdf/1706.04964.pdf)

------
flor1s
I only skimmed over the article, but I think the title on HN does not reflect
the claims the authors are making.

The title of the paper is: A generative vision model that trains with high
data efficiency and breaks text-based CAPTCHAs

The title of the article is: Common Sense, Cortex, and CAPTCHA

Neither is anywhere near the sensationalist title on HN: "RCN is much more
data efficient than traditional Deep Neural Networks"

~~~
boltzmannbrain
The paper's abstract highlights the model's data efficiency several times:

 _Learning from few examples and generalizing to dramatically different
situations are capabilities of human visual intelligence that are yet to be
matched by leading machine learning models. By drawing inspiration from
systems neuroscience, we introduce a probabilistic generative model for vision
in which message-passing based inference handles recognition, segmentation and
reasoning in a unified way. The model demonstrates excellent generalization
and occlusion-reasoning capabilities, and outperforms deep neural networks on
a challenging scene text recognition benchmark while being 300-fold more data
efficient. In addition, the model fundamentally breaks the defense of modern
text-based CAPTCHAs by generatively segmenting characters without CAPTCHA-
specific heuristics. Our model emphasizes aspects like data efficiency and
compositionality that may be important in the path toward general artificial
intelligence._

------
sherbondy
As far as I can tell, the code on GitHub
([https://github.com/vicariousinc/science_rcn](https://github.com/vicariousinc/science_rcn))
only works for the MNIST dataset.

It's unclear how to run it on the CAPTCHA examples referenced in the paper,
even though they did make the datasets for those examples available.

Bummer; a big part of what the paper claims is so great about this RCN model
is its ability to segment sequences of characters (even of indeterminate
length!). Sad that I can't easily verify this for myself!

~~~
fuelfive
We talked about releasing more comprehensive proof-of-concept code, but
ultimately decided against it. While it would have been helpful for other
researchers, offering anyone on the internet a ready-to-use arbitrary CAPTCHA
breaker seemed like a net negative for society.

~~~
visarga
As if people can't find CAPTCHA breakers already. Your research might just
push for the replacement of text CAPTCHAs with some other test.

------
BucketSort
I'd love to read this, but the faint text on white background... good god. I
went through the code looking to change the background so I could read it and
found this:

body { text-rendering: optimizeLegibility; }

Ok

~~~
Groxx
Huh. Did they change it? I see a very thin font in the header and in bulleted
lists, but the rest of the text on the page is black (literally #000000) and
relatively bold compared to what I'm used to seeing online (could just be that
it's slightly larger, which is also good! it's by no means _big_, just nice
to see something not pointlessly tiny).

The header has the awful "ObjektivMk1-Thin" font mentioned elsewhere, but for
me the body is a normal _"Roboto","Helvetica Neue",Helvetica,Arial,sans-serif_
font-family.

~~~
spott
They did change it.

------
nightcracker
Featuring some of the worst typography I've seen on the internet. There was
clearly an attempt, but just leaving the font as the default would've been
more readable.

~~~
warent
I'll typically be the last to comment on a webpage's typography (usually to a
fault), but this site actually gives me a headache to try and read.

------
cs702
This paper looks really interesting to me, although after quickly reading the
introduction, it's evident that I'm going to have to invest quite a bit of
time and effort in the paper to grasp its key ideas. I come from more of an
encoding-decoding, deep/machine-learning background, as opposed to a
probabilistic graphical modeling (PGM) background, and my knowledge of
neuroscience is minimal.

To date, my experience with "deep PGM models" (for lack of a better term) is
limited to some tinkering with (a) variational autoencoders using ELBO
maximization as the training objective, and to a much lesser extent (b) "bi-
directional" GANs using a Jensen-Shannon divergence between two joint
distributions as the training loss.
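
(For concreteness: by "ELBO maximization" I mean minimizing a loss roughly
like the sketch below. This is my own minimal illustration, assuming PyTorch,
a diagonal-Gaussian latent, and a Bernoulli likelihood, e.g. for binarized
MNIST; nothing here is from the RCN paper.)

    import torch
    import torch.nn.functional as F

    def negative_elbo(x, x_recon_logits, mu, logvar):
        # Reconstruction term: Bernoulli log-likelihood, summed over pixels.
        recon = F.binary_cross_entropy_with_logits(
            x_recon_logits, x, reduction="sum")
        # Closed-form KL between the diagonal Gaussian q(z|x) and N(0, I).
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # Minimizing (recon + kl) is equivalent to maximizing the ELBO.
        return recon + kl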

Has anyone here with a similar background to mine had a chance to read this
paper? Any thoughts?

~~~
barbolo
I’ve been reading it over and over since last weekend. And I’m checking the
code. And I still don’t understand it.

------
real-hacker
It looks like RCN sits between traditional machine learning (with manual
feature selection) and 'modern' neural networks (CNNs). The traditional
methods are too rigid to capture the essential information, while CNNs are
sometimes too flexible to avoid overfitting. Unlike CNNs, RCNs have a
predetermined structure. Humans are not born as a blank slate; we have a
neural structure encoded in our genes, so we don't need millions of training
samples to recognize objects. So maybe RCN is onto something.

I am curious how RCN performs on real-life images like ImageNet, and how it
holds up against adversarial examples. If it can easily recognize adversarial
examples, that would be very interesting...

------
dx034
> In 2013, we announced an early success of RCN: its ability to break text-
> based CAPTCHAs like those illustrated below (left column). With one model,
> we achieve an accuracy rate of 66.6% on reCAPTCHAs, 64.4% on BotDetect,
> 57.4% on Yahoo, and 57.1% on PayPal, all significantly above the 1% rate at
> which CAPTCHAs are considered ineffective (see [4] for more details). When
> we optimize a single model for a specific style, we can achieve up to 90%
> accuracy.

66% on reCAPTCHA, and up to 90% when optimised, is much higher than what I can
achieve with my actual brain. Maybe I should consider using a neural network
to answer those; it happens quite frequently that I need 2-3 rounds to get
through reCAPTCHA.

------
nnx
Is RCN more of a CNN alternative most useful for image-related tasks, or
could it also work well as an alternative to other types of neural networks?

ps: thank god for Reader mode in Safari

------
visarga
This is a paper that departs from the 'normal' AI routine and takes a very
different approach. Is there another paper formally describing the RCN
network? What goes inside the RCN cell? I find it more like a teaser than a
revelation at this point.

~~~
dpandya
The details are provided in the supplementary material:
[http://science.sciencemag.org/content/sci/suppl/2017/10/25/s...](http://science.sciencemag.org/content/sci/suppl/2017/10/25/science.aag2612.DC1/aag2612_George_SM.pdf)

(mentioned by boltzmannbrain in one of the other comments)

------
fpoling
I do not see any discussion in the paper regarding the computational
efficiency of RCN detection. The only hint about performance that I found is
at the end of the supplementary material, where the authors state:

> Use of appearance during the forward pass: Surface appearance is now only
> used after the backward pass. This means that appearance information
> (including textures) is not being used during the forward pass to improve
> detection (whereas CNNs do). Propagating appearance bottom-up is a requisite
> for high performance on appearance-rich images.

I presume from this that, in its current form, RCN requires much more
computation per detection than a CNN, but I could be wrong.
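
(To make the cost question concrete, here is a toy sketch of my own, not
anything from the paper: max-product message passing on a simple chain MRF in
NumPy. A chain needs only one forward-backward sweep, but on a large loopy
graph, which RCN presumably has, messages get recomputed over several sweeps
per detection, which is where extra cost relative to a single CNN forward pass
could come from. The node counts, potentials, and function names below are
made up for illustration.)

    import numpy as np

    def chain_map_inference(unary, pairwise):
        # MAP states of a chain MRF via max-product (Viterbi-style) messages.
        # unary:    (n_nodes, n_states) log-potentials, one row per node
        # pairwise: (n_states, n_states) log-potentials shared by all edges
        n, k = unary.shape
        msg = np.zeros((n, k))              # forward max-product messages
        back = np.zeros((n, k), dtype=int)  # argmax backpointers
        for i in range(1, n):
            # Best score of reaching each state of node i from node i-1.
            scores = (unary[i - 1] + msg[i - 1])[:, None] + pairwise
            msg[i] = scores.max(axis=0)
            back[i] = scores.argmax(axis=0)
        # Backtrack the jointly optimal assignment.
        states = np.empty(n, dtype=int)
        states[-1] = np.argmax(unary[-1] + msg[-1])
        for i in range(n - 1, 0, -1):
            states[i - 1] = back[i, states[i]]
        return states

    # Example: 5 nodes, 3 states, a smoothness prior favoring equal neighbors.
    rng = np.random.default_rng(0)
    unary = rng.normal(size=(5, 3))
    pairwise = np.where(np.eye(3, dtype=bool), 0.0, -1.0)
    print(chain_map_inference(unary, pairwise))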

------
stochastic_monk
If I'm not mistaken, a Deep Belief Net or Deep Boltzmann Machine would also be
a generative model with enormously greater data efficiency. Comparing against
CNNs is a red herring: the advantage of requiring less data to develop a model
is more of a generative-vs-discriminative issue than an "RCN vs everyone else"
issue.

What I don't quite understand is why Deep Belief Nets seem to not be getting
press these days. For example, see this paper from 2010:
[http://proceedings.mlr.press/v9/salakhutdinov10a.html](http://proceedings.mlr.press/v9/salakhutdinov10a.html).

------
gugagore
Here's another example of a generative model that improves data efficiency, in
a similar-ish domain.

[https://gizmodo.com/a-new-ai-system-passed-a-visual-turing-t...](https://gizmodo.com/a-new-ai-system-passed-a-visual-turing-test-1747500554) /
[http://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.p...](http://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.pdf)

------
taneq
Recent discussion on Vicarious' CAPTCHA cracking:
[https://news.ycombinator.com/item?id=15564922](https://news.ycombinator.com/item?id=15564922)

------
singularity2001
The git 'reference implementation' is only for MNIST, not for real CAPTCHAs.

------
jostmey
I'll need to see this approach work well across many datasets before I am
convinced, not just CAPTCHAs and MNIST.

------
m3kw9
How hard is it to get it to run using CoreML?

