The primary contribution of this technique seems to be that it builds in specific assumptions supported by neuroscience research in order to allow for compositional learning and better generalization. By introducing these assumptions (e.g. that contours define objects), they reduce the complexity the model has to learn and thereby the amount of data it needs.
Obviously, the question then becomes: what happens when you have visual situations that violate or come close to violating the assumptions made?
I'm not familiar enough with the specifics of RCNs to be able to answer this; maybe someone else can. Very interesting paper and approach regardless.
After six or seven click-throughs, I downloaded the PDF.
I haven't read it in full, but skimming it, I could see that there are definitely no formulas in it at all. Which means that, at best, what it tells you is "we did this thing, which is kind of like X and kind of like Y with Z changes". Essentially, there's no way to reproduce or understand it on its own. The first reference then had a link behind a paywall...
So despite lots of apparent explanation, what they're actually doing seems essentially unspecified (at least to the interested layman). At best, an expert in the field of "compositional models" could say what is happening.
Also, the paper is published under the heading of an AI firm in Fremont, CA rather than by people at a university, with the many authors listed by initial and last name...
I still find it incredibly hard to tell whether this is overblown hype or legitimate scientific progress. There is no indication whatsoever that this approach scales to deep feature hierarchies, and that is likely what you need to compete on hard tasks like classification on ImageNet. Given the amount of money at play (several hundred million dollars), writing 70 pages and making the code publishable is certainly an obvious way to get the most out of the hype.
Haha, yeah, Science papers are about providing a high-level explanation of what you did, in real words. Then you hit 'em with the 100-page supplement that's got more detail than 3 papers' worth of research in other journals.
Again: no one cares about CAPTCHA in the deep learning world compared to other, more challenging benchmarks. I wouldn't be surprised if many optimizations could be made with ANY kind of effort put into it. Still waiting for Vicarious to go beyond MNIST and text CAPTCHAs.
This is trueish, but there is more to it than that.
It is true for sure that absolute performance on MNIST isn't the most interesting thing in the world.
But when introducing a new tool or technique, being able to show competitive performance on MNIST is a good way to show that it isn't an entirely useless thing.
I'd note that the recent Sabour, Frosst and Hinton paper[1] (where they finally got Hinton's capsules to work) spends most of the paper analyzing how it performs on MNIST, and only a short section on other datasets.
I assume I don't need to point out that Geoff Hinton does know a little about deep learning, and if he thinks submitting a NIPS paper on MNIST is acceptable in 2017 then I'm not going to argue too hard against it.
And what about the other boys who know a thing or two about deep learning? I don't see any of these people submitting MNIST to NIPS in 2017: Yoshua Bengio, Yann LeCun, Ian Goodfellow, Andrew Ng, Ross Girshick, Andrej Karpathy, Pedro Domingos, and the whole DeepMind crew.
So yes, submitting experiments on MNIST in 2017 should not be taken seriously.
Not sure what this was supposed to mean? Yes, I think Fei-Fei Li's datasets are much better tests than MNIST, if that is what you were getting at?
> I don't see any of these people submitting MNIST to NIPS in 2017
None of them submitted things as entirely new and different as this, either.
Having said that, I think my point holds.
The completely awesome 2017 "Generalization in Deep Learning" paper[1] was co-authored by Bengio and uses MNIST - because everyone can follow it.
Yann LeCun was co-author on the 2017 "Adversarially Regularized Autoencoders for Generating Discrete Structures" paper[1.5], which uses MNIST.
Ian Goodfellow's autoencoder NIPS paper[1] used MNIST as one of its 4 datasets. Yes, it was 2014, but when introducing a new technique, using familiar datasets isn't a bad thing.
DeepMind's "Bayes by Backprop" (ICML15) used MNIST[2]
Another example: the (June 2017) John Langford (Vowpal Wabbit) et al. paper[3] on using boosting to learn ResNet blocks used MNIST.
So yes, I agree there are much better datasets to compare performance on. But to prove something new works, MNIST is a useful dataset.
> Neuroscience evidence indicates that contours and surfaces are represented in a factored manner in the brain [8-11], which might be why people have no difficulty imagining a chair made of ice.
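To make that factored representation concrete, here is a minimal sketch (nothing from the actual RCN code; the shape and appearance lists and the composition step are made up for illustration) of how sampling contours and surface properties independently lets a generative model compose combinations it has never seen together, like a chair made of ice:

```python
import random

# Toy illustration of a factored generative model: shape (contours) and
# surface appearance are independent factors that get composed at the end.
# The categories below are hypothetical; RCN's real contour/surface
# machinery with lateral constraints is far richer than this.
SHAPES = ["chair", "cup", "letter A"]          # contour / geometry factor
APPEARANCES = ["wood", "ice", "polka dots"]    # surface / texture factor

def sample_object(rng=random):
    """Draw the two factors independently, then combine them."""
    return {"shape": rng.choice(SHAPES), "appearance": rng.choice(APPEARANCES)}

if __name__ == "__main__":
    # Novel combinations (e.g. an ice chair) fall out for free, because the
    # two factors never had to be observed together during training.
    for _ in range(5):
        obj = sample_object()
        print(f"a {obj['shape']} made of {obj['appearance']}")
```

The point of the factorization is that the set of representable combinations grows multiplicatively while the number of things that has to be learned grows only additively, which is one way an assumption like this buys data efficiency.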
The paper's abstract highlights the model's data efficiency several times:
> Learning from few examples and generalizing to dramatically different situations are capabilities of human visual intelligence that are yet to be matched by leading machine learning models. By drawing inspiration from systems neuroscience, we introduce a probabilistic generative model for vision in which message-passing based inference handles recognition, segmentation and reasoning in a unified way. The model demonstrates excellent generalization and occlusion-reasoning capabilities, and outperforms deep neural networks on a challenging scene text recognition benchmark while being 300-fold more data efficient. In addition, the model fundamentally breaks the defense of modern text-based CAPTCHAs by generatively segmenting characters without CAPTCHA-specific heuristics. Our model emphasizes aspects like data efficiency and compositionality that may be important in the path toward general artificial intelligence.
It's unclear how to run it on the CAPTCHA examples referenced in the paper, even though they did make the datasets for those examples available.
Bummer; a big part of what the paper claims makes this RCN model so great is being able to segment sequences of characters (even of indeterminate length!). Sad that I can't easily verify this for myself!
We talked about releasing more comprehensive proof of concept code, but ultimately decided against it. While helpful for other researchers, offering anyone on the internet a ready-to-use arbitrary captcha breaker seemed like a net-negative for society.
I'd love to read this, but the faint text on a white background... good god. I went through the code looking to change the background so I could read it and found this:
Huh. Did they change it? I see a very thin font in the header and in bulleted lists, but the rest of the text on the page is black (literally #000000) and relatively bold compared to what I'm used to seeing online (could just be that it's slightly larger, which is also good! it's by no means big, just nice to see something not pointlessly tiny).
The header has the awful "ObjektivMk1-Thin" font mentioned elsewhere, but for me the body is a normal "Roboto","Helvetica Neue",Helvetica,Arial,sans-serif font-family.
Featuring some of the worst typography I've seen on the internet. There clearly was an attempt, but just leaving font-face as default would've been more readable.
This paper looks really interesting to me, although after quickly reading the introduction it's evident that I'm going to have to invest quite a bit of time and effort to grasp its key ideas. I come from more of an encoding-decoding, deep/machine-learning background, as opposed to a probabilistic graphical modeling (PGM) background, and my knowledge of neuroscience is minimal.
To date, my experience with "deep PGM models" (for lack of a better term) is limited to some tinkering with (a) variational autoencoders using ELBO maximization as the training objective, and to a much lesser extent (b) "bi-directional" GANs using a Jensen-Shannon divergence between two joint distributions as the training loss.
Has anyone here with a similar background to mine had a chance to read this paper? Any thoughts?
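For anyone less familiar with the ELBO objective mentioned above, here is a minimal sketch of (a): a toy Gaussian VAE trained by maximizing the ELBO (equivalently, minimizing its negative). This is just the standard VAE objective in PyTorch, not anything from the RCN paper; the layer sizes and names are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy Gaussian VAE illustrating the ELBO:
#   E_q(z|x)[log p(x|z)] - KL(q(z|x) || p(z)).
class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(x, logits, mu, logvar):
    # Reconstruction term (Bernoulli likelihood) plus analytic KL to N(0, I).
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.shape[0]

if __name__ == "__main__":
    model = TinyVAE()
    x = torch.rand(32, 784)  # stand-in for a batch of flattened images
    logits, mu, logvar = model(x)
    print("negative ELBO:", negative_elbo(x, logits, mu, logvar).item())
```

The KL term has a closed form for a Gaussian encoder and standard normal prior, which is why it appears here as a one-line analytic expression rather than a Monte Carlo estimate.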
It looks like RCN sits between traditional machine learning (with manual feature selection) and 'modern' neural networks (CNNs). The traditional methods are too rigid to capture the essential information, while CNNs are sometimes too flexible to avoid overfitting. Unlike CNNs, RCNs have a predetermined structure. Humans are not born as a blank slate; we have a neural structure encoded in our genes, so we don't need millions of training samples to recognize objects. So maybe RCN is onto something.
I am curious how RCN performs on real-life images like ImageNet, and how it holds up against adversarial examples. If it can easily recognize adversarial examples, that would be very interesting...
> In 2013, we announced an early success of RCN: its ability to break text-based CAPTCHAs like those illustrated below (left column). With one model, we achieve an accuracy rate of 66.6% on reCAPTCHAs, 64.4% on BotDetect, 57.4% on Yahoo, and 57.1% on PayPal, all significantly above the 1% rate at which CAPTCHAs are considered ineffective (see [4] for more details). When we optimize a single model for a specific style, we can achieve up to 90% accuracy.
66% on reCAPTCHA and up to 90% when optimised is much higher than what I can achieve with my actual brain. Maybe I should consider using a neural network to answer those; it happens quite frequently that I need 2-3 rounds to get through reCAPTCHA.
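For a rough sense of what those per-attempt numbers imply: if each attempt succeeds independently with probability p (a geometric model), the expected number of rounds is 1/p. The 40% human rate below is purely a made-up stand-in for "I need 2-3 rounds", not a figure from the paper.

```python
# Expected number of reCAPTCHA rounds until the first success, assuming each
# attempt succeeds independently with probability p (geometric mean 1/p).
# 0.666 and 0.90 are the per-attempt accuracies quoted above; the 0.40
# "human" rate is a hypothetical figure for comparison.
for label, p in [("RCN, general model", 0.666),
                 ("RCN, style-optimised", 0.90),
                 ("human (assumed)", 0.40)]:
    print(f"{label}: ~{1 / p:.1f} rounds on average")
```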
This is a paper that departs from the 'normal' AI routine and takes a very different approach. Is there another paper formally describing the RCN network? What goes inside the RCN cell? I find it more like a teaser than a revelation at this point.
I do not see a discussion in the paper regarding the computational efficiency of RCN detection. The only hint about performance that I found is at the end of the supplementary material, where the authors state:
> Use of appearance during the forward pass: Surface appearance is now only used after the backward pass. This means that appearance information (including textures) is not being used during the forward pass to improve detection (whereas CNNs do). Propagating appearance bottom-up is a requisite for high performance on appearance-rich images.
I presume from this that, in its current form, RCN requires much more computation per detection than a CNN, but I could be wrong.
If I'm not mistaken, a Deep Belief Net or Deep Boltzmann Machine would also be a generative model with enormously greater data efficiency. Comparing against CNNs is a red herring: the advantage of requiring less data to develop a model is more a generative-vs-discriminative issue than it is an "RCN vs everyone else" issue.