
Stacked Capsule Autoencoders - _Microft
http://akosiorek.github.io/ml/2019/06/23/stacked_capsule_autoencoders.html
======
p1esk
So it’s been almost two years since the capsules paper, and they still have
nothing but MNIST results? A bit underwhelming...

~~~
tw1010
Has _anything_ actually happened in the last two years? Genuinely curious.

~~~
ivalm
I think there are two big things:

1. The attention mechanism (transformers). Really moved NLP forward. It allows
long-distance dependencies without the need for sequential processing (as in
RNNs), and it generally scales much better (see BERT/GPT-2/XLNet/etc.). Rough
sketch below.

2. Separable convolutions. Significantly sped up the standard CV networks
(MobileNet at first, but now pretty much everything). Also sketched below.
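A minimal sketch of the first point, scaled dot-product attention (the core
operation in transformers): every position attends to every other position in
one step, so there is no recurrence. PyTorch and the shapes here are my own
illustrative choices, not from the post.

```python
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_model)
    d = q.size(-1)
    # Similarity of every query position with every key position: (batch, seq, seq).
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    # Attention weights sum to 1 over key positions.
    weights = torch.softmax(scores, dim=-1)
    # Weighted sum of values: long-range dependencies captured without any recurrence.
    return weights @ v

x = torch.randn(2, 10, 64)                    # toy batch of 2 sequences, length 10
out = scaled_dot_product_attention(x, x, x)   # self-attention, shape (2, 10, 64)
```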
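And a minimal sketch of the second point, a depthwise-separable convolution
(the MobileNet building block): a per-channel spatial convolution followed by a
1x1 channel-mixing convolution, which together need far fewer multiply-adds
than a full convolution. Again, PyTorch and the layer sizes are just
illustrative assumptions.

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Depthwise: one k x k filter per input channel (groups=in_ch), spatial mixing only.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=padding, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 56, 56)      # toy feature map
y = SeparableConv2d(32, 64)(x)      # same spatial size, 64 output channels
```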

------
ilaksh
Why do people assume that the assumptions in DL are correct for more general
intelligence? It's not remotely like a real biological NN. It seems like an
arbitrary, gross approximation that everyone is starting with just because it
is the prevailing worldview.

Why do some Chinese researchers focus on SNNs (spiking neural networks)? I
suspect they just have a different worldview because they are exposed to a
different information stream in their language. But SNNs are still a gross
approximation. Also, why assume it needs to look like an NN at all?

