
Google A.I. researchers develop alternative architecture for neural networks - jerianasmith
http://www.eno8.com/blog/google-a-i-researchers-develop-alternative-architecture-for-neural-networks/
======
inverse_pi
The idea might be cute, but the performance is not there yet. Specifically, they
were able to achieve state-of-the-art performance on MNIST, but got 10.6% test
error on CIFAR-10, which is comparable to the state of the art of 4 years ago
(and if you're in the field, 4 years is like a century ago). It's important to
stress that there's ABSOLUTELY NO theory backing anything, so everything we're
doing, including this idea of capsules and dynamic routing, is just brute-
forcing, trial-and-error. Even though the idea is cute, there's still A LOT to
be proven for this method. So when I see all these articles, I feel a little
bit uneasy.

~~~
zardo
>just brute-forcing, trial-and-error.

I think, as a whole, the community is executing a distributed epsilon-greedy
Monte Carlo search, which is theoretically guaranteed to converge on an optimal
policy eventually.
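
For what it's worth, here is a minimal sketch of what an epsilon-greedy search
looks like; the "research landscape" and its payoffs are entirely made up for
illustration:

```python
import random

def epsilon_greedy_search(candidates, score, epsilon=0.1, steps=1000, seed=0):
    """Toy epsilon-greedy search: mostly re-evaluate the best-known candidate,
    occasionally try a random one (explore with probability epsilon)."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(steps):
        if best is None or rng.random() < epsilon:
            candidate = rng.choice(candidates)  # explore: try a random idea
        else:
            candidate = best                    # exploit: stick with what works
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# A contrived "research landscape": each architecture has a fixed payoff.
landscape = {"mlp": 0.5, "capsules": 0.7, "cnn": 0.9}
best, best_score = epsilon_greedy_search(list(landscape), landscape.get)
print(best, best_score)
```

With enough exploration steps it eventually stumbles onto the best candidate,
which is the (weak) convergence guarantee being joked about.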

~~~
inverse_pi
Not necessarily. The convergence point can be, and probably will be, sub-optimal
if we keep doing this without a mathematical framework guiding us.

~~~
zardo
So long as you never stop trying random things, you are guaranteed to find a
global optimum... as time->inf

Not a very useful guarantee, but that's theoretical guarantees for you.

------
techno_modus
Here one can find more info about capsule networks:

[https://medium.com/@pechyonkin/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b](https://medium.com/@pechyonkin/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b)

[https://hackernoon.com/what-is-a-capsnet-or-capsule-network-2bfbe48769cc](https://hackernoon.com/what-is-a-capsnet-or-capsule-network-2bfbe48769cc)

[https://www.youtube.com/watch?v=VKoLGnq15RM](https://www.youtube.com/watch?v=VKoLGnq15RM)

------
stablemap
This doesn’t add much on top of the _Wired_ article. Lots of comments on that:

[https://news.ycombinator.com/item?id=15609402](https://news.ycombinator.com/item?id=15609402)

Here’s the paper:

[https://arxiv.org/abs/1710.09829](https://arxiv.org/abs/1710.09829)

------
chimtim
So much buzz/hype with multiple articles even though the early results are
only on MNIST.

------
rahimnathwani
This article's point is lost on me. Its description of a capsule network is
indistinguishable from the definition of a regular feedforward neural network.

------
firebender6
"Neural networks are designed to operate, more or less, like a human brain."
This right there is my problem. Neural networks are inspired by the brain. But
to date, no proof exists to connect the two, and it is highly unlikely it will
turn out that way even after we have made progress in understanding either of
them. I just wish people would stop making such bold claims and stick to the
facts.

------
visarga
The article makes it seem like capsule networks are cutting edge general
replacements for regular neural nets. The problem is that capsules only work
on limited types of data, and are not fast enough for deployment. The regular
neural nets are still the main workhorse. Capsules are a hot idea that might
lead to a leap in the future. It's like HP announcing memristors.

------
burtonator
I swear... Any day I'm expecting them to discover something disturbing and
then a giant bank of mist rolls out over Mountain View.

~~~
modi15
I have no idea why we are not calling for a worldwide ban on AI already. This
is not a nuke - we can't control it. Once an AI turns critical inside a lab,
we are done. There is no 'production'izing it. It will productionize itself
and do whatever it takes to not be shut down - including launching
nukes.

~~~
losteric
AI isn't magic... We can control this because it is no different than a nuke.

The secret to keeping a nuke safe is to keep the radioactive material separate
and below critical mass. The secret to keeping experimental AI safe is to keep
it (relatively) network-isolated.

That, and our current "AI" is still very, very dumb. The human backlash to
simple "job-killing" AIs is going to be much more dangerous than the first
super-human AI.

~~~
modi15
> The secret to keeping a nuke safe is to keep the radioactive material
> separate and below critical mass. The secret to keeping experimental AI safe
> is to keep it (relatively) network-isolated.

I absolutely agree. But NONE of this stuff that we see daily is on air-gapped
machines. I am pretty sure that ALL of it is on networked machines.

Accepting the fact that things can spiral out of control quickly is the first
step. A worldwide ban is the second, and a mandatory air gap for any further
experiments is the third.

Our current AI is dumb, but it is dramatically better than what we saw just
five years ago. This capsule stuff and the open AI announcements are taking us
very close. There is a very clear likelihood of us conceiving sentience in a
lab over the next five years - in which case the time to ban is right f.....
now!

~~~
daveguy
None of the stuff we see today is even remotely advanced enough to "get out of
hand".

You're exactly right about it being dumb. You're exactly wrong about "capsule
stuff and open AI announcements taking us very close."

There is approximately zero chance of us inventing sentience in a lab over the
next five years.

Have you had a conversation with a digital assistant lately? Don't believe the
hype. It's just hype.

~~~
modi15
I don't go by the hype. I work in the field and I know what's what.

Digital assistants don't work at all, and I know exactly why. I also know
that digital assistants which are indistinguishable from a domain-specific
expert are at most 2 years away.

You can argue that I am going by the hype. The problem is that I am not the
only one. Even Elon Musk has expressed similar fears. Most people didn't
believe that a nuclear bomb was possible. They had to see a mushroom cloud to
come around. AI has been a dud for the last two decades. It is easy to believe
that it will continue being so.

~~~
losteric
Siri promised to be that revolutionary digital assistant. After 7 years of
Apple buying startups and spinoffs that make the same promise, Siri is as dumb
as ever. Alexa works better because it's "dumber" - a very well-trained NLP
system wrapped around a still largely human-curated rule/knowledge database.
I've heard of more realistic dumb systems, but no murmurs suggesting a
revolutionary step towards AGI.

Elon Musk is a brilliant man who could become an expert on AI... but right
now he is not. I trust him about as much as I trust the people who believed
the first nuclear bomb would ignite the atmosphere and end all life on Earth -
well-meaning concerns without a true understanding of the domain.

> AI has been a dud for the last two decades.

What? No. We're in the middle of an AI renaissance and I expect nothing less
than exponential progress. However, the goal of a digital sapient is still
_very_ far away.

------
thallukrish
I guess the key lies in grasping the metadata in images, even when there is
little of it. Maybe this will come from clustering similar things. For
example, my brain may put a cat closer to a dog than to a human, as they have
something in common. But between a cat and a dog, I find some metadata that is
dissimilar.

~~~
ggggtez
What you are saying has nothing to do with what the article is talking about.
The article is badly written, which I imagine is the reason for the confusion.

The wired article was much clearer.

------
letitgo12345
I _really_ wish people and the media wouldn't hype a new paper before it has
even been peer reviewed...

------
Aron
Can anyone give an intuitive explanation for why standard CNNs are unable to
learn geometry?

------
ajmarcic
The paper referenced in the Wired article this post discusses:

[https://openreview.net/forum?id=HJWLfGWRb&noteId=HJWLfGWRb](https://openreview.net/forum?id=HJWLfGWRb&noteId=HJWLfGWRb)

~~~
ajmarcic
A CNN's representation of objects is nothing more than a series of filters.
The criticism being addressed here motivates an architecture that represents
the actual geometry of objects.

Consider this problem: given a picture of a simple object, a human could draw
a picture of the same object rotated at some angle. Currently there is no
elegant NN solution to this because no architectures "understand" a three
dimensional representation from images.

A CNN can identify every video frame of a dog running as a dog, but there is
no conception of the same dog running through space.
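
A toy numpy sketch of that criticism (my own illustration, not from the
paper): after a global max-pool, a network only sees *whether* each part was
detected, not *where*, so a correct arrangement of parts and a scrambled one
become indistinguishable to later layers.

```python
import numpy as np

def global_max_pool(feature_maps):
    # Collapse each HxW response map to a single
    # "is this part present anywhere?" score.
    return feature_maps.max(axis=(1, 2))

# Two 4x4 "part detector" response maps: channel 0 = eyes, channel 1 = mouth.
face = np.zeros((2, 4, 4))
face[0, 0, 1] = face[0, 0, 2] = 1.0   # eyes in the top row
face[1, 3, 1] = 1.0                   # mouth in the bottom row

scrambled = np.zeros((2, 4, 4))
scrambled[0, 3, 1] = scrambled[0, 3, 2] = 1.0   # eyes in the bottom row
scrambled[1, 0, 1] = 1.0                        # mouth in the top row

# After pooling, the two arrangements look identical to later layers.
print(global_max_pool(face))       # [1. 1.]
print(global_max_pool(scrambled))  # [1. 1.]
```

Capsules try to keep that discarded pose information in the activation itself
rather than throwing it away at each pooling stage.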

------
bluetwo
Any article that says "Neural networks are designed to operate, more or less,
like a human brain" loses credibility in my opinion.

------
philsnow
This sounds a lot like regular bagging/boosting, but applied to neural nets.

