
DeepMind just posted a mind-blowing paper – achievement of transfer learning - Osiris30
https://medium.com/@thoszymkowiak/deepmind-just-published-a-mind-blowing-paper-pathnet-f72b1ed38d46#.r77jjcppq
======
lorenzhs
Previous discussion of the paper:
[https://news.ycombinator.com/item?id=13675891](https://news.ycombinator.com/item?id=13675891)
(6 comments)

...and of this very blog post:
[https://news.ycombinator.com/item?id=13674181](https://news.ycombinator.com/item?id=13674181)
(also 6 comments)

------
skj
I wish we could keep Upworthy-style titles out of academic news... Or just out
of news entirely. I do my part by not reading articles presented this way.

Unless it's about celebrity twins. I'm all over that shit.

~~~
asaddhamani
Yes, please let's not have headlines like these. Next time it might as well be
"DeepMind just posted a mind-blowing paper - You won't believe what happened
next!", so let's not go there.

------
andyjohnson0
_" PathNet is a network of neural networks"_

Can anyone comment on the reasons for this approach, compared to a single
large network?

~~~
skywhopper
Optimistic answer: each sub-network can specialize in a certain skill or
element of knowledge, and then as a group, the networks can contribute to a
larger decision. Such specialization allows skills to be highly developed but
used only when they are applicable, instead of trying to force-fit a certain
skill onto an inappropriate problem domain. The human brain is believed to
work in a similar way with multiple regions processing input in parallel and
"voting" for the appropriate response, with other parts of the brain
coalescing and merging their output into the final response.
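
To make the "voting" idea concrete, here's a minimal mixture-of-experts-style
sketch. To be clear, this illustrates the general idea only, not what the
PathNet paper actually does (it evolves discrete pathways through a fixed grid
of modules rather than learning a soft gate), and every name in the snippet is
made up:

```python
# Hypothetical sketch, not the PathNet mechanism: a few small "expert"
# sub-networks each score the input, and a softmax gate weights their
# votes into one merged decision. All names (Expert, gate_logits, ...)
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

class Expert:
    """A tiny one-hidden-layer net; each instance can specialise."""
    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

    def forward(self, x):
        return np.tanh(x @ self.w1) @ self.w2   # per-class "votes"

n_in, n_out, n_experts = 8, 4, 3
experts = [Expert(n_in, 16, n_out) for _ in range(n_experts)]
gate_logits = np.zeros(n_experts)   # would be learned in a real system

x = rng.normal(size=n_in)                                # one input
gate = np.exp(gate_logits) / np.exp(gate_logits).sum()   # softmax weights
votes = np.stack([e.forward(x) for e in experts])        # (experts, classes)
decision = gate @ votes                                  # weighted merge
print("combined scores:", decision)
```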

Cynical answer: Neural-net machine learning has reached its peak, and there's
far more computing power than the researchers know what to do with. Add more
layers of processing, and if the numbers change, that's a paper.

------
PunchTornado
That article and title are the epitome of BS.

------
dangom
"We can imagine that in the future, we will have giant AIs trained on
thousands of tasks and able to generalize. In short, General Artificial
Intelligence."

Don't know if I believe this just yet.

~~~
Grangar
They're gonna be yuge! The greatest AIs we've ever seen!

------
pizza
The paper itself:
[https://arxiv.org/pdf/1701.08734.pdf](https://arxiv.org/pdf/1701.08734.pdf)

------
mrfusion
I wasn't able to follow this. Does anyone have a good explanation? Is it
really AGI?

~~~
lorenzhs
Of course not. The article is just oversensationalised; the paper doesn't make
the same broad claims. What the paper actually does: PathNet fixes a grid of
neural-network modules and uses a genetic algorithm (binary tournament
selection) to evolve which modules form the active pathway. After task A, the
winning pathway's parameters are frozen and the remaining modules are reused
to learn task B faster. That's transfer learning, not general intelligence.
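
A toy sketch of that tournament-selection loop over pathways (Python; the
fitness function is a stand-in and the per-path module training from the paper
is omitted entirely):

```python
# Toy sketch of PathNet-style pathway evolution: genotypes pick which
# modules are active per layer, and a binary tournament copies the
# fitter path over the loser with small mutations. No real training;
# fitness below is a dummy objective.
import random

random.seed(0)
L, M, K = 3, 10, 3   # layers, modules per layer, active modules per layer

def random_path():
    # One genotype: which K modules are active in each layer.
    return [random.sample(range(M), K) for _ in range(L)]

def mutate(path, rate=0.1):
    # Toy mutation: each active-module index may be re-rolled.
    # (The paper keeps modules within a layer unique; skipped here.)
    return [[random.randrange(M) if random.random() < rate else m
             for m in layer]
            for layer in path]

def fitness(path):
    # Stand-in objective: prefer low module indices. In the paper,
    # fitness is task reward after briefly training the active path.
    return -sum(m for layer in path for m in layer)

population = [random_path() for _ in range(8)]
for _ in range(200):
    i, j = random.sample(range(len(population)), 2)
    if fitness(population[i]) < fitness(population[j]):
        i, j = j, i                        # i is now the winner
    population[j] = mutate(population[i])  # loser overwritten by mutant

print("best path:", max(population, key=fitness))
```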

------
torrent-of-ions
Grammatical error in the first line. Good start.

