
The Google Brain Team – Looking Back on 2016 - stablemap
https://research.googleblog.com/2017/01/the-google-brain-team-looking-back-on.html
======
dmix
This post referenced a NYTimes article about Google's AI work on Translate,
and the brain team in general, that I missed previously. It's a good read.

"The Great AI Awakening"

> The new incarnation, to the pleasant surprise of Google’s own engineers, had
> been completed in only nine months. The A.I. system had demonstrated
> overnight improvements roughly equal to the total gains the old one had
> accrued over its entire lifetime.

[https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html](https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html)

~~~
NumberCruncher
This is a really great article.

------
zengid
As a student hoping to become a software engineer, I'm a little nervous about
Google's aggressive push to apply ML across so many of its products. It makes
me wonder: will the field of software engineering be reduced to plugging ML
nodes into client-side interfaces? Could it shrink the demand for engineers?

~~~
ilaksh
Personally I think that within ten years (possibly five, or even less) we will
see human-level AGI. This is because grounded deep learning systems (grounded
in granular sensory/motor input), such as DeepMind's, are being applied in
varied virtual environments, some with gradual development of skills and
knowledge. This type of approach will enable truly general agents.

So we better hope that the BCIs develop quickly as well.

~~~
fat-chunk
I work in AI, specifically in deep learning applied to computer vision. Not
trying to discredit your comment, but can you provide some sources indicating
that the kinds of models you mention are approaching anything like a general
agent?

I haven't read much of the deep learning literature beyond what has been
applied to computer vision. But from what I understand, the general consensus
is that the current crop of state-of-the-art deep learning models are very
good at performing specific tasks (machine translation, object recognition in
images, etc.), but not so good at generalising across multiple domains. That
seems more like a problem for reinforcement learning (which does indeed
incorporate deep learning), and it has proven very difficult to solve.

~~~
ilaksh
I believe the key is an agent with sensory and motor terminals in diverse
contexts, gradually trained on increasingly complex knowledge and tasks. As I
mentioned, see DeepMind's work. Also see the field of AGI, which does exist
(for example, search 'AGI-16 intelligence' on YouTube).

I am working on a webpage to try to break down why I think this is coming so
fast.

[https://www.youtube.com/watch?v=BP7vhBaBDyk&t=4589s](https://www.youtube.com/watch?v=BP7vhBaBDyk&t=4589s)
General Reinforcement Learning

[https://www.youtube.com/watch?v=T9eSVYLSSrs](https://www.youtube.com/watch?v=T9eSVYLSSrs)
The Emotional Mechanisms in NARS

[https://www.youtube.com/watch?v=eVrflIw6sGg&t=4056s](https://www.youtube.com/watch?v=eVrflIw6sGg&t=4056s)
AGI-15 Keynote by Jürgen Schmidhuber - The Deep Learning RNNaissance

~~~
argonaut
Don't read too much into titles.

AGI(-16) is a cognitive science / philosophy conference (and not particularly
high impact). Similarly, NIPS is not really about neural information
processing.

~~~
ilaksh
I'm not reading into titles. I watched a lot of the videos. Don't dismiss it
based on a superficial evaluation of titles or prejudice about the conference.

AGI is in fact a developed field with key insights into general intelligence.
You should study it.

~~~
nl
Argonaut works in the field. I'm pretty sure (s)he knows what NIPS is and has
thought about AGI some too.

~~~
argonaut
I comment a lot on machine learning but I don't actually work in the field
actively. I did some research in college.

There aren't many active researchers/experts commenting on HN (better things
to do), which is IMO a big issue with the ML-related discussion quality on HN
(it's basically 90% futurism/speculation).

------
rawnlq
Is Brain completely separate from DeepMind?

~~~
tcai
Yes - they are independent groups with separate projects in different
locations (Brain is in Mountain View, DeepMind in London).

~~~
jrheard
Is there an easy way of characterizing the difference between the two groups -
eg brain focuses on A, deepmind focuses on B?

~~~
gdahl
We don't have a simple separation of concerns like that. Brain and DeepMind
share a common vision around advancing the state of the art in machine
learning in order to have a positive impact on the world. Because machine
intelligence is such a huge area, it is useful to have multiple large teams
doing research in this area (unlike two product teams making the same product,
two research teams in the same area just produce more good research). We
follow each other's work and collaborate on a number of projects, although
timezone differences sometimes make this hard. I am personally collaborating
on a project with a colleague at DeepMind that is a lot of fun to work on.

Disclosure: I work for Google on the Brain team.

------
philip142au
I wish they would make a society of AI brains inside the computer that talk to
each other by sending documents around; one AI brain might have a job such as
checking that a document is correct. A large society of AI brains.

Secondly, a single AI brain could itself be composed of many smaller AI brains
forming a society of their own, and so on, recursively all the way down.
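The recursive composition described here can be sketched in a few lines: a
"brain" either does its own job on a document or routes the document through
its society of sub-brains, which may themselves be societies. This is a purely
illustrative toy under my own assumptions (the `Brain` class and its names are
hypothetical, not anything Google has built):

```python
from dataclasses import dataclass, field

@dataclass
class Brain:
    """A 'brain' that processes documents; may itself be a society of sub-brains."""
    name: str
    subbrains: list = field(default_factory=list)

    def process(self, document: str) -> str:
        # A leaf brain does its own job (here: a trivial annotation/check).
        if not self.subbrains:
            return f"{document} [checked by {self.name}]"
        # A composite brain passes the document through its sub-society in turn.
        for sub in self.subbrains:
            document = sub.process(document)
        return document

# A small society: one composite brain made of two leaf brains.
society = Brain("society", [Brain("writer"), Brain("checker")])
print(society.process("draft"))
# → "draft [checked by writer] [checked by checker]"
```

Because composite brains contain other `Brain` instances, the nesting can go
arbitrarily deep, which is the "recursively down" part of the idea.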

------
johnsmith21006
Google Voice is rumored to be getting a major refresh. Will we get the new
Translate capabilities built in to Google Voice in real time? That would
basically be the universal communicator.

I don't need the functionality personally, but I want to see it in my
lifetime.

~~~
cercatrova
Skype does that already I think

