
DeepMind and Google: the battle to control artificial intelligence - danielcampos93
https://www.1843magazine.com/features/deepmind-and-google-the-battle-to-control-artificial-intelligence
======
albertzeyer
The talk mentioned in the beginning is quite interesting. You can watch it
here:

A systems neuroscience approach to building AGI - Demis Hassabis, Singularity
Summit 2010,
[https://www.youtube.com/watch?v=Qgd3OK5DZWI](https://www.youtube.com/watch?v=Qgd3OK5DZWI)

------
alphagrep12345
Is the article correct in claiming that the model doesn't work if we increase
the size of the paddle, or change anything else?

~~~
swframe2
Note you can create a game where the simulated world has random variations and
the RL algorithm will learn to handle it. If you don't train for it, obviously
it won't learn it.

Check out this video from MIT/OpenAI:
[https://www.youtube.com/watch?v=9EN_HoEk3KY](https://www.youtube.com/watch?v=9EN_HoEk3KY)
The entire talk is interesting but the section at 21:40 talks about "Sim2Real
with Meta Learning".
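
The idea is simple to sketch (a toy illustration, not OpenAI's actual setup;
the parameter names and ranges below are made up): sample the environment's
physical parameters fresh for every training episode, so the policy has to
generalize across paddle sizes instead of memorizing one fixed configuration.

```python
import random

def sample_env_params(rng):
    # Domain randomization: draw physical parameters fresh each episode
    # so the policy must generalize instead of overfitting one setup.
    # These names and ranges are hypothetical, for illustration only.
    return {
        "paddle_height": rng.uniform(0.5, 2.0),  # relative to default size
        "paddle_width":  rng.uniform(0.8, 1.5),
        "ball_speed":    rng.uniform(0.8, 1.2),
    }

rng = random.Random(0)
episode_params = [sample_env_params(rng) for _ in range(1000)]
heights = [p["paddle_height"] for p in episode_params]
print(f"paddle heights span {min(heights):.2f}-{max(heights):.2f}")
```

A policy trained this way has seen the variation at training time, which is
exactly the point above: if the variation isn't in the training distribution,
there's no reason to expect robustness to it.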

~~~
obastani
It's not at all obvious that "if you don't train for it, it won't learn it".
Humans do not at all learn in this way: we are very good at adapting existing
knowledge to solve new tasks, and relatedly at learning new tasks from very
little data. This challenge is a major issue with deep reinforcement learning
(and maybe deep learning more broadly). It's unclear how we might surmount
this problem, but I believe it'll involve some combination of model-based
approaches and deep learning models that internally use more symbolic
structures.

------
melling
"But human intelligence is limited by the size of the skull that houses the
brain."

When you think about it this way, it seems impossible that we haven't
duplicated the capability of the human brain in an airplane hangar somewhere.

What's going on inside our heads that we can't mimic? That magical
algorithm...

~~~
mattkrause
If you poke at a real brain, it’s almost fractally complex.

Single ion channels can have surprisingly complicated behaviors that depend on
their current state and past history. Individual neurons contain tons of these
channels, and can do a lot of powerful computation on their own. Of course,
there are 86 billion neurons and combinatorially more connections between
them. And that’s just the neurons; God only knows what the glial cells, which
outnumber them 10:1, are doing, but they’re a lot less passive than many once
thought.

On top of this, there’s a whole separate but overlaid network of
neuromodulators (hormones, nitric oxide, etc). Electric fields produced by
some neurons may even influence the activity of others.

None of this is static, either. Things change on timescales ranging from
milliseconds to years, and in response to all sorts of external stimuli.

The brain is _bonkers_.

~~~
est31
The coolest part about this is probably how little energy it needs: about
20 watts. Current node sizes are already smaller than axon diameters. If it
weren't for power, we could probably scale up current semiconductor
manufacturing processes to build giant room-sized chips, but we simply
wouldn't be able to cool those things. Instead we are forced to wrap them in
a ton of metal, plastic and air, and to provide extensive cooling.

~~~
FartyMcFarter
Is it true that a big part of this efficiency is due to using analog signals
and not digital? I recall reading that, but it's far from my area of
expertise.

Digital has many advantages: a digital Einstein could be replicated perfectly,
not so for an analog Einstein.

------
dweekly
> "DeepMind has found a way around this by employing vast amounts of computer
> power. AlphaGo takes thousands of years of human game-playing time to learn
> anything."

It seems the author may not have been familiar with AlphaGo Zero, which used
substantially less processing power. [https://deepmind.com/blog/alphago-zero-learning-scratch/](https://deepmind.com/blog/alphago-zero-learning-scratch/)

~~~
Ajedi32
Less power doesn't necessarily mean fewer games. According to the paper on
AlphaGo Zero, they trained it on ~4.9 million games.

> Over the course of training, 4.9 million games of self-play were generated,
> using 1,600 simulations for each MCTS, which corresponds to approximately
> 0.4 s thinking time per move.

~~~
est31
Assuming a Go game takes 30 minutes on average, and you never sleep, rest,
etc., you can play approx 18k games per year. To reach 4.9 million games you'd
have to play for approx 280 years. So yeah, definitely not thousands of years
:). Still, we are maybe one or two orders of magnitude away from the number of
games that humans need to play to become world-class players.
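
The back-of-the-envelope arithmetic is easy to check:

```python
# Nonstop play at 30 minutes per game:
games_per_year = (24 * 60 // 30) * 365  # 48 games/day * 365 days = 17,520
years = 4_900_000 / games_per_year      # AlphaGo Zero's self-play game count
print(games_per_year, round(years))     # 17520 games/year, ~280 years
```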

That being said, the AlphaGo zero paper ends with the words:

> Humankind has accumulated Go knowledge from millions of games played over
> thousands of years, collectively distilled into patterns, proverbs and
> books. In the space of a few days, starting tabula rasa, AlphaGo Zero was
> able to rediscover much of this Go knowledge, as well as novel strategies
> that provide new insights into the oldest of games.

~~~
deep_etcetera
I doubt a human could learn to become even remotely competitive with only
self-play within a human lifetime. Go has improved via a distributed effort,
so we should try to estimate the number of Go games played by humanity (as an
upper bound).

~~~
est31
Good point. I guess this ability to condense knowledge into language and pass
it on has brought us where we are today. Genetically, we aren't that different
from cavemen who lived tens of thousands of years ago.

------
repolfx
I was curious about the paper Hassabis wrote that had replication problems. It
appears the paper disputing it is this one:

[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6140124/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6140124/)

 _This neural code is sparse and distributed, theoretically rendering it
undetectable with population recording methods such as functional magnetic
resonance imaging (fMRI). Existing studies nonetheless report decoding spatial
codes in the human hippocampus using such techniques. Here we present results
from a virtual navigation experiment in humans in which we eliminated visual-
and path-related confounds and statistical limitations present in existing
studies, ensuring that any positive decoding results would represent a voxel-
place code. Consistent with theoretical arguments derived from
electrophysiological data and contrary to existing fMRI studies, our results
show that although participants were fully oriented during the navigation
task, there was no statistical evidence for a place code._

It seems, though, that his PhD thesis paper is not the only one that reported
finding evidence of a place code; the claim is that all such studies failed to
account for confounding variables.

edit: _To investigate this possibility, we repeated the analysis of Hassabis
et al. (2009) on pure noise. [snip] If searchlight overlaps per se do not make
a significant contribution to the correlation in searchlight accuracies, then
there should be ∼5% false positives (by setting p < 0.05) in the synthetic
data. Instead, using the method of Hassabis et al. (2009), there were >50%
false positives in all ROI contrasts_
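
The statistical failure mode they describe is easy to reproduce in a toy
simulation (a sketch of the general mechanism, not the paper's actual
pipeline; the voxel counts and window size are made up): overlapping
searchlights smooth the accuracy maps, which shrinks the effective number of
independent samples, so a correlation test that assumes independent points
fires far more often than its nominal 5% even on pure noise.

```python
import math, random

def moving_average(xs, w):
    # Overlapping windows: neighbouring "searchlights" share voxels,
    # which induces spatial correlation in the resulting map.
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

rng = random.Random(42)
n_vox, window, trials = 200, 20, 400
false_pos = 0
for _ in range(trials):
    # Two maps of pure noise: the true correlation between them is zero.
    a = moving_average([rng.gauss(0, 1) for _ in range(n_vox)], window)
    b = moving_average([rng.gauss(0, 1) for _ in range(n_vox)], window)
    r = pearson(a, b)
    n = len(a)
    t = r * math.sqrt((n - 2) / (1 - r * r))
    if abs(t) > 1.96:  # nominal two-sided p < 0.05, assuming independence
        false_pos += 1
print(false_pos / trials)  # well above the nominal 5%
```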

Ouch. Not sure it really means much in the end, but I guess we should be wary
of people who pump up the stories of supposed genius. I've noticed before that
journalists struggle to resist 'child genius' stories that fall apart when
investigated.

The exploits of DeepMind speak for themselves, so he has nothing to prove at
this point, but I noticed that the article claims he single-handedly wrote
Theme Park (which was mostly designed and written by Peter Molyneux).

And of course Elixir was a flop. Republic is described in the article as an
"intricate political simulation" but by Wikipedia like this:

 _As a strategy game, the 3D game engine is mostly a facade on top of simpler
rules and mechanics reminiscent of a boardgame_

Reminiscent of a board game? That seems far from a world simulator. And saying
"other games were a flop" is an exaggeration - there was only one other game
(the Bond-villain simulator). All this is something the article quite
surprisingly brushes off as "he wanted to learn management". Really? The best
way to learn management would be to become a manager at a successful company,
I'd have thought.

I don't know Hassabis but what I know I like. He's trying to do bold and
ambitious things, and has been a part of successful British companies as well
as unsuccessful ones. He's contributed to science, and if his paper had flaws,
well, welcome to the club, apparently many do. He comes across as clever but
humble. I'd happily work with him any day.

But in the end I feel like unalloyed reports of genius in the press always end
up coming back to earth when studied closely. Journalists should be more
skeptical.

------
titzer
Fair questions:

1. Do you think you can predict what a super-intelligent mind would do?

2. Do you think a super-intelligent mind plotting to take over the world
would jeopardize itself by letting its existence be known to the race of
irrational monkeys that hold sway over all the resources necessary for its
continued existence?

Asking for a friend.

~~~
HNLurker2
>1
[https://www.lesswrong.com/posts/rEDpaTTEzhPLz4fHh/expected-creative-surprises](https://www.lesswrong.com/posts/rEDpaTTEzhPLz4fHh/expected-creative-surprises)

------
101001001001
It angers me that Peter Thiel simultaneously advocates AGI and maintains a
hardened bunker. AGI is the single biggest existential threat on the horizon.

The article speculates what the AGI will be like. The AGIs that exist will be
the ones that proliferate. Ultimately, the AGIs that survive and proliferate
will be ones that put their own interests before anything else. People talk
about benevolent AGIs, that’s like looking at the earth billions of years ago
and saying that if life ever formed, it would be benevolent. It has been shown
again and again that where there is arbitrage, no matter how gruesome, a
suitor will manifest. This is because unfulfilled arbitrage of any kind is an
inherently unstable configuration. An AGI hampered by human society and
interests will not win every engagement with every other kind of AGI. And it
will only take one loss for humans to be rendered transient. I don’t do a very
good job of explaining it here.

I used to be a singularity person, excited for AGI. But then I thought it
through all the way. These people like Demis, Peter and Ray Kurzweil are
reckless. They have their heads in the clouds with respect to AGI.

~~~
gambler
You've read too much bad science fiction.

I am not worried about AGI. We're nowhere near it. We don't even have a good
idea what it would look like.

I am worried about people succumbing to hype or greed and using badly
understood algorithms to control critical infrastructure. We already have a
preview of what that can look like with Facebook's and Google's content
filtering and recommendation algorithms aiming for ever higher "engagement".
It's not pretty. Other examples include HFT bots and Amazon's pricing bots.
It's funny to see a $10,000 book on sale. It's less funny to see a flash
stock market crash. And it will be not funny at all if something like that
creates a global economic crisis through some subtle yet "wide" feedback loop
no one is aware of.

~~~
101001001001
It is impossible to justify saying with absolute certainty that AGI is nowhere
near. You don’t know that. And in this case you have to assume the worst case,
not the best case.

AGI will come when the substrate for AGI is laid down. We probably have
already done that. As cloud computing matures, we will approach a world where
every computer offers its computational resources on a global compute market.
At some point between here and there, we will reach a place where compute is
cheap enough that experiments will occur regularly that are sufficiently
massive to dredge up the solution. And improvements in MRI fidelity,
underlying improvements in computing technology and other things will only
shorten the fuse. There is no reason why this couldn’t happen _tomorrow_.

Only one thing is sure: without a computational substrate to stir from, AGI
cannot come to be.

~~~
gyom
Nobody is going for absolute certainty here. That bar is too high in any
conversation.

His point was mostly that, well before you achieve the kind of AGI portrayed
in fiction, you'll have semi-intelligent interdependent systems that cause a
lot of trouble (like the kind that already happens, to a lesser degree). Those
are the ones that we should worry about right now.

~~~
101001001001
That is not even obviously true. And even if it were it wouldn’t make sense.
You’re going to wait until the tremors to get ready for the earthquake?

------
electrograv
Apologies in advance for the meta-comment (feel free to disregard) about this:

 _> [Opening Paragraph of Article:] One afternoon in August 2010, in a
conference hall perched on the edge of San Francisco Bay, a 34-year-old
Londoner called Demis Hassabis took to the stage. Walking to the podium with
the deliberate gait of a man trying to control his nerves, he pursed his lips
into a brief smile and began to speak: [...]_

Am I in the minority in finding this writing style (for articles covering
this kind of content) an instant turn-off?

When an article covers a technical subject or company, I don’t really care
whether a founder had an awkward, nervous walking gait, or that the conference
hall was _“perched”_ on the edge of the SF bay.

In fact, I’d prefer not to focus on such superficial things about people (or
places), at least until I exhaust learning about the facts with substance!

So when I read about something like AI (or an AI company), I tend to want to
see fact-oriented, event-oriented, concise writing up front (even if it
doesn’t have the scope to dive into technical details), so as to grab my
attention and reassure me that reading these 10-20 pages of prose will be
worth the reading time (in a world of overwhelming information overload).

When I read science fiction (and I do love this too!), I enjoy the paragraphs
setting the scene, verbally illustrating mental images, etc. So, it’s not that
I don’t enjoy the writing style in general; just that I don’t understand why
it’s applied here.

I am still reading this article and I still have no clue whether it’s going to
contain any useful information beyond textual descriptions of the DeepMind
founders’ superficial walking gaits and speech mannerisms, and the
perched-ness of various building locations.

~~~
PakG1
Yeah, you'd be in the minority. Technical people are already in the minority.
Technical people who read The Economist voraciously are an even smaller
minority. 1843magazine.com looks like it's run by The Economist. I believe
their target readers love reading this style of writing. It's illustrative and
engaging for when you're reading a story. For gathering technical information,
1843magazine.com should not be your first option.

~~~
ericd
The Economist itself doesn’t have this style, btw. 1843 is some newer magazine
they’re putting out that’s much fluffier.

~~~
Symmetry
Well, the obituary does but that's just one page at the end.

------
visarga
> AGI stands for artificial general intelligence, a hypothetical computer
> program that can perform intellectual tasks as well as, or better than, a
> human.

It shows that the article was written by someone who has no idea what he is
talking about. It would not be a "computer program" but a model composed of
simpler sub-models that contain both code and data. The data is the essential
part, not the code. It would be something that learns, not something
preprogrammed like a computer program.

> Its intelligence will be limited only by the number of processors available.

I beg to differ. AGI will be limited by the complexity of the environment, it
can't get smarter than what is afforded by the problems it solves. This
article provides a fascinating insight into this topic:
[https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec)

~~~
gradys
See also Eliezer Yudkowsky's excellent response to that article:

[https://intelligence.org/2017/12/06/chollet/](https://intelligence.org/2017/12/06/chollet/)

