
AI Nationalism - niccolop
https://www.ianhogarth.com/blog/2018/6/13/ai-nationalism
======
YeGoblynQueenne
>> This arms race will potentially speed up the pace of AI development and
shorten the timescale for getting to AGI.

... or it will all fizzle out once it becomes clear that classifiers trained
from immense datasets can't get to AGI.

In the last few months I've noticed a sudden uptick in papers and articles on
the limitations of deep learning, and even a few conferences discussing ways
to overcome them (e.g. Logic and Learning at the Alan Turing Institute).
Eventually, the hype will die down, people in the field will feel more
confident discussing the weaknesses of deep learning and the general public
(including the industry and the military) will catch on. Then we'll move
forward again when the next big thing comes along.

~~~
evrydayhustling
I fully agree that there is an ongoing reckoning with the limitations of deep
learning, but that is the sign of a thriving community, not a failing one.

Also, AGI is a misplaced milestone, even as OP explores a more interesting
one: as incremental advances in AI empower non-human entities (corps and
governments) with unique powers of surveillance and autonomous action, the
impacts can be just as important as if it were a machine intelligence.

The uptick in articles is just a sign that it's trendy to poke holes in hype.
Expecting AI to go away is like expecting the Internet (another fundamental
shift that was announced with an annoying hype bubble) to go away. It's just
becoming a part of life, and creating new power structures while it does.

~~~
YeGoblynQueenne
Has there been a misunderstanding of my comment? I didn't mean to say that I
expect "AI to go away". For one thing, I certainly do not equate all of AI to
deep learning, or even machine learning.

My point is that deep learning has generated a lot of excitement in the last
few years, but as we're hitting inevitable diminishing returns the excitement
is dying down and people are starting to think about how to move beyond
current techniques. As usual, the last to catch on to the reversal of the old
trend (or the new trend, if you prefer) are the people who pay the money:
industry investors, the military, etc., and then the general public.

My money is on using deep learning for "low-level" sensory tasks and then
GOFAI, symbolic techniques, for high-level reasoning. There has been some
recent activity on trying to marry symbolic reasoning and deep learning (e.g.
differentiable ILP from Evans and Grefenstette at DeepMind [1]) and it's a
promising line of research that has the potential to yield "best-of-both-
worlds" results.

But - "AI going away"? Not any time soon!

________

[1] [https://arxiv.org/abs/1711.04574](https://arxiv.org/abs/1711.04574)

Although note that their assertion that ILP can't address noisy or ambiguous
data is plainly wrong :)

~~~
sangnoir
> Has there been a misunderstanding of my comment? I didn't mean to say that I
> expect "AI to go away"

The "it" in your original comment makes the subject ambiguous - your comment
can be interpreted as saying AI itself (not the arms-race) will "fizzle out".

I agree that AI's hype cycle is nearing the "trough of disillusionment" stage,
but after that will come a wider and deeper application of AI in a multitude
of fields and industries, instead of the narrow applications of today. Even
diminishing returns are worth chasing if they increase your turnover by 1-2%.

~~~
YeGoblynQueenne
>> I agree that AI's hype cycle is nearing the "trough of disillusionment"
stage,

I'm sorry but I never said anything like that. AI is not the same as deep
learning. It's deep learning that has been hyped in the last few years, not AI
in general. Of course, in the lay press, there is a great confusion between
AI, machine learning and deep learning - but I don't see why there should be
an assumption that I, too, am equally confused about those terms.

In any case, it should be easy to dispel any misunderstandings with a quick
look at my profile: I'm an AI PhD research student. I would hardly claim that
AI is about to fizzle - or even deep learning. What is being discussed in the
quote from the original article is the purported arms race to AGI. And how
could AI "fizzle" now, when it has been growing as a field for the last 60+
years?

Honestly, I don't see how any other interpretation of the "ambiguity" in my
comment can be justified, assuming good faith. I sure had to squint really
hard to see any ambiguity at all.

~~~
evrydayhustling
It sounds like we (in this thread) are all on the same page about the future
of AI-writ-large.

Regarding good faith, the quotation pattern you used didn't mention deep
learning - you referred to either "the arms race" or "classifiers on large
datasets" as fizzling out, and my reply resolved them to "AI" as a spanning
term, since that was the context of the original post you quoted.

For what it's worth, I would have written the same response about deep
learning specifically for the same reasons you point out later in the thread
that AI will remain useful. The specific profile of opportunities opened by DL
is finding plenty of valuable homes in corporate processes where other kinds
of automation fill in the gaps between it and AI.

At this point, I'm not sure whether you disagree with that, or just think that
some hype could afford to fade (no argument!).

Since you are a grad researcher, I'll throw in some of my context. I did my
PhD focused on probabilistic graphical models, back when it was easy to ignore
NNs (and when we expected to hit some of the same perceptual milestones). As a
grad student, a big part of your job is to filter fads and find ideas that
will _stay_ true.

Because of that, I was slower than I could have been to recognize the feedback
loops in what is "true" about applied AI. Deep learning's initial fit for some
architectures and problems has attracted attention that made it much cheaper
and easier to experiment with, and therefore useful for more and more - even
gobbling
up adjacent techniques and giving them new names (over any manner of academic
protest). That feedback loop isn't unbounded, but I guess I'm just sharing the
perspective that hype, while annoying, isn't something that even a grad
student can afford to disdain.

~~~
YeGoblynQueenne
>> At this point, I'm not sure whether you disagree with that, or just think
that some hype could afford to fade (no argument!).

I think I agree. I believe the hype is primarily driven by industry looking
for applications rather than researchers looking for, er, well, understanding,
hopefully.

>> Because of that, I was slower than I could have been to recognize the
feedback loops in what is "true" about applied AI. Deep learning's initial fit
for some architectures and problems has attracted attention that made it much
cheaper and easier to experiment with, and therefore useful for more and
more - even
gobbling up adjacent techniques and giving them new names (over any manner of
academic protest). That feedback loop isn't unbounded, but I guess I'm just
sharing the perspective that hype, while annoying, isn't something that even a
grad student can afford to disdain.

You're right, of course. Deep learning has earned its due respect, I think,
and although I expect the field to look for something new eventually, I'm
guessing that CNNs and LSTMs in particular will remain established techniques,
probably incorporated into other work. I mean, until some new technique comes
up that can match CNNs' accuracy but with much improved sample efficiency and
generalisation, CNNs are going to remain the go-to method for image
classification.

Like, I don't disdain deep learning; I did some work with LSTMs for my
Master's and I'm thinking of using CNNs for some vision stuff after my PhD (my
subject wouldn't really fit). It's just that there are so many people
publishing on deep learning right now that I don't see the point of joining in
myself.

~~~
evrydayhustling
Looks like you are doing some cool stuff based on learning of structures! And
indeed, the folks who are now celebrated for DNNs were doing the less popular
thing for 10-20y prior :) Best of luck on your research.

~~~
YeGoblynQueenne
Ah, I've been telling myself that, yes. Cheers :)

------
nopinsight
I added up the number of university researchers who publish in top AI
conferences (according to csrankings.org) from Jan 2017 to late May 2018.

[http://csrankings.org/#/fromyear/2017/toyear/2018/index?ai&v...](http://csrankings.org/#/fromyear/2017/toyear/2018/index?ai&vision&mlmining&nlp&ir&world)

Here are the approximate numbers (to 2 significant digits) of faculty
members/university researchers who published as above in each
country/continent:

US 770

Canada 92

Asia (including China) 340

China 240

Europe 280

Australia + New Zealand 86

South America 12

The world excluding the US 810

So the US is still far ahead of other nations/regions, but it now has a bit
below 50% of the world's university researchers who recently published in top
AI conferences. China as a country is close to Europe as a continent and its
number of published university researchers has increased rapidly in recent
years.
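
The "a bit below 50%" figure follows directly from the counts above:

    us, rest_of_world = 770, 810
    print(us / (us + rest_of_world))  # ~0.487, i.e. just under half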

The number of researchers is not weighted by the number of papers published,
but it is still useful because it counts how many people are capable of
advising graduate students to produce world-class research. Using the number
of papers instead would be confounded by how likely highly capable
international graduate students are to choose each program (in addition to
the researchers' capability); i.e., a university's reputation would have an
additional impact beyond its research capability.

~~~
gf263
Surprised at how high Canada is when you factor in that its population is only
4x NYC's.

~~~
ian
Hey, this is Ian Hogarth. I first presented a version of this essay at an
event at a place called Ditchley, which had brought together various ML
researchers and politicians from North America and Europe to discuss this
topic. One of the things that really struck me was the similarity between the
U.K. and Canada in terms of their depth of academic talent around ML but the
paucity of independent “domestic champions”.

~~~
dekhn
If you haven't, you should rewrite this essay from the perspective of Crypto
nationalism and how that turned out.

------
laichzeit0
Something tells me the people who write these articles have never read a paper
from e.g. NIPS (or pick any top-tier conference on cutting-edge research).
Heck, I would go so far as to say they don't even know how to write an image
classifier for MNIST using Keras if their life depended on it.
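
For reference, that classifier is about a dozen lines. A minimal sketch
(this sort of model typically lands around 98% test accuracy):

    # Minimal MNIST classifier in Keras - the "hello world" alluded to above.
    from tensorflow import keras

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)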

Universal function approximators are not about to take over the world.

~~~
canjobear
I'm always a bit confused when people call neural networks "universal function
approximators" as if that makes them trivial or weak.

Suppose we have a really strong universal function approximator - stronger
than current neural networks, whose generalization properties are not really
that great in the grand scheme of things as of 2018. Tell it to approximate
the action policy that maximizes some overly-simplistic geopolitical objective
function, like GDP or territory controlled at time t+1. It doesn't seem at all
obvious that this thing could not take over the world or at least cause
significant havoc if given sufficient resources.
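
To make "universal function approximator" concrete, here's a toy sketch
fitting a 1-D function. Universality says a big enough network can represent
the target to arbitrary precision; it says nothing about how hard a policy
like the one above would be to learn from data:

    # Toy universal-approximation demo: a small MLP fit to sin(x).
    import numpy as np
    from tensorflow import keras

    x = np.linspace(-np.pi, np.pi, 1000).reshape(-1, 1)
    y = np.sin(x)

    model = keras.Sequential([
        keras.layers.Dense(64, activation="tanh", input_shape=(1,)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=500, verbose=0)
    print(model.evaluate(x, y, verbose=0))  # MSE typically well below 1e-3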

~~~
eli_gottlieb
>some overly-simplistic geopolitical objective function, like GDP or territory
controlled at time t+1.

The hard part here is _writing down_ that objective function. Remember, an
AI/ML/cogsci algorithm is locked inside a black box, that being the hardware
it runs on. Any objective function for RL must be expressed as a _function_
(preferably a smoothly differentiable one, for gradient descent) of the sense-
data available to the agent and the agent's hypothesis class about the world.
Naive RL tends to optimize the function by systematically decorrelating,
wherever possible, the agent's sense-data and reinforcement signal from the
distal causes we intend them to represent.
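
A minimal sketch of that failure mode - every name below (Observation,
gdp_estimate) is hypothetical, purely for illustration:

    # The agent never sees "GDP"; it sees a reward computed from its own
    # observations.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Observation:
        economic_indicators: List[float]  # whatever the sensors report

    def gdp_estimate(obs):
        # a proxy: the mean of the observed indicators
        return sum(obs.economic_indicators) / len(obs.economic_indicators)

    def reward(obs):
        # The agent is rewarded for raising the *estimate*, not GDP itself.
        # Naive RL will happily find actions that inflate the indicators
        # (i.e. game the sensors) without touching the distal quantity.
        return gdp_estimate(obs)

    print(reward(Observation(economic_indicators=[2.1, 1.8, 3.0])))  # ~2.3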

~~~
AndrewKemendo
In my opinion, it's an impossible and undesirable task.

So, for example, try writing down the objective function for the general
intelligence we already have: humans. It's impossible, and doing so has been
the work of the fields of philosophy and economics since we started seriously
thinking about it.

~~~
eli_gottlieb
Ehhhh we're pretty close on that one, actually.
[https://www.nature.com/articles/s41562-017-0069](https://www.nature.com/articles/s41562-017-0069)

~~~
AndrewKemendo
Did you link the right article?

Nothing in that article discusses interpersonal neurological response systems
or anything relating to how mores and boundaries are created.

Seems like you're linking to something which may be on track to narrowing
down consciousness, which is a separate question - and one I also question
the benefit of caring about.

~~~
eli_gottlieb
>Nothing in that article discusses interpersonal neurological response systems
or anything relating to how mores and boundaries are created.

Well, we weren't _discussing_ social reasoning and behavior, so I linked an
article talking about the systems governing the brain's "objective function".

~~~
AndrewKemendo
In fact I was, but I wasn't explicit enough, so that's my fault.

My original statement should have been: "We can't model Humanity's Collective
Objective Function" - which is what would be behind what we are interested
in: stable, functioning multi-agent systems. I think EY took a crack at this a
long time ago and rightly abandoned the concept (see: CEV).

Even with that clarification I disagree with the premise that we can model an
"objective function" for an individual strictly in-vivo. Modelling an
individual agent's reasoning/function system doesn't account for the
environmental context it exists inside of, gives input into and responds to.
So even if it was possible to understand the mechanism for intra-personal
decision criteria, and I don't think it probably is, I don't think it's
generalizable without having the context of inputs.

Assuming that we could do this, I don't think you can extrapolate
intentionality directly from individual to collective groups - which for an
AGI is what is existentially important as it needs to be collectively general
to solve the existential problem.

I also don't think this is desirable as a framework for AGI - as humans,
despite our intelligent status, are quite unstable and sub-optimal in groups.

~~~
eli_gottlieb
>My original statement should have been: "We can't model Humanity's Collective
Objective Function" - which is what would be behind what we are interested
in: stable, functioning multi-agent systems. I think EY took a crack at this a
long time ago and rightly abandoned the concept (see: CEV).

If no such thing exists, then it was the wrong thing to investigate, so stop
being interested in it.

>Even with that clarification I disagree with the premise that we can model an
"objective function" for an individual strictly in-vivo. Modelling an
individual agent's reasoning/function system doesn't account for the
environmental context it exists inside of, gives input into and responds to.
So even if it was possible to understand the mechanism for intra-personal
decision criteria, and I don't think it probably is, I don't think it's
generalizable without having the context of inputs.

That's just an inverse reasoning/theory-of-mind problem, one that normal
theory-of-mind models and actual human brains solve every day.

>Assuming that we could do this, I don't think you can extrapolate
intentionality directly from individual to collective groups - which for an
AGI is what is existentially important as it needs to be collectively general
to solve the existential problem.

What's this about "collectively general" and "the existential problem"? You
seem to have gone off the deep end into philosophy salad.

>I also don't think this is desirable as a framework for AGI - as humans,
despite our intelligent status, are quite unstable and sub-optimal in groups.

Considering you don't seem to know much about how humans work and what causes
us to work well or badly in various situations, this statement comes off as
almost racist.

~~~
AndrewKemendo
Clearly you have a chip on your shoulder so it's not worth having any further
discussion.

~~~
eli_gottlieb
Well, clear communication is important. The more important something is, the
more clarity is necessary.

------
analyst74
> [I]f most countries will not be able to tax ultra-profitable A.I. companies
> to subsidize their workers...This kind of dependency would be tantamount to
> a new kind of colonialism.

This is an interesting observation; it is actually already happening now with
Internet companies, and to a lesser degree with physical product companies
with global reach (like Apple and Amazon). Money flows from local economies
into those companies, which don't pay much local tax nor create much local
employment. That could eventually drain the well dry.

Countries that suffered from colonialism have been catching up to the
developed world in terms of standards of living over the past 100 years. I
wonder if the above effect will reverse that course.

~~~
wsinks
So - we end up with Elysium?

------
mlthoughts2018
I prefer to think about this in a manner similar to Robin Hanson’s Foresight
Institute presentation regarding models for AGI timescales [0].

Basically, the component of this that says “but machine learning is different”
is still not convincing. The same nationalistic divides and concerns about
geopolitical backing for warfare tech that arose in response to nuclear and
chemical weaponry are likely to be high-fidelity models of whatever
geopolitical divide arises over machine learning weaponry.

I agree it will be a significant policy issue, but I do not agree it is very
related to the topic of AGI. Reasoning about it by studying how various other
tech arms races have unfolded in history will be a good, but not perfect,
model for how it unfolds for ML too. And the pieces where this time really is
different will be far smaller than the hype suggests.

[0]: [https://vimeo.com/9508131](https://vimeo.com/9508131)

------
wyck
So you're saying we are going to digitize emotional bias and pretend it's more
intelligent decision making, and then hand that over to a corporation. Sounds
like a winner.

~~~
nostrademons
That's a huge win. Consider that the economy today is built upon trading
emotional bias for labor-saving and calling it "employment". With AI, we can
get all the same emotional bias _and not have to pay them_. Plus they operate
millions of times more efficiently and never need to sleep or have a personal
life.

------
carapace
> Machine learning will enable new modes of warfare

Bucky (Buckminster Fuller) was confident that we could use computers to solve
our problems: we could enter all relevant data and the machine could compute
the optimal solutions for us.

The issue has always been ensuring we ask them to solve the _right_ problems.

If we use AI to tell us how long to imprison people (already happening) rather
than how to decrease recidivism, that's a meta-computer choice that _we_ made,
not the AI.

If we use AI to kill people, rather than to figure out how not to have to kill
them in the first place, that's also our choice.

Cf. "Wargames"
[https://en.wikipedia.org/wiki/WarGames](https://en.wikipedia.org/wiki/WarGames)
This was in '83!

~~~
carapace
I found the scene on youtube:
[https://www.youtube.com/watch?v=NHWjlCaIrQo](https://www.youtube.com/watch?v=NHWjlCaIrQo)

I'm bawling my eyes out right now.

"The only winning move is not to play."

------
angel_j
AI will have the biggest impact on who makes money and controls wealth. Before
any nation-state tries to take over the world with some kind of ultra-dominant
weapon, most large states will have to deal with their own populations, as the
rich control more resources.

The imaginary graph of ML technology that can be developed for destruction or
defense is fraught with inter-dependent paranoid scenarios. The use of ML for
the increase of human happiness is apparent and obvious. An ML arms race that
invokes conflict is going to be a huge waste of a nation's ML resources.

It would be much more productive to think about how ML/AI can be used for
egalitarian human prosperity (a la post-scarcity, etc.).

------
madmax96
A general critique: it's not _just_ about AI.

Computation, in general, is capable of solving many problems that afflict the
world - disease, hunger, resource allocation, etc. Some of these problems have
"conventional" computational solutions.

Fundamentally, there are two problems that must be solved. First, the ability
to compute needs to be perfected, meaning that massive computations (i.e. the
computations that solve massive, game-changing problems) can easily be
performed. Things like public clouds are solving that problem. Second,
computation needs to be applied to a problem. Statistical learning approaches
have become popular because they are relatively simple to apply and are
relatively successful. AI researchers tend to believe that AI is all that
matters, but obviously the success of AI is only possible with efficient
computation. Similarly, efficient computation alone is useless if that
computation cannot be used to solve actual problems.

Computation is to the 21st century as energy was to the 20th. The
ramifications of that statement are immediately obvious: consider the
petrodollar. Soon, computation will become a currency.

------
strken
Maybe it's not about AI as a strategic asset, so much as it's about private
sector data collection of the kind that AI can utilise as a strategic asset.
The ability to perform image recognition against user photos would be limited
to countries that host the headquarters of a large social network, for
example.

------
ButterflyWar
This is strangely relevant.

[https://archive.fo/zP1F1](https://archive.fo/zP1F1)

- The economics surrounding AI development favor those who can commoditize
data at the cheapest price. (Silicon Valley, militaries, and finance have AND
MUST MAINTAIN their influence over this commoditization.) This commoditization
requirement was once thought to be irreversible, allowing dumb money to buy
into the idea that “data is the new oil”, but Butterfly War shows how to
unexpectedly drive up the liability of a mass accumulation of data
commodities.

- Foreign actors and short sellers can now use derivations of the Butterfly
War to become market makers of the data economy, forcing the theory of “AI
Winters” to be replaced with a more predictive “AI Business Cycle”. (Do you
now understand why I went to Soros-influenced actors first?)

- This undesirable pressure, when paired with the institutional dependencies
of established AI infrastructure, will force a deeper consolidation of Silicon
Valley, military, and financial “cognitive assets”, which in turn will skew
the funding and purposes behind additional AI development to be more risk-
averse and conservative (from a power-preservation standpoint).

- The pressures to embrace “cognitive mercantilism” become irreversible.
Nations will aggressively retain talent and technologies for themselves to
improve their collective bargaining power on the international stage.
/pol/-tier nationalism finally has the footing to stifle its material
humanist opposition.

- AI development will enter an artificially induced “deep freeze” period,
similar to what happened to space exploration after the Space Race.

- The doctrine of Gnostic Warfare we develop today dominates in this period,
focusing primarily on the epistemological limitations of Deep Belief Networks
and, more precisely, how these cognitive assets define emotion.

------
BloodyHands
the idea that politicians are welcome in theology is ridiculous

> what is intelligent

> what is the set of all x such that x is intelligent

not political questions

