
The AI Threat to Open Societies - malloryerik
https://www.georgesoros.com/2019/01/24/remarks-delivered-at-the-world-economic-forum-2/
======
raz32dust
This reminds me of the "Do Artifacts Have Politics?" paper by Langdon Winner
[1]. He argues that technologies have inherent political traits.

Nuclear power is considered to be supportive of autocratic political systems
since nuclear power plants need centralized planning and networks to be
effective. Solar power is considered democratic since anyone can harness it.
It's an interesting paper and definitely worth a read.

On similar lines, I feel the internet is a democratizing force, since it
allows anyone to publish data and anyone to consume it, and is (somewhat)
difficult to control centrally. AI, on the other hand, is a centralizing
force, since the most powerful AI can be managed and powered by the most
powerful institutions.

[1]
[https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf](https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf)

~~~
joakinen
The internet is out of citizens' control. The web was such a democratic
space, but browsers narrow that space down because they are centralized
products. A DRM-only web (if it ever comes) will kill the web.

~~~
sgt101
I don't understand - why are browsers centralised?

~~~
nine_k
How many major browsers do you know? How hard is it to implement your own?
How hard is it to just read and roughly audit the source code? How much
control do you have over the features of the browser?

~~~
sgt101
Well, several are open source. I know that they have vast and complex code
bases and are extremely hard to understand or modify but it is possible and
diverse communities are working on them.

For me the centralisation is at the search engine and the content generation;
both have narrowed and narrowed and narrowed.

------
Animats
AI is only in a supporting role here. It's massive data collection and storage
at low cost that's the problem. Machine learning just helps to digest the
data.

Tech has solved the problem that defeated previous attempts at Big Brother -
you just couldn't afford enough watchers to watch everybody all the time.
Now, you can. It's even profitable.

~~~
est31
And tech will also solve the power problem of autocratic regimes. Right now, a
dictator can't control every single individual in the country on their own.
They need police that can search your apartment at 5am because you made a blog post
critical of the government. They need lawyers to convict you, prison guards,
etc. In such regimes, the dictator still has to put people into places of
power so that the will of the dictator is executed. But people can refuse
orders, and they can declare someone else to be president.

Coups are one of the biggest dangers for dictators, and many rebellions are
semi-coups where people in power just _step aside_, letting the rebels do
their thing.

Now, enter AI. Now the dictator could give all that power to an AI instead of
intermediaries. An entire government run by two entities: the Dictator, with
direct control over the AI that runs the remainder. All the bomb-equipped
drones, all the self-driving tanks, all the bipedal robots with their machine
guns. All the robot prison-guards and the robot judges to put critical people
into prison. If this AI answered only to the head of government, then any
kind of upheaval would become all but impossible.

~~~
crishoj
The exertion of control doesn’t even have to be blatant or physical. If social
credit is a determining factor in an individual’s ability to function
effectively in society, e.g. by way of credit ratings and access to opportune
employment, there’s a clear risk of increased self-censorship and conformance,
thereby debilitating serious political opposition even in its infancy.

~~~
speedplane
If this social credit or indirect influence is perfectly fair, it's not really
a problem. However, it's very likely it will be unfair to some, and create
tension and unrest that will lead to the same problems we have without AI.

~~~
pimmen
I would argue it's unrealistic to make such a system that would be fair in the
eyes of all coming generations. Imagine they implemented a social credit
system in the Middle Ages and added or deducted points based on whether or not
you followed the laws and norms of that era.

~~~
speedplane
> I would argue it's unrealistic to make such a system that would be fair in
> the eyes of all coming generations. Imagine they implemented a social credit
> system in the Middle Ages and added or deducted points based on whether or
> not you followed the laws and norms of that era.

The Middle Ages basically had exactly that system, but the Enlightenment
still occurred.

------
austincheney
History has already answered this numerous times. AI is a tool much like a
semi-automatic rifle or the printing press. The people who understand it well
have an accelerated advantage over those who do not. It is both a good and bad
thing depending upon who wields the tool. Like any force multiplier it will be
misunderstood (magic) and improperly regulated by both open and closed
societies alike.

Like with any tool the real victors will be those who adapt it to solve basic
immediate problems: a utility.

~~~
intended
The question people need to address, is whether AI is a tool like the
evolution from the Gun to the automatic rifle.

OR whether AI is the kind of tool like flight, which made fortresses
redundant.

We have many areas of thought and society, which have been protected by walls
of 'difficulty' making them intractable to large scale, automated and
effective manipulation.

This may no longer be the case, and AI may well represent the more fundamental
change of the second kind.

~~~
austincheney
Why must people know those things? As with any new technology the second and
third order consequences are unclear and will likely be entirely surprising
beyond any intended (or not) consequences.

Here is what my own experience in programming and my understanding of history
have taught me. A sufficiently advanced understanding of a useful technology
(doesn't even have to be new) allows a small group dominance over spaces
traditionally controlled by large organizations or heavy investments in large
technology. This causes fear and panic when it becomes clear. The common
current solution is to attempt to purchase the competition, but what if the
competition is not available for purchase?

Perhaps the most common fallacy regarding new technology is that you can
destroy the competition with a sufficiently large force. History has proven
this false numerous times and it remains false with modern technology.
Consider the Battle of Agincourt, the Battle of Crécy, or the establishment of the
Yuan dynasty. Consider how much current corporations are investing in AI
research with thousands of dedicated developers.

A superior technology can easily compete with Amazon's AI unit, for example,
even when your team of 5 developers is up against their 5000. The reason is
that while it takes time to learn and develop the superior understanding
necessary to create the superior technology, the army of competing developers
will struggle to abandon their perceptions of reality even when confronted
with critical evidence. An army of 5000 developers is an establishment of
culture, practices, identity, and perceptions resulting in a large, stagnant
wall.

------
noahmbarr
Full video of his speech. His delivery adds to the message:

[https://m.youtube.com/watch?v=7ZGoXP-BWoc](https://m.youtube.com/watch?v=7ZGoXP-BWoc)

~~~
nabla9
The transcript of the speech.

[https://www.georgesoros.com/2019/01/24/remarks-delivered-at-the-world-economic-forum-2/](https://www.georgesoros.com/2019/01/24/remarks-delivered-at-the-world-economic-forum-2/)

------
bryanrasmussen
Some thoughts I just had, and haven't really developed yet beyond the initial
moment of having them:

It used to be you would see theories about the internet's threat to closed
societies, and I suppose those threats are real too. But perhaps the threats
to closed societies are the obvious threats, while the threats to open
societies were not immediately obvious. My thinking on this is somewhat
murky, but if there are a bunch of obvious threats and a bunch of hidden
threats, then I guess people guard against the obvious ones.

Finally maybe anything that threatens one type of human society must also
threaten all types, only the threats are changed around for each type.

~~~
pjc50
Open societies are threats to closed societies, and vice versa. An open world
is stable; a world of closed societies is also fairly stable internally,
although much more likely to go to war with each other.

The idea of "the end of history" was that Open had won, and it was just a
question of mopping up the remaining closed societies. It turns out that maybe
the open societies weren't as open as they thought.

~~~
luckylion
> It turns out that maybe the open societies weren't as open as they thought.

Or that an "open world" isn't as stable as predicted.

Would we even have the same "open" societies without China's repressive
society and their willingness to finance western consumption?

------
quantum_state
It is self-evident that labeling any knowledge, technology, or artifact as
being against the open society is complete nonsense and contradicts the very
concept of an open society itself.

~~~
paganel
It’s not nonsense, as you cannot carry out mass-killing events like the
Holocaust or the Gulag without a modern-ish technology like the railway
system. Or, to go directly to the source, let’s use Goebbels [1], as he
explained that without modern technologies like the radio and the airplane,
the Nazis’ ascent to power wouldn’t have been possible:

> It would not have been possible for us to take power or to use it in the
> ways we have without the radio and the airplane. It is no exaggeration to
> say that the German revolution, at least in the form it took, would have
> been impossible without the airplane and the radio.

[1] [https://research.calvin.edu/german-propaganda-archive/goeb56.htm](https://research.calvin.edu/german-propaganda-archive/goeb56.htm)

~~~
lucio
So? Let's make any modern-ish technology illegal?

~~~
kazagistar
Let's make leveraging technology to harm or control those who cannot
effectively wield it illegal, or at least compensate for such power
imbalances.

------
DyslexicAtheist
Jacques Ellul makes very similar points in his work _La Technique_ (The
Technological Society). He traces how technology and power interplay from the
invention of the pocket watch and the steam engine all the way to computers.
It's one of the best works I've come across in this space and a must-read for
technology critics and advocates alike. I wish it were (even) more widely
read here[1] than it is, but finding it in languages other than English or
French might prove a challenge. His follow-up work "Propaganda: The Formation
of Men's Attitudes" is also phenomenal. These two books are among the best
books I discovered in 2018 (if not in the last decade).

[1]
[https://hn.algolia.com/?query=Jacques%20Ellul&sort=byPopular...](https://hn.algolia.com/?query=Jacques%20Ellul&sort=byPopularity&prefix&page=0&dateRange=all&type=story)

------
raverbashing
Even though the reason why the author holds these positions is perfectly
understandable, I don't subscribe to his views, and there are issues with the
complete liberalization of movement across borders.

Fundamentally, AI is not a threat or a blessing, it is a technology.

------
Aeolun
I think the biggest threat to open societies is that nobody will be able to
read the warnings against dangers if they don’t have money to subscribe to a
thousand publications :/

------
renholder
Perhaps this is my pessimistic view, but wouldn't _any_ government that
monitors its citizens for the potential of being "against" the government
(thus threatening that government's power) be authoritarian?

Let's take the second amendment in the states for an example. The right to
bear arms is meant to stop oppressive government but you cannot purchase anti-
aircraft or anti-tank or anti-anything-really weapons. So, the might of the
government's force is disproportionately in favour of the very government that
the amendment is proposed to prevent oppression from, yeah? (Not that I'm in
favour of overthrowing the government of the states, whatsoever; this is just
an observation.)

When the Snowden revelations came about, this was the equivalent of China's
list of people to send for "re-education". Granted, in the States there's
been no re-education (that we're aware of), but it's only a small step from
taking the information gleaned from those "lists" to doing just that.

At what point do we draw the line between authoritarian and not? Shouldn't the
very notion of putting people on a list count? After all, it was a list of
people that suffered the consequences of the Night of the Long Knives, was it
not?

I fail to see how an open society would be considered "open", if it includes
secretive programs like that. Isn't the principle of secret programs against a
government's citizenry the antithesis of an open society?

Maybe I'm missing something here but to say that there are any open societies
left (while probably dystopian in nature and bereft of any hope of the future)
is beguiling the very fact that there doesn't seem to be very many (if any at
all) open societies left.

(I'm probably talking bullshit circular logic, so feel free to ignore this
tirade of discontent.)

~~~
DougN7
I think maybe you’re comparing “open society” to utopia. There will always be
bad actors that have to be accounted for and handled in some way. What is bad
and how bad is always up for debate, but their existence will always be with
us. In my opinion that requires lists. I do agree that most countries
including the US have gone too far from what I’ve read.

~~~
renholder
Aye, your argument has merit. I was just looking at it from a perspective of
whether or not an open society is what we're in, to begin with, to argue that
other societies should be more "open". It would be hypocritical to say
something of that sort if not; which is something that Soros seems to hint at
with his posit (e.g.: corporations and governments combining to create a far
more in-depth surveillance state).

To give a principled example, the <insert the appropriate three-lettered
agency here> keeps a list of anyone who even looks-up things like TOR or
TAILS. Users have their own reasons for looking the information up, and those
are notwithstanding the fact they're automatically on a list, simply for doing
so.

I get bad actors are a part of reality (let's be honest: they always have
been) but, as I proffered originally, at what point do we decide a society is
no longer open?

For example, if those who participated in Occupy[0] were on a list maintained
by their respective government[s] - just for exercising their democratic right
to protest - then should we consider any society open anymore? After all,
it's no secret that those lists themselves are shared amongst communities
like the Five Eyes.

Of course, the best example would be the no-fly lists but given that those are
black box, in and of themselves, should that be demonstrative of no longer
being an open society?

The O.G. comment was merely conjecture, out loud, as it were; just as this
comment is... Though, I'm still left bereft of what constitutes an open
society, anymore, since that definition seems to have drifted in modern times.

(Of course, maybe I _am_ obfuscating the definition of an open society with
the precept of a utopia but maybe that's my stubborn hopefulness for a better
future than the one that I know that's coming...)

[0] -
[https://en.wikipedia.org/wiki/Occupy_movement](https://en.wikipedia.org/wiki/Occupy_movement)

------
VvR-Ox
You post an article about open societies and then I can't access it without
subscribing - LOL

Just my opinion then, without reading it:

- In China we already see what a government can do with technologies,
including AI, to control the behaviour of people.

- The West always seems to think it's not going to be that evil around here
because people are morally superior. I don't believe that: with Hitler in
Germany we saw how quickly things can change, and with refugees and poor
citizens in all those countries we see how badly people can treat others and
still think this is correct (even without an AI they'd believe blindly).

- Too many techies have no morals or ethics. While studying and also in
business, I saw most people being interested only in personal welfare and
earning the most money they can.

TL;DR: This (whatever 'this' will be) is definitely going to happen if not
enough people find their way back to humanity - with or without the support
of IT/AI/... (just tools)

~~~
malloryerik
Btw you can read the article for free by registering, though it's a hassle.
Might be easier to read from some of the alternative links with the same
speech posted by others here:

[https://www.wired.com/story/mortal-danger-chinas-push-into-ai/](https://www.wired.com/story/mortal-danger-chinas-push-into-ai/)

[https://www.georgesoros.com/2019/01/24/remarks-delivered-at-the-world-economic-forum-2/](https://www.georgesoros.com/2019/01/24/remarks-delivered-at-the-world-economic-forum-2/)

~~~
VvR-Ox
Thank you very much :)

But that's exactly what I dislike: for everything you wanna do on the net
you're forced to register. I just can't see it anymore and instantly close
stuff that behaves that way.

------
ngcc_hk
It is not AI. It is the Chinese Communist Party.

Otherwise we might as well have a paper called “The Internet Threat ...” or
“The Pen and Paper Threat ...”

America fought the Soviet Union, but is OK with a communist party in China,
with their all-in presence everywhere in society (and copying things on top
of contributions) ... it might be too interwoven and too big to ...

Good luck.

~~~
raverbashing
Also the US antagonized the Soviet Union but ultimately it fell by itself.

Remains to be seen what will happen in the Chinese case.

~~~
scottlocklin
The Soviets dumped all their resources into war-economy output. For the
Chinese, consumer goods sold to the world pay for their war machine, which is
basically the strategy the US used to have.

------
hilbert42
I've been interested in science and engineering since my youngest days and
I've always considered myself a hacker from way back. At school, my fellow
schoolmates nicknamed me 'The Boffin' as back then the terms 'hacker' and
'nerd' hadn't yet been coined. My profession is electronics engineering and IT
and for my entire career I've followed and worked with the latest developments
in the field. Right: I'm an insatiable technophile!

My other studies were in philosophy (ethics, etc.) and government and over the
years I've found my formal training in them truly invaluable, they've
broadened my perception and worldview about the ways science and engineering
dovetail into society and make the world a better place by improving the lives
of its citizens.

I have to agree with the tenet of George Soros' message for many reasons but
from my perspective perhaps the most significant one is that we are moving at
a frenetic pace headlong from an industrial age into a post industrial one
that's driven by advanced technologies (and primarily through the use of
information). We're entering a new era whose paradigms will have morphed into
ones so very different from anything humankind has ever before witnessed and
the changes are coming so very fast that they'll almost certainly cause fear
and social disruption on an unprecedented scale unless we act now to adapt
technology to our human needs and not those of governments and large
multinational corporations—after all, they ought to be our servants, not vice
versa as it is at present.

At present, society is both ill prepared and ill equipped to handle monumental
changes of such a magnitude without considerable preparation, and we've hardly
even begun to discuss the matter let alone draw up viable plans for society to
adapt to them.

Leaving ML and AI aside for a moment, let's just look at the metaphysical†
aspects of the Google/Facebook revolution. Both behemoths, but especially
Facebook, are floundering in the mire over very important issues such as those
concerning privacy, fake news, damaging effects on democracy and politics in
general, and there's precious little light on the horizon to shine upon any
potential solution let alone any commonly-agreed methodologies or viable
options.

Let's look at what has effectively happened here: internet technologies
evolved to a stage where worldwide networks such as Facebook became feasible
and thus they were built without any real thought of the wider social
consequences other than the paramount need to make money. Zuckerberg et al
would like us all to believe that they had actually executed both their
financial and social objectives as they'd planned but as we now know this is
far from being the full truth.

Not only did Big Tech companies have secret plans all of their own with the
deliberate intention of exploiting users but they kept these intentions hidden
from both governments and users alike thus no independent scrutiny was
possible until the inevitable leaks occurred. The lesson from this is that
with no oversight, undesirable metaphysical effects arose from their complex
systems the consequences of which have come back to bite them. Inevitably,
this will happen again and again with ML and AI unless careful and
sophisticated (and mandatory) regulation is introduced. To think otherwise
would be foolhardy in the extreme.

It's clear to many that these 'geniuses' of Big Tech would have been fully
cognizant of and understood how new physical properties often emerge from
complex systems that are not foreseen from just examining their less complex
building blocks. Moreover, similar but metaphysical processes evolve in human
minds when they encounter complex systems. For instance, examining fine
architecture brings an aesthetic experience to humans that no examination of a
brick to the nth degree reveals. Therefore, there can be little if any excuse
for Zuckerberg and his cronies for not anticipating in advance emergent human
problems (such as those that have arisen from the Cambridge Analytica fiasco).

When in 1847 Italian chemist Ascanio Sobrero* invented nitroglycerine and
immediately perceived its extreme dangers he became so scared and concerned
about what he'd actually done that he kept the fact secret for over a year.
However, unlike Sobrero who clearly had ethics on his side, the likes of
Zuckerberg et al never gave any serious consideration of the consequences of
their 'inventions'. As day follows night, they were expecting human problems
but they simply ignored them until it was too late. Their lack of concern for
humans—the hands that actually feed them—is palpable in the extreme; ethically
and morally they're bankrupt.

As history illustrates yet again, we're now well past the point where it's
safe to leave extremely powerful technologies in the hands of political
novices who possess so precious few ethics—or whose few ethics are easily
trumped on by their zealotry for certain technological fixes and or financial
objectives. The fact that they may be the inventors or owners of newer
technologies such as Facebook is irrelevant; what matters first and paramount
is what is best for the citizenry and society at large.

The Google, Facebook et al cases ought to have been a non-starter from the
very beginning, as the general will of the populace should have nailed them
dead from the outset, but it never happened for many reasons, including the highly
addictive properties that Big Tech deliberately designed into their pernicious
technologies. Tragically, over the past 40-50 years or so, many traditional
ethical values which would have put the kibosh on these Tech Giants long
before they'd gotten started have largely evaporated as our societies have
become more homogeneous and international—nowadays, the lowest common ethical
denominator is just that—pretty low.

Given that societies are still struggling with very basic ethical issues,
such as the withering of our hard-fought democratic processes and the rise of
totalitarian power from both governments and Tech Giants, we're not even at
ground level when it comes to solving the ethics of ML and AI. For starters,
there are serious cultural differences (hence little or no agreement) over
how to resolve the infamous trolley car/moral dilemma problem‡. At present,
it is abundantly clear the various societies of an international world are
not able to reach a common worldview or consensus on this conceptual problem,
let alone a specific ML/AI incarnation thereof; consequently we have precious
little hope of solving the even greater moral and ethical dilemmas that
undoubtedly will be created by these fast-advancing technologies.

It seems to me the very first steps must be taken to forge a common moral and
ethical consensus for humankind. We need to first begin with the easiest
problems to agree upon such as the inviolability of human life and then work
upwards. Expect this to take a long time and it will. Of course, the huge
dilemma is how to hold technologists and technocrats sans ethics (and common
sense) at bay whilst various consensuses are being reached.

I am strongly of the opinion that (as I was fortunate enough to experience)
we should begin by ensuring that core training for all engineers, scientists,
technologists and technocrats—and for that matter, politicians—also includes
compulsory training in key philosophical subjects, especially ethics, moral
philosophy and formal logic, as well as basic/essential political science
(the study of government).

I'm realistic enough to realise that despite such ethical studies being both
core and compulsory, there is every chance that they will have only a minor
impact in changing human nature, if anything at all (at least in the
beginning).
Nevertheless their compulsory nature will achieve one major objective which is
that every engineer, scientist and technologist and technocrat will be forced
to learn the essentials of morals and ethics as they should be practiced in
our increasingly technological societies.

Thus when their technologies go belly-up and damage both societies and
people's lives, with compulsory training in ethics under their belts, the
Zuckerbergs of this world will no longer be able to claim ignorance as an
excuse for their negligence; they will not be able to say that they 'did not
know' or that 'we
never considered that outcome'. The only likely excuse that they'll have left
to argue is that of 'force majeure'—and it had better be a pretty good
instance thereof or they'll be toast. …And good riddance.

Good effort George, keep the pressure up.

_____________

† As many will be aware, the uncomplicated definition of 'metaphysics' is
'above and beyond physics', that is to say, ontological a priori deductive
concepts of existence, of being, of becoming, of reality, etc. As far as
physics is concerned, metaphysics deals with ethereal, intangible concepts
that are inconsequential to its laws, but they are nevertheless key to human
existence as we know it: what it is to be human, our values, beliefs and
ethics are metaphysical.

‡ The Moral Machine experiment, Nature, vol. 563, pp. 59-64, 2018-10-24
[https://www.nature.com/articles/s41586-018-0637-6](https://www.nature.com/articles/s41586-018-0637-6)
Especially note the graphs in Fig. 3: 'Country-level clusters'.

* Incidentally, Alfred Nobel was a student of chemist Ascanio Sobrero.


------
StreamBright
It is kind of funny how this article focuses on China and totally leaves
surveillance capitalism (Google, Facebook) out of the picture. If this
article were unbiased, then we would see examples of how the AI threat
impacted the last election in the USA as well.

~~~
IfOnlyYouKnew
It’s probably presented in such a way to be convincing to the maximum number
of people. Being addressed to an English-speaking audience that tends to
already be suspicious of a Chinese grab for economic power, that serves as
common ground to establish rapport with the audience.

This being Soros, any connection to the US election would already doom his
message to be disregarded, or even taken as evidence for its opposite.

------
m3nu
Any non-paywalled version?

~~~
est31
[https://www.wired.com/story/mortal-danger-chinas-push-into-ai/](https://www.wired.com/story/mortal-danger-chinas-push-into-ai/)

~~~
crishoj
Mods, please consider changing the link to this.

[edit] From the speaker himself:
[https://www.georgesoros.com/2019/01/24/remarks-delivered-at-the-world-economic-forum-2/](https://www.georgesoros.com/2019/01/24/remarks-delivered-at-the-world-economic-forum-2/)

------
hoaw
I disagree with the premise. The threat from AI, while serious, isn't
immediate compared to the threat of using things like AI to justify
inequality. You want to tell me that the global elite suddenly became
concerned about relatively esoteric technology, to the point where common
politicians talk about it like it is the tax rate? And that this just happens
to coincide with the effects of globalization, crony capitalism, and extreme
rent-seeking? If the excuse wasn't AI, it would be something else.

------
microdrum
When Trump and Soros sound the same, they're probably right.

------
buboard
Here is an idea: instead of sacrificing AI, maybe we should sacrifice
organized governments.

~~~
beefield
It has been tried many times in world history and is currently being tried in
some parts of Somalia. Most people do not prefer it, so if you do, maybe you
should move there?

My guess is that we are about to see "No True Scotsman" very soon after I post
this.

------
6d6b73
That's funny coming from George Soros, especially as it was delivered at
Davos.

