
The Digital Maginot Line - crunchiebones
https://www.ribbonfarm.com/2018/11/28/the-digital-maginot-line/
======
gervase
Everyone is focusing on the Maginot Line metaphor, and possible historical
inaccuracies in what the author wrote. However, the important word from the
title is not "Maginot", it's "Digital".

In reading the author's assessments of a wide range of topics, including free
speech, digital militarization, and more, I found quite a lot of valuable
takeaways in their perspective, even for a reader who doesn't agree with 100%
of the content.

However, if you are finding yourself roadblocked by the comparison to the
Maginot Line, I encourage you to disregard this portion of the text and skip
past it; the loss of the metaphor is not fatal to the rest of the content.

~~~
skybrian
The other dubious metaphor is "war". A propaganda campaign isn't itself a war,
even if we're calling all sorts of things wars to make them scarier.

He's a good writer, but you do need to think about the framing and decide
whether it makes sense.

~~~
lloeki
> War is a state of armed conflict between states, governments, societies and
> informal paramilitary groups, such as mercenaries, insurgents and militias.
> It is generally characterized by extreme violence, aggression, destruction,
> and mortality, using regular or irregular military forces.

It's a war[0] for sure, just not one waged with physical armament[1], and thus
without an obvious "mortality" part, although I've seen people and
relationships destroyed by those underhanded manipulations[2]. But every
single word in the definition applies to the current global situation in an
immaterial, digital context, with economic, physical, and psychological
casualties (from the alteration or destruction of rational thinking to
outright PTSD).

Democracy is being gradually infected, corrupted, exploited, and ultimately
destroyed by the dwindling costs of propaganda through weaponisation of modern
information channels.

[0]: [https://en.wikipedia.org/wiki/War](https://en.wikipedia.org/wiki/War)

[1]: [https://en.wikipedia.org/wiki/Information_warfare](https://en.wikipedia.org/wiki/Information_warfare)

[2]: [https://africatimes.com/2018/11/25/beyond-paris-the-gilets-j...](https://africatimes.com/2018/11/25/beyond-paris-the-gilets-jaunes-protests-continue-to-rile-reunion/)

~~~
skybrian
Why do you post a definition of war and then ignore it?

------
pjc50
Key pull quote rather than arguing over the historical Maginot line:

"This particular manifestation of ongoing conflict was something the social
networks didn’t expect. Cyberwar, most people thought, would be fought over
infrastructure — armies of state-sponsored hackers and the occasional
international crime syndicate infiltrating networks and exfiltrating secrets,
or taking over critical systems. That’s what governments prepared and hired
for; it’s what defense and intelligence agencies got good at. It’s what CSOs
built their teams to handle."

~~~
EthanHeilman
I think using the Maginot Line as a metaphor here is actually correct.
Preventing an adversary from doing X means the adversary does Y instead. That
doesn't mean spending time and money on preventing X is bad; it just means
that no matter how much you plan, you will always be unprepared for something.
Develop flexible plans and general capabilities.

Given the choice between protecting ICT networks and protecting social
networks, I'm glad we focused on ICT networks. That isn't the same as being
invulnerable.

------
elvinyung
The thing that worries me even more is that more and more diverse types of
content will be susceptible to algorithmic amplification in more and more
sophisticated ways. Right now it's still relatively easy to tell what is a bot
and what isn't because it's hard to generate credible text and video (and many
other kinds of) content, but that's rapidly changing. It's a game of cat and
mouse clicks.

This pervades every single sphere of life. Soon, it will become more and more
difficult to distinguish the truthfulness of digital representations. In
Baudrillard's words, I suspect that big data is becoming _hyperreal_. It is
not that data is misrepresenting reality, but that it is actively influencing
it, changing it to fit its representations.

Furthermore (and somewhat orthogonally), distinguishing fake content is
getting harder because you don't even necessarily need bots (technological
capital). If I started a restaurant, wanted to acquire the maximal amount of
attention, and had no moral qualms, I could hire 10,000 MTurks (or
MTurk-equivalents) to write reviews for me on Yelp, and they would drive a
positive feedback loop of legitimate good reviews. (This is exactly what's
happening with Amazon rankings and reviews.) Amplification isn't just
algorithmic; it can be produced by the application of many different kinds of
capital.

~~~
zby
We need to examine our assumptions: why is human content supposed to be good
and bot content supposed to be misleading? We assume that people have good
intentions - but, as you note, this easily breaks down if evil organizations
can pay people to create the content.

In the past, PageRank (partially) solved this content problem - but now HTTP
pages are not the important nodes any more.

I would welcome new developments in PageRank-like algorithms - but more
localized ones, because smaller networks are easier to police and evolve.

Unfortunately, it looks like
[https://en.wikipedia.org/wiki/Advogato](https://en.wikipedia.org/wiki/Advogato)
is dead now.

~~~
gambler
_> I would welcome new developments in PageRank-like algorithms_

PageRank-like algorithms are not the solution, they are the problem. They
reward popularity with more popularity, enabling viral content and encouraging
stuff like link spam and outrage click-bait. Worst of all, they normalized the
idea that what you see in search results (and later, timelines and feeds)
shouldn't be based on your query and the author's page content, but rather on
what _everyone else_ wants to see.
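
To make that feedback loop concrete, here's a minimal power-iteration sketch
in the spirit of PageRank - the toy graph and every name in it are invented
for illustration, not taken from any real ranking system. Pages that are
already heavily linked accumulate score, which they then pass on to whatever
they link to.

```python
# Minimal power-iteration sketch of a PageRank-style ranking.
# Illustrative only: toy graph, invented names, no real search engine involved.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                       # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:            # popularity begets popularity
                    new_rank[target] += share
        rank = new_rank
    return rank

# "viral" is linked from everywhere, so its score snowballs past the others.
graph = {
    "viral":   ["blog"],
    "blog":    ["viral"],
    "forum":   ["viral", "blog"],
    "archive": ["viral"],
}
print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))
```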

~~~
ryandrake
There are lots of current-day Internet-scale problems where, in order to solve
them, someone is going to have to figure out how to overcome people's natural
tendency to mistake popularity for quality. This conflation compromises all
kinds of rankings, from search results to Yelp and Amazon reviews to tweets.
Even here on HN, "number of upvotes" (popularity) is an unsatisfying proxy for
a comment's quality.

~~~
harimau777
I think part of the problem is that in many circumstances, such as Yelp and
Amazon reviews, popularity is the best proxy for quality that is currently
available. Perhaps if other ways for people to quickly distinguish quality
were developed then people would start making better decisions.

------
vinceguidry
The following is a paraphrase from a Quora answer. Sadly, the source has since
left Quora and I've been unable to locate it. Here's what I have from memory.

French military planners did not believe that the Ardennes was impenetrable.
Tanks were used in WW1 and were considered a grave threat; the whole point of
the Line was to defend against a mechanized assault force. The battle plans
they drew up had a variation extending the Line to cover the forest. It was
dropped when a deal was reached with Belgium so that they could cover that
part of the defense. Years later the deal was forgotten, and so the forest
wasn't defended.

A French division was stationed at the forest, but it was recalled over cost
concerns. It wasn't a failure of military planning; it was a failure of the
government to follow through.

------
travisoneill1
The problem with fighting disinformation is that it requires an entity with
the authority to stamp something as true or untrue. This entity will be made
up of humans, and therefore susceptible to disinformation in the same way as
the people it is intended to protect, and also prone to abusing its power. If
the choice is between the order of one centralized authority determining truth
(China) and the chaos of thousands of competing entities, I'll take the chaos.

~~~
elvinyung
There Are More Than Two Things [1]. It's a spectrum, not a dualism.

Which is basically to ask: does it work to have, for example, multiple
semi-competing Ministries of Truth?

[1]: [https://twitter.com/_shyextrovert/status/1046004096820539393](https://twitter.com/_shyextrovert/status/1046004096820539393)

~~~
travisoneill1
Then you have the problem of who gets to decide who gets to be a ministry of
truth. The entity with that power is now the one centralized ministry of
truth.

~~~
elvinyung
Well, that's just one possibility in a space of many.

The core of the point, reiterating from the article, is this: obviously a
ministry of truth is bad. But isn't the chaos just as bad, since it is
susceptible to being dominated by an "implicit" equivalent of the ministry of
truth?

To what extent are people really deciding for themselves when they think
they're deciding for themselves? If freedom is our highest ideal, how do we
empower the maximal number of people to make decisions for themselves without
these influences?

~~~
travisoneill1
> To what extent are people really deciding for themselves when they think
> they're deciding for themselves?

To no extent. All decisions are made with some outside influence. This is true
now and has always been true. Unless you live as a hermit your thoughts are
influenced by other members of society.

> If freedom is our highest ideal, how do we empower the maximal number of
> people to make decisions for themselves without these influences?

We don't. How can you eliminate outside influence if you are under outside
influence yourself?

~~~
elvinyung
Exactly -- then how can you claim that the "chaos" of an implicit outside
influence is better than an explicit outside influence?

------
jancsika
> If combatants need to quickly spin up a digital mass movement, well-placed
> personas can rile up a sympathetic subreddit or Facebook Group populated by
> real people, hijacking a community in the way that parasites mobilize zombie
> armies.

Why do we and the author seem to keep dancing around the fact that Facebook
_must_ be refusing to garbage-collect bogus nodes on its own social network?

The company presumably still has access to its own social graph, to reams of
data from site/app/like button/audio/webcam/key&mouse events, and whatever
else it purchases from third parties. How in the world can a troll farm
employee proxied through a hacked router in Branson evade detection? An
employee who isn't cross-referenced in photographs, videos, phone calls, PMs,
contacts, browsing patterns, games, purchase histories, smart tv ad beacons,
and whatever else Facebook sniffs up?

In light of that, the article seems like a distraction. While I don't know how
to get social media companies to garbage-collect their graphs, I know that if
they did, it would diminish the fragility of the system. If that raises the
cost of subsequent information attacks such that they must happen in a domain
other than the social graph, that's not a Maginot Line. It's an improvement.

Edit: the same logic applies to the input coming in from ads. Facebook
presumably has data just as rich on the organizations buying them as it does
on its userbase. Prune the fronts, and information attacks again move to a
different domain.
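
To make the "garbage collection" framing concrete: conceptually it's just
pruning nodes whose existence isn't corroborated by independent signals. A
minimal sketch, with entirely invented signal names, thresholds, and accounts
(nothing here reflects what Facebook actually does):

```python
# Hypothetical sketch of garbage-collecting a social graph: drop accounts
# whose activity is never corroborated by independent signals. All signal
# names, thresholds, and accounts are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Account:
    friends: set = field(default_factory=set)
    corroborating_signals: set = field(default_factory=set)  # e.g. {"tagged_photo"}

def garbage_collect(graph, min_signals=2):
    """Keep only accounts with enough independent corroboration; prune edges too."""
    keep = {uid for uid, acct in graph.items()
            if len(acct.corroborating_signals) >= min_signals}
    return {uid: Account(friends=graph[uid].friends & keep,
                         corroborating_signals=graph[uid].corroborating_signals)
            for uid in graph if uid in keep}

graph = {
    "alice":   Account({"bob"}, {"tagged_photo", "purchase", "checkin"}),
    "bob":     Account({"alice", "troll_7"}, {"tagged_photo", "group_chat"}),
    "troll_7": Account({"bob"}, set()),   # no independent corroboration at all
}
print(list(garbage_collect(graph)))       # ['alice', 'bob']
```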

------
austincheney
People are generally gullible. Most people fear originality, strive to
maintain commitment, and conform to a group narrative. These are all
irrational noncognitive behaviors strongly aligned to perceptions of security.
It’s hard to fix stupid.

Before people obtained most of their news from social media, they were getting
it from pseudo-journalism comedy television. The primary motivation of the
supplier then was advertising revenue, just as it is for social media now.

~~~
pure_ambition
A little presumptuous to say group narratives and commitment are stupid. These
are components of shared understandings of reality, which is what allows
society to function and people to communicate.

~~~
austincheney
A lack of self-awareness is the basic definition of stupid, no matter how
offensive that may sound.

------
fouric
The article first says that "Academic leaders and technologists wonder if
faster fact checking might solve the problem", implying that false information
is not the problem, and then that "the human mind is the territory". If you're
fighting over people's minds, and using factually true information, then what
is the problem? That sounds like logical persuasion to me.

(obviously, I'm missing some nuance - I was hoping that a more intelligent HN
denizen would be kind enough to point it out to me)

~~~
TeMPOraL
> _if you're fighting over people's minds, and using factually true
> information, then what is the problem? That sounds like logical persuasion
> to me._

The problem is that facts are almost entirely irrelevant in this fight. For
most humans, truth only matters where the impact of being wrong is immediately
and painfully visible. In all other cases, beliefs are primarily social
objects - means for maintaining relationships with people, participating in
groups, and signalling values[0]. Beyond false information, a lot of words are
put online to mess with people's epistemology - the framework we use to
determine which beliefs are accurate. The most visible result is people
ignoring facts staring them in the face, with a million excuses to believe
what they want to believe.

You can't win a war with facts, when people don't agree on the most basic
concept that facts should determine their beliefs.

--

[0] - If you don't believe this, consider all the many little things that you
know your family & friends are wrong about, but that you deemed not important
enough to correct those people on.

------
4bpp
> Our most technically-competent agencies are prevented from finding and
> countering influence operations because of the concern that they might
> inadvertently engage with real U.S. citizens as they target Russia’s digital
> illegals and ISIS’ recruiters.

The author seems to have fully bought into the anti-Trump narrative that the
only problem with the bots is that they were /Russian/ bots, and that things
would turn out all right if only the US developed a reliable mechanism to keep
them foreigners (Russian bots, ISIS recruiters...) out of its internet; I'm
not sure whether the use of the term "digital illegals" here is to be taken as
a sign of self-awareness of just how much this mentality mirrors "build the
wall" and Brexit and what-not.

Is there any reason to assume that there isn't plenty of potential and will
within the borders of the US to attack its society's ability to "operate with
a shared epistemology"? I remember debates between internet creationists and
evolutionists back in the '90s, and they looked exactly like red-blue debates
look nowadays: each side had access to its own endless stream of well-written
sources constituting a backdrop against which its own worldview was obviously
true and the opposition's obviously false, and each side treated constructing
another such prop, one more contribution to the picture that its
interpretation is the only reasonable one, as a pro-social act regardless of
how unsound or badly argued it was. This is the era for which the phrase
"arguments are soldiers" was coined. I take it nobody contends that PZ Myers
or Ken Ham were Russian plants, but all the qualities of the present situation
were already there back when they were the biggest fish of memetic warfare.
And this was made possible just by academics with a lot of time versus
small-time evangelical tax-evasion bucks; we still have only a sketchy
understanding of how much the public epistemology has been polluted by
American corporate interests.

------
tribune
There's some interesting analysis in here but overall, I think it's focused on
the wrong points. I don't want to ignore the danger posed by propaganda
committees like those described in the article, but the author's dismissal of
bots/automation in this space as passé seems totally wrong. She writes: "bots
are of increasingly minimal value...". Sure, in 2018 this is true for simple
bots, but I doubt bots have said their final word. AI could power bots that
are massively more sophisticated than their cousins that spam retweets, not to
mention better at slipping through filters designed to catch them. Such
next-gen bots might be able to read an article or view an image, decide what
it's about, formulate an "opinion" on it, then generate some sort of response
- maybe a comment that passes for (at least Internet-level) real human
discourse.

Imagine such a bot on reddit, for instance. Imagine it's the stereotypical
Russian bot, trolling the internet to sow discontent in the West. It might
upvote anything about racial strife or immigration issues. It might show up in
the comments to support Euro-skeptic candidates in the EU. It would pipe up
about the Deep State in the US. Now imagine there are hundreds of thousands of
similar bots across a variety of sites. They would be able to control online
discourse to a huge extent and dictate the media/opinion diet of millions of
people. An army of these bots would be more powerful than any current
propaganda arm, even one with state-level support.

~~~
schoen
Related science fiction:
[https://slatestarcodex.com/2018/10/30/sort-by-controversial/](https://slatestarcodex.com/2018/10/30/sort-by-controversial/)

------
EGreg
Nothing new here. Ideologies are mind viruses that infect their hosts, and
their goal has always been to build an organization to perpetuate themselves
and fight off any competing ideologies that pose a threat. Humans are
expendable.

What I want to see is a systematic epidemiological study of ideologies,
including Liberal Democracy.

------
killjoywashere
I see at least one market opportunity here: the social graph has, at present,
assigned stronger weight to authenticatable news sources. Perhaps those news
sources should start authenticating their sources? Not the HUMINT stuff, but
video frames and audio segments should be cryptographically signed by the
device that produced them, so you can match the camera footage to the camera,
its geolocation, etc.
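
A minimal sketch of what that device-level signing could look like, assuming
an Ed25519 key held by the camera; the key handling, metadata fields, and the
idea of registering the public key with the manufacturer are all invented for
illustration, not an existing standard.

```python
# Sketch: a capture device signs each video frame (or audio segment) so a
# news organization can later verify it came from that device, untampered.
# Uses Ed25519 from the `cryptography` package; all data here is illustrative.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # would live in the camera's secure element
device_pub = device_key.public_key()        # registered/published by the manufacturer

frame = b"raw frame bytes from the sensor"
metadata = b"ts=2018-11-28T12:00:00Z;gps=48.85,2.35"

digest = hashlib.sha256(frame + metadata).digest()
signature = device_key.sign(digest)         # shipped alongside the footage

# Verification: raises cryptography.exceptions.InvalidSignature if anything changed.
device_pub.verify(signature, digest)
print("frame verified against device key")
```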

------
monadgonad
> Our most technically-competent agencies are prevented from finding and
> countering influence operations because of the concern that they might
> inadvertently engage with real U.S. citizens as they target Russia’s digital
> illegals and ISIS’ recruiters.

JFC. They want the state to have _more_ unchecked surveillance of its own
citizens?

------
mooseburger
Everything would have played out exactly the same even with no Russian
interference. There are some big value clashes going on in America, and since
there is no established way to make is-ought leaps, facts are not of critical
importance in resolving these. Not that reality favors either side anyway.

~~~
AnimalMuppet
I think the Russian interference increased the gain in the echo chambers,
which made things worse, but not qualitatively different. (Though enough gain
can make things qualitatively different, once the gain exceeds unity...)
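
For what it's worth, a toy way to read the "gain exceeds unity" remark (purely
illustrative, not from the article): treat each echo-chamber cycle as
re-amplifying a perturbation by a gain g.

```latex
% Toy feedback model: x_0 is the initial perturbation, g the per-cycle gain.
\[
  x_{n+1} = g\,x_n \;\Longrightarrow\; x_n = g^{\,n} x_0,
  \qquad
  x_n \to
  \begin{cases}
    0      & \text{if } |g| < 1 \quad \text{(the echo dies out)}\\
    \infty & \text{if } |g| > 1 \quad \text{(runaway amplification)}
  \end{cases}
\]
```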

------
zyxzevn
> Information war combatants have certainly pursued regime change: there is
> reasonable suspicion that they succeeded in a few cases

I disagree with those ideas. So far they look like made-up excuses for
problems that have a local origin, not an external one.

By claiming that these problems are external, we not only create division and
misunderstanding in local politics.

We also create a non-existent virtual enemy that will cost a lot of resources.
And this might be the actual goal of this article: to get military funding for
these projects.

On the other hand, there are companies and political organisations working
directly to promote certain ideas and to attack opposing ones, independent of
country of origin. These groups can be easily identified, as they even try to
forbid those opposing ideas and avoid reasonable public discussion -
discussion that could actually settle the disputes or give a better
understanding of the underlying problems.

Certain big companies do not want that, because their funding and income are
directly related to how many products, or how much shit, they can sell the
public. Criticism of those organisations is usually stopped by buying out
media, even science media, and by paying experts to come up with certain
conclusions.

Political organisations, and even political agencies, try to implement certain
policies. Those are actually two different groups, but the agencies often like
to support certain political organisations. They want to make an "enemy" out
of everyone who does not support their political idea. It does not matter
which side of politics you are on: from "they are gonna take our guns and
freedom" to "all white men are racists".

The general idea is to create an enemy based on fear. The article does
something similar, and it is something I would rather step away from, because
it does not really identify the problem.

The political groups that create "enemies" to promote certain ideas just need
to talk with each other. Putting internet barriers between them does not help.
Both sides are human. Opponents talking with each other in a friendly way can
even create friendships.

Then we have the last group: agencies that want to support certain political
groups to push certain policies. There are a lot of them, even within a
country. Oil companies want to downplay climate problems. The military wants
to push for more wars, maybe for strategic reasons. The CIA wants to push for
regime changes, maybe to install a puppet. The Israelis want to stop
discussions about Palestine. The Russians want to keep Crimea. And there are
many more agencies, even billionaires pursuing their own little policies, like
Soros publicly sponsoring immigration.

Each of these agencies has its own goals, and we would like to defend against
them. Is each agency really working in our interest, or in its own? In a
democracy we want to choose, and to have full information.

So the problem is not separated by state. Blocking information will even
prevent us from seeing what is really going on, and will allow for increased
corruption.

If you look at what really goes into the problems being reported, it is unfair
politics and a lack of good, unbiased information.

------
xte
Mh, war needs big resources, especially if the parties involved want to win
against big enemies and have to fight for a long time. In other words, I do
not think "terrorists" etc. exist on their own. Sure, lone wolves can act
autonomously without organizations, but actual "info war" and "physical war"
are backed by someone with big money and power.

------
WesleyLivesay
This uses a very common, oft-quoted, but also very wrong myth about why the
Maginot Line was created and what its purpose was.

Its purpose was not to stop a German attack head-on, nor to defend the entire
stretch of territory that needed defending. It was instead designed so that
the numerically inferior French army could free up as many troops as possible
to meet the expected German invasion in the north. There is a reason all of
the best French and British troops were positioned to meet the expected German
invasion through Belgium and not anywhere near the Maginot Line.

Measured against its actual purpose, the Maginot Line performed perfectly
well. The Germans did not try to attack it; they instead attacked to the
north, just not quite as far north as the Allies expected.

------
ryandrake
Wait a minute -- missing the overall point in order to pedantically nit-pick
and argue over small details is how things work around here! :-)

~~~
dang
We detached this comment from
[https://news.ycombinator.com/item?id=18562804](https://news.ycombinator.com/item?id=18562804)
and marked it off-topic.

------
jdlyga
The Maginot Line was never intended to be impenetrable. Its purpose was to
slow the Germans down if they decided to attack along the French/German
border, and to force them to go through Belgium again, as they did in WW1.

