
Childhood's End: The digital revolution has turned into something else - Balgair
https://www.edge.org/conversation/george_dyson-childhoods-end
======
cookingrobot
The interesting idea I read here is that systems that were originally intended
to measure the real world, like a traffic map or social network, become so
influential that they have a major effect on the thing they measure. This could
result in an interesting new steady state, like side roads filled to capacity
with people following Waze directions, or fractured networks of filter bubbles.
Or instead of a steady state the system might be unstable, like massive
volatility caused by algorithmic trading. The “analog” metaphor here is that
the user isn’t the consumer of a carefully designed system, but one
“electron” in a circuit that carries many users, and the overall behavior of
the system depends on what they and millions of other users do as they
interact with each other. In that way he suggests the outcome is not planned
and is probably unpredictable.
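That feedback loop, where the measurement steers the behavior being measured, can be sketched in a few lines. This is a toy model of my own, not from the essay; the congestion curve and all the parameters are invented:

```python
def travel_time(load, capacity):
    # Toy congestion curve: 10-minute free-flow time, rising
    # quadratically as load approaches capacity.
    return 10 * (1 + (load / capacity) ** 2)

def simulate(drivers=1000, rounds=20, adoption=0.3):
    split = 0.5          # fraction of drivers on the main road
    history = []
    for _ in range(rounds):
        t_main = travel_time(split * drivers, capacity=800)
        t_side = travel_time((1 - split) * drivers, capacity=300)
        history.append((t_main, t_side))
        # Map-following drivers drift toward whichever road the map
        # currently reports as faster: the measurement steers the
        # traffic it is supposed to be measuring.
        target = 1.0 if t_main < t_side else 0.0
        split += adoption * (target - split)
    return history

# The "faster" road flips back and forth instead of settling.
history = simulate()
```

With these numbers the system never reaches a steady state: each road becomes the faster one at some point, which is the unstable, algorithmic-trading-style outcome described above.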

~~~
GrantS
Agreed, it takes a while to get there, but the core idea comes toward the end:

 _The genius — sometimes deliberate, sometimes accidental — of the enterprises
now on such a steep ascent is that they have found their way through the
looking-glass and emerged as something else. Their models are no longer
models. The search engine is no longer a model of human knowledge, it is human
knowledge. What began as a mapping of human meaning now defines human meaning,
and has begun to control, rather than simply catalog or index, human thought.
No one is at the controls. If enough drivers subscribe to a real-time map,
traffic is controlled, with no central model except the traffic itself. The
successful social network is no longer a model of the social graph, it is the
social graph. This is why it is a winner-take-all game. Governments, with an
allegiance to antiquated models and control systems, are being left behind._

~~~
pnathan
Yes, I don't buy most of what the essay said, but _that_ was bang on,
generally. I disagree heavily with the last sentence, but the bit about how
we've generally shifted from models->actuality is very, very true. It's
acutely obvious when you look for knowledge that isn't in the consensual
standard places online, or really try to use Google as a _search_ engine as
opposed to a handy-dandy bookmark replacement thing.

~~~
pesmhey
Could you give an example of looking for knowledge that isn’t in the
consensual standard places online, where that search ends up finding nothing?

~~~
pnathan
Not a specific thing, no. It tends to be trade knowledge or bits of history &
beliefs that were printed, or orally known. I have a certain store of oral
history regarding a few 20thC movements that was passed down, for instance.
But there are no electronic editions of that information - if you find it on
Wikipedia, it'll be filtered, somewhat mangled, and altogether minimal in
description.

These days when I want to find obscure knowledge, I purchase books written on
the topic by scholars; they concentrate the information very well with good
sourcing.

------
josephpmay
I think this article has an incredibly important point that most people are
missing, because it is terribly written.

Part of this is that analog vs digital isn’t the right term. Vacuum tubes are
completely irrelevant to the point he is trying to make, and confuse the
reader.

The point he’s trying to make (which unfortunately I’m unable to explain right
now better than he can) is about the emergent properties of systems made up of
independent entities in a rigid form, and how this is a type of computing that
differs from what we usually think of as computing. The example he gives about
DNA vs the brain sort of illustrates this. DNA is coded similarly to
computers, and in many ways cells are like computers/programs (that happen to
self-propagate according to their software). On the other hand, brains are
this other type of “system” computer made up of a combination of independent
actors (neuron cells), a rigid but adaptable structure (the physical
structure of the brain), inputs, and information sent between the individual
actors. This is very similar to a million computer-regulated human systems
where we react to information we receive through the platform, we relay
our reaction to the system, and that reaction affects how the system interacts
with other individual actors.

~~~
Joeri
_about the emergent properties of systems made up of independent entities in a
rigid form and how this is a type of computing that differs from what we
usually think of as computing_

A way of viewing it is as an inversion of control between emergent behavior and
programmed behavior. When the number of interactions between discrete units of
programmed behavior exceeds a certain threshold, the programmed logic responds
to the emergent interactions instead of dictating them. The system as a whole
starts obeying its own logic instead of the one programmed into it.

I suppose you need to see such systems more like ecologies to be studied with
techniques not far off from those used to study natural ecologies.

~~~
diydsp
ah, almost like when we have robots building so many products, but the robots
are limited in what they can fashion, so we end up mostly using products which
are reflections of machine-capabilities?

~~~
lotsofpulp
The buyers in the market choose to purchase products which reflect machine
capabilities. No one is being stopped from using products beyond what machines
are capable of, but people seem to prioritize other qualities, such as being
less expensive.

~~~
coldtea
> _The buyers in the market choose to purchase products which reflect machine
> capabilities._

The buyers are limited by: (a) what's cheap so they can afford, (b) what's
available, (c) what's advertised -- among other things.

There's no "perfect market" where people perfectly "vote with their wallets".

------
aurelian15
_> Digital computers deal with integers, binary sequences, deterministic
logic, algorithms, and time that is idealized into discrete increments. Analog
computers deal with real numbers, non-deterministic logic, and continuous
functions, including time as it exists as a continuum in the real world._

As a computer scientist/computational neuroscientist, I don't really buy the
analogue vs. digital distinction. Basic information theory tells us that any
noisy continuous system is equivalent to a discrete system of a certain
resolution/bit-depth. As the author continues to write

 _> [...] analog computers embrace noise; a real-world neural network needing
a certain level of noise to work._

A few bits are sufficient to describe the output of a real-world neuron.
Action potential timing has jitter in the sub-millisecond range, which means
that relatively coarse discrete time steps are sufficient for a simulation.
Yes, our brains are extremely noisy systems, yet this is _exactly_ the reason
why they (in theory at least) can be simulated on a discrete/digital computer.

In practice there is little to be gained from analogue computation, except for
a (potentially) reduced energy consumption compared to a digital
implementation of the system in question. But on a theoretical level, nothing
changes.
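The equivalence claimed here, that noise sets an effective bit-depth, can be illustrated numerically. This is my own sketch, not from the comment; the signal, noise level, and the 20 dB SNR figure are all arbitrary:

```python
import math
import random

def effective_bits(snr_db):
    # Shannon capacity of a Gaussian channel: a continuous value seen
    # through noise carries ~0.5 * log2(1 + SNR) bits per sample.
    snr = 10 ** (snr_db / 10)
    return 0.5 * math.log2(1 + snr)

def quantize(xs, step):
    # Round each sample to the nearest multiple of `step`.
    return [round(x / step) * step for x in xs]

def rms_error(xs, ys):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

random.seed(0)
sigma = 0.1                                      # noise amplitude
signal = [math.sin(0.01 * i) for i in range(10_000)]
noisy = [s + random.gauss(0, sigma) for s in signal]

# Quantizing well below the noise floor buys essentially nothing:
# the reconstruction error is dominated by the noise either way.
coarse = rms_error(signal, quantize(noisy, step=sigma / 4))
fine = rms_error(signal, quantize(noisy, step=sigma / 64))
```

The capacity bound says a signal observed at 20 dB SNR resolves only about 3.3 bits per sample, and the empirical part shows that a 16x finer quantization step leaves the error against the clean signal essentially unchanged, which is the sense in which the noisy continuous system is equivalent to a discrete one.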

~~~
k9s9
Well, analog computation being noisy could be the reason we experience things
like emotion.

It's easy to say emotion has no value, until you see it in action bringing
some sense of control to say a family that has gone through trauma or a
country through war.

It doesn't look like digital computation (not digital encoding) can produce
such outcomes.

We constantly see the NSA/Zuck/Wall St/China etc. having access to ridiculous
amounts of digital computational power, yet being totally surprised on a daily
basis by the realization that they aren't in control.

~~~
aurelian15
_> Well, analog computation being noisy could be the reason we experience
things like emotion._

Hm, I don't really see why this should be the case. Emotions are pretty well
studied in both animals and humans and are, to put it very handwavingly,
merely a global change of brain state/equilibria, modulated for example by
brain regions such as the amygdala and/or the release of neuromodulators [1].
From my understanding, there is nothing about emotions that cannot be computed
by a digital computer, and there is little about emotions that is related to
noise.

I'll let philosophers think about the _experience_ part of your statement.

[1]
[https://en.wikipedia.org/wiki/Amygdala](https://en.wikipedia.org/wiki/Amygdala)

~~~
JohnJamesRambo
Ok show me a computer with emotions.

~~~
etCeteraaa
You really wouldn’t want to see one.

But all that’s needed is a handful of rules to provide for a system that
emotes. You’d probably dismiss it as an inauthentic toy, but emotions actually
aren’t the core aspect of agency.

Anyway, the rules just need to assemble a goal, a threshold for equilibrium,
and reactions for deviation from that equilibrium.

Bonus points if you account for radiant measurements of equilibrium. What I
mean by that is anticipation of adjacent conditions that signal a probable
loss of equilibrium, such that the system doesn’t just react to an unbalanced
circumstance, but also to things that could lead to an undesired imbalance.

Examples:

A. If the cup is disturbed so that the milk spills, then a negative experience
ensues.

B. If a balloon, inflated with ordinary compressed air, sinks onto the grass
and pops, a negative experience ensues.

C. Ambulate through an environment obstructed by complex obstacles, and
negotiate each obstacle without falling onto the ground. Falling onto the
ground will result in a negative experience.

Each of these three goals represents a targeted state of equilibrium: don’t
spill the milk, keep the balloon safe, don’t fall down go boom.

Now, layer an array of reactions on top of the branched set of possible
outcomes. You can also build up variations on top of each branch.

Positive branches are indicated in moments of success at achieving the goal.
Negative branches are indicated upon equilibrium being defeated.

So the computer or robot can externalize its inner state with a happy face or
a sad face, but we’re missing some of the emotional range. When would anger
display? When the machine can assign blame and consider revenge, of course.

So if an entity (preferably a rival robot, since we wouldn’t want the robot to
exact revenge on a person) knocks over the milk, pops the balloon, or tackles
the robot, the obvious motive is to make sure that never happens again; the
root cause is the rival entity. Stand back up, destroy the entity, acquire
more milk and another balloon, and try to achieve equilibrium, and thus
happiness, again.

Prior to reacquiring its happy state, the machine can externalize an angry
face if it can assign blame to a detected responsible entity; in all other
cases, it would simply be sad until it can stand back up, inflate another
balloon, and pour itself another glass of milk to protect. If it cannot set
things back in order as desired, then it is simply permanently sad (no
balloon, no milk, unable to stand or walk), forever.

See how that works? It’s actually not much more complicated than that.
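The rule set described above (a goal, an equilibrium flag, reactions for deviation, and blame-driven anger) can be written down directly. A minimal sketch of my own; the class name, event strings, and the milk-guarding goal are all invented for illustration:

```python
class EmotiveAgent:
    """Toy rule set: a goal, an equilibrium flag, and a reaction table."""

    def __init__(self, goal="protect the milk"):
        self.goal = goal
        self.in_equilibrium = True
        self.blamed = None          # entity held responsible, if any

    def observe(self, event, caused_by=None):
        # Deviation from the goal state breaks equilibrium; restoring
        # the goal state clears it. Blame is recorded only when a
        # responsible entity can be detected.
        if event == "goal_violated":
            self.in_equilibrium = False
            self.blamed = caused_by
        elif event == "goal_restored":
            self.in_equilibrium = True
            self.blamed = None

    @property
    def display(self):
        # Externalized state: happy at equilibrium; angry if blame can
        # be assigned; otherwise simply sad.
        if self.in_equilibrium:
            return "happy"
        return "angry" if self.blamed else "sad"

agent = EmotiveAgent()
agent.observe("goal_violated", caused_by="rival robot")   # milk knocked over
print(agent.display)   # angry
agent.observe("goal_restored")                            # new glass poured
print(agent.display)   # happy
```

Whether this counts as emotion or merely emoting is exactly the toy-vs-authentic question raised above, but the mechanics really are this small.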

~~~
visarga
In a multi-agent environment an agent has to model its peers as well and learn
to communicate to solve goals together. Dealing with other agents is one step
up from dealing with objects. Emotion would naturally be linked to the actions
of other agents as they affect the completion of one's goals.

~~~
etCeteraaa
Part of your statement is true, and part of it is false.

An agent would need to model the behavior of peers, yes.

But communicate? No. Solve goals together? No.

To coalesce civilization or society? Maybe, maybe not. Socialization among
peers is not a prerequisite for agency. Not by a mile.

And certainly not amid a state of nature. Not at all would communication or
collaboration become a necessity.

Emotion might become an aspect of investment in hypothetical experiments
performed by an agent: hope that equilibrium might be achieved with less work
through communication and collaboration.

But put it this way. A caveman grunts at a wild boar standing on top of a
hill. The caveman wishes to discern whether the silhouette atop the hill is
potential food, by provoking movement, or an inert object offering the
illusory shape of a backlit animal. The boar notices and experiences fear. The
boar freezes, hoping the grunt was not directed toward it intentionally.

Is neither an agent? Does the conflict of interests preclude emotion?

The boar models the adversary, and experiences emotion to preserve the
equilibrium of staying alive.

The caveman experiences hunger as a loss of equilibrium, which provokes a
mixture of anxiety, and unhappiness which may cascade into a malaise or
depression as weakness progresses with starvation. The aggression of the hunt
is not anger, although anger may arrive incidentally.

Is the grunt communication? Perhaps as much as any tactic might be. Deceptive
communication (bird calls, imitating a female in heat to draw male prey)
might still be communication, after all.

But to model nature, there must have been a period where some agents
seemingly existed without peers. Those agents likely experienced emotion
before cognizant sentience and a rich awareness of the potential for sentience
within peers, which most likely precedes a capacity to communicate.

------
jaabe
I think this is an interesting article, though not from a CS point of view.
Basically my takeaway can be boiled down to the brilliantly put _“Most of us,
most of the time, are following instructions delivered to us by computers
rather than the other way around.”_

I work with digitisation and automation in a Danish municipality, and I’m
actually one of the author’s “hidden” hands-on architects. Because I’m a
techie in a non-tech world, and because a large part of Enterprise
Architecture is the business end, I see the impact on the non-tech-savvy world
daily. Digitisation has absolutely changed the way organisations work, and not
always in the way we intended. There is a reason why AI/ML is so hyped now:
we’ve trained our organisations to utilise Business Intelligence in every
aspect of their daily lives. It’s more than that, though. If you give people a
system, they’ll use it, and not always in a manner that makes sense. From
managers trying to make informed decisions to employees simply trying to do
their best.

An excellent example of the dangers popped up last summer. We were reviewing a
process scheduled for Robotic Process Automation, to see how suitable it would
be and to sum up our benefit realisation prospects. Only it turned out that
the process wasn’t really sound at all. In short we had a team of employees
who spent a lot of time distributing tasks in Outlook. They even had a
colouring system in place, one colour per employee, to make it easy to spot
your individual tasks once distributed. Except they had more employees than
there are standard colours, and because they didn’t know how to make more
colours some people had to share. They wanted it automated, and we could have
done that.

Only, the entire thing was just silly. It was even sillier than I’ve just
outlined. Because after they had colour-distributed the tasks, each employee
would archive their individual tasks in our ESDH system (electronic
journaling). During this process, everyone would fill out a standard ESDH
form, where you also need to select a responsible employee. So basically they
were distributing tasks twice.

No one had questioned this for almost a decade, and these are intelligent
workers, mind you; to them it was just how the systems worked. We swooped in,
looked at it for about 15 minutes, and then forwarded them to a LEAN
consultant who saved their department around 600 yearly hours of needless
bureaucracy.

And that’s just one story.

~~~
TheOtherHobbes
A variant of this used to be called Time and Motion, also known as Taylorism,
and is based on the uncomfortable assumption that left to their own devices
humans are rather stupid and need to be told what to do. In detail.

There are two elements to this. One is the idea that there's an executive
class which is born (or at least educated) to rule, and the other is the not-
quite-identical observation that many people lack genuine independent agency,
either by nature or because they lack the political/managerial leverage needed
to make productive changes.

The personal part of the "digital revolution" was supposed to be a way for
people to explore independence, agency, and creativity. It actually turned
into yet another scheme by which those who believe they're born to rule can
use algorithmic machinery to farm and control the economic and political
activity of everyone else (input and output), without the stickiness and
inertia of traditional long-term employment. Which was itself another form of
farming and control, but with hard-fought humane benefits.

This is a political problem, not a technological problem. It can only be
solved with technology where the political situation allows it.

For now it seems to be true that most humans are rule-takers and mimics, not
creative innovators or strategic thinkers. I have no idea if that's a genetic
limitation or an educational one. It would have been interesting to see what
would have happened if personal computing had gone in the direction it was
originally supposed to, and education had followed.

~~~
invalidOrTaken
I know it seems bleak, but one thing to remember is that no one _planned_
this. There's no omnipotent overlord whose scheme to screen-addict/rent-
enslave the human race is finally coming to fruition. Things like Dynamicland,
the early retirement people, and yes, HN, are all bright lights in the dark.
It's not as immediately profitable to uplift as it is to exploit, but there's
no one stopping you from teaching some neighborhood kids how to write a
browser extension, or personal finance. More importantly, there's no one
stopping them from learning if they want to.

Things aren't perfect, but there are opportunities and to spare for those
looking.

------
CptFribble
I think this boils down to two things:

1) Algorithmic tech products replacing basic services are growing beyond even
their creators' control

2) Analog computing will replace these algorithmic products and fix all the
problems

I disagree. Not with the "growing beyond control" bit, I think that's clear,
but with the analog computing bit.

First, Dyson doesn't explain what he means by analog computing, other than a
basic "operates on real numbers and continuous functions." He also doesn't
give any clues about what hardware or software for these systems will look
like, or specifically how they will outcompete what we have now (Google, FB,
etc).

Second, I think he's wrong. The only things that determine the broad direction
of society are who has the power and money. This has been true as far back as
you go, whether kings or merchant guilds or the citizenry, when enough of them
decide to work together for a revolution. And there's one fact Dyson ignores:

No matter how much or little control the owners of these algorithmic tech
products have over the direction and consequences of their use, they still get
the money.

As long as Google, Facebook, Amazon, and the others are making the money, the
unintended consequences of their algorithms dictating the shape of our lives
will continue to be irrelevant.

~~~
k9s9
He is not saying analog computation will "fix all problems". He is saying that
the computation now happening, the kind that props up a Trump, resembles an
analog system where noise plays a role.

~~~
visarga
That fits well with neural nets, where noise plays a crucial role, from the
random order of selecting examples into batches, to randomly dropping out
synapses, directly injecting noise, or even starting prediction from pure
noise (GANs). In RL the epsilon-greedy technique prescribes random actions
from time to time, in order to better explore the environment. Noise, when
used properly, makes neural nets better and more resilient.
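The epsilon-greedy rule mentioned here is easy to sketch: a minimal multi-armed-bandit example in plain Python, of my own invention; the arm payoffs and all parameters are made up for illustration:

```python
import random

def epsilon_greedy(estimates, epsilon):
    # With probability epsilon, act randomly: deliberately injected
    # noise that forces the agent to keep exploring. Otherwise,
    # exploit the action currently estimated to be best.
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=estimates.__getitem__)

def run_bandit(true_means, steps=10_000, epsilon=0.1, seed=0):
    random.seed(seed)
    estimates = [0.0] * len(true_means)
    counts = [0] * len(true_means)
    for _ in range(steps):
        arm = epsilon_greedy(estimates, epsilon)
        reward = random.gauss(true_means[arm], 1.0)   # noisy payoff
        counts[arm] += 1
        # Incremental running mean of observed rewards for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

# Without the noise (epsilon = 0) the agent can lock onto a mediocre
# arm forever; with it, the best arm (mean 0.9) is reliably found.
estimates = run_bandit([0.2, 0.5, 0.9])
```

The random actions play the same role as the dropout and injected noise mentioned above: a small amount of deliberate randomness keeps the system from converging on a bad answer.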

------
king_magic
> Childhood’s End was Arthur C. Clarke’s masterpiece, published in 1953,
> chronicling the arrival of benevolent Overlords who bring many of the same
> conveniences now delivered by the Keepers of the Internet to Earth. It does
> not end well.

Uhhhh, it didn’t end well because (in the book) humanity was doomed to
evolutionarily tear itself apart, and the Overlords knew this. Seems pretty
disingenuous to use that fictional scenario not ending well in this context.

~~~
ngmc
> Uhhhh, it didn’t end well because (in the book) humanity was doomed to
> evolutionarily tear itself apart

Yes and no. Humanity ends as something else takes its place. Quoting from the
book (haha of course I just pulled this up on my Kindle):

 _But there is one analogy which is–well, suggestive and helpful. It occurs
over and over again in your literature. Imagine that every man 's mind is an
island, surrounded by ocean. Each seems isolated, yet in reality all are
linked by the bedrock from which they spring. If the ocean were to vanish,
that would be the end of the islands. They would all be part of one continent,
but their individuality would have gone.

Telepathy, as you have called it, is something like this._

Swap "telepathy" with "telecommunications" and the analogy is–well, suggestive
and helpful.

The book is one of my favorites. _Lo and Behold_ is also worth a watch.

[https://en.wikipedia.org/wiki/Lo_and_Behold%2C_Reveries_of_t...](https://en.wikipedia.org/wiki/Lo_and_Behold%2C_Reveries_of_the_Connected_World)

------
xte
Mh, the actual state of things is due not to technical evolution but to
commercial evolution. We have had machines that work under the control of
their users, notably Lisp Machines, Alto workstations and others, and there is
no reason but commerce and ignorance that we lost them.

They push knowledge, they help draw a personal evolution path, they solve
problems. Today we have something similar with Free Software.

Those ideas are not much accepted by "the big and the powerful", because they
mean a real free market in which only knowledge and work have value, not
marketing, and certainly not secret commercial agreements.

So yes, the digital revolution has turned from a free, academic, intelligent
project and dream into a new way to subjugate whole populations. This started
happening years ago, and it is becoming so evident now that even people
totally ignorant of IT are starting to feel it.

A few actors have succeeded in displacing public knowledge into private
companies, and in displacing the free market with a sort of Soviet-Union-style
planned economy drawn up, instead of by a dictatorial government, by a few
equally dictatorial companies/funds. The next step is to abolish public
(paper/coin) money entirely, substituting it with digital payments only;
another step is to declare "politics" a bad thing to be abolished,
substituting it with a corporate council, as the "Continuum" TV series foresaw
well, and as George Orwell foresaw even earlier with "1984".

Beware of one thing: in the past, dictators needed manpower, so "force" was
not all in a few hands. In a not-so-far future, manpower will not be needed
anymore. So the chance to revolt and gain freedom gets lower and lower.

Imagine a future with autonomous robotic armies, cars, "smart" devices
everywhere. Think how we could revolt, and against whom. Also think how we
could communicate, tied to proprietary devices and platforms, without pens and
paper anymore, without the knowledge of how to write with pen & paper, without
the knowledge of "past" things like the "postal system" and "newspapers",
which we know about much as today's people know about the ancient mimeograph
and polygraph...

------
api
Worth reading simply for the term "oligarchia."

What we may very ironically be witnessing with the recent international
populist uprising that seems in part aligned with certain trans-national
oligarchical interests is the emergence of the beginnings of an actual global
government. Unfortunately it seems to be a global government run by gangsters.

The fact that US Democrats (and a few Republicans) are hung up on "Russia"
shows that they don't get it. Russia is as irrelevant as the USA or China to
these people. I mean sure some of them are Russian and may have ties to the
Russian government, but Russia and Putin are just tools to them as are all
other national and governmental powers. They're trans-national and represent
an emerging power that has no national loyalty.

We're heading for a world run by gangsters and corporations. William Gibson
remains the most prophetic of all sci-fi writers.

------
rhn_mk1
I'm surprised no one has mentioned cybernetics yet. The author's premise seems
to be that digital, centrally controlled systems are becoming pieces of
systems made up of self-regulating sets of feedback loops.

The idea certainly isn't new, having been written about in the '50s in
Stanisław Lem's "Dialogi". The observation about digital systems is certainly
fresh though.

------
mark_l_watson
Interesting take on emergent properties of large numbers of simpler things no
longer being ‘knowable’ or ‘understandable.’ Even though his analysis of how
search engines end up being how we see the world seemed like a tangent to his
big idea, it made me think of how knowledge graphs at Google and Facebook grow
out of many sources of information and end up being the ground truth for
organizing information about the world. The difference is that you can track
the provenance of data going into huge knowledge graphs, but you cannot
understand the emergent properties of a complex system any more than you can
understand how a billion parameters in a deep learning model function.

------
abledon
side note: The author's father was Freeman Dyson, a physicist who popularized
the idea of the Dyson Sphere.

------
ck425
Terribly written article, but with very, very good points (ignoring the whole
analog metaphor). Emergent systems are already showing damaging side effects,
such as accidentally built-in racism.

The perception that programmers are in control is dangerously inaccurate but
I'm not sure how we can go about educating the public without destroying the
trust we have left.

------
networkimprov
It commences with a fair bit of hand waving, and thus was I waved away from
the rest of the story :-)

~~~
networkimprov
I regret it if folks found this comment irritating. I only meant to convey my
sense that the article began with rather broad but vague statements, which
dissuaded me from finishing it.

------
mackalex6
Well, true in some contexts: the future could be dangerous, with these
self-controlled machines getting more destructive, or being programmed by a
few insane rebels to drastically harm humanity!

------
erikpukinskis
It’s a nice train of thought, but it’s not a great essay. Good enough for a
bar conversation but I don’t think it is saying much as currently written.

What is metazoan computing?

Given that our discrete computers encode many analog models, what exactly is
he suggesting will be different now, such that it constitutes a new era?

Reads a little like the writings of someone who so wants to say something
bigger than everyone else that they end up not saying anything to anyone.

Analog models are neat though, and pondering “could we do this analog?” is a
good question to have in the design quiver.

~~~
laughingman2
> Reads a little like the writings of someone who so wants to say something
> bigger than everyone else that they end up not saying anything to anyone.

Exactly, similar to some of the (not all though) "insights" you have under the
influence of psychedelics.

~~~
erikpukinskis
I mean, some insights are crap, regardless of what chemicals you are on.

But psychedelics have been clinically shown to help people have genuine
insights.

