
What Would the Father of Cybernetics Think About A.I. Today? - benbreen
https://slate.com/technology/2019/02/norbert-wiener-cybernetics-human-use-artificial-intelligence.html
======
scottlocklin
>Wiener could not foresee a technological world in which innovation and self-
organization bubble up from the bottom rather than being imposed from the top.

Holy shit, this author is unutterably clueless. Wiener in fact wrote a whole
book more or less on how innovation and self-organization bubble up from the
bottom. It was called "Invention: The Care and Feeding of Ideas."

> Wiener saw the world in a deeply pessimistic light.

Gee, a Jewish guy in 1950 was pessimistic and worried about totalitarianism. I
wonder where he might have acquired this attitude? Maybe, you know, all them
people recently killed in WW-2 might have spooked the man?

I used to respect Seth Lloyd ... until he published the ridiculous "on the
computational capacity of the universe." Google informs me he also takes money
from child molesting gangsters, which sounds about right. Dude should stop
grubbing money from pedophiles and read a book.

~~~
orbifold
You basically have to listen to any lecture he gives to understand that he
has, at the very least, a drastically overinflated ego. Compare that to
lectures by people like Witten or any other world-class theoretical physicist
/ mathematician. The vast majority of them are extremely humble. Seth Lloyd
and people like Max Tegmark are the kind of guys who thrive in the current
academic environment: flashy ideas with some substance, pushed into
prestigious journals, is a winning strategy.

------
jacques_chester
> _Finally, because of his emphasis on control, Wiener could not foresee a
> technological world in which innovation and self-organization bubble up from
> the bottom rather than being imposed from the top._

I think this misunderstands the meaning of "control" in a control system /
cybernetic sense, mistaking it for a more popular sense of the term. It's the
difference between control-as-behaviour and control-as-power. The latter is
strictly about humans, the former can include power structures but extends to
any circular causality with at least one balancing loop.

Cybernetics thinkers were well aware of emergence and multi-layered autonomy.
Some of them made it a central concept (eg Stafford Beer).
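
To make the distinction concrete, here is a minimal sketch (in Python, with invented numbers) of control-as-behaviour: a single balancing loop in which the correction opposes the error, with no controlling authority anywhere in sight:

```python
# A minimal sketch of "control-as-behaviour": one balancing
# (negative-feedback) loop, thermostat-style. Names and numbers
# are illustrative, not from any particular cybernetics text.

def balancing_loop(setpoint, state, gain=0.5, steps=20):
    """Repeatedly sense the state, compare it to the setpoint,
    and apply a correction proportional to the error."""
    history = [state]
    for _ in range(steps):
        error = setpoint - state   # sense and compare
        state += gain * error      # act: the correction opposes the error
        history.append(state)
    return history

trace = balancing_loop(setpoint=20.0, state=5.0)
# The state converges toward the setpoint; the "control" is the circular
# causality of the loop itself, not any power relationship.
print(round(trace[-1], 3))
```

No component here "commands" another; the regulating behaviour is a property of the loop, which is the sense in which cybernetics uses the word.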

~~~
cma
> Wiener could not foresee a technological world in which innovation and self-
> organization bubble up from the bottom rather than being imposed from the
> top.

His book Cybernetics even has a whole chapter on random circuits being able to
self-organize with some kind of learning algorithm for adjusting weights.
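
That chapter's idea can be caricatured in a few lines: a unit with arbitrary initial weights (standing in for the random wiring) that organizes itself via a simple error-correction rule. This is an illustrative toy, not Wiener's actual construction:

```python
# A toy self-organizing unit: arbitrary starting weights (a stand-in
# for random wiring) adjusted by a simple error-correction rule until
# the unit computes logical OR. Illustrative only.

weights = [-0.6, -0.4]  # arbitrary initial "circuit"

def output(x):
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# Target behaviour: logical OR over two inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for _ in range(50):  # repeatedly nudge weights to reduce the error
    for x, target in data:
        err = target - output(x)
        for i in range(len(weights)):
            weights[i] += 0.1 * err * x[i]

print(all(output(x) == t for x, t in data))  # → True: it has organized itself
```

The point is only that adjustment-by-feedback turns an unstructured starting state into a functioning one, which is exactly the kind of bottom-up organization the article claims Wiener couldn't foresee.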

~~~
Bartweiss
It sort of makes you wonder what "self-organization from the bottom" is
supposed to _mean_, if it's not some kind of control system. When entities
specialize into particular roles in a system, there's a control relationship
between them even without any central authority.

The cybernetic definition of control which covers self-organization in
technology is the same one which includes self-organization in cells, brain
structures, and even social groups.

------
carapace
FWIW, the Chilean Project Cybersyn would have been fascinating, however it
might have turned out.

[https://en.wikipedia.org/wiki/Project_Cybersyn](https://en.wikipedia.org/wiki/Project_Cybersyn)

> Project Cybersyn was a Chilean project from 1971–1973 during the presidency
> of Salvador Allende aimed at constructing a distributed decision support
> system to aid in the management of the national economy.

> After the CIA-backed military coup on September 11, 1973, Cybersyn was
> abandoned and the operations room was destroyed.

~~~
scottlocklin
You realize it was a Potemkin village, right? The screens were viewgraphs
assembled by people, and the fancy-looking knobs were basically a DC circuit
that did Nielsen-ratings-style voting.

~~~
sgt101
They had no hope of making it work in 1973, but the idea points towards the
potential of modern technology for economic control. The USSR couldn't work
because the control loops just didn't exist: there were no sensor networks,
just concocted figures, and any control that was implemented operated over
periods of months or years. By the time the central office decided what should
be done, conditions had moved on so far as to make those decisions
meaningless. This made open and free societies more efficient and effective
than totalitarian states.

Today, I think that the opposite may be true. That worries me.

~~~
scottlocklin
I think the capability and the math (Kantorovich and of course Dantzig) were
actually there in 1973, but Beer was a wanker and a mountebank, and Chile
probably couldn't have afforded it anyway.
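
For the curious, the math in question is linear programming. Here is a toy planning problem in that style, solved by checking the vertices of the feasible region (where an LP optimum always sits); the goods, resources, and numbers are entirely made up:

```python
# Kantorovich/Dantzig-style planning as a tiny linear program:
# maximize 3x + 2y (value of two goods) subject to
#   x + y   <= 100  (labour)
#   2x + y  <= 150  (steel)
#   x, y    >= 0
# Solved naively by enumerating vertices of the feasible region.
from itertools import combinations

constraints = [  # each (a, b, c) means a*x + b*y <= c
    (1, 1, 100),
    (2, 1, 150),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def intersect(c1, c2):
    """Point where both constraints hold with equality (None if parallel)."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # optimal production plan: (50.0, 50.0)
```

Dantzig's simplex method walks these vertices cleverly instead of enumerating them, which is why plans with thousands of goods were already computable by the 1970s.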

Apparently the main use of computers there was more or less workers'
collectives using telex to "email" numbers to central.

~~~
mehh
Do elaborate?

~~~
scottlocklin
[https://scottlocklin.wordpress.com/2019/02/26/cybersyn-and-allendes-semi-automated-luxury-socialism/](https://scottlocklin.wordpress.com/2019/02/26/cybersyn-and-allendes-semi-automated-luxury-socialism/)

~~~
mehh
I see, not exactly a well informed opinion from what I can see, but each to
their own :)

~~~
scottlocklin
You realize Stafford Beer was my primary source for writing that, right? His
books and videos are all on file sharing.

~~~
meh2frdf
I’ve read some of his books, and know a bit about the subject. My takeaways
are somewhat different to yours, and that’s fine. I’ve found some of his work
extremely useful over the years.

------
thewarrior
I wonder if a second go at central planning in the future, using networks,
machine learning, smartphones, and sensors, could end up performing much
better than similar attempts in the past.

One of the chief critiques of central planning has been its inability to adapt
to constantly changing conditions and its lack of information about individual
preferences.

Amazon is already a centrally planned algorithmic system for delivering goods
to people. Data collection is used to personalize someone’s experience in a
way that wasn’t possible before. The feedback loops are extremely quick in
terms of the ability to do constant A/B tests.

Perhaps a USSR 2.0 might have its own Amazon-style app for citizens that would
collect preferences for what they want in real time and automatically feed it
all through the economy. It could also correct for certain things which the
market might not handle so easily, e.g. treating medicine for poor people as
more important than fidget spinners.

You might also be able to experiment with different algorithms to see which of
them lead to more growth and productivity.
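
That "try algorithms and keep the winners" idea is essentially a multi-armed bandit problem. A minimal epsilon-greedy sketch, with hypothetical algorithm names and invented payoffs:

```python
# Epsilon-greedy selection among competing "planning algorithms".
# The algorithm names and payoffs are invented for illustration.
import random

random.seed(42)

true_payoff = {"algo_a": 0.3, "algo_b": 0.5, "algo_c": 0.8}  # unknown to the planner
totals = {k: 0.0 for k in true_payoff}
counts = {k: 0 for k in true_payoff}

def choose(eps=0.1):
    """Mostly exploit the best empirical mean, occasionally explore."""
    if random.random() < eps or not any(counts.values()):
        return random.choice(list(true_payoff))
    return max(true_payoff,
               key=lambda k: totals[k] / counts[k] if counts[k] else 0.0)

for _ in range(2000):
    k = choose()
    reward = true_payoff[k] + random.gauss(0, 0.1)  # noisy observed "growth"
    totals[k] += reward
    counts[k] += 1

print(max(counts, key=counts.get))  # the best-paying algorithm dominates the trials
```

The feedback loop here is exactly the constant A/B testing described above: observed results steer which policy gets run next.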

After a few years you might have enough data to do predictive analytics to
improve things ahead of time.

It would never take off because of its totalitarian implications but we might
get to witness a real world test of this idea in China in the coming years.
Life in China is already quite convenient in ways not possible in the west.

~~~
jacques_chester
Amazon is solving a teeny, tiny sliver of the total problem. The total
structure of production is incalculably vaster than the consumer end market.

Summarising a complex history, the USSR did have folks who argued for a
cybernetic approach to production planning, but it was never tried. An
entertaining semi-fictionalised account is _Red Plenty_. A drastically less
entertaining but more scholarly read is _Newspeak to Cyberspeak_.

~~~
cr0sh
> Summarising a complex history, the USSR did have folks who argued for a
> cybernetic approach to production planning, but it was never tried.

Not in the USSR - but there was an attempt elsewhere:

[https://en.wikipedia.org/wiki/Project_Cybersyn](https://en.wikipedia.org/wiki/Project_Cybersyn)

------
ismail
“These broader applications of cybernetics were an almost unequivocal
failure.”

This is incorrect. Yes, there are no “cybernetics” departments at
universities, nor people with that title or role. However, the concepts and
ideas from cybernetics have infiltrated and influenced most areas of science.
If you trace the ideas you will find them; you will recognize the DNA. The
ideas from cybernetics + information theory were a paradigm shift in science
at the time. I would say cybernetics was a victim of its own success: it
became subsumed into many other disciplines under other names. It is like a
snake eating its own tail. See the Ouroboros symbol [0].

“Finally, because of his emphasis on control”

Here he is assuming control means top-down, with a naive view of control.
Control here could be replaced with regulation, direction, steering, or
facilitation. I would define control as the mechanism that allows a system to
achieve its purpose in an efficient and effective way. This definition says
nothing about top-down. In actual fact, there is a concept within cybernetics
of "self-regulation" or "internal control", meaning that control is exhibited
from within the system rather than by an external party.

"Wiener could not foresee a technological world in which innovation and self-
organization bubble up from the bottom rather than being imposed from the
top."

Within the field there is a ton of material on self-organization. This was one
of the topics that was specifically investigated and written about at length,
in particular: how do you design _for_ self-organization? Stafford Beer and
the organizational cyberneticians were specifically focused on maximizing
freedom by balancing both bottom-up and top-down feedback loops.

While Wiener did give the field its name, I would hesitate to call him the
"father" of the field. A lot of the ideas were developed in collaboration with
others, particularly at the Macy conferences [1]. The ideas have a long
history, and cybernetics was heavily influenced by Ashby, Beer, Shannon,
Maturana, von Foerster etc. Cybernetics was itself born out of the tradition
of operational research and systems engineering.

[0]
[https://commons.m.wikimedia.org/wiki/File:Serpiente_alquimica.jpg](https://commons.m.wikimedia.org/wiki/File:Serpiente_alquimica.jpg)

[1]
[https://en.wikipedia.org/wiki/Macy_conferences](https://en.wikipedia.org/wiki/Macy_conferences)

~~~
mehh
There are Cybernetics departments, where people get degrees in Cybernetics!

~~~
saiya-jin
Exactly, we had one at our faculty. I didn't study there; back then it seemed
arcanely narrow-focused compared to general software engineering (bear in mind
we're talking about roughly the year 2000 in central Europe). In reality it
covered tons of topics from AI, cybernetics, neural networks, lots of Lisp,
etc. Folks ended up as software devs after school anyway, which vindicated my
decision. These days it might be a different story...

------
ZhuanXia
Worth noting that this is no longer true:

“The world’s best chess players are neither computers nor humans but humans
working together with computers.”

~~~
detaro
Interesting. Are humans playing with the help of a strong chess engine weaker
than the chess engine itself? (i.e. do they make more mistakes than right
decisions when they deviate from the machine recommendation?)

~~~
dsteinman
Google's AlphaZero can beat all previous chess engines, and it learned chess
on its own and invented strategies that were never conceived of before:

[https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go](https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go)

------
dsteinman
I think the author massively underestimates how ahead of his time Weiner was
and how relevant cybernetics is right now and how important it will become.
I've been studying and working on cybernetics for a while now and have become
a true believer in classic cybernetics as a way forward to develop
better/smarter networked apps and especially for things like speech
recognition and smart homes.

I've been developing a startup project building a cybernetic control system in
JavaScript that makes extensive use of feedback loops that get dynamically
spawned between devices and services and it is truly marvelous to see the full
system in action. I'm getting capabilities out of the system that I didn't
even foresee, like being able to write a fully voice-controlled chess game:

[https://www.youtube.com/watch?v=faOfKd1eAQA](https://www.youtube.com/watch?v=faOfKd1eAQA)

That was just a fun side-project too. With a true cybernetic control layer
running between the speech recognition system and the things you want to
control building apps like this becomes relatively straight-forward, perhaps
not easy but a lot easier than it would be without one.

I am quite busy with a rewrite of the whole system, so there's not a whole lot
of documentation yet:

[https://github.com/jaxcore/jaxcore](https://github.com/jaxcore/jaxcore)

I'll be rewriting the speech recognition system as well and integrating it
better into the new version. I wrote a bit of a writeup about how Mozilla's
DeepSpeech can be hooked into the web:

[https://github.com/jaxcore/jaxcore-listen](https://github.com/jaxcore/jaxcore-listen)

And there's a hardware aspect to the project as well: I created a new
cybernetic control dial that hooks into Jaxcore to control all sorts of things
like stereo equipment and wireless speakers, and can control pretty much
anything in a super elegant way.

~~~
classified
_Wiener_

------
LargoLasskhyfv
Article mentions many others besides Norbert Wiener but omits

[1]
[https://en.wikipedia.org/wiki/Anthony_Stafford_Beer](https://en.wikipedia.org/wiki/Anthony_Stafford_Beer)

and his

[2]
[https://en.wikipedia.org/wiki/Viable_System_Model](https://en.wikipedia.org/wiki/Viable_System_Model)

Not sure why not.

~~~
Bartweiss
It looks like this is an excerpt from _Possible Minds: 25 Ways of Looking at
AI_ , which explains why past Wiener the focus is entirely on artificial
intelligence researchers and commentators: first McCulloch and Pitts, then von
Neumann, then Minsky and Simon, Kurzweil, and finally Hawking and Musk.

It's an odd line to take, though, since artificial intelligence has never been
the sole or even central focus of cybernetics. There was a relatively explicit
split in 1956 at the Dartmouth workshop, but the piece follows the AI thread
from there on with no real acknowledgement that there was any other path in
effect. Beer and Forrester began their cybernetics-related work around 1960,
and so the article seems to drop them entirely as no longer interesting. This
is convenient for the overall thesis: Lloyd works on the limits of computation
and Moore's Law, so he structured this article as "Wiener underestimated
advances in raw computing power, but now that's dying down and so he's
especially relevant".

But the offhand references to Wiener's relevance to unemployment, human-
computer interfacing, environmental disasters, and warfare are cheapened by
the speculative "what would Wiener think?" tone. His work has actively shaped
the current state of economic planning, environmental modelling, and military
strategy, so it doesn't make much sense to read him like an outside
commentator on those topics.

------
ausbah
I feel like the comparisons this article draws between current AI systems and
humans on the front of learning, in order to induce skepticism about the
overall ability of AI, are a bit misleading. Current systems may suck at
learning like humans do (on most fronts), but unlike the human mind, there is
reason to believe we can and will improve AI to overcome this shortcoming.

------
mehh
Why is it that now that AI is a hot topic, all these physicists and other
scientists are suddenly chipping in as if they have substantive opinions about
the subject? Sure, they understand maths, but there is a lot more to the
subject than that. Get off our bandwagon!

------
xvilka
It's impossible to create AGI until we understand human intelligence
completely, or at least know its parameters well enough to be able to make AI
stronger. So far we are nowhere close. Probably 100 years are required to
reach that level of understanding.

~~~
glial
I'm not sure this is necessarily true. At a smaller scale, we didn't need to
understand how humans play chess in order to build a good chess AI. I do think
we need a better understanding of _what we mean_ when we talk about
intelligence, if we expect to recreate it. Francois Chollet has a good recent
paper about this [1].

[1] [https://arxiv.org/abs/1911.01547](https://arxiv.org/abs/1911.01547)

------
RocketSyntax
Reading 'The Dream Machine' now! (1) Automatons need to be able to create new
AI models on the fly. (2) The UI was created to increase the speed of
interaction between human brain and machine. We need the neural lace.

------
tomaszs
AI? This is not AI, he would say.

