
Why Stanislaw Lem’s futurism deserves attention - pmcpinto
http://nautil.us/issue/28/2050/the-book-no-one-read
======
Pamar
_The result is a disconcerting paradox, which Lem expresses early in the book:
To maintain control of our own fate, we must yield our agency to minds
exponentially more powerful than our own, created through processes we cannot
entirely understand, and hence potentially unknowable to us._

This sentence (and the ones that precede and culminate in this one) made me
think of Iain Banks's Culture.

~~~
AndrewKemendo
In practice though, the modern world has already done this to a degree; it's
just currently distributed.

 _Most_ people have no clue how things work that they depend on or put their
trust in daily - airplanes, cars, medicine, waste management, energy etc...

~~~
wickes
That's not really what he meant by "exponentially more powerful." The Culture
is a helpful example, because it's a society administrated by ludicrously
intelligent machines called "Minds" that are obviously and immediately more
capable than a human could ever be. Their intellectual dominance cannot be
overstated, and a single Mind is probably smarter than all of humanity put
together. Each one is perfectly capable of observing a human's brain and
accurately predicting their future actions, even though doing so is highly
discouraged by social custom. They are truly "exponentially more powerful" and
"potentially unknowable." A recurring theme in the stories is that the
Culture's people essentially live as the Minds permit. They could not possibly
sustain their post-scarcity lives without the Minds' superhuman abilities, and
the Minds could trivially enslave the entire populace if they wished.

By contrast, airplanes aren't exactly black magic. Any regular adult human can
figure out airplanes. They teach the prerequisite skills for flying,
designing, and building aircraft in schools. The majority of humans alive
today choose not to master the secrets of flight because there are only so
many hours in the day and they have other stuff to do, but that's not the same
as yielding any agency to someone "exponentially more powerful." They could
just as easily have ended up the airplane guy if they'd made some different
decisions in college. Cars, medicine, waste management, and energy are
similarly things that anybody could potentially understand and work with given
some reasonable amount of study. You'd run into trouble mastering all of them
together, but that doesn't make the required mind "potentially unknowable."
There are no supermen who enable modern human society, it's just regular
chumps like me and you in organized groups. We could totally become two of
those chumps! In fact, we probably are already two of those chumps.

~~~
gohrt
What does it mean to "maintain control of your own fate", if a Mind can
predict your future actions and therefore you lack free will?

~~~
wickes
Congratulations, you've nailed down the Culture's biggest quandary in one
sentence. The best answer I can give you is "good question." The best answer
Banks could give you is spread out over an award-winning series of nine books,
so it's probably worth a look.

For starters, Minds are fully aware of how big a deal mind reading is, and
virtually never actually do it. A situation has to be ethically ridiculous
before a Mind will even begin to consider it, we're talking "this person was
brainwashed into hiding a nuke on the puppy daycare planet and we can't find
it" sort of situations. This somewhat changes the question: "If the Mind
_could_ read your thoughts and predict your future actions, but _hasn't,_
what does it mean to... " Again, the best answer I can give you is "good
question." I suppose it's worth pointing out that if the mere possibility is
enough to deprive you of free will, your universe is totally deterministic,
the Mind doesn't have free will either, and you're all in the same boat. The
Mind doesn't meaningfully have any "control" over you, since nobody has
control over anything. The realization that you don't possess free will won't
help you gain free will since that realization was itself inevitable and the
actions you take as a result are too.

Further, if you know the Mind can perfectly predict your future actions when
they scan your brain, does that change how you'll act? What if they scan your
brain in secret and don't tell you so you don't know when their prediction was
formed? I'm going to take an example from a source I'm sure we're all familiar
with: the 2007 film _Next_ , starring Nicolas Cage. The premise is that Mr.
Cage's character can observe his own future for the next two minutes. The big
caveat is that it only gives him a highly useful idea of what the future is
probably like and not a faultless prediction. By the time he's done looking,
the future has changed because he looked at it. The end result is that he
never actually knows the future. His power is subject to the "observer
effect:" the act of detecting the future caused it to change, so he only knows
what the future _would have been_ at the moment he used his ability. From then
on, the actual future is different, and the only way to know how it changed is
to use his ability again, which will yet again change the future. A simpler
but less Cage-tastic example is detecting electrons. You can observe when
photons interact with the electron, but the electron's path was altered by the
photon, so good luck figuring out where it is now. You could wait for it to
interact with another photon, but that will cause... and so on. Perhaps the
Mind shooting a bunch of futuristic radio waves or whatever inside your skull
alters your thinky bits, and the very act of detecting your mental state
alters your mental state. The Mind will then extrapolate from data that was
outdated by the very act of producing it, and its prediction will be wrong.
How wrong? Wrong enough to matter philosophically? "Good question."
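The observer-effect argument above can be sketched as a toy simulation (purely illustrative, not from the thread; all names here are made up): a deterministic agent whose next action is a function of its mental state, and a scan that perturbs the very state it reads, so the prediction is computed from a state that no longer exists.

```python
# Toy model of the observer effect on mind-reading: scanning a brain
# both reads its state and perturbs it, so a prediction formed from
# the pre-scan state can be invalidated by the scan itself.

def next_action(state):
    # The agent's behavior is a pure function of its mental state.
    return "left" if state % 2 == 0 else "right"

def scan_and_predict(state):
    # The scan reads the current state to form a prediction, but has a
    # side effect on the brain (here: incrementing the state).
    prediction = next_action(state)
    perturbed_state = state + 1
    return prediction, perturbed_state

state = 42
prediction, state = scan_and_predict(state)
actual = next_action(state)
print(prediction, actual)  # prints "left right": the scan flipped the
                           # state's parity, so the prediction is stale
```

Whether a real scan's perturbation would be this decisive, or philosophically negligible, is of course exactly the "good question" above.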

~~~
pombrand
I'd also note that to predict someone's future behavior, you'd have to predict
everything in their future environment, and I'd assume simulating a large part
of reality would cost a lot of energy - and if we're facing heat death,
energy is all we, and the Minds, have got. So economically it would be a minor
form of suicide.

------
terhechte
Interestingly, in his book 'Golem XIV' [1] Lem creates an actual example
scenario of a future where mankind managed to create an AI far superior to us
only to find that said AI is not even remotely interested in playing war games
for military generals and instead just holds long lectures about humanity. So
it is a bit like a simplified, more approachable version of the 'Summa
Technologiae' mentioned in the article. I recommend that book; it is a great
read.

[1]
[https://en.wikipedia.org/wiki/Golem_XIV](https://en.wikipedia.org/wiki/Golem_XIV)

~~~
soperj
I always wonder if you were able to resurrect someone's brain in a computer if
that "person" would be interested enough to even stay alive.

~~~
golergka
> if that "person" would be interested enough to even stay alive

If you "ressurect" someone's brain, it should behave like the person behaved
when it was alive — and people usually prefer being alive to being dead.

~~~
mattdw
But with a completely different set of sensory input, and under a whole
different set of constraints.

------
chiph
I think we'll be fine, as long as we don't build a machine that can create
anything starting with the letter 'N'.

Reference:
[https://books.google.com/books?id=kWElP9YZkzQC&pg=PA3&lpg=PA...](https://books.google.com/books?id=kWElP9YZkzQC&pg=PA3&lpg=PA3&dq=stanislaw+lem+machine+starts+with+n&source=bl&ots=-NbM53rhi-&sig=NnFT6xTbsuAIm7yjuAMPOoMPSYM&hl=en&sa=X&ved=0CDgQ6AEwBGoVChMI64G0hKTtxwIVx6CACh1H-Ayq#v=onepage&q=stanislaw%20lem%20machine%20starts%20with%20n&f=false)

~~~
akkartik
That was wonderful, thank you. First Stanislaw Lem that has truly resonated.

------
maligree
There's also a fantastic 7-minute film based on the dark wisdom of Golem XIV.
You should really, really watch it:

[https://vimeo.com/50984940](https://vimeo.com/50984940)

~~~
musesum
Wow. Fave author (Lem), fave composer (Martinez), fave subject (?).

------
ommunist
Stanislaw Lem imho is one of the most accurate futurologists. Drone armies,
the psychological problems of spaceflight - he foresaw it all, and his
Futurological Congress is also hilarious.

~~~
Erwin
Here's one recent real-life Futurological Congress -- I'm eager to know
more about what happened: [http://www.independent.co.uk/life-style/health-and-
families/...](http://www.independent.co.uk/life-style/health-and-
families/health-news/homeopathy-conference-ends-in-chaos-after-delegates-take-
hallucinogenic-drug-10491114.html)

~~~
V-2
Why didn't they take homeopathic recreational drugs? I heard they're way safer
and there's no overdosing hazard

------
pjscott
_How useful would a superintelligent computer be if it was submerged by storm
surges from rising seas or disconnected from a steady supply of
electricity?_

How useful would Elon Musk be if he were submerged by storm surges from rising
seas or disconnected from a steady supply of food?

Put that way, the question sounds pretty silly: he's rich enough to buy food
even if it gets expensive, and if the ocean ever got too frisky he would
simply avoid standing next to it. Any superintelligent AI worthy of its lofty
title could get a lot of cash; mere humans manage that sort of thing all the
time. Why even mention such minor inconveniences?

~~~
idlewords
Elon Musk is barely useful on high ground immediately after eating.

------
beatpanda
_More than once I have wondered why so many high technologists are more
concerned by as-yet-nonexistent threats than the much more mundane and all-
too-real ones literally right before their eyes._

Yeah, for a group of people who hold themselves out to be so very intelligent
there does seem to be a blind spot about ten miles wide.

And before you say it, you're going to have to provide some proof of the oft-
repeated notion that goes something like "Uber-for-dogwalkers is going to
accidentally provide the solution to climate change." Simply believing so
isn't enough.

~~~
edanm
"Yeah, for a group of people who hold themselves out to be so very intelligent
there does seem to be a blind spot about ten miles wide."

A thought - It's also possible that you have a blind spot yourself. It's at
least worth considering.

"And before you say it, you're going to have to provide some proof of the oft-
repeated notion that goes something like "Uber-for-dogwalkers is going to
accidentally provide the solution to climate change.""

If you're talking about people like Eliezer Yudkowsky/Elon Musk/"the LessWrong
community", etc, that's not _at all_ what they're saying.

What they're saying is that the bad things that could happen, even if not
happening now, and even if unlikely, could be so horribly, terribly bad, that
it's worth making sure they don't happen.

~~~
beatpanda
What I'm saying is that bad things, things that are so horribly, terribly bad
that they could alter the course of human civilization for the worse,
permanently, are happening _right now_ and are being largely ignored by
technologists, while they are worried about preventing something that might
happen, someday.

And when people level the criticism at the technology industry that it is
currently mainly focused on creating trifles for already-wealthy people, the
inevitable response is that incredible technological innovation, the kind we
could use to solve actual life-threatening problems, _might_ come out of the
effort to create those trifles, so it's not worth pursuing actual problems
directly.

What's so interesting about the obsession over the singularity is that there
is massive effort and capital being directed at directly solving a problem
that is purely theoretical while, for example, climate change is already
creating mass social instability all over the world, and companies working
directly on possible technological solutions for climate change have to fight
hard for every penny.

These days it definitely feels like the priorities of the owners of capital
are located somewhere in an alternate reality the rest of us can only scratch
our heads at.

~~~
edanm
So, you're saying a few things here which I disagree with.

First of all, the basic argument, at least of the "existential risk" community
that I frequent, is that, compared to humanity's extinction, nothing else
that's happening now is quite as bad. (Unless of course it is something that
would also lead to human extinction.)

More importantly to your point, you seem to be operating under the assumption
that " there is massive effort and capital being directed at [solving this
problem]" (paraphrased). As opposed to say something like climate change. This
assumption is wrong.

There was recently an incredible victory for something called the "Future of
Humanity Institute", which had just received $10 mil from Elon Musk. This was
an extraordinary sum for a charity dealing with existential risk, which is
very decidedly a niche topic. Even if you look at all the charities dealing
with x-risk, I doubt you'll be looking at more than $100 million raised
or so, and that's something of a stretch IMO. (If anyone has any real figures
on this - please let me know!).

As for something like climate change, it's hard to find good sources as most
of my Google searches return mostly criticisms of climate change activists,
but I would be shocked if the amount of money spent on climate change isn't in
the 10's of billions of dollars.

The argument of the people concerned with x-risk is that, at the moment,
considering how little money is actually spent on research concerning x-risk,
more money needs to be spent. And since these technologists are the only ones
really aware of (or at least talking about) the dangers, they need to try to get
money invested in this issue.

Btw, I will mention two other minor things:

1. I'm not trying to defend the technology industry against the criticism
that it is "currently mainly focused on creating trifles for already-wealthy
people" - I consider this a really separate topic, since it's usually
different people involved with x-risk charities vs. trying to make money.

2. A lot of the community that talks about x-risk is also part of the
"effective altruism" movement, which concerns itself greatly with solving more
immediate issues.

~~~
beatpanda
I don't understand how climate change isn't at the very top of the list of
existential risks to human civilization. If your concern is warding off
existing and urgent existential risks, and then you end up fucking around
worrying about killer robots instead of solving the problem that is literally
at your doorstep, right now, then something has gone very, very wrong.

~~~
edanm
Err, I thought I covered it in my post, but let me make it clearer -

the question in this case isn't what is or isn't a risk, it's where it is
better to spend more money. Considering the fact that climate change gets
billions in funding and other x-risks get almost nothing, the argument is
not that they're more important, but that they need the money more.

(btw, climate change might not be an existential risk because it won't
necessarily kill the entire human race).

~~~
jawbone3
Climate change is immeasurably more plausible as an extinction event than the
singularity though, and is actively being caused right now. There is no
evidence whatsoever of historical mass extinctions being linked to any
technological singularities; rather, they are all in some way linked to
climate change. Worth keeping in mind before dismissing it as a non-existential
risk.

------
paddyzab
There is one book by Lem, one of my favourites, which was never translated into
English.
[https://en.wikipedia.org/wiki/Observation_on_the_Spot](https://en.wikipedia.org/wiki/Observation_on_the_Spot)

In it he describes a variant of reality where a civilisation engineers a
security sphere over reality (called the etykosfera - a sphere of ethics, made
of nanobots called Bystry), which prevents beings from hurting themselves in
any way.

He explores the social consequences of such a security layer.

Amazing book.

~~~
sdoering
Just put it on my wish list for later ordering. Glad to see it is available
in my native tongue (German).

Kudos for the recommendation.

------
novalis78
Lem's language is so very powerful - some of the vivid imagery of his books
(which I read as a young teenager) still haunts me to this day. Especially his
ability to depict "alien" worlds/concepts that seem so close on the one hand
but then continuously elude deeper comprehension and leave you wondering and
in awe.

------
sjclemmy
What appears to be the central theme of this article is the idea of
transformation, which is an idea as old as the human species.

It seems to me that central to our psyche is a desire to transcend our current
existence and replace it with a new one. This idea is expressed all over the
place in our cultures; reincarnation, living in space, life after death, or,
on a more mundane level 'bettering oneself' through personal transformation.
The author says that Lem was 'seduced' by this idea and expressed it as the
notion of 'indomitable human spirit'. It is indeed a terribly seductive idea,
not least because of the effect it has on how we feel.

~~~
gohrt
I see two separate ideas: one is "transformation", and the other is
"immortality". The desire for "immortality" seems far stronger than
"transformation". But we admit that we are going to die one day, so we imagine
and wish to transform into another life form, and survive that indirect way.

------
alex_young
Sadly unmentioned is Lem's quite dystopian post-apocalyptic work, "Memoirs
Found in a Bathtub", a future defined by mega-McCarthyism and the pursuit of
any fragment of 'truth', however slimly defined that may be.

Worth a read.

~~~
stared
Mega-McCarthyism?

I see it rather as either a comment on the search for truth (and its absurdity,
nonsense, sense of being lost, despair) OR a "bureaucratic singularity" -
creating a self-sustaining bureaucratic organism, which does not need
any external input.

------
forscha
I wonder what the author of the article could do if he 1) got into the habit
of keeping it concise 2) used 2-syllable words instead of 5-syllable words
when reasonable.

With the rise of the web and a less-empty life than I once had, I don't have
the patience to work through verbiage for uncertain payoff, even when the
topic is a book that I'm already aware of and looking forward to reading.

------
ajuc
Lem was the most obviously brilliant author that I've read.

I still can't manage some of his serious books (Solaris was OK, but Fiasco was
too boring for me). But the Cyberiada is just too great.

I've tried reading Summa Technologiae when I was a teen, and dismissed it as I
dismissed all philosophy at the time, should probably try again.

------
ableal
[http://www.amazon.com/Summa-Technologiae-Electronic-
Mediatio...](http://www.amazon.com/Summa-Technologiae-Electronic-Mediations-
Stanislaw/dp/0816675775/)

(Summa Technologiae at amazon.com)

------
acconrad
The title made me think this was going to be about TAOCP or Godel Escher Bach.

~~~
JadeNB
What does either of those have to do with Lem's futurism? (I haven't read GEB,
so maybe it does have something to do with it.)

~~~
wues
A side note: Lem knew and liked GEB, and there are many similarities between
e.g. dialogues in GEB and Lem's The Cyberiad.

~~~
JadeNB
I originally read this as suggesting that The Cyberiad was inspired by GEB;
but, of course, chronologically, it could only be the other way.

Not that I doubt you, but do you know any reference for Lem's fondness for
GEB?

~~~
wues
In "Thus Spoke Lem" \- a several hundred pages interview with Lem - there is a
chapter about Lem's likes and dislikes in literature. He is asked about books
which influenced his thought and he mentions several of them, read when he was
young. When asked about later influences he talks about GEB and Mind's I only.
He says that again and again he sees in those books concepts similar to his
own, but he is sure that Hofstadter reached them independently. I do not think
that an English translation of Thus Spoke Lem exists.

Another connection between Hofstadter and Lem: in Le Ton Beau de Marot there
is a chapter where Hofstadter discusses possible ways of translating How the
World Was Saved from Cyberiad.

~~~
JadeNB
I'd never heard of that interview; now I'll have to see if I can find a copy.
Thanks!

