Why Stanislaw Lem’s futurism deserves attention (nautil.us)
194 points by pmcpinto on Sept 10, 2015 | 67 comments



The result is a disconcerting paradox, which Lem expresses early in the book: To maintain control of our own fate, we must yield our agency to minds exponentially more powerful than our own, created through processes we cannot entirely understand, and hence potentially unknowable to us.

This sentence (and the ones that precede and culminate in this one) made me think of Iain Banks's Culture.


In practice, though, the modern world has already done this to a degree; it's just that the yielded agency is currently distributed.

Most people have no clue how the things they depend on and put their trust in daily actually work: airplanes, cars, medicine, waste management, energy, etc.


That's not really what he meant by "exponentially more powerful." The Culture is a helpful example, because it's a society administered by ludicrously intelligent machines called "Minds" that are obviously and immediately more capable than a human could ever be. Their intellectual dominance cannot be overstated, and a single Mind is probably smarter than all of humanity put together. Each one is perfectly capable of observing a human's brain and accurately predicting their future actions, even though doing so is highly discouraged by social custom. They are truly "exponentially more powerful" and "potentially unknowable." A recurring theme in the stories is that the Culture's people essentially live as the Minds permit. They could not possibly sustain their post-scarcity lives without the Minds' superhuman abilities, and the Minds could trivially enslave the entire populace if they wished.

By contrast, airplanes aren't exactly black magic. Any regular adult human can figure out airplanes. They teach the prerequisite skills for flying, designing, and building aircraft in schools. The majority of humans alive today choose not to master the secrets of flight because there are only so many hours in the day and they have other stuff to do, but that's not the same as yielding any agency to someone "exponentially more powerful." They could just as easily have ended up the airplane guy if they'd made some different decisions in college. Cars, medicine, waste management, and energy are similarly things that anybody could potentially understand and work with given some reasonable amount of study. You'd run into trouble mastering all of them together, but that doesn't make the required mind "potentially unknowable." There are no supermen enabling modern human society; it's just regular chumps like you and me in organized groups. We could totally become two of those chumps! In fact, we probably are already two of those chumps.


Totally agree that they are different.

What I think is the same philosophically, though, is ceding the power over fundamental activities/interactions for functionality. It's not even intentional or conscious - which in fact is what I think makes the metaphor even more powerful.

When real AGI comes, if it comes off well, it has the possibility of looking like the pinnacle of functionality, just like our tools do now. So it is definitely exponentially more powerful, but to the average person it will look like magic, much as most technology does now to the under-informed.

...and given that we would have built it, it won't be black magic either - just so totally far removed from the average person that it will look like magic.


>What I think is the same philosophically, though, is ceding the power over fundamental activities/interactions for functionality.

I'm not sure what this means. If you're referring to the fact that groups are usually required to make everyday society work, that's not a useful example of ceding power. Everyone at the train company has essentially welded their power together in a Captain Planet or Voltron sort of deal to make rail transport work, but that doesn't make anyone else less powerful. It makes the train people highly interdependent both on each other and on everyone else to use the trains to go to the farms or the medicine factory or whatever and make the other parts of society work. They can't tell everyone else to sod off because they've got some magic spell that makes the trains work. The trains are run by regular chumps without any magic, and any other regular chump could be made to replace any one of them. Even if they were special, their special-ness wouldn't make them any less dependent on anybody else since they still just run the trains. By contrast, the Minds are totally required for the Culture to maintain their shiny post-scarcity status quo, no human effort could ever replace the Minds, and the Minds are not dependent on the efforts of any humans. The Culture's people have actually ceded power over all sorts of everyday stuff to the Minds. The Minds could tell everyone else to stuff it if they wanted to, and nobody would be able to do anything about it.

>When real AGI comes, if it comes off well, it has the possibility of looking like the pinnacle of functionality, just like our tools do now. So it is definitely exponentially more powerful, but to the average person it will look like magic, much as most technology does now to the under-informed.

>...and given that we would have built it, it won't be black magic either - just so totally far removed from the average person that it will look like magic.

How much it resembles magic to the under-informed is irrelevant. The under-informed aren't uninformable; they just haven't been sufficiently informed yet. I know lots of people who are convinced they can't be sailors because spending weeks on a ship just seems totally beyond them, or that they can't be physicists because they had a hard time with calculus in school. It seems to them that those tasks require some ineffable qualities that can only be found in others, but they're wrong. Both calculus and seamanship are challenging to comprehend and highly impressive when applied, but they aren't "unknowable," they're just "unknown." Any given chump could learn to pull them off if they weren't doing something else. It's only unknowable magic if they couldn't ever figure it out, or if no group of people like them working together could ever replicate it themselves.


What does it mean to "maintain control of your own fate", if a Mind can predict your future actions and therefore you lack free will?


Simply predicting it is one thing, and doesn't detract from your subjective experience of free will. You maintain control of your fate insofar as the Mind will not change things to bring about a certain outcome.


Congratulations, you've nailed down the Culture's biggest quandary in one sentence. The best answer I can give you is "good question." The best answer Banks could give you is spread out over an award-winning series of nine books, so it's probably worth a look.

For starters, Minds are fully aware of how big a deal mind reading is, and virtually never actually do it. A situation has to be ethically ridiculous before a Mind will even begin to consider it; we're talking "this person was brainwashed into hiding a nuke on the puppy daycare planet and we can't find it" sort of situations. This somewhat changes the question: "If the Mind could read your thoughts and predict your future actions, but hasn't, what does it mean to... " Again, the best answer I can give you is "good question." I suppose it's worth pointing out that if the mere possibility is enough to deprive you of free will, your universe is totally deterministic, the Mind doesn't have free will either, and you're all in the same boat. The Mind doesn't meaningfully have any "control" over you, since nobody has control over anything. The realization that you don't possess free will won't help you gain free will, since that realization was itself inevitable and the actions you take as a result are too.

Further, if you know the Mind can perfectly predict your future actions when it scans your brain, does that change how you'll act? What if it scans your brain in secret and doesn't tell you, so you don't know when its prediction was formed? I'm going to take an example from a source I'm sure we're all familiar with: the 2007 film Next, starring Nicolas Cage. The premise is that Mr. Cage's character can observe his own future for the next two minutes. The big caveat is that it only gives him a highly useful idea of what the future is probably like, not a faultless prediction. By the time he's done looking, the future has changed because he looked at it. The end result is that he never actually knows the future. His power is subject to the "observer effect": the act of detecting the future caused it to change, so he only knows what the future would have been at the moment he used his ability. From then on, the actual future is different, and the only way to know how it changed is to use his ability again, which will yet again change the future. A simpler but less Cage-tastic example is detecting electrons. You can observe when photons interact with the electron, but the electron's path was altered by the photon, so good luck figuring out where it is now. You could wait for it to interact with another photon, but that will cause... and so on. Perhaps the Mind shooting a bunch of futuristic radio waves or whatever inside your skull alters your thinky bits, and the very act of detecting your mental state alters your mental state. The Mind will then extrapolate from data that was outdated by the very act of producing it, and its prediction will be wrong. How wrong? Wrong enough to matter philosophically? "Good question."
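
To make the self-defeating-prediction point concrete, here's a toy sketch (everything in it is invented for illustration; a binary choice stands in for a whole mental state). It shows that once a prediction has to be revealed to a sufficiently contrary subject, no correct prediction can exist at all:

    # Toy model of the observer effect on prediction: the predictor must
    # announce its prediction, and the announcement changes the outcome.
    ACTIONS = ["left", "right"]

    def contrarian_agent(revealed_prediction):
        # Does the opposite of whatever it is told it will do.
        return "right" if revealed_prediction == "left" else "left"

    def find_fixed_point(agent):
        # Search for a prediction that remains true once announced.
        for prediction in ACTIONS:
            if agent(prediction) == prediction:
                return prediction
        return None  # no self-fulfilling prediction exists

    print(find_fixed_point(contrarian_agent))  # -> None

A secret scan dodges the problem, since the prediction never feeds back into the system being predicted; the interesting failure only shows up once it leaks.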


I'd also note that to predict someone's future behavior, you'd have to predict everything in their future environment, and I'd assume simulating a large part of reality would cost a lot of energy - and if we're facing heat death, energy is all we, and the Minds, have got. So economically it would be a minor form of suicide.


This is important to note, thank you. Capitalism is based on this: someone is better at doing #thing, so you can do #stuff and don't have to worry about #thing. What holds it together is that anyone can do #thing, and those that do it more safely or better get the rewards. Granted, many of these #things are highly regulated (air travel) for safety, and really only after we screwed up a lot. Interestingly, the harder or more destructive #thing is, the more we try to control it. I understand the argument that eventually we won't have the iteration time to develop the regulation necessary for #thing; Moore's law and all that jazz. But I have hope this system will take the least destructive path eventually, as it mostly has thus far.


Nobody understands all of the things society has created and that we use to maintain our control: not only the hardware, but also the economy (salaries, prices), politics (elections), and military institutions.


A few notes from the author might be useful there: http://www.vavatch.co.uk/books/banks/cultnote.htm


Interestingly, in his book 'Golem XIV' [1] Lem creates an actual example scenario of a future where mankind manages to create an AI far superior to us, only to find that said AI is not even remotely interested in playing war games for military generals and instead just delivers long lectures about humanity. So it is a bit like a simplified, more approachable version of the 'Summa Technologiae' mentioned in the article. I recommend that book; it is a great read.

[1] https://en.wikipedia.org/wiki/Golem_XIV


Summa covers something like 100 different subjects; it's 500 pages and much denser than his fiction. AI is just one of the topics.


I always wonder, if you were able to resurrect someone's brain in a computer, whether that "person" would be interested enough to even stay alive.


Here's a solid SF book that deals a fair bit with that question:

http://gregegan.customer.netspace.net.au/PERMUTATION/Permuta...


> if that "person" would be interested enough to even stay alive

If you "ressurect" someone's brain, it should behave like the person behaved when it was alive — and people usually prefer being alive to being dead.


But with a completely different set of sensory input, and under a whole different set of constraints.


Your brain is not you. If you resurrected an entire body, whether virtually or otherwise, with all the myriad microorganisms, etc. that implies, it should behave like the person behaved when it was alive.


Do you know where I can get the English version?



Golem XIV was mentioned in the article.


I think we'll be fine, as long as we don't build a machine that can create anything, as long as it starts with the letter 'N'.

Reference: https://books.google.com/books?id=kWElP9YZkzQC&pg=PA3&lpg=PA...


That was wonderful, thank you. The first Stanislaw Lem that has truly resonated with me.


There's also a fantastic 7-minute film based on the dark wisdom of Golem XIV. You should really, really watch it:

https://vimeo.com/50984940


Wow. Fave author (Lem), fave composer (Martinez), fave subject (?).


Stanislaw Lem, imho, is one of the most accurate futurologists. Drone armies, the psychological problems of spaceflight - he had it all, and his Futurological Congress is also hilarious.


Here's one recent real-life Futurological Congress -- I'm eager to know more about what happened: http://www.independent.co.uk/life-style/health-and-families/...


Why didn't they take homeopathic recreational drugs? I heard they're way safer and there's no overdosing hazard


He was already writing a lot about virtual reality (fantomatyka) in 1964. He played with the idea that once you enter it, you never know whether you escaped it or just moved to a different one.

In 1955 "Magellan cloud" he predicted a global computer network available in each home, connected to people TVs, and available on small mobile devices that were basicaly ereaders/tablets, from which everybody can access database of the whole knowledge, and ask questions. In that future books were obsolete.

BTW the most popular Lem quote in Poland is "Before I used the internet I had no idea how many idiots there are".


> BTW the most popular Lem quote in Poland is "Before I used the internet I had no idea how many idiots there are".

BTW there is no source for him actually saying that; it is always attributed but never sourced. It is a neat quote though. http://pikio.pl/przeglad-lgarstw-internetowych/ http://forum.lem.pl/index.php/topic,855.0.html http://x3.cdn03.imgwykop.pl/c3201142/comment_hLFoFsfYMX1rSeZ...


How useful would a superintelligent computer be if it was submerged by storm surges from rising seas or disconnected from a steady supply of electricity?

How useful would Elon Musk be if he were submerged by storm surges from rising seas or disconnected from a steady supply of food?

Put that way, the question sounds pretty silly: he's rich enough to buy food even if it gets expensive, and if the ocean ever got too frisky he would simply avoid standing next to it. Any superintelligent AI worthy of its lofty title could get a lot of cash; mere humans manage that sort of thing all the time. Why even mention such minor inconveniences?


Elon Musk is barely useful on high ground immediately after eating.


More than once I have wondered why so many high technologists are more concerned by as-yet-nonexistent threats than the much more mundane and all-too-real ones literally right before their eyes.

Yeah, for a group of people who hold themselves out to be so very intelligent there does seem to be a blind spot about ten miles wide.

And before you say it, you're going to have to provide some proof of the oft-repeated notion that goes something like "Uber-for-dogwalkers is going to accidentally provide the solution to climate change." Simply believing so isn't enough.


"Yeah, for a group of people who hold themselves out to be so very intelligent there does seem to be a blind spot about ten miles wide."

A thought - It's also possible that you have a blind spot yourself. It's at least worth considering.

"And before you say it, you're going to have to provide some proof of the oft-repeated notion that goes something like "Uber-for-dogwalkers is going to accidentally provide the solution to climate change.""

If you're talking about people like Eliezer Yudkowsky/Elon Musk/"the LessWrong community", etc, that's not at all what they're saying.

What they're saying is that the bad things that could happen, even if not happening now, and even if unlikely, could be so horribly, terribly bad, that it's worth making sure they don't happen.


What I'm saying is that bad things, things that are so horribly, terribly bad that they could alter the course of human civilization for the worse, permanently, are happening right now and are being largely ignored by technologists, while they are worried about preventing something that might happen, someday.

And when people level the criticism at the technology industry that it is currently mainly focused on creating trifles for already-wealthy people, the inevitable response is that incredible technological innovation, the kind we could use to solve actual life-threatening problems, might come out of the effort to create those trifles, so it's not worth pursuing actual problems directly.

What's so interesting about the obsession over the singularity is that there is massive effort and capital being directed at directly solving a problem that is purely theoretical while, for example, climate change is already creating mass social instability all over the world, and companies working directly on possible technological solutions for climate change have to fight hard for every penny.

These days it definitely feels like the priorities of the owners of capital are located somewhere in an alternate reality the rest of us can only scratch our heads at.


So, you're saying a few things here which I disagree with.

First of all, the basic argument, at least in the "existential risk" community that I frequent, is that, compared to humanity's extinction, nothing else that's happening now is quite as bad. (Unless of course it is something that would also lead to human extinction.)

More importantly to your point, you seem to be operating under the assumption that "there is massive effort and capital being directed at [solving this problem]" (paraphrased), as opposed to, say, climate change. This assumption is wrong.

There was recently an incredible victory for something called the "Future of Life Institute", which had just received $10 million from Elon Musk. This was an extraordinary sum for a charity dealing with existential risk, which is very decidedly a niche topic. Even if you look at all the charities dealing with x-risk, I doubt you'll be looking at more than $100 million raised or so, and even that's something of a stretch IMO. (If anyone has any real figures on this - please let me know!)

As for something like climate change, it's hard to find good sources, as most of my Google searches return mostly criticisms of climate change activists, but I would be shocked if the amount of money spent on climate change weren't in the tens of billions of dollars.

The argument of the people concerned with x-risk is that, considering how little money is actually spent on x-risk research at the moment, more needs to be spent. And since these technologists are the only ones really aware of (or at least talking about) the dangers, they need to try to get money invested in this issue.

Btw, I will mention two other minor things:

1. I'm not trying to defend against the criticism that the technology industry "is currently mainly focused on creating trifles for already-wealthy people" - I consider this a really separate topic, since it's usually different people involved in x-risk charities vs. trying to make money.

2. A lot of the community that talks about x-risk is also part of the "effective altruism" movement, which concerns itself greatly with solving more immediate issues.


I don't understand how climate change isn't at the very top of the list of existential risks to human civilization. If your concern is warding off existing and urgent existential risks, and then you end up fucking around worrying about killer robots instead of solving the problem that is literally at your doorstep, right now, then something has gone very, very wrong.


Err, I thought I covered it in my post, but let me make it clearer -

the question in this case isn't what is or isn't a risk, it's where it is better to spend more money. Considering the fact that climate change gets billions in funding and other x-risks get almost nothing, the argument is not that they're more important, but that they need the money more.

(Btw, climate change might not be an existential risk, because it won't necessarily kill the entire human race.)


Climate change is immeasurably more plausible as an extinction event than the singularity, though, and is actively being caused right now. There is no evidence whatsoever of historical mass extinctions being linked to any technological singularities; rather, they are all in some way linked to climate change. Worth keeping in mind before dismissing it as a non-existential risk.


"Climate change gets billions in funding" is a totally meaningless statement. Do you mean funding for basic research into the systems that cause climate change? Do you mean funding for lobbying efforts? Or are we talking about funding for people working on practical solutions?

If you believe, like I do, that every dollar spent lobbying governments to "do something" about climate change is a dollar wasted, then the funding picture looks pretty bleak.


Borrowing trouble from the future is a combination of procrastination and territorialism - by the latter I mean that if I expend enough effort publicly worrying about the Gray Goo nanotechnology disaster scenario, I'll eventually get regular shout-outs in any stories on the subject, and the more often my utterances are cited, the higher the speaking fees I can command or parlay into think-tank residencies etc.

This isn't inherently bad - as a highly imaginative person I enjoy juggling hypotheticals, and since technological externalities are an economic fact there is no shortage of potential dooms available - but let's not kid ourselves about the economics of punditocracy either.


You know, there are quite a few people who have invested a huge portion of their lives in working on things like x-risk charities. They did this even though, with their skills & intellect, they could earn significantly more money by e.g. not working at a charity.

I'm just saying, I don't think it's wise to simply sweep all of these people's hard work and efforts under the rug of "procrastination" and "they're somehow making money from this".


They have the image of all of history's doomsayers to overcome. Just because it's science-doom doesn't make it different. Most people are concerned about immediate risks because it's the best average-case strategy, even if it can cause catastrophic errors.


I only want to discount about half of it. It's true about the money, but some people prefer their rewards in the form of social status, and that too is an economic decision. I appreciate that this sounds cynical.


Not to disagree, but there are also a multitude of other incentives that can be straightforwardly aligned with the goal somebody purports to target. "Doing something meaningful" (or simply of personal interest) is a surprisingly strong motivator for people, who I think generally find life has little meaning.

Social status is also itself a powerful tool for effecting change, and so could be sought without desiring it for its own sake (although whether or not this corrupts, were a person to aim for it, I do not know, nor can I guess at the incidence of success for people with this motive).


Actually, I admit to hanging out with that crowd on the internet sometimes, and while right-wingers are louder on the official "community blog", a lot of people among the community are explicitly anticapitalist. Hence the "capitalism == paper-clip monster" blog post yesterday.

What it amounts to is that if you ask what the loudest pet cause in the entire community is, you'll get "Friendly AI" and "the Singularity", the impending doom of the human race, and probably something about overbearing political correctness. If you conduct a statistically significant poll, you'll find that our opinions are largely just sampled from the tech sector as a whole, so we're actually very concerned about inequality and global warming.

Nobody gives a mic to the people who sound relatively normal and give answers to "interesting" questions that just aren't very entertaining because they're too damned simple and realistic.


Well, if you start with the premise that within a few decades we'll have superintelligences and magical nanotech, I wouldn't be very concerned about climate change either.

I think the premise that we'll have superintelligences and magical nanotech in a few decades is fatuous, but that's a different fight.


There is one book by Lem, one of my favourites, which was never translated into English: https://en.wikipedia.org/wiki/Observation_on_the_Spot

In it he describes a variant of reality in which a civilisation engineers a security sphere over reality (called etykosfera - a sphere of ethics, maintained by nanobots called Bystry), which prevents beings from hurting themselves in any way.

He explores the social consequences of such a security layer.

Amazing book.


Just put it on my wish list for later ordering. Glad to see it is available in my native tongue (German).

Kudos for the recommendation.


Lem's language is so very powerful - some of the vivid imagery of his books (which I read as a young teenager) still haunts me to this day. Especially his ability to depict "alien" worlds/concepts that seem so close on the one hand but then continuously elude deeper comprehension and leave you wondering and in awe.


What appears to be the central theme of this article is the idea of transformation, which is an idea as old as the human species.

It seems to me that central to our psyche is a desire to transcend our current existence and replace it with a new one. This idea is expressed all over the place in our cultures; reincarnation, living in space, life after death, or, on a more mundane level 'bettering oneself' through personal transformation. The author says that Lem was 'seduced' by this idea and expressed it as the notion of 'indomitable human spirit'. It is indeed a terribly seductive idea, not least because of the effect it has on how we feel.


I see two separate ideas: one is "transformation", and the other is "immortality". The desire for "immortality" seems far stronger than "transformation". But we admit that we are going to die one day, so we imagine and wish to transform into another life form, and survive that indirect way.


Sadly unmentioned is Lem's quite dystopian, post-apocalyptic work "Memoirs Found in a Bathtub": a future defined by mega-McCarthyism and the pursuit of any fragment of 'truth', however slimly defined that may be.

Worth a read.


Mega-McCarthyism?

I see it rather as either a comment on the search for truth (and its absurdity, nonsense, sense of being lost, despair) OR a "bureaucratic singularity": the creation of a self-sustaining bureaucratic organism which does not need any external input.


Dystopian works published in the Eastern Bloc during the Cold War were typically disguised as critiques of the West; otherwise, not getting published would be the least of the author's worries.

Still, it was rather obvious to readers what experience their favorite writers were drawing on in their visions of totalitarianism and alienation (and they didn't really need much of the "mega" prefix).

For this reason I'd take terms such as "mega-McCarthyism" with a grain of salt in this context.

Evil and bloodthirsty as he was, it's not really Senator McCarthy you'd be uncomfortable about in People's Poland AD 1961.


And in any case, it relates to the fragility of information stored on the Internet. (The funny thing is that the disintegration of books was not too realistic, but the possibility that we store all information on computers and then lose it is very real.)

In any case, for me "Memoirs Found in a Bathtub" (or rather the original "Pamiętnik znaleziony w wannie" - I am one of the lucky ones who can read Lem in Polish) is the best book by Lem (but not the most typical, and certainly not the easiest).


I wonder what the author of the article could do if he 1) got into the habit of keeping it concise and 2) used 2-syllable words instead of 5-syllable words when reasonable.

With the rise of the web and a less-empty life than I once had, I don't have the patience to work through verbiage for uncertain payoff, even when the topic is a book that I'm already aware of and looking forward to reading.


Lem was the most obviously brilliant author that I've read.

I still can't manage some of his serious books (Solaris was OK, but Fiasco was too boring for me). But the Cyberiada is just too great.

I tried reading Summa Technologiae when I was a teen and dismissed it, as I dismissed all philosophy at the time; I should probably try again.



The title made me think this was going to be about TAOCP or Gödel, Escher, Bach.


What does either of those have to do with Lem's futurism? (I haven't read GEB, so maybe it does have something to do with it.)


The main title of the article is The Book No One Read and this is what the OP meant, I suppose.


A side note: Lem knew and liked GEB, and there are many similarities between e.g. dialogues in GEB and Lem's The Cyberiad.


I originally read this as suggesting that The Cyberiad was inspired by GEB; but, of course, chronologically, it could only be the other way.

Not that I doubt you, but do you know any reference for Lem's fondness for GEB?


In "Thus Spoke Lem" - a several hundred pages interview with Lem - there is a chapter about Lem's likes and dislikes in literature. He is asked about books which influenced his thought and he mentions several of them, read when he was young. When asked about later influences he talks about GEB and Mind's I only. He says that again and again he sees in those books concepts similar to his own, but he is sure that Hofstadter reached them independently. I do not think that an English translation of Thus Spoke Lem exists.

Another connection between Hofstadter and Lem: in Le Ton Beau de Marot there is a chapter where Hofstadter discusses possible ways of translating "How the World Was Saved" from The Cyberiad.


I'd never heard of that interview; now I'll have to see if I can find a copy. Thanks!



