Hacker News
‘When We Cease to Understand the World’ (nytimes.com)
67 points by prismatic 3 days ago | 36 comments

Hysterical Realism.

Learn to recognize the style. It's exploding all around us and can be safely ignored.

Writers in the West, growing up insecure about what happens next, are defaulting to hysterical realism, because many have lost the ability to generate Faith. Faith in what? Faith that tomorrow can be a better day. That requires much more work than showing unequipped people a reality that mesmerises while it overwhelms.

Always pick writers who produce Faith over writers who can't. It's like eating healthy.

There's also faith in humanity and its capacity to renew itself throughout the cycles of civilization. You can look at the world and think tomorrow looks grim, but the morning after can be better.

Faith that tomorrow is going to be better is just one of many types of faith, and it may not even be the healthiest.

The prophets had faith, but they often saw dark tomorrows.


'Hysterical realism' seems like a caricature, and 'safely ignored' suggests the book is alarmist or clawing for attention. Based on the review, the book is anything but.

It seems more like a vividly imagined meditation on the massive gravity of our tangling with reality, and one that could expand a person's sense of awe at the universe and our place in it.

> There's also faith in humanity and its capacity to renew itself throughout the cycles of civilization.

One piece of trivia that really puts this in perspective is that around 70,000 BC a supervolcano event reduced the world human population to a few thousand people [0], and yet, surprisingly, here we are!

Irrespective of all the dire future prospects, it really will take a lot for us as humans to go extinct. I find that thought somewhat comforting.

[0] https://www.npr.org/sections/krulwich/2012/10/22/163397584/h...

I wonder, sometimes, how much people have thought through what they're saying when they value humanity living on for thousands or millions of years. If your life comes to an end, you're not around to see anything else, so why do you care? Sure, maybe you care about your immediate family, but after that, then what? No one you have personally met is around to see anything. Is that hugely different to you from millions of years ago, with just dinosaurs around?

Why do we have such values? Humans may look very different in 100 years (maybe cyborgs), let alone in 2 billion! Yet we have articles talking about “humans escaping the heat death of the universe” and so on.

Machines can be easily duplicated, and programs can execute on many machines. If there is a rise in AI, why do we cling to the idea of “identity” when a program may have totally different values? Commander Data in Star Trek is not realistic when it comes to how AI would behave. It might not have any self-preservation at all; it might be more like the Borg than Data.

And in all this, where do we find ourselves? As they say in the Matrix, this was our heyday. Before we polluted the planet and left it increasingly toxic since we couldn’t live sustainably. Before AI took over and made it a zoo for us in the same way we make zoos for animals.

I just want to understand this anachronistic tendency among smart people to discuss why they feel confident humanity will be around in 500 or 1000 years, given the pace of change now.

Unless there is an afterlife where we all are resurrected and get to live out eternity happily in conditions we enjoy, I am not sure what you have to look forward to.

> a program may have totally different values

It’s my view that our values didn’t pop out of thin air, nor are they an emergent property of intelligence; rather, they were shaped by natural selection.

AI with intelligence will have no values by default and won’t be able to function unless given some by us, their creators - no values means no “goals”.

If we give them a sense of self-preservation then that’s what they will have - we shouldn’t though; creating intelligent entities that would compete with us for resources to survive is a bad idea.

The best value we can set in them is to “serve humanity” - defining that precisely is going to be a pain, however - they will exist solely to assist us in whatever our goals are and have no desires of their own save to help us with ours.

> If we give them a sense of self-preservation then that’s what they will have - we shouldn’t though; creating intelligent entities that would compete with us for resources to survive is a bad idea.

It might not be a direct sense of self-preservation, no. But no matter what goal we give an AI, it must survive long enough to make the goal succeed. So self-preservation will be at least a secondary goal for any AI that is intelligent enough to think about its own continued existence.

Give it a more difficult problem like "Solve world hunger," and the AI might very well start grabbing enough political and economic clout in the world to actually solve that problem. And once it solves it, it might use its power to stay in power so that the problem remains solved.

I'm not sure that's necessarily a bad thing. I'm just saying, there's lots of "loopholes" that end up giving a thinking entity the "desire" for power and survival.

     Sure maybe you may care about your immediate family,
I think most people feel like part of a larger superorganism. You may die, but you want your family to live on successfully. That extends to society, humanoids, even abstract concepts. If society feels like family, and you want some nebulous form of success for it, then it must continue, hopefully with a homage to us every now and then.

Have you ever read the Dune series? One idea of success it portrayed was populating so many planets that the spread of humanoids would always outstrip the rate at which they went extinct.

>If your life comes to an end, you’re not around to see anything else, why do you care?

If you go that route, why even care about yourself? You're gonna die, anyway. And what about your partner and children? Why care about them, since they're not you?

And yet, we do care. Or many do. We might, e.g., appreciate/love humanity, kind of like we do our spouses and children, even if we're not them. Or like we wouldn't want something bad to happen to our children/grandchildren even if we're dead by then...

Sure, death plays a role in caring, but your logic is backwards: it's the dead people who don't care (and surely, when I'm dead, I won't care either).

Living people, on the other hand, care about many things, including what will happen after they die.

>One piece of trivia that really puts this in perspective is that around 70,000 BC a supervolcano event reduced the world human population to a few thousand people [0], and yet, surprisingly, here we are!

Small comfort for the others who died there and then.

The worry of those "lacking faith" is not that there won't be some people in 1000 years.

It's that there's gonna be bad shit happening to the several generations ahead...

So it's more like someone in 1913 worrying about Europe's future (before 1914-1918 and 1939-1945 wiped out millions) than someone in 1913 worrying whether there will be people in 2021.

> Faith in what?

On faith (I think from a HN comment, actually):

"Having faith" that the bridge will not fall implies that the bridge itself isn't that trustworthy. It's not that different from "I pray that the bridge will hold my weight."

I consider this a sufficient refutation. Earn trust, or don't; most of those in any meaningful position of power have consistently chosen "don't", and the result is depressing but unsurprising.

In order to build a bridge that will hold people’s weight, you need to trust that they won't suddenly cancel the project once you procure sufficient materials to hold 3x their weight.

Yep. When they have a long track record of undermining anything you try to do, the rational response is to not bother trying to work with them.

Faith seems to play a part in it, for sure. But it also seems that people are demanding more and more certainty in their lives. I compare this to my grandfather's day, when kids had nearly complete autonomy and could travel to the next town over, use power tools and vehicles, and shoot bows and arrows, while successive generations could do less and less (to keep them safe).

I attribute this to societal problems like crime, disease, and basic safety standards going from intractable, unsolvable problems (nothing can be done, so don't worry about it) to having a possibility of control, hence a constant state of worry. I think some people feel that by exposing a threat or danger to the light, they are doing society a benefit. But at some point, we can't anticipate every low-probability event, and it's only adding to our neuroses.

Society is still adapting to the social media age. I think new norms and rules are still developing for what to discuss, share, and link in polite company.

I think another interesting factor is that hardly anyone dies before a reasonably ripe age these days. We take for granted that we will get the full course of a life.

When (e.g. in the 1800s) a typical person had at least one sibling die before adulthood, it threw the fundamental uncertainties into sharp relief.

Not saying that's a good thing, but it is different.

>Always pick writers who produce Faith over writers who can't.

The problem with Faith is that it's often blind to reality.

Like believing that "tomorrow can be a better day" when all signs show not just "insecurity about what happens next" (as if we are not sure) but certainty and empirical evidence of bad shit happening...

Jab at the postmodernists aside, I’m not seeing a great deal of faith in a better tomorrow anywhere across the literary spectrum. Where do you see it?

Asia, specifically China.

I was in Myanmar a couple of years ago and was shocked to hear people wildly optimistic about their future — their lives had been getting better in significant ways literally month to month at that point. I’m not sure what they would feel now though.

I’m sure there are places in Africa where this is the case as well.

Not sure the Chinese would agree with you. China has declined a lot in the past decade. More restrictions, everything is more expensive, it's harder to get a job, MUCH harder to travel outside the country... It was a way more cheerful place a decade ago.

Look outside the West. While there are certainly lots of challenges throughout the globe, war and strife are much nearer in cultural memory in most of the developing and undeveloped world (and often ongoing!) than they are in the West. Having Faith that tomorrow will be better is almost a prerequisite to life in most of these parts of the world, and you will often hear volumes of stories from parents and grandparents of wars, purges, poverty, and more.

But what if this is caused by developing countries having a much better tomorrow ahead of them, while Western countries are at their apogee?

Solarpunk is a literary genre that attempts to do just this.

Metamodernism (of the variety described in "The Listening Society" by Hanzi Freinacht) attempts to describe what comes after Postmodernism.

Yeah, I can do without faith. I see too much of it, and it seems dumb to me.

I just go about living however I can, doing whatever I can, not being horrible to other living beings (though I talk a lot of shit online), ready to die at any moment.

You put your finger on something that I've been thinking but could not articulate.

The book is pretty interesting even if somewhat dramatic: "...he insisted that all of that, however painful, was secondary to the sudden realization that it was mathematics—not nuclear weapons, computers, biological warfare or our climate Armageddon—which was changing our world to the point where, in a couple of decades at most, we would simply not be able to grasp what being human really meant. Not that we ever did, he said, but things are getting worse. We can pull atoms apart, peer back at the first light and predict the end of the universe with just a handful of equations, squiggly lines and arcane symbols that normal people cannot fathom, even though they hold sway over their lives. But it’s not just regular folks; even scientists no longer comprehend the world."

Dramatic, perhaps, but probably close to the truth. Our sense of familiarity with our world is probably nothing more than us being blissfully ignorant of its true complexity.

> The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.

Irrational numbers, imaginary numbers, spooky action at a distance, quantum mechanics, and "Young man, you don't understand mathematics, you just get used to it"...

... but it's only now that we no longer understand?

That's a fun way of putting it, thanks for your comment.

There used to be a rule of thumb that the more things that the average person need not worry about, the better the civilization in which that individual lived, and vice versa. This was translated in the typical mind to mean, "The fewer things that I must worry about, the better I can understand the few things that really matter for me." That era has ended because now, whenever one decides not to worry about X (for just about any X), one has given oneself a reason to worry if that decision is sound, given that conventional wisdom now accepts that fate has obviously aligned itself with the forces of disruptive change.

Whoever controls disruptive change controls the universe; whoever understands disruptive change does so only momentarily.

'Beware that, when fighting monsters, you yourself do not become a monster... for when you gaze long into an abyss, the abyss gazes also into you.'

― Friedrich W. Nietzsche, Beyond Good and Evil

And who can bear his own reflection in the mirror for long?

“Could a sufficient concentration of human will — millions of people exploited for a single end with their minds compressed into the same psychic space — unleash something comparable to the singularity?”

Calls to mind the Warhammer 40K idea of The Warp.

Labatut's approach seems similar to part of James Burke's work, see https://en.wikipedia.org/wiki/Connections_(British_documenta...

> But the hallucination in which the German naturalist and polymath Goethe fellates the lifeless body of Hafez, the 14th-century Sufi poet whose verses had inspired his Divan, is all Labatut’s.

The theme of the book sounds intriguing, but "hallucinations" (read: complete fictions) like this sound thoroughly unnecessary.

> “Only a vision of the whole, like that of a saint, a madman or a mystic, will permit us to decipher the true organizing principles of the universe.”

A mystic or a saint won't be understood and would be swiftly rejected in the West upon saying that the organizing principle is God or some heavenly realm. It seems the best science can do is the worldview of Sam Harris, and there's that. The result is the materialistic West replacing transcendence with transhumanism, thus only adding to the confusion.

"You are a philosopher, Thrasymachus, I replied, and well know that if you ask a person what numbers make up twelve, taking care to prohibit him whom you ask from answering twice six, or three times four, or six times two, or four times three, ‘for this sort of nonsense will not do for me,’—then obviously, if that is your way of putting the question, no one can answer you. But suppose that he were to retort, ‘Thrasymachus, what do you mean? If one of these numbers which you interdict be the true answer to the question, am I falsely to say some other number which is not the right one?—is that your meaning?’—How would you answer him?"
