Man as a Rationalist Animal (2017) (samzdat.com)
50 points by simonsarris 9 days ago | 36 comments





There's a ton of (especially early) nerd culture idolizing many forms of what most people know as the "Spock" character (somewhat more accurately, Vulcans in general). This is typically contrasted with emotions, sometimes as a mystical part of the human soul that exists outside of logic, sometimes as the source of evil.

It's a constant bother. The only way to apply pure logic to any problem is to pare it down to a mockery of the real world problem. That's why we have "gut" or "emotions": We apply imperfect patterns to complex issues, otherwise we'd never make it through a single day; it's an optimization.

Tech/nerd/stem types absolutely love to do this. We take a problem, pare it down to its essence and then solve it. When no one listens and no logical counterargument prevails, the cries for technocracy start to ring out. A classic attempt to dissect this fallacy was the old blog post "What color are your bits?" [1]

As technology companies continue to grow in power relative to all other companies and governments, I'm very interested in watching how this plays out.

[1] https://ansuz.sooke.bc.ca/entry/23


"Tech/nerd/stem types absolutely love to do this. We take a problem, pare it down to its essence and then solve it."

In my experience, the tech/nerd/stem types being hypothesized about actually do not pare a problem down to its essence. Instead they do the equivalent of trying to solve the screaming of a hungry child by deciding that the essence of the problem is "too much sound" and putting the child in a soundproofed room.

In truth, it is entirely possible to apply pure logic to a problem if the problem is appropriately scoped to align closely with actual reality. In my experience, denying the emotional aspects of something is itself irrational behavior: it assumes that the only things that exist are the things the tech/nerd/stem person themselves understands.

This results in the hypocritical behavior of a tech/nerd/stem type crying out (emotionally) to solve the problems of emotionality.


I think we agree, but I appreciate that my wording wasn't perfectly clear.

"pare down to its essence" was a bad way of saying "disregard intersecting issues and focus on one logically resolvable issue." Your soundproofed room analogy is apt.

I also agree that applying pure logic to a scoped problem is not only possible but desirable... but I'd argue that properly scoped problems are rarely as useful to solve as the scoper might think. In many cases "merely" scoping the problem in a novel way leads directly to a truly useful course of action, and is most of the hard work.

I agree completely that emotions are not something we should deny, but rather something to /include/ when trying to solve problems. (My complaints are about emotions being placed outside of, or opposed to, the realm of logic, where accounting for them is "illogical".)


Yes, I agree, but what do you mean by "merely scoping the problem in a novel way", and how does it differ from the way I describe scoping?

(Not attacking, genuinely curious because I'd love to find new and more useful courses of action when it comes to problems...)


I think we are both arguing that some people narrow the scope of arguments to the point that they aren't useful to the original problem, merely more amenable to logic. Let's call this "not useful scoping".

I further argue that useful scoping (isolating the problem in a way that solving it provides a solution amenable to all those who proposed the problem) often /is/ the hard work, itself a product of much time and logic.

Far from proposing a useful course of action, I simply lament that we often choose our scope to support simple logic, rather than use complex logic to improve the scope.

No surprise either: each life only has so many hours.


Ah, ok, thanks for clarifying.

It is possible to genuinely enjoy scoping the problems out though, evaluating their complexities, etc. It's also possible to say "I don't have time to evaluate the thing, so I'm not going to conclude anything about it" (although the latter irritates lots of friends, haha)


What's really interesting is that when the ancient Greeks spoke of 'reason', they didn't mean a cluster of intellectual processes divorced from our emotional lives, but an approach to the world that was informed by logic and also the noblest sentiments within us.

Well, tech is full of famous smarties who, let's say, have famously underperformed in the emotions department.

If the tech world then filled up with similarly emotionally unaware folks who idolize the smarties, it's quite likely they'd interpret those two traits as a package deal and emulate both.

Since most techies aren't famously smart, the emotional unawareness at least gets them 1 out of 2.

This rank speculation explains why techies get famously angry when asked to be even minimally aware of the effect their behavior has on others.


The problem isn't even one of over-simplification; it's that logic does not inherently possess values. Logic can only tell you whether a conclusion follows from premises, not whether those premises are correct. It can tell you whether an idea is consistent but not whether it is good.

This is exactly what I'm talking about. You've narrowed down the scope of logic to apply to everything !good, which is some emotional value that exists outside of logic. In reality "good" is a vast trove of information which differs in the minds of every party to the problem.

Logic absolutely applies to this set of information!

Wanna-be technocrats should understand that exploring that vast trove of information (which, in practice, they can't actually do) would lead to a more widely accepted solution.

Accounting for all these various "goods" logically results in messy compromises which run exactly counter to the technocratic dream.

Democracy is an imperfect attempt to distribute the logical calculation of all these "goods" and come up with a big ugly messy solution.

(Some parallels with capitalism exist here)


This "spock" character is just another form of rationalism. It has a long history, for example french revolution.

That's true, but it doesn't really have much to do with my points:

* Common (current) culture around rationalism fantasizes that a human using it can choose the best course of action in every situation (and examples to the contrary are typically full of woo-woo about human emotions and souls). This is computationally impossible for the brain.

* Hacker-news-audience-types will often discard important nuance on a topic in order to reduce it to something that can easily be rationalized about.

If historical cultures "suffered" from the same delusions, it would be more interesting to see how my points did or didn't apply to them than simply to note that they existed.


My point was about "idolizing" Spock, as a form of worship.

What's funny, watching Star Trek, is how obviously Vulcan "logic" isn't (usually?) actually logic. I've assumed that was intentional.

Would Sam Harris be an example of a person who holds such beliefs? I would say definitely yes, but I'm curious whether others would agree.

The concept that emotions are irrational is itself irrational. They are a physiological reaction to environmental conditions that is a byproduct of untold generations of evolutionary development. To dismiss them out of hand for the sake of deifying a comparatively infant-like intellectual culture is not reasonable. Emotions are Us, not some inconvenient creature to squash for the greater good.

A common mistake (those who see themselves as) clever people make is that they believe that if you win an argument, you are correct.

That's why when a person is upset and has a hard time expressing themselves, or when an old person struggles to explain the importance of some traditional custom to a new generation, people tend to automatically dismiss them. We often care more about winning arguments than about finding truth.


> A common mistake (those who see themselves as) clever people make is that they believe that if you win an argument, you are correct.

You contradict this later by saying "We often care more about winning arguments than about finding truth." This is right - it's not that people think that winning the argument means they're correct, it's just that it feels good to win an argument. Winning does matter more than finding the truth. Why? Feelings. Feelings govern everything. Life would literally have no meaning without the feeling tones that give it meaning.


Sure, but even total hedonists believe that sometimes you have to not spend everything you have on candy, sacrificing immediate pleasure so you can eat more total candy in the long run.

We should also resist the temptation of declaring victory in a discussion and instead listen carefully to what irrational, emotional, even inarticulate people have to say.


I really like the Daodejing on this topic.

Chapter 17:

The greatest of rulers is but a shadowy presence;

Next is the ruler who is loved and praised;

Next is the one who is feared;

Next is the one who is reviled.

Those lacking in trust are not trusted.

However, [the greatest rulers] are cautious and honor words.

When their task is done and work complete,

Their people all say, "This is just how we are."

Chapter 29:

Those who would gain the world and do something with it, I see that they will fail.

For the world is a spiritual vessel and one cannot put it to use.

Those who use it ruin it.

Those who grab hold of it lose it.

[...]

(Phillip J. Ivanhoe translation)


Reminds me of

Lao Tzu: “A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves.”


Yep, that's actually the last three lines I quoted from chapter 17 in a different translation.

The account of pessimism and optimism which the author begins with is too simplistic.

What matters for our outlook is not, in the first instance, whether humanity's failures in the past are attributable to malign motivations or technical incompetence, but whether the problem - whatever it is - is soluble.

If our major failures are ultimately the result of ill will, but humans are inherently malign creatures, or our collective endeavours are always undermined by the malignant actions of some individuals, then this would be little comfort indeed.


> If our major failures are ultimately the result of ill will, but humans are inherently malign creatures, or our collective endeavours are always undermined by the malignant actions of some individuals, then this would be little comfort indeed.

If people really believed that, they wouldn't complain so much about it, and they certainly wouldn't be politically active about it. There would be no point.

Arguably, this is true for most of the population. But there is a big loud minority that works very hard to make sure the correct lizard rules over us, so they must believe that not all humans are inherently evil.


I found the Slate Star Codex review (recommended by this article) more readable.

https://slatestarcodex.com/2017/03/16/book-review-seeing-lik...

I think there are fascinating comparisons to be made to legacy code and the full rewrite.


Especially when the thing that is being rewritten and refactored is not the software but the human processes that the software is meant to facilitate.

How many managers choose to enforce, say, an Agile methodology on their team, not because they are any good at Agile but because existing tools like JIRA support that workflow and it's easier to just go along with it?


I think that's a separate application of the principles involved, but an interesting one as well, to be sure.

What a sophomoric article. For example:

> At first, the state just puts random names on the streets. This helps some, but the residents still colloquially go by the old terms they know, which causes problems for dispatch. Moreover, most of those alleys are still too narrow for ambulances to get through. The state decides on a more radical project: it’s going to plow through what it can and build new, ordered streets based on a grid. While they’re at it, they decide to make one commercial district and one residential district – it’s just a better system.

Just count the absurdities. "[T]he state just puts random names on the streets." Ludicrous. I mean, we can't even talk about this because we've lumped so many different things into the term "state." To even begin thinking about this, we have to assume, as I'll do, a first-world democracy, for the sake of having something to talk about. I'd give even odds that in no first-world democracy has "the state just [put] random names" on streets, especially where the residents have "old terms they know." Even if they did, how likely is it that no dispatcher would be, or know, a resident? How likely is complete ignorance on their part?

Then, we have: "The state decides on a more radical project: it’s going to plow through what it can and build new, ordered streets based on a grid." What? Here we can go quite a ways down the scale from "enlightened democracy" toward "tin-pot dictatorship" and still strain credulity to think of a public welfare project where "the state" whimsically decides to "plow through what they can" and rebuild "based on a grid." (As evidenced, I suppose, by all the recently reordered neighborhoods on Google Maps, and by the news stories of the brand-new kind of urban displacement that would entail.)

And they just "decide," "while they're at it," to rezone? Unlikely to the point of making one angry. It shocks the intellect.

The article is pablum, meant to rehearse a mushy market ideology so that a pathetic kind of know-it-all can feel better about their vaguely held lazy beliefs.


> Unlikely to the point of making one angry. It shocks the intellect.

I am sincere when I say you need to read considerably more history. You could start with Seeing Like a State, the book this article is reviewing, since it contains many examples that would, apparently, shock your intellect. Many of the schemes tried in history are certainly shocking.

If you want to flip through Wikipedia instead, I would suggest looking into the works or deeds of Le Corbusier, Earl Butz (maybe the man solely responsible for massive-scale farms in the USA!), Robert McNamara, Robert Moses, Jean Monnet, the Shah of Iran, David Lilienthal, Vladimir Lenin, Leon Trotsky, and Julius Nyerere; for examples of re-ordering existing neighborhoods, see especially the work of Moses, but also Haussmann's renovation of Paris and Le Corbusier's proposed (but never enacted) remaking of the Paris center. Maybe also read about Oscar Niemeyer and the making of Brasilia.

Or, if you want a US centric look try The Death and Life of Great American Cities, where Jane Jacobs details lots of things that happened in the USA that you may term "unlikely."

> we have to assume, as I'll do, a first-world democracy

It's not clear why you'd have to assume this. The book covers societies in very different times, across continents, but certainly includes first-world democracies, as well as forced villagization in Ethiopia, forced standards conversion in medieval France, the rise of grids in many countries, and so on.


> we still strain credulity to think of a public welfare project where "the state" whimsically decides to "plow through what they can" and rebuild "based on a grid."... they just "decide," "while they're at it," to rezone

The reason this seems so incredible is that it used to happen all the time, at least based on Caro's book "The Power Broker" about Robert Moses: randomly tearing down neighborhoods to build highways, housing projects, etc. In my hometown of Albany, two neighborhoods were entirely torn down in the 70s to build the massive "Rockefeller Plaza."

The only reason planners no longer do this is that the outrage in the wake of people like Moses was so immense. But it's not "unfathomable"; it's history.



I think the example was meant to be hypothetical and a bit absurd so that people can understand the point being made. There are enough real world examples of state intervention for the "common good" gone awry.

When you take examples over the top, you are simply lying, because that is the difference between sensible choices made in reality and insane nonsense choices made in an ideology-tinted nonreality. And extrapolating examples globally is just as absurd: taking an exceptional mistake and simply assuming it's the status quo.

These kinds of bad mental shortcuts are the result of decades of propagandistic programming, not any kind of reflection of reality.


What is normal changes; what seems absurd today might be quite possible tomorrow.

Besides that, there is the fact that "Authoritarian High Modernism" absolutely existed in the past century and resulted in all sorts of "insane nonsense choices" being made to the great detriment of millions of people.


Sure, but the author should have relied on those examples instead of creating some sort of bizarre hypothetical narrative to argue his point. I find it's always telling when an argument is based on hypotheticals or "thought experiments". It generally means that the author had to make stuff up out of thin air to argue their point. It's antiscientific at best and disingenuous at worst.

He isn't making it up; this happened most famously in Paris, but also in many other places.


