Hacker News
Doubting at the door: when it's right to be wrong. (For the lesswrong crowd.) (techno-anthropology.blogspot.com)
14 points by KennethMyers on Jan 24, 2011 | 17 comments



I think your language, even in the title, indicates the problem with the question, and why, while I appreciate the goals, purposes, and actions of the lesswrong crowd, I wouldn't consider... um, "joining up."

The whole premise, as I see it, of being a rationalist is that there are particular methods for working with truth, that there is an actual truth, and a solid, singular rationality. The problem is that, having begun using those methods, there is no sense in declaring that you are now in charge of the truth, or that you really have any particular access to it at all.

Statistical analysis doesn't prove things correct. It allows us to prove things incorrect. The assertion that whatever is left standing is therefore true typifies rationality.
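
To make that concrete, here's a toy simulation (a hypothetical coin-flip example, with my own numbers): a test can reject the hypothesis that a coin is fair, but a large p-value never proves fairness.

    import random

    def p_value_fair_coin(heads, flips, trials=100_000):
        """Estimate by simulation how often a FAIR coin would look at
        least this lopsided. A small p-value lets us reject fairness;
        a large one proves nothing -- the coin could still be biased."""
        observed_dev = abs(heads - flips / 2)
        extreme = sum(
            abs(sum(random.random() < 0.5 for _ in range(flips)) - flips / 2)
            >= observed_dev
            for _ in range(trials)
        )
        return extreme / trials

    print(p_value_fair_coin(heads=80, flips=100))  # tiny: reject "fair"
    print(p_value_fair_coin(heads=52, flips=100))  # large: no conclusion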

But here's the thing, here's why it's called lesswrong, why the idea is to move incrementally toward the truth: everyone's actually pretty rational. Some people are more wrong than others, but given the assumptions they work with, they are making rational choices.

If you are declaring people irrational, the problem is with your definition of rationality, not with them. In finding out _why_ they are rational you will find out _why_ they think differently than you.

Take atheism vs. theism. The assumption of irrationality by either party shuts down debate and lets no information pass between the viewpoints.

But if you realize that everyone's pretty much the same, the problem you raise just _goes away_. Everyone's at a different place on the axis of rationality, no one is stupid, and there are people who are more right than you so there's not much point being annoyed with those who are less right.

So calling one's self a rationalist, to me, is either equivalent to saying "I am a human being," because all humans are somewhat rational and believe in the assertion that true things can be known (whether they say so or not), or "Other people are dumber than me and that is a part of my identity."


Just because all humans are intelligent (and rational), from Einstein to the village idiot, and the difference is really minuscule if you think about it, doesn't mean we should stop caring about those minuscule differences amongst ourselves. Calling people stupid doesn't help debate, I'll agree, but sometimes "misguided" doesn't quite cover it and we have to recognize that there are "stupid" people (where stupid only makes sense with humans comparing humans). I'll leave it without argument that "stupidity" should not in general be a desirable state.

I agree with you about the term "rationalist" as a signal. To me it's just saying that "I study the arts of rationality to become better at applying rationality in my life using my conscious mind, rather than letting the unconscious parts of my brain do what they were evolved to do (that is, roughly approximating Bayes' Theorem in inconsistent ways)." I personally don't like to use such labels, it reminds me of people proudly declaring themselves to be Objectivists, Atheists, Christians, etc. I don't care what you think you are, I want to know what you do, and I don't want to have to make guesses and assumptions that can lead debates into dead ends.
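
As a concrete picture of what an explicit Bayes update looks like (the numbers here are invented for illustration, not from the thread):

    # P(H|E) = P(E|H) * P(H) / P(E), with made-up medical-test numbers

    prior = 0.01           # P(disease): 1% base rate
    sensitivity = 0.95     # P(positive test | disease)
    false_positive = 0.05  # P(positive test | no disease)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive
    print(posterior)  # ~0.16: even a "95% accurate" positive leaves low odds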

Also the terms "rationality" and the like are often abused and overloaded. So we're all on the same page: http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/


>I'll leave it without argument that "stupidity" should not in general be a desirable state.

I'm not saying we should question the concept of stupidity, I'm saying that self-identifying as a rationalist is like self-identifying as a human--that is to say, it's redundant. And if it isn't redundant, then it's elevating your personal rationality above others.


Ahh.. As to redundancy, is there a case to be made that all or most self-identification reduces to self-identifying as a human? (After all, we're the only species (we know of) that self-identify with some word...) If not all self-identification is redundant, then surely many forms of self-identification could be made to be elevating your personal self above others? (Never mind whether it's your personal rationality or your personal programming language preference.)

I guess I'm not seeing your point on this. Why pick on rationalists? Because they think they're smarter and not supposed to fall into the traps of lazy/empty-labeling and social signaling? I'm reminded of http://xkcd.com/774/


I reject your case. Self-identification isn't even necessarily something that's public. It's as much a question of who you are to yourself.

Self-identification is a broad topic. Sports fans create opposition with other self-identified fans for the sheer hell of it. Crack addicts self-identify. Being married is a joint self-identification.

In any case, the redundancy is meant to force the argument to the other definition. My main point is not that they think they're smart; it's that they're trying to sort people into valid/invalid thinking bins.

And as a side note, there's something very wrong with thinking that social signaling or labeling is a trap.


But rationalism is not just the assumption that all others are dumber, rather the realization that some are dumber and some are smarter. The goal then becomes to determine which are which, and to move oneself toward the truth. It assumes that there is some method of comparison, and that not all solutions are equal. It does not assume that the way is clear and unambiguous, nor that the truth is all on one side.

Compare Western medicine with homeopathy. I'd wager my life that there is no such thing as water memory. But at the same time, the placebo effect is definitely real. Where an absolutist would argue that modern medicine has nothing to learn from homeopathy, and a relativist would argue that from within each system each is equally 'true' for its adherents, a rationalist will feel comfortable disbelieving the surface explanations of homeopathy while at the same time trying to understand how modern medicine could be improved.


>rather the realization that some are dumber and some are smarter.

What? You think that begins with _rationalism??_ The world is divided up into people who realize this and people who do not?--for that is what defining 'rationalist' in the context of people's self-identification attempts to do.

Your definition leads right back to mine. Everyone realizes it. Everyone!


Yes, I do think that rationalism is intimately involved with self-doubt, and that many people are wholly unaware of the possibility that others with whom they disagree may be right rather than wrong. But I also think that there is such a thing as truth, and that it would be an error to believe that no one is stupid or acts irrationally.

But possibly I just don't understand your point. Are you arguing against the precepts of LessWrong, or against the case that Kenneth is making showing its limitations? Are you arguing that rationality is (or should be) synonymous with evolutionary fitness rather than truth? That believing things that are factually true can be irrational?

If you're up for watching depressing but incredibly insightful documentaries, I'd recommend "Stevie" (http://movies.netflix.com/WiMovie/Stevie/60027209?trkid=2361...) as one that might change your mind on whether "no one is stupid". It's a brilliant movie of stupid people making their way in the world, with as much grace as they can manage. The scary part for me was that the self-explanations of the (medically) retarded characters vacillate between sounding simplistic and fully aware. I think the rational conclusion would be that it's awfully hard to tell which rung one is standing on, and whether others are above or below it. But I don't think it would be rational to conclude that "everyone's pretty much the same".


Your third paragraph is correct and I'm confused why you thought it was contrary to my point.

I'm not saying that no one has any trace of stupidity. I'm saying that the concept of stupidity-intelligence is not a binary, it is a gradient.

What I'm arguing is that the question reveals its flaws, and the flaws of the lesswrong community. A member who self-identifies as part of that community is either being redundant by saying something about themselves that applies to every human being, or they are being elitist by implying that they have (or seek, and therefore have) some special understanding of the universe.

This is illustrated by the fact that Kenneth is dividing the world into people who are right and people who are wrong, and wondering if being wrong can sometimes be right. It's nonsense, a question brought about by caring too much about how right you can be.


Calling yourself a rationalist means, basically, that you're a virtue-ethicist who considers rationality to be a virtue: you're willing to allow for the maximization of truth at the expense of (some amount of) utility. In its way, this is a signaling mechanism like any other: there's no "rational sense" in doing something that will earn you locally sub-optimal utility, even if that thing is rationality itself.

However, just as the local signal of revenge allows for global cooperation in the iterated prisoner's dilemma, maximizing for local truth is a clear signal to others that they can globally rely on your work to benefit all parties equally (including them), rather than just whoever the rationalist works for. Succinctly, "no one has their ICBMs pointed at the enemy's universities."
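
A minimal sketch of that prisoner's-dilemma point (standard payoffs, strategies of my own choosing): tit-for-tat's willingness to retaliate is exactly what makes long-run cooperation with it reliable.

    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(opponent_history):   # cooperate first, then mirror
        return opponent_history[-1] if opponent_history else 'C'

    def always_defect(opponent_history):
        return 'D'

    def play(a, b, rounds=20):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = a(hist_b), b(hist_a)
            pa, pb = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (60, 60): stable cooperation
    print(play(tit_for_tat, always_defect))  # (19, 24): revenge caps the loss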


Your definition does not contest my either-or equivalence, my assertion that "rationalist" either means "human being" or "someone whose identity involves caring about other people's dumbness."

Mostly it's just dressing up how people think with terminology. Virtue-ethicist = "believes in bad and good." Maximizing truth = "trying to be right."

Who is not a rationalist by your definition?


> Calling yourself a rationalist means, basically, that you're a virtue-ethicist who considers rationality to be a virtue

What? No! Yudkowsky, at least, follows utilitarian ethics and argues for _instrumental_ rationality. The utilitarianism sets the goals, and the rationality is about finding the strategy to fulfill them.


"all humans ... believe in the assertion that true things can be known"

Actually, there are a minority who do not believe that. I take Richard Dawkins and Morris Kline ("Mathematics: The Loss of Certainty") as representing themselves accurately, even if I believe their positions to be incorrect. See Martin Gardner's essay "How Not to Talk About Mathematics".


That was the point of my caveat, actually, the "whether they say so or not."

Nihilists still act as if things are true and kinda negate their own point. That guy who wrote a tome about meaninglessness and then offed himself at Harvard wrote a poem about meaning and finding it _through_ belief in meaninglessness.

That's a little beside the point, however, because I think it fundamental that it is only after experiencing meaninglessness that we are free to create meaning. How do you know what's really important to you? It's what you do after you don't think anything matters.

There are people who are _actually_ nihilists, who _actually_ cannot bind meaning functionally anymore. They are some subset of schizophrenics, I suspect.

If you really, fully believed that neither meaning nor truth existed (which is a contradiction, since the justification for the argument must use the concept of truth), you would be unable to function.

tl;dr the theory of meaninglessness is useful as an abstraction but it is not a practical reality.


I don't think you are addressing the concern that the GP brought up. Mathematics: The Loss of Certainty is not an essay about nihilism.

Briefly,

[Mathematics] will be seen as the creation of finite human beings, liable to error in the same way as all other activities in which we indulge. Just as in engineering, mathematicians will have to declare their degree of confidence that certain results are reliable, rather than being able to declare flatly that the proofs are correct.


That was intentional. As he could not or did not condense the points of the book (partly because he didn't personally agree), I did not have anything to work with; as there wasn't a link to the essay, I did not have anything to read; and as my point was never about mathematical certainty, the question of whether mathematical proofs are true is irrelevant.

As described in your quote, mathematical truth is not the same thing as whether any thought can be true.

Though while I'm here, I'd say that it's some weird ego-thing that humans think they've 'created' math while simultaneously thinking it's flimsy. Math is a property of the universe.


> There are plenty of good reasons not to be a rationalist. In fact, valuing the truth as a sort of Platonic ideal to which everything else must be subservient is an arbitrary moral choice in a universe where moral facts are either non-existent or inaccessible.

The rationalists' position is that believing true things is almost always strategically superior to believing untrue things. This is an empirical claim, not one about "Platonic ideals". You can choose to value whatever you want, but presumably you have some ends you want to achieve, which requires decision making, which is usually assisted more by true beliefs than by false beliefs.

Now, there's some evidence about cases where this might not be true. The blog post raises this about happiness surveys, etc. This evidence seems to me to weakly establish a negative correlation, but not necessarily that rationality causes less happiness. At any rate, for any particular subgoal (say, becoming better at programming, achieving material wealth, etc.), believing true things is probably going to be a good strategy. That's the rationalist claim, anyway. For happiness in general, it gets murkier.

Yudkowsky also has a whole thing about how a rationalist can still win at Newcomb-like problems. The gist of it is, what's rational is whatever will meet your goals. This means if you need to preserve some false beliefs, you should go ahead and do that.

For example, you may find that false beliefs about how long a project will take motivate you to get started. If you really accepted that your project would take a long, long time to bear fruit, maybe you'd never have begun, and you'd have been worse off for that. In this case you've got this other irrationality, about how you're weighing future benefits, that you're compensating for by lying to yourself about how far in the future you'll reap your rewards.

Here, the two irrationalities together take you to a local maximum. If you only got rid of the false belief about the project times, you'd be worse off. But, if you could get rid of both irrationalities you'd be _better_ off. You wouldn't have the motivation problem AND you could plan more efficiently.

But maybe you just can't get rid of your tendency to over-discount future benefits. If so, you should just accept the local maximum, rather than anti-strategically attacking your tendency to underestimate project costs and leaving yourself worse off.
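
Here's that local-maximum arithmetic as a toy model (every number invented for the example): with both biases you start the project; fix only the time estimate and the over-discounting means you never start; fix both and you start AND plan well.

    def utility(underestimates_time, over_discounts):
        perceived_delay = 1 if underestimates_time else 5  # years to payoff
        discount = 0.5 if over_discounts else 0.9          # per-year factor
        perceived_value = 10 * discount ** perceived_delay
        if perceived_value <= 1:          # motivation threshold: never starts
            return 0.0
        planning_penalty = 3 if underestimates_time else 0 # sloppy schedule
        return 10 * 0.9 ** 5 - planning_penalty  # true discounted payoff

    print(utility(True, True))    # both biases: ~2.9 (the local maximum)
    print(utility(False, True))   # fix only the estimate: 0.0 (worse off)
    print(utility(False, False))  # fix both: ~5.9 (strictly better)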

You could make a similar argument about religion and many other false beliefs. Which local maxima we can realistically step beyond is another empirical question. But I think it's clear that there are many false beliefs that are strictly negative: that is, you're better off without them, even if you don't change anything else about your thinking.



