Epistemic Learned Helplessness (2019) (slatestarcodex.com)
195 points by gHeadphone 9 months ago | hide | past | favorite | 222 comments



After spending many years studying philosophy I think that the right approach is that you don’t need to come to any conclusions about anything.

Aristotle often started with common people’s viewpoints on any given topic, along with looking at what experts thought, and built his own ideas from there, and of course he was very successful with this methodology.

Arguments are interesting and obviously some are more correct than others but everyone has good reasons for what they argue for and there is a genius to the collective thoughts of humanity even if it seems like insanity most of the time.

The best starting point is one of ignorance and asking other people what they think; you don’t need consensus, but you should be looking for clues, solid ground, and inspiration.

Everyone should get more comfortable admitting to and saying that they truly ‘don’t know’. Taking a position of ignorance will open the doors to real possibilities and actual progress.


To speak to this directly in a scientific setting: I find one of the most beneficial things I can do when exploring a new domain is to NOT read the literature first, but instead to ponder how I would go about solving the problem, maybe make some naive attempts, and THEN read. An iterative cycle of this pays off because of a few typical outcomes: you spend more time, but you generally come away with a deep understanding of the motivation and of why the field has moved in the direction it has (which helps you understand where to go next!), and/or in the process you find assumptions that were made to simplify a problem but have since been forgotten and can be readdressed (at worst you gain a deeper appreciation for the naive assumption).

This process is slower, but I find it frequently results in deeper understanding. It is (for me) faster if your intent is to understand, slower if your intent is to do things quickly. The typical method of reviewing the literature to catch up, without the struggle, generally leaves me without a good understanding of many of the underlying assumptions or the nuance at play (and I think this is a common outcome, beyond my own experience). YMMV, and it may depend on your goals for a specific area and on your cross-domain knowledge.

But I take great pleasure in it because it causes me to see many things as beautiful and ingenious where I would otherwise have seen them as obvious and mundane. That alone is enough for me: research is a grueling task with endless and frequent failure, and this keeps me motivated and keeps things "fun" (in the way I think Feynman was describing). It's not dissimilar from doing your homework, checking someone else's work or a solution manual, and then trying to figure out where your mistakes are rather than simply correcting them. Of course, time constraints are a pain, so this process isn't always possible, and certainly not as often as I'd like.


Sounds quite similar to this anecdote from Feynman about his learning path:

https://literature.stackexchange.com/questions/8691/in-what-...


Yeah I've actually learned a lot from Feynman about how to learn. I think a lot just comes down to acting like a kid. Fuck around and find out, but with more nuance and sometimes more direction or not haha. There's the old saying "passion is worth 10 IQ points" and I think these all are part of the same thing.


Agree completely. I think the author misses the big insight here. Rather than universal deferment to authority, I think the insights here are a) that people overstate their confidence and b) not all science should be treated equally.


The epistemic standpoint of rationality (particularly Cartesianism) assumes a static arrangement of knowledge, where one uses analytic reason to gradually unveil bits of it like finding new territory on a map. It is rooted in analytic geometry.

David Hume challenged this view. His main insight was that an object we call "A" in time T1 may not be the same object in time T2. We also need to distinguish between "A" as an idea of an object and "A" as a particular instance of object.


Along similar lines, critiques of Cartesianism in epistemology have also pointed out the heavily social aspects of knowledge construction, see situated epistemologies etc. Even epistemologists in the analytic tradition have begun to move away from Cartesianism due to its limitations.

TBH, taking an epistemic stance that's primarily cartesian these days mostly just shows that you're (likely) ignorant of basically the entire history of development and research in epistemology after Descartes. Cartesianism is a very useful perspective and method for certain things, but as a general epistemology it's quite crusty.


> I think that the right approach is that you don’t need to come to any conclusions about anything.

This reminds me of a phrase I often find myself using with people: I am not required to have an opinion about everything.


This is probably true, but not very helpful. We can shrug over historical curiosities, but in a lot of cases we have to make a decision.

Consider Linus Pauling's claim that you can prevent cancer with megadoses of vitamin C. It was never widely accepted, but Pauling is a titan of science with two Nobels and he wrote books with convincing-sounding arguments, so it's tempting to think maybe this is a case where the status quo is wrong (esp. if you have cancer).

I think that's the sort of thing Alexander is trying to navigate here - no matter how comfortable you get saying "I don't know", at the end of the day you need to take the vitamins or not take them.


The same argument can be turned against you: someone can wait for absolute expert certainty and die in the process due to lack of action. We all act pragmatically when making decisions, and simply blindly trusting people is no way for a thinking, intelligent being to live.

I was not suggesting this sort of Buridan’s ass scenario where everyone is so gripped by ignorance that they can’t act. I’m suggesting instead that ignorance is a starting point, to drop your preconceived notions and opinions or at the very least challenge them and not be afraid to come to no conclusions at all— keep everything open. To see there is gold in the common opinions of people and in the arguments of experts, that no one has a monopoly on truth.

You don’t need to work yourself up to absolute certainty over the world to make a decision. You don’t need to blindly trust experts and you don’t need to be gripped by fear of uncertainty and you don’t need to be forced into action because of arguments. You really don’t need to do anything at all. There are bigger questions to ponder and a life to live that’s worth living.


I still don't see how this is relevant to the question at hand, where someone is trying to convince you of something. They either convince you or they don't; refusing to decide is equivalent to the latter. Not taking the vitamins because I'm "so gripped by ignorance that you can't act" and not taking the vitamins because I'm "not afraid to come to no conclusions at all" are not alternatives, they're the same answer!


This article talks about learned helplessness in a learning context. I talked about it in a work context, and the two could be linked. I think social media is training people for everything to be quick, but learning + work aren't necessarily quick.

> This insistence on constant availability disrupts the essence of focused work. Rather than encouraging employees to tackle complex problems independently, there’s a trend, especially among junior staff, to quickly seek help upon encountering any obstacle. The fear is that being “blocked” under-utilizes an expensive team member. However, the nature of knowledge work is solving ambiguous, complicated problems - so the expectation of constant availability can lead to a culture of learned helplessness, which shunts professional development.

https://www.contraption.co/essays/digital-quiet/


As a junior dev, sometimes I've been blocked for hours because I want to show that I can solve problems independently. But I've definitely had cases where I should've asked questions early. A question that takes 1 minute to answer could have saved hours.

For instance, I was stuck on figuring out how to send an email through AWS. Turns out we have a lambda function that handles all the authentication and security things that are specific to our company. Once I asked my coworker and found out about this function, it was trivial.
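In case it's useful to someone else, here is a rough sketch of what calling a shared helper like that can look like with boto3. The function name and payload fields below are hypothetical, not the actual internal ones:

    # Hypothetical sketch: invoking a shared "send email" Lambda via boto3.
    # The function name and payload shape are made up for illustration.
    import json
    import boto3

    lambda_client = boto3.client("lambda")

    response = lambda_client.invoke(
        FunctionName="internal-send-email",     # hypothetical shared helper
        InvocationType="RequestResponse",
        Payload=json.dumps({
            "to": "customer@example.com",
            "subject": "Hello",
            "body": "Sent via the company's shared email Lambda.",
        }),
    )
    print(json.load(response["Payload"]))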


The distinction between trying to solve a problem yourself vs ask for help is: are you figuring out a new problem, or figuring out the system?

If you're trying to figure out the system, you should probably ask for help ASAP and build up some knowledge. Over time you'll need to ask questions about the system less frequently.

If you're trying to figure out a new problem related to your work, slogging away at it for a while is more ok because it's value-add and a good learning experience.

The lines are a bit blurry, but that's how I tend to think about this.


I had a manager when I was a new hire who had no concept of this difference at all, and it led to his firing me in my 4th week because I "should have been coming in with more knowledge."

He actually told me I should have just come in and started doing things whatever way seemed best to me, like he did, when he was the first employee in the company in the role.

Never mind that it was a compliance-heavy role in risk analysis and I was the 5th person on a 5-person team.

Sure, I knew plenty of ways to solve the problems. What I didn't know were what systems they had (or, in most cases, didn't have) in place to document investigations, which absolutely must be done.

I know plenty about what I'm doing. What I need to know is how you want it recorded. And in fact he had reinvented the entire documentation system during my 2nd week, on a whim.

Firing me that fast was the best gift that company could have given me, because things would not have improved between us.


As you get more and more experience, I think this will become easier. You'll eventually get a feel for what types of questions you should just work through yourself, and which types of questions you should ask someone else about, and the threshold for moving between the two (you'll develop a sense of "let me investigate theory A, B, and C, and if I still can't figure it out, I'll ask for help").

Some of this will probably be via an increased understanding of the types of problems you can and can't solve. And some of this will probably be because you'll eventually know more people who can help (something like "oh yeah, I know Sam worked on a similar problem last month, let me see if they found a solution")


There's also a minimum bar of searching that you need to have both performed and be able to demonstrate - this shows you're doing your due diligence, and not just wasting the more senior developer's time. It also provides you the context you need to understand the solution. The last thing you want is a LMGTFY moment from someone who was interrupted from deep focus on a hard problem (or worse: a series of them).


The nuance here lies within the “hours” quantifier.

A junior member being blocked for 3-8 hours semi-regularly is expected. Once every other month you probably should be blocked for several work days as well.

In your example, the challenge would have been realizing you wouldn’t have been the first one on the team trying to send an email. Once you recognize that, it is very easy to know that you should just ask the question right away. It’s not a technical skill, but something a bit more meta.


This is really bad advice. Don't let yourself be blocked for several days. Try for 2-3 hours and ask for help.


I'd say that's excessive. An hour max would be more reasonable otherwise you're just creating a discouraging workplace. Note that would be an hour of real effort though, not half arsed googling. Making a genuine effort is more important than the time taken.


The amount of time is entirely dependent on what kind of work it is and what kind of blocked you are. Can’t get your build to succeed? Ask for help quickly. Can’t get your research model to produce useful results on the first try? Maybe try a few things first.


Yeah, it's a bit disheartening to hear some (presumably) experienced people advocate for never allowing juniors to dig in and deal with things on their own.

I can imagine the scenario where a build fails due to some missing incantation that needs to be invoked, and sure, you shouldn't spend more than 15 minutes trying to resolve it. But that should be rare, while the common blockers for juniors tend to be their lack of understanding of even bog-standard systems and situations (why can't system X do Y, why did person Z require A). In those cases, part of the expectation when assigning them the work is that the junior will learn those on their own.


If you lost hours digging through the code and trying things to solve the problem, they aren't really lost. The knowledge you gain from this, the code you dug through, etc. all help build up the experience that leads to one day no longer being a junior. Being given the answer is fast, but it doesn't lead to as much learning, especially in a career where knowing how to find answers is more important long term than knowing the answers.


Blocked for hours is you learning and totally OK if you are learning. If one is operating open loop and stabbing at configuration options in a brute force manner to "make something work quickly" that isn't learning. Not saying you did this, but I do see this in a number of folks. They don't learn, and they don't form a model of the system. Those folks can be replaced by an LLM and Z3.


As a tech lead I would rather see a team member ask me questions early than never. Nudging them in the right direction before they waste 3 days going down a path that’s never going to work is at least half my job.

But it’s important that they’ve done enough research to formulate the question. A little struggle helps the learning stick.


I agree, but in practice now I've come to see that this is an extremely difficult point to discern. As the vast majority of findings flow in as a trickle, there often isn't a large enough delta to identify as a launching point for asking for help. For example, if one continues to search, they will keep finding breadcrumbs leading them to the solution. It is painful to me when someone spends 3 days looking for a solution to a problem that is very custom and unique to our system (so they're not gonna find the answer anywhere on the internet), and one I could solve in 5 minutes, but you don't know what you don't know, so as a new person learning it is almost never clear at what point it is ideal to ask for help. Compounding all of this is that many personalities don't like to be bothersome to other people, and that can cause them to hesitate to ask for help, which further sends them on the search between small deltas. It's a very hard problem.


You need to socialize the problem; to broadcast that you are trying to solve something. Some sort of internal discussion board could work. It serves the same purposes as the water cooler.

If anyone can think of a way to make software better serve this end I'd love to hear.


> think of a way

An app was recently mentioned in hn comments, for "live blogging" as PIM - a texting-like personal record of your questions and findings as they go by - research notebook meets chat UI. With a trace like that, or even just a browser trace, an AI might, without interrupting, summarize "what's Joe working on" and perhaps "how's that going?", or even "Hey Bob, maybe come chat with Joe?". And it could be nice to have a PIM that helped you maintain distilled clarity of objectives and state of play. Less nice the vision of a micromanager mesmerized by the dashboarded real-time state of his team.

A physics peer-instruction app, in support of "instructor puts up a question; everyone individually commits an answer; discussion with a neighbor; answer again", told students which neighbor to talk with, optimizing for fruitful discussions, knowing seating and the (mis)understandings implied by answers. LLMs open a lot of possibilities for the old dream of computer-supported cooperative work.

When github was brand new, I'd hoped it would grow far more social/meetup/hackathon-y than it ended up. Wander by, see who's around and what people were banging on; stop by the beginner tables and see if anyone was stuck or struggling; maybe join a push; maybe pair or group; interest profiles, matchmaking (eg round-robin pairing, or "oh! a category-theoretic type system in-the-style-of-a-conference-bar discussion!"). Like a team or small community discord with bots, but scaled. Perhaps AI can make something like that more tractable.


The sort of problem that is often best solved by hanging out around the water cooler, mixing and kvetching with random coworkers, ones you might not see at your regular meetings.


My problem is I see this most frequently with debugging. It's like no one knows how to debug anymore. Read a runbook, google an error, try a few things.. no, just pester a senior.

When I find myself responding to juniors/mids with the same list of rote, problem agnostic, runbook responses .. and it actually helps them, it's unnerving. It's like the socratic method of debugging without them actually learning anything from the experience.


Agree. I think about 4 hours is a good rule of thumb for how long you should be stuck before getting help.


Cool! I'll send them to you then :-)


Yeah, it's a balance. I love being able to help, and I am generally in favor of asking questions early, but not ones of the form "hey so I ran this code and it errored. Help?"

"... did you read the stack trace? Did you look at the code referenced by the stack trace?"

This is where I've learned responding with "Sure! What have you tried so far?" is relevant.


This just has not been my experience at all.

I've never had a problem with a junior asking too many questions. Never.

I have, however, had issues with them not asking enough questions.


Right. I never got upset with a question. The only issue is getting the same question multiple times.

Not necessarily the same question verbatim, by the way. When I answer a question, I am trying to “teach to fish,” and so there is some system that I am explaining. My hope is that the asker will show curiosity - ask follow-up questions - and then be able to generalize. “I learned there was a lambda for sending emails, in the sysops repo. Maybe there is a lambda for sending slack messages in there too?”

Software systems are imperfect so the generalizations might break. In this case I want another question quickly, like “I couldn’t find a slack message sender like the email one. Does one exist?”


I have had both problems. As it turns out, unsurprisingly, it varies from person to person.

It can also vary within a single person. They might, for example, ask questions too quickly when stuck on a technical question that could be solved by reading docs, but ask questions too slowly when stuck on an ambiguous product requirement that can only be personally answered by the PM or UX person.


Asking questions quickly is optimal for resolving the immediate problem, but not necessarily for understanding the components involved in the answer, or the time of the person being asked.

Even if you write off the latter as ~free, the former is a significant benefit to someone familiarizing themselves with a new environment, and even someone walking the junior person through the process of reasoning through how to get there probably won't cover the same benefits as they might have experienced doing it themselves because they likely don't think exactly the same way.

Of course, it's an immediate versus long-term tradeoff too, and a personal tradeoff for when to ask. Given how many people tend to just refuse to ask questions because they've been trained to think it makes them look bad, it probably makes sense to aggressively incentivize asking by default. But there are benefits to spending time on a problem yourself if you have enough tools to bootstrap your way into more understanding, and I have met enough people who never understood things beyond how to look up the answer from someone knowledgeable to think this isn't something to also be concerned about.


Flipping the script a bit:

As a senior developer, avoid cultivating learned helplessness. You can push back on this in a couple ways:

1) Instead of answering questions, give a nudge to where the solution is documented ("Hey I'm swamped, but I'd recommend checking X for more info. If you haven't read through X yet, that's a good resource to skim"). Keep tabs on how much your junior team members are expected to know, nudge them harder if they aren't taking the time to ingest the gestalt of what's there (it feels like a waste of time to do so sometimes... Reading isn't getting code written. But knowing what's already there saves work in the long run).

2) When something isn't documented... review the docs a junior team member writes, don't write them yourself. This both encourages them to have ownership over the system and will probably generate better docs in the long run (everyone has a notion of what docs should look like, but communication is two-way: seeing what someone writes down clues you in to what you missed was necessary to record. Can't tell you how many docs I've seen for cloud systems, for example, that assume the user is logged in with proper credentials, when that step alone usually requires handshaking across three services and maybe an IT team).

3) Prefer mistakes to silence. Don't bash a team member for making a correctible mistake, even in production; use it as a learning opportunity both for them and for you (if that mistake was possible, you're missing a guardrail). Actively communicate to junior members that wrong code that exists is preferable to no code; wrong code can be talked about, no code is work yet to be done. And be aware that for a lot of junior devs, the reaction to making a visible error is like touching a hot stove; cultivate a soup-to-nuts environment that minimizes that hot-stove reflex.


I agree with all of this, with one minor quibble. I'd never tell someone I'm in a position to mentor / coach / lead like this that I'm swamped. It's probably true, but that's my problem, not theirs. I don't want them to avoid talking to me because they think I'm too busy. I know that was in an example statement and not necessarily something you're endorsing, but I thought the point worth bringing up.


This is just all excellent. There are a lot of strange attitudes elsewhere in this comment section, ones that I don’t recognize from good senior engineers. Good ones realize that a lot of their job is improving the whole team. If junior engineers are constantly asking trivial questions, maybe they need to be taught to learn!


> There are a lot of strange attitudes elsewhere in this comment section, ones that I don’t recognize from good senior engineers.

Whole lot of misinformation and red herrings on the internet these days, intended to lead the unwary astray


Yes, and I think some of the neediest juniors (or seniors who behave like juniors) have now substituted in ChatGPT for some of their nagging.

The ones I see doing this most heavily are not really developing themselves and improving in any meaningful way.

At least a stack overflow thread will be filled with alternative solutions, arguments, counterpoints and caveats.

ChatGPT leaves the questioner with the illusion that they have received the 1 good answer.


Summary: Scott Alexander recounts his gullibility to various well-reasoned crackpot arguments on a topic, and describes how he decided to trust experts instead of investing time into learning enough to assess the topic for himself. Then he reflects on the nature of argument-accepting and its relation to rationality.

I don’t think the term “learned helplessness” fits well here. It suggests a lack of agency, whereas he exercised much of it, employing his skill of critical thinking to arrive at the right epistemic stance.

A better term might be “bounded agency”, to pair with the concept of “bounded rationality”. We recognize that we cannot know everything, and we choose how to invest the capability and resources that we do have. This is far from any type of “helplessness”.


He talks about the pitfalls of pure rationality. There can be competing explanatory frameworks for the same thing, and they often contradict each other. Rational arguments may seem rigorous like math, but are in practice standing on shifting sands.

It ultimately comes down to what you decide to believe in. This is where traditional values and religion come into play.


Yes, it's not "gullibility", it's believing things via the mechanism of standard argumentation.

The basic thing is that arguments involve mustering a series of plausible explanations for all the visible pieces of evidence, casting doubt on alternatives, etc. Before Galileo, philosophy had a huge series of very plausible explanations for natural phenomena, many if not all of which turned out to be wrong. But Galilean science didn't discover more by getting more effective arguments but by looking at the world, judging models by their simplicity and ability to make quantitative predictions and so on.

Mathematics is pretty much the only place where air-tight arguments involving "for all" claims actually work. Science shows that reality corresponds to mathematical models, but only approximately, so a given model-based claim can't be extended with an unlimited number of deductive steps.


I, for one, am glad that the rationality-bubble is popping.


A further thought that is too much for an edit… one of Alexander’s final conclusions is:

> I’m glad that some people never develop epistemic learned helplessness, or develop only a limited amount of it, or only in certain domains. It seems to me that […] they’re also the only people who can figure out if something basic and unquestionable is wrong, and make this possibility well-known enough that normal people start becoming willing to consider it.

I think there’s better framing here as well: he is glad that a few people direct their own bounded resources towards what I’d call high-risk epistemic investments.

I’m also thankful for this. As a species, we seem to be pretty good at this epistemic risk/reward balancing act - so far, at least.


> The medical establishment offers a shiny tempting solution. First, a total unwillingness to trust anything, no matter how plausible it sounds, until it’s gone through an endless cycle of studies and meta-analyses.

Isn't this just... science? We learned from the ancient philosophers that really smart people can reason powerfully about a whole lot of things. But - if we want to know whether that reasoning holds up, it has to be tested. Modern science is the current unit testing framework for philosophy. At least, for the thoughts that lend themselves to unit testing.


This is all very noble on paper.

Yes, people can be persuasive to twist the truth.

And testing is good for arriving at proof.

But what the ancient philosophers didn't count on, is that there is a machine in charge of what gets tested and how, and that choosing what and how things get tested can also be twisted to be persuasive. I can develop and thoroughly test my own drug, and then persuade you that a much cheaper chemical is not tested enough (because I blocked all attempts at testing it), and then I would point to the noble principles of ancient science as to why things need verification.

In other words, the current world is teaching us that really powerful corporations can test selectively about a whole lot of things.


You are correct, it is science.

The differentiator that I see is that the medical community are under pressure to apply the most cutting edge science in real life-and-death scenarios, so they are uniquely positioned to be burned by trusting a new hypothesis which appears correct but is subtly yet completely wrong, which could absolutely cause someone to lose their life.


But the alternative, waiting to apply new-but-actually-correct hypotheses until totally, absolutely proven, also causes lost lives.

And as is usual with "Type 1" vs "Type 2" errors, saving lives by avoiding one problem costs lives due to the other problem. The trick is to sit in the minimum of "lives lost due to Type 1 errors plus lives lost due to Type 2 errors". Unfortunately, that's not an analytic function with a known formula and computable derivative...


Science is more than just testing, although testing is an important part of science. I believe the reasoning aspect can, in principle, be made strong enough to be robust, and almost independent of experimental confirmation (essentially from analyzing already existing data).

For example, in machine learning you can trust a model to work well in the context of a data stream by simply using (cross-)validation, and so on (in general, avoiding overfitting and assuming your data stream continues to resemble your training data without changing too much).
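A minimal sketch of that (cross-)validation idea, using scikit-learn and a toy dataset purely for illustration:

    # Estimate how the model will do on data it hasn't seen, rather than
    # trusting its fit to the training set (which invites overfitting).
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    # 5-fold cross-validation: train on 4/5 of the data, score on the held-out
    # 1/5, rotate, and average. This only says something about future data
    # that resembles the training stream.
    scores = cross_val_score(model, X, y, cv=5)
    print(scores.mean(), scores.std())

This is only a sanity check under the stated assumption that the data stream keeps looking like the training data; it says nothing about distribution shift.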

Mathematics is an example of a non-experimental science as well: in a way, we can perform purely theoretical experiments where our 'tests' are given by internal consistency. The equivalent of experiments in mathematics for ruling out incorrect statements (inconsistent theories) are proofs and (almost tautologically) the concept of consistency. I think there is some risk that even within math this process derails, but only if we lower the standards for what constitutes a proof too much. And the concept of proving is so solid we can automate it and computer-verify proofs. As long as we can (and do) verify what we create, in a sense we can't (or are extremely unlikely to) go off the rails.

I think relativity is a good example of a theory that was largely conceived from mathematical principles (from a few observed essential properties of reality). I think in the future this kind of approach will be highly significant to physics, math, philosophy and even social sciences.

It is true though that experiments (and just real life) are very useful to 'keep us in check' in a general, less rigid way as well.

Honestly I think those are fantastic tools (consistency and proof), fundamentally connected to the notion of truth.


Right, and I think the distinction he draws between this and engineering is interesting.

My experience (as an engineer) is that engineers get paid to manipulate the world around them in precise, predictable ways. As such, they tend to cultivate a core set of relatively simple principles (e.g. heat and material balances) which they take seriously by using to plan/design/etc their manipulations of the world. The engineer's creativity lies in finding new ways to combine the known-reliable principles, and stake their reputation on the result.

Scientists, on the other hand, are expected to come up with new principles. If they take too many ideas seriously, there is no room left for creative new explanations.

Medicine lies somewhere in between, insofar as doctors are trying to manipulate patients' health with the best principles they can find, but they also have to wear a scientist's hat via diagnosis, i.e. figuring out which principles to apply to a patient.

And so the reaction to e.g. fundamentalism makes sense:

An engineer likes having a fundamental set of universal principles, and is comfortable using them to make plans for manipulating the world. They expect the religious world to work the same way as they do, and take those principles seriously.

A scientist wants to know "well I've got a new idea for a fundamental principle, how can we find out if I'm right?" And the fundamentalist has no answer.

A doctor will want to know how certain the fundamentalist is, and how they know those principles even apply to the current situation, and the fundamentalist will have no answer.


One counterexample for why I think it's not always the best approach is that there are sometimes disproportionate rewards for being right when everyone else is wrong.

Doing so, however, requires you to take an idea seriously before it has gone through analyses so extensive that everyone else believes the same idea too.


Medical thickheadedness sometimes makes them perform useless, harmful procedures for many years after the harm became known. So it's not the virtue the article postulates.


Yes, but not all "scientific" disciplines adhere to this the same way that medicine does. Take for example Psychology, Political Science, or some of Economics, where unreplicable studies in prestigious journals almost immediately become canon for new grad student seminars.

N.B. I have direct experience with this in Political Science; for the other disciplines I have just anecdotes, so apologies if they are mischaracterized.


It does seem like odd phrasing. It's correct to not fully trust anything until it's gone through a long verification process.


Don't you have to put some amount of trust in an idea to even consider verifying?

And have even more trust in the idea to perform your experiments in the first place?

Even in medicine, there has to be someone willing to challenge status-quo ideas, or there's nothing to feed into the "long verification process" in the first place. How do you decide when to be that person?


This seems to be responding to a different comment?


> "Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!”"

That isn't making them look like an idiot, reducing them, or demolishing their position. That is completely failing to demolish their position! While also failing to convincingly explain your position.

> "If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right. (This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.)"

Maybe the correct action is to see that how the Early Bronze Age worked has so little effect on your life, no testable hypothesis to confirm one way or the other, that it doesn't matter which one you believe, or if you believe both (even if they are contradictory, that's a thing humans can do). Instead of doubling down on one, let go of all of them.


>Maybe the correct action is to see that how the Early Bronze Age worked has so little effect on your life, no testable hypothesis to confirm one way or the other, that it doesn't matter which one you believe, or if you believe both (even if they are contradictory, that's a thing humans can do).

This is a great point, and I'd suggest it is worth taking even further. Even for things that have moderate or substantial impact on your life, holding space for the possibility that multiple competing/overlapping explanations could be true can be an extremely valuable (if cognitively expensive) skill.


It also ignores the question of how the person even got their prior in the first place. Presumably they heard a convincing argument at one point and accepted that - but then later changed their standards to not accept convincing arguments. In fact, how do they even know "if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way"? Presumably they were convinced of it at some point.


> Presumably they heard a convincing argument at one point and accepted that - but then later changed their standards to not accept convincing arguments

No, many people base their views on what their friends think, they never reasoned themselves into it.

It is rational to base your beliefs on your friends, and to keep those beliefs to ensure you can continue to fit in with your friends. That is less work and less risky than changing your mind. Only a few really go against the grain and try to think for themselves; it doesn't help them personally, but some people have to do that for the benefit of our pack.


> That isn't making them look like an idiot, reducing them, or demolishing their position

I think the quote is in Scott's opponent's voice, not his own.

> Instead of doubling down on one, let go of all of them

Yes. We don't need better ways to form opinions - the world would be a better place if we all just had fewer opinions


I believe the parent comment is aware of your first point


> That isn't making them look like an idiot, reducing them, or demolishing their position.

I think he's saying that he can reduce his opponent to those words - i.e., the author's argument fits together, the opponent says "it's just wrong" and gets frustrated.


I agree that's what he's saying, and I'm saying that is not demolishing the opponent's position. Scott is going up to a castle, talking to the walls until he thinks they should collapse, then when they don't collapse he's declaring victory by "making the walls look idiotic" and telling people he "demolished the castle walls".

If you haven't actually convinced your opponent, and you haven't changed their mind, and you haven't understood their true objections, and you haven't presented a convincing enough case for your position for them (arguing in good faith) to accept, you haven't won; there's an underwater iceberg chunk still missing.

Like, if I claim 2 is the largest possible number, and show you 1+1, and you say "1+1 is coherent and logical and fits together ... I can't place the flaw but something's not right and I still don't believe you", I can't go around reasonably telling people I demolished you with my proof that 2 is the biggest number and you look idiotic for not believing me.


The thing is, often people cannot be convinced of some statement, not because they are so convinced of their own stance, but because leaving their stance feels bad and might even call into question their own identity (what they stand for).

Only after all these years did I come to that conclusion, after a lot of arguing and debating.

And that relates to what the article says: we just nod and still keep our position because the other's arguments seem coherent at first sight but just feel weird. Any attempt at rebuttal very often feels futile and useless.


> believe both (even if they are contradictory, that's a thing humans can do)

It really isn't, though.


>engineering trains you to have a very black-and-white right-or-wrong view of the world based on a few simple formulae, and this meshes with fundamentalism better than it meshes with subtle liberal religious messages.

Back in the usenet days it was taken as given that any creationist was also an engineer. Creationism was nice and neat and logical unlike that handwavy big bang thing that was probably dreamed up by woolly headed academics with no practical experience.


I think of it as lowering the cognitive dissonance. Highly analytical individuals are going to tend to be more highly sensitive to any contradictions in their belief system. On which side you land might be remarkably random -- one might become an atheist, and another a Christian fundamentalist. The key is getting to a place where they see no contradictions.


Don't know about this one. Engineering taught me that reality is complex and nuanced and that success is defined on a spectrum, not zero or one.


Back in the usenet days everybody using it was either an engineer or a physicist, and every single one of those people had a bias about one of those groups being wrong more often than the other.


I wouldn't necessarily disagree with that. There were also professors of all stripes, and of course first-year students which were wrong more often than everybody.


> Creationism was nice and neat and logical

Also requires less faith


To some degree the idea of a Creator is easier to wrap one's head around than the idea of nothing-->something.

Of course it really just rolls the question uphill since you're now faced with 'Where did the Creator come from?'. Happily there are no academic theories on this matter so there's no need to engage with it. Ultimately satisfaction doesn't come from the belief itself, satisfaction comes from the feeling of being Right while others are Wrong.


> Of course it really just rolls the question uphill since you're now faced with 'Where did the Creator come from?'.

Creationists aren't actually faced with that question - not Creationists who believe in a God who is eternal, anyway. Things without beginnings don't have origins.

If you want to actually attack that belief, you need to either 1) show that God isn't eternal, or 2) show that eternal things are impossible. Or you have to do the hard work of 3) persuading them that a purely naturalistic process fits the available evidence better than a Creator does.


I'm just suggesting that Creationism requires more faith than one might think at first glance.

Creationist beliefs

- a Creator

- things which are eternal

- an Eternal Creator

Whether those things are easier or harder to believe than naturalistic processes is up to the reader.


> To some degree the idea of a Creator is easier to wrap one's head around than the idea of nothing-->something.

Of course that's a mistake, because there wasn't literally nothing, there was just something else.


> Happily there are no academic theories on this matter

Why happily? Evolution seems like a good academic theory. Is Pascal's wager academic enough? What about Occam's razor?


To clarify, there are no academic theories about where the Creator came from. Academic research on the matter would have to pre-suppose the existence of a Creator, and so far as I know there is no research group willing to take that as given.


How?


God exists or he doesn't, 50/50 chance.


Fantasy always requires less thinking than knowledge.


I have to admit a weakness for reading not-quite-crackpot-but-likely-wrong theories. In particular, big fan of Julian Jaynes and The Origin of Consciousness in the Breakdown of the Bicameral Mind, and the aquatic ape hypothesis https://en.wikipedia.org/wiki/Aquatic_ape_hypothesis

I get that they're probably not true, but I do enjoy reading novel thinking and viewpoints by smart people with a cool hook.


I think if you want to start down that sort of road, it's important to read lots of them. Read zero, you're probably fine. Read lots of them, you're probably fine. "One or two" is where the danger is maximized.

And I would agree with "likely" wrong. Some of them probably aren't entirely wrong and may even be more correct than the mainstream. Figuring out which is the real trick, though. Related to the original article, I tend to scale my Bayesian updates based on my ability to test a theory. In the case of something like the Breakdown of the Bicameral Mind, it ends up being discounted so heavily as a result of that heuristic that it is almost indistinguishable from reading a science fiction book for me; fun and entertaining, but it doesn't really impact me much except in a very vague "keep the mind loose and limber" sense.

I have done a lot of heterodox thinking in the world of programming and engineering, though, because I can test theories very easily. Some of them work. Some of them don't. And precisely because it is so easy to test, the heterodoxy-ness is often lower than crackpot theories about 10,000 years ago, e.g., "Haskell has some interesting things to say" is made significantly less "crackpot" by the fact that plenty of other people have the ability to test that hypothesis as well, and as such, it is upgraded from "crackpot" to merely a "minority" view.

So my particular twist on Scott's point is: if you can safely and cheaply test a bit of a far-out theory, don't be afraid to do so. You can use this to resolve the epistemic learned helplessness in those particular areas. It is good to put a bit down on cheap, low-probability, high-payout events; you can even justify this mathematically via the Kelly Criterion: https://www.techopedia.com/gambling-guides/kelly-criterion-g... If there is one thing that angers me about the way science is taught, it is that it is presented as something other people do, and as something special that you either do with the full "scientific method" or it's worthless. In fact it's an incredible tool for everyday life, on all sorts of topics; one must simply adjust for the fact that the less effort put in, the less one should trust the result. But that doesn't mean the trust must be uselessly low just because you didn't conduct your experiment on whether fertilizer A or B worked better on your tomatoes up to science-journal standards.
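For anyone who wants the Kelly intuition made concrete, here is a tiny sketch with made-up numbers (the probability and payoff are purely illustrative):

    def kelly_fraction(p_win: float, net_odds: float) -> float:
        # Kelly criterion: optimal fraction of your bankroll to stake on a bet
        # paying net_odds-to-1 with win probability p_win.
        # f* = (b*p - q) / b, where q = 1 - p. Negative result => don't bet.
        q = 1.0 - p_win
        return (net_odds * p_win - q) / net_odds

    # A cheap, low-probability, high-payout "experiment": 10% chance of being
    # right, roughly a 20x payoff if it works out.
    print(kelly_fraction(p_win=0.10, net_odds=20.0))  # ~0.055 -> stake ~5.5%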


Same. I found a book back in college claiming (on the basis of some theory about the Egyptian pyramids) that if you made a pyramidal shape with certain dimensions out of cardboard, it would make plants grow faster and keep your razorblades sharp. I didn't believe it, but I did make one for fun. All my physics-major friends made fun of me for being gullible. I was like, isn't testing stuff what we're supposed to be doing here?

(It didn't work)


Is there solid evidence against aquatic ape? The only argument I've seen was that it's unnecessary because multitude of previous explanations of every single feature work just fine, thank you very much.


I can read a novel idea, get excited by it, remember it and return to it later without being convinced.


I think this article is badly argued, but about a topic which interests me greatly.

There are basically three epistemologies. There's the constructive (mathematical, proscriptive), the empirical (science, emotional), and trust. The constructive and empirical epistemologies don't separate as neatly as we would like them to, but a constructive argument basically looks like: "here's a thing that you definitely believe, and here is an implication of that, therefore you believe in the implication" aka modus ponens.
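As a tiny formal illustration of that constructive shape, here is modus ponens as a machine-checked proof (a sketch in Lean, just to make the pattern concrete):

    -- From a proof of P and a proof of P → Q, we obtain a proof of Q.
    example (P Q : Prop) (hP : P) (hPQ : P → Q) : Q := hPQ hP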

The empirical epistemology goes: "You have made a lot of observations, here's a simple explanation for all of them that you may go check", more or less the scientific method.

The trust based epistemology is just: "If I can establish facts about you, then I can establish facts about the things that you claim to believe, without having to see the receipts"

Each of these epistemologies has its own definition of argument, and they're all similar, but they're distinct, and the author isn't being clear about which one he means. In my estimation, the passages about ancient history are a reflection of him mistaking trust based arguments for empirical ones. This is a very common mistake.

An empirical argument is of the form "Here is the test that I used to differentiate my explanation from other explanations, and why I think this is a good test", whereas the trust based argument for the same explanation would be "Here is the data that my theory explains".


I think it is a trope that engineering is "black or white".

Having been a professional engineer for 35 years, it is anything BUT. Everything is a threshold for negotiation, there is give and take in every design decision. There is no black-white anywhere I've ever seen, except freshman-year logic texts. Even with digital logic, in reality, there is no such thing as digital: slopes, thresholds, manufacturing variations, it's all analog.

Where does this notion come from?


I suspect it comes from when engineers try to problem solve with non-engineers.

At the very least I've been in the room a few times when this happened. The engineer says a few things. The non-engineer suggests something outside their wheelhouse. The engineer explains why it won't work or would be too expensive to make work. Non-engineer accuses engineers of living in a colorless world without imagination.

The black and white thinking accusation is similar to the 50-50 chance of winning the lottery (either you do or you don't). That is, engineers only say if something can work or won't work so they're black and white. It kind of misses the nuance in what's happening.


Exactly. Engineering often involves choosing a solution within some time / cost / performance envelope. I’m inclined to think that if you can’t see where trade-offs might exist, it’s because you don’t fully understand the domain.


> I’m glad that some people never develop epistemic learned helplessness, or develop only a limited amount of it, or only in certain domains.

What always struck me about famous philosophers is that almost none of them ever changed their minds. They grasped a main idea, turned it into a supreme principle, built everything around it, and defended their system against all critiques their whole life. I know this is simplifying but I think it's pretty close.

And yet when I read their works, it's easy to see flaws in their position. So I've always wondered how they could have such tenacity. I'm glad they did though. But it feels like they sacrificed their own life for the rest of us, so that we could see where one idea goes: what it looks like when it's fully developed.

When I play chess, I can never go very deeply down a single line before I get distracted by alternatives. But I guess some people are doing a depth-first search. I don't get it, and they may be wrong, but it's like a service to us all.


> I’ve heard a few good arguments in this direction before, things like how engineering trains you to have a very black-and-white right-or-wrong view of the world based on a few simple formulae,

Whenever the rationalist writers start talking about engineering, programming, or machine learning in technical terms I have a sudden Gell-Mann Amnesia realization that these people's primary domain is writing and pontificating, with little to no actual experience with engineering, programming, or machine learning.

I don't know any engineers who would characterize the world as being simply defined by "a few simple formulae"

Meanwhile, the rationalist community has an abnormally large reverence for engineers, programmers, and machine learning experts as if they were superhuman:

> But to these I’d add that a sufficiently smart engineer has never been burned by arguments above his skill level before,

Really? Sufficiently smart engineers have never encountered arguments above their skill level? Are we to assume that engineers are at the peak of logic and reasoning and skill? Is this an attempt to pander to an engineering audience? Or an actual belief that engineers have superhuman levels of logic?

Finally, I'm growing tired of the way rationalist writings go out of their way to self-congratulate rationalists while placing the rest of the population into the unenlightened normie category that we're all supposed to look down upon:

> The people I know who are best at taking ideas seriously are those who are smartest and most rational


> I don't know any engineers who would characterize the world as being simply defined by "a few simple formulae"

I'm an electrical engineer. The Standard Model of particle physics governs literally all of existence, minus gravity, and it can fit on a single page [1]. Gravity is another few equations that can also squeeze onto that same page. This is a set of fairly simple formulae compared to the unbelievable complexity of existence.

A universal Turing machine can compute any computable function and its description is even shorter.

In other words, unimaginable complexity can follow from "a few simple formulae". I don't know many engineers that would dispute this. Your problem is that you have some conception of "simple" that does not match what rationalists mean.

[1] https://www.symmetrymagazine.org/sites/default/files/images/...


> The Standard Model of particle physics governs literally all of existence, minus gravity, and it can fit on a single page [1]. Gravity is another few equations that can also squeeze onto that same page. This is a set of fairly simple formulae compared to the unbelievable complexity of existence.

Can this model explain why we have yet another war in the middle east (which is part of existence), and also why education in certain fields seems to decrease one's curiosity in the cause of such events and whether they can be reduced or eliminated?

Maybe gravity isn't the only thing missing from the model... If there was something else missing, is there anything contained within the model that would allow one to detect such a situation, or at least think about the possibility?


To further expand on the "a few simple formulae" point, many of those experienced in these fields will note that complexity can emerge from simplicity. Conway's Game of Life is my go-to example. Following some of the simplest rules you can have, you can create a fully Turing-complete system able to emulate itself.
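A minimal sketch of those rules, just to show how little machinery is involved (the glider pattern below is only an example):

    from collections import Counter

    def step(live):
        # One Game of Life generation over a set of live (x, y) cells:
        # a live cell survives with 2-3 live neighbours, a dead cell is born with 3.
        counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # A glider: every 4 generations the same shape reappears, shifted diagonally.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))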


I call this the "Rationalists spend a whole lot of time and effort ignoring their irrational parts" problem


Yes, I think that an engineer who can navigate (to some extent) human and organisational minefields will be much more effective than one who is merely super-smart


Thanks for finding the exact words for how I also feel about the rationalist community.

Source: sufficiently smart chemical engineer


It is 100% a safety valve. Humans have an inherent negativity/risk-aversion bias. Those who didn’t were weeded out of the gene pool. If someone comes up to you with a crazy sounding idea it is far more likely that they are wrong and you shouldn’t trust them. It’s one reason why obvious seeming innovations take so long to become widespread.


Take a shot anytime a “rationalist” invokes Bayes.

Long article for him to basically say nothing insightful.

Crazy people will say seemingly true things that attract more crazy people.

OK. Thanks I guess.

The two paragraphs about how good at arguments he is, and how other educated people might _maybe_ be able to out-argue him on topics where they're specialists, were so stupid I closed the tab, but then I decided to be an adult and read the rest to see if there was anything redeeming.

There wasn’t. Don’t bother.

Why do people read this guy’s crap? Did he not bother to share this with a trusted friend first to see if he sounded like an idiot?


This is strangely reductive. The article isn’t very well written but I think the key point is insightful.

The article was written in 2019, so before COVID, which was very prescient. During the pandemic the only really reasonable option was to follow CDC guidelines even though they seemed to be unreasonable, constantly changing, and even sometimes conflicting. There were large parts of the population that were smart enough to realize something odd was going on, and so they searched for their own answers, and sometimes the ones they came up with were pretty bad. Obviously we would like to do something about this, but what? What do you think should be done?


I noticed this too. Someone casually dropping "Bayesian priors" into conversation seems to be a strong signal that they're up their own arse. Sam Bankman-Fried did it a lot.


Well I thought the offhand remark about Bayesian priors was actually the real answer to his question of why we don’t accept arguments. I’m not going to accept some argument that contradicts a lot of what I previously thought was true unless it is very convincing (that can be achieved in a variety of ways). The remark about Bayes should have been front and center.
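For what it's worth, the Bayes point is easy to make concrete with a toy calculation (the numbers below are made up purely for illustration):

    # P(claim true | argument sounded convincing) via Bayes' rule.
    def posterior(prior, p_convincing_if_true, p_convincing_if_false):
        evidence = (p_convincing_if_true * prior
                    + p_convincing_if_false * (1 - prior))
        return p_convincing_if_true * prior / evidence

    # If false arguments sound convincing just as often as true ones,
    # a convincing argument shouldn't move you off your prior at all.
    print(posterior(0.05, 0.9, 0.9))   # 0.05  -> likelihood ratio 1, no update
    print(posterior(0.05, 0.9, 0.3))   # ~0.14 -> convincingness counts as
                                       #          evidence only if crackpots
                                       #          usually fail to convince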


It's a shibboleth for the Rationalist community, that's why.


> Why do people read this guy’s crap?

I'm gonna go out on a limb and say that it's because they disagree with your assessment, and think that he does have something insightful to say.


> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.

Maybe you ought to think about why that is? I wonder why "smart" people don't live their lives according to magical scenarios with no bearing on reality?


> You could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try.

Replace "the arguments" in this sentence with "complexity" for a more universal statement on human nature.

Some people deal with complexity through science. This sounds great, until you realize that science is currently systematic (step by step) and not systemic (whole picture). Reality is made of interconnected systems. A systematic approach leaves blind spots.

For instance, look at medicine. Traditionally, men were seen as less complex to study because of the lack of a (regular) hormone cycle. Our medical system is largely based on authoritatively treating women as "smaller men". This results in systemic ignorance of women's health.

Even in veterinary medicine, kittens are often treated as "smaller puppies". This results in needlessly dead kittens.

The difference between essential complexity (complexity created to handle complexity) and inessential complexity (complexity driven by another purpose) is also important. Some people rely on inessential complexity to derive authority: they sound like an expert when they talk about their "complex" system.

Understanding systems and system dynamics is useful for discerning between good information and bad information. If you can visualize an argument as a system (and as part of a system), it is easier to avoid getting lost in the details.


> Some people deal with complexity through science. This sounds great, until you realize that science is currently systematic (step by step) and not systemic (whole picture). Reality is made of interconnected systems. A systematic approach leaves blind spots.

The counter-argument is that human minds are limited, and very few people in history have been able to keep even a large % of the entire system in their head.

A huge problem we face as a species is that as the amount of information that needs to be connected to each other in order to make progress grows, fewer and fewer people are able to make large contributions, meaning we'll accumulate knowledge and understanding at a slower rate.

Hopefully technology will help here, and people have been working on using computers to try to connect those points, but I am not aware of how successful those efforts have been.


The problem we face is being a culture of many specialists, with few generalists. Generalists are capable of managing a wide range of information and have the ability to transfer skills and knowledge between domains.

You don't have to model an entire system in your head to understand the dynamics. That's a specialist, systematic point of view.

The technology will only help if we change the culture and change how we educate and incentivize the next generation of humans.


> The problem we face is being a culture of many specialists, with few generalists. Generalists are capable of managing a wide range of information and have the ability to transfer skills and knowledge between domains.

Sadly different fields of science can each take a lifetime to master, and even closely related fields use different jargon[1] internally, meaning that a person can be an expert in something, read a journal article about that topic but written by an expert in another field, and not fully understand what is being said w/o significant mental work translating concepts from how one field talks about them vs the language and terms the reader is used to from their own field of study.

[1] Defined as specialized language to make communication within the field easier


You're describing a procedural problem - different words for the same concepts. This is one way science is not systemic. As a result, experts end up solving the same problems in different silos.

Generalists can spend time "translating" between subjects, find the common threads, and find missing pieces from other domains. Most domains have developed independently from each other, and there are a lot of missing pieces.

Specialists are important, but specialists without generalists get us to where we are today.


The real tie-breaker is experiment. It's telling that his primary example of "epistemic learned helplessness" is historical narrative. Historical narratives are solutions to a boundary value problem where the boundary is simply "the world at present". There are infinite numbers of plausible stories that are consistent with "the world at present", but they can't be tested unless they overtly predict something, and even then they generally can only be tested over relatively long periods of time.

The disturbing trend that I see is that stupid people stubbornly cling to falsified beliefs, like flat Earth theory, to satisfy some external incentive, like the need to belong or the need to push against consensus. This is an expression of Isaac Asimov's famous observation: "There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'" That is, an attack on knowledge itself. It's harmless enough until it generalizes. Meanwhile, smart open-minded people run the terrible risk of a similar generalization, which also ends up functioning like an attack on knowledge itself.

The solution is to weight knowledge in inverse proportion to its distance from you in time and space. That is, I can know what is near to me, but far things are less known. If you adopt this posture toward knowledge, you can approximately partition off epistemic learned helplessness, associating it with far-away things, while maintaining a rational confidence in knowledge that is close to you. And if you run across a historian who speaks with utter confidence about a historical narrative, run.


The world's lack of good tools for assessing the truthfulness of arguments is my fault. Some things came up that stopped my work on the next iteration of https://howtruthful.com/ but only temporarily. Expect some big changes in 2024.


If you think you have good ideas write a position paper and open it up for comments. A wiki even.

Your site currently has little food for thought.


There's an "intro" link to a video at the bottom of the site. Comments can also go on this essay: https://www.lesswrong.com/posts/f2CftRRndH97RpAwL/how-i-got-...


Everyone is convinced they arrived at their ideas rationally.


Not everyone is right. Then again, some are.


If you are an ordinary person, this is the way to go. You are not in the business of evaluating arguments. You need to live your life, and for that you accept the most reasonable view on everything, verified by your peers.

However, if you have a shot at being part of the intellectual elite of your society, epistemic helplessness renders you unfit. And we see a lot of people who look like the intellectual elite, and are paid to occupy that place, but do no reasoning, evaluating, or believing of their own. That's why we seemingly only get problems piled up on us.

Imagine a judge who has a popular preconception of who is right and who is wrong before hearing the actual case and applying their own experience.


Why is it assumed that someone who is convinced by arguments is prone to wrong ideas? Wouldn't that type of person become iteratively closer to the truth every time they hear a new argument and compare to the others they've heard?


I think at some level for certain topics and areas there is a minimal level of knowledge and information you need to know in order to evaluate or even understand advanced arguments about the topic. In a sense you won't become closer to the truth until you reach that minimum level of understanding and before that you will just ping pong between positions without a real understanding of whether you are correct or not.

Many arguments especially advanced ones are building on a presumed level of knowledge that many people do not have or are even aware that they should have. Doing the work to adequately evaluate an argument in every domain is probably not worth their time.


> you won't become closer to the truth until you reach that minimum level of understanding

This might be my misunderstanding of the article. How do you gain knowledge at all without considering new arguments you hear?

> Doing the work to adequately evaluate an argument in every domain is probably not worth their time

Sure, but why not admit you just don't know then instead of "sticking to your prior" and believing something you have no evidence to believe?


If you "admit" you don't know then you by definition aren't convinced by the argument and thus not falling into the trap. But it doesn't help you make decisions any easier for stuff that is relevant to you and therefore requires you to make a decision. In that case you are going to have take a leap of faith so to speak. And choosing the way you leap might be better informed by consensus by experts in the field rather than speculative investigations on the fringe.

You might still be wrong but your chances are probably improved in the correct direction most of the time.


> This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.

If two diametrically opposed arguments are equally convincing, it's not intuitive to me that the "correct" solution is a middle ground. Especially if both arguments state that a middle ground is not possible.


I guess if I encountered two diametrically opposing arguments of equal validity I would increase my uncertainty, not just stick to my prior. That's super different to just ignoring new arguments, right? Like going from "I think X is the best candidate" to "I don't know who the best candidate is".


Oh, that's a good point, and I think your idea is a better behavior model than the "stick with prior" idea.

What if the conclusion forces behavior, though? For voting, you can abstain if you're unclear, but what if you're in a position where you have to choose between two distinctly different options?

In this case, I think you pretty much have to stick with your prior, but I'm curious if you have a more nuanced approach here.


> I think you pretty much have to stick with your prior

Yeah, at least to me that makes sense if you're forced to make a choice. Although in some ways if you really have no clear evidence it doesn't matter what choice you make, since both are equally reasonable.

> I'm curious if you have a more nuanced approach here

Not really :D. I do think things are often more complicated than "you must choose between two options". For example, you might have to choose between two candidates (in the US at least), but if you are more uncertain, maybe you don't put a giant poster in your front yard or get upset at neighbors who have a different opinion.


In general this is not how biological life has evolved and not how most decisions work within that framework.

Most decisions you'll ever make are "You have ___ seconds/minutes/hours to make a choice". And depending on the pressures you're under you will skip the expensive logical/rational processing as needed and fall back to heuristics.


https://en.wikipedia.org/wiki/Brandolini%27s_law

>Why is it assumed that someone who is convinced by arguments is prone to wrong ideas?

To put this a different way, if humans actually behaved in this manner in any significant numbers then humanity should have discovered the sciences and been uplifted a long time ago.

Instead that's not what we see at all. A more likely reason is Bullshit Asymmetry, that is, humans throw out so much made up crap that using arguments alone will burn the observable universe away in a puff of entropy.


> Bullshit Asymmetry

If we could solve the Bullshit Asymmetry problem, imagine the progress we could make as a species.

Someone reads that "ACKCHYUALLY, if you're operating a car for personal reasons, and not for hire, then you're not actually 'driving', you're just traveling, and you're not legally required to have a driver's license or car registration! The cops and the courts have it wrong!" just one time, and it's now a permanent belief, and absolutely nothing you do or say can convince them otherwise, and now you've got another Sovereign Citizen.

That's just one example. Flat Earthers, various conspiracy theories, some anti-vax beliefs...


Keep in mind that the normal learning process involves students asking questions out of left field, surprising and frustrating even long-time experts who might need some time to consider how to answer even if the expert has the knowledge to notice the fallacy.

If removing bullshit puts an undue burden on that learning process that causes students to refrain from asking the questions that will help them understand the topic, then you didn't ACKCHYUALLY remove it, you only moved it. :)


Because the most convincing argument is not necessarily the most correct, nor does comparing one argument to another discriminate truth from lies.


Not necessarily, but it's arguably as good as you can get after aggregating over a set of people independently analyzing the same question.


Comparing them requires sometimes rejecting new ones. This is different than being someone who is convinced by all arguments they hear.


If the people are convinced by all arguments they hear, at that point it comes down to which argument they heard most recently, so let's hope the media is autonomous, ethical, independent, & responsible


I know nothing about that guy, but several commenters point out that debating here does not mean making correct arguments, only making convincing ones. If that is the case, why would anybody care? I can imagine a few situations where this might be a useful skill, for example if you have some ideological predisposition and want to convince others no matter the truth, but that does not really sound like a good goal; it essentially amounts to manipulation.


Discussions on similar submissions:

Epistemic Learned Helplessness https://news.ycombinator.com/item?id=32369238 (August 6, 2022 — 17 points, 1 comments)

Epistemic Learned Helplessness (2013) https://news.ycombinator.com/item?id=20118865 (June 6, 2019 — 146 points, 60 comments)


So where is gHeadphone on this? I'd like to read why they shared this.

The writer, though, may actually be less egotistical than it seems. By the end of the post they are saying that their ability not to believe a good-sounding argument is exactly the same approach an uneducated person has... saving them both from believing the wrong things. The funny thing is he gives up trying to help the reader discern just what the right thing might be.


When I consider how much specious, fallacy-riddled reasoning is produced by EAs, all hunkered in their AI bomb shelters praying for and against the end, I suspect that the "problem" that Scott's friends face is not learned epistemic helplessness, but the simple fact he cites that other people just don't want to spend the time on their apocalyptic verbosity.


This feels like it belongs on /r/iamverysmart.

Most people don't argue within moral frameworks. Most people don't live logic-based lives. Most people prefer happiness to correctness, peace to progress, and quietude to justice. This is a very difficult thing, and not a very good thing, for those who want change.


Hobbes called it. Fully coming to terms with the downstream implications of this assessment, which Hobbes also did, is less palatable.


> Most people don't argue within moral frameworks. Most people don't live logic-based lives.

Yes they do. You may contest aspects of the foundation of their logical and moral framework of choice, but most people on earth adhere to a moral or religious framework. Those are logic-based and that is what apologetics and natural theology and philosophy are for.


I would argue it is a very good thing. Most “bad” things historically have been created by those clamouring for progress and justice.

Like Bukowski said

those who preach peace do not have peace

beware the preachers

beware the knowers

beware those who are always reading books

I would extend it as

those who preach justice are not just

those who preach rationality are not rational

those who preach correctness are not right

And those who preach progress are against change

/edited for readability


Every new good or bad thing is by definition a change.


One downside of the specialization in a service economy is that people no longer are as resourceful as the older generations.

Older people could fix and build anything. To me, some of the things they do are totally MacGyver-level shit.


I think there's a partial issue here... It is not just about the people, it is also about the devices.

Most of these people you're talking about today, if given an old device and a simple set of instructions, could fix said device.

Now, I like to think I can pull off some MacGyver-level shit, but when the first step of fixing a modern device is "Heat the device to 250F to break the epoxy bond and then take it to your solder station", actually fixing this stuff is closer to galaxy-brain level.


This discussion is completely about propositional statements, as if that's the only way to know anything. I would say, don't let mere words change your sense of what's real.


It's the law of the excluded middle misapplied to everything. When you think you have to decide an argument is either completely true or completely false, while in reality no argument is.


> When you think you have to decide an argument is either completely true or completely false, while in reality no argument is.

Hmm, that sounds suspiciously like an argument that is either completely true or completely false...


Perhaps. Mostly.


I'm curious what the author really means by "average uneducated person". It's rare for someone to be truly uneducated. Less knowledge of the domain(s) in question?


Why must these rationalist / atheist / EA people come across as the most smug teen debate club nerds?


Maybe because the rest are so in love with their own opinions.


He makes a convincing argument to (checks notes) not be convinced by convincing arguments.


I am strongly reminded of https://thingofthings.substack.com/p/rationalists-and-the-cu..., which concluded that the kind of person who believes complex irrational arguments is also the kind of person who falls for conspiracy theories, weird cults, and various kinds of whacky beliefs.

That said, I would advocate a more systematic version of the article's conclusion that we should accept we can't reliably think these things through. Instead, borrow an idea from Superforecasting: one way to get better at predictions is to start by constructing an outside view and an inside view.

Suppose we want to predict the odds of a thing happening. First we construct the outside view: make a tally of roughly comparable situations and ask in how many of them the comparable thing happened.

The inside view is a detailed analysis of how the thing could happen this time. You can formalize this as a Bayesian analysis.

The problem with the inside view is that, confronted with a way something could have happened, we overestimate all of the probabilities. We wind up too certain that it did. Conversely if we don't think of a way that it could happen, we wind up being too certain that it could not.

So the inside view has too much variance. We fix that by adjusting the inside view towards the outside view. How much you should do this takes some practice to figure out. But actually doing this makes your predictions more accurate.

Instead of calling this epistemic learned helplessness, I would call the need to adjust toward the outside view "estimated epistemic uncertainty". And yes, people who practice doing this really do become a lot better at predicting what will prove to be true in uncertain circumstances.


This is a terribly negative take on a very useful skill. Perhaps a better description would be: “Recognizing the limits of our own powers of discernment”

We have to acknowledge that we don’t have sufficient understanding and expertise to differentiate truth from bullshit in all fields. It is reasonable therefore to turn to scientific consensus and to disregard the arguments of bullshitters.


Strongly disagree.

> Anyone anywhere – politicians, scammy businessmen, smooth-talking romantic partners – would be able to argue them into anything.

Epistemically helpless people are the easiest targets, because conmen don't try to convince you of something you don't believe. They use what you already believe against you.

The vaccine against bad arguments is not ignorance. It is curiosity and inquisitiveness. It's always searching for debunks of every well-argued thesis you encounter, and evaluating both the thesis and the debunks. After a few such exercises on varied subjects you start to notice the patterns, and you can learn to separate personal charm and the attractiveness of an idea from the truth.

People lie and con and speak absurdities in the same ways all the time, about everything. You can learn to spot it if you put your mind to it.


Didn’t this guy get outed basically admitting that he launders racist ideology into something more palatable to liberals? https://news.ycombinator.com/item?id=26305570


(2013)


(2019)


>I don’t think I’m overselling myself too much to expect that I could argue circles around the average uneducated person. Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!” Or, more plausibly, “Shut up I don’t want to talk about this!”

> And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments.

He is admitting he might lose an argument against experts with decades' worth of experience in their topic (most humble rationalist).


If anything, Scott is underselling himself, and could probably argue circles around the average educated person.

Debates are the rationalists' bread and butter, and you should expect them to be better at it than you, the same way you would expect a trained boxer to beat you in a fistfight.


I've been to dozens of rationalist meetups (I don't know why); people are constantly talking out of their ass while constantly referencing contrived rhetorical devices. A lot of "just asking questions", confidently asserting solutions to e.g. Palestine, cartels, etc., and contending with things TV pundits say.

I like some of Scott Alexander's writing, but "rationalists" in general online or IRL have not been very impressive to me.


Haha yeah, I've been to rationalist meetups too, and have come to the same conclusion that I don't like them very much. Too much IQ stroking for my tastes.

Scott specifically though, has an established history of thinking through arguments that I find much better than most, and I still think someone who trains at this stuff will do better.

To extend my analogy a little further from before: if you go to a boxing gym or boxing-enthusiast meetup, there will be plenty of people there who like the idea of fighting but don't know how to fight. That doesn't mean a trained boxer won't still be tough in the ring, though.


With boxing you know who won because the judges say so. If Scott argues something at a party, and the other person says "I don't know why you are wrong but you are." Who is to say who "won"? Scott comes away thinking he's an amazing thinker. The other person comes away thinking he is a contrarian good at inventing plausible sounding nonsense. Who won here?


I think part of the trouble many here are having is that they don't even understand why Scott goes to the party, or argues, or wants to convince people of his arguments.

Surely, if no one reading my comment here ever agrees with me, my life doesn't change for the worse at all. And if people do agree with me, again, my life doesn't change for the better. I'm mostly here to learn from you people, you sometimes say interesting things I am unaware of, or introduce interesting perspectives I might not have seen for myself.

I have noticed among humans though, that you argue and debate like this, because you're basically monkeys and monkeys do the social monkey thing. You're trying to achieve higher status by climbing above the other monkeys, whose status will lower (if only a little). This is not as entertaining as violent fights, but much less risky (at least in some cultures). You're posturing.

It has little appeal to me. I am unable to determine if it holds appeal for the rest of you, or if you just can't help yourselves.


Consider the idea that some people feel that they benefit internally from having the experience of becoming less wrong than they were at some earlier time.

There are many ways of attempting to become less wrong, and without doubt many of those who follow those ways have other motives. But I do think that there is room for people for whom there is an intrinsic motivation in seeking out the experience of discovering that what they used to think is less correct than something they've just been introduced to.


> Consider the idea that some people feel that they benefit internally from having the experience of becoming less wrong than they were at some earlier time.

And helping others become less wrong by sharing one's perspective that was shaped by experience and accumulated knowledge.


> Scott specifically though, has an established history of thinking through arguments that I find much better than most

He has an established history of writing down detailed articles about arguments, yes.

Unfortunately, a significant number of his articles contain schoolboy errors that anyone who wants to claim they are better at thinking through arguments than the average person should never have made.

I'll mention just one such article:

http://slatestarcodex.com/2013/12/01/empireforest-fire/

Scott says:

"...democratic nations, like the US and UK, which have gone three hundred or so years with only the tiniest traces of state-sponsored violence (and those traces, like the camps for the Japanese during WWII, have not come from the Left)."

The two obvious, egregious schoolboy errors here are:

(1) Don't wars count as "state-sponsored violence"? The US and UK certainly haven't gone three hundred years without wars. And some of those wars were civil wars, so even the lame excuse of "well, wars are violence, but not against the state's own people" doesn't count.

And even if we let that pass, what about the US's treatment of Native Americans? What about the UK's treatment of Ireland? And so on. To say that the US and UK have gone three hundred years with "only the tiniest traces of state-sponsored violence" shows a level of historical ignorance that is just staggering.

(2) But even if we let all of the above pass: doesn't Scott know that the President who interned Japanese citizens in WWII was from the Left?


> To say that the US and UK have gone three hundred years with "only the tiniest traces of state-sponsored violence" shows a level of historical ignorance that is just staggering

Scott is not ignorant, therefore it seems more plausible that you should reconsider whether Scott is making the point that you think he's making.


While Scott does make errors, I've found he works hard to avoid making them and is forthright when he does (e.g., not everyone has a Mistakes page in the main navigation bar of their site[1]).

What you're missing in your example is that, in context, the state-sponsored violence he's talking about is "against one's own people" (e.g., he also refers to it as a "reign of terror" a few times, like those of Robespierre, Stalin, Pol Pot, etc). I think there's a difference of kind between say the 19th century US's (awful) treatment of Native Americans (who were explicitly treated as "other" very consistently) and Stalin's (also awful) dekulakization (in which one-time typical members of Russian society were declared enemies of the state and purged).

So his point isn't that the UK and US have done no wrong; it's that, despite having had democratic systems for centuries, they haven't gone through the kind of reign of terror that Scott's interlocutors claim they should have.

[1] https://www.astralcodexten.com/p/mistakes


Btw, I noticed another egregious error: "France, where a reign of terror five years after the Bourbon monarchy is clearly contrasted with a hundred fifty terror-free years since it became democratic in 1870." Um, what? France is now on its Fifth republic, and the transitions haven't always been peaceful. Not to mention Indochina, Algeria (France's own leaders explicitly described what the French did to subdue Algeria as "terrorism"), etc. And many of the people who were terrorized in Algeria were French citizens.


> he works hard to avoid making them

He does so when he sees that there is a potential mistake to be made, yes. But he doesn't seem to me to work hard at all at questioning things that he appears to think are too obvious to need questioning. And yet those things are where he makes the egregious mistakes.

> forthright when he does

Yes, I agree with this: once he recognizes he's made a mistake, he's much better than most at acknowledging it.

> What you're missing in your example is that, in context, the state-sponsored violence he's talking about is "against one's own people"

I don't buy the hairsplitting distinction he's making there, but even leaving that aside, I already addressed this point in my post. The UK fought a war against its own people in the American Revolution. The US fought a war against its own people in the US Civil War. The US fought against Native Americans. The UK fought against Ireland. And so on.

> I think there's a difference of kind

I think this is even more of a hairsplitting distinction that I don't buy. But again, leaving that aside, if Scott wanted to make an argument for such a difference in kind, he should have made one. Just taking it as so obvious as not to need any argument is not justified. And the need for making such an argument doesn't even seem to be on Scott's radar. I find that either astoundingly obtuse, or astoundingly disingenuous.


I agree with you, and I think this is a recurring flaw you'll see in Rationalist communities: they love smart, Rationalist, Just So stories ("X can be explained by Y", never mind whether that explanation is actually correct... and best not to mention it if you want to be an upstanding member of the community).


> haven't gone through the kind of reign of terror that Scott's interlocutors claim they should have

There's another point about this as well. Scott claims in the article that his "alternate model" makes better predictions than his "Reactionary model" (which is something of a straw man, but let that pass). But if we look at the cases he cites, here is how they actually stack up against the models:

French Revolution: Ended up with Napoleon taking power as Emperor, i.e., a monarch. Sure looks more like the Reactionary model to me.

Russian Revolution: While the USSR did end up falling apart of its own weight after decades of terror gradually morphing into somewhat less terrifying bureaucracy, what has happened to Russia since looks more like "repressive monarchy" than "government mellows out and does pretty okay".

Chinese Revolution: I suppose that the fact that the Chinese Communist Party has adopted some features of capitalism in order to allow the country to actually have some economic growth might count as a sort of "mellowing out", but it would still be very hard to make a case that China's government is closer to "doing pretty okay" than it is to "strong repressive monarchy".

In other words, even if we agree that the US and UK have reached Step 6 of Scott's "alternate model", historically, those countries (and other countries in the British Commonwealth, like Canada and Australia) are the only cases where that has happened. Historically, reigns of terror brought on by repressive regimes have in all other cases led to new repressive regimes.

(Btw, this is not to say that the Reactionary model's claim that the new monarchy in its last step is an improvement, is correct. The actual historical facts are that the new repressive regimes are often worse than the old ones.)


> haven't gone through the kind of reign of terror that Scott's interlocutors claim they should have

I'll comment on this separately because it goes deeper into the central claim of the article. If you're going to make this claim, you need to ask whether the UK and US have actually avoided such things, or merely made sure they happened in other countries instead.

The US under Woodrow Wilson enabled the Soviet Union to exist by failing to support the Kerensky government, even though the US had troops in the area, and by discouraging the other allied powers from helping on the grounds that everybody was already too exhausted after four years of war. (Not to mention that Communism wasn't of Russian origin; it was exported to Russia by American and British intellectuals, such as Jack Reed. And one of their reasons for doing so was that Communism was supposed to be more "democratic".)

The US, UK, and France created the conditions for the Nazis to take power in Germany by imposing such harsh terms in the Treaty of Versailles (terms which were nothing like what the Armistice had implicitly promised).

The UK under Chamberlain, along with France, enabled Hitler's Germany to conquer Eastern Europe by pursuing a policy of appeasement.

The US under FDR allowed Stalin to take over all of Eastern Europe at the end of WW II (which, btw, made WW II a failure in Europe since the stated objective was to free Eastern Europe from tyranny, and Stalin by any measure was a worse tyrant than Hitler) because FDR wanted to suck up to Stalin and be his friend. (For example, read the historical book Stalin's War.) And even before that, FDR and his administration (and the press, such as the New York Times) systematically lied to the American people about the true nature of Stalin's regime, which was well known to the US government (and to reporters who were there) in the 1930s. If the American people had known what was actually happening in Stalin's Russia, they would never have agreed to having the USSR as an ally in WW II or allowing the USSR to take over Eastern Europe.

The US allowed Mao and his Communists to take over China by withdrawing support for Chiang Kai-Shek. If the US had supported Chiang, the entire country of China would have a government that's basically the government Taiwan has now (since Taiwan is where Chiang and his supporters went when Mao drove them out of China). Imagine how much better the geopolitical situation would be if that were the case.

Sure, if you want to make hairsplitting distinctions, none of these counted as "state-sponsored violence" against the people of the US or UK. But Scott's central claim in the article is that it is repressive monarchy, not "democracy", that causes reigns of terror. Before making such a claim and exonerating "democracies", one should at least examine the possibility that the "democracies" actually enabled the reigns of terror. And when you examine that claim historically, you find that, yep, "democracies" were doing exactly that.


I think you'd have to make some outlandish claims if you wanted to connect most of the material you're trying to bring in to the direct topic of this piece. Re-read the pieces Scott is responding to if you want to see what I mean—the topic of this piece is fairly narrow, and most of what you're proposing to bring in is non-sequitur without some pretty wild connective tissue added, which connective tissue I doubt you'd want to try to defend. Stuff like "leftward-trending liberal democracies tend to become more and more permissive of egregious domestic political violence in other countries over time" would be the minimum to make any kind of even oblique sort-of connection to the actual topic of the piece, and... surely not, right?


> I think you'd have to make some outlandish claims if you wanted to connect most of the material you're trying to bring in to the direct topic of this piece.

I disagree. See the other comment I just posted upthread about what happens when you compare the actual historical events in the cases Scott cites to the two models he describes, the "Reactionary model" and his "alternate model". The fact that the particular "liberal democracies" Scott references, the US and UK, do their "state-sponsored violence" by proxy instead of directly is not a coincidence: it is what allows those countries to claim that they are "liberal democracies" and don't have "state-sponsored violence" for the benefit of their voters, while blaming those "repressive regimes" for all of the mayhem in the world--when in fact the "liberal democracies" themselves are just as much to blame.

> Stuff like "leftward-trending liberal democracies tend to become more and more permissive of egregious domestic political violence in other countries over time"

While I think this is actually true (the US has caused far more mischief recently in the name of "spreading democracy" than it did in the 19th century, for example), it is not the argument I was making. The argument I was making is simpler than that, and is summarized above.


But the argument you'd prefer to focus on isn't connected to the topic in Scott's piece, because Scott's writing to address specific claims with predictive power (as in: we can "replay the tapes" of history and see if what we'd expect to see, if they're true, is evident—and they're strong and confident claims, so we'd expect it to be pretty clear if it is) in a couple other pieces, to which pieces the argument you want to make also isn't connected.


> Scott's writing to address specific claims with predictive power

Which his model does not have. That was the point of the post of mine upthread that I referred you to. The actual cases he cites (French Revolution, Russian Revolution, Chinese Revolution) are a much better match for the Reactionary model (though with a caveat that I gave in my upthread post--the final "monarchy" step in these cases, e.g., Putin's Russia, is not an improvement) than they are for his alternate model. (His justification for his alternate model matching the Russian Revolution better is that Khruschev and Gorbachev were "more mellow" than Stalin, but step 6 of his alternate model says that the government "does pretty okay", which was not true of Gorbachev's USSR any more than it was of Khruschev's--not to mention Putin's Russia, as I said above.)

So why is Scott so confident of his alternate model, if it actually is worse, on net, than the Reactionary model he criticizes? Because he fails to realize that the US and UK (and other countries in the British Commonwealth like Canada and Australia) are not the historical norm, but are historical outliers. They have managed to settle into "liberal democracy" which might be "mellow" and doing "pretty okay" at home only by exporting all of the "state-sponsored violence" elsewhere, where their voters can ignore it and their politicians can pretend it's the fault of those other "repressive regimes".

In other words, the egregious historical errors Scott makes in this article, which was what I originally called out in my first post in this discussion, are closely connected to the main point of his article: he is only able to even entertain that point, much less write an entire article about it, because of the false historical narrative he has.

Here's another example, where Scott restates the article's central claim (from which its title is taken):

"A monarch who voluntarily relaxes their power before being forced to do so by the situation – like the constitutional monarchs of Europe or the King of Thailand – is performing a controlled burn, destroying the overgrowth that would otherwise cause a fire and skipping directly from 1 to 6."

Tell that to Tsar Nicholas II, who voluntarily abdicated when his high officials convinced him that it was for the good of the country. Russia not only did not "skip directly" from Scott's 1 to 6, it never reached his 6 at all (see above). Since this was one of the examples Scott explicitly chose, you'd expect him to at least check to see whether the actual historical facts matched his narrative.


The claims he's addressing are, boiled down:

1) That leftist movements inevitably become more prone to violence (purges, genocide, violent repression of opponents in general, "reigns of terror") over time, and

2) Democracies have a strong-bordering-on-inescapable tendency to incubate such leftist movements, and to grow more leftist over time.

What we should expect to see, then:

1) Radical, violent leftist movements that gain power get ever-more violent over time.

2) Democracies become more unstable and prone to those specific sorts of violence over time, and this unabated increase pushes ever closer to crisis points.

What we see instead is:

1) The worst leftist violence tends to be immediately preceded by authoritarian governments (which is what these fringe folks Scott's responding to want more of—this is a very specific and out-there movement making very specific and out-there claims, not the entire field of criticism of leftism or democracy), rather than to be preceded by democracy; to reach their fever-pitch very quickly; and to cool off over time rather than doing what we'd expect based on what was claimed to be true, which would be for them to typically get worse the more time passed.

2) Meanwhile, democracies seem... fairly stable, actually, without a clear, inevitably-trending-upward trend line on leftist-induced violence and chaos, or what have you. Fluctuating, sure, but where's the trend line for specifically that? Where are the ones ending in leftist reigns of terror? All of them are supposed to be heading toward a fever-pitch of leftist purges and genocide. Like, that specific thing is what was claimed. Does it look that way? LOL no.

I think what's key to following this is that the thing he's arguing against is a pretty fringe political view. He's not addressing some more-tame, more-mainstream criticism or model-for-the-development of either the left, or democracies, that might be stronger. He's trying to suss out whether the above, specifically, appears to describe actual, observable tendencies of leftism and democracies in the real world.


> The claims he's addressing are, boiled down

As I said in my post upthread that I referred to, I am not arguing for the claims that Scott is arguing against. (I do say that Scott's "Reactionary model" is a better historical fit to the cases he cites, but only with the key caveat I gave about the final step, and that caveat directly opposes the claims that Scott's "reactionaries" make based on that model.)

I am arguing against the claims that Scott is making about his "alternate model". His article is not just rebutting his version of "Reactionary". He is making claims of his own. Those are what I am addressing. I have already explicitly quoted claims that he makes that are historically false. Those claims are what support his "alternate model", which his article is arguing for.


Your claim 2) does not imply your "expect to see" 2). Why? Because democracies don't have to incubate leftist movements in their own country. They can incubate them elsewhere. Which, indeed, they do, as I have said.

This, in itself, is not an argument for the "reactionary" claims Scott is arguing against, for reasons I have already given. But it is an argument against Scott's "alternate model".

Also, democracies can grow more leftist over time (which, I would argue, they have) without becoming internally unstable, as long as a majority of voters continue to vote for policies that move further and further left. Which I think is a fair description of what has happened in "democracies" over the past century or more. Whether this tendency can continue indefinitely is a different question.


These are interesting claims and points, perhaps, but remain disconnected from the original material. In particular:

> They can incubate them elsewhere. Which, indeed, they do, as I have said.

This needs an immense amount of development to maybe qualify as both connected-enough to this topic to belong in Scott's piece, and a strong enough claim to be worth either explaining and refuting, or adopting and defending. You're claiming that things like declining to prosecute a war against the USSR after WWII is an example of an action that acted as an outlet for what would otherwise have become domestic US leftist political violence in the US. There are, like, several things about that, and your other examples, that need to be filled in before it might be clear that makes any sense at all, plus some kind of pattern of this kind of thing increasing over time needs to be established that can't easily be explained by other, more-straightforward factors. Notably an awful lot of these examples are failures to act—what's that about? How's that an outlet for a kind of "energy" that would otherwise push the US closer to a reign of terror? Why should we think that sort of thing can act as such an outlet? What's the connection between those things? I see none whatsoever—it is not obvious this should be entertained as a relevant and strong line of inquiry.

> Also, democracies can grow more leftist over time (which, I would argue, they have) without becoming internally unstable, as long as a majority of voters continue to vote for policies that move further and further left.

Maybe! But it doesn't appear to make them ever-more violent in the specific way in question. The real events and trends we have before us really don't appear to fit—not to fail to fit your claims, but by the claims made by the folks Scott wrote the piece to address. (This isn't me arguing against you—I follow that you do get that what you're aiming at is Scott's alternative model, not that you're arguing in support of the Reactionaries)

You have proposed some reasons different from Scott's that this may be the case, and fault him for not addressing your proposed reasons, but it remains unclear to me that there's a strong line of argument there, specifically as it relates to the topic at hand. It's not clear to me that he should have brought it up, or that it makes his argument weaker that he did not, let alone that it's part of some set of "schoolboy mistakes" to have not done so.

What I don't find any of this back-and-forth convincing about, is that this piece of Scott's is in error for failing to address this stuff. I definitely am not convinced that failing to entertain (or even mention) some kind of, "the 'temperature' of US leftist domestic political violence has, perhaps, remained cool only because we did stuff like not do much to help Chiang Kai-shek" explanation, constitutes an elementary error.


> You're claiming that things like declining to prosecute a war against the USSR after WWII is an example of an action that acted as an outlet for what would otherwise have become domestic US leftist political violence in the US.

I am making no such claim. My claim is simpler: Scott's argument is that "liberal democracies" are the best way to prevent "state-sponsored violence". But that argument can't possibly be valid if "liberal democracies" in fact cause violence in other countries.

Scott's rebuttal to that claim is to gerrymander the definition of "state-sponsored violence" so that it only counts if it's against the citizens of the "liberal democracies" themselves. But that is exactly the problem: people like Scott can pat themselves on the back about how great "liberal democracies" are only by ignoring the historical record of "liberal democracies" sponsoring all kinds of violence in other states besides their own.

If Scott were to remove the blinders he put on by defining "state-sponsored violence" in such a narrow way and take an honest look at the historical record of "liberal democracies", he would never have even tried to write such an article. Instead he would be directing his intellectual resources towards a much more useful inquiry: why do "liberal democracies" sponsor so much violence in other countries--especially when, in every other country besides their own, sponsoring all that violence never even leads to liberal democracy? Why don't they see the obvious contradiction between their stated principles and the actual results of their actions? But that question isn't even on Scott's radar, because of his ignorance of history.


> What I don't find any of this back-and-forth convincing about, is that this piece of Scott's is in error for failing to address this stuff.

Scott's argument--not his anti-Reactionary argument, but his argument in favor of his alternate model--depends on particular historical claims. Those claims are false, and they're false because of the historical errors he makes. Those historical errors are not about small or peripheral points. They are about points that are central to his argument. I have already explicitly quoted and discussed them and I won't repeat them here. I just find it difficult to see how Scott is not in error for failing to spot these historical mistakes that he makes, particularly in view of his reputation for supposedly being much better than average at making valid arguments. Making valid arguments is not just a matter of using reason correctly. It's a matter of reasoning from correct premises, and making sure that you are in fact doing so.


Have you considered that it's possible that both you and Scott are particularly good at reasoning, and it's everyone else that is much worse at this stuff?

You're pointing out a bunch of errors, but it's not clear to me that they're schoolboy errors (i.e. errors a schoolboy wouldn't have made). I would also venture to guess that the average educated person would make much more egregious mistakes.


> it's not clear to me that they're schoolboy errors (i.e. errors a schoolboy wouldn't have made)

If this is the case, IMO it's more a reflection of the awfulness of the schooling in our time than anything else. No schoolboy in the US of a century ago would have made such egregious historical errors, because they would have been taught some actual history instead of what students in the US today get taught.


My tongue in cheek response is that you are correct, because a century ago, WWII didn't happen yet.

Can you provably demonstrate that education quality was better a century ago? I'm not outright trying to refute you, so much as I have a belief that people tend to overstate the "good old days", so I usually prefer more concrete data points.


> the Left?

You mean the GOP's modern definition of a leftist, or the definition that prevailed for the vast majority of that time?

https://encyclopedia.densho.org/Norman_Thomas/


The Democratic party in the US, at least since Woodrow Wilson, has been on the Left. I don't think that is a matter of any serious dispute.


It's a matter of very serious dispute if you were to read people who are actually on the left, or who live outside the USA.

The stated policies of the US Democratic Party barely correspond to those of leftist parties worldwide. Their actual policies when in power are even further from those of leftist parties worldwide.

So... are they left of the US Republican Party? No question. Are they "on the left" in any broader sense? That depends very much on your conception of the nature and scope of the political space. For most people in most parts of the world, the US Democratic Party is a completely centrist party that would never be termed "leftist".


> It's a matter of very serious dispute if you were to read people who are actually on the left, or who live outside the USA.

Ah, that old standby "No True Scotsman". Sorry, not buying it.

> The stated policies of the US Democratic Party barely correspond to those of leftist parties worldwide.

No political party's stated policies can be taken seriously, since political parties, on immense amounts of historical evidence, will lie about their actual goals as much as they need to in order to get elected and stay elected.

For example, the Democratic platform that FDR ran on in 1932 did not look very leftist, but the actual things he did once in office were most definitely leftist, and bore no resemblance whatever to the platform he ran on. Did any Democrats object? Hollow laugh.

If you want to argue that the US Democratic party is not as far left as, for example, leftist parties in the UK or Europe, yes, that's probably true. But saying that that means the US Democratic party is not leftist is like saying that the Atlantic Ocean is not an ocean because it doesn't have quite as much water in it as the Pacific.


No, it's like saying that Walden Pond is not an ocean because it doesn't have quite as much water in it as the Atlantic or Pacific.


If you seriously think that the US Democratic party, as compared to leftist parties in Europe, is closer to Walden Pond than the Atlantic Ocean in terms of leftism, then you and I clearly live on different planets and we don't have enough common ground to have a useful discussion.

In fact, historically speaking, even the US Republican party today is closer to the Atlantic Ocean than Walden Pond in terms of leftism. What today's US Republicans consider "conservative" would have been considered so far left as to be radical to the US Republicans of the late 19th century.


A party that overtly supports capitalism is not a leftist party.

It's at most (in the direction of leftism) center to center-right.


> A party that overtly supports capitalism is not a leftist party.

I'm not sure how the US Democratic party "overtly supports capitalism", since it is the party of government micromanagement of every aspect of business.

I'm also not sure how leftism is inconsistent with support of capitalism, unless the latter is taken to imply a complete rejection of socialism. Which is certainly not a good description of the US Democratic party. To the extent it does "support" capitalism, it is only as one aspect of a society which the party wants to organize along mostly socialist lines.


Americans have little knowledge of their history; they can't imagine an era when there were actual socialists, Marxists, communists, fascists, and literal Nazis in uniforms holding conventions in NYC.

All choices must be binary.


My current hot take, as a longtime casual outside observer of the movement, is that the Rationalism movement is basically just Mensa with no membership requirements.

And if you know Mensa's reputation....


Debates aren't about arriving at the truth. They are rhetorical combat to 'win' an argument. Most people don't understand that.


LLM's have really made me realize that "using language" is a shallow skill, but detecting bullshit is a deep skill.

And anyone who read Velikovsky and thought they had made a good argument, is someone who needs to hone the deeper skillset.

Debate isn't genius. There are famous politicians who were RENOWNED in law school for their debate skills, who are clearly idiots. Holding up debate skills as an intellectual achievement is like praising someone for their ability to hold in a fart.


lmao


Being an expert doesn't necessarily make you a good debater. Debates aren't about who's "right", they're about who has made a more convincing argument.

The article used alternative history cranks as an example of someone good at conjuring up seemingly convincing arguments, despite being pretty much the opposite of an expert.


This is coming late and you might not read this, but I think you're reading too much into that quote.

Scott has an uncanny ability to cherry-pick facts, usually absurd ones to paint funny narratives. He even wrote a whole fiction book based on the concept, a universe that actually runs on Kabbalah. Here's a short self-contained chapter that showcases this:

https://unsongbook.com/interlude-%D7%AA-trump/

It's totally believable that Scott could use this skill with serious facts in a serious argument, and befuddle almost anyone. Doesn't make him smarter or more right, just very persuasive in this narrow method.


The 10 Principles of Rationality:

1. Your beliefs and assumptions are facts and logic; other people's beliefs and assumptions are emotions and biases

2. If someone describes an experience you can't relate to, it's an anomaly and you should ignore it

3. You know more than other people (except for third parties who agree with you: they are unquestionable authorities)

4. Never engage with issues in the context of real events; only examine issues in contrived, hypothetical scenarios

5. Avoid debates with real people; instead, make up imaginary people and argue against the make-believe opinions you give them

6. If you do debate a real person, your primary goal is to make them look foolish (not to change their mind, or yours)

7. Your understanding of other people's feelings and beliefs is more accurate and nuanced than their own

8. Your wild guesses and baseless estimates are valid statistical data

9. It's ok to bend the truth when necessary to prove a point

10. If someone disagrees with you, it's because they're less rational than you

The 10 Decrees of Rationality:

1. A person's wealth is a direct measurement of their contribution to society

2. The best way to help people is by getting rich as fast as possible (it doesn't matter how)

3. When helping people, it's best to ignore their input and feedback (if they had sound judgement then they wouldn't need help)

4. When someone does something good, they're probably secretly doing it for terrible reasons

5. When rich people do something terrible, they're probably secretly doing it for good reasons

6. Believing in your favorite unproven conspiracy doesn't make you a "conspiracy theorist"

7. Stereotypes are based in truth, except for the ones that you find personally offensive

8. When people say your research is bogus, it means your research is really so profound that they're scared of its implications

9. Super-intelligent AI is the greatest threat to humanity

10. Capitalism is a natural, fair, and sustainable system that makes our world a better place


This comment is interesting in that it itself exemplifies principle #5.

(And also principles #6, #7, #9 to some extent.)


Yeah, it's kinda contradictory. It's part critique and part self-reflection on an ideology that I have mixed feelings towards, in the same vein as the Programming Language Checklist for language designers.

https://www.mcmillen.dev/language_checklist.html


Not sure if your comment is positive or sarcastic. Are you familiar with Scott Alexander's writings?


Yeah, I read some of his essays. I like some of the ones where he stayed in his area of expertise, like "Ontology Of Psychiatric Conditions: Taxometrics" (https://www.astralcodexten.com/p/ontology-of-psychiatric-con...). The other ones I've read are big fancy word salad, unfortunately. Can you recommend others like the series I linked above?


They're dense sometimes but for me the furthest thing from word salad. Every paragraph feels carefully considered and apposite.



It's funny he says circles so much.


What, twice, in an "I can do X to Y, you can do X to me" structure?


Philosophy is so hard that most people don’t ever start.

In fact, in my experience, people who can push past this “epistemological learned helplessness” also tend to have significant problems with day-to-day living.

In a manner of speaking, I boil this down to how much time you spend at different levels of thinking. I’m gonna go ahead and share a back-and-forth chat I had with ChatGPT on this precise topic the other day:

https://chat.openai.com/share/03302643-5efc-41e0-a681-6d1699...

To me it boils down to an intersection of raw IQ and experience. If you’re generally “thinking fast”, in Kahneman’s terms, then that’s the “Type 1” thinking most people do most of the day.

Much beyond that, thinking becomes exponentially harder with every level of “simulation” your brain has to do, with more complex, multi-variable simulations as you walk up the abstraction stack.

The downside, however, is that if you don’t have accurate perceptions, or there are flaws in your reasoning, then you come to conclusions that don’t fit because your perceptions are off. This is the other side of genius, which is typically some kind of very iconoclastic position on something political, social, or personal.

I think it fits with what the author said to conclude:

“It seems to me that although these people are more likely to become terrorists or Velikovskians or homeopaths, they’re also the only people who can figure out if something basic and unquestionable is wrong, and make this possibility well-known enough that normal people start becoming willing to consider it.”

Most people, however, have neither the IQ nor the experience to actually have coherence for almost anything at the 4th or 5th order.

Usually only in one area of their life, like work or a hobby, does anyone actually have the attention and focus to get to that level of coherency, and worldwide that’s extremely rare, as most people aren’t secure enough to have the time and resources to gain the experience needed.


I have a hard time engaging deeply with philosophy. I've tried to get a good overview, but whenever I read the real scholarly work I find an assumption I hate on page 3 and cannot bear to engage with that assumption for a thousand further pages of close reading.

I don't want to be the philosophical equivalent of those guys who pester physics professors with "I've unified gravity with quantum mechanics and I just need you to add the math". But I can't pick up Kant or Hegel or Sartre without screaming.


> I can’t pick up Kant or Hegel or Sartre

Ah, there’s your problem, at least in my opinion/experience. You have to start at the beginning: read Plato’s Republic, followed by Aristotle’s Nicomachean Ethics. When you read them, don’t take them as fact but as a “living” historical foundation that will help you set the stage for reading increasingly recent books.


That's part of it. I feel like I can't even start without reproducing 2,000 years of back work. I know it's not the same as science, but if someone said "you can't do basic mechanics without reading Archimedes in the original Greek", I'd laugh at them.


What do you think about Good and Real by Gary L. Drescher? I wrote about it at https://news.ycombinator.com/item?id=38789146, so it's not perfect, but I haven't found anything that is.


Broke: Reading books or attending classes on philosophy and cognitive science to learn from hundreds of years of combined research from the world's greatest minds.

Woke: Pulling a new theory of intelligence, and of why everyone else is too damn stupid for "fourth order thinking", out of thin air, talking to an LLM about it, and deeming it so important it has to be shared with an online community.


What would indicate that the former is not also true?



