What Developmental Milestones Are You Missing? (slatestarcodex.com)
189 points by ggreer 536 days ago | hide | past | web | 147 comments | favorite



The milestone about understanding trade-offs really resonated with me. I had been using (what I call) the "Scylla-Charybdis Heuristic":

-Don't trust any model that implies X is too low unless it's also capable of detecting when X would be too high

I've been frustrated by how people can fail this for X=the minimum wage, immigration, ease of getting public assistance (just to show that it's across the spectrum), as well as less political issues like "approaching strangers" or "trying to persuade after a rejection".

There is a very common mentality out there that cannot admit to any downsides to a favored policy. I had often attributed this to "well, they have to be wary of tripping others' poor heuristics", but this may itself be the #1 fallacy Alexander mentions! It can just as well be that the person cannot think in terms of tradeoffs.
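A hypothetical sketch of the heuristic above as code: only trust a model's "X is too low" if the same model can also articulate when X would be too high. All the names and example "models" here are illustrative, not from the post.

```python
# Sketch of the "Scylla-Charybdis Heuristic": a model earns trust only
# if it can name failure conditions in BOTH directions, not just one.

def passes_sch(model):
    """Trust requires the model to articulate both failure directions."""
    return model.get("too_low") is not None and model.get("too_high") is not None

# Illustrative models: one names both bounds, one only recognizes "too low".
vitamin_c = {"too_low": "scurvy symptoms", "too_high": "wasted money, side effects"}
more_is_better = {"too_low": "any deficiency", "too_high": None}

print(passes_sch(vitamin_c))       # True
print(passes_sch(more_is_better))  # False
```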


I don't think this is good advice as stated. If you are showing the symptoms of scurvy then your Vitamin C intake is too low. This model tells you nothing about when your vitamin C intake is too high. But it's still valuable.


It depends on the (possibly implicit) model behind the advice. If the advice is "always take more vitamin C", then it's failing the SC Heuristic because it recognizes no downsides to vitamin C or when you would be taking too much; telling this to a scurvy sufferer is only correct by accident. It would fail to notice that e.g. too much can cause vitamin C poisoning or (if in the form of fruit juice) obesity or vomiting.

If the model says to take between X and Y units ("but I don't know what's causing the harms outside the range"), then it may be a shallow understanding but it's not failing the SCH, and it avoids a common failure mode.


> it's failing the SC Heuristic because it recognizes no downsides to vitamin C or when you would be taking too much; telling this to a scurvy sufferer is only correct by accident. It would fail to notice that e.g. too much can cause vitamin C poisoning or (if in the form of fruit juice) obesity or vomiting.

Exactly! And yet for centuries it was nevertheless an extremely valuable model to have (I should have said "lemons" rather than "vitamin c").


Which model and which history? People did historically pass this heuristic because they could articulate a standard for when you're bringing too much lemon.

"Hey -- bring lemons on your ships because it stops scurvy"

'Oh, so why not a ton of lemons? Two tons? Ten tons?'

"Well, to stave off scurvy, you only need x units per sailor per day. Beyond that, it's just expensive deadweight."

In contrast, there are policy advocates who want an X to be higher and yet who haven't met the "tradeoff development threshold". Instead of being able to articulate a model which tells you when X is high enough to be a net negative, they show an inability to understand the core challenge: "Strawman, no one's advocating 2X." "That would just be absurd." "I didn't say 2X, I said 1.4X."

It's true that if you posit a scenario in which it's physically impossible to steer far enough right to hit Charybdis, then it will look successful to have the model "steer as far right of Scylla as you can"; but this isn't the general case, and it wouldn't count as an understanding of tradeoffs.


This comment stuck in my craw a bit, so thanks for provoking some thought. :) Your model isn't modeling Vitamin C consumption, right? It's modeling scurvy. I would guess the model could also tell you about when there are too few scurvy symptoms, which would be "never."

"If I'm going to be late, I should walk faster" isn't modeling velocity — it's modeling punctuality.


>Your model isn't modeling Vitamin C consumption, right? It's modeling scurvy.

I don't see it that way; I see it modeling Vitamin C consumption. To me, your assertion that it's modeling scurvy instead feels contrived in order to fit the top-level comment's principle.


To my mind "your teeth are falling out, you should drink more lemon juice" is the same kind of argument as "our gini coefficient is too high, so we should raise the minimum wage" or "our companies are taking too long to fill positions, so we should allow more immigration".


Your model is more complicated than you think.

If someone says "take vitamin C", you don't point blindly at a log chart and thus consume a kilogram of it (which, based on a rat model, probably would kill you).

This is because the practical algorithm for "take Vitamin C" has the grocer, the government, and a team of scientists sign off on the size of a dosage and how many pills are even in a bottle. So while that may not be part of your model, it is most definitely part of the model. The model tells you how much Vitamin C not to take, and so it passes the heuristic.

And without that cap, Vitamin C is unsafe, etc., so the heuristic holds.


It's just a heuristic, something to look for when evaluating a model of the world, not a strict, objective, and precise rule.

However, in that example, I'd say that modern medicine certainly has a "too much vitamin C" threshold, and thus might be a sensible model of human vitamin C needs.

I had never seen this idea in writing, but I certainly remember thinking along those lines about retiring age and similar policies.


"Modern medicine" contains a model that includes both criteria for too little and too much vitamin C, but that's not the model that was being referenced.

The point was that the cited model, "scurvy symptoms => too little vitamin C", is useful (in some situations, very useful, if you aren't in possession of a better model with which this one would agree) while being in violation of the maxim given upstream.

I think a much stronger rule - which wouldn't deserve the same catchy name - is that your model should at least be able to say "X is not too low".


I quite like that formulation too.


I tend to avoid people who come up with simple solutions based on their ideology. Folks who are honest about not having a simple solution make a much more intelligent impression on me.

Does this comment make me a hypocrite?


It doesn't make you a hypocrite if that's one heuristic you use to judge people among many. There's actually research on expert judgement showing that experts who are able to find lots of "On the other hand" reasons are more likely to produce accurate forecasts.

http://www.chforum.org/library/choice12.shtml


"Folks who are honest about not having a simple solution make a much more intelligent impression on me."

This works because problems with simple solution don't stay problems for long.

Problem: I'm dehydrated.

Solution: I drink (potable) water.

Problem solved.

(Don't get distracted by all the other potential bad solutions, we're just talking about how this specific simple solution does work.)

If it's a pervasive problem that has plagued humanity since the dawn of civilization, then simple solutions probably won't work, no. I'll cop that I lean libertarian but ultimately the very popular "let's just free market at it" and "let's just government at it" are equally stupid solutions to any hard problem.


What about people who come up with complex solutions based on their ideology?


What about situations where a simple solution would be the most effective and appropriate?

Would you be averse to it just because it is simple?


The solution may be simple, but the reasoning to justify that it is the most effective is probably not.


Potentially, yes? Especially if it's a long-standing complex problem. History is full of oversold simple solutions. Hearing "just do .." should immediately alert you to the possibility that the speaker hasn't understood your situation.


>History is full of oversold simple solutions.

Of course, history is also full of people insisting, "No, hold on, it's far more complicated than that!" and then being totally wrong.

For instance, people spent a very long time believing that a difficult-to-model combination of many different factors produced stomach ulcers. Then an experiment was done, and voila, the real cause was Helicobacter pylori.

Simplicity (or, in fact, regularization) is helpful far more often than it's harmful.


In the case of stomach ulcers it actually is more complicated. Helicobacter pylori is the most common cause, but not the only cause:

http://www.uptodate.com/contents/association-between-helicob...


Also, the promoters of policies / solutions need to be asked, and need to answer, these questions:

1) What are the trade-offs? 2) What are the potential unintended consequences? 3) What happens as the boundary conditions are approached, e.g. 60 years later, when very few people do X, or when many people do X?


I would say that it would depend. I'm primarily thinking of big picture political questions, and in that general area I feel that simple answers are mostly flawed answers.

However, in engineering, I apply the *nix philosophy of less is more. But this post isn't very technical.


No. Your comment doesn't use any magical concepts for which I have no model. I think I entirely understood it. Its simplicity is incidental but allowed for that. People who come up with simple solutions based on Their Own Ideology are a different thing. Their solutions aren't simple. They're lies that you want to believe, contingent on lies you don't know you don't want to believe. Generated mostly for control, of self or others, and never for awareness.

I think it's a good social heuristic.


> magical concepts

Depends on how you define magical concepts. Are they magical because they are too complex and require too many assumptions?

For example, by your definition, is it ok to have faith that would be associated with a religion, if that faith is based on a wide variety of experiences and acceptance of the existence of others' worldviews?


The only reason to have an ideology is to use it to produce solutions. You should be wary of solutions that are just too simple for the problem (irrespective of ideology), or of the rejection of solutions based solely on ideology.


Yes, but we are all hypocrites, so don't feel too badly about it.

The problem with not having a simple solution is that humans are wired to take action based on simple solutions. Here's a test for your theory of mind. Can you imagine that someone else thinks, "I know I'm not completely right, but I also know that the best way to lead this group forward is to present a simple solution as if it is right." How would they act?


>-Don't trust any model that implies X is too low unless it's also capable of detecting when X would be too high

I think that's a fairly flaky heuristic. There's no point at which a nation's GDP could be considered "too high", although some of the tradeoffs necessary to increase economic growth beyond a certain point may diminish net utility.

I prefer a heuristic along the lines of "Don't trust any argument that doesn't explicitly state the other side of the cost/benefit tradeoff".


To steel man GP: pareto efficiency[1] is a model which can tell you both if GDP is too low and if it is too high... the thing I like about it most is that rather than being a simple metric over the aggregate actions of everyone, it considers individual decisions and individual needs. It respects property rights for example, which "GDP good" doesn't (though a "long term GDP good" position probably does).

[1] https://en.wikipedia.org/wiki/Pareto_efficiency


Perhaps the truth isn't always in the middle tho.


IMHO LW grossly overstates the importance of statistical thinking.

If you have a job making decisions for groups then statistics matters greatly, but in our personal lives genuinely statistical problems are rare compared to the huge number of decisions we actually face.

Most of our decisions are characterised by uncertainty, uniqueness and an absence of data. Real-world rationality is less like mathematics and more like system design, where we weave together plans and strategies. For example, using contingency planning - a rational buyer ensures the seller has a good return policy rather than trying to use Bayes' rule to analyse their past purchase experiences.

IMHO the overlooked core of rationality is creativity - you have to imagine possibilities and invent plans. It's only when you reduce decision-making to math problems that math seems so important.


Personally, I always understood LW's focus on statistical thinking as something similar to what math classes do, from times tables to solving integrals - what matters is that a) you have the understanding of underlying concepts which makes you think in a particular way, and b) training / learning caches some operations in your head, which lets you use new models of thinking that you otherwise would not be able to (e.g. memorizing times tables serves just to make most calculations feasible to do in your head or on paper in reasonable time). You don't have to do perfect Bayesian updates in your daily life - just understanding how the thing works, plus a little training, lets you not fall into obvious traps when learning new evidence.


You are exhibiting a case of failing in the "2. Ability to model other people as having really different mind-designs." The author is a psychologist, not a statistician. When he talks about thinking probabilistically, he is not talking about sitting down and doing the math. He is talking about recognizing that there is no such thing as certainty and understanding that even very unlikely outcomes may occasionally occur. It is about making reasonable rational decisions in an uncertain world, hedging uncertainty if possible, and generally being OK with it.


In college I had a friend who didn't understand 50/50. If an outcome was not yet decided, then both options are the same "size", right?

One would like to think that my debates with him about this, or maybe the stats class he took, were what helped fix this intuition. But what probably was just as important was playing games like LoL, where in the face of uncertainty you take calculated risks all the time. At some point, it is impossible to get better without taking intelligent risks.


Client: You styled my house 3 months ago it is still not sold! You said it would sell much more quickly!

Stylist: Your house was moved from the population of un-styled houses to the population of styled houses. This population has a different mean time-before-being-sold but it still has long tails, I'm sorry you are in that tail.

Btw: "a rational buyer ensures the seller has a good return policy rather than trying to use Bayes' rule to analyse their past purchase experiences." Sounds to me like one would first look for priors that confirm that good return policy; one does not pull trust out of thin air.


Agreed. Having seen others try the statistical approach to making many of life's decisions, I have concluded that it's somewhere between futile and, worse, self-deception.

It might work for things that happen over and over again, like deciding which route to take to work. You can accumulate enough useful data to do that.

Many new and large decisions by their nature do not have that data, and then you're stuck looking for proxies for it based on others' data (experiences), which rarely applies to your circumstances.

An example of this would be moving to SV: you probably don't have your own data points, so you're relying on the data of others (friends, people on the internets), but it's unlikely that that will have a bearing on how you feel about it once you're there.


Even when our models are weak and there's a lot of uncertainty, putting numbers on it can clarify your reasoning and eliminate a lot of potential errors. E.g. the Drake equation consists of five or six numbers, none of which we know, but it still tells us a lot.

Alexander's own take is http://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-...
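One way to make the "putting numbers on it" point concrete: give each Drake-style factor a wide range, sample, and look at the spread. A minimal sketch; the factor ranges below are made up for illustration, not real astronomical estimates.

```python
import math
import random

def sample_log_uniform(lo, hi):
    """Sample uniformly in log space, appropriate for order-of-magnitude guesses."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

random.seed(0)
factor_ranges = [(1, 10), (0.1, 1), (1e-3, 1), (1e-3, 1)]  # hypothetical factors
estimates = sorted(
    math.prod(sample_log_uniform(lo, hi) for lo, hi in factor_ranges)
    for _ in range(10_000)
)

# Even with no factor known precisely, the model still constrains the answer.
print(f"median       ~ {estimates[5000]:.2e}")
print(f"90% interval ~ [{estimates[500]:.2e}, {estimates[9500]:.2e}]")
```

If the resulting interval contradicts something you're confident of, that's the "huh, my model can't be right" moment described downthread.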


Doesn't your example contradict your claim? Putting numbers into the Drake equation doesn't really make it more useful than just thinking about it abstractly.

I wonder if the discussion of that article would have gone differently if it had been about (internally) reasoning about uncertain things as 51/49 choices, and had made it clear that it isn't really useful to communicate using the made-up numbers (except maybe if the whole hypothetical case is explicitly communicated).


> Doesn't your example contradict your claim? Putting numbers into the Drake equation doesn't really make it more useful than just thinking about it abstractly.

No? The way you (or at least I) use it is to plug numbers in (I mean, what else can you do with it?). "I think this factor is somewhere between 1 and 10,000, and this one's about 1,000,000, so... huh. My model can't be right."

> I wonder if the discussion of that article would have gone differently if it had been about (internally) reasoning about uncertain things as 51/49 choices

In a 51/49 scenario there's no point spending a lot of time evaluating. But unfortunately we often naturally assume a scenario is 51/49 when it's not.

> made it clear that it isn't really useful to communicate using the made up numbers (except maybe if the whole hypothetical case is explicitly communicated).

A number calculated from made up numbers is probably more accurate than a gut feeling, and we communicate those all the time.


Yes, but the gut feeling has the caveats attached (or anyway, I also prefer it to be made clear that it is a gut feeling).


> Real-world rationality is less like mathematics and more like system design where we weave together plans and strategies.

You're implying a narrow (but depressingly common) definition of "mathematics".

System design is solidly inside the cognitive realm of mathematical thinking.


I'm gonna add one - understanding feedback loops. A lot is written about economics and politics by people who don't seem to get that one, and those discussions are mostly pointless. Myself, I understood it only after a control theory course in university, though now I recognize that high-school chemistry teaches it too (chemical equilibrium). After you grok it, you start to see how incentives can feed one another, and that it's perfectly reasonable for a bad system to exist in which there is no single actor to assign blame to (compare: is television stupid because people are dumb, or are people dumb because TV is stupid?).
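The feedback-loop point above can be sketched in a few lines: the same one-line update rule produces runaway growth or a stable decay depending only on the sign and strength of the feedback, with no single step to "blame". The gains and step counts are arbitrary illustrations.

```python
# Minimal feedback-loop sketch: each state feeds back into the next.

def simulate(gain, steps=50, x=1.0):
    """Run x through `steps` rounds of proportional feedback."""
    for _ in range(steps):
        x += gain * x
    return x

print(simulate(gain=0.1))    # positive feedback: grows without bound
print(simulate(gain=-0.5))   # negative feedback: settles toward zero
```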


I've had the experience of understanding feedback loops perfectly well but still feeling inclined to blame a "big bad" for most ills. I had trouble developing an ethical basis for my actions because I swing across the political aisle very readily based on utilitarian arguments, but simultaneously feel tons of empathy for others and thus want to be in collaboration with everyone I meet. Thus I swept through the gamut of political thought, radical in various ways at various times, moderate at others.

Working through that meant gaining a better distinction of the self from the system. I explicitly use a virtue ethics mode for my immediate surroundings and everyday behaviors (I behave in the way in which I feel the least guilt), and then a utilitarian mode when I have to make a decision affecting potentially large groups (e.g. my voting decisions tend to be less selfish and more model-based). This keeps my priorities healthy because it stops me from applying an inappropriate "scale" of thinking for the situation.


> whether the listener needs the piece, already has the piece, or just plain lacks the socket that the piece is supposed to snap into.

This is very relevant to how we think about education, communication, information design and propagation.

The combination of this SSC and the linked David Chapman piece are very good. They explained and formalized quite a few concepts that have seemed like weird nebulous glitches in the matrix to me. And not only did it make some things click into place, it expanded the amount of places things could click into.


Reminds me of the 7 transformations of leadership https://hbr.org/2005/04/seven-transformations-of-leadership


>2. Ability to model other people as having really different mind-designs

I recently got a very good lesson about this while learning about MBTI. [1] It's really not a very scientifically rigorous model, but it shows you how some people could function in a fundamentally different way than you. And there are people who subscribe to each of the 16 categories. The Big Five is more statistically valid, but it is lacking in the potential to explain the world from different points of view.

1. https://en.wikipedia.org/wiki/Myers%E2%80%93Briggs_Type_Indi...


"Ability to think probabilistically"

I think that is really powerful: the ability to think in a Bayesian way, to not label something as one thing or the other but to accept fully that, as things stand now, something can be 70% A and 30% B. This does not even have to collide with objectivist views, as the thing is either A or B, but based on all the information you have you can only be 70% sure it is A.

New information may well lead to a new 20% A, 70% B, 10% something else. I tend to judge people by their ability to do this. Never enter a discussion unwilling to change your mind is one of my mantras. I'm also fully aware I break it often, by the way.
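The update described above (70% A becoming something else as evidence arrives) is just Bayes' rule; a minimal two-hypothesis sketch, with illustrative numbers:

```python
def bayes_update(prior_a, p_evidence_given_a, p_evidence_given_b):
    """Return P(A | evidence) in a two-hypothesis (A vs B) world."""
    numerator = prior_a * p_evidence_given_a
    return numerator / (numerator + (1 - prior_a) * p_evidence_given_b)

# Start 70% sure of A, then observe evidence three times likelier under B.
posterior = bayes_update(0.70, p_evidence_given_a=0.2, p_evidence_given_b=0.6)
print(round(posterior, 4))  # 0.4375 -- the 70% belief in A drops below half
```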

I have often seen two experimental situations with highly overlapping histograms for one parameter and then had a colleague ask: "So where do we draw the line for positive/negative?" I always say: "We don't. We call the sample that is exactly in the middle 50% positive, 50% negative, and we have said everything we can about said sample given the one-dimensional information we have."

What's cool for the Dune fans: this seems to me to be exactly what Bene Gesserit witches and mentats do. Ingest proofs, take tiny, tiny hints, combine them in a Bayesian manner, produce a likelihood of truth for a certain hypothesis... Perhaps I'm over-interpreting (and projecting) ;)


This entails the ability to change one's mind as in "previously, I would have done this but given the new evidence now I favor that" - something often seen as weakness in politicians.


Not just in politicians...

"Hmmm I think this bug will take a week to fix"

Gets fixed in two days.

"So why were you wrong in your estimate?" or, even worse, your estimates get discounted next time. And the same holds true if it happens the other way round.


It can be useful to explicitly state error bars. "Delivery will take three to seven days" communicates uncertainty much more clearly than "delivery will take about five days".
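A small sketch of stating error bars instead of a point estimate: from past delivery times (made-up data), report a range rather than one number.

```python
import statistics

past_days = [3, 4, 4, 5, 5, 5, 6, 7, 7, 9]  # hypothetical past deliveries

point = statistics.mean(past_days)
deciles = statistics.quantiles(past_days, n=10)  # 10th..90th percentile cut points

print(f"about {point:.0f} days")                      # hides the spread
print(f"{deciles[0]:.0f} to {deciles[-1]:.0f} days")  # communicates it
```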


It sadly means the unwashed masses, including your administration and your bosses, are hostile to rationality, to their own peril. But we all know that.


Anecdotal and worthless information but someone might find it interesting.

> he gets the thing, where “the thing” is a hard-to-describe ability to understand that other people are going to go down as many levels to defend their self-consistent values as you will to defend yours.

I think it boils down to anger.

I can't recall what causes it (the basal ganglia, if I'm not mistaken); at birth we have "biological firmware" that protects our internal mental state - part of which is our belief system. When our internal state is challenged, a fight-or-flight response is invoked.

Hypothesis 1: this means that the anger or irrational response to a challenge of beliefs could be a biological disposition.

Hypothesis 2: if you still have any semblance of neuroplasticity it might be possible to 'unwire' or 'reflash' this primitive "firmware".

I've been attempting to do this over a few years: killing off that primitive part of my brain by proactively seeking out conversation with intelligent people who are opposed to my beliefs: mostly bigots. I have anecdotal evidence that it might be working (although it can't be quantified or proven in any way) - mostly by observing this fight-or-flight response in others when I have none.


It's called "cognitive dissonance".

If someone passionately disagrees with you, accusing him of cognitive dissonance always works. So you can assume the high ground by being right and being capable of seeing the motivations of the other party. (Or it might be that your opponent actually, passionately disagrees. But let's not talk about that.)


> cognitive dissonance

Very, very close to what I was referring to. As I recall, there is a biological reason underpinning effects like cognitive dissonance and confirmation bias (Freudian denial in general). I could just be remembering things incorrectly, and you may have hit the nail on the head.


2. Ability to model other people as having really different mind-designs

I believe not learning this is a problem in the communities within which I grew up. Bible belt, rural ... The reinforcement of "we should all think/believe/do the same things" seems to come from a lack of exposure to any people or ideas from outside the community; from "isolationism" one could say.


I think the other extreme of elitist intellectuals has exactly the same problem; they understand that people have different mind-designs, but conclude that those other minds are simply wrong.


I think it's typical for people to divide other people into two categories. The ones who function just like me, and enemies who are nothing like me.

You find that phenomenon in surprising places. I'd claim that nobody is completely immune.


"There are two types of people in this world. People who divide the world into two types, and those who don't."


Good catch. :)

I was thinking about Judith Butler. Philosopher, gender theorist and published author. I read her "Gender Trouble" to about halfway. It occurred to me that she might not really recognize that some women might be genuinely straight. It explained the tone of the book a bit too well. My queer ex-roommate also expected everybody appearing straight to just be closeted. These are individuals who really should understand that sexuality comes in all colors of the rainbow.


really interesting read.

the only thing that bugs me, with this text, and the whole rationalist world-view, is how human emotion sometimes becomes a thing to be ashamed of. something beneath a hypothesized "true", "correct" condition. like, the part about recognizing that other people have different minds, different motives, and thus reach different conclusions - on one hand i agree that it is a matter of personal development, and that many people are just plain jerks (sometimes, myself included). but on the other hand, the political examples given in the text seem like a normal emotional response. isn't it simply hard trying to think for everyone, understand everyone all the time? is it even possible? particularly when the issue is something that, for whatever reason, you feel personally. it is not automatically a sign of retarded cognitive development if you just simply got tired.

rationalism sometimes seems so, well, christian - we are all filthy sinners by default.


> the only thing that bugs me, with this text, and the whole rationalist world-view, is how human emotion sometimes becomes a thing to be ashamed of. something beneath a hypothesized "true", "correct" condition.

The "rationalist world-view" is just noticing that (per #1) what your brain tells you and what the world really is are two different things, therefore your emotions may or may not be properly aligned with reality (btw. this is also the primary insight of Cognitive Behavioral Therapy). Emotions are just like opinions, but in a different part of the brain - you want to have ones that correspond to reality.

"Contrary to the stereotype, rationality doesn't mean denying emotion. When emotion is appropriate to the reality of the situation, it should be embraced; only when emotion isn't appropriate should it be suppressed." - http://wiki.lesswrong.com/wiki/Emotion


OK, but let's take again the political example - there is a confounding social factor there. i think it is common to regard a concession, in the eyes of the public, as defeat. this drives many people to shout their positions, and disregard rational counterarguments, resorting to personal attacks (like the "democrats are pro-crime" claim). even if those people are in fact perfectly reasonable otherwise... the political example is really insufficiently explained by bringing in just cognitive development.

now, generally, from experience, i have the impression that higher mental skills often translate simply into a higher capability of producing more elaborate rationalizations of positions that a person would have held anyway, positions which are rooted in emotions. pointing out rational counterarguments can in many situations only go so far. in fact, i have many times had an easier time discussing with people one might call primitive, since you get to the truth of their motives very quickly, and with respectful approach to people's feelings, a lot can be achieved. with very rational people it's sometimes really difficult to break through the layers of rationalizations.

i'm not saying that rationalism is bad, or that the author (or you) like to go around calling people retards :D i just feel that despite best intentions it can lead to disregard for emotion, even though in many cases emotion is the ultimate relevant "reality of the situation". we can and must make ourselves better, but we can never stop being people.


> now, generally, from experience, i have the impression that higher mental skills often translate simply into a higher capability of producing more elaborate rationalizations of positions that a person would have held anyway, positions which are rooted in emotions. pointing out rational counterarguments can in many situations only go so far. in fact, i have many times had an easier time discussing with people one might call primitive, since you get to the truth of their motives very quickly (...) with very rational people it's sometimes really difficult to break through the layers of rationalizations.

I think that's a potential failure mode of getting too smart. Rationality is supposed to teach you to notice when you're building layers of rationalization instead of updating your opinion to account for new evidence. It's a hard habit to learn, so what you described often happens instead. But if you're aware of this trap I still think learning more is better than not learning :).

> with respectful approach to people's feelings, a lot can be achieved

That's probably the truth about dealing with people of all kinds of skill, education, smartness, etc. I'd say most interpersonal conflicts come from one or both sides not grokking it.


i absolutely agree that learning more is still better! in fact, meeting a gray area like this one is to me a heuristic indicator that something really worth learning is lurking behind. i think i took it from Zen - path to enlightenment through doubt...

my question is perhaps do we properly acknowledge the difficulty we face? you say, for example, "it's a hard habit to learn" - but can we ever even learn it fully? it should make us humble. we can easily add disclaimers to our speech, but in practice it shows that we don't really mean them. we say it is hard to shake the rationalizing habit, and yet we often ascribe a level of certainty and objectivity to our conclusions which makes it seem like we completely disregard the possibility of our own emotions being behind our thinking. and ultimately, can any thought ever be had without an emotion behind it? Minsky, i think, remarked that he does not believe AI without emotion is possible. i completely agree with that.


The point of rationality is to detect your own cognitive failures and to recognize and exploit those of others. It's pretty explicitly NOT about ignoring them.

So when I observe someone saying "that guy is racist, don't believe his arguments", I can recognize that this is an emotional claim. It's an attempt to demonize a person rather than refute the argument. I can then observe that no one is disputing the actual argument, evaluate that on merit, and have a true belief about the world unbiased by my desire not to affiliate with racists. I can also update my beliefs about the rationality/honesty of the person saying "hey that guy is racist".

Conversely, I can also recognize that a desire not to be racist is strong in people, and exploit it when I want to manipulate the less rational. For example, if I'm arguing against economic protectionism with an emotionally driven person, I might use Jim Crow as an example of protectionism rather than occupational licensing.


i think you missed my point, which was that these so called "cognitive failures" are really integral to what we are. rationalism is not generally wrong. improving your cognitive abilities should be a goal for everyone. but it tends to lead people to a worldview where simple humanity is deemed inferior, wrong, and where a person actually starts believing that they outgrew themselves and their humanity, that they are superior, objective, free of bias. this is an illusion, nobody is free of bias.

in fact, people who believe they successfully suppressed or outgrew any emotion are typically the ones most influenced by it, subconsciously.


The rationalists I know in real life tend to be acutely aware that they are still biased in many ways, and would probably laugh at the sentence fragment "successfully suppressed or outgrew [an] emotion". Maybe we know completely different people who affiliate with the word "rationality".


Much like the Zen ideologies, there is no end state to being a rational human being. You can't just stop at one point and say "That's it, I'm perfectly rational now, therefore..."

Instead, it's more of an ongoing process. The process of identifying your biases and reasoning about them never ends. You will always have biases; the trick is to make the subconscious influence conscious. To come back to the Zen comparison - the more rational you become, the more you realize you're not rational at all.


well said. i just feel it is something people easily say, but hardly put into practice.

i am being unfair, perhaps, as the other comment here states. i may be identifying some negative aspects in some people with a whole group that did not deserve it. but, OTOH, you two may be resorting to the "No True Scotsman" fallacy.


> you two may be resorting to the "No True Scotsman" fallacy.

That may be a fair statement, since I do not believe there is a perfectly rational person in the real world - just those who are working to fight against their inherent irrationality. All we can ever rationally (oh, the irony) expect is that they do their best.


So your rationality skills give you a lot of advantages in arguments, more or less independently of the actual merits of your position. Rather than compete in that zero-sum game, isn't the best thing for society then to distrust anyone with rationality skills and form social norms against them?


It's not a zero sum. If my opponent is also rational and unpersuaded by emotional pleas, then we converge to truth more rapidly.

I recently got around to reading Scott Aaronson's Complexity of Agreement: http://www.scottaaronson.com/papers/agree-econ.pdf

One of the predictions of this paper is that rational Bayesians sharing information will a) rapidly converge to agreement and b) after making arguments, will often switch sides. An individual's position in an argument will look like a convergent random walk rather than a gradual concession in a negotiation. Of course, real life arguments rarely go this way [1].

But when reading this, I was struck by the fact that in some rare cases I have engaged in arguments that moved this way. In every case I can think of, the other participant in the conversation was either a mathematician, a philosopher, or a lesswrong reader.

[1] One rare exception to this is real life arguments about what a stock price should be. Yay for market transmission of information!


You're right that not everything is zero-sum. But I suspect an overwhelming majority of the issues that come up for argument are very close to zero-sum.


not to attack your position, but just to further illustrate where i'm going with all this.

> ...rational Bayesians sharing information...

setting aside the inherent humor of economics (which is sadly completely lost on economists), have you ever wondered - why would a purely rational actor even participate in the discussion, or, as a matter of fact, in anything? why would a purely logical machine get out of bed in the morning? would it not need to "want" something first? what would an emotionless logical machine "think" about, and why would it think about that and not something else? intellect without emotion is nothing. not figuratively, not poetically, but literally nothing.

P.S. since when does the word "converge" apply to anything going on in a market? :P


Why do you think a rational actor has no goals, or is somehow emotionless?

The word "converge" has always applied to market responses to new info. Find a news event (e.g., WMT Oct 13-14) and look at the second or minute level movements near that event. It's qualitatively quite similar to Aaronson's theorem.


well, i'm definitely not educated on the terminology, particularly when it comes to economics. i guess we could clear out a lot of the misunderstanding here as soon as we sort the terms out ;)

so, just IMO, a perfectly rational actor should be emotionless, at least on the matter at hand. any kind of emotional tendency would skew their reasoning.

to add to what you said more above, about those fruitful discussions: i would bet that the people you had those fruitful discussions with have had another common characteristic. namely, they did not have much personal stake in the issue you were discussing. when people approach a discussion with only a desire to learn and improve their opinions, they can indeed have a quality discussion.

ultimately, i don't believe such neat separation of rational and irrational can ever work (aside from sometimes being a useful approximation). which is why i asked those silly questions - how can a rational actor want something, without wanting it? i find the concept very contradictory, borderline useless.

edit - i think i have to back down a bit, or maybe just clarify, dunno. i maybe get what you meant. people that appreciate rationality will tend to be better discussion partners even if they are personally affected. but it will be much harder...


> a perfectly rational actor should be emotionless.

Nope. A perfectly rational actor should have unskewed reasoning, yes, but you can (in principle) achieve that by making your emotions not skew your reasoning rather than by throwing your emotions away.


> why would a purely logical machine get out of bed in the morning? would it not need to "want" something first?

In a word - huh? An irrational person might wake up one morning with the urge to paint a picture - are you suggesting a purely rational person wouldn't feel such an urge, or that they wouldn't act on it? In either case, why not?


i'm suggesting "purely rational" prohibits the existence of emotion, otherwise we're dealing with a contradiction. sorry, i already replied to the peer comment to yours, maybe you can reply there?..


This discussion is confusing terms. The economic meaning of "rational agent" is a bit different from the one we're discussing here.

See: [0] about the definition and [1] about the feelings angle.

[0] - http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/

[1] - http://lesswrong.com/lw/hp/feeling_rational/


Unfortunately, /u/yummyfajitas is severely mischaracterizing the point of "rationality". Viz:

>The point of rationality is to detect your own cognitive failures and to recognize and exploit those of others.

This is false. The point of "rationality" is to achieve greater cognitive success: to have your thoughts yield information about the world by allowing the world to move your thoughts. The people who try to do this (such as, in this case, me) do so because we feel like our thoughts and emotions ought to be about stuff. The more I make my emotions be linked to my thoughts and my thoughts be linked to the real world, the less gnawing self-doubt I have to deal with when things go bad, and the more I can enjoy when things go right.

If all you can do is recognize cognitive failures, you will end up an epistemic relativist, which is useless.

If what you care about is exploiting the cognitive failures of others, you're just a jerk.


You're talking more about epistemic rationality - getting closer to the truth; 'yummyfajitas seems to be talking more about instrumental rationality - doing and thinking stuff that systematically yields success. But generally, you're right, and here:

> If what you care about is exploiting the cognitive failures of others, you're just a jerk.

I totally, 100% agree with you. Rationality is a tool; if you use it to exploit people, you're just a jerk.


Everyone exploits people in this way. Have you ever worn a suit or otherwise altered your appearance to influence the decisions of others? Ever built a landing page using a theme other than default HTML, in order to make people happier when reading? Ever noted an irrelevant shared interest ("hey we both love kale!") to someone you are trying to sell to, or otherwise influence the behavior of?

Is everyone a jerk?

In my view, a big failure of rationalists (coming from the typical mind fallacy, most likely) is that they put too little effort into manipulations of this sort. It's certainly a failure of mine.


Fair enough. I see what you're getting at, and it's indeed the basic way we communicate - by influencing each other.

I thought long about it and I'm still confused at some points, but I ended up viewing the issue through a lens of intent. Am I exploiting people by building a pretty website? Maybe, in a way that my actions cause them to spend more time on it. But if I do it with intention of helping them accomplish whatever they're looking to accomplish, that will be beneficial to them, then it's ok. If I'm doing it to trick them into wasting more time on my site full of half-assed linkbait content so that I earn money through them viewing ads, then I am a fucking jerk.

So no, not everyone is a jerk. Only those who act to purposefully harm others (usually to gain something at their expense). Which sort of fits the very definition of the word "jerk".


Not everyone sells. Not everyone does any of the things you list. Not everyone's a jerk, at least at a conscious level (and I think people are correct to put more trust in people who will only manipulate unconsciously).


I don't know about you, but calling Democrats "pro-crime" sounds pretty immature to me. I'm embarrassed to belong to a society where the status quo is so uncharitable that you're willing to excuse this slander as "people just being people". You may cringe at the word "retard", but does it not unnerve you just as much that the journalist called half the U.S. literally malicious?


I'm personally of the opinion that people who spout such nonsense are perfectly aware that it's nonsense; they're simply trying to appeal to the radicals: the vocal, irrational minority.

The reason to do so is also perfectly rational - such rhetoric makes those people easier to get to the voting booths for your candidate. The moderates, who are unswayed (yet annoyed) by such rhetoric, are more likely to be the ones who do not show up at the polls, because they see the rhetoric from both sides as equally reprehensible, and will either not vote at all (in my case, due to the futility of voting in the US's current two-party system) or vote for a third party (making their vote as irrelevant as if they hadn't voted at all).

So, both sides appeal to those folks who are so easily swayed by irrational comments which make them feel superior to "those dirty left/right-wingers", and the result is power (government positions) and money (PACs, lobbyists, donations).


I think a lot of people really do believe their own nonsense. It's easier to lie when one honestly believes the falsehood. Consider that the instincts that drive partisan politics are the same instincts that drive spectator sports. Are the fans cheering because of a conscious awareness of political strategy? Or are they cheering because -- gosh darnit -- being on a team is fun? Do you think the children in the Robbers Cave Experiment were masterminds of political intrigue? Or do you think the conflict arose out of instinct? That the strategy happens to turn out super-rational in the field is orthogonal to the question of whether their beliefs bubble up to a level of conscious awareness.

I've met people who honestly have made the "dey must be evil" maneuver. My experience says the sentiment is genuine.


do you realize that you're now going for a very emotional attack on my comment? do you see the problem? :)

you also seem to be putting words into my mouth. i was merely offering an additional explanation for that behavior, to indicate that simply putting it under "poor cognitive development" (i.e. calling people retards, but in bigger words) is insufficient. i never said that we should all accept absolutely anything, and skip across blooming meadows singing kumbaya lol.


And I disagree with your explanation. The journalist isn't simply committing a fallacy or misinterpreting an argument. The journalist went out of their way to accuse a political demographic of malicious intent. Out-group dynamics like this are something people actively shape (as demonstrated by the Robbers Cave Experiment). Maybe we ought to classify it under something other than immaturity. But we can't chalk it up to laziness, mental fatigue, or #justPeopleThings as if it were an accident.

> do you realize that you're now going for a very emotional attack on my comment?

Honestly, I don't know what you're trying to prove here. My comment is invalid because I expressed embarrassment? lol?

Errata regarding your other threads. LW members call themselves "aspiring rationalists" to remind themselves that they have not yet "outgrown humanity" [0]. You also seem to confuse logic and rationality. When economists talk about rational agents, they're discussing an agent whose decision-making is consistent with the von Neumann–Morgenstern utility theorem [1].

[0] http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_...

[1] https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenster...


no, it's just you keep putting words in my mouth. as i already said above, which you fail to read for the second time - i did not offer it as THE black-and-white explanation. i merely pointed out that going for the retard explanation is insufficient - as you yourself have said, they could just be evil :) so why are you fuming? we seem to actually agree. jeez.

and the reason i asked the "do you realize" question is because we were talking here about rationalism and its limits. i'm in a kind of mildly anti-rationalism position, and you attack me, all emotional. so i thought, this is kinda funny. get it?

edit - and now, in an edit, you suddenly join the discussion... i'll have to get back to you later.


> as you yourself have said, they could just be evil

This is exactly the opposite of my opinion. Very rarely do people think of themselves as "evil". E.g. I highly doubt Bin Laden thought of himself as an evil man. He probably thought he was doing the Middle East a favor. Same goes for Hitler.

My embarrassment and disappointment is directed towards contemporary society, whose decadence I find your comment's nonchalance indicative of. And no, you're not as clever as you think you are in distinguishing rationality from emotion as if they were mutually exclusive. That you posit others believe emotions are something to be ashamed of is a straw man. Or better yet, a Straw Vulcan [0] (it looks like temporal covered this topic already).

[0] http://tvtropes.org/pmwiki/pmwiki.php/Main/StrawVulcan


your perception of society's "decadence", whatever the hell that's supposed to mean, is your problem. i was trying to have a discussion here, but you're apparently deliberately misreading everything i write because you have some battle to fight against society... wow.

must be hard being so superior to everyone around you, huh? ;)


I'm not trying to fight a battle against society. I was trying to express exasperation at your explanation.

A thought experiment. Imagine that everyone in the 18th century (including 60 year old men and women) jokingly called each other "retard". Would this strike you as immature? What if I expressed this sentiment aloud, and someone responded "There's a confounding social factor. If one were to object to such a social norm, he or she might be called a square and thus ridiculed! Maybe it's not right, but we can never stop being people."

This response bothers me. To call it a "confounding" rather than "additional" factor implies that risk of reputation somehow funges against the immaturity of the social norm. I would argue that calling people "retards" is immature regardless of the mechanism driving the social norm. On top of this, you seem to be implying that I think myself "superior" to those calling other people retards.

(Now replace every instance of "retard" with "evil democrats". That is my original argument.)

(By the way, your comment frames things as if you're 100% the good guy, and I'm obviously a villain. This is the exact behavior the Robinson article criticizes.)


Scott makes a lot of good points, but frames them in a very Less Wrong kind of way, when the key to development or growth is always being able to think more clearly.

You could pick any number of other developmental milestones in other categories, for children, teenagers, or adults.

In the ethical bucket, children begin to recognize abstract rules as a system of dispute resolution ('he should go first because he got there first'), develop a concept of fairness, etc.

In the social / emotional bucket, adolescents begin desiring independence and feel peer pressure much more acutely.

Is there any reason to believe that theory-of-mind-relevant developmental psychology is more important than the other parts?


I think you're reading too much into that post. I don't see any place where Scott frames this particular type of development as the most important one. It's simply the one he decided to focus today's blog post on.


SSC writing is so rambling and confusing. Someone could have a great career in condensing SSC blog posts into concise, readable essays, not steeped in LW arcana.


Thanks for the idea!


This suggests another role for government - to break a system out of a local peak (forgotten term) or sub-optimal Nash equilibrium

Interesting


That's basically the role of the government. It's a solution to coordination problems, which are in themselves those local optima you're thinking about.


If you can stomach it, you'll really like "Meditations on Moloch", by the same author.

(Its readable description of multipolar traps is good, but I imagine it's hard to see at once if you haven't been following the author for a while. In any case, the AI connection at the end is tenuous.)


That's something that might work in theory and has at times but it has a very mixed track record. If the new equilibrium the government enforces turns out to be worse it can be even harder to leave it and it can be hard for politicians to admit they were wrong.


The forgotten term you're looking for is 'local optimum'.


Seems to me numbers 3 and 4 are quite unlikely to develop unless you've studied statistics. Something like the Monty Hall problem is unobvious to most people I've met. Heck, even PhDs argue about it. Conditional probability is really tricky and counterintuitive.

The other example is this old error, which is attributed to doctors (for some reason): you have a test which is 99% accurate (it will show the correct status) for the presence of some illness, and 1 in 1M people actually have the illness. You find someone with a positive result. What's the chance they have it? Well, actually not as high as you'd think.


The main reason Monty Hall is non-intuitive is that the shift in probability is rather small (relatively).

To make it more intuitive, we can simply make the shift in probability absurdly high!

Imagine you have 1000 doors; behind one there is a car and behind the others there are goats. You pick one door. The show host opens 998 of the remaining doors, all except yours and one other, revealing only goats. Do you think it's more likely the car is behind your door or behind the other unopened door?
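To make the intuition checkable, here's a quick Monte Carlo sketch (my own illustrative code, not part of the original problem statement) comparing "stay" vs "switch" for both the 3-door and 1000-door versions:

```python
import random

def monty_hall(doors=3, trials=100_000):
    """Win rates for 'stay' vs 'switch' when a knowing host opens
    every door except the contestant's pick and one other, never
    revealing the car."""
    stay = switch = 0
    for _ in range(trials):
        car = random.randrange(doors)
        pick = random.randrange(doors)
        if pick == car:
            stay += 1    # staying wins only if the first pick was right
        else:
            switch += 1  # otherwise the one door the host left closed hides the car
    return stay / trials, switch / trials

print(monty_hall(3))     # ~ (0.333, 0.667)
print(monty_hall(1000))  # ~ (0.001, 0.999)
```

The switch-win rate is (n-1)/n, which is why blowing n up to 1000 makes the asymmetry obvious.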


It still depends on what the show host is doing. Maybe he only opens 998 other doors if you happened to pick the right one in the first place. To a monkey brain that's more likely than him trying to help you get the car.


At the time it was first popularized, there was no need to speculate about what the show host was doing. Statements of it often leave out explicit discussion of the broader context of the problem, but saying "Monty Hall" relates it directly back to a game show where the host always revealed a goat.


The report I read was that he didn't always do it in the real gameshow.


If he opens a door and shows the car, the game is over with nothing to win (or they get the car?).

If he doesn't open a door, it isn't the Monty Hall problem.


If he had the option of not opening a door and just opening the one the contestant picked immediately, then that makes it a rather different problem from the mathematical one. And AIUI that was how the gameshow worked: sometimes Monty would reveal a goat and offer you the chance to switch doors, and sometimes he wouldn't.


P('My Door') = P('Any other Door')?


The problem is in the phrasing, and you're actually not improving on it. The way you explain the problem, it would still be 50/50. What changes the situation is that the show host CANNOT open the door with the car in it. The way you phrase it, it's not clear that the show host has any information that the participant doesn't have, but he does.


He does kind of improve on it. You do have information about whether the host knows which door the car is behind. If the host does not know which door the car is behind, he just did something very unlikely, so you should update your model based on this information. Assuming you have reasonable priors about whether the host has information about the car, then after he did what he did it is very likely he has information about where the car is.

Though actually I don't think you can then use this information to answer the question because it is tainted with the assumption that the car is not behind your door. :(


Yes, but that's not what this problem is about. Monty Hall actually knew where the car was and never picked it; that's how the show worked. The problem is that whenever the Monty Hall problem comes up, this is never clear in the phrasing. For someone who has never seen "Let's Make a Deal", this is not obvious, and thus people are confounded by the problem when really it's not very confounding once you know the facts.

On a side note, I think what you talked about would be 0.2% chance: product((n-1)/n) n=3 to 1000

I'm not a stats/probability wiz, but I suppose if you need to decide between Monty Hall using his knowledge or not, you'd be fairly certain by this point...
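For what it's worth, that telescoping product can be checked in a couple of lines (my own back-of-the-envelope verification; assumes `math.prod`, i.e. Python 3.8+):

```python
import math

# P(an ignorant host opens 998 of the 999 other doors without hitting the car)
# = P(car behind your door) + P(car elsewhere) * P(its door survives the openings)
# = 1/1000 + (999/1000) * (1/999) = 2/1000
p = math.prod((n - 1) / n for n in range(3, 1001))  # the telescoping product
print(p)  # ≈ 0.002, i.e. 0.2%
```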


Yes, I guess making that explicit would make it even more obvious.


This is the very best technique of explaining the Monty Hall problem I've ever read.


> You find someone with a positive score. What's the chance they have it?

Did you remember to consider the really tricky and counterintuitive conditional probability for this?

P('has the rare illness' | 'is at doctor's appointment with at least partially matching set of symptoms')

Assuming here that the doctor isn't just testing everyone for the rare illness.


A lot of these errors are more in the confusing phrasing or unstated assumptions.

In Monty Hall the ambiguity is around whether the host is randomly opening doors or not.

With the doctor example, they'd normally only face that question when they've ordered a test for someone, so the probability is much higher.

The other classic 'unintuitive' result is the prisoner's dilemma, because people have emotional bonds to colleagues, and if they break them those emotions can lead to revenge and retribution. These have to be ignored in the classic formulation, but recast it as a drug deal or spy exchange and it makes more sense to people.


At least the way I've heard it, the doctor problem is given as an answer to the question "Why don't we test everyone for HIV/cancer/horrible disease X?" See for example the discussions people had when the American Cancer Society recommended that women with average cancer risk delay their first mammogram to age 45.


It's relevant to that question, but it's also cited regularly as an example of how "even doctors can't do stats".

Which, like most people, they probably can't. But asking someone with lots of experience with a situation, a question about a superficially, but not actually, similar situation adds another level of confusion beyond inability to work out the numbers logically.


Thinking in probabilities is certainly possible without formal training, and I'd argue most people do it, just cutting corners and not actually doing the math.

Example calculation that I (and every other kid) did every day after school:

- it takes 40 minutes on foot to get home; buses go every 20 minutes on average and get you there in 10 minutes, but sometimes they are late, and sometimes a bus breaks down and you have to wait as long as 40 minutes

- it takes 20 minutes on foot to get to the next bus stop, so if you walk you may miss the bus if it passes while you're too far from both stops

Depending on whether there are people at the bus stop (so there was no bus recently) and whether you see buses going the other way (so you will wait at most 20 minutes), it makes more or less sense to wait instead of walking. But take into account that if you left a while after classes ended, the bus stop may be empty even if there was no bus recently.

It even makes you pay (with time or ticket money) for miscalculations :)
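The walk-vs-wait tradeoff above can be sketched as a tiny expected-value calculation (the uniform-headway assumption and the exact numbers are mine, purely illustrative):

```python
# Numbers from the comment above; a uniform 20-minute headway means the
# average wait at the stop is half the headway.
walk_home = 40          # minutes on foot, door to door
ride = 10               # minutes once aboard
expected_wait = 20 / 2  # minutes, assuming you arrive at a random time

expected_bus_trip = expected_wait + ride
print(expected_bus_trip, "vs", walk_home)  # 20.0 vs 40: waiting wins on average

# Seeing a crowd at the stop (no recent bus) or buses passing the other way
# shifts the wait estimate, which is exactly the informal update described above.
```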


There's indeed a paper showing that's roughly what's going on: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.69.2...

>Human perception and memory are often explained as optimal statistical inferences, informed by accurate prior probabilities. In contrast, cognitive judgments are usually viewed as following error-prone heuristics, insensitive to priors. We examined the optimality of human cognition in a more realistic context than typical laboratory studies, asking people to make predictions about the duration or extent of everyday phenomena such as human life spans and the box-office take of movies. Our results suggest that everyday cognitive judgments follow the same optimal statistical principles as perception and memory, and reveal a close correspondence between people’s implicit probabilistic models and the statistics of the world.


Maybe my Bayesian stats are rusty, but wouldn't you need to clarify what "accurate" means to get a correct answer here? i.e. tell us if, by accurate, you mean sensitivity or specificity.

EDIT: Oh, never mind. Just noticed you specifically say 99% for the presence. :)


Could you show how much it is?


Totally not a statistician, but I'll give it a shot.

For the sake of the argument:

  test accuracy: exactly 99.0% accurate
  disease incidence: exactly 1 in 1 million
Calculation:

  For the sake of simple calculations, let's assume we test exactly 1 million people.

  tests positive = (1 * 0.99) + (999999 * 0.01)
  tests positive = (.99) + (9999.99)
  tests positive = 10000.98
We'll round up for the sake of argument to 10,001 positive results. And we know that only 1 person (remember that we're testing 1 million people) is actually sick. We have 1 actually sick person in 10 thousand positive tests. So the probability that the positive test right in front of you comes from a truly sick person is about 1 in 10 thousand.
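The same number falls out of Bayes' rule directly; a minimal sketch, assuming (as above) a symmetric 99% accuracy for both error types:

```python
population = 1_000_000
sick = 1                 # incidence: 1 in a million
accuracy = 0.99          # assumed equal sensitivity and specificity

true_pos = sick * accuracy                        # 0.99
false_pos = (population - sick) * (1 - accuracy)  # 9999.99
p_sick_given_pos = true_pos / (true_pos + false_pos)
print(p_sick_given_pos)  # ≈ 9.9e-05, i.e. roughly 1 in 10,000
```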


Beware, you are making a strong assumption: that the test's accuracy is the same for false negatives and false positives. For example, a test may not find enough "anomaly" in an ill person to trigger a positive, thus yielding a false negative; at the same time, it may never find any "anomaly" in a healthy person, and as a consequence never give any false positive. Back to your example: it's obvious that a test with a 0.01 probability of giving a false positive is completely useless for an illness that affects 1e-6 of people.


Actual descriptions of medical tests routinely give both rates. They often call them "sensitivity" and "specificity". Good luck remembering which is which.

But if only one rate is given, that indicates they're equal. If they're not, then it's reasonable to describe the documentation as incorrect.


I have no trouble remembering which is which, as "sensitivity" is used in a way quite similar to its everyday use.


Spot the domain-specific knowledge :)

While true in the real world this wasn't part of the problem as written above!


About 0.01%, or 1 in 10,000.

Consider a population of 100M people, of which 100 would have the illness. Of them, 99% = 99 would test positive and 1% = 1 would test negative. For the other 99,999,900 healthy people, 99% = 98,999,901 would test negative and 1% = 999,999 would test positive.

In total, 99 + 999,999 people would test positive. Given that a person tests positive, then, there is only a 99 / (99 + 999,999) ~= 0.01% chance that person has the illness.


Your answer is as problematic as dsp1234's one.


For sure, this assumes that false positive rate = false negative rate = 1%, but it suffices to illustrate how a highly accurate test can produce misleading results.

A solution would be repeated retesting, as the 1st, 2nd, 3rd, and 4th consecutive positive test results would lead to 0.01%, 1%, 50%, and 99% chances. (Each additional positive test reduces the false positives by 100-fold, whereas the ill patients are very likely to get continually positive results.)
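Those four numbers can be reproduced with odds-form Bayes updates (a sketch under the same assumptions: independent retests, symmetric 99% accuracy):

```python
prior_odds = 1 / 999_999        # 1 in a million are actually ill
likelihood_ratio = 0.99 / 0.01  # = 99: a positive is 99x likelier if ill

odds = prior_odds
for test in range(1, 5):
    odds *= likelihood_ratio        # each independent positive multiplies the odds by 99
    print(test, odds / (1 + odds))  # ≈ 0.0001, 0.0097, 0.4925, 0.9897
```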


Only if your tests are all independent. Doing the same test twice probably doesn't buy you anything.


I'm trying to understand the fundamental concept he's describing - how are the examples different from 'rational thought'? Or are they that, in a way? And is the conclusion then (bringing the first and second parts together) that one can only develop this through being taught? But then, don't the same arguments that applied to the politics example apply to mystics? And isn't that epistemological relativism all the way down?


I don't see these as hard milestones that require the amount of effort suggested by the article. To me, it all comes down to strict logical thinking.

Take the last milestone, understanding tradeoffs, as an example: if you are good at identifying logic jumps in an argument (or checking its validity), you will be good at understanding tradeoffs, because it isn't logical to think "something has a downside and thus I shouldn't pick it."


1 and 2 were a gotcha for me for many years...


There are some good ideas in here. But, the discussion of "primitive cultures" is so problematic that it undermines my ability to take him seriously.


> But, the discussion of "primitive cultures" is so problematic that it undermines my ability to take him seriously.

Consider then that maybe it's a flaw in your ability at #1, "to distinguish 'the things my brain tells me' from 'reality'". Especially since there's little "discussion" of "primitive cultures" there, just a list of factual observations about them made by various people.


What is the problem with saying that cultures with differing levels of technological and social complexity present different sets of cognitive challenges to their members? If the word "primitive" bothers you, substitute "lower levels of technological complexity" to get the same endpoint.


[flagged]


The notion of "primitive" cultures, societies, religions etc. has not been taken seriously by anthropologists for decades. His whole argument hinges on theories of cultural relativism from the late 19th and early 20th century.

So, yeah, I am going to dismiss arguments that are based on analogies to discredited ideas.


But where does he end up with the "primitive cultures"?

To me it looks like it was just an imaginary concept used as a tool. Once you understand that the development of cognitive abilities is related to environment, and that these abilities are not necessarily "levels", you can forget the whole idea of "primitive cultures".

This is a prime example of why one should sometimes "entertain a thought without accepting it".


The funny thing was that African tribesmen refusing to talk about things they've never seen before was actually some pretty clear thinking on the part of the Africans, confronting very confused thinking on the part of the anthropologists.


[deleted]


You basically said that you're dismissing this guy wholesale because of one small part of the article that is not that important to its overall point.

How is that not a low-brow, low-content "reddit"-like comment?


Sorry but can't take anything associated with Lesswrong even remotely seriously. To call them a cult isn't an exaggeration.


Why?


So if I catch you in an error, I can not only call you on that but can now imply that you are developmentally stunted as well? Sounds like fun! Let me try:

The Post argues that because the Democrats support gun control and protest police, they are becoming the “pro-crime party”.

Actually, the Republican-leaning columnist Ed Rogers argues this. If Scott Alexander were a fully developed adult, he would recognize that individual columnists' views may not align with those of the paper. If indeed "the paper" can even be said to have a single point of view.


That depends on exactly how the column was labeled. Newspapers as corporations endorse the views of their columnists by default; for the view to be attributed solely to the columnist, something has to call it out specifically as a personal opinion.

