-Don't trust any model that implies X is too low unless it's also capable of detecting when X would be too high
I've been frustrated by how people can fail this for X=the minimum wage, immigration, ease of getting public assistance (just to show that it's across the spectrum), as well as less political issues like "approaching strangers" or "trying to persuade after a rejection".
There is a very common mentality out there that cannot admit to any downsides to a favored policy. I had often attributed this to "well, they have to be wary of tripping others' poor heuristics", but this may itself be the #1 fallacy Alexander mentions! It can just as well be that the person cannot think in terms of tradeoffs.
If the model says to take between X and Y units ("but I don't know what's causing the harms outside the range"), then it may be a shallow understanding but it's not failing the SCH, and it avoids a common failure mode.
Exactly! And yet for centuries it was nevertheless an extremely valuable model to have (I should have said "lemons" rather than "vitamin c").
"Hey -- bring lemons on your ships because it stops scurvy"
'Oh, so why not a ton of lemons? Two tons? Ten tons?'
"Well, to stave off scurvy, you only need x units per sailor per day. Beyond that, it's just expensive deadweight."
In contrast, there are policy advocates who want an X to be higher and yet who haven't met the "tradeoff development threshold". Instead of being able to articulate a model which tells you when X is high enough to be a net negative, they will show an inability to understand the core challenge: "Strawman, no one's advocating 2X." "That would just be absurd." "I didn't say 2X, I said 1.4X."
It's true that if you posit a scenario in which it's physically impossible to steer far enough right to hit Charybdis, then it will look successful to have the model "steer as far right of Scylla as you can"; but this isn't the general case, and it wouldn't count as an understanding of tradeoffs.
"If I'm going to be late, I should walk faster" isn't modeling velocity — it's modeling punctuality.
I don't see it that way; I see it modeling Vitamin C consumption. To me, your assertion that it's modeling scurvy instead feels contrived in order to fit the top-level comment's principle.
If someone says "take vitamin C", you don't point blindly at a log chart and thus consume a kilogram of it (which, based on a rat model, probably would kill you).
This is because the practical algorithm for "take Vitamin C" has the grocer, the government, and a team of scientists sign off on the size of a dosage and how many pills are even in a bottle. So while that may not be part of your model, it is most definitely part of the model. The model tells you how much Vitamin C not to take, and so it passes the heuristic.
And without that cap, Vitamin C is unsafe, etc., so the heuristic holds.
However, in that example, I'd say that modern medicine certainly has a "too much vitamin C" threshold, and thus might be a sensible model of human vitamin C needs.
I had never seen this idea in writing, but I certainly remember thinking along those lines about retiring age and similar policies.
The point was that the cited model, "scurvy symptoms => too little vitamin C", is useful (in some situations, very useful, if you aren't in possession of a better model with which this one would agree) while being in violation of the maxim given upstream.
I think a much stronger rule - which wouldn't deserve the same catchy name - is that your model should at least be able to say "X is not too low".
Does this comment make me a hypocrite?
This works because problems with simple solutions don't stay problems for long.
Problem: I'm dehydrated.
Solution: I drink (potable) water.
(Don't get distracted by all the other potential bad solutions, we're just talking about how this specific simple solution does work.)
If it's a pervasive problem that has plagued humanity since the dawn of civilization, then simple solutions probably won't work, no. I'll cop that I lean libertarian but ultimately the very popular "let's just free market at it" and "let's just government at it" are equally stupid solutions to any hard problem.
Would you be averse to it just because it is simple?
Of course, history is also full of people insisting, "No, hold on, it's far more complicated than that!" and then being totally wrong.
For instance, people spent a very long time believing that a difficult-to-model combination of many different factors produced stomach ulcers. Then an experiment was done, and voila, the real cause was Helicobacter pylori.
Simplicity (or, in fact, regularization) is helpful far more often than it's harmful.
1) what are the trade-offs
2) what are the potential unintended consequences
3) what happens as the boundary conditions are approached: e.g. 60 years later, very few people do X, many people do X
However, in engineering, I apply the *nix philosophy of less is more. But this post isn't very technical.
I think it's a good social heuristic.
Depends on how you define magical concepts. Are they magical because they are too complex and require too many assumptions?
For example, by your definition, is it ok to have faith that would be associated with a religion, if that faith is based on a wide variety of experiences and acceptance of the existence of others worldviews?
The problem with not having a simple solution is that humans are wired to take action based on simple solutions. Here's a test for your theory of mind. Can you imagine that someone else thinks, "I know I'm not completely right, but I also know that the best way to lead this group forward is to present a simple solution as if it is right." How would they act?
I think that's a fairly flaky heuristic. There's no point at which a nation's GDP could be considered "too high", although some of the tradeoffs necessary to increase economic growth beyond a certain point may diminish net utility.
I prefer a heuristic along the lines of "Don't trust any argument that doesn't explicitly state the other side of the cost/benefit tradeoff".
If you have a job making decisions for groups then statistics matters greatly but in our personal lives, genuinely statistical problems are so rare compared to the huge number of decisions we actually face.
Most of our decisions are characterised by uncertainty, uniqueness, and an absence of data. Real-world rationality is less like mathematics and more like system design, where we weave together plans and strategies. For example, using contingency planning: a rational buyer ensures the seller has a good return policy rather than trying to use Bayes' rule to analyse their past purchase experiences.
IMHO the overlooked core of rationality is creativity - you have to imagine possibilities and invent plans. It's only when you reduce decision-making to math problems that math seems so important.
One would like to think that my debates with him about this, or maybe the stats class he took, were what helped fix this intuition. But what was probably just as important was playing games like LoL, where, in the face of uncertainty, you make calculated risks all the time. At some point, it is impossible to get better without taking intelligent risks.
Stylist: Your house was moved from the population of un-styled houses to the population of styled houses. This population has a different mean time-before-being-sold but it still has long tails, I'm sorry you are in that tail.
Btw: "a rational buyer ensures the seller has a good return policy rather than trying to use Bayes' rule to analyse their past purchase experiences."
Sounds to me like one would first look for priors that confirm that good return policy, one does not pull trust out of thin air.
It might work for things that happen over and over again, like deciding which route to take to work. You can accumulate enough useful data to do that.
Many new and large decisions by their nature do not have that data and then you're stuck looking for proxies for it based on other's data (experiences) and that rarely applies to your circumstances.
An example of this would be moving to SV: you probably don't have your own data points, so you're relying on the data of others (friends, people on the internets), but it's unlikely that that will have a bearing on how you feel about it once you're there.
Alexander's own take is http://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-...
I wonder if the discussion of that article would have gone differently if it had been about (internally) reasoning about uncertain things as 51/49 choices and made it clear that it isn't really useful to communicate using the made up numbers (except maybe if the whole hypothetical case is explicitly communicated).
No? The way you (or at least I) use it is to plug numbers in (I mean, what else can you do with it?). "I think this factor is somewhere between 1 and 10,000, and this one's about 1,000,000, so... huh. My model can't be right."
> I wonder if the discussion of that article would have gone differently if it had been about (internally) reasoning about uncertain things as 51/49 choices
In a 51/49 scenario there's no point spending a lot of time evaluating. But unfortunately we often naturally assume a scenario is 51/49 when it's not.
> made it clear that it isn't really useful to communicate using the made up numbers (except maybe if the whole hypothetical case is explicitly communicated).
A number calculated from made up numbers is probably more accurate than a gut feeling, and we communicate those all the time.
You're implying a narrow (but depressingly common) definition of "mathematics".
System design is solidly inside the cognitive realm of mathematical thinking.
Working through that meant gaining a better distinction of the self from the system. I explicitly use a virtue ethics mode for my immediate surroundings and everyday behaviors (I behave in the way in which I feel the least guilt), and then a utilitarian mode when I have to make a decision affecting potentially large groups (e.g. my voting decisions tend to be less selfish and more model-based). This keeps my priorities healthy because it stops me from applying an inappropriate "scale" of thinking to the situation.
This is very relevant to how we think about education, communication, information design and propagation.
The combination of this SSC and the linked David Chapman piece are very good. They explained and formalized quite a few concepts that have seemed like weird nebulous glitches in the matrix to me. And not only did it make some things click into place, it expanded the amount of places things could click into.
I recently got a very good lesson about this while learning about MBTI. It's really not a scientifically rigorous model. But it shows you how some people could function in a fundamentally different way than you. And there are people who subscribe to each of the 16 categories. The Big Five is more statistically valid, but it is lacking in the potential to explain the world from different points of view.
I think that is really powerful: the ability to think in a Bayesian way, to not label something as one thing or the other but to accept fully that, as things stand now, something can be 70% A and 30% B. This does not even have to collide with objectivist views, as the thing is either A or B, but based on all the information you have you can only be 70% sure it is A.
New information may well lead to a new 20% A, 70% B, 10% something else. I tend to judge people by their ability to do this. "Never enter a discussion unwilling to change your mind" is one of my mantras. I'm also fully aware I break it often, by the way.
I have often seen two experimental situations with highly overlapping histograms for one parameter and then have a colleague ask: "So where do we draw the line for positive/negative?" I always say: "We don't. We call the sample that is exactly in the middle 50% positive, 50% negative and we have said everything we can about said sample given the 1 dimensional information we have."
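That "50% positive, 50% negative in the middle" claim falls straight out of Bayes' rule. A minimal sketch, assuming the two populations are Gaussians with equal priors (the function name and parameters are illustrative, not from the comment):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior_positive(x, mu_pos=1.0, mu_neg=-1.0, sigma=1.0, prior_pos=0.5):
    """P(positive | measurement x) for two overlapping Gaussian populations."""
    p_pos = gaussian_pdf(x, mu_pos, sigma) * prior_pos
    p_neg = gaussian_pdf(x, mu_neg, sigma) * (1 - prior_pos)
    return p_pos / (p_pos + p_neg)

# The sample exactly in the middle really is 50/50:
print(posterior_positive(0.0))  # 0.5
```

Moving the sample toward either mean smoothly shifts the posterior, which is exactly why drawing a hard line throws information away.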
What's cool for the Dune fans: this seems to me to be exactly what the Bene Gesserit witches and mentats do. Ingest proofs, take tiny, tiny hints, combine them in a Bayesian manner, produce a likelihood of truth for a certain hypothesis... Perhaps I'm over-interpreting (and projecting) ;)
"Hmmm I think this bug will take a week to fix"
Gets fixed in two days.
"So why were you wrong in your estimate?" or, even worse, now your estimates get discounted next time. And the same holds true if it happens the other way round.
> he gets the thing, where “the thing” is a hard-to-describe ability to understand that other people are going to go down as many levels to defend their self-consistent values as you will to defend yours.
I think it boils down to anger.
I can't recall what causes it (the basal ganglia if I'm not mistaken); at birth we have "biological firmware" that protects our internal mental state - part of which is our belief system. When our internal state is challenged a flight or fight response is invoked.
Hypothesis 1: this means that the anger or irrational response to a challenge of beliefs could be a biological disposition.
Hypothesis 2: if you still have any semblance of neuroplasticity it might be possible to 'unwire' or 'reflash' this primitive "firmware".
I've been attempting to do this over a few years: killing off that primitive part of my brain by proactively seeking out conversation with intelligent people who are opposed to my beliefs: mostly bigots. I have anecdotal evidence that it might be working (although it can't be quantified or proven in any way) - mostly by observing this flight or fight response in others when I have none.
If someone passionately disagrees with you, accusing him of cognitive dissonance always works. So you can assume the high ground by being right and being capable of seeing the motivations of the other party. (Or it might be that your opponent actually passionately disagrees. But let's not talk about that.)
Very, very close to what I was referring to. As I recall, there is a biological reason underpinning effects like cognitive dissonance and confirmation bias (Freudian denial in general). I could just be remembering things incorrectly and you may have hit the nail on the head.
I believe not learning this is a problem in the communities within which I grew up. Bible belt, rural ... The reinforcement of "we should all think/believe/do the same things" seems to come from a lack of exposure to any people or ideas from outside the community; from "isolationism" one could say.
You find that phenomenon in surprising places. I'd claim that nobody is completely immune.
I was thinking about Judith Butler. Philosopher, gender theorist, and published author. I read her "Gender Trouble" to about halfway. It occurred to me that she might not really recognize that some women might be genuinely straight. It explained the tone of the book a bit too well. My queer ex-roommate also expected everybody appearing straight to just be closeted. These are individuals who really should understand that sexuality comes in all colors of the rainbow.
the only thing that bugs me, with this text, and the whole rationalist world-view, is how human emotion sometimes becomes a thing to be ashamed of. something beneath a hypothesized "true", "correct" condition. like, the part about recognizing that other people have different minds, different motives, and thus reach different conclusions - on one hand i agree that it is a matter of personal development, and that many people are just plain jerks (sometimes, myself included). but on the other hand, the political examples given in the text seem like a normal emotional response. isn't it simply hard trying to think for everyone, understand everyone all the time? is it even possible? particularly when the issue is something that, for whatever reason, you feel personally. it is not automatically a sign of retarded cognitive development if you just simply got tired.
rationalism sometimes seems so, well, christian - we are all filthy sinners by default.
The "rationalist world-view" is just noticing that (per #1) what your brain tells you and what the world really is are two different things; therefore your emotions may or may not be properly aligned with reality (btw. this is also the primary insight of Cognitive Behavioral Therapy). Emotions are just like opinions, but in a different part of the brain - you want to have ones that correspond to reality.
"Contrary to the stereotype, rationality doesn't mean denying emotion. When emotion is appropriate to the reality of the situation, it should be embraced; only when emotion isn't appropriate should it be suppressed." - http://wiki.lesswrong.com/wiki/Emotion
now, generally, from experience, i have the impression that higher mental skills often translate simply into a higher capability of producing more elaborate rationalizations of positions that a person would have held anyway, positions which are rooted in emotions. pointing out rational counterarguments can in many situations only go so far. in fact, i have many times had an easier time discussing with people one might call primitive, since you get to the truth of their motives very quickly, and with respectful approach to people's feelings, a lot can be achieved. with very rational people it's sometimes really difficult to break through the layers of rationalizations.
i'm not saying that rationalism is bad, or that the author (or you) like to go around calling people retards :D i just feel that despite best intentions it can lead to disregard for emotion, even though in many cases emotion is the ultimate relevant "reality of the situation". we can and must make ourselves better, but we can never stop being people.
I think that's a potential failure mode of getting too smart. Rationality is supposed to teach you to notice when you're building layers of rationalization instead of updating your opinion to account for new evidence. It's a hard habit to learn, so what you described often happens instead. But if you're aware of this trap I still think learning more is better than not learning :).
> with respectful approach to people's feelings, a lot can be achieved
That's probably the truth about dealing with people, of all kinds of skill, education, smartness, etc. I'd say most of interpersonal conflicts come from one or both sides not groking it.
my question is perhaps do we properly acknowledge the difficulty we face? you say, for example, "it's a hard habit to learn" - but can we ever even learn it fully? it should make us humble. we can easily add disclaimers to our speech, but in practice it shows that we don't really mean them. we say it is hard to shake the rationalizing habit, and yet we often ascribe a level of certainty and objectivity to our conclusions which makes it seem like we completely disregard the possibility of our own emotions being behind our thinking. and ultimately, can any thought ever be had without an emotion behind it? Minsky, i think, remarked that he does not believe AI without emotion is possible. i completely agree with that.
So when I observe someone saying "that guy is racist, don't believe his arguments", I can recognize that this is an emotional claim. It's an attempt to demonize a person rather than refute the argument. I can then observe that no one is disputing the actual argument, evaluate that on merit, and have a true belief about the world unbiased by my desire not to affiliate with racists. I can also update my beliefs about the rationality/honesty of the person saying "hey that guy is racist".
Conversely, I can also recognize that a desire not to be racist is strong in people, and exploit it when I want to manipulate the less rational. For example, if I'm arguing against economic protectionism with an emotionally driven person, I might use Jim Crow as an example of protectionism rather than occupational licensing.
in fact, people who believe they successfully suppressed or outgrew any emotion are typically the ones most influenced by it, subconsciously.
Instead, it's more of an ongoing process. The process of identifying your biases and reasoning about them is on-going. You will always have biases, the trick is to make the subconscious influence conscious. To come back to the Zen comparison - the more rational you become, the more you realize you're not rational at all.
i am being unfair, perhaps, as the other comment here states. i may be identifying some negative aspects in some people with a whole group that did not deserve it. but, OTOH, you two may be resorting to the "No True Scotsman" fallacy.
That may be a fair statement, since I do not believe there is a perfectly rational person in the real world - just those who are working to fight against their inherent irrationality. All we can ever rationally (oh, the irony) expect is that they do their best.
I recently got around to reading Scott Aaronson's Complexity of Agreement: http://www.scottaaronson.com/papers/agree-econ.pdf
One of the predictions of this paper is that rational Bayesians sharing information will a) rapidly converge to agreement and b) after making arguments, will often switch sides. An individual's position in an argument will look like a convergent random walk rather than a gradual concession in a negotiation. Of course, real life arguments rarely go this way.
But when reading this, I was struck by the fact that in some rare cases I have engaged in arguments that move this way. In every case I can think of, the other participant in the conversation was either a mathematician, a philosopher, or a lesswrong reader.
 One rare exception to this is real life arguments about what a stock price should be. Yay for market transmission of information!
> ...rational Bayesians sharing information...
setting aside the inherent humor of economics (which is sadly completely lost on economists), have you ever wondered - why would a purely rational actor even participate in the discussion, or, as a matter of fact, in anything? why would a purely logical machine get out of bed in the morning? would it not need to "want" something first? what would an emotionless logical machine "think" about, and why would it think about that and not something else? intellect without emotion is nothing. not figuratively, not poetically, but literally nothing.
P.S. since when does the word "converge" apply to anything going on in a market? :P
The word "converge" has always applied to market responses to new info. Find a news event (e.g., WMT Oct 13-14) and look at the second or minute level movements near that event. It's qualitatively quite similar to Aaronson's theorem.
so, just IMO, a perfectly rational actor should be emotionless, at least on the matter at hand. any kind of emotional tendency would skew their reasoning.
to add to what you said more above, about those fruitful discussions: i would bet that the people you had those fruitful discussions with have had another common characteristic. namely, they did not have much personal stake in the issue you were discussing. when people approach a discussion with only a desire to learn and improve their opinions, they can indeed have a quality discussion.
ultimately, i don't believe such neat separation of rational and irrational can ever work (aside from sometimes being a useful approximation). which is why i asked those silly questions - how can a rational actor want something, without wanting it? i find the concept very contradictory, borderline useless.
edit - i think i have to back down a bit, or maybe just clarify, dunno. i maybe get what you meant. people that appreciate rationality will tend to be better discussion partners even if they are personally affected. but it will be much harder...
Nope. A perfectly rational actor should have unskewed reasoning, yes, but you can (in principle) achieve that by making your emotions not skew your reasoning rather than by throwing your emotions away.
In a word - huh? An irrational person might wake up one morning with the urge to paint a picture - are you suggesting a purely rational person wouldn't feel such an urge, or that they wouldn't act on it? In either case, why not?
See http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/ about the definition and http://lesswrong.com/lw/hp/feeling_rational/ about the feelings angle.
>The point of rationality is to detect your own cognitive failures and to recognize and exploit those of others.
This is false. The point of "rationality" is to achieve greater cognitive success: to have your thoughts yield information about the world by allowing the world to move your thoughts. The people who try to do this (such as, in this case, me) do so because we feel like our thoughts and emotions ought to be about stuff. The more I make my emotions be linked to my thoughts and my thoughts be linked to the real world, the less gnawing self-doubt I have to deal with when things go bad, and the more I can enjoy when things go right.
If all you can do is recognize cognitive failures, you will end up an epistemic relativist, which is useless.
If what you care about is exploiting the cognitive failures of others, you're just a jerk.
> If what you care about is exploiting the cognitive failures of others, you're just a jerk.
I totally, 100% agree with you. Rationality is a tool; if you use it to exploit people, you're just a jerk.
Is everyone a jerk?
In my view, a big failure of rationalists (coming from the typical mind fallacy, most likely) is that they put too little effort into guarding against manipulations of this sort. It's certainly a failure of mine.
I thought long about it and I'm still confused at some points, but I ended up viewing the issue through a lens of intent. Am I exploiting people by building a pretty website? Maybe, in a way that my actions cause them to spend more time on it. But if I do it with intention of helping them accomplish whatever they're looking to accomplish, that will be beneficial to them, then it's ok. If I'm doing it to trick them into wasting more time on my site full of half-assed linkbait content so that I earn money through them viewing ads, then I am a fucking jerk.
So no, not everyone is a jerk. Only those who seek to act to purposefully harm others (usually to gain something at their expense). Which sort of fits the very definition of the word "jerk".
The reason to do so is also perfectly rational - they're easier to get to the voting booths to vote for your candidate with such rhetoric. The moderates, who are unswayed (yet annoyed) by such rhetoric are more likely to be the ones who do not show up at the voting table, because they see the rhetoric from both sides as equally reprehensible, and will either not vote at all (in my case, due to the futility of voting in the US's current two party system), or vote for a third party (making their vote equally irrelevant as if they hadn't voted at all).
So, both sides appeal to those folks who are so easily swayed by irrational comments which make them feel superior to "those dirty left/right-wingers", and the result is power (government positions) and money (PACs, lobbyists, donations).
I've met people who honestly have made the "dey must be evil" maneuver. My experience says the sentiment is genuine.
you also seem to be putting words into my mouth. i was merely offering an additional explanation for that behavior, to indicate that simply putting it under "poor cognitive development" (i.e. calling people retards, but in bigger words) is insufficient. i never said that we should all accept absolutely anything, and skip across blooming meadows singing kumbaya lol.
> do you realize that you're now going for a very emotional attack on my comment?
Honestly, I don't know what you're trying to prove here. My comment is invalid because I expressed embarrassment? lol?
Errata regarding your other threads: LW members call themselves "aspiring rationalists" to remind themselves that they have not yet "outgrown humanity". You also seem to confuse logic and rationality. When economists talk about rational agents, they're discussing an agent which employs a decision-making algorithm consistent with the von Neumann-Morgenstern utility theorem.
and the reason i asked the "do you realize" question is because we were talking here about rationalism and its limits. i'm in a kind of mildly anti-rationalism position, and you attack me, all emotional. so i thought, this is kinda funny. get it?
edit - and now, in an edit, you suddenly join the discussion... i'll have to get back to you later.
This is exactly the opposite of my opinion. Very rarely do people think of themselves as "evil". E.g. I highly doubt Bin Laden thought of himself as an evil man. He probably thought he was doing the Middle East a favor. Same goes for Hitler.
My embarrassment and disappointment is directed towards contemporary society, the decadence of which I find your comment's nonchalance indicative of. And no, you're not as clever as you think you are by distinguishing rationality from emotion as if they were mutually exclusive. That you posit others believe emotions are something to be ashamed of is a straw man. Or better yet, a Straw Vulcan (it looks like temporal covered this topic already).
must be hard being so superior to everyone around you, huh? ;)
A thought experiment. Imagine that everyone in the 18th century (including 60 year old men and women) jokingly called each other "retard". Would this strike you as immature? What if I expressed this sentiment aloud, and someone responded "There's a confounding social factor. If one were to object to such a social norm, he or she might be called a square and thus ridiculed! Maybe it's not right, but we can never stop being people."
This response bothers me. To call it a "confounding" rather than "additional" factor implies that risk of reputation somehow funges against the immaturity of the social norm. I would argue that calling people "retards" is immature regardless of the mechanism driving the social norm. On top of this, you seem to be implying that I think myself "superior" to those calling other people retards.
(Now replace every instance of "retard" with "evil democrats". That is my original argument.)
(By the way, your comment frames things as if you're 100% the good guy, and I'm obviously a villain. This is the exact behavior the Robinson article criticizes.)
You could pick any number of other developmental milestones in other categories, for children, teenagers, or adults.
In the ethical bucket, children begin to recognize abstract rules as a system of dispute resolution ('he should go first because he got there first'), develop a concept of fairness, etc.
In the social / emotional bucket, adolescents begin desiring independence and feel peer pressure much more acutely.
Is there any reason to believe that theory-of-mind-relevant developmental psychology is more important than the other parts?
(Its readable description of multipolar traps is good, but I imagine it's hard to see at once if you haven't been following the author for a while. In any case, the AI connection at the end is tenuous.)
The other example is this old error, which is for some reason attributed to doctors: you have a test which is 99% accurate (it shows the correct status 99% of the time) for some illness, and 1 in 1M people actually have the illness. You find someone with a positive result. What's the chance they have it? Actually not as high as you'd think.
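The arithmetic behind the surprise is a one-liner with Bayes' rule. A quick sketch, reading "99% accurate" as both the sensitivity and the specificity (an assumption; the comment doesn't distinguish them):

```python
def p_ill_given_positive(prior, sensitivity, specificity):
    """Bayes' rule: P(ill | positive test)."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# 99% accurate test, 1-in-a-million illness:
p = p_ill_given_positive(prior=1e-6, sensitivity=0.99, specificity=0.99)
print(f"{p:.2%}")  # 0.01% -- the false positives swamp the true ones
```

With a base rate that low, nearly every positive is a false positive, which is the whole point of the error.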
To make it more intuitive, we can simply make the shift in probability absurdly high!
Imagine you have 1000 doors; behind one there is a car and behind the others there are goats. You pick one door. The show host opens 998 of the remaining doors, revealing only goats, leaving your door and one other closed. Do you think it's more likely the car is behind your door or behind the other closed door?
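The 1000-door intuition pump is easy to verify by simulation. A minimal sketch, assuming (as the later replies discuss) that the host knows where the car is and never reveals it, which means switching wins exactly when the first pick was wrong:

```python
import random

def thousand_door_game(trials=100_000, seed=0):
    """Fraction of trials where switching to the last closed door wins."""
    rng = random.Random(seed)
    switch_wins = 0
    for _ in range(trials):
        car = rng.randrange(1000)
        pick = rng.randrange(1000)
        if pick != car:          # host's informed reveal leaves the car behind the other door
            switch_wins += 1
    return switch_wins / trials

print(thousand_door_game())  # ~0.999: switching is almost always right
```

The simulation collapses to a single comparison precisely because the host's knowledge carries all the information, which is the ambiguity the thread goes on to argue about.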
If he doesn't open a door, it isn't the Monty Hall problem.
Though actually I don't think you can then use this information to answer the question because it is tainted with the assumption that the car is not behind your door. :(
On a side note, I think what you talked about would be a 0.2% chance: product((n-1)/n) for n=3 to 1000
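That telescoping product is easy to sanity-check; a quick sketch (mine, not the commenter's) with exact fractions:

```python
from fractions import Fraction

# Probability that a host opening 998 of the 999 remaining doors *at random*
# happens to reveal only goats: product of (n-1)/n for n = 3..1000.
p = Fraction(1, 1)
for n in range(3, 1001):
    p *= Fraction(n - 1, n)

print(p, float(p))  # telescopes to 2/1000 = 1/500 = 0.002, i.e. 0.2%
```

The product telescopes because each numerator cancels the previous denominator, leaving 2/1000.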
I'm not a stats/probability wiz, but I suppose if you need to decide between Monty Hall using his knowledge or not, you'd be fairly certain by this point...
Did you remember to consider the really tricky and counterintuitive conditional probability for this?
P('has the rare illness' | 'is at doctor's appointment with at least partially matching set of symptoms')
Assuming here that the doctor isn't just testing everyone for the rare illness.
In Monty Hall the ambiguity is around whether the host is randomly opening doors or not.
With the doctor example, they'd normally only face that question when they've ordered a test for someone, so the probability is much higher.
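That prior shift is easy to quantify with Bayes' rule; a small sketch (the 1-in-100 prior for a symptomatic patient is my made-up illustration, not a real figure):

```python
def posterior(prior, sensitivity=0.99, false_positive_rate=0.01):
    # P(ill | positive test) via Bayes' rule
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

print(posterior(1e-6))  # screening the general population: ~0.0001 (about 0.01%)
print(posterior(0.01))  # hypothetical 1-in-100 symptomatic prior: exactly 0.5
```

Moving the prior from 1-in-a-million to 1-in-a-hundred moves the posterior from roughly 0.01% to 50%, which is the whole point of the "they've already ordered a test" observation.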
The other classic 'unintuitive' result is the prisoner's dilemma, because people have emotional bonds to accomplices, and if they break them, those emotions can lead to revenge and retribution. These have to be ignored in the classic formulation, but recast it as a drug deal or spy exchange and it makes more sense to people.
Which, like most people, they probably can't. But asking someone with lots of experience with a situation a question about a superficially, but not actually, similar situation adds another level of confusion beyond the inability to work out the numbers logically.
Example calculation that I (and every other kid) did every day after school:
- it takes 40 minutes on foot to get home; buses go every 20 minutes on average and get you there in 10, but sometimes they are late, and sometimes a bus is broken and you may have to wait as long as 40 minutes
- it takes 20 minutes on foot to get to the next bus stop, so if you walk you may miss the bus while you're too far from either stop
Depending on whether there are people at the bus stop (meaning there was no bus recently), and whether you see buses going the other way (meaning you will wait at most 20 minutes), it makes more or less sense to wait instead of walking. But take into account that if you didn't leave right after classes, the bus stop may be empty even if there was no bus recently.
It even makes you pay (with time or ticket money) for miscalculations :)
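The core tradeoff can even be simulated; a rough sketch using the numbers above (breakdowns, delays, and the walk-to-the-next-stop option are left out, so this only captures the base case):

```python
import random

def average_door_to_door(trials=100_000, walk=40, ride=10, headway=20, seed=0):
    # If buses come every `headway` minutes and you arrive at a random time,
    # your wait is uniform on [0, headway]; total bus time = wait + ride.
    rng = random.Random(seed)
    bus_times = [rng.uniform(0, headway) + ride for _ in range(trials)]
    return sum(bus_times) / trials, walk

bus_avg, walk_time = average_door_to_door()
print(bus_avg, walk_time)  # roughly 20 minutes by bus vs 40 on foot
```

Seeing people already waiting is evidence the next bus is close, which shifts the expected wait below headway/2 and makes waiting even more attractive.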
>Human perception and memory are often explained as optimal statistical inferences, informed by accurate prior probabilities. In contrast, cognitive judgments are usually viewed as following error-prone heuristics, insensitive to priors. We examined the optimality of human cognition in a more realistic context than typical laboratory studies, asking people to make predictions about the duration or extent of everyday phenomena such as human life spans and the box-office take of movies. Our results suggest that everyday cognitive judgments follow the same optimal statistical principles as perception and memory, and reveal a close correspondence between people's implicit probabilistic models and the statistics of the world.
EDIT: Oh, never mind. Just noticed you specifically say 99% for the presence. :)
For the sake of the argument:
test accuracy: exactly 99.0%
disease incidence: exactly 1 in 1 million
For the sake of simple calculations, let's assume we test exactly 1 million people.
tests positive = (1 * 0.99) + (999999 * 0.01)
tests positive = (.99) + (9999.99)
tests positive = 10000.98
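Carrying that arithmetic one step further to the actual posterior, a sketch with the same numbers:

```python
population = 1_000_000
ill = 1                      # 1-in-a-million incidence
sensitivity = 0.99
false_positive_rate = 0.01

true_pos = ill * sensitivity                          # 0.99
false_pos = (population - ill) * false_positive_rate  # 9999.99
total_pos = true_pos + false_pos                      # 10000.98

print(true_pos / total_pos)  # ~0.000099, i.e. about a 0.01% chance of illness
```

So of the ~10,001 positives, only about one in ten thousand is a true positive.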
But if only one rate is given, that indicates they're equal. If they're not, then it's reasonable to describe the documentation as incorrect.
While true in the real world, this wasn't part of the problem as written above!
Consider a population of 100M people, of which 100 would have the illness. Of them, 99% = 99 would test positive and 1% = 1 would test negative. For the other 99,999,900 healthy people, 99% = 98,999,901 would test negative and 1% = 999,999 would test positive.
In total, 99 + 999,999 people would test positive. Given that a person tests positive, then, there is only a 99 / (99 + 999,999) ~= 0.01% chance that person has the illness.
A solution would be repeated retesting, as the 1st, 2nd, 3rd, and 4th consecutive positive test results would lead to 0.01%, 1%, 50%, and 99% chances. (Each additional positive test reduces the false positives by 100-fold, whereas the ill patients are very likely to get continually positive results.)
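The retesting numbers check out in odds form; a quick verification sketch (assuming independent test errors, as the comment implicitly does):

```python
def posterior_after(k, prior=1e-6, sensitivity=0.99, false_positive_rate=0.01):
    # Each independent positive test multiplies the prior odds by the
    # likelihood ratio sensitivity / false_positive_rate = 99.
    odds = prior / (1 - prior) * (sensitivity / false_positive_rate) ** k
    return odds / (1 + odds)

for k in range(1, 5):
    print(k, round(posterior_after(k), 4))  # ~0.0001, 0.0097, 0.4925, 0.9897
```

Each extra positive test multiplies the odds by 99, which is why the posterior jumps from ~0.01% to ~99% in just four tests.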
Take the last milestone, understanding tradeoffs, as an example: if you are good at identifying logical jumps in an argument (or checking its validity), you will be good at understanding tradeoffs, because you can't logically conclude "something has a downside and thus I shouldn't pick it."
Consider then that maybe it's a flaw in your ability at #1 - "to distinguish “the things my brain tells me” from “reality”". Especially since there's little "discussion" of "primitive cultures" there, just a list of factual observations about them made by various people.
So, yeah, I am going to dismiss arguments that are based on analogies to discredited ideas.
To me it looks like it was just an imaginary concept used as a tool. Once you understand that the development of cognitive abilities is related to environment, and that these abilities are not necessarily "levels", you can forget the whole idea of "primitive cultures".
This is a prime example of why one should sometimes "entertain a thought without accepting it".
How is that not a low-brow, low-content "reddit"-like comment?
The Post argues that because the Democrats support gun control and protest police, they are becoming the “pro-crime party”.
Actually, the Republican-leaning columnist Ed Rogers argues this. If Scott Alexander were a fully developed adult, he would recognize that individual columnists' views may not align with those of the paper. If indeed "the paper" can even be said to have a single point of view.