In my space, we refer to some things as "portable" or "non-portable", which has a very specific meaning that has nothing to do with whether you can pick them up. I think loan-words which have close analogies in people's minds rapidly diverge.
So a "rational" actor in economics seems (to me at least) to mean the typical selfish bastard who only acts to maximise their own profit, however that's defined. That excludes a green warrior buying a good or chattel in order NOT to use it, or somebody buying something to give to charity, or buying an item to round up another charge and get north of a free-shipping threshold, where it's a nonce purchase with no intent or purpose behind it.
It's a narrow, domain-specific meaning. I think that word doesn't mean what you think it means. "Inconceivable!"
That's not true at all. Rational is defined as acting to maximise one's utility (roughly, satisfaction), which can encompass "a green warrior buying a good or chattel to NOT use it, or somebody buying it to give to charity, or buying it to round up another charge to get north of a shipping fee" perfectly well.
This also leads to another concept of “Malice” which would be the positive utility you get from others losing utility.
The crux to me is that utility is often talked about as only what you intrinsically get. Doing selfless things isn't without benefit; it's just without direct benefit, and I have seen little ability to quantify it in economic terms.
Wouldn't altruism be getting no satisfaction from increasing someone else's utility but doing it anyway?
Family, tribe, nation, race, world, humanity, sentience.
This is completely unrelated to what economics means by referring to a rational actor. A rational actor is just one who, given a choice, will take the action that produces the all-inclusive result they most prefer. You're talking about someone whose only preference is having the greatest amount of money, but economics contemplates all possible preference sets, which is why it measures benefit in "utils".
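To make that concrete, a rational actor can be sketched in a few lines. This is a toy model of my own, not anything from an economics text; the options and util numbers are invented for illustration:

```python
# Toy sketch: a "rational actor" just picks the option whose all-inclusive
# outcome they most prefer, measured in utils. The preference numbers
# below are made up for illustration.

def rational_choice(options, utility):
    """Return the option with the highest utility for this actor."""
    return max(options, key=utility)

options = ["keep_cash", "donate", "buy_unused_green_good"]

# A "selfish bastard" whose utils track money alone:
money_utility = {"keep_cash": 100, "donate": 0, "buy_unused_green_good": -20}

# A green warrior / charitable giver: same framework, different utils.
altruist_utility = {"keep_cash": 10, "donate": 80, "buy_unused_green_good": 60}

print(rational_choice(options, money_utility.get))     # keep_cash
print(rational_choice(options, altruist_utility.get))  # donate
```

Both actors are "rational" in the economic sense; only their preference sets differ.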
Let me ask you a question. Do you think the word "rational" in rational actor, rational investor and rational market, has exactly the same meaning? Do you think as people commonly understand these terms (I don't mean economists) the answer would be the same?
Do you even mean this as a serious argument? Can your favorite field survive politicians' (politician, from "polis", city, and "-tician", "person": 1. professional liar 2. scumbag 3. one who professionally holds office) handling of their terminology?
Nor can I find it a terribly serious argument that "common people", i.e., people who aren't in the field and don't know the definition, don't know the definition. That's just shy of begging the question, except there's a small sliver of people "in the field" who don't yet know the definitions, called students.
I'm trying my best to unpack your words into some sort of actual argument but I can't find it.
I know you didn't really mean this, but polit/ic/ian -- "polit" is from polis, city; "ic" is a Greek adjective-forming element; and "an" or here "ian" is a Latin adjective-forming element. "Polit" is the only part of the word that bears any semantics (in the etymology). The Greek root for person would be "anthrop", which isn't present.
(The root for "man", an adult male, is "andr", which is why it's so hilarious to Italians that English speakers think Andrea is a girl's name.)
Rational actor and rational investor, the same. Rational market, different. But I only know the term "rational market" through its use in the fixed expression "the market can stay irrational longer than you can stay solvent".
> Do you think as people commonly understand these terms (I don't mean economists) the answer would be the same?
I guess I can't really speak to this.
Ultimately, rational is supposed to mean reasoned, not optimal. Heuristics can be rational if they are applied with reason and devised with sound reasoning.
The other thing is that the limited situations which psychologists/economists measure people in are very artificial, and there's no reason for people to ace the experimenter's metric.
Responding to grandparent: it's not just that people's heuristics are efficient but not quite correct -- they may actually be more correct, and scientists have no way to tell.
"If you judge a fish by its ability to climb trees, it will live its whole life thinking it is stupid."
Have a set of "irrational actions"? Add in information and computational constraints and the necessity of heuristics, exactly the right kind of risk-aversion / novelty-seeking behavior (or play with the second or higher derivatives of your utility function), add in social dynamics like signaling and repeated games (or incorporate mental burdens that negate the impact of repeated games), etc. There exists some formalized universe in which your agent is indeed rational.
Well, a rational actor works to maximise their personal preferences effectively. You've accidentally slipped in an assumption that all people are fundamentally typical selfish bastards with a corresponding set of preferences, but I suspect you don't actually believe that personally, given your invoking of green warriors.
A green warrior or charitable giver is still a rational actor, and is adequately modeled as a normal source of demand. It just happens they need to be acting rationally in the context of a larger system than themselves, or basic law-of-the-jungle effects will optimise them out. Green warriors themselves often talk about preserving the planet so people can live on it, which is clearly a personally rational goal. Charitable people will often say similar things.
(I have a great picture of Claude Shannon's hand holding a literal mechanical mouse: not a modern computer mouse, but a simple robot mouse he built to do maze-solving. I wish I could find it; it's such a nice example of dropping words and names into another context: "Claude Shannon's mechanical mouse" probably means something completely different to what many people think. https://www.google.com/search?q=claude+shannons+mechanical+m... shows things I think it was taken from. It's from a kids' book on computing from the 1960s.)
I think you're missing the value in finding that some things are irrational. Of course no one assumes people make perfect decisions 100% of the time, and of course there are cases where making the better decisions isn't worth it - but it's incredibly useful to know in which cases people usually make "wrong" decisions, in order to, when necessary, find antidotes. Wrong is a synonym for irrational here, and is a placeholder for "contrary to what the person would have chosen to do, had they had all the information".
Take the "planning fallacy" - a pretty well known bias in which people underestimate how long projects will take. Knowing that it exists is what leads people who want the correct answer to use "tricks" to get a more realistic one (like taking the so-called "outside view", and estimating how long a project will take based on how long similar projects took in the past).
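The "outside view" trick can be made concrete in a couple of lines. A minimal sketch, where the past-project durations and the inside estimate are invented for illustration:

```python
# Sketch of the "outside view": instead of trusting your inside estimate,
# look at how long similar past projects actually took. Figures invented.

past_project_durations_weeks = [10, 14, 9, 18, 12]  # similar past projects
inside_view_estimate_weeks = 6                      # optimistic gut feel

outside_view_estimate = (
    sum(past_project_durations_weeks) / len(past_project_durations_weeks)
)
print(outside_view_estimate)  # 12.6 -- roughly double the inside view
```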
Actually, a lot of economists assume exactly that, call it "revealed preference" and then claim that people are lying when they complain about it.
A bias, on the other hand, is when we ask someone if they want to eat 2 or 3 donuts, and they answer 2, but further studies show that the amount they eat depends on, say, the size of the plate in front of them. Then we know there's something influencing the decision which they may not be aware of and which, theoretically speaking, "shouldn't" influence it.
Revealed preferences don't mean that everyone makes perfect decisions all the time. It just means that, if you want to find out what people actually want, then the best way to do it is to observe what they actually do.
That is, given a revealed preference and someone claiming they've identified a bias, I start from the presumption the revealed preference is actually rational and the explanation of the bias is wrong, because, well, the article is clearly correct. I see it online all the time. There's a bias bias, the presumption that if someone explains why something is biased, that's it, case closed, no further analysis needed. I see that bias in myself. It's hard to overcome, which is why I (try to) start by presuming the revealed preference is correct.
People's explanations of their revealed preferences are, by contrast, hot steaming garbage. People are terrible at explaining their reasons. So terrible that I think it actually plays into bias bias, because if you take people's explanations seriously they sound incredibly irrational, so it's merely a matter of trying to explain their irrationality. It seems so fruitful. But while studying patterns in rationalization is probably an interesting topic of its own, studying rationalization shouldn't be confused for studying what people do. In general, I (try to) start out by discarding someone's claimed reasons for doing something, unless they do a very good job of convincing me they're actually good at introspection and have made an at least halfway serious attempt at it.
People are complex psychological creatures, and this whole issue of rational or irrational behavior is compounded by the fact that people can /learn/ system 2 patterns of thinking so well that they become system 1 patterns. Those are essentially habits.
So with your rational mind, it is worth identifying any biases you want to codify in your system 1 brain to minimize cognitive costs. But that's an option. And I think we have that option because there is a giant tradeoff between spending time and energy thinking versus getting a good enough result (out of the many decisions and processes we have to carry out each day).
(With respect to the planning fallacy, I suppose there are benefits to underestimating the time it will take. If projects are really much harder than expected, it may never be emotionally worth starting them because they are too monumental. So we have a bias that makes us more optimistic, so we actually start doing stuff, which helps us in the long run. What I mean is that you can spin it as a good thing. It may be a wrong intellectual assessment, and sometimes that matters immensely, but it is also quite possibly rational for humans to have this bias because it helps them get more stuff done. I'm not a behavioral economist or a psychologist, to be fair.)
The closest that I recall is representing strategies with finite state machines and preferring strategies requiring fewer states. A main difficulty there is mapping strategies to FSMs.
Read: "unwilling to sacrifice theoretical purity in order to build actually realistic models"
The capacity to make rational decisions is a limited resource, so making locally slightly irrational decisions is, globally, the rational strategy. There's a number of predictable ways we make locally irrational decisions, though, and there's a lot of value to be extracted from studying those.
But then there are emotional biases, for example the loss aversion bias. Investors tend to hold on to their losing investments to avoid realizing a loss and sell their winning investments to lock in a profit. This is irrational - investors achieve subpar returns because of this - and often cannot be resolved with education.
Accounting for cognitive or longterm costs - using the concept of bounded rationality - would not make emotional biases disappear or make them invalid.
In agreement, the cognitive cost of being rational all the time is too high, and is, after all is done, irrational.
You say, "Many times they are simply not accounting for cognitive or longterm costs." How do you distinguish those times from the other times, when a person is being rational?
Edit: and here's a great HN comment with more detail on this from 18 days ago, which I just came across - https://news.ycombinator.com/item?id=18988316
Gigerenzer is undoubtedly correct that Bayesian reasoning can be learned by many people. Yet all Kahneman needs, for the implications of his statement that "the human mind is not Bayesian at all" to be relevant in behavioural-economics terms, is for some portion of the population to reach systematically different conclusions from the correct Bayesian one using an inferior heuristic. (A point Gigerenzer essentially demonstrates by citing a study which shows how outcomes improved after gynecologists, already motivated to make correct predictions, were taught Bayesian reasoning.) Perhaps it's sloppy wording on Kahneman's part that Gigerenzer is taking exception to, but the core claim is not that humans cannot learn statistics. It is that, at a population level, some humans will continue to rely on less accurate heuristics which deviate from those predicted by rational-expectations models in a systematic [and predictable] manner, and which are not simply eliminated over time by financial incentives to be correct.
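To make the Bayesian point concrete, here is a sketch of the kind of calculation such clinicians are taught, with illustrative screening numbers (not the figures from the study Gigerenzer cites):

```python
# Bayes' theorem on a screening test. All numbers are illustrative.

prevalence = 0.01      # P(disease)
sensitivity = 0.90     # P(positive | disease)
false_positive = 0.09  # P(positive | no disease)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_disease_given_positive = prevalence * sensitivity / p_positive
print(round(p_disease_given_positive, 3))  # 0.092

# The same numbers as "natural frequencies", Gigerenzer's preferred
# framing: per 1000 patients, 10 have the disease and 9 of them test
# positive, while ~89 healthy patients also test positive -- so only
# about 9 of ~98 positives are true positives.
```

The common heuristic answer is to confuse P(disease | positive) with the 90% sensitivity, which is off by an order of magnitude here.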
The one claim which really ought to be of serious concern to behavioural economists is that their experimental studies frequently mistake random error (acknowledged by 'rational expectations' orthodox economics) for systematic error, so it's perhaps unfortunate that the Kahneman example Gigerenzer highlights in this paper as not being replicable is an abstract demonstration of the availability heuristic with zero direct economic implications.
Indeed, the possible result that only some people are irrational would be very good for the paternalistic policy makers. In fact it would be better than the result that all people are irrational; if all people are irrational, what gives some irrational people the right to guide other irrational people's lives?
Economists are always oh-so-sorry to inform us that some people just can’t run their own lives without the help of the “free” market or the government. So very sorry. Happily they know some technocrats that will bravely shoulder the burden of being Bayesians.
This could also be because it's more "interesting" to come to conclusions like "here's a pernicious irrational bias that most people have" instead of "people are essentially rational within their informational and computational limits".
These limitations are only a concern when people attempt to suggest that behavioral economics is telling us to rework how we live our lives. Generally such claims are unjustified extrapolations from very basic studies which in no way support the pop-economics advice. Like if someone did some experiments and determined that submersing people in water often killed the test subjects, then other people ran to the presses recommending that no one drink anymore because water was proven to kill in scientific studies... :)
Most of the talk of 'rational' vs 'irrational' is because it is in opposition to the classical economics assumptions that people act perfectly rational with perfect information. The more full explanation a behavioral economist would probably give you would be something about how the human mind is optimized for a very different environment and takes short cuts appropriate to that environment that are not helpful in the current environment.
See also: Stereotype Accuracy 
This isn't really that informative, but the strongest form of this argument is "informal statistics is better in a lot of real-world cases." This sounds to me quite a bit like the case the behavioral economists make -- that these intuitions aren't crazy, or stupid, but they do exist. They were constructing tests specifically to isolate them, and succeeded.
A further point is that these kinds of situations, when intuitions are challenged, happen regularly. I don't know what behavioral economics says specifically about causes, but it seems to me they take a fairly neutral view: such situations can occur by chance, due to hostile intent, or by mistake, and the outcome can be positive or negative. The point of "nudge policy" (opposition to which seems to be the main point of the article) is to try to make these situations positive -- that is, to arrange the world through policy such that acting on intuition actually yields positive outcomes. I don't think the authors argue successfully that, since everyone is rational and no one ever gambles when presented with the opportunity, arranging policy such that intuitive choices yield positive outcomes is therefore unwarranted interference.
Is it me, or does this sentence (from the abstract) seem very non-academic? It uses the first person, and seems rather political.
The abstract is putting forward a very important point in light of the environment that has developed where governments are expected to 'fix' the economy when a 'crisis' occurs. Both of those words are highly, highly political and also central to the practical branch of modern economics.
The core of this abstract is that there is evidence people have 'fine-tuned intuitions about chance, frequency, and framing' compared to what is currently believed by economists. If that is true, then that should have a bearing on the quantity and quality of regulation being recommended by economists.
It is more efficient to go to the gas station and fill your tank all the way. If you do, you are less likely to accidentally run out of fuel somewhere, you can choose where to fill up for the best price, and you do not waste time that could be devoted to other profitable endeavors.
So why is it rational for the poor to only put in a few dollars at a time? It is not often the case that they don't have enough money to fill the tank; No, they don't fill it all the way up because come end of the month, gasoline in the tank is not as liquid as cash, and other needs may arise. Filling your tank is a statement about the predictability of your financial needs and the availability of credit to offset hard needs.
Poverty entails non-optimal choices.
I'd say a similar phenomenon has started happening in Western Europe, too; see the recent "gilets jaunes" movement. The French Government wrongly presumed that increasing the cost of gas by a few euro-cents wouldn't matter, because, like you said, they thought that "if you can afford a car, you are not poor". But reality bit them in the posterior: lots and lots of poor French people have had to move out to the suburbs or even exurbs, because the downtown areas of cities like Paris or Bordeaux are expensive as hell, so the poorer masses had to go wherever real estate was cheaper. And when you live 30 to 50 km outside of Paris, you do need a car.
Sure if you took their several dollars to another poorer country they would be considered well off, but they can't get there and spend it. They are here, paying for food and lodgings that take almost their entire income.
Economic stress of living small paycheck to small paycheck is poverty.
There is a gigantic qualitative difference between living "paycheck to paycheck" in a western country with enough money to afford a car (not to mention all the other benefits that come with living in a first-world country: top health care, pensions, unemployment insurance, working primary and secondary schools, low corruption, etc.) and the crushing poverty one finds in poor countries: no streets, no schools, civil war, high corruption, no running water, no electricity, etc.
What cognitive benefit do we gain from conflating two entirely distinct phenomena? It's like calling both a heart attack and the flu "a heart attack" because both are unpleasant.
Once more, I invite you to spend some time in very poor countries.
For example, the anchoring bias is actually a useful heuristic in many situations. If you are asked the length of the Nile and a third person guesses 5000 miles, you would probably guess closer to 5000 miles to integrate the third person's information, which is usually somewhat informative.
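That heuristic even has a rational gloss: if you treat the other person's guess as noisy evidence, a precision-weighted average of the two estimates is the statistically sensible combination. A toy sketch with invented numbers:

```python
# Treating an anchor as evidence: combine two noisy estimates by
# inverse-variance (precision) weighting. All numbers are invented.

my_guess, my_variance = 3000.0, 1000.0 ** 2        # miles; I'm quite unsure
their_guess, their_variance = 5000.0, 1500.0 ** 2  # the "anchor"

w_me = 1 / my_variance
w_them = 1 / their_variance
combined = (w_me * my_guess + w_them * their_guess) / (w_me + w_them)
print(round(combined))  # 3615 -- pulled toward the anchor, rationally
```

The manipulation case below is different precisely because the marketer's anchor carries no real information about the price.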
The bias comes when an outside marketer manipulates you by saying "how much should the iPad cost? $2000? No! $1000? No! It's only $500!".
Freudians and Marxists were promoting what was, in his view, scientism: a dangerous middle ground where theories and studies appear scientific but aren't.
These two fields (psychology and economics) are still having a lot of problems with that middle ground. On one hand, they aren't willing to cede the science label and join the other humanities, like history and philosophy. On the other, very few of their important theories are scientific. That is, the big debates in economics and psychology are about theories that will never be tested; they'll fall into and out of fashion due to anecdotes and other intellectual trends.
The Keynesian paradox of thrift, Friedman's monetarism... these will never be tested, confirmed or denied. They are likely to fall out and maybe back into fashion. Neither will historical materialism, etc.
Same exact thing in psychology.
It's a more aggressive conclusion than I'm happy with but.. behavioural psychology is pseudoscience.
This sentence in the abstract is probably the most revealing about the goals and yes biases of the article’s author: “meaning that governmental paternalism is called upon to steer people with the help of ‘nudges.‘“
In other words this isn’t really about examining whether the field is productive on its own terms but it’s about picking a political fight under the guise of a methods critique. The author seems convinced that behavioral economic results demand governmental intervention, and so he attacks the premise of the entire field.
Maybe better material for a polisci journal.