
Epistemic learned helplessness (2013) - ikeboy
http://squid314.livejournal.com/350090.html
======
danteembermage
If I find myself being convinced by the argument, does that mean I should
adopt Epistemic learned helplessness in response or not adopt Epistemic
learned helplessness in response?

I'm partly being facetious, but it would be interesting to try to use a non-
argumentative approach to persuade someone to adopt one; I'm just not exactly
sure what that would look like.

As a kid I remember being told "brush your teeth in circles, it's better" and
thinking "I'm sure something else will be recommended in ten years, so I'm
just going to go back and forth horizontally like I want to", and sure enough,
circles clean more plaque but push your gums up, so downward flicks were later
recommended. Maybe a dentist can weigh in on current tooth-brushing
practice... That said, was I better off with my inferior method? That's kind
of the crux of it. If we're blown about by every plausible theory, is that
better than being blown about by nothing? It seems like this is a nested
Bayesian decision problem that needs to incorporate switching costs, which I'd
guess are trivial for some things and quite large for others.
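
To make that concrete, here's a minimal sketch of the kind of switching-cost
calculation I have in mind (all numbers are made up purely for illustration,
nothing here is from the article):

    # Toy expected-value check (made-up numbers): is switching methods worth it?
    p_new_really_better = 0.6   # how much I trust the latest recommendation
    benefit_if_better = 5.0     # payoff if the new method actually is better
    harm_if_worse = 3.0         # payoff lost if it turns out to be worse
    switching_cost = 1.0        # habit-breaking, relearning, etc.

    expected_gain = (p_new_really_better * benefit_if_better
                     - (1 - p_new_really_better) * harm_if_worse
                     - switching_cost)

    print("switch" if expected_gain > 0 else "stay put", expected_gain)

With these numbers switching barely wins (expected gain 0.8); shrink the
benefit or raise the switching cost a little and staying put wins, which is
exactly the "blown about by every plausible theory" worry.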

~~~
greendestiny
I've thought down these paths a lot, and I came to a similar but different
conclusion. It's not simply that I don't trust argument - I weight arguments
with evidence.

If you take the simulation argument for instance - if you're contemplating the
probability that we are living in a lifelike simulation you must factor in the
number of lifelike simulations you have seen.

~~~
mtdewcmu
>> you must factor in the number of lifelike simulations you have seen

Right. I take the position that there are problems that are unsolvable by
technology; simulating the universe seems likely to be in that category. That
position is fundamentally unprovable, but call it a hunch. Also, Occam's Razor
argues against it, FWIW.

~~~
eru
> I take the position that there are problems that are unsolvable by
> technology; simulating the universe seems likely to be in that category.

Why?

~~~
mtdewcmu
It's hard to make a compelling argument for why a particular problem will
never be solved. However, I think it's a reasonable supposition that such
unsolvable problems exist. Resurrecting the dead, for instance. In order to
faithfully simulate the universe, you'd apparently need complete understanding
of the laws of physics. And, you'd probably need a tremendously large (and
presumably very power-hungry) computer to run the simulation. There's no
particular reason to think that physics will ever be completely understood,
and the computer might need more power than all the stars in the universe can
supply.

Again, it's extremely difficult to argue compellingly for why something can't
be solved: even if it seems to be ruled out by known laws of physics (like
traveling faster than light), you can always argue that new laws will be found
someday that will be more favorable than the current ones. That argument never
terminates, because you don't know what you don't know.

~~~
eru
Oh, I am happy to accept the speed of light as the limit.

I was more thinking about (hypothetically) using the same techniques as a
quine to store a simulation of the universe inside a universe.

~~~
mtdewcmu
I have no idea how that would work.

------
aharrison
That is an excellent essay and it scares the crap out of me, because I see it
play out every day in both directions and it often does feel exactly like
epistemic learned helplessness.

There are so many biases that affect individuals (hindsight, hyperbolic
discounting, confirmation, etc) that to get correct answers, you have to look
at and trust other accounts. You have to do studies, you have to trust those
studies, and you have to track actual hard evidence.

But large enough systems become increasingly opaque, and it can be hard to
test your web of trust empirically. This reduces huge organizations to cargo
cult behavior, or worse, fundamentally unsound behavior because "it worked for
me."

I don't know what to do about this besides what CFAR is trying, which is to
get people better at weighing evidence and changing their damned minds. You
have to constantly be asking yourself: "what would the world look like if this
were true? What would the world look like if it weren't?" It is hard work, but
I think it is critical to our continued improvements as a civilization and a
species.

~~~
CarVac
I don't know if the right answer is to be generally more open to changing your
mind.

There are far too many things to be worried about: the danger of AI, religion,
the environment, other political views, the results of scientific papers in
dozens of fields... epistemic learned helplessness is a defence against
wasting literally all of your time taking every argument seriously.

At the end, the author notes that we should be glad for the specialists who
are well-versed enough in certain subjects to actually be able to evaluate and
experiment on outlandish theories and unusual study results.

Maybe we need to emphasize more specialization? Or is it working alright
already?

~~~
tbrownaw
The other solution is to not really care all that much about most things.

There's some new physics result about how black holes interact with the
quantum foam? I can't build anything practical with it, so beyond "that's
interesting" it doesn't really matter to me.

Some food is suddenly found to cause cancer in rats? Unless it's fairly new,
the effect on humans can't be all that strong or we'd have noticed by now.
Something else will probably kill me first (like whatever the medical term for
"old age" is these days).

Someone's claiming that some particular aspect of modern diet causes obesity?
It would have to either make you feel lethargic (lower calories out) or make
you hungrier (raise calories in). Both of which are directly observable and
easily correlated to the contents of recent meals, without having to argue
over mechanisms and confounders and personalized gut bacteria and such.

There's a new higher estimate for the percent of scientific results that are
fake or poorly done or just statistical noise? This does help for knowing how
not-seriously to take everything, but beyond "that's too damn high" the exact
numbers aren't all that relevant (unless you're testing a fix).

Someone wants money to research AI risk? Are they looking at how to build safe
AI, or how to prevent anyone either carelessly or deliberately building unsafe
AI? Only the second is worth looking into further.

~~~
puredemo
> Someone's claiming that some particular aspect of modern diet causes
> obesity? It would have to either make you feel lethargic (lower calories
> out) or make you hungrier (raise calories in). Both of which are directly
> observable and easily correlated to the contents of recent meals, without
> having to argue over mechanisms and confounders and personalized gut
> bacteria and such.

This is extremely facile. Take trans fats scarring arteries and causing
systemic inflammation, for instance.

------
JesperRavn
I think the author makes some good points, but the term "learned
helplessness", together with the arguments made, implies that if reasoned
arguments can't be trusted then nothing can. A better characterization is that
people use other ways of thinking, such as intuition, emotion, etc., in
addition to reason.

Any expert in a field other than pure math will tell you that in order to
evaluate an argument or evidence, you need experience and intuition, not just
argumentation. Arguments are not a formal process that can definitively arrive
at conclusions. The same applies to mathematical models. Economists often
complain in private about the need to dress up their ideas as mathematical
models, even when the model adds nothing to the discussion. This is especially
annoying in empirical work, when no one is actually interested in the model,
just the empirical results.

------
mbubb
Great read and interesting discussion. Reminder of how good HN has been over
the years.

The idea of "institutional learned helplessness" is a useful one. There is a
French term which I will mangle instead of googling: 'déformation
professionnelle' - built-in blind spots which come from having T-shaped
expertise...

I've been part of a hiring team for a sysops person and found this of
interest:

[http://www.heavybit.com/library/video/2015-02-24-charity-majors](http://www.heavybit.com/library/video/2015-02-24-charity-majors)

Overall the talk is ok - the point which stuck with me was the idea that you
should look for a sense of institutional helplessness and filter for it when
you hire Ops folks.

The speaker also said to look for folks with 'strong opinions which are
lightly held' - a double-edged sword.

In a devops role there is an internal debate. Should I press point "x" ahead
on principle? Or should I hold off and wait on the business/product
requirements? Thinking slow and waiting on a full discussion of requirements
before charging forth into battle can be seen as good process or as
passivity...

At work we were having a discussion on where and how to involve QA processes
in the build/deploy process. Devops took the stance "this is a business
decision - we will implement it as product sees fit" and we got pushback for
being too passive.

But this is a trivial example. The bigger issues are the ones which inspire a
"What can I do about it?" response. Recent readings on the surveillance state
and corporate interests (i.e. Bruce Schneier, etc.) make me vacillate between
deep, helpless feelings that we are FUBAR and learning about constructive
things like GPG, Tor, etc.

~~~
ArkyBeagle
On "Should I press point 'x'", do a divide and conquer on it. Try to reduce
the number of degrees of freedom from that decision point to a minimum.

IMO, you have to live with the ambiguity, but try to craft strategies that
cover both with minimal ... perturbation. Waiting helplessly is not an option
:) If all else fails, strawman the least cost solution first.

I also think there's value in having some measure of dissent from within the
team. Done right, you'll synthesize solutions you would not otherwise have
come up with. But you want this to be without rancor and without sinking large
amounts of time.

On "... where and how to involve QA processes in the build/deploy process" \-
make a complete list of steps, then balance which gets included as the risk
profile emerges. Good risk analysis includes a comprehensive and complete plan
for what happens when a step fails.

Finally, you simply have to manage perception. Quantify, quantify, quantify.
As if you don't have enough to do....

------
nabla9
This is related to things that people like Robert Axelrod have been studying.
It's the innovation-vs-imitation dilemma in social learning and evolutionary
game theory.

Science. 2010 Apr 9;328(5975):208-13. doi: 10.1126/science.1184719. Why copy
others? Insights from the social learning strategies tournament.
[http://www.ncbi.nlm.nih.gov/pubmed/20378813](http://www.ncbi.nlm.nih.gov/pubmed/20378813)

Related xkcd [http://xkcd.com/1170/](http://xkcd.com/1170/)

~~~
fa
Like all good philosophy, this idea has a long history. I encountered it in
Taleb's Black Swan as Pyrrhonian skepticism, which flourished in Greece as the
Aristotelian Academy. Very down to earth and actionable advice on living and
doing. Many Pyrrhonians were doctors and that colored their philosophical
views—at a time when philosophy was taken very seriously and meant more than
it does today (see it as a precursor of today's science).

------
tgb
The author is now blogging at:
[http://slatestarcodex.com/](http://slatestarcodex.com/) It's really one of
the most consistently interesting blogs I read. Check out the top-posts
section to see if anything catches your eye: [http://slatestarcodex.com/top-posts/](http://slatestarcodex.com/top-posts/)

~~~
TeMPOraL
It's one of my favourites too.

There've been a few lists of his best posts created by readers, e.g. [0] and
[1].

Personally, my absolute favourite is Meditations on Moloch [2]. It's long, but
it has high concept density and highlights very important causes of problems
in the world that many people miss entirely when discussing them.

[0] - [http://nothingismere.com/2015/09/12/library-of-scott-alexandria/](http://nothingismere.com/2015/09/12/library-of-scott-alexandria/)

[1] -
[http://lesswrong.com/lw/mmg/yvains_most_important_articles/](http://lesswrong.com/lw/mmg/yvains_most_important_articles/)

[2] - [http://slatestarcodex.com/2014/07/30/meditations-on-moloch/](http://slatestarcodex.com/2014/07/30/meditations-on-moloch/)

------
n0us
Some of the longest papers I've ever written were in support of conclusions
that I don't even believe in. In my opinion they ended up being some of my
best papers as well. The point wasn't to write in support of what I believe in
or what I want to write about but what I think I can present the strongest
argument for.

------
paulsutter
> given that at least some of those arguments are wrong and all seemed
> practically proven, I am obviously just gullible

Not gullible, he just needs to read Kahneman's "Thinking Fast and Slow".

The intuitive mind is unable to distinguish between consistency and truth.
That is, a consistent story will always seem "proven" to the intuitive mind.

"Seeming practically proven" means little. You need to use the more logical
part of your mind to crosscheck it.

~~~
ThrustVectoring
I'd be very surprised if he hasn't read Kahneman. And even if he has, knowing
about cognitive biases is a completely different skill than avoiding them.
It's basically why the Center for Applied Rationality exists.

~~~
paulsutter
Well if he read it, he certainly didn't understand it.

> When I was young I used to read pseudohistory books; Immanuel Velikovsky's
> Ages in Chaos is a good example of the best this genre has to offer. I read
> it and it seemed so obviously correct, so perfect,

This guy believes in PROVING PSEUDOHISTORY. It's hard to understand what that
even means, but let's look at specifics:

"Noah's Flood couldn't as a cultural memory...

\- of the fall of Atlantis

\- of a change in the Earth's orbit

\- of a lost Ice Age civilization or

\- of megatsunamis from a meteor strike."

A generous person could give these a 1% chance of being right. Maybe a 5%
chance if you had a very convincing argument.

A gullible person might give one a 30% chance of being true.

But it is utter nonsense to assign 100% probability to any of these (that's
what proof means, it means 100% likely). These just aren't provable matters.

Circling back to Kahneman, it really seems that he's getting persuaded by
intuitive arguments, and the book "Thinking Fast and Slow" really dives into
how this happens.

~~~
TeMPOraL
> _This guy believes in PROVING PSEUDOHISTORY. It's hard to understand what
> that even means, but let's look at specifics_

No, he does not. He only said that a good book proving pseudohistory sounds
totally convincing to a history layman like him, and so does a book disproving
the previous book. Since he can't tell what is false from what is true without
putting a huge amount of effort into studying history (no one has time to
study _everything_ in that amount of detail), he concludes that his only way
to stay sane is to ignore both arguments and stay with the general scientific
consensus.

> _But it is utter nonsense to assign 100% probability to any of these
> (that's what proof means, it means 100% likely). These just aren't provable
> matters._

That's your implication; it is nowhere written in the text. The guy hangs out
in rationalist circles, he knows better than to assign P=1.0 to stuff. I know
because I have read quite a lot of his articles.

~~~
paulsutter
I don't see how you're replying to my core point, which is that getting
"totally convinced" is a function of the intuitive mind, and the intuitive
mind will be totally convinced of anything that is consistent.

Could you help me understand the difference between "totally convincing" and
proving? If you are "totally convinced" of something, isn't that a p=1.0?

Whether the topic is history or pseudohistory, words like "true", "false",
"prove" and "disprove" really don't belong. Those words are fine in casual
conversation, but they can't be part of the thinking of a serious person.

~~~
TeMPOraL
> _I don't see how you're replying to my core point, which is that getting
> "totally convinced" is a function of the intuitive mind, and the intuitive
> mind will be totally convinced of anything that is consistent._

Well, I think that was actually his point - that if you're not an expert on a
topic and don't have the explicit knowledge to counter your intuition, every
relatively consistent set of arguments will sound convincing.

> _Could you help me understand the difference between "totally convincing"
> and proving? If you are "totally convinced" of something, isn't that a
> p=1.0?_

No, it's not, and assigning probabilities of 0 and 1 to anything is a _very
bad idea_. See:
[http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/](http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/).
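
To sketch why (a toy example with made-up numbers, not from that article):
under a plain Bayes update, a prior of exactly 0 or 1 never moves, no matter
how strong the evidence, which is why treating something as "proven" (p=1.0)
amounts to swearing off ever updating on it:

    # Toy Bayes update via odds (illustrative only).
    def update(prior, likelihood_ratio):
        if prior in (0.0, 1.0):
            return prior  # odds are zero or infinite; no finite evidence moves them
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    print(update(0.9, 1/100))  # strong counter-evidence drags 0.9 down to ~0.08
    print(update(1.0, 1/100))  # "proven" stays at 1.0 forever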

Otherwise, apart from pointing out that we have intuitive and explicit parts
of reasoning that often oppose one another, I'm not sure what you're getting
at.

> _Whether the topic is history or pseudohistory, words like "true", "false",
> "prove" and "disprove" really don't belong. Those words are fine in casual
> conversation, but they can't be part of the thinking of a serious person._

Yes and no. They are shorthands. Unlike in maths, in the real world you can't
assign an absolute truth value to anything, but you can still be _really,
really sure_. If I jump out of a fourth-floor window, I will hurt myself. That
is true. Arguing that it's not true because you can't be absolutely,
positively, 100% sure is just pointless arguing about words, a failure of
communication. Cf. [https://xkcd.com/1576/](https://xkcd.com/1576/).

~~~
paulsutter
> assigning probabilities of 0 and 1 to anything is a very bad idea

THAT'S MY WHOLE POINT. Thank you. When a person says something is
true/false/proven/disproven, that person is assigning a probability of 0 or 1
to it.

>
> [http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/](http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/)

I nearly cited this exactly. I love that article because it clearly spells out
what's disturbed me for years about cloudy thinking.

When someone says a matter has been "proven" or "disproven", they are saying
the matter is closed, and they feel no need to update in the future.

It's not pointless arguing. That's really what the words mean, and that's
usually what people intend when they use those words. Other uses are misuses.
It's perfectly OK to misspeak in casual conversation, and we're all obliged to
understand each other in the most charitable way possible. On the other hand,
when someone consistently speaks in a way that is sure to be misunderstood,
it's helpful to point this out.

And seriously - Kahneman made the overall point more clearly, with a practical
antidote we can use every day. Being "totally convinced" is a clean indicator
that our intuitive mind has latched onto consistency. And that's a very useful
cue for introspecting on that very conclusion.

~~~
TeMPOraL
Ok. So I guess we miscommunicated, and have been thinking about the same thing
all along. :).

As for your original comment - the author of this article is none other than
Yvain from LW, whom you will most likely recognize if you read stuff on that
site. You can safely assume that he knows all this stuff we discussed and
(being a practicing psychiatrist) is aware of Kahneman. Knowing this, and
taking into account his typically casual style, I invite you to re-read the
article while applying the principle of charity, and see if your original
criticism still applies.

------
noonespecial
Feels a bit like "anecdotes are not data"; "arguments are not evidence"
perhaps?

------
thom
"Well, you can prove anything with facts, can't you?"

[https://www.youtube.com/watch?v=4n-UGQcG3Jw](https://www.youtube.com/watch?v=4n-UGQcG3Jw)

~~~
JesperRavn
Well, that's a good example of what the original argument was talking about,
since the comedian's argument was terrible (e.g. by the same logic we should
not say that slavery is immoral, because the ancient Greeks practiced
slavery).

------
mbubb
It is interesting that Pascal gets cited in this. With Pascal's Wager there is
no leap really. It is a very internal idea - an individual's relationship with
God. "I might as well believe; there is no risk if I do so. If I am right
there is a great reward. If I am wrong the result is what it would be anyway."

This is what I believe is the distinction between atheism and agnosticism:
belief vs. a pragmatic ethic.

Pascal is identifying a thoroughly modern disconnect between what we privately
might think and believe and how we act in public. His idea is radical because
that split would probably not have been conceived in the same way before. Over
time this idea would become more socially grounded. Weber in "The Protestant
Ethic and the Spirit of Capitalism" would put forth a similar idea in terms of
'visible forms of God's grace'. Thus I am more blessed because there is an
Audi A8 in my driveway and you are driving a Jetta...

Another perhaps more relevant case is Kierkegaard in "Fear and Trembling". I
am probably misremembering political theory classes from 20+ years ago, but
the idea I retained is:

Think of Abraham and Isaac. In a nutshell, Isaac is the much-loved (and only)
son, and God tells Abraham to sacrifice Isaac. He takes his son up to the
mountaintop and builds an altar. At the last second an animal is substituted.
(Hmmm, it just occurred to me that Miyazaki could do an interesting treatment
of this.)

The key idea is Abraham's resignation - which is different from passivity. He
is willing to follow through. He hears God's voice telling him to do something
batshit crazy. Because it is God, he is willing to follow it through to its
logical end. He has belief, and resignation to God's will is the first step of
that belief.

Now, to the outside world Abraham is the worst kind of criminal. You would
imagine him thrown in a dreadful prison, with the guards looking the other way
while that staple of TV dramas, prison justice, is meted out. That doesn't
change.

And Abraham doesn't even know if he is hearing God or if he is just crazy. And
no one else knows either.

Back a decade or so ago when Intel's 'Trusted Computing Platform' was cause
for debate, I remember someone saying "Just because Richard Stallman is
paranoid doesn't mean that Microsoft is not after you" which I thought was
hilarious and true and adopted it as my email footer for a while...

When I think of Stallman, I have enormous respect for the man, for his
beliefs and his courage in following out his convictions. In my everyday life
I use a Mac and install nonfree packages on my Linux servers, so my actions
are not consistent with that respect. It is too hard for me to browse the web
in Emacs, and I enjoy Twitter and the various inane 'Distractions from
distraction by distraction' that are the internet.

I think Stallman is more right than wrong. I also believe, in a
Kantian-imperative sense, that we would all be better off if we aligned our
ideas of freedom and use of technology with his. Without his ideas, and the
actions which brought those ideas to life, we would be in a very different
situation.

But I am much more 'pascalian' in my everyday life choices and actions.

~~~
ArkyBeagle
But if it is indeed Stallman v. Gates (which I doubt), then Gates won the big
war, while Stallman continues to win small skirmishes.

This is just all too convenient and narrative for my tastes. IOW, "It's
complicated." That's my epistemic learned helplessness.

------
Kenji
> _This is the correct Bayesian action, by the way. If I know that a false
> argument sounds just as convincing as a true argument, argument
> convincingness provides no evidence either way, and I should ignore it and
> stick with my prior._

Wrong. The correct scientific methodology would be to store the new
conflicting theories in your head while being unsure whether or not they are
true (withholding judgement); later on, if new evidence sheds light on the
issue, you can discard or keep them. The silliest thing to do is what this guy
advises: persisting with arbitrary beliefs because they came first. That's how
religion works. Despicable.

I don't like how this guy treats "high school dropout" as an example of
someone particularly stupid and naive. It sounds elitist and smug and detracts
from the main point of the text. It reminds me of lesswrong.com.

> _I've heard a few good arguments in this direction before, things like how
> engineering trains you to have a very black-and-white right-or-wrong view of
> the world based on a few simple formulae_

Oh no, nothing is further from the truth. In engineering, you have to make a
lot of subtle design decisions that aren't black-and-white at all. Difficult
tradeoffs like cache invalidation, where there's no right answer in general.

All in all, a subpar article that goes on way too long and could be shortened
to a couple of lines.

------
quietplatypus
Guy makes a shitload of basic philosophical errors, like most
rationality-training people. They are all centered around conflating logic
with reality and ignoring computational costs. If you can't connect a theory
to reality in a way that is rooted in specific, observable, actionable things,
without requiring a strong AI with O(1) time complexity for synthesizing
responses, the theory is a nonstarter. If he were more aware of this, he would
never get close to buying into claptrap like Pascal's wager or Pascal's
mugger. So what if you believe in God or not? What does doing that or not
actually get you?

It's amazing how many of these "rationalists" are using the concept as an
emotional band-aid against their resentment and other uncomfortable feelings.

And it's not about compartmentalization. Ideas vary in their usefulness or
actionability. Sure, it takes instinct to distinguish them. But that is what
"rationalists" are scared to death of: that there are things like instinct and
unconscious processing which, honed right, are often much superior to their
laborious, droning but logical attempts to count every outcome, but which
require faith in oneself.

It's really just a huge inferiority complex that they have. They are so
insecure that they need universe-circumscribing systems of Bayesian logic to
do anything. The truly rational thing to do is to put down the fucking
rationality handbook, stop running the fucking expected-value computations, go
outside, live life, dare the world to correct you, and get stronger or die
trying. That's it.

------
mml
This guy sounds absolutely insufferable.

~~~
jdjdirn
He sounds like a guy I'd love to have a beer with. We might just be able to
understand each other :)

------
bsder
The problem is that you shouldn't intrinsically believe arguments, at all.

You should believe _facts_. You should believe logical inference _from_ facts.

You should believe arguments insofar as they corroborate or refute facts.

And, if an argument can't be tested, it's not a very good argument.

~~~
jdjdirn
Lol. And what precisely is a fact, but an argument about the state of the
world being a specific way?

~~~
bsder
> And what precisely is a fact, but an argument about the state of the world
> being a specific way?

Thankfully not. A "fact" is something that can be measured in such a way that
even if you are totally antagonistic to my arguments, we will get the same
result.

Now your "argument" may attempt to prove the irrelevancy or inadequacy of my
facts, but the facts themselves should be unfudgeable.

~~~
afarrell
The problem is that it is far far too expensive to rigorously verify all the
facts one hears. So in practice humans need to rely on the trustworthiness of
the person or institution saying them.

------
thuuuomas
"Rationality" is the new "atheism".

All I got from this article:

"Sometimes, it's best not to commit to a position when you don't have all the
facts (or if 'the facts' are essentially unknowable). But it depends!"

OP has a lot of ten-dollar words, tho.

So, what'd I miss?

------
illivah
Poor guy. He reminds me of Leshrac from Dota 2. Lots of deep philosophy, leads
to nihilism.

Here's one of the multiple big problems. Learning that arguments are often not
valid is an amazing insight! However, it says nothing about what you SHOULD
believe, relative to the information you have.

The wise choice is NOT to believe based on your preconceptions (i.e. ignoring
the stronger arguments). Better arguments are better than worse arguments.
Mediocre evidence is better than better arguments. And better evidence is
better than worse evidence. And since better evidence is not a guarantee, we
measure it by the efforts made to disprove it.

So, let's take an example: the conspiracy theories about the World Trade
Center attack. First we have the official story, which we more or less know.
Well, some people know it, at least. Then we have the video Short Change,
which spends a couple of hours describing all the points where the official
story doesn't make sense. It talks about physics, and background deals, and
molten this and that. A lot of interesting stuff. It raised a lot of
questions. It didn't provide detailed or reliable evidence, so more research
was needed. So I watched the rebuttal video, which was twice as long. They
provided evidence, with citations, from experts in their respective fields.

The part that was convincing wasn't "oh, this argument is right, this one is
wrong". The part that was convincing is that I can now say "well, according to
so-and-so, this is explained, and so is this, and so is this, and so is this,
and so is this" down the line. Now, in this case there was basically nothing
left at the end of the argument, but even if there were, I would then consider
the sources. One source says xyz is still unexplained, but that source is now
known for making things up and/or not knowing what they're talking about. The
other source cites dozens of experts with detailed explanations, photo
evidence, and clear walkthroughs.

It's not "oh, this argument is better" anymore. Now it's "well, this one
literally has no reason to believe it, but this one has a great deal of
support."

