
All models are wrong, but some are completely wrong - magoghm
https://rssdss.design.blog/2020/03/31/all-models-are-wrong-but-some-are-completely-wrong/
======
grawprog
What I've noticed about models, or at least when people are talking about them
or trying to prove a point about them, is that people forget models are a
simplified version of a specific part of reality, much the same as models of
airplanes or something.

No matter how many variables you include, you can never capture the utterly
massive and unpredictable number of variables that exist in reality. But
they're not supposed to. They're tools meant to help you get an idea of how
something might happen, an idea that's only marginally better than just
guessing, since models are always incomplete despite any amount of best
effort.

Like any tools, using models incorrectly, fucking around with them until you
just get what you want to see or using the wrong one for the wrong task can be
potentially dangerous, especially when those models are used to guide large
decisions with serious consequences and impacts.

Reasoning and common sense need to be used alongside models. If a model
doesn't seem to reflect reality, the problem is with the model, not reality,
and the model needs to be adjusted or thrown out; otherwise, continuing to
use it will just lead to lousier and lousier decisions.

~~~
BiteCode_dev
Humans have a tendency to think they understand something once they have a
label for it. It helps them make decisions faster, whether those decisions
are good or bad.

If you tell people you don't know what causes their respiratory illness, they
may think you are incompetent, and feel helpless.

If you tell them they have acute bronchitis, suddenly they feel empowered. It
just means they have an inflammation of the bronchi. It says nothing of the
direct cause, nor the indirect cause: a virus, tobacco, dust, air pollution or
asthma. It says nothing about how bad it is. They don't even know really what
it implies for their body or their treatment.

But now they think they know.

Call something a democracy, and nobody will check if it is.

Call something a privilege, and people will start to want it, working hard for
it.

Models can be used to help you interact with the world. But they are also
used to avoid interacting with the world: if you use them as labels, instead
of comparing the model with reality, you get a shortcut for making decisions.
The model becomes a mere name used to justify a decision process instead of a
tool for the decision process.

~~~
earthboundkid
Medicine is super-bad at telling a symptom from a disease. A lot of
"diseases" are just symptoms with Latin names and a set of empirical
remedies.

Psychology in particular seems to have come to the conclusion that if you can
name an assortment of 4 out of 10 symptoms, that counts as having identified
a single disease.

I think part of the problem is that we're so good at applying various
techniques that no one bothers to think anymore.

~~~
Nasrudith
Well, historically the symptom effectively /was/ the disease, without imaging
to meaningfully separate "appeasing the spirits by ritual boiling" from
"killing microbes". To go with a niche mixed metaphor, part of the issue is
low epistemological resolution. If the extended "senses" and knowledge are
lacking, everything is imprecise.

One path of medicine where that sort of symptom-vs-disease separation becomes
practical is phage therapy. It could be called true alternative medicine, an
actual alternative rather than a euphemism for "false or unproven", but it is
generally less practical in spite of its other virtues, because a proper
phage needs to be selected per target pathogen, as opposed to not caring what
strain the pathogen is and just treating the symptoms so the patient can
recover and not die.

------
nitrogen
_Would we encourage an epidemiologist to apply ‘fresh thinking’ to the design
of an electrical substation?_

Yes, absolutely. If an epidemiologist identifies and models a trend in human
disease around substations, or a trend in failures of substations, or a new
way of modeling the ways electrical demand can change over time and influence
demand in other times and places, etc., then their input should absolutely be
considered.

Just as it's annoying when a total outsider claims to know everything about a
field, it is equally problematic when insiders refuse to acknowledge anyone on
the outside.

~~~
salty_biscuits
That rubbed me up the wrong way too. Applied mathematics is really quite
transferable across disciplines; you just usually don't have the domain
knowledge to stop you from chasing dead ends or reinventing the wheel.

~~~
Avamander
Not to mention that experts in their fields sometimes miss common knowledge
from other areas. Like the medical researcher who reinvented integration via
the Riemann sum:
[http://care.diabetesjournals.org/cgi/content/abstract/17/2/152](http://care.diabetesjournals.org/cgi/content/abstract/17/2/152)
([https://fliptomato.wordpress.com/2007/03/19/medical-researcher-discovers-integration-gets-75-citations/](https://fliptomato.wordpress.com/2007/03/19/medical-researcher-discovers-integration-gets-75-citations/)).

------
smitty1e
One feels that, irrespective of the models, the data in the covid-19 case may
be unusually bad.

It may be time to add a third error category[1]:

I. False positive

II. False negative

III. Deliberately skewed off the map for propaganda reasons.

[1]
[https://en.m.wikipedia.org/wiki/Type_I_and_type_II_errors#Ty...](https://en.m.wikipedia.org/wiki/Type_I_and_type_II_errors#Type_II_error)

~~~
YZF
The poor quality of the data is driving me crazy ;)

You'd want health organizations around the world to be publishing every
possible detail (anonymized) so that the disease can be better understood.
Yet three months in, with over a million cases worldwide, we still have
experts disagreeing about things like asymptomatic transmission, use of
masks, droplets vs. aerosols, how much distance one should keep from another,
viability on surfaces, etc. Even for treatment options, rather than insisting
on randomized double-blind trials, we could start by using the natural
experiments that are already happening.

We should have the data to answer a lot of these questions (or at least draw
out some probability distributions), or at least someone has it. This stuff is
going to be critical in informing exit strategies.

~~~
thaumasiotes
> You'd want health organizations around the world to be publishing every
> possible detail (anonymized)

If you're publishing every possible detail, then your data isn't anonymized.
Anonymization consists of the removal of almost all of the details.

~~~
smitty1e
Or even if the data are reasonably tidy, they might expose more when
juxtaposed with another dataset.

------
timkam
An observation from inside academia: there are some (potentially many)
researchers who use the opportunity to push obscure models and simulations
from their research sub-subdomains, even if they have no knowledge of public
health or epidemiology, nor strong past real-world success stories for the
methods they are advertising. Sometimes this happens with a "talk to the
media first" approach (before subjecting their work to the scrutiny of actual
domain experts, or to any form of peer review), which is certainly dangerous.

------
kgwgk
> The FT chose to run with an inflammatory headline, assuming an extreme value
> of ρ that most researchers consider highly implausible.

The headline "Coronavirus may have infected half of UK population — Oxford
study" is not really out of line with the preprint the article talks about.
What's questionable is whether it was a good idea to write about the preprint
at all.

"Importantly, the results we present here suggest the ongoing epidemics in the
UK and Italy started at least a month before the first reported death and have
already led to the accumulation of significant levels of herd immunity in both
countries."

"Our overall approach rests on the assumption that only a very small
proportion of the population is at risk of hospitalisable illness. [...]
Three different scenarios under which the model closely reproduces the
reported death counts in the UK up to 19/03/2020 are presented in Figure 1.
[...] [In two of those scenarios] By 19/03/2020, approximately 36% (R0=2.25)
and 40% (R0=2.75) of the population would have already been exposed to
SARS-CoV-2. [...] [The third scenario] suggests that 68% would have been
infected by 19/03/2020."

The secondary headline "New epidemiological model shows vast majority of
people suffer little or no illness" was much worse, as that's an assumption
of the model, not a result. It was changed to "New epidemiological model
shows urgent need for large-scale testing" in the amended article.

> Since its publication, hundreds of scientists have attacked the work,
> forcing the original authors to state publicly that they were not trying to
> make a forecast at all.

"Attacked" sounds as if the criticism was unwarranted.

------
martingoodson
Author here: happy to take comments or criticism

~~~
YZF
I'm finding myself in disagreement with rule #6. Using a model effectively is
about a lot more than just the domain knowledge. I'd value analysis from a
mathematician/statistician more highly than from an infectious disease
physician. There's the stuff that informs models, i.e. the observations, the
experimentation etc. and then there's the science of modelling itself which
isn't really in the same domain.

~~~
martingoodson
I agree with this - but I do think it should be clear that the model is from
outside the mainstream. Not to dismiss it but to clarify its status. Check out
the New Yorker piece I link to in the article - it's quite shocking the
misinformation that's out there.

~~~
garmaine
Well, we could have benefited greatly from the mainstream media and
politicians taking the outside predictions seriously at the beginning of this
crisis. Instead we had to wait a month for Imperial College London to say the
exact same thing before certain leaders got their heads out of the sand.

Likewise now with hydroxychloroquine: if you listen to the epidemiologists,
all you'd hear is how it's an UNPROVEN drug. What we need instead is coverage
of sample sizes, p-values, Bayesian predictions of effectiveness (in the
absence of controlled studies), and serious modeling of the number of ICU
beds and ventilators required with and without various levels of treatment,
from emergency care to prophylactic use.

The epidemiologists have their heads in the sand and think we can just wait 6
months for a proper set of randomized trials. It's the less attached data
modelers you need to turn to for predictions that are useful for effective
policy choices.

~~~
martingoodson
That's not really a fair representation. (Harvard epidemiologist) Marc
Lipsitch raised the alarm back in February: "it's likely we'll see a global
pandemic" of coronavirus, with 40 to 70 percent of the world's population
likely to be infected this year.

[https://thehill.com/changing-america/well-being/prevention-cures/482794-officials-say-the-cdc-is-preparing-for](https://thehill.com/changing-america/well-being/prevention-cures/482794-officials-say-the-cdc-is-preparing-for)

------
paulsutter
"the best material model of a cat is another, or preferably the same, cat” -
Norbert Weiner

~~~
LittlePeter
Wiener

------
strenholme
I am working on trying to make an accurate model for predicting COVID-19
growth based on the per-county figures we have for COVID-19 (courtesy of The
New York Times). To say the data is noisy would be an understatement.

What I have found, so far, is that if we look at current daily growth
(averaged over seven days) and use exponentiation to predict future growth
based on the previous week’s figures, the numbers are too high (usually by a
factor of two, but the error amount is all over the place).

Point being, we’re seeing a more complicated growth model than simple
exponential growth; the actual growth is lower.

My work so far is on GitHub:
[https://github.com/samboy/covid-19-html](https://github.com/samboy/covid-19-html)

 _This is a work in progress_ and I'm nowhere near being able to make a
simple, easy-to-read graph showing a reasonable projection of COVID-19 growth
in the United States.
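
As a sketch of the projection method described above (hypothetical
illustration code, not the actual implementation from the linked repo:
`project_cases` and the toy data are my own), averaging a week of growth and
extrapolating by exponentiation looks roughly like this:

```python
# Hypothetical sketch of the projection method described above: take the
# average daily growth factor over the trailing seven days, then
# extrapolate forward by exponentiation.

def project_cases(cumulative, days_ahead):
    """Project future cumulative cases from the trailing 7-day growth rate."""
    if len(cumulative) < 8:
        raise ValueError("need at least 8 days of cumulative totals")
    week_factor = cumulative[-1] / cumulative[-8]   # growth over 7 days
    daily_factor = week_factor ** (1 / 7)           # geometric daily mean
    return cumulative[-1] * daily_factor ** days_ahead

# Toy data growing ~10% per day; on real data this kind of projection
# overshoots, which is the factor-of-two error described above.
totals = [100 * 1.10 ** d for d in range(10)]
projected = project_cases(totals, 7)
```

On synthetic purely exponential data the projection is exact by construction;
the overshoot on real data is what suggests the growth is sub-exponential.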

~~~
justanotherc
It could only be exponential if the population were infinite. It has to be
more of a bell curve: as infection spreads, there are fewer hosts left to
infect, and so growth eventually starts to decline.
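
That intuition can be illustrated with a minimal logistic model (a toy sketch
with made-up parameters, not a fit to any real data): cumulative infections
follow an S-curve, so daily new infections rise to a peak and then decline as
susceptible hosts run out.

```python
# Toy discrete logistic model: growth slows as the susceptible pool
# shrinks, so daily new infections trace out a bell-shaped curve.

def logistic_epidemic(population, r, days, initial=1.0):
    """Return a list of daily new infections under discrete logistic growth."""
    infected = initial
    daily_new = []
    for _ in range(days):
        # Growth is proportional to both the infected count and the
        # still-susceptible fraction of the population.
        delta = r * infected * (1 - infected / population)
        infected += delta
        daily_new.append(delta)
    return daily_new

daily = logistic_epidemic(population=1_000_000, r=0.3, days=120)
peak_day = daily.index(max(daily))  # interior peak: rise, then decline
```

Early on this is indistinguishable from exponential growth; only as the
susceptible fraction shrinks does the curve bend over.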

~~~
strenholme
Even in places where 0.05% of the population has confirmed cases of the
virus, I'm seeing the curve flatten. Whether that's from quarantine or from
the virus hitting its limit, I cannot say. But the curve _is_ flattening.

~~~
justanotherc
Yes, that's good news. However, when you say the curve is flattening, I
believe you mean we have reached peak daily new cases, meaning the number of
new cases we see is going to start declining from here.

Collectively, many people seem to be referring to that as "curve flattening",
but my understanding is that flattening the curve means slowing the growth
rate overall, so that it takes us longer to reach peak daily new cases. It is
not intended to indicate a particular point along the x-axis. In fact, if we
are actually flattening the curve, it will take us LONGER to reach our peak.
Also, it's difficult to measure whether we have been successful or not,
because the only thing we have to measure against would be hypothetical
worst-case scenarios.

------
m0zg
I'd like to point out that we _still_ don't know whether that "high R0" model
was right or not. And we won't know until we test a sufficiently large random
sample of the UK population for antibodies. They imposed containment _very_
late, and they go to pubs _all the time_. It is not implausible that the
majority of their population has already had COVID-19 without even knowing
what it was.

What we do know is that the "doomsday style" models from IHME that just last
week were predicting 50K beds needed in NY are off by a factor of 3-4, and
hospitalizations are starting to flatten out already. You can guess the
direction in which they were wrong. And before you start an uninformed
argument: yes, these models assume the current isolation measures.

Meanwhile, NY hoarded ventilators and medical supplies because it anticipated
this prediction to be true. To be clear, I don't blame NY: they used the best
information they had, which turned out to be bullshit. Better safe than
sorry.

These are not harmless errors. When this is over, someone should study these
fiascos and estimate the death toll just from bad models alone.

[https://twitter.com/AlexBerenson/status/1246465515704463360](https://twitter.com/AlexBerenson/status/1246465515704463360)

------
scribu
Reading the examples, where people jumped on a single mistake to discredit an
entire report, points to some sad conclusions:

1. Scientific literacy is super low in the general population.

2. Motivated reasoning is rampant. People will believe anything that enables
them to do what they wanted to do anyway.

~~~
chrisco255
I think the author oversimplifies the problems with climate models. They've
had numerous mistakes, some of them extremely critical, not least of which is
that they haven't factored in multi-decadal changes in cloud-cover albedo.
The climate system is incredibly complex, and our models do not capture well
all the subsystems that make it up, including oceanic oscillations like the
Atlantic Multidecadal Oscillation, the PDO, etc. We can't even predict many
of these subsystems with any degree of accuracy, so we are hopelessly
inaccurate at the higher level.

~~~
djsumdog
I agree, and there are the same political leanings in climate-change
non-profits and think tanks as with anything else. Sometimes they may justify
slightly more alarming readings of the data to gain more funding, justifying
it with, "Well, it's going to be bad anyway."

------
firefoxd
This is a good time for schools to go back to teaching about Ignorance:

[https://www.nytimes.com/2015/08/24/opinion/the-case-for-teaching-ignorance.html](https://www.nytimes.com/2015/08/24/opinion/the-case-for-teaching-ignorance.html)

------
carlmr
> Would we encourage an epidemiologist to apply 'fresh thinking' to the
> design of an electrical substation? Perhaps we should treat with caution
> the predictions of electrical engineers about pandemic disease outbreaks.

I kind of disagree with this point. The models we see are mostly statistical
models. Anybody with a statistics background (mathematician, physicist,
chemist, biologist, engineer, computer scientist, epidemiologist) can have
enough of an understanding to make valid points.

Discarding opinions based on somebody's background is an argumentative
fallacy in and of itself. You should of course check whether somebody is
trustworthy, but epidemiologists should be scrutinized in this respect like
everybody else.

~~~
jeegsy
Wouldn't you need to be an SME to interpret the results properly and put them
in their proper context? Not to mention to know what inputs and variables
would actually make sense in the real world?

------
cousin_it
> _Journalists must get quotes from other experts before publishing_

No, this isn't enough. This whole way of thinking isn't enough. It's a big
part of the reason for the current situation.

Journalists should report what's true, not what Tom, Dick, or Harry said. If
a journalist isn't qualified to make object-level claims on a given topic,
they shouldn't write on that topic. For example, if Bob says there's a forest
fire, then instead of publishing "Bob says there's a forest fire", you must
do enough legwork to tell your readers "There's a forest fire" or "There
isn't".

I allow myself to ignore all journalism that doesn't follow that guideline,
and it makes me happier.

~~~
Quanttek
Uhmm, no? Reality is way too complex for anyone to make a statement about
what is "true", even for domain experts, and especially for journalists,
whose entire specialization is, at best, "public health". We may be able to
state what is "true" for very obvious matters where there are clear dividing
lines (e.g. a forest fire, election results). But when it comes to the
sciences, it is incredibly difficult to make any such statement, especially
for emerging research and predictive modeling.

Instead of demonstrating the Dunning-Kruger effect in the manner many
computer scientists and engineers love to do with unrelated fields,
journalists should listen to the experts and try to gather multiple opinions
in order to triangulate what is probably correct.

~~~
cousin_it
Given a choice between an engineer who did some calculations on an unfamiliar
topic, and a journalist who triangulated expert opinions but didn't do any
calculations, I'll listen to the engineer.

~~~
Quanttek
1) The comment regarding engineers related to their tendency to assume that,
because one has specialized knowledge in one field, one is also qualified to
comment on other topics, such as discerning the "truth" in a highly volatile
and complex social situation (e.g. COVID-19).

2) What you are describing is exactly what I mean: there are dozens of
experts ("engineers") who have done their calculations but have come to
different conclusions. To presume, as a journalist or expert, that one's own
calculations will provide the "truth" in such a situation is not only
extremely arrogant but also, when it comes to a pandemic, extremely
dangerous.

It reminds me of that electrical engineer at Imperial College who thought
epidemiology was a cakewalk and wrote a paper predicting 5,000 deaths in the
UK, which stood in stark contrast to the modeling effort by a large group of
actual, renowned experts in the field (epidemiologists, virologists, public
health scholars), also at Imperial College, which estimated at least 20,000
deaths. The electrical engineer had to quickly backtrack on his claims after
hundreds of scientists wrote in. Now imagine every journalist did that and
directly published it. That would be far worse than an article that brings up
the 5,000-deaths study but also mentions other estimates.

------
lazyjones
Regarding rule 2: I stumbled across
[https://www.sciencemediacentre.org/working-with-us/for-journalists/roundups-for-journalists/](https://www.sciencemediacentre.org/working-with-us/for-journalists/roundups-for-journalists/),
which does a decent job of aggregating expert reactions to questions that pop
up in the media.

I have many more issues with current journalism than the author of the blog
post does, rooted in the "fire & forget" nature of its publications (no
visible revisions, no corrections; almost all currently accessible articles
are too old to be useful or even correct).

------
DoofusOfDeath
Regarding the "all models are wrong" maxim...

Is that statement 100% true for the low-level models that physicists use and
develop? In particular, I'm curious if quantum-physics models are 100% right,
just not 100% precise.

~~~
tomrod
I keep this distinction in mind:

- Models are deliberate simplifications of reality, meant to guide thinking
and pull in only the important information.

- Formulations (formalizations) are encapsulations of principles in a
mathematical framework.

While there is significant overlap, the two categories do not overlap 100%. I
see the formalizations of physics as the latter, and we use the former to
help keep our understanding of the latter clear.

------
galaxyLogic
Maybe the best thing about models is not the numbers they produce but the
insight into how different variables affect each other, and how steeply.

But this information is already contained in the model itself. Therefore
people should be reading the models themselves, not just their results.

The models should of course be verified by comparing their results to
empirical data. But such data often doesn't exist for global things like
pandemics and climate change.

------
dham
All I can think about with these models is the book Jurassic Park. There's a
lot of talk about chaos theory in the book (kind of skimmed over in the
movie). I don't know how accurate all of it is, but trying to model how a
dinosaur park would work in the modern day seems a lot like modeling what a
virus will do (although a dinosaur park is probably easier to model).

------
tallgiraffe
Absolutely true. The amount of bogus science being shared is astonishing.
Yet almost no one seems to notice, care, or demand better. Everything said in
the article should be done, but personally I've lost all hope. We might be
living through a generation that values TikTok and clickbait more than truth,
and it won't end well, but that's where we are heading.

------
6510
I think making medical claims without a license is quite punishable. This
could easily be extended to include journalists and politicians, as well as
other areas of science (if the stakes are high enough).

It would work like this:

For each field there has to be a committee similar to the ones medical
practitioners already have. If the behavior is not up to standards, they do
an inquiry and, if need be, the credentials are stripped.

Someone with credentials may sign-off on articles written by journalists and
must do so for laws made by law makers.

Such articles and laws stay active for as long as the credentials are valid.
If the person dies, a new one has to sign off on it. If that doesn't happen,
the law is abolished automatically and the archived news article will have to
clearly state at the top that such validation has been missing as of [say]
March 2043. It may also solicit such a review.

We can make a convenient API that allows people with credentials to stick
their necks out to approve a publication. The list of professionals endorsing
the perspective must be made available from the article or law.

~~~
cryptonector
> I think making medical claims is quite punishable if one doesn't have a
> license.

Well, it's tricky. If you say "I don't believe covid-19 is a serious
problem", that can't be punishable, as there is a large spectrum of
legitimate thinking as to its severity and what trade-offs we should want to
make. Such a statement is not remotely like practicing medicine without a
license, though some will argue that it is if that could help them shut up
the speaker. Whereas saying "chloroquine phosphate is a good prophylactic for
covid-19" to someone who would believe it certainly should be punishable (as
attempted murder, perhaps! and regardless of whether the speaker is a
licensed physician!).

> This could easily be extended to include journalists and politicians as well
> as other areas of science (if the stakes are high enough)

Journalists? Eh, maybe, but government officers generally have privileges and
immunities -- good luck getting them to let those go.

Anyways, the rest of your comment reads like 1984. Quis custodiet ipsos
custodes and all that. Regulatory capture and all that. There's no
Objectivity. There's no way to set up a mechanism that yields objectively
correct results. All systems will be susceptible to collective delusions and
other failures. There's no silver bullet here, and free speech should be part
of the mix. Reactionary thinking is fun for the angry, but not good for
society.

~~~
antpls
> While saying that "chloroquine phosphate is a good prophylactic for
> covid-19" to someone who would believe it certainly should be punishable (as
> attempted murder perhaps! and regardless of whether the speaker is a
> licensed physician!).

I would disagree. As an example from history, saying "the Earth is a sphere"
was once punishable by death. Fake news and pseudoscience have always
existed, but censorship is not a solution. Only research, education, and
communication can help, and even then, each generation will always have its
own set of questions without answers.

I don't agree with the parent's solution either, for the same reasons.

~~~
cryptonector
Chloroquine phosphate is a fish tank cleaner and, to humans, a poison.

Granted, if you're just dumb and ignorant and choose to take it yourself,
that's no crime. But telling others to take poison, without telling them it's
poison, should be a crime. Of course, in the case of chloroquine phosphate,
if you look at the packaging, you'll see it's labeled as poison.

~~~
kgwgk
Water will also kill you if you drink too much. But telling people that they
need to drink water is not a crime.

Chloroquine phosphate is a fish tank cleaner. On the other hand, chloroquine
phosphate is a medicine.

------
jeegsy
> Perhaps we should treat with caution the predictions of electrical engineers
> about pandemic disease outbreaks

Or former software company CEOs for that matter

------
puranjay
I fail to understand how you can reasonably model an event unprecedented in
modern history. We have no data on how people will act under a weeks- or even
months-long lockdown. Will they stay indoors and follow guidelines? Maybe.
Will they watch their livelihoods suffer and their mental health deteriorate,
or get careless over time and break quarantine? Maybe.

We just don't know, because we don't have the data, or heck, even anecdotal
evidence from similar events in the past.

~~~
heurifk
We have recent data from Wuhan.

We have old data from 1918.

This is a white swan, not a black one.

More importantly, the mortality rate, while much higher than flu, is still
relatively low.

Now imagine a virus as contagious as this one, but with 10% mortality across
all age groups. That would be unprecedented and would probably cause a
societal meltdown.

~~~
a1369209993
> More importantly, the mortality rate, while much higher than flu, is still
> relatively low.

Anecdote: someone was trying to convince me to panic about Coronavirus
because "three hundred and [something] people died just today!"

    
    
      $ units
      8 billion / 80yr
      /day
    

"Actually, three hundred _thousand_ people died today. Probably more, even."

~~~
koyote
I think a lot of people would be very surprised if you told them how many
people die in their country every year.

It's nearly 3 million in the US.
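
Those baseline figures are easy to sanity-check with round numbers (~8
billion people worldwide, ~330 million in the US, lifespans on the order of
80 years; the figures and the crude steady-state model here are my own
round-number assumptions, not from either comment):

```python
# Back-of-the-envelope baseline mortality from round population figures.
world_pop = 8_000_000_000
us_pop = 330_000_000
lifespan_years = 80  # crude steady-state assumption

world_deaths_per_day = world_pop / lifespan_years / 365   # ~274,000
us_deaths_per_year = us_pop / lifespan_years              # ~4.1 million

print(f"{world_deaths_per_day:,.0f} deaths/day worldwide")
print(f"{us_deaths_per_year:,.0f} deaths/year in the US")
```

The worldwide figure matches the "three hundred thousand people died today"
order of magnitude; the crude US estimate overshoots the actual roughly 2.8
million deaths per year because the real population skews younger than a
steady state.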

------
cassiet
The headline really cuts off the point's nose.

~~~
martingoodson
Yeah, I was trying to play off the George Box quote, but maybe it's too
obscure:
[https://en.wikipedia.org/wiki/All_models_are_wrong](https://en.wikipedia.org/wiki/All_models_are_wrong)

~~~
KerryJones
Interesting, I did _not_ know the reference (I'm a SWE, not in scientific
community), and I thought it was a clickbait headline. Obviously this differs
from the person above me ^ who clicked the link _because_ of the reference.

Just perspective, and I'm very happy to know the reference now.

------
IBCNU
All models are wrong, some are deadly.

------
johndoe42377
It is so strange: when I write exactly the same assertions, I get downvoted
into oblivion lol.

------
rossdavidh
So, I basically agree with everything this article says, but it seems to miss
a basic point: if journalists do what the article suggests, they make less
money.

Journalists, and the news-media corporations and organizations that employ
them, don't run with the most inflammatory headline possible as an accidental
fluke they were too careless to catch. Even public-sector news organizations
use measures of how widely read their articles are as a figure of merit for
how well they are performing. Private-sector news media are rewarded
financially in more or less direct proportion to how widely read (or at least
clicked on) their articles are, not to how well informed the reader is after
they're done reading (if they even read past the headline).

If there is one lesson to be learned from this whole Covid-19 debacle (and
I'm sure there are several), it is that our entire news ecosystem, public and
private, is fundamentally structured wrong for what is supposed to be its
purpose: making people better informed. It's not bad at it by mistake; it's
bad at it as an inevitable consequence of its design.

~~~
renewiltord
The problem is on the demand side, not solely on the supply side. People
desire reinforcement, not challenge. That is natural; I suspect it is true of
us all. Those who are capable direct their potential to be challenged in
specific ways, allowing themselves to have their biases reinforced in the
unimportant places. And they have access to the knowledge in their specific
areas.

This is the reason for the enduring power of the filter bubble: it is a stable
equilibrium because it serves both the purposes of the demand and supply side.

You can test this by making high-reliability websites that state honest
priors. You'll get an audience, but it will be pretty specific to your
subject matter and won't be large. No information source has had all of the
following characteristics:

* Broad-based popular support

* High information content

* Novel information, i.e. information you can't get elsewhere

* Sustained presence

This may actually be desirable. Novel reliable information is an advantage,
but it may not be a sufficient advantage at present, and species survival may
depend on presently boosting those capable of acquiring and utilizing an
information advantage. I.e., a time may come when we need to be good at it;
if we have more people with this characteristic then, it'll lead to better
outcomes.

~~~
amitdeshwar
What about 538?

~~~
AnthonyMouse
Does 538 have broad-based popular support?

How does its viewership compare to that of gossip-based media like E! or
anger-based media like Fox or CNN?

