A Big Study About Honesty Turns Out to Be Based on Fake Data (buzzfeednews.com)
186 points by danso on Aug 21, 2021 | hide | past | favorite | 91 comments


"Now some are questioning whether the scientist himself is being dishonest" kinda undersells it. The debunking offers numerous pieces of stone cold proof that the paper is straight-up academic fraud: https://datacolada.org/98

And since the post says "the fourth author has made it clear to us that he was the only author in touch with the insurance company", it seems clear that Ariely personally fabricated the data.


I don't think that's "clear" at all. Even the blog link doesn't come to that conclusion.


The trail of the data goes cold at Ariely. He is refusing to say who supposedly gave him the data, and can't keep his story straight. He is also marked as the creator of the Excel file in the metadata [0]. If he received the data in Excel format, he wouldn't be the creator. If he received it in some other format, then he created the Excel file himself, in which case he should still have the original file to show.

It's obvious that Ariely is dirty, and I'd bet we'll hear of other similar cases soon enough from other studies that he was involved in. Cheaters don't cheat just once.

[0] https://datacolada.org/98#footnote_13_6142


from the submitted article:

>And this is not the first time questions have been raised about Ariely’s research in particular. In a famous 2008 study, he claimed that prompting people to recall the Ten Commandments before a test cuts down on cheating, but an outside team later failed to replicate the effect. An editor’s note was added to a 2004 study of his last month when other researchers raised concerns about statistical discrepancies, and Ariely did not have the original data to cross-check against. And in 2010, Ariely told NPR that dentists often disagree on whether X-rays show a cavity, citing Delta Dental insurance as his source. He later walked back that claim when the company said it could not have shared that information with him because it did not collect it.

from "Dan Ariely was suspended from research at MIT after conducting unauthorized experiment with human subjects" ^

>We talked to sources at MIT, who said that the way in which the placebo experiment was conducted led to a dispute and eventually to Ariely's departure. Sources familiar with the study said that Ariely did not request the necessary permissions from the IRB (Institutional Review Board) of MIT, called COUHES, and did not fulfill the required protocols to conduct the experiment, which included the administration of electric shocks to 80 participants and the administration of cheaper or more expensive placebo medications.

>When this was discovered after one of the participants complained, the ethics committee reached out to Ariely for an explanation. Emails that have come to our attention reveal that Ariely's reply was amused in tone, indicating that he had a general protocol for this type of experiment, called "electric shocks", and that he followed it.

^ https://www.ha-makom.co.il/post-tomer-dan-ariely-mit-suspent...


I think "We ran 1 million simulations to determine how often this level of similarity could emerge just by chance. Under the most generous assumptions imaginable, it didn’t happen once." is pretty damning.


Oh, somebody made it up. It’s just not proven that he was the one who made it up. It could be someone at the insurance company, or some student paid to type it in. Less likely perhaps, but not impossible.


This evidence is clearly irrefutable. It's fascinating how a well respected scientist could make such elementary mistakes when fabricating data.


It's easier to gain a reputation if you are fine with cutting corners here and there, or even making things up as long as no one notices. And once you are at Harvard (or Duke for that matter), most people won't even question your credibility.

EDIT: And to the point about not being able to fake the data well: yeah, again, if we are in the business of collecting credit quickly, faking the data quickly makes sense too. No one would take a close look, right?


Patrick Winston, professor and former director of MIT CSAIL, told a brief story in class back in 2008 or so. He said it was sort of a running joke in the graduate admissions committee that people were drawn to study and research the areas of AI that corresponded to weaknesses of their own.

I.e., people with poor hearing study speech recognition. People with face blindness study computer vision. People with poor writing skills study natural language processing. And so forth.

"And every so often", he said, turning to face the class with a small grin on his face, "we get a grad student who comes before the committee and says he's interested in all aspects of artificial intelligence."

It was amusing at the time, but had an element of truth to it as well. It's not surprising to me that dishonest people would be drawn to a career studying honesty.


I've read two of Dan Ariely's books and, while occasionally entertaining and occasionally insightful, I could never stop feeling like he was a bullshit artist. His books read as if he read a paper about people liking counterintuitive headlines with sciency-sounding explanations, and he p-hacked his entire career to take advantage of it and become the next Malcolm Gladwell. I kinda expected to see this sort of rebuttal sooner, considering how much of a thorn he was in economists' sides, but I'm not at all surprised to see that it eventually happened.


New info from the Buzzfeed article:

> “I can see why it is tempting to think that I had something to do with creating the data in a fraudulent way,” [Ariely] told BuzzFeed News. “I can see why it would be tempting to jump to that conclusion, but I didn’t.” He added, “If I knew that the data was fraudulent, I would have never posted it.” [..] he said that all his contacts at the insurer had left and that none of them remembered what happened, either. [..] Asked by BuzzFeed News when the experiment was conducted by the insurance company, he first replied, “I don’t remember if it was 2010 or ’11. One of those things.” [..] But Ariely discussed the study’s results in a July 2008 lecture at Google [..] did not have any emails from that time to review.

Another quote from an article in The Economist (https://www.economist.com/graphic-detail/2021/08/20/a-study-...):

> Mr Ariely has requested that the study be retracted, as have some of his co-authors. And he is steadfast that his mistake was honest. “I did not fabricate the data,” he insists. “I am willing to do a lie detection test on that.”


>“I am willing to do a lie detection test on that.”

For a scientist researching honesty, I'd say that referring to a polygraph as a lie detector test raises a few questions.


> In the first sign of something amiss, the 13,488 drivers in the study reported equally distributed levels of driving over the period of time covered in the study. In other words, just as many people racked up 500 miles as those who drove 10,000 miles as 40,000-milers. Also, not a single one went over 50,000.

I have a hard time believing that Dan Ariely didn’t know about this. The uniform distribution of mileage makes no sense, so this should’ve been caught right away. Plotting a histogram of the mileage data would’ve been one of the first things Ariely’s team did with this data.
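
A check like that is genuinely trivial. Here is a minimal sketch, assuming a hypothetical file and column name ("policy_data.xlsx", "miles_driven"), just to show how little code it takes to see that the shape is wrong:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical file and column names; the point is how quick the check is.
    df = pd.read_excel("policy_data.xlsx")

    df["miles_driven"].hist(bins=50)
    plt.xlabel("Miles driven over the study period")
    plt.ylabel("Number of drivers")
    plt.show()

    # Real mileage should be right-skewed with a long tail; a flat histogram
    # that stops dead at 50,000 is an immediate red flag.
    print(df["miles_driven"].describe())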


It’s not damning per se. That depends on the sampling (otherwise it would be a tiny, tiny insurer). I’ve done studies where I sampled an insurance population to get equal group sizes on a few key parameters, because we then did follow-up questionnaires and I needed to account for non-response. No point in random sampling then, because all I would probably get is data on the largest groups in the population. As it turned out, people loved the subject of our questionnaire (effect of preventive measures by home owners on the incidence of a whole range of common claims) and we got about a 70% response rate (that’s crazy high for cold questionnaires to customers), so the study ended up quite overpowered.

Not knowing the sampling, not documenting, not having the emails or at least a zip containing the work (over 4 authors)… that’s a different ballpark.


The sample size is in the article:

> Nearly 13,500 drivers were randomly sent one of two policy review forms to sign…

The distribution referenced is of mileage, which you’d expect to have some kind of right-skewed, continuous distribution.


Look, I’m not defending Ariely, just saying that random sampling can be more complex than giving each record exactly the same chance of selection. And if the population has a few overpopulated groups but you’d like results for all groups, you don’t throw extra samples at it but use smarter sampling.


Sorry, but you seem to be suggesting that non-response bias would be responsible for the uniform distribution seen in the Update mileage digits as well as the uniform miles driven distribution?


Theoretically, yes. I’ve used stratified sampling in this kind of research, since the underlying portfolio is so skewed.


I really don't see how any random sampling mechanism (unless you're literally stratifying based on the last digit of the odometer) would cause these sorts of results, please explain further.


Or, even better share the data you reference from "this kind of research".


These are customers they already know, I presume, so they have an inkling of the expected mileage (at least: we price on expected mileage). So you sample repeatedly, dropping some records, until you have enough in every mileage bucket. It’s a stratified sample, and still random within each stratum.

Say you have 10k customers, 1k of them women. You want to take a sample and want enough power to get answers about the women. In that case you do stratified sampling.
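
A minimal sketch of that kind of stratified draw in Python (hypothetical table and field names, equal allocation per stratum purely for illustration):

    import pandas as pd

    # Hypothetical portfolio: 10k customers, 1k of them women (as in the
    # example above), or whatever variable the insurer actually stratifies on.
    customers = pd.DataFrame({
        "customer_id": range(10_000),
        "group": ["woman"] * 1_000 + ["man"] * 9_000,
    })

    # Equal allocation per stratum for power within each group,
    # instead of one uniform draw over the whole portfolio.
    sample = customers.groupby("group").sample(n=500, random_state=42)

    print(sample["group"].value_counts())   # 500 in each stratum

Within each stratum every record still has the same chance of selection, which is the point: random sampling doesn't have to mean one uniform draw over the whole portfolio.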



In my view, there are two possibilities:

1) The data is fabricated and some of the researchers were in on it.

2) The data is fabricated, and the researchers are extremely sloppy, irresponsible and should be ashamed of their poor work ethic.

How can you not have done any kind of analysis on this data, even if only for curiosity's sake? No plots of the distributions? Nothing? Come on, it's stuff that takes three minutes to whip up in Python.

In this age of misinformation, we don't tolerate people spouting lies even if they claim to think it is the truth. I don't see how this is any different. They didn't even attempt to do basic verification.
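
As one concrete example of such a three-minute check, here is a hedged sketch of a last-digit tally (hypothetical file and column names), roughly the kind of thing the Data Colada post examined for the updated mileage figures:

    import pandas as pd

    df = pd.read_excel("policy_data.xlsx")   # hypothetical file/column names

    last_digit = df["updated_mileage"].astype(int) % 10
    print(last_digit.value_counts(normalize=True).sort_index())

    # Genuine self-reported odometer readings over-represent round numbers
    # (0s and 5s); a perfectly flat spread of last digits points to a
    # random number generator.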


Another instance of suspicious behavior from Ariely: he has an experiment which shows that most people cheat a little, and very few cheat a lot. The experimental method involves prying the teeth off shredders with a screwdriver so that the supposedly shredded sheets can later be recovered and checked. However, others could not replicate this method.

http://fraudbytes.blogspot.com/2021/08/top-honesty-researche...

Someone has claimed that getting a shredder from Home Depot is suspicious [1], but I found it perfectly innocuous. Googling for "home depot shredder" brings up several options.

[1] Aaron Charlton's response at https://twitter.com/sTeamTraen/status/1428275153155264520


Adding on to this: Ariely at one point in time bandied around this unfounded claim about how roughly half (!) of dentists willfully misinterpret medical images in order to fill cavities that don't exist, quoting Delta Dental. But:

> “But according to Dr. Ariely, he was basing his statement on a conversation he said he had with someone at Delta Dental,” said Pyle. “But he cannot cite Delta Dental in making that claim because we don’t collect any data like that which would come to such a conclusion.”

> So what happened?

> Ariely said he got that 50 percent figure from a Delta source who told him about “some internal analysis they have done and they told me the results. But they didn’t give me the raw data. It’s just something they told me.”

> Ariely did not provide the name of the Delta medical officer, whom Ariely said was not interested in talking with me.

[1] https://www.wbur.org/npr/131079116/should-you-be-suspicious-....


What's weird is that this blog post is really just expanding on the 2020 study by the original authors, which says that the original result does not replicate. And it provides the original data.

The original authors probably should have retracted the 2012 paper in 2020. Not doing so was a mistake, which leads to this.

This blog article is just science at work. You have a study that says X. Then people try to replicate it, and can't. And in this case, the original authors all come out and say "it seems this conclusion was wrong".

Kudos for figuring out why the data can't be replicated. But it was the second 2020 study that gave them the clues, not the 2012 one.


> And in this case, the original authors all come out and say "it seems this conclusion was wrong".

Well, not really. They published a paper with that conclusion. But they were happy to lie about it elsewhere. Compare the discussion at https://statmodeling.stat.columbia.edu/2021/08/19/a-scandal-... :

> Ariely is the author of the 2012 book, “The Honest Truth About Dishonesty: How We Lie to Everyone—Especially Ourselves.” A quick google search finds him featured in a recent Freakonomics radio show called, “Is Everybody Cheating These Days?”, and a 2020 NPR segment in which he says, “One of the frightening conclusions we have is that what separates honest people from not-honest people is not necessarily character, it’s opportunity . . . the surprising thing for a rational economist would be: why don’t we cheat more?”

> But . . . wait a minute! The NPR segment, dated 17 Feb 2020, states:

>> That’s why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they’re done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.

> And that last sentence links directly to the 2012 paper—indeed, it links to a copy of the paper sitting at Ariely’s website. But the new paper with the failed replications, “Signing at the beginning versus at the end does not decrease dishonesty,” by Kristal, Whillans, Bazerman, Gino, Shu, Mazar, and Ariely, is dated 31 Mar 2020, and it was sent to the journal in mid-2019.

> Ariely, as a coauthor of this article, had to have known for at least half a year before the NPR story that this finding didn’t replicate. But in that NPR interview he wasn’t able to spare even a moment to share this information with the credulous reporter? This seems bad, even aside from any fraud.

(emphasis original)


I’ve seen a tweet by an NPR journalist saying the interview happened in 2017 and was rebroadcast in 2020. Sorry no link.


Not replicable is very different from “the data were made up using a random number generator”.


The person doing the fraud realized that, ironically, the main reason to be honest is that you are being checked on it.

I would totally have believed this study intuitively as my honesty is dependent on my assessment of the risk of getting caught.


This is why learning morals and ethics doesn't necessarily make a person more moral or ethical: the knowledge can be used to try and "beat the system".


The longer someone maintains a reputation for honesty and ethics, the more they will be trusted and the more they can get away with in the end.

Therefore the most advantageous way to try to "beat the system" may be to defer any dishonest behavior for an indefinitely long time, until a "rainy day".

Makes me think of:

https://xkcd.com/810/


It's like a codebase: you are liable to hear all kinds of claims about what it can and can't do, but the only way to be sure is to go and examine it for yourself. And every time I do that I find surprises. I imagine the world of scientific experimentation is similar in that respect. The synopsis brought to us by scientists can range from slightly misleading to downright fraudulent. It's valuable, but unless you get into the weeds of it, expect your knowledge to be flawed.


People are being very generous to Ariely. His theory is that someone in the insurance company faked the data such that it would prove his hypothesis before giving it to him?


The primary citation can be found at [0], but the article includes some more background, and emphasises the author's response - admitting that the data was probably tampered with.

[0]: https://datacolada.org/98


> Looking at the data only when they were aggregated, anonymized, and sent to him, he said, freed him from the work of securing ethics approval from the university to perform research on human subjects.

Note that he was kicked out of MIT for doing work not approved by the IRB, if I recall correctly.


Science is a process, a way to find truth. It is not in itself truth. It is subject to all the same human flaws and venality as any other human endeavor.

Just because something is published in a paper doesn’t make it true. Just because the paper was peer reviewed doesn’t make it true. Just because there is a “consensus” doesn’t make it true.

I’m sick and tired of people saying “the science is settled” or “trust the science”. These statements indicate a fundamental misunderstanding of what science is and how it works (or fails) in the modern world.

The modern scientific method is still, in my opinion, the most powerful way we have of learning how things work, but it is not without flaws. This is just another cautionary tale.


> I’m sick and tired of people saying “the science is settled” or “trust the science”.

You’re right, but also, there’s more to it.

There is a difference between what people say and what people mean, and context is important. When I’ve heard people say “trust the science” it has always been directed at people who have demonstrated that they do not understand the scientific method or the meaning of “healthy skepticism”. In that case what you are doing is an appeal to authority, which is completely unscientific, yet necessary for building consensus.

So when we hear “trust the science”, it’s really a political statement more than anything, and it intends to replace a person in a robe, or a person in a uniform with a person in a lab coat.

Certainly it would be better to have an informed and well educated citizenry. But for people who live in the real world and do not want to have their society steered by people whose views are incompatible with a modern society, saying “trust the science” is shorthand for “we don’t think you have what it takes to contribute to a conversation in a meaningful way.”


When I hear people say "trust the science" the speaker is often a non-scientist who is selectively picking what science fits their agenda and ignoring the rest.

It's the same as when somebody says "trust me, I'm a XXXXX": it's time to be very skeptical.


The problem is, not all opinions are equal. And the internet, including forums like this, has no notion of status, authority, or stature — so it puts all opinions, no matter how informed they are, on the same page. And while status and authority don’t mean one is right, it is important that on average our information sources are more informed than the average person who has no training, no relevant knowledge, no domain expertise, and little ability to disambiguate between contradictory or even harmful information. This tendency has spilled over into the nearly infinite sources of media and information that now exist, with accelerants that didn’t previously exist for bad information to travel wide, far, and fast.

So when people say trust the science, what they’re saying is a bunch of people who do this all day every day are collectively hive minding some opinion on how things work. If it’s wrong, it will be wrong for likely non-obvious reasons. And thus that’s the best we as humans can hope for at any point in time. As a race we are continually learning and updating our understanding of the world; just because it’s not perfect at any one time doesn’t mean everyone should just throw away all that they know and assume everything is wrong and up for grabs.


Especially when it's only a single study with a surprising result.

Those go viral easily, but are usually wrong.


The replication crisis has been a long-standing problem in the social sciences and medicine. The former's darkest stain is its collective need to prove its preconceived world view. The latter's seems to be motivated by profit.

I'm sure nobody is surprised that zealotry and greed have made some of science as reliable as psychic readings.


A more charitable interpretation of the tendency of error in both is that humans and the world are more complex than expected. Sure, there are examples of fraud, etc but my general intuition is that people are generally well intentioned and interested in honesty and that those qualities are not immunity from error. So, malice, incompetence or a world that defies our finite minds?


> The latter's seems to be motivated by profit.

The same problem persists in communist countries that don't allow profit.


I think you can replace profit with "power". Publish or perish is true in most academic circles, regardless of government.


Bingo.

"Science is the belief in the ignorance of the Experts"


> When I’ve heard people say “trust the science” it is always directed to people who have demonstrated that they do not understand the scientific method and the meaning of the word “healthy skepticism”.

I've virtually never heard it from somebody who is a scientist or even an educated layman. Those people are usually talking about specific studies or papers. The people who say "trust the science" in my experience might as well be saying "trust the television" for all they understand of the science.

And I agree they certainly are saying it "to replace a person in a robe, or a person in a uniform with a person in a lab coat." But I don't think that's a positive thing. They aren't scientists, they're NPR listeners regurgitating something they've heard.


"Certainly it would be better to have an informed and well educated citizenry. "

Of course it would, but there's no way for people to 'stay informed' on the multitude of various things.

The system we live in is fundamentally based on legitimate authority. We have no reasonable way to try to debate with our doctors, dentists, lawyers, engineers. It's literally why they exist - to understand, internalize and work with the inherent truth in a system.

We trust them, that's the way it works.

So how do we deal with a system that is inherently grey, 'wrong a lot of the time' but 'right at other times', on matters that are consequential (i.e. vaccines, climate change)?

That's a tough social problem.


I say “follow the science” about Covid and Global Warming, for instance. Yes, that is a political statement. It’s also rational and reasonable.

I am a tester. Science is never complete, just as testing is never complete. But stupid people think they can ignore bad bugs and get away with it; and stupid people spin fantasies that maybe all those experts are completely wrong about Ivermectin or vaccines.

Trust science is just a way of saying “hey fella, there are people who have devoted their careers to knowing this. Let’s hear from them and take them seriously.”


It is difficult to get a man to understand something when his salary depends upon his not understanding it.


> When I’ve heard people say “trust the science” it is always directed to people who have demonstrated that they do not understand the scientific method

I think "always" is too strong. With regards to COVID vaccines I have heard that line thrown at people who say "this conclusion is too new, it has not stood the test of time" and now lo and behold we are seeing that the early "95% effective" rates are not holding up.

In fact I would say that many of the people who say "trust the science" (politicians, etc) have no scientific background themselves and are not people we would otherwise look to for scientific opinions on anything.


The 95% effectiveness was based on the version of COVID that was circulating at the time. No one said it would be 95% effective against every possible mutation. Claiming the results of those studies are wrong instead of a result of a changing environment is itself a misunderstanding of science.


Claiming that the vaccines were necessary to "end the pandemic" without factoring in the possibility of variants for which the vaccines were not as effective was not scientific thinking.


Vaccines are still necessary, they just aren't sufficient.


Maybe they expected 99%+ to get vaccinated before a vaccine-avoiding variant got out.


They expected 7 billion+ people to get vaccinated?


The media didn’t say “Vaccines are 95% effective, except for mutations”. The media said “Vaccines are 95% effective.”


The media is all lies and propaganda. Stop paying attention.


What evidence do you see that the effectiveness reducing hospitalization risk has decreased?


> people saying “the science is settled” or “trust the science”

These two are not of the same level of severity.

The first statement should usually raise eyebrows, as science is almost never "settled". The first and second laws of thermodynamics are pretty solid. For most other things we can entertain some skepticism.

But the second statement raises the question: trust relative to what? If your options are to trust something that appears to follow the scientific method vs. something that does not appear to follow the scientific method, then it's completely fair to favor the first. Sometimes you'll be wrong, and that's fine, but that doesn't mean that science is inherently undeserving of trust.


> The first statement should usually raise eyebrows, as science is almost never "settled". The first and second laws of thermodynamics are pretty solid. For most other things we can entertain some skepticism.

One of my good friends is a particle physicist and he assures me that he and his peers very much hope the science isn't settled, despite agreeing pretty well with experiment, because the Standard Model is a hideous kludge.

The heuristic I use is judging based on my estimate of the statement's Shannon entropy. And most of the time the "follow the science" crowd's statements contain absolutely no entropy. I've already heard it verbatim from CNN or some other usually low expertise source. Furthermore, it's annoying to have people lecture me about PCR tests when they don't even know what a restriction endonuclease is. Note that high entropy doesn't mean correct, but to me it does mean more interesting. Now there are plenty of people out there who demonstrably do not want me to have access to high entropy sources, because they believe them to be wrong. They're certainly entitled to their beliefs, but I dislike the idea of someone else deciding what I'm allowed to read.

On the subject of Shannon entropy, I've observed that on technical subjects that readers here have expertise in, like programming, the high entropy comments tend to get upvoted. On the other hand any topic where the crowd here believes "the science is settled" you see the exact opposite effect: high entropy comments are consistently massively disapproved of, while clever restatements of the conventional wisdom with minimal entropy get voted to the top.


What this all fails to consider are the undisclosed incentives and motivations of those “doing the science”.

It is shockingly easy to lie with statistics, massage experimental results, or just straight up fabricate the whole damn thing to further your career, get a grant, ego or whatever.

What percentage of published research papers are able to be reproduced? Very few.

Many “non-intellectuals” inherently know all of the above about human nature, but suffer ridicule when they don’t “trust the science”. It doesn’t take a blue check mark next to your name to realize that people are fallible and will lie to get ahead.


Ironically, you picked the one "law" in physics which is technically a statistical statement rather than a law per se.


While I agree abuse occurs, I think this view is overly strict in the other direction. The phrase "the science is settled" is perfectly valid to use in many contexts and is a useful shorthand for something like "if you want to deny this scientific consensus you would need to have amazing evidence, therefore it's more productive and expedient to move on and discuss something else". This can differ depending on context but the point is the same.

As an example: the science is settled, human activities are responsible for most of what we observe as climate change.

This does not forbid anyone from coming along and proving the settled science wrong but it does prove useful for indicating our very high confidence.


While you’re right in a broad sense, your points are irrelevant here. This is about fraudulent data (apparently created with a random number generator) being used. This is not and should not be something we expect from scientists. If this story were solely about the replication crisis in behavioral economics, then your comment would be relevant.


There are other ways in which a study can be worthless than simply making up the data.


This seems like a generalization that needs to be limited in scope. I think the science is settled about Newton’s laws, Maxwell’s equations, Avogadro’s number, and DNA as the genetic material in bacteria, archaea, and eukaryotes. For pretty much anything taught in a 100-level science course, the science is largely settled.

There are controversies in science, and those controversies make for good popular science press, but there is a lot of science that is settled.


> I think the science is settled about Newton’s laws

Except it isn't. These laws have already been extended once by general relativity, and there's a non-zero chance that the dark matter problem will prove them insufficient once again. I don't know enough about the other fields, but it's not unlikely that they have similar problems.

The point is, things are really complex and full of surprising edge cases. And if you make final statements like this to someone who is doubting the current consensus anyway, you'll only make yourself vulnerable without convincing them.


Different people may have different ideas about what “the science is settled” means. In many political discussions, the phrase actually is used to argue for or against the quality of the data. Often, when discussing climate change, or perhaps even evolution, the phrase is used to say the data is clear, the earth is warming, or all living organisms share a common ancestor. Likewise, I think the science is settled that vaccines save lives. And the science is certainly settled that heavy and light objects fall at the same rate in a vacuum.

Being skeptical that “the science is settled” because of some poorly understood edge case makes it very difficult to communicate where there is uncertainty and where there isn’t (because the data is clear and the science well understood).


It is more nuanced. A lot of science is settled and has, e.g., moved over into technology. All semiconductor devices rely on what was once groundbreaking science (just as simple electricity relies on even earlier groundbreaking science). At the latest, once it becomes technology, that part is settled (because it is then reproduced millions of times).

The problem is that fields which do not systematically reproduce findings, or where reproducing results is even discouraged, are not in good shape; they encourage (not deliberately) p-value hacking at one end and fraud at the other.


"A major cause of low reproducibility is the publication bias and the selection bias..." https://en.wikipedia.org/wiki/Replication_crisis#Causes


Trust comes in degrees, so I think the question is how much to trust. Of the scientists I've worked with for long enough to estimate their character, I outright distrust only ~ 10%. That's consistent with [0], which reports that 8% of surveyed researchers admitted to falsifying data, but [1] reports 2% falsification and [2] reports only 0.5%. So if a finding is only reported by one group, whom you don't know personally, I would be only 50–70% confident in it. Once a finding is reported by at least two genuinely independent groups (no strong social or professional connections between the groups) it's appropriate to accept the facts reported but not necessarily the interpretation. As independent confirmations accumulate, idle skepticism becomes less credible. Active skepticism—doing experiments or collecting data to test the status quo—is always helpful though and should be welcomed.

[0] https://www.proquest.com/openview/e1af57060d9d8f628417ce3b7d... [1] https://journals.plos.org/plosone/article?id=10.1371/journal... [2] https://www.nature.com/articles/435737a


I'm more concerned about people who doubt science as a default point of view than about people who trust the science. If you don't trust science as a process, then you're just putting your faith in random crap that gets through your arbitrary filters. That's how we get stupid stuff like QAnon and Pizzagate.

We're always putting our trust in one thing or another. Personally, I'd prefer if we put our trust in a method that, over the long-term, strives towards some sense of "real" truth as opposed to some contrarian anti-science, anti-intellectual bull. Yes, be critical. No, don't reject science just because it suits you or because it might be uncomfortable.

The best thing about science? It's falsifiable. If climate change suddenly turns out to be wrong tomorrow, I don't have to cling to "oh, but yesterday the consensus was that it was real". It's "oh, these smart people are discovering new things that are giving us a new/deeper understanding of something we didn't quite understand correctly, time to update my understanding of the world".

If your point of view is dependent on your not understanding something, then it doesn't matter. You'll cling to your beliefs, which become a part of your identity, no matter what evidence is presented.


Climate science isn't falsifiable though. We cannot go back to the year 1600 and rerun the last 400 years without human industrial activity but keeping everything else the same and observe how the climate differs.


Just because one can imagine an impossible experiment does not prove that a “science” is not falsifiable. There are lots of predictions that climate science makes that are falsifiable. And one can imagine that at some point in the future, we will be able to do experiments on appropriately paired sets of planets.

The phrase “unfalsifiable” is often used to suggest that something is not scientific. My recollection is that Popper thought that evolution was not a scientific theory for the same reason. But unfalsifiable depends a lot on the kinds of experiments that are possible, or might become possible in the future.


Not sure why you are getting downvoted. It is a fundamental challenge of climate science. You can backtest a model all you want, but anyone familiar with confronting backtested models with reality knows that this gives you very little comfort. Climate science fundamentally deals with untested mathematical models.

It is not the only domain of science that has this problem. Medicine is a big one. You can experiment to some extent, but for obvious ethical reasons there is a lot of stuff you can't experiment on, and as a result we keep getting contradictory studies on issues that should be purely factual.


> I’m sick and tired of people saying “the science is settled” or “trust the science”.

Or "I follow the science". Nope, that's not how this works. Science doesn't tell people what to do. It isn't a book of instructions or a fixed set of truths. It's a process to learn how things work. What you do with those learnings is a squishy human meatspace thing involving values, politics, social norms, traditions, emotions, etc.

Anybody who says they "follow the science" is not doing any such thing.


"Science is a process, a way to find truth. It is not in itself truth."

We mostly all know this.

The issue is populism and communications.

When we use 'Science' as a basis of infallible credibility in some areas, then it's understandable that some are 'shocked' when that infallibility is obviously not true, and then people become jaded and lose confidence.

There's no language to differentiate between the murky greyness of some things (i.e. memory and recall), and the relatively unambiguous results of other bits of research (i.e. acetaminophen is safe).

Why should the non-scientific public, even sometimes, have confidence in a system that is so often very wrong? How are they supposed to know which bits are 'effectively true' and which are 'grey'?

From the outside, it looks like Science is being simultaneously authoritative (to the point of moralizing) in some areas and totally wrong in others, while lacking in self-awareness and saying 'Oh, that was wrong, but Science is a process of discovery, it's not always right', etc.

Given that Science is constantly 'changing its mind', it's perfectly reasonable for regular people to doubt Climate Change science: 'the authorities are often wrong, ergo, they may very well be wrong here'.

If Science doesn't have a way of effectively (and by that I mean simply and clearly) communicating the degree of confidence in something, then we're playing a very dangerous game with credibility of the institution.


Great take. What I find most difficult is how humans naturally want certainty one way or the other, or an opinion either on this side or that side. Side with us or against us. As opposed to reality, which has nuances, probabilities, and pros and cons depending on the stakeholder / perspective.


Though applying the word "science" to refer to psychology is a very liberal use of the term.


> Science is a process, a way to find truth. It is not in itself truth. It is subject to all the same human flaws and venality as any other human endeavor.

This is probably the most insightful comment on the subject I've ever read.


We need some new language over this because some science is 'settled' or at least 'strongly indicative'.

I wonder if 'Social Sciences' and 'Psychology' - since we know so little about them and don't necessarily have a foundation from which to work ... if they should be called 'Social Philosophy' that happens to use applied scientific methods.

And when the news talks about published papers and scientific results, we can agree on a language like simply the term 'Unverified' or 'Not Fully Verified' to effectively mean 'Not peer reviewed or duplicated'.

That way, when the first Ivermectin trial comes out we can say 'Unverified Ivermectin Study' and that 'Verification' is in progress etc..


Nothing is true, everything is Excel...


Subtitling his book "How We Lie to Everyone" could have been a sort of tell...


Ironic.


Don’tcha think?


A little too ironic


Immediately came to mind: https://youtu.be/bST8Xp8dtY0


>buzzfeed news

What a quality source, I will absolutely believe in its honesty.


Weirdly, BuzzFeed News has been a trustworthy source of high-quality investigative reporting for quite a few years at this point.

I think that's partly because they took advantage of the shrinking market for traditional print newspapers and snapped up some seriously high quality talent that had been laid off from other news organizations.


Buzzfeed investigative journalism is top-tier [1]. There is a clear difference between this type of article and the rest of the cesspool.

[1]: https://www.buzzfeednews.com/investigations



