I tried to report scientific misconduct. How did it go? (crystalprisonzone.blogspot.com)
895 points by ivank 78 days ago | 352 comments

In graduate school, my lab had a grad student who was a kind of unlikely "professor's pet". He was tall, with a surfer's long hair and a bit of a hippie aesthetic. He was also completely clueless about how to do science correctly, but, I guess, really good at playing politics (there was a time when he asked me to put some bacterial plasmid DNA on my mammalian cells; I told him "it doesn't work that way", but I did it anyway, handed over the cells, and he got the observation he was expecting). On his main project he was teamed up with a super sketchy foreign postdoc who, I was convinced, would say anything to get high-profile papers out.

So they did a series of experiments and reported results that screamed "artefact". On one of them, for example, the postdoc got trained on the electron microscope, and they went through thousands and thousands of images to pick out the one that had "just the right morphology" (I am pretty sure they were snapping photos of salt crystals). On another, they reported that their subject protein was so fast at the process we were studying that everything occurred IN MIXING TIME. That, to me, screams "you are not doing your experiments carefully".

Meanwhile I was sweating balls working on a very careful preparation of similarly finicky proteins (agitate them and they do bad things, since they're metastable) and finally got it to produce reproducible results. I suggested they adapt my preparation to their protein, but they didn't give a damn; they had already published their paper and moved on to sexier proteins.

But then an intern was put on the project, and after working on it for six months she could not reproduce their results (she is careful and honest). In the end I felt so bad for her that I offered to train her on my technique, but she passed; I think she was burned out on the project. I asked if I could get a sample of the protein she had prepped, and she agreed.

I ran the protein through my preparatory technique and observed that there was a contamination that could have seeded the kinetics of their process. Upon isolating an uncontaminated sample, I carefully but briskly rushed the sample over to the machine. Nothing. Curious, I jacked the temperature up to get it going faster. Nothing. I left it in the machine overnight. Nothing. Finally, convinced that I had likely done something wrong, I dropped the sample in a shaker at temperature, came back the next day and recorded amazingly high signal. In short, the observation that it was "super fast" was entirely an artefact.

As I, too, was trained on the electron microscope, I quickly spotted my sample onto an EM grid, reserved some time, and hopped on the 'scope. In the first grid sector I looked at, there was literally TEXTBOOK morphology in front of my eyes.

I stapled together my results, gave it to the grad student, and told him that the general gist of his paper was probably still correct, but that he should be careful about characterizing his protein as exceptional. I then said it was in his hands to do the right thing.

What do you think he did? Nothing, of course. He kept on the talks circuit, still talking about how exceptional his discovery was, and to date there have been no retractions. He even won the NIH grad student of the year award.

The epilogue is that after a decade of floundering I realized that even though I am pretty good at science, I was no good at playing academic politics, and I quit the pursuit; I drove for Lyft/Uber for a bit, and now I'm a backend dev. I am certain that my experiences are not unique. Amazingly, the intern returned to our lab and had her own three-year stint chasing ghosts that turned out to be an overoptimistic interpretation of results reported by a postdoc.

Oh. What happened to the grad student? He's a professor in the genomics department at UW.

A friend joined a group studying some cell behaviour. They had previously had a big result that they could stimulate this behaviour in defined, serum-free culture by adding a specific factor.

Friend was to work on characterising this effect, so his first job was to reproduce the result as a base case. He couldn't. The factor didn't stimulate the behaviour.

He asked around, comparing his execution of the protocol with that of the postdoc who had done the original work.

The method involved growing a feeder layer of cells, in serum, then lysing them and washing the plate, leaving a serum-free layer of extracellular matrix behind, as a foundation for the serum-free cell culture (this is a pretty standard technique).

Turns out the previous postdoc's idea of washing a plate was a lot less thorough than my friend's: a couple of quick changes of PBS. So they were almost certainly leaving a lot of serum factors behind on the matrix. Their serum-free culture was nothing of the sort.

The supervisor insisted that the previous postdoc's work was fine, and that my friend just didn't have good technique. The supervisor had him repeat this work for months in an attempt to make it work. But he's a careful worker, so it never did.

This is the worst situation: when the supervisor (professor) “sees no evil, hears no evil”.

In a similar situation, a prior student's work couldn't be repeated, and it was pretty clear the student had made up the results. “Water under the bridge, let's move on.” Of course, the publication still counted for the prof.

This feels like the sort of thing where automation would have great benefits.

Instead of relying on people getting the technique right, you load in their program, dump the chemicals into the right vials, then let it run and check the results.

Well, a lot of these tasks are already automated (e.g., shakers), but most bench workers have their own quirks on existing protocols. Most labs have their own 'dialect' of common mol bio techniques that 'work' in their particular setup. Perhaps the reagent from their particular supplier requires a longer incubation time, or the enzymes are wonky and you need to add more. Everybody I know does washing steps their own way; they say the "official" protocol is too long/cumbersome/wasteful. More often than not, their own variant of the protocol is documented in their lab books, but not in the publications, which just cite the original protocol.

I imagine the postdoc would have had a negative control of not adding the vector? Otherwise it's hard to convince people the effect was coming from the vector.

Right, so the effect must have been from the added factor plus some mystery factor in the serum.

Having worked in both academia and industry in biotech field, I have to say that the bar of reproducibility is a lot higher in industry.

In academia, the goal is to publish. The peer-review process doesn't involve repeating your experiments. And the chance of another lab repeating your experiments is slim -- why spend time repeating other people's success?

In contrast, in industry, an experiment has to be bullet-proof reproducible in order to end up in a product. That includes materials from multiple manufacturing batches of reagents, multiple customer sites with varying environmental conditions, and operators with vastly different skills.

I can second this. Working in industry, the bar is quite high for rigor. The general attitude of industrial researchers is to be very very skeptical of academia, since a lot of things just don't reproduce (cherry-picked data, p-hacking, only work in a narrow domain, etc., etc.). These researchers are almost all people with PhDs in various science fields, so not exactly skeptics.

The bar is different, but so are the aims.

Industry works solely on stuff that's reproducible because it wants to put these things into practice. That makes for an admirable level of rigor, but constrains its freedom to look at unprofitable and unlikely ideas. Academia does look at those, and that inevitably results in inadvertent p-hacking: the first attempt to look at something unexpected is always "This might be nothing, but..."

Academics also call in other people earlier, because they're not protecting trade secrets or trying to get a commercial advantage. They do want priority, and arguably it would be better if they could wait longer and do more work first, but the funding goes to whoever discovers it first.

So there's no real reason for academics and industry scientists to look askance at each other. They're doing different things, with standards that differ because they're pursuing different goals. And they need each other: applications bring in money that pushes for new ideas, and ideas result in new applications.

I agree with you, and I don't want my comment to be read as an indictment of academia exactly - we couldn't live without it, it has huge returns on investment, etc. It's worth reading 10 bad papers to find 1 with a kernel of a good idea (and worth spending research funding on 100 bad experiments to get 1 useful result).

I think what I mean to say is that the skills required in industrial research (which can be quite speculative in well-funded companies, by which I mean a 5% chance of success or so) are somewhat different from those required in academia.

I could sense this sort of problem even in CS, and could not wait to get into an applied position as soon as possible. If you cannot build the thing you're an expert in, you're no kind of expert as far as I'm concerned.

It's true CS has this problem. But that doesn't mean you cannot do reproducible research in academia. It's up to you.

In many academic disciplines there are no real incentives for reproducible research. On the contrary, reproducibility helps your colleagues/competitors poke holes in your papers. It is quite perverse that being secretive and sneaky is better for career advancement than being open and honest. This is the underlying root of the problem.

Well, I believe that the biggest problem is that there is very little incentive to do that. Everybody (your university, the government, the funding agencies...) rushes you to publish as many papers as possible, get zillions of citations, and boost your h-index; however, nobody gives a damn about the reproducibility of the results you are publishing.

If you are outcompeted by people with lower morals, then is it really up to you? You either have to succumb to taking shortcuts, or lose your funding.

Theranos showed us that's sadly not true. A good story beats reliable results.

I would say the industrial incentive still works pretty well. Theranos didn't follow it, eventually couldn't sell products, and went bust.

I'm sorry, but this saddens me to no end; even I did better science during my BSc and MSc. It's not just disheartening, it's frightening. Reading this almost made me feel sick to my stomach. I don't know what else to say about the blatant disregard for scientific ethics and sense of duty.

And we complain that the public at large doesn't trust us "educated" folk, well I can't see why...

My own, rather bitter, experiences with academic research in the early 1990s led me to suspect that trying to "manage" academic research at a large scale was utterly counterproductive: it was optimising for all the wrong things (publications, career progression, money, politics) and actually dramatically reducing the amount of actual science being done.

I left, co-founded a startup and never regretted it for a moment.

Edit: The point where I was sure I had to leave was when I was starting to play the "publications" game too well. When I found myself negotiating with colleagues to get my name on their paper in exchange for a bit of help, I decided things really weren't for me.

Edit2: I'd wanted to be an academic research scientist since I was about 5, so when I actually got what I thought was my dream job I was delighted. It took me a couple of years to work out why almost nothing in the environment seemed to work the way I expected it to ("Why is everyone so conservative?"), and I became, as one outsider described me, "hyper-cynical".

I had the 'luck' of being a research assistant at a prestigious academic collaboration involving multiple equally prestigious universities. This was in my bachelor years, and I still hadn't decided whether to pursue a career in academia or elsewhere.

While the day-to-day experience was definitely fun, it destroyed any desire I had of entering the field. A lot of politics, a lot of statistically suspect stuff (obvious even to me, in my third year of a bachelor's), and a lot of busywork.

After that experience I went into web development (full-stack). What I like about it is that even though there IS politics, even though there IS taking shortcuts, and god forgive me for some of the code I delivered, in the end whatever I work on has to actually do the thing it's supposed to do. It doesn't remove the aforementioned problems, but it grounds everything in a way that is mostly acceptable to me.

As frustrating as it can be to build some convoluted web app that feels like it's held together by scotch tape, it's nice to know that it eventually has to do whatever the client asks for, however flawed.

What does conservative mean in this context? Could you explain it a bit? Thanks!

Apologies, I meant conservative in the sense of resistance to contemplate new ideas rather than the political sense. Somewhat naively I had assumed that academic research was where people would be most welcoming of at least discussing new ideas, whereas I found the opposite to be true.

Actually in some sciences they are. But anything touching medicine... forget it.

Thanks! Could you give a few examples? About what were folks so conservative? Were they stubborn proving/supporting their own paradigm/hypothesis or ... they were just simply not open to any ideas? About methods or about theory? Both?

It was quite a long time ago (~30 years), but I suspect a lot of it was simply that senior academics didn't realise they were actually managers, had no interest in managing, and didn't even understand that there were problems.

If I had to guess, in the academic context it would mean no actual novel thinking: just churning out more papers on the same "winning" theories in the field, on questions where, before even starting, you have a clear idea of what the result will look like.

The problem with going to a startup is that it is kind of like going from the frying pan into the fire. As someone who has worked in both academia and industry: while academia and its pursuit of publications leads to bad behavior, industry and its pursuit of money is even more unprincipled. While it might not be that hard to fool peer reviewers with nonsense, it is way easier to fool venture capitalists, who often know no science and are just listening for the hot buzzwords.

Is that small-c conservative? Or do you mean rightwing? (curious, I assume the former...)

In either case, pretty much all humans are profoundly small-c conservative; "big change projects" on a societal scale do often end in war/death/etc. At best, it's probably 50/50 whether you get a "National Health Service" or a "World War".

However, the reason is deeper than that: evolution does not care whether you're thriving, it cares whether you're breeding. So you're optimized for "minimum safety", not "maximum flourishing".

So if things are stable, you will prefer to stay in them for as long as possible. It is why people often need to "hit rock bottom" before they can be helped: their local minimum needs to become unstable before they will prefer the uncertainty of change.

This is true, and in my opinion there is one more tendency which you also imply.

Not only the public at large, but even university graduates, start to an extent to distrust those who are "professionals" in academia. It is simply a whole other world, where you are judged only by the number of papers under your name, perhaps without ever having contributed to anything practical; it seems so detached from real life.

How does that old saying go?

Those that can, do. Those that can't, teach.

In my experience this is accurate in the overwhelming majority of cases.

It is worse. If you mention that you do not trust every scientific result per se, you're labeled as stupid and uneducated. This sort of absolutist reasoning makes the distrust even worse. How can you have trust in a system that is unwilling to publicly admit its shortcomings? Trust and honesty come in pairs.

This science-bro movement scares me too. "But SCIENCE said so! You're a SCIENCE denier!"

It feels like a religion, with its own T-shirts and all. Appeals to authority, intellectual posturing… often from people with little understanding of the actual science. Honest insiders are way more careful with any absolute statements.

No wonder there's a (also scary) rise of conspiracy theories.

How do people not observe those as two sides of the same coin?

I think people have to get more serious about separating science as a procedure from scientism (that is, philosophical issues that are often discussed in tandem). When one uses the phrase, “science denier”, it often means, “you don’t agree with my philosophy/metaphysics/economic policy” rather than “you deny these particular facts”, which causes people to be rightly concerned. I’m not optimistic that this is going to change anytime soon, but this, I think, accounts for many of the issues in current discourse.

In the UK we've seen a fascinating evolution from skeptic societies to science denial conspiracy theorists. To _massively_ simplify what's a relatively complex piece of sociological weirdness: using your intuition about how the world works is a good heuristic for spotting charlatans, but it fails you badly when the science tells you something that doesn't accord with your intuition.

I tend to limit my use of "science denier" when an organization or its followers systematically deny scientific knowledge on multiple unrelated fronts.

Interestingly, I have read that in the 1920s and 30s, there was actually an organized relativity denialist movement, that wrote articles and held public protests.

Relativity was a huge philosophical shift from the comparative simplicity of Newton's laws. It's not surprising that there was resistance to it.

Tesla was famously against relativity, telling the New York Times, "Einstein’s relativity work is a magnificent mathematical garb which fascinates, dazzles and makes people blind to the underlying errors. The theory is like a beggar clothed in purple whom ignorant people take for a king".

Indeed, and the anti-relativity movement also had a very strong undercurrent of antisemitism.

Chances are, most of the people marching against relativity had no clue about Newtonian mechanics, and were told stuff such as relativity leading to moral relativism.

Since I read Seeing Like a State, I've started to think "charlatan" whenever I hear the word "science". As in "scientific forestry", "climate science" (scientists who study Earth's climate call themselves meteorologists), "scientific racism". Is "computer science" an exception? I'm not game to speculate.

Which actual scientists describe themselves that way? We're physicists, geologists, botanists, psychologists or whatever. When someone says they're a scientist, it suggests that they're not part of any actual scientific discipline, but making a false appeal to authority.

>scientists who study Earth's climate call themselves meteorologists

This is just incorrect. Meteorologists don't study Earth's climate, they study weather. Meteorologists don't use ice cores or tree rings for their research, they study much shorter-term fluid dynamics. Climate scientists do study climate, and not weather. The disciplines are related (specifically, they're under atmospheric sciences), but to dismiss either one as being less scientific is picking favorites despite all evidence to the contrary. I suppose you could use the synonym "climatology" if you want a word without "science" in it, but it seems like a pretty silly heuristic regardless.

I don't like the science-bro movement, but I also think they might fill an important niche. The anti-science movement has too many people and too much time and too high of a (answer time / question time) gish-gallop ratio for scientists to possibly engage with. If scientists try to fight the anti-science crowd, they will lose.

Science bros, for all their faults, can trade blows on more even footing, and that's something. Perhaps even a vitally important something. Even if science bros aren't great at science proper, their contribution to societal consensus formation might be as important as the underlying science itself!

Science bros often misuse "anti-science" to try to shut down opinions they disagree with. Hence people worried about the unlikely event of being killed by a nuclear power plant are "anti-science", but people worried about the even more unlikely event of being killed by a superintelligent AI aren't. Misusing the word "science" (particularly by people who don't seem to have a good grasp of it) and turning it into a rhetorical cudgel is harmful, and pushes the idea that science is ideological.

Do you have an alternative for addressing the gish gallop issue?

We agree that science bros have problems, but unless you have an alternative I see them as a net positive, and not by a small margin.

Consensus formation is always messy, but that's not solved by losing.

I found that the movement you talk about is more about putting your faith in the "scientist" as opposed to the actual "science".

It seems much easier to find scientists who will toe your political line, and then people can use them as a resource to prove that unless you take this person's "expertise" as gospel, you are a science "denier".


> "But SCIENCE said so! You're a SCIENCE denier!"

This is a reincarnation of what used to be religion. Religion is alive and well, just not in the form our predecessors were familiar with.

My cousin was a student at a lab where a sketchy grad student doctored results too. She was majorly sketched out by the whole thing and that the PI supported the whole op. It was very painful and set her back a bit but she managed to switch to a different lab and do proper work, defend her thesis, graduate, and get far away from those people. Now we just shake our head in disbelief at them but at that point it was fairly existential. It's not easy to switch labs after some time there and people will sort of distrust you and everything.

My impression is that some large number of 'results' are fake. I can't even imagine what the fakery looks like in the non-hard sciences, when the hard sciences have this stuff.

Attention and effort will go to “well-presented fake”.

Marie Curie believed that radioactivity might have been caused by ghosts or the paranormal because of such things.[1] While there may actually be ghosts or other things paranormal, I’d bet that Marie Curie was fooled.

The good part is that Curie’s work persists, and we think we have more understanding about radioactive substances.

I’m not sure whether she had to spend time specifically debunking the ghost-of-radioactivity theory; that just happened because of her work studying radioactive substances and their effects.

[1]- https://www.famousscientists.org/scientists-who-believed-in-...

I find this fascinating. If we allow ourselves to entertain the idea of quantum time paradoxes, could it be that the radioactivity was in fact caused by the ghost of Marie Curie herself? She would have a very strong and obvious reason to haunt the science.

That sounds like self-imposed slavery: every time someone wants radioactivity, Marie Curie's ghost needs to show up and produce it. What with all the nuclear reactors and RTGs on far-flung spacecraft, she's a busy ghost.

Every now and then I go through some of Wikipedia's sources on certain social topics, because I don't trust them at all. The amount of BS I've found in papers, even though I don't even have any research background, is impressive.

My favourite was probably one paper where the author essentially made a Reddit post asking a community about themselves, then cherry-picked a few comments (the post is still up, with timestamps and all) and came to a conclusion that didn't really fit even those hand-picked comments.

In conclusion: Wikipedia is a dumpster fire and shouldn't be used for anything other than hard facts like dates and for entertainment.

Wikipedia is an encyclopaedia. It’s a secondary source that aggregates knowledge from primary sources.

For all the problems wikipedia does have, this isn’t one of them. It’s not their job to second guess published research.

Wikipedia actually aims to use secondary or tertiary sources, because of the likely bias in primary (and to some extent, secondary) sources. Statements of fact shouldn't be supported only by primary sources (publications), though these may be referenced for historical context. However, quality control on something as big as Wikipedia is essentially impossible.

> Wikipedia is an encyclopaedia

An encyclopaedia with rather low standards that many people sadly treat as an absolute source of truth.

You're right that this isn't really a Wikipedia problem, though. It's a matter of education, because an overwhelming majority of the population isn't competent enough to fact-check memes on Facebook, let alone Wikipedia; and if Wikipedia doesn't do it either, then that responsibility is pushed all the way back to the scientists doing the actual research.

This is an incredible lack of redundancy when you consider how important Wikipedia has become in shaping public opinion. It's a system where the scientific publication process is a single point of failure, and this article clearly shows that it does fail rather often.

So what way is there to make this process safer? There needs to be at least another link in the chain that confirms information, preferably two or three.

> An encyclopaedia with rather low standards

... that somehow manages to have articles on prominent subjects that are more in-depth and factual than any competing encyclopedic endeavor, while at the same time far surpassing them by orders of magnitude in breadth for obscure and less academic topics.

Wikipedia is not an encyclopedia in the traditional sense and can't be judged by the same standards. It is simply in a league of its own: it fails in different ways than traditional editor-controlled projects, and it is a fantastic repository of human knowledge and educational resource.

Is Wikipedia really as bad as you are claiming? Can you give some examples?

It seems to me like a lot of the articles are accurate, and some of the check marked or featured articles are downright great.

Wikipedia is overall great, though highly politicized on some topics; for science/engineering topics there's actual review by other Wikipedians and the sourcing is solid.

Moreover, the talk page always has anything that might be controversial about the article that you might be interested in.

Sure, on rare occasions it will display incorrect data, but that happens less and less as anti-vandalism bots become smarter.

> Moreover, the talk page always has anything that might be controversial about the article that you might be interested in.

This, plus access to the per-article revision history, ensures a much higher degree of transparency than any other comparable work.

Did you edit the Wikipedia page or challenge the interpretation in the talk page?

Wikipedia itself has descended into the same pathologies you see in 'science' today: a bunch of gatekeepers who, by virtue of having been there the longest, have set up a moat of rules and 'culture' to the point where newcomers are shut down or drowned out. I'm not saying it's impossible to get in, but only those who sufficiently mould themselves to the existing people and structures will last long enough to become fully accepted. And so the system sustains itself.

What are you even talking about?

I genuinely have no idea. Would you mind tangibly identifying how Wikipedia has descended the way you say?

They're not the only person to have made an honest effort to improve Wikipedia, only to be met with legalistic hostility and inertia.

I wanted to edit an article about an obscure religious group that included some blatantly wrong statements about its espoused ideology. The claims came from an academic tertiary source, extrapolated from research by the same author that made plausible claims but also included similar inferences. Being a very obscure group, there aren't many other academic sources discussing it, and all the literature that could disprove these claims comes from non-academics affiliated with the group, which is a no-go.

As per Wikipedia's rules (which took hours to figure out), there's not much one can do short of getting some impartial or friendly academic to publish a more reasonable article.

I already spend a lot of time trying to "fix the internet" and just don't have the stamina to also start fixing Wikipedia. I'm also put off by the constant stories of edit wars over certain controversial topics.

Do you mind telling us which paper it was? I have a faint idea which one you mean, because I've read a bunch about reddit, and I would love to know if it's the same one or something else.

This also describes medical research. Only it's even worse. The problem is, you have no choice if you want to work in a university hospital. The system essentially tells you: "you'll be doing shit science... or you'll leave!". I've been at it for more than 10 years, and there's no hint of change in sight. This is going to be really, really hard to change, unfortunately.

How is medical research shit science?

At least my research is. Mainly due to hierarchical pressure. And from what I see around me, most medical papers must be read with a healthy dose of skepticism. I've personally witnessed incredible feats of dishonesty that I won't describe here.

There are multiple reasons for the degraded research quality. An important one is spreadsheet incompetence. Another is that medical research goes hand in hand with academic achievement, which in medicine also means money and power (probably more than in most other fields). I guess we have the same kinds of problems as everyone else, overall.

One thing people often miss is that clinical data is of abysmal quality and reliability, so honest analysis is really difficult.

I'm a postdoc at a medical school, and this hasn't been my experience. At least in our setup, clinical data tends to be channeled into a collaboration with a computational lab who are better stewards of data handling. Is there cherry picking and over selling the results? Sure. Outright dishonesty is something I have yet to see in my current institution (I did see a fair deal of fudging in my graduate institute, though)

Well, I think things are beginning to be better managed in some centers. If that's your case, then good for you. In my center, it's basically the wild west and data management is a catastrophe.

But are you working on the clinical wards? Because things are definitely much better managed in places such as epidemiology units. The true horrors mostly come from clinical researchers digging into Excel spreadsheets without knowing a mean from a median.

I'm in a computational lab, but I think I understand what you're describing. My medical school was acquired a few years ago by a hospital network, encouraging us to collaborate with our new clinical researchers. The medical school itself had a strong background in rigorous basic research with animal models, and working with clinical samples was a relatively smooth transition. The data is obviously nowhere near as clean or plentiful as with animal models, but that's to be expected.

So for example, my lab's expertise was in single-cell developmental models, primarily for organ development in mice. Extending that to tumors from clinical samples was relatively straightforward. One of my colleagues is working on an autism dataset, but I wouldn't expect that to be anywhere near as clean.

I think a lot of people have deep enough pockets to fund a side lab. It's worth trying.

I am also from a molecular biology background and saw this often. We call these guys the "Golden Boys". They are super successful, but completely useless. If you still believe life is fair, wake up, sunshine.

Don't worry about it. It becomes a trap, and many of them lose their minds as they dig themselves into deeper and deeper holes, not knowing what else to do. Source: shrink in the family at a university counselling center. At the end of the day they are misguided people.

Even though there is a high price, their function is to train the survival skills of the honest folk who rise up the food chain. And don't have any doubt: those who rise have survived these types of people (usually thanks to the right networks and mentors), have developed their own tricks, and exist in large numbers.

I don't particularly care about him per se (though I'm sorry to burst your model of society: from what I hear over my residual science network, I'm pretty sure he's oblivious or doesn't care). I'm a bay area dev, and I'm making enough money and have made good enough investments in friends' startups that my only regret is not having started sooner. Hopefully I'll be able to cash out with enough to do my own biotech, so I'm just biding my time for now. But what does concern me is that this is endemic in chemistry. It's not talked about much outside, which makes me wonder if other sciences are just as bad and we just don't hear about it. The incentive structure and nearly nonexistent self-reporting accountability are just the same; and meanwhile everything operates under a general social deification of the sciences.

We hear it all the time. It's an ancient story. The history of science is full of these stories.

Misguided/driven/ambitious people are always looking for shortcuts, and they will find them. It's like dealing with mosquitoes, cockroaches, weeds, software bugs and cancer. It never ends.

I think the point of GP's story is that this isn't just one or two bad apples here and there, but that it's endemic in that domain - and most likely in others too (I'm leaning toward believing that; it's not the first story like this I've read in recent years).

Being an endemic problem means you have to switch your assumptions; when reading a random scientific paper, you're no longer thinking, "this is probably right, but I must be wary of mistakes" - you're thinking, "this is most likely utter bullshit, but maybe there's some salvageable insight in it".

I think once you've seen a few papers in high-tier journals that turn out to be bullshit once you start to dig a bit deeper, there is no other choice than to adopt this harsh stance on random scientific papers. Especially if you want to do work that expands on findings in other papers that look roughly sound, "trust but verify" seems to be the way to go.

I've only recently dipped by toes into academic life in a lab, but it very much seems that PIs generally know which are the bad apples. E.g. when discussing whether some data is good enough to be publishable the PIs reaction was something along the lines of "If we were FAMOUS_LAB_NAME it would be, but we want to do it in a way that holds up". So it seems like there are at least some barriers to how incompetence would hurt the whole field.

I'm also surprised that there is no mention of the PI in GP's story. As it's a paper published by the lab, it's not just on the grad student "to do the right thing", but even more on the more senior scientist, whose reputation is also at stake.

> I think once you've seen a few papers in high-tier journals that turn out to be bullshit once you start to dig a bit deeper, there is no other choice than to adopt this harsh stance on random scientific papers. Especially if you want to do work that expands on findings in other papers that look roughly sound, "trust but verify" seems to be the way to go.

Yeah, but I meant that in general case, you no longer "trust but verify", but "assume bullshit and hope there's a nugget of truth in the paper".

This has interesting implications for consuming science for non-academic use, too. I've been accused of being "anti-science" when I said this before, but I no longer trust arguments backed by citations around soft-ish fields like social sciences, dietetics or medicine. Even if the person posting a claim does good work of selecting citations (so they're not all saying something tangentially related, or much more specific, or "in mice!"), if the claim is counterintuitive and papers seem complex enough, I mentally code this as very weak evidence - i.e. most likely bullshit, but there were some papers claiming it, so if that comes up again, many times in different contexts, I may be willing to entertain the claim being true.

And stories like this make me extend this principle to biology and chemistry in general as well. I've burned myself enough times, getting excited about some result, only to later learn it was bunk.

The same pattern of course repeats outside academia, but more overtly - you can hardly trust any commercial communication either. At this point, I'm wondering how are we even managing to keep a society running? It's very hard work to make progress and contribute, if you have to assume everyone is either bullshitting, or repeating bullshit they've heard elsewhere.

Funny story: the PI noticed an error in one of my papers and I (happily) issued a very minor retraction. Also, in one of the threads I talked about how he did retract several years' worth of work done on a different project by the intern when she joined later. So he was alright. Plus, as a junior (2nd-year grad student) you really don't want to tattle on the NIH grad student of the year. Who do you think wrote the recommendation?

It's endemic in biology, and it's endemic in chemistry (I had feet in both camps). The sentiment you wrote in the last sentence is exactly what I feel whenever I read a paper; you hit the nail on the head.

The crazy thing is that the honest scientists are working at middling universities. It is worse the higher up you go. I have had the opportunity to work at an upper-midrange research university [time-] sandwiched between two very high-profile institutes. The institutes were way more corrupt. Like inviting the lab and the DARPA PM to hors d'oeuvres and cocktails at the institute leader's private mansion type of stuff (it turned out that that DARPA PM also had some weird scientific-overinterpretation skeletons / PI-railroading-the-whistleblower stuff in her closet, and for a stint was the CTO of a microsample blood diagnostics company. I can't make this shit up. I guess after Theranos it got too weird; she's now the CEO of another biotech startup -- how TF do people like this get VC money, while I can't get people to raise for some buddies with a growth-industry company and had to make the entire first investment myself?).

Of course, working at an upper-midrange university sucks for other reasons. Especially at state universities, the red tape is astounding. And the support staff is truly incompetent. Orders would fail to get placed, or would arrive and disappear (not even theft, just incompetence), all the time.

While the "host" (people who pay, often with minimal decision power over their resources) turns a blind eye, "parasites" (cheaters who profit disproportionately) proliferate. Is that really so surprising?

When somebody else foots the bill, it's feast time!

To be clear, I'm with you. Also a PhD-turned-industry, for much the same reasons. But I realize what you describe is a completely rational strategy. The options always come down to:

1) Try not to be a host – if you have the wherewithal

2) Try to be a parasite – if you have the stomach

3) Suck it up & stay salty – otherwise. You can call it a balance, equilibrium, natural order of things – whatever helps you sleep at night.

Take your pick and then choose appropriate means. Romantic resolutions and wishful thinking – kinda like the Atlas Shrugged solution for option 1) – rarely work.

There is a reason there is a huge replication crisis in academia, and it's exactly what you say above. When folks in industry need to develop a product based on a published paper, more often than not it's bullshit.

Yup, I was taught this as part of graduate school.

Nobody ever said it was fraud, they said things like they wouldn't share the data and I couldn't replicate.

In general, the incentives for shoddy science (get Nature papers or find a new career) tend to reward bad behaviour, and I just wasn't able to find something unexpected and pretend it had been my hypothesis all along (it's almost impossible to publish a social science paper where you disconfirm your major hypothesis).

The problem isn't that such people are getting away with unearned good feelings and so the fact that some may feel bad later isn't a solution or a reason not to worry. The problem is that they are wasting scientific resources (e.g. the time of the careful intern trying to reproduce flawed results), polluting research by publishing misleading findings, and discouraging legitimate research.

The problem is that there is no working system in place that makes such abuses of scientific truth visible.

We would need to get away from inefficient communication via publications and set a system in place that tracks findings in detail, and whether they can be replicated first.

But there is no willingness to do so after the US of A deeply harmed the scientific mission and academics by introducing infuriatingly dumb economic incentives into science.

> But there is no willingness to do so after the US of A deeply harmed the scientific mission and academics by introducing infuriatingly dumb economic incentives into science.

What are you referring to here?

No they don't. In my experience they become professors or get hired by top tier companies like Roche.

Reminds me of the protagonist in movie Fargo played by William H. Macy.

The number of incentives driving this kind of activity in science is disheartening.

So many papers get published, few are read widely, and even fewer are replicated; they'll still get citations if the talk circuit is played right. Citations are what advance a scientist's career, and anything that could be brushed off as an unfortunate statistical anomaly or error is unlikely to end one.

In such a world, "optimal play" would be to intentionally or unintentionally p-hack, or just slightly embellish results such that the work is interesting to cite but not interesting enough to replicate. People who do this will eventually move up ahead of everyone else, ultimately favoring incremental but bogus results.
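To make the p-hacking point above concrete, here is a minimal simulation sketch (not from the original comment; parameter values are illustrative). It shows why trying many analyses and reporting only the best one is such effective "optimal play": under the null, each p-value is uniform on [0, 1], so the minimum of 20 of them clears the 0.05 bar most of the time.

```python
import random

random.seed(0)

ALPHA = 0.05
N_TESTS = 20        # analyses tried per "study" (subgroups, outcomes, covariates...)
N_STUDIES = 10_000  # simulated studies, all with zero real effect

# Under the null hypothesis a p-value is uniform on [0, 1].
# A p-hacker runs N_TESTS analyses and reports only the smallest p.
false_positives = sum(
    min(random.random() for _ in range(N_TESTS)) < ALPHA
    for _ in range(N_STUDIES)
)

rate = false_positives / N_STUDIES
print(f"'significant' findings despite zero real effects: {rate:.0%}")
# analytically: 1 - 0.95**20 ≈ 0.64, i.e. ~64% of null studies
# yield at least one publishable p < .05
```

This is exactly what multiple-comparison corrections (Bonferroni, false discovery rate) and pre-registration are meant to prevent; without them, the honest single-test researcher is competing against a ~64% hit rate.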

The thing I find disheartening is that if fraudulent results are being cited, it must mean that the mechanism of "standing on the shoulders of giants" is not working. One would expect that these papers would be contributions that citing scientists could benefit from and use in their own work with impact. For example if scientist A truly developed an O(N) sorting algorithm, then a scientist B might use it in their work to derive some other result.

I guess in some fields of science the effective dependency graph of academic work is very flat, and the true results get plucked and developed by industry (being true results it is actually possible to meet the higher reproducibility bar there). And the citations don't actually reflect the true dependencies, but some political/social graph instead. Too bad.

> And the citations don't actually reflect the true dependencies, but some political/social graph instead. Too bad.

I think this gets to the major concern with Academia today, as it becomes somewhat of a self-reinforcing feedback loop. Curry citations with political savvy, get awarded grants due to citations and political savvy, show that you are productive due to citations, grants, and political savvy - earning yet more political capital.

This will probably become my go to explanation for why Academic CS research has largely become decoupled from industrial application and industrial research. While political savvy is important in a large corporation, eventually you need to produce results.

This is incredible. And I thought I had it bad with politics in tech companies... this is some next-level not giving a fuck right there - people who cheat like that should be punished severely and work as supermarket cashiers, not become fricking professors. Unfortunately I too, were I in your shoes, wouldn't pursue it much further past filing a formal complaint or two: the game is asymmetrical; it's much harder to nail someone for wrongdoing than it is for them to fudge up some lab results. Not to mention the emotional toll, waste of time, and potential political blowback the would-be whistleblower would suffer...

Pretty depressing stuff.

About half of my friends in grad school have had their careers damaged to varying extents by academic fraud. Some have wasted lives chasing bad results (one friend lost several years chasing a bad result by Homme Hellinga); some have had bad stuff perpetrated upon them by bad actors with big names (one had her result stolen - we suspect - by Carolyn Bertozzi via the review process; luckily her boss was a member of the NAS and PNAS track-III'd the paper ahead of Bertozzi's publication).

Just be aware that it's not always like this, and that some fields are less prone to it than others.

In my 8 years in research mathematics, I didn't see a single case that would come close to this horror show (not that mathematics is free of unethical behavior, of course). Collaborating with biologists, however, I got exposed to a world far more backstabby than I've since experienced in the corporate world.

I think this is largely because results in math are easily verifiable compared to chemistry, or as an even worse example, the social sciences. The latter are also suffering from the replication crisis the most.

Math has a different problem. Because of the wide breadth of the field, and highly specialized nature of problems, it can take a very long time for anyone to actually verify a result with confidence. If ever. Unless you’re doing something famous like P!=NP, there might not be many people capable of checking your work in a reasonable amount of time.

The story of Fermat's Last Theorem is a great example: what would have happened if that wasn't a famous problem?

I agree, even proofs are wrong more often than you’d think, but I’m not sure whether math is actually so uniquely broad that other fields don’t suffer from this problem.

Maybe it's not its breadth, but its depth. That isn't to say that other fields aren't deep, don't get me wrong. But the more tightly coupled with the high-level physical world a field is (think, for example, medicine or biology), the more it is prone to having technological advances from the outside make new sub-fields crop up and old ones die. Think, for example, of the multitude of research areas made possible by gene editing, or high-resolution NMR imaging.

Of course this happens to some extent in math too, but a lot of subfields aren't killed or born due to outside technological changes. Number theory remains number theory, and still builds directly on centuries of work, even if computer verification has helped in some cases (disclaimer: I'm not a number theorist).

For most subfields of mathematics, you have a lot of depth to cover before you get to the forefront of research. That isn't to say that it's by any means easy to get to the forefront of more high-level physical sciences, but there are certainly subfields in biology or medicine that didn't exist a mere 40 years ago (also true in math, but in general far more rare there).

Also, math can hinge on small technicalities. Andrew Wiles got a preliminary review of his proof of the Fermat conjecture and all seemed well; an in-depth review later found a serious gap/flaw, which, with hard work and luck, he fortunately could plug. In contrast, if you invent the electron microscope and you get images, you still made the invention (even if small or even big details might be wrong, and the result could be better). In other sciences the gist is often not affected.

I moved from Physics to Biology. It was quite a culture shock and I ended up leaving after a few years. The amount of shady practices, outright data manipulation, and PIs ignoring students just making stuff up was hard to be around.

The modern financial system is making a butchery out of honest people. I've seen it happen over and over at many different companies and industries.

Educational institutions are rotting from the inside. Idiots were rewarded at the expense of intelligent people, and now the idiots have taken control and are rewarding other idiots. If you want to know what happens next, watch 'Idiocracy' or 'Planet of the Apes'. At this rate, it will certainly take less than 500 years to get there.

You can see it based on how slow scientific development has gotten; there are very few major new breakthroughs compared to before... Most of the ones that get attention are BS.

> Most of the ones that get attention are BS

Arsenic life was the big one when I was a postdoc

Tardigrade DNA is a new one, so popular that it became a major plot point in Star Trek. Turned out it was probably just a sloppy grad student not being careful with their samples/not taking into account microbes physically hitching a ride on the tardigrade

> You can see it based on how slow scientific development has gotten

I feel that the cause and effect are reversed: while low-hanging fruit was available and getting discovered, it was a lot harder to get away with fraudulent results. But now that we're facing diminishing returns, and more fish in the pond due to years of overtraining, fraud is easier to sell.

I just wanted to say that this resonated with me, as a former grad student involved in protein research who is now doing dev work. I hope you're doing well these days!

I'm doing pretty well, thank you! I enjoy coding; I've always been a better coder than a biochemist (likely on account of having started coding at age 5 and biochemistry at age 19). I'm still doing garage science here and there, and being a programmer gives me both the money to afford that and some time to do it.

i have been thinking a lot about this problem. society has innumerable unsolved problems in healthcare, and many talented people who would like to contribute but cannot.

in software, the open-source model allows people to advance critical initiatives without quitting their day jobs or making onerous commitments.

how can we achieve the same in healthcare, that is let outsiders contribute and advance the state of the art?

I don't know if it will "allow outsiders to contribute", but I would like to see a biotech that makes patent-free drugs. I tried to make a nonprofit out of that, but there was a lot I didn't understand about how I work, how the world works, and how to get things done, so I will take another crack at it in 5-10 years.

i believe this is not only possible, but will happen sooner rather than later because of advancing capabilities in software, machine learning, and collaboration. we simply need the right people providing capital and launching these biotechs.

re patents, the key is to drive down costs for research and testing. research seems like the low-hanging fruit, comparatively speaking, but it's unclear how to reduce the costs of clinical trials in an uncontroversial way.

> how can we achieve the same in healthcare, that is let outsiders contribute and advance the state of the art?

The biohacking community is actually really adept, and has made a lot of progress in making science accessible; prior to COVID you already had teams working together across continents and time zones. So when someone like Josiah Zayner wanted to tackle a COVID vaccination trial on himself and other biohackers, they already had the means and methods ready to go.

The problem is that if you want to play by their (academia's) rules, you're never going to make any inroads: you can't publish, no one will give you a grant for your work, and you're not going to be the chair of anything even if your work pans out. But certain therapies now in development started off as biopunk/biohacker projects.

It's super exciting and hard, but also way more work than just BSing your way into a professor role in academia, which is an all-too-common occurrence. Professional students becoming mediocre professors was a far worse problem in the sciences than I could have ever imagined. The ones I really felt bad for were the postdocs with actual meaningful research, often with severe social anxiety and poor speaking skills, who were forced to teach undergrads and simply read the book aloud as 'lecture.' My organic chemistry professor comes to mind. My inorganic professor (did his MSc at Cambridge!) was a rockstar to us undergrads and would hold office hours during his lunch break between lab research, and the university made him protest before they'd release back pay during the cuts and layoffs. It was pathetic and I felt so bad for him; my exit review was scathing of the university, and I've never really forgiven them for that.

Obviously, with no VC model in science for anything but the most brazen outliers (Theranos), it's unlikely to happen. Personally, I'd volunteer to help middle school or HS kids get involved in plant and ag science (and take some on in culinary, if such an industry still exists in the US after COVID) and help them bypass the university track altogether. That is what I focused on after I left lab work, but there aren't many avenues for this model to scale up to massive projects, due to a lack of funding. And the money and stability are abysmal, but the science, and the fraternity of actual scientists doing meaningful work, is probably more than half the reason most of us decided to study it in the first place.

Chamath needs to stop pretending to care about politics and solve real problems, like using his billions to fund community science wet labs next to libraries, helping the youth care about science in a meaningful way instead of wasting their time on TikTok or Instagram.

Josiah's videos about demystifying the COVID vaccine all got taken down, then the entire channel got shut down. Super disappointing move by YouTube. It was definitely one of my favorite channels for science education.

(squinting suspiciously) ... exactly why did they get taken down? The Algorithm has a well-earned reputation for being capricious, but there's also a ton of good-sounding bullshit out there.

> (squinting suspiciously) ... exactly why did they get taken down? The Algorithm has a well-earned reputation for being capricious, but there's also a ton of good-sounding bullshit out there.

Theories abound, but most/all of these platforms don't have to provide an explanation, and the end user has little to no recourse on the matter: so far it's been YouTube, Patreon and Facebook, none of which have followed up. Here is Josiah on an alt platform (Odysee) explaining the situation through his eyes [0].

It's sad to see a pioneer of biohacking dismissing p2p solutions like torrenting, and even Bitcoin, as ways to bypass the censorship, but I think a lot of this just has to do with the clunky nature of their former (or perhaps even current) UI/UX for people with limited time, attention, or familiarity with tech solutions, especially since distributing your content on YouTube was just a simple click.

I honestly could have him up and running in a day or two with a solution, just in case PayPal does in fact shut him down, that would take fiat/CC payment upfront and convert into BTC if needed; the reason BTC is needed is that PayPal or a bank can shut you out of your funds if you are already a target. It would mainly be a settlement network, with only slightly more steps than what he is used to. But he is right about volatility, as that cannot be helped right now.

I kind of want to reach out, but I'm dealing with more than I want at the moment due to COVID in my family. It's something I'm considering, though, because Josiah is such a massive inspiration to us biohackers that deplatforming from the big platforms should be the canary in the coal mine. They even shut down his Patreon!

0: https://odysee.com/@BiohackThePlanet:3/biohack-the-planet-wi...

> Josiah's videos about demystifying the COVID vaccine all got taken down, then the entire channel got shut down. Super disappointing move by YouTube. It was definitely one of my favorite channels for science education.

It seems from his Twitter that he even left Oakland for Austin back in December, when that all went down.

But then look at how WSB was shut down when it presented a real threat to the establishment. I think this is the same thing happening, but Josiah and the CDC were actually just informing people how gene therapies work, in the most biohacker/biopunk way, which is near and dear to my heart for reasons I already explained.

I am not doubting your story, but I know plenty of solid careful scientists doing honest work and being successful. One of these successful scientists self-retracted a high impact paper after he discovered that he had made a data coding mistake. It was painful, but he did the right thing of his own accord, even though it halted work on several follow-up papers that he was drafting.

How is their story related to you knowing scientists?

I am simply providing a counter-example of academic integrity to make the point that one's personal experiences, good or bad, may not reflect what is generally true of academia/science.

I think that probably most people show integrity... but it's a problem if review processes, editorial mechanisms, and culture reward those who don't show it.

The problem is a lack of data publishing. All data should be published; all conclusions published (preferably with the code that generated them), so that corrections can be made and improved conclusions drawn easily.

> The problem is of a lack of data publishing

Agreed. If you have any recommendations for long-term public data archival they would be greatly appreciated. OSF recently instituted a 50 GB cap which rules out publishing many types of raw data, and subscription options (AWS, Dropbox, etc.) will lead to link rot when the uploading author changes jobs or retires, or the project's money runs out. Sure, publishing summary spreadsheets is a good first step, but there should be a public place for video and other large data files. IPFS was previously suggested but the data still needs to be hosted somewhere. Maybe YouTube is the best option, despite transcoding?

I have no answers. If the scientific community were cooperative enough it could perhaps come up with a shared platform, but it'd be hard to associate budgets with shared resources.

The exact same thing happened to me with a high-profile professor who sits on the editorial boards of top conferences. He was interested in shilling his dataset far and wide, and would do whatever it takes to show the "value add" of his stuff on other tasks. The smart ones just gave him the number he wanted. I did the work and told him the value add was dubious at best. He found someone else to give him the number he wanted to tell the story he had pre-determined, and made claims in the paper that didn't even line up with the published results in that very paper. The music goes on, and the whole experience was a complete waste of life.

No wonder we have a crisis of faith in our institutions.

The irony is that the intern, after three years of suffering, twisted the arm of our PI, convinced him to do the right thing, and got an 11-page retraction identifying and confirming the source of her artifact. This diligence got her a job at a big pharma company.

Of course her paper should have been a cautionary tale, but there are still people using the flawed technique for high-throughput studies to this day.

That's exactly why I left science, too. I saw people around me publishing artifacts and not getting caught. I realized I couldn't compete and left.

I did my undergrad at UW and was heavily involved with the GS department.

There’s only so many people this could be lol, really makes me wonder.

Edit: found who it is. Why am I not surprised?

I don't know anything about this field and I certainly don't know any people involved, but this comment made me curious and after a few google searches I think I can tell who it is too.

Which leads me into some thoughts about not rushing to judgement. I believe the commenter above is doing his best to be a reliable narrator, but it's always possible there was more to this story that was not visible to him at the time, that might exculpate a bit. It's also notable that people change over time, can improve on their faults, and might have learned something in the years since. Best not to view their past mistakes as forever damaging.

I also did a few Google searches and am no closer to figuring out who this is supposed to be :(

For what it's worth, I agree with you that we shouldn't rush to judgement. While it's certainly possible in this particular case that there was genuine misconduct, quite often there is a simple misunderstanding.

As an anecdote: during my graduate work I had a fellow PhD candidate convinced his guide was out to sabotage his work, because it 'threatened' to overthrow the guide's long-established model. He was convinced that the prior student's work that clashed with his was fudged, and that the PI was covering it up. Is it possible? Sure, but not very likely. It's a tad convenient when the people you disagree with also happen to be mustache-twirling villains.

I've seen a general trend with young academics at the beginning of their scientific career. They tend to be exuberant, convinced of their own superiority. Until that point, they've tended to be the smartest person in the room, the pick of the lot from among their peers. Hit graduate school, and suddenly everybody around you is just as smart as you, but that appreciation takes a few years to sink in. When your experiments don't work, it's hard to digest and easy to imagine the other guy cutting corners. I'm not suggesting that this is what happened with the top level comment, but could explain many of the other comments I see here.

Those people can still be useful:

When there is an open question, with important consequences but unclear resolution, it is hard to know the right answer. Somehow, it is easier to know the wrong answer, and that person will reach for it immediately. So, watch him and choose the opposite.

In any group there is such a person, called the Oracle of Wrong, and almost anybody can tell you who it is. He is the one most likely to wear a trilby, and no wrong choice he has made has ever caused him any personal discomfort.

What an interesting but sadly somewhat common story. Thanks for sharing! I'm an undergrad electronics student, so basically a world apart in terms of skill and department, but this is one of the reasons I do not wish to pursue academia and will instead focus on interesting jobs.

> In graduate school, in my lab there was a grad student who was kind of an unlikely "professor's pet". He was tall and had surfer's long hair with a bit of a hippie aesthetic. Anyways, he was also really completely clueless about how to do science correctly, but also, I guess, really good about playing politics (there was a time when he asked me to put some bacterial plasmid DNA on my mammalian cells. I told him "it doesn't work that way", but I did it anyways and handed over the cells, and he got the observation he was expecting). On his main project he was teamed up with a super sketchy foreign postdoc that I was convinced would say anything to get high profile papers out.

God damn, just this paragraph alone made me remember why I ran like hell after my undergrad, even during the financial crisis of 2008's horrible job market and while up to my eyeballs in debt; I saw the politicking behind what it took just to get a department to give a nod to a tenured professor's peer-reviewed paper.

It was fucking pathetic and I've never been more ashamed of what would be my profession than at that moment, but it set the tone for what to expect and made me realize just how irreparably marred that system is. It was followed by a sense of dread that nothing I could do would ever change it, so I turned down the offer to work in said professor's lab to carry things on into grad school (MS) and just worked as hard as possible to pay off my debts and pivot my life entirely. I'd rather sweep and clean floors helping a small business grow into something real than ever go back to that despicable environment.

Academia is definitely a mind-prison, and a trap for so many brilliant minds that may not have the ability or wherewithal to try their hand at a startup, or the necessary paperwork (citizenship) to take on private-sector work, which itself carries a ton of pitfalls.

There are some benefits to the university model, but I really hope COVID disrupts the monopoly universities have over this domain for good! Ed-tech really should be a much bigger source of funding and development, but FAANG just keeps suckering in people who could otherwise do something actually useful for society.

> What do you think he did? Nothing, of course. He kept on the talks circuit, still talking about how exceptional his discovery was, and to date there have been no retractions. He even won the NIH grad student of the year award.

> Oh. What happened to the grad student? He's a professor in the genomics department at UW.

He is literally the academic 'Big Head' character from Silicon Valley that every lab/department has. I could speak of my own experiences further, nothing as bad as yours, but I really don't feel like ruining my evening any further.

> I am also from a molecular biology background and saw this often. We call these guys the "Golden Boys". They are super successful, but completely useless. If you still believe life is fair, wake up, sunshine.

Same. I should have made the leap to microbiology in junior year, but I just wanted to GTFO and even abandoned my double-major (Biochemistry) work just to speed up the process.

I would say he is more of an Erlich Bachman than a Bighetti.

The guy teaches at UW now; it's not Stanford, but it's definitely Big Head's plot.

> The epilog is that after a decade of floundering I realized that even though I am pretty good at science, I was no good at playing academic politics and quit the pursuit; I drove for lyft/uber for a bit, and now I'm a backend dev.

I'm really sorry. It seems a lot of people are hit by a wall of cruelty. More is less in our lives.

Have you thought of joining a biohacker lab to keep enjoying your talent and curiosity in your original field?

This is fundamentally because the risk-reward incentive structure is absolutely broken right now. If your lab expects productivity and transformative science (basically, a Nature paper every month would be good!), then something has to give.

And this rot starts all the way from funding agencies (NIH/NSF/DOE) who have become hardcore bean counters.

I am taking up a master's (Materials Science) after a spell in the corporate world. I was really hoping to go into research and academia, so this is quite disappointing to hear. Then again, it helps to remove any expectation that any field would be devoid of politics. I'm a bit relieved to be disillusioned now rather than much later.

Shouldn't you name and shame this guy in this comment? It doesn't seem like he deserves anonymity.

I think the grad student was this guy: https://twitter.com/dougfowler42 (Douglas M Fowler)

His work at Scripps matches the same research group and timeframe of when dnautics was there, and he's now a professor in UW's genomics lab. The topic described seems to fit what he was researching then, and he received a prestigious grad student award for it.

This is so dastardly common that sometimes I'm surprised science works at all.

You should tell UW about this.

This is the sign of someone who has never been in grad school

Yes, universities will always protect insiders over outsiders. You can go on Twitter or something and post, but it will likely just be seen as ranting by someone who couldn't make it, and most outsiders don't understand the system anyway.

OP is not in school anymore. What obligation does he have to stay silent?

What obligation does UW have to listen to him? Better to make a fuss on Twitter and let this get to donors who will have a thing or two to say to the department chair and head of school/deans/provost/president.

As a public institution, UW has an obligation to take seriously and investigate allegations of academic fraud. Besides that they also have a reputation to maintain.

The reputation to maintain is why they will protect the insider over the outsider. Unless it is really blatant or the insider is really junior or new, universities will do everything in their power to silence the allegations.

> a super sketchy foreign postdoc

I'm unsure of how the term "foreign" is being used above. Is it implied as a pejorative there? For example, if OP had written "a super sketchy white postdoc", or "a super sketchy black postdoc", would the HN community tolerate that?

I think in this case it's probably relevant, as it does not exactly make fixing these things easier. For example, later he points out that he can't speak the language of the institution this person works at. I don't think it's meant to accuse foreigners of fraudulent science here.

I agree that's unnecessary detail.

Sure, I guess I should have been more specific: he was a postdoc from a country where getting one really awesome paper in a lab with a moderately good name (which ours was) would be an instant ticket to a tenured professorship at the top academic institution in the country. That should give you an idea of the incentives at play. That doesn't necessarily make him sketchy. But he was also a sketchy human.

It's not about race, it's about the quality of the academic system, which is bad in many countries. I suppose GP intended it to compound - as in the guy was sketchy per se, and from a sketchy place.

> the academic system, which is bad in many countries. I suppose GP intended it to compound - as in the guy was sketchy per se, and from a sketchy place.

Is it good here? Is it considered less sketchy here?

Evidently it’s not perfect here but yes, it’s less sketchy, because there’s far worse out there.

All you're arguing is that it's "not the most sketchy" not that it's less sketchy than the average. And you're offering no proof or example of that either.

If you're going to get hung up on technical details, note that GP didn't ask whether it's less sketchy than average, just whether it's less sketchy.

Finding concrete proof or examples is obviously hard in this subject matter (how are you going to prove something as abstract as sketchiness), but here's one observation: predatory conferences mostly only exist outside the West. To be even more concrete, two of the most infamous predatory publishers (WASET and OMICS) are based in Turkey and India respectively. You generally won't find something nearly as sketchy in the West.

>two of the most infamous predatory publishers (WASET and OMICS) are based in Turkey and India respectively

Well, as an Indian postdoc working in the US, I can speak to some of these sketchy behaviours. In terms of the predatory publishers, my Indian institution had its own filters, and most labs have their own as well. For example, for a while we had an institutional restriction on submitting manuscripts to conference proceedings, with the justification that the hard time limit equals substandard peer review. In addition, for the longest time we were not allowed to submit anything to open-access journals, with similar justifications. Publishing in a journal with an IF < 8 was also frowned upon, and the institution would not cover publication expenses. AFAIK other institutions had similar filters for publications. I would regard my institution as a decent one, but nowhere near the best in my field in India.

Who does publish in these predatory journals? Smaller, less well-funded universities with desperate students, ever since the government mandated first-author publications as a requirement for receiving PhD degrees.

The article mentions Brandolini’s Law:

“The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it.”

It's the first time I've heard it, but it's a very appropriate observation in today's world where misinformation travels faster and wider than correct information. If you're just making stuff up, it's much faster than looking up sources.

You're absolutely right - the big question of the day is not just "how do we counter disinformation?" but "how do we counter it at scale?" The bad-faith actors of the world have realized how incredibly cost-effective disinformation is in an online world that can massively amplify messages, and which algorithmically selects for divisiveness and "engagement" rather than factuality and utility. We need a CAPTCHA for truth - but I'm not sure such a thing is even possible without AGI. So what does that leave us with - making algorithmic message amplification illegal? Putting that genie back in the bottle isn't going to be easy, so we'd need to be damn sure it's the right thing to do to ever drum up enough support to get legislation like that passed.

Isn't the answer well known (improving education and encouraging critical thinking), but unsatisfying because it is hard to implement and changes only crystallize slowly? If everyone were to, e.g., request sources for claims instead of taking them at face value, that would act similarly to a CAPTCHA and prevent the spread of misinformation.

This is a solved problem for people who like to spread disinformation at scale. You link bomb.

Ideally you have a cache of extremely long messages in which you selectively quote small sections of sources, out of context, that seemingly prove your point but on a careful read are unrelated or actually contradict it.

But there's ten or fifteen "sources" and by the time you read through the post, all the articles posted, and form a coherent argument contradicting it, they've already posted a bunch of other places and/or the thread has moved on.

That's the ideal case, where you're inclined to waste 20 minutes arguing against a comment on the internet and there isn't a mix of legitimate sources with total bullshit sources forcing you to do a secondary hop to prove a point against the fake source.

I don't know if improving education and encouraging critical thinking is actually the silver bullet one may hope it is. HN considers itself a cohort with significantly more critical-thinking ability, but it makes broad, unsubstantiated, highly upvoted comments whenever a political subject comes up involving censorship, hallucinogens, systemic oppression, or unions.

Of course, HN is also an invaluable resource when it comes to tech and sometimes other STEM subjects. It's just significantly less valuable for areas completely outside of it. I wouldn't trust HN as a neutral or critically thinking source for, say, the usefulness (or lack of usefulness) of gender studies.

Sure, but again, we need a solution that scales, and there's no empirical evidence that this is a scalable solution. We also wouldn't need this if everyone would just be nice to each other and not spread disinformation in the first place, but that's not really much of a realistic solution either, just wishful thinking.

Aligning the incentives is the solution. There's no downside to bullshitting because our institutions are built on it.

Reminds me of "a lie can run around the world before the truth has got its boots on".

That precise quote is from Pratchett but there are similar, earlier citations https://quoteinvestigator.com/2014/07/13/truth/

Aren't these all variations on well known aphorisms such as "publish or perish"?

That aphorism applies to publishing within the scientific community and the need for grants, tenure, results, etc.

I think the ones above are much more general than 'publish or perish'.

Publish or perish is the opposite of "silence is golden" more than a statement on the speed of lies and truths.

Information travels very quickly through a medium that wishes it to be true.

You will find that the ability of the human mind to be critical, to refute with very salient arguments, is suddenly acute when the mind doesn't wish something to be true, and this definitely also applies to HN comments.

That HN in this case is so accepting of this one side of the story suggests to me that this is the side it wants to be true, notwithstanding that it might entirely be, or not be, true.

In your mind, is that "one side" that misconduct happens? Do you think the opposite side "it never happens" is reasonable?

No one here is trying to argue that it happens all the time or more often than not, I'm wondering if that's what you think we're reading.

I find that when a side has an "opposing side", rarely is either side reasonable.

In this case, "misconduct happens" is not the opposite of "it never happens", and I do not find the comments to echo the former sentiment as much as "Academia has become so rife with either outright malice or an inability to catch earnest mistakes that virtually no research can be trusted."

> No one here is trying to argue that it happens all the time or more often than not, I'm wondering if that's what you think we're reading.

No one is indeed arguing that, but what many, including me, are arguing is that nothing can really be trusted any more because it's a coinflip whether data is even reproducible.

There are definitely two sides to your story then :) Not necessarily to the top comments I have read. It might be a more important story though.

On the other hand, we are presented with the raw data that makes the author of this article suspicious.

Even in the face of such evidence, it often turns out that when the other side tells its story, it's more reasonable than that and there are explanations.


It's no wonder there is now a trend of just outright rejecting the information presented, when our "trusted" sources of information are very susceptible to malice and error without any real tools to combat them.

My current view is that academic research should not be used as proof of anything and only as the starting point for your own research. And by your own research I mean your own actual tests. The papers can point you in the right direction but their findings should not be taken as fact.

I don't see how this is practical. You could spend a lifetime testing the work of others and still not get through it all, let alone get to working on anything original. Progress is made by building on the work of others.

The point is not to test everything ever published; it's that when you want to do X, you look for papers on X, understanding that they are likely flawed but better than starting from scratch.

This still doesn't make sense. For example, say I want to paint my house with a less toxic paint. If I can't trust any academic research, I now have to research what is toxic in paint? Then I have to find ways to measure various chemicals and gases? Etc.

This seems like a complete utter waste of time.

In real life, most life-impacting academic research is much more right than wrong. You are far better served assuming so, unless you want to waste your time going back to basic science and rebuilding all the academic knowledge underlying most things you wish to do.

I think what you’re missing is that academic research focuses on novelty, not basic facts. Ultimately not trusting novelty can save time. Basic facts can be found in reference material.

So it’s more like suppose you want to paint your house green, and you read that somebody says you can mix red and blue paint to make a really cool green paint. Instead of immediately going out and buying enough red and blue paint to cover your whole house, first buy a small amount of red and blue paint, mix them together, and see if you get that neat green paint.

It’s common sense, but the window dressings of academia can lead you to burn time and money on things that are totally silly because somebody important-sounding said they did it once.

Where people get burned is that there's an enormous power imbalance: junior scientists can end up stuck trying and failing to make green paint out of red and blue paint, because nobody senior is going to take them seriously if they can't make green paint. This presents a serious ethical challenge if making green paint is impossible.

What are "basic facts"? Surely the point of most research is to uncover new facts? And what is "reference material" if not other research - research that you're using as a foundation for your own?

It's fair to question things, especially if they don't make sense to you and even if acknowledged authorities are behind them. However, (1) something that you may question is not necessarily something I may question, and (2) questioning may be a waste of time.

If a paper that says mixing red and blue paint makes green paint has a thousand citations, perhaps you don't need to question it because others already have. If you can't reproduce it, the simplest thing to do is ask an expert who says it is possible to do it.

It’s not as simple as buying paint. You’re not going to use any treatment where research came from a medical school or associated institute without personally proving it works first? Good luck!

If making green paint is impossible, I think it will eventually self-correct, or is simply inconsequential. In some instances it may take a while, but if the alternative is to re-prove a result before using it, that seems like something only a fool would do, or someone with infinite time.

I read Retraction Watch every day, so I'm used to seeing stories like this. But I'm always surprised at the effort people go through over articles published in garbage journals. You're never going to even make a dent.

The garbage journals are a plague. You publish one article in a legitimate journal and in next to no time these worthless journals (and conferences) start spamming you, even if what you wrote has nothing to do with what they're supposedly publishing.

Also related: https://en.wikipedia.org/wiki/Gish_gallop

I see this widely used by antivaxxers now.

This goes double for privacy.

What about Editor's ethics or lack thereof?

I got accepted to a Chinese-oriented journal (i.e., most of the editorial board were Chinese). I'm not just 'saying' this; I mention it because the OP said "it's a Chinese thing" over results and datasets. Whatever, I digress.

On the last revision round, the editor told me that I was lacking some references, which he promptly sent me. It turned out that 6 out of 6 of his 'recommendations' were papers of which HE WAS ONE OF THE AUTHORS.

Since the paper was not yet OFFICIALLY accepted, I caved in and cited the guy (3 times), to my UTTER DISMAY.

If you don't play the game, other Chinese researchers are playing it and getting the results.

I don't mean to insult Chinese people, but this is what is happening...

Oh, this is not a China thing. I've had a paper where a bunch of reviewers each suggested a bunch of references. Every bunch had every paper share at least one author. Every bunch was pairwise disjoint in the author sets. Draw your own conclusions.

Edit: just to be clear, I didn't at the time read that as a "submission tax". More as them trying to be helpful and suggesting things they personally were familiar with. Most, if not all, of the extra references would have made our paper better... if we weren't fighting that damned page limit, that is.

More and more evidence that the journal-based publication route is a net negative for science.

Question is, why don't scientists just put everything on public platforms (read: github) and call it a day? Is it only a matter of funding, or do other factors also play a role in that?

Because nobody reads it there and, more importantly, funders don't recognise the work you've done there. The "prestige" (as indicated by the scientific-looking but mostly inaccurate "Impact Factor") of the journal you publish in determines how good they think your work is.

I wrote about that a while ago here: https://medium.com/flockademic/the-ridiculous-number-that-ca...
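For context, the two-year Impact Factor mentioned above is just a simple ratio. Here is a minimal sketch of the calculation, with made-up numbers rather than real journal data:

```python
# Sketch of the two-year Journal Impact Factor calculation.
# All figures below are hypothetical, for illustration only.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """IF for year Y = citations received in Y to items published in
    Y-1 and Y-2, divided by the number of citable items published in
    Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 210 citations in 2021 to papers from 2019-2020,
# during which it published 100 citable items in total.
print(impact_factor(210, 100))  # -> 2.1
```

Note how coarse the number is: it says nothing about the distribution (a few highly cited papers can carry an otherwise unread journal), which is part of why it is such a poor proxy for the quality of any individual paper.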

> Because nobody reads it there

That's a problem that would fix itself the moment most useful research was mainly available on such platforms.

> more importantly, funders don't recognise the work you've done there

Once again, that sounds like mostly a problem that would disappear if a large migration to open platforms was to happen.


So the main problem seems to be that there's no incentive to be among the first to make the move? IIRC it's often the journals that don't want content published elsewhere, so I guess just doing both is also not that simple.

Yep exactly, it's a classic coordination problem.

For all its faults, peer review is still the best mechanism to keep science on the right track.

What you propose would mean Twitter or Facebook replacing those journals; people with huge Twitter followings, or "celebrity" scientists, would dominate science, and the work of people without such marketing skills would get drowned out.

(This is sort of true for current system too, but I think situation would be much worse in new system.)

> For all its faults, peer-review is still the best mechanism to keep science in right track.

Peer review is often effective, but it can't reliably block fraudulent publications like those described in the posted article. Most bad papers are rejected, but the authors can always try again at another journal. Any paper will probably get published somewhere, eventually, even if only in a Hindawi or MDPI journal. The journals aren't accountable to anyone, and as long as they have enough good articles to serve as cover, academics will need to pay for access because citing relevant prior work is obligatory. The publishing system is very weak against fraud.

> people with huge twitter followings [...] would dominate science

Isn't that at its core the same as with scientific journals? People trust these journals to curate science in the same way you suggest twitter would come to curate science if it made the move online.

1. It's already possible to call attention to a paper through Twitter, regardless of whether it's published in a journal or not. Paywalls gate-keep the content somewhat and make sharing harder, but that's a minor side effect of a very broken system.

2. Papers (and the data involved) being available on public platforms like GitHub, which already have mechanisms for reporting and tracking issues as well as built-in review tools (in GitHub's case even a separate discussion feature now), would allow for much quicker criticism of bad methodology.

3. Working with a VCS like git would automatically make it clear who wrote, edited or removed what.

Scientific data does not fit on most public platforms. GitHub in particular has tight limits on file size, push size (100 MB), bandwidth, and storage ($100 / TB / month). Which isn't that surprising; git is designed for code, not data.

Even if funders gave large sums of money dedicated to data publication, if recurring billing is involved it will eventually break as attention wanes. Data archives need to be managed by an institution or purchased with a single up-front fee, otherwise they won't stick around.

There's also the aspect that, even if you as an individual take it upon yourself to publish your data without institutional support, anyone who reads your paper will most likely ignore your dataset. Which is somewhat demotivating.


Funding for the project/department, as well as the personal career prospects of everyone involved, is tied to publications. Various approaches to analysing those produce importance numbers. (Note: PageRank was an attempt to do the same for non-scientific publications, and we all know how that went.) Said numbers are picked up by bureaucracies to determine the objective worth of groups and individuals. Growing said numbers is literally what the livelihood of academics, at least at some stages of their careers, depends on.

So, yes, that's fundamentally "a matter of funding". It can be fixed by academics and bureaucrats agreeing to switch to some other system. On international level. I think if you got the top 20 countries to coordinate, the rest would follow suit. Any bets on when that will happen? ;)

GitHub is not a great engine for driving discovery of quality content.

I don't think this is related to the country of the editor. The lack of ethics is more prevalent in low-quality journals (many junk journals are Asian) and in some domains (more in medical journals than in mathematics).

Here is an example showing that even the highest-profile journal can lack ethics: circa 2005, Nature published a paper comparing a selection of scientific articles from Wikipedia and the Encyclopedia Britannica. The editorial board of Nature selected the articles and sent them to reviewers. They only published metrics and a few quotes from their data (the list of selected articles and the reviews). The results were surprising and made a lot of buzz. But Britannica noted that one of these quotes was a sentence that was not in their encyclopedia. Nature had to admit that they had selected some Wikipedia articles and, when they could not find the equivalent Britannica article, had sometimes built one by mixing articles and adding a few sentences of their own. Obviously, the process was totally biased, from the selection to the publication.

As others have noted, this is a global problem, not just Chinese.

The version that is more difficult to detect is when a cabal of colleagues agree to push each others' papers in this way. So editor A says "you should really quote authors B, C and D." And somewhere else, editor B is saying "you should really quote authors A, C and D."

Machine learning might be a way to tackle this at scale, by teasing out these associations. Of course, this relies on a degree of transparency. Some journals publish all editors' comments and all revisions of a paper. This is a Good Thing, but humans aren't reading all published research, let alone all the metadata.
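You don't even need heavy ML for a first pass. A hedged sketch of the association-mining idea, assuming you could extract editor/reviewer citation recommendations from open peer-review reports (all names and edges below are hypothetical, and a reciprocal recommendation is only a weak signal, not proof of a cabal):

```python
# Sketch: treat each recommendation "editor X pushed citations of
# author Y" as a directed edge, then flag reciprocal pairs, the
# cheapest version of the "you cite mine, I'll cite yours" pattern.
# All data below is made up for illustration.
from collections import defaultdict
from itertools import combinations

# (recommending_editor, author_they_pushed) pairs, e.g. mined from
# published peer-review reports.
recommendations = [
    ("A", "B"), ("A", "C"), ("B", "A"), ("B", "D"),
    ("C", "A"), ("D", "B"), ("E", "F"),
]

pushed = defaultdict(set)
for editor, author in recommendations:
    pushed[editor].add(author)

# Flag every pair (X, Y) where X pushed Y *and* Y pushed X.
suspicious = sorted(
    tuple(sorted((x, y)))
    for x, y in combinations(pushed, 2)
    if y in pushed[x] and x in pushed[y]
)
print(suspicious)  # -> [('A', 'B'), ('A', 'C'), ('B', 'D')]
```

Note that E pushing F is one-way and not flagged, which is exactly the detection problem raised below: a reciprocal edge is consistent with both a cabal and two genuine experts in a small subfield, so this can only surface candidates for human review.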

If someone with relevant ML skills wants to address this, and fancies starting a project, do get in touch :)

A note on the Chinese insinuations that have been mentioned: As always, it's a bit more complex. There may well be reasons that some states might sponsor or 'encourage' gaming of intellectual institutions. If the world is viewed as a zero-sum game, and the currency is power, this unfortunately seems inevitable. Science tends away from this and towards collaboration, but 'politics' often seems to tend toward competition. I've seen university heads explicitly declare to all staff how they intend to game the national rankings, and nobody bats an eyelid, it's business as usual. It's daft and harmful, and frankly I think it requires hard effort from idealistic grassroots activists to address it. Societal improvements are often won through struggle, they're not given away, they don't happen by incremental evolution.

How do you propose to detect whether A, B, C, and D are a cabal pushing their own papers, or whether they are the people who actually know the subject and want to improve the quality of the papers that new people produce?

Well indeed, that's why I said it's harder to detect :)

More worryingly, what does it mean for science if we can't distinguish between a self-serving cabal and genuine good intentions?

I found this behavior in Europeans and Americans too. It is not a Chinese specific thing...

I've never seen it in Spanish publications, although I've been told it happens (social sciences).

I know about the politics too; that's the main reason why I never pursued an academic career. But to be honest, I never witnessed such plain fraud at my uni. It was more of a friends-get-all scheme.

I think it depends on your field a little. I did not see this during my years in particle physics...

So you lowered your bar, huh? Who am I to judge, but I would have preferred a story with something more than "the game is rigged and that's what I get to play with".

The paper was not accepted at that point; he could simply have denied publication out of spite. I played the odds, and got published despite doing that.

I'm sorry the story ended badly :) and yes - I've lowered the bar, sadly.

I will not judge you. Citation indices are horrible and perpetuate this fraud. I was telling a student of mine yesterday, 10 years ago the game was to get publications in prestigious venues. Now the game is to have a stellar scholar.google.com profile. The two games are perhaps correlated, but the correlation coefficient is not very high.

Oh boy, this is very common. This is not specific to a country or ethnicity, unfortunately. You also see grad students shilling for their guides.

Unfortunately this happens in astronomy in non-Chinese journals too.

A friend of mine reported scientific misconduct (p-hacking) and, together with a few colleagues, left the research group due to moral harassment by the head of that group.

The university removed all of them from the research group and said they could continue working on the data because it belongs to the university.

3 months later:

- investigations of scientific fraud against the people who left (neglect of authorship, because the data could after all not be used and the head wanted a say in the articles, i.e., to change them completely), plus some random other allegations that didn't stick.

- a police investigation for defamation (because they reported the scientific misconduct, as well as some misleading statements the head had used in sales of a research-related product)

- the university now expects them to contact the head of the ex-research group to clarify questions of authorship

- the head meanwhile continues as before

I've reported similar misconduct before. Was told the claims were very concerning, and that there was a quite clear problem that would be investigated. A few months later, I was informed the problem had been resolved, though on inspection, nothing had changed. I learned their investigation involved asking the person of interest if everything was legitimate, to which they said yes. Investigation closed. I am truly disappointed at what has become of the industry I used to appreciate so much.

I’m surprised it didn’t involve retaliation against you. I know of a few cases at UW where the outcome was retaliation against the reporting party

It hasn't been. My research is now being actively sabotaged.

Dr. Elizabeth Bik is making efforts to detect image fraud in scientific papers, and reading her findings has made me quite worried not only about the general accuracy of data from lesser-known universities, but also about how difficult it is to retract/correct it through journals.

Check out her Twitter if you’re interested in the topic: https://mobile.twitter.com/microbiomdigest

Not sure why you're particularly worried about lesser-known universities? These issues seem to have more to do with the state of science and the review process than with the prestige/fame of any particular university. I'm pretty worried about the well-known universities as well; in particular, the expectations and pressure to perform can affect people's work and decision making.

Because that may introduce new biases against smaller universities from India, Russia, and China, which already have issues getting things published. As a former scientist at a smaller Russian uni, publishing was already difficult, and it saddens me to see a couple of bad apples ruining it for everyone.

Chinese researcher is full of shit and fabricates results, news at 11.

For the haters, this is not racism but nationalism, China super incentivizes bullshit research at a high level these days, and it's gotten bad enough that we're starting to distrust any "work" that comes out of it.

I don't know what the solution is, other than to subject Chinese submissions to more stringent and specifically non-Chinese review.

That's absolutely nationalist, and arguably racist, but it's also smart.

Why would it be racist?

I have noticed in English-language discourse that often, shall-we-call-it, “non-white countries” are “races” but “white countries” are “nations”.

Also, Christianity and Judaïsm are religions, but Islām is a race.

Explain that to me.

> Islām is a race.

Muslim here; that sounds absurd. In fact one of my biggest annoyances is when people view all Muslims around the world as a single entity. Every stupid trait of every Muslim-majority culture gets blamed on the entire Muslim world.

I find it absurd too, but it is often how it is phrased in English-language discourse, even in Dutch discourse the anti-Islām branch phrases it as such: the language suggests that one can identify a “Muslim” by some kind of physical phænotype of his body.

The difference is that in general in Dutch discourse, such statements are considered racist or betraying such a mentality, and frequently protested, but, in English-language literature, even the “left” that claims to champion the causes of all these “races” and “religions” still very often writes in a way that betrays a mentality that some religions and countries are “races” and others are not.

It's built into their culture; it's hard to even blame them. I say this as politely as I can.

“their culture” being the Chinese culture, or the Anglo-Saxon culture?

>China super incentivizes bullshit research at a high level these days

Just as you'd expect from a "Chinese researcher", you're going to have to qualify this statement for your point to hold any weight.

In particular the incentives are set up the same way in the US, and reproducibility is a problem in many fields.

Many stories like this and people wonder why the general public is less "believing" in science lately. I think sadly the general public is right not to believe as strongly in science as it maybe once did, the results though are dramatic in "baby bathwater" kind of proportions.

I think science should fix itself. Just publishing papers should not be the metric that gets rewarded. A retraction should seriously reward the flaw finder (as sometimes happens with exploits), and really harm the flaw's author and publisher: both scientist and journal.

It is very good that the general public is less believing in science.

I remember well when the public was very believing, including me, and in hindsight it was always undeserving of such faith.

It was a very misguided thing to take a conclusion as fact, so long as it be called “science”, for often upon closer inspection the methodology was dubious, and it was never attempted to be reproduced, so even if the methodology were sound, the data could either be a fluke, or outright fabricated.

This is not a new development; if anything, the critical stance is the new development. It has most likely been going on for centuries that completely fabricated data stood the test of time because no one bothered to replicate it. When I was at university in the 2000s, we were already told of respected researchers who fell from grace when it was found they had been fabricating data for decades, and it took that long for someone to catch wind of it, as no one bothers to replicate research in this world.

The only new development is that now, some are starting to.

“Science” is not enough to believe it; the methodology must be inspected and found to be salient, and the data must have been replicated at least once, præferably more, by another independent group.

To be credible does not require infallibility. The broader social consequence of the general public losing faith in science is not that they will suddenly become enlightened in the nuances of the scientific discovery process -- it is that they will turn to alternative sources of truth. Science isn't a perfect source of truth but it is a heck of a lot better than seeking truth through mythology, tribalism and the opinions of ideologues. Scientific literacy is the ideal state, but the world is not that.

I find that much of the newly inspired criticism of science, after the appearance of the replication crisis, did not turn to alternative sources of truth but started to admit that there is much that men don't know and won't know.

The problem is man's arrogance that it knows, that it can find a solution to every quæstion it asks.

“Science” is also not even close to merely “not infallible”; it is a complete coinflip whether any peer-reviewed result is even worth the paper it's printed on.

Dare I say it's worse than that, because it's a coinflip whether the data are even reproducible, and the conclusions derived from the data, even if they be reproducible, almost invariably involve bigger leaps of faith than making data up.

Last week in a university course, I was surprised to read in A Short History of Physics in the American Century (Cassidy, 2013) that, at least in physics, US public perception of science was tumultuous following WWI, WWII, and the Cold War. As a scientific discipline, it only reached maturity through the war effort, which earned it infamy for bringing about terrorizing nuclear weapons.

I sometimes think it's just a population problem. There are so many of everything, and there's so much competition to be the best and succeed, while the rules and customs we have in place for most things are ancient by comparison, and they keep getting older by the minute.

> Many stories like this and people wonder why the general public is less "believing" in science lately.

Eh, I'm not sure bad studies are the cause.

Scientists, especially doctors, wanting to use their authority in some debates while two of them can be saying completely opposed things may, however, contribute...

Problem is more serious than that.

What is happening is that the bad studies are being used for policymaking.

Examples: the "nutrition pyramid" that encouraged carbohydrates and blamed health issues on animal-based food, was later found out to be based on research that was blatantly corrupt, with researchers getting bribes from food industry to manipulate or hide results (a case of hiding results: one researcher that found out that vegetable oil causes decrease of blood cholesterol, also found WHY it happened, but omitted that part from his paper... the reason is that cholesterol is needed for cell maintenance, and consuming only vegetable oils cause a deficit from it, the body pulls cholesterol from the blood to repair itself, and even that might not be enough, with some people suffering damage).

Or a lot of pharma circlejerking that turns into law or regulations.

Or the paper mentioned in the article, about video games and aggression, with many countries passing laws regulating video game consumption based on such papers.

Or the original reason Cannabis was banned (long story short: part of the reason is that they wanted to ban hemp fibers, which were an obstacle to some newly invented synthetic fibers; some of the government officials involved held stock in DuPont and other fiber companies, and "accidentally" banned hemp fibers while "trying" to ban the drug, based on manipulated and fraudulent science).

Or, more seriously: the papers that recommended "austerity" and basically destroyed the livelihoods of millions of people were later found to have math errors that changed the conclusion completely.

And the list goes on and on.

Hemp fiber was in competition with wood fiber harvested from Hearst-owned western-US forest land. Hearst also owned a newspaper chain, and found using it an easy way to eliminate the competition. Hemp is both cheaper and better-quality than the wood fiber for paper, but had no newspaper-chain backing.

We are dealing with 5+ papers that are fraudulent and another 5+ newer papers that are most likely fraudulent, too. That is, there have been 20, 25+ reviewers looking at those papers. Their job was to carefully read them and double check the numbers. All of them gave those papers a pass. I am at a loss here.

The authors' behaviour is outrageous, but this story is also about a broken reviewing process, partly due to wrong incentives.

"Peer review" is not "have someone else re-do the experiment". That's just not feasible, especially since reviews are done without pay. It's not realistic to expect people to spend more than a few hours reviewing a paper. That amount of time is barely enough to check for overall conceptual issues and maybe flag some really glaring deficiencies. (And then conclude with 'accepted with minor revisions', those 'revisions' preferably being 'add these three citations to my paper, that'll push your paper into 'acceptable' territory'.)

But there were glaring deficiencies in the stats. Anyone reviewing the papers should have caught them.

Well yeah, in the OP's papers, maybe; I don't know. I meant more to address several commenters in this thread who seem to think that, in general, peer review means 'redo the research' and/or 'validate that it's correct'. It's not.

Nowadays, when you see articles in the media about the results of new COVID-19 research, those articles often include 'hasn't been peer reviewed yet' or 'reviewed by other scientists' or some such verbiage, either as a disclaimer or as 'now it must be true'. But that's not how it works; something having been 'peer reviewed' does not make it 'The Truth' or 'Real Science'. Peer review, in reality, just weeds out (most) quacks (although in the OP's case it seems it didn't even do that) and checks that the paper is not completely out of touch with what is happening in, and known about, the field. It's not QA of the work itself.

(I don't care to debate whether it should be, or whether more money should be spent on replication, etc.; I'm just providing some real-world context on something that is quite opaque to, and often misunderstood by, those not in academia.)

> Their job was to carefully read them and double check the numbers.

That's the theory. The reality is that there is no in-depth review. You're lucky if a reviewer actually reads the paper all the way through, let alone checks the numbers and applies a level of critical thought to the methodology, analysis and conclusions.

^ This exactly!

"Peer-reviewed" by whom?...

Journals typically list their referees per issue, but they do not say who reviewed which paper.

In a graduate product design class I took, our semester project was to design, build, and make cost estimates for development of an IoT product. "Internet of things" wasn't a phrase yet, but that's what you'd call it today. We had to incorporate these ultra-low-power sensor/processor things the professor had his name on and was a big promoter of. At the beginning of the semester his grad assistant presented her invention from a previous year, which she had won awards for, had presented and written about, and which was part of her PhD work. It was a home health monitoring device, and she showed plots of a month of data from sampling herself (pointing out she was the only woman on her project). It was very inspiring and I was very impressed by her.

Jump to mid-semester: I randomly have a team of MBA students and me; the three of them were going to do all of the writing and I just had to do all of the engineering by myself (yay). I'm battling in the lab for hours trying to get the damn thing to read a voltage. I keep putting time on the GA's calendar for help, and she keeps blowing me off, or passing me in the hall and saying "ummm maybe try this?", or giving me another device to see if the last was defective. In principle, she should have been able to point out whatever I was doing wrong in 15 minutes or less, but weeks of this avoidance went on.

Eventually, after asking everyone in the department where she was, letting people know I was trying to meet with her, and just sitting at her desk at our appointed time for over an hour waiting, she caught me in a hall, conspicuously looked both ways to see that nobody was around, and said "look, the things don't work. They've never worked. My device never worked. I made up the plots based on what they theoretically should have been if the product worked. I'm grading the projects. Just focus on the write-up of the business plan." So my work was done.

And at the end of the semester, nobody's product worked, but most people acted like theirs did. Ours obviously didn't work, but we made up some shit about it being a mock-up because we didn't have the budget for some of the components. ... A professional photographer took shots of my team that were used in promotional material for the school.

There's a group in Asia that worked in the same area I did my PhD in. In particular, there's a guy who published 18 papers during his two-year Master's degree.

Now, most of these papers were tiny. They effectively were "Run one simulation, get one interesting but tiny result, publish". To me, that's 'salami slicing', and journals should not accept papers that should have been larger studies. But he's carried on with this, has now completed a PhD and has a permanent position at a Japanese University.

The belief that any research is automatically true is so bogus and so abused that industries and lobbyists have come to rely on it. It's sad that people then blindly push it as "it's science".

The main issue is the sheer number of papers being published and the lack of capacity among the body of experts to read all of them. I guess it's the professionalisation of research.

People publish papers to improve their rankings and not because it’s relevant.

This comment really clarified an issue:

This is a slow-moving disaster for scientific credibility, and therefore for national safety and security.

There's going to be a point within two decades where "reproducibility crisis" is not a localised phenomenon, and "expert" misconduct is paraded out by the papers.

Totally destroying our society's ability to govern itself based on expert information. The early stages are already here (anti-climate, anti-vax, etc.).

I think the outcome is more likely to be that papers from the US are just assumed to be highly suspect in quality sort of how papers from China and India are now.

There actually is more than enough capacity to peer review (*). It's just that nobody wants to do it. It costs time and money. Not compensated by the publisher, of course.

(*) edit: that's a raw body count. I wouldn't know how many people could actually spot the errors mentioned in the OP.

From the article:

> For example, one paper reported mean task scores of 8.98ms and 6.01ms for males and females, respectively, but a grand mean task score of 23ms.

A 9th grader should be able to find that inconsistency if you give them the table and tell them to find the number that is wrong.

(The other stuff is harder to detect, and I fully understand that you can't request and re-process the raw data for every paper you peer review. But some of these numbers....)
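This kind of sanity check needs no raw data: a grand mean is a weighted average of the group means, so whatever the group sizes, it must fall between the smallest and largest of them. A minimal sketch in Python (function names are my own, not from the article):

```python
def grand_mean_bounds(group_means):
    # A grand mean is a weighted average of the group means,
    # so regardless of group sizes it must lie between the
    # smallest and largest group mean.
    return min(group_means), max(group_means)

def is_plausible(grand_mean, group_means, tol=0.005):
    # Allow a small tolerance for rounding in reported values.
    lo, hi = grand_mean_bounds(group_means)
    return lo - tol <= grand_mean <= hi + tol

# Values quoted from the article: group means of 8.98 ms and 6.01 ms,
# but a reported grand mean of 23 ms.
print(is_plausible(23.0, [8.98, 6.01]))  # False: 23 lies outside [6.01, 8.98]
print(is_plausible(7.5, [8.98, 6.01]))   # True: some weighting could yield 7.5
```

This won't catch subtler fabrication, but it flags the impossible case in the quoted table immediately.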

I remember doing chemistry at university, and lab results quite frequently didn't match the expected results. So the first time, you submit the results, report what you've found, try to explain it, and get marked down.

Lesson learned: in future, you give them what they want and attach large error bars.

I changed course after that, as part of science should be explaining bad results.
