So they did a series of experiments and reported results that screamed "artefact". On one of them, for example, the postdoc got trained to use the electron microscope and they went through thousands and thousands of images to pick out the one that had "just the right morphology" (I am pretty sure they were snapping photos of salt crystals). On another, they reported that their research subject protein was so fast at the process we were studying that everything occurred IN MIXING TIME. That to me, screams "you are not doing your experiments carefully".
Meanwhile I was sweating balls working on a very careful preparation of similarly finicky proteins (you agitate them and they do bad things, since they're metastable) and finally got it to produce reproducible results. I suggested they adapt my preparation to their protein, but they didn't give a damn; they had already published their paper and moved on to sexier proteins.
But then an intern was put on the project, and she could not reproduce their results, after working on it for six months (she is careful and honest). At the end, I felt so bad for her, I offered to train her on my technique, but she passed. I think she was burned out on the project. I asked if I could get a sample of the protein that she had prepped, and she agreed.
I ran the protein through my preparatory technique and observed that there was a contamination that could have seeded the kinetics of their process. Upon isolating an uncontaminated sample, I carefully but briskly rushed the sample over to the machine. Nothing. Curious, I jacked the temperature up to get it going faster. Nothing. I left it in the machine overnight. Nothing. Finally, convinced that I had likely done something wrong, I dropped the sample in a shaker at temperature, came back the next day and recorded amazingly high signal. In short, the observation that it was "super fast" was entirely an artefact.
As I, too, was trained on the Electron Microscope, I quickly spotted my sample onto an EM disc, reserved some time and hopped on the 'scope. The first grid sector I looked at, there was literally TEXTBOOK morphology in front of my eyes.
I stapled together my results, gave it to the grad student, and told him that the general gist of his paper was probably still correct, but that he should be careful about characterizing his protein as exceptional. I then said it was in his hands to do the right thing.
What do you think he did? Nothing, of course. He kept on the talks circuit, still talking about how exceptional his discovery was, and to date there have been no retractions. He even won the NIH grad student of the year award.
The epilogue is that after a decade of floundering I realized that even though I am pretty good at science, I was no good at playing academic politics, and I quit the pursuit; I drove for Lyft/Uber for a bit, and now I'm a backend dev. I am certain that my experiences are not unique. Amazingly, the intern returned to our lab, and had her own three-year stint chasing ghosts that turned out to be an overoptimistic interpretation of results reported by a postdoc.
Oh. What happened to the grad student? He's a professor in the genomics department at UW.
My friend was set to work on characterising this effect, so his first job was to reproduce the result as a base case. He couldn't. The factor didn't stimulate the behaviour.
He asked around, comparing his execution of the protocol with that of the postdoc who had done the original work.
The method involved growing a feeder layer of cells, in serum, then lysing them and washing the plate, leaving a serum-free layer of extracellular matrix behind, as a foundation for the serum-free cell culture (this is a pretty standard technique).
Turns out the previous postdoc's idea of washing a plate was a lot less thorough than my friend's: a couple of quick changes of PBS. So they were almost certainly leaving a lot of serum factors behind on the matrix. Their serum-free culture was nothing of the sort.
The supervisor insisted that the previous postdoc's work was fine, and that my friend just didn't have good technique. The supervisor had him repeat this work for months in an attempt to make it work. But he's a careful worker, so it never did.
In a similar situation, a prior student's work couldn't be repeated and it was pretty clear the student had made up the results. "Water under the bridge, let's move on." Of course the publication still counted for the prof.
Instead of relying on people getting the right technique, you load in their program, dump chemicals into the right vials, then let it run and check the results.
In academia, the goal is to publish. The peer-review process won't repeat your experiments. And the chance of another lab repeating your experiments is slim -- why spend time repeating other people's success?
In contrast, in industry, an experiment has to be bullet-proof reproducible in order to end up in a product. That includes materials from multiple manufacturing batches of reagents, at multiple customer sites with varying environmental conditions, and operators with vastly different skills.
Industry works solely on stuff that's reproducible because it wants to put these things into practice. That makes for an admirable level of rigor, but constrains its freedom to look at unprofitable and unlikely ideas. Academia takes the opposite trade, chasing those long-shot ideas, and that inevitably results in inadvertent p-hacking. The first attempt to look at something unexpected is always "This might be nothing, but..."
They call in other people earlier because they're not protecting trade secrets or trying to get an advantage. They do want priority, and arguably it would be better if they could wait longer and do more work first, but the funding goes to the ones who discover it first.
So there's no real reason for either academics or industry scientists to look askance at each other. They're doing different things, with standards that differ because they're pursuing different goals. They both need each other: applications result in money that pushes for new ideas, and ideas result in new applications.
I think what I mean to say is that the skills required in industrial research (which can be quite speculative in well-funded companies, by which I mean a 5% chance of success or so) are somewhat different from those required in academia.
And we complain that the public at large doesn't trust us "educated" folk, well I can't see why...
I left, co-founded a startup and never regretted it for a moment.
Edit: The point where I was sure I had to leave was when I was actually starting to play the "publications" game too well - when I found myself negotiating with colleagues to get my name on their papers in exchange for a bit of help, I decided things weren't really for me.
Edit2: I'd wanted to be an academic research scientist since I was about 5, so when I actually got what I thought was my dream job I was delighted - it took me a couple of years to work out why almost nothing in the environment seemed to work the way I expected ("Why is everyone so conservative?"), and I became, as one outsider described me, "hyper cynical".
While the day-to-day experience was definitely fun, it destroyed any desire I had of entering the field. A lot of politics, a lot of statistically suspect stuff (obvious even to me, in my third year of a bachelor's), and a lot of busywork.
After that experience I went into web development (full-stack). What I like about it is that even though there IS politics, even though there IS taking shortcuts, and god forgive me for some of the code I delivered, in the end whatever I work on has to actually do the thing it's supposed to do. It doesn't remove the aforementioned problems, but it grounds everything in a way that is mostly acceptable to me.
As frustrating as it can be to build some convoluted web app that feels like it's held together by scotch tape, it's nice to know that it eventually has to do whatever the client asks for, however flawed.
In either case, pretty much all humans are profoundly small-c conservative; "big change projects" on a society scale do often end in war/death/etc. At least, it's probably 50/50 whether it's a "National Health Service" or a "World War".
However the reason is deeper than that: evolution does not care if you're thriving, it cares that you are breeding. So you're optimized for "minimum safety" not "maximum flourishing".
So if things are stable, you will prefer to stay in them for as long as possible. It is often why people need to "hit rock bottom" before they can be helped, i.e., their local minimum needs to become unstable before they will prefer the uncertainty of change.
Not only the public at large, but even university graduates start, to an extent, to distrust those who are "professionals" in academia. It is simply a whole other world, where you are judged only by the number of papers under your name, perhaps never having contributed to anything practical - it seems so detached from real life.
Those that can, do. Those that can't, teach.
In my experience this is accurate in the overwhelming majority of cases.
It feels like a religion, with its own T-shirts and all. Appeals to authority, intellectual posturing… often from people with little understanding of the actual science. Honest insiders are way more careful with any absolute statements.
No wonder there's a (also scary) rise of conspiracy theories.
How do people not observe those as two sides of the same coin?
Interestingly, I have read that in the 1920s and 30s, there was actually an organized relativity denialist movement, that wrote articles and held public protests.
Tesla was famously against relativity, telling the New York Times, "Einstein’s relativity work is a magnificent mathematical garb which fascinates, dazzles and makes people blind to the underlying errors. The theory is like a beggar clothed in purple whom ignorant people take for a king".
Chances are, most of the people marching against relativity had no clue about Newtonian mechanics, and were told stuff such as relativity leading to moral relativism.
Which actual scientists describe themselves that way? We're physicists, geologists, botanists, psychologists or whatever. When someone says they're a scientist, it suggests that they're not part of any actual scientific discipline, but making a false appeal to authority.
This is just incorrect. Meteorologists don't study Earth's climate, they study weather. Meteorologists don't use ice cores or tree rings for their research, they study much shorter-term fluid dynamics. Climate scientists do study climate, and not weather. The disciplines are related (specifically, they're under atmospheric sciences), but to dismiss either one as being less scientific is picking favorites despite all evidence to the contrary. I suppose you could use the synonym "climatology" if you want a word without "science" in it, but it seems like a pretty silly heuristic regardless.
Science bros, for all their faults, can trade blows on more even footing, and that's something. Perhaps even a vitally important something. Even if science bros aren't great at science proper, their contribution to societal consensus formation might be as important as the underlying science itself!
We agree that science bros have problems, but unless you have an alternative I see them as a net positive, and not by a small margin.
Consensus formation is always messy, but that's not solved by losing.
It seems much easier to find scientists who will toe your political line, and then people can use them as a resource to prove that unless you take this person's "expertise" as gospel, you are a science "denier".
This is a reincarnation of what used to be religion. Religion is alive and well, just not in a form that our predecessors were familiar with.
My impression is that some large number of 'results' are fake results. I can't even imagine what the fakery is like in the non-hard sciences, when even the hard sciences have this stuff.
Marie Curie believed that radioactivity might have been caused by ghosts or the paranormal because of such things. While there may actually be ghosts or other things paranormal, I’d bet that Marie Curie was fooled.
The good part is that Curie’s work persists, and we think we have more understanding about radioactive substances.
I’m not sure whether she had to spend time specifically debunking the ghost-of-radioactivity theory; that just happened because of her work studying radioactive substances and their effects.
My favourite was probably this one paper where the author essentially made a reddit-post asking a community about themselves, then cherry-picked (the post is still up, with timestamps and all) a few comments and came to a conclusion that didn't really fit those hand-picked comments.
In conclusion: Wikipedia is a dumpster fire and shouldn't be used for anything other than hard facts like dates and for entertainment.
For all the problems wikipedia does have, this isn’t one of them. It’s not their job to second guess published research.
An encyclopaedia with rather low standards that many people sadly treat as an absolute source of truth.
You're right that this isn't really a wikipedia problem though. It's a matter of education because an overwhelming majority of the population isn't competent enough to fact-check memes on facebook, let alone wikipedia, and if wikipedia doesn't do it either, then that responsibility is pushed all the way back to the scientists doing the actual research.
This is an incredible lack of redundancy if you consider how important wikipedia has become in shaping public opinion. It's a system where the scientific publication process is the single point of failure and this article clearly shows that it does fail rather often.
So what way is there to make this process safer? There needs to be at least another link in the chain that confirms information, preferably two or three.
... that somehow manages to have articles on prominent subjects that are more in-depth and factual than any competing encyclopedic endeavor, while, at the same time, far surpassing them by orders of magnitude in breadth for obscure and less academic topics.
Wikipedia is not an encyclopedia in the traditional sense, and can't be judged on the same standards. It is simply in a league of its own, it fails in different ways than traditional editor-controlled projects and is a fantastic repository of human knowledge and educational resource.
It seems to me like a lot of the articles are accurate, and some of the check marked or featured articles are downright great.
Moreover, the talk page always has anything that might be controversial about the article that you might be interested in.
Sure, it will occasionally display incorrect data, but that happens less and less as antivandalism bots become smarter.
This, plus access to the per-article revision history, ensures a much higher degree of transparency than any other comparable work.
I genuinely have no idea. You mind tangibly identifying how Wikipedia has descended into chaos like you say?
As per Wikipedia rules (which took hours to figure out), there's not much one can do short of getting some impartial or friendly academic to publish a more reasonable article.
There are multiple factors degrading research quality. An important one is spreadsheet incompetence. Another is that medical research goes hand in hand with academic achievement, which in medicine also means money and power (probably more than in most other fields). I guess we have the same kinds of problems as everyone else, overall.
One thing people often miss is that clinical data is of abysmal quality and reliability, so honest analysis is really difficult.
But are you working in the clinical wards? Because things are definitely much better managed in places such as epidemiology units. The true horrors mostly come from clinical researchers digging into Excel spreadsheets without knowing a mean from a median.
So for example, my lab's expertise was in single-cell developmental models, primarily for organ development in mice. Extending that to tumors from clinical samples was relatively straightforward. One of my colleagues is working on an autism dataset, but I wouldn't expect that to be anywhere near as clean.
Even though there is a high price, their function is to train the survival skills of the honest folk who rise up the food chain. And don't have any doubt: those who rise have survived these types of people (usually thanks to the right networks and mentors), have developed their own tricks, and exist in large numbers.
Misguided/driven/ambitious people are always looking for shortcuts, and they will find them. It's like dealing with mosquitos, cockroaches, weeds, software bugs, and cancer. It never ends.
Being an endemic problem means you have to switch your assumptions; when reading a random scientific paper, you're no longer thinking, "this is probably right, but I must be wary of mistakes" - you're thinking, "this is most likely utter bullshit, but maybe there's some salvageable insight in it".
I've only recently dipped my toes into academic life in a lab, but it very much seems that PIs generally know which are the bad apples. E.g. when discussing whether some data was good enough to be publishable, the PI's reaction was something along the lines of "If we were FAMOUS_LAB_NAME it would be, but we want to do it in a way that holds up". So it seems there are at least some barriers limiting how much incompetence can hurt the whole field.
I'm also surprised that there is no mention of the PI in GP's story. As it's a paper published by the lab, it's not just on the grad student "to do the right thing", but even more on the more senior scientist, whose reputation is also at stake.
Yeah, but I meant that in general case, you no longer "trust but verify", but "assume bullshit and hope there's a nugget of truth in the paper".
This has interesting implications for consuming science for non-academic use, too. I've been accused of being "anti-science" when I said this before, but I no longer trust arguments backed by citations around soft-ish fields like social sciences, dietetics or medicine. Even if the person posting a claim does good work of selecting citations (so they're not all saying something tangentially related, or much more specific, or "in mice!"), if the claim is counterintuitive and papers seem complex enough, I mentally code this as very weak evidence - i.e. most likely bullshit, but there were some papers claiming it, so if that comes up again, many times in different contexts, I may be willing to entertain the claim being true.
And stories like this make me extend this principle to biology and chemistry in general as well. I've burned myself enough times, getting excited about some result, only to later learn it was bunk.
The same pattern of course repeats outside academia, but more overtly - you can hardly trust any commercial communication either. At this point, I'm wondering how are we even managing to keep a society running? It's very hard work to make progress and contribute, if you have to assume everyone is either bullshitting, or repeating bullshit they've heard elsewhere.
The crazy thing is that the honest scientists are working at middling universities. It is worse the higher up you go. I have had the opportunity to work at an upper-midrange research university, [time-] sandwiched between two very high-profile institutes. The institutes were way more corrupt. Like inviting the lab and the DARPA PM to hors d'oeuvres and cocktails at the institute leader's private mansion type of stuff. (It turned out that that DARPA PM also had some weird scientific-overinterpretation skeletons / PI-railroading-the-whistleblower stuff in her closet, and for a stint was the CTO of a microsample blood diagnostics company. I can't make this shit up; I guess after Theranos it got too weird, and she's now the CEO of another biotech startup. How TF do people like this get VC money, when I couldn't get people to raise for some buddies with a growth-industry company, and had to make the entire first investment myself?)
Of course working at an upper-midrange university sucks for other reasons. Especially at state universities, the red tape is astounding. And the support staff is truly incompetent. Orders would fail to get placed, or would arrive and disappear (not even theft, just incompetence), all the time.
When somebody else foots the bill, it's feast time!
To be clear, I'm with you. Also a PhD-turned-industry, for much the same reasons. But I realize what you describe is a completely rational strategy. The options always come down to:
1) Try not to be a host – if you have the wherewithal
2) Try to be a parasite – if you have the stomach
3) Suck it up & stay salty – otherwise. You can call it a balance, equilibrium, natural order of things – whatever helps you sleep at night.
Take your pick and then choose appropriate means. Romantic resolutions and wishful thinking – kinda like Atlas Shrugged solution for option 1) – rarely work.
Nobody ever said it was fraud, they said things like they wouldn't share the data and I couldn't replicate.
In general, the incentives for shoddy science (get Nature papers or find a new career) tend to reward bad behaviour, and I just wasn't able to find something unexpected and pretend it had been my hypothesis all along (it's almost impossible to publish a social science paper where you disconfirm your major hypothesis).
We would need to get away from inefficient communication via publications and put a system in place that tracks findings in detail, and whether they have been replicated.
But there is no willingness to do so after the US of A deeply harmed the scientific mission and academics by introducing infuriatingly dumb economic incentives into science.
What are you referring to here?
So many papers get published, few are read widely, and even fewer are replicated; yet they'll still get citations if the talk circuit is played right. Citations are what advance a scientist's career, and anything that can be passed off as an unfortunate statistical anomaly or error is unlikely to end a career.
In such a world, "optimal play" would be to intentionally or unintentionally P-hack, or just slightly embellish results such that the work is interesting to cite but not interesting enough to replicate. People who do this will eventually move ahead of everyone else, ultimately favoring incremental but bogus results.
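To make the multiple-comparisons mechanism behind unintentional P-hacking concrete, here's a toy simulation (the numbers are illustrative assumptions: each lab tests 20 true null hypotheses and publishes if any p-value clears 0.05; under the null, p-values are uniform on [0, 1], so each test can be modeled as a uniform draw):

```python
import random

def fraction_with_false_positive(n_labs=10_000, tests_per_lab=20,
                                 alpha=0.05, seed=0):
    """Fraction of labs that find at least one 'significant' result
    despite every hypothesis being false (no real effects at all)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_labs):
        # Under the null hypothesis, each p-value is uniform on [0, 1].
        best_p = min(rng.random() for _ in range(tests_per_lab))
        if best_p < alpha:
            hits += 1  # this lab has something "publishable" to report
    return hits / n_labs

if __name__ == "__main__":
    rate = fraction_with_false_positive()
    # Analytically this is 1 - 0.95**20, roughly 0.64:
    # most labs get a spurious hit just by testing enough outcomes.
    print(f"false-positive publication rate: {rate:.2f}")
```

Selective reporting alone, with no fabrication at all, is enough to fill a literature with effects that won't replicate.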
I guess in some fields of science the effective dependency graph of academic work is very flat, and the true results get plucked and developed by industry (being true results it is actually possible to meet the higher reproducibility bar there). And the citations don't actually reflect the true dependencies, but some political/social graph instead. Too bad.
I think this gets to the major concern with Academia today, as it becomes somewhat of a self-reinforcing feedback loop. Curry citations with political savvy, get awarded grants due to citations and political savvy, show that you are productive due to citations, grants, and political savvy - earning yet more political capital.
This will probably become my go to explanation for why Academic CS research has largely become decoupled from industrial application and industrial research. While political savvy is important in a large corporation, eventually you need to produce results.
Pretty depressing stuff.
In my 8 years in research mathematics, I didn't see a single case that would come close to this horror show (not that mathematics is free of unethical behavior, of course). Collaborating with biologists, however, I got exposed to a world far more backstabby than I've since experienced in the corporate world.
The story of Fermat's Last Theorem is a great example: what would have happened if that wasn't a famous problem?
Of course this happens to some extent in math too, but a lot of subfields aren't killed or born due to outside technological changes. Number theory remains number theory, and still builds directly on centuries of work, even if computer verification has helped in some cases (disclaimer: I'm not a number theorist).
For most subfields of mathematics, you have a lot of depth to cover before you get to the forefront of research. That isn't to say that it's by any means easy to get to the forefront of more high-level physical sciences, but there are certainly subfields in biology or medicine that didn't exist a mere 40 years ago (also true in math, but in general far more rare there).
Educational institutions are rotting from the inside. Idiots were rewarded at the expense of intelligent people, and now the idiots have taken control and are rewarding other idiots.
If you want to know what happens next, watch 'Idiocracy' or 'Planet of the apes'. At this rate, it will certainly take less than 500 years to get there.
You can see it based on how slow scientific development has gotten; there are very few major new breakthroughs compared to before... Most of the ones that get attention are BS.
Arsenic life was the big one when I was a postdoc
Tardigrade DNA is a new one, so popular that it became a major plot point in Star Trek. Turned out it was probably just a sloppy grad student not being careful with their samples/not taking into account microbes physically hitching a ride on the tardigrade
I feel that the cause and effect are reversed: while the low-hanging fruit was available and getting discovered, it was a lot harder to get away with fraudulent results. But now that we're facing diminishing returns, and more fish in the pond due to years of overtraining, fraud is easier to sell.
in software, the open-source model allows people to advance critical initiatives without quitting their day jobs or making onerous commitments.
how can we achieve the same in healthcare, that is let outsiders contribute and advance the state of the art?
re patents, the key is to drive down costs for research and testing. research seems like the low-hanging fruit, comparatively speaking, but it's unclear how to reduce the costs of clinical trials in an uncontroversial way.
The Biohacking community is actually really adept, and has made a lot of progress in making Science accessible; prior to COVID you already had teams working together across continents and different time zones. So when someone like Josiah Zayner wanted to tackle a COVID vaccination trial on himself and other biohackers, they already had the means and methods ready to go.
The problem is, if you want to play by their (academia's) rules, you're never going to make any inroads: you can't publish, no one will give you a grant for your work, and you're not going to be a chair of anything even if it pans out. But certain therapies now in development started off as Biopunk/Biohacker projects.
It's super exciting and hard, but also way more work than BSing your way into a professor role in academia, which is an all too common occurrence. Professional students becoming mediocre professors was a far worse problem in the Sciences than I could have ever imagined. The ones I really felt bad for were the postdocs with actual meaningful research, often with severe social anxiety and poor speaking skills, who were forced to teach undergrad and simply read the book aloud as 'lecture' - my Organic Chem professor comes to mind. My inorganic professor (did his MSc at Cambridge!) was a rockstar to us undergrads and would hold office hours during his lunch break between lab research, and the university made him protest before they'd release back pay during the cuts and layoffs. It was pathetic and I felt so bad for him; my review was scathing of the university as I left, and I've never really forgiven them for that.
Obviously, with no VC model in Science to follow for anything but the most brazen outliers (Theranos), it's unlikely to happen. Personally, I'd volunteer to help middle school or HS kids get involved in plant and Ag science - and take some on in culinary, if such an industry still exists in the US after COVID - and help them bypass the university track altogether. That is what I focused on after I left working in a lab, but there aren't many avenues for this model to scale to take on massive projects, due to a lack of funding. And the money and stability are abysmal, but the Science and the fraternity of actual Scientists doing meaningful work is probably more than half the reason most of us decided to study it in the first place.
Chamath needs to stop pretending to care about politics and use his billions to solve real problems, like funding Community Science wet-labs next to libraries, to help the youth care about Science in a meaningful way instead of wasting their time on TikTok or Instagram.
Theories abound, but most/all of these platforms don't have to provide an explanation, and the end user has little to no recourse on the matter: so far it's been YouTube, Patreon, and Facebook, none of which have followed up. Here is Josiah on an alt platform (Odysee) explaining the situation through his eyes.
It's sad to see a pioneer of Biohacking dismissing p2p solutions like torrenting and even Bitcoin as ways to bypass the censorship, but I think a lot of this has to do with the clunky nature of their former (or perhaps even current) UI/UX for people with limited time, attention, or familiarity with tech solutions, especially since it was so easy to use YouTube to distribute your content with just a simple click.
I honestly could have him up and running in a day or two with a solution, just in case PayPal does in fact shut him down, that would take fiat/CC payment upfront and convert it into BTC if needed; the reason BTC is needed is that PayPal or a bank can shut you out of your funds if you are already a target. It would mainly be a settlements network, with only slightly more steps than what he is used to. But he is right about volatility, as that cannot be helped right now.
I kind of want to reach out, but I'm dealing with more than I want at the moment due to COVID in my family. It's something I'm considering, though, because Josiah is such a massive inspiration to us Biohackers that deplatforming from the big platforms should be the canary in the coal mine. They even shut down his Patreon!
It seems from his twitter that he even left Oakland for Austin since December when that all went down.
But then look at how WSB was shut down when it presented a real threat to the establishment. I think this is the same thing happening, but Josiah and the CDC were actually just informing people how gene therapies work, in the most biohacker/biopunk way, which is near and dear to my heart for reasons I already explained.
Agreed. If you have any recommendations for long-term public data archival they would be greatly appreciated. OSF recently instituted a 50 GB cap which rules out publishing many types of raw data, and subscription options (AWS, Dropbox, etc.) will lead to link rot when the uploading author changes jobs or retires, or the project's money runs out. Sure, publishing summary spreadsheets is a good first step, but there should be a public place for video and other large data files. IPFS was previously suggested but the data still needs to be hosted somewhere. Maybe YouTube is the best option, despite transcoding?
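One low-tech hedge against link rot, whichever host ends up holding the raw data: publish a cryptographic checksum manifest alongside the paper, so any future mirror of the files can be verified independently of where they live. A minimal sketch (the manifest filename is an illustrative choice; the output format matches what `sha256sum -c` can check):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large raw-data files
    (e.g. video) don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(paths, out_path="MANIFEST.sha256"):
    """Write 'digest  filename' lines, one per data file."""
    with open(out_path, "w") as out:
        for p in sorted(paths):
            out.write(f"{sha256_file(p)}  {p}\n")
```

The manifest itself is tiny, so it can live in the paper's supplement or a small repository even when the bulk data has to sit on YouTube, IPFS, or whatever host survives; anyone re-downloading the data years later can confirm it is bit-for-bit what the authors published.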
Of course her paper should have been a cautionary tale, but there are still people using the flawed technique for high-throughput studies to this day.
There’s only so many people this could be lol, really makes me wonder.
Edit: found who it is. Why am I not surprised?
Which leads me into some thoughts about not rushing to judgement. I believe the commenter above is doing his best to be a reliable narrator, but it's always possible there was more to this story that was not visible to him at the time, that might exculpate a bit. It's also notable that people change over time, can improve on their faults, and might have learned something in the years since. Best not to view their past mistakes as forever damaging.
For what it's worth, I agree with you that we shouldn't rush to judgement. While it's certainly possible in this particular case that there was genuine misconduct, quite often there is a simple misunderstanding.
As an anecdote, during my graduate work I had a fellow PhD candidate convinced his guide was out to sabotage his work, because it 'threatened' to overthrow the guide's long-established model. He was convinced that the prior student's work that clashed with his was fudged, and that the PI was covering it up. Is it possible? Sure, but not very likely. It's a tad convenient when the people you disagree with also happen to be mustache-twirling villains.
I've seen a general trend with young academics at the beginning of their scientific careers. They tend to be exuberant, convinced of their own superiority. Until that point, they've tended to be the smartest person in the room, the pick of the lot from among their peers. Then they hit graduate school, and suddenly everybody around them is just as smart as they are, but that appreciation takes a few years to sink in. When your experiments don't work, it's hard to digest, and easy to imagine the other guy cutting corners. I'm not suggesting that this is what happened with the top-level comment, but it could explain many of the other comments I see here.
When there is an open question, with important consequences but unclear resolution, it is hard to know the right answer. Somehow, it is easier to know the wrong answer, and that person will reach for it immediately. So, watch him and choose the opposite.
In any group there is such a person, called the Oracle of Wrong, and almost anybody can tell you who it is. He is the one most likely to wear a trilby, and no wrong choice he has made has ever caused him any personal discomfort.
God damn, just this paragraph alone made me remember why I ran like hell after my undergrad, even during the horrible job market of the 2008 financial crisis and while up to my eyeballs in debt; I saw the politicking behind what it took just to get a department to give a nod to a tenured professor's peer-reviewed paper.
It was fucking pathetic, and I've never been more ashamed of what would have been my profession than at that moment, but it set the tone for what to expect and made me realize just how irreparably marred that system is. It was followed by a sense of dread that nothing I could do would ever change it, so I turned down the offer to work in said professor's lab and carry things on into grad school (MS), and just worked as hard as possible to pay off my debts and pivot my life entirely. I'd rather sweep and clean floors helping a small business grow into something real than ever go back to that despicable environment.
Academia is definitely a mind-prison, and a trap for so many brilliant minds that may not have the ability or wherewithal to try their hand at a startup, or the necessary paperwork (citizenship) to take on private-sector work, which itself carries a ton of pitfalls.
There are some benefits to the university model, but I really hope COVID disrupts the monopoly universities have over this domain for good! Ed-tech really should be a much bigger source of funding and development, but FAANG just keeps suckering in people who could otherwise do something actually useful for society.
> What do you think he did? Nothing, of course. He kept on the talks circuit, still talking about how exceptional his discovery was, and to date there have been no retractions. He even won the NIH grad student of the year award.
> Oh. What happened to the grad student? He's a professor in the genomics department at UW.
He is literally the academic 'Big Head' character from Silicon Valley that every lab/department has. I'd speak of my own experiences further, nothing as bad as yours, but I really don't feel like ruining my evening any further.
> I am also from a molecular biology background and saw this often. We call these guys the "Golden Boys". They are super successful, but completely useless. If you still believe life is fair, wake up, sunshine.
Same, I should have made the leap to microbiology in junior year, but I just wanted to GTFO and even abandoned my double-major (Biochemistry) work just to speed up the process.
I'm really sorry. It seems a lot of people are hit by a wall of cruelty. More is less in our lives.
Have you thought of joining some biohack lab to keep exercising your talent and curiosity in your original field?
And this rot starts all the way from funding agencies (NIH/NSF/DOE) who have become hardcore bean counters.
His work at Scripps matches the same research group and timeframe of when dnautics was there, and he's now a professor in UW's genomics lab. The topic described seems to fit what he was researching then, and he received a prestigious grad student award for it.
I'm unsure of how the term "foreign" is being used above. Is it implied as a pejorative there? For example, if OP had written "a super sketchy white postdoc", or "a super sketchy black postdoc", would the HN community tolerate that?
Is it good here? Is "here" considered less sketchy?
Finding concrete proof or examples is obviously hard in this subject matter (how are you going to prove something as abstract as sketchiness), but here's one observation: predatory conferences mostly only exist outside the West. To be even more concrete, two of the most infamous predatory publishers (WASET and OMICS) are based in Turkey and India respectively. You generally won't find something nearly as sketchy in the West.
Well, as an Indian postdoc working in the US, I can speak to some of these sketchy behaviours. In terms of the predatory publishers, my Indian institution had its own filters, and most labs have their own as well. For example, for a while we had an institutional restriction on submitting manuscripts to conference proceedings, with the justification that the hard time limit equals substandard peer review. In addition, for the longest time we were not allowed to submit anything to open access journals, with similar justifications. Publishing in a journal with an IF < 8 was also frowned upon, and the institution would not cover publication expenses. AFAIK other institutions had similar filters for publications. I would regard my institution as decent, but nowhere near the best in my field in India.
Who does publish in these predatory journals? Smaller, less well funded universities with desperate students, ever since the government mandated first author publications as a requirement for receiving PhD degrees.
“The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it.”
It's the first time I've heard it, but it's a very appropriate observation in today's world where misinformation travels faster and wider than correct information. If you're just making stuff up, it's much faster than looking up sources.
Ideally you have a cache of extremely long messages where you selectively quote small sections of sources, out of context, that seemingly prove your point but on careful reads are unrelated or actually contradict.
But there's ten or fifteen "sources" and by the time you read through the post, all the articles posted, and form a coherent argument contradicting it, they've already posted a bunch of other places and/or the thread has moved on.
That's the ideal case, where you're inclined to waste 20 minutes arguing against a comment on the internet and there isn't a mix of legitimate sources with total bullshit sources forcing you to do a secondary hop to prove a point against the fake source.
Of course, HN is also an invaluable resource when it comes to tech and sometimes other STEM subjects. It's just significantly less valuable for areas completely outside of it. I wouldn't trust HN as a neutral or critically thinking source for, say, the usefulness (or lack of usefulness) of gender studies.
That precise quote is from Pratchett but there are similar, earlier citations https://quoteinvestigator.com/2014/07/13/truth/
You will find that the ability of the human mind to be critical, to refute with very salient arguments, is suddenly acute when the mind doesn't wish something to be true, and this definitely also applies to H.N. comments.
That H.N. in this case is so accepting of this one side of the story suggests to me that this is the side it seems to want to be true, notwithstanding that it might entirely be true, or not be.
No one here is trying to argue that it happens all the time or more often than not, I'm wondering if that's what you think we're reading.
In this case, “misconduct happens” is not the opposite of “it never happens”, and I do not find the comments to echo the former sentiment so much as “Academia has become so rife with either outright malice, or an inability to catch earnest mistakes, that virtually no research can be trusted.”
> No one here is trying to argue that it happens all the time or more often than not, I'm wondering if that's what you think we're reading.
No one is indeed arguing that, but what many, including me, are arguing is that nothing can really be trusted any more because it's a coinflip whether data is even reproducible.
My current view is that academic research should not be used as proof of anything and only as the starting point for your own research. And by your own research I mean your own actual tests. The papers can point you in the right direction but their findings should not be taken as fact.
This seems like a complete utter waste of time.
In real life, most life-impacting academic research is much more right than wrong. You are far better served assuming so, unless you want to waste your time going back to basic science and rebuilding all the academic knowledge in most things you wish to do.
So it’s more like suppose you want to paint your house green, and you read that somebody says you can mix red and blue paint to make a really cool green paint. Instead of immediately going out and buying enough red and blue paint to cover your whole house, first buy a small amount of red and blue paint, mix them together, and see if you get that neat green paint.
It’s common sense, but the window dressings of academia can lead you to burn time and money on things that are totally silly because somebody important-sounding said they did it once.
Where people get burned is that there’s an enormous power imbalance—-junior scientists can end up stuck trying and failing to make green paint out of red and blue paint because nobody senior is going to take them seriously if they can’t make green paint. This presents a serious ethical challenge if making green paint is impossible.
It's fair to question things, especially if they don't make sense to you and even if acknowledged authorities are behind them. However, (1) something that you may question is not necessarily something I may question, and (2) questioning may be a waste of time.
If a paper that says mixing red and blue paint makes green paint has a thousand citations, perhaps you don't need to question it because others already have. If you can't reproduce it, the simplest thing to do is ask an expert who says it is possible to do it.
If making green paint is impossible I think that it will eventually self correct, or is simply inconsequential. In some instances it may take a while, but if the alternative is to reprove a result before using it — that seems like something only a fool would do or someone with infinite time.
I see this widely used by antivaxxers now.
I got a paper accepted in a Chinese-oriented journal (i.e., most of the editorial board was Chinese). I am not just 'saying' this; I mention it because the OP brought up "it's a Chinese thing" over results and datasets, whatever, I digress.
On the last revision round, the editor told me that I was lacking some references, which he promptly sent me. It turned out that 6 out of 6 of his 'recommendations' were papers HE HAD CO-AUTHORED.
Since the paper was not yet OFFICIALLY accepted, I caved in and cited the guy (3 times), to my UTTER DISMAY.
If you don't play the game, other Chinese researchers are playing it and reaping the results.
I don't mean to insult Chinese people, but this is what is happening...
Edit: just to be clear: I didn't at the time read that as "submission tax". More of, trying to be helpful and using things they personally were familiar with. Most, if not all, of the extra references would make our paper better... If we weren't fighting that damned page limit, that is.
I wrote about that a while ago here: https://medium.com/flockademic/the-ridiculous-number-that-ca...
That's a problem that would fix itself the moment most useful research was mainly available on such platforms.
> more importantly, funders don't recognise the work you've done there
Once again, that sounds like mostly a problem that would disappear if a large migration to open platforms was to happen.
So the main problem seems to be that there's no incentive to be among the first to make the move? IIRC it's often the journals that don't want content to be published elsewhere, so I guess just doing both is also not that simple.
What you propose would mean Twitter or Facebook replacing those journals; people with huge Twitter followings, or "celebrity" scientists, would dominate science, and the work of people without such marketing skills would get drowned out.
(This is sort of true for current system too, but I think situation would be much worse in new system.)
Peer review is often effective, but it can't reliably block fraudulent publications like those described in the posted article. Most bad papers are rejected, but the authors can always try again at another journal. Any paper will probably get published somewhere, eventually, even if only in a Hindawi or MDPI journal. The journals aren't accountable to anyone, and as long as they have enough good articles to serve as cover, academics will need to pay for access because citing relevant prior work is obligatory. The publishing system is very weak against fraud.
Isn't that at its core the same as with scientific journals? People trust these journals to curate science in the same way you suggest twitter would come to curate science if it made the move online.
1. It's already possible to call attention to a paper through Twitter, regardless of whether it's published in a journal or not. Paywalls gate-keep the content somewhat and make sharing harder, but that's a minor side effect of a very broken system.
2. Papers (and the data involved) being available on public platforms like GitHub, which already have mechanisms for reporting and tracking issues as well as built-in review tools (in GitHub's case, even a separate discussion feature now), would allow for much quicker discussion criticizing bad methodology.
3. Working with a VCS like git would automatically make it clear who wrote, edited, or removed what.
Even if funders gave large sums of money dedicated to data publication, if recurring billing is involved it will eventually break as attention wanes. Data archives need to be managed by an institution or purchased with a single up-front fee, otherwise they won't stick around.
There's also the aspect that, even if you as an individual take it upon yourself to publish your data without institutional support, anyone who reads your paper will most likely ignore your dataset. Which is somewhat demotivating.
So, yes, that's fundamentally "a matter of funding". It can be fixed by academics and bureaucrats agreeing to switch to some other system, on an international level. I think if you got the top 20 countries to coordinate, the rest would follow suit. Any bets on when that will happen? ;)
Here is an example showing that even the highest-profile journal can lack ethics: circa 2005, Nature published a paper comparing a selection of scientific articles from Wikipedia and the Encyclopedia Britannica. The editorial board of Nature selected the articles and sent them to reviewers. They only published metrics and a few quotes of their data (the list of selected articles and the reviews). The results were surprising and made a lot of buzz. But Britannica noted that one of these quotes was a sentence that was not in their encyclopedia. Nature had to admit that when they selected some Wikipedia articles and could not find the equivalent Britannica article, they sometimes built it by mixing articles and adding a few sentences of their own. Obviously, the process was totally biased, from the selection to the publication.
The version that is more difficult to detect is when a cabal of colleagues agree to push each others' papers in this way. So editor A says "you should really quote authors B, C and D." And somewhere else, editor B is saying "you should really quote authors A, C and D."
Machine learning might be a way to tackle this at scale, by teasing out these associations. Of course, this relies on a degree of transparency. Some journals publish all editors' comments and all revisions of a paper. This is a Good Thing, but humans aren't reading all published research, let alone all the meta data.
If someone with relevant ML skills wants to address this, and fancies starting a project, do get in touch :)
A note on the Chinese insinuations that have been mentioned: As always, it's a bit more complex. There may well be reasons that some states might sponsor or 'encourage' gaming of intellectual institutions. If the world is viewed as a zero-sum game, and the currency is power, this unfortunately seems inevitable. Science tends away from this and towards collaboration, but 'politics' often seems to tend toward competition. I've seen university heads explicitly declare to all staff how they intend to game the national rankings, and nobody bats an eyelid, it's business as usual. It's daft and harmful, and frankly I think it requires hard effort from idealistic grassroots activists to address it. Societal improvements are often won through struggle, they're not given away, they don't happen by incremental evolution.
More worrying, what does it mean for science if we can't distinguish between a self-serving cabal and genuine good intentions?
I know about the politics too, that's the main reason why I never went to pursue an academic career, but being honest I never witnessed such plain fraud in my UNI. It was more of a friends-get-all scheme.
I'm sorry the story ended badly :) and yes - I've lowered the bar, sadly.
The university removed all of them from the research group and said they could continue working on the data because it belongs to the university.
3 months later:
- investigations of scientific fraud against the people leaving (neglecting authorship because the data could after all not be used and the head wanted a say in the articles, i.e., change them completely). Also some random other allegations that didn't stick.
- police investigation of defamation (because they reported the scientific misconduct and some other misleading statements used by the head in sales for a research-related product)
- the university now expects them to contact the head of the ex-research group to clarify questions of authorship
- the head meanwhile continues as before
Check out her Twitter if you’re interested in the topic: https://mobile.twitter.com/microbiomdigest
For the haters, this is not racism but nationalism, China super incentivizes bullshit research at a high level these days, and it's gotten bad enough that we're starting to distrust any "work" that comes out of it.
I don't know what the solution is, other than to subject Chinese submissions to more stringent and specifically non-Chinese review.
That's absolutely nationalist, and arguably racist, but it's also smart.
I have noticed in English-language discourse that often, shall-we-call-it, “non-white countries” are “races” but “white countries” are “nations”.
Also, Christianity and Judaïsm are religions, but Islām is a race.
Explain that to me.
Muslim here, that sounds absurd. In fact one of my biggest annoyances is when people view all Muslims around the world as single entity. Every stupid trait of every Muslim majority culture gets blamed on entire Muslim world.
The difference is that in general in Dutch discourse, such statements are considered racist or betraying such a mentality, and frequently protested, but, in English-language literature, even the “left” that claims to champion the causes of all these “races” and “religions” still very often writes in a way that betrays a mentality that some religions and countries are “races” and others are not.
Just as you'd expect from a "Chinese researcher", you're going to have to qualify this statement for your point to hold any weight.
I think science should fix itself. Just publishing papers should not be the metric that gets rewarded. A retraction should seriously reward the flaw finder (like sometimes happens with exploits), and really harm the flaw's author/publisher: both scientist and journal.
I remember well when the public was very trusting, including me, and in hindsight it was always undeserving of such faith.
It was a very misguided thing to take a conclusion as fact, so long as it be called “science”, for often upon closer inspection the methodology was dubious, and it was never attempted to be reproduced, so even if the methodology were sound, the data could either be a fluke, or outright fabricated.
This is not a new development; if anything, the critical stance is the new development. It has most likely been going on for centuries that completely fabricated data stood the test of time because no one bothered to replicate it. When I was at university in the 2000s, we were already told of respected researchers who fell from grace when it was found they had been fabricating data for decades, and it took that long for someone to catch wind of it, as no one bothers to replicate research in this world.
The only new development is that now, some are starting to.
“Science” is not enough to believe it; the methodology must be inspected and found to be salient, and the data must have been replicated at least once, præferably more, by another independent group.
The problem is man's arrogance that it knows, that it can find a solution to every quæstion it asks.
“Science” is also not even close to merely “not infallible”; it is a complete coinflip whether any peer-reviewed result is even worth the paper it's printed on.
Dare I say it's under that, because it's a coinflip whether the data are even reproducible, but the conclusions derived from the data, even if they be reproducible, are almost invariably involving bigger leaps of faith than making data up.
Eh, I'm not sure bad studies are the cause.
Scientists, especially doctors, wanting to use their authority in some debates while two of them can be saying completely opposed things may, however, contribute...
What is happening is that the bad studies are being used for policymaking.
Examples: the "nutrition pyramid" that encouraged carbohydrates and blamed health issues on animal-based food was later found to be based on research that was blatantly corrupt, with researchers getting bribes from the food industry to manipulate or hide results. (A case of hiding results: one researcher who found that vegetable oil decreases blood cholesterol also found out WHY it happened, but omitted that part from his paper. The reason is that cholesterol is needed for cell maintenance; consuming only vegetable oils causes a deficit of it, so the body pulls cholesterol from the blood to repair itself, and even that might not be enough, with some people suffering damage.)
Or a lot of pharma circlejerking that turns into law or regulations.
Or the paper mentioned in the article, about video games and aggression, with many countries passing laws regulating video game consumption based on such papers.
Or the original reason cannabis was banned (long story short: part of the reason is that they wanted to ban hemp fibers, which were an obstacle to some newly invented synthetic fibers; some of the government people involved held stock in DuPont and other fiber companies, and "accidentally" banned hemp fibers while "trying" to ban the drug, based on manipulated and fraudulent science).
Or, more seriously: the papers that recommended "austerity" and basically destroyed the livelihoods of millions of people were later found to have math errors that changed the conclusion completely.
And the list goes and goes on.
The authors' behaviour is outrageous, but this story is also about a broken reviewing process, partly due to wrong incentives.
Nowadays, when you see media articles about new COVID-19 research, those articles often include 'hasn't been peer reviewed yet' or 'reviewed by other scientists' or some such verbiage, either as a disclaimer or as 'now it must be true'. But that's not how it works; it's not because something has been 'peer reviewed' that it's 'The Truth' or 'Real Science'. Peer review, in reality, just weeds out (most) quacks (although in the OP's case it seems it didn't even do that) and checks that the paper is not completely out of touch with what is happening in, and known about, the field. It's not QA of the work itself.
(I don't care to debate whether it should be, or whether more money should be spent on replication etc.; I'm just providing some real-world context on something that is quite opaque to, and often misunderstood by, those not in academia.)
That's the theory. The reality is that there is no in-depth review. You're lucky if a reviewer actually reads the paper all the way through, let alone checks the numbers and applies a level of critical thought to the methodology, analysis and conclusions.
"Peer-reviewed" by whom?...
Now, most of these papers were tiny. They effectively were "Run one simulation, get one interesting but tiny result, publish". To me, that's 'salami slicing', and journals should not accept papers that should have been larger studies. But he's carried on with this, has now completed a PhD and has a permanent position at a Japanese University.
The main issue is the sheer amount of papers being published and the lack of capacity of the body of experts to read all of it. I guess it’s the professionalisation of research.
People publish papers to improve their rankings and not because it’s relevant.
This is a slow-moving disaster for scientific credibility, and therefore for national safety and security.
There's going to be a point within two decades where the "reproducibility crisis" is not a localised phenomenon, and "expert" misconduct is paraded out by the papers, totally destroying our society's ability to govern itself based on expert information. The early stages are already here (anti-climate, anti-vax, etc.).
Edit: that's raw body count. I wouldn't know how many people could actually spot the errors mentioned in the OP.
> For example, one paper reported mean task scores of 8.98ms and 6.01ms for males and females, respectively, but a grand mean task score of 23ms.
A 9th grader should be able to find that inconsistency, if you give them the table and tell them to find the number that is wrong.
(the other stuff is harder to detect, and I fully understand that you can't request and re-process the raw data for every paper you peer review. Some of these numbers....)
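To put numbers on why that particular inconsistency is so easy to catch: a grand mean is a weighted average of the group means, so it has to fall between the smallest and largest of them, whatever the unknown group sizes are. A minimal sketch of that sanity check (the helper name is mine, not from any paper; the figures are the ones quoted above):

```python
def grand_mean_possible(group_means, grand_mean):
    # A grand mean is a weighted average of the group means
    # (with the unknown group sizes as nonnegative weights),
    # so it must lie between the smallest and largest group mean.
    return min(group_means) <= grand_mean <= max(group_means)

# The quoted numbers: male/female means vs. the reported grand mean.
print(grand_mean_possible([8.98, 6.01], 23.0))  # False: 23 is outside [6.01, 8.98]
print(grand_mean_possible([8.98, 6.01], 7.5))   # True: a value a real average could take
```

No raw data needed; the reported summary statistics contradict each other on their own.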
Lesson learned: in the future, you give them what they want and attach large error bars.
I changed course after that, as part of science should be explaining bad results.