I think this is a fantastic example of how a) anyone can call bullshit on science if it doesn't make sense and b) there are more and less correct ways of calling bullshit on established science.

You don't need to hop on a podcast or radio show in order to question the orthodoxy of a scientific claim, and in fact you shouldn't if you want to be taken seriously. Nick Brown would have hurt his case had he done that, and instead found a path that actually knocked down a faulty concept in a way that has lasting effects.

This is the process working. This is all okay, and welcome, IMO, though possibly not by everyone involved.




I'm not sure I agree. In his own words, he had to shop around for a respected authority so that it would be taken seriously.

Once he had that, the journal gave a bullshit response that two months had passed since the original article was published, and so it was not eligible for publication. Sokal, through his connections, went to the CEO to get that decision overruled (edit: by threatening that the journal would be humiliated by refusing to publish it). The original debunker would never have got that far on his own.

To me, this sounds like a process that is far from some sort of egalitarian "judge ideas on their merit rather than the person presenting them." If he hadn't gotten Sokal involved, I wonder if this would have been published at all.

Edit: also, this casts a pretty negative light on the field, given that it's been cited 350+ times in this field. Not one of those other papers dug into the content (or understood it) enough to question it, and/or raise the alarm bells. It's one thing if this were some obscure paper that never got eyes on it, but that is obviously not the case.


I agree completely with criticisms of the journal - clearly their process is flawed in a multitude of ways. But regarding "shopping around for a respected authority to be taken seriously" - I think this is actually an important lesson.

The problem with the vision of an egalitarian meritocracy in science (or any other field, honestly) is that reviewing, understanding and critiquing someone else's work has a non-zero cost - actually quite a large cost if done correctly. Also, the people most qualified to do this analysis generally loathe doing it, as they'd rather be spending time on their own original work. Inevitably, people develop heuristics to estimate the credibility of every new email or article that comes across their desk, to decide which is even worth reading.

Understandably, one of those heuristics is credentials/authority - good articles from non-experts in a field are extremely rare and massively outweighed by crackpot submissions. So what does a smart non-expert with a good idea do to hack this system? "Bootstrap" your authority, by first finding someone with more credentials than yourself, but not so much notoriety that they're too busy to read your email. If you can convince them, they'll bring it up the chain to the next-highest authority they can convince, and eventually it'll have some important names behind it and people will pay attention. Brown was able to accomplish this with only two collaborators, so I consider him a master bootstrapper for this :)

Everyone wants to be the patent clerk slaving away in isolation who pops out with a new revolutionary theory of the universe, but it's just not realistic these days. On the internet, a non-expert trying to rapidly gain credibility on their own is indistinguishable from social media hucksterism/"influencing". If you want to break into a new field, find some experts in that field and work with them!


"Everyone wants to be the patent clerk slaving away in isolation who pops out with a new revolutionary theory of the universe, but it's just not realistic these days."

And note that that patent clerk wasn't just some patent clerk. He had published four papers (plus a bunch of reviews) and finished his dissertation by 1905.


Juxtaposing this with Nakamoto's creation of Bitcoin and his original use of cryptography, it makes no sense to me how anyone would have ever paid even an ounce of attention to what some random, supposedly unknown guy on the internet had to say about ... anything. Or how Perelman even got someone to want to verify his solution. Maybe not so much Perelman, since he already had some important figures who knew him.


Bitcoin received early support from high-reputation folks in the cryptography community, including Hal Finney and Nick Szabo. (As a result, some people thought that one or both of them were Nakamoto.)

Regardless of who Nakamoto actually was, it does appear that Finney and Szabo played a role similar to the one Sokal played in this story; getting support from high-rep folks was essential to getting Bitcoin off the ground.


Perelman was widely known as a math prodigy (he won gold for the USSR at the International Mathematical Olympiad) and studied at Leningrad University, so he could approach pretty much any of his former professors to take a look. He was definitely not an outsider in the field. A recluse, yes, but that is different from being a no-name.


Yeah Bitcoin is such an interesting exception! I think some ideas are viral enough on their own to go from zero to front-page on HN/reddit/etc. even when posted by an anonymous user - but it's rare to go beyond that, they usually run out of momentum.

Bitcoin had a few things going for it - the whitepaper is concise and well-explained, in perfect academic prose and typeset in LaTeX, which lends it an air of credibility. It came with an implementation, so hackers could experiment with it immediately, plus it sort of has implicit libertarian anti-government undertones, both of which encouraged follow-on blog posts.

Once the ball got rolling, another kind of heuristic took over - the money heuristic. People put money into the thing, which made other people go "oh look at how much money they put in, this is a real serious investment thing!" and put in their money, in a sick feedback loop that continues to this day...


> Not one of those other papers dug into the content (or understood it) enough to question it, and/or raise the alarm bells.

Quite the opposite: the article itself found that many of them did question it (and quotes three such researchers). However, "raising the alarm bells" and disputing such claims requires exceptional diligence (more than went into the original article) and lots of thankless work (like Nick Brown and his two collaborators put in) that's not likely to be rewarded (you'd be lucky to even get it published in an appropriate venue). So they quite reasonably went on with their own research agendas instead of letting their actual work languish while running a debunking campaign.


Science can only be as good as the people doing it.

The problem is that most papers, especially on very complicated topics, are not science and just serve the purpose of proving a point or helping someone's career.

Even without taking easy jabs at psychology (you would need a full understanding of the brain's inner workings and its evolution over a lifetime to have a 100% clear picture of what's happening - we probably have 0.0001% - it's all based on observations and interpretations), the reproducibility crisis is affecting a lot of areas.

Hopefully with more time and more people getting involved in the field, we'll get more and more science.

If I think about nutrition / fitness, comparing 20-40 years ago with now, we've made great progress: we went from "Fats are bad and make you fat" (pure propaganda to sell processed sugars) and "BULK on a 4,000-calorie diet" to any video of Jeff Nippard quoting incredibly detailed studies about the minutiae of nutrition and building muscle.


The journal publishers (like FB et al.) are commercial entities designed to make money from desperate academics trying to publish. They thrive on novelty and readership, but also on "respect" (rather than ads and "engagement") from not having their occasionally outrageous papers questioned. Sometimes, to stoke or avoid scandal and increase readership, they might include a debunking paper, but it's important to remember that these journals are not bought by individuals but by departments, which won't stock "less respected" journals.

Now, if there had been a respected public psychology preprint service like PsyArXiv at the time, that might have made things slightly better.

https://psyarxiv.com/


I'm not sure I agree. In his own words, he had to shop around for a respected authority so that it would be taken seriously.

I don't see how that contradicts the person you are replying to. Ze is saying there is a path to contradicting established science. Ze didn't say that it was easy, or that it didn't involve working with people who are in "the establishment" - just that jumping on podcasts and talk shows and spouting off as an outsider with no credibility isn't the best way.


That probably depends on your goals.

If your goal is to try to reform from within, then sure, spending months waiting for BS replies from journals is the best/only way.

If your goal is to spread the word that psychology is unreliable and you don't care much about what psychologists themselves think, then podcasts and talk shows are a far more effective strategy.

Psychology isn't really a very harmful field - bad claims in psychology mean maybe some people waste time on a useless self-help method, or don't get proper psychiatric care when maybe they should. But a lot of low-level damage still adds up, and there are a lot of people who listen to psychologists. If you go to the media (NB: which is what Brown is doing here), then you can potentially have a more positive impact on balance.


Is it a path, or is this case a fluke? That's the question.


> This is the process working. This is all okay, and welcome

No! This sort of thing casts enormous doubt on the field of psychology and many soft sciences as a whole. A lot of it is academic politics and people being afraid to call things out for fear of hurting their career prospects. It's much easier to just stay silent, ignore it, and play along. Who knows who will sit on your next grant proposal's committee. Academia is full of this and it's a disgrace.


I, for one, will never trust another “finding” coming out of the field of psychology.

Between this and the marshmallow study, I think psychology should be considered an art. We’re many centuries away from this progressing to being a science on par with physics.

There’s nothing wrong with that, a wise therapist has much to contribute - but not to science.


There's an unfortunate tendency to treat science as the only source of truth. Really, psychology (like engineering and medicine) is a profession. Professions aren't just based on the scientific method -- they draw from a semi-formal pile of experience known as "best practices." These are often just based on something that a clever person decided to try decades or centuries ago which seems to have not hurt anyone. Of course it is an iterative process, so sometimes these things are overturned, but the process seems to work alright.


Spot on. I’ve met many psychologists who struck me as kind, wise people with a helpful perspective. Society needs them.

But to plug this fuzzy, professional knowledge into equations, call it science, and use it to make decisions in business and policy is at best naive and at worst fraud.


> psychology (like engineering and medicine) is a profession.

Throwing engineering in there with psychology and medicine is not a very good grouping of disciplines. At least not if "engineering" means the kind of engineering that professionals in the field have legal liability for (like the engineering that goes into designing bridges and buildings).

Engineering does have "best practices", but those practices have to operate within limits that are very well understood based on underlying physical science that has been thoroughly tested and for which we have mathematical models with excellent predictive capability. The "best practices" in psychology and medicine have nothing like that kind of support.

> the process seems to work alright

Engineering, at least the kind that designs bridges and buildings, seems to work all right, yes. I'm not sure I would say the same for psychology and medicine.


If you actually talk to engineers who have to follow regulations and building codes, they will tell you how often the rules are nonsense and don't make technical sense: they have to tick checkboxes saying that something is fulfilled that doesn't even make sense in the given context, etc.


I certainly agree that many local regulations and building codes are not there for engineering reasons, they're there for political and economic reasons that have nothing to do with good engineering.

But it's also true that any of those engineers who say that a particular regulation doesn't make technical sense, will be able to explain to you in detail why it doesn't make technical sense, based on the kind of theoretical and practical knowledge I described, the kind that is backed by well-tested models with excellent predictive capacity.


I think the differentiation between "science" and "profession" is a bit crude. You obviously don't include mathematics in this, but it is a form of deductive reasoning as far away from science as the arts are. Deductive reasoning is just far less prone to fault than inductive reasoning, be it mathematics or philosophy, and the methods we have developed in some of the natural sciences over the centuries to model real-life behavior with mathematics are really great.

You are right that these largely don't exist in the Social sciences, but this is at least partially due to the much more complex subject matter. Still, both the social sciences as well as the natural sciences are trying to approximate reality through models built on experiments, and are thus fundamentally different from deductive reasoning in closed systems.

Not trying to mince words here, but engineering is applying validated knowledge to solve real-world problems. Sure, you can call it a profession, but the fields generally associated with engineering are still in the business of generating knowledge. Medicine and psychology heavily rely on the scientific method to validate abstract concepts, while engineering-adjacent disciplines like computer science rely heavily on maths and deductive reasoning to solve problems in the space of algorithms and computers.


You obviously don't include mathematics in this, but it is a form of deductive reasoning as far away from science as the arts are.

I disagree strongly with this. To reduce mathematics (and philosophy in your next sentence) to not-a-science, just because it's based on deductive reasoning is doing it a major disservice.

It is especially because of its deductiveness that mathematics is such a valuable tool for science. It answers the fundamental question "assuming A, what can we prove to be true about A' and A''?". Without that deductive proof, many inductive scientific theories could not even be formulated, let alone disproven.

And then you go on stating that engineering is applying validated knowledge to solve real world problems. Do you realize that much of that validation of said knowledge has been done through mathematical models? That the only reason we have certainty in engineering is because of the deductive rigour of mathematics?


Mathematics & philosophy aren’t sciences because there are no hypotheses, experiments, and results; it’s just thinking and logic.

It’s still incredibly valuable and the basis of many good things.

I would also add that engineering isn’t only working with existing scientific knowledge; we were building steam trains and optimizing them long before we started doing scientific thermodynamics.


Generally in math, they call hypotheses "conjectures," and proofs are similar to results.


Science and math are definitely good friends and so I'm sure lots of analogies between the two could be made, but I believe the comment was getting at the epistemological difference between 'proven' (the happy state in mathematics) and 'not contradicted by experimental evidence' (the happy state in science).


There was an interesting discussion on HN last week that brought up the fact that mathematics has its own crises, namely a communication crisis. It was brought up that proofs can be so dense that errors will be published and go unchecked for a long time. The interesting part is that everything that's needed for "replication" is literally within the paper, and the errors still fall through the cracks.

There is still an awful lot of engineering that is model-based without strong deductive proofs. (Mechanical failure theory comes to mind). But the actual origin of "profession" comes from professing an oath to the public good. Meaning the traditional professions (law, medicine, engineering) aren't necessarily aimed at creating new knowledge, but applying knowledge to the betterment (or profit) of society. Sometimes that butts up against problems without known solutions that requires adding knowledge to the field, but that's not really the primary goal, unlike something like mathematics.


I'm really not sure what you are getting at. Psychology is pretty definitively a science. So is a massive chunk of medicine. Science holds extra weight because it involves at least attempting a verification step following an idea.

Treating science as gospel is stupid. But without personal experience (and frequently even with it), stating something as true is largely meaningless. Stating something is true while providing data to back it up, as well as evidence that other subject experts have checked that data, may not be perfect, but that doesn't put it on equal footing with a gut check.


How could you possibly know that the "process seems to work alright" without scientific evidence that it's doing so? You actually have no idea in that case whether you're doing harm or helping, and if you're somehow helping, it could just be a very expensive placebo.


I mean, I don't work in mental health, but if somebody has mental health issues that they are dealing with, and they buy an "expensive placebo" in the form of talking about their problems with a professional, and it helps... that seems like a successful treatment, right? I'm actually not even sure how to define placebo in this context.


I think it's important to focus on "expensive" and not "placebo" in that statement. If a placebo is all that's needed (or all that we can do), then arguably it shouldn't be expensive.


Lots of things seem oddly priced from my point of view, this is one of them, but who are we to argue with the market, right?


The market priced it based on the assumption that it works better than placebo.


If what you've read here concerns you to the point of throwing out the field, you're going to be immensely disappointed by every other scientific field as well. [0] I'm not an expert, but my understanding is that this article could have been written about literally any field of science, with depressingly few nouns changed. [1]

This is what people mean when they say, "science is messy". This is how the sausage gets made, and if you're cool throwing out psychology as a field, you're probably also going to have to be cool throwing out a lot of other fields. [2]

There are efforts being made to clean up how science is conducted, called "metascience", but it's a relatively new field and there's still a lot that hasn't been investigated yet. [3]

Basically, if you've ever thought of yourself as "pro science", you've been supporting what happened in this article, whether you realized it or not.

[0] https://www.nature.com/articles/533452a

[1] https://www.scientificamerican.com/video/is-there-a-reproduc... -- a slightly more accessible version of [0]

[2] https://direct.mit.edu/posc/article/28/4/482/97500/Mess-in-S...

[3] https://en.wikipedia.org/wiki/Metascience


I think we can maybe inject a little bit of extra nuance here.

Not every field is the same, they vary massively in size, level of external influence, and complexity in subject matter. Psychology suffers more than most. It's a large field, so competition to get noticed is likely high. There is money to be made, as discussed in the article, so it struggles with influences outside of pure scientific inquiry, and they have no chance of describing their theories in terms of pure component mathematics.

Contrast that with ocean and atmospheric sciences. It is a tiny and very insular field, so the rat race works very differently. This also protects them from outside influences, and a lot of the major underlying physical and biological processes involved can be modeled with mathematics that have been demonstrated to be predictive.

It's just easier for some fields to reliably produce higher quality work. I think this is a case where blanket skepticism is just lazy. Every field suffers some level of a reproducibility issue, but some fields are better able to self select for a variety of reasons.


Thank you for saying this. I mean, Computer Science is up there in the junk science department. Coming from a physics background, it's hard to even call most of what happens in CS "science".

Go to any CS conference and you'll see a whole bunch of papers doing micro benchmarks, saying that their method is X% faster than all the others, where X% is usually some very low percentage (and of course no error bars are visible on any plots). You talk with the guy presenting the research and you find out that half the benchmarks were done on one machine, while the other half were done on another with different specs. Well, okay, now how can I trust the benchmarks? And how many other papers are doing this? And is your result even meaningful anymore? It's so depressing.
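To be concrete about the error-bars complaint, here's a rough Python sketch of the bare minimum a benchmark section could report (the functions and workload sizes are made up for illustration, not from any particular paper):

    import statistics
    import time

    def benchmark(fn, runs=30):
        """Time fn() repeatedly; report mean and spread, not a single number."""
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        return statistics.mean(samples), statistics.stdev(samples)

    def method_a():  # stand-in for the baseline being compared against
        sum(i * i for i in range(100_000))

    def method_b():  # stand-in for the "X% faster" method
        sum(i * i for i in range(95_000))

    mean_a, sd_a = benchmark(method_a)
    mean_b, sd_b = benchmark(method_b)
    speedup = (mean_a - mean_b) / mean_a * 100
    print(f"A: {mean_a:.6f}s +/- {sd_a:.6f}")
    print(f"B: {mean_b:.6f}s +/- {sd_b:.6f}")
    print(f"claimed speedup: {speedup:.1f}%")

If the run-to-run spread is comparable to the claimed X%, the headline number is basically noise - and that's before you even get to the different-machines problem.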

And don't even get me started on the AI/ML field. My god, talk about art over science.


A lot of AI/ML work is sloppy, but well-known methods work reliably and textbook descriptions make sense. The fact that there's a long tail of shitty student papers doesn't invalidate the field. Year after year, methods genuinely get better. Whatever process produces this progress, it produces this progress.


Also it is very common for these benchmarks to conveniently leave out the fastest current known methods for unclear reasons.


> you're going to be immensely disappointed by every other scientific field, as well.

Not every other field. We do have scientific theories that are reliable. But the reason they are reliable is that they are nailed down by extensive testing in controlled experiments that have confirmed their predictions to many decimal places. (I am speaking, of course, of General Relativity and the Standard Model of particle physics.) So we do know what actual reliable science looks like.

The problem, of course, is that there is not a lot of other science that actually looks like that, but it still gets called "science" anyway, and treated as though it is reliable even though it's not.


I said "every" because even some of the best work science has done, General Relativity and the Standard Model, were to my understanding an incompatible set of theories.

I did a bit of brief Googling, and it looks like some of that has been resolved and it isn't seen as totally incompatible anymore, but even that's not locked down as of yet.

The Wikipedia page is kind of a mess, but I'll link it here for others to save you a few clicks: https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mo...

Of note, it looks like there's a debate ongoing even within the citations about compatibility/non-compatibility, which I've never seen before on a Wikipedia page (I've done some light editing to separate out the relevant quotes from their related citations to improve readability):

>> Sushkov, A. O.; Kim, W. J.; Dalvit, D. A. R.; Lamoreaux, S. K. (2011). "New Experimental Limits on Non-Newtonian Forces in the Micrometer Range". Physical Review Letters. 107 (17): 171101. arXiv:1108.2547. Bibcode:2011PhRvL.107q1101S. doi:10.1103/PhysRevLett.107.171101. PMID 22107498. S2CID 46596924.

> "It is remarkable that two of the greatest successes of 20th century physics, general relativity and the standard model, appear to be fundamentally incompatible."

>> But see also Donoghue, John F. (2012). "The effective field theory treatment of quantum gravity". AIP Conference Proceedings. 1473 (1): 73. arXiv:1209.3511. Bibcode:2012AIPC.1483...73D. doi:10.1063/1.4756964. S2CID 119238707.

> "One can find thousands of statements in the literature to the effect that "general relativity and quantum mechanics are incompatible". These are completely outdated and no longer relevant. Effective field theory shows that general relativity and quantum mechanics work together perfectly normally over a range of scales and curvatures, including those relevant for the world that we see around us. However, effective field theories are only valid over some range of scales. General relativity certainly does have problematic issues at extreme scales. There are important problems which the effective field theory does not solve because they are beyond its range of validity. However, this means that the issue of quantum gravity is not what we thought it to be. Rather than a fundamental incompatibility of quantum mechanics and gravity, we are in the more familiar situation of needing a more complete theory beyond the range of their combined applicability. The usual marriage of general relativity and quantum mechanics is fine at ordinary energies, but we now seek to uncover the modifications that must be present in more extreme conditions. This is the modern view of the problem of quantum gravity, and it represents progress over the outdated view of the past."


> I said "every" because even some of the best work science has done, General Relativity and the Standard Model, were to my understanding an incompatible set of theories.

If you insist on treating either one as though it were a fully fundamental theory, capable of explaining absolutely everything, then yes, they are not compatible.

But you don't have to treat either theory that way in order to make the claim I was making. Even if both of those theories end up being approximations to some more fundamental theory, it remains true that, within the domains in which they have been tested, they make accurate predictions to many decimal places and have been shown to be thoroughly reliable. Certainly in any situation in which you might have to bet your life on one of those theories, you are going to be well within the domain in which they have been thoroughly tested. And within those domains, there is no incompatibility between them.


Marshmallow study? Is the finding about "wait 5 minutes to get two sweets - success in life" debunked, or are you referring to another thing?


Yes, it has been debunked. They got the causal arrow around the wrong way.

It turns out that kids who are in stable situations with trustworthy adults around them, tend to trust adults more when they say “wait 5 minutes and I’ll give you more”, and that kids in bad situations don’t trust adults. Turns out being in a bad situation as a kid does have a long term effect on your life.

Nothing to do with people's ability to delay gratification.


This xkcd is relevant:

https://xkcd.com/435/

Unless the variables in the experiment are decently quantifiable, it is a garbage-in, garbage-out situation.


More often than not the constructs in the garbage studies are quantifiable, just useless. The bigger problem is the "what" of what is being measured---whether that thing is representative, useful, or important. Part of the reason bad research has the veneer of authenticity is that the numbers, graphs, and statistics dress up the nonsense.


Yes, that is the distinction I was trying to make by writing "decently quantifiable".


The problem with psychology is: the subject matter is the most complex system we know of.

In order to make definitive statements about said system, you need to start with the most basic behaviours and characteristics.

As you work your way up, you'll need mountains of data to capture every permutation of human behaviour and emotion.

We’re so, so far from any of that.


The messier the system, the more rigorous the methodology should be. Pre-registering trials to fight p-hacking, and open data, should be the norm. That's decidedly not what we've gotten from this field, and some prominent figures have even been fighting such measures and consider the reproducibility crisis to not be a big deal. That's the problem.
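To make the p-hacking point concrete, here's a rough, illustrative simulation (a sketch only; the group size, number of measures, and test are made up, not taken from the article): both groups are drawn from the same distribution, so there is no real effect, yet an analyst who tries 20 outcome measures and reports whichever clears p < .05 will "find" something most of the time, while a single pre-registered test stays near the nominal 5% false-positive rate.

    import random
    import statistics

    random.seed(0)
    N = 30          # participants per group
    MEASURES = 20   # outcome measures the analyst tries per study
    STUDIES = 2000  # simulated studies

    def significant(xs, ys):
        # Rough two-sample z-test at p < .05 (normal approximation).
        se = (statistics.variance(xs) / len(xs) + statistics.variance(ys) / len(ys)) ** 0.5
        return abs(statistics.mean(xs) - statistics.mean(ys)) > 1.96 * se

    prereg_hits = 0
    fishing_hits = 0
    for _ in range(STUDIES):
        results = []
        for _ in range(MEASURES):
            xs = [random.gauss(0, 1) for _ in range(N)]
            ys = [random.gauss(0, 1) for _ in range(N)]  # same distribution: no true effect
            results.append(significant(xs, ys))
        prereg_hits += results[0]     # pre-registered: one planned test
        fishing_hits += any(results)  # fishing: report any test that "worked"

    print(f"pre-registered false-positive rate: {prereg_hits / STUDIES:.2%}")        # ~5%
    print(f"best-of-20-measures false-positive rate: {fishing_hits / STUDIES:.2%}")  # ~60%+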


I won't argue with your mistrust of psychological 'findings'.

But: If psychological research is not conducted with scientific rigor, and we treat it as an art, then where does that leave the phenomenology of mental illness, study of social dynamics, etc? Mentally ill people still need to be treated. And historically they have been treated in very, let us say, 'artistic' ways. If we abandon the scientific aspects of psychology, mental illness will still need to be dealt with, but without any framework of rigor or understanding.

While landmark findings of psychology (such as the marshmallow experiment) can be questioned and debunked, a lot of what you'll learn when studying psychology as a science is how little we know about psychology. That's an important thing to learn! Does approach X work for people with debilitating anxiety? In all probability, we don't know, or it does work for some people and has the opposite effect on others. That's the day to day of psychological research. When something exciting and novel comes up (like the marshmallow thing, or the Stanford Prison Experiment), it represents an island of false certainty in a sea of uncertainty and suddenly all the laypeople know about it. Such understanding of our lack of understanding can form a bulwark against, at the very least, political, social and religious ideologies that make sweeping claims about behavior and dynamics.

It's important to quantify what we don't know as well as what we know. Otherwise we'll be taken over by people making extraordinary and sweeping claims based on no evidence, rather than people at least publishing their evidence to be possibly later debunked. In the linked article's example, if psychology were purely art, there would only be the authors' book, not a paper with actual math to debunk and a journal to interact with. I am probably overstating my point, because when a specious claim is backed up by scientific trappings people may be more likely to believe it, but I am increasingly less certain of that.

When it comes to critiques of 'soft sciences' I often come to this. Those fields often came into being because there are actual day-to-day issues that don't go away even if we have arguable theoretical foundations behind them. Absent said foundations, people still need to make sweeping decisions about economics and psychology that affect millions of lives. The scientific part arose from looking at the history of such interventions and wondering if things like exorcising people with visions actually worked any better than not exorcising them. For many of the people being exorcised, it was a 'landmark finding' that exorcism is not the most effective method of helping them with their issues.


> If we abandon the scientific aspects of psychology, mental illness will still need to be dealt with, but without any framework of rigor or understanding.

I think false rigor is worse than no rigor at all.

> In the linked article's example, if psychology were purely art, there would only be the authors' book, not a paper with actual math to debunk and a journal to interact with. I am probably overstating my point, because when a specious claim is backed up by scientific trappings people may be more likely to believe it, but I am increasingly less certain of that.

I think the primary reason people are less likely to "trust" scientific results now than in the past is precisely that some scientists generate "scientific trappings" with the appearance of rigor to justify a bullshit result. Most of the time, no one cares, and those scientists go on with their academic careers as usual. Of course, it's the follow-up careers, books, and headlines that lead to the erosion of trust, not the fake math itself (that's just necessary to gain the approval of other scientists).

When that is attached to something that the average person can blatantly see is wrong, it erodes their confidence in the entire scientific process and establishment. They'll reject the entire concept of rigor before believing some of the things scientists claim to be true. How is a lay person supposed to separate the modeling that goes into e.g. climate change and "a 2.9 positivity ratio is the key to happiness"? To them they look pretty similar, a bunch of mathy looking garbage that tells them something that isn't intuitive.

Personally, I've been forced to sit through a lot of corporate management trainings that are full of citations of titillating psychology results that I know are bullshit. I just don't have the patience or motivation that Nick Brown has to properly debunk them.


I quite agree with most of what you are saying, and you are saying it quite well. My main thrust is that we shouldn't throw the baby out with the bathwater by saying, 'Some people aren't rigorous so let's abandon rigor' (I don't think you're saying that, it's a strawman). Even in the paper critiqued in the original post, there was some rigor (in the midst of clear blindness and hubris), sufficient for the team in the article to take it apart on its own merits.

There are real examples of cases where no rigor is applied at all, and where some facsimile of rigor can improve things in a psychological context. For example, much ink has been spilled on Hacker News about traumatic interview practices adopted and popularized by the FANG companies, such as so-called 'brain teaser' questions. Included in these practices was the widespread notion that interviewees needed to be 'put on the spot', or 'think on their feet'. Interview practice has been studied by the field of organizational psychology for decades, and such practices ran counter, from day one, to several findings (such as the notion that putting the candidate off-balance or in a state of discomfort during the interview, and having them successfully navigate that, would somehow predict job performance). Eventually several large tech companies conducted internal studies and concluded that the practices had no bearing on job performance.

I too have seen BS titillating psychology results, but the antidote is almost always to review the literature. For example you might often hear "conscientiousness on a Big 5 personality test correlates with job performance". Yes it does, but review the literature: it is quite weak and will not do a good job of explaining individual variance.

Let's say I had heard about this magical ratio in the OP's article in a job context. My BS meter would have gone off, most certainly. Some actionable number that can explain a tremendously complex and poorly understood set of dynamics? Hmph! I would have reviewed the literature and found the paper in question. I would have seen the large number of citations. Ok, that gives it weight as an idea, but let's see if the idea has legs. Where are the meta studies showing multiple independent corroborations of the number in different contexts and performed by different researchers? Non existent. As someone who takes science and applies it, for me that puts it firmly in the area of 'very interesting finding, but not actionable'. Honestly I think that's probably what was quietly happening with that ratio even before the retractions. Such findings make for great media news, corporate power points, motivational speech meetings, annoying HR meetings, etc., but hopefully (I do say hopefully :) ) in almost every real world setting if someone told a senior manager, "Your team is going to be measured on this positivity ratio, because science!", that manager would make a stink. Of course, maybe not. I do believe an important skill that needs to be increasingly taught is the ability to parse and understand the landscape of papers on a subject.


I agree. The real shining light here is that a determined amateur was able to get free, volunteer help from inside the community to weather the publication process.

Those who read about the review / revision process in this article and were dismayed / convinced the process was flawed should realize this is precisely how review works in many fields. It takes a year to get a paper reviewed and revised, and a "25 page" (TFA) statement of revision is par for the course. Not "all" will have the stomach for that, but once you are part of that process, it's normal.


You consider having to threaten the reputation of an academic journal by emailing the CEO directly, before a well-written, concise, critical paper even makes it to review, to be "part of that process"?

Not even mentioning that it required the publicity and reputation of a well known figure (Sokal) to even open that door for them.

If that's the "shining light" of this industry - I sure as fuck don't want to see the dirty alleyway.


I didn't say that.

I said the shining light was insiders helping.

I also said the peer review process is challenging and that was an accurate representation.

Two different things.


Is Sokal really an "insider" to psychology?


I'd argue no. He's a mathematician and physicist.

You could claim that Harris Friedman is an insider - although I don't find a nearly retired (at the time - now actually retired) college professor to be particularly "inside" the journal space, but he is certainly a member of the field.

The people who were insiders (Barbara Fredrickson and the reviewers at American Psychologist) are decidedly unhelpful and uninterested outside of throwing folks under the bus and equivocating around how so much fucking fraud/bullshit ended up in their papers.


I understand what you mean, but I see a series of lucky moments that all had to happen for the paper to be published.

Sokal picked it up, even though he usually doesn't. The journal rejected it and they had to give it an extra push, etc.

You can see how many rebuttals done without podcasts are never heard of when they go through the official path.


I understand what you mean, but I see a series of lucky moments that all had to happen for the paper to be published.

The role of luck is often under-estimated.


> The role of luck is often under-estimated.

Then so is the role of bad luck.

Betcha there are more papers that don't have the luck this one did than there are that do, donchathink?


The people that publish and promote bullshit science spend a lot of time on podcasts, so I don't see why the people falsifying their research shouldn't as well.


This is absolutely not the process working.

It took 3 people months of work to debunk a claim.

These resources just don't exist in reasonable numbers and we absolutely need short, piercing, direct call outs that quickly demonstrate that fuckery is going on.

Feelings are the worst thing that has happened to science.


That's an odd interpretation. This seems less like a system doing what it's supposed to do and correcting itself, and more like an instance of that system failing to do what it's supposed to do and then correcting itself.


The article describes events set into motion over a decade ago. People calling bullshit on things that happened over the past couple of years have had a much worse time of it.



