To Fix the Social Sciences, Look to the “Dark Ages” of Medicine (mitpress.mit.edu)
85 points by anarbadalov on June 15, 2019 | 64 comments



Lee has spent his entire career grappling with the issue of science denial — he’s the author of books on post-truth and defending science from fraud, the latter of which he drew on for this essay. Here, he holds the state of the social sciences up to the prescientific “dark ages” of medicine, an unlikely source for guidance, and lays out a path forward.

"Like medicine, social science is subjective. And it is also normative. We have a stake not just in knowing how things are but also in using this knowledge to make things the way we think they should be. We study voting behavior in the interest of preserving democratic values. We study the relationship between inflation and unemployment in order to mitigate the next recession. Yet unlike medicine, so far social scientists have not proven to be very effective in finding a way to wall off positive inquiry from normative expectations, which leads to the problem that instead of acquiring objective knowledge we may only be indulging in confirmation bias and wishful thinking."


The article did not acknowledge the two biggest problems with the social sciences: the inability to run many experiments for ethical reasons, and the fact that these disciplines often study human social systems whose "laws" are mutable.

There's not much we can do about the first problem except slowly accrete knowledge through retrospective study of actual events, and maybe fill in some gaps with simulation.

The second problem is more solvable. I do believe that there are actually laws in the social sciences, but that they are far fewer in number than the phenomena people want to study. Many social phenomena are simply artifacts of existing systems, and may shed light on behavior in a particular case but are not generalizable beyond that system.

The law of supply and demand is a good example of a universal law, in large part because it lies at the junction of physical systems and social systems. The physical aspects of supply and demand, for example the amount of arable land nearby as well as the caloric requirements of a population, can be measured well because of our physical knowledge and provide a decent jumping-off point for expanding the theory.
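As a toy illustration of that jumping-off point (my own sketch; the linear functional forms and every number below are made-up assumptions, not anything from the article): anchor demand in a measurable caloric requirement, anchor supply in measurable land and yield, and solve for the price that clears the market.

    # Hypothetical sketch: market-clearing price for a staple food,
    # anchored in measurable physical quantities. All figures are placeholders.

    def demand(price, population=1_000_000, kcal_per_person_day=2_100,
               kcal_per_kg=3_500, elasticity=0.3):
        """Daily demand in kg: a caloric floor, mildly price-sensitive."""
        base = population * kcal_per_person_day / kcal_per_kg
        return base * (1.0 - elasticity * (price - 1.0))  # assumed linear response

    def supply(price, arable_ha=50_000, kg_per_ha_day=15, responsiveness=0.5):
        """Daily supply in kg: capped by land and yield, rising with price."""
        capacity = arable_ha * kg_per_ha_day
        return capacity * min(1.0, 0.6 + responsiveness * (price - 1.0))

    # Bisection on excess demand finds the price where the two curves cross.
    lo, hi = 0.1, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):
            lo = mid   # excess demand -> price rises
        else:
            hi = mid
    print(f"clearing price ~ {mid:.2f}, quantity ~ {supply(mid):,.0f} kg/day")

The point is only that the physically measurable inputs (land, yield, calories) pin down the scale of the problem; the behavioral parameters are the part the social sciences still have to estimate.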

I disagree that some things are not quantifiable. Everything can be counted, from the number of neural connections in our brains to the number of widgets that a factory can produce. As usual, the limiting factor in science is not a lack of theory but a lack of instrumentation and thus data. I am basically an optimist about the social sciences as our measurement abilities continue to grow. However, we are in the dark ages regarding the tools we have to collect data.


I'm always amazed that someone who by all rights should be intimately familiar with the history of social science research, like McIntyre, can seemingly be so unaware of it. There was an empirical, modernist period in the social sciences from the 1940s to the 1980s of exactly the kind Lee suggests, and it was an absolute disaster. The results couldn't be related to the real world, and no one was really able to get over the inherent [1] influence of the author in the data. After a lot of arguing and furious paper writing in the late 80s-90s, everyone began working on how to deal with that bias, moving it front and center where it can be handled instead of subtly hiding it in a bunch of numbers and pretending it didn't exist. Does it make people who aren't familiar with what's going on think everyone is a bunch of raving lunatics? Yes. It creates a heck of an image problem, not helped by the fact that some smaller percentage of academics are ideological lunatics. But it's far more manageable for their peers to deal with than trying to solve an impossible problem.

[1] If you don't understand why that's an inherent problem, ask yourself whether a white male like myself would get the same results doing research on menstruation in Kabul as an Afghani woman would, or an Arab doing sentiment polls immediately after 9/11.


This is an intellectually lazy comment. It is by no means obvious that the gender and ethnicity of the researcher would impact that hypothetical experiment, or what the impact would be, or that it couldn't be avoided with careful experimental design. If the social sciences had a better general track record of producing reproducible results with practical value for society then I would be inclined to give them the benefit of the doubt. But the reality is that most researchers have squandered their credibility by chasing mirages. If it were up to me I would cut funding for the entire field.


> If the social sciences had a better general track record of producing reproducible results with practical value for society then I would be inclined to give them the benefit of the doubt. But the reality is that most researchers have squandered their credibility by chasing mirages. If it were up to me I would cut funding for the entire field.

Is that not the definition of intellectual laziness? You've just given up intellectually on an entire field.


Why not go one step further and only fund math, because in the end everything is math, right? No need to fund physics, which is just applied math. No need to fund medicine, which is just applied math. And medicine is a field filled with terrible studies that can't be replicated, so clearly it can't be good for anything.


> The results couldn't be related to the real world, and no one was really able to get over the inherent [1] influence of the author in the data.

Wait. Do I understand you correctly that an objective approach failed and your conclusion is to be more subjective?


The "objective" approach was never actually objective. It had just as many biases and issues as everything after it, but everyone sort of pretended it didn't. The change was recognizing that the same data wasn't objective and that researchers couldn't make it so. Instead, theory shifted to analyzing those inherent biases and studying how they affected research.


> The change was recognizing that the same data wasn't objective and that researchers couldn't make it so. Instead, theory shifted to analyzing those inherent biases and studying how they affected research.

And yet, the best tools for this are underfunded or ignored: replication and publishing negative results.


> The "objective" approach was never actually objective. It had just as many biases and issues as everything after it, but everyone sort of pretended it didn't.

On a related note, I often see similar issues with the A/B testing programs in many startups and large companies. It's not that the objective approach with A/B testing is in any way wrong, it's that the constraints of the tests and the business environment mean it often isn't testing what you think it's testing.

For example, I've seen more than one A/B test where the business goal was to reduce the number of users taking some action (calling customer service, for example). So instead of fixing the underlying problem, or making online help easier or better to use, the A/B test basically just tests making the customer service link as hard to find and as obscure as possible. Run the test for a few weeks, and voila, see how it's a big A/B test winner because support calls are way down!

Not all examples are as egregious as that, but fundamentally virtually all of the A/B tests I've seen are limited to a relatively short time period, and aren't really able to test the longer-term, follow-on effects on brand reputation or customer sentiment.
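A minimal sketch of that failure mode, using entirely hypothetical numbers and a hand-rolled two-proportion z-test: the variant "wins" on the short-horizon metric the test optimizes (support calls) while a longer-horizon metric the test never looks at (retention) is worse.

    import math

    # Hypothetical A/B test: variant B hides the "contact support" link.
    users         = {"A": 50_000, "B": 50_000}
    support_calls = {"A": 2_500,  "B": 1_400}   # B "wins": fewer calls
    retained_90d  = {"A": 31_000, "B": 29_200}  # ...but B retains fewer users

    def two_prop_z(x1, n1, x2, n2):
        """z statistic for a difference in proportions (pooled standard error)."""
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    z_calls = two_prop_z(support_calls["A"], users["A"],
                         support_calls["B"], users["B"])
    print(f"support-call rate, A vs B: z = {z_calls:.1f} -> B declared the winner")

    # The harm only shows up on a metric outside the test's window:
    z_ret = two_prop_z(retained_90d["A"], users["A"],
                       retained_90d["B"], users["B"])
    print(f"90-day retention, A vs B:  z = {z_ret:.1f} -> A was actually better")

Both differences are "statistically significant"; the test design just never asked the second question.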


If your data is not good enough, get better data. Don't abandon objectivity. I think this is why a lot of people don't take the social sciences seriously: there's an extreme resistance to just doing objective, data-driven research (and a lack of reproducibility that comes with that resistance).


It took 200 years for the hard sciences to figure out the right way to 'do' objectivity, and arguably it's still evolving (see 'Objectivity'[1] by Daston and Galison, nice summary here: https://scienceobjectivity.weebly.com/daston--galison.html). The social sciences have additional problems; saying 'just copy the hard sciences' isn't reasonable.

In fact, the idea that objectivity can be done in a simple mechanical way doesn't work for the hard sciences either, and was rejected by them in the early 20th century, because it turns out that interpreting the evidence objectively still requires an educated eye. If you've ever had the good fortune (or misfortune, considering the circs) to see one of your own x-rays, you'll know how much you have to take the word of the radiographer for what the photo actually means. Sure, eventually, after much debate, a mechanical formula for interpreting the photo may be developed. Radiographers are probably due to be replaced by AI any day now - but their judgements will have formed the labels of the training data. At the cutting edge of the sciences, you don't have any of that yet.

[1] that book was expanded from their earlier paper which can be found here: http://cspeech.ucd.ie/Fred/docs/Galison.pdf


Again, the issue isn't one of methodology or technology. It's inherent to the research process.

Let's do a thought experiment and imagine two identical research projects done in parallel universes. Everything is the same between these two universes except that the researchers are different people.

In some fields, you'd probably expect the results to be the same. The researcher shouldn't affect objective results after all. In a social science setting, the results may not be the same even if both researchers used exactly the same research design on exactly the same subjects at exactly the same point in their lives.

It should be very clear by this point in the thought experiment that "objectivity" is not really possible under those conditions. You need some theory to deal with this issue, but honestly it's manageable for the most part.

Now, the real kicker is that researchers affect their research way more than this. They actually design the project and constantly make decisions / interpretations throughout. These decisions and interpretations are subjective by definition. Even if we had a magical way to get objective data, are you really still willing to call it objective after a dirty, subjective human has so much as looked at it, let alone done the sort of filtering and interpretation inherent to academic research in all fields?


> The researcher shouldn't affect objective results after all. In a social science setting, the results may not be the same even if both researchers used exactly the same research design on exactly the same subjects at exactly the same point in their lives.

This is true of physics as well. Measurements in quantum experiments are contextual, in that the experimental setup interferes with the system being measured. Measurements conform to statistical distributions. The difference is that physicists took the time to quantify how measurements influence the system, thus generating a practical theory. It's not clear to me that something similar couldn't be done here, even if human systems are, in a sense, messier.
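For concreteness, the quantification the parent is gesturing at is standard textbook quantum mechanics (nothing specific to this thread): outcome probabilities follow the Born rule, and the post-measurement state depends on which outcome occurred,

    \Pr(a_i \mid \psi) = |\langle a_i \mid \psi \rangle|^2,
    \qquad
    |\psi\rangle \;\longmapsto\; \frac{P_i |\psi\rangle}{\lVert P_i |\psi\rangle \rVert}
    \quad \text{on outcome } a_i,

where P_i is the projector onto the eigenspace for outcome a_i. The disturbance is not eliminated; it is modeled.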

In fact, it's already been done to some extent because we clearly know how the phrasing of some questions influences the answers. We also know from tragedies of justice [1] how important this subject is.

Replication and a culture that encourages publishing negative results should be prerequisites for any field to qualify as a science.

[1] https://en.m.wikipedia.org/wiki/Day-care_sex-abuse_hysteria


> It should be very clear by this point in the thought experiment that "objectivity" is not really possible under those conditions. You need some theory to deal with this issue, but honestly it's manageable for the most part.

Tbh, that sounds like a lame excuse. The logical conclusion is that science done that way will never produce truth[1]. In consequence we should stop listening to such "scientists".

[1] As in "can be reproduced and likely gives the same result"


I think it says a lot about the value of the social “sciences” that they can’t be backed by data and aren’t reproducible. It seems that maybe they can be data driven but the results aren’t what the traditional practitioners of the field like to see.

I think we’ll see the role they historically played replaced by machine learning which has more predictive power simply because it’s backed by reality.


It's not that "the data is not good enough". It's that it's easy to obfuscate results by hiding bias in the data, even if unintentionally. Even in the "hard" sciences. Or have we forgotten about some of the awful research being done in machine learning? Remember the one about identifying future criminals from photos?


> After a lot of arguing and furious paper writing in the late 80s-90s, everyone began working on how to deal with that bias, moving it front and center where it can be handled instead of subtly hiding it in a bunch of numbers and pretending it didn't exist.

And the result of that is... what exactly?


Vastly improved methodology and understanding of results. One of the issues of modernist research was that objective data was trash, but no one knew it. For instance, anthropologists would throw out / fail to collect artifacts that weren't usable with the analyses available at the time. You can't recover that from a table that says 50 bone shards were found in a pit. You can begin to recover that if the researcher flat out says they don't believe small bones are useful (an ideological position). Or, to use the example I gave earlier, if both a white male researcher and a female Afghani researcher studied menstruation in Kabul, they very well might get different results without any deficiencies in methodology by either. You can start to explain that if the researchers both flat out state their approaches and assumptions in interacting with their female subjects. Doing that requires us to understand that the researcher affects the results though, which is a fundamentally different view than standard empiricism.


> You can start to explain that if the researchers both flat out state their approaches and assumptions in interacting with their female subjects.

I would think that the foundation behind your argument - which I agree with - implies that this isn't actually possible. You have an infinite regress of biases.

Empirically, what you can do - to take your example of those menstruation researchers - is to get more data from researchers of different backgrounds to try and statistically isolate those biases. This is part of "standard empiricism". You also always have to live with the fact that data is sometimes too noisy to be meaningful.
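A minimal sketch of what "statistically isolate those biases" could look like in practice (my own toy example with synthetic data; the effect sizes are invented): treat researcher background as a covariate, so its systematic effect on reported outcomes is estimated rather than silently folded into the data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic illustration: each row is one study's reported outcome, and
    # 'outsider' flags whether the researcher shares the subjects' background.
    n = 400
    outsider = rng.integers(0, 2, n)              # 0 = insider, 1 = outsider
    true_outcome = 5.0 + rng.normal(0, 1, n)      # the quantity we actually want
    reported = true_outcome - 0.8 * outsider + rng.normal(0, 0.5, n)  # assumed bias

    # Regress reported outcomes on an intercept plus the background indicator;
    # the second coefficient estimates the systematic "outsider" shift.
    X = np.column_stack([np.ones(n), outsider])
    coef, *_ = np.linalg.lstsq(X, reported, rcond=None)
    print(f"baseline: {coef[0]:.2f}, estimated outsider shift: {coef[1]:.2f}")

With enough studies from researchers of different backgrounds, that shift becomes something you can estimate and adjust for, which is the "standard empiricism" route the parent describes; the noise floor still limits what you can conclude.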


The homeless epidemic, mostly. Sometimes doing badly is better than doing terribly. In the name of the perfect and of being 'non-prejudiced', the current generation of psychologists have doomed the mentally ill to living standards lower than the 19th century and a life expectancy to match.


That's a bold claim, perhaps worthy of elaboration.


> After a lot of arguing and furious paper writing in the late 80s-90s,

Can you cite at least one paper from this series?


My background is in archaeology, so that's what I'll discuss. The other social sciences had similar debates around the same time, plus or minus a few years.

If you're looking to understand one perspective from this period, I'd actually recommend Binford's Debating Archaeology [1]. This book summarizes a whole slew of arguments Binford had in the 80s with a number of other archaeologists, including his famous spat with Gould over the limits of empiricism. As it's a summary, it was obviously written after all of this was settled.

Selected works from the era might be Hodder's Symbolic and Structural Archaeology [2] or Kohl's "Limits to Post-Processual Archaeology" [3]. The former was essentially an opening salvo for this debate and the latter is an example of the reactions to it.

[1] Binford, L. R. (2016). Debating Archaeology: Updated Edition. Routledge.

[2] Hodder, I. (1982). Symbolic and Structural Archaeology.

[3] Kohl, P. L. (1993). Limits to a post-processual archaeology (or, The dangers of a new scholasticism). Archaeological Theory: who sets the agenda, 13-19.


The social sciences have gained an incredible amount of control over academia. Why would they allow the current system, which works very well for them, to be "fixed"?


Hmmm, I do not know about that; the test of real, meaningful control would be who gets the funding, no?

The social sciences are nowhere near the top of this list.

https://nsf.gov/statistics/2018/nsb20181/report/sections/aca...

Overall my understanding is that it is harder to be an academic in almost any field these days, but especially the humanities - social sciences are an in-between zone, but I doubt the universities cutting language and history programs consider the social sciences more 'science' than 'humanities'.

Seems to me that there is a stigma around social sciences - both for legitimate reasons, and because, like anything which shines a light on the uncomfortable (systemic bias, racism, capitalism, etc.), people get defensive and afraid and reactive and concoct conspiracy theories.


>> The social sciences have gained an incredible amount of control over academia. Why would they allow the current system, which works very well for them, to be "fixed"?

> Hmmm, I do not know about that; the test of real, meaningful control would be who gets the funding, no?

It may be that social science can exert control, but can't use that control to enrich itself. In that case, scientists who don't conform to the proper norms will suffer from a lack of funding, but the social sciences may not get increased funding.

Sadly, this is a much more difficult case to measure.


> Hmmm, I do not know about that; the test of real, meaningful control would be who gets the funding, no?

Well no. That's arguably showing what politicians perceive the value of each department to be. But it doesn't say anything about power on campus.

The test of control should probably be what the undergrad degrees of the administrators and deans are.


I’m not sure how that’s a good control. Certain academic paths lead to much more lucrative private employment than others, so the degree holders have a huge incentive to go there instead of academic administration.


It has always made me uneasy that much of the data in social science studies consists of 5-point-scale answers by volunteers to questionnaires, or observations of microcosmic situations that do not really represent reality (e.g. attempting to prove a hypothesis about generosity by giving participants $20 each and observing how they spend/gift it under a specially designed situation).

I wonder whether questionnaires should be replaced by more objective metrics, such as heart rate, pupil dilation, blood hormone levels, EEGs of the participants.

The other problem I see is the lack of predictive models in the social sciences, especially psychology. The type of models we have today are akin to 'celestial spheres' in ancient physics and the boiler theory of fever in medieval medicine (which led to blood-letting).


Some questions to ponder: how is a record of behavior (the focus of behavioral science) any less objective than heart rate, pupil dilation, etc? Also, is heart rate a measure of psychological state really? Hormone levels are a mess as a measure of psychobehavioral state. EEGs are useful but are opaque in terms of underlying mechanisms.

People do use all the things you mention, and they're certainly useful, but also have limitations.

If you want to know how someone feels about liberal politics, for example (I'll let you pick the US or Britain), it's far easier and more direct to ask them than to try to infer it from heart rate responses etc.

I'm not saying those other things are useless, only that there's a reason Likert scales continue to be used so much.

Some food for thought from the other side of the coin:

https://aeon.co/essays/the-blind-spot-of-science-is-the-negl...

In any event, my complaint about the piece is that it picks on the social sciences when biomedicine is rife with corruption and irreproducibility itself. There's a kind of bullying that occurs with this; biomedicine is rife with problems so it takes them out on a scapegoat. (The social sciences do have many problems, but many of them apply equally well to other fields.)


Questionnaires are not records of behavior. They are records of questionnaire answers. Aggregates of subjective responses don't become objective.

A record of behavior would be something like Internet activity. Tech companies may be the first to capture enough behavioral data to be able to actually study psychology.


How does biomedicine bully the social sciences?


Likert scale is the term of art


The social scientists do everything for grant money. Foundations don't give grant money to figure out the truth. They give grant money to get someone to make some scientific-sounding propaganda that supports whatever narrative they're pushing.

Who comes up with the narrative to push though, and why?


>The social scientists do everything for grant money.

That's a very absolute statement. The social scientists do everything for grant money. Do you have some sort of data to back that up? I mean, in a way it is true that research can only be conducted if there is grant money, but that should be true for all of the sciences.

>Foundations don't give grant money to figure out the truth. They give grant money to get someone to make some scientific-sounding propaganda that supports whatever narrative they're pushing.

This might be true sometimes but do you think it is true for the majority of research? Not all foundations even have a "narrative" as far as I can tell.

>Who comes up with the narrative to push though, and why?

Isn't that a weirdly open question to ask at the end of a very assertive statement?


This article conflates all research in the "social sciences", when in fact methodological practices vary widely within disciplines and sub-disciplines. Within each field, there is a "qualitative" literature, much less successful and popular than it once was, and probably not very useful, although I like some ethnographic and anthropological studies. Mainstream economics and political science is incredibly mathematically sophisticated; in fact, people are coming around to the idea that there may have been too much emphasis on quantitative gymnastics over things like formulating simpler hypotheses or more descriptive work. Sociology has a bit of both. Psychology was the main culprit in the replication crisis, but even the behavioural psychology work cited approvingly in the article (Kahneman) is, I think, somewhat speculative in relating its hypotheses to experiment.

Without doubt, in economics and political science at least, the problem is that the research questions are infinitely more complex than in 19th-century medicine, not that the methods are not quantitatively rigorous. The questions are obviously also of a more normative and moral nature than in medicine.


This reminds me a lot of the "Science wars" from the 90's. It's unfortunate to see things haven't improved much since then.

https://en.wikipedia.org/wiki/Science_wars


For those interested in this, take a look at the work regarding "grievance studies" by Peter Boghossian, James Lindsay, and Helen Pluckrose. They wrote intentionally unscientific studies with the intent of getting them published, and actually did get a few of their works accepted. The objective was to demonstrate not only that academia in the social sciences operates according to a completely different standard than the hard sciences, but that it is so ideologically driven that it behaves in much the same way as a religion might.


I think the main takeaway from the Sokal Squared hoax was that reviewers in this field are either a) extremely biased and approve any results that agree with their beliefs, or b) anti-science, or c) scientifically illiterate, because they couldn't even spot the painfully clear methodological problems in the hoax studies that were inserted on purpose to test exactly the quality of review.


Those aren’t mutually exclusive


They would have got a lot more published except their experiment was sprung by some reporter.

The social sciences (along with most of the humanities) appear to be beyond saving at this point and it might be a good idea to make a hard break with the past and start again with people trained outside the field.


Their "study" reminds me of Penn & Teller getting people to sign a petition banning dihydrogen monoxide (aka water).

https://www.youtube.com/watch?v=yi3erdgVVTw

Just another form of social engineering. Haha, we compiled footage from a bunch of randos to show how stupid people are. Haha, aren't we smart!

It's the academic equivalent of tricking someone into walking into a punch.


People (perhaps you too) seem to have the idea that only the social sciences are vulnerable to publishing stings like the one Boghossian and his friends pulled off, but that's not true[0][3]. Next, all the places the "hoax" papers were [mostly] submitted to were of low prestige (i.e., poorly ranked)[1]. The paper they wrote as a summary of their conclusions does not even rigorously define what field(s) in the social sciences they have issues with; instead they collect it under the nebulous term "grievance studies" - which is a term they made up themselves to deride academics (so much for the scientific spirit of camaraderie[4]).

The "hoax" they pulled off doesn't seem to be showing what they say it does, at least not to the degree they suggest; from here[2]:

>Let’s analyze the hoax a bit more carefully. The team wrote up 21 bogus papers altogether. (The essay starts by saying there were only 20; according to Lindsay, that’s because two of the papers were largely similar to one another.) Of those 21, two-thirds never were accepted for publication. The Areo essay dwells on several papers that had been rejected outright, including one suggesting that white students should be enchained for the sake of pedagogy, and another proposing that self-pleasure could be a form of violence against women. They take it as a sign of intellectual decay that such papers managed to elicit respectful feedback from reviewers, even short of publication.

Academics warn against doing what Boghossian and friends did for their own good[5].

>The hoax was cruel; it sought to discredit targeted journals by setting a trap that exploited the scholarly predispositions of their editors and reviewers. Moreover, because of the anonymous review system, once the tricksters revealed their intent, only the editors whose names appear on the journal’s masthead suffered the sting of adverse public scrutiny. More finessed responses by the disgruntled threesome would have employed tactics such as persuasion, insight, engagement with the actual scholarship, and good sportsmanship (Bergstrom, 2018).[6]

Most importantly, the researchers didn't include a control group for their study. How can they claim to be outing "bad science" when their own methodology is so poor and fails to prove anything beyond anecdote and suspicion?

One of the reviewers subject to the hoax wrote:

>"Anyways, I guess I could be more critical in the future, but I assumed a grad student had written a confusing paper and I tried to be constructive. I'm embarrassed that I took it as seriously as I did, I'm annoyed I wasted time writing a review, and I'm glad I rejected it."[8]

P.Z. Myers put the situation best[7]:

>If you can find a bad article accepted for publication in a feminist journal, please do jump on it and tear it apart. That contributes to the strength of the discipline. Don’t write a bunch of bad articles of your own, which are clearly intended only to weaken the whole discipline and provide a set of easy, straw-man arguments that you can use to pretend you’re a smart guy.

And for the icing on the cake: Sokal himself isn't all that impressed with Boghossian's efforts[9]. A good thread discussing the hoax is on the social sciences subreddit[10].

[0] https://www.newscientist.com/article/dn17288-crap-paper-acce...

[1] https://i.redd.it/qsi6i5rbv3q11.png

[2] https://slate.com/technology/2018/10/grievance-studies-hoax-...

[3] https://platofootnote.wordpress.com/2017/05/24/an-embarrassi...

[4] https://www.3quarksdaily.com/3quarksdaily/2018/10/bad-argume...

[5] https://www.sciencedirect.com/science/article/abs/pii/S03783...

[6] https://journals.sagepub.com/doi/full/10.1177/14733250198338...

[7] https://freethoughtblogs.com/pharyngula/2018/10/03/give-it-a...

[8] https://twitter.com/dwschieber/status/1047497301021798400

[9] https://www.chronicle.com/article/What-the-Conceptual/240344

[10] https://www.reddit.com/r/AskSocialScience/comments/9noxmp/is...


It did prove that the only people capable of pointing out that there were problems with the hoax papers are people generally critical of the social sciences. That alone demonstrates a problem.

That's why the experiment ended early.

PZ Myers is a doxing buffoon.


How is that true when a good proportion of their papers were rejected, and they kept trying in other journals until they were accepted? What about the fact that several reviewers wrote critical responses and suggestions for improvement? Why can't what you said also be said about hoaxes in the "hard sciences" like this[0]? HN user voidhorse has a good comment about their tactics[1] (quote):

> Sure, the criticism this hoax is trying to demonstrate may be legitimate, but the methodology is one designed to highlight the cleverness of its executors and diminish the credibility of a discipline, rather than point out constructive areas for improvement. Basically, it is a methodology that does not treat its targets as intellectual equals and is quite indecorous—you get the sense that a major point of this operation is to discredit the field and make its practitioners feel some kind of public humiliation or shame. A childish tactic.

[0] https://science.sciencemag.org/content/342/6154/60.full

[1] https://news.ycombinator.com/item?id=18899529


All papers get critical responses and suggestions for improvement. That proves nothing other than that the submitted papers were considered to have some merit and were still possible candidates for inclusion in the various issues.

The problem is people (a journalist and RealPeerReview on twitter initially) identified quite a few of the hoax papers. And that does suggest there is a bubble within the field that has departed from usefulness and reality.

There is a problem within social sciences. Whatever is happening in hard science journals doesn't change that fact.


My claim was never that social science is immune to the replication crisis or that it couldn't be more accepting of outside criticism; my claim was that singling out "grievance studies" (what the authors take to be sub-branches of critical theory) with poor research methods, childish tactics, and no IRB approval, in order to make a name for oneself, isn't the right way to go about solving such a problem, and that this example doesn't stand up against the large number of hoaxes/stings in the hard sciences (see the first couple of links in my original comment). What suggests that the way to go about addressing this bubble (which can and does exist to at least some degree) is to publish more junk papers?

For a critical review of the papers' content, there's reason to believe that the hoax articles' premises may not be totally without merit[0].

"When Boghossian et al. describe their papers to us, the public, they do not explain what their bad arguments are, they only describe the “absurd” conclusions of the papers. So if the hoax is all about peer reviewers accepting bad arguments, then Boghossian et al. are failing to present the proper evidence, and propagating confusion about their own hoax."

[0] https://thingofthings.wordpress.com/2018/10/10/on-sokal-squa...


I think you are arguing in bad faith. The social sciences have problems and seem unwilling to do better.

The field had to be shamed publicly and widely. People have a right to know the social sciences are this broken.


Perhaps, then, the authors should have published a critical review of existing social science literature and discussed flaws in methodology. That would be academically and ethically honest, and would probably prompt a discussion and response from inside the field itself. Instead, they ran with their article to online news sites. Why would they do that first rather than trying to engage academia themselves? Why did the researchers need to be shamed? See the quote I provided in my original comment by one of the reviewers who was fooled. She was personally embarrassed and thought the paper had come from a new entrant in the field, so she was charitable. Why is it acceptable that making people feel that way is the first course of action? It is not at all in the scientific spirit.

People have a right to know, of course - but so do researchers. Researchers in these fields have a right to know exactly what Boghossian and friends thought was wrong with their methodologies and research topics. Boghossian decided that was beneath him.


Because the Medknow journals are notorious scam journals that will publish absolutely anything at all, without even a semblance of peer review?

There's a difference between non-prestige journal and vanity press.


>> Most importantly, the researchers didn't include a control group for their study.

I'm not sure what qualifies as a "good methodology"?


Sounds like, unsurprisingly, the author hasn't spent that much time considering any of these questions themselves...they just have all the answers for those that do (thanks buddy!).

Some questions just deal with issues that are not quantifiable (even the question he poses about immigration is not completely quantifiable). And even questions that are quantifiable are often affected by beliefs (i.e. what happened to inflation in the 70s was a consequence, in part, of what people believed about the Phillips Curve in the 1960s).

Perhaps more relevant: social science went through this phase nearly 100 years ago (in history, more than 100 years ago). And this issue was resolved, often more than 50 years ago^. For example, in history, E. H. Carr: people are not objective, arguments are often contingent, but there are facts; present your argument and let your reader judge for themselves.

The most harmful thing is to claim that there can be objective truth about these issues. Economics has thrown itself against the rocks far too often. The trend towards this in history at mid/end of the 19th century produced some extremely unimportant work.

This can also tend towards quackery. I remember a biologist in my local politics dept (relatively prestigious) got a ton of funding because he believed he had found a way to spot the physical attributes of terrorists (srs, not joking, last I checked he had over $1m in funding from govt). Some people, like the author, are just unaware of the wider context. Less preaching about ways to "solve" social science, more listening (btw, in my experience all of the above applies to scientific research too...all research is contingent).

^ We first had the move towards (broadly) logical positivism/empiricism, then to post-modernism when that seemed ridiculous, and now to (imo) a reasonably healthy medium.


Very well put. Regarding history and its own past battles with this illusory search for "correctness" and mathematical-like precision, I'm really surprised that the author seems to have completely ignored Popper's pretty well known "The Poverty of Historicism" [1].

I'd also strongly recommend that the author check Raymond Aron's "Introduction to the Philosophy of History: An Essay on the Limits of Historical Objectivity" [2], a book first published in French in 1938 and translated into English in 1948.

I find it sad that some people still open up this discussion about trying to make history more "exact", more physics-like; I thought we had already established that that is an impossible task.

[1] https://en.wikipedia.org/wiki/The_Poverty_of_Historicism

[2] https://www.amazon.com/Introduction-philosophy-history-histo...


If you're interested in the topic, take a look at Marcuse's[0] (and others'[1]) responses to Popper's complaints about "historicism" (a term Marcuse takes Popper to task over).

[0] "Karl Popper and the Problem of Historical Laws" (1958)

[1] https://journals.sagepub.com/doi/abs/10.1177/095269519701000...


> he believed he had found a way to spot the physical attributes of terrorists

Isn't this literally phrenology, risen from its grave?


How does the author suggest that we should repeat history-related events like the Holocaust or horrible regimes like Stalin's or Mao's?


This is the key flaw in this article:

“The truth is that such questions are open to empirical study and it is possible for social science to study them scientifically.”

Here is someone proclaiming to be a scientist, and then throwing out proclamations of ‘truth’ not backed by evidence. Goedel proved logically that any axiomatic system of information exchange can have truths that are not provable.

It’s possible human culture and society is too diverse to make claims of ‘absolute truth’ about. A statistical mechanics approach to why this might be true is telling. The more entropic states available, the more potential outcomes. That is why physics studying a single atom or molecule is more ‘understandable’ than sociologists studying 10^35 of them (humans being a cloud of atoms).
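The statistical-mechanics intuition invoked here can be written down explicitly (the standard Boltzmann relation, not something from the comment itself): entropy counts accessible microstates, and for roughly independent parts the count multiplies,

    S = k_B \ln \Omega, \qquad \Omega_{\text{total}} \approx \prod_{i=1}^{N} \omega_i ,

so a system of ~10^35 weakly constrained parts has astronomically more accessible configurations than a single atom, which is the sense in which the commenter argues aggregate outcomes become harder to pin down.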


>Here is someone proclaiming to be a scientist, and then throwing out proclamations of ‘truth’ not backed by evidence. Goedel proved logically that any axiomatic system of information exchange can have truths that are not provable.

The trouble with this argument is that Goedel's proof relies on systems which can model Peano arithmetic; in particular, it assumes that the system deals with numbers that are arbitrarily large. For example, Goodstein's theorem, the commonly-stated simplest unprovable theorem, depends on a function like this:

f(2) = 19, f(3) = 7.6 × 10^12, f(4) = 1.3 × 10^154, ...

Most numbers in sociology are not so large. The number of possible subsets of the human population is between f(7) and f(8).

Additionally, theorems about discrete systems do not always apply to continuous systems. For example, the theory of real closed fields is decidable:

http://en.wikipedia.org/wiki/Real_closed_field

Most undecidable statements depend on things that look vaguely Diophantine, but sociology is rarely Diophantine. Rather, it tends to be that an approximate solution is still kind of a solution.


> Goedel proved logically that any axiomatic system of information exchange can have truths that are not provable.

No, that is a misrepresentation of Goedel's results. A theorem that is undecidable (neither provable nor refutable) from a set of axioms cannot be 'truth' in the logical sense (because there are models of that set of axioms in which the theorem is true, and other models in which the theorem is false) - see Goedel's completeness theorem, which says that every truth is provable (and vice versa).

Goedel's incompleteness theorems can be understood on the semantic level as follows: the mathematical structure of the natural numbers cannot be characterized by a sane set of axioms, so any such attempt (e.g. the Peano axioms) that describes the natural numbers also describes a different mathematical structure (a nonstandard model of arithmetic), and there exists a theorem that is true in one and false in the other model (so that theorem is undecidable).
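To make the contrast concrete (standard statements, paraphrased; not part of the original comment): completeness is a claim about first-order logical consequence, while incompleteness is about any fixed, effectively axiomatized theory containing enough arithmetic.

    \text{Completeness (G\"odel, 1930):}\quad T \models \varphi \;\iff\; T \vdash \varphi
    \text{First incompleteness (1931):}\quad \text{if } T \supseteq \mathrm{PA} \text{ is consistent and recursively axiomatizable,}
    \quad \text{then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \lnot G_T .

So an undecidable sentence is not "true but unprovable" relative to the axioms in the logical sense; it is true in some models of the axioms and false in others, which is exactly the parent's point.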


Goedel is not really relevant to this. In the same way that the insolubility of the halting problem doesn't prevent this message getting from my phone to hn to your screen. It's a theoretical limit on what can be computed, but we're nowhere near running up against it.

(Sure, someone's phone will crash now, but they can still get back here somehow if they're that bothered).


"We hold these truths to be self-evident, that all men are created equal..."

All men are not equal, never have been and probably never will be. So why is that line so famous? Why has it influenced the course of history? Not just in the US. What that sentence, and its effects on history, show us is that when people are faced with the Unprovable, they have a choice to sit back, do nothing and accept it OR decide what they want the truth to be. Unsurprisingly it's always the latter group that makes change happen. The rest just fall asleep reading Goedel.


No way - it comes from a conception of rights derived from Reason, God, or both.

What you say sounds like a Postmodern interpretation, even revisionism, of the original intent. What else do you think self-evident meant?

But even worse, you left out the very next part: "endowed by their Creator with certain unalienable Rights".

No, it is NOT about making change happen, or whatever hijacking of Truth is being attempted here, it's about solidifying a state (as in authority) different than other states created with other underlying principles.


He proved arithmetic is independent of logic. Geometry isn't.





