This is a false assumption about how social sciences work.
Like in other fields, academics are repeating experiments and trying to (in)validate results, including for SPE. [1]
That being said, social sciences != hard sciences in the sense that there are no hard laws because humans are unpredictable. That doesn't mean social sciences are worthless, but it does require that one approaches them with a different mindset than e.g. math or physics.
FWIW, I do agree with your criticism about the gap between publication and repetition / validation: it has made many researchers famous. Bombastic-sounding results -> headlines -> public perception -> increased social and academic status of the researchers. By the time the results are repeated and (in)validated, it is often too late to reverse that. I don't think that pattern is unique to social sciences, however - I keep seeing article after article contradicting each other, e.g. regarding drugs, food, fitness, etc.
IMO - the mindset should be: social sciences are dealing with things orders of magnitude more complex than anything seen in hard sciences (except biology). Therefore, social scientists need to be much more reserved in their claims, and not much should be taken as fact unless it's replicated in a great many studies and has proven predictive power.
Whether this is the mindset social scientists have, I can't tell - but social sciences in popular media seem to be the exact opposite of that.
>Whether this is the mindset social scientists have, I can't tell - but social sciences in popular media seem to be the exact opposite of that.
I think it's just not a very human mindset. I've talked to natural scientists and social scientists alike. All of them want their research to succeed, they all want to get their PhDs, get good results for their postdocs, get publications going etc.
The incentive is just there to publish somewhat inflated results. If you could get a PhD for systematically dismantling studies (which would be much more useful than most PhD studies), we'd have a replication crisis in every field.
Quote: "I think it's just not a very human mindset. I've talked to natural scientists and social scientists alike. All of them want their research to succeed, they all want to get their PhDs, get good results for their postdocs, get publications going etc."
And that is one of the major problems. Hubris and arrogance lead to bad requirements and expectations, which in turn make cheating, lies, and half-truths normal behavior.
Quote: "Further, in my field (economics) one can never really get a publication if the research only produces ‘negative’ results. That is, the researcher fails to find anything. I believe this is a common problem in other disciplines as well."
I think that in the hard sciences, things are very cut and dried - or at least that is the goal of a study. For example, in physics, the goal of an experiment is to establish with very high certainty whether something is true or false.
Humans on the other hand are unpredictable. So if you run an experiment that says X, it may or may not replicate later, depending on hidden variables and assumptions.
Consider the famous marshmallow test. The latest studies suggest that it is not willpower but actually affluence that is the bigger determining factor. [1] So that means that in future studies, they probably need to consider this variable and design the experiment in a way that lets them control for it.
What is interesting is that for all the differences and intervening variables, humans can be studied and do exhibit very predictable patterns. A good example of that is the study of power. [2] The Prince was published in 1532 and its principles continue to be just as valid today!
>Humans on the other hand are unpredictable. So if you run an experiment that says X, it may or may not replicate later

>humans can be studied and exhibit very predictable patterns.
Seems somewhat contradictory. If humans can be studied and exhibit predictable patterns, why wouldn't we expect experiments to be repeatable, as the parent comment asked?
And if the experiments are highly random, then either you should be conducting more of them over and over to get a valid statistical sample, or you shouldn't be conducting them at all. Either way I see no valid argument that studies in social sciences shouldn't be repeatable.
This is more of an exploration than an explanation but it seems like you're pointing to the threshold of success in a series of experiments.
As deyan mentioned, in physics or chemistry we get a high level of certainty after isolating all the variables. When an experiment doesn't go according to plan, a known or unknown variable is to blame.
It seems to me that behavioral psychology is still in its infancy in terms of identifying those variables, and/or that its threshold of Truth is much lower than in sciences like chemistry.
Lots of patterns exist that have lower thresholds. Sports analogies are fairly illustrative. Hitters in baseball are considered great if they succeed 3 times out of 10. Then again, they run 500+ experiments a year (for hopefully many years) to determine their average...
I agree with your conclusion that experiments need to be repeated more over time. I simply wonder what type of success threshold we will come to regard as the Truth in time.
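The batting-average point can be made concrete with a quick simulation (all numbers here are hypothetical, just a sketch of the sampling argument): even when a "true" success rate is fixed, a handful of small samples scatter widely, while a full season's worth of at-bats converges toward the true rate - which is exactly why repetition matters before calling anything the Truth.

```python
import random

random.seed(42)

TRUE_RATE = 0.3  # hypothetical "true" batting average

def observed_average(n_trials):
    """Simulate n_trials at-bats and return the observed success rate."""
    hits = sum(1 for _ in range(n_trials) if random.random() < TRUE_RATE)
    return hits / n_trials

# A few small "studies" (30 at-bats each) scatter noticeably...
small = [observed_average(30) for _ in range(5)]

# ...while a 500-at-bat season lands much closer to the true rate.
season = observed_average(500)

print("small samples:", [round(x, 3) for x in small])
print("500 at-bats: ", round(season, 3))
```

The same logic applies to replication: a single small study landing far from the truth is expected noise, not scandal, so only the aggregate over many repetitions deserves much confidence.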
The reason I asked is because strictly speaking, an unpredictable thing cannot be studied empirically. We can sacrifice some precision or certainty in science, but we can't get rid of reproducibility and still call it "science" just by prefixing it with the term "soft."
For example, sound economic models are generally observable in the aggregate despite being imprecise, and high energy physics has many competing theories which are demonstrable but incomplete. The game theoretic principles of market analysis are reliable, as are the principles of gravity. Markets are generally efficient, and the model's conclusions have clear utility that matches real world conditions, even though small pockets of inefficiency also exist. It's fuzzy, but not unscientific. Forests are green, but some trees don't have green leaves.
In the abstract, we can tolerate some fuzziness or imprecision as a margin of error, but only if it's compartmentalized to some incomplete theories, and only as long as it's grounded and consistent. We cannot tolerate something being true one day and false the next. Green forests cannot inexplicably and inconsistently become orange without threatening our claim that forests are green.
I don't really have an opinion on psychology in particular, though it's pretty clear there's a reproducibility crisis. But as a direct response to your thesis: arguing that a "different mindset" is required to scientifically study subjects which are unpredictable is an untenable position. If humans actually are fundamentally unpredictable - whether due to intrinsic non-determinism or a present lack of sufficient data - they cannot be empirically studied. At that point we're no longer compartmentalizing incompleteness or fuzziness in otherwise sound models. Instead, we're compartmentalizing otherwise sound observations in a sea of chaos.
Faced with this sort of reality (and I take no position on whether it is the reality), any scientist, in a "hard" or "soft" discipline, would have to examine if they can reasonably acquire enough information related to the thing in question to make any determination in good faith. An unpredictable thing is an unknowable thing; you may as well try to resolve the three body problem.
[1] https://en.wikipedia.org/wiki/Stanford_prison_experiment#Sim...