Discouraging replication in the tenure track is a large contributor to this. "Novelty" is literally written into the "guidelines for authors" sections of many journals. They want the newest, brightest, most headline-catching "research" to disseminate. And so do the educational institutions. No wonder the incentives are so perverse.
On top of this, most accepted research is allowed to be published without open access, open data, or an open peer-review history (how many rounds it went, what the objections were, how the researchers answered them, etc.), and with the aforementioned lack of replication.
It's incredibly frustrating being someone who loves science, works in the field, and is skeptical of the system; that skepticism used to be a prerequisite and is now looked at like luddite behavior.
"I could improve your ultimate financial welfare by giving you a ticket with only twenty slots in it so that you had twenty punches - representing all the investments that you got to make in a lifetime. And once you'd punched through the card, you couldn't make any more investments at all. Under those rules, you'd really think carefully about what you did, and you'd be forced to load up on what you'd really thought about. So you'd do so much better."
The way Buffett invests is by reading all day, every day, and once in a while making a very informed investment. Most people haven't even reached the level of staying away from 'sure things' and not panic-selling during bad news.
How many people would have been able to stay away from IT stocks during the dotcom boom, or even to pass on investing in a friend's company? (Bill Gates and Buffett have known each other since '91.)
How often do you, as a 'master', break rules that you would tell an 'apprentice' are immutable?
Of course that's just one form of risk. The less attractive aspect of a 20-stock strategy comes from the fact that the majority of the market's returns come from very few stocks; the 80/20 rule applies pretty well here. With only 20 stocks you'll probably miss out on the few winners that contribute most of the market's gains. It's very easy to end up with a 20-stock portfolio with low risk/variance and low returns.
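To make that concrete, here's a toy Monte Carlo sketch (every number here is invented purely for illustration): a market where ~2% of stocks are big winners that drive essentially all of the aggregate gain, and random 20-stock portfolios that usually contain none of them.

```python
import random

random.seed(0)

# Toy market: 1000 stocks. Most go roughly nowhere; 20 big winners
# (2% of the market) drive essentially all of the aggregate gain.
N = 1000
returns = [random.uniform(-0.3, 0.3) for _ in range(N)]
for i in random.sample(range(N), 20):
    returns[i] = random.uniform(3.0, 10.0)   # a 4x-11x winner

market = sum(returns) / N   # positive overall, and all of it from the winners

# How often does a random 20-stock portfolio hold zero winners?
trials = 10_000
misses = sum(
    1 for _ in range(trials)
    if max(random.sample(returns, 20)) < 1.0
)
print(f"market return: {market:.2f}")
print(f"20-stock portfolios holding no winner: {misses / trials:.0%}")
```

With 2% winners, the chance that a 20-stock portfolio misses all of them is roughly 0.98^20 ≈ 66%, which is exactly the low-variance, low-return failure mode described above.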
Of course if you are Buffett, then your goal is to pick 20 stocks that all outperform.
BTW, the lack of new results is actually a good thing. It means we've got a lot of stuff figured out.
Of our current energy/resolution paradigms
We know that roughly 27% of the universe is Dark Matter and that roughly 68% is Dark Energy. All we really know about Dark Matter is that it falls down and doesn't seem to interact with itself or with anything else.
Dark Energy? Yeah, whatever it is, it pushes galaxies apart, that's about all we got.
There are a LOT of pretty easy discoveries to be made yet, but we just don't have the engineering to make them yet. The LHC was a marvel of engineering, but it can't even come close to the energy required to produce (theorized) Dark Matter particles. If you tried to make a bigger one, you'd need an LHC with a radius larger than the Earth's. That's not going to happen anytime soon, not by a long shot. We need a method other than particle colliders, probably a very large space-based telescope to stare at black holes with jets.
Biology is also rife with 'easy' things to do, but ones that we just don't have the tech for yet. Strangely, with all of the data that the Googs and the Zuck have, we have not seen a similar paradigm shift in psychology, criminology, or economics. At least in public literature.
So, maybe it's not so much that we have stuff "figured out", as it is that we have mental models that, while they work well enough to make certain predictions, also constrain our thinking in terms of what phenomena are possible. The "EM drive" controversy is one example of this tension.
I'm not so sure of this. Have you heard of Edward Leedskalnin, or his "Perpetual Motion Holder"? The textbooks that I have discussing magnetism do not explain this effect, which you can see an example of here: https://www.youtube.com/watch?v=eWSAcMoxITw
If you can explain this effect with our "current theories", I would love to hear it!
There's another interesting effect where an electric charge can be pulled from that set-up. Lasersaber demonstrates that here: https://www.youtube.com/watch?v=S_ssUTRbRRs
I certainly can't explain it, and I don't think we have an adequate theory that does explain it (yet).
In other words, much of the problem we face in the current scientific age is due to the fact that our instruments are now electron microscopes and Hadron Colliders, instead of carefully made glass beads and simple mirrors.
It's similar to the problem faced in software of managing increasing complexity. No silver bullets are known yet for those problems, just chronic application of powerful minds. :)
For example, within medicine, once computing power and AI are advanced enough they could help us much more in connecting the dots between DNA, viruses, environment, etc.
That collider can't get to the energy levels needed because it doesn't exist. It doesn't exist because its construction was cancelled after the project was defunded. Were it to have finished construction, it would have accelerated protons up to 20 TeV. The LHC reaches 6.5 TeV. All of that information is clearly stated in the link I posted and is completely uncontroversial.
Not necessarily. If anything, the more we find out, the more we realise how much we don't know. Take quantum physics: that stuff has slowed down since the 1930s.
It's not a bad thing, the journey's always fun too. But let's not kid ourselves that we're close to figuring everything out.
And this shouldn't really be a surprise. Our cultures and societies have had thousands of years of evolution; our human instincts have had at least hundreds of thousands of years. In that time, both culture and instinct have picked up many true facts about the world that have helped us survive. A lot of science has been taking those facts and systematically investigating how far they hold.
So these both can be true: the more questions we ask, the more questions we have to answer, but also that many of the 'easy' hypotheses have been mined out, and any true advance in science will by necessity pull us further and further from the realm where our basic instincts and intuitions can usefully guide us.
Quantum mechanics being the prototypical example, of course.
Like a tree? The main branches are the most visible, and so the most tackled. The smaller branches are only realised when you're deep enough in it. That's where our age's advantage comes in, as we have many more qualified 'explorers' than before, but it also makes it all the more important that the foundational branches are solid.
But then again it may be closer to readjusting our lens. The big questions (of the natural kind, not innovation like AI) today still pertain what we take for granted, like quantum mechanics is making us question the nature of reality again. But much like the discovery of waves once threatened the credibility of light as particles, I'm optimistic that reconciliation can happen, and to ascend to another level ... but where can we stop? When can we say "This is it"?
We are, clearly, close to figuring out most of the easy stuff. For our current definition of "easy" based on the amount of work, computing power and strength of the mathematical tools that we consider normal.
There really are very few edge cases left for QM 2.0
I'm not sure about that. As a former working scientist, I would say that the primary driver of this is the promotion and funding structure of science. I do agree that a lot of low-hanging fruit has been picked.
Well, yeah, obviously. I'm saying that structure is there because it's a holdover from previous generations where novel results were easier to obtain.
Genuinely new results require developing new insights, new techniques or just making new guesses that happen to be right. All are rare enough. But once that is done, the information gets out and the experiments should be easily reproducible.
Some disciplines get a lot of reproduction as a side effect of using last year's breakthrough research as "mere" technology for implementing next year's experiment. But I guess that sort of thing is less natural in psychology and sociology.
And it's possibly more costly than the original since the initial lab likely was able to re-use some of their existing resources, trained people, infrastructure, lab animals, equipment (whatever matters for the discipline) which you don't have; they likely chose this particular experiment because it fits well with the stuff that they have that's not available everywhere. If they developed a particular experimental setup / gear / network of volunteers doing interviews in the subpopulation you're studying / cell or animal line / etc, then they're using that for many publications over the years, but you need to spend the whole cost to reproduce any one of them.
I was once an experimental physicist and never a psychologist. In physics there is a lot of trial and error before you can get the experiment to do anything at all. Anyone who reads your paper needs fewer trials.
So I was going to agree with you that in fields where experiments work differently, reproduction can cost as much as the original study. But on second thought, that's wrong: and indeed it is the whole point of the thread.
There is always trial and error. You simply don't know if your theory is correct. And there is always the risk of a methodological weakness that you only understand after seeing the data. The trouble is, after you have already spent all your time and grant money on your study, you have every incentive to paper over these issues. Indeed, even if you are scrupulously honest, you will have to shove the study into the proverbial file drawer, because you don't have the resources to try another iteration.
So I think it's still true that a new experiment costs more than reproduction, and many of the problems with studies arise from trying to pretend that cost doesn't exist.
It's hard to find a Roman gold coin in a farmer's field, but once you've found your first one you can be more confident that the second will be easier to come by.
Easier, if you factor in how much easier p-hacking is with modern software.
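As a sketch of how cheap that is computationally, simulate a "study" that measures 20 pure-noise outcomes and reports whichever one clears p < .05. (The test below is a rough normal approximation to a two-sample t-test, written inline to stay self-contained; all the numbers are invented for illustration.)

```python
import math
import random
import statistics

random.seed(1)

def approx_p(a, b):
    """Two-sample p-value via a normal approximation to the t-test."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    # two-sided tail probability from the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def hacked_study(n_outcomes=20, n=30):
    """One 'study': test 20 noise-only outcomes, report any p < .05 hit."""
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        if approx_p(a, b) < 0.05:
            return True
    return False

hits = sum(hacked_study() for _ in range(500))
print(f"noise-only studies with a publishable 'finding': {hits / 500:.0%}")
```

With 20 tries at alpha = .05, the chance of at least one false positive is about 1 - 0.95^20 ≈ 64%; a few lines of modern software is all it takes.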
We don't know anywhere near "most" things. The supply of novel discovery is not the limitation.
Please don't use that quote. It's apocryphal; he almost certainly never said that.
It's not about how close we are to "knowing most things," but about how accessible new learning is. In a game of musical chairs, all the easy chairs are taken.
> The beauty and clearness of the dynamical theory, which asserts heat and light to be modes of motion, is at present obscured by two clouds. I. The first came into existence with the undulatory theory of light, and was dealt with by Fresnel and Dr. Thomas Young; it involved the question, how could the earth move through an elastic solid, such as essentially is the luminiferous ether? II. The second is the Maxwell–Boltzmann doctrine regarding the partition of energy.
As I understand it, the two clouds were dealt with via special relativity and quantum mechanics.
Our society has changed, dramatically, in the past 30 years. We've gone from a completely disconnected society with connections spanning arm's reach to a globally connected society with connections spanning just as far. Babysitters have been replaced with digital tablets providing endless entertainment and ensuring that no person, adult or child, ever need entertain themselves with nothing but their own mind. One needs only look at the slew of digital devices whipped out at any eatery, lest individuals be left to their own thoughts for an agonizing 10 minutes.
The internet and our technology has shattered nearly all educational barriers. Getting an MIT caliber education for free is now something any child in a developing country who has access to the internet can do, if so inclined. On the other hand they can find countless entertaining drama to read or participate in, find an endless supply of porn, engage in social media and other online sites that are specifically designed for addiction, or find an array of games and other forms of fast response stimuli all designed to make him or her feel good.
Einstein constantly related his tales of meandering strolls as he pondered his work. Another individual that has managed to achieve some great things in modern science is Stephen Hawking. He's mentioned spending decades thinking intently, even if not exclusively, on singular topics. And there is absolutely every reason to believe that is in no way bluster. Given his condition, his mind is his recreation.
Is the breaking down of information barriers in our society helping it more than the massive commercialization, the engineered addiction to all sorts of mediums and vices, and the rapid reward systems they offer are hurting it? It's not just high-level academia that's failing. Education systems seem to be deteriorating at all levels, with US scores in math and science (the US being the center of much of this change) rapidly falling behind much of the rest of the world. Healthfulness has fallen to dangerously low levels. And more. Regression is not impossible.
Although, to be fair, the direct effect is explained by increased inequality:
Yet, I know how digital shit affects me. I'm not talking about the devices themselves; I'm talking about being wound up by some stupid pointless shit on social media, or by a colorful game that totally hijacks the reward mechanisms in my brain.
I totally fail to see how effects which promote shallow, fast-reward-seeking cognition are not detrimental... yeah yeah, it sounds like Greeks whining about reading spoiling the mind, but the difference is that our media is engineered to distract and hijack the mind for no valuable outcome whatsoever.
Sure, there are lots of positive effects but that does not mean one should not discuss the negative ones as well.
I'm currently writing my PhD project (after having worked for 10 years at a company), and I'm absolutely stunned by how easy it now is to access papers and information compared to just 10 years ago when I left university.
I concur, however, that a new and real challenge is being able to filter the noise induced by this overwhelming volume of available information. But I don't think nostalgia is a useful tool for reaching that goal.
PS: To illustrate https://xkcd.com/1601/
(And I believe there was another comic that traced it back to Greek philosophers who rejected writing. Thanks to whoever remembers the link!)
I wouldn't generalize this to all branches of science. My own experience in physics contradicts that, and I also get the impression that some of the large successes we've seen recently (like the gravitational wave detections) couldn't have happened if the scenario you described were the norm.
My advisor and I have found some questionable stuff in PoP, for sure.
"Well it was the engineers who did most of the work..."
Who do you think came up with some of those engineering advances? No need to play semantic scientist vs. engineer arguments. Some of the advances came from scientists and some came from engineers.
But gravitational wave detection wasn't "just" engineering; there's a lot of theory to be done to figure out what signals to look for, how to interpret them etc.
The original post here is talking about reproducible results and p-values and publishing. Is LIGO a particularly good example of that part of science being done well? I don't know either way.
FWIW I'm much closer to being a scientist than an engineer.
I don't see how you can look for a specific kind of outcome from an experiment without fooling yourself, or cheating.
It's much like with tech startups - you can design a startup such that it will either be worth billions or zero by tackling a problem that most people believe to be unsolvable but will affect lots of people, and then trying multiple counterintuitive ways of solving it. The vast majority of them will fail because that's what it means to be counterintuitive, but if it succeeds, you have a world-changing company.
Financial instruments as well - an option is a financial contract whose value will either be zero or a lot, depending on whether an unlikely-but-possible event occurs.
Not nearly enough of them of course.
But still, it's kind of funny how certain fields in pseudoscience actually drive progress in statistics and scientific methodology :)
He even made a simple problem more complex just to fit his base theory. It's only about getting citations.
You can look at any experiment as having essentially two possible outcomes - you need to ensure that at least one of those outcomes is publishable, because there are many experiments where any possible outcome will be weak.
Gelman's other point is this is all made much worse in inherently low-power fields like psychology and the social sciences. A field like physics can "get away with" it more because making the appropriate corrections is less likely to wildly change the conclusions.
This is the advice of a true scientist. I'm refreshed to see that they're still out there.
It encourages students to cheat, because most of the time you won't get either. Not a significant result, and usually not a counterintuitive one either. You're pretty likely to get a significant result that agrees with existing theory, though.
> The most widely cited test was a 1987 study for Bicycling magazine by engineering professor Chester Kyle, one of the pioneers of cycling aerodynamics. He found that leg-shaving reduced drag by 0.6 per cent, enough to save about 5 seconds over the course of one hour at the brisk speed of 37 kilometres per hour. At slower speeds, the savings would be less.
> [More recent tests in a modern wind tunnel show] that [shaving legs reduces drag] by about 7 per cent (...). In theory, that translates to a 79-second advantage over a 40-kilometre time trial that takes about one hour.
> [The aerodynamicists in charge of the wind tunnel contacted Kyle] to ask if he had any ideas about the discrepancy between the two results. It turned out that the 1987 test involved a fake lower leg in a miniature wind tunnel with or without hair glued onto it – hardly a definitive test, and yet it was enough to persuade most people not to bother with further tests for the next three decades.
It's curious to me, because in other areas it seems some degree of surface detail reduces drag (golf balls, and aeroplanes too, I've heard) - I wonder if short stubble is actually better and that's the effect you get using real legs. That would account for a plastic model only showing a small gain.
Assuming that we're already reaching the tail end of the bell curve for human performance in competitive cycling (which seems likely to me), suggesting minimal individual performance differences, then being among that group despite a 7% performance disadvantage seems downright impossible.
So especially at the top there would be an incredible selection against anyone with hairy legs.
But I guess you could say that natural selection beats human belief.
I'm concerned about it from a different angle:
How much public policy, medical treatment, and follow-on science is seriously suboptimal due to this house of cards?
This you do not want to know.
Here is a good question, the politics and bad science of which were uncovered by the documentary "Bigger, Stronger, Faster*: The Side Effects of Being American": why are exogenous testosterone and anabolic steroids strictly controlled and essentially banned?
What is your first reaction to this question? That they are unhealthy, that they have gross and terrible side effects? Likely this is what you have heard. Nearly 100% of what is popularly "known" about anabolic steroids and use of testosterone is absolutely false.
Exogenous testosterone was banned because the father of a boy who committed suicide (and who was taking steroids) rode a publicity wave all the way to Congress, despite the recommendations against the ban by the AMA and other medical experts.
Now, think of what else this may be true of?
Edit: The son was taking steroids, yes. He was also on benzos, drinking alcohol, and doing other recreational drugs.
Correct me if I'm wrong, but I thought the main issue was that papers were being replicated, but the results weren't being published (especially if they were negative). These problems were always being discovered, but the record was never being corrected.
You may be right, but that's basically the same thing (and as you have sort of pointed out, it's not possible to tell either way).
If you just do the experiment again, that's repetition.
So experiments are being repeated, but results can't be replicated and the results aren't being published, as you note.
The unpublished results perhaps aren't strong enough to make a refutation?
1. Replicability of social psychology results is estimated at 25%. (Cited in article, from http://www.nature.com/news/over-half-of-psychology-studies-f...)
2. People are not reporting non-significant results. Which means we get an incomplete picture of the validity of certain hypotheses. (from article)
3. The numbers don't line up when comparing observed power and significance of results. (from article)
4. Some results data is erroneous or falsified. The GRIM test that surfaced a while back shows numeric "results" that cannot possibly be derived from the sample data. (https://medium.com/@jamesheathers/the-grim-test-a-method-for...) (https://www.vox.com/science-and-health/2016/9/30/13077658/st...)
5. Many authors on the papers don't want to share the data from their papers - even if it's a condition of publication. (from a previous article about the GRIM test that I can't find) This doesn't directly equate to maliciousness or deceit, but it illustrates incorrect attitudes in the field. Some authors consider it undignified to have someone "check their work". Others are concerned of fallout if their data contains errors. (Generic mainstream blog/press article on reproducibility https://www.vox.com/2016/3/14/11219446/psychology-replicatio...)
6. Ridiculously small sample sizes are often used in experiments. Often this doesn't seem to affect how the results are received, though it should. (briefly discussed in article and comments)
7. Papers and studies are cited in articles and future papers before they've been peer reviewed, replicated, or proven sufficiently to justify their inclusion into the field. Thus the results are "baked in" to social psychology as fact at an uncomfortably early stage. (http://blogs.plos.org/mindthebrain/2015/12/30/should-have-se...) This kind of feeds into the hype machine where people focus on significant results - even if the data needs massaging to get there. Even after being refuted (if refuted) the field and its fans never seem to fully recover and back away from bad science. By then it's already made its rounds on Facebook and it's even referenced in the worst of the college intro to psych/sociology courses.
8. Journals in general need to be more open with their data. Especially with publicly-funded research. Paywalls don't help progress.
9. Corporate/politically funded white papers publish studies without making the connections clear.
The field needs a massive overhaul in the sense that its participants need to stop chasing controversial results in the press and instead focus on reproducible results and methods. All fields need some of this, but psychology needs it more than others.
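The GRIM test mentioned in point 4 is simple enough to sketch in a few lines. The idea: a mean of n integer-valued responses (e.g. Likert items), reported to two decimals, can only take certain values, because the underlying sum must be an integer. (A minimal sketch of the basic check; the published procedure handles more edge cases.)

```python
def grim_consistent(mean, n, decimals=2):
    """GRIM check: can `mean`, reported to `decimals` places, arise from
    `n` integer-valued responses? The sum of n integers is an integer,
    so the nearest integer sum must reproduce the reported mean."""
    nearest_sum = round(mean * n)
    return round(nearest_sum / n, decimals) == round(mean, decimals)

# Example: a reported mean of 5.19 from 15 integer responses is impossible,
# since 77/15 = 5.13 and 78/15 = 5.20 bracket it with nothing in between.
print(grim_consistent(5.19, 15))   # False -> inconsistent, worth a closer look
print(grim_consistent(5.20, 15))   # True  -> achievable
```

An inconsistent mean doesn't by itself prove fraud (it can be a typo or an unreported exclusion), but it is exactly the kind of cheap, mechanical check the field could run routinely.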
That's a pretty big reason not to publish a negative result, especially if the original result has social credence (big name, popular publication, wide acceptance).
I would actually expect the opposite, as finding evidence to support a new claim should be more difficult than not finding evidence.
My understanding is that a negative result has the following format: "we did such and such experiment and did not find a significant statistical relationship between thing 1 and thing 2, after controlling for a bunch of other things".
It's worth noting that this by no means proves that there isn't a relationship, it just means that study wasn't able to find evidence of one. It could be a piece of the puzzle for a potential strong case against such a relationship, or that further research is needed to untangle any confounding factors. Which is why I think all methodologically sounds results should be published, no matter how unflashy or boring.
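A toy power simulation makes this concrete (all numbers invented for illustration): with a perfectly real but small effect, a small study usually comes back "negative". The significance test below is a rough normal approximation to a two-sample t-test, written inline to stay self-contained.

```python
import math
import random
import statistics

random.seed(2)

def approx_p(a, b):
    """Two-sample p-value via a normal approximation to the t-test."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def detects(n, effect=0.3):
    """One study of a real effect (Cohen's d = 0.3): significant or not?"""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    return approx_p(a, b) < 0.05

power = {}
for n in (20, 200):
    power[n] = sum(detects(n) for _ in range(2000)) / 2000
    print(f"n = {n:3d} per group: real effect detected in {power[n]:.0%} of studies")
```

At n = 20 per group, only a small fraction of studies of this real effect reach significance, so a single "we found no relationship" is weak evidence of absence; at n = 200 the same effect is detected most of the time.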
(And yes, it is my opinion that climate change is absolutely happening. Does not mean that we can't be skeptical of the effect sizes cited.)
"And yes, it is my opinion that climate change is absolutely happening" .
The level of fear in stating simple opinions has gone to 11.
No possible discussion follows. All opinions, however tame, are outcast.
Or is this the point you are making, and you are expressing a climate-change-denying position, really obtusely? If so, it's on you to show climate science is 'spinning tenuous research into facts'.
Reduce that to:
> happens to lead us to a conclusion we don't like, we should be careful especially given the immense political pressure both within science and in governments.
Liking has little to do with it.
Those who are purists also don't like that others question dogma.
Right now, submitting data that contradicts a published paper is likely to be ignored, treated with hostility, or swept under the rug. At very best, you'll get a small publication of a retraction.
Disproving published results needs to be at least as prestigious as producing novel results.
If we can't swing that - it would be great for graduate student research work...
I don't know if it's true, but it sounds like it might be a good idea.
How might we get funding though? Perhaps with a prediction market (people bet on whether scientific hypotheses will be disproved/confirmed)? It could be fun.
I'd love to prove things wrong for my master's thesis but novelty is a grading criterion so that's a no-go.
This kind of stuff despite those flaws gets propagated without much skepticism. It makes me wonder how much stuff we "know" to be true is the result of an intuitive argument but has no real support.
It is scary, and you are right, but you can do your part by blogging about it. Doesn't have to be official, just post the text and your notes. The more people do this, the more we have informal peer-review and replication checking after the fact, and at least a loosely-connected database of blogs and articles will exist when people search the article title.
It's fascinating to see the "new guard" emerging in psychology, actively acknowledging the poor practices that have preceded them and fighting back where possible.
A while ago I had a manuscript rejected because I used the word "novel"... so hopefully the trend of emphasizing "novel" things is changing.
I'm trying to find the specific manuscript guidelines for that journal, but pretty sure they banned use of the word.
The criteria for publication of scientific papers (Articles and Letters) in Nature are that they:
-report original scientific research (the main results and conclusions must not have been published or submitted elsewhere)
-are of outstanding scientific importance
-reach a conclusion of interest to an interdisciplinary readership.
"Outstanding scientific importance" screams no boring replication studies that confirm pre-existing work.
The note in the first bullet also forbids replication studies that confirm pre-existing work.
> (the main results and conclusions must not have been published or submitted elsewhere)
I don't know, "couldn't replicate this other major study" seems like it would qualify as outstanding scientific importance. It all depends how you frame "scientific importance". Replication seems like it should be high on the list of scientific goals.
You're missing the other side of it. Here's what's not interesting: Replicating the results of a study already believed to be accurate. Yet it is vitally important.
To arrive at the simplest truth, as Newton knew and practiced, requires years of contemplation. Not activity. Not reasoning. Not calculating. Not busy behaviour of any kind. Not reading. Not talking. Not making an effort. Not thinking. Simply bearing in mind what it is one needs to know.
It would also be interesting to repeat the experiment with studies from different points in time (when there was more ideological diversity) to see how the effect varies with homogeneity. My guess is that if there is an effect, it varies nonlinearly with diversity--in other words, a field with 90% political homogeneity might have an effect that is ten times that of the same field when it was only 70% homogeneous. I suspect that once you hit 90%, dissent is much more effectively suppressed than at 70%. But maybe not?
The academics will deny these hidden truths but the greater population will (and already do) exploit them.
I can attest there is a lot of crap science published. So much so that when you find a really good publication you think "wow!!"
The one thing about industry is that if the science doesn't work you don't make money. However there is little incentive to publish.
And worse, the vast majority of the evidence cited in major social science debates is deficient in this way.
Secondly, supporting OA-friendly journals is the next step. PeerJ is a very good initiative on that front for a lot of reasons - they open and document the peer-review process, which is unheard of. It's a tremendous asset to look through peer review and see how the "science" is really done. I highly recommend everyone find some papers that they are interested in on PeerJ and check the peer review logs.
I don't think it's any one thing or that there's an easy fix. For my part, I refuse to publish in non-OA journals and strongly prefer those who require/strongly encourage both open data and open peer review. PeerJ is one such journal which also features very fast time to first decision AND low fees. I cannot recommend them enough.
Replicating research would boost your reliability score and the original's too which would boost credibility for future research.
Are there any departments or disciplines or academic careers that can be made on scrupulous scientific study of how science goes right and wrong, including statistical as well as other issues? I think we need it.
Belief is always optional; if you want people to believe, you have to get them to opt in to believing. Telling them they must believe rarely accomplishes this and is emblematic of the kind of scientism that has displaced real science. Of course, the thing this clown was telling us we didn't have the option of disbelieving turns out not to be true. I suppose now he's willing to grant us that option. How exactly does Kahneman expect people to distinguish "well-supported" science from not-"well-supported" science when he himself is obviously wholly incapable of doing so?
This runs contrary to what I was taught in high school science classes, which was that newer science is more reliable than older science. The truth is that the old stuff that still stands is really where it's at. Most "novel" scientific findings are not true. A smaller portion will be thought true for a while, then discarded. Only a very small amount of research will stand for a long period of time.
Another thing to consider is that the scientific method and its modern, institutionalized implementation is not very old at all. You cannot exclude the possibility that some of our fundamental scientific understanding is totally flawed, and we've yet to discover how.
Much of modern technology is built on modern scientific findings. Since that technology unarguably works, the findings on which it is based cannot be "totally flawed".
That said, working technology absolutely is experimental validation of the underlying science on which it is based (if any).
The GPS receiver only works properly because we understand and account for general relativity.
The radio relies on a lot of stuff related to information theory. Which might be science, depending on how you think of applied math.
The screen and battery are built on a lot of materials science, which I don't know much of anything about.
What failure means when you try to consider your phone as an experiment is very unclear to me.
A phone is far too complex and intertwingled to be a proper experiment; if it works it proves a lot of things, but because of that, if it doesn't work it doesn't disprove much of anything.
If the phone works, it says "all of these things are true", which is a lot of information.
If the phone doesn't work, it says "at least one of these things is not true", which is correspondingly less information.
Those things are tied together. Someone who knows more about statistics and information theory could probably explain how a lot better than I can.
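To make that asymmetry concrete, here's a toy sketch (my own toy model, nothing from the parent comments): treat the phone as depending on n independent "theories", each adequate or not, and count how many truth assignments each outcome is consistent with.

```python
import math

# Hypothetical toy model: a phone depends on n independent "theories",
# each of which is either adequate or not. Treat every configuration of
# truth values as equally likely a priori.
n = 10
total = 2 ** n  # 1024 possible truth assignments

# "The phone works" is consistent with exactly one assignment
# (all n theories adequate). Information gained, in bits:
works_bits = math.log2(total / 1)  # 10.0 bits

# "The phone fails" rules out only that single assignment,
# leaving 2**n - 1 possibilities -- almost no information:
fails_bits = math.log2(total / (total - 1))  # ~0.0014 bits

print(works_bits, fails_bits)
```

So under these (obviously simplified) assumptions, a working phone carries roughly 10 bits of evidence while a broken one carries a small fraction of one bit, which matches the intuition above.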
Just because technology that is built on top of science works doesn't mean that it is 100% correct. It isn't that our theories are outright wrong, but they probably still are incomplete or are only approximations.
Interesting you would say that. Orbits were calculated, and the orbit of Mercury was not predicted correctly by Newtonian physics. Using Einstein's general relativity, the calculations matched observations.
If any phone ever works, it means that some of the underlying theories might be true. Any specific phone means nothing. And a phone that isn't working has absolutely nothing to do with any proof.
Implementations of objects that rely on theories are emphatically not proofs of them. And failed implementations of those objects are not disproof.
Just following the analogy; what science was fire validating during that time? Just because something works doesn't mean it wasn't created by trial and error.
I'd also like to mention that I am a scientist, albeit retired. I'm wrong more times, by 09:00, than most people are all day. I have no reason to believe we are at some sort of apex of knowledge.
The other day, they released findings that showed they can mathematically predict quantum chaos. I'm still pondering the implications. There is so much we don't know, and that's a great thing.
BTW, I never said we were at an apex of knowledge. I'm trying to quell the concern that what little we do know is somehow going to turn to mud one day. That's very unlikely.
I'm going to give it a serious investigation over the winter. I don't have the time to devote to it until then.
I should also say that physics has a better track record than softer sciences. But even in physics, things change.
The geocentric model of the universe was equally obvious and functional to our ancestors in the context in which they used it. We are the same creatures as them and are subject to the same basic epistemological limitations -- just because we have seen further does not mean that we have seen everything. The whole concept of science is exactly to that point. It is implausible under currently available information that we could be missing some fact that would fundamentally change our understanding, but that is not the same thing as saying that there is no possible fact that could cause such a rupture.
who taught that?
I notice a lot of people seem to think this is true. Seems obviously crazy to me!
People sure aren't any smarter now.
And then it turns out - oops, these conclusions are not well-supported and not that scientific in fact. And those who doubted them were actually correct. Does it mean anybody who doubts anything is correct? Surely not. Does it mean we should be a bit more careful with statements like "belief is not optional" and the condemnation of those who fail to comply that follows? I think yes.
Science has, in many ways, taken on some of the characteristics of religion. You either believe or you're a heretic. Which, sort of, I agree with. If you don't believe in the results of the scientific method, you may wish to reexamine your life.
However, a lot of what is published under the name of science hasn't actually followed the scientific method. You get people who believe in the strangest things.
A recent example was someone who told me that the oceans were going to rise by 57' by 2050. Now, I'm firmly in the AGW-believer camp. I pointed out that they were wrong, that no models suggested such a thing, and that their citation had no citations, other than an article in some newspaper.
I provided link after link. I explained that I am not a climate scientist, but a mathematician. I explained that I'd actually run the models locally, just to learn more. I explained the process, the data collection methods, and even why the data is adjusted.
They decided I was a 'denier' and a Trump voter - quite literally, they called me a 'science denying Trump voter.'
I was baffled.
Later, I'd bump into them again, at the same site. This time, I explained that science was a philosophy. Of course, I provided the citation for this and even went so far as to take the time to explain what Ph.D. actually means.
They went ballistic, so to speak. Oh, the names they called me.
This was not an isolated case; they had others chiming in. It wasn't until several days later that someone finally noticed and chimed in to support me, but it was of no use.
That person is a science believer. I can't tell them to stop believing in science. However, it has the hallmarks of a religion, it's certainly a belief system. No, being a belief system isn't a bad thing.
I don't have the answers, but this is a problem. Sorry for the verbosity, but I haven't a better way to describe it, nor a better descriptive term than religion.
>>Science has, in many ways, taken on some of the characteristics of religion.
There is no better way to put it. I have said this the same way. There is a growing sentiment of a "science clergy," a bureaucracy that controls Truth (tm).
I say this as a scientist.
It's almost as strange as how people will ask me medical questions when I'm introduced as Dr. KGIII. I can kinda understand when people ask me things outside my discipline, but I'm the last person you want to ask about your medical problems.
To put it another way, most people I've met who criticized science couldn't even explain how a toaster works if they had to, and had no grasp of what they were criticizing. Hearing superficial criticisms by people who do not know anything about the methodology and maths can be very tiring, and these criticisms have become abundant in online forums. It can be easy to overcompensate and react needlessly aggressively to such general criticisms. At the same time, the critics of science I've met so far (in academia, and all leftists, not that it matters) were all relying on the outcomes of obvious and constantly ongoing scientific progress on a daily basis. From lossy image compression over GPS over microchips to modern medicine and diagnostics.
Maybe in your case people overreacted and missed the fact that you're a smart mathematician and not some random dumbass who says "statistics lie" as if that was an argument for anything, and you felt butthurt about it. Still, they are right when they say that science is not a philosophy. I should know that, because I have a Ph.D. in philosophy, and the standards for our 'theories' are not even remotely as stringent as in natural science or mathematics. And maybe you're also overreacting a bit.
Contrary to what seems to have become fashionable - mostly in certain political circles outside of Europe to be honest - the scientific process continues to converge to the truth in almost all disciplines and we reap the benefits of science daily. (I say almost all because I'd like to exclude literary science and the like.)
That even smart people can be dumb.
― George Orwell
Belief is always optional, if you want people to believe you have to get them to opt-in to believing.
Kahneman is describing part of a larger framework for human beings to cope with their own flawed human thought processes. You're talking about the art of persuasion.
If "you want people to believe" the most effective way to accomplish that is generally to lie to them in a clever way. That's a whole other discussion.
I'm fine with anyone who wants to question climate change, evolution or even something as accepted as gravity. Questioning is how we refine our thinking.
Where I get less okay is when people use their questioning to justify behaviors that, should they be wrong, are detrimental to society and the world. Question climate change all you like, but stop polluting until you've convinced the majority of climate scientists. Question evolution, but stop preventing schools from teaching it or forcing them to give creationism an equal standing until you've got something that is, at least, equally supported.
Science rarely does black and white, but you can still get a lot of benefit from altering behavior based on a scientific prediction that is 60%, 80%, 90%, 99% or 99.999% likely/accurate. Forcing belief doesn't honor that small bit of doubt that's inherent in any scientific study. But betting against what science believes to be likely when there are negative consequences to being wrong is stupid and we need to stop doing it.
Question climate change all you like, but stop polluting until you've convinced the majority of climate scientists.
Just because they're all convinced it's completely our fault things are changing, doesn't mean that the prescribed remedies will actually work.
What if the models are wrong on the low side, and we're wasting resources trying to stop the inevitable when we should be figuring out how to survive it?
What if I don't trust the scientists doing the work, or the larger system that they work in? What if I find the common incarnation of the entire field lacking? I do not trust much published by sociology, regardless of how many studies show results, because of the political nature of the field.
On the other hand, physicists who say they found a new particle? I don't have an ounce of doubt (though maybe I should have a little).
Part of the problem is that the very language used in sociology is crafted to reach the desired findings. Once the integrity of an entire field has been lost, what can be done to restore it?
Technology is created based on using facts discovered by physicists - layer, upon layer, upon layer of very precise and constant use of fundamental ideas of physics. If unexpected things keep happening, it gets investigated and sometimes "new physics" is discovered.
On the other extreme (of what some call science) there are the social sciences and psychology. A study with p=0.05 claims to be true with 95% confidence. Lots of problems with p-hacking and replication. Proper controlled experiments are often either unethical or impossible.
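To put a number on the p=0.05 point, here's a quick simulation (my own sketch; standard normal data and a hand-rolled two-sample t statistic rather than a stats library): run many experiments where there is no effect at all, and about one in twenty still comes out "significant".

```python
import random
import statistics

random.seed(1)

def t_stat(a, b):
    # Two-sample t statistic for equal-sized groups.
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    n = len(a)
    return (ma - mb) / ((va / n + vb / n) ** 0.5)

n, trials, hits = 21, 2000, 0  # n=21 per group, as in small psych studies
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]  # no true effect anywhere
    if abs(t_stat(a, b)) > 2.02:  # roughly the two-sided 5% cutoff at df ~ 40
        hits += 1

print(hits / trials)  # ~ 0.05: one "finding" in twenty is pure noise
```

And that's the best case, before any p-hacking, optional stopping, or selective reporting pushes the real false-positive rate higher.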
Here is a cute cartoon about the spectrum (https://www.xkcd.com/435/), although I would title it "Fields arranged by certainty of truth". I work in geology research, where the certainty of truth of the ideas/theories we promote varies across the whole range. This can be quite fun, but one needs a very well developed sense of how true the ideas you hold are to make progress.
The subtleties on the probability of an idea being true and over which domain it applies to is why people consult "experts". Hard to choose an expert when you are not an expert.
C'mon, use logic and reason.
What does that mean in real terms? What do you propose doing to those people who take the option of not believing? If the answer is nothing then belief is indeed optional.
I suppose there's ostracism for a non-state solution. So if you're a business, maybe once a year you make all of your employees measure some toads, and if they can't apply a Kolmogorov-Smirnov test correctly on the first try they're immediately fired and added to a shared blacklist.
How exactly are you handling enforcement?
There's a very nice article I read recently, on a situation in educational research (as in, how learning happens). The authors point out a situation where a group of 30 scientists published a letter informing teachers about a persistent myth about students. They go into a beautiful examination of the impact of this, what the scientists (possibly) miss, and what effective persuasion may look like. The authors' recommendations, suitably generalized, are a great read for our times, when polarization and tribalism are high, with a startlingly low tolerance for people with “different” ideas. Highly recommended:
Skepticism, Science and Scientism
Have a read! I also submitted to HN at the top level. (https://news.ycombinator.com/item?id=15235291)
Belief is a problem, in my opinion, in every situation.
By definition, a belief is something you accept as true with insufficient evidence. It's kind of the opposite of knowing things.
So why believe at all when you can both think and know. Beliefs are garbage and the number 1 source of problems in this country.
I think Kahneman's response was spot on and real. And I think by responding that way he gives meaning and life to the scientific process of constantly questioning and reviewing and trying to improve.
So much about a person is evident when you challenge their beliefs. There is a range of responses but I find people who respond with "help me understand how you arrived at your [conflicting] result" get to better results than people who respond with "if you believe that [conflicting] result then you must not understand what I said, here let me explain it again in simpler terms for you." The latter don't do as well and don't have as much impact.
Barbara Ehrenreich wrote a book in 2009 called "Bright-Sided" that was an incredibly devastating critique of Positive Psychology (if you managed to hear about it). I finished the book a week ago, and I say this as someone who really likes (and still does) Tony Robbins, but it's scary stuff. The most horrifying part, although not too unexpected, is a conference near the end packed with Positive Psychology PhD students chomping at the bit to get the sweet sweet consulting jobs in the various global cities around the world, and Martin Seligman, the head of the American Psychological Association (APA), having to subtly hint that it's all a bubble, and these students realizing they might have been lied to.
I should also say that she has many concerns with Martin Seligman, another researcher of somewhat similar success to Kahneman.
In defense of all these people though, with the cuts to research funding over the years what else could they have done?
A deeper point here is the general failure of psychology. Karen Horney, a tremendously talented psychotherapist from the 1930s, argues that true psychology is more sociology than psychology. In general this focus on the mind, on the internal, is (1) a very Western style of thinking (specifically Calvinist, from a tradition obsessed with deep introspection) and (2) has only accelerated over the years, as positive self-help thinking asks people to retreat further and further into the mind as a defense to deal with lower incomes, lower economic mobility, or Facebook-incited jealousy.
Layman here. Upon seeing a black box that performs tasks that are otherwise intractable, isn't it all-but-natural to desire to crack the box open and figure out how it works?
(n.b. the inability to artificially replicate the human mind's full capabilities, is one of the few reasons we continue to tolerate the many failings of the human form and mind.)
"Priming is an implicit memory effect in which exposure to one stimulus (i.e., perceptual pattern) influences the response to another stimulus. ... For example, NURSE is recognized more quickly following DOCTOR than following BREAD."
The "priming" that's mostly being discussed in the comment and blog post it's replying to, on the other hand, is macro-behavioral priming, for lack of a better term: priming significantly affecting complex physical or social behaviors. Things like "seeing old-age-related words makes you walk slower" or "holding a hot vs cold beverage for a few seconds before an interview radically changes your opinion of the interviewer". The evidence here is... well, that's a lot of what the non-replicability fuss is about.
"I accept the basic conclusions of this blog. To be clear, I do so (1) without expressing an opinion about the statistical techniques it employed and (2) without stating an opinion about the validity and replicability of the individual studies I cited.
What the blog gets absolutely right is that I placed too much faith in underpowered studies. [...] My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me, but the argument only holds when all relevant results are published."
[Edit/add] Kahneman also outlined an approach to address concerns in 2012 in an open letter in Nature see https://www.nature.com/polopoly_fs/7.6716.1349271308!/suppin... [linked from https://replicationindex.wordpress.com/2017/02/02/reconstruc...] which was apparently ignored by the priming researchers.
I believe that chapter on priming has been widely cited and used in practice. For example, law enforcement agencies use the ideas behind priming whenever they run a sting operation, and clearly believe it actually works. Look at the new prevalence of terrorism, gun, drug, and rape stings.
If you're worried about the "file drawer problem" JASNH might help you feel a little better about the future.
Naively one would think that the scientific method -- "guess that X does Y, try it, see whether it worked" -- depends as much on people telling each other what DIDN'T work as what DID. Yet many journals will refuse to publish negative results (perhaps aside from attempts to replicate significant results).
JASNH takes the opposite publication bias and gives a refreshing view into what stuff people are doing that takes up real research effort, seems to be real work, but didn't show an effect.
I have no illusions that JASNH is, like, a super impactful journal -- it's online-only and accepts a weird mix of disciplines -- but it's nice to see that people are trying to do the right thing.
While some effort to standardize is important, it also wastes a lot of time setting up a specific set of experimental conditions that may not have much resemblance to the conditions that obtain in the real world. In my opinion, we learn much more by taking someone's existing result, thinking through the consequences, and then designing well-powered experiments that probe the assumptions, mechanisms, and applicability of the result. With critical eyes and diverse systems, we won't fool ourselves.
One more note: if this topic interests you, please read The Structure of Scientific Revolutions. If you are unfamiliar with the book, I guarantee it will completely change how you think about science as a human endeavour and make you much more comfortable with the existence of long periods of time where science just gets some things wrong.
The c. elegans story you linked is a great example of this. Getting the little details right matters. I can't imagine having any luck doing that while constantly changing the big details.
Wow... this is pretty basic info on what was measured.
I think that, too, is considered replication by many.
Such an incredible book.
So, there might be a priming effect but the studies that Kahneman used don't necessarily show that? Is that right?
Perhaps more importantly, there still might not be.
Part of the problem is that priming isn't a binary thing, but a range. Some uses of things that could be described as priming are so well established that even if they are not "science", they are certainly engineering, inasmuch as marketers successfully use them routinely. On the other hand, studies that seem to show that if you flash words of negative connotation faster than they can be consciously read (or possibly even consciously seen), pictures conforming to stereotypes associated with those words are slightly more quickly recognized may turn out to be the bunk they intuitively seem to be after all. (Note that I'm not saying they're bunk because our intuition says they are. But contrary to what seems to be a somewhat popular belief, it is in fact possible for our intuition to be correct. It's one of those things where you only ever hear about where it's wrong, precisely because that is in some sense news. It's right quite often, more so for one trained on existing science.)
Personally I'd say this is one of those cases where the recent Nature proposal to up the standard of significance from 0.05 to 0.005 would probably have been helpful. https://news.ycombinator.com/item?id=15192610 If implemented it wouldn't solve everything instantly, but it would certainly raise the bar on this sort of side track being taken.
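A rough back-of-envelope of why the threshold matters (the numbers here are my own illustrative assumptions, not from the Nature proposal): if only a fraction of tested hypotheses are actually true, the false discovery rate among "significant" findings depends heavily on alpha.

```python
# If a fraction `prior` of tested hypotheses are actually true, a test
# with the given power and alpha produces this expected false discovery
# rate among the results that come out "significant".
def fdr(alpha, power, prior):
    false_pos = alpha * (1 - prior)  # nulls that pass anyway
    true_pos = power * prior         # real effects that get detected
    return false_pos / (false_pos + true_pos)

# Assume 10% of hypotheses are true and studies have 80% power:
print(round(fdr(0.05, 0.8, 0.1), 3))   # 0.36  -> over a third of "findings" false
print(round(fdr(0.005, 0.8, 0.1), 3))  # 0.053 -> roughly one in twenty
```

Under those assumptions, dropping alpha to 0.005 takes the expected false discovery rate from over a third down to about 5%, which is what people naively thought p<0.05 was buying them in the first place.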
Cultural progress does not hinge on advances in hard science and the Nobel prize is wise to understand that.
Failing to recognize the impact of literature, peace and economics on our society is a failure to understand the entire purpose of the award.
Peter Nobel, a human rights lawyer and great grandson of Alfred Nobel explained that "Nobel despised people who cared more about profits than society's well-being", saying that "There is nothing to indicate that he would have wanted such a prize", and that the association with the Nobel prizes is "a PR coup by economists to improve their reputation".
So often I read some bullshit article with a headline touting the author as being a nobel prize winner, and it is always the economics prize. It debases the achievements of other nobel prize winners, both in hard science and in literature/peace.
Even Hayek himself was against the Nobel prize for economics because: "The Nobel Prize confers on an individual an authority which in economics no man ought to possess.... This does not matter in the natural sciences. Here the influence exercised by an individual is chiefly an influence on his fellow experts; and they will soon cut him down to size if he exceeds his competence. But the influence of the economist that mainly matters is an influence over laymen: politicians, journalists, civil servants and the public generally."
>The whole of my remaining realizable estate shall be dealt with in the following way: the capital, invested in safe securities by my executors, shall constitute a fund, the interest on which shall be annually distributed in the form of prizes to those who, during the preceding year, shall have conferred the greatest benefit to mankind. The said interest shall be divided into five equal parts, which shall be apportioned as follows: one part to the person who shall have made the most important discovery or invention within the field of physics; one part to the person who shall have made the most important chemical discovery or improvement; one part to the person who shall have made the most important discovery within the domain of physiology or medicine; one part to the person who shall have produced in the field of literature the most outstanding work in an ideal direction; and one part to the person who shall have done the most or the best work for fraternity between nations, for the abolition or reduction of standing armies and for the holding and promotion of peace congresses.
(You will also notice that the Economics prize is a later addition, having nothing to do with Nobel's will. I've always wondered why the Swedes picked economics as the only field worthy of being added as a prize category despite not being mentioned in the original testament.)
I recognize impact of literature and peace prizes. I don't recognize such from economics.
I aspire to this level of self-awareness, accountability and integrity.
But you are right. I still see his book cited all the time in popular media. He should be much more vocal about what he got wrong.
If he just leaves it at this, and the book is still sold the same and he doesn't broadcast this message any further than a blog comment, what has he really said?
He is a very distinguished scientist with a huge amount of published work. It is far easier for him to own up to a mistake than a fledgling scientist just getting started. He has far less to "lose".
Admitting mistakes should be normal behavior for everyone.
Agreed. But it isn't, and applauding it is a way to provide moral incentive to those who might otherwise stay quiet.
Strictly on the basis of significance — a statistical measure of how likely it is that a result did not occur by chance — 35 of the studies held up, and 62 did not. (Three were excluded because their significance was not clear.) The overall “effect size,” a measure of the strength of a finding, dropped by about half across all of the studies. Yet very few of the redone studies contradicted the original ones; their results were simply weaker.
Also, as one person pointed out in the same article:
Dr. Schwarz, who was not involved in any of the 100 studies that were re-examined, said that the replication studies themselves were virtually never evaluated for errors in design or analysis.
And finally, that's how science is supposed to work. Research isn't a hard conclusion but an argument in a long debate:
But the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.
Science is not a body of facts that emerge, like an orderly string of light bulbs, to illuminate a linear path to universal truth. Rather, science (to paraphrase Henry Gee, an editor at Nature) is a method to quantify doubt about a hypothesis, and to find the contexts in which a phenomenon is likely. Failure to replicate is not a bug; it is a feature. It is what leads us along the path — the wonderfully twisty path — of scientific discovery.
What is a high rate? What is a normal rate?
> it's not hacked (or the papers go through selection effects)
There are many, many reasons, besides 'hacking' or selection effects, for results that aren't 100% accurate.
Plausibility depends on the level of expertise one has in a given domain.
Ordinary people don't have any expertise or direct experience of, say, fundamental physics, or astronomy, and so for them (us) no result is either "plausible" or "implausible". We can believe them, or not, but we can't really have an opinion on them. (That goes for climate change: you choose to either trust the experts or not, but you can't tell them that they're wrong if you yourself are not an expert).
But social sciences are very different, because all humans have a pretty good understanding of human nature. Studying it is all that we do, all of the time.
And so, in the domain of studying human nature, "plausibility" really means something, and the opinion of the layman isn't to be ignored, because chances are said layman is as competent as anyone.
In that specific domain, results that fly in the face of the common experience should be received with great skepticism and examined and verified thoroughly.
Relative to what? Anthropologists most certainly have far more expertise on human nature than anyone else. Everyone else tends to fall into a rut of essentialism where they believe that their particular, most obviously culturally specific, experiences are universal.
Anthropology is the study of human cultures and customs, which probably have as much to do with accidents, chance and contingency (history) as with universal human characteristics.
To this affliction, my own research has luckily been relatively immune. Developmental studies need large Ns to track age-related trajectories, and most of my graduate research has Ns greater than 100...for which I am grateful to my advisor and NIH.
Does the act of intellectual honesty displayed on that blog, and the concomitant discovery that the author is a fallible human being, make you more or less inclined to read a work that was rather widely pronounced to be brilliant?
That is, they involve questions which were designed to communicate one thing while secretly "really asking about" another thing, so that the questioner could then shout "Ah-HA! Your biases caused you to misinterpret that!"
Doesn't science build off other science? When one scientist takes the conclusion of another study for granted (Example that X drug is effective), and then tries to build off of it with their own study (testing if A or B delivery method is more effective), doesn't it become obvious fairly quickly the second study is based on a bad conclusion of the first study?
And if the author of the second study calls out the author of the first study, this type of Peer Review could be almost as effective as having more Peer Reviewed literature or study replication, which is what everyone is asking for.
Indeed. In fact, this is extremely common in biology. It’s possible that some fields need to confront the replication crisis (psychology, for example, is frequently cited and the subject of the OP), but I really don’t see this in my field.
The "if" there is one big problem. What seems to have happened instead, in some fields, is that studies that would challenge the results of existing ones simply don't get published. This is partly a result of scientists self-censoring ("I must have done something wrong") and partly a result of journals not accepting papers that challenge "established science".
But also, again for some fields, the assumption that other scientists build on the existing work may not be true. The worst-case scenario is when someone does a study, the results are widely trumpeted and applied to the real world, no one keeps track of how they actually perform in the real world (which may be difficult to do anyway, in fields like social psychology), and no one bothers doing any more studies in the area because it's "settled science"...
A paper that is tainted because it can't be replicated by others also taints all papers that cite it. The index could be searched by author name.
People would be much more careful what they quote.
Basically without replication you do not get quoted.
Without being quoted you do not get scientist karma.
Take a random study from physics and another from social psychology; the physics study will generally be a much higher-powered study, botching the stats will have a smaller effect on the conclusion, and the practitioners themselves are, arguably, better statisticians anyways.
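A quick illustration of the power gap (my own sketch; the 0.4 effect size and n=21 are illustrative assumptions, not from any particular study): a small-sample study usually misses a perfectly real effect, while a larger one almost always finds it.

```python
import random
import statistics

random.seed(7)

def significant(n, effect, crit=2.0):
    # One simulated two-group experiment; True if |t| clears the cutoff.
    a = [random.gauss(effect, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    t = (statistics.mean(a) - statistics.mean(b)) / (
        (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5)
    return abs(t) > crit

def power(n, effect, trials=2000):
    # Fraction of experiments that detect the (real) effect.
    return sum(significant(n, effect) for _ in range(trials)) / trials

# A true effect of 0.4 standard deviations, fairly typical for psych:
print(power(21, 0.4))   # low, around 0.2-0.3: usually misses a real effect
print(power(200, 0.4))  # near 1.0: almost always finds it
```

With 21 subjects per group, most studies of a real, medium-small effect come up empty, and the ones that do hit significance necessarily overestimate the effect, which is one reason replications tend to shrink effect sizes.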
Kahneman has defended much of this research in the past. But at least he's smart enough to recognize his mistakes, which cannot be said about many of his colleagues.
Personally, I run across the concept a lot in popular books and articles that I read (from before the controversy), so it presents a problem for me in whether to believe them if it's not actually true.
If you would have the same standards for large parts of CS that shattered psychology then you'd have to admit some pretty dire things.
Actually, I've hardly seen any vendor study (especially Microsoft, but basically all vendors) which was NOT ridiculed by anyone who cares about science.
> If you would have the same standards for large parts of CS that shattered psychology then you'd have to admit some pretty dire things.
Can you give an example? Unlike "software engineering", which is neither science nor engineering but mostly a craft, Computer Science is a field of math, and rarely has anything to do with probability (p-hacking) or selecting a favorable data set. There's a lot of criticism directed at the relevance of test sets (e.g. MNIST), but that's basically the opposite problem, which might impede progress but does yield very well-defined and verifiable conclusions.
The irony is that Kahneman has spent a lot of time both raising awareness of the replication crisis and studying the kinds of cognitive biases people often have, yet got bitten by exactly that.
In his defense, his comment here is, I think, a master class in what a detailed, sincere "I was wrong" should look like.
It's a typical instance of the telephone game: researchers run a test they came up with on a group of 21 students. In social psychology this is considered a reasonable sample size. From where I'm standing, this is a non-starter for doing research and calling it science. Just don't bother. It's better to not know anything than to do it anyway and catch a bias (yes, like the disease it is).
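To put numbers on how underpowered n = 21 is, here's a back-of-the-envelope sketch in Python (stdlib only). It assumes a two-sided two-sample z-test with known unit variance and a hypothetical true effect of d = 0.3 standard deviations; the effect size and the test are my illustrative assumptions, not anything taken from the studies themselves.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample_z(n_per_group: int, d: float) -> float:
    """Power of a two-sided two-sample z-test at alpha = 0.05
    (known unit variance), for a true standardized effect d."""
    z_crit = 1.959963984540054              # Phi^-1(0.975)
    delta = d * math.sqrt(n_per_group / 2.0)  # noncentrality of the test statistic
    return (1.0 - norm_cdf(z_crit - delta)) + norm_cdf(-z_crit - delta)

# 21 subjects per group (the sample size mentioned above), modest true effect:
print(f"power at n=21: {power_two_sample_z(21, 0.3):.2f}")   # prints 0.16

# How large would n have to be for the conventional 80% power?
n = 2
while power_two_sample_z(n, 0.3) < 0.80:
    n += 1
print(f"n per group for 80% power: {n}")   # prints 175
```

Under those assumptions, 21 subjects per group detect the effect only about 16% of the time, and you'd need on the order of 175 per group to reach the conventional 80% power.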
Refusing to attribute meaning to what is essentially random data is what got us into this whole modern era of technology in the first place. It seems pretty clear by now which fields of science took that seriously and which ones preferred instead to read (meta meta) meta studies and "sit in their office, bouncing ideas off one another".
But hey, I'm from computational science, where we can always generate more data; our sample sizes start at about, oh, 10K or so (at least).
I always get the idea that these guys just wanted to sell books filled with slightly counterintuitive yet somehow plausible factoids. Add in the veneer of scientific credibility and you've got a very juicy best-selling combo. For people who like to feel they are scientific, rational, and intelligent, it becomes part of their ego (something I can totally relate to, btw, but I try to be better).
A few things stood out from Kahneman's comment that reflect his ego (even though he uses words that sound humble, he can't quite find the courage; if he could, it wouldn't have gotten this far):
I had to look up "file-drawer problem"; it turns out to be a cutesy euphemism for "publication bias". Care to guess why he doesn't use that word? He does it twice, even, so it's definitely not to add variation or flavour to his writing style. Especially since replacing the term in its two usages in context would yield "severe publication bias" and "substantial publication bias undermines the two main tools that psychologists use to accumulate evidence", which sounds really pretty damning, much more so than calling it the "file-drawer problem".
> first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples.
He likes to coin terms a lot. I know the "Law of Small Numbers" in a mathematical context where it means something entirely different. So I looked it up and it turns out to be a euphemism for a "hasty generalization fallacy", kind of the exact opposite of what he suggests here.
Anyone care to check this citation? What is this magical law that allows researchers to trust the results of underpowered studies with unreasonably small samples? It sounds beyond implausible to me. You can statistics your way around this in circles, but really you can also just, like, dismiss underpowered studies with unreasonably small samples.
> We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message.
Yeah right. The real answer is "because I had books to sell, research grants to obtain, and an ego to maintain". You don't need someone to write a paper to reach this conclusion. Of course it's harmful; how can that not be obvious? It's also, like, a MAJOR issue in social psychology and similar fields, because sample sizes are always stupidly small. And Kahneman is an expert in this field. So apart from the fact that he already KNEW this because he has common sense, he MUST have already internalized it, because he's an expert in this field and you come across this particular bit of common sense all the time. Therefore, no, you CHOSE to ignore that reality (reality, not message, because you were perfectly aware of it before the paper you cited).
> if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional.
If you come from the point of view of the exact, hard sciences, like physics, math or computational science, it's kind of hard to see anything wrong with this statement, on the surface.
But if you know a thing or two about social sciences and the like, the bullshit that goes on there, you know that almost everything is wrong about the above statement.
Reputable journals often aren't. And there is nothing, absolutely nothing, in this world that can make someone believe; you cannot require it. To say that "belief" is not optional is almost an oxymoron. Unless you use brainwashing. Except I'm not sure "brainwashing" even really works, because, you know, guess which fields conducted the unethical studies into it.
> This position still seems reasonable to me – it is why I think people should believe in climate change.
Please don't drag climate science through the same mud as your clusterfuck of research. The hard numbers and sample sizes they have access to are so large you'd soil your pants.
> But the argument only holds when all relevant results are published.
Which you KNEW was not the case, so that's not really an excuse, is it?
> I knew, of course, that the results of priming studies were based on small samples, that the effect sizes were perhaps implausibly large, and that no single study was conclusive on its own.
But I had books to sell, research grants to obtain, etc. etc.
> However, I now understand that my reasoning was flawed and that I should have known better.
> I knew all I needed to know to moderate my enthusiasm for the surprising and elegant findings that I cited, but I did not think it through.
> I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware.
If he's so convinced that "belief in well-supported scientific conclusions is not optional", then that also holds when the scientific conclusions say the opposite, and he should DROP this belief right this instant until proper evidence is obtained. This is not belief; it's stubbornness.
I mean, sure, I would personally say don't throw out the baby with the bathwater (because I also don't quite agree with the other position; science can be flawed like any human endeavour). But he went hard-line science on this idea in the very same comment, and he should at least be consistent about it.
I really don't think he's learned any lesson. He let his scientific beliefs be guided by ego, clouded by the idea that the research proved what he wanted to be true, and he's published books filled with untruths that are out there right now. Is he going to issue retractions? Errata? Because, you know, lay people are going to read this for years to come, and believe this crap.
In a book by the Kahneman/Tversky/Taleb trio of juicy pop-psych writers, they described their research methodology this way. Proudly so, because what could be better science than such incredibly smart people having the freedom to "bounce ideas off one another"... Sorry, this post is not proper science; I can't recall and properly cite which book it was :-/ I think it was Taleb talking about his buddies Kahneman and Tversky.
The focus of the article is on failed replication attempts in Psychology. The gist:
* The famous failed studies are famous because of characteristics of the media, mainly that the media loves quick fixes: "Just doing this one tiny thing once will produce this massive change."
* A common sense reason that small interventions don't produce massive psychological changes: imagine if everyone you knew was changing personalities all the time. It would be chaos.
I think it's good, and I can tell you a lot more about it because I've written for their new program and had one of the first publications to join.
I can't really find a fault with the current model of Quora, but for some reason $5/month sounds a bit high.
It'll be sort of interesting to see what it looks like in three years.
As of now, I tell people that if they like my stuff, then the $5 is a bargain. We're publishing crazy detailed personal development stuff behind the paywall. Definitely worthwhile.
But if that's not your thing, then it's much less of a guarantee right in this moment.
In three years, though... if they execute on a Netflix-for-content model... that $5 could be crazy compelling. There is a lot of great writing that would be unlocked by having a reader who has already paid, rather than by trying to jump through content-marketing, SEO, and virality hoops.
When Bill Nye starts calling for burning of the heretics:
I wonder whether fact or emotion is guiding us here.
Do a couple of hurricanes prove this? For those with faith, they do. Congrats.