> We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
> Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.
This isn't the case with sufficiently controversial topics in the social sciences.
That's bad, if there was any confusion.
As another comment puts it, findings that buck conventional thought to the point of prompting irrevocable ramifications had better have an "unimpeachable, reproducible path." And that's difficult when so much of social science rests on statistical modeling of phenomena that are multifaceted and hard to control for, rather than on deduction.
This assumes the current incorrect findings aren't already being used to justify infringements of human rights. For example: findings that relate to rehabilitation for crimes and how long we sentence someone to prison, or whether we sentence them to prison at all; findings related to forced institutionalization of someone deemed too mentally ill to be allowed their freedom; findings used to justify existing laws that see people put in prison for things they perhaps shouldn't be.
Demanding such an impeccable standard for changes, when no such standard was applied to the existing social structure, isn't justifiable.
Before cargo cult science becomes a problem here, we need to actually accept and adopt evidence-based management of these institutions. Currently, policies in these areas are driven primarily by tradition, rhetoric, and anecdote, not scientific inquiry.
...No, it doesn't assume that. The notion that circumstances are imperfect doesn't preclude a drastic change from making things worse.
Thus, one cannot be said to "commit a crime"; one can only say that the particles in one's brain and body happened to be in a configuration, and to interact with other stimuli, in such a way that those actions physically occurred.
We already know these things, but we ain't updating our justice system to account for this.
Isn't non-publication of studies/research (for various reasons) a massive problem in a lot of different disciplines now?
Essentially you'd have to argue that the method you tried should work, based on what we already know, but surprisingly it doesn't work. This type of argument is very hard to carry through.
The opposite is much more publication friendly: "nobody would have thought that this could ever work like this, but we now show that this novel idea does work very well". That's the type of thing for award-winning research.
Short answer: here you need researchers and grant makers to simultaneously decide to do what is not in their own best interest.
I'm sure the people doing those experiments also shared the conviction that they were smarter and more objective than people in past ages.
Or maybe I'm revealing my biased image of him...
You simply do your analysis with some extra, unknown factor added, and once you think you've done the best you can, the blinding is removed and you check what you measured.
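A minimal sketch of that kind of blind analysis in Python (not from the original comment; every number here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# The quantity the experiment is trying to pin down (unknown to the analyst).
true_value = 3.2
data = true_value + rng.normal(0.0, 0.5, size=1000)

# Blinding: a sealed script (or a colleague) adds a secret offset
# before the analyst ever sees the numbers.
blind_rng = np.random.default_rng(7)   # seed kept hidden from the analyst
secret_offset = blind_rng.uniform(-10.0, 10.0)
blinded_data = data + secret_offset

# The analyst tunes cuts, fits, and systematics on the blinded data,
# with no way to steer the result toward the "expected" value.
blinded_estimate = blinded_data.mean()

# Only after the analysis is frozen is the offset revealed.
unblinded_estimate = blinded_estimate - secret_offset
print(f"unblinded estimate: {unblinded_estimate:.3f}")
```

The point of the hidden offset is that nothing the analyst does while tuning can be motivated, consciously or not, by how close the running answer is to the number everyone expects.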
Edit: To expand on this a bit, as it might sound flippant. People often talk about "bias" as if it's something that obviously ought to be eliminated. But in fact you can't get anywhere without biases. We only have the time and resources to explore a tiny portion of the hypothesis space in most domains.
Yeah, bias means our model of reality is distorted; one doesn't correspond to the other as well as it could. An example of bias, from https://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disas... (appropriately enough):
> In the appendix, he argued that the estimates of reliability offered by NASA management were wildly unrealistic, differing as much as a thousandfold from the estimates of working engineers. "For a successful technology," he concluded, "reality must take precedence over public relations, for nature cannot be fooled."
That is bias, and it killed people and didn't do the US any favours. If by bias you mean something else, your post needed to be clearer.
In any case, that's the kind of bias that's relevant to this discussion.
Naive Bayes has higher bias than logistic regression. Is that a good or a bad thing? Depends.
(I don't actually see any analogy with overtraining.)
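As a rough sketch of that trade-off (my example, not the commenter's; whether and where the ranking flips depends entirely on the data, per Ng & Jordan's classic 2001 comparison), the higher-bias, lower-variance Naive Bayes tends to hold up better on small training sets, while logistic regression tends to win once data is plentiful:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Compare a higher-bias model (Gaussian Naive Bayes) against a
# lower-bias one (logistic regression) at several training-set sizes.
X, y = make_classification(n_samples=20000, n_features=20,
                           n_informative=10, random_state=0)
X_train, y_train = X[:10000], y[:10000]
X_test, y_test = X[10000:], y[10000:]

rng = np.random.default_rng(0)
for n in (30, 100, 1000, 10000):
    idx = rng.choice(10000, size=n, replace=False)
    nb = GaussianNB().fit(X_train[idx], y_train[idx])
    lr = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    print(f"n={n:>5}  NB={nb.score(X_test, y_test):.3f}  "
          f"LR={lr.score(X_test, y_test):.3f}")
```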
Agreed. In your original comment, however, you wrote "smarter". Granted, it's a rather vague term, but I would say the type of progress we are making in the sciences could easily qualify.
Other than in physics, chemistry and, to some extent, biology, how sure are we that we have made progress? Can we say that the CBT psychologists of today are any closer to a theory of human cognition than their Freudian predecessors? Can we say that modern dynamic stochastic general equilibrium theories of macroeconomics do better at predicting long-term economic trends than the Keynesian models that preceded them? Even in physics, we seem to be spinning our wheels, building larger and larger particle accelerators while waiting for a theoretical breakthrough that never seems to come.
The alternative leads to things like the cold fusion debacle. There are still people working on it, but the ones I know are aware that they are regarded as cranks (at least for their CF work) and are super careful to try to discount any results that seemingly contradict existing physics. Which is correct: another CF claim had better have an unimpeachable, reproducible path.
That’s very optimistic. If nowhere else, polling has a huge herding problem, where outliers are dropped when they are too far off the consensus.
But you shouldn’t publish that you got 11. Because then somebody else will see that the measurements were 10 and 11, and think the true answer is closer to 10.5...
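To make the herding mechanism concrete, here's a toy simulation (entirely invented numbers, in the spirit of the Millikan story): each lab remeasures whenever its result strays too far from the running consensus, so the published values creep toward the truth instead of jumping there.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value, consensus, tolerance = 11.0, 10.0, 0.3

published = []
for _ in range(50):
    measurement = true_value + rng.normal(0.0, 0.4)
    # "Something must be wrong": results far from consensus get remeasured.
    while abs(measurement - consensus) > tolerance:
        measurement = true_value + rng.normal(0.0, 0.4)
    published.append(measurement)
    consensus = np.mean(published[-10:])  # the consensus drifts slowly

print([round(p, 2) for p in published])  # creeps from ~10 toward 11
```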
The appropriate approach is to accept the evidence, and correct your priors based on it, even if it's not good enough to believe. It's not to fiddle with the evidence until it's something you believe is correct.
Of course, it's much easier said than done. I don't think any group of scientists is safe from repeating this.
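In a toy Bayesian framing (my numbers, not the commenter's), "correcting your priors" is just the conjugate update, and the point is that a surprising measurement moves the posterior rather than getting explained away:

```python
import numpy as np

# Normal-normal conjugate update: a strong prior at the old value,
# one surprising new measurement. The posterior shifts toward the
# data in proportion to the measurement's precision.
prior_mean, prior_var = 10.0, 0.05**2   # confident belief in the old number
meas, meas_var = 11.0, 0.20**2          # surprising new result

post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
post_mean = post_var * (prior_mean / prior_var + meas / meas_var)
print(f"posterior: {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")
# With a prior this strong, the posterior barely budges (~10.06),
# which is exactly how a wrong consensus can persist for years.
```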
On the one hand the traditional Bayesian response is something like "yes, we're making our prior assumptions explicit and then incorporating that into a formal inferential paradigm."
However, this prior is being used to bias the estimates, rather than to avoid the bias. That is, it would be akin to Dunnington taking the current estimates of e/m and using that to shape any new estimates from data. The argument is then that at least he was being explicit about his biases and how they are used to make an estimate.
This has always seemed backwards to me, though. It seems what is more defensible is to use a formal theory about how prior biases affect estimates, and then to leverage that theory to minimize biases. This is basically the idea of the reference prior, to estimate things such that any role of the prior is minimized in an information-theoretic sense. This seems more analogous to what Dunnington was doing.
I really wish reference priors were more widespread, although they can be computationally pretty hefty. It's one of my hopes that quantum computing might make these types of approaches more feasible in general.
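For a concrete textbook instance (my addition, not the commenter's): in regular one-parameter models the reference prior coincides with the Jeffreys prior, which for a Bernoulli rate theta works out to

```latex
I(\theta) = \mathbb{E}\!\left[-\partial_\theta^2 \log p(x \mid \theta)\right]
          = \frac{1}{\theta(1-\theta)},
\qquad
\pi(\theta) \propto \sqrt{I(\theta)} = \theta^{-1/2}(1-\theta)^{-1/2},
```

i.e. a Beta(1/2, 1/2): the prior whose influence on the posterior is minimized in exactly the information-theoretic sense described above.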
A major cause is publish-or-perish. Another is expert-group bias. That last one goes something like: "Experts in astrology agree that astrology is working well."
We can spot these phase locks by comparing the theoretical predictions with the actual real-world results. I've also noticed that some of the results are altered afterwards to fit the model.
Another signal is that good (and friendly) criticism is attacked, usually with personal attacks. This often happens when two different experts meet: from their expertise, they come to different conclusions.
I've noticed that these conflicts are hidden by the peer-review system. Each specialisation is controlled by its own experts, which means the different experts won't touch each other's areas much; they stay in their own territory to avoid conflicts, or don't even widely publish their conflicting results.
That said, bias is a general term, and "intellectual phase lock" as described here is a more specific example. The modern terms would probably be "anchoring", "confirmation bias", "courtesy bias", which slice up the space in a slightly different way.
>> [Pascal Costanza] Why is it that programmers always seem to think that the rest of the world is stupid?
> Because they are autodidacts. The main purpose of higher education and making all the smartest kids from one school come together with all the smartest kids from other schools, recursively, is to show every smart kid everywhere that they are not the smartest kid around, that no matter how smart they are, they are not equally smart at everything even though they were just that to begin with, and there will therefore always be smarter kids, if nothing else, than at something other than they are smart at. If you take a smart kid out of this system, reward him with lots of money that he could never make otherwise, reward him with control over machines that journalists are morbidly afraid of and make the entire population fear second-hand, and prevent him from ever meeting smarter people than himself, he will have no recourse but to believe that he /is/ smarter than everybody else. Educate him properly and force him to reach the point of intellectual exhaustion and failure where there is no other route to success than to ask for help, and he will gain a profound respect for other people. Many programmers act like they are morbidly afraid of being discovered to be less smart than they think they are, and many of them respond with extreme hostility on Usenet precisely because they get a glimpse of their own limitations. To people whose entire life has been about being in control, loss of control is actually a very good reason to panic.
-- Erik Naggum, 2004 https://www.xach.com/naggum/articles/3284144796180060KL2065E...
> Fermi and von Neumann overlapped. They collaborated on problems of Taylor instabilities and they wrote a report. When Fermi went back to Chicago after that work he called in his very close collaborator, namely Herbert Anderson, a young Ph.D. student at Columbia, a collaboration that began from Fermi's very first days at Columbia and lasted up until the very last moment. Herb was an experimental physicist. (If you want to know about Fermi in great detail, you would do well to interview Herbert Anderson.) But, at any rate, when Fermi got back he called in Herb Anderson to his office and he said, "You know, Herb, how much faster I am in thinking than you are. That is how much faster von Neumann is compared to me."
-- Relayed by Nick Metropolis
I got the second one from https://infoproc.blogspot.com/2012/03/differences-are-enormo... which also quotes this submission a bit further on; no wonder it was so familiar and these quotes came to mind.
Lesson #2 was that those super-smart folks I worked with had absolutely no problem saying, "I have no idea what you're talking about, could you explain it?" Probably how they got so super-smart.
So though I learned a lot about the craft in my time at Microsoft, I'd dare say I learned a little bit about how to be a more decent human, too.
It takes a whole lot to shake that. If you see a few pieces of evidence that other people are smarter, it's easy to dismiss. However, if you regularly surround yourself with people who can run circles around you and provide so much evidence that you can't ignore it, you're eventually forced to reevaluate yourself.
Of course, if the machinist had messed up and made an angle outside the allowed range, that would probably have been a few years down the drain (but I expect he checked it before putting it in, just not too closely).
It certainly does. It looks like a majority of the population bullying others into accepting their viewpoints. This can happen because only a single political party exists in the country (China), because the culture is populist and hence unstable, leading to conservatism in individuals (Europe), or because as soon as you voice dissent from the current cultural direction a lynch mob materializes and tries to kill you (the modern US, though it has happened a few times in the past).
> For example, I wonder what techniques can be used to avoid it.
You need a healthy economy, a liberty-minded society, a strong Constitution with freedom of speech, equal protection, and privacy rights (that doesn't have "except when we don't feel like it" clauses like most nations do), and a lack of places and resources for bullies and those in favor of that phase lock to gain power.
In other cases, you need a good pseudonym or (where culture lock is at a despotic level) a decent pair of running shoes.
The alternative is to actually evaluate claims objectively, and find out what you like despite what others think. Both of those things are hard to do.