
Avoiding Intellectual Phase Lock - monort
https://books.google.com/books?id=__CnDwAAQBAJ&lpg=PT21&dq=intellectual%20phase%20lock%20Frank%20Dunnington&pg=PT21#v=onepage&q=intellectual%20phase%20lock%20Frank%20Dunnington&f=false
======
thijser
Nice, Feynman also described this in Cargo Cult Science:
[http://calteches.library.caltech.edu/51/2/CargoCult.htm](http://calteches.library.caltech.edu/51/2/CargoCult.htm)

> We have learned a lot from experience about how to handle some of the ways
> we fool ourselves. One example: Millikan measured the charge on an electron
> by an experiment with falling oil drops and got an answer which we now know
> not to be quite right. It’s a little bit off, because he had the incorrect
> value for the viscosity of air. It’s interesting to look at the history of
> measurements of the charge of the electron, after Millikan. If you plot them
> as a function of time, you find that one is a little bigger than Millikan’s,
> and the next one’s a little bit bigger than that, and the next one’s a
> little bit bigger than that, until finally they settle down to a number
> which is higher.

> Why didn’t they discover that the new number was higher right away? It’s a
> thing that scientists are ashamed of—this history—because it’s apparent that
> people did things like this: When they got a number that was too high above
> Millikan’s, they thought something must be wrong—and they would look for and
> find a reason why something might be wrong. When they got a number closer to
> Millikan’s value they didn’t look so hard. And so they eliminated the
> numbers that were too far off, and did other things like that. We’ve learned
> those tricks nowadays, and now we don’t have that kind of a disease.

~~~
dr_dshiv
Isn't that also an appropriately Bayesian approach to new evidence?

~~~
knzhou
No, because the Bayesian conditioning ends up being used twice. If you and a
friend do an experiment independently, and he gets 10 and you later get 12,
you might justifiably think the answer is closer to 11.

But you shouldn’t _publish_ that you got 11. Because then somebody else will
see that the measurements were 10 and 11, and think the true answer is closer
to 10.5...
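The creep this produces can be sketched numerically. This is a toy model with made-up numbers (not real electron-charge data): assume every lab after the first measures the true value exactly, but publishes the average of its own measurement and the last published number, so the prior evidence gets folded in twice.

```python
# Toy model of double-counted updating (hypothetical numbers).
# True value is 12.0; the first published value was an anchoring 10.0.
# Each later lab measures the truth exactly, but publishes the average
# of its measurement and the last *published* number.
TRUE_VALUE = 12.0
published = [10.0]  # the slightly-wrong first result

for _ in range(6):
    measurement = TRUE_VALUE  # each lab's honest, exact measurement
    published.append((measurement + published[-1]) / 2)

print(published)
# [10.0, 11.0, 11.5, 11.75, 11.875, 11.9375, 11.96875]
```

Each published number is "a little bigger than the last", exactly the pattern Feynman describes, even though every individual measurement was spot on.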

------
osullivj
Sounds like the author is describing techniques to avoid Kahneman & Tversky's
anchoring effect. Avoiding one's own cognitive biases is important in
debugging too; I mutter "listen to the system" as I read logs and error
messages to try to avoid ignoring output that contradicts my preconceptions
about root causes.

~~~
bluetwo
I use "what does IT think I'm asking it to do?"...

------
multidim
To summarize it for the curious in a hurry: intellectual phase lock is the
tendency for people in science/intellectual-endeavors to publish/assert
results that are not too far from what other people are getting. With this
tendency, it can take a while for a (science) community to drift from a
fashionable, wrong belief to a more correct belief. thijser's comment[0] is a
good example of intellectual phase lock.

[0]:
[https://news.ycombinator.com/item?id=21113365](https://news.ycombinator.com/item?id=21113365)

------
zyxzevn
It may be weird to some, but there are a lot of intellectual phase locks
today.

A major cause is publish-or-perish. And expert-group-bias. That last one is
like: "Experts in astrology agree that astrology is working well."

We can spot these phase-locks by comparing the theoretical predictions with
the actual real-world results. I also noticed that some of the results are
altered afterwards to fit to the model.

Another signal is that good (and friendly) criticism is attacked, usually
with personal attacks. This often happens when two different experts meet:
from their expertise they come to different conclusions.

I noticed that these conflicts are hidden by the peer-review system. Each
specialisation is controlled by its own experts, so the different experts
won't touch each other's areas much. They stay in their own territory to
avoid conflicts, or do not even widely publish their conflicting results.

~~~
asdfman123
In programming, many major decisions are phase locked: not many programmers
have the time to meaningfully test out new technologies, so many of us depend
on hearsay to choose what's "best."

------
fouronnes3
IIRC similar care was taken for the gravitational waves paper. They had the
measurement team send multiple data sets of observations to the team writing
the paper, only one of which was correct.

------
ImaCake
As a side note: if you like collecting notes like this but find yourself
frustrated that you can't copy-paste from the source, I recommend downloading
tesseract, taking a screenshot, and running OCR on the image to extract the
text. Tesseract works really well on this kind of material and saves a lot of
manual typing.

------
SiempreViernes
More commonly called bias I think, calling it "intellectual phase lock" seems
like a weird flex.

~~~
SatvikBeri
The book was written in 1987, before Kahneman and Tversky's experiments were
popularized (at least outside of academic psychology).

That said, bias is a general term, and "intellectual phase lock" as described
here is a more specific example. The modern terms would probably be
"anchoring", "confirmation bias", "courtesy bias", which slice up the space in
a slightly different way.

------
dandyandy
"Most people are concerned that someone might cheat them; the scientist is
even more concerned that he might cheat himself."

------
Jach
Looks like the two good comments were taken (Feynman, Kahneman). I'll just
leave a couple quotes I thought of when I read a few paragraphs further about
the hierarchy for ability in math and how it can be quite upsetting to
discover how far up it goes beyond you, when you thought you were pretty high
up.

>> [Pascal Costanza] Why is it that programmers always seem to think that the
rest of the world is stupid?

> Because they are autodidacts. The main purpose of higher education and
> making all the smartest kids from one school come together with all the
> smartest kids from other schools, recursively, is to show every smart kid
> everywhere that they are not the smartest kid around, that no matter how
> smart they are, they are not equally smart at everything even though they
> were just that to begin with, and there will therefore always be smarter
> kids, if nothing else, than at something other than they are smart at. If
> you take a smart kid out of this system, reward him with lots of money that
> he could never make otherwise, reward him with control over machines that
> journalists are morbidly afraid of and make the entire population fear
> second-hand, and prevent him from ever meeting smarter people than himself,
> he will have no recourse but to believe that he /is/ smarter than everybody
> else. Educate him properly and force him to reach the point of intellectual
> exhaustion and failure where there is no other route to success than to ask
> for help, and he will gain a profound respect for other people. Many
> programmers act like they are morbidly afraid of being discovered to be less
> smart than they think they are, and many of them respond with extreme
> hostility on Usenet precisely because they get a glimpse of their own
> limitations. To people whose entire life has been about being in control,
> loss of control is actually a very good reason to panic.

–– Erik Naggum, 2004
[https://www.xach.com/naggum/articles/3284144796180060KL2065E...](https://www.xach.com/naggum/articles/3284144796180060KL2065E@naggum.no.html)

> Fermi and von Neumann overlapped. They collaborated on problems of Taylor
> instabilities and they wrote a report. When Fermi went back to Chicago after
> that work he called in his very close collaborator, namely Herbert Anderson,
> a young Ph.D. student at Columbia, a collaboration that began from Fermi's
> very first days at Columbia and lasted up until the very last moment. Herb
> was an experimental physicist. (If you want to know about Fermi in great
> detail, you would do well to interview Herbert Anderson.) But, at any rate,
> when Fermi got back he called in Herb Anderson to his office and he said,
> "You know, Herb, how much faster I am in thinking than you are. That is how
> much faster von Neumann is compared to me."

-- Relayed by Nick Metropolis

I got the second one from
[https://infoproc.blogspot.com/2012/03/differences-are-enormous.html](https://infoproc.blogspot.com/2012/03/differences-are-enormous.html),
which also quotes this submission a bit further on; no wonder it was so
familiar and these quotes came to mind.

~~~
projektfu
Very good! I got an early education in college when I took Real Analysis in my
first year (which most programmers do not take) and it kicked my ass so hard I
still have impostor syndrome. But I don't make the error that I'm the smartest
person around anymore.

~~~
asdfman123
Yeah, same here. As embarrassing as it is to admit, I think a lot of us grew
up thinking we were The Smartest People Ever, because so much of our
self-perception is solidified in our early teenage years.

It takes a whole lot to shake that. If you see a few pieces of evidence that
other people are smarter, it's easy to dismiss. However, if you regularly
surround yourself with people who can run circles around you and provide so
much evidence that you can't ignore it, you're eventually forced to reevaluate
yourself.

------
amelius
Would he have used the same approach if there were fierce competition in his
field?

~~~
SiempreViernes
Probably, it only costs a couple of days to dismantle and measure and you get
a very valuable source of confidence.

Of course, if the machinist had messed up and made an angle outside the
allowed range, that's probably a few years down the drain (but I expect he
checked before putting it in, just not _too_ closely).

~~~
amelius
Ok, but with competition he probably would have had more pressure to publish
sooner and more often.

------
tlb
It's remarkable that in this case he was able to obscure just a single piece
of information to avoid phase lock. That's rarely the case. If you wanted to
avoid phase lock for most topics you'd have to completely cut yourself off
from all the literature in the field. I can't think of a way to do this in any
of my fields: robotics, programming languages, or algorithms.

------
SolaceQuantum
Can intellectual phase lock apply culturally, and if so, what does it look
like? I also wonder what techniques can be used to avoid it.

~~~
onemoresoop
I'd say absolutely. That's why new generations tend to do things differently:
they start from a somewhat blank slate.

~~~
SolaceQuantum
Yes, so should we break out of our cultural intellectual phase lockings the
way we should break out of our scientific ones?

~~~
onemoresoop
You're free to break from it, have a breakthrough, and share it with the
world. On a larger scale it's a drop in the bucket, though. You'd have to
change how and what things are taught, and sometimes the cultural
intellectual phase might unlock into a worse phase. It's best to let these
things happen on their own.

~~~
SolaceQuantum
What does "let things happen on their own" mean in this context? One person
can pivot a culture, just as one researcher can pivot a field. What's the
difference?

~~~
onemoresoop
What I mean is: don't try to "break" the establishment, but let it "break"
itself. Not in the sense that the establishment is bad, but that some
practices have crystallized and nobody challenges them.

------
conjectures
This is a great excerpt.

