
Rethinking Hamming’s questions - zdw
https://lemire.me/blog/2019/01/29/rethinking-hammings-questions/
======
hprotagonist
Hugged to death. [http://archive.is/h9dih](http://archive.is/h9dih) has it.

>Tell smart people to work on what is important to them, but don’t tell them
(ever) what exactly they must do. Do not reward any of them visibly for any
repeatable action.

Recapitulates the rather famous advice attributed, by way of Rayleigh, to J.
J. Thomson:

 _"If you pay a man a salary for doing research, he and you will want to have
something to point to at the end of the year to show that the money has not
been wasted. In promising work of the highest class, however, results do not
come in this regular fashion, in fact years may pass without any tangible
result being obtained, and the position of the paid worker would be very
embarrassing and he would naturally take to work on a lower, or at any rate a
different plane where he could be sure of getting year by year tangible
results which would justify his salary. The position is this: You want one
kind of research, but, if you pay a man to do it, it will drive him to
research of a different kind. The only thing to do is to pay him for doing
something else and give him enough leisure to do research for the love of
it."_

~~~
lixtra
This may be an argument for PhD positions that include teaching vs pure
research.

For me it is psychologically good to have constant small successes in teaching
while seemingly not advancing in research for months. Of course the teaching
load shouldn't be too high.

~~~
hprotagonist
I'd be a lot better off, psychologically, if i had to fret about grants maybe
once per decade, not twice a year.

------
biomcgary
I've certainly seen this in biology. The rapid growth of molecular biology
graduate programs in the 90s and early 00s led to biology becoming a career
path with a lot of mediocre talent entering graduate school. To keep research
faculty happy and supplied with inexpensive hands at the bench, the historical
filters for clarity of thought and careful logical analysis were relaxed in
qualifiers and dissertation defenses. Simultaneously, there was a shift from
biological experiments designed to carefully test a hypothesis with a limited
number of logical outcomes to genome-scale (e.g., gene expression) experiments
that were more likely to lead to pattern matching and post hoc storytelling
exercises. This shift reduced the filter for careful thinkers even more. The
overall trend led to more hierarchy in the sciences, which has limited the
number of topics, the acceptable answers, and the funding for carefully
thought out, but non-mainstream, research.

In biology, the complexity of the subjects is very high relative to physics,
so determining the important questions is much harder. Studying bacterial
immunity was and is a very niche field, but it led to CRISPR, a very important
biological technology.

------
btilly
The reasoning all starts with the supposition that people can't know what the
important questions are. Whether this is true strongly depends on what field
you are in.

In any field of study with a shared paradigm, there will be rough agreement
about which questions are important. The whole field may be wrong about that
answer, but the question is answerable.

How broad is that agreement?

Across math and the hard sciences, it turns out that not only does each
subfield have its dominant paradigm, but people accept the paradigms in
neighboring subfields, and the overall paradigms across fields. Therefore when
a result like neutrino oscillation (neutrinos flipping type in flight) was
demonstrated, it was immediately recognized as important by all physicists,
even those in fields as far away as fluid mechanics.

This shows up in journals, where each little subfield has its own
constellation of journals, with the best results going into a prominent
journal that is shared across several subfields. The same happens at the next
level, and so on, leading to a hierarchy that culminates in the most important
journal of all, _Nature_, where the most important results in all fields are
announced.

This beautiful picture completely falls apart when you get out of math and the
hard sciences. The problem is not that people are not doing scientific
research, it is that the field hasn't agreed on a paradigm. The question of
what the right approach is and what questions are important is a subject of
debate, not agreement. This immediately renders Hamming's advice
questionable. Indeed, the divide is so deep that the picture I have described
is both obvious to someone who has worked in a field where there is such
consensus, and sounds impossible to someone who works in a field where there
isn't.

This shows up in a variety of ways. For example, in journals: each school of
thought has its own ranking of journals, and the rankings don't agree.
Therefore no journal can ever attain the widely acknowledged prestige of the
top science journals. For another example, if you ask two random psychologists
for the 10 most important research results in psychology in the last 10 years,
one's list is unlikely to contain anything recognized by the other. However,
if you do the same with physicists, the top results will have a lot of
overlap. Right now, both of their top-10 lists are likely to include the
verification of the Higgs boson and the measurement of gravitational waves.

Anyway, long story short, I understand why this researcher would question our
ability to know which questions are important. But for anyone in a hard
science, that is surprisingly achievable.

~~~
milesvp
I think there's more to Hamming's advice than just parroting back the
consensus in a given field, though. Hamming did something that most people
just don't do: he intentionally cross-pollinated by having lunch with people
in different academic disciplines. His harassing questions ("What's the
biggest problem in your field?" and "Why aren't you working on it?") were, I
think, first and foremost meant to help him get a better idea of the broad
strokes of the field in question (I'm sure he also got a certain amount of
perverse pleasure in being a pest). But if you look at how he writes about
people's responses, I think he was also trying to get people to think about
the big picture for themselves, and in that context it doesn't really matter
what your answer to his questions is, so long as you've given it some thought.

His observation that people who gave satisfactory answers tended later to hold
more prominent positions (I can't remember whether he also said anything about
the importance of their direct contributions) really just lends support to the
idea that it's important to step back and keep the big picture in mind. He
made similar observations about people who work with their doors open versus
those who work with their doors closed: those who kept their doors closed
tended to work their way toward irrelevance in their field, despite much
greater short-term productivity.

Mostly I find he's got advice that seems generally useful in any career, which
boils down to: be wary of working on something without regard to how it fits
into the bigger picture.

~~~
btilly
There is indeed more to it.

But his reasoning starts with the assumption that it is possible to know what
the big problems in a field are, which is the very assumption this article
opens by questioning. And I'm sure that in the author's field, that assumption
looks more questionable than it did to Hamming.

------
smaddox
> Hamming believed that great scientists knew the important problems. I doubt
> it. I’d call it hubris.

Or perhaps causality is just reversed. Perhaps it is the very scientists who
happened to choose important problems to work on who become "great" as a
consequence of that choice.

