Hacker News
The Long Game of Research (acm.org)
77 points by tosh on Aug 24, 2019 | 24 comments

Alan Kay talks about this when reminiscing about PARC. All of the cool stuff invented at Xerox PARC (PCs, computer graphics, GUIs, word processors, OOP, proto-networking, laser printers) happened in ~5 years by ~25 people. But those people were second-generation researchers shepherded through the ARPA program, and ARPA had been investing in the effort for well over a decade when it finally paid off. Led by Licklider on the ARPA side and Bob Taylor on the PARC side.

The laser printer alone returned 250x all the money invested in PARC. And PARC had to push Xerox to even consider productizing it!

Book: Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age (1999)

I also remember reading somewhere that Peter Higgs, for whom the Higgs boson is partly named, defiantly responded "nothing" for many years when his university did its annual (or was it quarterly?) roundup asking all the professors to submit their latest published findings. Later, after winning renown for his work on the Higgs boson, he remarked that he would never have succeeded in today's impatient environment.

I don't even think Higgs was being defiant in saying "nothing"; I don't think it was that unusual in UK academia in the '70s and '80s for people not to publish for several years.

My mother once asked me why anybody cared about prime numbers - “they’re there, but so what?” I told her that prime numbers are actually an important part of public-key cryptography. It occurred to me that prime numbers had been studied for _centuries_ before somebody figured out a way to use them to secure communications.
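To make the connection concrete, here's a toy sketch (my own, not the commenter's, and far too small to be secure) of how primes underpin RSA-style public-key encryption:

```python
# Toy RSA with tiny primes -- illustrative only, never secure at this size.
p, q = 61, 53                # two primes, kept secret
n = p * q                    # modulus, part of the public key
phi = (p - 1) * (q - 1)      # easy to compute only if you know p and q
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)      # encrypt with the public key (e, n)
plaintext = pow(ciphertext, d, n)    # decrypt with the private key (d, n)
assert plaintext == message
```

The security hinges on the fact that recovering p and q from n (and hence phi and d) is hard when the primes are hundreds of digits long.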

1. Doing math is fun and interesting, like riddles or poetry. You don't really need to do these things, but we find fun in the novelty and surprise, so we do them. Math is like growing flowers: one is pleasing to look at, the other pleasing to think about.

2. Ask her how we know prime numbers are there. Because somebody was playing with math and noticed that some numbers don't have non-trivial divisors. So with some effort we saw a pattern that we could not see before. That was fun.

3. Maybe there are other patterns that are out there, or other things to know about primes. Clearly, it takes effort. So we keep exploring, hoping for the next dopamine hit. That's all. We are just math druggies.
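That "playing with numbers" takes only a few lines to recreate; a sketch of the Sieve of Eratosthenes (my own illustration, not from the comment) surfaces exactly the numbers with no non-trivial divisors:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: cross out every multiple of each prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False       # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False  # has p as a non-trivial divisor
    return [n for n, prime in enumerate(is_prime) if prime]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```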

Ironically, they even wrote essays glorifying pure mathematics' uselessness:

> > No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems unlikely that anyone will do so for many years. [1]

> Since then, number theory was used to crack the German Enigma codes and, much later, figured prominently in public-key cryptography. https://en.m.wikipedia.org/wiki/A_Mathematician's_Apology

[1] Hardy's "A Mathematician's Apology" http://www.math.ualberta.ca/~mss/misc/A%20Mathematician's%20... [pdf]

Math is fun. Why do people play frisbee?

If someone's experience with frisbee were being forced, day after day for many years, to throw a disc through an increasingly arcane set of hoops (and God forbid they try to throw it to another person), then you'd have a lot of work first explaining to that person how frisbee can be fun.

I think the implied question is: why do people care enough about prime numbers to pay people to research them?

We pay people to play sports because we enjoy watching them do so, but that argument does not hold for maths - so your rhetorical question does not answer the implied question, I think.

Must read: The Structure of Scientific Revolutions https://en.m.wikipedia.org/wiki/The_Structure_of_Scientific_...

tl;dr: most of the time science is incremental innovation, and big discoveries come as paradigm shifts, usually started by a group of researchers considered "non-mainstream" at the time.

I've run across a mention somewhere recently that Kuhn's work has been somewhat deprecated. Unfortunately I don't recall where I saw this, or in what ways it may have been.

If this rings a bell with anyone, I'd appreciate a pointer or link.

Kuhn himself disowned many of his early views, refining the original intuitions of "The Structure". There's an essay in [1] in which he tries to offer a novel interpretation.

[1]: [https://www.amazon.com/Road-since-Structure-Philosophical-Au...

Thanks! Botched that one...


Moshe Vardi's essay is provocative, but it still leaves a few too many points vague and undefined for my taste.

The nature, questions, and value of research form one element of a larger set of questions I've been exploring.

I've recently run across David Hounshell's work on R&D at DuPont: Science and Corporate Strategy: Du Pont R&D 1902-1980, originally published in the 1980s.


For a short-ish synopsis covering the main themes, see Hounshell's article: "Measuring the Return on Investment in R&D: Voices from the Past, Visions of the Future"


Upshot: R&D is a risk-based activity; it can have high payoffs, but also high costs. Much of Hounshell's book is about management's attempts both to organise its research and research teams for optimal results and to manage the associated costs and risks. There was a period in which the labs were phenomenally productive, including a roughly decade-long span from the 1920s through the 1930s in which virtually the entire modern plastics industry (fans of The Graduate take note) was invented. But there were also flops (notably Corfam, a fake-leather product of the 1960s).

This fits in with notions I've been noodling at of risks in economics -- an element of virtually all economic activities, but with distinctly different characteristics and scope. R&D tends more toward the dice-rolling variety (you may win), whereas some activities gamble with catastrophic and systemic loss. Some fields appear to be little but raw risk balancing (much of finance, real estate, and of course, insurance) whilst others provide a greater opportunity to directly influence, reduce, adopt, or mitigate risks (engineering, generally).

There's also the risk, or uncertainty, involved in attempting new activities. The recent HN submission "The Drugs Won't Work" https://medium.com/@belledejour_uk/the-drugs-won-t-work-659c... (https://news.ycombinator.com/item?id=20789000) talks of the problems of new drugs discovery, effectively a search through a 10^200 node space for potential beneficial compounds, with very few heuristics in reducing or guiding the search (Lipinski's Rule of Five being one: https://en.wikipedia.org/wiki/Lipinski%27s_rule_of_five).
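As a rough illustration (my own sketch, with approximate property values, not taken from the linked article), Lipinski's Rule of Five is just four cheap property checks used to prune that enormous search space before any expensive lab work:

```python
def passes_lipinski(mol_weight, log_p, h_donors, h_acceptors):
    """Rule-of-five filter for likely orally active drug candidates.

    All thresholds are multiples of five, hence the name.
    """
    return (mol_weight <= 500       # molecular weight in daltons
            and log_p <= 5          # octanol-water partition coefficient
            and h_donors <= 5       # hydrogen-bond donors
            and h_acceptors <= 10)  # hydrogen-bond acceptors

# Small aspirin-like values pass; a large peptide-like candidate does not.
print(passes_lipinski(180.2, 1.2, 1, 4))    # True
print(passes_lipinski(1200.0, 6.3, 8, 15))  # False
```

A heuristic like this doesn't find good compounds; it cheaply rules out vast regions of the search space that are unlikely to contain them.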

All research (and much coding) is effectively a search though a large possible solution space, subject to constraints and domain characteristics, for possible useful elements or combinations. The problem of huge search spaces means that alternatives to brute-force exploration are quickly necessary.

Back to Du Pont: the chemical search space is also large (pharmacology is effectively a branch of it), and the low-hanging fruit was found early with small, simple, readily attained compounds: the law of diminishing returns. Worse was the discovery, often much later, that along with useful features came harmful ones -- the law of unintended consequences being another surprisingly general principle that arose from a specific discipline (sociology) but is applicable in nearly all others.

As for management: by Hounshell's account, management largely turned its researchers loose and told them to have fun.

There's the further challenge that the discovery of keys rarely coincides with the knowledge of the lock in which they fit. The phrase "a solution in search of a problem" was first applied (by Theodore Maiman: https://en.wikipedia.org/wiki/Theodore_Maiman) to a technology of which there are almost certainly numerous instances in your immediate vicinity: lasers.

Early applications were thought to be in the delivery of power (Arthur C. Clarke makes a passing reference in 2001: A Space Odyssey to blasting the Lunar monolith with one), but it was the characteristic of lasers as a tightly coherent transmissive blank slate onto which information could be encoded that proved their greatest use.

(This makes me suspect that materials such as graphene -- a uniform, monoatomic plane -- might find their greatest application as an information-storage medium rather than as a structural material, much as doped silicon has proved useful in semiconductors.)

Another part of my general study has been thinking about how technology works by considering what technological mechanisms fundamentally exist, and how they interact. My list presently numbers nine (materials, fuels, power transmission & transformation, process knowledge, structural knowledge, networks, systems, information, hygiene -- fuller descriptions elsewhere, still in process), which may or may not prove ultimately accurate, but it has been somewhat useful in thinking through numerous problems and applications. Each mechanism has specific characteristics and limitations.

Upshot: I don't think "technology" is a bottomless bag of tricks. It's definitely useful, and we'll get more from it, but it also has costs, including complexity (itself a form of risk, which is to say, of debt), and there are bounds. R&D is exploring a constrained space in which both benefits and risks may be found. Treating the matter probabilistically, on a risk-based approach, may prove generally useful.

tl;dr: some research requires a couple of decades to bear fruit, but people in power, especially in industrial labs, don't have much patience.

I don't see any new insights; this is a very well-known fact. Typically industrial labs start with a lot of fanfare about the long-term view, big bets, blue-sky research, "don't worry about business impact", etc. Eventually, after a few years or decades, people start counting money spent vs. money earned. Then suddenly you have research-evaluation meetings where you have to justify everything in terms of business impact. I think this is the life cycle of industrial labs.

I wonder what the future is going to be like for research within the next few decades?

Working in industry for the past five years, largely in research environments, I've noticed a profound shift away from research labs run by companies like IBM, HP, Intel, and Sun/Oracle, which focused on medium-term research ideas and emphasized publications and prototypes, toward a hybrid model of research and product development pioneered by Google (see "Google's Hybrid Approach to Research" [https://static.googleusercontent.com/media/research.google.c...) that has since become widespread among Silicon Valley's largest and most successful companies. This hybrid research model is more short-term driven than the medium-term approaches of the labs of the 1990s and 2000s, and especially than the long-term visions of the legendary Bell Labs and Xerox PARC of the 1970s and 1980s. I have no doubt that this hybrid research model has been successful for companies like Google and Facebook from a business standpoint.

However, there is still a need for medium-term and long-term research, for the sake of our field and also for the sake of providing future inventions that industry can sell. There is also still a need for theoretical and explorative types of research that may not immediately have commercial applications. Academia would appear to be a great fit for pursuing medium- and long-term research, but the realities of fund-raising and competing for tenure make it difficult for academics to pursue risky medium- and long-term projects.

I've been thinking a lot about this regarding my career. I'd love to work on medium- and long-term projects and aim for expanding scientific knowledge, but industry is increasingly focused on short-term gains, and academia is very competitive, from trying to obtain a tenure-track assistant professorship to trying to earn tenure.

There is a big problem of feedback with respect to longer-term ideas and technology development. In my experience, folks at the coal face (doing delivery, operating the machines) have vital information that would save a huge amount of time in research efforts -- not bothering to even investigate certain approaches, for example. Sadly, this information is often discovered post hoc! I think this is what hybrid research taps into, and it is the right way to tackle things that will be delivered to market in five years or less.

Interestingly, there is very little effort in research generally toward creating and discovering that qualifying information, because there isn't much short- or medium-term reward in ruling things out -- in producing negative results.

Academia is now very remote from industry, in EU/UK computer science at least. There are entire communities of work that appear and run, and that from almost the beginning are known to be very unlikely to produce any real or practical result. The poster child for this is the Human Brain Project, but it was apparent very early on that the Semantic Web was essentially an open question in computer science, and yet it was sold to the Commission and many other funding bodies as incremental technology development. There are other big examples. The problem is that no one is interested in derailing the gravy train, and the funding agencies seem either to be under political influence or to lack any critical faculties or institutional memory.

FYI, you should add a space before the closing ], otherwise HN formatting makes it part of the URL. Currently clicking your link returns a 404 due to this.

I'm not sure academic labs are in any better shape. Outside of a few very rarefied places (NIH intramural research, HHMI-funded labs, etc.), the incentives are such that most projects must allow 1-2 main people to produce a publishable result every 1-2 years.

You get a bit of a grace period when starting grad school, and shorter ones when starting a postdoc or a faculty position, but the ability to stumble around a problem and explore it from multiple angles has become an incredible luxury.

> You get a bit of a grace period when starting grad school, and shorter ones when starting a postdoc or a faculty position, but the ability to stumble around a problem and explore it from multiple angles has become an incredible luxury.

I don't think you are overstating or misstating the problem. The economic incentives prevent certain kinds of long-term research programs outright, but they only slow down other kinds. What grants demand is that researchers publish often. So researchers have to plan ahead and mostly pursue only avenues that can lead to publication within a year or so of work.

If they can break a big research goal down into small yearly chunks, some of which can even happen in parallel, that big goal will eventually be reached. Of course, what slows things down is that a lot of effort has to be repeated per paper: part of it is new grad students who have to learn everything from scratch, part of it is just writing up the paper and going through the review process, etc.

But if the research program can't be broken down, then it's some prof working on it on the side while they scramble to publish enough to keep their job/funding. Usually they don't make much progress, and for all intents and purposes the research goal has been prevented by the system's incentive structure.

Maybe this is well known, but it's still worth repeating (like many true things). As a young researcher, I like Vardi's short essays a lot. He has a historical perspective that many of my peers lack.
