
Academics, we need to talk - ssn
http://matt-welsh.blogspot.com/2016/01/academics-we-need-to-talk.html
======
forgottenpass
_Now, I don't think all academic research has to be relevant to industry._

I'm getting kind of sick of this tactic, where this fake concession is made
before going on about how academia isn't designed well enough to deliver to
industry.

If the authors of articles like this really didn't care whether academia delivers research to industry, what would be the big deal about a bunch of people somewhere running around in circles? Hey, at least the CS academics occasionally produce something useful to industry, which is more than I can say for other groups running in circles. Sure, it could be concern about burning through money, or a genuine desire to improve the state of CS academia. But if that were the case, I'd expect these articles to have categorically different discussion and calls to action. Or - at the very least - to address goals of academia other than performing research for industry.

I've never been an academic, and never want to be, but even I gave up reading after the "(I know, crazy, right?)" line. At that point I knew for 100% sure that the target audience of this article is not academics. Nobody is that much of a condescending prick to the person they're trying to persuade. This is not a "we need to talk" conversation; this is an "I need to talk at you, so I can show off to other people."

~~~
mdwelsh
(I'm the author of the original blog post.) Ad hominem attacks aside, I do think it's a big deal if a bunch of academics are spending time working on the wrong things, if for no better reason than maximizing efficiency: don't forget that most academic research is funded by the government. Since I also happen to help decide which research proposals Google funds, I care that academics are well-aligned with the problems we care about. Clearly we also need to invest in long-term bets. But there is a big difference between doing long-term, potentially groundbreaking research and bad industry-focused research.

~~~
cushychicken
>it's a big deal if a bunch of academics are spending time working on the
wrong things

At the risk of sounding a little prickly with my comment - isn't it a little
presumptuous to write off a certain amount of research as "wrong"? There are
plenty of examples of research that didn't have a clear purpose leading to
breakthroughs - safety glass, microwave ovens, and Teflon all spring to mind.

Also, focusing on efficiency doesn't really align with academia's purpose,
which (to my mind) has more to do with fundamental research. This article
about Xerox PARC springs to mind:

[http://www.fastcodesign.com/3046437/5-steps-to-recreate-xero...](http://www.fastcodesign.com/3046437/5-steps-to-recreate-xerox-parcs-design-magic-from-the-guy-who-helped-make-it)

~~~
mdwelsh
That's a fair point. I'm trying to draw a distinction between "speculative"
research (which might pan out long term) and "industry-focused" research
(which tries to solve problems we have today). My concern is not with
speculative research -- that's great -- but rather flawed industry-focused
research: making incorrect assumptions, failing to deal with the general case,
not considering real-world constraints.

~~~
gradstudent
> but rather flawed industry-focused research: making incorrect assumptions,
> failing to deal with the general case, not considering real-world
> constraints.

I find this point rather condescending. What you term "incorrect assumptions" are often necessary simplifications that reveal the core of the problem at hand. The point is to try to understand what makes problems hard and to develop new ways of solving them, not to deliver to companies like Google ready-made solutions that cater to their every operational need. Don't like that? Too bad. Fund your own research.

~~~
mdwelsh
No, I mean incorrect assumptions. It doesn't matter if we're talking research
in industry or academia; doing research based on flawed (not simplifying)
assumptions is bad science.

~~~
gradstudent
If you want to make a concrete point about bad science then do it. At the moment, however, you are not making that point. Your argument is that "the assumptions" (whatever those are; you provide no specifics) made by researchers don't match up with industry expectations - a wholly different point for which I have little sympathy.

In my opinion, the scientific authors doing the work are in the best position to judge what assumptions are and are not appropriate for their work. If the science is bad, we expect the community to pick up on that via peer review or in a subsequent publication.

~~~
mdwelsh
I was originally going to list examples of bad industry-focused science in the
original post, but decided against it, since I didn't want to offend anyone.
Your username is "gradstudent", suggesting you have read a few papers. My bet
is that you've read papers where you scratch your head and say, "is that
really how things work?" I read lots and lots of those papers - usually they
don't end up getting published.

~~~
cushychicken
>My bet is that you've read papers where you scratch your head and say, "is
that really how things work?" I read lots and lots of those papers

That sort of contextualizes your post as a long-winded statement of "My job is hard".

------
Fede_V
I think he is largely correct in identifying the flaws of 'academic research',
but he does not spend enough time discussing the whys.

Academics are in an insanely competitive environment, where what is rewarded is bringing in grants and high-impact publications. There are a select few academics who are so brilliant and have such sterling reputations that they can afford not to play this game (like Matt Welsh's former advisor), but most young researchers don't have this luxury.

For example, Peter Higgs, the Nobel prize winner who postulated the existence of his namesake boson, flat out said: "I wouldn't be productive enough for today's academic system" ([http://www.theguardian.com/science/2013/dec/06/peter-higgs-b...](http://www.theguardian.com/science/2013/dec/06/peter-higgs-boson-academic-system)). He spent several years in quiet research, publishing nothing, to develop his theory - a young professor doing the same now is unthinkable. The most highly successful young scientists I know are incredibly career-driven and optimize ruthlessly for the kind of output that tenure committees are looking for.

Basically, if you want researchers to incorporate best practices (tests, version control, well-commented code, etc.) and to actually attempt ambitious long-term research programmes, make sure that's what you reward, and remember you cannot just reward success. By definition, something ambitious has a high possibility of failure - if failing means your career is destroyed, people won't do it.

~~~
thearn4
I agree with your points. In my case, it was this hypercompetitiveness at the expense of meaningful contribution that drove me away from an academic career, post-PhD.

~~~
riskneural
Me too, and I've been lucky enough to have incredible research opportunities
in finance and insurance, which all go unregistered, and which have today
brought me to Holland to eat this cheese plate.

------
samth
Academic research that unknowingly (or sometimes even knowingly) duplicates
secret industry work is much more valuable than this discussion indicates.
Sure, it's not valuable _to Google_ for someone to publish things they already
know. But everyone else benefits. If people at Google want their research to
stop being duplicated, they should publish it.

Of course, if your goal is that Google adopt your new system in their data
centers, then you need to know what they already do. But the problem with that
model of research is the initial goal, not the way it's currently executed.

~~~
mdwelsh
I'm not so worried about duplication by academics -- that does not happen
often -- but rather about academic research that's just wrong: makes bad
assumptions, uses a flawed methodology, fails to address the general case.

~~~
js8
I think I agree with samth - it's not academia but industry that needs to open up. Google especially has a reputation for being very secretive.

~~~
nickpsecurity
You and samth seem to be missing the point: academic research that aims at real-world success is often built using methods that ensure it won't have any. Factoring in real-world constraints, best practices, and existing work-arounds would let academics achieve better results on average. The baseline of practicality goes up.

And again, these are all academic projects that aim at being practical. The point is supported by a number of academics who incorporate real-world information and constraints into their work to produce deliverables that advance the state of the art _and are useful_. Examples that come to mind are: the Haskell/Ocaml/Racket languages, the CompCert C compiler, Microsoft's SLAM for drivers, SWIFT auto-partitioning for web apps, the Sector/Sphere filesystem, the old SASD storage project (whose principles became EMC), old Beowulf clusters, ABC HW synthesis, the RISC-V work, and so on. So many examples of academics keeping their heads in the real world instead of the clouds, making a name for themselves with awesome stuff that has immediate and long-lasting benefits. I'm not Matt, but I'm guessing he'd rather see more examples like this than, say, a TCP/IP improvement that breaks compatibility with all existing Tier 1-3 stacks and whose goal is to improve the overall Web experience. Yes, there are people working on those. ;)

~~~
gluggymug
I don't think the RISC-V work is a good example. It suffers from some of the problems that mdwelsh is worried about.

It's aimed at a real-world problem, but their solution is not good.

A couple of days ago, someone asked where the verification infrastructure was on [https://news.ycombinator.com/item?id=10831601](https://news.ycombinator.com/item?id=10831601). So I took another look around and found it was pretty much unchanged from when I looked last time. There is almost nothing there. It is not up to industry standards, to put it lightly.

It's not just the verification aspect that is weak either. On the design side, they only have docs on the ISA. For SoC work, you are essentially given no docs. Then, in another slap in the face, the alternative is to read the code, but the code is in Scala - basically only helping those who went to Berkeley or something.

It seems relevant, but most engineers would have a pretty hard time if they tried to use it.

~~~
nickpsecurity
As I recall, the RISC-V instruction set was created by looking at existing RISC instructions, industry demands, and so on. The result was a pretty good baseline that was unencumbered by patents or I.P. restrictions. From there, simulators and reference hardware emerged. Unlike many toys, the Rocket CPU was designed and prototyped with a reasonable flow on 45nm and 28nm. Many others followed with variants for embedded and server applications, and prior MIPS and SPARC work suggests security mods will be next.

Them not having every industrial tool available doesn't change the fact that the research, from ISA design to tools developed, was quite practical and had high potential for adoption in industry - an industry that rejects almost everything out of academia when it comes to replacing x86 or ARM. Some support for my hypothesis comes from the fact that all kinds of academics are building on it and major industry players just committed support.

Is it ideal? No. I usually recommend Gaisler's SPARC work, Oracle/Fujitsu/IBM for high-end, Cavium's Octeons for RISC + accelerators, and some others as more ideal. Yet it was a smart start that could easily grow into those, and some components have already been made. It's also progressing faster than anything else.

~~~
gluggymug
The flow is not good, IMO.

They haven't followed engineering practices, which is one of the issues mdwelsh was talking about.

If they've synthesized to 45nm and 28nm, where's all their synthesis stuff - constraints, etc.?

They have no back-end stuff, very little docs, almost no tests, almost no verification infrastructure.

~~~
nickpsecurity
Hmm. I'm clearly not an ASIC guy, so I appreciate the tip on this. News to me. I'll try to look into it.

Any link you have where people mention these and any other issues?

~~~
gluggymug
Maybe I was a bit harsh with the "almost no tests". They have some tests.

Someone named fmarch asked on [https://news.ycombinator.com/item?id=10831601](https://news.ycombinator.com/item?id=10831601) about verification against the ISA model.

It possibly can be done via a torture tester, apparently: [https://github.com/ucb-bar/riscv-torture](https://github.com/ucb-bar/riscv-torture). But taking a quick look, I don't think it handles loops, interrupts, floating-point instructions, etc.

~~~
nickpsecurity
There didn't seem to be a lot in there, but I don't know Scala. I wish it were scripted in Lua or something, with the Scala doing execution and analysis. That would make it easier for others to follow.

It doesn't seem nearly as thorough as what I've read in ASIC papers on verification. They did co-simulation(?), equivalence checking, gate-level testing, all kinds of stuff. Plus, you did it for a living, so I take your word there. I do hope they have some other stuff somewhere if they're doing tapeouts at 28nm. Hard to imagine unless they just _really_ trust the synthesis and formal verification tools.

The flow is here:

[http://www.cs.berkeley.edu/~yunsup/papers/riscv-esscirc2014....](http://www.cs.berkeley.edu/~yunsup/papers/riscv-esscirc2014.pdf)

Are those tools and techniques good enough to get a working first pass if the Chisel output was good enough to start with? Would it work in normal cases until it hits corner cases or has physical failures?

~~~
gluggymug
Interesting paper. It sounds good until you look for the actual work. With a possibly limited amount of testing, you can't be sure of anything. In verification, you can never just trust the tools. With no code coverage numbers, how do I know how thorough the existing tests are? The tests themselves have no docs.

The torture test page said it still needed support for floating-point instructions. That kinda says they did no torture tests of floating-point instructions. I wouldn't be happy with that. Same goes for loops. Etc.

You have to think about physical failures as well: the paper mentions various RAMs in the 45nm processor. You should have BIST for those and Design-for-Test modules. Otherwise you have no way to test for defects.

~~~
nickpsecurity
Yeah, that all sounds familiar from my research. Especially floating point, given some famous recalls. Disturbing if it's missing. I'll try to remember to get in contact with them. Overdue on doing that anyway.

------
munin
Today, in academia, it's considered risky to do research in computer vision,
machine learning, or speech processing, for example, because it's likely that
you will get "out-Googled". Google probably has an entire team of 20 working
on what your one graduate student is doing. They'll have petabytes of real
data to test against, hundreds of thousands of computers to run their jobs on,
and decades of institutional experience. Your graduate student has a macbook
air, six months of experience from an internship at Microsoft, and a BS in
computer science. If you're lucky. They're going to lose. They should just go
to work at Google.

Over time, fields of study become industrialized. There was a time when doing
research in computer vision, machine learning, and speech processing was risky
because the field was new, difficult to enter, and the prospects for
commercialization were slim. That time has passed. Those 20 people working at
Google are the people that helped that time pass. One could argue that the
place for this work is now in industry - the motivations are all right and the
resources and data are aligned to carry the work forward at a rapid pace.

This happens in other fields. For example, there's some word on the street
that DARPA is going to stop funding so much basic research into applied
robotics. Industry, they say, has got this covered. You can argue that they're
right. The commercial sector is starting to get real thirsty for robots.
Amazon talks about automated drone delivery. Everyone talks about self driving
cars. The military wants to buy automated planes as a purchase, not as a
research project. The time for basic research, it seems, is over.

As far as I can tell, this happened with systems about fifteen years ago, so
the academic activity you see in systems is what is left over after all of the
researchers who could do things moved into applying their research in
industry. You no longer need to have weird hair and be buried in a basement to
think about 20 computers talking to each other in parallel - you can go work
at any technology company and think about two million computers talking to
each other in parallel, and get paid two orders of magnitude more money. So
the people doing systems research in academia are the people that cannot take
their systems research into industry. If they could get internships, they
would, and then they would get jobs. They haven't.

~~~
anonymousDan
Such nonsense. I know plenty of people doing great systems research that just
doesn't align with the goals of current technology companies. Just look at the
proceedings of the top systems conferences and there are plenty of good papers
and ideas out there.

------
tensor
> Industry is far more collaborative and credit is widely shared.

This couldn't be farther from the truth. Your ideas are generally credited to the company, which in turn is credited to the CEO or some other higher-up. On collaboration: it's only more collaborative _within a given company_, and not always even then. Between companies, it's outright hostile to collaboration by definition.

~~~
accountatwork
Something I've seen at every large tech company I've worked at (which includes the author's company) is that some people do the work on something cool. The next step for them will be to prepare a slide deck so that some big name can give a talk at a conference. That doesn't always happen, but it's common.

Depending on the managers and team leads involved, that kind of thing can also
happen when promotions come around. At every place I've worked, a common
complaint is that the TL for the project got promoted despite not doing much
because they were TL. Of course that's not supposed to happen, but it happens
all the time.

The post seems to compare the worst case in academia vs. the best case in
industry. You could just as easily flip things around and make industry sound
bad.

~~~
accountatwork
In case it wasn't clear in the original comment, the big name is usually
someone who was only tangentially involved in the project, if at all.
Sometimes it's the head of the org the project took place in. They may have
approved the budget for the project, but it's rare that the big name did any
of the work.

This is like when Tim Bray mentioned that Amazon is great because he hasn't
experienced the same problems that have gotten a lot of press lately. Of
course he's treated well! He's Tim Bray!

Matt Welsh is exactly the kind of big name that isn't going to lose credit on
something. Of course he gets credit! He's Matt Welsh!

------
skywhopper
> I know this sounds like a waste of time because you probably won't get a
> paper out of it

It's unfortunate that a "lessons learned" paper summarizing a sabbatical in
industry doing customer-facing work would not be publishable. Surely it's far
more useful to other academics than most papers. It'd definitely be more
broadly relevant.

My wife is a professor in a practical field and I'm always sad to hear what
counts as a "good" paper or a publishable article. The big journals in these
fields drive the notion of what is and isn't legitimate research. That's the
point where what constitutes career-advancing "academic output" has to be
changed. But I'm not enough of an insider to have any idea of how to go about
doing that.

~~~
mdwelsh
The issue is that when doing a sabbatical/internship at a company, it's often
not possible to write a paper - either because there's not time, or the
company may not want to publish the work (which could be confidential). I
wouldn't go to a company expecting to be able to publish about the project.

~~~
NotOscarWilde
But if a company wanted to make sabbaticals or science-focused internships more attractive to people on the academic career track, it probably could ease its confidentiality policies and design such internships so that said academics can publish, right?

You have mentioned elsewhere that you are looking into how Google can help the
academic world focus more on the right industrial questions, and you
personally recommend such sabbaticals/internships -- so this seems like a
natural step.

~~~
mdwelsh
It's a good idea but fairly challenging in practice. The amount of information
you need to reveal in a scientific publication may make many companies
uncomfortable. My view is that academics should not just be focused on getting
another paper on their CV -- there is value in having the industry experience
even if no papers come out of it.

~~~
wink
I'm not in academia per se, but (in Germany) just getting to do my (equivalent of a) Master's thesis at a company (still with a supervisor at the university) was hard enough - because sometimes they just seem to be searching for people to do work based on their own topic suggestions, and if you want to do something that's a little off the beaten track... Let's just say there were no real arguments, mostly just "we don't like companies that are not university spin-offs, and oh, the shock that the company might reuse that thesis to advertise its exploits instead of granting the kudos to the university."

Maybe that's not the general case, but as someone coming from the tech sector to get a degree (and not the other way round), this all sounds too familiar :)

------
irremediable
I agree with some of this, but I wonder...

> It drives me insane to see papers that claim that some problem is "unsolved"
> when most of the industry players have already solved it, but they didn't
> happen to write an NSDI or SIGCOMM paper about it.

I've seen many examples of industry "solutions" that aren't documented, aren't
published, and aren't even validated. There's a place for papers like these.
I'm not quite your typical CS researcher (I do applied math and software for
medical imaging), so YMMV, but I think this criticism is too harsh.

~~~
mdwelsh
That's a fair point. The issue is that many of these papers don't seem to
acknowledge that industry has (unpublished) solutions, and are somewhat naive
as a result.

~~~
knughit
Another way to look at it: should the open/academic community stop doing crypto research because NSA mathematicians are (secretly) way ahead of them?

"Open" is a different league from proprietary. It doesn't matter that they are behind proprietary work, but it does matter when they are working on the wrong problems.

------
wfo
If you're doing industry-relevant research and you're in academia, leave. Your work can be supported by corporate profits because it is, in essence, for corporate profit. Get a job in industry, make more money, and make room for academics who want to do honest-to-god academic work on theory or fundamental research. Or who want to do research relevant to improving society, not improving profit margins.

There aren't that many professor jobs out there. It's unbelievably greedy to
be taking one up to do industry's dirty work.

You can always take an afternoon off a semester here and there to be an
adjunct and teach a SE class or give a guest lecture.

~~~
mdwelsh
I don't agree with this at all. The partnership between industry and academia
is long-standing and has proven to be extremely valuable -- much of the
Internet came about because of it.

~~~
wfo
At this point it's not a 'partnership' -- industry has co-opted and taken over academia to a startling and troubling extent. Universities are expected to pump out software engineers trained in industry best practices, not thinkers and theorists, and woe betide any school that doesn't toe the line. Research is pushed towards corporate interests, as requested in this article, and people just give up. As public research is defunded, corporate money has to fill in, which forces academics to make noises about "applications" or "industry" when they write their papers in hopes of more funding. In any paper, no matter how theoretical, you can make it sound like it has "applications". If someone published the halting problem today, the first half of the abstract would be "as industry deals with more and more complex programs and terabytes of data that are distributed as services in the cloud, we need to understand if there are limits to what we can compute. We present an argument that there are limits, and describe some practical applications".

Many capitulate altogether, trying to do industry work from their academic
position. It is these last people who should get out.

The creation of the internet worked wonderfully. It was invented by academics on government grants when it was basic research, and then refined and turned into something practical, workable, and, most importantly, profitable by industry. Exactly as it's supposed to be.

------
sail
What stood out for me:

 _My PhD advisor never seemed to care particularly about publishing papers;
rather, he wanted to move the needle for the field, and he did (multiple
times)._

 _Racking up publications is fine, but if you want to have impact on the real
world, there's a lot more you can do._

------
yarrel
Clickbaity title aside, this is sound advice for academics who wish to be
relevant to industry from someone who has experience in both camps.

Other academics, for example those doing "stuff going way beyond where
industry is focused today" as the author explicitly states, can safely ignore
it.

------
KKKKkkkk1
Re collaboration. I work in a government research lab which prides itself on
being a collaborative environment. The result is that we publish 10-author
papers in which one author is doing all of the work and the other 9 are
cheering from the sidelines. I don't think this is particular to my lab -- the
typical scenario is that 90% of the work on any given project is done by 10%
of the people. So when people praise their work environment for being
collaborative, I'm sceptical. I'd much rather be in a situation where everyone
gets the credit they deserve for the work they have actually done.

------
Fomite
"Second: don't get hung up on who invents what. Coming from academia, I was
trained to fiercely defend my intellectual territory, pissing all over
anything that seemed remotely close to my area of interest. Industry is far
more collaborative and credit is widely shared."

In my experience, this is only true until there is money to be made. Or, more specifically, industry was more than willing to share _credit_, but ownership was theirs.

------
jff
> Coming from academia, I was trained to fiercely defend my intellectual
> territory, pissing all over anything that seemed remotely close to my area
> of interest.

Anyone who has ever been through a conference/journal submission process knows
this pain. You can usually tell from the comments which of the reviewers is
working in your field and wants to shut you out.

------
nitinics
I think industry research is mostly driven by constraints that apply to a company's architecture and business use cases, and by how much the company is willing to spend on research that adds value to its products or services. Academics, on the other hand, think beyond the box; their research gives clues to upcoming industries about where the problems might be and how they could be solved, eventually helping industry grow by validating those findings and turning them into "products" and "services".

Therefore, I don't think academics should stop doing what they do (i.e. wander around) and adopt a laser focus on industry's product-based research.

------
woah
Matt, I was intrigued by your throwaway "not another multi hop routing
protocol" comment. As far as I can tell, the field is very slow-moving. The
state of the art, Babel, is at least 5 years old and is an incremental
improvement on protocols that are at least 20 years old. Some very promising
research was done into DHT-based routing with Scalable Source Routing, but
this work is now from more than 10 years ago, and interest seems to have
dropped off completely.

Are there a bunch of protocols that I don't know about?

Are you maybe also referring to centralized path-finding algorithms? That would explain the comment.

~~~
mdwelsh
Nobody needs multihop routing protocols. Show me one instance in which they
have been useful, despite 20+ years of academic work in the area.

------
AngrySkillzz
> "Coming from academia, I was trained to fiercely defend my intellectual
> territory, pissing all over anything that seemed remotely close to my area
> of interest."

Apparently the author was unable to break that habit.

------
pklausler
Program committees and the conferences they serve may be part of the problem,
and hence part of the solution as well. Instead of picking the best N papers
so as to fill out a conference schedule, pick the good papers and shorten the
conference schedule if the number of good papers is <N. And then raise the
standards to meet your expectations of reproducibility, code reviews, unit
tests, etc. If a big conference like ISCA were to be shortened by one day by
omitting the least worthy papers, you'd see much better work arriving the next
year.

------
jonsterling
Who gives a damn if academic research is relevant to industry? Almost anything
that could possibly be relevant to industry is highly uninteresting.

Imagine being someone who thinks that Capital could decide what is a good
problem to work on...

~~~
mdwelsh
This is so completely wrong. The most exciting work happening in systems,
networking, programming languages, crypto, computer architecture, mobile, and
many other subfields of computer science is highly relevant to industry _and_
very interesting academically.

~~~
jonsterling
I do pure type theory, semantics, proof theory & intuitionistic mathematics.
Very little of this will find a home in industry (at least, not for several
decades). Industry has historically been incredibly resistant to 100% of the
things I'm interested in, and I don't blame them!

But it's not just the hard & expensive stuff that they are resistant to. Even
the "easy" stuff (like adopting a programming language designed by the
professionals instead of the amateurs) they won't do.

I build interactive proof assistants. But I'm not pushing their applicability
to industry, and I don't expect them to be relevant to industry (for a long
time at least). Why? Because it's too expensive. Formal verification in type
theory MAKES NO ECONOMIC SENSE; ask anyone who's actually ever done any
industrial verification, and you will find out what tools they are using, and
it has _nothing_ at all to do with the area of research I'm involved in. This
is because there are inherent trade-offs in every technique, and industrial
use-cases tend to prefer a certain set of trade-offs, and I prefer a different
one.

But it's a fascinating topic, and something that I'm preparing to devote the
next several years of my life to. And I can safely say that a meteorite is more likely to destroy Manhattan than any of my computer science research is to be of widespread relevance to industry.

So, no, I totally disagree with everything you have said.

------
lpw25
> I still serve on program committees and review articles for journals and the
> like.

Judging academia from your experience on program committees is like judging
the entertainment industry from watching Britain's Got Talent.

------
hnur
Relevant:

[http://www.smbc-comics.com/?id=2088](http://www.smbc-comics.com/?id=2088)

------
draw_down
Translation: Universities should continue on the path to becoming the research
arm of industry. Academics should alter what they do and how they think, in
order to better suit industry. Academic research does not have value or merit
of its own, outside of its usefulness to industry. Impact on the world happens
only in an industrial context.

~~~
jsolson
Actually reading the article, explicitly none of those things.

Alternatively: _if_ you are doing work that attempts to have industry relevance, you should have some idea of what problems are actually relevant to industry. In particular, just because you think something is an interesting and challenging problem that just _has_ to be affecting industry players does not mean it actually is. It may have been solved already, or you may have made some poor assumptions in conceiving the problem which, if corrected, make the problem disappear entirely (perhaps replaced by a different one that would've been a more valuable research target).

If you're trying to do forward-looking work and invent things that aren't even a twinkle in industry's eye yet, that's absolutely fine too, and the author explicitly calls out that the only important thing here is to recognize when your work isn't likely to be applicable in the near term. Nowhere does Matt
state that this makes this sort of research less valuable, and honestly in
many cases academia is the only place it can reasonably happen due to funding
incentives.

------
alberte
#insert standard weekly hackernews attack academia article response here

------
nijiko
I think we should talk about the constraints rather than relevancy.

------
PaulHoule
I know you Silicon Valley people think that farmers are a bunch of hicks and
an easy target to be disrupted.

No f-ing way.

Cornell University has many departments that are good, but when I look at the agriculture and vet schools, they are beyond anybody else.

Ag schools do research that is relevant to the technological and business problems of their industry. They do plenty of work on genetic engineering and chemistry, and work closely with the likes of Monsanto. They are doing a lot for big ag. They also do research to help small farmers beat pests without pesticides, produce and market (delicious!) Halal meat, and even help householders save money and have a better lawn. "Organic" and "Alternative" innovations diffuse into the mainstream. Pesticides are expensive to buy and to apply; if there is a cultural tweak that's cheaper, they'll adopt it in a flash. When corn prices got high, Cornell encouraged dairy farmers to plant cabbages and other crops as an alternative forage.

They are always beta-testing new crops in our area; Cornell and UNH are finding variants of plants that perform well in the cold Northeast climate, have expanded wine production, and are commercializing new fruits such as the Paw Paw.

Their research is relevant, and it is also communicated directly to the public and industry. Cornell Agricultural Extension has an office in every county of the state that you can walk up to or call to get questions answered, go to an event, etc. They work with trade publications and local government.

And it is not just New York: they do research on tropical agriculture and run a program that provides access to agricultural literature to anyone in poor countries who needs it.

I would point to that as being a much more real "technology transfer" than the
people who are concerned about copyrights and patents.

~~~
dang
> _I know you Silicon Valley people think that farmers are a bunch of hicks
> and an easy target to be disrupted. No f-ing way._

Please don't make unsubstantive, divisive generalizations like this in HN
comments. They make the threads worse, including your otherwise fine comment.

~~~
PaulHoule
Honestly, it is a joke. It drives me nuts, though, to see how HN people don't get that better rural broadband would mean they make more money.

~~~
revscat
Stop. There are no "HN people". There are people who post to HN, but the
opinions are for the most part surprisingly varied. You are shooting yourself
in the foot by making over-generalized comments like this.

------
swehner
When you're telling other people what they should do, you're already lost.

I also don't feel academia is obliged to any particular promise of delivery,
but that's kind of independent.

~~~
jsolson
That's not really what the article is doing, though...

Instead it's saying roughly: much of the work coming out of academia _attempts_ to solve problems applicable to industry, but in actuality industry is not suffering from the problems solved (either because the underlying assumptions are wrong and industry is plagued by a different problem, or because the problem has already been adequately solved by existing work).

~~~
swehner
Some quotes from the article:

    
    
      "My first piece of advice: do a sabbatical or internship in industry."
    
      "you have to work on a real product team"
    
      "hold yourself to a higher standard"
    
      "keep an open mind"
    

Kind of painful to reread it, actually.

~~~
jsolson
All of those quotes are conditioned on the premise that you're doing research
which purports to be relevant to industry. In _that_ context I agree with
Matt.

> Now, I don't think all academic research has to be relevant to industry. In
> some sense, the best research (albeit the riskiest and often hardest to
> fund) is stuff going way beyond where industry is focused today. Many
> academics kind of kid themselves about how forward-thinking their work is,
> though. Working on biomolecular computation? That's far out. Working on
> building a faster version of MapReduce? Not so much. I'd argue most
> academics work on the latter kind of problem -- and that's fine! -- but
> don't pretend you're immune from industry relevance just because you're in a
> university.

This paragraph makes this point abundantly clear: there's plenty of research
that is focussed on expanding the field and is _discovering_ problems that
_will_ exist once industry catches up. Fantastic. It's risky to try and find
the future, but someone has to do it, and I think (based on this and his other
writing that I've encountered) Matt would agree that time spent in industry is
of dubious utility to those folks[0].

For research that is attempting to tackle extant problems encountered by
industry, though, the researcher would be well served by ensuring that the
problem they're attempting to solve is actually extant _and_ that the
assumptions they're making in trying to solve it don't simply sound reasonable
but actually reasonably embody assumptions underlying real (aka industry)
deployments. Otherwise they may get a lovely paper which solves a problem that
nobody actually has or presents a solution that is impractical because it
assumes things which are, in real deployments, incorrect.

[0]: On one hand, it might give them a better feel for in which direction a
tractable future lies, but by the same token it might prevent them from
exploring potentially fruitful avenues by constraining their thinking.

