Hacker News

I think most of the observations of that talk still stand.

> System software became about managing large numbers of machines.

Yes, but is academic research on these topics relevant? Most of the progress on infrastructure software has come from internet companies' attempts to cope with their big-data problems. The people who implemented these systems all received their PhDs from academic CS institutions, and they used bits and pieces of existing distributed systems research, but they did their innovative work in the context of commercial companies, not academic CS departments.

> Good new languages were developed.

And what are they? A quick look at https://tiobe.com/tiobe-index/ (for lack of a better resource) corroborates the thesis that the bulk of software is still developed in the same old boring languages. Yes, they have evolved, but not much. Golang is, well, not exactly innovative, and Rust still has a long way to go before it can be called a mainstream language.

> Things running in containers or in VMs need far less of an OS, and a lot less OS state.

Unikernels exist, but is there a compelling non-niche use case for them? Docker acquired Unikernel Systems, and this is what we got so far: https://blog.docker.com/2016/05/docker-unikernels-open-sourc...

These are all great examples confirming the thesis that producing good, relevant systems research is hard. On the one hand, we are piling up gross inefficiencies on top of decades-old technology, so improving the current state of affairs should be easy, right? On the other, software systems are inherently very open systems with many stakeholders, so any kind of successful "clean-slate redesign" is almost unthinkable.




> The people who implemented these systems all received their PhDs from academic CS institutions, and they used bits and pieces of existing distributed systems research, but they did their innovative work in the context of commercial companies, not academic CS departments.

Three thoughts.

First, the title of the talk is "Systems Software Research is Irrelevant", not "Systems Building in Academic Departments is Irrelevant"; for example, the talk discusses Plan 9. The distinction the talk makes, I think, is between commercial products and R&D, not between industry and academia.

In that sense, I think the systems out of Google (MapReduce up through TensorFlow) are good examples of that trend reversing.

Second, industry R&D has almost always led the charge on developing large systems in CS. That's not new.

Third, as you noted, "[many of] the people who implemented [and, more importantly, led the design of] these systems all received their PhDs from academic CS institutions and used bits and pieces of existing distributed systems research". One role of CS academic research is to build foundational ideas and then crank out competent researchers who are able to build real systems/algorithms/companies on top of those ideas. Just because TensorFlow was developed at Google rather than UW doesn't mean that academic research is now irrelevant.

> And what are they?

There's a lot of OCaml, Haskell, and Scala code in the world. C# and the entire .NET family are a veritable treasure trove of academic PL ideas making it into production languages.
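To pick one concrete illustration (my own sketch, not an example from the talk): monads came straight out of academic PL research and are now an everyday idiom; Scala's for-comprehensions desugar to monadic flatMap/map, and the same lineage underlies C#'s LINQ. The object and function names below are hypothetical.

```scala
// Monadic error handling, a PL-research idea now idiomatic in production
// Scala: the for-comprehension desugars to flatMap/map over Option.
object MonadsInProduction {
  def parseInt(s: String): Option[Int] =
    try Some(s.toInt) catch { case _: NumberFormatException => None }

  // Combine two fallible parses; any failure short-circuits to None.
  def addStrings(a: String, b: String): Option[Int] =
    for {
      x <- parseInt(a)
      y <- parseInt(b)
    } yield x + y

  def main(args: Array[String]): Unit = {
    println(addStrings("2", "40"))   // Some(42)
    println(addStrings("2", "oops")) // None
  }
}
```

Nobody writing this in industry thinks of it as "using research"; that's arguably the sign the research succeeded.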


Good comments. They got me thinking: what exactly is "research"? I don't know what distinction Rob makes, but for me research software is software written with the main goal of publishing a paper. In this sense MapReduce is emphatically not research software: it was first deployed into production, and only then was there a paper (with the production deployment serving as validation). A good counterexample is not MapReduce but Stonebraker's C-Store, which was then commercialized as Vertica.
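For what it's worth, part of why the MapReduce paper travelled so well is that the programming model it describes is tiny. A hedged single-machine sketch of that model in Scala (names and structure are mine, not Google's; the real system's contribution was the distributed shuffle, scheduling, and fault tolerance around this core):

```scala
// Hypothetical toy version of the MapReduce programming model: the user
// supplies a mapper that emits key/value pairs and an associative reducer
// that folds all values sharing a key.
object MiniMapReduce {
  def mapReduce[A, K, V](
      inputs: Seq[A],
      mapper: A => Seq[(K, V)],
      reducer: (V, V) => V
  ): Map[K, V] =
    inputs
      .flatMap(mapper)          // "map" phase: emit intermediate pairs
      .groupBy(_._1)            // "shuffle": group pairs by key
      .map { case (k, kvs) =>   // "reduce" phase: fold each key's values
        k -> kvs.map(_._2).reduce(reducer)
      }

  def main(args: Array[String]): Unit = {
    val docs = Seq("systems software research", "software research is hard")
    val counts = mapReduce[String, String, Int](
      docs,
      doc => doc.split("\\s+").toSeq.map(w => (w, 1)),
      _ + _
    )
    println(counts("research")) // prints 2: the word appears in both docs
  }
}
```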

> One role of CS academic research is to build foundational ideas and then crank out competent researchers who are able to build real systems/algorithms/companies on top of those ideas.

The educational role is very important, but do PhD graduates build on top of their research? Instead, it seems, they learn the foundations (which were "research" a few decades ago), do a PhD project on some very specialized and inconsequential thing, and then go off to do real stuff at commercial companies.


To use an example from another field: nothing would have seemed more specialized and inconsequential than clustered regularly interspaced short palindromic repeats in bacterial DNA. But then that turned into CRISPR. Working on obscure, niche stuff is how you actually contribute to science.


Obviously I don't know much about biology, but are regular genome patterns with unclear functions such a common occurrence that they are seen as inconsequential?

> Working on obscure, niche stuff is how you actually contribute to science.

That's what they tell you (my favorite example, BTW, is conic sections: obscure and niche for millennia before it became known that they describe the orbits of planets). But is the deluge of highly specialized, obscure papers really the consequence of the free play of scientists' minds pursuing their own interests? Or is it more a consequence of the sheer number of PhD candidates and postdocs who each need to achieve sufficiently novel results on a reasonable time scale with a fairly certain chance of success (objectives that are obviously in conflict)?

Of course ground-breaking research can grow out of niche results. But for that growth to happen, someone should build upon and improve upon these results, and it is difficult to build upon a research prototype that works barely well enough to register a minimal improvement on some metric and is thrown away afterwards.


> someone should build upon and improve upon these results.

I'm also not from a biology background, but I remember from doing philosophy that it was much more useful to find a paper making some trivial advance in a very specialized area you were interested in than ten 'general' papers. I mean, if you're building something and somebody has written a paper that addresses part of your domain, it's gold dust, even if it's generally too niche for anybody to bother reading, even if it's substandard work.

You're right that there are some perverse incentives in the hothouse production of PhD theses, but in general I think people should embrace the triviality and irrelevance of scientific work. Alchemy set out to answer the big questions (eternal life, gold from lead) and ended up answering nothing. Science set out to answer questions like: how are colours in flower petals passed down through generations? If you look at the history of 'big questions', it's far less illustrious than that of small, boring ones.


Apache Spark and Mesos both came out of academic research, and both are quite popular in industry, so those are two counterexamples to your claim.


Of course there are counterexamples, and yours are good ones (although Google had a system comparable to Mesos much earlier; they just didn't want to publish a paper). Some others, which I mentioned in another post, are the DBMS projects by Stonebraker. BTW, both Spark and Mesos come from the same place (UC Berkeley); maybe they are doing something seriously right there?


Berkeley has a somewhat unique model in that our systems lab partners with industry to discover their pain points. Our research hence often solves an issue actually faced by industry, making it more relevant for practitioners. The current iteration of this lab is the RISELab.



