Systems Software Research Is Irrelevant (2000) (cat-v.org)
68 points by qntty on March 5, 2019 | 43 comments



Pike's pessimism in a way reminds me of the Bill Gates quote:

We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.

Still, some of his pessimism has been spot on. Linux is Unix, and RISC-V is, well, the fifth RISC from Berkeley, which even then was only codifying principles Seymour Cray laid down (now with Cray vector instructions!).

But LLVM is more pliable and experimental (and slower) than GCC. Didn't see that coming. No one even remembers Netscape, but Chrome and Firefox are not simply Netscape.

He's dead right about the overemphasis on measuring. I never even read the measurements section in papers; I'm only interested in the ideas. That said, recent conferences attach a badge to papers that submit a verification artifact. That's useful, especially if I really want to dig into the paper.


Strong disagree here on the topic of measurement. While some papers are "idea" papers (whose goodness is obvious), others are best evaluated through careful measurement showing the strengths and weaknesses of the approach. Bad evaluation will lead to bad conclusions about the merit of an idea.


> I never even read the measurements section in papers. I'm only interested in the ideas.

Genuinely curious, how do you identify if an idea is of any practical use without caring about measurements?


An idea doesn't have to be practical, and it certainly doesn't have to be immediately practical. I think "useful" is a locally greedy optimization, and that is not in itself at all bad. I just think that an idea might be able to take you a little further, sometime, just not right now.


That's a fair point. I probably overemphasized the word practical in my original comment. I didn't mean immediately practical. (I assume you meant "doesn't" in your comment.) And I do agree that even ideas or systems which perform poorly could be eminently useful. But I personally find looking at measurements helpful to give me some sense of whether or not the approach being proposed is suitable for solving problems I care about.


For example, if you are reading the classic papers in a seminar, are you really even interested in their measurements? But if you are looking at new papers, I think the ideas come first. Remember, there's a ton of stuff to read. A buddy said he read two papers a day in grad school.


If you're reading classic papers, then presumably they've already proven themselves worthy (by being considered classic).


Exactly. The thought process I go through in evaluating a new paper is certainly different from when I am reading older, proven work. But in both cases, I still personally care about the measurements.


I think you meant "doesn't have to be immediately practical".


Measurements presented in a paper are not a good indicator for whether an idea is practical. If the results weren't good, the paper wouldn't exist.


Unfortunately, I've found this isn't always true, but I see your point.


Some ideas can be great in the small even if they won't scale. Not everything needs to be built for massive scale.


I agree not everything needs to be built for massive scale. I didn't mention anything about scale in my comment.


Rob Pike wrote this out of disappointment at the decline of systems software research, specifically in operating systems. In the 90s there was a lot of great work, but the momentum, he is saying, was on the decline.

When you read what he writes (at least from my perspective), it's not a sincere call to stop doing systems research; rather, it's that everyone else thinks it's less relevant, so what the hell is the point of pursuing it anymore?

One supporting point: conferences like ACM SOSP [1] and USENIX OSDI [2] now have sessions that have nothing to do with operating systems or systems software. For example: bug finding, "big data", web apps, RDMA (high-performance networking), machine learning, security (this does relate to systems, but the research presented is often tangential, e.g., involving GPGPUs or a secure chat system), and graph processing (??).

[1] http://sigops.org/s/conferences/sosp/2015/current/index.html

[2] https://www.usenix.org/conference/osdi18/technical-sessions


> Instead, we see a thriving software industry that largely ignores research, and a research community that writes papers rather than software.

I have actively followed the NSDI and SIGCOMM communities, and this is, for the most part, true: research venues have become a hiring billboard for big companies (Microsoft specifically?), and that's all that is left of research.

Most papers published from these companies (and academia) are flawed at the core and primarily story-driven (at least in the two conferences that I mentioned). Companies publish with data that is inaccessible to anybody but them---MSR, specifically, takes this to an extreme. The scientific contribution of most papers is close to nil. Writing and storytelling dominate the field. Experiments are cherry-picked and rarely reproducible, and the software is seldom usable.

People are rarely willing to think outside the box and spend more than a year on a paper. Most people pick an idea from an outside field, apply it, and publish a paper. That's the end of it. Most people that I have talked with publish to get their name out, and rarely care about any scientific agenda.

Imagine: in a systems research community, a large portion of academic advisors cannot develop proper software---they can, however, pitch stories and write text for days.

The problem is not that people are evil... it's that the system/community has decided to take this path. And for one, I cannot fathom why. It is not even rewarding to publish a paper in these conferences anymore ... except to enjoy the trip and the dinners.

You hear stories about how people try to optimize their chances of getting in. E.g., I have heard from good researchers that you should not register your paper early, because you will get a two-digit paper number, indicating that this is a resubmission, and that lowers your chances of getting in. There are many such hidden gems.


It's not clear what main problem this talk/these slides want to solve.

There's a lot of research and advancement in the things that matter:

- Rust is showing how linear types can strike a nice balance between safe systems and great resource usage (see the sketch after this list).

- GPUs and TPUs are the next-gen, somewhat general-purpose computer hardware, but neither of them came from operating systems research.

- Containers partially solve the resource-sharing problem, though Plan 9, which the author refers to, is probably a better system design for containers. At the same time there's huge pressure at the big companies to improve resource sharing (like the performance and UX of lambda functions) while minimizing side-channel vulnerabilities, so it's not as if no money is being put into it (this is a multi-billion-dollar problem for these companies, and the CEOs know it, as they have to keep building the datacenters).
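
To make the first bullet concrete, here is a minimal Rust sketch (the names are purely illustrative) of how ownership -- affine types in practice, where each value is consumed at most once -- frees a resource deterministically and lets the compiler reject use-after-move, with no garbage collector involved:

  // A Vec<u8> owns its heap buffer.
  fn consume(buf: Vec<u8>) -> usize {
      // Taking `buf` by value moves ownership into this function; the buffer
      // is freed deterministically when `buf` goes out of scope here.
      buf.len()
  }

  fn main() {
      let buf = vec![0u8; 4096];
      let n = consume(buf);
      // The compiler statically rejects any further use of the moved value:
      // println!("{}", buf.len()); // error[E0382]: borrow of moved value: `buf`
      println!("consumed {} bytes", n);
  }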

These are just a few examples of all the advancements that have been happening; I don't see any reason why universities would do so much better than companies.


Even at that time (2000), Sun was already working on - or soon would be - all of these and other things that have had an enormous impact on the industry:

  - ZFS
  - DTrace
  - SMF (think systemd done right, though SMF has its issues)
  - FMA
I remember Bryan Cantrill, Mike Shapiro, and others at Sun screaming up and down that there was a ton of innovation left to do in operating systems. They were absolutely right.

In some sense, ZFS and DTrace were applications of old ideas, but boy were they done well. ZFS is 4.4BSD's log-structured filesystem done right and better. DTrace is applying the dictum from database land of aggregating statistics as close to the source as possible. They could just have been papers, I guess, but they chose to make an impact by writing code.

Why write papers first then code when you can write code first and papers later? Academia mostly only pays for papers, so you don't get that much code out of academia. The private sector mostly only pays for code, so you don't get that many papers out of the private sector.


> It's not clear what is the main problem this talk/slides want to solve.

Basically, Linux's design sucks, and the products that surround it suck. Except for a tiny few applications, over the past 18 years almost nothing innovative has been adopted by the majority of the industry.

Oh, well, containers: a combination of package manager and resource isolation. But that basically already existed as jails/zones, just without the package manager and remote backend. And microservice orchestrators, those are new, and useful. But neither of those makes a dent in the laptops and mobile devices which the majority of the planet uses.

Just as one comparison, it's been 27 years since Plan 9 was first released, and we still don't have most of its incredibly useful, novel features in any operating system. We're actually poorly re-inventing distributed operating systems, but as incompatible, over-designed, kludgy distributed applications.


Well, if "Linux's [non-]design sucks" is the problem, there is no solution. The Linux community is simply not interested in design, design reviews, reviewing interface designs, and so on. They are not setup for it and don't want to be. They are only interested in code.

So you get things like epoll.
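
For readers who have not run into it, here is a rough sketch of what driving the raw epoll interface looks like -- shown from Rust through the libc crate (a dependency I'm assuming; Linux only, error handling deliberately thin). Three separate syscalls and a caller-owned event buffer, just to wait on one descriptor:

  // Register stdin with an epoll instance and wait for it to become readable.
  fn main() {
      unsafe {
          // 1. Create the epoll instance: a kernel object behind a file descriptor.
          let epfd = libc::epoll_create1(0);
          assert!(epfd >= 0, "epoll_create1 failed");

          // 2. Register interest in fd 0 (stdin) via a separate syscall and struct.
          let mut ev = libc::epoll_event {
              events: libc::EPOLLIN as u32,
              u64: 0, // opaque user data echoed back on readiness
          };
          assert_eq!(libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, 0, &mut ev), 0);

          // 3. Wait: readiness is reported through a caller-owned buffer.
          let mut ready: [libc::epoll_event; 8] = std::mem::zeroed();
          let n = libc::epoll_wait(epfd, ready.as_mut_ptr(), ready.len() as i32, -1);
          println!("{} descriptor(s) ready", n);

          libc::close(epfd);
      }
  }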

On the other hand, Linux won. Maybe because of this, or maybe in spite of this.


I agree that Linux's design sucks, but I don't agree with the operating system taking on the role of distributed computing.

Distributed computing is really hard because different applications are efficient with different computing/storage models. Look at Redis, BigTable, PostgreSQL, TensorFlow, and MapReduce, just as examples. There's no reason why you couldn't do deep learning on top of any of them (they are all generic enough to execute the algorithms), but you can easily see a 100000x difference in execution speed, in which case being compatible doesn't have any advantage.


Rob is way smarter than I am, but maybe this opportunity just doesn't exist anymore. How much is there to be done in systems research that has a chance of influencing the industry but isn't just another layer of abstraction on top of existing, well-agreed-upon standards?


A lot, I hope. We're entering a time when chip fabrication tech is commoditized. That's an opportunity for new architectures. Further up the stack there are a lot more inefficiencies that can be removed. In my laptop here is a CPU that can do more than two billion operations per second, but it struggles to render many websites. What we have now can't be the end-all of systems software.


I took this as his key point:

> Instead, we see a thriving software industry that largely ignores research, and a research community that writes papers rather than software.

Although, from the rest of the slides, he also laments a lack of research, he is (was?) disappointed that research didn't end up in the market the way it used to. I've certainly read papers that claim to solve a problem I have, but been unable to apply them, because I really need software and not a paper (or because the software works on exactly the example data and nothing else).

Research that produces another layer of abstraction is unlikely to be super useful. Research that produces different layers might be. Rob wanted to see whole operating systems and cool demos, but I'm happy to see things like data-structure and locking changes in TCP stacks that allow for significantly better multicore performance. (That's unfortunately likely to go away with MPTCP or QUIC-style TCP over multiple UDP sockets over time --- making that perform well is a good target of research too.)


There is a never-ending amount of work to do in systems research.


It sucks that many of the great innovations from Plan9 never became widely adopted, but systems research still goes on! Containerization, for example, and virtualization have given rise to a whole new field of systems research. And a lot of it is being done in Pike's own language, Go.


Au contraire ;) Innovations of Plan9 landed in mainstream systems: Unicode, Namespaces => Containers. If you will, Docker is the result of a mainstream implementation of a Plan9 feature.

I was recently thinking that if you want to create a popular system feature, you just need to copy something from Plan9. IMHO the only reason nobody uses Plan9 is its poor hardware and userland software support. I've given up hope that this will ever change; still, it remains a nice innovation testbed.
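
As a rough illustration of the Namespaces => Containers lineage claimed above, here is a hedged sketch of the Linux primitive container runtimes build on, called from Rust through the libc crate (a dependency I'm assuming; Linux only, and usually needs CAP_SYS_ADMIN):

  // Move this process into a fresh UTS (hostname) namespace via unshare(2).
  // Container runtimes layer mount, PID, and network namespaces on the same mechanism.
  fn main() {
      unsafe {
          if libc::unshare(libc::CLONE_NEWUTS) != 0 {
              eprintln!("unshare failed (are you root / do you have CAP_SYS_ADMIN?)");
              return;
          }
          // Hostname changes made here are invisible outside the new namespace.
          let name = b"sandbox";
          libc::sethostname(name.as_ptr() as *const libc::c_char, name.len());
      }

      // A child started now inherits the namespace: its `hostname` prints "sandbox",
      // while the rest of the system keeps the original name.
      let status = std::process::Command::new("hostname").status();
      println!("child exited with {:?}", status);
  }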


Some of us are still using Plan 9. I'm glad to see the connection between Plan 9 namespaces and containers mentioned; most people have missed that step when following the evolutionary pathway from Unix chroot through BSD jails to Linux containerization. There is actually a semi-secret modern Plan 9 container-like service platform, but the Plan 9 community doesn't usually try to publicize/commercialize its work, so almost nobody knows about it.


I don't see the connection that clearly. You're forgetting about OS/390 virtualisation. That is the earliest innovation I can think of that comes closest to what Docker is today.


Oh really, which service would that be? ;)


> "But now there are lots of papers in file systems, performance,security, web caching, etc.," you say. Yes, but is anyone outside the research field paying attention?

uh... yes.


The past 19 years show just how much. The industry is absolutely abuzz with research and its applications.

Systems research sure felt like a dying field in 2000. But 2000 was the start of its revival. So much good has happened since. I wouldn't know where to start. GPUs? The Cloud? iOS and Android? ZFS and DTrace? Rust? I'm afraid of all the things I might miss in making a representative list!


Note well his definition of "is": "Now, not ten years ago, and I hope not in another ten years." Maybe you're actually agreeing with him?


Where is that? Anyways, I'm not disagreeing about the state of play in 2000. I'm disagreeing with the pessimism.


It's in the "Definitions" section. Interestingly, however, it's not in the PDF - the words are there, but the definitions of the words are missing.


Yes, DTrace and ZFS as well. Zones. Even MS Systems Research has a decent body of contributions. Anyone remember Singularity, their .NET-based OS?


He talked about a systems project nobody would attempt. Microsoft did, with Midori. It was an amazing result. Then they shelved it, apart from use in one service. (Sighs)


Yes. Practical examples:

- ASLR
- Capsicum / pledge (capability-based OS security)
- Microkernels
- Unikernels
- Reproducible builds / OSes
- Software transactional memory at the OS level

Heaps of cutting-edge stuff, especially in BSD.


I've heard talks by Alan Kay where he also decries the lack of research in universities: they aren't exploring new hardware or new OSes. He talks about how at PARC they built computers that were 5 to 10 years ahead of their time in terms of speed and functionality, and could therefore explore a different set of ideas w.r.t. interaction with the user.


I could not agree more! To me the big picture is that the industry is aligned towards servitude to MBAers, marketeers/salespeople and advertisers.

Let's go sell more and grow x% per year!!! Until the next disruptive fast-food junk product puts us out of business.

In the meantime we've helped make our end-users (the rest of the world) stupider.

How proud are you, code monkey?


> OLD stuff. Compare program development on Linux with Microsoft Visual Studio or one of the IBM Java/web toolkits.

If Visual Studio and tools of the same design philosophy are so great, then why are Sublime Text/Atom/VS Code such a massive success?


MS VS was actually way closer to Sublime than to its current incarnation. I used VC++ 6.0 as a replacement for Notepad for many years, because it started quickly and handled different code pages (encodings) semi-decently (while most everything else on Windows died on a BOM or something).


The polemic aged remarkably well.


Is that sarcasm? Cause I think it must be.

Pike says (well, said, in 2000) there's been nothing new in GUIs. Fine, he was right in 2000. Then the iPhone and iOS happened just a few years later. Oops.

Or how about this:

> It has reached the point where I doubt that a brilliant systems project would even be funded, and if funded, wouldn’t find the bodies to do the work. The odds of success were always low; now they’re essentially zero.

Oh really? As he wrote that... Microsoft was about to release Windows 2000 and Active Directory. Sun was investing (or soon would be) in things like ZFS and DTrace, which have had a massive impact on the industry. Apple was recovering and would soon give us the iPhone and iOS. The cloud was a long way away, though, again, Sun had its "grid" just a few years later, and now the cloud has been a huge driver of innovation up and down the stack. GPUs were soon to be a big thing. Microsoft, Amazon, Google, and many others have invested huge amounts in research and innovation. Tensor processing and AI are all the rage in ways the AI researchers of the 80s could not have imagined. Linear types are being applied in real-world programming languages (Rust) to finally offer an alternative to garbage collection as a way of avoiding manual memory management.

Pike's pessimism was wrong. I'm sure he sees this now. I'm sure he's ecstatic at how things turned out.

In his defense, 2000 capped a bleak decade in the software industry.

As Keith M Wesolowski used to sign his emails:

  Keith M Wesolowski "Sir, we're surrounded!"
  Solaris Kernel Team "Excellent; we can attack in any direction!"
As an industry, we attacked in all directions.

2000 was the start of beautiful things.



