[dupe] Systems Software Research is Irrelevant (2000) (cat-v.org)
71 points by Philipp__ 34 days ago | 78 comments




"What happened" was two-fold:

1. Starting especially a few years before 2000 but continuing today, the software industry is quite profitable, pays well, and has lots of openings, while the academic job market in systems research continues to pay poorly and offers far fewer positions. So if you want to do systems software research while also having an enjoyable quality of life, you might as well go to a company and get paid well instead of spending your days writing a thesis and grant proposals.

2. Computer science is a field where the cost of basic research equipment is low (a computer), and more interesting research environments generally are beyond the scale of academia (tens of thousands of hardware nodes, hundreds or thousands or more QPS of production load, etc.). That makes it quite different from, e.g., biology or high-energy physics on one end, where you usually need to be in academia to get access to the equipment, or, e.g., mathematics (including theoretical CS) and literature on the other, where it doesn't matter where you are; in systems research you only get access to the equipment by being in industry.

That doesn't mean that systems software research, done in industry, is (or was or will be) irrelevant; it means that the narrower definition of "research" as "that which is done in academia" is inaccurate (including industry with the trappings of academia, i.e., people at Google or Bell Labs writing papers in academic journals and hiring people with Ph.D.s). Systems software research happens in industry and is quite relevant to itself.


Commercial research is decently different from academic research, which is what I think Rob may be referring to.

Commercial research needs to keep in mind the existing legacy systems used by the sponsor. Innovations are more evolutionary instead of revolutionary as the field matures. They may be more tailored to observable pain points of the research sponsor. They may not be widely shared if they yield results providing a competitive advantage. While it may not demand immediate returns, commercial research does have an axe to grind. All of this hampers advancement in the field of computer science in general.

I also don't know if there's any kind of commercial research on the scale of XEROX PARC or Bell Labs. I can't think of any off the top of my head. Microsoft and Google do some pretty neat research, but I don't think they've shipped anything quite on a similar scale.

There's really no organization hiring the best talent to work on the kind of black swan events commercial research may miss. For example, I think it'd be cool to have a microcode-based OS; I've heard it would help with keeping operating systems secure. But who would fund it, and who would work on it? Right now it doesn't look like anybody would, and that might be what Rob is concerned about.


Some specific innovations off the top of my head that are pretty firmly outside traditional academic research, and seem more revolutionary than evolutionary:

- Linux's read-copy-update synchronization mechanism. It has been described in papers, but you're better off following mailing list posts or LWN writeups (a rough sketch of the read/update pattern follows this list).

- Rust's borrow checker and lifetime system. It's built on existing well-known ideas (e.g. affine types) and there's since been some academic work on formalizing it, but the specific system Rust uses has no direct precedent, is pretty novel, and was developed outside academia. (Note that Rust came out of Mozilla Research, which is far, far smaller than Bell Labs but also an organization that intentionally works on revolutionary and not evolutionary improvements.)

- libdill and Trio's structured concurrency, a solid theoretical framework for handling async/await-shaped problems without turning your execution into concurrent spaghetti. The techniques are not unprecedented, but https://vorpus.org/blog/notes-on-structured-concurrency-or-g... is a better framing of them.
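
To make the RCU bullet above a bit more concrete, here is a minimal userspace sketch of the read/copy/update pattern, written against liburcu (the userspace RCU library, which mirrors the kernel primitives). The struct and field names are invented for illustration, and header/flavor and link details vary between liburcu versions, so treat this as a sketch of the pattern rather than as kernel code:

    /* Sketch of the RCU pattern with liburcu: readers take no locks and never
     * block writers; updaters copy, modify, publish, then wait for pre-existing
     * readers to drain before freeing the old version.
     * Roughly: cc rcu_sketch.c -lurcu -lpthread (flavor/link names may differ). */
    #include <urcu.h>       /* rcu_read_lock, rcu_dereference, synchronize_rcu */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct config {                 /* hypothetical read-mostly shared data */
        int timeout_ms;
    };

    static struct config *global_cfg;   /* RCU-protected pointer */
    static pthread_mutex_t update_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Read side: just a delimited critical section, no writes, no blocking. */
    static int read_timeout(void)
    {
        rcu_read_lock();
        int t = rcu_dereference(global_cfg)->timeout_ms;
        rcu_read_unlock();
        return t;
    }

    /* Update side: copy the old version, publish the new one, reclaim later. */
    static void set_timeout(int new_timeout_ms)
    {
        struct config *newc = malloc(sizeof(*newc));
        struct config *oldc;

        pthread_mutex_lock(&update_lock);     /* serializes updaters only */
        *newc = *global_cfg;                  /* copy (safe: updaters serialized) */
        newc->timeout_ms = new_timeout_ms;    /* update the copy */
        oldc = global_cfg;
        rcu_assign_pointer(global_cfg, newc); /* publish the new version */
        pthread_mutex_unlock(&update_lock);

        synchronize_rcu();                    /* wait for pre-existing readers */
        free(oldc);                           /* now safe to reclaim */
    }

    int main(void)
    {
        rcu_register_thread();                /* required by liburcu */
        global_cfg = malloc(sizeof(*global_cfg));
        global_cfg->timeout_ms = 100;

        printf("timeout=%d\n", read_timeout());
        set_timeout(250);
        printf("timeout=%d\n", read_timeout());

        rcu_unregister_thread();
        return 0;
    }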


> more interesting research environments generally are beyond the scale of academia

I think the real impediment to OS research is deployment. If your idea isn't compatible with one of the existing OSs, in such a way that it can run a web browser, then nobody's going to use it. Heck, even Windows Phone couldn't get adoption. OS ideas that require people to completely rewrite applications and interaction paradigms are non-starters no matter what benefits they offer - unless they can fulfil a need that can't be fulfilled any other way. So quite a lot of work goes into bypassing the OS entirely for hardware-specific single-program networking applications, and everyone else has to stick with their existing paradigms.


It's important to remember that consumer-facing apps are just a part of the market; and even with consumer-facing apps, the user interface is often only a small part of it.

Even a totally plain-looking device could be full of innovative research: a network router which uses a completely new kernel. A new network protocol or compression algorithm. A new programming language. Automatic verification and/or fuzzing tools. A network of internet-of-things devices which share no code with any of the existing OSs.


> Even a totally plain-looking device could be full of innovative research: a network router which uses a completely new kernel.

True. Although all those kinds of devices tend to prefer "free" over "innovative", and to keep the OS layer as thin as possible.

> A new network protocol or compression algorithm. A new programming language. Automatic verification and/or fuzzing tools.

To me those aren't really systems software, but that may be a matter of opinion?


Re 2) Biology and physics are both really expensive (think particle accelerators and the Human Genome Project). We could dramatically increase academic computer science funding to let researchers tackle those large, interesting problems.


Virtualization. Capabilities. Kernel-bypass networking. Static code analysis. Verified-pointer microarchitectures. Coverage-guided fuzzing.

Systems software research has come a long way since 2000.


Virtualization: See VMS (1977)

Kernel-bypass networking: See microkernels (1967)

Need I go on?


I think it is a wild misunderstanding of how academic research works to say that the first demonstration of a concept is equal to all further work on a concept. It is like saying that the Human Genome Project isn't recent work because the structure of DNA was discovered in 1953.


"arithmetic has existed for quite a while...what's the novelty Of Mr. Newton's integral? Mathematics research is irrelevant."


This is very common thinking - people seem to equate the first discussion of a concept with "discovery", and "discovery" with "the important stuff".

If that line of thought were consistent, it would credit Babbage, or maybe Turing, as the last computer scientist to do something useful.


I would say there is a difference between developments that significantly change the way we look at things and adapting known principles to changing demands. Where exactly that boundary lies is, I admit, murky at best.


And machine learning is still stuck in the sixties...


It's stuck in the 1760s with the publication of Bayes' theorem.


Yes, please do demonstrate an ancient source of coverage-guided fuzzing, I would be very interested in that.


I think you could make the case that it's the "Infinite Monkey Theorem" made practical by the extension of Moore's law. I think the original author expresses disappointment in the lack of fundamental new developments in systems research, and he has a point. Then again, the wheel hasn't changed much in recent history either.


I suspect you're not very familiar with modern fuzzing research because improvements in this area are definitely not simply a manifestation of Moore's law. Many clever and rather fundamental advancements have been invented and implemented here; for instance, the combination of symbolic execution with concrete execution known as concolic testing.
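
For anyone who hasn't seen it, this is roughly what the user-facing side of coverage-guided fuzzing looks like: a tiny harness hands fuzzer-generated inputs to the code under test, while compiler instrumentation reports which branches each input reached, so the engine knows which mutations to keep. The shape below is libFuzzer's C entry point; parse_header is a made-up stand-in for whatever you actually want to fuzz, and AFL-style tools are similar in spirit. Concolic testing then layers a constraint solver on top of the same loop to synthesize inputs that flip specific branches.

    /* Minimal libFuzzer harness in C.
     * Build, roughly: clang -g -fsanitize=fuzzer,address harness.c parser.c */
    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical function under test: stands in for whatever parser,
     * decoder, or protocol handler you actually want to fuzz. */
    int parse_header(const uint8_t *buf, size_t len);

    /* libFuzzer calls this once per generated input; crashes and sanitizer
     * reports are the findings. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        if (size < 4)
            return 0;              /* too small to be an interesting header */
        parse_header(data, size);
        return 0;                  /* non-zero return values are reserved */
    }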


VMS? See IBM System/360 in 1967.


Virtualization and static code analysis existed before 2000.

Granted - the tooling improved.


Sure. And that's what most research is: Take an existing idea and make it better.


Or take a known phenomenon and explain it.


The last 19 years of systems software research have not refuted Rob's thesis. Industry has made incremental progress; academia has written papers but not built much that people want to use. Despite massive increases in graphics processing power, desktop UIs are still about the same as in 2000, just with more shininess.

And the number one thing that could have gotten better in the last 19 years but didn't: security.


There's a lot of churn in tech - people jump to new stacks for job prospects instead of solving hard problems in existing frameworks. This is part of the reason why tech keeps reinventing the wheel instead of delivering productivity improvements that are perceptible from the business perspective (customers' business needs).


Also, work environments strongly encourage buzzword-driven [development|careers].

It's almost like human society is trying really hard to keep developers busy.


> desktop UIs are still about the same as in 2000, just with more shininess.

That is because the majority of people fundamentally do the same things with computers as they did 20 years ago. Browse the web, edit pictures and videos, put together presentations, lay out documents, build spreadsheets, etc.

Of course now your home videos are in 4K instead of 320p, and webpages are 10MB of JS instead of 10k of text... but these are changes in scale, not in kind.

However, shiny features are what get people attracted to your platform, so we get shininess (never mind if functionality actually gets lost in the process).

The perfect illustration of this for me is George RR Martin, a professional writer of indisputable success, doing all of his writing work on a 1980s workstation with WordStar 4.


> And the number one thing that could have gotten better in the last 19 years but didn't: security.

In 2000, people mostly still used Windows 9x. A single-user system with no sandboxing and no built-in firewall.


He's talking about research, so you have to look at the state of the art, not what the masses were still using. In 2000 I used FreeBSD, which was pretty good. It had jails [https://en.wikipedia.org/wiki/FreeBSD_jail] by then, which a reasonable person might still prefer to modern Linux containers.


Windows 2000 also existed at that time, was widely used, and was basically straight out of the future. It had:

- NTFS 3.0 with file encryption support

- Logical disk management for dynamic disks & expansion of a logical partition over multiple physical disks. Without a reformat.

- Distributed file systems & hierarchical storage management.

- MMC with group policy control, Active Directory, centralized event viewer for OS & application events, and system service management.

- Speaking of which, system services were a thing that actually existed and were managed (the systemd fighting still continues, so Linux still hasn't "caught up" on this).

- Plug & Play ACPI support (technically Windows 98 was the first to support this, but it was so broken it was a joke - Linux lagged by a few years and didn't really support it until 2.6).

- User-mode print drivers

- Network QoS

- Time service with SNTP support


We had jails, but we didn't have virtualized network stacks, pluggable TCP congestion frameworks, or bhyve.


FreeBSD jails have certainly not been a static concept since 2000, so the term "FreeBSD jail" does not denote a single, unchanged thing.


> And the number one thing that could have gotten better in the last 19 years but didn't: security.

This is an astonishing claim: what makes you think it hasn't gotten better? It's gotten a LOT better since 2000.


I’d love to go back to Windows 2000 (and Google circa 2000). The software industry (at least on the desktop side) peaked two decades ago, then spent most of the last decade or so badly reinventing everything on the web.


> I’d love to go back to Windows 2000

While better than Windows 9x, Windows 2000 was also horrendous with regard to security. That was the era when Windows saw so many exploits and worms, and its security practices were so lax, that the firewall started a few seconds after the network interfaces during boot; if you were connected to the internet without a separate firewall (fairly common at that time), you would likely be infected by a worm in those few seconds of unprotected networking.

Anyone around at that time will remember the rampant worms infecting large swaths of internet-connected Windows machines. Code Red. Sasser. Blaster. Slammer/Sapphire.


Gmail didn't come out until 2004. You'd be stuck with Hotmail in 2000. Google Maps didn't come out until 2005.

Google Docs (and the subsequent migration of MS Office to web accessible forms) didn't come until even later.


Gmail is lame compared to Outlook 2000. (It also broke self-hosted email for everyone.) Likewise Google Docs can’t hold a candle to Office 2000 (or even Word Perfect 6.1). It has extremely bare-bones control over text formatting and page layout. E.g. no kerning, limited styling, no footnote styles, limited control of header/footer formatting, no section breaks, etc. No section breaks! The version of Word Perfect I installed from a stack of floppy discs had section breaks!

Microsoft's web apps are a grim reminder of how desktop UIs have evolved backwards. (I’m in the midst of evaluating Office 365 as part of some IT transitions at work.) It's missing tons of features even compared to Word 2000. And it's a total pig. I thought Office was a pig before, but moving it to the Web made everything 10x worse. (Google Docs is less of a pig, but that seems to be because it has less functionality than Gobe Productive on BeOS.)

I’ll concede that Google Maps is better than what was available in 2010. I bet it would be even better if Google turned it into a Win32 desktop app.


>Microsoft's web apps are a grim reminder of how desktop UIs have evolved backwards. (I’m in the midst of evaluating Office 365 as part of some IT transitions at work.) It's missing tons of features even compared to Word 2000. And it's a total pig. I thought Office was a pig before, but moving it to the Web made everything 10x worse. (Google Docs is less of a pig, but that seems to be because it has less functionality than Gobe Productive on BeOS.)

Sure, that's all true, but this backwards devolution also ensures the important thing: that you don't really own the code you run; a centralized provider does, and they can change or break it as they please, without having to remain compatible with your machine. This is a business model problem: they've decided they're better off turning your general-purpose, user-programmable personal computer into a dumb terminal that uses 10x bloated-ass Javascript frameworks to make AJAX calls to their HTTP servers.


What a false dichotomy.

I'd love to go back to the irreverent hacker spirit of the '90s.


Both statements are true. Security in 2000 was worse by a lot, but the need for security was also less. While there were viruses that deleted all your files, they were very primitive compared to what is done today.


We live in a world where the ratio of computers to people is greater than one, and we still don't have great isolation between programs.


We have pretty great isolation between programs on iOS and Android.


I hear you, but security-wise this point is countered by the fact that every (n-1) iOS release has a public LPE (local privilege escalation) exploit available. A user might not be able to jailbreak their iDevice, but a hacker can.


I’m not sure what you mean. How would an attacker jailbreak my iPhone 7?


I wonder what makes you say that? There is more social focus on security; however, that seems more a consequence of the pervasiveness of computing in our society than of fundamental progress in our systems thinking. As far as I can tell, the basic security models have not changed much since the late 60's.


"the basic security models have not changed much since the late 60's" != "number one thing that could have gotten better in the last 19 years but didn't: security"

Those two are very different claims IMO. Who cares what the basic security models are if you are significantly more difficult to attack?

We can debate whether these were "innovative" or not, but the fact is that in 2000 none of these things existed outside of research, if they existed at all: ASLR, stack canaries, RETGUARD, pledge, jails, seccomp, fuzzing, ASan/KASan/HWASan (tagged memory), NX, signed bootloaders/secure enclaves. IMO, iOS took huge steps to isolate the different user applications from one another.

EDIT: I deleted a reference to SELinux. It was introduced only a handful of days before Jan 1, 2001 ;)
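
To pick one item off that list: a stack canary is conceptually just a secret guard value placed between a function's local buffers and its saved return address, checked before the function returns. The real thing is generated by the compiler (gcc/clang -fstack-protector) with a random per-process value; the toy below only models the idea, with an explicit struct standing in for the stack frame, so read it as an illustration rather than as what the compiler actually emits.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy model of a stack canary: a guard value sits between the buffer an
       attacker can write to and the thing they want to reach. If an overflow
       tramples the guard, we abort instead of "returning" through a corrupted
       frame. The layout here is explicit because real codegen controls it. */
    struct fake_frame {
        char buf[16];          /* attacker-influenced local buffer            */
        unsigned long canary;  /* guard between buf and the "return address"  */
        void *saved_retaddr;   /* what an overflow would like to overwrite    */
    };

    static const unsigned long CANARY = 0xdeadbeefUL;  /* placeholder, not random */

    static void handle_input(const unsigned char *input, size_t len)
    {
        struct fake_frame f;
        f.canary = CANARY;
        f.saved_retaddr = NULL;

        if (len > sizeof(f))
            len = sizeof(f);   /* keep the demo inside the modelled frame */

        /* Stands in for an unchecked copy into f.buf that can run past its
           end: anything longer than 16 bytes hits the canary first. */
        memcpy((unsigned char *)&f, input, len);

        if (f.canary != CANARY) {
            fprintf(stderr, "stack smashing detected: aborting\n");
            abort();
        }
    }

    int main(void)
    {
        unsigned char ok[8] = "safe";
        unsigned char evil[sizeof(struct fake_frame)];
        memset(evil, 'A', sizeof(evil));

        handle_input(ok, sizeof(ok));       /* guard survives            */
        puts("short input handled normally");

        handle_input(evil, sizeof(evil));   /* guard trampled -> abort() */
        puts("never reached");
        return 0;
    }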


The article is talking about academic research, while most of the examples you quote seem to be recent(ish) industry implementations of these security models. I'd say that corroborates rather than disputes the author's thesis.


If you consider the [admittedly naive] risk equation:

(amount of data to protect * number of systems that store or handle data * level of risk) - mitigation

...you'll probably agree that the mitigation term has improved 100x, but the other factors have grown even more, so overall risk is still up (e.g., if the amount of data grew 1,000x and the number of systems handling it 100x, a 100x improvement in mitigation doesn't keep pace).


The topic is security research, not security practice. Security practice has improved, but security research has not (or that's the thesis, anyway).


What we see is security practice. The security practices coming into play today grew out of research that was already done by 2000, but they were not widespread in 2000. Is there no security research being done in 2017 that we won't pay much attention to for another decade?


> New operating systems today tend to be just ways of reimplementing Unix. If they have a novel architecture -- and some do -- the first thing to build is the Unix emulation layer.

> How can operating systems research be relevant when the resulting operating systems are all indistinguishable?

> [...]

> Linux is the hot new thing... but it's just another Unix.

Although they are rooted in FP notions of purity and immutability, I would say that NixOS and Guix try to fundamentally change operating systems.


What do UIs have to do with systems research? And given how much easier it is to use an interface that you're already familiar with, isn't it a good thing that they've mostly stayed the same?


> academia has written papers but not built much that people want to use.

Does "started in academia" not count? Because that'd give you easy counter-examples, e.g. Scala, Spark.


LLVM is a huge one.


The disconnect between research and development* has been constantly increasing.

* "development" as in making a technology usable, not software development


I would argue that systems research has been incredibly relevant. First consider programming languages. Even though languages such as Java, C++, and C# are all widely used, they are very different languages from what they were in the early 2000’s. You can see the influence of academic research, especially from functional languages (monads), on these languages. Also, Rust is an exciting new language that is enabled by the systems software research of the past.

If you look at networking, there has recently been a move towards new protocols (QUIC) that was the result of systems research looking at the deficiencies of TCP. Another area is consensus algorithms. We now have large-scale real-life deployments of consensus algorithms, for example Spanner and etcd.

The late 90’s and early 2000’s were a weird time where the hardware was improving so fast and taking software along for a free ride that a lot of software was good enough. Now, as we bump up against the end of Moore’s Law, we will see more research and real-life usage of multicore and heterogeneous computing, and of libraries, languages, and operating systems that try to make that easier.


While he mentions languages & networking up top, he's not really complaining about those. They have advanced, and continue to advance, nicely. OSs, though, have definitely gone into small-increment-improvement mode.


> The late 90’s and early 2000’s were a weird time where the hardware was improving so fast and taking software along for a free ride that a lot of software was good enough.

Would you say that wasn’t the case during the past 20 years (2000-2019)? Or do you consider all that period to be “early 2000’s”?


I think Herb Sutter's 2005 article "The Free Lunch Is Over" was a signal that this was changing.

http://www.gotw.ca/publications/concurrency-ddj.htm


M. Pike's followup, 4 years later, is at https://interviews.slashdot.org/story/04/10/18/1153211/rob-p... .


"Linux may fall into the Macintosh trap: smug isolation leading to (near) obsolescence." Well, that did not happen.


This is how you end up with a language like golang.


Well, to be fair, there is almost no actual scientific programming language research to begin with, so anything goes.


Would research into side-channel attacks count under M. Pike's criteria?

Granted, whilst it is system-level, it is not system software. And it has not yielded demos that people have regarded as cool; rather, ones that some have received as horrifyingly worrying.

But it has definitely influenced industry.


Great work has been done discovering side-channel attacks, but on the other hand most side channels have been created by sloppy microarchitectural design since 2000. So I dunno if that's progress. If we see some CPUs in the next few years that are both fast and not vulnerable, I'll count that as progress.


This article predates it, but OS X (particularly after it mutated into iOS) represents probably the biggest source of systems innovation in the two decades after this article. Apple is very secretive, so their systems research often isn’t known outside the company until it’s actually going into a product.

OS X was modern for its time, but where they’ve really pushed the envelope is with iOS. They can simply move faster at scale than anyone else because they almost entirely own the IP for both the software and all major hardware components and can pivot on a dime compared to market-based coordination.


> This article predates it, but OS X (particularly after it mutated into iOS) represents probably the biggest source of systems innovation in the two decades after this article

There was almost nothing innovative about OS X, even when it came out. It was just packaged and marketed very well. Objective-C and NeXTSTEP were a userland improvement over typical C userlands, but that's not saying much.

> OS X was modern for its time

It really wasn't. The Mach "microkernel" was from outdated 80s research. It's bloated, slow and inflexible compared to the state of the art at the time.


At launch, Android was way more innovative at a systems level, with per-application UID sandboxing, a permission system, and system-integration capabilities (broadcasts, services, intents, etc.).

iOS was innovative at a UI/UX level, definitely. But I can't really think of anything they did at a systems level that was at all innovative?


Android didn't innovate those. Per-application sandboxing had been the default in capability systems since the 60s, became more widely deployed in the OLPC Bitfrost security model, and even had a deployment in HP Labs' Polaris, a Windows NT-based environment for virus-safe computing. These two projects informed the early Android security model, IIRC.


I always understood that the "systems" component of OS X was called Darwin and is an open-source project, and anything but secretive. Did something change?


OS X is NeXTStep at the core. It dates back to the 80's.


Hadoop? Spark? Are those not systems software?


Beowulf? MPI? Were these not already things 30 years ago?

https://github.com/intel/spark-mpi-adapter

Oh look, a paper: "For example, a recent case study found C with MPI is 4.6–10.2× faster than Spark on large matrix factorizations on an HPC cluster with 100 compute nodes"

Does it sound like large-scale data analytics has horribly stagnated?


Hadoop was not an academic project although Spark was.


Soon it will take less time and be more cost-effective to commission the integration of an SoC for your application than to risk your business to software basket weavers.


I genuinely don't know what the author is referring to.

I'm amazed at how many comments revolve around "But wait, of course systems research has evolved, see XX and YYY", followed by responses along the lines of "Nah, he was not talking about XX and YYY, rather ZZZ, etc..."

I hate being the "please define xxx" guy, but is there a consensus definition of what "systems software" is?


Rob defined systems software in the first part of his post as "Operating systems, networking, languages; the things that connect programs together."


RAMCloud isn’t popping, but it did give us Raft.



