What Is Systems Programming, Really? (willcrichton.net)
248 points by wcrichton on Sept 11, 2018 | 87 comments



My idea of system programming is that, beyond the "near to the bare metal" element, which may not always be true, it has this quality of creating infrastructure for other layers to use. A 3D game engine and a DNS server may both be written in C++ and may use the same low-level programming techniques to achieve speed, but the fundamental difference is that one is just part of an application program of some type, and the other is a system that provides a general service to other programs. So clearly it's technically possible to do system programming in higher-level languages, even if this will often still require dealing with many system calls regardless of the language, either explicitly or implicitly via abstractions.


Computer systems are only useful because they can run application programs. "Systems programming," therefore, is programming that aims to improve the utility of the computer itself, as opposed to "application programming," which solves concrete problems posed by users.


> part of an application program of some type, and the other a system that provides a general service to other programs

I find that distinction to be a bit arbitrary. To me, systems programming is programming which requires adherence to some sort of well-defined spec/standard so as to interoperate with some (potentially underlying) "system".

In a 3D engine, the systems programming part is the part that interfaces with the graphics drivers, but not the part that derives the 3D geometry from reading files or running logic to generate the content.

In a DNS server, the systems programming part is the socket interface, but not the DNS protocol (though you could argue that the protocol is also considered "system-ish").

In a web application, the systems programming is also the socket, in a very similar vein to DNS servers.

In a desktop application, the systems programming is the interface into any native resources (such as any socket calls, any disk, or peripherals, as well as any of the graphical hardware interfacing required).

But because many of these tasks are quite common, they get put into libraries, and so many application programmers do not deal with these "systems" part, and just deal with their own application logic, and so it feels like there's no systems programming.


I think systems programming should be split in two, honestly.

You have the kernel-level systems programming where you need to be near the bare metal, where you write kernels or program microcontrollers with less RAM than an x86 CPU has L1 cache, where taking a microsecond longer can mean the overall system crashing (or even costing a human life).

And you have system-level systems programming where you write services and supportive infrastructure to enable user programs, where garbage collection and non-realtime behaviour is acceptable, where you can skip a microsecond and it'll be alright.


I think the two "poles" you've identified are good ones (~microseconds matter and ~100ms matters) but there's a wide range in between. If you do anything with video at modern resolutions, then you're in the single-millisecond-matters regime (this includes games as mentioned upthread, and just about anything in VR as well).


Video streaming and gaming have some soft realtime guarantees, yeah, though generally these aren't systems programming but application programming instead.

The plumbing below, i.e. the graphics rendering pipeline, tends to fall into the category of kernel systems programming; controlling the GPU is pretty close to doing what a kernel does IMO.


It seems a bit telling that you described one as "kernel" systems programming and the other as "system" systems programming.


Well, I think the distinction is fair: kernel systems programming is largely concerned with things a kernel would normally do (i.e., kernel or µC code), very close to the metal. Maybe "bare metal systems programming" would have been more accurate, but I feel it doesn't convey the same meaning.

System systems programming is what it says on the tin; it's about constructing systems, multiple objects interacting with rules on the interaction, etc. An alternative name might be "user space systems programming" though again I think that doesn't convey the meaning as well.


I think that in user space, it is not clear what is an application and what is a subsystem. Is a database a system or an application? I could issue queries to it directly as part of some ad-hoc research, or I could build a production process using it as a component. In one use-case, it's the application, and in the other, it's just a subsystem. And so I think the distinction between kernel-space and user-space is the only one that makes sense. Once you're in user-space, the distinction between systems and applications can no longer be applied to a tool, but only to particular use-cases for that tool.


A modern RDBMS is largely in the systems layer since it's the glue used by other applications.

Having a user interface doesn't really matter IMO, otherwise none of the usermode systems programming applications would be systems programming at all.

Perhaps it would help to add that user software applications generally aren't intended to be used by other applications. Your browser is such an application since its primary interface is being directly used by the user. The output and state of a browser is not intended to be consumed by other applications running on the system, though it is capable of running code to supplement the output and state of a single website for the purpose of further user interfacing.

An RDBMS doesn't have such a thing; the user interface is largely what you use in the application, and its primary use case is gluing together applications like browsers or websites.


You can make a distinction between the kernel and the shell (shell = all the system tools between the kernel and the applications, cf. https://www.tutorialspoint.com/operating_system/images/linux... not only sh or bash), but this distinction remains an implementation detail, and you can change the position of this interface at will. From microkernels that handle only the messaging between processes, to OSes like Multics or Windows NT that include even GUI operations in the kernel...

Nor can you count on hardware protection to make the distinction. Some systems may use the hardware to protect objects at a different level of granularity. For example, capability-based OSes will use a single address space, and use the hardware to protect not processes, but smaller objects (capabilities). Access and communication are not managed to prevent processes from accessing other processes' objects, but to prevent or allow access to capabilities. In a softer way, it's also the case for Lisp OSes (including e.g. GNU Emacs). There's not much specific protection on these systems, but it still works safely, because you can access only the objects to which you have a reference, and you cannot compute random addresses (it's a controlled execution environment).


Great article. This is an important distinction to make. I've noticed that the more experienced I got, the less need I felt to micro-optimize every aspect of a program at the expense of code readability. That's where I think the design and implementation of a system intersect; making it easy to understand how it works and giving future maintainers the confidence to make changes is best accomplished with clear and readable code with the right abstractions (even if it hurts performance a little). I think that documentation will always be second to that.


Depends on the kind of software, though. If you're writing an interpreter/compiler, web server, database or operating system you don't have much choice.


> Andrei Alexandrescu (creator of D)

Andrei didn't create D; he's been influential in it, especially w.r.t. D's metaprogramming story, but the language was created and is primarily maintained by Walter Bright.


Ah, my mistake. Clarified the wording, thanks.


It was no mistake, but it is now.


IIRC he is a listed co-creator of D2 (there are two versions of D, the first one being entirely Walter's?)


D1 was created by Walter Bright, but D2 was primarily created by both Walter Bright and Andrei Alexandrescu.


Is D2 so different as to be considered a separate language? (I haven't really followed it for years now.)


Yes it's very different.

D2 introduced a lot of new keywords and new semantics, and there are no longer two "standard" libraries like with D1, where there was Tango. Tango is pretty much completely obsolete in D2. Most D1 code might work to an extent with the D2 compiler, BUT almost no D2 code will be able to run using a D1 compiler, and it'll even be difficult to translate it directly.

D2 is basically when Andrei joined the development of D back in 2007.


Why am I getting downvoted for this? Lmao


> You should be able to forge a number into a pointer, since that’s how hardware works.

It's a C-centric view of the world and I wonder what old school FORTRAN77 people or lispers would think of this.

Anyway, I think the right to do this should be a privilege of the compiler. As soon as you claim this right, you abandon all possible support it can give you in battling all kinds of silly mistakes.


I'm not an "old-school lisper", but I have done some Lisp development related to embedded systems. The phrasing may be somewhat unfortunate but it ultimately does describe how hardware works: you need to write something at a numerical address that you find in the reference guide or the datasheet. At the end of the line, you do have to forge a number into a pointer. In James Micken's words, "You can’t just place a LISP book on top of an x86 chip and hope that the hardware learns about lambda calculus by osmosis."

"Higher-level" languages (like Lisp) can help you in other regards. They can include proper primitives (or give you the proper tools to write the proper primitives yourself) to ensure that direct-to-hardware access is safe, can give you better tools to manipulate DMA buffers (e.g. proper support for coroutines when manipulating double-buffered data is cool), can help you write more generic state-applying and state-reading code (i.e. code for "turn these bytes from this circular buffer into this structure" or "take this structure that represents a peripheral's current config and use it to write the proper magic numbers in the right registers"). They can offer you better semantics for translating human-readable(-ish) configuration into the right stream of bytes, or manipulate peripherals in real-time. They can help you write a better VM (or their runtime can outright be the better VM you need).

But at the end of the day, writing to the config space of a memory-mapped device is still going to consist of taking a number (or building it from several numbers) and writing bytes at that address, no matter what language you're using. Better runtime support for safe operation under this scenario, better semantics -- they're all important, but what the author expresses is not C-centric in any way.
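
To make that concrete, here is a minimal C sketch of what "forging a number into a pointer" looks like for a memory-mapped register; the address and register name are made up for illustration:

    #include <stdint.h>

    /* Hypothetical data register address, the kind of number you copy
       out of a datasheet. Made up for illustration. */
    #define UART0_DATA 0x4000C000u

    static void uart_put_byte(uint8_t b) {
        /* Forge the integer into a pointer and store through it; volatile
           tells the compiler the write has a side effect it cannot elide. */
        volatile uint32_t *reg = (volatile uint32_t *)UART0_DATA;
        *reg = b;
    }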


Pointers are not even necessary for writing operating systems or memory allocators. The Oberon system writes those low-level components using intrinsic peek/poke functions (like some ancient BASIC!) that are recognised specially by the compiler and turned into direct memory modifications. This is clearly just as unsafe as pointers (probably more so), but it means you don't generally pollute the language to support some very specialised code. The rest of Oberon is memory-safe and garbage-collected (with the garbage collector also written in Oberon).


> The Oberon system writes those low-level components using intrinsic peek/poke functions (like some ancient BASIC!) that are recognised specially by the compiler and turned into direct memory modifications. This is clearly just as unsafe as pointers (probably more so), but it means you don't generally pollute the language to support some very specialised code.

How is "val = PEEK adr" and "POKE adr, val" any less polluting than allowing "val = &adr" and "&adr = val" in select places?


It makes it easier to analyze code (i.e. greppable), and it allows for a distinction between operations that do direct memory modification and regular application code (pass by reference is accomplished with an entirely different syntax in Oberon, same for pointer types).
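
For illustration, a rough C analogue of that idea (the names here are made up, not Oberon's actual SYSTEM module): every direct memory access funnels through two helpers whose names you can grep for, while ordinary pass-by-reference code looks nothing like them.

    #include <stdint.h>

    /* Every raw memory access in the code base goes through these two
       helpers, so `grep -rn 'PEEK\|POKE'` finds all of them. */
    static inline uint32_t PEEK(uintptr_t addr) {
        return *(volatile uint32_t *)addr;
    }

    static inline void POKE(uintptr_t addr, uint32_t val) {
        *(volatile uint32_t *)addr = val;
    }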


It shouts at you: "Take care, something dangerous is going on".

Hieroglyphs are easy to pass by.


Lots of languages (like C#) allow pointers, but only inside blocks explicitly declared to be unsafe.

    void DoSomeUnsafeStuffHere()
    {
        // regular code here. pointers verboten.
        
        unsafe
        {
             // pointery stuff here
        }
    }
I think that is equally clear, if not more so. And again: I don't think this is any more polluting than weird out-of-place PEEK/POKE statements.


That is true, my point was that for C and C++ that is not clear at all.

You cannot simply search for & and * regarding point operations because those are also valid numeric operators, thus with context dependent meaning.

Being able to search for PEEK/POKE, SYSTEM.PUT/SYSTEM.GET, or just unsafe blocks makes all the difference tracking down unsafe code.

Also note that unsafe code blocks are almost 10 years older than C, as ESPOL/NEWP already had such a feature in 1961.


> You cannot simply search

Which is one of the reasons why in the modern world (of fast CPUs and large RAM) code should be edited using editors that "understand" the language's syntax.


What makes it unclear is exactly this 'pointery stuff'. There's nothing inherently unsafe about strongly typed pointers, but that's not the same as direct memory modification (which is obviously unsafe).


Strongly typed pointers are just another name for direct memory access.

The following is perfectly typesafe:

    SuperClass[] items = GetArrayOfSubClass();
Now if we try to do unsafe things to items, we may end up accidentally accessing memory we shouldn't.

    unsafe
    {
        for (int i=0; i < items.Length; i++)
        {
             var currentItem = &items[i*sizeof(SuperClass)];
        }
    }
This may or may not work, based on the runtime memory-layout of the subclass.

Basically the presence of unsafe {} means that beyond this point correctness cannot be guaranteed by the compiler. That's in fact what the keyword is for.

And that's the marker you are looking for. No need to go digging deeper into the code looking for the actual pointers themselves.


It is as unsafe as pointers, if those PEEK and POKE operations don't perform any checks. But there's no reason to allow random PEEKs and POKEs from random processes. The system process that manages memory may be given the access rights to PEEK and POKE the memory manager registers, but not the IDE registers. And application processes won't have the right to PEEK and POKE anything.


As someone who likes Fortran90+ for numerics in HPC, I'd put it this way:

* Pointers are required mainly so you can do pointer swaps and be sure there is no unnecessary allocation (in simulations this often means being sure there's no 10-100GB allocation each timestep, which is where it really matters).

* Pointer math is unnecessary and only there because C has no/poor support for multidimensional arrays. Use a reasonable language like Julia or Fortran instead. In fact, the potential for pointer math has a big negative impact on performance since compilers have to assume aliasing. The __restrict solution is clunky in C and completely non-standard in C++; Fortran does not require this at all (no aliasing allowed except where explicitly stated). (See the C sketch after this list.)

* For almost all use cases (except swaps) a semi-managed approach like Fortran's allocatables or Objective-C's ARC or even retain/release seems to me the most straightforward.
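
A minimal C sketch of the first two points above (the buffer handling is hypothetical): the pointer swap replaces a huge per-timestep allocation with an O(1) exchange, and restrict gives the compiler the no-aliasing promise that Fortran makes by default for dummy arguments.

    #include <stddef.h>

    /* Double-buffered time stepping: compute into `next`, then swap the
       pointers instead of reallocating or copying the whole field. */
    static void swap_buffers(double **cur, double **next) {
        double *tmp = *cur;
        *cur = *next;
        *next = tmp;
    }

    /* C99 restrict: the compiler may assume x and y do not alias, enabling
       the kind of vectorization Fortran gets without any annotation. */
    static void axpy(size_t n, double alpha,
                     const double *restrict x, double *restrict y) {
        for (size_t i = 0; i < n; i++)
            y[i] += alpha * x[i];
    }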


> Pointers are required mainly so you can do pointer swaps and be sure there is no unnecessary allocation

Many languages are getting around this issue by hiding pointers in their type system, performing copy-on-write, and taking the decision of whether allocations are performed on the stack or heap out of the hands of the programmer.


> out of the hands of the programmer

see, and that's where these languages become really clunky for HPC purposes. A compiler/runtime with HPC support (and I think also systems programming) should provide (a) performant defaults with safety as a second (but still high) priority and (b) the ability for programmers to go and set things a certain way when it is clear to them how things should be implemented on the machine.

Otherwise, in large applications, there's just too many moving parts that the compiler (optimization) can mess up. As long as we have no AGI baked into compilers it's an illusion to think that compilers will do the job of HPC engineers, so if you take away these tools we just have to look elsewhere.


Have you looked at Chapel?


Not yet, thanks. How does it compare to Julia?


Different goals.

Chapel is a strongly typed language designed from the ground up to replace C++ and Fortran for HPC programming in cluster environments, while removing the typical unsafe features from C and C++.

Initially designed by Cray; Intel has also given a helping hand.

My experience is only from a language geek's point of view, reading the papers.

Chapel Implementers and Users Workshop 2018 papers

https://chapel-lang.org/CHIUW2018.html


> strongly typed language designed from the ground up to replace C++ and Fortran for HPC programming ...

sounds like Julia to me ...

> ... in cluster environments

this is where it may be differentiating itself; AFAIK Julia has not been very focused on clustering from the start, but I'm assuming it's now getting better.


Julia works great on clusters, I use it all the time. It's learning a lot from Chapel too. Chapel has an amazing parallelism model, and a lot of Julia's parallelism is learning from it, including the near future hierarchical task-based multithreading. Julia was designed for interactive usage that can generate fast code, while Chapel was designed for the more traditional HPC usage. With more and more people using interactive languages on the HPC to avoid the translation step of the two-language problem though, Julia seems to be eating into the domain where more hardcore people used to write C++ and Fortran with MPI (I was one of those for a bit!). But while "no need to rewrite code, just take your interactive script and throw it on the cluster" is right for some, other applications will want a statically compiled language with a strong parallelism model which is Chapel.

The issue I see with Chapel though is that HPC is such a small community compared to larger scientific computing, and Julia has such a good design that it is very easy to improve (using Julia code!) with a large developer group (currently 736 contributors, which is quite a bit more than CPython has ever had, and Julia is a lot younger), so Julia is a lot more nimble. So when Chapel gets a good idea, sure enough there's an MIT programming languages or HPC fellow who proposes it in Julia, and so you get stuff like https://github.com/JuliaLang/julia/pull/22631 soon after. That doesn't mean Chapel is bad, but Julia already has the largest contributor base of any of the open source scientific computing languages still being developed, and that manpower is definitely useful for keeping up with the current research trends (working on Julia is a good way to get a math/CS PhD at MIT :) )


Hasn't Julia's focus been mostly MATLAB folks?

In any case, the more the merrier. :)


I think the idea is for it to be both machine friendly and MATLAB/python users friendly, by bringing the best of Fortran into the modern world and with a good type system. The types are very machine oriented but with nice abstractions to easily allow polymorphism.


A pure Julia program got >1 petaflop on Cori, so it's quite in the HPC realm.

https://juliacomputing.com/case-studies/celeste.html


I am not sure what you mean; it's not like we cast integers to pointers by hand, is it? The compiler does the casting. And sure, there is usually no support from the compiler when you are programming any device other than the CPU, but these devices need to be programmed. For you to read this message somebody had to program the display device, and for you to even be able to access this web site somebody had to program a whole bunch of things, starting from your NIC, going through various routers, and to the different NIC on HN's server. No compiler has been able to do this so far.


Every operating system I've worked on has had a few lines of relatively isolated code for this sort of thing, even some crappy RTOSes; it seems it is natural to want to abstract it. You need to forge a number into a physical address pointer, not just a pointer. Look at some Linux drivers: they will have some offsets defined, but they don't just create a void* and assign it to a memory location; they use macros and helper functions to fix things up. They can even check if the address is in reserved IO spaces and do things like that. There are also helpers to write the bytes to those addresses. It's a tiny amount of code that is usually already abstracted.

I don't know. It seems like a lot of kernels do extra things so driver writers can write fairly normal-looking C and produce fairly safe drivers without just arbitrarily assigning pointers addresses from numbers. And even if you use C as your comparison, those helpers will often be written with some assembly language (read: C isn't completely good enough for the task all by itself). I don't know of it existing, but a kernel in Go with ref-counting GC and a handful of cgo primitives seems very possible and maybe even delightful to work on.
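
For example, a kernel-context sketch of the pattern described above (the device, offsets, and values are made up; it's not a standalone program): instead of hand-forging a void*, a Linux driver maps the register window with ioremap() and goes through the readl()/writel() helpers.

    #include <linux/io.h>
    #include <linux/types.h>

    #define MYDEV_CTRL   0x00   /* hypothetical register offsets */
    #define MYDEV_STATUS 0x04

    static void __iomem *mydev_base;

    static void mydev_enable(phys_addr_t phys)
    {
        /* Map the physical register window rather than casting a raw number. */
        mydev_base = ioremap(phys, 0x1000);
        if (!mydev_base)
            return;
        writel(0x1, mydev_base + MYDEV_CTRL);    /* poke the control register */
        (void)readl(mydev_base + MYDEV_STATUS);  /* read back the status */
    }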


The author's point that functional programming principles are important to systems programming and should be taught alongside it is well taken. I think he confuses the issue somewhat with all the discussion of what is a "systems programming language" as these days it's just used to refer to languages substitutable for C.


Systems programming is definitely a bit overloaded these days, though I believe it is now widely understood to mean languages like C/C++ that can be used for OS, kernel, and embedded development.

I would agree though that the language ecosystem is shifting a lot in the last few years. IMHO there are now a few languages that are becoming proper full stack languages in the sense that they scale from embedded all the way to browser development. Rust is a great example.

I'm really impressed with what the Rust community has done in just a few short years. All my life, C/C++ was the only game in town for the stuff that is now done with Rust with arguably the same or in some cases slightly better performance.

Stuff like WASM allows the use of systems programming languages in places where they would not be used in the past. Using Rust compiled to WASM makes it a proper fullstack language. And people are using it for that, as well as for server development and OS development.


> IMHO there are now a few languages that are becoming proper full stack languages in the sense that they scale from embedded all the way to browser development.

I think you're being a bit optimistic here: traditionally, the only language that has been able to do this is C++, and Rust is displacing it simply because Rust really tries to target C++ developers and their pains. But other than that I don't see much else development here. Swift has promise, but unfortunately nobody's really writing much framework or foundational code with it; at least not yet.


> the only language that has been able to do this is C++

I'd say Object Pascal is able to do it too, though it's certainly true that Pascal is not as widely used as C++. Free Pascal and Delphi are the two major Pascal implementations today:

https://www.freepascal.org/

https://www.embarcadero.com/products/delphi


That Borland management board.... :(


Rust is nice but I wouldn't call it full stack because I don't think you should choose it in situations where you can afford a garbage collector. In those cases there are much more ergonomic options available.


Well, I'm seeing people doing web apps in Rust and doing operating systems as well. Certainly the Rust community has been all over wasm and rust tooling for that is relatively mature compared to other languages. Precisely because it doesn't need a garbage collector and they don't have to wait for standard solutions for that to ship in the next year or so.

But I agree, languages like Swift and Kotlin are a bit more logical choices as fullstack languages. Both of those are also extending their reach. Particularly, Kotlin code is a very nice upgrade from JavaScript in terms of expressiveness, tooling, and safety. I'm using it on top of the JVM currently. People are doing Android with it, and the native compiler currently under development is explicitly targeted at doing native iOS and Android apps (i.e. not the current Java-based DEX VM common on Android).


It's more what you do with the language than the language itself.

E.g. when I extended the GINO-F drivers for the top-of-the-line HP plotter, that was systems programming, but the GUI system I built to analyse soil samples was not, even though they were both written in the same language.


Irrespective of language and framework, if the code needs to be aware of underlying hardware then it is systems programming. ex. kernel code, user-space drivers, databases, portions of cloud or server code which need to be hardware aware, etc. What golang targets, I think, should be more of middleware or service-level programming, whether its containers or servers.


Alan Perlis has a thoughtful way of expressing this: “a programming language is low level when its programs require attention to the irrelevant.”

At first I thought it was poking fun; now I see it as a comment about what is relevant at a given level of abstraction.


I like that line of thinking.

When discussing the suitability of different programming languages I always point to the problem at hand and identify the abstraction level the problem is at. What your problem requires you to care about and pay attention to tells you this.

Ideally, solving your problem would be a one-liner in an already existing DSL created for your specific problem (domain). In reality, this rarely happens and you must take your pick between everything from niche DSLs to assembly, but the key really is identifying that problem abstraction level.

I work in a mainly C/JS shop whereas I privately prefer Rust/ReasonML. Rust would be a great fit at work, but Go would also work nicely for tons of things we do. Alas, the inertia of organizations.


> Ideally, solving your problem would be a one-liner in an already existing DSL created for your specific problem

I like thinking about the practice of programming as creating a DSL in a language to solve your specific problem.


I'm curious where Erlang fits into this story. It was built to be a "systems" language but algorithms are tersely implemented (I forget exactly, but I remember reading about a study that an Erlang program is much shorter than the comparable C program), and has gradual typing of a sort. It also is garbage collected (I don't think you could "write your own memory allocator in it", as Andrei Alexandrescu described).


Nice article.

My view of Systems Programming will forever be colored by my start with IBM mainframes. The Systems Programmer maintained all the utilities that the mainframe provided, and also the frameworks that supported online programming.

I suppose today's equivalent is a combination of a sysadmin and an applications developer, providing the applications are things like container components, build pipelines, etc.


They call them devops these days.


A systems programming language can run on bare hardware by itself, or nearly so. It is acceptable to require a very small amount of assembly code, for example to implement something like memcpy or bcopy, or to provide atomic operations.

A language is disqualified if it requires an OS or if it requires code written in a different non-assembly language. Cheating, by adding that as a huge (impractical) amount of assembly, also disqualifies a language.

Funny story about gcc: The compiler demands a memcpy, which it sometimes uses for struct assignment. If you implement memcpy in the normal way, gcc will helpfully replace your implementation with a call to memcpy! Despite this, C is still a systems programming language.
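
For reference, the kind of memcpy a freestanding kernel provides itself might look like the sketch below; with default flags GCC can recognise the copy loop and "optimise" it into a call to memcpy, i.e. into a call to itself, and -ffreestanding or -fno-builtin (or -fno-tree-loop-distribute-patterns) is the usual way to stop that.

    #include <stddef.h>

    /* Naive byte-by-byte memcpy for a freestanding environment. Built without
       -ffreestanding/-fno-builtin, GCC may turn this loop back into a call to
       memcpy, i.e. infinite recursion into itself. */
    void *memcpy(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }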


> It is acceptable to require a very small amount of assembly code, for example to implement something like memcpy or bcopy, or to provide atomic operations.

There is some stuff missing in your list: preparing the stack pointer, address layout, etc. You also need tools to produce text and data sections that can be loaded at a specific address. Even if C is a low level language, there is still quite some stuff between it and the "bare hardware". In that sense, the only language that gives you enough control to produce the binary exactly in the form needed by the hardware (without relying on external tools or libraries) is assembly.


> You also need tools to produce text and data sections that can be loaded at a specific address.

Though, of course, most compilers provide extensions that make this task generally possible within C (using the loose definition of "I don't need any separate files or inline assembly, just attributes and flags").
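
For example (GCC/Clang-style attributes; the section name and values are made up), you can tag data with a named section in C and then pin that section to the required address from a linker script or a flag such as -Wl,--section-start:

    #include <stdint.h>

    /* Placed into a dedicated section; a linker script then locates that
       section at the address the hardware expects (values are hypothetical). */
    __attribute__((section(".vector_table"), used))
    static const uint32_t vector_table[2] = {
        0x20002000u,  /* initial stack pointer */
        0x08000101u,  /* reset handler address  */
    };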


So we're getting closer to it. The linker inputs and scripts are the only "real" systems programming language.


stack pointer is an implementation detail. The standard does not mention the word stack even once.


So then C is cheating by your definition, because it is impossible to implement ANSI C standard library without using Assembly or compiler extensions.


I did say a small amount of assembly was fine. By "cheating", I mean something like converting the perl interpreter to assembly code and claiming that perl thus doesn't require code written in some other language.

Leaving out libraries is normal for a systems programming language. It is fine unless the language entirely doesn't work without the libraries.


that's what -ffreestanding and others (-nostdlib) are for.

C is not cheating; the compiler assumes you're working on user-level stuff. If you want to write bare metal (kernel) code, you need to tell the compiler.


Right, as mentioned in my comment: compiler extensions.

Strictly speaking that isn't proper C as defined by ISO/IEC 9899:2018.


I don't think a single definition makes any sense now: decades ago you may have had a definition of a 'computer system' that describes every big program. Nowadays there are many sets of requirements depending on context, and each set of requirements implies its own constraints on memory management and scheduling, so that there is no one description of what a complex program would look like.


You’re HN famous, Will!

But I disagree with this:

> Companies like Dropbox were able to build surprisingly large and scalable systems on just Python.

Many of Dropbox’s services are written in Go, and Magic Pocket is written (partly, at least) in Rust. Earlier in their development, Dropbox relied on S3, which is obviously not in Python.

Fundamentally, I think the important part of the “production” aspect of “systems programming” is that it’ll be used a lot. That’s what drives the requirement for efficiency: if you’re 10-100x less efficient, someone else will be more cost effective / solve bigger problems.

As an additional example, GitHub was written in Ruby, but that’s fine because the underlying Git and filesystem manipulation is all in C. The same thing is sort of true of Facebook’s world of PHP: PHP is mostly a wrapper around C libraries. Until it got complex enough that they rewrote the language.

tl;dr: I don’t think systems programming needs to be “low level”, but production systems do need to be efficient. More powerful computing just moves that further to the right!


In the old days, computers that came from manufacturers like IBM or DEC were referred to as systems. The operating software for the system was referred to as "system software". People who created that software were called "systems programmers". Much of it invariably involved writing direct-to-metal/assembly code, which is why systems programming has been associated with low-level programming. People using this term to refer to the task of doing some complex application architecture are doing it wrong.


I think that it is not useful to build on Ousterhout's definition, as his definition of a systems programming language is mostly meant as a contrast to Tcl with its everything-is-a-string memory model (in that view one could very well conclude that Python is a systems programming language).


when I was programming in the 70s and 80s (in the UK), the term systems programmer applied to those people involved in the configuration, tuning, management of software such as CICS, MVS, and their equivalents on other hardware bases. I worked at Sperry Univac for several years and the "systems programmers" were the engineers who tweaked the OS, the TP schedulers, built the product installation scripts, designed the configs that controlled the software. It was a specific tradecraft suited to those who were more technically focused, remember that COBOL was the prevalent language for nearly all programming roles in the 70s and that there was no "web" per se. The term started to fade away roughly when PCs started entering into mainstream computing


If you are making system calls–without too much abstraction–you are doing systems programming.
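
As a toy illustration of that reading (Linux-specific, not meant as a definition), here's about the least-abstracted system call you can make from C, going through syscall(2) rather than the libc wrapper:

    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* Invoke the Linux write(2) system call directly on stdout (fd 1). */
        const char msg[] = "hello from a raw syscall\n";
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }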


I think the author makes fair points. My conclusion: Do the bit twiddling and strongly constrained parts in a low level language and the rest in a language that supports strong static typing and good interfaces.


... this deserves a supplementary question: what will get us beyond UNIX to the next platform, not just the next paystub?


Maybe something like Android, OS X/iOS, ChromeOS, UWP, Fuchsia, Unikernels, Language runtimes on Hypervisors.

And since I sense the question coming: Android, OS X/iOS, ChromeOS might use a UNIXy kernel, but its exposure surface to applications is so small that it can easily be exchanged for something else; it is just a matter of cost.

Which brings us to the next point: given the commodity of free UNIX-like OSes, maybe only a hardware reboot like quantum computers will actually do it.


> And since I sense the question coming: Android, OS X/iOS, ChromeOS might use a UNIXy kernel, but its exposure surface to applications is so small that it can easily be exchanged for something else; it is just a matter of cost.

Android, macOS, and iOS expose almost the full set of POSIX to the programmer. Now, you may argue that it goes mostly unused, but it is there, and there are people using it.


Android does not; use it at your own peril, as it is not part of the official APIs, and using unauthorized APIs will get your app terminated.

Using libc is not the same as POSIX.

https://developer.android.com/ndk/guides/stable_apis

https://developer.android.com/about/versions/nougat/android-...

https://android-developers.googleblog.com/2016/06/android-ch...

As for macOS, and iOS, if Apple removed POSIX how many apps written in Objective-C and Swift (not UNIX cli tools ported from Linux) would actually be affected?

Very few, because those written in C and C++ ported from other platforms are most likely making use of ANSI C and C++ standards.


Nowadays it broadly means programming in C++, dealing with different levels of system calls.


IDK about systems programming, but one thing I do know: Systems researchers = ^[AI researchers]


640x480, 16 colours, Ring 0.


Hm I really like the observation that "systems programming" is overloaded.

I gave my take here:

https://lobste.rs/s/jrzwgy/what_is_systems_programming_reall...

In short, I think shell is actually a more appropriate language for describing systems, rather than OCaml or Haskell! If that sounds weird, then I'll point you to the evidence of shell already being used for this purpose (e.g. the 40K lines of shell in the Kubernetes repo I mentioned).

Although, on reading the blog post a little more closely, we may not be agreeing precisely on the problem.

I should probably write my own blog post about this ... there are a couple of posts I linked that give some of the flavor.


It's simple:

Bugs in application code cause errors in that application.

Bugs in system code cause errors in a user application.

If bugs in your code cause problems only in your own code, it's a single application. If bugs in your code cause problems in other programs, it's a system of programs.


In one word - syscalls. What do I win?



