Hacker News
C Is Not a Low-Level Language (acm.org)
472 points by jodooshi on May 1, 2018 | 316 comments

This article makes some valid points but is overall rather misleading, I think. Almost all of the reasons given why C is "not a low-level language" also apply to x86/x64 assembly. Register renaming, cache hierarchies, out-of-order and speculative execution etc. are not visible at the assembly / machine code level either, on Intel or other mainstream CPU architectures like ARM or PowerPC. If C is not a low-level language then a low-level language does not exist for modern CPUs, and since all other languages ultimately compile down to the same instruction sets, they all suffer from some of the same limitations.

It's really backwards compatibility of instruction sets / architectures that imposes most of these limitations. Processors that get around them to some degree like GPUs do so by abandoning some amount of backwards compatibility and/or general purpose functionality and that is in part why they haven't displaced general purpose CPUs for general purpose use.

I also initially had the impression that the article is misleading, but later on the author made the point that the C compiler is doing significant work to reorder / parallelize / optimize the code. I agree that x86/x64 is not a low-level language either, but even if it were, given the description the author provided, I'd agree with his point that C is not low-level.

Regarding cutting off backwards compatibility to improve the design, Intel's Itanium (affectionately called "Itanic") was a very progressive approach to shift the optimization work from the CPU (and the compiler) to just the compiler. I'm not sure what the reasons for its failing were, though.

I'm not an expert, but I don't think the optimization phase should really be considered here: the same kind of pattern matching used by (e.g.) LLVM to find optimizable sequences of statements could also be used by any assembler. NASM, for instance, offers some level of optimization, so I think optimization should only be considered when it's part of the language specification itself, like in Scheme.

IMO C is close to low-level because it's relatively easy to imagine the resulting unoptimized assembly given some piece of code (which is why some people joke about C being a macro assembler).

Maybe this old debate should get a slight update... and this could be the starting point: is modern x86 assembly still "low-level"? :)

> the same kind of pattern matching used by (e.g.) LLVM to find optimizable sequence of statements also could be used by any assembler.

Some of the simpler optimizations, sure. But modern backends do many incredibly sophisticated optimizations that are way beyond any kind of simple pattern-matching-and-substitution model.

Even fundamental "optimizations" like register allocation use quite sophisticated algorithms. Optimal register allocation is NP-complete, so compilers use heuristics on top of graph coloring algorithms to do their best.

Most other optimizations rely on type analysis, data flow analysis, liveness analysis, etc.

The K-graph coloring problem for register allocation is NP-complete, but in SSA form is actually linear. The tougher problem isn't the coloring but rather where to optimally place spills and fills around loops and calls.


Yeah, the number of "registers" needed isn't really NP-complete the way that the number of colors in a graph is. It's just the maximum number of live values, which, while not exactly trivial, is not that hard to figure out.

It would be interesting to revisit a world where languages/compilers were built explicitly with common memory access semantics/out of order op/etc in mind.

One of the things that really excites me about Rust is that its single-mutable-reference enforcement means you could apply `restrict` 100% of the time if you wanted, which is a non-trivial performance boost. I think it's not enabled today, but from previous discussions it sounds like that's just a matter of plumbing the right things through to LLVM.

Every time I've seen that rolled out in a C/C++ codebase someone invariably forgets about pointer aliasing and you spend a week tracking down some non-deterministic behavior.

My point was that since optimization is also possible in assembly, and since assembly is considered low-level, optimization per se shouldn't be used as something that characterizes high-level languages exclusively. But it is true that some abstractions used by high-level languages enable quite complex optimization techniques, so there is a clear correlation between the level of a language and the ability of its compiler to analyze and optimize programs.

"the same kind of pattern matching used by (e.g.) LLVM to find optimizable sequence of statements also could be used by any assembler"

Transmeta's x86 CPUs were even doing translations and optimizations dynamically between x86 and their internal representations.

> Transmeta's x86 CPUs were even doing translations and optimizations dynamically between x86 and their internal representations.

I think every modern x86 CPU is doing exactly that.

C cannot read the overflow bit after an ADD because of its abstractions... so I would say modern ASM is still lower in some respects, because it has fewer constraints and, more importantly, it is less expressive, which is the whole idea of this hierarchy.

The compiler should offer a macro for that. Then the question is whether to compare against the specification or the implementation, at which point it's an absurd question to begin with. You could compare -O0 binaries, bypassing the optimization question, too.

High/low is not fine-grained enough. IIRC, Prolog for example would be dubbed a fifth-generation language, after assembler, goto-hell macro compilers, structured/functional programming, and DSLs. Now Coq and the like seem to be of yet a higher order (pun intended, sorry).

> C cannot read the overflow bit after an ADD ... The compiler should offer a macro for that.

If you use GCC then __builtin_add_overflow() is what you are looking for:

"The compiler will attempt to use hardware instructions to implement these built-in functions where possible, like conditional jump on overflow after addition, conditional jump on carry etc."


Yes, I know that. You could add macros implementing a complete asm language to become a subset of C, and C would still be more expressive.

There was an article on Hacker News recently that covered some of the reasons for Itanium's failure to realize its theoretical benefits. I'm not finding it now, but IIRC, the argument made was that predicting likely-parallelizable code is actually a lot harder to do at compile time, and that, like so many ultra-optimized systems, the real world works much differently and a messier, more random approach ultimately yields far better performance.

Itanium suffered performance wise initially because they had trouble with compilers, but that's not the whole story. You also have to consider that AMD launched AMD64, which was backwards compatible, at about the same time. Later on the Itanium compilers got better, but on release it became a choice of "sluggish, incompatible and expensive Itanium with potential to perform well in the future" versus "backwards compatible, currently faster and cheaper x86_64." It didn't gain any real momentum to start because of this, which ultimately doomed it even when a lot of the issues were resolved later on.

> which ultimately doomed it even when a lot of the issues were resolved later on.

Was there ever a point in the Itanium's history where there were Itaniums that ran mainstream software with better performance than equivalently priced x64 processors?

There were hand-coded assembly loops that were 3-4 times faster than x86, using Itanium's predicates and rolling register windows.

But I guess you said mainstream. So unless you count database engines, I suppose the answer is "No."

Today you can get the same vector performance using SSE4 and AVX. Almost all of Itanium's good stuff has been rolled into Xeon.

As far as I know (which isn't very far, admittedly) they only really managed to reach parity with some performance gains over x86 in a few niches, but it's also a bit chicken-and-egg. It never had enough attention to really get the optimization and porting efforts it would have seen if it had been successful.

> the argument made was that predicting likely-parallelizable code is actually a lot harder to do at compile time, and that, like so many ultra-optimized systems, the real world works much differently and a messier, more random approach ultimately yields far better performance.

I am not an expert on computer history, but my feelings on the matter are as follows:

It's hard for certain domains, like handling millions of web requests. For most computational stuff where you're just blowing through regularly-shaped numerical computation (like for example ML, or signal processing), it's not that hard, but arguably the compilers of the time were still not quite up to it (there's a lot of neat stuff that's getting worked into the LLVM pluggable architecture these days). Of course ML wasn't really a thing back then, and Intel didn't seem interested in putting Itaniums into cell towers.

One way to think of the OoO and branch-predict processing that current x86 (and ARM) CPUs do is that they are doing on-the-fly re-JITting of the code. There is a lot of silicon dedicated to doing the right thing and avoiding highly costly branch mispredicts, etc. During Itanium's heyday, there was a premium on performance over efficiency. Now everyone wants power efficiency (since that is now often a cost bottleneck). Besides which, for other reasons Itanium wasn't as power efficient as the chosen architecture could ideally have achieved.

>the argument made was that predicting likely-parallelizable code is actually a lot harder to do at compile time

So don't do it at compile time? That's really a very weak argument against the Itanium ISA, and honestly more of an argument against the AOT compilation model. Take a runtime with a great JIT, like the JVM or V8, and teach it to emit instructions for the Itanium ISA. (As an added advantage, these runtimes are extremely portable and can be run, with fewer optimizations, on other ISAs without issue.)

The problem, as always, is that nobody with money to spend ever wants to part with their existing software. (Likely written in C.) In 2001 Clang/LLVM didn't even exist, and I'm not familiar with any C compilers of the era that had so much as a rudimentary JIT.

There's not that much overlap between the kind of optimizations that JITs do and the optimizations that modern CPUs do. The promise of JITs outperforming AoT compiled code has never really materialized. The performance advantages of OoO execution, speculative execution, etc. are very real and all modern high performance CPUs do them. Attempts to shift some of that work onto the compiler like Itanium and Cell have largely been failures.

Arguably the "sufficiently advanced compiler" (cue joke) has arrived (sadly, post-Itanium and post-Cell) in the form of a popularized LLVM[0], so it's improper to claim failure based on two aged data points.

The flaws of OOO and SpecEx are evident with the overhead required to secure a system (spectre, meltdown) in a nondeterministic computational environment, and there is certainly a power cost to effectively JITting your code on every clock cycle.

As the definition of performance is changing due to the topping out of Moore's law and the shift in parallelism from Amdahl to Gustafson, I think there is a real opportunity for non-OoO, non-SpecEx designs in the future.

OoO and speculative execution are largely improving performance based on dynamic context that in most real world cases is not available at compile time. They are able to do so much more efficiently than software JITting can due to being implemented in hardware. There is still no sufficiently advanced compiler to make getting rid of them a good strategy for many workloads.

Most of what OoO and speculative execution are doing for performance on modern CPUs is hiding L2 and L3 cache latency. On a modern system running common workloads it's pretty unpredictable when you're going to miss L1 as it's dependent on complex dynamic factors. Cell tried replacing automatically managed caches with explicitly managed on chip memory and that proved very difficult to work with for many problems. There's been little investment in technologies to better use software managed caches since then because no other significant CPU design has tried it. It's not a problem LLVM attempts to address to my knowledge.

Other perf problems are fundamental to the way we structure code. C++ performance advantages come in part from very aggressive inlining but OoO is important when inlining is not practical which is still a lot of the time.

My point is that the dominant software programming paradigm is migrating away from highly dynamic to highly regular. A good example is Machine learning, where for any given pipeline, your matrix sizes are generally going to stay the same. A good compiler can distribute the computation quite well without much trouble, and this code will almost certainly not need SpecEx/OOO (which is why we put them on GPUs and TPUs). Or imagine a billion tiny cores each running a fairly regularly-shaped lambda.

Sure some things like nginx gateways and basic REST routers will have to handle highly dynamic demands with shared tenancy, but the trends seem to me to be away from that. As you say, this is all dependent on the structure of code; and I think our code is moving towards one where the perf advantages won't depend on OoO and specex for many more cases than now.

This might be true for some domains but it's far from true for the performance sensitive domains I'm familiar with - games / VR / realtime rendering. The trend is if anything the opposite there as expectations around scene and simulation complexity are ever increasing.

Actually, if you read IBM's research papers on RISC, their PL/8 compiler toolchain was pretty much what LLVM looks like, just in the '70s.

no doubt, but popularity and timing matters.

> The promise of JITs outperforming AoT compiled code has never really materialized.

Well JITs do actually outperform AoT compiled code today. Java is faster than C in many workloads. Especially large scale server workloads with huge heaps.

Java can allocate/deallocate memory faster than C, and it can compact the heap in the process which improves locality.

I haven't seen this convincingly demonstrated. Can you point to good examples? The few times I've seen concrete claims, they are usually comparing Java code with C code that no performance-oriented C programmer would actually write. In certain cases Java can allocate memory faster than generic malloc, but in practice in many of those cases a competent C or C++ programmer would be using the stack or a custom bump allocator.

In practice it's quite hard to do really meaningful real world performance comparisons because real world code tends to be quite complex and expensive to port to another language in a way that is idiomatic. My general observation is that where performance really matters to the bottom line or where there is a real culture of high performance code C and C++ still dominate however. This is certainly true in the fields I have most experience in and where there are many very performance oriented programmers: games, graphics and VR.

This argument has been made since the introduction of the JVM in the early-to-mid '90s.

Seems to me like if, in practice, JIT provided better performance then by now people would be rewriting their C/C++ code in Java and C# for speed.

Most importantly people would write JITers for C and C++.

It might still be possible. The JVM and .NET both have their speed annihilated by their awful choice of memory model.

What are some languages that have a better memory model and work faster with a JIT rather than an AOT compiler?

For that matter, does Java code execute faster or slower with an AOT compiler than with HotSpot? I did a quick Google search but couldn't find an answer, except for JEP 295 saying that AOT is sometimes slower and sometimes faster :(

What's wrong with their memory model? Honest question.

The JVM lacks structs and, more specifically, arrays of structs as a way to allocate memory. This causes extreme bloat due to object overhead, as well as a ton of indirections when using large collections. The indirections destroy any semblance of locality you may have thought you had, which is the absolute worst thing you can do from a performance perspective on modern processors. What people end up doing instead is making parallel arrays of primitives, where there is an array for each field. This is also not ideal for locality, but it's better than the alternative since there isn't a data dependency between the loads (they can all be done in parallel).

I am not that familiar with the C# runtime, and I know C# has user-definable value types, but I'm not sure what their limitations are.

There's a proposal to fix this by adding value types to the JVM. It's part of something called “Project Valhalla”.


In a nutshell: Too much pointer chasing. C# actually does much better than Java here, with its features for working with user defined value types, but it could still improve by a lot.

In addition to what others have mentioned, there's also the inability to map structures on to an area of memory. The result is that you end up using streams and other methods to accomplish the same thing, and they result in a lot of function/method overhead for reading/writing simple values to/from memory.

Garbage Collection is a very consequential design decision. To free that last unused object is going to take O(total writable address space) memory bandwidth.

> Seems to me like if, in practice, JIT provided better performance then by now people would be rewriting their C/C++ code in Java and C# for speed.

It's a little bit faster, not faster by enough to matter. If you're going to rewrite C/C++ code for speed you'd go to Fortran or assembler, and even then you're unlikely to get enough of a speedup to be worth a rewrite.

New projects do use Java or C# rather than C/C++ though.

"New projects do use Java or C# rather than C/C++ though."

But not for speed reasons. Java is in no way faster than well written C/C++

X is not faster than well written Y, for all X and Y; that's not a particularly useful comparison though. I've seen a project pick Java over C/C++ because, based on their previous experience, the memory leaks typical of C/C++ codebases were a worse performance problem than any Java overhead.

Well written Java is sure to be slower than well written C/C++.

Happy? ;)

But yes, the point you make is valid: it is much harder to write C/C++ well, because of the burden of memory management. So if you lack the time or skilled people, it might make sense to choose Java for performance reasons.

Java might not be, but C# is another matter.

Especially after the Midori and Singularity projects, and how they affected the design of C# 7.x low-level features and the UWP AOT compiler (shared with Visual C++).

Also Unity is porting engine code from C++ to C# thanks to their new native code compiler for their C# subset, HPC#.

The discussion was about JITs vs AoT compiled native code. Unity is not using a JIT runtime for their new Burst compiler but using LLVM to do AoT native compilation and getting rid of garbage collection. If you get rid of JIT and garbage collection then yes, a subset of C# can be competitive in performance with C++ for some uses.

JIT vs AOT is an implementation detail, nothing to do with a programming language as such, unless we are speaking about dynamic languages, traditionally very hard to AOT.

In fact C# always supported AOT compilation, just that Microsoft never bothered to actually optimize the generated code, as NGEN usage scenario is fast startup with dynamic linking for desktop applications.

While on Midori, Singularity, Windows 8.x Store, and now .NET Native, C# is always AOT compiled to native code, using static linking in some cases.

As for GC, C# always offered a few ways to avoid allocations, it is a matter for developers to actually learn to use the tools at their disposal.

With C# 7.x language features and the new Span related classes, it is even easier to avoid triggering the GC in high performance paths.

For someone who doesn't develop for the MS stack but is still curious, what are these ways to avoid allocations and GC in performance-critical paths?

Nah, nobody in their right mind would use Java/C# over C/C++ for performance...


That’s a great blog post!

> I agree with Ousterhout's critics who say that the split into scripting languages and systems languages is arbitrary, Objective-C for example combines that approach into a single language, though one that is very much a hybrid itself. The "Objective" part is very similar to a scripting language, despite the fact that it is compiled ahead of time, in both performance and ease/speed of development, the C part does the heavy lifting of a systems language. Alas, Apple has worked continuously and fairly successfully at destroying both of these aspects and turning the language into a bad caricature of Java. However, although the split is arbitrary, the competing and diverging requirements are real, see Erlang's split into a functional language in the small and an object-oriented language in the large.

I still strongly think Apple is taking the wrong approach with Swift by not building on the ObjC hybrid model more.

Your article is correct that Java/C# performance is unpredictable. But, per the OP, C/C++ performance is also unpredictable, because C/C++ doesn't reflect what a modern processor actually does; there are cases where e.g. removing a field from a datastructure makes your performance multiple orders of magnitude worse because some cache lines now alias.

> New projects do use Java or C# rather than C/C++ though.

Nobody is picking Java/C# over C/C++ for performance reasons.

It is not that Java or C# are able to beat C and C++ on micro-benchmarks; rather, they are fast enough for most tasks that need to be implemented, while providing more productivity.

The few cases where raw performance down to the byte level and millisecond matters are pretty niche.

I've seen a project pick Java over C/C++ because of the memory leaks they saw in the latter in practice. You can call that a correctness issue rather than a performance issue if you like, but the practical impact was the same as a performance problem.

One of the big problems with predicting what can be MIMDed is that almost all the languages we use, except for Haskell, allow for dependency on who knows what. Without a very strict refusal of state, it's hard as fuck to figure out what is independent of what at compile time.

Not that it can't be done, so much as that getting programmers to accept it can't be done.

Part of it was the sort of giant blind spot that "C is a low-level language" creates, which made even Intel forget how many man-years of optimization were actually in C compilers, plus the attendant hubris of "oh, we can whip up something that'll beat it between now and when we start shipping the chips."

I think people often overestimate how smart the compiler is. DJB had a slide deck about this: http://cr.yp.to/talks/2015.04.16/slides-djb-20150416-a4.pdf Compilers are really useful... but, if it really has to be fast, you still need to have a human in the loop.

I was also under the impression that there hasn't been much improvement in compiling C/C++ in a long time. It would be interesting to compare the performance of gcc from 15 years ago versus gcc today, on a real world piece of code. I suspect you wouldn't see much difference (aside from the changes in C dialect over time), and some added features in the new version. Has anyone run this experiment?

I think you can make a good case that the failure of the Itanium (and other similar attempts to un-hide some of this stuff like the IBM/Sony Cell used in the PlayStation 3) was precisely because they tried to shift optimization work from the CPU to the compiler / programmer.

Funny, the compiler people I've worked with complain that Itanium tried to do too much in hardware, like the hw support for loop unrolling, which made superpipelining optimizations in existing compilers much more complicated.

Loop unrolling is one of those optimizations that actually highlights the need for dynamic CPU optimizations like out of order execution and speculative execution. It's very difficult to statically make a good decision about the optimal amount of loop unrolling to do, especially if you want to generate code that will continue to perform well on future CPUs using the same ISA. Even when targeting a specific CPU model it's difficult however since you don't know statically how many iterations of the loop you're expecting, what's currently in cache, what other code might be running immediately before or after the loop, what's running at the same time on other threads, etc.

Itanium's hardware did not make any of these things easier.

The compiler wasn't smart enough. There is information available only at runtime that can be used to get out of order execution and superscalar execution, and the compiler doesn't see any of it. More effort on standard architectures paid off.

Itanic failed because Intel/HP initially still used a front side bus memory architecture, which could not support the bandwidth necessary for peak performance on anything but matrix multiplication and other computations where most of the work is done in-cache. Then Opteron came around, with its faster memory, and Intel was suddenly thrown back into reality.

Itanic was also in-order (at least as far as dispatch), meaning anytime an instruction was stalled, so were all instructions in the same bundle or after it.

One "non-low-level" idea on Itanic, which never really panned out in practice, was for the assembler to automatically insert stop bits (;;) marking assembly-code "sequence points", instead of the programmer having to do it manually. But in practice, everyone did it manually, because they'd rather know how well their bundles were being used, and whether they could move instructions around in order to get the full 3 instructions / bundle (6 instructions / clock).

And explicit stop bits did not provide any advantage to future hardware by marking explicit parallelism, because at every generation everyone was concerned about obtaining maximum performance on the current machine, which involved shuffling instructions into 6-instruction double-bundles, often at the expense of parallelism on future implementations (which never went beyond two bundles / clock).

Not sure if facetious, but it failed because shifting that work caused it to be slow and not backwards compatible.

> the C compiler

You mean C compiler X has the feature of Y. There are lots of compilers and that's not part of the language.

Their comment makes sense if you interpret "the C compiler" to mean "mainstream C compilers that 99% people use in production" or even just "gcc and clang".

We're talking about concrete things in the real world here, not philosophizing about the language spec.

> not philosophizing about the language spec.

That's exactly what's going on.

And yet small-device C and GCC/Clang C are only kissing cousins, and we (well, people that aren't me: you all) still rhetorically lump them together as a single language.

> Register renaming, cache hierarchies, out of order and speculative execution etc are not visible

That's different from C.

In the history of x86, most new optimizations have preserved the semantics of code. For instance, register renaming isn't blind; it identifies and resolves hazards.

In C, increasing optimization has broken existing programs.

C is like a really shitty machine architecture that doesn't detect errors. For instance, overflow doesn't wrap around and set a nice flag, or throw an exception; it's just "undefined". It's easy to make a program which appears to work, but is relying on behavior outside of the documentation, which will change.

Computer architectures were crappy like that in the beginning. The mainframe vendors smartened up because they couldn't sell a more expensive, faster machine to a customer if the customer's code relied on undocumented behaviors that no longer work on the new machine.

Then, early microprocessors in the 1970's and 80's repeated the pattern: poor exception handling and undocumented opcodes that did curious things (no "illegal instruction" trap).

Then again store-load reordering is visible across cores and undefined.

> If C is not a low level language then a low level language does not exist for modern CPUs

I think that's a fair conclusion though, I don't think the article is misleading.

x86 assembly is a high level language. It's analogous to JVM bytecode. Modern x86 processors are more like a virtual machine for x86 bytecode.

>x86 assembly is a high level language. It's analogous to JVM bytecode.

If you take this position, then having the distinction between "low level" and "high level" languages becomes pointless, and we have no way to distinguish between languages like x86 assembly and C and languages like Python and Haskell. This is why we use the terms "low level" and "high level": some of these languages have a lower level of abstraction than others. The fact that it's not giving you a great idea of exactly what's happening in the transistors is irrelevant: "low" and "high" are relative terms, not absolute.

> The fact that it's not giving you a great idea of exactly what's happening in the transistors is irrelevant

The author's point is that what's happening in the transistors is relevant — not controlling it is what led to Spectre and heavy performance losses if you aren't smart about cache usage. Thinking of C as a "low-level language" makes it easier for people to overlook that fact.

And the parent's point is that there should be a distinction in level between Haskell and C, for example.

Then we need a finer-grained taxonomy instead of the current model, which is essentially: machine code; C/Forth; literally everything else.

I mean, that should be blindingly obvious anyway looking at the actual history of programming languages and CPUs, but here we are in 2018 insisting that we must have exactly three categories with exactly the definitions of: assembly; C/Forth; everything else.

Hmm, what does "virtual machine" mean if the "virtual machine" is implemented in hardware?

And, is nothing but actual binary machine code a "low level language"? I guess it's the lowest, I don't _think_ you can go lower than that... but someone's probably gonna tell me I'm wrong.

VHDL / Verilog is lower level!

In all seriousness, the "assembly is high level" argument is ridiculous and robs the "low level" vs "high level" categorization of all meaning.

Taxonomies have been rendered meaningless constantly throughout history, fighting it is usually fighting reality.

Perhaps C is a "medium-level language"? :)

It's a fair conclusion if you're willing to accept that the phrase "low-level language" has approximately zero modern examples.

Or maybe we should relax the definition of "low-level language" a bit?

I think the author of the article is willing to accept that, and it's part of his point. The point is not to label languages, but to illuminate how we ended up with our current combination of software and hardware design. He points out that we are in a local-maximum with respect to processor performance because most of the optimizations we've made in processor design the past few decades break the C abstract machine. That requires the compiler and processor to go through awkward contortions to present the fiction of that abstract machine while still getting good performance. The final section, "Imagining a Non-C Processor", is where he explores this idea.

Take a look at maxas [0] and the Maxwell/Pascal microarchitecture if you want a modern example of a low-level language (IIRC they are still manufactured on small mainstream process nodes).

[0] https://github.com/NervanaSystems/maxas

The JVM provides native primitives around memory and thread management which are not present in x86, but that's a matter of degree.

x86 provides all sorts of memory abstractions. The whole memory segmentation model is an abstraction. https://en.wikipedia.org/wiki/X86_memory_segmentation

It's analogous to JVM bytecode.

That's only true in the way everything is analogous to everything.

I think it's a good analogy because it's saying that nontrivial optimizations happen below the code layer.

In early CPUs, nothing was optimized. They just executed your instructions. Now there are nontrivial optimizations and rewriting, just like the JVM.

Nope, also in the way the parent explains right after that sentence.

It's a pretty terrible analogy in nearly every obvious technical way. What an x86 CPU does with its instructions is almost, but not entirely, unlike what a JVM does.

None of the "registers" in x86 assembly are real. Few of the instructions are implemented directly in hardware; most are implemented in software microcode.

Advanced hardware will re-order and pipeline instructions based upon data dependencies.

Sure it's not doing the exact same things the JVM does with bytecode but the point is that x86 assembly is not the language of the hardware. It's a language that the hardware+firmware knows how to interpret and optimize at runtime similar to what your JVM does with java bytecode.

It isn't at all similar. JVM bytecode is a pretty high-level IR for tokenized Java. The JVM's main unit of optimization is a method, not instructions. Its key component is a compiler, it's even called that.

An x86 cpu, as the article points out, spends inordinate resources looking for ILP. It's not a compiler in any reasonable sense of the word, while a JVM is.

The point I'm trying to make with the analogy is that the x86 instruction set is not representative of what the hardware is actually doing.

It is not "low level" because it is an abstraction or virtual platform that the processor exposes and then interprets using its own internal resources and programming interface (microcode). The x86 interface does not map closely to the actual hardware, just as the article states. It exposes a flat memory model with sequential execution and only a handful of registers.

Much the same way that the JVM exposes a virtual machine that doesn't directly map to any of the platforms that it runs on. It's an abstraction that is interpreted or compiled at runtime.

I don't understand why you think the two are so different just because the JVM is higher level.

Right, and the point I'm trying to make is that this is a pretty lousy analogy. For one thing, an x86 CPU is not nearly as VM-y as you make it out to be - renamed registers are very much real registers, and big piles of the most common instructions execute in 1 or 2 uOps. For another, the VM you've picked as an example is singularly un-CPU-like. C also exposes an abstract machine; would you use that as an analogy? Probably not.

'An abstraction that is interpreted or compiled at runtime' is so broad it's exactly what I said up top - it's analogous in the way everything is analogous to everything else. It's the sort of thing that might be true if you squint but offers somewhere between zero and negative insight.

I don't know I think you're getting caught up too much on the specifics of what they're doing.

CPUs are adopting JIT-like tendencies in order to increase performance: instruction reordering, register renaming, branch prediction, etc.

> if you squint but offers somewhere between zero and negative insight.

The insight I bring from this is that we should look at moving those features out of the hardware and into the software level. Let us take advantage of them in our compilers and virtual machines.

The JVM can beat C in many scenarios because it can make optimizations based upon runtime information that a static compiler will never have available.

Imagine what we could do if we weren't chained to the x86 abstractions.

OoO, renaming, branch prediction, microcode have existed for a long time. If anything, more modern CPUs (x86 included) are RISCer than the older generations which had extensive microcode expansion for each instruction.

Even ignoring the fact that the JVM is typed, memory safe and with builtin GC (all things that were tried architecturally in the past and abandoned), there is still a large difference between the scope and variety of non-local optimizations performed by any non-toy VM and the local, strictly realtime, constrained-to-a-small-window set of reorderings done by an OoO engine. Even tracing, which is used by some JITs, has been largely abandoned in the CPU world.

Transmeta and Denver-like dynamic translation is closer to the behaviour of a software JIT and it is certainly considered drastically different from mainstream OoO.

"x86 is an architecture hobbled by its legacy ISA, the CPUs are immensely complex VM-like dynamic beasts that hide the real CPU to get performance out of it" is one of those tropes that's inaccurate enough to have a small cottage industry of online pieces explaining the wrong bits. You can probably find highly rated SO answers or HN comments about it.

The long and the short of it is, an x86 CPU is not really VM-like and a JVM is decidedly un-CPU-like. The analogy only works if you generalize it so much it becomes a uselessly mushy tautology, or you ignore basic aspects of how each of these things works.

`javac` is a compiler. The JVM is an execution platform, just like modern x86.

What's the word that comes after 'JIT'?

What the word is isn't really important.

JITting is also called "dynamic translation", which is what a CPU does with microcode.

Whether that's a full compiler or not is beyond pedantic -- and irrelevant to the parent's point.

My claim was that the JVM has a compiler, and the parent is telling me it doesn't. It has a couple of full-blown compilers.

> which is what a CPU does with microcode.

Either you know something about current x86 CPUs that I don't or words and technical terms are, indeed, not important and have no meaning.

Microcode is the new low level language. ;-)

But, to be fair, C is not that low level. In fact, when I first learned it, it was considered a high-level language because CPUs we used it with didn't have functions with parameters, only subroutine jumps.

C reaches into the realm of low-level languages because it allows you to arbitrarily read from and write to the "state" of the context you live in, but it also allows you to express constructs that have no counterpart even on the most complex CPU architectures (even if they have things that disagree fundamentally with C's point of view).

> Microcode is the new low level language. ;-)

Except for RISC, that has mostly always been the case, when we look back at all those mainframes and their research papers.

On the flipside, doesn't it allow more or less direct memory access?

I mean, yes, no, it depends on what you mean?

You can write to null in C; your operating system rejects it. You can write past your allocated memory; your OS rejects it. It's not like just because it's written in C it gets to read the kernel memory - it's just that you can try.

All memory access in userspace is mediated by the MMU, so nothing in C gets "direct" memory access - but it does allow you to screw up your own memory space pretty well... I'm not sure that's C's fault though

Many C targets also run bare metal, there is nothing to reject there, unless the hardware has an MMU and the code bothered to configure it on boot.

That's my whole point. C doesn't try to stop you from doing that. C tries to do exactly what you ask it to, and if the OS doesn't allow it it just crashes.

To me that is about as low level as you can get without bypassing the OS.

Try using atomic instructions or initializing various tables during system boot. You will find that there are things C cannot do, unless you write assembly and call to it from C.

But is there really any programming language that lets you write to arbitrary memory? This is a problem of privileges rather than how low level your language is: it's not like you can write assembly that will give you arbitrary access to kernel memory.

In writing kernel modules in C for VxWorks, I find people quite often write/read data from hard-coded memory addresses.

>If C is not a low level language then a low level language does not exist for modern CPUs

correct, but that lack is not an argument for C being low level.

Correct - it's an argument for broadening the title.

Calling out a specific language as not being something leads one to ask, "Well what is?". In this case, there is no qualifying alternative, so the title might as well be, "There is no low-level language for CPUs".

> Calling out a specific language as not being something leads one to ask, "Well what is?".

The article does basically answer this. The last section is about what it would mean to design a chip such that a low level language were possible to design for it.

... any more.

It's not a category that can't exist, it just doesn't have any members right now.

It actually can't exist at all for current modern CPUs (x64/ARM/PowerPC) since they don't expose a programming interface for many of the things discussed in the article (speculative / out of order execution, register renaming, full cache control).

Yeah but it can for all sorts of microcontroller-y chips; embedded CPUs; DSPs; theoretical future CPUs that aren't trying to both act like a PDP11 and go faster every year...

I played a little bit with gpgpu on the raspberry pi.

I’d imagine it’s relatively primitive compared to whatever shaders are compiled to on modern GPUs, but it was humbling to have to manage things like separate, per-core, disjoint register files which can only be read 4 cycles after a write. The cores are heterogeneous, so there is special hardware for exchanging register reads between cores if necessary.

What language do you use for GPGPU?

Sounded like Videocore 4 assembler.

> Register renaming, cache hierarchies, out of order and speculative execution etc are not visible at the assembly / machine code level

Cache hierarchies are directly accessible with the CLFLUSH, INVD, and WBINVD x86 instructions; we may also count PREFETCHx, though they call it "a hint". The FENCE instructions touch even the multicore part of the system.

Many low-level CPU concepts leak into higher layers. A bright example is false sharing, which may manifest even in Java or C# programs.

x86 is also there on modern architectures to supply an abstraction that is increasingly divergent from the actual die hardware (but compatible with expectations of e.g. a C compiler that already has an output target for previous x86 hardware).

Modern Intel CPUs basically emulate x86; there are many layers of abstraction between individual opcodes and transistor switching.

By David's postulation, even the native assembly language for the CPU is not low level. See my other comment on the parent topic for justification.

Spend a week writing in assembly and you will never call C a low level language again.

I really really liked this article, and reading the comments here is blowing my mind. Did we read the same thing?

I think it's a strong insight that chip designers and compiler vendors have spent person-millennia maintaining the illusion that we are targeting a PDP-11-like platform even while the platform has grown less and less like that. And, it turns out, with things like Spectre and the performance cost of cache misses, that abstraction layer is quite leaky in potentially disastrous ways.

But, at the same time, they have done such a good job of maintaining that illusion that we forget it isn't actually reality.

I like the title of the article because many programmers today do still think C is a close mapping to how chips work. If you happen to be one of the enlightened minority who know that hasn't been true for a while, that's great, but I don't think it's good to criticize the title based on that.

As someone who does large scale computational work for a living, stuff like this is close to my heart. I often run into serious memory and run time constraints due to poorly written codes that have a rather dumb understanding of the real underlying machine implementation that modern processors actually have, rather than the imaginary PDP-11 that we've been brought up to believe in.

I wonder how much I could save (and how many more sims I could run) if my codes were rewritten in a language that has an abstract system that is much more cleanly and simply translated to what the computer actually does in 2018.

Actually, the author's argument about PDP-11 is interesting because C would have never been considered a low level language back then, for any platform.

Wiki definition, also what I was taught in my first CS class:

"A low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture—commands or functions in the language map closely to processor instructions. Generally this refers to either machine code or assembly language."

The term is evolving to match the time, as shown by the author's interpretation already being higher level than the original intention despite the goal of preventing exactly that.

> Actually, the author's argument about PDP-11 is interesting because C would have never been considered a low level language back then, for any platform.

Sure, agreed, but I don't think it's super interesting that words evolve in meaning over time.

What I find strange about the comments here is that some people think the article's title is bad even though my experience is that many people today do think "C is a low level language" is a reasonable thing to say.

I guess I'm arguing that it is a reasonable thing to say today, in the right context. When I talk to web developers who don't understand anything about computer architecture, for example, it's much easier for me to tell them I'm a low-level developer than to explain that I design digital hardware (FPGA) and write drivers and firmware to interact with it. But I do agree that it's important developers know that C doesn't correspond to the CPU in the way they might mistakenly think it does.

Heh. Yeah. Was programming ARM assembly in 1988. C compiler was far too high level. Look how its always saving these registers to memory! Meanwhile I'm dropping into FIQ mode just for the banked R8-R14.

Now, sure, people say C is low level, and compared to Java it sure is. But it isn't low-level.

I also really liked the article and found it thought provoking. This is all way above my pay grade but I like to think that there is a more optimal cpu design and language pairing that we will eventually reach and it's fun to imagine what that might look like.

Obviously, it would be very hard to shift the incumbent model in reality. We just have to look at the lack of prosperity of the Itanium and Cell processors to see how hard it is to achieve success. But imagine if new computer languages had been created just for these processors. Commercially this would make little sense, but it might be possible to create languages that fully used these processors yet retained simplicity for developers. Or maybe it isn't possible to beat the clarity of sequential instructions for human developers, or maybe out-of-order processing is the optimal algorithm. There are other changes coming too, such as various replacements for DRAM that integrate more closely with the CPU (such as 3D chips) [1,2]; by reducing the latency of main memory, these could actually bring us back closer to the C model of the computer, or just change computing entirely...

[1] https://www.extremetech.com/computing/252007-mit-announces-b... [2] https://news.ycombinator.com/item?id=16894818

It'd be an interesting exercise to ask the Clang folks to relax backward compatibility, thus designing a new language, and see if it makes their compiler go faster.

That could be deployed as a new language, or adding features to existing ones, like value types in Java, or even compiler switches that relax some C rules for faster speed. Imagine -fpointers_cant_be_cast_to_ints or -freorder_struct_fields.

It seems like the article is mostly useful for inspiring research; that is, most of us aren't the target audience.

I'm wondering what will happen as GPUs become more general-purpose. What's next after machine learning?

Would it be possible to make a machine where all code runs on a GPU? How would GPUs have to change to make that possible, and would it result in losing what makes them so useful? What would the OS and programming language look like?

> It seems like the article is mostly useful for inspiring research; that is, most of us aren't the target audience.

As a group of professionals, we benefit greatly from being interested in these things. People who design languages and compilers do it largely based on what is perceived as being demanded, and we as programmers are the ones who create the demand for new languages.

To put it in other words, if programmers aren't aware of what's going wrong with our current languages, they cannot express their need for new languages. So, there's less incentive for researchers to produce new ways of programming computers. It is much more tempting to "please the masses" in a way that causes this local-maximum problem. It's much more interesting to research problems that translate into mainstream use than academic things that nobody actually uses.

I still think a research project (either academic or in industry) with a hardware component would be the best way to explore radical new processor architectures that are further away from C.

- Hardware designers are conservative. They aren't likely to implement a different hardware architecture because "programmers demand it" (really?), unless there's existing research showing how it can be done and a compelling reason why customers will buy the chips.

- As a hobbyist language designer, I'm still going to target something that exists and is popular: x86, JavaScript, wasm, C, or something like that. A low-level language targeting a platform that doesn't exist isn't all that appealing. But, someone might get some good papers out of doing the research.

I think it is going the opposite direction, where cpus get more powerful graphics cards integrated more and more into them. This allows for matrix math to become a bit more standard in day to day programming.

However, in the opposite direction, where a GPU becomes more like a CPU: if streams could do some level of limited branching without slowing the whole thing down, it opens the door to threading frameworks and design patterns where you write a loop in code, every thread gets its own copy of memory, and on the threading front, it just kind of works for a lot of generic code.

Then if GPUs added some sort of pipelined summation-like instruction, for the cases in a loop where variables need to be shared, they could still be added, subtracted, multiplied, divided, or modded easily and quickly, allowing for what looks and acts like normal code today but is actually threaded. That would kind of bring code back to where it is today.

Who knows? It's kind of fun to speculate about though.

> cpus get more powerful graphics cards integrated more and more into them.

Actually the wheel of reincarnation seems to have stopped, at least for the time being. It seems that there is a fundamental, hard-to-reconcile disconnect between a latency-optimised engine like a CPU and a throughput engine like a GPU. Hybrid CPUs like Larrabee or extensions like AVX-512 do not seem to be enough.

Short term, probably the best we are going to see is separate CPU and GPU cores on the same die (or more likely just the same package), but even that is likely suboptimal.

Or, conversely, could the iPhone 11 have 20 cores that are optimised not for latency but for power-efficient execution, such as how much you can do before the device runs out of battery? These would still be ARM, and backward-compatible with mainstream languages like Swift, so will have a low barrier to entry.

Maybe apps will be allowed to run longer in the background, if there are always extra CPU cores available that don't consume much power.

I think the bigger take away for me is that what constitutes low level has evolved over time. I still think C is low level because you have to manage your own memory and can play tricks with pointers and memory that other languages protect you from, meaning that low level takes off a lot of the training wheels. The compiler still does what it can and optimizes things, but you have a far greater ability to shoot yourself in the foot than in some other languages. The arguments in these comments about what a "real" low-level language is seem mostly pedantic.

> I think the bigger take away for me is that what constitutes low level has evolved over time.

Yeah, this is a good insight. The height of the overall stack has grown. The lowest low level is lower than it was in the 60s and the highest high level is higher. So we need more terms to cover that wider continuum.

I agree that this article hits pretty hard against a lot of assumptions about how our machines are working.

I also feel like the author is trying to say something about how imperative scalar (meaning 'operates on one datum at a time') languages are causing more trouble than they're worth. Sophie Wilson said something similar in her talk about the future of microprocessors [1]. This implies that declarative and functional semantics would be more amenable to parallelization, as the author mentions in the article, as well as allowing the compiler more freedom to deduce a suitable 'reordering' of operations that would better fit the memory access heuristics the machine is using.

[1] https://youtu.be/_9mzmvhwMqw?t=26m30s

> cost of cache misses

How is the C memory model a leaky abstraction here? What better way do you suggest? Are we not fine coding sequential (in-memory) data structures in C?

C leads you to believe that memory access has uniform cost regardless of address. What is the perf cost of reading *foo?

Depending on what foo points to, and which memory you have previously read, the cost can vary by close to two orders of magnitude on many chips.

C does give you the ability to control those costs, by controlling how you lay out your data in memory and controlling imperatively in which order you access it. But the language doesn't show you those costs in any way.

> But the language doesn't show you those costs in any way.

I think it's pretty clear. Access memory sequentially, and you can expect to hit the cache. Access more memory than the cache size in a random order, and you can expect to pay memory access latencies (100s of CPU cycles).

I doubt you would be willing to manage the cache yourself in every line of code. That would be a lot of code. Some programmers might want to tune cache eviction behaviour by changing it in a few controlled points in time. But not in a way that couldn't be exposed to assembler/C. (I don't know if that's even realistic from a hardware architect's point of view).

> Access memory sequentially

Except memory is virtual. Memory location 0x1000 might be forward, or backwards compared to 0xFFF, depending on the state of the Translation-lookaside buffer (TLB).

Ever notice how (when ASLR is disabled) programs all start at the same location? (https://stackoverflow.com/questions/14795164/why-do-linux-pr...)

Hint: Virtual address 0x0804800 doesn't belong at physical address 0x0804800. The OS can put that memory anywhere, and the CPU will "translate" the memory during runtime.

This means that, in an obscure case, walking forward in a linked list (ie: node = node->next) may involve FIVE memory lookups on x86-64:

* Page Map Lookup

* Page Directory Pointer lookup

* Page Directory lookup

* Page Table lookup

* Finally, the physical location of "node->next".

An even more obscure case (reading, say, address 0xFFFC, unaligned) may require two sets of lookups, for a total of 10 memory lookups (the page-directory walk for the page containing 0xFFFC, and then the page-directory walk for the page at 0x10000).

There is a LOT of hardware involved in just a simple "node = node->next" in a linked list. Its not even CPU-dependent. Its OS configurable too. x86 supports 4kb pages (typical in Linux / Windows), 2MB Large Pages and 1GB Huge Pages.

> Except memory is virtual. Memory location 0x1000 might be forward, or backwards compared to 0xFFF, depending on the state of the Translation-lookaside buffer (TLB).

Does that matter though? I would assume that the "prefetcher" (or whatever it's called) can make its predictions in terms of virtual memory.

Regarding the linked list, it has been common wisdom for a long time that one should prefer sequential memory instead of linked lists. A nice benefit is that this simplifies the code as well :-). Growing and reallocating sequential buffers might not be possible in very dynamic/real-time and decentralized architectures - like a kernel - though.

> Does that matter though?

Yeah. Meltdown means that the OS (as a security measure) wipes away the TLB whenever you make a system call. So all of those cached TLB entries disappear every syscall due to Kernel Page-Table Isolation.

And then the CPU core has to start from scratch, rebuilding the TLB-cache again.

So last year (when we didn't know about Meltdown), a system-call was fast and efficient. This year, on Intel and ARM systems (vulnerable to Meltdown), system-calls are now forced to wipe TLB. (But not on AMD-systems, which happened to be immune to the problem)

Both AMD and Intel implement the x86 instruction set, and now the performance characteristics are different between the two boxes for something as simple as "blah = blah->next".


The important bit is that C is still quite "high level", and indeed, even Assembly language "lies" to the programmer through virtual memory. The OS (specifically page-tables) can interject and have some magic going on even as assembly-code looks up specific addresses.

The simple pointer-dereference *blah is actually incredibly complicated. There's no real way to know its performance characteristics from a C-level alone. It depends on the machine, the OS, the configuration of the OS (ie: MMap, Swap, Huge Page support, Meltdown...) and more.

And all that's without even getting into things like swap, or memory-mapped files. Or, to really go all-out, mmap() an NFS file - and you could find your mere pointer dereference waiting for the network.

How about a different example entirely? Memory nowadays is either CPU (and the GPU can access it) or GPU (and the CPU has a window into it). It's terribly important to use the right one: if a chunk of memory is mostly used by the CPU, it needs to be CPU memory, and if it's mostly used by the GPU, it needs to be GPU memory. But there's no good way to specify that.

You might argue that programming a modern computer is more like programming a tightly-bound, nonuniform multiprocessor system. And I'd agree. But C doesn't do much to help program such a thing.

Still, what you normally do and can assume is that you're using just "CPU" memory with pretty predictable (I think) latencies and a transparent caching layer.

When different things are mapped into the address space, that's an abstraction the programmer (or the user) consciously made. It should be possible to figure out the performance characteristics there.

Of course, many programs work on various machines with their own performance characteristics. You should still be able to optimize for any one of them by querying the hardware and selecting an appropriate implementation. If you want to put in the work.

I don't think assembler/C is such a big problem here. But then again, I'm not a low level guy (in this sense) for now.

It's worth noting that chips that were designed for high-performance computing (e.g. the Cell) from the outset generally don't have silicon devoted to things like out of order execution, register renaming, etc. In this case, the bulk of the optimization logic does shift to the programmer (aided by the compiler).

The reason is that in these domains (e.g. game consoles, supercomputing), you know ahead of time the precise hardware characteristics of your target, you can assume it won't change, and can thus optimize specifically for that ahead of time.

This isn't true for "mass-market" software that needs to run across multiple devices, with many variants of a given architecture.

> The reason is that in these domains (e.g. game consoles, supercomputing), you know ahead of time the precise hardware characteristics of your target, you can assume it won't change, and can thus optimize specifically for that ahead of time.

Cell was a failure in large part because this proved to be less true / less relevant than its designers thought.

Source: many late nights / weekends trying to get PS3 launch titles performing well enough to ship.

I did work in graduate school trying to make the Cell easier to program - basically, providing OpenMP-like abstractions that would take advantage of the SPEs. I've always been really curious: how much did your games take advantage of the SPEs? When did you send code to the SPEs versus using the GPU? Were you using libraries that helped managing the SPEs, or did you do all of it manually?

OpenMP is a bad approach for the types of problems commonly encountered in games and graphics programming in my experience. Matt Pharr's excellent series of articles on the history of ISPC gives some good explanations of what programming models actually work well for graphics particularly: http://pharr.org/matt/blog/2018/04/30/ispc-all.html

At the time I was doing most of my SPE work (helping to optimize launch titles at EA prior to the launch of the PS3) most titles weren't taking much advantage of them at all. We were a central team helping move some code that seemed like it would most benefit over, I was particularly involved in moving animation code to the SPEs. There weren't really any options for libraries to help at that point, other than things we were building internally, so it was almost all manual work.

Later on in the PS3 lifecycle people moved more and more code to the SPEs. To my knowledge most of that work was largely manual still. For a while I was project lead on EA's internal job/task management library which had had a big focus on supporting use of the SPEs but my involvement in it was mostly during the early part of the Xbox One / PS4 generation. The Frostbite graphics team in particular did a lot of interesting work shifting GPU work over to the SPEs (I think some of it they've talked about publicly) but I wasn't directly involved in that.

I completely believe you on OpenMP being bad for games and graphics programming; we were targeting the HPC community which had a heavy interest in Cell as well. But all the while, I knew a bunch of programmers out in the world were shipping Cell code, and I was always curious what their patterns were. Thanks for the answers!

The point of dynamic optimizations (such as ooo) is not only to hide implementation details (such as register file size) but very much to take advantage of dynamic opportunities that simply cannot be known statically. The optimal schedule can be very different depending on whether some load hit L1 vs L2 or even was forwarded from the store buffer.

There are some classes of very regular algorithm where you could probably predict everything (and handle the memory hierarchy) statically, such as GEMM, but it's not very common.

Yeah, this is a very important point that many people seem to be missing, including the authors of the original article it seems to me. It was certainly a big problem for performance of games on Cell in my experience.

The points made in the article are certainly valid, but C is low-level in an abstract sense: it is approximately the intersection of all mainstream languages.

I.e. if a feature exists in C, it probably exists in every language most programmers are familiar with. (I worded this statement carefully to exclude exotic languages like Haskell or Erlang).

Thus C, while not low-level relative to actual hardware, is low-level relative to programmers' mental model of programming. If this is what we mean, it's still true and useful to think of C as a low-level language.

That said, it's important to keep the distinction in mind -- statements like "C maps to machine operations in a straightforward way" have been categorically wrong for decades.

> if a feature exists in C, it probably exists in every language most programmers are familiar with.

I don't think that's true.

Off the top of my head, C has: array-to-pointer decay, padding, bit fields, static types, stack-allocated arrays, integers of various sizes, untagged enums, goto, labels, pointer arithmetic, setjmp/longjmp, static variables, void pointers, the C preprocessor.

Those features are all absent in many other languages and are totally foreign to users that only know those languages. A large part of C is exposing a model that memory is a freely-interpretable giant array of bytes. Most other languages today are memory safe and go out of their way to not expose that model.

> I worded this statement carefully to exclude exotic languages like Haskell or Erlang

I suspect that your definition of "exotic" is exactly "not like C".

Which is kinda true. Most popular languages are C like.

30 years ago the landscape looked quite different.

Which languages have pointer arithmetic, longjmp, goto anywhere within a function, address-of operator, memcpy that is equivalent to assignment even for lexical variables, untagged unions, switch with fall-through in absence of explicit break and a textual/token-wise preprocessor?

BLISS, Modula-2, NEWP, Mesa,...?

Of course, some of those tricks are only allowed in SYSTEM/UNSAFE blocks in these languages.

That was in response to the claim that if C has a feature, it "probably exists in every language most programmers are familiar with".

"Most programmers" are not familiar with these. (A bit sad in the case of Modula-2).

Which is a sad state of affairs; programming language history should be a required part of the curriculum.

Well, there is a big jump between knowing about historic programming languages and familiarity. I know a few things about SNOBOL, but I couldn't sit down and start coding in it without some ramp-up period.

One feature that many languages don't provide is the ability to have direct control over how aggregates are organized in memory.

> Thus C, while not low-level relative to actual hardware, is low-level relative to programmers' mental model of programming

Programmers' mental model of programming is not a homogeneous set. I'm pretty comfortable in LabView, for example; a language that is extremely parallel (the entire program is composed of a graph of producer / consumer nodes and sequential operation, if desired, must be explicitly requested).

It's crazy that C has changed because of the way people used it.

Well, crazy isn't the correct word... mainstream use has changed the future of an old language...

Going by their definition, I don't think there are any low level languages, at least on modern architectures. Even x86 assembly abstracts out a lot of what is going on within the CPU.

That doesn't mean the definition is useless -- rather than "C isn't a low-level language, as opposed to something else which is", the point might be "there exist no low-level languages according to most people's understanding of that term". Which is still an interesting and useful fact.

It also hides the fact that C is just a couple of notches above the absolute minimum most people would even consider - writing assembly code by hand - and is, effectively, the lowest level most programmers will ever venture to.

True, but one of the points the article makes is that in practice, there's a vast gulf of distance (person-years of C compiler development) between the C code one writes and the resulting assembly code output (and this is ignoring the fact that x86 assembly is, itself a co-evolved abstraction with C-like languages that is basically emulated on modern massively-parallel CPU architectures).

In that regard, a case can be made that when you're writing in C, you're writing exactly as close to the bare metal as if you're writing in, say, Go or Haskell.

> In that regard, a case can be made that when you're writing in C, you're writing exactly as close to the bare metal as if you're writing in, say, Go or Haskell.

No, you really can't. This is childish black and white thinking. The computational model of C is built on an interface exposed by the hardware. Go and Haskell build many additional abstractions on top of that same model.

This article could have had a fruitful discussion about what the author is trying to say, but by choosing such a clickbait title, he managed to turn it into a discussion on semantics that wants to deny useful distinctions, because in some context (not the context in which it's actually used), it doesn't fit.

This kind of linguistic wankery really pisses me off, because it's useless and rests on a misunderstanding of how people actually use language (which is to say, in context and often in relative terms).

But I believe that's the article's very point: the context and relative terms people often talk about C are incorrect. The amount of mutation delta between a C program and the corresponding assembly instructions is significant, but people continue to believe it is not, which results in all sorts of incorrect assumptions when reasoning about a C program (such as which line of code or statement executes "first").

Haskell, Go, et al. are understood to have complex runtime machinery atop the x86 instruction set. It's an erroneous belief that C does not (and one that I've seen developers get bitten by repeatedly as they try to manage threaded C code).

How many people have resorted to SIMD intrinsics because C wasn't low level enough?

I have used the SIMD instructions quite a bit. Even used FPGA's for some tasks.

> Which is still an interesting and useful fact.

I think it just leads to quibbling over the boundary of low level, as is happening here.

I think it's just important to know that the definition changes over time relative to the state of the art. C was once considered high level. In the future, if programming languages evolve to a more natural language state, then sending serial instructions to the computer in a strange code will seem very low level to such programmers.

Quibbling and language-lawyering aside, this is clearing up a real, fundamental misunderstanding that a lot of people have.

Anecdotally I have encountered loads of programmers who actually believe that there is a straightforward correspondence between C code and what the machine is actually doing, which is wrong.

So regardless of how you want to define "low-level", understanding this point is useful.

If you think of the hardware as a black box, it does correspond to the instruction set architecture presented. Generally you don't think about the implementation details of an op-amp when you wire it into a circuit. The issue is that CPUs have become so complex that the lines are blurred, since the external interface is so far removed from what's going on inside with microcode, out-of-order execution, and caches. So while, pedantically, there are no longer any low-level languages outside of microcode, that renders the term useless, and I'd prefer the natural language evolution that's occurring.

> actually believe that there is a straightforward correspondence between C code and what the machine is actually doing, which is wrong.

That really depends on your definition of "the machine". If "the machine" is "hardware", then sure. But if software is considered a piece of logic unto itself that, when pitted against a sound model of an architecture, will result in a series of logical steps, it's different: there is a very straightforward correspondence between the model-machine and assembly/C. Whether there is a 1-1 correspondence between the model-machine and any accidental hardware it is implemented on is not that relevant.

So if low-level is defined as the lowest level that any hardware abstraction functions exactly as its logical function and not its silicon, it tells you exactly what the machine should be doing, even if it's doing it through different means.

> there is a very straightforward correspondence between the model-machine and assembly/C

I believe one of the points the article is making is that, no, there is not. Pin down a C developer's actual understanding of how C is converted into x86 or x86-64 assembly, and you find that nobody actually has that abstraction riding around in their heads, because the abstraction they do have riding around in their heads would make for some unacceptably non-performant code. Even if we disregard the fact that x86 is emulated on modern hardware, the C -> (Clang + LLVM / gcc) -> assembly path is deeply complicated.

Assembly has been a fiction on most computers for a long time now. From the other side of the instruction decoder, they are more like VLIW machines than evolved 8080's.

Correct. For low-level language, we may actually want to look more in the direction of HLSL or GLSL.

I haven't done any shader programming, but can we even say that about those languages? It might just be my own inexperience talking, but for anything more complicated than matrix multiplication the innards of a GPU seem just as opaque as a CPU.

OpenGL started out as being a pretty high level language and 1.0 certainly doesn't map closely to modern hardware at all. But APIs and hardware have moved closer together over time, and stuff like CUDA and Vulkan use models that are a pretty close match to the hardware they run on. When writing CUDA you can reasonably figure the number of cycles an operation will take, and benchmarking will agree, unlike CPUs that have become so non-deterministic that they are much harder to reason about.

That said, I wouldn't look to those as examples for how to design a good "low-level" CPU language, as CPUs and GPUs solve very different problems.

I enjoyed reading this, mostly because it made me angry, then curious, then thoughtful all in one go.

Partly because I really like the PDP-11 architecture, and its 'separated at birth' twin the 68K; it greatly influenced me in how I think about computation. I also believe that one of the reasons the ATmega series of 8-bit micros were so popular was that they were more amenable to a C code generator than either the 8051 or PIC architectures were.

That said, computer languages are similar to spoken languages in that a concept you want to convey can be made more easily or less easily understood by the target by the nature of the vocabulary and structure available to you.

Many useful systems abstractions, queues, processes, memory maps, and schedulers are pretty easy to express in C, complex string manipulation, not so much.

What endeared C to its early users was that it was a 'low constraint' language; much like Perl, it historically has had a fairly loose policy about rules in order to allow for a wider variety of expression. I don't know if that makes it 'low', but it certainly helped it be versatile.

> A processor designed purely for speed, not for a compromise between speed and C support, would likely support large numbers of threads, have wide vector units, and have a much simpler memory model.

Sounds like a GPU?

> Running C code on such a system would be problematic, so, given the large amount of legacy C code in the world, it would not likely be a commercial success.

It seems like ATI & NVIDIA are doing okay, even with C & C++ kernels. GLSL and HLSL are both C-like. What is problematic?

C-like code that runs on GPUs is not even close to normal C, even though the syntax is similar. The way you lay out your memory, schedule your threads, and add memory barriers is completely different. You are never going to take a large piece of C code written for a CPU and just run it directly on a GPU.

Huh, that’s weird, I run a C++ compiler directly on my GPU code. The only difference between CPU and GPU code at the function level is whether I tag it with a __global__ macro or not, and lots of functions compile and run for both CPU and GPU.

Memory layout, thread scheduling, and barriers are not features of the C language and have nothing to do with whether your C is “normal”. Those are part of the programming model of the device you’re using, and apply to all languages on that device. Normal C on an Arduino looks different than normal C on an Intel CPU which looks different than normal C on an NVIDIA GeForce.

OK, I guess it comes down to what you call "normal" C. I was defining it as what would run on x86 Windows or Linux.

You can look at C++ AMP too, it runs with all GPUs that support DX11 on Windows, and is a part of the Windows SDK. It's implemented by AMD ROCm on Linux, which also implements HIP/CUDA. Normal C/C++ can run fine on modern GPU architectures.

NVidia designed their latest GPU architecture to run C++.

> Sounds like a GPU?

Which reminds me I'd love to see a computer running exclusively from a GPU-like CPU.

And no, Xeon Phi's don't count. They are cool, but look too much like normal PCs.

Here’s one: https://en.m.wikipedia.org/wiki/Cray-1

They didn’t call it a GPU then, but the SIMD architecture is quite similar at a high level.

Larrabee was going to be a GPU-like CPU. https://en.m.wikipedia.org/wiki/Larrabee_(microarchitecture)

Here’s a more modern GPU based computer: https://www.nvidia.com/en-us/self-driving-cars/drive-platfor...

If you meant something that sits on your desktop and runs Linux, then yeah it’s uncommon but not unheard of to run it on a SIMD system. The trend is absolutely definitely going toward SIMD being used in general purpose computing. Even if you don’t want to count any of my examples, you will see the “normal” PC become more GPU-like in the future than it is today.

> you will see the “normal” PC become more GPU-like in the future than it is today.

I keep telling people to get used to developing on Xeon Phis and nobody seems to listen ;-)

Today's Xeon Phi is tomorrow's Core i9.

I've only done a bit of GPU kernel writing, but I always found it very... unergonomic. It's like they mashed C to work in a context it wasn't meant for. Which is understandable, since you want to encourage adoption, but I'd guess it's part of the motivation behind creating SPIR-V and allowing people to target other languages to the GPU.

SPIR is a reaction to CUDA's adoption.

NVidia always allowed multiple languages on CUDA via PTX, with the offerings for C, C++ and Fortran coming from them, while some third parties had Haskell, .NET and Java support as well.

Yet another reason why many weren't so keen on being stuck with OpenCL and C99.

To me the argument's akin to suggesting that Robert Wadlow wasn't tall, because giraffes are taller than Robert Wadlow.

When the spectrum of the context is unambiguous, that's not an argument for finding a way to make it ambiguous.

I think that would be a fair point if the article was about whether or not we should call C a low-level language, but the article is actually about whether C maps cleanly onto what the machine actually does and what a machine might look like if we didn't have that expectation.

> The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. [...]

This strikes me as a flavor of the VLIW+compilers-could-statically-do-more-of-the-work argument, though TFA does not mention VLIW architectures.

C or not, making compilers do more of the work is not trivial, it is not even simple, not even hard -- it's insanely difficult, at least for VLIW architectures, and it's insanely difficult whether we're using C or, say, Haskell. The only concession to make is that a Haskell compiler would have a lot more freedom than a C compiler, and a much more integrated view of the code to generate, but still, it'd be insanely hard to do all of the scheduling in the compiler. Moreover, the moment you share a CPU and its caches is the moment that static scheduling no longer works, and there is a lot of economic pressure to share resources.

There are reasons that this make-the-compilers-insanely-smart approach has failed.

It might be more likely to be successful now than 15 years ago, and it might be more successful if applied to Rust or Haskell or some such than C, but, honestly?, I just don't believe this will work anytime soon, and it's all academic anyways as long as the CPU architects keep churning out CPUs with hidden caches and speculative execution.

If you want this to be feasible, the first step is to make a CPU where you can turn off speculative execution and where there is no sharing between hardware threads. This could be an extension of existing CPUs.

A much more interesting approach might be to build asynchrony right into the CPUs and their ISAs. Suppose LOADs and STOREs were asynchronous, with an AWAIT-type instruction by which to implement micro event loops... then compilers could effectively do CPS conversion and automatically make your code locally async. This is feasible because CPS conversion is well-understood, but this is a far cry from the VLIW approach. Indeed, this is a lot simpler than the VLIW approach.

TFA mentions CMT and UltraSPARC, and that's certainly a design direction, but note that it's one that makes C less of a problem anyways -- so maybe C isn't the problem...

Still, IMO TFA is right that C is a large part of the problem. Evented programs and libraries written in languages that insist on immutable data structures would help a great deal. Sharing even less across HW/SW threads (not even immutable data) would still be needed in order to eliminate the need for cache coherency, but just having immutable data would help reduce cache snooping overhead in actual programs. But the CPUs will continue to be von Neuman designs at heart.

The meta point from the article is that this is as much a hardware problem as it is a language or developer one. An arms race was waged to create CPUs that are very effective at running sequential programs, to the point that what they present to the program is very much a facade, and they hide an increasing amount of internal implementation detail. By David's postulation, even the native assembly language for the CPU is not low level.

To drive this juxtaposition home, I'd point to PALcode on Alpha processors in which C (and others) can very much be a low level language. Very few commercial processors let you code at the microcode level.

The overarching premise is then brought home by GPU programming, which shows that you don't necessarily need to be writing at the ucode level if the ecosystem was built around how the modern hardware functioned.

The author, David Chisnall, is a co-author on a related paper from PLDI 2016: "Into the Depths of C: Elaborating the De Facto Standards", https://news.ycombinator.com/item?id=11805377

He was also one of the earliest non-Apple contributors to Clang, was on the FreeBSD core team, and wrote the modern GNU Objective-C runtime implementation. His work on Objective-C in particular is prolific.

Also, his book on Objective-C is the best one I read.

There is an entire junkyard full of processors designed to run other languages well.

LISP machines in the 60s, Java machines in the 90s, many others.

For whatever reason, successful general purpose silicon has almost always followed a C-ish model.

It's also worth noting that Fortran runs quite well on C-ish style processors.

Exactly. While CPU designers will certainly make sure they can run C code fast, it turns out that, for the last 40 years at least, the C model (sequential, procedural, mostly flat address space) has been the most efficient to implement in hardware.

"C combines the power and performance of assembly language with the flexibility and ease-of-use of assembly language."

It feels like the author really isn't talking so much about the limitations of C on modern architectures, but the architecture itself.

Possibly relevant is this (short?) discussion[1] from 2011 about a CPU more closely designed for functional programming.

[1] https://news.ycombinator.com/item?id=2645423

It is instructive to consider GPUs and their compilers. The death of OpenGL in favour of Vulcan has come about because OpenGL is unable to express low level constructs which are essential to achieving performance. GPU drivers are actually compilers that recompile shaders to efficient machine expressions.

Thus the fundamental limitation is that the processor has only a C ABI. If there were a vectorisation and parallel friendly ABI, then it would be possible to write high level language compilers for that. It should be possible for such an ABI to coexist with the traditional ASM/C ABI, with a mode switch for different processes.

s/vulcan/vulkan damn autocorrect.

It is correct that C is not really a low-level language, but the points about how C limits the processor don't make much sense.

It uses UltraSPARC T1 and above processors as an example for a "better" processor "not made for C", but this argument makes no sense at all. The "unique" approach in the UltraSPARC T1 was to aim for many simple cores rather than few large cores.

This is simply about prioritizing silicon. Huge cores, many cores, small/cheap/simple/efficient die. Pick two. I'm sure Sun would have loved to cram huge caches in there, as it would benefit everything, but budgets, deadlines and target prices must be met.

Furthermore, the UltraSPARC T1 was designed to support existing C and Java applications (this was Sun, remember?), despite the claim that this was a processor "not designed for traditional C".

There are very few hardware features one can add to a conventional CPU (which even includes things like the Mill architecture) that would not benefit C as well, and I cannot possibly imagine a feature that would benefit other languages while being harmful to C. The example of loop count inference for ARM SVE being hard in C is particularly bad. It is certainly no harder in the common use of a for loop than it is to deduce the length of an array on which a map function is applied.

I cannot imagine a single compromise done on a CPU as a result of conventional programming/C. That is, short of replacing the CPU with an entirely different device type, such as a GPU or FPGA.

The point is specifically about parallel vs sequential programs. Legacy C code is sequential, and the C model makes parallel programming very difficult.

I met a guy back in college, a PhD who went to work at Intel, who told me the same thing. In theory, the future of general purpose computing was tons of small cores. In practice, Intel's customers just wanted existing C code to keep running exponentially faster.

> Legacy C code is sequential, and the C model makes parallel programming very difficult.

Neither of these statements are true, unless "Legacy" refers to the early days of UNIX.

Tasks that parallelize poorly do not benefit from many small cores. This is usually a result of either dealing with a problem that does not parallelize, or an implementation that does not parallelize (because of a poor design). Neither of these attributes is related to language choice.

An example of something that does not parallelize at all would be AES-256-CBC encryption (each block's input depends on the previous ciphertext block; decryption, by contrast, parallelizes fine). It doesn't matter what your tool is: Erlang, Haskell, Go, Rust, even VHDL. It cannot be parallelized or pipelined. INFLATE has a similar issue.

For such algorithms, the only way to increase throughput is to increase single-threaded performance. Adding cores increases total capacity, but cannot increase throughput. For other tasks, the synchronization costs of parallelization are too high. I work for a high-performance network equipment manufacturer (100Gb/s+), and we are certainly limited by sequential performance. We have custom hardware to load balance data to different CPU sockets, as software-based load distribution would be several orders of magnitude too slow. The CPUs just can't access memory fast enough, and many slower cores wouldn't help, as they'd both be slower and incur overheads.

Go and Erlang of course provide built-in language support for easy parallelism, while in C you need to pull in pthreads or a CSP library yourself, but the C model doesn't make parallel programming "very difficult", nor is C any more sequential by nature than Rust. It is also incorrect to assume that you can parallelize your way to performance. In reality, the "tons of small cores" is mostly just good at increasing total capacity, not throughput.

I admit it's not fair to blame C in particular. The comparison is between how we write and execute software and how we could write and execute software, and the language absolutely comes into play, in addition to how the language is conventionally used. "Legacy" code in this context is code that was written in the past and is not going to be updated or rewritten.

I disagree that tasks performed by a computer either don't parallelize or the cost of synchronization is too high. At a fine-grained level, our compilers vectorize (i.e. parallelize) our code -- with limits imposed by C's "fake low-levelness" as described in the article -- and then our processors exploit all the parallelism they can find in the instructions. At a coarser level, even if calculating a SHA (say) isn't parallelizable, running a git command computes many SHAs. The reasons why independent computations are not done on separate processors -- even automatically -- come down to programming language features (how easy is it to express or discover the independence, one way or another) and real or perceived performance overhead. Hardware can be designed so that synchronization overhead doesn't kill the benefits of parallelization. GPUs are a case in point.

The world is going in the direction of N cores. We'll probably get something like a mash-up of a GPU and a modern CPU, eventually. If C had been overtaken by a less imperative, more data-flow-oriented language, such that everyone could recompile their code and take advantage of more cores, maybe these processors would have come sooner.

Rant time.

> "Legacy" code in this context is code that was written in the past and is not going to be updated or rewritten.

In that case, I would not say Legacy code is sequential. For the past few decades, SMP has been the target where sensible/possible.

> At a fine-grained level, our compilers vectorize (i.e. parallelize) our code.

Vectorization is a hardware optimization designed for a very specific use-case: Performing instruction f N times on a buffer of f_input x N, by replacing N instantiations of f by a single fN instance.

If this is parallelization, then an Intel Skylake processor is already a massively parallel unit, with each core already executing massively in parallel by having the micro-op scheduler distribute work across available execution ports and units.

In reality, vectorization has very little to do with parallelization. Vectorization is much faster than parallelization (in many cases, parallelization would be slower than purely sequential execution), and in a world where all the silicon budgets goes to parallelization, vector instructions would likely be killed in the process. You can't both have absurd core counts and fat cores. If you did, it would just be adding cores to a Skylake processor.

(GPUs have reduced feature sets compared to Skylake processors not because they don't want the features, but because they don't have room—they just specialize to save space.)

> At a coarser level, even if calculating a SHA (say) isn't parallelizable, running a git command computes many SHAs.

And this is exactly why Git starts worker processes on all cores whenever it needs to do heavy lifting.

This has been the approach for the past few decades, which is why I twitch a bit at your use of "legacy" as "sequential": if a task can be parallelized to use multiple cores (which is not a language issue), and the task is even remotely computationally expensive, then the developer parallelizes the problem to use all available resources.

However, if the task is simple and fast already, parallelization is unnecessary. Unused cores are not wasted cores on a multi-tasking machine. Quite the contrary. Parallelization has an overhead, and that overhead is taking cycles from other tasks. If your target execution time is already met on slow processors in sequential operation, then remaining sequential is probably the best choice, even on massively parallel processors.

Git has many commands in both those buckets. Clone/fetch/push/gc are examples of "heavy tasks" which utilize all available resources. show-ref is obviously sequential. If a Git command that is currently sequential ends up taking noticeable time, and is a parallelizable problem (such as computing a thousand independent SHAs), then the task will be parallelized very quickly.

Unless something revolutionary happens in programming language development, it will always be an active decision to parallelize. Even Haskell requires explicit parallelization markers, despite being about as magical as programming can get (magical referring to "not even remotely describing CPU execution flow").

> Hardware can be designed so that synchronization overhead doesn't kill the benefits of parallelization. GPUs are a case in point.

I do not believe that this is true at all. That is, GPUs do not combat synchronization overhead in the slightest, lack the features a CPU uses for efficient synchronization (they cannot yield to other tasks or sleep, only spin), and run at much lower clocks, emphasizing the inefficiencies.

After reading some papers on GPU synchronization primitives (this one in particular: https://arxiv.org/pdf/1110.4623.pdf), it would appear that GPU synchronization is not only no better than CPU synchronization, but a total mess. At the time the paper was written, the normal approach to synchronization was hacks like terminating the kernel entirely to force global synchronization (extremely slow!) or just using spinlocks, which are far less efficient than what we do on CPUs. Even the methods proposed by that paper are in reality just spinlocks (the XF barrier is just a spinning volatile access, as GPUs cannot sleep or yield).

All this effectively makes a GPU much worse at synchronizing than a CPU. So why are GPUs fast? Because the kinds of tasks GPUs were designed for do not involve synchronization. This is the best-case parallel programming scenario, and the scenario where GPUs shine.

I'd also argue that if GPUs had a trick up their sleeve in the way of synchronizing cores, Intel would have added it to x86 CPUs in a heartbeat, at which point synchronization libraries and language constructs would be updated to use it where available. They don't hesitate with new instruction sets, and the GPU paradigm is not actually all that different from a CPU's.

> The world is going in the direction of N cores. We'll probably get something like a mash-up of a GPU and a modern CPU.

It's the only option, due to physics. If physics didn't matter, I don't think anyone would mind having a single 100GHz core.

However, it won't be a "mash-up of a GPU and a modern CPU", simply because a GPU is not fundamentally different from a CPU. A GPU mostly just has a different budgeting of silicon and a more graphics-oriented choice of execution units than a CPU, but the overall concept is the same.

> If C had been overtaken by a less imperative, more data-flow-oriented language, such that everyone could recompile their code and take advantage of more cores, maybe these processors would have come sooner.

A language that could automatically parallelize a task based on data-flow analysis (without incurring a massive overhead) would be cool. I don't know of any, though. It seems like a natural fit for something like Haskell or Prolog, but neither can do it.

However, tasks that would benefit from parallelization are already easy to tune to a different amount of parallelism, and parallelizing what parallelizes poorly is not useful on any architecture.

Parallelization hasn't really been a problem for at least the last two decades, and I certainly can't see it as the limiting factor for making massively parallel CPUs. However, massively parallel CPUs are not magical, and many problems cannot benefit from them at all. It will almost always be trading individual task throughput for total task capacity.

The thing is, most of the time you're working at some logical level that is not the "reality". The problem is that C programmers think that C === reality === performance. C has better (lower) constant factors, but it is by no means better all the time.

The sophistication of the compiler does not mean the language is high level.

The meaning of a high-level language has to do with abstraction away from the hardware. C programmers often wince at languages that are highly abstracted away from the hardware. But those are the "high-level" languages. Especially languages that remove more and more of the mechanical bookkeeping of computation. Such as garbage collection (aka automatic memory management). Strong typing or automatic typing. Dynamic arrays and other collection structures. Unlimited-length integers and possibly even big-decimal numbers of unlimited precision in principle. Symbols. Pattern matching. Lambda functions. Closures. Immutable data. Object programming. Functional programming. And more.

By comparison C looks pretty low level.

Now I'm not knocking C. If there were a perfect language, everyone would already be using it. Consider the Functional vs Object debate. (Or vi vs emacs, tabs vs spaces, etc) But all these languages have a place, or they would not have a widespread following. They all must be doing something right for some type of problem.

C is a low level language. And there is NOTHING wrong with that! It can be something to be proud of!

One of the points of the article is that C is relatively high-level by your definition.

Basically, it says that the C abstract machine has very little in common with most existing processors.

Moreover, it makes the point that in the last decades of CPU research the focus was "make C go fast", which ultimately caused Meltdown.

TLDR: C was close-to-the-metal on the PDP-11 but since then hardware has become more complex while exposing the same abstraction to the C programmer. That means that hardware features such as speculative execution and L1/L2 caching are invisible to the programmer. This was the cause of Spectre and Meltdown and it forces a lot of complexity into the compiler. GPUs achieve high performance in part because their programming model goes beyond C. Processors would be able to evolve if they weren't hamstrung by having to support C.

Yeah, but that's just reinventing the mistakes of VLIW all over again. Yes, CPUs have complicated behavior in a way that can't be captured by scalar imperative languages in a concise way. No, that doesn't mean that you can fix this with new abstractions.

The reason C won wasn't that it forced CPUs to adhere to its particular execution metaphor[1], but that it happened upon a metaphor that could be easily expressed and supported by CPUs as they evolved over decades of progress.

[1] Basically: byte-addressable memory in a single linear space, a high performance grows-down stack in that same memory space, two's complement arithmetic, and "unsurprising" cache coherence behavior. No, the last three aren't technically part of the language spec, but they're part of the model nonetheless and had successful architectures really diverged there I doubt C-like runtimes would have "won".

It is very important to emphasize that GPUs only "achieve high performance" on workloads tailored very specifically to their extremely limited architecture.

CPUs, on the other hand, are designed to be much more generic, with decent performance on any task.

I wouldn't say this is precisely true. Look at Dolphin's "ubershaders" (https://dolphin-emu.org/blog/2017/07/30/ubershaders/): they're essentially a Turing machine, running on your GPU, used to emulate a GPU of another architecture... and yet this is still (much!) faster than doing the same on the CPU.

And there's nothing special about emulating a GPU on a GPU; you could emulate a CPU architecture just as easily, at a much higher level than you get from an FPGA, and so perhaps faster than you'd be able to get from today's FPGAs. And, if you're mapping GPU shader units 1:1 to VM schedulers, you'd also get a far higher degree of core parallelism than even a Xeon Phi-like architecture would give you. (The big limitation is that you'd be very limited in I/O bandwidth out to main memory; but each shader unit would be able to reserve its own small amount of VRAM texture space—i.e. NUMA memory—to work with.)

I'm still waiting for someone to port Erlang's BEAM VM to run on a GPU; it'd be a perfect fit. :)

I think it's more accurate to say that CPUs are high-performance for different tasks than GPUs are. Simple code over very wide data is atrociously slow on CPUs, and single-threaded, heavily branching workloads are atrociously slow on GPUs. That doesn't mean that one is more limited than the other.

I would argue that the total set of "good performance" workloads are smaller on a GPU than they are on a CPU.

However, "good performance" on a CPU is much, much worse than "good performance" on a GPU. CPUs just achieve their mediocre performance on a larger set of use cases. GPUs are specialized devices that are very good at specialized activities.

The whole point of the article is that this is largely a myth imposed by the memory model of C and C-like languages. Single-threaded, heavily branching workloads would be atrociously slow on modern CPUs too, if not for branch prediction (which caused Spectre). On a modern processor, you can have up to 180 instructions in flight in one thread.

The processor does an OK job of filling those slots, but a language and compiler could do a much better job. C doesn't collect any information about data dependencies and instead just pretends all instructions are sequential. Even code consisting of tight loops of sequential commands can be optimized, because there is an entire program and operating system running around that sequential code.

I was with you until the last sentence: "Processors would be able to evolve if they weren't hamstrung by having to support C."

I don't think it's fair or correct to say that C is the real issue. Recently there have been languages like Erlang, and support for more functional models, that make concurrent code a lot easier to write. The first real consumer multicore processors were only released a bit over 10 years ago with Intel's Core 2 Duo. Of course SMP systems existed before that (Sun had them for years), but they were relatively niche. Still, Java, C++, and C# are all languages that produce much easier-to-maintain code if it is single-threaded. Recent darlings like JS and Python are single-threaded out of the box.

The large majority of languages in use today are not designed to be concurrent as a first principle. True multicore systems have been around for decades, software and mindshare is now starting to catch up and use tools that make concurrency easy.

My Symbolics computers (running Symbolics Ivory processors) run Lisp really well, as well as C - they have a C compiler.

I have operational computers of a variety of architectures at home, including the oldest generations (6502, 680x0), Sparc, Symbolics, DEC Alpha, MIPS 32- and 64-bit, etc., and even an extremely rare (and unfortunately not-running) Multiflow, the granddaddy of VLIW.

My favorite part of the original article was the final section. I wish we had a modern CPU renaissance akin to what was going on in the 80s and 90s, but the market dominance of x64 and ARM seems to be squelching things, with optimizations to those architectures rather than novel new ones (with possibly novel new compiler technologies). 64-bit ARM was a nice little improvement, though.

> Recently there have been languages like erlang

Erlang is decades old. It's 32, only 16 years younger than C.

I don’t understand. CPUs do not support C, they support a specific instruction set. What stops them from having instructions for cache management, pipelining, speculative execution hints, etc?

They do not support C officially, but every CPU designer knows that 99%+ of the code that matters is written in C. Therefore they design chips targeting this translation from C. What the authors want is a better low-level interface that would allow for modern processor features without the legacy of the features available on the PDP-11.

> 99%+ of the code that matters is written in C

I think a better way of stating this is "99% of the code that matters is written in C, or in a language designed with a similar target architecture as C in mind". Certainly a lot of code that matters is written in C++, Objective-C, and Java, but the same points hold true for all of those.

Many of these features are very difficult to use in a useful way in a static context (e.g. at compile time) because the performance gains mostly come from taking advantage of dynamic context. Speculative execution and out of order execution for example are mostly useful because you don't know at compile time exactly what data your code is processing or what CPU it is running on, what function / context you are being called from, what is in cache and what isn't, etc.

The SPUs on the PlayStation 3 were an experiment in user managed caches and that proved to be a difficult thing to make effective use of even in games where you know more context than a lot of code can assume.

The Itanium processor did exactly this. Aside from being a commercial flop, it was found to be quite difficult to actually get the compiler to generate good management instructions, and x86 was often able to beat an Itanium core at the same clock speed.

It seems like x86 exists at a local maximum, and Itanium didn't go far enough away to find a different peak.

Games console CPUs support those kind of instructions, don't they?

To some extent, didn't Intel go down this road with VLIW: trying to shift the burden of making code fast onto the compiler, instead of the CPU?

Thanks for the TLDR.

But if that's the argument, then not even assembly is sufficient, as control over speculative branching and prefetch is only accessible via microcode in the CPU.

I think the argument is improperly framed. This is a discussion over public and private interface. The CPU is treated as a black box with a public interface (the x86+ instruction set). Precisely how those instructions are implemented (on chip microcode) is a private matter for the chip design team, which if correctly implemented, does not matter to the user, as the results should be correct and consistent. Obviously, a poor implementation can lead to Spectre or Meltdown. But for the most part the specific transistors & diodes used to sum a set of integers, or transfer a word from L2 to L3 cache, etc. shouldn't matter to us. If the compilers are relying on side effects to alter behavior of the internal implementation based on performance evidence, then that is a boundary violation.

C is low level. It remains "universal assembly language".

The argument implied in the article is that choosing a different public interface (breaking "C compatibility" and the imposed limitations) could bring a serious performance improvement.

While precisely how those instructions are implemented (on chip microcode) is a private matter for the chip design team, we do care how much resources it takes to implement these instructions, since if we can enable a more efficient implementation then we can get better price/performance.

Is there any evidence in favor of the argument that breaking C compatibility will increase execution speed without shifting the optimisation burden to programmers or compilers?

For example, have research CPUs been built that optimise for Erlang rather than C that provide better efficiency for the same amount of programmer effort than an X86 CPU running C?

You make good points - if we're just talking about semantics, then yes C is the closest portable language to x86 or arm and is low-level in that sense. But on the other hand, semantics is not always the only important thing: performance is sometimes important also, and there these low-level details matter. The architecture does its best to hide them from the user, but the abstraction is very leaky.

For example, when writing high-performance CPU-bound code it's usually important to keep in mind how wide cache lines are, but C doesn't expose this to the programmer in a natural way.

Interesting tidbits from article:

A modern Intel processor has up to 180 instructions in flight at a time (in stark contrast to a sequential C abstract machine, which expects each operation to complete before the next one begins). A typical heuristic for C code is that there is a branch, on average, every seven instructions. If you wish to keep such a pipeline full from a single thread, then you must guess the targets of the next 25 branches.

The Clang compiler, including the relevant parts of LLVM, is around 2 million lines of code. Even just counting the analysis and transform passes required to make C run quickly adds up to almost 200,000 lines (excluding comments and blank lines).

I hate the idea of "low-level". There is not really such a thing. You should be using a language suitable for the domain you're working in.

Sadly, too many programming languages try to be the end all be all. C is language that is great for working at the system domain.

Ideally, we would have small minimalist languages for various problem domains. In reality, maintaining and building high-quality compilers is a lot of work. Moreover, a lot of development will just pile together whatever works.

That aside, you could build a computer transistor by transistor, but it's probably more helpful to think at the logic-gate level or even larger units. Heck, even a transistor is just a piece of silicon/germanium that behaves in a certain way.

So there are levels of abstraction, but is an abstraction low-level? I think the term probably came about to refer to the lower layers of abstraction that build whatever system you're using. So unless you're using something that nothing can be built upon, everything, even what people would call high-level, can be low-level.

Heck, people call JS a high-level language, but there are compilers that compile to JS. This makes JS a lower-level system that something else is built upon. This again shows why I would say "low-level" is often thrown around with a connotation that is not exactly accurate.

Archive.is link, as the page loaded incredibly slow for me: http://archive.is/E9s70

I find this article insightful, but missing the points it tries to deliver.

What the article is very good at delivering is that current CPUs' ISAs export a model that doesn't exist in reality. Yes, we might call it the PDP-11, although I miss that architecture dearly.

C was never meant to be a low-level language. It was a way to map loosely to assembler and provide some higher-level abstractions (functions, structures, unions) to write code that was more readable and structured than assembler. And yes, it is far from perfect. And yes, today it is called a low-level language with good reason.

But this article is all about exposing the insanity that modern CPUs have become, insanity that is the sacrifice at the altar of backward compatibility -- every CPU architecture that tried the path of not being compatible with older CPUs has died.

I am pretty sure that once we have an assembler that maps closely to the microcode, or to the actual internal architecture of a modern, parallel, NUMA machine, we will still need a C-like language that introduces higher-level features to help ease the writing of the non-architecture-dependent parts. And it will most probably be C.

The article itself has 4 definitions or "attributes" for low-level languages that can be considered contradictory:

* "A programming language is low level when its programs require attention to the irrelevant."

* Low-level languages are "close to the metal," whereas high-level languages are closer to how humans think.

* One of the common attributes ascribed to low-level languages is that they're fast.

* One of the key attributes of a low-level language is that programmers can easily understand how the language's abstract machine maps to the underlying physical machine.

So basically the entire article's premise (the title) hinges on the last bullet, which can be contested. All the other mentioned attributes can be applied to Java, C, C#, and C++. So failing the last bullet point doesn't apply just to C.

I think the author's point is that despite being perceived as low-level, C doesn't really differ from, say, Java on the last bullet.

In other words, a programmer who sits down and uses C and not Java might think, "I am being forced to pay attention to irrelevant things and think in unnatural ways, but that's because I am writing fast code using operations that map to operations done by the physical machine. In a higher-level language like Java, more of these details are out of my control because they are abstracted away by the language and handled by the compiler."

I think the article does a great job dismantling this point of view, and telling the story that C is not so different from Java, aside from being unsafe and ill-specified.

Maybe true, but I think the Java example is not that good. Java is still not that different from C; Java is more like a descendant of C and C++, and to be honest both languages force you to pay attention to lots of irrelevant "low-level" detail, fictionally low-level since it's not actually the machine but the language itself (which is stuck in the PDP-11 mental model...)

Compare that to something genuinely different like Erlang, Haskell, or Lisp.

High-level and low-level are relative, to be sure, but Java is definitely considered higher-level than C -- it was designed to target a virtual machine, for example, while C was designed to target real machines -- so I think it illustrates the article's point perfectly.

One reason for that is for many applications latency is much more critical than bandwidth. For PCs that’s input-to-screen latency, for servers that’s request-to-response. It’s possible to make multicore processors with simpler cores, design OS and language ecosystem around it, etc. Such tradeoffs will surely improve bandwidth, but will harm latency.

Another reason is that most IO devices are inherently serial. Ethernet only has 4 pairs, and wifi adapters are usually connected by a single USB or PCIe lane. If a system has limited single-threaded (i.e. serial, PDP-11-like) performance, it's going to be hard to produce or consume those gigabits per second of data.

Great article if you're willing to read past headlines. I would have liked to see a mention of small processors that are still hugely popular (microcontrollers, etc.) where C is still a good fit.

The article does not properly distinguish between C as a language and what the C compiler does with the C program. The logic of the article references what the compiler does.

The reasonable way to measure languages is to look at the abstractions present in the language. C has fewer abstractions than the other languages that we are familiar with. That is the reasonable definition of the level of a language.

That's exactly the author's point. The C that programmers write is remarkably far from what the compiler generates for modern hardware.

How do you propose measuring the number of abstractions? JavaScript has remarkably few built-in abstractions, but it's in no way "low-level" from a hardware perspective.

I wonder if it's easier for a compiler/CPU to optimize "async" code? I often find myself with an array in JavaScript and calling the same function on each item; it would be nice if such cases were made parallel, which I think is possible in C++. Is that ever gonna happen in JavaScript!?
