> Yea, this post is not about how to use a hammer, but more like a curious consideration of whether using hammers everywhere is not limiting us (C design)
Maybe it [EDIT: the post] is, but the title is obviously nowhere near accurate - if C is not a portable low-level language, what on earth is?
[1] It gets reposted everywhere so often I have read it multiple times, and the one thing in common I see is how every know-it-all crawls out of the woodwork to comment on the title, as if the title was something new, deep, profound or even correct.
C is only portable between systems that emulate the PDP-11 at the hardware level, and only if you don't use any compiler-specific extensions.
If you use syscalls or work across different breeds of operating systems (UNIX, POSIX and Windows are not compatible with each other), you need to rewrite or wrap the relevant parts, or write them up front inside ifdefs, to be able to "port" the code between systems.
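A minimal sketch of what that ifdef wrapping looks like in practice, assuming the only targets that matter are Windows and something POSIX-ish (the wrapper name sleep_ms is made up; Sleep and usleep are the respective platform calls):

```c
/* Hypothetical sleep_ms() wrapper: one portable entry point, two
   platform-specific implementations selected at compile time. */
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

void sleep_ms(unsigned int ms)
{
#ifdef _WIN32
    Sleep(ms);               /* Win32: takes milliseconds */
#else
    usleep(ms * 1000u);      /* POSIX: takes microseconds */
#endif
}
```

Multiply this by every syscall-ish thing the program touches and "portable" starts to mean "ported".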
The gist of the piece is that hardware is evolving to please C's programming model, hiding all the complexity C is not aware of and behaving like a PDP-11 on steroids. This is why we have a truckload of side-channel attacks on x86 to begin with: to "emulate" the PDP-11 faster and faster.
It's not even that faithful to the PDP-11, either. The PDP-11 has a unified integer division/modulo instruction (and it operates in double width: it takes a 32-bit dividend and a 16-bit divisor and produces a 16-bit quotient and remainder, just like x86), it has double-width integer multiplication (again, just like x86), and it has instructions for addition/subtraction with carry; none of that is available from (standard) C, and it's quite a pity. Also, while the PDP-11 has built-in support for post-increment and pre-decrement for pointers, it doesn't have built-in pre-increment or post-decrement.
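To make that concrete, here is roughly what you have to write in standard C to get at those operations; whether you actually get the single hardware instruction depends entirely on the optimizer recognizing the idiom (the x86 intrinsic named in the comment is a compiler extension, not standard C):

```c
#include <stdint.h>

/* Widening 32x32 -> 64 multiply: standard C has no "double-width
   result" operator, so you cast up and hope the compiler collapses
   it into a single multiply instruction. */
uint64_t mul_wide(uint32_t a, uint32_t b)
{
    return (uint64_t)a * (uint64_t)b;
}

/* Addition with carry: the carry flag isn't visible from C, so you
   recover it from unsigned wraparound. A single ADC instruction is
   only reachable if the optimizer spots this pattern (or via
   non-standard intrinsics such as _addcarry_u64 on x86). */
uint64_t add_with_carry(uint64_t a, uint64_t b, unsigned *carry_out)
{
    uint64_t sum = a + b;
    *carry_out = (sum < a);   /* wrapped around => carry was set */
    return sum;
}
```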
I think we'd have the side-channel attacks on x86 even if we wrote in assembler - unless we wrote the assembly specifically with an eye to preventing (the known kinds of) side-channel attacks.
Put differently, I don't think the side-channel attacks would disappear if we wrote in Rust or Haskell or Agda.
The side-channel attacks are not a result of programming in C, but of hardware designed not to upset the view of the system that C compilers have.
No programming language, regardless of its paradigm (imperative, functional) or how it interfaces with the system (JIT, interpreted, compiled), is immune to these attacks, because it's the hardware that is designed to emulate the PDP-11.
In other words, all programming languages target a modern PDP-11 at the end of the day. If the hardware exposed all of its tricks (esp. cache management, invalidation, explicit prefetching, etc.) and lacked speculative, out-of-order execution, these problems would go away, but getting the highest performance would become much harder and more complicated, and even impossible in some cases.
Intel tried this with IA64, with a "no tricks, the compiler shall optimize" approach, and it tanked, to put it mildly (esp. after AMD64 came out).
Let's say we have two chips. Chip A requires the programmer to handle all the "magic" stuff. Chip B is like current chips; it hides that stuff. Chip B is subject to side channel attacks. Chip A likely is also unless the programmer is very careful.
Which chip would have sold more? I assert that chip B would have, by a massive volume, because it didn't require the programmer to mess with all that stuff.
So I don't think that it's fair to say that the chip is trying to look like a PDP-11 because of C. I think it's trying to look like a simpler chip, so that mere mortals can program it and still get most of the maximum performance.
I think it depends on the toolchain. Itanium didn't sink because of the optimization it needed, but because of a toolchain that couldn't do all that optimization.
So, if a complex processor comes with a toolchain that does all the tuning by itself, I think it can sell equally well, because, again, the burden won't fall on the developer.
So, I think the popularity of the language itself has a great impact on hardware design.
The AMD Athlon XP had an "Optimized for Windows XP" badge on it. GPUs are built around the programming model OpenGL and DirectX put forward. Modern processors are made to please C and its descendants, because that's the most prominent programming model.
Lisp even tried to change this with "Lisp Machines", and they failed, because Lisp was not mature/popular enough at that point.
So we can say programming model drives hardware very much.
I believe that the point is the processor was designed to please C (by emulating PDP-11). And this design complicates things immensely, which is how we end up with side-channel attacks on our processors.
>if C is not a portable low-level language, what on earth is?
This question doesn't have to have an answer. The author of TFA apparently believes that a low-level language is one that effectively and clearly exposes the execution model of the hardware to the programmer. Under this definition, no widespread language (except assembly) is truly low-level, and possibly none are.
Which, for what it's worth, is also what I was taught in school. C was consistently described as a high-level language by my professors, even if it is "lower-level" than almost everything else.
The real question is whether you would even want to use a language that effectively and clearly exposes the execution model of the hardware. Not even most assemblers do that, as architectures give stronger guarantees than would be implied by the microarchitectural execution model.
Some machines do expose the microarchitecture (or rather, there is no architecture other than what is implemented in hardware by a specific revision) and rely on install-time or even JIT code specialization. But especially on these machines it would be insane to target them manually, as you would have to rewrite your code for every revision.
So, targeting the effective execution model of the machine is out of question. You need an abstraction. The question is whether C is the correct abstraction.
It's plausible that a language could expose some general logic behind instruction-level parallelism and cache management — even register renaming — without being explicitly tied to the way one particular architecture does that. I have no idea how to design such a language, but from 10000 meters I think it could be done.
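For what it's worth, the closest thing in common use today is a compiler extension rather than a language feature: GCC/Clang's __builtin_prefetch lets the programmer hint cache behaviour without naming any particular cache geometry. A rough sketch (the prefetch distance of 16 elements is an arbitrary assumption, and this is an extension, not standard C):

```c
/* Sum an array while hinting the cache: prefetch a fixed distance
   ahead of the element currently being read.
   __builtin_prefetch(addr, rw, locality): rw=0 means read,
   locality=1 means "some temporal locality". */
double sum_with_prefetch(const double *a, long n)
{
    double s = 0.0;
    for (long i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 1);
        s += a[i];
    }
    return s;
}
```

A language of the kind described above would presumably make that sort of thing a first-class, portable construct instead of a per-compiler escape hatch.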
I think the author oversteps his case by suggesting that ILP is an abomination that exists to preserve the availability of C-like languages. In my experience, many algorithms seem to naturally lend themselves to ILP, and I often find myself wondering whether I have typed them in so that these five lines will in fact run simultaneously. One common flaw in critiques of the common C compiler model is that they all seem afflicted by a nostalgia for Lisp machines, when the space of unexplored possibilities is so much larger.