
C Is Not a Low-level Language (2018) - pcr910303
https://queue.acm.org/detail.cfm?id=3212479
======
topspin
Intel and HP attempted to deliver what this paper advocates. They designed an
entirely new processing element and compiler architecture that promised to
deliver high performance with less complexity. It was called EPIC and
manifested as Itanium. Billions of dollars and the best minds at the disposal
of an industry wide consortium couldn't make it work; one year ago the last
Itanium shipped. The market has spoken.

You can run unmodified, compiled OS/360 code from the 1960s on a z15 machine
built this year. The market values the tools and code it has invested in _far_
more than any idealized computing model you care to speculate about.

The flaws in contemporary CPUs that device manufacturers perpetrated on their
customers for almost 20 years are not the fault of C and its users. They are
the fault of reckless manufacturers that squandered their reputation in the
name of performance and, ironically, helped perpetuate the lack of innovation
in programming techniques called out in this paper.

~~~
commandlinefan
> The market has spoken.

The market “demands” cheap, turnkey, easily replaceable programmers who don’t
really know what they’re doing, and justifies this with “I made a website last
weekend, programming is easy, you’re just pretending it’s hard to keep out
competition”. Until software engineering is treated as actual professional
engineering, time, money and resources will continue to be wasted frivolously.

~~~
dharmab
Did you know that there was a PE exam for software engineering?

It was discontinued because less than 100 people applied for it over the
course of several years.

~~~
ThrowawayR2
No one took the exam because it was effectively impossible.

To become a PE, the candidate first has to pass one of the Fundamentals of
Engineering exams to become an engineer-in-training. Except, whoops, there
was never a software-specific FE exam; the most relevant one is the EE/Comp.
E. exam. Take a look at the list of topics:
[https://ncees.org/wp-content/uploads/FE-Ele-CBT-specs.pdf](https://ncees.org/wp-content/uploads/FE-Ele-CBT-specs.pdf)
Most developers aren't going to pass that even with a CS degree.

Secondly, you need 4-8 years of supervision by a licensed engineer. Again,
whoops, there are barely any software developers with a PE license, so who
would they get to supervise them?

Only then do you get to take the PE exam for software engineering. Frankly,
the situation was so absurd that one has to suspect that NSPE didn't want to
certify software developers as PEs.

------
joe_the_user
Something about this constantly appearing trope bugs me.

I began programming C and assembler on the VAX and the original PC. At that
time, C was a reasonable approximation of the assembly code level. We didn't
get into expanding C to assembly that much but the translation was reasonably
clear.

As far as I know, what's changed between that mid-80s world and now is that a
number of levels below ordinary assembler have been added. These naturally are
somewhat confusing _but_ they aim to emulate the C/assembler model that
existed way back then. These levels involve memory protection, task switching,
caches and all things involved with having the current zillion-element Intel
CPU behave approximately like the 16-register CPU of yore but much-much
faster.

I get the "there are more things in heaven and earth than your flat memory
model, Horatio" (apologies to Shakespeare).

BUT, I still don't see any of that making these "Your Ceeee ain't low-level no
more sucker" headlines enlightening. A clearer way to say it would be: "now the
plumbing is much more complicated and even C programmers have to think about
it".

Because... adding levels below C and conventional assembler still leaves C
exactly as many levels below a "high level" language as it was before, and if
there's a "true low level language" for today I'd like to hear about it. And
the same sorts of programmers use C as when it was a low level language, and
the declaration doesn't even give any context, doesn't even bother to say
"anymore", and yeah, I'm sick of it.

Edit: plus this particular actual article is primarily a rant about processor
design with C just pulled into the fight as a stand-in for how people normally
program and modern processors treat that.

~~~
lmm
> Because... adding levels below C and conventional assembler still leaves C
> exactly as many levels below a "high level" language as it was before, and if
> there's a "true low level language" for today I'd like to hear about it. And
> the same sorts of programmers use C as when it was a low level language, and
> the declaration doesn't even give any context, doesn't even bother to say
> "anymore", and yeah, I'm sick of it.

Not really. For many purposes, C is not any more low-level than a supposedly
"higher level" language. 20 years ago one could argue that it made sense to
choose C over Java for high-performance code because C exposed the low-level
performance characteristics that you cared about. More concretely, you could
be confident that a small change to C code would not result in a program with
radically different performance characteristics, in a way that you couldn't be
for Java. Today that's not true: when writing high-performance C code you have
to be very aware of, say, cache line aliasing, or whether a given piece of
code is vectorisable, even though these things are completely invisible in
your code and a seemingly insignificant change can make all the difference. So
to a large extent writing high-performance C code today is the same kind of
programming experience (heavily dependent on empirical profiling, actively
counterintuitive in a lot of areas) as writing high-performance Java, and
choosing to write a program with extreme performance requirements in C rather
than Java because it's easier to control performance in C is likely to be the
wrong tradeoff.
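
To make the cache-line point concrete, here is a minimal sketch of false
sharing, assuming 64-byte cache lines and POSIX threads (the iteration counts
and layout are only illustrative):

    /* Nothing in the C source says these counters interact, yet when they
     * share a cache line the two threads fight over it and the loops can run
     * several times slower than with the padding uncommented. */
    #include <pthread.h>
    #include <stdio.h>

    struct counters {
        long a;                 /* bumped by thread 1 */
        /* char pad[64]; */     /* uncomment to give b its own cache line */
        long b;                 /* bumped by thread 2 */
    };

    static struct counters c;

    static void *bump_a(void *arg) {
        (void)arg;
        for (long i = 0; i < 100000000L; i++) c.a++;
        return NULL;
    }

    static void *bump_b(void *arg) {
        (void)arg;
        for (long i = 0; i < 100000000L; i++) c.b++;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump_a, NULL);
        pthread_create(&t2, NULL, bump_b, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("a=%ld b=%ld\n", c.a, c.b);
        return 0;
    }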

~~~
biggestdecision
Java forces you to use profiling; at least with C you can see the exact
instructions your compiler outputs. Missing the fancy vector instructions?
Modify your code until you can guarantee it's vectorized. With Java you are at
the mercy of the JVM to do the right thing at runtime.
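
For example, a sketch of that workflow: write the loop so the compiler can
prove the iterations are independent, then ask the compiler to report what it
did (the flag spellings below are for recent GCC/Clang and may differ by
version):

    /* A loop the auto-vectoriser can usually handle.  'restrict' promises no
     * aliasing, which is often what unblocks the vector instructions.
     * Check the result with, e.g.:
     *   gcc   -O3 -fopt-info-vec-all saxpy.c
     *   clang -O3 -Rpass=loop-vectorize -Rpass-missed=loop-vectorize saxpy.c
     * or just read the generated assembly. */
    #include <stddef.h>

    void saxpy(size_t n, float a, const float *restrict x, float *restrict y) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }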

~~~
kjeetgill
Not that I disagree with what you're saying, but I thought you'd find it
interesting: you can dump the JIT assembly from Hotspot JVM pretty readily to
make sure things like inlining are happening as you'd expect.

~~~
thu2111
You can also view the entire compiler internals in a visual way using the igv
tool. You can actually get much better insight into how your code is getting
compiled on a JVM like the GraalVM than with a C compiler.

However, I will admit that this is very obscure knowledge.

------
todd8
A number of claims here and in the original article are inaccurate
interpretations of the almost random walk that we have made to get to our
modern processor designs and the C programming language.

Instruction level parallelism and out of order execution were done by the
seminal CDC 6600 as early as 1964, at the time one of the world’s fastest
computers. (I remember a conversation with Seymour Cray about the difficulty
of handling machine state during an interrupt on such an architecture.) The C
programming language didn’t come along until almost 10 years later.

As the article says, C is a good fit to the architecture of the PDP-11, a
minicomputer very different from the mainframes of the time. There were many
competing visions for what a “high-level” programming language ought to look
like back then, Pascal (1970), LISP (pre-1960), Prolog (1972), FORTRAN
(pre-1960), COBOL (pre-1960), Smalltalk (1972), Forth (1970), APL (1966),
Algol (pre-1960), Jovial (pre-1960), PL/1 (1964), CLU (1974). As a
professional developer and CS grad student during this period, I was well
aware of these alternatives. Many of these had escape hatches to gain low-level
access to the machines that they ran on, and libraries were customarily
written in assembly language. C came along during this period. It wasn’t my
favorite; the whole pointer/array punning seemed unnecessary to me.

Why did C prevail? Was it because it was low-level? No, there were other
well-established languages capable of low-level work. I recall a few reasons:
Unix, DEC’s PDP-11, and Yourdon.

First, Unix was amazing and C was the favored language on Unix. Just being able
to type man on a TTY or VDT and see the man page for a command was so novel.
The Unix OS and its commands were written in C.

Second, the PDP-11 was a very popular good machine and Unix ran on the PDP-11.
Unix on a PDP-11 was a lot more fun than dropping off decks of punch cards to
run on an IBM 360 or CDC mainframe.

Edward Yourdon was an influential American software consultant, author and
lecturer. He had picked C over Pascal and other languages as a “practical”
general purpose high level language to recommend.

Meanwhile, hardware was not at all a monoculture; even though the PDP-11
was a commercial success, it was a simple machine and just a mini-computer.
There were many attempts at alternative architectures. Harvard memory
architecture machines, capability based addressing, programmable wide-
microcode, LISP machines, RISC systems. I’ve programmed or designed systems
for most of these.

So why did the x86 become one of the dominant architectures? It was because of
the power of mass production of integrated circuits. Computers using other
architectures can be built, but like the LISP machines, they will be slower
and more expensive than mass produced processors.

~~~
cryptonector
All those languages are very serial and branch-happy too.

What would the alternatives be? SIMD, basically, and some sort of language
layered on that -- array languages most likely -- and a complete retraining of
programmers.
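
To make "SIMD, basically" concrete, here is a small sketch using x86 SSE
intrinsics, which are about as close as mainstream C gets to writing the
data-parallel form directly instead of hoping the compiler finds it
(x86-specific; it assumes n is a multiple of 4 to keep the example short):

    #include <xmmintrin.h>

    /* Add two float arrays four lanes at a time. */
    void add4(int n, const float *a, const float *b, float *out) {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);   /* load 4 floats (unaligned ok) */
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
        }
    }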

TFA also talks about the UltraSPARC CMT architecture and says it's a bad fit
for C because most C programs don't use a lot of threads. That's nonsense
though, since in fact there are many C10K-style, NPROC threads/processes
applications out there. Sure, many applications remain that are thread-per-
client, but those were obsolete in the 90s, and most such apps I run into are
_Java_ apps because Java didn't tackle async I/O way back when. I suppose Java
is also C's fault since Java resembles C.

C is a scapegoat here, but TFA still has a point if we ignore that part of it:
our programming languages (not just C) are serial and branch-happy, even when
they have well-developed threading and parallelism features, and this
translates to pressure on CPUs to do a lot of branch prediction.

But we do have less-radical ways out. For example, the CMT architecture
results in pretty lousy per hardware thread performance, but pretty good
overall performance with minimal or no Spectre/Meltdown trouble (because the
architecture can elide most or all branch prediction) -- this won't do for
laptops, but there's no reason it shouldn't do for cloud given server
applications written in C10K/CPS/await styles.

My bet would be on a hybrid world with a mix of CMT and SIMD, and maybe also
some deeply pipelined cores: CMT CPUs for services, SIMD for relevant
applications, and deeply-pipelined CPUs for control purposes.

------
throwaway17_17
I have read through the comments already posted and the comments from the
previous HN discussion linked also. I can’t help but feel like I got something
completely different from this article than everyone else. I am convinced that
Chisnall used the ‘C is not a Low-Level Language’ title as clickbait. The
actual point of the article is to push the view that the x86 ISA is structured
the way it is purely out of the desire to make the massive amount of existing
C code run faster. His argument is essentially that ‘low-level’ programmers
are not delusional, but are purposely being deluded by chip manufacturers. C,
and x86 assembly, according to the article, are not low level because they
have only a passing relevance to the actual architecture of modern CPUs.
Chisnall then goes on to argue that a low-level language would require an ISA
that presents a clearer picture of the actual architecture and would be geared
for performance given the actual functionality of the CPU. He then bats around
several features that could be part of an ISA for a multi-core,
hierarchical-memory, pipelined chip: alternate memory models, changes in
register structure and count, a push for immutability, and other features that
would adapt the ISA to reflect what actually constitutes performant code.

I’m all for his vision; it seems like there could be an x86 ISA translation
layer, or a portion of cores dedicated to maintaining x86 compatibility, while
transitioning to a new ISA. In fact, just have a new ISA be the target the CPU
reduces x86 to, and also expose that underlying ISA. But as said elsewhere in
the thread, it’s been tried before and it hasn’t worked yet.

~~~
Certhas
But it's not clear that this has been tried. Itanium wasn't exactly that, was
it?

Edit: It would be easy to imagine exposing some of the lower level details in
a new ISA that lives alongside x86, and then allowing languages that have the
right abstractions to make use of them, creating better fits between existing
abstract programming models and underlying computational resources...

------
Typhon
This article quotes Perlis' famous saying that "a programming language is low-
level when it calls attention to the irrelevant", and I am reminded of another
Perlis aphorism :

« Adapting old programs to fit new machines usually means adapting new
machines to behave like old ones. »

~~~
gumby
Only met him once, but the man was a giant of computer science.

------
nine_k
The money quote:

---

 _...processor architects were trying to build not just fast processors, but
fast processors that expose the same abstract machine as a PDP-11. This is
essential because it allows C programmers to continue in the belief that their
language is close to the underlying hardware._

---

All else follows: hardware parallelism and memory hierarchy are not exposed to
standard C. The compiler rewrites the code ruthlessly to replace loops with
sequential instructions, vector instructions, etc (or not, then you wonder why
and how to trigger the optimization).

C compilers do a number of things to continue supporting abstractions from 50
years ago. The article suggests that maybe other approaches, not compatible
with C, could be considered for CPUs (not just GPUs).

~~~
xenadu02
The proposed benefit of C is that it is “close to the metal”, and from that
follows that the generated code is “obvious” and thus its performance
characteristics are “easy” to reason about.

It turns out that none of these three things are actually true. That just
leaves us with a language poorly adapted to today’s use cases and,
simultaneously, hardware that has been optimized for the C abstract machine in
ways that are not always useful or secure.

~~~
zik
You're quite correct in saying that none of those things are true but you're
wrong in saying that these are "the proposed benefit" of C. I feel that's a
common misconception really.

A huge advantage of C that most people seem to forget these days is that it's
quite a minimalist language by today's standards so it's relatively easy to
learn and reason about.

But maybe the most compelling advantage is that its underlying architectural
concepts map well to most real-life CPUs so it's a much better basis for
compilers to generate efficient code than many languages. This is a subtly
different thing from being "close to the metal". It's really more like "has
compatible design concepts with CPUs". The abstract machine of C has
relatively few gotchas when converting to machine code and that's what it's
really about.

That doesn't mean that C's the "high level assembler" people speak about
though. It's not. Take a look at some optimised assembler output from a C
compiler and you'll see that's patently not true. It can be very difficult to
even understand how the C source code relates to the generated assembler code
- often you'll see that none of the same operations occur and nothing happens
in the same order.

~~~
LessDmesg
C is easy to learn but not to reason about. First of all, C is really C +
macro-C, two separate languages that don't know about each other. Second, all
the damn UB. Third, weak typing and fourth, memory errors. So no, reasoning
about C is far from easy.
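
To illustrate the UB point, a tiny sketch (what actually happens depends on
the compiler and optimisation level, which is exactly the problem):

    /* Signed overflow is undefined, so the compiler may assume it never
     * happens and fold this "obvious" wraparound check to 0 at -O2.  The
     * programmer who reasons "INT_MAX + 1 wraps to a negative number" is
     * reasoning about the hardware, not about C. */
    #include <limits.h>
    #include <stdio.h>

    int will_overflow(int x) {
        return x + 1 < x;
    }

    int main(void) {
        printf("%d\n", will_overflow(INT_MAX));
        return 0;
    }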

~~~
zik
These arguments sound like the arguments of someone who hasn't programmed much
in C. C's undefined behavior is a non-issue in almost any realistic
programming situation - it would only normally crop up in cases where you're
deliberately doing something strange like overflowing a variable. The typing
is a non-issue if you don't do stupid things. Memory errors can be an issue
but tools like valgrind make them a relatively minor hassle. None of these
affect your ability to reason about C, at least not as a normal programmer.

~~~
LessDmesg
> overflowing a variable

...can easily happen unintentionally, just like any other exceptional
condition, which C can't even handle in a deterministic way because it doesn't
have built-in exceptions, so you have to rely on libraries or remember to check
error codes. Not easy to reason about.

> typing is a non-issue

When all your function ptrs have to be cast to and from (void*) - hell yeah
it's an issue.

> minor hassle

Then why are buffer overflow exploits in C programs on the news?

~~~
zik
> When all your function ptrs have to be cast to and from (void*) - hell yeah
> it's an issue.

I don't know how you got this impression. This is 100% incorrect.

------
stephc_int13
Wait; what? If C is not a Low-Level Language, then what is a Low-Level
Language?

"The features that led to these vulnerabilities, along with several others,
were added to let C programmers continue to believe they were programming in a
low-level language when this hasn't been the case for decades."

Now C is again the root of all evils...

But I'm afraid that's not right; all those CPU optimizations (branch
prediction, speculative execution, caches, etc.) are not tied to any specific
language.

They have been designed to make existing programs run faster; if all our
software stack was written in Java, Lisp or PHP, I think that on the hardware
front, most of the same decisions would have been made.

~~~
dragonwriter
> Wait; what? If C is not a Low-Level Language, then what is a Low-Level
> Language?

Assembly, actual machine code. (Contrary to the article, C was never a
low-level language; when it was younger it was literally a textbook high-level
language because it allows abstracting from the specific machine, and while
it's less likely to be what a textbook points to as an example today, that
hasn't changed.)

~~~
nwallin
Following the metrics of the article, assembly language isn't low level
either. Assembly language only gives you access to 16 integer registers and
the 16 (?) SSE/AVX SIMD registers on x86_64. It doesn't give you access to the
64 or so physical integer registers or the who knows how many SIMD registers
there are. Assembly instructions do _not_ map to uops, no matter how much we
pretend they do. We couldn't even program uops if we wanted to. These
instructions are not executed in the order we specify them, and some of them
are not executed at all: modern CPUs have their own dead code detectors and
will drop instructions if they feel like it.

Assembly language programmers have less control over the microcode than raw
JVM bytecode programmers have over the x86_64 instructions that eventually get
executed.

~~~
millstone
Right, but that's the hardware interface. The CPU consumes a compressed
instruction stream. Compression is achieved by the compiler via a lossy
mapping of infinite registers onto a finite register set. This stream is then
re-inflated by the CPU through discovering false dependencies in the
interference graph via register renaming, and then cleaning up spilling via
caching.

If this seems absurdly complex, it might be because of the absurd complexity.
But the alternative has been tried, and tried and tried (RISC, VLIW), and
always a failure. Well fuck.

~~~
nwallin
I mean... sure?

But what's the point? What low-level languages are there? The linked article
is arguing that C isn't low level because modern CPUs behave so differently
than what their hardware interface suggests they do. If we accept this
argument, then assembly language isn't low level either, because it suffers
from all these same limitations. If assembly language isn't low level, then
why is "low level" even a phrase?

My point is that if you're going to argue that C isn't low level, then it's
hard to argue that assembly language is. Conversely, if you're going to argue
that assembly language is low level, it's hard to argue that C isn't. So it's
flippant to argue (in this thread) that assembly language is low level without
also rebuking the article or coming up with a persuasive argument as to why C
shouldn't be lumped in with assembly language.

Personally, I think the article is wrong. C is low level. It is useful to
distinguish between C and Python in terms of C is low level and Python is high
level. It is a useful mental model, therefore I'm keeping it. But if people in
this thread are going to make both arguments that the article is correct and
assembly language is low level, you'll need to justify that fairly strongly.

I'm also not arguing that RISC or (ew) VLIW are the answer.

~~~
dragonwriter
> The linked article is arguing that C isn't low level because modern CPUs
> behave so differently than what their hardware interface suggests they do.

Which is right in conclusion, but wrong on reasoning.

C isn't low level because it allows, by design, writing code that works on
very different hardware interfaces by abstracting away from what the
particular machine does, independently of whether or not the CPU behaves the
way its interface suggests. This is why, decades ago, C was a textbook example
of an HLL, and nothing relevant to that description has changed in the
intervening period.

> It is useful to distinguish between C and Python in terms of C is low level
> and Python is high level.

Python is in the general class of languages for which the term very high level
programming language was created, and, yes, it's useful to distinguish between
Python and C (hence the term coined for that purpose), but it's also useful to
distinguish between Assembly and C (hence the terms coined for _that_
purpose.)

~~~
a1369209993
> allows writing code that works on very different hardware interfaces by
> abstracting away from what the particular machine does

 _Looks at an 8/16-bit in-order processor with synchronous, byte-at-a-time
memory access and perhaps 1 Kbit of on-chip registers total._

 _Looks at a 64-bit, out-of-order, speculative, multicore behemoth with a 64-
(or 72-)bit data bus accessed by an embarrassingly complicated asynchronous
protocol, cached in multiple MB of on-die RAM, with dozens of general-purpose
registers and hundreds if not thousands of special-purpose or model-specific
registers._

 _Looks at QEMU and other x86 interpreters._

So what you're saying is that x86 assembly is a _very bad_ high-level
language?

------
Google234
Good previous discussion with 316 comments:
[https://news.ycombinator.com/item?id=16967675](https://news.ycombinator.com/item?id=16967675)

------
dienciebsiwbsi
The talk of out-of-order execution and caching doesn't make much sense. You
could write in machine code and still have no idea how long a memory access
will take or in what order your instructions will execute, so by this
article's logic machine code ia a high-level language. Maybe "sequential
instructions with flat memory model" is a high-level abstraction of what a
modern machine does, but it is the only abstraction we have. The article
proves not that C is high-level but that modern CPUs offer only a high-level
interface.

(And you can get around some of this, too. You can issue prefetches and so
forth.)
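
For instance, with the GCC/Clang builtin (a pure hint that the hardware is
free to ignore; the look-ahead distance of 8 here is a made-up tuning knob,
not a recommendation):

    /* Software prefetch a few iterations ahead of use. */
    void sum_with_prefetch(const double *a, long n, double *out) {
        double s = 0.0;
        for (long i = 0; i < n; i++) {
            if (i + 8 < n)
                __builtin_prefetch(&a[i + 8]);  /* hint: needed soon */
            s += a[i];
        }
        *out = s;
    }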

------
jasonhansel
But would this prevent the next Spectre? Not necessarily. Hardware vendors
would continue to optimize execution of machine code in unexpected ways,
including ways that allow for side channel attacks. They wouldn't be
optimizing for C code, but they would still be optimizing for _some_ portable
low-level language.

Unless we made that low-level language constantly change as hardware
microarchitecture improves (thus giving up on portability), I think we'd be
back with the same problem of a mismatch between low-level languages and
underlying CPU architecture.

~~~
cryptonector
TFA doesn't even propose a non-serial programming paradigm that programmers
can easily learn. I suppose array languages, maybe? It's hard to imagine a CPU
architecture where there are no branches, or minimal branches, or a language
that greatly minimizes them yet can be used to implement the sorts of software
we're fond of. A lot of what we do with computers is highly parallel, but also
highly serial with lots of logic -- is that the fault of C, or is that natural
and C is just a scapegoat? My money is on the latter.

------
lmilcin
I program in ARM assembler, C, Rust, Java and Clojure on a daily basis, and
for me C is definitely a low level language.

For me a "level" up is something that helps me structure my program in a
fundamentally better way. Java has a VM and a garbage collector, Lisps have
macros and a REPL. For me these are fundamental enablers to create different types of
flows that would not be at all practical in assembly or C.

The difference between assembly and C is just that you need a couple of
instructions in assembly to get the equivalent of a line of code in C, but the
fundamental program structure is the same.
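
Roughly, something like this (the commented assembly is one plausible x86-64
rendering; the exact instructions depend on compiler, flags, and target):

    long scale_and_add(long a, long x, long b) {
        return a * x + b;
        /* roughly:  mov  rax, rdi    ; a
         *           imul rax, rsi    ; a * x
         *           add  rax, rdx    ; + b
         *           ret
         */
    }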

The automation is nice, but other than the reduced overhead, the same problems
that are difficult in assembly are still difficult in C, and for me this means
they are roughly on the same level.

~~~
xenadu02
> The difference between assembly and C is just that you need a couple of
> instructions in assembly to get the equivalent of a line of code in C, but
> the fundamental program structure is the same.

This is absolutely not true, unless you turn off all optimizations. If you
take that route you find that your C code is slower (sometimes orders of
magnitude slower) than code written in other languages.

C is less interesting without breaking the idea that "a couple of instructions
in assembly" == "a line of C code". It is also much less interesting without
undefined behavior (which is what allows many of the optimizations that make
C "fast" and thus "closer to the metal" in many people's minds).

I don't fully agree with the article, but I certainly think it is long past
time for a new systems programming language that is safe-by-default. There is
plenty of proof that we can define away most types of "undefined behavior",
get many kinds of memory safety, while still providing escape hatches for
situations where it really matters. Unfortunately we are still firmly in the
"denial" phase with half of our industry arguing that C is just fine and
dandy.

~~~
lmilcin
Compiler optimizations are something that people were doing by hand in
assembly long before they were implemented in C compilers.

You will also notice that compiler optimizations don't have much to do with a
language being lower or higher level. You go for a higher-level language
because you have performance to spare and you would rather write your
large application with less effort.

------
Marazan
The point is that C has no concept of caches, yet efficient cache usage is
crucial to writing performant low-level code.

So C programmers have to engage in cargo-culted patterns to try and
trigger the correct behaviour from their optimising compiler.
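
A classic, non-cargo-cult example: the C abstract machine says nothing about
caches, yet traversal order alone can change the running time by a large
factor on real hardware (sketch; the array size is arbitrary):

    #define N 2048
    static double grid[N][N];

    /* Row-major traversal touches memory sequentially: cache-friendly. */
    double sum_row_major(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += grid[i][j];
        return s;
    }

    /* Same result, but strides a whole row's worth of bytes per access:
     * many more cache misses, typically several times slower. */
    double sum_col_major(void) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += grid[i][j];
        return s;
    }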

~~~
criddell
I'm always amazed that multi-thread code works as well as it does. I can have
some data in multiple caches being used by multiple threads on separate CPUs
and as long as I get the memory barriers right, it will work. Combine that
with predictive branching and out-of-order execution and it feels even more
magical.
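
A minimal sketch of the "get the memory barriers right" part, using C11
atomics: the release/acquire pair is what orders the write to the payload
before the flag becomes visible across those per-core caches.

    #include <stdatomic.h>

    static int payload;
    static atomic_int ready;

    void producer(void) {
        payload = 42;                              /* plain write */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int consumer(void) {
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                                      /* spin until published */
        return payload;                            /* guaranteed to see 42 */
    }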

~~~
Marazan
Yeah, multi-threading on a modern processor basically seems like witchcraft
to me.

------
talkingtab
You need to define "low-level language" carefully. As the article says, a
language like assembly is indeed close to the metal; however, each assembly
language is too close to some particular metal to work on other metal. On the
other hand, C is probably as close to the metal as you can get and still run
on a z80, z8000, m68k, i386, i286, arm, etc.

We would all love to C(!) a better low-level language that was common
across architectures, but there is not one that I know of. Anyone?

There is indeed a problem associated with Spectre, Meltdown, etc., but
associating that with the C language seems like a misdirection.

------
coreai
This paper revealed to me some of the things that modern designs have adopted,
which the regular uninformed coder would never notice (and may not need to,
most of the time). But recently I have been looking into HFT computers and
I see that these things are running C code (a friend who works at a small
startup said they use C programs for most of the orders) on regular computers,
with most even using off-the-shelf hardware (Intel's 9900KS and 9990XE are hot
targets, and AnandTech and ServeTheHome have shown off some hardware). HFTs are
highly sensitive to optimizations, and it is my understanding that the lower
they go to the hardware, the better the returns, given how competitive it can
get.

With so much in the middle between a high-level C program and the low-level
instructions on a CPU, I wonder if we will see companies like JP Morgan and
Morgan Stanley (big ones with money and time to invest) enter the chip
business heavily. This could then bring back some of those optimizations to
the consumer space, and startups in the area of fast and efficient C code
might then get into trouble. As of now this area seems open to competition.

------
thayne
Sure, a new processor that is designed to be optimized for a different
threading and memory model might be better in a lot of ways, but backwards
compatibility is important. We can't simply throw away all of the existing
software written in C, and in most cases a regression in performance for C
programs would be unacceptable. Sure, such a processor might find niche use
cases where it doesn't need to run any legacy software (including the OS) and
can take advantage of massive parallelism, but I don't see it being able to
replace the prevalent abstract computer model used in CPUs today.

------
daxfohl
Is assembly a low level language? It presumably benefits from instruction
parallelism, branch prediction, caching too.

~~~
non-entity
Not sure, but someone basically told me nasm is not an assembler because it
has optimization options

------
mnowicki
There are no objectively defined tiers for what level each programming
language is on; when people describe a language as low-level they are speaking
relatively. They're saying 'picture a typical programming language; I'm
talking about something like that, except more low-level'.

Almost all people would consider C to be a low-level language compared to most
languages they're familiar with. If you work in assembly all day then maybe
you don't think C is a low level language, but those people aren't the ones
calling it low-level.

Unless someone wants to define a cutoff for what a 'low-level' language is
(and that would be a bad idea; the way it's used now is very useful and
working as intended - it'd be better to make a new word), then I think the way
the term is typically used is perfectly fine.

I could see an argument for saying C is 'less low-level than people typically
assume', but saying it's not low level makes me think it's on the same level
as Java or something.

I'm probably just nitpicking though. I always feel like people ignore the idea
that the point of language is to communicate what you're thinking to someone
else and have them interpret what you're trying to say as closely as possible,
and language is already pretty good at evolving in a way that optimizes for
this. Trying to change that, or to force people to address technicalities that
are outside the scope of what they're trying to communicate, only complicates
it (ignoring obvious exceptions, like a scientific paper where saying things
that are factually wrong for the sake of clarity isn't acceptable).

------
ncmncm
There is no surer route to being blasted on HN than to suggest there is
anything about C that makes it slow, or distant from the actual machine.

Certainly, C is close to assembly language, but assembly language is itself
a compiled and heavily rewritten and optimized language, nowadays.

The actual machines we have today are so complex that we are not smart enough
to program them in actual machine language or anything close to it, but people
insist they want a machine language that looks primitive, and close to C. So,
the manufacturers give us that, and then compile the hell out of it, in
hardware, trying stoically to extract performant instructions from the vague
hints we provide them via the instructions we are willing to give them -- that
resemble C.

We are stuck in a deadly embrace: any new language must perform well on a
machine designed to emulate the C abstract machine, and any new processor has
to emulate that abstract machine.

A new language designed to direct the operations of a wholly different design,
no matter how capable, has no chance to succeed.

The only way forward may be for a language to be designed to program FPGAs
directly, and bypass the whole C-industrial complex. Unfortunately, FPGAs are
still mired in medieval-guild-style secrecy, so there is no more access to
their internals than to mainstream C engines. The access offered is via
Verilog or VHDL, which resembles C. It is not clear to what degree the guts of
FPGAs are compromised by this orientation.

If somebody ever musters the courage to publish a fully exposed FPGA that can
be programmed directly, and can be latticed by the tens or hundreds for
increased power, then it will become possible to create a wholly new language,
meant for execution on machines that don't resemble all the C machines and
that need not be efficiently translatable to them. I won't be holding my
breath.

------
sullyj3
> The root cause of the Spectre and Meltdown vulnerabilities was that
> processor architects were trying to build not just fast processors, but fast
> processors that expose the same abstract machine as a PDP-11.

What would an abstract machine that better matched current processors look
like?

Edit: Probably should've read the rest of the article first

~~~
vectorEQ
This paper might be of interest:
[https://arxiv.org/pdf/1902.05178.pdf](https://arxiv.org/pdf/1902.05178.pdf)

It explains the 'concept' of these kinds of attacks and why it's more or less
impossible to make an abstraction which does not suffer from such flaws in the
presence of high-precision timers, which are in turn needed for high-precision
applications (not sure what exactly; I guess realtime applications or things
which need to measure super precisely... maybe audio/video?).

The paper goes a bit further and IMO is a bit simpler to read than the
original Spectre/Meltdown papers.

------
gpderetta
OoO execution, virtual memory, SIMD, virtualization, microcoding, caches, and
the other features that are claimed to exist to propagate the illusion of a
PDP-11 all predate or are contemporary with the creation of C.

They have been invented because they objectively make computers faster or
easier to use.

------
cromwellian
Don’t GPUs also do prefetching and use internal texture, tile, and
rasterization caches? Don’t some GPU drivers attempt shader optimization to
maximize ILP? Didn’t some GPUs have, or used to have, multiple FMAD and
special ALU pipes?

I’m a little iffy about the idea that we should consider GPU models safe, as
GPUs for most of their history were single-user, and there hasn’t been a lot
of time to attack virtualized containers sharing GPUs.

------
0xdeadbeefbabe
> Low-level languages are "close to the metal," whereas high-level languages
> are closer to how humans think.

Assembly instructions, or the numbers they map to, aren't that hard to think
about, and I prefer them to object hierarchies, for example.

------
mcraealex16
Will RISC-V have any effect on this? Is there a way to use RISC-V that solves
at least some of these problems? It seems like it should be a priority to
solve this, no?

------
Koshkin
I wish my computer had 1024 fast PDP-11 cores in it.

~~~
deRerum
That is an interesting idea. I wonder if a small processor core could be
stamped out on an FPGA so that hundreds of small processors could run
simultaneously.

Then I wish my C++ objects each ran in a separate (virtual) processor
instance. They could have event signalling built into the language as well as
the hardware.

It might force people to partition their design for more fine-grained
parallelism. C++ objects use function calls for interfacing with each other,
and using object methods for event handling is just a ridiculous hack. Each
object should have its own memory space.

~~~
wyldfire
> I wonder if a small processor core could be stamped out on an fpga and
> hundreds of small processors can run simultaneously

I think it's very likely that this idea is well explored. CellBE, Tilera, Xeon
Phi/MIC, GPUs, etc.

> Then I wish my c++ objects each ran in a separate (virtual) processor
> instance. They could have event signalling built into the language as well
> as the hardware.

You're right to identify that the challenge for this kind of design is to come
up with a programming model and an associated IPC or I/O or memory
tier/caching mechanism. The HPC space is a graveyard of accelerator concepts
that never reached critical mass. GPUs have been the rare success. They can
amortize their business across multiple industries. They're already built in
to a lot of PCs, so they're easy to experiment with. OpenGL and later
CUDA/OpenCL created an abstraction that was fast and somewhat portable (not as
much in CUDA's case). The abstraction relieved you of the burden of having to
know much about the device's internal design while still being quite fast.

> Each object should have its own memory space.

I don't think I know what advantage this provides. Can you share more? What do
you do about composition? Nested memory spaces? Sounds challenging and
potentially high overhead.

~~~
deRerum
Yes, the examples you give all have multiple processing elements, but they are
vector processors. I was talking about simple and cheap scalar processors.

GPUs rely on symmetry to simplify the hardware design. Multiple (like 64)
processing elements share the same instruction decoder. They have to access
adjacent registers. So they become vector processors.

CPUs devote a huge chip area to caches and instruction pipelines. GPUs took
out much of that area and complexity and replaced it with raw floating point
computing power. For certain applications this has proven to be a good trade
off.

What I described was a similar trade-off: replacing a few heavily pipelined
processors with massive amounts of cache memory by many smaller, cheaper
cores. I wonder if it might prove to be the optimal micro-architecture for
certain applications, given certain languages and certain data patterns.

------
LessDmesg
If C is not low-level anymore and not fast without a ton of optimizations,
then let's see a truly low-level language that's as fast or faster. Let's see
Erlang or whatever consistently beat C. That's supposed to be easier for a
language that maps better to the modern hardware, right? So where is it?

~~~
di4na
Nowhere, because modern hardware is only developed to be accessed in a C-like
way.

Of course that cripples it, but that is what the market asks for, so it is the
only thing produced.

Or you could look at HPC. Ever wondered why they use Fortran?

------
gobwas
<joke>Why has no one mention Go yet?</joke>

