Should you learn C to “learn how the computer works”? (steveklabnik.com)
343 points by steveklabnik 73 days ago | 375 comments



C doesn't necessarily teach you how computers work. But it does teach you how software works. Our modern software empire is built on mountains of C, and deference to C is pervasive throughout higher-level software design.

>You may have heard another slogan when talking about C: “C is portable assembler.” If you think about this slogan for a minute, you’ll also find that if it’s true, C cannot be how the computer works: there are many kinds of different computers, with different architectures

As with many things, this phrase is an approximation. C is a portable implementation of many useful behaviors which map reasonably closely to tasks commonly done on most instruction sets. C programmers rarely write assembly to optimize (usually to capture those arch-specific features which are unrepresentable in C), but programmers in other languages often reach for C to write the high-performance parts of their code. The same reason C programmers in the early days would reach for assembly is now the reason higher-level programmers reach for C.


> Our modern software empire is built on mountains of C, and deference to C is pervasive throughout higher-level software design.

This I agree with, and I'd also call it the most important reason for Rust programmers to learn C. The C ABI is a lingua franca, not because it's good or pure but because (as with spoken lingua francas) it happened to be in the right place at the right time. It defines certain conventions and assumptions that didn't have to be assumed (implicit stack growth without declared bounds, single return value, NUL-terminated strings, etc.), and a lot of software is written to it, to the point that if you want your (e.g.) Python code and Rust code to interoperate, the easiest way is to get them both to speak C-compatible interfaces, even though you're not writing any actual C code.
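To make that concrete, the shared C-compatible interface can be written down as ordinary C declarations (hypothetical names below); the Rust side would export exactly these symbols and signatures via extern "C", and the Python side would bind to them with ctypes, with no C source compiled anywhere in between:

    /* Hypothetical shared interface. No C is compiled; both sides merely agree
     * on these C types, symbol names, and the platform's C calling convention. */
    #include <stddef.h>
    #include <stdint.h>

    int32_t add(int32_t a, int32_t b);             /* Rust: #[no_mangle] pub extern "C" fn add(...) */
    size_t  count_bytes(const char *nul_string);   /* Python: ctypes.CDLL(...).count_bytes(...) */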


For those who don't know what ABI means (as I did not) and thought it might be a typo.

ABI = Application Binary Interface

https://en.wikipedia.org/wiki/Application_binary_interface

https://upload.wikimedia.org/wikipedia/commons/b/bb/Linux_AP...


Yup. In compiled-languages land it's common to see things that are API-compatible but not ABI-compatible: the compiled objects don't work together, but if you recompiled the same source code it would work with no changes. This happens (sometimes) when you do things like reorder members of a structure or switch to a larger integer type for the same variable.
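As a hedged illustration (hypothetical struct), here's why recompiling fixes it: the source-level API (the member names) is unchanged, but the layout that compiled callers already baked in is not:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Version 1 of a hypothetical library type; compiled callers bake in
     * this size and these member offsets. */
    struct point_v1 { int16_t x; int16_t y; };

    /* Version 2 widens x. Source using .x and .y (the API) still compiles
     * unchanged, but sizeof and offsetof (the ABI) have both moved. */
    struct point_v2 { int32_t x; int16_t y; };

    int main(void) {
        printf("v1: sizeof=%zu, offsetof(y)=%zu\n",
               sizeof(struct point_v1), offsetof(struct point_v1, y));
        printf("v2: sizeof=%zu, offsetof(y)=%zu\n",
               sizeof(struct point_v2), offsetof(struct point_v2, y));
        /* An object file built against v1 still reads y at offset 2,
         * which now lands in the middle of v2's wider x. */
        return 0;
    }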

C and libc have been ABI-stable on most platforms for decades. C++ on some platforms (e.g., GNU) is stable with occasional ABI breakages in the standard library; on others (e.g., MS Visual C++) it breaks with new compiler versions. Rust isn't ABI-stable at all and has no clear plans for it, despite a strong commitment to API stability (i.e., old code will compile on new compiler versions). Swift 5 is targeting ABI stability.


To save you the read: in web terms, the API is e.g. Stripe and the ABI is HTTP.


There is not really any such thing as "the" C ABI -- different platforms have totally different conventions, even on the same instruction set.


It's shorthand for "the ABI relevant to the current context", which is entirely reasonable.


It wasn't at all clear to me from the post I responded to that this is what the poster meant, or that they knew the fact I was stating.


I'm assuming "the" C ABI probably means System V.


C at the very least teaches the difference between stack and heap memory, a crucial concept obscured by most higher-level languages.


There is no difference inside the computer between those concepts.

For convenience, many architectures have a single assembly instruction for taking one register (called the "stack pointer"), adjusting it by a word, and storing a register to the address it now points to, and a corresponding instruction to load data back into a register and adjust it in the other direction. That's the extent of the abstraction. You can implement it with two instructions instead of one, and then you get as many stack pointers as you want. Zero, if you want.

There is no "heap" on modern OSes. There used to be a thing called the program break, beyond which was the heap. It's almost meaningless now. You ask for an area of virtual memory via mmap or equivalent; it is now one of several heaps you have.

And you can pass around pointers from all your stacks and all your heaps in the same ways, as long as they remain valid.

You need to understand this in order to understand how thread stacks work (you typically allocate them via mmap or from "the heap"); how sigaltstack works and why you need it to catch SIGSEGV from stack overflows; how segmented stacks (as previously used in Go and Rust) work; how Stackless Python works; how to share data structures between processes; etc.
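A minimal sketch of that last point (simplified, no error handling): any region you get from malloc or mmap can serve as "a stack", which is roughly what thread libraries do before pointing a new thread's stack pointer into it.

    #include <stdio.h>
    #include <stdlib.h>

    /* A "stack" is just a region of memory plus a pointer you move.
     * Thread libraries do essentially this (usually with mmap) when they
     * create the stack for a new thread. */
    int main(void) {
        size_t size = 64 * 1024;
        unsigned char *region = malloc(size);    /* one of your several "heaps" */
        unsigned char *sp = region + size;       /* grow downward, like most ABIs */

        sp -= sizeof(int); *(int *)sp = 42;      /* "push" */
        int top = *(int *)sp; sp += sizeof(int); /* "pop"  */

        printf("popped %d\n", top);
        free(region);
        return 0;
    }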


I think this is lawyering a bit. There might not be a "heap", per se, but there is stack allocation (which is embedded all the way into the ISA) and "everything else". You can, after all, build a simple "heap" on top of an adequately large static buffer.

Since this is a distinction that is very important both for performance in high-level languages and for correctness in C, and one C forces you to think about and makes plain without having to reason through escape analysis and closures, I think the previous comment's point is well taken.
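For instance, a bump allocator over a static buffer already behaves as a (tiny, never-freeing) heap; a rough sketch:

    #include <stddef.h>
    #include <stdio.h>

    /* A tiny "heap": a static buffer plus a bump pointer. No free(), no
     * metadata: just enough to show that a heap is a software construct. */
    static _Alignas(max_align_t) unsigned char arena[4096];
    static size_t used;

    static void *bump_alloc(size_t n) {
        n = (n + 7) & ~(size_t)7;                /* keep 8-byte alignment */
        if (used + n > sizeof(arena)) return NULL;
        void *p = arena + used;
        used += n;
        return p;
    }

    int main(void) {
        int *a = bump_alloc(sizeof *a);
        double *b = bump_alloc(sizeof *b);
        *a = 1; *b = 2.5;
        printf("%d %.1f (arena bytes used: %zu)\n", *a, *b, used);
        return 0;
    }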


The distinction is important for apartment threading models, such as those found on Windows, where you do have multiple heaps, and pointers allocated from one are not valid in another.


Stack allocation of return addresses is embedded in most ISAs- but stack allocation of local variables isn't (because that's not A Thing at the ISA level).

Consider, say, shadow stacks- local variables might end up living on a totally separate "stack" from the one "call" pushes onto and "ret" pops from, and one which the ISA has no idea about.


In what sense is stack-relative addressing, PUSH, POP, and addition/subtraction on the stack pointer not ISA-level support for stack allocation of local variables?


Addition, subtraction, and stack-relative addressing are, in most architectures I'm familiar with, generic: stack-relative addressing ends up just being a special case of register-relative addressing, and addition and subtraction are rather important instructions in their own right.

PUSH/POP are more obviously "hey store your locals on the stack, kids", but on, say, x64, how often do you actually see a push instead of bp/sp-relative addressing? Most compilers seem to be much happier representing each stack frame as "a bunch of slots, some for things whose address is taken so they need to be in memory, and others to spill things into as needed".

On ARMv7 (I don't know enough about v8 / AArch64), "push"/"pop" are just stores/loads with pre-decrement / post-increment writeback.

Is this lawyer-y? Sure. But I think it's still correct, and this being HN...


On x86, push/pop have dedicated hardware optimization known as the stack engine, which performs most of the rsp increments/decrements and passes those offsets into the decoder instead of using execution slots on them. push/pop are also much smaller than the corresponding mov/add instructions.

It's much more efficient to use a series of push/pops for smaller operations like saving registers before a call than to manually adjust the stack pointer and store onto the stack.

While technically this is still incrementing/decrementing a register and storing, the amount of ISA/hardware support for such things clearly demonstrates that the x86 ISA and modern x86 hardware give special treatment to the stack.


There's a reason 32-bit ARM has 'move base register down and then do some stores' and 'do loads and then move base register up' (stmdb and ldmia) rather than just plain old ldm and stm, and I'm pretty sure it's because it makes the entry and exit sequences for function calls with a stack shorter. (The ARM ISA doesn't privilege a downward growing stack, so you can use stmib and ldmda if you want your stack to grow upward; the Thumb ISA, however, does want a downward stack for push and pop.)


That's definitely a large part of it, but it also makes small memcpys nicer (load sizeof(foo)/4 registers from sourceptr with post increment, store them to destptr with post increment)


They may not be distinguished at the instruction level, but if you’re trying to argue it’s not a useful thing to teach you’ve lost me. Dynamic vs static vs scoped (stack) allocation is incredibly important to reasoning about basic programs.


Like I established in my opening comment, C tells you how software works, not how hardware works. Regardless, you're still wrong in several ways here. Making it out of several discontinuous regions of memory doesn't make your heap less of a heap. Most architectures also provide instructions for loading data from the stack and optimize for this purpose, using their caches more efficiently and making design decisions which have clearly been influenced by the C (or System V) ABI.


For your definition of 'computer', perhaps. Many microcontrollers have the stack defined by the hardware.


And some, such as RISC-V, have no predefined stack pointer at all; a stack register is only a software convention. (Push and pop are implemented by doing them manually; call is implemented by the jump-and-link instruction, which takes an argument for the register that receives the return address.) C, at best, teaches you the C abstract machine, which supports both of these concrete implementations equally well.


Learning how microcontrollers work is not the same as 'learning how the computer works' in general, which is the subject at hand.


Here's the C99 standard:

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf

There are no occurrences of the words "stack" or "heap" in this document.

What the spec actually discusses is "storage durations". Now, in many cases you can say, well, "automatic storage duration" means it's on the stack, but that's not something C has any opinions about.

If you want to know about the stack and the heap, saying "learn C, and then learn how these abstract C concepts map onto the stack and the heap" might not actually be the best way to figure this stuff out.

What actually ends up happening, in my experience, is that to figure C out you have to get a decent mental model of how the stack and heap work, and then, from that, say "automatic storage duration and malloc/free are basically just the stack and the heap".
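Roughly, the mapping looks like this on a typical implementation (the standard only speaks of storage durations; "stack" and "heap" are the implementation's business):

    #include <stdlib.h>

    int global_counter;              /* static storage duration: neither "stack" nor "heap" */

    void example(void) {
        int local = 0;               /* automatic storage duration: typically the stack */
        int *p = malloc(sizeof *p);  /* allocated storage duration: typically the heap */
        if (p) {
            *p = local;
            free(p);                 /* allocated storage ends when you say so */
        }
    }                                /* automatic storage ends here, implicitly */

    int main(void) { example(); return 0; }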

I learned calculus as a kid because I needed it for an online electronics class I was taking. But I'm not going to recommend that folks take electronics classes so they learn calculus! (that said- having a motivation to learn something, having an application in mind, a problem you want to solve... certainly seems to make learning easier.)


Debates about the "stack" rage every couple of years on comp.lang.c. Invariably people conflate two different meanings: (1) an abstract data structure with LIFO ordering semantics, and (2) #1 implemented using a contiguous block of [virtual] memory.

The semantics of automatic storage necessarily imply #1, but they do not require #2. Indeed, there are widely used implementations which implement #1 using linked lists (e.g. IBM mainframes, and GCC's split stacks or clang's segmented stacks).

Similarly, function recursion semantics necessarily imply #1 for restoring execution flow, but not #2. In addition to the examples above, so-called shadow stacks are on the horizon which maintain two separate stacks, one for function return addresses and another for data.[1] In the near future a single contiguous stack may be atypical.

[1] Some variations might mix data and return addresses if the compiler can prove safe object access. Or the shadow stack may simply contain checksums or other auxiliary information that can be optionally used, preserving ABI compatibility.


And just going to throw out there that IA64 split the function return stack and data stack into different hardware stacks, so they've been out for twenty years at this point at least.


"C" doesn't just include the spec but also the customary semantics [1] of the language. The customary semantics certainly include the stack and the heap.

[1] https://arcanesentiment.blogspot.com/2014/12/customary-seman...


People don’t learn C by learning the spec, and virtually every C implementation and runtime I know of uses a stack and a heap.

Oddly you seem to recognize this, so I’m not sure what your point is.


At some point, these conversations seem to always turn into a contest to see who can be the most pedantic. Which I think is pretty fun - and pretty well in the spirit of the original article - but that's me, and I can't fault anyone for being turned off by it.

's funny, I am certainly aware of having, at some point, at least skimmed over the C99 standard and encountered the concept of a storage duration. But, aside from that brief few hours, virtually the entirety of my C-knowing career has been spent thinking and talking in terms of stacks and heaps.

Meanwhile, in my .NET career, I really did (and, I guess, still do) feel like it was important to keep track of the fact that .NET's distinction is between reference types and value types, and that whether or not they were stack or heap allocated was an implementation detail.


Most languages do that internally. It's not a reason you should learn C in particular while you're trying to get an understanding of heaps and stacks.


Indeed. There's a place for language lawyering, but that place is not everywhere.


They just call stack allocation "automatic storage duration" in the spec. If you read it, you'll find you pretty much have to implement it as a stack; you just don't have to keep the frames contiguous in memory. They still need to maintain LIFO access, though, so the only spec-compliant implementations that don't use a traditional stack keep back pointers in each frame to preserve stack-style LIFO semantics.


I think I only got a reasonable understanding of the stack and heap concepts when I got to make a compiler in my undergrad classes, some two years after I learned C.

I'm not even sure the C memory model depends on the code/stack/heap separation.


Or passing by value vs passing a pointer, or the importance of memory management...

Modern languages solve problems created while developing in legacy languages (primarily C). The issue is that knowing a solution without knowing the prior problem it addresses doesn't really lend itself to clarity.
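A minimal example of the by-value vs. by-pointer distinction mentioned above:

    #include <stdio.h>

    /* The callee gets a copy: the caller never sees the change. */
    void set_by_value(int x)    { x = 42; }

    /* The callee gets an address: the change shows through. */
    void set_by_pointer(int *x) { *x = 42; }

    int main(void) {
        int a = 0, b = 0;
        set_by_value(a);
        set_by_pointer(&b);
        printf("a=%d b=%d\n", a, b);   /* prints: a=0 b=42 */
        return 0;
    }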


They don't solve problems created by C, they just solve problems C doesn't. The problems still exist, and sometimes the high-level compiler even picks the wrong solution. Learning C helps you understand when and why this has happened.


Arguably those problems did not exist before the C-era as software wasn't large or complicated enough. C++ directly attempts to improve over C by providing new mechanisms to solve problems encountered by C developers every day.

Point is that out of context, a lot of the things modern languages provide seem superfluous. Why have objects? Why have templates, closures, or lambdas? For a novice programmer, these are many answers for questions yet to be asked.

When you come from a heavy C background and you encounter something like templates, you know EXACTLY what this is for and wish you had it years ago.

I'm as hardcore a C programmer as they get, and I'm having a ball with C# for this very reason.


When I see templates, I run in the opposite direction at a dead sprint. C++ has added very little of value to the language and templates are one of their worst offerings. Metaprogramming is an antipattern in C.


> Metaprogramming is an antipattern in C.

Yet we often see passionate arguments in its favor, as for instance from Sústrik (http://250bpm.com/blog:56) and Tatham (https://www.chiark.greenend.org.uk/~sgtatham/mp/) and a person on the internet who recommends replacing LAPACK with preprocessor macros (http://wordsandbuttons.online/outperforming_lapack_with_c_me...). Would you care to comment on its enduring popularity?


You can write bad code in any language, C is no exception. We also see people persistently arguing in favor of simpler and less magical C. If you want reliable, maintainable code in any language, do not use magic. Metaprogramming is cool, no doubt about it, but it's unnecessary and creates bad code.


"Exploding" undefined behaviour is a problem created by C. Other languages don't have it, not even assembly languages.


There are many languages which have undefined behavior.

C does have a lot of it, and it can be anywhere. In many languages, you can cause undefined behavior through its FFI. Some languages, like Rust, have UB, but only in well-defined places (a module containing unsafe code, in its case).


If a language lets you cause undefined behaviour via FFI into C, I think it's fair to say that that remains a problem created by C.

I do take your point about Rust, but I'd see that as deriving from LLVM's undefined behaviour which in turn descends from C; I'm not aware of any pre-C languages having C-style exploding undefined behaviour or of any subsequent languages inventing it independently.


> I think it's fair to say that that remains a problem created by C.

That is fair; however, I don't think it's inherently due to C. Yes, most languages use the C ABI for FFI, but that doesn't mean they have to; it's a more general problem when combining two systems: they cannot statically track the guarantees of the other system.

With Rust, it has nothing to do with LLVM; it has to do with the fact that we cannot track things, that's why it's unsafe! Even if an implementation of the language does not use LLVM, we will still have UB.


> With Rust, it has nothing to do with LLVM; it has to do with the fact that we cannot track things, that's why it's unsafe! Even if an implementation of the language does not use LLVM, we will still have UB.

I can see that any implementation of unsafe Rust would always have assembly-like unsafeness (e.g. reading an arbitrary memory address might result in an arbitrary value, or segfault). But I don't see why you would need C-style "arbitrary lines before and after the line that will not execute, reads from unrelated memory addresses will return arbitrary values" UB?


> assembly-like unsafeness

> I don't see why you would need C-style "arbitrary lines before and after the line that will not execute, reads from unrelated memory addresses will return arbitrary values" UB

The reason this happens is that we don't compile things down to obvious assembly; they get optimized. Each of those optimizations requires assumptions to be made about the code. If you break those assumptions, then the optimizations can produce arbitrary results. Those assumptions determine what is and isn't UB.

Most languages just don't give the programmer any way to break those assumptions, but languages like C and Rust do. Thus, Rust will always have this 'problem', because it can/will make even more aggressive optimizations than C will, meaning badly written `unsafe` code will get arbitrary behavior and results from the optimizer if the compiler doesn't understand what you're doing.
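The classic illustration is a signed-overflow check written after the fact: since signed overflow is UB, an optimizer may assume `x + 1 < x` can never be true for a signed int and delete the branch entirely. Whether a given compiler actually does so at a given optimization level varies, which is exactly the unpredictability being described; a sketch:

    #include <limits.h>
    #include <stdio.h>

    /* Intended as an overflow check, but it relies on the overflow (UB)
     * already having happened; a compiler may fold this to "return 0". */
    int will_overflow(int x) {
        return x + 1 < x;
    }

    int main(void) {
        printf("%d\n", will_overflow(INT_MAX));   /* may print 1 or 0 */
        return 0;
    }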


You are right that there's no inherent need for UB. However, we made the choice to have it.

The reason to have it is the exact same reason that the distinction between "implementation defined" and "undefined" exists in the first place: UB allows compilers to assume that it will never happen, and optimize based on it. That is a useful property, but it's a tradeoff, of course.


> If a language lets you cause undefined behaviour via FFI into C,

FFI isn't into "C", it's into your operating system's binary format. And since no two systems behave the same, it's UB however you look at it.


> FFI isn't into "C", it's into your operating system's binary format.

Most people call that format "the C ABI," and "FFI into C" is short for "FFI via the C ABI."

> And since no two systems behave the same, it's UB however you look at it.

That's not what UB means. It would be implementation defined, not undefined.


Rust does not have a spec, so all Rust behavior is undefined.


This is literally true but also overly reductive; we do make certain kinds of guarantees, even if there isn't a full specification of the entire language yet.

The reason that a spec is taking so long is that we're interested in making an ironclad, really good spec. Formal methods take time. C and C++ did not have specs for much, much longer than the three years that Rust has existed. We'll get there.


In my opinion as an onlooker of Rust, it seems more interested in shiny features and breaking the language every month than in becoming stable and writing a spec. It's far from replacing C in this regard.


What breaks in the language every month? That’s not the experience our users report. There are some areas where we do reserve the right to tweak things, but we take great care to ensure that these breakages are theoretical, rather than an actual pain. We have large (hundreds of thousands of LOC, and maybe even a few 1MM ones) code bases in production in the wild now, we cannot afford to break things willy-nilly.

You are right that we are far, but that’s because it’s a monumental task, and it’s fundamentally unfair to expect equivalence from a young language. It is fair to say that it is a drawback.


The language doesn't break as much today, but what constitutes "idiomatic" Rust is constantly changing. I don't use Rust but I spend a lot of time with people who do and am echoing what I've heard from them, and seen for myself as a casual user of Rust software and occasional drive-by contributor.

It doesn't have to be a monumental task. Rust is simply too big, and getting bigger.


Thanks. That’s much more reasonable, IMHO.

Specs are always a monumental task. The first C spec took six years to make in the first place!


But C never set out to be what it is today. It was a long time before anyone thought a spec was worth writing. For a language with the stated design goals of Rust, a specification and alternative implementations should be priority #1. You can't eliminate undefined behavior if you don't define any!


After recently (about a year ago) learning Rust, I noticed that Rust teaches the low-level CS concepts of passing by value vs passing by pointer a lot better than C. Because you not only have references and values, but also have to think about the richer semantics of what you are trying to do: sharing, mutability, and move semantics. So you learn not only that a few low-level concepts exist, but also how to use them correctly.


Memory management in C is very different from memory management in JavaScript, which throws exceptions when you try to access invalid memory, but will gladly let you create memory leaks inside event callbacks or other loops and keep chugging along until it can't anymore. Writing a safe, efficient, high-performing JavaScript app is very different from writing the same thing in C, and although I know both deeply well, I don't think learning C has helped me write better JS.


Just playing devil's advocate here... why is this such a "crucial" concept, for someone who is using a higher level language (like Python or Ruby)?

If 99% of what that person does is gluing together APIs and software modules, and they can see their memory usages are well within range, why does it matter?


I can’t make a thorough argument, but u/flyinglizard (in a comment adjacent to mine) mentioned what I was thinking: Ruby/Python almost entirely abstract away pointers, references, and memory. For beginners and intermediates, it’s not a big deal until you get to mutable data structures like lists and dicts. When I learned CS, we were taught C/C++ first, and I remember the concepts of mutability and object identity not being at all difficult.

Without being able to refer to fundamental structures and concepts, I usually just teach students exclusively non-mutable patterns (e.g. list.sorted vs list.sort), while pushing them to become much more proactive at inspecting and debugging.


true. it's like interior decorator vs. interior designer

if you're just going to move couches around and put down throw pillows, who cares about a solid foundation of design fundamentals and architectural psychology.


It's also like fluid dynamicist vs quantum physicists. If you're just going to move oil around and put down differential equations, who cares about a solid foundation of quantum gravity and chromodynamics.


Yep. In truth, one can almost never win the argument of, "...but do I really need to know this?" But people should be aware that knowing fundamental things provides insight and advantages.


For sure, but there are diminishing returns. My rule of thumb is that understanding two levels of abstraction below whatever you're doing is usually worth it.


Compared to people who know only high-level languages, I have noticed that with my C background I can figure out performance problems better, because I have an idea of how the higher-level languages are implemented.


Because we haven't figured out a large body of non-leaky or resilient abstractions yet. If a higher-level construct were indeed a true superset of lower-level functionality, or were robust enough to be applied to a wide variety of situations, then it would be okay. Right now in web software, because we optimize for development velocity so heavily, the tools a smaller product would reach for are fundamentally different from the tools a very high-load service would. Until we come up with a set of robust higher-level abstractions that scale well (or better than they currently do), we'll be stuck where we're at. Thread pools will saturate, GC will stutter, sockets will time out, and all sorts of stuff that higher-level abstractions do not capture well.

(I kept this post vague but can offer concrete examples if you'd like)


> C at the very least teaches the difference between stack and heap memory, a crucial concept obscured by most higher-level languages.

Go does that too. C teaches manual memory allocation, de-allocation and pointer arithmetic as well.


No. In the machine model used by Go, everything is allocated on the heap. The optimizing part of the compiler can then move allocations from the heap onto the stack if it can prove that the allocation does not escape the lifetime of a stack frame.


No, it doesn't. Go does escape analysis and allocates stuff on heap if needed.


You can also learn that difference in something like C#. You don't need C for that.

And C pretends there's a distinction between the stack & heap that doesn't actually exist. There is no significant difference there.


I suggest maybe you learn C :) or Rust.

There are significant performance and strategy differences between the stack and the heap. On the stack, allocation is cheap, deallocation is free and automatic, fragmentation is impossible, the resource is limited, and the lifetime is lexically scoped.

On the heap, allocation might be cheap or it might be expensive, deallocation might be cheap or it might be expensive, fragmentation is a risk, the resource is 'unlimited', and the lifetime is unscoped.

Your statement is equivalent to saying "there's no difference between pointers and integers" - technically, they are both just numbers that live in registers or somewhere in memory. In reality, that approach will not get you far in computer science.


Those performance and strategy differences only exist inside C or other high-level languages, as a result of abstractions created by those languages. They are not in any way reflective of "the computer."

By all means it's certainly a valuable abstraction, one that most high-level languages support. But that's like how functions are a valuable abstraction, or objects, or key-value stores, or Berkeley sockets. Learning those abstractions is absolutely important and also completely irrelevant to understanding "the computer".

(As Dijkstra once said, computer science is no more about computers than astronomy is about telescopes. Learning C is valuable for computer science, but that doesn't mean it gives you a deep understanding of the computer itself.)


You are more than welcome to use a stack in an assembly language too. What you say certainly can be true, but many architectures include dedicated stack pointer registers and operations to manipulate them, either special-purpose (AVR: __SP_H__ and __SP_L__, push, pop) or more generic (x86: %sp, push, pop). I'd argue that functions exist at the hardware level to some extent too, in architectures that support, for instance, link registers (PPC $LR) and special instructions (call, ret instead of a generic jump family). Calling conventions, sure, are an abstraction over functions.


> Your statement is equivalent to saying "there's no difference between pointers and integers" - technically, they are both just numbers that live in registers or somewhere in memory. In reality, that approach will not get you far in computer science.

No, because in reality those are handled by different computational units. Integers are handled by the ALU, and pointers are handled by the loader. They are distinct things to the CPU.

Stack & heap have no such distinction. There isn't even a single heap in the first place; there are as many heaps of as many sizes as you want, because the "heap" concept is an abstraction over memory (strictly speaking over virtual address space - another concept C won't teach you, yet one that is very important for things like mmap). It's not a tangible thing to the computer.

Same with the stack. It's why green-threads work, because the stack is simply an abstraction over memory.


> No, because in reality those are handled by different computational units. Integers are handled by the ALU, and pointers are handled by the loader. They are distinct things to the CPU.

I strongly disagree. Yes, some execution units are more, let's say, "dedicated" to pointers than others, and obviously ultimately you will dereference your pointers, so you will load/store, but compilers happily emit lea to do e.g. A*9+B, and in the other direction, add or sub on pointers. Some ISAs even have almost no pointer-"oriented" registers (and even x64 has very few).


Exactly, this will vary wildly between architectures. I would accept the statement on, for instance, a Harvard architecture but the world is a lot more nuanced in the much more common von Neumann architecture.


No no, haha, that's also not quite right. You can save the stack frame to memory and then load it back, which is what green threads do [1]. This includes the register state, which is not what we're talking about here when we say "stack" -- we're referring to stack variables, not full frames. Full frames are even further from the heap, by virtue of including register state which must be restored as well.

Although, it sounds like you're agreeing with me that there is in fact a difference between the stack and the heap because to your point one exists supported at the hardware level with instructions and registers, and one doesn't exist or can exist many times over. Hence, different.

[1] https://c9x.me/articles/gthreads/code0.html


> On the heap, allocation might be cheap or it might be expensive, deallocation might be cheap or it might be expensive,

In other words it might behave just like "the stack"; the differences between different kinds of "heaps" are as large as or larger than the difference between "the stack" and "the heap".

> fragmentation is a risk, the resource is 'unlimited', and the lifetime is unscoped.

None of these is true in all implementations and circumstances.

Understanding the details of memory management performance is important for particular kinds of programming. But learning C's version of "the stack" and "the heap" will not help you with that.


I'd argue that given special purpose registers exist on most platforms to support a stack, and instructions dedicated to manipulating them (x86 %sp, push, pop) that the stack is in fact a hardware concept. The heap, however, is left as an exercise to the reader.


> I'd argue that given special purpose registers exist on most platforms to support a stack, and instructions dedicated to manipulating them (x86 %sp, push, pop) that the stack is in fact a hardware concept.

True enough; however sometimes access to "the heap" (in C terms) will use those instructions, and sometimes access to "the stack" will not. Learning one or two assembly languages is well worth doing, since they offer a coherent abstraction that is genuinely relevant to the implementation of higher-level languages. Not so C.


Well, you can write a naive allocator that just bumps a pointer and never deallocates things :). To appreciate stack vs heap, you have to learn a bit about memory management and some basic algorithms like dlmalloc, imho.


Definitely, as I've replied elsewhere, the stack is in fact a hardware concept whereas the heap is left as an exercise to the reader.


You're joking. There's a very serious distinction between the stack and the heap - perhaps they live in the same memory but they are used very differently and if you mix them up your things will break.


There is no hardware distinction between stack memory and heap memory.

In fact C teaches a model of a semi-standard virtual architecture - loosely based on the DEC PDP-7 and/or PDP-11 - which is long gone from real hardware.

Real hardware today has multiple abstraction layers under the assembly code, and all but the top layer is inaccessible.

So there's no single definitive model of "How computers work."

They work at whatever level of abstraction you need them to work. You should definitely be familiar with a good selection of levels - and unless you're doing chip design, they're all equally real.


There definitely is, unless there's a hardware heap pointer and hardware allocation and deallocation instructions, as there are for the stack :)


This isn't true. Modern instruction sets have clearly been influenced by and designed to optimize the C ABI.


...like returning a pointer to the stack:

    char *dupstr(const char *src) {
       char new[1024];
       strlcpy(new, src, 1024);
       return new;
    }


stack_ret.c:5:10: warning: function returns address of local variable [-Wreturn-local-addr] return new;


Why is this a warning and not an error? Are there situations in which you would want to return the address of a local variable?


I'm not sure it really pretends there's a distinction so much as C supports stack allocation/management as a language feature, while heap support is provided by libraries.


Exactly. Maybe we could say that for 99% of software, C is the lowest level. It's implemented in something, that's implemented in something, that's implemented in C. So, if you learn it, you can understand your software stack completely.

Also, C is learn once, write anywhere. If there is some kind of computer, then there is probably a C compiler for it. That's not true of any other language to the same extent. Know C and you can program anything.


Exactly, if you want to know how a computer works, you should take a course in Computer Architecture. I remember that subject from BSc quite fondly.


I don't disagree with anything you said, but I wanted to point out you also don't disagree with the author. Your point ("this phrase is an approximation") is also the author's point. I felt he walked through the why fairly and with nuance.


Our popular software stacks are written in C and C++, but that's more because of history than anything else. I rarely reach for just "classic" C for performance anymore. These days I'm more likely to reach for GPUs, SIMD intrinsics (available in Rust), or at least Rust/Rayon for code that needs maximum performance.

C is fundamentally a scalar language. In 2018, the only code for which "classic" C is the fastest is code that exhibits a low level of data parallelism and depends on dynamic superscalar scheduling. Fundamentally sequential algorithms like LZ data compression, or very branchy code like HTTP servers, are examples. This kind of code is becoming increasingly less prevalent as hardware fills in the gaps. Whereas we used to have C matrix multiplies, now we have TPUs and Neural Engines. We used to have H.264 video codecs, but now we have huge video decoding blocks on every x86 CPU Intel ships. Software implementations of AES used to be the go-to solution, but now we have AES-NI. Etc.

As an example, I'm playing around with high-performance fluid dynamics simulations in my spare time. Twenty years ago, C++ would have been the only game in town. But I went with TypeScript. Why? Because the performance-sensitive parts are GLSL anyway--if you're writing scalar code in any language, you've lost--and the edit/run/debug cycle of the browser is hard to beat.


Except even when you think vectorized processing should be a performance win, it often isn't: http://www.vldb.org/pvldb/vol11/p2209-kersten.pdf

I'd argue that GPUs and SIMD instructions have so many restrictions that they're useless for general purpose computing. Yeah, they have niche spaces, but in terms of all the different kinds of programs we write, I think those spaces are going to remain niche.


Vectorised processing is a win according to that paper: it is faster than scalar code. The key point of the paper is that compiled queries (doing loop fusion) are sometimes more of a win (in a database context, where "vectorisation" doesn't always mean SIMD as in this discussion).

Doing fused SIMD-vectorised operations will likely bring the advantages of both, with relatively small downsides. This is a relatively common technique for libraries like Eigen (in C++), which batch a series of operations via "expression templates" and then execute them all as a single sequence, using SIMD when appropriate. (Other examples are C++ ranges and Rust iterators, although these are focused on ensuring loop fusion, and any vectorisation is compiler autovectorisation... but they are written with that in mind: effort is put into helping the building blocks vectorise.)


Depends on what you mean by "general purpose computing". Is machine learning general purpose? Are graphics general purpose? Is playing video and audio general purpose? If those aren't general purpose, I'm not sure what "general purpose" means.

GPUs and other specialized hardware aren't good at everything, and I acknowledged as much upthread, but the set of problems they're good at is large and growing.


Are you promoting Rust in every topic about GC / performance?


Rust is fundamentally a scalar language too. If anything, I'm promoting vector languages like GLSL/Metal/etc.


It varies from application to application.

In my domain, we reach for the CUDA libraries to write the high-performance parts of our code. ;)


That requires special hardware, in contrast to C code.


I spent like a thousand dollars on the box sitting under my desk; I'm pretty sure my C code runs on special hardware too. ;)

(and worth noting: if I pull out the special hardware you're thinking of from that box, my particular thousand-dollar-box is no longer able to run software I need because the GUI requires a graphics accelerator card. The OS authors have already reached for a subset of CUDA to optimize the parts of the GUI that needed optimization).


Intel will not sell you an x86 processor without a GPU capable of compute these days.


That will affect a raspberry pi user how?


Raspberry Pi has a Videocore IV GPU on the chip.


> there are many kinds of different computers, with different architectures

Are there? How many laptops, desktops, smartphones, or tablets use anything other than the von Neumann architecture?


On the hardware level, all of those use the von Neumann architecture. Wasm, however, exposes a Harvard architecture (implemented on top of von Neumann machines). Yet you can compile C to Wasm, which is a data point for how abstract C is: so abstract that it can target a Harvard architecture, even though targeting von Neumann architectures would be sufficient for modern hardware.



C is a useful and universal model of computation not just because most native software is ultimately built on C, but because any hardware that people end up widely using also has to have a sensible C compiler.

The C model is relatively close, I’ve heard, to the hardware of a PDP-11. But there have been tons of abstractions and adaptations on both sides of that coin ever since; not only can a C program provide a language runtime that behaves quite differently from a PDP-11, but the C program itself is compiled to machine code that is sometimes quite different than the code for a PDP-11, and even the low level behavior of the CPU often differs from what’s implied by the ISA. And while all of these models of computation are Turing equivalent, the transformations to and from C are very well known.


I like this article a lot. There are two ways you can think of looking at the field of programming:

* As a continuum from "low level" to "high level".

* As a giant bag of topics: strings, heap, hash tables, machine learning, garbage collection, function, instruction set, etc.

If your goal is to have a broad understanding of CS, you want to explore the whole continuum and many topics. C is great for that because it exists at a sweet spot that's lower-level than most languages but not so low level that you have to jump into the deep end of modern CPU architectures which are fantastically complex.

Because C has been so successful, there are few other successful languages that sit near it on that line. Those that are (C++, Rust) are much more complex. So if your goal is just to get familiar with that region of the continuum and not become an expert in a new language, C has a good price/performance ratio.

Also, it's a good language for getting exposure to many topics other languages hide. If all you know is JS or Ruby, C will teach manual memory management, static types, heap versus stack allocation, pointers versus values, primitive fixed-sized arrays, structs, and bit manipulation. Its sparse standard library means you'll end up implementing many common data structures yourself from scratch, so you'll get a better understanding of growable arrays, linked lists, trees, strings, hash tables, etc.
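As one example, the growable array you end up hand-rolling in C is only a handful of lines (a rough sketch, minimal error handling):

    #include <stdio.h>
    #include <stdlib.h>

    /* A growable array: heap buffer, length, and a capacity that doubles. */
    typedef struct { int *data; size_t len, cap; } vec;

    void vec_push(vec *v, int x) {
        if (v->len == v->cap) {
            v->cap = v->cap ? v->cap * 2 : 8;
            v->data = realloc(v->data, v->cap * sizeof *v->data);
            if (!v->data) exit(1);            /* bail on out-of-memory */
        }
        v->data[v->len++] = x;
    }

    int main(void) {
        vec v = {0};
        for (int i = 0; i < 100; i++) vec_push(&v, i);
        printf("len=%zu cap=%zu last=%d\n", v.len, v.cap, v.data[v.len - 1]);
        free(v.data);
        return 0;
    }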

"Portable assembly" is a nice slogan for C. But it's worth remembering that back when it was coined, the emphasis was on "portable", not "assembly". At the time, C was an alternative to assembly languages, not higher-level languages.

It's never been very close to an assembly language. C has types while assembly is untyped. C has function calls that abstract the calling convention while assembly languages make that explicit. C implicitly converts between types, assembly doesn't.


I still don't understand why you should learn C if you really want to know "how the computer works". The author says "By learning C, you can learn more about how computers work", but why take the long path if a short path exists? Just learn an assembly language. It doesn't take that much time; a few weeks are enough to get a good insight into how a simple CPU works (registers, memory, no difference between numbers and pointers, a stack that grows, etc.). Why spend time learning C if you already know that this is not what you actually wanted? That's like wanting to learn German and then learning Dutch because you heard it's easier than German.

To all the people who are going to reply "But machine language is not how the CPU works!!1!!111": Yes, I know that real modern CPUs translate the machine code to something else internally and that they do things like register renaming and microinstruction reordering and they have ports and TLBs and instruction caches and data caches etc. But CPUs still have registers and an instruction pointer and most of them have a stack pointer and interrupts and exceptions. C will definitely NOT help you to understand those things.


> if you really want to know "how the computer works".

This depends on what the person is really trying to learn. If they are just interested in learning how a simple CPU does what it does, then, yes, going straight to an assembly language for some old chip from the 80s is a good idea. Writing an emulator is a fun and instructive exercise.

But my impression is that most programmers who want to learn how the computer works want to because they want to be able to write more efficient software. They want to understand how computer performance works. Learning a simple CPU is anti-helpful for that. It leads you to believe wrong things like "all memory access is equally fast".

If you want to learn how to write faster code, C is pretty good because it strips away the constant factor overhead of higher level languages where things like GC, dynamic dispatch, and "everything is a reference" mask hardware-level performance effects.

It lets you control layout in memory, so you can control data cache effects. Without a VM or interpreter overhead, most of the branches the CPU takes will be branches you explicitly write in your code, so you can reason about things like branch prediction.
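A sketch of the kind of layout control meant here: the same data laid out contiguously versus as heap-allocated nodes behaves very differently in the data cache, and in C that choice is explicit rather than made for you.

    #include <stddef.h>

    struct particle { float x, y, z, velocity; };

    /* Contiguous layout: sequential traversal streams through cache lines
     * and prefetches well. */
    float sum_array(const struct particle *p, size_t n) {
        float total = 0;
        for (size_t i = 0; i < n; i++) total += p[i].velocity;
        return total;
    }

    /* Node-per-allocation layout: every step chases a pointer and is a
     * potential cache miss. */
    struct node { struct particle p; struct node *next; };

    float sum_list(const struct node *head) {
        float total = 0;
        for (; head; head = head->next) total += head->p.velocity;
        return total;
    }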


Thank you!

> if your goal is just to get familiar with that region of the continuum and not become an expert in a new language, C has a good price/performance ratio.

I think this is a particularly great point in your post. I wonder what AndyKelley thinks of this, as in my understanding, that's sort of what Zig is trying to do as well. That is, Zig is attempting to be a language on a specific spot on the price/performance ratio, as it were.


I agree with this characterization. Zig is trying to directly replace the niche that C represents, in terms of exactly this tradeoff.

So if Zig is successful, in 4 years the title of this post would have been "Should you learn C/Zig to 'learn how the computer works'?" and in 8 years the title would have been "Should you learn Zig to 'learn how the computer works'?". :-)


Excellent :)


I think a new language has to be relatable to another and have some sort of huge defining feature(s) in order to get some momentum. Rust has Rubyists (Ruby is still one of my favorite languages) and C++ folks, plus safety; hopefully Zig has something as well.


I teach C. It's an increasingly terrible way of learning "how a computer works". You would be much better off learning a little assembler.

The problem is that most of the pain of learning C comes from undefined behaviour, which isn't how "computers work". The fact that, depending on optimisation level, writing past the end of an array might write to memory, or might not, is (mostly) a unique feature of C. Similarly, sometimes a signed integer will overflow "cleanly", but sometimes undefined behaviour will kick in and you'll get weird results. This (ironically) makes checking for overflow when adding integers in C annoyingly complicated.

With assembler, if you write through a pointer, you write to that memory location, regardless of whether you "should" be right now. You do multiplication or addition and you get well-defined 2s-complement wrap-around.
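For reference, the UB-free version of that overflow check has to rule the overflow out before doing the signed addition, which is the "annoyingly complicated" part; something like:

    #include <limits.h>
    #include <stdbool.h>

    /* Check before adding: the overflowing addition itself would already
     * be undefined behaviour, so it must never be executed. */
    bool add_would_overflow(int a, int b) {
        if (b > 0 && a > INT_MAX - b) return true;   /* would exceed INT_MAX */
        if (b < 0 && a < INT_MIN - b) return true;   /* would go below INT_MIN */
        return false;
    }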


> You do multiplication or addition and you get well-defined 2s-complement wrap-around.

You get that, if that's what your CPU implements. Of course that's what all commonly used processors nowadays like x86 do. But C was meant to run on weirder ISAs as well. The specs of C allow so much undefined behavior in order to let C just emit the instructions for the multiply or memory access, and C deliberately says "not my problem" for whatever this architecture happens to do with edge-cases like overflow.

Using assembly instead of C cuts past the abstraction of the architecture, for both good and bad. You get defined behavior but lose portability. Practically speaking, yes, x86 assembly probably is a better way to learn without getting distracted by forty-year-old hardware oddities.


When I was learning assembly, one of my favorite things to see was what assembly got generated (more or less) from really, really simple higher level application code (C, C++, etc). Code consisting of really simple arithmetic, loops, function calls, object creation, etc.


As someone who felt that C was the path to knowledge for how modern computer systems "work", Forth and QEMU have become my "stretch challenge" for those with the motivation to tinker.

For me, working through the resources on the OSDev wiki by taking jonesforth and linking it with the bootstrap from the "Writing an OS in Rust" tutorial (https://os.phil-opp.com/) really showed me how far C is from the hardware, and how much closer Forth connects to the idiosyncrasies of a computer system. Further, the immediacy of the Forth REPL helps with little experiments and with building up an understanding of how much a machine's architecture influences code/data structures, opcode choice, privilege management, etc.

Plus, understanding Forth will bend your brain in a very strange but positive way :)


I’m convinced that Forth is a valuable intellectual exercise.

Can you elaborate on how you feel Forth better matches how a computer actually works? I’m not yet convinced on that point, but I have no Forth experience.


The most fun, in a tinkering sense, that I've ever had with low level programming was in a variant of Forth (within Minecraft, years ago when the mod that included it was still up to date).

That experience made me regret that Forth wasn't part of my formal education experience: it is a wonderful slightly above assembly language.

Forth is what should be included in a BIOS as the absolute lowest level interpreted language. A basic machine abstraction could be made by using only interpreted functions/procedures and that could be used to bootstrap add-in routines for attached hardware. It would be very possible to write low level bootstrap drivers that could be used on any architecture providing the specification (including some interface hooks for defining how to register and use an IO interface).


I'm not sure if you're posting this with awareness, but for the public benefit I'll say that this had been done, ages ago indeed.

https://en.wikipedia.org/wiki/Open_Firmware

I've had the pleasure to interact with it while trying to get a SPARC Ultra 60 workstation to work. The driver hooks missing from most modern hardware meant that not all graphics cards could be used with full support - currently BIOS drivers provided on some PCI hardware (RAID, network adapters) are only for the x86 architecture. I forgot what they are called though...


It would be my pleasure :)

I'll refer to jonesforth [1], but I think the lessons apply to the other Forth's I've seen.

The main core of a Forth is generally written in assembly to bridge the specifics of a processing unit and the canonical Forth words. For instance, most control flow words are defined in Forth, but a couple critical ones are implemented in assembly and "exported" up to the higher level words. [2]

The main task of the core is to bootstrap to a sufficiently powerful yet abstract dictionary of words that will allow a full Forth to flourish. But to get there, you'll have to make a lot of choices that will be constrained by your chip (Intel x86, x86_64, ARM, Z80, etc). The big one is type of interpreter (direct threaded, indirect threaded, etc).

In jonesforth, check out the docs and implementation of NEXT [3]. The choice was guided by a quirk of the Intel instruction set that allows loading the memory pointed to by esi into eax and incrementing esi all in one step; further, Intel x86 allows an indirect jump to the address stored in a register (eax). Not all instruction sets support this pattern; you'll have to experiment and investigate for various chips.

That choice is a critical one, but it's one of many. Where do you store the dictionary of words? How do you implement DOCOL? What registers will you preserve? Why? What instructions will you expose? Which ones will you hide? Will you allow new words to be created with machine code?

In C, much of this has been decided by others decades ago (existing operating system ABI choices, mainly). In Forth, you can try things out and only be restricted by the chips, which is what all of us are ultimately constrained by, even fancy-pants languages like Rust and Go :)

Once you get past the core into Forth, some of the computer details recede, but you usually have to be somewhat aware of how you are using the system, far more than modern programming languages. Forth is the ultimate deep-dive "language" [4], but that is as much a curse as a blessing.

Basically, use Forth for the understanding it provides, then go back to making good software in typical languages aided by the enlightenment you've attained :)

[1] https://github.com/nornagon/jonesforth/blob/master/jonesfort...

[2] https://github.com/nornagon/jonesforth/blob/master/jonesfort...

[3] https://github.com/nornagon/jonesforth/blob/master/jonesfort...

[4] I've actually stopped calling Forth a programming language; programming system captures the idea a little better, I find.


I agree about the benefits of a Forth-like language for tinkering. It's still very simple (like C), but the execution model is totally different, simpler, and more functional/mathematical.

I learned the HP programmable calculator version of Forth as one of my first languages, and I loved programming in that model. I think one's brain loses some flexibility if it hasn't programmed in a nonstandard model from an early time.


back in the 1980's when I first got into programming I wrote a 3D flight simulator in Forth ... terrific language ... loved teaching myself how to construct multi-dimensional data structures when the language itself only offered 1D arrays ... back pre-internet folks had to roll up their sleeves and write the plumbing level logic themselves unlike today with endless libraries at hand


Do you have the source to your OS bootstrap code + jonesforth? I'd love to play around with that.


It's hard to share, unfortunately, but I would recommend trying it out yourself regardless. The journey was more rewarding than the destination, at least for me.

In my case, the destination was a very simple Forth interpreter that read input from the serial port and sent a single packet over the virtual NIC interface. There are so many ways to go, that just happened to be my choice.


Thank you for this =)


What really made computers click for me was reading a book whose premise was: "learn just enough assembly to be able to implement C features by hand; we'll show you how". Sadly, I don't remember the title.

Another revelation, much later on, was, as discussed here, the realisation that C is indeed defined over an abstract machine. I think much of that realisation came from reading about how crazy compiler optimizations can be and how UB can _actually_ do anything.


Sounds a bit like "Programming from the Ground Up" [1].

[1] https://savannah.nongnu.org/projects/pgubook/


Or maybe just Professional Assembly Language [1]

[1] http://www.wrox.com/WileyCDA/WroxTitle/Professional-Assembly...


Will check this out as well as the book above. Thank you.


IIRC there was some UB feature that when compiled with GCC would launch nethack.


Found it:

>When GCC identified “bad” C++ code, it tried to start NetHack, Rogue, or Towers of Hanoi. Failing all three, it would just print out a nice, cryptic error message.

https://feross.org/gcc-ownage/


The description is a little off. It was more a C thing than a C++ thing. And it was invoked by using any pragma; the gcc developers at the time had borderline religious objections to the idea of pragmas.


Alright, that is pretty awesome.


I have to ask, because I've wondered for a couple years now, is your name a One Piece reference?


Yes it is! Thank you for noticing!


Nice! As of a couple months ago I'm actually going through the anime series again with my 9 year old. It's interesting getting his take on the characters, since it's a bit different than mine (and he's way younger than I was when I first watched it). For example, he's mostly bored by any Zoro fights, likely because at this point they are a lot of before and after cuts, but Zoro was always one of my favorite characters. We'll see if that continues; we're only just finishing the Skypiea arc.

I will say, waiting until he was capable and willing to read the subbed version was probably the right choice. Dubbed shows of any genre drive me nuts, and I'm not sure he would have the patience for the series if he was younger (the Alabasta arc was still taxing...). I do take a perverse pleasure in hinting about how crazy stuff becomes later, while also convincing him to not ruin it for himself by looking it up. ;)


One valuable property of C that the author didn't hit: C code is easily translatable into assembler in your head. He kind of misses this point with the virtual machine discussion. Yes, C code becomes different assembly on different platforms, but you can look at C and have a clear idea of what the assembly will look like.

This is at a really good level for driving intuitions about what the computer is actually doing. Your concern 99% of the time when you are trying to think at this level is performance, and thinking in C will give you the right intuition about how many instructions your code is generating, what and when memory is being allocated, and which exact bytes are being pulled off the disk for certain function calls.

Modern Swift/Java/JavaScript compilers are so good that they will often generate better code than you would write by hand. This often makes knowing C a less useful skill. But, even so, when trying to understand what a compiler optimization is doing, you are probably thinking in terms of the equivalent C code it is effectively generating. It's at exactly the right level to be clear about what is actually happening without drowning yourself in JUMP statements.
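
For instance, a toy function like the one below is easy to "compile in your head" at this level of granularity (illustrative only; the real output depends on compiler, target, and flags):

  #include <stddef.h>

  /* Summing an array: it's easy to guess the general shape of the output --
     a pointer or index in a register, a load, an add, a compare, and a
     backwards branch per iteration -- at least before the optimizer gets
     clever with unrolling or vectorization. */
  long sum(const long *xs, size_t n) {
      long total = 0;
      for (size_t i = 0; i < n; i++) {
          total += xs[i];   /* roughly: load xs[i]; add into total */
      }
      return total;         /* the result lands in the ABI's return register */
  }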


> Your concern 99% of the time when you are trying to think at this level is performance, and thinking in C will give you the right intuition about how many instructions your code is generating

I really don't think that's true given modern optimizing compilers. I remember back when C "best practices" were:

* Avoid moving code out into separate functions since calls are expensive.

* Hoist subexpressions out of loops to avoid recomputing them.

* Cache things in memory to avoid recomputing them.

But inlining means now we refactor things into small functions and usually expect that to be free. Optimizers automatically hoist common subexpressions out of loops.

And caching is useful, but making your data structures larger can cause more data cache misses, which can be more expensive than just recomputing things. I've seen good CPU cache usage make a 50x difference in performance.

Even in C, optimizing is now an empirical art and not something you can reason about from first principles and running the compiler in your head.
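
To make that concrete, here's a minimal sketch (hypothetical functions, nothing from a real codebase): at -O2, mainstream compilers will typically inline the small helper and hoist the loop-invariant division, so neither of the old "best practices" buys you anything -- and only measuring, or reading the actual assembly, tells you for sure.

  #include <stddef.h>

  /* A tiny helper -- the kind old advice said to avoid because "calls are
     expensive". A modern compiler will almost certainly inline it at -O2. */
  static double scale(double x, double factor) {
      return x * factor;
  }

  void normalize(double *xs, size_t n, double total) {
      for (size_t i = 0; i < n; i++) {
          /* 1.0 / total is loop-invariant; the optimizer will usually
             hoist it out of the loop for you. */
          xs[i] = scale(xs[i], 1.0 / total);
      }
  }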


There is an article about this: "C Is Not a Low-level Language".

https://queue.acm.org/detail.cfm?id=3212479


> One valuable property of C that the author didn't hit, C code is easily translatable into assembler in your head. He kind of misses this point with the virtual machine discussion. Yes C code becomes different types of assembly by platform, but you can look at C and have a clear idea of what the assembly will look like.

I've seen this claim many times. I'll be honest: I can write C, but for the life of me I don't have the slightest clue what the assembly code will look like.


Same here.

At one point, I probably could've looked at C and had a pretty clear idea of what the 68HC12 assembly would look like... but I've never bothered to learn x86/x64 assembly, or ARM assembly, and I've written way more C code for those architectures than I ever did for microcontrollers back in undergrad.


> I can write C, but for the life of me I don't have the slightest clue what the assembly code will look like.

Compiler Explorer can help a ton. https://gcc.godbolt.org/

I recommend liberal use of Compiler Explorer to verify claims during code review.
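
As a starting point, pasting something small like this (purely illustrative) into Compiler Explorer and flipping between -O0 and -O2 shows how different "the assembly your C becomes" can be; at -O2 most compilers reduce the whole loop to closed-form arithmetic, so there is no loop left at all.

  /* Compare the -O0 and -O2 output in Compiler Explorer. */
  unsigned sum_to_n(unsigned n) {
      unsigned total = 0;
      for (unsigned i = 0; i < n; i++)
          total += i;
      return total;
  }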


Why not simply imagine it as something like this:

  ; c = a + b

  load  reg1, a
  load  reg2, b
  add   reg1, reg2
  store reg1, c


Those are the correct steps, but they don't add anything the original C code doesn't already express.

On x86, that might come out as something like:

  mov   eax, [a]
  add   eax, [b]
  mov   [c], eax

which is an entirely different set of operands, instructions, and addressing modes. It's still the same steps, but it's not really telling you anything.


> Yes C code becomes different types of assembly by platform, but you can look at C and have a clear idea of what the assembly will look like.

Not anymore; with modern optimizing compilers it's hard to reason about -O3 assembly output.

https://godbolt.org is an amazing place to play with that.


If you want a developer to have a good intuition for what assembly looks like, the only real way to do that is for them to take an assembly class or to play with assembly a bit.

It doesn't really matter if the assembly they play with comes from C or Rust or C++ or any other language that outputs assembly.

In short, learning C doesn't teach you assembly. Learning assembly teaches you assembly.


You are conflating the fact that a human can reasonably quickly "compile" C into assembly by hand with that being the only way to do it. Often the code emitted by modern clang/gcc is so optimized that there isn't even any assembly left to reason about. Even with -Og I frequently get the famous "optimized out" when trying to look at a variable in GDB, and with -O3 even more so.

You are correct, though, that it is as close as you are going to get compared to many other languages, mostly because they all make use of GC, vtables and dynamic dispatch on almost every line of code, which adds noise to the generated output, or they run behind an interpreter or JIT.


I find this article to be disingenuous. Yes, C isn't "how a computer really works". Neither is assembly. The way a computer works is based on transistors and some concepts built on top of them (ALUs, for example). However, there is no need to know about any of that because you're presented with an abstraction (assembly). And that's really what people mean when they say C is closer to how a computer actually works: it's a language with fewer abstractions than many others (most notably, its lack of garbage collection and object-oriented behaviors, and its small runtime). That lack of abstraction means that you have to implement those concepts yourself if you want to use them, which will give you an understanding of how those abstractions work in the languages that have them built in.
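
For example, even something as basic as a growable array -- built in to most higher-level languages -- has to be spelled out by hand in C. A minimal sketch (error handling kept short):

  #include <stdlib.h>

  /* A hand-rolled growable array of ints: a one-liner in most higher-level
     languages, explicit bookkeeping in C. */
  typedef struct {
      int    *data;
      size_t  len;
      size_t  cap;
  } IntVec;

  int intvec_push(IntVec *v, int value) {
      if (v->len == v->cap) {
          size_t new_cap = v->cap ? v->cap * 2 : 8;
          int *p = realloc(v->data, new_cap * sizeof *p);
          if (!p)
              return -1;          /* the caller must handle allocation failure */
          v->data = p;
          v->cap  = new_cap;
      }
      v->data[v->len++] = value;
      return 0;
  }

  void intvec_free(IntVec *v) {
      free(v->data);
      v->data = NULL;
      v->len = v->cap = 0;
  }

Writing (and debugging) exactly this kind of thing is where much of that understanding comes from.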


But in addition to the mismatch between the abstractions provided and the real hardware, C qua C is missing a huge number of abstractions that are how real hardware works, especially if we pick C99 as Steve did, which doesn't have threading since that came later. I don't think it has any support for any sort of vector instruction (MMX and its follow-ons), it doesn't know anything about your graphics card, which by raw FLOPS may well be the majority of your machine's actual power, I don't think C99 has a memory model, and the list of things it doesn't know about goes on a long way.

You can get to them from C, but via extensions. They're often very thin and will let you learn a lot about how the computer works, but it's reasonable to say that it's not really "C" at that point.

C also has more runtime than most people realize, since it often functions as a de facto runtime for the whole system. It has particular concepts about how the stack works, how the heap works, how function calls work, and so on. It's thicker than you realize because you swim through its abstractions like a fish through water, but, technically, they are not fundamental to how a computer works. A computer does not need to implement a stack and a heap and have copy-based functions and so on. There's a lot more accidental history in C than a casual programmer may realize. If nothing else, compare CPU programming to GPU programming. Even when the latter uses "something rather C-ish", the resemblance to C only goes so deep.

There's also some self-fulfilling prophecy in the "C is how the computer works", too. Why do none of our languages have first-class understanding of the cache hierarchy? Well, we have a flat memory model in C, and that's how we all think about it. We spend a lot of silicon forcing our computers to "work like C does" (the specification for how registers work may as well read "we need to run C code very quickly no matter how much register renaming costs us in silicon"), and every year, that is becoming a slightly worse idea than last year as the divergence continues.


But that's fighting against a straw man. When people advise others to "learn C," they don't mean the C99 specification; they mean C as it is used in the real world. That includes pthreads, SIMD intrinsics, POSIX and a whole host of other features that aren't really C but that every decent C programmer uses daily.

As you point out, modern architectures are actually designed to run C efficiently, so I'd say that's a good argument in favor of learning C to learn how (modern) computers work. Pascal is at the same level as C, but no one says "learn Pascal to learn how computers work" because architectures weren't adapted according to that language.


> Pascal is at the same level as C, but no one says "learn Pascal to learn how computers work" because architectures weren't adapted according to that language.

They used to say it though, before UNIX derived OSes took over the computing world.


A computer does not need to implement a stack

What general purpose computer exists that doesn't have a stack? Push/pop have been fundamental to all the architectures I've used.


Hardware stacks are a relatively recent feature (in historical terms) even though subroutine calls go way back. Many systems did subroutine calls by saving the return address in a register. On the IBM 1401, when you called a subroutine, the subroutine would actually modify the jump instruction at the end of the subroutine to jump back to the caller. Needless to say, this wouldn't work with recursion. On the PDP-8, a jump to subroutine would automatically store the return address in the first word of the subroutine.

On many systems, if you wanted a stack, you had to implement it in software. For instance, on the Xerox Alto (1973), when you called a subroutine, the subroutine would then call a library routine that would save the return address and set up a stack frame. You'd call another library routine to return from the subroutine and it would pop stuff from the stack.

The 8008, Intel's first 8-bit microprocessor, had an 8-level subroutine stack inside the chip. There were no push or pop instructions.


I don't have direct experience, but I believe that System/360 programs have historically not had stacks, opting instead for a statically allocated "program save area".

https://people.cs.clemson.edu/~mark/subroutines/s360.html

XPLINK, a different linkage convention, also for MVS, does use a stack-based convention:

https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/...


RISC-type architectures with a branch-and-link instruction (as opposed to a jsr- or call-type instruction) generally have a stack by convention only, because the CPU doesn't need one to operate. (For handling interrupts and exceptions there is usually some other mechanism for storing the old program counter.)


Can you point me to a RISC architecture that doesn't have push and pop instructions?


Nearly all of them?

ARM's pop is really a generic ldmia with update. You can use the same instruction in a memcpy.

MIPS, PowerPC, and Alpha don't have anything like push and pop, splitting load/stores and SP increment/decrement into separate instructions.

AArch64 has a dedicated stack pointer, but no explicit push pop.

In general the RISC style is to allocate a stack frame and use regular loads and stores off of SP rather than push and pop.


ARM64 uses regular loads and stores to access the stack, I believe. It also is one of the architectures with branch-and-link. https://community.arm.com/processors/b/blog/posts/using-the-...


From the perspective of "stack" as an underlying computation structure: Have you ever played a Nintendo game? You've used a program that did not involve a stack. The matter of "stack" gets really fuzzy in Haskell, too; it certainly has something like one because there are functions, but the way the laziness gets resolved makes it quite substantially different from the way a C program's stack works.

If a time traveler from a century in the future came back and told me that their programming languages aren't anywhere as fundamentally based on stacks as ours are, I wouldn't be that surprised. I'm not sure any current AI technology (deep learning, etc.) is stack-based.

From the perspective of "stack" as in "stack vs. heap", there's a lot of existing languages where at the language level, there is no such distinction. The dynamic scripting language interpreters put everything in the heap. The underlying C-based VM may be stack based, but the language itself is not. The specification of Go actually never mentions stack vs. heap; all the values just exist. There is a stack and a heap like C, but what goes where is an implementation detail handled by the compiler, not like in C where it is explicit.

To some extent, I think the replies to my post actually demonstrate my point to a great degree. People's conceptions of "how a computer works" are really bent around the C model... but that is not "how a computer works". Computers do not care if you allocate all variables globally and use "goto" to jump between various bits of code. Thousands if not millions of programs have been written that way, and continue to be written in the embedded arena. In fact, at the very bottom-most level (machines with single-digit kilobytes), this is not just an option, but best practice. Computers do not care if you hop around like crazy as you unwind a lazy evaluation. Computers do not care if all the CPU does is ship a program to the GPU and its radically different paradigm. Computers do not care if they are evaluating a neural net or some other AI paradigm that has no recognizable equivalent to any human programming language. If you put an opcode or specialized hardware into something that really does directly evaluate a neural net without any C code in sight, the electronics do not explode. FPGAs do not explode if you do not implement a stack.

C is not how computers work.

It is a particular useful local optimum, but nowadays a lot of that use isn't because "it's how everything works" but because it's a highly supported and polished paradigm used for decades that has a lot of tooling and utility behind it. But understanding C won't give you much insight into a modern processor, a GPU, modern AI, FPGAs, Haskell, Rust, and numerous other things. Because learning languages is generally good regardless, yes, learning C and then Rust will mean you learn Rust faster than just starting from Rust. Learn lots of languages. But there's a lot of ways in which you could equally start from Javascript and learn Rust; many of the concepts may shock the JS developer, but many of the concepts in Rust that the JS programmer just goes "Oh, good, that feature is different but still there" will shock the C developer.


> A computer does not need to implement a stack

C doesnt "have", or require, a stack, either. It has automatic variables, and I think I looked once and it doesn't even explitly require support for recursive functions.


You might be thinking of Fortran? C does require support for recursive functions.


You're right. I searched the C99 standard for "recurs" and found the relevant section that briefly mentions recursive function calls.

That means static allocation is insufficient for an implementation of automatic variables in a conformant C compiler. Nevertheless, I still like to think of it as a valid implementation sometimes. In contemporary practice many stack variables are basically global variables, in the sense that they are valid during most of the program. And they are degraded to stack variables only as a by-product of a (technically unnecessary) splitting of the program into very fine-grained function calls.


> We spend a lot of silicon forcing our computers to "work like C does" (the specification for how registers work may as well read "we need to run C code very quickly no matter how much register renaming costs us in silicon"

Can you elaborate? I thought I knew what register renaming was supposed to do, but I don't see the tight connection between register renaming and C.


Register renaming was invented to give assembler code generated from C enough variables, in the form of processor registers, so that calling functions wouldn't incur as much of a performance penalty.

The UltraSPARC processor is a stereotypical example, a RISC processor designed to cater to a C compiler as much as possible: with register renaming, it has 256 virtual registers!


Register renaming is older than C (the first computer with full modern OoOE was the 360/91 from 1964). It has more to do with scheduling dynamically based on the runtime data flow graph than anything about C (or any other high level language).


Agreed with your historical information, but the comment about function calls and calling conventions is not without merit. If you have 256 architectural registers you still can't have more than a couple callee-save registers (otherwise non-leaf functions need to unconditionally save/restore too many registers), and so despite the large number of registers you can't really afford many live values across function calls, since callers have to save/restore them. Register renaming solves this problem by managing register dependencies and lifetimes for the programmer across function calls. With a conventional architecture with a lot of architectural registers, the only way you can make use of a sizable fraction of them is with massive software pipelining and strip mining in fully inlined loops, or maybe with calls only to leaf functions with custom calling conventions, or other forms of aggressive interprocedural optimization to deal with the calling convention issue. It's not a good fit for general purpose code.

Another related issue is dealing with user/kernel mode transitions and context switches. User/kernel transitions can be made cheaper by compiling the kernel to target a small subset of the architectural registers, but a user mode to user mode context switch would generally require a full save and restore of the massive register file.

And there's also an issue with instruction encoding efficiency. For example, in a typical three-operand RISC instruction format with 32-bit instructions, with 8-bit register operand fields you only have 8 bits remaining for the opcode in an RRR instruction (versus 17 bits for 5-bit operand fields), and 16 bits remaining for both the immediate and opcode in an RRI instruction (versus 22 bits for 5-bit operand fields). You can reduce the average-case instruction size with various encoding tricks, cf. RVC and Thumb, but you cannot make full use of the architectural registers without incurring significant instruction size bloat.

To make the comparison fair, it should be noted that register renaming cannot resolve all dependency hazards that would be possible to resolve with explicit registers. You can still only have as many live values as there are architectural registers. (That's a partial truth because of memory.)

There are of course alternatives to register renaming that can address some of these issues (but as you say register renaming isn't just about this). Register windows (which come in many flavors), valid/dirty bit tracking per register, segregated register types (like data vs address registers in MC68000), swappable register banks, etc.

I think a major reason register renaming is usually married to superscalar and/or deeply pipelined execution is that when you have a lot of physical registers you run into a von Neumann bottleneck unless you have a commensurate amount of execution parallelism. As an extreme case, imagine a 3-stage in-order RISC pipeline with 256 registers. All of the data except for at most 2 registers is sitting at rest at any given time. You'd be better served with a smaller register file and a fast local memory (1-cycle latency, pipelined loads/stores) that can exploit more powerful addressing modes.


Why do you claim that local variables and function calls are specific to C? They seem to be very popular among programming languages in general.


Because I'm an assembler coder, and when one codes assembler by hand, one almost never uses the stack: it's easy for a human to write subroutines in such a way that only the processor registers are used, especially on elegant processor designs which have 16 or 32 registers.


I surely did use the stack a lot, back in the old days when coding Z80 and 80x86 Assembly.


You almost had to, because both Z80 and x86 processors have a laughably small number of general purpose registers. To write code that works within that limitation would have required lots of care and cleverness, far more than on UltraSPARC or MC68000.

On MOS 6502 we used self-modifying code rather than the stack because it was more efficient and that CPU has no instruction and data caches, so they couldn't be corrupted or invalidated.


> computer does not need to implement a stack and a heap and have copy-based functions and so on.

AFAIK Intel CPUs have a hardware stack pointer.


That's correct, but "does not need to implement" and "this kind of computer does implement" are not incompatible statements.


> I find this article to be disingenuous. Yes, C isnt "how a computer really works". Neither is assembly. [...]

He addresses all of that in the subsequent bullet points of his summary, and elaborates on it in the body of the article (your criticism stops at his first sentence of the summary). It goes into a nuanced discussion; it doesn't just make a blanket statement. I don't find it disingenuous at all.


"How a computer works" is both a moving target, and largely irrelevant, as shown by the number of ignorant (not stupid) replies in this very topic. RISC and ARM changed the game in the 90s, just as GP-GPUs did in the 00s and neural and quantum hardware will do in the future.

What's really needed are languages that make it easier to express what you want to accomplish - this is still a huge problem, as evidenced by the fact that even a simple data munging task can be easily explained to people in a few dozen words, but may require hundreds of LOC to convey to the machine...

(BTW, it's time for us to be able to assume that the computer and/or language can do base-10 arithmetic either directly or via emulation. The reasons for exposing binary in our programming languages became irrelevant decades ago - ordinary programmers should never have to care about how FP operations screw up their math, it should just work.)


Thank you.


> The way a computer works is based off of transistors and some concepts built on top of that (ALUs for example). However, there is no need to know about any of that because you're presented with an abstraction (assembly).

I'd argue you should know how that works.

Cache misses, pipelining and all these subtle performance intricacies that seem to have no rhyme or reason just fall out of understanding exactly how you get shorter clock cycles, the different types of memory, and the tradeoffs they present.

One of the best things I ever did was go build a couple projects on an FPGA, when you get down to that level all of these leaky (performance) abstractions are clear as day.


I'd argue that you shouldn't be driving a car unless you've rebuilt a transmission yourself, but that argument wouldn't go far.


That'd be like saying your end user should understand cache misses; however, if you're starting a car design/repair shop you might want to know how transmissions work.

The primary reason for dropping down to native code is to get better performance. If you're going to do that, you'll leave 10x-50x performance on the table if you don't understand cache misses, prefetching and the other things that manual memory placement opens up.


I'm not going to play anymore metaphor/semantic games. It's nice that you did that project, but it's not at all necessary for someone to engage in that in order to understand performance issues.


You're the one that raised the metaphor, but okay?

I'm not saying that you can't do performance work without having done that. Just that you'll be at a disadvantage, since you're at the mercy of whatever your HW vendor decides to disclose to you.

If you know this stuff you can work back from first principles. With a high-level memory architecture of a system (say, a tiled vs. direct rendering GPU) you can reason about which operations will be fast and which will be slow.


You're the one that raised the metaphor, but okay

And your response was absurd. You don't rebuild a transmission in order to run a shop. You don't even rebuild a transmission as an engineer creating cars, you shop that out to an organization specializing in the extremely difficult task of designing and building transmissions. I wanted to avoid this waste of time, but here we are.

As for the rest of your comment about reasoning about performance, none of that requires the work you did. Again, neat project (for you), but completely unnecessary in general.


It would be valid, though. A computer programmer must understand how a computer works lest she or he write slow, bloated, inefficient software.


Given how many person-years have been saved and how much value has been produced by "slow, bloated, inefficient software", I must disagree in the strongest possible terms. Producing "slow, bloated, inefficient software" is far, far preferable to not producing software at all.


I would rather have no software, or write the software myself, than waste all the time of my life I've had to waste because of such shitty software; and indeed I've had to write such software from scratch because the alternatives were utter garbage. So we deeply, vehemently disagree.


I would go further and say that how modern computers work at the level that interests most people is somewhat based on C. It would be very different with another hardware abstraction paradigm such as Lisp Machines (as their name suggests), but that's not what we have :).

EDIT: I was going to update my comment a bit now that I thought more about it and that I'm on a computer rather than a mobile phone, but in the meantime jerf posted a very good reply to the parent comment so just read that ^^.


Except it isn't, not really.

Even just the distinction between the stack & heap is wrong. They aren't different things, just different functions called on the otherwise identical memory. It's why things like Go work fine, because the stack isn't special. It's just memory.

malloc & free are also totally divorced from how your program interacts with the OS memory allocator, even. GC'd languages don't necessarily sit on malloc/free, so it's not like that's an underlying building block. It's simply a different building block.

So what are you trying to teach people, and is C really the way to get that concept across? Is the concept even _useful_ to know?

If you want to write fast code, which is what you'll commonly drop to C/C++ to do, then just learning C won't get you any closer to doing that. It won't teach you branch predictors, cache locality, cache lines, prefetching, etc... that are all super critical to going fast. It won't teach you data-oriented design, which is a hugely major thing for things like game engines. It won't teach you anything that matters about modern CPUs. You can _learn_ all that stuff in C, but simply learning C won't get you that knowledge at all. It'll just teach you about pointers and about malloc & free. And about heap corruption. And stack corruption.
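
For what it's worth, the kind of lesson that actually matters for going fast looks more like this than like anything about malloc -- same arithmetic, very different access patterns (a toy sketch; the size of the gap depends entirely on the machine):

  #include <stddef.h>
  #include <stdio.h>

  #define N 1024

  static int m[N][N];   /* ~4 MB: big enough that cache behaviour dominates */

  /* The row-major walk touches memory sequentially (cache- and
     prefetcher-friendly); the column-major walk strides N*sizeof(int)
     bytes per access. C happily lets you write either and never tells
     you which one is fast. */
  static long sum_rows(void) {
      long total = 0;
      for (size_t i = 0; i < N; i++)
          for (size_t j = 0; j < N; j++)
              total += m[i][j];        /* sequential accesses */
      return total;
  }

  static long sum_cols(void) {
      long total = 0;
      for (size_t j = 0; j < N; j++)
          for (size_t i = 0; i < N; i++)
              total += m[i][j];        /* large stride per access */
      return total;
  }

  int main(void) {
      /* Time these two (or look at hardware counters): the difference
         comes from the memory system, not from anything visible in the C. */
      printf("%ld %ld\n", sum_rows(), sum_cols());
      return 0;
  }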


> Even just the distinction between the stack & heap is wrong. They aren't different things, just different functions called on the otherwise identical memory. It's why things like Go work fine, because the stack isn't special. It's just memory.

To add to this: I have seen people who learned C and thought it to be "close to the metal" genuinely believe that stack memory was faster than heap memory. Not just allocation: they thought that stack and heap memory were somehow different kinds of memory with different performance characteristics.

And the C abstract machine maps just fine to computers where the heap and stack are separate, unrelated address spaces, so this isn't even necessarily mistaken reasoning for someone who just knows C.


Separate stack and data memory address spaces would make the machine incompatible with ISO C due to the impossibility of converting between "pointer to void" and "pointer to object". The code address space, however, is allowed to be separate.


> malloc & free are also totally divorced from how your program interacts with the OS memory allocator, even. GC'd languages don't necessarily sit on malloc/free, so it's not like that's an underlying building block. It's simply a different building block.

The realization that malloc is really just kind of a crappier heavy-manual-hinting-mandatory garbage collector was a real eye-opener in my college's "implement malloc" project unit.

(To clarify: the malloc lib is doing a ton of housekeeping behind the scenes to act as a glue layer between the paging architecture the OS provides and high-resolution, fine-grained byte-range allocation within a program. There's a lot of meat on the bones of questions like sorting memory allocations to make free block reunification possible, when to try reunification vs. keeping a lot of small blocks handy for fast handout in tight loops that have a malloc() call inside of them, how much of the OS-requested memory you reserve for the library itself as memory-bookkeeping overhead [the pointer you get back is probably to the middle of a data structure malloc itself maintains!], minimization of cache misses, etc. That can all be thought of as "garbage collection," in the sense that it prepares used memory for repurposing; Java et al. just add the feature that they keep track of used memory for you, without heavy hinting via explicit calls to malloc() and free() about when you're done with a given region and it can be garbage-collected.)
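
The bracketed point -- that the pointer you get back sits just past the allocator's own bookkeeping -- can be shown with a toy wrapper (not how any real malloc is implemented, and alignment is glossed over):

  #include <stdlib.h>

  /* Toy "header lives just before the pointer you were handed" sketch.
     Real allocators are far more sophisticated: size classes, free lists,
     alignment, thread caches, and so on. */
  struct header {
      size_t size;                   /* bookkeeping the allocator keeps for itself */
  };

  void *toy_malloc(size_t size) {
      struct header *h = malloc(sizeof *h + size);
      if (!h)
          return NULL;
      h->size = size;
      return h + 1;                  /* caller gets the bytes after the header */
  }

  void toy_free(void *p) {
      if (p)
          free((struct header *)p - 1);   /* step back to the real allocation */
  }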


Stack accesses _are_ different in hardware these days, which is why AArch64 brings the stack pointer into the ISA level vs AArch32, and why on modern x86 using RSP like a normal register devolves into slow microcoded instructions. There's a huge complex stack engine backing them that does in fact give you better access times averaged vs regular fetches to cache as long as you use it like a stack, with stack-like data access patterns. The current stack frame can almost be thought of as L½.


The stack pointer is just that, a pointer. It points to a region of the heap. It can point anywhere. It's a data structure the assembly knows how to navigate, but it's not some special thing. You can point it anywhere, and change that whenever you want. Just like you can with any other heap-allocated data structure.

It occupies the same L1/L2 cache as any other memory. There's no decreased access times or fetches other than the fact that it just happens to be more consistently in L1 due to access patterns. And this is a very critical aspect of the system, as it also means it page faults like regular memory, allowing the OS to do all sorts of things (grow on demand, various stack protections, etc...)


Google "stack engine". Huge portions of the chip are dedicated to this; if it makes you feel better you can think of it as fully associative store buffers optimized for stack like access. And all of this is completely separate from regular LSUs.

There's a reason why SP was promoted to a first class citizen in AArch64 when they were otherwise removing features like conditional execution.

That's also the reason why using RSP as a GPR on x86 gives you terrible perf compared to the other registers, it flips back and forth between the stack engine and the rest of the core and has to manually synchronize in ucode.

EDIT: Also, the stack is different to the OS generally too. On Linux you throw in the flags MAP_GROWSDOWN | MAP_STACK when building a new stack.


Would it be fair to say that it will teach you how to dictate the memory layout of your program, which is key to taking proper advantage of "cache locality, cache lines, prefetching, etc..."?


> It's why things like Go work fine, because the stack isn't special. It's just memory.

Care to elaborate?


C teaches you how a computer works because the C abstract machine is defined such that operations that must be manually performed in assembly language must also be manually performed in C.

C doesn't let you write something like:

    string x = y;
...because to actually create a copy of a string the computer must allocate memory, copy memory, and eventually free the memory. In C each of these steps is manual. Higher-level languages support high-level operations like the above, which obscures how much work a machine has to actually do to implement them.
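
Spelled out, the work a high-level copy hides looks something like this (a minimal sketch; real string types make very different trade-offs):

  #include <stdlib.h>
  #include <string.h>

  /* What a high-level `string x = y;` (with copy semantics) has to do
     somewhere: measure, allocate, copy -- and someone, eventually, frees. */
  char *copy_string(const char *y) {
      size_t len = strlen(y) + 1;        /* include the NUL terminator */
      char *x = malloc(len);
      if (!x)
          return NULL;                   /* allocation can fail; C makes you decide */
      memcpy(x, y, len);
      return x;                          /* caller owns this memory and must free() it */
  }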


Well... C teaches you how a PDP-11 worked, but modern computers aren't PDP-11s either. Most happen to expose a PDP-11-like structure via x86 assembly, but even that abstraction is a bit of a lie relative to what's going on under the hood.

C doesn't let you write "string x = y" because it doesn't have string as a primitive variable type; that's the whole and only reason. It's not quite correct to say that "the computer must allocate memory, copy memory, and eventually free memory" to do string assignment---it depends heavily on context and how a given language defines 'string' (are strings mutable in this language? If not, that statement may just be setting pointers to two immutable memory locations the same, and that immutability may be maintaned by type constructs in the language or by logical constraints in the underlying architecture like read-only TEXT memory pages---or both!---and so on).

The old Apple Foundation NSString is an interesting example of how the question of string manipulation is a complicated one that doesn't even lend itself well to the "allocate, copy, free" abstraction you've described. Pop that thing open, and you discover there's a ton of weird and grungy work going on inside of it to make string comparison, merging and splitting of strings, copying strings, and mutating strings cheap. There are multiple representations of the string attached to the wrapper structure, and a host of logic that keeps them copy-on-X synchronized on an as-needed basis and selects the cheapest representation for a given operation.


The C = PDP-11 meme doesn't really line up with reality. The features that people normally point to (i.e. the auto increment/decrement operators) were taken from BCPL, which is older than the PDP-11.


> C doesn't let you write "string x = y" because it doesn't have string as a primitive variable type; that's the whole and only reason.

But why does C not have string as a primitive variable type? It's for exactly the reason you state: there are very different approaches a high-level language can take wrt. string ownership/mutability. These approaches might require GC, refcounting, allocation, O(n) copies, etc -- operations that do not trivially map to assembly language.

C is close to the machine precisely because it declines to implement any high-level semantics like these.


I wouldn't call "lacks a feature" the same as "close to the machine" any more than I'd call "specifically refrains from defining whether 'a' or 'b' is first evaluated in an expression like 'a - b'" as "close to the machine." Machines lack statements and expressions entirely, but C has those; the reason that C refrains from being explicit on order of evaluation for (-) and other operators is that on some machines, evaluating b first, then a, then difference, lets you save a couple of assembly instructions at compile-time, and on some other machines, the reverse is true. So C's ambiguity lets the compiler generate code that is that much faster or terser on both machines (at the cost of, well, being Goddamn ambiguous and putting the mental burden of that ambiguity on the programmer ;) ).

Perhaps it is more correct to say not that C is "closer to the machine," but that C "tries to position itself as somewhat equidistant from many machines and therefore has design quirks that are dictated by the hardware quirks of the several dozen PDP-esque assembly architectures it's trying to span." Quite a few of C's quirks could go away entirely if someone came along, waved a magic wand, and re-wrote history to make K&R C a language targeted exclusively at compiling to Intel-based x86 architectures, or even one specific CPU.


The exact reason why C shouldn't be used to write high level software.

Also, the set of things you can do in standard C is a subset of the things you can do in assembler.


heated debate over where to draw the line between low and high level software ensues


Grumble grumble...

Procrastination ensues


> *operations that must be manually performed in assembly language must also be manually performed in C.*

That's not true, and if it were it would defeat the whole purpose of C: being a higher-level abstraction than assembly.

To give one obvious example, in C I can just call a function, no thinking required. Calling a function in assembly requires manually pushing stuff onto the stack and then restoring it again.


I definitely would not consider a CS education complete without C. If you don't know C, that means you don't know how parts of operating systems work. It definitely makes you a less useful engineer.

I don't think people need to be able to code in C at the drop of a hat. But it should not be a scary thing for good engineers.


Related article by Joel Spolsky: "The Perils of JavaSchools" [1]

C really will test you and make you a better programmer than any high-level language ever could. Someone who is a good C programmer will write better code in any language than someone who is just focused on high-level languages. The same could not be said for Python, Java, or JS.

1. https://www.joelonsoftware.com/2005/12/29/the-perils-of-java...


Nonono. I've seen a good engineer write C code in C#. I think you should start with Scala or Rust and do the blob languages later.


When I started my CS degree most home OSes were still mostly written in Assembly, and in my geography Turbo Pascal was the language to go to when Assembly wasn't required.

My university introductory classes were Pascal and C++.


> If you don't know C, that means you don't know how how parts of operating systems work.

How so? It's not like you can't write operating systems in other, more readable and understandable, languages.


The historical dominance of C over the last 30-40 years is, by itself, enough to justify C as a core part of a computer science curriculum.


Yeah I meant "operating systems used by 99% of people". I don't mean to say you need to know C to understand operating systems in general.


What C teaches is that the underlying memory model is a flat, uniform, byte-addressed address space.

One of the consequences of C is the extinguishing of machine architectures where the underlying memory model is not a flat, uniform, byte-addressed address space. Such as Symbolics or Burroughs architectures, or word-addressed machines.


We have our differences, but you’re totally correct here, and I’m not sure why you’re downvoted.

The “byte addressable” thing is exactly why C was created over B, even, right? That was one of the crucial features not supported.


I don't think he is correct. The underlying representation of a pointer is not defined by the C standard. You could have a C implementation that works on segmented, non-flat architectures. Look at the C compilers from the DOS days, for example...


I think we're talking past each other, and in some ways, this is what the post is about.

The C abstract machine presents a flat, uniform, byte-addressed address space.[1] The reason that it does not define a pointer's representation is that it needs to map that to what the hardware actually does, which is your point.

1: Actually, my impression is that it does, but this is actually an area of the spec in which I'm less sure. I'm going to do some digging. Regardless, the point is that the memory model != the machine model, which is your point.

EDIT: For example, https://en.cppreference.com/w/c/language/memory_model

> The data storage (memory) available to a C program is one or more contiguous sequences of bytes. Each byte in memory has a unique address.

Though cppreference is not the spec itself, of course. It is the conceptual model that's often presented.

LAST EDIT: I happened to run into someone who seemed knowledgeable on this topic on Reddit: https://www.reddit.com/r/programming/comments/9kruju/should_...

TL;DR no! This is another misconception. Tricky!


Yep, these common assumptions are not always true if you read deep enough into the spec. Even though they may be true in practice, only as a side effect in most implementations...

I taught myself C when I was a teenager (on an Amiga, back in the 80's.) It is amazing that almost 30 years later, I'm still learning new things about it.


I just looked up B (again, had seen it earlier):

https://en.wikipedia.org/wiki/B_(programming_language)

From the above article:

>B is typeless, or more precisely has one data type: the computer word.

I had read the Martin Richards (inventor of the language) book about BCPL (predecessor of B) early in my career, although I could not work on it, since there was no machine available to me that could run it. (It was already outdated then.) Interesting language though, and even the systems software examples shown in the book (written using BCPL) were cool.

You're right about why C was created: byte addressability.

From the article about C:

https://en.wikipedia.org/wiki/C_(programming_language)

>The original PDP-11 version of Unix was developed in assembly language. The developers were considering rewriting the system using the B language, Thompson's simplified version of BCPL.[11] However B's inability to take advantage of some of the PDP-11's features, notably byte addressability, led to C.


I'm not sure why C was created over B. ANSI C had more abstraction than K&R C; early versions of C didn't have function prototypes or type checking. There seems to have been a slow progression towards more abstractions as the machines used for compiling got bigger. Many of the early bad decisions in C probably come from having to run the compiler on a very small, slow machine. Today we expect to get the whole program into an in-memory data structure within the compiler, but that was not possible in the PDP-11 era.

C abstracts control flow, but not memory access. C data access is pretty much what the hardware gives you. Memory is bytes addressed by integers. There's even pointer arithmetic. C puts a layer of data structures on top of that, but the underlying memory model is one "(char *)" cast away. Every call to "write" implies that cast.

By the time you get to, say, Python or Javascript, that flat memory model is completely hidden. There could be a compacting garbage collector underneath, changing the addresses of everything while the program is running, and you'd never know.
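
A small illustration of that "one (char *) cast away" point (assuming a typical little-endian target for the byte order shown in the comment):

  #include <stdio.h>

  /* Any object can be reinterpreted as a flat sequence of bytes via a
     character-pointer cast -- C exposes the raw representation, endianness
     and padding included. */
  int main(void) {
      unsigned int x = 0x11223344;
      const unsigned char *bytes = (const unsigned char *)&x;

      for (size_t i = 0; i < sizeof x; i++)
          printf("byte %zu: 0x%02x\n", i, bytes[i]);   /* 44 33 22 11 on little-endian */

      return 0;
  }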


The memory model you describe is overly simplistic, and is not actually mandated by the C standard. Too many assumptions in that area will lead to undefined behavior.

Example: conversions between function pointers and regular pointers... https://stackoverflow.com/questions/32437069/c-void-function...


This is an excellent essay, and touches on a lot of the key ideas both of what C is and of how C maps to the hardware. In particular, I completely agree with where Steve lands here, which is that C can give you more understanding of how computers work but won't necessarily reveal some "deep ultimate truth" about the zen of computers. Especially in light of this essay recently shared on HN (https://blog.erratasec.com/2015/03/x86-is-high-level-languag...), which makes the excellent point that even a language that is compiling down to raw x86 assembly is still operating in a "virtual machine" relative to what goes on in the actual chip and the flow of electrons through the silicon-metal substrate.

It's abstractions all the way down.


Coming from an operating system background, I find the article's use of "virtual machine" very bizarre. The article confuses very different things by stating that "'runtime', 'virtual machine' and 'abstract machine' are different words for the same fundamental thing".


What does "virtual machine" mean to you? I have some guesses, but I don't want to spoil it by suggesting some.

I think that these terms are some of the most confusing our discipline has.


"global" and "static" are pretty overloaded terms. In LLVM, for example, Global Value Numbering has nothing to do with global values. "const[ant]" is also pretty bad, but that's more because there's several slightly different notions where the distinctions between them are pretty important.


"Virtual machine" has a broader sense than emulation or virtualization within a hypervisor. From the perspective of an assembly programmer, C implements a low level virtual machine that abstracts much of the drudgery that assembly requires. For instance, you can store 32 signed bits in a variable without caring about the native word size and bit representation in the underlying hardware (though in some circumstances you may still care). That is the "virtual" part of C's abstraction.


In some contexts these mean the same thing (a layer of abstraction), while in other, more technical, they mean different things.


How do you feel about the "Java Virtual Machine" then?


A java virtual machine runs java byte code. It makes sense because the compiler emits code for a machine that is an abstract concept, not something that physically exists, and the virtual machine executes this bytecode.

In typical use of a C compiler, you get code for the target processor, i.e. x86. In theory, you don't have to worry about the underlying hardware, so you could say you are programming for some abstract machine that runs C code, but it is much less meaningful than saying java has a virtual machine.


Or LLVM, the Low Level Virtual Machine?


... which is now just LLVM, because it was so confusing! http://lists.llvm.org/pipermail/llvm-dev/2011-December/04644...


The part that's closer to how the computer works is the part that lets you write an allocator.

Witness any discussion of a pointer bug at any level of the stack. No matter what the piece, someone will jump in and say, "that should have been bounds checked!" A lot of times it's true. But what they miss is that the bounds check needs to come from somewhere, which means there is a layer of the system where it doesn't apply. Somewhere you have a larger piece of memory, and it gets divided into smaller chunks, and the boundaries are fiction, meaningful only to a higher level abstraction. This is the way it needs to work.

Reducing the code in which that's a concern is likely legitimate. But then you enter into "that's not really how it works".
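
A toy arena allocator makes the point visible: the "objects" below are just offsets into one big block, and their boundaries exist only in the arena's bookkeeping (a minimal sketch, ignoring alignment and growth):

  #include <stdlib.h>

  /* One big piece of memory, carved into smaller "objects" whose boundaries
     exist only in this bookkeeping -- the hardware neither knows nor cares. */
  typedef struct {
      unsigned char *base;
      size_t         used;
      size_t         cap;
  } Arena;

  void *arena_alloc(Arena *a, size_t size) {
      if (a->cap - a->used < size)
          return NULL;                  /* the only "bounds check" there is */
      void *p = a->base + a->used;
      a->used += size;
      return p;
  }

  int main(void) {
      Arena a = { malloc(1 << 16), 0, 1 << 16 };
      char *first  = arena_alloc(&a, 64);
      char *second = arena_alloc(&a, 64);
      /* Nothing but convention stops first[70] from scribbling on second. */
      (void)first; (void)second;
      free(a.base);
      return 0;
  }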


Do you need to learn C to have a successful career in CS or any other field? No. Should you learn it? Yes.

Do you need to bake bread to eat it? No. Should you learn to bake it? Yes.

There are lots of things you should learn to do because they're useful and teach you about how the world works. The miracle of society is that you don't have to learn most of them to enjoy using them.


Yes! I rarely use C these days to write mobile apps, but knowing how to use C is absolutely great for being able to do some complicated things sometimes. It never hurts to know more than you need.


That bread analogy is on point. The old quote comes to mind,

>A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyse a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects. -- Robert Heinlein


I disagree with the bread analogy being on point.

For one, it misses the fact that learning how to bake bread doesn't benefit the eating process, whereas learning how to code in C benefits (but is not required for) general programming.

I understand that analogies are always compromises, because there is always going to be something that differs from the original motif. Also, analogies should be used when the motif is complicated, with many layers of abstraction, and using one adds something -- whether it is clarity, succinctness or reduction of abstraction. In the case of the parent comment, it is not too difficult to understand the original motif, so the bread analogy adds virtually nothing to it.


> For one, it misses the fact that learning how to bake it doesn't benefit the eating process.

Maybe it works differently for bread, but with whisky, learning how it is made has certainly enhanced my consumption, if only by giving me the language needed to describe flavors and compare and contrast.


Worth noting that Heinlein didn't necessarily actually believe this himself. It's spoken by one of the many classic polymath characters which he liked to include in his stories.


Learning to read Ancient Greek should be on that list.


I would add being able to clean & jerk your own body-weight, swim a mile, and be able to barter with anyone from any culture.


On the topic of "C is portable assembler":

When you ask what portable assembler is, there's actually three different things you could mean:

1. C is portable assembler because every statement maps pretty directly to assembly. Obviously, compiler optimizations tend to make this completely not true (and most of the rants directed towards compiler developers are precisely because they're eschewing naive mappings).

2. You can represent every (reasonable) assembly listing using C code. Well, except that C has no notion of a SIMD value (yes, there are extensions to add vector support). Until C11, it couldn't describe memory ordering (see the sketch after this list). Even with compiler extensions, C still doesn't describe traps very well--any trap, in fact, is undefined behavior.

3. Every assembly instruction can have its semantics described using C code more or less natively. Again, here C has an abstract machine that doesn't correspond very well to hardware. There's no notion of things such as flag registers, and control registers are minimal (largely limited to floating-point control state). Vectors and traps are completely ignored in the model. Even more noticeably, C assigns types to values, whereas processor definitions assign types to operations instead of values.

Note here that I didn't need to invoke undefined behavior to show how C fares poorly as portable assembler. The problem isn't that C has undefined behavior; it's that a lot of machine semantics just don't correspond well to the abstract semantics of C (or indeed most languages).
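
On the memory-ordering point in item 2: since C11 there is at least a vocabulary for it. A minimal sketch of the classic release/acquire message-passing pattern, assuming a C11 compiler with <stdatomic.h>:

  #include <stdatomic.h>

  /* Release/acquire message passing -- expressible in the language itself
     only since C11; before that, C simply had no way to talk about ordering. */
  int payload;
  atomic_int ready;

  void producer(void) {
      payload = 42;
      atomic_store_explicit(&ready, 1, memory_order_release);
  }

  int consumer(void) {
      while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
          ;                            /* spin until the store becomes visible */
      return payload;                  /* guaranteed to observe 42 */
  }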


C conceptually is the nearest layer to the hardware that you'll get outside assembler. What that means is that most C operations can be easily translated into the machine code equivalents and that makes debugging of compiled code so much easier. A disassembly of many binaries will reveal their C roots by function calls being equivalent to jump statements, as an example. If you have the C source then you can usually figure out what's going on.

That simplicity and the smallness of the C language make it truer to the hardware underneath, and you don't need to descend through layers of frameworks, as with a high-level language such as Python, to understand what the heck is going on.


> What that means is that most C operations can be easily translated into the machine code equivalents and that makes debugging of compiled code so much easier. A disassembly of many binaries will reveal their C roots by function calls being equivalent to jump statements, as an example. If you have the C source then you can usually figure out what's going on.

This is very much not true in my experience. Optimization (even just -O1, which is required to make C be more worth writing than Python for large-scale apps) will do things like reorder statements, inline functions, elide local variables, set up tail calls, etc. It is a specific skill requiring experience to be able to understand what's happening when stepping through a C function in a debugger, even with source at hand and debugging information included. If you haven't been frustrated at "Value has been optimized out" or breakpoints being mystically skipped, you haven't done enough debugging work to really acquire this skill, and saying that C maps to the machine is just theory.

There is value in a highly unoptimized language toolchain for debugging, sure. But honestly CPython is closer to that than any production C binary you're likely to see.


Why would you turn on optimization while debugging?


Once you get used to it, it's not terrible. I don't debug on anything other than release builds (but with symbols) so that I know that I'm debugging the actual issues.


Sometimes the bugs don't happen when optimizations are off. (-:


That'd be a compiler bug, then. I don't think those would be common.

EDIT: I suppose the other possibility is that the program is doing some weird things, like reading its own machine code from memory.


Neither of the above are required: your code might be racing with some other API. If your code finishes fast enough, the other code isn't ready for you. With optimizations off, you consistently lose the race and the bug doesn't show up.


Or undefined behavior in the program, which happens to manifest differently under different compile options.


Because with them turned off, the executable might not fit on the target platform, for example.

Quite common on games targeting consoles.


Because you're debugging a failure in production code. Extraordinarily common.


Scaling your code for benchmarking criteria is a good example.


You can get that as well with NEWP, PL/I, Extended Pascal, BLISS, Modula-2 and tons of other system languages.


Maybe, but there's a reason C emerged as the winner: none of those languages really has an established reference model to look at. C has several, e.g. UNIX being written in C.


UNIX being available for free with source code is what helped C gain adoption.

The nominal fee universities had to pay AT&T was meaningless compared to what something like VMS or OS/360 would cost them.
