
“C is not how the computer works” can lead to inefficient code - steveklabnik
https://words.steveklabnik.com/c-is-not-how-the-computer-works-can-lead-to-inefficient-code
======
jpochtar
For many people, "C" means C+{gcc/clang}+{x86/arm}, which has much more
specific semantics than the C99 abstract machine.

They'll ask interview questions like "what's sizeof(int)" expecting a number,
and will find "it depends on the compiler/architecture" evasive, unless you
follow it up with "but it's always 4". To be fair, this is how 99% of people
use C in practice, and x86/arm behave pretty similarly to each other in the
scheme of things (vs, say, a LISP machine).

Taking into account ABI constraints, including x-{language,module} call
safety, threading expectations, and more, a typical C programmer has to have a
decent idea of how the machine works, relative to a Python/JS/Ruby programmer.

I don't love that someone "learning C to learn how the machine works" will
think malloc()/free() are how the machine works, and will have wrong ideas
about memory when using, say, V8's bump-allocated nursery + compacting GC. But
it's not a bad first step for someone who doesn't know what a pointer is.

~~~
_bxg1
> in the scheme of things (vs, say, a LISP machine)

Heh

~~~
jpochtar
;)

------
karmakaze
It's unproductive to talk about a "C abstract machine" then go on to say how a
hardware cache invalidates a program's correctness.

------
snagglegaggle
You see similar issues with e.g. Haskell or other lazy FP languages, where
certain idioms behave wildly differently from each other.

Ultimately I'm not sure what his complaint is. He says "If it takes too long,
or takes no time at all, that’s not the language’s concern, strictly
speaking." Would you trade correctness for speed? Would you make programs
maintain poor performance in the face of hardware upgrades?

What is the point of either?

~~~
_bxg1
I don't think he's complaining so much as pointing something out.

All programming is abstraction. Even binary could be seen as obscuring the
analog physical properties of the chip. What you have to decide is what level
of abstraction best presents the things you care about, and sets aside the
things you don't.

I personally always lean towards abstractions that are strong when it comes to
correctness, even if that means they're leaky when it comes to performance.
You can always go back and optimize later.

This is why I hate writing C(++). At the language level they may enforce
correctness assumptions well enough, but abstractions _written_ in them are
incredibly leaky in terms of correctness.

------
zach43
I wonder if there is a typo in the title: '"C is how the computer works" can
lead to inefficient code' makes more sense to me.

~~~
dang
I think you're right so we've de-notted the title above.

~~~
im3w1l
I think you got it wrong.

“C is not how the computer works” is a statement about C controlling an
abstract machine rather than the metal.

The article's thesis is that if you only think about the abstract machine
("not how the computer works") and don't take the metal into account, then you
can get inefficient code.

~~~
dang
Ok, we'll re-not it.

