
Deconstructing K&R C Is Dead (2015) - rebekah-aimee
http://c.learncodethehardway.org/book/krcritique.html
======
rhexs
Low level languages are tough. Gaining a mastery of C does require knowing
quite a few strange rules and quirks and it's certainly a bit harder than
learning Python. The C FAQ does a good job of illustrating some of the more
confusing parts. Sure, I wish I could write Go instead, but that isn't going
to happen on the many embedded systems I work on.

This is a rather strange and insulting article. I'm not sure why Zed can't
help "old programmers" nor do I understand why he's angered that individuals
know about undefined behavior in C. Is there any background to this or did he
have the misfortune of being insulted on IRC?

Edit -- I googled for a bit and discovered this was in response to someone
doing a pretty good job technically reviewing the book for free!
[http://hentenaar.com/dont-learn-c-the-wrong-way](http://hentenaar.com/dont-
learn-c-the-wrong-way) Perhaps the title was a bit inflammatory.

Zed's rebuttal is at [https://zedshaw.com/2015/09/28/taking-down-tim-
hentenaar/](https://zedshaw.com/2015/09/28/taking-down-tim-hentenaar/) and is
a great example of how not to react to constructive criticism. My favorite
part is his safercopy function and the lack of size_t.

And finally, to leave us all with a quote from Zed's rebuttal:

"Over this next week I’m going to systematically take down more of my
detractors as I’ve collected a large amount of information on them, their
actual skill levels, and how they treat beginners. Stay tuned for more."

Wow.

~~~
kinkdr
> Low level languages are tough

I disagree. Low level languages, especially C, are the easiest to master. K&R
book is the only book you need to read to know everything about C. All you
need after you understand the fundamentals is a bit of discipline.

C++ on the other hand is extremely difficult to master. Just have a look at
the rules for Rvalue references and you will see what I mean.

It may be easier for a complete novice to write some code that doesn't crash
in C++ than it is in C, but that's not the same as mastering it, or even being
good at it.

~~~
haberman
> K&R book is the only book you need to read to know everything about C. All
> you need after you understand the fundamentals is a bit of discipline.

I am a huge C fan but this is not true at all. C has tons of pitfalls,
especially with modern UB-aggressive optimizing compilers. There are a lot of
rules you need to be aware of that are not naturally-occurring results of the
fundamentals.

~~~
mpweiher
> especially with modern UB-aggressive optimizing compilers.

You put your finger on the problem: "modern UB-aggressive optimising
compilers". C, the language, is actually quite simple (if not easy). The crazy
stuff that compiler writers have been doing recently, while aggressively
misreading the C standard, is the problem, and it does make things very
complicated.

Why "misreading"?

From 1.1:

"The X3J11 charter clearly mandates the Committee to _codify common existing
practice_."

 _Their_ emphasis, not mine. So is there a mandate to use the definitions of
the standard to _invalidate_ common existing practice? Clearly not. Yet that
is what is happening.

More from the standard (defining _UB_ ):

" _Undefined behavior_ gives the implementor license not to catch certain
program errors that are difficult to diagnose. It also identifies areas of
possible conforming language extension: the implementor may augment the
language by providing a definition of the officially undefined behaviour."

Does it say "Undefined behaviour gives implementors license to add new
optimisations that break existing programs"? Clearly and unambiguously not.

See
[http://port70.net/~nsz/c/c89/rationale/a.html#1](http://port70.net/~nsz/c/c89/rationale/a.html#1)

~~~
the_why_of_y
Your interpretation of "codify common existing practice" would imply that _no_
new compiler optimizations could be implemented since 1990 (when the first
version of the standard was published), as any optimization could potentially
change the observable execution behavior of an erroneous program that contains
UB.

> More from the standard (defining UB):

Your quote is not from the normative text of the standard, but from the non-
normative rationale. Note however that it explicitly says that programs that
contain undefined behaviors are erroneous, and that the implementation is not
required to emit diagnostics for the UB. Pretty clearly this allows
implementations to optimize erroneous programs into whatever they think is
funny this week.

The normative text of the standard is pretty unambiguous:

    
    
        undefined behavior
        behavior, upon use of a nonportable or erroneous program construct or of erroneous data,
        for which this International Standard imposes no requirements
    

[http://www.iso-9899.info/n1570.html#3.4.3](http://www.iso-9899.info/n1570.html#3.4.3)

~~~
mpweiher
> Your interpretation of "codify common existing practice" would imply that no
> new compiler optimizations could be implemented since 1990

Utter nonsense. I use that word carefully, but in this case it is absolutely
appropriate.

Compiler optimisations, per an old but very useful definition, aren't allowed
to change the visible behaviour of programs (in terms of output; obviously
they are allowed to change execution times).

For example, even just a couple of years ago the compilers I used would
actually execute a loop that sums the first _n_ integers. Nowadays compilers
detect this idiom and replace the loop with the closed-form result. While this
isn't particularly useful, because probably the only reason you're summing the
first _n_ integers in a loop is to do some measurements, it is (a) a perfectly
legal optimisation and (b) one that happened after 1990.

Unsurprisingly, you left out the second part of the (later) definition:

    
    
       NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable
        results, to behaving during translation or program execution in a documented manner characteristic of the
        environment (with or without the issuance of a diagnostic message), to terminating a translation or
        execution (with the issuance of a diagnostic message).
    

Notably absent is "use the undefined behaviour to shave another 0.2% off my
favourite benchmark".

~~~
the_why_of_y
> Unsurprisingly, you left out the second part of the (later) definition:

It is _not_ part of the normative definition, which says "for which this
International Standard imposes no requirements". In ISO standards, notes are
without exception non-normative.

Although I think they really should add your proposed text as an additional
example, as their current set of examples is evidently confusingly incomplete
:-)

------
barbs
I can't really speak for Zed's expertise and/or value to the programming
community. From what I gather, a few of his projects are widely used (Mongrel
comes to mind), and he seems to know his stuff pretty well. I also identify
strongly with his Programming Motherfucker[0] rant.

But man, the guy is insecure to the point of requiring therapy or something.
He seems obsessed with his image and status, and the slightest criticism will
cause him to lash out in an immature and ridiculous manner. Past rants have
him making lewd comments about penis-sizes and challenging others to a
physical fight[1].

It's a shame, because if he just relaxed a bit and took criticism gracefully,
he'd probably find himself to be a bit more valuable to the community and
employers, and would actually be a pretty decent dude. Instead, his writing
seems to reek of a constant need to validate and defend himself.

This is probably an unfair comparison, but I can't help but think of Terry
Davis: a brilliant programmer hindered by mental issues. Schizophrenia is
obviously not the same as insecurity, but I think the situation here is
somewhat similar.

[0] [http://programming-motherfucker.com/](http://programming-
motherfucker.com/)

[1] [http://harmful.cat-v.org/software/ruby/rails/is-a-
ghetto](http://harmful.cat-v.org/software/ruby/rails/is-a-ghetto)

~~~
bloat
Despite the veneer of a good comment (well written, sourced with links, and
starting with some faint praise), it's actually just an ad hominem, and not
appropriate here.

~~~
amoruso
It's not really ad hominem. More like constructive criticism of his rhetorical
style.

~~~
bubuga
> It's not really ad hominem. More like constructive criticism of his
> rhetorical style.

No, it's a blatant ad hominem.

The guy invested his time and effort trying to improve the world by writing a
technical book, which he then gave away for free, and in response we see
people like barbs making personal attacks, accusing the author of being
mentally disturbed to the point of requiring therapy.

This is a personal attack at its worst.

Perhaps the issue here is the C programming language and how teaching it can
be improved, not what insults and personal attacks a random user online is
able to throw at the author of a technical book.

People have more to learn from writing on undefined behavior than from puerile
complaints about comments on penis sizes and ironic accusations of
immaturity.

~~~
barbs
The term "ad hominem" is typically used to describe the fallacy of attacking
the person making an argument, rather than the argument itself. Sure, I'm
discussing his character, but I'm not trying to win any argument here - he may
be correct in what he's saying.

> _The guy invested his time and effort trying to improve the world by writing
> a technical book, which he then proceeded to give it away for free_

And I think this is certainly laudable, especially since they seem to have
helped so many people. But he also called the Rails community "pricks, morons,
assholes, and arrogant fucks who didn’t care about the art or the craft." and
I think he should be held accountable for that, amongst other things.

I wrote a comment about the author's behaviour in public forums and in blogs,
something I think he should be held accountable for, and something which I
believe hurts both him and the communities he participates in. I believe this
is relevant, and I'm entitled to discuss this here.

~~~
bubuga
> The term "ad hominem" is typically used to describe the fallacy of attacking
> the person making an argument,

This is a discussion on a book on the C programming language written by
someone, and here you are going full throttle on your personal vendetta
against the author while saying absolutely nothing regarding the book or the
programming language.

> Sure, I'm discussing his character

Precisely.

Go vent your frustrations somewhere else.

~~~
barbs
Please stop dictating what this discussion is about, and what people can and
cannot discuss here. This is perfectly relevant.

------
pcwalton
This is obviously a bitter rant, and devolves into uncomfortably ageist
territory about halfway through.

I do agree that we should be moving away from C and C++, though. It's pretty
simple, really: C was a pretty good language in 1978. We didn't know a lot of
things in 1978 that we do now in 2016. It now makes sense to revisit those
decisions in light of nearly 40 years of practice. The so-called "PL
Renaissance" has given us a whole host of new languages which have steadily
chipped away at the dominance of C and C++, and I think this is a healthy
trend that ought to continue.

~~~
xenadu02
I'm ready for the hate, so here we go... C was not a well-designed language in
1978.

The fact that C arrays decay to pointers without any bounds is single-handedly
responsible for a huge chunk, possibly even the majority, of all RCEs, worms,
malware, and exploits. Ever. In the history of computing.

It was a bad design.

It was a bad design in 1978.

It was known to be a bad design in 1978.

Other languages knew that checking array bounds was important, including for
security. The internet made the impact of using C much more devastating but
people were exploiting buffer overflows in the 80s to great effect. Some of
C's predecessors/contemporaries passed a length as the first part of an array
so bounds-checking was possible, though that has the downside of not being
able to pass slices of an array without copying.

C could have included an arrayref type that was a length + base pointer, and
let array l-values decay to an arrayref instead of a pointer. Then taking a
slice of an array would not require copying elements. You could still take the
address of an individual element. This would not have required much work to
implement, even in 1978! Maybe the first compilers didn't insert array bounds
checks, but at least the entire design wouldn't preclude them. Let's say you
even spell arrayref as []. It would mean sizeof() works on arrays passed to
functions.

    
    
        void wat(int[] values) {
            for (int i = 0; i < sizeof(values); i++) {
                printf("look ma, no buffer overflows! %d", values[i]);
            }
        }
    

(Yes, I know this is not K&R syntax)

Maybe you can forgive C for the stupid header compilation model (why let the
compiler do what you can make the programmer do by hand?). You can understand
why they might not have foreseen the need for namespaces. K&R didn't invent
the macro system so that's not even their fault.

What is unforgivable is the horribly stupid design of C's arrays.

I actually think it would be beneficial if the standards committee added
arrayref now. It won't fix all the busted C code but at least you could start
improving the #1 problem. Compilers could eventually adopt a flag to prohibit
arrays from decaying directly to pointers. You'd probably have to introduce
lengthof() to avoid confusion and use some other syntax to declare one, maybe
array(int) or something.

~~~
mjevans
I suspect this has a -lot- to do with performance.

When C was designed, and even today, there are systems without pipelining,
where it is expensive (in time) to de-reference a memory address and follow
that pointer.

I don't dispute that the design you suggest would be safer, and would even
have advantages for slicing; but that's really not the kind of programming C
was intended to serve.

Also, C is supposed to scale down to //really// simple systems: systems that
lack indirect addressing modes, caches, MMUs, etc. It is literally intended to
be a thin veneer over actual assembly for those systems, which is why so many
operations are specified in terms of /minimum standard unit size/ (for
portability of that almost-machine-code between systems).

What you advocate is more like what C++ actually /should/ have been; a reason
to use something more than C to gain advances in safety and ease of design.

~~~
Kristine1975
_> I suspect this has a -lot- to do with performance._

It's questionable whether people wanted that performance though, at least when
it resulted in less security. About bounds checking in ALGOL 60:
[https://en.wikipedia.org/wiki/Bounds_checking](https://en.wikipedia.org/wiki/Bounds_checking)

 _A consequence of this principle is that every occurrence of every subscript
of every subscripted variable was on every occasion checked at run time
against both the upper and the lower declared bounds of the array. Many years
later we asked our customers whether they wished us to provide an option to
switch off these checks in the interest of efficiency on production runs.
Unanimously, they urged us not to—they already knew how frequently subscript
errors occur on production runs where failure to detect them could be
disastrous._

~~~
wahern
"The block structure of ALGOL 60 induced a stack allocation discipline. It had
limited dynamic arrays, but no general heap allocation. The substantially
redesigned ALGOL 68 had both heap and stack allocation. It also had something
like the modern pointer type, and required garbage collection for the heap.
The new language was complex and difficult to implement, and it was never as
successful as its predecessor."

\--
[http://www.memorymanagement.org/mmref/lang.html](http://www.memorymanagement.org/mmref/lang.html)

Adding runtime bounds checking of automatic storage arrays (i.e. arrays on the
stack) is relatively easy in C, at least until the compiler runs into illegal
type punning. The real problem in implementing these compiler safeguards comes
with crossing translation units, or with heap blocks. There's a reason
languages like Rust and Go rely heavily on static linking and stack
allocation; it's more difficult or more costly to implement those safeguards
when the compiler can't see all the source code, or pointers pass through an
opaque layer. Nothing in C precludes automatic bounds checking of all array
access, via fat pointers or lookup tables. Fabrice Bellard's Tiny C compiler
implemented precise bounds checking for both automatic and dynamic storage-
allocated objects a decade before UBSan and ASan. Even deriving an invalid
pointer crashed the app at the precise point where it happened. That widely-
used C compilers don't do that is a strong hint there are other, real-world
constraints in place.

Also, in languages like Java it's not uncommon to see people reinventing
dynamic heap allocation using char arrays, susceptible to all the same
overflow problems. When you see people doing that, it should be a hint that a
language like C might work well.

I don't understand all the C hate. Then again, I have no problem employing
various languages according to the task, or creating DSLs. I suppose if I was
wedded to a single language or to the idea of a single language, C would look
much worse to me.

~~~
dbaupp
_> There's a reason languages like Rust and Go rely heavily on static linking
and stack allocation_

This is untrue: Rust certainly does not perform any static-linking-based
optimisations by default, nor is there a difference between putting an array
on the stack or on the heap. While it is true that code can benefit from
whole-program optimisation, it isn't the default in either language, just as
it isn't the default in C.

~~~
wahern
Languages which bake in automatic bounds checking at every access rely on
optimization to recover the performance hit. Without static linking, automatic
GC, and other constructs, that's very difficult.

LTO notwithstanding, once you add those more sophisticated constructs,
iterating the language becomes more difficult. You don't hit upon the best
method for implementing various types the first time, or the second time, or
even the third time. glibc is backwards compatible for programs compiled over
15 years ago (GCC's fixinclude hacks notwithstanding). You'll never see that
with Rust's or Go's standard library, just like you never saw that with C++.

My point wasn't that static linking was necessary. My point was that static
linking is indicative of other tradeoffs that most people don't understand.
Static linking isn't just about making packaging easier. It's also about
making it easier to write and implement the compiler and standard environment.

My more abstract point is that people who think C is on its last legs don't
understand the whole picture. There's nothing intrinsic to C that makes it
unsafe. Fabrice's compiler was perfectly capable of implementing the C
standard to the letter. What makes C unsafe are the requirements found in the
niches where C exists, and those requirements don't magically disappear
because the name of the language changes.

Rust supports unsafe code, but implementing code in Rust that is rigorously
robust in the face of OOM situations, or that needs custom memory-management
strategies, requires relying almost exclusively on unsafe code. (Try using
Rust without boxing, for example, as is necessary if you want to catch OOM.)
If you don't need those things, you probably don't need a low-level language,
either. I love C, but I also love languages like Lua, with lexical closures
and stackless coroutines. To me, languages like Rust and even C++ occupy a
middle ground that is very unappealing to me.

C isn't standing still, either. Strategies like SafeStack (see
[http://dslab.epfl.ch/proj/cpi/](http://dslab.epfl.ch/proj/cpi/)) can provide
substantially the same safety guarantees as Rust in terms of real-world attack
vectors, without having to modify any existing C software, and without giving
up performance.

None of this is to say languages like Rust are useless. Just that the harms
and inevitable demise of C per se are, IMHO, greatly exaggerated. And if and
when a language like Rust grows in usage, I doubt it will supplant C so much
as open and populate virgin territory.

~~~
pcwalton
> C isn't standing still, either. Strategies like SafeStack (see
> [http://dslab.epfl.ch/proj/cpi/](http://dslab.epfl.ch/proj/cpi/)) can
> provide substantially the same safety guarantees as Rust in terms of real-
> world attack vectors, without having to modify any existing C software, and
> without giving up performance.

That paper indicates that you do in fact give up performance, and the
performance is comparable to existing SFI techniques. SafeStack itself is
insufficient to prevent UAF problems with the heap. CPI prevents them, but
with significant overhead. And you still don't get full memory safety.

------
wyldfire
I am one of the many stalwarts whose bookshelf contains a prominent copy of
K&R C. But over the last 10 years or so I find myself referring to it less and
less often. It's a huge problem that it stopped at the second edition. The 2nd
ed was great in 1999. It is not great in 2016, it is only good.

> "You're right, but you're wrong that their code is bad." I cannot fathom how
> a group of people who are supposedly so intelligent and geared toward
> rational thought can hold in their head the idea that I can be wrong, and
> also right at the same time.

Zed, you're right, period. But I think you probably just hurt people's
feelings because they revere Kernighan and Ritchie and this is _one_ prominent
item of their legacy.

> But C? C's dead. It's the language for old programmers who want to debate
> section A.6.2 paragraph 4 of the undefined behavior of pointers. Good
> riddance. I'm going to go learn Go (or Rust, or Swift, or anything else).

Amen. The union of those three is likely to address all use cases that C
handled in the past.

BTW the blog post would be clearer if titled: " 'Deconstructing K&R C' is
dead". Gotta love mixing up C with natural language operator precedence
ambiguity. :)

~~~
pcwalton
The thing is, I think you can simultaneously have all of these opinions: (a)
K&R were/are top-notch computer scientists; (b) K&R was a fantastically
written book; (c) C was a great language in 1978; (d) we should be moving away
from C in 2016. The fact that we didn't know as much about programming
languages in 1978 as we do now in no way diminishes the significance of the
work.

I think that C should rapidly be moving toward obsolescence, _and_ I hold K&R
in great esteem.

~~~
wyldfire
Agreed on all counts.

------
jimbokun
Maybe this made Zed feel better, but communicates almost nothing to any
outside reader.

Not a single actual quote from any of his detractors, for the reader to judge
for him or her self if their criticisms have any validity.

The categorical declaration of "I cannot help old programmers," without
providing the evidence he has for this claim. Lots of name calling, though.

No link to the original content, to determine for ourselves whether or not it
was fair to K&R's work.

I suppose Zed just meant this to be personally cathartic, and didn't realize
he posted it on a public web site where other people can read it?

~~~
Animats
_Maybe this made Zed feel better, but communicates almost nothing to any
outside reader._

Yes. I can't figure out exactly what he's ranting about. He writes "I will
make it clear that my version of C is limited and odd on purpose because it
makes my code safe." Does this mean he defined a safer subset of C? (There are
lots of those. I've taken a crack at that myself [1], but it's politically
hopeless. Rust is the way forward.)

Why would anyone want to write K&R C today? It's awful. It didn't even check
function parameter types. Struct fields were just offsets; you could use one
on a pointer of the wrong type and the compiler wouldn't complain.
(Considering that Pascal predated C by some years, and had a sane type system,
this was kind of lame. But they were trying to compile in 64K of 16 bit words
in one pass. That was an adequate excuse in the 1970s.) The first ANSI C at
least had a sane type system.

[1]
[http://www.animats.com/papers/languages/safearraysforc43.pdf](http://www.animats.com/papers/languages/safearraysforc43.pdf)

------
alexeiz
I looked at Zed's books but didn't find that they contained much useful
material or were written well enough to be worth reading. Granted, I'm not a
novice to subjects he writes about. But still, I find it peculiar that his
high opinion of his works seems to be rather detached from reality. The books
are written in a simplistic and sometimes demeaning style and it's obvious
that many people will not like it. But when someone writes an honest review,
he seems to get too upset about it. While he obviously likes to critique other
peoples' works (such as K&R), he's very sensitive to critique of his own
books. Zed thinks that it's acceptable to insult the reviewer (Tim Hentenaar)
in response. Reading his response made me cringe. Insulting reviewers is just
not what respectable authors do.

------
mark-r
I went to the Internet Archive just to see what all the fuss was about. Hate
to say it, but I agree with the detractors. He seems to be completely missing
the concept of _preconditions_. If the preconditions are met, the code is
good; if the preconditions aren't met, undefined behavior occurs. Most people
programming C or C++ for more than 10 minutes learn to pay attention to these
things. The chapter would have been much better if it would have stuck to the
importance of validating preconditions, rather than simply pretending they
don't exist.

Good riddance.

------
pfarnsworth
To be perfectly blunt, C does not need Zed Shaw "saving" it. He can go ahead
and ignore it or end his book or rewrite the chapter or spout his vitriol over
how stupid C programmers are; nothing he does will make an impact.

~~~
pjmlp
We have CVE for that.

------
meritt
FYI, this rant is from January 2015. Surprised to see it showing on HN today.

If anyone is interested in what he removed, you can find it here:
[https://web.archive.org/web/20150101224641/http://c.learncod...](https://web.archive.org/web/20150101224641/http://c.learncodethehardway.org/book/krcritique.html)

~~~
dang
It wasn't obvious when this dates from, but we'll take your word for it and
add 2015 to the title.

~~~
meritt
Thanks. Archive has the 'dead' article first indexed Jan 6th, 2015.

[https://web.archive.org/web/20150106191636/http://c.learncod...](https://web.archive.org/web/20150106191636/http://c.learncodethehardway.org/book/krcritique.html)

------
bsg75
> I cannot help old programmers.

Unfortunate he uses this categorization. The problem is a mindset that can
exist in any generation.

------
DLA
Oh Zed. Really?

There is nothing wrong with carefully crafted C code for applications where it
is the best suited tool. Sure, there are sharp edges. True you can write
crappy, security nightmare code.

You do make some good points. I agree Go is fantastic. Rust is coming along as
well. However, C still runs the world. That's not changing anytime soon. Not
with the explosion of IoT and GPU type devices. And, hello Linux kernel and
all the glorious command line tools on *nix.

Try using Go or Rust (love both, x2 for Go) to allocate say a hundred GB of
memory for some huge/fast in-memory data processing. Let me know how far you
get.

Your rant is as polarizing as those of people who are blind to C's flaws (yes,
there are a few). Stop saying "don't write C"; that's just childish. Rather,
what about "let's write better, less security-flaw-prone C"?

As an engineer, one ought to choose wisely when choosing tools. This means
pros and cons and balanced unemotional decision making. Not a holy war against
a given tool.

And I am a professional programmer.

Let's do C where C makes sense.

(Edit: fixed typos)

~~~
Manishearth
> Try using Go or Rust (love both, x2 for Go) to allocate say a hundred GB of
> memory for some huge/fast in-memory data processing. Let me know how far you
> get.

There is no fundamental reason why this should be slower or harder in Rust.
Rust generally compiles down to more or less the same code C does.

There _are_ reasons why this could be slower in Go, but it really depends on
what program you're writing, so it might even just work fine. If you don't hit
the GC, for example (and Go gives you ample opportunities to not hit the GC),
data processing should be quite fast. But it depends.

I'd love to hear real-world experiences with such systems in Go.

~~~
courtf
We have a few Go processes with high memory usage. For one in particular,
while it's been higher in the past (~150GB), we're sitting at 40-80GB per node
right now.

The busiest node traffic-wise had average GC time over the past 20min of 3.4ms
every 54.5s. 95th percentile on GC time is 6.82ms.

That node is sitting at 36GB in-use right now, and has allocated (and freed)
an additional 661GB over the past 20min.

Can't really speak to how fast this is vs other environments, but it's smooth
sailing overall. /shrug

~~~
Manishearth
That sounds much better than the Java stories I've heard, which makes sense
since Go is better at avoiding the heap.

No idea how it compares with others; and not sure if it is representative, but
to me that sounds pretty decent.

------
siegecraft
Damnit, I was tricked into reading something by professional troll Zed Shaw.
The hypocrisy of him complaining about "the dark side of programming" is
hilarious considering he is a very good example of that. His style of debate
is to insult and name-call people who have offered non-judgemental,
constructive criticism, and I'm sure nothing I'm saying is news to anyone who
has a passing familiarity with him.

------
zvrba
I've read/maybe even briefly participated in a discussion about Zed's book a
couple years ago, and the technical debate went like this:

Z: K&R's strcpy is broken, e.g., you can forget to null-terminate the string.
Mine is safer.

Others: It's not broken; of course it'll do something unpredictable if you
break its preconditions.

Z: strcpy is still broken.

Others: Your function will break too if you pass it the wrong length.

Z: This cannot happen, K&R strcpy is broken, mine is safe.

------
ori_b
The specific section of his 'Learn C the Hard Way' book that he's referring to
was mostly, as I recall, complaining that the C string functions defined in
K&R will fail when you don't pass them valid data, and therefore, they're
fundamentally broken.

Make of that what you will, but it seems to me that given all of the other
ways that C can blow up due to programmer error, it seems reasonable to expect
programmers to pass a valid string to a string function.

~~~
regularfry
I'm with Zed on this one. Giving the programmer fewer things to have to
remember, by design, is a de facto improvement. Forcing a human to repeatedly
do a task which could have been designed out is evidence of a bad design.

Mind you, we're talking about the stdlib here. You can swap this stuff out.
Some people do: djb is a fairly well-known example.

------
webkike
K&R C was my introduction to real programming. I treated it ever since as a
book, not a reference. Does that make sense? For me it was a glance into the
mind of the creator of the language. Yes, some of the ways of programming were
flawed in ways that result today in many terrible things, not the least of
which is death. But I was able to outgrow K&R, to learn better things, with
its succinct language reference as my wings. I was fine not learning security
oriented programming immediately. And I certainly enjoyed learning it with
minimal snark and curse words. Maybe my method was harder, really? Learning
things without you telling them to me? Maybe, don't really care. Maybe people
don't like your essay because it's essentially shitting on a reference to a
language. That is what K&R was in the beginning. It morphed into something
else by your demands not the authors. Let my hero rest in peace. Learn to
breathe. C is as dead as Latin is.

~~~
kinkdr
> C is as dead as Latin is

I wish that were true, but you would be surprised how many things you use
every day are written in C. Even the ones you would never imagine.

Node.js, for example: a large part of it is in C. Redis: C. Memcached: C. PHP
itself is written in C.

~~~
webkike
I'm sorry this wasn't clear; I was subconsciously waxing poetic. I meant to
say that C's presence is constantly fading but its influence is widespread.

~~~
wahern
There's a difference between fading away and the universe expanding.

Once upon a time most Unix software was written in C, shell, and awk. Then
Perl came along. Did that diminish C? No. Then Java. Did Java diminish C? No.
Then Python. Did Python diminish C? No. (You can throw C++ somewhere in there;
not sure where. Though IME C++ use really seemed to explode with Windows
developers migrating to Linux.)

In each case the universe of software expanded, but C was never diminished.
People who think Rust, Go, or whatever will diminish C are ignorant of
history. Of course, maybe the predictions will bear out. But I seriously doubt
it, and it will be despite their underlying premises, not because of them.
Rather, much more likely is an expanded ecosystem.

As I explained elsethread, there's nothing intrinsic to the C standard which
makes it unsafe. Compilers are free to add bounds checking at every point in
the program; in most cases it would be just as cheap as in C++ or even Rust.
It would require much rebuilding and retooling, but not much rewriting
existing software. (Relying on undefined behavior is dangerous not only
because of optimizations, but because undefined behavior can also preclude
automatic bounds checking.)
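What compiler-inserted bounds checking could look like can be sketched by hand with a hypothetical "fat" array type that carries its length (illustrative only — no mainstream C compiler does this by default, though sanitizers such as `-fsanitize=bounds` in GCC and Clang instrument accesses in a similar spirit):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical fat array: the pointer travels with its length,
   so every access can be checked against it. */
struct checked_buf {
    int   *data;
    size_t len;
};

static int checked_read(struct checked_buf b, size_t i)
{
    assert(i < b.len);   /* the check a compiler could insert */
    return b.data[i];
}
```

In a counted loop the compiler can often prove the check redundant and hoist or delete it, which is why the claim is "in most cases just as cheap", not "free".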

That C compilers don't do that is a function of 1) baggage and 2) other
functional constraints, like strong ABI compatibility. But neither of those
are set in stone. People who think C is hopelessly unsafe make the same
mistake every C newbie (and some die-hard C-is-just-assembly people) do:
conflating the language semantics with implementation and machine details.

People assumed that clang would quickly overcome GCC because it was so new and
nimble. But clang still hasn't unequivocally overtaken GCC, and
certainly hasn't obsoleted GCC. Rather, the competition merely spurred GCC to
evolve faster. I see much the same happening with C.

In the future, look to systems like OpenBSD, FreeBSD, and Alpine Linux, which
are more free to upgrade their toolchain and runtime environments with
backwards-incompatible changes, to field enhanced C environments with better
bounds checking and mitigations. Approaches like stack canaries and ASLR are
only the tip of the iceberg for what's possible.

~~~
kibwen

      > Compilers are free to add bounds checking at every 
      > point in the program; in most cases it would be just as 
      > cheap as in C++ or even Rust.
    

It would not be as cheap as in Rust because Rust uses an explicit standard
library feature (iterators) to obviate the need for bounds checks in the vast
majority of loops to begin with. But in C indexing is pervasive within loops,
so you'd need to come up with much cleverer compilers that could manage to
prove that bounds checks were unnecessary (compilers can already do this in
some cases, for C/C++/Rust, but it's not perfect).

Likewise, one _could_ make integer overflow in C well-defined, but this would
also make C slower than Rust because the use of iterators means that Rust
doesn't need to check for overflow on each loop iteration. Via language (or
rather, library) features, Rust reclaims the performance that it otherwise
would have lost to C by dint of being free of undefined behavior. I think
you'd have a hard time doing this in C without rewriting every `for` loop in
existence.
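The two loop shapes kibwen contrasts can be approximated in C: an indexed loop, where a hypothetical auto-bounds-checking compiler would have to prove `i < n` to elide each check, versus a pointer-range walk — the closest C analogue of a Rust iterator — where the termination test and the access coincide. This is a sketch of the loop shapes, not a claim about any particular compiler:

```c
#include <stddef.h>

/* Indexed loop: with automatic bounds checking, each a[i]
   would need a check unless the compiler proves i < n. */
long sum_indexed(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Pointer-range loop: the loop bound and the access are the
   same thing, so there is no separate index to check. */
long sum_range(const int *a, size_t n)
{
    long s = 0;
    for (const int *p = a, *end = a + n; p != end; p++)
        s += *p;
    return s;
}
```

Rewriting "every `for` loop in existence" into the second shape is roughly the migration cost kibwen is pointing at.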

------
cognivore
I have mixed feelings about this, but I cannot disagree with it.

a. I haven't written a program in C in over 10 years. I wrote software 5 days
a week for those 10 years.

b. I wouldn't want to write a program in C now.

c. The first "high level" programming language I learned was C, from a book
(not K&R C), while travelling in Asia, without a computer. It taught me well,
but I immediately went on to other languages.

d. I can't shake the idea that there is some value to knowing that low level
stuff, even though I don't use it much myself.

Maybe Linux kernel hackers will keep it alive. I know game programmers use it
a lot as well. But for the majority of us, it's kind of an arcane skill now.

~~~
keithnz
I use it all the time in embedded systems. It's very common. Basically there
are no good alternatives until you get to much bigger chipsets. However, in
the embedded world you tend to go with a limited subset of C — especially no
use of dynamic memory.
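A common shape for that no-dynamic-memory discipline is a statically allocated pool: all storage is fixed at link time, so memory use is known up front and can never fragment. A minimal sketch (the `msg` type and the capacity are made up for illustration):

```c
#include <stddef.h>

#define POOL_CAP 16

/* Fixed-capacity pool in place of malloc/free: storage is
   static, so the footprint is known at link time. */
static struct msg { int id; int payload; } pool[POOL_CAP];
static size_t pool_used;

static struct msg *pool_alloc(void)
{
    if (pool_used == POOL_CAP)
        return NULL;   /* exhaustion is an explicit, testable case */
    return &pool[pool_used++];
}
```

Exhaustion returns NULL instead of invoking an allocator, which is exactly the failure mode embedded code wants to handle deterministically.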

------
silent90
I'm sensing a huge incomprehension in a great number of posts. The key is to
know the purpose of your tools. C is a "close-to-the-metal" type of language.
You can control low-level things: execution time, the "number of hops" when
writing data, etc. If you want a friendly language with "no segfaults, no
memory leaks" then go higher level (which in many cases is a better choice,
i.e. a GUI desktop application with no performance constraints). If you have
problems writing in C then you simply still can't write C and are using the
wrong tool for the task.

"But C? C's dead. It's the language for old programmers who want to debate
section A.6.2 paragraph 4 of the undefined behavior of pointers"

Someone has to build the low-level stuff. Dear boys in too-tight pants and a
hippie mustache: your high-level things and gluten-free snacks do not grow
on trees.

~~~
pjmlp
> Someone has to build the low-level stuff.

Some of us were already doing it in much better languages, before C had any
meaning outside AT&T walls.

------
markhahn
joke? the author is self-important in precisely the way that K&R weren't.

------
kinkdr
You had my sympathy until I read the "error prone _shitty_ language like C".

Next time before getting pissed off about the response you get, think what
could it be that you have said or done that may have triggered it.

------
ipsin
To me, C is like PHP (ignoring for a moment that PHP was written in C).

You can document its shortcomings, its dangers and all the headache-inducing
choices. But while you're doing that, people all over the world are building
wonderful and terrible things with it.

So you're moving on to Go or Rust? Great! Good choices! But remember that
there are people who may disagree _and be wrong_ and also do something
interesting with that wrongness.

~~~
mfukar
No language makes it the least bit difficult to write bad code. This is not an
argument in favour or against _any_ language.

------
dschiptsov
K&R taught fundamentals and good style. It is a timeless classic, because the
principles don't change across successive waves of mass hysteria.

The Plan9 dialect of C is another example. There is a portable mk package,
which includes core libs (libbio, libutf, etc., which also served as core
libs for earlier versions of Golang), to appreciate what C was supposed to be.

I would paraphrase - attention seeking by attacking classics is a poor style.

~~~
regularfry
Part of his point is that K&R _isn't_ good style. It's a clear, consistent,
and well-demonstrated style, which made it popular, but that doesn't make it
_good_.

------
lkrubner
Do remember that this guy wrote:

"I’ve more or less kept my mouth shut about some of the dumb and plain evil
stuff that goes on in the Rails community. As things would happen though I’d
take notes, collect logs, and started writing this little essay. As soon as I
was stable and didn’t need Ruby on Rails to survive I told myself I’d revamp
my blog and expose these fucks."

and:

"After Mongrel I couldn’t get a gang of monkeys to rape me, so forget any
jobs. Sure people would contact me for their tiny little start-ups, but I’d
eventually catch on that they just want to use me to implement their ideas.
Their ideas were horrendously lame. I swear if someone says they’re starting a
social network I’m gonna beat them with the heel of my shoe."

So that is very much his style of writing.

~~~
dang
I don't think bringing out a list of generically outrageous things someone
said in the past rises to the level of discourse we're trying for here.

We detached this subthread from
[https://news.ycombinator.com/item?id=11727718](https://news.ycombinator.com/item?id=11727718)
and marked it off-topic.

~~~
lkrubner
The man has a long history of writing angry rants. You don't think that would
influence how people might read his current rant? You don't think the readers
of Hacker News might like to know about his past history of similar behavior?
You don't think it helps interpret the level of anger in his current rant?

~~~
dang
I do think all those things. But the cost of moving in the direction of
personal vendettas or witch hunts is higher than the benefits you listed (if
benefits they are).

I don't mean that's what you intended, but that's the direction it points in,
which it isn't in the long-term interests of HN to allow.

------
Yuioup
Punctuation seems to be dead. It took me a while to understand the title.

------
twblalock
Lots of insecure butthurt and resentment in this article, and not much
substance.

------
gravypod
I can't see how you could say C is dead when there isn't really anything that
can replace it.

I'll take it as dead when the Linux kernel, or its futuristic replacement, is
written in something other than C.

If you are talking about at the user-space level, then yes I can see that. But
you shouldn't assume your single use case, higher level user space apps, is
the only use case.

~~~
pcwalton
In what way, specifically, is Rust unsuitable for building a kernel?

There's no argument that the Linux kernel is currently written in C. But that
doesn't prove that nothing exists that _can_ replace C.

~~~
z92
We can't say whether Rust is a suitable replacement unless a team tries to
write a kernel in Rust, and then comes up with a comparative result.

Right now C is only the tried and true solution. The rest are possibilities
only.

~~~
pcwalton
There are two ways to interpret your post. The first is "there's no kernel
written in Rust that is as complete as Linux". The second is "Rust is
unsuitable for a kernel". The first interpretation is obvious and completely
uninteresting; the second is something you haven't supported at all.

------
ExtremisAndy
This childish rant is embarrassing. With millions and millions of lines of C
code basically running the internet and of vital importance to countless
devices, calling it a s****y language is beyond ridiculous.

~~~
pcwalton
Asbestos was also used in millions of buildings and was vitally important as
insulation. It was also something that was a bad idea and something that we
needed to move away from.

~~~
mturmon
I agree with where you're coming from. Another analogy that comes to mind is
knob-and-tube wiring ([https://en.wikipedia.org/wiki/Knob-and-
tube_wiring](https://en.wikipedia.org/wiki/Knob-and-tube_wiring)).

It's an older home wiring technology that works fine for years if undisturbed,
is still present and working OK in homes all over, was invented in the early
days of electrified homes, requires considerable skill to install properly,
tends to be unsafe if not handled skillfully, is expensive and delicate to
modify, has no hidden components, allows interesting wiring layouts because
conductors are separated, ...

One could go on with the obvious parallels. (I learned on a PDP-11.)

~~~
pcwalton
Yeah. The weird thing is that in other industries, people have no trouble
admitting that the old stuff is often problematic and needs to be replaced. In
the supposedly forward-looking tech industry, though, we stick with our tools
from 1978 and stubbornly resist admitting that we have learned anything since
then. It's strange.

~~~
jacobolus
I don’t buy that at all. There are huge amounts of path-dependent cruft
throughout all human endeavors:

    
    
      - A base ten number system
      - Lack of useful structure in the symbols and names for numerals, and lots of weird inconsistencies in number names
      - Inconsistent, confusing, and arbitrary names/notation for basic mathematical operators and functions
      - Use of inferior Gibbs/Heaviside vector algebra instead of Clifford/geometric algebra
      - Very poor notational conventions in many advanced math/physics fields
      - A highly irregular calendar
      - Poorly designed measurement systems
      - English spelling
      - Very distorted dominant world map projections
      - Most nutrition “science”, including federal dietary guidelines
      - Bogus forensic “science” used to imprison innocent people
      - The methodology and writing style used in political science
      - Many essentially debunked economic models which continue to be taught
      - A legal system chock full of incidental complexity and inconsistencies
      - Inadequate species taxonomies
      - Poor color models used in art/design
      - Even worse, specification of colors using proprietary, arbitrary Pantone chips
      - Lots of poor/obsolete metrics used for evaluating lighting
      - Audio mastering with heavy-handed dynamic range compression
      - Lectures as primary pedagogy in high school/college
      - Grammar drills as a method for teaching foreign languages
      - Modern zoning requirements in many countries
      - Many unsafe and inefficient street design requirements
      - The rigid design of modern shoes (let’s not even start on heels)
      - Terrible user interfaces for most household appliances
      - Mediocre user interfaces for many musical instruments
      - An inefficient and dangerous typewriter / computer keyboard (which persists on tiny phone screens!?)
      - Unhealthy design of office furniture, car/airplane seats, child strollers, etc.
      - .....
    

Some of this stuff is decades old. Some is thousands of years old.

