Something I quote regularly, regarding how C was hardly impressive in terms of the performance of its generated native code.
As anyone who coded on 1980s home computers knows, any junior writing Assembly could easily beat those compilers.
It was the abuse of UB by optimising compilers that eventually made the difference.
"Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization...The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue.... Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels? Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."
The problem is that C compilers have very advanced optimizations. These make the language more dangerous. And yet, the programmer still has to take care of it!
I have a litmus test. Every few years, I compile code which does doubly-linked list operations:
struct node {
    struct node *next, *prev;
};

/* Original: pred and succ must not overlap, but this is not expressed. */
void insert_after_A(struct node *pred, struct node *succ)
{
    succ->prev = pred;
    succ->next = pred->next;
    pred->next->prev = succ;
    pred->next = succ;
}

/* Optimize with restrict: pred and succ do not overlap. */
void insert_after_B(struct node *restrict pred, struct node *restrict succ)
{
    succ->prev = pred;
    succ->next = pred->next;
    pred->next->prev = succ;
    pred->next = succ;
}

/* Optimize by hand in "load-store style". */
void insert_after_C(struct node *pred, struct node *succ)
{
    struct node *aft = pred->next;
    succ->prev = pred;
    succ->next = aft;
    aft->prev = succ;
    pred->next = succ;
}
I'm consistently finding that the manual optimizations in B and C are required to get the instruction count down.
Adding restrict is the simpler manual optimization, but it's dangerous; now demons are allowed to fly out of our noses if the arguments point to overlapping objects, something that can't happen with function C.
And what's the use of restrict, when I can get the same benefit just by coding in a certain way?
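To make the overlap point concrete, here is a minimal sketch of a caller, meant to be appended to the three functions above (the demo function and the self-referential node are my illustration, not part of the original litmus test): nothing in A's signature rules out the two arguments naming the same node, so the compiler has to assume the stores through succ might touch pred->next; under restrict the very same call becomes undefined behaviour.

/* Hypothetical caller, appended to the listing above. */
void demo(void)
{
    struct node a = { &a, &a };   /* one-element circular list: a.next == a.prev == &a */

    /* Legal for A and C: the result is a nonsensical self-linked node, but
       there is no UB. Because calls like this are permitted, the compiler
       must assume the stores through succ can write to pred->next, so it
       reloads it instead of keeping it in a register. */
    insert_after_A(&a, &a);

    /* Undefined behaviour for B: both restrict-qualified parameters point
       to the same object, and that object is modified through them during
       the call, so the "no overlap" promise is broken. */
    /* insert_after_B(&a, &a); */
}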
I remember that excerpt from her interview, and I'm with her almost 100%. And yet, if this is another case of worse is better (we had a thread about that recently on HN), then there was a definite demand for C. I was not yet around as a developer, but maybe those compilers were too expensive? Did they require machines that were too large and costly? Did they run on the computers people had at home? Etc.
Thanks. (And I think I've seen you quote that here before. I just didn't realize that this was what you were referring to.)
Next question: How did C do that? By becoming dictator? No, it did it by persuading people that C was a better direction than those other ecosystems. That is, the other direction lost the debate. How and why?
I still don't understand the phenomenon of "worse is better".
My current best guess is a riff on Bob Martin's (?) observation that new programmers (noobs) are minted faster than the wisdom of the ancients can be transmitted. An ever expanding circle of ignorance, the center occupied by cranky old people yelling "But you're doing it wrong!" to no one in particular.
So when 'C' emerged, amongst otherwise equal options (to noobs), perhaps it appeared to offer both the 'portable' bit fiddling of ASM and the structured programming bits of Pascal. And natch, noobs wouldn't be immediately aware of the broken glass and tiger traps.
Further, like with all grifts, there's a turf grab for the new hotness. Something along the lines of "Turbo Pascal was for casuals. Real programmers, like you, should use Watcom's professional new 'C' compiler."
--
I was so enamored with object-oriented analysis and programming. A real zealot. Two (different) friends with experience were "meh". I vividly remember thinking "Those old people just don't get it."
Ahahahah. The folly of youth. Of course, they were proven right. (But also wrong in some ways.)
I try to remember young me now that I'm old, eschewing each new hotness, while trying to keep an open mind, gleaning whatever good bits tumble out.
Worse is better: This can be taken a number of ways. In terms of time (and money), "worse but I can afford it and get it today" beats "better but I can't actually get it". "Worse but easier to use" may also beat "better but harder to use". In terms of writing OSes (which was the use of C), the ability to flip bits and deal with raw memory was very useful. A safer tool might have been "better", but it also could have made it harder to write OSes.
The Fran Allen quote seems to be saying that C made all this promising research stop. Bell Labs did not have armed thugs going out pointing guns at people's heads. Why did the research stop, if it was so promising? Because the researchers (many, at least) decided it was less promising - or at least, that more promising options were now available. So if C did that, it did it by either showing that previous directions weren't promising, or by opening up better options. Either way, it's rather unfair to mourn the paths that were not pursued and blame C for it.