
Some Were Meant for C: The Endurance of an Unmanageable Language [pdf] - ingve
http://www.cl.cam.ac.uk/~srk31/research/papers/kell17some-preprint.pdf
======
jasode
This is a long paper and the author has 2 main claims:

1) C's popularity has more to do with the cognitive ease of _memory
addresses_ as a conceptual model for inspection and change. The author claims
the memory-address mental model overshadows _runtime performance_.

2) switching to "safe" languages like Java/C#/Rust is not necessary. With no
changes/violations to the existing C Language specification, a new/different
_implementation_ (compiler) can add more runtime safety checks, similar to
_managed_ languages. An example from the paper:

 _> Consider unchecked array accesses. Nowhere does C define that array
accesses are unchecked. It just happens that implementations don’t check them.
This is an implementation norm, not a fact of the language._

Those 2 ideas look orthogonal but he ties them together at the end.

I'll take some poetic license (e.g. a little exaggeration) to reword the
author's idea to help spur discussion...

Consider the idea of the _Sufficiently Smart Compiler_ [1] that claims that a
"slow" and "high-level" language like Python/Ruby could be _theoretically_
analyzed and compiled to be as fast as C or handcrafted assembly.

In a way, the author is coming from the opposite direction. If you had a
_"Sufficiently Smart Runtime"_ for a new C Language compiler implementation, it
could (theoretically) do all sorts of extra checks and bookkeeping that
wouldn't require any changes to C source code and wouldn't violate the
existing C Language standard. (E.g. Imagine a new C runtime that did many
checks similar to Valgrind + UBSAN + ASAN + debugger memory fences, etc.)

Would the program execution be slower? Well yes, but that's not really an
issue because according to author's claim #1, what programmers really like
about C is the mental ease of accessing memory addresses. The performance is
important, but it's a secondary benefit -- according to the author.

[1]
[http://wiki.c2.com/?SufficientlySmartCompiler](http://wiki.c2.com/?SufficientlySmartCompiler)

~~~
bsder
C's popularity is due to the fact that it is _predictable_ within certain
bounds (single thread or limited concurrency).

No GC pauses, no weird runtime crashes due to a strange constructor, no
gigantic exception chains, etc.

The only languages in the TIOBE index that can even try to make that claim
are: C at #2, C++(if you subset it) at #3, Objective-C/Swift(#18/#11),
Assembly at #14, Ada at #29, and maybe FORTRAN(#35).

That's not a lot of options if you _need_ runtime predictability. Basically C,
C(with additions), C(with additions), assembly(hack, spit), Ada (okay), and
FORTRAN (God help you).

Even _now_, that means C or Ada--and the first free Ada compiler was in 1992.

Yes, Rust is coming. But it's got a way to go yet.

~~~
mike_hearn
The idea that C is predictable is in my view a sign of someone who hasn't got
to know C really well.

The trends around undefined behaviour will hopefully put a bullet in the head
of this idea for good. It's extremely hard to look at C and reason about what
an optimising compiler will turn it into.

Malloc is not more predictable than a GC pause. Both malloc and free can take
unpredictable amounts of time. If anything it's less predictable because
modern GCs at least have pause time targets, but mallocs never do. You just
don't notice it because people don't tend to measure malloc latency. In turn
that's because malloc pauses only affect memory allocation operations; they
don't stop every thread, which is a benefit, it's true, but that's less about
predictability and more about UI latency.

C not having exceptions doesn't make it more predictable. It just means that
if something goes wrong you get a useless and probably corrupted core dump.
The number of times I've been able to fix a bug in a piece of managed code
given _only_ a stack trace from the end user is huge. The number of times I've
been able to fix a bug given "Segmentation fault" with no other info is zero.

~~~
bsder
> The trends around undefined behaviour will hopefully put a bullet in the
> head of this idea for good. It's extremely hard to look at C and reason
> about what an optimising compiler will turn it into.

Sure when you turn on -Oinfinity. Nobody does that in embedded unless they are
hard pressed on some metric (RAM size, generally, or CPU flops occasionally).

Overall, though, C is really fairly predictable. Unsigned arithmetic _does
what you expect_--the fact that signed arithmetic doesn't under higher
optimizations is a fairly recent phenomenon (and not an uncontroversial one).
Variables go where you expect. Pointers act like you expect. Casting and
precedence sometimes sneak up on you, but parentheses generally manage that.

Const has issues at the boundary cases. Trying to stuff something into ROM and
then telling the rest of the system that "really-no-you-cant-cast-that" can
make things tricky with "incompatible pointer" issues.

Floating point arithmetic, though, is just a disaster.

> Malloc is not more predictable than a GC pause.

Ayup. And what's the first thing real-time embedded folks do? Throw out malloc
(which is library, not language, but that's pedantic). Real-time-embedded
systems tend to allocate all memory statically, up-front. Or they use a custom
malloc that they control the behavior of.

> C not having exceptions doesn't make it more predictable. It just means that
> if something goes wrong you get a useless and probably corrupted core dump.

Predictable and useful are orthogonal.

And the fact that I can't attach to the running state of a crashed program is
a failure of _TOOLS_, not the language. The fact that I can't attach to a
system that crashed, examine the state, fix what I need to, and continue is a
fault of the people who make C IDEs. There is no reason other than lack of
monetary incentive that this cannot be done.

------
pcwalton
This is another article overanalyzing the success of C, when in fact the
reason for the success of C is very simple and obvious: Unix was free and in a
lucky position in 1973; Unix got popular; C is the language of Unix; therefore
C got popular. There is no inherent benefit in C that, for example, a somewhat
modified version of Pascal or Algol wouldn't have inherited. And these kinds
of articles always ignore the fact that in the past decade or so, C and C++
have been _declining_ in popularity. By and large, new programmers are not
learning C the way they were in the '90s. For better or worse (personally, I
think, for the better), they're starting with JavaScript, Python, Ruby, or
even PHP.

I'm highly skeptical of the conclusion that what we need is a new safer
implementation of C, too. Switching to a new compiler is a _very_ high burden
for a lot of projects, and at the end of the day they're still left with all
the problems of C, like header files, no namespaces, terrible standard
library, etc. etc. (Even adding compiler _switches_ is a high burden, which is
why Linux distros took so long to widely deploy basic things like
-fstack-protector.) By contrast, switching to a new language (or incrementally writing
new components in a new language, which is how this always goes in practice)
is _also_ a very high burden, but the benefits are larger: you don't have to
deal with all the problems of C.

In my view, this is why safer versions of C have repeatedly failed over the
years, while new languages have flourished. Migration to a new language _or a
new compiler_ is expensive no matter what, so teams will only do it if they
see enough benefit to justify the expense of doing so. Merely adding some
amount of safety to C isn't worth it, but the large safety _and_ productivity
gains you can get from a different language can be.

~~~
AnimalMuppet
I've used both C and Pascal in embedded systems. Pascal is _painful_ compared
to C. A "somewhat modified" version might help, but I doubt it would be
enough. To steal a phrase from my friend Michael Pavlinch: Pascal was like
picking your nose with boxing gloves on. A modified boxing glove isn't really
going to solve the problem.

For that matter, once we weren't on Unix but rather on the PC, and we _had_ a
nicely-modified Pascal (Turbo Pascal), why did C/C++ win there, too?

~~~
pcwalton
> For that matter, once we weren't on Unix but rather on the PC, and we had a
> nicely-modified Pascal (Turbo Pascal), why did C/C++ win there, too?

Turbo Pascal was quite successful in its day. But Microsoft chose C, and the
rest is history. Absent Microsoft's decision, Pascal might still be around.

If you look at early Mac development, for instance, Pascal was actually
preferred. C only ended up winning due to being better known, which was a
result of the critical mass of programmers trained on Unix and Microsoft's
offerings.

~~~
jstelly
That doesn't agree with my experience. I switched from Turbo Pascal to Turbo C
in the late 80s while doing DOS development because it was a better tool for
the job. It had nothing to do with Microsoft or Windows (v3.0 was not yet out,
and few people developed Windows apps before v3.0). Pascal (the language) was
definitely not preferred for DOS development at that time--it's just that
until 1987 there wasn't really a C development environment that could compete
with Turbo Pascal.

I did some Amiga development back then also and that was exclusively in C with
some 68k assembly. I don't really recall anyone hoping for a pascal
environment to replace their C tools, but the Amiga OS was more C-oriented
than DOS at the time.

------
panic
_Is such a safe implementation of C really suitable for systems programming,
rather than merely application programming? If we understand system-building
as communicativity, then certainly such a system retains communicativity—so
long as alien objects can be described to it in a manner sufficient for
dispatching the same dynamic checks. If I memory-map a file, say, I can safely
access that memory only if the structure and meaning—the bounds and the types,
roughly—are described much like those of other in-memory objects. Tools and
systems for providing these descriptions are currently lacking—but are a
logical extension of the runtime type information already developed in recent
work. In the case of file formats, some cases like the ELF example we saw
earlier (§5.5) show that the format has already been defined for us, thanks to
the manifest layout of objects declared in C._

This is a key point. There are scattered systems for describing the layout of
arbitrary binary data—C structs/unions, Erlang binary patterns, ASN.1 Encoding
Control Notation, Kaitai Struct[1]—but nothing has really caught on across
language boundaries. It's hard not to feel this data format barrier when
you're using a C API from another language. We'll need to do something about
this barrier if we want a true multi-language system (not just a bunch of
awkward C FFIs).

[1] [http://kaitai.io/](http://kaitai.io/)

~~~
haneefmubarak
Certainly, but for instance, take one of your examples: Kaitai Struct. It
doesn't have support for C (at least it's not listed among the languages on
its homepage). OTOH, for more complex payloads I've often seen Protocol
Buffers used (yes, I know they don't have native C support either but there's
lots of good libraries for using `protobuf`s with C).

The thing with FFIs is that above all we want them to be fast and simple. C's
rules for laying out structs generally mean no parsing is necessary, with
direct access to fixed offsets for everything you want. If you're ever having
problems figuring out the layout of a struct, it's relatively straightforward
to just dump some simple load/store code into a compiler and have a look at
what it does (assuming you can understand assembly at a basic level):
[https://godbolt.org/g/khGPWA](https://godbolt.org/g/khGPWA)

------
chubot
Related: "Safe Systems Software and the Future of Computing by Joe Duffy" at
RustConf 2017.

[https://www.youtube.com/watch?v=CuD7SCqHB7k](https://www.youtube.com/watch?v=CuD7SCqHB7k)

I summarized this excellent talk here [1], but one of the main points is that
compatibility with existing systems is important for adoption. (They learned
that the hard way -- by having their entire project cancelled and almost
everything thrown out.) He advocates unit-by-unit rewrites rather than big-
bang rewrites, just like Kell does in this conference article.

And compatibility with C in Windows should be easier than it is in the Unix
world, because the whole OS is architected around a binary protocol AFAIK --
COM.

My sense is that Rust may not have thought enough about compatibility early in
its life. Only later when they ran into adoption problems did they start
talking more about compatibility.

Also, it seems Rust competes more with C++ than C, and there seems to be very
little attempt to be compatible with C++ (although perhaps that problem is
intractable.)

Personally I don't think Rust will be a successful C replacement. It will have
some adoption, but the Linux kernel will still be running on bajillions of
devices 10 years from now, written in C. And in 20 years, something else will
come along to replace either C or Linux, but that thing won't involve Rust.

[1]
[https://www.reddit.com/r/ProgrammingLanguages/comments/6y6gx...](https://www.reddit.com/r/ProgrammingLanguages/comments/6y6gx4/safe_systems_software_and_the_future_of_computing/)

~~~
pcwalton
> My sense is that Rust may not have thought enough about compatibility early
> in its life. Only later when they ran into adoption problems did they start
> talking more about compatibility.

Of course Rust thought a lot about compatibility with C in its early days. I
remember fast FFI was in Graydon's very first presentation about the language
in 2010. Almost everything about the language changed, but that focus did not.

> Also, it seems Rust competes more with C++ than C, and there seems to be
> very little attempt to be compatible with C++ (although perhaps that problem
> is intractable.)

Rust has gone pretty far in wanting to be compatible with C++, with the C++
stuff added to bindgen for Stylo. We've gone further than most other
languages. It's not fair to say there's been "very little attempt": we
literally couldn't have shipped Stylo to Nightly Firefox without doing the
work to bridge C++ and Rust.

From your other post, it seems that one of your main complaints is that Cargo
exists instead of having Rust use Makefiles. All I can say is that the
reaction to Cargo from Rust programmers is overwhelmingly, almost universally
positive, and abandoning Cargo in favor of Makefiles would instantly result in
a fork of the language that would take Rust's entire userbase. Not solving
builds and package management is not a realistic option for a language in
2017.

~~~
wahern
Following the logic of the article, Rust has made the exact same mistake every
other language has made, which is to conceptualize compatibility with the C
ecosystem as merely an issue of FFI. Rust is hardly the first language to focus
on easy FFI from day 1, but according to the article that's not nearly
sufficient. And like most other modern so-called systems languages, Rust hasn't
gotten around to committing to a stable, exportable ABI. In fact, I think much
like Go the general sentiment is that this is largely undesirable, as stable
ABIs can cripple evolution of the implementation, especially those that rely
on sophisticated type systems.

~~~
pcwalton
> And like most other modern so-called systems languages, Rust hasn't gotten
> around to committing to a stable, exportable ABI.

That's not true. The C ABI is stable and exportable, and you can opt into it
on a per-function basis. We do that for integration with existing projects all
the time.

Again: All of you are talking as though the idea of integrating Rust into a
large C++ project is some far-fetched theoretical idea, and that we made some
obvious mistakes that make this goal impossible. In fact, we're shipping an
integrated Rust-C++ project _today_: stable Firefox, used by millions of
users.

~~~
wahern
I'm not arguing that it's too difficult to integrate Rust with C or C++ projects.
I'm simply trying to get at the distinctions that the article is making, which
are rather subtle.

One aspect of Rust that fits well, IMO, with the characteristics the article
argues are underappreciated is its emphasis on POD--objects as compact, flat
bytes. That puts Rust much closer to achieving what C does best (again,
according to the article), which is first-class syntactic constructs over
memory--namely, pointers. But it falls short in the sense that to _export_
Rust objects (rather than import alien objects into Rust) you have to do so
explicitly. And presumably the author would argue that Rust is significantly
undervaluing the benefit of a stable ABI that would allow other applications
to import Rust objects without an explicit language-level construct (i.e.
explicitly annotating APIs with no_mangle).

Obviously when you're building a large application, cathedral style, the
requirement to explicitly annotate is not only less burdensome, but quite
useful (for many reasons). But in a larger, heterarchical ecosystem of
software, that's actually quite limiting. Our first instinct is to argue that
permitting such unintended peeking behind the curtain is dangerous and
unnecessary, but the article speaks directly to that.

Imagine a Rust with a stable ABI that was exported via Sun's CTF format. CTF
is like DWARF but much simpler (and thus little incentive to strip it), and
it's being integrated into both OpenBSD and (I think) FreeBSD to facilitate
improved dynamic linking infrastructure. Rust could even, theoretically,
continue randomizing member fields. And this data could be consumed by any
language's toolchain, not simply Rust's toolchain. That sort of language-
agnostic, holistic approach to interoperability is largely what I think the
article is getting at.

~~~
pcwalton
I'd be all for a standard language agnostic ABI. I'm not on the language
design team anymore, but I suspect you wouldn't have any trouble convincing
them to get on board with such a thing either. The ones you'd need to convince
would be the C++ folks, I suspect :)

------
ryandrake
It's common thinking (often a misconception) that C programmers only
grudgingly use C because it does some vital thing that all these other
"managed" and "safe" languages cannot not: if only those other languages added
that feature, all C programmers, having no more reason to stay, would finally
be able to abandon C! This is a good list of positive reasons to prefer C even
if other languages are also suitable.

------
AnimalMuppet
Here's a good quote: "Unless we can understand the real reasons programmers
continue to use C, we risk researchers continuing to solve a set of problems
that is incomplete and/or irrelevant, while practitioners continue to use
flawed tools."

In other words, stop blithely claiming that everyone is stupid for using
C/C++. Instead, _find out why they use it_. Then, if you continue to think
that C/C++ needs to be replaced, find a better way for those people to do what
they are doing that they currently find C/C++ to be the best way to do.

~~~
mannykannot
C is frequently praised (including in some of the posts here) for its
suitability for real-time and embedded systems development, but the author
appears to be proposing modifying the C runtime and code generation in ways
that, when done in other languages, are claimed to render them unsuitable for
these purposes.

I think researchers are justified in looking for solutions for common
problems, even if many C programmers will be uninterested in them, so I will
not reject his proposals peremptorily.

~~~
camus2
> C is frequently praised (including in some of the posts here) for its
> suitability for real-time and embedded systems development, but the author
> appears to be proposing modifying the C runtime and code generation in ways
> that, when done in other languages, are claimed to render them unsuitable
> for these purposes.

Don't people think there are things C could improve that wouldn't affect its
suitability for these tasks? I mean getting namespaces doesn't strike me as
being a hindrance for real-time systems, for instance. A boolean type and
true|false as keywords instead of macros? tagged unions? multiple return types
to deal with errors more easily? more facilities in the language in general to
avoid the use of macros to make up for its lack of polymorphism? to me macros
always felt like a lazy cop-out.

~~~
ratboy666
Tagged unions? -- please no... I could be convinced to completely lose unions
in C, though (pointer to member or base of struct can be cast to another
struct anyway, so losing unions doesn't gain anything; for the same reason
tagged unions just would not be useful)

Boolean type? Sure, but that would be dependent on use. What is wrong with a
bitfield one bit wide instead? What _may_ be useful is a "packed bitfield" type
("packed" in Pascal). Then, an array of packed bits could be expressed.

Multiple return types - yes, "return a b;" (or something like that) would be nice.

Lack of polymorphism - reference "void *". The main problem is that calls
cannot be constructed in C (that is, the standard does not have a "C to C"
FFI).

Anyway, just food for thought.

~~~
benlorenzetti
Unions would be nice if the syntax for accessing substructure members could be
nominally short-circuited. For example:

    struct ab { int a; int b; };

    union c { struct ab ab_short_circuit; int a; };

    union c c1; c1.a = 1; c1.b = 2;

~~~
cesarb
That already exists, as a Microsoft extension, and if the struct is declared
within the union, in standard C:
[https://gcc.gnu.org/onlinedocs/gcc-7.2.0/gcc/Unnamed-Fields.html](https://gcc.gnu.org/onlinedocs/gcc-7.2.0/gcc/Unnamed-Fields.html)

(However, in your example, c1.a is ambiguous, so it won't compile.)

~~~
benlorenzetti
I did not realize this as a Linux user--thanks.

(But this is my point: it should not be ambiguous.)

------
WalterBright
> Language migration: all-at-once or not-at-all. Like any language, C persists
> partly because replacing code is costly. But perversely, the implementation
> technologies favoured by more modern languages offer especially unfavourable
> effort/reward curves for migration. Migration all at once is rarely
> economical; function-by-function is probably the desired granularity.

D's new "Better C" support allows for function-by-function granularity in
building chimera programs that contain any mix of D and C. It's much more than
having merely access from D to functions written in C.

[https://dlang.org/blog/2017/08/23/d-as-a-better-c/](https://dlang.org/blog/2017/08/23/d-as-a-better-c/)

------
TheAceOfHearts
Started reading the article; once I reached the second page I lost interest
and scrolled to the conclusion.

Based on the parts I read, the writing style is needlessly verbose, and the
author is not saying anything which hasn't already been said.

~~~
exikyut
I agree with the verbosity. But many of those in the demographic that would do
away with C belong more to the current pop-culture of "coding!!1" instead of
the carefully-considered, patiently-implemented ancient art of engineering.

The pop-culture "coding" collective as a whole is not generally known for its
appreciation of terse explanations. I will admit that it does favor immediate
gratification, though; there is that.

Ideally, this would do the rounds with different people excerpting different
bits of it. That would spark many little conversations over time, and
contribute to keeping the discussion going. That would be nice.

~~~
mhink
Personally, I appreciated how the author spent a decent amount of time
"unpacking" what he meant- for instance, the "To manage or to mediate"
section. Terseness is only useful when you already have a shared protocol for
understanding the message and a guarantee that it won't be garbled along the
way.

On the other hand, if you're trying to communicate to folks without that
shared protocol (in this case, to people who aren't familiar with/haven't
spent much time using C as a primary language) it's kinda necessary to go a
little further to get the point across.

> Ideally, this would do the rounds with different people excerpting different
> bits of it. That would spark many little conversations over time, and
> contribute to keeping the discussion going. That would be nice.

Absolutely agree.

------
jancsika
> More generally, C’s notion of memory, arranged in an address space, allows
> code to address (point to) and access (read, write, call) objects inhabiting
> that space. Unlike most other languages, those objects need not have been
> defined within our program. In fact they even need not behave in the same
> way as such objects. Despite this, in all cases we access them in the same,
> uniform way.

But can't a systems language like Rust do this, too?

~~~
kibwen
Yes, it can and does. A common usecase for Rust is to write libraries that are
then linked in to higher-level language VMs and interpreters, as a way of
extending those languages, a process which inevitably involves accessing
memory that wasn't allocated by Rust and that behaves according to whatever
invariants the higher-level language imposes.

------
jstewartmobile
I like it. Whenever one of these C-shortcomings articles comes up, we get the
obligatory "rewrite it in Rust!" and "we already rewrote it in JavaScript!"
comments.

Even so, there is A LOT of software already written in C/C++ that isn't going
to be converted any time soon, and if you could tweak the compiler in such a
way that makes those programs just 1% better, that is a REALLY BIG THING.

So, good on you Stephen Kell for this constructive paper!

------
cpcallen
You can pry gcc out of my cold, dead hands when your fancy type-safe high
level languages will let me do things like:

* Fork a running program to enable analysis or serialisation of program state without blocking, or

* Use mmap to allocate all my datastructures on disk, or

* Have full control over what happens when my program receives a signal, or

* exec another program but have it to inherit all the open file descriptors and network connections, or...

~~~
zbentley
I regularly do all of those things . . . in Perl, with either core features of
the language, or ubiquitous, well-supported libraries, in readable, concise
code that doesn't "fight" with the language/runtime.

Something like Java definitely makes some of those things very hard, if not
impossible, but not all high level languages are the same in those regards.

(And yeah, I just called Perl "readable". Bite me.)

------
phkahler
First class communication as a feature. Notice that many of the more popular
languages value the ability to link to C libraries. Most languages have a way
to call (or even statically link) external C code. It's not as easy to do the
same with other languages because they lack this ease of interfacing. It's
easy in C because it's low level, everything is plain old data and function
pointers.

------
Chiba-City
Low level backward compatibility with portability idioms is great for enduring
software assets. Anyone not sweating speed and space between equivalent
possible implementations is a different kind of useful software developer with
different operative quality criteria and execution risks. Rust will hopefully
enjoy a long and boring stable future.

------
ngcc_hk
While not ever close to understanding the details, I still wonder how memory
and pointer manipulation, especially in embedded systems and systems
programming, could be replaced by anything other than assembler.

Of course, once you have the core in C and assembler, you can try to move to a
higher-level or domain-specific "language". Even Word, or this text box in the
browser, is high-level functionality ultimately supported by that core.

------
dibyendu
Hi,

I think C's success is also because it was designed while solving real
problems when writing UNIX. Unfortunately the newer languages that claim to be
"systems" languages were not designed while building operating systems. Here
is Dennis Ritchie's assessment of reasons for C's popularity:

(Extract from
[http://csapp.cs.cmu.edu/3e/docs/chistory.html](http://csapp.cs.cmu.edu/3e/docs/chistory.html)).

C has become successful to an extent far surpassing any early expectations.
What qualities contributed to its widespread use?

Doubtless the success of Unix itself was the most important factor; it made
the language available to hundreds of thousands of people. Conversely, of
course, Unix's use of C and its consequent portability to a wide variety of
machines was important in the system's success. But the language's invasion of
other environments suggests more fundamental merits.

Despite some aspects mysterious to the beginner and occasionally even to the
adept, C remains a simple and small language, translatable with simple and
small compilers. Its types and operations are well-grounded in those provided
by real machines, and for people used to how computers work, learning the
idioms for generating time- and space-efficient programs is not difficult. At
the same time the language is sufficiently abstracted from machine details
that program portability can be achieved.

Equally important, C and its central library support always remained in touch
with a real environment. It was not designed in isolation to prove a point, or
to serve as an example, but as a tool to write programs that did useful
things; it was always meant to interact with a larger operating system, and
was regarded as a tool to build larger tools. A parsimonious, pragmatic
approach influenced the things that went into C: it covers the essential needs
of many programmers, but does not try to supply too much.

Finally, despite the changes that it has undergone since its first published
description, which was admittedly informal and incomplete, the actual C
language as seen by millions of users using many different compilers has
remained remarkably stable and unified compared to those of similarly
widespread currency, for example Pascal and Fortran. There are differing
dialects of C—most noticeably, those described by the older K&R and the newer
Standard C—but on the whole, C has remained freer of proprietary extensions
than other languages. Perhaps the most significant extensions are the `far'
and `near' pointer qualifications intended to deal with peculiarities of some
Intel processors. Although C was not originally designed with portability as a
prime goal, it succeeded in expressing programs, even including operating
systems, on machines ranging from the smallest personal computers through the
mightiest supercomputers.

C is quirky, flawed, and an enormous success. While accidents of history
surely helped, it evidently satisfied a need for a system implementation
language efficient enough to displace assembly language, yet sufficiently
abstract and fluent to describe algorithms and interactions in a wide variety
of environments.

