
The Curious Case of the Longevity of C - pablobaz
https://www.ahl.com/ahl-tech-the-curious-case-of-the-longevity-of-c
======
hellofunk
I've always liked the description of C as just being "raw memory, but with
syntactic sugar."

I think C is still so pervasive for 2 simple reasons (I might be wrong):

1) It's a small language. Compared to many newer languages (Swift, modern C++,
old C++, Objective-C, Java), it's quite tiny. The language itself can be
learned in a short time. (Which is deceptive, because properly using what you
have learned takes longer than a short time.)

2) It's fast. It's just direct memory access, direct and explicit hardware
manipulation. And whether justified or not, a very large portion of the
developer community remains obsessed with speed. I see it all the time, even
when speed isn't going to matter enough to justify C. I often see people going
way out of their way, making their work much more time-consuming, because of
their everlasting pursuit of writing the fastest damn program they possibly
can.

~~~
danieltillett
The thing I love about C is it almost always does what you want fast, but you
are never certain. No matter how much you think you know C, there is always
some new edge case to experience. C is exciting.

~~~
akvadrako
I completely disagree with this. C is the only language I've ever used that I
felt I had mastered. Sometimes I needed to check things in the spec or pause to
consider the C99/C89 discrepancies, but essentially I knew everything about it
after just a few years of daily use.

I've never felt that way with Python, JS, Ruby, Java, C++ or any of the other
languages I've used. They are at least 10X more complex in terms of primitives
and non-trivial interactions.

~~~
barrkel
C is deceptive, though. It encourages a mental model of high-level assembler,
but it's an assembler for an abstract machine, not the actual machine; and
compilers are increasingly making the difference evident.

~~~
akvadrako
Maybe many people do have incorrect assumptions about C, but I wouldn't say
it's deceptive. The abstract nature becomes plain when using an optimising
compiler for a while. And they have been the norm for at least 20 years.

~~~
mrec
Optimizing compilers have been around for yonks, sure, but I think taking
aggressive advantage of UB is relatively recent. See e.g. Regehr in 2012 [1]:

> _The current crop of C and C++ compilers will exploit undefined behaviors to
> generate efficient code [...], but not consistently or well. It’s time for
> us to take this seriously._

[1] [https://blog.regehr.org/archives/761](https://blog.regehr.org/archives/761)

~~~
akvadrako
I see your comment is grey, so I guess people down-voted you and maybe you
want to know why. It's because you are wrong. Compiler optimisations have
always been relevant; even in the 90s they would omit entire blocks of code
or reorder operations, for example.

I assume it's more aggressive now, because undefined behavior should never be
relied upon. That's the point.

~~~
mrec
I think you're reading me as saying something much stronger than what I
actually meant. When we talk about C being "deceptive" we don't need to
consider optimizations that respect the as-if rule. (Well, maybe when looking
at the C->asm mapping or trying to benchmark, but that's not the level most
people are working at.)

The trend toward UB-based optimizations seems significant to me because they
can and will take a bunch of C code which _looks_ superficially reasonable,
and which _used_ to do X as intended, and suddenly (and perfectly legally)
make it start doing Y instead. I assumed that's what barrkel was alluding to
above.

And yes, I'm sure there are old optimizations which will also do that, but
basic things like hoisting and reordering and unrolling and dead-code
elimination don't fall into that category.

~~~
akvadrako
_> And yes, I'm sure there are old optimizations which will also do that_

So then it's not new behavior.

------
userbinator
_Add to this the disastrously central role that C has played in our ongoing
computer security nightmare_

One sees plenty of news about the downsides, but on the other hand the lack of
security has also enabled console homebrew, iOS jailbreaking, Android rooting,
and a bunch of other creatively liberating activities that involve doing
something not officially sanctioned. It's worth pondering whether we would be
better off had everyone switched to something much "safer" (maybe even with
formal verification) a long time ago. I think it's debatable.

~~~
pmorici
Feels like the old claim that Macs didn't catch viruses like Windows did. It
turned out that no one bothered to make viruses for something no one used. In
other words, the "security nightmare" is as much due to C's popularity and
ubiquity as it is to any particular detail of the language itself.

Maybe the most secure language is one so esoteric, complex, and
incomprehensible that no one uses it to make anything, and therefore it is
never exploited.

~~~
baldfat
That lie? Apple, on the other hand, recommended that people install anti-virus
programs until that got pointed out. [https://www.cnet.com/news/apple-suggests-mac-users-install-antivirus-software/](https://www.cnet.com/news/apple-suggests-mac-users-install-antivirus-software/)

I hate that it is okay to lie. Of course any OS can have viruses and malicious
code. It runs code.

------
jwilk
> _We rely on Python which is written in C89, a standard older than many of
> our developers._

Python ≥ 3.6 requires (some features of) C99.

> _Perhaps the reluctance to move to a more ‘modern’ C is that even C99 can’t
> legally buy a drink_

Perhaps it's because MSVC didn't fully support anything newer.

Source:
[https://www.python.org/dev/peps/pep-0007/#c-dialect](https://www.python.org/dev/peps/pep-0007/#c-dialect)

~~~
ericfrederich
Yeah... good luck taking a newer version of Python and compiling it on Visual
Studio 2012. I had to jump through hoops even to get 2.7 to compile, and wound
up abandoning it because it caused problems down the line with ctypes.

~~~
pjmlp
VS2012 is quite pre-historic; we already had 2013 and 2015 (with three
updates), and are now on the third 2017 update.

C99 support was added in VS2015 to the extent required by C++14, similarly C11
support was added in 2017 to the extent required by C++17.

~~~
dragonwriter
> VS2012 is quite pre-historic

You don't work in the enterprise domain, which is VS’s core market, do you?

~~~
jdmichal
Still on Visual Studio 2010 here... The pain is that _everyone_ has to move at
once due to the way the project files are upgraded when you open them in a
newer version.

~~~
T-hawk
Microsoft fixed this after 2010. VS 2012 and onwards can open a 2010 project
file with no changes. A mixed population of Visual Studios from 2010+ can work
on the same project file.

~~~
jdmichal
That is actually extremely good to know. Thanks for the information!

------
reacweb
"C is the desert island language". see
[http://crypto.stanford.edu/~blynn/c/intro.html](http://crypto.stanford.edu/~blynn/c/intro.html)

For me, Linux is a huge development environment dedicated to C programming.
Sections 2 and 3 of the manual pages give the C API. C gives the most direct
access to the kernel. IMHO we should try to find a replacement for C, and the
first target for this language should be the Linux kernel. If you can conceive
a language that Linus Torvalds accepts, the rest of the planet will follow ;-)

~~~
kwhitefoot
> If you can conceive a language that Linus Torvalds accepts

Good luck!

------
dingo_bat
C very closely resembles my mental model of the computer. I "think" in C when
I'm thinking about an algorithm. It's almost exactly the same as the
pseudocode people sometimes use for explaining an algorithm. I think this
property contributes to its popularity. No "artificial" abstractions like
classes, templates, modules, imports, etc. Just think of the first step you
would take to solve a problem, and write it. Then think of the next step, and
write that. And you get fast, portable code that produces tiny binaries! What
more do you need?

~~~
unboxed_type
Thou shalt embrace the true power of functional programming

------
simias
I think that during the nineties the focus switched from compiled,
small-runtime, unmanaged languages to higher-level, garbage-collected, "big
runtime" languages like Java or the scripting languages. One notable exception
was C++, but its complexity and the difficulty of interfacing it with other
languages meant that it couldn't completely replace C.

So for a long time if you wanted to do low level, fast, small, reasonably
portable code you simply didn't have much of a choice.

Rust is the first language in a long time that I think could end up replacing
C in all of its use cases, but there's still some way to go. The main
difficulty I can foresee is that, like C++, it's significantly more complex
than C and harder to interface with other programming languages (you end up
exposing an unsafe C interface).

I think C is here to stay. The billions of lines of C code out there won't be
rewritten in a fortnight.

~~~
zid
Rust would need severe changes to replace C where C shines: being able to
cobble object files together into something that works. I can call malloc from
userspace, and 14 different previous invocations of a C compiler later, I have
registers with my parameters in them, touching page tables.

ABIs, APIs etc have to be able to propagate size and type information that is
basically how C defines them if you want anything to work (pointers, integers,
floats).

C embeds a remarkable amount of cross-object-file information about a function
just with 'int f();'. We know the range of valid values, we know how big the
object is, etc.

Rust literally has none of these mechanisms and likely never will. Interfacing
Rust to other Rust, such that you get any benefit at all from using it, would
require megs of type, scope, size, etc. information to crawl across the
calling convention.

There's a reason C++ names look like CintFuncf!!!@34$902; Rust doesn't have or
plan to have anything like this yet, and it'd need to be 20x more complicated.

If you get rid of Rust's magic of being able to remove guards and checks by
inferring things about the data, to make it conform to calling conventions
instead of using types, you have just invented C.

~~~
simias
Well, you get what you pay for. Rust can call and expose C-style ABI functions
natively so you can always do what C does. If you want a safer Rust API you
can also have that but it makes interfacing with 3rd party code more
difficult.

I disagree somewhat with your assertion that:

> C embeds a remarkable amount of cross-object-file information about a
> function just with 'int f();'. We know the range of valid values, we know
> how big the object is, etc.".

First of all, this is not embedded in the object file but rather in the
header; Rust doesn't need that. The object file only tells you the name of the
symbol, and that's about it. In particular, that means the C linker can't
detect an ABI mismatch if the prototype of a function or the layout of a
struct has changed; as long as the symbol is found, it'll link just fine.

Furthermore, even with the header available, a lot of the time C prototypes
are not sufficient to know how to use a function. Take for instance:

    
    
        sometype_t *some_function(someothertype_t *param, int flag);
    

Is param an input or an output parameter? Do I own the return value, or is it
allocated by the function? Or maybe it's just a member of param, so it has the
same lifetime? I know that flag is an int, but that doesn't really tell me
which values I can put in there. In Rust, function signatures tell you all of
that, and it's enforced by the compiler.

So yeah, there's a significant overhead in Rust here, but it's for a good
reason IMO. It does make it harder to make quick hacks with the linker though.

> There's a reason C++ names look like CintFuncf!!!@34$902, Rust doesn't have
> or plan to have anything like this yet, and it'd need to be 20x more
> complicated.

Are you talking about name mangling? Rust does that too, but that's not really
the same issue; it's just about generating unique "flat" names for objects
that include namespacing and generic info. Like if you have a "fn foo<T>(t:
&T)" and you instantiate it with T = i32 and T = String, you need to generate
two symbols. C doesn't need that because it doesn't have namespaces, generics
or overloading.

~~~
kibwen
_> C doesn't need that because it doesn't have namespaces, generics or
overloading._

Overloading as in operator overloading? Because I don't see how that would
affect symbols.

Though along with namespaces and generics, there is one thing that Rust also
bakes into symbols: versioning information. This is how, in the case of deep
dependency graphs, it's possible for a finished binary to include multiple
copies of the same library in the event that multiple versions are
transitively depended upon. But that doesn't add any complexity to symbol
mangling on its own, because if you already have namespaces then you can just
treat it as a namespace that only the compiler can see.

~~~
simias
No, I meant parameter-based function overloading like C++ has (or is that the
wrong term? I forget).

So like:

    
    
        int do_something(int param);
        int do_something(double param, char *param2);
    

I don't see how you can avoid some form of name mangling since obviously you
can't just define two duplicate "do_something" symbols.

~~~
steveklabnik
Rust does not currently support parameter-based function overloading, and
there are no plans to in the near-term, if ever.

(This isn't the only way to end up with duplicate symbols, just trying to make
it clear that this specifically won't be a problem with Rust.)

------
Santosh83
C works everywhere, and everyone knows C, or can learn it in a couple of weeks
and get coding. This is a hard combination to wholesale replace, but then not
even Rust aims to do that. Instead it aims to chip away gradually, and I don't
see why it can't do _that_ .

~~~
geezerjay
> C works everywhere, and everyone knows C, or can learn it in a couple of
> weeks and get coding.

You hit the nail on the head. The go-to tutorial and reference book for C is
«The C Programming Language» by Brian Kernighan and Dennis Ritchie, which
provides a complete and very thorough description of C and its standard
library in less than 260 pages. That's unbeatable.

As a comparison, the go-to book for C++, «The C++ Programming Language» by
Bjarne Stroustrup, runs beyond 1200 pages and still doesn't cover some
fundamental aspects of C++, and even «The Rust Programming Language» by Steve
Klabnik and Carol Nichols, a book on a programming language designed to eat
away at C's market, is over 400 pages.

This speaks volumes about the effort required by anyone to get on their feet
and be productive with these programming languages.

~~~
steveklabnik
Different books have different goals. K&R is short, but the C standard is ~700
pages. The Rust book is over 400 pages, but contains entire chapters of just
"let's build a project together." There's no spec yet like C has. Writing
styles also differ dramatically; K&R are more concise than I am, and don't
dive into some details as much.

Then, you may also consider the framing of "simple" vs. "easy"; they're not
the same thing. And that's even if we agree that C is simple in the first
place, which I personally don't.

Basically, I don't think that comparing page counts of random documents says
anything meaningful about language complexity.

~~~
crispinb
As an aside, I don't have the time, or any purpose beyond curiosity, to learn
Rust right now. But having glanced through the Rust book a few times, I wish I
did. It's the sort of book (a bit like K&R in this sense) that could lead one
astray...

~~~
steveklabnik
Thank you; even saying "a bit like K&R" at all is high praise to me.

That said, not everyone likes my writing style, so I'm glad that there are
other books coming out as well.

------
jdblair
My 2 cents, from the perspective of someone who built a career on writing
software in C for small devices.

C has longevity because it's compact and provides a straightforward model of
memory on the machine. I understand the desire to use a safe,
garbage-collected language when you're serving HTTP requests, but sometimes
you need to access the hardware: twiddle a GPIO or read from a DMA device.
This is where I've yet to see a good replacement for C, and by extension C++
(because it's fundamentally still just C). Maybe Rust is there, but I don't
have the experience to judge.

[edited for clarity]

~~~
steveklabnik
To be clear, since I have seen this pop up lately, Rust doesn't have a GC and
absolutely gives you access to hardware. The challenges for it on embedded are
mostly toolchain issues.

~~~
scruple
So, are vendors working on Rust toolchains for their hardware? I'm asking from
a place of genuine curiosity; I left embedded a number of years ago. I've
always viewed the 'Rust breaking into embedded' problem as a toolchain issue
as well. But my intuition is that vendors will be slow to release Rust support
in toolchains because the industry is still firmly in the "we write in C"
camp.

~~~
analog31
This is an interesting thread...

[https://news.ycombinator.com/item?id=14071282](https://news.ycombinator.com/item?id=14071282)

In my view the industry inertia is a chicken and egg thing. You want a chip
that runs your existing code. Then you want to write new code on your existing
chip. And you have ongoing projects at different stages, sharing chips and
code.

------
Animats
C was originally a language for modest size programs on machines with 64K
address spaces. C isn't a bad language for a thousand line program. It's a
terrible language for a million line program. Just to get memory safety,
there's too much that has to be manually coordinated across compilation unit
boundaries.

The three big questions in C are "how big is it?", "who owns it?", and "who
locks it?" The language gives no help with any of those issues. Most later
languages deal with some or all of them.

~~~
dxhdr
C could use a few features to help with million line codebases, like
namespaces. Beyond that, programs that large require good human engineering to
manage complexity.

------
colechristensen
I'm quite bored by the obsession some have with replacing C.

Yes, it does exactly what you tell it to do, and that's dangerous. It was a
high-level language 25 years ago, but today the metaphor should be assembly.
Nobody would criticize assembly for letting you shoot yourself in the foot; C
is much the same.

The biggest threat to the tech sector is Linus dying and being replaced by
some charlatan who insists on replacing C and using Jira.

</offtopic>

~~~
edmccard
> Nobody would criticize assembly for letting you shoot yourself in the foot,
C is much the same.

But I bet they would criticize _programs_ for being written in assembly, if
they didn't need to be.

If you could have a language with all the performance of C without the
footguns, why wouldn't you want that?

~~~
chii
> a language with all the performance of C without the footguns, why wouldn't
> you want that?

I've yet to see a language that actually delivered on this claim.

~~~
bluejekyll
Perhaps you aren’t looking?

Rust delivers on all these claims. And there have been others before it. Rust
hits all the sweet spots for me.

~~~
rbehrends
Rust sits in a really weird spot. It's too high-level for a lot of low-level
work, and too low-level for a lot of high-level work.

Example for the first case: writing a garbage collector runtime in Rust has
most of the same problems as in C, because you have to write most of it in
unsafe code, where Rust inherits much of C's undefined behavior w.r.t.
pointers via LLVM. In short, you have largely the same problems, and have
added a hard dependency on Rust.

For high-level work, almost all [1] of what Rust gives you is memory safety
and that comes at the price of dealing with a LOT of extra language
complexity. But aside from dynamic memory management, memory safety isn't hard
(we did that back in the 1970s and 1980s), and for dynamic memory management,
we can get memory safety with a garbage collector and much less complexity. So
Rust is primarily of interest for those use cases where garbage collection is
not an option.

While that still gives you plenty of interesting use cases for Rust, there are
also plenty of programming niches that it serves poorly.

[1] People will also mention "fearless concurrency", but guaranteeing the
absence of data races is not hard. That more languages don't do it is partly
because they simply neglected that aspect [2], but also because any mechanism
– including Rust's – for doing so inherently constrains your options w.r.t.
concurrency [3]. Plus, avoiding data races is the easy part of getting
concurrency right.

[2] Concurrent Pascal had guaranteed absence of data races in the absence of
pointers in the 1970s, Eiffel had done it with pointers in the 1980s, and
there was a plethora of research in the 1990s to do it in various other ways.

[3] For example, there are plenty of use cases, such as certain idempotent
operations, where data races are not only perfectly safe, but also desired for
performance. There are also use cases where you can prove that no data races
occur, but a type system cannot easily capture that.

~~~
tdbgamer

_> It's too high-level for a lot of low-level work, and too low-level for a lot of high-level work._
    

This is true in the very specific cases that you gave, but I believe those are
the minority of use cases, not the majority.

Even the example of writing a GC that requires tons of unsafe code is not a
good argument for making _all_ the code unsafe. All the unsafe GC code would
be abstracted away into a module, making it more obvious to those reading it
that they need to be watchful for undefined behavior. Now you can proceed to
write the rest of the project in safe, simple Rust.

    
    
_> People will also mention "fearless concurrency", but guaranteeing the absence of data races is not hard_
    

Maybe for developers that are very familiar with the race conditions of
parallel code, but definitely not for most people. Even seasoned developers
will make mistakes with simple multithreaded code.

Also, the reasoning behind "x is easy, so why do I need my language to check
it for me" is questionable. The whole point is that you have a guarantee. Have
you never had a compiler catch a stupid mistake before it happened and felt
relieved? I doubt it. Now imagine if, instead of debugging stupid data races
in your parallel code, you could spend that time optimizing and improving it.
I fail to see how this can be viewed as a negative.

Sure, Rust doesn't cover 100% of use cases, but it definitely covers more than
you're implying. It's low-level enough that Redox OS can be written in Rust,
but high-level enough that Firefox is now outpacing other browsers and
parallelizing everything with Rust.

~~~
rbehrends
> All the unsafe GC code would be abstracted away into a module and would be
> more obvious to those looking at it that they will need to be watchful for
> undefined behavior.

That code that could be "abstracted away" would be "virtually all the code" in
my example.

> Maybe for developers that are very familiar with the race conditions of
> parallel code, but definitely not for most people. Even seasoned developers
> will make mistakes with simple multithreaded code.

I'm not talking about manually guaranteeing absence of data races. I mean
absence of data races as a language feature.

> Also, the reasoning behind "x is easy so why do I need my language to check
> it for me" is questionable.

This is not at all what I was talking about. You completely misunderstood me.

~~~
steveklabnik
I think you'd be surprised; even operating systems, the canonical unsafe
activity, have a relatively low percentage of unsafe code. For example,
[https://doc.redox-os.org/book/introduction/unsafes.html](https://doc.redox-os.org/book/introduction/unsafes.html) says

> A quick grep gives us some stats: the kernel has about 70 invocations of
> unsafe in about 4500 lines of code overall.

~~~
rbehrends
> I think you'd be surprised; even operating systems, the canonical unsafe
> activity, have a relatively low percentage of unsafe code.

My example was a GC runtime, not an OS kernel. If I have only very little
unsafe code, then I could just do that in C and the rest in whatever other
high-level language suits my project and not see any difference.

The bigger problem – where Rust failed to pick some low-hanging fruit, IMO –
is that "unsafe" is too much like the bad parts of C. There is no middle
ground between "everything is defined and memory-safe" and "everything may
explode at a moment's notice".

My most practical need for a low-level language is a language that is in that
in-between position: semantics that remain easy to comprehend and predictable
even if there are no static guarantees, and where I have to use a different
strategy for software assurance. The point here is that for such a language I
_can_ resort to alternate validation tools (think Ada and SPARK for an
example). Rust's unsafe mode does not handle that situation well because (like
C) it does not provide a foundation for alternate validation strategies.

It's perhaps also worth pointing out that I have a formal methods background.
In short, I've done formal specifications/proofs for software before. In this
context, safe Rust has a fairly high cost for only providing memory safety
(and few other guarantees), and unsafe Rust is not a good foundation (or at
least, not much better than C) for bringing advanced tools to bear.

~~~
jononor
There are several things that make unsafe Rust better than C with respect to
ensuring correctness: for example, the stronger, more expressive type system
and fewer instances of undefined behavior for common operations. The
standardized, modern tools for unit testing and fuzzing are also nicer in
Rust.

------
jstewartmobile
As maddening as it may be, my gut tells me it has more to do with the
preprocessor than anything else. When it comes to wrestling with
cross-platform differences, I don't know of a corresponding feature in Rust or
Go that is equally powerful (or equally ugly!).

As long as computing remains a battleground where rich assholes put vastly
different wrappers around the exact same shit so they can wring more money out
of us, I will probably be writing in that abominable assembly language w/
turing-complete string-paster for many years to come.

~~~
kibwen
Speaking of the C preprocessor, I found this recently: from the Firefox source
code, a red-black tree written entirely in 800 lines of C macros:
[http://searchfox.org/mozilla-central/rev/f54c1723befe6bcc7229f005217d5c681128fcad/memory/build/rb.h](http://searchfox.org/mozilla-central/rev/f54c1723befe6bcc7229f005217d5c681128fcad/memory/build/rb.h).
Submitted for your enjoyment. :)

~~~
throwme211345
Why would someone do that?

~~~
dboreham
Because long ago (and probably in a different product) making a function call
was significantly costly and worthwhile optimizing away via macros.

------
jerrre
> Perhaps now it is time for a new generation to make their mark, will their
efforts last 40 years? Rust anyone?

I'd think it's quite difficult for a new language to replace a language whose
main attractive points are its longevity, stability (not of the code and
programs coming out of it, but of the syntax and tools, etc.), and broad
support.

There are many fields where C is easy to beat. But I don't think the next 40
year lasting language will be a C replacement.

~~~
adrianratnapala
Indeed. I think the obvious candidate is JavaScript.

C is what it is partly because of its relationship with Unix, and also because
it was the language that gave the most straightforward access to "the machine"
-- whatever that is.

And the hard truth is that The Machine was the only real universal platform
around. But now we have this thing called The Web, which is not _quite_ as
universal but is getting there, and is much more like a single platform. JS
has a special relationship with it.

The only thing that might upset this JS+Web applecart is WebAssembly. A "C of
the Web" (call it W) would be a language high level enough to be pleasant to
use, but which would have little or no runtime beyond what is native to
WebAsm, and thus anyone could use libraries written in W.

~~~
tim333
I note looking at webassembly.org's getting started page that their hello
world example is written in C. Maybe the "C of the Web" will actually be C.
([http://webassembly.org/getting-started/developers-
guide/](http://webassembly.org/getting-started/developers-guide/))

~~~
kibwen
The ability to compile C code to WASM will be invaluable for porting well-used
C libraries to the web, but given that the goal of WASM is to define a ("more-
or-less") language-agnostic bytecode, I see little reason why people would
prefer to use C when they could continue using Javascript, or, eventually,
whatever the 40-years-from-now equivalent of Python is. (Of course, you'll
always be able to use C if you do want to, so it's not like it's ever going to
risk vanishing.)

~~~
krapp
> I see little reason why people would continue using C when they could
> continue using Javascript, or, eventually, whatever the 40-years-from-now
> equivalent of Python is.

People will use C or Python or whatever on the web for the same reasons so
many people transpile to javascript today, including that they simply don't
like javascript and would rather write code in a language they prefer.

~~~
kibwen
Indeed, but, as much as I admire Typescript, the vast majority of people
aren't transpiling anything to Javascript. The grandfather comment was
concerned with whether the future "C of the web" would literally be C, but I
don't see how the introduction of WASM will start convincing the majority of
people to start shipping webapps written in C. C is useful today as a low-
level lingua franca, but on the web WASM will be the lingua franca, by
definition, and C will be competing with many other languages that compile to
WASM.

~~~
krapp
> The grandfather comment was concerned with whether the future "C of the web"
> would literally be C, but I don't see how the introduction of WASM will
> start convincing the majority of people to start shipping webapps written in
> C.

WebAssembly supports C/C++ _now_, and there is a lot of existing code that
could be tested or converted. WebAssembly will probably get garbage collection
and be able to support many other languages soon, but by then network effects
may have taken over - especially if most online tutorials for WASM cover C or
C++.

------
zeveb
> In 1987 this code took around half an hour to run, today 0.03 seconds.
> Progress eh?

The thing I take away when reading this is that in 1987 there were superior
languages to C which — while usable — were just a _bit_ too sluggish, and
hence lost out. I'm thinking specifically of Smalltalk & Lisp, but I'm sure
that there are others (maybe ML was around that far back?).

Well, if a program which once ran in half an hour can now run in 300
milliseconds, I think that maybe we should reconsider using just a little of
that extra performance to run a better language environment.

Imagine if we had OSes written in safe languages which catch errors rather
than allowing programs to misbehave. Honestly, it's hard for me to do because
I've become so accustomed to crashes. Recently I've been doing some X
programming with Lisp, and it's amazing — even when I make a mistake, things
keep on running properly. I just see an error and that's that.

Well, most of the time, anyway. Nothing's perfect. Still, the experience is
_orders_ of magnitude better than writing C. I mean that literally, not
figuratively.

(Also, as an aside: when I switched tabs to the article, it displayed for just
a second, then disappeared. Reader mode didn't even work. I had to enable
JavaScript just to view some text. What gives‽)

~~~
andrewmcwatters
> reconsider using just a little of that extra performance

Welcome to software development, where the code is shit, and the hardware
improvements don't matter.

------
raarts
> Add to this the disastrously central role that C has played in our ongoing
> computer security nightmare

Hindsight is 20/20.

C was simply the pervasive language when the internet happened. C was designed
when the world was a friendlier place, where buffer overflows were not yet a
security concern. It was embraced by everyone because of its speed and
simplicity.

No C programmer was raised/educated to have security at the front of his
mind. Hell, many programmers of any language still aren't.

Stop bashing C and start using OAuth instead of inventing your own
authentication schemes.

~~~
pjmlp
Hoare in 1981 referring to C.

"Many years later we asked our customers whether they wished us to provide an
option to switch off these checks in the interests of efficiency on production
runs. Unanimously, they urged us not to--they already knew how frequently
subscript errors occur on production runs where failure to detect them could
be disastrous. I note with fear and horror that even in 1980, language
designers and users have not learned this lesson. In any respectable branch of
engineering, failure to observe such elementary precautions would have long
been against the law."

Fran Allen on "Coders at Work," about C.

"Oh, yeah. That would have been fine. And, in fact, you need to have something
like that, something where experts can really fine-tune without big
bottlenecks because those are key problems to solve. By 1960, we had a long
list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are
higher-level than C. We have seriously regressed, since C developed. C has
destroyed our ability to advance the state of the art in automatic
optimization, automatic parallelization, automatic mapping of a high-level
language to the machine. This is one of the reasons compilers are ...
basically not taught much anymore in the colleges and universities."

------
throwaway2016a
C was the first language I learned in the mid-90s and I still go back to it
once in a while when I want raw speed and direct memory access. I know other
languages, sure, but I have yet to find a suitable replacement that lets me do
what I want to do on a machine level without getting in the way.

Lately I've been doing some of the stuff I would normally do in C in GoLang
instead. But it still can't match C in performance. (beats the hell out of
Node.js though which is where I do most of my middle-tier / front-end code)

It also makes an excellent foot gun for the same reason.

------
blub
If something is to replace C (or C++), I hope it's not Rust, because Rust is
even more complicated than C++.

People should not have to bend their mind to satisfy some tool like the borrow
checker, the tool should do the right thing and allow them to express their
ideas in code with as little effort as possible.

~~~
kibwen
_> Rust is even more complicated than C++_

Example? In my experience, those saying this have either never used Rust, or
never used C++. A strict compiler does not a complicated language make. To
wit, if the response is "lifetimes", note that lifetime tracking is also
critically important in C++ (not to mention C), it's just that C++ compilers
give one far less help in doing so than Rust does.

~~~
blub
Take for instance [http://cglab.ca/~abeinges/blah/too-many-lists/book/README.html](http://cglab.ca/~abeinges/blah/too-many-lists/book/README.html).
I could feel the frustration when reading those compiler errors. I mean, it's
a list, one of the simplest data structures to implement; I was doing that
stuff in high school.

But it's not just about a particular example, instead I get a general feeling
of inaccessibility when reading Rust code. It's not enough to know C or C++,
one has to understand the Rust way. This might seem unfair to complain about,
but I didn't have any such issues when learning e.g. Swift or Objective-C, to
give an example of non-GC languages.

~~~
steveklabnik
Side note, we have completely re-done the format of compiler errors since that
was published. To take a random example from 2.1:

    
    
      src/first.rs:1:1: 4:2 error: illegal recursive enum type; wrap the inner value in a box to make it representable [E0072]
      src/first.rs:1 pub enum List {
      src/first.rs:2    Empty,
      src/first.rs:3    Elem(T, List),
      src/first.rs:4 }
      error: aborting due to previous error
    

is now

    
    
      error[E0072]: recursive type `List` has infinite size
       --> src/main.rs:1:1
        |
      1 | pub enum List {
        | ^^^^^^^^^^^^^ recursive type has infinite size
      2 |     Empty,
      3 |     Elem(i32, List),
        |               ----- recursive without indirection
        |
        = help: insert indirection (e.g., a `Box`, `Rc`, or `&`) at some point to make `List` representable
    

Users have reported a lot less frustration after these changes, and we're
always trying to make them better. For more on this effort, see
[https://blog.rust-lang.org/2016/08/10/Shape-of-errors-to-come.html](https://blog.rust-lang.org/2016/08/10/Shape-of-errors-to-come.html)

~~~
marcosdumay
I'd like to thank you a lot for those error messages. As somebody learning the
language, they are incredibly helpful.

~~~
steveklabnik
I will tell that to the people who actually did said work, I'm sure they will
appreciate it :D

------
tlb
I wonder if the danger of C is something people like, deep down. I spent some
time talking to people who do dangerous jobs, thinking about automating their
jobs with robots. Most of them _liked_ the danger. They enjoyed having
mastered the art of not getting killed by falling tree branches, or live
wires, or runaway trucks, or whatever the job involved. They felt satisfaction
at doing a job where, if someone else tried to do it, they'd probably get hurt
on their first day. You can imagine various ways this could have evolved in
hunter-gatherer societies.

Most new languages are far safer than C. For some that's the main selling
point, for others it's just a side-effect of having GC. I wonder if you could
get a lot of traction with a language as expressive as Ruby, but with footguns
galore. I'm not sure that space has been adequately explored.

~~~
AnimalMuppet
I wouldn't put it as "danger", exactly. It's freedom. The language won't trap
you, pretty much ever. You need to write directly to hardware registers? Fine.
You need to drop into assembly for a bit? Go for it. However down-and-dirty
you have to get, you can do it.

Whereas with some other languages, if you ever need to get more down-and-dirty
than the language is designed to allow, even for just a little bit of your
program, well, too bad - you're not allowed to.

~~~
pjmlp
I could do all of that in Turbo Pascal, Turbo Basic and Modula-2 compilers.

~~~
AnimalMuppet
Ah, I had forgotten that Turbo Pascal let you drop into assembly.

But could you write directly to a hardware address? I don't know, but I doubt
it - Turbo Pascal ran on the PC, which didn't have memory-mapped hardware. You
could do an outp. But in C, you could do

    
    
      *(unsigned long*)0xFFFE0004 = 0x8000FF2C;
    

which was either absolutely necessary or absolutely stupid, depending on
whether or not you had custom memory-mapped hardware at the specified address.
Could you do that in Turbo Pascal?

What's more, Turbo Pascal was PC-only, at least initially. If you had to work
on a 68000-based embedded system running PDOS ( _not_ PC-DOS), and you had to
use Pascal, it _wasn't_ Turbo Pascal. It was just bog-standard Pascal. Inline
assembler? No way. Pointer to a variable-sized array? Can't do it (you cannot
give it a type). It was _painful_ to work in that environment. And that
environment was Pascal as specified by the language standard. Turbo Pascal
fixed most of the problems, but it trampled all over the standard to do it.
(At least it did have separate compilation.)

Why was it painful to work in that environment? First, we couldn't access our
custom hardware without having to link to an assembly-language subroutine. In
doing so, we lost type safety, which led to at least one hard-to-find crash.
It also was just much more difficult and error-prone to write those routines.
Second, we wanted to have a user-specified variable-sized array. We wound up
having to create the largest array we could given the memory the machine had,
and only using the part that the user specified, which was a pretty ugly
kludge. Third, Pascal was just clumsier to use than C. It was more verbose and
more finicky. (One part I remember in particular was the semicolons. You
couldn't have a semicolon on the last statement in a block. As you added or
removed statements, you kept having to fiddle with the semicolons, including
on lines other than the ones you were changing.)

~~~
pjmlp
Something like,

    
    
    var myMemoryAddr : Longint absolute $FFFE0004;
    

There were Turbo Pascal compatible compilers for Amiga, and it was the most
common dialect, to the point ISO Extended Pascal compatibility was mostly
ignored.

There are still companies selling Turbo Pascal compatible compilers for
embedded systems.

[https://www.mikroe.com/mikropascal/](https://www.mikroe.com/mikropascal/)

Inline assembly is not part of C; it is a common compiler extension whose
semantics are not even portable across compilers.

Back in the day when Pascal compilers were more widespread, C was hardly much
more portable, with each compiler having its own little world between K&R C
and ANSI C89.

------
mmjaa
It shouldn't be such a surprise, unless one is ignorant of a stable,
decades-old computing maxim:

Old computers (and by definition, old computer software) never die. _The
users do_.

------
JacobiX
Even in relatively new fields like machine learning, nearly every piece of
software we use at work is written in C, is a wrapper around a C library, or
in some cases is written in Fortran (a 60-year-old programming language): we
make extensive use of PyTorch, torch, numpy, LAPACK, SciPy, etc.

------
tudorw
PHP is having its 21st birthday this year and is in use on around 200 million
websites, I think it'll make it :)

Logo is 50 :)

------
rbanffy
The "AHL" name brings me fond memories of Creative Computing magazine and
their editor, David H. Ahl.

Sadly these AHLs are unrelated.

------
makecheck
Binary compatibility is as much a reason as the language itself.

For C++ I remember spending years trying to carefully track compiler
versions, precise build setups, and so on, or the library would probably fail
to link. I have never had a binary compatibility problem with a plain C
library.

------
asnyc
C exposes you to a low-level programming environment, memory management and
system internals. It's a great base for programmers to build upon. Even after
more than a decade, I still have fond memories of K&R C - a must-read for
programmers.

------
Veedrac
C was so successful because it had, and to a large extent still has, no
competitors.

~~~
dboreham
It was also free.

~~~
pjmlp
Only on UNIX, until Sun started to charge money for their SDK, which led to
many devs starting to help GCC development.

------
yellowapple
"We use git for source control. hg, largely written in Python, was started
within a few days with the same goals. Who won?"

I somehow doubt git's popularity has a whole lot to do with it having been
written in C.

------
walshemj
So COBOL and Fortran are still going strong in the areas they're appropriate
for.

~~~
goatlover
Lisp is still around too. Smalltalk as well.

------
qwtel
c.f.
[https://en.wikipedia.org/wiki/Lindy_effect](https://en.wikipedia.org/wiki/Lindy_effect)

------
jasonmaydie
It's time we admitted that bad code isn't because of the tools we use but a
function of how we write code. You can write bad code even in "safe"
languages, and leak memory even in languages that come with a GC, etc.

~~~
Ar-Curunir
Yes, but some languages make it far easier than others.

------
faragon
C is simple and beautiful.

~~~
glandium
C _looks_ simple, but is not. That's part of the problem with it.

