The Curious Case of the Longevity of C (ahl.com)
178 points by pablobaz 50 days ago | 301 comments



I've always liked the description of C as just being "raw memory, but with syntactic sugar."

I think C is still so pervasive for 2 simple reasons (I might be wrong):

1) It's a small language. Compared to many newer languages (Swift, modern C++, old C++, Objective-C, Java), it's quite tiny. The language itself can be learned in a short time. (Which is deceptive, because properly using what you have learned takes longer than a short time.)

2) It's fast. It's just direct memory access, direct and explicit hardware manipulation. And whether justified or not, there is a very large portion of the developer community that remains obsessed with speed. I see it all the time, even when speed isn't going to matter enough to justify C. I often see people going way out of their way, making their work much more time-consuming, because of their everlasting pursuit of writing the fastest damn program they possibly can.


> I often see people going way out of their way, making their work much more time-consuming, because of their everlasting pursuit of writing the fastest damn program they possibly can.

Problem is you have the complete opposite on the other side: folks who don't even think about speed (usually because their development machine is quite powerful, so they don't notice). Thus we end up with slow, resource-hogging programs that have no good reason to be.

There is a middle ground to be found I think.


Some programs are resource hogs. Most programs don't justify the effort of being preoccupied with speed. I think it's fine for a developer to place runtime speed low on their priorities until they see an actual problem. Then, if something is hogging resources, they can do something about it. The thing is, for most tasks this will not be necessary, and for many programs it will not be a problem even if you use the slowest algorithms.


The thing I love about C is it almost always does what you want fast, but you are never certain. No matter how much you think you know C, there is always some new edge case to experience. C is exciting.


I completely disagree with this. C is the only language I've ever used that I felt I had mastered. Sometimes I needed to check things in the spec or pause to consider the C99/C89 discrepancies, but essentially I knew everything about it after just a few years of daily use.

I've never felt that way with Python, JS, Ruby, Java, C++ or any of the other languages I've used. They are at least 10X more complex in terms of primitives and non-trivial interactions.


That is where the excitement comes from. You think you know C, but when you least expect it, something comes up that bites you in the backside.

I love C, warts and all, but you should treat it like a tamed wild animal.


C is deceptive, though. It encourages a mental model of high-level assembler, but it's an assembler for an abstract machine, not the actual machine; and compilers are increasingly making the difference evident.


Maybe many people do have incorrect assumptions about C, but I wouldn't say it's deceptive. The abstract nature becomes plain when using an optimising compiler for a while. And they have been the norm for at least 20 years.


Optimizing compilers have been around for yonks, sure, but I think taking aggressive advantage of UB is relatively recent. See e.g. Regehr in 2012 [1]:

> The current crop of C and C++ compilers will exploit undefined behaviors to generate efficient code [...], but not consistently or well. It’s time for us to take this seriously.

[1] https://blog.regehr.org/archives/761


I see your comment is grey, so I guess people down-voted you and maybe you want to know why. It's because you are wrong. Compiler optimisations have always been relevant; even in the 90's they would omit entire blocks of code or reorder operations for example.

I assume now it's more aggressive, because undefined behavior should never be relied upon. That's the point.


I think you're reading me as saying something much stronger than what I actually meant. When we talk about C being "deceptive" we don't need to consider optimizations that respect the as-if rule. (Well, maybe when looking at the C->asm mapping or trying to benchmark, but that's not the level most people are working at.)

The trend toward UB-based optimizations seems significant to me because they can and will take a bunch of C code which looks superficially reasonable, and which used to do X as intended, and suddenly (and perfectly legally) make it start doing Y instead. I assumed that's what barrkel was alluding to above.

And yes, I'm sure there are old optimizations which will also do that, but basic things like hoisting and reordering and unrolling and dead-code elimination don't fall into that category.
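To make the distinction concrete, here is a minimal sketch of the kind of UB-based rewrite I mean (assuming a modern gcc/clang at -O2; details vary by compiler and version):

    /* Intended as an overflow check, but signed overflow is undefined,
       so the compiler may assume x + 1 cannot wrap, and is then allowed
       to compile this whole function down to "return 0;". */
    int will_overflow(int x) {
        return x + 1 < x;
    }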


> And yes, I'm sure there are old optimizations which will also do that

So then it's not new behavior.


I think the pre- versus post-increment is a great example of this -- assembly language is unambiguous about when the increment happens: it happens when you hit the increment instruction. The situation in C is almost intentionally confusing. This was done back in the early days of structured programming; I suspect the motivation was to win people over from using gotos for everything, since it allows for variation in how loops and conditionals are executed.


Some of the processors back then had various auto-increment and auto-decrement addressing modes, so there was a performance difference depending on which one you used. To keep things unambiguous, use the increment operator in a statement by itself.


Interesting, that sheds some light on the situation!

And indeed, I do always use the increment operator by itself. The amount of extra thought it takes to distinguish

    while(i--)
from

    while(--i)
just isn't worth it, when instead I could write

    for(;i>0;i--)
which is very clear about what the last value of i in the loop is.
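To spell out the difference (a minimal sketch, assuming i starts at 3):

    void demo(void) {
        int i = 3;
        while (i--) { }          /* body runs 3 times; i ends at -1 */

        int j = 3;
        while (--j) { }          /* body runs 2 times; j ends at 0  */

        int k = 3;
        for (; k > 0; k--) { }   /* body runs 3 times; k plainly ends at 0 */
    }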


C is very unexciting to me, in a good way. With C you can figure out what the compiler has done by inspecting the generated code. Have fun doing that with an interpreted language, or compiled output from any other language for that matter!

Actually there are a few other languages that are great for debugging runtime code behavior; Lua comes to mind.


Do you have any examples of this uncertainty? I hear this argument a lot, but have a hard time understanding why this seems to be a common grievance. I've been writing C for years and never once did I hit a scenario of dealing with undefined or uncertain behavior. Writing normal C is very well defined. No?


I find this hard to believe. Even initialized variables are undefined behavior, I’ve yet to meet a c programmer that hasn’t been bitten by this.


Please explain how that is UB?


Ugh. Sorry autocorrect, should read “uninitialized variables”


It is well understood that all variables need to be initialized before use. Most compilers will warn about reading uninitialized variables. So I would say someone running into this and possibly other UB scenarios is just a case of writing bad code.
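The trap, as a minimal sketch (compilers typically only flag it with -Wall or -Wuninitialized, and only when the use is visible to them):

    int sum(int n) {
        int total;               /* never initialised */
        for (int i = 0; i < n; i++)
            total += i;          /* reads an indeterminate value: UB */
        return total;
    }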


And always new buffer-overflows to find.


I know, right? The GP’s point baffles me. C is a tool. I don’t want my tools to be exciting! I want my movies and vacations and relationships to be exciting, but not my tools. I want my tools to be boring and predictable.


I don't find it a common bug in day to day programming. It may be well known because of the security implications. I get perhaps 2-3 crashes a year from buffer overflows when developing an app. OTOH, I'm constantly being pummelled by failed type checks, undefined variables, uninitialized variables, etc. Last time the compiler even warned me about a buffer overflow, which was nice.


> 2) It's fast. It's just direct memory access, direct and explicit hardware manipulation

I also find C really handy for parsing binary formats (ASN1, MS EMF...); parsing binary stuff in Python, for example, feels miserable. However, for manipulating string-based formats (XML, JSON, YAML), C is quite miserable, even with the help of libs like libjson-c or libxml2.

And to be fair, even for parsing binary formats, C can be quite unforgiving. I still have some traumatic memories from the first time I ran American Fuzzy Lop on my EMF to SVG converter... (by the way, I love the afl logo: https://upload.wikimedia.org/wikipedia/commons/f/fa/AFL_Fuzz...)
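For what it's worth, the C pattern I have in mind looks roughly like this (a hedged sketch with a made-up record header; memcpy sidesteps alignment issues, but every length check is manual, which is exactly where the fuzzer finds you):

    #include <stdint.h>
    #include <string.h>

    struct record_hdr { uint32_t type; uint32_t len; };

    /* Returns 0 on success, -1 if the buffer is too short.
       Assumes the on-disk layout matches the host's endianness. */
    int parse_record(const uint8_t *buf, size_t avail, struct record_hdr *out) {
        if (avail < sizeof *out)
            return -1;                   /* the check AFL punishes you for skipping */
        memcpy(out, buf, sizeof *out);   /* alignment-safe field extraction */
        if (out->len > avail - sizeof *out)
            return -1;                   /* payload must fit in what's left */
        return 0;
    }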



In Python you just import pyasn...


Well, I'm currently writing binary tools using Python, and I find it quite pleasant with the struct module...


The C90 standard was 230 pages, but C11 is up to around 683 now: some 3X bigger. That's just to describe a language whose computational model consists of Von Neumann style word mangling, and which has next to no useful library functions. (You want a simple linked list, you have to write code or go third party.)

You can treat C as a small language, if you don't have to work with anyone else.


Yes 683 pages, but 178 of that is the language:

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1548.pdf

That's up from 95 in C90, a factor of 2 not 3 in 20 years.

http://read.pudn.com/downloads133/doc/565041/ANSI_ISO%2B9899...

You can treat Rust as a useful language as long as you don't need to get work done. I rather like C11. I would have liked Rust (I tried) if they'd known when to say no, or if they valued sparseness for its readability; as it is, it reads like C+++:

  const USAGE: &'static str = "usage: blah blah";
By contrast, I like what seL4 did with C. They wrote in C and proved in Isabelle.


In today's Rust you can drop the 'static.


This example is one of those things I keep running into though, and don't fully understand: why does type inference not work on consts? e.g., here, why can't we drop the entire type:

  const USAGE = "usage: blah blah";
The RHS has sufficient type information, does it not? After all, if this were a let USAGE = ... inside a function, that would be sufficient.

(I don't find this particular bit a deal-breaker though; thus far, I quite like Rust.)


This was an explicit decision; we could do this kind of inference but chose not to, similar to how you must declare the types of functions. We started with the most conservative choice, and then are feeling it out; that's why you can now drop the lifetime.


Not sure why this is being downvoted. The GP commenter is mistaken, the following line is completely valid Rust:

    const USAGE: &str = "usage: blah blah";
Compare it to the equivalent in C:

    const char *USAGE = "usage: blah blah";


And while you are at it, compare that to:

  #define USAGE "usage: blah blah"


The equivalent C needs a second const -- your first one refers to the data behind the string but not the string pointer, and is more like Rust's `let usage: &str`.

    const char * const USAGE = "usage: blah blah";


Or rather `let mut usage: &str`. Rust actually is kind of similar to C here in that the pointer and pointee can be separately const or mutable. But in Rust "const" is the default, with mut needing to be explicitly declared, whereas with C it's the other way around.
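Spelled out on the C side, since the placement of const trips people up (a minimal sketch):

    void demo(void) {
        char buf[8] = "abc";
        const char *p = buf;             /* const pointee: *p can't be written, p can be reassigned */
        char *const q = buf;             /* const pointer: *q can be written, q can't be reassigned */
        const char *const r = "usage";   /* neither can change */
        (void)p; (void)q; (void)r;
    }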


In C90, the Language clause goes from page 18 to 95, so 78 pages. That's a factor of 2.3X, not 3X.


> The C90 standard was 230 pages, but C11 is up to around 683 now: some 3X bigger.

The C11 standard (ISO/IEC 9899:2011) adds support for threading and atomics, and adds IEC 60559 specifications in its annex.

Yet, the language specification part of ISO/IEC 9899:2011 is less than 180 pages, and the remaining pages are dedicated to libraries and the annex, which on itself takes over 200 pages.


> next to no useful library functions

Ah, compared with its early 1980s competition (Pascal, Modula-2, BASIC), C was quite lavishly equipped with library functions.

Now, get off my lawn, kids!


This is a far cry from Java's full formal language definition.


3) Compilation is much faster than C++.


There's a tradeoff, however. You can trade C++'s compilation time for the program's runtime, in many cases. You can now do quite a lot at compile time in C++ that is not possible in C.

Therefore, technically, C++ has slower compilation time but potential for faster runtime.


> Add to this the disastrously central role that C has played in our ongoing computer security nightmare

One sees plenty of news on the downsides, but on the other hand the lack of security has also enabled console homebrew, iOS jailbreaking, Android rooting, and a bunch of other creatively liberating activities which involve doing something not officially sanctioned. It's worth pondering whether we would be better off, had everyone switched to something much "safer" (maybe even with formal verification) a long time ago. I think it's debatable.


Contrarily, had we 'switched to something much "safer" a long time ago', then by now--out of sheer necessity--we may have been adept at leveraging economics and politics to make hardware user-hackable. As it is we could just be using the swiss-cheese safety of C as a crutch to avoid having to tackle the hard social problems of getting people to care about liberated devices.


Or the number of people interested would have dried up.


Possible, but doubtful. For ages the default state of software was closed-source, and yet the desire to share and be free did not dry up. People yearn for freedom, and in its absence they will make it, but mildly-comfortable half-solutions have a great way of defusing movements.

But slowly the pool of exploits will dry up, either because we stop using C or because we finally figure out a way to write C securely, and we'll find ourselves decades behind at the necessary task of producing hardware that is open by design rather than open by accident.


Feels like the old claim that Macs didn't catch viruses like Windows does. Turned out that no one bothered to make viruses for something no one uses. In other words the "security nightmare" is as much due to its popularity and ubiquity as it is to do with any particular detail of the language itself.

Maybe the most secure language is the one so esoteric, and complex, and incomprehensible that no one uses it to make anything and therefore is never exploited.


That lie? Apple, on the other hand, recommended that people install anti-virus programs till that got pointed out. https://www.cnet.com/news/apple-suggests-mac-users-install-a...

I hate that it is okay to lie. Of course any OS can have a virus and malicious code. It runs code.


That language will be sure to link against some C libraries, at least, libc. That might be enough to crack deeper into the system.

The problem with C is that it's widely used for critical system-level software, and practically all other tools were (and are) inadequate for the task.


> One sees plenty of news on the downsides, but on the other hand the lack of security has also enabled console homebrew, iOS jailbreaking, Android rooting, and a bunch of other creatively liberating activities which involve doing something not officially sanctioned.

On the other hand, without the pervasive vulnerabilities enabling this hacking, we might have been more motivated to enact a legal framework to require companies to permit people to hack devices they own.


Or we might not. This is a kind of high-risk counterfactual.


Best comment of the thread! "Be careful what you wish for..."


> We rely on Python which is written in C89, a standard older than many of our developers.

Python ≥ 3.6 requires (some features of) C99.

> Perhaps the reluctance to move to a more ‘modern’ C is that even C99 can’t legally buy a drink

Perhaps it's because MSVC didn't fully support anything newer.

Source: https://www.python.org/dev/peps/pep-0007/#c-dialect


And to be honest, the modern C99 stuff does not solve the fundamental problems in C (and solving those gives you something that is not C).

(some of the changes) http://www.open-std.org/jtc1/sc22/wg14/www/newinc9x.htm


Yeah... good luck taking a newer version of Python and compiling it on Visual Studio 2012. I had to jump through hoops even to get 2.7 to compile, and wound up abandoning it because it caused problems down the line with ctypes.


VS2012 is quite prehistoric; we've already had 2013 and 2015 (with three updates), and are now on the third 2017 update.

C99 support was added in VS2015 to the extent required by C++14, similarly C11 support was added in 2017 to the extent required by C++17.


> VS2012 is quite pre-historic

You don't work in the enterprise domain which is VS’s core market, do you?


Still on Visual Studio 2010 here... The pain is that everyone has to move at once due to the way the project files are upgraded when you open them in a newer version.


Microsoft fixed this after 2010. VS 2012 and onwards can open a 2010 project file with no changes. A mixed population of Visual Studios from 2010+ can work on the same project file.


That is actually extremely good to know. Thanks for the information!


You work around this by installing two versions of Visual Studio side-by-side.


I do. We migrated to VS 2015 this year.

Usually it takes one year to upgrade to a new version, after its release.

We are already playing with 2017 for next year's upgrade cycle.


"C is the desert island language". see http://crypto.stanford.edu/~blynn/c/intro.html

For me, Linux is a huge development environment dedicated to C programming. Sections 2 and 3 of the manual pages give the C API. C gives the most direct access to the kernel. IMHO we should try to find a replacement for C, and the first target for this language should be the Linux kernel. If you can conceive a language that Linus Torvalds accepts, the rest of the planet will follow ;-)


> If you can conceive a language that Linus Torvalds accepts

Good luck!


For a new language to have practical applications in the Linux codebase, it has to run on every architecture supported by Linux (and ideally more), and use the same linker and ABI as C. A tall order, to say the least.


In my mind, C very closely resembles what's going on in my mind's model of the computer. I "think" in C when I'm thinking about an algorithm. It's almost exactly the same as the fake language people sometimes use for explaining an algorithm. I think this property contributes towards the popularity. No "artificial" abstractions like classes, templates, modules, imports etc. Just think of the first step you would do to solve a problem, and write it. Then think of the next step, and write that. And you get fast, portable code that produces tiny binaries! What more do you need?


Thou shalt embrace the true power of functional programming


I think that during the nineties the focus switched from compiled, small-runtime, unmanaged languages to higher-level, garbage-collected, "big runtime" languages like Java or the scripting languages. One notable exception being C++, but its complexity and the difficulty of interfacing it with other languages meant that it couldn't completely replace C.

So for a long time if you wanted to do low level, fast, small, reasonably portable code you simply didn't have much of a choice.

Rust is the first language in a long time that I think could end up replacing C in all of its use cases, but there's still some way to go. The main difficulty I can foresee is that, like C++, it's significantly more complex than C and harder to interface with other programming languages (you end up making an unsafe C interface).

I think C is here to stay. The billions of lines of C code out there won't be rewritten in a fortnight.


Rust would need severe changes to replace C where C shines: being able to cobble object files together into something that works. I can call malloc from userspace and, 14 different prior invocations of a C compiler later, end up with my parameters in registers, touching page tables.

ABIs, APIs etc have to be able to propagate size and type information that is basically how C defines them if you want anything to work (pointers, integers, floats).

C embeds a remarkable amount of cross-object-file information about a function just with 'int f();'. We know the range of valid values, we know how big the object is, etc.

Rust literally has none of these mechanisms and likely never will; interfacing Rust to other Rust, such that you get any benefit at all from using it, would require megs of type, scope, size, etc. information to crawl across the calling convention.

There's a reason C++ names look like CintFuncf!!!@34$902, Rust doesn't have or plan to have anything like this yet, and it'd need to be 20x more complicated.

If you get rid of Rust's magic of being able to remove guards and checks by inferring things about the data, to make it conform to calling conventions instead of using types, you have just invented C.


Well, you get what you pay for. Rust can call and expose C-style ABI functions natively so you can always do what C does. If you want a safer Rust API you can also have that but it makes interfacing with 3rd party code more difficult.

I disagree somewhat with your assertion that:

> C embeds a remarkable amount of cross-object-file information about a function just with 'int f();'. We know the range of valid values, we know how big the object is, etc.

First of all, this is not embedded in the object file but rather in the header; Rust doesn't need that. The object file only tells you the name of the symbol, and that's about it. In particular, that means the C linker can't detect an ABI mismatch if the prototype of a function or the layout of a struct changed; as long as the symbol is found, it'll link just fine.

Furthermore, even with the header available, a lot of the time C prototypes are not sufficient to know how to use a function. Take for instance:

    sometype_t *some_function(someothertype_t *param, int flag);
Is param an input or output parameter? Do I own the return value, or is it allocated by the function? Or maybe it's just a member of param, so it has the same lifetime? I know that flag is an int, but that doesn't really tell me which values I can put in there. In Rust, function signatures tell you all of that, and it's enforced by the compiler.

So yeah, there's a significant overhead in Rust here, but it's for a good reason IMO. It does make it harder to make quick hacks with the linker though.

> There's a reason C++ names look like CintFuncf!!!@34$902, Rust doesn't have or plan to have anything like this yet, and it'd need to be 20x more complicated.

Are you talking about name mangling? Rust does that too, but that's not really the same issue; it's just about generating unique "flat" names for objects that include namespacing and generic info. Like if you have a "fn foo<T>(t: &T)" and you instantiate it with T = i32 and T = String, you need to generate two symbols. C doesn't need that because it doesn't have namespaces, generics or overloading.


>First of all this is not embedded in the object file but rather in the header, Rust doesn't need that.

Granted, point taken about the header; I was talking more about the 'int' part but failed to describe myself well at all. I mean more that you can map it onto the ABI properly and easily. Try shuffling internal Rustisms over the usual SysV ABI.

What Rust means by anything is still entirely up in the air, and last I checked their bug tracker, the issue had been open for years and was basically commented as "probably won't ever resolve due to rust changing too often".

>Furthermore even with the header available a lot of the time C prototypes are not sufficient to know how to use a method.

A keyword could be added for that; you wouldn't be able to enforce it at the ABI level, but you could make the compiler check the same as Rust. I'm not here to argue about which is better, though; I think I accidentally led you up this path.

>Are you talking about name mangling? Rust does that too but that's not really the same issue

I need to know how to generate those 'flat names', is the point; I can't call it if I can't name it. And it all plays into the same systemic issue: Rust has no idea what it is doing from week to week.

EDIT: I'll use this space to reiterate: if you get rid of all the Rustisms and shuffle everything over the existing SysV ABI, you're left with C. Keeping the Rustisms would need megs of data sent over the ABI somehow, to allow Rust to elide the bounds checks etc. that make Rust useful to begin with.


> What Rust means by anything is still entirely up in the air, and last I checked their bug tracker, the issue had been open for years and was basically commented as "probably won't ever resolve due to rust changing too often".

Do you have a link? There's a good chance that issue was opened and that comment made before 1.0, when things really were changing too often. Change is less common these days, though I'm sure the compiler developers do still enjoy not being restrained by the need to maintain ABI backwards-compatibility (which isn't to say they couldn't be convinced, only that it would take cajoling).

> it all plays in from the same systemic issue in that Rust has no idea what it is doing from week to week.

What do you mean by this?

> Keeping the Rustisms will need megs of data sending over the ABI somehow to allow Rust to elide bounds checks, etc that Rust is useful for to begin with.

I'm confused here. I don't see how the ABI has anything at all to do with bounds checking; bounds checking, where it exists, is all done at runtime by regular code in the standard library. And by "etc" I presume you're referring to type inference (which you mentioned in an earlier comment), but Rust doesn't do inter-procedural type inference; it's not like Haskell or ML. There's simply nothing to infer, and no need that I can see for an ABI to care about that. Can you be more concrete?


They're probably referring to https://github.com/rust-lang/rfcs/issues/600 It's absolutely true that we have no plans for ABI stability in the near term.


> C doesn't need that because it doesn't have namespaces, generics or overloading.

Overloading as in operator overloading? Because I don't see how that would affect symbols.

Though along with namespaces and generics, there is one thing that Rust also bakes into symbols: versioning information. This is how, in the case of deep dependency graphs, it's possible for a finished binary to include multiple copies of the same library in the event that multiple versions are transitively depended upon. But that doesn't add any complexity to symbol mangling on its own, because if you already have namespaces then you can just treat it as a namespace that only the compiler can see.


No, I meant parameter-based function overloading like C++ has (or is that the wrong term? I forget).

So like:

    int do_something(int param);
    int do_something(double param, char *param2);
I don't see how you can avoid some form of name mangling since obviously you can't just define two duplicate "do_something" symbols.


Rust does not currently support parameter-based function overloading, and there are no plans to in the near-term, if ever.

(This isn't the only way to end up with duplicate symbols, just trying to make it clear that this specifically won't be a problem with Rust.)


Nitpick: C11 introduced support for some form of that kind of overloading, using _Generic macros. See http://en.cppreference.com/w/c/language/generic, https://stackoverflow.com/questions/9804371/syntax-and-sampl..., https://stackoverflow.com/questions/40096584/c11-generic-usa....

Ugly? Yes, IMO, but it also is useful.
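For reference, a minimal sketch of what that looks like:

    #include <stdio.h>

    /* _Generic dispatches on the static type of its argument at compile time. */
    #define type_name(x) _Generic((x), \
        int:     "int",                \
        double:  "double",             \
        char *:  "char *",             \
        default: "something else")

    int main(void) {
        printf("%s\n", type_name(1.0));   /* prints "double" */
        return 0;
    }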


It would need it if it had it. How do I find your code that adds bananas together using a banana adding operator via a text name?


If you're referring to the ability to define entirely new symbolic operators like you can do in Haskell and Scala, then you'll be happy to hear that Rust doesn't support such a thing. (Frankly, I'm happy about it too. :P ) Rust only allows you to overload a fixed set of operators, and that overloading is all done via traits with well-known names (e.g. overloading the plus symbol "+" is done via implementing std::ops::Add for a type), and traits too can be effectively treated like namespaces as far as symbols are concerned.


I use Rust daily to cobble it together with C object files.

    rustc --crate-type staticlib file.rs -o rusty.a
    gcc file.c rusty.a
    ./a.out
It totally works (and debug symbols, C debuggers, profilers, code coverage — all works in the mixed executable).

Of course C can't directly call Rust's ABI, but Rust can call and export `extern "C"` functions, and there are tools to automate this. You can declare C function prototypes in Rust using references instead of raw pointers, so you even get basics of borrow checking for the C code.
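The C side of that arrangement is just ordinary C; a minimal sketch, with a hypothetical rust_add exported from file.rs:

    #include <stdint.h>
    #include <stdio.h>

    /* On the Rust side (hypothetical):
       #[no_mangle] pub extern "C" fn rust_add(a: i32, b: i32) -> i32 { a + b } */
    int32_t rust_add(int32_t a, int32_t b);

    int main(void) {
        printf("%d\n", rust_add(2, 3));   /* resolved from rusty.a at link time */
        return 0;
    }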


> Rust literally has none of these mechanisms and likely never will

It's true that Rust has no stable ABI, but "likely never will" is hyperbole. If people demand it in sufficient quantity then it will happen in time. I personally hope it does someday, but I'm in no hurry.

> There's a reason C++ names look like CintFuncf!!!@34$902, Rust doesn't have or plan to have anything like this yet, and it'd need to be 20x more complicated.

Rust does have name mangling already. It may not be standardized (again, no stable ABI), but C++'s name mangling isn't standardized either. And I see no justification for why name mangling in Rust would need to be "20x more complicated" than in C++; AFAIK Rust symbols pretty much already imitate C++ symbols in order to play nicely with existing tooling.

This would be more informative if you gave some concrete examples. Any hypothetical Rust ABI would be more extensive than the de facto C ABIs, undoubtedly, but the concern here seems unjustified.


C works everywhere, and everyone knows C, or can learn it in a couple of weeks and get coding. This is a hard combination to wholesale replace, but then not even Rust aims to do that. Instead it aims to chip away gradually, and I don't see why it can't do that.


> C works everywhere, and everyone knows C, or can learn it in a couple of weeks and get coding.

You hit the nail on the head. The go-to tutorial and reference book of C is «The C Programming Language» by Brian Kernighan and Dennis Ritchie, which provides a complete and very thorough description of C and its standard library in less than 260 pages. That's unbeatable.

As a comparison, the go-to book for C++, «The C++ Programming Language» by Bjarne Stroustrup, goes beyond 1200 pages and doesn't cover some fundamental aspects of C++, and even «The Rust Programming Language» by Steve Klabnik and Carol Nichols, a book on a programming language which was designed to eat away at C's market, is over 400 pages.

This speaks volumes about the effort required by anyone to get on their feet and be productive with these programming languages.


Different books have different goals. K&R is short, but the standard is ~700 pages. The Rust book is over 400 pages, but contains entire chapters of just "let's build a project together." There's no spec yet like C has. Writing styles differ dramatically, K&R are more concise than I am, and don't dive into some details as much.

Then, you may also consider the framing of "simple" vs "easy", they're not the same thing. And that's even if we agree that C is simple in the first place, which I personally consider not true.

Basically, I don't think that comparing page counts of random documents says anything meaningful about language complexity.


As an aside, I don't have the time, or any purpose beyond curiosity, to learn Rust right now. But having glanced through the Rust book a few times, I wish I did. It's the sort of book (a bit like K&R in this sense) that could lead one astray...


Thank you; even saying "a bit like K&R" at all is high praise to me.

That said, not everyone likes my writing style, so I'm glad that there are other books coming out as well.


> Different books have different goals. K&R is short, but the standard is ~700 pages.

ISO/IEC 9899:1999 is 554 pages, and the annex alone takes around 140 pages.

ANSI ISO 9899:1990 is even shorter: 230 pages.


I'm going by ISO/IEC 9899:201x, the draft, since I have not paid for the spec. This does include annexes, but some of them are normative...

Really though, this just furthers my point; page count is a terrible metric for this.


From the article: “One standard textbook takes 534 pages to explain secure coding standards in C“


Yes, but that comes later. When Joe D. Newbie wants to start programming, C seems quite approachable, conceptually. Hack away at things in an imperative fashion. There's a reason Visual Basic, Perl, PHP, Java (+IDEs), and Javascript are/were the most popular languages. They're easy to get into.

I'm not counting C++ because it was piggy-backing on C.


> When Joe D. Newbie wants to start programming, C seems quite approachable, conceptually.

Coming from Java, my first experience with C was trying to write a trivial program and seeing nothing but the text "Segmentation fault" at runtime. Expecting a nice Java-like backtrace to tell me where the error was, I raised my hand and asked the TA what a "Segmentation fault" meant, and how to get a backtrace. He laughed, rolled his eyes, and went back to playing Slime Volleyball.

Compared to any other language in use these days, C is anything but approachable. And when it comes to learning "secure coding standards in C", as the grandparent comment mentions, one cannot risk putting that off lest they develop bad habits that are never undone (though other environments share this property as well to some degree, e.g. PHP and client-side Javascript).


Coredumps and debuggers are a thing. Sounds more like a bad TA.


I won't dispute the latter. :P But speaking as someone who knows his way around GDB, the experience of using it is nowhere near what a Javascript programmer (and let's face it: this is all new programmers) will be primed to expect via browser debugging tools, and the necessity and frequency of its use, due to C's lack of a runtime (understandable) and weak type system (understandable for 1972), makes C a frustrating choice for autodidacts looking to pick up a second language.


I'm extremely happy that I learned to program in QBasic and then C++, with that awful Bloodshed DevC++ compiler. Everything since (except Haskell...) has been baby-town frolics, in comparison.


> Bloodshed DevC++ compiler

God, this is the first time in over a decade that I've seen a reference to Bloodshed Dev-C++. IIRC, Dev-C++ was just an IDE, and the compiler was actually GCC. The IDE was actually very good for its day, especially considering it was a free IDE before free IDEs were a thing.


I thought we were talking about a language that newbies could pick up easily? It's not an easy language if right away I have to use debuggers to figure out why my simple code won't run.


I'd say that with C there's a knee-high step to get up at first if you're dealing with pointers for the first time, then it's smooth sailing until you double-free your first pointer, and then smooth sailing (once you're more meticulous about that) until you overflow a buffer using strcpy, then you learn about strncpy and learn the hard way that it doesn't put in a trailing zero if the buffer is filled, and then you learn about strlcpy, and then you get bitten somewhere later on by signed/unsigned differences, and so on.

You end up building up an impressive set of street smarts dealing with C, based almost wholly on actual bugs found the hard way. This period lasts for a long time, if it ever ends.

That may be some of its charm. You've bled copiously along the way, due to somewhat disguised behavior. Why give up your hard-won knowledge just to go play with legos? It's like learning first to juggle knives and hatchets, and wondering if you should "progress" to safe juggling balls specifically designed not to maim you.

I'm joking, mostly. I love programming in C, but it's definitely not an easy or simple overall journey. You can make some cool things with it, though, which is the point.
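The strncpy lesson above, as a minimal sketch:

    #include <string.h>

    void set_name(char dst[8], const char *src) {
        /* strncpy writes no terminating NUL if src fills the buffer,
           so an 8+ character src would leave dst unterminated. */
        strncpy(dst, src, 8);
        dst[7] = '\0';   /* the hard-won street smart: terminate by hand */
    }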


You get safe programming for “free” in Rust. So those 400 pages mentioned are not equivalent to what you glean from the intro to C book. It’s not a fair comparison.

Edit: fixed pronoun referencing gp...


>You get safe programming for “free” in Rust.

Not really. Rust will tell you your program is unsafe at compile time, whereas C will tell you at runtime. The major difference is that C sometimes won't tell you about an unsafe program at all, whereas Rust will call some perfectly safe programs unsafe. Note that for a beginner, the former is vastly preferable, because it allows them to iterate much faster. It doesn't matter if your code is buggy if you throw it away, and it doesn't matter if your code has security bugs if it will never be worth hacking.


No, Rust is not safe. Of the 3 fundamental safeties (memory, concurrency and types), Rust only provides type safety.


Rust has the first two, and a rather strong type system, which is soon getting the equivalent of higher-kinded types, putting it roughly on par with Haskell in that respect.

How much stronger do you want your type systems to be?


What do you mean? Memory and concurrency safety are so important that they're in the main slogan of the language.


Does learning to program safely come before or after one deploys the code to an IoT device or other networked service?


> From the article: “One standard textbook takes 534 pages to explain secure coding standards in C“

If you wish to compare books on specialized topics, the book on C++'s STL by Josuttis spans over 1100 pages, and the book on C++ Templates by Vandevoorde, Josuttis and Gregor spans over 800 pages.


I bet that manuals of Basic-80, Forth, Scheme, and Smalltalk are all short. The amount of power they give you differs drastically, though.


"The C programming language" is a fantastic book and I treasure it. Unfortunately it's also not a book I would recommend to people who want to write good secure code.


I would argue that while writing your first C doesn't take much knowledge (and is much easier than, say, Rust), writing good C is harder than writing, e.g., good Java. (Where I'd say some dimensions for "good" are bug-free, reasonable performance, and readability.)


> C works everywhere, and everyone knows C, or can learn it in a couple of weeks and get coding.

I think you mean, they think they learn it in a couple of weeks, and then over the next 10 years are continuously surprised at how little they actually understand C.


> everyone .. can learn it in a couple of weeks and get coding

Not so sure about that. I started on assembly and then C, though did neither professionally. But I'd be a bit scared of touching a serious C codebase now. There's so much to think about there that only long experience can really prepare you for. Sure, you can cover the language very quickly with K&R (and what fun compared to much dev learning). But although I'm out of touch with the C world now, I suspect it would take a long time to get from there to being more useful than dangerous.


My 2 cents, from the perspective of someone who built a career on writing software in C for small devices.

C has longevity b/c it's compact and provides a straightforward model of memory on the machine. I understand the desire to use a safe, garbage-collected language when you're serving HTTP requests, but sometimes you need to access the hardware: twiddle a GPIO or read from a DMA device. This is where I've yet to see a good replacement for C, and by extension, C++ (b/c it's fundamentally still just C). Maybe Rust is there, but I don't have the experience to judge.

[edited for clarity]


>C++ (b/c its fundamentally still just C)

I agree with your comment but hate this comparison. C and C++ are completely different languages. Idiomatic C++ looks nothing like C and vice versa.


Yes, I agree with you, but I make this point because C++ gives me the same direct access to the hardware. I can always just write C (or inline assembly, if I really have to go there). That's why my last embedded project was written in C++. There was a core that implemented SPI by bit-banging GPIO registers, but wrapped around that was idiomatic C++.
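For readers who haven't done embedded work, the bit-banging in question looks roughly like this (a sketch with a made-up register address; a real project takes it from the vendor header):

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO output register. */
    #define GPIO_OUT (*(volatile uint32_t *)0x40020014u)

    static void spi_clock_pulse(void) {
        GPIO_OUT |=  (1u << 5);   /* drive the clock pin high */
        GPIO_OUT &= ~(1u << 5);   /* ...and low again */
    }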


The line is pretty grey, but I don't consider that much different than a project in any language writing and linking to a few modules in C where necessary.


I think the other point is we have decades and decades of code at this point written in C, or built on top of C. In some regards 'if it ain't broke don't fix it' holds quite a bit of truth.

The embedded world may start to move to doing more things in a higher-level language to speed up development in areas that may not need the same level of high performance or real-time certainty. Stuff like TI-RTOS, while still C, made getting to a proof of concept much faster.


To be clear, since I have seen this pop up lately, Rust doesn't have a GC and absolutely gives you access to hardware. The challenges for it on embedded are mostly toolchain issues.


Is there anything anywhere comparing ROM and RAM usage of Rust vs. C for embedded devices? Can you write anything meaningful in Rust with say 8KB of RAM?


I don't have great numbers, https://news.ycombinator.com/item?id=15367507 talks about 256k flash and 32k RAM.

That said, the new AVR support includes 8 and 16 bit chips, I don't know the lowest amount of RAM that has had actual code running on it yet.


So, are vendors working on rust toolchains for their hardware? I'm asking from a place of genuine curiosity, I left embedded a number of years ago. I've always viewed the 'rust breaking into embedded' problem as a toolchain issue, as well. But, my intuition is that vendors will be slow to release rust support in toolchains because the industry is still firmly in the "we write in C" camp.


This is an interesting thread...

https://news.ycombinator.com/item?id=14071282

In my view the industry inertia is a chicken and egg thing. You want a chip that runs your existing code. Then you want to write new code on your existing chip. And you have ongoing projects at different stages, sharing chips and code.


That's what I meant by challenges; it's pretty much "can LLVM support the chip and then did someone put in the compiler work in rustc", rather than "vendors are directly providing Rust support." Your intuition is mine as well.


This is good to hear. I may give rust a shot. I have a lot of inertia, though, because I already know how to set up and maintain my cross-compile toolchain.

I've also heard that Go may a viable option, at least on ARM, and I'm considering giving it a try for my next embedded side project.


So, some quick things: every Rust compiler is a cross-compiler, so that's not the tough part. You need to compile 'libcore' for the target, but https://github.com/japaric/xargo makes that pretty straightforward if your chip isn't one of the ones provided by the Rust project, if it is, then you just need to run a command to download it and you're good to go.

https://forge.rust-lang.org/platform-support.html is said target list today. AVR is coming soon, it works but has bugs and is hacky.

Japaric is quite prolific, he writes over at http://blog.japaric.io/ and his latest posts are about his RTOS-like framework.

Finally if you're into ARM, https://www.tockos.org/ might be up your alley.


Noting all this for future research. Thanks!


Any time! :)

I really need to write better docs on all of this...


C was originally a language for modest size programs on machines with 64K address spaces. C isn't a bad language for a thousand line program. It's a terrible language for a million line program. Just to get memory safety, there's too much that has to be manually coordinated across compilation unit boundaries.

The three big questions in C are "how big is it?", "who owns it?", and "who locks it?" The language gives zero help with all of those issues. Most later languages deal with some or all of those issues.
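In practice, C answers all three questions with comments and naming conventions rather than with the language; a sketch of the usual workaround:

    struct config;

    /* Returns a heap-allocated string the CALLER must free(), or NULL
       on failure. Caller must hold the config's lock while calling.
       None of this is checked by the compiler; it is pure convention. */
    char *config_get_name(struct config *cfg);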


C could use a few features to help with million line codebases, like namespaces. Beyond that, programs that large require good human engineering to manage complexity.


If you look for a million-line C program, you invariably have to deal with something like GLib.


I'm quite bored by the obsession of some for replacing C.

Yes, it does exactly what you tell it to do, and that's dangerous. It was a high-level language 25 years ago, but today the metaphor should be assembly. Nobody would criticize assembly for letting you shoot yourself in the foot; C is much the same.

The biggest threat to the tech sector is Linus dying and being replaced by some charlatan who insists on replacing C and using Jira.

</offtopic>


I would be (mostly) fine with C for the purposes it's designed for if each C compiler were required to have an option that makes the compiler use implementation-defined behavior everywhere. This would make C more like a portable assembler and less like the software engineering version of "do you feel lucky, punk?"

As it is, lots of major C applications already do this on a piecemeal basis and use one or more of:

  -fno-strict-aliasing
  -fno-strict-overflow
  -fno-delete-null-pointer-checks
with gcc to keep a lid on the effects of undefined behavior (and often more or even stricter options, such as -fwrapv).

The problem is that even experienced C programmers get tripped up by undefined behavior all the time. "Yes, it does exactly what you tell it to do" doesn't really cut it when the vast majority of actual humans cannot safely predict the effects, because they're so unintuitive. Or, in the words of Douglas Adams:

  "But the plans were on display . . ."
  "On display? I eventually had to go down to the cellar to
  find them."
  "That's the display department."
  "With a torch."
  "Ah, well the lights had probably gone."
  "So had the stairs."
  "But look, you found the notice, didn't you?"
  "Yes," said Arthur, "yes I did. It was on display in the
  bottom of a locked filing cabinet stuck in a disused
  lavatory with a sign on the door saying 'Beware of the
  Leopard'."


> with gcc to keep a lid on the effects of undefined behavior (and often more or even stricter options, such as -fwrapv).

Speaking of which, a killer feature I'd like to see in a C compiler is a flag for debug mode which SIGABRTs whenever it enters an undefined-behaviour codepath. For example, sometimes the compiler knows that if pointers alias, or an integer overflows, or something, it can do whatever it wants. I want a flag which will automatically add assertions to the generated code, so at least in debug mode it'll crash if that's ever actually the case.


It exists. Clang has UBSan, which can detect many kinds of undefined behaviour (https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html#...). It's integrated into Apple's Xcode 9, which can pause into the debugger when encountering an issue. I was actually using it to debug an issue just as I read your comment.


It exists, but it doesn't do anything in the example that was discussed recently, "undefined behavior may call a never-called function": https://news.ycombinator.com/item?id=15324414

As far as I can tell, -O0 is the only way to avoid that particular mis-optimization in Clang.


This cannot be done for all undefined behavior (because that would be too expensive, if at all possible), but you can use the `-fsanitize` family of options in both gcc and clang to at least get runtime errors for quite a few important cases of undefined behavior.
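Concretely, the workflow looks like this (a sketch; the sanitizer traps at runtime rather than proving anything statically):

    /* Build with: cc -g -fsanitize=undefined -fno-sanitize-recover=all demo.c */
    #include <limits.h>

    int main(void) {
        int x = INT_MAX;
        return x + 1;   /* signed overflow: UBSan aborts with a diagnostic here */
    }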


How possible do you think it is to create a new language which is almost, but not quite, exactly the same as C?

The edge cases that have evolved in it do cause plenty of problems, but it's very difficult to create a new backwards-incompatible standard which is just a little bit different.

Then again, the embedded (microcontrollers) world seems to be filled with slightly esoteric implementations of C and everybody gets along just fine.


Almost impossible to do efficiently. Let's consider a tiny example:

    void copy(int* in, int* out) {
        for (int i = 0; i < 10; ++i) {
            *out = *in; ++out; ++in;
        }
    }
Now, in C it is undefined behaviour if either in or out points into the stack where the variables i, in and out live. Maybe their values will change, maybe they won't. In practice, this code will be optimised to something that just keeps in and out in registers.

Now let's imagine we want to make it defined. Then the statement `*out = *in` would have to become something like:

1) Write the current values of i, in and out to memory (in case *in overlaps with any of their locations).

2) Read *in into a register

3) Write *out from that register

4) Re-read i, in and out from memory, in case they just changed.

Now, you could make that more efficient by adding to the start of the function a check which looked to see if either 'in' or 'out' pointed anywhere near the stack, but there is still going to be a significant cost. If you plan for this from the start with your language, you can drive the costs much lower.

The main thing that separates C from assembler is that you do not have to worry about what values are in registers, and which are in memory. However, that means any code which writes to memory provides a place where this can make a difference.


Oh, you can fix that pretty simply:

"The compiler can always assume writes to memory via one object don't change values in memory accessed via any other object."

Then, if you know your code can overlap, you can use some invalidate function to mark that "memory accessed via this object was changed by some statement prior, make sure to re-read values cached in registers". Because that's the odd case where you're doing something unusual. In general, "The compiler is allowed to optimise things assuming that spooky action at a distance isn't happening". You already have something very similar for atomics, which might be hidden behind a standard library, but in any case are a sequence of "lock the bus, read/write, memory barrier", telling your CPU and your compiler that memory access cannot be reordered around the memory barrier.


> Now, in C it is undefined behaviour if either in or out points into the stack where the variables i, in and out live

Seems to me that that could only happen if you had some other bug clobbering the values of in and out, or you're passing something to the function that you shouldn't be. In either case, it's an error on the side of the programmer.

> If you plan for this from the start with your language, you can drive the costs much lower.

It shouldn't be the job of the language or compiler to make up for the shortcomings of the programmer besides maybe code generation optimization.


You can express what you're talking about (non-aliasing mutable pointers) for free, at compile time.

In fact, you can't express this in C (or, restrict, whatever), so I do wonder if your assumption that because it's UB it'll compile to the optimal code is true.


I think you are thinking about the memory that 'in' and 'out' point to aliasing each other. That's handled with the `restrict` keyword, as you say.

I'm talking about something else: the issue of 'in' and 'out' pointing into the stack, so that writing through them changes the values of the local variables. That is basically treated as UB by every C compiler, and no compiler I am aware of provides any switches which would make it easier to handle; it's just horrible, and your code behaves in different ways depending on the compiler and optimisation level.
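For contrast, the aliasing that C can express is between the pointed-to arrays, via restrict (a minimal sketch):

    /* restrict promises the two arrays don't overlap, so the compiler may
       keep i, in and out in registers. What C cannot express is a promise
       that in and out don't point at those locals themselves; compilers
       simply assume they don't. */
    void copy(int *restrict in, int *restrict out) {
        for (int i = 0; i < 10; i++)
            out[i] = in[i];
    }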


> Then again, the embedded (microcontrollers) world seems to be filled with slightly esoteric implementations of C and everybody gets along just fine.

But embedded teams often severely limit the language constructs you may use, in order to avoid the foot howitzers and only have to worry about the foot handguns.

There are also many other techniques (such as pre-allocating all variables during initialisation) which can be considered mitigation techniques for the many traps C offers.

I'm also skeptical of the quality of a lot of embedded software (and I've seen quite a bit of it).

This is not to say that C is bad, or that I disagree with you. But embedded is a bad example.


As I said, I think that basically doing a global search and replace of "undefined" with "implementation-defined" in the standard would be a good first attempt (this should be possible, because that's how C compilers worked for a long time and – with optimizations turned off – generally still do). This wouldn't solve everything, but it makes the use case for C as a "portable assembler" better, which could then be used as a foundation to build upon.


> How possible do you think it is to create a new language which is almost, but not quite, exactly the same as C?

See Friendly C:

https://blog.regehr.org/archives/1180

https://blog.regehr.org/archives/1287


>Nobody would criticize assembly for letting you shoot yourself in the foot, C is much the same.

But I bet they would criticize programs for being written in assembly, if they didn't need to be.

If you could have a language with all the performance of C without the footguns, why wouldn't you want that?


> a language with all the performance of C without the footguns, why wouldn't you want that?

I've yet to see a language that actually delivered on this claim.


Perhaps you aren’t looking?

Rust delivers on all these claims. And there have been others before it. Rust hits all the sweet spots for me.


Rust sits in a really weird spot. It's too high-level for a lot of low-level work, and too low-level for a lot of high-level work.

Example for the first case: Writing a garbage collector runtime in Rust has most of the same problems as in C, because you have to write most of it in unsafe code, where Rust inherits much of C's undefined behavior w.r.t. pointers via LLVM. In short, you have largely the same problems and have added a hard dependency on Rust.

For high-level work, almost all [1] of what Rust gives you is memory safety and that comes at the price of dealing with a LOT of extra language complexity. But aside from dynamic memory management, memory safety isn't hard (we did that back in the 1970s and 1980s), and for dynamic memory management, we can get memory safety with a garbage collector and much less complexity. So Rust is primarily of interest for those use cases where garbage collection is not an option.

While that still gives you plenty of interesting use cases for Rust, there are also plenty of programming niches that it serves poorly.

[1] People will also mention "fearless concurrency", but guaranteeing the absence of data races is not hard. That more languages don't do it is partly because they simply neglected that aspect [2], but also because any mechanism – including Rust's – for doing so inherently constrains your options w.r.t. concurrency [3]. Plus, avoiding data races is the easy part of getting concurrency right.

[2] Concurrent Pascal had guaranteed absence of data races in the absence of pointers in the 1970s, Eiffel had done it with pointers in the 1980s, and there was a plethora of research in the 1990s to do it in various other ways.

[3] For example, there are plenty of use cases, such as certain idempotent operations, where data races are not only perfectly safe, but also desired for performance. There are also use cases where you can prove that no data races occur, but a type system cannot easily capture that.


  It's too high-level for a lot of low-level work, and too low-level for a lot of high-level work.
This is true in the very specific cases that you gave, but I believe that is the minority of use cases, not the majority.

Even the example of writing a GC that requires tons of unsafe code is not a good argument for making all the code unsafe. All the unsafe GC code would be abstracted away into a module, making it more obvious to those looking at it that they need to be watchful for undefined behavior. Now you can proceed to write the rest of the project in safe, simple Rust.

  People will also mention "fearless concurrency", but guaranteeing the absence of data races is not hard
Maybe for developers that are very familiar with the race conditions of parallel code, but definitely not for most people. Even seasoned developers will make mistakes with simple multithreaded code.

Also, the reasoning behind "x is easy, so why do I need my language to check it for me" is questionable. The whole point is that you have a guarantee. Have you never had a compiler catch a stupid mistake before it happened and felt relieved? I doubt it. Now imagine if, instead of debugging stupid data races in your parallel code, you could spend that time optimizing and improving it. I fail to see how this can be viewed as a negative.

Sure Rust doesn't cover 100% of use cases, but it definitely covers more than you're implying. It's low-level enough that Redox OS can be written in Rust, but high-level enough that Firefox is now outpacing other browsers and parallelizing everything with Rust.


> All the unsafe GC code would be abstracted away into a module and would be more obvious to those looking at it that they will need to be watchful for undefined behavior.

That code that could be "abstracted away" would be "virtually all the code" in my example.

> Maybe for developers that are very familiar with the race conditions of parallel code, but definitely not for most people. Even seasoned developers will make mistakes with simple multithreaded code.

I'm not talking about manually guaranteeing absence of data races. I mean absence of data races as a language feature.

> Also, the reasoning behind "x is easy so why do I need my language to check it for me" is questionable.

This is not at all what I was talking about. You completely misunderstood me.


I think you'd be surprised; even operating systems, the canonical unsafe use case, have a relatively low percentage of unsafe code. For example, https://doc.redox-os.org/book/introduction/unsafes.html says

> A quick grep gives us some stats: the kernel has about 70 invocations of unsafe in about 4500 lines of code overall.


> I think you'd be surprised; even operating systems, the canonical unsafe use case, have a relatively low percentage of unsafe code.

My example was a GC runtime, not an OS kernel. If I have only very little unsafe code, then I could just do that in C and the rest in whatever other high-level language suits my project and not see any difference.

The bigger problem – where Rust failed to pick some low-hanging fruit, IMO – is that "unsafe" is too much like the bad parts of C. There is no medium position between "everything is defined and memory-safe" and "everything may explode at a moment's notice".

My most practical need for a low-level language is a language that is in that in-between position: semantics that remain easy to comprehend and predictable even if there are no static guarantees, and where I have to use a different strategy for software assurance. The point here is that for such a language I can resort to alternate validation tools (think Ada and SPARK for an example). Rust's unsafe mode does not handle that situation well because (like C) it does not provide a foundation for alternate validation strategies.

It's perhaps also worth pointing out that I have a formal methods background. In short, I've done formal specifications/proofs for software before. In this context, safe Rust has a fairly high cost for only providing memory safety (and few other guarantees), and unsafe Rust is not a good foundation (or at least, not much better than C) for bringing advanced tools to bear.


There are several things that make unsafe Rust better than C w.r.t. ensuring correctness. For example, the stronger, more expressive type system and fewer instances of undefined behavior for common operations. The standardized, modern tools for unit testing and fuzzing are also nicer in Rust.


You might be interested in this paper [0] wherein the authors implement a high-performance GC in Rust. Quoting the abstract:

We find that Rust’s safety features do not create significant barriers to implementing a high performance collector. Though memory managers are usually considered low-level, our high performance implementation relies on very little unsafe code, with the vast majority of the implementation benefiting from Rust’s safety. We see our experience as a compelling proof-of-concept of Rust as an implementation language for high performance garbage collection.

[0] http://users.cecs.anu.edu.au/%7Esteveb/downloads/pdf/rust-is...


That seems to be a bit misleading. What they seem to do, inter alia, is expose memory addresses as a safe type in Rust, with pointer arithmetic and dereferencing simply declared safe without it actually being so. There is no check that the underlying address actually points to valid memory, satisfies aliasing rules, etc.


Did you read the paper? Dereferencing is not "simply declared safe." There's an entire section of the paper that goes over the API of the Address type, and explicitly points out that dereferencing is considered unsafe. Their conclusion runs directly contrary to your stated claims:

> We found that the Rust programming model is quite restrictive, but not needlessly so. In practice we were able to use Rust to implement Immix. We found that the vast majority of the collector could be implemented naturally, without difficulty, and without violating Rust’s restrictive static safety guarantees. In this paper we have discussed each of the cases where we ran into difficulties and how we overcame those challenges. Our experience was very positive: we enjoyed programming in Rust, we found its restrictive programming model helpful in the context of a garbage collector implementation, we appreciated access to its standard libraries (something missing when using a restricted language such as restricted Java), and we found that it was not difficult to achieve excellent performance. Our experience leads us to the view that Rust is very well suited to garbage collection implementation.


Oops, I meant to write pointer conversion, not dereferencing. Yes, I read the paper, as I was taking that directly from the Rust code shown in the paper.

The underlying problem is that Rust's borrow checker cannot possibly capture at compile time the arbitrary relations between objects managed by a garbage collector and arbitrary object layouts. Whatever guarantees it provides, memory safety isn't one of them (and cannot be), even if nominally only part of it is unsafe code. Pointer arithmetic and memory safety do not mix.

What you get out of that is basically an alternative approach to information hiding, not memory safety.


It sounds like you're shifting the goal posts pretty dramatically to me. Compare what you said above (which presumably prompted the reference to this paper)

> Writing a garbage collector runtime in Rust has most of the same problems in Rust as in C, because you have to write most of it in unsafe code, where Rust inherits much of C's undefined behavior w.r.t. pointers via LLVM. In short, you have largely the same problems and have added a hard dependency on Rust.

to the conclusions drawn in the cited paper. They sit in pretty stark contrast from where I'm standing.


I'm not sure where you're getting the "goal post shifting" from. Writing "p.plus(n)" in their Rust code is not materially different from what "p + n" would be in C/C++ code. That the former is nominally safe code doesn't change the fact that the same unsafe Rust code gets inlined at every callsite. You could do the same in C++ by creating an Address class (similar to smart pointers) with restricted operations; these operations would not magically become safer just because they're inlined by the compiler rather than spelled out in the code.


Your critique of Rust can be largely applied to C++. Maybe the latter is a niche language, but that niche was not served by many offerings up until recently, and C++ is still going strong, despite being less safe than Rust.


> But aside from dynamic memory management, memory safety isn't hard (we did that back in the 1970s and 1980s), and for dynamic memory management, we can get memory safety with a garbage collector and much less complexity.

At a severe cost in performance. Static object lifetimes cover 99.9% of a garbage collector's use cases, without the performance cost of GC, nor the nondeterministic runtimes. We first saw static object lifetimes come into their own with C++'s value semantics; Rust refines and clarifies the idea and makes memory safety an inherent part of the language itself.

Static object lifetime is to GC what static types are to dynamic types. Lisp is 1960s tech. It has failed, and been replaced with something much better.


I think your terminology is off. Static lifetime means that objects exist for the entire duration of the program and do not affect memory management at all, automated or manual. If you're talking about automatic variables, then things already become more complicated.

For starters, we have to assume that we don't deal with value types (which will end up on the stack, one way or the other), but with local variables that reference heap objects. Second, we have to distinguish between tracing and reference-counting GCs.

A modern tracing garbage collector will have cost for such temporary allocations comparable to `alloca()`, and those allocations will typically be inlined. The cost of deallocating a short-lived heap object is zero (yes, zero). This is possible because GCs (unlike manual memory management schemes) are compacting. Whether one approach or the other comes out on top is very situational.
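
To make the claim concrete, here is roughly what nursery allocation in a compacting collector looks like; a minimal C sketch (the names, sizes, and missing collection path are invented for illustration):

  #include <stddef.h>
  #include <stdint.h>

  /* Bump-pointer allocation, as in the nursery of a compacting GC:
     one addition and one comparison per allocation, which is why the
     cost is comparable to alloca(). Dead objects are never freed
     individually; survivors are copied out and the bump pointer is
     simply reset, so reclaiming short-lived objects costs nothing. */
  static uint8_t  nursery[1 << 20];
  static uint8_t *bump  = nursery;
  static uint8_t *limit = nursery + sizeof nursery;

  void *gc_alloc(size_t n) {
      n = (n + 7) & ~(size_t)7;   /* round up to 8-byte alignment */
      if (bump + n > limit)
          return NULL;            /* a real GC would run a collection here */
      void *p = bump;
      bump += n;
      return p;
  }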

More importantly, I dispute the 99.9% as a vast exaggeration. There are plenty of important use cases (such as persistent data structures, shared caches, etc.) where unique ownership is insufficient; Rust requires you to use either copying or reference counting when you run into shared ownership scenarios, both of which are more expensive than tracing GC (naive reference counting is already one of the more expensive memory management methods known, and atomic reference counting is especially expensive).

If you use reference-counting GC, then for any program that satisfies Rust's borrow checker, the optimizer can eliminate reference counts that satisfy the same conditions (assuming that the optimizer knows about them because they're part of the language semantics). This is largely what Swift does, for example.

Finally, there is deferred reference counting, which incurs only trivial overhead for objects with automatic lifetime (on the order of a fraction of a percent). This is because the algorithm incurs real cost only when pointers are written to global or heap locations; this is also why it has seen limited use in practice: it's excellent for objects with automatic lifetimes and does not rely on the generational hypothesis, but those do not constitute 99.9% of all use cases. If they did, deferred reference counting would have a far more prominent role.

This does not even account for the fact that when there is overhead, that overhead is generally trivial in an imperative language with value types.

There are use cases, of course, where a tracing GC is an inappropriate choice, but that is not because of throughput. Tracing GCs make interoperability with other GCs difficult, for example, and have implicit memory overhead that may be prohibitive in large applications such as a web browser (which can easily consume gigabytes of memory on a laptop or desktop machine). That said, there are alternative approaches to garbage collection that do not have those problems.


Rust delivers on all these claims on a very limited set of platforms. The support for BSDs is abysmal, with only x86_64 NetBSD and i686/x86_64 FreeBSD even being "guaranteed to build", while OpenBSD, Windows XP and Solaris sit in some kind of state of hopelessness[1].

Rust is not a replacement for C in the sense of portability. People love simplifying the world into Windows/macOS/Linux, but that is not all you may want to target.

[1] https://forge.rust-lang.org/platform-support.html


"All of the performance of C without the footguns"

Nowhere in this sentence do I see "for every platform C targets".

Given no requirement other than C without the footguns (memory unsafety), is there a good reason not to use the safe version? I'd say there are some, but they aren't crazy compelling (i.e., developers have to learn Rust, it's maybe harder to hire for, etc.).


It's still a very realistic reason that someone would prefer C over Rust. Having to learn Rust, it being harder to hire for, etc. are really just as irrelevant to the named requirements, but important concerns nonetheless.


What you point out can be fixed with time and resources. C has had 40 years to be ported to all of these other platforms. Rust has been released and stable for a little over 2.


Rust has a significantly higher cognitive load for programmers than C. I find it quicker to write reliable code in C than in Rust because I can write the C code and a comprehensive set of tests quicker than I could write just the Rust code with no tests.


> I find it quicker to write reliable code in C than in Rust because I can write the C code and a comprehensive set of tests quicker than I could write just the Rust code with no tests.

Tests can't establish the absence of bugs the way Rust can. You only think your C code is reliable, you don't actually know that it's reliable. Rust only appears harder because of the latent bugs in your C program that you're not aware of.


Testing does not perfectly substitute for verification, and vice-versa. In particular, comprehensive testing does not scale: at some point in the growth of your code, your testing will no longer be comprehensive, even if it started that way. On the other hand, no amount of static analysis will eliminate semantic errors (but neither will testing.)


> Rust has a significantly higher cognitive load for programmers than C.

For simple cases, this may be true; for complex cases it merely shifts the cognitive load up front. Which may be more frustrating, but also may prevent a large class of hard-to-identify, intermittently manifesting bugs from getting into production. Which also saves programmer frustration.


How much time have you spent with Rust? This kind of thing will obviously vary with the individual.


I've been doing the same lately, I don't know how it would scale to a bigger project but I've been enjoying it so far. I've been running the tests under valgrind as well which has found 1 or 2 issues a lot faster than debugging would. The best part of that is that valgrind shows me actual errors in actual running code whereas the rust compiler shows potential errors.


Valgrind works with Rust too, to be clear. Most similar tools, like AFL, do too.


Rust is very sweet, but it lacks the simplicity of C. I get that ML-style programming is all the rage today, and ML itself is a simple language, but programming languages today carry a lot of baggage and styles, maybe to cater to a wide range of programmers. But in the end, it complicates the language. I have noticed that a lot of people find Rust or Scala very hard for these reasons.


The simplicity of C relative to Rust is an illusion in the sense that the main thing that makes Rust feel difficult at first ("fighting the borrow checker") relates to a concern that's relevant to C (allocation lifetimes), but in the case of C the burden of dealing with the issue is on the programmer and not on the compiler.
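
To illustrate with a minimal C sketch (invented for illustration, not from the thread): this function compiles, and noticing the lifetime bug is entirely up to the programmer, whereas the borrow checker rejects the equivalent Rust at compile time.

  #include <stdio.h>

  /* Returns a pointer into a stack frame that no longer exists by the
     time the caller uses it: the class of lifetime bug a borrow
     checker catches at compile time. Many C compilers merely warn. */
  const char *greeting(void) {
      char buf[16] = "hello";
      return buf;                   /* dangling once the function returns */
  }

  int main(void) {
      printf("%s\n", greeting());   /* undefined behavior */
      return 0;
  }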


> Rust is very sweet, but it lacks the simplicity of C.

That's like saying a SawStop (http://www.sawstop.com/) lacks the simplicity of a sawblade. I mean, yeah, sure, but a plain sawblade also won't stop you from turning yourself into an amputee.

I understand Rust can be overly verbose, but the main complexity comes from the borrow checker and the effect of adding another kind of type to track lifetimes. The lifetime system is the main selling point.

There are other sources of complexity in Rust, but I am glad to say both Rust/Scala seem to be looking for a way to simplify things.


Perhaps if the SawStop would complain about using certain types of wood after another, forced you to pick the wood from the pile in a certain order, etc. I bet woodworkers would be quite annoyed by that.


This is in fact what the SawStop does: if you use the wrong material, it will engage and destroy your very expensive saw blades. This is one of the (many) reasons why SawStops aren't more popular and why they are in fact removable. So really, the SawStop is like Rust in that you can still do what you want when it considers it dangerous (akin to embedding C code), but it requires you to put in an annoying level of effort.


> Rust delivers on all these claims

No it doesn’t.

One reason is that Rust doesn't have SSE/AVX/NEON intrinsics.

Without them, you can't get anywhere near all the performance of C, nor anywhere near the advertised performance of any modern CPU.
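
For concreteness, this is the kind of code being referred to; a minimal C sketch using the SSE intrinsics from <immintrin.h> (alignment and size assumptions noted in the comments):

  #include <immintrin.h>
  #include <stddef.h>

  /* Add two float arrays four lanes at a time with SSE intrinsics.
     Assumes n is a multiple of 4 and all pointers are 16-byte aligned. */
  void add_f32(float *dst, const float *a, const float *b, size_t n) {
      for (size_t i = 0; i < n; i += 4) {
          __m128 va = _mm_load_ps(a + i);
          __m128 vb = _mm_load_ps(b + i);
          _mm_store_ps(dst + i, _mm_add_ps(va, vb));
      }
  }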


Sometimes the compiler will autovectorize, but you're right that these are important. That's why intrinsics support is actively being designed; it's a temporary limitation, not an inherent property.


Rust is the ugliest, least palatable alternative to C for me. Seems like they said "Let's chain function calls together like Java(Script) as the preferred expression style and throw in Lispy-looking declarations and logic constructs. When people get frustrated with our safety paradigm there will always be unsafe... for experts."


Yeah, I feel pretty much the same way. I get that there are a lot of sharp edges in C but I don't see why every attempt to do a new language wants to throw the baby out with the bathwater. Why not do a C dialect that fixes whatever it is that bugs people? There is value in the Rust language, why couldn't that value have been added to a C like language?


There have been attempts. Safe-C and Cyclone come to mind.


Pascal compilers up to the mid '90s were as fast as C compilers.


> a language with all the performance of C without the footguns

IMO, this isn't even the right goal. The problem is that in many cases using C is itself a premature optimization. Consider two languages:

* C, which gives you performance by default at the cost of an entire arsenal of footguns with esoteric nonlocal trigger conditions

* Hypothetical language X, which has all the same footguns but engineered for safer triggering and locked up in a safe that you have to choose to open when you want C-like performance

I'd rather have hypothetical language X (which is an accurate portrayal of many real existing languages), because it's got better failure modes. Performance issues are less impactful, in general, and more importantly they are more obvious. It is usually easy to tell when code is not fast enough. The endless parade of CVEs ultimately deriving from memory safety issues, often decades old, is living proof that misuse of the footguns is frequently far from obvious.


I have serious doubts that anybody would actually create such a language. The basic problem with C is the ease in which you're allowed to shoot yourself in the foot or in other words the great pedantic lengths you have to go to in order to not do so.

A language that avoided it would have to be very close to C while making it only mildly more difficult to foot shoot. Everyone who is trying is overshooting and therefore not really writing an adequate replacement low level systems language. Even if they did, it would be so close to C that adoption pickup would be low, awkward, and the language would fail outright. (Think Python 3 but worse)

> But I bet they would criticize programs for being written in assembly, if they didn't need to be.

Certainly. If you're writing your web app in assembly you are very likely a crazy person unless your goal is to do something ridiculous. If you're writing some assembly in a critical path in a web server to precisely control the network stack, you might not be a crazy person (see Netflix tech blogs).


> A language that avoided it would have to be very close to C while making it only mildly more difficult to foot shoot.

Says who? A language where every single memory access did NOT have the potential to be a buffer overflow would be much more difficult to foot-shoot with, even if it allowed you to sometimes access memory with raw pointers. Just having the ability, as in Rust, to mark code as either safe or unsafe goes a long way towards preventing footguns.

> Everyone who is trying is overshooting and therefore not really writing an adequate replacement low level systems language.

Using Rust as an example again, I don't think there's anything about being able to say "this code right here cannot cause memory unsafety" that precludes "low-level systems" programming.
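
To ground the point about memory accesses, a minimal C sketch (invented for illustration): every indexed access is a potential buffer overflow, and nothing in the language marks or checks it.

  #include <stdio.h>

  int main(void) {
      int a[4] = {1, 2, 3, 4};
      int i = 7;              /* imagine this index came from untrusted input */
      printf("%d\n", a[i]);   /* compiles cleanly; undefined behavior at runtime */
      return 0;
  }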


Rust delivers on many of the use-cases, so much so that there is a full operating system (kernel and userland) implemented in Rust called Redox[1]. I understand being critical of hype, but Rust is legitimately a very interesting and exciting language.

[1]: https://www.redox-os.org/


Ada, Modula-2, Mesa, PL/8, ESPOL, Object Pascal and many others come to mind.


> I'm quite bored by the obsession of some for replacing C.

Well, I'm quite bored of the endless stream of memory safety vulnerabilities that can be systematically eliminated with memory-safe languages like Rust. Note that every single one of the CVEs disclosed yesterday in dnsmasq is a memory safety violation, and that buffer overflows, data races, and other related errors are not only pervasive but also tend to sit around for years in codebases [1].

> Nobody would criticize assembly for letting you shoot yourself in the foot, C is much the same.

It's completely acceptable to criticize the choice of any tool. I can and do criticize the use of C for any code that touches a network and in some cases consider it willful negligence, as using a non-memory-safe language to parse untrusted data is asking for trouble.

[1]: https://twitter.com/johnregehr/status/914663997647069184


When you speak of “parsing untrusted data”, do you mean to imply that “parsing trusted data” even exists? I for one would view any and all input as untrusted.


"Trusted data" can exist in theory -- on a closed network where you control all endpoints -- but does not exist in practice (as many devices eventually connect to the Internet or exchange data with Internet-connected computers).

Therefore, it follows that re-writing parsers in memory-safe languages would provide a nice bang-for-buck.


In some cases, say the parser for an interpreter for a general purpose programming language, I would consider the data "trusted". You are already allowing execution of arbitrary code, so there is nothing to be gained by exploiting the parser.


As others have already alluded to, the problem with C is not its low-level nature or that it allows you to shoot yourself in the foot the way asm does. The real problem is the arcane, unintuitive rules of the ISO C standard; compared to the asm footgun, it is more like carrying a critical lump of plutonium in your pocket that might go off at any point.


> I'm quite bored by the obsession of some for replacing C.

That's fine... but the billions of dollars spent on security and the massive trove of user data that's been exfiltrated due to C is kinda a problem regardless of whether you are bored or not.

> Nobody would criticize assembly for letting you shoot yourself in the foot, C is much the same.

I wouldn't criticize a developer for writing bad code. I'd criticize the language for letting them, or the developer for choosing such a language. Same with C and Assembly.

> The biggest threat to the tech sector is Linus dying and being replaced by some charlatan who insists on replacing C and using Jira.

That is... not the biggest threat. It seems extremely unlikely that that is where linux kernel development would go.


It isn't "due to C." This is a popular meme on HN, but it's incorrect in its focus.

If anything replaces C I'm skeptical that more than a shift in the character of the attack surface (rather than the area) will occur. That is, there will be just as much to attack, it's just that the attacks will require alternate methods. Some attacks will be harder to employ, some less, but on average probably about the same quantity and seriousness will occur.


> It isn't "due to C." This is a popular meme on HN, but it's incorrect in its focus.

You can call it a meme but that doesn't change my opinion.

> Some attacks will be harder to employ, some less, but on average probably about the same quantity and seriousness will occur.

This is unfounded, I don't really know how to respond to it other than that nothing has ever really worked this way in security. You don't get less secure in one area because you were more secure in another.


> You don't get less secure in one area because you were more secure in another.

I agree with you, however that doesn't address what I actually wrote.


"Some attacks will be harder to employ, some less" - I'm interpreting this sentence that way.


> it does exactly what you tell it to do and that's dangerous.

gcc: “hold my beer while I optimize out that block of code you clearly didn’t mean to be in your program”


The example of optimizing in a block of code that is never explicitly called is even more fun: https://kristerw.blogspot.co.uk/2017/09/why-undefined-behavi...



That's not because of C, that's because of GCC.


gcc does it precisely because the C standard says that it can do so, and any program that's relying on not getting broken that way is not valid C (well, "undefined behavior", but it's the same thing in practice).


> gcc does it precisely because the C standard says that it can do so

Or rather, because the standard "imposes no requirements". A compiler that does the obvious/sensible thing for UB can also claim conformance.

From C99 3.4.3 paragraph 3 (emphasis mine): "Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message)."


Where exactly does the C standard explicitly say it can do so?

UB was introduced to allow skipping checks that otherwise might have added overhead, like array bounds checks or null checks, or to give more flexibility to how correct programs are optimized (e.g. freedom to evaluate arguments in any order). Therefore each compiler/library/OS vendor could implement these cases differently.

However, it was never meant to allow compiler to assume "ok, I can see your code is totally broken, so let's break it more, cause you don't care anyway".
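
Whatever the original intent, a standard illustration of the disputed interpretation (a sketch; gcc and clang at -O2 typically fold the test away):

  /* Because signed overflow is undefined, the compiler may assume
     x + 1 never wraps, conclude the comparison is always false, and
     delete it, even though it was written as an overflow check. */
  int wraps(int x) {
      return x + 1 < x;   /* typically compiled to: return 0; */
  }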


> However, it was never meant to allow compiler to assume "ok, I can see your code is totally broken, so let's break it more, cause you don't care anyway".

That's nevertheless how compiler writers are interpreting the standard right now. Take integer overflow for instance. Stuff like this:

  unsigned u = some_computation();
  int x = u;   // because of reasons
  x += 42;     // hmm this may overflow, let's check that
  if (x < 0) { // It's a 2's complement machine, this'll work
      handle_overflow(x); // now, let's deal with the overflow
  }
Now here's how your optimising compiler sees that stuff:

  unsigned u = some_computation();
  int x = u;   // x is always positive.
  x += 42;     // signed integer never overflow
  if (x < 0) { // x was positive, no overflow... always false
      handle_overflow(x); // Let's delete this dead code.
  }
Then, what actually happens:

  unsigned u = some_computation();
  int x = u;
  x += 42;
This is insane.


A correction: a compiler would not be able to assume that `x` is always positive because casts between signed and unsigned integer types are based on their binary representation (what computers see), not their value (what humans see).[1]

[1] https://wandbox.org/permlink/dDDAJjOlm50GD1PW

It would be able to (correctly) make that assumption if `x` were a long, assuming 64-bit longs and 32-bit int/unsigneds.
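
That is, under those width assumptions the deduction becomes legitimate (a sketch of the same snippet):

  unsigned u = some_computation();
  long x = u;   /* every 32-bit unsigned value fits in a 64-bit long, */
  x += 42;      /* so x >= 0 provably holds here, and the compiler    */
  if (x < 0) {  /* may correctly fold this test to "always false"     */
      handle_overflow(x);
  }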


In the C standard, a cast from unsigned to signed integer is undefined behavior if it overflows. So the compiler can, indeed, assume that "x" is always positive in the code snippet above - if the value of "u" is such that it would require signed wraparound when initializing "x", the rest of the program is U.B., and so that possibility doesn't even have to be considered.


This is untrue; see section 6.3.1.3:

When a value with integer type is converted to another integer type ... [if] the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1570.pdf

So casts from unsigned to signed must have a well-defined result, but the implementation can choose how to define it. You could in theory have an implementation that defines that casts from unsigned to signed are always positive,[1] but since that's not how GCC and every other mainstream (existing?) compiler defines it, they are not allowed to assume the result of a cast from an arbitrary unsigned integer to a signed integer of equal width is positive.

[1] In which case the compiler would be allowed to assume that unsigned-to-signed casts are never negative. This assumption would however be both safe and correct, and would have nothing to do with UB.


I always wished C gave access to the overflow and carry flags. Would have made a lot of things simpler.
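
Standard C still doesn't expose the flags, but GCC and Clang provide checked-arithmetic builtins that surface the same information; a minimal sketch (compiler-specific, not ISO C):

  #include <limits.h>
  #include <stdio.h>

  int main(void) {
      int r;
      /* __builtin_add_overflow returns nonzero if the sum wrapped. */
      if (__builtin_add_overflow(INT_MAX, 42, &r))
          puts("overflow detected");
      else
          printf("%d\n", r);
      return 0;
  }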


> // It's a 2's complement machine, this'll work

Let's see, that's not an assumption you can make in C, or is it? I seem to remember that a compiler is allowed to present to the programmer at least one or two representations other than two's complement.


From C99 on, it's either one of: two's complement, ones' complement, or sign + magnitude.


That sounds familiar, so it's probably what I've read. What did previous versions say?


"The type char, the signed and unsigned integer types. and the enumerated types are collectively called integral types. The representations of integral types shall define values by use of a pure binary numeration system."

And a footnote on that:

"A positional representation of integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral powers of 2, except perhaps the bit with the highest position."


http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf is the draft of the C11 spec,

3.4.3 undefined behavior

behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements

NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

-------------

Even if you want to argue over what "imposes no requirements" means, I'd argue that "ignoring the situation completely with unpredictable results" is very clear.

The note also says (and elsewhere in the standard it reinforces this) that since there are no requirements, implementations could absolutely do things like perform runtime checks, etc. Compilers do not do this though. This isn't for no reason, but that's a separate issue.


Before the standard, C compilers applied a principle of least surprise. The standardisation created a huge mess by leaving so many undefined behaviours where compilers could do as they want.


It's because of C as it has been redefined in recent times by the kinds of people who write standards and compilers.

The galling thing is that they did this to win a benchmark war against Fortran.


I'm equally bored with the apparent obsession of some to hate on Jira. It's a dev tool like any other, meaning for some it works wonderfully well, with some quirks here and there. Hmm, like C, I'd say. And I use and like both.


Well, in fairness, JIRA has tried to evolve well beyond the role of a mere bugtracker. Between the complex workflows of its large existing userbase, the huge complexity of the code, and the desire to use it for project management, it has become really slow and painful both to use and to develop.

I've heard some horror stories and apparently the development team is pretty heavily pressured to fix a lot of things.


> Yes, it does exactly what you tell it to do and that's dangerous.

Exactly, if you could get all the benefits without the danger why wouldn't you?


I don't worry about Jira, but I do worry about someone replacing C.


25 years ago it was a middle-level language.

I hardly saw any reason to use it instead of Pascal dialects, besides being "the language" on UNIX systems.

Thankfully Bjarne created C++.


Has Linus ever talked about appointing a successor?


I love you man.


As maddening as it may be, my gut tells me it has more to do with the preprocessor than anything else. When it comes to wrestling with cross-platform differences, I don't know of a corresponding feature in Rust or Go that is equally powerful (or equally ugly!).

As long as computing remains a battleground where rich assholes put vastly different wrappers around the exact same shit so they can wring more money out of us, I will probably be writing in that abominable assembly language w/ turing-complete string-paster for many years to come.


Speaking of the C preprocessor, I found this recently: from the Firefox source code, a red-black tree written entirely in 800 lines of C macros: http://searchfox.org/mozilla-central/rev/f54c1723befe6bcc722... . Submitted for your enjoyment. :)


There are whole libraries that provide these sorts of generic containers for C:

http://sglib.sourceforge.net/

http://klib.sourceforge.net/

https://github.com/pmj/genccont


I've used OpenBSD's version of that more times than I care to remember: http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/...


Looks to be some small amount of C code in those macros as well.


Why would someone do that?


Because long ago (and probably in a different product) making a function call was significantly costly and worthwhile optimizing away via macros.


That is how we do "generics" in C. Not happy about it, just sayin'.
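
For readers who haven't seen the pattern, it looks something like this; a minimal sketch of a type-parameterized growable array (names like DEFINE_VEC are invented for illustration, and error handling is omitted):

  #include <stdlib.h>

  /* "Generics" via the preprocessor: stamp out the container and its
     functions once per element type T. */
  #define DEFINE_VEC(T)                                         \
      typedef struct { T *data; size_t len, cap; } vec_##T;     \
      static void vec_##T##_push(vec_##T *v, T x) {             \
          if (v->len == v->cap) {                               \
              v->cap = v->cap ? v->cap * 2 : 8;                 \
              v->data = realloc(v->data, v->cap * sizeof(T));   \
          }                                                     \
          v->data[v->len++] = x;                                \
      }

  DEFINE_VEC(int)   /* instantiates vec_int and vec_int_push */

  int main(void) {
      vec_int v = {0};
      vec_int_push(&v, 42);
      free(v.data);
      return 0;
  }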


>Perhaps now it is time for a new generation to make their mark, will their efforts last 40 years? Rust anyone?

I'd think it's quite difficult for a new language to replace a language whose main attractive points are its longevity, stability (not of the code and programs coming out of it, but of the syntax and tools etc.), and broad support.

There are many fields where C is easy to beat. But I don't think the next 40 year lasting language will be a C replacement.


I think Redox OS has shown that it's not so hard to build a new OS in Rust. I suspect that it won't be long before people are using Redox in production (another year or so?). Once there's a viable competitor to Linux, perhaps C won't be seen as the default language for systems programming any more.


> I suspect that it won't be long before people are using redox in production (another year or so?)

This is not realistic. No one serious/big will use it in production for the next 10 years, easily. There is no certainty that it will ever become a mainstream OS.

You will see thousands of problems in production next year that were not seen in testing today. Linux has millions of users and millions of use cases and edge cases, and so many drivers written specially for it, tested and proven to work. Stability and maturity are key for an OS today. No one will trade that for some unproven hobbyist OS project.


Indeed. I think the obvious candidate is JavaScript.

C is what it is partly because of its relationship with Unix, and also because it was the language that gave most straightforward access to "the machine" -- whatever that is.

And the hard truth is that The Machine was the only real universal platform around. But now we have this thing called The Web, which is not quite as universal but is getting there, and is much more like a single platform. JS has a special relationship with it.

The only thing that might upset this JS+Web applecart is WebAssembly. A "C of the Web" (call it W) would be a language high level enough to be pleasant to use, but which would have little or no runtime beyond what is native to WebAsm, and thus anyone could use libraries written in W.


I note looking at webassembly.org's getting started page that their hello world example is written in C. Maybe the "C of the Web" will actually be C. (http://webassembly.org/getting-started/developers-guide/)
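
For reference, the example on that page is essentially the classic hello world, compiled to wasm with Emscripten (a sketch; the exact invocation on the page may differ):

  #include <stdio.h>

  int main(void) {
      printf("hello, world!\n");
      return 0;
  }

Something like `emcc hello.c -o hello.html` then emits the .wasm module along with the JavaScript glue needed to load and run it in a browser.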


Due to lack of GC support, C, C++, and Rust are what you can reliably use with wasm at the moment. That integration is coming in the future, and should lead to other languages working well too.


The ability to compile C code to WASM will be invaluable for porting well-used C libraries to the web, but given that the goal of WASM is to define a ("more-or-less") language-agnostic bytecode, I see little reason why people would prefer to use C when they could continue using Javascript, or, eventually, whatever the 40-years-from-now equivalent of Python is. (Of course, you'll always be able to use C if you do want to, so it's not like it's ever going to risk vanishing.)


> I see little reason why people would continue using C when they could continue using Javascript, or, eventually, whatever the 40-years-from-now equivalent of Python is.

People will use C or Python or whatever on the web for the same reasons so many people transpile to javascript today, including that they simply don't like javascript and would rather write code in a language they prefer.


Indeed, but, as much as I admire Typescript, the vast majority of people aren't transpiling anything to Javascript. The grandfather comment was concerned with whether the future "C of the web" would literally be C, but I don't see how the introduction of WASM will start convincing the majority of people to start shipping webapps written in C. C is useful today as a low-level lingua franca, but on the web WASM will be the lingua franca, by definition, and C will be competing with many other languages that compile to WASM.


> The grandfather comment was concerned with whether the future "C of the web" would literally be C, but I don't see how the introduction of WASM will start convincing the majority of people to start shipping webapps written in C.

WebAssembly supports C/C++ now, and there is a lot of existing code that could be tested or converted. WebAssembly will probably get garbage collection and be able to support many other languages soon, but by then network effects may have taken over, especially if most online tutorials for WASM cover C or C++.


Indeed there can be more than one lingua franca, much as Fortran and Pascal shared the native machine with C. But they all have in common that they added little by way of run-time dependencies, and a web lingua franca would also need this property.

But on the web, the basic "built-in" run-time is much richer than was available to those languages. It includes the DOM and many other things. JavaScript already exposes all those things, and no more, so I think it is ideally placed to become (remain) the dominant lingua franca.

So even if the WASM world is multi-lingual, expect APIs to be described in JS terms, maybe with some kind of backward-compatible type decoration thrown in.


Yep. C lives on because it's the language of Unix. JavaScript will live on because it's the language of the Web. It has nothing to do with the language itself -- it's the platform you have access to through it.


Javascript might soon become obsolete with the arrival of WebAssembly.


The wasm project states that obsoleting JavaScript is a non-goal.


But this is what will happen eventually. No programmer with more than 5 years of professional experience really likes working with Javascript; they just put up with it. As soon as you can work in <insert your favorite language here> and compile it to WebAssembly, with the added bonus that it will run pretty fast, the popularity of Javascript will diminish drastically.


Your second sentence is pretty demonstrably false.


Only where Javascript is being treated like a "compile target." Something is going to have to run the existing Javascript code on the web from now on, so the language itself likely isn't going anywhere, regardless of whether or not other languages can target WASM.

Not to mention everywhere else it's been used: WebAssembly definitely isn't replacing Node.js, for instance, or game scripting.


> WebAssembly definitely isn't replacing Node.js, for instance, or game scripting.

WebAssembly is certainly able to replace JS in game scripting.


People won't be scripting in WebAssembly. Other languages could target WebAssembly, but so could javascript.


> I think the obvious candidate is JavaScript.

You're probably right, but egad, JavaScript is an even worse language than C. It's like jumping from the frying pan into the fire.

At least it's garbage-collected.
