Hacker News
The Resurgence of C Programming (oreilly.com)
194 points by ingve on Dec 27, 2016 | 315 comments



Whenever C gets brought up, people are like, "pointers... memory... dangerous!" In 20+ years of C, I've never had a (self-inflicted) problem in that area. Maybe I'm the rain man of bit-twiddling?

What has caused me tremendous grief are all of the inconsistencies in the compilers, the gotchas of the preprocessor, and the ambiguities in the language itself. C++ is even worse.

Rust is great, but not great enough to justify a redo of a program that has been extended and maintained for decades (like vim or photoshop).

I think an iterative tightening of the language would be of greater utility to everyday computing than a rush to re-do everything in something "better."


I always felt that Objective-C was a great take on a "better C"... Definitely not conceptually better in the ways that Rust is, but simply better for a lot of the practical things where C suffers the worst.

Obj-C was an appealingly simple extension to C from the compiler point of view, but for a long time it was hobbled by the runtime/framework situation, as the NeXT/Apple runtime wasn't open source.

Just as that compatibility hurdle was getting fixed by open source efforts around 10 years ago, the iPhone happened and Apple went nuts with extensions to the language... It used to be that Objective-C proponents would brag that you can learn the syntax in 5 minutes if you know C. Now the language is filled with uncountable double-underscored keyword warts and @ declarations. Today's Obj-C looks more like an intermediate format for a Swift->C compiler rather than an evolution of Obj-C 1.0.

I understand completely why that happened. I just wish that Obj-C had gained more of an open source foothold when the opportunity was there, so it wouldn't have been so dependent on Apple's stewardship which eventually killed it as an independent language.


Back in 1986 or so, there were two languages based on C: C++ and ObjectiveC. I was casting around looking for an edge with my C compiler, and investigated the two.

On usenet, there was about the same amount of traffic in the C++ and O-C newsgroups, the respective communities were about the same size.

Which should I choose? ObjectiveC was produced by Stepstone, and they wanted a royalty. C++ was produced by AT&T, and I phoned up their lawyer who said no royalty was required.

So I picked C++ based on that.

In 1987 I shipped Zortech C++, the first native C++ compiler, and the popularity of C++ promptly zoomed. (Yes, I do think ZTC++ had a lot to do with the crucial early surge of C++ adoption, because the PC was where the programming action was at the time, not Unix.) The traffic in the C++ newsgroup exploded, and the ObjectiveC traffic crashed.

O-C was dead until Apple picked it up again later.


Dude's not wrong about Zortech's influence, young'uns. It was truly a game changer. Love the anecdote about calling the lawyer!


I not only asked him about royalties, I asked him if I could call it "C++". He laughed, and said nobody had ever asked that before, they just did it. He was very nice and said I was welcome to use it.

I wish I remembered his name. He was a good guy.

I also remember how nice Andrew Koenig was to me in the early days. All the C++ people were nice, but Andy was a standout.


Great perspective, thank you! I would definitely agree that your early C++ compiler on DOS had a huge impact on the language's popularity.


Hear hear!

And then I watch something like Gary Bernhardt's "Boundaries" talk[1], which isn't bad, mind you, and think "maybe this Brad Cox guy with his Software ICs really knew what he was talking about?"

Where Apple went off the rails with Objective-C was trying to make it into an OO language like Java. It isn't. It is a deliberate hybrid, with a component implementation language ("C") married to a component connection language ("Objective").

My own approach is to take this distinction seriously: instead of doubling down on the "better Java" part like Swift does (which in Software-IC terms means focusing on the less interesting and mostly solved component implementation part), expand on the ways we can connect components (dataflow, constraints, storage composition, etc.).[2]

Minor correction: GNUStep, libFoundation etc. happened around 20 years ago, not 10.

[1] https://www.destroyallsoftware.com/talks/boundaries

[2] http://objective.st/


>trying to make it into an OO language like Java. It isn't.

Can you elaborate? In what ways?


Objective-C is just an idiosyncratic COM-like system with first-class language support (something that COM desperately needed). There's a performance ceiling with Objective-C that C++ doesn't have, due to some key decisions early on that make it much more like COM than C++—pervasive atomic reference counting everywhere, mandatory heap allocation, virtual dispatch requiring (caching) hash table lookups, and so forth.

I think if Objective-C had caught on and C++ hadn't, the performance-critical world would look very different: you'd probably just see those domains stick to C.


I honestly wouldn't mind a world where Obj-C had been used to fill all the COM-shaped holes in PC operating systems, including Windows and the Linux desktop...

Obj-C is a great solution for messy GUI code and OO plumbing when you still need seamless C interoperability. COM and its various C++ wrappers tried to solve the same problem, but fell short in so many ways. On Linux, Qt and Gtk+ both built their own incompatible and inferior solutions to the same problem.


The NeXT/Apple Objective-C runtime has been Open Source since the late 1990s, the API and ABI have been documented since the late 1980s so anyone could create a compatible runtime, and GCC has had a portable runtime almost since Objective-C was taken into its mainline.


I can agree about the nullability annotations & generics additions, but having properties, ARC, & blocks made writing Objective-C programs so much nicer, IMO.


Objective-C will slowly move to where it belongs - the dustbin of programming language history.

Merging C with a higher-level language could only make sense in the 80s, where people didn't know better and invited all the dreadful problems of C code into application development.

Swift is the proof that one can have a safe, powerful language that can also be fast. It's not yet fully mature, but it will get there; the speed at which it's replacing Obj-C in iOS and macOS development is impressive.


Every time I've seen its syntax, it looked bad enough that I immediately abandoned all hope of ever using it.


I agree with you. I've been a C and C++ programmer for decades, and nothing to do with memory safety or leaks has ever risen to the level of prohibitive or even annoying. Ironically, I've had infinitely more grief from fighting hostile garbage collectors, but that's because I'm a game programmer. I quickly dropped Rust upon realizing that the core of the language addresses problems I don't have.

I'm always met with strawman arguments about how I can't possibly write "correct" code. It's a strawman because I don't claim to write correct code. I don't care if Herb Sutter can find 1000 "bugs" in my C++. I care that my users and myself are happy.


I've heard this criticism from several game developers at this point. Contrary to some other voices here, I do think it's legitimate: if safety isn't important to you, then I can't argue with that.

At the same time, I think that Rust made the right choice for most domains. For most software, memory safety problems that hostile attackers find have consequences. Games are an exception, not the rule. It would be irresponsible for us to target servers or network-facing client software without memory safety.

It's also of course true that we have many game developers in the Rust community. A lot of people, including me, see safety as a productivity booster, especially combined with all the other features in Rust. I've advanced suggestions in the past for making unsafe Rust easier to program in, effectively turning off some of the safety checks, specifically for game developers who would prefer not to have the restrictions. These suggestions have been very unpopular in the Rust community, including from the game devs, and so they haven't gone further.


I'm a little confused that game developers are so dismissive of attack surface -- after all, most games I've played in recent memory have some level of network access (multiplayer, achievements, etc.), and very few game developers seem to be OK with rampant exploitation in their online communities.


Am I dismissive of the catastrophic effects of being shot, because I never wear a bulletproof vest? Again, not every individual app has the same problems. I'm not working on WoW. My best guess as to the number of people trying to attack my games is honestly zero.


"Never trust the client". The assumption should always be that the game client is lying to the server, whenever that's possible. In multiplayer games, a stream of keypresses is sent to the server, and a game state is sent back to the client.

At least, that's part of the argument that I assume they'd make.


Totally, although I assume at least some game shops are using shared C++ libraries on both client and server.


The real problem with C (and to a lesser extent C++) nowadays is not just memory leaks or segfaults, but all the security vulnerabilities you get from the lack of memory safety.

Your code might work 100% correctly for your users, but it may also put their systems at risk when processing arbitrary and potentially malicious data. When your system needs to handle data that may come from anywhere - and almost every system does - writing Herb Sutter-correct code is no longer just a nice-to-have.


A big part of the point I meant to make is the difference between theoretical problems and practical problems, and how priorities and risk vary a lot across applications. The number of complaints I've received about malware targeting my games over the last 25 years is exactly zero. So when I go looking for my perfect language, that's not a problem that I'm trying to solve. I'm not arguing that other languages shouldn't exist, I'm saying they don't address the most important problems in my domain (roughly, expressiveness and performance).


> A big part of the point I meant to make is the difference between theoretical problems and practical problems, and how priorities and risk vary a lot across applications.

And this is how we had ILoveYou and similar viruses. In the name of the whole internet, I hereby thank you very much for your hard work in this matter.


ILoveYou was an email attachment virus. Email attachments are forever being targeted, games less so.


I would say, far lesser extent C++ (unless you're just writing C and compiling with a C++ compiler).


That might be the chorus when it comes to selling new languages, but there is no reason C could not be compiled with memory safety. It is not done because, at the end of the day, people don't care much for safety.


It's obviously not happening because C's design, especially its lack of any guarantees, combined with what those checks require, makes complete memory safety hurt performance a lot. We're talking multiples worse (300%+). Things like SoftBound+CETS and Criswell's SAFECode have gotten it down below 100% overhead in many cases. Others seem to explode in delay on specific routines due to how they interact with the bookkeeping methods.

Whereas Ada, Wirth's languages, and Eiffel were designed to make the job easier for users and tooling. SPARK was even tailored to make it easy to prove the absence of C-style vulnerabilities with automated proving. That was done by a small team over a period of years, whereas a global assortment of teams putting decades into static analysis and proof tools for C haven't pulled that off outside even more restrictive tools like the Astrée Analyzer.

C is just inherently bad for safety despite safe ways of doing it existing. That's because it was designed almost exclusively to run fast on a PDP-11. It's that simple. At the other end, Hansen wrote Edison as an ultra-minimal language for a PDP-11. It was smaller than C, easier to process, and still had safety features. That reiterates that it was simply a personal preference of C's designers not to care about safety while many others did.


As a game programmer, exploitability is fairly low on your list of priorities, but for other systems it's quite important. If hackers can find 1000 bugs in your C++ that get them remote access to your computer through the browser, ... well, this is why people hate Adobe.


> I don't care if Herb Sutter can find 1000 "bugs" in my C++.

Until a new release of GCC improves its performance by 10% by more cleverly optimizing around an undefined behaviour, and oops, it now breaks your code…

> I quickly dropped Rust upon realizing that the core of the language addresses problems I don't have.

Rust isn't only about safety; it's also a «C++ on steroids» from 2015, with a ton of cool features: a modern type system, no null pointers, parallel code without fear, a package manager…

Rust is a huge productivity booster IMHO !


> I quickly dropped Rust upon realizing that the core of the language addresses problems I don't have.

There are plenty of other expressiveness advantages to Rust, over and above the safety advantages you apparently wouldn't benefit from.


The problem is that stuff like ownership and mutability are explicit and smack in the center of the language. They're not optional. I don't want to write those semantics at all.


How would you feel about Modula-3 without the upper-case and with a bit less verbosity? It was an industrial development of Wirth's philosophy at DEC. I always thought it did the best of the safer languages at balancing features, simplicity, safety, fast compiles, and runtime performance. Examples are built-in concurrency, simplified (optional) OOP, and deciding on a per-variable basis whether to use garbage collection.

https://en.wikipedia.org/wiki/Modula-3

I've considered rebooting it with a few modern lessons built in, plus some Ada safety features. Everything transparent: nothing goes into the language unless it's easy to work with and consistent. Add Scheme-like macros and good C FFI like Julia's.

EDIT: I just saw your comment about cherry-picked subset of C++. That's how I describe Modula-3 in many comments elsewhere. Now I'm extra curious what gripes you may have with it.


"Upgrading" Modula-3 is an interesting idea, but I wonder how much you'd have to tweak for it to be worth the effort. I suspect that adding new chrome would end up as something awfully like Go.

Go, in my opinion, is currently the best distillation of the Wirth progression of languages that we have. The Go authors (one of whom worked on Oberon-2 at Wirth's ETH Zürich) took a lot of inspiration from Wirth, and Go is in many ways a reactionary language as a result: A simple, strict type system; easy to parse (Wirth's languages all famously fit on a single screen of EBNF grammar, not sure if Go is quite that small); design choices favouring the pragmatic over the theoretical (ignoring the last 30 years of type theory), reasonably performant, etc.

I'm not sure there's that much left in the original well. Modula-3 did have a "safe" construct, but it seems a bit naive today (nowhere near Rust's advanced, complicated safety features). Explicitly choosing between GC/non-GC seems like something compiler writers should be able to automatically optimize. Modula-3 did have generics, of course.


I think it could be a bit better if it got close to Ada's safety features without their unusability. I was considering adding a borrow checker, ref counting, and Go-style GC to it for various use cases. The idea being that people could choose along a spectrum, from quick prototyping with reasonable performance to hard work for safety without GC. Add Design-by-Contract support from Eiffel with SPARK-like static analysis on modules simple enough to allow it. Complex modules could be analyzed more easily than C since it's a Wirth-style language. Port either Rust's or SCOOP's concurrency model so it's safe by default. Maybe a few things from functional programming for decoupling of modules without too much overhead.

I don't think it has to be the Go language, or ignore everything we've learned from type theory, in order to be easy to use. It just has to have a base as easy to use as Go, with options to bring you closer to Ada, Eiffel, or Rust in safety + performance. Plus macros, maybe with the compiler being LISP on the inside for ultra-easy transforms like Julia did. :) The combination should be quite advantageous: a baseline of safety & maintenance achieved at Go speeds, with extra effort increasing the other attributes.

That was my brainstorm on it anyway. It's all up for debate outside the part where it would be easier to learn & compile than Rust or Ada. Their strategies are intrinsically complex vs a Wirth language with some upgrades.


I think a language like that could be great if you managed to synthesize all the good parts into a modern whole.

I always liked the very pragmatic mechanism, in Wirth's languages and in Ada, of attaching constraints to data types; for example, defining a type as an integer of range 1..9. On the one hand, from a type-theoretical viewpoint, it's completely cheating — the mechanism is hardwired into the language, there's no real underlying system or symmetry to it, unlike a proof-based language such as Idris or Agda. On the other, it's awfully pragmatic.
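For comparison, here's roughly what it takes to approximate that Ada one-liner in C++. This is a sketch of my own, not anything from the languages being discussed: Ada checks ranges natively at every assignment, while this version has to bolt a runtime check onto construction by hand.

```cpp
#include <cassert>
#include <stdexcept>

// A rough approximation of an Ada-style "range 1 .. 9" integer.
// The bounds live in the template parameters, and every construction
// is checked at runtime; Ada would do all of this for free.
template <int Lo, int Hi>
class Ranged {
    int v_;
public:
    explicit Ranged(int v) : v_(v) {
        if (v < Lo || v > Hi)
            throw std::out_of_range("value outside declared range");
    }
    int get() const { return v_; }
};

using Digit = Ranged<1, 9>;  // like Ada's "type Digit is range 1 .. 9;"
```

It illustrates the "cheating" point nicely: in Ada the mechanism is hardwired and terse; here it's a library workaround that only checks at construction time.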


Modula-3 sounds interesting, from a couple of fans I've spoken to over the years. I'm curious as to how it was simultaneously (a) taught in universities for years and (b) completely forgotten in the popular culture.


There used to be a big ad for Modula-2 (not 3) toolchains in the back of Byte magazine (when it was relevant), but they were all bytecode-interpreted, and not as performant as 'C'.

Universities weren't that relevant back then. Fully half the people I worked with through the end of the 20th Century did not have CS degrees.


In good part because universities where I was (partly due to donations and funds from IBM) did not tolerate microcomputers, wouldn't connect to them, and refused to put in a single course regarding them. PCs weren't classy; they were merely the future.


It's a social and economic phenomenon. It happens so much I discussed here the possibility of creating some central resource for knowledge in programming, security, etc specifically designed to prevent it. It would have an accompanying forum of people vetted to know about various tech or skills. That way experienced people can pass the wisdom onto new crowd to cut down on some of the fads or at least how they deafen everything.

A number of companies used Modula-3 and experienced its benefits. The only widespread use I recall in FOSS was CVSup, which has since been rewritten.

https://modula3.elegosoft.com/pm3/intro/questions/whym3.html


Unfortunately, I'm not familiar with Modula-3 or Ada. But I will look into them. Thanks.


Oh wait, if doing Ada, then take this link:

http://www.adacore.com/knowledge/technical-papers/safe-and-s...

Remember as you read it that Ada started largely from scratch in the 1980s, mainly for embedded, real-time systems that can't fail. The thing to focus on as you skim the table of contents or chapter contents is that they systematically find the constructions that screw people up, statically prevent them where possible in the type system, catch others dynamically by default, and let you drop to unsafety where necessary. The range types (esp. existential types), ability to control bit layout, & concurrency safety are old favorites of users in the safety-critical space.

SPARK is a subset of Ada with extra annotations to help it catch errors statically. It can, with automated provers, show the most common problems can never happen in your code. Here it is:

http://www.spark-2014.org/uploads/itp_2014_r610.pdf

http://www.spark-2014.org/about


Sorry, I still don't see it. Once you internalise the logic of Rust's borrow checker, you'll see that probably 80% of your code already follows this discipline and would pass the borrow checks without trouble.

Some small amount of code is probably hiding mistakes in various corner cases, which the borrow checker will point out. The remainder is legit safe code, and you can use unsafe annotations to do what you would do in C.

So you're throwing away a powerful tool just because it impacts maybe 5% of the code you want to write, but not even in a way that would really impede you in the end.


Sometimes you have to wonder if these admonitions from industry trendsetters are really just sneaky tricks to slow the rest of us down...

"Hey, that C++ code that's been working for years? All wrong! Go rewrite it in Node.js. No, no, no, in Go this time. Now Rust!" ...


> Sometimes you have to wonder if these admonitions from industry trendsetters are really just sneaky tricks to slow the rest of us down...

No, I didn't work on Rust for the past six years while sitting in the office twirling my mustache trying to think of ways to slow the industry down.

I worked on it because we kept seeing the same issues come up again and again and again.


I absolutely don't disagree with both of you.

That being said, I've known game developers for years, and while I don't know their exact complaints, they kept saying they wished they were working with something other than C++.

I think it's mostly solving the same problems many times over and over, during their entire career.


I do wish I was working with something else. Something I'm still surprised doesn't exist yet. I understand the reason it doesn't exist is that a cherry-picked subset of C++ is good enough. This hypothetical language would be a greatly simplified C++ with a couple dozen small annoyances smoothed over.


Well, you CAN do simplified C++. Apple even had their own version of it called 'Embedded C++' (the drivers in OSX are like that) -- basically, remove RTTI, exceptions, templates, and a few other bits, and you get C with namespacing, objects, nice scope-based stack objects, and a few other nice things, without the bloat.

So, yes technically you CAN write nice lightweight C++; but you won't be able to enforce it in a team, as everyone will need 'their' pet feature of doom to bring back the bloat.

How do I know? Well, I tried for years to do it like that; and eventually I had to give up. Now I do plain C, because it can't be corrupted in the same way.

"But, but, I just #included <vector>!!"

"... What, for a 5 byte constant array?"

"Yeah I know, but the compiler will just inline it!"


Or worse:

    #include <boost/fusion/container/vector.hpp>
"Why do we need Boost for this single vector?"

"Well... We're using Boost elsewhere..."

Now we use C. Not C++.


If someone is using a vector for a small array of fixed size in C++, in a place where performance matters, then their knowledge level is so low that you are going to have issues with them no matter what, I'm afraid. Knowing that vector uses the heap is beginner level knowledge.

On the other hand, in C you are stuck with a built in array now. If you pass this to a function that doesn't get inlined, it will decay into a pointer, then inside that function the compiler has actually "lost" knowledge of the size of the array. Now you have to pay for a loop.

In C++ you can pass a std::array by const reference, the compiler knows exactly how big that array is at all times, and for small arrays the loop checks can be eliminated entirely.
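A minimal sketch of the difference, with the `std::array` version next to the decayed-pointer version (function names are illustrative):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// The length N is part of the type, so every caller's N is visible to
// the compiler, which can fully unroll or vectorize the loop.
template <std::size_t N>
int sum_fixed(const std::array<int, N>& a) {
    int total = 0;
    for (std::size_t i = 0; i < N; ++i)
        total += a[i];
    return total;
}

// Once a C array decays to a pointer, the length must travel separately
// and is no longer a compile-time constant at the call boundary.
int sum_decayed(const int* p, std::size_t n) {
    int total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += p[i];
    return total;
}
```

Both compute the same thing; the point is what the optimizer can see across a non-inlined call boundary.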

Many of the arguments against C++ boil down to: if you're working with a lot of people who have a knowledge level below what I would expect someone with 1 year of good experience to have, then they can make more mistakes. Is the level of developers generally really that low?


> On the other hand, in C you are stuck with a built in array now. If you pass this to a function that doesn't get inlined, it will decay into a pointer, then inside that function the compiler has actually "lost" knowledge of the size of the array. Now you have to pay for a loop.

To which my answer would be "write your own". Whenever I program in C I always use my own dynamic, bounds-checked arrays that store their own length.

I am often surprised that many C programmers often don't build their own abstractions. The OpenSSL library is a prime example of this - the same 4 or 5 arguments passed in the same order to hundreds of functions. I am always surprised they didn't just make a struct.
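A minimal sketch of that kind of length-carrying, bounds-checked array, written in C-style code (the `IntVec` name and API are made up for illustration):

```cpp
#include <assert.h>
#include <stdlib.h>

/* A length-carrying array: the size travels with the data, and every
   access goes through a checked accessor instead of raw indexing. */
typedef struct {
    int    *data;
    size_t  len;
} IntVec;

static IntVec ivec_new(size_t len) {
    IntVec v;
    v.data = (int *)calloc(len, sizeof(int));
    v.len  = v.data ? len : 0;
    return v;
}

static int ivec_get(const IntVec *v, size_t i) {
    assert(i < v->len);          /* bounds check on every read */
    return v->data[i];
}

static void ivec_set(IntVec *v, size_t i, int x) {
    assert(i < v->len);          /* bounds check on every write */
    v->data[i] = x;
}

static void ivec_free(IntVec *v) {
    free(v->data);
    v->data = NULL;
    v->len  = 0;
}
```

The length is known at runtime, not compile time, which is exactly the distinction the sibling comment is making about `std::array`.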


You are missing my point.

It can store its own length, but the compiler will still not be aware of the length, except in a truly local context (i.e. in the same block of code, either directly or via inlining). So if you decide to loop over your array, you have to pay the price of looping (jumping and comparing).

If you know that you are working with an array of fixed length, and you use a C++ std::array, you can pass this by reference to functions and those functions will know at compile time how long the array is. So you don't have to pay for looping at all; if you do something small in a loop the compiler will unroll it to 3 assignments or what not.

In addition, the reason that C programmers don't build their own abstractions is that C does not have templates. So building your own abstraction either means that it's not very abstract (i.e. restricted to one data type), or you are using void pointers, or macros, or both. These things are all horrible to work with, and void-pointer-based data structures also involve a performance hit. This tilts the trade-off between writing abstractions and single-use code far, far towards the latter.

In C++ you have templates which are far better for writing your own data structure than macros, and have no performance penalty.
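As a sketch of that difference, here's a tiny generic container done with a template; the `Stack` type is illustrative, not from any particular codebase. The same thing in C would need either a macro that stamps out a copy per type, or `void*` with casts at every use site.

```cpp
#include <cassert>
#include <cstddef>

// A generic fixed-capacity stack. The template is type-checked and
// monomorphized per T, so there is no void* indirection at runtime.
template <typename T, std::size_t Cap>
class Stack {
    T data_[Cap];
    std::size_t top_ = 0;
public:
    bool push(T v) {
        if (top_ == Cap) return false;   // full: refuse, don't overflow
        data_[top_++] = v;
        return true;
    }
    bool pop(T& out) {
        if (top_ == 0) return false;     // empty
        out = data_[--top_];
        return true;
    }
    std::size_t size() const { return top_; }
};
```

`Stack<int, 8>` and `Stack<double, 8>` are separate, fully typed instantiations; a `void*`-based version would lose both the type checking and the direct (uncast, unboxed) element access.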


You might be interested in Jai[0], Jonathan Blow's (as yet) unreleased language designed with game development in mind.

It's not strictly a subset of C++, but rather focuses on syntax and semantics that are convenient for the rapid development/exploratory refactoring and high performance that are characteristic of game dev.

Lots of tools for different kinds of manual memory management, build rules integrated into the source, syntax support for easily switching between "arrays of structs" and "structs of arrays" (and data-oriented types in general), simple static typing with type inference, that kind of thing.

[0] https://github.com/BSVino/JaiPrimer/blob/master/JaiPrimer.md


Absolutely. I'd be delighted to code in a simplified C++.


> In 20+ years of C, I've never had a (self-inflicted) problem in that area. Maybe I'm the rain man of bit-twiddling?

Have people been actively looking for mistakes in your code to find exploits?

I just wrote a simple throwaway .otf parser in C, which I've been coding in for a decade and a half. After trying to think about all the places overflow due to malicious input could occur, I just gave up and wrote "FIXME: Insecure" over it, pending a rewrite in something better.
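For a sense of what that rewrite has to guard against, here's a sketch of the two checks such a parser needs on every read: bounds-checked access and wrap-proof offset validation. The `Reader` type and function names are mine, purely for illustration.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// A cursor over an untrusted buffer: every read is bounds-checked,
// and all offset arithmetic is done in a way that cannot wrap.
struct Reader {
    const uint8_t *buf;
    size_t len;
    size_t pos;  // invariant: pos <= len
};

// Read a big-endian u16 (the byte order OpenType tables use).
// Returns false instead of reading past the end of the buffer.
static bool read_u16be(Reader *r, uint16_t *out) {
    if (r->len - r->pos < 2)   // safe subtraction given the invariant
        return false;
    *out = (uint16_t)((r->buf[r->pos] << 8) | r->buf[r->pos + 1]);
    r->pos += 2;
    return true;
}

// Validate an untrusted (offset, length) pair against the buffer
// WITHOUT computing offset + length, which could wrap around.
static bool range_ok(const Reader *r, size_t offset, size_t length) {
    return offset <= r->len && length <= r->len - offset;
}
```

Every table offset and count in the file has to flow through checks like these; miss one and a malicious font reads (or writes) wherever it likes.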

In general, I don't see how you can deny that memory safety problems exist. Pwn2Own is a thing.

> What has caused me tremendous grief are all of the inconsistencies in the compilers, the gotchas of the precompiler, and the ambiguities in the language itself. C++ is even worse.

> Rust is great, but not great enough to justify a redo of a program that has been extended and maintained for decades (like vim or photoshop).

You just explained exactly why all those inconsistencies and ambiguities exist. And we will keep suffering from them as an industry until we decide to move beyond a language from 1978.


Dude, I never denied that those problems exist, and I have no beef with your language.

My primary points are:

1. Memory management has never presented itself as public enemy #1 in my personal career. Other things have always been far more troublesome.

2. For complicated programs that have been continuously developed and extended for decades, a re-write in a "better" language is not a particularly good idea.

FWIW, for new stuff, Rust is a good choice.

Also, being at Mozilla, I can understand why memory management seems like priority #1. You are a platform!

The rest of us are building on the shifting sands that the west coast creates. Keeping up with Microsoft changing this and Google changing that is WAAAAY more trouble than pointer arithmetic (for me anyway).


> 2. For complicated programs that have been continuously developed and extended for decades, a re-write in a "better" language is not a particularly good idea.

Can you explain the strategy you propose to stop the entire browser industry from getting knocked over at Pwn2Own every year?


If Mozilla thinks it's a good trade-off to spend the next several years fighting functionality bugs in exchange for better pointer and memory guarantees with Firefox, they are more qualified to make that call than I am.

For other types of program that I am familiar with, that is a raw deal. Take vim. I don't have it exposed on the network. I've never had any problems with it. It does not affect me in the slightest if the guys at Pwn2Own can contrive some convoluted attack against it. The functionality is more important than the security for my use case. I would rather it just keep working than have it be undefeated at Pwn2Own.


At this point I think you're making an argument about software evolution more than programming languages. "Just keeping it working" is fine for software like Vim, which has a natural monopoly on the small niche of longtime programmers who have memorized the keystrokes and like it the way it is. (I am not insulting Vim users: I am one of them!) It does not work for software like browsers in a deeply competitive market, which have to cater to a large audience of hundreds of millions who constantly demand increased stability, security, and performance.


This thread has gotten so unwieldy that I now have more arguments than I started with! ;-) But yeah, evolution over revolution.

Perhaps that could be a fun challenge. Take some creaky old thing, have team A re-do it in Rust, have team B do an OpenBSD-style hackathon on it in its native language, let the QA guys and the hackers loose on it, and see which one fares better.


> Rust is great, but not great enough to justify a redo of a program that has been extended and maintained for decades (like vim or photoshop).

I agree with that, and most people do even in the Rust community. I don't expect people to rewrite their software in Rust, but if people could stop writing new C code[1], the world would become a better place in the next decades …

> Whenever C gets brought up, people are like, "pointers... memory... dangerous!" In 20+ years of C, I've never had a (self-inflicted) problem in that area.

In 20+ years of C, you never faced a single segfault?! I'm sorry, but I don't trust you. You're basically pretending to be the One True God of programming.

[1]: even in a full C project, it's straightforward to write your new code in Rust in separate files, thanks to Rust's really good interop with C. Example: https://bawk.space/2016/10/06/c-to-rust.html


It's just not that difficult to avoid segfaults, signed integer overflow, all the usual Mr. Hyde aspects of C.

There's nothing godlike about it - it's just practice. He's been using C for 20+ years. So that's probably 10+ years of very little or no Bad UB Voodoo.
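One concrete example of that practice: never writing a bare `a + b` on untrusted signed values. A sketch, assuming GCC or Clang for the builtin, with a portable fallback spelled out by hand:

```cpp
#include <cassert>
#include <limits>

// Checked signed addition: reports overflow instead of invoking
// undefined behaviour. __builtin_add_overflow is a GCC/Clang builtin.
static bool checked_add(int a, int b, int *out) {
#if defined(__GNUC__) || defined(__clang__)
    return !__builtin_add_overflow(a, b, out);
#else
    // Portable pre-check: test against the limits before adding.
    if ((b > 0 && a > std::numeric_limits<int>::max() - b) ||
        (b < 0 && a < std::numeric_limits<int>::min() - b))
        return false;
    *out = a + b;
    return true;
#endif
}
```

The habit is cheap once internalized, which is presumably the "practice" being described, though it only helps where you remember to apply it.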


> He's been using C for 20+ years. So that's probably 10+ years of very little or no Bad UB Voodoo.

The implication being that it takes nearly 10 years of C programming experience to get to the point where there's very little or no bad UB voodoo?

I guess as long as we only let 10-year veterans of C write programs in it, we're fine then. We can solve how to deal with the fact that nobody else is getting experience in C anymore, because they aren't allowed to write it, later...


I meant very probably. Surely it takes less than that.


Sure, but 10 years is a long enough time that even if "less than that" is half, it's still rather long. There's something to be said for languages that change the effects of poor choices from security problems and inefficiencies into mostly just inefficiencies. Nobody is born knowing how to program, so all programmers have to go through a phase where they don't have the knowledge or experience to make the correct choice in every situation. Every language will have some percentage of its user base subject to this, more or less.

Another way to think of it is herd immunity. The more we can gently guide and gently admonish these nascent programmers for doing things that are probably a bad idea, the more we can protect everyone else, the public and other programmers alike, from most of their bad choices until they've learned when it's appropriate to break those rules.


My claim is that the particular class of bugs relating to memory/overflows have not been an issue in my career, while bugs arising from API and language ambiguities have caused considerable headaches.


The problems caused by language ambiguities, pointer arithmetic, and addressing are exactly what make C difficult.

So, I don't disbelieve you, but I've known 20-year C experts to have their WTF moments when shown code that would seem to work but doesn't.

But I agree that C difficulty is sometimes exaggerated - partly it's expectations.

ps. by ambiguity do you mean undefined behaviour?


> So, I don't disbelieve you, but I've known 20-year C experts to have their WTF moments when shown code that would seem to work but doesn't.

If after 20 years they don't know the C spec well enough to know if a piece of code should work or not, I simply wouldn't consider them very good programmers.


Hrrm.

If Jon Bentley can have a bug in his Programming Pearl book then so can anyone.

https://research.googleblog.com/2006/06/extra-extra-read-all...


I never said they wouldn't create bugs, but not fully understanding a line of code (outside of needing to conceptualize external dependencies) is certainly frightening after 20 years of working in C.


What it shows is that sort of code never shows up in practical C. There's stuff in Java (and Javascript for that matter) I don't understand, and I've been programming in those for about as long as anybody.


Absolutely.


    Whenever C gets brought up, people are like, "pointers... memory... dangerous!"
That's actually the point. It's even coded into the C2x charter ( http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2086.htm ):

    The new spirit of C can be summarized in phrases like:

        (a) Trust the programmer.
        (b) Don't prevent the programmer from doing what needs to be done.
        (c) Keep the language small and simple.
        (d) Provide only one way to do an operation.
        (e) Make it fast, even if it is not guaranteed to be portable.
        (f) Make support for safety and security demonstrable.


That's the thing; programmers are NOT TRUSTWORTHY. This is a "humans are fallible" thing, not a "you can't trust stupid programmers" thing.

This is why C and Unix are irredeemably flawed designs; they trust the individual programmer/sysadmin to get things right and disaster happens when they don't. In the case of Unix, it can be demolished with a single fat-fingered rm command as root. VMS made a different call: in VMS it is assumed that the sysadmin is tired and hasn't had their coffee. This affects everything from how permissions are assigned to the notorious verbosity -- and extensive help system -- of the VMS shell. Properly configured, VMS was darn-near impregnable; it had also never fallen to the sort of buffer vulnerability common under Unix.


Too bad the business model of VMS was so bad, eh? UNIX outcompeted, and more to the point, KILLED VMS on every front. And yes, I have had the displeasure of having to use VMS and code in it and deal with the special snowflake sysadmin people who operate VMS, the most recent one being just two years ago. What a time! What mystifies me is that there are still many companies out there who refuse (stuck?) to migrate off of ancient OpenVMS Alpha hardware--all because the IT people stomp their feet and throw tantrums and won't let it go. Where are the rational tech people? Still advocating these monoliths for job security...

VMS had its day, its day is over. Learn. New. Skills.

I enjoyed using VMS in college years ago. I am now biased because I have never been mistreated as a professional developer so badly as while dealing with that VMS admin! The man was so toxic that the company should have fired him many times over but refused because he was the "only" one who could kinda sorta run their database (note: they had to restart the application daily to keep it running and the boxes had all kinds of memory issues and they were buying used hardware off of eBay to keep it running--this is a major healthcare company mind you, thousands of customers whose lives are at stake).

Time marches on, skillsets have to be upgraded, which is apparently too much for some people to accept.

Dumb companies, dumb sysadmins.


(f) Does this mean they'll remove some undefined behaviour? Or at least mandate options to that effect? That would be marvellous.


I tend to read undefined as don't do it


It's not always possible / feasible to avoid undefined operations.

Use the modulo operator? Undefined unless you checked for negative first.

Add two integers? Undefined unless you check for overflow.

Granted, those examples get a pass because on modern two's-complement architectures everything handles the undefined behavior in an identical way.
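For what it's worth, the overflow case can be handled portably without leaning on two's-complement behavior at all, by testing the bounds before adding; a minimal sketch (the helper name is made up):

```c
#include <limits.h>
#include <stdbool.h>

/* Hypothetical helper: signed overflow is UB, so test the bounds
   *before* adding rather than checking the result afterwards. */
static bool checked_add(int a, int b, int *out)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return false;            /* the addition would overflow */
    *out = a + b;
    return true;
}
```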


I believe the modulo operator is implementation defined. As should be signed integer overflow, though it isn't.

One big problem about "undefined" is, when some platform goes bananas over some stuff, the behaviour is standardised as undefined for all platforms, allowing the compilers to go bananas. Which they do, because every bit of undefined helps them perform some micro-optimization they couldn't do before.

I long for the day where the standard forces compilers to provide options such as -fwrapv or -fno-strict-aliasing on platforms that can reasonably support it.


That's Modula-2. Maybe they just need to change the syntax a bit to look more like C, build an IDE plugin, and reboot the Modula-2-to-C compiler to leverage optimizing compilers.

From the reboot one group is doing:

http://modula-2.info/m2r10/pmwiki.php/Spec/DesignPrinciples


Seems to me (b) and (d) are mutually exclusive, as are (a) and (f).


Also, there is the risk of re-doing everything and getting worse results: e.g. executable/library bloat, code harder to read/understand, interoperability/binding issues, etc.


At least Rust allows you to write drop-in replacements.


Theoretically, yes. In practice, you'll get "Frankensteins" with multi-platform compatibility issues.


From a security perspective, C is inadequate. This is the biggest issue facing our use of the language currently.


From a security perspective, if you think C is inadequate, try taking a look at the instruction set docs for one of Intel or AMD's semi-modern chips. ;-)


Cornell did that. They saw horrible things. Then, they fixed the root cause:

https://www.cs.cornell.edu/talc/

Did something similar with C in Popcorn: a safe C dialect that compiles to TALC. Gotta wonder what things would be like if more groups took such an approach to the problems they've found in C, UNIX, etc. Well, it was a nice vision at least. :)


I followed the web search links to Cyclone, Popcorn's successor, and ended up at http://www.eecs.harvard.edu/~greg/cyclone/ which is a 404.

Which suggests how popular these safe C's are, regrettably.


Yep. How it always goes. Fortunately, Cyclone's region and dynamic safety checks helped inspire Rust's scheme. It went somewhere, even if not for C programmers in general. As far as also-rans go, there's Microsoft's Vault, FLINT's C0 used in Verisoft for whole-system verification, and Clay, which is mainly applied by Lea Wittie in stuff like device drivers.

https://web.archive.org/web/20080219014100/http://research.m...

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.468...

http://www.verisoft.de/VerisoftRepository.html

http://www.eg.bucknell.edu/~lwittie/research/Clay.pdf

http://www.eg.bucknell.edu/~lwittie/research.html


That is not an excuse. Defence in depth is a thing.


If you're building on a sinkhole, I don't see how having really nice floors helps.


I don't see how the chip's 'sinkhole' enabled bugs like heartbleed, which was industry-scale and absolutely could've been prevented by having a really nice floor.


You can fix heartbleed--even in C. You can't fix TPM hacks, rowhammer, etc. without buying a new machine.

That's not even getting into the potential of the missteps in the RMM / MMU / SMP / virt systems that exasperate even Torvalds and deRaadt.


You are making an all-or-nothing fallacy. While those things do exist, we can't use them as an excuse not to address the common case.


I'm not making any kind of fallacy here. The chipset is the foundation. Even with perfect programs written in perfect languages, the Mossad can still get whatever they want without breaking a sweat thanks to the silicon.

If we were going to set priorities for making things "better," the foundation would be the reasonable place to start. We'd probably be happy with Algol if we were running it on B5000-style hardware.


It sounds like you mean that stopping the vast numbers of lesser attackers other than Mossad is so much less important that it's worth ignoring.

Most people's day to day experience of vulnerabilities is someone getting access to their password or (for the techies) their private server. Stupid actions aside (like leaving a database open to the internet), it'd be nice if things like heartbleed weren't so prolific.

Perhaps when we're living in a dystopian fascist state we will wish we had started out by fixing the hardware vulns, but until then it'd be nice if, for example, a person who has set up SSL correctly can assume it won't leak data all over the place.


Never said to ignore anything. I'm just outlining priorities.

With things like rowhammer, you don't have to be Mossad to exploit it, and you can't apt-get or windows update your way around it.

From your own examples, better languages can only do so much.


No point in building elsewhere if any earthquake will cause the structure to collapse, killing everyone inside, even on more solid ground.

I'd love for chip design to be the weak point for once. Let's make it happen.


From a speed and control perspective, nothing else is adequate.

The best we can hope for in the medium term (20 years?) is to have C as a target / intermediate / escape (ie. FFI) language.


From a security perspective, C is inadequate.

The implication here being that Rust is adequate, I take it? Let us not circle around the issue!

You could assert the same for assembler, and yet, assembler is the ultimate programming - the last stop: highest performance, maximum control, no safety. When, then, was the last time you heard or read someone claiming that "assembler is inadequate", especially when it comes to security?

I could think of no language more adequate to push the electronics to their limit, and beyond, as the Commodore 64 teaches us over and over again, even in 2016.

All I want to say is that your argument rings hollow. Yes, C has no safety. If my code has a hole, it will be no one's fault but my own. And that's the way I like it: it is far more preferable to an ultracomplex programming language like Rust.

And finally, a Zen koan: I was troubleshooting some code today, which I wrote in AWK. Now the code transformed the data exactly as I had coded it, and the code was 100% correct, the transformed data was exactly as expected, yet no output was coming out.

Everything which was in the program was 100% correct. The problem wasn't in that which was in the program, but that which was not: there was no clause to output the transformed data. There is enlightenment to be had from this tale, specifically about security, and even more specifically about Rust the programming language and its compiler.


> The implication here being that Rust is adequate, I take it?

I think the implication here being that Rust is an improvement. Perhaps not even Rust is adequate, depending on one's needs, but it is at least a step in the right direction.

> Yes, C has no safety. If my code has a hole, it will be no one's fault but my own. And that's the way I like it

This is the way you and so many other C programmers "like it". As a result, buffer overflows, use-after-free bugs, and other completely understood, recurring bugs continue to plague the CVE database in the form of new RCE vulnerabilities.

Maybe you're the promised child, the next Buddha, who will usher in a new age of secure C.

> I was troubleshooting some code today

Oh, you're a mere mortal like the rest of us.

Look, I get it, you don't want to give up the "power" of C for the improved memory safety of something else you don't yet understand. But you could really simplify your argument to: "I don't care much about safety, so C is fine." And, hey, you know, that's actually a legitimate argument.

I care about safety. I recognize I am mortal, so I have my code reviewed for mistakes. I recognize my reviewers are mortal, so I apply static analysis, unit tests, fuzz testing, and integration tests to catch the mistakes my reviewers won't. Even these are fallible, so I apply all the latest and greatest mitigation that I can - ASLR, stack cookies, W^X, the works.

If there's a hole, it will be the "fault" of me, my coworkers, the authors of the tools we use, and so many more. This is a step in the right direction - towards just how I like it. Where one mortal may fall, perhaps a team of them shall succeed.

And yet it's still not enough.


Look, I get it, you don't want to give up the "power" of C for the improved memory safety of something else you don't yet understand.

Not quite: I am vehemently against the idea of trading simplicity of C for ultra complexity of Rust, just so I could get some compile time checking, and I will forever be the enemy of that idea.

But you could really simplify your argument to: "I don't care much about safety, so C is fine."

I care very much about safety. But I will not trade simplicity for an ultra complex language like Rust, just so the compiler could do some compile time checking.

Oh, you're a mere mortal like the rest of us.

Buddha was a mere mortal too. However, this programming business is special to me. Computers aren't only my passion. They're my life.

Maybe you're the promised child, the next Buddha, who will usher in a new age of secure C.

And just like all the 33 buddhas, I too am straining and struggling to become one, even if it takes 100,000 years.


> Not quite: I am vehemently against the idea of trading simplicity of C for ultra complexity of Rust

The intricacies of undefined behavior escape a lot of my coworkers. Having to painstakingly explain strict aliasing rules and what constitutes a sequence point (or doesn't) isn't "simple". Even basic multithreading support as recently as C99 required third party libraries, and compiler extensions - if only to prevent the unexpected reordering of code by the optimizer.

In practice, this is complex enough that I resort to third party tools to try and manage these and other complexities in the form of static analysis, aided by even more compiler extensions. I retrofit them to existing code, and then it points out where shared data is accessed without the appropriate mutex being locked - and then I smile, for I've seen weeks sunk into debugging such bugs, and I've spent 10 minutes to not only catch one such bug but prevent them in the future for this code. Iterator invalidation checks? Sign me up!

Rust's lifetime management and borrow checking are, frankly, a much more comprehensive and unified way of doing a lot of what I was already doing in C and C++ code. We end up disagreeing on which language is the simple one, and which language is the complex one.

> I care very much about safety. But I will not trade simplicity for an ultra complex language like Rust, just so the compiler could do some compile time checking.

Do your actions agree with your words? I've listed several of the things I've done to wrangle with the problem of safety - not a comprehensive list, mind you, but a good start. Perhaps you go even further, and use one of the "safe" C subsets such as MISRA C?

EDIT:

> Yes, C has no safety. If my code has a hole, it will be no one's fault but my own. And that's the way I like it: [...]

Are you sure you care very much about safety?


Rust's lifetime management and borrow checking are, frankly, a much more comprehensive and unified way of doing a lot of what I was already doing in C and C++ code. We end up disagreeing on which language is the simple one, and which language is the complex one.

Yes, we disagree, and vehemently at that. I think borrowing is one of the most overcomplicated things in Rust, and is one of the major reasons why I am against it. You hit the nail on the head.

Are you sure you care very much about safety?

Yes. Every piece of code I write in C, when I write it in C, is run through a debugger before it is put in production, because I don't trust my code and I don't trust myself. In fact, I don't release anything in any programming language without having debugged it first, even when it appears to function correctly. It's still simpler than Rust!


> When then, was the last time you heard or read someone claiming that "assembler is inadequate", especially when it comes to security?

All the time. Virtually no one would recommend writing any serious software in assembler. The only software written in assembler is high performance numerical code and certain hardware-specific kernel device drivers.

> If my code has a hole, it will be no one's fault but my own.

And who's going to hold you accountable for that hole that just leaked 100,000 credit card numbers? No one, that's who. The institutional bias against safety and the insulation programmers enjoy from the consequences of their code are the only reason attitudes like yours persist, and the trivial security bugs that inevitably follow.

> And that's the way I like it: it is far more preferable to an ultracomplex programming language like Rust.

Rust the language is not ultracomplex. Don't confuse the language with the compiler.


> The only software written in assembler is high performance numerical code and certain hardware-specific kernel device drivers.

Don't forget the cryptographic primitives: coding them directly in asm is the only way (at least the only way I know) to be sure that the execution will be done in constant time, to avoid "timing attacks".
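For context, here's the textbook XOR-accumulate idiom those routines implement, sketched in C rather than asm (which is exactly why asm is preferred: a real implementation still has to audit the generated code, since the compiler is free to undo the constant-time property):

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison: unlike memcmp, the running time does not
   depend on where the first mismatching byte occurs, so nothing about
   the secret leaks through timing. Returns 1 if equal, 0 otherwise. */
static int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];     /* accumulate every difference */
    return diff == 0;
}
```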


If you want individual programmers to be "held accountable" for bugs they write, it seems that you do not consider their employers or managers even to be involved. This is an overly-simplistic understanding of where bugs come from.

Your argument depends on the premise that Rust is necessarily better than (say) C. If it turns out that Rust has a bug in it (it's software, it should have bugs, and it's relatively new software as well), then should the programmer who chose it over something else be liable? What if the alternative were an extremely rigid discipline for C programming using a very mature toolchain? I suspect your evidence for Rust being safer isn't worth taking to court.

Thought experiment: if an Evil Dictator simply killed programmers for writing software that contained bugs, would it give rise to a race of perfect programmers? Or would it crush the industry and impair the economy? Would you stake your life on Rust being bug-free? But you can financially ruin almost any engineer for life with the value of a suit for 100,000 credit card numbers. Under that regime, engineers who assumed all the risk would reasonably expect a much larger share of the profits. Are you starting to understand why programmers are not held individually liable for every bug they write?


Maybe I read it differently, but it seems to me that GP is more concerned with the institutional bias and the attitude rather than anything relating to legal liability... the programmer is allowed to get away with believing that they are taking responsibility for the security of their code, but never feels any consequences for failure, and thus believes they are doing a good job when they are not.


That's phrasing it better than my original rushed comment, thanks. This insular environment doesn't tend to incentivise responsible choices in software development or in deployment.

Perhaps liability is one way to address it, but hardly the only way. "Engineer" here in Canada is a protected term and you must belong to the society of professional engineers (P.eng), which carries various legal responsibilities when signing off on work.

Requiring code reviews by a P.eng creates a whole different set of incentives, which would emphasise languages that are more expressive, clearer, and easier to audit, because the P.eng's time is way more costly than the whining of a software developer who wants to squeeze 1% more performance out of his inner loops.


Individual engineers are held accountable -- sometimes criminally -- for bridges they design. Why shouldn't we do the same for programmers -- make it a licensed and bonded profession with serious consequences for screwing up? It could go a long way towards cleaning up the crappy practices which are accepted as normal for the field. Silicon Valley techie culture has long afforded programmers too much freedom. They serve the public and should be held accountable to the public.


> If you want individual programmers to be "held accountable" for bugs they write, it seems that you do not consider their employers or managers even to be involved. This is an overly-simplistic understanding of where bugs come from.

It's not overly simplistic to point out that at least 80% of security vulnerabilities would never have happened if the program had been written in a memory safe language. That's not a management problem, that's an institutionalised programmer bias.

And my argument does not depend on Rust at all. Plenty of safer alternatives to C exist, like Ada, or heck, Frama-C verification of C code. These tools aren't well known or sufficiently funded because of your exact type of bias that there's nothing wrong with choosing C.


"assembler is the ultimate programming - the last stop: highest performance, maximum control, no safety. "

Nah, that's microcoding, synthesis of control/data paths in NISC, or use of FPGA's. Assembly is for people that can't handle doing a custom ASIC on their own. ;)

https://en.wikipedia.org/wiki/No_instruction_set_computing


And now some guy's about to poke y'all with a soldering iron (etc)... :-)


> When, then, was the last time you heard or read someone claiming that "assembler is inadequate", especially when it comes to security?

Get some drinks with OpenSSL (or BoringSSL or LibreSSL) hackers and ask them how they feel about perlasm.

> If my code has a hole, it will be noone's fault but my own.

If you're a one-person development team, you're not talking about the same scale of development the rest of this thread is about.


IMO it's yes and no.

YES: Because you're right. Smaller, more iterative changes that tighten up the language to allow for more predictable software (and environments like the kernel) can get us a long way ahead.

NO: Because even though you're very correct, there comes a point in any software stack where the expenses of rewriting the whole thing are vastly smaller than the expenses of iteratively reworking it.

I drifted away from C/C++ a long time ago -- around 10 years back. I personally got sick and tired of per-compiler quirks, different precompiler policies, and writing sed scripts to fix messed up preprocessed files, to name only 3 generic problems.

IMO I believe if something between Golang and Rust comes along and brings only their positives to the table, it'll be well worth rewriting (or making richer versions of) a huge amount of legacy tools in it -- especially if it brings extremely easy cross-compiling and strong runtime guarantees.

I don't know enough to claim either way, but in my eyes C has been around for a very long time and a good chunk of the complaints are still in place. I'm tempted to think the community has had its chance and that it's time something [somewhat] better replaced it.


In the commercial space, devs already have to rework their apps frequently to handle the curve-balls MS/Apple/Google etc. throw at them. If we had a C compiler with webkit-level dominance, and we made just one fix to it every cycle, that would be a big thing, and it would be manageable!

It's like, "Oh, to be C-2016 compliant I've got to get rid of all my calls to unsafe string functions." Search, replace, and some elbow-grease later, I've improved my program with a much lower likelihood of breaking functionality.
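For instance, one of those search-and-replaces might turn an overflow-prone sprintf into a bounded snprintf; a sketch (the wrapper name is made up):

```c
#include <stdio.h>
#include <stdbool.h>

/* Before: sprintf(buf, "%s", src) happily writes past the end of buf.
   After: snprintf never writes more than buflen bytes, and its return
   value tells you whether the result was truncated. */
static bool copy_str(char *buf, size_t buflen, const char *src)
{
    int needed = snprintf(buf, buflen, "%s", src);
    return needed >= 0 && (size_t)needed < buflen;  /* false if truncated */
}
```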


> devs already have to rework their apps frequently to handle the curve-balls MS/Apple/Google etc. throw at them. If we had a C compiler with webkit-level dominance, and we made just one fix to it every cycle, that would be a big thing, and it would be manageable! It's like, "Oh, to be C-2016 compliant […]

Fortunately, the web doesn't work the way you think it works! JavaScript written in 2005 still works today, and "not breaking the web" is the main reason JavaScript is so quirky. And the backward compatibility story of Microsoft Windows over the last 30 years is one of its bigger successes.

You can't just expect people to rewrite their whole code every now and then … And if you do, people just stop updating or even drop your platform.


Both Windows and JavaScript do break things, regularly. Small stuff, but stuff, and worst of all, it is usually in the service of some internecine pissing-contest instead of an incremental approach towards "better."

If we did the same thing with C, but with more high-minded goals, it would be a fine investment in the future.


Since I am no longer heavily invested in the C/C++ stack I have no choice but to agree with you.

My general point is that IMO the tech giants' patience towards C's persistent problems is long over and that's why we have Golang, Swift, Rust etc. Don't crucify me for the "persistent problems" thing please. ;)

I fully understand the need for a language as close to the bare metal hardware as possible but many people -- me included -- prefer a more stable and predictable environment with the price of less speed.

I believe there's no right or wrong side in this particular preference.


How do you define problem? In terms of consequences or mere presence of a bug? If it's consequences, is it plausible that you have had no problems because not enough people are interested in attacking your code? If you say you have never produced a memory/pointer related bug that got past you, then you probably are the rain man.


I mean that through strict use of valgrind and fuzzers, I've never had them find anything in my own code. I've also never had a hard-crash from anything in my own code. I attribute this to personal paranoia about input and indices.

Worst experiences I've had in that domain were due to inconsistencies in things like the Win32 API across different OS versions that were allegedly supposed to behave the same way.

Sometimes I think better API and language documentation would improve programs more than better languages.


Wouldn't you prefer to have the computer deal with the personal paranoia for you?

I love C dearly, but I can write so much faster in, say, Python, and that code is usually production-suitable. If I say "for line in open(sys.argv[1])", I don't have to spend 50 lines dealing with open() failing, or sys.argv being the wrong length, or allocating a buffer to read into and recombining lines longer than the buffer, or whatever. Ideally in Python I'd print better error messages, but it's fine if I forget; Python will take care of each of those problems and do something safe, if not user-friendly. If I forget any of those in C, it's emphatically not fine.

Rust - and many other higher-level compiled languages! - promise to bring that development speed and confidence back to the domains where C dominates.
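To make the comparison concrete, here's roughly what that one Python line costs in C (a POSIX sketch using getline; the function name is made up, and every step Python handles implicitly has to be checked by hand):

```c
#define _POSIX_C_SOURCE 200809L  /* for getline() */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Roughly `for line in open(sys.argv[1]): print(line, end='')`. */
static int print_lines(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f) {                 /* where Python raises, C hands you NULL */
        perror(path);
        return 1;
    }

    char *line = NULL;        /* getline grows this buffer as needed,  */
    size_t cap = 0;           /* so arbitrarily long lines are handled */
    ssize_t len;
    while ((len = getline(&line, &cap, f)) != -1)
        fwrite(line, 1, (size_t)len, stdout);

    free(line);               /* and it must be freed by hand */
    fclose(f);
    return 0;
}
```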


I do prefer it, and for new things, I love Rust and Golang and Clojure. I even kind-of like Python (aside from the blocks-by-indentation thing).

I'm just saying that for old things, staying the course will probably be more fruitful than fresh start in a different language.


That's why the spurred interest in providing more safety mechanisms in C and C++, which I believe is because of the increased competition in this realm from Rust, is a good thing. If we can get most of the way to what Rust provides in C and C++ through tooling and extensions, there's a path for existing source that can't or shouldn't be rewritten, and new projects get to make a choice of what they want to use.


I believe that most of the C++ safety stuff happened independently of Rust existing. A lot of it started before Rust was a big thing.


Sure, but I would argue that there's much more interest in it now that there's a credible alternative that does care about it. That is, it existed, and people used it, but now people might feel a little bit of a need to justify why they are using C or C++ when they didn't before, and so their confirmation bias might cause them to look more into those existing safety mechanisms, and maybe even use them.

Put another way, even if Rust were to disappear tomorrow, I think the net impact would have been very positive in that it exposed people to these concepts in a concrete way they weren't before, because they are optional in the C and C++ ecosystem.


I have seen memory errors in large C codebases produced by larger teams. I think C requires a certain level of discipline, and a lot of people working at large companies simply don't care enough.

From my experience, using a simple subset of C++ with things like shared_ptr reduces problems a lot.


>I've never had a (self-inflicted) problem in that area.

That you know of.


"(Self-inflicted)" is an important disclaimer, because it implies that you work mostly on your own from the metal up?

(Also a meta-note about HN discussion: we have years of people saying "there's a problem here" and one guy saying "it's not a problem for me", seemingly unable to see why anyone else has a problem. "Works for me" is not a mission statement.)


Nah. I have had to deal with memory issues in 3rd-party dependencies, so I'm not saying it never happens. I'm just saying I have much bigger problems in other areas that are mostly unaffected by "better" languages.


Maybe I missed it in the article, but it alludes to this: C is the least common denominator. It's not a great language, but it works everywhere and will work in the most restricted of environments.

As the article states, its resurgence is because of restricted small devices, but that's not because it's a great language. Debugging memory issues is horrible, and having strange things happen because of randomness between compilers is no fun.

I had a recent desire to get back to baremetal, but I didn't go back to C; I have too many battle scars. I went to Rust, and have never looked back or regretted it. The FFI between Rust and C is pretty easy too for any cases where I need it.


It seems that a few years ago several big name tech companies found themselves at the limit of what they could do with the C#/Java-style languages. But they weren't willing to go back to C/C++. Facebook started experimenting with D, Apple created Swift, Google created Go, Mozilla created Rust, Microsoft had a rumored systems language that never seemed to materialise, and even C++ is trying to get away from itself.

It is sort of looking like the industry as a whole is starting to see if there isn't a better way to do things and I think we will see how things shake out in the next five years or so.


I think this is a nice development :)

When I was doing my CS degree it was Java all the way.

I mean, today you can start a Haskell or OCaml project without people laughing at you.


I find your description to be the best one in the thread. It's also reflecting my exact motivation in dabbling with Golang.

Thank you.


For new projects on hardware that supports the Rust compiler, there are few reasons to choose C over Rust.

That said, take the kind of lifelong momentum that PHP has, and multiply it by a few orders of magnitude, and you have C. It will be a very long time before C is replaced by anything.


I'm sure I'll still be able to compile my c programs, without change, in 10 years.

Is rust willing to make that commitment? Or even 5 years? 2?


We have very strong backwards compatibility guarantees. We do reserve the right to make soundness fixes, but they must be trivial to fix in user code. Otherwise, we don't break backwards compatibility. We have had one or two of those in the first year and a half of post-1.0 Rust, but virtually all code that did not hit one of those issues still compiles today. Newer iterations of the C standard have also introduced small backwards incompatibilities, it's just how the world works.

Our intention is to have that kind of long-term stability, yes, and we put a lot of work into making sure that it happens.


I'm glad to hear that -- I've had bad experiences with some other languages moving too fast and breaking things. I understand that's great for people developing every day, but I have several systems I dip in and out of.

Now I know rust has entered a stable phase, I might give it a look!


Trust me; we hear you. Pre-1.0 Rust was not pleasant, and it was even my job to keep up with the changes.

For more on how this works:

https://blog.rust-lang.org/2014/10/30/Stability.html

https://github.com/rust-lang/rfcs/blob/master/text/1122-lang...

https://github.com/rust-lang/rfcs/blob/master/text/1105-api-...

These are the RFCs and policies we came up with when cutting 1.0. I haven't read them in a while, but there haven't been any changes. Some of the text and/or examples might be reflective of the time they were written.

Oh, and one more thing: we have a tool, "crater", that we run every so often against every open source package in the ecosystem to make sure things don't break. Some of the most popular packages are built on every commit, as well.


Yes. But if people, like me, choose to go to a safer language that reduces cognitive load around bugs etc., it's easy enough to use Rust's FFI to utilize any existing C libraries out there.


The other big advantage of C is that multiple compilers for it exist that are good enough to use in real life, for a wide variety of toolchains, and it's simple enough that it's possible to write a compiler for it (if not necessarily a very good one) as a hobby project. We can be reasonably sure that C will continue to exist.

By comparison, C++ has precisely two working open source compilers --- clang and gcc --- and Rust has one: if you commit to using Rust, you're also committing yourself to an LLVM-based monoculture. The chances of the entire Rust team being simultaneously hit by a bus is low... but it's something that needs to be factored into a product assessment.


> Rust has one: if you commit to using Rust, you're also committing yourself to an LLVM-based monoculture

Rust currently has only one backend, which is LLVM, but there is a working project to build an alternative one based on the Cretonne compiler[1] for «debug» builds, and there might also be a GCC backend for rustc one day if people are interested enough to build it. I'm not aware of any fundamental difficulties that would prevent it.

[1]: https://internals.rust-lang.org/t/possible-alternative-compi...


(The reason Cretonne is debug-only is that its raison d'être is WASM, so it won't be doing (many) optimizations. It will help with rapid prototyping, however, and perhaps enable things like a Rust interpreter.)


That argument seems akin to avoiding C# because only MS supports it, or avoiding Python because CPython is the only serious implementation.

The language with the most sensible implementations seems to be javascript, I guess.


It can be rational to develop projects on new languages. It can also be rational to consider the risk posed by the ecosystem you are plugging into. For example, it would be rational to consider the risk to a new project of basing itself on a language that is only supported by one corporation. (Imagine if Oracle had become a single point of failure for Java.)

That said, Microsoft has very deep pockets, shows every indication of continuing to promote and use .NET extensively, and has also built up a lot of commercial buy-in over the 15 years or so that .NET has existed, which could carry it forward a while or fuel alternate implementations if something catastrophic happened.

PyPy is a serious implementation of Python. There are others and have been others over the years. While the corporate buy-in isn't the same as .NET or Java, there is a large amount of open source funding and buy-in from very diverse sources. Python is older than .NET as well.

Maybe the risk of using Rust is acceptable even if a bit larger, or the added risk is offset by Rust's good qualities - nonetheless, it's a very new language with no Microsoft-scale company behind it. Not precisely comparable.

It doesn't matter if Rust is perceived as having less ecosystem risk than Python or .NET, if in (say) 10 years it doesn't make significant headway against C, which is what it actually has to compete with. Here we could consider the examples of Pascal and Lisp - they competed with C, were arguably much better languages, and they lost anyway.

It's something worth considering if your dreams are founded on Rust.


>That argument seems akin to avoiding C# because only MS supports it

Or avoiding Flash because only Adobe supports it, or SilverLight/VB because only MS supports it, or Objective C because only Apple supports it.


Many popular languages nowadays essentially have just one viable open source toolchain: Go has the Go toolchain (gccgo being niche), Python 3 has CPython (Jython and IronPython only support Python 2.7, and PyPy is not up-to-date), Swift has the apple toolchain, Haskell has GHC (none of the other compilers is widely used).

That's not necessarily a bad thing since the compilers are all open source and you can port them to whatever platform you want anyway. C managed to spawn a lot of adequate compilers, but the language itself has seen very few changes since it was standardized (27 years ago), while all the other contenders (including C++!) see a major release at least every 2-3 years, and up to several times a year (for Rust).

You can't have that pace of changes and still maintain compatibility between all the different implementations.


>Many popular languages nowadays essentially have just one viable open source toolchain: Go has the Go toolchain (gccgo being niche), Python 3 has CPython (Jython and IronPython only support Python 2.7, and PyPy is not up-to-date), Swift has the apple toolchain, Haskell has GHC (none of the other compilers is widely used).

The flip side is that having multiple providers prevents the BDFL from playing too hard with the language.

If there was an alternate Python toolchain, Python3 wouldn't have happened.


The question is not so much how many working compilers are there, but how many rely on the language.

Practically, PHP has one interpreter, but if they go out, you can be assured that someone will fork and maintain it.

On the other hand, if MyToyLangOnGitHubGraduateProject goes belly up, you're stuck maintaining it yourself.


If you never adopt new technology there is no progress. C++ was dissed, but offered enough ergonomics that it gained adoption. Java succeeded in giving people a platform on which they could distribute software while relying on a consistent runtime. Rust is new compared to these, but again offers significant ergonomics over C and C++, and seems to be delivering on platform independence pretty well (better than C/C++ imo); it really is the best of both worlds.

What's also amazing is that Rust is growing through sheer love of the language. There is no one pushing it -- people adopt it for the joy of hacking. It is like C in that way.


> As the article states, it's resurgence is bc of restricted small devices, but that's not because it's a great language.

To be fair, it's an amazing language, but if you're coding to get things done with a deadline (or with an unknown set of specs), of course you should go for the nailgun instead of making custom hammers.


Syntactically I agree with you. I love the C style, and I pretty much only use C-derived languages for a reason. I think it was great for its time...


Back in the mid 80s, the CS program at my Uni switched from Pascal to C as the standard course language. C always felt like a crippled death trap in comparison, even if real work would have had to have been done in Modula, rather than actual Pascal. C has a nice data literal mechanism, and return/break, but all the things you gave up: array bounds checking; references that couldn't be null; subrange types; non-int enums; arbitrary array indices; sets; nested subroutines.

How many people even realize that C++ (formerly "C, with classes") is largely just a rehash of the Simula preprocessor (for Algol) from the 60s?

That said, I think the Unix creators meant for C to be an assembly alternative bootstrap language, not the primary development tool for almost all applications.

In a sense, the rise of microcomputers was a disaster for programming language development, or at least a mixed bag in the short run.


Having worked in both, Pascal (at least in the 1980s) felt like trying to work inside a straitjacket.

Perhaps the worst example: The size of an array was part of the type of the array. This meant that you couldn't access outside the array. That was good. It also meant that you couldn't have a variable-sized array, because there was no possible type for it to have! That was really problematic at one particular point when we needed to deal with a variable-sized 2D array. We wound up having to create the maximum size we had memory for, and only use the subset that we needed, which to this day I consider a kludge to deal with a language limitation.

For more, Google for "Why Pascal Is Not My Favorite Programming Language". Despite being written by Kernighan, it's not a Pascal-vs-C evaluation - it's more Pascal-vs-Ratfor, with Pascal coming off rather badly for real-world use.

For university teaching, though, Pascal was probably better...


The fixed size array limitation of the 1970 CDC Cyber teaching version usually gets an extension in the (Modula or Modula inspired) versions that people actually use. E.g. -

http://www.freepascal.org/docs-html/ref/refsu69.html

https://www.modula2.org/sb/m2_extensions.htm

Yes, I've read "... Not My Favorite ..." (and "Worse is Better", and "Unix ... C ... Hoax")


Yes, I know there was an extension. Unfortunately, the one I had to use didn't have it, so I was trapped.


I'd suggest that it's still "its time", but not for daily use for most people.

Hardware is cheap and fast and there are safer languages (I hear Rust is good; I'd use Java if I wasn't carpal-tunnel concerned), and often a VHLL is more than sufficient for time/speed trade-offs or even obligatory (e.g. Python, client-JS respectively).

But that said, C is _great_ as a teaching/learning language -- not much magic, small -- as a "I have very specific/strict goals" tool, and as a very well-tested underpinning for making languages that are safer/magical (e.g. CPython).

Would I pay 60 people to write C for a .com/.io startup where 10 people writing NodeJS could get something out the door faster? No, but I'd love it if a couple of those 10 had a solid C background so when _shit gets weird_ or a framework's magic starts barfing on systemd's magic, the engineering team doesn't decide it's time to switch to golang.


I have also been looking to get back into bare metal programming professionally (I miss the torture). Did you see an immediate switch to rust or is this a rare unicorn project in a sea of C/ASM code for embedded devices?


I've been writing my own ground up stuff. Nothing really on the hardware side for me to publish, but I have been writing a few articles on my DNS project which I have some plans for in the small device realm at some point (not free of allocations yet though).

There are many articles by others out there for getting going with Rust on baremetal, and with Rustup it's now easy to cross compile from your dev computer to the target device.


fosho. Will look into those, but I guess my question is if it's worth practicing some development for embedded devices in rust to switch jobs or if most are still using C and I shouldn't waste the time.

EDIT: if anyone can answer this that would be so awesome :)!


Compared to C, the number of employers looking for rust developers is so close to 0 that it's essentially a rounding error.

I really like rust, and I think it's worth learning, but it is definitely in the early adopter stage and not something that has really made a dent in the embedded industry, let alone taken over.


Yeah, if you're looking for employment opportunities today, or in the next couple of years, learn C.

That said, if you want to be a better C programmer (and you already know C): learn and write Rust, and figure out what sorts of things you can't write in Rust and why. Once you learn how to make the Rust compiler's safety checks pass without it yelling at you all the time, you'll know how to write good code in a similar language without the safety checks.


I'll second my sibling comment in saying that C (and C++!) will be around for a long time, and is epically easier to get hired with if you're trying to break into the industry. The C ecosystem just has too many advantages at this point (i.e. not just existing code but also number of supported platforms, etc.).

Also, a significant fraction of embedded code is going to be "unsafe" Rust by definition: drivers performing volatile load/stores on memory mapped hardware. In those scenarios, defect mitigation techniques are a matter of system architecture (MPUs, pre-empting deadline-based task schedulers) rather than being language-specific. Arguably even Rust provides you with no protection against the most devious bugs (memory barrier usage, cache-DMA interactions).
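A minimal sketch of the kind of volatile register access meant here. On real hardware the register would live at a fixed address (the address and register name below are hypothetical); a local variable stands in so the sketch can actually run:

```rust
use core::ptr::{read_volatile, write_volatile};

fn main() {
    // Stand-in for a memory-mapped control register; on hardware this
    // would be something like `0x4002_0000 as *mut u32`.
    let mut fake_register: u32 = 0;
    let reg = &mut fake_register as *mut u32;

    // Volatile accesses can't be elided or reordered by the compiler.
    // This is why driver code is `unsafe` by definition: the compiler
    // cannot prove anything about what lives behind a raw pointer.
    unsafe {
        write_volatile(reg, 0b1); // set a hypothetical "enable" bit
        let status = read_volatile(reg);
        println!("register = {}", status);
    }
}
```

The point of the comment stands: the `unsafe` block is unavoidable here, and Rust's contribution is keeping it small and marked rather than eliminating it.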


The machine is ultimately unsafe. Rust's strength is to be able to encapsulate that unsafety to be a small part of your code, and have most of your code be safe. This is even true in an embedded or osdev context, though the percentage is higher than in an application context.


I am new to programming (well, new in the sense that I have been doing it sparsely for years) and have only written code for microcontrollers. Is Rust a good replacement for C in that domain?

I don't know much about other languages, but it seems C is pretty well tailored for doing all bit-level stuff. Is rust the same way?


"All" modern compiled languages will have bit-manipulation. As long as a language can compile for your microcontroller, you're golden to use something higher level.


I love C, I really don't see what the fuss is about 'replacing' it, really. It's a sharp tool, requiring care and attention.

I did C++ for about 20 years, and gradually reverted to C, simply because of the footprint and the ease of maintaining a C codebase compared to a C++ codebase. C++ codebase's footprint will /explode/ very quickly, while it's really, really hard to do that with C.

You can't have 'shockwave' effects in C codebases; you can't have an intern adding a subtle 2 liner to your codebase in an obscure base class and make the footprint of your application explode. (been there...). It's a lot easier to review, test and maintain over a number of years.

Also, I've grown up considerably since I believed object-oriented programming was THE way to solve every programming problem; I've seen a lot of cases where you'd have a spaghetti plate of classes to solve a problem that you could in fact have solved in a 15-line static function.

C -- especially C99 with its extensions -- is nice, lightweight, and hard to mess around with. It's actually a sharp tool, but at least it's a straight sharp tool. You know where it's going and how it can hurt you. That's not the case for many of the other solutions, which just add bloat to try to hide the fact that they cut about as sharply -- they'll get you in the back of the head, later :-)

I keep a keen eye on what's going on with Rust and Go in particular, but it'll be a long time still for me to commit to any of the new kids on the block. Right at this minute I imagine myself easily having to rage-recode it in C at some critical point because of some unforeseen effect.

My only regret with C is that the C committee is completely out there. C11 doesn't solve any of the problems that would be useful for 99% of C programming. Like, rolling in some of the GNU extensions that have been around for 25+ years (case X...Y comes to mind!)


> I really don't see what the fuss is about about 'replacing' it really. It's a sharp tool, requiring care and attention.

I really like the take on the "sharp tools" analogy here: https://www.schneems.com/2016/08/16/sharp-tools.html . Your tool might be sharp, but that doesn't mean that you can't make it safer to use.


The C/C++ toolchains have been working to add guards and guides to their sharp tools. Things like the various sanitizers in the LLVM toolset have already meant that a lot of bugs that would have been released are instead fixed beforehand. You also have things like OpenBSD's implementation of malloc, which helps in the software feedback loop by finding logic errors such as use-after-free. C still requires attention, but there are a lot of places where it's now harder to cut yourself.


I personally use libtalloc [0] for my allocations in C. Small footprint, excellent feature set, and it has heap checks, reference counting and all the gizmos you'd like in an allocation library. It's actually awesome, I highly recommend it.

For C 'gotcha' cleanups, you can use cppcheck of course (highly recommended), but there's also 'scan-build', the static analyzer from clang.

[0]: https://talloc.samba.org/talloc/doc/html/index.html


Love talloc: just taking the opportunity to plug ccan/tal which I wrote after hacking on talloc for a bit:

http://ccodearchive.net/info/tal.html


Cool, why the parallel project, if I may ask? What did you find you needed 'better' than talloc? :-)


Case in point:

https://www.youtube.com/watch?v=eiYoBbEZwlk

Maybe we need to use your sharp tools point with that video. It drives the point home with even experienced people losing fingers to their tools followed by tech modified to prevent it. The demo also never gets old. Just too badass.


That thing looks like it could kill a man if dropped from sufficient height. On the other hand, if I envision it without a case, I'm still not seeing a terribly dangerous object.

> This design is an evolution of planes over centuries of use

What did early ones look like, then? Was getting hurt by them a problem at all?

I'm not debating that main point, that in many cases, safety and usefulness are orthogonal. But that example is kind of worse than none.


It's correct to say that C is a more explicit programming language than C++: each line of code won't have large hidden effects on performance or code size.

The consequence is having to painstakingly code everything at a low level of detail though, with only macros and its limited functions to rely on for code reuse. This would be fine for low-level embedded work, but anything other than that is simply painful, unproductive and in my opinion not fun at all. I shudder at the thought of having to manually free resources, to use the embarrassing C strings and arrays, or having to write dozens or hundreds of lines of code to do one simple thing that C++ can do more safely and also in a highly-reusable way.

C is inadequate and has been for a long time. Its status of mythical, hardcore programming language continues to protect it, but slowly people are becoming more outraged at its inadequacies, as they should be. That's what the fuss is about.


Indeed. It's certainly possible to make a language with safety checks that could be turned off in the final version which then has the same performance, but has built in help during development, as well as less cryptic notation. As stated elsewhere, Pascal/Modula (for example) has some very nice features and is written at about the same level.

As for small devices in this day and age, it's not like we can't use cross compilers hosted on desktop computers, since that what phone/tablet systems already do.


> I love C, I really don't see what the fuss is about about 'replacing' it really. It's a sharp tool, requiring care and attention.

You just answered your own question: most people don't apply sufficient care and attention. So should we leave sharp tools lying around and encourage their use knowing they're going to be abused and cause serious problems?


> you can't have an intern adding a subtle 2 liner to your codebase in an obscure base class and make the footprint of your application explode.

What do you mean by "footprint" here? Because if it's heap space or CPU, you can certainly have it.


I don't know, if my large, commercial application suddenly starts taking twice the memory, or takes twice the time it used to take to start, can I 'have it'?

And if I can 'have' that, where do I stop?

That was my principal problem with C++ in the end; you'd get some people in the codebase, and then it was endless fear of someone screwing it up, or deciding that it was such a good idea to add 'boost' thingies to the application because it was SO much nicer. And added 20MB to the installer size (let alone memory footprint).

I remember one day when I investigated why the new version of our commercial application was taking about a minute to start up while it used to take about 15 seconds...

After a long time, pain and effort, it turned out that someone had changed one of the String class accessors to take its String parameter by value instead of by reference. A ONE-character source code difference, but it made the millions of Strings created at startup get duplicated, converted back and forth between UTF-16 and UTF-8, destructed, etc. etc. etc.


It's too late for your project, but in case anyone else has similar issues, some things can help:

* coding standards (e.g: boost X is forbidden). Should be checked by the CI with clang-format + e.g. a Python script.

* code review

* static analysis (e.g: clang-tidy)

* performance-focused integration tests


Oh, now I get your point.

Yes, C++ is much easier to mess with.


C++ is a horrible language


Given that clock speeds are pooping out, this makes total sense. Things already seem to be moving to an even lower level when performance really matters (shaders, FPGAs, etc). C looks fairly forgiving after implementing things in verilog.

For the "why not X" crowd, the problem is that there are very few Xs out there with 30+ years of code to build off of. That, and it's almost a psychosis among us CS grads to overestimate the magic that "better" languages are capable of.


I don't hear much about performance problems causing major trouble in popular software and making everyone patch things in haste.

For unsafe memory management errors leading to RCEs it's a different story. Guess what language are the patches usually in.


Not everyone has the same usage case. For general user applications, yeah, performance isn't that big of a deal. That's not the entirety of computing though.

Some people do things where performance is crucial. This is probably why Intel is moving into the FPGA space and Google is making their own accelerators for machine learning.


Absolutely. People who write games or audio / video software sometimes even do hand-written assembly fragments.

The very high-performance things are a relatively narrow area, and an area where C can easily lose to e.g. Fortran.

Also, for ML stuff you likely don't have a huge entrenched C codebase yet. You are freer to choose a more expressive language, and only hand-optimize / rewrite in C a small amount of hot paths.


Regardless of what glue language you use to write ML stuff, if you follow it down to its core, most of it is going to use a BLAS library to do matrix multiplication. That's where the heavy lifting is. And BLAS libraries are in C/C++ or Fortran. For example, OpenBLAS or Intel MKL.


BLAS Level 1 was released in 1979, thirty seven years ago.

There was no other comparable language to implement it in besides Fortran. C was a newcomer at the time, like Rust currently is.

The amount of optimization and edge-case handling work that went into BLAS since then is enormous.

BTW, Fortran, being mostly aliasing-free, is a safer language than C, and more optimization-friendly.


I thought Fortran's aliasing-free rule was just an assumption the compiler makes, not something that it actually enforced... so you could very easily write code that contained aliasing bugs.


> For general user applications, yeah, performance isn't that big of a deal.

Maybe if you use recent, powerful machines and don't particularly care about energy efficiency. In my experience with less powerful machines, the applications which tend to be noticeably slow are exactly the widely used 'general user applications': of the ones I use, Firefox, Chromium, LibreOffice and Thunderbird.


In fact, C (with SDL2) seems perfect for writing a retro video game. Memory leaks aren't usually a concern, because most memory is allocated up-front to save time and give very predictable and consistent performance. And most video game logic is very simple to do using simple functions, pointers, and structs, without the need for higher-level constructs like closures and GC which make a lot more sense for a mobile TODO-list app than for a video game. At least that's my experience, but obligatory disclaimer: I've never finished writing a video game (yet!) :D


You don't hear about it because one can't "patch things in haste" to fix performance issues caused by choosing the wrong programming language for the problem at hand.

Instead one hears complaints about Python's GIL, hears about people moving to golang, about this new thing called Rust, about a C and C++ resurgence and so on.


"Guess what language are the patches usually in." Patches are most likely to be in the languages used most in the most popular projects. That's just the rational prior.


This assumes that all languages have an equal propensity for vulnerabilities. Many people would challenge this assumption.


>I don't hear much about performance problems causing major trouble in popular software and making everyone patch things in haste.

How did nginx unseat apache?


I keep changing my mind about C/C++.

I loved my junior days programming in C & C++. It was really fun writing your own logging framework, container classes etc. OTOH, spending weeks looking for root causes of memory problems was fun for the first few years, then became an annoyance.

Higher level languages are much more productive -- which was initially great, but now I feel more like a sysadmin than a programmer. It's a lot of researching, wiring up and configuring libraries. Probably produces better results, but not as fun as hitting the metal.

So sometimes I wish I could go back to the old days, but my nostalgia is probably rose tinted. I'd love to do it, but I don't think it's commercially productive enough, so I'm skeptical of talk of a resurgence.


I find myself doing most of my hobby programming in C. I think it kind of evokes the same thing other hobbyists (like woodworkers) get by building something from scratch, even though there are easier ways to get close to the same end result. There's just something fulfilling about building something with your own two hands, as it were, and seeing it work.

That being said, I don't think I'd like to do C professionally, and I'll happily stick with languages that manage my memory and have all kinds of libraries that give me "free" stuff.


Agreed. My first choice for a project on *nix is usually in C: it feels like real coding rather than including half the packages/modules from 'pick your glue language here' where the opacity is blinding. But many times I'll stop after working through the first data structures and IPC elements and decide to go higher level in the interest of time and energy (and for other (younger) people being able to maintain it).


I love the analogy to woodworking. When I am working on game dev, I prefer to use an old SDL-based engine I developed in C++ instead of some of the new popular frameworks which are more productive and powerful. I just like tinkering and that "built from scratch" feeling for my hobby projects.


Try Rust maybe?

It has a different learning curve from C, as initially you'll have a hard time getting programs to compile at all, but from that point the ergonomics are much better and you don't ever have to deal with memory bugs unless you're developing a low-level library.


I second this notion. I'm quite productive in Rust now and can string together web services quickly. (To date I've written a text to speech engine, a laser projector renderer, and some image processing stuff.)

Rust is so awesome. Unlike Python or Ruby it's typesafe at compile time. Unlike Java and Go, it has no dumb GC, and unlike C and C++ it's memory safe. At compile time!

Programs can be distributed as a single binary, which makes deployment a breeze.

If that weren't enough, Rust innovates beyond every language I know of with its compile time threadsafety and fucking sweet package manager. (Cargo is seriously the best package manager I've ever used--perhaps the best in existence right now.)

Give Rust a try. It's freaking awesome.

(I'll buy you a coffee if you do and don't like it!)


Rust certainly seems to have a very enthusiastic (albeit small) user base. I'm someone who usually feels comfortable rapidly switching between a wide variety of languages, including unpopular ones (D is one of my favorites), but I've never seriously tried Rust. There are two reasons for this.

First (and this is the less important one), the syntax seems second only to Java in verbosity and often seems different for no readily evident reason. Secondly, virtually any time someone's advocating for Rust, you get a long lecture about how memory safety is the most important thing in the world and anyone who disagrees is so hopelessly backward that they should be disqualified from writing any code at all. Well, it happens to be very low on my list of concerns. In fact, for most of the low level code I write, memory safety, ownership tracking, borrow checking, etc. are wholly irrelevant. So, given that, should I look at Rust (as compared to C++ and D, which I use and enjoy)? Does it have meaningful advantages that compensate for its syntax, immaturity and performance limitations (vectorization, etc.)?

I've asked a few Rust users this before, but never really gotten a satisfactory answer that didn't involve the same memory/thread safety argument. Give me one and I'll happily give Rust a try, no coffee necessary. :)


Can you give examples of C++ code which would not require you to keep the ownership or borrowing straight in your head, and which a port to Rust would cause you unnecessary verbosity?

I usually find that if my C++ code required little to no mental tracking of ownership, borrowing, or lifetimes, the Rust was straightforward. And if the Rust was complex, the C++ required me to keep too much in my head (or even had bugs).


Technically, all code requires you to keep track of ownership and borrowing. In cases where these are intellectually trivial concerns (which, I would argue, are by far the most common cases), Rust's forcibly explicit approach is, if you accept the assumption of the previous sentence, unnecessarily verbose by definition.

I cannot address the latter part of your argument (the more important part, I think), however, because as I stated, I have not invested serious effort in Rust. Intuitively, from the code and documentation I've seen, I'd be wary of trying to implement, say, highly performant (emphasis on the performance) cyclic graphs in Rust rather than C++, but that's merely an impression, hence my interest in learning more. And certainly, whatever the difficulty, the likelihood of bugs in C++ would be higher.


I don't find rust verbose unless I am dealing with complex situations though; that was my point.

Move-by-default, elided lifetimes on many parameters, etc., are all verbosity-free.


https://news.ycombinator.com/item?id=13265758 has a lot of discussion on this topic.

Not your parent, but instead of writing my own thoughts here, I think I'll write a blog post tomorrow :)


FWIW, I've seen people praise traits, but never seen any in depth discussion of their advantages and limitations with respect to, say, C++ polymorphism (both dynamic and static, via CRTP etc. for zero cost OOP) and trait classes. Similarly, algebraic types are frequently mentioned, but only with simple matching examples and scarce discussion of the limitations. I'm sure other aspects of the language I'm not immediately aware of also deserve more discussion.

Anyway, looking forward to your post.


Ownership helps me design code which is decoupled and easy to refactor.

Rust traits are more powerful, expressive, and maintainable than other polymorphism systems like inheritance.


Idk if I share your enthusiasm, but Rust is pretty cool. It certainly has its uses, though for bare metal I'd still say go with C (and ASM) if using Atmel's AVR chips. At least until Rust gets a working compiler for AVR. Seems it is getting ever closer: https://github.com/avr-rust/rust


How developed is the ecosystem? One thing I enjoy about .net is the huge number of free and commercial libraries, the great IDE, and the ability to code a very broad range of applications with a single language (website, CLI, GUI, service, etc). Is Rust comparable?


Not at all. Rust has a long, long road ahead before it has an ecosystem comparable to Java or .NET. Those systems have literally man-centuries of work poured into them.

That said, I still predict Rust becoming the most important language of the decade, if not the decades that follow, simply due to the strength of its foundation. Rust solves problems that no other language solves (real problems with actual costs) and fits into a very interesting space where it can eat away at the lethargic incumbents.

Rust is a web service language [1]. A desktop application language [2]. A CLI tooling language [3]. A core system library language. A game development language (eventually) [4]. It has a bright future ahead in so many different domains, and the tooling and libraries will come and reinforce this.

[1] Hyper, Iron

[2] Servo

[3] Ripgrep

[4] Piston


I suggest a thought experiment: take a few random projects you have or plan to work on, go to [1] and search for the kind of packages you need and see what's missing. It's getting better all the time, and you'll find most straightforward projects are totally doable in Rust.

[1] https://crates.io/


That does sound freaking awesome, I really will have to try it out.


The borrow checker is hard to deal with at first, but after you power through the initial difficulty it becomes second nature. I rarely have to fight the compiler, and when I do it's because I'm trying to do something stupid.

In a way I'd liken learning Rust to learning Vim. There is an upfront cost, but the payoff is going to be huge.


> So sometimes I wish I could go back to the old days but my nostalgia is probably rose tinted.

It seems like nostalgia is a big thing over the past few years. Old game emulators, people getting back into C64 or whatever. I guess it's fun for some people. I've considered playing with such stuff, but I know I'd miss some of the real advances we've made since then.

But my major projects at work are in C. My previous job was about 50% C, 50% scripting languages. For all C's deficiencies it's a known quantity and does its job well enough. And knowing C well is a rare enough skill to be valuable, apparently.

When I don't need C I'd much rather reach for something like ocaml/haskell, a lisp, erlang/elixir, or something. Basically I ignore all the "C replacement" languages because I can either use C or pick something way better.


The tools and stdlib improved a ton too. I had the address sanitizer and undefined behavior sanitizer on by default for a while, and that helped a lot with memory issues. Types such as int32_t, intptr_t, and ptrdiff_t make it easier to be confident about your pointer math as well.


Well, C and C++ are very different languages. But going forward, this long time C programmer is writing in Rust unless required by exigencies beyond my control. A gun pointed at my head or writing an LLVM backend qualify. Not much else.


The reality is that anything you do for the first time is fun, and then gets boring.


C has this mystique of being hardcore, but it's not really that hard. I actually think languages like Ruby are more difficult because they hide so many (important) details from you. I would agree that C can be inconvenient, but it's not hard.


C's syntax is simple, sure. But there are so many little things that can bite you in the ass very easily, which is why most people think it's hard. For production code, if you don't have a copy of the relevant standard, and aren't able to use it to find your answers, you shouldn't be working on that code.

Think of it like this: in Python you could be a beginner and still write effective code to accomplish your goal. It might be slow, but you also avoid having to deal with a great many subtleties; in C, you could very well need to be high-intermediate to write the same code bug-free.

Faster? Most likely, but if it's error-prone, does speed really matter?

That's my belief anyway, ymmv.


To reinforce this, I still believe C has one of the most elegant syntaxes for a "portable assembler". But there are many dark corners of C; take pointers, for example: https://kristerw.blogspot.com/2016/03/c-pointers-are-not-har...


This is an interesting read, thanks. However I do not understand the opening sentence:

"Pointers in the C language are more abstract than pointers in the hardware, and the compiler may surprise developers that think that pointers in C work in the same way as pointers in the CPU."

What are pointers in the hardware? I am not sure what is being said. Any ideas?


> What are pointers in the hardware?

Under current "flat address space" architectures, a pointer in the hardware is nothing more than a memory address, represented as an integer. That is, a pointer with value 0x12345678 is the address of the memory 0x12345678 bytes above the start of the virtual address space.

In the C language, pointers are more abstract: they have to point to either within an object, or one past the end of an object. This is because there are architectures like segmented architectures, where a pointer has two parts (segment and offset) and cannot be treated just like an integer, and the same C program should in theory be able to also run in these architectures without change.


Thanks for pointing out the alternative of segmented memory, this now makes sense. Cheers.


If you need a "copy of whatever relevant standard", then you're in some fairly deep trouble. Use the 10-40% of the language that works all the time, and you'll have no such trouble. Use the language such that constraints are available to be checked against ( i.e. , no blind pointers ) and you'll be healthier.


I think that when people say "C is hard", what they usually mean is "writing portable bug-free code with C is hard".


At least for some, "C is hard" translates into "pointers and their syntax are difficult to grok, especially around arrays and structs".

The other thing I've commonly seen is the transition from a language that allows arbitrary data in a "string" variable. Null-terminated strings are no fun in comparison, and passing buffers around is more work.


That's because the dominant ideas about what "portable" means are pretty bad. To wit - the way the Linux kernel does it isn't my first choice...


Interestingly, one can substitute Ada/SPARK for C in a lot of this article on "C's benefits" with a huge boost on the safety & maintenance side. For cutting-edge work, one project on 8-bit micros used the ATS language for its combo of safety & performance. OcaPIC lets tinkerers use an OCaml subset for algorithms if performance doesn't preclude it. People are also getting a combo of high-level and C benefits by using DSLs embedded in e.g. Haskell that output C. The Atom, Ivory, and Tower languages are examples of this.

The only thing that's uniquely an advantage of C in the article is understanding legacy code in popular projects. It's mostly in C for historical & social reasons. Mastering C is a great help there. Although, one could still use Ada (Ada-to-C compiler) or DSL approaches above with better results when writing code. Maintainers won't accept that in popular projects, though. So, back to knowing & writing good C for this sort of thing.


GHC has deprecated the C backend since 7.0 (just FYI; the choice between the LLVM and Cmm backends is more nuanced).


The DSLs generate it themselves, I believe. Here's Ivory, for instance:

http://ivorylang.org/ivory-introduction.html


>The only thing that's uniquely an advantage of C in the article is understanding legacy code in popular projects.

As much as I love C (despite its flaws), I have to agree here: Like drinking, C is only fun if you do it socially. If you do it nonsocially at all hours, your friends might begin to distance themselves from you. Or hold an intervention.


So... Ada is better, and anyone could use it where they use C - and in fact could use it to generate C. But nobody does.

Is that your point?


There was recently a discussion about Ada here, and it seemed really nice, which raises an obvious question: if it's so nice, why is no one using it outside the circle of people already using it? So, I gave it a try. Ada is a wonderful, WONDERFUL, language. Albeit a bit verbose. Here comes the catch, though. Everything related to Ada seems to be tied to AdaCore and proprietary tooling. There's a path for you to do it without (still tied to AdaCore, though), but it's not as obvious and not as widespread. Kind of like Common Lisp in ye olde Franz days.

C is great too. C99 especially so. C11 threads are good too. C2x seems to be on a good path as well; here's its charter: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2086.htm

Only thing I would add to C as a language or in standard libraries is some kind of (better) string and UTF handling. ICU4C is coolio, but too much. Antirez's SDS looks cool, but I haven't used it, so I don't know from practice how/if it works with UTF and if it's great in production.


I like Ada. It's nice. It's actually beyond nice. I don't like working at/for the places that use it.

Just in terms of "I'm used to it, and it (mostly) works everywhere I want it to be", C89 + your UTF-handling would be super. I can (and have) construct support for CJK if/when needed, but, gosh, at this point, I wouldn't mind relegating it to some standard run-time.


"Everything related to Ada seems to be tied to Adacore and proprietary toolings. There's a path for you to do it without (still tied to Adacore though), but not as obvious and not as widespread. Kind of like Common Lisp in ye olde Franz days."

That's a fair criticism. The comparison to Franz CL seems apt, too. The good news about this risk is that AdaCore at least FOSS'd most of the key stuff. The compiler also has a rigorous validation suite that could be used for an alternate one. I've always thought it would be better to redo the compiler in Scheme, OCaml, or Haskell anyway, to use all the wonderful tooling they have for abstraction, assurance of correctness, and maintenance. Maybe a front-end that then gets run through LLVM, as is getting more common.


You hit the point I failed to emphasize. At least AdaCore's stuff is FOSS'd. Well, a version or two behind at least. In contrast with Allegro, that's a huge point in its favor. Also, at least someone is actively working on the FOSS side, even though it's (mostly) only one company. That's kind of the only major downside I saw with Ada when I tried to answer for myself the question of why no one is using it outside of the people already using it. The other part-answer would probably be that it's not in fashion, but I'm not concerning myself with that.


I remember that I loved reading most of what was on Allegro's feature page. Especially AllegroCache, where you put OOP data in an OOP database instead of the popular round-peg, square-hole arrangement of an RDBMS & ORM. They throw in a Prolog for queries and their own little SourceForge of libraries. Kept it proprietary at a cost.

It just about seemed worth the money, albeit I'd keep a FOSS variant of my code on CLISP or something in parallel, just in case. ;) Then I saw the real anti-FOSS part: royalties. They charge royalties!? In (year here), in software!? Their dependence model is reminiscent of AdaCore, but a financial cut of development and distribution like that can only be topped by Microsoft, IBM, and Apple. ;)


Those royalties, man. That's some next-level Oracle stuff they had going on. I always wondered whether CL would have had more adoption if Franz had abandoned those and sold tools only. I don't know how Mirai pulled it off. They were using Allegro, as far as I remember. They eventually went bust, but I bet not due to royalties, hah.


The point is that many benefits ascribed to C to justify use of C aren't unique to C and therefore don't immediately justify it. Then I corroborate that with several examples, including Ada. You got that much right.

The "nobody does" part seems off given the number of customers companies like AdaCore have and how much of that is recurring business. Safety-critical industries like tools that help software work correctly from the start. A significant chunk, especially in aerospace, use Ada.


C was designed in a time when codebases were much, much smaller than codebases today; when your project got big enough to be unwieldy in C, you embedded a Lisp or Tcl interpreter and used it to string together small C utilities.

As I Get Older™, I'm starting to agree with the Suckless[1] people that the small-project way is better, and that one of the reasons it's better is that C still works for it. Projects that are small enough for a single developer to understand have, in the nature of things, fewer problems. Yes, that would mean a change in what we expect software to do and how we expect it to behave, but that changes all the time anyway, and I think a pendulum swing back in the minimalist direction would be a good thing.

[1] http://suckless.org


At one point, there was a pejorative in circulation among some programmers: "hugeware". It seems to have been forgotten. We seem to have developed a kind of "hugeware acceptance".

What was dubbed hugeware was actually small by today's standards.


Is it really that amazing how the best tools for any given job keep resurging no matter how many times people try to replace them?

Turns out that for many jobs C is simply the best language.

Memory limited? C

CPU limited? C

No OS? C

Need precise-ish control of executed code? C

Want inline assembly you can integrate with? C

New chip/architecture? C

Predictable performance? C

Hm, not surprising after all :)


Except Rust has all of those with significant improvements to the type system, safety and library/package ergonomics.

I don't use the word "significant" loosely here at all either.


When you can fit a Rust program that works as a voice-changing dictaphone into 1K of flash on a Cortex-M0, get back to me.



C is not going away because it is useful, simple, beautiful, and has no problems with dynamic linking on most OSs.


I wouldn't say simple. But I agree on the rest.


Well, C is simple in comparison to its alternatives. By a long shot, in my opinion :-)


Well then, I look forward to your argument supporting the use of Brainfuck or programming raw Turing tapes. Can't get much simpler than those options by your metric!


I think that it could be said that C is in fact simple but not easy.

