Time safety is more important than memory safety (halestrom.net)
87 points by panic 6 days ago | 108 comments





I actually have had the pleasure of porting a program from a defunct 16-bit language called Actor to C++. It wasn't a huge deal even at around 800 kLOC. All mainstream languages in the future are going to have some combination of structured, functional, and object-oriented programming. Converting is mostly going to be about syntax and libraries.

As someone who has worked with C and C++ for living for over 20 years, I wouldn't think twice about picking either Go or Rust if I were to start again. Go gives you the fast edit/compile/test loop of an interpreted language with the runtime speed of a compiled language. Rust is the language that the C++ Committee would make if they could start over.

That being said, Go will never take over the C, C++, and Rust niche. Going to and from Go-land and C-land is too expensive. Google has no interest in stepping behind libraries that aren't internet servers. Go will live a long life as a great environment to port your Python, Ruby, and other bloated server languages. It just will never be the next language to write a web browser.

Rust is amazing though. I see this as the programming language of the future until the U.S. Government slams down the hammer and forces everyone to use DOD-approved Ada.


>Rust is amazing though. I see this as the programming language of the future until the U.S. Government slams down the hammer and forces everyone to use DOD-approved Ada.

I'd be happy if we used Ada or Rust for embedded systems. They're both amazing in their own right.


I don't share the same negativity towards Go, in spite of disliking some of its design decisions.

TinyGo, TamaGo, Android's GAPID, Fuchsia's TCP/IP stack, and gVisor are all good examples that Go has a place in C's domain, when one is willing to invest the resources (time and money) into making it happen.

Using Ada wouldn't be that bad actually. :)

As for Rust, now with Google using it on Fuchsia and Microsoft doing internal Rust summits, let's see how it evolves. Both companies are quite big into ISO C++.


> Fuchsia's TCP/IP stack

Isn't that the part they decided to rewrite in Rust two years ago? Or am I confusing it with something else?


Netstack appears to still be in use, looking at the code repository.

And even if not, gVisor keeps using it, and gVisor is used in production on Crostini and Google Cloud.


Rust is interesting and clever, but it will not see widespread use, for pretty much the same reason that Haskell won't: It's simply too complex for Joe Coder to deal with.

C++ is likewise quite complex, but it has the huge advantage that a team of mediocre programmers can largely just keep to a simple subset of the language and bumble along. With Rust, you must learn and deal with the memory model, and it's not an easy thing.


There’s nothing complex about Rust’s memory model.

I think the conclusion is that C++ is too dangerous in the hands of non-experts.

As far as it goes, I agree, and I rarely recommend C++ as the right language for a project.

Nonetheless, on real-world projects, a team of average programmers can make reasonable headway in C++. They'll probably write a lot of dubious and buggy code and eventually slam into the complexity wall. In Rust, they'd slam into a wall almost immediately and very noticeably. Managers won't put up with that.


I've never really dug into C++, but I've written a little bit of C, and I feel it's generally easier to write a given program in Rust than it is in C.

I can see why Rust has its reputation for difficulty - the borrow checker is a new idea, and the unfamiliar is inherently difficult. However, I think once you've spent a modicum of time with it, it's a fairly straightforward language, simply because the compiler almost always tells you exactly what you're doing wrong.

I don't feel like it's a particularly ergonomic language, and it's certainly hard to read (very symbol dense), but I think it's easier to learn to appease the borrow checker than it is to debug your average C program, especially if you're the sort of person (I certainly am) that produces dubious and buggy code more days than not.


That may be, but debugging "looks like work" in the eyes of mgmt.

The borrow checker, on the other hand, looks more like "I can't find any applicants in Little Rock who can do this stuff.".


On the other hand, I think the most important thing in making developers replaceable is the degree to which the codebase explicitly contains all the information relevant to it.

Lifetimes are an additional, important piece of information that would have previously been something a programmer would have to have an intuition for, or that would need to be put in documentation.

Now, that work is delegated to the compiler, and so, your individual developer is somewhat more fungible.

I don't know if Rust is specifically the future in this regard, but I think if I were a Machiavellian manager, I'd be interested in replacing instances of human intuition and group knowledge with tooling, as much as possible. The borrow checker is one such tool - even if automatic garbage collection is probably a more straightforward one.


> Rust is the language that the C++ Committee would make if they could start over.

Bjarne has explicitly disputed this opinion.


Did he go into more detail?

If "will this be usable in ten years?" is a concern, then you shouldn't be worried about your language choice. You should be worried about the services your program depends on.

Does your mobile app do anything useful if Google shuts down their authentication servers? What happens to your users if the App Store/Play Store decides to purge applications that don't actively support recent APIs? What happens when the Maven repositories your build depends on get shut down?

I would much rather figure out how to build a Rust program in ten years than try to recreate a web service that simply no longer exists.

While we're at it: All the "obscure" languages discussed here are FOSS. Finding an implementation will be relatively easy, and having access to the source means that things like binary compatibility issues can be worked around.

This is nothing like trying to resurrect a program written in an obscure proprietary dialect of Pascal that was only made available on a run of 200 floppy disks.


Programs I write are generally designed not to depend on any particular internet services (if they need some, they are configurable), and this is part of the reason why.

And yes, in the case of FOSS it is probably easy enough to find an implementation and to work around the binary compatibility issues. In the case of a program written in an obscure proprietary dialect of Pascal that you do not have an implementation of, well, sometimes you might reimplement a subset that is enough to run those programs (I have done similar things in the past), although of course it is probably going to take longer and be more difficult than the FOSS case.


Counterpoint: NES games were handwritten in assembler for a long-dead architecture. You couldn't pick a worse development environment for "time safety". The source code for most of the games is lost, too. Yet they remain some of the most portable programs in existence, because of emulation.

The PC architecture is extremely well-documented in practice (as are alternatives like WebAssembly), and there are going to be emulators for them around for the foreseeable future. Emulators for the PC architecture are no more likely to die in the future than C compilers are. You will be able to run any program written for the PC, in any language, for a very long time.


Emulation isn't portability. It enables you to run the program, but not to modify it and adapt it to fit in with a new platform.

Yes, although people still write programs for NES, sometimes for that reason, so it works.

> The PC architecture is extremely well-documented in practice (as are alternatives like WebAssembly), and there are going to be emulators for them around for the foreseeable future.

Intel has patents on the x86 ISA. Though the basics of the instruction set (386, 486, Pentium) are outside the scope of a patent, it's impossible to write a complete emulator for a chip made within the past 20 years without infringing on Intel's IP.


Actually you can, in any country but the US and Japan. Software patents aren't enforceable anywhere else AFAIK (and at least not in the EU for sure). For a commercial product that's a show stopper indeed, but for an open source emulator without commercial ambitions that wouldn't be much of a problem.

Which would only be an issue for the next 20 years at most. I suspect we won't run out of native x86 processors in that time frame.

Regardless, complete emulators exist today, and Intel hasn't sued anyone over them so far. Intel did threaten to sue Microsoft over Windows-on-ARM's x86 emulator a few years ago [1], but that doesn't seem to have gone anywhere, and even if it did, it's unlikely anything would happen to open-source emulators.

[1] https://www.theregister.co.uk/2017/06/09/intel_sends_arm_a_s...


Nintendo claims the same about emulation of their consoles. It hasn't appeared to be much of a deterrent.

Anyone can claim anything that helps their cause. All you have to do is not implement the emulator using the official developer documentation for the system.

For example, on the original Game Boy the thing you have to avoid is using the copyrighted BIOS ROM. But you can avoid that by just initializing the CPU registers yourself and starting at the cartridge start address (see the sketch below).
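To illustrate, here's a minimal C sketch; the struct and function names are made up for this comment, and the register values are the commonly documented post-boot state of the original DMG (later models leave slightly different state):

    /* Hypothetical emulator reset that skips the copyrighted boot ROM:
       load the documented post-boot register state and start execution
       at the cartridge entry point. */
    #include <stdint.h>

    struct gb_cpu {
        uint16_t af, bc, de, hl, sp, pc;
    };

    static void gb_reset_without_bootrom(struct gb_cpu *cpu)
    {
        cpu->af = 0x01B0;
        cpu->bc = 0x0013;
        cpu->de = 0x00D8;
        cpu->hl = 0x014D;
        cpu->sp = 0xFFFE;   /* top of the usable stack area */
        cpu->pc = 0x0100;   /* cartridge entry point */
    }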


FWIW, independent reimplementation protects you against copyright infringement but not patent infringement.

True, but software patents are enforceable only in a few places in the world, while copyright is enforceable pretty much everywhere.

> I don't think it's responsible to ask ordinary programmers to start their projects in new languages.

Rust is 14 years old[0], its compiler has been self-hosting for 9 years, and its 1.0 release was nearly 5 years ago. Sure, that's not as old as C or C++, but I wouldn't call it "new" either.

[0] https://en.wikipedia.org/wiki/Rust_(programming_language)#Hi...


Given that code written 14 years ago wouldn't compile on current compilers, I think it's fair to consider today's Rust a fairly new (i.e. 5-year-old) language.

I’d call it maybe still a bit too new for applications where enterprise-level funds or harm to human life is in scope if the software fails. But it’s getting there.

“Too new”. Would you say the same about the new Boeing plane, then, too?

The 737 max? Yeah probably.

However, if the 737 continued flying we would get more deaths rather than more fixes, wouldn’t we?

Age has got nothing to do with stability.


Rust isn’t going anywhere in the foreseeable future. Even if it died today, it would stay around for a decade or two (Python 2 did. C99 is both a dead language and still considered new).

When it becomes so obsolete you won’t be able to use it, it’s likely that whatever you wrote in it also won’t have any value beyond being a historical artifact.

And if you want to preserve programs and compilers for museums or the far future, how about building your compiler for WASM+WASI and archiving that?

These technologies are still new, but their spec is much smaller and simpler than C and native OS APIs. You’ll be able to recreate a WASM interpreter even centuries later, and from there revive the compiler and rebuild the programs.


> C99 is both a dead language and still considered new

Lest anyone doubt this, I recently worked on a (proprietary) library that the company kept C89-clean because some (similarly proprietary) embedded targets are basically frozen in the mid-90s.


That is pretty common in the embedded world. There, "ANSI C" still means C89, even though ANSI has long since adopted newer C standards.
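As a rough illustration of what "C89-clean" tends to look like in practice, something like this gates newer constructs on the standard-version macro, which C89 compilers don't define (the type names are just for the example):

    /* Sketch only: choose a 32-bit-capable type in a way that still
       compiles on a C89-only embedded toolchain. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #include <stdint.h>
    typedef uint32_t u32;            /* C99 or later */
    #else
    typedef unsigned long u32;       /* C89 fallback: at least 32 bits */
    #endif

    u32 checksum(const unsigned char *buf, unsigned long len)
    {
        u32 sum = 0;
        unsigned long i;
        for (i = 0; i < len; i++)    /* no C99 for-loop declarations */
            sum += buf[i];
        return sum;
    }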

This argument only makes sense if you believe the longevity of your project is more important than the personal safety of its users. For a lot of projects that might be the case. For most commercial and major open source projects, it is not.

There's always Common Lisp. Still portable 35 years on.

I recall once, not quite 20 years ago, porting a large CL program from a 32-bit implementation to one of the then-new 64-bit Lisps. Someone who didn't know CL thought that would be a hellish job, but it turned out to be quite straightforward.


With the new research being done in macro-level type systems there's a lot of great reasons to check out lisp again (as a person who loves strongly typed languages)!

Type Systems as Macros - https://www.ccs.neu.edu/home/stchang/pubs/ckg-popl2017.pdf

Dependent Type Systems as Macros - https://www.ccs.neu.edu/home/stchang/pubs/cbtb-popl2020.pdf

It's so cool - especially the bits where they reconstruct each part of the type system and show how each feature they add contributes to the power/capability of it. :D


I used macro expansion for type checking in Zeta-C [0], the first C compiler for Lisp Machines, in 1983. Didn't write a paper about it, though.

Dependent types are certainly interesting -- I'll give this a read. Thanks!

[0] http://bitsavers.trailing-edge.com/bits/TI/Explorer/zeta-c/


Awesome! I'm really enjoying reading your source code. Cool stuff; would definitely be interested in hearing your opinion of the papers.

> In contrast my old C projects from 5-8 years ago still compile and run

Sigh. Again this circular logic. And again, this is a fact simply because many people chose to muscle through a lot of problems that C has. It's NOT because C is amazing or anything. Many people chose to muscle through COBOL's problems as well. Is COBOL amazing? Is COBOL giving you time safety?

This is like saying that the Amish have the superior philosophy because there are still Amish communities. They have... a philosophy. They hold it dear. They insist on living by it. That's it. There's nothing more to it. No deeper revelation.

I am not going to engage in the "C vs. Go/Rust" debate apart from saying that clearly many people dislike C and C++ and are looking for alternatives. Which by itself should immediately hint to the author that his case is not as universally accepted as he seems to want to make it.


"My old C projects from 5-8 years ago still compile and run ... in my trusty Ubuntu 12 VirtualBox inside Windows 10!"

:)


It's actually the assumptions around longevity that need to be examined.

Software is not like etching perfect museum-pieces.

It is the surrounding world that defines software's context, and since that will always change, we'll have to continue to write and rewrite software to ensure the match stays appropriate.


It depends on what you're working on. I'm in the simulation area at an engineering company, and we've got plenty of Fortran code that dates back to the '70s that's still just fine. A little maintenance to work on removing deprecated syntax, a little work to replace common blocks with a more decoupled design, and we're probably good for decades more use. And if we can't get to that maintenance work right now, the strong backwards-compatibility of Fortran will have our back until the next opportunity. :)

Same here. We are stuck on Watcom F77 right now because we use some old extensions. We are in the middle of eliminating control characters from format statements so we can finally upgrade to a new compiler.

I believe this is the most coherent counterargument to the article's point. Given how often software is rewritten and requirements evolve, how much does it make sense to target longevity? And when I say rewritten I don't just mean big-bang rewrites-from-scratch, I also include Ship of Theseus rewrites where over time everything gets rewritten one piece at a time.

I don't have a good intuition for this. I think everyone has anecdotes of both sides; projects that were supposed to be throw-away code that lived on for decades and everyone is familiar with changing scope and requirements requiring changes to otherwise perfectly fine code.

Put another way, the longevity of code is only important to the extent that it still fulfills the needs of its users. Otherwise, as always if you can get away with it, the best code is no code.


It doesn’t and the only reason we’ve done it this way is because software distribution at scale had to be on physical medium of one sort or another.

Shipping updates used to mean literally shipping new hardware to be installed.

We’ve treated software like a literal thing to meet boxed software business sales expectations. Back in the day people sold things.

There are a number of different ways to build and distribute software that achieve the same level of results. We’ve just stuck to using the ones that traditional business models are willing to fund.

Apparently curve-fitting for ephemeral financial growth is achieving literal economic gains. Or something. The epistemology of economics has diverged from its ontology: it has become less about material goods and more about information organization, which is harder to value given that, as we say here, we curve away too fast and have to rewrite it. No one knows.


I thought this was going to be about concurrency, but it's instead about the risk that programming languages will become obsolete quickly.

I thought this was going to be about bounding the time complexity of your algorithms; I don't know if any language does that yet. I suppose this title has a lot of interpretations.

The early versions of programmable vertex and pixel shaders did that by enforcing bounded loops and not having any function calls.

It turns out that that's far too restrictive as a programming model, so while modern shading languages still forbid recursion, they are otherwise unbounded.

It would be interesting if there's a more useful middle ground somewhere, analogous to what Rust does with lifetimes.



My understanding is that automatically determining time complexity is impossible in the general case due to the halting problem.

And getting a compiler to truly understand the time complexities of your data structures would involve either extremely difficult theorem proving or just forcing it.

I do think it would still be interesting if a language had support for determining time complexity. It seems like it's often possible even if it isn't in the general case.


Another issue is that algorithmic complexity only really 'composes' in a trivial and uninteresting way: Ok, you have a for loop over N items, multiply the complexity of the loop body by N.

Well, no not quite... it may be the case that one particular iteration of the loop makes all the subsequent steps completely trivial, so O(1)... and even further it may be that such a step is guaranteed to be reached in log(M) time, etc. etc. You see this type of thing a lot in graph algorithms where they look like they might be O(n^3) or whatever, but leaving markers on nodes can avoid a lot of repeated work (by skipping marked nodes), so they end up as O(n + k) or whatever.
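To make the "markers on nodes" point concrete, here's a minimal C sketch with a made-up graph representation: without the visited[] check the search can redo the same work over and over, while with it each node and edge is handled once, so the traversal is O(n + e):

    #include <stdbool.h>

    #define MAX_NODES 64

    struct graph {
        int adj[MAX_NODES][MAX_NODES];  /* adj[v][0..degree[v]-1] are v's neighbours */
        int degree[MAX_NODES];
        int node_count;
    };

    static void visit(const struct graph *g, int v, bool visited[])
    {
        int i;
        if (visited[v])
            return;                     /* the "marker": skip already-done work */
        visited[v] = true;
        for (i = 0; i < g->degree[v]; i++)
            visit(g, g->adj[v][i], visited);
    }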

Upper bounds for worst case complexity are relatively easy... just assume the worst, but the interesting thing is proving TIGHT bounds for worst case complexity.


I was expecting a discussion of timing side channels in encryption. Valid points all around, really.

Spoken in the voice of David Attenborough: and here we have the lifelong C programmer fighting for status among his tribe. He proclaims to have an authority on time, and his tribe agrees, as they wilfully ignore the fiery meteor that descends upon their position. Time, it seems, has almost run out.

I think this is yet another post that falls into the trap of mistaking HN/Reddit -- fashion publications, really -- for something else. They're GQ or Vogue, not the New York Times. https://news.ycombinator.com/item?id=22106559

The vast majority of developers do and always will choose well-established languages. If Go or Rust or Zig or Kotlin or Clojure or Elixir ever become truly popular, by that time they will have become established and "safe" (unless they were Microsoft technologies, but they're not).


If you agree with this principle, I think the end result is that you choose the most popular language for some window.

For banks and airlines, you conservatively pick a large window. At one point that ended up being COBOL. That is slowly migrating to Java now. I suspect it will move to TypeScript in about a decade or so.

For new code today, I'd probably pick TypeScript.

A few years ago, I'd have picked Ruby. Before that, Python. Before that, Perl/CGI. Before that, Java. Before that, C++.

I wonder if at some point the tides will turn back to writing pure-personal-computer-and-no-cloud C/C++ Windows applications.


You just need to go into life-sciences lab robots, factory automation, medical devices, ticketing machines, and so on, to enjoy doing Windows WPF or C++ applications.

I doubt that most developers ever choose a language to work on at work. They usually join a pre-existing project that already has enough of an investment into a given language that choosing any other language requires a very compelling reason.

To see what languages developers are actively choosing as opposed to going along with, you'll need to measure the proportion of languages used among greenfield projects (and ideally truly greenfield ones, i.e. ones that don't require technical support from other teams within the same org).

I don't have data for this, but I'd be surprised if Go wasn't way high up on that list (and on Android, similarly for Kotlin).

So does anybody have data on greenfield projects?

N.B. I also wouldn't be surprised if the majority of developers hardly ever choose a language for anything; that is, I wouldn't be surprised if the majority of developers have never started a substantial project from scratch and have only ever joined existing ones. A lot of people prefer to do things other than code in their free time, and again, in the workplace, especially in large organizations, it's perfectly possible to only ever work on pre-existing codebases.


Only anecdotes, but I've had the opportunity to be responsible for the choice of language in some greenfield projects, usually relatively small line of business applications. My choice is almost always nodejs, lately with TypeScript front ends and my most recent project using TypeScript on the node side too.

The reasons for this choice (I've also worked with Ruby, Scala, C#, Java etc):

* JavaScript developers are everywhere, it's an accessible language to learn with tons of libraries and a huge community

* Nodejs is a simple and great general purpose backend platform that will get you really far for 95% of use cases. It scales well, has a simple concurrency model and simple deployment and ops tooling

* Using same language on front end and back end reduces cognitive overhead. My biggest client at the moment uses C# on back end and TypeScript on front, and despite them both being very nice Microsoft languages there's still so many differences


Golang is already a top 15 language, I would call that truly popular.

Not really. It has a 0.6% market share in the US, and I would assume a much lower share elsewhere: https://www.hiringlab.org/2019/11/19/todays-top-tech-skills/

I mean, it's popular, but I'm not sure it's a "safe language" just yet. Language popularity obeys a power law of sorts. Go is about 6x less popular than Ruby at the moment, and I don't know if I would count Ruby as a "safe language" either. So far, I haven't seen many Go programs meant to be used for the next, say, 15 or 20 years.


Of course, new developers will want to program in Kotlin. Just as per the post, they're not entirely sure of any reasonable why's. They're pretty adamant they don't want to spend their next 40 years ending up as the elder peers tending the same codebase though. :>

What a pile of garbage.

First of all, try to compile C code written for 8-bit and 16-bit '80s micros to see how well it has stood the test of time, without making use of hardware emulators; spoiler alert: it won't even compile.

Then plenty of programming languages, some of them older than C, are doing great.

Anyone with enough money can enjoy NEWP on Unisys ClearPath, a systems programming language almost 10 years older than C.

Or for free one can enjoy Algol 68 programs in 2020, http://algol68.sourceforge.net/

Guess what, those languages took engineering seriously, unlike C.

"The first principle was security: The principle that every syntactically incorrect program should be rejected by the compiler and that every syntactically correct program should give a result or an error message that was predictable and comprehensible in terms of the source language program itself. Thus no core dumps should ever be necessary. It was logically impossible for any source language program to cause the computer to run wild, either at compile time or at run time. A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary pre- cautions would have long been against the law."

"Hoare's The 1980 ACM Turing Award Lecture", https://www.cs.fsu.edu/~engelen/courses/COP4610/hoare.pdf

Still disappointed at how Azure Sphere can sell its security story while offering only C as a programming language, so...

Even C++, with its copy-paste C89 compatibility, is already better than writing plain old C. And guess what, the language is 40 years old now.


I think of C as the Volkswagen Beetle: an easy-to-produce, inexpensive thing that hit a niche that didn't previously exist. Worse is better, the low end eats the high end, plus Dijkstra's quote on BASIC.

The real message in Trusting Trust is that the language has been backdoored at the design level, it doesn't need a malicious compiler. A message always has multiple parts, most people miss the important ones and just assume that literal interpretation is the intended one.


> C is much more likely to be the "safe" option if time x usefulness of your project is a goal.

What about using Lua? Sure, different versions of the language are incompatible, but you could target a particular version. Anyone in the future who would be able to compile your C program would also be able to compile the correct version of the Lua interpreter, because it’s written in pure ANSI C.

Using Lua instead of C would gain memory safety, at the expense of performance.
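For reference, embedding Lua from C is roughly this small. This is a minimal sketch against Lua's standard C API; the exact link flags (something like `cc host.c -llua -lm`) vary by platform and Lua version:

    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void)
    {
        lua_State *L = luaL_newstate();           /* fresh interpreter state */
        luaL_openlibs(L);                         /* load the standard libraries */
        if (luaL_dostring(L, "print('hello from ' .. _VERSION)") != 0) {
            fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
        }
        lua_close(L);
        return 0;
    }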


In theory, we are asymptotically approaching a stable set of programming languages - a constellation of local-optimums. When something comes along with a different name and new syntax and cleans up some semantics, it's called a new language, but that's missing the point - the intent of the programming environment it provides remains similar. And then writing code in it is no real issue.

What ties us down to old languages is the whole environment - it is Joe Armstrong's "to give a gorilla a banana you must pick up the entire jungle" run rife through our production systems. Because our code increasingly depends on globally-sourced libraries, we tend towards copying around the entire planet and so there are layers of accretion all over.

And yet emulators do exist and run software successfully. The trick there is that they have a terrarium of sorts: a boundary that limits the ecosystem. The explosions that obsolete software tend to occur where a terrarium was not planned for by the original authors, rather continuous dependence on an evolving ecosystem was taken for granted.


It's tempting to just drop this one because it's borderline flamebait, but there are so many pernicious misconceptions here that it's worthwhile calling them out explicitly. I'm generally not into author-chiding, but TBH the post was written in a very inflammatory and ignorant fashion that I'd encourage the author to be more thoughtful in the future so that we can all get out of the muck.

1. False choice between "unsafe" and "new" languages. No, just no. Safe languages have been around since the 1960s (e.g. LISP). While it's not really "C versus the world", C/C++ are in the vast minority when it comes to safety across all programming languages. Throw a stick, hit a safe programming language. Java, C#, Rust, Go, Python, Ruby, R, Lisp, Clojure, Scala, Haskell, ML, Modula 3, TCL, JavaScript: the list goes on and on and on.

2. C programs don't break. Just plain false. C and C++ are underspecified languages, meaning they have undefined behavior in some situations--actually, most situations. Undefined behavior is silent and there are poor diagnostics; it's almost always a program bug (a short example follows this list). Undefined behavior and even nonportable behavior is absolutely rife in the C and C++ world. Programs aren't necessarily portable across platforms, compilers, language versions, and even compiler bugs. Well-specified languages give programs a much better chance of being portable. Which leads to:

3. Portability is mostly a language issue. This is only partially true. The fact is that portability and forward compatibility (time safety as this person calls it) is a function of dependencies--on the language version, compiler, external libraries, and general environment around the program too. Fewer dependencies generally means better forward compatibility. It's also good if those dependencies are maintained by teams who care a lot about backward compatibility. Some languages do more than others.

4. "Time safety" is a term. I have never heard of this term before, which just speaks to the uninformed nature of this post. I think the author means forward compatibility.

Anyway, if you feel like the core message of this post is that unsafe = future-proof, feel free to choose another unsafe language for your next program. Hint: there aren't many.

PS: Java 1.0 is now 25 years old. Java 1.0 programs still compile today and the binaries that compiler generated back then still run on today's JVMs.


> PS: Java 1.0 is now 25 years old. Java 1.0 programs still compile today and the binaries that compiler generated back then still run on today's JVMs.

If you want to write a random program that you don't plan to support but want to remain usable, Java is your best bet.

I've on multiple occasions recently used really random Java GUIs that are from the 2000s (Melee related). It sometimes took some finagling (iirc I had to use Java 8 or something .. luckily it was just a nix-shell away!)


Was looking for this.

I use Pascal for all my projects because it has memory safe strings and arrays.

Almost all buffer overflows and security bugs could be solved by rewriting all software in Pascal.

Every time software crashes, you should say it crashed because it was not written in Pascal.

I just spent two hours modifying my XML parser to load files that have a doctype with inline declarations. I never needed to load an XML file with a doctype before, and I only wrote the parser for the files I have. I also have plans for a JSON parser. There is a surprising lack of Pascal libraries.


This reads like trolling. "It crashed, because it was not written in Pascal"? Come on.

It may have crashed because it was written in C/C++, perhaps. But on the exact same grounds, why not say "It crashed because it was not written in Haskell"? (Or any of several other languages.) Why Pascal in particular?


Because back in the 1970s, Pascal and C were the two languages battling it out. At the time C seemed like the best option of the two, but now that everything is online, we realize Pascal is the better of the two.

benibela doesn't sound like s/he is talking about the 1970s. Rather, the point seems to be crashes today.

But no, the people back then weren't fools wandering in darkness. C won not just because it appeared better, but because it was better. Pascal appeared better - it had better theory, and a better story. But for actually writing software, C was better. Even with all the "shoot yourself in the foot" potential, C was still better. Programming in Pascal was like picking your nose with boxing gloves on compared to programming in C. (I exaggerate, but Pascal was noticeably - and frustratingly - clumsier to use.)


This is a very strange comment IMO. There are already perhaps too many XML and JSON handling libraries for Object Pascal, as well as libraries for pretty much anything else I can think of.

But do they work, and are they maintained? I do not trust them. At first I used the FPC XML parser, but it stores everything as UTF-16 internally, and once it crashed on me during the UTF-8 <-> UTF-16 conversion. I do not want to use UTF-16 for anything, so that was a completely pointless crash.

You would probably like Go; it's much like Pascal. And it has libraries for everything!

Except, check the language reference for FreePascal and Go, regarding language features.

And Go still doesn't have anything comparable to Lazarus, which isn't as feature rich as Delphi/RemObjects.


Two things:

A) The person you're replying to is simply quite wrong about the amount of Pascal libraries available. There is no "domain of interest" I can think of that does not have at least one "defacto" library for it.

More commonly though there's four, five, six or more libraries for any given thing to choose from, which often turn out to each have specific strengths such that you may very well end up using more than one of them in your project.

B) They would almost certainly not like Go. Object Pascal as implemented by Free Pascal is a language that effectively embraces with open arms almost all of the things that Go actively avoids: for example, traditional (single) inheritance, operator overloading, both function overloading and generics, and so on and so forth.

As far as inheritance specifically, the general indifferent attitudes towards it of "there are definitely times and places where it makes more sense than anything else to use" amongst Pascal programmers are in my opinion basically a direct result of the fact that "bad experiences the compiler developers personally had with inheritance as specifically implemented by C++" are NOT something that actively factors into their decision making process or something that they really think about or care about at all (or more broadly, something that users of the compiler generally think about or care about at all).

This sets it quite far apart from other languages such as Rust, where "things C and C++ arguably did wrong" are in fact heavily influential and both thought about and discussed regularly.


Last time I checked, Pascal has generics and exception handling.

Pascal has neither generics nor exception handling. At least, with reference to ISO Pascal.

Nobody refers to ISO Pascal anymore. It is either Delphi or FreePascal.

If you're planning that far ahead it may not be an either-or situation. That is, in the future C/C++ may also have an enforced memory safe subset. In which case the issue becomes, how do you write your code today so that it will conform to the safe subset? There are conformance tools in the works that can already give you a sense of the restrictions that will be imposed [1].

[1] https://github.com/duneroadrunner/scpptool


Please don't use promotional accounts on HN. We're here for curious conversation, not promotion. I appreciate that you've been posting these links in threads where they're mostly relevant, as opposed to blanket-spamming the site, but it's still not in the spirit of HN to have single-purpose accounts or to use this place primarily for promotion. It's ok to post your own work occasionally, as long as it's part of a diverse mix of posts that are motivated primarily by intellectual curiosity. That's the value we're optimizing for here.

https://news.ycombinator.com/newsguidelines.html


What kills code more than anything else is not the language of choice, but operating system vendors "improving" user experience by deprecating APIs and frameworks.

That said, there are language communities that definitely value change more than stability *cough* JavaScript *cough*.

But Go? IMHO it's unfair to implicate Go. It has no backward-incompatible changes that I know of, and it literally depends on nothing but the kernel interface, which is known to be very stable.


From the title you might think it's talking about safety from pathologically slow inputs. Especially considering there's a different post also on the front page right now about that exact thing.

> PerfFuzz: Automatically Generating Pathological Inputs (2018) [pdf]

https://news.ycombinator.com/item?id=22315542


I kinda disagree. With C, you risk old autoconf scripts breaking, depending on library versions that operating systems don't include anymore, compilers learning how to exploit undefined behaviour they didn't exploit before, and programs breaking due to making incorrect assumptions (`char` is signed, right?). C++ also adds exciting issues in terms of backwards-compatibility breaks (for instance, programs using the `register` keyword won't compile since C++17) - refer to Annex C in the C++ specification for a long list of BC breaks.
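A tiny illustration of the `char` assumption: whether plain char is signed is implementation-defined, so the same code can take different branches on different platforms or with different compiler flags:

    #include <stdio.h>

    int main(void)
    {
        char c = (char)0xFF;       /* -1 if char is signed, 255 if unsigned */
        if (c < 0)
            printf("plain char is signed here\n");
        else
            printf("plain char is unsigned here\n");
        return 0;
    }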

None of this is an issue in Rust. Cargo is a proper build system, not a weird combination of shell script and M4 using programs that are prone to breaking changes (like awk). Cargo requires declaring a program's dependencies and prevents accidentally picking up a dependency that merely happens to exist on the operating system. Safe Rust code avoids undefined behaviour, removing the risk of triggering it by accident. Rust has much less implementation-defined behaviour than C (`i8` is a well-defined type). Rust's edition system means that breaking changes don't affect old code (even if Rust were to, say, remove `static mut` in the 2024 edition, that change wouldn't affect programs that don't explicitly upgrade to the 2024 edition).

Rust is serious about backwards compatibility. Breaking changes are mostly unacceptable (other than in cases of soundness bugs, but even then there is typically a warning cycle before fixing a bug). In fact, you may take a look at "Compatibility notes" sections in https://github.com/rust-lang/rust/blob/master/RELEASES.md. The compatibility breaks tend to be very minor, and unlikely to break stuff. For instance, in Rust 1.39, `asinh(-0.0)` was changed to return `-0.0` instead of `0.0`. Strictly speaking a breaking change, but a program depending on that is very very unlikely. Rust also has crater which prevents accidentally breaking backwards compatibility by testing whether every public GitHub repository and crates.io library still works the same way. Additionally, even in case program compilation breaks anyway, it's possible to use an older version of a compiler with rustup.

Rust is here to stay. Even if Mozilla stopped supporting the project, some other company that uses Rust internally would continue to support it. It's free software, after all. And even if that didn't happen, the old rustc releases would still work.


usually the argument to keep using a bad language is “but we have all this stuff we built and we don’t have that in newer things so we can’t easily change”. but this is a new take! we can’t use new languages because we expect only these flawed ones to live FOREVER

New programming languages should address this concern. C by itself doesn't, really.

TL;DR: It's better to have buffer overflows and use-after-free bugs that still work in 50 years than to have secure code you may need to migrate.

I think you're being sarcastic? But I also think the article genuinely believes that's true.

In 50 years there will be C programmers who know how to fix a buffer overflow, compile, and run your program. Conversely, a program written in a language and dependent on a package repository that only existed from 2017-2024 will likely never find an academic willing to invest the thousands of hours in learning that legacy code, digging through archives, and making that program run again.


> In 50 years there will be C programmers who know how to fix a buffer overflow, compile, and run your program.

I don't think that's a given. In 2070, C might be today's COBOL. There are a few people who know it, and are paid obscene amounts of money to maintain aging systems to keep them from falling apart.

Now, there's certainly more C code today than there was COBOL code at COBOL's peak, but there's no predicting the future. What I'd call "modern" computing and software development has only been around for 30-40 years or so. In 50 years I expect things to be radically different.


Indeed. But in the meantime you have to worry about getting pwned by the buffer overflow.

The infinity-plus-oneth HN pigpile-on-the-C-luddite thread. Ugh...

The knowledge needed to compile a C program into executable code will never die, as long as there is a human race.

Lots of C code has already bitrotted because the APIs it depends on have been abandoned. Resurrecting it isn't just a matter of compiling the code, but of recreating the environment it ran in.

That was before storage became abundant and cheap, from a source-code storage perspective. It will not become more expensive.

Nowadays storage capacity is developed to store video - this is really the only thing that needs a lot of new storage capacity.

Every line of sourcecode written by every programmer ever is a fart in the wind compared to what's being uploaded to Youtube in the next 60 seconds.


Which does nothing to address "it works on my machine" build related issues. And more modern languages have much more robust solutions to hermetic build and package management.

That's what docker images are for :-)

C is only about 50 years old. I'd say it remains to be seen (not in our lifetime).

you might be able to compile C89, but would you be able to debug it? What if there are incorrect assumptions about the word size in your computer?

As part of "C lore", the answer is yes. In C89:

    int main() {
        func('a');
    }

    func(c)
    char c;
    {
        char *s = &c;
        printf("%c\n", *s);
    }

is, um, dangerous. c is an int, not a char. So on a big endian machine, s may end up pointing to the high byte. Taking the address of a parameter is dangerous, unless the widening is taken into account.

This does work on a little endian machine (tried it). On a 68000, it likely doesn't print 'a' (\0 is most likely).

On little-endian, this is a lot safer. ANSI C takes care of this as well. Yes, we can "debug it". There is a chance that all knowledge of this is lost. But, for now, those of us "in the know" have experience to keep the old alive.


And, for those who have been contemplating parameter widening: on the 68000, insert s += 3. For "portable" C89 code, declare the parameter an int (not char) -- then introduce char c2 = c and s = &c2 instead. This transformation is "safe" because parameters are copied by value and cannot be returned that way.
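Spelling that "portable" transformation out as code (a sketch of the same example as above, with the parameter declared as its promoted type, so the address taken is of a plain local char rather than of a widened parameter):

    #include <stdio.h>

    /* Parameter declared with its promoted type (int), then copied into a
       local char before any address is taken. */
    func(c)
    int c;
    {
        char c2 = c;
        char *s = &c2;
        printf("%c\n", *s);
        return 0;
    }

    int main()
    {
        func('a');
        return 0;
    }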

As to debugging -- many of these systems did not have "debuggers" -- an example would be Whitesmiths C in the late 70s and the 80s. We used a strategy of old-school paper validation, combined with function testing. Yes, things moved more slowly.

Code can be maintained. The above C89 code was just compiled with gcc 9.2.1 -- with gcc -std=c89 c89.c it compiled without warning, and ran (40 year old code).

The issue with (say) bringing forward Whitesmiths C is that the Whitesmiths standard library is not POSIX. Not hard to convert, though. For example (as I recall) %d was %i.

FORTRAN 77 (and FORTRAN IV) is in a similar category. As is COBOL and Common LISP.

SNOBOL4 is a bit of an outlier -- the original interpreter was written in macro assembler. That assembler has now been converted to C, and the original interpreter can still be run:

    The Macro Implementation of SNOBOL4 in C (CSNOBOL4BX)
    Version 2.0 by Philip L. Budne, January 1, 2015
    SNOBOL4 (Version 3.11, May 19, 1975)
    BLOCKS (Version 1.10, April 1, 1973)
    Bell Telephone Laboratories, Incorporated
    EXTENSIONS (Version 0.25, June 16, 2015) Fred Weigel

    No errors detected in source program

    CODE (TUE AUG 4 10:28:58 EDT 2015)
    RUNNING ON CSNOBOL4 MAINBOL WITH SPITBOL, BLOCKS, EXTENSIONS
    ENTER SNOBOL4 STATEMENTS (TRY ? FOR HELP)
    5,541,616 BYTES FREE
    CODE:

And, yes, I modified the original interpreter to add some extensions back in 2015. However, Budne has published the pattern matching engine in Javascript so there is an easy migration for those programs (https://github.com/philbudne/spipatjs).

In my opinion (and this is strictly my opinion), only Javascript appears to have this "lasting" property wrt modern languages. An important characteristic is simplicity, and forward-backward compatibility. As well, multiple implementations are important.

FredW


Elixir has just feature-frozen itself and it looks like it won't go 2.0 (the underlying Erlang subsystems might change, though).

You might like Zig, which is shaping up to be a saner C, and Andrew Kelley looks like he's staving off feature creep.


I don't get the argument. Is the author complaining about complexity of languages like Rust? I.e. only simple languages supposedly have longevity? C++ is complex, yet it already exists for quite a long time too.

Just because C happens to be a long used language doesn't mean you shouldn't be using newer and better languages and that those languages can't be used for a long time too.

C is used not due to big benefits, but mostly for legacy reasons today. Basically a lot of projects are already stuck with it. But for new ones, surely use Rust, not C.


Honestly, if you care about longevity, Rust simply isn't the language yet. I'd at least wait until there's a specification and a GCC frontend.

Anyway, C does have some benefits as a programming language.

• "Legacy" projects like Linux and CPython will keep it alive for decades to come.

• It's extremely portable. Rust is less portable, and even when Rust supports a niche platform, it's relatively clunky to get things started.

• C libraries are always very easy to call from other languages. Rust isn't quite there yet, and translating things like traits can be a bit of a pain.


Agree. At work, we have a component written in Pro*C (C + Oracle). It's about 20-year-old code. All the original developers are gone and now we're all Java developers. We had a segfault last month in production. No one was confident in the fix we did because even though we all know C (more or less), no one knew the pitfalls of C. The segfault happened in string formatting, which is a very trivial thing to a Java developer. It hasn't been rewritten in a better language yet, not because C is such a good language but because rewrites are a pain. But because of the last segfault, the business is now more motivated to have it rewritten in Java.
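The kind of pitfall being described is often as simple as this (purely illustrative, not the actual code): sprintf will happily write past a fixed buffer, and whether that quietly corrupts memory or segfaults depends on what happens to sit next to it, while snprintf at least bounds the damage:

    #include <stdio.h>

    void format_name(const char *first, const char *last)
    {
        char buf[16];

        /* Classic trap: overflows buf as soon as the names are long enough. */
        /* sprintf(buf, "%s, %s", last, first); */

        /* Bounded alternative: truncates instead of scribbling past the end. */
        snprintf(buf, sizeof buf, "%s, %s", last, first);
        printf("%s\n", buf);
    }

    int main(void)
    {
        format_name("Johannes", "Featherstonehaugh");
        return 0;
    }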

Here is something on which I can gladly agree with you.


