Feds: Critical Software Must Drop C/C++ by 2026 or Face Risk (thenewstack.io)
50 points by afdbcreid 39 days ago | 97 comments



Leave it to politicians to demonize a language. Model-checked C/C++ is memory safe. Had they reached out to a wider set of people for guidance, they'd have a more balanced report.

I will agree that software safety -- not just memory safety -- is critical. Trying to attack this at the language level instead of the development process and assurance level is daft. FIPS certification and aerospace certification both require auditing already. It's not much of a stretch to require an audit of critical infrastructure to verify that safety processes are in place.

Simply adopting a different language won't make software safe. It will make it safer, perhaps, but we can do better. Model checked code -- be it written in Rust, C, or C++ -- is on the same level. Tools exist for each. That is what CISA should focus on, not trying to force organizations to migrate their code bases to some new and shiny language.


I'm a little familiar with TLA+ but it can't verify the actual code you want to run, only a restatement of your intended algorithm. Are there published model checkers that can check real C++? How would they catch double-free or use-after-free or double-throw?


Yep. ESBMC and PolySpace both work for C++.

I use CBMC for C.

They run an abstract machine model through an SMT solver. Among other things, this abstract machine model tracks memory usage, UAF, aliasing, etc. With custom assertions, it can also cover issues like casting safety, serialization issues, logic errors, etc. More or less, you can build up any proof obligations you need and run them through the model checker. It will provide counter-examples if it detects a failure.
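
To give a feel for it, here's roughly what a minimal CBMC harness looks like (illustrative code, not from any real project; the nondet_* declaration and __CPROVER_assume are CBMC conventions, and the flags in the comment are just one reasonable invocation):

  #include <assert.h>
  #include <stddef.h>
  
  /* CBMC convention: a declared-but-undefined nondet_* function is treated
     as an unconstrained, nondeterministic input. */
  size_t nondet_size_t(void);
  
  /* Function under verification (illustrative): copy exactly n bytes. */
  void copy_prefix(char *dst, const char *src, size_t n, size_t cap) {
    assert(n <= cap);                   /* custom proof obligation */
    for (size_t i = 0; i < n; ++i) {
      dst[i] = src[i];
    }
  }
  
  /* Verification harness; run with something like:
     cbmc harness.c --function harness --bounds-check --pointer-check --unwind 17 */
  void harness(void) {
    char src[16] = { 0 };
    char dst[16];
    size_t n = nondet_size_t();
    __CPROVER_assume(n <= sizeof(dst)); /* constrain the input space */
    copy_prefix(dst, src, n, sizeof(dst));
  }

Shrink dst (or drop the assume) and the model checker responds with a concrete counter-example trace instead of a pass.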


I'm definitely digging into this, thank you!

I'm a little concerned about ESBMC's "simulating a finite prefix of the program execution with all possible inputs" when important software like operating systems is intended to run indefinitely. Would they start with a valid state (filesystem, process list) and prove that no invalid state is reachable within n syscalls or interrupts? Maybe what I was hoping for was proving invariants.


So, the way I deal with this is to provide an exit condition in the loop that is triggered non-deterministically by the model checker but never triggered at runtime. This allows for termination, which the model check requires, while preserving the loop.
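
As a rough sketch (assuming a build that defines MODEL_CHECK only when compiling for the model checker; the function names are illustrative):

  #include <stdbool.h>
  
  _Bool nondet_bool(void);   /* nondeterministic input under CBMC */
  
  /* True on some path during model checking, so the loop has a terminating
     execution; a constant false in the production build. */
  static bool model_check_exit(void) {
  #ifdef MODEL_CHECK
    return nondet_bool();
  #else
    return false;
  #endif
  }
  
  void command_response_loop(void) {
    for (;;) {
      /* ... read a command, dispatch it, send the response ... */
      if (model_check_exit()) {
        break;
      }
    }
  }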

The best way that I've found to model check software is to decompose the software by function, defining function contracts and invariants along the way. To reduce complexity, shadow functions that follow the same function contracts and maintain the same invariants can be substituted. A shadow function uses non-determinism to exercise the same range of behaviors that the real function provides in a way that is simpler for the model checker to verify. When verifying a function with the model checker, you can substitute functions it calls with these shadow functions to reduce the complexity of the model check.

The example server I'm using for the book initiates a TLS connection with the client. Each connection is maintained by a fiber that enters into a command-response loop. When verifying this loop function in the model checker, I can shadow the functions that read the command and send the response. As long as these shadows follow the same function contracts as the real functions, I can verify that the loop function is correct.

For instance, the read command function returns a command in the output variable on success that must be freed by the caller. The dispatch function, which dispatches the command and sends the response, expects this command to be valid and treats it as read-only data with a lifetime beyond that of the dispatch function. Finally, the loop cleans up the command.

All of this can be specified using the function contracts along with the semantics of the real functions and the shadow functions. We know based on the contracts that the command object is created on success, and we know based on the abstract model that it must be cleaned up. If, for instance, we freed it before calling dispatch, the abstract model would find a UAF counter-example.
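
To make the shadow idea concrete, here is a rough sketch of what a shadow for the read-command function might look like (the type and names are illustrative, not the book's actual code; nondet_* is the CBMC convention again):

  #include <stdlib.h>
  
  typedef struct command { int opcode; } command_t;   /* illustrative type */
  
  _Bool nondet_bool(void);
  int nondet_int(void);
  
  /* Contract shared with the real read function: on success (returns 0),
     *cmd receives a valid command that the caller owns and must free;
     on failure (returns nonzero), *cmd is left untouched. */
  int shadow_read_command(command_t **cmd) {
    if (nondet_bool()) {
      return -1;                    /* nondeterministically fail */
    }
    command_t *c = malloc(sizeof(*c));
    if (c == NULL) {
      return -1;
    }
    c->opcode = nondet_int();       /* exercise every possible command */
    *cmd = c;
    return 0;
  }

Verifying the loop against this shadow instead of the real reader, the model checker surfaces any path where the command is freed too early (a use-after-free at dispatch) or freed twice.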

If this sounds like a lot, there is a reason why I'm writing a book on the topic of practical model checking in C, with a mature well-worked example that mirrors a real-world application.


How very interesting.

Does that detect issues at runtime, or prove statically that the code is "safe" (whatever the remit of that definition is here)?

If the former, I guess you're still at the mercy of test coverage and all the pitfalls it entails.


Model checking employs abstract interpretation, so it's similar to static analysis. I hesitate to use the latter term, because it's often used to describe how linting works. This is significantly more advanced.

The model checker translates an abstract model of the code's execution into constraints for an SMT solver, and uses this to find counter-examples. The SMT solver works backward from failure cases to discover a combination of inputs or program state that can reach any one of them. This traces memory, data flow, and control flow. It's actually a pretty neat thing.
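
A toy example of what "working backward from failure cases" looks like in practice (hypothetical code, using CBMC's nondet_* convention):

  int nondet_int(void);   /* nondeterministic input */
  
  int classify(int x) {
    int buckets[4] = { 0, 0, 0, 0 };
    int i = x % 5;        /* bug: should be x % 4 */
    return buckets[i];    /* out of bounds when i is 4 (or negative) */
  }
  
  void harness(void) {
    classify(nondet_int());
  }

Instead of trying inputs one at a time, the solver derives that, say, x = 4 (or any negative x) reaches the out-of-bounds index, and reports that assignment as a counter-example trace.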

It doesn't require existing test coverage or anything at runtime to work.


That's the intent of the document - to reach out to people for guidance. The document under discussion is a request for comment from industry.


Yep. I'm writing up a response letter based on my own work with model checked C. It's not the language but the process and the tooling that matters. It is definitely true that the industry has been quite lax with memory safety, but the solution isn't to rewrite everything in yet another language. The solution is to tighten development processes and put forward a plan to make existing code safer.


> It's not the language but the process and the tooling that matters.

You're missing some other critical components: the developers, and the costs.

If you come up with processes and tooling that is difficult to use widely, you're going to negatively impact your ability to deliver. That's not a trade-off you can ignore. If the cost of using C++ safely ends up being that only (say) 10% of the developers who currently use it will be able to keep doing that -- that on its own might justify a government policy decision to avoid it completely.

> but the solution isn't to rewrite everything in yet another language

How many times more effort would it be to rewrite a typical C++ function in a memory-safe language vs. verify the equivalent guarantees with model-checking tools, in your view?

Like how much actual work are you saving here? And how do the resulting turnaround times (say, compile/verification/etc. times) compare?


> You're missing some other critical components: the developers, and the costs.

No, I'm not.

> If you come up with processes and tooling that is difficult to use widely,

That's an unfounded assumption. The tooling and processes I have developed are no more difficult to use than unit testing.

> How many times more effort would it be...

I'd say compared to the 10% or so overhead of model checking, it would be approximately 8X as difficult, given that this requires learning a new language, using it correctly, and rebuilding the semantics in a way that is safe in that language. But, first, you need to learn new frameworks, rebuild all of the testing and quality assurance that went into the first development process, and rebuild the same level of trust as the original. Writing software is the easy part. It's all of the other stuff, like analysis, testing, confidence building, and engineering that costs money, and this would have to be redone from scratch, as the artifact from that original effort is being thrown away for a rewrite. Remember: the cost of rewriting software isn't just the cost of sitting down and banging out code.

> And how do the resulting turnaround times (say, compile/verification/etc. times) compare?

Model checking is actually pretty fast if you use proper shadowing. Compiling with C is lightning fast. I'd say that using a model checker is a little slower than compiling with Rust.

Also, you are mistaken that using a memory-safe language gives equivalent guarantees. It does not. A model checker allows you to write safer code. There are more errors than memory errors. Memory safety fixes many of the problems, but not nearly enough. So, after that rewrite, you'll also need to add a model checker or some other kind of formal methods to be as safe as just using the model checker to begin with. For Rust, that's Kani. It's not an apples to apples comparison.


>> If you come up with processes and tooling that is difficult to use widely,

> That's an unfounded assumption. The tooling and processes I have developed are no more difficult to use than unit testing.

My assumption was just founded on the reality of how few developers use model-checking tools, and how difficult they find them. If your tools are radically better, and they work for C++, that's obviously amazing.

Could you give us an idea of what the equivalent of this simple C++ code would look like, so we can see how it looks in comparison? You know, with whatever bells and whistles you need to add to it for the model checker to verify its correctness:

  #include <cstddef>
  #include <string>
  #include <unordered_map>
  
  template<class Char>
  auto make_map(size_t argc, const Char *const argv[]) {
    using S = std::basic_string<Char>;
    std::unordered_map<S, S> map;
    for (size_t i = 0; i + 2 <= argc; i += 2) {
      map[S(argv[i])] = S(argv[i + 1]);
    }
    return map;
  }
  
  int process(const std::unordered_map<std::string, std::string> &);
  
  int main(int argc, char *argv[]) {
    return process(make_map(argc, argv));
  }


The reason so few developers use model checking tools is that there is limited documentation out there for how to use them effectively. That is changing. Hell, I'm writing a book on the subject.

These tools aren't difficult to use. They just aren't well documented.

I've got better things to do with my afternoon than model check random bits of code in a forum. You wouldn't ask someone talking about a unit testing framework to unit test an example that you write up. If you're curious, I'm writing a book on model checking C, which will be going off to the publisher sometime in 2025. There are also examples online.


> I've got better things to do with my afternoon than model check random bits of code in a forum. You wouldn't ask someone talking about a unit testing framework to unit test an example that you write up.

I totally would, and I would also reply to their example. I've exchanged code examples in the past right here on HN for discussions like this.

You obviously don't have to do anything, but if you're going to claim it's only a 10% difference compared to unit testing -- expect people to be skeptical unless you can show side-by-side examples.

> If you're curious, I'm writing a book on model checking C, which will be going off to the publisher sometime in 2025. There are also examples online.

I'm not at all curious about model checking C. I've seen that before numerous times, and frankly, even if you could cut the development cost in C by 50% through model checking, I still wouldn't be interested.

You keep going back to C... in a conversation that's supposed to cover C++. I'm not sure why you do this, if the solutions for the two are actually comparable -- a lot of the world is on C++, so you would find a much, much broader audience if you cater to that. My suggestion (and you're obviously welcome to ignore me, but I'm just letting you know my reaction as a reader) is to focus your response letter & book at least partly on visibly demonstrating how your proposed solutions would have similar success on C++. If you don't sell people on this being a solution for C++, I doubt your writing will make many people change their minds about switching to a different language.


I'm focused on C because a lot of our critical infrastructure is written in C. The Linux / BSD kernels, firmware, etc. It's also where my interest is. Model checked C is a good enough balance of cognitive load, safety, and time to market for my purposes. Others may disagree.

Tooling exists for C++. That shouldn't be too surprising, given that the C++ abstract machine model is not much more complex than C's and has complexities similar to Rust's. The same techniques work there as well. If someone wants to tap that audience by translating this process to C++, they are welcome to do so.


What are some of the tools you'd recommend?

In my case I'm working with Nginx and I'd like to make a few custom modules safer.


For C, I recommend CBMC.


“The development of new product lines for use in service of critical infrastructure or [national critical functions] NCFs in a memory-unsafe language (e.g., C or C++) where there are readily available alternative memory-safe languages that could be used is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety.”

Now that's a strong statement.

But it's real. There are so many state actors doing cyberattacks now that everything needs to be much tougher. Otherwise, someday soon much of the world stops working.


I think it is important to recognize that this will only apply to generic normal software, which is already mostly written in memory-safe languages for government e.g. Java. Data infrastructure with state-of-the-art scale, performance, and workload requirements will continue to be written in C++ for the foreseeable future, largely because there is no alternative. Maybe Zig some day?

A pattern I’ve seen (and used myself) is that the heavy core engines are written in C++ for performance and maintainability but skinned with a Rust API wrapper. Rust is closer to a “better Java” than a C++ replacement for extreme systems software.


Calling Rust a "better Java" and not a credible alternative for C++ for "extreme systems software" without further detail is an opinion that I find hard to take seriously. So I'd like to understand your viewpoint better. Could you elaborate on what aspects in C++ specifically make it so much better suited for these tasks from performance and maintainability perspective?


I shudder to think just how badly the US, Russia, and China have hacked each other by this point.


I wonder if this is a real solution. "Memory safety" has sure been pushed hard the last few years, but this feels more like a "we need to do something, this is something, we should do this" kind of response than anything that will really address the issue.

If security-through-virtualization had been the fad lately, would that have been proposed instead?


It is not a real solution. The people delivering memory-safe code today do not think their systems are secure against individual, lone attackers, let alone fully-funded state actors. The overwhelming majority of them -- and of software developers and software security professionals generally -- probably think it is literally impossible to design and develop usable systems secure against such threats, i.e. systems that can achieve the desired requirements.

"Let us do this thing that literally every practitioner thinks cannot achieve the requirements, and maybe we will accidentally meet the requirements in spite of it" is a bona-fide insane strategy. It only makes sense if those are not "requirements", just nice-to-haves; which, to be fair, is the state of software security incentives today.

If you actually want to be secure against state actors, you need to start from things that work, or at least things that people believe could, in principle, work and then work down. Historically, there were systems certified according to the TCSEC Orange Book that, ostensibly, the DoD at the time, 80s to 90s, believed were secure against state actors. A slightly more modern example would be the Common Criteria SKPP which required NSA evaluation that any certified system reached such requirements.

But if you think they overestimated the security of such systems, so there are no actual examples of working solutions, then it still makes no sense to go with things that people know certainly do not work. You still need to at least start from things that people believe could be secure against state actors otherwise you have already failed before you even started.


> If you actually want to be secure against state actors, you need to start from things that work, or at least things that people believe could, in principle, work and then work down. Historically, there were systems certified according to the TCSEC Orange Book that, ostensibly, the DoD at the time, 80s to 90s, believed were secure against state actors. A slightly more modern example would be the Common Criteria SKPP which required NSA evaluation that any certified system reached such requirements.

Right. I was around for that era and worked on some of those systems.

NSA's first approach to operating system certification used the same approach they used for validating locks and filing cabinets. They had teams try to break in. If they succeeded, the vendor was told of the vulnerabilities and got a second try. If a NSA team could break in on the second try, the product was rejected.

Vendors screamed. There were a few early successes. A few very limited operating systems for specific military needs. Something for Prime minicomputers. Nothing mainstream.

The Common Criteria approach allows third-party labs to do the testing, and vendors can try over and over until success is achieved. That is extremely expensive.

There are some current successes. [1][2] These are both real-time embedded operating systems.

[1] https://www.acsac.org/2009/program/case/ccsc-Kleidermacher.p...

[2] https://provenrun.com/provencore/


> That is extremely expensive.

And it proves nothing. And it's as expensive for every update, so forget updates.


Provenrun, used in military aircraft, has 100% formal proof coverage on the microkernel.

We know how to approach this. You use a brutally simple microkernel such as seL4 and do full proofs of correctness on it. There's a performance penalty for microkernels, maybe 20%, because there's more copying. There's a huge cost to making modifications, so modifications are rare.

The trouble with seL4 is that it's not much more than a hypervisor. People tend to run Linux on top of it, which loses most of the security benefits.


> The trouble with seL4 is that it's not much more than a hypervisor. People tend to run Linux on top of it, which loses most of the security benefits.

Well, yeah, that's a problem.

But the bigger problem is that this works for jets, as long as you don't need updates. It doesn't work for general purpose computers, for office productivity software, for databases (is there an RDBMS with a correctness proof?), etc. It's not that one couldn't build such things, it's that the cost would be absolutely prohibitive.


It's not for everything. But the serious verification techniques should be mandatory in critical infrastructure. Routers, BGP nodes, and firewalls would be a good place to start. Embedded systems that control important things - train control, power distribution, pipelines, water and sewer. Get those nailed down hard.

Diagrams like this scare me.[1]

[1] https://new.abb.com/docs/librariesprovider78/eventos/jjtts-2...


> Routers, BGP, ...

Well, but those need new features from time to time, and certification would make that nigh impossible. I'd settle for memory-safe languages as the happy middle of the road.


seL4 is a bit more than a hypervisor, but it's definitely very low-level. In terms of a useful seL4-based system, you may want to look at https://trustworthy.systems/projects/LionsOS/ – not yet verified, but will be.


Ooooo thanks for the rare gem of a comment to give me fun things to look up.


I think certification overestimates security, absolutely. Certification proves nothing.

You can use theorem provers to prove correctness, but you can't prove that the business logic is correct. There's a degree to which you just cannot prevent security vulnerabilities.

But switching to memory-safe languages will reduce vulnerabilities by 90%. That's not nothing.


While these types of failures are 50-70% of the problem, the 30%+ that's left seems like a big problem too, and the black hats will just concentrate more on those if the low-hanging fruit is removed with Rust, C#, Python, whatever.


50-70% defect reduction is significant, though.

It is true that black hats are going to focus on the remainder pretty much by definition because there’s no other choice. The rest is a problem and it needs a solution. But the fact that it exists is not a sound argument against choosing a memory-safe language.

Current solutions are not ideal but they still eliminate a huge portion of defects. Today we can avoid all the memory issues and we can also focus on the next biggest category of defects. If we keep halving possible defects, it won't take long before software is near defect-free.


I guess my point was it won't be a "tidal wave" of solved security issues. Now all the effort that went into finding buffer overflow and use-after-free errors just gets shifted to combing through code for logic errors and missed opportunities to tighten up checks. It's not going to be a 50-70% reduction. Maybe half that? I mean it would help, but it's not going to fix the problem in a huge way at all.


It's a 90% solution, not a 100% solution. We don't have a 100% solution, not at any affordable cost.


>everything needs to be much tougher

I think you are being charitable to it. A less charitable version would be "Nobody ever got fired for buying IBM". :)


Cool. Is this going to require phasing out systems written in C/C++ with horrible security track records like Linux and Windows? Or are they going to get a "too critical to be improved" exemption?


Do you have an OS written in a memory safe language with a good security track record we can switch to? If we start building now we might have a prototype by 2030.


There are plenty with good security track records, formal specifications, formal proofs of various security properties, and decades of deployments.

SCOMP, GEMSOS, INTEGRITY-178, seL4.


seL4 is far more rigorous than the others. The problem today is a widespread lack of understanding of software engineering approaches and tools for formal verification of source code, models, compiler toolchains, and binaries, and also a lack of diligence and effort in implementing rigorous practices.


Don’t forget Temple OS


> Eschew flamebait. Avoid generic tangents. Omit internet tropes.

https://news.ycombinator.com/newsguidelines.html


god seL4 everywhere would be awesome.


This is the most HN comment I’ve seen in recent memory.


> "Companies have until January 1, 2026, to create memory safety roadmaps."

This doesn't bode well for open source software not backed by a "company" that can write these roadmaps and deliver on them.

In other words, it sounds like Microsoft's, Oracle's, and others' lobbying has been effective.


FOSS, en masse, probably doesn't do MISRA or consistent testing, relies on random people, may not sign code or artifacts, and could take a hobby/complacency attitude. For software deemed "critical", the feds are free to donate money and assistance to help critical projects formalize and improve themselves rather than proclaiming unfunded mandates on volunteers.


Surely there is going to be an enormous list of exemptions submitted and approved immediately.

My quick skim did not make this clear: is this for software only or would hardware appliances also count? Routers, modems, PLCs used in gas centrifuges, etc. are just as attractive for exploitation.


I think the really big difference with software is that software is much more likely to be used in unexpected ways where vulnerabilities that weren't important at time of original design end up exposed on the internet for exploitation by everyone.


Why does that not apply to router firmware for example? The primary role of a domestic router is to be exposed to the internet.


On the other end of the "hardware appliance" spectrum, there is the F-35 Joint Strike Fighter whose software is written mostly in C and C++, heavily networked, and will be part of any future "loyal wingman" drone program. Each plane costs over 80 million US dollars.



How would any software be FIPS compliant? Is there a "memory-safe" implementation of TLS that is also FIPS certified?


rustls together with aws-lc-rs (and its FIPS mode) can do FIPS-compatible cryptography together with memory-safe TLS processing around it. The FIPS-compatible cryptography is provided in C, but low-level crypto routines are usually not where memory safety issues hide.

https://www.memorysafety.org/blog/rustls-with-aws-crypto-bac...


This misses the point. C/C++ are unsafe because that’s how implementations happen to work today.

C/C++ can be memory safe. Fil-C/C++ is a good example. It’s not a new language, just a different way of implementing it.

Here’s more info about Fil-C: https://github.com/pizlonator/llvm-project-deluge/blob/delug...


While security is a legitimate concern, this article leaves the impression of a paid-for piece meant to scare us into paying more money to security consultants and/or bending over to big vendors.


That means you have to use Rust for system-level programming then? There is really no other alternative for systems programming, as far as memory safety is concerned, that uses no GC or VM.


You can have a systems programming language with garbage collection. Most concerns regarding GC are around unpredictable pauses, but you can absolutely design a predictable GC suitable for real-time applications.

Rust is not the only option. The Ada programming language was developed in the early 1980s, primarily for the U.S. Department of Defense (DoD). Ada was designed for safety-critical embedded systems and real-time applications, especially in areas like avionics and military systems. In 1991 the DoD mandated Ada, but then reversed the decision in 1997. As I read the linked article and the government's "strong stance", I'm wondering if history will repeat itself.


I was going to post that the same sort of mandate had applied to Ada, and was eventually abandoned. I remember the Ada mandate, but I recall obtaining waivers from it (on specific DoD programs) before 1991. I predict that this mandate will suffer the same fate as the previous one.

A quick search confirms my memory:

The DoD Ada language mandate, which required the use of the Ada programming language for most Department of Defense software projects, was established on March 30, 1987 through Department of Defense Directive 3405.2.

https://dl.acm.org/doi/10.1145/255471.255599



GC or VM may be perfectly fine for system-level programming. Microsoft's Project Singularity had everything, including the kernel, using GC, IIRC.


No, you can still use C++ for the foreseeable future for applications that require it. They are saying don’t use C++ in contexts that clearly don’t need it.

There are some important applications in US DoD where Rust is not a viable replacement for C++. You'd need to broadly disable memory safety, and the performance is still worse. C++ isn't going anywhere for those applications because it offers material advantages over Rust. And to be frank, this kind of software rarely has memory safety issues anyway; the kinds of things that lead to memory unsafety are the kinds of things that also produce poor code on unrelated metrics.


Perhaps Ada?


Also MISRA C and C++.


That sounds great. I love Rust.


There's got to be billions of lines of critical C/C++ code left. By 2026? Doesn't sound realistic.


The title is wrong. The guidance only says they should have roadmaps by 2026 unless the end of support date is before 2030, then they can tell customers to get bent and deal with C/C++ until support runs out. Roadmaps are easy to produce, so compliance will be easy. Follow through will be more interesting to see. And, again, the guidance is that they should have roadmaps. They aren't mandating anything.


Thanks.


This seems somewhat incoherent and is too focused on shallow claims about languages instead of trying to understand why the memory bugs happened in the first place.

Are unsafe code blocks in Rust or C# okay? Presumably yes if there are good reasons to do so, sometimes it is necessary. But then as a matter of policy, why is Rust meaningfully different than something like using Valgrind with C++? Of course there are substantive differences from a developer's perspective. But just as a stressed or cynical C++ developer might give up on solving the Valgrind error, a similar Rust developer might give up fighting the borrow checker and add "unsafe" to their buggy code. A federal impetus to switch from C++ to Rust would seem to incentivize this laziness further.

To be clear this isn't a criticism of Rust's design or implementation - demarcated blocks of unsafe code is pragmatic and sensible. The problem is how humans build software. In this sense I don't think we've really settled whether "rewrite the code in Rust" is actually safer than "redo our technical management to include automated memcheck testing and paired code reviews." At the very least, I don't think the latter is insufficient, and the feds are being too heavy-handed by making this about language recommendations.

[If it were up to me I would rewrite it in Rust! Saying "the feds made me" is an excellent excuse :) But I don't like the feds making such strong recommendations/demands when I feel the facts are still quite murky. There simply haven't been enough case studies.]

I also think the feds here (along with techies in general) are undervaluing formal specifications and underestimating the risk of undefined behavior.[1] Rust is very stable but it's not formally specified and until recently had known bugs in its very design, not merely in the rustc implementation. (I think those bugs finally got fixed this year.) Considering how cutting-edge Rust is I am sure there are other "theory bugs" somewhere. The point is that critical software also needs stability, and it is unwise to chase memory safety without considering the risks of being tied to an old version of a compiler, especially with unsafe code.

Again: not saying that Rust is automatically bad because it isn't formally specified. But these issues should at least get lip service.

[1] E.g. this fairly detailed document doesn't discuss this at all: https://www.cisa.gov/sites/default/files/2023-12/The-Case-fo...


Rust's borrow checker proves some invariants that the code always upholds. Valgrind only checks that this run hasn't triggered some undefined behavior so far; it's always possible that (ab)using it further, or another run with different input, might.
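
A contrived C illustration of the difference (hypothetical code): Valgrind only reports the use-after-free below if the run actually passes --reset, whereas reasoning about the code itself rejects it for every input.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  
  int main(int argc, char *argv[]) {
    char *name = strdup("default");
    if (argc > 1 && strcmp(argv[1], "--reset") == 0) {
      free(name);             /* only reached when --reset is passed */
    }
    printf("%s\n", name);     /* use-after-free, but only on that path */
    free(name);               /* and a double free on the same path */
    return 0;
  }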


> But then as a matter of policy, why is Rust meaningfully different than something like using Valgrind with C++? Of course there are substantive differences from a developer's perspective.

I did not say they were the same or even equivalent, though I was sloppy with synecdoche and didn't specifically mean Valgrind. My point is that unsafe Rust without Valgrind is more dangerous than C++ with Valgrind, and the feds are not adequately considering this when thinking about how large organizations might rewrite their C++ applications.

"We will rewrite it in 100% safe Rust" is guaranteed way to eliminate virtually all C++ memory errors. It does not follow that "we will rewrite it in 95% safe Rust, except for the tricky bits" will eliminate 95% of C++ memory errors.


> My point is that unsafe Rust without Valgrind is more dangerous than C++ with Valgrind

Your point is wrong is the issue.

Doing unsafe Rust is harder – there is some syntactic salt, plus doing unsafe code in Rust requires a different mindset.

But it's also safer. Why? Because the borrow checker is never turned off. Plus you get warnings, lints, Miri, and Clippy flagging your code when it's wrong.

> It does not follow that "we will rewrite it in 95% safe Rust, except for the tricky bits" will eliminate 95% of C++ memory errors.

Again, wrong. It should go like this: "A rewrite in Rust will be memory safe (assuming we didn't bungle up the unsafe bits)".

The compiler relies on you to check the boundary between safe and unsafe. If that's correct, great. If not, you only have to check like 5% of your codebase, which is 20x less than 100%. Maybe I'm not a 20x developer.

Which is in stark contrast to C++, where you either write your safe abstractions and pray to God that others won't break them, or you make Vec.pop unsafe because you can't be bothered to return an Optional or null.


> To be clear this isn't a criticism of Rust's design or implementation - demarcated blocks of unsafe code is pragmatic and sensible. The problem is how humans build software. In this sense I don't think we've really settled whether "rewrite the code in Rust" is actually safer than "redo our technical management to include automated memcheck testing and paired code reviews." At the very least, I don't think the latter is insufficient

I think the latter is woefully insufficient, having seen critical C++ memory safety bugs that got past every sanitizer. For me the debate was settled once our team at Meta shipped a large, high-performance Rust service, built over many years, that simply did not have memory safety bugs -- and its success became a crucial part of why Rust is now a first-class language at Meta.


YAY!!! are we gonna have more formal verification???? woohoo!!

oh its about memory safety.


So... F35?


I mean... the last three critical nationwide software failures had nothing to do with memory safety... but okay. Shouldn't we base recommendations on actual experience?

All the memory safety in the world can't save you from a dumb vendor just screwing millions of computers at once.


Just wait five days and this will all go away.


Nope, the US government seems to be deadly serious about this, and with good reason. Software security is absolutely a critical national security issue at this point.


You can't just hand wave the entirety of the C/C++ ecosystem away.

There is no second option for many applications or hardware


Both admins want this to happen, one will just grant more exemptions than the other.


DOD mandated that all systems be written in Ada; that lasted about 5 minutes. DOD also fails to properly fund programs and properly manage them, so there are critical systems literally still depending on DOS.


CISA is stupid. Logic bugs don't go away with Rust.


People can downvote all they want. But CISA needs a talking point, and "Rust solves it" is one they won't ever let go. Go talk to those freaks and remember I told you so.


I regularly solve logic bugs using affine typing and shared xor mutability, so clearly some subset of logic bugs is solved with Rust, no?


Ok, I didn't really define logic bugs. I think of things like race conditions as memory bugs because it's improper access to the same variable.

So I suppose All Bugs Are Logic Bugs.

But I really meant that many software vulns aren't even that fancy. Sure, if you have something like an iPhone, which has whole companies trying to hack it, then eventually the bugs you have left are fancy heap overflows. But lots of products have logic errors, like mischecking creds, etc.

I will also pick on CISA for recommending a language that requires something like cargo. Why is it a good idea for critical infrastructure to require internet access to compile its code? CISA is supposed to be concerned about the fact that critical infra is privately held, and they should encourage secure practices. So suggest a language that in practice requires internet access? This is absurd to me.


Cargo does not require internet access to compile. You need it just to download packages once (which you obviously do in any ecosystem). Cargo also cryptographically verifies that downloaded packages haven't been tampered with.

Affine types also help with credential checking! Newtype wrappers synergize really well with them. I wrote a post about some of this a few years ago: https://sunshowers.io/posts/types/


C++ is only "memory-unsafe" if you are hiring bottom of the barrel talent.

Likely the same kind of folks for which we had to change car manuals from including schematics and repair instructions to including warnings about not drinking the coolant...


I think that argument had more merit back when C++ was a much simpler language.

I've been programming in C++ since 1991, and the language / standard library are now so complex that it's hard to be 100% confident that I understand the meaning of any non-trivial code.


I have to massively disagree here. I'm far more confident about code written in the modern style of C++ than I am in old school C++.

Modern C++ with RAII, smart pointers, constexpr, concepts, and heavy instrumentation is so much easier to reason about. It's actually close to Alexander Stepanov's idealised C++.


Maybe we're talking about different subsets of C++.

IIUC, you're referring to codebases which limit themselves to what's often called "modern C++".

I'm working with codebases that draw from a gnarly combination of older and newer parts of the current language spec.

So in my experience, very few C++ programmers are familiar with every part of the current language spec that has relevance to these codebases.

I.e., their code isn't obviously correct regarding many aspects of correctness, including memory safety.


I can only partially agree. The new stuff often breaks in much nastier ways when it does eventually break. The worst of that old school C with objects code (and the toolchains that built it) can be dissected using simple tools and a lot of single stepping. Even when things miscompiled, you had a reasonable shot of figuring out the boundary conditions so you could avoid it in the future.

A lot of the modern stuff isn't like that. There's a thousand different rules governing ten thousand different constructs that interact in fun and dangerous ways.


You must have had incredible luck to never have made a mistake, even ones you didn't realize you made.


Hate that all you want, but you know it's true if you've been in the industry for more than a couple of years.

It's a mistake to cater to the lowest common denominator. We'll all be coding in Scratch in a decade if we keep down this path.


> C++ is only "memory-unsafe" if you are hiring bottom of the barrel talent.

Stated differently-- a bad hiring process can never raise the risk of memory corruption when you use a memory-safe language.


Even if I took your word for it: are you surprised a government would hire as cheaply as possible?


Having worked in government: Well, yes. Defense contractors aren't cheap, and they could probably hire cheaper talent if they were motivated/willing to clear away some red tape. That said, not everyone at a defense contractor is exactly top talent.


Can we please be responsible professionals and move past this. This was settled many years ago.



