Took me a while to realise this is a project by the French National Cybersecurity Agency (ANSSI), not just a random GitHub repo from someone with Important Opinions on how to use a programming language.
All of this advice makes a lot of sense but a lot of recommendations cannot easily be checked for or enforced. It would be nice to be able to quickly put Clippy into "ANSSI mode" to check and enforce all of these requirements.
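There's no official "ANSSI mode" today, but as a hypothetical sketch you can get partway there by denying a few overlapping lints via Cargo's `[lints]` table (available since Rust 1.74). The specific lint selection here is my own guess at what such a profile might include, not anything the guide endorses:

```toml
# Hypothetical "strict" profile approximating a few of the guide's rules.
[lints.rust]
unsafe_code = "deny"        # forbid unsafe blocks outside audited crates

[lints.clippy]
mem_forget  = "deny"        # flags std::mem::forget (resource leaks)
panic       = "deny"        # flags explicit panic!()
unwrap_used = "deny"        # flags .unwrap() on Option/Result
```

Many of the guide's recommendations (dependency vetting, justification of versions) aren't lintable at all, which is probably why no such mode exists.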
Explaining why something is bad is still necessary if you rely on automated code review. Clippy does this, with specific codes for specific (potential) problems that the linter catches. You can learn a lot about patterns in Rust just by reading through the bad patterns and their suggested alternatives.
I fully agree, but then it would be integrated with Clippy or in a wiki with a page per code (like shellcheck).
The document in this post is helpful because it does the first step of explaining the rationales, which can then eventually be automated (and copy-pasted into the explanations).
I agree with you all. Bake it into Clippy or make a different Clippy, aka seclippy. I also appreciate the docs, so I understand the "why" with examples. Also, make an attribute that blocks some of these things.
According to French Wikipedia, one of the names for Clippy in French is Trombine. Given that the other user mentioned it means "face", and given that trombone is, I believe, a paper clip (my French education unfortunately did not get this far), I suspect it's used as a kind of cute pun.
Kind of like how "Clippy" is obviously derived from the word "paperclip", but has slightly cuter and more friendly connotations. (Except for those experienced with Clippy, who may have other connotations at this point...)
> In a secure Rust development, the code must not leak memory or resource in particular via Box::leak.
Uh, so is Box::leak forbidden in general? Because creating a &'static in some initialization code that lives for the rest of the program is a rather common use case… https://docs.rs/log/0.4.17/src/log/lib.rs.html#1408
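A minimal sketch of that common use case (the `Config` struct and `load_config` function are made up for illustration): leak a Box once during startup to obtain a `&'static` that intentionally lives for the rest of the program.

```rust
// One-time initialisation via Box::leak: the allocation is leaked
// exactly once at startup, so nothing is repeatedly "lost".
struct Config {
    verbose: bool,
}

fn load_config() -> &'static Config {
    // Box::leak consumes the Box and hands back a &'static mut,
    // which we immediately downgrade to a shared &'static.
    Box::leak(Box::new(Config { verbose: true }))
}

fn main() {
    let cfg: &'static Config = load_config();
    assert!(cfg.verbose);
    println!("verbose = {}", cfg.verbose);
}
```

A blanket ban on `Box::leak` would rule this pattern out even though the "leak" is bounded and deliberate.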
Hm, good point. (It's implemented as a Once (which is an atomic usize) and an UnsafeCell for holding the actual data.)
And while using SyncOnceCell normally might incur some small checking overhead on each access, you can use it to obtain a &'static. https://play.rust-lang.org/?version=nightly&mode=debug&editi...
But then, by tialaramex's logic, isn't it just as bad as Box::leak? (Maybe not quite as, because it can leak maximally one thing. You can't do this in a loop.)
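For the record, `SyncOnceCell` was stabilised under the name `std::sync::OnceLock`. A sketch of the "leak maximally one thing" pattern (the `GREETING` static and `greeting` helper are invented for the example):

```rust
use std::sync::OnceLock;

// A static OnceLock hands out a &'static after one-time
// initialisation, at the cost of a cheap atomic check per access.
// Unlike Box::leak, it can only ever hold one value.
static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    GREETING.get_or_init(|| String::from("hello"))
}

fn main() {
    let a: &'static str = greeting();
    let b: &'static str = greeting();
    // Both calls return a reference to the same stored value.
    assert!(std::ptr::eq(a, b));
    println!("{a}");
}
```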
It's usually considered a leak if we can't release it.
Arguably you can release the thing you leaked with Box::leak(). Just tell Box you want a new box that's just a thin wrapper for your mutable reference (this is unsafe), then drop the box. But generally the purpose of Box::leak() is to never release the thing inside the box.
So, yes, you could argue semantically about whether this is "really" a leak but in practice that's why it's named Box::leak()
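The "thin wrapper" trick described above, sketched out. This is only sound if nothing else still uses the leaked reference afterwards:

```rust
fn main() {
    // Leak a Box, getting a &'static mut.
    let leaked: &'static mut String = Box::leak(Box::new(String::from("secret")));
    leaked.push('!');
    assert_eq!(leaked, "secret!");

    // SAFETY: `leaked` came from Box::leak and is never used again,
    // so reconstructing the Box and dropping it frees the allocation.
    unsafe {
        drop(Box::from_raw(leaked as *mut String));
    }
    println!("leaked allocation released");
}
```

So the leak is reversible in principle, but the API's name signals the intended use: memory you never plan to release.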
My definition of leak would generally be what Valgrind would call a leak. Wouldn't Valgrind not call this a leak given that there would still be a reference remaining at program exit?
If my understanding is correct, the reference could get dropped at the end of the scope while the pointed-at memory remains allocated – so Valgrind would report it as a leak, but I haven't tested it.
Sadly these guidelines are as incoherent as last time. For example, the rules absolutely forbid using an `unsafe` block to call a rust function (since this falls in none of the allowed uses of `unsafe`), but then the rules turn around and require the use of `unsafe` to zero out variables.
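To illustrate the second half of that contradiction: the usual way to zero a secret so the compiler can't optimise the store away is `write_volatile`, which does require an `unsafe` block (crates like `zeroize` wrap this pattern behind a safe API). A minimal sketch:

```rust
use std::ptr;

// Overwrite a buffer with volatile stores so the writes cannot be
// elided as dead stores by the optimiser.
fn zero(buf: &mut [u8]) {
    for byte in buf.iter_mut() {
        // SAFETY: `byte` is a valid, aligned, exclusive reference.
        unsafe { ptr::write_volatile(byte, 0) };
    }
}

fn main() {
    let mut secret = *b"hunter2";
    zero(&mut secret);
    assert_eq!(secret, [0u8; 7]);
    println!("secret zeroed");
}
```

So a rule that simultaneously restricts `unsafe` to FFI-style uses and mandates zeroisation is hard to satisfy without a carve-out.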
I don't think vendoring helps at all. People can't review all the dependencies' code. And recursively, for the dependencies of the dependencies. At some point, you delegate trust.
What helps is having provenance information, signing, and SBOMs. One example: https://sigstore.dev (a vendor-neutral effort from the Linux Foundation).
Not only does the guide not mention it, it seems to actively disagree with you:
> The cargo-outdated tool must be used to check dependencies status. Then, each outdated dependency must be updated or the choice of the version must be justified.
If there's something missing a mention, it is imho that Cargo.lock files should be committed, and also included e.g. in Docker builds. (I don't believe in offhanded "cargo update". And even if you did accidentally type it, nothing will be built/executed yet. There's ample opportunity to git restore Cargo.lock.)
> one casual offhand cargo update away from catastrophe
There is a scary number of libraries that don't even bother specifying patch levels and will auto-update everything upon request without question. Even worse, Cargo doesn't make it explicit: the requirement "1.0" is treated as "^1.0" in the Cargo manifest, which allows any semver-compatible 1.x release, not just patch updates. Please refer to my previous comment about bad actors not respecting semver as much as many would love them to.
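For reference, how Cargo interprets the common requirement syntaxes (the crate names below are just placeholders):

```toml
[dependencies]
serde = "1.0"      # caret by default: >=1.0.0, <2.0.0
rand  = "~0.8.5"   # tilde: >=0.8.5, <0.9.0 (patch updates only)
libc  = "=0.2.150" # exact pin: only 0.2.150
```

So a bare "1.0" is far looser than it looks; only "~" or "=" actually restricts updates to the patch level.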
`cargo audit` can be compromised by bad actors. `cargo update` also. If you have a security first application then always vendor, this isn't rust specific.
Sure, "cargo update" will update the dependencies – that's what it's meant to do. The point of Cargo.lock is to ensure that all compilations use the same known set of versions, otherwise it's possible for developers to use one library and the users to use a different one. With Cargo.lock a new version of library won't be used without an explicit "cargo update".
Not on crates.io, they're pretty adamant about "no scoping" approach to naming.
Scoping does not really solve injecting malicious code into the dependency tree in some update. Scoping is more of a defense against typosquatting via new crates.
You can run your own crate registry if you're particularly concerned over what gets published on crates.io.