Remember the NSO iMessage exploit?[1] copyGifFromPath didn't really copy the GIF from the path.
At some point, you do have to trust that the kernel API's documentation (or whatever) is correct[2], simply because it's physically impossible for you to exhaustively verify that each piece of software you use (transitively) has correct documentation and consistent semantics. That doesn't mean that you shouldn't audit third-party code, do code reviews, write tests, fuzz your code, and use static analysis and formal methods - in fact, you should do all of those things, if you can.
But, "don't trust comments" is a gross oversimplification. Perhaps "trust, but verify" is a better pithy saying.
[2] technically, if you did all of the above things, or found other people that did them, then you wouldn't have to trust documentation - but the vast majority of the time, most of the software you encounter will not have been thoroughly audited, tested, and fuzzed, with a nice formal specification
"Trust, but verify" is a semantically null expression that appears to have filtered out of the Soviet Union during arms reduction negotiations and to have been eagerly taken up by Reagan administration appointees.
You can only ever trust or verify. "Trust, but verify" is functionally identical to "verify", and thus equivalent to "distrust, therefore verify", with maybe a generous helping of cynicism, doublethink, and official mollification.
> "Trust, but verify" is a semantically null expression that appears to have filtered out of the Soviet Union during arms reduction negotiations.
In a modern context it means: look for errors, but not deception. That's how I approach code review.
* It could be a verbatim translation and so lose subtle semantics; "don't expect what you don't inspect" would be idiomatic
* Any Soviet saying would come from a culture used to double-speak to survive government oppression and censorship. Which is probably still new to many Americans who remember good old days.
> "Which is probably still new to many Americans who remember good old days."
It's been straight up surreal watching the world turn into the George Orwell "1984" world we were all warned about as children, and seeing the USA (Land of "Freedom") gleefully join in the dystopian "fun".
That may be a confusion of different grades of trust. For example, the difference between trusting someone's intent versus trusting their capability or trusting an outcome in an imperfect universe. Alternately, the degree to which one cooperates in advance of getting verification. A "measure" of trust, if you will.
Either way, if you insist on an excessively narrow definition of trust as being "never verifying", then yes, they're mutually exclusive... but only because it's circular logic.
I’ve always taken it to mean “don’t block people you already trust waiting on the verification to come through but do check it eventually.”
Bar tabs are a good example of this behavior. Just pour your regular a drink and assume their card will go through. In the past, checks worked like this too.
1. Let the person start doing what they set out to do.
2. Check if they actually have permission.
3a. If yes, stop.
3b. If no, block their access.
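A minimal sketch of that optimistic flow in Python, assuming an in-memory permission store (the `PERMITTED` set and the user names are invented for illustration): the action starts immediately, the check runs in the background, and access is cut off only if verification fails.

```python
import threading

PERMITTED = {"alice"}   # hypothetical permission store
blocked = set()         # users revoked after a failed check

def verify_later(user):
    # Step 2: the (possibly slow) permission lookup runs off the hot path.
    if user not in PERMITTED:
        blocked.add(user)   # Step 3b: cut off access going forward

def start_task(user):
    if user in blocked:
        return "denied"
    # Step 1: let the user proceed without waiting on verification.
    threading.Thread(target=verify_later, args=(user,)).start()
    return "started"
```

The first request from an unknown user still succeeds; only once the background check has completed do subsequent requests get blocked. That window of exposure is exactly the "luxury of trust" being spent.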
In this vein, "distrust" means you believe they may have shortchanged you, while "trust but verify" means you're also open to the possibility they gave you a $10 instead of a $5.
A prerequisite to trust-but-verify is having the luxury of trust. What you describe is a scenario where you may not afford any. In that case it's verify-or-bust :)
This is trivially stupid. "I like the Corvette, but I'd get it in red" does not mean you hate Corvettes. "I like steak, but only medium or below" does not mean you hate steak.
You don't like the Corvette. Not that Corvette. However, you like Corvettes in general. Or you only like red Corvettes.
> I like steak, but
Turn the comma into a period: I like steak. It's unconditional; steak is a big yes for you. Except it's not. You have conditions. So sometimes you like steak and sometimes the opposite. You don't like steak.
I think the interpretation of but is not as trivially stupid as you say.
You can't trust function names, but they have a better chance of being kept up to date than comments, since they're more likely to be in active use. But yeah, the only thing you can really trust is the type signature (and even that, only if the language has a strong culture of not doing random nonsense).
On the other hand, this does feel a bit whatabouty. Because yeah, you can’t check the kernel code, but you can and should check the code of direct dependencies. Not every time you do anything, but certainly whenever there’s something to be feared. Running code against untrusted inputs from the Internet is one of the few places that would justify “reverse engineer the kernel to make sure it’s safe” levels of concern, in the right conditions.
Can someone help me understand what actually happened here? As far as I can gather, the comment in the code wasn't a lie, but the overall system was complicated enough that there was a way for the thing-that-shouldn't-happen to happen anyway. So the docs weren't wrong, but there was a bug in the code that led to incorrect behavior that deviated from the docs.
Thus the hacker refrain, "lies, damned lies, and comments", for which oddly I'm unable to find any examples despite having seen it recited several times over the years. In fact, I believe I came across that rephrasing before learning of Twain's famous original. I always found it a more pithy justification for why source code comments should only explain why, not how or what.
I don’t want comments, I want commentary. Every time I’m confidently wrong in a commit message - anything from the wrong bug ID to declaring victory prematurely - I wish I could go modify or amend it.
Commits need to be in a separate version tree from their commit messages.
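For what it's worth, git already has something close to this: notes are stored under `refs/notes/commits`, a ref separate from the commit history itself, so they can be added or edited after the fact without rewriting any commit. A quick sketch (the repo path and messages are made up):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# The commit message is frozen once shared...
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -qm "fix: closes BUG-122"
# ...but a note can be attached (and later amended with `git notes edit`)
# without rewriting the commit, because notes live on a separate ref.
git -c user.name=demo -c user.email=demo@example.com \
    notes add -m "Correction: this actually closed BUG-123"
git notes show HEAD
```

The catch is that notes don't propagate by default; you have to fetch and push `refs/notes/*` explicitly, which is probably why almost nobody uses them.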
I kind of wish there was a verifier for function/statement comments to flag inaccurate comments (besides interface descriptions + general comment at the top of a function.)
Sort of a combination of a reverse GitHub Copilot metric and checking how old a comment is based on its surrounding code.
You could even syntax-highlight comments based on how accurate the verifier thinks they are, the way downvoted HN comments fade out.
You could flag how many revisions it has been since the comments were updated in a function as compared to the code, but if that became a metric, someone would probably start optimizing for it by making trivial comment changes without doing real review.
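A toy version of that heuristic, assuming you already have a per-line "last revision that touched this line" number (e.g. parsed out of `git blame`); the blame data below is invented. It flags a comment when the code just beneath it has been revised long after the comment last changed.

```python
def stale_comments(lines, last_rev, window=3, gap=10):
    """Return indices of comment lines whose last revision trails the
    nearest `window` following code lines by more than `gap` revisions."""
    flagged = []
    for i, line in enumerate(lines):
        if not line.lstrip().startswith("#"):
            continue
        # Look at the code immediately below the comment, skipping
        # other comment lines.
        code_revs = [last_rev[j]
                     for j in range(i + 1, min(i + 1 + window, len(lines)))
                     if not lines[j].lstrip().startswith("#")]
        if code_revs and min(code_revs) - last_rev[i] > gap:
            flagged.append(i)
    return flagged

lines = [
    "# retry three times before giving up",  # written long ago
    "for attempt in range(5):",              # code changed since
    "    do_request()",
]
last_rev = [12, 40, 41]  # per-line revision numbers from (hypothetical) blame
print(stale_comments(lines, last_rev))  # -> [0]
```

Revision counts are a crude proxy (timestamps from blame would work too), and as noted above, any such number becomes gameable the moment it turns into a metric.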
It becomes obvious this is the way the world works the more you start to generate unique comments rather than simply agree or upvote other people’s comments.
[1] https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...