I agree with both. Real security is never of zero benefit, but I've worked on a lot of "security" tickets that are of no pragmatic benefit:
1. Angst over cleaning up the latest non-vuln vuln (e.g., our use of the vulnerable library doesn't meet the conditions the advisory lists for exploitability; or the report is researcher spam and isn't really a vulnerability at all, but good luck getting anyone to accept that; or the vuln gets publicized but the upstream OS is dragging its feet and hasn't actually shipped the patched version yet)¹
2. Ill-advised attempts to add TLS or similar. Adding TLS is not trivial (you have to get the cert to both ends so they can validate each other), but I've seen plenty of TLS additions layered on top of an already-secure network, or bolted on with the library set to "don't verify". Worse, no verification is the default in many client libraries; having been on the implementing side, I think the industry would be greatly helped by secure defaults. (PG and MySQL, I'm looking at you.) In one case an auditor asked the mobile apps to do cert pinning, which they did, without ever involving the backend engineers. (Nobody told them they needed to!) The next cert rotation broke both mobile apps, and broke them differently: the Android app pinned only the public key (and threw an NPE if it wasn't RSA), while the iOS app pinned the whole cert, so that one was truly broken. I've also seen a VPN setup where the clients were configured to accept any leaf cert issued by a public web-PKI CA … and check nothing else. (I.e., anyone could go to that CA, get a leaf, and MitM the VPN.)
3. Adding yet another security linter that has problems with every little thing. E.g., we had one that really disliked the string concatenation in our SQL queries. But we were concatenating parameterized queries: the search parameters (including how many of them there were) came from the user, so the program would append, e.g., "AND column > ?" to the query text and push the value onto the parameter list. We ended up switching some of that to SQLAlchemy, which the linter couldn't see through, even though the result was functionally the same. (To be fair, SQLAlchemy builds a proper expression tree internally, so it did catch some errors our query builder only surfaced once they'd become bugs.) I also have a linter from my current security team that files a bug for each package in a container with vulnerabilities against it. But it never updates its bugs, so you can't tell whether anything is actually fixed. (I'm not patching package by package; I'm usually invalidating a cache and forcing the equivalent of "apt-get update && apt-get upgrade", so I'm going to close a bunch of tickets all at once … but I have no way of knowing which ones.)
4. Audits, where the auditor asks things like "do you use RC4, TLS, or AES?" (a broken cipher, a protocol, and a perfectly fine cipher, all in one checkbox), which … good grief.
5. Concerns about PII handling for information that's already part of a public (government) record.
¹My default disposition is to patch, regardless of theoretical exploitability, if it's trivial to do so. But when it's not trivial (e.g., the upgrade is SemVer-incompatible, requires rebasing the entire monolith onto a new web framework, and I lack the resources to do that right now), and the advisory's stated conditions for exploitation say we're not affected, I'm going to take the pragmatic approach of "we're ignoring this; it doesn't apply to our codebase."
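To make the "bolted-on TLS" failure mode in point 2 concrete, here's a minimal sketch using Python's stdlib `ssl` module. (The stdlib actually gets the default right; libpq and MySQL clients have their own knobs, but the dangerous opt-out has the same shape everywhere.)

```python
import ssl

# Secure: the stdlib's default context verifies the chain AND the hostname.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# The anti-pattern: TLS "added", but with verification switched off.
# You get encryption without authentication, so anyone on-path can
# MitM the connection with a self-signed cert.
bad = ssl.create_default_context()
bad.check_hostname = False       # must be disabled before CERT_NONE
bad.verify_mode = ssl.CERT_NONE  # accepts any cert whatsoever
```

The two-line opt-out is exactly what "set the library to don't verify" looks like in practice, which is why library defaults matter so much.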
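The query-builder pattern from point 3 can be sketched as follows; the table and column names are hypothetical, but the structure is the one the linter flagged: the SQL *text* is concatenated, while every user-supplied *value* travels in the parameter list.

```python
import sqlite3

def build_query(filters):
    """Build a parameterized query from user-supplied filters.

    Fragments like "AND price > ?" are constant strings, so concatenating
    them is safe; the user's values never enter the SQL text.
    """
    sql = "SELECT name, price FROM products WHERE 1=1"
    params = []
    if "min_price" in filters:
        sql += " AND price > ?"              # constant fragment
        params.append(filters["min_price"])  # value stays parameterized
    if "category" in filters:
        sql += " AND category = ?"
        params.append(filters["category"])
    return sql, params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL, category TEXT)")
conn.execute("INSERT INTO products VALUES ('widget', 9.5, 'tools')")

sql, params = build_query({"min_price": 5, "category": "tools"})
rows = conn.execute(sql, params).fetchall()
# rows == [('widget', 9.5)]
```

A linter doing naive string-concatenation matching can't distinguish this from actual SQL injection, which is how it ends up crying wolf on every dynamic query.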