

Static Analysis Fatigue - gnosis
http://blog.regehr.org/archives/259

======
sqrt17
I think "actionable" is the keyword here. Dumping the error messages straight
from the tool into a bug report does not tell the recipient

(a) how many false positives there are - either code paths that don't occur in
the wild or undefined behaviours that actually yield the same result in all
sensible implementations (as in the a+b-c versus a+(b-c) case) - or what the
possible/likely consequences of such implementation-dependent behaviour would
be.

(b) how he/she can reproduce the bug to see whether it has disappeared (hence
the PHP guy's request for the exact test cases in which these errors occur)

It's also not quite helpful if people cannot run the debugging tool
themselves. This is pretty independent of whether the tool uses static
analysis, or runtime debugging, or a mixture of both.

------
tptacek
Backstory to this: there's a bunch of commercial providers of code
security/reliability scanners (Fortify, Coverity, Ounce, Klocwork; surely
there are others) all of whom simultaneously got the great idea of running
their stuff on big open source projects (as a "donation") and then press
releasing the findings.

~~~
ekidd
Yes. These tools are great if you can include them in your nightly build, and
if they're accurate enough you can insist on 100% clean runs every night. But
a big dump of cryptic errors, many of them false positives, from a commercial
vendor with a proprietary tool? If you care about security, you've _got_ to
act on it. But I understand why volunteer maintainers get pretty grumpy after
the first half-dozen false positives, or if the tool seems to be half-baked.

~~~
EdiX
There is another problem with this type of tool; you can read about it here:

<http://blog.regehr.org/archives/226>

most of what it finds, while still bugs, may never lead to erratic behaviour;
it's hard to estimate the severity of each bug, and it's a lot of work to wade
through all of them.

