Static analysis probably does generate basically 100% false positives.
Organizations that manage to operationalize code scanners usually spend many months with full-time staff configuring them and tuning their output --- most of which is nonsensical, for instance randomly assuming dynamic memory is leaking, or that a local variable enables a race condition. There is a whole cottage industry of consultants that does nothing but this.
When all that work is done, the team still needs a Rosetta Stone for the issues they actually do investigate, one that is highly context-sensitive and dependent on the different components of their application. For instance, a Fortify or Coverity issue might be bogus in 90% of cases, but actually relevant to one particular library.
There is, from what I can tell, no source code scanner on the market that will take a product sight unseen and produce a report from which real vulnerabilities can be extracted with a reasonable amount of effort.
There are, on the other hand, many consultancies that will do "point assessments" --- i.e., not the long-term shepherding and building of a static analysis practice, but just looking at one release of one product for flaws --- that consist mostly of running a "static" tool like Fortify and a "dynamic" tool like WebInspect, and then handing off the report.
Davidson's take on licensing and security inspection is embarrassing, but she is not at all wrong about consultants and security tools.