Organizations that manage to operationalize code scanners usually spend many months with full-time staff configuring them and tuning their output --- most of which is nonsensical, for instance randomly assuming dynamic memory is leaking, or that a local variable enables a race condition. There is a whole cottage industry of consultants that does nothing but this.
When all that work is done, the team still needs a Rosetta Stone for the issues they actually do investigate, one that is highly context-sensitive and dependent on the different components of their application. For instance, a Fortify or Coverity issue might be bogus in 90% of cases, but actually relevant to one particular library.
There is, from what I can tell, no source code scanner on the market that will take a product sight unseen and produce a report from which real vulnerabilities can be extracted with a reasonable amount of effort.
There are, on the other hand, many consultancies that will do "point assessments" --- i.e., not the long-term shepherding and building of a static analysis practice, but just looking at one release of one product for flaws --- that consist mostly of running a "static" tool like Fortify and a "dynamic" tool like WebInspect, and then handing off the report.
Davidson's take on licensing and security inspection is embarrassing, but she is not at all wrong about consultants and security tools.
But notice the other comment about a "well-known security researcher" "alleging" vulnerabilities that yee-haw we're already working on fixes for so we're awesome and he's lame and nanny nanny boo boo etc.
Serious cognitive dissonance there. I used to buy a lot of Oracle product. Their value proposition has grossly weakened over the last decade and a half or so, so I don't any more. But if I did, I'd be embarrassed today.
Her statement gains 1% truth because Oracle might already have picked the low-hanging fruit, in which case any further reports they get really are mostly chaff. I find this unlikely, but it's possible, so she gets that 1%.
> A customer can’t analyze the code to see whether there is a control that prevents the attack
That's actually a pretty decent point. Anyone who has actually studied static-analysis reports for any length of time has probably encountered this phenomenon. For example, you might find a potential buffer overflow that's real in the context of the code you analyzed, but the index involved can't actually be produced because of other code that you didn't analyze. Or maybe a certain combination of conditions is impossible for reasons related to a mathematical property that has been thoroughly vetted but that the analysis software couldn't reasonably be expected to "know" about. Ironically, these kinds of "reasonable false positives" tend to show up more in good programmers' code, because they're diligent about adding defensive code handling every condition, including conditions that aren't (currently) possible. In any case, while it's a good point, it applies rarely enough that it doesn't really support the author's broader position.
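A minimal sketch of that phenomenon (hypothetical names, not from any real report): the clamp that makes the access safe lives away from the access itself, so a scanner looking at `read_entry` in isolation would flag it.

```python
BUF = [0] * 16

def validate(idx):
    # Defensive clamp; in real code this lives in a different module,
    # which is why a scanner never sees it next to the access below.
    return idx % len(BUF)

def read_entry(idx):
    # Analyzed in isolation, this looks like a potential out-of-range
    # read; every real caller routes idx through validate() first.
    return BUF[idx]

print(read_entry(validate(40)))  # prints 0
```

The finding is "reasonable" in the sense that nothing in `read_entry` itself rules the bad index out; only whole-program context does.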
I think the impedance mismatch here might be that you're a software developer, and we're talking about security teams.
I don't know that anyone is arguing that static analysis is useless for developers. If you're intimately familiar with the code you're working on, there are probably a lot of ways to make static analysis results both valuable in every edit/compile/debug cycle, and an important part of your team's release process.
But when you're close to the code, it's easy to forget how much of the tool's output you're ignoring (either literally, by just skimming past findings you know don't matter, or implicitly, by configuring the tool to match your environment or subtly changing your coding style to conform to Coverity's expectations).
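The "literally ignoring" part often takes the form of an inline suppression that the developers sprinkle in as they go. As one concrete example (using Bandit, a Python security scanner, rather than the commercial tools discussed here), a marker on the flagged line tells the scanner the team has reviewed and accepted it:

```python
import subprocess

# A developer who knows this command is a fixed string can tell the
# scanner to skip it; Bandit honors an inline "# nosec" marker for
# exactly this purpose. A security team reviewing raw output has no
# such accumulated context.
result = subprocess.run("echo ok", shell=True, capture_output=True, text=True)  # nosec B602
print(result.stdout.strip())  # prints "ok"
```

Multiply that by years of triage decisions and you get a gap between what the tool emits and what the team actually reads.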
Security teams can't generally do this. They're stuck with the raw output of the barely-configured tool. The results of static analysis in these circumstances are nonsensical: memory leaks, uninitialized variables, race conditions, tainted inputs reaching SQL queries, improper cleanup of sensitive variables, 99.9% of which aren't valid findings, but all of which look super important, especially if you're a consultant with 6 months of experience charging $150/hr to run Fortify on someone else's code, then petulantly demanding a response for every fucking issue the scanner generates.
They're fine dev tools, but they are terrible tools for adversarial inspection, which is what Davidson is talking about.
The problem here is:
1) She might be a writer but boy did she not convey the message I think she wanted to, which is kind of a shame.
2) She doesn't apparently much understand "reverse engineering" with more nuance than "my legal team says you can't do it so there", which is much more of a shame for someone who carries a CSO bag.
1) The answer should usually be either "fixed in this patch, install it" or "it's a false positive; try developing an actual exploit if you don't believe us". Not expensive, provided Oracle actually runs static analysis tools against their software and addresses the findings before releasing updates.
2) If Oracle actually runs static analysis tools against their software and addresses the findings before releasing updates, there should be very few tickets of this type to begin with, mostly arising when new tools debut or when users make naive mistakes. Finding something, and worse, finding something over and over again, means that Oracle QA is inadequate.
> Finding something, and worse finding something over and over again, means that Oracle QA is inadequate.
By what delusion do you think it's not the tool, finding the same false positive over and over, that's inadequate? The tools are not perfect, and their developers are often very obstinate about what they consider a finding.
To deal with false positive reports from customers, Oracle needs to archive which findings in each release of their software are false positives according to popular tools. Not just the tools they use to find bugs: all the tools customers use. It's not like they can't afford tool licenses or large databases.
Adopting a code style that reduces false positives (along with bugs) and fixing actual problems before release so that no customer sees them would also be good policies.
Even without improving their software development process, educating users about which static analysis tools are discredited and rigorously demanding working test cases in support tickets to weed out false positives are two things Oracle could do without alienating their customers.
Oh, wow! How could we be so stupid. All we have to do is build a false positive database of all tools, everywhere. Don't forget for all versions. Not just all versions of the tools, all versions of your code. And whenever a new tool or version comes out, rerun everything. Because inevitably some guy limping along with Oracle 8 is going to download the latest Parasoft release and he's gonna want a word with you, because Parasoft probably doesn't even have an Oracle 8 instance lying around any more. No problem!
> Adopting a code style that reduces false positives...
Force coders to change their style because "AppScan v6.00.01 patch 10 throws a falsy on this expression in version 11i patch 200". Sure, why not...fuck those guys. People should be slaves to the tools, not the other way around.
> rigorously demanding working test cases
Forcing your 400k some odd customers to come up with test cases for your code before you pay attention to the vulnerability reports they genuinely believe are important. That sure won't alienate customers at all.
Proof once again of how easy it is to wave one's hand over a complicated subject one doesn't fully understand, assign simple (and wrong) solutions based on limited information, and declare others inadequate.
Investigating analysis tool reports once and for all is the only way to minimize this cost. You seem to neglect various cost-mitigating factors:
- Reports from different tools are going to hit the exact same spots in code, for the same reasons. That makes the marginal cost of analyzing the report from yet another tool low and decreasing, and makes matching support tickets against known false positive reports very easy. Closing overly vague tickets would also be easy.
- Reports for version N and version N+1 of the product are going to be very similar. Likewise for version N and N+1 of a static analysis tool.
- Only popular (and good) analysis tools deserve up-front usage before releasing products. Others can be run only after someone files reports, and for the most unlikely ones being unprepared is the best choice. There's no value in a strawman like complete coverage of all possible tools.
- Static analysis tools are useful. Using them thoroughly would provide significant value beyond the dubious niche of reverse engineering support tickets.
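The first point above can be sketched concretely. This is a hypothetical illustration (the tool names are real, but the rule IDs, file, and mapping are invented for the example): if findings from different tools are normalized to a shared (file, line, rule family) key, triaging one tool's report automatically answers tickets generated by another tool hitting the same spot.

```python
from collections import namedtuple

# Normalize each tool's findings to (file, line, rule family) so a
# customer ticket can be matched against already-triaged results.
Finding = namedtuple("Finding", "file line family")

# Assumed mapping from tool-specific rule IDs to a shared family name.
FAMILIES = {
    ("fortify", "SQLI-1"): "sql-injection",
    ("coverity", "TAINTED_SQL"): "sql-injection",
}

def normalize(tool, raw):
    return Finding(raw["file"], raw["line"], FAMILIES[(tool, raw["rule"])])

# Triage one tool's report once...
triaged = {
    normalize("fortify", {"file": "db.c", "line": 42, "rule": "SQLI-1"}): "false positive",
}

# ...and a later ticket from a different tool hits the same spot for free.
ticket = normalize("coverity", {"file": "db.c", "line": 42, "rule": "TAINTED_SQL"})
print(triaged.get(ticket, "needs investigation"))  # prints "false positive"
```

Real tools disagree about line numbers and rule taxonomies far more than this sketch suggests, which is where the actual cost hides.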
Somehow this CSO is unaware of binary static analysis, à la Veracode. You can still get plenty of false positives from binary SAST, but it's NOT decompilation.
My question would be whether binary SAST falls under the prohibition against reverse engineering. I wouldn't think so, but that's one for the lawyers unfortunately.
Meanwhile: a huge portion of everything Oracle ships is Java, and consultants absolutely do run Java security scanners on decompiled jar files from Oracle products.
"A customer is almost certainly violating the license agreement by using a tool that does static analysis (which operates against source code)"
It's a stretch to interpret this as an admission that it's only a license violation when decompilation to source is involved. I read it as "all static analysis operates against source code".
It's hardly embarrassing to point out that important detail, and I don't think it's fair to assume that the motivation for correcting the error is "to feel smarter than" the one who made it.