
She has a point here. Static analysis does generate a lot of false positives, and it takes a pretty in-depth understanding of the code to determine whether any given hit is a real issue. Unfortunately, that sort of understanding doesn't usually come from just running a static analysis tool (or fuzzer, OWASP scanner, etc., etc.). The problem comes (and I have personally been on the receiving end of this) when running the tool results in a couple of dozen to a couple of hundred trouble tickets you can't simply ignore, tickets that say things like "I found a bug and if you don't respond I'm going to dump Oracle", "I found a bug and if you don't fix it on my schedule I'm going to post to HackerSiteDuJour", or "I found a bug, pay me a bounty, or I'm gonna make a big stink". And so someone has to go look at the report and "prove" that, just like 99.999% of the time, it's a false positive, and they have to do that for every "security" person who cranks up a tool and finds the same "vulnerability".
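To illustrate why that triage takes real code knowledge, here's a minimal made-up Python fragment (nothing to do with Oracle's actual code): a taint-style scanner will typically flag the string-built SQL as injectable, and only someone who knows about the allow-list check can call it a false positive.

    import sqlite3

    # Sort columns come from a fixed allow-list, never from user input, so the
    # f-string below cannot actually be injected -- but a scanner that only sees
    # the query construction will typically report possible SQL injection here.
    SORTABLE_COLUMNS = {"name", "created_at", "status"}

    def list_orders(conn: sqlite3.Connection, sort_by: str):
        if sort_by not in SORTABLE_COLUMNS:
            raise ValueError(f"unsupported sort column: {sort_by}")
        return conn.execute(f"SELECT * FROM orders ORDER BY {sort_by}").fetchall()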

The problem here is:

1) She might be a writer, but boy did she not convey the message I think she wanted to, which is kind of a shame.

2) She apparently doesn't understand "reverse engineering" with any more nuance than "my legal team says you can't do it, so there", which is much more of a shame for someone who carries a CSO bag.



Oracle cannot ignore annoying and low-expected-value static analysis tickets, but:

1) the answer should usually be either "fixed in this patch, install it" or "it's a false positive, try developing an actual exploit if you don't believe us". Not expensive, provided Oracle actually runs static analysis tools against their software and addresses the findings before releasing updates.

2) If Oracle actually runs static analysis tools against their software and addresses the findings before releasing updates, there should be very few tickets of this type to begin with, mostly from the debut of new tools or from naive user mistakes. Finding something, and worse finding something over and over again, means that Oracle QA is inadequate.


Ah...you've never used one of these tools on a large code base. The problem is that when I run the tool in my QA environment, I identify the false positive and configure my tool to account for it (or I create a compensating control). If you run the same tool, you'll see everything I tuned out, and I then have to go back and trace where the finding was tuned out, why it was tuned out, and make sure that's still right. It's not inexpensive when multiplied hundreds of times over. And that's if you use the same tool as I am; if you're running a different one, we might as well be starting from scratch.
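To make the tuning problem concrete, here is a minimal sketch (assuming Bandit and Semgrep as the two scanners; the code itself is made up): suppressions are per-tool annotations, so whatever I tuned out is invisible to whatever tool you happen to run.

    import subprocess

    def refresh_report():
        # The command is a hard-coded constant, so shell=True is not exploitable
        # here, yet scanners flag it anyway. The "# nosec" marker silences Bandit
        # only; Semgrep ignores it and would need its own "# nosemgrep" comment
        # (or a rule exclusion in its config) to reach the same conclusion.
        subprocess.run("ls -l /var/reports", shell=True, check=True)  # nosec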

Finding something, and worse finding something over and over again, means that Oracle QA is inadequate.

By what delusion do you conclude that it's not the tool, finding the same false positive over and over, that's inadequate? The tools are not perfect, and their developers are often very obstinate about what they consider a finding.


Silencing false positives while running static analysis tools is enough for a single bug-finding campaign, but not for a sustainable effort.

To deal with false positive reports from customers, Oracle needs to keep a record of which findings are false positives in each release of their software, according to each popular tool. Not just the tools they use to find bugs: all the tools customers use. It's not like they cannot afford tool licenses or large databases.
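A minimal sketch of what such an archive could look like (SQLite, with made-up field names; not a claim about how Oracle actually tracks findings):

    import sqlite3

    # Hypothetical per-release, per-tool catalogue of triaged false positives.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE known_false_positives (
            product_release TEXT,    -- e.g. "12.1.0.2"
            tool            TEXT,    -- e.g. "AppScan"
            tool_version    TEXT,
            source_file     TEXT,
            line            INTEGER,
            rule_id         TEXT,    -- the tool's own finding identifier
            rationale       TEXT     -- why the finding was judged a false positive
        )
    """)

    def is_known_false_positive(release, tool, source_file, line, rule_id):
        """True if this finding was already triaged as a false positive."""
        row = conn.execute(
            "SELECT 1 FROM known_false_positives WHERE product_release=? AND tool=?"
            " AND source_file=? AND line=? AND rule_id=?",
            (release, tool, source_file, line, rule_id),
        ).fetchone()
        return row is not None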

Adopting a code style that reduces false positives (along with bugs) and fixing actual problems before release so that no customer sees them would also be good policies.
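For example (again a made-up Python fragment), moving from string-built SQL to a bound parameter removes both the bug class and the recurring finding:

    import sqlite3

    def find_user_flagged(conn: sqlite3.Connection, username: str):
        # Virtually every scanner reports this interpolation as SQL injection,
        # and arguing about whether it is reachable costs time on every scan.
        return conn.execute(f"SELECT id FROM users WHERE name = '{username}'").fetchone()

    def find_user_clean(conn: sqlite3.Connection, username: str):
        # A bound parameter is safe by construction and is a pattern the tools
        # recognize, so the finding never appears in the first place.
        return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchone()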

Even without improving their software development process, educating users about which static analysis tools are discredited and rigorously demanding working test cases in support tickets to weed out false positives are two things Oracle could do without alienating their customers.


Oracle needs to keep a record of which findings are false positives in each release of their software, according to each popular tool.

Oh, wow! How could we be so stupid? All we have to do is build a false positive database of all tools, everywhere. Don't forget: for all versions. Not just all versions of the tools, all versions of your code. And whenever a new tool or version comes out, rerun everything. Because inevitably some guy limping along with Oracle 8 is going to download the latest Parasoft release and he's gonna want a word with you, because Parasoft probably doesn't even have an Oracle 8 instance lying around any more. No problem!

Adopting a code style that reduces false positives...

Force coders to change their style because "AppScan v6.00.01 patch 10 throws a falsy on this expression in version 11i patch 200". Sure, why not...fuck those guys. People should be slaves to the tools, not the other way around.

rigorously demanding working test cases

Forcing your 400k-odd customers to come up with test cases for your code before you pay attention to the vulnerability reports they genuinely believe are important. That sure won't alienate customers at all.

Proof once again of how easy it is to wave one's hand over a complicated subject one doesn't fully understand, assign simple (and wrong) solutions based on limited information, and declare others inadequate.


I doubt there is the flood of false positive reports from customers misusing static analysis tools that the article complains about, but insofar as Oracle wants to do something about them, minimizing the cost of writing off a report as a false positive is the only rational solution. (Complaining about reverse engineering is not rational.)

Investigating analysis tool reports once and for all is the only way to minimize this cost. You seem to neglect various cost-mitigating factors:

- Reports from different tools are going to hit the exact same spots in the code, for the same reasons, making the marginal cost of analyzing the report from yet another tool low and decreasing, and making the matching between support tickets and known false positive reports very easy (see the sketch after this list). Closing tickets that are too vague to match would also be easy.

- Reports for version N and version N+1 of the product are going to be very similar. Likewise for version N and N+1 of a static analysis tool.

- Only popular (and good) analysis tools deserve up-front usage before releasing products. Others can be run only after someone files reports, and for the most unlikely ones being unprepared is the best choice. There's no value in a strawman like complete coverage of all possible tools.

- Static analysis tools are useful. Using them thoroughly would provide significant value beyond the dubious niche of reverse engineering support tickets.
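As a sketch of that matching step (tool names, labels, and file paths are made up), reports from different tools can be normalized into a common fingerprint and checked against what has already been triaged:

    # Different tools label the same weakness differently, so findings are
    # normalized to a (file, line, CWE) fingerprint before lookup.
    CWE_ALIASES = {
        "sql-injection": "CWE-89",            # one tool's label (hypothetical)
        "SQL_Injection.Reflected": "CWE-89",  # another tool's label (hypothetical)
    }

    def fingerprint(finding: dict) -> tuple:
        cwe = finding.get("cwe") or CWE_ALIASES.get(finding["rule"], finding["rule"])
        return (finding["file"], finding["line"], cwe)

    # Triaged once, regardless of which tool originally reported the spot.
    known_false_positives = {
        ("db/query_builder.c", 412, "CWE-89"),
    }

    def triage(finding: dict) -> str:
        if fingerprint(finding) in known_false_positives:
            return "close: known false positive (see prior analysis)"
        return "investigate"

    # A second tool re-reporting the same location costs one set lookup:
    print(triage({"rule": "sql-injection", "file": "db/query_builder.c", "line": 412}))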



