
In-Depth: Static Code Analysis (2011) - tzhenghao
https://www.gamasutra.com/view/news/128836/InDepth_Static_Code_Analysis.php
======
ThePhysicist
It's important to keep in mind that there are vast differences between static
analysis tools. Some of the commercial ones that John mentions in the post are
quite powerful but also very expensive and out of reach for most startups /
smaller companies. Most of these started off as spin-offs from research
groups, and there are hundreds of man-years of research behind some of them,
which (IMHO) makes the technology hard to replicate in the open-source world.

Then there are open-source static analysis tools, which are unfortunately
often much less powerful. It's especially difficult to write good static
analyzers for dynamically typed scripting languages like Python, because no
type information is available (projects like mypy should improve that,
though). In my opinion, when working with such languages, gradual typing
(type annotations checked by mypy in Python) or transpilation (as TypeScript
does for JavaScript) is a very good first investment, and it will also make
static analysis easier later.
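As a toy sketch of what gradual typing buys you (my own illustrative example, not from the article): once a function carries annotations, a checker like mypy can reject bad call sites before the code ever runs.

```python
# Gradual typing in Python: the annotations are optional at runtime,
# but a static checker such as mypy uses them to verify call sites.

def average(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of floats."""
    return sum(values) / len(values)

print(average([1.0, 2.0, 3.0]))  # prints 2.0
# average("abc")  # fails with a confusing TypeError at runtime, but
#                 # mypy flags it statically: str is not list[float]
```

Running `mypy` on the file with the bad call uncommented reports the error without executing anything, which is exactly the type information a static analyzer otherwise lacks.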

There are some studies (I can't find the link now) that compared various
measures for improving code quality in terms of cost and effectiveness, and
the takeaway was that if you're starting out with QA, manual code reviews are
usually the most cost-effective way to improve software quality. Of course,
you will need to invest some time in setting up a high-quality review
process, and ensure that there are developers on your team with the
experience to do meaningful reviews.

~~~
rasjani
A while back, I wrote a tool that uses the Clang Static Analyzer, clang-tidy,
and cppcheck to provide more information alongside our existing Coverity
scans. For the Coverity scans, mostly the meaningful checks were enabled, and
the results compared to Clang and cppcheck were not that different. Next time
I'd just go with those and not bother with Coverity, unless the licensing is
already there and some aspects require Coverity.

Anyway, more is better. Coverity is nice once you get it set up and have a
recent-ish version, but combine it with the open-source offerings too!

~~~
marktangotango
Interesting that Clang wasn't available back when McPeak was working on
Elsa/Elkhound. Clang is definitely a game changer for C/C++ static analysis.

------
nestorD
Here is a good series of articles on static analysis tools:
[http://btorpey.github.io/blog/categories/static-analysis/](http://btorpey.github.io/blog/categories/static-analysis/)

------
nickpsecurity
Two others worth checking out, which focus on soundness with low to no false
positives, are RV-Match and TrustInSoft Analyzer:

[https://runtimeverification.com/match/](https://runtimeverification.com/match/)

[https://trust-in-soft.com/products/](https://trust-in-soft.com/products/)

The RV (Runtime Verification) people are the ones behind the K framework and
an executable semantics for C with a GCC-like interface:

[http://www.kframework.org/index.php/Main_Page](http://www.kframework.org/index.php/Main_Page)

[https://github.com/kframework/c-semantics](https://github.com/kframework/c-semantics)

That RV-Match builds on an open framework, does well on benchmarks (yes, some
skepticism is warranted there), and has little to no noise (most important)
is what I like most about it. I say most important because digging through
false positives makes developers not want to use a tool. I haven't tried
either product myself, though.

Here's a list of open tools that did well in small-scale experiments with and
without false positives:

[https://klee.github.io](https://klee.github.io)

[http://www.cprover.org/cbmc/](http://www.cprover.org/cbmc/)

[https://cseweb.ucsd.edu/~rjhala/blast.html](https://cseweb.ucsd.edu/~rjhala/blast.html)

[http://www.splint.org](http://www.splint.org)

[http://spinroot.com/uno/](http://spinroot.com/uno/)

[http://cppcheck.sourceforge.net/#features](http://cppcheck.sourceforge.net/#features)

[https://en.wikipedia.org/wiki/Sparse](https://en.wikipedia.org/wiki/Sparse)

[https://cpachecker.sosy-lab.org](https://cpachecker.sosy-lab.org)

------
pieterr
Paraphrasing Dijkstra: “Using static code analysis shows the presence, not the
absence of bugs.”

See: (1) “Finding Heartbleed with CodeSonar”, (2) “Why Do Software Assurance
Tools Have Problems Finding Bugs Like Heartbleed?”

[1] [http://blogs.grammatech.com/finding-heartbleed-with-codesonar](http://blogs.grammatech.com/finding-heartbleed-with-codesonar)

[2] [https://www.swampinabox.org/doc/SWAMP-WP003-Heartbleed.pdf](https://www.swampinabox.org/doc/SWAMP-WP003-Heartbleed.pdf)

~~~
nickpsecurity
I think that's a misapplication of the quote. Testing only checks an
algorithm on specific inputs; a proof checks it for all inputs. Some static
analyzers are designed to show the absence of bugs of a specific type for all
inputs, which makes them more like proofs.
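A small hypothetical illustration of the difference (8-bit arithmetic simulated in Python; my own example, not taken from either tool): a spot test passes, yet a check over the whole input domain, which is what a sound analyzer approximates symbolically, still finds a corner-case bug.

```python
# A buggy absolute-value over signed 8-bit integers: the classic
# corner case is -128, whose absolute value does not fit in int8.

def abs8(x: int) -> int:
    """Absolute value, with the result wrapped like C's int8_t."""
    r = -x if x < 0 else x
    return (r + 128) % 256 - 128  # two's-complement 8-bit wrap-around

assert abs8(-5) == 5  # a typical unit test: it passes
# Checking *all* inputs (feasible here since the domain is tiny)
# exposes the bug the single test missed:
print([x for x in range(-128, 128) if abs8(x) < 0])  # prints [-128]
```

A tester who only probes a few inputs never sees the overflow; an analyzer that reasons over all inputs must report it, which is the sense in which sound tools are "more like proofs."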

Astrée Analyzer and TrustInSoft Analyzer are examples that claim to prove the
absence of errors with sound analysis.

