
Coverity Scan Update - fcambus
https://community.synopsys.com/s/article/Coverity-Scan-Update
======
danielhochman
Coverity Scan regularly goes down for hours or days.

In February of 2018 it was down for over a month with no word or ETA on when
it would be fixed. I hadn't thought about it since then (we discontinued use),
but researching it now, I see they released a statement saying it had been hacked.
There was not a single status update during the outage.
[https://www.theregister.co.uk/2018/03/19/coverity_scan_crypt...](https://www.theregister.co.uk/2018/03/19/coverity_scan_cryptomining/)

------
kstrauser
I wonder who the hosting provider was. I'm not seeing much in the news about
one that "unexpectedly ceased operations", just the expected background news
of scattered outages.

~~~
westi
Based on DNS history for the domain it looks like it was
[https://twitter.com/nephoscale](https://twitter.com/nephoscale) |
[http://nephoscale.com/](http://nephoscale.com/)

~~~
kstrauser
Wild! I wonder why they picked a host that I'd literally never heard of before
this moment? Not that I claim encyclopedic knowledge of virtual hosting
providers, but still.

~~~
dsl
Coverity was an acquisition. The hosting company was probably run by a friend
of the founders (both seem to be based in the Bay Area).

------
sanxiyn
Coverity is really good. It is a pity that some of its advances, effective in
practice but not really "publishable", will forever remain proprietary
secrets.

Source: I worked on a static code analysis product, and we extensively
black-box tested Coverity.

~~~
tacostakohashi
What kind of advances are you thinking of?

As best I can tell, most of the warnings are either things that can be figured
out within a single translation unit, which (some) compilers will eventually
incorporate as warnings, or things that can only be figured out by analyzing /
linking across many translation units. A compiler can't do the latter, but the
actual "advance" is simple enough if you have all the function definitions to
hand.

~~~
sanxiyn
Most of the advances are about filtering false positives. If you do a "simple"
interprocedural analysis and dump the result, you will find lots of bugs
together with lots of false positives, and you can't sell that.

In other words, all the advances are in the warnings you don't see. Yes, all
the Coverity warnings you do see are simple. I agree.

Quoting from [https://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext](https://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext)

> Since the analysis that suppresses false positives is invisible (it removes
> error messages rather than generates them) its sophistication has scaled far
> beyond what our research system did. On the other hand, the commercial
> Coverity product, despite its improvements, lags behind the research system
> in some ways because it had to drop checkers or techniques that demand too
> much sophistication on the part of the user.

~~~
jetru
:) Cool that you recognize that.

This is correct. Coverity does a bunch of specific analyses designed to
eliminate False positives. This includes analysis to determine which data
states are not possible for a given code path, and doesn't report those
specific issues. This also works using data across function calls.

For C/C++/C#, Coverity has by far the lowest false positive rates, which makes
it probably the best in class for those languages.

Disclosure: I used to work at Coverity.

~~~
sanxiyn
Fascinating! The specific analysis you mentioned sounds publishable; any idea
why it isn't? I mean, I am just curious, I no longer work on this.

I now have a quite different perspective wrt false positives: that the false
positive rate is not important. It's all about perspective. Name the tool a
bug search engine instead of a bug finding tool. You rarely look beyond the
first page of search engine results. Develop a ranking algorithm such that all
alarms on the first page are relevant. You can use any probabilistic voodoo to
rank. Etc. I don't know whether this can work, but I think it's worth a try.

~~~
tacostakohashi
I think the takeaway here is that Coverity have just made a (clever) business
decision to eliminate false positives, so that (nearly) 100% of their reported
defects are "correct". It's quite possible that there are real defects that
they _don't_ show because their confidence is less than 100%.

~~~
sanxiyn
No, not really. "False positives are the enemy" (or, in my alternative model,
relevancy) becomes _glaringly obvious_ to anyone who has tried this at scale.
The question is how to reduce false positives (or improve relevancy). The fact
that real defects exist but are not shown is called unsoundness, and pretty
much everybody now agrees you should be unsound. (Or being sound is a
different market.)

Of course, this is not obvious to people who haven't tried it, so it is
rediscovered and reported over and over. Most recently, from Google: Lessons
from Building Static Analysis Tools at Google (2018).
[https://cacm.acm.org/magazines/2018/4/226371-lessons-from-building-static-analysis-tools-at-google/fulltext](https://cacm.acm.org/magazines/2018/4/226371-lessons-from-building-static-analysis-tools-at-google/fulltext)
I mean, I could have told them.

Select quotes:

> Unlike compile-time checks, analysis results shown during code review are
> allowed to include up to 10% effective false positives.

IMPORTANT NOTE: 10% effective false positives means much less than 10% false
positives! In the above quote, Google counts true bugs marked as not-a-bug by
a developer as effectively false positive. If what you say is true but users
misunderstand it, it doesn't count.

------
walterbell
Has anyone tried LGTM / Semmle QL for automated code review? They claim 100K
OSS projects are using the service. It's a bit hard to find technical
information on the product, but they have found CVEs in mainstream products,
including iOS.

[https://lgtm.com](https://lgtm.com) &
[https://semmle.com/ql](https://semmle.com/ql)

~~~
spatulon
I work on C/C++ analysis at Semmle, and am happy to answer any questions you
might have. (We also support C#, Java, JavaScript, Python... and Cobol!)

We have a few high-profile projects using our automated code review (marketing
people tell me I'm not allowed to call it 'PR integration' any more). One
example is on the AMP Project, where we caught a regex injection vulnerability
in a PR before a human looked at it:
[https://github.com/ampproject/amphtml/pull/13060](https://github.com/ampproject/amphtml/pull/13060)

Our default analysis has found a few other vulnerabilities (remote buffer
overflows due to misuse of snprintf in rsyslog and Icecast spring to mind)
but, honestly, I think our strength lies in the fact that you can write custom
queries that find bugs specific to a single codebase's foibles.

For example, my first ever CVE was for a vulnerability in ChakraCore. Google
Project Zero found the original bug - type confusion caused by failure to
check a flag indicating that the last element of a list should be cast to a
different type - but we wrote a query to verify that code accessing that
particular list always checked the flag. So when some new code got introduced
with the same bug, we noticed as soon as we re-ran the query on the new
commit.

~~~
walterbell
_> We also support C#, Java, JavaScript, Python... and Cobol!)_

Where do these rank in chances of future support?

      - Go
      - Rust
      - Lua
      - Swift
      - Haskell

~~~
samlanning
Go support is currently in development, so very likely :)

As for the other languages there, there are no plans currently on the roadmap
for 2019 to add any of them. However if this is something that particularly
interests you, we'd encourage you to apply for a job and note that you'd like
to add support for a particular language :)

------
sunyc
I honestly thought it was gone!

All the links are dead, and synopsys.com's big-corp-style website isn't
helping one bit.

------
joshstrange
> Coverity Scan is a free static code analysis tool for Java, C, C++, C# and
> JavaScript. It analyzes every line of code and potential execution path and
> produces a list of potential code defects.

There we go, I had no clue what this even was. Do a lot of people here use it?

~~~
radicalbyte
Coverity have one of the best static analyzers for C++ and C#. Not a
surprise, considering they have had* ex-Microsoft compiler engineers such as
Eric Lippert working for them.

I understand from speaking to C++ engineers who have extensive experience in
embedded / industrial applications that Coverity is used extensively there.

Personally I've never seen the need to apply it to C# projects, because the
language is naturally safer than C++ and you get a lot of "bang for the buck"
from the tools built into Visual Studio and from JetBrains.

* He works at Facebook now :facepalm:

------
rurban
Wouldn't it be great if professional websites would someday get to the level
of non-professional websites? E.g. by giving this announcement page a proper
title: "Coverity Scan Outage".

An update is a change; this is an outage.

