
McCabe's Cyclomatic Complexity and Why We Don't Use It (2014) - jcr
https://www.cqse.eu/en/blog/mccabe-cyclomatic-complexity/
======
gtrevorjay
Been there. Did not get the t-shirt, but I did perform a [large empirical
study][1] on McCabe's. It turns out that once you account for
heteroscedasticity, it gives almost no more information than lines of code.
Whether that reflects more on McCabe's or on average programming style, I
(still) leave as an exercise for the reader.

[1]:
[http://www.scirp.org/journal/PaperInformation.aspx?PaperID=779](http://www.scirp.org/journal/PaperInformation.aspx?PaperID=779)
"Cyclomatic Complexity and Lines of Code: Empirical Evidence of a Stable
Linear Relationship"

------
keithnoizu
CC by itself isn't all that useful, but the C.R.A.P. index can come in pretty
handy for identifying under-tested code, even if the occasional, and usually
unnecessary, switch statement needs to be ignored.
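For reference, the formula from Savoia's original posts is CRAP(m) =
comp(m)^2 × (1 − cov(m))^3 + comp(m), where comp is the method's cyclomatic
complexity and cov its test coverage. A minimal Python sketch (coverage taken
as a fraction here; the function name is just illustrative):

```python
def crap(complexity: int, coverage: float) -> float:
    """C.R.A.P. score: comp^2 * (1 - cov)^3 + comp,
    with coverage given as a fraction in [0, 1]."""
    return complexity ** 2 * (1.0 - coverage) ** 3 + complexity

# Fully covered code is penalized only by its raw complexity...
print(crap(10, 1.0))   # 10.0
# ...while untested complex code blows up to comp^2 + comp.
print(crap(10, 0.0))   # 110.0
```

If I recall the crap4j write-ups correctly, the default "crappy" threshold
was 30, so a completely untested method passes only up to complexity 5
(5² + 5 = 30).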

~~~
jcr
Thanks for pointing out the C.R.A.P. index. A bit of digging found the
initial blog posts from 2007, along with Java and PHP coverage.

Part 1:
[https://www.artima.com/weblogs/viewpost.jsp?thread=210434](https://www.artima.com/weblogs/viewpost.jsp?thread=210434)

Part 2:
[https://www.artima.com/weblogs/viewpost.jsp?thread=210575](https://www.artima.com/weblogs/viewpost.jsp?thread=210575)

crap4j:
[https://www.artima.com/weblogs/viewpost.jsp?thread=215899](https://www.artima.com/weblogs/viewpost.jsp?thread=215899)

PHP:
[http://jacobsantos.com/blog/2007/general/what-is-your-crap-index](http://jacobsantos.com/blog/2007/general/what-is-your-crap-index)

PHP:
[http://www.levihackwith.com/how-to-read-and-improve-the-c-r-a-p-index-of-your-code/](http://www.levihackwith.com/how-to-read-and-improve-the-c-r-a-p-index-of-your-code/)

------
nickpsecurity
Good examples. The tools should come second to humans wherever that makes
sense. I push strong static analysis tools like Astree & SPARK because they
find things that are hard for humans. Complexity metrics, if well designed,
might find trouble spots in the code. However, I agree with the author that
comprehension is hard enough for a computer to assess that anyone using such
metrics should let a human mind decide whether a change actually makes the
code easier to understand.

Preferably, several of them that are more likely to disagree. ;)

~~~
jcr
I started digging into McCabe's Cyclomatic Complexity [1] after reading an
interesting message on the DragonFlyBSD mailing list [2] regarding its
questionable usefulness on complex data structures and file system (HAMMER)
code. The 'pmccabe' program [3] mentioned there looks interesting, but I'm
still figuring out what's available while working my way through the main
Wikipedia article (which links to the submitted article).

I think the main problem is that complexity is a bug light for hackers; we
know it will probably end badly, but the appeal is so intoxicating that we
just can't help ourselves.

[1]
[https://en.wikipedia.org/wiki/Cyclomatic_complexity](https://en.wikipedia.org/wiki/Cyclomatic_complexity)

[2]
[http://lists.dragonflybsd.org/pipermail/users/2016-January/228535.html](http://lists.dragonflybsd.org/pipermail/users/2016-January/228535.html)

[3]
[https://people.debian.org/~bame/pmccabe/overview.html](https://people.debian.org/~bame/pmccabe/overview.html)
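For anyone who wants to play with the metric before installing pmccabe: the
common shortcut is one plus the number of decision points, which is
equivalent to M = E − N + 2 for a single-entry, single-exit control flow
graph. A rough Python sketch (my own choice of which AST nodes count as
decisions, which won't exactly match pmccabe's rules for C):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    count = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                             ast.ExceptHandler)):
            count += 1
        elif isinstance(node, ast.BoolOp):
            # every 'and'/'or' adds a short-circuit branch
            count += len(node.values) - 1
    return count

src = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            print(i)
    return "done"
"""
print(cyclomatic_complexity(src))  # 5: base 1 + if + for + if + and
```

Counting this way per function, rather than per file, is what makes the
metric comparable across codebases.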

~~~
nickpsecurity
"I think the main problem is that complexity is a bug light for hackers; we
know it will probably end badly, but the appeal is so intoxicating that we
just can't help ourselves."

That's a good metaphor. There are two ways of looking at this situation:
human-centric and tool-centric. The human side is that anything too complex
to understand can hide flaws in its local implementation or in its
interactions with other things. The tooling side is that things too complex
for our tools to analyze can't give us the assurance level of alternatives. I
think the value of McCabe's work is on the tooling side, but it is too often
assessed on the human side. The counter-example in the article is a good one:
something easy for humans to verify still produces a meaningless number
representing what the tools would perceive.

My rule of thumb is: Can you understand everything the software might do in
success or error states? Have you tested that to be sure it does? Have you
expressed it simply enough to apply static or dynamic analysis tools to show
its safety against common cases of error?

These things seem to be a prerequisite for software that's correct by
construction. Just one part of the process among many, yet a critical one.
Work like McCabe's _may_ help on the tooling side. Yet we often have a better
shortcut: your code doesn't pass if the analysis can't prove it's safe after
X amount of runtime. "Animats" said that's Microsoft's approach to certifying
drivers with their amazing SLAM verification kit: run it with an hour max.
Driver quality is higher than ever despite complexity being higher than ever,
a direct result of their assurance activities.

