

This is why you can't build a Sufficiently Smart Compiler - eli_gottlieb
http://marmoach.blogspot.com/2012/03/this-is-why-you-cant-build-sufficiently.html

======
arohner
> I've never heard of a compiler that will do Big-O complexity analysis for
> the programmer, at least not without dependent types (static analysis
> dependent upon and capable of predicting dynamic (run-time) values)

Why is this hard? In Clojure, the complexity of all core functions is well
defined, e.g. map is O(n), hash lookups are O(log_32 n) or better, etc. It
seems like you could compute the complexity analysis at compile time. Then at
runtime, you only need to know whether n is "big" or "small".

>most actual code in non-functional languages is written in an effectful,
inherently-sequential style when it could otherwise be stateless/pure and
embarrassingly parallel

I'm pretty sure there are plenty of companies who would switch to functional,
if it actually gave them the SSC.

~~~
eli_gottlieb
>Why is this hard? In Clojure, the complexity of all core functions is well
defined, e.g. map is O(n), hash lookups are O(log_32 n) or better, etc. It
seems like you could compute the complexity analysis at compile time.

Because a standard for-loop iterating through a collection is only one special
case among all possible for-loops. Much old code doesn't even use iterator-
based or comprehension-based for-loops, but instead just iterates through
arrays using a local variable as an index:

    
        for (int i = 0; i < strlen(string); i++)
            putchar(string[i]);
    

The programmer knows that strlen() will return the length of the null-
terminated string _string_, but without dependent types or some similar
computation, the compiler has _no idea_. To your C compiler, that string is
just another pointer to a character!
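To make that concrete, here is a sketch of the two versions of that loop. To a human reader both are "obviously" linear, because we know strlen() scans to the terminating NUL; but unless the compiler can prove the string is never modified and hoist the call itself, the first version actually does O(n^2) work:

    
        #include <stdio.h>
        #include <string.h>
        
        /* Quadratic in practice: strlen() re-scans the whole string on
         * every iteration, so the loop does O(n^2) work unless the
         * compiler can prove `string` is unchanged and hoist the call. */
        void print_quadratic(const char *string)
        {
            for (int i = 0; i < strlen(string); i++)
                putchar(string[i]);
        }
        
        /* Linear: the programmer hoists the length, encoding knowledge
         * the compiler could not infer from the pointer type alone. */
        void print_linear(const char *string)
        {
            size_t len = strlen(string);
            for (size_t i = 0; i < len; i++)
                putchar(string[i]);
        }
        
        int main(void)
        {
            print_quadratic("hello");
            putchar('\n');
            print_linear("hello");
            putchar('\n');
            return 0;
        }
    

Both functions print the same thing; the asymptotic difference is invisible in the output, which is exactly why a complexity-analyzing compiler would have to understand strlen()'s semantics rather than its type.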

If we "move up" to a language like Java or C++ where iteration uses (gasp)
iterators, _then_ we have to detect the _special case_ of code employing an
iterator class/type and associate a collection-size variable _n_ with the
collection over which we're iterating.

I say special case because that check will only catch the use of an iterator
via an iterator-based _for_ loop, not a _while_ loop. And _while_ loops or
index-based _for_ iterations can also harbor off-by-one errors and other bugs
that will, needless to say, prevent our static complexity analysis from
proceeding as planned.
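A minimal sketch of the _while_-loop problem, in C with a hypothetical linked-list type: the traversal below is O(n) to any human reader, but there is no index variable, no collection-size n, and no iterator type for a pattern-matching analysis to latch onto; the cost is hidden in a pointer chase:

    
        #include <stdio.h>
        
        /* A hypothetical singly linked list, just for illustration. */
        struct node {
            int value;
            struct node *next;
        };
        
        /* Obviously O(n) to a human, but syntactically just a loop
         * that runs "until a pointer is NULL" -- nothing here names
         * the collection or its size for a static analyzer to find. */
        int sum(struct node *head)
        {
            int total = 0;
            while (head != NULL) {
                total += head->value;
                head = head->next;
            }
            return total;
        }
        
        int main(void)
        {
            struct node c = {3, NULL}, b = {2, &c}, a = {1, &b};
            printf("%d\n", sum(&a));   /* prints 6 */
            return 0;
        }
    

Proving this loop even terminates requires knowing the list is acyclic, which is already beyond what the types say.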

These things _look_ easy to analyze to the human eye because Pseudocode is the
Sufficiently Smart Language, designed by humans for performing complexity,
computability and correctness proofs. Once we get actual code involved, any
resemblance to our vaunted Pseudocode is a happy coincidence.

------
gtani
Very good post. Related discussion of how close different groups are to SSC-
ness (or not): ML, Haskell, Scala, and the typechecker component (the OP likes
Coq):

[http://www.reddit.com/r/dependent_types/comments/r61wo/bring...](http://www.reddit.com/r/dependent_types/comments/r61wo/bring_dependent_types_into_practical_programming/)

