
Strong functional programming - DanielRibeiro
http://etorreborre.blogspot.com.au/2012/06/strong-functional-programming.html
======
KeithMajhor
Here is some relevant discussion: <http://lambda-the-ultimate.org/node/2003>

Also, codata doesn't solve everything. You still can't filter infinite data.
Filtering with \x -> x > 0 won't terminate for all infinite lists of numbers,
since [0, 0, 0, 0, 0, ...] is a potential case.
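A minimal sketch of the point (the example lists are my own): filtering an infinite list is productive only as long as matching elements keep appearing.

```haskell
-- Filtering an infinite list is productive only if matches keep appearing.
main :: IO ()
main = do
  -- In cycle [0, 1] a positive element always eventually appears,
  -- so each element of the filtered list is computable:
  print (take 3 (filter (> 0) (cycle [0, 1])))
  -- By contrast, this would loop forever, because no element of
  -- repeat 0 is ever > 0:
  --   print (take 1 (filter (> 0) (repeat 0)))
```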

------
ced
Interesting. I am not that familiar with ML languages, but isn't the situation
equivalent to division by 0 in arithmetic? Division by 0 is undefined, because
assigning it to any natural number would yield a contradiction. Alternatively,
one could define x/0=⊥, like Haskell does, but then every other operator in
arithmetic would have to handle ⊥.

Mathematicians have chosen to keep division by 0 undefined (except in special
contexts), and I am inclined to agree. I suspect that the primary reason for
the existence of ⊥ in Haskell is the Hindley-Milner type system's insistence
on giving a type to everything.

Any comment appreciated.

~~~
vilhelm_s
Note that bottom is a theoretical concept. If you actually evaluate (1/0) in
the Haskell interpreter, you get an error message:

    
    
       *** Exception: Ratio.%: zero denominator
    

Similarly, if you run an infinite loop in the interpreter, it just loops
forever. (Well, in certain special cases it can figure that out and throw an
exception). The implementation never creates an actual "bottom" value that has
to be handled by e.g. arithmetic operators.
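How the error surfaces actually depends on the numeric type; a small sketch (using only standard Prelude and Control.Exception machinery):

```haskell
import Control.Exception (ArithException, evaluate, try)

main :: IO ()
main = do
  -- Integer division by zero throws a runtime exception...
  r <- try (evaluate (1 `div` 0 :: Int)) :: IO (Either ArithException Int)
  print r
  -- ...while IEEE floating point yields Infinity, not an exception,
  -- so no "bottom value" is ever passed around by arithmetic operators.
  print (1 / 0 :: Double)
```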

The issue of bottom comes up when you try to define precisely what the meaning
of a program is (in the jargon, when you give a denotational semantics). It is
clear that the expression (1+1) should "mean" 2, but what is the meaning of an
infinite loop? Answer: a special "error value" which we call bottom.

You may wonder why most traditional mathematics doesn't make a fuss about
bottom values and just gets away with saying "this is a partial function".
This is because most everyday mathematical functions are defined by e.g.
simple structural recursion, so issues of termination are seldom very
complicated. Once you have general recursion, you need to be more careful.
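A sketch of that contrast (function names are my own): a structurally recursive definition consumes one constructor per call, so termination is immediate by induction, while a generally recursive one offers no such guarantee.

```haskell
-- Structural recursion: each call peels one constructor off the list,
-- so termination follows by induction on the list.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- General recursion: the argument does not shrink structurally.
-- Whether this terminates for every n > 0 is the open Collatz problem.
collatzSteps :: Int -> Int
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)

main :: IO ()
main = do
  print (len [10, 20, 30])
  print (collatzSteps 6)  -- 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
```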

~~~
ced
Thank you, that cleared things up.

Does Haskell allow you to assert that a function (like, say, the Sieve of
Eratosthenes algorithm) _does_ terminate, so that it may use that info to make
additional simplifications?

~~~
gtani
That's a good question that I had previously noted for follow-up, but never
followed up on.

Slide 50: how does Agda check termination?

<http://www.seas.upenn.edu/~sweirich/ssgip/main.pdf>

[http://research.microsoft.com/en-us/news/features/terminator...](http://research.microsoft.com/en-us/news/features/terminator.aspx)

------
drostie
I don't know if anyone else noticed, but I appreciated this little joke from
the article: in saying that 0 = 1 "allowed you to prove anything" he created a
circular reference from the article to itself and then stated that the
reference itself evaluated to `#fail`. :D

One of my web dev bosses talked a lot about how nobody really needs Turing
completeness and how most data-processing algorithms really just do a case
dispatch on a structured non-recursive data set, so that you can quickly prove
that it halts and be done with it. It was persuasive, and we wrote our data
structures in terms of themselves, but I am not sure that I would want to
write in a programming language designed for that. I'm glad that HTML and CSS
halt, for example; but I'm glad that when I want to write a recursive
factorial program in JS, it doesn't blow up.

I'm still working my way through what this all looks like in Haskell. Thanks
for linking it!

Edit: this is the second greatest infinite loop ever:

    
    
        data Silly a = Very (Silly a -> a)

        bad :: Silly a -> a
        bad (Very f) = f (Very f)

        ouch :: a
        ouch = bad (Very bad)
    

It is second only to the infinite loop in Chef: "Bake the potato. Wait until
baked."

~~~
lambdasquirrel
Hmm... it may be a little too late to comment on this and still be noticed,
but from what I understand, not all forms of recursion are created equal, and
you can do most things using total functions that can be guaranteed to
terminate. In fact, a compiler can check this automatically without any
modification to your code, using a couple of known proof strategies, the
simplest being structural (inductive) recursion and lexicographic orderings
on the arguments.
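A sketch of those two strategies (a termination checker like Agda's is assumed here; Haskell itself does not verify this, and the function names are my own): the first function recurses on a structurally smaller argument, while Ackermann needs a lexicographic ordering on the pair of arguments.

```haskell
-- Structurally decreasing: the argument shrinks by one on every call
-- (assuming n >= 0), which a checker accepts directly.
sumTo :: Integer -> Integer
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)

-- No single argument shrinks on every call, but the pair (m, n)
-- decreases in the lexicographic order, which is what a termination
-- checker would verify to accept this as total.
ack :: Integer -> Integer -> Integer
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))

main :: IO ()
main = do
  print (sumTo 4)
  print (ack 2 3)
```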

------
ecesena
[slightly OT] Does anyone think that Implicit Computational Complexity should
be explored in some "real" programming language?

Although current achievements are probably far away from being really
meaningful in practice, I see some potential. For instance, I think it would
be cool for a cloud provider to have bounds on the running time of programs
sent by clients, just because code is written in a certain language.

------
tramfjord
If we equate Haskell's Int type to a mathematical integer, then (loop 0) -> 1 +
1 + ...

1 + 1 + ... = 1 + (1 + 1 + ...)

We can still use equational reasoning; it's just that equational reasoning
works differently for divergent (non-terminating) expressions.
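That self-similar equation has a productive codata analogue that can be sketched in Haskell (the names here are my own): an infinite stream of ones satisfies the same fixed-point identity, but unlike the divergent sum, every finite prefix of it is observable.

```haskell
-- ones satisfies the equation  ones == 1 : ones,
-- the productive counterpart of  1 + 1 + ... == 1 + (1 + 1 + ...).
ones :: [Int]
ones = 1 : ones

-- Running partial sums of the stream: 1, 2, 3, ...
partialSums :: [Int]
partialSums = scanl1 (+) ones

main :: IO ()
main = do
  print (take 5 ones)
  print (take 4 partialSums)
  -- Equational reasoning on finite observations of the fixed point:
  print (take 5 ones == take 5 (1 : ones))
```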

