
Seemingly impossible functional programs (2007) - espeed
http://math.andrej.com/2007/09/28/seemingly-impossible-functional-programs/
======
skybrian
This code combines several tricks involving lazy evaluation. We can define a
type that requires a lazy function to ask for each bit of its input argument
one at a time, in whatever order the function's implementation decides.
Furthermore, we can partially evaluate any number of lazy function invocations
in parallel, without finishing any of them that we don't need.

As a result, laziness reveals information about a function's implementation in
a way that strict evaluation does not. It matters which parts of the input a
function actually reads.

Similarly, in non-lazy languages, functions that take callbacks reveal
information about themselves by calling the callback (or not). We can write
tests demonstrating the order in which a function calls its callback.
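For instance, in a strict language one might instrument the callback to see exactly which bits a predicate reads (the probe helper and the example predicate below are hypothetical, for illustration only):

```python
def probe(p):
    # Run predicate p on an all-zeros bit sequence, recording which
    # indices p actually queries. p takes a callback from index to bit.
    queried = []
    def seq(n):
        queried.append(n)
        return 0
    return p(seq), queried

# A predicate that only ever looks at bits 2 and 5:
result, reads = probe(lambda a: a(2) == 0 and a(5) == 0)
print(result, reads)   # True [2, 5]
```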

~~~
evanpw
This is exactly right. The lazy version seems like magic, but it's really just
exploiting the fact that if computing p(a) never evaluates a(n) for n >= m,
then p(b) = p(a) whenever the first m elements of a and b agree, so you can
cut off an infinite portion of the search space without checking its elements
individually.
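To illustrate with a made-up example: a predicate that never reads past index 1 (so m = 2) cannot distinguish two sequences that agree on their first two bits:

```python
# p reads only bits 0 and 1, so m = 2 suffices.
p = lambda a: a(0) == 1 and a(1) == 0

a = lambda n: 1 if n == 0 else 0       # 1, 0, 0, 0, 0, ...
b = lambda n: 1 if n % 2 == 0 else 0   # 1, 0, 1, 0, 1, ... agrees with a below 2

print(p(a), p(b))   # True True: p cannot tell a and b apart
```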

I wrote up a Python gist
([https://gist.github.com/evanpw/d69f1ccb5edd0b672e9e](https://gist.github.com/evanpw/d69f1ccb5edd0b672e9e))
which performs this same trick by explicitly snooping on the callbacks. It
seems a lot less magical than the Haskell version.

EDIT: The main "cheat" seems to be the ability to identify the predicate which
returns false on every input (e.g., by seeing if it returns false without
evaluating its argument). The continuity argument described in the article
basically says that if p is not always false, then there's some input a with
a(n) = 0 for all sufficiently large n that causes p to return true, so you
could just enumerate all such inputs until one returns true.

------
brianberns
I don't understand how this program can ever terminate, since there is no base
case. Every invocation of find_i starts by calling forsome, and every
invocation of forsome calls find_i. How can it possibly end?

Perhaps this relies subtly on Haskell's laziness? If so, I don't think there
is a "quick translation" to ML or OCaml.

~~~
cousin_it
Not every invocation of forsome calls find_i. It calls p(find_i(...)), but p
can return a value without actually evaluating the inner find_i. So yeah, it
relies on laziness, though you can translate it into a strict language by
inserting lambdas in a bunch of places.
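For concreteness, here is one way that translation might look in Python, with each lazily computed tail replaced by an explicit thunk (a sketch only; the names cons and lazy_cons and the example predicate are mine, not the article's):

```python
def cons(bit, a):
    # Prepend a bit to a sequence; a sequence is a function int -> bit.
    return lambda n: bit if n == 0 else a(n - 1)

def lazy_cons(bit, mk_rest):
    # Like cons, but the tail is a zero-argument thunk, forced (at most
    # once) only if some index beyond 0 is actually queried. This is the
    # "inserted lambda" standing in for Haskell's laziness.
    cache = []
    def seq(n):
        if n == 0:
            return bit
        if not cache:
            cache.append(mk_rest())
        return cache[0](n - 1)
    return seq

def find(p):
    # Return a sequence satisfying p if one exists (otherwise an arbitrary
    # sequence). Terminates for total p, since a total p can only inspect
    # finitely many bits of its argument.
    def branch(bit):
        return lazy_cons(bit, lambda: find(lambda a: p(cons(bit, a))))
    left = branch(0)
    return left if p(left) else branch(1)

def forsome(p):
    # Does p hold for some infinite binary sequence?
    return p(find(p))

witness = find(lambda a: a(1) == 0 and a(3) == 1)
print(witness(0), witness(1), witness(2), witness(3))   # 0 0 0 1
print(forsome(lambda a: a(1) == 0 and a(3) == 1))       # True
print(forsome(lambda a: False))                         # False
```

Note that p(left) may return without ever forcing the thunk inside left, which is exactly how the apparent infinite regress between find and forsome bottoms out.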

~~~
brianberns
Thank you. How does Haskell manage to avoid evaluating find_i(...) before
invoking p? Since the inner value is the argument to p, it's hard to imagine
how p can do anything until it has that value.

EDIT: I think I see the light. The result of find_i(...) is not what's sent to
p. Instead, Haskell passes a thunk that might never need to be evaluated.

~~~
tome
Exactly. If I write

    
    
    let f x = 1

    print (f (g somethingelse))
    

then the call to g is never evaluated. f returns 1 immediately.

------
Jaxan
I read this some years ago. I really enjoyed diving into this subject. My
favourite part is:

> Common wisdom tells us that function types don’t have decidable equality. In
> fact, e.g. the function type Integer -> Integer doesn’t have decidable
> equality because of the Halting Problem, as is well known. However, common
> wisdom is not always correct, and, in fact, some other function types do
> have decidable equality, for example the type Cantor -> y for any type y
> with decidable equality, without contradicting Turing [...] This seems
> strange, even fishy, because the Cantor space is in some sense bigger than
> the integers. In a follow-up post, I’ll explain that this has to do with the
> fact that the Cantor space is topologically compact, but the integers are
> not.
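To make the quoted claim concrete, here is a small self-contained Python sketch (thunks standing in for laziness; all names are mine, not the article's): equality of total functions on Cantor space is decided by searching for a point of disagreement.

```python
def cons(bit, a):
    # Prepend a bit to a sequence (a function int -> bit).
    return lambda n: bit if n == 0 else a(n - 1)

def lazy_cons(bit, mk_rest):
    # Prepend a bit; the tail is a thunk forced only on demand.
    cache = []
    def seq(n):
        if n == 0:
            return bit
        if not cache:
            cache.append(mk_rest())
        return cache[0](n - 1)
    return seq

def find(p):
    # A sequence satisfying p, if any exists (the article's search, sketched).
    def branch(bit):
        return lazy_cons(bit, lambda: find(lambda a: p(cons(bit, a))))
    left = branch(0)
    return left if p(left) else branch(1)

def equal(f, g):
    # f, g : Cantor -> y, with y having decidable equality. If f and g
    # disagree anywhere, find locates such a point; otherwise they are equal.
    differ = lambda a: f(a) != g(a)
    return not differ(find(differ))

f = lambda a: a(0) * a(2)   # reads bits 0 and 2
g = lambda a: a(2) * a(0)   # extensionally the same function as f
h = lambda a: a(1)          # differs from f on some inputs
print(equal(f, g), equal(f, h))   # True False
```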

------
black_knight
> It is natural to ask whether there are applications to program verification.
> I don’t know, but Dan Ghica and I speculate that there are, and we are
> planning to investigate this.

I wonder if they got any interesting results from this investigation. There
are some practical applications that work on infinite data (such as streams),
but I don't immediately see how this applies there.

------
jamesfisher
> The next version of the definition of Haskell will have a built-in type of
> natural numbers.

Did this ever happen? I haven't seen it.

~~~
Kutta
It's been there for a while, although not intensely advertised:
[https://hackage.haskell.org/package/base-4.8.1.0/docs/Numeric-Natural.html](https://hackage.haskell.org/package/base-4.8.1.0/docs/Numeric-Natural.html)

------
hellofunk
> I will use the language Haskell, but it is possible to quickly translate the
> programs to e.g. ML or OCaml.

The article predates C++11 by many years; with functions-as-values and lambdas
now in the language, it would be interesting to see whether the programs could
also be translated to C++.

~~~
lmm
C++ still doesn't have (tagged) union types, so you'd have to implement Maybe
by hand, which is a little tedious. But I don't see anything in this code that
requires laziness or the Haskell type system, so you can translate it fairly
directly into pretty much any language (though, as a sibling comment mentions,
in a language without TCO you might have to manually trampoline it - again
tedious, and slow).

~~~
hellofunk
Tagged unions are a pretty common exercise in C++, they don't require much
code to implement.

As for Maybe, if it is basically an "optional" as in other languages, then the
newer C++17 has <optional>, if I'm not mistaken.

