
Programming With Nothing: FizzBuzz in the lambda calculus in Ruby - tomstuart
http://experthuman.com/programming-with-nothing
======
raganwald
I remember an interview with Quincy Jones, where he was asked "Which song do
you wish you'd written yourself?" His answer was "Strange Fruit," a
tremendously significant jazz standard (if it's new to you, listen to it
without thinking about the lyrics, then read the lyrics and listen to it
again).

I have to say, this is the essay I wish I had written. It's beautiful by every
one of my standards of beauty, most especially in that the journey of writing
it appears to be even more attractive than the pleasure of reading it.

I'm glad to read it today,

Thank you!

------
mudgemeister
The presentation from which this article is adapted was a definite highlight
of the Ru3y Manor conference (and received rapturous applause).

I highly recommend watching the video of the original presentation at
<http://rubymanor.org/3/videos/programming_with_nothing/> as Tom Stuart's
public speaking skills made this a thoroughly enjoyable (if a little mind-
bending) talk.

~~~
jgwhite
Tom’s talk blew my mind. Can't recommend watching this enough.

------
psykotic
Gorgeous piece of writing!

You don't actually need the Y combinator for any of the cases presented like
mod, range, etc. Church numeral iterators are more than sufficient for the
task.

I'll use Haskell to illustrate, but you could easily translate this into his
subset of Ruby.

        -- represent n as a Church numeral
        iterate 0 f x = x
        iterate n f x = f (iterate (n-1) f x)

        -- m modulo n can be calculated with at most m conditional subtraction steps
        mod m n = iterate m (\x -> if x < n then x else x-n) m

        -- build the range back to front using a (number, list) pair as state
        range m n = snd (iterate (n-m) (\(x, xs) -> (x-1, x:xs)) (n-1, []))

The mod implementation is an example of a general pattern. Whenever you can
bound the number of iterations in an algorithm as a computable function of the
arguments, you can implement the algorithm by computing the upper bound and
iterating that many times with an iterator function that acts like the
identity once it reaches its base case (for mod, the case is x < n).
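The same bounded-iteration pattern can be sketched in Ruby lambdas (the names `ZERO`, `SUCC`, and `bounded_mod` are illustrative, not the article's, and the step function uses native integers internally, just as the Haskell above does):

```ruby
# Church numerals as Ruby lambdas (illustrative names, not the article's)
ZERO = ->(f) { ->(x) { x } }
SUCC = ->(n) { ->(f) { ->(x) { f.(n.(f).(x)) } } }

# the bounded-iteration pattern: iterate m times with a step function
# that acts as the identity once the base case (x < n) is reached
def bounded_mod(m, n)
  church_m = m.times.reduce(ZERO) { |acc, _| SUCC.(acc) }  # Church numeral for m
  step = ->(x) { x < n ? x : x - n }
  church_m.(step).(m)                                      # apply step m times to m
end
```

Since `m` bounds the number of conditional subtractions, no fixed-point combinator is needed.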

The range implementation displays another important method called 'tupling' or
more generally 'strengthening the induction hypothesis'. It underlies the
predecessor/decrement function for Church numerals which the author of the
article presents but chooses not to explain; the idea is simple, if rather
inspired. Rather than iteratively compute n-1 as a function of n, we will
compute a more general datum, the pair (n-1, n). That might seem like a
pointless change, but when formulated this way, the problem becomes
surprisingly easy:

        dec n = fst (iterate n (\(_, x) -> (x, x+1)) (0, 0))
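The same tupling trick, sketched in Ruby with a two-element array as the pair (`three` and `dec` are hand-built illustrations, not the article's definitions):

```ruby
# Church numeral for 3, written out by hand
three = ->(f) { ->(x) { f.(f.(f.(x))) } }

# predecessor via tupling: the state walks (0, 0) -> (0, 1) -> (1, 2) -> ...
# so the first component always trails one step behind
dec = ->(church_n) { church_n.(->(pair) { [pair[1], pair[1] + 1] }).([0, 0]).first }
```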

~~~
kd0amg
Your iterate function as written relies on a top-level define feature, which
pure lambda calculus lacks (motivating the use of fixed-point combinators).

~~~
psykotic
No, iterate is just a helper function to convert a Haskell integer to a Church
numeral. If the inputs were directly represented as Church numerals, it
wouldn't be needed and you'd just replace every instance of iterate n with n
itself. I thought this would be evident to someone who had read the article
and understood Church numerals, so I didn't go into detail about it.
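Concretely, in Ruby terms (names made up for illustration): a Church numeral is its own iterator, so `iterate n f x` is just `n.(f).(x)`:

```ruby
# the Church numeral TWO *is* its own iterator: it applies f twice
TWO = ->(f) { ->(x) { f.(f.(x)) } }
double = ->(x) { x * 2 }

TWO.(double).(3)  # "iterate 2 double 3" with no helper needed → 12
```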

Does that clear up your confusion?

~~~
kd0amg
Ah, I see. My mistake.

------
patio11
Holy cow.

This could have replaced ~8 weeks of my CS languages/compilers class, and I
would have understood the material better at the end of it.

~~~
barrkel
I don't know.

I see it as a kind of entertaining academic game - a Glass Bead Game, if you
will, and I intend the deep allusion - but I wouldn't put too much faith in it
teaching you much about the mechanics of compilers. It's one way of
decomposing semantics into simpler elements, but it's not the one chosen
for almost all practical languages, which after all have to execute on
silicon, not in the lambda calculus.

It doesn't teach anything about parsers, nor does it exercise much
thinking in trees. It does emphasize recursion, but in a way that makes it
seem like an absurd roundabout way of doing things, rather than something that
more usually simplifies the expression of a program. Above all, it doesn't
really demystify how the text of your program changes the coloured lights on
your screen. It's a splendidly constructed wonderland, a pyramid of
abstraction with nothing but function application at its core, but I don't
think it leaves you with a lot more than a sense of having seen something very
clever.

(FWIW, nothing in the presentation was new to me, so any residual sense of
wonder has faded. Take that into account in my perhaps cynical judgement.)

~~~
munin
Parsing is, in my opinion, the crappiest part of compiler writing. I feel
pretty safe saying that because we have made tools to automate or near-
automate the act of writing parsers for compilers ...

I would argue that there's a big difference between "demystify how the text of
your program changes the coloured lights on your screen" and "demystify why
the text of your program changes the coloured lights on your screen". If you
are interested only in the first question, you might be an engineer. If you
are also interested in the second question, you might be a computer
scientist...

~~~
barrkel
Parsing is the best understood part of writing compilers; that's why it has
tools, not because it is the "crappiest". (If anything, the wealth of free
tools available gives a clue as to how fun dealing with it is.) But using the
tools well requires some understanding of how they work; and if you're doing
an industrial-strength parsing job, you'll probably end up writing the parser
by hand, because what a tool gives you - speed in converting specification
into implementation - is not usually the constraining factor; rather,
functionality and performance of the end result are.

As to your question "why", nothing about the lambda calculus will tell you
anything about why your program changes the coloured lights. There is only
"how" and "will", by which I mean human agency. There is no answer to "why"
here, and there cannot be, because the "why" resides in people's minds. It
takes no more effort to believe in "if" than in beta reduction.

Take that single example: implementing if as a primitive rather than a
function with lazily evaluated arguments means greatly increasing practicality
at the cost of the sparse beauty of minimalism. 'If' is very common;
optimizing it, diagnosing misuses of it, etc. is a lot harder once you've lost
it in a forest of function applications.

~~~
gruseom
I agree with everything in your first paragraph and would add the following:
parsing is overrated. It's interesting the way that crossword puzzles are.
Nothing wrong with that, but it can be a distraction; it's just not that deep
a space.

That's not to say that the people who worked out how to do it in the first
place weren't brilliant. They were, and it was a hard problem. But it's a
solved one.

~~~
barrkel
There are still some fairly hard problems in parsing. For example, doing
minimal work to convert a series of text deltas into abstract syntax tree
deltas, using caching to avoid throwing away too much. This is highly relevant
to IDEs for providing code completion and other analysis, but it's usually
solved with a mix of brute force - restarting the whole parse from the top -
and trickery, such as skipping uninteresting function bodies, or parsing a
much simplified version of the language that ignores many productions.

~~~
gruseom
I didn't know that. Thanks.

------
hendzen
That was pretty cool. Interesting to note that the numbers he used are
essentially an implementation of the Peano Axioms, where the successor
function wraps the predecessor in a lambda.
(<http://en.wikipedia.org/wiki/Peano_Axioms>)

Here's a simple recursively defined number system in scheme:
<https://gist.github.com/1466985>
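A similarly minimal Peano-style sketch in Ruby (hypothetical names; `nil` stands in for zero, and the successor wraps its predecessor in a lambda, as described above):

```ruby
# Peano-style numbers: nil stands in for zero, and the successor
# wraps its predecessor in a lambda
peano_zero = nil
succ = ->(n) { -> { n } }  # successor: wrap the predecessor
pred = ->(n) { n.() }      # predecessor: peel one layer off

to_int = ->(n) { n.nil? ? 0 : 1 + to_int.(pred.(n)) }

two = succ.(succ.(peano_zero))
to_int.(two)  # → 2
```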

~~~
rmcclellan
Indeed. This encoding is due to Church
(<http://en.wikipedia.org/wiki/Church_encoding>).

------
gnaritas
Excellent article, though I had to laugh at the introduction of the if
statement just to avoid the appearance of calling the boolean directly, which
happens to be exactly how Smalltalk implements its if statements: a direct
call to the boolean, passing a block as the argument. This approach to
programming is why Smalltalk has only a handful of reserved words vs. the
80-ish I think Ruby has.
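A Ruby sketch of that Smalltalk-style dispatch, with Church booleans called directly (names are made up for illustration):

```ruby
# a Church boolean picks one of its two arguments
church_true  = ->(t) { ->(f) { t } }
church_false = ->(t) { ->(f) { f } }

# "if" is just a direct call to the boolean, passing thunks, Smalltalk-style
branch = church_true.(-> { 'then branch' }).(-> { 'else branch' }).()
# branch == 'then branch'
```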

~~~
raganwald
True, however the stated goal here was to replicate the Ruby example more-or-
less as-is. Which means building something elegant and then greenspunning
cruft on top of it :-)

Your comment highlights how simple things we take for granted as basic ideas
(like if statements) may not be as axiomatic as we assume.

------
rbxbx
Yes yes yes, wonderful article. Look forward to watching the video later as
well :)

I did a lightning talk at SCNA this year which covered similar, if somewhat
different, and certainly less comprehensive ground, which may be of interest
to readers of this thread/article.

<http://git.io/objects-as-closures> (full code and everything)

<https://gist.github.com/1372131#file_v2.md> (outline/notes)

(please don't make fun of me, Scheme peeps)

------
js2
This is really wonderful, and I felt like I was reading something Peter Norvig
would write (modulo s/Ruby/Python/ not that it matters here). If you enjoyed
it, SICP belongs on your reading list.

------
i2
The same can be done with Python lambdas:

      >>> ZERO = lambda f: lambda x: x
      >>> FIVE = lambda f: lambda x: f(f(f(f(f(x)))))
      >>> to_int = lambda f: f(lambda x: x+1)(0)
      ... etc.

~~~
neilk
Mark-Jason Dominus did this for Perl, more than a decade ago.

<http://perl.plover.com/lambda/> aka "How to write a 163 line program to
compute 1+1"

Although I really like the OP's approach since he slowly morphs a program his
readers can understand, rather than constructing a programming system from
scratch.

------
Cushman
The title of this article, "Programming With Nothing", piques my interest.

I don't have a strong CS background, so this might be a silly question... But
obviously, on a computer, "code" is really data, a series of bytes that
instructs the processor what to do. In a literal sense, this sort of lambda
calculus implementation isn't far removed from bog-standard procedural
programming. What's the actual philosophical background here-- what _is_ a
function, really? What makes it special?

~~~
nandemo
Well, all sufficiently strong programming languages are equivalent in the
Turing sense, and Turing machines are equivalent to lambda calculus in
computational power.

That said, lambda calculus is rather different from standard procedural
programming. The closest thing to it in the "real world" would be functional
languages such as Scheme, ML and Haskell.

What makes functions special? I think the article answered that: with very
simple ingredients, namely recursive functions that take only 1 argument, you
can essentially write any program that you could write with full-fledged Ruby
(or any other Turing-complete programming language).
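The "only 1 argument" restriction is less limiting than it sounds; any multi-argument function can be curried into nested one-argument functions, e.g. in Ruby:

```ruby
# currying: a two-argument add expressed as nested one-argument lambdas
add = ->(a) { ->(b) { a + b } }

add.(2).(3)          # → 5
increment = add.(1)  # partial application falls out for free
increment.(41)       # → 42
```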

------
lambdapilgrim
I have been working through SICP. In Chapter 2 they introduce Church numerals.
I wrote a blog post recently demonstrating, by the method of substitution, how
arithmetic operators on Church numerals actually work. Would appreciate your
comments about it. EDIT: Here it is:
<http://lambdapilgrim.posterous.com/numbers-without-numerals>

------
julius
Being on the frontpage here... shouldn't there be an extra section "Recursion,
briefly"... (<http://en.wikipedia.org/wiki/Y_combinator>)

------
ufo
The only thing I miss here is pointing out that you can use function
application (\map -> ...)(definition_of_map) instead of defines/assignments.

But then the final piece of code would not look as awesome so whatever :)
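In Ruby that define-free binding looks something like this (a toy example, not from the article):

```ruby
# instead of:  double = ->(x) { x * 2 }; double.(21)
# bind "double" by applying a lambda to its definition:
answer = ->(double) { double.(21) }.(->(x) { x * 2 })
# answer == 42
```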

------
bitops
Great article, and to me it's further evidence of Lisp's greatness (a great
influencer of Ruby). If your primitives are powerful enough, you should be
able to build most of the language yourself from the ground up.

Wasn't it Paul Graham who said that Ruby is an acceptable Lisp?

~~~
raganwald
Eric Kidd?

<http://www.randomhacks.net/articles/2005/12/03/why-ruby-is-an-acceptable-lisp>

------
MonkeyCoder
Another decent example of lambda calculus (factorial in Scheme):
<http://blogs.msdn.com/b/ashleyf/archive/2008/12/03/the-lambda-calculus.aspx>

------
groovy2shoes
Matt Might has some similar articles on his blog:
<http://matt.might.net/articles/> (see under "Functional Programming").

------
Tyr42
You say the Y combinator, as presented, could be done in Haskell, but it's got
an infinite type, so it's a bit trickier to get working.
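In untyped Ruby there is no type to become infinite, so the combinator goes through directly; a sketch of the strict-evaluation variant (the Z combinator, with hypothetical names):

```ruby
# the Z combinator: the Y combinator adapted for strict evaluation
Z = ->(f) { ->(x) { f.(->(v) { x.(x).(v) }) }.(->(x) { f.(->(v) { x.(x).(v) }) }) }

fact = Z.(->(rec) { ->(n) { n.zero? ? 1 : n * rec.(n - 1) } })
fact.(5)  # → 120
```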

------
jonbro
This is the definition of a Turing tarpit.

~~~
kd0amg
I think I'd be more inclined to call this version a Church tarpit.

------
tripa
Nice coincidence, I spent part of my weekend hacking on an Unlambda FizzBuzz.

------
skylan_q
!

Now functional programming and the lambda calculus make sense.

This is un-__cking-real.

Thank you.

