
Programming Without Variables - ProgC
http://www.drdobbs.com/cpp/programming-without-variables/240161204
======
crntaylor
Nice article. I don't really understand the final point, though. Here's a
simple implementation of the Collatz function that returns the entire list of
iterates, uses a simple data structure, and is O(I) in the number of
iterations I:

    
    
        collatz_vec :: Int -> [Int]
        collatz_vec = reverse . go []
          where
            go acc 1 = acc
            go acc n = go (n:acc) (if even n
                                     then div n 2
                                     else 3 * n + 1)
    

You simply push new values to the _front_ of the list rather than the end, and
then reverse the list all in one go at the end. Sure, you could use a fancier
data type like an IntMap, but why bother when you already have linear
complexity with a simple one?

Edit: In fact, here's an even simpler solution that doesn't need to reverse
the final result

    
    
        collatz_vec :: Int -> [Int]
        collatz_vec = takeWhile (/= 1) . iterate step
          where
            step 1 = 1
            step n = if even n then div n 2 else 3 * n + 1
    

It's not tail recursive (because it uses `iterate`) but it's lazy -- so if the
elements of the list are consumed as they are created, the whole computation
can be done in constant space as well as being fast.

~~~
secretguy
Or perhaps like this if you do not want to use `reverse` and want it to be
tail recursive:

    
    
        collatz :: Int -> [Int]
        collatz x = go id x []
          where
            go :: ([Int] -> [Int]) -> Int -> [Int] -> [Int]
            go f 1 = f . (1:) 
            go f n = 
              if even n
                then go (f . (n:)) (n `div` 2)
                else go (f . (n:)) (3 * n + 1)

~~~
masklinn
Why would you want to avoid reverse? (Tail recursion is orthogonal, and as I
understand it not really useful in Haskell unless you force strict evaluation,
though I may be wrong.)

~~~
zeckalpha
It increases the constant factor, since reverse adds an extra O(n) pass over
the result.

------
byte_bach
Worth noting that the assumption that the Collatz function terminates ('while
n != 1') is actually an open mathematical question. It has been checked for
initial values up to the godzillions, but there is no proof yet that the
iteration terminates for every input. Paul Erdős reportedly said, "Mathematics
is not yet ready for such problems."

------
n00b101
Next week: linked lists?

Linked lists have performance problems on computers. You need contiguous
memory arrays for performance, since Direct Memory Access engines, caches, and
pipelining all achieve peak performance only when memory is strictly accessed
in contiguous chunks. How do functional language implementations deal with
that problem?

~~~
Fixnum
For one thing, you generate as few lists as possible by aggressively fusing
away intermediate ones (at least if your language is pure).

Also, the garbage collector will often move lists into contiguous region(s) of
memory.

More to the point, though, sometimes a list is the appropriate data structure
for a problem (like returning all intermediate results from the Collatz call,
where you don't know how many there will be). Sure, you could use an array and
double its size every now and then, but you're doing an awful lot of violence
to your program just to handle something the runtime could take care of for
you :)

And yes, I know there will be a slight performance penalty under many
circumstances.

------
agentultra
It was like reading the Little Schemer in C++. If you want to see how far you
can take this I suggest reading that book. It's fundamental and enlightening
at the same time.

~~~
gtani
+1

also Land of Lisp and Realm of Racket (the sequel will be Solar System of
Macros and Special Forms).

[http://realmofracket.com/](http://realmofracket.com/)

[http://landoflisp.com/](http://landoflisp.com/)

------
kazagistar
The question I always ask is... why? You can come up with all kinds of crazy
ways to program. You can probably even come up with all kinds of crazy
rationalizations about "purity" or "math" or "logic" or "simplicity" or
whatever other difficult-to-measure metric.

When it comes down to it, however, programming languages are neither for
"simple expression of ideas" nor "execution by computers". They are for humans
to write, read, and reason about the actions of computers. The biggest
challenge most programmers face is not "how do I express this in an elegant,
pure way that shows how smart I am?" but "how the heck does this giant
existing code base work exactly?" and "how can I modify it in such a way that
does not make it harder for other developers?"

A key goal of a language should be ease of reasoning. The conjecture that
removing variables does this for most people is wholly unproven.

------
tehwalrus
> _Next week, we'll look at what such a data structure might be._
    
    
        (8 . (4 . (2 . (1 . NIL))))
    

(There's a rage face to accompany this answer, the one for an overexcited noob
who knew an answer, but rage faces don't belong on HN.)

(I'm unclear whether the array version should include the 1, since it is not
"counted").

~~~
dborg
Actually you'd want to save it as (1 . (2 . (4 . (8 . NIL)))), since otherwise
you'd get quadratic time complexity: you'd have to walk the whole list every
time you add the next value.

~~~
tehwalrus
True! I forgot that you only keep a pointer to the start.

------
frevd
I'd rather like to see an article about the opposite - mutables in math, or
better, programming in math. I know iterative algorithms and math do not have
much in common, but it would be fascinating to explore some possibilities of
opening up math to that kind of sequential calculation.

~~~
kazagistar
Math generally isn't even recursive. When it is, it is only so as a byproduct
of its declarative nature. You don't usually deal with instances, but the set
of all possible valid instances.

That said, I really don't see the point of making programming look more like
math... the deep distaste most people have for the abstract symbol
manipulations therein, and the clear favoring of instances and examples, seems
to pretty effectively point towards the opposite direction.

------
msh
but he still has variables...

~~~
jfarmer
There is no place where he's assigning a value to an already-defined variable.
Nothing is "varying." Perhaps the restriction can be put this way: you're only
allowed to define variables by naming them as an argument to some function and
then calling that function with some value.

~~~
derefr
Still, this isn't what I expected when I clicked the link; I was expecting
something about point-free or combinatoric programming :)

~~~
jfarmer
Given that it started with some relatively messy C++, I don't think this
article was aimed at folks who even know what those words mean.

------
username42
I do not understand how this can be a recent article. Programming without
variables was the hype in the early 90s with languages like graal
([http://link.springer.com/chapter/10.1007%2F3-540-16442-1_6](http://link.springer.com/chapter/10.1007%2F3-540-16442-1_6)).
Now, the closest current hype is immutable data structures (for example, in
Scala).

~~~
username42
Please add comments to explain why you are modding me down so much.

I love when Gerald Jay Sussman explains the difference between recursion and
iteration ([http://ocw.mit.edu/courses/electrical-engineering-and-
comput...](http://ocw.mit.edu/courses/electrical-engineering-and-computer-
science/6-001-structure-and-interpretation-of-computer-programs-
spring-2005/video-lectures/)), but I really feel this is history of computer
science, not hot news.

~~~
jfarmer
Your comment is what has come to be called a "middlebrow dismissal," _cf._
[https://news.ycombinator.com/item?id=4693920](https://news.ycombinator.com/item?id=4693920).

~~~
username42
Yes, I really think that the article "Programming Without Variables" does not
deserve the first page of HN, but I do not have enough karma to mod it down.
The title refers to an interesting subject, but the content is almost entirely
technicalities of converting a loop into a recursive function, and then into a
tail-recursive one. The level is so poor that it is really disappointing.

~~~
gknoy
Many of the readers here are programmers; a well written article (or blog
post, even) about coding in a way which lets your compiler use tail recursion
optimizations is Pretty Interesting to many of us, even if we already know in
our hearts that recursion can be efficient.

The fact that he's doing this in a non-lisp makes it more accessible to some,
and just interesting to others. I think that's OK to have on the HN front
page, even if it's not something you personally find interesting.

Try considering that HN is a resource to more than just you or just me.
Consider that some programmers here might be more or less experienced than us,
and that it's possible this might be the window into "oh, hey, recursion!"
that some new programmer needed in order to discover a new avenue of learning.

In that light, this is a pretty worthy article on HN. I especially like that
he goes into implementation detail, rather than just saying, "Duh, use tail
recursion", and thus ends up explaining how to actually do it well (in case a
reader didn't know).

