Programming Without Variables (drdobbs.com)
73 points by ProgC on Sept 13, 2013 | 36 comments



Nice article. I don't really understand the final point, though. Here's a simple implementation of the Collatz function that returns the entire list of values iterated through, using a simple data structure, and that is O(I) in the number of iterations I:

    collatz_vec :: Int -> [Int]
    collatz_vec = reverse . go []
      where
        go acc 1 = acc
        go acc n = go (n:acc) (if even n
                                 then div n 2
                                 else 3 * n + 1)
You simply push new values to the front of the list rather than the end, and then reverse the list all in one go at the end. Sure, you could use a fancier data type like an IntMap, but why bother when you already have linear complexity with a simple one?

Edit: In fact, here's an even simpler solution that doesn't need to reverse the final result:

    collatz_vec :: Int -> [Int]
    collatz_vec = takeWhile (/= 1) . iterate step
      where
        step 1 = 1
        step n = if even n then div n 2 else 3 * n + 1
It's not tail recursive (because it uses `iterate`) but it's lazy -- so if the elements of the list are consumed as they are created, the whole computation can be done in constant space as well as being fast.
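A quick usage sketch of that point (the `main` wrapper is mine): as long as nothing else holds on to the head of the list, a consumer like `length` walks it as it is produced and each cell becomes garbage right away.

    main :: IO ()
    main = print (length (collatz_vec 27))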


Or perhaps like this if you do not want to use `reverse` and want it to be tail recursive:

    collatz :: Int -> [Int]
    collatz x = go id x []
      where
        go :: ([Int] -> [Int]) -> Int -> [Int] -> [Int]
        go f 1 = f . (1:) 
        go f n = 
          if even n
            then go (f . (n:)) (n `div` 2)
            else go (f . (n:)) (3 * n + 1)


Why would you want to avoid using reverse? (Tail recursion is orthogonal, and as I understand it not really useful in Haskell unless you force strict evaluation, though I may be wrong.)
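For what it's worth, here's a rough sketch of what "force strict evaluation" can mean here (the BangPatterns version and the name `collatzSteps` are mine, not from the article): without the bangs the accumulator piles up thunks; with them the tail-recursive loop runs in constant space.

    {-# LANGUAGE BangPatterns #-}

    -- Count the steps with a strict accumulator so the tail call does not
    -- build up a chain of unevaluated (+ 1) thunks.
    collatzSteps :: Int -> Int
    collatzSteps = go 0
      where
        go !steps 1 = steps
        go !steps n
          | even n    = go (steps + 1) (n `div` 2)
          | otherwise = go (steps + 1) (3 * n + 1)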


It increases the constant factor, since reverse is itself O(n).


Part of the point seems to be to explain the efficient implementation of such an approach. (How would you write GHC if you didn't have it available...?) Try to translate your code into C++...


I don't write C++, but if it has linked lists then at least the first one should be trivial. The second one would probably involve function pointers.

Maybe his point was that you need to use a linked list rather than an array to get an efficient solution? Coming from a Scheme/ML/Haskell background I tend to view linked lists as much simpler than arrays!


Consider that the author concluded with:

> In effect, the whole idea of an array is built on being able to change part of it, and if we intend to avoid changing memory once we have put values into it, we need another kind of data structure entirely. Next week, we'll look at what such a data structure might be.

I'm assuming that a linked list is precisely the data structure he'll introduce next week. ;)


Worth noting that the assumption that the Collatz function terminates ('while n != 1') is actually an open mathematical question. It's been checked for initial values up to the godzillions, but there is no proof yet that this function will always terminate for all inputs. Paul Erdős was reported to have said "Mathematics is not yet ready for such problems".


Next week: linked lists?

Linked lists have performance problems on modern hardware. You need contiguous arrays for performance, since Direct Memory Access engines, caches, and pipelining all achieve peak performance only when memory is accessed strictly in contiguous chunks. How do functional language implementations deal with that problem?


For one thing, you generate as few lists as possible by aggressively fusing away intermediate ones (at least if your language is pure).

Also, the garbage collector will often move lists into contiguous region(s) of memory.

More to the point, though, sometimes a list is the appropriate data structure for a problem (like returning all intermediate results from the Collatz call, where you don't know how many there will be). Sure, you could use an array and double its size every now and then, but you're doing an awful lot of violence to your program just to handle something the runtime could take care of for you :)

And yes, I know there will be a slight performance penalty under many circumstances.
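As a rough sketch of what "fusing away intermediate lists" buys you (the function name is mine, and the claim assumes GHC with optimizations turned on, e.g. -O2):

    -- With GHC's foldr/build fusion, the enumeration, filter, map and sum
    -- below typically compile down to a single loop over the numbers, with
    -- no intermediate list ever materialized.
    sumSquaresOfOdds :: Int -> Int
    sumSquaresOfOdds n = sum (map (^ 2) (filter odd [1 .. n]))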


It was like reading the Little Schemer in C++. If you want to see how far you can take this I suggest reading that book. It's fundamental and enlightening at the same time.


+1

Also Land of Lisp and Realm of Racket (the sequel will be Solar System of Macros and Special Forms).

http://realmofracket.com/

http://landoflisp.com/


The question I always ask is... why? You can come up with all kinds of crazy ways to program. You can probably even come up with all kinds of crazy rationalizations about "purity" or "math" or "logical" or "simple" or whatever other difficult to measure metric.

When it comes down to it, however, programming languages are neither for "simple expression of ideas" nor "execution by computers". They are for humans to write, read, and reason about the actions of computers. The biggest challenge most programmers face is not "how do I express this in an elegant, pure way that shows how smart I am?" but "how the heck does this giant existing code base work exactly?" and "how can I modify it in such a way that does not make it harder for other developers?"

A key goal of a language should be ease of reasoning. The conjecture that removing variables does this for most people is wholly unproven.


> Next week, we'll look at what such a data structure might be.

    (8 . (4 . (2 . (1 . NIL))))
(There's a rage face to accompany this answer, the one for an overexcited noob who knew an answer, but rage faces don't belong on HN.)

(I'm unclear whether the array version should include the 1, since it is not "counted").


Actually you'd want to save it as (1 . (2 . (4 . (8 . NIL)))), since otherwise you'd still get quadratic time complexity: you need to walk your whole list each time you add the next value at the end.
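A tiny sketch of the difference (function names are mine):

    -- Consing onto the front is O(1) per element, so building the list this
    -- way is O(n) overall; it just comes out in reverse order.
    buildFront :: [Int] -> [Int]
    buildFront = foldl (flip (:)) []

    -- Appending to the back walks (and copies) the accumulator each time,
    -- which is what makes the overall construction O(n^2).
    buildBack :: [Int] -> [Int]
    buildBack = foldl (\acc x -> acc ++ [x]) []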


True! I forgot that you only keep a pointer to the start.


or build it lazily...


I'd rather like to see an article about the opposite: mutables in math, or better, programming in math. I know iterative algorithms and math do not have much in common, but it would be fascinating to explore some possibilities of opening math up to that kind of sequential calculation.


Math generally isn't even recursive. When it is, it is only so as a byproduct of its declarative nature. You don't usually deal with instances, but the set of all possible valid instances.

That said, I really don't see the point of making programming look more like math... the deep distaste most people have for the abstract symbol manipulation involved, and their clear preference for instances and examples, seem to point pretty effectively in the opposite direction.


but he still has variables...


Here’s a (non–tail-recursive) solution in Factor that doesn’t.

    : collatz ( n -- length )
    { { [ dup 1 = ] [ drop 0 ] }
      { [ dup even? ] [ 2 / collatz 1 + ] }
      [ 3 * 1 + collatz 1 + ] }
      cond ;
Usage:

    27 collatz


There is no place where he's assigning a value to an already-defined variable. Nothing is "varying." Perhaps the restriction can be put this way: you're only allowed to define variables by naming them as an argument to some function and then calling that function with some value.
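A hedged illustration of that reading (the example is mine, not from the article): the "loop variables" `i` and `total` below are never reassigned; each iteration is a fresh call to `go` that binds new values to the same parameter names.

    -- Sum 1..n without ever mutating a variable: the loop state lives only
    -- in the arguments of go.
    sumTo :: Int -> Int
    sumTo n = go 1 0
      where
        go i total
          | i > n     = total
          | otherwise = go (i + 1) (total + i)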


Still, this isn't what I expected when I clicked the link; I was expecting something about point-free or combinator-based programming :)


Given that it started with some relatively messy C++, I don't think this article was aimed at folks who even know what those words mean.


Yeah, I was expecting point-free also.


they could all be declared const and the program would still run.


he is using them as values


you might have a point, but it was not what I was expecting when I clicked the link.


I do not understand how this can be a recent article. Programming without variables was all the hype in the early 90s, with languages like graal (http://link.springer.com/chapter/10.1007%2F3-540-16442-1_6). Now the closest equivalent hype is immutable data structures (for example in Scala).


This is a column, not news. In addition, he is using immutable data; that's the entire point of the article. So the people voting you down are probably assuming that you both didn't read the article and are judging it by strange criteria.


Please add comments to explain why you are modding me down so much.

I love it when Gerald Jay Sussman explains the difference between recursion and iteration (http://ocw.mit.edu/courses/electrical-engineering-and-comput...), but I really feel this is the history of computer science, not hot news.


What the community is telling you is that your view of what's a fit for HN is not consistent with theirs. As you're comparatively new to HN, your view of what should be on the front page is likely to be incomplete.

In addition, the dismissive tone of your comment, based on your having seen a similar feature years ago in an obscure language, rubs people the wrong way. Are we never supposed to discuss coding techniques or algorithms simply because they were once discussed elsewhere years ago?

Enjoy HN and contribute to the dialog; try not to attack other people's contributions.


Your comment is what has come to be called a "middlebrow dismissal," cf. https://news.ycombinator.com/item?id=4693920.


Yes, I really think that the article "Programming Without Variables" does not deserve the front page of HN, but I do not have enough karma to mod it down. The title refers to an interesting subject, but the content is almost entirely about the technicalities of converting a loop into a recursive function, and then into a tail-recursive one. The level is so poor that it is really disappointing.


Many of the readers here are programmers; a well written article (or blog post, even) about coding in a way which lets your compiler use tail recursion optimizations is Pretty Interesting to many of us, even if we already know in our hearts that recursion can be efficient.

The fact that he's doing this in a non-lisp makes it more accessible to some, and just interesting to others. I think that's OK to have on the HN front page, even if it's not something you personally find interesting.

Try considering that HN is a resource to more than just you or just me. Consider that some programmers here might be more or less experienced than us, and that it's possible this might be the window into "oh, hey, recursion!" that some new programmer needed in order to discover a new avenue of learning.

In that light, this is a pretty worthy article on HN. I especially like that he goes into implementation detail, rather than just saying, "Duh, use tail recursion", and thus ends up explaining how to actually do it well (in case a reader didn't know).


The "level is so poor?" I don't know what that could possibly mean except "This article is beneath me."

Well, that's fine. It was written for folks who had never thought about the relationship between iteration and recursion in this way, and it is clearly angling towards introducing something like a point-free style. Perhaps there are more people like that on HN than you thought.

Both your original comment and this one are prototypical examples of a middlebrow dismissal. You're not actually saying anything about the article, but you're telling us a lot about how good you think your taste is.



