It Is What It Is (And Nothing Else) – Why Recursion Still Matters (existentialtype.wordpress.com)
52 points by ingve on Feb 22, 2016 | 72 comments



This is a prime example of the conflict between the "machine model" of programming (a computer is a large and elaborate state machine that advances instruction by instruction) and the "maths model" (by reasoning from a set of axioms and rules of production we can transform information or prove facts).

The former maps onto imperative programming, and the latter onto functional programming. It seems that different people find these models harder or easier to understand - and then get confused when not everyone sees it that way.


I think this POV is limiting. In my experience, many people who take the "maths model" respect and use the "machine model" quite often as well.

The formal term of note here is "denotation" in that "a language may be understood through its denotation" as an abstract machine or as a mathematical object. The best understanding comes from recognizing that many denotations may be valid for a language and that great richness comes from this. To work, the denotations must be in alignment in various ways and when they are it means that you can transport understanding from one denotation to another to the language itself.

So I don't think there's an honest conflict, though I imagine there are programmers and PL experts who either (a) honestly only ever use one denotation or (b) play favorites and this may lead to some clouding of things.


Yes, you can use both, but look at all the disputation of preference going on elsewhere in this thread - that's what I meant by a conflict.

I like your phrase "transport understanding"; an important part of programming is dealing with informal understandings of what the software does or is intended to do. Most programming has very little formal reasoning in it.


Ok, total agreement there. I think denotations are much like languages in the sense that you pick one you like and defend it to the death, but, should you choose, you could always learn to use more and would profit massively from doing so.

So there are holy wars to be expected.

I think you point out something really interesting with this notion of formal/informal. I think it's tough to draw that line because a lot of "formal" reasoning is designed to be informally used but needs formality in its presentation.

Or, to be more concrete, I think that most programmers use a slew of semi-actualized denotations when understanding code. Usually this is a transparent process and a skill gained by experienced programmers is to align your semi-actualized denotations so that they both agree with one another and with reality.

Formalism is just a tool to get ahold of these and fully actualize them. It's something one could choose to invest in—or not.

But the underlying process of transporting understanding and using the best denotation for the moment is very real regardless of the level of formality.


To be fair to the author, he's one of the few researchers trying to bridge this divide (e.g. see https://existentialtype.wordpress.com/2014/09/28/structure-a... )


     the conflict between the "machine model"  ...
I'm not sure I can agree. The original post makes the point that you can see recursion in multiple ways, including machine-oriented (e.g. using a stack and a stack pointer) and abstract, e.g. via an axiomatisation of self-reference such as the Y combinator, which satisfies YM = M(YM).
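To make that concrete, here's a minimal sketch in Python (my own illustration, not from the post): under eager evaluation one uses the eta-expanded Z variant, which still satisfies Z(M)(v) = M(Z(M))(v):

    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))  # 120, recursion obtained without any explicit self-reference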

Incidentally, stacks can also be given an axiomatic treatment independent of concrete machine models.


I think that the confusion comes from the fact that one wants to mix both views; after all, I don't remember seeing anyone being confused by the classic recursive definitions (Fibonacci, factorial) in my school days. IIRC the factorial function is introduced in high school. It can literally be understood by a kid.

I believe the reason why people want to expose both views at the same time is that the math view doesn't really explain "how it works" as far as CS is concerned.

As far as I'm concerned, I can't imagine programming without the knowledge of how things work at assembly level. I think I would feel so crippled in my understanding that I would want to learn about assembly... again. I probably couldn't stand working with a machine that runs on magic.


> I can't imagine programming without the knowledge of how things work at assembly level.

The author's point is that recursion is a pervasive concept that is essential to understand even how modern computer hardware works:

> the very concept of a digital computer is grounded in recursive self-reference (the cross-connection of gates to form a latch), which, needless to say, does not involve a stack. Not only do real programmers use recursion, there could not even be programmers were it not for that.

The author is not suggesting that we forget about "what's really going on". He's rightly pointing out that you actually cannot understand what's really going on at all without first understanding recursive self-reference.


Personally, I think that analogy conceals more than it reveals. Recursion in computer software requires nested definitions that eventually reach a simple base case resulting in termination. An SRAM cell or J/K flip-flop, on the other hand, doesn't involve nesting or reach a simple base case.

In my mind recursion is a snake that has eaten a smaller snake, whereas an SRAM cell is a snake eating its own tail.


> Recursion in computer software requires nested definitions that eventually reach a simple base case resulting in termination.

I think you've just illustrated the article's point.

You're describing well-founded recursion, which is very useful, but doesn't include e.g. continuation-passing ( https://en.wikipedia.org/wiki/Continuation-passing_style ) or co-recursion ( https://en.wikipedia.org/wiki/Corecursion ), hence perpetuating the myths the article is complaining about.

The really unfortunate thing is that subsets of these ideas keep getting re-invented (AKA "reinventing the square wheel"), for example exception handlers in place of continuations, iterable objects in place of co-recursive data, etc.

As for J/K flip-flops, they're recursive because their output is their own input. For example, given a co-inductive stream of `j` and `k` values (the flip-flop inputs), we can generate a co-inductive stream of outputs `q`:

    function flipflop(init_js, init_ks) {
      function ff(js, ks, q_old) {
        var j     = car(js);
        var k     = car(ks);
        var q_new = j * not(q_old) + not(k) * q_old;  // JK characteristic equation
        return cons(q_new, ff(cdr(js), cdr(ks), q_new));
      }
      return ff(init_js, init_ks, 0);  // Initiate the co-recursion with 0, arbitrarily
    }
Whilst I've used co-recursive data for convenience, the fact that the result `q_new` becomes the argument `q_old` for the next call is unavoidably recursive.
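As a hedged aside of my own, the "iterable objects in place of co-recursive data" substitution mentioned above would look roughly like this with Python generators:

    def flipflop(js, ks):
        q = 0                              # initial state, arbitrarily 0 as above
        for j, k in zip(js, ks):           # js and ks may be infinite iterables
            q = j * (1 - q) + (1 - k) * q  # JK characteristic equation
            yield q

    print(list(flipflop([1, 0, 1, 1], [0, 0, 1, 1])))  # [1, 1, 0, 1]: set, hold, toggle, toggle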


Bingo. Recursion is easy in the func^H^H^H^H"application-oriented programming" model of computing where the whole program is a mathematical expression written out in tree form on a blackboard, and applying a function means erasing a node that says (fib 8) and replacing it with a node that says (+ (fib 7) (fib 6)), and eventually with one that says 21. But most programmers, most of the time, seem to use a different mental model, one of functions as little actors that send function-call messages to each other and themselves, including when they are doing "functional programming" (and ofc mutable state and performance considerations tend to push the language into this model anyway). The problem is that in an actors-ish model continuations are easy, while recursion and function-call behaviour in general are hard, derived concepts built on top of continuations.

So I've come to think that introductions to programming should usually look something like this:

Stage 1: Teach "application-oriented programming": functional programming that's explicitly, and exclusively, about substituting expressions in an expression tree.

Stage 2: Teach how to program using only simple "dataflow" models of stateless (or stateful) actors that reliably pass messages (ie. call continuations) to each other in parallel. (To prepare for stage 3 you'll need to cover "dynamic dataflow" with actors passing around the capabilities to call other actors, but it's also a good opportunity to probe what can reasonably be done with a static graph of calling capabilities.)

Stage 3: If the students are led to push the Stage 2 programming models hard enough they'll naturally see that a lot of the code they write for those models (but NOT all of it) really wants to be able to take advantage of an explicit function-call abstraction. Teach them how to implement the Stage 1 language as a DSL inside a Stage 2 language.

Almost as an incidental benefit the students will end up understanding call/cc, too. By contrast, it's the attempt to blur these two, very different, models of computing together and handwave the distinction that leaves students confused by recursion—and they should feel confused by recursion in this case! The feeling of confusion is their intellect warning them that something is wrong. The real damage is done when they either give up, or end up feeling that they understand it when they don't.

(Another side benefit is that it's an early introduction to the huge and indispensable diversity, and yet intense inter-relatedness, of models of computing. Among other things that should hopefully temper the zeal to smash square pegs through the One True, Round Hole, and hopefully even the polar opposite vice of unthinkingly picking up whatever "the tool for the job" is supposed to be. Similarly, the order of stage 1 and stage 2 could be reversed.)

TL;DR: "Continuations are easy: functions are hard."

PS: The really fun question is how lexical scope fits into the Stage 2 model. It's a restricted case of something else...


   Continuations are easy: functions are hard.
That's a deep truth. Thinking about computation as message-passing actors is Hewitt's and Milner's dream. It's a hard sell to the programming mainstream though. In particular, the functional programming community sees functions as the basis on top of which everything else needs to be layered as an effect. It's much more productive to go the other way round and see functions as a very peculiar and well-behaved form of message exchange. (Scala also makes this point.)


It's interesting to use languages that do not harshly penalize recursion by imposing stack size limits. The upcoming GNU Guile 2.2 release, a Scheme implementation, has a dynamically expandable stack so algorithms that are naturally recursive can be implemented that way rather than contorted into iterative form to please the stack limit god.

For example, in Guile 2.0, this natural definition of 'map' will cause the stack to overflow on large lists:

    (define (map proc lst)
      (if (null? lst)
          '()
          (cons (proc (car lst))
                (map proc (cdr lst)))))
The iterative solution is to construct the list backwards and reverse it as a final step:

    (define (map proc lst)
      (let loop ((lst lst)
                 (result '()))
        (if (null? lst)
            (reverse result)
            (loop (cdr lst) (cons (proc (car lst)) result)))))
There's no real gain here in terms of performance or readability. You trade a large stack for a large reverse operation at the end. Guile 2.2 allows a whole class of recursive algorithms that were not practical before because of the fixed size stack (like most languages I know of), and they perform well, too!

Needless to say I agree with this article. Recursion is an integral part of computing, and more programming languages should provide better support for it. Embrace it, don't avoid it!


There is a performance gain. Two primary gains: reduced memory usage (your stack size is constant, with tail-call elimination); theoretically faster execution as you don't have a cons and a return pending at the end of each iteration.

You also eliminate the function call, potentially, on the recursive call. If TCE is present, instead of a function call, the recursive step is actually just a branch (and can be an unconditional branch in many cases). Reverse can also be made efficient, as the Erlang people have done.

I will agree that the first version is clearer in illustrating the intent of the program for many people. The second one is probably just familiar for me from my time playing with Erlang, where it's the idiomatic way to write something like this.


But can you do that with something like fibonacci? You can't just tail-call optimize that away, can you, since there are two recursive calls?


I was under the impression that Schemes generally compile down to continuation-passing-style, and hence in the case of something like Fibonacci one of the recursive calls would be performed as a tail call, whilst the other would become part of the continuation, e.g. something like:

    function fib(n, k) {
      if (n == 0) return k(1);
      if (n == 1) return k(1);
      return fib(n-1, function(f_1) {
               fib(n-2, function(f_2) {
                          k(f_1 + f_2);
                        });
             });
    }


The natural recursive definition of a Fibonacci function is a different beast. The big issue with the naive and simple Fibonacci implementation is that it duplicates work, a ton of it. The 'map' above doesn't have this issue. To make Fibonacci efficient, it can be implemented as an iterative algorithm which isn't too bad to read. But there's better news! The recursive algorithm can be kept as-is, provided that the function is memoized, so that already computed values are cached.
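As a hedged illustration (Python rather than Scheme, purely for exposition), memoization keeps the natural recursive definition while eliminating the duplicated work:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # same naive recursive definition; the cache ensures each fib(k) is computed once
        return 1 if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(90))  # runs in linear rather than exponential time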


I was thinking about this the other day too. Recursion in CS101 is taught in such a bizarre way. At least it was back when I was in CS101. It was almost as though CS tries to discourage its usage simply because "most minds cannot understand it as easily as iterative loops".


>It was almost as though CS tries to discourage its usage simply because "most minds cannot understand it as easily as iterative loops".

Wouldn't that be enough reason? They're supposed to teach you how to write good code, and that includes readable code.


No, "they" are not. Maybe you're thinking of Software Engineering? Computer Science is supposed to teach you about the branch of mathematics called Computer Science. Recursion is a very important part of that—important enough that the need to address it early was one of the things that made Scheme an obvious choice for a teaching language for decades.


No. This is an overly restrictive definition.

Computer Science is its own, highly diverse field, composed of software engineering, algorithms, theory of computation, human-computer interaction, AI, systems engineering, databases, security, graphics, scientific computing, and many other sub-divisions.

CS hasn't been solely a branch of mathematics for decades.

See what Stanford considers part of CS: http://www-cs.stanford.edu/research

See what Berkeley considers part of EECS: http://www.eecs.berkeley.edu/Research/Areas/


In Portuguese universities, Software Engineering and Computer Science are part of the same informatics degree; there isn't a clear distinction as such.

It is up to you to choose which lectures you want to use your optional credits for.


True, there often is no clear distinction.

On the other hand, I studied CS in South America, and I can count on one hand the lectures which aimed to teach you "how to write good code". Maybe one or two of them, and only in the macro "good software engineering practices" sense. Most of the others assumed you sort of picked up how to code on your own, or that you asked other students or TAs on the side. There absolutely were NO "how to code" lectures, and the overall idea you got was that CS was a theory-heavy degree. Recursion, proofs, type theory, graph theory, etc, were absolutely the focus of the degree.

There was a different degree called "Software Engineering" (and not CS) from the same university, but I think they heavily overlap in content, so in practice the point remains.


In Portugal, at least back in my university days, the only ones which might relate to a pure theory-only US CS degree were the "Mathematics degree with specialization in computation" programmes.

In my case, beyond architecture design, there was hardly any "how to write good code" lecture.


It's more readable because other programmers have been taught the imperative style too, because their teachers find it more readable, because <tail call here>.


Not enough reason in my opinion. The thing is, all the people in my CS101 who couldn't fluently grasp recursion generally didn't do very well, and I don't think many of them ended up working in software jobs.

Recursion, good code and readable code are not related; I don't understand this point.


Iterative loops are much easier to analyze and reason about, by humans and by computers. For example, they always terminate. To paraphrase Dijkstra, "I regard general recursion as an order of magnitude more complicated than just repetition, and I don't like to crack an egg with a sledgehammer."

https://tinyletter.com/programmingphilosophy/letters/i-don-t...


Dijkstra was from a different era, when mutable state wasn't frowned upon.

Note to self: This may not go down well here but I'm willing to take the flak.


He actually discusses it a bit in the previous paragraph, which does not seem to be online anywhere:

> When programming languages emerged, the "dynamic" nature of the assignment statement did not seem to fit too well into the "static" nature of traditional mathematics. For lack of an adequate theory mathematicians did not feel too easy about it, and, because it is the repetitive construct that creates the need for assignment to variables, mathematicians did not feel too easy about repetition either. When programming languages without assignments and without repetition --such as pure LISP-- were developed many felt greatly relieved. They were back on familiar grounds and saw a glimmer of hope of making programming an activity with a firm and respectable mathematical basis. (Up to this very day there is among the more theoretically inclined computing scientists still a widespread feeling that recursive programs "come more naturally" than repetitive ones.)

Continued https://tinyletter.com/programmingphilosophy/letters/i-don-t...


I've always felt that imperative programming is more in line with the way the overwhelming majority of people are exposed to algorithmic instructions: cookbooks, furniture instructions, Lego construction, driving directions, choose-your-own-adventure books, etc. Especially given how piss-poor most mathematics instruction is, unless you happen to be a mathematician and have put in the effort to recast your brain along those lines, it's easier to come to grips with the imperative style vs more mathematically inspired paradigms.


I wasn't slighting Dijkstra but merely commenting on the era of the time.

You can for example see how primitive things were back then by describing "mutable state" as "dynamic" and "immutable" as "static". Dynamic and static these days have entirely different connotations typically more associated with type systems.

Things have moved on and now in 2016 mutable state is increasingly being pushed out of codebases in favour of immutable practices. Recursion is (and always has been) one way of doing that.

:)


> For example, they always terminate.

How do you terminate?

   for(;;) {
   }


break;


Not all programming languages have break; C syntax was just an example.

Also break is no different from using return (to keep C syntax as example) when recursing.

If you guard break with an if then you also need to prove that the "if" condition does indeed provide a dataflow path to break.


> If you guard break with an if then you also need to prove that the "if" condition does indeed provide a dataflow path to break.

In all cases, for all possible inputs.

On the other hand, if that's what your for loop looks like, you probably have the same problem of proving that your recursion terminates (in all cases, for all possible inputs).


Of course, I was just arguing against the OP that all loops terminate.


    Iterative loops are much easier to analyze and reason about,
This is not the case. Loops and recursion can be translated into each other, hence are of equal complexity in terms of reasoning. And you see that when you write down the Hoare-logic axioms for either.

What Dijkstra had in mind was probably simple, syntactically constrained forms of loops, like for-loops where the loop variable is read-only in the loop body. They are simpler than general recursion, sure. But there are corresponding simpler forms of recursion, e.g. primitive recursion or tail recursion that are much simpler than general recursion.
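For reference, a rough sketch of the textbook partial-correctness rules I have in mind (standard forms, stated informally; the termination argument is handled separately in both cases):

    While rule:      from  {P and B} S {P}
                     infer {P} while B do S {P and not B}

    Recursion rule:  from  "assuming {P} call f {Q}, one can prove {P} body of f {Q}"
                     infer {P} call f {Q}

In both cases P plays the role of the invariant.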


> Loops and recursion can be translated into each other...

True.

> ... hence are of equal complexity in terms of reasoning.

False. That's like saying that Stokes' Theorem says that the integral of a differential form over the boundary of a manifold is equal to the integral of its derivative over the whole manifold, and therefore the two integrals are equally easy to do. In fact, the two integrals are often not equally easy to do, which is the primary practical reason that we care about Stokes' Theorem - it gives us a way to convert hard integrals into easier ones.

You cannot use formal equivalence to prove practical ease of doing the problem.

But the situation isn't even that simple. A blanket statement that "loops are much easier to analyze and reason about" is also false. It depends on the situation, and often on who's doing the reasoning. Different people think in different ways.


> You cannot use formal equivalence to prove practical ease of doing the problem.

Agree completely, but want to bring it back to another computing topic: Technically, anything that can be computed with one Turing-complete language can be computed with another. And every language clearly is not equivalent in terms of ease of understanding and reasoning about as every other language.

Trivially, compare computations in brainfuck to computations in C. Compare the recursive definition of tree traversal available in lisps to languages without recursion (some BASIC variants and others) where you have to jump through hoops and self-manage a stack.


    compare computations in brainfuck to computations in C. 
I'd suggest that this is an orthogonal issue. It's better to compare two otherwise identical languages, one where Turing-completeness is achieved by loops, another where the same is done by recursion. (Or a language that offers both). Then formal reasoning is of the same complexity.


    You cannot use formal equivalence to prove 
I'm afraid I have to disagree here. There is a mechanical translation from loops to recursion and back. Likewise, there is an associated mechanical translation from Hoare-triples and associated proofs for reasoning about loops to Hoare-triples and associated proofs for reasoning about recursion and back.

So in a hard, formal sense, the complexity of formal reasoning about the correctness of loops and recursion must be the same.
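As a hedged sketch of that mechanical translation (Python, example invented here): the mutable loop state becomes the parameters of a tail-recursive helper, and the loop invariant becomes the pre-/post-condition of that helper:

    def total_loop(xs):
        acc = 0
        for x in xs:
            acc = acc + x
        return acc

    def total_rec(xs, acc=0):
        # one loop iteration corresponds to one recursive call
        return acc if not xs else total_rec(xs[1:], acc + xs[0])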


That doesn't mean that the reasoning in a human brain is equally difficult, or equally error-prone. If nothing else, doing the transformations (and back) adds to the cognitive burden.


That depends primarily on the programmer's experience: if you are used to loops but not recursion, then obviously the former is simpler than the latter, and vice versa. If you are equally familiar with both, then thinking about either is equally difficult.


Yes, that is the point I tried to make. Use simple, constrained forms such as (C) "for(int i = 0; i < n; i++) { ... }" with n an int and no modification of i in the body, or (Python) "for e in l:" where l is a list that is not modified in the body. Then your code is very easy to analyze, not just in theory but also in practice for compilers, tools and people reading your code. Many language designers have realized this and provide special syntax for this very common case. I'm not aware of any language that has special syntax for primitive recursion, but it's an interesting idea.

My apologies for making an absolute statement with unclear terms, thus inviting uncharitable interpretations.


I agree that the simple loops you describe are easier to analyse than full recursion.

But simple loops are not as expressive as full loops (where you can modify the loop variable). As a simple example, try to phrase Euclid's algorithm for computing the GCD using "for(int i = 0; i < n; i++) { ... }" where "i" is not modified in the loop body.


Just use n = max(a, b). Of course that doesn't help you much unless you're adhering to strict safety standards (e.g. power of ten rules), it just shifts the burden of proof from termination to correctness. And there's no such trick with the Ackermann function.

I'm really just advocating a rule of least power. Don't go into full general recursion just because you can.


Why is this being down-voted? Could somebody please explain why they think what I wrote is wrong? I'd be happy to learn something new about loops and recursion.


Wasn't me, but it's probably because iterative loops are believed to be easier to reason about. OTOH the recursive definitions for the Fibonacci or factorial numbers seem easier, for one because they map to the literal explanation.


    iterative loops are believed to be easier to reason about.
This cannot be the case, because you can translate loops into recursion and vice versa, so every reasoning problem that one finds with loops is also a problem in reasoning about recursion and vice versa. If you look at the Hoare-logic rules this shows up clearly. In both cases you need a suitable invariant, and you need a termination argument.


Does the translation come for free? If not, it's probably complicated to reason about.


Loops aren't any safer than recursion. A loop is a special case of recursion and is just as susceptible to the halting problem.


> Iterative loops are much easier to analyze and reason about, by humans and by computers. For example, they always terminate.

It's a bit of an apples-to-oranges comparison to compare (proper, bounded) for-loops to general recursion. A more appropriate question would be whether humans and computers find, say, primitive recursion or structural recursion easier to analyse and reason about than for-loops.

The difference between general- and primitive-recursion becomes very apparent when working in Coq, for example!


> For example, they always terminate.

No. In fact, it's a huge problem in imperative code at all layers.


I never got what the problem with recursion in CS101 might be.

In Portuguese universities, by the time you get to these programming issues you have already had enough maths to be able to relate recursion to how mathematical proofs work.

At least it was so for me and most of my friends back in the 90's.


I prefer it to loops; I find it more concise, and it makes you really think about what your code is doing when you make it recursive.

Often I find subtle bugs in my loops, whereas with recursion the compiler seems to pick them up for me.


In what scenario do you prefer recursion over a loop? Your comment made me think of code readability and maintainability. But I'm not judging. Really curious how you manage to prefer recursion over loops.


I prefer recursion to loops since loops bundle everything together in one environment; variables left over from previous iterations are still in scope, which is confusing and may cause problems, e.g. using a variable before it's been updated for this iteration, or after it's been updated for the next iteration, etc.

Recursion is just function calls. The only things which are in scope are those required by this iteration, accepted as arguments. The only thing I need to produce is a return value. This is a small, well-defined interface, as opposed to the self-interfering spaghetti code of a loop.

By accepting values as arguments and calling functions with different values, the problem of when and what to assign variables goes away; the language does it automatically by binding arguments.

Also, recursion (like function calls in general) allows thinking in terms of values, which are timeless and explicit in the code, rather than state, which is implicit and varies during execution.

Semi-related: http://chriswarbo.net/blog/2012-10-02-looping_in_javascript....


Anecdote: After 12 hours of programming, I still had to implement some easy string search. I could not think straight at that stage and several imperative solutions failed. Then I thought "dammit, let's use a functional approach with recursion". It was faster to write, tail recursive, and correct the first time.


Which environment are you working in that doesn't have a string search as part of the standard library?


INTERCAL. Seriously, what does it have to do with the point I made?


If working with trees, recursion is usually easier to understand.


If working with recursively expressed data, recursively expressed algorithms are usually easier to understand.


My 2c: Building a hierarchical structure requiring opening and closing tags/code/etc (like HTML) is where I like to use recursion. The code structure matches the data structure, so you only have to reason about it once.
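A minimal sketch of what I mean (Python, representation invented for illustration):

    def render(node):
        # a node is either text or a (tag, children) pair; the recursion
        # mirrors the nesting of the markup
        if isinstance(node, str):
            return node
        tag, children = node
        return "<%s>%s</%s>" % (tag, "".join(render(c) for c in children), tag)

    print(render(("ul", [("li", ["one"]), ("li", ["two"])])))
    # <ul><li>one</li><li>two</li></ul>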


I think it's rather universal that, for instance, a depth-first tree traversal is best done recursively.

In my experience, I almost always prefer recursion since it allows me to work more closely with the structure of the original data that I'm interested in instead of a set of a proxy indices which I have to track separately.
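For instance, a hedged Python sketch (tree representation invented for illustration):

    def depth_first(node, visit):
        value, children = node            # a node is a (value, children) pair
        visit(value)
        for child in children:
            depth_first(child, visit)     # the call stack tracks the position;
                                          # no explicit index bookkeeping needed

    depth_first((1, [(2, []), (3, [(4, [])])]), print)   # prints 1, 2, 3, 4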


Other commenters have already mentioned that recursion is often a more natural fit for "recursively-defined" structures like trees. Here's an example using nothing more exotic than integers.

Suppose you want to compute the greatest common divisor of two non-negative integers. There's a famous algorithm that goes all the way back to Euclid's Elements, which you can write iteratively like this:

    def gcd(a,b):
        if a<b: a,b = b,a
        while b != 0:
            a,b = b,a%b
        return a
That's pretty nice, and it's the way I would usually implement it. But in terms of readability I think the following is better:

    def gcd(a,b):
        if a<b: return gcd(b,a)
        if b==0: return a
        return gcd(b, a%b)
because it makes explicit that at every point in the computation you're replacing gcd(a,b) with gcd(a',b') in such a way that the calculation keeps getting easier.

It turns out that if the gcd of a,b is d then there are always integers x,y such that ax+by=d, and it's sometimes useful to compute x,y along with d. (For instance: if d=1 then you have ax=1 mod b, so this gives you an efficient way of computing reciprocals in modular arithmetic.)

Here's how that goes, iteratively and then recursively. (I notice that I've used two extra state variables for the iterative version. That's what seems like the most natural approach. I'm not sure whether there's a convenient way to use only two.)

    def xgcd(a,b):
        p,q,r,s = 1,0,0,1 # a = p.a0+q.b0, b = r.a0+s.b0
        if a<b: a,b, p,q, r,s = b,a, r,s, p,q
        while b != 0:
            k = a//b
            a,b, p,q, r,s = b,a-k*b, r,s, p-k*r,q-k*s
        return a,p,q

    def xgcd(a,b):
        if a<b:
            d,x,y = xgcd(b,a)
            return d,y,x
        if b==0: return a,1,0
        k = a//b
        d,x,y = xgcd(b,a-k*b) # d = x.b + y.(a-kb)
        return d,y,x-k*y
The difference isn't dramatic in any of these cases, but I think the recursive versions are clearer because the code is closer to the underlying mathematical theorems -- e.g., gcd(a,b) = gcd(b, a mod b) -- that make it work.

[EDITED to add:] Perhaps it's useful to think of it this way. If you wanted to explain how, say, that iterative xgcd function works, you'd do it with a loop invariant; in fact, I actually wrote one, in that first comment "explaining" what p,q,r,s signify. (In real code I would be more verbose about it.) The recursive function gets the same idea across in the language itself: each iteration of the loop becomes a call to xgcd, and the loop invariant becomes the fact that the corresponding call to xgcd actually computes what it's supposed to compute.
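As a hedged aside on the "reciprocals in modular arithmetic" remark, using the recursive xgcd above:

    d, x, y = xgcd(3, 7)   # d == 1, and 3*x + 7*y == 1
    print(x % 7)           # 5, the inverse of 3 mod 7, since 3*5 == 15 == 1 (mod 7)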


Nice. Here's an Erlang version of your code, which I think reads even more like math:

  gcd(A, B) when B == 0 -> A;
  gcd(A, B) when A < B  -> gcd(B, A);
  gcd(A, B)             -> gcd(B, A rem B).
I also really appreciated the posting of more of Dijkstra's thoughts (from rer0tsaz) above, in particular this line struck a chord:

  When programming languages emerged, the "dynamic" nature of
  the assignment statement did not seem to fit too well into the
  "static" nature of traditional mathematics.
  
(Along those lines, Erlang's recasting of the equals sign as a pattern matching operator felt so natural when I learned how it worked.)

[Edit: zap redundant sentence.]


That syntax definitely helps. It is readable even though the variable names are non-descriptive.


Only Bob Harper would say that the stack is difficult for students to understand but propose structural operational semantics as a solution.


I suppose I had trouble with recursion until I understood that each call should strive to be stateless - meaning, you don't try and remember previous results. Each new call should be just the same as the initial. You only act on the current set and try to reduce the problem further. Trying to remember previous iterations results in a lot of headaches.


> Implementing recursion does not require a stack, nor does it require any special provisions for managing the stack.

When I wanted recursion in Atari BASIC it involved implementing stacks for all my local variables (because there's no such thing as local variables in Atari BASIC).

Thinking about the task in terms of bits of work that should be put on a stack or in the queue felt way more controllable than recursion.


Once I needed to reimplement recursion in Matlab because the stack was too small. It was an interesting experience: I had a matrix repurposed as a stack and did all the push/pop manually while the recursion ran as a loop. Goes to show that recursion isn't that essential if one knows how to split state, data and code properly.


Recursion is just induction backwards.

But you need to know what a stack is - because it gives you the practical depth to which you can recurse. Ditto with tail recursion optimization.

And that is why C should be mandatory. You must be able to translate the mathematical concepts into something real and tangible.



