
Dijkstra was right – recursion should not be difficult - gregorymichael
https://blog.angularindepth.com/learn-recursion-in-10-minutes-e3262ac08a1
======
jerf
"This task can also be solved using iterative solution, but it’s way more
complicated because we would have to nest for loops but don’t know how deep
they are nested. The unknown number of nested loops is a common characteristic
of all problems that are recursive in their nature and should give you a hint
that recursive solution is required."

Would suggest instead something like

"We could solve this directly by managing our own stack of work done and
keeping track of where we are in the problems as we divide them up, but it's
easier to take advantage of the fact that the programming language is already
providing us a call stack that can do this work for us."

Then, later, you can point out that this also helps with the backtracking
solution as the programming language is maintaining the old environments; if
you were iteratively managing your own stack you'd also have to backtrack
manually.

Because that's how you transform a recursive call into an iterative one, not
nested for loops. The general points made in the article still hold up, of
course.
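jerf's explicit-stack transformation can be sketched in a few lines of Python. This is a hypothetical nested-array sum (not code from the article), showing the "manage our own stack of work" approach:

```python
def sum_nested(a):
    """Sum a nested list iteratively by managing our own stack of pending work."""
    total = 0
    stack = [a]                  # work items still to process
    while stack:
        item = stack.pop()
        if isinstance(item, list):
            stack.extend(item)   # defer the sub-problems instead of recursing
        else:
            total += item
    return total
```

The recursive version gets this stack for free from the language's call stack; here we have to build and maintain it ourselves.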

~~~
edmccard
>Because that's how you transform a recursive call into an iterative one, not
nested for loops.

I came to understand recursion through nested for-loops, though that might
just be a historical accident. In the summer of 1988 I had written a
Mastermind solver in Microsoft Quickbasic[1] for the specific case of 4
"pegs". At each guess, I used for-loops nested 4 deep to build a structure
representing the possible remaining solutions (for all possibilities of the
first peg, check all for the second peg, etc.) Then I wanted to extend it to
work with variable numbers of pegs; after spending most of that summer trying
to figure out a loop-based solution, I discovered that subroutines could call
themselves and it hit me that I could have a function that took the
possibilities already computed for a peg, and then called itself with those
applied to the next peg.

So I've always thought of recursion as a way to have arbitrarily deeply nested
loops.

[1] Believe it or not, it did support recursive functions!
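The "arbitrarily deeply nested loops" view can be sketched in Python (a hypothetical peg enumerator, not the original QuickBasic): each recursive call stands in for one level of for-loop, so the nesting depth becomes a runtime parameter.

```python
def tuples(choices, n):
    """Enumerate all length-n tuples over choices: recursion standing in
    for n nested for-loops, one loop per 'peg'."""
    if n == 0:
        return [()]
    return [(c,) + rest for c in choices for rest in tuples(choices, n - 1)]
```

With `n` fixed at 4 this is exactly four nested loops; making `n` a parameter is what the loop-based version couldn't do.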

~~~
catmanjan
Quickbasic was great, fond memories of typing out hundreds of DATA statements
for game sprites.

------
aaron-santos
Recursion shouldn't be difficult, but we write recursive programs using one
tool: self-referential functions. That's like trying to write structured
programs using one tool - goto.

Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire[1]
pioneered generalized recursion schemes and influenced work in Haskell[2] and
Scala[3]. It's great having the vocabulary to describe recursion and the tools
to elegantly recurse, but anamorphism tends to shut down conversations faster
than mentioning the m-word.

[1] -
[https://maartenfokkinga.github.io/utwente/mmf91m.pdf](https://maartenfokkinga.github.io/utwente/mmf91m.pdf)

[2] - [https://hackage.haskell.org/package/recursion-schemes](https://hackage.haskell.org/package/recursion-schemes)

[3] -
[https://github.com/slamdata/matryoshka](https://github.com/slamdata/matryoshka)

~~~
carapace
I dunno.

I spent some time with that paper awhile back, implementing hylo-, ana-, and
cata-morphisms in Python and Joy to be sure I was following along.

I loved it, but I was ultimately convinced that Category Theory et. al. isn't
quite as useful for the working programmer as it should be.

What I _did_ find useful is the treatment of recursion in the Joy language.
[http://www.kevinalbrecht.com/code/joy-mirror/j05cmp.html](http://www.kevinalbrecht.com/code/joy-mirror/j05cmp.html)

~~~
vanderZwan
Joy is interesting: I never programmed in it, but I had a lot of fun reading
it and learning how it works from the documents.

~~~
carapace
To me it seems like it might be a kind of "silver bullet" for programming, due
to a number of factors: simplicity and elegance, suitability for partial
evaluation and super-compilation, and for parallelization.

I haven't tried to implement a large system in it yet though. If I get the
time I hope to re-implement the Oberon OS using Joy.

------
igorbark
This article touches on some good points. In particular, the part about
trusting that the function will do the right thing is something that people
who already understand recursion take for granted but which is crucial.
However:

"Let's tweak our sum function a bit: ... That's it."

I really dislike articles that earnestly try to convince the audience that
some subject which many find extremely difficult to understand is actually
very simple. My impression is that the intent is to help people who lack the
confidence to engage with the topic, but in my opinion the exact opposite
effect is achieved. At that point in the article, nothing about recursion had
been explained and it's not reasonable to expect someone who previously did
not understand recursion to read that section and come to the conclusion
"that's it!" Instead they'll be sitting there, extremely confused by something
which they're being reassured is actually quite simple. This is not going to
help their confidence.

So while I appreciate the author's intent, I think that someone who is
struggling to understand recursion might leave this article worse off than
when they started. And I wish this trope would disappear from tutorials.

~~~
dajohnson89
I auto-ignore tutorials that say stuff like, "X made simple". It's almost
always trying too hard to make a genuinely difficult subject easy, thereby
obfuscating the topic more.

~~~
bluGill
I learned long ago that books titled like "advanced X" will be a very simple
book on X suitable for any layperson with basic knowledge. Books titled
"introduction to X" or "elementary X" will be extremely difficult.

~~~
tomsmeding
I have not yet seen books titled "advanced X" that are actually grokkable, but
my university courses called "introductory X" were exactly the hardest ones.

~~~
chii
In general, laying foundations to any broad topic is hard! I imagine knowledge
to be like a pyramid, where the bottom supports the top. But to support a tall
pyramid, a large base is required. Thus, we get these obscenely difficult
'intro' books...

------
ballenf
You don't really understand how difficult recursion is until you've tried to
teach a bunch of programming newcomers the concept. A few get it quickly, but
for the rest all the "think of it like ..." suggestions don't really help.
It's a concept that everyone seems to have to learn through practice (as the
author implies). I fell into the trap that if I only explained it clearly
enough I could make it easy for others. Everyone learns it their own way and
you just have to give students enough opportunities to practice for them to
get it. I don't disagree with Dijkstra that the concept shouldn't be
difficult, but I do not believe there are any easy paths to the concept for
most.

It's one of the most interesting learning processes in that once you get it,
it feels almost as simple as a for loop or simple math. But until you get it
it feels like the most opaque, impenetrable wall where 1+1 != 2.

Side note, recursion would be slightly easier if the concept of a base case
were applied to _all functions_ as preconditions. The idea of a base case
really has more applicability than just in recursion, it's just that we don't
call it a base case in other situations.

Recursion is also interesting to implement when doing TDD. It comes in a
refactoring step, of course, but you don't have to think about the base
case(s) because they've already been done. Again, not suggesting this is
helpful to a newcomer struggling, but is just an interesting process.

~~~
patrickthebold
Can you expand on "base case for all functions"?

I'm having a hard time figuring out a base case for say calling toString on an
object, or incrementing an integer.

~~~
falcor84
> I'm having a hard time figuring out a base case for say calling toString on
> an object, or incrementing an integer.

In both cases, we can special-case the "zero" of the domain. For example,
toString of an object without any attributes would return '{}', while
incrementing the integer zero would return 1.

I'm not sure about the usability of the latter in an iterative approach, but
the former could definitely be used as the basis of an iterative algorithm
that traverses all of an object's attributes.
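For instance, the toString case could be built up from its "zero" like this (a hypothetical Python sketch, not any standard library's toString):

```python
def to_string(obj):
    # base case: the "zero" of the domain, an object with no attributes
    if not obj:
        return "{}"
    # general case: fold each attribute onto the base representation
    body = ", ".join(f"{k}: {v}" for k, v in obj.items())
    return "{" + body + "}"
```

Here the empty-object case plays exactly the role a base case plays in a recursive definition, even though the traversal itself is iterative.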

------
jancsika
> This is because a recursive sub-function never solves the exact same problem
> as the original function, but always a simpler version of an original
> problem.

This isn't always true. There are plenty of recursive functions whose purpose
is merely to climb until a condition is met that there are no more rungs in
the ladder.

Recursion is difficult because it isn't just taught, it is mythologized as the
potentially infinite source of one's love for programming. If teachers
similarly mythologized the "switch" statement I'm sure they could confuse the
topic of conditionals just as easily.

~~~
thethirdone
>> This is because a recursive sub-function never solves the exact same
problem as the original function, but always a simpler version of an original
problem.

> This isn't always true. There are plenty of recursive functions whose
purpose is merely to climb until a condition is met that there are no more
rungs in the ladder.

What do you mean here? What is the ladder in your analogy? The stack? Is the
"a condition" that the stack has reached its limit or that you have reached a
base case?

If computing f(50) requires computing f(50), it will never end. Likewise if
you have a base case at 0, if f(50) requires computing f(51), it will never
end. You must solve a problem closer to your base case than the original
problem.

> Recursion is difficult because it isn't just taught, it is mythologized as
> the potentially infinite source of one's love for programming. If teachers
> similarly mythologized the "switch" statement I'm sure they could confuse
> the topic of conditionals just as easily.

I don't think that recursion is mythologized. It allows you to express the
core idea of the algorithm really succinctly.

For example defining the Fibonacci sequence just requires the beginning
numbers and the rule for progression.

    
    
        fib(n):
          if n = 0 or n = 1: return 1
          return fib(n-1) + fib(n-2)
    

It is much simpler than the iterative solution (though generally less
efficient).
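Both versions can be written out in Python for comparison (a sketch, using the same 1, 1 starting values as the pseudocode above):

```python
def fib_rec(n):
    # mirrors the definition: the base numbers plus the progression rule
    if n <= 1:
        return 1
    return fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    # same sequence, but the definition is hidden inside state-threading
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The recursive version restates the definition directly; the iterative version is faster but you have to reason about the loop invariant to see why it is correct.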

Often the simplicity gained by using recursion (if you are comfortable with
it) makes it possible to understand algorithms you could not understand the
iterative form of.

Also, recursion is simply more powerful than just while or for loops (you need
stacks as well). And when you add stacks into the mix, iterative programming
can get very confusing.

~~~
cycomachead
Simpler than the iterative version is hard to argue -- it's closer to a strict
mathematical definition, but the iterative version can also be set up very
simply.

To say one is more powerful is really false. I think recursion is great, and
for some situations it should be the go-to method, but anything you can write
iteratively you can write recursively and vice-versa.

~~~
thethirdone
> To say one is more powerful is really false. I think recursion is great, and
> for some situations it should be the go to method, but anything you can
> write iteratively you can write recursively and vice-versa.

I mean powerful in a strict theoretical sense. Without a way to freely
allocate arbitrary memory, recursion can compute things loops cannot. The
Ackermann-Péter function is an example.

Loops, however, can be trivially converted into recursive functions.

Therefore, because loops can be converted into recursive calls while recursive
calls cannot always be converted into loops, recursion is more powerful.

------
Edmond
Practice makes perfect, I think most learners can get comfortable with
recursion as they use it to implement simple recursive solutions (Fibonacci,
tree traversals..etc). Over time it becomes quite natural to recognize when
and how to use recursion.

One source of confusion that most learners have stems from the seemingly
mysterious way that intermediate results are kept track of. This is one of the
downsides of not having experience with low-level programming. If you're
familiar with assembly language programming and understand how register
contents are pushed/popped on/off the stack between subroutine calls then it
is easier to understand what is happening behind the scenes.
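The same push/pop behaviour can be made visible without assembly (a hypothetical Python trace helper; the indentation mirrors the depth of the call stack):

```python
def factorial(n, depth=0):
    # each call gets its own frame with its own copy of n
    pad = "  " * depth
    print(f"{pad}enter factorial({n})")
    result = 1 if n <= 1 else n * factorial(n - 1, depth + 1)
    print(f"{pad}leave factorial({n}) -> {result}")
    return result
```

Running `factorial(4)` prints the frames being entered and left in stack order, which is the "mysterious" bookkeeping made explicit.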

~~~
projektir
> If you're familiar with assembly language programming and understand how
> register contents are pushed/popped on/off the stack between subroutine
> calls then it is easier to understand what is happening behind the scenes.

Assembly programming in college is exactly what made recursion really click
for me!

~~~
khedoros1
I taught myself to be comfortable writing recursion through practice,
practice, and practice, but I feel like things clicked at an entirely
different level when I could trace something in assembly and see that the
arguments were actually being stored in _different_ locations as the stack
pointer increased.

------
aidenn0
Anything that is self-referential is hard. I remember when we covered
proof-by-induction in HS math. There was a significant fraction of the class
that did not believe a given proof was sufficient.

If you can't assure yourself of the correctness of a fairly simple recursive
function that is given to you, then composing one yourself is going to be nigh
impossible.

[edit]

From the article: _Instead trust that it returns correct sum for all elements
of an array a[i]_

This is exactly the point where many people get hung up. They cannot proceed
to reading the next line until they understand how the recursive call works.
They are unwilling to trust that the function works as advertised, and cannot
shift gears to continue past that. In my experience, telling them to "just
trust it" and such seems to not help much.

~~~
kazinator
> _In my experience, telling them to "just trust it" and such seems to not
> help much._

Just call that recursive function something else. Give it some other name, and
say that we must take it for granted that this other name is a function which
is already debugged and tested by another team of programmers:

    
    
       (defun our-member (item list)
         (cond
           ((atom list) nil)
           ((eql item (car list)) list)
           (t (member item (cdr list)))))
    

Here, we rely on _member_ , a standard function in the language already, and
so there is no recursion.

We first convince ourselves that what we have works.

And then, final step: since what we have works, it means that _our-member_ is
a correct implementation of _member_ , which means that the two are
equivalent. Equivalent functions are substitutable. And so we can just perform
that substitution: replace the _member_ call with _our-member_.

By the way, it occurs to me that Hofstadter's _Gödel, Escher, Bach_ might be a
good source of ideas on how to deal with presenting and teaching self-
referential topics.

~~~
userbinator
_Give it some other name, and say that we must take it for granted that this
other name is a function which is already debugged and tested by another team
of programmers_

That seems like it's "begging the question" --- as in, "if there's already a
function written to do what I want, why do I need to write one to do
it...?"

~~~
kazinator
But that problem is present in virtually all homework in CS.

Someone already wrote a Celsius to Fahrenheit conversion function, a simple
ray-tracer, a quicksort, a toy Lisp, a text-file-based simple inventory
system, etc ...

We could make up a back story about this to give it real-world plausibility:
yes, the function is already written, but it's in an expensive, proprietary
third-party library which we cannot ship without paying royalty fees and being
bogged down with various inconvenient restrictions.

~~~
dingo_bat
But you don't call a library bubble sort function _inside your own
implementation_ of bubble sort!

~~~
aidenn0
My idea for teaching it is to start with just a function that calls the
library function:

    
    
       (defun our-member (item list)
         (cond
           (t (member item list))))
    

Now everyone in the class should quickly agree that this function works. Next
say "our goal is to write our own implementation of member, so we will handle
one special case at a time until we don't need to call member anymore"

so step 1:

    
    
       (defun our-member (item list)
         (cond
           ((atom list) nil)     ;; if it's not a non-empty list, it can't be a list containing item
           (t (member item (cdr list)))))
    

step 2:

    
    
       (defun our-member (item list)
         (cond
           ((atom list) nil)            ;; if it's not a non-empty list, it can't be a list containing item
           ((eql (car list) item) list) ;; if the first element of the list is our item,
                                        ;; we have a match and return the whole list
           (t (member item list))))
    

step 3:

    
    
       (defun our-member (item list)
         (cond
           ((atom list) nil)              ;; if it's not a non-empty list, it can't be a list containing item
           ((eql (car list) item) list)   ;; if the first element of the list is our item,
                                          ;; we have a match and return the whole list
           (t (member item (cdr list))))) ;; we already know that the first item is not a match, so we can just run
                                          ;; member on the tail of the list
    

And now demonstrate that replacing member with our-member will still be
correct.

[edit]

derp _(atom nil)_ => T

------
analog31
When I was a kid, we learned the principle of mathematical induction in
algebra class. That made recursion seem easy.

As for the limitations of recursion on any given system, that's easy to learn
because... kids will be kids. Practically the first thing I tried with
recursion, ended up with the system operator yelling at me, and learning what
a core dump looked like.

------
zmonx
If you want to really delve into recursion, I recommend learning a logic
programming language like Prolog.

Interestingly, Prolog programs are often efficient _and_ general because, due
to the power of unification, Prolog programs are more often tail recursive
than functional programs.

For example, we can define a relation called "append" in Prolog as follows:

    
    
        append([], Ls, Ls).
        append([A|As], Ls, [A|Rest]) :-
                append(As, Ls, Rest).
    

In Prolog, recursion naturally arises by considering the inductive definitions
of terms. For example, you can read the second clause as " _If_ it is the case
that append(As, Ls, Rest) holds, _then_ it is also the case that
append([A|As], Ls, [A|Rest]) holds".

Note that such cases are _not_ naturally tail recursive in Lisp, for example,
because in languages that do not support unification of logical variables,
there is one additional call of "cons" after the recursive call to construct
the result. Also, the Prolog version is much more general and can be used in
several directions! For this reason, "append" is not even a good name for such
a relation, because it already implies one particular direction.

------
falcolas
The two biggest problems with recursion in practical use are that not all
compilers can properly handle optimizations around tail calls, and not all
TCO-capable compilers can handle all tail-call cases.

If either case occurs, you run into the issue where you will seemingly
randomly blow out your stack space when your list is too long, or your tree
too deep. Then you have to fall back to a non-functional idiom (or do some
other magical incantations to avoid function stacks, like Clojure's `loop`
special form).

I think recursion is a great tool, but it's so poorly implemented in so many
languages that it's simply too dangerous to use.
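CPython makes a handy demonstration of the stack blowout, since it never optimizes tail calls (a sketch; the exact recursion limit is configurable via `sys.setrecursionlimit`):

```python
def length_rec(xs):
    # one stack frame per element -- fine for short lists only
    if not xs:
        return 0
    return 1 + length_rec(xs[1:])

# A long list overflows the default recursion limit:
try:
    length_rec(list(range(100_000)))
except RecursionError:
    print("blew the stack")
```

The equivalent `for` loop handles the same input in constant stack space, which is exactly the "seemingly random" failure mode described above.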

~~~
vbezhenar
If the recursion can be tail-call optimized, it's usually better just to write
an iterative `while` loop, unless you can't use mutable variables. The
interesting recursion, which can't be trivially replaced with a loop, is not
tail optimizable.

~~~
eru
You can implement state machines as a set of mutually-recursive functions with
one state = one function. Works very well, is tail-recursive and not trivially
replaced by loops.
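A minimal sketch of the idea in Python (a hypothetical recognizer for strings with an even number of 'b's; one state = one function, with tail calls between them; note CPython gives no TCO, so long inputs would need a language that does):

```python
def even_b(s):
    # accepting state: an even number of 'b's seen so far
    if not s:
        return True
    return odd_b(s[1:]) if s[0] == 'b' else even_b(s[1:])

def odd_b(s):
    # non-accepting state: an odd number of 'b's seen so far
    if not s:
        return False
    return even_b(s[1:]) if s[0] == 'b' else odd_b(s[1:])
```

Each tail call is a state transition; the function's parameters carry the remaining input, which is the "goto with parameters" reading of tail recursion.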

~~~
AstralStorm
It is quite replaceable if you represent the actual graph as a data structure
with terminator nodes. No need for more recursion - the data structure is
recursive, but the algorithm is not.

It is even easier to prove bounds. You get to prove correctness for cycles (if
applicable), connectedness of the graph, and whether a terminator node is
connected to all acyclic arcs. (The latter can be a trivial proof on the
constructor.) Just like for the recursive implementation, except proving the
cycle lemmas is easier.

~~~
eru
Interesting. In the implementation as functions, you can trivially give
different states differently typed arguments.

How do you do that in the version you sketch?

------
agumonkey
Here are the steps in my recursion experience:

1) teenager reading a pascal book: clueless

2) college C course: imperative "print tree", easy

3) advanced compiler: more sophisticated environment thingies, clueless again

4) CGI quadtrees: want fully recursive solution but chokes

5) rediscovery of lisp: recursion wannabe

6) cons-based lists fundamentals: linear converging recursion is everywhere,
the pattern is easily understood and memorized, everything makes sense

7) FP and lispy combinatorics (treerec, partition, countdown): it seems like
point 2) but something is different now.. deep recursion involves
self-referencing types. It's not just "repeat until" where you don't care what
happens and fix it down the road. It HAS to make sense, and to typecheck right
away.. each recursive step will be used by the previous level, so it has to
fit logically.

If you reach that level your understanding shifts entirely. Recursion is its
own solution; it's absurd, but it's a deep reference point for solving
problems, since you can assume something that doesn't exist yet and see if it
holds..

Now you get to rediscover math proofs and a lot of late HS and college math.
Finally, something I wanted to know and recently found out about: list
recursion is only a linear pattern of recurrence. It's one-dimensional and
often on an easy-to-prove converging variable (cdr + nil?, natural decrement +
zero?). There are n-dimensional patterns, and literature about them. I can't
recall the books but I'll dig for them.

And finally, Turing among others tried to rethink imperative programs as one
big while loop modifying one big vector variable (aka system state), and to
prove that this n-dimensional vector could only take some sound values and
would reach terminating ones. (I think it's called linear ranking..)

Cheers

~~~
AstralStorm
You mean a deterministic state machine? The sheer number of states makes the
approach less than useful.

Tree recursion is often also easy to prove. Kd-tree operations (incl.
quadtree) are tree recursion. The only problem with it may be cache locality.

The only trouble is you have to prove you're walking the tree in one direction
for correctness. Proving that in a rewrite system requires some pretty
interesting monads.

It gets even trickier once the recursion tree is uneven or lopsided, like in
R-trees. Then you get to think about upper bounds etc.

------
JepZ
Some years ago I had problems with recursion too, because I always had edge
cases which made my functions go wild (endless loops). But one day someone
told me that most recursive programs can be fit into the skeleton
below:

    
    
      function a(){
        if(/* Termination Condition */){
          return /* Special Element */
        } else {
          /* Recursion: e.g. a + a */
          return a() + a()
        }
      }
    

Since that day I love recursion.
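A concrete Python instance of that skeleton (a hypothetical nested-list sum):

```python
def total(x):
    if not isinstance(x, list):               # termination condition
        return x                              # "special element": a bare number
    return sum(total(item) for item in x)     # recursion: a + a + ...
```

Every branch either hits the termination condition or recurses on strictly smaller input, which is what keeps the edge cases from going wild.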

~~~
AstralStorm
This is tree recursion, which is not well optimized by modern compilers, as
opposed to tail recursion.

~~~
aidenn0
It is very rare that the tail-recursive form of a function is more readable
than the non-tail recursive form. I find tail-recursion intellectually
interesting, but of little use for actual programming.

Contrasted with tree-recursion where the code is often so pithy that its
correctness is obvious (assuming you understand recursion).

~~~
eru
Tail recursion sometimes comes up naturally. It's essentially a goto with
parameters. Thus it comes up where goto is also (at least borderline) useful,
eg implementing state machines.

~~~
aidenn0
I almost added a parenthetical after "rarely" to the effect that state-
machines are the one exception I've encountered.

------
hyperpape
I admit this is complete speculation on my part, but I can't imagine it helps
to combine recursion with C-style for-loops.

Both pieces of advice from the article, "trust the recursive function" and
"don't try to trace the execution", seem easier to follow if the control flow
is simpler.

Compare:

    
    
        function sum(a) {
            let result = 0;
    
            for (let i = 0; i < a.length; i++) {
                if (Array.isArray(a[i])) {
                    result += sum(a[i])
                } else {
                    result += a[i];
                }
            }
            return result;
        } 
    
        function findCeo(employee) {
            let boss = employee.getBoss();
            if (null == boss) {
                return findCeo(boss);
            }
            else {
                return employee;
            }
        }

~~~
mulmen
I have a problem with the second example. An attribute to track the title of
an employee would solve the problem of finding the CEO and any other type of
employee, this should not be done with recursion. If recursion is such an
important concept we should have real-life examples of it being employed
properly to lean on.

Also am I missing something or does your findCeo function return the employee
when they have a boss and recurse when they have no boss? Seems like this just
returns the employee in most cases and infinitely recurses for the CEO who
presumably has no boss.

~~~
poxrud
I think the author made a typo/bug. The function will crash on a null.getBoss
call as soon as it finds an employee without a boss.

~~~
hyperpape
Correct.

------
wyager
The first problem in the OP is ill-typed, so if this was intended to help
people get a good grasp on recursion, that may be counterproductive.

I never had any problem using recursion, but my ability to reason about it at
an abstract level is due in large part to the excellent paper "Functional
Programming with Bananas, Lenses, Envelopes and Barbed Wire", which covers
various abstract approaches to dealing with recursion schemes.
[https://maartenfokkinga.github.io/utwente/mmf91m.pdf](https://maartenfokkinga.github.io/utwente/mmf91m.pdf)

------
jrs95
I think part of this can be explained by the way recursion is handled on major
platforms. Even something like Python which has some functional leanings is
prone to stack overflows with deep-ish recursion without hacky workarounds.
Once the tools most people are using are more recursion friendly, I think
we'll see it become more mainstream. I think JS has TCO now, at least on Node,
which is a huge step in that direction.

~~~
ballenf
Safari is the only browser that has TCO so far. V8 has it listed as a blocking
bug:

[https://bugs.chromium.org/p/v8/issues/detail?id=4698](https://bugs.chromium.org/p/v8/issues/detail?id=4698)

Unless I'm reading that wrong, TCO was pulled even as an option behind a
harmony flag because it was so broken as to be a security risk.

I wonder if there are any explanations other than focus or priorities as to
why it appears to be harder to implement in Chrome than it was in
Safari/webkit.

Anyway, since it's not in V8 that means it's not in node right?

~~~
jrs95
I guess not, that's pretty disappointing. Hopefully we'll get it eventually,
though.

------
oldandtired
To understand recursion, get "The Little Lisper" or the later book "The Little
Schemer". To know when not to use recursion and to use iteration instead, look
at "Algorithms + Data Structures = Programs". Finally, you can delve into the
mathematics of primitive and non-primitive recursive functions with "Recursive
Functions in Computer Theory".

Good reading to all.

------
Aardwolf
For one, it's not handled well by many languages: expensive stack cost, no
tail call optimization, max limits on stack size, ... It would be nice if you
could write recursive algorithms in code and be sure they ran as efficiently
as when you do it manually with for loops and separate stack data structures.

~~~
mahyarm
Not to mention the debugger. Recursive loops create huge traces that are
difficult to follow vs an equivalent for loop.

~~~
icebraining
That's a feature, not a bug, in my opinion. Recursion preserves information
about the function's execution. The problem is debuggers being too dumb to
detect it and present it well.

------
eru
Imperative programmers learn special purpose constructs like loops that only
work in simple cases. But since they are so wedded to them, they only reach
for the more general tool in more complicated cases. Thus they blame the
general tool for the complication.

Just do away with the special cases, and use e.g. function calls as the basis
for your control flow needs. Then recursion becomes simple.

If necessary, one can introduce loops as a special case later---if working
with legacy code or languages that don't properly support function calls. (Eg
languages without first class functions and tail call optimizations.)

------
icc97
Recursion is hard, but it is a fundamentally great tool.

If I talk to someone who doesn't understand programming, they can usually
understand the if and for statements that you might find in food recipes or in
legal contracts.

But almost nowhere outside mathematics and computer science (and certainly not
as common as recipes) do you come across recursion.

It's mentally hard to think of a problem so that each step gets simpler.

Solving the Towers of Hanoi is not straightforward: I can't follow it at the
speed that I can read, as I could with basic math; I have to really think
about whether what they are suggesting works and how it works.

However, when you get to solving hard problems, recursion really helps. I know
from having spent hundreds of hours with interview questions. Bugs can make a
problem go from taking me 30 minutes to 4+ hours. Whereas using recursion and
other functional programming concepts, once I got to the solution it would
usually be bug free. So while it might not be as fast, I would be more
confident in my final answer.

------
jacquesm
I love the Erlang way of dealing with this:

    
    
      % Problem #2
    
      % Each new term in the Fibonacci sequence is generated by
      % adding the previous two terms. By starting with 1 and 2,
      % the first 10 terms will be:
    
      % 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
    
      % By considering the terms in the Fibonacci sequence whose
      % values do not exceed four million, find the sum of the
      % even-valued terms.
    
      fibsum(_N,M,L) when M >= L -> 0;
      fibsum(N,M,L) when M rem 2 =:= 0 -> M + fibsum(M, N + M, L);
      fibsum(N,M,L) when M rem 2 =:= 1 ->     fibsum(M, N + M, L).
    
      problem_2(L) -> fibsum(1, 2, L) .
    
      problem_2_test() -> ?assert(problem_2(100) =:= 44).
    
      problem_2() -> problem_2(4000000) .
    

(When familiarizing myself with a new programming language, I usually work
through several dozen of the Project Euler problems; that way I only have to
learn one new thing at a time.)

~~~
Pxtl
Name your variables.

------
eksemplar
Is recursion difficult? I remember when we did our first recursion in the
first year of CS, and our professor told us it would be hard. Then she went on
a rant about divide and conquer which made little sense, and eventually got to
the bit about calling yourself until a termination condition is met.

I think most of us thought it was the easiest part of that year.

~~~
mmanfrin
I think recursion as an implementation is easy; recursion as a concept is hard
to understand if you are just starting to learn. I think part of it is getting
your mind to accept the notion that a thing can refer to itself before it has
been fully defined.

~~~
amelius
I suppose it is easier if you learn proof-by-induction first in a math class.

------
bsder
Recursion is _different_. Almost in exactly the same way that geometric proofs
are different.

As such, it requires different ways of teaching.

I have used "The Little Schemer" (even back when it was called "The Little
Lisper") to teach lots of "non-programmers" (I hate that description) how to
do recursion.

~~~
sn9
_The Little Schemer_ is really probably the best way to teach recursion.

------
projektir
I don't think telling people who find a subject difficult that it is easy is
very convincing.

Recursion seems fairly straightforward to me if you think about it in terms of
how it's typically implemented (the stack and jumps), but that's not quite the
mathematical way of thinking about it, I suppose.

------
ccvannorman
I found this HN thread to be quite useful to this conversation:
[https://news.ycombinator.com/item?id=14949392](https://news.ycombinator.com/item?id=14949392)

------
rdtsc
Switching to using Erlang full time, closures and functional aspects
(returning a function from a function) and recursion was a lot harder for me
to grasp than the isolated processes and message passing.

It took a few months to get my head around it, but once you do, it becomes so
obviously simple and easy that you start thinking in it directly. All of a
sudden, having to write "for(i=0;....)" seems awkward and cumbersome when
going back to a language which doesn't handle recursion as well (where it will
blow the stack, for example).

~~~
runeks
> It took a few months to get my head around it, but once you do, it becomes
> so obviously simple and easy that you start thinking in it directly

This is exactly where functional programming shines, in my opinion. It gives
you the ability to write your program logic directly, rather than use
mechanical constructs (e.g. for loops with indexes) that _behave_ as some
logical construct.

The “for(i=0;...)” construct is something I’d give to a machine for the
purpose of execution, whereas the logical description is how I’d explain it to
a human — and the former can be deduced from the latter.
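
To make the contrast concrete, here's a hypothetical sketch in Python (names
are mine, not from the comment), showing the mechanical construct next to the
logical description:

```python
def total_imperative(xs):
    # Mechanical construct: explicit index bookkeeping for the machine.
    total = 0
    for i in range(len(xs)):
        total += xs[i]
    return total

def total_recursive(xs):
    # Logical description: the sum of a list is its head plus the sum
    # of its tail; the sum of an empty list is 0.
    if not xs:
        return 0
    return xs[0] + total_recursive(xs[1:])
```

Both compute the same result; only the second reads like the definition you'd
give a human.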

------
mysterydip
The concept of recursion was easiest for me to grasp by watching a kid do the
maze on a placemat at a restaurant. You go in a line until you find a dead
end, then backtrack. Repeat the process until you reach the goal. Each
intersection where you make a decision is a function call, and each backtrack
is a return.

~~~
taeric
Which is an odd example, since maze solving by recursion typically requires
something like the Zipper data structure. And... no, that is not necessarily
easier to deal with than just using a standard stack and jotting down your
work as you go.

~~~
mysterydip
I was thinking of the classic case of a 2D array representing the maze: a
value of 1 for walls, 0 for open space, 2 for the goal. The recursive solution
is simple:

    
    
      Function Solve(x, y, dir) {
        solved = false
    
        If (maze[x,y] == 2) {
          Print "goal at " + x + "," + y
          Return true
        }
    
        // dir is the move that brought us here (1=up, 2=right, 3=down, 4=left);
        // skipping the opposite direction prevents immediate backtracking.
        If ((dir != 3) && (maze[x,y-1] != 1)) { solved = Solve(x,y-1,1) }
        If ((!solved) && (dir != 4) && (maze[x+1,y] != 1)) { solved = Solve(x+1,y,2) }
        If ((!solved) && (dir != 1) && (maze[x,y+1] != 1)) { solved = Solve(x,y+1,3) }
        If ((!solved) && (dir != 2) && (maze[x-1,y] != 1)) { solved = Solve(x-1,y,4) }
    
        If (solved) {
          Print "path " + x + "," + y
        }
        Return solved
      }

The pseudocode will search the maze, find the exit, then print its location
followed by the path to get there in reverse order. Nothing complicated, but
shows off the statefulness of recursion without needing anything else.
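
A runnable Python translation of that idea (the names and the demo maze are
mine; like the original pseudocode, it assumes the maze is bordered by walls
and loop-free, since there is no visited set):

```python
def solve(maze, x, y, came_from=0):
    # came_from is the move that brought us to (x, y):
    # 1=up, 2=right, 3=down, 4=left, 0=start.
    # Never reversing it prevents immediate backtracking.
    if maze[y][x] == 2:
        print(f"goal at {x},{y}")
        return True
    solved = False
    if came_from != 3 and maze[y - 1][x] != 1:
        solved = solve(maze, x, y - 1, 1)
    if not solved and came_from != 4 and maze[y][x + 1] != 1:
        solved = solve(maze, x + 1, y, 2)
    if not solved and came_from != 1 and maze[y + 1][x] != 1:
        solved = solve(maze, x, y + 1, 3)
    if not solved and came_from != 2 and maze[y][x - 1] != 1:
        solved = solve(maze, x - 1, y, 4)
    if solved:
        print(f"path {x},{y}")
    return solved

MAZE = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 2, 1],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 1, 1],
]
solve(MAZE, 1, 2)  # prints the goal, then the path back to the start
```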

~~~
taeric
And this is one where I find myself falling back on (ironically, given the
thread?) Dijkstra's algorithm, which I did not learn recursively. It makes
much more sense to me because the "path" I am recording is a first-class thing
in the algorithm and not a byproduct of the implementation.

------
naoe8nstaoeusnt
I liked this. It reminds me of proof by induction. I recall my teacher telling
us it was one of the harder concepts, and thinking, "this sounds just like
recursion, and seems pretty natural." I think if she had taken away the scary
introductory language, students would have been less afraid of it.

------
lngnmn
Recursion only really begins with the notion of mutual recursion. Transforming
imperative loops into recursion (with the constant-space accumulator pattern
and TCO) is a nothing-special mechanical operation. Recursion really shines in
mutually and multiply recursive algorithms, like the famous Tower of Hanoi
solution.
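
The textbook example of mutual recursion is even/odd; next to it, the
accumulator pattern might look like this in Python (a hypothetical sketch, not
from the comment):

```python
def is_even(n):
    # Mutual recursion: is_even is defined via is_odd and vice versa.
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)

def total(xs, acc=0):
    # The accumulator pattern: the loop's running state becomes a
    # parameter, and the recursive call is in tail position.
    if not xs:
        return acc
    return total(xs[1:], acc + xs[0])
```

(Python itself does not perform TCO, so the tail call here is about form, not
constant stack space.)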

------
blueyes
I think a good analogy for teaching recursion is humans, or really any living
thing, since they too are functions that call themselves. That is, each time
you create a new human, its function is to create a new human, and so on ad
infinitum.

------
c517402
IMHO, the first problem, summing a generalized (nested) array of numerical
elements, is not best solved recursively. I suggest sum(flatten( )) is better
and has less overhead. The subsequent problems under discussion more strongly
require recursion.
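
In Python, for instance, that could look like the following (a hypothetical
sketch; note the irony that a general flatten is itself most naturally written
recursively):

```python
def flatten(xs):
    # Yield the leaves of an arbitrarily nested list, depth-first.
    for x in xs:
        if isinstance(x, list):
            yield from flatten(x)
        else:
            yield x

print(sum(flatten([1, [2, [3, 4]], 5])))  # 15
```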

------
merb
well I do not think recursion is difficult. but I think doing it in a tail-
called optimised way, can sometimes be tricky (especially in compilers or node
traversals). that's why more and more cps style recursion comes into place
more than often. heck even scala has a way to do cps style tail calls:
[https://www.scala-
lang.org/api/current/scala/util/control/Ta...](https://www.scala-
lang.org/api/current/scala/util/control/TailCalls$.html)
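
In the same spirit as Scala's TailCalls, a minimal trampoline can simulate
tail calls in a language without TCO. A hypothetical Python sketch (names are
mine):

```python
def fact(n, acc=1):
    # Instead of recursing, return a thunk describing the next step.
    if n <= 1:
        return acc
    return lambda: fact(n - 1, acc * n)

def trampoline(f, *args):
    # Run thunks in a loop: constant stack depth, no TCO required.
    result = f(*args)
    while callable(result):
        result = result()
    return result

trampoline(fact, 10000)  # no RecursionError, despite 10000 "calls"
```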

------
leoc
Relevant older comment of mine:
[https://news.ycombinator.com/item?id=11152492](https://news.ycombinator.com/item?id=11152492)

------
hermanbergwerf
Didn't read the article. I usually don't have trouble with recursion; however,
lately I have started avoiding it more and more often. It is true that linear
solutions usually look more complex, but in my opinion they also allow better
reuse of local variables. Their complexity also depends on the data structure:
designing a linear data structure for a nested problem is sometimes possible
and can allow simple non-recursive solutions, which I find more elegant.

~~~
chii
You'll find that some problems are notoriously hard to do iteratively, unless
you effectively simulate recursion with an explicit stack.
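
For example, a depth-first traversal can be written iteratively by making the
call stack explicit (a hypothetical Python sketch, where a node is a
(value, children) pair):

```python
def tree_sum(node):
    # The explicit stack plays the role of the call stack.
    total, stack = 0, [node]
    while stack:
        value, children = stack.pop()
        total += value
        stack.extend(children)
    return total

tree = (1, [(2, []), (3, [(4, [])])])
print(tree_sum(tree))  # 10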

------
charlieflowers
The Little Schemer is what made me fully comfortable with recursion. It's an
amazing teacher.

------
euske
"There's no such thing as a bad student. Only a bad teacher." \- Mr. Miyagi

------
voidhorse
The easiest way to learn recursion imo is by limiting the domain.

For example, we have a thing called a number:

x

how do we indicate the next number? Well we need a new symbol to distinguish
one number from another.

Lets use a marker called prime:

x, x`

Great, now we have two numbers. But let's get a third number. How could we
indicate that? Well we could introduce a totally new symbol, like a 'q'.

x, x`, q.

Well, that works, but we seem to have just arbitrarily chosen q. There's no
immediately obvious aspect of q that lets us know what the next symbol should
be if someone asked us to add yet another number. We have three distinct
number-things but no clear, unified way to proceed... But wait: before, we
took some number x and added a prime to it, x -> x`, which got us a separate
symbol, aka a different number. What if we just do that again, but this time
with our second number: x` -> x``? Great. Now we have three numbers, and we
can always use the same rule, "add a prime mark", to generate the next symbol,
so long as we feed the last symbol into our rule:

x, x`, x``

We also wind up with neat little equivalences:

x`` = x`, x = x, x, x

or if our prime operation is a function:

x`` = prime(prime(x)) = prime(x`) = x``

so x``, is just the result of prime of prime of x.

So on and so forth until infinity.
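
This successor-style construction is easy to mirror in code. A hypothetical
Python sketch of the idea above (representation is mine):

```python
ZERO = "x"

def succ(n):
    # "Add a prime mark": wrap the previous number one more time.
    return ("prime", n)

def to_int(n):
    # Count the prime marks recursively; the bare symbol is the base case.
    if n == ZERO:
        return 0
    return 1 + to_int(n[1])

two = succ(succ(ZERO))  # x``
```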

What makes recursion harder to work with in CS is that, unlike summation, you
_can't_ go to infinity: you need to terminate the operation eventually if you
actually want it to do anything. Because of this, and because our inputs are
not little units like numbers but usually data structures that require more
rules to recurse over, recursion in CS is tightly and innately coupled with
pattern matching and the notion of base (terminating) cases. That is not the
case for operations in mathematics, where we do not care about terminating
because the definition just encodes the rule and isn't actually meant to do
anything; when we do need to do something with it, we limit it by deciding
when to stop (so I guess some neuron firing is the base/terminating case for
human operations?).

Recursion in CS is additionally difficult because we have to worry about how
the computer stores the results of each recursive application and feeds them
back, whereas our brain, kind of like a high-level language, just manages all
that for us when we do math. Also note that we take huge shortcuts with things
like Arabic numerals, which are an incredible advance in efficiency but
probably obscure the recursive roots of mathematics for most learners. E.g.,
you just learn that some symbol 2 plus some symbol 2 is another symbol 4, not
that these encapsulate successive applications of the same rule over a set of
arbitrary symbols. Your brain knows this deep down (that's why it works), but
it makes it difficult to immediately comprehend that it is in fact recursive
unless someone has pointed it out to you. At least that was my experience; I
may have just had bad teachers.

------
such_a_casual
"This article is aimed to introduce and explain a number of topics and ideas
that will help you when practicing recursive problems."

I know I'm not supposed to shit talk the article. I know someone spent a lot
of time making it for me and others. However, we've gotta move forward as a
community. We have to raise the bar. We have to stop this nonsense.

There is an issue here greater than recursion. I know that's bold of me to
say, but I'm serious. Programmers need to STOP fucking telling people that
they
can explain something that requires practice to understand. You cannot learn
multiplication from reading a blog post. It has to be drilled into you over
and over again. And only after it's been drilled into you what numbers are,
what addition and subtraction are, and what carrying and borrowing mean. You
learn this over the course of YEARS not 5 minutes reading some shitpost on the
internet. Recursion is just like math. It has to be drilled into you. I had
zero understanding of this for YEARS. And then I picked up a book that drills
recursion into you. Only after writing function after function (functions
essentially identical to one another in structure), was I able to understand
the IMPORTANCE of recursion. Not by thinking about it, or reasoning about it,
by actually doing it, over and over again. Recursion can be explained in 10
seconds. I'll do it right now. Recursion is when a set of instructions starts
over from step 1. Or for us programmers, a function which calls itself. Bam.
all you need to know about recursion. Guess what. It's not enough to
understand what multiplication is, you actually have to be able to use it. You
actually have to be able to use recursion. Explaining why multiplication is
important is frankly retarded. This post is frankly retarded. STOP pretending
that you can explain in 5 minutes what took you months and years to
understand. STOP. All you do is make people like me think they're dumb for not
getting it. I'M SMARTER THAN YOU! But it took me months of ACTUALLY WRITING
RECURSIVE FUNCTIONS to realize HOW FUCKING DUMB YOU ALL ARE. Stop fucking
telling people you can teach them calculus in 5 minutes. Stop telling them
that all it takes is this one clear explanation. It doesn't! It takes actual
work. Fuck the guy who wrote that post. You have no idea how many years I
wasted thinking blog posts like this were how to learn. If you don't get why
recursion is a big deal, do yourself a favor and forget everything you read in
that post, grab a pencil, some paper, and a copy of the Little Lisper. Email
me in 6 months when you're finished, so that you can thank me for being the
only one to tell you the truth about recursion.

------
Sohcahtoa82
Recursion isn't difficult. I never had a problem understanding it. Stepping
through a recursive function with a debugger that shows you the current stack
trace should make it plain as day how it works.

Not trying to toot my own horn or try to make an appearance on
/r/iamverysmart, but where are people struggling when it comes to
understanding recursion?

~~~
chuckdries
Third year CS student weighing in:

I never had trouble, but a lot of my friends really did. I helped a lot of
them, and I generally got the impression that the ones that really struggled
were the ones that fundamentally didn't understand how function calls and
returning values truly worked.

Like if you just were to do something like
`foo(bar(yuk(asdf(value)))).bind(reck(something)).toString()` (not that this
is a reasonable example), they'd have trouble tracing the order of execution.
This was freshman year, so I'm sure they're all fine now, but CS is so
abstract that every new concept has a barrier that can be surprisingly hard to
overcome. Things like recursion or `this` may seem obvious now, but they
weren't always.

~~~
Sohcahtoa82
> I generally got the impression that the ones that really struggled were the
> ones that fundamentally didn't understand how function calls and returning
> values truly worked.

You're probably right.

I don't think in the entire time I was in school, we were shown a debugger and
how to read a stack trace or step through code. In one of my classes, we might
have glossed over it for a couple minutes, but generally there was very little
real debugging shown. It was always "Oh, we didn't get the expected result,
let's look harder at the code we've entered." My school may have offered a
class on debuggers as an elective or maybe a Masters level class.

Really though, using a debugger, even if it's just the one in Visual Studio,
should be included in the first-year classes. By the end of your first year,
you should be able to know what a stack trace is and what it's telling you.

~~~
jayshua
I remember one of the books in my algorithms class actively discouraged
debuggers and advocated harder reasoning and inserting print statements. Their
reasoning was that programmers are too quick to reach for a debugger rather
than really understanding their code. I'm still not sure how I feel about
that.

~~~
chii
If you were trying to learn math, you shouldn't reach for a calculator every
time there's a difficult arithmetic problem.

A debugger is the same. Use it once you've mastered the skill of desk-checking
code, and understand how to form hypotheses about where bugs might be and
binary-search with print statements to debug.

When actually developing software, you then reach for the debugger for
convenience. But the skills learnt above are still very useful, and are in
fact a good way to learn how to problem-solve.

If you only ever relied on a debugger, it'd be like only knowing how to use a
calculator and having no mental arithmetic.

