Hacker News

Explaining recursion is a lot easier than writing recursive functions.

I can actually describe explicitly why I've occasionally found writing a recursive function hard. In fact, I expect a UI could be created that would make it much, much easier.

Basically, for ordinary functions I'm able to remember the values stored in each of the variables. (I don't mean during a real computation, but during the pretend computation I perform at all times while programming.)

However, with recursive functions, I sometimes find myself having to remember 5 or more levels of these same variables. So if I have 5 variables, I might end up holding 25 values in my head; in practice it's fewer than that, since often there is some pattern to the values that aids in remembering them.

Additionally, I need to flow the return values back through the call chain, potentially performing extra computations on them; this can be extra confusing if there are multiple recursive call sites at a particular recursion level of a function. For example, what line and column number was the function called at, and which block of the function was it within? And do these values belong to the 7th or the 9th call of the function? Because of this I can no longer rely on my memory, and I don't get the cognitive benefit of associating computations with place.
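To make the difficulty concrete: in the hypothetical trace below, the same local name `n` holds a different value at every recursion depth, which is exactly the "table of levels" I end up simulating in my head. Printing the depth alongside each call and return makes that table explicit.

```python
# Hypothetical illustration: instrument a recursive sum with a `depth`
# parameter so each call and return is printed at its own indentation level.

def traced_sum(n, depth=0):
    indent = "  " * depth
    print(f"{indent}call:   n={n} (depth {depth})")
    if n == 0:                               # base case
        result = 0
    else:
        result = n + traced_sum(n - 1, depth + 1)
    print(f"{indent}return: {result} (depth {depth})")
    return result

traced_sum(3)
```

The indented output is a crude version of the colour-coded table idea: each depth gets its own column of values, and you can watch the returns flow back out.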

Another issue is that if you write code that never reaches its 'base case', it never finishes computing. This is often difficult to debug, and it can cause your machine to lock up.

Perhaps the solution might be:

- A table to represent the values within a recursive function, with colour-coded cells showing which level of the function a value comes from (and whether it was an argument or a return from a deeper level). Effectively, the idea is to reassociate computation with 'place': the method of loci.

- Some tools to troubleshoot cases where the 'base case' has been misprogrammed. For example, set a maximum recursion depth while developing and exit if it is reached. Separately, visualise how the values are advancing towards matching the 'base case'.
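The depth-guard idea in the second bullet can be sketched in a few lines. Everything here is made up for illustration (`max_depth`, the Collatz example); the point is that the error message reports the argument at the moment the limit was hit, so you can see whether it was moving towards the base case at all.

```python
# A minimal sketch of a development-time depth guard: cap the recursion
# depth, and report the current argument when the cap is hit.

def collatz_steps(n, depth=0, max_depth=50):
    if depth > max_depth:
        raise RuntimeError(
            f"depth limit hit at n={n}: is the base case reachable?")
    if n == 1:                                # base case
        return 0
    nxt = n // 2 if n % 2 == 0 else 3 * n + 1
    return 1 + collatz_steps(nxt, depth + 1, max_depth)
```

If the base case is wrong (say, testing `n == 0` here), the guard fires with the sequence of values in hand instead of locking up the machine.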

I could probably come up with more, but I need to think concretely again to do so usefully; and I'd need to experiment with different recursive solutions to find out whether I'm solving their individual problems.




Maybe it's just that I'm not writing code as complicated as yours, but in 5 years of school and 5 years of industry I've never had to hold more than a single level of a function in my head at once. My trick is to explicitly state the contract the function fulfills, i.e. if the input has property X, then the output has property Y. Then when you make the recursive call, you can just assume the called function behaves as advertised. I thought this was the whole point of recursion.
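A sketch of that contract style, using a made-up merge function: the contract is stated once, and the recursive call is trusted to honour it on the smaller input rather than traced into.

```python
# Illustration of contract-based reasoning about a recursive call.

def merge_sorted(a, b):
    """Contract: if `a` and `b` are sorted lists, the result is a sorted
    list containing all elements of both."""
    if not a:
        return list(b)
    if not b:
        return list(a)
    if a[0] <= b[0]:
        # Trust the recursive call to satisfy the contract on smaller input;
        # prepending the smallest element preserves sortedness.
        return [a[0]] + merge_sorted(a[1:], b)
    return [b[0]] + merge_sorted(a, b[1:])
```

Only one level ever needs to be in your head: the current one, plus the contract.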


What you're describing is how normal functions are generally written.

If there were no difference in complexity between writing a function that calls itself and a function that just computes some data and returns it, then surely this discussion wouldn't be happening. I think it's happening for a reason, so I tried to articulate that reason.

I'm speculating that recursive solutions are more difficult for people to write, because (1) whether a function is said to have worked or not could be dependent on a call multiple levels deep from the original call, (2) the immutable constant named `foo` you created within the function body might contain different values at every deeper call (therefore it is no longer an effective name to refer to 'one thing'), (3) the function calls each relate to each other, and likewise so do their values, but this mapping is generally not expressed well within the code or within the outputs you might debug (after each function application, a value is received and the context of how it was produced is often lost).

I'm probably speaking in too abstract a way to be clear. I think the tools could be improved here; I was thinking about the problem earlier on and I realised that generally I solve it by breaking the problems up into tiny functions [0]. Unfortunately, there's a time cost to this approach, so I would prefer it if debugging tools would instead allow you to annotate blocks of code. They could then represent this information visually. This would help me.

I also think that good teaching is important. I remember that I used to be utterly terrible at writing these functions: my main issue was often that I didn't start by thinking about the 'base case' and the simplest inputs. That's easy to get right.

I do think that my mind has never been very well suited to recursive computation; however, I believe I'm decent at understanding and problem-solving around this. It's precisely because I have to think carefully and tool myself up when dealing with recursive problems that I'm able to describe the pitfalls. I have never been the kind of person to compel myself into understanding something merely by stating "this is easy".

[0] https://twitter.com/nouswaves/status/776384680652398592


In a way I find recursive functions _reduce_ the complexity required to reason about a loop. What made me grok recursion was realizing that it's just a more explicitly defined loop:

- You have to define an end condition. (Really this is no different from the second clause of a C-style `for` loop, or the expression evaluated by a `while` loop.)

- You have to explicitly specify the inputs for each iteration of the loop (e.g. explicitly declare what you need in the function signature, instead of just using whatever variables happen to be in the parent scope as working memory.)

- It makes optimization concerns very quickly apparent, as you'll pretty quickly blow the stack or start waiting for the heat-death of the universe. The best heuristics I have for a quick optimization pass are: is it tail-recursive, and does it needlessly recompute values?

Once I realized recursion is just another way to tell the compiler "repeat this thing until [x] is true", it became a lot less mystical and much easier to reason about.

I'd agree with OP to an extent that syntax did make a huge difference in how I viewed recursion. I understood recursion on a theoretical level for a while, but avoided it whenever possible because I too thought it was difficult to reason about.

It wasn't until I learned Elixir that it "clicked" for me. Between pattern matching and multiple function heads: Elixir just made recursion really easy to think about. (Not having any traditional loop construct probably helped a bit, too.)


I agree with you that it is more explicit than looping: I actually think that with complicated logic, loops suffer from even worse problems, and for simple things most people just `filter`, `reduce` or `map`. That said, I do think the debuggability could be improved.
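The "simple things" case mentioned above, spelled out with a made-up example: a fold replaces an explicit accumulating loop, and a hand-written recursion is the same fold again.

```python
# Three equivalent ways to sum a list: loop, reduce, and recursion.
from functools import reduce

nums = [1, 2, 3, 4]

total_loop = 0
for x in nums:
    total_loop += x

total_reduce = reduce(lambda acc, x: acc + x, nums, 0)

def total_rec(xs):
    return 0 if not xs else xs[0] + total_rec(xs[1:])
```
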


This is a really nice way to think about recursion! Thanks for sharing it.


> (1) whether a function is said to have worked or not could be dependent on a call multiple levels deep from the original call, (2) the immutable constant named `foo` you created within the function body might contain different values at every deeper call (therefore it is no longer an effective name to refer to 'one thing'),

These aren't issues as long as you can prove your recurrence relation is correct. If you treat the recursive call as a black box, there is no need to think about how it actually behaves and computes its result.


It's like how proof by induction seems mind-bending until you learn to think in terms of induction hypotheses.


> learn to think in terms of induction hypotheses.

Can you expand on this or link to a good definition?


The induction hypothesis is the assumption, in a proof by induction, that the statement P(n) holds for some n; you then use this assumption and the problem definition to prove the case P(n+1).

As a concrete example, take the formula for the sum of the first n integers: S(n) = n * (n+1)/2. Proving this inductively means asking what value S(n+1) should take. We can calculate it either by a direct "change of variable" expansion of the hypothesized formula or, noting that S() is the sum of a series, by the alternate expansion S(n) + (n+1). Simplifying both yields the same algebraic result, (n+1) * (n+2)/2.

[The final step for a full proof would be to establish a "baseline" case, traditionally either 0 or 1, where we show the formula yields the correct result. The inductive step, applied repeatedly, then proves the formula is correct for the next higher input, and the one after that, ad infinitum.]
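The inductive step described above, written out in full:

```latex
\text{Hypothesis: } S(n) = \frac{n(n+1)}{2}. \qquad
S(n+1) = S(n) + (n+1)
       = \frac{n(n+1)}{2} + (n+1)
       = \frac{(n+1)(n+2)}{2}.
```

The base case S(1) = 1 * 2 / 2 = 1 then anchors the chain.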


I find the pen-and-paper method of visualizing invaluable. Make a small sample input that achieves full coverage and trace it through step by step, writing out each call point with its values. Then you see the shape of your recursion (https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.htm...) and, if there's a bug, it quickly becomes apparent.


Buy yourself a copy of The Little Schemer. It is the best book on recursion I have ever seen. Also mbrock's statement is key.


Agreed: a tool to visualize what's going on with a sample input would be a great help in understanding and writing recursive programs.



