Part of what makes Clojure a great programming language is that you don't have to believe this if you don't want to. Nobody has yet convinced me that recursion has any sustained advantage over looping.
Using a loop is generally bad practice if a more specialised operation is available (don't write a loop for something that is a simple map or reduce, for example). But if the situation justifies recursion, then it usually justifies a loop, unless the recursion is particularly neat. Recursion has the same problem as looping: it doesn't tell anyone anything about what the code is really doing. If I see map, then I have implicit and explicit expectations about what is about to happen.
I mostly agree, and proponents of FP shouldn't stress the recursion too much.
When I program in Haskell, I rarely use recursion myself. Most loops are just maps and folds. Need to build another structure from an existing structure? foldl'. Once I understood that the value being folded can be arbitrarily complex, FP became simpler (if terser) than iterative programming for me.
Agreed, I rarely reach for recursion myself when developing. Most of my transformations are map, filter, and reduce. Although from time to time, I will combine them with recursion when necessary. I find recurring from a reducing function to be great for traversing or searching trees or other recursive data structures. Usually these are accompanied by a ton of comments for the next developer. :)
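Something like this, roughly (a sketch over a made-up {:id ..., :children [...]} tree shape):

;; Depth-first search by recurring from the reducing fn itself.
(defn find-node [pred nodes]
  (reduce (fn [found node]
            (or found
                (when (pred node) node)
                (find-node pred (:children node))))
          nil
          nodes))

(find-node #(= 3 (:id %)) [{:id 1 :children [{:id 3}]}]) ;=> {:id 3}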
Maybe it's my background, but recursion feels more natural: either you are at the base case, or at a point with N-1 items below. This gives you two different invariants that you can use to simplify the code.
Small example, though not exactly comparable to looping.
I needed to create a human-readable qualified path appended with a hash (to avoid conflicts). This qualified path was needed ahead of the construction of an object, and the logical name had to be used in both the hash and the qualified path, since it was global (yes, it was an S3 bucket).
I wrote a function
`uniqueQualifiedName: Construct x Optional(logicalId) x Properties -> string`.
The recursive step is at the top, where the logicalId is transformed to a construct in the scope of the first argument, and the function is called again with NewConstruct x Nothing x Properties. This could easily be adjusted to work with a list of logicalIds instead.
The invariant in the recursive step is that once the call returns, the construct at the end of the chain will be exactly the construct I inserted, and I can pop it safely, and the scope object will be none the wiser.
The invariant in the base is simply that I don't need to worry about logicalId being anything.
You can see then how easy it is to adapt this to work with a list, whereas with a loop you'd have to write it from scratch and then roll back. Too messy, imho.
To just bash them together, I'd have to either use the private API, which is not a good thing, or push a list of items N steps down to each object and have each of them both add and remove the logical names.
I think the difference between recursion and looping is that the former requires you to be very explicit about the state you intend to modify and persist between iterations, while it's generally somewhat of a free-for-all in imperative loops.
This is a general design principle that I appreciate about the Lisp family of languages in general. You can write highly imperative code that mutates things in-place, but it's a little harder to do and it's something you do only when you really need to do it. Whereas the default idioms are functional, and functional design is the path of least resistance. Perl is morally the opposite of functional programming in many ways, but I actually think this design is a great embodiment of the "make easy things easy, make hard things possible" ethos of Larry Wall.
Totally! Clojure is kind of great for this: the loop form in Clojure is really recursion, as `loop` just provides a point (a fn, in effect) for `recur` to return to, as well as a let binding. You can also just call `recur` in any fn. And it has side-effect forms like do, doseq, dotimes, etc.
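A minimal illustration (my own, not from the article):

;; loop is just a recur target plus initial bindings:
(defn sum-to [n]
  (loop [i n, acc 0]
    (if (zero? i)
      acc
      (recur (dec i) (+ acc i)))))

;; and recur can target the enclosing fn with no loop at all:
(defn gcd [a b]
  (if (zero? b) a (recur b (mod a b))))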
I mostly agree, but one really nice use case for recursion is when dealing with trees. For example, writing a function parse_object() that recursively calls itself to parse child objects is way more pleasant than manually managing your own stack, especially if the tree has many kinds of objects and many branches.
Unfortunately, in most languages this pattern will lead to stack overflow on medium-sized inputs, so you often can't use it unless you're in a language like Racket or Erlang that can't really stack overflow.
> Nobody has yet convinced me that recursion has any sustained advantage over looping.
Translating some algorithms into loops requires trampolining or defunctionalisation, whilst recursion can express them much more directly and simply. As a very simple example (in pseudocode):
even(n: uint): boolean = n match {
  case 0: true
  case n: odd(n-1)
}

odd(n: uint): boolean = n match {
  case 0: false
  case n: even(n-1)
}
Whilst these are pretty silly implementations of odd & even, I don't know of a direct analogue using loops. For more realistic examples, just look at any "main loop"; AKA "trampoline boilerplate to work around our language's lack of tail-call elimination" ;)
Many of the replies to this comment seem to have focused on the problem domain (numbers and booleans); and missed the main feature I was trying to show about recursion, which is a collection of functions delegating sub-tasks between themselves (rather than e.g. trampolining via a "main loop")
A closer analogy to my code would be something like this: the domain logic is still abstracted and encapsulated into separate units; the relationships between those units are preserved (e.g. if we set a breakpoint in `even_step`, it will be triggered by `odd`); etc. However, the loop here is literally just a trampoline for thunks, which makes it highly non-idiomatic for imperative/looping style:
type STEP[T] = either[T, unit => STEP[T]]

stepper[T](current: STEP[T]): T = {
  for (result = current; result.isRight(); result = result.value(unit)) {}
  return result.value
}
even(n: uint): boolean = stepper(n match {
  case 0: left(true)
  case n: right(() => odd(n-1))
})

odd(n: uint): boolean = stepper(n match {
  case 0: left(false)
  case n: right(() => even(n-1))
})
Instead, we could defunctionalise; but that requires some separate data structure and "interpreter":
type STEP = ODD(n: uint) | EVEN(n: uint) | RETURN(x: boolean)

even_impl(n: uint): STEP = n match {
  case 0: RETURN(true)
  case n: ODD(n-1)
}

odd_impl(n: uint): STEP = n match {
  case 0: RETURN(false)
  case n: EVEN(n-1)
}

interpret(current: STEP): boolean = {
  while (!current.isReturn) {
    current = current match {
      case ODD(n): odd_impl(n)
      case EVEN(n): even_impl(n)
    }
  }
  return current.x
}

odd(n: uint): boolean = interpret(ODD(n))
even(n: uint): boolean = interpret(EVEN(n))
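Incidentally, Clojure (no JVM tail calls) ships exactly this trampolining workaround as clojure.core/trampoline; a sketch:

(declare my-odd?)

;; Each step returns either a boolean or a 0-arg fn (thunk) for the
;; next step; trampoline keeps calling until it sees a non-fn value.
(defn my-even? [n]
  (if (zero? n) true #(my-odd? (dec n))))

(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))

(trampoline my-even? 1000000) ;=> true, on constant stack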
FWIW, for my comment I essentially took your code and cleaned up the compiler-generated imperative code.
So yes, it used the fact that it only recursed to n-1, but it did not use any optimizations related to booleans. And by looking at other examples you can see general principles: you store a data structure for each function which acts as a cache. For example, suppose you said even(0) = true, even(1) = false, and even(n) = even(n-2); then one way to unroll it would be to use a size-2 ring buffer as a cache.
I think the underlying claim by the OP is that they don't really run into situations in practice that need this kind of mutual recursion. Certainly, mutual recursion is not at all necessary for this boolean example; you argue that the obvious imperative form is not a direct analogue, but it's not really clear in what situation a direct analogue is desirable in the first place.
> I think the underlying claim by the OP is that they don't really run into situations in practice that need this kind of mutual recursion
As mentioned in my comment, this is applicable wherever we have a "main loop". For example, a game might have a bunch of separate functions/methods to handle the various parts of the game; and a "main loop" which passes the return values of some as arguments to others:
function main() {
  world = initWorld()
  while (true) {
    if (quitPressed()) break;
    delta = calculateVelocities(world)
    world = updatePositions(delta, world)
    collisions = findCollisions(world)
    world = handleEvents(mkEvents(collisions), world)
  }
  print("Goodbye")
  quit()
}
Instead of passing data indirectly, by returning to the "main loop"; we could instead have those functions pass their results directly into whatever comes next. For example:
function main() { gameStep(initWorld()) }

function gameStep(world) {
  if (quitPressed()) quit(print("Goodbye"))
  else worldStep(world)
}

function worldStep(world) {
  delta = calculateVelocities(world)
  updatePositions(delta, world)
}

function updatePositions(delta, world) {
  newWorld = <existing logic>
  collisions = findCollisions(newWorld)
  handleEvents(mkEvents(collisions), newWorld)
}

function handleEvents(events, world) {
  newWorld = <existing logic>
  gameStep(newWorld)
}
This is a lot more complicated than the even/odd example, but it's the same pattern. In this case, it's probably clear why we wouldn't want to in-line all of the calculations into one gigantic loop, mixing up physics simulation, collision detection, input handling, combat system, and whatever else this game does.
Again, not saying this is better/worse than a "main loop" (e.g. it can be helpful to have a "central location" to prevent spaghetti code); but (a) this style isn't possible without tail-call elimination, and (b) those who don't have tail-call elimination maybe wouldn't consider such an implementation.
> Again, not saying this is better/worse than a "main loop" (e.g. it can be helpful to have a "central location" to prevent spaghetti code); but (a) this style isn't possible without tail-call elimination, and (b) those who don't have tail-call elimination maybe wouldn't consider such an implementation.
Sure, this is an example where tail calls work as well as a central loop to solve the problem. But OP was looking specifically for a situation where recursion has an active "sustained advantage over looping", i.e., where the solution can be expressed far more naturally through recursion than through looping. Failing that, favoring recursion can just be boiled down to the peculiar preferences of the FP zeitgeist. (That is, even if the language does have tail-call elimination, that doesn't automatically make recursion the better solution.)
> In this case, it's probably clear why we wouldn't want to in-line all of the calculations into one gigantic loop, mixing up physics simulation, collision detection, input handling, combat system, and whatever else this game does.
I don't see what that has to do with loops vs. recursion. In real-world imperative code, we'd break up our main loop into calls to separate stages of the cycle, then break up each stage into calls to different encapsulated submodules, etc.; we wouldn't need to make the loop a 100k-line wall of code running every tiny step. (Compilers can do that part on their own.) Do you mean that this mutually-recursive style can assist with encapsulation in some particular way?
And adjust for all the off-by-one errors. You're managing the same amount of state both ways, but with the loop all the state mutation lives on one line instead of being spread throughout a stack.
That's not a "direct analogue" of my implementation, for many reasons; you've changed the semantics and engineering tradeoffs so much that we might as well write `n % 2 == 0`.
The biggest problem is that, assuming we copy your snippet into a couple of function definitions, we've completely lost the encapsulation/separation-of-concerns/delegation/etc. provided by my `odd` and `even` functions; i.e. all of the "software engineering" stuff that makes source code more maintainable than disassembled binary.
Your loop is more like the following, which is not what I wrote:
even(n: uint): boolean = n match {
  case 0: true
  case n: !even(n-1)
}

odd(n: uint): boolean = n match {
  case 0: false
  case n: !odd(n-1)
}
The major difference is the use of iteration/tail-recursion in this implementation:
- This loop collapses all of the abstraction, forcing each function to implement the entire solution. In contrast, my implementation uses divide-and-conquer: only the zero case is handled directly, and the non-zero case is delegated to a more suitable handler.
- Wrapping two copies of this loop into `even` and `odd` functions will completely lose the relationships inherent in my implementation. For example, if we add instrumentation, optimisations, logging, etc. to the `even` function, that will affect my `odd` function but have no effect if we were to write independent loops.
- This looping implementation has extra dependencies, specifically on `range` and `!`. The `!` function requires knowledge of boolean algebra, the `range` function requires knowledge of lists/sequences/iterators, and `range` also seems to make more sophisticated use of number theory than the `odd`/`even`/`-` required to understand and maintain my implementation.
Note that I'm not claiming either of these is "better"/"worse" than the other. Simply that your loop is not representative of my example; that's specifically why I chose a mutually-recursive example, and not an iterative/tail-recursive one!
(Of course, all of these are exaggerations for such a simple example; but complex, real-world codebases require such engineering practices and tradeoffs to be taken seriously for the sake of maintenance)
I see, so the point you wanted to draw was that if you wanted to implement two functions, mutual recursion might let you separate the implementation logic. That is a nice trick.
> you've changed the semantics and engineering tradeoffs so much that we might as well write `n % 2 == 0`.
That is a fairly glaring weakness of the example you've chosen - it is taking a simple situation and overthinking it. It doesn't really matter because I see your point and surely one would exist, but do you have an example where this technique is an efficient solution?
fun odd (n: Uint32) -> Bool =
  let Uint32(n0, _, _, _) = n in
  let Byte(b0, _, _, _, _, _, _, _) = n0 in
  b0
Who needs hardware support for integers anyway? If you want two's-complement arithmetic, you can build it yourself (although presumably it'll be in the standard library).
I used Scheme for a while so I like using recursion when loops get a bit hairy, but Guy Steele's talk on Parallel Programming* made me aware of the limitations of that way of working. Now that I've switched to Common Lisp I try to think about the algebraic properties of my functions so I can apply them using lparallel's preduce.
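(For the Clojure-inclined: I believe the closest built-in analogue is clojure.core.reducers/fold, which likewise leans on those algebraic properties. A sketch:)

(require '[clojure.core.reducers :as r])

;; fold partitions the vector, reduces the chunks in parallel, and
;; combines the partial sums; valid only because + is associative
;; and (+) is its identity.
(r/fold + (r/map inc (vec (range 1000000)))) ;=> 500000500000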
That's pretty cool. My intent was for the reader, armed with the fundamentals outlined in the post, to infer how he or she can design a system in FP. I'm sorry if that wasn't clear enough in the code.
If you're talking about recursive common table expressions, those are called recursive but are really iterative; the PostgreSQL documentation describes the iterative evaluation.
Note how there is no way to remove a row from the result set once it has been added. That would not be the case with a truly recursive query, because you would be constructing a new result set at every step. As a concrete example, try to write a graph query which finds all nodes exactly three edges from some starting node. That would be trivial with true recursion, but is impossible with a recursive CTE alone.
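For illustration, that query with plain recursion in Clojure (the adjacency-map shape is my own made-up example): every call builds a fresh result set, which is exactly the part a recursive CTE can't express.

;; Nodes reachable by a walk of exactly k edges from start.
(defn nodes-at [graph start k]
  (if (zero? k)
    #{start}
    (set (mapcat #(nodes-at graph % (dec k))
                 (graph start)))))

(nodes-at {:a [:b], :b [:c], :c [:d], :d []} :a 3) ;=> #{:d}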
Looping, as in using a loop keyword, is synchronous. Recursion is also a loop, but can iterate asynchronously as necessary. That is the primary advantage.
Recursion doesn't do that by default, though. And if you have to alter your code to run recursive fns on a new thread or executor, nothing prevents you from doing something similar with a loop, either.
Recursion does do that by default, with respect to what the given function executes. No alteration or special convention is required. Loops that require syntax other than a function call are synchronous only.
If it would help, you can find me on IRC and I would be more than happy to show you how to program.
I see you've gone from recursion "can iterate asynchronously" to it "does do that by default".
In most languages, recursion is not asynchronous, even if it could be, which is part of the point I was making. I'm guessing English is not your first language.
On paper perhaps, but I have yet to see a compiler that can take an arbitrary recursive subroutine and automatically optimize it into a tail-call-optimized version (that is, transform the recursive parts into gotos/jumps, similar to a loop). Non-optimized recursion is not hard to understand; the problem is that any performant recursive code needs to be manually rewritten as tail recursive, which adds a lot of complexity. Build a better compiler and maybe more people would be willing to embrace functional programming.
> On paper perhaps, but I have yet to see a compiler that can take an arbitrary recursive subroutine and automatically optimize it into a tail-call-optimized version (that is, transform the recursive parts into gotos/jumps, similar to a loop).
Unless I'm mistaken, Erlang's compiler rewrites all recursion that explicitly returns a function call into a tail call, eliminating all the stack frames in between.
It doesn't make sense to convert arbitrary recursion into tail-call-optimized form.
If the recursion can be tail-call optimized, then yes, the loop is the optimized, performant implementation.
But if the recursion fundamentally utilizes the call stack, then the reverse is actually true: the recursion is now the performant implementation of a for loop. So a loop-conversion optimization doesn't make sense here, which is partly why a compiler won't do it.
Why? Because in recursion you utilize the call stack, while in the for loop you're going to create a heap-allocated stack and allocate on it repeatedly. Allocation slows down the iterations.
The only advantage of the for loop in this case is that there won't be a stack overflow, but overall the recursive version will actually be faster.
> If the recursion can be tail-call optimized, then yes, the loop is the optimized, performant implementation.
Is it? My mental model of tail-call optimization is that it’s just a replacement of the stack variable values in place and a jump to the function entry point (which seems like the same amount of work as updating local variables in a loop and jumping to the head of the loop).
This is quite false. Languages like Scheme (dialects), ML variants, Erlang and Haskell all make use of recursion in general and they produce very viable results.
If your team and you can read it later or when it goes wrong, then it's fine.
I don't want to need to think hard about what code does, it should be clear. Or jump through 100 files to find a simple algorithm that has been divided into 100 pieces.
I think people refactor to their own understanding or mental model or refactor-to-understand.
I think there are cases where imperative code is intuitive and others where functional code is intuitive.
I've worked on two professional projects in Clojure at a surface level but I still find Python easier to read, but that's my experience YMMV.
There's a point in my programming projects where my own understanding starts to weaken, and this week I've been trying to think of a mindset that simplifies the problem. I am experimenting with multithreaded code in Java that implements left-right concurrency control. The idea is that readers don't block writers, writers don't block readers, and writers don't block writers. Give each thread a shard, rely on the commutative property of your tree data structure, and have a coordinator thread handle merging. The benefit: each thread can operate at single-core speed without any synchronization for reading OR writing. Buffer flipping is handled by the coordinator thread.
Each thread has its own copy of state, which is merged by the coordinating thread, so it's an eventually consistent system. The result: each thread can modify its own copy of the data as much as it wants, and it sees its own snapshot of global state as of the last snapshot point. Cross-thread writes always happen on the inactive buffer.
How do you think about data transformation pipelines? If only there were an IDE for Kafka or Clojure transformation pipelines.
Agreed, cyclomatic complexity is definitely something to be aware of when designing functional programming systems.
I do disagree about Clojure's readability, though that's probably because I've been doing it for so long.
Interesting Java project you've got there. Reminds me of Clojure's core.async library and software transaction memory ;) (though, I know it's not exactly the same thing).
If I were transforming large streaming input from Kafka or Kinesis, I would probably reach for Cortex (a map-reduce-style lib for Clojure), or I might rig up some strange stream-to-lazy-seq adapter and reach for transducers, as they have much better performance when dealing with larger inputs.
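For example, something like this, with hypothetical parse-record/valid? fns and a `records` seq standing in for the real stream:

;; One eager pass; no intermediate lazy seqs allocated between stages.
(def xform
  (comp (map parse-record)
        (filter valid?)))

(into [] xform records)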
I like left-right concurrency control because it sidesteps a number of thread safety problems by ensuring that a thread can always safely read or write to its own buffer.
Fair warning, STM has largely not panned out in industry, and is no longer in common use. Clojure was built when it was still fashionable, but even having the advantage of immutable-by-default data structures, 99% of modern Clojure doesn't use complicated STM, and instead relies on the humble atom, which is updated with a glorified CAS.
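For the record, the humble atom in action (swap! is essentially a compare-and-swap retry loop):

(def counter (atom 0))

;; swap! reads the current value, applies the fn, then does a
;; compareAndSet; on contention it simply retries.
(swap! counter inc)
@counter ;=> 1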
I think the main thing is not really FP vs whatever else but the most fundamental thing is declarative vs imperative. That is probably the biggest jump in mindset when coming from e.g. C or C++ to something like Haskell - that instead of telling the computer/compiler what to do, you describe what you want.
You are still telling the computer what to do, describing what you want is more like prolog. The main difference between Haskell and C in a case like this would be the level at which you tell it what to do.
That's a truism which, when extended, equates using Firefox or Excel with programming in C, because at the end of the day you tell the computer what to do via some interface (a textual programming language, or a graphical one with many more abstraction layers).
I think this would be correct for declarative programming languages, but I don't agree that Haskell is a declarative programming language. Haskell is pure functional programming in my mind. A declarative programming language might be something more akin to DML SQL for a RDBMS, or HCL for Terraform (pre-looping, v0.X).
Pure FP languages are declarative, just with additional restrictions such as referential transparency, no explicit handling of state, and immutability of the underlying state. For these restrictions to hold effectively, the language has to be declarative.
The advantage of Clojure is in data centricity. It is not about shifting your mind from looping to recursion. Clojure has a `for`, which is its list-comprehension loop.
The actual fun of clojure: https://bitslap.it/blog/posts/fun-of-clojure.html
Clojure's data types are fantastic, but the main thesis of my post isn't what's required for FP in Clojure; rather, it's what's required to become comfortable with pure functional programming concepts, which is why I reference Haskell a lot in the post. The examples just happen to be in Clojure.
I get it. But my point is that Clojure is not snobbish about pure functional programming. Its angle is data centricity, which is very different from Haskell and its type centricity.
I agree with you, but I also never said it's snobbish about pure functional programming. The way I see it, pure functional programming, like anything, is just a tool in the belt to help think about solutions in a different manner.
The biggest problem with functional languages is that they teach ways of doing things that are not how computers work internally. For that, C is much better, and assembly language is best; any other procedural language will also teach you more about how computers run under the hood.
Recursion is a purely mathematical concept; in real life the chips don't work with recursion, they work strictly with if-then (more current/less current) logic. Assembly language is also procedural.
Also, there is another fundamental blocker for functional programming: our brains. For recursion you have to keep many things in your head while you call yourself. This is extremely hard (hmm, I wonder why the Lisp Machine was so pricey ;) ), especially for the majority of people, whose brains have trouble remembering more than 2 things in the span of 5 seconds.
If you like math, then sure, use Haskell or Clojure, but for most people it's overkill, and for the most part people still tend to write very procedural code - you have for loops in Clojure, and I am wondering why ;D
I like some things that are often associated with functional programming, e.g. immutability, but I am not a fan of recursion at all. Also, there seems to be some obsession with dynamic types in many Lisp and functional programming languages, for example Clojure or Elixir, and I hate it.
> For recursion you have to keep in brain many things while you call yourself. This is extremely hard
It depends on your problem domain. Recursion makes some things easier - much easier than iteration. Try solving the Tower of Hanoi without using recursion and see how hard it is: https://www.geeksforgeeks.org/iterative-tower-of-hanoi/
Or try writing a merge sort algorithm without using recursion.
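For comparison, the recursive Hanoi is just the textbook few lines (a sketch in Clojure, since that's the language under discussion):

;; Move n disks from peg `from` to peg `to`, using `via` as scratch.
(defn hanoi [n from to via]
  (when (pos? n)
    (hanoi (dec n) from via to)   ;; park n-1 disks on the spare peg
    (println "move disk" n "from" from "to" to)
    (hanoi (dec n) via to from))) ;; bring them back on top

(hanoi 3 :a :c :b) ;; prints the 7 moves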
As a Clojure programmer, I find it a bit unfortunate that most of this post, like many other posts demonstrating Clojure, paints a rosy picture and never mentions the realities one has to reconcile in a Clojure codebase.
For instance, one of the sections in this blog post is titled "Functions over Objects", but any Clojure programmer with experience will agree that most code ends up becoming some sort of map munging. Rather than "functions over objects" you really have "map munging and function instrumentation in dev" over "banging on concrete instances of classes".
There is also this statement
> functional programming languages often facilitate iteration through recursion
then the author proceeds to mention loop/recur, which is nice, but it is dishonest at best and lying at worst to imply that loop/recur is the most natural or common way we iterate over a structure. Off the top of my head, I always see idioms like `(for [[k v] some-map] (do things with k v))`. It would also be nice to demonstrate what faux recursion looks like with loop/recur, rather than stating Clojure's lack of TCO and not elaborating on what the idiomatic version of `recursive-map` would look like.
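Something like this, say (a sketch; `map-vals` is my own stand-in, since I'm only guessing at what the post's `recursive-map` did - assume it applies f to every value of a map):

;; The loop/recur version the post gestures at:
(defn map-vals-recur [f m]
  (loop [entries (seq m), acc {}]
    (if entries
      (let [[k v] (first entries)]
        (recur (next entries) (assoc acc k (f v))))
      acc)))

;; The way most Clojure I see actually writes it:
(defn map-vals [f m]
  (into {} (for [[k v] m] [k (f v)])))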
Next,
> by (nearly) eliminating side-effects, functional programming (nearly) eliminates this whole class of bugs.
That may be the intent of purely functional code in general, but in Clojure one regularly interoperates with the host to take advantage of its rich ecosystem, and many of the libraries one will interop with involve imperative or stateful things! So I think it'd be better if the author had mentioned the idea of "functional core, imperative shell." That is a pattern most Clojurists would agree with, and it is what people really do in a codebase, or at least attempt to design.
Lastly, while Clojure allows one very naturally to mock up a finite state machine with a single map and a few functions describing transitions from the current state of the machine, I want to emphasize that not many Clojure codebases I've seen have ever used FSMs explicitly :-)
Overall, the article is okay, but my brain registers it as another "Clojure portrayed with rose-tinted glasses and contrived examples" article. Not that this is bad. It's just not the kind of sales pitch I'd want to show to non-Clojure programmers, because they will likely want to ask questions about maintainability, testing, instrumentation, etc. And it is possible to show Clojure being nice for each of these things. At a previous job I got to see ClojureScript code dating back to 2014, untouched, and surviving 8 years' worth of language and library updates.
I am not a Clojure programmer, and I am pretty skeptical of Lisps and FP more generally, but I have to say that transducers are pretty great. The descriptions of them, and the way the interface is expressed, are a bit off-putting, but once you grok them they're actually simple and useful.
They occupy the same space as Java's streams, but manage to do the same stuff with a smaller, more generic, more extensible interface that can also do more (e.g. intermediate stages get told when the input is finished). I'm jealous.
EDIT: IIUC, in Java terms, Clojure's "reducing functions" are like Collector, and a transducer is a function which takes a Collector and returns another one. There is then a little bit of top-level sugar so you can give a sequence of transducers, terminating in a concrete reducing function, and get back a reducing function which feeds things through the chain of reducing functions built by the transducers. You could probably replicate this in Java, but it might be too clunky even for Java programmers.
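Concretely, in Clojure that "reducing fn in, reducing fn out" shape is just this (a minimal hand-rolled version of what `(map f)` gives you; real transducers also handle early termination via reduced):

;; A transducer is literally a function from reducing fn to reducing fn.
(defn mapping [f]
  (fn [rf]
    (fn
      ([] (rf))            ;; init
      ([acc] (rf acc))     ;; completion -- the "input is finished" hook
      ([acc x] (rf acc (f x))))))

(transduce (mapping inc) + 0 [1 2 3]) ;=> 9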
Personally, I would say that a blog post which starts "We can think of a transducer as a context-independent transformation composed of, say, many reducers" and then starts adding parentheses is not really demystifying. But perhaps I am not the target audience.
Hell yeah! The problem with "functional" programming is that a lot of people simply don't get that you want to push as much of your program as possible into generic tools that transform things (i.e. FUNCTIONS!). When I start on a new problem I typically look at what kind of tools (functions) would make the solution concise and readable, then create those tools, and then write the solution with them.
Unfortunately, most developers come with preconceived notions of how the program should be laid out and what the development process should be. If they come from the OOP world, you will see code that pretty much looks like objects, just without the OOP machinery. If they come from scripting/procedural, you will see long stretches of instructions merely split into smaller "functions".
> Recursion over Looping
Recursion is elegant, but it is also hard for many people to understand. I object to putting anything into code whose only purpose is making the code look elegant or more advanced, at the cost of narrowing the audience that can effectively work with it.
My goal is to make the code stupid simple. My main challenge is preventing my ego from trying to impress the reader.
Frequently recursion is more readable than looping. I don't hesitate using recursion in that case. But I make sure that the reader should be able to instantly recognise the pattern and the pattern is not too complex.
Another way to make recursion more manageable is to structure your code so the recursion is extracted from everything else. Make one or two very small functions that are only responsible for the recursion, and extract all other logic into functions that are meaningful on their own (and just happen to be used in a recursive context).
If the recursion would be too complex, usually iteration is going to be easier to represent the solution in manageable chunks without requiring the reader to take everything in in one go.
> If the recursion would be too complex, usually iteration is going to be easier to represent the solution in manageable chunks without requiring the reader to take everything in in one go.
Perhaps? But the reason I reach for recursion is so that I don't have to first produce the "wrong answer", and then modify it until it's the "right answer". The sum of 1..10 is always 55. It isn't first 0, then 1, then 3, then 6, etc.
The only difference there is whether the "wrong answer" is visible while it's being calculated.
Sums and factorials are the worst possible way to teach recursion because they look like a loop with a weird twist that adds conceptual complexity but doesn't seem to be there for any other reason.
Recursion should really be taught by application to complex nested data structures. It's a lot clearer when you apply it to structures with multiple sub-levels that can be processed identically than with a simple linear list or range.
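For example (a tiny sketch, not from the article): summing an arbitrarily nested structure, where the self-similarity does real work.

(defn deep-sum [x]
  (if (sequential? x)
    (reduce + (map deep-sum x)) ;; recurse into every sub-level
    x))                         ;; leaf: just a number

(deep-sum [1 [2 [3 4]] 5]) ;=> 15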
It isn't, though. I recently started learning Rust and I've basically had to teach myself loops again. For over 10 years I hadn't constructed a single loop; I got things done by building my programs out of combinators and recursion instead. And I never felt that what I'd written would have been better as a loop.
I even very seldom need if-statements. There's clearly a difference, and I'd really like to think that I'm not _that_ deep into the Kool-Aid.
This has been done with Scala, F#, Haskell and Elm.
With Rust, not only do loops feel more natural, they're very clearly the right tool.
I don't understand where you come from with your argument.
Recursion also produces results that are "wrong answers", although I prefer to call them partial results - just like a loop produces partial results as long as it has not finished yet.
You tend to not put temporary results of loops in instance variables, but in local variables. Which aren't visible externally at all.
And yes, if you do have that exceptional case of a complex algorithm that needs to store partial results in ivars as they are required by different methods and inconvenient to pass around individually, those ivars tend to not be exposed.
In fact, you'd more likely create a separate context object that you pass around, so also not visible externally.
Have you heard about the concept of "loop invariant"?
s = 0
i = 0
// Invariant: before and after every iteration, s is the sum of [0..i)
// End condition: i == n
while i != n:
    s += i
    i += 1
// Here, i == n, so s is the sum of [0..n) because the invariant still holds
That's how loops were supposed to be thought about (and written) since the '70s, and it's more or less equivalent to induction, which is what you're supposed to use when reasoning about recursive functions.
Out of curiosity, is there actually any chipset with the ability to add more than 2 numbers together in one operation (without having to store intermediate results)?
I'd also think looping is far closer than recursion to how we tend to manually calculate things (either entirely in our heads or using a pen & paper or even a basic calculator).
(Edit: looks like a number of other posters have made the same point already, but in dead/downvoted posts, not sure why...)
Chips have the ability to run vector operations that can add a bunch of numbers.
The issue is that recursion in software development is relatively rarely about arithmetic operations. For example, you might want to traverse a tree of objects (like a filesystem folder) to run some operation on them, etc.
In this case the intermediate information is the set of pointers to where you currently are in processing the various folders of your tree structure. In a loop you would have to create a data structure to hold these; in a recursive function they would typically be spread across multiple frames on your stack, pointing to objects in your heap.
> is there actually any chipset with the algorithmic ability to add more than 2 numbers altogether in one operation (without having to store intermediate results)
I was just answering your question. I am not responsible for the quality of your questions.
What you did was post a question while having something else in mind. You got an answer, but that answer does not match the question you had in your head, so now you think it is OK to berate somebody for not being able to read your mind.
My understanding is those vector operations don't, in fact, add more than 2 numbers together, but rather simultaneously add lots of pairs of numbers. Happy to be corrected, and no berating intended.
If there were such an ability at the machine-language level, it would be a strong argument against expressing the addition of a set of numbers using either a loop or recursion. I don't really see that either manner of expressing such an operation is always preferable, and I would rather simply use a built-in language feature (e.g. reduce) that the compiler/interpreter is free to translate into the most appropriate machine instructions for the given hardware.
I agree with your sentiment about recursion. It took me a damn long time to get used to it. But that's why I call it a mindset shift ;). If the codebase were written in Haskell, then there'd be no looping. Clojure is a bit odd in this case, as it has a form called "loop", but that's really just a let binding over a fn, providing a point for `recur` to, well, recur to.
Again, the ultimate goal is to make the code readable (without sacrificing too many other qualities, like performance).
When you have a language and context that make recursion easier to understand than a loop -- go for recursion.
One other reason to go for loops is to make sure you control the stack. With recursion it is not always immediately clear that the loop is going to get tail-call optimisation, and a good nine out of ten developers meet me with a blank stare when I mention it. At least with a loop it is clearly visible how much space you are allocating, and in what way. I had one dev who said he likes recursion because it is more memory efficient, to which I had to point out that each level of recursion creates a new stack frame. The guy just wasn't aware of it...
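(In Clojure, at least, `recur` makes this explicit: it only compiles in tail position, so the constant-stack behaviour is checked rather than hoped for. A small sketch:)

(defn fact [n]
  (loop [i n, acc 1]
    (if (<= i 1)
      acc
      (recur (dec i) (*' acc i))))) ;; compiles: recur is in tail position

;; (defn bad [n] (+ 1 (recur (dec n))))
;; => compile-time error: "Can only recur from tail position"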
Recursion isn't strictly worse though, due to TCE.
And if you can manage to avoid allocating a list by recursing instead, you do save memory. (But I find it hard to think of such a case, as you can usually just loop over an iterator/generator instead.)
This “recursion is more complicated” idea is a myth.
Yes, a lot of people seem to have trouble with it, but it isn’t fundamentally more complex or difficult. The hardest thing about learning recursion is unlearning the other patterns. It’s totally a matter of familiarity.
I enjoyed the article though.