
Why?



Just so I can answer, why what?


Just in case this is what you were trying to understand:

https://www.google.com/search?q=recursion+is+dangerous&rlz=1...

http://m.slashdot.org/story/198499

You can certainly dig up more information. For me, and this is just my opinion based on 35 years of software and hardware engineering, recursion is a neat academic idea but in the real world it is useless and dangerous.


I'm seeing stack overflows (not a problem with tail-call optimization) as the biggest problem here. There are equally dangerous problems, though. Buffer overflows are a very real concern, especially in embedded software, but that doesn't mean people never use pointers. Everything about software has risks. Yes, they probably shouldn't have used recursion there in that car; but honestly, it's just a stack: with enough calls, some state could have been frozen. They should have put mitigations in place so that a potential stack overflow couldn't have actual, detrimental results. Do you refuse to ever make a function call for fear that you might be at the top of the stack somehow? After all, how can you know that you haven't accidentally created a state machine with function calls, are at the peak of it now, and will end up causing the same problem? Recursion may have been the final manifestation, but it's certainly not the only place that failed.

I argue against your thought that recursion is useless, as well. It's a natural and intuitive solution for a lot of problems. The very concept of a dynamic programming solution starts with a recurrence relation. It starts with recursion. Sorting algorithms are intuitively recursive in many cases; Quicksort and Mergesort are just two examples.
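To make the sorting point concrete, here's a minimal recursive quicksort in C (Lomuto partition; a sketch with names of my choosing, not production code):

```c
/* Recursive quicksort: partition around the last element as the
   pivot, then recurse on the two halves. Average recursion depth
   is O(log n); a pathological input can drive it to O(n). */
void quicksort(int *a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;  /* pivot into place */
    quicksort(a, lo, i - 1);  /* left of pivot */
    quicksort(a, i + 1, hi);  /* right of pivot (a tail call) */
}
```

Note the second recursive call is in tail position, so a compiler that does tail-call optimization can bound the stack to the depth of the first call only.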

And, 'dangerous' is an interesting term. I understand you've worked in embedded systems and hardware, and so those things are often critical systems that have to be super secure and completely bug free. Not every system is like that, though.

I want to understand why you believe that recursion is both useless and dangerous. Could you give some broad examples of the dangers of recursion from your years of experience?


It's trivial to blow the stack even on languages with tail call optimization.


Ok, try to blow the stack in a stackless language implementation.


F(x) = F(x) + F(x)

Eventually you hit OOM or some depth limit even in a stackless language.


This is a memory leak now, not a stack overflow.


Thus demonstrating a finite stack. "Stackless" is just a more efficient way to use memory. You could get the same thing by dynamically allocating and deallocating stack space as needed.

Put another way, allocating 1GB of stack space on a machine with 2GB of RAM is at worst the equivalent of using a stackless language on a machine limited to 1GB of RAM.

PS: It's actually worse, as stackless languages are strictly slower than having the equivalent stack space pre-allocated: you need to store a pointer for each call, plus the overhead of allocating memory.


By writing incorrect recursion? Or in some other way?


Both. The obvious example is the Fibonacci sequence, f(x) = f(x-1) + f(x-2). There are lots of efficient ways of coding it, but the most obvious approach takes exponential time, runs n deep, and can't be tail-call optimized.

Granted, you can write much more efficient code that can be tail-call optimized, but it's more complex and less obvious. There are also languages which cache previous results, so the naive approach becomes O(n) with fairly elegant code that's still depth n.

PS: Anyway, tail call optimization only works when you can rewrite the code as a loop; for more complex structures it's less useful.
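For what it's worth, the two shapes being contrasted can be sketched in C (function names are mine; a sketch, not a benchmark):

```c
/* Naive doubly-recursive Fibonacci: exponential time, depth n.
   The recursive calls feed an addition, so neither is a tail call. */
long fib_naive(int n) {
    if (n < 2) return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
}

/* Accumulator form: the recursive call is the last thing the
   function does, so a compiler that performs tail-call optimization
   can turn it into a loop: O(n) time, constant stack space. */
long fib_acc(int n, long a, long b) {
    if (n == 0) return a;
    return fib_acc(n - 1, b, a + b);
}
```

Call the second one as fib_acc(n, 0, 1); both compute the same values, but only the accumulator version is tail-call optimizable, and it is noticeably less obvious than the naive one.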


> Anyway, tail call optimization only works when you can rewrite the code as a loop

You cannot turn a virtual tail call into a static jump.


Sure, it needs bookkeeping but it's still trivial to create a loop equivalent of any tail call optimizable code that your compiler recognizes.

PS: Feel free to look for any counter example.


> PS: Feel free to look for any counter example.

Ok, create a loop equivalent for `(define (f g x) (g x))`.


x

Though the mechanical equivalent would be something like:

  int huge(int input)
  {
      int output = 0;
      int running;                       /* "continue" is a C keyword */
      enum { F, G } current_function = F;
      do {
          running = 0;
          if (current_function == F) {
              /* f tail-calls g with its argument unchanged */
              current_function = G;
              running = 1;
          }
          if (current_function == G) {
              /* g just returns its argument */
              output = input;
              running = 0;
          }
      } while (running);
      return output;
  }
You would then add any function you want to tail-call optimize into that dispatch function. Granted, with the right structure (inside > outside > ... > inside) a function can show up more than once on the stack, but the same thing can happen despite tail call optimization.


Mini-interpreter implementations do not count, for performance reasons. You could have called a VM bytecode interpreter loop a "loop alternative to tail call" here.


My first example of (f x) is about as fast as you can get.

The longer example is not great; the point is that it's a very language-agnostic tradeoff of speed vs. memory. Replace the 'if' conditionals with a case statement and it's much faster; you can speed it up further by using continue.

You can speed it up even more by having a jump at the end of each statement, but that stops looking like a loop and is basically just the tail call optimization.

Anyway, the important part is its low memory utilization, and it's compiler independent.

PS: I am in no way suggesting tail call optimization is not useful; just that you can mechanically simulate it when it's not available. You can usually beat that mechanical approach if you need to speed things up further.


> as fast as you can get.

And yet 10x slower than a single indirect jump.

> just you can mechanically simulate it when not available.

It's too slow to ever be anywhere near practical. For this reason, almost no JVM language implementation really does anything like this, besides Kawa, Bigloo and the like, which are slower than some of the dumb interpreters like SISC.


I don't think I was clear enough: it simplifies to zero code (not even a jump), just the original input. (I thought about saying no-op, but even that's wasteful.)

> "it's slow"

Again, it's just a technique; think writing embedded code with a really dumb compiler. Anyway, saying it's slow is not really a counterargument: if you're limited to, say, 2,000 bytes of RAM, you will make lots of tradeoffs between efficiency and speed. Perhaps you have a select statement, perhaps you don't, but starting off with pure ASM is often a pain.

EDIT: This is all from the perspective of dealing with tools that don't do tail call optimization, not writing a compiler / VM etc.


And how exactly is it related to recursion? You should have said "memory leaks are dangerous", and everyone would agree.



