
Now, I'm only a novice programmer, but my friends who are studying programming tell me that recursive code is a good thing: it leads to less code duplication and it looks more elegant.

That may well be so, but every time I've tried my hand at recursive code, I've found it to be significantly slower than non-recursive code. I'm not sure if it's something about the way I've used it, but I highly doubt it. I've used it for fairly simple things like the Collatz conjecture and Fibonacci number generation, but whenever I've compared it against normal iterative code, the recursive approach consistently clocks in at around twice the time of the iterative solution.
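For reference, this is roughly the kind of comparison I've been doing (a minimal Python sketch, not my exact code; `fib_recursive` and `fib_iterative` are just illustrative names):

```python
import time

def fib_recursive(n):
    # Naive recursion: recomputes the same subproblems over and over.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Simple loop: each term is computed exactly once.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for f in (fib_recursive, fib_iterative):
    start = time.perf_counter()
    result = f(30)
    print(f.__name__, result, time.perf_counter() - start)
```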

Overly Excessive
  • well if you're calculating Fibonacci using something like `return f(n-1) + f(n-2)`, you're actually doing twice the work. You can use memoization to make sure you're only calculating each term once. – Lescai Ionel May 04 '14 at 13:43
  • 4
    @LescaiIonel You're actually doing *exponentially* more work, since each of those recursive calls itself duplicates work. If it was just a constant factor of two, memoization might not necessarily be a win. OP should give some examples of "the" recursive approach (side by side with the iterative version that it was compared to) to determine whether the two algorithms are different or whether it's really just recursion vs. iteration. –  May 04 '14 at 13:45
  • I've found traversing a node tree performs faster recursively than non-recursively. Non-recursive while loops require more work to get around what can be done easily recursively. It seems counter-intuitive, but I implemented a non-recursive tree walker thinking it would be faster, only to revert my changes after benchmarking showed it was slower. – Reactgular May 04 '14 at 15:51
  • Fibonacci is actually a nice example of how to do recursion and then, in the next lesson, learn how to do it faster non-recursively. – Pieter B May 04 '14 at 16:58
  • @PieterB: There is also the faster recursive version of Fibonacci. No need to avoid recursion to have a fast implementation. – Giorgio May 04 '14 at 21:03
  • Could you please add more details (language used, problems tried)? The performance of software is notoriously touchy. – Telastyn May 05 '14 at 02:18

2 Answers


It is good to at least understand how recursion works, because some algorithms are naturally recursive and thus much easier to express using recursion.

Also, recursive algorithms (or implementations) are not inherently slower than iterative ones. In fact, every recursive algorithm can be implemented as an equivalent iterative one, at the cost of having to keep track of some intermediate/temporary values yourself, where the recursive version automatically keeps track of those in the function call stack.

One example of a naturally recursive algorithm is if you want to apply an operation to all the nodes of a tree. The most natural implementation here is to have a function that performs the operation on one node and calls itself recursively for each of the children of the node.
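As a sketch (in Python, since the question doesn't fix a language; `Node` and `apply_to_tree` are illustrative names), such a traversal might look like this:

```python
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def apply_to_tree(node, operation):
    # Perform the operation on this node, then recurse into each child.
    operation(node)
    for child in node.children:
        apply_to_tree(child, operation)

# A small tree:  1 -> (2 -> 4, 3)
tree = Node(1, [Node(2, [Node(4)]), Node(3)])
apply_to_tree(tree, lambda n: print(n.value))  # visits 1, 2, 4, 3
```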


For some algorithms, like calculating a Fibonacci sequence, recursion seems natural, but a naive implementation will be much slower than an iterative implementation, because the recursive version keeps re-doing the same work over and over again. This just means that for those particular algorithms, recursion may not be the best choice, or that you need to use some techniques to remember intermediate values that might be needed again elsewhere in the algorithm.
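One such technique is memoization, as mentioned in the comments. A minimal Python sketch using the standard library's `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each value of n is computed once and then served from the cache,
    # turning the exponential naive recursion into linear work.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # returns instantly; the naive version would never finish
```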

For other algorithms, in particular those that use divide-and-conquer tactics, you will find that you need a stack to keep track of some things in the iterative solution. In those cases, the recursive version is much cleaner, because the stack handling becomes implicit.
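To make the comparison concrete, here is a sketch (again in Python, with hypothetical names) of a pre-order tree traversal where the iterative version has to manage the stack by hand:

```python
def visit_iteratively(root, operation):
    # root is a (value, children) tuple; children is a list of such tuples.
    # The explicit stack does by hand what the recursive version gets
    # for free from the call stack.
    stack = [root]
    while stack:
        value, children = stack.pop()
        operation(value)
        # Push children in reverse so they are visited left-to-right.
        stack.extend(reversed(children))

tree = (1, [(2, [(4, [])]), (3, [])])
visit_iteratively(tree, print)  # prints 1, 2, 4, 3
```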

John R. Strohm
Bart van Ingen Schenau

Recursion is slower and it consumes more memory, since it can fill up the stack. But there is a work-around called tail-call optimization, which requires slightly more complex code (since you need an extra accumulator parameter to pass around) but is more efficient since it doesn't grow the stack.
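A sketch of what the accumulator style looks like, using Fibonacci in Python (note that CPython does not actually perform tail-call optimization, so this only illustrates the shape such code takes):

```python
def fib_tail(n, a=0, b=1):
    # The extra parameters a and b carry the running state, so the
    # recursive call is the very last thing the function does (a tail call).
    # A language with TCO can reuse the current stack frame for that call.
    if n == 0:
        return a
    return fib_tail(n - 1, b, a + b)

print(fib_tail(10))  # 55
```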

Unfortunately, C# does not support this, so I would suggest you use recursion with care, because you might get a stack overflow for large inputs.

In conclusion: in functional languages you only have recursion and have to deal with it, while in imperative languages iteration is more natural and more efficient. The majority of the most-used imperative languages either do not support (Java, C#, Python) or do not guarantee (C, C++) tail-call optimization.

Random42
  • Yes, I've run into a couple of stack overflows because of using recursion "carelessly"... – Overly Excessive May 04 '14 at 14:14
  • 3
    As far as I know, a clever implementation of tail-call optimization will run as fast as a while loop using local variables. The difference is that using recursion you have to specify inputs and outputs to each iteration explicitly, which makes it easier to write correct code. Also, while the underlying implementation can still use side-effects for efficiency, at the level of your programming language semantics you can write purely functional (side-effect free) code. – Giorgio May 04 '14 at 14:40
  • @Giorgio The only thing is that most imperative languages do not support tail-call optimization. And in functional languages there is no discussion, since you only have recursion. – Random42 May 04 '14 at 15:08
  • 1
    @m3th0dman, in this, the second decade of the 21st century, pretty much ALL production-quality compilers support tail-call optimization. There was a StackOverflow question about strange 8051 code generation (may have been PIC), that turned out to be tail-call optimization. Read the various "Lambda: The Ultimate ..." papers from the MIT AI Lab, from decades ago. Tail-call optimization is in fact a very straightforward code generator optimization. – John R. Strohm May 05 '14 at 02:51
  • 1
    @JohnR.Strohm There are many languages without optimizing compilers (mostly those commonly called "interpreted"). The JVM doesn't do TCO -- Scala and Clojure have to implement their own tail recursion elimination and don't have TCO. The CLR does support tail calls, but it must be opted into and at least C# (and probably other imperative CLR languages) doesn't do that. Finally, even in imperative language compilers that support TCO in principle, it's optional and very restricted/fragile (just check the restrictions on LLVM's `tailcall`). –  May 05 '14 at 05:34
  • @delnan: Read Guy Lewis Steele, Jr.. "Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO". MIT AI Lab. AI Lab Memo AIM-443. October 1977. http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-443.pdf – John R. Strohm May 05 '14 at 06:02
  • 3
    @JohnR.Strohm What claim exactly do you want to support with that reference? I did not say TCO can't happen or can't be very effective. I supported m3th0dman's claim that many imperative languages *don't have TCO* by given numerous examples. Regardless of how effective TCO is, or how easy it might be to implement, if it *isn't actually implemented* users of those languages don't benefit from TCO. I see nothing in the Steele paper arguing against that. –  May 05 '14 at 06:17
  • @delnan, Steele's paper explains TCO and shows that TCO is not particularly difficult to implement: it is a very straightforward code generator optimization *IF* the code generator sees stacking the return address and jumping to the subroutine as two separate operations, and then allows the optimizer to decide whether it actually NEEDS to stack the return address, or any intermediate data. If the optimizer decides it does need to do this, it generates a CALL. If the optimizer decides otherwise, it generates a plain jump. – John R. Strohm May 05 '14 at 18:30
  • @JohnR.Strohm You can discuss how easy and desirable TCO is with the maintainers of the various language implementations mentioned before. As of right now, a large number of very popular languages have no implementation that does TCO, and another set has no *reliable* TCO. (And to be honest, I'd be surprised if your evangelism changed anything about that.) –  May 05 '14 at 18:33
  • See also http://stackoverflow.com/questions/34125/which-if-any-c-compilers-do-tail-recursion-optimization – John R. Strohm May 06 '14 at 00:57
  • @delnan: I still have to understand why a JVM limitation should prevent a Scala or Clojure compiler from generating code that does TCO. This is really beyond my understanding. Maybe there is no direct translation, but I cannot see why it is so difficult to generate code that does TCO. – Giorgio May 06 '14 at 19:01
  • 1
    @Giorgio Tail *recursion* elimination is easy enough, and indeed implemented. TCO for arbitrary callees would require full trampolining everywhere, which kills Java interop (extremely important for both) and is very slow. You can't pull local tricks like jumping to the target code without setting up another stack frame because the JVM won't allow it. There's no cross-method jump, only the `invoke*` opcodes which always set up the call frame for you. –  May 06 '14 at 19:20