129

There are times when using recursion is better than using a loop, and times when using a loop is better than using recursion. Choosing the "right" one can save resources and/or result in fewer lines of code.

Are there any cases where a task can only be done using recursion, rather than a loop?

Pikamander2
  • 14
    I seriously doubt it. Recursion is a glorified loop. – Lightness Races in Orbit Nov 22 '15 at 05:06
  • 6
    Seeing the diverging directions into which the answers go (and having just failed myself at providing a better one) you might do anyone trying to answer a favor if you provide a bit more background and what kind of answer you are after. Do you want a theoretical proof for hypothetical machines (with unlimited storage and run-time)? Or practical examples? (Where “would be ridiculously complicated” might qualify as “can't be done”.) Or something different? – 5gon12eder Nov 22 '15 at 06:51
  • A more interesting question would be, is there anything which can be done with help of LIFO (stack) of some kind, that can't be done with other kind of temporary storage (without simulating LIFO access). – hyde Nov 22 '15 at 12:58
  • 8
    @LightnessRacesinOrbit "glorified loop" is a bit harsh. Try implementing a reasonable general O(N log N) sorting algorithm with a loop and without emulating recursion, by which I mean having your own stack which does the same thing as the stack in "real" recursion. Note: there might be such an algorithm, but I'm not aware of one. – hyde Nov 22 '15 at 14:21
  • @hyde: _"without emulating recursion"_ That introduces a circular argument! If we're to use the "fact" that recursion is a glorified loop to rewrite a recursive algorithm in a loop, then of course the logic is going to be extremely similar, with the addition of our own stack for data storage. I agree that logically it'll look pretty much the same, and that's kind of my point. – Lightness Races in Orbit Nov 22 '15 at 14:43
  • @LightnessRacesinOrbit I was picking on "glorified something" idiom (a dismissive, pejorative expression in English), which I'd say is unfair, since recursion both demands something extra (a stack) over just being able to do looping, and enables certain kind of algorithms as a trade-off. – hyde Nov 22 '15 at 15:10
  • 10
    @LightnessRacesinOrbit To my non-native-English-speaker ear, "Recursion is a glorified loop" sounds like you mean "You might as well use a looping construct instead of a recursive call anywhere, and the concept doesn't really deserve its own name". Perhaps I interpret the "glorified something" idiom wrong, then. – hyde Nov 22 '15 at 15:24
  • 13
    What about Ackermann function? https://en.wikipedia.org/wiki/Ackermann_function, not particularly useful but impossible to do via looping. (You may also want to check this video https://www.youtube.com/watch?v=i7sm9dzFtEI by Computerphile) – WizardOfMenlo Nov 22 '15 at 21:12
  • 5
    @WizardOfMenlo glance at [Rosetta Code](http://rosettacode.org/wiki/Ackermann_function) and then pick one of the Esolangs. Like Befunge-93: `Since Befunge-93 doesn't have recursive capabilities we need to use an iterative algorithm.` and note that it does it. –  Nov 22 '15 at 22:06
  • @MichaelT Oh, that's interesting... I thought that the Ackermann function was not primitive recursive, and as such it couldn't be de-recursified. Are you sure that it doesn't simply print the results without going through the function? Or that the Befunge code is emulating recursion (possibly)? Otherwise this leaves me quite puzzled :) – WizardOfMenlo Nov 22 '15 at 22:13
  • 8
    @WizardOfMenlo the befunge code is an implementation of the [ERRE](http://rosettacode.org/wiki/Ackermann_function#ERRE) solution (which is also an iterative solution... with a stack). An iterative approach with a stack can emulate a recursive call. In any programming language that is suitably powerful, one looping construct can be used to emulate another one. The register machine with the instructions `INC (r)`, `JZDEC (r, z)` can implement a Turing machine. It has no 'recursion' - that's a Jump if Zero else DECrement. If the Ackermann function is computable (it is), that register machine can do it. –  Nov 22 '15 at 22:23
  • Recursion should be used sparingly because it becomes complex to unpack the logic for the first time, and is thus less readable. Also, changing the logic can become a Rubik's Cube-style puzzle; better to generally avoid it, except when doing obvious and simple node traversal. – Mark Rogers Nov 23 '15 at 00:03
  • Unless the loop also includes a stack it can hardly do anything interesting that recursion can do. – user207421 Nov 23 '15 at 03:00
  • Well there's lots of answers already... but I think the bottom line is there is one theoretical CS interpretation of this question where the answer is provably "yes" and another where the answer is provably "no". – djechlin Nov 23 '15 at 04:48
  • 1
    Maintaining clarity while traversing a recursive structure... – recursion.ninja Nov 23 '15 at 14:40
  • 4
    In most languages, the one thing recursion can do that you can't do with loops is make your program crash with no way to catch or report the error. – R.. GitHub STOP HELPING ICE Nov 23 '15 at 15:25
  • In async runtimes (eg javascript), you can't use a loop that involves any async call and expect the loop to resume after the async call returns. Eg using `setTimeout` to increment the seconds each second on a digital clock application. But recursion works a charm. Would have done this as an answer, but I don't have the reputation yet apparently. :P – poshest Nov 23 '15 at 17:39
  • So many wrong answers. The ackermann function is designed as a counter-example to this. – Filip Haglund Nov 23 '15 at 18:28
  • 4
    @Pinoniq To understand recursion, you must first understand recursion. – Mason Wheeler Nov 23 '15 at 19:56
  • 1
    http://stackoverflow.com/questions/10742322/how-to-rewrite-ackermann-function-in-non-recursive-style has an implementation of Ackermann that isn't recursive. – JB King Nov 23 '15 at 21:53
  • Yes, when the concept is used as a heuristic to evaluate a programmer's understanding of the machine. The barrier for understanding loops is a bit lower. Quoting Joel Spolsky, "If I may be so brash, it has been my humble experience that there are two things traditionally taught in universities as a part of a computer science curriculum which many people just never really fully comprehend: pointers and recursion" (http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html). – GargantuChet Nov 23 '15 at 22:29
  • **Comments are not for extended discussions. Please use the chat room if you wish to continue your discussion. Thank you.** – maple_shaft Nov 24 '15 at 03:44
  • 1
    A much better question would be if there are things which can be done via looping and can't with recursion... – jcora Nov 24 '15 at 12:07
  • 4
    Beware that this question has attracted many answers that are mediocre or downright wrong, yet have a very high score. I recommend asking questions that require some knowledge of computer science on [cs.se] rather than here. – Gilles 'SO- stop being evil' Nov 24 '15 at 22:09
  • "Prettier" is a technical term...? – user541686 Nov 25 '15 at 00:15
  • 2
    @WizardOfMenlo - Regarding the Ackermann function, what you wrote is not true. Writing that in a recursive language of choice and in FORTRAN and in assembly (sans recursion) was an assignment in a comparative languages class I took ages and ages ago. The recursive implementation can be a one liner. For example, `def a(x,y):return y+1 if x==0 else a(x-1,1) if y==0 else a(x-1,a(x,y-1))`. The non-recursive implementations were a mess, but it can be done. – David Hammen Nov 25 '15 at 00:22
  • I dont have enough points to write an answer, so adding a comment. A task that needs to run in specific interval, you can achieve using recursion but will be difficult using loops. You can check following [Fiddle](http://jsfiddle.net/RajeshDixit/z597hevr/) for reference. – Rajesh Nov 27 '15 at 11:46
  • For me, the main conceptual difference between recursion and looping is that recursion expresses locality. In recursion the result of each step only depends on its input and the result of anything called by that step. A loop on the other hand requires non-local state, shared by every iteration. The result of each iteration depends on the input and the result of all the iterations preceding it. – biziclop Nov 28 '15 at 19:37
  • A loop is a self-recalling block of code. Loops are recursion with defined exit conditions – That Realtor Programmer Guy Sep 04 '21 at 17:18

11 Answers

169

Yes and no. Ultimately, there's nothing recursion can compute that looping can't, but looping takes a lot more plumbing. Therefore, the one thing recursion can do that loops can't is make some tasks super easy.

Take walking a tree. Walking a tree with recursion is stupid-easy. It's the most natural thing in the world. Walking a tree with loops is a lot less straightforward. You have to maintain a stack or some other data structure to keep track of what you've done.

Often, the recursive solution to a problem is prettier. That's a technical term, and it matters.
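To make the contrast concrete, here is a sketch in Python (the `Node` class and function names are illustrative, not from any particular library): the recursive walk leans on the call stack, while the loop has to maintain its own explicit stack.

```python
class Node:
    """A minimal n-ary tree node."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def walk_recursive(node, visit):
    """Pre-order walk: the call stack tracks where we are for free."""
    visit(node.value)
    for child in node.children:
        walk_recursive(child, visit)

def walk_iterative(node, visit):
    """The same walk with a loop: we must manage the stack ourselves."""
    stack = [node]
    while stack:
        current = stack.pop()
        visit(current.value)
        # Push children in reverse so they pop in left-to-right order.
        stack.extend(reversed(current.children))
```

Both visit nodes in the same pre-order; the iterative version just moves the bookkeeping the runtime normally does into user code.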

Scant Roger
  • 121
    Basically, doing loops instead of recursion means manually handling the stack. – Silviu Burcea Nov 22 '15 at 06:27
  • 16
    ... the *stack(s)*. The following situation may strongly prefer having more than one stack. Consider one recursive function `A` that finds something in a tree. Every time `A` encounters that thing, it launches another recursive function `B` which finds a related thing in the subtree at the position where it was launched by `A`. Once `B` finishes its recursion it returns to `A`, and the latter continues its own recursion. One may declare one stack for `A` and one for `B`, or put the `B` stack inside the `A` loop. If one insists on using a single stack, things get *really* complicated. – rwong Nov 22 '15 at 06:48
  • Correct!​​​​​​​ – Lightness Races in Orbit Nov 22 '15 at 14:44
  • 38
    `Therefore, the one thing recursion can do that loops can't is make some tasks super easy.` And the one thing that loops can do that recursion can't is make some tasks super easy. Have you seen the ugly, unintuitive things you have to do to transform most naturally-iterative problems from naive recursion to tail recursion so they won't blow the stack? – Mason Wheeler Nov 22 '15 at 17:29
  • 10
    @MasonWheeler 99% of the time those "things" can be better encapsulated inside a recursion-operator like `map` or `fold` (in fact if you choose to consider them primitives, I think you can use `fold`/`unfold` as a third alternative to loops *or* recursion). Unless you're writing library code there aren't that many cases where you should be worrying about the implementation of the iteration, rather than the task it's supposed to be accomplishing - in practice, that means explicit loops and explicit recursion are both equally poor abstractions that should be avoided at the top level. – Alex Celeste Nov 22 '15 at 18:06
  • 7
    You could compare two strings by recursively comparing substrings, but just comparing each character, one-by-one, until you get a mismatch is apt to perform better and be more clear to the reader. – Gort the Robot Nov 22 '15 at 18:35
  • @MasonWheeler I think you missed my qualifier "some". I'm not insisting that _everything_ goes better with recursion. Hardly. I say use the best tool for the job. That's not what the OP asked, though. They asked recursion > loop versus loop > recursion. – Scant Roger Nov 22 '15 at 19:05
  • 2
    @StevenBurnap Nice example. I agree that _some_ tasks are clearer using iteration. Just like _some_ tasks are clearer using recursion. – Scant Roger Nov 22 '15 at 19:07
  • "Have you seen the ugly, unintuitive things you have to do to transform most naturally-iterative problems from naive recursion to tail recursion so they won't blow the stack?": Er..., not really. A tail recursive function is just a straightforward representation of a while loop in which you represent the loop variables explicitly as formal parameters. If you have a loop in mind, the corresponding tail-recursive function is just another way to write the body of the loop. The initial call to that function corresponds to the loop initialization. I find this pretty intuitive and straightforward. – Giorgio Nov 22 '15 at 22:38
  • 4
    @Giorgio, when I taught programming years ago, I've seen multiple people coming up with the idea of loop and never seen someone not understanding the idea. I've never seen one coming up with the idea of recursion and saw multiple persons having trouble to understand it (it got better after I stopped using examples like factorial where the loop was so clearly better for them that they could not go over the apparent lack of interest). – AProgrammer Nov 23 '15 at 14:04
  • @AProgrammer I'm guessing you teach people imperative programming languages first. Functional programming languages tend to encourage recursion more than they do iteration. In imperative languages, looping is more natural. – Yakk Nov 23 '15 at 15:52
  • 2
    @Yakk, I've taught lisp as an extension language for a program to people who are not programmers. Even when presenting loops after recursion (I've not done that too many times, that approach was too weird for the other trainers as well), loops are more natural for my trainees. They would use them even in cases where recursion is more natural for me than a loop, so that's not a bias I was passing to them. – AProgrammer Nov 23 '15 at 16:19
  • Exactly wrong. Things involving traversing trees and graphs are literally the only cases where recursion is faster, simpler or easier to understand. The thing that recursion invariably does that loops do not is to make implementations obtuse and hard to decipher. – RBarryYoung Nov 23 '15 at 21:18
  • @AProgrammer: that has a simple explanation: implicit things (like stack use in recursion-based solutions) are harder to grok. OTOH, once used to them, they allow you to stop spelling out things that are the same in countless pieces of code. – ninjalj Nov 24 '15 at 02:07
  • 4
    Exactly right! Nope, you're wrong. I'm right. No, you're wrong. Right. Wrong. Right. Wrong! Right, I said it last. Wrong! You have no fashion sense. At least I'm not wrong. That tie is wrong. Hey, my mom gave me this tie. Well then she's wrong. Not about recursion, she isn't. You're undateable. Non sequitur! You're not smart enough to understand my point of view. Dunning Kruger effect. Non sequitur! I rest my case, look it up. You don't know Vim. You don't know Emacs. Functional. OO. Angular. React. Whatever you're into "considered harmful." You're why I abandoned Ruby. You're why your mom... – Scant Roger Nov 24 '15 at 02:25
  • Aside: with parent pointers, you can walk a tree in a loop without a stack. –  Nov 24 '15 at 06:48
  • 1
    @SilviuBurcea Unless it's tail-recursion, which really *is* just a loop and if supported, avoids the stack even in the recursive implementation. Using recursion comes naturally when all you're working with are trees - which is often the case in functional languages. The lists are usually recursive, the functions *themselves* are functions of functions of functions of ... It's no accident that functional programming likes recursion - it's an integral part of the design (and the inherent "Math-ness" of functional algebras; describing imperative code is a bit trickier). – Luaan Nov 24 '15 at 09:25
  • @StevenBurnap I'm pretty sure some people from the functional world disagree. – theDmi Nov 24 '15 at 16:17
  • @StevenBurnap You also do compare each character in a recursive version. Performance is the same with tail-recursion. And it depends on the language what is more readable. In F# an iterative approach is harder to write and harder to read compared to a recursive version. That should be the case for most functional languages. In imperative languages it is likely the other way around. – David Raab Nov 24 '15 at 22:09
  • 2
    @RBarryYoung As a single counterexample I can give you Quicksort. It's neither a tree nor a graph problem, and the recursive implementation is way more intuitive than the iterative one. And of course many many problems can be expressed as traversing graphs, so it is not something rare as you make it sound. – Honza Brabec Nov 27 '15 at 07:58
  • @AProgrammer By far the easiest way to teach recursion is to draw a Koch snowflake. That is something that people intuitively see as recursive and would be really difficult to write with loops compared to recursion. Even in imperative languages. An added bonus is that you can immediately see whether you succeeded or not. – biziclop Nov 27 '15 at 10:24
  • @HonzaBrabec Quicksort is clearly a tree problem where the binary tree you're traversing is constructed during traversal. – Veedrac Nov 29 '15 at 03:53
  • 1
    @Veedrac Of course, everything is a tree problem since you can say that the call stack is traversing a tree. – Honza Brabec Nov 29 '15 at 08:21
  • @HonzaBrabec Everything a branching call stack is useful for, yes. Hence the claim that tree traversal is the key thing that recursion does. Non-tree problems (including degenerate trees like arrays and matrices) have no favouritism towards recursion. – Veedrac Nov 29 '15 at 08:29
  • @Veedrac Sorting an array is definitely an array problem. Yes, you can divide and conquer and see it as a tree but that was kind of the point that trees and graphs are involved almost everywhere and not just in problems that are explicitly called graph problems. – Honza Brabec Nov 29 '15 at 08:39
  • @HonzaBrabec "trees and graphs are involved almost everywhere and not just in problems that are explicitly called graph problems" → But perhaps less than you might think. Surprisingly few problems need anything more than a few levels of linear traversal with appropriate supporting data structures. – Veedrac Nov 29 '15 at 08:42
  • @HonzaBrabec "Sorting an array is definitely an array problem." → But sorting an array by constructing a binary search tree is certainly a tree problem. Something like insertion sort is not. – Veedrac Nov 29 '15 at 08:43
80

No.

Getting down to the bare minimum needed in order to compute, you just need to be able to loop (this alone isn't sufficient, but it is a necessary component). It doesn't matter how.

Any programming language that can implement a Turing machine is called Turing complete. And there are lots of languages that are Turing complete.

My favorite language from the depths of "that actually works?" Turing completeness is FRACTRAN. It has a single loop structure, and you can implement a Turing machine in it. Thus, anything that is computable can be implemented in a language that doesn't have recursion. Therefore, there is nothing that recursion can give you in terms of computability that simple looping cannot.
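To make that concrete, here is a minimal FRACTRAN interpreter sketched in Python (not part of the original answer; the step limit is an arbitrary safety guard). Note that the interpreter's only control structure is a loop:

```python
from fractions import Fraction

def run_fractran(program, n, max_steps=10_000):
    """Run a FRACTRAN program: repeatedly multiply n by the first
    fraction in the program that yields an integer; halt when none does."""
    for _ in range(max_steps):
        for f in program:
            if (n * f).denominator == 1:
                n = int(n * f)
                break
        else:
            return n  # no fraction applied: the program halts
    raise RuntimeError("step limit exceeded")

# The classic one-instruction adder: starting from 2**a * 3**b,
# the program [3/2] halts at 3**(a + b).
adder = [Fraction(3, 2)]
```

For example, `run_fractran(adder, 2**3 * 3**4)` returns `3**7`: the sum 3 + 4 = 7 encoded in the exponent.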

This really boils down to a few points:

  • Anything that is computable is computable on a Turing machine
  • Any language that can implement a Turing machine (i.e., any Turing-complete language) can compute anything that any other such language can
  • Since there are Turing machines implemented in languages that lack recursion (and there are esolangs that have only recursion and no loops), it is necessarily true that there is nothing you can do with recursion that you cannot do with a loop (and nothing you can do with a loop that you can't do with recursion).

This isn't to say that there aren't problem classes that are more easily thought of in terms of recursion than looping, or in terms of looping than recursion. However, these two tools are equally powerful.

And while I took this to the 'esolang' extreme (mostly because you can find things that are Turing complete and implemented in rather strange ways), this doesn't mean that esolangs are the only examples. There is a whole list of things that are accidentally Turing complete, including Magic: The Gathering, Sendmail, MediaWiki templates, and the Scala type system. Many of these are far from optimal when it comes to actually doing anything practical; it's just that you can compute anything that is computable using these tools.


This equivalence can get particularly interesting when you get into a particular type of recursion known as a tail call.

If you have, let's say, a factorial method written as:

int fact(int n) {
    return fact(n, 1);
}

int fact(int n, int accum) {
    if(n == 0) { return 1; }
    if(n == 1) { return accum; }
    return fact(n-1, n * accum);
}

This type of recursion will be rewritten as a loop by a compiler that performs tail-call optimization - no stack used. Such approaches are indeed often more elegant and easier to understand than the equivalent loop, but again, for every recursive call an equivalent loop can be written, and for every loop an equivalent recursive call can be written.
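The transformation can be sketched in Python (illustrative only - CPython itself does not perform tail-call elimination, so this shows what a compiler that does would effectively produce):

```python
def fact_tail(n, accum=1):
    """Accumulator-style factorial: the recursive call is in tail position."""
    if n <= 1:
        return accum
    return fact_tail(n - 1, n * accum)

def fact_loop(n):
    """What tail-call elimination effectively turns fact_tail into:
    the parameters become loop variables, and no stack frames pile up."""
    accum = 1
    while n > 1:
        accum *= n
        n -= 1
    return accum
```

The loop body is just the recursive call with the parameter updates made explicit, and the base case becomes the loop's exit condition.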

There are also times when converting a simple loop into a tail-recursive call can be convoluted and more difficult to understand.


If you want to get into the theory side of it, see the Church-Turing thesis. You may also find the church-turing-thesis tag on CS.SE to be useful.

  • 30
    Turing completeness is thrown around too much like it matters. A lot of things are Turing Complete ([like Magic the Gathering](http://beza1e1.tuxen.de/articles/accidentally_turing_complete.html)), but that doesn't mean it's the same as something else that's Turing Complete. At least not at a level that matters. I don't want to walk a tree with Magic the Gathering. – Scant Roger Nov 22 '15 at 06:03
  • 7
    Once you can reduce a problem to "this has equal power to a Turing machine" it is enough to get it there. Turing machines are a rather low hurdle, but it's all that is needed. There is nothing a loop can do that recursion cannot do, nor vice versa. –  Nov 22 '15 at 06:06
  • 4
    The statement made in this answer is of course correct, but I dare to say that the argument is not really convincing. Turing machines have no direct concept of recursion so saying “you can simulate a Turing machine without recursion” doesn't really prove anything. What you'd have to show in order to prove the statement is that Turing machines can simulate recursion. If you don't show this, you have to faithfully assume that the Church-Turing hypothesis also holds for recursion (which it does) but the OP questioned this. – 5gon12eder Nov 22 '15 at 06:25
  • FRACTRAN++ is coming and it's going to take over the world so get ready!! http://esolangs.org/wiki/Fractran_plus_plus – John Doe Nov 22 '15 at 14:14
  • 10
    The OP's question is "can", not "best", or "most efficiently" or some other qualifier. "Turing Complete" means anything that can be done with recursion can also be done with a loop. Whether that is the best way to do it in any particular language implementation is an entirely different question. – Gort the Robot Nov 22 '15 at 17:00
  • @StevenBurnap Sometimes "can" is the same thing as "best". For example, in some cases, looping "can't" be expressive. If the task is to represent a computation elegantly, looping cannot. Take my silly example, the question probably would have been closed if instead of "recursion" the OP would have said "Magic the Gathering", but if your argument is correct, that's a completely legit question. – Scant Roger Nov 22 '15 at 17:07
  • 7
    "Can" is very much NOT the same thing as "best". When you mistake "not best" for "can't", you become paralyzed because no matter what way you do something, there's nearly always a better way. – Gort the Robot Nov 22 '15 at 18:31
  • @StevenBurnap Agreed that "can" do X is not the same as "best" X, unless your "can" is "can be the best" X. – Scant Roger Nov 22 '15 at 19:10
  • @ScantRoger: you wouldn’t [walk a tree with MtG](http://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=4767)? – Peter LeFanu Lumsdaine Nov 22 '15 at 20:37
  • @ScantRoger: Agreeing with you totally, although I tend to prefer [BF](https://en.wikipedia.org/wiki/Brainfuck) as an example for "Turing complete, but would you *really* want to do things this way?". – DevSolar Nov 23 '15 at 13:21
  • fact(0) should return 1, the factorial of zero is one. – Christoffer Hammarström Nov 23 '15 at 13:38
  • @DevSolar I prefer FRACTRAN because of the use of [Church numerals](https://en.wikipedia.org/wiki/Church_encoding) and the implications that has on computation... and that I have an answer about how it is Turing complete. You can always [convert FRACTRAN to BF](http://codegolf.stackexchange.com/questions/5210/convert-fractran-into-brainfuck) if you want. –  Nov 23 '15 at 14:24
  • It doesn't even have to be a theoretical Turing machine. Your compiler translates _everything_ into a loop. It has to, because your CPU knows nothing about recursion. – 200_success Nov 23 '15 at 17:56
  • 2
    @200_success “CPU knows nothing about recursion” – nor does it know about loops. The compiler doesn't translate recursion to loops, rather it translates either to conditional jumps. Yup, good old `GOTO` is still closest to “the real thing”... (Unless you're programming in Java; I understand it that the JVM knows loops. But that just means there's one more intermediate translation.) – leftaroundabout Nov 23 '15 at 21:33
  • 1
    @leftaroundabout Arguably, the CPU is an infinite run loop (until a HALT instruction is reached). – 200_success Nov 23 '15 at 21:35
  • 1
    @200_success That is indeed arguable, but I don't see how it's relevant anyway. It's a bit like saying a rat knows electronics because its brain and nerves work with electricity. That doesn't mean you need to translate the instructions to a circuit schematic if you want to train a rat to pull a lever. – leftaroundabout Nov 23 '15 at 21:53
  • 1
    This answer is wrong. Loops alone are not enough to make a language Turing-complete: you also need a way to manage an unbounded amount of memory. (If a bound on the amount of memory is known a priori, you only get the power of finite automata. If a bound on the amount of memory can be calculated from the size of the input [warning: I'm simplifying somewhat], you only get primitive recursive functions.) – Gilles 'SO- stop being evil' Nov 24 '15 at 22:05
  • @Gilles Being able to loop is a necessary part of turing complete. I did indeed leave out the "being able to access arbitrary data". The point really is that *looping*, however it is done, is sufficient for making a turing machine, and thus recursion. The memory aspect is one I intentionally left out as not relevant to this question (and thus answer). –  Nov 24 '15 at 22:09
32

Are there any cases where a task can only be done using recursion, rather than a loop?

You can always turn a recursive algorithm into a loop which uses a last-in-first-out data structure (AKA a stack) to store temporary state, because a recursive call is exactly that: storing the current state on a stack, proceeding with the algorithm, then later restoring the state. So the short answer is: no, there are no such cases.

However, an argument can be made for "yes". Let's take a concrete, easy example: merge sort. You need to divide the data in two parts, merge sort the parts, and then combine them. Even if you don't make an actual programming-language function call to merge sort the parts, you need to implement functionality which is identical to actually making a function call (push state onto your own stack, jump to the start of the loop with different starting parameters, then later pop the state from your stack).

Is it recursion if you implement the recursive call yourself, as separate "push state", "jump to beginning" and "pop state" steps? The answer: no, it still isn't called recursion; it is called iteration with an explicit stack (if you want to use established terminology).
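As an illustration of those "push state / jump / pop state" steps written out by hand, here is a merge sort sketch in Python with no language-level recursion (the frame encoding is one of several possible choices, not the only way to do it):

```python
def merge(left, right):
    """Standard merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort_iterative(xs):
    """Merge sort with the 'call frames' managed by hand on an explicit stack."""
    calls = [("sort", xs)]   # pending work, playing the role of a call stack
    results = []             # completed sub-results, playing the role of return values
    while calls:
        op, arg = calls.pop()
        if op == "sort":
            if len(arg) <= 1:
                results.append(arg)
            else:
                mid = len(arg) // 2
                # Schedule: sort both halves, then merge their results.
                calls.append(("merge", None))
                calls.append(("sort", arg[mid:]))
                calls.append(("sort", arg[:mid]))
        else:  # "merge": combine the two most recent sub-results
            right = results.pop()
            left = results.pop()
            results.append(merge(left, right))
    return results.pop()
```

The `calls` stack does exactly what the runtime's call stack would do in the recursive version; only the bookkeeping has moved into user code.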


Note that this also depends on the definition of "task". If the task is to sort, then you can do it with many algorithms, many of which don't need any kind of recursion. If the task is to implement a specific algorithm, like merge sort, then the above ambiguity applies.

So let's consider the question: are there general tasks for which only recursion-like algorithms exist? From the comment of @WizardOfMenlo under the question, the Ackermann function is a simple example. So the concept of recursion stands on its own, even if it can be implemented with a different construct (iteration with an explicit stack).

hyde
  • 2
    When dealing with an assembly for a stackless processor, these two techniques suddenly become one. – Joshua Nov 24 '15 at 18:59
  • @Joshua Indeed! It is a matter of level of abstraction. If you go a level or two lower, it's just logic gates. – hyde Nov 24 '15 at 20:51
  • 2
    That's not quite correct. To emulate recursion with iteration, you need a stack where random access is possible. A single stack without random access plus a finite amount of directly-accessible memory would be a PDA, which is not Turing-complete. – Gilles 'SO- stop being evil' Nov 24 '15 at 22:06
  • @Gilles Old post, but why is a random-access stack needed? Also, aren't all real computers then even less than PDAs, as they have only a finite amount of directly accessible memory, and no stack at all (except by using that memory)? This doesn't seem like a very practical abstraction, if it says "we can't do recursion in reality". – hyde Apr 05 '19 at 08:01
21

It depends on how strictly you define "recursion".

If we strictly require it to involve the call-stack (or whatever mechanism for maintaining program state is used), then we can always replace it with something that doesn't. Indeed, languages that lead naturally to heavy use of recursion tend to have compilers that make heavy use of tail-call optimisation, so what you write is recursive but what you run is iterative.

But let's consider a case where we make a recursive call and use the result of one recursive call as an argument to another recursive call.

public static BigInteger Ackermann(BigInteger m, BigInteger n)
{
  if (m == 0)
    return  n+1;
  if (n == 0)
    return Ackermann(m - 1, 1);
  else
    return Ackermann(m - 1, Ackermann(m, n - 1));
}

Making the first recursive call iterative is easy:

public static BigInteger Ackermann(BigInteger m, BigInteger n)
{
restart:
  if (m == 0)
    return  n+1;
  if (n == 0)
  {
    m--;
    n = 1;
    goto restart;
  }
  else
    return Ackermann(m - 1, Ackermann(m, n - 1));
}

We can then clean up, removing the goto to ward off velociraptors and the shade of Dijkstra:

public static BigInteger Ackermann(BigInteger m, BigInteger n)
{
  while(m != 0)
  {
    if (n == 0)
    {
      m--;
      n = 1;
    }
    else
      return Ackermann(m - 1, Ackermann(m, n - 1));
  }
  return  n+1;
}

But to remove the other recursive calls we're going to have to store the values of some calls into a stack:

public static BigInteger Ackermann(BigInteger m, BigInteger n)
{
  Stack<BigInteger> stack = new Stack<BigInteger>();
  stack.Push(m);
  while(stack.Count != 0)
  {
    m = stack.Pop();
    if(m == 0)
      n = n + 1;
    else if(n == 0)
    {
      stack.Push(m - 1);
      n = 1;
    }
    else
    {
      stack.Push(m - 1);
      stack.Push(m);
      --n;
    }
  }
  return n;
}

Now, when we consider the source code, we have certainly turned our recursive method into an iterative one.

Considering what this has been compiled to, we have turned code that uses the call stack to implement recursion into code that does not (and in doing so turned code that will throw a stack-overflow exception for even quite small values into code that will merely take an excruciatingly long time to return [see How can I prevent my Ackerman function from overflowing the stack? for some further optimisations that make it actually return for many more possible inputs]).

Considering how recursion is implemented generally, we have turned code that uses the call-stack into code that uses a different stack to hold pending operations. We could therefore argue that it is still recursive, when considered at that low level.

And at that level, there are indeed no other ways around it. So if you do consider that method to be recursive, then there are indeed things we cannot do without it. Generally though we do not label such code recursive. The term recursion is useful because it covers a certain set of approaches and gives us a way to talk about them, and we are no longer using one of them.
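As a sanity check (a hypothetical Python transcription, not part of the original answer), the stack-based version above can be compared against a direct transcription of the recursive definition for small inputs:

```python
def ackermann_recursive(m, n):
    # Direct transcription of the recursive definition.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann_recursive(m - 1, 1)
    return ackermann_recursive(m - 1, ackermann_recursive(m, n - 1))

def ackermann_iterative(m, n):
    # Explicit stack of pending m-values, mirroring the C# version above.
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1
        elif n == 0:
            stack.append(m - 1)
            n = 1
        else:
            stack.append(m - 1)
            stack.append(m)
            n -= 1
    return n

# The two versions agree on small inputs:
for m in range(3):
    for n in range(4):
        assert ackermann_recursive(m, n) == ackermann_iterative(m, n)
```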

Of course, all of this assumes you have a choice. There are both languages that prohibit recursive calls, and languages that lack the looping structures necessary for iterating.

Jon Hanna
  • 2,115
  • 12
  • 15
  • It's only possible to replace the call stack with something equivalent if either the call stack is bounded or one has access to an unbounded memory outside the call stack. There is a significant class of problems which are solvable by push-down automata which have an unlimited call stack but can only have a finite number of states otherwise. – supercat Nov 24 '15 at 00:10
  • This is the best answer, perhaps the only correct answer. Even the second example is still recursive, and at this level, the answer to the original question is *no*. With a broader definition of recursion, recursion for the Ackermann function is impossible to avoid. – gerrit Nov 24 '15 at 10:26
  • @gerrit and with a narrower, it does avoid it. Ultimately it comes down to the edges of just what we do or do not apply this useful label we use for certain code to. – Jon Hanna Nov 24 '15 at 10:34
  • 1
    Joined the site to up vote this. The Ackermann function /is/ recursive in nature. Implementing a recursive structure with a loop and a stack does not make it an iterative solution, you've just moved the recursion into userspace. – Aaron McMillin Nov 25 '15 at 16:22
9

The classical answer is "no", but allow me to elaborate on why I think "yes" is a better answer.


Before going on, let's get something out of the way: from a computability & complexity standpoint:

  • The answer is "no" if you are permitted to have an auxiliary stack when looping.
  • The answer is "yes" if you are not permitted any extra data when looping.

Okay, now, let's put one foot in practice-land, keeping the other foot in theory-land.


The call stack is a control structure, whereas a manual stack is a data structure. Control and data are not equal concepts, but they're equivalent in the sense that they can be reduced to each other (or "emulated" via one another) from a computability or complexity standpoint.

When might this distinction matter? When you're working with real-world tools. Here's an example:

Say you're implementing N-way mergesort. You might have a for loop that goes through each of the N segments, calls mergesort on them separately, then merges the results.
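In code terms (a hypothetical Python sketch, not part of the original answer; the segment count and helper names are illustrative), that structure looks like this:

```python
import heapq

def nway_mergesort(data, n_ways=4):
    """Sort by splitting into N segments, recursing on each, then merging."""
    if len(data) <= 1:
        return list(data)
    # Split into N roughly equal segments (ceiling division for the size).
    size = max(1, -(-len(data) // n_ways))
    segments = [data[i:i + size] for i in range(0, len(data), size)]
    # This loop over independent segments is the natural place to
    # parallelise (the analogue of OpenMP's "parallel for" in C/C++).
    sorted_segments = [nway_mergesort(seg, n_ways) for seg in segments]
    # Merge the N sorted runs.
    return list(heapq.merge(*sorted_segments))
```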

How might you parallelize this with OpenMP?

In the recursive realm, it's extremely simple: just put #pragma omp parallel for around your loop that goes from 1 to N, and you're done. In the iterative realm, you can't do this. You have to spawn threads manually and pass them the appropriate data manually so that they know what to do.

On the other hand, there are other tools (such as automatic vectorizers, e.g. #pragma vector) that work with loops but are utterly useless with recursion.

Point being, just because you can prove the two paradigms are equivalent mathematically, that doesn't mean they are equal in practice. A problem that might be trivial to automate in one paradigm (say, loop parallelization) might be much more difficult to solve in the other paradigm.

i.e.: Tools for one paradigm do not automatically translate to other paradigms.

Consequently, if you require a tool to solve a problem, chances are that the tool will only work with one particular kind of approach, and consequently you will fail to solve the problem with a different approach, even if you can mathematically prove the problem can be solved either way.

user541686
  • 8,074
  • 8
  • 38
  • 49
  • Even beyond that, consider that the set of problems that can be solved with a push-down automaton is larger than the set which can be solved with a finite automaton (whether deterministic or non) but smaller than the set which can be solved with a Turing machine. – supercat Nov 24 '15 at 23:30
8

Setting aside theoretical reasoning, let's have a look at what recursion and loops look like from a (hardware or virtual) machine's point of view. Recursion is a combination of control flow that allows execution of some code to be started and, on completion, returned from (in a simplistic view that ignores signals and exceptions), and of data that is passed to that other code (arguments) and returned from it (result). Usually no explicit memory management is involved; however, there is implicit allocation of stack memory to save return addresses, arguments, results, and intermediate local data.

A loop is a combination of control flow and local data. Comparing this to recursion we can see that the amount of data in this case is fixed. The only way to break this limitation is to use dynamic memory (also known as heap) that can be allocated (and freed) whenever needed.

To summarize:

  • Recursion case = Control flow + Stack (+ Heap)
  • Loop case = Control flow + Heap

Assuming that the control-flow part is reasonably powerful, the only difference is in the available memory types. So we are left with 4 cases (relative expressive power is listed in parentheses):

  1. No stack, no heap: recursion and dynamic structures are impossible. (recursion = loop)
  2. Stack, no heap: recursion is OK, dynamic structures are impossible. (recursion > loop)
  3. No stack, heap: recursion is impossible, dynamic structures are OK. (recursion = loop)
  4. Stack, heap: recursion and dynamic structures are OK. (recursion = loop)

If rules of the game are a bit stricter and recursive implementation is disallowed to use loops, we get this instead:

  1. No stack, no heap: recursion and dynamic structures are impossible. (recursion < loop)
  2. Stack, no heap: recursion is OK, dynamic structures are impossible. (recursion > loop)
  3. No stack, heap: recursion is impossible, dynamic structures are OK. (recursion < loop)
  4. Stack, heap: recursion and dynamic structures are OK. (recursion = loop)

The key difference from the previous scenario is that, without loops, a lack of stack memory prevents the program from performing more execution steps than it has lines of code.
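Case 3 above can be illustrated with a short sketch (hypothetical Python, not part of the original answer): with heap-allocated storage but no call stack, a loop can still process a dynamic structure by keeping pending work in an explicit, dynamically allocated worklist.

```python
def tree_sum(root):
    """Sum the 'value' fields of a nested-dict tree without recursion."""
    total = 0
    pending = [root]          # heap-allocated worklist instead of a call stack
    while pending:
        node = pending.pop()
        total += node["value"]
        pending.extend(node.get("children", []))
    return total

tree = {"value": 1,
        "children": [{"value": 2},
                     {"value": 3, "children": [{"value": 4}]}]}
# tree_sum(tree) == 10
```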

2

Yes. There are several common tasks that are easy to accomplish using recursion but impossible with just loops:

  • Causing stack overflows.
  • Totally confusing beginner programmers.
  • Creating fast-looking functions that are actually O(n^n).
jpa
  • 1,368
  • 7
  • 11
  • 3
    Please, these are really easy with loops, I see them all the time. Heck, with a bit of effort you don't even need the loops. Even if recursion is easier. – AviD Nov 24 '15 at 10:10
  • 1
    actually, A(0,n)=n+1; A(m,0)=A(m-1,1) if m>0; A(m,n) = A(m-1,A(m,n-1)) if m>0,n>0 grows even a bit faster than O(n^n) (for m=n) :) – John Donn Nov 24 '15 at 11:31
  • 1
    @JohnDonn More than a bit, it's super exponential. for n=3 n^n^n for n=4 n^n^n^n^n and so on. n to the n power n times. – Aaron McMillin Nov 25 '15 at 16:25
1

There's a difference between recursive functions and primitive recursive functions. Primitive recursive functions are those that are calculated using loops, where the maximum iteration count of each loop is calculated before the loop execution starts. (And "recursive" here has nothing to do with the use of recursion).

Primitive recursive functions are strictly less powerful than recursive functions. You would get the same result if you took functions that use recursion, where the maximum depth of the recursion has to be calculated beforehand.
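For contrast (a hypothetical Python sketch, not part of the original answer): a primitive recursive computation corresponds to a loop whose iteration count is fixed before the loop begins. The Ackermann function discussed elsewhere in this thread is the classic example of a computable function that is not primitive recursive.

```python
def factorial(n):
    """Factorial as a bounded loop, in the primitive recursive style."""
    result = 1
    for i in range(1, n + 1):   # iteration count is fixed before the loop runs
        result *= i
    return result
```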

gnasher729
  • 42,090
  • 4
  • 59
  • 119
  • 3
    I'm not sure how this applies to the question above? Can you please make that connection more clear? – Yakk Nov 23 '15 at 15:49
  • 1
    Replacing the imprecise "loop" with the important distinction between "loop with limited iteration count" and "loop with unlimited iteration count", which I thought everyone would know from CS 101. – gnasher729 Nov 29 '15 at 00:30
  • sure, but it still doesn't apply to the question. The question is about looping and recursion, not primitive recursion and recursion. Imagine if someone asked about C/C++ differences, and you answered about the difference between K&R C and Ansi C. Sure that makes things more precise, but it does not answer the question. – Yakk Nov 29 '15 at 01:40
1

If you are programming in C++ and use C++11, then there is one thing that has to be done using recursion: constexpr functions. But the standard recommends a recursion-depth limit of 512 for these, as explained in this answer. Using loops here is not possible, since a C++11 constexpr function cannot contain a loop; this changed in C++14.

BЈовић
  • 13,981
  • 8
  • 61
  • 81
0
  • If the recursive call is the very first or very last statement (excluding condition checking) of a recursive function, it is pretty easy to translate into a looping structure.
  • But if the function does other things before and after the recursive call, then converting it to loops becomes cumbersome.
  • If the function makes multiple recursive calls, then converting it to code that uses just loops is pretty much impossible without extra storage: some stack will be needed to keep track of the data. In recursion, the call stack itself serves as the data stack.
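The first bullet can be shown concretely (a hypothetical Python sketch, not part of the original answer): a recursive call in tail position translates mechanically into a loop by updating the parameters and jumping back to the top.

```python
def sum_to_recursive(n, acc=0):
    """Sum 1..n with a tail-recursive call."""
    if n == 0:
        return acc
    return sum_to_recursive(n - 1, acc + n)   # tail call: last statement

def sum_to_loop(n, acc=0):
    """Same function with the tail call rewritten as a loop."""
    while n != 0:
        n, acc = n - 1, acc + n               # same parameter update
    return acc
```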
Gulshan
  • 9,402
  • 10
  • 58
  • 89
  • Tree walking has multiple recursive calls (one for each child), yet it's trivially transformed into a loop using an explicit stack. Parsers on the other hand are often annoying to transform. – CodesInChaos Nov 27 '15 at 09:37
  • @CodesInChaos Edited. – Gulshan Nov 27 '15 at 09:47
-6

I agree with the other answers: there is nothing you can do with recursion that you can't do with a loop.

BUT, in my opinion, recursion can be very dangerous. First, for some it's more difficult to understand what is actually happening in the code. Second, at least in C++ (for Java I am not sure), each recursion step costs memory, because each method call pushes a new frame (return address, arguments, locals) onto the stack. This way you can blow up your stack. Simply try recursion of the Fibonacci numbers with a high input value.
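As a hypothetical Python illustration (not part of the original answer): the naive recursive Fibonacci makes an exponential number of calls, though its recursion depth is only n, which is why, as a commenter notes, it tends to run out of time before it runs out of stack.

```python
calls = 0

def fib(n):
    """Naive recursive Fibonacci, counting how many calls it makes."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(20)
# 'calls' is now 21891: exponentially many calls for a depth of only 20.
```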

Ben1980
  • 11
  • 2
  • 2
    A naive recursive implementation of Fibonacci numbers with recursion will run "out of time" before it runs out of stack space. I guess there are other problems which are better for this example. Also, for many problems a loop version has just the same memory impact as a recursive one, just on the heap instead of the stack (if your programming language distinguishes those). – Paŭlo Ebermann Nov 22 '15 at 10:27
  • 6
    Loop can also be "very dangerous" if you just forget to increment the loop variable ... – h22 Nov 22 '15 at 12:05
  • 2
    So, indeed, deliberately producing a stack overflow is a task that becomes very tricky without using recursion. – 5gon12eder Nov 22 '15 at 20:16
  • @5gon12eder which brings us to [What methods are there to avoid a stack overflow in a recursive algorithm?](http://programmers.stackexchange.com/q/194646/40980) - writing to engage TCO, or Memoisation can be useful. [Iterative vs. Recursive Approaches](http://www.codeproject.com/Articles/21194/Iterative-vs-Recursive-Approaches) is also interesting as it deals with two different recursive approaches for Fibonacci. –  Nov 22 '15 at 20:29
  • 1
    Most of the time if you do get a stack overflow on recursion, you would have had a hang on the iterative version. At least the former throws with a stack trace. – Jon Hanna Nov 23 '15 at 13:17
  • @JonHanna That depends on how stack-happy your functions are. With a shallow 1 MB stack, it only takes 1000 recursions with 1kb/recursion to blow it. And if you start the recursion at half-way up the stack, now you only need 500 depth. I, personally, have modified an algorithm from recursive to iterative in order to avoid a stack-blowout in a complex situation. (it took a few seconds in the iterative version, but it wasn't completely horrible) – Yakk Nov 23 '15 at 15:48
  • @Yakk why I say "Most of the time". The example in my answer does similarly go from something that will quickly blow a stack to something that will work correctly. The "most of the time" of course depends on just how often one writes recursively at all, but considering that the question is between the choice of iterative and recursive I'm assuming a language were both are commonly done, rather than a functional language where one makes very heavy use of recursion and then depend upon tail-call elimination to make it work. In languages where one can iterate, one generally will for large loops. – Jon Hanna Nov 23 '15 at 15:53
  • @JonHanna Parsers/deserializers (e.g. json) are probably the best example. They often process untrusted data where pathological configurations can easily blow the stack, while a loop would terminate within a reasonable time. Tail recursion doesn't help and rewriting them to a loop+explicit stack is tricky, especially if you value performance. – CodesInChaos Nov 27 '15 at 12:16