
Naive sorts like Bubble Sort and Insertion Sort are inefficient, so we use more efficient algorithms such as Quicksort and Merge Sort instead. But these two sorts are recursive in nature, and recursion takes up much more stack memory than the iteration used in the naive sorts, unless it is implemented as a tail call. How, then, can Quicksort and Merge Sort be more efficient than the naive sorts?

(People have identified this question as a duplicate of this question. But that is not quite what I asked. The linked question asks about Big O notation in general, whereas I am specifically concerned about the space complexity of recursive sorts. Please understand the difference.)

  • If the language does not support *Tail Call Optimization*, then yes – recursion takes up stack memory. But *TCO* has been around since [1977](https://en.wikipedia.org/wiki/Tail_call#History), so it should not be a problem. – rsm Nov 24 '19 at 19:34
  • You don’t need tail recursion for Quicksort. Stack usage = 2 log n integers (see the sketch after these comments). – gnasher729 Nov 25 '19 at 11:14
  • @rsm It's not that easy. TCO isn't possible when there is memory to be cleaned up at the end of a function call, e.g. when reference counting is used (`std::shared_ptr` in C++, `Rc`/`Arc` in Rust, references in Swift and Python, etc.) – Alexander Nov 26 '19 at 20:46
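
To make the "2 log n integers" comment concrete, here is a minimal illustrative sketch (not from the thread; the names `quicksort` and `partition` are just placeholders): a Quicksort that always recurses into the smaller partition and loops over the larger one. Each recursive call covers at most half of the current range, so the recursion depth is bounded by roughly log₂ n, i.e. about log n frames of two interval bounds each, even without tail call optimization.

```python
# Illustrative sketch: Quicksort that recurses only into the smaller partition
# and handles the larger one in the surrounding loop.  Each recursive call is
# on at most half of the current range, so the recursion depth (and hence the
# stack usage) is bounded by about log2(n), regardless of pivot quality.

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort(a, lo, p - 1)   # smaller left part: bounded recursion
            lo = p + 1                # larger right part: handled by the loop
        else:
            quicksort(a, p + 1, hi)   # smaller right part: bounded recursion
            hi = p - 1                # larger left part: handled by the loop

def partition(a, lo, hi):             # plain Lomuto partition, pivot = a[hi]
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i
```

A bad pivot still hurts the running time (up to O(n²) comparisons), but the stack stays logarithmic, which is the point of the comment above.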

1 Answer


What you asked is:

"How can Quicksort and Merge Sort be more efficient in terms of memory usage than the naive sorts?"

And the answer is: they aren't.

  • Complex sort algorithms like Quicksort and Mergesort are more efficient than Bubble Sort and Insertion Sort with regard to their running time, not with regard to memory usage.

  • Of course, the more complex algorithms are not particularly less efficient than the simple ones in memory usage, either. At least not asymptotically, since all of them require O(N) memory, where N is the number of elements to be sorted. So don't think those algorithms are generally faster because they use more memory. They are generally faster because they use more sophisticated ideas, which (asymptotically) leads to fewer comparisons and fewer movements of the sorted elements.

  • Any recursive algorithm can be converted into an iterative one. When done correctly, this is a common way to micro-optimize the speed of algorithms like Quicksort and Mergesort (when done blindly, it can also slow things down); a sketch of this transformation follows below. This does not change their asymptotic behaviour, however.
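
As an illustration of the last point, here is a hedged sketch (not part of the original answer) of the recursion-to-iteration transformation for Quicksort: the implicit call stack is replaced by an explicit list of (lo, hi) ranges. The asymptotic behaviour is unchanged; the transformation only trades call-frame overhead for the cost of managing the list. The `partition` helper is the usual Lomuto scheme, included only so the sketch is self-contained.

```python
# Illustrative sketch of the recursion-to-iteration transformation for Quicksort.
# The implicit call stack becomes an explicit list of (lo, hi) ranges; the
# asymptotic time and space behaviour is the same as in the recursive version.

def quicksort_iterative(a):
    stack = [(0, len(a) - 1)]             # explicit stack of pending ranges
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue                      # ranges of length 0 or 1 are already sorted
        p = partition(a, lo, hi)
        # Push the larger sub-range first so the smaller one is processed next;
        # this keeps the explicit stack at O(log n) entries, mirroring the
        # "recurse into the smaller half" trick of the recursive version.
        if p - lo < hi - p:
            stack.append((p + 1, hi))
            stack.append((lo, p - 1))
        else:
            stack.append((lo, p - 1))
            stack.append((p + 1, hi))

def partition(a, lo, hi):                 # plain Lomuto partition, pivot = a[hi]
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i
```

Whether this is actually faster than the recursive version depends on the language and compiler, which is exactly the "when done blindly, it can also slow things down" caveat above.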

Doc Brown
  • Merge-sort can be done iteratively without needing any extra memory, as long as the sequence is efficiently random-accessible (a sketch of that bottom-up variant follows after these comments). The standard "convert recursion to iteration" transformation would use an explicit stack instead of an implicit one and is thus much less efficient. – Deduplicator Nov 24 '19 at 06:14
  • @Deduplicator: memory != extra memory. Correct me if I am wrong, but asymptotically, all of those algorithms require O(n) space, where n is the number of elements to sort. – Doc Brown Nov 24 '19 at 06:22
  • ... and I don't know which "standard conversion to iteration" you have in mind, but I am speaking of one where the "explicit stack" is used exactly the way the otherwise implicit stack would be, so I fail to see where this could become "much less efficient". – Doc Brown Nov 24 '19 at 06:33
  • Iterative merge-sort does not use any stack, not even an explicit one. And by extra memory I mean anything besides the sequence to be sorted. – Deduplicator Nov 24 '19 at 06:49
  • @Deduplicator: applying a standard "convert recursion to iteration" transformation to an already iterative algorithm like "iterative Mergesort" does not really make sense, don't you agree? – Doc Brown Nov 24 '19 at 07:43
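
For readers following this thread: the "iterative merge-sort" referred to above is usually the bottom-up variant, sketched below (an illustrative sketch, not code from either commenter). It merges runs of width 1, 2, 4, … and uses no stack at all, neither the call stack nor an explicit one. Note, however, that this common form still allocates an O(n) merge buffer; variants that avoid the buffer via in-place merging exist but are considerably more involved.

```python
# Illustrative sketch of bottom-up (iterative) merge sort: merge adjacent runs
# of width 1, 2, 4, ... until the whole list is one sorted run.  No stack is
# used, neither the call stack nor an explicit one, but this common form still
# needs an O(n) auxiliary buffer for merging.

def mergesort_bottom_up(a):
    n = len(a)
    buf = [None] * n                      # O(n) merge buffer
    width = 1
    while width < n:
        # One pass: merge each pair of adjacent runs of length `width`.
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j, k = lo, mid, lo
            while i < mid and j < hi:
                if a[i] <= a[j]:
                    buf[k] = a[i]; i += 1
                else:
                    buf[k] = a[j]; j += 1
                k += 1
            while i < mid:
                buf[k] = a[i]; i += 1; k += 1
            while j < hi:
                buf[k] = a[j]; j += 1; k += 1
        a[:] = buf                        # copy this pass's result back into the list
        width *= 2
```

Counting the sequence itself, the total space is O(n) either way, which is consistent with the answer's point.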