
Suppose I have a function CalculateOutput(n) which creates an array of size n and repeatedly modifies this array by iterating through every element from 0 to n - 1 (say each pass over the array takes linear time). When the array reaches a particular order, the number of times CalculateOutput has walked the array is returned. The thing is that as n increases the output does not necessarily increase (e.g. CalculateOutput(4) = 5 while CalculateOutput(5) = 2). How could I determine the time complexity of this algorithm? Or what other information would I need to be able to determine the running time?

I believe that if there were some other method to determine the number of iterations over the array (call it m) for a given n, then CalculateOutput would be O(m * n). But I only know what m is by running the algorithm described above.
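For concreteness, here is a minimal sketch of what such a function might look like, assuming a bubble-sort-style reordering (the actual algorithm isn't shown, and how the initial array contents depend on n isn't specified, so this sketch takes the array directly and is purely hypothetical):

```python
# Hypothetical stand-in for CalculateOutput: each iteration of the outer
# loop is one O(n) walk of the array; the walk count is returned once the
# array reaches the target (here: sorted) order.
def calculate_output(arr):
    passes = 0
    swapped = True
    while swapped:
        swapped = False
        passes += 1
        for i in range(len(arr) - 1):  # one linear walk, indices 0 .. n-2
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
    return passes
```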

restin84
  • You seem to be conflating run time with time complexity. Also, the actual output doesn't have much to do with the time complexity of an algorithm. You may want to post the actual algorithm. You can also read the question [What is O(…) and how do I calculate it?](https://softwareengineering.stackexchange.com/questions/132331/what-is-o-and-how-do-i-calculate-it?rq=1) – Vincent Savard Jan 22 '20 at 20:19
  • @VincentSavard the algorithm repeatedly reorders an array of size n by walking it from 0 to n - 1. When it is back to its original order, the number of times the array was walked is returned. – restin84 Jan 22 '20 at 20:34
  • Bubble sort is one such algorithm. – trent Jan 22 '20 at 20:40
  • Possible duplicate of [What is O(...) and how do I calculate it?](https://softwareengineering.stackexchange.com/questions/132331/what-is-o-and-how-do-i-calculate-it) – Doc Brown Jan 22 '20 at 20:42
  • @trentcl your comment is helpful. I would upvote it but I don't have enough rep yet. – restin84 Jan 22 '20 at 21:10

1 Answer


This is why we talk about algorithm complexity in terms of bounds and cases. That provides a reasonable way to compare two algorithms even if it is very difficult to pin down exact run times.

In the best case, you might only have to traverse your array once, giving you O(n). In the worst case, you might have to traverse your array n times, giving you O(n²). If you have a uniform distribution of possibilities, your average case might result in traversing your array n/2 times, giving you O(n · n/2), which simplifies after constant removal to O(n²).
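To make the extremes concrete, here is what they look like with the hypothetical bubble-sort-style calculate_output sketched in the question:

```python
# Pass counts at the two extremes for the hypothetical sketch above.
print(calculate_output([1, 2, 3, 4, 5]))  # already ordered: 1 pass -> O(n) total work
print(calculate_output([5, 4, 3, 2, 1]))  # reversed: n passes      -> O(n^2) total work
```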

If you are forced to give one result, you'd give the upper bound of your worst case, which is O(n²). If your real-world data is almost always closer to the best case, this algorithm becomes a lot more attractive.

Note that just counting the required passes can sometimes be done more efficiently than actually performing the passes. A similar concept is finding the Kendall tau distance.
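As a sketch of that idea: the Kendall tau distance between an array and its sorted order is its inversion count, which a merge-sort variant finds in O(n log n) without simulating any passes. Whether this matches the quantity your actual algorithm counts depends on the algorithm (for bubble sort it equals the number of swaps, not the number of passes), so take it only as an illustration of the concept:

```python
# Illustrative merge-sort-based inversion counter: the number of inversions
# equals the Kendall tau distance from sorted order, computed in O(n log n).
def count_inversions(arr):
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, inv_left = count_inversions(arr[:mid])
    right, inv_right = count_inversions(arr[mid:])
    merged, inversions = [], inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            inversions += len(left) - i  # every remaining left element inverts with right[j]
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inversions

print(count_inversions([5, 4, 3, 2, 1])[1])  # prints 10: n*(n-1)/2 for a reversed array
```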

Karl Bielefeldt