6

We know that binary search takes O(log n) in Big O notation, but if we need to run an O(log n) algorithm twice, is that the same as O(n) in time complexity?

For example, suppose I have one method that finds the first occurrence of a number in a sorted array with duplicates, and another method that finds the last occurrence. Each of these methods takes O(log n) to run, so if I want to know the number of occurrences of a specific number in the array using both methods, I end up with O(2 log n), right? Can we infer that O(2 log n) is equal to O(n), or lower?
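Concretely, here is a minimal Python sketch of the two methods (the helper names are mine, purely for illustration; each one is a standard binary search):

```python
def first_occurrence(arr, target):
    """Index of the first occurrence of target in sorted arr, or -1."""
    lo, hi, result = 0, len(arr) - 1, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            result = mid          # remember the hit, keep searching left
            hi = mid - 1
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return result

def last_occurrence(arr, target):
    """Index of the last occurrence of target in sorted arr, or -1."""
    lo, hi, result = 0, len(arr) - 1, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            result = mid          # remember the hit, keep searching right
            lo = mid + 1
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return result

def count_occurrences(arr, target):
    """Two binary searches back to back: O(log n) + O(log n)."""
    first = first_occurrence(arr, target)
    if first == -1:
        return 0
    return last_occurrence(arr, target) - first + 1
```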

gnat
Walter R
  • If you look something up in a binary tree twice, are you iterating over all of the elements? – Sep 15 '15 at 20:14
  • O(log n) + O(log n) = O(2 log n). For big-O notation, a constant multiplier like 2 does not matter; the notation only gives the order of the running-time complexity, and 2 log n is of the order of log n. – AjayGohil Oct 12 '20 at 03:07

3 Answers

16

No, O(log n) + O(log n) is not O(n). It is still O(log n). When the function inside a big-O is multiplied by a non-zero constant, the result is equivalent to the big-O without that constant:

O(k * g) = O(g)

where k is a non-zero constant factor and g is a bounding function.
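To see why, one can unfold the definition of big-O; a short sketch of the standard argument (C and n₀ are the usual witness constants):

```latex
\begin{align*}
f \in O(k \cdot g)
  &\iff \exists C, n_0 > 0 \;\forall n \ge n_0 :\; f(n) \le C \cdot k \cdot g(n) \\
  &\iff \exists C', n_0 > 0 \;\forall n \ge n_0 :\; f(n) \le C' \cdot g(n)
        \quad \text{(take } C' = C \cdot k \text{, a constant since } k \text{ is constant)} \\
  &\iff f \in O(g)
\end{align*}
```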

An O(log n) operation is an operation that takes a number of steps proportional to the logarithm (the base just contributes a constant factor) of the size of the input, denoted by n. When you have multiple operations of the same big-O order, you can add them together as you have done, and come out with something like O(2 * log n), but big-O notation is not concerned with constant multiplicative factors, so we ignore the 2 and write O(log n).

The reason we are not concerned with constant factors* is that we are examining an algorithm's potential for growth based on input size. We don't use big-O notation to calculate the exact number of steps, just to see how the number of steps might grow.

Let's look at some real numbers to make this less abstract. Say we have an algorithm that runs in O(log n) time and takes exactly log2(n) steps. One method runs it once with an input of size 8, taking 3 steps to complete. Another method runs it twice on the same input, taking 6 steps. That is not the comparison we care about, though; we want to know about the growth. Run the first method again with an input of size 16: it takes 4 steps, 33% more. Run the second method with an input of size 16: it takes 8 steps, also 33% more. Though the absolute number of steps differs, the growth is the same for both methods. The rate of growth is O(log n) no matter how many times the algorithm is called, and it is the rate of growth that we are interested in.
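Those numbers are easy to reproduce with a few lines of Python (assuming, as above, an idealized algorithm that takes exactly log2(n) steps):

```python
import math

# One run of the idealized algorithm takes log2(n) steps;
# running it twice takes 2 * log2(n) steps.
for n in (8, 16, 32):
    once = math.log2(n)
    twice = 2 * math.log2(n)
    print(f"n={n}: one run = {once:.0f} steps, two runs = {twice:.0f} steps")

# n=8: one run = 3 steps, two runs = 6 steps
# n=16: one run = 4 steps, two runs = 8 steps
# n=32: one run = 5 steps, two runs = 10 steps
#
# Doubling n adds exactly one step per run in both cases (3 -> 4 and
# 6 -> 8 are both 33% more steps), so both methods grow as O(log n).
```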


* We are also not concerned with lower-growth bounding function parts, so O(n + log n) is equivalent to O(n).

Deduplicator
cbojar
  • While O(2 log n) is equivalent to O(log n), it is also a subset of O(n), so the sentence is not entirely wrong. It's a rather questionable estimate in practice, however. But sometimes, for example when it only matters that something is of polynomial complexity, we might not care. I suspect that @JörgWMittag wanted to point this out but arguably did so in an overly terse manner. Unfortunately, big-O notation is a complete mess, with people using equality, membership, and inclusion all interchangeably, let alone the common confusion of functions with expressions. – 5gon12eder Sep 15 '15 at 21:17
  • @5gon12eder You are **absolutely right**, and I thought the same thing when I saw JörgWMittag's answer. Since we're programmers, though, and not mathematicians, we are interested in the fact that a binary search is faster than a linear search because the slowest-growing bounding function grows more slowly. They can both be bounded by `n`, but the binary search can also be bounded by `log n`, and that means a real, perceivable performance difference. That's what I would care about. – cbojar Sep 15 '15 at 21:38
5

Both O(log n) and O(2 log n) are subsets of O(n). They are also both equal to O(log n).

You must remember that O(log n) is a set: it is (informally) "the set of all functions that don't grow significantly faster than f(x) = log x". So all of the following are true:

  • O(log n) = O(2 log n)
  • O(log n) ⊂ O(n)
  • O(2 log n) ⊂ O(n)
  • ∀f: f ∈ O(2 log n) ⇒ f ∈ O(n)

You can easily verify that by plugging the functions into the definition of O(g).
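For instance, using the definition f ∈ O(g) ⟺ there exist C, n₀ > 0 with f(n) ≤ C·g(n) for all n ≥ n₀, a sketch of those verifications:

```latex
\begin{align*}
2\log n \le 2\log n \;(\forall n)          &\;\Rightarrow\; 2\log n \in O(\log n) \text{ with } C = 2 \\
\log n \le 1 \cdot 2\log n \;(\forall n)   &\;\Rightarrow\; \log n \in O(2\log n) \text{ with } C = 1 \\
\log n \le n \;(n \ge 1)                   &\;\Rightarrow\; \log n \in O(n) \text{ with } C = 1,\ n_0 = 1 \\
n/\log n \to \infty                        &\;\Rightarrow\; n \notin O(\log n),
    \text{ so the inclusion } O(\log n) \subset O(n) \text{ is strict}
\end{align*}
```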

Unfortunately, there is a lot of abuse of notation going on in this regard. For example, f ∈ O(g) is often written f = O(g), even though it doesn't make sense: f is a function, O(g) is a set of functions. An apple can never be equal to a set of apples! Likewise, it is often written that O(f) = O(g), when what is really meant is that O(f) ⊂ O(g). And lastly, people will sometimes claim that f(n) = 2n is an element of O(n) but not of O(n²), which is wrong. O(n) is a subset of O(n²), so any function that is in O(n) is also in O(n²).

Jörg W Mittag
0

For large values of n, constant factors are ignored in big-O notation.

Hence

O(log n) + O(log n) = O(2 log n) = O(log n)

Temp Id