What is the difference between O(N^2) and Θ(N^2)? I know that O is an upper bound and Θ is a tight bound. Could someone explain this concept to me with an example?
-
@gnat that question seems to be written with the assumption that you already know the answer to this question. Its answer certainly doesn't provide a useful answer to this question, as the only definition provided of $\Theta$ is in terms of $\Omega$, which anyone unfamiliar with $\Theta$ is also unlikely to understand. It doesn't have an example, as requested in this question. – Jules Sep 08 '15 at 22:39
-
As @Raphael mentions, industry programmers tend to be haphazard with O and even more rarely touch Θ. If you are after the proper definition, CS would be the better place to read about this. If you are after what your co-worker might mean if they mention big theta and are more than a year out of college, we might have an adequate answer - but certainly one that would be marked wrong or incomplete on any homework or exam. – Sep 09 '15 at 16:40
2 Answers
This notation has nothing to do with the difference between best/average/worst case complexity. You can apply both notations to each of them.
An algorithm which has linear runtime is in both O(n) and O(n^2), since O only denotes an upper bound: any algorithm that's asymptotically faster than n^2 is also in O(n^2). Mathematically, O(n^2) is the set of functions which grow at most as fast as c * n^2 for some constant c. For example, this set contains c, c * x, c * x^1.5, c * x^2 and c1 * x^2 + c2 * x, for any constants c, c1, c2.
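To make the "set of functions" view concrete, here is a small, purely illustrative Python sketch (not part of the original answer). The constant c = 3 and the sampled range of n are arbitrary choices, and a finite check like this can only suggest membership in O(n^2), not prove it:

```python
# Illustrative sketch: empirically check that f(n) <= c * n^2 holds over a
# range of n. The constant c = 3 and the sample range are arbitrary choices.
def bounded_by_c_n_squared(f, c=3, ns=range(100, 1000, 100)):
    """True if f(n) <= c * n^2 for every sampled n."""
    return all(f(n) <= c * n * n for n in ns)

linear    = lambda n: 5 * n              # in O(n^2)
power_1_5 = lambda n: n ** 1.5           # in O(n^2)
quadratic = lambda n: 2 * n * n + n      # in O(n^2)
cubic     = lambda n: n ** 3             # not in O(n^2)

print(bounded_by_c_n_squared(linear))     # True
print(bounded_by_c_n_squared(power_1_5))  # True
print(bounded_by_c_n_squared(quadratic))  # True
print(bounded_by_c_n_squared(cubic))      # False
```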
Θ, on the other hand, is both a lower and an upper bound. So a linear-runtime algorithm is in Θ(n) but not in Θ(n^2). Mathematically, Θ(n^2) is the set of functions which don't grow faster than c * n^2 for some c, but also don't grow slower than c' * n^2 for another, smaller constant c'. For example, this set contains c * x^2, c1 * x^2 + c2 * x and arctan(x) * x^2 for any positive c, c1, c2, and any other function whose fastest-growing term grows like c * x^2. It does not contain c, c * x or c * x^1.5, because those all grow slower than c * x^2.
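Again purely as an illustration (my sketch, not the answer's): for a function in Θ(n^2), the ratio f(n) / n^2 stays trapped between two positive constants for large n. The sampled range below is an arbitrary choice:

```python
import math

def ratio_range(f, ns=range(1000, 10000, 1000)):
    """Min and max of f(n) / n^2 over the sampled n."""
    ratios = [f(n) / (n * n) for n in ns]
    return min(ratios), max(ratios)

# arctan(x) * x^2: the ratio tends to pi/2, so it is squeezed between
# two positive constants -> consistent with Theta(n^2)
print(ratio_range(lambda n: math.atan(n) * n * n))

# 5 * n: the ratio tends to 0, so there is no positive lower bound of the
# form c' * n^2 -> in O(n^2) but not in Theta(n^2)
print(ratio_range(lambda n: 5 * n))
```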

-
-
`1 <= arctan(x) <= 2` for sufficiently large `x` (arctan tends to π/2). Thus `1 * x^2 <= arctan(x) x^2 <= 2 * x^2`, which means it has both a lower and an upper bound that grows like x^2. – CodesInChaos Jul 17 '22 at 10:04
-
Compare Selection Sort and Quicksort.
Selection sort has n^2 behaviour in both the best and the worst case: it always performs badly. Quicksort, on the other hand, might run with n^2 complexity if the odds are against you, but it might (and in most cases will) run with n*log(n) complexity.
Note how both algorithms are O(n^2) in the worst case, yet only Selection Sort is Θ(n^2). This is essentially saying the above - we can bound Selection Sort's running time from below by the same function, whereas Quicksort may perform asymptotically better in the best case than in the worst case.
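As an illustrative sketch (not part of the original answer), counting element comparisons makes the difference visible: selection sort always performs n(n-1)/2 of them regardless of input, while this deliberately naive, first-element-pivot quicksort only degrades to that count on adversarial (here: already sorted) input:

```python
import random

def selection_sort(a):
    """Sort a copy of a, returning (sorted list, comparison count)."""
    a, comparisons = list(a), 0
    for i in range(len(a)):
        m = i
        for j in range(i + 1, len(a)):
            comparisons += 1
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a, comparisons

def quicksort(a):
    """Naive first-element-pivot quicksort; counts one comparison per
    element partitioned. Degrades to ~n^2/2 comparisons on sorted input."""
    if len(a) <= 1:
        return list(a), 0
    pivot, rest = a[0], a[1:]
    left, right = [], []
    for x in rest:  # one comparison of x against the pivot
        (left if x < pivot else right).append(x)
    ls, lc = quicksort(left)
    rs, rc = quicksort(right)
    return ls + [pivot] + rs, lc + rc + len(rest)

n = 200
random.seed(0)
shuffled = random.sample(range(n), n)
sorted_input = list(range(n))

print(selection_sort(shuffled)[1])      # always n*(n-1)//2 = 19900
print(selection_sort(sorted_input)[1])  # 19900 again, even when sorted
print(quicksort(shuffled)[1])           # far fewer, roughly 2*n*ln(n)
print(quicksort(sorted_input)[1])       # worst case hits 19900 too
```

Both hit 19900 comparisons in their worst case (so both are O(n^2) there), but only selection sort is also pinned from below at that count on every input.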

-
This is *wrong*; see [here](http://cs.stackexchange.com/questions/23068/how-do-o-and-relate-to-worst-and-best-case). – Raphael Sep 09 '15 at 11:27
-
@Raphael It is common outside of academia to denote an *algorithm* (not a function) to be in `O(f(n))` if the worst case behaves like `O(f(n))` and `o(g(n))` if the best case behaves like `o(g(n))`. It's consistent with the usual notation, albeit a slight abuse of such, and unless you're talking to someone who's either a CS student or looking for the mathy stuffs, this is what they mean. – Ordous Sep 09 '15 at 13:38
-
I sincerely hope that you are wrong in the generality of your statement, because that's not only wrong but also not at all consistent. You define a new symbolism that does not allow you to understand any scientific material. Or the Java API documentation, which (afaik) uses the "academic" definition, for that matter. (Saying "Algorithm A is in O(_)" is not the main issue here, even though it's a rather harmful abuse since it reduces algorithms to *one* cost measure, a gross oversimplification.) – Raphael Sep 09 '15 at 13:46
-
@Raphael Java API uses both - it has instances where an algorithm is described in terms of its worst case and average case (or amortized average case), and instances where it just says `this method has O(...) performance`. Saying that defining a new symbolism that has a different meaning prevents you from understanding scientific material is just silly - there is a plethora of different and starkly inconsistent definitions of `=`, yet everyone understands what is meant from context. Here it is apparent which meaning is used, because one is applicable to functions, the other - algorithms – Ordous Sep 09 '15 at 14:03
-
@Raphael Also, reducing an algorithm analysis to a few measures is exactly the point - since the job of a developer is to make good code on time, not perfect code in a thousand years' time. If that does make a difference in a particular case - it's always possible to fall back to a more detailed analysis, but giving a 10+ figures comparison of several algos every time you just have to not do the stupid thing is a waste of everyone's time. – Ordous Sep 09 '15 at 14:07
-
Please give a reference where `o` is used in this way. I've never seen it. (And it's silly, too: using the "academic" notation, you can say "worst-case O(f) and best-case O(g)" -- no other symbol is needed.) – Raphael Sep 10 '15 at 08:20
-
"reducing an algorithm analysis to a few measures is exactly the point" -- it's a necessary evil because more comprehensive are hard or even impossible to do (in part because effects by hardware and OS are all but impossible to nail down), but it's certainly not done to make programmers' lives easier. – Raphael Sep 10 '15 at 08:22
-
Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/28997/discussion-between-ordous-and-raphael). – Ordous Sep 10 '15 at 13:51