Big O notation provides an upper bound on a function, whereas Big Theta provides a tight bound. However, I find that Big O is typically (and informally) taught and used when Big Theta is really meant.
e.g. "Quicksort's worst-case running time is O(N^2)" can be turned into the much stronger statement "Quicksort's worst-case running time is Θ(N^2)"
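To make the distinction concrete, here is a minimal sketch (using an operation count as the cost model, with a hypothetical helper name) showing that a looser upper bound like O(N^3) is still true of a quadratic loop, while Θ(N^2) pins the growth rate down exactly:

```python
def count_ops(n):
    """Basic-operation count of a simple double loop: exactly n * n."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # one basic operation per inner iteration
    return ops

for n in [10, 100, 1000]:
    t = count_ops(n)
    # O(n^3) is a true statement, but a loose upper bound:
    assert t <= n ** 3
    # Theta(n^2) sandwiches t between c1*n^2 and c2*n^2 (here c1 = c2 = 1):
    assert 1 * n ** 2 <= t <= 1 * n ** 2
```

Both assertions hold, yet only the Θ bound tells you the function actually grows quadratically.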
While the use of Big O is technically correct, wouldn't more prevalent use of Big Theta be more expressive and lead to less confusion? Is there some historical reason why Big O is more commonly used?
Wikipedia notes:
Informally, especially in computer science, the Big O notation often is permitted to be somewhat abused to describe an asymptotic tight bound where using Big Theta Θ notation might be more factually appropriate in a given context.