I recently asked a question on CodeReview SE where an answer recommended using a binary search, and its author demonstrated that it was faster than my implementation. During my research on the topic, I came across a table showing the complexities of a binary search:
| Worst-case | Best-case | Average | Worst-case space complexity |
|------------|-----------|---------|-----------------------------|
| O(log n)   | O(1)      | O(log n)| O(1)                        |
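For context, here is a minimal sketch of the kind of binary search I mean (a standard iterative version over a sorted list; the function name and variables are placeholders of my own, not the code from my CodeReview post):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # midpoint of the remaining range
        if items[mid] == target:
            return mid         # best case: found on the first probe, O(1)
        elif items[mid] < target:
            lo = mid + 1       # discard the lower half
        else:
            hi = mid - 1       # discard the upper half
    return -1                  # each pass halves the range, so at most O(log n) passes
```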
I think I'm misunderstanding the information this table is meant to convey. I'm reading it as:

> The average performance of a binary search is the same as its worst-case performance.
However, if this were true, wouldn't a binary search be avoided in many cases? I'm new to this level of optimization, so my confusion may simply come from prerequisite knowledge I don't have yet.
Edit: For clarity, the question I'm seeking answers to is the one below this edit. The question above it merely documents my reasoning during the research phase, included to show where my understanding broke down. My apologies for any confusion.
What is meant by "the complexities of a binary search", what are those complexities, and what is this table actually telling me about them?