Do computer scientists and mathematicians research such algorithms just for fun/out of curiosity/so they have an original dissertation topic or are there really some real life situations in which naive algorithms are too slow and we need a more efficient one?
Efficiency isn't studied for fun or out of curiosity; it matters because it lets us solve problems faster, and that comes up in computer science even more often than in general software development.
It is true that computers are fast enough to run an O(n) computation like:

    x = [1, 2, 3, 4, 5, 6]
    print([i for i in x])   # O(n): touches each of the n elements once
But with algorithms whose complexity is O(n^2), or even O(n log n), we start to face issues of scalability: as the data set we are iterating over (or otherwise computing on) grows, the computation takes more and more time. So if we can find an algorithm that reduces the time complexity from, say, O(n) to O(log n), we see a much bigger difference on larger data sets.
See a Big-O complexity chart for an illustration: we want to get our algorithms as close to the green areas (the constant and logarithmic curves) as possible.
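To make that concrete, here is a minimal sketch (using Python's built-in bisect and timeit modules, with an arbitrary ten-million-element sorted list and a worst-case target chosen just for the demo) comparing an O(n) linear scan against an O(log n) binary search over the same data:

    import bisect
    import timeit

    data = list(range(10_000_000))   # a large, already-sorted data set
    target = 9_999_999               # worst case for the linear scan

    def linear_search(xs, t):
        # O(n): may have to look at every element
        for i, v in enumerate(xs):
            if v == t:
                return i
        return -1

    def binary_search(xs, t):
        # O(log n): halves the remaining range on every step
        i = bisect.bisect_left(xs, t)
        return i if i < len(xs) and xs[i] == t else -1

    print(timeit.timeit(lambda: linear_search(data, target), number=3))
    print(timeit.timeit(lambda: binary_search(data, target), number=3))

On a list this size the binary search should come back essentially instantly, while the linear scan takes a visible fraction of a second per run, and the gap only widens as the data grows.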
One big reason complexity is such a large topic in computer science is the pair of complexity classes P and NP, and the attempt to prove whether P = NP. That may not seem like a big deal in the context of everyday programming, but it becomes a huge deal for concepts like cryptography.
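A rough illustration of why cryptography cares (a toy sketch with two small primes chosen only for the demo; real cryptographic moduli are hundreds of digits long): verifying a claimed factorisation is a single multiplication, while finding it by naive trial division takes a number of steps that grows exponentially in the number of digits.

    # Toy illustration: checking an answer is cheap, finding it is expensive.
    p, q = 104_729, 1_299_709        # two small primes, assumed here just for the demo
    n = p * q

    def verify(n, p, q):
        # Verifying a claimed factorisation: one multiplication
        return p * q == n

    def find_factor(n):
        # Searching for a factor by trial division: ~sqrt(n) steps in the worst case
        if n % 2 == 0:
            return 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d
            d += 2
        return n                     # n is prime

    print(verify(n, p, q))           # True, instantly
    print(find_factor(n))            # 104729, after tens of thousands of trial divisions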
For a real-life scenario, take the travelling salesman problem: the best known exact algorithms, such as Held-Karp dynamic programming, run in O(2^n n^2) time. No polynomial-time algorithm is known for it, and its decision version is NP-complete, so if research ever produced a solution in P, the implications would carry over to a huge number of other problems.
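To see the blow-up in practice, here is a minimal brute-force sketch over a made-up 4-city distance matrix (the O(2^n n^2) figure above comes from the smarter Held-Karp dynamic program; this naive version simply tries all (n-1)! tours):

    from itertools import permutations

    # A made-up symmetric distance matrix for 4 cities, purely for illustration.
    dist = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0],
    ]

    def brute_force_tsp(dist):
        # Try every ordering of the cities after city 0: (n-1)! candidate tours
        n = len(dist)
        best = float("inf")
        for perm in permutations(range(1, n)):
            tour = (0,) + perm + (0,)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            best = min(best, cost)
        return best

    print(brute_force_tsp(dist))     # 18 for this toy instance

At 4 cities this is instant, but at 20 cities there are already around 1.2 * 10^17 tours to try, which is why even the exponential-but-much-smaller O(2^n n^2) dynamic program, and research into better algorithms, matter.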
It's also worth having a look at sorting algorithms: bubble sort, for instance, has a complexity of O(n^2), while we have found better algorithms such as heapsort, whose worst case is O(n log n).
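As a quick, unscientific comparison (a sketch timing both on the same arbitrary list of 5,000 random floats):

    import heapq
    import random
    import timeit

    def bubble_sort(xs):
        # O(n^2): repeatedly swap adjacent out-of-order pairs
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def heap_sort(xs):
        # O(n log n): build a heap, then pop the minimum n times
        heap = list(xs)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    data = [random.random() for _ in range(5_000)]
    print(timeit.timeit(lambda: bubble_sort(data), number=1))
    print(timeit.timeit(lambda: heap_sort(data), number=1))

(In everyday Python you would just call sorted(), which uses Timsort, another O(n log n) algorithm.)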