In university, in our algorithms courses, we learn how to precisely compute the complexity of various simple algorithms that are used in practice, such as hash tables or quicksort.
But now, in a big software project, when we want to make it faster, all we do is look at individual pieces (a few nested loops there that can be replaced by a faster hash table, a slow search here that can be sped up by a fancier technique), but we never compute the complexity of our whole pipeline.
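To give a concrete example of the kind of local rewrite I mean, here is a sketch in Python (the function and field names are made up, not from the actual project): a nested-loop membership test replaced by a hash-set lookup, turning an O(n·m) piece into roughly O(n + m).

```python
# Hypothetical example of a "local" speed-up; names and data shape are invented.

def cancelled_orders_slow(orders, cancelled_ids):
    """O(n * m): for each order we scan the whole cancelled list."""
    result = []
    for order in orders:                 # n iterations
        for cancelled in cancelled_ids:  # up to m iterations each time
            if order["id"] == cancelled:
                result.append(order)
                break
    return result

def cancelled_orders_fast(orders, cancelled_ids):
    """Roughly O(n + m): build a hash set once, then do O(1) lookups."""
    cancelled = set(cancelled_ids)  # O(m)
    return [order for order in orders if order["id"] in cancelled]  # O(n)
```

Each rewrite like this is easy to justify in isolation; my question is about what all of them add up to.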
Is there any way to do that? Or do people in practice just rely on "locally" using a fast algorithm to make the whole application faster, instead of globally considering the application as a whole?
(Because it seems nontrivial to me to show that if you pile up a large number of algorithms that are each known to be very fast on their own, you also end up with a fast application as a whole.)
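To make that worry concrete, here is a toy sketch (again with invented names) where every piece is fast on its own, yet the way the pieces are composed makes the whole pipeline quadratic.

```python
# Hypothetical composition problem: each function is fine in isolation,
# but the caller invokes one of them once per record.

def normalize(record):
    """Cheap: linear in the length of a single record."""
    return record.strip().lower()

def dedupe(records):
    """O(k) for k records, thanks to a hash set -- fast on its own."""
    seen = set()
    out = []
    for r in records:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

def pipeline(records):
    cleaned = []
    for r in records:
        cleaned.append(normalize(r))
        cleaned = dedupe(cleaned)  # re-deduplicating the growing list on every
                                   # iteration makes the loop O(n^2) overall,
                                   # even though dedupe itself is linear
    return cleaned
```

Both functions would pass a "is this a fast algorithm?" review on their own; the slowness only appears at the point where they interact.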
I am asking this because I'm tasked with speeding up a large project someone else has written, where many interacting algorithms operate on the input data, so it is unclear to me what impact making a single algorithm faster will have on the whole application.