I was wondering if there are any calculus relationships implicit in Big-O notation.

For example, an algorithm that is linear according to Big-O notation reduces the size of the problem by a constant amount at each step, and it also looks at each part of the input a constant number of times. The derivative of a linear expression is a constant, so there is a hint of a pattern here. However, I haven't been able to figure out how to generalize these observations to other Big-O classes of algorithms.
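
For concreteness, here is a minimal sketch of what I mean (Python chosen arbitrarily; the function is just a hypothetical example of a linear algorithm):

```python
def linear_max(values):
    """O(n): each iteration consumes exactly one element, so the
    remaining problem shrinks by a constant amount per step."""
    best = values[0]
    for v in values[1:]:  # n - 1 constant-time steps
        if v > best:
            best = v
    return best

# Doubling the input roughly doubles the work: the "rate of change"
# of total work with respect to n is constant, like the derivative
# of a linear function.
print(linear_max([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 9
```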

Do derivatives/antiderivatives help in matching an algorithm's Big-O description with its behavior?

user10478
  • This is more of a [Computer Science](https://cs.stackexchange.com/) question. And the answer is: why would you do that? Generating the derivative would imply that you already have (or nearly have) the actual behaviour worked out. If you do this algebraically, it means you have the actual formula. If you do it empirically, you've already sampled the actual graph of interest. The only occasion where I surmise this might be effective is when you have no direct ability to analyse and are presented with integrals/derivatives. In which case, yes, but with caveats caused by the information lost in differentiation. – Kain0_0 Aug 15 '19 at 01:02
  • I think you're right; I'll try that site. As for why one might do that, I suppose for me it's simply to understand the concept of Big-O more precisely. – user10478 Aug 15 '19 at 15:31
  • Big O actually has a very simple explanation that doesn't require calculus. It describes how a function scales as n increases. – Robert Harvey Aug 15 '19 at 16:13
  • I am pretty sure someone can find an algorithm where calculating the order of growth correlates to some [finite difference relationship](https://en.wikipedia.org/wiki/Finite_difference_method). And such an equation may be solved using calculus. However, I have my doubts about any practical relevance. – Doc Brown Aug 19 '19 at 20:20

1 Answer

Nope.

Here's why. Taking derivatives of n² and n³ just tells you that their degrees differ by 1. The problem is that I don't care about that difference. I care about the difference between O(n²) and O(n!). I don't like either of them (I like O(n log n)), but O(n!) is nasty when n is significant (sometimes it isn't, and then no one cares about any of this). O(n²) and O(n¹⁰) are close enough that I simply don't care. So no, derivatives here put the emphasis on the wrong details. I'm not concerned with rates of change. I'm concerned with categories of change.
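
To make the categories concrete, here is a minimal numeric sketch (Python chosen arbitrarily; the sample sizes are made up for illustration). The gap between n² and n log n is pedestrian next to the chasm between either of them and n!:

```python
import math

# Tabulate a few growth categories. The point is not the local
# rate of change between neighbors but the chasm between families.
for n in (5, 10, 20):
    print(f"n={n:>2}  n*log2(n)={n * math.log2(n):>7.0f}  "
          f"n^2={n**2:>6}  n^10={n**10:.1e}  n!={math.factorial(n):.1e}")
```

At n = 20, n! is already about five orders of magnitude past n¹⁰, which is the kind of difference the categories capture and a derivative comparison does not.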

The reason is that code in the n log n category is often as good as it gets (unless there is a trivial solution), while the others usually leave room for significant improvement. That is, when n is big enough that anyone cares.

In fact, I have three categories for applied algorithms: trivial, optimized, and crap.

Find a way to help me sort applied algorithms into those categories and I'll get you a Turing Award.

candied_orange
  • This is helpful in highlighting the lack of application, but I still find the question potentially useful for understanding Big-O from a different angle. – user10478 Aug 15 '19 at 15:35
  • @user10478: What does that mean? You want to know every possible way that calculus ***does not*** apply to Big O? – Robert Harvey Aug 15 '19 at 15:44