Say I have a software algorithm (for example, an FFT) and I need it to process (n) units of data within (t) milliseconds. This is a real-time task written in C. There are plenty of CPUs out there, and you could simply pick the fastest one, but we want one that is just fast enough for the job so we can keep the cost down.
FFTs are O(n log n) as far as I know, so one could say it takes roughly k * (n log n) operations to perform an FFT on n units of data. Even if the constant k were known, how would I translate that into actual CPU cycles in order to determine which CPU is suitable? (A rough sketch of the arithmetic I have in mind is at the end of this question.)
A colleague at work posed this question to me and I couldn't answer it, as it falls into computer engineering territory that I'm not familiar with.
Assume that this software program runs on its own, with no OS or other overhead involved.
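To make the translation concrete, here is the kind of back-of-the-envelope calculation I imagine. The values of k, n and t below are purely hypothetical placeholders, and k would presumably have to be measured for a specific FFT implementation on a reference CPU rather than derived from the big-O bound:

```c
#include <math.h>
#include <stdio.h>

/*
 * Hypothetical numbers for illustration only: K_CYCLES_PER_OP is the
 * per-operation cycle constant that would have to be measured for a
 * specific FFT implementation on a reference CPU; it is not a known value.
 */
#define K_CYCLES_PER_OP   8.0      /* assumed cycles per (n log2 n) unit */
#define N_SAMPLES         4096.0   /* assumed FFT size n                 */
#define DEADLINE_MS       2.0      /* assumed real-time budget t         */

int main(void)
{
    /* Estimated cycle count for one FFT of N_SAMPLES points. */
    double cycles = K_CYCLES_PER_OP * N_SAMPLES * log2(N_SAMPLES);

    /* Minimum clock frequency that finishes within the deadline. */
    double min_hz = cycles / (DEADLINE_MS / 1000.0);

    printf("estimated cycles per FFT : %.0f\n", cycles);
    printf("minimum clock frequency  : %.2f MHz\n", min_hz / 1e6);
    return 0;
}
```

Of course this assumes the cycle count scales linearly with clock frequency and ignores things like cache and memory behaviour, which is part of what I'm unsure about.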