
Say we have an interrupt that is generated once each time 1024 bytes of network traffic arrive. Each interrupt takes 3.5 microseconds to process and the network speed is 100 Mb/s. We want to find the amount of CPU time used per second.

Is it correct that:

1 interrupt     3.5e-6 seconds   3.4e-9 seconds   1.25e7 bytes
-----------  x  -------------- = -------------- x ------------ = 0.043
1024 bytes      1 interrupt      1 byte           1 second
Tulains Córdova
user63210
    Related: [Is micro-optimization important while coding?](http://softwareengineering.stackexchange.com/questions/99445) – Robert Harvey Feb 21 '17 at 15:53
  • You're making a lot of assumptions that aren't necessarily true. – whatsisname Feb 21 '17 at 17:15
  • @whatsisname: Agreed - I'd assume it's a university question (they love to ignore "inconvenient reality"; like how many bits on a 100Mb network is per-packet overhead and not data). – Brendan Mar 24 '17 at 11:28

2 Answers


In order to calculate CPU usage per second, we need a clear definition of what that is. The only sensible way to define it is based on the number of instructions the CPU can execute in a second. Then you need to know how many instructions your application is trying to execute per second and divide that by the CPU's capacity.

For example, if the CPU can execute 1 million instructions per second and your application executes 500 K instructions in 2 seconds, your program used up 25% of the CPU during those 2 seconds.
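This definition can be sketched in a few lines of Python, using the hypothetical numbers from the example above (a 1 MIPS CPU and a 500 K-instruction workload):

```python
# Sketch of the utilization definition above, with the example's hypothetical numbers.
cpu_capacity_ips = 1_000_000      # instructions the CPU can execute per second
instructions_executed = 500_000   # instructions the application executed
elapsed_seconds = 2               # wall-clock window being measured

# Utilization = work done / work the CPU could have done in that window.
utilization = instructions_executed / (cpu_capacity_ips * elapsed_seconds)
print(f"{utilization:.0%}")  # 25%
```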

The details you have here about bandwidth and interrupts tell you nothing about how many instructions this application processes per second. Either you're trying to calculate something other than CPU usage or you don't have enough information.

JimmyJames
  • The detail given of the amount of processing time used to handle each interrupt does seem relevant, however, and as you can use the other data to calculate interrupts per second, the question does have a solution. – Jules Feb 21 '17 at 18:41
  • @Jules as noted in a comment in another answer, I think you are assuming that 3.5 micro-seconds is CPU time but I don't see anything in the wording of the question that suggests that is the case. – JimmyJames Feb 21 '17 at 19:09
  • Instructions can be wildly different in execution time unless you're running a RISC CPU. Instructions can be pipelined and run in parallel, unless you run a rather simple core like ARM5. How much CPU would be spent under a real load can only be estimated, and then measured. – 9000 Mar 23 '17 at 19:12
  • @9000 My understanding is that this question is about (hypothetical) measurement. – JimmyJames Mar 24 '17 at 13:27

I come up with the same result:

(100E6 / 8 / 1024) * 3.5E-6 = 0.0427 CPU seconds per second.
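The same calculation, spelled out step by step as a small Python sketch (it assumes, as the comments below discuss, that the 3.5 microseconds is fully CPU-bound time):

```python
# Sketch of the calculation above: CPU time spent handling interrupts per
# second of traffic on a fully loaded link.
link_speed_bps = 100e6            # 100 Mbit/s link
bytes_per_second = link_speed_bps / 8
bytes_per_interrupt = 1024        # one interrupt per 1024 bytes
interrupt_cost_s = 3.5e-6         # time to process one interrupt

interrupts_per_second = bytes_per_second / bytes_per_interrupt
cpu_seconds_per_second = interrupts_per_second * interrupt_cost_s
print(round(cpu_seconds_per_second, 4))  # 0.0427
```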

Simon B
  • This entire question is problematic but why would you divide by the number of bytes to get cpu per second? – JimmyJames Feb 21 '17 at 18:01
  • @JimmyJames this calculation seems reasonable to me. The data given is (amount of CPU time used per handling an interrupt) x (number of bytes per second) / (number of bytes per interrupt). The units of bytes and interrupts cancel to give CPU-seconds per second, which is the correct dimension for the result. The calculation ignores a lot of the real world issues that crop up in actual applications (jitter in data arrival, bus contention, etc) but should be roughly in the right ball park. – Jules Feb 21 '17 at 18:38
  • @Jules Are you saying 3.5 microseconds is CPU time? The way it is stated in the question looks like clock time to me. – JimmyJames Feb 21 '17 at 19:02
  • @JimmyJames I take the link speed to be 100 megabits per second, but the question says that the interrupt happens every 1024 bytes. So I converted the link speed to megabytes per second. – Simon B Feb 21 '17 at 23:38
  • @SimonB I get it now, but this assumes that the 3.5 microseconds is either CPU time, or that during those 3.5 microseconds there is 100% CPU utilization by this program. That may very well be what the OP meant but I didn't read it that way initially and I think it's unclear. – JimmyJames Feb 22 '17 at 14:34