
I was wondering if there is some cost saving, either in time or space, in passing and/or returning smaller arguments (char vs. int)?

I have heard that the compiler will optimize code based on the processor's word size (8, 16, or 32 bits). I think arguments are passed in a register, so the natural argument size would be the width of that register. Personally, I think that if the count ever needs to grow, the function does not need to be changed. Still, others will argue: "Since we are only counting to, say, 254, we only need an unsigned char. We are saving 24 bits of space," etc. I also think smaller types cause more trouble with casting, and that it is better to use the type that fills a register. So I prefer to pass the larger arguments. Am I right or wrong?

/* Using a smaller parameter type */
unsigned char max_count = 10;
count(max_count);

void count(unsigned char max_count)
{
    /* ... */
}

/* Using a larger parameter type */
unsigned long max_count = 10;
count(max_count);

void count(unsigned long max_count)
{
    /* ... */
}

/* An example I come across (MISRA goes nuts with this statement) */
unsigned char do_something(unsigned char a)
{
    ...
    return ((b | 0x12) << 2);
}
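
MISRA complains here because b and 0x12 are promoted to int before the | and <<, and the int result is then implicitly narrowed back to unsigned char on return. A minimal compliant-style sketch (my rewrite, not from the question; it assumes b is an unsigned char produced by the elided code, and the exact rules depend on the MISRA version):

/* Do the arithmetic in an explicitly unsigned type, then narrow
   explicitly, so no implicit promotion or truncation remains. */
unsigned char do_something(unsigned char a)
{
    unsigned char b = a;                          /* stand-in for the elided logic */
    unsigned int tmp = ((unsigned int)b | 0x12u) << 2;
    return (unsigned char)(tmp & 0xFFu);
}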
  • Ask the people arguing for saving 24 bits of space this: "O.K., how much do 24 bits cost?" You have spent way more money _arguing_ over the cost than the actual cost. – user949300 Apr 15 '20 at 18:56
  • Does this answer your question? [Is micro-optimisation important when coding?](https://softwareengineering.stackexchange.com/questions/99445/is-micro-optimisation-important-when-coding) – gnat Apr 15 '20 at 19:20

2 Answers


If the method requires a char, use a char; if it requires an integer, use an integer. Think in terms of required functionality, not raw speed. The effect depends on the architecture, so on some hardware a smaller type may speed things up (e.g., low-end CPUs with a fast I/O bus), and the Java compiler and platform will do things differently depending on the underlying architecture. I know this is not a direct answer, but most of the performance work is now handled by JIT compilers, which will more than likely outdo any small data-type tweaks we might attempt.

This mattered more when an instruction set had specific instructions for passing bytes/chars (Z80/81, etc.). Today the bus is probably 64 bits wide, so it isn't something I would worry about much, unless I am sending data between servers and want to serialise to the smallest packet possible. That concern is more neglected today, especially with XML/HTML/JSON being sent across the wire, most likely over HTTPS; admittedly it is more readable, but the size increase is enormous compared with how it used to be done. I can remember developing on a PDP-11 years ago (which sounded like a warehouse generator); it only had 3 MB of memory but ran a 50-user system and had plenty of storage. Yes, it used Wyse/ASCII terminals as the user interface, but data input on those was much quicker than what we have today. Going off-topic slightly, but I thought I would add that for interest.

CCS

You should choose the data type that correctly expresses the intent of the program. For a count, this will likely be one of unsigned, unsigned long, uint_fast32_t, or size_t (depending on exact intent). Using char for anything other than to represent "bytes"/memory is ill-advised; it is unsuitable for counts because it may or may not be signed. Instead, express your intent with something like uint8_t or uint_fast8_t when you need an unsigned integer with exactly 8 bits (uint8_t) or at least 8 bits (uint_fast8_t) of storage.
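
As a hedged illustration of that advice (the function and names are mine, not from the question): a count of objects in memory is naturally a size_t, and raw byte data is naturally uint8_t.

#include <stddef.h>
#include <stdint.h>

/* Count the nonzero bytes in a buffer: size_t says "a count of things
   in memory", uint8_t says "raw byte data". */
size_t count_nonzero(const uint8_t *buf, size_t len)
{
    size_t n = 0;
    for (size_t i = 0; i < len; ++i) {
        if (buf[i] != 0u) {
            ++n;
        }
    }
    return n;
}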

Whether there's any overhead from choosing a larger/smaller data type depends entirely on the platform. In the amd64 (x86_64) calling convention, the first several integer arguments and the return value are passed in registers. For arguments that are spilled onto the stack, you'll notice that stack frames are aligned to 16 bytes, so there's effectively no difference in space overhead between a char and two whole pointers. This will of course be entirely different on embedded platforms.
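
To make that concrete (a sketch assuming the System V amd64 ABI; the function names are mine): both variants receive their argument in the same 64-bit register, just viewed through different sub-registers, so the char version saves nothing in the call itself.

#include <stdio.h>

/* The first integer argument arrives in rdi either way: the char
   variant uses only the low byte (dil), the long variant all 64 bits. */
static void count_char(unsigned char max_count) { printf("%u\n", (unsigned)max_count); }
static void count_long(unsigned long max_count) { printf("%lu\n", max_count); }

int main(void)
{
    count_char(10u);
    count_long(10ul);
    return 0;
}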

Using a smaller size than necessary may actually add overhead, especially since mathematical operations on smaller types promote them to int! Of course, a sneaky compiler might not emit extra instructions to truncate the result back to a char-sized value, but only where it can assume that no larger value is ever produced; for signed types, overflow after promotion is undefined behavior.
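
A small self-contained example of that promotion (values chosen by me so the sum overflows a char):

#include <stdio.h>

int main(void)
{
    unsigned char a = 200u;
    unsigned char b = 100u;

    int wide = a + b;                              /* promoted to int: 300 */
    unsigned char narrow = (unsigned char)(a + b); /* truncated mod 256: 44 */

    printf("%d %u\n", wide, (unsigned)narrow);     /* prints: 300 44 */
    return 0;
}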

amon
  • "especially since mathematical operations on smaller types promotes them to ints!" It is this that drives me mad sometimes, especially having to deal with MISRA. So, I am more apt to use the largest type unless it is absolutely necessary to use a smaller type. – Christopher J. Holland Apr 16 '20 at 17:57