With absolutely no prior knowledge of the specific case, I would guess: performance.
Given a function with a fixed set of parameters, a compiler can optimise how those parameters are laid out in memory, and even optimise the entire function to a fixed set of processor instructions.
A generalised implementation instead needs machinery for detecting the number of parameters actually passed, collecting them into some intermediate representation, and iterating over them.
Consider an example of sum, in an imaginary language, implemented variadically:
function sum(int[] ...args) {
    local accumulator = 0;
    foreach ( args as next_arg ) {
        accumulator += next_arg;
    }
    return accumulator;
}
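For a sense of what that machinery looks like in a real language, here is a rough C sketch of the same function. The explicit count parameter is an assumption of this sketch: C's stdarg facilities give the callee no way to discover how many arguments were actually passed.

#include <stdarg.h>

/* Sketch only: the caller must supply the argument count explicitly,
   because the stdarg machinery cannot detect it. */
int sum(int count, ...)
{
    va_list args;
    int accumulator = 0;

    va_start(args, count);                  /* begin traversal of the variable arguments */
    for (int i = 0; i < count; i++) {
        accumulator += va_arg(args, int);   /* fetch the next int argument */
    }
    va_end(args);                           /* finish traversal */

    return accumulator;
}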
Even without knowing the low-level mechanics of passing and allocating variables, we can see that there's a lot going on here. Now compare to implementations with fixed argument lists:
function sum(int a) { return a; }
function sum(int a, int b) { return a + b; }
function sum(int a, int b, int c) { return a + b + c; }
function sum(int a, int b, int c, int d) { return a + b + c + d; }
Here, no intermediate storage needs to be allocated for args and accumulator, and the implementations are easy to map to low-level instructions.
While it's possible for a sufficiently powerful compiler to generate the optimised cases from the generalised one, it's far from easy. So where performance matters more than human convenience, it can be worth hand-crafting at least some of the cases.
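As a rough C illustration of that trade-off (the names sum2, sum3 and sum4 are hypothetical, since C has no overloading), the hand-crafted cases reduce to straight-line arithmetic that a compiler can inline or fold into a few add instructions:

/* Hand-crafted fixed-arity cases: no argument counting, no va_list, no loop. */
int sum2(int a, int b)               { return a + b; }
int sum3(int a, int b, int c)        { return a + b + c; }
int sum4(int a, int b, int c, int d) { return a + b + c + d; }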