I have a number of functions that are pretty close to the mathematical definition of a function. For example, a simplified version of one of these functions may look like:
int foo(int a, int b) {
    return a * b;
}
I'd like to unit test them, but I'm not sure of the best way to approach the tests, since these functions have very large ranges of valid parameter values. Every approach I've come up with feels lacking:
- If I test with a discrete, pre-defined subset of possible values (a table-driven test, sketched after this list), I fear the set I choose may not be representative of the whole, leading to a potentially endless cycle of adding new values to the subset every time an unexpected one turns up. That seems to defeat the point of having a unit test in the first place.
- If I test with a discrete subset of possible values that's chosen at random, I inherit all the problems of picking the subset manually, plus a real chance that the tests fail intermittently from run to run.
- If I test with every possible value (e.g. 0 to 2^64), the tests will take forever to run.
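For reference, here's a minimal sketch of what I mean by the first approach: a table-driven test over a hand-picked subset of inputs, in plain C with assert. The particular input pairs are just placeholders I chose, not values I have any special confidence in:

#include <assert.h>
#include <stddef.h>

int foo(int a, int b);  /* function under test, defined elsewhere */

/* Hand-picked inputs and the products I expect back. */
struct test_case { int a; int b; int expected; };

static const struct test_case cases[] = {
    {   0,   0,     0 },
    {   1,   1,     1 },
    {   2,   3,     6 },
    {  -4,   5,   -20 },
    { 100, 100, 10000 },
};

void test_foo(void) {
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        assert(foo(cases[i].a, cases[i].b) == cases[i].expected);
    }
}

Every unexpected value that's later found in production would mean appending another row to cases, which is exactly the cycle I'm worried about.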
My final thought on how to approach the problem was to avoid testing the implementation of the function entirely, either by:
- Only testing that the function accepts two integers and returns an integer (a pure signature check, sketched after this list), or
- Not testing the function at all
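To be concrete, the first of those options amounts to little more than a compile-time signature check, something like the following (again assuming foo is declared as above):

/* Signature-only "test": this line fails to compile if foo's parameter
   or return types change, but it says nothing about what foo computes. */
static int (*const foo_has_expected_signature)(int, int) = foo;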
But if I'm not really testing the functions at all, whether in practice or in principle, I'm not sure how I can guarantee their correctness in the face of an eventual refactor.
What is the best way to test functions that have completely deterministic output, but can operate on a large range of possible inputs?