Unit testing has been widely lauded as an indispensable part of software development, and TDD has many proponents.
However, because of the combinatorial explosion of possible inputs, it can be difficult to know when you have enough coverage of even a single method.
Consider a method that checks for an intersection between two rectangles. Each rectangle is defined by four `float` values, `X`, `Y`, `Width`, and `Height`, and the method returns `true` if the rectangles intersect and `false` if they don't. Already I can think of quite a few test cases:
- One where the rectangles intersect
- One where the rectangles do not intersect
- One where the rectangles are the same size and have the same position
- One where one or both of the rectangles have zero width, height, or both
- One where a rectangle has odd `float` values like `Infinity` or `NaN`
- One where the right edge of the first rectangle has equal coordinates to the left edge of the second (repeat for every combination of edges)
- One where one or both rectangles cover all of 2D space...
...and so on, and so on. And that's just for one method.
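To make the question concrete, here's a minimal sketch of the kind of method I mean, with a few of the cases above as assertions. All names here are hypothetical, and I've arbitrarily decided that edges which merely touch do *not* count as intersecting (that decision itself is one of the things the tests would have to pin down):

```python
from dataclasses import dataclass


@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float


def intersects(a: Rect, b: Rect) -> bool:
    """Return True if two axis-aligned rectangles overlap.

    Strict inequalities mean rectangles that only share an edge,
    or that have zero width/height, do NOT intersect. Any NaN
    comparison is False, so NaN rectangles never intersect.
    """
    return (a.x < b.x + b.width and b.x < a.x + a.width and
            a.y < b.y + b.height and b.y < a.y + a.height)


# A few of the cases from the list above:
assert intersects(Rect(0, 0, 2, 2), Rect(1, 1, 2, 2))          # overlapping
assert not intersects(Rect(0, 0, 1, 1), Rect(5, 5, 1, 1))      # disjoint
assert intersects(Rect(0, 0, 1, 1), Rect(0, 0, 1, 1))          # identical
assert not intersects(Rect(0, 0, 0, 0), Rect(0, 0, 1, 1))      # zero size
assert not intersects(Rect(0, 0, 1, 1), Rect(1, 0, 1, 1))      # touching edges
assert not intersects(Rect(float('nan'), 0, 1, 1), Rect(0, 0, 1, 1))  # NaN
```

Even for these four comparisons, every `<` versus `<=` choice and every special `float` value changes which of the cases above pass, which is exactly what makes it hard to say when the test suite is "done".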
How many of these test cases are valuable? Am I approaching unit testing the wrong way?