You don't mention your coverage tool, but many have a "combine" feature that lets you aggregate the results of multiple runs or suites. If you want a single aggregate coverage number, that's the place to start.
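For instance, if you happen to be using coverage.py (an assumption on my part, since you don't name the tool), each suite can write its own data file and the files can be merged afterwards. A rough sketch of that workflow through the Python API:

```python
# Rough sketch assuming coverage.py; the equivalent CLI workflow is
# "coverage run --parallel-mode ..." per suite followed by "coverage combine".
import coverage

def run_one_suite():
    # Placeholder: however you actually invoke a single test suite.
    pass

# Run each suite with a unique data suffix so every run writes its own
# .coverage.* file instead of overwriting a single .coverage file.
cov = coverage.Coverage(data_suffix=True)
cov.start()
run_one_suite()
cov.stop()
cov.save()

# Afterwards, merge all the .coverage.* files and report one aggregate number.
combined = coverage.Coverage()
combined.combine()
combined.save()
combined.report()
```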
Now, can we talk about the elephant in the room?
There is no spoon. And there is no "total coverage percentage." At least, no simple one.
Coverage percentage is a readily comprehended metric, presented to help you understand the scope, depth, and range of a test suite. But like any simple benchmark, it's very easy to become fixated on this value as some sort of magical talisman of "complete testing."
Let's say you have achieved the glory of "100% test coverage." Yay! But what does that mean? That 100% of your code lines were executed by at least one test, right? Then what about this line?
launch_missile = (launch_authorized and launch_cmd_given) or previous_launch_status
"Covering" that line means something--but not a whole lot, because there are a variety of conditions which are True
or False
with some probability, but it's unlikely that you have tested all of the combinations of those conditions. Even if that line is covered a dozen times, if one of the conditions is relatively uncommon, you haven't come close to testing all of the real results that might occur in practice. To make that clearer, a more synthetic example:
engage_laser = (laser_armed and safety_disengaged) or random.random() < 0.0000003
How many times would you have to cover that line to test it exhaustively? And how many times would you have to cover it to test it in combination with all of the other variables in the program, each with its own (possibly similarly rare) probabilities?
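To put rough numbers on it: with a hit probability of about 3 in 10 million per execution, you'd need on the order of two million executions of that line just to have even odds of ever seeing the random clause fire by chance. Forcing the conditions is far more practical. Here's an illustrative sketch (the function name and the pytest/mock usage are my own stand-ins, not anything from your code) of what exercising every combination actually takes for that one line:

```python
# Illustrative only: should_engage_laser() is a stand-in for the snippet
# above, and pytest is assumed as the test runner.
import itertools
import random
from unittest import mock

import pytest

def should_engage_laser(laser_armed, safety_disengaged):
    return (laser_armed and safety_disengaged) or random.random() < 0.0000003

# One line of code, but 4 combinations of the explicit conditions times
# 2 outcomes of the random clause = 8 distinct cases. Statement coverage
# reports 100% after the very first one.
@pytest.mark.parametrize("laser_armed, safety_disengaged",
                         list(itertools.product([True, False], repeat=2)))
@pytest.mark.parametrize("rand_value, rand_hits", [(0.0000001, True), (0.5, False)])
def test_engage_laser_all_conditions(laser_armed, safety_disengaged,
                                     rand_value, rand_hits):
    with mock.patch("random.random", return_value=rand_value):
        expected = (laser_armed and safety_disengaged) or rand_hits
        assert should_engage_laser(laser_armed, safety_disengaged) == expected
```

And even that only pins down one random call; multiply by every other probabilistic or stateful input in the program and you can see how little a raw percentage tells you.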
I'm not saying that coverage metrics are useless. They're actually great. They focus on one of the key questions: how extensively is my software system tested? They help move a project from "we have some tests" to "we have tested thoroughly."
But while you're working on "combined scores," the reality is that your aggregate will typically measure "statement coverage" rather than "condition," "predicate," or "path" coverage. So whatever number it gives you, it's unlikely to be a true picture of how many of your program's potential states and state combinations are being tested. As you work on increasing your coverage percentage, consider also measuring your predicate coverage. It will give you a more realistic--and almost invariably, a more sobering--view of how extensive your testing really is.
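If your tool supports it, branch coverage is usually the cheapest step in that direction. For coverage.py (again, an assumption about your tooling), it's a single switch; full condition, predicate, or path coverage generally needs dedicated tools beyond a line-coverage reporter. A minimal sketch:

```python
# Minimal sketch assuming coverage.py; branch=True is the API equivalent
# of "branch = True" in the config file or --branch on the command line.
import coverage

def run_test_suite():
    # Placeholder: however you actually invoke your tests.
    pass

cov = coverage.Coverage(branch=True)
cov.start()
run_test_suite()
cov.stop()
cov.save()
# With branch measurement on, the text report gains Branch/BrPart columns
# showing decisions that were only ever taken one way.
cov.report(show_missing=True)
```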