
There are various metrics, such as "test case effectiveness", which is calculated as (total number of bugs found / total number of test cases executed).

While this produces some numbers early on, on a mature product the tests usually find no bugs. Then I get 0/100 = 0%, and it does not make sense to report 0% test case effectiveness.

How can I work with these metrics to actually get meaningful data?

gnat
John V
  • Perhaps replace "total number of test cases executed" with "total number of *new* test cases executed"? Define "new" as needed. – Dan Pichelman Oct 13 '16 at 15:33
  • What number were you expecting? The equation you provided is a "lower is better" number, so it's not going to return 100 percent unless all of your test cases fail. The number you're probably looking for is Tests Passed/Tests Executed. – Robert Harvey Oct 13 '16 at 15:42
  • 3
    Possible duplicate of [What is a good measure of testing/tester efficiency?](http://programmers.stackexchange.com/questions/186400/what-is-a-good-measure-of-testing-tester-efficiency) – gnat Oct 13 '16 at 15:56
  • see also: [Is it good that testers are competing to see who opens more bugs?](http://programmers.stackexchange.com/questions/285097/is-it-good-that-testers-are-competing-to-see-who-opens-more-bugs) – gnat Oct 13 '16 at 15:56

2 Answers


Turn it on its head. Change the formula to:

100% − (bugs found / tests run). Call it "build stability" or something like that.

Then you'll be able to tell the relevant stakeholders that this build was 100% stable with regard to the things you know you need to test. However, this breaks down if people find bugs no one has ever seen or that you don't have test cases for.
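A minimal sketch of the inverted metric, with hypothetical numbers:

```python
def build_stability(bugs_found, tests_run):
    """Percentage of executed test cases that did not surface a bug."""
    return 100.0 - 100.0 * bugs_found / tests_run

# A mature build with no bugs found now reports a meaningful 100%:
print(build_stability(0, 100))   # 100.0

# A build where 5 of 100 tests surfaced bugs:
print(build_stability(5, 100))   # 95.0
```

This is a "higher is better" number, so a clean run on a mature product reads as good news instead of 0%.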

Ian Jacobs
  • `However, this breaks down if people find bugs no one's ever seen/ you don't have test cases for.` By itself, yeah. But what if you coupled it with some kind of code coverage metric? If you had 100% stability and 10% branch coverage, that's different than 95% stability and 80% branch coverage. Sometimes, you can't use a metric in isolation for reasons like this, but you can combine two or more things into a useful understanding. – Thomas Owens Oct 13 '16 at 15:53

First off, congratulations on working hard enough on quality that this is a problem.

The most effective way of dealing with this problem, if you're lucky enough to have it, is to seed the code under test with known bugs.

The testers and anyone coaching them shouldn't know this is happening, or how many bugs were introduced when it happened. Typically the number is randomized in some way. This lets you test your testers: you get an objective X-out-of-Y score.

Not something to do every time, but it keeps the testers awake and lets them show off their skills even while the programmers are doing their darnedest to put them out of a job.
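A minimal sketch of the X-out-of-Y scoring, using hypothetical bug IDs:

```python
def seeding_score(seeded_bugs, reported_bugs):
    """Fraction of deliberately seeded bugs that the testers actually caught."""
    found = set(seeded_bugs) & set(reported_bugs)
    return len(found) / len(seeded_bugs)

# Four bugs were seeded; testers reported three of them, plus an
# unrelated genuine find ("OTHER") that doesn't count against them:
print(seeding_score({"B1", "B2", "B3", "B4"}, {"B1", "B2", "B4", "OTHER"}))   # 0.75
```

Unlike bugs-per-test, the denominator here is under your control, so the score stays meaningful even when the product itself is nearly bug-free.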


candied_orange