
When writing a unit test for a scenario believed to be already covered (i.e., the first run of the test would be green), what is a good guideline to ensure that it is in fact testing the proper test case, and not a test that 'always' passes, or passes for reasons that have nothing to do with the scenario being considered?

I believe the best practice is to change the class under test so that the test fails first, and then remove the intentional bug to see if it turns green; however, that explanation is too vague. At an extreme, I could simply make my constructor throw an exception to get a red test, then remove the throw to make it green. I know the breaking change should be something 'close' to the scenario under test, but I am not sure how to word that as a proper guideline that my juniors could follow.
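To make this concrete, here is a rough sketch of what I mean (the class, names, and gtest usage are invented purely for illustration): a *targeted* break mutates the exact boundary the test names, whereas the constructor-throw extreme proves nothing about that boundary.

```cpp
#include <gtest/gtest.h>

// Hypothetical class under test: applies a 10% discount only above 100.
struct PriceCalculator {
    double Total(double amount) const {
        return amount > 100.0 ? amount * 0.9 : amount;
    }
};

// The scenario: exactly 100 gets no discount.
// A targeted break would be flipping '>' to '>=' in Total(); this test
// should then go red. Throwing from a constructor would also turn it red,
// but says nothing about whether the boundary is actually checked.
TEST(PriceCalculatorTest, NoDiscountAtExactlyOneHundred) {
    PriceCalculator calc;
    EXPECT_DOUBLE_EQ(100.0, calc.Total(100.0));
}
// (link against gtest_main, or add a main() as in the second answer)
```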

Kindread
  • The forced-fail approach is the way to go. It's totally legit and even "officially" endorsed in TDD's [Red-Green-Refactor](http://programmers.stackexchange.com/a/175439/31260): "What if I can't make my unit test fail? Then don't write it." – gnat Mar 05 '15 at 20:45

2 Answers


> what is a good guideline to ensure that it is in fact testing the proper test case

This falls back to the old "who tests the tests?" line of thinking.

And in general, nobody can test the tests. Unit tests have proven themselves so successful in part because they are small enough that they can be implemented without error. A good guideline is that another developer can look at the test and say "yes, that appears to test what it says it does".

That is enough to catch the majority of issues. The rest is part of a good layered approach to software quality. Even if the test is wrong, QA or user acceptance testing can help catch things that escape the test. That reduces the risk of a bug escaping into production to an acceptable level.

Telastyn
  • Something as simple, and possibly hard to spot, as a missing = in the unit test might cause the test to always pass though (see the sketch after these comments). I feel this could be misleading for the next person to change the tested class. They might feel their change introduces a risk or a new boundary condition, but see there is already a unit test that covers it. I feel it's worth it for the original writer to check the test is valid, if the breaking change they make is simple and quick to do. – Kindread Mar 05 '15 at 21:05
  • @Kindread - if you don't trust your developers to write the simple, straight-forward code correctly, why are they still employed? The odds of screwing up both the code and the unit test are slim. The cost of doing this test testing isn't going to be worth it. – Telastyn Mar 05 '15 at 21:32
  • @Kindread I'm not sure why this bothers you so much. Even if none of the tests have bugs the code still isn't guaranteed to be correct just because they all pass, so in some sense having any number of unit tests is misleading to the next person to change the tested class. It's probably faster to just slow down for a second and read the test a second time than to start messing with the code just to see it turn red. – Doval Mar 06 '15 at 12:53
  • @Doval I didn't mean to imply that it's the end of the world if a unit test is bugged. I simply believe it is worth the effort to check the unit test in a way that minimises human error when first writing it. There is a sizeable difference between expecting that if a class has 100% bug-free unit tests, the class is bug-free, and expecting that if a particular unit test, naming a particular scenario, passes, then that scenario is working. – Kindread Mar 09 '15 at 10:37
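A minimal illustration of the "missing =" pitfall mentioned in the comments above (hypothetical names, Google Test assumed):

```cpp
#include <gtest/gtest.h>

// '=' instead of '==': the assignment evaluates to 5, which is truthy,
// so this assertion can never fail -- the test is green no matter what
// the class under test does.
TEST(TypoTest, AlwaysPasses) {
    int actual = 3;
    int expected = 5;
    EXPECT_TRUE(actual = expected);  // meant: actual == expected
}
// (build with -lgtest -lgtest_main; no explicit main() needed)
```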

Even though it is right that you should not start testing your tests, the ability of a test to fail is crucial. A test which cannot fail is not only useless; it also indicates that you ...

  • made logical mistakes
  • just wanted to do some quick testing without thinking it through or
  • are testing the language (e.g. unsigned >= 0) or general logic (true || false)

If you already have the code implemented, please don't make the test fail by crashing the program. The following test will always fail, even though its assertion is a tautology.

#include <gtest/gtest.h>

TEST(test, testcase) {
    // The uncaught exception aborts the test body (Google Test reports it
    // as a failure), so the tautological assertion below never runs.
    throw 1;
    EXPECT_TRUE(true);
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

(Maybe there are test frameworks in which this test would still pass, but there is at least one in which forcing a failure this way only gives you a false sense of security.)

Instead, just write a quick "empty implementation" which has the same interface and simply does nothing. Then, once all the tests have failed against it, swap the dummy out for the real implementation.
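A minimal sketch of that workflow, with a hypothetical interface and names invented for illustration:

```cpp
#include <gtest/gtest.h>

// Hypothetical interface under test.
class Stack {
public:
    virtual ~Stack() = default;
    virtual void Push(int value) = 0;
    virtual int Pop() = 0;
};

// Step 1: a dummy that satisfies the interface but does nothing useful.
// Running the test suite against it should turn every test red.
class DummyStack : public Stack {
public:
    void Push(int) override {}
    int Pop() override { return 0; }
};

// Step 2: once every test has failed against the dummy, swap in the real
// implementation; the tests should then go green for the right reason.
TEST(StackTest, PopReturnsLastPushedValue) {
    DummyStack stack;  // replace with the real implementation later
    stack.Push(42);
    EXPECT_EQ(42, stack.Pop());  // red with DummyStack, green with the real one
}
```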

Otomo