38

If my code contains a known defect which should be fixed, but isn't yet, and won't be fixed for the current release, and might not be fixed in the foreseeable future, should there be a failing unit test for that bug in the test suite? If I add the unit test, it will (obviously) fail, and getting used to having failing tests seems like a bad idea. On the other hand, if it is a known defect, and there is a known failing case, it seems odd to keep it out of the test suite, as it should at some point be fixed, and the test is already available.

Martijn
  • 1,016
  • 9
  • 14
  • possible duplicate of [Should developers be responsible for tests other than unit tests, if so which ones are the most common?](http://programmers.stackexchange.com/questions/179746/should-developers-be-responsible-for-tests-other-than-unit-tests-if-so-which-on) – gnat Jan 31 '14 at 13:40
  • 6
    I don't think so gnat, I specifically ask about unit test – Martijn Jan 31 '14 at 13:57
  • 3
    tests for known defects are known as [regression tests](http://en.wikipedia.org/wiki/Regression_testing), these have nothing to do with unit tests... to be precise, the latter depends on developer opinion - your question is maybe not a duplicate after all, but rather a poll for opinions. It is especially prominent that the [answer you accepted](http://programmers.stackexchange.com/a/226231/31260) doesn't use term "unit tests" at all, but instead fairly reasonably calls these differently "known failing tests" – gnat Jan 31 '14 at 14:25
  • That's not how I - or the article you link to - define regression tests. We might be misunderstanding each other. As I understand you, there is no such entity as a failing unit test, but that can't be what you mean, can it? – Martijn Jan 31 '14 at 15:06
  • in my _opinion_, unit tests should be kept strictly separate from regression tests to start with (as these serve very different purposes, in my _opinion_). Also, in my _opinion_, a failing unit test should break the project build - and, since in my _opinion_ the build should be kept from breaking first of all, keeping tests failing doesn't make sense. Anyone else is entitled to differing _opinions_, I don't mind - except that opinion polls are off-topic at Stack Exchange sites – gnat Jan 31 '14 at 15:14
  • 1
    See also on SO: [Mark unit test as an expected failure in JUnit](http://stackoverflow.com/questions/4055022/mark-unit-test-as-an-expected-failure-in-junit) and [Mark unit test as an expected failure in JUnit4](http://stackoverflow.com/questions/5889966/mark-unit-test-as-an-expected-failure-in-junit4) –  Jan 31 '14 at 15:14
  • 3
    Thanks Michael, that does help on *how* to mark such tests in JUnit, but not really on the testing practice. gnat, I still don't understand how you see a failing unit test as a regression test. I'm also getting a distinctly hostile, passive-aggressive vibe from your comments. If you think I should ask questions differently, please say so, because I can't address your concerns if you phrase them like this. – Martijn Jan 31 '14 at 15:35
  • 3
    @gnat: honestly, IMHO it does not matter if we call the tests here "unit" or "regression" tests - the question you linked to has a different focus, and the answers there don't apply here. – Doc Brown Jan 31 '14 at 17:34
  • @DocBrown "Whether that means you are personally responsible for functional and regression tests is mostly a function of how your company is organized. ...programmers I know don't ask themselves "is it my responsibility to write tests of type X?". Instead, they ask themselves "what must I do to make sure my code is properly tested?". The answer might be to write unit tests, or to add tests to the regression, or it might mean to talk to a QA professional and help them understand what tests need to be written..." ([top answer](http://programmers.stackexchange.com/a/179747)) – gnat Jan 31 '14 at 19:24
  • 4
    Related: [Should I intentionally break the build when a bug is found in production?](http://programmers.stackexchange.com/questions/131006/should-i-intentionally-break-the-build-when-a-bug-is-found-in-production/131009) – sleske Jan 31 '14 at 20:42

7 Answers

50

The answer is yes, you should write them and you should run them.

Your testing framework needs a category of "known failing tests" and you should mark these tests as falling into that category. How you do that depends on the framework.

Curiously, a failing test that suddenly passes can be just as interesting as a passing test that unexpectedly fails.
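
For example, in JUnit 4 (which, as the comments on the question note, has no built-in expected-failure marker) one way to sketch a "known failing" test is to invert it, so that it passes while the bug is present and fails loudly the moment the bug is fixed. The Calculator class and the BUG-12345 ticket below are hypothetical stand-ins:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.Test;

public class KnownFailingTest {

  // Hypothetical class under test; plus() carries the known defect BUG-12345.
  static class Calculator {
    int plus(int a, int b) {
      return a; // BUG-12345: the second operand is ignored
    }
  }

  @Test
  public void plusStillBroken_BUG12345() {
    try {
      // The assertion we want to hold once BUG-12345 is fixed.
      assertEquals(4, new Calculator().plus(2, 2));
    } catch (AssertionError stillBroken) {
      return; // Known failure: the bug is still present, so the suite stays green.
    }
    // The interesting case described above: a failing test that suddenly passes.
    // Fail loudly so someone promotes this into a regular test.
    fail("BUG-12345 appears to be fixed; remove this guard and keep the plain assertion");
  }
}

Frameworks with a native marker report that transition for you; Python's unittest, for instance, flags an @expectedFailure test that passes as an "unexpected success".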

david.pfx
  • 8,105
  • 2
  • 21
  • 44
  • 7
    An example of the above feature in Python unittest framework: http://docs.python.org/3.3/library/unittest.html#unittest.expectedFailure – Jace Browning Jan 31 '14 at 13:52
5

I think you should have a unit test asserting the current (wrong) behaviour, with the correct assertion and correct behaviour added in comments. Example:

@Test
public void test() {
  // this is wrong; it should be fixed some time
  Assert.assertEquals(2, new Calculator().plus(2, 2));
  // this is the expected behaviour; replace the assertion above when the fix is available
  // Assert.assertEquals(4, new Calculator().plus(2, 2));
}

This way, when the fix is made, the build will fail, alerting you to the failing test. When you look at the test, you will see that the behaviour has changed and that the test must be updated.

EDIT: As Captain Man said, in large projects this will not get fixed any time soon, but for documentation's sake the original approach is better than nothing.

A better way is to duplicate the current test, make the clone assert the right thing, and @Ignore the clone with a message, e.g.:

@Test
public void test() {
  Assert.assertEquals(2, new Calculator().plus(2, 2));
}

@Ignore("fix me, Calculator is giving the wrong result, see ticket BUG-12345 and delete #test() when fixed")
@Test
public void fixMe() {
  Assert.assertEquals(4, new Calculator().plus(2, 2));
}

This relies on a team convention of keeping the number of @Ignored tests down, the same way you would with introducing or changing a test to reflect the bug - except that it doesn't fail the build, which matters when, as the OP said, the bugfix won't be included in the current release.

Silviu Burcea
  • 618
  • 4
  • 13
  • 1
    This is bad advice. No one will ever attempt to fix it. People are only going to open up old unit tests if there are compilation issues or test failures. – Captain Man May 22 '18 at 14:50
  • @CaptainMan I agree, I have updated my answer to provide a better way of the dev team being aware of a bug without failing the build. Your downvote was justified for the original answer I posted 3 years ago, I believe the current answer is more appropriate. Would you do it another way? – Silviu Burcea May 23 '18 at 14:23
  • This is nearly exactly what I do on the rare occasions I can’t fix the bug now for some reason. I’d love to hear how you handle the situation @CaptainMan – RubberDuck May 24 '18 at 09:55
  • @RubberDuck There isn't really any ideal situation here (other than fixing the bug now haha). To me, at least seeing "10 passed, 0 failed, 1 skipped" in the test results is some indication to people not familiar with it that something is fishy. I prefer the `@Ignore` approach. The reason using just a comment doesn't seem like a good idea to me is that I don't think people will often open unit tests to check them (unless they are failing, or (hopefully) when they wonder why something is being skipped). – Captain Man May 24 '18 at 15:25
3

Depending on the testing tool, you may be able to use an omit or pend function.

Example in ruby:

gem 'test-unit', '>= 2.1.1'
require 'test/unit'

MYVERSION = '0.9.0' # version of the class under test

class Test_omit < Test::Unit::TestCase
  def test_omit
    omit('The following assertion fails - it will be corrected in the next release')
    assert_equal(1,2)
  end

  def test_omit_if
    omit_if(MYVERSION < '1.0.0', "Test skipped for version #{MYVERSION}")
    assert_equal(1,2)
  end

end

The omit command unconditionally skips a test; omit_if combines the skip with a condition. In my example I check the version number and execute the assertions only for versions in which I expect the error to be solved.

The output of my example is:

Loaded suite test
Started
O
===============================================================================
The following assertion fails - it will be corrected in the next release [test_omit(Test_omit)]
test.rb:10:in `test_omit'
===============================================================================
O
===============================================================================
Test skipped for version 0.9.0 [test_omit_if(Test_omit)]
test.rb:15:in `test_omit_if'
===============================================================================


Finished in 0.0 seconds.

2 tests, 0 assertions, 0 failures, 0 errors, 0 pendings, 2 omissions, 0 notifications
0% passed

So my answer: yes, implement the test. But don't confuse testers with failures you already know about.

knut
  • 1,398
  • 1
  • 10
  • 16
2

If the bug is fresh in your mind and you have the time, I would write the unit test now and flag it as a known failure so it doesn't break the build. Your bug tracker should be updated to note that a unit test already exists for this bug, so that the person eventually assigned to fix it doesn't write it all over again. This supposes that the buggy code doesn't need a lot of refactoring and that the API won't change significantly; if it will, you might be better off postponing the unit test until you have a better idea of how it should be written.
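
As a sketch of what "flag it as a known failure" could look like in JUnit 4 (reusing the hypothetical Calculator and BUG-12345 ticket from the answers above), an assumption guard makes the runner report the test as skipped rather than failed, so it never breaks the build but still shows up in every test run:

import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class PendingBugTest {

  // Flip to true when picking up BUG-12345; until then the test is reported as skipped.
  private static final boolean BUG_12345_FIXED = false;

  // Hypothetical class under test, still carrying the defect.
  static class Calculator {
    int plus(int a, int b) {
      return a; // BUG-12345: the second operand is ignored
    }
  }

  @Test
  public void plusAddsBothOperands() {
    assumeTrue("Skipped until BUG-12345 is fixed; see the bug tracker", BUG_12345_FIXED);
    assertEquals(4, new Calculator().plus(2, 2));
  }
}

The assumption message is a natural place to carry the bug-tracker reference mentioned above, so the ticket and the test stay linked.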

kaared
  • 114
  • 4
1

The answer is no, IMHO. You should not add a unit test for the bug until you start working on the fix. At that point you write the test(s) that prove the bug, and once they fail in accordance with the bug report(s), you correct the actual code to make them pass. The bug is then solved, and it stays covered from that point on.

In my world we would have a manual test case that the QEs keep failing until the bug gets fixed, and we as developers would be aware of it via the failing manual test case and via the bug tracker.

The reason for not adding failing unit tests is simple. Unit tests give direct feedback on, and validation of, what I as a developer am currently working on, and they are used in the CI system to make sure I didn't unintentionally break something in some other area of that module's code. Having unit tests fail intentionally for a known bug would, IMHO, be counterproductive and just plain wrong.

grenangen
  • 21
  • 3
0

I suppose the answer really is, it depends. Be pragmatic about it. What does writing it now gain you? Maybe it is fresh in your mind?

When fixing the bug, it makes perfect sense to prove it exists by writing a unit test that exposes the bug. You then fix the bug, and the unit test should pass.

Do you have time to write the failing unit test right now? Are there more pressing features or bugs that need to be written or fixed?

Assuming you have competent bug tracking software with the bug logged in it, there is really no need to write the failing unit test right now.

Arguably, you might introduce some confusion by adding a failing unit test before a release that will ship without the bug fix.

ozz
  • 8,322
  • 2
  • 29
  • 62
0

I usually feel uneasy about having known failures in test suites, because it's too easy for the list to grow over time, or for unrelated failures in the same tests to be dismissed as "expected". The same goes for intermittent failures - there could be something evil lurking in the code. I'd vote for writing a test for the code as it behaves now, plus a test for how it should behave once fixed, commented out or disabled somehow.

Rory Hunter
  • 1,737
  • 9
  • 15