4

Let's assume you are convinced that the extra time spent unit testing has merit and improves production. Does that still hold up when not everyone working on the same code uses the tests? This question makes me wonder whether fixing tests that not everyone uses is a waste of time. If you correct a test so the new code will pass, you're assuming the new code is correct. The person updating the test had better have a firm understanding of the reasoning behind the code change and decide whether the test or the new code needs to be fixed. This much inconsistency in a team when it comes to testing is probably an indication of other problems as well.

There is a certain amount of risk that someone else on the team will alter code that is covered by tests. Is this the point where testing becomes counter-productive?

JeffO

8 Answers

16

One's tests must be trusted to be effective. A man with two watches does not know what time it is. That is: bad, inaccurate, unmaintained tests ruin the good stuff. Further, benign neglect is a death spiral that ends with the test suite as a whole being discredited and abandoned.

A test may be bad because it is out of date or because it lies. It does not matter which; the damage to trust is the same.

When it comes to tests, maintained accuracy and quality over quantity are key.
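As a rough illustration of the difference between a test that lies and one that can be trusted, here is a minimal NUnit-style C# sketch; the `Invoice` class is hypothetical, invented for this example:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical class under test, invented for this example.
public class Invoice
{
    private readonly List<decimal> _lines = new List<decimal>();
    public void AddLine(decimal amount) => _lines.Add(amount);
    public decimal Total => _lines.Sum();
}

[TestFixture]
public class InvoiceTests
{
    // A "lying" test: it passes no matter what the code does,
    // quietly eroding trust in the whole suite.
    [Test]
    public void Total_IsCorrect_Forced()
    {
        Assert.IsTrue(true);
    }

    // A trustworthy test: it fails the moment the behavior changes.
    [Test]
    public void Total_Sums_Line_Items()
    {
        var invoice = new Invoice();
        invoice.AddLine(10.00m);
        invoice.AddLine(5.50m);
        Assert.AreEqual(15.50m, invoice.Total);
    }
}
```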

radarbob
  • Surely this is a case of *malign* neglect rather than benign neglect :) - or more likely ignorance / communication failure – MarkJ Apr 12 '12 at 16:30
  • I would add that one form of *malign* neglect is, certainly, simply forcing a test to pass, period. It's bad enough when it's a logic error, but when it's a "F"orce yo"U": `Assert.IsTrue(true)`, then one must question management's commitment to quality, customers, and coding professionalism. – radarbob Apr 12 '12 at 22:35
  • What if the two watches have the same time? – Thomas Eding Apr 18 '12 at 20:44
  • "What if the two watches have the same time?" Well, then you have too many watches. i.e. you're wasting effort and time on superfluous tests. – radarbob May 23 '15 at 05:28
3

Unit testing is an all-or-nothing decision for a team. By that I mean everyone has to be on board with unit testing for it to work. If there is no team-wide buy-in for unit testing, you are just creating more code to maintain that is potentially broken and out of sync, wasting everyone's time and effort.

Ryathal
  • +1. It's a waste of time to create *any* shared work product unless the team is on board. This applies to tests, production code, documentation, ... – MarkJ Apr 12 '12 at 16:28
  • @MarkJ - There's a lot of time wasting going on. I wish more teams would confront these issues from the start. – JeffO Apr 12 '12 at 17:57
  • I think the solution to the problem is going to be about improving communication and teamwork, not anything technical – MarkJ Apr 12 '12 at 19:34
2

This is not a problem with unit testing specifically. It is a general problem: a programming technique will not work for a team if some team members do not support it.

Separation of layers will fall apart if one team member decides to put some business logic in the presentation layer.

My security scheme for a database-centric application would fail if any team member wrote front-facing stored procedures that did not call my procedure that validates the session and checks permissions.

If a team member violates naming conventions, you cannot rely on names to be how you expect.

If your team wants to use unit testing, and a team member is ruining it, that member is a bad fit for your team.

JGWeissman
  • The same can also be said for the lone team member who wants to introduce unit tests while his colleagues can't be bothered and end up ruining the value of his tests by not maintaining them when modifying test-covered code. Yeah, I've been that solo pioneer before. This is where continuous integration can really help: if the build fails because the tests fail, the lone team member who isn't testing will be caught out if he modifies unit-tested code. His own code, on the other hand, will likely need some serious effort to fix. If you can't get a team member to commit, then I agree he's got to go. – S.Robins Apr 13 '12 at 00:50
1

Unit testing goes hand-in-hand with continuous integration. When someone changes code and checks it into your source control, your CI tool (e.g., Jenkins) should rebuild your project with the new code and run EVERY unit test. If any test fails, the build fails, and the CI tool should inform everyone of that fact by email.

When people get sick of getting the email, they may realize that if they just run the tests BEFORE they check in code, then life will be easier and everyone will be happy.
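As a minimal sketch of such a setup, here is a declarative Jenkinsfile; the `dotnet` build commands and the `team@example.com` address are placeholder assumptions, not taken from the question:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'dotnet build'
            }
        }
        stage('Test') {
            steps {
                // A single failing unit test fails the whole build.
                sh 'dotnet test'
            }
        }
    }
    post {
        failure {
            // Everyone hears about it until the tests pass again.
            mail to: 'team@example.com',
                 subject: "Build broken: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: 'A checked-in change failed the build or the unit tests.'
        }
    }
}
```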

Matthew Flynn
1

The person updating the test had better have a firm understanding of the reasoning behind the code change and decide whether the test or the new code needs to be fixed. This much inconsistency in a team when it comes to testing is probably an indication of other problems as well.

I think it's the other way around. When a test is good, its failure should guide one in deciding whether (and why and, ideally, how) to modify the code or the test.

In that sense, responsibility starts not with the person updating the test but with the person creating it.

The one who creates a test had better have a firm understanding of the reasoning behind the code. If that is not the case, the test is of pretty little value, and one had better stop worrying about changing it, or even remove it if it indeed stands in the way of development.

If a test produces nothing but incomprehensible cries of wolf at reasonable code changes, that is an issue in the test, and that is what needs to be fixed.


There is a certain amount of risk that someone else on the team will alter code that is covered by tests. Is this the point where testing becomes counter-productive?

Less productive - maybe. Counter-productive - hardly.

A time lag between code changes and test execution may feel unnatural to someone used to mature forms of unit testing, and it may indeed incur some loss of effectiveness. But in other forms of testing (functional, integration, QA) such a lag is quite common, and it is not considered a reason to drop testing.

To deal with test execution that lags this way, one probably has to step outside the box of "classical" unit testing approaches and look at how our tester colleagues do their work.

Now, imagine someone changed the code some time ago, and some time later you run a test and it fails. Think of how a tester would handle that:

  1. Study the test failure.
  2. If the test helps you find a bug in the code, great - open a ticket in the issue tracker to get it fixed.
    Note that later you can use these tickets to justify the benefits of running tests at build time - that is, if the tests are indeed that helpful.
  3. If the test does not help, modify it to match the code changes (are you afraid of that? If so, that indeed can become counter-productive).
  4. If you feel there is an issue in the test... guess what? Open a ticket in the issue tracker to get it fixed.
    Note that later you can use this data to explain why regression bugs slip through and to justify the need to invest effort in testing.

Here is where you may experience some loss in productivity, because you are acting as a tester with a tool that is intended to be used by a developer. This is not good, but it is not really the end of the world, don't you think?

gnat
  • Remember, there is a member who isn't using tests at all. This person changes code and doesn't run any tests. Eventually, the person who covered the code with a test will run the tests and they will fail. The test may no longer be valid. – JeffO Apr 12 '12 at 16:13
  • @JeffO my point is that good tests should remain valid whether they are run or not - self-contained, if you wish. Think about SQA / functional / integration / UAT tests - developers don't run these at build or commit; would that justify ignoring them? When a good test runs and fails, its failure should guide improvement, no matter whether it runs at build / commit time or later – gnat Apr 12 '12 at 16:26
  • but in this scenario, are the tests good? – JeffO Apr 12 '12 at 17:46
  • I'd say really good tests should be capable of "surviving" such a scenario. At least that's the way I try to design my own tests when there's no enforced policy for mandatory test execution and maintenance. – gnat Apr 12 '12 at 20:19
1

Everyone has to appreciate the benefit that unit tests bring for the tests to have any value whatsoever. I've compared unit tests to a new puppy before: they're great, but they require love and care; otherwise, you find them dead in the corner, crawling with ants, after a few weeks.

I've seen a few scenarios in my career thus far:

  1. Everyone writes good unit tests, the tests are part of a CI build, and code is written to the tests, not the other way around.
  2. I'm the only person writing unit tests, and the solution to a failing test is either to comment it out or just stick an `Assert.IsTrue(true);` in there
  3. People write tests to their code. By this, I mean someone writes a bunch of code, runs it with a set of inputs, gets a set of outputs, and blindly sticks the output into a test (sketched below). This is how you end up with a legitimate bug fix resulting in a slew of breaking tests, which tends to lead to scenario #2 ("We don't have time to fix these 30 tests, just comment them out!")
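To make scenario #3 concrete, here is a hypothetical C# sketch (the `ShippingCalculator` class and its rates are invented): the first test merely pins whatever the buggy code happened to return, while the second is written from the intended behavior and correctly fails until the bug is fixed.

```csharp
using NUnit.Framework;

// Hypothetical class under test. Intended spec: 5.00 base charge
// plus 1.25 per kilogram.
public static class ShippingCalculator
{
    public static decimal Calculate(decimal weightKg)
        => 5.00m + 1.05m * weightKg; // bug: the per-kg rate should be 1.25
}

[TestFixture]
public class ShippingCalculatorTests
{
    // Scenario #3: the expected value was copied from the code's actual
    // output, bug included. Fixing the bug will "break" this test.
    [Test]
    public void Calculate_Pins_Observed_Output()
    {
        Assert.AreEqual(7.10m, ShippingCalculator.Calculate(2.0m));
    }

    // A test derived from the specification instead:
    // 5.00 + 2 * 1.25 = 7.50. This one fails until the bug is fixed.
    [Test]
    public void Calculate_Charges_Base_Rate_Plus_Per_Kg()
    {
        Assert.AreEqual(7.50m, ShippingCalculator.Calculate(2.0m));
    }
}
```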

If you want to write unit tests, good. If no one else wants to, it's time to go find another team that does.

Daniel Mann
0

Ideally the whole team should be using tests and other good practices, but that isn't always realistic. If you don't have the power to tell the rest of the team what to use, having to get the entire team to agree can mean that improvements never happen.

Fortunately most of these improvements are worthwhile even if you are the only one using them. Normally tests are run on commit to answer the question "Did my code break anything?", but the same tests can be run on checkout to answer the question "Did anyone else break my code?", which is even more useful in some environments.

This approach lets you introduce testing one developer at a time, which is much easier than getting everyone to agree up front. Eventually everyone will be using the tests and you can start doing things like requiring tests to pass on the build server.
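One low-ceremony way to wire up the "run on checkout" half is a git hook. Here is a sketch of a `post-merge` hook, assuming a .NET test suite run with `dotnet test`; adjust the command to your own stack:

```sh
#!/bin/sh
# .git/hooks/post-merge - runs after every "git pull" that merges changes.
# Answers "Did anyone else break my code?" the moment you sync.
dotnet test || echo "WARNING: incoming changes broke the tests"
```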

Tom Clarkson
-1

EDIT: My views have completely changed since I was first introduced to the concept of code coverage. I don't like this metric anymore, and the advice below goes against my beliefs now. One should use the code coverage metric to learn about code that might need additional attention. A target of X% coverage should never be set, as it will often make developers write unit tests for the wrong reason (to get the coverage up instead of properly testing the code) and waste time that could be spent on something more important.

Thanks to the person who downvoted this post; I wouldn't have found it otherwise.


[Original answer] You could use code coverage as a way to require developers to work on the tests. Tests must be run before check-in, and if new code results in a regression of code coverage, it is simple to observe. Code coverage tools also produce nice reports on a per-module / per-class basis, so you will be able to see clearly whose code is not covered and where you need to spend more time to get better coverage.

Set up a goal for your team, say 60% code coverage, and try to maintain it, whether through automated scripts or via regular reviews of the statistics.
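As a sketch of how such a gate can be automated on a .NET project, assuming the test project references the `coverlet.msbuild` package (both the tool choice and the 60% figure are illustrative):

```sh
# Fails the test run if line coverage drops below 60%.
dotnet test /p:CollectCoverage=true /p:Threshold=60 /p:ThresholdType=line
```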

Paul