
In the company I work for, there is a requirement that all code should be covered by a test of some kind, because they want to have as few user-reported defects as possible.

With this in mind, I decided to set a policy that the code should be unit tested to C0 (100% coverage of each line, not of each condition).
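To clarify what C0 means in practice, here is a minimal, hypothetical Python sketch: the single test below reaches 100% line (C0) coverage, yet one side of the condition is never exercised.

```python
def shipping_cost(weight_kg, express):
    # Both branches of the conditional expression live on one line,
    # so executing either branch marks the line as covered.
    cost = 5.0 + weight_kg * (2.0 if express else 1.0)
    return round(cost, 2)

def test_shipping_cost():
    # C0 (line) coverage is 100% after this one test,
    # but the express=False branch is never tested.
    assert shipping_cost(2.0, express=True) == 9.0
```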

Now others say that this is more expensive than doing manual testing.

Intuitively I would say that this is wrong: each time you make a change you have to retest everything manually, which is probably more effort than just executing the tests and updating the ones that need it. But I can't find a way to justify my position with numbers, papers, or other information.

Can you give me some good points to prove my approach to the people I report to?

EDIT: Thanks to everyone who helped with the question. I see that some key points were missing from it:

  • This is a new product that we started developing one year ago with testing in mind, so we use DI everywhere and everything is prepared for testing.
  • We already have a commercial product (JustMock) that allows us to reach 100% coverage, as we can mock almost everything, including .NET classes, so we can truly isolate classes (see the sketch after this list).
  • We have tools to calculate test coverage.
  • We are not removing testers or manual testing. We are removing manual testing done by developers. We have a separate SQ team, but management wants the number of bugs that reach the SQ team to be as small as possible, so developers must reach 100% coverage by any means before delivering code to the SQ team. What I did was replace developer manual testing with automated testing (unit and integration tests), and that's what management wants to revert.
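For example, dependency injection plus mocking lets a unit be tested in complete isolation. This is a minimal sketch using Python's unittest.mock as a stand-in for JustMock, with hypothetical class names:

```python
from unittest import mock

# Hypothetical class under test; its file-system dependency is injected.
class ReportService:
    def __init__(self, file_reader):
        self.file_reader = file_reader  # DI: any object with read_lines()

    def line_count(self, path):
        return len(self.file_reader.read_lines(path))

def test_line_count_without_touching_the_real_file_system():
    reader = mock.Mock()
    reader.read_lines.return_value = ["a", "b", "c"]
    service = ReportService(reader)
    assert service.line_count("report.txt") == 3
    reader.read_lines.assert_called_once_with("report.txt")
```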

This question is not the same as the "duplicate" one, as I already have a 100% coverage requirement.

Ignacio Soler Garcia
    Did you find any resources when you searched for 'cost of unit testing'? Which ones were helpful? Have you read these questions on SO: [Is Unit Testing worth the effort?](http://stackoverflow.com/questions/67299/) and [If unit testing is so great, why aren't more companies doing it?](http://stackoverflow.com/questions/557764/) along with [When is unit testing inappropriate or unnecessary?](http://programmers.stackexchange.com/questions/147055/) and the associated linked questions. –  May 07 '14 at 17:17
    They may be objecting to the 100% requirement. There is quite a bit of diminishing returns the closer you get to 100%. Also, do you have test coverage reporting tools? Do you have test effectiveness or test power reports available? – BobDalgleish May 07 '14 at 17:18
  • This is also a good resource: http://www.drdobbs.com/testing/unit-testing-is-there-really-any-debate/240007176 ... the first line is key: "Continued resistance to unit tests says more about the organization than it does about the practice." You may have to define an ideal and then accept compromise, because what you're really dealing with is an adoption process. – sea-rob May 07 '14 at 17:22
    I think unit testing argument validation just so you get to 100% coverage is very little value for the effort. Doing so may be worthwhile when building a high quality library (like the BCL), but I wouldn't do so in an application. – CodesInChaos May 07 '14 at 17:23
  • Also note that using some process, like unit testing, doesn't guarantee success. It is all up to the developers to do it the "right" way. You will probably first have to somehow prove to management that the developers fully understand unit testing concepts. If the developers do not, then it will slow down the project and never pay off. – jordan May 07 '14 at 18:02
    @jordan that sounds a reasonable concern, see eg [Functional testing must be done by external party to avoid bias?](http://programmers.stackexchange.com/q/100637/31260) – gnat May 08 '14 at 08:08

3 Answers


Most industry specialists agree that unit testing helps and reduces defect rates; there have been studies based on multiple projects that show this. You can find concrete results of these studies in software engineering books; see, for example, chapter 20.3, "Relative Effectiveness of Quality Techniques," in Steve McConnell's Code Complete.

On the critical side, you're making assumptions that might not hold true in all cases:

  • It really depends on your application and its domain (I can't see how you could release a complex game, for example, without any manual testing).
  • It depends on whether you're at the beginning of the application's life and changing lots of things, or whether you already have a pretty stable application.
  • Maybe your application has a short lifespan, so investing much in automated testing is not worth it.
  • It depends on the workforce; maybe manual testers in your area are much cheaper than developers or testers who know how to code.

Having 100% unit test code coverage doesn't make sense (in most cases) because:

  1. It does not guarantee your application is bug-free; there are still integration issues, performance issues, security issues, etc. The most important benefit of unit tests, I believe, is the assurance that you can refactor code without breaking it; they are not a replacement for catching other types of issues.

  2. It's extremely difficult to reach. Unit tests show you that your separate units work, but your application has many interconnected units, and its correctness also depends on how those units are connected (see the sketch after this list).

  3. See a much better argument by Joel Spolsky.
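For instance (a hypothetical sketch, not taken from any real code base), two units can each have 100% coverage and green tests while their composition is still wrong:

```python
def distance_km(meters):            # unit A: metres -> kilometres
    return meters / 1000.0

def fuel_litres(distance_miles):    # unit B: expects miles, not km
    return distance_miles * 0.3

def test_distance_km():             # passes; 100% coverage of unit A
    assert distance_km(5000) == 5.0

def test_fuel_litres():             # passes; 100% coverage of unit B
    assert fuel_litres(10) == 3.0

# Integration bug: feeding kilometres into a function that expects
# miles. Every line is covered and every unit test is green, yet
# only a test exercising the two units together would catch it.
fuel = fuel_litres(distance_km(160_934))  # ~48 litres instead of ~30
```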

To sum up: it's a very good idea to add unit tests if you don't have any, but it's a bad idea to fully replace manual testing with 100% line coverage in unit tests.

Random42

Do some back-of-the-envelope math to figure out where the break-even point is.

If it takes x hours for a manual tester who gets paid $y/hr to run through a manual test, and the test has to be run z times per year, the cost of the manual test is x * y * z per year. If it takes n hours for someone making $m/hr to automate that test, that's a one-time cost of n * m. Then figure out how long it takes for the one-time investment in building the automated test (n * m) to outweigh the annual cost (x * y * z) of the manual test.
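As a rough illustration (all the rates and durations below are made-up figures), the break-even point falls out of a few lines of arithmetic:

```python
# Back-of-the-envelope break-even estimate; all inputs are hypothetical.
def breakeven_runs(manual_hours, manual_rate, automation_hours, automation_rate):
    """Number of manual runs after which automating the test pays off."""
    manual_cost_per_run = manual_hours * manual_rate      # x * y
    automation_cost = automation_hours * automation_rate  # n * m
    return automation_cost / manual_cost_per_run

# Example: a 2-hour manual test at $30/hr, automated in 8 hours at $60/hr.
runs = breakeven_runs(manual_hours=2, manual_rate=30,
                      automation_hours=8, automation_rate=60)
print(f"Automation pays for itself after {runs:.0f} manual runs")  # -> 8
```

With those example numbers, if the test runs monthly (z = 12), the automated version pays for itself within the first year.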

Most of the time, building the automated test is a no-brainer. It's a bit more expensive as a one-time cost, but it saves the recurring cost of running the manual test. And that's before you account for the fact that manual tests are generally less reliable, since humans are fallible and will occasionally leave out steps. Occasionally, though, you'll find tests that you only want to run once in a while, that are hard to automate, and that are easier to just have a human run manually before major releases.

Justin Cave
    All true, but all hypothetical. See http://programmers.stackexchange.com/a/186691 – Robert Harvey May 07 '14 at 17:20
  • @RobertHarvey - In the same way that all predictions about the future are hypothetical. Unless you're prepared to throw out all approaches that involve predictions about the future (no more schedules, no estimates of difficulty, etc.), you're going to be making tons of decisions based on guesses about the future. Any model you come up with is going to fail to mimic reality in some way; that's the point of the model. The hope is that the model helps you get the first-order effects right, but even then models are imperfect. – Justin Cave May 07 '14 at 17:31
    _"one-time cost of n*m"_ is over-simplified, even for back of the envelope math. It happened often in my experience (I'd say way too often) that automated tests required maintenance efforts, this would better be taken into account – gnat May 08 '14 at 00:22
  • I second gnat's comment. In actuality, writing unit tests the first time tends to be fairly easy (90% of the time) and adds minimal time to development. It's the keeping those tests working as the code and requirements change that gets to be quite onerous. – Dunk May 08 '14 at 18:13
  • @gnat - Agreed, the model is a simplification. You can certainly extend it to account for maintenance efforts. I wouldn't expect the amount of maintenance on an automated test to be substantially different than the amount of maintenance on a manual test, though. I agree that's a substantial portion of the cost of the test and would need to be factored in to determining whether it's worth adding (or keeping) the test in the future. If keeping the automated test updated is substantially more work than keeping the manual test updated, by all means add that to the model. – Justin Cave May 08 '14 at 18:34

Whilst there is merit in having unit tests, most people take it too far. Firstly, trying for 100% unit test coverage is trying to turn your company into this. You need a more pragmatic approach, especially as unit testing does not guarantee your application is bug-free; you still need plenty of integration testing to prove that all the isolated bits work together.

Manual testing is best replaced with automated integration test tools. There are plenty around (e.g. Cucumber or SpecFlow) that should be much more easily recognised as a replacement for manual testing, without going into the maintenance nightmare of 100% unit test coverage.
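As a rough sketch of the idea (written in plain Python in a pytest style rather than Cucumber/SpecFlow's Given/When/Then syntax, with hypothetical classes), an integration test drives several units together the way a manual tester would:

```python
# Hypothetical mini-domain: two units wired together and tested as one.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, price, qty):
        self.items.append((price, qty))

    def total(self):
        return sum(price * qty for price, qty in self.items)

class Checkout:
    def __init__(self, cart):
        self.cart = cart

    def confirm(self):
        if not self.cart.items:          # cross-unit business rule
            raise ValueError("empty cart")
        return {"status": "confirmed", "total": self.cart.total()}

def test_cart_and_checkout_work_together():
    # Exercises Cart and Checkout through the same path a user would take,
    # catching wiring mistakes that isolated unit tests cannot see.
    cart = Cart()
    cart.add(price=10.0, qty=2)
    order = Checkout(cart).confirm()
    assert order == {"status": "confirmed", "total": 20.0}
```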

Unit testing is often abused; 100% unit test coverage is definitely a bad smell.

gbjbaanb
  • As I said, I already have the 100% coverage requirement, and this is a rule across the whole company. Anyway, if you don't set a 100% coverage target, what do you allow developers not to test? I think they will focus on the simple tests and leave the complex parts untested, when the interesting thing is to test exactly those parts. – Ignacio Soler Garcia May 08 '14 at 08:14
  • Usually developers will test the complex parts and not the simple bits. Who wants to write unit tests for getters, like the WTF link? That's a huge load of make-work for no benefit. Devs don't like that. We write tests to prove the awkward stuff works; even if you don't do unit testing, we still write those tests. 100% coverage sounds like a rule from ignorant management rather than a way to actually obtain real quality. I can write unit tests that pass, yet still have a broken app. – gbjbaanb May 08 '14 at 09:48
  • We write tests at the end of development (we don't do TDD), so when developers are in a rush at the end of the sprint to deliver what they promised, they need to finish things fast. I'm pretty sure they will go for the easy bits just to get the coverage up enough to meet the rule (whatever the new rule is, if there is one). – Ignacio Soler Garcia May 08 '14 at 10:43
  • Quite probably - but that's the problem with such rules, you get work done just to fulfil them. This rule is the cause of the problem, and you're trying to treat the symptoms. – gbjbaanb May 08 '14 at 11:15
  • I understand, but then what do you do to make sure that the complex parts of the application are tested? As I said, we are in a company that is really worried about delivering defects to the end user. – Ignacio Soler Garcia May 08 '14 at 13:01