At the company I work for there is a requirement that all code must be covered by a test of some kind, because they want to have as few user-reported defects as possible.
With this in mind I decided to set a policy that the code should be unit tested to C0 (100% coverage of each line, not of each condition).
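To make that distinction concrete, here is a minimal C# sketch (ShippingCalculator and its tests are invented for illustration, not our product code, and xUnit is used only for familiar test syntax): two tests execute every line of the method, so C0 is satisfied, yet the isPremium == false condition is never exercised.

```csharp
using Xunit;

// Hypothetical example: two tests reach 100% line (C0) coverage of
// Discount, but the isPremium == false condition is never evaluated,
// so condition coverage is incomplete.
public static class ShippingCalculator
{
    public static decimal Discount(bool isPremium, decimal orderTotal)
    {
        if (isPremium && orderTotal > 100m)
            return orderTotal * 0.1m;   // covered by the first test

        return 0m;                      // covered by the second test
    }
}

public class ShippingCalculatorTests
{
    [Fact]
    public void PremiumLargeOrder_GetsDiscount()
        => Assert.Equal(15m, ShippingCalculator.Discount(true, 150m));

    [Fact]
    public void PremiumSmallOrder_GetsNoDiscount()
        => Assert.Equal(0m, ShippingCalculator.Discount(true, 50m));
}
```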
Now others say that this is more expensive than doing manual testing.
Intuitively I would say that this is wrong, since each time you make a change you have to retest everything manually, and that is probably more effort than just running the tests and updating the ones that need to be updated. However, I can't find a way to back this up with numbers, papers or other information.
Can you give me some good points to prove my approach to the people I report to?
EDIT: Thanks to all who helped with the question. I see that some key points were missing from it:
- This is a new product that we started developing a year ago with testing in mind, so we use DI everywhere and everything is prepared for testing (see the sketch after this list).
- We already have a commercial product that allows us to reach 100% coverage, as we can mock almost everything, including .NET classes, so we can truly isolate classes (JustMock).
- We have tools to measure test coverage.
- We are not removing testers or manual testing; we are removing manual testing done by developers. We have a separate SQ team, but management wants the number of bugs that reach the SQ team to be as small as possible, so developers must reach 100% coverage by any means before delivering code to the SQ team. What I did was replace developer manual testing with automated testing (unit and integration tests), and that is what management wants to revert.
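As an example of what "prepared for testing" means, here is a minimal sketch (IExchangeRateProvider, PriceConverter and FixedRateProvider are invented names, not our real code, and a hand-written stub stands in for the mocking framework so the sketch does not depend on any particular mocking API): because the dependency is constructor-injected, a developer can cover the class with a fast unit test instead of checking it manually.

```csharp
using Xunit;

// Hypothetical code for illustration only; the structure
// (constructor injection of an interface) mirrors our product code.
public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

public class PriceConverter
{
    private readonly IExchangeRateProvider _rates;

    // The dependency is injected, so a test can substitute a stub or mock
    // instead of calling a live service.
    public PriceConverter(IExchangeRateProvider rates) => _rates = rates;

    public decimal Convert(decimal amount, string from, string to)
        => amount * _rates.GetRate(from, to);
}

// Hand-written stub that always returns a fixed rate.
public class FixedRateProvider : IExchangeRateProvider
{
    public decimal GetRate(string fromCurrency, string toCurrency) => 1.25m;
}

public class PriceConverterTests
{
    [Fact]
    public void Convert_MultipliesAmountByRate()
    {
        var converter = new PriceConverter(new FixedRateProvider());

        Assert.Equal(125m, converter.Convert(100m, "EUR", "USD"));
    }
}
```

In the real product we do the same substitution with JustMock, which also lets us mock .NET classes that a hand-written stub could not replace.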
The question is not the same as the "duplicated" one, as I already have a 100% coverage requirement.