How do I solve this riddle of contradictory "good practices" and properly cover my app with unit tests?
These are the principles I have found about writing unit tests:
- The testing pyramid says unit tests should come first, and there should be more unit tests than integration tests.
- It is useful to measure test coverage automatically, and the number should be high. At least 80%? Surely 5% is too low, right?
- Mocks in unit tests are a code smell, especially when there are many of them. If we need them, we should refactor the code to separate "glue code" from the actual logic, which belongs in "pure functions" (see the sketch after this list).
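To make the third point concrete, here is a minimal sketch of what I understand by that separation. It is Python with hypothetical names (`compute_discount`, `fetch_discounted_total`); the point is the shape of the code, not the domain:

```python
import requests

# Pure function: all the actual logic, trivially unit-testable.
def compute_discount(total: float, is_returning: bool) -> float:
    """Return the discounted total; no I/O, no hidden state."""
    rate = 0.10 if is_returning else 0.0
    return round(total * (1 - rate), 2)

# Glue code: cyclomatic complexity of one, nothing but wiring.
def fetch_discounted_total(user_id: str, api_base: str) -> float:
    user = requests.get(f"{api_base}/users/{user_id}").json()
    cart = requests.get(f"{api_base}/carts/{user_id}").json()
    return compute_discount(cart["total"], user["is_returning"])
```

The pure function needs exactly one short test; the glue function is the part I don't know how to cover.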
But I can't seem to make these principles work together. Complex applications make lots of library calls and have many dependencies, especially around communication, storage, and mundane tasks like auth. After refactoring, I am left with a ton of boring glue code with a cyclomatic complexity of one, i.e. no loops and no branches, and only a handful of pure functions with genuinely complex logic.
I wanted to cover my small exercise project with unit tests, but it turns out I can write only a handful of them, for the few pure functions with custom logic. Almost everything else is glue code. That doesn't mean the project is easy; the difficulty just lies elsewhere, in correctly understanding third-party interfaces, passing them the right configuration objects, and getting the architecture right.
My unit-test coverage comes out at maybe 5%. I don't think integration tests will contribute to that number, because how would the coverage tool know which code they exercise if the system is tested as a black box? And if I try to unit-test the glue code, the test turns out to be 95% mock configuration, and its structure just repeats the tested code in a stranger form, as in the sketch below.
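For example, this is roughly what a unit test for the glue function from my earlier sketch would look like (using `unittest.mock`; `myapp.glue` is a hypothetical module path):

```python
from unittest.mock import MagicMock, patch

from myapp.glue import fetch_discounted_total  # hypothetical module

@patch("myapp.glue.requests.get")
def test_fetch_discounted_total(mock_get):
    # 95% of the test is mock configuration...
    user_resp, cart_resp = MagicMock(), MagicMock()
    user_resp.json.return_value = {"is_returning": True}
    cart_resp.json.return_value = {"total": 100.0}
    mock_get.side_effect = [user_resp, cart_resp]

    # ...and the rest just restates the implementation: first the
    # user endpoint is called, then the cart endpoint.
    assert fetch_discounted_total("42", "http://api") == 90.0
```

If I merely reorder the two `requests.get` calls in the implementation, this test breaks even though the behaviour is identical, which is exactly why it feels like a restatement of the code rather than a test of it.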
I am now thinking about literally turning the pyramid upside down: mostly integration tests without isolation, plus only a handful of true unit tests where they actually work. For contrast with the mock-heavy test above, a no-mock sketch of what I mean is below, assuming a locally running test instance seeded with known fixture data (the URL and fixture values are made up):
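```python
import pytest

from myapp.glue import fetch_discounted_total  # hypothetical module

API_BASE = "http://localhost:8000"  # assumed local test instance

@pytest.mark.integration
def test_discounted_total_against_real_service():
    # No mocks: real HTTP calls hit a local service seeded with a
    # returning customer "42" whose cart totals 100.0.
    assert fetch_discounted_total("42", API_BASE) == 90.0
```

But such tests are not going to be fast or easy to execute. Maybe I am doing something wrong?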