8

A recent debate within my team made me wonder. The basic question is how much, and what, we should cover with functional/integration tests (sure, they are not the same thing, but the example is a dummy one where the distinction doesn't matter).

Let's say you have a "controller" class something like:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;

public class SomeController {
    @Autowired Validator val;
    @Autowired DataAccess da;
    @Autowired SomeTransformer tr;
    @Autowired Calculator calc;

    public boolean doCheck(Input input) {
        // Bail out early if the input is invalid.
        if (!val.validate(input)) {
            return false;
        }

        // Bail out if there is no data to work with.
        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false;
        }

        // Bail out if the data cannot be transformed into the business model.
        BusinessStuff businessStuff = tr.transform(stuffs);
        if (businessStuff == null) {
            return false;
        }

        // Happy path: delegate the actual check to the calculator.
        return calc.check(businessStuff);
    }
}

We need a lot of unit testing for sure (e.g., if validation fails, or there is no data in the DB, ...); that's not in question.
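To make that concrete, such unit tests might look roughly like this. This is only a sketch assuming JUnit 5 and Mockito; the test class and its contents are made up for this post, Input is mocked as a plain class, and @InjectMocks fills the @Autowired fields via reflection.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.Mockito.*;

import java.util.Collections;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class SomeControllerUnitTest {

    @Mock Validator val;
    @Mock DataAccess da;
    @Mock SomeTransformer tr;
    @Mock Calculator calc;

    // Mockito injects the mocks into the controller's @Autowired fields.
    @InjectMocks SomeController controller;

    @Test
    void returnsFalseWhenValidationFails() {
        Input input = mock(Input.class);
        when(val.validate(input)).thenReturn(false);

        assertFalse(controller.doCheck(input));
        // Nothing past validation should have been touched.
        verifyNoInteractions(da, tr, calc);
    }

    @Test
    void returnsFalseWhenNoDataIsFound() {
        Input input = mock(Input.class);
        when(val.validate(input)).thenReturn(true);
        when(da.loadStuffs(input)).thenReturn(Collections.emptyList());

        assertFalse(controller.doCheck(input));
        verifyNoInteractions(tr, calc);
    }
}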

Our main issue, and the thing we cannot agree on, is how much of this the integration tests should cover :-)

I'm on the side that we should aim for fewer integration tests (test pyramid). What I would cover here is only a single happy/unhappy path where the execution returns from the last line, just to see that when I put these pieces together, nothing blows up.
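Something along these lines is what I have in mind: a single test that wires the real beans together and only checks that the chain holds. This is a sketch; the Spring test setup (AppConfig) and the test-data helper are assumptions made for the example, not code we actually have.

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;

@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = AppConfig.class) // hypothetical Spring configuration
class SomeControllerIntegrationTest {

    @Autowired
    SomeController controller;

    @Test
    void happyPathDoesNotBlowUp() {
        // Hypothetical helper that prepares an input for which real data exists.
        Input input = TestData.validInputWithExistingStuff();

        assertTrue(controller.doCheck(input),
                "expected the full chain (validate -> load -> transform -> check) to succeed");
    }
}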

The problem is that it is not that easy to tell why the test resulted in false, and that makes some of the guys feel uneasy about it (e.g., if we only check the return value, it stays hidden that the test is green merely because someone changed the validation and it now returns false). Sure, we could cover all cases, but that would be heavy overkill IMHO.
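One compromise that came up: keep the single path, but also verify that the last collaborator was actually reached, so the test cannot stay green just because an earlier step started returning false. A rough sketch with a Mockito spy around the real calculator; CalculatorImpl, the wiring helper, and the test-data helper are hypothetical stand-ins for the real setup.

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class SomeControllerReachesCalculatorTest {

    @Test
    void happyPathActuallyReachesTheCalculator() {
        // Spy wraps the real implementation so the real logic still runs.
        Calculator calcSpy = spy(new CalculatorImpl());
        SomeController controller = TestWiring.controllerWithRealBeansBut(calcSpy);

        boolean result = controller.doCheck(TestData.validInputWithExistingStuff());

        // Fails loudly if validation/loading/transformation short-circuited.
        verify(calcSpy).check(any(BusinessStuff.class));
        assertTrue(result);
    }
}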

Does anyone have a good rule of thumb for this kind of issue? Or a recommendation? Reading? A talk? A blog post? Anything on the topic?

Thanks a lot in advance!

PS: Sorry for the ugly example, but it's quite hard to translate a specific piece of code into an example. Yes, one can argue about throwing exceptions/using a different return type/etc., but our hands are more or less tied because of external dependencies.

PS2: I moved this topic from SO here (original question, marked on hold)

rlegendi

2 Answers

8

Contrary to this answer, I find testing at different levels an important part of the software testing process. Unit, functional, integration, smoke, acceptance, and other kinds of tests test different things, and the more the merrier.

And if you manage to automate them, then they can be executed as a job on your continuous integration server (like Jenkins). Nobody needs to run them manually to see whether something broke, since everyone can see when tests fail.
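One common way to make that split, sketched here under the assumption of JUnit 5: tag the slower tests and let the CI job include that tag, while the default build runs only the fast ones (for example, Gradle's useJUnitPlatform { includeTags("integration") } or the Maven Failsafe plugin can do the filtering).

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tagged so the CI server can run it in a dedicated (slower) job,
// while the regular unit-test run excludes the "integration" tag.
@Tag("integration")
class SomeControllerIT {

    @Test
    void happyPathAgainstRealCollaborators() {
        // full wiring against real collaborators goes here
    }
}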

In my integration tests, I do not go into details - details and corner cases are for unit tests. What I test is just the main functionality, passing all correct parameters. That means that in your example, in the integration tests, I would not cover the false paths.

BЈовић
  • +1 Yep, if you *can* do integration tests in the build then so much the better, but this can be a significant undertaking in enterprise-level solutions. – Robbie Dee Mar 02 '17 at 17:48
7

There is a school of thought that seriously questions the value of integration tests.

I suspect you'll get a broad spectrum of answers here, but to my mind you should only use them when they deliver clear value.

Firstly, you should be writing as many unit tests as you can, because a) they're cheap and b) they may be the only tests that run on the build server (should you have one).

It is tempting to write client-side tests to verify the database or some other layer, but such checks should really happen at the appropriate level, if possible, rather than being contrived into an integration test. This can of course require some groundwork in the form of interfaces etc. to get mocking and the like working.
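To make the groundwork idea concrete, here is one sketch (the in-memory class is illustrative, not something from the question): put an interface in front of the layer you want to substitute, so higher-level tests can run against a cheap fake while only a handful of tests exercise the real implementation.

import java.util.ArrayList;
import java.util.List;

// The seam: client code depends on this interface only.
public interface DataAccess {
    List<Stuff> loadStuffs(Input input);
}

// Cheap fake for unit/functional tests; the real JDBC/JPA implementation
// is exercised only by the few tests that target the database layer itself.
class InMemoryDataAccess implements DataAccess {

    private final List<Stuff> stuffs = new ArrayList<>();

    void add(Stuff stuff) {
        stuffs.add(stuff);
    }

    @Override
    public List<Stuff> loadStuffs(Input input) {
        return new ArrayList<>(stuffs);
    }
}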

Consider also the scope of your testing. If you're simply writing an integration test to cover what already happens anyway when your suite is run, or duplicating a smoke test, then it has limited value. Also, think about whether more logging in pertinent places would be a better option than getting an obscure message from a dozen interconnected components and having to track it through.
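Applied to the controller from the question, that could be as simple as saying why each early return fired, so a surprising false from a test is no longer a mystery. A sketch assuming SLF4J; the logger is not part of the original code.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Inside SomeController:
private static final Logger log = LoggerFactory.getLogger(SomeController.class);

public boolean doCheck(Input input) {
    if (!val.validate(input)) {
        log.info("doCheck: validation failed for {}", input);
        return false;
    }

    List<Stuff> stuffs = da.loadStuffs(input);
    if (stuffs.isEmpty()) {
        log.info("doCheck: no stuff found for {}", input);
        return false;
    }

    BusinessStuff businessStuff = tr.transform(stuffs);
    if (businessStuff == null) {
        log.info("doCheck: transformation produced no business stuff for {}", input);
        return false;
    }

    return calc.check(businessStuff);
}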

So assuming you've decided to write some, you need to think about when they'll run. If they can catch some nasty issue taking place, then that is great, but that becomes pretty pointless if developers never remember to run them.

Finally, integration tests are an utterly unsuitable replacement for smoke testing and user testing. If you bother with them at all, they should form a small part of a well-designed testing process.

Robbie Dee
  • Just a comment: note that, conversely, there is a school of thought that [seriously questions the value of unit tests](http://david.heinemeierhansson.com/2014/tdd-is-dead-long-live-testing.html) (DHH claims to be questioning TDD, but if you read his post, he also questions unit tests). To be honest, I find things to agree and disagree with in both extremist schools of thought :) – Andres F. Feb 28 '17 at 16:28
  • A lot of the dogma around TDD is about not writing anything you don't need, which any seasoned developer knows through YAGNI anyway... – Robbie Dee Feb 28 '17 at 16:37
  • Your answer is fine (+1); the article you cited, however, has the problem that the author gives the impression that just because integration tests did not work for *his* case, they won't work for anyone else. In our team, we have been using integration tests for our product for more than 10 years, and in practice they have prevented us from deploying severe bugs many times, so I think it is actually possible to write integration tests which deliver high value. – Doc Brown Mar 02 '17 at 14:27
  • The author also writes in a comment "Integrated tests (not "integration tests"!) [. . .]" - I'm not sure about the difference but it seems to me there's a lot of nuance involved in whether an integration test is useful or good. – Carson Myers Mar 03 '17 at 10:01
  • In my experience, only integration tests truly deliver value. Unit tests, in particular those that use mocking, are more trouble than they are worth. In my current project, for example, we have 650 JUnit-based integration tests which run on every commit on a Jenkins server, with the full build taking less than 10 minutes. Of course, we also run tests in our IDEs while doing TDD. We don't see a need for an additional suite of unit tests - it would add a lot more effort for no apparent gain. Perhaps the reason some developers avoid integration tests is simply that they haven't really tried? – Rogério Mar 27 '18 at 21:45
  • @Rogério Read my 2nd paragraph again. Systems that need to be integration test bound are certainly out there, but they are a tiny minority. – Robbie Dee Mar 28 '18 at 08:51
  • I don't think so. Any system, application, or library will benefit more from integration tests than from unit tests. I also develop an OSS Java library, with an integration test suite of over 1000 tests. I have used integration tests in multiple commercial projects, including one in C#.NET. Unit tests tend to be preferred by inexperienced developers, though. – Rogério Mar 28 '18 at 14:24
  • @Rogério Sorry, but the evidence is firmly stacked against you on this one. But glad it is working for you. – Robbie Dee Mar 28 '18 at 14:28
  • *What* evidence, Robbie? I would be very interested in seeing any... Can you provide details of your own testing experience, or links? As for myself, [here is a link](https://github.com/jmockit/jmockit1/tree/master/samples/petclinic) showing a sample integration test suite which demonstrates the approach I use. – Rogério Mar 28 '18 at 21:35
  • @Rogério Open Google - type in "test pyramid". Pull some open-source projects. Compare vanilla tests with those marked "integration". I'll wait. – Robbie Dee Mar 28 '18 at 21:44