7

In my C# solution, I have a Tests project containing unit tests (xUnit) that can run on every build. So far so good.

I also want to add integration tests, which won't run on every build but can run on request. Where do you put those, and how do you separate unit tests from integration tests? If it's in the same solution with the same [Fact] attributes, it will run in the exact same way.

What's the preferred approach? A second separate test project for integration tests?

Etienne Charland
  • 361
  • 3
  • 10
  • There's no one correct way to do this. The best approach for you depends on various factors, such as how you run xUnit (cmd line or within VS), what you mean by "run on request", etc. You could use skippable facts and have them all in the one project. Or you could use a separate project. Just one thing to be careful of: "run on request" all too quickly can become "forget to run for ages", and suddenly you find you have a number of failing tests to deal with. And that all too often can lead to "never run as it's broken". – David Arno Feb 14 '19 at 12:10

6 Answers

6

The separation is not unit versus integration test. The separation is fast versus slow tests.

How you organize these tests to make them easier to run is really up to you. Separate folders are a good start, but annotations like xUnit's [Trait] attribute can work just as well.
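As a minimal sketch of the trait approach (the class and test names here are invented for illustration), you tag the slower tests with [Trait] and filter on that tag when running:

    using Xunit;

    public class OrderServiceTests
    {
        [Fact]
        public void Total_SumsLinePrices()
        {
            // Fast unit test: runs on every build.
            Assert.Equal(30, 10 + 20);
        }

        [Fact]
        [Trait("Category", "Integration")]
        public void Order_IsPersistedToDatabase()
        {
            // Slow test that would hit a real database. The trait lets it be
            // excluded from the default run, e.g.
            //   dotnet test --filter "Category!=Integration"
            // and run on request with
            //   dotnet test --filter "Category=Integration"
        }
    }

The trait name "Category" is only a convention; any name/value pair works as long as the filter you run with uses the same one.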


I think there is a fundamental misconception here about what constitutes an integration test and a unit test. The beginning of Flater's answer gives you the differences between the two (and yes, sadly, I'm going to quote an answer already on this question).

Flater said:

The difference between unit tests and integration tests is that they test different things. Very simply put:

  • Unit tests test if one thing does what it's supposed to do. ("Can Tommy throw a ball?" or "Can Timmy catch a ball?")
  • Integration tests test if two (or more) things can work together. ("Can Tommy throw a ball to Timmy?")

And some supporting literature from Martin Fowler:

Integration tests determine if independently developed units of software work correctly when they are connected to each other. The term has become blurred even by the diffuse standards of the software industry, so I've been wary of using it in my writing. In particular, many people assume integration tests are necessarily broad in scope, while they can be more effectively done with a narrower scope.

(emphasis, mine). Later on he elaborates on integration tests:

The point of integration testing, as the name suggests, is to test whether many separately developed modules work together as expected.

(emphasis, his)

With regard to the "narrower scope" of integration testing:

The problem is that we have (at least) two different notions of what constitutes an integration test.

narrow integration tests

  • exercise only that portion of the code in my service that talks to a separate service
  • use test doubles of those services, either in process or remote
  • thus consist of many narrowly scoped tests, often no larger in scope than a unit test (and usually run with the same test framework that's used for unit tests)

broad integration tests

  • require live versions of all services, requiring substantial test environment and network access
  • exercise code paths through all services, not just code responsible for interactions

(emphasis, mine)
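To make the "narrow" flavour concrete, here is a rough xUnit sketch; the WeatherClient class, the URL, and the JSON shape are invented for this example. Only the code that talks to the remote service is exercised, and the service itself is replaced by an in-process test double:

    using System.Net;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading;
    using System.Threading.Tasks;
    using Xunit;

    // Hypothetical class under test: its only job is to talk to a remote service.
    public class WeatherClient
    {
        private readonly HttpClient _http;
        public WeatherClient(HttpClient http) => _http = http;

        public async Task<int> GetTemperatureAsync(string city)
        {
            var json = await _http.GetStringAsync($"https://weather.example/{city}");
            using var doc = JsonDocument.Parse(json);
            return doc.RootElement.GetProperty("temperatureC").GetInt32();
        }
    }

    // In-process test double standing in for the remote service.
    public class FakeWeatherHandler : HttpMessageHandler
    {
        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
            => Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent("{\"temperatureC\":21}")
            });
    }

    public class WeatherClientNarrowIntegrationTests
    {
        [Fact]
        public async Task GetTemperature_ParsesServiceResponse()
        {
            var client = new WeatherClient(new HttpClient(new FakeWeatherHandler()));

            Assert.Equal(21, await client.GetTemperatureAsync("Oslo"));
        }
    }

Because no real network or live service is involved, a test like this is narrowly scoped and usually fast enough to run alongside the unit tests.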

Now we start getting to the root of the problem: an integration test can execute quickly or slowly.

If the integration tests execute quickly, then always run them whenever you run unit tests.

If the integration tests execute slowly because they need to interact with outside resources like the file system, databases, or web services, then they should be run during a continuous integration build and on demand by developers. For instance, right before a code review, run all of the tests (unit, integration, or otherwise) that apply to the code you have changed.

This is the best balance between developer time and finding defects early on in the development life cycle.
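One hedged way to wire that split up with xUnit is to tag the slow tests with a trait (as in the earlier sketch) and let each context choose its filter; the trait name "Category" below is an assumption, not something xUnit prescribes:

    # Every local build: fast tests only
    dotnet test --filter "Category!=Integration"

    # CI build, or on demand before a code review: the whole suite
    dotnet test

The same selection can also be made per test project, if the slow tests live in a project of their own.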

Greg Burghardt
  • 34,276
  • 8
  • 63
  • 114
  • 1
    I think it a great shame that you ruin an otherwise good answer with a load of opinionated semantic waffle over your take on the difference between unit and integration tests. Cut that out and get straight to the point re fast and slow tests and this is a good attempt at answering the question. – David Arno Feb 17 '19 at 19:54
  • purely opinion, not fact ( The separation is not unit versus integration test. The separation is fast versus slow tests ) – RyBolt Jun 24 '20 at 16:03
4

The difference between unit tests and integration tests is that they test different things. Very simply put:

  • Unit tests test if one thing does what it's supposed to do. ("Can Tommy throw a ball?" or "Can Timmy catch a ball?")
  • Integration tests test if two (or more) things can work together. ("Can Tommy throw a ball to Timmy?")

The example integration test I gave may seem so simple that it's not worth testing after having done the example unit tests; but keep in mind that I've oversimplified this for the sake of explanation.

integration tests, which won't run on every build but can run on request

That isn't really how you're supposed to approach integration tests. They are just as essential as unit tests and should be run on the same schedule.

You should think of unit and integration tests as two pieces of "the test package", and it's this package that you need to run when testing. It makes no sense to only test half of your application's purpose and consider that a meaningfully conclusive test.

Without adding integration tests to your general testing schedule, you're simply ignoring any test failures that would be popping up. Tests exist specifically because you want to be alerted of their failures, so why would you intentionally hide those alerts? It's the equivalent of turning on the radio and then sticking your fingers in your ears.

and how do you separate unit tests from integration tests?

While they are commonly separated into different projects (usually of the same solution), that's not a technical requirement but rather a way of structuring the codebase.

If it's in the same solution with the same [Fact] attributes, it will run in the exact same way.

As I mentioned, running them in the exact same way is what you're supposed to be doing.

I assume you mean "in the same project" and not "in the same solution". As noted above, nothing technically forces you to keep the unit and integration tests apart; you could just as well put them in the same project, which would make sense for small codebases with only a handful of tests.
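If you do go with separate projects, a common (but by no means mandatory) solution layout looks something like this; the project names are placeholders:

    MyApp.sln
        src/
            MyApp/                     (production code)
        tests/
            MyApp.UnitTests/           (fast, run on every build)
            MyApp.IntegrationTests/    (slower, run in CI or on request)

Both test projects reference the production code and the same xUnit packages; only when and how they are run differs.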

Flater
  • 44,596
  • 8
  • 88
  • 122
  • Whether to run them automatically on every build or not depends on how long it takes to run the test. If a test takes an hour, I probably want to run it manually (or in a dedicated automated environment for bigger projects). If it takes a few seconds, can do it on every build no problem. If it takes a minute... – Etienne Charland Feb 14 '19 at 09:21
  • @EtienneCharland: Sure, but what is the point of then still running the unit tests at an earlier stage? You're going to get an incomplete answer anyway. Yes, half an answer is better than no answer, but I don't quite agree that the effort to implement their separation is worth the benefit of having a half-tested response. If anything, a half-tested pass (with an untested failure) is going to lead to wrong expectations and miscommunication in the team. – Flater Feb 14 '19 at 09:23
  • @EtienneCharland: Maybe a better response: if the tests take that long to perform, then it's going to take that long to perform the tests. That's not an argument for not running the tests or doing so electively. That's an argument for accepting that you are rigorously testing your application. **You should run tests before every build** because that prevents building faulty applications, but you maybe shouldn't build for every single commit (e.g. only force a test+build when merging into the master branch) – Flater Feb 14 '19 at 10:57
  • @Flater you cannot execute tests on assemblies before they are built, so it's literally not possible to run tests before every build. Integration tests are for testing new features or changes in a local program with external systems. This is sometimes not efficient or possible to do for the programmer, but should be done asap (like as part of a GCI build). – StingyJack Feb 14 '19 at 12:10
  • @StingyJack: In the current context, "compiled" and "built" is not synonymous. Yes, they need to be compiled, but the **build process** is not finalized (no artifacts are created, no release is made) when the tests fail. – Flater Feb 14 '19 at 12:12
  • 1
    @Flater, "*the build process is not finalized (no artifacts are created, no release is made)*". That is an *unusual* definition of build. "Built" and "released" are not synonymous... – David Arno Feb 14 '19 at 14:38
  • 1
    @DavidArno: Not every build leads to an automated release (i.e. install on an application server). Some builds simply make a new version available that can then be installed at will. I was listing two possible expected outcomes of the build, one where there is no automated release, one where there is. – Flater Feb 14 '19 at 14:51
  • 1
    As I say, you are using an unusual definition of "build". But since there's no correct definition of it, you are free to do so (at the risk of causing confusion of course). – David Arno Feb 14 '19 at 15:39
  • I think this answer would benefit from better examples for unit test and integration test. Also, integration tests often *do* run on a separate schedule - it may be an unpleasant truth, but it is a fact of life, especially in CI/CD environments. – Robbie Dee Feb 14 '19 at 16:41
  • Not sure why this answer has gotten a down vote. The first sentence and bullet points accurately describe the difference. Remember that integration tests do not mean *slow* tests. They merely test the interaction between more than one component; for example, a service class that takes user input and calls methods on business classes would be an integration test. If you mock any unnecessary outside resources you *also* have a *fast* integration test. +1 for this answer. – Greg Burghardt Feb 14 '19 at 18:14
  • 1
    And I think what @Flater is saying is to *not* differentiate unit and integration tests. If the tests execute quickly (in similar time frames as unit tests) then there is no point to separating them. – Greg Burghardt Feb 14 '19 at 18:15
2

There is no "one size fits all" approach.

Some shops have them in the same project as the unit tests, others prefer to have them in a separate project by themselves.

However you do it, I'd recommend flagging them as such so the build server (assuming you have one) can be set to run a subset of tests as the build manager and/or developers deem fit.

I'm not au fait with xUnit to be honest, but it appears that this can be done by trait.

Once these are in place, developers can then pick and choose which categories of test they wish to run locally, which can speed up development.
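For example (a sketch that assumes the tests are tagged with a trait named "Category"), the xUnit v2 console runner accepts -trait and -notrait switches to include or exclude tests by trait:

    # Run only the integration tests
    xunit.console.exe MyApp.Tests.dll -trait "Category=Integration"

    # Run everything except the integration tests
    xunit.console.exe MyApp.Tests.dll -notrait "Category=Integration"

If you run tests through dotnet test or the Visual Studio Test Explorer instead, the equivalent selection is made with a trait-based filter (e.g. --filter "Category=Integration").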

Robbie Dee
  • 9,717
  • 2
  • 23
  • 53
1

It probably depends on your testing framework (you are using one, right?).

For JUnit 4/5, we have separate test suites.

We run one set locally, and another before group-level pull requests (unit tests for all jars). The 126k integration tests, which can take several hours to run (end-to-end / integration tests using Selenium, WebLogic, Oracle RDBMS, Oracle BI Publisher, Docker, and a bunch of other 3rd-party COTS), are run before release pull requests, and those run every 6 hours.

At need, of course, we can run those tests locally too, but it is painful due to the duration and the special setup needed for the local Docker images, so we generally just wait and see how the group build servers handle it.

Kristian H
  • 1,261
  • 8
  • 12
0

There is probably no "correct" answer, but I have a few simple rules of thumb, which work well for me:

  • Tests should check whether business (use-case) requirements are met. That means there is no "simple" or "complicated" component to test; there is just a requirement, which is the only reason to add a new test.

  • Only APIs are tested, never the implementation.

  • Mocks should be kept to a minimum (used only for very slow or expensive resources).

  • Unit tests can be run at the level of the code, without anything needing to be deployed.

  • Integration tests validate that a component is correctly integrated into the system, which means the component must be deployed.

This difference, deployed or not, determines the place in your deployment process (pipeline stage).

ttulka
  • 353
  • 3
  • 13
0

I'm reading the question a bit differently. I guess the problem is in the definition of "differentiate". The answers above seem to interpret it as "describe the difference". The person asking the question seems to be asking how to tell the software to run them differently (e.g. "I also want to add integration tests, which won't run on every build but can run on request"). This is clearly a request for how to tell the CI environment to run unit tests only, versus integration tests only, versus both. As such, there are much better answers available:

hockey_dave
  • 101
  • 2
  • Most of your solutions are for jUnit and not xUnit though – Etienne Charland Feb 02 '23 at 22:10
  • Sure, but the title of this question is "How to Differentiate Unit Tests from Integration Tests?", which does not mention xUnit. So anyone searching for an answer to this question could be asking about integration testing beyond xUnit, which in fact was what I was searching for when I stumbled upon this thread, so I'm trying to help any other person like me who stumbles in. – hockey_dave Feb 04 '23 at 11:24
  • And I see others answering with jUnit above which makes this thread more generally usable than just people asking about xUnit. – hockey_dave Feb 04 '23 at 11:31
  • For xUnit specifically the answer from @Robbie Dee seems most appropriately helpful. – hockey_dave Feb 04 '23 at 11:37