We are going to implement integration testing in our project for an embedded product. The plan is to develop the tests and execute them in each sprint.

I suggested that the tests could be executed after all the changes for that round are implemented, in parallel with the system tests. A coworker suggests that it would be better to run the tests each time a set of related changes is implemented.

For example, if six changes are planned for iteration XX.YY, three for feature A, two for feature B, and one for feature C, and they are not related, we would execute our test round three times (once after the changes for each feature are completed). Full automation is planned, so the round can run overnight and there are not many time constraints. Which is the better approach? What are the advantages of each?

  • Take it one step further than your co-worker suggested and execute all tests after any change, related or not. The only reason not to is if the cost of running the tests exceeds the cost of finding regression issues late, and we should all know the theory of the exponential cost of late finds. – mattnz Sep 29 '17 at 04:00
  • You might be interested in [Continuous Integration](https://en.wikipedia.org/wiki/Continuous_integration). [Here](https://softwareengineering.stackexchange.com/questions/358096/what-does-continuous-mean-in-continuous-deployment-continuous-delivery-an) J.W Mittag explains quite well what @mattnz is commenting on. – Laiv Sep 29 '17 at 08:35

1 Answer


Testing should not only be used to prove that a feature works, but also to find out as soon as possible that a change has caused a feature to stop working.

The advantage of testing more often is that you will typically have more time before the next big milestone to find and fix the problem that causes the tests to fail. Also, because the time between test executions is short, there is only a limited number of changes that could have introduced the problem.
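That second point can be illustrated concretely: when tests run frequently, a regression can be pinpointed by bisecting the small set of changes merged since the last passing run, which is essentially what `git bisect` automates. A minimal sketch, where the change list and the pass/fail predicate are invented for illustration:

```python
def find_breaking_change(changes, tests_pass):
    """Binary-search for the first change after which the tests fail.

    `changes` is the ordered list of changes since the last green test run;
    `tests_pass(i)` reports whether a build containing changes[:i+1] passes.
    """
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if tests_pass(mid):
            lo = mid + 1   # failure was introduced later
        else:
            hi = mid       # failure is at mid or earlier
    return changes[lo]

# Simulated history: the fourth change (index 3) broke the build.
changes = ["A1", "A2", "A3", "B1", "B2", "C1"]
culprit = find_breaking_change(changes, lambda i: i < 3)
print(culprit)  # -> B1
```

With nightly runs, the suspect set is at most one day of merges, so only a handful of such builds need to be tested; with a single end-of-sprint run, the search space is the entire sprint's worth of changes.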


Currently, I am also working on an embedded product and we have organized our testing activities like this:

  • Development is done according to the Git Flow model: changes are made on feature branches and are only integrated into the main software after a (successful) review. In parallel with the review, unit tests (on a build machine) and a limited integration test (on the real target hardware) are executed. This limited integration test takes about 20 minutes to execute.
  • Every night, if there are changes to the main software branch, a regression run of nearly all integration tests is executed. This test run takes about 5 to 6 hours to execute on the target hardware and excludes only the test cases that take an excessive amount of time.
  • Every weekend, the tests that require a lot of time (typically between 10 and 24 hours per testcase) are executed.
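As a rough sketch of how such a tiered schedule could be automated, the following assigns test cases to the three tiers by estimated runtime. All names and thresholds here are invented for illustration; they are not part of the setup described above:

```python
# Hypothetical tier assignment based on a test case's estimated runtime.
PER_MERGE_BUDGET_MIN = 20      # quick gate run in parallel with code review
NIGHTLY_CUTOFF_MIN = 10 * 60   # tests longer than ~10 h go to the weekend run

def assign_tier(estimated_minutes):
    """Pick the test tier for a test case of the given estimated duration."""
    if estimated_minutes <= PER_MERGE_BUDGET_MIN:
        return "per-merge"
    if estimated_minutes < NIGHTLY_CUTOFF_MIN:
        return "nightly"
    return "weekend"

# Example suite with invented names and durations (in minutes).
test_suite = {
    "boot_smoke_test": 5,
    "can_bus_regression": 90,
    "flash_endurance": 14 * 60,
}
for name, minutes in test_suite.items():
    print(f"{name}: {assign_tier(minutes)}")
```

The point of the split is that each tier's total runtime fits its slot: the per-merge gate stays fast enough to run alongside review, while the long-running cases are deferred to windows where the hardware would otherwise be idle.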

A story/feature is not considered to be completed until the code has been integrated into the main software branch and the nightly test run shows no problems (that can be attributed to the changes made for the story/feature).

Besides these internal integration tests, there are also other teams that perform integration tests on a system level to verify how well our product works in the wider ecosystem.

The advantage of this way of working is that you get faster feedback when a change breaks something you didn't intend to break, but it does require a way of working where work in progress is not mixed with work that has already been completed.

Bart van Ingen Schenau