31

When it comes to tests, I can think of two options:

  1. Put both test and application in one image.
  2. Include only application code in the image. Create a test-specific container that builds after the main image and adds some layers to it (test code, dependencies, etc.).

With the first option, I can test the container and ship it exactly as tested. An obvious downside is that unnecessary code (and potentially test data) will be included in the image.

With the second option, the image that is shipped is not quite the same as the one that is tested.

Both look like bad strategies. Is there a third, better strategy?

lfk
    You basically answered it yourself. Both are bad ideas. You should ship already-tested, runnable processes in a container sized and customized to its needs. You don't want dev dependencies or source code there; in production it's considered a risk. – Laiv Jan 24 '18 at 07:13
    Testing before containerization means the environment is not tested, only the code is. You'll have tested only part of what you're shipping, not all of it. – lfk Jan 25 '18 at 11:51

4 Answers

13

For running build-time tests, the preferred way would be to use a multi-stage build. Multi-stage Dockerfiles allow you to have a larger stage with all the dependencies for building and testing, then copy the exact artifacts you tested into another stage for a smaller runtime image.
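As an illustration, a minimal multi-stage Dockerfile might look like the sketch below. The stage names, the choice of Python, and file names such as requirements-dev.txt and src/ are assumptions for the example, not anything prescribed by the answer:

    # Build stage: full toolchain plus dev/test dependencies
    FROM python:3.12 AS build
    WORKDIR /app
    COPY requirements.txt requirements-dev.txt ./
    RUN pip install -r requirements.txt -r requirements-dev.txt
    COPY . .

    # Test stage: runs the suite (assumes pytest is in the dev requirements);
    # the build fails if the tests fail
    FROM build AS test
    RUN pytest

    # Runtime stage: only the application and its runtime dependencies
    FROM python:3.12-slim AS runtime
    WORKDIR /app
    COPY requirements.txt ./
    RUN pip install -r requirements.txt
    COPY --from=build /app/src ./src
    CMD ["python", "-m", "src.main"]

Here docker build --target test . runs the tests, while docker build --target runtime -t myapp . produces the slim image from the same cached layers, so the artifacts you ship are the ones you tested.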

You also want system-level tests that exercise multiple containers through their external interfaces, rather than running inside the container. Since those tests involve coordination between services, require different dependencies (such as access to your orchestration), are not as thorough as build-time tests, and are often written in completely different languages anyway, it's not a big deal to run them from a separate Docker container dedicated just to system testing.
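A rough sketch of what such a run could look like, assuming a hypothetical myapp image under test, a separate myapp-system-tests image holding only the test code, and a TARGET_URL variable the tests read - all of these names are illustrative:

    # Put both containers on the same network so the tests can reach the app by name
    docker network create sut
    docker run -d --name app --network sut myapp:candidate

    # Run the system tests from a dedicated image, against the app's external interface
    docker run --rm --network sut \
        -e TARGET_URL=http://app:8080 \
        myapp-system-tests:latest

    # Tear everything down afterwards
    docker rm -f app
    docker network rm sut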

Karl Bielefeldt
    So that's pretty much option 2 -- I run the tests in an environment/container that is very similar to production, but not quite the same. Is that right? – lfk Jan 25 '18 at 11:55
10

There is a third way, as you suspected. I think you are mixing up development, testing and deployment. I propose looking at the whole SDLC first, to understand what it is you are trying to achieve. This is a big topic, but I will do my best to summarise.

TL;DR

In short, you need to separate:

  • your code, from
  • the application configuration, from
  • the system environment configuration.

Each needs to be independent of the others and suitably:

  • version controlled
  • tested
  • deployable

Longer Version

First, you have an application made up of code and (separate sets of) configuration. This needs to be tested, both for the build and for its intended function - this is called continuous integration (CI). There are many providers of this service, both online and locally - for example CircleCI, a cloud provider that links to your repository and builds and tests whenever you commit. If your repository is on-prem and cannot use a cloud provider, something like Jenkins would be an equivalent. If your application is fairly standard, there is probably an existing Docker image that the CI service can use; if not, you will have to create one, or a cluster of them, that your application code and configuration can be deployed to. Correctly configured, you will have a wealth of statistics on the quality of your application code.

Next, once you are satisfied with the functionality and correctness of your application, the codebase should be tagged for a specific release. This build should then be deployed to a test environment. Note that the code will be the same as what was tested in your CI (provably so, if you have done this correctly), but your configuration may differ. Again, some CI providers offer this step, so you can test the deployment of a packaged application and its discrete configuration. This stage typically includes user functional testing (for new functionality) as well as automated testing (for known functionality). If the release passes this stage, you have a release candidate for integration testing. You can run the automated tests from another Docker container; depending on your application, these may be as large and elaborate as the application itself - indeed, some metrics put testing effort at 1:1 with coding effort (though I am unsure of this myself).

The penultimate step is to build your (system) environment as if it were production. If you are using Docker in production, this is where you will think about security hardening, network and server optimisation, etc. Your Docker images may be based on those you used in development (ideally so), but there may be changes for scaling and security, as I said. By now the functional testing of the application should be complete; you are more concerned with security and performance. As with the functional testing, your tests here can be developed, deployed and run from other Docker images. This step used to be horrifically expensive and rarely done, because it required dedicated hardware that reproduced production. Today it is completely viable, as you can stand up and tear down an environment of almost any scale on demand.
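For instance, the hardening deltas might amount to nothing more than stricter runtime flags on the same image; the flags and names below are generic examples, not a recipe from this answer:

    # Run the production-like container with a read-only filesystem,
    # no Linux capabilities, and explicit resource limits
    docker run -d --name myapp-prod-like \
        --read-only \
        --cap-drop ALL \
        --memory 512m --cpus 1.0 \
        registry.example.com/myapp:1.4.2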

Finally, you have a release that should be production-ready, with only a small set of configuration deltas from your integration testing (IP addresses, database URIs, passwords, etc.). At this point your code base has been tested in at least three different environments, and the majority of the system configuration at least once.
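In Docker terms, that usually means promoting the very same image tag through the environments and only changing the injected configuration. The image name, tag and environment variables here are placeholders:

    # Integration/staging: same image, staging configuration
    docker run -d --name myapp-staging \
        -e DATABASE_URI=postgres://staging-db/app \
        -e LOG_LEVEL=debug \
        registry.example.com/myapp:1.4.2

    # Production: identical image, only the configuration deltas change
    docker run -d --name myapp-prod \
        -e DATABASE_URI=postgres://prod-db/app \
        -e LOG_LEVEL=warn \
        registry.example.com/myapp:1.4.2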

avastmick
    Does that mean your CI won't be testing your Dockerfiles at all? For example, if your Dockerfile were missing a dependency, the tests would still pass? – lfk Jan 26 '18 at 00:02
    Not at all. First test the code, then test the app config, then test the system. What I am saying is that these are discrete activities. The great thing about containerization is that the dream of development in an environment that is the same as prod is very close. But the hardening would make development too hard. – avastmick Jan 27 '18 at 10:36
2

I think you are mixing up different kinds of tests. Basically you need to ask yourself: What is the unit under test here?

The most common scenario when you are working as a developer is to write unit/integration tests for some piece of code you're working on, where that piece of code is the unit under test. You run those tests locally and/or in CI.

When you've built a new docker image, it becomes a new unit which you can test. What kinds of things would you like to test for this image? What is the API it is providing? How do you test that?

If it is a web application, you could start a container based on the image, make some HTTP requests, and check that the responses are what you expect. The problem I think you are experiencing is that you are very used to the test framework being coupled to the application code. That's fine during development, but now you want to test a docker image, so you need a new kind of test framework that can do that and isn't tied to the application code.

So I think the 3rd option that you are looking for is:

  • Run your unit/integration tests before building a docker image.
  • Build a docker image containing just the application you want to distribute.
  • Instead of adding additional layers on top of that application image, you test it as-is by running it with some given parameters and asserting on the expected outputs.

So the CI/CD steps would be:

Setup development environment -> Run tests on code -> Build final image -> Run tests on image -> Deploy image.
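A bare-bones version of the "run tests on image" step might look like this, assuming a web application with a hypothetical /health endpoint on port 8080 (both are placeholders for whatever external interface your image actually exposes):

    # Build the final image containing only the application
    docker build -t myapp:candidate .

    # Start it exactly as it will be deployed, with no extra test layers
    docker run -d --name candidate -p 8080:8080 myapp:candidate

    # Give the app a moment to start (a proper readiness poll is better in practice)
    sleep 2

    # Exercise its external API and assert on the response
    curl --fail --silent http://localhost:8080/health | grep -q '"status":"ok"' \
        || { echo "image test failed"; exit 1; }

    # If the check passed, this exact image is the one that gets deployed
    docker rm -f candidate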

0

If you use multiple stages, you can run docker build without running the tests, and if you want to run the tests afterwards, docker build --target test will run them against what was previously built. This approach is explained in the official Docker documentation.
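Sketching the two commands, assuming a Dockerfile that defines, say, build, test and runtime stages with runtime last (the stage name test is an assumption):

    # Builds only the stages the final image depends on; with BuildKit
    # (the default builder in recent Docker releases) the test stage is skipped
    docker build -t myapp:latest .

    # Builds up to and including the test stage; earlier layers come from the
    # cache, so effectively only the test instructions run
    docker build --target test .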

This way, not only do we avoid running the build twice (thanks to Docker's caching mechanism), we also avoid shipping the test code in the image.

A possible use of this in CI/CD is to run both commands when developing locally and in CI, and to skip the test command in CD, because the code being deployed will already have been tested.

ccoutinho