
Whenever I write unit tests, I have always tried to have a single assert per test to make debugging easier when tests fail. However, as I follow this rule I feel like I am constantly copying the same code into each test, and with more tests it becomes harder to go back and read and maintain them.

So does single-assertion testing violate DRY?

And is there a good rule to follow to find a good balance, like having just one test per method?*

*I realize there probably isn't a one-size-fits-all solution to this, but is there a recommended way to approach it?

Korey Hinton
    You could extract the code you copy into methods – Ismail Badawi Aug 26 '13 at 20:22
  • @IsmailBadawi that sounds like a good idea. I would assume these methods should return an instance of the class object that I am testing – Korey Hinton Aug 26 '13 at 20:30
  • You're creating something in a given state to be tested? That sounds like a fixture. – Dave Hillier Aug 26 '13 at 20:41
  • @DaveHillier yes, I learned a new word today. Thanks :) – Korey Hinton Aug 26 '13 at 20:53
  • It depends on how you interpret "one assert per test". If you mean one Assert* call, then yes, it can, if you also want to ensure the invariants still hold (then again, extract that into a method), or if there are multiple effects that you just can't test in a single Assert (or that wouldn't make the failure's cause clear if you did). – ratchet freak Aug 26 '13 at 21:57

3 Answers


Proper unit tests have a naming convention that helps you immediately identify what has failed:

public void AddNewCustomer_CustomerExists_ThrowsException()

This is why you have one assertion per test: so that each method (and its name) corresponds to the condition that you are asserting.

As you've correctly pointed out, each new test is going to have similar setup code. As with any code, you can refactor the common code into its own method to reduce or eliminate the duplication and make your code more DRY. Some testing frameworks are specifically designed to allow you to put that setup code in one place.
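For example, here is a minimal sketch of that idea using Python's unittest (the CustomerRepository class and all of its behaviour are invented for illustration): the shared arrangement lives in setUp, and each descriptively named test keeps a single assertion.

```python
import unittest

# Hypothetical class under test, stubbed here so the sketch is self-contained.
class CustomerRepository:
    def __init__(self):
        self._names = set()

    def add(self, name):
        if name in self._names:
            raise ValueError("customer exists")
        self._names.add(name)

class AddNewCustomerTests(unittest.TestCase):
    def setUp(self):
        # Shared setup runs before every test, so nothing is copy-pasted.
        self.repo = CustomerRepository()
        self.repo.add("alice")

    def test_add_new_customer_customer_exists_throws(self):
        with self.assertRaises(ValueError):
            self.repo.add("alice")

    def test_add_new_customer_new_name_succeeds(self):
        self.repo.add("bob")  # should not raise
```

Each test name still reads as condition plus expected outcome, but the duplicated arrangement has moved into one place.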

In TDD, no test is YAGNI, because you write tests based only on what you require your code to do. If you don't need it, you won't write the test.

Robert Harvey
  • Refactoring into a single method sounds good. If I needed an instance of the class to be in a certain state, I could create and return a new instance of that object in the refactored method, or, like you suggest, use the methods provided by the testing framework for initial test setup. – Korey Hinton Aug 26 '13 at 20:39
  • on your last point, I'm sure I could write tests for functionality I'd _like_ the code to have, rather than functionality it needed – joel Aug 30 '19 at 14:22

So does single-assertion testing violate DRY?

No, but it does encourage violations.

That said, good object-oriented design tends to go out the window for unit tests, mostly for good reason. It's more important that unit tests be isolated from one another, so that a test can be interrogated in isolation and, if need be, fixed with confidence that you won't break other tests. Basically, test correctness and readability are more important than test size or maintainability.

Frankly, I've never been a fan of the one-assert-per-test rule, for the reasons you describe: it leads to a lot of boilerplate code that is hard to read, easy to copy incorrectly, and hard to fix well when you refactor (which in turn drives you to refactor less).

If a function is supposed to return a list containing "foo" and "bar" for a given input, but in any order, then it is perfectly fine to use two asserts to check that both of them are in the result set. Where you get into trouble is when a single test checks two inputs or two side effects and you don't know which of the two caused the failure.
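As a sketch of that point (Python, with an invented tags_for function standing in for the code under test), two asserts on one unordered result are still exercising a single behaviour:

```python
def tags_for(user_id):
    # Hypothetical function under test: returns its tags in no guaranteed order.
    return ["bar", "foo"] if user_id == 1 else []

def test_tags_for_known_user_contains_foo_and_bar():
    result = tags_for(1)
    # Two asserts, one behaviour: membership matters, order does not.
    assert "foo" in result
    assert "bar" in result
```

If either assert fails, the failure still points at one input and one effect, so the diagnosis stays unambiguous.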

I view it as a variation on the Single Responsibility Principle: there should be only one thing that can cause a test to fail, and in an ideal world a given change should break only one test.

But in the end it's a trade-off: are you more likely to spend more time maintaining all the copy-pasted code, or hunting down root causes when tests can be broken by multiple sources? As long as you write some tests, it probably doesn't matter too much. Despite my disdain for single-assert tests, I tend to err on the side of more tests. Your mileage may vary.

Telastyn

No. This seems to be just the way you happen to be doing it, unless you've found a notable reference that claims it is good practice.

Use a test fixture (although in xUnit terminology, the fixture is the set of tests together with their setup and teardown), that is, some setup or example data that applies to all of your tests.

Use methods to structure your code, just as you normally would. When refactoring tests, the usual TDD Red-Green-Refactor cycle does not apply; instead, apply "Refactoring in the Red". That is,

  1. deliberately break your test,
  2. do your refactoring,
  3. fix your test.

This way you know that the tests still give both positive and negative results.
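The steps above can be sketched as follows (Python; the add function is invented, and the deliberate breakage and restoration are described in comments rather than left in the code):

```python
def add(a, b):
    return a + b

def test_add_sums_its_arguments():
    assert add(2, 2) == 4

# Refactoring in the Red, applied to the test above:
# 1. Deliberately break it: change the expectation to `== 5`, run it,
#    and confirm it fails. This proves the test can actually fail.
# 2. While it is red, do your refactoring (extract helpers, rename,
#    move shared setup) and check the failure message stays meaningful.
# 3. Restore the expectation to `== 4` and confirm the test passes again.
```

The point of step 1 is that a refactoring mistake which silences the test entirely would otherwise go unnoticed.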

There are several standard formats for tests, for example Arrange, Act, Assert, or Given, When, Then (BDD). Consider using a separate function for each step; you should then be able to call these functions to reduce boilerplate.
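A minimal sketch of one function per step (Python; the Cart class and the helper names are invented for illustration):

```python
# Hypothetical class under test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append((item, price))

    def total(self):
        return sum(price for _, price in self.items)

def arrange_cart_with_two_items():
    cart = Cart()
    cart.add("apple", 2)
    cart.add("pear", 3)
    return cart

def act_checkout(cart):
    return cart.total()

def assert_total_is(total, expected):
    assert total == expected

def test_cart_total_is_sum_of_item_prices():
    cart = arrange_cart_with_two_items()     # Arrange
    total = act_checkout(cart)               # Act
    assert_total_is(total, 5)                # Assert
```

Several tests can now share the Arrange helper instead of each repeating the setup inline.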

Dave Hillier