5

I have little experience with unit testing, but on the project we're working on right now we decided to do unit testing. The project is a WPF/Entity Framework application, and the part I'm confused about is a filter function.

Our universe of entities is like this: there are Products, Evaluations and Collections.

Evaluation 1 - n Product

Collection n - m Evaluation

Collection n - m Product

Now the filter function in question is called "in collection", and the logic is like this: Include the Product if it has a Collection where Collection.Active == true, or if the Product has an Evaluation which in turn has a Collection.
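
To make this concrete, here is roughly what the filter boils down to. The classes are simplified stand-ins for our real EF entities, and names like `Filters.InCollection` are just for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal stand-ins for our entities (simplified, not our exact EF model).
public class Collection
{
    public bool Active { get; set; }
}

public class Evaluation
{
    public List<Collection> Collections { get; } = new List<Collection>();
}

public class Product
{
    public List<Collection> Collections { get; } = new List<Collection>();
    public Evaluation Evaluation { get; set; }  // Evaluation 1 - n Product
}

public static class Filters
{
    // "in collection": directly in an active Collection, or evaluated into
    // any Collection. (Active is only checked on the direct path; that is
    // how the requirement currently reads.)
    public static IQueryable<Product> InCollection(IQueryable<Product> products)
    {
        return products.Where(p =>
            p.Collections.Any(c => c.Active)
            || (p.Evaluation != null && p.Evaluation.Collections.Any()));
    }
}
```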

So when unit testing this, I'm trying to work out the different possible corner cases and make assertions about whether each one is included in the filtered list or not: Products that have an active Collection, Products that have no active Collection but do have an Evaluation that has a Collection, and so on.
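
For example, using the stand-in types above, the corner-case tests end up looking something like this (xUnit here, but any framework would do; the data is in-memory rather than a real context):

```csharp
using System.Linq;
using Xunit;

public class InCollectionFilterTests
{
    [Fact]
    public void Product_in_active_collection_is_included()
    {
        var product = new Product();
        product.Collections.Add(new Collection { Active = true });

        var result = Filters.InCollection(new[] { product }.AsQueryable());

        Assert.Contains(product, result);
    }

    [Fact]
    public void Product_with_only_inactive_collection_but_evaluated_collection_is_included()
    {
        var product = new Product { Evaluation = new Evaluation() };
        product.Collections.Add(new Collection { Active = false });
        product.Evaluation.Collections.Add(new Collection());

        var result = Filters.InCollection(new[] { product }.AsQueryable());

        Assert.Contains(product, result);
    }

    [Fact]
    public void Product_with_no_collection_and_no_evaluation_is_excluded()
    {
        var product = new Product();

        var result = Filters.InCollection(new[] { product }.AsQueryable());

        Assert.DoesNotContain(product, result);
    }
}
```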

The thing is, working out these corner cases is much harder work than writing the simple LINQ-to-Entities query itself, and it seems that the probability of making an error in the tests is much higher than the probability of making an error in the implementation of the filter.

How would you handle such situations? This one doesn't seem uncommon, especially where tools let you write the logic in such a simple way. Is it possible to benefit from unit testing here? How?

Mårten
  • 287
  • 1
  • 7
  • Implement the test, implement the logic. Analyze every discrepancy. – mouviciel Oct 23 '14 at 07:26
  • 2
    do you measure and analyze [tag:test-coverage]? I ask because to me this was the easiest way to discover concrete cases when logic is complicated and to ensure that I did not miss anything – gnat Oct 23 '14 at 07:33
  • regarding possible errors in tests, this has been covered in answers to prior question [How to test the tests?](http://programmers.stackexchange.com/q/11485/31260) – gnat Oct 23 '14 at 08:59
  • 1
    If your tests are very complex, then read this: http://programmers.stackexchange.com/a/11496/20065 – BЈовић Oct 23 '14 at 12:36

4 Answers

4

The common wisdom is that when a test fails, the tested code must be at fault. But that only holds if you can verify at a glance that the test itself is correct.

The reality is that when a test fails, you have found a discrepancy between the test case and the code under test. That discrepancy is caused either by a simple error (in the test case or in the code under test) or by a misunderstanding of the requirements.
When a test fails, you have to find out what caused the discrepancy, and then you fix either the test case or the code under test (or, in rare cases, both).

Even though writing the tests for this feature is hard, I would encourage you to do it anyway: writing the tests can reveal ambiguities in the requirements, and (repeatedly) executing them gives you confidence that unrelated changes haven't broken the feature.

Bart van Ingen Schenau
  • 71,712
  • 20
  • 110
  • 179
4

If you're going with a test-driven development style of unit testing, then I'd suggest the purist way: write a unit test that fails first, then change the code to make it pass (and only then refactor).
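
As a sketch of that red/green cycle (the domain and names here are invented purely for illustration, using xUnit):

```csharp
using Xunit;

// Step 1 (red): write the test first; it fails because Total doesn't
// apply any discount yet (or doesn't even compile).
public class PriceCalculatorTests
{
    [Fact]
    public void Orders_of_100_or_more_get_a_ten_percent_discount()
    {
        Assert.Equal(90m, PriceCalculator.Total(100m));
    }
}

// Step 2 (green): write just enough code to make the test pass,
// then refactor with the test as a safety net.
public static class PriceCalculator
{
    public static decimal Total(decimal amount) =>
        amount >= 100m ? amount * 0.9m : amount;
}
```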

Even if it's very tempting to skip testing very simple pieces of code, please don't, because:

  • Writing unit tests puts the source of truth in the test cases rather than in the source code. If a test suddenly fails, even for simple logic, it means someone changed the code base in a way that broke the tested behavior. If the source of truth lived in the code instead, then when a test failed, some people would assume the test was written incorrectly rather than suspect that their implementation is wrong.
  • Insurance. Unit tests let you catch bugs early in the development phase, so use them as often as possible. I'd even encourage aiming for 100% test coverage of your code base, because it saves you the trouble of manually re-testing the entire application just to be sure a refactoring didn't break an innocent-looking conditional in a method you only use once in a blue moon.

Note: if you're having a bit of trouble writing plain old unit tests, I'd suggest you take a look at Cucumber and its ilk. BDD lets you write test cases more naturally, so it might be easier for you in some cases.

Note 2: As @gbjbaanb said in the comments below, even 100% line coverage is no substitute for manual testing and QA. Writing automated tests like these does, however, help reduce the burden on the tester.

Maru
  • 1,402
  • 10
  • 19
  • You still have to test your product even with 100% unit test coverage. Even though units 'work', you haven't tested their interaction. I find this is where the majority of bugs come from. +1 for using BDD testing using Cucumber though. – gbjbaanb Oct 23 '14 at 13:14
  • Of course, unit tests are not substitutes for integration tests and real QA, but they do help you find the simpler bugs that people tend to make :D – Maru Oct 23 '14 at 22:40
3

Write the test anyway. If it is part of the software contract, it needs testing; how simple the feature is, is irrelevant. It is the process of rigorously testing in a secondary notation that matters. Combined with the target code and the other clients of that code, the test makes up a quorum of three that verifies correctness. You may write a bug once, but you are less likely to write it twice unless you misunderstood the logic or the specification itself was wrong.

Your unit tests:

  1. Validate your contracts as you develop
  2. Validate your thinking process. (Heard of rubber duck debugging?)
  3. Become documentation
  4. Protect the APIs from being accidentally broken. If the code breaks, the test yells loudly. The new dev guy in the next cube may think your odd-looking algorithm is a bug or dead code, until his "fix" breaks the test.
  5. Watch after you when you are tired and protect you from checking in something dumb. We all do it from time to time. Better a test yells at you than your teammates. :)

Tests are the only way to systematically prove that your software does what it is supposed to do. Trust me, there is something very comforting and satisfying about seeing your unit tests pass 100% after a long revision cycle.

Later on, if you leave, another developer can benefit from your tests in order to know whether he is going to break some undocumented code when he needs to change or add a feature.

codenheim
  • 2,963
  • 16
  • 18
1

One strategy I've found helpful, especially when tests are being added after (potentially long after) the code was developed, is to write "component" or "integration" tests first. This isn't to say you should avoid pure unit testing, but if you understand the system's (desired) behavior at a higher level, or in a way that seems hard to unit-test, then start by adding tests that gauge correct operation at the level you do understand.

In your case, for instance, build a Collection of Evaluations or a Collection of Products and try some filtering, sifting, sorting, or whatever other methods and operations are common to those things; something like the sketch below.
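
For instance, assuming entity shapes and a filter entry point like the ones sketched in the question, a component-level test might look like this (xUnit; the helper methods are invented for the example):

```csharp
using System.Linq;
using Xunit;

public class CollectionFilteringComponentTests
{
    [Fact]
    public void Filter_keeps_products_in_active_or_evaluated_collections_only()
    {
        // Arrange: a small, mixed population wired up like production data.
        var direct = ProductInCollection(active: true);
        var viaEvaluation = ProductWithEvaluatedCollection();
        var inactiveOnly = ProductInCollection(active: false);
        var orphan = new Product();  // no Collection, no Evaluation

        var all = new[] { direct, viaEvaluation, inactiveOnly, orphan }.AsQueryable();

        // Act: exercise the filter the way the UI layer would.
        var result = Filters.InCollection(all).ToList();

        // Assert: correct behavior at the level we do understand.
        Assert.Contains(direct, result);
        Assert.Contains(viaEvaluation, result);
        Assert.DoesNotContain(inactiveOnly, result);
        Assert.DoesNotContain(orphan, result);
    }

    private static Product ProductInCollection(bool active)
    {
        var p = new Product();
        p.Collections.Add(new Collection { Active = active });
        return p;
    }

    private static Product ProductWithEvaluatedCollection()
    {
        var p = new Product { Evaluation = new Evaluation() };
        p.Evaluation.Collections.Add(new Collection());
        return p;
    }
}
```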

Yes, this will involve multiple classes or modules, and it won't be pure unit testing. And yes, you should go back and add unit tests to lock down behavior at the individual class / module / function level. But starting at the level you do understand gets you going with testing, and it will help you further discover, reason about, and ultimately add direct unit tests for the more atomic units.

Jonathan Eunice
  • 9,710
  • 1
  • 31
  • 42