123

It seems to be generally assumed (on Stack Overflow at least) that there should always be unit tests, and they should be kept up to date. But I suspect the programmers making these assertions work on different kinds of projects from me - they work on high quality, long-lasting, shipped software where bugs are A Really Bad Thing.

If that's the end of the spectrum where unit testing is most valuable, then what's the other end? Where are unit tests least valuable? In what situations would you not bother? Where would the effort of maintaining tests not be worth the cost?

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Steve Bennett
  • 3,419
  • 4
  • 20
  • 24
  • 5
    "[in Big Ball Of Mud kind code] ...unit tests, these do things opposite to how I use them in good code, like breaking at reasonable changes and failing to catch the real mistakes I make. Which is painful but not surprising - what else would one expect from testing units which have bad design to start with? Instead of helping you improve the design, unit tests often work to preserve bad code..." - I wrote about that in [another answer](http://programmers.stackexchange.com/a/135369/31260) – gnat May 03 '12 at 11:39
  • 6
    @gnat That depends on the direction the testing comes from. Unit tests are especially brittle when they delve into minutiae and when written after implementation, so unit testing becomes a fruitless catch-up exercise and therefore becomes very expensive to maintain. Targeted testing of legacy code with judicious refactoring can help, but only to a certain degree. OTOH, a green field behavior-driven project tends to result in tests that preserve the requirements rather than the implementation, because changed requirements necessitate the removal of older unsuitable tests that no longer match the requirements. – S.Robins May 03 '12 at 12:06
  • @S.Robins sounds like you favor unit tests in _green field behavior-driven projects_. If so then I am 200% with you here; in cases like this unit tests are as close to a silver bullet as it gets. And yeah, it feels fun to adapt tests to changing requirements - when things are understood, white box testing is cool – gnat May 03 '12 at 12:11
  • 3
    This question asked something similar, and the conclusion was that it was not worth doing for them, as they would get paid to fix bugs later: http://programmers.stackexchange.com/q/64592/1249 –  May 03 '12 at 12:11
  • Oh the joys of answers starting with "You _should_"... –  May 03 '12 at 12:22
  • How would you know if something you changed broke something else if you didn't unit test? – Engineer2021 May 03 '12 at 18:11
  • The usual way: runtime failures. That's a terrible thing if your product is an Xbox game, but it's not so bad if your product is an in-house script that is constantly generating reports. – Steve Bennett May 04 '12 at 07:58
  • see also: [what kind of functions and/or classes are impossible to unit test and why](http://programmers.stackexchange.com/questions/222386/what-kind-of-functions-and-or-classes-are-impossible-to-unit-test-and-why) – gnat Dec 30 '13 at 04:59

16 Answers

125

I will personally not write unit tests for situations where:

  1. The code has no branches and is trivial. A getter that returns 0 doesn't need to be tested, and changes will be covered by tests for its consumers. (See the sketch after this list.)

  2. The code simply passes through into a stable API. I'll assume that the standard library works properly.

  3. The code needs to interact with other deployed systems; then an integration test is called for.

  4. The test of success/failure is so difficult to quantify as to not be reliably measurable, such as whether steganography is unnoticeable to humans.

  5. The test itself is an order of magnitude more difficult to write than the code.

  6. The code is throw-away or placeholder code. If there's any doubt, test.
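
As a rough illustration of rules 1 and 5, here is a minimal Python sketch; the `Order` class and all its names are hypothetical, invented for the example. The branchless getter needs no dedicated test, while the branching calculation beside it is worth one:

```python
import unittest


class Order:
    def __init__(self, subtotal):
        self._subtotal = subtotal

    @property
    def subtotal(self):
        # Rule 1: branchless and trivial. A dedicated test would only
        # restate the implementation; its consumers' tests cover it.
        return self._subtotal

    def total(self, coupon=None):
        # Real branching logic: this is worth a unit test.
        if coupon == "SAVE10":
            return self._subtotal * 0.9
        return self._subtotal


class TotalTests(unittest.TestCase):
    def test_coupon_applies_ten_percent_discount(self):
        self.assertAlmostEqual(Order(100.0).total("SAVE10"), 90.0)

    def test_no_coupon_charges_subtotal(self):
        self.assertAlmostEqual(Order(100.0).total(), 100.0)


if __name__ == "__main__":
    unittest.main()
```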

Telastyn
  • 108,850
  • 29
  • 239
  • 365
  • 9
    Overall I agree, +1. But regarding #1: Branchless code doesn't have to be trivial, and even if it currently is due to fortunate circumstances, the intended behaviour may still warrant a test. –  May 03 '12 at 12:32
  • 75
    _If the test itself is an order of magnitude more difficult to write than the code._ Actually I get that feeling most of the time :P – Songo May 03 '12 at 12:34
  • If a method has two or fewer calls to other methods which become the determining factors of its result, tests should be placed on those methods and not the parent. Beyond two such calls, however, the ability to keep sight of exactly how it ought to work diminishes greatly. – Neil May 03 '12 at 13:15
  • @delnan: Indeed. I should perhaps phrase that differently. – Telastyn May 03 '12 at 13:17
  • 3
    @Songo: I used to get that quite a lot as well; it's usually a code smell that the code you're trying to test is too coupled or otherwise not flexible/testable enough. – Telastyn May 03 '12 at 13:22
  • 5
    But if the test is an order of magnitude more difficult to write - you're doing something wrong in the design of your code. And dependent code will be as hard to write as the tests - and buggy. – Carl Manaster May 03 '12 at 13:41
  • 16
    @CarlManaster I think that's the whole point. If it's difficult to write a test case for, it's consequently difficult to understand and will likely create many bugs in your program. The trick is how to avoid methods like these. – Neil May 03 '12 at 14:41
  • 1
    @CarlManaster: I've seen scenarios where the design chosen was difficult to test in favor of maintainability or scalability or performance. Like I said above, it's a code smell but in the end it is a design trade-off like any other. – Telastyn May 03 '12 at 14:49
  • 9
    +1 for integration tests over unit tests. If you are interacting / retrieving data from an external system, then it is actually counter-productive and may introduce bugs to have stand-alone tests that use "mocks." – noahz May 03 '12 at 15:04
  • @noahz: mocks are fine, but sometimes complex (see #5). But mocks also don't cover integration concerns. Sometimes both are prudent (imo). – Telastyn May 03 '12 at 15:07
  • @Neil first post: ...or more than two if they're really simple, like say `vec_add(vec_mul(a, b), vec_mul(c, d))` (example is an example) – amara May 03 '12 at 15:22
  • 29
    *"The code needs to interact with other deployed systems; then an integration test is called for."* - that's what [mock objects](http://en.wikipedia.org/wiki/Mock_object) are for – BlueRaja - Danny Pflughoeft May 03 '12 at 15:41
  • 1
    @sparkleshy, I think if they were simple, you could safely ignore them. However whether it's simple is entirely subjective, unfortunately. If you think the outcome of those operations is simple, good for you. I would probably throw it in its own method and unit test it for good measure. – Neil May 04 '12 at 08:39
  • @Neil: would you do the same for `a*b + c*d`? – amara May 04 '12 at 13:05
  • @sparkleshy Probably, yes, though I don't think I'd have to add unit testing to that, unless the final product was supposed to be more than the sum of the products between a and b, and c and d. – Neil May 04 '12 at 14:15
  • @Neil: oh, I was asking if you'd do unit testing for that. So you wouldn't. But why not? `a*b + c*d` is exactly the same as `vec_add(vec_mul(a, b), vec_mul(c, d))`, just a different syntax. Rather, why would you unit test the latter? – amara May 04 '12 at 14:39
  • @sparkleshy Because I assume vec_add and vec_mul are your own methods, which have their own inner operations to perform. It makes things slightly more complicated. That's all I meant. – Neil May 04 '12 at 14:52
  • @Neil: Then unit test those. How does it make things in the code I'm talking about any more complicated? – amara May 04 '12 at 15:05
  • @sparkleshy For the same reason that anything written in code can be complicated. The individual instructions aren't complicated, but their combined behavior is. – Neil May 04 '12 at 15:50
  • @Neil: ...circular. Why is `a*b + c*d` simple enough for no unit tests but not `vec_add(vec_mul(a, b), vec_mul(c, d))`? – amara May 04 '12 at 16:35
  • 1
    @sparkleshy because you learned about a*b + c*d starting in 1st grade, making it easy to look at and know what it does. The other batch of functions you just wrote, and nobody but you knows what it does. – Rangoric May 05 '12 at 16:50
  • @Rangoric: It doesn't matter; by now, our intuitive understanding of addition and multiplication covers things other than perfect timeless integers. – amara May 05 '12 at 20:13
  • @Rangoric: Or, depending on what you meant: You can't read "add"? It says add. That's addition. Now, we should probably have unit tests and stuff for the vec_add method, but the whole _point_ of encapsulation is that you don't have to worry about things; they're just what they look like. – amara May 05 '12 at 20:19
  • @Telastyn - are any of these comments useful? Could any information be incorporated into the answer. Please let us know by flagging so we can clean up. – ChrisF May 05 '12 at 20:54
  • @ChrisF: sorry, I do not understand. The use of mocking for integration concerns might be included for clarification, but I wasn't sold enough to do the edit. I thought the flags were for problematic responses, and while some of them might be a little off-topic, nothing here seems to warrant that. – Telastyn May 05 '12 at 23:24
  • 2
    @sparkleshy You mean vec_add? Now we have 2 meanings. But could that be vec_mul and vec_add as in Very Elaborate Cards Mulligan? And that is to determine if both had it (a and b are the conditions in player 1's hand, and c, d are for player 2). Now we have 3 meanings. Maybe it's a formula to see if a character is addicted to something after engaging in multiple use levels. Now we have 4 meanings. So no, it doesn't say "add" as in math, it says "vec_add", and only you, who wrote it, know what that means. – Rangoric May 06 '12 at 01:00
  • @Rangoric: Erk. Idiot. Why do I bother commenting... I don't have in-person access to you, so it would take more than comments to express a strange complex idea. (was writing: `def func [a b c d]: a*b + c*d` _is_ sufficiently simple; if you're passing in types with weird `+` implementations then it's not `func`'s fault. etc, other things to say, sigh) – amara May 06 '12 at 08:21
  • @sparkleshy But that's the point, you have to quantify "weird + implementation". Because normally when people see + they have a context going back to 1 + 1 = 2. So while it is not overly simple (a * b + c * d) it is much simpler than vec_add(vec_mul(a, b), vec_mul(c, d)) because we don't have that context. I'd still unit test it because there is likely some business rule for it, but I could see just ignoring it based on context (size of 2 rectangles? no need to unit test that, because again it goes back to elementary school) while the vec_add/vec_mul usage I would unit test almost every time. – Rangoric May 06 '12 at 12:41
  • @Rangoric: Argh. – amara May 06 '12 at 18:39
  • @Rangoric, assuming the methods vec_mul and vec_add do what they're supposed to do, which isn't a given. But isn't that the point behind unit testing to begin with? I assume a, b, c, and d are vectors of size n, meaning there's going to be some sort of loop involved, so we're reaching a level of complication worthy of a unit test in my humble opinion. – Neil May 07 '12 at 07:27
  • 1
    @Neil I'm saying that. vec_add/vec_mul and their usage is much more worthy of a unit test than +/* will tend to be. – Rangoric May 07 '12 at 12:41
85

Unless you are going to write code without testing it, you are always going to incur the cost of testing.

The difference between having unit tests and not having them is the difference between the cost of writing and running the tests, and the cost of testing by hand.

If the cost of writing a unit test is 2 minutes and the cost of running the unit test is practically 0, but the cost of manually testing the code is 1 minute, then you break even when you have run the test twice.
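
To spell out the arithmetic (a tiny Python sketch using the same hypothetical figures as above):

```python
# Hypothetical figures from the paragraph above.
write_cost = 2.0    # minutes to write the unit test
run_cost = 0.0      # minutes per automated run (practically free)
manual_cost = 1.0   # minutes per manual test

# Automated total after n runs: write_cost + run_cost * n.
# Manual total after n runs:    manual_cost * n.
# They meet where n = write_cost / (manual_cost - run_cost).
break_even_runs = write_cost / (manual_cost - run_cost)
print(break_even_runs)  # 2.0 -- the test pays for itself on the second run
```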


For many years I was under the misapprehension that I didn't have enough time to write unit tests for my code. When I did write tests, they were bloated, heavy things which encouraged me to think that I should write unit tests only when I knew they were needed.

Recently I've been encouraged to use Test Driven Development and I found it to be a complete revelation. I'm now firmly convinced that I don't have the time not to write unit tests.

In my experience, by developing with testing in mind you end up with cleaner interfaces, more focussed classes & modules and generally more SOLID, testable code.

Every time I work with legacy code which doesn't have unit tests and I have to manually test something, I keep thinking "this would be so much quicker if this code already had unit tests". Every time I have to try to add unit tests to code with high coupling, I keep thinking "this would be so much easier if it had been written in a de-coupled way".

Compare and contrast the two experimental stations that I support: one has been around for a while and has a great deal of legacy code, while the other is relatively new.

When adding functionality to the old lab, it is often a case of getting down to the lab and spending many hours working through the implications of the functionality they need and how I can add that functionality without affecting any of the other functionality. The code is simply not set up to allow off-line testing, so pretty much everything has to be developed on-line. If I did try to develop off-line then I would end up with more mock objects than would be reasonable.

In the newer lab, I can usually add functionality by developing it off-line at my desk, mocking out only those things which are immediately required, and then only spending a short time in the lab, ironing out any remaining problems not picked up off-line.


TL;DR version:

Write a test when the cost of writing the test, plus the cost of running it as many times as you need to, is likely to be less than the cost of manually testing it as many times as you need to.

Remember though that if you use TDD, the cost of writing tests is likely to come down as you get better at it, and unless the code is absolutely trivial, you probably end up running your tests more often than you expect.

Mark Booth
  • 14,214
  • 3
  • 40
  • 79
  • 4
    Exactly this! I have colleagues who time after time say "I'm done with the ticket, but now I have to write some unit tests." Those unit tests often end up bloated, awkward, and difficult to maintain. – waxwing May 03 '12 at 21:16
  • 2
    dude, you speak my mind and my experience exactly. Recently I just bit the bullet and wrote mock objects for some legacy API for my unit tests, because it takes about 10 minutes to upload my code to the server, deploy and test. – Alvin May 03 '12 at 23:40
  • 10
    Ok, but your answer is "TDD is great!" and my question was "when wouldn't you write unit tests?" Do you really never do any coding where unit testing is inappropriate? (That is, ignoring situations where you'd like to, but can't.) – Steve Bennett May 04 '12 at 08:02
  • 4
    @SteveBennett - I answer that the first section, you don't write tests if the cost of manual testing is very low and/or the cost of writing tests is very high. TDD helps shift the economics involved by making writing tests cheaper and running tests routine. Hopefully the summary I've added at the end helps to make that clear. – Mark Booth May 04 '12 at 08:40
  • +1 for contrasting developing online (with the whole system) vs offline (only one module plus some mocks plus the unit-test GUI). Your cost aspect should also consider the maintenance costs of unit tests: if code changes, the unit tests have to be updated – k3b May 04 '12 at 09:59
  • 1
    @k3b - Indeed, but if you are in a new requirement change cycle then you will have new manual testing costs as well as unit test update and code update costs. You shouldn't need to change unittests unless requirements change, and if code updates *break* tests you need to understand which is broken, the test or the code, based on those requirements. One of the advantages of having unit tests is that you can *see* when code changes due to changes in requirements have unintended side effects. Otherwise those side effects might not be picked up until it's too late. – Mark Booth May 04 '12 at 10:05
  • Do you write tests to test the tests as well? You should. – Thomas Eding May 04 '12 at 21:32
  • Slightly off topic, but any sources for tips on writing tests that "end up bloated, awkward, and difficult to maintain"? – Thymine May 04 '12 at 21:45
  • @trinithis - One of the tenets of TDD is to write the test to fail first, then write the code to make it pass - the so-called *Red/Green* cycle. By doing it this way you test the tests as you're writing them. – Mark Booth May 05 '12 at 19:11
  • 1
    @Thymine - Feel free to start a [chat] about this, the comments here are getting bloated, but basically TDD helps make tests cheap and easy by guiding you into writing interfaces which are easy to test. Before doing TDD I wrote interfaces which seemed convenient for me, but which were often difficult to write tests for, so my tests had to set up too much state and coupling was too high to test single components, I had to test a block of functionality together. – Mark Booth May 05 '12 at 19:17
  • @Thymine - Incidentally, I assumed you meant that you wanted to *avoid* writing bloated tests. – Mark Booth May 05 '12 at 19:18
  • I must add that unit tests are especially important for APIs. Since you're not always in control of the users of the API, unit tests are the closest thing you have to ensuring what you build is truly up to spec. – Davina Leong Feb 13 '19 at 09:12
25

Personally, I generally do not write unit tests for:

  • Changes to existing non-unit-tested (legacy) code. While theoretically it's never too late to start unit testing, in practice the gains from adding unit tests to code that was never TDDed are small. The sole advantage is regression prevention; you can see what effect a change makes on areas you may or may not have expected it to affect. However, the unit test only proves that the code still works the way it did when you wrote the test; because the test wasn't written based on the original requirements, the test does not prove the current implementation is "correct". Making unit-tested changes to such a codebase is even more of a nightmare in my experience than simply making the change and manually exercising affected areas of the program; you first have to write a proper unit test for the code in question (which may require refactoring the code to allow for mocking and other TDD-friendly architecture), prove it passes AND that the refactor didn't break anything (and when this is the first unit test ever written for a million-LoC codebase, that can be difficult), then change the test to expect something different and change the code to make the changed test pass.

  • UI Layout and top-level behavior. In some cases it's just not feasible; many IDEs with "designers" declare UI elements as private, and they shouldn't be public conceptually. Making them internal would require putting the unit tests into the same assemblies as production code (bad) and making them protected would require a "test proxy" that exposes what we need to know about the controls (also generally not the way it's done). Even if it was acceptable to make all controls public and thus have access to their type, position, formatting properties, etc, tests that verify the UI is designed the way you expect are extremely brittle. You can usually still unit-test event handlers in codebehinds/viewmodels and then it's trivial to prove that clicking the button fires the handler.

  • Database Persistence. This is, by definition, an integration test. You always mock the repository when dealing with code that passes data to the repository, and you mock the ORM's hooks when developing within the repository itself (a minimal mock sketch follows this list). While there are "database unit tests" and you can "mock" a database using something like SQLite, I find that these tests just never fit at the unit level.

  • Third-party code. You test YOUR code however you want, and trust that the developers of the third-party tools you're using (including components of the .NET Framework itself like WPF, WCF, EF, etc) did their job right, until proven wrong. A unit test that proves you make the correct WCF call does not require actually crossing the service proxy, and it is unnecessary to test that a particular WCF service call works correctly in a unit test (whether it works or not depends on environment-specific variables that may be perfect in a local testing environment but not at the build-bot, or in the QA, Staging or UAT environments; this is where higher-level integration, AAT and manual testing still have value).
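
As a concrete (if simplified) illustration of the mocked-repository point, here is a Python sketch using the standard library's `unittest.mock`; the service and repository names are invented for the example, not taken from the answer:

```python
import unittest
from unittest.mock import Mock


class OrderService:
    """Code under test: takes a repository as an injected dependency."""

    def __init__(self, repository):
        self._repository = repository

    def archive_order(self, order_id):
        order = self._repository.get(order_id)
        order["archived"] = True
        self._repository.save(order)


class OrderServiceTests(unittest.TestCase):
    def test_archive_marks_order_and_saves_it(self):
        # The repository is a mock: no database involved, so this
        # remains a unit test rather than an integration test.
        repository = Mock()
        repository.get.return_value = {"id": 42, "archived": False}

        OrderService(repository).archive_order(42)

        repository.save.assert_called_once_with({"id": 42, "archived": True})


if __name__ == "__main__":
    unittest.main()
```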

In almost every other case, a test-first methodology has significant value in enforcing requirements and YAGNI, preventing regression, identifying requirements conflicts, etc. Even in cases of "trivial" code, that verification that "yes, this property does still return this value and so you can trust that in other tests" has value to me as a developer.

KeithS
  • 21,994
  • 6
  • 52
  • 79
  • 2
    One more case: a routine that returns something random, especially something complex and random. This is very rare in the business world, but in the game world plenty of programs will have such routines in them. – Loren Pechtel May 05 '12 at 00:50
  • 5
    -1. If you don't write tests for existing code, then how can you have any confidence that your code works?? You have 0% code coverage, or no confidence at all that your existing code works. – CodeART May 05 '12 at 14:28
  • 11
    If I am working in a legacy codebase, it already has 0% code coverage. Am I confident it works? Well, several major in-house applications have been in use through three iterations of the company through M&A, so I'd have to say yes. While adding unit tests to these codebases is a noble goal for regression, the tests only prove the code works the same way it always has, not necessarily that the results are "correct". I stand by my original statement but have edited to clarify the specific situation. – KeithS May 07 '12 at 15:52
  • @CodeWorks: I'd tend to agree with OP. Yes, adding unit tests is a good idea, but there is an initial cost to adding them after the fact (refactorings, framework setup etc.). If you only make small changes, the work may not be justified. – sleske May 09 '12 at 08:25
  • @sleske Hi, I can see your point. I also believe that small changes would require small unit tests, therefore it shouldn't be difficult to justify the cost. – CodeART May 09 '12 at 08:37
  • @CodeWorks: Well, the "initial cost" I mentioned is often independent of how big your change is - so it's precisely for small changes that it may not be worth it. Still, sometimes it's possible to implement unit tests quickly in some areas (e.g. for new code). – sleske May 09 '12 at 10:40
  • "Making them internal would require putting the unit tests into the same assemblies as production code" - At least in .net, you can use `InternalsVisibleTo` to avoid this. – CodesInChaos Feb 08 '13 at 09:40
  • 1
    That's true, but now you're changing how the class is defined, for no other reason than to be able to test it. Most other changes you'd make for this purpose have value elsewhere, like injecting dependencies. I'm not saying it's the wrong way to do it or that the methods I mentioned are any better, but you have to weigh the pros and cons of each approach, especially considering the cost of what you're trying to include (do you really need to unit-test that the event handler is called when the button's clicked? *Really*?). – KeithS Feb 08 '13 at 15:41
25

There is a trade-off between the cost of writing unit tests and the value they add. Neglecting this trade-off risks over-testing.

blueberryfields touched on one variable that affects the value of unit tests: stability of the project (or pace of development). I'd like to expand on that.

The value of having unit tests roughly follows the plot below (yay, MS Paint!):

[plot: utility vs. stability]

  1. In the first zone, the code is unstable to the point that unit tests will often need to be rewritten or entirely scrapped. This might be a prototype that you know won't be used. Or it might be a project where you are new to the programming language or framework (and are often needing to redo large amounts of code).

    Unit tests are good for refactoring at the module level (i.e. changing functions). The problem is when code is unstable at the architectural level, where entire modules might be rewritten, removed, or completely redesigned. The unit tests then must be changed/removed/added, which is a burden.

    At this stage, limiting to system level integration tests, or even manual testing is a better approach (which one you choose again being dependent on the cost of writing the test vs. the value you get).

  2. The unit tests add the most value in the second zone, where code becomes stable above the module level, but not yet at the module level. If you have a stable codebase, and you are adding a module that has a high chance of staying, this is when unit testing is a big win.

  3. Unit tests also begin to have less value when the stability of the code under test is very high. Testing a simple wrapper of a standard library function is one example where unit testing is overkill (sketched after this list).

    If a piece of code you are writing will likely never need to change, and you are confident you've got it right (maybe it's super simple?), there is less value to testing.
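
For instance, a hypothetical Python sketch of such a wrapper: any test for it can only restate the wrapper itself, proving little beyond what the standard library already guarantees.

```python
import json


def parse_config(text):
    # A thin pass-through over the standard library: stable on both sides.
    return json.loads(text)


# A "unit test" here can only restate the implementation:
#     assert parse_config('{"retries": 3}') == {"retries": 3}
# ...which really re-tests json.loads, something we already trust.
```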

Unit testing becomes beneficial when the utility is greater than the cost, as shown in zone II of the graph below.

[plot: utility and cost vs. stability]

This cost line could be at different levels for different people (line 1 vs line 2). If you are new to the test framework used in a project, this raises the cost line and reduces the net utility. A higher cost (line 2) shrinks the range in which unit testing is appropriate.

The cost line doesn't necessarily need to be flat, especially when you consider other variables such as project size and whether a test framework already exists for the project.

jakesandlund
  • 161
  • 1
  • 4
  • 3
    Great answer - testing is least valuable in fast-moving projects with unknown requirements, and also slow-moving projects with very stable code. Given that most of my work is in the former, this gels with me - but also makes clear that we need to get testing happening as things stabilise. – Steve Bennett May 06 '12 at 15:14
  • 1
    Actually, testing can help in fast-moving projects - but you must be more careful about what you test. You cannot test things where you have not yet decided what the right result is :-). – sleske May 09 '12 at 08:56
  • 1
    @sleske: Yes, but fast-moving projects tend to lack the specific (and stable) requirements that you can unit test for. Often, all that's known is the higher level requirements, which lend more to integration tests. – jakesandlund May 09 '12 at 17:43
  • 3
    This is an outstanding answer. It addresses the trade-offs involved in choosing a testing strategy in a measured and results-oriented way. – Benjamin Hodgson Sep 28 '14 at 21:39
12

Where are unit tests least valuable?

You should be writing unit tests only for logical code. A definition of logical code is below:

Logical code is any piece of code that has some sort of logic in it, small as it may be. It’s logical code if it has one or more of the following: an IF statement, a loop, switch or case statements, calculations, or any other type of decision-making code.
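
A minimal sketch of that distinction, using a hypothetical Python class invented for the example: the plain attributes carry no logic and need no test, while the method containing an IF statement and a calculation is "logical code" worth testing.

```python
class Invoice:
    def __init__(self, amount, days_overdue):
        # Plain data holders: no logic, so no dedicated unit tests needed.
        self.amount = amount
        self.days_overdue = days_overdue

    def late_fee(self):
        # An IF statement plus a calculation: "logical code" -- unit test it.
        if self.days_overdue > 30:
            return round(self.amount * 0.05, 2)
        return 0.0
```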

If that's the end of the spectrum where unit testing is most valuable, then what's the other end?

  • Tightly coupled code can't be easily tested. Sometimes it can't be tested at all. For example, ASP.NET Web Forms are a nightmare and sometimes impossible to test, as opposed to ASP.NET MVC, which is built with unit testing in mind.

  • I may choose not to write tests for very small and simple projects. This is perfectly justifiable in a world with limited time and resources. Yes, it's risky, but it's perfectly acceptable when the company you work for has a limited time to market.

What situations would you not bother? Where would the effort of maintaining tests not be worth the cost?

Once unit tests are in place, their maintenance cost becomes insignificant. Unit tests with high code coverage give you a valuable sense of confidence that the system behaves how you expect it to. The same applies to integration tests, but that's not your question.

My answer is based on the following assumptions:

  • You know the difference between unit, integration and functional tests.
  • You are proficient with one of the unit-testing frameworks, and it takes you very little effort to write a unit test.
  • You don't feel like you are doing extra work when you are writing unit tests.

Reference: Art of Unit Testing

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
CodeART
  • 3,992
  • 1
  • 20
  • 23
  • 4
    This answer (in light of the Q) would really benefit from an example of code (as in: a function / a class / ...) that *isn't* "logical". – Martin Ba May 03 '12 at 11:41
  • 1
    Auto property isn't logical, unless it encapsulates some logic inside, e.g. an if/else statement. – CodeART May 03 '12 at 11:56
  • Would you like to quantify "very small and simple"? < 500 lines? – Steve Bennett May 04 '12 at 08:07
  • Also, thanks for spelling out the assumption "you are proficient with one of the unit-testing frameworks". This has been a killer for me. A recent (fairly small) project required me to learn nodeJS, Underscore, Atom, Handlebar and one other tech I've forgotten. The testing framework was Vows, which was one hurdle too many. – Steve Bennett May 04 '12 at 08:09
  • For me small and simple is a codebase which I can fully understand within 30 minutes. I do see you point though. – CodeART May 04 '12 at 08:11
10

Kent Beck (the man behind the idea of unit testing) says that it's fine not to write unit tests when working on an idea (e.g. a prototype) that you might throw away (if you decide not to take it further). On the contrary, if you write your prototype and decide to go forward with it, unit testing will be helpful for all the reasons covered in the previous answers.

sakisk
  • 3,377
  • 2
  • 24
  • 24
7

You should really have tests for any code that you intend to run on more than one occasion.

If you intend to run it once then throw it away, tests may not be useful, though if the domain is complex, I might well write the tests anyway to help me with my design and implementation.

Otherwise, even if the only thing the tests do is confirm that your assumptions are still correct, they are valuable. And any code that lasts more than a single session will need modifications, at which point the tests will pay for themselves 10 times over.

Bill Michell
  • 1,980
  • 14
  • 15
  • 2
    I'd appreciate an insight into why somebody down voted this post. – Bill Michell May 03 '12 at 13:38
  • 13
    Did not down-vote, but I would offer that the idea that all code must have a test is border-line religious, and not always practical. – noahz May 03 '12 at 15:03
  • 2
    @noahz Yes, I'll buy that. But on the "must, should, would" scale, I selected "should" and not "must"... – Bill Michell May 03 '12 at 16:34
  • 1
    Ok - but if the code "will need modifications", the tests will also cost more, right? Also, do you test shell scripts? Build scripts? – Steve Bennett May 04 '12 at 08:04
  • Yes, I test those scripts. And they can't be tested with unit tests, but they can still be tested. – Bill Michell May 04 '12 at 13:47
  • @SteveBennett It should be only "If the **requirements** will need modification, the tests will also cost more" – Thymine May 04 '12 at 21:48
  • While I agree that all production code needs tests, they do not necessarily need to be unit tests - in some scenarios integration tests make more sense. – sleske May 09 '12 at 08:54
7

I can't speak to "unnecessary", but I'll take a shot at when I think unit tests are inappropriate, in that they add cost without proportional benefit:

If your organization / dev team isn't doing any unit tests, then unless you are able to convince your fellow developers to run, fix (and also create on their own) unit tests, the tests you create aren't going to be run and maintained. (This implies you are working on a piece of software together with someone else.)

Not everyone does unit testing; it's sometimes shoddily supported by the local team culture. In such environments you first need to get tests in place for the majority of devs, and probably also into the build process ("[continuous] integration"), before actually starting to write unit tests makes sense.

Given time pressure, delivery dates, and your relative standing with the team, the time remaining until writing unit tests makes sense can easily be months or years.

Martin Ba
  • 7,578
  • 7
  • 34
  • 56
  • 3
    +1 for mentioning the maintenance problem. Though I've used unit tests with success as the only dev using them. Only, when I was on a different project for a few months, it took weeks until all the unit tests worked again... – sleske May 09 '12 at 08:58
5

Sometimes you don't have time to unit test. That's okay.

External constraints exist, and you may not feel the need for tests in a given moment, as you may think you have a firm grasp on the whole system state and know how changes will ripple through it.

But I still believe you should feel a little guilty about it. So when things start to fall apart, you will know it's because: "oh, yeah, I'm not testing stuff, let's start testing."

Before that moment, the feeling of guilt will help you avoid certain architectural choices that would prevent you from being able to automate code tests later.

ZJR
  • 6,301
  • 28
  • 36
  • 7
    ...and btw, I'm up to my neck in guilt right now. – ZJR May 03 '12 at 16:38
  • 2
    *you don't have time to unit test* - huge BS. So, no time to unit test, but there is always time to fix bugs. – BЈовић May 03 '12 at 19:38
  • True, @VJovic. But if the OP doesn't really feel the NEED to test, because not testing still hasn't burned him, and just does things out of *blind faith* - well, then I think that would be even worse, as getting the habit of doing things out of blind faith gets you into very strange corners. – ZJR May 03 '12 at 20:23
  • 1
    Actually, my reason for not unit testing isn't lack of time - it's usually either I don't know what the code will do well enough to spec it out, or I don't know the technologies well enough, so that learning and deploying a testing framework would be an additional burden. – Steve Bennett May 04 '12 at 08:06
  • Coding to requirements is key in TDD. You should have some sort of document telling you what the code should do in a particular situation (this could be as simple as "when I click this button I expect to go to page X", as detailed as GAAP rules for accounting, and as generic as "we need a system that will take in these files and produce this output; go do it"). If your requirements in a particular area aren't granular enough to write a unit test that mirrors them, then you have to interpolate based on what you do know. If you can't do even that then you need more requirements. – KeithS May 07 '12 at 16:08
  • at work I TDD. On personal projects the concept of a test does not exist. My personal projects' milestones and features come much faster (maybe 10x), but millions of dollars don't flow through them. I figure if they become successful, I'll pay some other schmuck to write my tests (kind of like I am now, that schmuck). – FlavorScape Apr 08 '15 at 22:14
2

The cost isn't in maintaining tests, but rather in not maintaining working code. If you divorce testing and implementation, you are assuming that what you write will work without the benefit of proof. Tests give you that proof, and the confidence to maintain the code as time passes. Sure, you might think that you'll never revisit that app you thought was a throw-away; however, the reality is that any time you spend writing software is an investment that your employer doesn't want to treat lightly. Let's face it, software developers don't generally come cheap, and time spent on code to throw away is money simply burned, from your employer's point of view.

If you are spiking a problem, writing a few lines of code to see how something works, or simply throwing together an example for Programmers.SE, then it's really a case of creating something that you fully intend to throw away once you've satisfied your understanding of a problem. In such cases, unit testing can be a hindrance to the task and can safely be left out. In all other cases however, I'd say that testing is always appropriate, even if the program seems relatively unimportant, and you should always aim to write software to a professional standard with enough test support to reduce the difficulty in maintaining the code in the future.

S.Robins
  • 11,385
  • 2
  • 36
  • 52
  • 6
    Tests don't prove that code works; they give you proof that it doesn't fail in the ways the tests test for, assuming the tests are correct. – dan_waterworth May 03 '12 at 10:55
  • @dan_waterworth Your comment doesn't invalidate my point. There are always assumptions made when tests are used, and naturally if your testing methodology is flawed, then you cannot be confident that your product works as required. It's a leap of faith, yet also based on your knowledge of the project requirements, and on your knowledge that you have written both tests and code to meet those requirements. Thus, if you test your product to confirm that it works as required, then it works. So to answer your comment, if you never "test" your code, how will you know if it works? – S.Robins May 03 '12 at 11:12
  • 2
    If you don't think my comment invalidates your point then you don't understand my comment. Tests don't prove correctness, but you assert that they do when you say "Tests give you that proof [that what you write will work]". – dan_waterworth May 03 '12 at 11:22
  • @dan_waterworth Explain then if you will, how your comment invalidates the answer that I have written here, and explain also the specific mechanism you would employ to provide proof that your code does what it is supposed to do - I.e.: That it "works" as intended. Do please support your assertion that I've misunderstood your comment, when I went to the effort to clarify statements relating to "proofs" and "working code" in my previous comment to you. If you wish to help improve the quality of the answer then do so, but let's avoid a silly tit-for-tat argument over what amounts to a triviality. – S.Robins May 03 '12 at 11:40
  • 2
    To clarify, the word proof has a definite meaning and I understand working code to mean code that does what it was intended to do without exception. I would use an interactive proof assistant to create proofs. Tests provide assurance, but they can't provide a proof that your code works unless they do exhaustive model checking, which is hardly ever possible. – dan_waterworth May 03 '12 at 12:04
  • @dan_waterworth I think you're splitting hairs over the use of a word which you've taken well out of context with the answer given. Further, your stated *understanding* given the context of the answer is flawed, particularly in relation to the defined meaning of the word *Proof*. To state [proof](http://dictionary.reference.com/browse/proof?s=t) as being "without exception" is incorrect within the meaning of the word. All of this however is effectively a strawman which still doesn't address the issue that you feel your commentary invalidates the answer I have given to the OP's question. – S.Robins May 03 '12 at 12:20
  • 4
    The context is software. If I ask for a proof of correctness I don't expect a test suite. – dan_waterworth May 03 '12 at 12:59
  • You must be confident in your code. +1 – CodeART May 03 '12 at 14:09
  • @dan_ Wrong again. The context is *Working software*. The distinction is important given the position you've taken. Granted, there are any number of mechanisms that can be used to form a position asserting that software is working correctly. Good luck with the maintenance overheads if you're the sort of person to prefer to rely on hand-checks and costly & document heavy Q/A systems. Me, I'll take the test suite every time over other systems for the better cost/benefit, and for the reliability that automation brings, particularly when maintenance requires tests to change. – S.Robins May 04 '12 at 09:26
  • 3
    @S.Robins, You're completely missing the point. If I wanted to write provable correct software, I'd use a dependently typed language or other formal methods. "Hand-checks and costly & document heavy Q/A systems" are forms of testing which as I've already said do not prove correctness. – dan_waterworth May 04 '12 at 10:00
  • @dan this is really getting ridiculous. You continue to ignore The point of the answer (About when to test or not, and not about provability), and you have yet to explain how this disingenuous need to continually deflect to a point of triviality is in any way relevant to your assertion that my answer is invalidated by this triviality. Further, repeating the same very narrow point of view ad-nauseum proves nothing per the context of the answer as written. If you still feel the need to beat your chest over this, take the matter offline and be done with it. Pointless argument here helps nobody. – S.Robins May 04 '12 at 12:20
  • 1
    @S.Robins, we agree, this is ridiculous and I actually agree with the rest of your answer, but your use of the word 'proof' in this context in wrong. I have labored the point because you haven't conceded it and have appeared not to understand it. I didn't want or intend to start a long discussion on this minor point. I'm done arguing. – dan_waterworth May 04 '12 at 13:40
2

The amount and quality of unit testing to be done in a project is directly proportional to the pace of development and the required stability of the product.

At one end, for a one-off script, a trivial demo, or in a fast-paced startup environment during the very early stages, it's unlikely unit tests will matter or be useful for anything whatsoever. They are almost always, almost certainly, a waste of time and resources.

At the other end, when writing software to control pacemakers, or pilot rocket ships, or when maintaining stable, highly complex software systems (say, a compiler with heavy adoption across the multiverse), multiple layers of unit tests, and other coverage, are crucial.

[There are loads of other situation specific factors which each contribute one way or the other, such as the size of your team, your budget, the talent of your developers, your preferred software dev strategy, etc... Generally, and at the highest level of abstraction, I find these two make the largest difference]

blueberryfields
  • 13,200
  • 8
  • 51
  • 87
1

In my view, product/code prototypes don't require testing as they are often delivered using a completely different process and for completely different purposes than a finished product.

Imagine this common scenario. You have some sort of technology that your company is famous for; your sales team or a chief exec goes to a client and is able to sell them a different product that leverages your technology. They sell it there and then, and promise the client they'll see a beta version in a month's time. You don't have anything yet, not even a development team.

What happens next is you quickly design a solution based on the points agreed/sold to the client, assemble a team of (often star) engineers and start writing code. It would be ideal of course if your dates didn't slip (they always do) and you managed to write the code in three weeks' time rather than four. In that case you'd have an extra week, and if you were able to secure the QA team's time for that week, you could test. In reality you won't have an extra week, and/or your QA team is booked for another two months supporting other product deliveries. Moreover you won't have any design specs, so the QA team won't have any idea what they are supposed to be testing. So you end up delivering what you wrote with a bit of your own smoke testing, which isn't really testing as such, simply because you aren't qualified to test your own code.

The biggest factor here is that the client will see some of the promised functionality very quickly. If your prototype does the job, then you're back to the drawing board, where you design and plan the actual version 1.0 - properly this time.

As far as unit tests go, they are good for maintaining certain contracts (I promise my code will do this; if it doesn't, it's broken). In the case of prototyping, you may not care so much for contracts, because often enough your prototypes will not live to become version 1.0, whether because the client has gone away, or because you ended up developing your version 1.0 using different technologies/tools. An example here would be a quick prototype website written in PHP in a month's time by one person, after which you assemble a team to rewrite it in JEE. Even the database will often not survive this change.

maksimov
  • 109
  • 4
  • 4
    I've seen too much "prototype" and "temporary" code go into production to ever assume that it won't. It's one thing to hope/plan on re-implementing. It's another thing to find the time in the schedule to do it. – jwernerny May 03 '12 at 14:55
  • @jwernerny You're speaking from your experience, I'm speaking from mine. I think this argument is counter-productive. Oftentimes there are valid commercial reasons for time-to-market to be way more important than quality. It's a sad thing, no doubt about it, but it's a fact of life. – maksimov May 04 '12 at 09:09
0

You should write unit tests when the logical comprehension required to maintain, extend, or communicate the logic of the application without flaws exceeds (or will likely in the future exceed) the ability of the least knowledgeable member of your team.

Basically, it depends on how much you can keep in your head at once. On your own you might have more confidence that you can code (in full knowledge of the impact of your changes) until a certain complexity has been reached. Until that point unit tests are a distraction, a hindrance, and will slow you down. In a team, the threshold is going to be much lower, so you pretty much need unit tests straight away to counteract group-stupidity. If you pick up a testless project from someone else (or you've forgotten about a project from a while ago) you'll want to write unit tests before you change anything, because your understanding will be zero.

Note: writing less code and practising YAGNI keeps down complexity and hence the number of required unit tests.

Benedict
  • 1,047
  • 5
  • 12
  • 1
    So... you're advocating to only write unit tests to help you learn/protect a system you're unfamiliar with, otherwise don't unit test?!! Maybe I've misunderstood your answer. If not, then this really isn't what unit testing is meant for. In particular, if you're a TDD advocate, you'll appreciate that writing a test first guides your efforts to implement minimal code, while Refactoring helps you to reinforce those efforts. Thus YAGNI ends up both a central tenet and a byproduct of your method of development. If your unit testing is getting complicated, IMHO you're probably doing it "wrong". – S.Robins May 04 '12 at 09:37
  • I'm saying unit testing provides a bridge over gaps in the understanding of an application. I did say that they were required to "extend" the application, in the sense that sometimes a feature and the impact of it is too difficult to keep track of all at once so unit tests can act as a framework to make that possible. – Benedict May 09 '12 at 13:56
  • Yes, that is another use for Unit Testing, but certainly not the specific purpose of unit tests per-se. My comment however was to address the answer given, which I feel doesn't really address the OPs question satisfactorily. In particular, if you feel that unit tests are a distraction, or that they are only needed to help the least knowledgeable team member's understanding, then either you don't really understand how to apply unit tests effectively, or you perhaps feel tests are a last mile task. Either way, that is not the direction the OP was looking for his answers to come from. – S.Robins May 15 '12 at 06:28
  • For what it's worth, I do agree with you that where an application is very difficult to understand, unit tests can allow you to test ideas and use cases separately without needing to create a fully-fledged application. Personally I use unit test APIs to help me spike difficult problems. I will however always write unit tests for my code, even where it may seem fairly trivial, and I do this test first in order to get the real benefit of tests, which is confidence to change code over time without fearing the code will break and require extensive independent testing later. – S.Robins May 15 '12 at 06:33
  • In expressing my answer I was attempting to come up with a concise and all encompassing rule of thumb which answers whether unit testing is appropriate to what you are doing or not. In that respect therefore responding directly to the OPs question. I said also to write tests if the complexity "will likely in the future exceed" understanding to cover the case you suggested of "it may seem fairly trivial [now, but later...]". – Benedict May 15 '12 at 11:38
0

Unit tests are nice and all, but in complicated systems with lots of "moving parts", they often don't actually test much.

As we know, unit tests ensure that given input X for a unit, you get output Y, where X is all appropriate input and Y is all appropriate output. However, I find that most code does what the developer thought it should... but not necessarily what it actually should do.
(You might think of it like a gun. A gun has a bunch of precision parts manufactured separately and then assembled. It doesn't matter if each particular part tests well for functionality if the gun explodes when actually fired. Disclaimer: I know nothing about firearms (: )
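
A toy Python sketch of that failure mode (all names and numbers invented for illustration): each part passes its own unit tests, yet the assembled system is wrong because the parts disagree about units.

```python
def barrel_pressure_psi(charge_grams):
    # Unit-tested in isolation and "correct": returns pressure in PSI.
    return charge_grams * 800.0


def casing_rating_ok(pressure_kpa):
    # Also unit-tested in isolation and "correct": expects kilopascals.
    return pressure_kpa < 400_000


# 100 g of charge -> 80,000 PSI, which is roughly 551,600 kPa: well over
# the rating. But the assembly feeds PSI straight into the kPa check,
# so the system-level safety check "passes".
assert casing_rating_ok(barrel_pressure_psi(100.0))  # a bogus all-clear
```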

In these cases, we need actual system tests in a model production environment. Since functionality is tested there anyway, the value of unit tests is slightly diminished.

However, unit tests are still valuable in this case because they provide assurance that functionality you wished to remain unchanged does in fact remain unchanged. This is quite valuable because it gives you confidence about which areas of the system need focused testing.

user606723
  • 1,169
  • 9
  • 13
  • any explanation for downvote? – user606723 May 03 '12 at 18:59
  • 2
    Remember that down votes aren't supposed to be an *I don't like your answer* they are simply a *I didn't find your answer useful*, so you have to learn not to take it personally. As it is, I'm not sure how this answers the actual question. – Mark Booth May 04 '12 at 08:50
  • @mark Sadly, many people do use their votes to say they don't like what is written, or that they personally think that the answer (or part of it) is wrong in their view, and it seems this is often without giving the author the benefit of reading the entire answer to keep things entirely in perspective. – S.Robins May 04 '12 at 12:27
  • @user606723 The downvote is probably because the OP is asking *when* it's inappropriate to create unit tests, whereas your answer starts by effectively stating that unit tests don't test much in a complicated system - which is a huge generalization that may be factually incorrect - and ends by effectively stating that unit tests are valuable because they protect a system from functional change - which is not entirely accurate, nor contextually appropriate to the question posted. – S.Robins May 04 '12 at 12:44
  • @S.Robins - Agreed, part of the reason for occasionally mentioning what votes *are for* is to remind people who may be casting those votes that they aren't meant to be *like* or *dislike*. – Mark Booth May 04 '12 at 13:23
0

IMO unit tests are inappropriate (not unnecessary!) in any codebase where it would require tons of hours of refactoring or even an outright rewrite in order to introduce unit testing, because it would be a very hard sell to management to allow for that time allotment.

Wayne Molina
  • 15,644
  • 10
  • 56
  • 87
0

In a fast paced smaller organization, I find that developers are so pressured to stay productive and move fast that they often simply don't bother to test their own changes.

We have a general policy that states that if you make a change (whether it is a bug fix or enhancement) the developer should test it him/herself before submitting the code for further QA.

We don't actually write unit tests in most instances, because our screens are mostly designed in a way where, if a change is made on the screen, the rest of the logic has to be traversed anyway in order to find out whether the change was successful.

I find it imperative that the developer does their own testing, whether it is documented or not, because they are the ones who best know where the potential weak points in the logic may be.

So I would say, even if you don't write unit tests for every little part of the system, common sense testing should be done regardless.

raffi
  • 1