69

A short introduction to this question: I have been using TDD, and lately BDD, for over a year now. I use techniques like mocking to write my tests more efficiently. Recently I started a personal project to write a little money management program for myself. Since there was no legacy code, it seemed like the perfect project to start with TDD. Unfortunately, I did not experience the joy of TDD very much. It even spoiled my fun to the point that I gave up on the project.

What was the problem? Well, I used the TDD-like approach of letting the tests / requirements evolve the design of the program. The problem was that more than half of the development time went into writing and refactoring tests. In the end I did not want to implement any more features, because I would have needed to refactor and write too many tests.

At work I have a lot of legacy code. Here I write more and more integration and acceptance tests and fewer unit tests. This does not seem to be a bad approach, since bugs are mostly detected by the acceptance and integration tests.

My idea was that I could, in the end, write more integration and acceptance tests than unit tests. As I said, for detecting bugs unit tests are not better than integration / acceptance tests. Unit tests are also good for the design: since I used to write a lot of them, my classes are always designed to be easily testable, and the approach of letting the tests / requirements guide the design leads in most cases to a better design. The last advantage of unit tests is that they are faster. However, I have written enough integration tests to know that they can be nearly as fast as unit tests.

After looking through the web, I found very similar ideas to mine mentioned here and there. What do you think of this idea?

Edit

Responding to the questions, here is one example where the design was good, but I still needed a huge refactoring for the next requirement:

At first there were some requirements to execute certain commands. I wrote an extensible command parser, which parsed commands from some kind of command prompt and called the correct one on the model. The results were represented in a view model class: First design

There was nothing wrong here. All classes were independent of each other, and I could easily add new commands and show new data.

The next requirement was that every command should have its own view representation - some kind of preview of the result of the command. I redesigned the program to achieve a better design for the new requirement: Second design

This was also good because now every command has its own view model and therefore its own preview.

The thing is that the command parser was changed to use token-based parsing of the commands and was stripped of its ability to execute them. Every command got its own view model, and the data view model only knows the current command view model, which in turn knows the data that has to be shown.

All I wanted to know at this point was whether the new design broke any existing requirement. I did not have to change ANY of my acceptance tests, but I had to refactor or delete nearly EVERY unit test, which was a huge pile of work.

What I wanted to show here is a common situation that happened often during development. There was no problem with either the old or the new design; they just changed naturally with the requirements - as I understand it, this is one advantage of TDD, that the design evolves.

Conclusion

Thanks for all the answers and discussions. Summarizing this discussion, I have come up with an approach that I will try out in my next project.

  • First of all, I write all tests before implementing anything, as I always did.
  • For requirements, I first write some acceptance tests that test the whole program. Then I write some integration tests for the components where I need to implement the requirement. If a component works closely with another component to implement this requirement, I also write some integration tests in which both components are tested together. Last but not least, if I have to write an algorithm or any other class with many permutations - e.g. a serializer - I write unit tests for those particular classes. All other classes are not covered by any unit tests. (A rough sketch of this layering follows after this list.)
  • For bugs the process can be simplified. Normally a bug is caused by one or two components. In that case I write one integration test for those components that reproduces the bug. If it is related to an algorithm, I only write a unit test. If it is not easy to identify the component where the bug occurs, I write an acceptance test to locate it - this should be the exception.
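To give a rough feel for this layering, here is a minimal sketch (NUnit-style attributes; the MoneyManagerApp facade, the command string and the AmountSerializer are invented for illustration and are not my actual classes):

    // Acceptance test: drives the whole program through its public entry point.
    [Test]
    public void Adding_an_expense_lowers_the_balance()
    {
        var app = new MoneyManagerApp();                 // hypothetical top-level facade
        app.Execute("add expense 12.50 groceries");
        Assert.AreEqual(-12.50m, app.Balance);
    }

    // Unit test: reserved for classes with many permutations, e.g. a serializer.
    [TestCase(0.0, "0.00")]
    [TestCase(12.5, "12.50")]
    [TestCase(-3.1, "-3.10")]
    public void Amounts_are_serialized_with_two_decimals(double amount, string expected)
    {
        Assert.AreEqual(expected, new AmountSerializer().Serialize(amount));
    }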
Yggdrasil
  • 908
  • 1
  • 7
  • 10
  • 1
    possible duplicate of [When is unit testing inappropriate or unnecessary?](http://programmers.stackexchange.com/questions/147055/when-is-unit-testing-inappropriate-or-unnecessary) See also: [What is the most effective way to add functionality to unfamiliar, structurally unsound code?](http://programmers.stackexchange.com/questions/135311/what-is-the-most-effective-way-to-add-functionality-to-unfamiliar-structurally) – gnat Jan 13 '14 at 10:11
  • These questions seem to address more the problem of why to write tests at all. I want to discuss whether writing functional tests instead of unit tests might be a better approach. – Yggdrasil Jan 13 '14 at 11:10
  • per my reading, answers in duplicate question are primarily about when *non*-unit tests make more sense – gnat Jan 13 '14 at 11:13
  • That first link is itself a duplicate. Think you mean: http://programmers.stackexchange.com/questions/66480/when-is-it-appropriate-to-not-unit-test – Robbie Dee Jan 13 '14 at 11:15
  • The answers in the link from Robbie Dee are even more about why to test at all. – Yggdrasil Jan 13 '14 at 11:33
  • Sorry, I wasn't endorsing the link - merely pointing out that the posted question was itself a duplicate... – Robbie Dee Jan 13 '14 at 15:29
  • http://www.javaranch.com/unit-testing.jsp -- might be interesting read for you – DXM Jan 13 '14 at 16:29

7 Answers

45

TL;DR: As long as it meets your needs, yes.

I've been doing Acceptance Test Driven Development (ATDD) for many years now. It can be very successful. There are a few things to be aware of.

  • Unit tests really do help enforce IoC. Without unit tests, the onus is on the developers to make sure they meet the requirements of well-written code (insofar as unit tests drive well-written code).
  • They can be slower and have false failures if you are actually using resources that are typically mocked.
  • The tests do not pinpoint the specific problem the way unit tests would. You need to do more investigation to fix test failures.

Now the benefits

  • Much better test coverage, covers integration points.
  • Ensures the system as a whole meets the acceptance criteria, which is the whole point of software development.
  • Makes large refactors much easier, faster, and cheaper.

As always it's up to you to do the analysis and figure out if this practice is appropriate for your situation. Unlike many people I don't think there is an idealized right answer. It will depend on your needs and requirements.

dietbuddha
  • 8,677
  • 24
  • 36
  • 9
    Excellent point. It is all too easy to become a little space cadetish about testing and write hundreds of cases to get a warm glow of satisfaction when it "passes" with flying colours. Simply put: if your software doesn't do what it needs to do from the USER'S point of view, you have failed the first and most important test. – Robbie Dee Jan 13 '14 at 16:03
  • Good point about pinpointing the specific problem. If I have a huge requirement, I write acceptance tests which test the whole system and then write tests which test sub-tasks of certain components of the system to achieve the requirement. With this I can, in most cases, pinpoint the component where the defect lies. – Yggdrasil Jan 14 '14 at 10:31
  • 2
    "Unit tests help enforce IOC"?!? I am sure you meant DI there instead of IoC, but anyway, why would someone want to *enforce* the use of DI? Personally, I find that in practice DI leads to non-objects (procedural-style programming). – Rogério Apr 16 '15 at 21:58
  • Can you give your input on the (IMO best) option of doing BOTH integration and unit testing vs the argument of only doing integration testing? Your answer here is good, but seems to frame these things as mutually-exclusive, which I do not believe they are. – starmandeluxe Sep 20 '17 at 06:17
  • @starmandeluxe They are indeed not mutually exclusive. Rather it is a question of the value you want to derive from testing. I would unit test anywhere where the value exceeded the development/support cost of writing the unit tests. ex. I would certainly unit test the compounding interest function in a financial application. – dietbuddha Oct 13 '17 at 00:05
  • Truly the best answer IMO. It's rare that someone in the programming world doesn't try and push their own opinion as gospel (especially in TDD) and this answer simply outlines the pros and cons of only testing at a high level. Everyone's situation is different and often requires different strategies to testing. Well done @dietbuddha – Ryall Nov 28 '18 at 16:43
  • Could u give me your lights on a similar but more detailed question about the "pyramid of tests" principle : https://softwareengineering.stackexchange.com/questions/445771/how-to-deal-with-contradictory-testing-good-practices – Tristan May 29 '23 at 06:53
37

It's comparing apples and oranges.

Integration tests, acceptance tests, unit tests, behaviour tests - they are all tests and they will all help you improve your code but they are also quite different.

I'm going to go over each of the different kinds of test, give my opinion on it, and hopefully explain why you need a blend of all of them:

Integration tests:

Simply put, these test that the different component parts of your system integrate correctly - for example, you might simulate a web service request and check that the result comes back. I would generally use real(ish) static data and mocked dependencies to ensure that the result can be consistently verified.

Acceptance tests:

An acceptance test should directly correlate to a business use case. It can be huge ("trades are submitted correctly") or tiny ("filter successfully filters a list") - it doesn't matter; what matters is that it should be explicitly tied to a specific user requirement. I like to focus on these for test-driven development because it means we have a good reference manual of tests to user stories for dev and QA to verify.

Unit tests:

For small, discrete units of functionality that may or may not make up an individual user story by themselves - for example, a user story which says that we retrieve all customers when we access a specific web page can be an acceptance test (simulate hitting the web page and check the response), but it may also give rise to several unit tests (verify that security permissions are checked, verify that the database connection queries correctly, verify that any code limiting the number of results is executed correctly) - these are all "unit tests" that aren't a complete acceptance test.

Behaviour tests:

These define what the flow of an application should be for a specific input. For example: "when a connection cannot be established, verify that the system retries the connection." Again, this is unlikely to be a full acceptance test, but it still allows you to verify something useful.
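As a rough sketch of such a behaviour test (NUnit-style asserts; IConnection, Client and FlakyConnection are invented for illustration, with a hand-rolled fake standing in for a mocking framework):

    public interface IConnection { bool Open(); }

    // Hypothetical system under test: retries once if the first attempt fails.
    public class Client
    {
        private readonly IConnection _connection;
        public Client(IConnection connection) { _connection = connection; }
        public bool Connect() => _connection.Open() || _connection.Open();
    }

    // Fake connection that fails on the first attempt and succeeds on the retry.
    public class FlakyConnection : IConnection
    {
        public int Attempts { get; private set; }
        public bool Open() { Attempts++; return Attempts > 1; }
    }

    [Test]
    public void Retries_when_the_first_connection_attempt_fails()
    {
        var connection = new FlakyConnection();
        var client = new Client(connection);

        Assert.IsTrue(client.Connect());
        Assert.AreEqual(2, connection.Attempts); // the behaviour we care about: a retry happened
    }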

These are all my own opinions, formed through much experience of writing tests; I don't like to focus on the textbook approaches - rather, focus on what gives your tests value.

elixenide
  • 442
  • 1
  • 6
  • 17
Michael
  • 2,979
  • 19
  • 13
  • By your definitions, I assume what I mean is to write more behaviour tests (?) than unit tests. A unit test, for me, is a test which tests a single class with all dependencies mocked. There are cases where unit tests are most useful, e.g. when I write a complex algorithm. Then I have many examples with expected output of the algorithm, and I want to test these at the unit level because it is actually faster than a behaviour test. I do not see the value of testing a class at the unit level when it has only a handful of paths through it which can easily be tested by a behaviour test. – Yggdrasil Jan 13 '14 at 09:58
  • 15
    I personally think acceptance tests are the most important, behaviour tests are important when testing things like communication, reliability and error cases and unit tests are important when testing small complex features (algorithms would be a good example of this) – Michael Jan 13 '14 at 10:02
  • I am not so familiar with your terminology. We develop a programming suite, in which I am responsible for the graphical editor. My tests test the editor with mocked services from the rest of the suite and with a mocked UI. What kind of test would that be? – Yggdrasil Jan 13 '14 at 10:11
  • 1
    Depends on what you're testing - are you testing business features (acceptance tests)? Are you testing integration (integration tests)? Are you testing what happens when you click a button (behaviour tests)? Are you testing algorithms (unit tests)? – Michael Jan 13 '14 at 15:32
  • In this case I mean acceptance tests. I actually test the business requirements or bugs. – Yggdrasil Jan 14 '14 at 10:35
  • Agreed you generally need a mix of one or more of the above on most projects. Different elements of a function/component/feature/story may require testing in one or more of these areas. It ultimately comes down to the level of test coverage vs effort vs confidence you and your team are willing to take on. – Chris Lee Jan 15 '14 at 22:21
  • 4
    "I don't like to focus on the textbook approaches - rather, focus on what gives your tests value" Oh, so true! First question to *always* ask is "what problem do I solve by doing this?". And different project may have different problem to solve! – Laurent Bourgault-Roy Mar 13 '14 at 15:05
  • Could u give me your lights on a similar but more detailed question about the "pyramid of tests" principle : https://softwareengineering.stackexchange.com/questions/445771/how-to-deal-with-contradictory-testing-good-practices – Tristan May 29 '23 at 06:53
21

Well, I used the TDD-like approach of letting the tests / requirements evolve the design of the program. The problem was that more than half of the development time went into writing and refactoring tests.

Unit tests work best when the public interface of the components they are used for does not change too often. This means, when the components already are designed well (for example, following the SOLID principles).

So believing that a good design just "evolves" from "throwing" a lot of unit tests at a component is a fallacy. TDD is not a "teacher" of good design; it can only help a little bit to verify that certain aspects of the design are good (especially testability).

When your requirements change and you have to change the internals of a component, and this breaks 90% of your unit tests so that you have to refactor them very often, then the design most probably was not so good.

So my advice is: think about the design of the components you have created, and how you can make them follow the open/closed principle more closely. The idea of the latter is to make sure the functionality of your components can be extended later without changing them (and thus without breaking the component's API used by your unit tests). Such components can (and should) be covered by unit tests, and the experience should not be as painful as you have described.
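To make that concrete, here is a minimal sketch of what an open/closed command setup might look like (all names are hypothetical and chosen only to echo your command example, not your actual design):

    using System.Collections.Generic;
    using System.Linq;

    // New commands are added by implementing ICommand and registering an instance;
    // the parser itself never changes, so tests written against it stay valid.
    public interface ICommand
    {
        string Name { get; }
        void Execute();
    }

    public class CommandParser
    {
        private readonly Dictionary<string, ICommand> _commands;

        public CommandParser(IEnumerable<ICommand> commands)
        {
            _commands = commands.ToDictionary(c => c.Name);
        }

        public ICommand Parse(string input)
        {
            // Look up the command by its first token, e.g. "add 12.50" -> "add".
            return _commands[input.Split(' ')[0]];
        }
    }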

When you cannot come up with such a design immediately, acceptance and integration tests may indeed be a better start.

EDIT: Sometimes the design of your components can be fine, but the design of your unit tests may cause issues. Simple example: you want to test the method "MyMethod" of class X and write:

    var x = new X();
    Assert.AreEqual("expected value 1", x.MyMethod("value 1"));
    Assert.AreEqual("expected value 2", x.MyMethod("value 2"));
    // ...
    Assert.AreEqual("expected value 500", x.MyMethod("value 500"));

(assume the values have some kind of meaning).

Assume further that in production code there is just one call to X.MyMethod. Now, for a new requirement, the method "MyMethod" needs an additional parameter (for example, something like a context), which cannot be omitted. Without unit tests, one would have to refactor the calling code in just one place. With these unit tests, one has to refactor 500 places.

But the cause here is not the unit tests themselves; it is the fact that the same call to "X.MyMethod" is repeated again and again, not strictly following the "Don't Repeat Yourself" (DRY) principle. So the solution here is to put the test data and the related expected values in a list and run the calls to "MyMethod" in a loop (or, if the testing tool supports so-called "data-driven tests", to use that feature). This reduces the number of places to change in the unit tests when the method signature changes to one (as opposed to 500).
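For illustration, the loop-based variant might look like this inside the same test method (a minimal sketch reusing the hypothetical X and MyMethod from above; with NUnit you could equally use a data-driven feature such as TestCaseSource):

    // All test data lives in one table, and MyMethod is called in exactly one place,
    // so a signature change affects only this single call site.
    var cases = new Dictionary<string, string>
    {
        { "value 1", "expected value 1" },
        { "value 2", "expected value 2" },
        // ...
        { "value 500", "expected value 500" }
    };

    var x = new X();
    foreach (var c in cases)
    {
        Assert.AreEqual(c.Value, x.MyMethod(c.Key));
    }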

In your real-world case, the situation might be more complex, but I hope you get the idea - when your unit tests use a component's API that may become subject to change, make sure you reduce the number of calls to that API to a minimum.

Doc Brown
  • 199,015
  • 33
  • 367
  • 565
  • "This means, when the components already are designed well.": I agree with you, but how can the components be already designed if you write the tests before writing the code, and the code *is* the design? At least this is how I have understood TDD. – Giorgio Jan 13 '14 at 20:01
  • 2
    @Giorgio: actually, it does not really matter if you write the tests first or later. Design means making decisions about the responsibility of a component, about the public interface, about dependencies (direct or injected), about run-time or compile-time behaviour, about mutability, about names, about data flow, control flow, layers etc. Good design also means deferring some decisions to the latest possible point in time. Unit tests can show you indirectly if your design was ok: if you have to refactor a lot of them afterwards when requirements change, it probably was not. – Doc Brown Jan 13 '14 at 21:19
  • @Giorgio: an example might clarify: say you have a component X with a method "MyMethod" and 2 parameters. Using TDD, you write `X x= new X(); AssertTrue(x.MyMethod(12,"abc"))` before actually implementing the method. Using upfront design, you may write `class X{ public bool MyMethod(int p, string q){/*...*/}}` first, and write the tests later. In both cases, you have made the same design decision. If the decision was a good or a bad one, TDD will not tell you. – Doc Brown Jan 13 '14 at 21:27
  • 1
    I agree with you: I am a bit skeptical when I see TDD applied blindly with the assumption that it will automatically produce good design. Furthermore, sometimes TDD gets in the way if the design is not clear yet: I am forced to test the details before I have an overview of what I am doing. So, if I understand correctly, we agree. I think that (1) unit testing helps to verify a design but design is a separate activity, and (2) TDD is not always the best solution because you need to organize your ideas before starting to write tests and TDD can slow you down in this. – Giorgio Jan 13 '14 at 21:36
  • 1
    In short, unit tests can show flaws in the internal design of a component. The interface and the pre- and post-conditions must be known beforehand, otherwise you cannot create the unit test. So the design of *what* the component does needs to be performed before the unit test can be written. How it does this - the lower-level design, detailed design, inner design or whatever you want to call it - can take place after the unit test has been written. – Maarten Bodewes Jan 13 '14 at 23:47
  • Please look at my edits. There I have shown that it is common for the design to change even when the previous design was not bad. – Yggdrasil Jan 14 '14 at 10:13
  • @Yggdrasil: actually, what I miss in your example is how your unit tests looked before, and how they looked after the change. Without that, it's hard to give you a recommendation. Maybe your unit test code did not follow the DRY principle consistently? – Doc Brown Jan 15 '14 at 19:43
  • @Yggdrasil: I edited my answer to include an example about the design of tests. Don't know if this can be transferred to your case, but maybe it's worth a look. – Doc Brown Jan 16 '14 at 07:20
  • @DocBrown I get your idea, but that is not the problem here. The problem is that, e.g., the data view model in the old design was tested to update its data on changes in the model and to forward this update to the view. After the change, it needs to watch for updates of the command prompt, then fetch the current command view model from the command chooser and finally forward the update to the view. The only test that stays nearly the same is the test of forwarding the update to the view. – Yggdrasil Jan 17 '14 at 07:59
9

Yes, of course it is.

Consider this:

  • a unit test is a small, targeted piece of testing that exercises a small piece of code. You write lots of them to achieve decent code coverage, so that all of the code (or at least the majority of the awkward bits) is tested.
  • an integration test is a large, broad piece of testing that exercises a large surface of your code. You write a few of them to achieve decent code coverage, so that all of the code (or at least the majority of the awkward bits) is tested.

See the overall difference....

The issue is one of code coverage: if you can achieve a full test of all your code using integration/acceptance testing, then there's no problem. Your code is tested. That's the goal.

I think you may need to mix them up, as every TDD-based project will require some integration testing just to make sure that all the units actually work well together (I know from experience that a codebase with 100% passing unit tests does not necessarily work when you put it all together!).

The problem really comes down to the ease of testing, debugging the failures, and fixing them. Some people find their unit tests are very good at this: they are small and simple, and failures are easy to see; but the disadvantage is that you have to reorganise your code to suit the unit-test tools and write very many of them. An integration test is more difficult to write so that it covers a lot of code, and you will probably have to use techniques like logging to debug any failures (though I'd say you have to do this anyway; you can't unit test failures when on-site!).

Either way, though, you still get tested code; you just need to decide which mechanism suits you better. (I'd go with a bit of a mix: unit test the complex algorithms, and integration test the rest.)

gbjbaanb
  • 48,354
  • 6
  • 102
  • 172
  • 7
    Not quite true... With integration tests it's possible to have two components that are both buggy, but their bugs cancel out in the integration test. It doesn't matter much until the end user uses it in a way in which only one of these components is used... – Michael Shaw Jan 13 '14 at 15:15
  • 1
    Code coverage != tested - aside from bugs cancelling each other out, what about scenarios you've never thought about? Integration testing is fine for happy path testing but I rarely see adequate integration testing when things aren't going well. – Michael Jan 13 '14 at 16:34
  • And from experience, it's rarely bugs in the happy path that are the problem! – Michael Jan 13 '14 at 16:34
  • 4
    @Ptolemy I think the likelihood of 2 buggy components cancelling each other out is far, far lower than that of 2 working components interfering with each other. – gbjbaanb Jan 13 '14 at 16:37
  • 2
    @Michael then you just haven't put enough effort into the testing; I did say it is more difficult to do good integration testing, as the tests have to be much more detailed. You can supply bad data in an integration test just as easily as you can in a unit test. Integration testing != happy path. It's about exercising as much code as you can; that's why there are code coverage tools that show you how much of your code has been exercised. – gbjbaanb Jan 13 '14 at 16:39
  • Oh I agree that integration tests CAN exercise bad data and how faults are handled, my point was that in my experience, that isn't what integration tests do - YMMV though - I'd also say that hitting *every* exceptional combination in an integration test is very, very hard - I like integration tests, but less tests testing huge swathes of code (and I say testing loosely) would not give me the same confidence of detailed unit tests though I do think they are important to have (just not as a replacement). – Michael Jan 13 '14 at 16:42
  • Absolutely, but that's why I prefer to use both - it means I don't have to spend all day writing a thousand unit tests, I use them for the more complicated bits only, and use integration tests for the rest, its a trade off between dev time and risk. I never found unit tests to catch all bugs anyway so I don't miss the 'gamefication' of using them so much. And besides, you still need integration style tests even if its just for performance testing. – gbjbaanb Jan 13 '14 at 17:46
  • Two bugs cancelling each other out is more common than you might think, especially if both components were written by the same developer with the same misunderstanding of the system. – Michael Shaw Jan 13 '14 at 18:43
  • 1
    @Michael When I use tools like Cucumber or SpecFlow correctly, I can create integration tests which also test exceptions and extreme situations as fast as unit tests. But I agree: if one class has too many permutations, I prefer writing a unit test for that class. But this is less often the case than having classes with only a handful of paths. – Yggdrasil Jan 14 '14 at 10:23
  • @gbjbaanb I do not like to write integration tests which cover a huge amount of code; they are brittle and hard to debug. In my case my integration tests concentrate on one aspect of the program, so I have many of them. But they are easy to write when using a BDD approach. – Yggdrasil Jan 14 '14 at 10:25
2

The "win" with TDD, is that once the tests have been written, they can be automated. The flip side is that it can consume a significant chunk of the development time. Whether this actually slows the whole process down is moot. The argument being that the upfront testing reduces the number of errors to be fixed at the end of the development cycle.

This is where BDD comes in, as behaviours can be included within the unit testing, so the process is by definition less abstract and more tangible.

Clearly, if an infinite amount of time were available, you'd do as many tests of various kinds as possible. However, time is generally limited, and continual testing is only cost-effective up to a point.

This all leads to the conclusion that the tests that provide the most value should be at the front of the process. This in itself doesn't automatically favour one type of testing over another - more that each case has to be taken on its merits.

If you're writing a command-line widget for personal use, you'd primarily be interested in unit tests, whereas a web service, say, would require a substantial amount of integration/behavioural testing.

Whilst most types of test concentrate on what could be called the "racing line" i.e. testing what is required by the business today, unit testing is excellent at weeding out subtle bugs that could surface in later development phases. Since this is a benefit that can't readily be measured, it is often overlooked.

Robbie Dee
  • 9,717
  • 2
  • 23
  • 53
  • 1
    I write my tests upfront, and I write enough tests to cover most of the errors. As for bugs which surface later: this sounds to me like a requirement has changed or a new requirement has come into play. Then the integration/behaviour tests need to be changed or added. If a bug then shows up in an old requirement, my test for it will reveal it. As for the automation: all my tests run all the time. – Yggdrasil Jan 13 '14 at 11:01
  • The example I was thinking of in the last paragraph is where say, a library was used exclusively by a single application but there was then a business requirement to make this a general purpose library. In this case it might better serve you to have at least some unit testing rather than write new integration/behaviour tests for every system you attach to the library. – Robbie Dee Jan 13 '14 at 13:18
  • 2
    Automating testing and unit testing are completely orthogonal matters. Any self-respecting project will have automated integration and functional testing. Granted, you don't often see manual unit tests, but they can exist (basically, a manual unit test is a testing utility for some specific functionality). – Jan Hudec Jan 13 '14 at 14:42
  • Well indeed. There has been a flourishing market for automated 3rd party tools that exist outside the development sphere for some considerable time. – Robbie Dee Jan 13 '14 at 15:55
2

I think it's a horrible idea.

Since acceptance tests and integration tests touch broader portions of your code to test a specific target, they're going to need more refactoring over time, not less. Worse yet, since they cover broad sections of the code, they increase the time you spend tracking down the root cause, because you've got a broader area to search.

No, you should usually write more unit tests, unless you have an odd app that is 90% UI or something else that's awkward to unit test. The pain you're running into isn't from unit tests but from doing test-first development. Generally, you should spend at most 1/3 of your time writing tests. After all, they're there to serve you, not vice versa.

Telastyn
  • 108,850
  • 29
  • 239
  • 365
  • 2
    The main beef I hear against TDD is it disrupts the natural development flow and enforces defensive programming from the start. If a programmer is already under time pressure, they're likely to want to just cut the code and polish it later. Of course, meeting an arbitrary deadline with buggy code is false economy. – Robbie Dee Jan 13 '14 at 16:13
  • 2
    Indeed, especially as "polish it later" never seems to actually happen - every review I do where a developer pushes the "it needs to go out, we'll do it later" when we all know, that's not going to happen - technical debt = bankrupt developers in my opinion. – Michael Jan 13 '14 at 16:36
  • @Robbie Dee: I agree that TDD can disrupt the natural development flow, not because I'd like to save time and meet a deadline, but because it forces me to write tests for code whose structure is still not clear to me. I become very frustrated when I write a test, implement a function, and then discover that I need a different function and have to throw away the test as well. So, instead of helping me understand my code, writing tests often interrupts my flow of thought and forces me to bother about many details before I have an overall picture of the code I want to write. – Giorgio Jan 13 '14 at 20:12
  • 1
    As Doc Brown has pointed out, unit tests only work well if you have well-designed code. And sometimes you have to write some code as a proof of concept for your design. Writing tests in this phase just comes in your way. At least this is what I have experienced several times. – Giorgio Jan 13 '14 at 20:14
  • 1
    This is where I believe BDD has the edge on TDD and DDD. You're testing fully formulated behaviours. Furthermore, the bonus from BDD is that the test output acts as documentation, which, if you're an Agile house, is a real boon. – Robbie Dee Jan 14 '14 at 08:59
  • Some are that annoyed with how TDD interferes that they only retrofit it at the end. There is still some definite benefit to be had with this approach but I think personally there is some middle ground. For example, I used to write a test stub, then a coherent piece of code and then flesh out the test which seemed to work pretty well for me. – Robbie Dee Jan 14 '14 at 09:05
  • 3
    The answer makes sense to me, don't know why it got so many minuses. To quote Mike Cohn : *"Unit testing should be the foundation of a solid test automation strategy and as such represents **the largest part** of the pyramid. Automated unit tests are wonderful because they give specific data to a programmer—there is a bug and it’s on line 47"* http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid – guillaume31 Jan 14 '14 at 13:47
  • I couldn't agree more but opening with: "I think it's a horrible idea" probably hasn't helped our friend any... :-/ – Robbie Dee Jan 14 '14 at 15:32
  • 4
    @guillaume31 Just because some guy once said that they are good does not mean that they are good. In my experience bugs are NOT detected by unit tests, because the tests were already changed to match the new requirement up front. Also, most bugs are integration bugs; therefore I detect most of them with my integration tests. – Yggdrasil Jan 20 '14 at 07:44
  • 2
    @Yggdrasil I could also quote Martin Fowler : "If you get a failure in a high level test, not just do you have a bug in your functional code, you also have a missing unit test". http://martinfowler.com/bliki/TestPyramid.html Anyway, if integration tests alone work for you, then fine. My experience is that, while needed, they are slower, deliver less precise failure messages and less maneuvrable (more combinatorial) than unit tests. Also, I tend to be less future-proof when I write integration tests - reasoning about pre-(mis?)conceived scenarios rather than the correctness of an object per se. – guillaume31 Jan 20 '14 at 09:33
  • I think it's a common misconception that unit tests should "find bugs". unit tests don't explore your codebase and look for bugs, they are a documentation and enforcement of your business rules. they are a formal way of saying "this is how this module/class/method is defined and how it works", and to make sure that this behavior doesn't change over time (regression testing). It's more a design tool and a shield against code rot than it is a QA employee sitting looking for bugs. – sara Feb 05 '16 at 14:29
1

The last advantage of unit tests is that they are faster. I have written enough integration tests to know that they can be nearly as fast as the unit tests.

This is the key point, not only "the last advantage". As the project gets bigger and bigger, your integration and acceptance tests become slower and slower - and here I mean so slow that you are going to stop executing them.

Of course, unit tests become slower as well, but they are still more than an order of magnitude faster. For example, in my previous project (C++, some 600 kLOC, 4000 unit tests and 200 integration tests), it took about one minute to execute all the unit tests and more than 15 to execute the integration tests. Building and executing the unit tests for the part being changed would take less than 30 seconds on average. When you can do it that fast, you'll want to do it all the time.

Just to make it clear: I am not saying not to add integration and acceptance tests, but it looks like you did TDD/BDD the wrong way.

Unit tests are also good for the design.

Yes, designing with testability in mind will make the design better.

The problem was that more than half of the development time went into writing and refactoring tests. So in the end I did not want to implement any more features because I would have needed to refactor and write too many tests.

Well, when requirements change, you do have to change the code. I would say you didn't finish your work if you didn't write unit tests. But this doesn't mean you should have 100% coverage with unit tests - that is not the goal. Some things (like a GUI, or accessing a file, ...) are not even meant to be unit tested.

The result of this is better code quality, and another layer of testing. I would say it is worth it.


We also had several thousand acceptance tests, and it would take a whole week to execute them all.

BЈовић
  • 13,981
  • 8
  • 61
  • 81
  • 1
    Have you looked at my example? This happened all the time. The point is, when I implement a new feature I change/add the unit tests so that they test the new feature - therefore no unit test will be broken. In most cases the changes have side effects which are not detected by unit tests - because the environment is mocked. Because of this, in my experience, no unit test ever told me that I had broken an existing feature. It was always the integration and acceptance tests which showed me my mistakes. – Yggdrasil Jan 17 '14 at 08:07
  • As for the execution time: with a growing application I mostly have a growing number of isolated components; if not, I have done something wrong. When I implement a new feature, it mostly affects only a limited number of components. I write one or more acceptance tests in the scope of the whole application - which can be slow. Additionally I write the same tests from the components' point of view - these tests are fast, because the components are usually fast. I can execute the component tests all the time, because they are fast enough. – Yggdrasil Jan 17 '14 at 08:12
  • @Yggdrasil As I said, unit tests are not almighty, but they are usually the first layer of testing, since they are the fastest. Other tests are also useful and should be combined. – BЈовић Jan 17 '14 at 08:14
  • 1
    Just because they are faster does not mean that they should be used for that reason alone, or because it is common to write them. Like I said, my unit tests do not break - so they do not have any value for me. – Yggdrasil Jan 17 '14 at 14:44
  • @Yggdrasil If your unit tests never break and have no value, then it is obvious that you did something wrong. If you think they provide no value, then just delete them and forget about unit testing. – BЈовић Jan 17 '14 at 16:36
  • When I do TDD the following scenario happens: before I make a change in a class, I change my tests for this change and/or write new ones. Then they are red. I implement the change and they are green again. But that does not mean that I have not introduced a new bug. Unless I have broken something inside the class I changed, no unit test will tell me my mistake, will it? These mistakes are shown only by my component tests. The case where I broke something inside the class where I made the change is rare. I hope this example shows you my problem. – Yggdrasil Jan 18 '14 at 10:10
  • @Yggdrasil Sounds good. In that case, you expect too much from unit tests. They do not catch all kinds of errors. That is why you need to test at every possible level. – BЈовић Jan 18 '14 at 17:56
  • 1
    The question was: what value do I get from unit tests when they do not break? Why bother writing them, when I always need to adjust them to new requirements? The only value I see in them is for algorithms and other classes with many permutations. But these are fewer than the component and acceptance tests. – Yggdrasil Jan 18 '14 at 18:07
  • @Yggdrasil You keep asking the same questions over and over. They do catch bugs, but that doesn't mean they will catch all of them. Another point is improved design - something that cannot be achieved with acceptance or integration tests. – BЈовић Jan 20 '14 at 07:35