59

I’m asking this question because of problems I have experienced on TDD projects. I have run into the following challenges when creating unit tests.

  • Generating and maintaining mock data

It’s hard and unrealistic to maintain a large body of mock data. It gets even harder when the database structure changes.

  • Testing GUI

Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce a GUI scenario.

  • Testing the business

In my experience TDD works well if you limit it to simple business logic. Complex business logic, however, is hard to test because the number of test combinations (the test space) is very large.

  • Contradiction in requirements

In reality it’s hard to capture all requirements during analysis and design. On a complex project, the noted requirements often contradict each other, and the contradiction is found late, during the implementation phase. TDD requires that requirements are 100% correct. In such cases one could expect conflicting requirements to be caught while the tests are being written. But the problem is that this isn’t the case in complex scenarios.

I have read this question: Why does TDD work?

Does TDD really work for complex enterprise projects, or is it in practice limited to certain types of project?

Amir Rezaei
  • 10,938
  • 6
  • 61
  • 86
  • +1 I had the same question after reading that question - I use it in a limited sense with the same problem with mock data. – Michael K Jan 31 '11 at 13:54
  • 22
    "TDD requires that requirements are 100% correct" where "requirements" means "I need to know how this single method must work". And if you don't know how the method's supposed to work, how are you supposed to implement it? – Frank Shearar Apr 14 '11 at 09:51
  • @FrankShearar: You know how the method should work on expected input. Say strcmp must take 2 pointers, of which neither is nullptr and both are valid. You don't know what will happen when you feed it a bad pointer. Maybe on some architectures you can catch the AV and do something sane, but you don't imagine such a scenario is possible, so your tests are not covering it. – Coder Jan 26 '12 at 20:57
  • 7
    I would say TDD is the only thing that works for large projects! The larger the project the more complex the interactions and the more requirements randomly change - only TDD can keep up – Martin Beckett Jan 26 '12 at 21:42
  • 2
    Actually, the great thing about TDD in terms of requirement changes is when the requirements change, you can just write a new test for that requirement and be certain it won't break any of the rest of the project. If you didn't already have a test written, you'd have to also write tests to make sure your change didn't break anything else. Also, I love it for bug fixes. Even if you didn't develop everything using TDD, use it for bug fixes: Write a test that reproduces the bug, then fix the bug and run the test again. – Jordan Reiter Feb 24 '17 at 20:21

14 Answers

55

It’s hard and unrealistic to maintain a large body of mock data. It gets even harder when the database structure changes.

False.

Unit testing doesn't require "large" mock data. It requires enough mock data to test the scenarios and nothing more.
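
As a sketch of how small that can be: the test below (JUnit 5; the Order type and the discount rule are hypothetical, invented for illustration) builds only the two fields the scenario under test actually reads.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountRuleTest {

    // Hypothetical domain type, cut down to the two fields this rule reads.
    record Order(int itemCount, double total) {}

    // Hypothetical rule under test: 5% off orders of 10 items or more.
    static double discountFor(Order order) {
        return order.itemCount() >= 10 ? order.total() * 0.05 : 0.0;
    }

    @Test
    void bulkOrdersGetFivePercent() {
        // Enough data for this scenario and nothing more.
        assertEquals(5.0, discountFor(new Order(10, 100.0)), 1e-9);
    }

    @Test
    void smallOrdersGetNoDiscount() {
        assertEquals(0.0, discountFor(new Order(2, 100.0)), 1e-9);
    }
}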

Also, the truly lazy programmers ask the subject matter experts to create simple spreadsheets of the various test cases. Just a simple spreadsheet.

Then the lazy programmer writes a simple script to transform the spreadsheet rows into unit test cases. It's pretty simple, really.

When the product evolves, the spreadsheets of test cases are updated and new unit tests generated. Do it all the time. It really works.
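
To illustrate the shape of that pipeline: the answer's own setup was a Python script generating test cases, but in Java a similar effect can come straight from JUnit 5's @CsvFileSource, which feeds each spreadsheet row (exported as CSV) to a parameterized test. The taxFor logic and the CSV file below are hypothetical stand-ins.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvFileSource;

class TaxRuleTest {

    // Hypothetical system under test.
    static double taxFor(String region, double netAmount) {
        return "EU".equals(region) ? netAmount * 0.20 : netAmount * 0.08;
    }

    // Each data row of the experts' spreadsheet, exported as
    // src/test/resources/tax-cases.csv with a header row
    // (e.g. "region,netAmount,expectedTax"), becomes one test case.
    @ParameterizedTest
    @CsvFileSource(resources = "/tax-cases.csv", numLinesToSkip = 1)
    void taxMatchesExpertSpreadsheet(String region, double netAmount, double expectedTax) {
        assertEquals(expectedTax, taxFor(region, netAmount), 0.005);
    }
}

When the experts update the spreadsheet, re-exporting the CSV regenerates the test cases with no code changes.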

Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce a GUI scenario.

What? "Reproduce"?

The point of TDD is to design things for testability (Test-Driven Development). If the GUI is that complex, then it has to be redesigned to be simpler and more testable. Simpler also means faster, more maintainable and more flexible. But mostly, simpler will mean more testable.

In my experience TDD works well if you limit it to simple business logic. Complex business logic, however, is hard to test because the number of test combinations (the test space) is very large.

That can be true.

However, asking the subject matter experts to provide the core test cases in a simple form (like a spreadsheet) really helps.

The spreadsheets can become rather large. But that's okay, since I used a simple Python script to turn the spreadsheets into test cases.

And. I did have to write some test cases manually because the spreadsheets were incomplete.

However. When the users reported "bugs", I simply asked which test case in the spreadsheet was wrong.

At that moment, the subject matter experts would either correct the spreadsheet or they would add examples to explain what was supposed to happen. The bug reports can -- in many cases -- be clearly defined as a test case problem. Indeed, from my experience, defining the bug as a broken test case makes the discussion much, much simpler.

Rather than trying to explain a super-complex business process verbally, the experts have to produce concrete examples of the process.

TDD requires that requirements are 100% correct. In such cases one could expect conflicting requirements to be caught while the tests are being written. But the problem is that this isn’t the case in complex scenarios.

Not using TDD absolutely mandates that the requirements be 100% correct. Some claim that TDD can tolerate incomplete and changing requirements, where a non-TDD approach can't work with incomplete requirements.

If you don't use TDD, the contradiction is found late, during the implementation phase.

If you use TDD, the contradiction is found earlier, when the code passes some tests and fails other tests. Indeed, TDD gives you proof of the contradiction early in the process, long before user acceptance testing (and the arguments that would otherwise happen there).

You have code which passes some tests and fails others. You look at only those tests and you find the contradiction. It works out really, really well in practice because now the users have to argue about the contradiction and produce consistent, concrete examples of the desired behavior.
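
A contrived sketch of what that looks like in practice: the two tests below encode hypothetical, mutually exclusive requirements, so no implementation can turn both green, and the red test is the proof of the contradiction.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ShippingRuleTest {

    // Hypothetical implementation that satisfies requirement A only.
    static double shippingFor(double orderTotal, boolean premiumMember) {
        return orderTotal >= 50.0 ? 0.0 : 4.99;
    }

    // Requirement A: orders of $50 or more ship free.
    @Test
    void largeOrdersShipFree() {
        assertEquals(0.0, shippingFor(60.0, true), 1e-9);
    }

    // Requirement B: premium members always pay a flat $2.99.
    // For a premium member with a $60 order, A and B demand different
    // answers, so whatever shippingFor does, one of these tests fails:
    // the contradiction surfaces as a red test, not as a late argument.
    @Test
    void premiumMembersPayFlatRate() {
        assertEquals(2.99, shippingFor(60.0, true), 1e-9);
    }
}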

S.Lott
  • 45,264
  • 6
  • 90
  • 154
  • +1 Out of curiosity, what language are you generating tests for? – Michael K Jan 31 '11 at 13:55
  • 2
    The most complex unit tests I've had to work with were for applications written in Java. The spreadsheets with the examples were very large: many files, each with many tabs with many cases per tab. I used Python to build unittest cases from the spreadsheets. Java, however, was the language in which the final application was written. – S.Lott Jan 31 '11 at 13:59
  • 4
    @S.Lott Since the OP is most likely talking about WPF/SL with regard to MVVM your GUI testing comments are a bit off base. Even with decoupling and a strict MVVM approach the View by definition is still cumbersome to test. This is with any UI. Testing the View is notoriously time consuming, cumbersome and a low ROI. This is where the argument with regard to MVVM surfaces that testing the M/VM and disregarding the V may be the best approach, however testing components on the View such as placement of controls, coloring, etc...is still extremely time consuming and complex. – Aaron McIver Jan 31 '11 at 14:40
  • @Aaron: Many folks eschew unit testing placement of controls, coloring, etc., because it is so painful and so low ROI. That -- in no way -- devalues TDD. It merely accepts the fact that some aspects are harder to test than others. – S.Lott Jan 31 '11 at 14:46
  • @S.Lott My comment was around..."If the GUI is that complex, then it has to be redesigned to be simpler and more testable. Simpler also means faster, more maintainable and more flexible. But mostly simpler will mean more testable." Therefore making the View simpler (almost never a real option if defined by a UX team) and more testable will in no way address the problems associated with what I pointed out with regard to testing the View; it is inherently a problem child. Since the goal of the OP's post was about TDD working for complex projects the issues with testing a View were contextual. – Aaron McIver Jan 31 '11 at 15:00
  • @Aaron: All true. But. It -- in no way -- devalues TDD. It merely reflects on the complexity of testing "View ... defined by a UX team". "it is inherently a problem child". But that has little bearing on the value of TDD, does it? – S.Lott Jan 31 '11 at 15:03
  • 3
    @S.Lott It depends on the scope. TDD does not provide substantial value with regard to testing a View. TDD however does provide substantial value with regard to testing the Model and ViewModel. If your scope was the ViewModel and View then TDD's value would be much different than if your scope was the Model and required services. Don't get me wrong, I believe TDD has substantial value across complex projects...its value just differs based on scope. – Aaron McIver Jan 31 '11 at 15:10
  • 2
    Thanks for the perspective about stakeholders creating their own data scenarios in a spreadsheet, and developing unit tests directly from that. I've never heard TDD explained quite that way before. Is that your invention, or is there some place on the internet that has examples of this process? – Robert Harvey Jan 31 '11 at 16:46
  • 6
    @Robert Harvey: It can't be my invention. I'm too lazy to invent anything. – S.Lott Jan 31 '11 at 16:57
  • "Unit testing doesn't require "large" mock data. It requires enough mock data to test the scenarios and nothing more." I disagree! In our project with more than 200 tables a unit test has to fetch data from many mock tables. Each permutation of unit test (Unit test of the same business logic with different parameter) also requires data from different mock table. In our case a setup of spreadsheet is impossible because of complexity of data and unrealistic maintenance of such spreadsheet. – Amir Rezaei Feb 01 '11 at 11:31
  • “What? "Reproduce"? The point of TDD is …. “ . I don’t think that you get the point of complex GUI and reproducing the scenario. In our project when you are going to test a GUI component you have to prepare it with right data and later with a specific state. To load the component with domain object and setting it up with right state isn’t about laziness. It’s simply unrealistic. The number of permutation of state and data grows unrealistic! Adding to that for every change in GUI component would also lead in huge amount of man hour. – Amir Rezaei Feb 01 '11 at 11:38
  • 5
    @Amir Rezaei: I'm sorry that your minimal unit test data is complex. That has nothing to do with TDD. Your application is complex. You still need to test it, right? You still must produce test data? If you're not going to follow TDD, how are you going to create a testable application? Luck? Hope? Yes. It's complex. Nothing removes the complexity. TDD assures that you will actually test that complexity. – S.Lott Feb 01 '11 at 12:28
  • 3
    @Amir Rezaei: "The number of permutation of state and data grows unrealistic!". It shouldn't. If it does, you don't seem to be designing for testability. It almost sounds like you're designing to create complexity. The point is this. You **must** test. You can either design the software for testability (TDD). Or you can design the software that makes testing hard or impossible. I'm suggesting that -- since you **must** test -- you must design for testing. TDD. – S.Lott Feb 01 '11 at 12:30
  • 3
    The fact is that we are designing for reality. We didn’t make up the business rules. They are there and they are complex. – Amir Rezaei Feb 01 '11 at 13:39
  • @S.Lott Sure we can do unit tests for everything as long as there are **resources** in the project to do so. The reality of many projects is that resources are limited. Hence my point about whether this is realistic for such projects. – Amir Rezaei Feb 01 '11 at 13:46
  • 5
    @Amir Rezaei: "we are designing for reality". Are you going to write tests? If so, then design for testability. If you are not going to write tests, then how will you know anything works? – S.Lott Feb 01 '11 at 15:08
  • @S.Lott We have a test group that writes test specs and does the testing. – Amir Rezaei Feb 01 '11 at 15:13
  • 2
    @Amir Rezaei: The point of TDD is to design software so it can be tested. That's why TDD works for complex projects. Are you saying that because there is some "test group" that you refuse to design things to support unit testing? – S.Lott Feb 01 '11 at 15:21
  • 1
    @S.Lott No, I'm not saying that I refuse support for unit testing. I'm saying that creating and maintaining unit tests is resource-demanding and unrealistic. Check out the simple scenario provided by @k3b. – Amir Rezaei Feb 01 '11 at 15:31
  • 2
    @Amir Rezaei: "I saying creating and maintaining unit tests are resource demanding and unrealistic." I guess you didn't read my answer. It's neither resource demanding nor unrealistic. Indeed, it's the only way to get software to work. You **must** test. Therefore you must design for testing. Writing unit tests is -- at that time -- cheaper and simpler than any other way of testing. – S.Lott Feb 01 '11 at 15:34
  • 2
    @AmirRezaei and S.Lott. There is some mixing of terms here. You both seem to mix testing with TDD. They are not the same. I think that is why you both aren't really "connecting" in views. Test driven development, ensures that you write modular, easily testable code by first writing the tests. This has more to do with the development process, than actual testing. Creating easily testable interfaces usually ensures good modularity and clean separation of concerns. Testing is a separate process and discipline that has to be done anyway. TDD does _not_ replace it. – oligofren Jan 31 '13 at 14:12
  • @S.Lott Good point on code generation from spreadsheets. Very nice, and potentially avoids introducing tools like Cucumber to the business side. I would still like to know more about this process. How do you specify formats that are easily processable, etc? – oligofren Jan 31 '13 at 14:14
32

Yes

My first exposure to TDD was working on the middleware components for a Linux-based cell phone. That eventually wound up being millions of lines of source code, which in turn called into about 9 gigabytes of source code for various open-source components.

All component authors were expected to propose both an API and a set of unit tests, and have them design-reviewed by a peer committee. Nobody was expecting perfection in testing, but all publicly-exposed functions had to have at least one test, and once a component was submitted to source control, all unit tests had to always pass (even if they did so because the component was falsely reporting it worked okay).

No doubt due at least in part to TDD and an insistence that all unit tests always pass, the 1.0 release came in early, under budget, and with astonishing stability.

After the 1.0 release, because corporate wanted to be able to rapidly change scope due to customer demands, they told us to quit doing TDD, and removed the requirement that unit tests pass. It was astonishing how quickly quality went down the toilet, and then the schedule followed it.

Bob Murphy
  • 16,028
  • 3
  • 51
  • 77
  • 11
    `removed the requirement that unit tests pass. It was astonishing how quickly quality went down the toilet, and then the schedule followed it.` - it's like telling your F1 driver he's not allowed Pit Stops because it takes too much time... Idiotic. – Jess Telford Sep 24 '13 at 19:55
  • 2
    This exemplifies what I keep saying: *The only way to go fast is to go well*! – TheCatWhisperer Jun 06 '18 at 14:07
19

I'd argue the more complex the project, the more benefit you get out of TDD. The main benefits are side-effects of how TDD will force you to write the code in much smaller, much more independent chunks. Key benefits are:

a) You get much, much earlier validation of your design because your feedback loop is much tighter due to tests from the get go.

b) You can change bits and pieces and see how the system reacts because you've been building a quilt of test coverage the whole time.

c) Finished code will be much better as a result.

Wyatt Barnett
  • 20,685
  • 50
  • 69
  • 1
    I see and know the benefits of TDD. However, I question how realistic it is, and how much resource and effort is needed, to do TDD in such projects. – Amir Rezaei Feb 01 '11 at 11:50
  • I have to agree with you. In complex projects there is (in my opinion) no other way to make sure that everything works than the tests... If many programmers work on your code-base, you can't be sure that no one altered your working stuff. If the tests keep passing - no problem. If not, you know **where** to look. – mhr Jan 31 '13 at 13:58
10

Does TDD really work for complex projects?
Yes. Not every project, so I'm told, works well with TDD, but most business applications are fine, and I bet the ones that do not work well written in a pure TDD manner could be written in an ATDD way without major issues.

Generating and maintaining mock data
Keep it small, only have what you need, and this is not the scary issue it seems. Don't get me wrong, it is a pain. But it is worthwhile.

Testing GUI
Test the ViewModel and make sure it can be tested without the view. I've found this no harder than testing any other bit of business logic. I don't test the view in code; all you are testing at that point is binding logic, which one hopes will be caught quickly in a quick manual test.
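
The question is presumably about WPF/Silverlight MVVM, but the shape of a view-free ViewModel test is the same in any stack. A minimal hypothetical sketch in Java:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class LoginViewModelTest {

    // Hypothetical ViewModel: a plain object, no view or binding classes.
    static class LoginViewModel {
        String username = "";
        String password = "";

        boolean canSubmit() {
            return !username.isEmpty() && !password.isEmpty();
        }
    }

    @Test
    void submitStaysDisabledUntilBothFieldsAreFilled() {
        LoginViewModel vm = new LoginViewModel();
        assertFalse(vm.canSubmit());

        vm.username = "amir";
        assertFalse(vm.canSubmit());

        vm.password = "secret";
        assertTrue(vm.canSubmit());
    }
}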

Testing the business
Not found to be an issue. Lots of small tests. As I said above, some cases (Sudoku puzzle solvers seem to be a popular example) are apparently difficult to do with TDD.

TDD requires that requirements are 100% correct
No it does not. Where did you get this idea from? All Agile practices accept that requirements change. You do need to know what you are doing before you do it, but that is not the same as requiring the requirements to be 100% complete. TDD is a common practice in Scrum, where the requirements (user stories) are, by definition, not 100% complete.

Pang
  • 313
  • 4
  • 7
mlk
  • 1,049
  • 8
  • 18
  • If you don't have an accurate requirement, how do you even start with unit tests? Do you jump back and forth between implementation and design in the middle of a sprint? – Amir Rezaei Feb 01 '11 at 11:56
  • A "unit" is smaller than a requirement, and yes can generally be done without having all the UAC tied down. – mlk Feb 02 '11 at 10:14
  • We unit test each unit and also unit test combinations of units; that is the requirement. – Amir Rezaei Feb 03 '11 at 17:52
9

First off, I believe your issue is more about unit testing in general than TDD, since I see nothing really TDD-specific (test-first + red-green-refactor cycle) in what you say.

It’s hard and unrealistic to maintain a large body of mock data.

What do you mean by mock data? A mock is precisely supposed to contain barely any data, i.e. no fields other than the one or two needed in the test, and no dependencies other than the system under test. Setting up a mock expectation or return value can be done in one line, so nothing terrible.
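
For instance, with a mocking library such as Mockito the stub really is a single line; the RateRepository collaborator and the grossPrice logic below are hypothetical.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class GrossPriceTest {

    // Hypothetical collaborator that would normally hit a database.
    interface RateRepository {
        double rateFor(String region);
    }

    // Hypothetical logic under test.
    static double grossPrice(double net, String region, RateRepository rates) {
        return net * (1.0 + rates.rateFor(region));
    }

    @Test
    void grossPriceAppliesRegionalRate() {
        RateRepository rates = mock(RateRepository.class);
        when(rates.rateFor("EU")).thenReturn(0.20); // the one-line stub

        assertEquals(12.0, grossPrice(10.0, "EU", rates), 1e-9);
    }
}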

It gets even harder when the database structure changes.

If you mean the database undergoes changes without the proper modifications being made to the object model, well, unit tests are precisely there to warn you of that. Otherwise, changes to the model must obviously be reflected in the unit tests, but with the compiler flagging the breakages it's an easy thing to do.

Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce a GUI scenario.

You're right, unit testing the GUI (View) is not easy, and many people do well without it (besides, testing the GUI is not part of TDD). In contrast, unit testing your Controller/Presenter/ViewModel/whatever intermediate layer is highly recommended; actually, it's one of the main reasons patterns such as MVC or MVVM exist.

In my experience TDD works well if you limit it to simple business logic. Complex business logic, however, is hard to test because the number of test combinations (the test space) is very large.

If your business logic is complex, it's normal that your unit tests are hard to design. It is up to you to make them as atomic as possible, each testing only one responsibility of the object under test. Unit tests are all the more needed in a complex environment because they provide a safety net guaranteeing that you don't break business rules or requirements as you make changes to the code.

TDD requires that requirements are 100% correct.

Absolutely not. Successful software requires that requirements are 100% correct ;) Unit tests just reflect what your vision of the requirements currently is; if the vision is flawed, your code and your software will be too, unit tests or not... And that's where unit tests shine: with explicit enough test titles, your design decisions and requirements interpretation become transparent, which makes it easier to point your finger at what needs to change next time your customer says, "this business rule is not quite as I'd like".

guillaume31
  • 8,358
  • 22
  • 33
7

I gotta laugh when I hear someone complain that the reason they cannot use TDD to test their application is because their application is so complicated. What is the alternative? Have test monkeys pounding on acres of keyboards? Let the users be the testers? What else? Of course it is hard and complex. Do you think Intel does not test their chips until they ship? How "head-in-the-sand" is that?

SnoopDougieDoug
  • 548
  • 2
  • 3
  • 5
    Have highly skilled, professional workers who write simple and effective code. And use testers. This approach has worked for many successful companies. – Coder Jan 26 '12 at 21:03
  • One alternative is regression testing. Think about, say, testing a web browser. Let's say you're Google and you want to test a new version of Chrome. You can test each individual CSS element, and every attribute of every HTML tag, and every kind of basic thing that JavaScript can do. But how many possible combinations of these features are there? I don't think anyone can possibly know that. So they do all kinds of testing of individual features in various harnesses, but ultimately, they run regression against a known bank of websites. That's the million monkeys right there. – Dan Korn Jun 05 '15 at 00:24
  • The realistic alternative is to deliver software that doesn't work; under the right circumstances, this can still be profitable. Pick your favorite example. – soru Jun 28 '18 at 10:54
4
> Does TDD really work for complex projects?

From my experience: yes for unit tests (tests of modules/features in isolation), because these mostly do not have the problems you mention (GUI, MVVM, business model). I never had more than 3 mocks/stubs to fulfill one unit test (but maybe your domain requires more).

However, I am not sure whether TDD can solve the problems you mention at the level of integration or end-to-end testing with BDD-style tests.

But at least some problems can be reduced.

> Complex business logic, however, is hard to test because the number
> of test combinations (the test space) is very large.

This is true if you want to do complete coverage at the integration-test or end-to-end-test level. It might be easier to do the complete coverage at the unit-test level.

Example: Checking complex user permissions

Testing the function isAllowedToEditCustomerData() at the integration-test level would require asking different objects for information about the user, domain, customer, environment, and so on.

Mocking these parts is quite difficult. This is especially true if isAllowedToEditCustomerData() has to know these different objects.

At the unit-test level you would have a function isAllowedToEditCustomerData() that takes, for example, 20 parameters containing everything the function needs to know. Since isAllowedToEditCustomerData() does not need to know what fields a user, a domain, a customer, and so on have, it is easy to test.

When I had to implement isAllowedToEditCustomerData(), I wrote two overloads of it:

One overload does nothing more than gather those 20 parameters and then call the overload that takes the 20 parameters and does the decision making.

(My actual isAllowedToEditCustomerData() had only 5 parameters, and I needed 32 different combinations to test it completely.)

Example

// Method used by the business logic.
// Difficult to test, because you have to construct
// many dependent objects for the test.
public boolean isAllowedToEditCustomerData() {
    Employee employee = getCurrentEmployee();
    Department employeeDepartment = employee.getDepartment();
    Customer customer = getCustomer();
    Shop shop = customer.getShop();

    // ... many more objects that the permissions depend on

    return isAllowedToEditCustomerData(
            employee.getAge(),
            employeeDepartment.getName(),
            shop.getName(),
            ...
        );
}

// Overload used by the JUnit tests.
// Much easier to test, because it takes only primitives
// and needs no internal state.
public static boolean isAllowedToEditCustomerData(
        int employeeAge,
        String employeeDepartmentName,
        String shopName,
        ... )
{
    boolean isAllowed = false;
    // decision logic goes here (elided)

    return isAllowed;
}
k3b
  • 7,488
  • 1
  • 18
  • 31
4

I've found TDD (and unit testing in general) to be virtually impossible for a related reason: Complex, novel, and/or fuzzy algorithms. The issue I run into most in the research prototypes I write is that I have no idea what the right answer is other than by running my code. It's too complicated to reasonably figure out by hand for anything but ridiculously trivial cases. This is especially true if the algorithm involves heuristics, approximations, or non-determinism. I still try to test the lower-level functionality that this code depends on and use asserts heavily as sanity checks. My last resort testing method is to write two different implementations, ideally in two different languages using two different sets of libraries and compare the results.
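
A sketch of that last-resort check, scaled down: rather than two languages and two library stacks, the version below stays in one language and uses a deterministic stand-in (maximum subarray sum, chosen only for illustration), but the pattern is the same: drive both implementations with many random inputs and assert that they agree.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Arrays;
import java.util.Random;

import org.junit.jupiter.api.Test;

class MaxSubarrayDifferentialTest {

    // Optimized implementation (Kadane's algorithm).
    static int fast(int[] a) {
        int best = a[0], current = a[0];
        for (int i = 1; i < a.length; i++) {
            current = Math.max(a[i], current + a[i]);
            best = Math.max(best, current);
        }
        return best;
    }

    // Obviously-correct brute force, used only as a reference oracle.
    static int reference(int[] a) {
        int best = Integer.MIN_VALUE;
        for (int i = 0; i < a.length; i++) {
            int sum = 0;
            for (int j = i; j < a.length; j++) {
                sum += a[j];
                best = Math.max(best, sum);
            }
        }
        return best;
    }

    @Test
    void fastAgreesWithReferenceOnRandomInputs() {
        Random rng = new Random(42); // fixed seed keeps failures reproducible
        for (int run = 0; run < 10_000; run++) {
            int[] a = rng.ints(1 + rng.nextInt(20), -100, 100).toArray();
            assertEquals(reference(a), fast(a),
                    () -> "mismatch on " + Arrays.toString(a));
        }
    }
}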

dsimcha
  • 17,224
  • 9
  • 64
  • 81
  • I've had this problem. You need simple cases worked out "by hand", and a sufficiently complex case worked out & validated by a domain expert. If nobody can do that, you have a specification problem. When you can encode an algorithmic acceptance function, even if it's not quite the right shape of state space, you can use it with statistical testing (run the test 10000 times & look at the answer acceptance trend) – Tim Williscroft Apr 14 '11 at 03:33
  • "and a sufficently complex case worked out & validated by a domain expert" - It is a unit test then, or a regression test? – quant_dev Apr 14 '11 at 06:12
  • 2
    @Tim: I **am** the domain expert (in my line of work one person is usually both the domain expert and the programmer) and I can't sanely work out this stuff by hand. On the other hand, I almost always know **approximately** what the answer should be (for example, a machine learning algorithm should make reasonably accurate predictions, an algorithm fed random data should yield no "interesting" results) but this is hard to automate. Also, for research prototypes, there is almost never a formal specification. – dsimcha Apr 14 '11 at 12:56
  • @quant_dev it's a unit test. It tests the behaviour of the unit on a more complex test data set. You can use unit tests for regression testing. You should also write regression tests for bugs as they occur, to prevent their recurrence. (There is strong evidence that bugs cluster.) – Tim Williscroft Apr 14 '11 at 22:59
  • @dsimcha: so a statistical approach to the unit testing may work for you, as you can make an approximate predictor. I used this approach in a weapon system to select & debug the moving-target, moving-shooter engagement code. It's very difficult to work out answers for that by hand, but relatively easy to work out that the predictor worked (you virtually fire a projectile, see where it virtually hits, lather, rinse, repeat 100000 times and you get nice results like "Algorithm A works 91% of the time, Algorithm B works 85% of the time"). – Tim Williscroft Apr 14 '11 at 23:03
  • @dsimcha there is some very interesting literature on testing of safety-critical complex systems like aircraft FADEC (engine control), where a model to use for testing would be the same as the thing being tested. As a result, the recommendation was to model a much simplified acceptance state space. It ends up restricting the final system to less than it could potentially be, but it renders it testable. For testing your ML algorithm, a thought: to test a car driver, if it can play Sega Rally on a separate game console via actuators on the joystick, it works as a car driver. Hope this helps. – Tim Williscroft Apr 14 '11 at 23:09
  • @Tim: The statistical testing is exactly what I often do. Ideally I'd have a more automated, more objective, less fuzzy, more fine-grained, etc. approach, but since it's not feasible the statistical approach is the next best thing. – dsimcha Apr 20 '11 at 13:02
  • 100% agreed, and this is more a problem with machine based testing in general. If the answer is a fuzzy one, it may be much easier for a human being to identify a "good enough" answer. Otherwise, you end up having to write another fuzzy matching algorithm to check the output of your fuzzy algorithm! – Jordan Reiter Feb 24 '17 at 20:26
3

The sad answer is that nothing really works for large complex projects!

TDD is as good as anything else and better than most, but TDD alone will not guarantee success in a large project. It will, however, increase your chances of success, especially when used in combination with other project management disciplines (requirements verification, use cases, requirements traceability matrix, code walkthroughs, etc.).

James Anderson
  • 18,049
  • 1
  • 42
  • 72
0

I think so; see Test Driven Development really works.

In 2008, Nachiappan Nagappan, E. Michael Maximilien, Thirumalesh Bhat, and Laurie Williams wrote a paper called “Realizing quality improvement through test driven development: results and experiences of four industrial teams“ (PDF link). The abstract:

Test-driven development (TDD) is a software development practice that has been used sporadically for decades. With this practice, a software engineer cycles minute-by-minute between writing failing unit tests and writing implementation code to pass those tests. Test-driven development has recently re-emerged as a critical enabling practice of agile software development methodologies. However, little empirical evidence supports or refutes the utility of this practice in an industrial context. Case studies were conducted with three development teams at Microsoft and one at IBM that have adopted TDD. The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.

In 2012, Ruby on Rails development practices assume TDD. I personally rely on tools like rspec for writing tests and mocks, factory_girl for creating objects, capybara for browser automation, simplecov for code coverage and guard for automating these tests.

As a result of using this methodology and these tools, I tend to agree subjectively with Nagappan et al...

gnat
  • 21,442
  • 29
  • 112
  • 288
Hiltmon
  • 209
  • 1
  • 3
0

I've seen a large complex project completely fail when TDD was used exclusively, i.e. without at least setting things up in a debugger/IDE. The mock data and/or tests proved insufficient. The beta client's real data was sensitive and could not be copied or logged. So the dev team could never fix the fatal bugs that manifested when the system was pointed at real data, and the whole project got scrapped, everyone fired, the whole bit.

The way to have fixed this problem would have been to fire it up in a debugger at the client site, live against the real data, step through the code, with break points, watch variables, watch memory, etc. However, this team, who thought their code to be fit to adorn the finest of ivory towers, over a period of nearly one year had never once fired up their app. That blew my mind.

So, like everything, balance is the key. TDD may be good but don't rely on it exclusively.

SPA
  • 9
  • 1
  • 1
    TDD does not prevent idiocy. TDD is one part of being agile, but another important bit is about delivering executable, runnable code in each and every sprint... – oligofren Jan 31 '13 at 14:29
0

If the combination of budget, requirements, and team skills is in the quadrant of the project space labelled 'abandon hope, all ye who enter here', then by definition it is overwhelmingly likely that the project will fail.

Perhaps the requirements are complex and volatile, the infrastructure unstable, the team junior and with high turnover, or the architect is an idiot.

On a TDD project, the symptom of this impending failure is that tests cannot be written on schedule; you try, only to discover 'that's going to take this long, and we only have that'.

Other approaches will show different symptoms when they fail; most commonly delivery of a system that doesn't work. Politics and contracts will determine whether that is preferable.

soru
  • 3,625
  • 23
  • 15
0

Remember that unit tests are enforced specifications. This is especially valuable in complex projects. If your old code base does not have any tests to back it up, no one will dare change anything for fear of breaking something.

"Wtf. Why is this code branch even there? Don't know, maybe someone needs it, better leave it there than to upset anyone..." Over time the complex projects becomes a garbage land.

With tests, anyone can confidently say "I have made drastic changes, but all tests are still passing." By definition, they have not broken anything. This leads to more agile projects that can evolve. Maybe one of the reasons we still need people to maintain COBOL is that testing wasn't popular back then :P

kizzx2
  • 269
  • 2
  • 4
-1

TDD might sound like a pain up front, but in the long run it will be your best friend. Trust me, TDD will really make your application maintainable and secure in the long run.

Rachel
  • 767
  • 5
  • 18