102

What are the negative sides of your TDD experience? Do you find baby steps (making the simplest fix just to turn a test green) annoying and useless? Do you find no-value tests (tests that made sense initially but, in the final implementation, check the same logic as other tests) a serious maintenance burden? And so on.

The questions above are about things I am uncomfortable with in my own TDD experience, so I am interested in whether other developers have similar feelings and what they think about them.

I would be thankful for links to articles describing the negative sides of TDD (Google is full of positive and often fanatical articles).

SiberianGuy
  • 4,753
  • 6
  • 34
  • 46
  • 1
    This question explains exactly what you are asking: [Should I Use TDD?](http://stackoverflow.com/questions/917334/should-i-use-tdd). Especially read [Kevin Pang](http://stackoverflow.com/questions/917334/should-i-use-tdd/917355#917355), [Tim](http://stackoverflow.com/questions/917334/should-i-use-tdd/917354#917354) and [Randolpho](http://stackoverflow.com/questions/917334/should-i-use-tdd/917345#917345)'s answers. – Soner Gönül Aug 04 '11 at 07:26
  • 10
    My guess is that you won't hear a lot of honest answers about people's negative experiences, because TDD is still in somewhat of an "early adopter" state and *most* people who are interested and committed enough to try it aren't objective enough to evaluate it on its relative merits. It usually takes several years for the industry to really establish the long-term effects of new processes and techniques, and even then it's hard to get straight answers because of the lack of controls. Good question nevertheless, and good luck on getting helpful answers! – Aaronaught Aug 04 '11 at 15:13
  • @Aaronaught, but TDD has been with us for almost a decade – SiberianGuy Aug 04 '11 at 15:41
  • 20
    Are you having trouble finding negativity on the internet?! – Eric Wilson Aug 04 '11 at 15:47
  • @FarmBoy, in most cases I have trouble finding positivity. But not in this case :) – SiberianGuy Aug 04 '11 at 15:49
  • 4
    @Job (and possibly other): don't forget he asked about TDD, not unit testing. TDD != unit testing. – n1ckp Aug 04 '11 at 20:27
  • 3
    I'm tempted to answer this question, but I don't really want to get started because it'll take up half my day. – Rei Miyasaka Aug 04 '11 at 20:40
  • 2
    When I find myself spending enough time on bugs that it seems I could spend less time writing tests for every single thing that I write before I write it, I will not adopt test-first-all-things-TDD. I will instead hang my head in shame and start searching for a new career. Not about bugs you say? Design? Yes. That's it. That's it exactly. And you haven't learned a damned thing about robust design by giving yourself a safety net and a license to continue working stupid. Put the IDE away and try to read your code if you want to discover the real problem. – Erik Reppen Mar 09 '13 at 05:31
  • possible duplicate of [The Relative Cost Efficiency of Test Driven Development](http://programmers.stackexchange.com/questions/206355/the-relative-cost-efficiency-of-test-driven-development) – gnat Jul 29 '13 at 14:16
  • Interesting link https://news.ycombinator.com/item?id=3033129 about a failed attempt to build a Sudoku solver with TDD – xuesheng Dec 14 '20 at 17:09

16 Answers

105

Like everything that comes under the "Agile" banner, TDD is something that sounds good in theory, but in practice it's not so clear how good it is (and also like most "Agile" things, you are told that if you don't like it, you are doing it wrong).

The definition of TDD is not etched in stone: guys like Kent Beck demand that a non-compiling test must be written before a single line of code, and that every single line of code should be written to pass a failing test. Up-front design is minimal and everything is driven by the tests. It just doesn't work. I've seen a big enterprise app developed using that methodology, and I hope it is the worst code I ever see in my career (if it isn't, it won't be far off; and that was despite having some talented developers working on it). From what I've seen it results in a huge number of poorly thought out tests that mainly validate that function calls occur, that exceptions are thrown when variables are null, and that the mocking framework gets a thorough workout (whoop-de-whoop); your production code gets heavily coupled to these tests and the dream of constant and easy refactoring does not appear - in fact people are even less likely to fix bad code because of all the tests it will break. In this kind of environment, software managers would rather have bad software with passing tests and high code coverage than good software with fewer tests.
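
To make that concrete, here is a minimal sketch (Python with unittest.mock; the OrderService and repository names are hypothetical, invented for illustration) of the kind of low-value test being described: it only verifies that a call happened and that a null argument throws, so it couples the test to the implementation while asserting nothing about business behaviour.

```python
import unittest
from unittest.mock import Mock


class OrderService:
    """Hypothetical production class, shaped mostly by tests like the ones below."""

    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order):
        if order is None:
            raise ValueError("order must not be None")
        self._repository.save(order)


class OrderServiceTest(unittest.TestCase):
    def test_place_order_calls_save(self):
        repository = Mock()
        OrderService(repository).place_order({"id": 1})
        # Asserts only that a call occurred, not any business outcome.
        repository.save.assert_called_once_with({"id": 1})

    def test_place_order_raises_on_none(self):
        self.assertRaises(ValueError, OrderService(Mock()).place_order, None)


if __name__ == "__main__":
    unittest.main()
```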

Conversely I've heard people argue that TDD means designing the tests up front on a high level as part of the planning phase - alongside the architectural design. These tests may change during development as more information becomes available, but they have been carefully considered and offer a good guide as to what the code should actually do. To me that makes perfect sense.

  • 7
    +1 _"designing the tests up front on a high level as part of the planning phase - alongside the architectural design"_ Sounds a lot more reasonable to me as well. – Steven Jeuris Aug 04 '11 at 12:45
  • 4
    The irony is that "designing the tests up front on a high level as part of the planning phase" is part of the *Waterfall* model. The difference with TDD is that no planning goes into it. Par for the course in strict Agile. – Aaronaught Aug 04 '11 at 13:21
  • 11
    @Aaronaught Agile doesn't mean *no* planning, it means *just in time* planning. – Adam Jaskiewicz Aug 04 '11 at 13:54
  • 4
    @Adam: "Just in time" and "planned" are exact opposites. You're partially right; in a philosophical sense, Agile does *not* mean no planning, it just seeks to minimize *unnecessary* planning - however, the most common interpretation of specific Agile methods among programmers is to simply not plan. – Aaronaught Aug 04 '11 at 14:01
  • 2
    No, it means you don't plan *up front*. You do your planning as you need it, during the iterations/sprints. – Adam Jaskiewicz Aug 04 '11 at 16:31
  • 27
    @Adam Jaskiewicz: I love the "no upfront planning" thing. C'mon, planning is upfront *by definition*. If you don't plan beforehand but during the event you are not planning at all; you are improvising. ;-) – CesarGon Aug 04 '11 at 16:48
  • 4
    @Aaronaught - I think a lot of people who have seen 'strict Agile' know that is just plain dangerous. The whole problem with the Agile mantra is that lazy managers interpret it as "I don't need to write anything down, all I need to do is chew the cud and get my guys to write code". Once you get through that you realise that planning a large piece of software is vital. That does't need to be "100s of pages of obsolete Documentation", but it does mean being analytical. Planning the tests is an important part of that. –  Aug 04 '11 at 16:49
  • I mean "up front" in the sense of "at the start of the project". Of course you plan before you start coding a user story; just not six months before. If you do, your plan is out of date by the time you can fit the story into an iteration. Do people really jump straight into coding on the first day of an iteration? I mean, if you have some simple stories that don't require much in the way of design, sure, but in my experience, the first few days are mostly spent sketching stuff on a whiteboard or legal pad. – Adam Jaskiewicz Aug 04 '11 at 17:12
  • 3
    @Adam: That statement is really not true. There are lots of things you can and should plan at the beginning of a project, even if there's a chance it might change. There are also a lot of things you *shouldn't* plan, even if they probably *won't* change. I've seen both extremes and lowercase "agile" is generally somewhere in the middle, but uppercase "Agile" methods all too often party in the cowboy district while lying to managers that they actually spent the night doing the safety dance. When used as tools, techniques like TDD are fine; as an end unto themselves, they're dangerous. – Aaronaught Aug 04 '11 at 17:20
  • 39
    @Adam - "Do people really jump straight into coding on the first day of an iteration?" erm yep. That's "Agile" man. Last place I worked (and got fired from for not being "Agile") they did an entire 3 month release cycle without ever planning a single line of code or doing a single page of documentation. And yes the code was terrible and yes the software was slow, clunky and buggy. The day I joined I was told proudly by the manager that they were "The most Agile shop in London". They sure were. –  Aug 04 '11 at 18:20
  • 4
    @The Mouth of a Cow - *lazy managers interpret it as "I don't need to write anything down, all I need to do is chew the cud and get my guys to write code"* - This is the typical problem with Agile adoption. To a lazy organisation, agile is an excuse to throw too few resources at a problem and then cry about it when it's not perfect... "But it was agile!!! How did we fail?!?!" – ocodo Aug 05 '11 at 01:39
  • 4
    Good comment. I disagree with one point though and that is that "TDD sounds good in theory". When I read Beck's book I instantly felt it was a bad idea. Note I don't disagree with unit testing, just TDD. – Antonio2011a Aug 05 '11 at 02:21
  • 9
    Can add another problem: as long as it passes the test, it must be good. Never mind that the test itself may well be flawed and thus cause false negatives and/or false positives. And of course there's requiring "100% test coverage" and treating anything that has it as by definition perfect, causing useless tests that don't actually test anything but are written solely to achieve that 100% coverage, code that's undocumented because your coverage counter counts comments as uncovered code, etc. etc. – jwenting Aug 09 '11 at 06:22
  • 1
    Without discipline, all projects are destined for failure. Anyone can throw together some buzzwords and be "agile." TDD is a tool in the toolbox. If you start with TDD without doing some sketches or whiteboard planning for something, you're using your hammer as a screwdriver. Sketch out rough ideas until you have something that makes sense. Start TDD. Change the design when you realize you didn't accommodate for X and Y. Get something finished, watch it run. Measure code coverage and add additional test cases. Move on to your next user story. – Corey D Oct 28 '11 at 15:33
  • 1
    @jwenting TDD is like a lock on the door: it won't guarantee that an intruder can't break in, but it makes it more difficult. Yes, you can write a bad test and write code that happens to allow that test to pass, but the odds of doing so are much lower than if you'd never written the test at all. – weberc2 Sep 26 '14 at 22:22
  • "in fact people are even less likely to fix bad code because of all the test it will break" lol. Then those are the changes you probably don't want your team making. – weberc2 Sep 26 '14 at 22:23
  • "Guys like Kent Beck demand..." Wrong. Have a look at what Kent Beck _really_ says: [How deep are your unit tests?](https://stackoverflow.com/questions/153234/how-deep-are-your-unit-tests/153565#153565) – Kyralessa Jul 27 '18 at 12:54
  • **"Like everything that comes under the "Agile" banner, TDD is something that sounds good in theory, but in practice"** .....is mere confirmation bias. TDD falls into the same-old bucket of hyped-but-fruitless anti-ideas, becoming a co-denizen of eXtreme Programming, Rapid Prototyping and Flowcharts. I'd put Scrum in there as well, but my soapbox is only so big. – tgm1024--Monica was mistreated Apr 07 '19 at 13:06
  • @CoreyD, you said "Without discipline, all projects are destined for failure." Perhaps I'd change that to "Without _talent_ [...]" Or perhaps "Without _expertise_ [...]". You cannot establish TDD as something constructive just by declaring it a discipline. There are plenty of disciplines that have no hope for improving a project quality or timeline. – tgm1024--Monica was mistreated Apr 07 '19 at 13:11
75

This interview with Rich Hickey (the author of Clojure) contains the following, with which I feel 100% sympathetic:

Life is short and there are only a finite number of hours in a day. So, we have to make choices about how we spend our time. If we spend it writing tests, that is time we are not spending doing something else. Each of us needs to assess how best to spend our time in order to maximize our results, both in quantity and quality. If people think that spending fifty percent of their time writing tests maximizes their results—okay for them. I’m sure that’s not true for me—I’d rather spend that time thinking about my problem. I’m certain that, for me, this produces better solutions, with fewer defects, than any other use of my time. A bad design with a complete test suite is still a bad design.

Another similar statement comes from Donald Knuth's interview in the book Coders at Work, copy-pasted from here:

Seibel: Speaking of practical work, in the middle of working on The Art of Computer Programming you took what turned into a ten-year break to write your typesetting system TeX. I understand you wrote the first version of TeX completely away from the computer.

Knuth: When I wrote TeX originally in 1977 and ’78, of course I didn’t have literate programming but I did have structured programming. I wrote it in a big notebook in longhand, in pencil. Six months later, after I had gone through the whole project, I started typing into the computer. And did the debugging in March of ’78 while I had started writing the program in October of ’77. The code for that is in the Stanford archives—it’s all in pencil—and of course I would come back and change a subroutine as I learned what it should be. This was a first-generation system, so lots of different architectures were possible and had to be discarded until I’d lived with it for a while and knew what was there. And it was a chicken-and-egg problem—you couldn’t typeset until you had fonts but then you couldn’t have fonts until you could typeset. But structured programming gave me the idea of invariants and knowing how to make black boxes that I could understand. So I had the confidence that the code would work when I finally would debug it. I felt that I would be saving a lot of time if I waited six months before testing anything. I had enough confidence that the code was approximately right.

Seibel: And the time savings would be because you wouldn’t spend time building scaffolding and stubs to test incomplete code?

Knuth: Right.

Joonas Pulakka
  • 23,534
  • 9
  • 64
  • 93
  • 2
    That sounds like thinking about a problem can reduce the number of defects resulting from flaws in the execution (of the solution that results from thinking) to zero, which I'd say is flat-out wrong for any human being. But I would agree that thinking about *design* is probably going to produce a better design than relying on TDD to guide you there; but reducing defects is an entirely different matter. – Michael Borgwardt Aug 04 '11 at 09:07
  • I read an interview with jwz about how they wrote the first Netscape browser. They simply did not have time for tests, and (I've concluded) he was smart enough to hold it all in his head. –  Aug 04 '11 at 09:40
  • 25
    I think you have to look at the type of work you are doing. Knuth and Hickey are talking about the design of new (innovative) applications. If you look at where TDD is popular (broad overgeneralization), then you come to the realization that most of the applications written in a TDD manner already have a well known architecture. – sebastiangeiger Aug 04 '11 at 11:26
  • 4
    I can see what Rich Hickey means, but I would like to add this: My experience is, that when you are writing the tests, you need to really think about your design and think about how you make your code testable, which, in my experience, results in better design. – Niklas H Aug 04 '11 at 11:51
  • 3
    Seeing as the OP asks for negative experiences with TDD neither of your example seem relevant to this question as neither shows an example of TDD in action. – Winston Ewert Aug 04 '11 at 15:05
  • Just because the first netscape browser was written without tests doesn't mean it was a good idea. Plenty of people write stuff without tests. Was it without bugs also? – Kevin Aug 04 '11 at 15:47
  • 3
    @Winston Ewert: The OP asked for *links to articles describing negative sides of TDD*. Both of my examples are saying the same thing - Hickey and Knuth both feel that their time is better spent on thinking than on writing tests, "better spent" meaning a better design and/or time savings. That is, time spent on writing tests instead of thinking would result in worse design and/or waste of time. Tests are not bad *per se*, but they don't come for free. – Joonas Pulakka Aug 04 '11 at 15:57
  • 5
    @Michael Borgwardt: It's relatively easy to add tests to an existing, good design, to get rid of bugs in the implementation. But getting rid of bad design often means a complete rewrite. So the primary objective should be in getting the *design* right; *execution* is easier to fix later on. – Joonas Pulakka Aug 04 '11 at 16:18
  • Can TDD can identify bad design? – Christopher Mahan Aug 04 '11 at 21:04
  • 5
    Most business applications don't have the benefit of being written and/or maintained by Donald Knuth. The lifetime of an application is usually MUCH longer than its primary development. I wish people had written more tests up front on my current project. Now maintenance is like a minefield of infinite regressions. – Matt H Aug 05 '11 at 00:49
  • 1
    @Christopher Mahan: TDD may be able to identify some bad design patterns, e.g. tests shouldn't be hard to write. But *fixing* a bad design is a completely different matter from *identifying* it. – Joonas Pulakka Aug 05 '11 at 06:00
  • 3
    Whether or not TDD has benefits isn't being questioned here. (And it does, but it's not worth the time, in light of other techniques and knowledge that achieve the same effect in much shorter time.) – Rei Miyasaka Aug 05 '11 at 07:20
  • @Rei: It's not worth WHOSE time? How does "thinking about the problem" ensure long-term stability and quality after the problem is initially solved and modifications are required? A good design alone doesn't fight rot or regression thrashing on a large codebase. – Matt H Aug 05 '11 at 15:51
  • @Joonas: Then perhaps TDD can help make a bad design smelly earlier, if test are hard to write? – Christopher Mahan Aug 05 '11 at 17:07
  • 2
    @Matt Unit testing, and inherently TDD, [doesn't catch regressions bugs](http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/). What does fight code rot is a proper understanding of [composability](http://en.wikipedia.org/wiki/Composability). While TDD yields composable designs, it doesn't teach the underlying principles at all. It's not worth *anyone's* time (except possibly for those implementing mathematical functions with straightforward mappings) because the problems it solves can be solved much more quickly by other means. – Rei Miyasaka Aug 05 '11 at 20:24
  • @Rei: What are some examples of these other means? – Matt H Aug 06 '11 at 01:56
  • 3
    @Matt Taking some tips from functional programming: isolate state and IO, make functions pure, favor composition over inheritance, pass functions (or interface instances) around to connect code. It's no coincidence that functional programming and TDD are both somewhat clumsy when it comes to databases/UIs and end up minimizing the code around them; *they both yield very similar code*, but with slightly different idioms. – Rei Miyasaka Aug 06 '11 at 08:09
  • @MattH TeX is perhaps one of the most dangerous tools in the hands of non-professionals. Very very powerful, but not intended for the casual user. Remember he just wrote it for typesetting his books - very few use plain TeX, most use LaTeX where users are mostly shielded from this. –  Apr 10 '12 at 10:31
  • "If we spend it writing tests, that is time we are not spending doing something else." If the alternative is debugging bad code, or trying to make changes in tightly-coupled code, or manually testing a crappy system for the 100th time (because I have no automatic tests) then I'll settle for writing the automatic tests. TDD saves time on iteration 0. – weberc2 Sep 26 '14 at 22:27
63

My negative experience with TDD was my first one. TDD sounded great to me, I'd done QA for years and still had the horrors fresh in my mind. I wanted to squash every bug before it made it into a build. Unfortunately, using TDD doesn't guarantee that you'll write good tests. In fact, my initial predisposition was to write simple tests that generated simple code. Really simple code that contained few abstractions. Really simple tests that were intertwined with the class internals. And once you have a few thousand itty bitty tests in place you sure don't feel like you're moving any faster when you have to change a hundred of them to refactor your code to use very important domain concept X.

The light came on for me - TDD is not a testing skill. It's a design skill. It can only lead you to good, simple, workable code with practice and a constant awareness of the design directions it leads you in. If you're writing tests for the sake of code coverage you're going to create brittle tests. If you're writing tests to help you design your abstractions then it's just a more rigorous way of writing top-down code. You get to see the code from the caller's perspective first, which encourages you to make his life easier, rather than mirroring a class's internals to its outside edge.
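
As an illustration of that caller's-perspective point, here is a small hypothetical sketch (Python's unittest; the ShoppingCart example is invented): the first test talks only to the public API and survives refactoring, while the second mirrors the internals and breaks the moment they change, even though the behaviour doesn't.

```python
import unittest


class ShoppingCart:
    """Hypothetical class driven out by a caller-focused test."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


class CartFromTheCallersPerspective(unittest.TestCase):
    # Talks only to the public API; changing the internal storage
    # (list to dict, computed total to running total) breaks nothing.
    def test_total_is_sum_of_added_items(self):
        cart = ShoppingCart()
        cart.add("book", 10.0)
        cart.add("pen", 2.5)
        self.assertEqual(cart.total(), 12.5)


class CartCoupledToInternals(unittest.TestCase):
    # Mirrors the private structure; any refactoring of _items breaks it
    # even though the observable behaviour stays the same.
    def test_items_list_layout(self):
        cart = ShoppingCart()
        cart.add("book", 10.0)
        self.assertEqual(cart._items, [("book", 10.0)])


if __name__ == "__main__":
    unittest.main()
```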

I think TDD is useful, but I'm not dogmatic about it. If those "no-value tests" are making maintenance difficult - Delete them! I treat tests the same way as the rest of the code. If it can be refactored away and make things simpler, then do it!

I haven't seen it personally, but I've heard that some places track code coverage and test counts. So if metrics gathering is a side effect of TDD, then I could see that as a negative as well. I will enthusiastically delete 1000 lines of code, and if that obsoletes 20 tests and drops my code coverage %, oh well.

Steve Jackson
  • 3,595
  • 1
  • 20
  • 27
  • 7
    You nailed it in paragraph 2. – Sheldon Warkentin Aug 05 '11 at 12:58
  • 5
    "I haven't seen it personally, but I've heard that some places track code coverage and test counts" I've lived in such an environment and indeed no code ever got thrown out because doing so would cause a test to fail. Until I started debugging the actual tests that is, and found a lot of them having such serious flaws they forced code to produce incorrect results in order for the tests to pass. That's when I coined the question: "who is testing the tests" to which so far I've never had a satisfactory answer from the TDD community. Unit tests for unit tests, anyone? – jwenting Aug 10 '11 at 07:36
  • 1
    @jwenting - And this anecdote supports Rei's argument rather nicely. I've found the regression protection aspect of TDD to be overblown in practice, even if it's a solid idea in theory. The tests have to be held to be maintained at the same level as production code for it to work, and it's a bit unnatural to treat non-production code this way - I see the same "code rot" with hardware simulators, code generators, etc all the time. – Steve Jackson Aug 10 '11 at 12:32
  • "Really simple tests that were intertwined with the class internals" <-- there's your problem right there. Test only to public interfaces. TDD != UT – Steven A. Lowe Feb 10 '12 at 17:47
  • 2
    @StevenA.Lowe - I know that now, but 9 years ago it wasn't so clear :) "TDD is not a testing skill" – Steve Jackson Feb 10 '12 at 17:50
48

I'm going to go out on a limb here and declare with brutal honesty that it's literally a ritualistic waste of time. (In the majority of situations.)

I bought a book on Unit Testing which also discussed TDD, and while I agree with the benefits of UT, after about a hundred hours of trying out TDD, I gave up on it for a myriad of reasons. I'm kind of cross-posting here, but TDD:

  1. Isn't better documentation than real documentation.
  2. Doesn't catch bugs or regressions.
  3. Doesn't really make my designs better than they end up being if I apply some functional programming and composability concepts.
  4. Is time that could be better spent doing code reviews or polishing documentation and specifications.
  5. Gives managers a false sense of security when they see a list of hundreds of green icons.
  6. Improves productivity only in the implementation of algorithms with limited input-output mappings.
  7. Is clumsy in that you may know what you're doing as a result of TDD, but you aren't earning any understanding of why it works so well, why your designs come out the way they do.

Another concern is the debated degree of perfection to which one must do TDD to do it successfully. Some insist that if TDD isn't done persistently by everyone on the team from the beginning of the project, you'll only suffer. Others insist that no one ever does TDD by the book. If these are both true, then it follows that TDD practitioners are suffering, whether they realize it or not.

Of course, if it's being argued that by doing things in a TDD-like manner you'll come to designs that can work easily with TDD, well, there are much quicker ways to achieve that -- namely, by actually studying the concepts of composability. There's plenty of resources out there, a lot of rigorous mathematical theory even (largely in functional programming but also in other fields). Why not spend all your TDD time learning?
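
For what "composability" means here, a tiny hypothetical sketch (Python; not taken from any of those resources): small pure functions glued together by a generic compose, each of which is trivial to reason about, with or without a test harness.

```python
from functools import reduce


# Pure, stateless steps: each one is trivial to understand and to check in isolation.
def strip_whitespace(s):
    return s.strip()


def normalize_case(s):
    return s.lower()


def collapse_spaces(s):
    return " ".join(s.split())


def compose(*funcs):
    """Right-to-left function composition."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)


# The composed pipeline is just the sum of its parts; there is no hidden state
# whose interactions would need a large test suite to pin down.
normalize = compose(collapse_spaces, normalize_case, strip_whitespace)

assert normalize("  Hello   WORLD ") == "hello world"
```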

Culturally, TDD shows symptoms of being a ritualistic practice. It rides on guilt; it encourages procedure over understanding; it has tons of doctrines and slogans ("fake it 'til you make it" is really quite alarming if you look at it objectively). Wikipedia's definition of the word "ritual" is in fact quite apposite:

In psychology, the term ritual is sometimes used in a technical sense for a repetitive behavior systematically used by a person to neutralize or prevent anxiety; it is a symptom of obsessive–compulsive disorder.

Rei Miyasaka
  • 4,541
  • 1
  • 32
  • 36
  • Very interesting perspective re: ritual. You also get the sense, in some circles, that a programmer's commitment to his craft is judged to be solely proportional to his adherence to TDD. – yalestar Aug 09 '11 at 19:56
  • 1
    I'd say in some situations it can improve a design, but mostly that's when the design needs to be highly modular with a very well defined and easy to use public interface. Writing the tests for it before implementing the library itself in those cases can help iron out the bugs in the public interface as well as force that modularity. – jwenting Aug 10 '11 at 07:39
  • 11
    @jwenting People do TDD because they don't know what makes a design modular, so they can never take their 35" training wheels off of their 40" bikes. Making a modular design is *always* a manageable task if you understand the concepts, because each dilemma in your design will be of a manageable size due to the fact that it is, in its conception, in the process of becoming modular. I don't disagree that TDD is effective at forcing modularity, but I disagree that one needs to be *forced* into creating modular designs. Modular design is a perfectly teachable and learnable skill. – Rei Miyasaka Aug 10 '11 at 09:03
  • 1
    @jwenting Regarding the usability of public interfaces, there are two schools of thought amongst TDD practitioners on that matter, both of which are undesirable: make everything public so that it can be tested, or leave things private if they shouldn't be tested anyway. The former forces unnecessary implementation details to be exposed to end users (which can potentially be misused or exploited), and the latter forces unit tests to be closer to system tests. Sure, you could use unit testing tools to access privates, but that doesn't make much sense to do in TDD. – Rei Miyasaka Aug 10 '11 at 09:16
  • I don't agree with the fact that it doesn't catch bugs or regression bugs, nor does this Microsoft study: http://channel9.msdn.com/blogs/peli/experimental-study-about-test-driven-development. According to that study, it took about 15% more time to write code, but IBM saw a 40% reduction in defect density, and Microsoft's teams saw a reduction of 60-90% in defect density. – KallDrexx Oct 28 '11 at 17:58
  • 1
    @Kall In that interview, he says "about 15 to 35% more time" -- not just 15% as you've quoted him. The study also only involves Java/C++/C# (probably C# 2, given the date) -- all languages of the same imperative OOP paradigm. I'm certainly more than 15% and probably more than 35% more productive in a functional language (and in C# 3 even), and I produce much fewer bugs writing stateless, composable code -- the same kinds of bugs that testing ameliorates, because both things solve the exact same types of problems. In other words, sure, 60-90% reduction in bugs, but compared to *what*? – Rei Miyasaka Oct 28 '11 at 18:59
  • The paper itself is at http://research.microsoft.com/en-us/groups/ese/nagappan_tdd.pdf. It shows how they came to the defect density reduction numbers. – KallDrexx Oct 28 '11 at 19:25
  • Also, I'd bet that programmers like to *think* they are faster and write less buggy software than is actually true. It also does not mean that just because you don't have improvements because you write code better without TDD, that your development team as a whole can be spoken for the same way – KallDrexx Oct 28 '11 at 19:27
  • @Kall The improvements for the team that functional programming provides are of the same kind that TDD provides -- the reason for this is that, like I described, TDD is beneficial for the same reasons that understanding things like composition and immutability are. In fact FP and TDD often arrive at similar if not same designs, and are both difficult to use in similar situations, namely IO-heavy applications and UIs. The overhead of teaching your team members about composition or functional programming is fairly high, but the running cost is *much, much* lower. – Rei Miyasaka Oct 28 '11 at 20:48
  • As for code quality with FP, while I don't know of any quantitative studies off the top of my head, qualitatively, there is very good reason to believe that the code quality is much better with FP than with imperative code, and possibly (probably?) better than imperative code with TDD. Many classes of bugs that exist in imperative programming simply cannot exist in functional style, and the fact that functional code is generally much shorter than imperative code means that while the error density may be the same or higher, the total number of bugs would be reduced. – Rei Miyasaka Oct 28 '11 at 20:51
  • Except that functional programming and composition by themselves doesn't provide a solution for bugs or regression, which Microsoft's study shows that TDD does. I know for a fact that TDD has caught bugs due to changes in requirements for my in-production app. I'm not saying TDD is the best thing ever, it has to be evaluated on a per-project basis, and there are several projects of mine where TDD is not a good fit, but for the right projects it definitely has proven its worth for maintainability and helping reduce production bugs, and Microsoft's study backs that up. – KallDrexx Oct 29 '11 at 14:52
  • How is reducing bugs not a solution for bugs or regressions? TDD isn't a solution for bugs either; if you're catching and preventing a lot of bugs with TDD, then chances are you're actually doing integration tests/unit tests and thus not actually improving the modularity of the code. You can't do both with the same kind of test set. Given that the projects in the study involved teams doing TDD for the first time, and the common knowledge that TDD beginners tend to do a mix, they were probably not getting much of the design benefits of TDD. – Rei Miyasaka Oct 30 '11 at 04:45
  • @ReiMiyasaka, your first sentence and #3 in your list couldn't be better written. – tgm1024--Monica was mistreated Apr 07 '19 at 13:15
25

To add, another concern with TDD I noticed is:

TDD causes an inadvertent shift in the development team's focus from quality code to test cases and code coverage! I personally did not like TDD, as it makes me less creative and makes software development a dull, mechanical process! Unit test cases are useful when used judiciously, but they become a burden when treated as the goal of software development.

I know a technically dull manager who once got obsessed with TDD. It was such a magical thing for him that he believed it would bring magical solutions to all the issues in his poorly architected, barely maintainable software. Needless to say what happened to that project: it failed miserably in his hands, while all his test cases were green. I guess TDD gave him some kind of statistical information like "99/100 of my cases are green", and that was the reason for his obsession, as he would never have been able to evaluate the quality of the design or suggest enhancements to it.

WinW
  • 1,003
  • 1
  • 8
  • 19
  • 2
    Sounds like PHB hell! It reminds me of companies that introduce bonus schemes - what happens of course is that developers instead of focusing on quality code, focus on meeting whatever the bonus requirements are. Inevitably you get crappier code. (There's also an analogy here to the current banking crisis :-) ) – TrojanName Aug 24 '11 at 11:27
  • Well TDD is no Project-Management technique, so it is no wonder that your wannabe-manager failed. On the other hand I do not feel less creative, I even feel more creative because developing tests automatically give me another view on my code. However I agree that the focus has to be production code, and tests should not mess up a good software architecture. – Alex Nov 25 '11 at 19:33
  • Code coverage is a concern of Unit Testing, *not* TDD. TDD cares only about features and tests through public interfaces. – Steven A. Lowe Feb 10 '12 at 17:49
14

My primary negative experience has been trying to use TDD to edit another programmer's code that has no tests, or only very, very basic integration tests. When I go to add a feature or fix a problem in that code, I would prefer to write a test first (the TDD way). Unfortunately, the code is tightly coupled, and I cannot test anything without a lot of refactoring.

The refactoring is a great exercise anyway, but it is required just to get the code into a testable state. And after this step, I have no checks and balances to see whether my changes broke anything, short of running the application and examining every use case.

On the other hand, adding features/fixing bugs to a TDD project becomes very straightforward. By nature, code written with TDD is usually quite decoupled with small pieces to work with.
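
As a hypothetical illustration of that difference (Python; invented names, not the code from the project described): the first class constructs its own dependencies and resists testing, the second has them injected, which is the decoupled shape TDD tends to push code towards.

```python
import datetime


class ProductionDatabase:
    """Hypothetical stand-in for a real, slow, shared database."""

    def query(self, sql):
        return [("row",)] * 3


# Tightly coupled: the class news up its own dependency and reads the real
# clock, so a test cannot intercept either without refactoring first.
class CoupledReportGenerator:
    def generate(self):
        rows = ProductionDatabase().query("SELECT * FROM orders")
        return f"{len(rows)} rows at {datetime.datetime.now():%Y-%m-%d}"


# Decoupled: dependencies are passed in, so a test can supply fakes
# and assert directly on the output.
class DecoupledReportGenerator:
    def __init__(self, database, now):
        self._database = database
        self._now = now

    def generate(self):
        rows = self._database.query("SELECT * FROM orders")
        return f"{len(rows)} rows at {self._now():%Y-%m-%d}"


class FakeDatabase:
    def query(self, sql):
        return [("a",), ("b",)]


report = DecoupledReportGenerator(FakeDatabase(), lambda: datetime.datetime(2011, 8, 4))
assert report.generate() == "2 rows at 2011-08-04"
```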

In any case, TDD is a guideline. Follow it to the point where you find you get maximum effectiveness: decent test coverage, decoupled code, well-written code.

Sheldon Warkentin
  • 1,272
  • 8
  • 12
  • 1
    If its hard to write unit tests, there are generally poor separation of concerns. I don't see this as TDD's fault, if anything it quickly makes these problems obvious. – Tom Kerr Aug 04 '11 at 22:25
  • 7
    That's not a negative experience with TDD, that's a negative experience with crappy code. – Rei Miyasaka Aug 05 '11 at 20:35
13

I have found that I sometimes rely on my tests too much when it comes to the design of the system. I am basically too deep in the nitty-gritty implementation details to step back and look at the bigger picture. This often results in an unnecessarily complex design. I know I am supposed to refactor the code, but sometimes I have the impression that I could save a lot of time by stepping back more often.

That being said, if you have a framework like Rails, where your architectural decisions are very limited, these problems are basically non-existent.

Another problem is when you trust your tests blindly. The truth is that, like any other code, your tests can have bugs too. So be as critical towards your tests as you are towards your implementation.
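
A hypothetical example of such a test bug (Python's unittest; the discount function is invented): the expected value is computed with the same wrong formula as the code, so the test stays green while the implementation is broken.

```python
import unittest


def discounted_price(price, percent):
    # Bug: should be price * (1 - percent / 100).
    return price * (1 - percent)


class BuggyDiscountTest(unittest.TestCase):
    def test_discount(self):
        # The expectation repeats the same wrong formula, so this passes
        # even though a 10% discount on 100.0 should obviously be 90.0.
        price, percent = 100.0, 10
        self.assertEqual(discounted_price(price, percent), price * (1 - percent))

    # A more trustworthy test pins the expectation to an independently
    # known value, e.g. self.assertEqual(discounted_price(100.0, 10), 90.0),
    # which would fail and expose the bug.
```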

Flimzy
  • 704
  • 4
  • 13
sebastiangeiger
  • 1,907
  • 13
  • 15
12

As a big fan of TDD, I sometimes see these drawbacks:

  • Temptation to write too many tests for the sake of nearly 100% code coverage. In my opinion it is not necessary to write tests
    • for simple property getters/setters
    • for every case when an exception is thrown
    • that check the same functionality through different layers. (Example: if you have a unit test that checks input validation for every parameter, it is not necessary to repeat all these tests in an integration test, too.)
  • Maintenance costs of test code for similar tests that vary only slightly (created through code duplication, a.k.a. copy-paste inheritance). If you already have one test it is easy to create a similar one. But if you do not refactor the test code by extracting the duplication into helper methods, you may need a lot of time to fix the tests when implementation details of your business code change (see the sketch after this list).

  • If you are under time pressure you may be tempted to eliminate broken tests (or comment them out) instead of fixing them. This way you lose the investment in the tests
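
The sketch mentioned above, as a hypothetical illustration (Python's unittest; the validation function is invented): the copy-pasted variants collapse into one helper plus a table of cases, so a change to the interface only has to be fixed in one place.

```python
import unittest


def validate_username(name):
    """Hypothetical function under test."""
    return name.isalnum() and 3 <= len(name) <= 12


# Instead of several near-identical copy-pasted test methods, one helper
# plus a table of cases keeps the duplication out of the test code.
class UsernameValidationTest(unittest.TestCase):
    def check(self, name, expected):
        self.assertEqual(validate_username(name), expected, name)

    def test_usernames(self):
        cases = [
            ("bob", True),
            ("ab", False),             # too short
            ("bad name!", False),      # invalid characters
            ("waytoolongusername", False),
        ]
        for name, expected in cases:
            with self.subTest(name=name):
                self.check(name, expected)
```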

Carl Manaster
  • 4,173
  • 18
  • 31
k3b
  • 7,488
  • 1
  • 18
  • 31
  • 2
    +1: "Temptation to write too many tests for the sake of nearly 100% code coverage.": I fell into this trap once, and after spending so many hours writing all those unit tests, the only three bugs I found in my code were not covered by the unit tests and could easily be found by debugging the code step-by-step. – Giorgio Dec 10 '12 at 09:59
10

TDD zealots.

For me, they are just one of a long line of religious loonies knocking on my door, trying to prove that my way of doing things is irreparably broken and the only path to salvation is Jesus, Kent Beck, or Unit Testing.

IMO, their biggest lie is that TDD will lead you to ~~salvation~~ a better algorithm design. See the famous Sudoku solver written in TDD: here, here, here, here and here

And compare it with Peter Norvig's Sudoku solver, done not by using TDD but by using old-fashioned engineering: http://norvig.com/sudoku.html

Cesar Canassa
  • 1,390
  • 11
  • 13
  • Look we could argue on and on about this. But I've got way too much work to do since I graduated from Fullsail university with my degree in game design. Based on my courses and my very demanding job I can say that TDD truly trumps the frantic line-by-line (lack of design) development of lazy programmers. Look I wouldn't be saying it but it's true: Most developers who went to a normal university's CS program mostly did not graduate, the few who did overwhelmingly did not move into software development, and on top of that many of those barely get by, line by line. Fullsail university has a – Zombies Oct 28 '11 at 17:11
  • complete course in test driven development alone and that really gets developers on the right track (as opposed to implementing a linked list in c++). – Zombies Oct 28 '11 at 17:12
  • Links are broken dude! – lmiguelvargasf Apr 03 '16 at 20:54
  • This is many years later, but @Zombies, look into "Confirmation Bias". A lot of what all of us are taught in CS in college falls precisely into that category. Take a look at the out-of-hand dismissal of "magic numbers" and the crazy bashing of the goto in C. – tgm1024--Monica was mistreated Apr 07 '19 at 13:35
  • Lol man I was trolling... I wrote that so long ago I had forgotten about that little gem. – Zombies Apr 07 '19 at 13:49
10

I have yet to come across more than one scenario as a game developer where TDD was worthwhile. And the instance in which it was, was a piece of code which was purely mathematical in nature and needed a solid approach to test a huge number of edge cases simultaneously - a rare need.
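
As a hypothetical illustration of that rare worthwhile case (Python's unittest; the damage formula is invented, not from any real game): a small pure function with many edge cases is exactly where a table of tests pays off.

```python
import unittest


def apply_damage(health, damage, armor):
    """Hypothetical pure combat helper: armor absorbs up to its own value,
    and health never drops below zero."""
    absorbed = min(damage, armor)
    return max(health - (damage - absorbed), 0)


class ApplyDamageEdgeCases(unittest.TestCase):
    def test_edge_cases(self):
        cases = [
            # (health, damage, armor, expected)
            (100, 30, 10, 80),    # ordinary hit
            (100, 30, 50, 100),   # armor absorbs everything
            (100, 0, 10, 100),    # zero damage
            (5, 30, 0, 0),        # overkill clamps at zero
            (0, 10, 0, 0),        # already at zero stays at zero
        ]
        for health, damage, armor, expected in cases:
            with self.subTest(case=(health, damage, armor)):
                self.assertEqual(apply_damage(health, damage, armor), expected)
```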

Maybe something, someday, will change my mind, but amongst XP practices, the ideas of refactoring mercilessly and of code evolving its own form are far more important to me and lead to the greatest productivity; cf. a quote from a paper by James Newkirk:

Simplicity - "What is the simplest thing that could possibly work?"
The message is very clear. Given the requirements for today, design and write your software. Do not try to anticipate the future, let the future unfold. It will often do this in very unpredictable ways, making anticipation a cost that is often too expensive to afford.

The concepts of courage and of tightening feedback loops which he mentions are also, to my mind, key to productivity.

Engineer
  • 767
  • 5
  • 16
  • 9
    The trouble is, without unit tests how do you know that your *merciless refactoring* results in code which does the same thing? In my experience, I've found that depending on the problem it can take me *less* time to write tests+code than it does to write the code on it's own! The reason mainly boils down to the fact that for some problems I can try out a re-factor and retest automatically much quicker than I could test it manually, which can increase the velocity of iterations quite significantly. – Mark Booth Aug 04 '11 at 12:21
  • 7
    For games, very often the results can be seen. If they can be seen, and appear good enough, they will be accepted, since a game is intended to be a subjective experience anyway. On the other hand, taking eg. Diablo 2 as an example, the number of errors in combat formulae showed where TDD would have brought huge value and saved them vast amounts of work on patches. For very well-defined problems such as solving equations, and where these cannot be judged by visual outputs at runtime, TDD is a must-have to ensure correctness. But that is a small fraction of the code in most games. – Engineer Aug 04 '11 at 12:27
  • Also worth mentioning that due to the rate at which simulations run, it is better to be watching variables in real-time, onscreen, as the sim runs, than to sit with million-line logfiles to look through post the fact. – Engineer Aug 04 '11 at 12:29
  • 3
    Unit tests make refactoring *much* easier, in my experience. – Tom Kerr Aug 04 '11 at 14:34
  • 1
    @Nick The big problem in gaming industry are the deadlines, which are always - "we had to deliver half a year ago". I think the time plays against unit-tests in time constrained environments. In some cases it's not a correct decision, but in most cases shipping without test writing is faster. Depends, really depends... – Coder Aug 08 '11 at 21:53
8

My negative TDD experience, limited as it may be, is simply not knowing where to start! For instance, I'll try to do something with TDD and either have no idea where to begin beyond testing trivial things (can I new up a Foo object, can I pass a Quux to the Baz, and the like; tests that don't test anything), or, if I am trying to apply it to an existing project, I find that I would have to rewrite various classes to be able to use them with TDD. The end result is that I quickly abandon the notion entirely.
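
As a hypothetical sketch of that "where to start" problem (Python's unittest, reusing the answer's made-up Quux/Baz names): the first test is the kind that tests nothing, while the second states one observable behaviour and at least gives the next step something to aim at.

```python
import unittest


class Baz:
    """Hypothetical collaborator from the Foo/Quux/Baz example above."""

    def __init__(self):
        self.quux = None

    def accept(self, quux):
        self.quux = quux


class TrivialTests(unittest.TestCase):
    # The kind of test that doesn't test anything: it only proves the class exists.
    def test_can_construct_baz(self):
        self.assertIsNotNone(Baz())


class FirstBehaviourTest(unittest.TestCase):
    # Stating one observable behaviour gives the design somewhere to go next.
    def test_baz_remembers_the_quux_it_was_given(self):
        baz = Baz()
        baz.accept("a quux")
        self.assertEqual(baz.quux, "a quux")
```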

It probably doesn't help that often I'm the only person in the entire company who knows what unit testing (TDD or otherwise) is and why it's a good thing.

Wayne Molina
  • 15,644
  • 10
  • 56
  • 87
  • 1
    This is where mocking frameworks come in. Instantiate `Foo` with *Mock* objects rather than `Quux` and `Baz` directly, then you can call the function you want to test and then check that the mocks were called with the functions you expect. Mock objects are the enabling technology that helps decouple units and make them unit testable. This is why singletons are evil, since you often can't just *Mock* them out. *8') – Mark Booth Aug 04 '11 at 16:32
6

If you use TDD the way those "fanatic" articles describe, you will get a false feeling of safety that your software has no errors.

Dainius
  • 340
  • 1
  • 8
  • 1
    Can you get any other feeling than knowing that for a given set of inputs your software returns a given set of outputs? –  Aug 04 '11 at 20:22
  • as long as you understand that TDD is a development process and not a golden rule for solving any kind of problem, it's OK. But most of the people who propose using this process forget that it is a development process, and like any other process it has a bright side and a dark side. They tell everyone that if you use TDD you will have bug-free software, because you will use tests to cover every feature. And usually that's not right. At best there will be a test for each case (or at least each feature), but tests are programs too (which have bugs), and they are only black-box tests. – Dainius Aug 05 '11 at 06:37
4

TDD has some benefits:

  • You focus first on how to call your code and what to expect from it (since you write the test first), instead of solving the problem and then gluing in a call from the application. The resulting modularity makes it easier to mock and wrap.
  • The tests ensure that your program works the same before and after a refactoring. This does not mean your program is error free, but that it keeps working the same way.

TDD is about long term investment. The effort pays off when you reach maintenance mode of your application, and if the application is not planned to reach that point you may never recover the investment.

I consider the TDD red-green cycle with its baby steps similar to a checklist for a plane. It is annoying and tedious to check each and every thing in the plane before take-off, especially if it is trivially simple (the TDD baby steps), but it has been found that it increases safety. Besides verifying everything works, it essentially resets the plane. In other words, a plane is rebooted before every take-off.
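
A minimal hypothetical sketch of those baby steps (Python's unittest; FizzBuzz chosen only because it is tiny): red, then the simplest possible green, then the next test forces the generalization.

```python
import unittest


# Step 1 (red): write a failing test for the next tiny increment.
class FizzBuzzTest(unittest.TestCase):
    def test_three_is_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")


# Step 2 (green): the simplest thing that could possibly pass, even a hard-coded value.
def fizzbuzz(n):
    return "Fizz"


# Step 3 (next red): adding test_five_is_buzz would force the hard-coded return
# to be generalized, and the red-green-refactor cycle repeats in baby steps.
```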

  • 3
    Benefit point 2 can be achieved with simple unit tests without a TDD approach as well. Benefit 1 you should be doing anyhow. (Focusing on the API) It's still entirely possible to create a crappy API using TDD, but yes, you are guaranteed that it will work (for the written tests). – Steven Jeuris Aug 04 '11 at 12:34
  • 2
    The question was not asking about the benefits of TDD. There are already plenty of other questions on that. – Aaronaught Aug 04 '11 at 13:18
  • 1
    @aaronaught, I am addressing his pain points. –  Aug 04 '11 at 15:05
  • Answers should address the *question*. – Aaronaught Aug 04 '11 at 15:09
  • your first sentence about API focusing is why anyone should avoid this kind of development process, because your goal is to solve problems, not to write a good-looking API. – Dainius Aug 04 '11 at 15:29
  • @Danious, I've reworded this as you are correct in stating that this is not about writing an API but focusing on how to actually _verify_ that the code returns a certain output given a certain input. I consider that quite an important part of solving a problem. –  Aug 04 '11 at 20:19
  • 1
    @aaronaught, then write some of those. –  Aug 05 '11 at 00:10
4

My negative experience with TDD is something I feel about a lot of new and hyped stuff. In fact I enjoy TDD because it ensures the validity of my code, and even more important: I can recognize failing tests after adding new code or doing any kind of refactoring.

What annoys me about TDD is the fact that there are a lot of rules and guidelines about it. As it is still quite new, most of us are still beginners at TDD. So what works well for some of us might not work for others. What I want to say is that there is no real "wrong or right" way to perform TDD: there is the way that works for me, and my team if I have one.

So as long as you write tests (before or after the production code doesn't really matter, IMHO), I am not sure that test-driven really means you have to follow all the guidelines that are stated right now, since they are not yet proven to be the ideal solution for everyday work. If you find a better way of writing tests, you should post it on a blog, discuss it here, or write an article about it. In ten years or so we might have shared enough experience to be able to tell which rules of TDD can be assumed to be good or not in a given situation.

Alex
  • 121
  • 3
4

I've found TDD performs poorly when it comes to emergent systems. I'm a video games developer, and recently used TDD to create a system that uses multiple simple behaviours to create realistic-looking movement for an entity.

For instance, there are behaviours responsible for moving you away from dangerous areas of different types, and ones responsible for moving you towards interesting areas of different types. Amalgamating the output of each behaviour creates a final movement.

The guts of the system were implemented easily, and TDD was useful here for specifying what each subsystem should be responsible for.
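
A hypothetical sketch of that kind of amalgamation (Python; invented behaviours, not the actual game code): each behaviour is a small function that is easy to test on its own, but the interesting property is how their combined output feels over many ticks, which is much harder to pin down in a test.

```python
# Each behaviour looks at the world and returns a desired movement vector.
def avoid_danger(position, danger, weight=2.0):
    return (weight * (position[0] - danger[0]), weight * (position[1] - danger[1]))


def seek_interest(position, interest, weight=1.0):
    return (weight * (interest[0] - position[0]), weight * (interest[1] - position[1]))


def amalgamate(*vectors):
    """Sum the outputs of all behaviours into a single movement step."""
    return (sum(v[0] for v in vectors), sum(v[1] for v in vectors))


# Easy to specify: one behaviour, one tick.
assert amalgamate((1.0, 0.0), (0.0, 2.0)) == (1.0, 2.0)

# Hard to specify: how the combination behaves over hundreds of ticks on top of
# physics and gameplay systems, where there is often no single right answer to assert.
```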

However, I ran into problems when it came to specifying how the behaviours interact, and more importantly how they interact over time. Often there was no right answer, and although my initial tests were passing, QA kept finding edge cases where the system didn't work. To find the correct solution I had to iterate through several different behaviours, and if I had updated the tests each time to reflect the new behaviours before checking they worked in-game, I would have ended up throwing out the tests time and time again. So I deleted those tests.

I should have possibly had stronger tests that captured the edge cases QA discovered, but when you have a system like this that sits on top of many physics and gameplay systems, and you're dealing with behaviours over time, it becomes a bit of a nightmare to specify exactly what's happening.

I almost certainly made mistakes in my approach, and like I said for the guts of the system TDD worked brilliantly, and even supported a few optimising refactors.

tenpn
  • 407
  • 2
  • 9
  • Have you tried TDD recently for a project at work ? How was your experience ? Do you still feel the same way about TDD after 9 years ? – MasterJoe Apr 22 '20 at 19:13
  • 1
    @MasterJoe2 blast from the past! my thinking has matured: TDD is very important but should be stopped from reaching the top, player-facing part of the code, because that's only ever "fun" not "correct". Eg, you wouldn't TDD that the blue shell seeks out the leader, but instead that the leader-seeking function works, and that the moving-down-a-path function works, etc. you could maybe functional-test that blue shell seeks leader and walks down a path, but as a post-hoc test, not TDD! – tenpn Apr 23 '20 at 20:40
3

I have, on more than one occasion, written code that I discarded the next day because it was clumsy. I restarted with TDD and the solution was better. So I have not had much in the way of negative TDD experience. That being said, I have also spent time thinking about a problem and coming up with a better solution outside of the TDD space.