
In Test-Driven Development (TDD) you start with a suboptimal solution and then iteratively produce better ones by adding test cases and refactoring. The steps are supposed to be small, meaning that each new solution lies somewhere in the neighborhood of the previous one.

This resembles mathematical local optimization methods like gradient descent or local search. A well-known limitation of such methods is that they are not guaranteed to find the global optimum, or even an acceptable local optimum. If your starting point is separated from all acceptable solutions by a large region of bad solutions, it is impossible to get there and the method will fail.
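
To make the analogy concrete, here is a toy hill-climbing sketch in JavaScript (illustrative only; the function and the numbers are made up). A search that only takes small improving steps ends on whatever peak is nearest, not necessarily the highest one:

function f(x) {
  // two peaks: a local one near x = -1 (height 1), the global one near x = 3 (height 5)
  return Math.exp(-(x + 1) * (x + 1)) + 5 * Math.exp(-(x - 3) * (x - 3));
}

function hillClimb(x, step) {
  // keep taking small improving steps; stop when neither neighbor is better
  while (f(x + step) > f(x) || f(x - step) > f(x)) {
    x = f(x + step) > f(x) ? x + step : x - step;
  }
  return x;
}

console.log(hillClimb(-2, 0.1)); // ends near -1: stuck on the local peak
console.log(hillClimb(2, 0.1));  // ends near 3: started close to the global peak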

To be more specific: I am thinking of a scenario where you have implemented a number of test cases and then find that the next test case would require a completely different approach. You would have to throw away your previous work and start over.

This thought can actually be applied to all agile methods that proceed in small steps, not only to TDD. Does this proposed analogy between TDD and local optimization have any serious flaws?

Frank Puffer
  • Are you referring to the TDD sub-technique called [triangulation](http://feelings-erased.blogspot.be/2013/03/the-two-main-techniques-in-test-driven.html)? By "acceptable solution", do you mean a correct one or a maintainable/elegant/readable one? – guillaume31 Jan 09 '17 at 10:48
  • I think this is a real problem. Since it's just my opinion, I won't write an answer. But yes, since TDD is touted as a *design practice*, it's a flaw that it can lead to either local maxima or no solution at all. I'd say in general TDD is NOT well-suited for algorithmic design. See the related discussion on the limitations of TDD: [Solving Sudoku with TDD](https://www.infoq.com/news/2007/05/tdd-sudoku), in which Ron Jeffries makes an ass of himself while running in circles and "doing TDD", while Peter Norvig provides the actual solution by actually knowing about the subject matter. – Andres F. Jan 09 '17 at 13:27
  • In other words, I'd offer the (hopefully) uncontroversial statement that TDD is good for minimizing the number of classes you write in "known" problems, therefore producing cleaner and simpler code, but is unsuitable for algorithmic problems or for complex problems where actually looking at the big picture and having domain-specific knowledge is more useful than writing piecemeal tests and "discovering" the code you must write. – Andres F. Jan 09 '17 at 13:30
  • The problem exists, but isn't limited to TDD or even Agile. Changing requirements that mean the design of previously written software has to change happen all the time. – RemcoGerlich Jan 09 '17 at 15:30
  • @guillaume31: Not necessarily triangulation, but any technique using iterations at source code level. By acceptable solution I mean one that passes all tests and can be maintained reasonably well. – Frank Puffer Jan 09 '17 at 20:42
  • Iterating, in TDD and agile in general, means repeating a *process* that encapsulates all the steps of operation - but not necessarily working continually on the *same problem*. Sometimes (most of the time?), the next loop is going to be about a totally different problem/feature, one that isn't connected in any logical way to the previous. I could see your analogy applying to triangulation and "fake it till you make it", but saying all of Agile/TDD is like that seems like overgeneralization. It's just one technique among others. – guillaume31 Jan 09 '17 at 21:44
  • I think [this answer in a related question](http://softwareengineering.stackexchange.com/a/339865/16247) serves as a counterpoint to many of the answers here. You may indeed reach a "local maximum" by doing TDD (mergesort in the example) instead of "discovering" quicksort. – Andres F. Jan 10 '17 at 14:08

8 Answers


I don't think TDD has a problem of local maxima. The code you write might, as you have correctly noticed, but that's why refactoring (rewriting code without changing its functionality) is part of the process. Basically, as your tests accumulate, you can rewrite significant portions of your object model if you need to, while keeping the behavior unchanged thanks to the tests. Tests state invariant truths about your system, which therefore need to hold at both local and absolute maxima.

If you are interested in problems related to TDD I can mention three different ones that I often think about:

  1. The completeness problem: how many tests are necessary to completely describe a system? Is "coding by example cases" a complete way to describe a system?

  2. The hardening problem: whatever a test interfaces with needs a stable, unchanging interface. Tests represent invariant truths, remember. Unfortunately, those truths are simply not known for most of the code we write; at best they are known only for externally facing objects.

  3. The test damage problem: in order to make code testable, we might need to write suboptimal code (less performant, for example; see the sketch after this list). How do we write tests so that the code is as good as it can be?
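
To illustrate problem 3, here is a hypothetical sketch (all names invented): the cache is an internal detail, but in order to let tests assert on it, we expose it, which makes the design slightly worse than it needs to be.

// Hypothetical example of test damage: exposing internals for assertions.
function makeFetcher(load) {
  const cache = {};
  return {
    fetch: function (key) {
      if (!(key in cache)) cache[key] = load(key); // memoize expensive loads
      return cache[key];
    },
    _cacheForTests: cache // exists only so a test can inspect the internals
  };
}

const fetcher = makeFetcher(key => key.length); // toy loader for the example
fetcher.fetch("abc");
console.log(fetcher._cacheForTests); // { abc: 3 }; a test could assert on this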


Edited to address a comment: here's an example of escaping a local maximum for a "double" function via refactoring.

Test 1: when input is 0, return zero

Implementation:

function double(x) {
  return 0; // simplest possible code that passes tests
}

Refactoring: not needed

Test 2: when input is 1, return 2

Implementation:

function double(x) {
  return x == 0 ? 0 : 2; // local maximum
}

Refactoring: not needed

Test 3: when input is 2, return 4

Implementation:

function double(x) {
  return x == 0 ? 0 : x == 2 ? 4 : 2; // needs refactoring
}

Refactoring:

function double(x) {
  return x * 2; // new maximum
}
Sklivvz
  • What I have experienced, though, is that my first design only worked for some simple cases and I later realized that I needed a more general solution. Developing the more general solution required more tests, while the original tests for the special cases would no longer pass. I found it acceptable to (temporarily) remove those tests while I developed the more general solution, adding them back once the time was right. – 5gon12eder Jan 09 '17 at 05:23
  • @5gon12eder That's not how it's supposed to work; refactoring only happens on "green", never on "red". – Sklivvz Jan 09 '17 at 13:59
  • I'm not convinced refactoring is a way to generalize code (outside of the artificial "design patterns" space, of course) or escape local maxima. Refactoring tidies up code, but it won't help you discover a better solution. – Andres F. Jan 09 '17 at 15:15
  • According to [Wikipedia](https://en.wikipedia.org/wiki/Code_refactoring), "Advantages [of refactoring, ed.] include improved code readability and reduced complexity; these can improve source-code maintainability and create a more expressive internal architecture or object model to improve extensibility." I would assume that includes generalizing and escaping narrow code constructs. I'll post an example in the answer. – Sklivvz Jan 09 '17 at 16:25
  • @Sklivvz Understood, but I don't think it works that way outside toy examples like the ones you posted. Also, it helped you that your function was named "double"; in a way you already knew the answer. TDD definitely helps when you more or less know the answer but want to write it "cleanly". It wouldn't help for discovering algorithms or writing really complex code. This is why Ron Jeffries failed to solve Sudoku this way; you cannot implement an algorithm you're unfamiliar with by TDD'ing it out of obscurity. – Andres F. Jan 09 '17 at 19:59
  • @AndresF. that would be my "problem 1", but it has to do more with the quantity and quality of tests than with local maxima. It's absurdly difficult to extend an algorithm if you don't know where you are going in advance, that's why you never know when you are done, with TDD. – Sklivvz Jan 10 '17 at 10:22
  • @AndresF. I have to disagree. I've used TDD to discover algorithms many times. It's just a matter of writing code to handle specific cases and then recognizing patterns in the resulting code and generalizing. This is as opposed to trying to write a general solution and then see if it works for certain special cases. I'm not saying the TDD approach is always best, but it can help. – Vaughn Cato Jan 10 '17 at 16:23
  • @VaughnCato Ok, now I'm in the position of either trusting you or being skeptical (which would be rude, so let's not do that). Let's just say, in my experience, it doesn't work like you say. I've never seen a reasonably complex algorithm evolved out of TDD. Maybe my experience is too limited :) – Andres F. Jan 10 '17 at 17:28
  • @AndresF. in reality, it's very rare to develop any new *algorithm* at any point in time. Most code I write only *uses* algorithms. TDD is great when you have some specific complex scenarios that are best described by tests (e.g. a complex state machine). – Sklivvz Jan 10 '17 at 17:31
  • @AndresF. I think it would be fun to work through an example with you. It would need to be in another medium than these comments though. – Vaughn Cato Jan 10 '17 at 17:34
  • @AndresF. Feel free to DM me through twitter: @ vaughncato – Vaughn Cato Jan 10 '17 at 17:38
  • @Sklivvz A new series like fibonacci or a new sorting algorithm? Maybe not. If you relax the definition of algorithm, however, it's not so uncommon. Take the Sudoku solver example. Say you want to learn how to write one: you won't achieve it by TDD. Say you want to write the AI for a game: ditto. Anything even remotely new that is not simply gluing together pieces of existing or relatively standard code will probably not be very suitable for TDD. – Andres F. Jan 10 '17 at 17:45
  • @AndresF. As long as you can write the appropriate tests, you can TDD it. A Sudoku solver can be written like so, but of course it can't be written if the tests are just "given this starting board then this is the solution". The *techniques* for solving Sudoku must be known *in order to write the tests*. – Sklivvz Jan 10 '17 at 18:16
  • @Sklivvz "As long as you can write the appropriate tests" is precisely the point: it sounds like begging the question to me. What I'm saying is that you often *cannot*. Thinking about an algorithm or a solver is not made easier by *writing tests first*. You must look at the whole picture *first*. Trying scenarios is of course necessary, but note TDD is not about writing scenarios: TDD is about *test driving the design*! You cannot drive the design of a Sudoku solver (or a new solver for a different game) by writing tests first. As anecdotal evidence (which isn't enough): Jeffries couldn't. – Andres F. Jan 10 '17 at 18:52
  • @Sklivvz To clarify my point: this isn't an argument against testing. It's not even an argument against TDD in many (known, tractable) cases. It's an argument against using TDD to discover the solution to complex problems, where the main problem isn't "how do I decompose this in classes and methods so that it's easier to maintain?" but "how on Earth do I solve this!?". Granted, the latter problem is less common for most run of the mill dev jobs. It's more common if you're doing signal processing, AI or games. – Andres F. Jan 10 '17 at 18:53
  • @AndresF.: You say that TDD is not about writing scenarios, but I believe TDD is all about creating examples (red), solving those examples as simply as possible (green), and then seeing how it generalizes (refactor). To me, a test is just another name for an example of inputs and expected outputs. How do you see tests? – Vaughn Cato Jan 11 '17 at 03:49
  • I think we should take this to [chat] – Sklivvz Jan 11 '17 at 10:37
  • @VaughnCato I expressed myself wrongly: I meant to say TDD is not *primarily* about writing scenarios (otherwise it'd just be testing) but about *driving the design*. Note TDD is a *design* practice above all. I just don't believe you can drive the design of non-trivial algorithms this way. It can help you split your code into classes, but you cannot write a Sudoku solver, a chess player or an image filter this way. You cannot tease an algorithm into existence merely by writing a few examples/scenarios and refactoring. – Andres F. Jan 11 '17 at 12:10
  • @Sklivvz Do you know a good room? Should a new one be created? – Vaughn Cato Jan 11 '17 at 14:47
  • I've created a [chat room](http://chat.stackexchange.com/rooms/51604/discussion-on-answer-by-sklivvz-is-this-limitation-of-test-driven-development-a) now. – Sklivvz Jan 11 '17 at 14:50

In his answer, @Sklivvz has convincingly argued that the problem doesn't exist.

I want to argue that it doesn't matter: the fundamental premise (and raison d'être) of iterative methodologies in general, and of Agile and especially TDD in particular, is that neither the global optimum nor the local optima are known. So, in other words: even if this were a problem, there would be no way around the iterative approach anyway. Assuming, that is, that you accept the basic premise.

Jörg W Mittag

What you're describing in mathematical terms is what we call painting yourself into a corner. This occurrence is hardly exclusive to TDD. In waterfall you can gather and pore over requirements for months, hoping to see the global maximum, only to get there and realize there is a better idea just the next hill over.

The difference is that in an agile environment you never expected to be perfect at this point, so you're more than ready to toss the old idea and move on to the new one.

More specific to TDD, there is a technique that keeps this from happening as you add features: the Transformation Priority Premise. Where TDD has a formal way for you to refactor, this is a formal way to add features.
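
A rough sketch of the idea (my own illustration in JavaScript; the step labels only paraphrase the premise's spirit, they are not quoted from it): each new failing test is made to pass by the simplest available transformation, so the code generalizes in small ordered moves rather than one big leap.

// Illustrative only: evolving sum(xs) one small transformation at a time.
// Test 1: sum([]) === 0       -> return a constant:
//   function sum(xs) { return 0; }
// Test 2: sum([5]) === 5      -> split on a condition:
//   function sum(xs) { return xs.length ? xs[0] : 0; }
// Test 3: sum([5, 7]) === 12  -> replace the condition with iteration:
function sum(xs) {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}

console.log(sum([]), sum([5]), sum([5, 7])); // 0 5 12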

candied_orange

Can TDD and Agile practices promise to produce an optimal solution? (Or even a "good" solution?)

Not exactly. But, that's not their purpose.

These methods simply provide "safe passage" from one state to another, acknowledging that changes are time consuming, difficult, and risky. And the point of both practices is to ensure that the application and code are both viable and proven to meet requirements more quickly and more regularly.

... [TDD] is opposed to software development that allows software to be added that is not proven to meet requirements ... Kent Beck, who is credited with having developed or 'rediscovered' the technique, stated in 2003 that TDD encourages simple designs and inspires confidence. (Wikipedia)

TDD focuses on ensuring each "chunk" of code satisfies requirements. In particular, it helps ensure that code meets pre-existing requirements, as opposed to letting requirements be driven by poor coding. But, it makes no promise that the implementation is "optimal" in any way.

As for Agile processes:

Working software is the primary measure of progress ... At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment (Wikipedia)

Agility isn't looking for an optimal solution; just a working solution -- with the intent of optimizing ROI. It promises a working solution sooner rather than later; not an "optimal" one.

But that's OK, because the question is wrong.

Optimums in software development are fuzzy, moving targets. The requirements are usually in flux and riddled with secrets that only emerge, much to your embarrassment, in a conference room full of your boss's bosses. And the "intrinsic goodness" of a solution's architecture and coding is graded by the divided and subjective opinions of your peers and that of your managerial overlord -- none of whom might actually know anything about good software.

At the very least, TDD and Agile practices acknowledge the difficulties and attempt to optimize for two things that are objective and measurable: Working v. Not-Working and Sooner v. Later.

And, even if we have "working" and "sooner" as objective metrics, your ability to optimize for them is primarily contingent on a team's skill and experience.


Things that you could construe as efforts to produce optimal solutions include things like:

etc.

Whether each of those things actually produces optimal solutions would be another great question to ask!

svidgen
  • True, but I didn't write that the goal of TDD or any other software development method is an optimal solution in the sense of a global optimum. My only concern is that methodologies based on small iterations at source code level might not find any acceptable (good enough) solution at all in many cases. – Frank Puffer Jan 09 '17 at 20:53
  • @Frank My answer is intended to cover both local and global optimums. And the answer either way is "No, that's not what these strategies are designed for -- they're designed to improve ROI and mitigate risk." ... or something like that. And that's partly due to what Jörg's answer gets at: the "optimums" are moving targets. ... I'd even take it a step further; not only are they moving targets, but, they're not entirely objective or measurable. – svidgen Jan 09 '17 at 21:17
  • @FrankPuffer Maybe it's worth an addendum. But, the basic point is, you're asking whether these two things achieve something they're not at all designed or intended to achieve. More to the point, you're asking if they can achieve something that can't even be measured or verified. – svidgen Jan 09 '17 at 21:21
  • @FrankPuffer Bah. I tried to update my answer to say it better. I'm not sure I made it better or worse! ... But, I need to get off SE.SE and get back to work. – svidgen Jan 09 '17 at 22:03
  • This answer is ok, but the problem I have with it (as with some of the other answers) is that "mitigating risk and improving ROI" are not always the best goals. They are not truly goals by themselves, in fact. When you need something to work, mitigating risk ain't gonna cut it. Sometimes relatively undirected small steps as in TDD won't work -- you'll minimize risk alright, but you won't reach anywhere useful in the end. – Andres F. Jan 10 '17 at 13:34
  • @AndresF. Let's not get side tracked on the dogmas I'm not evangelizing here. I'm not advocating for TDD + Agility all the time. [Not even Uncle Bob tests all the time.](https://8thlight.com/blog/uncle-bob/2014/04/30/When-tdd-does-not-work.html) This answer isn't intended to be TDD or Agile propaganda *at all.* I simply mean to point out that risk mitigation and ROI are the primary goals of TDD + Agile; not "optimal solutions." Though, I *would* argue that these are reasonable *secondary* goals in just about any project - even if the only investment you want a return on is your time. – svidgen Jan 10 '17 at 15:02
  • Understood. I didn't mean to say you were evangelizing any dogma; my bad if I sounded that way! I meant to argue that TDD's stated goals are sometimes not conducive to escaping local maxima, as asked by the OP. This is because "minimizing risk" or "maximizing ROI" are not goals that help in this matter. Or rather, they are such generic goals they have little impact on finding good solutions. For this reason, I think the OP's question is good. – Andres F. Jan 10 '17 at 17:20
  • @AndresF. Oh, it's a good question. It's just the wrong question! ... And, it's good to answer partly *because* it's wrong. "How do sheep's bladders prevent earthquakes?" is a good question if you think sheep's bladders prevent earthquakes. But, the right answer isn't going to come in the form of, "Because ..." It'll come in the form of, "They don't." – svidgen Jan 10 '17 at 17:30

A well-known limitation of such methods is that they are not guaranteed to find the global optimum, or even an acceptable local optimum.

To make your comparison more precise: for some kinds of problems, iterative optimization algorithms are very likely to produce good local optima; in other situations, they can fail.

I am thinking of a scenario where you have implemented a number of test cases and then find that the next test case would require a completely different approach. You would have to throw away your previous work and start over.

I can imagine a situation where this can happen in reality: when you pick the wrong architecture in a way that requires you to recreate all your existing tests from scratch. Let's say you start implementing your first 20 test cases in programming language X on operating system A. Unfortunately, requirement 21 demands that the whole program run on operating system B, where X is not available. Thus, you need to throw away most of your work and reimplement it in language Y. (Of course, you would not throw the code away completely, but port it to the new language and system.)

This teaches us that, even when using TDD, it is a good idea to do some overall analysis and design beforehand. This, however, is also true for any other kind of approach, so I don't see it as an inherent TDD problem. And for the majority of real-world programming tasks, you can just pick a standard architecture (like programming language X, operating system Y, database system Z on hardware XYZ) and be relatively sure that an iterative or agile methodology like TDD won't lead you into a dead end.

Citing Robert Harvey: "You can't grow an architecture from unit tests." Or pdr: "TDD doesn't only help me come to the best final design, it helps me get there in fewer attempts."

So actually what you wrote

If your starting point is separated from all acceptable solutions by a large region of bad solutions, it is impossible to get there and the method will fail.

might become true: when you pick the wrong architecture, you are likely not to reach the required solution from there.

On the other hand, when you do some overall planning beforehand and pick the right architecture, using TDD should be like starting an iterative search algorithm in an area where you can expect to reach the "global maximum" (or at least a good-enough maximum) within a few cycles.

Doc Brown

One thing that nobody's added so far is that the "TDD development" you are describing is very abstract and unrealistic. It may be like that in a mathematical application where you are optimising an algorithm, but that doesn't happen a lot in the business applications most coders work on.

In the real world your tests are basically exercising and validating Business Rules:

For example: if a customer is a 30-year-old non-smoker with a wife and two children, the premium category is "x", etc.
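
A hypothetical test for such a rule might look like this (premiumCategory and its inputs are invented here purely for illustration):

const assert = require("assert");

// Invented stand-in for the real premium calculation engine.
function premiumCategory(customer) {
  if (customer.age === 30 && !customer.smoker &&
      customer.married && customer.children === 2) {
    return "x";
  }
  return "other";
}

// The business rule, pinned down as an executable test:
assert.strictEqual(
  premiumCategory({ age: 30, smoker: false, married: true, children: 2 }),
  "x"
);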

You are not going to be iteratively changing the premium calculation engine until it is correct for very long - and almost certainly not while the application is live ;).

What you actually have created is a safety net, so that when a new calculation method is added for a particular category of customer, all of the old rules don't suddenly break and give the wrong answer. The safety net is even more useful if the first step of debugging is to create a test (or series of tests) that reproduces the error prior to writing the code to fix the bug. Then, one year down the track, if someone accidentally re-creates the original bug, the unit test breaks before the code is even checked in. Yes, one thing TDD allows is that you can now do a large refactoring and tidy stuff up with confidence, but it shouldn't be a massive part of your job.

mcottle
  • First, when I read your answer, I thought "yes, that is the core point". But after rethinking the question again, I thought it is not necessarily so abstract or unrealistic. If one blindly picks the completely wrong architecture, TDD won't solve that, not even after 1000 iterations. – Doc Brown Jan 09 '17 at 22:26
  • @Doc Brown Agreed, it won't solve that problem. But it will give you a suite of tests that exercise every assumption and business rule, so that you can iteratively improve the architecture. Architecture so bad it needs a ground-up rewrite to fix is very rare (I'd hope), and even in that extreme case the business rule unit tests would be a good starting point. – mcottle Jan 09 '17 at 22:38
  • When I say "wrong architecture", I have cases in mind where one needs to throw away the existing test suite. Did you read my answer? – Doc Brown Jan 10 '17 at 06:45
  • @DocBrown - Yes I did. If you meant "wrong architecture" to mean "change the entire test suite", maybe you should have said that. Changing the architecture does not mean that you have to trash all of the tests if they are based on Business Rules. You will probably have to change all of them to support any new interfaces you create, and even completely re-write some, but the Business Rules are not going to be superseded by a technical change, so the tests will remain. Certainly, investing in unit tests shouldn't be invalidated by the unlikely possibility of completely effing up the architecture. – mcottle Jan 10 '17 at 08:27
  • Sure, even if one needs to rewrite every test in a new programming language, one does not need to throw away everything; at least one can port the existing logic. And I agree with you 100%: for the majority of real-world projects, the assumptions in the question are quite unrealistic. – Doc Brown Jan 10 '17 at 09:07

I don't think it gets in the way. Most teams don't have anyone who is capable of coming up with an optimal solution even if you wrote it up on their whiteboard. TDD/Agile won't get in their way.

Many projects don't require optimal solutions, and for those that do, the necessary time, energy and focus will be invested in this area. Like everything else we tend to build: first make it work, then make it fast. You could do this with some sort of prototype if performance is that important, and then rebuild the whole thing with the wisdom gained through many iterations.

I am thinking of a scenario where you have implemented a number of test cases and then find that the next test case would require a completely different approach. You would have to throw away your previous work and start over.

This could happen, but what is more likely to happen is the fear of changing complex parts of the application. Not having any tests can create a bigger sense of fear in this area. One benefit of TDD and having a suite of tests is that you built this system with the notion that it will need to be changed. When you come up with this monolithic optimized solution from the very beginning, it can be very difficult to change.

Also, put this in the context of your concern about under-optimization: the opposite risk is that you spend time optimizing things you shouldn't have, and create inflexible solutions, because you were so hyper-focused on their performance.

JeffO

It can be deceiving to apply a mathematical concept like "local optimum" to software design. Using such terms makes software development sound much more quantifiable and scientific than it really is. Even if an "optimum" existed for code, we would have no way of measuring it and therefore no way of knowing whether we had reached it.

The agile movement was really a reaction against the belief that software development could be planned and predicted with mathematical methods. For better or worse, software development is more like a craft than a science.

JacquesB
  • But was it too strong a reaction? It certainly helps in a lot of cases where strict upfront planning proved unwieldy and costly. However, some software problems *must* be tackled as mathematical problems, with upfront design. You cannot TDD them. You can TDD the UI and overall design of Photoshop, but you cannot TDD its algorithms. They are not trivial examples like deriving "sum" or "double" or "pow" in typical TDD examples [1]. You probably cannot tease a new image filter out of writing some test scenarios; you absolutely must sit down and write and understand formulae. – Andres F. Jan 10 '17 at 13:47
  • [1] In fact, I'm pretty sure `fibonacci`, which I've seen used as a TDD example/tutorial, is more or less a lie. I'm willing to bet nobody ever "discovered" fibonacci or any similar series by TDD'ing it. Everyone starts from already knowing fibonacci, which is cheating. If you try to discover this by TDD'ing it, you'll likely reach the dead-end the OP was asking about: you'll never be able to generalize the series by simply writing more tests and refactoring -- you *must* apply mathematical reasoning! – Andres F. Jan 10 '17 at 13:49
  • Two remarks: (1) You are right that this can be deceiving. But I didn't write that TDD is the *same* as mathematical optimization. I just used it as an analogy or a model. I do believe that math can (and should) be applied to almost everything as long as you are aware of the differences between the model and the real thing. (2) Science (scientific work) is usually even less predictable than software development. And I would even say that software engineering is more like scientific work than like a craft. Crafts usually require much more routine work. – Frank Puffer Jan 10 '17 at 15:51
  • @AndresF.: TDD does not mean you don't have to think or design. It just means you write the test before you write the implementation. You can do that with algorithms. – JacquesB Jan 10 '17 at 16:01
  • @FrankPuffer: OK, so what measurable value is it that has a "local optimum" in software development? – JacquesB Jan 10 '17 at 16:03
  • @JacquesB: You could calculate a value based on various software metrics for one implementation. Then you define a set of refactorings, apply all of them to the original implementation and calculate the value for each of these modifications as well. If all refactorings produce a worse value, the original implementation is a local optimum. Very limited of course, depending on the metrics and the set of refactorings considered, but probably better than nothing. – Frank Puffer Jan 10 '17 at 16:14
  • @FrankPuffer: Depending on metrics...of course. But what are those metrics? Lines of code? – JacquesB Jan 10 '17 at 16:28
  • @JacquesB: Yes, why not lines of code? You probably know that there are better ones. And all of these are very limited for sure. But they are somehow correlated to code quality when applied to pieces of software that are written in the same language and fulfill the same tests. And again, each model is a simplification and must not be mixed up with the real thing. – Frank Puffer Jan 10 '17 at 16:40
  • @JacquesB But it's often touted that way. Explanations usually pay lip service to the notion that TDD doesn't replace thought, but *in practice* it looks as if it did. Again, I refer you to Ron Jeffries' extremely clumsy attempt at evolving a Sudoku solver, which you can google. It's embarrassing. TDD just doesn't work for this kind of problem, when the solver itself is unknown. And Ron Jeffries is a prominent agilist and TDD proponent, not a random newbie. – Andres F. Jan 10 '17 at 17:24
  • @FrankPuffer: If you take a metric like lines of code, then I believe the answer to your original question is yes: a refactoring might involve first writing some new code and then removing some old code, so the metric would be worse between the two points. – JacquesB Jan 10 '17 at 18:11