154

All the examples I've read and seen in training videos are simplistic. But what I don't see is how I write the "real" code after I get to green. Is that the "Refactor" part?

Say I have a fairly complex object with a complex method, and I write my test and the bare minimum to make it pass (after it first fails, Red). When do I go back and write the real code? And how much real code do I write before I retest? I'm guessing that last one is more intuition.

Edit: Thanks to all who answered. All your answers helped me immensely. There seem to be different ideas about what I was asking or confused about, and maybe there are, but what I was asking was: say I have an application for building a school.

In my design, I have an architecture I want to start with, User Stories, and so on. From there, I take those User Stories and create a test for each one. The user says, "We have people enroll for school and pay registration fees." So I think of a way to make that fail. In doing so, I design a test class for some class X (maybe Student), which will fail. I then create the class "Student." Maybe "School," I do not know.

But, in any case, TDD is forcing me to think through the story. If I can make a test fail, I know why it fails, but this presupposes I can make it pass. It is about the designing.

I liken this to thinking about recursion. Recursion is not a hard concept. It may be harder to actually keep track of it in your head, but in reality the hardest part is knowing when the recursion "breaks," when to stop (my opinion, of course). So I have to think about what stops the recursion first. It is only an imperfect analogy, and it assumes that each recursive iteration is a "pass." Again, just an opinion.

In implementation, the school is harder to see. Numerical and banking ledgers are "easy" in the sense that you can use simple arithmetic. I can see a + b and return 0, etc. With a system of people, I have to think harder about how to implement that. I have the concept of fail, pass, refactor (mostly from study and this question).

What I do not know comes down to lack of experience, in my opinion. I do not know how to fail signing up a new student. I do not know how to fail someone typing in a last name and having it saved to a database. I know how to test a + 1 for simple math, but with entities like a person, I don't know whether I'm testing only that I get back a unique database ID when someone enters a name, or something else, or both, or neither.
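To make my confusion concrete, here is the kind of first test I think I'm being asked to write. This is only a sketch, and every name in it (School, Student, enroll) is invented for illustration, not from any real design:

```java
// Invented sketch: one way to "fail signing up a new student"
// without touching a database at all.
import java.util.ArrayList;
import java.util.List;

class Student {
    final String lastName;
    Student(String lastName) { this.lastName = lastName; }
}

class School {
    private final List<Student> enrolled = new ArrayList<>();

    // The test "enrolling returns an ID and the student is registered"
    // fails until this method exists and does something.
    int enroll(Student s) {
        enrolled.add(s);
        return enrolled.size(); // stand-in for a database-generated ID
    }

    boolean isEnrolled(Student s) { return enrolled.contains(s); }
}
```

Is the first failing test simply "enrolling returns an ID and the student is registered," before any database exists at all?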

Or, maybe this shows I am still confused.

johnny
  • 3,669
  • 3
  • 21
  • 35
  • 197
    After the TDD people go home for the night. – hobbs Jul 25 '17 at 02:42
  • 14
    Why do you think the code you wrote is not real? – Stop harming Monica Jul 25 '17 at 16:24
  • TDD is great for APIs, as it serves as great documentation and, after planning it out elsewhere, allows you to quickly create a solid RESTful API. However, whatever your thoughts on TDD, you should still test. – user3791372 Jul 25 '17 at 19:26
  • @johnny I just wanted to check back. I hope you accepted the answer because it answered your question, not because it was popular. Did that all make sense? – RubberDuck Jul 26 '17 at 01:50
  • 2
    @RubberDuck More than the other answers did. I'm sure I will refer to it soon. It is still kind of foreign, but I am not going to give up on it. What you said made sense. I'm just trying to make it make sense in the context of a regular business application. Maybe an inventory system or the like. I have to consider it. I am thankful for your time though. Thanks. – johnny Jul 26 '17 at 02:41
  • 1
    The answers already hit the nail on the head, but as long as all your tests are passing, and you don't need any new tests/functionality, it can be assumed the code you have is finished, bar linting. – ESR Jul 26 '17 at 06:35
  • 1
    I think what I really need to search for is a non-numerical example. I saw one by Uncle Bob, but I have to look at others. – johnny Jul 26 '17 at 15:59
  • 3
    There is an assumption in the question that may be problematic in "I have a fairly complex object with a complex method". In TDD you write your tests first, so you start with fairly simple code. This forces you to code a test-friendly structure that needs to be modular, so complex behaviour is created by combining simpler objects. If you end up with a fairly complex object or method, that is when you refactor. – Borjab Jul 26 '17 at 16:10
  • 1
    P.S: If you are learning TDD it is better to start with a new project or at least with new classes. It requires some practice and implementing it on legacy code is harder – Borjab Jul 26 '17 at 16:11
  • I use TDD if I have something with a really complex and difficult algorithm, but otherwise it's usually overkill for most coding. At least for myself, anyway. – Mark Rogers Jul 26 '17 at 21:11
  • 1
    I just saw your update and it's now clear to me that you're talking about *Acceptance* Test Driven Development as well as TDD. The only good advice I can give is to practice. TDD is *hard* for a long time. – RubberDuck Jul 27 '17 at 16:30
  • @RubberDuck Gee thanks. Now I get a new TDD acronym. :). No really. Thanks again. I now know where to look. – johnny Jul 27 '17 at 18:31

11 Answers

244

If I have a fairly complex object with a complex method, and I write my test and the bare minimum to make it pass (after it first fails, Red). When do I go back and write the real code? And how much real code do I write before I retest? I'm guessing that last one is more intuition.

You don't "go back" and write "real code". It's all real code. What you do is go back and add another test that forces you to change your code in order to make the new test pass.

As for how much code do you write before you retest? None. You write zero code without a failing test that forces you to write more code.

Notice the pattern?

Let's walk through (another) simple example in hopes that it helps.

Assert.Equal("1", FizzBuzz(1));

Easy peazy.

public String FizzBuzz(int n) {
    return 1.ToString();
}

Not what you would call real code, right? Let's add a test that forces a change.

Assert.Equal("2", FizzBuzz(2));

We could do something silly like if n == 1, but we'll skip to the sane solution.

public String FizzBuzz(int n) {
    return n.ToString();
}

Cool. This will work for all non-FizzBuzz numbers. What's the next input that will force the production code to change?

Assert.Equal("Fizz", FizzBuzz(3));

public String FizzBuzz(int n) {
    if (n == 3)
        return "Fizz";
    return n.ToString();
}

And again. Write a test that won't pass yet.

Assert.Equal("Fizz", FizzBuzz(6));

public String FizzBuzz(int n) {
    if (n % 3 == 0)
        return "Fizz";
    return n.ToString();
}

And we now have covered all multiples of three (that aren't also multiples of five, we'll note it and come back).

We've not written a test for "Buzz" yet, so let's write that.

Assert.Equal("Buzz", FizzBuzz(5));

public String FizzBuzz(int n) {
    if (n % 3 == 0)
        return "Fizz";
    if (n == 5)
        return "Buzz";
    return n.ToString();
}

And again, we know there's another case we need to handle.

Assert.Equal("Buzz", FizzBuzz(10));

public String FizzBuzz(int n) {
    if (n % 3 == 0)
        return "Fizz";
    if (n % 5 == 0)
        return "Buzz";
    return n.ToString();
}

And now we can handle all multiples of 5 that aren't also multiples of 3.

Up until this point, we've been ignoring the refactoring step, but I see some duplication. Let's clean that up now by introducing a helper function.

private bool isDivisibleBy(int divisor, int input) {
    return (input % divisor == 0);
}

public String FizzBuzz(int n) {
    if (isDivisibleBy(3, n))
        return "Fizz";
    if (isDivisibleBy(5, n))
        return "Buzz";
    return n.ToString();
}

Cool. Now we've removed the duplication and created a well named function. What's the next test we can write that will force us to change the code? Well, we've been avoiding the case where the number is divisible by both 3 and 5. Let's write it now.

Assert.Equal("FizzBuzz", FizzBuzz(15));

public String FizzBuzz(int n) {
    if (isDivisibleBy(3, n) && isDivisibleBy(5, n))
        return "FizzBuzz";
    if (isDivisibleBy(3, n))
        return "Fizz";
    if (isDivisibleBy(5, n))
        return "Buzz";
    return n.ToString();
}

The tests pass, but we have more duplication. We have options, but I'm going to apply "Extract Local Variable" a few times so that we're refactoring instead of rewriting.

public String FizzBuzz(int n) {

    var isDivisibleBy3 = isDivisibleBy(3, n);
    var isDivisibleBy5 = isDivisibleBy(5, n);

    if ( isDivisibleBy3 && isDivisibleBy5 )
        return "FizzBuzz";
    if ( isDivisibleBy3 )
        return "Fizz";
    if ( isDivisibleBy5 )
        return "Buzz";
    return n.ToString();
}

And we've covered every reasonable input, but what about unreasonable input? What happens if we pass 0 or a negative? Write those test cases.

public String FizzBuzz(int n) {

    if (n < 1)
        throw new ArgumentException("n must be >= 1");

    var isDivisibleBy3 = isDivisibleBy(3, n);
    var isDivisibleBy5 = isDivisibleBy(5, n);

    if ( isDivisibleBy3 && isDivisibleBy5 )
        return "FizzBuzz";
    if ( isDivisibleBy3 )
        return "Fizz";
    if ( isDivisibleBy5 )
        return "Buzz";
    return n.ToString();
}

Is this starting to look like "real code" yet? More importantly, at what point did it stop being "unreal code" and transition to being "real"? That's something to ponder on...

So, I was able to do this simply by looking for a test that I knew wouldn't pass at each step, but I've had a lot of practice. When I'm at work, things aren't ever this simple and I may not always know what test will force a change. Sometimes I'll write a test and be surprised to see it already passes! I highly recommend that you get in the habit of creating a "Test List" before you get started. This test list should contain all the "interesting" inputs you can think of. You might not use them all and you'll likely add cases as you go, but this list serves as a roadmap. My test list for FizzBuzz would look something like this.

  • Negative
  • Zero
  • One
  • Two
  • Three
  • Four
  • Five
  • Six (non trivial multiple of 3)
  • Nine (3 squared)
  • Ten (non trivial multiple of 5)
  • 15 (multiple of 3 & 5)
  • 30 (non trivial multiple of 3 & 5)
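That list can be written down directly as data and driven through one loop or a parameterized test. The sketch below is Java rather than the C# used above, and it folds in the finished implementation so the list and the code sit side by side; the names are mine, not part of any standard kata:

```java
// The "Test List" above, expressed as data (sketch; names invented).
import java.util.LinkedHashMap;
import java.util.Map;

class FizzBuzzKata {
    static String fizzBuzz(int n) {
        if (n < 1) throw new IllegalArgumentException("n must be >= 1");
        boolean by3 = n % 3 == 0;
        boolean by5 = n % 5 == 0;
        if (by3 && by5) return "FizzBuzz";
        if (by3) return "Fizz";
        if (by5) return "Buzz";
        return Integer.toString(n);
    }

    // Each entry corresponds to one item on the test list.
    static Map<Integer, String> testList() {
        Map<Integer, String> cases = new LinkedHashMap<>();
        cases.put(1, "1");
        cases.put(2, "2");
        cases.put(3, "Fizz");
        cases.put(4, "4");
        cases.put(5, "Buzz");
        cases.put(6, "Fizz");      // non-trivial multiple of 3
        cases.put(9, "Fizz");      // 3 squared
        cases.put(10, "Buzz");     // non-trivial multiple of 5
        cases.put(15, "FizzBuzz"); // multiple of 3 & 5
        cases.put(30, "FizzBuzz"); // non-trivial multiple of 3 & 5
        return cases;
    }
}
```

In a real suite, something like JUnit's parameterized tests would replace the hand-rolled map, and the negative/zero entries would assert that an exception is thrown.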
NCSY
  • 3
  • 2
RubberDuck
  • 8,911
  • 5
  • 35
  • 44
  • 3
    Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/62927/discussion-on-answer-by-rubberduck-when-do-you-write-the-real-code-in-tdd). – maple_shaft Jul 27 '17 at 13:42
  • 49
    Unless I'm completely misunderstanding this answer: "We could do something silly like if n == 1, but we'll skip to the sane solution." - the whole thing was silly. If you know up front you want a function that does *spec*, write tests for *spec* and skip the part where you write versions that obviously fail *spec*. If you find a bug in *spec* then sure: write a test first to verify you can exercise it prior to the fix and observe the test passes after the fix. But there's no need to fake all these intermediate steps. – GManNickG Jul 27 '17 at 21:47
  • 17
    The comments that point out the major flaws in this answer and TDD in general have been moved to chat. If you're considering using TDD, please read the 'chat'. Unfortunately the 'quality' comments are now hidden amongst a load of chat for future students to read. – user3791372 Jul 28 '17 at 00:07
  • I would be more precise regarding the contents of this "test list", if you wanted to improve this answer. I would explicitly talk about "boundary values" and "class partitioning". –  Jul 28 '17 at 23:37
  • 2
    @GManNickG I believe the point is to get the right amount of tests. Writing the tests beforehand makes it easy to miss what special cases should be tested, leading either to situations not being covered adequately in the tests, or to essentially the same situation being pointlessly covered loads of times in the tests. If you can do that without these intermediate steps, great! Not everyone can do so yet though, it's something that takes practice. – hvd Jul 30 '17 at 09:08
  • 1
    Not to bring this monster back to life, but I think it is important to note that the TDD method used here is *Triangulation*. I understand RubberDuck uses this everywhere, but it is not the only way to pass a test using TDD. Kent Beck describes in his book **refactor** as a step to remove duplication by going from constants to variables (from `1.ToString()` to `n.ToString()`), without writing a new test. In fact, the first test is not complete until this step has been taken. – Chris Wohlert Aug 04 '17 at 17:57
  • 2
    And here's a quote from Kent Beck on refactoring: "Now that the test runs, we can realize (as in “**make real**”) the implementation of summary()". He then proceeds to change a constant to a variable. I felt this quote matched the question quite well. – Chris Wohlert Aug 04 '17 at 17:57
  • This came back to my attention, so to address the comments about skipping to the general case... I personally find that skipping to the general case makes it too easy to have false positives and missing test cases. To each their own, but I like knowing my code works as intended. – RubberDuck Dec 07 '17 at 22:49
  • 2
    Just stumbled upon this: the comment by GManNickG states that this is all silly when you know that a function does spec. But this is a toy example. Having an exact spec given to you in advance is a low bar; the whole point is that you *don't* have it. This is a design process; you are the one *making up* the detailed spec on the go, in order to satisfy a higher level requirement. And you're not merely testing inputs and outputs of a function, you're testing abstract behavior of a unit - an encapsulated piece of code that you designated to be a "unit". At least, that's what you should be doing. – Filip Milovanović Jun 09 '22 at 13:22
47

The "real" code is the code you write to make your test pass. Really. It's that simple.

When people talk about writing the bare minimum to make the test green, that just means that your real code should follow the YAGNI principle.

The idea of the refactor step is just to clean up what you've written once you're happy that it meets the requirements.

So long as the tests that you write actually encompass your product requirements, once they are passing then the code is complete. Think about it, if all of your business requirements have a test and all of those tests are green, what more is there to write? (Okay, in real life we don't tend to have complete test coverage, but the theory is sound.)

GenericJon
  • 541
  • 3
  • 7
  • 47
    Unit tests can't actually encompass your product requirements for even relatively trivial requirements. At best, they sample the input-output space and the idea is that you (correctly) generalize to the full input-output space. Of course, your code could just be a big `switch` with a case for each unit test which would pass all tests and fail for any other inputs. – Derek Elkins left SE Jul 24 '17 at 22:58
  • 9
    @DerekElkins TDD mandates failing tests. Not failing unit tests. – Taemyr Jul 25 '17 at 05:56
  • 7
    @DerekElkins that's why you don't just write unit tests, and also why there's a general assumption that you're trying to make something not just fake it! – jonrsharpe Jul 25 '17 at 06:37
  • 37
    @jonrsharpe By that logic, I would never write trivial implementations. E.g. in the FizzBuzz example in RubberDuck's answer (which only uses unit tests), the first implementation clearly "just fakes it". My understanding of the question is exactly this dichotomy between writing code that you know is incomplete and code that you genuinely believe will implement the requirement, the "real code". My "big `switch`" was intended as a logical extreme of "writing the bare minimum to make the tests green". I view the OP's question as: where in TDD is the principle that avoids this big `switch`? – Derek Elkins left SE Jul 25 '17 at 06:59
  • 2
    One possible answer is to use something more than (in addition to) unit tests (i.e. point tests). For example, randomized tests. This is usually not demonstrated in TDD introductions, let alone mentioned as a requirement for applying TDD. (Note, I'm only talking about "developer tests", i.e. the tests run in the red bar/green bar cycle. TDD clearly doesn't advocate replacing integration or acceptance testing with developer tests.) – Derek Elkins left SE Jul 25 '17 at 07:04
  • 1
    @DerekElkins This gets a bit confused because there's so many different meanings of "simplicity". One formal measure is Kolmogorov complexity (one of the ways Occam's Razor is formally defined, by the way) - you have a set of values you want to produce, and you want the shortest code that produces that result. A giant switch is more complex than `if (a % 3) ...`. A lookup table is also more complex, because input data is also considered part of the program in this formalism. This is separate from the code being readable or easy to understand, of course. – Luaan Jul 25 '17 at 10:25
  • 1
    @DerekElkins I agree that unit tests will not cover the entire input-output space. Even adding other kinds of tests cannot hope to cover it. But the principle that avoids the "big switch" solution is the same as if you weren't using TDD: Anyone creating such a solution will either be swiftly out of a job or sick of maintaining their own code. Developers shouldn't be simple robots programming to pass tests; A little human intelligence needs to be applied. – GenericJon Jul 25 '17 at 10:26
  • 1
    @DerekElkins A single input-output lookup table (or switch) is about as long as the message you're trying to convey. Adding more input-output cases means you need to make the code longer. The ifs are strictly simpler because they represent the new input-output cases without extra code (of course, not *all* the cases - we're not quite capable of writing bug-free code; even RubberDuck's FizzBuzz code can fail). Randomised tests are mainly useful by avoiding preferentially picking your samples, and other methods may be useful when verifying a good answer is easier than producing it. – Luaan Jul 25 '17 at 10:31
  • 2
    @GenericJon That's a bit too optimistic in my experience :) For one, there's people who enjoy mindless repetitive work. They'll be happier with a giant switch statement than with a "complicated decision-making". And to lose their job, they'd either need someone who calls them out on the technique (and they better have good evidence it's actually losing the company opportunities/money!), or do exceptionally badly. After taking over the maintenance on many such projects, I can tell that it's easy for very naïve code to last for decades, as long as it makes the customer happy (and paying). – Luaan Jul 25 '17 at 10:34
  • 1
    @Luaan: one possible metric to apply at code review is, "how easy is it to write an additional test that is correct per specification, and which is failed by the code I'm reviewing". So yes, of course, there has to be someone doing that review, but you can at least pit the mindless-repetitive-work-lovers against each other. You can't solve FizzBuzz with an exhaustive switch statement, so they'll find each other's code wanting, because proving others wrong is fun. Then you can suggest that they maybe should all start writing code with a chance of passing review :-) – Steve Jessop Jul 26 '17 at 14:13
  • 1
    @DerekElkins You're supposed to refactor your code between tests with the aim of improving its quality. Quality is usually defined in terms of a set of principles, e.g. [Kent Beck's design rules for simple code](https://martinfowler.com/bliki/BeckDesignRules.html). Your proposed `switch` violates at least two of those design rules, and probably three of them. If you're looking for something more formal, you might want to consider Bob Martin's [Transformation Priority Premise](https://8thlight.com/blog/uncle-bob/2013/05/27/TheTransformationPriorityPremise.html), which strictly prevents this. – Jules Jul 27 '17 at 09:59
  • @Jules I've read the second link before. The point of my comments was to highlight ways this answer is at best incomplete. Refactoring doesn't change behavior. If your code isn't correct, refactoring won't make it so. Transformation Priority Premise is not refactoring and articulates a "something more" to TDD than just "making the tests pass". Note how that article explicitly addresses my big `switch`. As to looking for something more formal, I'd look to the programming by refinement community for that, not Bob Martin's made-up-on-the-spot list. – Derek Elkins left SE Jul 27 '17 at 11:26
  • @Derek Elkins: "where in TDD is the principle that avoids this big switch?" Well, in my opinion, nowhere. In TDD you test some corner cases that give you a feeling your implementation is correct, but you cannot produce a correct and complete implementation only guided by TDD. That implementation is derived by your understanding of the problem domain and TDD is just an additional tool that helps you along the way, but most of the times, TDD alone is not sufficient. See the famous article: http://ravimohan.blogspot.de/2007/04/learning-from-sudoku-solvers.html – Giorgio Jul 31 '17 at 08:36
14

The short answer is that the "real code" is the code that makes the test pass. If you can make your test pass with something other than real code, add more tests!

I agree that lots of tutorials about TDD are simplistic. That works against them. A too-simple test for a method that, say, computes 3+8 really has no choice but to also compute 3+8 and compare the result. That makes it look like you'll just be duplicating code all over, and that testing is pointless, error-prone extra work.

When you're good at testing, that will inform how you structure your application, and how you write your code. If you have trouble coming up with sensible, helpful tests, you should probably re-think your design a bit. A well-designed system is easy to test -- meaning sensible tests are easy to think of, and to implement.

When you write your tests first, watch them fail, and then write the code that makes them pass, that's a discipline to ensure that all your code has corresponding tests. I don't slavishly follow that rule when I'm coding; often I write tests after the fact. But doing tests first helps to keep you honest. With some experience, you'll start to notice when you're coding yourself into a corner, even when you're not writing tests first.

Carl Raymond
  • 1,010
  • 1
  • 8
  • 6
  • 7
    Personally, the test I'd write would be `assertEqual(plus(3,8), 11)`, not `assertEqual(plus(3,8), my_test_implementation_of_addition(3,8))`. For more complex cases, you always look for a means of proving the result correct, *other than* dynamically calculating the correct result in the test and checking equality. – Steve Jessop Jul 26 '17 at 13:59
  • So for a really silly way of doing it for this example, you might prove that `plus(3,8)` has returned the correct result by subtracting 3 from it, subtracting 8 from that, and checking the result against 0. This is so obviously equivalent to `assertEqual(plus(3,8), 3+8)` as to be a bit absurd, but if the code under test is building something more complicated than just an integer, then taking the result and checking each part for correctness is often the right approach. Alternatively, something like `for (i=0, j=10; i < 10; ++i, ++j) assertEqual(plus(i, 10), j)` – Steve Jessop Jul 26 '17 at 14:02
  • ... since that avoids the big fear, which is that when writing the test we'll make the same mistake on the subject of "how to add 10" that we made in the live code. So the test carefully avoids writing any code that adds 10 to anything, in the test that `plus()` can add 10 to things. We do still rely on the programmer-verified initial loop values, of course. – Steve Jessop Jul 26 '17 at 14:08
  • 4
    Just want to point out that even if you're writing tests after the fact, it's still a good idea to watch them fail; find some part of the code which seems crucial to whatever you're working on, tweak it a little (e.g. replace a + with a -, or whatever), run the tests and watch them fail, undo the change and watch them pass. Many times I've done this the test doesn't actually fail, making it worse than useless: not only is it not testing anything, it's giving me false confidence that something is being tested! – Warbo Jul 28 '17 at 12:40
6

The refactor part is clean up when you're tired and want to go home.

When you're about to add a feature, the refactor part is what you change before the next test. You refactor the code to make room for the new feature. You do this when you know what that new feature will be, not when you're just imagining it.

This can be as simple as renaming GreetImpl to GreetWorld before you create a GreetMom class (after adding a test) to add a feature that will print "Hi Mom".
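A minimal sketch of that sequence (Java; only the names GreetWorld and GreetMom come from the text above, and everything else, including the return values, is invented):

```java
// Sketch of the rename-then-extend sequence described above.
interface Greet { String greet(); }

// Step 1 (refactor, before the next test): the class formerly named
// GreetImpl is renamed to say what it actually does.
class GreetWorld implements Greet {
    public String greet() { return "Hello World"; }
}

// Step 2 (after adding a failing test for the new feature):
class GreetMom implements Greet {
    public String greet() { return "Hi Mom"; }
}
```

The rename is pure refactoring: it changes no behavior, but it makes room for the second implementation to exist alongside the first.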

candied_orange
  • 102,279
  • 24
  • 197
  • 315
6

Sometimes examples about TDD can be misleading. As other people have pointed out, the code you write to make the tests pass is the real code.

But don't think that the real code appears as if by magic; it doesn't. You need a good understanding of what you want to achieve, and then you need to pick your tests accordingly, starting from the easiest cases and the corner cases.

For example, if you need to write a lexer, you start with the empty string, then with a bunch of whitespace, then a number, then a number surrounded by whitespace, then a malformed number, and so on. These small transformations lead you to the right algorithm; you don't jump from the easiest case to an arbitrarily chosen highly complex case just to get the real code done.

Bob Martin explains it perfectly here.
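To sketch where such a progression can end up, here is a toy lexer (Java, entirely invented for illustration) in which each branch exists because one of those small tests forced it:

```java
// Toy lexer sketch: splits input into number tokens.
// Each branch corresponds to one of the incremental tests above.
import java.util.ArrayList;
import java.util.List;

class TinyLexer {
    static List<String> tokens(String input) {
        List<String> result = new ArrayList<>();
        int i = 0;
        while (i < input.length()) {
            char c = input.charAt(i);
            if (Character.isWhitespace(c)) { i++; continue; } // test: whitespace only
            if (Character.isDigit(c)) {                       // tests: number, number + whitespace
                int start = i;
                while (i < input.length() && Character.isDigit(input.charAt(i))) i++;
                result.add(input.substring(start, i));
            } else {
                // test: malformed input
                throw new IllegalArgumentException("unexpected character: " + c);
            }
        }
        return result; // test: empty string yields no tokens
    }
}
```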

Malice
  • 129
  • 7
1

The real code appears in the refactor stage of the TDD cycle, i.e., the code that should be part of the final release.

Tests should be run every time you make a change.

The motto of the TDD life cycle would be: RED GREEN REFACTOR

RED: Write the tests

GREEN: Make an honest attempt to get functional code that passes the tests as quickly as possible: duplicated code, obscurely named variables, hacks of the highest order, etc.

REFACTOR: Clean up the code, properly name the variables. DRY up the code.
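As a tiny invented illustration (Java, not from the answer) of GREEN at any cost followed by REFACTOR:

```java
// GREEN: quickest thing that passes the tests - duplication, magic numbers.
class PriceGreen {
    static double bookTotal(double net)   { return net + net * 0.07; }
    static double laptopTotal(double net) { return net + net * 0.19; }
}

// REFACTOR: name the concepts, remove the duplication, keep the tests green.
class PriceRefactored {
    static final double REDUCED_RATE = 0.07;
    static final double STANDARD_RATE = 0.19;

    static double gross(double net, double rate) { return net * (1 + rate); }

    static double bookTotal(double net)   { return gross(net, REDUCED_RATE); }
    static double laptopTotal(double net) { return gross(net, STANDARD_RATE); }
}
```

The tests that passed against PriceGreen keep passing against PriceRefactored; only the factoring changed.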

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Graeme
  • 129
  • 2
  • 6
    I know what you're saying about the "Green" phase but it implies that hard-wiring return values to make the tests pass might be appropriate. In my experience "Green" should be an honest attempt to make working code to meet the requirement, it may not be perfect but it should be as complete and "shippable" as the developer can manage in a first pass. Refactoring is probably best done some time later after you've done more development and the problems with the first pass become more apparent and opportunities to DRY emerge. – mcottle Jul 25 '17 at 05:32
  • @mcottle i consider these all part of the same task. get it done, then clean it up. Further refactorings should take place as time goes on as part of other tasks. – Graeme Jul 25 '17 at 09:37
  • 2
    @mcottle: you might be surprised how many implementations of a get-only repository can be hardcoded values in the codebase. :) – Bryan Boettcher Jul 25 '17 at 15:06
  • 6
    Why would I ever write crap code and clean it up, when I can crank out nice, production quality code almost as fast as I can type? :) – Kaz Jul 25 '17 at 19:14
  • 1
    @Kaz Because this way you risk adding *untested behavior*. The only way to ensure you have a test for each and every desired behavior is to make the *simplest possible change*, regardless of how crappy it is. Sometimes the following refactoring brings up a new approach you did not think of in advance... – Timothy Truckle Jul 27 '17 at 20:49
  • 1
    @TimothyTruckle What if it takes 50 minutes to find the simplest possible change, but only 5 to find the second simplest possible change? Do we go with the second simplest, or keep searching for the simplest? – Kaz Jul 27 '17 at 23:08
  • You may understand "simplest change" as the one producing the least Levenshtein distance. An exact measure, but it leads nowhere. +++ I guess the time needed for the change is the most important part of its complexity. So when it takes ten times as long, it can't be simpler. I understand the rule as: 1. it must do the job, 2. it may be stupid and ugly, but 3. it must be honest, which implies it mustn't be hardwired. +++ Imagine a production failure to be fixed NOW. The failed test is an example of what went wrong. There may be more, so don't hardwire. No time, so don't overgeneralize. – maaartinus Jul 28 '17 at 05:45
    @Kaz *"What if it takes 50 minutes to find the simplest possible change, but only 5 to find the second simplest possible change?"* That is nonsense. The *simplest possible change* is whatever comes to mind first that is enough to make your *last written test* pass (but not necessarily solve the complete feature). Since your last test added a **single expectation about your code's behavior**, this is **always** something simple like adding an `if`, an `else` to an existing `if`, a method call or the return of a value. – Timothy Truckle Jul 28 '17 at 08:42
1

When do you write the “real” code in TDD?

The red phase is where you write code.

In the refactoring phase the primary goal is to delete code.

In the red phase you do anything to make the test pass as quickly as possible and at any cost. You completely disregard everything you've ever heard about good coding practices, design patterns, and the like. Making the test green is all that matters.

In the refactoring phase you clean up the mess you just made. First check whether the change you just made is of the topmost applicable kind in the Transformation Priority list, and whether there is any code duplication you can remove, most likely by applying a design pattern.

Finally you improve readability by renaming identifiers and extract magic numbers and/or literal strings to constants.


It's not red-refactor, it's red-green-refactor. – Rob Kinyon

Thanks for pointing at this.

So it is the green phase where you write the real code

In the red phase you write the executable specification...

Timothy Truckle
  • 2,336
  • 9
  • 12
  • It's not red-refactor, it's red-green-refactor. The "red" is you take your test suite from green (all tests pass) to red (one test fails). The "green" is where you sloppily take your test suite from red (one test fails) to green (all tests pass). The "refactor" is where you take your code and make it pretty while keeping all tests passing. – Rob Kinyon Jul 27 '17 at 18:46
1

You are writing real code the whole time.

At each step you are writing code to satisfy the conditions that your code will satisfy for its future callers (which might be you, or not...).

You think you're not writing useful (real) code because, in a moment, you might refactor it out.

Code-Refactoring is the process of restructuring existing computer code—changing the factoring—without changing its external behavior.

What this means is that even though you are changing the code, the conditions the code satisfied are left unchanged. And the checks (tests) you implemented to verify your code are already there to verify whether your modifications changed anything. So the code you wrote the whole time is still in there, just in a different form.

Another reason you might think it's not real code is that you're doing examples where you can already foresee the end program. This is very good, as it shows you have knowledge about the domain you are programming in.
But many times programmers are in a domain that is new and unknown to them. They don't know what the end result will be, and TDD is a technique for writing programs step by step, documenting our knowledge about how the system should work and verifying that our code does work that way.

When I read The Book(*) on TDD, the most important feature that stood out for me was the TODO list. It showed me that TDD is also a technique to help developers focus on one thing at a time. So this is also an answer to your question about how much real code to write: I would say enough code to focus on one thing at a time.

(*) "Test Driven Development: By Example" by Kent Beck
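That TODO-list discipline can be sketched like this (Python, with a hypothetical Money example in the spirit of the book's running example; the names are mine):

```python
# The TODO list lives next to the tests -- one item at a time:
# [x] $5 + $5 == $10
# [ ] support mixed currencies
# [ ] round to whole cents

class Money:
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        # Just enough to satisfy the one checked-off item.
        return Money(self.amount + other.amount)

    def __eq__(self, other):
        return self.amount == other.amount

def test_five_plus_five():
    # Only the first TODO item is implemented; the rest waits.
    assert Money(5) + Money(5) == Money(10)

test_five_plus_five()
```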

Robert Andrzejuk

You're not writing code to make your tests fail.

You write your tests to define what success should look like; they should all initially fail because you haven't yet written the code that will make them pass.

The whole point about writing initially-failing tests is to do two things:

  1. Cover all cases - all nominal cases, all edge cases, etc.
  2. Validate your tests. If you only ever see them pass, how can you be sure they will reliably report a failure when one occurs?

The point behind red-green-refactor is that writing the correct tests first gives you confidence that the code you wrote to pass them is correct, and lets you refactor knowing that your tests will tell you as soon as something breaks, so you can immediately go back and fix it.
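One full turn of that cycle might look like this (sketched in Python for brevity; the School/enroll names are hypothetical, echoing the school example from the question):

```python
# RED: a test that defines success before the code exists.
# Running it before School is written would fail (NameError).
def test_enroll_charges_registration_fee():
    school = School(registration_fee=100)
    school.enroll("Alice")
    assert school.balance == 100

# GREEN: the simplest implementation that makes the test pass.
class School:
    def __init__(self, registration_fee):
        self.registration_fee = registration_fee
        self.balance = 0

    def enroll(self, name):
        self.balance += self.registration_fee

# REFACTOR comes next: improve names and structure while this
# test keeps passing and guards the behavior.
test_enroll_charges_registration_fee()
```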

In my own experience (C#/.NET), pure test-first is a bit of an unattainable ideal, because you can't compile a call to a method which doesn't yet exist. So "test first" is really about coding up interfaces and stubbing implementations first, then writing tests against the stubs (which will initially fail) until the stubs are properly fleshed out. I'm not ever writing "failing code", just building out from stubs.

Zenilogix

I think you may be confused between unit tests and integration tests. I believe there may also be acceptance tests, but that depends on your process.

Once you've tested all of the little "units" then you test them all assembled, or "integrated." That's usually a whole program or library.

In code that I've written, the integration tests exercise a library with various test programs that read data and feed it to the library, then check the results. Then I do it with threads. Then I do it with threads and fork() in the middle. Then I run it and kill -9 it after 2 seconds, then I start it again and check its recovery mode. I fuzz it. I torture it in all kinds of ways.

All of that is ALSO testing, but I don't have a pretty red / green display for the results. It either succeeds, or I dig through a few thousand lines of error code to find out why.

That's where you test the "real code."

And I just thought of this, but maybe you don't know when you are supposed to be done writing unit tests. You are done when your tests exercise everything that you specified the code should do. Sometimes you can lose track of that among all of the error handling and edge cases, so you might want to keep a group of happy-path tests that simply go straight through the specifications.
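Such a happy-path group might be kept in its own test case (a Python sketch with hypothetical names; the answer itself names no code):

```python
import unittest

# Hypothetical class under test, kept minimal for illustration.
class School:
    def __init__(self):
        self.students = []

    def enroll(self, name):
        self.students.append(name)

class HappyPathTests(unittest.TestCase):
    """Straight-through-the-spec cases, grouped apart from the
    edge-case and error-handling tests so spec coverage stays visible."""

    def test_enroll_registers_student(self):
        school = School()
        school.enroll("Alice")
        self.assertIn("Alice", school.students)

# Run just this group explicitly.
suite = unittest.TestLoader().loadTestsFromTestCase(HappyPathTests)
unittest.TextTestRunner(verbosity=0).run(suite)
```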

Zan Lynx
  • (its = possessive, it's = "it is" or "it has". See for example *[How to Use Its and It's](http://www.wikihow.com/Use-its-and-it's)*.) – Peter Mortensen Jul 27 '17 at 16:08

In answer to the title of the question: "When do you write the “real” code in TDD?", the answer is: 'hardly ever' or 'very slowly'.

You sound like a student, so I will answer as if advising a student.

You are going to learn lots of coding 'theories' and 'techniques'. They're great for passing the time on overpriced student courses, but of very little benefit beyond what you could read in a book in half the time.

The job of a coder is solely to produce code, code that works really well. That is why you, the coder, plan the code in your mind, on paper, in a suitable application, etc., and work around possible flaws/holes in advance by thinking logically and laterally before coding.

But you need to know how to break your application to be able to design decent code. For example, if you didn't know about Little Bobby Tables (xkcd 327), then you probably wouldn't be sanitising your inputs before working with the database, and so wouldn't be able to secure your data against that kind of attack.

TDD is just a workflow designed to minimise the bugs in your code by creating tests for what could go wrong before you code your application, because coding can get exponentially more difficult the more code you introduce, and you forget bugs you once thought of. Once you think you've finished your application, you run the tests and, boom, hopefully any bugs are caught by your tests.

TDD is not - as some people believe - write a test, get it passing with minimal code, write another test, get that passing with minimal code, and so on. Instead, it's a way of helping you code confidently. This ideal of continuously refactoring code to make it work with tests is idiotic, but it is a nice concept amongst students because it makes them feel great when they add a new feature while they're still learning how to code...

Please do not fall into this trap, and see your role for what it is: the job of a coder is solely to produce code, code that works really well. Now, remember you'll be on the clock as a professional coder, and your client won't care whether you wrote 100,000 assertions or 0. They just want code that works. Really well, in fact.

Peter Mortensen
user3791372
    I'm not even close to a student, but I do read and try to apply good techniques and be professional. So in that sense, I am a "student." I just ask very basic questions because that's the way I am. I like to know exactly why I am doing what I am doing. The heart of the matter. If I don't get that, I don't like it and start asking questions. I need to know why, if I am going to use it. TDD seems intuitively good in some ways like knowing what you need to create and thinking things through, but the implementation was difficult to understand. I think I have a better grasp now. – johnny Jul 25 '17 at 21:34
    [1. You are not allowed to write any production code unless it is to make a failing unit test pass. 2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures. 3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.](http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd) – RubberDuck Jul 27 '17 at 00:07
    Those are the rules of TDD. You are free to write code however you want, but if you don't follow those three rules you are not doing TDD. – Sean Burton Jul 27 '17 at 10:10
    "Rules" of one person? TDD is a suggestion to help you code, not a religion. It's sad to see so many people adhere to an idea so anally. Even the origin of TDD is controversial. – user3791372 Jul 27 '17 at 12:14
    @user3791372 TDD is a very strict and clearly defined process. Even if many think it just means "do some testing while you are programming", it's not. Let's try not to mix up terms here; this question is about the process TDD, not testing in general. – Alex Jul 28 '17 at 09:21
    @Alex I've never mentioned "general testing" in my answer. Don't read something into it that isn't there. – user3791372 Jul 28 '17 at 11:34
    I don't see how it's not there. I do agree with the overall sentiment expressed in your answer, but TDD is, by very clear definition, _write a test, get it passing with minimal code, write another test, get that passing with minimal code, etc._ I'm personally not a fan of that process; I don't mind reusing the concepts and ideas behind it for my own development, but the term TDD is taken, very clearly defined, and it is a very strict process. And you yourself seem to agree, considering the very first sentence of your answer. Maybe I'm missing something? – Alex Jul 28 '17 at 12:26