
My understanding of the TDD methodology is that (failing) test cases are written promptly after finalizing the requirements.

My understanding of the open-closed principle (in the context of OOP) is to structure the class hierarchy so that any new features involve writing new code instead of modifying old code.

Those two strike me as contradicting. Shouldn't instead the process be more like

  1. clear up requirements
  2. take a long walk away from the computer
  3. design a public api with minimal implementation (e.g. throw NotImplemented)
  4. implement all requirements as tests, make sure they all fail
  5. don't touch the dummy implementation. Inherit the interface by another class, which turns all tests green?
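For concreteness, steps 3-5 above could be sketched like this (a minimal sketch with a hypothetical `Stack` API; all names are illustrative, not from any real library):

```python
import unittest

# Step 3: public API with a dummy implementation that only raises.
class Stack:
    def push(self, item):
        raise NotImplementedError

    def pop(self):
        raise NotImplementedError

# Step 5: a separate class implements the interface; the dummy is untouched.
class ListStack(Stack):
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Step 4: requirements as tests. Run against Stack they all fail (red);
# against ListStack they all pass (green).
class StackTests(unittest.TestCase):
    stack_class = ListStack  # swap in Stack to see every test fail

    def test_pop_returns_last_pushed_item(self):
        stack = self.stack_class()
        stack.push(1)
        stack.push(2)
        self.assertEqual(stack.pop(), 2)
```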
Vorac
  • TDD is not writing all the test cases and then getting them green one by one. TDD follows the cycle of red-green-refactor: write one failing test (red), get the test to pass (green), refactor the code. The Open/Closed principle is about structuring classes so that new features are added in new classes without modifying the old ones. It would be practically impossible to be 100% open/closed (what about your startup/bootstrapping classes?). The reason you have the open/closed principle is so that you don't break old requirements when coding new ones. But if you have test cases, those are reasonably taken care of. – Nachiappan Kumarappan Sep 14 '20 at 02:15
  • @Vorac, I see no contradiction if tests are written after design stage, but before implementation. Could you clarify why that does not fit in your model? – Basilevs Sep 14 '20 at 02:22
  • TDD is a design philosophy. The tests are the software specification. Tests written after the design process are not the same beast, even though they are written using the same techniques. Open/Closed is a property that is desirable to have, particularly in highly shared objects. It's not a design goal in its own right, though. – Kain0_0 Sep 14 '20 at 02:28
  • @Kain0_0 can you write tests before design? If they are written "during" design, then some relevant part of design is already done. – Basilevs Sep 14 '20 at 03:28
  • @Basilevs Yes, there are obviously some design decisions that predate even writing a test. For example, what language are you writing the test in? But tests written as part of TDD are part of the design process, in that they are design specifications. Some of the specifications will have been pre-determined and simply encoded in the test. Others fall out as the specification is implemented, which would have been difficult to observe a priori, yet a posteriori are blatantly obvious. These are observed during the refactor phase and extracted and codified as necessary requirements. – Kain0_0 Sep 14 '20 at 03:44
  • @Kain0_0 after a decision is made to open a component to extension, a corresponding test should be written. In this sense, a test is written after the design. I don't understand, how tests being part of (or the goal of) the design contradicts that. – Basilevs Sep 14 '20 at 03:56
  • @Basilevs If later a decision is made to edit the software, do you believe that you are not redesigning the software? Design is not a one step done thing. A new design requirement was surfaced/discovered/came along, and now the design must be updated. Following TDD (Test Driven *Design*), step 1. write a test capturing the design requirement. It does not matter if this was done in the projects first week, or 20 years after its first deployment. If the current implementation is no longer in alignment with the intended design, then the implementation must be reworked (step 2.) – Kain0_0 Sep 14 '20 at 04:19
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/112986/discussion-between-kain0-0-and-basilevs). – Kain0_0 Sep 14 '20 at 04:36
  • "Inherit the interface by another class" - there might not be a class to inherit from yet. In your example, this class would represent the "closed" part in OCP, but somebody has to write that class, and the path to it may not be straightforward (e.g., it will likely emerge after refactoring other code). But suppose you do have the class you can derive from; you still have to write the derived class, and while you're doing that, you want to do tests as you're writing it, so that you can get rapid feedback and check your assumptions as you go, and maybe adjust the design. 1/2 – Filip Milovanović Sep 14 '20 at 11:38
  • In other words, you don't want to write 20 tests, only to then deal simultaneously with like 12 errors; that completely defeats the main benefit of TDD. "Inherit the interface by another class, which turns all tests green" - OCP is not some arcane magic where you simply inherit something and, boom, you have a new feature; OCP just limits the impact of adding the new feature. If you are inheriting both interface and implementation, there will be parts that you'll have to adjust or add. If you are inheriting just the interface, you have to write all the new code. 2/2 – Filip Milovanović Sep 14 '20 at 11:38
  • The tests don't need to all fail at the same time. In fact, some tests can't be usefully written until other tests are passing (in the case of CRUD, it doesn't make sense to write a test that "Create->ReadAll->ResponseBody1 should be listed in ResponseBody2" if your ReadAll response is erroring in the first place). I believe the intent of the "all tests should fail" thing is that each and every test should have at least one failing state. You should be able to break something in the code that makes that test fail. – RoboticRenaissance Sep 14 '20 at 13:37

2 Answers


Frankly, I see at least three huge misconceptions in this question:

  1. what TDD is about,

  2. what the OCP is about, and

  3. the assumption that software is developed in a waterfall fashion.

Let me start with the OCP. The OCP is a principle for producing reusable, generic, black-box components or libraries. Such a component may be developed and released by a vendor A, then reused by a vendor B who has no direct control over the code (so it is closed against modification from B's point of view). But since A does not know the exact cases in which B will reuse the component, A provides parametrization or extension points for it - this is what "open for extension" means in the acronym OCP. Note that although the OCP is often explained using inheritance/polymorphism, that is not the essential characteristic of the principle.
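A tiny sketch of such an extension point, without any inheritance (hypothetical vendor-A function, illustrative names only):

```python
from typing import Callable, List

# Vendor A's component: closed for modification, but open for extension
# through the formatter parameter (the extension point).
def render_report(lines: List[str],
                  formatter: Callable[[str], str] = str.strip) -> str:
    # Vendor B never edits this function; they pass their own formatter.
    return "\n".join(formatter(line) for line in lines)

# Vendor B extends the behaviour without touching vendor A's code:
shouting = render_report(["hello", "world"], formatter=str.upper)
# shouting == "HELLO\nWORLD"
```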

In any business system of reasonable size, there will usually be a few components which follow the OCP, but most of them will not (except when you are in the role of library vendor A and your task is to design exactly such components).

Now take the fact that requirements are not "finalized" (at least, not all at once). Requirements are implemented one by one; each new one changes the existing system, and the implementation can influence the design and change the basis for the next requirement to implement.

Whenever a new requirement is implemented in a system, there are parts of the existing code which have to be touched, extended and refactored. Components which fit the OCP (at that time, and with regard to the specific requirement) can stay untouched; the code which uses these components will have to be adapted.

Now TDD comes into play. TDD is an implementation technique in which one writes one test at a time, for the next "arriving" requirement (or "slice" of a requirement), before that slice is actually implemented. Afterwards, code gets written to make the test succeed, and refactoring takes place. The refactoring may just clean up the code a little, but sometimes it can also be used to extract parts of the non-OCP-compliant code and make them OCP compliant, by introducing more parameters and extension points, or by extracting new reusable parts and components. So when the next requirement "arrives", one may be lucky and able to reuse these parts of the existing code without any change.
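A single such TDD step might look like this (hypothetical `total` requirement, illustrative only):

```python
import unittest

# Red: this test is written first, for the next requirement slice,
# and fails because total() does not exist yet.
class InvoiceTests(unittest.TestCase):
    def test_total_sums_line_items(self):
        self.assertEqual(total([10, 20, 5]), 35)

# Green: the minimal code that makes the test pass.
def total(amounts):
    return sum(amounts)

# Refactor: with the test green, total() can now be cleaned up, moved into
# a reusable component, or given an extension point, and the test guards
# against changing its observable behaviour.
```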

I hope this made clear that TDD, refactoring, and the OCP are in no way contradictory: quite the opposite, TDD can actually help to develop OCP-compliant components, and the OCP helps to build components which require less refactoring, fewer code changes and fewer tests.

Doc Brown

It’s not contradictory, it’s complementary:

  • TDD is about writing tests to formalize and verify requirements. But it’s not about finalized user requirements: it’s about currently known design requirements. These translate/transform some aspects of the user requirements in a way that makes sense in your design, and more specifically take into account the distribution of responsibilities between your components.

  • OCP is about shaping your design so as not to reinvent the wheel, but also not to break things that already work well. It allows you to specialize a class, benefit from the existing tests, and write new tests only for the specialized parts. (Here I say "specialization" to mean extension.)

So there is a synergy between the two that allows you to reach a stable and robust design more quickly:

  • OCP means not only a clean design but also fewer tests to verify the same requirements
  • TDD will reveal weaknesses of the current design early, which may suggest the need for refactoring
  • refactoring allows you to improve OCP compliance if it wasn’t well thought out initially.
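The second bullet of the first list can be sketched concretely: when a class is specialized rather than modified, the existing tests can be inherited, and only the new behaviour needs new tests (hypothetical `Greeter` classes, illustrative only):

```python
import unittest

class Greeter:
    def greet(self, name: str) -> str:
        return f"Hello, {name}"

class PoliteGreeter(Greeter):
    """Specialization: extends Greeter without modifying it (OCP)."""
    def greet(self, name: str) -> str:
        return super().greet(name) + ", nice to meet you"

class GreeterTests(unittest.TestCase):
    greeter_class = Greeter

    def test_greet_mentions_name(self):
        self.assertIn("Ada", self.greeter_class().greet("Ada"))

# The specialization inherits and re-runs the existing tests for free...
class PoliteGreeterTests(GreeterTests):
    greeter_class = PoliteGreeter

    # ...and only the specialized part needs a new test.
    def test_greet_is_polite(self):
        self.assertIn("nice to meet you", self.greeter_class().greet("Ada"))
```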

This approach is therefore completely compatible with incremental or evolving user requirements, which will translate into new design requirements and refactoring.

For your API:

  • you can of course take a long walk and work it out mentally, challenging yourself, alone in your head.
  • But you can just as well develop it with TDD and OCP, using for example mocks and test doubles, and converge with your team on a very robust design.

It’s a matter of project size. A team simply has more brainpower than any individual in it, and a team is not efficient at long abstract discussions on a long walk.

Christophe