95

There's a debate going on in our team at the moment as to whether modifying code design to allow unit testing is a code smell, or to what extent it can be done without being a code smell. This has come about because we're only just starting to put practices in place that are present in just about every other software dev company.

Specifically, we will have a Web API service that will be very thin. Its main responsibility will be marshalling web requests/responses and calling an underlying API that contains the business logic.

One example is that we plan on creating a factory that will return an authentication method type. We have no need for it to inherit an interface as we don't anticipate it ever being anything other than the concrete type it will be. However, to unit test the Web API service we will need to mock this factory.

This essentially means we either design the Web API controller class to accept DI (through its constructor or a setter), which means designing part of the controller just to allow DI and implementing an interface we don't otherwise need, or we use a third-party framework like Ninject to avoid having to design the controller that way, but we'll still have to create an interface.
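
To make the trade-off concrete, here is a minimal sketch of the two shapes under discussion (all names hypothetical, not our actual design):

public class AuthMethod { /* concrete authentication method type */ }

public interface IAuthMethodFactory
{
    AuthMethod Create();
}

public class AuthMethodFactory : IAuthMethodFactory
{
    public AuthMethod Create() => new AuthMethod();
}

// Option A: constructor injection against an interface. The interface exists
// mainly so that a test double can stand in for the real factory.
public class AuthControllerA
{
    private readonly IAuthMethodFactory _factory;

    public AuthControllerA(IAuthMethodFactory factory)
    {
        _factory = factory; // a unit test passes a mock here
    }
}

// Option B: the controller creates the concrete factory itself. No extra
// interface, but also no seam for a unit test to substitute behaviour.
public class AuthControllerB
{
    private readonly AuthMethodFactory _factory = new AuthMethodFactory();
}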

Some on the team seem reluctant to design code just for the sake of testing. It seems to me that there has to be some compromise if you hope to unit test, but I'm unsure how to allay their concerns.

Just to be clear, this is a brand new project, so it's not really about modifying code to enable unit testing; it's about designing the code we're going to write to be unit testable.

Lee
  • Let me repeat this: your colleagues want unit tests for new code, but they refuse to write the code in a way that makes it unit testable, even though there is no risk of breaking anything existing? If that's true, you should accept @KilianFoth's answer and ask him to highlight the first sentence of his answer in bold! Your colleagues apparently have a very big misunderstanding about what their job is. – Doc Brown Jan 29 '19 at 12:36
  • There is a vast difference between modifying the code for the sake of testing and writing code that is easily tested. If you're retrofitting TDD or writing unit tests late, you will hit this problem time and time again. – Robbie Dee Jan 29 '19 at 12:49
  • @DocBrown That's the crux of it yes... it's specifically using interfaces and injection that some don't like the idea of if they're only used to support testing. I think I'm just going to have to respond with "there's no other way" or "how else can you unit test?" – Lee Jan 29 '19 at 12:49
  • If it's just the interfaces, I would go with "they are not for unit testing, they are for decoupling"; then if they say "we don't need to decouple", answer "you do, for unit tests among other things". – Ewan Jan 29 '19 at 13:23
  • I did actually mention tight coupling and the response was "yes but that's intended". – Lee Jan 29 '19 at 13:30
  • So, if you have a Web API, why not test directly against that API? By the way, I'd probably agree with your colleagues that to base an architecture on unit testing might not be a good idea, because it makes the code more complicated, and I value simplicity. Remember, over-engineering for the sake of unit tests is still over-engineering. – Christian Hackl Jan 29 '19 at 13:36
  • It _is_ possible for your unit tests to test multiple classes at once due to high cohesion. Just start writing the tests. If you realize tests are hard to set up, too slow due to dependencies (e.g. a database), then you'll figure out where you need to decouple. – Vincent Savard Jan 29 '19 at 13:38
  • @ChristianHackl "[...] why not test directly against that API?" That's a false dichotomy. Both kinds of tests are very important. You also built a straw-man to argue against unit testing: of course over-engineering is bad. Building your system so it is testable is the literal opposite of over-engineering. – Vincent Savard Jan 29 '19 at 13:41
  • @Lee: Who says that decoupling is *always* a good idea? Have you ever seen a codebase in which everything is passed as an interface created from an interface factory using some configuration interface? I have; it was written in Java, and it was a complete, unmaintainable, buggy mess. Extreme decoupling is code obfuscation. – Christian Hackl Jan 29 '19 at 13:51
  • @VincentSavard: I'm not sure the dichotomy is false, because usually, time, manpower and money on a software project are limited, so you cannot do everything and may have to choose one over the other (at least if you do it right; writing tests correctly consumes a lot of time). My argument is also not a straw-man; I argue that on the code level, creating additional interfaces just to inject test implementations is over-engineering. A system formed from such classes, modules and functions is testable and over-engineered at the same time. – Christian Hackl Jan 29 '19 at 13:57
  • @ChristianHackl That's _exactly_ because time, manpower and money are limited that you need to test at the appropriate level. Integration and system tests give you more confidence that the system works, but are much more costly to maintain. Unit tests allow you to test at a very granular level without bothering with setting up the entire system. They're both good at what they do. I won't add anything on the over-engineering point, I still don't see how it is any relevant. Bad architects will make bad systems, unit tests or no. – Vincent Savard Jan 29 '19 at 14:01
  • @VincentSavard: Well, it somewhat depends on the project context, but generally, when in doubt, I'd rather have excellent integration tests and no unit tests at all than so-so integration tests and so-so unit tests. – Christian Hackl Jan 29 '19 at 14:08
  • @ChristianHackl Once again, that's a false dichotomy... – Vincent Savard Jan 29 '19 at 14:13
  • @VincentSavard: You are seeing a lot of false dichotomies. – Christian Hackl Jan 29 '19 at 14:21
  • @ChristianHackl I definitely don't think decoupling is always a good idea. I just think using interfaces and DI is a better idea than, say, using reflection to inject mocks into a concrete implementation just to avoid interfaces and DI in constructors, which was an idea given by the team. Another one was using debug directives to prevent constructors with DI going into production. – Lee Jan 29 '19 at 14:25
  • @ChristianHackl You're presenting the choice between excellent tests, or so-so tests. There's no need to choose, you can very well have both excellent integration tests, and excellent unit tests, which is why this is a false dichotomy. Anyway, this definitely isn't a good medium to continue the discussion, I'm glad what you're doing works for you and your team. – Vincent Savard Jan 29 '19 at 14:53
  • Calling the interface "otherwise unnecessary" is like saying "we don't need to add a blinker to this car. It would allow the car to signal its turns, but is otherwise unnecessary". A mock implementation is probably one of the main advantages of having an interface in the first place. Furthermore, test-ability is definitely something that should go into design. Un-testable code leads to useless and unmaintainable tests. – Sava B. Jan 29 '19 at 19:33
  • I assume they have negative opinions of TDD. Does .NET not have any test frameworks where a derived class is created for mocking purposes (thus avoiding interfaces)? Interfaces are also basically zero cost, so it sounds somewhat like bikeshedding to argue over whether to use one if you derive benefit from it. – Matthew Read Jan 29 '19 at 22:02
  • Just a note: interfaces are not mandatory for mocking classes. We can still extend the concrete class and build stubs (see the sketch just after these comments). Stubs are easier to implement, and they make tests very deterministic, since stubs are not (usually) programmable. That said, interfaces offer a great deal at a reasonable cost. – Laiv Jan 29 '19 at 22:16
  • @Laiv - that assumes your classes aren't `sealed` (or `final`, in Java), because the normal recommendation is to prohibit extension unless you specifically design for it. – Clockwork-Muse Jan 30 '19 at 00:40
  • Michael Feathers' *Working Effectively with Legacy Code* deals very nicely with this issue, and should give you a good idea about the advantages of testing even in a new code base. – l0b0 Jan 30 '19 at 01:37
  • @Laiv That's a good point about not having to use interfaces to mock, but it would mean (in .NET at least) making the base methods `virtual` or hiding them using `new`. Maybe this is a "less invasive" way of testing. – Lee Jan 30 '19 at 08:12
  • That being said, a concern raised by some in the team was specifically crafting a constructor (or any other part of the class) to accept a mock, which they think would be otherwise unnecessary. – Lee Jan 30 '19 at 08:18
  • @l0b0 That's pretty much the bible for this. On stackexchange it wouldn't be an answer to the question, but in RL I would tell OP to get this book and read it (at least partly). OP, **get _Working Effectively with Legacy Code_ and read it, at least partly** (or tell your boss to get it). It addresses questions like these. _Especially_ if you didn't do testing and now you're getting into it - you might have 20 years experience, but _you will now do things you don't have experience with_. It's so much easier to read about them than to painstakingly learn all of that by trial and error. – R. Schmitz Jan 30 '19 at 10:50
  • @Lee you don't have to add a specific constructor to accept mocks. Just cast the mock to the concrete class in your unit test classes: `new Component((ConcreteClass)myMock);`. You may not even need to cast the mock; I guess .NET will detect that the mock inherits ConcreteClass. – Laiv Jan 30 '19 at 10:51
  • @Laiv the thing is we will have to cater for it being injected, because as it stands the constructor would have no params and would instantiate the type on its own. So while this removes having to use interfaces, it will still require the class to be able to take in some other derivative of the base class, which is the problem raised by others in the team. – Lee Jan 30 '19 at 11:39
  • Thanks for the recommendation of Michael Feathers' book, I will definitely pick up a copy. – Lee Jan 30 '19 at 11:44
  • @Lee "_the constructor would have no params and would instantiate the type on its own_". **Oh. Oh! Oh nonono! That might very well be the root of the issues you're encountering!** If you're doing unit tests, you should definitely go with _dependency injection_ instead (which just means "use constructor arguments" in this case). I wrote an answer to another question [here](https://softwareengineering.stackexchange.com/questions/381407/why-should-i-use-dependency-injection/381434#381434) which demonstrates why, _even without unit testing_. That might actually solve the whole thing already. – R. Schmitz Jan 30 '19 at 15:03
  • "we don't anticipate it ever being anything other than the concrete type it will be" If I had a nickel for every time I've heard this... For one, you ALEADY have a need to return something other than the concrete type: the mock object you'd use for testing. – Alexander Jan 30 '19 at 16:05
  • Cheers @Lee. Something else possibly even more important but harder to arrange is a mentor who has done *actual* TDD before. I was doing development driven testing (DDT) for years before being shown and thoroughly explained red-green-refactor TDD, and it was possibly the biggest eye opener in 15 years as a developer. – l0b0 Jan 30 '19 at 18:59
  • @R.Schmitz It IS the problem we're encountering, i.e. I'm trying to persuade the rest of the team to do what you're suggesting. – Lee Jan 31 '19 at 14:24
  • @l0b0 I'd love to find an experienced TDDer but my organisation doesn't have them unfortunately. I work in a huge organisation whose focus is healthcare, not software, which is a cause for many other problems. – Lee Jan 31 '19 at 14:25
  • @Lee, Well, just in case you need more "ammo", here is [a guide to writing testable code](http://misko.hevery.com/code-reviewers-guide/) from some google engineers, in which they also go into detail why [doing real work in the constructor is a flaw](http://misko.hevery.com/code-reviewers-guide/flaw-constructor-does-real-work/) and here is the kinda famous ["DI for 5-year olds"](https://stackoverflow.com/a/1638961/6385703). And now I should probably stop spamming the comments to your question :D – R. Schmitz Jan 31 '19 at 16:35
  • Just to point out the obvious which should be all the response needed to the naysayers. There's a difference between designing for testable code and adding code intended only to be used by unit tests. If you are adding/changing methods or parameters simply to support test cases then that's a code smell. However, if you are adding/changing methods or parameters because it reduces dependencies and makes the code more flexible (which has the side-effect of being more testable) then that's not designing code for the sake of testing; it is simply creating a better design. – Dunk Feb 01 '19 at 20:18
  • Let's go back to the basics of your question a bit, because it can be relevant. I have 2 questions: (a) does your team want to add unit-tests or was this imposed by someone higher in the foodchain? and (b) are the developers in your team actually familiar with unit-tests and their purpose? – Radu Murzea Feb 04 '19 at 08:37
  • a. Testing has been mandated by our manager and he would like to see as much of it automated as possible, so unit testing and (to some extent) integration testing will be expected. We lack many processes so I have put a proposal for BDD/TDD together, which we will have a go at seeing as most of the team seem happy with it. b. Their experience varies but generally I'd say no, at least not beyond the superficial "tests show if we've broken anything". – Lee Feb 04 '19 at 08:46
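
To pull together the stubbing thread from the comments above, here is a sketch of the extend-the-concrete-class approach (all names hypothetical): it avoids declaring an interface, but it only works if the class is not `sealed` and the member is `virtual`, and the controller still needs the constructor seam discussed above.

public class AuthMethod { }

public class AuthMethodFactory
{
    public virtual AuthMethod Create() => new AuthMethod();
}

// Deterministic, non-programmable stand-in built by extending the concrete type.
public class StubAuthMethodFactory : AuthMethodFactory
{
    public override AuthMethod Create() => new AuthMethod(); // return a canned value
}

public class AuthController
{
    private readonly AuthMethodFactory _factory;

    // The seam: without this parameter the controller would new up the
    // factory itself and the stub could never get in.
    public AuthController(AuthMethodFactory factory) => _factory = factory;
}

public class AuthControllerTests
{
    public void Controller_accepts_a_stub()
    {
        var controller = new AuthController(new StubAuthMethodFactory());
        // ...exercise the controller and assert on its behaviour...
    }
}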

15 Answers

203

Reluctance to modify code for the sake of testing shows that a developer hasn't understood the role of tests, and by implication, their own role in the organization.

The software business revolves around delivering a code base that creates business value. We have found, through long and bitter experience, that we cannot create such code bases of nontrivial size without testing. Therefore, test suites are an integral part of the business.

Many coders pay lip service to this principle but subconsciously never accept it. It is easy to understand why this is; the awareness that our own mental capability is not infinite, and is, in fact, surprisingly limited when confronted with the enormous complexity of a modern code base, is unwelcome and easily suppressed or rationalized away. The fact that test code is not delivered to the customer makes it easy to believe that it is a second-class citizen and non-essential compared to the "essential" business code. And the idea of adding testing code to the business code seems doubly offensive to many.

The trouble with justifying this practice has to do with the fact that the entire picture of how value is created in a software business is often only understood by higher-ups in the company hierarchy, but these people don't have the detailed technical understanding of the coding workflow that is required to understand why testing can't be gotten rid of. Therefore they are too often pacified by practitioners who assure them that testing may be a good idea in general, but "we are elite programmers who don't need crutches like that", or that "we don't have time for that right now", etc. etc. The fact that business success is a numbers game and that avoiding technical debt, assuring quality etc. shows its value only in the longer term means that they are often quite sincere in that belief.

Long story short: making code testable is an essential part of the development process, no different than in other fields (many microchips are designed with a substantial proportion of elements only for testing purposes), but it's very easy to overlook the very good reasons for that. Don't fall into that trap.

Kilian Foth
  • I would argue it depends on the kind of change. There's a difference between making code easier to test, and introducing test-specific hooks that should NEVER be used in production. I am personally wary of the latter, because Murphy... – Matthieu M. Jan 29 '19 at 13:16
  • Unit tests often break encapsulation and make the code under test more complex than would otherwise be required (e.g. by introducing additional interface types or adding flags). As always in software engineering, every good practice and every good rule has its share of liabilities. Blindly producing a lot of unit tests *can* have a detrimental effect on business value, not to mention that writing and maintaining the tests already costs time and effort. In my experience, integration tests have a much greater ROI and tend to improve software architectures with fewer compromises. – Christian Hackl Jan 29 '19 at 13:17
  • Integration tests and especially user acceptance tests are very important, but they're much slower. I don't think you can do TDD with just integration tests. Like with most things, a middle road mixing all testing techniques is probably best.
  • @Lee Sure but you need to consider whether having a specific type of tests warrants the increase in code complexity. My personal experience is that unit tests are a great tool up until the point where they require fundamental design changes to accommodate mocking. That’s where I switch to a different type of tests. Writing unit tests at the expense of making the architecture substantially more complex, for the sole purpose of having unit tests, is navel-gazing. – Konrad Rudolph Jan 29 '19 at 15:17
  • @KonradRudolph I fully second that. In general, the answer says that changing the code for the sake of tests is OK because tests are important. They are, but to what extent should we modify the tested code? We aren't necessarily _improving_ the code, we are making it easier to write tests at the expense of readability, maintainability, simplicity, and so on. In some cases it is a justified trade-off, but not always. So I think the answer doesn't fully address the question and maybe even gives some wrong ideas. – Malcolm Jan 29 '19 at 15:56
  • @ChristianHackl why would a unit test break encapsulation? I've found that for the code I've worked on, if there is a perceived need to _add_ extra functionality to enable testing, the actual problem is that the function you are to test needs refactoring, so all the functionality is at the same level of abstraction (it's differences in abstraction level that usually create this "need" for extra code), with lower level code moved to their own (testable) functions. – Baldrickk Jan 29 '19 at 16:29
  • @ChristianHackl Unit tests should never break encapsulation; if you are trying to access private, protected or local variables from a unit test, you are doing it wrong. If you are testing functionality foo, you are only testing if it actually worked, not if local variable x is the square root of the input y in the third iteration of the second loop. If some functionality is private then so be it, you're going to be testing it transitively anyway. If it's really big and private? That's a design flaw, but probably not even possible outside of C and C++ with header/implementation separation. – Krupip Jan 29 '19 at 17:18
  • Agreed on how easy it is to underestimate the importance of unit tests. (One anecdote: I rewrote a 10K-line system in a different language, and it was later put live without my knowledge or any user testing. I put its almost complete success (only one minor issue, which took a week to notice) down to several hundred unit tests.) But as @Malcolm says, it's a trade-off; some code tweaks may well be worthwhile, but not doing major surgery to squeeze the last few % of coverage. – gidds Jan 29 '19 at 18:08
  • @opa “should” vs “often do in practice”. I completely agree with you that a unit test that breaks encapsulation shouldn’t be written. In practice, it’s extremely common to find code that breaks encapsulation to let unit testing be done on its entrails. – Konrad Rudolph Jan 30 '19 at 12:34
  • @MatthieuM.: You're correct that we should never create "only for test" code in the business logic. However, that's not what OP's question is about. OP's question is about refactoring the code _so that it can be tested_ (which happens in the test project). It makes no sense to create a hook that is not used in production, because that inherently means that this hook is not actually part of the business logic and thus irrelevant as to the question of whether the application does what it's supposed to do (which is the question testing aims to answer). – Flater Jan 30 '19 at 12:54
  • @ChristianHackl: Unit tests cannot break encapsulation. Unit testing only cares about the **public input and output** of the object that's being tested. It does not care about the internal workings. If you're in a situation where you want to test smaller chunks than the current object allows (because it hides everything privately), then you've not sufficiently abstracted your code and are dealing with a god class. The desire to want to test smaller chunks proves that these chunks are **separate** parts of the logic with their own value/responsibility. – Flater Jan 30 '19 at 12:56
  • @KonradRudolph: When integration testing instead of unit testing, it only makes sense to do so if the object that needs to be tested is uniquely used by the consumer with which it is tested in the integration test. If it is used by multiple consumers, then the failure of the integration test of consumer A (which uses library Z) does not prove library Z has a fault - it could just as well be that consumer A is doing something wrong (and consumer B's integration tests do actually pass). In essence, you can no longer be sure _which_ part of the integration failed, only that _something_ failed. – Flater Jan 30 '19 at 13:03
  • @KonradRudolph `it’s extremely common to find code that breaks encapsulation to let unit testing be done on its entrails` This is code which was not developed with testing in mind, and when testing was introduced, was not actually refactored. The blame here lies with the developer (or manager) who refuses to refactor the code. The blame does not lie with the concept of unit testing. – Flater Jan 30 '19 at 13:06
  • @Flater: You say unit tests cannot break encapsulation because when you don't access anything internal, then you don't break anything. Well, by that logic, *no code at all* can break encapsulation, because the compiler prevents you from doing so anyway. The point is, of course, that something has been made public that should have been private in the first place (hence the broken encapsulation). This can also happen with the tiniest of classes, not just what you call "god classes". – Christian Hackl Jan 30 '19 at 14:07
  • @Flater: When an integration test fails, it's typically easy to start a more thorough investigation of the problem in order to find out the precise source of the problem, for example by using a debugger or analysing log output. Good integration tests are deterministic and deliver reproducible results (just like unit tests, by the way). – Christian Hackl Jan 30 '19 at 14:16
  • @ChristianHackl: The investigation for the precise problem will generally be aided by unit tests, which are able to localize issues in particular components. That's the point of having unit tests to begin with. The point of automated testing is to prevent having to sniff around until you stumble on the source of the problem. I'm not saying unit tests catch everything (otherwise we'd have no need for other types of tests), I'm saying that integration tests shouldn't be used to also catch things that unit tests are designed to catch. – Flater Jan 30 '19 at 14:35
  • 4
    Why so rarely anyone suggests that writing unit tests is a trade-off? The industry nowadays religiously imposes writing unit tests on every developer. I've been to projects and teams where we spent more time writing unit tests than actual code. Was it worth it? It is hard to tell. We still had some bugs in production, like we would have without unit tests. It was still hard to introduce new features (one could argue it was even harder because you had to readjust unit tests). – kukis Jan 30 '19 at 15:20
  • It's very common for people to think integration tests can serve in place of unit tests. Very common to distrust interfaces and write specifically to an implementation thinking you're simplifying your life. And very common for these projects to get mired in long-running, fragile, high-maintenance integration tests that fail to attain decent coverage or detect bugs. Every step seems perfectly reasonable up to the day you find yourself up to your neck in the tarpit saying "how did i get here?" – Nathan Hughes Jan 30 '19 at 21:20
  • @NathanHughes: I have never experienced anything of the things you mention due to relying on integration tests. Do you have studies, citations or any references to back your claim? I also don't see how integration tests are less interface-friendly than unit tests. If anything, I believe unit tests are *more* dangerous in this regard, simply because they test smaller things by considering all of a module's inner "units" and not just its outermost API level. – Christian Hackl Jan 31 '19 at 05:28
  • @ChristianHackl I have experienced exactly the situation that Hughes describes, once spending a full two months just trying to sort out which project(s) caused the integration tests to fail, and which team(s) should address it. If there'd been effective unit tests in place, I could have just run those tests and said "Oh, that's where our code is failing" or "Well, our code isn't failing, so it must be at another layer" – MrSpudtastic Jan 31 '19 at 18:22
  • @MrSpudtastic: Well, then that shows once again that anecdotes and personal experience are a poor substitute for scientific research and broad studies, for how can it be that your experience is so vastly different to mine? I can just repeat what I said above: *"As always in software engineering, every good practice and every good rule has its share of liabilities."* Part of the reason why I'm so harsh on unit tests are the cargo cult and absolutism surrounding them, which makes it very important to remind people of their disadvantages, much more so than with other types of testing. – Christian Hackl Feb 01 '19 at 07:47
  • @kukis: There's a difference between a tradeoff and an abuse warning. In the end, all that counts is total hours spent on the project. Unit testing will on average lower that total; and the benefits will be more pronounced with larger projects. Unit tests do not prevent all types of bugs. Unit tests simply protect you from _breaking changes to the intended behavior_ (i.e. the business logic). It does not protect against runtime failures, technical limitations, developer misunderstandings, or badly designed business decisions (i.e. not bad code, but a bad analysis) – Flater Feb 01 '19 at 10:17
  • Also: no matter how final you believe your design is, in my experience there's always bound to be a change, and it is almost always so far into the future that one barely recalls why the old code looks the way it does. At that point unit testing really starts to matter, as well as the initial approach. Always use interfaces, so as to code by addition and NOT by modification. – rasmus91 Feb 04 '19 at 05:58
75

It's not as simple as you might think. Let's break it down.

  • Writing unit tests is definitely a good thing.

BUT!

  • Any change to your code can introduce a bug. So changing the code without a good business reason is not a good idea.

  • Your 'very thin' webapi doesn't seem like the greatest case for unit testing.

  • Changing code and tests at the same time is a bad thing.

I would suggest the following approach:

  1. Write integration tests. This should not require any code changes (a sketch follows this list). It will give you your basic test cases and enable you to check that any further code changes you make don't introduce any bugs.

  2. Make sure new code is testable and has unit and integration tests.

  3. Make sure your CI chain runs tests after builds and deployments.
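
For step 1, a sketch of what such a test could look like on ASP.NET Core, assuming the Microsoft.AspNetCore.Mvc.Testing package and xUnit (`Startup` is the application's own startup class; the route is made up):

using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class AuthEndpointTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public AuthEndpointTests(WebApplicationFactory<Startup> factory)
    {
        _factory = factory;
    }

    [Fact]
    public async Task Auth_methods_endpoint_responds_ok()
    {
        var client = _factory.CreateClient(); // spins up an in-memory test server

        var response = await client.GetAsync("/api/auth/methods");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}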

When you have those things set up, only then start thinking about refactoring legacy projects for testability.

Hopefully everyone will have learnt lessons from the process and have a good idea of where testing is most needed, how to structure it, and the value it brings to the business.

EDIT: Since I wrote this answer, the OP has clarified the question to show that they are talking about new code, not modifications to existing code. I perhaps naively thought the "Is unit testing good?" argument was settled some years ago.

It's hard to imagine what code changes would be required by unit tests but would not be general good practice you would want in any case. It would probably be wise to examine the actual objections; possibly it's the style of unit testing which is being objected to.

Ewan
  • I like your point that's essentially about the value of tests. Unit testing the Web API most likely will not provide much value. By the way, this isn't a legacy system, it's a new greenfield project. Should I rephrase the question? – Lee Jan 29 '19 at 12:05
  • I wouldn't change the question now. But yeah, you can't modify code that hasn't been written yet. – Ewan Jan 29 '19 at 12:08
  • I must have phrased it badly. So we're still designing the system and this has come out of those meetings. So it's basically "tailoring the code we're about to write to be testable seems like a code smell"... which, reading it back, seems quite ridiculous. – Lee Jan 29 '19 at 12:10
  • I think the question is fine as it stands, and I cover your scenario briefly with point 2. It does sound more like push-back against unit testing than a legitimate concern, though, given it's a greenfield project. – Ewan Jan 29 '19 at 12:17
  • This is a **much** better answer than the accepted one. The imbalance in votes is dismaying. – Konrad Rudolph Jan 29 '19 at 20:24
  • @Lee A unit test should test a *unit of functionality*, which may or may not correspond to a class. A unit of functionality should be tested at its interface (which may be the API in this case). Testing may highlight design smells and the need to apply some different/more levelisation. Build your systems from small composable pieces, they will be easier to reason about and test. – Wes Toleman Jan 30 '19 at 03:44
  • @KonradRudolph: I guess I missed the point where the OP added that this question was about designing new code, not changing existing code. So there is nothing to break, which makes most of this answer not applicable. – Doc Brown Jan 30 '19 at 12:10
  • @DocBrown That’s a tiny part of this answer (a single bullet point). The rest is relevant regardless of whether it’s for new code or existing one. The accepted answer, by contrast, is quite seriously flawed, as my comment there explains. – Konrad Rudolph Jan 30 '19 at 12:30
  • I strongly disagree with the statement that writing unit tests is always a good thing. Unit tests are good only in some cases. It's silly to use unit tests to test frontend (UI) code; they are made to test business logic. Also, it's good to write unit tests to replace missing compilation checks (e.g. in Javascript). Most frontend-only code should have end-to-end tests exclusively, not unit tests. – Sulthan Jan 31 '19 at 07:32
  • @sulthan in 2019 I think we can still argue for days about whether a given test is unit/integration/e2e/ui/BDD etc. It sounds from the OP that we are still at the 'some passing tests' level of convincing. – Ewan Jan 31 '19 at 13:28
  • @Ewan It's not me who needs to be convinced. If it were just me I'd have started typical unit testing practices by now. The bottom line is: some on the team don't like the idea of designing code in any way to allow usual unit testing practices - i.e. the ability to mock dependencies - which is why strange ideas like using reflection to crowbar in the mocks have been raised by some. – Lee Jan 31 '19 at 14:37
  • Designs can definitely suffer from "Test induced damage". Usually testability improves design: You notice when writing tests that something can't be fetched but must be passed in, making for clearer interfaces and so on. But occasionally you'll stumble across something that requires an uncomfortable design *only* for testing. An example could be a test-only constructor required in your new code due to existing third party code that uses a singleton for example. When that happens: take a step back and make an integration test only, instead of damaging your own design in the name of testability. – Anders Forsgren Feb 01 '19 at 12:15
18

Designing code to be inherently testable is not a code smell; on the contrary, it is the sign of a good design. There are several well-known and widely-used design patterns based on this (e.g., Model-View-Presenter) that offer easy (easier) testing as a big advantage.

So, if you need to write an interface for your concrete class in order to more easily test it, that is a good thing. If you already have the concrete class, most IDEs can extract an interface from it, making the effort required minimal. It is a little bit more work to keep the two in sync, but an interface shouldn't change much anyway, and the benefits from testing may outweigh that extra effort.

On the other hand, as @MatthieuM. mentioned in a comment, if you're adding specific entry points into your code that shouldn't ever be used in production, solely for the sake of testing, that might be a problem.

mmathis
  • That problem can be solved through static code analysis - flag the methods (e.g. must be named `_ForTest`) and check the codebase for calls from non-test code. – Riking Feb 04 '19 at 02:49
13

It is, IMHO, very simple to understand that for creating unit tests, the code to be tested must have certain properties. For example, if the code does not consist of individual units which can be tested in isolation, the term "unit testing" does not even begin to make sense. If the code does not have these properties, it must be changed first; that is pretty obvious.

That said, in theory one can try to write a testable code unit first, applying all the SOLID principles, and then write a test for it afterwards without further modifying the original code. Unfortunately, writing code which is really unit testable is not always dead simple, so it is quite likely that some necessary changes will only be detected when trying to create the tests. This is true even for code which was written with unit testing in mind, and it is definitely more true for code written when "unit testability" was not on the agenda at the beginning.

There is a well-known approach which tries to solve the problem by writing the unit tests first - it is called Test Driven Development (TDD), and it can surely help to make code more unit testable right from the start.

Of course, the reluctance to change code afterwards to make it testable often arises in a situation where the code was manually tested first and/or works fine in production, so changing it could indeed introduce new bugs. The best approach to mitigate this is to create a regression test suite first (which can often be implemented with only very minimal changes to the code base), along with other accompanying measures like code reviews or new manual test sessions. That should give you enough confidence that redesigning some internals does not break anything important.

Doc Brown
  • Interesting that you mention TDD. We're attempting to bring in BDD/TDD, which also has met some resistance - namely what "the minimum code to pass" really means. – Lee Jan 29 '19 at 12:08
  • @Lee: bringing changes into an organization always causes some resistance, and it always needs some time to adopt new things; that's not new wisdom. This is a people problem. – Doc Brown Jan 29 '19 at 12:12
  • Absolutely. I just wish we'd been given more time! – Lee Jan 29 '19 at 12:13
  • It is frequently a matter of _showing_ people that doing it this way will save them time (and hopefully quickly too). Why do something that will not benefit you? – Thorbjørn Ravn Andersen Feb 02 '19 at 22:44
  • @ThorbjørnRavnAndersen: The team may likewise show to the OP that their approach will save time. Who knows? But I wonder if we aren't actually facing problems of a less technical nature here; the OP keeps coming here to tell us what his team does wrong (in his opinion), as if he was trying to find allies for his cause. It might be more beneficial to actually discuss the project *together* with the team, not with strangers on Stack Exchange. – Christian Hackl Feb 05 '19 at 13:26
  • @ChristianHackl That is not unlikely. Underlying problem here is that OP wants the team to do something different and they don't see the value - my guess is that OP cannot show them yet, so it is a case of "I think you should do it this way" and not "Let me show you how to do it and how it benefits you". So what you name "Allies" would perhaps be better named as "Senior developers sharing experience" to leverage the desired change. I do not think discussing the project in the team at the current time would bring a different outcome, than "This is silly, lets get back to work". – Thorbjørn Ravn Andersen Feb 05 '19 at 14:15
12

I take issue with the (unsubstantiated) assertion you make:

to unit test the Web API service we will need to mock this factory

That's not necessarily true. There are lots of ways to write tests, and there are ways to write unit tests that don't involve mocks. More importantly, there are other kinds of tests, such as functional or integration tests. Many times it is possible to find a "test seam" at an "interface" that is not an OOP programming language interface.

Some questions to help you find an alternative test seam, which might be more natural:

  • Will I ever want to write a thin Web API over a different API?
  • Can I reduce code duplication between the Web API and the underlying API? Can one be generated in terms of the other?
  • Can I treat the whole Web API and underlying API as a single "black box" unit and meaningfully make assertions about how the whole thing behaves?
  • If the Web API had to be replaced with a new implementation in the future, how would we go about doing that?
  • If the Web API was replaced with a new implementation in the future, would clients of the Web API be able to notice? If so, how?

Another unsubstantiated assertion you make is about DI:

we either design the Web API controller class to accept DI (through its constructor or setter), which means we're designing part of the controller just to allow DI and implementing an interface we don't otherwise need, or we use a third party framework like Ninject to avoid having to design the controller in this way, but we'll still have to create an interface.

Dependency injection does not necessarily mean creating a new interface. For example, in the case of an authentication token: can you simply create a real authentication token programmatically? Then the test can create such tokens and inject them. Does the process for validating a token depend on a cryptographic secret of some kind? I hope you haven't hardcoded a secret -- I would expect you can read it from storage somehow, and in that case you can simply use a different (well-known) secret in your test cases.

This is not to say that you should never create a new interface. But don't get fixated on there only being one way to write a test, or one way to fake a behavior. If you think outside the box you can usually find a solution that will require a minimum of contortions of your code and yet still give you the effect you want.
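
As a sketch of the "real tokens, test secret" idea (all names hypothetical, xUnit assumed): the secret is injected from configuration, so the test simply supplies a well-known one and works with genuine tokens, no mock required.

using Xunit;

public class TokenService
{
    private readonly string _secret;

    // In production the secret is read from configuration/storage.
    public TokenService(string secret) => _secret = secret;

    // Placeholder "signing" logic standing in for the real cryptography.
    public string Issue(string user) => user + ":" + _secret;
    public bool Validate(string token) => token.EndsWith(":" + _secret);
}

public class TokenServiceTests
{
    [Fact]
    public void Issued_tokens_validate()
    {
        var service = new TokenService("test-only-secret");

        var token = service.Issue("alice");

        Assert.True(service.Validate(token));
    }
}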

Daniel Pryden
  • Point taken about the assertions regarding interfaces, but even if we didn't use them we'd still have to inject objects somehow, which is the concern from the rest of the team. I.e. some on the team would be happy with a parameterless constructor instantiating the concrete implementation and leaving it at that. In fact, one member floated the idea of using reflection to inject mocks so we don't have to design code to accept them. Which is a reeky code smell IMO. – Lee Jan 31 '19 at 14:31
10

You are in luck as this is a new project. I've found that Test Driven Design works very well for writing good code (which is why we do it in the first place).

By figuring out up front how to invoke a given piece of code with realistic input data, and then getting realistic output data which you can check is as intended, you do the API design very early in the process and have a good chance of arriving at a useful design, because you are not hindered by existing code that has to be rewritten to accommodate it. The design is also easier for your peers to understand, so you can have good discussions, again early in the process.

Note that "useful" in the above sentence means not only that the resulting methods are easy to invoke, but also that you tend to get clean interfaces that are easy to rig up in integration tests, and to write mockups for.

Consider it. Especially with peer review. In my experience the investment of time and effort will very quickly be returned.
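
A sketch of what the test-first step looks like in practice (xUnit assumed, names made up): the test is written first and pins down how the code is invoked and what comes back; the implementation follows with just enough code to pass.

using Xunit;

// Written first: this test is the API design.
public class PasswordPolicyTests
{
    [Fact]
    public void Rejects_passwords_shorter_than_the_minimum_length()
    {
        var policy = new PasswordPolicy(minimumLength: 8);

        Assert.False(policy.IsValid("short"));
    }
}

// Written second: the minimum code to make the test pass.
public class PasswordPolicy
{
    private readonly int _minimumLength;

    public PasswordPolicy(int minimumLength) => _minimumLength = minimumLength;

    public bool IsValid(string candidate) => candidate.Length >= _minimumLength;
}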

  • We have a problem with TDD as well, namely, what constitutes "minimum code to pass". I demonstrated this process to the team and they took exception to not just writing what we had already designed - which I can understand. The "minimum" doesn't seem to be defined. If we write a test and have clear plans and designs, why not write that to pass the test? – Lee Jan 30 '19 at 08:20
  • @Lee "minimum code to pass"... well, this might sound a bit dumb, but it's literally what it says. E.g. if you have a test `UserCanChangeTheirPassword`, then in the test you call the (not yet existing) function to change the password and then you assert that the password is indeed changed. Then you write the function, until you can run the test and it does neither throw exceptions nor have a wrong assert. If at that point you have a reason to add any code, then that reason goes into another test, e.g. `UserCantChangePasswordToEmptyString`. – R. Schmitz Jan 30 '19 at 11:10
  • @Lee Ultimately, your tests will end up being the documentation of what your code does, except it's documentation that checks if it is fulfilled itself, instead of just being ink on paper. Also compare with [this question](https://softwareengineering.stackexchange.com/questions/21463/writing-the-minimum-code-to-pass-a-unit-test-without-cheating) - A method `CalculateFactorial` that just returns 120 and the test passes. That _is_ the minimum. It's also obviously not what was intended, but that just means you need another test to express what _was_ intended. – R. Schmitz Jan 30 '19 at 11:21
  • @Lee This _is_ the design as such. The final product may be much more complex than what is necessary to have the test pass (you don't need an Oracle database connection to check 2 + 2) and then you identify your dependencies and can recursively make clean interfaces for those too. The understanding that at any given point minimize the amount of knowledge the code needs to have is crucial in making clean API's across the application. – Thorbjørn Ravn Andersen Jan 30 '19 at 13:31
  • I understand these premises, but it does seem odd to write the bare minimum when you already have a very good idea of what lies a bit further down the line, so why not add the test and then write the more mature code to pass it? The best answer I've seen so far about why you should write the absolute minimum is that otherwise you can't be sure whether your tests can actually fail. Although I like Robert Harvey's not-so-purist view of TDD. – Lee Jan 31 '19 at 15:02
  • @Lee Small steps. The bare minimum may be more than you think when the code rises above trivial. Also the design you do when implementing the whole thing at once may again be less optimal because you make assumptions on how it should be made without having written the tests yet that demonstrate it. Again remember, the code should fail at first. – Thorbjørn Ravn Andersen Feb 01 '19 at 00:22
  • Also, regression tests are very important. Are they in scope for the team? – Thorbjørn Ravn Andersen Feb 01 '19 at 09:55
8

If you need to modify the code to make it testable, the code as it stands is the smell.

From personal experience, if my code is difficult to write tests for, it's bad code. It's not bad code because it doesn't run or work as designed, it's bad because I can't quickly understand why it is working. If I encounter a bug, I know it's going to be a long painful job to fix it. The code is also difficult / impossible to reuse.

Good (Clean) code breaks down tasks into smaller sections that are easily understood at a glance (or at least a good look). Testing these smaller sections is easy. I can also write tests that only test a chunk of the codebase with similar ease if I'm fairly confident about the subsections (reuse also helps here as it has already been tested).

Keep the code easy to test, easy to refactor, and easy to reuse from the start and you won't be killing yourself whenever you need to make changes.

I'm typing this while completely rebuilding a project that should have been a throwaway prototype into cleaner code. It's much better to get it right from the start and refactor bad code as soon as possible, rather than stare at a screen for hours on end, afraid to touch anything for fear of breaking something that partially works.

David
  • "Throwaway prototype" - every single project ever starts life as one of those ... best to think of things as never being that. typing this as I'm .. guess what? ... refactoring a throwaway prototype that turned out not to be ;) – Algy Taylor Jan 30 '19 at 11:49
  • If you want to be sure that a throw away prototype will be thrown away, write it in a prototype language that will never be allowed in production. Clojure and Python are good choices. – Thorbjørn Ravn Andersen Jan 30 '19 at 13:37
  • @ThorbjørnRavnAndersen That made me chuckle. Was that meant to be a dig at those languages? :) – Lee Jan 31 '19 at 15:04
  • @Lee. No, just examples of languages that _might_ not be acceptable for production - typically because nobody in the organization can maintain them, because they are unfamiliar with them and their learning curves are steep. If those are acceptable, choose another that isn't. – Thorbjørn Ravn Andersen Feb 01 '19 at 06:10
4

I would argue that writing code that cannot be unit tested is a code smell. In general, if your code cannot be unit tested, then it is not modular, which makes it difficult to understand, maintain, or enhance. If the code is glue code that really only makes sense in terms of integration testing, you can perhaps substitute integration testing for unit testing; but even then, when the integration fails you are going to have to isolate the problem, and unit testing is a great way to do that.

You say

We plan on creating a factory that will return an authentication method type. We have no need for it to inherit an interface as we don't anticipate it ever being anything other than the concrete type it will be. However, to unit test the Web API service we will need to mock this factory.

I do not really follow this. The reason to have a factory that creates something is to allow you to change factories or change what the factory creates easily, so other parts of the code do not need to change. If your authentication method is never going to change, then the factory is useless code bloat. However, if you want to have a different authentication method in test than in production, having a factory that returns a different authentication method in test than in production is a great solution.

You do not need DI or Mocks for this. You just need your factory to support the different authentication types and for it to be configurable somehow, such as from a configuration file or environment variable.
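
For illustration, a sketch of such a factory (hypothetical names), switching on an environment variable; a configuration file would work the same way:

using System;

public abstract class AuthMethod { }
public class ProductionAuthMethod : AuthMethod { }
public class TestAuthMethod : AuthMethod { }

public static class AuthMethodFactory
{
    public static AuthMethod Create()
    {
        // e.g. the test environment sets AUTH_METHOD=Test; production leaves it unset.
        var kind = Environment.GetEnvironmentVariable("AUTH_METHOD");

        return kind == "Test"
            ? (AuthMethod)new TestAuthMethod()
            : new ProductionAuthMethod();
    }
}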

Old Pro
2

In every engineering discipline I can think of, there is only one way to achieve decent or higher levels of quality:

To account for inspection/testing in the design.

This holds true in construction, chip design, software development, and manufacturing. Now, this doesn't mean that testing is the pillar that every design needs to be built around, not at all. But with every design decision, the designers must be clear about the impact on testing costs and make a conscious decision about the trade-off.

In some cases, manual or automated (e.g. Selenium) testing will be more convenient than unit tests, while also providing acceptable test coverage on its own. In rare cases, throwing something out there that's almost entirely untested can also be acceptable. But these have to be conscious, case-by-case decisions. Calling a design that accounts for testing a "code smell" indicates a serious lack of experience.

Peter
1

I've found that unit testing (and other types of automated testing) tends to reduce code smells, and I can't think of a single example where it introduces them. Unit tests usually force you to write better code. If you can't use a method easily under test, why should it be any easier in your code?

Well-written unit tests show you how the code is intended to be used. They are a form of executable documentation. I've seen hideously written, overly long unit tests that simply couldn't be understood. Don't write those! If you need to write long tests to set up your classes, your classes need refactoring.

Unit tests will highlight where some of your code smells are. I would advise reading Michael C. Feathers' Working Effectively with Legacy Code. Even though your project is new, if it doesn't already have any (or many) unit tests, you might need some non-obvious techniques to get your code to test nicely.

CJ Dennis
1

In a nutshell:

Testable code is (usually) maintainable code - or rather, code that is hard to test is usually hard to maintain. Designing code that is not testable is akin to designing a machine that is not repairable - pity the poor shmuck who will be assigned to repair it eventually (it might be you).

One example is that we plan on creating a factory that will return an authentication method type. We have no need for it to inherit an interface as we don't anticipate it ever being anything other than the concrete type it will be.

You know that you will need five different authentication method types in three years' time, now that you've said that, right? Requirements change, and while you should avoid overengineering your design, having a testable design means that your design has (just) enough seams to be altered without (too much) pain - and that the module tests will provide you with automated means to see that your changes don't break anything.

CharonX
1

Designing around dependency injection isn't a code smell - it's best practice. Using DI isn't just for testability. Building your components around DI aids modularity and reusability, and more easily allows major components to be swapped out (such as a database interface layer). While it adds a degree of complexity, done right it allows for better separation of layers and isolation of functionality, which makes the complexity easier to manage and navigate. This makes it easier to properly validate the behavior of each component, reducing bugs, and can also make it easier to track down bugs.
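
A sketch of that swap (hypothetical names): the consumer is written once against the interface, and the composition root decides which storage layer it gets.

public interface IOrderStore
{
    void Save(string order);
}

public class SqlOrderStore : IOrderStore
{
    public void Save(string order) { /* write to the real database */ }
}

public class InMemoryOrderStore : IOrderStore
{
    public void Save(string order) { /* keep in memory, e.g. for tests */ }
}

public class OrderService
{
    private readonly IOrderStore _store;

    public OrderService(IOrderStore store) => _store = store;

    public void Place(string order) => _store.Save(order);
}

// Composition root: new OrderService(new SqlOrderStore()) in production,
// new OrderService(new InMemoryOrderStore()) in tests.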

Zenilogix
  • "Done right" is the problem. I have to maintain two projects where DI was done wrong (although they aimed to do it "right"). This makes the code plain horrible and much worse than the legacy projects without DI and unit testing. Getting DI right isn't easy. – Jan Feb 04 '19 at 12:21
  • @Jan that's interesting. How did they do it wrong? – Lee Feb 10 '19 at 16:07
  • @Lee One project is a service that needs a fast start time but is horribly slow on start because all the class initialization is done upfront by the DI framework (Castle Windsor in C#). Another problem that I see in these projects is mixing up DI with the creation of objects with "new", sidestepping the DI. That makes testing hard again and led to some nasty race conditions. – Jan Feb 11 '19 at 09:09
1

This essentially means we either design the Web API controller class to accept DI (through its constructor or setter), which means we're designing part of the controller just to allow DI and implementing an interface we don't otherwise need, or we use a third party framework like Ninject to avoid having to design the controller in this way, but we'll still have to create an interface.

Let's look at the difference between a testable:

public class MyController : Controller
{
    private readonly IMyDependency _thing;

    public MyController(IMyDependency thing)
    {
        _thing = thing;
    }
}

and non-testable controller:

public class MyController : Controller
{
}

The former option has literally 5 extra lines of code, two of which can be autogenerated by Visual Studio. Once you've set up your dependency injection framework to substitute a concrete type for IMyDependency at runtime - which, for any decent DI framework, is another single line of code - everything Just Works, except now you can mock and thus test your controller to your heart's content.

6 extra lines of code to allow testability... and your colleagues are arguing that's "too much work"? That argument doesn't fly with me, and it shouldn't fly with you.

And you don't have to create and implement an interface for testing: Moq, for example, allows you to simulate the behaviour of a concrete type for unit-testing purposes. Of course, that won't be of much use to you if you can't inject those types into the classes you're testing.
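
For instance, a sketch of Moq faking a concrete class (hypothetical names): no interface is involved, but the member being faked must be virtual, since Moq subclasses the type at runtime.

using Moq;
using Xunit;

public class MyDependency
{
    // Must be virtual so Moq's generated subclass can override it.
    public virtual string GetGreeting() => "hello from production";
}

public class MyDependencyTests
{
    [Fact]
    public void A_concrete_class_can_be_faked()
    {
        var mock = new Mock<MyDependency>();
        mock.Setup(d => d.GetGreeting()).Returns("hello from the test");

        // mock.Object is a MyDependency and can be injected anywhere one is accepted.
        Assert.Equal("hello from the test", mock.Object.GetGreeting());
    }
}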

Dependency injection is one of those things that once you understand it, you wonder "how did I work without this?". It's simple, it's effective, and it just Makes Sense. Please, don't allow your colleagues' lack of understanding of new things to get in the way of making your project testable.

Ian Kemp
  • What you are so quick to dismiss as *"lack of understanding of new things"* may turn out to be a good understanding of old things. Dependency injection is certainly not new. The idea, and probably the earliest implementations, are decades old. And yes, I believe your answer is an example of code becoming more complicated due to unit testing, and possibly an example of unit tests breaking encapsulation (because who says the class has a public constructor in the first place?). I've often removed dependency injection from codebases I had inherited from someone else, because of the tradeoffs. – Christian Hackl Feb 04 '19 at 08:50
  • Controllers *always* have a public constructor, implicit or not, because MVC requires it. "Complicated" - maybe, if you don't understand how constructors work. Encapsulation - yes in some cases, but the DI vs encapsulation debate is an ongoing, highly subjective one that won't help here, and in particular for most applications, DI is going to serve you better than encapsulation IMO. – Ian Kemp Feb 04 '19 at 09:37
  • Regarding public constructors: indeed, that's a particularity of the framework being used. I was thinking about the more general case of an ordinary class which is not instantiated by a framework. Why do you believe that seeing additional method parameters as added complexity equals a lack of understanding about how constructors work? However, I appreciate that you acknowledge the existence of a tradeoff between DI and encapsulation. – Christian Hackl Feb 04 '19 at 12:07
0

When I write unit tests, I begin to think of what could go wrong inside my code. It helps me to improve code design and apply the single responsibility principle (SRP). Also, when I come back to modify the same code a few months later, it helps me to confirm that existing functionality isn't broken.

There is a trend to use pure functions as much as you can (serverless apps). Unit testing helps me to isolate state and write pure functions.
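
As a small sketch of why pure functions are so easy to test (hypothetical example, xUnit assumed): there is no state to arrange, so the test is a plain input/output check.

using Xunit;

public static class PriceMath
{
    // Pure: the output depends only on the inputs, no hidden state.
    public static decimal ApplyDiscount(decimal price, decimal rate)
        => price - price * rate;
}

public class PriceMathTests
{
    [Fact]
    public void Applies_a_ten_percent_discount()
    {
        Assert.Equal(90m, PriceMath.ApplyDiscount(100m, 0.10m));
    }
}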

Specifically, we will have a Web API service that will be very thin. Its main responsibility will be marshalling web requests/responses and calling an underlying API that contains the business logic.

Write unit tests for the underlying API first; then, if you have sufficient development time, write tests for the thin Web API service as well.

TL;DR: unit testing helps improve the quality of code and makes future changes to it less risky. It also improves the readability of the code. Use tests, instead of comments, to make your point.

Ashutosh
0

The bottom line, and what should be your argument with the reluctant lot, is that there is no conflict. The big mistake seems to have been that someone pitched the idea of "designing for testing" to people who hate testing. They should have just kept quiet, or worded it differently, like "let's take the time to do this right".

The idea that "you have to implement an interface" to make something testable is wrong. The interface is already implemented, it just isn't declared in the class declaration yet. It is a matter of recognizing existing public methods, copy their signatures to an interface and declaring that interface in the class's declaration. No programming, no changes to existing logic.

Apparently some people have a different idea about this. I suggest you try to fix this first.

Martin Maat