162

Dependency injection (DI) is a well-known and fashionable pattern. Most engineers know its advantages, such as:

  • Making isolation in unit testing possible/easy
  • Explicitly defining dependencies of a class
  • Facilitating good design (single responsibility principle (SRP) for example)
  • Enabling switching implementations quickly (DbLogger instead of ConsoleLogger for example)
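As a sketch of that last point (the Logger/OrderService names here are hypothetical, not from any particular framework):

```java
import java.util.ArrayList;
import java.util.List;

// The dependency is explicit and swappable.
interface Logger {
    void log(String message);
}

class ConsoleLogger implements Logger {
    public void log(String message) { System.out.println(message); }
}

// Stand-in for a DbLogger: records messages in memory instead of a database.
class ListLogger implements Logger {
    final List<String> messages = new ArrayList<>();
    public void log(String message) { messages.add(message); }
}

class OrderService {
    private final Logger logger;

    OrderService(Logger logger) { this.logger = logger; }  // injected dependency

    void placeOrder(String id) { logger.log("Order placed: " + id); }
}

class LoggerDemo {
    public static void main(String[] args) {
        // Switching implementations is a one-line change at the call site;
        // OrderService itself never changes.
        OrderService service = new OrderService(new ConsoleLogger());
        service.placeOrder("A-1");
    }
}
```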

I reckon there's industry-wide consensus that DI is a good, useful pattern, and there isn't much criticism of it at the moment. The disadvantages mentioned in the community are usually minor. Some of them:

  • Increased number of classes
  • Creation of unnecessary interfaces

I'm currently discussing architecture design with a colleague. He's quite conservative, but open-minded. He likes to question things, which I consider good, because many people in IT just copy the newest trend, repeat its advantages, and in general don't think too much - don't analyse too deeply.

The things I'd like to ask are:

  • Should we use dependency injection when we have just one implementation?
  • Should we ban creating new objects directly, except for language/framework ones?
  • Is injecting a single implementation a bad idea (say we have just one implementation, so we don't want to create an "empty" interface) if we don't plan to unit test a particular class?
Peter Mortensen
Arkadiusz Kałkus
  • 36
    Are you really asking about dependency injection as a pattern, or are you asking about using DI frameworks? These are really distinct things; you should clarify which part of the problem you are interested in, or explicitly ask about both. – Frax May 29 '18 at 12:26
  • 13
    @Frax about pattern, not frameworks – Arkadiusz Kałkus May 29 '18 at 14:03
  • 10
    You're confusing dependency inversion with dependency injection. The former is a design principle. The latter is a technique (usually implemented with an existing tool) for constructing hierarchies of objects. – jpmc26 May 29 '18 at 14:16
  • 1
    @jpmc26 Well, I said pattern, not principle. I think it's reasonable to call DI a pattern. Anyway I know that DI != IoC/DInv, but thanks to mentioning the important issue. – Arkadiusz Kałkus May 29 '18 at 14:25
  • Possible duplicate of [When is it not appropriate to use the dependency injection pattern?](https://softwareengineering.stackexchange.com/questions/135971/when-is-it-not-appropriate-to-use-the-dependency-injection-pattern) – Honey May 29 '18 at 14:34
  • 1
    @Honey I'm asking about disadvantages, not scenarios when not to use. You can use something even if it has disadvantages as long as they are minor compared to advantages. But understanding them is valuable to become better engineer. – Arkadiusz Kałkus May 29 '18 at 14:43
  • 2
    CQRS doesn't have anything to do with the repository pattern, other than they are used in the same place in the architecture. It sounds like you're inexperienced. Folks who lack experience tend to work with patterns too early, before they are ready. The remedy is not to learn more patterns (especially from blog posts), it is to get better at software development overall. Wax on, wax off. – Robert Harvey May 29 '18 at 14:59
  • 1
    Your list of "pros" is incorrect. There is nothing "good" about adding dependency injection where it is not needed. – Frank Hileman May 29 '18 at 22:41
  • I agree with jpmc26 - everything you list as an advantage (and the disadvantages, too!) is actually an advantage of dependency inversion, not injection. It's just that injection often forces you to use dependency inversion as well. – Logan Pickup May 30 '18 at 06:29
  • 7
    I often write tests using the real database and no mock objects at all. Works really well in many cases. And then you don't need interfaces most of the time. If you have a `UserService` that class is just a holder for logic. It gets injected a database connection and tests run inside of a transaction that is rolled back. Many would call this bad practice but I found that this works extremely well. Don't need to contort your code just for testing and you get the bug finding power of integration tests. – usr May 30 '18 at 12:09
  • I can hardly remember my life before DI, but it involved a lot of "search for new MyClass() and replace every occurrence with new MyClass(x,y)" until I got tired of trying to figure out which `x` and which `y` to pass from points in the source code that didn't care about those things. – Reactgular May 30 '18 at 18:35
  • 3
    The DI is almost always good. The bad thing with it is a lot of people _think_ they know DI but all they know is how to use some weird framework without even being sure of what they are doing. DI nowadays suffers a lot from Cargo Cult Programming. – T. Sar Jun 01 '18 at 13:50
  • 1
    `Criticism ... of dependency injection` Sometimes when I have all of these small classes, I have trouble seeing the bigger picture. That is, I have trouble seeing the forest for the trees. I have to step back and ask myself, "What is it I actually want to accomplish?" I wonder if other devs experience this. – user2023861 Jun 12 '18 at 15:38

8 Answers

202

First, I would like to separate the design approach from the concept of frameworks. Dependency injection at its simplest and most fundamental level is simply:

A parent object provides all the dependencies required to the child object.

That's it. Note that nothing in that requires interfaces, frameworks, or any particular style of injection. To be fair, I first learned about this pattern 20 years ago. It is not new.

Since more than a couple of people have been confused by the terms parent and child, in the context of dependency injection:

  • The parent is the object that instantiates and configures the child object it uses
  • The child is the component that is designed to be passively instantiated. I.e., it is designed to use whatever dependencies are provided by the parent, and does not instantiate its own dependencies.

Dependency injection is a pattern for object composition.
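In code, that definition needs nothing more than a constructor parameter. A minimal sketch (Car/Engine are made-up classes):

```java
// No interfaces, no framework: the parent simply hands the child its dependency.
class Engine {
    private final int horsepower;
    Engine(int horsepower) { this.horsepower = horsepower; }
    int horsepower() { return horsepower; }
}

class Car {
    private final Engine engine;  // provided from outside, never new'd in here
    Car(Engine engine) { this.engine = engine; }
    int power() { return engine.horsepower(); }
}

class CompositionDemo {
    public static void main(String[] args) {
        // The parent (here, main) instantiates and configures the child objects.
        Car car = new Car(new Engine(120));
        System.out.println(car.power());
    }
}
```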

Why interfaces?

Interfaces are a contract. They exist to limit how tightly coupled two objects can be. Not every dependency needs an interface, but they help with writing modular code.

When you add in the concept of unit testing, you may have two conceptual implementations for any given interface: the real object you want to use in your application, and the mocked or stubbed object you use for testing code that depends on the object. That alone can be justification enough for the interface.
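For example, a single Clock interface might have exactly those two implementations: the real one for the application and a fixed one for tests (hypothetical names):

```java
import java.time.LocalDate;

interface Clock {
    LocalDate today();
}

// The real implementation used by the application.
class SystemClock implements Clock {
    public LocalDate today() { return LocalDate.now(); }
}

// The hand-rolled stub used only in tests.
class FixedClock implements Clock {
    private final LocalDate date;
    FixedClock(LocalDate date) { this.date = date; }
    public LocalDate today() { return date; }
}

class InvoiceService {
    private final Clock clock;
    InvoiceService(Clock clock) { this.clock = clock; }

    boolean isOverdue(LocalDate dueDate) {
        return clock.today().isAfter(dueDate);
    }
}

class ClockDemo {
    public static void main(String[] args) {
        // Injecting the fixed clock makes the test deterministic.
        InvoiceService service = new InvoiceService(new FixedClock(LocalDate.of(2024, 6, 1)));
        System.out.println(service.isOverdue(LocalDate.of(2024, 5, 1)));
    }
}
```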

Why frameworks?

Manually initializing and providing dependencies to child objects can be daunting when there are a large number of them. Frameworks provide the following benefits:

  • Autowiring dependencies to components
  • Configuring the components with settings of some sort
  • Automating the boilerplate code so you don't have to see it written in multiple locations

They also have the following disadvantages:

  • The parent object is a "container", and not anything in your code
  • It makes testing more complicated if you can't provide the dependencies directly in your test code
  • It can slow down initialization as it resolves all the dependencies using reflection and many other tricks
  • Runtime debugging can be more difficult, particularly if the container injects a proxy between the interface and the actual component that implements the interface (the aspect-oriented programming built into Spring comes to mind). The container is a black box, and containers aren't always built with any concept of facilitating the debugging process.

All that said, there are trade-offs. For small projects where there aren't a lot of moving parts, there's little reason to use a DI framework. However, for more complicated projects where certain components are already made for you, the framework can be justified.
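For the small-project case, the alternative to a framework is simply a hand-written composition root: one place near main() that wires everything up. A sketch with made-up classes:

```java
// "Pure DI": the entire object graph is wired by hand in one place.
// No container, no reflection; everything is visible and debuggable.
interface Repository {
    String find(int id);
}

class InMemoryRepository implements Repository {
    public String find(int id) { return "record-" + id; }
}

class Service {
    private final Repository repo;
    Service(Repository repo) { this.repo = repo; }
    String describe(int id) { return "Found " + repo.find(id); }
}

class App {
    public static void main(String[] args) {
        // The composition root: the only place that knows concrete types.
        Repository repo = new InMemoryRepository();
        Service service = new Service(repo);
        System.out.println(service.describe(7));
    }
}
```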

What about [random article on the Internet]?

What about it? Many times people can get overzealous and add a bunch of restrictions and berate you if you aren't doing things the "one true way". There isn't one true way. See if you can extract anything useful from the article and ignore the stuff you don't agree with.

In short, think for yourself and try things out.

Working with "old heads"

Learn as much as you can. What you will find with a lot of developers that are working into their 70s is that they have learned not to be dogmatic about a lot of things. They have methods that they have worked with for decades that produce correct results.

I've had the privilege of working with a few of these, and they can provide some brutally honest feedback that makes a lot of sense. And where they see value, they add those tools to their repertoire.

Berin Loritsch
  • Very well said, but I disagree with your second disadvantage. I cannot imagine a situation where an IOC container makes it more difficult to mock up objects directly in your test code. Your tests can simply not use the framework. Can you give even a theoretical example of how this would ever make tests more complicated? – JounceCracklePop May 29 '18 at 20:48
  • 8
    @CarlLeth, I've worked with a number of frameworks from Spring to .net variants. Spring will let you inject implementations right into private fields using some reflection/classloader black magic. The only way to test components built like that is to use the container. Spring does have JUnit runners to configure the test environment, but it is more complicated than setting things up yourself. So yes, I just gave a _practical_ example. – Berin Loritsch May 29 '18 at 21:00
  • The answer might benefit from more clearly saying what is meant by the term "dependencies". Anyone who isn't clear on what that means will have trouble understanding anything else. – supercat May 29 '18 at 22:16
  • 39
    There's one more disadvantage that I find to be a hurdle with DI through frameworks when I'm wearing my troubleshooter/maintainer hat: the spooky action at a distance they provide makes offline debugging harder. In the worst case, I have to run the code to see how dependencies are initialized and passed in. You mention this in the context of "testing", but it's actually much worse if you're just starting off looking at the source, never mind trying to get it to run (which may involve a ton of setup). Impeding my ability to tell what code does by just glancing at it is a Bad Thing. – Jeroen Mostert May 30 '18 at 11:17
  • @JeroenMostert, that's a good point. I'll add it to my disadvantages in the answer. I've run into that myself a number of times. – Berin Loritsch May 30 '18 at 14:29
  • I'm another person confused by the parent and child thing. When I see *parent object* and *child object* I think of an inheritance relationship, with the *parent object* being an instance of the *parent class* and the *child object* being an instance of the *child class*. That is how many other people seem to be interpreting it. This answer could do with a very explicit statement than such is not the case, otherwise many people are going to leave this page having misinterpreted your quote, through no fault of their own. – Pharap May 30 '18 at 15:06
  • 1
    Interfaces are not contracts, they are simply API's. Contracts imply semantics. This answer is using language specific terminology and Java/C# specific conventions. – Frank Hileman May 30 '18 at 19:54
  • 2
    @BerinLoritsch The main point of your own answer is that the DI principle != any given DI framework. The fact that Spring can do awful, unforgivable things is a disadvantage of Spring, not of DI frameworks in general. A good DI framework helps you follow the DI principle without nasty tricks. – JounceCracklePop May 31 '18 at 06:32
  • 1
    @CarlLeth: all DI frameworks are designed to remove or automate some things the programmer does not wish to spell out, they just vary in how. To the best of my knowledge, they all remove the ability to know how (or *if*) class A and B interact by just looking at A and B -- at the very least, you also need to look at the DI setup/configuration/conventions. Not a problem for the programmer (it's exactly what they want, actually), but a potential problem for a maintainer/debugger (possibly the same programmer, later). This is a trade off you make even if your DI framework is "perfect". – Jeroen Mostert May 31 '18 at 06:57
  • @JeroenMostert I think the more important thing to take from this is that "dependency inversion" isn't the same thing as "dependency injection". I think when Carl talks about "DI principle" he means "dependency inversion principle", which doesn't require dependency injection at all. My favorite so far is the composition root approach, which makes all the dependencies very explicit in one place (the composition root), at the expense of some run-time flexibility (it doesn't make it *impossible*, it just removes some of the simplicity of "startup-time composition"). – Luaan Jun 05 '18 at 06:51
  • You have the wrong definition of DI when the term was coined it was as a special case of IoC. By definition this involves giving up an aspect of control to a framework such as an IoC container. This way the entirety of your own code is decoupled from any concrete dependencies. DI absolutely does require a framework by the classical definition of the term. – Simon Wood Mar 16 '19 at 12:40
  • Yes, use DI only when the class has more than one responsibilities. – YantingChen Oct 24 '20 at 02:47
  • Disagree. Dependency inversion eliminates the need for child objects... – ahnbizcad Nov 06 '20 at 22:07
  • 1
    @ahnbizcad, A child object is still a child object whether you instantiate it directly or you use a framework to do it. Unless you have a different definition of a child object than I do. – Berin Loritsch Mar 17 '21 at 10:45
114

Dependency injection is, like most patterns, a solution to problems. So start by asking if you even have the problem in the first place. If not, then using the pattern most likely will make the code worse.

Consider first if you can reduce or eliminate dependencies. All other things being equal, we want each component in a system to have as few dependencies as possible. And if the dependencies are gone, the question of injecting or not becomes moot!

Consider a module which downloads some data from an external service, parses it, performs some complex analysis, and writes the result to a file.

Now, if the dependency on the external service is hardcoded, then it will be really difficult to unit test the internal processing of this module. So you might decide to inject the external service and the file system as interface dependencies, which will allow you to inject mocks instead, which in turn makes unit testing the internal logic possible.

But a much better solution is simply to separate the analysis from the input/output. If the analysis is extracted to a module without side effects it will be much easier to test. Note that mocking is a code-smell - it is not always avoidable, but in general, it is better if you can test without relying on mocking. So by eliminating the dependencies, you avoid the problems which DI is supposed to alleviate. Note that such a design also adheres much better to SRP.
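That separation can be sketched like this; the analysis is reduced to a trivial average for illustration:

```java
import java.util.List;

// The analysis, extracted as a pure function: same input, same output,
// no side effects, nothing to mock.
class Analysis {
    static double average(List<Double> samples) {
        return samples.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }
}

class PipelineDemo {
    public static void main(String[] args) {
        // I/O stays at the edges of the module.
        List<Double> data = List.of(1.0, 2.0, 3.0); // pretend this was downloaded and parsed
        double result = Analysis.average(data);     // the part worth unit testing
        System.out.println(result);                 // pretend this is written to a file
    }
}
```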

I want to emphasize that DI does not necessarily facilitate SRP or other good design principles like separation of concerns, high cohesion/low coupling and so on. It might just as well have the opposite effect. Consider a class A which uses another class B internally. B is only used by A, and is therefore fully encapsulated and can be considered an implementation detail. If you change this to inject B into the constructor of A, then you have exposed this implementation detail, and now knowledge about this dependency (how to initialize B, the lifetime of B, and so on) has to exist somewhere else in the system, separately from A. So you end up with an overall worse architecture with leaking concerns.
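A minimal illustration of that point (made-up classes; the "work" is trivial arithmetic):

```java
class B {
    int transform(int x) { return x * 2; }
}

// B as an encapsulated implementation detail of A: nothing leaks.
class A {
    private final B b = new B();  // A alone knows that B exists and how to create it
    int run(int x) { return b.transform(x) + 1; }
}

// Versus injection, where every creator of A must now know about B.
class AInjected {
    private final B b;
    AInjected(B b) { this.b = b; }  // B's existence and lifetime leak to callers
    int run(int x) { return b.transform(x) + 1; }
}

class LeakDemo {
    public static void main(String[] args) {
        System.out.println(new A().run(3));                 // caller knows nothing of B
        System.out.println(new AInjected(new B()).run(3));  // caller must supply B
    }
}
```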

On the other hand there are some cases where DI really is useful. For example for global services with side effects like a logger.

The problem is when patterns and architectures become goals in themselves rather than tools. Just asking "Should we use DI?" is kind of putting the cart before the horse. You should ask: "Do we have a problem?" and "What is the best solution for this problem?"

A part of your question boils down to: "Should we create superfluous interfaces to satisfy the demands of the pattern?" You probably already realize the answer to this - absolutely not! Anyone telling you otherwise is trying to sell you something - most likely expensive consulting hours. An interface only has value if it represents an abstraction. An interface which just mimics the surface of a single class is called a "header interface" and this is a known antipattern.

JacquesB
  • 19
    I couldn't agree more! Also note that mocking things for the sake of it means that we're not actually testing the real implementations. If `A` uses `B` in production, but has only been tested against `MockB`, our tests don't tell us if it will work in production. When pure (no side-effect) components of a domain model are injecting and mocking each other, the result is a huge waste of everyone's time, a bloated and fragile codebase, and low confidence in the resulting system. Mock at the system's boundaries, not between arbitrary pieces of the same system. – Warbo May 29 '18 at 19:54
  • Dependency inversion is (part of) a solution to the problem: I want to write software that is testable and maintainable. Everyone reading this answer has that problem, so your first paragraph is moot. Eliminating dependencies is great, but doesn't nullify the benefits of DI. Also, saying "anyone who disagrees with me is scamming you" is pretty arrogant. Maybe those interfaces are not as superfluous as you think. – JounceCracklePop May 29 '18 at 20:54
  • 21
    @CarlLeth Why do you think DI makes code "testable and maintainable", and code without it less? JacquesB is right that *side effects* are the thing which harms testability/maintainability. If code has no side-effects, we don't care what/where/when/how it calls other code; we can keep it simple and direct. If code has side-effects we *have* to care. DI can pull side-effects out of functions and put it in parameters, making those functions more testable but the program more complex. Sometimes that's unavoidable (e.g. DB access). If code has no side effects, DI is just useless complexity. – Warbo May 29 '18 at 22:59
  • 15
    @CarlLeth: DI is *one* solution to the problem of making code testable in isolation if it has dependencies which forbid it. But it does not reduce overall complexity, nor does it make code more readable, which means it does not necessarily increase maintainability. However, if all of those dependencies can be eliminated by better separation of concerns, this perfectly "nullifies" the benefits of DI, because it nullifies the need for DI. This is often a better solution to making code more testable *and* maintainable at the same time. – Doc Brown May 30 '18 at 04:34
  • 7
    @Warbo This was the original and still probably the only valid use of mocking. Even at system boundaries, it is rarely needed. People really do waste much time creating and updating nearly worthless tests. – Frank Hileman May 30 '18 at 19:49
  • @DocBrown That's like saying writing clean code nullifies the need for the Single Responsibility Principle. Principles like SRP and DI are the way you achieve clean code in the first place. – JounceCracklePop May 30 '18 at 20:47
  • 4
    @CarlLeth: you are trying really hard to misunderstand what I wrote, aren't you? Code does not get cleaner just because you arbitrarily throw DI at it - if you can eliminate the need for DI, because you eliminate a dependency completely, code can become even cleaner than by using DI. The SRP is not directly comparable to that, so eliminating the need of applying SRP would literally mean "code with zero responsibility" = no code at all, which is hard to achieve when the code shall keep its semantics. – Doc Brown May 30 '18 at 21:00
  • 1
    @CarlLeth: maybe an example helps: see [my answer here](https://softwareengineering.stackexchange.com/questions/306483/how-to-solve-circular-dependency/306489#306489) two years ago. Most of the other answers tried to solve the described problem by introducing interfaces and injecting the (cyclic) dependencies. My answer, however, suggested to apply the SRP and so remove the cyclic dependency at all - nothing more to inject, cleaner code. – Doc Brown May 30 '18 at 21:05
  • @DocBrown I'm not intentionally misunderstanding you; I'm challenging your definition of DI. The DI principle says "depend on abstractions instead of details or lower-level modules". It does not say "wire up the Spring framework using XML to do silent property injection with private setters" as most of the answers seem to think. Nor does it say "make one header interface for every implementation". The set of abstractions != the set of interfaces. – JounceCracklePop May 31 '18 at 06:29
  • 8
    @CarlLeth: ok, now I see where the misunderstanding comes from. You are talking about dependency **inversion**. But the question, this answer and my comments are about DI = dependency **injection**. – Doc Brown May 31 '18 at 06:37
  • 1
    You had me at _Dependency injection is, like most patterns, a solution to problems. So start by asking if you even have the problem in the first place. If not, then using the pattern most likely will make the code worse._ +1 – Alex Apr 23 '20 at 09:33
  • best answer on DI, the opening line is the one that a lot of people miss. +rep – humble_wolf Aug 19 '20 at 11:27
  • "An interface which just mimics the surface of a single class is called a "header interface" and this is a known antipattern." These are called Singletons, and I don't see how this is a problem. Care to elaborate? – Ryan Haney Oct 03 '20 at 22:57
  • @RyanHaney: No a singleton is a different thing. A singleton is a pattern which ensures that only a single *instance* is created for a class. It is also generally considered an antipattern, but it is a different antipattern. – JacquesB Oct 04 '20 at 09:05
  • @JacquesB You are correct. Not sure where my head was at. Thanks for pointing that out. What I think I was trying to say before my brain went off in another direction: it's common to have a "header interface" when using DI in conjunction with automated testing. As far as elaborating, I wanted to understand _why_ it's considered an antipattern. Continued in the next message... – Ryan Haney Oct 04 '20 at 21:02
  • @JacquesB In your example with SRP, assuming you have one class that performs analysis, and one class that gathers the information, I would test this as a concrete analysis class with mocked interface for IO (a "header interface"). I'm not sure if it's possible in other languages or frameworks, but my mocking framework only allows mocking interfaces. It seems like a reasonable approach. Considering that the data gatherer is only ever used by the analysis class, is that considered a leaking concern? How do we achieve testability without leaking concerns? – Ryan Haney Oct 04 '20 at 21:04
  • @RyanHaney If the data to be processed is passed to the analysis class (instead of the analysis class fetching the data through an injected IO interface) then you don't need mocking at all. – JacquesB Oct 06 '20 at 19:20
  • @JacquesB Thanks for the async discussion :-) At some point, a composition of the data-gatherer and analysis logic happens. How do you test that without mocking? – Ryan Haney Oct 07 '20 at 19:13
  • more concrete examples would be nice. – ahnbizcad Nov 06 '20 at 22:09
  • If your code base is going to grow and scale quickly, then you have a problem if you want to handle that by hand. – Lucas Montenegro Carvalhaes Jul 05 '22 at 02:03
55

In my experience, there are a number of downsides to dependency injection.

First, using DI does not simplify automated testing as much as advertised. Unit testing a class with a mock implementation of an interface lets you validate how that class will interact with the interface. That is, it lets you unit test how the class under test uses the contract provided by the interface. However, this provides much greater assurance that the input from the class under test into the interface is as expected than that the class responds as expected to output from the interface, since that output is almost universally mock output, which is itself subject to bugs, oversimplifications and so on. In short, it does NOT let you validate that the class will behave as expected with a real implementation of the interface.

Second, DI makes it much harder to navigate through code. When trying to navigate to the definition of classes used as input to functions, an interface can be anything from a minor annoyance (e.g. where there is a single implementation) to a major time sink (e.g. when an overly generic interface like IDisposable is used) when trying to find the actual implementation being used. This can turn a simple exercise like "I need to fix a null reference exception in the code that happens right after this logging statement is printed" into a day-long effort.

Third, the use of DI and frameworks is a double-edged sword. It can greatly reduce the amount of boiler-plate code needed for common operations. However, this comes at the expense of needing detailed knowledge of the particular DI framework to understand how these common operations are actually wired together. Understanding how dependencies are loaded into the framework and adding a new dependency into the framework to inject can require reading a fair amount of background material and following some basic tutorials on the framework. This can turn some simple tasks into rather time consuming ones.

Eric
  • I would also add that the more you inject, the longer your startup times are going to become. Most DI frameworks create all injectable singleton instances at startup time, regardless of where they are used. – Rodney P. Barbati May 30 '18 at 03:06
  • 8
    If you would like to test a class with real implementation (not a mock), you can write functional tests - tests similar to unit tests, but not using mocks. – BЈовић May 30 '18 at 05:57
  • @RodneyP.Barbati I'd question "Most". Of the ones I've used, only Spring does this by default (it does it to try to flush injection errors out immediately), and you can opt out of it. HK2, Guava and Dagger all create instances at injection time, not application startup. – Logan Pickup May 30 '18 at 06:15
  • 3
    I think your second paragraph needs to be more nuanced: DI in itself doesn’t make it hard(er) to navigate through code. At its simplest, DI is simply a consequence of following SOLID. What increases complexity is the use of unnecessary indirection and *DI frameworks*. Other than that, this answer hits the nail on the head. – Konrad Rudolph May 30 '18 at 10:11
  • 6
    Dependency injection, outside of cases where it is truly needed, is also a warning sign that other superfluous code may be present in great abundance. Executives are often surprised to learn that developers add complexity for the sake of complexity. – Frank Hileman May 30 '18 at 19:43
  • @BЈовић The unit vs functional distinction isn't about mocking; both can be done with or without mocks. Instead it's about piecewise vs end-to-end, white-box vs black-box, implementation vs specification. Example: helper functions can be unit tested (with/without mocks), but not functional tested since they're implementation details with no specification of their own (public/private is a whole other debate... ;) ). Since they're white-box, unit tests can set up an arbitrary state and test what happens; black-box functional tests can only test states reachable via some input. – Warbo Jun 01 '18 at 12:49
  • 2
    This is a great answer. It does make it harder to navigate since you cannot just follow the relationship link via your IDE any more. You'll have to do a string search...overall it complicate things to allow decoupling which should theoretically help testing. – max Jul 21 '19 at 20:27
  • Injecting IDisposable is a code smell. – Ryan Haney Oct 03 '20 at 23:00
  • @max In my IDE, it's one extra hop from the usage to the implementation. There is also a way to jump straight to the implementation if there is only one, otherwise, you are shown a list. – Ryan Haney Oct 07 '20 at 19:22
  • 1
    @max indeed, IDE becomes semi-useless with DI framework. – John Jiang Feb 07 '23 at 20:52
22

To summarize, I follow Mark Seemann's advice from "Dependency Injection in .NET":

DI should be used when you have a 'volatile dependency', e.g. there's a reasonable chance it might change.

So if you think you might have more than one implementation in the future or the implementation might change, use DI. Otherwise new is fine.
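A hedged sketch of that rule (PaymentGateway/Checkout are hypothetical names): the payment provider plausibly changes, so it is volatile and injected behind an interface, while a plain StringBuilder is stable and simply new'd inside the class.

```java
interface PaymentGateway {
    boolean charge(int cents);
}

// A fake implementation; a real provider would be the volatile dependency.
class FakeGateway implements PaymentGateway {
    public boolean charge(int cents) { return cents > 0; }
}

class Checkout {
    private final PaymentGateway gateway;                      // volatile: injected
    private final StringBuilder receipt = new StringBuilder(); // stable: new is fine

    Checkout(PaymentGateway gateway) { this.gateway = gateway; }

    String pay(int cents) {
        if (gateway.charge(cents)) {
            receipt.append("paid ").append(cents);
        }
        return receipt.toString();
    }
}

class CheckoutDemo {
    public static void main(String[] args) {
        System.out.println(new Checkout(new FakeGateway()).pay(500));
    }
}
```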

Dennis
Rob
  • 6
    Note he also gives different advice for OO and functional languages http://blog.ploeh.dk/2017/01/27/from-dependency-injection-to-dependency-rejection/ – jk. May 29 '18 at 11:51
  • 4
    That's good point. If we create interfaces for every dependency by default then it's somehow against YAGNI. – Arkadiusz Kałkus May 29 '18 at 12:05
  • 4
    Can you provide a reference to *"Dependency injection in .NET"*? – Peter Mortensen May 29 '18 at 23:39
  • 1
    If you are unit testing, then it is highly likely that your dependency is volatile. – Jaquez May 30 '18 at 16:12
  • 17
    The good thing is, developers are always able to predict the future with perfect accuracy. – Frank Hileman May 30 '18 at 19:44
  • 1
    @FrankHileman :P The core principle is "don't shoot yourself in the foot". If there's a major cost to making the dependency replaceable later, but it's easy doing it right now, go ahead. The main point YAGNI addresses is that people often add capabilities that *aren't* significantly cheaper to add right now compared to later - and the result is a lot of waste and complexity where it doesn't help anything. The "poor prediction" sword cuts both ways - your "great abstraction" might turn out to be horribly poor, and you're worse off even if you need *an* abstraction later. – Luaan Jun 03 '18 at 07:37
  • 1
    @Luaan Dependency injection does not add abstractions, it only makes them more difficult to use, and makes it possible for third parties to inject code into a library/application/server that they did not author. Adding any complexity where it is not needed is not software engineering or over-engineering; it is bad engineering. – Frank Hileman Jun 04 '18 at 16:31
  • @FrankHileman I wasn't talking about dependency injection specifically. Your comment didn't seem to either :) As for "3rd party injection"... the main point of object oriented programming is to allow adding code without having to modify existing code. But that's not how most people do it, is it? Dependency injection is one tool that helps you do that (though IMO 100% unnecessary). Just because it's *possible* for 3rd parties to inject code doesn't mean it's *allowed* or supported. That's like saying "we can't use .NET, because anyone can recompile our code from binaries". Ok, so what? – Luaan Jun 05 '18 at 06:43
  • @Jaquez "If you are unit testing, then it is highly likely that your dependency is volatile." So you don't unit test your most important, more relied-upon, core backbone pieces of code? Please. – ahnbizcad Nov 06 '20 at 22:12
  • @ahnbizcad, it seems like you've misunderstood the point I was making. First, I never made any comment about what code is tested. My point was that unit testing very often requires a new implementation of a dependency (i.e., the mocked/stubbed implementation). Once that new implementation exists, then there are at least 2 unique implementations, thus, per this answer, the dependency is "volatile" and should be injected. And please do "unit test your most important, more relied-upon, core backbone pieces of code", regardless of what anyone on the internet seems to be suggesting. – Jaquez Nov 08 '20 at 06:42
21

My biggest pet peeve about DI was already mentioned in a few answers in a passing way, but I'll expand on it a bit here. DI (as it is mostly done today, with containers etc.) really, REALLY hurts code readability. And code readability is arguably the reason behind most of today's programming innovations. As someone said - writing code is easy. Reading code is hard. But it's also extremely important, unless you're writing some kind of tiny write-once throwaway utility.

The problem with DI in this regard is that it's opaque. The container is a black box. Objects simply appear from somewhere and you have no idea - who constructed them and when? What was passed to the constructor? Who am I sharing this instance with? Who knows...

And when you work primarily with interfaces, all the "go to definition" features of your IDE go up in smoke. It's awfully difficult to trace out the flow of the program without running it and just stepping through to see just WHICH implementation of the interface was used in THIS particular spot. And occasionally there's some technical hurdle which prevents you from stepping through. And even if you can, if it involves going through the twisted bowels of the DI container, the whole affair quickly becomes an exercise in frustration.

To work efficiently with a piece of code that used DI, you must be familiar with it and already know what goes where.

Added 5 years later: The state-of-the-art IDEs of today like Visual Studio or the JetBrains family have actually gotten a bit better with the "Go to definition" part. If there is just one implementation of an interface, it will jump to that. Still, the other issues persist.

Vilx-
  • 5,320
  • 4
  • 20
  • 24
  • the alternative, displaying all the guts with long blocks of code with several strategies of them is even harder to read and harder to test. not knowing: the point is to duck type and not know the implementation details. you separate the high level from the low level. if that means "opaque", then i think you're picking at something that isn't the whole picture. >And when you work primarily with interfaces, all the "go to definition" features of your IDE go up in smoke. This sucks. – ahnbizcad Nov 07 '20 at 03:36
  • 1
    @ahnbizcad - If you have a piece of code that can have several strategies then sure, go for interfaces or inheritance or whatever. That's what it's there for. But 90% of the time you don't. There's just one "PersonRepository" class, why hide it behind a `IPersonRepository`? When you're new to the project, this will just confuse you and make you spend hours looking for the implementation of that interface and how it got there. Later it will be easier, but still act like a speedbump every time you need to jump from an interface to an implementation, even if you know where to search. – Vilx- Nov 07 '20 at 16:28
  • This is spot-on. Thinking back to the DI framework in my previous job still conjures up nightmare these days. – John Jiang Feb 07 '23 at 20:54
4

Enabling switching implementations quickly (DbLogger instead of ConsoleLogger for example)

While DI in general is surely a good thing, I'd suggest not to use it blindly for everything. For example, I never inject loggers. One of the advantages of DI is making the dependencies explicit and clear, and there's no point in listing ILogger as a dependency of nearly every class - it's just clutter. It's the responsibility of the logger to provide the flexibility you need. All my loggers are static final members; I may consider injecting a logger when I need a non-static one.
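A minimal Java sketch of that convention (class names are illustrative, using the JDK's own java.util.logging): the logger is a static final member that the class resolves for itself, so it never clutters the constructor, while a "real" dependency is still injected.

```java
import java.util.logging.Logger;

interface OrderRepository {
    void save(String id);
}

class OrderService {
    // The logger is not injected: the class resolves it itself,
    // so it never shows up as a constructor parameter.
    private static final Logger log =
            Logger.getLogger(OrderService.class.getName());

    private final OrderRepository repository; // a real dependency, injected

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void placeOrder(String id) {
        log.fine("placing order " + id);
        repository.save(id);
    }
}
```

Swapping the logging backend then remains the logging framework's job (handlers, configuration), instead of something threaded through every constructor in the codebase.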

Increased number of classes

This is a disadvantage of the given DI framework or mocking framework, not of DI itself. In most places my classes depend on concrete classes, which means that there's zero boilerplate needed. Guice (a Java DI framework) binds a class to itself by default, and I only need to override the binding in tests (or wire them manually instead).
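Hand-wired, that zero-boilerplate idea looks roughly like this (hypothetical names; Guice's just-in-time bindings construct the concrete class for you in the same way):

```java
// No interface and no binding boilerplate: the class depends on the
// concrete type, which a DI container (or a test) can still replace.
class PriceCalculator {
    int priceOf(String sku) { return 100; } // pretend lookup
}

class Checkout {
    private final PriceCalculator calculator;

    Checkout(PriceCalculator calculator) { // injected, but concrete
        this.calculator = calculator;
    }

    int total(String sku, int quantity) {
        return calculator.priceOf(sku) * quantity;
    }
}
```

A test can still substitute behaviour by passing an anonymous subclass of PriceCalculator - no interface has to exist until a second real implementation does.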

Creation of unnecessary interfaces

I only create the interfaces when needed (which is rather rare). This means that sometimes, I have to replace all occurrences of a class by an interface, but the IDE can do this for me.

Should we use dependency injection when we have just one implementation?

Yes, but avoid any added boilerplate.

Should we ban creating new objects except language/framework ones?

No. There'll be many value (immutable) and data (mutable) classes, where the instances just get created and passed around and where there's no point in injecting them -- as they never get stored in another object (or only in other such objects).

For them, you may need to inject a factory instead, but most of the time it makes no sense (imagine, e.g., `@Value class NamedUrl { private final String name; private final URL url; }`; you really don't need a factory here, and there's nothing to inject).
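Spelled out without Lombok's @Value (and with java.net.URI standing in for URL, purely to keep the sketch self-contained), that example is just a plain immutable value class:

```java
import java.net.URI;

// A plain value class: instances are simply created with `new` and
// passed around, so there is nothing for a container to inject and
// no need for a factory.
final class NamedUrl {
    private final String name;
    private final URI url;

    NamedUrl(String name, URI url) {
        this.name = name;
        this.url = url;
    }

    String name() { return name; }
    URI url()     { return url; }
}
```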

Is injecting a single implementation bad idea (let's say we have just one implementation so we don't want to create "empty" interface) if we don't plan to unit test a particular class?

IMHO it's fine as long as it causes no code bloat. Do inject the dependency but do not create the interface (and no crazy config XML!) as you can do it later without any hassle.

Actually, in my current project, there are four classes (out of hundreds), which I decided to exclude from DI as they're simple classes used in too many places, including data objects.


Another disadvantage of most DI frameworks is the runtime overhead. This can be moved to compile time (for Java, there's Dagger, no idea about other languages).

Yet another disadvantage is the magic happening everywhere, which can be tuned down (e.g., I disabled proxy creation when using Guice).

maaartinus
  • 2,633
  • 1
  • 21
  • 29
3

I have to say that in my opinion, the entire notion of Dependency Injection is overrated.

DI is the modern day equivalent of global values. The things you are injecting are global singletons and pure code objects, otherwise, you couldn't inject them. Most uses of DI are forced on you in order to use a given library (JPA, Spring Data, etc). For the most part, DI provides the perfect environment for nurturing and cultivating spaghetti.

In all honesty, the easiest way to test a class is to ensure that all dependencies are created in a method that can be overridden. Then create a Test class derived from the actual class and override that method.

Then you instantiate the Test class and test all its methods. This won't be clear to some of you - the methods you are testing are the ones belonging to the class under test. And all of these method tests occur in a single class file - the unit testing class associated with the class under test. There is zero overhead here - this is how unit testing works.

In code, this concept looks like this...

class ClassUnderTest {

   protected ADependency aDependency;
   protected AnotherDependency anotherDependency;

   // Call from a factory or use an initializer
   public void initializeDependencies() {
      aDependency = new ADependency();
      anotherDependency = new AnotherDependency();
   }
}

class TestClassUnderTest extends ClassUnderTest {

    @Override
    public void initializeDependencies() {
      aDependency = mock(ADependency.class);           // e.g. Mockito mocks
      anotherDependency = mock(AnotherDependency.class);
    }

    // Unit tests go here...
    // Unit tests call base class methods
}

The result is exactly equivalent to using DI - that is, the ClassUnderTest is configured for testing.

The only differences are that this code is utterly concise, completely encapsulated, easier to code, easier to understand, faster, uses less memory, does not require an alternate configuration, does not require any frameworks, will never be the cause of a 4 page (WTF!) stack trace that includes exactly ZERO (0) classes which you wrote, and is completely obvious to anyone with even the slightest OO knowledge, from beginner to Guru (you would think, but would be mistaken).

That being said, of course we can't use it - it's too obvious and not trendy enough.

At the end of the day, though, my biggest concern with DI is that of the projects I have seen fail miserably, all of them have been massive code bases where DI was the glue holding everything together. DI is not an architecture - it is really only relevant in a handful of situations, most of which are forced on you in order to use another library (JPA, Spring Data, etc). For the most part, in a well designed code base, most uses of DI would occur at a level below where your daily development activities take place.

  • The problem with singletons is that they make unit testing really difficult. Besides, the difference between a singleton and a global variable is quite small. DI is still the way to go. And contrary to your answer, the difference between a singleton and DI is huge. – BЈовић May 30 '18 at 05:55
  • 8
    You've described not the equivalent, but the opposite of dependency injection. In your model, every object needs to know the concrete implementation of all its dependencies, but using DI that becomes the responsibility of the "main" component - to glue appropriate implementations together. Bear in mind that DI goes hand-in-hand with the _other_ DI - dependency inversion, where you do not want high-level components to have hard dependencies on low-level components. – Logan Pickup May 30 '18 at 06:21
  • 2
    its neat when you have only one level of class inheritance and one level of dependencies. Surely it will turn into hell on earth as it expands? – Ewan May 30 '18 at 11:04
  • 6
    if you use interfaces and move initializeDependencies() into the constructor its the same but neater. The next step, adding construction parameters means you can do away will all your TestClasses. – Ewan May 30 '18 at 11:05
  • Imagine you have class A with no dependencies, then B having A as dependency, C having B and D having C. Now you want a D which uses A', a class derived from A. B instantiates A, so you have to create B' instead. Then C', then D'. Writing one class forces you to create three others. With dependency injection you could just have wired your objects differently. – GodsBoss May 30 '18 at 17:36
  • 8
    There is so much wrong with this. As others have said, your 'DI equivalent' example is not dependency injection at all, it is the antithesis, and demonstrates a complete lack of understanding of the concept and introduces other potential pitfalls as well: [partially initialized objects are a code smell](https://softwareengineering.stackexchange.com/q/334970/38882) As Ewan suggests, Move the initialization to the constructor, and pass them via constructor parameters. Then you have DI... – Mr.Mindor May 30 '18 at 23:14
  • (cont) With initialization through the constructor parameters, you completely eliminate the need for all the boiler plate code your one sub class per test would require. You can keep all the sub classes for the dependencies if you really must, but those are the sort of thing mocking frameworks are written for. Lastly the bit about DI frameworks does actually offer something to the OP question hidden behind the rant. They are a tool that _can_ have a steep learning curve and _can_ be abused just like any other, and can make things worse if used inappropriately. – Mr.Mindor May 30 '18 at 23:27
  • 3
    Adding to @Mr.Mindor: there's a more general anti-pattern "sequential coupling" which doesn't just apply to initialisation. If methods of an object (or, more generally, calls of an API) must be run in a particular order, e.g. `bar` can only be called after `foo`, then that's a bad API. It's *claiming* to provide functionality (`bar`), but we can't actually use it (since `foo` might not have been called). If you want to stick with your `initializeDependencies` (anti?)pattern, you should at least make it private/protected and call it automatically from the constructor, so the API is sincere. – Warbo May 31 '18 at 12:35
  • 1
    I guess no one noticed this wasn't supposed to be an example of DI. If it was, there would be more code. I'm showing how unit testing is easier without DI. For those of you poo-pooing two stage construction - I'll assume you've never heard of a factory or factory pattern. For those of you thinking that this adds overhead - you either aren't unit testing or your unit tests are embedded in your production code - good for you! Also, you can't or shouldn't call overridden methods from a constructor. The rest of you should try it before commenting. – Rodney P. Barbati May 31 '18 at 20:23
  • @RodneyP.Barbati Factories (methods or classes) are orthogonal to two-step creation. Whether we create a `Foo` object with a constructor or factory, we should get back a working `Foo` object, not a half-finished one. Also public APIs shouldn't expose implementation details like `initializeDependencies`. If you still want to expose `initialiseDependencies` (e.g. to avoid calling it from a constructor), put it in an `UninitialisedFoo` class with no other methods, and have it return a `Foo` object (calling it multiple times should work too). – Warbo Jun 01 '18 at 13:04
  • While I agree that DI is not the magic bullet a lot of people say it is, I'll have to say that everything down from your third paragraph is... weird. – T. Sar Jun 01 '18 at 13:19
  • @RodneyP.Barbati I think everyone commenting here noticed that your example wasn't supposed to be an example of DI. The problem is that you bill it as _equivalent_ to DI and it is not. It is, as Logan described, pretty much the exact opposite: A specialized custom version of the class for each combination of dependency _behaviors_ you might need to override. That can amount to _up to_ as many sub classes as you have tests. No it does not cause runtime overhead, but it does lead to unnecessary overhead for the developers in the form of having to manage and maintain the extra code. – Mr.Mindor Jun 01 '18 at 18:52
  • (cont) Additional developer overhead because they also have to somehow track which subclass and which dependency subclasses belong to which test or tests. Further you claim it is easier to do this than to use DI... How is writing, maintaining, managing the extra subclasses _easier_ than `var dep1 = MockOrOverrideForDep1(); var dep2 = MockOrOverrideForDep2(); var testSubject = new ClassUnderTest(dep1, dep2); ... rest of test ` (cont) – Mr.Mindor Jun 01 '18 at 19:06
  • 1
    And yes you are right that in most languages (at least that I'm familiar with) you can't or at least really shouldn't call virtual functions from constructors, but that isn't a sufficient argument for two step initialization over DI. If you use dependency injection via constructor parameters, the _problem goes away completely_. You can even still use factories and _never have partially initialized objects_ `public static ClassToBuild ImAFactory() { var dep1 = new DependencyOne(); var dep2 = new DependencyTwo(); return new ClassToBuild(dep1,dep2); }` – Mr.Mindor Jun 01 '18 at 19:16
  • 1
    @Ewen Unit testing is ever only one class deep - that's why they call it unit testing. – Rodney P. Barbati Oct 30 '18 at 19:08
  • @Logan Pickup - can you please provide the details of when you actually used DI to inject more than a single class. If you are using DI to inject more than a single implementation, then your annotations have enough information to identify that single class, but in a roundabout fashion - you can't just say "class", you have to describe a class that looks like this. – Rodney P. Barbati Oct 30 '18 at 19:12
  • 1
    @RodneyP.Barbati Whether you inject a single class or multiple is irrelevant. The point is to remove the dependency on the implementation, so that if the implementation changes (not necessarily to a different implementation), the user doesn't need to change/be recompiled/etc. too. I also find, personally, that the constraints imposed by using DI help clarify what a particular interface should look like, which in my experience (maybe not yours) leads to cleaner designs. – Logan Pickup Oct 31 '18 at 01:43
  • @RodneyP.Barbati To answer your question though - yes, I have places in my code with annotations, because type information is not always sufficient (usually this only happens when injecting interfaces that I didn't write); I also have places where an unknown number of the same type can be injected (plugin-style architecture); and I also have places where depending on which app is being built, a different implementation is injected. – Logan Pickup Oct 31 '18 at 01:47
  • You're right about the global singletons, and after much reading, I've found that the "cons" to using singletons also apply to dependencies, because they're basically global. The reason I'm interjecting is this: Singletons appear to be more appropriate when their behavior is re-usable between different programs. I personally use them, because DI is just as opaque, if not more. If I need to hot-swap a "helper", DI forces me to pick where in the tree to swap it, singletons' "where" is time-based. Just a trade-off on what to know when debugging. Less code, same result = happy programmers. – Stephen J Oct 24 '19 at 21:02
  • @StephenJ "singletons' "where" is time-based" Uh oh, sounds like you're using shared mutable global state, which is a *massive* source of bugs. – Warbo May 08 '20 at 15:21
  • @RodneyP.Barbati Re: "Unit testing is ever only one class deep". There are multiple definitions for terms like "unit testing"; some people use "unit" to mean a "unit of code" (e.g. a class); others mean "unit of functionality" (e.g. "logging in"). My experience is that the former requires a lot of effort and complexity for little to no gain. I've written [a blog post about it](http://chriswarbo.net/blog/2017-11-10-unit_testing_terminology.html) – Warbo May 08 '20 at 15:24
  • @Warbo like a music player? Background music is an appropriate use-case. So's network data retrieval. There's only "massive bugs" if you don't know where your callbacks are at. – Stephen J May 26 '20 at 09:03
  • With your example, what happens if there are 5 versions of `ADependency`? You then need to pass in different arguments into `ADependency` when instantiating it. And that means `ClassUnderTest` needs to pass those values and have constructor params. And now a dependent object dictated the interface of a caller class's constructor. Instead of the parent having to know the dependency's parameters, just pass the right instance of the dependent object... and you're right at dependency injection. no. your example is not equivalent. – ahnbizcad Nov 07 '20 at 03:40
  • ahnbizcad - if there are five versions of ADependency, then you will have 5 other classes that require their own testing - none of which would have any association with the example I provided. This is because this example is a method of mocking the instance used for ADependency - in other words, instances of ADependency are not being tested in this example. – Rodney P. Barbati Nov 10 '20 at 19:16
  • Delayed response to @Mr.Mindor - I would ask you to explain how you might change the injected instance for use when testing? My point is that no matter what you do, you will always have some code (or configuration) somewhere which specifies a different instance be used - i.e. when testing, you have to use a different configuration for your classes that specifies mock instances. This means you have multiple artifacts for various purposes - same as the code above. – Rodney P. Barbati Nov 10 '20 at 19:22
  • @RodneyP.Barbati not exactly sure what you mean by _change the injected instance_. My comment from Jun 1 '18 at 19:06 shows how you would pass a mocked dependency directly to the constructor during a test. Step 1, create mocked instances, `MockOrOverrideForDep1()` would be a function that returns a mocked out dependency.(I normally wouldn't use a function to set up a one-off mock, but not a lot of room in comments.) Step 2 create instance of ClassUnderTests, Step 3 test ClassUnderTests's behavior. – Mr.Mindor Nov 18 '20 at 00:00
  • @RodneyP.Barbati if you meant how you would change the configuration of the DI framework, you usually don't have to, in most cases the DI framework wouldn't be involved in tests at all, really 99% of your code shouldn't even be aware a DI framework is in use. – Mr.Mindor Nov 18 '20 at 00:04
-5

Really your question boils down to "Is unit testing bad?"

99% of your alternate classes to inject will be mocks that enable unit testing.

If you do unit testing without DI, you have the problem of how to get the classes to use mocked data or a mocked service (or a mocked 'part of the logic', as you may not have separated it out into services).

There are alternate ways of doing this, but DI is a good and flexible one. Once you have it in for testing you are virtually forced to use it everywhere, as you need some other piece of code to do the wiring - even if it's only the so-called 'poor man's DI', which instantiates the concrete types.

It's hard to imagine a disadvantage so bad that the advantage of unit testing is overwhelmed.
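As a sketch of that point (hypothetical names): with plain constructor injection, a unit test substitutes a fake without any container or framework involved.

```java
interface Mailer {
    void send(String to, String body);
}

class Signup {
    private final Mailer mailer;

    Signup(Mailer mailer) { // injected, so a test can swap it
        this.mailer = mailer;
    }

    void register(String email) {
        // ... persist the user, then:
        mailer.send(email, "welcome");
    }
}
```

Production code wires in the real Mailer ('poor man's DI' is just doing that `new` by hand at the composition root); a unit test passes a lambda that records calls instead of sending mail.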

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Ewan
  • 70,664
  • 5
  • 76
  • 161
  • Unit testing, yeah. But my colleague can say - this is outermost/topmost layer of our component (in our case controllers of REST API), we won't test it, why inject not create new? And how about being strict, by banning "new" keyword - not allowing to create new objects at all? – Arkadiusz Kałkus May 29 '18 at 12:08
  • When it comes to 'poor man's DI' that's what I did in some cases to avoid unnecessary interfaces creation - but then my colleague asks, so why to inject them at all? Indeed. Why? To me - to keep consistency. But it's not the best, scientific reason. It's aesthetic reason rather. – Arkadiusz Kałkus May 29 '18 at 12:09
  • @Landeeyo You are still creating interfaces with *"Poor man's DI"*, but you omit the DI container and inject your dependencies *by hand*. – Paul Kertscher May 29 '18 at 12:17
  • @PaulKertscher Oh, I'm sorry, now I know what you mean. I meant something else - registering dependency as concrete type:concrete type, not interface:concrete type. – Arkadiusz Kałkus May 29 '18 at 12:22
  • 14
    I disagree with your assertion that DI is about unit testing. Facilitating unit testing is just one of the benefits of DI, and arguably not the most important one. – Robert Harvey May 29 '18 at 12:44
  • @RobertHarvey hmm my interpretation of the question is more "can we do without DI" rather than "why should we use DI" – Ewan May 29 '18 at 12:49
  • @Landeeyo 1. test your controllers 2. new in controllers is not the same as new in app.start 3. if you inject concrete types, then you can also inject classes which inherit from them. But you have an inheritance problem if you want to remove functionality (for want of a better term) Interfaces are just an empty class to inherit from – Ewan May 29 '18 at 12:53
  • @Landeeyo IIRC its the banning of __naked__ 'new' not just 'new', i.e. 'new' should only be used in factory methods etc. – esoterik May 29 '18 at 18:13
  • 7
    I disagree with your premise that unit testing and DI are so close. By using a mock/stub, we make the test suite lie to us a bit more: the system under test gets further from the real system. That's objectively bad. Sometimes that's outweighed by an upside: mocked FS calls don't require cleanup; mocked HTTP requests are fast, deterministic and work offline; etc. In contrast, every time we use a hard-coded `new` inside a method we *know* that the same code running in production was running during tests. – Warbo May 29 '18 at 19:44
  • @Warbo those would be integration tests – Ewan May 29 '18 at 21:14
  • 1
    @Ewan Integration tests (and mocking) are for *external* dependencies: Web services, relational databases, key/value stores, email senders, etc. At some point this idea got hijacked and applied to "any other class, even if you just wrote it, it's literally right there in the same directory, would actually be in the same file if Java allowed it, and the only reason that code is in another class at all is because of the Single Responsibility Principle" – Warbo May 29 '18 at 22:31
  • 2
    @Warbo I would argue that functions with side-effects, such as those that access the FS or make HTTP requests, cannot be tested effectively as a unit. When you keep all evaluation logic within pure functions and isolate side-effects to the application boundary, you are left with code that is optimally unit testable without the need for mocks/stubs. Since side-effects cannot be unit tested, they are left to the integration tests. – Joshua Jones May 30 '18 at 02:04
  • @Warbo integration if you leave the dependency in, unit if you mock it – Ewan May 30 '18 at 06:23
  • 8
    No, it’s not “is unit testing bad?”, it’s “is mocking (a) really necessary, and (b) worth the increase in complexity incurred?” That’s a very different question. Unit testing *isn’t* bad (literally nobody is arguing this), and it’s definitely worth it in general. But not all unit testing requires mocking, and mocking does carry a substantial cost so it should, at least, be used judiciously. – Konrad Rudolph May 30 '18 at 09:17
  • @KonradRudolph I feel that the argument has gone slightly off topic. Consider the OP. Are you disagreeing with "99% of your alternate classes to inject will be mocks that enable unit testing" or "If you do unit testing without DI you have the problem of how to get the classes to use mocked data"? – Ewan May 30 '18 at 10:42
  • 1
    @Ewan What I’m disagreeing with is that DI is necessary for unit testing. I simply draw the line at mocking: I’d rather write an *integration test* (which doesn’t necessarily require mocking) than a unit test with a mock object, if doing so requires me to increase my code base complexity manyfold (which DI frameworks do). Of course DI frameworks have utility beyond mocking (e.g. to allow plugins) but this use is more limited in scope and doesn’t make the rest of the code base more complex. – Konrad Rudolph May 30 '18 at 10:46
  • @KonradRudolph are you saying that an integration test is a unit test? – Ewan May 30 '18 at 10:46
  • 1
    @Ewan No, I’m saying that in such a situation I’d use an integration test *instead of* a unit test. – Konrad Rudolph May 30 '18 at 10:48
  • @KonradRudolph ok so we agree. _if_ you do unit tests, DI is very helpful and probably the majority of your DI usage. If you stop using DI then you either have to throw out your unit tests and switch to integration tests, or do something clever and unusual to replace it – Ewan May 30 '18 at 10:51
  • 6
    @Ewan After your last comment I don’t think we agree. I’m saying that *most unit tests don’t need DI [frameworks]*, because most unit tests don’t need mocking. In fact, I’d even use this as a heuristic for code quality: if most of your code can’t be unit tested without DI/mock objects then you’ve written bad code that was too rightly coupled. Most code should be highly decoupled, single responsibility and general-purpose, and be trivially testable in isolation. – Konrad Rudolph May 30 '18 at 10:54
  • 1
    @KonradRudolph you muddle the point with '[frameworks]' which i don't mention. Unless you are doing functional programming or something each layer of your application generally wraps an object from the layer below. if you Unit test your UI layer you inject your BusinessLayer, if you unit test a BL object you inject the Datalayer etc. Sure, _some_ objects might be pure logic, but even they need to share code occasionally! – Ewan May 30 '18 at 11:17
  • @KonradRudolph also, think about what you just said: code that uses DI is bad code? you just wrote off half the internet. I mean props for standing up for your principles against the tide, but you should probably have said that first – Ewan May 30 '18 at 11:37
  • @Ewan Sure but at that point of your argument, each function (particularly, but not exclusively, each constructor) accepting arguments is DI. While true for some (useful) definition of DI, this is clearly not what’s meant in this discussion. Hence my addition of “[frameworks]”. Incidentally, *the same is true in functional programming*. – Konrad Rudolph May 30 '18 at 12:14
  • @KonradRudolph well yes, but given that I am not talking about frameworks and we restrict DI to the usual parameter or construction injection. I'm still not sure what you disagree with. I don't fully believe that you think MyController(IMyService service) or MyService(IMyOtherServiceClient client) is a bad thing – Ewan May 30 '18 at 12:18
  • 3
    @Ewan Your characterisation of unit/integration too simplistic. If we have code like `class Order { ... public Price total() { result = new Price(); for (item : this.items) result.add(item.price()); return result; } }` and a test which does `Order o = new Order(); o.add(i); o.add(i); assertEqual(o.total(), 2 * i.price());` you're saying that it's an integration test because it calls `new Price` rather than mocking it? I would call it a unit test since it's testing a unit of functionality (calculating the total price of an order) with pure input/output examples. – Warbo May 30 '18 at 13:19
  • @Warbo wow that's a really on the line example. technically I guess its an integration test unless Price is a struct – Ewan May 30 '18 at 13:23
  • 4
    @Ewan I don't understand "technically": if some definition called that integration testing, I'd say that definition is flawed. That's also not "on the line": it's common OOP practice, modelling a domain using objects and their interactions, to create a language/algebra for expressing our problems and their solutions. Distinctions like struct/not-struct are either just leaks in that abstraction or irrelevant implementation details. A much more important distinction is whether or not a method has side-effects (which *are* domain relevant, e.g. "charges a card", "registers an account", etc.). – Warbo May 30 '18 at 14:49
  • @Warbo the definition of unit test is pretty clear. you have chosen an example where there is no functionality in the dependency and the definition of the function is to return the dependant class. If Order called Price.ChangeCurrency() or similar then it would definitely be an integration test by anyone definition and a unit test would require that you mocked Price in order to test only the Order logic. – Ewan May 30 '18 at 14:57
  • @Warbo and KonradRudolph basically your objections boil down to the fact that you **do** think unit testing is bad but call integration tests unit tests so you don't have to admit it – Ewan May 30 '18 at 15:02
  • This is about the limitations of your preferred tools, which is unrelated to the question. There is no need for DI to enable unit testing. – Frank Hileman May 30 '18 at 19:46
  • @Ewan: I agree with Warbo that you're defining unit-testing far too narrowly. If you wish to convince others of your views, you may need to calm down a bit with respect to terminological differences. (And if you can't do that, then you may need to take a break and leave the discussion to those with cooler heads.) – ruakh May 30 '18 at 20:41
  • @ruakh http://softwaretestingfundamentals.com/unit-testing/ – Ewan May 30 '18 at 21:03
  • 2
    @Ewan I completely agree with that link. It doesn't support anything you've said, but does support claims from me & others, e.g. my `Order` example tests a "smallest testable part" (`total` method), it's independent/isolated from other tests (instances only scoped to that test), etc. Link says mocks can "assist" unit testing & give example of making tests independent by mocking a DB. That's a justified use-case for mocking. They don't talk about DI, "dependencies", decoupling every class from every other class, etc. In other words DI/mocks are techniques (of last resort), not goals/principles. – Warbo May 31 '18 at 12:14
  • @warbo look I'm not here to justify or explain the definition of unit test. If you think a test which tests 2 classes in one go when you could have a test per class is a 'unit' test then you disagree with the definition of unit test – Ewan May 31 '18 at 12:21
  • @Warbo but as a matter of fact, 99% of the DI I'm thinking of _is_ data and REST service mocks. – Ewan May 31 '18 at 12:24
  • 1
    @Ewan in Warbo's Order/Price example: if, instead of returning a `Price`, the total method just summed decimal price values (of a language built-in type), would you feel the same way? If not, then extending your apparent position to its logical conclusion, useful/practical unit tests _can't_ exist, because even if you are mocking dependencies, your test exercises both the local logic AND the mock itself. Assuming `Price.add()` is tested elsewhere, the example is testing the logic `Order` uses to come up with a total price, not the functionality of `Price.add()`, and that is OK. – Mr.Mindor May 31 '18 at 13:37
  • 5
    @Ewan Your link gives a good definition of unit testing. By that definition my `Order` example is a unit test: it's testing a method (the `total` method of `Order`). You're complaining that it's calling code from 2 classes, my response is *so what*? We're not testing "2 classes at once", we're testing the `total` method. We *should not care* how a method does its job: those are implementation details; testing them causes fragility, tight coupling, etc. We *only* care about the method's behaviour (return value and side effects), not the classes/modules/CPU registers/etc. it used in the process. – Warbo May 31 '18 at 14:12
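Warbo's `Order`/`Price` example, as described in this thread, can be sketched in Python (the class and method bodies are assumptions reconstructed from the discussion, not the original code):

```python
# Plain value class; its own behaviour would be covered by its own tests.
class Price:
    def __init__(self, amount):
        self.amount = amount

    def add(self, other):
        return Price(self.amount + other.amount)

class Order:
    def __init__(self, prices):
        self.prices = prices

    def total(self):
        result = Price(0)
        for p in self.prices:
            result = p.add(result)
        return result

# A unit test of the `total` method, in Warbo's sense: instances are
# scoped to the test, there are no side effects, and we assert only on
# the method's observable behaviour (its return value), not on which
# classes it happened to use internally -- so no mock of Price is needed.
order = Order([Price(3), Price(4)])
assert order.total().amount == 7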
  • @Warbo as I said at the time, your Order class is effectively a PriceFactory; technically yes, it fails the definition of a unit test, because you test the Price constructor as well as the Total() logic. But if it also used Price.Add() or something, then yes, you would want to test both Price.Add and Order.Total in their own unit tests – Ewan May 31 '18 at 15:18
  • @Mr.Mindor as I said at the time, "unless Price is a struct". The Order example is a straw man, because you would make it a unit test by injecting a PriceFactory, but Order is already effectively a PriceFactory. Use a real example which uses Price.Add() and it becomes obvious that a unit test of Order would want to separate out the Price.Add logic – Ewan May 31 '18 at 15:24
  • I would also add that I am not recommending testing internals. No Assert.Price.Add.WasCalled was mentioned. You want the Order test to fail when it has a bug, not when Price has a bug – Ewan May 31 '18 at 15:26
  • @Warbo perhaps you will explain how you would test a class which uses a third-party REST service without DI? What about a database? These are the classes which make up 99% of DI, not your academic edge cases – Ewan May 31 '18 at 15:32
  • 1
    @Ewan `Order` models an order, as found on an e-commerce site for example. It is not a `PriceFactory`, since prices are assigned by product managers not manufactured in factories, and in any case for e-commerce purposes we would abstract away the details of manufacturing behind some simple model like `Supplier`... I'm joking of course! Yet the joke has a ring of truth: code should model the domain (orders, products, customers, ...); when code models code (factories, controllers, executors, initialisers, ...) it's an indication that we're solving self-imposed problems. – Warbo May 31 '18 at 15:52
  • 3
    If the unit of isolation for a unit test is necessarily a single class, I had better inject a `ListFactory` and mock out that `List` that I'm newing up inside that method... – Ant P May 31 '18 at 15:55
  • @Ewan "technically yes it fails the definition of a unit test because you test the Price constructor as well as the Total() logic" Which definition? The one you linked to says *absolutely nothing* about methods calling other methods, classes, etc. (and quite right too!) At one point it mentions that tests may conflict if they share a database, which we can avoid by mocking. – Warbo May 31 '18 at 15:55
  • 3
    @Ewan REST calls are side-effects. DI is a good way to unit test code which mixes up logic with side-effects. Personally, I would prefer to disentangle the logic from the side-effects, i.e. have pure business logic *calculate* which calls to make, and return first-class values representing those calls. Unit testing is then trivial (just check the return value), no need for DI. – Warbo May 31 '18 at 16:06
  • @Ewan BTW if `Price` is buggy then I *absolutely* want my `Order` tests to fail! – Warbo May 31 '18 at 16:08
  • 3
    @Warbo also known as [functional core, imperative shell](https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell)... the idea that OOP means deeply nested sets of injected dependencies, all counterproductively tested in complete isolation from one another (or the *Tower of Mocks*, as I like to call it) is a relatively modern and very harmful phenomenon which is perpetuated largely by people who have never actually seen what proper OOP looks like. – Ant P May 31 '18 at 16:10
  • 1
    @AntP I agree. Worst case I've run across is [ServiceControllerServiceProvider](https://github.com/silexphp/Silex-Providers/blob/master/ServiceControllerServiceProvider.php). Also related to cultural appropriation from the [Kingdom of Nouns](https://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html) – Warbo May 31 '18 at 16:35
  • @Warbo masterful question avoidance. I assume you've worked it through and realised that, no matter how many extra helper classes you throw in, at some point you either have to send the request to the third party or not – Ewan May 31 '18 at 17:34
  • @Ewan Wasn't trying to avoid it; I gave 2 answers. The 1st is to use DI, since that's what it's for (avoiding I/O side effects); note this *doesn't* mean DI should be used for internal, unobservable, side-effect-free calls (like the `Order` and `Price` example). The 2nd is to make requests first-class values and return them, rather than executing them. An (overly) simple example is returning strings of HTTP or SQL: unit tests are easy, no DI/mocks/etc., just regex. Integration tests would check we can send/receive HTTP strings, or run SQL queries, *in general*, but that is independent of the business logic. – Warbo May 31 '18 at 17:51
  • @Warbo OK, "use DI". Thank you. Did we really need to go through all that? – Ewan May 31 '18 at 17:54
  • @Ewan I don't understand. Using DI to prevent tests from having side-effects is sensible. Using DI (or anything else) to prevent any class from ever using any other class is just nonsense. – Warbo May 31 '18 at 17:55
  • @Warbo I don't want to get into "is unit testing bad"; that's why it's the first sentence of my answer. _IF_ you think it's bad, you won't need DI to support it. _IF_ you think it's good, it's going to be 99% of your DI – Ewan May 31 '18 at 17:59
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/78265/discussion-between-warbo-and-ewan). – Warbo May 31 '18 at 17:59