37

According to Microsoft documentation, the Wikipedia article on the SOLID principles, and most IT architects, we must ensure that each class has only one responsibility. I would like to know why, because while everybody seems to agree with this rule, nobody seems to agree on the reasons for it.

Some cite better maintenance, some say it provides easier testing, makes the class more robust, or improves security. What is correct, and what does it actually mean? Why does it make maintenance better, testing easier, or the code more robust?

jmoreno
Bastien Vandamme
  • 2
    related: [Is SRP (Single Responsibility Principle) objective?](http://programmers.stackexchange.com/questions/99043/is-srp-single-responsibility-principle-objective) – gnat Apr 07 '14 at 08:50
  • 1
    I had a question you might find related too: [What is the real responsibility of a class?](http://programmers.stackexchange.com/questions/220230/what-is-the-real-responsibility-of-a-class) – Pierre Arlaud Apr 07 '14 at 12:36
  • Easier testing can be part of better maintenance. – JeffO Apr 07 '14 at 14:42
  • 4
    Can't all the reasons for single responsibility be correct? It does make maintenance easier. It does make testing easier. It does make the class more robust (in general). – Martin York Apr 07 '14 at 16:29
  • You also have to think about what the alternatives are. Organising code is always for the benefit of the (human) developer/team, processors don't care. How would you tell an inexperienced programmer which responsibilities should and shouldn't be merged? Would you want to deal with the results? – Will Apr 08 '14 at 11:26
  • 2
    For the same reason you have classes AT ALL. – Davor Ždralo Apr 08 '14 at 13:24
  • 2
    I have always had an issue with this statement. It's very difficult to define what a "single responsibility" is. A single responsibility can range any where from "verify that 1+1=2" to "maintain an accurate log of all money paid into and out of the corporate banking accounts". – Dave Nay Apr 08 '14 at 17:23
  • http://en.wikipedia.org/wiki/Cross-cutting_concern – Marco Apr 09 '14 at 09:01
  • @DaveNay I've always found that atomicity is a good acid test. If it is possible to refactor out two classes that do not know about each other, along with a class to compose them, then you don't have single responsibility. – ArTs Apr 09 '14 at 11:04
  • **`SRP` is currently viciously abused to convert `OOP` into `functional programming`.** Building a separate class for a single simple method that always acts against a single class type object is *doing it wrong* IMO. You put that stray method in the class where it belongs. **SRP is too often for my taste functional programming under the guise of OOP.** And smack a fancy acronym *(that nobody can actually define)* on it... to make it marketable. – CodeAngry Dec 18 '14 at 14:28

11 Answers

57

Modularity. Any decent language will give you the means to glue together pieces of code, but there's no general way to unglue a large piece of code without the programmer performing surgery on the source. By jamming a lot of tasks into one code construct, you rob yourself and others of the opportunity to combine its pieces in other ways, and introduce unnecessary dependencies that could cause changes to one piece to affect the others.

SRP is just as applicable to functions as it is to classes, but mainstream OOP languages are relatively poor at gluing functions together.
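A minimal sketch of that difference (the report/mailer example and all of its names are hypothetical, not from the answer): once the tasks are jammed into one construct, you cannot reuse one piece without dragging the other along, whereas separate pieces can be glued together freely.

```java
import java.util.List;

// Hypothetical "jammed together" version: the formatting cannot be reused
// without also dragging the SMTP details along.
class ReportMailer {
    String buildHtml(List<String> rows) {
        return "<ul><li>" + String.join("</li><li>", rows) + "</li></ul>";
    }
    void send(String html, String recipient) { /* SMTP code would live here */ }
    void run(List<String> rows, String recipient) { send(buildHtml(rows), recipient); }
}

// Kept as separate pieces, the same code can be combined in other ways.
interface ReportFormatter { String format(List<String> rows); }
interface ReportSender { void send(String body, String recipient); }

class HtmlFormatter implements ReportFormatter {
    public String format(List<String> rows) {
        return "<ul><li>" + String.join("</li><li>", rows) + "</li></ul>";
    }
}

class SmtpSender implements ReportSender {
    public void send(String body, String recipient) { /* SMTP code would live here */ }
}

// The "glue": composing the pieces requires no surgery on either of them.
class ReportJob {
    private final ReportFormatter formatter;
    private final ReportSender sender;

    ReportJob(ReportFormatter formatter, ReportSender sender) {
        this.formatter = formatter;
        this.sender = sender;
    }

    void run(List<String> rows, String recipient) {
        sender.send(formatter.format(rows), recipient);
    }
}
```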

Doval
  • 12
    +1 for mentioning functional programming as a safer way to compose abstractions. – logc Apr 07 '14 at 13:32
  • "but mainstream OOP languages are relatively poor at gluing functions together": maybe because OOP languages aren't concerned with functions, but messages. – Jeff Hubbard Apr 08 '14 at 22:40
  • @JeffHubbard I think you mean it's because they take after C/C++. There's nothing about objects that makes them mutually exclusive with functions. Hell, an object/interface is just a record/struct of functions. To pretend functions aren't important is unwarranted both in theory and practice. They can't be avoided anyways - you end up having to throw around "Runnables", "Factories", "Providers" and "Actions" or make use of the so-called Strategy and Template Method "patterns", and end up with a bunch of unnecessary boilerplate to get the same job done. – Doval Apr 08 '14 at 23:09
  • @Doval No, I really do mean that OOP is concerned with messages. That's not to say that a given language is better than or worse than a functional programming language at gluing functions together--just that *it's not the primary concern of the language*. – Jeff Hubbard Apr 09 '14 at 00:53
  • Basically, if your language is about making OOP easier/better/faster/stronger, then whether or not functions glue together nicely is not what you focus on. – Jeff Hubbard Apr 09 '14 at 00:56
30

Better maintenance, easier testing, and faster bug fixing are just (very pleasant) outcomes of applying SRP. The main reason (as Robert C. Martin puts it) is:

A class should have one, and only one, reason to change.

In other words, SRP improves change locality.

SRP also promotes DRY code. As long as we have classes that each have only one responsibility, we may reuse them anywhere we want. If we have a class that has two responsibilities, but we need only one of them and the second is interfering, we have two options (a small sketch of option 2 follows the list):

  1. Copy-paste the class into another one and maybe even create another multi-responsibility mutant (raising technical debt a little).
  2. Divide the class and make it as it should be in the first place, which may be expensive due to extensive usage of the original class.
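A small, hypothetical illustration of option 2 (the order/validator names are invented here, not from the answer): splitting the class lets either responsibility be reused on its own, so nothing needs to be copy-pasted.

```java
// Hypothetical "before": two responsibilities in one class. Reusing the
// validation rules (e.g. for a preview screen) drags the persistence code along.
class OrderService {
    boolean isValid(Order order) { return order.getTotal() > 0; }
    void save(Order order) { /* write to the database here */ }
}

// "After" (option 2): each class has a single reason to change and can be
// reused independently of the other.
class OrderValidator {
    boolean isValid(Order order) { return order.getTotal() > 0; }
}

class OrderRepository {
    void save(Order order) { /* write to the database here */ }
}

class Order {
    private final double total;
    Order(double total) { this.total = total; }
    double getTotal() { return total; }
}
```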
Maciej Chałapuk
21

It is easy to create code that fixes a particular problem. It is more complicated to create code that fixes that problem while allowing later changes to be made safely. SOLID provides a set of practices that make the code better.

As to which one is correct: All three of them. They are all benefits of using single responsibility and the reason that you should use it.

As to what they mean:

  • Better maintenance means that the code is easier to change and doesn't change as often. Because there is less code, and that code is focused on something specific, if you need to change something that is not related to the class, the class doesn't need to be changed. Furthermore, when you do need to change the class, as long as you don't need to change the public interface, you only need to worry about that class and nothing else.
  • Easier testing means fewer tests and less setup. There are not as many moving parts in the class, so the number of possible failures is smaller, and therefore there are fewer cases you need to test. There will also be fewer private fields/members to set up (see the test sketch after this list).
  • Because of the two points above, you get a class that changes less and fails less, and is therefore more robust.
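For instance, a rough JUnit-style sketch (the class and test names are invented for illustration): a single-responsibility class needs almost no setup to test.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical single-responsibility class: it only computes prices.
class PriceCalculator {
    double totalWithTax(double net, double taxRate) {
        return net * (1 + taxRate);
    }
}

class PriceCalculatorTest {
    @Test
    void addsTaxToTheNetPrice() {
        // No database, no mocks, no private state to prepare: one object, one assertion.
        assertEquals(110.0, new PriceCalculator().totalWithTax(100.0, 0.10), 0.0001);
    }
}
```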

Do try to write code following the principle for a while, and revisit that code later on to make some changes. You will see the massive advantage it provides.

You do the same to each class, and end up with more classes, all complying with SRP. It makes the structure more complex from the point of view of connections, but the simplicity of each class justifies it.

Miyamoto Akira
  • I don't think the points made here are justified. In particular, how do you avoid zero-sum games where simplifying a class causes other classes to become more complicated, with no net effect on the code base as a whole? I believe @Doval's [answer](http://programmers.stackexchange.com/a/235120/39685) does address this issue. –  Apr 07 '14 at 17:26
  • 2
    You do the same to all classes. You end up with more classes, all complying with SRP. It makes the structure more complex from the point of view of connections. But the simplicity of each class justifies it. I like @Doval's answer, though. – Miyamoto Akira Apr 07 '14 at 20:52
  • I think that you should add that to your answer. –  Apr 07 '14 at 23:53
4

Here are the arguments that, in my view, support the claim that the Single Responsibility Principle is a good practice. I also provide links to further literature, where you can read even more detailed reasonings -- and more eloquent than mine:

  • Better maintenance: ideally, whenever a piece of functionality of the system has to change, there will be one and only one class that has to be changed. A clear mapping between classes and responsibilities means that any developer involved in the project can identify which class that is. (As @MaciejChałapuk has noted, see Robert C. Martin, "Clean Code".)

  • Easier testing: ideally, classes should have as minimal a public interface as possible, and tests should address only this public interface. If you cannot test with clarity, because many parts of your class are private, that is a clear sign that your class has too many responsibilities and that you should split it into smaller classes (see the sketch after this list). Please notice this also applies to languages where there are no 'public' or 'private' class members; having a small public interface means that it is very clear to client code which parts of the class it is supposed to use. (See Kent Beck, "Test-Driven Development" for more details.)

  • Robust code: your code will not fail more or less often just because it is well written; but, like all code, its ultimate goal is not to communicate with the machine but with fellow developers (see Kent Beck, "Implementation Patterns", Chapter 1). A clear codebase is easier to reason about, so fewer bugs will be introduced, and less time will pass between discovering a bug and fixing it.
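A small, hypothetical before/after sketch of the testing point above (the invoice names are mine, not from the answer): logic hidden behind private members can only be exercised indirectly, while extracting it yields a tiny public interface that tests and client code can target directly.

```java
// Hypothetical "before": the header logic is private, so it can only be
// exercised indirectly through print(), which also touches the console.
class InvoicePrinter {
    public void print(Invoice invoice) {
        System.out.println(header(invoice)); // plus lines, totals, footers, ...
    }
    private String header(Invoice invoice) { return "Invoice #" + invoice.getNumber(); }
}

// "After": a small class with a one-method public interface.
class InvoiceHeaderFormatter {
    public String header(Invoice invoice) { return "Invoice #" + invoice.getNumber(); }
}

class Invoice {
    private final int number;
    Invoice(int number) { this.number = number; }
    int getNumber() { return number; }
}
```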

logc
  • If something has to change for business reasons, believe me, with SRP you will still have to modify more than one class. Saying that a class changes for only one reason is not the same as saying that a change will only impact one class. – Bastien Vandamme Apr 16 '14 at 17:59
  • @B413: I did not mean that *any* change will imply a single change to a single class. Many parts of the system could need changes to comply with a single business change. But maybe you have a point and I should have written "whenever a functionality of the system has to change". – logc Apr 16 '14 at 21:16
3

There are a number of reasons, but the one I like best is the approach used by many early UNIX programs: do one thing well. It is hard enough to do that with one thing, and it becomes increasingly difficult the more things you try to do.

Another reason is to limit and control side effects. I loved my combination coffee maker / door opener. Unfortunately, the coffee usually overflowed when I had visitors. I also forgot to close the door after making coffee the other day, and someone stole it.

From a psychological standpoint, you can only keep track of a few things at a time. The general estimate is seven, plus or minus two. If a class does multiple things, you need to keep track of all of them at once. This reduces your ability to track what you are doing. If a class does three things, and you want only one of them, you may exhaust your capacity to keep track of things before you actually do anything with the class.

Doing multiple things increases code complexity. Beyond the simplest code, increasing complexity increases the likelihood of bugs. From this standpoint you want classes to be as simple as possible.

Testing a class that does one thing is much simpler. You don't have to verify, in every test, whether the second thing the class does did or did not happen. You also don't have to fix the broken conditions and retest when one of those tests fails.
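Continuing the (tongue-in-cheek) appliance analogy with a purely hypothetical sketch: when two behaviours share one class, every test of the first must also check the second; separated, neither test cares about the other.

```java
// Hypothetical combined appliance: every brewing test must also verify that
// the door did not swing open as a side effect.
class CoffeeMakerDoorOpener {
    private boolean doorOpen = false;
    void brew() {
        // brew the coffee...
        doorOpen = true; // ...and, as an unwanted side effect, open the door
    }
    boolean isDoorOpen() { return doorOpen; }
}

// Separated, each class can be tested (and can fail) entirely on its own.
class CoffeeMaker {
    void brew() { /* brew the coffee, nothing else */ }
}

class DoorOpener {
    private boolean open = false;
    void open() { open = true; }
    boolean isOpen() { return open; }
}
```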

BillThor
2

Because software is organic. Requirements change constantly, so you have to be able to manipulate components with as little headache as possible. By not following SOLID principles, you may end up with a code base that is set in concrete.

Imagine a house with a load-bearing concrete wall. What happens when you take this wall out without any support? The house will probably collapse. In software we don't want this, so we structure applications in such a way that you can easily move/replace/modify components without causing lots of damage.
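As a rough sketch (the payment example and its names are hypothetical), the "non-load-bearing wall" in code is usually an interface: callers depend on the contract, so the concrete component behind it can be swapped without anything collapsing.

```java
// The rest of the application depends only on this contract.
interface PaymentGateway {
    void charge(String account, double amount);
}

class CheckoutService {
    private final PaymentGateway gateway;
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
    void checkout(String account, double total) { gateway.charge(account, total); }
}

// Swapping the provider is like replacing a partition wall, not demolishing a
// load-bearing one: CheckoutService never changes.
class LegacyBankGateway implements PaymentGateway {
    public void charge(String account, double amount) { /* call the old banking API */ }
}

class NewProviderGateway implements PaymentGateway {
    public void charge(String account, double amount) { /* call the new provider */ }
}
```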

CodeART
1

Especially with such an important principle as Single Responsibility, I would personally expect there to be many reasons why people adopt it.

Some of those reasons might be:

  • Maintenance - SRP ensures that changing one responsibility doesn't affect other responsibilities, making maintenance simpler. That is because if each class has only one responsibility, changes to that responsibility are isolated in that class and away from all the others.
  • Testing - If a class has one responsibility, it is much easier to figure out how to test that responsibility. If a class has multiple responsibilities, you have to make sure you are testing the correct one and that the test is not affected by the other responsibilities the class has.

Also note that SRP comes as part of the bundle of SOLID principles. Abiding by SRP and ignoring the rest is just as bad as not doing SRP in the first place. So you should not evaluate SRP by itself, but in the context of all the SOLID principles.

Euphoric
  • `... test is not affected by other responsibilities the class has`, could you please elaborate on this one? – Mahdi Apr 07 '14 at 10:47
1

I follow the thought: 1 Class = 1 Job.

Using a physiology analogy: Motor (Neural System), Respiration (Lungs), Digestive (Stomach), Olfactory (Observation), etc. Each of these will have a subset of controllers, but each has only one responsibility, whether it's to manage the way its respective subsystem works or whether it is an end-point subsystem that performs only one task, such as lifting a finger or growing a hair follicle.

Don't be confused by the fact that a class may be acting as a manager instead of a worker. Some workers eventually get promoted to manager when the work they were performing has become too complicated for one process to handle by itself.

The most complicated part, in my experience, is knowing when to designate a class as an Operator, Supervisor, or Manager process. Regardless, you will need to observe and denote its functionality as one responsibility (Operator, Supervisor, or Manager).

When a class/object performs more than one of these types of roles, you will find that the overall process starts having performance issues or process bottlenecks.
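A minimal sketch of the manager/worker split using the same analogy (the class names are invented here, not from the answer): the manager keeps a single public entry point and delegates the actual work to its worker.

```java
// Worker: a single low-level task.
class Lungs {
    void exchangeGases() { /* move oxygen in, carbon dioxide out */ }
}

// Manager: one public entry point; it coordinates its subsystem but does no
// low-level work itself.
class Person {
    private final Lungs lungs = new Lungs();

    void breathe() {
        lungs.exchangeGases(); // the work is delegated, not performed here
    }
}
```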

GoldBishop
  • Even if the implementation of processing atmospheric oxygen into hemoglobin should be handled be a `Lung` class, I would posit that an instance of `Person` should still `Breathe`, and thus the `Person` class should contain enough logic to, at minimum, delegate the responsibilities associated with that method. I'd further suggest that a forest of interconnected entities which are only accessible through a common owner is often easier to reason about than a forest which has multiple independent access points. – supercat Apr 07 '14 at 19:43
  • Yes, in that case the Lung is the Manager, although the Villi would be a Supervisor, and the process of Respiration would be the Worker class that transfers the air particles to the bloodstream. – GoldBishop Apr 08 '14 at 12:14
  • @GoldBishop: Perhaps the right view would be to say that a `Person` class has as its job the delegation of all the functionality associated with a monstrously-over-complicated "real-world-human-being" entity, including the association of its various parts. Having an entity do 500 things is ugly, but if that's the way the real-world system being modeled works, having a class which delegates 500 functions while retaining one identity may be better than having to do everything with disjoint pieces. – supercat Apr 08 '14 at 18:43
  • @supercat Or you could simply compartmentalize the whole thing and have 1 class with the necessary Supervisors over the sub-processes. That way, you could (in theory) have each process separate from the base class `Person` but still reporting up the chain on success/failure, without necessarily affecting the others. 500 functions (although set as an example) would be overboard and unsupportable; I try to keep that type of functionality derived and modular. – GoldBishop Apr 08 '14 at 19:17
  • @GoldBishop: I wish there were some way to declare an "ephemeral" type, such that outside code could invoke members upon it or pass it as a parameter to the type's own methods, but nothing else. From a code organization perspective, subdividing classes makes sense, and even having outside code invoke methods one level down (e.g. `Fred.vision.lookAt(George)` makes sense, but allowing code to say `someEyes = Fred.vision; someTubes = Fred.digestion` seems icky since it obscures the relationship between `someEyes` and `someTubes`. – supercat Apr 08 '14 at 19:31
  • @supercat: Trudat, and that is why you would have a `Persona.Gastro.Stomach.digest()` type structure. Although it might make sense to say `Persona.digest()`, the question I would have, visually, is what is performing the `digest`? As it is not tied to a service or controller, how would you know, without getting into the function/method, that `digest` was actually associated with `Gastro.Stomach.....`? This would allow you the flexibility to only load the functionality at the time of need, instead of loading all functionalities at once, even if you only need 1, maybe 3. – GoldBishop Apr 08 '14 at 19:37
  • @GoldBishop: If ephemeral types were supported, and would limit the lifetime of `Fred.vision` to a particular scope, then `fred.vision.lookAt(George)` would seem natural, but having `fred.vision` return a reference which identifies Fred (do `someEyes.showPizza()` and Fred's digestion will be the one that salivates) but isn't a reference to Fred seems rather icky. – supercat Apr 08 '14 at 19:44
  • @GoldBishop: If everything is handled by lower levels, how do you handle the fact that if Fred is deaf, an act like saying hello which most instances of `Person` would handle using vocal chords might instead be accomplished using sign language? The more things are delegated to lower levels, the less ability there is to handle aspects of the model which don't fit normal encapsulation boundaries. – supercat Apr 08 '14 at 20:13
  • @supercat: `Person.Ability` with a `PersonAbility` enum, ideally a concatenatable set of flag values `0,1,2,4,8,etc`, such as `PersonAbility.Hearing`, `PersonAbility.Speech`, `PersonaAbility.Walk`, etc. It depends on how granular you want to be. But putting 100% of all functionality at the Person level would allow for too much code and produce a realized class file that is 100k lines of code. Whereas if you compartmentalized the functionality, you would have separation of responsibility and supportability. – GoldBishop Apr 08 '14 at 20:47
  • @supercat: in regards to Fred's eating habits, you're going hypothetical and off track. You can rationalize any code design you want, but the question is: will others be able to rationalize your design after you are gone? I go for sustained support over an immediate design of convenience. Maybe you have never had to support someone who designs like your hypothetical situation(s), but I have. It's a good paycheck, because I assist the client in turning that kind of theoretical design into a viable support model. – GoldBishop Apr 08 '14 at 20:51
  • @GoldBishop: Turning more toward a real-world example, consider `IEnumerable`. I would consider it rather anemic, and would favor including a few more members like an `EnumerableCharacteristics` property (of a flags-enum type), as well as `Snapshot()`, `ToArray()`, and `Count()` methods. Given any instance of `IEnumerable` which doesn't represents a sufficiently-small set of items, one could do the above, but seldom would those actions best be performed using only the `IEnumerable` interface. I would posit that each class which implements an enumerable interface... – supercat Apr 08 '14 at 21:06
  • ...should have a responsibility for providing a behavior for `Snapshot()`, etc., indicating that such a method cannot be meaningfully performed (e.g. because the instance encapsulates an endless random stream), or accepting a "default" behavior. Even if many `IEnumerable` implementations would chain to default behavior, having such methods chain through `IEnumerable` would IMHO be better than declaring that they're "not its responsibility". – supercat Apr 08 '14 at 21:10
  • @supercat: Then don't use it and reinvent the wheel. I would do `EnumerableCharacteristics:IEnumerable` and then implement the associated logic required by the interface. That way, when I deploy my EF context, I don't have to worry about iteration logic; the EF framework automatically performs the reflective action. `IEnumerable` allows for a "however you want to implement it" approach: as long as you sustain the standard methods, properties, etc. from the interface, it will always execute the same. – GoldBishop Apr 08 '14 at 23:44
1

The best way to understand the importance of these principles is to have the need.

When I was a novice programmer, I didn't give much thought to design; in fact, I didn't even know design patterns existed. As my programs grew, changing one thing meant changing many other things. It was hard to track down bugs, and the code was huge and repetitive. There wasn't much of an object hierarchy; things were all over the place. Adding something new, or removing something old, would bring up errors in other parts of the program. Go figure.

On small projects it might not matter, but on big projects things can be very nightmarish. Later, when I came across the concept of design patterns, I said to myself, "oh yeah, doing this would've made things so much easier back then".

You really can't understand the importance of design patterns until the need arises. I respect the patterns because, from experience, I can say they make code maintenance easy and the code robust.

However, just like you, I'm still uncertain about the "easy testing", because I haven't had the need to get into unit testing yet.

harsimranb
  • Patterns are somewhat connected to SRP, but SRP is not required when applying design patterns. It's possible to use patterns and completely ignore SRP. We use patterns to address non-functional requirements, but using patterns is not required in any program. Principles are essential in making development less painful and (IMO) should be a habit. – Maciej Chałapuk Apr 08 '14 at 08:47
  • I completely agree with you. SRP, to some extent, enforces clean object design and hierarchy. It is a good mentality for a developer to get into, as it gets the developer to start thinking in terms of objects and how they should be. Personally, it has had a really good impact on my development as well. – harsimranb Apr 08 '14 at 16:10
1

The answer is, as others have pointed out, that all of them are correct, and they all feed into each other: easier testing makes maintenance easier, which makes the code more robust, which makes maintenance easier, and so forth...

All of this boils down to a key principle -- code should be as small as possible and do as little as necessary to get the job done. This applies to an application, a file, or a class just as much as to a function. The more things a piece of code does, the harder it is to understand, maintain, extend, or test.

I think it can be summed up in one word: scope. Pay close attention to the scope of artifacts; the fewer things in scope at any particular point in an application, the better.

Expanded scope = more complexity = more ways for things to go wrong.
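One hypothetical way to picture expanded scope (the names below are invented for illustration): a method handed a whole settings object has far more in scope, and far more ways to go wrong, than one handed only the value it actually needs.

```java
class ScopeExample {
    // Wide scope: the whole settings object is reachable, so any of its fields
    // can quietly become a dependency of this method.
    static String greetingWide(Settings settings) {
        return "Hello, " + settings.userName;
    }

    // Narrow scope: only the one value that matters is visible here.
    static String greetingNarrow(String userName) {
        return "Hello, " + userName;
    }
}

class Settings {
    String userName;
    String databaseUrl;
    String theme;
    // ...dozens of other unrelated fields
}
```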

jmoreno
1

If a class is too big, it becomes hard to maintain, test, and understand; other answers have covered this well.

It is possible for a class to have more than one responsibility without problems, but you soon hit problems with overly complex classes.

Having a simple rule of "only one responsibility" just makes it easier to know when you need a new class.

However, defining "responsibility" is hard; it does not mean "do everything the application spec says". The real skill is in knowing how to break the problem down into small units of "responsibility".

Ian