30

How can I tell that my software has too much abstraction and too many design patterns? Or, the other way around, how do I know if it should have more of them?

The developers I work with approach these points differently.

Some abstract every little function, use design patterns wherever possible, and avoid redundancy at any cost.

Others, including me, try to be more pragmatic and write code that doesn't fit every design pattern perfectly, but is much faster to understand because less abstraction is applied.

I know this is a trade-off. How can I tell when enough abstraction has been put into the project, and how do I know when it needs more?

For example, say a generic caching layer is written using Memcache. Do we really need Memcache, MemcacheAdapter, MemcacheInterface, AbstractCache, CacheFactory, CacheConnector, ... or is it easier to maintain, and still good code, when using only half of those classes?
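For illustration, here is a minimal sketch (Java; the names are hypothetical and the Memcache client is stubbed with a map) of what the "half of those classes" version might look like: one interface for the concept and one concrete implementation, with nothing more until a second backend actually appears.

```java
// A lean version of the caching layer: one interface for the concept,
// one concrete implementation, and nothing more until a second backend exists.
// Names are hypothetical; the Memcache client is stubbed with a map.
import java.util.HashMap;
import java.util.Map;

interface Cache {
    void put(String key, String value);
    String get(String key);
}

class MemcacheCache implements Cache {
    // Stand-in for a real Memcache client connection.
    private final Map<String, String> store = new HashMap<>();

    public void put(String key, String value) { store.put(key, value); }
    public String get(String key) { return store.get(key); }
}
```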

Found this on Twitter:

[Tweet screenshot]

(https://twitter.com/rawkode/status/875318003306565633)

Daniel W.
  • 535
  • 4
  • 9
  • Possible duplicate of [How to determine the levels of abstraction](https://softwareengineering.stackexchange.com/questions/110933/how-to-determine-the-levels-of-abstraction) – gnat Jun 22 '17 at 11:49
  • see also: [How would you know if you've written readable and easily maintainable code?](https://softwareengineering.stackexchange.com/a/141010/31260) – gnat Jun 22 '17 at 11:49
  • Sorry but the alleged duplicate does not answer my question. – Daniel W. Jun 22 '17 at 11:51
  • 3
    What are you writing, a library, a framework or an application ? – Walfrat Jun 22 '17 at 12:10
  • 58
    If you're treating design patterns as things you pull out of a bucket and use to assemble programs, you're using too many. – Blrfl Jun 22 '17 at 12:27
  • 1
    Related question in sister site: https://stackoverflow.com/questions/2668355/how-much-abstraction-is-too-much – Tulains Córdova Jun 22 '17 at 12:28
  • 1
    I think my team uses too much abstraction and too many design patterns in general (applications, libraries, ...); there is only very little benefit, but it's hard to get into the logic and hard to make changes. I'm going to discuss this with my team, and I'm just taking some feedback from you. – Daniel W. Jun 22 '17 at 12:41
  • Hasn't anybody seen interfaces as design tools? I mean, creating them to give junior programmers a starting point. – Tulains Córdova Jun 22 '17 at 12:44
  • 22
    ["A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." *Antoine de Saint-Exupery*](https://www.brainyquote.com/quotes/quotes/a/antoinedes121910.html) – Doc Brown Jun 22 '17 at 14:46
  • 5
    You could think of design patterns like speech patterns people use. Because in a sense, idioms, metaphors, etc. are all design patterns. If you're using an idiom every sentence... that's probably too often. But they can help clarify thoughts and aid comprehension of what would otherwise be a long wall of prose. There's not really a right way to answer "how often should I use metaphors?"--it's up to the judgment of the author. – Prime Jun 22 '17 at 18:15
  • 9
    There is no way a single SE answer could possibly adequately cover this topic. This takes literally years of experience and mentoring to get a handle on. This is clearly Too Broad. – jpmc26 Jun 22 '17 at 18:27
  • @jpmc26 no I think [Arseni Mourzenko](https://softwareengineering.stackexchange.com/a/351410/131624) nailed it. Remember, just because they asked which screwdriver to use on a nail doesn't mean there is no good answer. [The answer is a hammer](https://softwareengineering.meta.stackexchange.com/a/8499/131624). – candied_orange Jun 22 '17 at 19:04
  • 1
    The tradeoff implicit in the tweet is that abstractions generally make the *whole program* easier to understand at the cost of making a dev unfamiliar with the codebase unable to ascertain at a glance what a given piece of the code is doing. Not using abstractions has the opposite problem/benefit: I can read any given snippet of C code with ease, understanding a large C project might take months or longer. – Jared Smith Jun 22 '17 at 19:25
  • 2
    As many as it takes for you to be able to reason about the code easily, but no more, and certainly not so many that you spend more time understanding the abstractions than the application logic. – anaximander Jun 22 '17 at 20:10
  • Can you provide the Twitter link? – Peter Mortensen Jun 22 '17 at 20:24
  • OK, I found it (2017-06-15): [https://twitter.com/rawkode/status/875318003306565633](https://twitter.com/rawkode/status/875318003306565633) – Peter Mortensen Jun 22 '17 at 20:30
  • 2
    @CandiedOrange There is literally a counterexample of where those rules fail in the first comment on that answer. This isn't something you can come up with neat little rules for. *It depends* on *way* too many factors about the specific piece of code. It's something you develop an eye and an instinct for through writing and maintaining code and seeing what worked and what didn't over years. Maybe that development can be sped up by good mentoring. The only general statement you can really make about it is a phrase that has become something of my motto for coding: *there are no silver bullets*. – jpmc26 Jun 22 '17 at 22:02
  • 1
    @CandiedOrange Also, this question isn't only about using patterns. (Or at least, no adequate answer would limit itself to only them.) It just uses application of patterns as a common example of when abstracting seems to go horribly wrong. An abstraction can be as simple as moving a block of code into a separate static function. It also mentions using abstraction to achieve DRY specifically. Choosing the right kind of abstraction and the right thing to abstract is itself a way of reducing the number of abstractions. – jpmc26 Jun 22 '17 at 22:11
  • 1
    *Every* programming problem can be solved by adding more abstraction layers, *except one*. Do you know what the exception is? – Eric Lippert Jun 22 '17 at 22:44
  • The blog post [Two is Too Many](http://www.codesimplicity.com/post/two-is-too-many/) (on codesimplicity.com) is relevant; it almost reads like an answer to this very question. (I'm not affiliated, although the author, a senior Google engineer, is a friend of mine.) – Wildcard Jun 22 '17 at 23:46
  • @jpmc26 what rules? Who said anything about rules? We're talking about choosing good abstractions. There are no rules for that. There are goals and guidelines but I don't see anyone offering up rules. – candied_orange Jun 23 '17 at 00:10
  • 1
    @CandiedOrange The rules *in the answer you linked*. Don't mince words with me to avoid responding to the substantive point I made. Aside from the two little rules about "implementation" and "interface," the rest of that answer amounts to, "It depends and we can't really give an answer." And since the rules it proposes don't work, it's basically a non-answer. Hence: Too Broad. – jpmc26 Jun 23 '17 at 00:12
  • @jpmc26 my [link](https://softwareengineering.meta.stackexchange.com/a/8499/131624) doesn't have any rules. Its first comment isn't a counterexample for anything; it's a difference of opinion about how to close. If you want me to stop playing dumb, you need to do a better job of enlightening me. If you want to debate how this site is moderated, I suggest you post a question on meta. – candied_orange Jun 23 '17 at 00:17
  • @CandiedOrange You linked *to an answer on this question*. Please review your comments if you can't remember what you wrote. Also, my *second* response above (the "Also, this question isn't only about using patterns..." one) discusses the lack of applicability of the meta post you linked. – jpmc26 Jun 23 '17 at 00:18
  • @jpmc26 I've linked to exactly the same thing twice now. I've tested the links and completely fail to make sense of your complaints. If you want, we can do this in chat. The only thing you've made clear is that this debate doesn't belong here. – candied_orange Jun 23 '17 at 00:22
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/60939/discussion-between-candiedorange-and-jpmc26). – candied_orange Jun 23 '17 at 00:24
  • @TulainsCórdova The idea of "adding features to the software design in order to create work for junior programmers" seems to be "not even wrong" IMO. But I know software does get designed that way in the real world! – alephzero Jun 23 '17 at 01:13
  • 5
    Follow the basic design principle in the aviation industry: "Simplify and add more lightness". When you discover that the best way to fix a bug is simply to delete the code containing the bug, because it still doesn't do anything useful even if it was bug-free, you are beginning to get the design right! – alephzero Jun 23 '17 at 01:15
  • @user469104 Am I picking up a sly reference to "Hitchhiker's Guide to the Galaxy" ?:) And given the ultimate unanswerability of the question, that metaphor ain't a bad answer. – John Forkosh Jun 23 '17 at 01:20
  • @DocBrown that's the man who wrote Le Petit Prince; a wise man he was. – Daniel W. Jun 23 '17 at 07:02
  • I like this question, but perhaps it should be reworded to "how do you tell..." For me, the answer is simple. For each main functionality, group the direct duplicates and near-duplicates and make sure each is done following the same design, ideally sharing a lot of common classes (or descendants of common classes) to get the job done. For the converse, inventory all the design patterns and make sure each is repeated at least once; if not, be really sure that future expansion will repeat that pattern. – Thomas Carlisle Jun 24 '17 at 14:27
  • @Blrfl "If you're treating design patterns as things you pull out of a bucket and use to assemble programs, you're using too many." - Maybe I'm just being grumpy, but I think that is quite common. Probably even more common thnn actual understanding of architecture and the proper use of design patters. – Alex Jun 26 '17 at 14:20
  • @Alex I feel exactly the same way and make a similar comment in questions where it looks like that's what happening. There's exactly one use for design patterns, which is as a shorthand for describing what you did after the fact. – Blrfl Jun 26 '17 at 15:42

11 Answers

50

How many ingredients are necessary for a meal? How many parts do you need to build a vehicle?

You know that you have too little abstraction when a little implementation change leads to a cascade of changes all over your code. Proper abstractions would help isolate the part of the code that needs to be changed.

You know that you have too much abstraction when a little interface change leads to a cascade of changes all over your code, at different levels. Instead of changing the interface between two classes, you find yourself modifying dozens of classes and interfaces just to add a property or change the type of a method argument.
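To make the first failure mode concrete, here is a hedged sketch (Java; the names are hypothetical and the backend is stubbed): because the client depends only on an interface, swapping Memcache for Redis is an implementation change confined to a single class.

```java
// Client code depends only on the abstraction, so replacing Memcache
// with Redis (an implementation change) touches exactly one class.
interface Cache {
    String get(String key);
    void put(String key, String value);
}

class RedisCache implements Cache {
    // Hypothetical stand-in; a real version would wrap a Redis client.
    private final java.util.Map<String, String> store = new java.util.HashMap<>();

    public String get(String key) { return store.get(key); }
    public void put(String key, String value) { store.put(key, value); }
}

class UserGreeter {
    private final Cache cache; // never mentions Memcache or Redis

    UserGreeter(Cache cache) { this.cache = cache; }

    String greeting(String userId) {
        String name = cache.get(userId);
        return name == null ? "Hello, stranger" : "Hello, " + name;
    }
}
```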

Aside from that, there is really no way to answer the question by giving a number. The number of abstractions won't be the same from project to project, from one language to another, or even from one developer to another.

Arseni Mourzenko
  • 134,780
  • 31
  • 343
  • 513
  • 28
    If you have only two interfaces and hundreds of classes implementing them, changing the interfaces would lead to a cascade of changes, but that doesn't mean there is too much abstraction, since you only have two interfaces. – Tulains Córdova Jun 22 '17 at 12:26
  • The number of abstractions won't even be the same for different parts of the same project! – T. Sar Jun 22 '17 at 13:08
  • Hint: changing from Memcache to another caching mechanism (Redis?) is an *implementation* change. – Ogre Psalm33 Jun 22 '17 at 20:32
  • Your two rules (guidelines, whatever you want to call them) don't work, as demonstrated by Tulains. They are also woefully incomplete even if they did. The rest of the post is a non-answer, saying little more than we can't provide a reasonably scoped answer. -1 – jpmc26 Jun 23 '17 at 00:19
  • I'd argue that in the case of two interfaces and hundreds of classes implementing them, you quite possibly *have* overstretched your abstractions. I've certainly seen this in projects that re-use a very vague abstraction in many places (`interface Doer {void prepare(); void doIt();}`), and it becomes painful to refactor when this abstraction no longer fits. The key part of the answer is the test applies *when* an abstraction has to change - if it never does, it never causes pain. – James_pic Jun 23 '17 at 07:47
24

The problem with design patterns can be summed up with the proverb "when you're holding a hammer, everything looks like a nail." Applying a design pattern does not by itself improve your program. In fact, I would argue that you're making a more complicated program whenever you add a design pattern. The question is whether you're making good use of the design pattern, and this is the heart of the question, "When do we have too much abstraction?"

If you're creating an interface and an abstract super class for one single implementation, you've added two additional components to your project that are superfluous and unnecessary. The point of providing an interface is to be able to handle it uniformly throughout your program without knowing how it works. The point of an abstract super class is to provide underlying behavior for implementations. If you only have one implementation, you get all the complication interfaces and abstract classes bring and none of the advantages.
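A hedged sketch of that smell (Java, hypothetical names): three type declarations where one would do, with no second implementation in sight.

```java
// Interface + abstract base for a single implementation: two extra
// type declarations that buy nothing until a second implementation exists.
interface GreeterInterface { String greet(String name); }
abstract class AbstractGreeter implements GreeterInterface { }
class DefaultGreeter extends AbstractGreeter {
    public String greet(String name) { return "Hello, " + name; }
}

// The same behavior with nothing superfluous:
class Greeter {
    String greet(String name) { return "Hello, " + name; }
}
```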

Similarly, if you're using a Factory pattern and you find yourself casting the result in order to use functionality only available in the concrete subclass, the Factory pattern isn't adding any advantage to your code. You've only added an additional class to your project that could have been avoided.
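For instance, in the following hypothetical Java sketch, the downcast gives away that the factory isn't earning its keep:

```java
// If callers must downcast the factory's product to reach the
// functionality they actually need, the factory adds no value.
interface Shape { double area(); }

class Circle implements Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
    double circumference() { return 2 * Math.PI * radius; }
}

class ShapeFactory {
    static Shape createCircle(double radius) { return new Circle(radius); }
}

class Client {
    double rimLength() {
        // The cast betrays that we depend on the concrete class anyway:
        Circle c = (Circle) ShapeFactory.createCircle(2.0);
        return c.circumference();
    }
}
```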

TL;DR My point is that the objective of abstraction is not abstraction in and of itself. It serves a very practical purpose in your program, and before you decide to use a design pattern or create an interface, you should ask yourself whether doing so makes the program easier to understand despite the additional complexity, or more robust despite the additional complexity (preferably both). If the answer is no or maybe, then take a couple of minutes to consider why you wanted to do it and whether it could be done in a better way without adding abstraction to your code.

Neil
  • 22,670
  • 45
  • 76
  • The hammer analogy would be the problem of only knowing one design pattern. Design patterns should create a whole toolkit to select from and apply where appropriate. You don't select a sledgehammer to crack a nut. – Pete Kirkham Jun 23 '17 at 10:39
  • @PeteKirkham True, but even an entire array of design patterns at your disposal may not properly be suited for a particular problem. If a sledgehammer isn't best suited to crack a nut, and neither is a screwdriver, and neither is a tape measure because you're missing the hammer, that doesn't make a sledgehammer the right pick for the job, it just makes it the most appropriate of the tools at your disposal. That doesn't mean you should be using a sledgehammer to crack a nut though. Heck, if we're being frank, what you'd really need is a nutcracker, not a hammer.. – Neil Jun 23 '17 at 11:38
  • I want an army of trained squirrels for my nut cracking. – icc97 Jun 23 '17 at 14:59
6

TL;DR

I don't think there is a "necessary" number of levels of abstraction below which there is too little or above which there is too much. As in graphic design, good OOP design should be invisible and taken for granted. Bad design always sticks out like a sore thumb.

Long answer

Most probably you will never know how many levels of abstraction you are building upon.

Most levels of abstraction are invisible to us and we take them for granted.

That reasoning leads me to this conclusion:

One of the main purposes of abstraction is to save the programmer from having to keep all the workings of the system in mind all the time. If the design forces you to know too much about the system in order to add something, then there's probably too little abstraction. But bad abstraction (poor design, anemic design, or over-engineering) can also force you to know too much in order to add something. At one extreme we have a design based on a god class or a bunch of DTOs; at the other we have OR/persistence frameworks that make you jump through countless hoops to achieve a hello world. Both cases force you to know too much.

Bad abstraction follows a bell curve in the sense that once you're past a sweet spot, it begins to get in the way. Good abstraction, on the other hand, is invisible, and there can't be too much of it because you don't notice it's there. Think of how many layers upon layers of APIs, network protocols, libraries, OS libraries, file systems, hardware layers, etc. your application is built upon and takes for granted.

Another major purpose of abstraction is compartmentalization, so errors don't spread beyond a certain area, much as a double hull and separate tanks prevent a ship from flooding completely when part of the hull is breached. If modifications to the code end up creating bugs in seemingly unrelated areas, then chances are there is too little abstraction.

Tulains Córdova
  • 39,201
  • 12
  • 97
  • 154
  • 2
    "Most probably you will never know how many level of abstractions you are building upon." – In fact, the whole point of an abstraction is that you don't know how it is implemented, IOW you don't (and *cannot*) know how many levels of abstractions it hides. – Jörg W Mittag Jun 22 '17 at 14:39
4

Design patterns are simply common solutions to problems. It's important to know design patterns, but they are just symptoms of well-designed code (good code can still be devoid of the Gang of Four's set of design patterns), not the cause.

Abstractions are like fences. They help separate regions of your program into testable and interchangeable chunks (requirements for making non-fragile, non-rigid code). And much like fences (see the sketch after this list):

  • You want abstractions at natural interface points to minimize their size.

  • You don't want to change them.

  • You want them to separate things which can be independent.

  • Having one in the wrong place is worse than not having it.

  • They should not have big leaks.
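As a hedged illustration (Java, hypothetical names): a small abstraction placed at a natural seam, here the storage boundary, keeps the two sides independent and interchangeable.

```java
// A narrow interface at a natural seam: callers only need save/load.
interface DocumentStore {
    void save(String id, String contents);
    String load(String id);
}

// One interchangeable chunk behind the fence; a database-backed or
// cloud-backed version could replace it without touching callers.
class InMemoryDocumentStore implements DocumentStore {
    private final java.util.Map<String, String> docs = new java.util.HashMap<>();

    public void save(String id, String contents) { docs.put(id, contents); }
    public String load(String id) { return docs.get(id); }
}
```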

dlasalle
  • 842
  • 6
  • 12
4

Refactoring

I did not see the word "refactoring" mentioned even once so far. So, here we go:

Feel free to implement a new feature as directly as possible. If you have only a single, simple class, you likely do not need an interface, a superclass, a factory, etc. for it.

If and when you notice that the class is growing too fat as you expand it, then it is time to rip it apart. That is the moment when it makes great sense to think about how you actually should do that.
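A hedged sketch of that progression (Java, hypothetical names): start with the direct version, and only extract the abstraction when a second concern actually appears.

```java
// Step 1: implement the feature directly; no interface, no factory.
class ReportGenerator {
    String generate(java.util.List<String> rows) {
        return String.join("\n", rows);
    }
}

// Step 2, only once a real second format shows up: extract the seam.
interface ReportFormat {
    String render(java.util.List<String> rows);
}

class PlainTextFormat implements ReportFormat {
    public String render(java.util.List<String> rows) {
        return String.join("\n", rows);
    }
}

class CsvFormat implements ReportFormat {
    public String render(java.util.List<String> rows) {
        return String.join(",", rows);
    }
}
```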

Patterns are a mind tool

Patterns, or more specifically the book "Design Patterns" by the Gang of Four, are great, among other reasons, because they build a language for developers to think and talk in. It is easy to say "observer", "factory", or "facade", and everyone knows exactly what it means, right away.

So my opinion would be that every developer should have a passing knowledge of at least the patterns in the original book, simply to be able to talk about OO concepts without always having to explain the basics. Should you actually use the patterns every time a possibility to do so appears? Most likely not.

Libraries

Libraries are likely the one area where it may be in order to err on the side of too many pattern-based choices instead of too few. Changing something from a "fat" class to something more pattern-derived (usually meaning more and smaller classes) will radically change the interface, and that is the one thing you do not usually want to change in a library, because the interface is the only thing that is of real interest to the user of your library. They couldn't care less about how you handle your functionality internally, but they do very much care whether they constantly have to change their program when you do a new release with a new API.

AnoE
  • 5,614
  • 1
  • 13
  • 17
2

The point of an abstraction should be, first and foremost, the value it brings to its consumers: the abstraction's clients, i.e., other programmers, and often yourself.

If, as a client who is consuming the abstraction(s), you find you need to mix and match many different abstractions to get your programming job done, then there are potentially too many abstractions.

Ideally, layering should bring together a number of lower abstractions and replace them with a simple, higher-level abstraction that consumers can use without having to deal with any of those underlying abstractions. If they have to deal with the underlying abstractions, then the layer is leaking (by way of being incomplete). If the consumer has to deal with too many different abstractions, then a layer is perhaps missing.
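As a hedged sketch (Java, hypothetical names): a layer that bundles two lower abstractions behind one simple call, so consumers never touch the underlying pieces.

```java
// Lower-level abstractions the consumer should not have to juggle:
interface ConnectionPool { AutoCloseable acquire(); }
interface Serializer { byte[] toBytes(Object value); }

// The layer: one higher-level abstraction replacing both.
class CacheClient {
    private final ConnectionPool pool;
    private final Serializer serializer;

    CacheClient(ConnectionPool pool, Serializer serializer) {
        this.pool = pool;
        this.serializer = serializer;
    }

    // Consumers call this; they never see pools or serializers.
    void put(String key, Object value) {
        try (AutoCloseable connection = pool.acquire()) {
            byte[] payload = serializer.toBytes(value);
            // ... send key and payload over the connection ...
        } catch (Exception e) {
            throw new RuntimeException("Cache write failed", e);
        }
    }
}
```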

After considering the value of the abstractions for the consuming programmers, we can turn to evaluating the implementation, such as its DRY-ness.

Yes, it is all about easing maintenance, but we should consider the plight of our consumers' maintenance first, by providing quality abstractions and layers, then consider easing our own maintenance in terms of implementation aspects such as avoiding redundancy.


For example, say a generic caching layer is written using Memcache. Do we really need Memcache, MemcacheAdapter, MemcacheInterface, AbstractCache, CacheFactory, CacheConnector, ... or is it easier to maintain, and still good code, when using only half of those classes?

We have to look at the client's perspective: if their lives are made easier, then it is good; if their lives are made more complex, then it is bad. However, it could be that there is a missing layer that wraps these things together into something simple to use. Internally, these classes may very well make maintenance of the implementation better. However, as you suspect, it is also possible that it is merely over-engineered.

Erik Eidt
  • 33,282
  • 5
  • 57
  • 91
2

Abstraction is designed to make code easier to understand. If a layer of abstraction is going to make things more confusing - don't do it.

The aim is to use the correct number of abstractions and interfaces to:

  • minimize development time
  • maximize code maintainability

Abstract only when required

  1. When you discover you're writing a super class
  2. When it will allow significant code re-use
  3. If abstracting will make the code significantly clearer and easier to read

Do not abstract when

  1. Doing so will not have an advantage in code-reuse or clarity
  2. Doing so will make the code significantly longer/more complex with no benefit

Some examples

  • If you're only going to have one cache in your whole program, don't abstract unless you think you are likely to end up with a superclass
  • If you have three different types of buffers, use a common interface abstraction for all of them (see the sketch below)
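A hedged version of the buffer example (Java, hypothetical names): one common interface over three otherwise unrelated buffer types.

```java
// One common interface over three otherwise unrelated buffer types,
// so code that writes bytes doesn't care where they end up.
interface ByteSink {
    void write(byte[] data);
}

class MemoryBuffer implements ByteSink {
    private final java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
    public void write(byte[] data) { out.write(data, 0, data.length); }
}

class StdoutBuffer implements ByteSink {
    public void write(byte[] data) { System.out.write(data, 0, data.length); }
}

class DiscardingBuffer implements ByteSink {
    public void write(byte[] data) { /* intentionally drops the data */ }
}
```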
sdfgeoff
  • 121
  • 1
2

I think this might be a controversial meta-answer, and I'm a bit late to the party, but I think it's very important to mention this here, because I think I know where you're coming from.

The problem with the way design patterns are used is that when they are taught, they are presented like this:

You have this specific scenario. Organize your code this way. Here's a smart-looking, but somewhat contrived example.

The problem is that when you start doing real engineering, things are not quite so cut-and-dried. The design pattern you read about will not quite fit the problem you are trying to solve. Not to mention that the libraries you are using totally violate everything stated in the text explaining those patterns, each in its own special way. And as a result, the code you write "feels wrong" and you ask questions like this one.

In addition to this, I'd like to quote Andrei Alexandrescu on software engineering:

Software engineering, maybe more than any other engineering discipline, exhibits a rich multiplicity: You can do the same thing in so many correct ways, and there are infinite nuances between right and wrong.

Perhaps this is a bit of an exaggeration, but I think this perfectly explains an additional reason why you might feel less confident in your code.

It is in times like this, that the prophetic voice of Mike Acton, game engine lead at Insomniac, screams in my head:

KNOW YOUR DATA

He's talking about the inputs to your program and the desired outputs. And then there's this Fred Brooks gem from The Mythical Man-Month:

Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.

So if I were you, I would reason about my problem based on my typical input case, and whether it achieves the desired correct output. And ask questions like this:

  • Is the output data from my program correct?
  • Is it produced efficiently/quickly for my most common input case?
  • Is my code easy enough to locally reason about, for both me and my teammates? If not, then can I make it simpler?

When you do that, the question of "how many layers of abstraction or design patterns are needed" becomes much easier to answer. How many layers of abstraction do you need? As many as necessary to achieve these goals, and not more. "What about design patterns? I haven't used any!" Well, if the above goals were achieved without direct application of a pattern, then that's fine. Make it work, and move on to the next problem. Start from your data, not from the code.

2

Software Architecture is Inventing Languages

With every software layer, you create the language that you (or your co-workers) will express the next-higher-layer solution in (so I'll throw in some natural-language analogues in this post). Your users don't want to spend years learning how to read or write that language.

This view helps me when deciding on architectural issues.

Readability

That language should be easily understood (making the next-layer code readable). Code is read far more often than written.

One concept should be expressed with one word - one class or interface should expose the concept. (Slavonic languages typically have two different words for one English verb, so you have to learn twice the vocabulary. All natural languages use single words for multiple concepts).

Concepts that you expose should not contain surprises. That's mainly naming conventions like get- and set-methods etc. And design patterns can help because they provide a standard solution pattern, and the reader sees "OK, I get the objects from a Factory" and knows what that means. But if simply instantiating a concrete class does the job, I'd prefer that.
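A hedged sketch of that preference (Java, hypothetical names): the factory reads as a familiar pattern, but when a single concrete class does the job, plain instantiation carries no surprises at all.

```java
interface Cache { String get(String key); }

class MemcacheCache implements Cache {
    public String get(String key) { return null; /* stubbed out */ }
}

// Pattern-shaped construction: the reader recognizes "a Factory",
// but the indirection is only worth it if construction needs it.
class CacheFactory {
    static Cache create() { return new MemcacheCache(); }
}

class Usage {
    void example() {
        Cache viaFactory = CacheFactory.create(); // familiar, but indirect
        Cache direct = new MemcacheCache();       // preferable when it suffices
    }
}
```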

Usability

The language should be easy to use (making it easy to formulate "correct sentences").

If all these MemCache classes/interfaces become visible to the next layer, that creates a steep learning curve for the user until he understands when and where to use which of these words for the single concept of a cache.

Exposing only the necessary classes/methods makes it easier for your user to find what he needs (see Doc Brown's quote of Antoine de Saint-Exupéry). Exposing an interface instead of the implementing class can make that easier.

If you expose functionality where an established design pattern applies, it's better to follow that design pattern than to invent something different. Your user will understand APIs that follow a design pattern more easily than some completely different concept (if you know Italian, Spanish will be easier for you than Chinese).

Summary

Introduce abstractions if that makes usage easier (and is worth the overhead of maintaining both the abstraction and the implementation).

If your code has a (non-trivial) sub-task, solve it "the expected way" i.e. follow the appropriate design pattern instead of re-inventing a different type of wheel.

Ralf Kleberhoff
  • 5,891
  • 15
  • 19
1

The important thing to consider is how much the consuming code that actually handles your business logic needs to know about these caching-related classes. Ideally, your code should only care about the cache object it wants to create, and maybe a factory to create that object if a constructor isn't sufficient.

The number of patterns used or the level of inheritance isn't too important, so long as each level can be justified to other developers. This creates an informal limit, as each additional level is harder to justify. The more important part is how many levels of abstraction are affected by changes to functional or business requirements. If you can make a change at only one level for a single requirement, then you likely aren't over-abstracted or abstracted poorly; if you change the same level for multiple unrelated changes, you are likely under-abstracted and need to further separate concerns.
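As a hedged sketch of the first point (Java, hypothetical names): the business logic below knows one cache type and one way to obtain it, and nothing about adapters, connectors, or abstract bases.

```java
interface Cache {
    void put(String key, String value);
    String get(String key);
}

class Caches {
    // The single entry point business code needs; adapters, connectors,
    // and pooling (if any) stay hidden behind this call.
    static Cache standard() {
        final java.util.Map<String, String> map = new java.util.HashMap<>();
        return new Cache() {
            public void put(String key, String value) { map.put(key, value); }
            public String get(String key) { return map.get(key); }
        };
    }
}

class OrderService {
    private final Cache cache = Caches.standard();

    String lookupStatus(String orderId) {
        String cached = cache.get(orderId);
        return cached != null ? cached : "unknown";
    }
}
```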

Ryathal
  • 13,317
  • 1
  • 33
  • 48
-1

First, the Twitter quote is bogus. New developers need to build a mental model, and abstractions would typically help them "get the picture". Provided the abstractions make sense, of course.

Second, your problem is not too many or too few abstractions; it is that apparently no one gets to decide about these things. No one owns the code, no single plan/design/philosophy is implemented, and the next guy can do whatever he sees fit at that moment. Whichever style you go with, it should be a single one.

Martin Maat
  • 18,218
  • 3
  • 30
  • 57
  • 2
    Let's refrain from dismissing one's experience as "bogus". Too many abstractions are a real problem. Sadly, people add abstraction up-front because of "best practice", rather than to solve a real problem. Also, no one can decide "about these things" ... people leave companies, people join, and nobody takes ownership of their mud. – Rawkode Jun 23 '17 at 07:37