
Is it true that overriding concrete methods is a code smell? Because I think if you need to override concrete methods:

public class A{
    public void a(){
    }
}

public class B extends A{
    @Override
    public void a(){
    }
}

it can be rewritten as

public interface A{
    public void a();
}

public class ConcreteA implements A{
    public void a(){
    }
}

public class B implements A{
    public void a(){
    }
}

and if B wants to reuse a() in A it can be rewritten as:

public class B implements A{
    public ConcreteA concreteA;
    public void a(){
        concreteA.a();
    }
}

which does not require inheritance to override the method, is that true?

ggrr
  • Related on Stackoverflow: [Any good examples of inheriting from a concrete class?](http://stackoverflow.com/questions/7496010/any-good-examples-of-inheriting-from-a-concrete-class); [Is inheritance of concrete classes evil?](http://stackoverflow.com/questions/3887871/is-inheritance-of-concrete-classes-evil) – sleske Apr 11 '16 at 07:00
  • Downvoted because (in comments below), Telastyn and Doc Brown agree about overriding, but give opposite yes/no answers to the headline question because they disagree about the meaning of "code smell". So a question intended to be about code design has been highly coupled to a piece of slang. And I think unnecessarily so, since the question specifically is about one technique against another. – Steve Jessop Apr 11 '16 at 16:05
  • The question is not tagged Java, but the code appears to be Java. In C# (or C++), you'd have to use the `virtual` keyword to support overrides. So, the issue is made more (or less) ambiguous by the particulars of the language. – Erik Eidt Apr 11 '16 at 16:19
  • It seems like almost every programming language feature inevitably ends up being classified as a "code smell" after enough years. – Brandon Apr 11 '16 at 22:03
  • I feel like the `final` modifier exists exactly for situations like these. If the behavior of the method is such that it is not designed to be overridden, then mark it as `final`. Otherwise, expect that developers may choose to override it. – Martin Tuskevicius Apr 11 '16 at 22:29
  • It is a code smell because you can use object instance scope for A instead of this. Also there is redundancy, because you can create an instance from an A object with a new definition. – marven Apr 11 '16 at 15:36
  • It's like pornography. You know it when you see it. I think this is an example of a personal preference. It's like debating how many spaces you should indent or where to place your brace. – Chloe Apr 12 '16 at 06:20
  • Useful resource on why you shouldn't ever override a concrete method: http://butunclebob.com/ArticleS.DeanWampler.ShouldYouEverOverrideConcreteMethods – Spotted Apr 12 '16 at 06:50
  • @Chloe no, this is an architectural and design issue, and one that Java and C# took fundamentally different approaches to, since Java makes every method overridable by default, while C# makes every method final by default. – Craig Tullis Jun 09 '17 at 15:04

5 Answers


No, it is not a code smell.

  • If a class is not final, it can be subclassed.
  • If a method is not final, it can be overridden.

It lies within the responsibilities of each class to carefully consider whether subclassing is appropriate, and which methods may be overridden.

The class may define itself or any method as final, or may place restrictions (visibility modifiers, available constructors) on how and where it is subclassed.

The usual case for overriding methods is a default implementation in the base class which may be customized or optimized in the subclass (especially since Java 8, with the advent of default methods in interfaces).

class A {
    public String getDescription(Element e) {
        // return default description for element
    }
}

class B extends A {
    public String getDescription(Element e) {
        // return customized description for element
    }
}
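
The same "default implementation, customized in the subtype" idea carries over to Java 8 default methods. A minimal sketch (the interface and class names here are invented for illustration, not from the answer):

```java
// A default method provides a "sane default" that implementors may override.
interface Describable {
    default String getDescription() {
        return "no description";
    }
}

// Inherits the default description unchanged.
class Plain implements Describable {
}

// Overrides the default with a customized description.
class Fancy implements Describable {
    @Override
    public String getDescription() {
        return "a fancy element";
    }
}
```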

An alternative for overriding behaviour is the Strategy Pattern, where the behaviour is abstracted as an interface, and the implementation can be set in the class.

interface DescriptionProvider {
    String getDescription(Element e);
}

class A {
    private DescriptionProvider provider = new DefaultDescriptionProvider();

    public final String getDescription(Element e) {
        return provider.getDescription(e);
    }

    public final void setDescriptionProvider(@NotNull DescriptionProvider provider) {
        this.provider = provider;
    }
}

class B extends A {
    public B() {
        setDescriptionProvider(new CustomDescriptionProvider());
    }
}
Peter Walser
  • I understand that, in the real world, overriding a concrete method may be a code smell to some. But, I like this answer because it seems more spec based to me. – Darkwater23 Apr 11 '16 at 20:49
  • In your last example, shouldn't `A` implement `DescriptionProvider`? – Spotted Apr 12 '16 at 05:25
  • @Spotted: `A` *could* implement `DescriptionProvider`, and thus be the default description provider. But the idea here is separation of concerns: `A` needs the description (some other classes might too), whilst other implementations provide them. – Peter Walser Apr 12 '16 at 06:35
  • Ok, sounds more like the [strategy pattern](https://en.wikipedia.org/wiki/Strategy_pattern). – Spotted Apr 12 '16 at 06:58
  • @Spotted: you're right, the example illustrates the strategy pattern, not the decorator pattern (a decorator would add behaviour to a component). I'll correct my answer, thanks. – Peter Walser Apr 12 '16 at 07:35
  • Actually, this is not the strategy pattern. The goal of the strategy pattern is to change the behaviour at runtime. You can already change the behaviour of class A without subclassing it. This is more like the bridge pattern. By injecting different implementations into different subclasses you change the behaviour and separate what varies. – ozum.e Dec 05 '19 at 22:55

Is it true that overriding concrete methods is a code smell?

Yes, in general, overriding concrete methods is a code smell.

Because the base method has a behavior associated with it that developers usually respect, changing that will lead to bugs when your implementation does something different. Worse, if they change the behavior, your previously correct implementation may exacerbate the problem. And in general, it's fairly difficult to respect the Liskov Substitution Principle for non-trivial concrete methods.

Now, there are cases where this can be done and is good. Semi-trivial methods can be overridden somewhat safely. Base methods that exist only as a "sane default" sort of behavior, meant to be overridden, have some good uses.

Hence, this does seem like a smell to me - sometimes good, usually bad, take a look to make sure.
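
As an illustration of the "sane default" case (a sketch added here, not part of the original answer): `toString()` in Java is a concrete method whose default is explicitly meant to be replaced, so overriding it carries little risk:

```java
class Money {
    private final long cents;

    Money(long cents) {
        this.cents = cents;
    }

    // Overriding Object.toString() replaces a sane default (class name
    // plus hash code) with something more useful. Callers that rely only
    // on "returns some string" are unaffected, so the LSP is respected.
    @Override
    public String toString() {
        return (cents / 100) + "." + String.format("%02d", cents % 100);
    }
}
```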

Telastyn
  • Your explanation is fine, yet I come exactly to the opposite conclusion: "no, overriding concrete methods is *not* a code smell in general" - it is only a code smell when one overrides methods which were not intended to be overridden. ;-) – Doc Brown Apr 11 '16 at 05:36
  • @DocBrown Exactly! I also see that the explanation (and particularly the conclusion) opposes the initial statement. – Tersosauros Apr 11 '16 at 06:16
  • Can you provide a more concrete example where overriding a concrete method is good? What are "semi-trivial" methods, and how, as a developer wanting to override a method, can I recognize them? – Spotted Apr 11 '16 at 09:21
  • @Spotted: The simplest counterexample is a `Number` class with an `increment()` method that sets the value to its next valid value. If you subclass it into `EvenNumber` where `increment()` adds two instead of one (and the setter enforces evenness), the OP's first proposal requires duplicate implementations of _every_ method in the class and his second requires writing wrapper methods to call the methods in the member. Part of the purpose of inheritance is for child classes to make clear how they differ from the parents by only providing those methods that need to be different. – Blrfl Apr 11 '16 at 09:55
  • ToString() is a good example of a method that is often safe to override. – Ian Apr 11 '16 at 10:32
  • @DocBrown : being a code smell doesn't imply something is _necessarily_ bad, only that it is _likely_. – mikołak Apr 11 '16 at 10:59
  • @mikołak: from the pure fact that a concrete method is overridden, I would not deduce that this is *likely* to be a problem (nor that it is unlikely), thus I would not say it is *generally* a code smell. This looks like a biased assumption to me. However, by just writing "this is a code smell when it violates the LSP", or "this is not a code smell when overriding a default implementation", and removing that assumption of what is likely, this answer could IMHO be improved. – Doc Brown Apr 11 '16 at 11:21
  • @docbrown - for me, this falls alongside "favor composition over inheritance". If you're over-riding concrete methods, you're already in tiny-smell land for having inheritance. What are the odds that you're over-riding something you shouldn't be? Based on my experience, I think it is probable. YMMV. – Telastyn Apr 11 '16 at 11:42
  • @Telastyn: what you are saying sounds like "inheritance is always a code smell" ;-) (and let me tell you I upvoted your answer already right when I wrote my initial comment). – Doc Brown Apr 11 '16 at 12:12
  • @Blrfl so how would you implement `increment()` in `Number` if not making it abstract ? – Spotted Apr 11 '16 at 13:10
  • @Spotted - Well, (depending on the language) there are some trivial methods that you're almost *expected* to override - `toString()` or `equals()`. In certain game engines, you might expect to see `init()`, `start()`, `update()`, or `kill()` type methods on all game objects. These are just the most obvious examples. – Darrel Hoffman Apr 11 '16 at 14:04
  • @DarrelHoffman See my answer's comment to know my thoughts on `toString()` or `equals()`. To be clear, I'm not against overriding *abstract* method, I'm only against overriding *concrete* methods. – Spotted Apr 11 '16 at 14:12
  • The other reason to override is performance. If you know that you are dealing with a special case, it need not violate the LSP (see for example std::advance). – soandos Apr 11 '16 at 14:25
  • "...sometimes good, usually bad, take a look to make sure" might want to add "...and make sure there's a comment documenting that this is the desired behavior and why the decision was made." – Jared Smith Apr 11 '16 at 15:02
  • @Spotted: Why, specifically, do you think that only abstract methods should be overridden (or, more technically correct, implemented, since there's no method to override)? The ability to subclass with different implementations of a subset of methods is practically a cornerstone of OO programming. Forcing everything abstract like that adds unnecessary code and gymnastics for little, if any, benefit. – Blrfl Apr 11 '16 at 17:46
  • @Blrfl Your example is almost certainly a violation of the LSP, depending on what `Number` is expected to do. An even number is-a number in the same way a square is-a rectangle (a narrowing of the definition), and having `Square` extend `Rectangle` is the quintessential LSP violation (assuming you're reading and mutating, which your description of `increment()` implies). In fact it's a great example of why overriding concrete methods is so smelly. – JounceCracklePop Apr 12 '16 at 05:25
  • @Blrfl If I develop a class (let's say `A`) containing a method (let's say `a()`) without the abstract or final keyword, anyone who subclasses `A` has the possibility to override `a()`. So the initial "unit of working software" I provided suddenly doesn't behave as expected (even though the behaviour is guaranteed by the unit tests), or even worse may no longer work at all. – Spotted Apr 12 '16 at 05:49
  • @Blrfl So should I write something in the doc ? How could I be sure the one who subclass will read the doc ? And if I write that calling super.a() is important but he doesn't care or forgets ? For me, this is too much unknowns if I want to work as a developer who provides you working software. That's why I either provide `final` methods by saying **nah, it's intended to work like that, don't touch me** or `abstract` by saying **yup, you can put your specific code here**. – Spotted Apr 12 '16 at 05:49
  • @CarlLeth: Whether `Square` extending `Rectangle` violates the LSP depends on the design of the latter, and it's easy to cook up strawman designs for both that do and don't. The question nobody ever asks when citing that example is whether the taxonomy is wrong and both should be subclasses of `FourSidedFigureWithRightAngles`, which would force the right things to be abstract from square one. Come to think of it, my `Number`/`EvenNumber` example suffers from the same problem. – Blrfl Apr 12 '16 at 12:10
  • @Spotted: `How could I be sure the one who subclass will read the doc ?` - The best way to assure your code won't be misused is to not write it at all. If you hand me a working class and I subclass it in a way that makes it misbehave, then I broke it and I own both pieces. That's _my_ problem, not yours. The additional gymnastics that composition imposes where inheritance would have been just fine mean that if a class adds a method, every descendant has to be modified, and I find that equally smelly. See also my comment above about taxonomy. – Blrfl Apr 12 '16 at 12:19

if you need to override concrete methods [...] it can be rewritten as

If it can be rewritten that way it means you have control over both classes. But then you know whether you designed class A to be derived in this way and if you did, it is not a code smell.

If, on the other hand, you don't have control over the base class, you can't rewrite in that way. So either the class was designed to be derived in this way, in which case just go ahead, it is not a code smell, or it was not, in which case you have two options: look for another solution altogether, or go ahead anyway, because working trumps design purity.

Jan Hudec
  • If class A had been designed to be derived, wouldn't it make more sense to have an abstract method to clearly express the intent? How can the one who overrides the method know that it was designed this way? – Spotted Apr 11 '16 at 09:18
  • @Spotted, there are many cases where default implementations can be provided and each specialization only needs to override some of them, either to extend functionality or for efficiency. An extreme example is visitors. The user knows how the methods were designed from documentation; it has to be there to explain what is expected of the override either way. – Jan Hudec Apr 11 '16 at 17:30
  • I understand your point of view, but I still think it's dangerous that the only way a developer can know a method may be overridden is by reading the doc, because: 1) if it must be overridden, I have no guarantee that it will be done. 2) if it must not be overridden, someone will do it for their convenience. 3) not everyone reads the doc. Also, I never faced a case where I needed to override a whole method, only parts (and I ended up using a template pattern). – Spotted Apr 12 '16 at 05:33
  • @Spotted, 1) if it _must_ be overriden, it should be `abstract`; but there are many cases where it does not have to be. 2) if it must _not_ be overridden, it should be `final` (non-virtual). But there are still many cases where methods _may_ be overridden. 3) if the downstream developer does not read the docs, they are going to shoot themselves in the foot, but they are going to do that no matter what measures you put in place. – Jan Hudec Apr 12 '16 at 08:20
  • Ok so our point of view's divergence only lies in 1 word. You say that every method *should* be abstract or final while I say that every method *must* be abstract or final. :) What are these cases when overriding a method is necessary (and safe) ? – Spotted Apr 12 '16 at 08:59
  • @Spotted, no, I don't say every method should be abstract or final. I only say it should be abstract or final _if it can_. And that there is a significant number of cases where it _can't_. – Jan Hudec Apr 12 '16 at 09:00
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/38269/discussion-between-spotted-and-jan-hudec). – Spotted Apr 12 '16 at 09:01

First, let me just point out that this question is only applicable in certain typing systems. For example, in Structural typing and Duck typing systems - a class simply having a method with the same name & signature would mean that class was type compatible (at least for that particular method, in the case of Duck typing).

This is (obviously) very different from the realm of statically typed languages such as Java and C++, and from the strong typing used in Java and C++, as well as C# and others.


I am guessing you're coming from a Java background, where methods must explicitly be marked as final if the contract designer intends for them not to be overridden. However, in languages such as C#, the opposite is the default: methods are (without any special decorating keywords) "sealed" (C#'s version of final), and must be explicitly declared as virtual if they are allowed to be overridden.

If an API or library has been designed in these languages with a concrete method that is overridable, then it must (at least in some case(s)) make sense for it to be overridden. Therefore, it is not a code smell to override an unsealed/virtual/not final concrete method.     (This assumes of course that the designers of the API are following conventions and either marking as virtual (C#), or marking as final (Java) the appropriate methods for what they're designing.)
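
In Java terms, the designer opts out of overridability per method with `final`. A small sketch (class names invented for illustration):

```java
class Widget {
    // Overridable by default in Java: subclasses may replace this.
    public String label() {
        return "widget";
    }

    // Explicitly sealed against overriding, like a non-virtual C# method.
    public final int version() {
        return 1;
    }
}

class FancyWidget extends Widget {
    @Override
    public String label() {
        return "fancy " + super.label();
    }
    // Attempting to override version() here would be a compile error.
}
```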



Tersosauros
  • In duck typing, is two classes having the same method signature sufficient for them to be type compatible, or is it also necessary that the methods have the same semantics, such that they are liskov substitutable for some implied base class? – Wayne Conrad Jun 09 '17 at 17:54

Yes it is, because it can lead to the call super anti-pattern. It can also distort the initial purpose of the method, introduce side effects (=> bugs) and make tests fail. Would you use a class whose unit tests are red (failing)? I won't.

I'll go even further and say that the initial code smell is a method that is neither abstract nor final, because it allows the behaviour of a constructed entity to be altered by overriding (that behaviour can be lost or changed). With abstract and final the intent is unambiguous:

  • Abstract: you must provide a behaviour
  • Final: you're not allowed to modify the behaviour
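
Both intents can be expressed in a single class, in the spirit of the template method pattern (a minimal sketch with invented names, not from the answer):

```java
abstract class Report {
    // Abstract: subclasses must provide a behaviour.
    protected abstract String body();

    // Final: subclasses are not allowed to modify the overall rendering.
    public final String render() {
        return "== report ==\n" + body();
    }
}

class SalesReport extends Report {
    @Override
    protected String body() {
        return "sales are up";
    }
}
```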

The safest way to add behaviour to an existing class is what your last example shows: using the decorator pattern.

public interface A{
    void a();
}

public final class ConcreteA implements A{
    public void a() {
        ...
    }
}

public final class B implements A{
    private final ConcreteA concreteA;

    public B(ConcreteA concreteA){
        this.concreteA = concreteA;
    }

    public void a(){
        //Additional behaviour
        concreteA.a();
    }
}
Spotted
  • This *black & white* approach isn't really all that useful. Think about `Object.toString()` in Java... If this was `abstract`, many default things wouldn't work, and code wouldn't even compile when somebody hasn't overridden the `toString()` method! If it were `final`, things would be even worse - no ability to allow objects to be converted to string, except for the Java way. As [@Peter Walser](http://programmers.stackexchange.com/users/100350/peter-walser)'s answer talks about, **default implementations** (such as `Object.toString()`) *are* important. Especially in Java 8 default methods. – Tersosauros Apr 11 '16 at 11:59
  • @Tersosauros You are right, if `toString()` was either `abstract` or `final` it would be a pain. My personal opinion is that this method is broken and it would have been better not to have it at all (like [equals and hashCode](http://programmers.stackexchange.com/a/314183/189221)). The initial purpose of Java defaults is to allow API evolution with retrocompatibility. – Spotted Apr 11 '16 at 14:01
  • The word "retrocompatibility" has an awful lot of syllables. – Craig Tullis Jun 09 '17 at 15:17
  • @Spotted I am very disappointed at the general acceptance of concrete inheritance on this site, I expected better :/ Your answer should have many more upvotes – TheCatWhisperer Jun 09 '17 at 15:38
  • @TheCatWhisperer I agree, but on the other hand it took me so long to find out the subtle advantages of using decoration over concrete inheritance (in fact it became clear when I started to write tests first) that you can't blame anyone for that. The best thing to do is to try to educate the others until they discover the **Truth** (joke). :) – Spotted Jun 11 '17 at 11:05
  • @Spotted yes, the advantages of decorational hierarchy are very difficult to discern. I did not discover the Truth(tm) until recently. It took me a while to accept, even after I had already accepted the other good code practices. But, once you use decoration, even once, you'll never want to go back. – TheCatWhisperer Jun 12 '17 at 14:18
  • @Spotted, as it so happens, we recently got a question which is a great example of how bad inheritance can be, even without the "call super" problem! https://softwareengineering.stackexchange.com/questions/351029/correct-way-to-extend-a-hierarchy-tree/351034#351034 – TheCatWhisperer Jun 19 '17 at 14:35