55

It seems Java has had the power to declare classes not-derivable for ages, and now C++ has it too. However, in light of the Open/Closed principle in SOLID, why would that be useful? To me, the final keyword sounds just like friend: it is legal, but if you are using it, the design is most probably wrong. Please provide some examples where a non-derivable class is part of a great architecture or design pattern.

XaolingBao
  • 168
  • 7
Vorac
  • 7,073
  • 7
  • 38
  • 58
  • 44
    Why do you think a class is wrongly designed if it's annotated with `final` ? Many people (including me) find that it's a good design to make every non-abstract class `final`. – Spotted May 12 '16 at 08:39
  • 4
    It might be useful to the compiler, to optimize better such classes. – Basile Starynkevitch May 12 '16 at 08:47
  • 3
    @Spotted, If I want a class, similar to one already written, I can either contain an instance or inherit it. `final` prevents the latter, hence IMHO violates SOLID. – Vorac May 12 '16 at 08:52
  • 20
    Favour composition over inheritance and you can have every non abstract class `final`. – Andy May 12 '16 at 08:59
  • 2
    Have a look at the decorator pattern for a safest way to extend existing (and possibly `final`) classes. – Spotted May 12 '16 at 11:14
  • 24
    The open/close principle is in a sense an anachronism from the 20th century, when the mantra was to make a hierarchy of classes that inherited from classes that in turn inherited from other classes. This was nice for teaching object oriented programming but it turned out to create tangled, unmaintainable messes when applied to real-world problems. Designing a class to be extensible is hard. – David Hammen May 12 '16 at 11:50
  • 36
    @DavidArno Don't be ridiculous. Inheritance is the *sine qua non* of object-oriented programming, and it's nowhere near as complicated or messy as certain overly-dogmatic individuals like to preach. It's a tool, like any other, and a good programmer knows how to use the right tool for the job. – Mason Wheeler May 12 '16 at 12:14
  • 9
    @MasonWheeler, OK, calling it "evil" is a little over the top; but it's a seriously flawed concept. [Inheritance encourages coupling, makes testing harder, makes encapsulation harder and results in unpredictable violations to the open/closed principle due to the fragile base class problem](http://www.davidarno.org/2016/02/04/inheritance-just-stop-using-it-already/). Designing to interfaces and using composition does everything inheritance can do, without any of those problems... – David Arno May 12 '16 at 12:40
  • 16
    @DavidArno On the contrary, excessive decoupling is a seriously flawed concept that leads to code that is difficult to write, read, and maintain. (If you don't believe me, try debugging an Android UI sometime, where the XML declaration of the UI is so decoupled from the actual UI objects it generates that it is essentially impossible to use a debugger to find the answer to any question of the form "why does my X look like Y when I meant for it to look like Z?") And there are things composition can't do, and things it can't do nearly as well as inheritance. Use the right tool for the job! – Mason Wheeler May 12 '16 at 13:44
  • 5
    @MasonWheeler you are confusing decoupling with "using XML". The latter is almost as bad an idea as inheritance:) – David Arno May 12 '16 at 15:04
  • 49
    As a recovering developer, I seem to recall final being a brilliant tool for preventing harmful behavior from entering critical sections. I also seem to recall inheritance being a powerful tool in a variety of ways. It's almost as if *gasp* different tools have pros and cons and we as engineers have to balance those factors as we produce our software! – corsiKa May 12 '16 at 15:17
  • 1
    @corsiKa, [some tools are so dangerous, they have to be banned](http://objectofhistory.org/objects/brieftour/shorthandledhoe/?order=4)... – David Arno May 12 '16 at 16:26
  • 3
    @DavidArno Yes, that's [consistent with my answer](http://xkcd.com/292/). – corsiKa May 12 '16 at 17:06
  • 1
    @corsiKa, OK, you *definitely* win with that one! – David Arno May 12 '16 at 17:15
  • 1
    You might find this talk interesting https://yow.eventer.com/yow-2013-1080/the-solid-design-principles-deconstructed-by-kevlin-henney-1386 especially the part about the open close principle – Jens Schauder May 13 '16 at 05:15
  • 5
    @MasonWheeler No, it's the base of class-oriented programming. OOP doesn't need inheritance (or classes) at all. But that's really irrelevant to the question - is inheritance useful? Is it useful to *prohibit* inheritance? And the answer to both is "yup". – Luaan May 13 '16 at 08:52
  • 2
    Your design is not wrong if your code uses the friend keyword. Correctly used, friend improves code quality by strengthening encapsulation. The problem is that many people use it incorrectly and thereby actually weaken encapsulation, but that does not make the keyword itself an indicator of design flaws. I would rather even say that a codebase that does not use friend at all is very likely flawed. – Kaiserludi May 13 '16 at 09:55
  • @MasonWheeler The issue there is that such layouts are actually _not_ decoupled; they look declarative, but under the hood they're still imperative (as shown when you actually take a look at how the classes used in such layouts are implemented and all the constructor boilerplate needed to handle the XML attributes). Of course, with any significantly advanced GUI framework that allows complete customization of look and feel and layout you're going to have a harder time working out what causes certain parts to look bad once your layouts get significantly complicated; debuggers aren't much help. – JAB May 13 '16 at 15:25
  • 2
    You could, for example, create a String class which no one could ever subclass, because *your* String is so abso-f*cking-lutely pure, absolute, and perfect, and changing, altering, or in fact doing anything beyond staring at it in awe-struck wonder is considered nouveau-object-oriented and gauche-beyond-words. I can't imagine that anyone would actually want to do something this prickish, though... – Bob Jarvis - Слава Україні May 13 '16 at 16:51
  • Isn't this like asking why access modifiers would ever be useful? – MCMastery May 16 '16 at 00:47
  • Additionally, in `C++` if you don't design for inheritance by writing a virtual destructor, your class should absolutely be `final`. Otherwise derived classes would leak memory. – Nathan Cooper May 16 '16 at 08:26
  • @NathanCooper leak memory: or rather, not call the derived destructor: that's only if you use `delete`. The instances can be stack based/contained only, yet still passed as arguments to functions that use them polymorphicly. Now with smart pointers (no bare deletes) that can be captured at creation time and not need a virtual destructor, either. – JDługosz May 16 '16 at 11:28
    @JDługosz Yes, you're right, smart pointers know what the derived type is because you told them on construction. However, I still think it's worth being safe, and either create virtual destructors or mark things as final. The OP may be using `delete`, however, and should **bear in mind that in C++ there can be a (small) functional difference between classes you intend to inherit from and those you don't** – Nathan Cooper May 16 '16 at 13:15

10 Answers

137

final expresses intent. It tells the user of a class, method or variable "This element is not supposed to change, and if you want to change it, you haven't understood the existing design."

This is important because program architecture would be really, really hard if you had to anticipate that every class and every method you ever write might be changed to do something completely different by a subclass. It is much better to decide up-front which elements are supposed to be changeable and which aren't, and to enforce that unchangeability via final.

You could also do this via comments and architecture documents, but it is always better to let the compiler enforce things that it can than to hope that future users will read and obey the documentation.
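
For illustration, a minimal sketch of what that compiler enforcement looks like in Java (the class name is made up):

// A deliberately locked-down class: the design decision is enforced by
// the compiler rather than by documentation.
public final class TemperatureConverter {
    public double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

// Any attempt to extend it is rejected at compile time:
//
//   class FancyConverter extends TemperatureConverter { }
//   // error: cannot inherit from final TemperatureConverter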

Kilian Foth
  • 107,706
  • 45
  • 295
  • 310
  • 3
    I would expect that, as Michal points out below, by following the Liskov principle, a base class is never "changed to do something completely different". – Vorac May 12 '16 at 11:18
  • 14
    So would I. But anyone who's ever written a widely reused base class (like a framework or media library) knows better than to expect application programmers to behave in a sane way. They'll subvert, misuse and distort your invention in ways you hadn't even *thought* were possible unless you lock it down with an iron grip. – Kilian Foth May 12 '16 at 11:20
  • 10
    @KilianFoth Ok, but honestly, how is that your problem what application programmers do? – coredump May 12 '16 at 11:53
  • 21
    @coredump People using my library badly create bad systems. Bad systems breed bad reputations. Users will not be able to distinguish Kilian's great code from Fred Random's monstrously unstable app. Result: I lose out in programming creds and customers. Making your code hard to misuse is a question of dollars and cents. – Kilian Foth May 12 '16 at 11:57
  • 6
    "Final" is a brutal way of preventing potential breach of contract from a subclass, but I guess it is easier than contracts or runtime checks. "People using my library badly create bad systems": even if your library was perfect, they could still hack it so that it works in a context you could not imagine. "Bad systems breed bad reputations": if only that was true... Users generally don't care what is the technology behind a product, but developers do and bad reputation could arise from using a library that cop out to "final" where more flexible approaches could be useful. That being said, +1 – coredump May 12 '16 at 13:00
  • 23
    The statement "This element is not supposed to change, and if you want to change it, you haven't understood the existing design." is incredibly arrogant and not the attitude I would want as part of any library I worked with. If only I had a dime for every time some ridiculously over-encapsulated library left me with no mechanism to change some piece of inner state that *needs to be changed* because the author failed to anticipate an important use case... – Mason Wheeler May 12 '16 at 13:47
  • 4
    @coredump Full ack! Making code `final` because in fact its *contract* must not be modified is a poor last resort and an attempt to fix a problem at the wrong place. – JimmyB May 12 '16 at 14:42
  • 3
    @MasonWheeler Full ack to you too. Along the same line, one could also say the contrary: "If you feel the need to use `final`, your design is probably flawed." – JimmyB May 12 '16 at 14:44
  • 33
    @JimmyB Rule of thumb: **If you think you know what your creation will be used for, you're wrong already.** Bell conceived of the telephone as essentially a Muzak system. Kleenex was invented for the purpose of more conveniently removing makeup. Thomas Watson, president of IBM, once said "I think there is a world market for maybe five computers." Representative Jim Sensenbrenner, who introduced the PATRIOT Act in 2001, is on the record as saying that it was specifically intended to *prevent* the NSA from doing things they're doing "with PATRIOT act authority." And so on... – Mason Wheeler May 12 '16 at 15:08
  • 1
    Each object is supposed to maintain its own invariants. Final is one way to accomplish that goal. If a programmer is creating objects and doesn't understand this concept, he or she should not be using Final. – ngreen May 12 '16 at 18:23
  • "This element is not supposed to change..." Until I read later portions of your answer, I thought you might be saying that the code should never be modified. May I suggest a different turn of phrase: "This element is not supposed to be overridden..."? – jpmc26 May 13 '16 at 00:35
  • 2
    @MasonWheeler https://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_misquote –  May 13 '16 at 01:29
  • 6
    @ngreen: That's fallacious reasoning. If a subclass overrides something, it is responsible (via LSP) for ensuring all invariants are respected. The base class certainly isn't, so final is an inversion of responsibility. – Kevin May 13 '16 at 03:03
  • 1
    @Mason If you just leave your classes non-final without considering the implications during design, it will be nigh impossible to inheritors to behave well in all situations (creating hard to use APIs will tarnish your reputation) and it will incredibly constrain you in further development (breaking code with every new release of your API won't make you popular either). If you want your class to be inherited it has to be an integral part of the design of the class. This is a lot of extra effort and should only be done when there's a use case for it. – Voo May 13 '16 at 10:22
  • 5
    @Voo Yes, I understand that that's theoretically a concern. In reality, though, I've never once run into a problem with the so-called "fragile base class problem," but I have trouble with designers failing to anticipate needed points of extension on a pretty regular basis. – Mason Wheeler May 13 '16 at 10:46
  • 3
    @Mason Check out Java's HashMap for just one very prominent example. While it might not be a big problem for you if your libraries are only intended for a small audience, if your code has to be backwards compatible and is used by a larger audience this becomes a real problem very quickly. I assume your libraries don't have to be backwards compatible with all clients (lucky you), because otherwise I can't see how you wouldn't have run into that problem in the past. – Voo May 13 '16 at 11:46
  • But yes if the use cases are varied and hard to predict it certainly makes sense to consider extensibility in the larger picture where inheritance might (not doesn't have to) be the right solution. This is a large additional effort that is underestimated by many though. – Voo May 13 '16 at 11:47
  • 2
    @Voo No, let me be a bit more clear. I've never run into the fragile base class problem as either a developer *or* as a user. Of course, I'm not a Java developer, so whatever "prominent" issues it's had with its HashMaps, I must confess I'm entirely unaware of them. – Mason Wheeler May 13 '16 at 12:28
  • 1
    @Kevin, that's not remotely plausible: if every subclass must find a way to preserve the invariants of its parent, the complexity will become untenable. Hardware circuits don't function like this, and software shouldn't either. How on earth will you ever upgrade to a new version of superclass if this is allowed? Answer: you can't. I've seen systems that try to do this, and every last one of them was awful. – ngreen May 13 '16 at 14:43
  • @MasonWheeler To reference Android again, the worst sort of overly-encapsulated libraries are those that only make that encapsulation external; internally they use all the private (or worse, hidden so you can't access them reflectively) classes they want and if you want to make a modified version of one of those classes to use because you can't change the existing one the way you want to you then have to reimplement _everything it depends on_. Encapsulation without proper decoupling/with no usage of public interfaces/etc. is horrid. – JAB May 13 '16 at 15:32
  • 1
    @MasonWheeler "because the author failed to anticipate an important use case", perhaps the user failed to select the right tool for his job? A lot of frameworks and libraries are written as part of a product. If said product is a platform-game and a user of the library fails to create a shooter with it, then that's not the author's fault. General purpose libraries are a different story, but those usually aren't that locked down. – Kevin May 15 '16 at 09:35
  • 3
    @Kevin Oh, you'd be surprised. General-purpose libraries are where I see that kind of problem the most! – Mason Wheeler May 15 '16 at 10:02
  • @MasonWheeler Well, in that case you are completely right. It's not useful as a general purpose library if there are locks on it. – Kevin May 15 '16 at 10:06
  • @Mason This has nothing to do with any particular language, but is a general problem when designing any kind of API. Handling inheritance not only means lots of extra effort when designing the library, but most often also involves other trade-offs, be it performance, extensibility or usability (the more generic you make something, the harder the simple use cases get). Yes you can just make everything extensible and let people create fragile code that will break with the next library update and in weird edge cases, but clearly people have different opinions on that. – Voo May 15 '16 at 16:33
  • 1
    Personally I like well designed APIs where I see what parts are intended to be extended and which aren't. Sure if that API is badly designed and doesn't fulfill my needs I will pick something else (or live with the limitations and work around it, which yes can be a hassle, no question). Sometimes that's just the better option. This reminds me of when people decided to add unicode support to TeX and ended up with patchfiles that were ten times larger than the original program instead of just creating a derivative. – Voo May 15 '16 at 16:36
60

It avoids the Fragile Base Class Problem. Every class comes with a set of implicit or explicit guarantees and invariants. The Liskov Substitution Principle mandates that all subtypes of that class must also provide all these guarantees. However, it is really easy to violate this if we don't use final. For example, let's have a password checker:

public class PasswordChecker {
  public boolean passwordIsOk(String password) {
    return "s3cret".equals(password);
  }
}

If we allow that class to be overridden, one implementation could lock out everyone, another might give everyone access:

public class OpenDoor extends PasswordChecker {
  public boolean passwordIsOk(String password) {
    return true;
  }
}

This is usually not OK, since the subclasses now have behaviour that is very incompatible to the original. If we really intend the class to be extended with other behaviour, a Chain of Responsibility would be better:

PasswordChecker passwordChecker =
  new DefaultPasswordChecker(null);
// or:
PasswordChecker passwordChecker =
  new OpenDoor(null);
// or:
PasswordChecker passwordChecker =
 new DefaultPasswordChecker(
   new OpenDoor(null)
 );

public interface PasswordChecker {
  boolean passwordIsOk(String password);
}

public final class DefaultPasswordChecker implements PasswordChecker {
  private PasswordChecker next;

  public DefaultPasswordChecker(PasswordChecker next) {
    this.next = next;
  }

  @Override
  public boolean passwordIsOk(String password) {
    if ("s3cret".equals(password)) return true;
    if (next != null) return next.passwordIsOk(password);
    return false;
  }
}

public final class OpenDoor implements PasswordChecker {
  private PasswordChecker next;

  public OpenDoor(PasswordChecker next) {
    this.next = next;
  }

  @Override
  public boolean passwordIsOk(String password) {
    return true;
  }
}

The problem becomes more apparent when a more complicated class calls its own methods, and those methods can be overridden. I sometimes encounter this when pretty-printing a data structure or writing HTML. Each method is responsible for some widget.

public class Page {
  ...;

  @Override
  public String toString() {
    PrintWriter out = ...;
    out.print("<!DOCTYPE html>");
    out.print("<html>");

    out.print("<head>");
    out.print("</head>");

    out.print("<body>");
    writeHeader(out);
    writeMainContent(out);
    writeMainFooter(out);
    out.print("</body>");

    out.print("</html>");
    ...
  }

  void writeMainContent(PrintWriter out) {
    out.print("<div class='article'>");
    out.print(htmlEscapedContent);
    out.print("</div>");
  }

  ...
}

I now create a subclass that adds a bit more styling:

class SpiffyPage extends Page {
  ...;


  @Override
  void writeMainContent(PrintWriter out) {
    out.print("<div class='row'>");

    out.print("<div class='col-md-8'>");
    super.writeMainContent(out);
    out.print("</div>");

    out.print("<div class='col-md-4'>");
    out.print("<h4>About the Author</h4>");
    out.print(htmlEscapedAuthorInfo);
    out.print("</div>");

    out.print("</div>");
  }
}

Now ignoring for a moment that this is not a very good way to generate HTML pages, what happens if I want to change the layout yet again? I'd have to create a SpiffyPage subclass that somehow wraps that content. What we can see here is an accidental application of the template method pattern. Template methods are well-defined extension points in a base class that are intended to be overridden.
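
For contrast, a minimal sketch of a deliberate template method, where the skeleton is locked down and the extension points are explicit (the Report class is hypothetical):

// Deliberate template method: the rendering skeleton is fixed and final,
// and only the named hooks are meant to be overridden by subclasses.
public abstract class Report {

    public final String render() {
        StringBuilder html = new StringBuilder();
        html.append("<header>").append(title()).append("</header>");
        html.append("<main>").append(body()).append("</main>");
        return html.toString();
    }

    // Explicit, documented extension points.
    protected abstract String title();
    protected abstract String body();
}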

And what happens if the base class changes? If the HTML contents change too much, this could break the layout provided by the subclasses. It is therefore not really safe to change the base class afterwards. This is not apparent if all your classes are in the same project, but very noticeable if the base class is part of some published software that other people build upon.

If this extension strategy was intended, we could have allowed the user to swap out the way each part is generated. Either there could be a Strategy for each block that can be provided externally, or we could nest Decorators. This would be equivalent to the above code, but far more explicit and far more flexible:

Page page = ...;
page.decorateLayout(current -> new SpiffyPageDecorator(current));
print(page.toString());

public interface PageLayout {
  void writePage(PrintWriter out, PageLayout top);
  void writeMainContent(PrintWriter out, PageLayout top);
  ...
}

public final class Page {
  private PageLayout layout = new DefaultPageLayout();

  public void decorateLayout(Function<PageLayout, PageLayout> wrapper) {
    layout = wrapper.apply(layout);
  }

  ...
  @Override public String toString() {
    PrintWriter out = ...;
    layout.writePage(out, layout);
    ...
  }
}

public final class DefaultPageLayout implements PageLayout {
  @Override public void writePage(PrintWriter out, PageLayout top) {
    out.print("<!DOCTYPE html>");
    out.print("<html>");

    out.print("<head>");
    out.print("</head>");

    out.print("<body>");
    top.writeHeader(out, top);
    top.writeMainContent(out, top);
    top.writeMainFooter(out, top);
    out.print("</body>");

    out.print("</html>");
  }

  @Override public void writeMainContent(PrintWriter out, PageLayout top) {
    ... /* as above*/
  }
}

public final class SpiffyPageDecorator implements PageLayout {
  private PageLayout inner;

  public SpiffyPageDecorator(PageLayout inner) {
    this.inner = inner;
  }

  @Override
  public void writePage(PrintWriter out, PageLayout top) {
    inner.writePage(out, top);
  }

  @Override
  public void writeMainContent(PrintWriter out, PageLayout top) {
    ...
    inner.writeMainContent(out, top);
    ...
  }
}

(The additional top parameter is necessary to make sure that the calls to writeMainContent go through the top of the decorator chain. This emulates a feature of subclassing called open recursion.)

If we have multiple decorators, we can now mix them more freely.

Far more common than the desire to slightly adapt existing functionality is the desire to reuse some part of an existing class. I have seen a case where someone wanted a class to which you could add items and then iterate over all of them. The correct solution would have been to use composition:

final class Thingies implements Iterable<Thing> {
  private ArrayList<Thing> thingList = new ArrayList<>();

  @Override public Iterator<Thing> iterator() {
    return thingList.iterator();
  }

  public void add(Thing thing) {
    thingList.add(thing);
  }

  ... // custom methods
}

Instead, they created a subclass:

class Thingies extends ArrayList<Thing> {
  ... // custom methods
}

This suddenly means that the whole interface of ArrayList has become part of our interface. Users can remove() things, or get() things at specific indices. Was that intended? Maybe. But often, we don't carefully think through all the consequences.

It is therefore advisable to

  • never extend a class without careful thought.
  • always mark your classes as final unless you intend for some of their methods to be overridden.
  • create interfaces where you want to swap out an implementation, e.g. for unit testing.

There are many examples where this “rule” has to be broken, but it usually guides you to a good, flexible design, and avoids bugs due to unintended changes in base classes (or unintended uses of the subclass as an instance of the base class).

Some languages have stricter enforcement mechanisms:

  • All methods are final by default and have to be marked explicitly as virtual
  • They provide private inheritance that doesn't inherit the interface but only the implementation.
  • They require base class methods to be marked as virtual, and require all overrides to be marked as well. This avoids problems where a subclass defines a new method, and a method with the same signature is later added to the base class without being intended as virtual.
amon
  • 132,749
  • 27
  • 279
  • 375
  • 3
    You deserve at least +100 for mentioning the "fragile base class problem". :) – David Arno May 12 '16 at 11:40
  • 7
    I'm not convinced by the points made here. Yes, the fragile base class is a problem, but final does not solve all the issues with changing an implementation. Your first example is bad because you are assuming you know all the possible use-cases for the PasswordChecker ("locking everyone out or allowing everyone access... is not OK" - says who?). Your last "therefore advisable..." list is really bad - you are basically advocating not extending anything and marking everything as final - which completely obliterates the usefulness of OOP, inheritance and code-reuse. – adelphus May 12 '16 at 17:00
  • 5
    Your first example isn't an example of the fragile base class problem. In the fragile base class problem, changes to a base class break a subclass. But in that example, your subclass doesn't follow the contract of a subclass. These are two different problems. (Additionally, it actually reasonable that under certain circumstances, you might disable a password checker (say for development)) – Winston Ewert May 12 '16 at 23:05
  • 2
    @adelphus, prefer composition to inheritence. Subclassing is very often a poor way to do code reuse. – Winston Ewert May 12 '16 at 23:13
  • 5
    "It avoids the fragile base class problem" - in the same way that killing yourself avoids being hungry. – user253751 May 12 '16 at 23:51
  • 9
    @immibis, its more like avoiding eating to avoid getting food poisioning. Sure, never eating would be a problem. But eating only in those places that you trust makes a lot of sense. – Winston Ewert May 13 '16 at 03:33
  • 1
    The first example also illustrates the point that there is a chance that you need to expose methods that need to be immutable for security reasons. If you need people to be able to use your API but you don't want them to be able to bypass the security of the system, `final` can be a useful tool. We're often keen to think about conceptual purity more than practical security, which is great for us, but risky for people relying on our code to be safe. – glenatron May 16 '16 at 11:32
33

I'm surprised that no one has yet mentioned Effective Java, 2nd Edition by Joshua Bloch (which should be required reading for every Java developer at least). Item 17 in the book discusses this in detail, and is titled: "Design and document for inheritance or else prohibit it".

I won't repeat all the good advice in the book, but these particular paragraphs seem relevant:

But what about ordinary concrete classes? Traditionally, they are neither final nor designed and documented for subclassing, but this state of affairs is dangerous. Each time a change is made in such a class, there is a chance that client classes that extend the class will break. This is not just a theoretical problem. It is not uncommon to receive subclassing-related bug reports after modifying the internals of a nonfinal concrete class that was not designed and documented for inheritance.

The best solution to this problem is to prohibit subclassing in classes that are not designed and documented to be safely subclassed. There are two ways to prohibit subclassing. The easier of the two is to declare the class final. The alternative is to make all the constructors private or package-private and to add public static factories in place of the constructors. This alternative, which provides the flexibility to use subclasses internally, is discussed in Item 15. Either approach is acceptable.
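
A minimal sketch of the second approach from the quote, a non-public constructor plus a public static factory (the class name is made up):

// Subclassing is effectively prohibited without the final keyword: the
// constructor is package-private, so classes outside this package cannot
// extend Temperature (the library keeps the option to subclass internally),
// and the public static factory is how clients obtain instances.
public class Temperature {
    private final double celsius;

    Temperature(double celsius) {   // package-private constructor
        this.celsius = celsius;
    }

    public static Temperature ofCelsius(double celsius) {
        return new Temperature(celsius);
    }

    public double inCelsius() {
        return celsius;
    }
}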

Daniel Pryden
  • 3,268
  • 1
  • 21
  • 21
21

One of the reasons final is useful is that it makes sure you cannot subclass a class in a way which would violate the parent class's contract. Such subclassing would be a violation of SOLID (most of all "L") and making a class final prevents it.

One typical example is making it impossible to subclass an immutable class in a way which would make the subclass mutable. In certain cases such a change of behavior could lead to very surprising effects, for example when you use something as a key in a map, thinking the key is immutable, while in reality you are using a subclass which is mutable.
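
A small sketch of why that matters for map keys (the CacheKey class is invented for illustration):

// If CacheKey were not final, a subclass could introduce mutable state that
// feeds into equals()/hashCode(), and a HashMap using it as a key could then
// silently lose entries once that state changes.
public final class CacheKey {
    private final String tenant;
    private final String resource;

    public CacheKey(String tenant, String resource) {
        this.tenant = tenant;
        this.resource = resource;
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof CacheKey)) {
            return false;
        }
        CacheKey key = (CacheKey) other;
        return tenant.equals(key.tenant) && resource.equals(key.resource);
    }

    @Override
    public int hashCode() {
        return 31 * tenant.hashCode() + resource.hashCode();
    }
}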

In Java, a lot of interesting security issues could be introduced if you were able to subclass String and make it mutable (or made it call back home when someone calls its methods, thus possibly pulling sensitive data out of the system) as these objects are passed around some internal code related to class loading and security.

Final is also sometimes helpful in preventing simple mistakes such as re-using the same variable for two things within a method, etc. In Scala, you are encouraged to use only val which roughly corresponds to final variables in Java, and actually any use of a var or non-final variable is looked at with suspicion.

Finally, compilers can, at least in theory, perform some extra optimizations when they know that a class or method is final: when you call a method on a final class, you know exactly which method will be called and don't have to go through a virtual method table to check for inheritance.

Andy
  • 10,238
  • 4
  • 25
  • 50
Michał Kosmulski
  • 3,474
  • 19
  • 18
  • 6
    *Finally, compilers can, at least in theory* => I've personally reviewed Clang's devirtualization pass, and I confirm it's used in practice. – Matthieu M. May 12 '16 at 11:51
  • But can't the compiler tell in advance that nobody's overriding a class or method regardless of whether or not its marked final? – JesseTG May 12 '16 at 12:37
  • 3
    @JesseTG If it has access to all the code at once, probably. What about separate file compilation, though? – Angew is no longer proud of SO May 12 '16 at 13:27
  • 3
    @JesseTG devirtualization or (monomorphic/polymorphic) inline caching is a common technique in JIT compilers, since the system knows which classes are currently loaded, and can deoptimize the code if the assumption of no overriding methods turn out to be false. However, an ahead of time compiler cannot. When I compile one Java class and code that uses that class, I can later compile a subclass and pass an instance to the consuming code. The simplest way is to add another jar to the front of the classpath. The compiler can't know about all of that, since this happens at run time. – amon May 12 '16 at 13:31
  • It is possible to change a string at runtime by using reflection to access the char array which the string uses as backing storage. I tested this in an older version of Java, but there is no reason to think it would not work in Java 8. So you can't assume that strings are immutable when doing security analysis. – MTilsted May 12 '16 at 17:11
  • @Angew, amon: Ah, thank you for the clarification. – JesseTG May 12 '16 at 21:34
  • @MTilsted Frankly, there are so many things which can break if people are modifying strings, that I tend to just hope people won't, and use coding standards to prevent it. For example, if the String came from the string literal "\", then changing it might be redefining "\" everywhere in the program. Or, imagine if some idiot changes the value of the string literal "". – Patrick M May 12 '16 at 23:42
  • 5
    @MTilsted You *can assume* that strings are immutable as mutating strings via reflection can be prohibited via a `SecurityManager`. Most programs don't use it, but they also don't run any non-trusted code. +++ You *have to assume* that strings are immutable as otherwise you get zero security and a bunch of arbitrary bugs as bonus. Any programmer assuming Java strings can change in productive code is surely insane. – maaartinus May 13 '16 at 01:09
  • @JesseTG The compiler can't tell, but at runtime the JVM can tell that there's only one method (i.e., a call site is monomorphic) and optimize the call. – David Conrad May 16 '16 at 00:44
  • @DavidConrad Nothing similar holds for C++, I assume? – JesseTG May 16 '16 at 01:05
  • @JesseTG For C++ it would have to be done by the linker, if at all, I guess, due to the same issue: separate compilation. (The optimizer needs to see the whole program.) But I don't know enough about C++ or the linker to say if this is commonly done, or ever done. – David Conrad May 16 '16 at 01:08
7

One reason is performance. The main reason, though, is that some classes have important behaviors or states that are not supposed to be changed in order for the system to work. For example, if I have a class "PasswordCheck" that was built by a team of security experts I hired, and this class communicates with hundreds of ATMs using well-studied and well-defined protocols, then allowing a newly hired guy fresh out of university to make a "TrustMePasswordCheck" class that extends it could be very harmful to my system; those methods are not supposed to be overridden, that's it.

JoulinRouge
  • 678
  • 3
  • 9
7

When I need a class, I'll write a class. If I don't need subclasses, I don't care about subclasses. I make sure that my class behaves as intended, and the places where I use the class assume that the class behaves as intended.

If anyone wants to subclass my class, I want to fully deny any responsibility for what happens. I achieve that by making the class "final". If you want to subclass it, remember that I didn't take subclassing into account while I wrote the class. So you have to take the class source code, remove the "final", and from then on anything that happens is fully your responsibility.

You think that's "not object oriented"? I was paid to make a class that does what it's supposed to do. Nobody paid me for making a class that could be subclassed. If you get paid to make my class reusable, you are welcome to do it. Start by removing the "final" keyword.

(Other than that, "final" often allows substantial optimisations. For example, in Swift "final" on a public class, or on a method of a public class, means that the compiler can fully know what code a method call will execute, and can replace dynamic dispatch with static dispatch (tiny benefit) and often replace static dispatch with inlining (possibly huge benefit)).

adelphus: What is so hard to understand about "if you want to subclass it, take the source code, remove the 'final', and it's your responsibility"? "final" equals "fair warning".

And I'm not paid to make reusable code. I am paid to write code that does what it's supposed to do. If I'm paid to make two similar bits of code, I extract the common parts because that's cheaper and I'm not paid to waste my time. Making code reusable that isn't reused is a waste of my time.

M4ks: You always make everything private that isn't supposed to be accessed from the outside. Again, if you want to subclass, you take the source code, change things to "protected" if you need, and take responsibility for what you do. If you think you need to access things that I marked private, you better know what you are doing.

Both: Subclassing is a tiny, tiny portion of reusing code. Creating building blocks that can be adapted without subclassing is much more powerful and hugely benefits from "final" because the users of the blocks can rely on what they get.

gnasher729
  • 42,090
  • 4
  • 59
  • 119
  • 7
    -1 This answer describes everything that is wrong with software developers. If someone wants to reuse your class by subclassing, let them. Why would it be your responsibility how they use (or abuse) it? Basically, you're using **final** as a f*k you, you're not using my class. *"Nobody paid me for making a class that could be subclassed"*. Are you serious? That's exactly why software engineers are employed - to create solid, reusable code. – adelphus May 13 '16 at 13:20
  • 5
    -1 Just make everything private, so nobody will ever even think of subclassing... – M4ks May 13 '16 at 16:32
  • 3
    @adelphus While the wording of this answer is blunt, bordering on harsh, it isn't a "wrong" point of view. In fact it's the same point of view as the majority of answers on this question so far only with a less clinical tone. – NemesisX00 May 13 '16 at 17:28
  • +1 for mentioning that you can remove 'final'. It is arrogant to claim foreknowledge of all possible uses of your code. Yet it is humble to make it clear that you cannot maintain some possible uses, and that those uses would require maintaining a fork. – gmatht May 17 '16 at 04:11
4

Let's imagine that the SDK for a platform ships the following class:

class HTTPRequest {
   void get(String url, String method) { /* issue the request */ }

   void get(String url) {
       get(url, "GET");
   }

   void post(String url) {
       get(url, "POST");
   }
}

An application subclasses this class:

class MyHTTPRequest extends HTTPRequest {
    private int requestCounter = 0;

    @Override
    void get(String url, String method) {
        requestCounter++;
        super.get(url, method);
    }
}

All is fine and well, but someone working on the SDK decides that passing a method to get is silly, and makes the interface better while taking care to preserve backwards compatibility.

class HTTPRequest {
   @Deprecated
   void get(String url, String method) {
       request(url, method);
   }

   void get(String url) {
       request(url, "GET");
   }

   void post(String url) {
       request(url, "POST");
   }

   void request(String url, String method) { /* issue the request */ }
}

Everything seems fine, until the application from above is recompiled against the new SDK. Suddenly, the overridden get method isn't being called anymore, and the requests aren't being counted.

This is called the fragile base class problem, because a seemingly innocuous change results in a subclass breaking. Any change to which methods are called inside the class might cause a subclass to break, which tends to mean that almost any change might cause a subclass to break.

Final prevents anybody from subclassing your class. That way, the methods called inside the class can be changed without worrying that somewhere, someone depends on exactly which method calls are made.

Winston Ewert
  • 24,732
  • 12
  • 72
  • 103
1

Final effectively means that your class is safe to change in the future without impacting any downstream inheritance-based classes (because there are none), and without raising any issues around thread safety of the class (I think there are cases where the final keyword on a field prevents some thread-based hijinks).
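
The thread-safety aside refers to final fields rather than final classes; here is a minimal sketch of the guarantee they give, with an illustrative class name:

import java.util.Map;

// The Java memory model gives final fields special guarantees: once the
// constructor finishes, any thread that obtains a reference to this object
// sees the fully initialised settings map without extra synchronization.
public class Configuration {
    private final Map<String, String> settings;

    public Configuration(Map<String, String> source) {
        this.settings = Map.copyOf(source); // defensive, immutable copy (Java 10+)
    }

    public String get(String key) {
        return settings.get(key);
    }
}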

Final means that you are free to change how your class works without any unintended changes in behavior creeping into other people's code that relies on yours as a base.

As an example, I write a class called HobbitKiller, which is great, because all hobbits are tricksie and should probably die. Scratch that, they all definitely need to die.

You use this as a base class and add in an awesome new method to use a flamethrower, but use my class as a base because I have a great method for targeting the hobbits (in addition to being tricksie, they're quick), which you use to help aim your flamethrower.

Three months later I change the implementation of my targeting method. Now, at some future point when you upgrade your library, unbeknownst to you, your class's actual runtime implementation has fundamentally changed because of a change in the superclass method you depend on (and generally do not control).

So for me to be a conscientious developer, and ensure smooth hobbit death into the future using my class, I have to be very, very careful with any changes that I make to any classes that can be extended.

By removing the ability to extend except in cases where I am specifically intending to have the class extended, I save myself (and hopefully others) a lot of headaches.

  • 3
    If your targeting method is going to change, why did you ever make it public? And if you change the behaviour of a class significantly, you need another version rather than an overprotective `final` – M4ks May 13 '16 at 16:30
  • I have no idea why this would change. Hobbits are tricksie and the change was required. The point is that if I build it as a final I prevent inheritance which protects other people from having my changes infect their code. – Scott Taylor Nov 07 '16 at 20:18
0

To me it's a matter of design.

Let's suppose I have a program that calculates salaries for employees. If I have a class that returns the number of working days between two dates based on the country (one class for each country), I will make that class final and provide a method that lets each enterprise register a free day in its own calendar.

Why? Simple. Let's say a developer wants to inherit the base class WorkingDaysUSA in a class WorkingDaysUSAmyCompany and modify it to reflect that his enterprise will be closed for strike/maintenance/whatever reason on the 2nd of March.

The calculations for client orders and deliveries will reflect the delay and work accordingly when they call WorkingDaysUSAmyCompany.getWorkingDays() at runtime, but what happens when I calculate vacation time? Should I add the 2nd of March as a holiday for everyone? No. But since the programmer used inheritance and I didn't protect the class, this can lead to confusion.

Or let's say they inherit and modify the class to reflect that this company doesn't work Saturdays, whereas in that country people work half-days on Saturdays. Then an earthquake, an electricity crisis or some other circumstance makes the president declare three non-working days, as happened recently in Venezuela. If the method of the inherited class already subtracted each Saturday, my modifications to the original class could lead to subtracting the same day twice. I would have to go to each subclass at each client and verify that all the changes are compatible.

Solution? Make the class final and provide an addFreeDay(companyID mycompany, Date freeDay) method. That way you are sure that when you call a WorkingDaysCountry class, it's your main class and not a subclass.
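
A rough sketch of how such a final calendar class might look (class, method, and type names here are illustrative, not from any real codebase):

import java.time.DayOfWeek;
import java.time.LocalDate;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// The calendar class is final: companies extend behaviour through data
// (registered free days), never through subclassing.
public final class WorkingDaysCalendar {
    private final Map<String, Set<LocalDate>> companyFreeDays = new HashMap<>();

    public void addFreeDay(String companyId, LocalDate freeDay) {
        companyFreeDays.computeIfAbsent(companyId, id -> new HashSet<>()).add(freeDay);
    }

    public long workingDaysBetween(String companyId, LocalDate from, LocalDate toExclusive) {
        Set<LocalDate> freeDays = companyFreeDays.getOrDefault(companyId, Set.of());
        return from.datesUntil(toExclusive)
                   .filter(day -> day.getDayOfWeek() != DayOfWeek.SATURDAY
                               && day.getDayOfWeek() != DayOfWeek.SUNDAY
                               && !freeDays.contains(day))
                   .count();
    }
}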

bns
  • 139
  • 2
0

The use of final is not in any way a violation of SOLID principles. It is, unfortunately, extremely common to interpret the Open/Closed Principle ("software entities should be open for extension but closed for modification") as meaning "rather than modify a class, subclass it and add new features". This isn't what was originally meant by it, and is generally held not to be the best approach to achieving its goals.

The best way of complying with OCP is to design extension points into a class, by specifically providing abstract behaviours that are parameterised by injecting a dependency into the object (e.g. using the Strategy design pattern). These behaviours should be designed around an interface so that new implementations do not rely on inheritance.
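
A minimal sketch of that idea (the DiscountPolicy and Checkout names are invented for illustration):

// The class itself is final and closed for modification, but remains open
// for extension through the injected strategy.
interface DiscountPolicy {
    double apply(double price);
}

final class Checkout {
    private final DiscountPolicy discountPolicy;

    Checkout(DiscountPolicy discountPolicy) {
        this.discountPolicy = discountPolicy;
    }

    double total(double price) {
        // New behaviour arrives as a new DiscountPolicy implementation,
        // never as a subclass of Checkout.
        return discountPolicy.apply(price);
    }
}

// Usage: new Checkout(price -> price * 0.9).total(100.0) applies a 10% discount.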

Another approach is to implement your class with its public API as an abstract class (or interface). You can then produce an entirely new implementation which can plug in to the same clients. If your new implementation requires broadly similar behaviour to the original, you can either:

  • use the Decorator design pattern to reuse the existing behaviour of the original, or
  • refactor the parts of the behaviour that you want to keep into a helper object and use the same helper in your new implementation (refactoring isn't modification).
Jules
  • 17,614
  • 2
  • 33
  • 63