37

I am refactoring a PHP OOP legacy website.

I am so tempted to start using 'final' on classes to "make it explicit that the class is currently not extended by anything". This might save lots of time if I come to a class and I am wondering if I can rename/delete/modify a protected property or method. If I really want to extend a class I can just remove the final keyword to unlock it for extending.

I.e. if I come to a class that has no child classes, I can record that knowledge by marking the class as final. The next time I come to it I would not have to re-search the codebase to see if it has children, thus saving time during refactoring.
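
For illustration, something like this (a hypothetical Invoice class, just to sketch the idea):

```php
<?php

// Hypothetical example: `final` records that nothing currently extends
// this class, so protected members can be renamed or removed without
// first hunting the codebase for subclasses.
final class Invoice
{
    protected $total = 0.0;   // safe to refactor while the class stays final

    public function addLine($amount)
    {
        $this->total += $amount;
    }

    public function getTotal()
    {
        return $this->total;
    }
}

// `class DiscountedInvoice extends Invoice {}` now fails with a fatal
// error. If a subclass is ever genuinely needed, removing the `final`
// keyword becomes an explicit, deliberate decision.
```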

It all seems like a sensible time-saving idea... but I have often read that classes should only be made 'final' on rare/special occasions.

Maybe it screws up Mock object creation or has other side effects that I am not thinking of.

What am I missing?

JW01
  • 3,579
  • 4
  • 22
  • 22
  • 3
    If you have full control of your code, then it is not terrible, I suppose, but a bit on the OCD side. What if you miss a class (e.g. it has no children but it ain't final)? It is better to let a tool scan the code and tell you whether some class has children. As soon as you package this as a library and give it to someone else, dealing with 'final' becomes a pain. – Job Jul 02 '11 at 13:58
  • For example, MbUnit is an open framework for .NET unit testing, and MSTest ain't (surprise surprise). As a result, you HAVE to shell out 5k to MSFT just because you want to run some tests the MSFT way. Other test runners cannot run the test DLL that is built with MSTest – all because of the final classes that MSFT used. – Job Jul 07 '11 at 04:24
  • 3
    Some blog post about the topic: [Final Classes: Open for Extension, Closed for Inheritance](http://verraes.net/2014/05/final-classes-in-php/) (May 2014; by Mathias Verraes) – hakre May 18 '14 at 16:16
  • Nice article. It speaks my thoughts about this. I did not know about those annotations so that was handy. Cheers. – JW01 May 21 '14 at 19:54
  • "PHP 5 introduces the final keyword, which prevents child classes from overriding a method by prefixing the definition with final. If the class itself is being defined final then it cannot be extended." https://www.php.net/manual/en/language.oop5.final.php It is not bad at all. – Black Sep 24 '19 at 07:19

7 Answers

72

I have often read that classes should only be made 'final' on rare/special occasions.

Whoever wrote that is wrong. Use final liberally, there’s nothing wrong with that. It documents that a class wasn’t designed with inheritance in mind, and this is usually true for all classes by default: designing a class that can be meaningfully inherited from takes more than just removing a final specifier; it takes a lot of care.

So using final by default is by no means bad. In fact, amongst OOP experts it’s widely agreed that final should be the default, e.g. Jon Skeet:

Classes should be sealed by default in C#

Or Joshua Bloch:

Design and document for inheritance or else prohibit it [Effective Java, 3rd Ed, Item 19]

Or Scott Meyers [More Effective C++, Item 33].

Which is why modern OO languages such as Kotlin have final-by-default classes.

You wrote:

Maybe it screws up Mock object creation …

And this is indeed a caveat, but you can always fall back on interfaces if you need to mock your classes. This is certainly superior to making all classes open to inheritance just for the purpose of mocking.
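
A rough sketch of that interface-based approach, with hypothetical names (the PHPUnit calls in the final comment are just one way to mock the interface):

```php
<?php

// Hypothetical sketch: the consumer depends on an interface, so the
// concrete class can stay final while tests mock the interface instead.
interface MailerInterface
{
    public function send($recipient, $subject, $body);
}

final class SmtpMailer implements MailerInterface
{
    public function send($recipient, $subject, $body)
    {
        // real SMTP work would go here
    }
}

final class WelcomeService
{
    private $mailer;

    public function __construct(MailerInterface $mailer)
    {
        $this->mailer = $mailer;
    }

    public function welcome($recipient)
    {
        $this->mailer->send($recipient, 'Welcome', 'Hello and welcome!');
    }
}

// In a PHPUnit test the *interface* is mocked, so SmtpMailer being
// final never gets in the way, e.g.:
//   $mailer = $this->createMock(MailerInterface::class);
//   $mailer->expects($this->once())->method('send');
//   (new WelcomeService($mailer))->welcome('user@example.com');
```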

Konrad Rudolph
  • 13,059
  • 4
  • 55
  • 75
  • 3
    +1: Since it screws up mocking, consider creating interfaces or abstract classes for each "final" class. – Spoike Jul 14 '11 at 11:49
  • Well that puts a spanner in the works. This late arrival contradicts all the other answers. Now I do not know what to believe :o. (Unticked my accepted answer and hold off decision during re-trial) – JW01 Jul 14 '11 at 16:18
  • 1
    You can consider the Java API itself and how often you will find the usage of the 'final' keyword. In fact, it is not used very often. It is used in cases where performance might get bad due to the dynamic binding taking place when "virtual" methods are called (in Java all methods are virtual unless they are declared 'final' or 'private'), or when introducing custom subclasses of a Java API class might raise consistency issues, such as subclassing 'java.lang.String'. A Java String is a value object per definition and must not change its value later on. A subclass could break this rule. – Jonny Dee Jul 16 '11 at 20:46
  • 8
    @Jonny Most of the Java API is badly designed. Remember that the bulk of it is more than ten years old and many best practices have been developed in the meantime. But don’t take my word for it: the Java engineers themselves say so, first and foremost Josh Bloch who has written in detail about this, e.g. in *Effective Java*. Rest assured that if the Java API were developed today, it would look *very* different, and `final` would play a much bigger role. – Konrad Rudolph Jul 17 '11 at 10:07
  • 4
    I'm still not convinced that using 'final' more often has become a best practice. In my opinion, 'final' should really be used only if performance requirements dictate it, or if you must ensure consistency cannot be broken by subclasses. In all other cases the necessary documentation should go into, well, the documentation (which needs to be written anyway), and not into the source code as keywords that enforce certain usage patterns. To me, that's kind of playing god. Using annotations would be a better solution. One could add a '@final' annotation and the compiler could issue a warning. – Jonny Dee Jul 17 '11 at 17:47
  • 1
    "Jon Skeet" link is dead. Probably it was moved to [this SO question](http://stackoverflow.com/a/406790/264047). – Alexander Malakhov Aug 18 '16 at 05:49
  • 1
    The answer only claims that "final" is documentation, but it does more than document. It limits the power of other people and code using the class. If it's to provide a compelling case for "final", it ought to address why that limitation of power is acceptable, IMO. – Corrodias Dec 06 '22 at 09:25
  • 1
    @Corrodias I don’t really know what improvement you want to see. Would replacing “documents” with “documents […] and enforces this limitation” be better? To be clear, limitations of power in programming languages are almost without exception a *good* thing. This is why we have a type system, after all. It’s the opposite (= permissiveness) which needs to justify its acceptability. – Konrad Rudolph Dec 06 '22 at 11:24
  • @KonradRudolph Dynamically typed languages yet have a place, I'm sure. But that's an interesting perspective. I think making the case for less power being better is a vital aspect of explaining why "always final" is the better approach, because that's not a settled debate. Then again, I can't remember the last time I extended a foreign class, but I do remember the last time I had to use reflection to work around a bug. – Corrodias Dec 06 '22 at 17:09
  • Any language that makes classes non-final by default has made a mistake. C# made that mistake for example. – usr Dec 06 '22 at 22:49
  • @Corrodias On the contrary, this is pretty settled: there is an *extremely wide* agreement amongst OOP experts that “final by default”/“make extensibility explicit” is, in fact, the right design decision, and that not doing so is a mistake. I am not aware of *any* OOP thought leader who disagrees (I’m sure there must be a few, but they’d be in the minority, by far). – Konrad Rudolph Dec 07 '22 at 08:44
  • @KonradRudolph Java takes this non-final-by-default mistake to the extreme. Methods are virtual by default. This is dangerous design. I'm sure many compatibility issues are caused by the fact that API consumers override things that were not designed to be overridden. – usr Dec 07 '22 at 16:31
10

If you want to leave a note to yourself that a class has no sub-classes, then by all means do so and use a comment; that's what they are for. The "final" keyword is not a comment, and using language keywords just to signal something to you (and only you would ever know what it means) is a bad idea.

edited by original author to add: I completely disagree with this now. I cannot even create a model of my mental state 11 years ago that would explain why I would say this. I think this answer, and my comments defending it below, are ridiculous. The accepted answer is right.

Jeremy
  • 4,609
  • 22
  • 22
  • 22
    This is completely false! “ using language keywords just to signal something to you […] is a bad idea.” – On the contrary, that is *exactly* what language keywords are there for. Your tentative caveat, “and only you would ever know what it means”, is of course correct but doesn’t apply here; the OP is using `final` as expected. There’s nothing wrong with that. And using a language feature to enforce a constraint is *always* superior to using a comment. – Konrad Rudolph Jul 14 '11 at 06:51
  • 9
    @Konrad: Except `final` should denote "No subclass of this class should ever be created" (for legal reasons or something), not "this class currently has no children, so I'm still safe to mess with its protected members". The intent of `final` is the very antithesis of "freely editable", and a `final` class shouldn't even have any `protected` members! – SF. Jul 14 '11 at 07:10
  • 1
    @Konrad Rudolph Please re-read the original question. He is not using final for the purpose of sealing the class. He is using it as a comment. – Jeremy Jul 14 '11 at 13:32
  • @Jeremy The difference is minuscule. The purpose is generally to show that a class isn’t inherited from. This is `final`. Nothing to worry about. – Konrad Rudolph Jul 14 '11 at 13:37
  • 4
    @Konrad Rudolph It is not minuscule at all. If other people later want to extend his classes, there is a big difference between a comment that says "not extended" and a keyword that says "may not extend". – Jeremy Jul 14 '11 at 13:39
  • @Jeremy But classes are by default fundamentally not apt to extension, unless specifically designed for it. So the difference is the same, and `final` is still an apt description. – Konrad Rudolph Jul 14 '11 at 14:08
  • 2
    @Konrad Rudolph That sounds like a great opinion to bring to a question about OOP language design. But we know how PHP works, and it takes the other approach of allowing extension unless sealed. Pretending that it's ok to conflate keywords because "that's how it should be" is just defensiveness. – Jeremy Jul 14 '11 at 14:35
  • 3
    @Jeremy I’m not conflating keywords. The keyword `final` means, “this class is not to be extended [for now].” Nothing more, nothing less. Whether PHP was designed with this philosophy in mind is irrelevant: it *has* the `final` keyword, after all. Secondly, arguing from PHP’s design is bound to fail, given how patchworky and just overall badly PHP is designed. – Konrad Rudolph Jul 14 '11 at 14:38
  • 3
    If you find in the future that you want to extend your `final` class after all, you can look at the class design, decide whether it is safe to extend, fix it if it isn't, then remove the `final` keyword. In effect, you refactor the class from `final` to non-`final`. Then you can extend it. Seems reasonable to me. – David K Jun 23 '14 at 13:20
  • 1
    I just got drawn back here by a recent upvote. I no longer think this answer or the comments defending it have any merit. I hope I've learned a lot about software design since saying this. – Jeremy Dec 08 '22 at 21:42
6

There is a nice article about "When to declare classes final". A few quotes from it:

TL;DR: Make your classes always final, if they implement an interface, and no other public methods are defined

Why do I have to use final?

  1. Preventing massive inheritance chain of doom
  2. Encouraging composition
  3. Force the developer to think about user public API
  4. Force the developer to shrink an object's public API
  5. A final class can always be made extensible
  6. extends breaks encapsulation
  7. You don't need that flexibility
  8. You are free to change the code

When to avoid final:

Final classes only work effectively under following assumptions:

  1. There is an abstraction (interface) that the final class implements
  2. All of the public API of the final class is part of that interface

If one of these two pre-conditions is missing, then you will likely reach a point in time when you will make the class extensible, as your code is not truly relying on abstractions.
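
A rough illustration of both preconditions holding, with hypothetical names (not taken from the article):

```php
<?php

// Both preconditions hold: the final class implements an interface, and
// its entire public API is declared on that interface. Callers depend on
// TaxCalculatorInterface, never on the concrete class.
interface TaxCalculatorInterface
{
    public function taxFor($netAmount);
}

final class GermanTaxCalculator implements TaxCalculatorInterface
{
    public function taxFor($netAmount)
    {
        return $netAmount * 0.19;
    }
}

// A project that needs different behaviour does not subclass; it writes
// its own implementation of the interface.
final class SwissTaxCalculator implements TaxCalculatorInterface
{
    public function taxFor($netAmount)
    {
        return $netAmount * 0.081;
    }
}
```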

P.S. Thanks to @ocramius for great reading!

Nikita U.
  • 173
  • 2
  • 7
5

"final" for a class means: You want a subclass? Go ahead, delete the "final", subclass as much as you like, but don't complain to me if it doesn't work. You're on your own.

When a class can be subclassed, its behavior that others rely upon must be described in abstract terms that subclasses obey. Callers must be written to expect some variability. Documentation must be written carefully; you can't tell people "look at the source code" because the source code isn't there yet. That's all effort. If I don't expect a class to be subclassed, that's unnecessary effort. "final" clearly says that this effort hasn't been made and gives fair warning.
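
As a sketch of what that effort looks like (hypothetical names, not from the answer), a base class designed for subclassing has to spell out its contract, while a final class never has to make such promises:

```php
<?php

// Hypothetical sketch of the effort "designing for inheritance" implies:
// the base class has to state, in abstract terms, what subclasses may
// rely on and what they must uphold.
abstract class ReportExporter
{
    /**
     * Template method. Guaranteed to call formatRow() exactly once per
     * row, in the order given; subclasses must not assume anything else
     * about how the rows are gathered.
     */
    public function export(array $rows)
    {
        $lines = array();
        foreach ($rows as $row) {
            $lines[] = $this->formatRow($row);
        }
        return implode("\n", $lines);
    }

    /**
     * Extension point: must return a single line without a trailing
     * newline and must not have side effects.
     */
    abstract protected function formatRow(array $row);
}

// A class that never makes such promises is simpler to write and to
// change; `final` honestly advertises that they were never made.
```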

gnasher729
  • 42,090
  • 4
  • 59
  • 119
1

Using 'final' takes away the freedom of others who want to use your code.

If the code you write is only for you and will never be released to the public or a customer, then of course you can do with your code what you want. Otherwise, you prevent others from building upon your code. Too often have I had to work with an API that would have been easy to extend for my needs, but I was hindered by 'final'.

Also, there is often code that would be better made protected rather than private. Sure, private means "encapsulation" and hides things considered to be implementation details. But as an API programmer I might as well document the fact that method xyz is considered an implementation detail and, thus, may be changed/deleted in a future version. Everyone who relies on such code in spite of the warning does so at their own risk. But they can actually do it, reuse (hopefully already tested) code, and come up with a solution faster.
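
A sketch of that documentation-over-keywords approach, with hypothetical names:

```php
<?php

// Hypothetical sketch: keep the method overridable (protected), but
// document that it is an implementation detail used at one's own risk,
// instead of locking it down with `private` or `final`.
class ReportGenerator
{
    public function generate(array $data)
    {
        return $this->renderRows($data);
    }

    /**
     * @internal Implementation detail: may be renamed, changed or removed
     *           in any future version. Override or call at your own risk.
     */
    protected function renderRows(array $data)
    {
        return implode("\n", array_map('json_encode', $data));
    }
}
```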

Of course, if the API implementation is open source one can just remove the 'final' or make methods 'protected', but then you have changed the code and need to track your changes in the form of patches.

However, if the implementation is closed source you are left with finding a workaround or, in the worst case, with switching to another API with fewer restrictions on customization/extension.

Note that I don't find 'final' or 'private' evil, but I think they are just used too often because the programmer didn't think about their code in terms of code reuse and extension.

Dan Lugg
  • 113
  • 5
Jonny Dee
  • 947
  • 5
  • 12
  • 2
    "Inheritance is not code reuse". – quant_dev Jul 02 '11 at 21:00
  • 2
    Ok, if you insist on a perfect definition you're right. Inheritance expresses an 'is-a' relationship and it is this fact which allows for polymorphism and provides a way of extending an API. But in practice, inheritance is very very often used to reuse code. This is especially the case with multiple inheritance. And as soon as you call code of a super class you are actually reusing code. – Jonny Dee Jul 02 '11 at 21:33
  • @quant_dev: Reusability in OOP means first of all polymorphism. And inheritance is a critical point for polymorphic behaviour (sharing the interface). – Falcon Jul 14 '11 at 07:30
  • @Falcon Sharing interface != sharing code. At least not in the usual sense. – quant_dev Jul 18 '11 at 07:52
  • 4
    @quant_dev: You've got reusability in the OOP sense wrong. It means that a single method can operate on many datatypes, thanks to interface sharing. That's a major part of OOP code reuse. And inheritance automatically lets us share interfaces, thus enhancing code reusability greatly. That's why we talk about reusability in OOP. Just creating libraries with routines is nearly as old as programming itself and can be done with almost every language; it's common. Code reuse in OOP is more than that. – Falcon Jul 18 '11 at 08:32
  • @Falcon OK, you're right. – quant_dev Jul 18 '11 at 08:43
0

I would like to add some nuance to the answer provided by Konrad Rudolph. While his response is certainly true in most contexts, it does not mention the situation where the code may be used as a dependency of another project.

The project that depends on this code may need to modify the behavior of the final class for a use case that was not anticipated by the original creator of the class (it is impossible to anticipate all situations), and marking the class as final could be quite problematic.

Let's imagine a scenario: I have coded a Symfony library that allows sending Batman-themed emails. The template is customizable, as is often the case with Symfony libraries, thanks to Symfony's native template overriding mechanism.

I have created a service that contains a method sendBatmanEmail(BatmanEmailDto $emailDto). To use this service, the caller must initialize a BatmanEmailDto, which offers two setters, setSubject and setText. The sendBatmanEmail method does its job (at least 300 lines of code!), then passes the DTO to the template for rendering. The DTO is marked final.
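
A rough sketch of the shapes described above; the class and method names come from this answer, but the bodies are invented for illustration:

```php
<?php

// Because the DTO is final, a consuming project cannot subclass it to
// pass extra data through to the template.
final class BatmanEmailDto
{
    private $subject;
    private $text;

    public function setSubject($subject) { $this->subject = $subject; }
    public function setText($text)       { $this->text = $text; }

    public function getSubject() { return $this->subject; }
    public function getText()    { return $this->text; }
}

class BatmanEmailService
{
    public function sendBatmanEmail(BatmanEmailDto $emailDto)
    {
        // ~300 lines of work in the real method, after which the DTO is
        // handed to the template for rendering.
    }
}
```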

In most cases, it works fine with a subject and a text! Some slight modifications are possible via the Symfony template overriding engine, which is the icing on the cake.

But if tomorrow a project wants to have two distinct texts in the template, or add an image, manipulate a richer entity, etc., the use of the library will be called into question entirely, since it will be impossible to extend the BatmanEmailDto and pass more data to the template. The sendBatmanEmail method will need to be rewritten entirely (or maybe entirely copy/pasted? Good luck with the maintenance!)

All of this is to say that if you are developing a library and not a final project, please avoid using final classes. It looks nice on paper, but for the projects that will one day depend on yours, it rarely seems like a good idea.

And I forgot to mention the case where you want to replace a class via dependency injection configuration, which is a very common kind of usage in Magento, for example (and is very maintainable if used with caution, when and only when it is useful).

PS: if some developer is not clever enough to understand that your class is not made to be extended, he will probably not be clever enough not to remove your final keyword either.

  • "if you are developing a library and not a final project, please avoid using final classes." I dont think this is good advice. Even for libraries with certain extension points, it is a good idea to default to `final` and only open classes/interfaces up explicitly for those extension points. – sfiss May 03 '23 at 13:55
-1

One thing you may not have thought of is the fact that ANY change to a class means that it has to undergo new QA testing.

Do not mark things as final unless you really, really mean it.

  • Thanks. Could you explain that further. Do you mean that if I mark a class as `final` (a change) then I just have to re-test it? – JW01 Jul 02 '11 at 12:18
  • 5
    I would say this more strongly. Do not mark things as final unless an extension cannot be made to work because of the design of the class. – S.Lott Jul 02 '11 at 12:18
  • Thanks S.Lott - I am trying to understand how bad use of final affects the code/project. Have you experienced any horror stories that have led you to be so dogmatic about the sparing use of `final`? Is this first-hand experience? – JW01 Jul 02 '11 at 12:27
  • 2
    @JW01: A `final` class has one primary use case. You have polymorphic classes that you do not want extended because a subclass might break polymorphism. Don't use `final` unless you must **prevent** the creation of subclasses. Other than that, it's useless. – S.Lott Jul 02 '11 at 12:36
  • @JW01, in a production setting you may not be _ALLOWED_ to remove a final, because this is not the code the customer tested and approved. You may, however, be allowed to add extra code (for testing) but you may not be able to do what you want to because you cannot extend your class. –  Jul 02 '11 at 12:40
  • @JW01: My comment is not a separate answer. It's clarification on this answer. It's the same thing: "Do not mark things as final unless you really, really mean it". – S.Lott Jul 02 '11 at 14:13
  • Do not mark a class as non-final unless you have spent the effort needed to make it subclassable. The meaning is "you can't subclass it unless you remove the 'final' keyword". It's not "you can't subclass it, and someone will cut your throat if you try". – gnasher729 Dec 12 '22 at 15:35