35

After 10+ years of Java/C# programming, I find myself creating either:

  • abstract classes: contract not meant to be instantiated as-is.
  • final/sealed classes: implementation not meant to serve as base class to something else.

I can't think of any situation where a simple "class" (i.e. neither abstract nor final/sealed) would be "wise programming".

Why should a class be anything other than "abstract" or "final/sealed" ?
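In Java terms, the two categories I keep reaching for look like this (a minimal sketch; the names are illustrative):

```java
// Contract not meant to be instantiated as-is.
abstract class Shape {
    abstract double area();
}

// Implementation not meant to serve as a base class to something else.
final class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override
    double area() { return Math.PI * radius * radius; }
}
```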

EDIT

This great article explains my concerns far better than I can.

trent
  • 242
  • 1
  • 8
  • 24
    Because it's called the `Open/Closed principle`, not the `Closed Principle.` – StuperUser Nov 21 '12 at 15:10
  • 4
    What type of things are you writing professionally? It may very well influence your opinion on the matter. –  Nov 21 '12 at 15:14
  • Well, I know some platform devs who seal everything, because they want to minimize the inheritance interface. – K.. Nov 21 '12 at 15:20
  • 5
    @StuperUser: That's not a reason, it's a platitude. The OP isn't asking what the platitude is, he's asking *why*. If there's no reason behind a principle, there's no reason to pay attention to it. – Michael Shaw Nov 21 '12 at 16:07
  • @MichaelShaw Indeed, that's why I put it as a comment rather than an answer, just missed out the tongue-in-cheek emotion. – StuperUser Nov 21 '12 at 16:17
  • 4
    I wonder how many UI frameworks would break by changing the Window class to sealed. – Reactgular Nov 21 '12 at 16:53
  • Related question: [Any good examples of inheriting from a concrete class?](http://stackoverflow.com/questions/7496010/any-good-examples-of-inheriting-from-a-concrete-class) – sleske Nov 22 '12 at 10:30
  • [Don't *ever* seal a class unless you *know* you'll have support issues with your clients.](http://programmers.stackexchange.com/a/210481/4261) – cregox Sep 04 '13 at 19:40
  • If you are an application developer who controls his own class trees this can go a long way. As a library developer however you would not have many satisfied users. Although you could offer everything as abstract and leave it up to the application programmer to write a descending class for anything he wants to use... it wouldn't be very convenient. – Martin Maat Jan 16 '22 at 08:33

13 Answers

50

Ironically, I find the opposite: the use of abstract classes is the exception rather than the rule and I tend to frown on final/sealed classes.

Interfaces are a more typical design-by-contract mechanism because you do not specify any internals--you are not worried about them. It allows every implementation of that contract to be independent. This is key in many domains. For example, if you were building an ORM it would be very important that you can pass a query to the database in a uniform way, but the implementations can be quite different. If you use abstract classes for this purpose, you end up hard-wiring in components that may or may not apply to all implementations.
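The ORM point can be sketched in Java; the interface and class names here are hypothetical, not taken from any real ORM:

```java
import java.util.List;

// The contract: callers pass queries in a uniform way and depend only on
// this interface, so every backend can implement it independently.
interface QueryExecutor {
    List<String> execute(String query);
}

// One backend among many. A real implementation would talk to a database;
// this stub just echoes the query to keep the sketch self-contained.
final class EchoExecutor implements QueryExecutor {
    @Override
    public List<String> execute(String query) {
        return List.of("executed: " + query);
    }
}
```

An abstract base class in place of `QueryExecutor` would force every backend to inherit whatever internals the base happened to hard-wire in.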

For final/sealed classes, the only excuse I can ever see for using them is when it is actually dangerous to allow overriding--maybe an encryption algorithm or something. Other than that, you never know when you may wish to extend the functionality for local reasons. Sealing a class restricts your options for gains that are non-existent in most scenarios. It is far more flexible to write your classes in a way that they can be extended later down the line.

This latter view has been cemented for me by working with third-party components whose sealed classes prevented integrations that would have made life a lot easier.

Michael
  • 6,437
  • 2
  • 25
  • 34
  • 21
    +1 for the last paragraph. I've frequently found too much encapsulation in 3rd party libraries (or even standard library classes!) to be a bigger cause of pain than too little encapsulation. – Mason Wheeler Nov 21 '12 at 15:20
  • 9
    The usual argument for sealing classes is that you cannot predict the many different ways that a client could possibly override your class, and therefore you cannot make any guarantees about its behavior. See http://blogs.msdn.com/b/ericlippert/archive/2004/01/22/61803.aspx?PageIndex=2 – Robert Harvey Nov 21 '12 at 15:30
  • 14
    @RobertHarvey I am familiar with the argument and it sounds great in theory. You as the object designer cannot foresee how people may extend your classes--that is precisely why they should *not* be sealed. You cannot support everything--fine. Don't. But don't take options away, either. – Michael Nov 21 '12 at 15:37
  • 7
    @RobertHarvey isn't that just people breaking LSP getting what they deserve? – StuperUser Nov 21 '12 at 15:42
  • 4
    I'm not a big fan of sealed classes either, but I could see why some companies like Microsoft (who frequently have to support things that they did not break themselves) find them appealing. Users of a framework are not necessarily supposed to have knowledge of a class's internals. – Robert Harvey Nov 21 '12 at 15:45
  • 1
    @RobertHarvey I can appreciate the idea; but to me it fails in one big way: If you have clients overriding your stuff that's as good as a voided warranty as far as I'm concerned. That said, we're engineers and we probably have all voided (several) warranties in our life times but knowing what we're doing we had no ill effects from this, as such you're better off trusting your clients to handle such things themselves if they feel so inclined rather than demanding they not even be allowed ignoring the possibility they really know what they're doing. – Jimmy Hoffa Nov 21 '12 at 15:46
  • @RobertHarvey good point about MS supporting modified implementations, there likely are scenarios where these things are problematic for people like them. That said a lot of MS support would say that's been voided therefore: Pay for the support or get none, and in the case that they overrode something that caused a break MS is *happy* to let people pay them to support it, that support model has a tremendous profit margin which only increases the longer it takes for their support engineer to figure out there's an LSP violation in an override. – Jimmy Hoffa Nov 21 '12 at 15:48
  • 2
    @JimmyHoffa: Framework classes are supposed to be little black boxes; overriding one in a derived class presupposes you have knowledge of the class's internal structure; i.e. you have access to the source code. In the absence of that knowledge, the consumer of your class should be compelled to use composition instead (Reflector notwithstanding). In ASP.NET MVC, I actually had to copy the source code of a class, modify it, and save it as a new class to solve a particular problem, because the class was sealed. But I had access to the source code. – Robert Harvey Nov 21 '12 at 15:51
  • +1 great answer. Spot on with my experiences; it sounds like the OP is strongly favoring inheritance over composition in a very dangerous way, which causes a great deal of tight coupling. He should practice data modelling with no inheritance except interfaces for a while to learn compositional style and techniques, perhaps. – Jimmy Hoffa Nov 21 '12 at 15:51
  • 1
    @JimmyHoffa: Of course, I supposed you could make all of your internal members private to prevent tampering, but that sort of defeats the purpose of having an unsealed class. Sealing the class is just more expedient, that's all. – Robert Harvey Nov 21 '12 at 15:56
  • @RobertHarvey Great article from Eric Lippert, as usual. This guy is a pragmatic genius. – Nicolas Repiquet Nov 21 '12 at 15:59
  • 2
    The *author* of the 3rd party component probably has a different opinion. If they allow you to override their classes, they will likely be railroaded into *supporting* your weird overriding and then they find they are unable to *change* their component as they would like because of backward compatibility concerns. Sometimes taking away options is necessary. TLDR: what Eric Lippert said. However if *you* and your colleagues are the *only* consumers of the component - fine, override anything you like! – MarkJ Nov 21 '12 at 16:11
  • @RobertHarvey: And if you're using any framework that you *don't* have the source to, you deserve whatever grief comes from making such a dumb decision. – Mason Wheeler Nov 21 '12 at 16:26
  • 1
    @MasonWheeler: The .NET Framework is technically closed source, so I guess I'm an idiot. – Robert Harvey Nov 21 '12 at 16:26
  • 1
    @RobertHarvey: Yes, that's something that I've long considered one of the biggest of .NET's many flaws. It's a big part of the reason why I don't use it. – Mason Wheeler Nov 21 '12 at 16:27
  • 1
    @MasonWheeler: Seriously though, it's not unreasonable to expect your users to successfully utilize your framework without needing access to your source code. If users have to patch your stuff to make it work, I'd consider it broken. – Robert Harvey Nov 21 '12 at 16:28
  • @RobertHarvey: ...and if the need to patch code to make it work was the only good reason why having source available is useful, that would be a valid argument. But it isn't. – Mason Wheeler Nov 21 '12 at 17:10
  • Is there a problem if you use abstract classes, and the implemented methods are final? – Random42 Nov 21 '12 at 18:18
  • @RobertHarvey: yes it would be broken. And that is exactly when you need frameworks/libraries to be extensible. So I can program around the bugs in it without having to jump through hoops. After all: frameworks and libraries bugs do not get fixed instantly, sometimes not at all... – Marjan Venema Nov 21 '12 at 18:18
  • @MarjanVenema: Still, inheritance implies having some knowledge of the base class' internals, which presumes that access to the source code is available. – Robert Harvey Nov 21 '12 at 18:23
  • @RobertHarvey: that certainly helps tremendously, but is not always necessary. Knowing the interface is often enough. Many Delphi component developers (used to) distribute the interface sections of all their units, even with "dcu-only" distributions. And nowadays reflection mechanisms also can help enormously. – Marjan Venema Nov 21 '12 at 18:35
  • 1
    @MarjanVenema: So... If you know the external interface, but not the object internals, then why would you need inheritance? Couldn't you just use composition? – Robert Harvey Nov 21 '12 at 18:38
  • @RobertHarvey: Ever heard of interposer classes? Where you give the derived class the same name as its ancestor and make sure it is closer in scope than the library unit? So you don't have to change every single line where the class is instantiated? – Marjan Venema Nov 21 '12 at 18:46
  • @MarjanVenema: That looks like a Delphi-specific thing. I doubt it would work at all in any of the curly-brace languages. – Robert Harvey Nov 21 '12 at 18:51
  • @RobertHarvey: :-) for curly-brace languages. You could be right. Maybe that's why I don't like curly braces? Nah, C# is growing on me. – Marjan Venema Nov 21 '12 at 18:59
  • @RobertHarvey: One big reason for using inheritance rather than composition is that the if `Bar` derives from `Foo`, then code which is designed to hold a reference to a `Foo` can hold a reference to a `Bar`; code which retrieves the reference to a `Foo` can then downcast if appropriate to access it as a `Bar`. By contrast, if `Bar` contains a `Foo`, there will be no way to use code which is designed to hold onto a `Foo` for later retrieval to instead hold a `Bar`. A framework could provide mechanisms other than inheritance to achieve such things, but that doesn't mean existing frameworks do. – supercat Nov 21 '12 at 21:23
  • 1
    @supercat: You can accomplish the same thing with interfaces. – Robert Harvey Nov 21 '12 at 21:53
  • @RobertHarvey: If the code which would receive a `Foo` were written to accept an `IFoo`, there would be no need for `Bar` to inerhit `Foo`. Unfortunately, since that would require using a different type when creating an object instance versus when storing a reference, a lot more code passes around class-type references than interface-type ones. – supercat Nov 21 '12 at 22:08
  • **Please avoid extended discussions in the comment section. If you would like to discuss this further then please go to the chat room. Thank you.** – maple_shaft Nov 22 '12 at 03:12
  • @maple_shaft: Are we being off-topic, or otherwise disruptive? Anyway, I think the conversation has run its course. – Robert Harvey Nov 23 '12 at 15:57
  • @RobertHarvey No the conversation is actually quite good, we should if possible however avoid extended discussions in comments. This is why we have the chat feature. If you create a private room and link by comment to the post then others will be able to join in the discussion and benefit from it. – maple_shaft Nov 23 '12 at 16:00
13

That is a great article by Eric Lippert, but I don't think it supports your viewpoint.

He argues that all classes exposed for use by others should be sealed, or otherwise made non-extensible.

Your premise is that all classes should be abstract or sealed.

Big difference.

EL's article says nothing about the (presumably many) classes produced by his team that you and I know nothing about. In general the publicly-exposed classes in a framework are only a subset of all the classes involved in implementing that framework.

Martin
  • 521
  • 2
  • 10
  • 1
    But you can apply the same argument to classes which are not part of a public interface. One of the main reasons to make a class final is that it acts as a safety measure to prevent sloppy code. That argument is just as valid for any codebase which is shared by different developers over time, or even if you are the only developer. It just protects the semantics of the code. – DPM Nov 30 '12 at 16:43
  • I think the idea that classes should be abstract or sealed is part of a broader principle, which is that one should avoid using variables of instantiable class types. Such avoidance will make it possible to create a type which can be used by the consumers of a given type, without it having to inherit all the private members thereof. Unfortunately, such designs don't really work with public-constructor syntax. Instead, code would have to replace `new List()` with something like `List.CreateMutable()`. – supercat Mar 11 '14 at 21:47
6

From a Java perspective, I think that final classes are not as smart as they seem.

Many tools (especially AOP, JPA, etc.) work with load-time weaving, so they have to extend your classes. The alternative would be to create delegates (not the .NET ones) and delegate everything to the original class, which would be much messier than extending the user's classes.

Uwe Plonus
  • 1,310
  • 9
  • 16
  • If you invoke runtime weaving you forfeit all protections of Java language (or any other statically typed language with strong guarantees) and therefore any judgement of its "final" use is meaningless. – Basilevs Jan 16 '22 at 09:00
6

Two common cases where you will need vanilla, non-sealed classes:

  1. Technical: If you have a hierarchy that is more than two levels deep, and you want to be able to instantiate something in the middle.

  2. Principle: It is sometimes desirable to write classes that are explicitly designed to be safely extended. (This happens a lot when you are writing a public API, as Eric Lippert does, or when you're working in a team on a large project.) Sometimes you want to write a class that works fine on its own, but is designed with extensibility in mind.

Eric Lippert's thoughts on sealing make sense, but he also admits that his team does design for extensibility by leaving some classes "open".

Yes, many classes are sealed in the BCL, but a huge number of classes are not, and can be extended in all sorts of wonderful ways. One example that comes to mind is in Windows Forms, where you can add data or behavior to almost any Control via inheritance. Sure, this could have been done in other ways (decorator pattern, various types of composition, etc.), but inheritance works very well, too.

Two .NET specific notes:

  1. In most circumstances, sealing classes is often not critical for safety, because inheritors cannot mess with your non-virtual functionality, with the exception of explicit interface implementations.
  2. Sometimes a suitable alternative is to make the constructor internal instead of sealing the class, which allows it to be inherited inside your codebase, but not outside of it.
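The second note is C#-specific, but Java has a rough analogue: a package-private constructor lets classes in the same package subclass you, while code outside the package cannot. A sketch with made-up names:

```java
// Inheritable inside its own package only: the constructor is
// package-private, so outside code cannot subclass (or instantiate) it.
class Widget {
    Widget() { }
    public String name() { return "widget"; }
}

// Compiles only because it lives in the same package as Widget.
class SpecialWidget extends Widget {
    @Override
    public String name() { return "special " + super.name(); }
}
```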
Kevin McCormick
  • 4,064
  • 1
  • 18
  • 28
6

I believe the real reason many people feel classes should be final/sealed is that most non-abstract extendable classes are not properly documented.

Let me elaborate. Starting from afar, there is the view among some programmers that inheritance as a tool in OOP is widely overused and abused. We've all read the Liskov substitution principle, though this didn't stop us from violating it hundreds (maybe even thousands) of times.

The thing is, programmers love to reuse code, even when it's not such a good idea. And inheritance is a key tool in this "reabusing" of code. Back to the final/sealed question.

The proper documentation for a final/sealed class is relatively small: you describe what each method does, what are the arguments, the return value, etc. All the usual stuff.

However, when you are properly documenting an extendable class you must include at least the following:

  • Dependencies between methods (which method calls which, etc.)
  • Dependencies on local variables
  • Internal contracts that the extending class should honor
  • Call convention for each method (e.g. When you override it, do you call the super implementation? Do you call it in the beginning of the method, or in the end? Think constructor vs destructor)
  • ...

These are just off the top of my head. And I can provide you with an example of why each of these is important and skipping it will screw up an extending class.

Now, consider how much documentation effort should go into properly documenting each of these things. I believe a class with 7-8 methods (which might be too many in an idealized world, but is too few in the real one) might well need about five pages of text-only documentation. So instead we duck out halfway: we don't seal the class, so that other people can use it, but we don't document it properly either, since that would take a giant amount of time (and, you know, it might never be extended anyway, so why bother?).

If you're designing a class, you might feel the temptation to seal it so that people cannot use it in a way you've not foreseen (and prepared for). On the other hand, when you're using someone else's code, there is sometimes no visible reason from the public API for the class to be final, and you might think, "Damn, this just cost me 30 minutes searching for a workaround".

I think some of the elements of a solution are:

  • First to make sure extending is a good idea when you're a client of the code, and to really favor composition over inheritance.
  • Second to read the manual in its entirety (again as a client) to make sure you're not overlooking something that is mentioned.
  • Third, when you are writing a piece of code the client will use, write proper documentation for the code (yes, the long way). As a positive example, I can give Apple's iOS docs. They're not sufficient for the user to always properly extend their classes, but they at least include some info on inheritance. Which is more than I can say for most APIs.
  • Fourth, actually try to extend your own class, to make sure it works. I am a big supporter of including many samples and tests in APIs, and when you're making a test, you might as well test inheritance chains: they are a part of your contract after all!
  • Fifth, in situations when you're in doubt, indicate that the class is not meant to be extended and that doing it is a bad idea (tm). Indicate you should not be held accountable for such unintended use, but still don't seal the class. Obviously, this doesn't cover cases when the class should be 100% sealed.
  • Finally, when sealing a class, provide an interface as an intermediate hook, so that the client can "rewrite" his own modified version of the class and work around your 'sealed' class. This way, he can replace the sealed class with his implementation. Now, this should be obvious, since it is loose coupling in its simplest form, but it's still worth a mention.
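The last point, sketched in Java with illustrative names: the sealed class ships behind an interface, so a client can substitute a rewritten implementation without subclassing it.

```java
// The intermediate hook: clients program against this, not the class.
interface Greeter {
    String greet(String name);
}

// The vendor's class: final, but replaceable because of the interface.
final class DefaultGreeter implements Greeter {
    @Override
    public String greet(String name) { return "Hello, " + name; }
}

// A client's "rewritten" version that wraps the sealed implementation
// and changes its behavior without ever inheriting from it.
final class ShoutingGreeter implements Greeter {
    private final Greeter inner = new DefaultGreeter();
    @Override
    public String greet(String name) {
        return inner.greet(name).toUpperCase();
    }
}
```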

It is also worth mentioning the following "philosophical" question: is whether a class is sealed/final part of the contract for the class, or an implementation detail? I don't want to tread there, but the answer should also influence your decision whether to seal a class or not.

K.Steff
  • 4,475
  • 2
  • 31
  • 28
4

A class should neither be final/sealed nor abstract if:

  • It's useful on its own, i.e. it's beneficial to have instances of that class.
  • It's beneficial for that class to be the subclass/base class of other classes.

For example, take the ObservableCollection<T> class in C#. It only needs to add the raising of events to the normal operations of a Collection<T>, which is why it subclasses Collection<T>. Collection<T> is a viable class on its own, and so is ObservableCollection<T>.
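A minimal Java analogue of that relationship (hypothetical classes, not the real BCL types): the base class is useful as-is, and the subclass only layers event raising on top of it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Viable on its own, like Collection<T>.
class SimpleCollection<T> {
    protected final List<T> items = new ArrayList<>();
    public void add(T item) { items.add(item); }
    public int size() { return items.size(); }
}

// Also viable, like ObservableCollection<T>: it only adds change
// notification to the inherited behavior.
class ObservableSimpleCollection<T> extends SimpleCollection<T> {
    private final List<Consumer<T>> listeners = new ArrayList<>();
    public void onAdd(Consumer<T> listener) { listeners.add(listener); }
    @Override
    public void add(T item) {
        super.add(item);
        for (Consumer<T> listener : listeners) listener.accept(item);
    }
}
```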

FishBasketGordo
  • 387
  • 1
  • 9
  • If I was in charge of this, I'll probably do `CollectionBase`(abstract), `Collection : CollectionBase`(sealed), `ObservableCollection : CollectionBase`(sealed). If you look at [Collection](http://msdn.microsoft.com/en-us/library/ms132397.aspx) closely, you'll see it's a half-assed abstract class. – Nicolas Repiquet Nov 21 '12 at 15:26
  • 2
    What advantage do you gain from having three classes instead of just two? Also, how is `Collection` a "half-assed abstract class"? – FishBasketGordo Nov 21 '12 at 15:27
  • `Collection` exposes a lot of innards through protected methods and properties, and is clearly meant to be a base class for specialized collection. But it's not clear where you're supposed to put your code, as there is no abstract methods to implements. `ObservableCollection` inherits from `Collection` and is not sealed, so you can again inherit from it. And you can access the `Items` protected property, allowing you to add items in the collection **without** raising events... Nice. – Nicolas Repiquet Nov 21 '12 at 16:34
  • @NicolasRepiquet: In many languages and frameworks, the normal idiomatic means of creating an object requires that the type of the variable which will hold the object be the same as the type of the created instance. In many cases, the ideal usage would be to pass around references to an abstract type, but that would force a lot of code to use one type for variables and parameters and a different concrete type when calling constructors. Hardly impossible, but a bit awkward. – supercat Nov 21 '12 at 21:10
4

I agree with your view. I think that in Java, by default, classes should be declared "final". If you don't make it final, then specifically prepare it and document it for extension.

The main reason for this is to ensure that any instance of your classes will abide by the interface you originally designed and documented. Otherwise a developer using your classes can create brittle and inconsistent code and, in turn, pass it on to other developers / projects, making objects of your classes untrustworthy.

From a more practical perspective there is indeed a downside to this, since clients of your library's interface won't be able to do any tweaking or use your classes in more flexible ways than you originally thought of.

Personally, for all the bad quality code that exists (and since we are discussing this on a more practical level, I would argue Java development is more prone to this) I think this rigidity is a small price to pay in our quest for easier to maintain code.

DPM
  • 1,713
  • 1
  • 16
  • 24
3

The problem with final/sealed classes is that they try to solve a problem that hasn't happened yet. That is useful only when the problem actually exists, but it's frustrating when a third party has imposed the restriction. Rarely does sealing a class solve a current problem, which makes it difficult to argue for its usefulness.

There are cases where a class should be sealed. For example: a class that manages allocated resources/memory in a way that cannot anticipate how future changes might alter that management.

Over the years I've found encapsulation, callbacks, and events to be far more flexible and useful than abstract classes. I see far too much code with a large hierarchy of classes where encapsulation and events would have made life simpler for the developer.

Reactgular
  • 13,040
  • 4
  • 48
  • 81
  • 1
    Final-sealed stuff in software solves the problem of "I feel the need to impose my will on future maintainers of this code". – Kaz Nov 21 '12 at 23:39
  • Wasn't final/sealed something added to OOP later? I don't remember it being around when I was younger. It seems like an afterthought kind of feature. – Reactgular Nov 22 '12 at 16:50
  • Finalizing existed as a toolchain procedure in OOP systems before OOP became dumbed down with languages like C++ and Java. Programmers work away in Smalltalk, Lisp with maximum flexibility: anything can be extended, new methods added all the time. Then, the compiled image of the system is subject to an optimization: an assumption is made that the system won't be extended by the end users, and so this means that the method dispatch can be optimized based on taking stock of what methods and classes exist *now*. – Kaz Nov 22 '12 at 16:55
  • I don't think that's the same thing, because this is just an optimization feature. I don't remember sealed being in all versions of Java, but I could be wrong as I don't use it much. – Reactgular Nov 22 '12 at 16:58
  • Just because it manifests itself as some declarations that you have to put into the code doesn't mean it isn't the same thing. – Kaz Nov 22 '12 at 17:21
2

"Sealing" or "finalizing" in object systems allows for certain optimizations, because the complete dispatch graph is known.

That is to say, we make it difficult for the system to be changed into something else, as a trade-off for performance. (That is the essence of most optimization.)

In all other respects, it's a loss. Systems should be open and extensible by default. It should be easy to add new methods to all classes, and extend arbitrarily.

We don't gain any new functionality today by taking steps to prevent future extension.

So if we do it for the sake of prevention itself, then what we are doing is trying to control the lives of future maintainers. It is about ego. "Even when I no longer work here, this code will be maintained my way, damn it!"

Kaz
  • 3,572
  • 1
  • 19
  • 30
1

Test classes come to mind. These are classes that are called in an automated fashion or "at will", depending on what the programmer/tester is trying to accomplish. I'm not sure I've ever seen or heard of a finished class of tests that's final.

joshin4colours
  • 3,678
  • 1
  • 24
  • 37
  • What are you talking about? In what way must test classes not be final/sealed/abstract? –  Nov 21 '12 at 15:18
1

The time when you need to consider a class that is intended to be extended is when you are doing some real planning for the future. Let me give you a real life example from my work.

I spend a good deal of my time writing interface tools between our main product and external systems. When we make a new sale, one big component is a set of exporters that are designed to be run at regular intervals which generate data files detailing the events that have happened that day. These data files are then consumed by the customer's system.

This is an excellent opportunity for extending classes.

I have an Export class which is the base class of every exporter. It knows how to connect to the database, find out where it had got to last time it ran and create archives of the data files it generates. It also provides property file management, logging and some simple exception handling.

On top of this I have a different exporter to work with each type of data, perhaps there is user activity, transactional data, cash management data etc.

On top of this stack I place a customer-specific layer which implements the data file structure the customer needs.

In this way, the base exporter very rarely changes. The core data-type exporters sometimes change but rarely, and usually only to handle database schema changes which should be propagated to all customers anyway. The only work I ever have to do for each customer is that part of the code that is specific to that customer. A perfect world!

So the structure looks like:

Base
 Function1
  Customer1
  Customer2
 Function2
 ...

My primary point is that by architecting the code this way I can make use of inheritance primarily for code re-use.

I have to say I cannot think of any reason to go past three layers.

I have used two layers many times, for example to have a common Table class which implements database table queries while sub-classes of Table implement the specific details of each table by using an enum to define the fields. Letting the enum implement an interface defined in the Table class makes all sorts of sense.
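A sketch of that Table pattern (all names invented for illustration): the enum of fields implements an interface the base class understands, so each concrete table only declares its columns.

```java
// The interface the field enum implements.
interface Field {
    String columnName();
}

// Common query-building logic shared by every table.
abstract class Table {
    String selectAll(String tableName, Field[] fields) {
        StringBuilder cols = new StringBuilder();
        for (Field f : fields) {
            if (cols.length() > 0) cols.append(", ");
            cols.append(f.columnName());
        }
        return "SELECT " + cols + " FROM " + tableName;
    }
}

// A concrete table: just a name plus an enum of its columns.
class UserTable extends Table {
    enum Columns implements Field {
        ID, NAME;
        @Override
        public String columnName() { return name().toLowerCase(); }
    }
    String selectAll() { return selectAll("users", Columns.values()); }
}
```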

OldCurmudgeon
  • 778
  • 5
  • 11
1

I have found abstract classes useful but not always needed, and sealed classes become an issue when writing unit tests. You can't mock or stub a sealed class unless you use something like Telerik's JustMock.

Dan H
  • 123
  • 6
  • 1
    This is a good comment, but doesn't directly answer the question. Please considering expanding your thoughts here. –  Nov 22 '12 at 01:34
1

As an example, NSView / UIView in macOS / iOS is very useful on its own, without any modifications, but there are also many useful subclasses, and you will most likely create your own. It's not abstract, and it can't be final. And it's one of the most used classes, because without it you can't draw anything on the screen.

gnasher729
  • 42,090
  • 4
  • 59
  • 119
  • +1 It's a useful answer/example for visitors. Just wanted to let you know in case you didn't notice that it's a question from 2012. The recent activity was that, yesterday, someone updated the broken link to Eric Lippert's blog post (now archived at docs.microsoft.com). – Filip Milovanović Jan 14 '22 at 08:09