80

Why do so many software developers violate the open/closed principle by making changes, such as renaming functions, that break applications after an upgrade?

This question came to mind after seeing the fast, continuous succession of versions of the React library.

At short intervals I notice many changes in syntax, component names, and so on.

An example from the upcoming version of React:

New Deprecation Warnings

The biggest change is that we've extracted React.PropTypes and React.createClass into their own packages. Both are still accessible via the main React object, but using either will log a one-time deprecation warning to the console when in development mode. This will enable future code size optimizations.

These warnings will not affect the behavior of your application. However, we realize they may cause some frustration, particularly if you use a testing framework that treats console.error as a failure.


  • Are these changes considered a violation of that principle?
  • As a beginner to something like React, how do I learn it given these fast changes in the library (it's so frustrating)?
Peter Mortensen
Anyname Donotcare
    This is clearly an example of *observing* it, and your claim 'so many' is unsubstantiated. The Lucene and RichFaces projects are notorious examples, and the Windows COMM port API, but I can't think of any others offhand. And is React really a 'big software developer'? – user207421 Apr 30 '17 at 21:15
    Like any principle, the OCP has its value. But it requires that developers have infinite foresight. In the real world, people often get their first design wrong. As time goes on, some prefer to work around their old mistakes for the sake of compatibility, others prefer to eventually clean them up for the sake of having a compact and unburdened codebase. – Theodoros Chatzigiannakis May 01 '17 at 13:11
    When was the last time you saw an object-oriented language "as originally intended"? The core principle was a messaging system that meant *every* part of the system is infinitely extensible by anyone. Now compare that to your typical OOP-like language - how many allow you to extend an existing method from the outside? How many make it easy enough to be useful? – Luaan May 01 '17 at 17:06
  • Legacy sucks. 30 years of experience has shown that you should *completely dump* legacy and start fresh, at all times. Today everyone has connection everywhere at all times, so legacy is just totally irrelevant today. the ultimate example was "Windows versus Mac". Microsoft traditionally tried to "support legacy", you see this in many ways. Apple have always just said "F- - - You" to legacy users. (This applies to everything from languages to devices to OSs.) In fact, Apple was totally correct and MSFT was totally wrong, plain and simple. – Fattie May 02 '17 at 10:18
  • Note too that the supposed OPC ***applies within programming - it basically applies to the design of your classes, protocols and so on***. It just has nothing at all to do with "products" as you are asking. Like, Chevy's new electric cars totally "break OPC" compared to their older petrol cars. You know? – Fattie May 02 '17 at 10:20
    Because there are exactly zero "principles" and "design patterns" that work 100% of the time in real life. – Matti Virkkunen May 02 '17 at 11:45

4 Answers

152

IMHO JacquesB's answer, though containing a lot of truth, shows a fundamental misunderstanding of the OCP. To be fair, your question already expresses this misunderstanding too - renaming functions breaks backwards compatibility, but not the OCP. If breaking compatibility seems necessary (or if maintaining two versions of the same component is needed to avoid breaking compatibility), the OCP was already broken beforehand!

As Jörg W Mittag already mentioned in his comments, the principle does not say "you can't modify the behavior of a component" - it says one should try to design components so that they are open to being reused (or extended) in several ways, without the need for modification. This can be done by providing the right "extension points", or, as mentioned by @AntP, "by decomposing a class/function structure to the point where every natural extension point is there by default." IMHO following the OCP has nothing in common with "keeping the old version around unchanged for backwards compatibility"! Or, quoting @DerekElkin's comment below:

The OCP is advice on how to write a module [...], not about implementing a change management process that never allows modules to change.

Good programmers use their experience to design components with the "right" extension points in mind (or - even better - in a way that no artificial extension points are needed). However, to do this correctly and without unnecessary overengineering, you need to know beforehand what future use cases of your component might look like. Even experienced programmers can't look into the future and know all upcoming requirements in advance. And that is why backwards compatibility sometimes needs to be violated - no matter how many extension points your component has, or how well it follows the OCP with respect to certain types of requirements, there will always be a requirement which cannot be implemented easily without modifying the component.
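To make "the right extension points" concrete, here is a minimal sketch (the classes and names are illustrative, not from any real library): a component that is closed for modification but open for extension through an injected interface.

```typescript
// Report is closed for modification: adding a new output format never
// requires touching Report itself, only writing a new formatter.
interface ReportFormatter {
  format(lines: string[]): string;
}

class PlainTextFormatter implements ReportFormatter {
  format(lines: string[]): string {
    return lines.join("\n");
  }
}

class Report {
  constructor(private formatter: ReportFormatter) {}

  render(lines: string[]): string {
    return this.formatter.format(lines);
  }
}

// An extension written later, without modifying any existing class:
class MarkdownFormatter implements ReportFormatter {
  format(lines: string[]): string {
    return lines.map((line) => `- ${line}`).join("\n");
  }
}

const report = new Report(new MarkdownFormatter());
console.log(report.render(["open for extension", "closed for modification"]));
```

The interface is the extension point; whether it is the *right* one depends, as said above, on anticipating how the component will be reused.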

Doc Brown
    IMO the biggest reason to "violate" OCP is that it takes a *lot* of effort to conform to it properly. Eric Lippert has an excellent [blog post](https://blogs.msdn.microsoft.com/ericlippert/2004/01/22/why-are-so-many-of-the-framework-classes-sealed/) on why many of the .NET framework classes seem to violate OCP. – BJ Myers Apr 30 '17 at 21:39
    @BJMyers: thanks for the link. Jon Skeet has an [excellent post](https://codeblog.jonskeet.uk/2013/03/15/the-open-closed-principle-in-review/) about the OCP as beeing very similar to the idea of protected variation. – Doc Brown Apr 30 '17 at 21:42
    THIS! The OCP says that you should write code that can be changed without being touched! Why? So you only have to test, review, and compile it once. New behavior should come from new code. Not by screwing with old proven code. What about refactoring? Well refactoring is a clear violation of OCP! Which is why it's a sin to write code thinking you'll just refactor it if your assumptions change. No! Put each assumption in it's own little box. When it's wrong don't fix the box. Write a new one. Why? Because you might need to go back to the old one. When you do, It'd be nice if it still worked. – candied_orange Apr 30 '17 at 23:04
    Does this mean we should never refactor? We should never deprecate? Well we should try not to! Don't design expecting to lean on this. Every time you have to rewrite working published code you should feel bad that the problems you're solving can't be solved by simply writing new code. No one is perfect but please try to learn how to not to keep doing this. – candied_orange Apr 30 '17 at 23:10
    @CandiedOrange: thanks for your comment. I don't see refactoring and OCP so contrary as you describe it. To write components which follow the OCP requires often several refactoring cycles. The goal should be a component which does not need modifications to solve a whole "family" of requirements. Nevertheless, one should not add arbitrary extension points to a component "just in case", that leads too easily to overengineering. Relying to the possibility of refactoring can be the better alternative to this in lots of cases. – Doc Brown May 01 '17 at 00:17
  • OCP and refactoring are contrary by definition. Refactoring is about preserving behavior while rewriting existing code. OCP is about writing code that can remain untouched despite change. You should never rely on refactoring to accommodate change. You use refactoring to modify brittle designs into flexible designs that let you follow OCP. OCP isn't easy to pull off. But it's much better then rewriting every time requirements change. – candied_orange May 01 '17 at 01:36
  • But you are correct that one can over engineer by thinking "just in case". Stick with "one idea in one box" and that issue mostly takes care of itself. – candied_orange May 01 '17 at 01:41
  • @CandiedOrange: "New behavior should come from new code. Not by screwing with old proven code." That can get pretty messy unless you also keep in mind that the "old proven code" sometimes isn't useful anymore, sometimes modifying the old code is the better course of action than extending it. – whatsisname May 01 '17 at 04:56
    This answer does a good job of calling out the errors in the (currently) top answer - I think the key thing about being successful with open/closed though is to *stop* thinking in terms of "extension points" and to start thinking about decomposing your class/function structure to the point where every natural extension point is there by default. Programming "outside in" is a very good way of achieving this, where every scenario your current method/function caters to is pushed out to an external interface, which forms a natural extension point for decorators, adapters etc. – Ant P May 01 '17 at 11:24
    ... this leads to a situation where application behaviour is controlled and modified by composition, rather than by modifying existing components. This has enormous benefits - in particular it leads to very simple sets of test cases for components (no testing what happens when x and y and z), neatly composed code avoids bugs which are often resultant of clumsy plugging together of bloated components and it means no poking at existing unit tests when behaviour changes - you just write new ones (or, better still, delete existing ones). – Ant P May 01 '17 at 11:29
  • @whatsisname That can get pretty messy. "old proven code" that isn't useful anymore still shouldn't be messed with. It should simply not be used anymore. With a good design, the big thing following OCP costs you in this case is thinking of a new name. – candied_orange May 01 '17 at 14:16
  • @CandiedOrange Asking for clarification... Let's say you have a well designed class that solves only 1 problem yet is still thousands of lines. That class has worked great for years, but now a requirement comes in that should affect *all* cases everywhere (for an example of distribution, let's say a Y2K kind of fix; nobody should ever use old bad version again, everywhere everyone should use new good version), and the fix is a simple – Aaron May 01 '17 at 14:49
    I've never seen a well designed class that had thousands of lines. I have seen many classes that were thousands of lines and was proven code though. The problem here is that often those monsters are the only valid specification you have. Right up until you touch them. Then all bets are off. It is far better to write a new one with a new name to take it's place. Or pass things into it that do new and different things. Or (shudder) inherit from it and extend it the old fashioned way. – candied_orange May 01 '17 at 16:21
    Refactoring it should only be considered once you have a complete set of tests that take over the job of being the valid spec and using any of the previous options is simply not an option because of the lousy design of the 1000 line monster. You should use refactoring to improve design. Not to meet requirements. When you're done you should have something you whose behavior can be changed by writing new code not by continuing to rewrite it's code. – candied_orange May 01 '17 at 16:25
  • But change has to come in from somewhere. So where? Main is where. It's my favorite "Composition root" as Mark Seemann likes to say. – candied_orange May 01 '17 at 16:30
  • @AntP: I agree, this is probably the better point of view. However, to understand how building software by such components works, one has to know examples like [Ralf Westphals event based components](http://d-nb.info/1069539953/34). This could fill a whole book and cannot easily be explained in an answer here on this site. – Doc Brown May 02 '17 at 06:14
  • One of the violations of the OCP most annoying to me is to declare members as private (including methods) and to not provide or to use getters (and setters). One may find this pattern throughout many frameworks. – Claude May 04 '17 at 07:15
    It is interesting that @DocBrown precisely is the one saying that we can't look into the future :D – Enrique Moreno Tent Dec 08 '17 at 15:08
  • I think this answer could be improved with some references. The Open-Closed principle as described by Bertrand Meyer ("Object-oriented software construction", page 57) is about not modifying existing modules in use (to avoid cascading effects) and instead adding new functionality through new components which extend the existing components. So breaking backwards compatibility in an existing component would definitely also break the Open-Closed principle. This answer seem to refer to a different interpretation of the principle, but what is the source of this alternative version of the principle? – JacquesB Jul 30 '21 at 08:52
  • @JacquesB: this is the way I interpret all those different sources (Wikipedia, Meyer, Martin etc). My understanding of the OCP is that is it is a property of the software entity, not an instruction for its maintainers (and when I look into Meyer's book, for example, on page 57, I am pretty sure he meant the same - he wrote "OCP - Modules should be ...", not "OCP - though shalt not ...", like Bob Martin sometimes does). – Doc Brown Jul 30 '21 at 09:58
  • @DocBrown: It seems you have to book available - the following pages (58-59) describes it clearly and unambiguously as a strategy for evolving code without breaking existing functionality. Unfortunately the book is not online as far as I can tell so I cant link to the pages. – JacquesB Jul 30 '21 at 10:53
    @JacquesB: the book can be found online (but I am not sure if that's really a lawful source, so I am not posting the link). And I read those pages and still interpret them differently. I read that "strategy for evolving code without breaking existing functionality" to be the motivation behind the OCP, since the OCP makes that it easier to reach that goal, but not the OCP itself. – Doc Brown Jul 30 '21 at 11:06
  • @DocBrown: Fair point - the text does not make a clear distinction between the principle and it's motivation, so I guess one can draw an arbitrary line. But the "closed" part of the principle clearly refers to the source code being fixed after publishing the module, so it *is* an instruction to the maintainers, not just a question of designing modules to be extensible. – JacquesB Jul 30 '21 at 11:31
    @JacquesB: feel free to read this answer as my personal opinion about how I think the term OCP should be interpreted to make most sense in software engineering (derived from some decades of doing this stuff). Maybe Meyer's and Martin's books leave some room for interpretation, but I am convinced, the OCP is (or at least: should be) about modules, not about people. – Doc Brown Jul 30 '21 at 13:35
  • @DocBrown: I think your opinion is very sensible. I also think the OC principle (in it's original form) has limited applicability. I just think the answer should make it clear you are presenting your own opinion about software engineering, not the actual OC principle. – JacquesB Jul 30 '21 at 13:54
  • @JacquesB: that is why I started my answer with the acronym "IMHO". – Doc Brown Jul 31 '21 at 18:27
70

The open/closed principle has benefits, but it also has some serious drawbacks.

In theory the principle solves the problem of backwards compatibility by creating code which is "open for extension but closed for modification". If a class has new requirements, you never modify the source code of the class itself but instead create a subclass which overrides just the members necessary to change the behavior. All code written against the original version of the class is therefore unaffected, so you can be confident your change did not break existing code.

In reality you easily end up with code bloat and a confusing mess of obsolete classes. If it is not possible to modify some behavior of a component through extension, then you have to provide a new variant of the component with the desired behavior, and keep the old version around unchanged for backwards compatibility.

Say you discover a fundamental design flaw in a base class which lots of classes inherit from - say a private field has the wrong type. You cannot fix this by overriding a member. Basically you have to replace the whole class, which means you end up extending Object to provide an alternative base class - and now you also have to provide alternatives to all the subclasses, ending up with a duplicated object hierarchy: one flawed, one improved. But you cannot remove the flawed hierarchy (since deleting code is a modification), so all future clients will be exposed to both hierarchies.
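As an illustrative sketch of that kind of unfixable flaw (hypothetical classes, not from any real framework): a base class that stores money in a binary floating-point field cannot be repaired by overriding, so the strictly open/closed route forces a parallel, corrected hierarchy.

```typescript
// Flawed base class: the balance is kept as a binary float, so
// deposits of 0.1 and 0.2 do not sum to exactly 0.3. The field is
// private, so no subclass can change its type or the arithmetic.
class Account {
  private balance = 0;

  deposit(amount: number): void {
    this.balance += amount;
  }

  getBalance(): number {
    return this.balance;
  }
}

class SavingsAccount extends Account {} // inherits the flaw

// The "closed for modification" fix: a second, corrected hierarchy
// (integer cents), while the flawed one must stay for compatibility.
class AccountV2 {
  private balanceCents = 0;

  deposit(amountCents: number): void {
    this.balanceCents += amountCents;
  }

  getBalance(): number {
    return this.balanceCents / 100;
  }
}

class SavingsAccountV2 extends AccountV2 {} // every subclass duplicated
```

Every class in the old hierarchy now needs a V2 twin, and both hierarchies stay visible to clients forever.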

Now the theoretical answer to this problem is "just design it correctly the first time". If the code is perfectly decomposed, without any flaws or mistakes, and designed with extension points prepared for all possible future requirement changes, then you avoid the mess. But in reality everyone makes mistakes, and nobody can predict the future perfectly.

Take something like the .NET Framework - it still carries around the set of collection classes which were designed before generics were introduced more than a decade ago. This is certainly a boon for backwards compatibility (you can upgrade the framework without having to rewrite anything), but it also bloats the framework and presents developers with a large set of options, many of which are simply obsolete.

Apparently the developers of React have felt it was not worth the cost in complexity and code-bloat to strictly follow the open/closed principle.

The pragmatic alternative to open/closed is controlled deprecation. Rather than breaking backwards compatibility in a single release, old components are kept around for a release cycle, but clients are informed via compiler warnings that the old approach will be removed in a later release. This gives clients time to modify the code. This seems to be the approach of React in this case.
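The one-time warning approach React describes can be sketched like this (the function names are made up for illustration):

```typescript
// The replacement API.
function renderLines(lines: string[]): string {
  return lines.join("\n");
}

let warnedAboutLegacyRender = false;

/** @deprecated Use renderLines(); scheduled for removal in the next major release. */
function legacyRenderLines(lines: string[]): string {
  // Controlled deprecation: the old entry point keeps working for a
  // release cycle, but warns once and delegates to the replacement.
  if (!warnedAboutLegacyRender) {
    warnedAboutLegacyRender = true;
    console.warn(
      "legacyRenderLines is deprecated and will be removed; use renderLines."
    );
  }
  return renderLines(lines); // behavior is unchanged, as with React
}
```

Clients see the warning in development, have a full release cycle to migrate, and nothing breaks until the scheduled removal.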

(My interpretation of the principle is based on The Open-Closed Principle by Robert C. Martin)

JacquesB
  • Good answer, I think Open/Closed is the least useful SOLID principal. Interface segregation is the most important, but least used. – TheCatWhisperer Apr 30 '17 at 17:20
    "The principle basically says you can't modify the behavior of a component. Instead you have to provide a new variant of the component with the desired behavior, and keep the old version around unchanged for backwards compatibility." – I disagree with this. The principle says that you should design components in such a way that it shouldn't be necessary to change its behavior because you can extend it to do what you want. The problem is that we haven't figured out how to do that yet, *especially* with the languages that are currently in wide-spread use. The Expression Problem is one part of … – Jörg W Mittag Apr 30 '17 at 17:39
    … that, for example. Neither Java nor C♯ have a solution for the Expression. Haskell and Scala do, but their userbase is much smaller. – Jörg W Mittag Apr 30 '17 at 17:40
  • @JörgWMittag: Can you point at some Haskell and Scala literature about solving the expression problem? Also, what about multimethods in Lisp? They also provide a solution to the expression problem, don't they? – Giorgio Apr 30 '17 at 17:49
    @Giorgio: In Haskell, the solution are type classes. In Scala, the solution is implicits and objects. Sorry, I don't have the links at hand, currently. Yes, multimethods (actually, they don't even need to be "multi", it's rather the "open" nature of Lisp's methods that is required) are also a possible solution. Note that there are multiple phrasings of the expression problem, because typically papers are written in such a way that the author adds a restriction to the expression problem which results in the fact that all currently existing solutions become invalid, then shows how his own … – Jörg W Mittag Apr 30 '17 at 18:00
    … language can even solve this "harder" version. For example, Wadler originally phrased the expression problem to not only be about modular extension, but *statically* safe modular extension. Common Lisp multimethods however are *not* statically safe, they are only dynamically safe. Odersky then strengthened this even more by saying it should be modular statically safe, i.e. the safety should be statically checkable without looking at the whole program, only by looking at the extension module. This can actually not be done with Haskell type classes, but it can be done with Scala. And in the … – Jörg W Mittag Apr 30 '17 at 18:03
    … future someone else will come up with another restriction that Scala fails but some as-yet-to-be-invented mechanism can provide (maybe dependent types are the answer)? – Jörg W Mittag Apr 30 '17 at 18:03
  • Regarding Java and C++: isn't the visitor pattern a possible solution. A bit verbose, but I have used it and it seems to work pretty well, in particular, you do not need to touch old code when you want to have a new implementation. – Giorgio Apr 30 '17 at 18:09
  • @Giorgio ... Using the visitor pattern on the expression problem results in you needing to update all your visitor classes when you need to add a new type of node to your syntax tree. There are ways around this (you could have default implementations in a superclass of your visitors, for example) but this ends up losing the static guarantee that you've handled all cases that are necessary. AIUI, the team working on the most recent iteration of the C# compiler sidestepped this by having the visitor implementations automatically generated ... Essentially using metaprogramming to make it go away. – Jules Apr 30 '17 at 18:53
  • @Jules: Indeed, in my project we use a default empty implementation in a superclass of the visitor(s). This is OK in our case but I see that it might not be OK in general. – Giorgio Apr 30 '17 at 20:54
  • @Giorgio: The Expression Problem says that it's hard to design kinds of data and operations on those kinds in such a way that you can add both new kinds and new operations. In "traditional" OO it is easy to add new kinds by subclassing, but adding new operations requires either duplicating them across classes or modifying the base class. In "traditional" FP, kinds of data are handled by case discrimination in the functions, which makes it easy to add new operations, but requires adding new cases to add new kinds of data. The Visitor Pattern turns operations into classes (i.e. types), … – Jörg W Mittag May 01 '17 at 01:16
  • … and types into methods (i.e. operations). Which means that you now can add operations easily but no longer types. You haven't solved the expression problem, you have only turned it 90°. – Jörg W Mittag May 01 '17 at 01:18
  • @JörgWMittag: I knew the background about the expression problem (OOP versus FP) but I hadn't looked any deeper into the visitor pattern, thanks for the explanation. I superficially assumed that the visitor pattern does solve the expression problem because (1) multimethods (multiple dispatch) do and (2) the visitor pattern is a way to emulate multiple dispatch. If I understand correctly, there is more to it: with multimethods neither types nor operations are tied to a specific module / class: adding an operation means adding a `defgeneric` and then implementing the operation for all types. – Giorgio May 01 '17 at 09:20
  • Adding a new type means implementing all operations for it. Both extensions can be done in a separate module. On the other hand, with the visitor pattern, the types are tied in the interface of the visitor, and the operations for the new types MUST be added to the existing implementations of the visitor. Good point! – Giorgio May 01 '17 at 09:22
    @Giorgio: Exactly. The thing that makes Common Lisp multimethods solve the EP is actually not multiple dispatch. It is the fact that the methods are open. In typical FP (or procedural programming), the type discrimination is tied to the functions. In typical OO, the methods are tied to the types. Common Lisp methods are *open*, they can be added to classes after the fact and in a different module. That's the feature that makes them usable for solving the EP. For example, Clojure's protocols are single dispatch, but also solve the EP (as long as you don't insist on static safety). – Jörg W Mittag May 01 '17 at 09:40
  • Where this answer lost me was here: "can't modify the behavior of a component". No. When you extend a class you most certainly modify it's behavior. What you don't modify is it's code. Finding a way to modify behavior without modifying code is the POINT of OCP. It lets you accommodate change by simply writing new code and plugging it into old code. I like being able to buy stereo equipment without being forced to buy new speakers. Don't you? – candied_orange May 01 '17 at 17:56
  • @CandiedOrange: Good call, I have rewritten to make that part a bit clearer. My point is just that in reality you cannot modify *any* kind of behavior through extension, so you sometimes might end up having to rewrite whole components to get desired behavior. E.g. a stereo is designed to have replaceable speakers, but you cannot easily turn it into a bicycle. – JacquesB May 01 '17 at 18:29
  • @JacquesB: Well, I do not think that anyone wants to turn a stereo into a bicycle. If you bought a stereo and it turns out you needed a bicycle, you put the stereo aside and go buy a bicycle. This would again speak in favour of OCP: do not try to change a component into something completely different. Rather, remove the old component and write a new one. – Giorgio May 01 '17 at 20:46
  • @Giorgio correct on all counts. What this design needs is an Entertainment abstraction that can accept either stereo or bicycle. Of course next they'll want tunes while biking but lets wait till they ask for that. – candied_orange May 01 '17 at 20:52
  • I can't open archive.org, is it the linked PDF the same as https://www.cs.duke.edu/courses/fall07/cps108/papers/ocp.pdf ??? – Mindwin Remember Monica May 02 '17 at 12:51
  • @Mindwin: Yes it is, I have updated the link. Thank you. – JacquesB May 02 '17 at 12:52
20

I would call the open/closed principle an ideal. Like all ideals, it gives little consideration to the realities of software development. Also like all ideals, it is impossible to actually attain it in practice -- one merely strives to approach that ideal as best as one can.

The other side of the story is known as the Golden Handcuffs. Golden Handcuffs are what you get when you bind yourself too tightly to the open/closed principle: a product which never breaks backwards compatibility can no longer grow, because too many past mistakes have accumulated.

A famous example of this is found in the Windows 95 memory manager. As part of the marketing for Windows 95, it was stated that all Windows 3.1 applications would work in Windows 95. Microsoft actually acquired licenses for thousands of programs to test them in Windows 95. One of the problem cases was Sim City. Sim City actually had a bug which caused it to write to unallocated memory. In Windows 3.1, without a "proper" memory manager, this was a minor faux pas. However, in Windows 95, the memory manager would catch this and cause a segmentation fault. The solution? In Windows 95, if your application name is simcity.exe, the OS will actually relax the constraints of the memory manager to prevent the segmentation fault!

The real issue behind this ideal is the paired concepts of products and services. Nobody really does purely one or the other; everything lands somewhere in the grey region between the two. If you think from a product-oriented approach, open/closed sounds like a great ideal: your products are reliable. However, when it comes to services, the story changes. It's easy to show that under the open/closed principle, the amount of functionality your team must support asymptotically approaches infinity, because you can never clean up old functionality. This means your development team must support more and more code every year. Eventually you reach a breaking point.

Most software today, especially open source, follows a common relaxed version of the open/closed principle. It's very common to see open/closed followed slavishly for minor releases, but abandoned for major releases. For example, Python 2.7 contains many "bad choices" from the Python 2.0 and 2.1 days, but Python 3.0 swept all of them away. (Also, the shift from the Windows 95 codebase to the Windows NT codebase when they released Windows 2000 broke all sorts of things, but it did mean we no longer have to deal with a memory manager that checks the application name to decide its behavior!)

Cort Ammon
  • That's a pretty great story about SimCity. Do you have a source? – BJ Myers Apr 30 '17 at 21:26
    @BJMyers It's an old story, Joel Spoleky mentions it near the end of [this article](https://www.joelonsoftware.com/2000/05/24/strategy-letter-ii-chicken-and-egg-problems/). I originally read it as part of a book on developing video games years ago. – Cort Ammon Apr 30 '17 at 21:28
    @BJMyers: I am pretty sure they had similar compatibility "hacks" for dozens of popular applications. – Doc Brown Apr 30 '17 at 21:33
    @BJMyers there is plenty of stuff like this, if you want a good read go to [The Old New Thing](https://blogs.msdn.microsoft.com/oldnewthing/) blog by [Raymond Chen](https://web-beta.archive.org/web/20090327183247/http://en.wikipedia.org:80/wiki/Raymond_Chen), browse the History tag or search for "compatibility". There is recollection of plenty of tales, including [something conspicuously close to the aforementioned SimCity case](https://blogs.msdn.microsoft.com/oldnewthing/20160418-00/?p=93307) - Addentum: Chen doesn't like to call names to blame. – Theraot May 01 '17 at 06:12
    Very few things broke even in the 95->NT transition. The original SimCity for Windows still works great on Windows 10 (32-bit). Even DOS games still work perfectly fine provided you either disable sound or use something like VDMSound to allow the console subsystem to handle audio properly. Microsoft takes backwards compatibility *very* seriously, and they're not taking any "let's put it in a virtual machine" shortcuts either. It sometimes needs a workaround, but that's still pretty impressive, especially in relative terms. – Luaan May 01 '17 at 17:15
  • Another story about Windows back-compat: even though the tightened security in Windows Vista only enforces through software what Microsoft had said not to do for over a decade before (for example, don't write to directories outside of the user's home and your own installation directory), the Vista kernel reportedly contains fingerprints of several *thousand* applications which are known to violate those restrictions and thus would break on Vista; the OS detects those applications and relaxes the security constraints for them. – Jörg W Mittag May 05 '17 at 08:34
11

Doc Brown's answer is the closest to accurate; the other answers illustrate misunderstandings of the Open/Closed Principle.

To articulate the misunderstanding explicitly: there seems to be a belief that the OCP means you should not make backwards-incompatible changes (or even any changes at all). But the OCP is about designing components so that you don't need to change them in order to extend their functionality, regardless of whether those changes are backwards compatible or not. There are many reasons besides adding functionality that you may change a component, whether backwards compatible (e.g. refactoring or optimization) or backwards incompatible (e.g. deprecating and removing functionality). That you make these changes doesn't mean your component violated the OCP (and it definitely doesn't mean that you are violating the OCP).

Really, it's not about source code at all. A more abstract and relevant statement of the OCP is: "a component should allow for extension without needing to violate its abstraction boundaries". I would go further and say a more modern rendition is: "a component should enforce its abstraction boundaries but allow for extension". Even in Bob Martin's article on the OCP, while he "describes" "closed to modification" as "the source code is inviolate", he later starts talking about encapsulation, which has nothing to do with modifying source code and everything to do with abstraction boundaries.

So, the faulty premise in the question is that the OCP is (intended as) a guideline about evolutions of a codebase. The OCP is typically sloganized as "a component should be open to extensions and closed to modifications by consumers". Basically, if a consumer of a component wants to add functionality to the component they should be able to extend the old component into a new one with the additional functionality, but they should not be able to change the old component.

The OCP says nothing about the creator of a component changing or removing functionality. The OCP is not advocating maintaining bug compatibility forevermore. You, as the creator, are not violating the OCP by changing or even removing a component. You, or rather the components you've written, are violating the OCP if the only way consumers can add functionality to a component is by mutating it, e.g. by monkey patching, or by having access to the source code and recompiling. In many cases, neither of these is an option for the consumer, which means that if your component isn't "open for extension" they are out of luck: they simply can't use your component for their needs. The OCP argues for not putting the consumers of your library in this position, at least with respect to some identifiable class of "extensions". Even when modifications can be made to the source code, or even to the primary copy of the source code, it's best to "pretend" that you can't modify it, as there are many potential negative consequences to doing so.
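To make "open for extension, closed to modification" concrete, here is a minimal sketch (all names are invented for illustration): a component that exposes an explicit extension point, so a consumer never needs to mutate it or edit its source to add behavior.

```javascript
// A component designed per the OCP: consumers extend its behavior
// through the `format` extension point, never by editing its source
// or monkey patching it. (All names here are invented for illustration.)
class Logger {
  constructor(format = (msg) => msg) {
    this.format = format; // the extension point
  }
  log(msg) {
    return this.format(msg);
  }
}

// Extension by a consumer: no change to Logger itself.
const timestamped = new Logger((msg) => `[2017-05-01] ${msg}`);

// What the OCP argues a component should never *require* of consumers:
// Logger.prototype.log = function (msg) { /* ... */ };  // monkey patching

console.log(new Logger().log("hello"));   // "hello"
console.log(timestamped.log("hello"));    // "[2017-05-01] hello"
```

If `Logger` had hard-coded its formatting, a consumer's only recourse would be one of the mutations above, which is exactly the situation the OCP warns against.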

So to answer your questions: No, these changes are not violations of the OCP. No change an author makes can be a violation of the OCP, because the OCP is not a property of changes. The changes, however, can create violations of the OCP, and they can be motivated by failures of the OCP in prior versions of the codebase. The OCP is a property of a particular piece of code, not of the evolutionary history of a codebase.

For contrast, backwards compatibility is a property of a change of code. It makes no sense to say some piece of code is or is not backwards compatible. It only makes sense to talk about the backwards compatibility of some code with respect to some older code. Therefore, it never makes sense to talk about the first cut of some code being backwards compatible or not. The first cut of code can satisfy or fail to satisfy the OCP, and in general we can determine whether some code satisfies the OCP without referring to any historical versions of the code.
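To illustrate that contrast (the function names here are invented for this sketch): backwards compatibility is a relation between two versions of some code, while OCP conformance can be judged from a single version in isolation.

```javascript
// v1: judged entirely on its own, this satisfies the OCP —
// ordering is an extension point, so consumers never need to
// edit the function's source to change how it sorts.
function sortV1(items, compare = (a, b) => a - b) {
  return [...items].sort(compare);
}

// v2: the default comparator was removed, so calls that worked
// against v1, like sortV1([3, 1]), now throw. The *change* from
// v1 to v2 is backwards incompatible — yet v2, judged in
// isolation, still satisfies the OCP: consumers still extend it
// by supplying a comparator, never by editing its source.
function sortV2(items, compare) {
  if (typeof compare !== "function") {
    throw new TypeError("a comparator is required");
  }
  return [...items].sort(compare);
}
```

"Backwards compatible" describes the pair (v1, v2); "satisfies the OCP" describes each version on its own.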

As to your last question, it's arguably off-topic for Stack Exchange in general as being primarily opinion-based, but the short of it is: welcome to tech, and particularly JavaScript, where in the last few years the phenomenon you describe has been called JavaScript fatigue. (Feel free to Google that term to find a variety of other articles, some satirical, discussing this from multiple perspectives.)

Derek Elkins left SE
  • 6,591
  • 2
  • 13
  • 21
  • 3
    "You, as the creator, are not violating the OCP by changing or even removing a component." - can you provide a reference for this? None of the definitions of the principle I have seen states that "the creator" (whatever that means) is exempt from the principle. Removing a published component is clearly a breaking change. – JacquesB May 01 '17 at 11:31
  • 1
    @JacquesB People and even code changes don't violate the OCP, components (i.e. actual pieces of code) do. (And, to be perfectly clear, that means the component fails to live up to the OCP itself, not that it violates the OCP of some other component.) The whole point of my answer is that the OCP isn't talking about code changes, breaking or otherwise. A *component* is either open to extension and closed to modification, or it's not, just like a method may be `private` or not. If an author makes a `private` method `public` later on, that doesn't mean they've violated the access control, (1/2) – Derek Elkins left SE May 01 '17 at 12:26
  • 2
    ... nor does it mean the method wasn't really `private` before. "Removing a published component is clearly a breaking change," is a non sequitur. Either the components of the new version satisfy the OCP or they don't, you don't need the history of the codebase to determine this. By your logic, I could never write code that satisfies the OCP. You are conflating backwards compatibility, a property of code changes, with the OCP, a property of code. Your comment makes about as much sense as saying quicksort isn't backwards compatible. (2/2) – Derek Elkins left SE May 01 '17 at 12:26
  • 1
    In Martin's paper he explains "closed for modification" as "*The source code of such a module is inviolate. No one is allowed to make source code changes to it.*" – JacquesB May 01 '17 at 13:13
  • 3
    @JacquesB First, note again that this is talking about a *module* conforming to the OCP. The OCP is advice on how to *write* a module so that *given the constraint* that the source code cannot be changed, the module can nevertheless be extended. Earlier in the paper he talks about *designing* modules that never change, not about implementing a change management process that never allows modules to change. Referring to the edit to your answer, you don't "break the OCP" by modifying the code of the module. Instead, if "extending" the module *requires* you to modify the source code, (1/3) – Derek Elkins left SE May 01 '17 at 14:51
  • 1
    ...your module never satisfied the OCP in the first place. If, for a module that does conform to the OCP, you modify the source code unnecessarily or make a change that does not count as an "extension", this doesn't change the fact that the module conformed to the OCP. There may be bad consequences to doing this and avoiding those consequences was part of the *motivation* for the OCP, but avoiding those consequences is not itself the OCP. The OCP is about making it possible to not *need* to face those consequences. Frankly, that paper, like most OOD literature, is vague and inconsistent. (2/3) – Derek Elkins left SE May 01 '17 at 14:51
  • It never defines what constitutes an extension, and the meaning of "closure" morphs as the paper progresses. For example, later he talks about "classes [that] are closed against changes in variables". What? What does that have to do with modifications to source code? In fact, encapsulation would seem to only *increase* the likelihood that I would need to modify the source code of a module to support a change, and yet he presents it as another instance of the OCP. His "description" is at best a sloppy description of the OCP even just with respect to the rest of the article! (3/3) – Derek Elkins left SE May 01 '17 at 14:51
  • 2
    *"The OCP is a property of a particular piece of code, not the evolutionary history of a codebase."* - excellent! – Doc Brown May 02 '17 at 06:33
  • @DerekElkins: Thanks a lot, but this is so confusing to me: you say that `People and even code changes don't violate the OCP`, and then you say `components (i.e. actual pieces of code) do`! Could you explain why deleting components and extracting behaviors into new ones can't be considered a violation of the `OCP`? And then what's the meaning of `OCP`? I understand it as "the source code has to be open to be extended and closed to be modified". – Anyname Donotcare May 02 '17 at 10:27
  • 1
    @AnynameDonotcare: a programmer might violate *backwards compatibility* (but not the OCP) by changing (or deleting) a component. When we say "a programmer violates the OCP", we mean "a programmer designs a component in a way it will be likely that it will need modifications in the nearby future". – Doc Brown May 02 '17 at 11:23
  • @DocBrown: then the `OCP` is about the design, not the implementation? But it becomes like a philosophical case :D since making a change in a component means that we designed it prone to future modifications in the first place! – Anyname Donotcare May 02 '17 at 11:31
  • @AnynameDonotcare: depends on what you understand by "design", and what by "implementation". – Doc Brown May 02 '17 at 11:45
  • @AnynameDonotcare The OCP isn't really about source code at all. It's about abstraction boundaries, hence the emphasis on "components" and "modules". Modifying the source code is just an extreme form of violating the abstraction boundary. I've made a substantial edit to my answer to expand on this. A more general form of the OCP (and this more general form is necessary to understand the examples in Bob Martin's article) is that you should be able to add functionality to a component without violating its abstraction boundaries. This is why monkey patching isn't an acceptable form of extension. – Derek Elkins left SE May 02 '17 at 23:19
  • @DerekElkins: indeed, even in languages like Java or C# one can "extend" a component without modifying the source, but by breaking an abstraction boundary. This can be done by some "reflection hack", for example, to access private members. However, I think the mental model of "not modifying the source" is easier to understand and sufficient for most of the discussion, so it is not a bad model. – Doc Brown May 03 '17 at 06:04
  • 1
    @AnynameDonotcare: When you buy a car, you are supposed to use it as it is. You shouldn't modify it, because by modifying the car you invite all kinds of weird problems. For example, the police might tell you that your car isn't roadworthy because it has been modified. On the other hand, the manufacturer is absolutely free to release a new, modified, and hopefully improved model every year. – gnasher729 May 04 '17 at 00:27