62

In Java there are no virtual, new, or override keywords for method definitions, so how a method behaves is easy to understand: if DerivedClass extends BaseClass and has a method with the same name and signature as one in BaseClass, overriding takes place through run-time polymorphism (provided the method is not static).

BaseClass bcdc = new DerivedClass();
bcdc.doSomething(); // will invoke DerivedClass's doSomething method

Now, coming to C#: there is so much potential for confusion that it is hard to understand how new, virtual + override, or new + virtual works.

I'm not able to understand why in the world I would add a method to my DerivedClass with the same name and signature as one in BaseClass and define new behaviour, only to have the BaseClass method invoked at run time through polymorphism (which is not overriding, but logically it should be).

With virtual + override the logical behaviour is correct, but the programmer has to decide at coding time which methods users should be permitted to override. That has its pros and cons (let's not go there now).

So why does C# leave so much room for illogical reasoning and confusion? Let me reframe my question: in which real-world context should I use virtual + override instead of new, and when should I use new instead of virtual + override?


After some very good answers, especially Omer's, I understand that the C# designers put more stress on making programmers think before they create a method, which is good and prevents some rookie mistakes seen in Java.

Now I have a question in mind. In Java, if I had code like

Vehicle vehicle = new Car();
vehicle.accelerate();

and later I create a new class SpaceShip derived from Vehicle, then if I want to change every Car to a SpaceShip object I only have to change a single line of code:

Vehicle vehicle = new SpaceShip();
vehicle.accelerate();

This will not break my logic at any point in the code.

But in C#, if SpaceShip does not override the Vehicle class's accelerate and uses new instead, the logic of my code will be broken. Isn't that a disadvantage?
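To make the Java side of this concrete, here is a minimal, self-contained sketch (class and method names are illustrative, not from any real API) showing that only the constructor line changes and dispatch still reaches the subclass:

```java
// Java dynamic dispatch: the subclass method is always the one invoked.
class Vehicle {
    String accelerate() { return "vehicle accelerating"; }
}

class Car extends Vehicle {
    @Override
    String accelerate() { return "car accelerating"; }
}

class SpaceShip extends Vehicle {
    @Override
    String accelerate() { return "spaceship accelerating"; }
}

class Demo {
    public static void main(String[] args) {
        Vehicle vehicle = new Car();
        System.out.println(vehicle.accelerate()); // car accelerating

        // Switching to SpaceShip requires changing only this one line;
        // dispatch still goes to the subclass at run time.
        vehicle = new SpaceShip();
        System.out.println(vehicle.accelerate()); // spaceship accelerating
    }
}
```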

  • 81
    You're just used to the way Java does it, and simply haven't taken the time to understand the C# keywords on their own terms. I work with C#, understood the terms immediately, and find the way that Java does it odd. – Robert Harvey Jun 18 '14 at 15:26
  • 1
    There are times you want to override the method and times when you don't. There are times when you want to override the method and do something entirely different or do something additional. This is the reason C# `virtual` works the way it does. You decide exactly how that method should work. – Ramhound Jun 18 '14 at 15:39
  • 12
    IIRC, C# did this to "favor clarity." If you must explicitly say "new" or "override," it makes it clearly and immediately apparent what is happening, rather than you having to faff around trying to figure out whether the method is overriding some behavior in a base class or not. I also find it very useful to be able to specify which methods I want to specify as virtual, and which ones I don't. (Java does this with `final`; it's just the opposite way). – Robert Harvey Jun 18 '14 at 15:52
  • 15
    “In Java there are no virtual, new, override keywords for method definition.” Except there is [`@Override`](http://stackoverflow.com/q/94361/41071). – svick Jun 18 '14 at 16:03
  • 4
    See also [this answer](http://stackoverflow.com/a/836841/18192) to [Why all java methods are implicitly overridable](http://stackoverflow.com/q/836705/18192). – Brian Jun 18 '14 at 16:05
  • 4
  • @svick annotations and keywords are not the same thing :) – Anirban Nag 'tintinmj' Jun 18 '14 at 16:08
  • 3
    It's worth noting that the designers of C# had Java to look at, whereas the designers of Java didn't have C# to look at, so to some extent the differences between Java and C# come down to learning from problems in Java and trying to solve them. – Jack Aidley Jun 18 '14 at 20:31
  • 1
    It simply provides more tools for the programmer. You can always think and work the Java way, but C# has more language features. – marczellm Jun 18 '14 at 20:42
  • @JackAidley: the designers of Java had C++ to look at, which also had explicit virtual methods, but still decided to make virtual the default. – SztupY Jun 19 '14 at 09:21
  • 3
    @SztupY: And you can see why they thought it would be an improvement but the experience of Java is part of the reason why C#'s designers decided to do it. Language design is always a bit of learning process. I don't think anyone would design C++ quite how it's designed if they were starting from scratch today, for example. – Jack Aidley Jun 19 '14 at 09:48
  • 2
    Because inheritance is stupid and composition is better than extension – AK_ Jun 19 '14 at 20:02
  • You should learn C++! – user541686 Jun 20 '14 at 04:04
  • 2
    "It simply provides more tools for the programmer.": However, sometimes less is more (https://en.wikipedia.org/wiki/KISS_principle). – Giorgio Jun 20 '14 at 06:49
  • Your edit is almost a separate question, and it looks like I've misunderstood something. In your example, if Spaceship doesn't *have* an "accelerate" method, then both Java and C# will use the base class's behaviour (in this case Vehicle). If it does, in Java, you will automatically always call Spaceship's, and in C# you'll get a compiler warning about "method hiding" and it will still call Vehicle's (hence the warning). At which point, you simply add the override keyword to Space Ship. – deworde Jun 20 '14 at 09:53
  • 1
    The only issue is if Vehicle's accelerate is non-virtual, which would mean *Car's* override wouldn't work either. Edit: That's my misunderstanding, using "new" to remove the compiler warning. In this case, you are specifically saying "I don't want a user who's using a Vehicle to be exposed to Spaceship.accelerate()", and they won't be. That's not a bug, that's a feature. A feature you should generally avoid unless you have a specific use for it (which is why it's generally a warning), but a feature nonetheless. – deworde Jun 20 '14 at 09:54
  • downvote issued because the question has been updated to an entirely new question. – Ramhound Jun 20 '14 at 15:46
  • @Ramhound sorry but I have to edit the question, because that question was constantly arising in my mind and getting answer and *understanding it fully* is more valuable to me than votes. – Anirban Nag 'tintinmj' Jun 20 '14 at 18:31
  • @AnirbanNag'tintinmj' So ask a new question. Changing it completely invalidates all the information that was already here, it wastes the time of those that actually tried to answer your original question, and wastes the time of those of us that found this thread on google in search of coherent information. – arkon May 18 '16 at 21:21

6 Answers

95

It was done because it's the correct thing to do. The fact is that allowing all methods to be overridden is wrong; it leads to the fragile base class problem, where you have no way of telling if a change to the base class will break subclasses. Therefore you must either blacklist the methods that shouldn't be overridden or whitelist the ones that are allowed to be overridden. Of the two, whitelisting is not only safer (since you can't create a fragile base class accidentally), it also requires less work since you should avoid inheritance in favor of composition.
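In Java terms, the whitelist approach amounts to sealing everything by default and opening only deliberate extension points. A minimal sketch of this style, with invented names:

```java
// Whitelisting extension points in Java (illustrative names).
// The overall flow is final (closed); only formatBody is designated
// as safe for subclasses to override.
abstract class ReportGenerator {
    // Part of the stable contract: subclasses may NOT change the flow.
    public final String generate(String data) {
        return "HEADER\n" + formatBody(data) + "\nFOOTER";
    }

    // The single designated extension point.
    protected abstract String formatBody(String data);
}

class CsvReportGenerator extends ReportGenerator {
    @Override
    protected String formatBody(String data) {
        return data.replace(' ', ',');
    }
}

class WhitelistDemo {
    public static void main(String[] args) {
        System.out.println(new CsvReportGenerator().generate("a b c"));
    }
}
```

Because `generate` is final, a change to its implementation cannot be silently intercepted by a subclass, which is exactly the fragile-base-class hazard the whitelist avoids.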

Doval
  • 15,347
  • 3
  • 43
  • 58
  • Please clarify: Are you saying that overriding methods *in general* is wrong? Or simply that allowing any arbitrary method to be overridden is wrong? – Bobson Jun 18 '14 at 15:43
  • 7
    Allowing any arbitrary method to be overridden is wrong. You should only be able to override those methods that the superclass designer has designated as safe for overriding. – Doval Jun 18 '14 at 15:48
  • Ok, that's what I was hoping you meant. Upvoted. – Bobson Jun 18 '14 at 15:52
  • 1
    except in the real world, base classes never know what they need to have overridden; this is why a derived class can call the base method, so the base class doesn't need to know that it has been overridden. In C# you can easily hide the base class method entirely using the new keyword anyway, so it's all a bit messy. A change to a base class could always break derived classes, and shouldn't be worried about - any change to any code can break things. The only consideration that makes sense is the sealed modifier, but that is a poor restriction on a dev's ability to read the API docs! – gbjbaanb Jun 18 '14 at 16:00
  • 21
    Eric Lippert discusses this in detail in his post, [Virtual Methods and Brittle Base Classes](http://blogs.msdn.com/b/ericlippert/archive/2004/01/07/virtual-methods-and-brittle-base-classes.aspx). – Brian Jun 18 '14 at 16:04
  • @Doval *"superclass designer has designated as safe for overriding"* so as a designer what are the rules of thumb? – Anirban Nag 'tintinmj' Jun 18 '14 at 16:39
  • 4
    @gbjbaanb "Any change to any code can break things." That is *not* what I'm talking about here. I'm saying it's *impossible* to know if a change will break a subclass without inspecting all of the subclasses. It doesn't matter how much you review the code. If the base class in question is part of a public API, you're double screwed, because even if you were willing to inspect every single subclass, you can't; [that's a fact](http://www.cs.cmu.edu/~aldrich/papers/selective-open-recursion.pdf). Inheritance is already problematic enough without dragging the fragile base class problem into it. – Doval Jun 18 '14 at 17:00
  • 3
    @tintinmj Simple - seal everything until you know it requires overriding. Since you should favor composition over inheritance, this'll be the vast majority of the cases. – Doval Jun 18 '14 at 17:01
  • 12
    Java has `final` modifier, so what's the problem? – Display Name Jun 18 '14 at 19:03
  • 7
    @SargeBorsch It's better to opt into inheritance than to opt out. In my opinion, C# doesn't go far enough - it'd be even better if everything was `sealed` by default. This is especially true since knowledge of the fragile base class problem isn't ubiquitous. It's not uncommon to see Java classes left completely open even though they were never designed for inheritance. – Doval Jun 18 '14 at 19:12
  • @Doval how it is related to the inheritance? Even if the base class method is non-virtual, you can break derived classes by changing its implementation if a derived class depends on a property of this method that will no longer hold. And you can do it even without inheritance at all... Why this is a special case? – Display Name Jun 18 '14 at 19:20
  • 2
    @SargeBorsch Good question. In such a case the change you mention would've "broken" the base class too, in the sense that any code depending on the base class is now wrong. The insidious thing about the fragile base class problem is that a change that leaves the behavior of the base class intact can *still* break the subclasses and there's no way to know a priori. As a simple example, consider a Set class with methods `add(E element)` and `addAll(Iterable elements)`, neither one being `sealed`/`final`. For convenience, you call `add` from the implementation of `addAll`. (continued...) – Doval Jun 18 '14 at 19:33
  • 1
    @SargeBorsch (...) Later someone decides to create a subclass of `Set` that optimizes the performance of `size()` by adding a private counter and overriding `add` and `addAll` so they increment this counter. Because you implemented `addAll` in terms of `add`, the counter gets bumped up twice. The person now notes this dependency between `add` and `addAll` and fixes his code by overriding only one of them. However, you had intended the dependency between `add` and `addAll` to be an implementation detail! Later you find a more efficient way to implement `addAll` that doesn't call `add`... – Doval Jun 18 '14 at 19:38
  • 2
    @SargeBorsch ...but otherwise behaves correctly. This update will break the subclass, but it looks correct to you. This happened because you didn't establish an API for inheritance, but still left the class open. One way to establish an API for inheritance would've been to make `addAll` sealed/final and make the fact that it's implemented in terms of `add` part of the contract. On the other hand, if you never even thought about inheritance, the class should've been sealed so no one can write a subclass that you'll inadvertently break. – Doval Jun 18 '14 at 19:43
  • @Doval this scenario can be done with composition, too, isn't it? – Display Name Jun 18 '14 at 19:43
  • 1
    @SargeBorsch No, composition would be safe. If someone implements a new kind of set by having the original set as a member, they wouldn't be affected by implementation changes to the original set. – Doval Jun 18 '14 at 19:46
  • 1
    Hmm, the difference is that base class can't call "overridden" methods of the container class. But anyway, if a person abuses this feature, it is not the language problem, I think... – Display Name Jun 18 '14 at 19:48
  • @SargeBorsch It's the language's problem in the sense that Java's default behavior (allowing inheritance) is dangerous. Someone who doesn't know any better won't mark the method/class `final` and inadvertently creates a fragile base class. Because inheritance requires planning, the default should be to make everything sealed. – Doval Jun 18 '14 at 19:51
  • @Doval objects should be open to extension - they really shouldn't be made sealed by default. With the introduction of the `@Override` annotation you get a warning if you override something but didn't annotate it as such (and you can make your compiler escalate this warning to an error), and you get an error if you have an @Override annotation but don't actually override. This allows the base developer to make his class open to being overriden (good) and gives the other developer immediate feedback as to if he needs to investigate further. It's win-win. – corsiKa Jun 18 '14 at 21:54
  • 14
    @corsiKa "objects should be open to extension" - why? If the developer of the base class never thought about inheritance it's almost guaranteed that there are subtle dependencies and assumptions in the code that will lead to bugs (remember `HashMap`?). Either design for inheritance and make the contract clear or don't and make sure nobody can inherit the class. `@Override` was added as a bandaid because it was too late to change the behavior by then, but even the original Java devs (particularly Josh Bloch) agree that this was a bad decision. – Voo Jun 18 '14 at 23:05
  • @corsiKa As Voo points out, ad-hoc extension simply doesn't work. But even if the fragile base class problem weren't an issue, inheritance still creates tight coupling and on top of that doesn't scale since you can only inherit from one class. Composition dodges both problems. If you want something that's open, what you want is an interface. – Doval Jun 18 '14 at 23:16
  • 1
    @Doval "It's better to opt into inheritance than to opt out." That's your opinion, other people obviously disagree, including language designers... – jwenting Jun 19 '14 at 06:51
  • 9
    @jwenting "Opinion" is a funny word; people like to attach it to facts so they can disregard them. But for whatever it's worth it's also Joshua Bloch's "opinion" (see: *Effective Java*, [Item 17](http://goo.gl/5fCv6z)), James Gosling's "opinion" (see [this interview](http://www.artima.com/intv/gosling3P.html)), and as pointed out in [Omer Iqbal's answer](http://programmers.stackexchange.com/a/245419/116461), it's also Anders Hejlsberg's "opinion". (And although he never addressed opting in vs opting out, Eric Lippert clearly agrees inheritance is dangerous as well.) So who are you referring to? – Doval Jun 19 '14 at 11:28
  • @jwenting: Many arguments in language design stem from a belief that individual things should be "always" or "never" be implicitly allowed, rather than recognizing the significance of combinations. Many such arguments could be avoided if language designers would focus more on trying to distinguish situations where something would, if allowed, likely behave as intended [e.g. `someFloat = someInteger*0.1;`] from those which wouldn't [e.g. `someDouble = someLong*0.1f;`], rather than trying to design the "simplest" rules. – supercat Jun 19 '14 at 16:33
  • 1
    @jwenting Umn name one language designer who's of the opinion that opting out of inheritance is preferable to opting in. Because seriously I can't think of a single one. Certainly not the Java designers as Doval points out - matter of fact Josh Bloch is an opponent of using inheritance much these days (as in it's mostly overused with bad results). – Voo Jun 20 '14 at 18:05
  • Closed source objects should ALWAYS be open for extension. Unless you are releasing the source code, you are handicapping your software by keeping it from being extended. – rich remer Jun 21 '14 at 14:36
  • 4
    @richremer Do you have *any* objective basis for this statement? There's no free lunch here; classes that weren't designed for extension, and whose extension points haven't been clearly documented, can't be extended without your subclass breaking at random. And if you really want your software to be extensible, you wouldn't be using inheritance in the first place; interfaces and composition give you all the benefits of inheritance (and more; you can't multiply inherit) without any of the pitfalls. – Doval Jun 21 '14 at 14:56
  • "Can't be extended" by someone without the talent to do so. This comes from years of working with both proprietary and open software built by shitty developers. – rich remer Jun 21 '14 at 15:10
  • @richremer Yes, "can't be extended", because it makes *promises* about behaviour that its clients *rely on*, and receives maintenance in the form of implementation details changing. Pick two of "behaving as specified", "inheritable", or "receives updates" – Caleth Nov 21 '19 at 11:03
90

Since you asked why C# did it this way, it's best to ask the C# creators. Anders Hejlsberg, the lead architect of C#, explained in an interview why they chose not to make methods virtual by default (as in Java); the pertinent snippets are below.

Keep in mind that Java makes methods virtual by default, with the final keyword to mark a method as non-virtual. That is still two concepts to learn, but many folks do not know about the final keyword or do not use it proactively. C# forces one to use virtual and new/override to make those decisions consciously.

There are several reasons. One is performance. We can observe that as people write code in Java, they forget to mark their methods final. Therefore, those methods are virtual. Because they're virtual, they don't perform as well. There's just performance overhead associated with being a virtual method. That's one issue.

A more important issue is versioning. There are two schools of thought about virtual methods. The academic school of thought says, "Everything should be virtual, because I might want to override it someday." The pragmatic school of thought, which comes from building real applications that run in the real world, says, "We've got to be real careful about what we make virtual."

When we make something virtual in a platform, we're making an awful lot of promises about how it evolves in the future. For a non-virtual method, we promise that when you call this method, x and y will happen. When we publish a virtual method in an API, we not only promise that when you call this method, x and y will happen. We also promise that when you override this method, we will call it in this particular sequence with regard to these other ones and the state will be in this and that invariant.

Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you've got to be real careful about that. You don't want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises. And people may not fully understand the promises they are making when they make something virtual.

The interview has more discussion about how developers think about class inheritance design, and how that led to their decision.

Now to the following question:

I'm not able to understand why in the world I would add a method to my DerivedClass with the same name and signature as one in BaseClass and define new behaviour, only to have the BaseClass method invoked at run time through polymorphism (which is not overriding, but logically it should be).

This would be when a derived class wants to declare that it does not abide by the contract of the base class, but has a method with the same name. (For anyone who doesn't know the difference between new and override in C#, see this Microsoft Docs page).

A very practical scenario is this:

  • You created an API, which has a class called Vehicle.

  • I started using your API and derived Vehicle.

  • Your Vehicle class did not have any method PerformEngineCheck().

  • In my Car class, I add a method PerformEngineCheck().

  • You released a new version of your API and added a PerformEngineCheck().

  • I cannot rename my method because my clients are dependent on my API, and it would break them.

  • So when I recompile against your new API, C# warns me of this issue, e.g.

    If the base PerformEngineCheck() was not virtual:

     app2.cs(15,17): warning CS0108: 'Car.PerformEngineCheck()' hides inherited member 'Vehicle.PerformEngineCheck()'.
     Use the new keyword if hiding was intended.
    

    And if the base PerformEngineCheck() was virtual:

     app2.cs(15,17): warning CS0114: 'Car.PerformEngineCheck()' hides inherited member 'Vehicle.PerformEngineCheck()'.
     To make the current member override that implementation, add the override keyword. Otherwise add the new keyword.
    
  • Now, I must explicitly make a decision whether my class is actually extending the base class' contract, or if it is a different contract but happens to be the same name.

  • By making it new, I do not break my clients if the functionality of the base method was different from the derived method. Any code that referenced Vehicle will not see Car.PerformEngineCheck() called, but code that had a reference to Car will continue to see the same functionality that I had offered in PerformEngineCheck().

A similar example: another method in the base class might call PerformEngineCheck() (especially in the newer version); how does one prevent it from reaching the derived class's PerformEngineCheck()? In Java, that decision rests with the base class, but the base class knows nothing about the derived class. In C#, the decision rests with both the base class (via the virtual keyword) and the derived class (via the new and override keywords).

Of course, the warnings the compiler emits also provide a useful tool for programmers, so that they do not unexpectedly override, or introduce new functionality, without realizing it.
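The Java side of this versioning hazard can be sketched as follows (names like BaseVehicle and ClientCar are invented for illustration). When the base class later gains a method the subclass had already declared, the subclass silently becomes an override, and base-class code that calls the method now reaches the subclass:

```java
// Version 2 of a hypothetical API: BaseVehicle now declares
// performEngineCheck itself and calls it internally.
class BaseVehicle {
    boolean performEngineCheck() { return true; } // added in v2

    String startUp() {
        // Base-class logic relies on its own notion of the check...
        return performEngineCheck() ? "started" : "refused to start";
    }
}

// Written against v1, before BaseVehicle had performEngineCheck().
// In Java this now *silently* overrides the new base method.
class ClientCar extends BaseVehicle {
    boolean performEngineCheck() { return false; } // same name, unrelated meaning
}

class VersioningDemo {
    public static void main(String[] args) {
        // BaseVehicle.startUp now unexpectedly calls ClientCar's method.
        System.out.println(new ClientCar().startUp()); // refused to start
    }
}
```

In C#, recompiling ClientCar against v2 would instead produce the hiding warning shown above, forcing an explicit new-or-override decision.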

As Anders said, the real world forces us into such issues; if we were starting from scratch, we would never want to get into them.

EDIT: Added an example of where new would have to be used for ensuring interface compatibility.

EDIT: While going through the comments, I also came across a write-up by Eric Lippert (then one of the members of C# design committee) on other example scenarios (mentioned by Brian).


PART 2: Based on updated question

But in C#, if SpaceShip does not override the Vehicle class's accelerate and uses new instead, the logic of my code will be broken. Isn't that a disadvantage?

Who decides whether SpaceShip is actually overriding Vehicle.accelerate() or doing something different? It has to be the SpaceShip developer. If the SpaceShip developer decides that they are not keeping the contract of the base class, then your call to Vehicle.accelerate() should not go to SpaceShip.accelerate(), should it? That is when they mark it new. If, however, they decide that it does keep the contract, they mark it override. In either case, your code behaves correctly, calling the right method based on the contract. How else could your code decide whether SpaceShip.accelerate() actually overrides Vehicle.accelerate() or is just a name collision? (See my example above.)

However, with implicit overriding (as in Java), even if SpaceShip.accelerate() did not keep the contract of Vehicle.accelerate(), the call would still go to SpaceShip.accelerate().

Pang
  • 313
  • 4
  • 7
Omer Iqbal
  • 3,224
  • 15
  • 22
  • 13
    The performance point is completely obsolete by now. For proof, see my [benchmark](http://stackoverflow.com/questions/24222991) showing that accessing a field via a *non-final* but never overridden method takes a single cycle. – maaartinus Jun 18 '14 at 20:17
  • 7
    Sure, that might be the case. The question was that when C# decided to do so, why did it at THAT time, and hence this answer is valid. If the question is whether it still makes sense, that's a different discussion, IMHO. – Omer Iqbal Jun 18 '14 at 20:22
  • 1
    I fully agree with you. – maaartinus Jun 18 '14 at 21:09
  • 2
    IMHO, while there are definitely uses for having functions non-virtual, the risk of unexpected pitfalls occurs when a something which isn't expected to override a base-class method or implement an interface, does so. – supercat Jun 19 '14 at 01:38
  • @OmerIqbal See the edit please. – Anirban Nag 'tintinmj' Jun 19 '14 at 08:32
  • @tintinmj I am unable to see the edit. When I go to see the edit history, it shows me all of my edits, and no proposed edits, if that is what you meant. – Omer Iqbal Jun 19 '14 at 08:38
  • @OmerIqbal no I added something at last in my question. With heading as **EDIT** – Anirban Nag 'tintinmj' Jun 19 '14 at 08:48
  • @tintinmj updated response based on your edited Q, but the response seems to be becoming a bit overwhelming now – Omer Iqbal Jun 19 '14 at 09:04
  • @OmerIqbal *"However, in the case of implicit inheritance, [...] still go to SpaceShip.accelerate()"* sorry but I didn't understand. – Anirban Nag 'tintinmj' Jun 19 '14 at 10:54
  • @OmerIqbal What I was talking about in my *edit* is suppose I'm the developer of `SpaceShip` class and I created the `accelerate` method as `new`. My client was using `Car` before in their code, so they have code like `Vehicle vehicle = new Car();`. Now they want to use `SpaceShip` class so they can just write `Vehicle vehicle = new SpaceShip()` and compiler will not complain anything; right? But at runtime `SpaceShip.accelerate()` method will be not invoked rather `Vehicle.accelerate()` method will be invoked. Which is not intended! – Anirban Nag 'tintinmj' Jun 19 '14 at 11:01
  • @tintinmj If it is not intended then why are you using `new` and not `override`? You are writing the `SpaceShip` class; you have to decide whether accelerate has the same contract as the base class or not (e.g. it accelerates the car until it is destroyed or crashes, but that's not the base class's contract). The compiler is not going to make the decision, and that's the whole point; it seems you are expecting the compiler to make this decision, or to correct your decision if you make an incorrect one. Please move this to chat if you want to continue chatting :-) – Omer Iqbal Jun 19 '14 at 17:38
  • 1
    I find it weird that Anders is talking about "making promises" by making a method virtual. The responsibility of breaking something should be on the guy overriding, not the guy developing the base class. The lack of flexibility in C# and "final by default" has caused a lot of headache when it comes to unit testing the .NET framework. E.g. we would not need RhinoMock if ASP.NET was overridable. – Nilzor Jun 20 '14 at 07:41
  • 4
    @Nilzor: Hence the argument regarding academic vs. pragmatic. Pragmatically, the responsibility for breaking something lies in the last changemaker. If you change your base class, and an existing derived class depends on it not changing, that's not their problem ("they" may no longer exist). So base classes become locked into their derived classes' behaviour, *even if that class should never have been virtual*. And as you say, RhinoMock exists. Yes, if you correctly final all methods, everything is equivalent. Here we point at every Java system ever built. – deworde Jun 20 '14 at 10:02
  • 1
    @deworde I'm not sure I understand your argument of base classes being locked in. My argument was that omission of the final keyword in Java should *not* be a promise of contract stability. If that was the case, the responsibility lies always on the author of the derived class. Base class author may break things in derived systems, but that's not his problem. For me, *that* sounds like the pragmatic approach. PS: I've done most .NET work, so that's maybe why I don't see problems with the Java approach. – Nilzor Jun 20 '14 at 10:54
  • 1
    @Nilzor Standard situation:You want to make a change to a base class that will break between 1 and countless derived classes, which are in projects providing real world value. Good luck stating to their busy owners "well, you have to change" without gaining a rep as a "Framework: Delayer of Projects and Causer of Bugs" to the people who have to approve decisions. – deworde Jun 20 '14 at 11:47
  • 1
    @Nilzor Especially as, by not declaring it final, you've implicitly declared they were okay to do the thing that now prevents your change. And if you don't have a significant number of derived classes providing real world value by overriding your base class, why is it a base class in the first place? – deworde Jun 20 '14 at 12:06
  • @Nilzor End result: Your framework base class code becomes incredibly fragile and static, and you can't really make the necessary changes to stay current/efficient. Alternatively, your code gets a rep for instability and people won't use it at all. Simply put, you cannot ask busy project owners to delay their project in order to fix bugs that you are introducing, and expect anything better than a "go away" unless you're providing significant benefits to them, even if the original fault was theirs. Better to not let them do the wrong thing in the first place. – deworde Jun 20 '14 at 12:10
33

As Robert Harvey said, it's all in what you're used to. I find Java's lack of this flexibility odd.

That said, why have this in the first place? For the same reason that C# has public, internal (also the default, when nothing is specified), protected, protected internal, and private, while Java has just public, protected, package-private (the default), and private. It provides finer-grained control over the behavior of what you're coding, at the expense of having more terms and keywords to keep track of.

In the case of new vs. virtual+override, it goes something like this:

  • If you want to force subclasses to implement the method, use abstract, and override in the subclass.
  • If you want to provide functionality but allow the subclass to replace it, use virtual, and override in the subclass.
  • If you want to provide functionality which subclasses should never need to override, don't use anything.
    • If you then have a special case subclass which does need to behave differently, use new in the subclass.
    • If you want to ensure that no subclass can ever override the behavior, use sealed in the base class.
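For comparison, Java expresses the first, second, and last of these intents with a smaller vocabulary, sketched below (names invented): abstract forces the subclass to implement, a plain method is overridable by default (like C#'s virtual), and final forbids overriding (like sealed). Java has no counterpart of new for instance methods.

```java
// Java analogs of the choices above (illustrative names).
abstract class Processor {
    // "abstract + override": subclasses MUST implement this.
    abstract String parse(String input);

    // Plain method: overridable by default (like C# virtual).
    String normalize(String input) { return input.trim(); }

    // final: no subclass may override (like C# sealed / non-virtual).
    final String run(String input) { return parse(normalize(input)); }
}

class UpperCaseProcessor extends Processor {
    @Override
    String parse(String input) { return input.toUpperCase(); }
}
```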

For a real-world example: A project I worked on processed ecommerce orders from many different sources. There was a base OrderProcessor which had most of the logic, with certain abstract/virtual methods for each source's child class to override. This worked fine, up until we got a new source which had a completely different way of processing orders, such that we had to replace a core function. We had two choices at this point: 1) Add virtual to the base method, and override in the child; or 2) Add new to the child.

While either one could work, the first would make it very easy to override that particular method again in the future. It'd show up in auto-complete, for example. This was an exceptional case, however, so we chose to use new instead. That preserved the standard of "this method doesn't need to be overridden", while allowing for the special case where it did. It's a semantic difference which makes life easier.

Do note, however, that there is a behavior difference associated with this, not just a semantic difference. See this article for details. However, I've never run into a situation where I needed to take advantage of this behavior.
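
The behavior difference in a nutshell: an override is selected by the runtime type of the object, while a new method is selected by the compile-time type of the reference. A minimal sketch (class names invented):

```csharp
using System;

class Base
{
    public virtual string Overridden() => "Base.Overridden";
    public string Hidden() => "Base.Hidden";
}

class Derived : Base
{
    // Participates in virtual dispatch: chosen by the object's runtime type.
    public override string Overridden() => "Derived.Overridden";

    // Hides the base method: chosen by the reference's compile-time type.
    public new string Hidden() => "Derived.Hidden";
}

// Base b = new Derived();
// b.Overridden(); // "Derived.Overridden" (follows the object)
// b.Hidden();     // "Base.Hidden"       (follows the reference type)
```
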

Bobson
  • 4,638
  • 26
  • 24
  • 7
    I think that intentionally using `new` this way is a bug waiting to happen. If I call a method on some instance, I expect it to always do the same thing, even if I cast the instance to its base class. But that's not how `new`ed methods behave. – svick Jun 18 '14 at 15:57
  • @svick - Potentially, yes. In our particular scenario, that would never occur, but that's why I added the caveat. – Bobson Jun 18 '14 at 16:01
  • One excellent use of `new` is in WebViewPage in the MVC framework. But I have also been thrown by a bug involving `new` for hours, so I don't think it's an unreasonable question. – pdr Jun 18 '14 at 16:46
  • Maybe I'm wrong, but you can do (almost) all of this with Java. The thing is that the philosophy is the opposite. In Java, everything is overridable by default, but you can blacklist the methods you don't want overridden with the `final` keyword. – C.Champagne Jun 18 '14 at 16:47
  • @C.Champagne: You're still confusing `override` with `new`. They're not the same; not even close. – pdr Jun 18 '14 at 16:55
  • @C.Champagne - It's the difference between "All child classes should behave like the base class specifies" vs "All child classes can do what they want". One gives you much more control. – Bobson Jun 18 '14 at 17:02
  • @pdr I don't know C# very well but I think I understood the difference, and that is why I used "almost". That is for me the biggest difference between those languages. The thing is that it breaks polymorphism; that is why I don't feel enthusiastic. – C.Champagne Jun 18 '14 at 17:08
  • 1
    @C.Champagne: Any tool can be used badly and this one is a particularly sharp tool -- you can cut yourself easily. But that's not a reason to remove the tool from the toolbox and remove an option from a more talented API designer. – pdr Jun 18 '14 at 17:18
  • @svick: What would you think of defining a `new` function but tagging it `Obsolete("Use XX or YY depending upon requirements", true)` and refactoring code which expects the derived-class meaning to use a slightly-renamed alternative? That should avoid hidden bugs. – supercat Jun 18 '14 at 18:28
  • 1
    @svick Correct, which is why it's normally a compiler warning. But having the ability to "new" covers edge conditions (such as the one given), and even better, makes it *really obvious* that you're doing something weird when you come to diagnose the inevitable bug. "Why's this class and only this class buggy... ah-hah, a "new", let's go test where the SuperClass is used". – deworde Jun 20 '14 at 10:09
  • Another good use case for `new` is changing the signature of a property or method. A most useful example, which I used multiple times, was changing the type of a property to a subclass of the base class's property type. For example, hiding `IXXResolver Resolver { get; }` with `new DefaultResolver Resolver { get => base.Resolver as DefaultResolver; }`. It fits perfectly. It still respects the contract with the base class and also makes it easier to access type-specific members of a subclass (or interface implementation) if you already know the type of the child class. – Soroush Falahati Dec 11 '18 at 02:11
  • Also, it should be added that `new` doesn't necessarily hide the method or property of the base class. It works only when you know the real type of the object. If a part of the code accepts, for example, the base class as a parameter or value, it still calls the base class's method, even though it is hidden in the child class. `new` use cases are few and far between; however, when you need them, they are super useful. – Soroush Falahati Dec 11 '18 at 02:15
8

The design of Java is such that given any reference to an object, a call to a particular method name with particular parameter types, if it is allowed at all, will always invoke the same method. It's possible that implicit parameter-type conversions may be affected by the type of a reference, but once all such conversions have been resolved, the type of the reference is irrelevant.

This simplifies the runtime, but can cause some unfortunate problems. Suppose GrafBase does not implement void DrawParallelogram(int x1, int y1, int x2, int y2, int x3, int y3), but GrafDerived implements it as a public method which draws a parallelogram whose computed fourth point is opposite the first. Suppose further that a later version of GrafBase implements a public method with the same signature, but whose computed fourth point is opposite the second. Clients which expect a GrafBase but receive a reference to a GrafDerived will expect DrawParallelogram to compute the fourth point in the fashion of the new GrafBase method, but clients who had been using GrafDerived.DrawParallelogram before the base method was changed will expect the behavior which GrafDerived originally implemented.

In Java, there would be no way for the author of GrafDerived to make that class coexist with clients that use the new GrafBase.DrawParallelogram method (and may be unaware that GrafDerived even exists) without breaking compatibility with existing client code that used GrafDerived.DrawParallelogram before GrafBase defined it. Since DrawParallelogram can't tell what kind of client is invoking it, it must behave identically when invoked by both kinds of client code. Since the two kinds of client code have different expectations as to how it should behave, there's no way GrafDerived can avoid violating the legitimate expectations of at least one of them (i.e. breaking legitimate client code).

In C#, if GrafDerived is not recompiled, the runtime will assume that code which invokes the DrawParallelogram method upon references of type GrafDerived will be expecting the behavior GrafDerived.DrawParallelogram() had when it was last compiled, but code which invokes the method upon references of type GrafBase will be expecting GrafBase.DrawParallelogram (the behavior that was added). If GrafDerived is later recompiled in the presence of the enhanced GrafBase, the compiler will squawk until the programmer specifies either that his method is intended as a valid replacement for the inherited GrafBase member (override), or that its behavior should be tied to references of type GrafDerived without replacing the behavior seen through references of type GrafBase (new).
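
A compressed sketch of that scenario (the drawing is replaced by a string describing which fourth point gets computed; the method bodies are invented):

```csharp
using System;

class GrafBase
{
    // Added in a later version of the base class:
    // computes the fourth point opposite the SECOND point.
    public virtual string DrawParallelogram() => "fourth point opposite point 2";
}

class GrafDerived : GrafBase
{
    // Existed before the base method did:
    // computes the fourth point opposite the FIRST point.
    // Marking it 'new' keeps the old behavior for clients holding
    // GrafDerived references, without replacing what GrafBase clients see.
    public new string DrawParallelogram() => "fourth point opposite point 1";
}
```

Clients that only know about GrafBase get the new base behavior, while pre-existing GrafDerived clients keep the behavior they were written against; neither group is broken.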

One might reasonably argue that having a method of GrafDerived do something different from a member of GrafBase which has the same signature would indicate bad design, and as such shouldn't be supported. Unfortunately, since the author of a base type has no way of knowing what methods might be added to derived types, nor vice versa, the situation where base-class and derived-class clients have different expectations for like-named methods is essentially unavoidable unless nobody's allowed to add any name which someone else might also add. The question is not whether such a name duplication should happen, but rather how to minimize the harm when it does.

supercat
  • 8,335
  • 22
  • 28
  • 2
    I think there's a good answer in here, but it's being lost in the wall of text. Please break this up into a few more paragraphs. – Bobson Jun 18 '14 at 17:02
  • What is `DerivedGraphics`? A class using `GrafDerived`? – C.Champagne Jun 18 '14 at 17:12
  • @C.Champagne: I meant `GrafDerived`. I started out using `DerivedGraphics`, but felt it was a bit long. Even `GrafDerived` is still a bit long, but didn't know how best to name graphics-renderer types which should have a clear base/derived relation. – supercat Jun 18 '14 at 18:22
  • 1
    @Bobson: Better? – supercat Jun 19 '14 at 16:12
6

Standard situation:

You are the owner of a base class that is used by multiple projects. You want to make a change to said base class that will break anywhere from one to countless derived classes, which live in projects providing real-world value (a framework provides value at best at one remove; no Real Human Being wants a Framework, they want the thing running on the Framework). Good luck saying to the busy owners of those derived classes, "well, you have to change; you shouldn't have overridden that method", without gaining a rep as "Framework: Delayer of Projects and Causer of Bugs" among the people who have to approve decisions.

Especially as, by not declaring it non-overridable, you've implicitly declared they were okay to do the thing that now prevents your change.

And if you don't have a significant number of derived classes providing real world value by overriding your base class, why is it a base class in the first place? Hope is a powerful motivator, but also a very good way to end up with unreferenced code.

End result: Your framework base class code becomes incredibly fragile and static, and you can't really make the necessary changes to stay current/efficient. Alternatively, your framework gets a rep for instability (derived classes keep breaking) and people won't use it at all, since the main reason to use a framework is to make coding faster and more reliable.

Simply put, you cannot ask busy project owners to delay their project in order to fix bugs that you are introducing, and expect anything better than a "go away" unless you're providing significant benefits to them, even if the original "fault" was theirs, which is at best arguable.

Better to not let them do the wrong thing in the first place, which is where "non-virtual by default" comes in. And when someone comes to you with a very clear reason why they need this particular method to be overridable, and why it should be safe, you can "unlock" it without risking breaking anyone else's code.
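
The "unlock" step might look like this (a hypothetical framework class; the version history is shown in comments):

```csharp
using System;

// v1 of the framework: Validate is not virtual, so no consumer can
// override it; at worst they can hide it, and the compiler warns them.
//
// class OrderProcessor
// {
//     public string Validate() => "standard validation";
// }

// v2: after one consumer makes a clear, safe case for replacing the
// behavior, exactly this one method is opened up. Consumers that
// never touch Validate are unaffected by the change.
class OrderProcessor
{
    public virtual string Validate() => "standard validation";
}

class StrictOrderProcessor : OrderProcessor
{
    public override string Validate()
        => "standard validation plus fraud check";
}
```
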

deworde
  • 1,892
  • 14
  • 21
0

Defaulting to non-virtual assumes that the base class developer is perfect. In my experience developers are not perfect. If the developer of a base class cannot imagine a use case where a method could be overridden or forgets to add virtual then I cannot take advantage of polymorphism when extending the base class without modifying the base class. In the real world modifying the base class is often not an option.

In C# the base class developer does not trust the subclass developer. In Java the subclass developer does not trust the base class developer. The subclass developer is responsible for the subclass and should (imho) be given the power to extend the base class as they see fit (barring explicit denial, and in java they can even get this wrong).

It's a fundamental property of the language definition. It isn't right or wrong, it is what it is and cannot change.

clish
  • 11
  • 2
    this doesn't seem to offer anything substantial over points made and explained in prior 5 answers – gnat Nov 09 '14 at 06:57