This is more a philosophical question about the .NET platform, though it may be relevant to other languages too. I do a lot of unit testing, and I often struggle with this, especially when using third-party components. In .NET, a great deal rests on the component designer's choice of which methods to make virtual. On one side there is the component's intended usage (what makes sense to be virtual); on the other, its mockability. You can use Shims to mock third-party components, but that often leads to bad design and complexity. As far as I remember, the discussion about making all methods virtual in .NET (Java has had this from the beginning) centred on performance. But is that still an issue? Why aren't all methods in .NET virtual, or why doesn't each class have at least one interface?
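To make the dilemma concrete, here is a minimal sketch in Java terms (all names invented): a method the author has locked down cannot be replaced by a hand-rolled test double. In Java the lock is `final`; in .NET it is simply the absence of `virtual`, which is the default the question is about.

```java
// Hypothetical third-party component; in .NET terms, fetchPrice() is non-virtual.
class PriceService {
    final int fetchPrice(String sku) {  // 'final': cannot be overridden by a test double
        return sku.length() * 10;       // stands in for a slow network call
    }
}

// A subclass cannot replace fetchPrice(), so hand-rolled mocking is impossible
// without shims or a wrapper interface -- exactly the struggle described above.
public class Demo {
    public static void main(String[] args) {
        System.out.println(new PriceService().fetchPrice("AB")); // prints 20
    }
}
```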
-
Ah yeah, because somebody downvoted me after a couple of minutes (my first time), I would like to know why. – Anton Kalcik May 31 '16 at 08:44
-
I'm guessing the person who voted to close this question as "too broad" also down-voted you. – David Arno May 31 '16 at 08:46
-
_"Programmers Stack Exchange is a question and answer site for professional programmers interested in conceptual questions about software development"_ Did I misunderstand something? – Anton Kalcik May 31 '16 at 08:47
-
I tend to agree that this question is too broad. "Why doesn't each class have a virtual?" Why should it? You mention testing. Maybe you could improve the question by changing it to be "why does this class (most annoying .net class) not have these methods marked virtual?" You may however find out that the answer is "Oh yeah. They didn't really anticipate the explosion of TDD when designing .NET 1.0" – perfectionist May 31 '16 at 09:03
-
@perfectionist I agree, but it was my intention to make this question broad, because it is a conceptual question about the .NET platform. I only want to know opinions, not a concrete answer. – Anton Kalcik May 31 '16 at 09:07
-
You say that using Shims can lead to bad design and complexity. I'd like to see a Shims example that would be cleaner if you could inherit or override some method you cannot, and how that would look, even though it won't compile. – weston May 31 '16 at 11:54
-
If you want to make it broad because you are only interested in opinions, then it is off-topic for *two* reasons: *too broad* and *opinion-based*. – Jörg W Mittag May 31 '16 at 12:24
-
I'm interested to know how we overcome this issue, which tends to make us write less testable software components. – Anton Kalcik May 31 '16 at 12:31
-
@AntonKalcik: Two words: *functional programming.* – Robert Harvey May 31 '16 at 16:15
2 Answers
As Anders says, it's partly about performance and partly about locking down poorly-thought-out designs to reduce the scope of trouble caused later by people blindly inheriting things that were not designed to be inherited.
Performance seems obvious - while the cost of most virtual methods won't be noticed on modern hardware, a virtual getter might cause quite a noticeable hit, even on modern hardware that relies more and more on easily predicted instructions in the CPU pipeline.
Now the reason to make everything virtual by default seems to be driven by one thing only: unit testing frameworks. It seems to me that this is a poor reason, where we are changing code to suit the framework rather than designing it properly.
Perhaps the issue is with the frameworks used here: they should improve to allow replacement of functions at runtime with injected shims, rather than requiring us to build code that allows this (with all the hacks to get round private methods as well as non-virtual ones, not to mention static and 3rd-party functions).
So the reason methods are not virtual by default is as Anders said, and any desire to make them virtual to suit some unit-test tool is putting the cart before the horse: proper design beats artificial constraints.
Now you can mitigate all of this by using interfaces, or by reworking your entire codebase to use components (or microservices) that communicate via message passing instead of directly wired method calls. If you enlarge the surface of a unit for testing to be a component, and that component is entirely self-contained, you don't really need to screw with it to unit test it.
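A minimal sketch of that last point (invented names): if the "unit" is a self-contained component, the test just fires input at it and inspects the output, with nothing to mock and no virtual members required.

```java
// The component has no wired-in dependencies to replace, so it can be
// tested as a black box: input in, output out.
class TotalsComponent {
    static int total(int[] lineItems) {
        int sum = 0;
        for (int item : lineItems) sum += item;
        return sum;
    }
}

public class BlackBoxTest {
    public static void main(String[] args) {
        int result = TotalsComponent.total(new int[]{5, 10, 15});
        System.out.println(result == 30 ? "pass" : "fail"); // prints "pass"
    }
}
```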

-
@gbjbaanb I didn't understand the last sentence: _"If you enlarge the surface of a unit for testing to be a component, and that component is entirely self-contained, you don't really need to screw with it to unit test it."_ – Anton Kalcik May 31 '16 at 20:49
-
@AntonKalcik I mean, if you have a black box then it shouldn't have dependencies that need mocking. I.e., you test it by firing input at it and seeing what comes out. If your box is small and you've designed it to be decoupled from other services, then this can happen easily. The trouble is that a class is not easily decouplable from other classes, and thus requires mocking. If you think of a unit as a component (a dll maybe, or a microservice) rather than a class, you can test in this way. – gbjbaanb Jun 01 '16 at 07:52
-
Regarding performance: the CPU can easily predict virtual method call (if it's always the same one). The big problem is that it's much harder for the JIT compiler to inline virtual method calls, which then prevents further optimizations. – svick Jun 09 '16 at 13:49
There are two elements to your question, so I'll try to address them in turn:
Why aren't .NET methods virtual by default?
Inheritance, in practice, has many problems. So if it is to be used as part of a design, then it should be carefully considered. By making methods non-virtual by default, the theory is that it encourages the developer to think about inheritance when designing a class. Whether that works in practice is another matter of course.
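For contrast, here is what the Java default looks like (invented names): every instance method is overridable unless the author opts out with `final`, which is exactly the opposite of C#'s opt-in `virtual`.

```java
// In Java, instance methods are virtual by default; 'final' opts out,
// restoring the C# default of a non-overridable method.
class Component {
    String greet() { return "real"; }          // overridable, no keyword needed
    final String version() { return "1.0"; }   // locked down by the designer
}

class FakeComponent extends Component {
    @Override String greet() { return "fake"; } // works without 'virtual' on the base
}

public class VirtualByDefault {
    public static void main(String[] args) {
        Component c = new FakeComponent();
        System.out.println(c.greet());          // dynamic dispatch picks the override
    }
}
```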
Why don't all classes implement an interface?
Poor design is the simple answer. Despite DI and TDD being commonly recognised as good practice for a very long time, sadly lots of folk still don't design their systems with them in mind.
The combination of classes not being fully inheritable and not being easily mocked does create a "perfect storm" of hard-to-test code. Until we reach the day though when everyone uses DI and the design-to-interface principle, there's not much that can be done to alleviate the pain.
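A minimal sketch of the design-to-interface principle (names invented): the collaborator is declared as an interface and injected through the constructor, so a hand-written stub replaces it in tests with no virtual methods and no shims.

```java
// The dependency is an abstraction, not a concrete class.
interface MessageSender {
    boolean send(String text);
}

class Notifier {
    private final MessageSender sender;       // injected collaborator
    Notifier(MessageSender sender) { this.sender = sender; }

    String notifyUser(String text) {
        return sender.send(text) ? "sent" : "failed";
    }
}

public class DiExample {
    public static void main(String[] args) {
        MessageSender stub = text -> true;    // test double, no framework needed
        System.out.println(new Notifier(stub).notifyUser("hi")); // prints "sent"
    }
}
```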

-
Perfect answer. My question is whether the language itself or its features can support this shift to the design-to-interface principle. – Anton Kalcik May 31 '16 at 09:53
-
"Poor design is the simple answer" - I tend to agree, but there are exceptions. See data transfer objects or record objects. For simple POCO/POJO/insert language here objects, it does not make sense for them to implement an `interface` just because. What value does having an `interface` for what is essentially a glorified tuple have? (Although maybe using a POJO/POCO itself is a design flaw) – Dan May 31 '16 at 09:55
-
@AntonKalcik, the C# team adopt a very conservative "never introduce a breaking change" approach to the language, and making interfaces mandatory would definitely be a breaking change. So I suspect you won't be seeing that added to C#. – David Arno May 31 '16 at 09:56
-
@DanPantry, simple data objects rarely need mocking though, so don't normally need interfaces. – David Arno May 31 '16 at 09:57
-
I agree, @DavidArno, but that's my point; not every class has interfaces *because* of situations like that. Unless a specific type was used for DTOs (`struct`?) then I don't see the point in automagically implementing an interface. That said, with Flowtype/TypeScript in JavaScript we have structural (rather than nominal) typing, where every class/function/object implicitly has an interface – Dan May 31 '16 at 09:58
-
But what about making all methods virtual? As @DavidArno pointed out, _"By making methods non-virtual by default, the theory is that it encourages the developer to think about inheritance when designing a class. **Whether that works in practice is another matter of course.**"_ I think in practice it doesn't work most of the time, and it results in a lot of work wrapping components so that they are _mockable_. – Anton Kalcik May 31 '16 at 10:46
-
Good point by Anders Hejlsberg http://www.artima.com/intv/nonvirtual.html – Anton Kalcik May 31 '16 at 12:26
-
In C♯, interfaces define Objects, classes define Abstract Data Types. (It's a common misconception that classes are what you use in C♯ for writing object-oriented code. Quite the opposite: OO code in C♯ can *never* use classes or structs as types, only interfaces. As soon as you use classes as types, you are no longer doing OO.) So, I wouldn't call not using interfaces a design mistake. It's a design choice: ADTs and Objects have dual strengths and weaknesses. You should choose what makes more sense in your particular domain. – Jörg W Mittag May 31 '16 at 12:28
-
@JörgWMittag: `As soon as you use classes as types, you are no longer doing OO` -- Classes *are* types. What?? Normal class inheritance *is* OO, no matter what you say. You don't get to decide it's not OO anymore just because DI is all the rage now. – Robert Harvey May 31 '16 at 16:12
-
@AntonKalcik: Thanks for the article, though it does argue the case against systematic virtual. +1 for keeping an open mind. – Newtopian May 31 '16 at 16:22
-
@RobertHarvey Jörg means using classes/structs as argument/return types. He's referring to [On Understanding Data Abstraction, Revisited](https://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf). The tl;dr of the paper is that OO abstractions rely only on compatible interfaces (i.e. `interface`) rather than specific types (i.e. `class` or `struct`). This is roughly in line with Alan Kay's definition involving "sending messages". The FP analogy would be using records of functions or typeclasses to create polymorphic functions instead of accepting/returning only specific types. – Doval May 31 '16 at 22:01