7

Sometimes I see an object instantiated this way:

ICustomer oCustomer = new Customer();

Obviously, the Customer class implements the ICustomer interface in this example. Are there any advantages to instantiating an object that way? From my understanding, if a class implements an interface, it is already following the contract. What is the name of that practice, and is there a disadvantage to instantiating an object the "normal" way (what I usually do)? I know there are already some existing questions out there, but they don't really highlight the advantages and disadvantages.

Interface instantiation vs. class instantiation:

Customer oCustomer = new Customer();
Howls Hagrid
  • 803
  • 1
  • 8
  • 13
  • 1
    It's not really about advantages or disadvantages. I wouldn't even say that there are disadvantages, since all you do is restrict yourself to a contract and if that contract isn't enough then.. well.. it wasn't the right contract in the first place. Another post on the subject: http://stackoverflow.com/questions/1484445/why-are-variables-declared-with-their-interface-name-in-java – Jeroen Vannevel Apr 02 '15 at 00:24
  • This seems to be a clear duplicate of that question, but you're right that the current answer there is not great. Let's see if we can't do better. – Telastyn Apr 02 '15 at 00:24
  • Most likely you are working with the type of developer that parallels many classes with interfaces, whether or not they are needed. I recommend you decide rationally if an interface is truly needed, when writing your own code. On the other hand, you may be looking at an interface that is truly needed. – Frank Hileman Apr 02 '15 at 01:39
  • 1
    Naming that variable `oCustomer` is way, way out of fashion, a throwback to "Hungarian" notation (Microsoft, Charles Simonyi). The original purpose of Hungarian notation was to indicate the type of a variable, in the C language, since C has such a weakly enforced type system (virtually non-existent, really) and the tools were so much more primitive. The original C# language specification at Microsoft actually recommended explicitly against using Hungarian notation. I used to be a huge advocate, now it seems kind of obnoxious when I encounter it. Just an observation. ;-) – Craig Tullis Apr 02 '15 at 07:37
  • 1
    @Craig: [The way I heard it](http://www.joelonsoftware.com/articles/Wrong.html), Hungarian notation was never meant to indicate type. It was to indicate type-like information like the units of a measurement or whether coordinates were relative to one thing or another. It only started being used for type once people completely misunderstood it and screwed the whole thing up. – user2357112 Apr 02 '15 at 17:56
  • 1
    @user2357112 I do sort of remember the history of the thing. I'm going to argue that using just an "o" as a prefix is actually pretty lame, though. It's *so* cargo cult... How many people are surprised to discover that an instance of `Customer` is an object (show of hands?). None. There's my point. If the prefix actually told you something more useful, like the old `lpsz` prefix in C which told you that you were (ostensibly) dealing with a long pointer to a null terminated plain old C string (as opposed to something like a BSTR), then the prefix would serve a purpose. :-) – Craig Tullis Apr 02 '15 at 18:08
  • 1
@user2357112 Also, good call invoking Spolsky. But do note the date on the article (10 years ago). Having said that, I PROFOUNDLY AGREE with Joel on this point: "4. You deliberately architect your code in such a way that your nose for uncleanliness makes your code more likely to be correct." I'm just no longer convinced that Hungarian notation plays a direct role in that with modern, strongly-typed languages. Just like I'm no longer convinced that using `var` instead of specific type names in variable declarations is an abomination. – Craig Tullis Apr 02 '15 at 18:09
@Craig I guess I am so used to coding that way... we have about 100,000 lines of legacy code with Hungarian notation :) – Howls Hagrid Apr 03 '15 at 23:23
  • Possible duplicate of [Why define a Java object using interface (e.g. Map) rather than implementation (HashMap)](https://softwareengineering.stackexchange.com/questions/225674/why-define-a-java-object-using-interface-e-g-map-rather-than-implementation) – gnat Jun 02 '17 at 05:50

5 Answers

18

There is one, very important distinction that I think that you're overlooking.

The code you provided does three things: it declares a variable, instantiates an object, and initializes that variable with that object.

There is no interface implementation here. Customer needs to implement ICustomer (or do one or two other tricks) for that to compile successfully, but there's no actual implementation going on here.

Now that the terminology is out of the way, let's focus on the meat of the problem: is one better?

This sort of thing falls into the realm of program to an interface. When you declare your variable, you need to give it a type. That type should define the sort of things the variable can do. It is a contract you make with future programmers (or yourself) that oCustomer can only do these sorts of things. You should decide this contract regardless of the source you use to initialize the variable.

If this variable only needs the interface, then using the interface as its type clearly defines what it can (and can't) do. If this variable really needs to be a full Customer, then double check that you really need that. If you do, then go ahead and make it a Customer. This provides clear communication to future programmers (or you) that the variable needs to be of that type. And if the variable is just some placeholder for your initializer, and its type doesn't really matter, then use var.

The benefits of programming to an interface are described in depth in the link above, but they boil down to allowing you to be flexible. Software inevitably changes. By coding to an interface, you allow this code to remain blissfully unaware when Customer changes - or if tomorrow you need to initialize oCustomer with a VipCustomer instead.
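For example, a minimal sketch of that swap (the members and the VipCustomer shape are invented here for illustration, not taken from the question):

public interface ICustomer
{
    string Name { get; }
}

public class Customer : ICustomer
{
    public string Name { get; set; }
}

public class VipCustomer : ICustomer
{
    public string Name { get; set; }
    public decimal Discount { get; set; }   // extra member, invisible through ICustomer
}

public static class Demo
{
    public static void Main()
    {
        // Only this line changes when the concrete type changes;
        // everything written against ICustomer keeps compiling.
        ICustomer oCustomer = new VipCustomer { Name = "Alice" };
        System.Console.WriteLine(oCustomer.Name);
    }
}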

This sort of thing applies in many places throughout programming, not just variable declarations.

Telastyn
  • 108,850
  • 29
  • 239
  • 365
  • Please do not call an interface, as defined in common languages, a contract. It is only a contract in a syntactic sense, in that you must implement the methods in the interface to compile. But there is no semantics in these "contracts", as we would find in an environment that supports the specification and verification of invariants, preconditions, and postconditions. – Frank Hileman Apr 02 '15 at 01:37
  • 4
    @FrankHileman - ... yes, you are correct, but I'm stuck since an `interface` as defined in common languages is only a subset of what "interface" in "code to an interface" means. People who need an answer to this question are unlikely to encounter programmatic contracts, so no confusion. If you have a better term, I'd be happy to adjust things. – Telastyn Apr 02 '15 at 01:44
  • well you got to the heart of the problem, "code to an interface". The code may just throw exceptions, so it does not mean much. I just use the word "interface" instead of "contract". They are simply collections of method/property/event signatures in c#, for example. Their main use is to bypass restrictions on multiple inheritance, and to provide third-party extensibility. Sorry I did not mean to get on your case in particular. But a contract really is something different. – Frank Hileman Apr 02 '15 at 01:48
  • 8
    @FrankHileman - an "interface" in "code to an interface" is _not_ just a bundle of method/property/event signatures. The same advice applies to the behavior of a protocol or the exposed endpoints of a service. The interface is the public surface of some nebulous thing you interact with - ***not*** an `interface`. It's not just the programmatic hooks; after all, you could just cast the variable to its concrete type and claim that is part of its programmatic interface. It's the implicit contract the publisher provides: "if you do this stuff, we (probably) won't break it." – Telastyn Apr 02 '15 at 01:53
  • As an aside, there's a tension between reducing typing and errors by utilizing type-deduction, and using base-types (interface or not) to explicitly constrain the exposed interface programmed to for substitutability. – Deduplicator Apr 02 '15 at 14:56
  • The big difference between this case and the usual "program to an interface" is that any bit of code with `new Customer` can't have the concrete class abstracted away from it. If I write a method that takes an `ICustomer` as a parameter, that method can be written to the interface abstraction. But if you do `ICustomer customer = new Customer();` it seems like you're only pretending to have that extra abstraction where you really don't. cont... – Ben Aaronson Apr 02 '15 at 15:04
  • ...This is exactly where you *should* be able to access any public members which are on the class but not the interface. If your design doesn't permit you to do that, that's a smell, and this is a half-hearted way of hiding it. – Ben Aaronson Apr 02 '15 at 15:05
  • @Telastyn You are speaking of an "implicit contract," whereas I am speaking about the true difference between a syntactic and semantic contract. There is no enforcement of this implicit contract, therefore it is a contract by convention only, which can be achieved even in languages with no "interface" abstraction, even in dynamic languages. I state again, an interface, as defined in popular languages, is not a contract in the semantic sense, except by convention. – Frank Hileman Apr 02 '15 at 16:23
  • @FrankHileman Most non-trivial contracts can only be "enforced by convention", yet the correctness of the program depends on them. – Doval Apr 02 '15 at 19:46
10

As long as your code looks as simple as this

void ExampleFunc()
{
   ICustomer oCustomer = new Customer();
   oCustomer.Method1OfICustomer();
   oCustomer.Method2OfICustomer();
   // ...
}

there is no semantic difference - you can exchange "ICustomer" for "Customer", and the behaviour will stay identical. In this example, however, it can already fulfill a documentary purpose. The first line may express the programmer's design decision to avoid adding calls like

   oCustomer.MethodNotPartOfICustomer();

in the future, but in fact, when this kind of call is needed later on, one could also change the "new" line, or add the missing method to the ICustomer interface later.

In fact, the use of interfaces, as the term implies, will start to make more sense when there is some real "interfacing" involved - interfacing between functions, classes, components. For example, the function

void ExampleFunc(ICustomer oCustomer)
{
   oCustomer.Method1OfICustomer();
   oCustomer.Method2OfICustomer();
   // ...
}

is probably more generic than

void ExampleFunc(Customer oCustomer)
{
   oCustomer.Method1OfICustomer();
   oCustomer.Method2OfICustomer();
   // ...
}

since it will work with any class implementing the ICustomer interface, not just Customer. For example, for testing purposes one could pass a "CustomerMock" into the function - something you cannot do with the second variant.
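A sketch of what such a mock might look like, reusing the hypothetical Method1OfICustomer/Method2OfICustomer members from the snippets above:

public interface ICustomer
{
    void Method1OfICustomer();
    void Method2OfICustomer();
}

public class CustomerMock : ICustomer
{
    public int CallCount;   // e.g. record calls for test assertions

    public void Method1OfICustomer() { CallCount++; }
    public void Method2OfICustomer() { CallCount++; }
}

public static class Tests
{
    static void ExampleFunc(ICustomer oCustomer)
    {
        oCustomer.Method1OfICustomer();
        oCustomer.Method2OfICustomer();
    }

    public static void Main()
    {
        var mock = new CustomerMock();
        ExampleFunc(mock);                        // works: the parameter is ICustomer
        System.Console.WriteLine(mock.CallCount); // 2
        // The Customer-typed variant would reject the mock at compile time.
    }
}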

Doc Brown
  • 199,015
  • 33
  • 367
  • 565
2

How you create the object is not better or worse in any way.

Let's think about it:

You either set objects or get objects (i.e., pass them into a constructor or method, or return them from a method).

In either case, as long as you are providing an interface for setting and getting, you can new up your object any way you want. Either way you will be able to leverage polymorphism, abstraction and encapsulation - and that's the whole point of the "program to an interface, not to an implementation" principle, which is what your core question is about. A brief sketch of that idea follows.
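(A minimal sketch; OrderProcessor and the member names are invented for illustration.)

public interface ICustomer
{
    string Name { get; }
}

public class Customer : ICustomer
{
    public string Name { get; set; }
}

public class OrderProcessor
{
    private readonly ICustomer _customer;

    // "Setting": the constructor accepts any ICustomer implementation.
    public OrderProcessor(ICustomer customer)
    {
        _customer = customer;
    }

    // "Getting": callers receive the abstraction, not the concrete class.
    public ICustomer Customer
    {
        get { return _customer; }
    }
}

public static class Demo
{
    public static void Main()
    {
        // How you "new up" the concrete object is irrelevant to this boundary:
        var processor = new OrderProcessor(new Customer { Name = "Alice" });
        System.Console.WriteLine(processor.Customer.Name);
    }
}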

AvetisCodes
  • 1,544
  • 3
  • 14
  • 26
1

In C# there isn't much difference.

The difference lies in what you can call on the reference you created.

This code

ICustomer customer = new Customer();

creates an interface reference named customer to the instance created by the new Customer() call.

This code

Customer customer = new Customer();

creates an object reference named customer to the instance created by the new Customer() call.

While both references point to an instance of the Customer class, they are different. The interface reference only offers what is defined in the ICustomer interface, regardless of what else the Customer class makes available. Using the object reference you can use everything the Customer class has to offer.

Using an interface reference, you would have to cast it back to the implementing class in order to use the "additional" stuff offered by the class - something you really do not want to do. After all, when you are passed an interface reference, you can't be certain which class was used to instantiate the object referenced by it. There really is only one exception to this rule: when you just created the interface reference yourself and need to use methods that you do not want to put in the interface because they are only needed during construction/building/initialization of the instance. And even this should be a rare case.
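To illustrate (a sketch; GenerateInternalId is a made-up class-only member):

public interface ICustomer
{
    void Method1OfICustomer();
}

public class Customer : ICustomer
{
    public void Method1OfICustomer() { }
    public void GenerateInternalId() { }   // class-only member, not on ICustomer
}

public static class Demo
{
    public static void Main()
    {
        ICustomer customer = new Customer();
        customer.Method1OfICustomer();      // fine: part of the interface
        // customer.GenerateInternalId();   // compile error: not on ICustomer

        // The cast you usually want to avoid - it couples the code
        // to the concrete class again:
        ((Customer)customer).GenerateInternalId();
    }
}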

For C# that's about it.

For languages in which the developer is responsible for explicit lifetime management of the objects that are instantiated (in other words, having to free each object explicitly), there can be a huge difference between the two ways of referencing the instantiated object.

In Delphi for example, interfaced objects (instances of a class implementing an interface) are reference counted by default and are freed "automatically" when their reference count drops to zero. The reference count is incremented and decremented by _AddRef and _Release calls that are added to your code automatically by the compiler.

In Delphi you really do not ever want to create an object reference to an instance of an interfaced class. Doing so means the compiler won't insert the _AddRef call that increments the reference count to 1 upon instantiation. Which means the reference count is one too low. Which means the instance will be freed too soon.

That's why the advice for languages such as Delphi is never to mix interface and object references, and to always use interface references for interfaced classes.

Marjan Venema
  • 8,151
  • 3
  • 32
  • 35
-6

This question goes to the very heart of what Object-Orientation means. Simply put, in a language like Java, C# or Visual Basic.NET, classes (and structs) define Abstract Data Types and interfaces define Objects. As soon as you have a class or struct as a type (i.e. the type of a local variable, field, property or a method parameter, a method return type, a type argument to a generic class or interface, the target of a cast operator or the argument of an instanceof operator), you are not doing object-oriented programming.

The only thing you can use for a type is an interface; you cannot use classes or structs (and certainly not primitives, in a language like Java which has them). The only place where a class or struct is allowed is right next to the new operator: classes and structs are merely factories for objects.
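Under that discipline, code would look something like this sketch (the names are illustrative, not from the question):

public interface ICustomer
{
    string Name { get; }
}

// The class is only a factory for objects; its name never appears as a type.
public class Customer : ICustomer
{
    private readonly string name;
    public Customer(string name) { this.name = name; }
    public string Name { get { return name; } }
}

public static class CustomerFactory
{
    // Return type, parameter types, fields etc. are all interfaces (or
    // built-ins); "Customer" occurs only next to "new".
    public static ICustomer Create(string name)
    {
        return new Customer(name);
    }
}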

This is explained much better than I could ever hope to achieve in On Understanding Data Abstraction, Revisited by William R. Cook. It uses Java for the examples, but it applies just as well to C# or any other language.

So, the simple answer is: if you weren't instantiating an object this way, it wouldn't be an object, but an instance of an abstract data type.

Jörg W Mittag
  • 101,921
  • 24
  • 218
  • 318
  • @downvoter: does my edit increase the usefulness of my answer? – Jörg W Mittag Apr 02 '15 at 08:07
  • 5
    I understand what you are trying to say with this answer and agree, but even if we should manage to walk through all that theoretical hair-splitting in the linked paper, this answer seems contradictory: interfaces are objects, but classes are abstract types? But types must be interfaces (obviously not: just see the code in the question, it has a class as a type right there), while classes are just object factories? What? By what definition am I not doing OOP if I'm using classes and objects? If this answer wants to be useful, it needs to be much more grounded in practical software development. – amon Apr 02 '15 at 10:08