69

Uncle Bob's chapter on names in Clean Code recommends that you avoid encodings in names, mainly regarding Hungarian notation. He also specifically mentions removing the I prefix from interfaces, but doesn't show examples of this.

Let's assume the following:

  • Interface usage is mainly to achieve testability through dependency injection
  • In many cases, this leads to having a single interface with a single implementer

So, for example, what should these two be named? Parser and ConcreteParser? Parser and ParserImplementation?

public interface IParser {
    string Parse(string content);
    string Parse(FileInfo path);
}

public class Parser : IParser {
    // Implementations
}

Or should I ignore this suggestion in single-implementation cases like this?

Zac Crites
Vinko Vrsalovic
  • 1
    @MikeDunlavey Uncle Bob's Clean Code is from 2009. – Vinko Vrsalovic Aug 23 '16 at 12:43
  • 30
    That doesn't make it any less of a religion. You gotta look for reasons, not just somebody X says Y. – Mike Dunlavey Aug 23 '16 at 12:44
  • 35
    @MikeDunlavey And that's the reason for the question! – Vinko Vrsalovic Aug 23 '16 at 12:47
  • 14
    If there's an existing body of code (that you aren't going to rewrite), then stick with whatever convention it uses. If new code, use whatever the majority of the people that are going to be working on it are happiest with / used to. Only if neither the above apply should it become a philosophical question. – TripeHound Aug 23 '16 at 12:48
  • 9
    @TripeHound Majority rule is not always the best option. If there are reasons some things are better than others, a single person can convince the rest of the team by making a case based on those reasons. Just blindly submitting to the majority without thinking things through leads to stagnation. I see from the answers that for this concrete case there's no good reason to remove the Is, but that does not invalidate having asked in the first place. – Vinko Vrsalovic Aug 23 '16 at 12:52
  • 2
    Agreed. Every so often somebody steps on a soapbox and declares that such-and-so popular thing is bad. I've even done it, but at least I've explained why, so people can see for themselves if they buy it or not. – Mike Dunlavey Aug 23 '16 at 12:53
  • 1
    @MikeDunlavey To be fair to Uncle Bob, he does explain the reasons for avoiding Hungarian notation; it's just the "I" removal from interfaces that he neglected to solidly explain, in my opinion. – Vinko Vrsalovic Aug 23 '16 at 13:00
  • 35
    "Uncle Bob's" book focuses on JAVA, not C#. Most paradigms are the same in C#, but some naming conventions differ (also look at lowerCamelCase function names in Java). When you write in C#, use C# naming conventions. But anyway, follow the major point of his chapter on naming: readable names which communicate the meaning of the item! – Bernhard Hiller Aug 23 '16 at 13:20
  • 3
    Easy way to solve your confusion. Throw that book in the trash and talk with your team. – Matthew Whited Aug 23 '16 at 13:24
  • I have the opinion that the language should enforce the naming convention – Bradley Thomas Aug 23 '16 at 13:29
  • 1
    That's not possible without limiting what you can name classes and variables. – Matthew Whited Aug 23 '16 at 13:30
  • And as a note, Hungarian notation is often used when naming controls. (It's not as popular with declarative UI models such as WPF but it is still very common in places like WinForms.) – Matthew Whited Aug 23 '16 at 13:32
  • In the land of PHP, interfaces have been traditionally identified with a suffix of Interface. There is one reasonably successful standards group which has published three standards using the suffix convention. But now they have been side tracked over what is the "right" way for new standards. Basically, you can read this exact same discussion here: https://groups.google.com/forum/#!topic/php-fig/Potawlu2CrQ Some things never change. – Cerad Aug 23 '16 at 14:54
  • 2
    Actually, in Android programming (which is Java, too) the IInterface convention is just as strongly established as it is in C#. I also vote for ignoring the book on this subject (too). – Gábor Aug 23 '16 at 15:25
  • 2
    @MatthewWhited The irony being that most such usage is systems Hungarian when apps Hungarian would likely be better. – JAB Aug 23 '16 at 16:48
  • Not really. In a declarative UI model part of the intent is to separate data type from presentation. – Matthew Whited Aug 23 '16 at 17:45
  • @VinkoVrsalovic: FWIW, here's an example of [*my soapbox*](http://programmers.stackexchange.com/a/329124/2429). – Mike Dunlavey Aug 23 '16 at 20:00
  • 3
    I wildly disagree with the assumption that interfaces are mainly used to achieve testability through dependency injection. – axl Aug 24 '16 at 05:03
  • 2
    A C# developer who gets confused because he sees an interface name that doesn't start with "I" is a person who doesn't need to be a C# developer. – user1172763 Aug 24 '16 at 14:42
  • Also see [What's the reasoning behind the “I” prefix naming convention for interfaces in .NET?](https://softwareengineering.stackexchange.com/questions/108443/whats-the-reasoning-behind-the-i-prefix-naming-convention-for-interfaces-in) and [Should interface names begin with an “I” prefix?](https://softwareengineering.stackexchange.com/questions/117348/should-interface-names-begin-with-an-i-prefix/). – Franklin Yu Jul 31 '17 at 14:46

8 Answers

189

Whilst many, including "Uncle Bob", advise against using I as a prefix for interfaces, doing so is a well-established convention in C#. In general terms, such prefixes should be avoided. But if you are writing C#, you really should follow that language's conventions and use it. Not doing so will cause huge confusion for anyone else familiar with C# who tries to read your code.

Bryan Oakley
David Arno
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/44434/discussion-on-answer-by-david-arno-naming-issues-should-isomething-be-renamed). – maple_shaft Aug 24 '16 at 18:00
  • 11
    You provide no basis for "In general terms, it should be avoided". In Java, i should be avoided. In C# it should be embraced. Other languages follow their own conventions. – Basic Aug 24 '16 at 18:20
  • 4
    The "in general terms" principle is simple: A client should have the right to not even know whether it's talking to an interface or an implementation. Both languages got this wrong. C# got the naming convention wrong. Java got the bytecode wrong. If I switch an implementation to an interface with the same name in Java I have to recompile all clients even though the name and methods didn't change. It's not a "pick your poison". We've just been doing it wrong. Please learn from this if you're creating a new language. – candied_orange Jan 06 '17 at 14:38
41

The interface is the important logical concept, hence, the interface should carry the generic name. So, I'd rather have

interface Something
class DefaultSomething : Something
class MockSomething : Something

than

interface ISomething
class Something : ISomething
class MockSomething : ISomething

The latter has several issues:

  1. Something is only one implementation of ISomething, yet it is the one with the generic name, as if it was somehow special.
  2. MockSomething seems to derive from Something, yet implements ISomething. Maybe it should be named MockISomething, but I've never seen that in the wild.
  3. Refactoring is harder. If Something is a class now, and you only find out later that you have to introduce an interface, that interface should be named the same, because it should be transparent to clients whether the type is a concrete class, an abstract class or an interface. ISomething breaks that.
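To make point 3 concrete, here is a minimal Java sketch (the names `Something`, `DefaultSomething`, and `Client` are hypothetical, made up for this illustration): the client compiles against the name `Something` only, so turning `Something` from a class into an interface later does not require touching client source.

```java
// Hypothetical sketch: the client depends only on the name "Something"
// and its methods; whether Something is a class or an interface is
// invisible in this source file.
interface Something {
    String describe();
}

class DefaultSomething implements Something {
    public String describe() { return "default"; }
}

class Client {
    private final Something dep;

    Client(Something dep) { this.dep = dep; }

    // Calls through the name "Something" without knowing what kind of type it is.
    String run() { return dep.describe(); }
}
```

If `Something` had started life as a concrete class, only its own declaration and the new `DefaultSomething` would change during the refactoring; `Client`'s source stays as-is (though, as another answer here points out, it would still need recompiling).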
wallenborn
  • I agree with you about giving the first concrete class you make the generic name. Although I would postfix rather than prefix the name: RepositorySql etc. – Ewan Aug 23 '16 at 11:41
  • 1
    Prefix or postfix is a matter of taste. IMO, each implementation should have a reason to exist, and that reason should be reflected in the name. So if you have repositories in SQL, plain text, and in-memory (for unit tests), the names should be RepositorySQL (or SqlRepository), RepositoryCSV, RepositoryInMemory. – wallenborn Aug 23 '16 at 12:42
  • 59
    Why would you not follow the well-established standard that C# provides? What benefit does this impart that exceeds the cost of confusing and irritating every C# programmer that follows you? – Robert Harvey Aug 23 '16 at 15:01
  • 6
    @wallenborn That's fine, but realistically you often only have a single legitimate implementation of an interface, because it's common to use interfaces for mockability in unit tests. I don't want to put `Default` or `Real` or `Impl` in so many of my class names – Ben Aaronson Aug 23 '16 at 15:02
  • 4
    I agree with you, but if you travel this road you're going to have to fight every linter ever made for C#. – RubberDuck Aug 23 '16 at 15:57
  • 3
    I see no problem with having an instantiable class with the same name as an interface if the vast majority of instances that implement the interface will be that class. For example, a lot of code needs a straightforward general-purpose implementation of `IList` and doesn't really care how it's stored internally; being able to just lop off the `I` and have a usable class name seems nicer than having to magically know (as in Java) that code wanting an ordinary implementation of `List` should use `ArrayList`. – supercat Aug 23 '16 at 17:23
  • 1
    @BenAaronson why would you want the type marker on the interface which you will end up seeing everywhere as opposed to the class name which you will only ever see once or twice (ie the class itself and when you instantiate it into whatever injection framework you are using). – Sled Aug 23 '16 at 17:54
  • 2
    @ArtB Couple of reasons. One, there's no language where the convention is a nice short marker on the class like `CFoo : Foo`. So that option isn't really available. Two, you get a weird inconsistency then because some classes would have the marker and others wouldn't, depending on whether there *might* be other implementations of the same interface. `CFoo : Foo` but `SqlStore : Store`. That's one problem I have with `Impl`, which is essentially just an uglier marker. Maybe if there was a language with the convention of *always* adding a `C` (or whatever), that might be better than `I` – Ben Aaronson Aug 23 '16 at 19:28
  • @BenAaronson Yes, "Impl" is an uglier marker, but you use those identifiers only once or twice in a Java Spring app, while the interface appears dozens to hundreds of times. Least aggregate ugliness IMHO. – Sled Aug 23 '16 at 20:26
  • This is bad advice for C#, much better to follow the existing conventions there. When in rome and all that. – Andy Aug 24 '16 at 16:14
  • If something is standard, it does not mean it is the most logical choice. It is just like politics, where a group of somebody just collectively agreed to pick the choice and everyone else is expected to follow. If you have good reasons and you are brave enough, you can always start another political party. – Lukman Aug 25 '16 at 14:26
29

This isn't just about naming conventions. C# doesn't support multiple inheritance so this legacy use of Hungarian notation has a small albeit useful benefit where you're inheriting from a base class and implementing one or more interfaces. So this...

class Foo : BarB, IBarA
{
    // Better...
}

...is preferable to this...

class Foo : BarB, BarA
{
    // Dafuq?
}

IMO

Robbie Dee
  • 7
    It’s worth noting that Java neatly avoids this issue by using different keywords for class inheritance and interface implementation, i.e. `inherits` vs `implements`. – Konrad Rudolph Aug 24 '16 at 10:51
  • 6
    @KonradRudolph you mean `extends`. – OrangeDog Aug 24 '16 at 11:53
  • @OrangeDog Yup. – Konrad Rudolph Aug 24 '16 at 13:09
  • @KonradRudolph In this case I don't think being more wordy really adds any value here. – Andy Aug 24 '16 at 16:16
  • 2
    @Andy The added value is that we can avoid the Hungarian Notation cruft, yet preserve all of the clarity touted in this answer. – Konrad Rudolph Aug 24 '16 at 16:22
  • @KonradRudolph I'd rather just prefix with I; I don't have that much of an issue with Hungarian notation that I'd want to type implements/extends everywhere I could just have typed a colon simply to avoid Hungarian notation on interface names. – Andy Aug 24 '16 at 16:29
  • @Andy The downside of your argument is that, rather than keeping the inconvenience contained, you now pollute your whole codebase with it. The keywords obviously also have the (small) advantage that they enforce correct usage via the compiler; for the Hungarian notation, nothing prevents you from erroneously prefixing a class name with “`I`” and forgetting it in front interface names — which is exactly the problem have with Hungarian: it’s an unenforced convention, i.e. weak typing. A big problem? Certainly not. But neither are two contextual keywords. – Konrad Rudolph Aug 24 '16 at 16:31
  • @KonradRudolph I don't consider IInteface any kind of pollution; I just said I really don't have an issue with IInterface. And this answer is kinda wrong too, now that I think about it. – Andy Aug 24 '16 at 16:33
  • 2
    There really shouldn't be any confusion; if you inherit a class, it MUST be first in the list, before the list of implemented interfaces. I'm pretty sure this is enforced by the compiler. – Andy Aug 24 '16 at 16:33
  • @Andy It’s pollution in the sense that it doesn’t offer any actionable information (except arguably in the place mentioned in this answer, but as you say not even there). So it’s a completely unnecessary character in the name. The single potential benefit of knowing that a type is an interface (which is a pretty questionable benefit to begin with) is undermined because the convention isn’t statically enforced. – Konrad Rudolph Aug 24 '16 at 16:38
  • @KonradRudolph As I said, I don't care about a single extra character in the name, and I don't care that its not actionable, especially if that means I can name the actual implementation Parser instead of having to make something up for the class name because the interface lacks an I prefix. – Andy Aug 24 '16 at 17:39
  • @Andy By putting the class first (`class Foo : BarA, BarB`) you can tell that `BarB` is definitely an interface, but there's still an ambiguity as to whether `BarA` is one as well - it could be a base class. Using the naming convention, the ambiguity is removed (`class Foo : BarA, IBarB` vs `class Foo : IBarA, IBarB`). – Roujo Aug 25 '16 at 03:57
  • 2
    @Andy *...if you inherit a class, it MUST be first in the list* That's great but without the prefix you wouldn't know if you were dealing with a couple of interfaces or a base class and an interface... – Robbie Dee Aug 25 '16 at 15:27
  • @RobbieDee It will also be colored differently in the editor. – Andy Aug 25 '16 at 17:45
  • 1
    @Andy That would depend on your editor which has nothing to do with syntax. – Robbie Dee Aug 26 '16 at 08:10
17

No no no.

Naming conventions are not a function of your Uncle Bob fandom. They are not a function of the language, c# or otherwise.

They are a function of your code base and, to a much lesser extent, of your shop. In other words, the first one in the code base sets the standard.

Only then do you get to decide. After that just be consistent. If someone starts being inconsistent take away their nerf gun and make them update documentation until they repent.

When reading a new code base if I never see an I prefix I'm fine, regardless of language. If I see one on every interface I'm fine. If I sometimes see one, someone's going to pay.

If you find yourself in the rarefied context of setting this precedent for your code base I urge you to consider this:

As a client class I reserve the right to not give a damn what I'm talking to. All I need is a name and a list of things I can call against that name. Those may not change unless I say so. I OWN that part. What I'm talking to, interface or not, is not my department.

If you sneak an I in that name, the client doesn't care. If there is no I, the client doesn't care. What it's talking to might be concrete. It might not. Since it doesn't call new on it either way it makes no difference. As the client, I don't know. I don't want to know.

Now as a code monkey looking at that code, even in Java, this might be important. After all, if I have to change an implementation to an interface I can do that without telling the client. If there is no I tradition I might not even rename it. Which might be an issue, because now we have to recompile the client: even though the source doesn't care, the binary does. Something easy to miss if you never changed the name.

Maybe that's a non-issue for you. Maybe not. If it is, that's a good reason to care. But for the love of desk toys don't claim we should just blindly do this because it's the way it's done in some language. Either have a damn good reason or don't care.

It's not about living with it forever. It's about not giving me crap about the code base not conforming to your particular mentality unless you're prepared to fix the 'problem' everywhere. Unless you have a good reason I shouldn't take the path of least resistance to uniformity, don't come to me preaching conformity. I have better things to do.

candied_orange
  • 6
    Consistency is key, but in a statically typed language and with modern refactoring tools it is pretty easy and safe to change names. So if the first developer made a bad choice in naming, you don't have to live with it forever. But while you can change naming in your own code base, you cannot change it in the framework or third party libraries. Since the I-prefix (like it or not) is an established convention across the .net world, it is better to follow the convention than to not follow it. – JacquesB Aug 24 '16 at 11:32
  • If you're using C#, you're gonna see those `I`s. If your code uses them, you'll see them everywhere. If your code's not using them, you'll see them from the framework code, leading exactly to the mix you don't want. – Sebastian Redl Jul 13 '18 at 15:04
8

There's one tiny difference between Java and C# that is relevant here. In Java, every member is virtual by default. In C#, every member is non-virtual (effectively sealed) by default - except for interface members.

The assumptions that go with this influence the guideline - in Java, every public type should be considered non-final, in accordance with Liskov's Substitution Principle [1]. If you only have one implementation, you'll name the class Parser; if you find that you need multiple implementations, you'll just change the class to an interface with the same name, and rename the concrete implementation to something descriptive.

In C#, the main assumption is that when you get a class (name doesn't start with I), that's the class you want. Mind you, this is nowhere near 100% accurate - a typical counter-example would be classes like Stream (which really should have been an interface, or a couple of interfaces), and everyone has their own guidelines and backgrounds from other languages. There's also other exceptions like the fairly widely used Base suffix to denote an abstract class - just like with an interface, you know the type is supposed to be polymorphic.

There's also a nice usability feature in leaving the non-I-prefixed name for functionality that relates to that interface without having to resort to making the interface an abstract class (which would hurt due to the lack of class multiple-inheritance in C#). This was popularised by LINQ, which uses IEnumerable<T> as the interface, and Enumerable as a repository of methods that apply to that interface. This is unnecessary in Java, where interfaces can contain method implementations as well.
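As a hedged illustration of that last point: since Java 8, an interface can carry default methods directly, so no companion class like C#'s `Enumerable` is needed. The names below (`Parser`, `UpperCaseParser`, `parseOrEmpty`) are made up for this sketch, not taken from any real API.

```java
// Java 8+ default methods: shared behaviour lives on the interface
// itself, so no separate static "companion" class is required.
interface Parser {
    String parse(String content);

    // Every implementation inherits this behaviour for free.
    default String parseOrEmpty(String content) {
        return content == null ? "" : parse(content);
    }
}

class UpperCaseParser implements Parser {
    public String parse(String content) { return content.toUpperCase(); }
}
```

In C#, the equivalent shared behaviour would live in extension methods on a static class next to the interface, which is exactly the IEnumerable<T>/Enumerable split described above.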

Ultimately, the I prefix is widely used in the C# world, and by extension, the .NET world (since by far most of .NET code is written in C#, it makes sense to follow C# guidelines for most of the public interfaces). This means you will almost certainly be working with libraries and code that follows this notation, and it makes sense to adopt the tradition to prevent unnecessary confusion - it's not like omitting the prefix will make your code any better :)

I assume that Uncle Bob's reasoning was something like this:

IBanana is the abstract notion of banana. If there can be any implementing class that would have no better name than Banana, the abstraction is entirely meaningless, and you should drop the interface and just use a class. If there is a better name (say, LongBanana or AppleBanana), there's no reason not to use Banana as the name of the interface. Therefore, using the I prefix signifies that you have a useless abstraction, which makes the code harder to understand with no benefit. And since strict OOP will have you always code against interfaces, the only place where you wouldn't see the I prefix on a type would be on a constructor - quite pointless noise.

If you apply this to your sample IParser interface, you can clearly see that the abstraction is entirely in the "meaningless" territory. Either there's something specific about a concrete implementation of a parser (e.g. JsonParser, XmlParser, ...), or you should just use a class. There's no such thing as a "default implementation" (though in some environments, this does indeed make sense - notably, COM): either there's a specific implementation, or you want an abstract class or extension methods for the "defaults". However, in C#, unless your codebase already omits the I-prefix, keep it. Just make a mental note every time you see code like class Something: ISomething - it means somebody isn't very good at following YAGNI and building reasonable abstractions.

[1] - Technically, this isn't specifically mentioned in Liskov's paper, but it is one of the foundations of the original OOP paper and in my reading of Liskov, she didn't challenge this. In a less strict interpretation (the one taken by most OOP languages), this means that any code using a public type that is intended for substitution (i.e. non-final/sealed) must work with any conforming implementation of that type.

Luaan
  • 3
    Nitpick: LSP doesn’t say that every type should be non-final. It just says that for types that *are* non-final, consumers must be able to use subclasses transparently. — And of course Java also has final types. – Konrad Rudolph Aug 24 '16 at 10:52
  • @KonradRudolph I added a footnote regarding this; thanks for the feedback :) – Luaan Aug 24 '16 at 12:31
3

So, for example, what should these two be named? Parser and ConcreteParser? Parser and ParserImplementation?

In an ideal world, you should not prefix your name with a short-hand ("I...") but simply let the name express a concept.

I read somewhere (and cannot source it) the opinion that this ISomething convention leads you to define an "IDollar" concept when you need a "Dollar" class, but the correct solution would be a concept called simply "Currency".

In other words, the convention gives an easy out to the difficulty of naming things and the easy out is subtly wrong.


In the real world, the clients of your code (which may just be "you from the future") need to be able to read it later and be familiar with it as fast (or with as little effort) as possible (give the clients of your code what they expect to get).

The ISomething convention introduces cruft/bad names. That said, the cruft is not real cruft *), unless the conventions and practices used in writing the code are no longer current; if the convention says "use ISomething", then it is best to use it, to conform (and work better) with other devs.


*) I can get away with saying this because "cruft" is not an exact concept anyway :)

utnapistim
2

As the other answers mention, prefixing your interface names with I is part of the coding guidelines for the .NET framework. Because concrete classes are more often interacted with than generic interfaces in C# and VB.NET, they are first-class in naming schemes and thus should have the simplest names - hence, IParser for the interface and Parser for the default implementation.

But in the case of Java and similar languages, where interfaces are preferred instead of concrete classes, Uncle Bob is right in that the I should be removed and interfaces should have the simplest names - Parser for your parser interface, for instance. But instead of a role-based name like ParserDefault, the class should have a name that describes how the parser is implemented:

public interface Parser{
}

public class RegexParser implements Parser {
    // parses using regular expressions
}

public class RecursiveParser implements Parser {
    // parses using a recursive call structure
}

public class MockParser implements Parser {
    // BSes the parsing methods so you can get your tests working
}

This is, for instance, how Set<T>, HashSet<T>, and TreeSet<T> are named.

TheHans255
  • 5
    Interfaces are preferred in C# too. Whoever told you it was preferred to use concrete types in dotnet? – RubberDuck Aug 23 '16 at 21:45
  • @RubberDuck It's hidden in some of the assumptions the language imposes on you. E.g., the fact that members are `sealed` by default, and you must make them explicitly `virtual`. Being virtual is the special case in C#. Of course, as the recommended approach to writing code changed over the years, many relics of the past needed to stay for compatibility :) The key point is that if you get an interface in C# (which starts with `I`), you *know* it's an entirely abstract type; if you get a non-interface, the typical assumption is that that's the type you want, LSP be damned. – Luaan Aug 24 '16 at 08:55
  • 1
    Members are sealed by default so that we're *explicit* about what inheritors can over ride @Luaan. Admittedly, it's a PITA sometimes, but it's nothing to do with a preference for concrete types. Virtual is certainly not a special case in C#. It sounds like you had the misfortune of working in a C# codebase written by amateurs. I'm sorry that was your experience with the language. It's best to remember that the dev is not the language and vice versa. – RubberDuck Aug 24 '16 at 09:27
  • @RubberDuck Hah, I never said I didn't like it. I very much prefer it, because *most people don't understand indirection*. Even among programmers. Sealed by default is *better* in my opinion. And when I started with C#, *everybody* was an amateur, including the .NET team :) In "real OOP", everything is supposed to be virtual - every type should be replaceable by a derived type, which should be able to override any behaviour. But "real OOP" was designed for specialists, not people who switched from replacing light bulbs to programming because it's less work for more money :) – Luaan Aug 24 '16 at 10:27
  • No... No. That's not how "real OOP" works at all. In Java, you should be taking the time to seal methods that shouldn't be overridden, not leaving it like the Wild West where anyone can over ride everything. LSP doesn't mean you can override anything you want. LSP means that you can safely replace one class with another. Smh – RubberDuck Aug 24 '16 at 10:48
-8

You are correct in that this I = interface convention breaks the (new) rules

In the old days you would prefix everything with its type: intCounter, boolFlag, etc.

If you want to name things without the prefix, but also avoid 'ConcreteParser' you could use namespaces

namespace myapp.Interfaces
{
    public interface Parser
    {
        // ...
    }
}

namespace myapp.Parsers
{
    public class Parser : myapp.Interfaces.Parser
    {
        // ...
    }
}
Ewan
  • 11
    new? old? Wait, I thought programming was more about rationality than hype. Or was this in the old days? –  Aug 23 '16 at 09:14
  • 3
    no, it's all about making up conventions and blogging about how they 'improve readability' – Ewan Aug 23 '16 at 09:15
  • 8
    I believe you are thinking of Hungarian notation, in which you did indeed prefix names with types. But there was no time in which you'd write something like `intCounter` or `boolFlag`. Those are the wrong "types". Instead, you'd write `pszName` (pointer to a NUL-terminated string containing a name), `cchName` (count of characters in the string "name"), `xControl` (x coordinate of the control), `fVisible` (flag indicating something is visible), `pFile` (pointer to a file), `hWindow` (handle to a window), and so on. – Cody Gray - on strike Aug 23 '16 at 12:19
  • 2
    There is still a lot of value in this. A compiler is not going to catch the error when you add `xControl` to `cchName`. They are both of type `int`, but they have very different semantics. The prefixes make these semantics obvious to a human. A pointer to a NUL-terminated string looks exactly like a pointer to a Pascal-style string to a compiler. Similarly, the `I` prefix for interfaces emerged in the C++ language where interfaces are really just classes (and indeed for C, where there is no such thing as a class or an interface at the language level, it's all just a convention). – Cody Gray - on strike Aug 23 '16 at 12:22
  • 2
    "In the old days" compilers were often severely limited in identifier lengths; something like 6-10 characters was common. You could use longer names, but the part beyond the first few characters was not considered for variable name uniqueness. I think at least in Turbo C 2.0, the length was even *configurable!* – user Aug 23 '16 at 12:59
  • 2
    the good ol' days you mean, before all this longVariableNamesAreGoodForSomeReason nononsense! (or gdOD -beforeNsense as i call it) – Ewan Aug 23 '16 at 14:44
  • 4
    @CodyGray Joel Spolsky wrote a nice article, [Making Wrong Code Look Wrong](http://www.joelonsoftware.com/articles/Wrong.html) about Hungarian notation and the ways in which it's been (mis)understood. He has a similar opinion: the prefix should be a higher level type, not the raw data storage type. He uses the word "kind", explaining, **"I’m using the word *kind* on purpose, there, because Simonyi mistakenly used the word *type* in his paper, and generations of programmers misunderstood what he meant."** – Joshua Taylor Aug 23 '16 at 17:24
  • 1
    @CodyGray I agree with the value of hungarian notation, but now we can do better than it: use actual descriptive names, like CheckBoxXCoordinate, NameCharCount, IsCheckBoxVisible, FilePointer, WindowHandle, so there's no need to use that notation because vars can be more descriptive without having to understand and remember a specific notation. `NameCharCount + CheckBoxXCoordinate` is even more evidently wrong than `xControl+cchName` – Vinko Vrsalovic Aug 23 '16 at 20:17
  • @JoshuaTaylor, thank you for linking that article. – Wildcard Aug 24 '16 at 01:28
  • 1
    @VinkoVrsalovic For some cases, yeah. But do you really write names like `FirstnameLongPointerToANullTerminatedString`? Semantic hungarian notation is still very useful, you just have to make sure it's worth the cost of teaching the meaning - you *will* need to do a lot of teaching for any newcomer anyway, don't pretend they just magically follow the same guidelines you do, even if you both base them on the same document. Strict typing would be preferred in most cases, of course, which is why hungarian isn't used so much in languages like C# and modern C++. – Luaan Aug 24 '16 at 09:00
  • @luaan did you see the examples I used? Do they look unusable? No need to cherry pick an artificially long example which makes no sense in modern languages to make your point. I agree that for 'old' languages Hungarian makes sense, but that was my point when I said we can do better now, using more modern languages. If you are stuck with C, do go on using Apps Hungarian. – Vinko Vrsalovic Aug 24 '16 at 09:05
  • I should point out that I'm not actually recommending not prefixing interfaces with I; just saying that if you want to remove the prefix, namespaces are better than suffixing the implementation with 'Concrete'. – Ewan Aug 24 '16 at 11:00