98

"Best practices" are everywhere in our industry. A Google search on "coding best practices" turns up nearly 1.5 million results. The idea seems to bring comfort to many; just follow the instructions, and everything will turn out fine.

When I read about a best practice - for example, I just read through several in Clean Code recently - I get nervous. Does this mean that I should always use this practice? Are there conditions attached? Are there situations where it might not be a good practice? How can I know for sure until I've learned more about the problem?

Several of the practices mentioned in Clean Code did not sit right with me, but I'm honestly not sure if that's because they're potentially bad, or if that's just my personal bias talking. I do know that many prominent people in the tech industry seem to think that there are no best practices, so at least my nagging doubts place me in good company.

The best practices I've read about are simply too numerous to list here or to ask individual questions about, so I would like to phrase this as a general question:

Which coding practices that are popularly labeled as "best practices" can be sub-optimal or even harmful under certain circumstances? What are those circumstances and why do they make the practice a poor one?

I would prefer to hear about specific examples and experiences.

Thomas Owens
  • 79,623
  • 18
  • 192
  • 283
Steven Evers
  • 28,200
  • 10
  • 75
  • 159
  • 8
    Which practices do you disagree with? Just curious. – Sergio Acosta Oct 28 '10 at 13:17
  • Anyone can write a book, and I do not have to agree with it - it is as simple as that. – Job Jan 12 '11 at 18:49
  • One thing I like about the book "Code Complete" by Steve McConnell is that he backs up all his tips with hard evidence and research. Just sayin' – JW01 Jan 17 '11 at 14:44
  • 5
    @Walter: This has been open for months, it is definitely constructive, why close it? – Orbling Jan 18 '11 at 02:23
  • @Orbling it was even referred to by Aarobot in his "poll post" on MSO as an example of a poorly worded question that ended up with good productive answers. – Nicole Jan 18 '11 at 03:30
  • @Renesis: Well Aarobot and I clash a fair bit, but at least there is an acknowledgement of good productive answers. Perhaps Walter does not read meta, I just can't, I get too annoyed. – Orbling Jan 18 '11 at 03:33
  • @Walter: This question definitely has merit and is worth discussing. And in my opinion it definitely meets the subjective criteria. This needs to be re-opened. PS. How come it only takes one person to close? On SO it needs 5 votes to close a question. – Martin York Jan 18 '11 at 05:40
  • @Martin - I brought that up in Meta today (http://meta.programmers.stackexchange.com/questions/996/moderators-taking-community-function) – Kyle Rosendo Jan 18 '11 at 13:21
  • @Martin @Kyle Rozendo: It comes up frequently, and rather unsurprisingly, gets shot down every time. Moderator votes count absolute, granted they need that power, but it makes a mockery of the democratic system, they should not vote unless there is serious infringement. – Orbling Jan 18 '11 at 20:19
  • @Orbling: Seems to have worked this time. The question is no longer closed. – Martin York Jan 18 '11 at 22:18
  • @Martin York: 5 users voted to re-open the question. It's a bit imbalanced, but it _can_ work. – Steven Evers Jan 18 '11 at 22:26
  • 2
    Seeing as how my name is being mentioned in here, I suppose I should chip in: I believe that the answers here are valuable, but the question could be reworded into something definitively less poll-like without invalidating any of the answers. Example title: *"Which popular 'best practices' can sometimes be harmful, and when/why?"* – Aaronaught Jan 19 '11 at 01:33
  • @Aaronaught, agreed, and that fortunately seems to be the way that most posters interpreted it when they answered. – Nicole Jan 19 '11 at 02:57
  • @Aaronaught - Excellent edit! – Walter Jan 19 '11 at 14:08

45 Answers

125

I think you've hit the nail on the head with this statement

I'd hate to take things at face value and not think about them critically

I ignore almost all Best Practices when they don't come with an explanation of why they exist.

Raymond Chen puts it best in this article when he says

Good advice comes with a rationale so you can tell when it becomes bad advice. If you don't understand why something should be done, then you've fallen into the trap of cargo cult programming, and you'll keep doing it even when it's no longer necessary or even becomes deleterious.

alex
  • 140
  • 7
Conrad Frix
  • 4,155
  • 2
  • 18
  • 34
  • 4
    Wonderful quote. – David Thornley Oct 26 '10 at 20:10
  • The next paragraph of that Raymond Chen quote is probably describing Hungarian notation! Most companies I see are using it with no real good reason that they can explain. – Craig Oct 27 '10 at 02:03
  • 3
    I hope people don't take this as an excuse to not look up what the reasoning behind the best practices are, though. ;) Unfortunately, I have seen developers with this attitude. – Vetle Oct 27 '10 at 08:23
  • @vetler: That's true, but before using a pattern you also need to understand why you're using it *now*. Composite patterns are all very nice, but if all you really need is a 1-D collection (List, Array, whatever) then that's what you should use. – JohnL Oct 27 '10 at 09:51
  • 3
    Rationale is good. Research is better. – Jon Purdy Oct 27 '10 at 14:26
  • 7
    Absolutely true, and I've said the same before. Perhaps the first standard in any "Standards" document should read, "In the interest of sharing knowledge and providing the information necessary to rescind standards at later times, all standards shall include the reasons for their existence." – Scott Whitlock Oct 29 '10 at 00:47
  • @Craig: You're dead on target. I worked for Simonyi at Xerox on the BravoX project (1978-80), the first project to use Hungarian notation, and it made perfect sense *in context*. BravoX was written in BCPL, the grandfather of C. BCPL had one type: integer. Everything else (structures, pointers, etc.) existed by inference and convention. *This* was the original reason for Hungarian: visual typechecking of a typeless language, and it worked great! It was simple and the whole "spec" took about 1.5 pages--I still have a copy of it somewhere. MS uses a mutated, bloated version of Hungarian 1.0. – Peter Rowell May 18 '11 at 15:18
94

Might as well throw this into the ring:

Premature optimization is the root of all evil.

No, it's not.

The complete quote:

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

That means that you take advantage of specific, strategic performance enhancements throughout your design process. It means that you use data structures and algorithms that are consistent with performance objectives. It means that you are aware of design considerations that affect performance. But it also means that you do not frivolously optimize when doing so will give you minor gains at the expense of maintainability.
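
As a hedged illustration of that point (my own sketch, not from the original answer; the class and names are hypothetical), choosing a data structure that matches the access pattern is a design-level performance decision rather than a premature micro-optimization:

import java.util.HashSet;
import java.util.Set;

// Hypothetical example: IDs that will be checked for membership on a hot path.
// A List would force an O(n) scan per lookup; a HashSet gives O(1) expected
// lookups. Deciding this up front is design, not premature optimization.
public class BlockedUserRegistry {
    private final Set<String> blockedIds = new HashSet<>();

    public void block(String userId) {
        blockedIds.add(userId);
    }

    public boolean isBlocked(String userId) {
        return blockedIds.contains(userId);
    }
}

The micro-tuning Knuth warned about would be something else entirely, such as hand-tweaking the lookup before any profiling shows it matters.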

Applications need to be well-architected, so that they don't fall down at the end when you apply a little load on them, and then you wind up rewriting them. The danger with the abbreviated quote is that, all too often, developers use it as an excuse to not think about performance at all until the end, when it might be too late to do anything about it. It's better to build in good performance from the start, provided you're not focusing on minutiae.

Let's say you're building a real-time application on an embedded system. You choose Python as the programming language, because "Premature optimization is the root of all evil." Now I have nothing against Python, but it is an interpreted language. If processing power is limited, and a certain amount of work needs to get done in real-time or near real-time, and you choose a language that requires more processing power for the work than you have, you are royally screwed, because you now have to start over with a capable language.

Robert Harvey
  • 198,589
  • 55
  • 464
  • 673
  • 4
    +1 for the strong *No, it's not.* – Stephen Oct 26 '10 at 21:26
  • 21
    But if you've already recognized that one particular optimization is within that critical 3%, are you premature in optimizing it? – John Oct 26 '10 at 21:33
  • @John: No, you're not. – Robert Harvey Oct 26 '10 at 21:45
  • 7
    @Robert: Then what is the point of disagreement with the statement that "Premature optimization is the root of all evil"? – John Oct 26 '10 at 22:20
  • 3
    @John: It's not a complete statement. How can you determine whether or not you are within that critical 3% if you don't even know there *is* a critical 3%? – Robert Harvey Oct 26 '10 at 22:32
  • At what point does an optimization become premature? – Mongus Pong Oct 27 '10 at 00:04
  • 3
    @Mongus: Optimization becomes premature when you begin focusing on minute performance enhancements at the expense of maintainability of the code, and you are not profiling to see if these changes really improve the performance significantly. – Robert Harvey Oct 27 '10 at 06:36
  • @Robert, so that critical 3% would probably not be considered as 'premature' optimization, but rather well-timed optimization. – Mongus Pong Oct 27 '10 at 09:21
  • 8
    It is never premature to optimise high-level design and technical decisions like language choice. However, often it is only after you have mostly completed a design that its inefficiencies become apparent, which is why Fred Brooks said that most teams write a throw-away version whether they intend to or not. Another argument for prototyping. – Dominique McDonnell Oct 27 '10 at 12:06
  • 2
    The problem with the quote is the presumption by Knuth that the programmer (himself) would choose something quite good even without pre-optimizing, and not - as it has come to - just pick anything out of a hat. –  Oct 27 '10 at 13:32
  • Premature optimization is the same as any unnecessary feature. Like the requests that start with, "You know what would be cool?" – JeffO Oct 27 '10 at 14:35
  • 1
    @Mongus: The critical 3% is about timing in the sense that it's about choosing well-designed and effective optimizations *during* development, not after. – Robert Harvey Oct 27 '10 at 16:46
  • 2
    @Mongus: Premature optimisation is optimisation that does not fit the conceptual level you are working at. When you are still designing a feature, don't try to make optimisations that can't be expressed in the design language. The difficult part might be to differentiate between an implementation phase (where you write maintainable code) and a tweaking phase (where maintainability is sacrificed for micro-optimisations) – Bart van Ingen Schenau Oct 28 '10 at 12:55
  • 2
    I would upvote you a million times if I could. I have seen more bad applications due to this one quote than almost anything else. Avoiding premature optimization does not mean not optimizing, especially in database design where performance is critical! – HLGEM Oct 28 '10 at 14:38
  • I agree with you. I've seen so many times people throw away best practices or refuse it just because they think it is a premature optimization. It's written here: http://satukubik.com/2009/08/10/premature-optimization-vs-best-practice/ – nanda Oct 29 '10 at 07:27
  • @Bart: a design that is the easiest to understand at the conceptual level may have the wrong data granularity that will make future optimization impossible (for example, too much redundant data copying or storing, or requiring too many function calls). In that case, the simpler design can only be kept around for teaching, not for production. In other words, difficulty in the tweaking phase may force you back to the design phase, which is costly. Although @DominicMcDonnell says this will happen anyway, it may be avoided if you consider the issues when you are doing the conceptual design. – rwong Dec 01 '10 at 09:08
  • 1
    @rwong: If you find that too much data-copying is going on to reach the required performance, then either you made a sub-optimal implementation of a good algorithm or you selected the wrong algorithm in your design phase because you did not consider all relevant constraints. Avoiding premature optimisation does not mean you can disregard the known (performance and memory) constraints during design, but rather that you make a balanced decision between speed and memory without worrying about the last bit and clock-cycle. – Bart van Ingen Schenau Dec 14 '10 at 11:09
  • 8
    @Robert, the Knuth quote was optimized prematurely... –  Jan 21 '11 at 00:00
  • 2
    I found [Bill Harlan's post on premature optimization](http://billharlan.com/pub/papers/A_Tirade_Against_the_Cult_of_Performance.html) interesting. I quote: _"It is easier to optimize correct code than to correct optimized code."_ – Steven Jeuris Feb 19 '11 at 20:20
  • @Steven: I agree with every word he says. – Robert Harvey Feb 20 '11 at 06:40
93

One return per function/method.
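
To make the objection concrete, here is a hedged sketch of my own (hypothetical types and numbers, not part of the original answer) contrasting the single-exit style with early returns:

// Hypothetical types for illustration only.
class Customer {
    private final boolean active;
    private final int orders;
    Customer(boolean active, int orders) { this.active = active; this.orders = orders; }
    boolean isActive() { return active; }
    int getOrders() { return orders; }
}

class DiscountCalculator {
    // Single-exit style: one return, but the logic nests.
    double discountSingleReturn(Customer c) {
        double result = 0.0;
        if (c != null) {
            if (c.isActive()) {
                result = (c.getOrders() > 10) ? 0.15 : 0.05;
            }
        }
        return result;
    }

    // Early-return (guard clause) style: special cases exit immediately.
    double discountEarlyReturn(Customer c) {
        if (c == null) return 0.0;
        if (!c.isActive()) return 0.0;
        return (c.getOrders() > 10) ? 0.15 : 0.05;
    }
}

Both versions behave identically; the question is only which one stays readable once a few more conditions accumulate.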

iMacUwhAK
  • 201
  • 2
  • 4
  • 7
    I was going to put this. I love me some early return statements. – Carson Myers Dec 01 '10 at 03:37
  • 4
    Absolutely! People contrive some pretty interesting program flow to avoid an early `return` statement: either deeply nested control structures or continual checks. This can really bloat a method when an `if ... return` could really simplify the problem. – snmcdonald Dec 19 '10 at 22:26
  • 4
    If you need multiple returns in a function (aside from guards) your function is probably too long. – EricSchaefer Dec 28 '10 at 20:16
  • 18
    There's no point in having a return keyword unless it was intended to appear in multiple places. Return early, return often. It'll only serve to simplify your code further. If people can understand how break/continue statements work, why do they struggle with return? – Evan Plaice Jan 12 '11 at 16:14
  • 7
    I think this is a very out-dated best practice. I don't think it's a modern best practice. – Skilldrick Jan 17 '11 at 11:26
  • 1
    multiple returns can be beneficial/clear, although you always have to watch out for the spaghetti code monkey that sticks in 10 or 15 of them. – Ken Henderson Jan 21 '11 at 00:44
  • 1
    I agree with you, I don't follow this practice, though I feel there are messy ways to implement a function using multiple return statements. My general approach is that I use multiple return statements for "symmetry". Except for guard conditions, if I return from the "if" of an if-statement, then I prefer to also return from the "else" of that same if-statement. I also "recursively" apply this rule to return from a complex if-else hierarchy. My goal is to avoid a function that might exit early through some obscure set of conditions but also might continue on for 50 more lines of code. – Dr. Wily's Apprentice Jan 21 '11 at 15:38
  • 1
    If your language supports exceptions you already have multiple exit points from a function - with or without returns. – Nemanja Trifunovic Feb 20 '11 at 02:20
  • @Nemanja Exceptions shouldn't be used for logic though, they're much more expensive than a return any day. – Evan Plaice Mar 25 '11 at 12:10
  • 1
    @Evan: True. However, exceptional exit points are still exit points and you need to be aware of them. – Nemanja Trifunovic Mar 25 '11 at 12:43
  • @Nemanja I agree. – Evan Plaice Mar 25 '11 at 12:45
87

Don't reinvent the wheel is a widely misused dogma. Its idea is that if a suitable solution exists, use it instead of creating your own; in addition to saving effort, the existing solution is likely better implemented (bug-free, efficient, tested) than what you would come up with initially. So far, so good.

The problem is that a 100% suitable solution seldom exists. An 80% suitable solution might exist, and using it is probably fine. But how about 60% suitable? 40%? Where do you draw the line? If you don't draw the line, you could end up incorporating a bloated library into your project because you're using 10% of its features - just because you want to avoid "reinventing the wheel".

If you do reinvent the wheel, you'll get exactly what you want. You'll also learn how to make wheels. Learning by doing shouldn't be underestimated. And in the end, a custom wheel may well be better than an off-the-shelf generic wheel.

Joonas Pulakka
  • 23,534
  • 9
  • 64
  • 93
  • 3
    I've had this happen the other way around. I built my own ajax grid component because at the time there were none that did what I wanted, but later on replaced it with Ext JS grids. It helped that I made the assumption from the beginning that the display layer would get replaced. – Joeri Sebrechts Oct 28 '10 at 19:15
  • 20
    Agreed. If no one ever reinvented the wheel, then we'd all be driving our cars on wooden tires. – Dr. Wily's Apprentice Oct 29 '10 at 20:41
  • 6
    I always feel like your 10% example when I add Boost to a C++ project. I *always* need less than 10% of it, directly, but of course the functions I need import other modules which import other modules which... – Roman Starkov Nov 19 '10 at 13:02
  • What makes you add the whole of Boost? Just add the pieces you need. – Kugel Nov 30 '10 at 04:46
  • 3
    +1: just this week, I reinvented the wheel (e.g. replace the bloated yet popular jquery plugin we used by something that fits our needs yet modular) and it resulted in huge performance gains. Also, there are people whose job is literally to reinvent the wheel: take Michelin for example, they do R&D to improve tires. – wildpeaks Jan 12 '11 at 18:39
  • When you look for a job you really must know how to re-invent the wheel(aka linked list?) – IAdapter Jan 18 '11 at 13:45
  • I'll use that as my title in non-employment-related circles: "Software Wheel Maker" – Christopher Mahan Jan 20 '11 at 22:08
  • 2
    @Dr. Wily, those wheels were not reinvented, they were refactored! –  Jan 21 '11 at 00:01
  • +1: This is so very true! I once wrote [a post with a list of reasons why this dogma doesn't always hold](http://stackoverflow.com/questions/18428/what-considerations-should-be-made-before-reinventing-the-wheel/1951913#1951913). – Dimitri C. Jan 28 '11 at 07:38
  • @wildpeaks which is why I go to Michelin when I need tires instead of rolling my own ;p – Pete Jun 26 '11 at 20:39
78

"Unit Test Everything."

I've heard it said often that all code should have unit tests, a point I disagree with. When you have a test for a method, any change to the output or structure of that method must be made twice (once in the code, once in the test).

As such, unit tests should be, in my opinion, proportional to the structural stability of the code. If I'm writing a layered system from the bottom up, my data access layer will have tests out the wazoo; my business logic layer will be pretty well tested, my presentation layer will have some tests, and my views will have little to no testing.
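
A hedged sketch of that "change it twice" cost, with hypothetical names of my own (JUnit 4 style, not taken from the answer): a test pinned to a presentation detail must be edited every time the format changes, even when nothing is actually broken.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreetingFormatterTest {

    // Hypothetical, deliberately unstable presentation code.
    static class GreetingFormatter {
        String format(String name) {
            return "Hello, " + name + "!";
        }
    }

    // If the greeting format changes ("Hello, Ada!" -> "Hi Ada"), this test
    // must change too, which is the double maintenance described above.
    @Test
    public void formatsGreeting() {
        assertEquals("Hello, Ada!", new GreetingFormatter().format("Ada"));
    }
}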

Fishtoaster
  • 25,909
  • 15
  • 111
  • 154
  • 7
    I suspect that "Unit Test Everything" has become a cliche, like the "premature optimization" quote. I generally agree with your proportions, and have seen many examples of developers going through monumental effort to mock an application-layer object, effort that might be better spent doing acceptance testing. – Robert Harvey Oct 26 '10 at 18:23
  • 36
    If a change to the structure of a method causes a change in your tests, you may be doing testing wrong. Unit tests should not verify implementation, only the result. – Adam Lear Oct 26 '10 at 18:27
  • +1; "unit test the risky bits" is a better maxim. –  Oct 26 '10 at 18:30
  • 1
    Some Jeff guy had some interesting things to say about Unit Testing that are in line with what you wrote http://herdingcode.com/?p=263 – Conrad Frix Oct 26 '10 at 18:56
  • 7
    @Anna Lear: I think he was talking about doing design/structural changes (refactoring). Since the design is not mature enough, when you find a better way to do it, you might have to modify a lot of tests along the way. I agree that when you are a more skilled tester you might more easily notice where a test would be a bad idea (because of this reason, and others), but if the design is not really mature, you will still most probably have some tests in the way. – n1ckp Oct 27 '10 at 00:44
  • 13
    I think this is also why the "do tests first" idea does not work. To do the tests first you have to have the design right. But having the design right requires that you try things and see how they work so you can improve on them. So you can't really do the tests before you have the design, and getting the design right requires you to code and see how it works. Unless you have some really uber-architect I don't really see how that idea will work. – n1ckp Oct 27 '10 at 01:02
  • 2
    Also, the fact that tests will need to be modified might prevent a developer from improving the design. He will see this as too much work and just not do it because of it. Strictly enforcing a test mentality, while a noble idea, might do more harm than good to your project. – n1ckp Oct 27 '10 at 01:03
  • 13
    @n1ck TDD is not actually a testing exercise as much as it is a design exercise. The idea is that you evolve your design through tests (as that quickly exposes a reasonable API for your stuff) rather than fit the tests to an existing design (which may be bad/insufficient). So no, you don't have to have the design right to do the tests first. – Adam Lear Oct 27 '10 at 04:45
  • 1
    @Anna Lear: I'm not sure of getting your idea of getting design through test. I'll have to think more about this. What do you think about the fact that it will prevent developer from improving design because of the added work though? Also, what do you mean by "which may be bad/insufficient"? The design? The tests? – n1ckp Oct 27 '10 at 05:39
  • 2
    In my view, if you fit the test to the design instead of the other way around, you have better chance of having a better design since the test will not "be in the way" when thinking about the design. Think of it as adding artificial constraint to your design. If you do the test after, you don't have that problem. What do you think? – n1ckp Oct 27 '10 at 05:42
  • 4
    I think that if you design up-front, you will end up with the design you think you need rather than the design that's more testable and more usable. Think of tests (in the TDD context) as other developers using your API. The thought process is something like "okay, I need a method to do X. Let's write a test that calls it and validate some output." Then you work out a natural design and API. It does take getting used to and experience to get proficient with. So yes, there's a period of time where you are a bit slower. It balances out in the end. – Adam Lear Oct 27 '10 at 13:13
  • 2
    @n1ck The developer would improve the design through evolving the tests. That's what TDD is. It's not "write once and forget immediately". It's more along the lines of "validate my design through using it in my tests. If I think of a better way, let me adjust the tests to make sure that it is, in fact, better." – Adam Lear Oct 27 '10 at 13:14
  • 2
    @Anna Lear: the problem is that specifications are never 100% clear. It is in the nature of the problem; otherwise you would basically end up with the whole program written in text, and text is a really poor medium for avoiding ambiguity. As such, getting the "I need a method to do X" right (the design) is almost impossible to do. How do you know which parameters you will need for that method? How do you know whether you will need to call a method Y before that method X so that your program will work? There are a bunch of little details that need to be resolved before you can write a .. – n1ckp Oct 27 '10 at 13:46
  • 1
    meaningful test. That's what I was saying in my last comment: you are adding artificial constraints that may or may not be best for the design. Think of it from a usability perspective: you don't know yet what will be the best way. By trying some things, you can see how the parts interact with each other and will be in a better position to get the best design (think: usability). When the design is very mature, then you can easily add tests, as you now know that it will not change much. Unless you are very good, getting the design right is almost impossible on the first try, in my experience. – n1ckp Oct 27 '10 at 13:49
  • 3
    In my view, TDD only seems to be good when you are herding a bunch of amateurs. They have no clue what makes a good design, so making them write tests at least gives them something to look for. When you start having a clue about design, TDD really doesn't seem like the best way: you will want to refine your design each time you need to make it interact with a new part and you don't want tests to be in your way, at least not until it is mature enough that the cost becomes smaller than the benefits (of the tests). – n1ckp Oct 27 '10 at 13:52
  • 1
    Think in terms of a cost minimization function. You want to maximize the usability of your API. To do that you need to see the whole picture of which parts interact with which, and in what way. That's for the design part. For the test part, you want to wait until the benefit of the tests is bigger than the cost they incur in your job of getting the best design. – n1ckp Oct 27 '10 at 13:59
  • @n1ck Looks like we'll have to agree to disagree on this one. I agree that TDD doesn't apply to everything, but I wouldn't discount it as much as you do either. I've not found tests to be in my way as I got better with TDD, but your experience may vary. I don't know if there's a question on this site that covers this already, but if there isn't it might be a good one to ask. – Adam Lear Oct 27 '10 at 15:27
  • 1
    @Anna Lear: Sorry if I couldn't convince you. I think I had good arguments. No problem, we can disagree if you want. Maybe you could give me your reason why you feel TDD is better for you? This could help me understand. Maybe I'm wrong after all. – n1ckp Oct 27 '10 at 16:03
  • 3
    @n1ck I've used TDD and haven't found it to be a huge impediment to design progress. Over time, it sped up development for me because I didn't have to go back and try to figure out how to fit unit tests into my design (which is sometimes untestable, if I get carried away :)). It's ultimately a matter of preference. Neither TDD nor not-TDD is the right approach 100% of the time, so I don't think this is a matter of right and wrong. Best tool for the job and all that. :) – Adam Lear Oct 27 '10 at 17:03
  • @Anna Lear: Good point. I'll give some thought on the untestability of design argument. Thanks for enlightening me. I'm not convinced but it could helps put some water into my wine. I think you may be right. If you don't take into account testability into your design constraint, well it makes sense that it is untestable. Why I'm not convinced is that I think a good design should probably be testable by the way it is a good design but I'll have to think about this. Thanks a lot. – n1ckp Oct 27 '10 at 21:10
  • 5
    Unit tests mostly make sense to me in teams. Unit tests are my way of saying, "Whatever you do, do NOT check in code that breaks these tests." A far cry from "Unit test everything!" – riwalk Nov 22 '10 at 04:39
  • 1
    @Stargazer712 In my team of one (1) I need those unit tests to make myself not break the code when changing something a few months down the road. – Per Wiklander Jan 12 '11 at 20:41
  • @n1ck: Thank you for clarifying my problems with TDD. – Loren Pechtel Feb 04 '11 at 00:31
  • Unit tests are great documentation, especially when you add string descriptions as parts of why you are asserting things. – Henrik Feb 19 '11 at 22:13
57

Always program to interfaces.

Sometimes there will only be one implementation. If we delay the process of extracting an interface until the time when we see the need for it, we will often find it isn't necessary.
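
A hedged sketch of what this looks like in practice (hypothetical names, my own illustration rather than anything from the answer): start with the concrete class, and extract the interface only when a second implementation actually appears.

// Hypothetical example: callers depend on the concrete class for now.
class EmailNotifier {
    void send(String recipient, String message) {
        // deliver an email (details omitted in this sketch)
        System.out.println("emailing " + recipient + ": " + message);
    }
}

// Only when an SmsNotifier genuinely shows up does it pay to extract:
//   interface Notifier { void send(String recipient, String message); }
//   class EmailNotifier implements Notifier { ... }
//   class SmsNotifier implements Notifier { ... }
// Most IDEs can perform this "extract interface" refactoring mechanically.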

Eric Wilson
  • 12,071
  • 9
  • 50
  • 77
  • 4
    Agreed, you program to an interface when you need an interface (i.e. a stable API to work from). – Robert Harvey Oct 26 '10 at 20:06
  • 45
    In my reading this rule is not about interfaces as language constructs. It means you shouldn't make any assumptions about the inner workings of a class when calling its methods, and should only rely on the API contracts. – Zsolt Török Oct 26 '10 at 20:52
  • @FarmBoy: regarding the extraction of the interface, there is The Rule of Three: 3 strikes and you refactor. – rwong Oct 27 '10 at 06:31
  • In Java, programming to interfaces even if only one implementation allows transparent after-the-fact proxying with new functionality. –  Oct 27 '10 at 13:33
  • Ok I understand this, but what's the "sunk cost" of creating an interface? An extra file with no code that just serves as a razor thin abstraction over my code (albeit an extremely useful abstraction). Common sense applies obviously, but most classes I'm even thinking about creating an interface for I'm probably going to be unit testing anyway -- and having simple interfaces abstractions allows me to separate and mock my unit tests easier allowing for better tests. – Watson Oct 28 '10 at 12:49
  • Easy to create, but visual clutter to navigate. For unit testing, use Mockito to mock behavior for the objects, interfaces not necessary. – Eric Wilson Oct 28 '10 at 13:43
  • 2
    Ok here's an interesting question - Im primarily a .NET developer so for me my interfaces look like IBusinessManager or IServiceContract. For me this is extremely easy to navigate (and I generally keep my interfaces in another namespace [or even another project]). When I've used Java, I've actually found this confusing (generally interface implementations I've seen are suffixed with .impl - and interfaces have no delineation). So could this be a code standards issue? Of course interfaces in java make the code look cluttered - they look exactly the same as normal classes upon first glance. – Watson Oct 28 '10 at 14:19
  • Good point. I am thinking as a Java developer here. But I expect that in either case a good mocking framework would mean that unit testing does not require interfaces. – Eric Wilson Oct 28 '10 at 16:16
  • 5
    @Watson: one cost is that every time i hit F3 ('jump to declaration') on a method call in Eclipse, i jump to the interface, not the one implementation. I then have to control-T, down-arrow, return to get to the implementation. It also blocks some automatic refactorings - you can't inline a method across an interface definition, for example. – Tom Anderson Oct 28 '10 at 17:35
  • @Tom: Well, there are a few things in my experience that you should probably pay a little extra for instead of going cheap: 1) A decent vacuum cleaner 2) Dental work 3) A decent IDE (I use Intellij which has CTL+ALT+B mapped as "Go to Implementation(s)") ;) I understand your point, I'm just not an Eclipse fan. – Watson Oct 29 '10 at 13:34
  • @Watson: Sir, i'm afraid you leave me no choice but to declare an Editor War. Roll up your sleeves and step outside! – Tom Anderson Oct 29 '10 at 13:39
  • 4
    @Tom: Well sir, I would gladly engage you in that war, Eclipse vs. Intellij -- however I have a outstanding moral code which prevents me from getting into physical confrontations with someone who has an obvious handicap. BOOM. I'm not saying Eclipse is bad, I'm saying that if the Axis powers had used it to build or design their war machines, WWII would now be known as the "Two-day kerfuffle". Seriously though, I've found that it lacks some polish that I get in off-the-shelf IDEs (Intellij/VS + ReSharper). I have found myself fighting it on more than one occasion -- which is one too many. – Watson Oct 29 '10 at 14:15
  • +1, Hehe, what's the interface of an `int`? ;) – mlvljr Nov 22 '10 at 13:29
  • @mlvljr In Python, where there are abstract base classes instead of interfaces (they're just a generalization of interfaces), you can get it through `numbers.Integral`. – aaronasterling Nov 29 '10 at 21:08
  • 1
    @Tom, use Ctrl-T instead of F3 to navigate to an implementation in Eclipse. You do not have to go to the definition first. –  Dec 09 '10 at 21:25
  • @Zsolt Couldn't agree more... – EricSchaefer Dec 28 '10 at 20:14
46

Don't use anything open-source (or non-Microsoft for you .NET developers)

If Microsoft didn't develop it -- we don't use it here. Want to use an ORM? EF. Want to use IoC? Unity. Want to log? The Enterprise Library logging application block. So many better libraries exist -- yet I'm always stuck ordering from the dollar menu of the development world. I swear every time I hear "Microsoft Best Practices" I think "McDonald's Nutritional Guidelines". Sure, you'll probably live if you follow them, but you'll also be malnourished and overweight.

  • Note this might not be your best practice, but it's a common one followed almost everywhere I've worked.
Watson
  • 2,262
  • 14
  • 16
  • 13
    Sounds horrible...=( I'm probably too much on the other side, though, I avoid M$ as much as possible. – Lizzan Oct 28 '10 at 13:23
  • That shouldn't be that way. A library should be picked for its value, not just considering who made it. For instance, I love EF but I had a bad experience with Enterprise Library, and found better tools for validation and logging, such as FluentValidation, log4net and Elmah. – Matteo Mosca Oct 28 '10 at 13:28
  • 4
    You won't get fired for buying IBM^wMicrosoft – Christopher Mahan Oct 29 '10 at 04:30
  • @Christopher: Yes but I will be pulling my hair out attempting to shoehorn a sub-par solution. And it's not just Microsoft -- I've heard of similar stories coming from my fellow Websphere developers as well. – Watson Oct 29 '10 at 13:39
  • @Watson: Never said it was a good idea... – Christopher Mahan Nov 01 '10 at 05:49
  • What's worse is when you find a FOSS library, and because of legal concerns and time, management tells you to write a replacement. "Oh, that library that's been kicking around and TESTED for 10 years? No no no, you write 4k lines to do the same functionality, less paperwork that way.." – Mark Underwood Jan 17 '11 at 08:59
  • 17
    There is also the mirror-image version, ie Never use anything Microsoft, or never use anything you have to pay for. – Richard Gadsden Jan 17 '11 at 12:53
  • 5
    I'm lucky enough to work at an organization where this is not a widely held dogma, but in the places where we have adopted commercial solutions, there sure is a great deal of pain. The problem comes in when some part of the commercial solution doesn't quite work. When it's open source, you can look at the source (the ultimate documentation) and find out what's going wrong. With closed source, you have to *pay* for the privilege of accessing the knowledge of tech support, who know even less about the product than you do. And that's the only available 'fix'. – SingleNegationElimination Jan 29 '11 at 06:15
40

Object orientation

There is an assumption that just because code is "object-oriented", it's magically good. So people go on squeezing functionality into classes and methods, just to be object-oriented.

Bobby Jack
  • 101
  • 3
LennyProgrammers
  • 5,649
  • 24
  • 37
  • 7
    I can't imagine building a software system of any significant size without taking advantage of the organization that Object Orientation provides. – Robert Harvey Oct 28 '10 at 20:55
  • 18
    Robert. Unix isn't object oriented, and it certainly qualifies as a software system of significant size. It also seems to be quite popular (think Mac OSX, iPhone, Android phones, etc) – Christopher Mahan Oct 29 '10 at 04:29
  • 7
    What I meant is that I think we should use the most appropriate methodology. I've seen people use methods and classes where it is cumbersome and does not make much sense, just because "it's object-oriented". That's cargo cult. – LennyProgrammers Oct 29 '10 at 07:49
  • 1
    embrace Object Orientation when and only when it's useful. period. (PS: In my opinion it's nearly always useful, but opinions tend to differ :-)) – Sean Patrick Floyd Nov 22 '10 at 11:21
  • 8
    There is no silver bullet. Functional programming (Haskell) is quite successful without being object oriented. At the end of the day, you've got multiple tools and it is your job to pick the best assortment for the task at hand. – Matthieu M. Nov 22 '10 at 20:46
  • 9
    The funny thing is, that besides using classes, polymorphism and the likes, most of object oriented code is in fact procedural code. – Oliver Weiler Nov 30 '10 at 00:05
  • 1
    +1 I registered here just so I could up-vote this answer :) – Bobby Jack Jan 17 '11 at 10:43
  • Robert, the Internet isn't Object Oriented. – Christopher Mahan Jan 20 '11 at 22:18
  • I don't think that just because it s"object-oriented", it's magically good. I do however think its magically delicious. – Conrad Frix Jan 20 '11 at 22:28
  • 1
    @Lenny222 - there are some people now who don't know how to write anything thats NOT OO. – quickly_now Jan 21 '11 at 01:12
  • @quickly_now I don't really see how that's worse than devs that only know how to write procedural code. Considering that most of OO is just procedural code with more organizational structures, wouldn't a dev who only knows OO be more valuable than a dev who only knows how to write procedural code? – Evan Plaice Feb 07 '11 at 11:13
  • @Evan Plaice: Both extremes worry me. People who can't understand something 100 lines long worry me most. – quickly_now Feb 07 '11 at 12:09
  • @quickly_now Are you talking about 100 lines of procedural or OOP code. Because, if you're talking about OOP, 50 of those lines are probably dedicated to class declaration, added opening/closing braces, property getters/setters, etc. Whereas, in procedural... well, I don't know because I can only read OOP :) – Evan Plaice Feb 07 '11 at 12:24
35

All code should be commented.

No, it shouldn't be. Sometimes code is obvious; setters, for example, should not be commented unless they do something special. Also, why should I comment this:

/** hey you, if didn't get, it's logger. */
private static Logger logger = LoggerFactory.getLogger(MyClass.class);
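
By contrast, and echoing the "comments should say why, not how" point made in the comments below, here is a hedged sketch of my own (the gateway constraint is hypothetical) of a comment that earns its place by recording something the code cannot express:

class RetryPolicy {
    int retryDelayMillis(int attempt) {
        // Cap at 30s: the upstream gateway (hypothetical constraint) drops
        // connections idle longer than 35s, so backing off further only
        // trades one failure mode for another.
        int shift = Math.max(0, Math.min(attempt, 20));
        long delay = 250L * (1L << shift);
        return (int) Math.min(30_000L, delay);
    }
}
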
Vladimir Ivanov
  • 665
  • 8
  • 13
  • 13
    All code should be _understandable_. Comments are a major tool in that, but far from the only one. – Trevel Jan 17 '11 at 22:27
  • Absolutely, code should be understandable. But there is no reason to write a comment which won't add anything to, for example, the method name. If you write `/** sets rank. */ void setRank(int rank) { this.rank = rank; }` I consider the comment a stupid one. Why was it written? – Vladimir Ivanov Jan 17 '11 at 22:34
  • 2
    Generated documentation. That's what the `/** */` format is for instead of the `/* */` format comment. Or for .NET it would be `///` – Berin Loritsch Jan 18 '11 at 16:19
  • 10
    Using `}//end if`, `}//end for`, `}//end while` is the best example of wasteful commenting I've ever encountered. I've seen this many times where the opening brace is no more than 2 lines above. IMHO if you **need** these comments then your code **needs** re-factoring... or you need to pony up $20 and buy an IDE/Text editor that highlights matching braces. – scunliffe Jan 21 '11 at 11:54
  • +1 Before you spend time writing comments to clarify your code try this novel approach instead: just clarify your code! If you're making a library and a full set of documentation to go with it, I could understand using nDoc ready comments. Otherwise, stop wasting your time. – Michael Brown Jan 22 '11 at 00:03
  • @Berin yes but generated documentation should also be on this list ;) – jk. Jan 25 '11 at 10:52
  • @jk01, I disagree. Writing API documentation needs to be done in one place, and the one place that makes the most sense is next to the code being documented. API docs are most critical on the publicly exposed API, but I can stomach slacking off on internal stuff. – Berin Loritsch Jan 25 '11 at 13:43
  • 7
    Code says "how". Comments need to say "why". –  Jan 27 '11 at 18:03
32

Methodologies, particularly scrum. I can't keep a straight face when I hear grownups use the phrase "Scrum Master". I get so tired of hearing developers protest that some aspect of Methodology X isn't working for their company, only to be told by Guru So-and-So that the reason it's not working is that they are not, in fact, true practitioners of Methodology X. "Scrum harder, you must, my Padawan learner!"

There are nuggets of wisdom in agile methodologies---lots of them---but they're often couched in so much manure that I can't fight my gag reflex. Take this bit from Wikipedia's scrum page:

A number of roles are defined in Scrum. All roles fall into two distinct groups—pigs and chickens—based on the nature of their involvement in the development process.

Really? Pigs and chickens, you say? Fascinating! Can't wait to pitch this one to my boss...

evadeflow
  • 1,232
  • 1
  • 10
  • 16
  • Interesting. I agree to an extent. With that last part though: call them what you want, they're mnemonics, that's all. – Steven Evers Nov 29 '10 at 21:38
  • 12
    +1...you're right, its hard to take that seriously. *I AM THE SCRUMMASTER* – GrandmasterB Nov 29 '10 at 21:50
  • 2
    And the parables. It reminds me of church sermons, or the kinds of anecdotes that self-help gurus (and comedians) are famous for: "Take my friend Steve. Steve was constantly arguing with his wife Sheryl. Those two would go at it for hours, to the point where their marriage was in real jeopardy. Then, one day..." These kinds of didactic yarns don't bother me in other spheres, but I hate to see them proliferate in the engineering sciences. – evadeflow Nov 29 '10 at 22:02
  • 2
    What about the Scrum Ninjas? – Berin Loritsch Jan 12 '11 at 18:20
  • Hell yeah @Berin! I could definitely dig a methodology that involves ninja roles! – Filip Dupanović Jan 18 '11 at 02:06
  • 1
    I don't agree with the "Pig and Chicken" comparison...it flies directly in the face of the Agile Manifesto. Namely "Customer collaboration over contract negotiation". Customers are as vested (if not moreso) in the success of the project as the project team. Calling some roles pigs and other roles chickens sets up an "us versus them" mentality that IMHO is the biggest roadblock to successful projects. – Michael Brown Jan 21 '11 at 23:58
  • I find the scrum thinking can help consultants working with other companies' consultants at some third party customer. It's very much about communication. **Communication is good, a "best practise", if I ever saw one.** (Scrum on the other hand may or may not work well in your case) – Henrik Feb 19 '11 at 22:28
25

Object Relational Mapping... http://en.wikipedia.org/wiki/Object-relational_mapping

I don't want to ever be abstracted away from my data, nor do I want to lose that precise control and optimization. My experience with these systems has been extremely poor... The queries generated by these abstraction layers are even worse than what I've seen from off-shoring.
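
For what it's worth, a hedged sketch (plain JDBC, hypothetical table and column names; not from the original answer) of the kind of hand-written, parameterized query this answer is defending. The SQL that reaches the database is exactly the SQL written here, with no generated joins or extra columns:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class OrderQueries {
    // Hypothetical query: the ids of a customer's orders from the last 30 days.
    public List<Long> recentOrderIds(Connection conn, long customerId) throws SQLException {
        String sql = "SELECT id FROM orders "
                   + "WHERE customer_id = ? AND created_at > CURRENT_DATE - 30 "
                   + "ORDER BY created_at DESC";
        List<Long> ids = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong("id"));
                }
            }
        }
        return ids;
    }
}

Note the date arithmetic is dialect-dependent (the form above works in PostgreSQL, for instance); the trade-off against an ORM is precisely this kind of boilerplate versus full control over the statement.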

Fosco
  • 1,671
  • 14
  • 19
  • 19
    Premature optimization is the root of all evil. Slow code is, in real life, only incredibly rarely a problem when compared to unmaintainable code. Use an ORM, then cut through the abstraction only where you need it. – Fishtoaster Oct 26 '10 at 18:12
  • 28
    ORMs are an 80-20 tool. They are intended to handle the 80% CRUD for which writing all that endless plumbing code becomes so tiresome after a while. The other 20% can be done in more "conventional" ways like using stored procedures and writing ordinary SQL queries. – Robert Harvey Oct 26 '10 at 18:13
  • 18
    @Fishtoaster: Don't you mean, *"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."?* – Robert Harvey Oct 26 '10 at 18:15
  • 5
    @Robert Harvey: There's a reason I didn't use a direct quote. I think that most programmers focus on efficiency way too much - it's a problem few of them really need to solve. Admittedly, there are certain domains where it's more important than others, but maintainability and extensibility are a problem everywhere. Another modified quote: "Make it work, make it maintainable, make it readable, make it extensible, make it testable, and *then*, if you have time and it turns out you need it, make it fast." – Fishtoaster Oct 26 '10 at 18:23
  • 1
    To me, implementing an ORM is slower, right off the bat. I am sure there are projects and scenarios where even I would benefit from it, though, just thought it was a good answer for this question. – Fosco Oct 26 '10 at 18:26
  • 3
    @Fosco: Interesting. Do you come from a DBA-centric background? I find the opposite for myself. ie. A DB is just another storage mechanism for me that _should_ be abstracted away... until the need arises that I have to get at it. – Steven Evers Oct 26 '10 at 19:34
  • @SnOrfus I wouldn't really consider myself a DBA, but I definitely can do those functions and am very strong with SQL. I don't look at it as 'just another storage mechanism,' that's for sure. :) – Fosco Oct 26 '10 at 19:36
  • 4
    Performance with an ORM can be an issue when you are new to using it. Just as half the CS graduates don't understand that running SELECT * FROM Customer (1,000,000 rows returned) is a bad thing, most developers probably will have performance issues with an ORM for the first 12 months before they learn how to use it properly. A good ORM will give you a high level of control over the SQL produced and allow injecting stored procs where required. – Craig Oct 27 '10 at 02:07
  • 3
    @fishtoaster in my experience slow code is a far greater problem than directly written SQL which is not by any stretch of the imagination "unmaintainable" – HLGEM Oct 28 '10 at 14:41
  • 5
    -1. A good ORM is very quick and easy to work with; direct database access is not - it may not be difficult, but there's piles of boilerplate. An ORM may not generate great SQL, but it's good enough most of the time. You can replace or bypass it when you need to - but *when* you need to. – Tom Anderson Oct 28 '10 at 17:37
  • 12
    @Craig: How did you not recognize the irony in your statement? Needing a year to learn how to get good performance out of an ORM is an _excellent_ argument _against_ ORMs, as is the need to "control" the SQL produced and inject stored procedures. If you have the knowledge for that, you have the knowledge to bypass the ORM entirely. – Nicholas Knight Nov 22 '10 at 11:48
  • @Nicholas +1 I couldn't have said that any better... – Fosco Nov 22 '10 at 15:01
  • 1
    Ouch! You must have used the wrong ORMs. I'm particularly fond of the ORM+Active Record+Naked objects combo. – Filip Dupanović Jan 18 '11 at 02:03
  • 3
    @Nicholas, learning to use ANY tool well takes time. But it's time well invested. If that weren't the case, why take the time it takes to become "very strong" with SQL, when you could use flat files? Because SQL provides an advantage over flatfile structures. O/RMs provide an advantage over hand coding SQL. If you think about it, hand-coded SQL is duplication in its worst incarnation. I have to write the stored procedure and then I have to write the code to call the stored proc. It's funny if you talked to me about 3 years ago I would have been in your camp. – Michael Brown Jan 21 '11 at 23:48
  • @Nicholas, the most common problems in ORMs are the Cartesian product (too many joins) and the N+1 selects problem (select many entities in one go (1), then iterate with proxied properties (N)); but you would have these perf problems with SQL as well, without the option of easily choosing where to apply ahead-of-time eager fetching (i.e. tackling N+1) nor the option of tackling the Cartesian product problem where it affects you. A well-made ORM doesn't really add more complexity to your querying but decreases the time taken to develop. It's just like any other higher-than-previously construct. – Henrik Feb 19 '11 at 22:22
  • Another way of seeing it, is that you'd be out-sourcing the complexity of working with data objects in an object oriented system to another project. All the above said, however, for new systems I'd use event-sourcing for write and fluent-nhibernate/sql for reads. Reads go up where it matters; close to the view, so we don't have to keep shovelling data around through layer after layer. – Henrik Feb 19 '11 at 22:24
22

MVC - I often find that shoehorning many web design problems into the MVC approach is more about making a framework (Rails etc.) happy than about simplicity or structure. MVC is a favorite of "architecture astronauts" who seem to value excessive scaffolding over simplicity. YMMV.

class-based OO - in my opinion it encourages complex structures of mutable state. The only compelling cases I have found for class-based OO over the years are the corny "shape->rectangle->square" examples that form chapter 1 of any OO book.

Brad Clawsie
  • 740
  • 3
  • 7
  • 4
    I've done web development in ASP.NET and ASP.NET MVC and, even though MVC does seem scaffoldy, I much prefer it over ASP.NET, for many reasons: simplicity, maintainability, and exceedingly fine control over the markup. Everything has its place and, while it all seems a bit repetitive, it's a joy to maintain. It's completely customizable, so if you don't like a behavior out of the box, you can change it. – Robert Harvey Oct 29 '10 at 06:06
  • 1
    As far as OO is concerned, there are good and bad ways to do it. Inheritance is overrated, and used more sparingly in the real world than most books would have you believe; there is currently a trend towards a more functional, immutable development style, even in the OO world. – Robert Harvey Oct 29 '10 at 06:08
  • 1
    +1 for mentioning MVC. While the concept of MVC (separating data layer logic, presentation logic, and background logic) is a good idea, physically separating them into a complex folder hierarchy with a mishmash of files containing code snippets is stupid. I blame this whole phenomenon on PHP's lack of namespace support and newbie developers glorifying decades-old techniques as 'the newest thing'. It's simple: create a namespace for the database accessors, GUI, and background logic (and sub-namespaces where needed). – Evan Plaice Nov 19 '10 at 18:50
  • 3
    As for OO, you won't really see its benefits until your project grows to a size where managing complexity becomes important. Follow the single responsibility principle wherever possible, and if your code is publicly exposed (e.g. as a .dll), hide the internal implementation of classes/methods/properties wherever possible to make the code more secure and to simplify the API. – Evan Plaice Nov 19 '10 at 18:55
  • @Evan - I don't think he was criticizing OO as much as *class-based* OO. Presumably he likes prototype or multimethod OO better. – Jason Baker Nov 29 '10 at 19:48
  • 1
    I personally find the shape->rectangle->square example to be one of the most elegant arguments *against* oop. for instance, to `Square(10).Draw()` it is sufficient to `Rectangle(10, 10).Draw()`. so I guess that means square is a subclass of rectangle. But `mySquare.setWidthHeight(5,10)` is nonsense (IE, it fails the Liskov substitution principle), a square cannot have different height and width, although a rectangle can, so that implies that rectangle is a subclass of square. This is known in other contexts as the "Circle, Ellipse problem" – SingleNegationElimination Jan 19 '11 at 02:32
  • @TokenMacGuy I agree completely. It's a perfect example of where subclass method/constructor hiding should be a feature of OOP; where hidden methods/constructors automatically throw a NotSupportedException. I wish the people teaching OOP better understood its uses. The hardest part of learning OOP for me was finding good practical examples. – Evan Plaice Jan 28 '11 at 05:00
  • @Evan Plaice: But if you raise an exception there, what you are saying is that a square is not, in fact, a rectangle at all, since it doesn't support all of the operations a rectangle supports. – SingleNegationElimination Jan 28 '11 at 13:54
  • @TokenMacGuy Well, it isn't really. If you wanted to be more precise, Rectangle should inherit from Square, since Rectangle contains all of the properties of Square, with some specific additions. It may seem counter-intuitive compared to traditional math, but think about it. Subclasses should contain the same characteristics as their parents, plus some characteristics that their parent doesn't have. – Evan Plaice Jan 28 '11 at 20:31
  • @TokenMacGuy (continued) So the order would be Shape (has sides).Square (has 4 sides).Rectangle (opposite sides equal in length).Parallelogram (opposite angles are equal), etc... Every sub inherits the properties of its base all the way up the chain. In the end, it can be done either way; Rectangle.Square is more hackish but also more intuitive to many people. One of the drawbacks to OOP is that it's not always easy to pick what the structure should be. It's not always a bad idea to break the rules, as long as you know the limitations of doing so. – Evan Plaice Jan 28 '11 at 20:44
  • @TokenMacGuy (continued) Personally, I wouldn't inherit from Rectangle at all because there isn't much to be gained from the additional abstraction. E.g. Shape.Rectangle, Shape.Square, Shape.Parallelogram, etc... This is a perfect example where the classic examples do a poor job of teaching OOP because they inherit for inheritance's sake. Inheritance should only be used where it may benefit/enhance/simplify the code. IRL, inheritance is used sparingly. – Evan Plaice Jan 28 '11 at 21:36
  • 1
    @Evan: I just say one thing: "Acts As" and "AOP". Single responsibility principle there, and it's both class-based and OOP, too! – Henrik Feb 19 '11 at 22:36
  • +1 for MVC. I wrote my own little microframework for ASP.Net (arguably only for use in my personal website) because making a CommentView, CommentModel, and CommentRoute seemed pretty stupid. So now I have a CommentHandler and CommentView. And if I choose, I can break it out further, but the complexity is low and I felt like I was trying to shoehorn on an MVC point of view when trying to make the same thing in Rails – Earlz Oct 17 '11 at 22:48
22

YAGNI

(You ain't gonna need it)

This approach has cost me hours and hours when I had to implement features on an existing codebase, where careful planning would have included these features beforehand.

Ideas of mine have often been rejected because of YAGNI, and most of the time someone had to pay for that decision later.

(Of course one could argue that a well-designed codebase would also allow features to be added later on, but reality is different)

  • 15
    I agree with YAGNI, but I see what you're getting at. The point of YAGNI is to deal with people who want to plan *everything* up front down to the slightest detail. Nine times out of ten though, it's used as an excuse to under engineer code or completely skip planning. – Jason Baker Nov 29 '10 at 19:55
  • 12
    P.Floyd, @Jason Baker: +1 Absolutely right. The old saying applies here "months in the lab can save hours in the library" – Steven Evers Nov 29 '10 at 21:40
  • A specification often will (and mostly should) leave most of the implementation, and some of the interface, open. Everything not in the spec directly, but required in order to implement the spec, whether an implementation decision, a user-visible interface, or anything else, is also an indirect requirement of the spec. If a feature is not in the spec, and not implied by the spec, then you don't need it. How can this ever be confusing? – SingleNegationElimination Jan 28 '11 at 14:01
  • 1
    @TokenMacGuy the crucial aspect is the *implied by the spec* part. That's where opinions differ a lot. – Sean Patrick Floyd Jan 28 '11 at 14:04
21

Writing function names as if they were English sentences:

Draw_Foo()
Write_Foo()
Create_Foo()

etc. This might look great, but it's a pain when you're learning an API. How much easier is it to search an index for "Everything that starts with Foo"?

Foo_Draw()
Foo_Write()
Foo_Create()

etc.
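
As several commenters note below, one way to get the "everything that starts with Foo" index for free is to hang the operations off the type itself. A hedged sketch of my own (hypothetical names):

// Hypothetical example: methods grouped on the type, so an IDE completion
// list after "foo." shows every operation on Foo in one place.
class Foo {
    static Foo create() { return new Foo(); }

    void draw()  { /* render this Foo (stub) */ }
    void write() { /* persist this Foo (stub) */ }
}

// Usage: Foo foo = Foo.create(); foo.draw(); foo.write();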

Colen
  • 536
  • 1
  • 3
  • 9
  • 2
    Probably about as easy as typing Foo in TM's function list. – Josh K Oct 26 '10 at 17:50
  • I would think that documentation layout wouldn't be a very good thing to base naming conventions on. That sounds like more of an issue with how the docs are structured than what the functions are named. – Fishtoaster Oct 26 '10 at 17:56
  • 68
    Sounds like you'd actually want it to be `Foo.Draw()`, `Foo.Write()` and `Foo.Create()`, so that you could do `Foo.methods.sort` or `Foo.methods.grep([what I seek]).sort` – Inaimathi Oct 26 '10 at 17:59
  • 7
    Starting with 'Get' is another example. – JeffO Oct 26 '10 at 18:25
  • 1
    I come from the objective-c world, and greatly miss verbose method names and infix notation when I do java (my other life). Since code completion started working I haven't found the extra typing a problem, either. –  Oct 26 '10 at 18:28
  • This is probably out of date if you use a newer IDE. Intellisense will show any method *containing* the characters you've typed, and even similar ones. – Scott Whitlock Oct 29 '10 at 00:53
  • 2
    @Scott Whitlock: Not _that_ out of date for some .NET developers, iirc, VS 2008 didn't do that. 2010 does though, and it's fantastic. – Steven Evers Nov 18 '10 at 21:44
  • 1
    Learn to use regular expressions when searching solves that problem. – Martin York Jan 18 '11 at 05:45
  • What if you want to draw something, and have forgotten the object: `Draw_Foo()`, `Draw_Bar()` and `Draw_` ahem, what was it again? – René Nyffenegger Jan 21 '11 at 00:22
  • @SnOrfus if you use VS without resharper, you deserve the RSI you inflict on yourself from being cheap. And Resharper has supported this for quite a while. – Michael Brown Jan 21 '11 at 23:51
  • Makes me wish I owned a copy of VS '10. Damn you Microsoft for your terrible software distribution model. – Evan Plaice Feb 06 '11 at 04:06
  • 1
    @Inaimathi: 57 upvotes for your entirely correct reply, and only 5 downvotes on this answer. People hesitate to downvote too much ... – Steven Jeuris Jun 26 '11 at 13:06
20

For SQL

  1. Don't use triggers
  2. Always hide tables behind views

In order:

  1. They are a feature that has its place. Do you have multiple update paths to a table, or require 100% auditing?

  2. Just why? I would if I were refactoring to maintain a contract, but not when I've read that folks change the view to match any table changes

Edit:

Number 3: Avoiding * with EXISTS. Try SELECT 1/0 inside the EXISTS: it works, because the column list isn't evaluated, as per the SQL standard (page 191).

gbn
  • 2,467
  • 1
  • 17
  • 14
  • 3
    #2 is a best practice? – Hogan Oct 27 '10 at 03:35
  • 2
    @Hogan: Yes, it's documented in many places http://vyaskn.tripod.com/sql_server_security_best_practices.htm and http://flylib.com/books/en/3.404.1.34/1/ And less so here: http://www.simple-talk.com/community/blogs/tony_davis/archive/2009/08/06/74264.aspx – gbn Oct 27 '10 at 03:57
  • I looked at these, the first one says "Let your users query views instead of giving them access to the underlying base tables." That is, for security. Makes sense. Of course it makes more sense to just use a stored procedure if refactoring is expected. #2 as stated is (IMHO) not the best practice, in this article it was a rule to avoid the typical anti-pattern; Dynamic sql directly accessing the tables. I believe the best practice should be stated as "Don't use dynamic SQL to directly access tables, use views or SPs instead", do you still have a problem with it? – Hogan Oct 27 '10 at 10:57
  • 1
    @Hogan: A view that simply mirrors a base table adds no security over using the base table directly. If you join to a securityuser table, or mask some columns then fair enough. But SELECT every column from table or view: no difference. Personally, I do use stored procs anyway. – gbn Oct 27 '10 at 11:20
  • @gbn : Actually it can add security in SQLServer (which was what your link was about). SQLServer lets you assign rights by view. So a view does not just mirror if it has separate security settings (in SQLServer). – Hogan Oct 27 '10 at 11:40
  • 3
    @Hogan: I know something about SQL Server :-) http://stackoverflow.com/users/27535/gbn What I mean is GRANT SELECT on table is no different to GRANT SELECT ON VIEW if the view is SELECT * FROM TABLE – gbn Oct 27 '10 at 11:42
  • 1
    @gbn : I agree. There is no difference there. I think we might be saying the same thing. I guess my original comment ("#2 is a best practice?") was based more on my personal experience that views (like triggers) are more often misused than correctly used. Thus such a best practice would only lead to abuse, not enhancement. If it is considered a best practice, you are 100% right: it is a bad one. – Hogan Oct 27 '10 at 22:53
  • In PostgreSQL, when you say `CREATE VIEW foo AS SELECT * FROM bar;`, then add a column to `bar`, `foo` does not automatically gain the column as well; you have to recreate the view. There's one good reason not to "always hide tables behind views". – Joey Adams Dec 10 '10 at 08:05
  • 1
    @Joey Adams: [SQL Server too](http://stackoverflow.com/questions/4402509/how-to-refresh-a-views-schema-after-modifying-table/4402848#4402848) – gbn Dec 11 '10 at 14:35
20

Design Patterns, mostly. They are overused and under-utilised.

Adam Paynter
  • 349
  • 1
  • 5
  • 13
Geek
  • 3,951
  • 1
  • 24
  • 29
14

Now that you mentioned Clean Code: while it contains some good ideas, I think its obsession with refactoring all methods into submethods, and those into subsubmethods, and so on, is taken way too far. Instead of a couple of ten-line methods you're supposed to prefer twenty (supposedly well-named) one-liners. Obviously someone thinks that's clean, but to me it seems way worse than the original version.

Also, replacing simple elementary things such as

0 == memberArray.length

within a class with a call to the class's own method such as

isEmpty()

is a questionable "improvement" IMHO. Addition: The difference is that the first check does exactly what it says: checks whether the array length is 0. Ok, isEmpty() might check array length too. But it could also be implemented like this:

return null != memberArray ? 0 == memberArray.length : true;

That is, it includes an implicit null check! This may be fine behavior for a public method - if the array is null, then something is certainly empty - but when we're talking about class internals, this is not so fine. While encapsulation to the outside is a must, class internals must know exactly what's going on within the class. You can't encapsulate the class from itself. Explicit is better than implicit.
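
To make the distinction concrete, here is a rough Java sketch (the class and field names are hypothetical, not taken from the book):

public class MemberList {
    private String[] memberArray;

    // Public API: an implicit null check is arguably fine here, because to a
    // caller "no array yet" and "empty array" both just mean "empty".
    public boolean isEmpty() {
        return memberArray == null || memberArray.length == 0;
    }

    // Class internals: the explicit form does exactly what it says, and will
    // fail fast (rather than silently treating null as empty) if memberArray
    // was never initialised.
    void initIfEmpty(String member) {
        if (memberArray.length == 0) {
            memberArray = new String[] { member };
        }
    }
}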


This is not to say that breaking down long methods or involved logical comparisons is no good; of course it is, but to what degree to do it -- where the sweet spot is -- is obviously a question of taste. Breaking down a long method results in more methods, and that does not come for free. You have to jump around the source code in order to see what's really going on, when you could see it at a glance if all that stuff were in a single method.

I would even go as far as to say that in some cases a 1-line method is too short to deserve being a method.

Joonas Pulakka
  • 23,534
  • 9
  • 64
  • 93
  • 6
    I rarely see this as an issue. Mostly that's because I usually see too much in a single method, not too little. However, according to some studies very low complexity in methods also has a slightly higher bug rate than moderately low complexity. http://www.enerjy.com/blog/?p=198 – MIA Oct 27 '10 at 15:30
  • Yeah, this is definitely an issue in Clean Code only. In real life methods tend to be too long, as you said. But it's interesting to see that curve! Indeed, things should be made as simple as possible, but no simpler. – Joonas Pulakka Oct 27 '10 at 16:01
  • 1
    I find your second example eminently more readable, and that form (or something similar, like a Length property on the class itself) is a must if you're making it public. – Robert Harvey Oct 27 '10 at 22:13
  • @Robert Harvey: The second example is a fine public method, but calling it from within the class itself is questionable, because you don't know exactly what it does before you look how it's implemented. It could check for null, for example. See my addition above. – Joonas Pulakka Oct 28 '10 at 05:58
  • @Joonas: Fair enough. – Robert Harvey Oct 28 '10 at 21:08
  • +1: This is actually one of the occurrences in Clean Code that spawned this question. – Steven Evers Oct 28 '10 at 21:23
  • The problem with Clean Code is that Uncle Bob probably had years to refactor the code until it became (nearly) perfect. In real life, you never have that time. Still, I think the book is great :-). – Oliver Weiler Nov 30 '10 at 00:00
  • +1 for the saying that you can't encapsulate a class from itself. – riwalk Feb 11 '11 at 02:46
14

The Single Responsibility Principle

("every class should have only a single responsibility; in other words, every class should have one, and only one, reason to change")

I disagree. I think a method should have only one reason to change, and all methods in a class should have a logical relationship to one another, but the class itself might actually do several (related) things.

In my experience, this principle too often gets applied over-zealously, and you end up with many tiny one-method classes. Both the agile shops I've worked at have done this.

Imagine if the creators of the .Net API had had this sort of mentality: rather than List.Sort(), List.Reverse(), List.Find() etc., we'd have ListSorter, ListReverser, and ListSearcher classes!
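
A rough Java sketch of the contrast being drawn (the names are hypothetical, echoing the .Net example above); whether the second form is healthy separation or over-applied SRP is exactly the point of contention:

import java.util.Collections;
import java.util.List;

// Cohesive: the collection owns its own operations.
class Playlist {
    private final List<String> tracks;

    Playlist(List<String> tracks) { this.tracks = tracks; }

    void sort()            { Collections.sort(tracks); }
    void reverse()         { Collections.reverse(tracks); }
    int find(String track) { return tracks.indexOf(track); }
}

// "One responsibility per class", taken to the extreme the answer describes.
class PlaylistSorter   { void sort(List<String> tracks)    { Collections.sort(tracks); } }
class PlaylistReverser { void reverse(List<String> tracks) { Collections.reverse(tracks); } }
class PlaylistSearcher { int find(List<String> tracks, String track) { return tracks.indexOf(track); } }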

Rather than argue anymore against the SRP (which itself isn't terrible in theory), I'll share a few of my long-winded anecdotal experiences:


At one place I worked, I wrote a very simple max flow solver which consisted of five classes: a node, a graph, a graph-creator, a graph-solver, and a class to use the graph-creator/solvers to solve a real-world problem. None were particularly complex or long (the solver was by far the longest at ~150 lines). However, it was decided that the classes had too many "responsibilities," so my co-workers set about refactoring the code. When they were done, my 5 classes had been expanded to 25 classes, whose total lines-of-code were more than triple what they were originally. The flow of the code was no longer obvious, nor was the purpose of the new unit-tests; I now had a hard time figuring out what my own code did.


At the same place, almost every class had only a single method (its only "responsibility"). Following the flow within the program was nearly impossible, and most unit-tests consisted of testing that this class called code from another class, both of whose purposes were equally a mystery to me. There were literally hundreds of classes where there should have been (IMO) only dozens. Each class did only one "thing", but even with naming conventions like "AdminUserCreationAttemptorFactory", it was difficult to tell the relationship between classes.


At another place (which also had the classes-should-have-only-one-method mentality), we were trying to optimize a method which took up 95% of the time during a certain operation. After (rather stupidly) optimizing it a bit, I turned my attention towards why it was being called a bajillion times. It was being called in a loop in a class... whose method was being called in a loop in another class.. which was also being called in a loop..

All told, there were five levels of loops spread out over 13 classes (seriously). What any one class was actually doing was impossible to determine just by looking at it - you had to sketch a mental graph of what methods it called, and what methods those methods called, and so on. If it had all been lumped into one method, it would have only been about 70 lines long, with our problem-method nested inside five immediately-obvious levels of loops.

My request to refactor those 13 classes into one class was denied.

  • 6
    Sounds like someone got "Pattern Fever" or in this case "Principle Fever" at that job. The list class doesn't violate SRP. All of its functions serve one purpose, manipulating a collection of objects. Having only one function in a class sounds like overkill to me. The principle behind SRP is that a unit of code (be it method, class, or library) should have a single responsibility that can be stated succinctly. – Michael Brown Jan 22 '11 at 00:37
  • 3
  • I've started to see this kind of madness from people who find it impossible to write pure plain functional code. Too much educating that every problem in the world can be solved out of a book of patterns. Not enough thought about pragmatism. Like you, I've seen some class-based OO code that's so horrible that following it is completely impossible. And it's huge and bloated. – quickly_now Feb 06 '11 at 09:51
  • 3
    2nd comment here. Lots of "principles" are over-applied. There are many things that are good ideas, where doing the exact opposite is sometimes appropriate. GOOD programmers know when to break the rules. Because the rules are not "rules", they are statements of "good practice most of the time except when it's a really dumb idea." – quickly_now Feb 06 '11 at 09:53
  • "Imagine if the creators of the .Net API had had this sort of mentality: rather than List.Sort(), List.Reverse(), List.Find() etc., we'd have ListSorter, ListReverser, and ListSearcher classes!". This is exactly what is done in C++, and it is *wonderful*. The algorithms are separated from the data structures, so if I write my own container, all of the algorithms that work with the standard library _just work_ with my new container. It must be horrible in .Net land, writing a new sorting function for every new container that you want to sort. – Mankarse Oct 26 '11 at 12:14
13

"Be liberal with comments"

Comments are definitely a good thing, but too many are just as bad as, if not worse than, not enough. Why? Well, people tend to tune comments out if they see too many unnecessary ones. I'm not saying that you can have perfectly self-documenting code, but code that explains itself is preferable to code that needs comments for explanation.
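
A small hypothetical Java fragment of the kind of comment that gets tuned out versus one that earns its keep:

public class CommentStyles {
    private int retryCount;

    void noiseComment() {
        // increment the retry counter by one  <-- restates the code; readers learn to skip these
        retryCount++;
    }

    void whyComment() {
        // The upstream service tends to fail on the first call after an idle
        // period, so one silent retry here is deliberate, not defensive paranoia.
        retryCount++;
    }
}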

Jason Baker
  • 9,625
  • 8
  • 44
  • 67
  • 1
    self-documenting code is definitely nice. Although, I like having comments along-side something like simple calculations (to express what the return-type or returned-value is). However, if your comments need more words than the code itself, it's probably time to rewrite the code. – sova Nov 29 '10 at 20:11
  • 6
    I have to agree with the way that sova put this one. Clean code is preferable to comments. – riwalk Nov 29 '10 at 23:47
  • 4
    You still need the "why"'s in there! –  Jan 21 '11 at 00:04
  • I'd rather have comments explaining why. It means I have to THINK less in reverse engineering the code intent when I come to look at it. – quickly_now Jan 21 '11 at 01:17
12

GoTo Considered Harmful

If you are implementing a state machine a GOTO statement can make more sense (readability and efficient code) than a "structured programming" approach. It really worried some co-workers when the first piece of code I wrote in a new job included not just one but several goto statements. Fortunately they were intelligent enough to realize that it was actually the best solution in this particular case.
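
For contrast, here is a minimal Java sketch of the "structured programming" alternative the answer is weighing the gotos against: the state lives in an enum and a switch rather than in the program counter (the state machine itself is purely hypothetical):

public class WordCounter {
    private enum State { START, IN_WORD, DONE }

    // Counts words by walking an explicit state machine: the state lives in a
    // variable and a switch, rather than in the program counter via gotos.
    public static int countWords(String input) {
        State state = State.START;
        int words = 0;
        int i = 0;
        while (state != State.DONE) {
            char c = (i < input.length()) ? input.charAt(i) : '\0';
            switch (state) {
                case START:
                    if (c == '\0') {
                        state = State.DONE;
                    } else if (!Character.isWhitespace(c)) {
                        words++;
                        state = State.IN_WORD;
                    }
                    break;
                case IN_WORD:
                    if (c == '\0') {
                        state = State.DONE;
                    } else if (Character.isWhitespace(c)) {
                        state = State.START;
                    }
                    break;
                default:
                    state = State.DONE;
                    break;
            }
            i++;
        }
        return words;
    }

    public static void main(String[] args) {
        System.out.println(countWords("goto considered  harmful")); // prints 3
    }
}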

Any "best practice" that doesn't allow for sensible and documented exceptions to its rules is just plain scary.

MZB
  • 535
  • 4
  • 8
  • 16
    I'm going on nine years programming without a single goto statement (including several state machines, as you mentioned). Expand your mind to new ideas. – riwalk Nov 22 '10 at 04:46
  • @MZB: `goto` is harmful to structured languages --> when do you execute the destructor if the scope is interrupted mid-way to jump... somewhere in the middle of another scope ? – Matthieu M. Nov 22 '10 at 20:44
  • 3
    @Mathieu M. - agreed - mixing GOTO with structured control statements isn't sensible. (This was pure C and it wasn't an issue.) – MZB Nov 25 '10 at 00:22
  • 1
    @Stargazer2 - with a simple FSM, it depends whether putting the state in a variable and using it as an index to call a procedure (isn't that the same as a computed GOTO?) gives clearer/faster code than using the program counter as the FSM state. I'm not advocating this as the best solution in most circumstances, just the best solution in some circumstances. Expand your mind to alternative approaches. – MZB Nov 25 '10 at 00:31
  • 5
    @MZB, wouldn't you agree that a function call is also just a computed GOTO? Same goes for for/while/if/else/switch constructs, among others. Language designers abstract away direct changes to the program counter for a reason. Do not use goto. – riwalk Nov 29 '10 at 22:37
  • If you feel up to it, you can build a hash table where the mapping goes from STATE -> &NEXT_STATE_COMPUTING_FUNCTION. The function takes the input and returns the new state. – Paul Nathan Dec 10 '10 at 03:20
  • In an ideal world, the stack would never end. Every use of `goto` would have a structured alternative, and such alternatives go beyond `if`, `while`, and `switch` to include recursion and exceptions. Although `goto` can be avoided in most code, there are cases where higher-level abstractions just don't cut it (e.g. recursion causes stack overflow, tail recursion is broken, try-catch is needlessly messy/expensive, etc.). If you're coding to learn, **please do** use goto. That is, until you understand when/why you shouldn't. – Joey Adams Dec 10 '10 at 08:36
  • 3
    Directly implementing a state machine is probably an antipattern. There are lots of ways to have a state machine without expressing the states and transitions literally. For instance, `import re` – SingleNegationElimination Jan 19 '11 at 02:37
  • I love to use GOTO in code that never gets run to spook the noobs. ... No, that's not true. I use it in production when I have to. – Christopher Mahan Jan 20 '11 at 22:27
  • @MZB, A loop and a unique value for each state should do it. –  Jan 27 '11 at 18:04
  • 1
    @Stargazer712: Yes, language designers have abstracted away reasons to use GOTO but they haven't removed *ALL* of the reasons. The real issue is don't create spaghetti. – Loren Pechtel Feb 04 '11 at 00:37
  • @Joey Adams, recursion can always be replaced with a loop. It is rarely as concise, but if a stack overflow is your concern, then it is worth the extra effort to avoid recursion. If you're worried about try/catch problems, wrap the troublesome code in a function and wrap the function call in a try/catch. Easy as pie. Do not use goto. – riwalk Feb 04 '11 at 16:26
  • @Stargazer712: sometimes retry makes your code read better and hence be more maintainable. I use it once in a while for "retrying". Not everything can be moved to functions you know (anonymous types, closure captures for example). – Henrik Feb 19 '11 at 22:54
  • 1
    So you're saying [this](http://xkcd.com/292/) hasn't happened to you yet? – Mateen Ulhaq May 01 '11 at 05:28
  • @Stargazer: While all recursion can be replaced by a loop _and stack_ (with the stack being unnecessary only in the tail-recursive case) it does not mean that the code must be clearer or faster if you do it that way. – Donal Fellows Jul 01 '11 at 08:37
12

The sacrifices we make to make code testable

I jump through a lot of hoops to make my code testable, but I don't pretend that I wouldn't if given the choice. However, I often hear people pushing the idea that these are "best-practices." These practices include (written in the language of .Net, but applies to other languages as well):

  • Creating an interface for every class. This doubles the number of classes (files) to deal with, and duplicates code. Yes, interface programming is good, but that is what the public/private specifiers are meant for.
  • Every class not instantiated at startup needs a factory. Clearly, new MyClass() is much simpler than writing a factory, but now the method that creates it cannot be tested in isolation. If not for this fact, I would only make 1/20th the number of factory-classes that I do now.
  • Make every class public, which defeats the purpose of having access specifiers on classes at all. However, non-public classes cannot be accessed (and thus tested) from other projects, so the only other option is to move all the testing code into the same project (and thus release it with the final product).
  • Dependency Injection. Clearly having to give every other class I use a field and a constructor-parameter is significantly more work than just creating them when I need them; but then I can no longer test this class in isolation.
  • The Single Responsibility Principle, which has caused me so many headaches I've moved it to its own answer.

So what could we do to fix this? We'd need a drastic change in the language architecture:

  • We'd need the ability to mock classes
  • We'd need the ability to test private methods of internal classes, from another project (this may seem like a security vulnerability, but I don't see a problem if the testee is forced to name its tester-classes).
  • Dependency injection (or service location), as well as something equivalent to our existing factory pattern, would have to be a core part of the language.

In short, we need a language designed from the ground-up with testability in mind.
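
As a minimal Java sketch of the ceremony the first list above describes (all names are hypothetical), compare direct construction with the interface-plus-injection version that exists mainly so a fake can be substituted in a test:

// Without test-driven ceremony: direct and obvious, but hard to test in isolation.
class ReportService {
    String buildReport() {
        return new Clock().now() + ": all systems nominal";
    }
}

class Clock {
    String now() { return java.time.Instant.now().toString(); }
}

// With the "testable" ceremony: an interface, a seam for a fake, constructor injection.
interface TimeSource {
    String now();
}

class SystemTimeSource implements TimeSource {
    public String now() { return java.time.Instant.now().toString(); }
}

class InjectedReportService {
    private final TimeSource time;

    InjectedReportService(TimeSource time) {  // dependency injected, never new'd inside
        this.time = time;
    }

    String buildReport() {
        return time.now() + ": all systems nominal";
    }
}

class ReportServiceTest {
    public static void main(String[] args) {
        // A hand-rolled fake stands in for the real clock, so the test is deterministic.
        InjectedReportService service = new InjectedReportService(() -> "2011-01-01T00:00:00Z");
        System.out.println(service.buildReport().startsWith("2011-01-01") ? "PASS" : "FAIL");
    }
}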

  • I'm guessing you've never heard of [TypeMock](http://www.typemock.com/)? It allows mocking classes, privates, statics (factory), anything. – Allon Guralnek Jan 20 '11 at 23:16
  • @Allon: I have, but it is [far from free](http://www.typemock.com/pricing), making it not an option for most people. – BlueRaja - Danny Pflughoeft Jan 20 '11 at 23:30
  • If you have to write lots of factory classes then you're doing something wrong. Smart DI libraries (e.g. Autofac for C#) can utilize Func, Lazy, Delegates etc. for factories, without writing any boilerplate yourself. – gix Feb 22 '11 at 15:01
10

Separating applications into Tiers; Data Layer, Business Layer, UI Layer

The main reason that I dislike this is that most places that follow this method use very brittle frameworks for getting it done. I.e. the UI layer is hand-coded to deal with business-layer objects, business-layer objects are hand-coded to deal with business rules and the database, and the database is SQL, which is already fairly brittle and managed by the "DBA" group who dislike change.

Why is this bad? The most common enhancement request is likely "I need a field on screen X that has Y." Bang! You now have a new feature that affects every single layer, and if you split the layers between different programmers, it just became a big issue involving multiple people and groups, for a very simple change.

Additionally, I don't know how many times I've been in arguments that go something like this: "The name field is limited to a maximum length of 30 - is this a UI-layer, business-layer, or data-layer issue?" There are a hundred arguments and no right answer. The answer is the same: it affects all layers. You don't want to make the UI dumb and have to go through all the layers and fail at the database just so the user finds out his entry is too long. If you change it, it affects all layers, etc.
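
A tiny hypothetical Java illustration of that argument: the same 30-character rule ends up written down in every layer, so changing it means touching all of them:

public class NameLengthEverywhere {
    // UI layer: the form rejects long names so the user gets fast feedback.
    static boolean uiAccepts(String name) {
        return name.length() <= 30;
    }

    // Business layer: the domain object enforces the same rule again.
    static void validateCustomerName(String name) {
        if (name.length() > 30) {
            throw new IllegalArgumentException("name too long");
        }
    }

    // Data layer: and the schema enforces it a third time.
    static final String DDL =
        "CREATE TABLE customer (name VARCHAR(30) NOT NULL)";
}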

The "layers" tend to leak as well; If any layer is physically separated by process/machine boundaries (I.E. web UI and business backend logic), the rules get duplicated to make everything work reasonably well. I.e. some business logic ends up in the UI, even though it's a "business rule", because the user needs the UI to be responsive.

If the framework or architecture used is resilient to small changes and leakage, i.e. based on metadata and dynamically adjusted through all layers, it can be less painful. But with most frameworks this is a royal pain, requiring UI changes, business-layer changes, and database changes for every small change, and causing more work and less help than the technique is supposed to produce.

I will probably get slammed for this, but there it is :)

Jay
  • 1,901
  • 13
  • 15
  • +1, sounds like my last place of employment to the most minute detail! I respect tiered applications in principle, however too many people treat them like a silver bullet when it doesn't make sense. Most business software has a staggeringly low amount of business logic, and what it does have is relatively simple. This can make layering business logic a nightmare of boilerplate code. Many times the lines can get blurred between data access and business logic because the query IS the business logic. – maple_shaft Jun 23 '11 at 12:42
  • ... further, most shops absolutely fail at recognizing UI logic or Presentation logic. Because they don't understand just how little business logic exists in a typical CRUD application, they feel like they must be doing something wrong when the majority of their logic resides in the presentation layer as presentation logic. It gets falsely identified as business logic and then people push it to the server for yet another server call. A Thin Client can and should have Presentation logic, Eg. (Hide textField2 if option1 is selected in dropDown3). – maple_shaft Jun 23 '11 at 12:46
9

JavaBeans

The use of JavaBeans in Java. See my question Why shouldn't I use immutable POJOs instead of JavaBeans? on StackOverflow.
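
For readers who don't follow the link, a small hypothetical sketch of the contrast that question is about:

// JavaBean style: no-arg constructor plus getters/setters, so instances are
// mutable and can exist half-initialised.
public class UserBean {
    private String name;
    private String email;

    public UserBean() { }

    public String getName()  { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

// Immutable POJO: everything set once in the constructor, no setters,
// safe to share between threads and impossible to observe half-built.
final class User {
    private final String name;
    private final String email;

    User(String name, String email) {
        this.name = name;
        this.email = email;
    }

    String name()  { return name; }
    String email() { return email; }
}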

Jonas
  • 14,867
  • 9
  • 69
  • 102
7

User Stories / Use Cases / Personas

 

I understand the need for these when you are programming for an industry that you are not familiar with, but I think that when they are implemented in full force they become too corporate and turn into a waste of time.

riwalk
  • 7,660
  • 3
  • 29
  • 32
7

80 char/line limits are dumb

I understand that some compromises need to be made to match the pace of the slowest runner on the GUI side (screen resolution limitations, etc.), but why does that rule apply to code formatting?

See... There was this little invention called the horizontal scroll bar that was created to manage virtual screen space beyond the right-most pixel boundary. Why don't developers, who have managed to create great productivity-enhancing tools like syntax highlighting and auto-complete, use it?

Sure, there are CLI editors religiously used by *nix dinosaurs that follow the tired old limitations of their VT220 terminals, but why are the rest of us held to the same standard?

I say, screw the 80 char limit. If the dinosaurs are epic enough to hack on emacs/vim all day, why shouldn't they be capable of creating an extension that automatically wraps lines or gives horizontal scrolling capabilities to their CLI IDEs?

1920x1080-pixel monitors will eventually become the norm, yet developers worldwide are still living under the same limitations, for no better reason than that it's what they were told to do by their elders when they were just starting to program.

80 char limits are not a best practice but a niche practice for a small minority of programmers, and should be treated as such.

Edit:

It's understandable that many devs don't like the horizontal scrollbar because it requires a mouse gesture, so... why not increase the column-width limit to a higher number than 80 for those of us who use modern displays?

When 800x600 computer monitors became the norm for most users, web developers increased their website widths to accommodate the majority... why can't developers do the same?

Evan Plaice
  • 5,725
  • 2
  • 24
  • 34
  • Horizontal scrollbars are the devil. 80-char limit is still important. Unless the language is whitespace sensitive in places, then allowances have to be made. – Orbling Jan 17 '11 at 14:17
  • 1
    @Orbling Nice GWB logic with the ___ is evil angle. So you loathe the hScroll, do you have any valid reason why col-width should be limited to 80 chars? Why not 160 or 256? I think we can all assume that most developers have retired their VT220 terminals and replaced them with puTTY so they're capable of extending the width programmatically anyway. – Evan Plaice Jan 18 '11 at 01:38
  • 4
    I prefer that we stick to the 80 char limit. As soon as you give me more horizontal space, I'll try to open up additional documents side-by-side with the rest. I'd hate to have to scroll four ways. Also, I've noticed often that I'm force to write more readable code with the 80 char cap. – Filip Dupanović Jan 18 '11 at 02:17
  • @Evan Plaice: I do use PuTTY a fair bit, and whilst you can extend it, the default is 80, it is tradition. That aside, long lines generally reduce readability of code; if you want to print code at a readable font-size, 80-chars is a good limit; when putting code windows side-by-side on a single screen, 80-chars works fine even on a smaller 4:3 display and a number of other reasons, I think there is actually a whole question on this point. – Orbling Jan 18 '11 at 02:27
  • @kRON @Orbling give me more horizontal space and I no longer have to deal with function/method calls being broken up into multiple lines. I use a 17" laptop with 1920x1080 resolution with multiple desktops or additional monitors so horizontal space isn't an issue. 80 char limits are an issue (waste of time). What's wrong with using a modern editor + text wrapping? – Evan Plaice Jan 18 '11 at 02:43
  • @Evan text wrapping doesn't play nice with formatted text, or at least it confuses the hell out of me. Multiple screens are nice, but I've always felt that bigger screens are the way to go. I find it a great productivity booster if you have multiple documents, that are the focus of your current task, open side by side, especially if there are dependencies involved. – Filip Dupanović Jan 18 '11 at 03:04
  • 2
    how would you print it? –  Jan 21 '11 at 00:11
  • 5
    Sorry - have to disagree on this one. I really hate long long lines - it requires more eye movement, it requires mouse gestures to scroll across, its harder to see subtly little dangly things down the end of a line. In about 99% of cases there are clean ways of making stuff run over several (shorter) lines which is clearer and easier to read. 80 Chars might be arbitrary and "because it was that way in punch card days" but its still a reasonable framework to work inside - most of the time - for the reasons above. – quickly_now Jan 21 '11 at 01:25
  • @Thorbjørn Break lines after an argument and after a specified number of spaces. Don't print a line number on a line that has been wrapped. Or, add a symbol representing that the line has been wrapped. And, optionally, indent wrapped lines 4 spaces more than the line that came before it. Just because most text-wrapping uses the notepad standard doesn't mean it can't be improved. Pretty much, do it the same way that programmers do it manually now. If on-the-fly spell checking can be accomplished programmatically why can't this? – Evan Plaice Feb 06 '11 at 03:39
  • @quickly_now To each his own... But, consider this; in OOP you lose 4 spaces to the namespace indent, 4 to the class indent. That's 72 spaces left. Add a classifiers, private/public/internal, (we'll assume 7 spaces) 65 left. Add type int/double/float/short/etc) 60 left (if it doesn't return a class). 5-15 for the name. plus opening/closing braces... That leaves roughly 43 spaces for all your arguments plus their type specifiers and comma separators. 80 spaces pretty much guarantees that you'll need to do a lot of manual indenting in OOP code. IMHO, it's a waste of effort to do this manually. – Evan Plaice Feb 06 '11 at 03:50
  • @Evan, so you basically introduce another formatting step in the printout meaning that what you get on paper is not 1-1 on what you see on screen. I cannot figure out right now if that is an advantage or a disadvantage :) –  Feb 06 '11 at 09:39
  • 3
    Indent by 2 spaces. And use an editor with an automatic indentation tracker. I've done it that way for years, no big deal. (Emacs with the right modes helps here.) – quickly_now Feb 06 '11 at 09:47
  • @Thorbjørn I thought you meant print on screen, not print on a printer. People still print out code? Anyway, I was talking more along the lines of automatic word wrapping long lines to 80 spaces cleanly. Maybe a tool to do this on the fly for printing might meet your needs. – Evan Plaice Feb 07 '11 at 01:31
  • @quickly_now No can do. Project guidelines require 4 spaces and development is done using C# in Visual Studio/MonoDevelop. Emacs may be useful for some languages, but I wouldn't even remotely consider doing C# development in an editor that doesn't support some form of intelligent autocomplete. Emacs FTL. – Evan Plaice Feb 07 '11 at 01:35
  • @Evan Plaice: For C#, agreed. Emacs would be a bad idea. Perhaps coding guidelines should be changed :) – quickly_now Feb 07 '11 at 07:43
  • @quickly_now Or perhaps, this isn't really an industry best practice because it only really benefits a small subset of developers. Oh snap :) – Evan Plaice Feb 07 '11 at 11:07
  • @Evan The 4 spaces for the namespace indent isn't related to OOP, but C++ and C# (only languages I know with scoped namespaces). Java, Python, D, Haskell, etc. all do file based namespace scoping. You also may have just had a greater complaint against languages with too many classifiers than the 80 character limit. – jsternberg Apr 16 '11 at 15:09
5

Measure, Measure, Measure

So fine, measure away, but for isolating performance bugs, measuring works about as well as successive elimination. Here is the method I use.

I've been trying to track down the source of the measure-measure "wisdom". Somebody with a tall enough soapbox said it, and now it travels.

Mike Dunlavey
  • 12,815
  • 2
  • 35
  • 58
5

Use Singletons

When you should only have one instance of something. I can't disagree more. Never use a singleton; just allocate it once and pass around the pointer/object/reference as necessary. There is absolutely no reason not to do this.
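
A rough Java sketch of the alternative being advocated: construct the object once at the top and hand it to whoever needs it, instead of exposing a global accessor (all names are hypothetical):

// Singleton style: anyone, anywhere, can grab the instance, which hides the dependency.
class ConfigSingleton {
    private static final ConfigSingleton INSTANCE = new ConfigSingleton();
    private ConfigSingleton() { }
    static ConfigSingleton getInstance() { return INSTANCE; }
    String get(String key) { return "value-of-" + key; }
}

// "Allocate it once and pass it around" style: the dependency is explicit.
class Config {
    String get(String key) { return "value-of-" + key; }
}

class MailSender {
    private final Config config;
    MailSender(Config config) { this.config = config; }   // handed in, not looked up
    void send(String to) {
        System.out.println("sending via " + config.get("smtp.host") + " to " + to);
    }
}

class App {
    public static void main(String[] args) {
        Config config = new Config();              // allocated exactly once, at the top
        new MailSender(config).send("someone@example.com");
    }
}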

  • 2
    That's a whole bunch of negatives that has left me rather confused as to your actual stance on singletons. – Paul Butcher Nov 30 '10 at 11:17
  • 1
    @Paul Butcher: I hate singletons and it should never be used –  Dec 01 '10 at 07:58
  • The reason is that "assuming there can only be one instance running at a time" is sometimes unwarranted pessimism. The story usually goes like this: at the beginning, there is only need for a single singleton. As time evolves, you find that the singleton is no longer single anymore. Solution: refactor the singleton to become a normal object that can be passed around. To prevent this from happening, only use singletons for legitimate reasons, and not for a quick shortcut. – rwong Dec 01 '10 at 16:21
  • 1
    @rwong: Personally I don't think any reason is a legitimate reason. Just write it as a normal class. Really, there is no reason to use a singleton other than laziness and to promote bad habits or design. –  Dec 01 '10 at 22:23
  • 6
    Who says that using Singeltons is best practice? – Phil Mander Dec 10 '10 at 00:29
  • 2
    Singletons do have their place, especially in instances where an operational structure is allocated and filled on startup, then basically becomes read only throughout the run time of the program. In that case, it just becomes a cache on the other side of a pointer, though. – Tim Post Dec 10 '10 at 08:06
  • @Tim: i could not use a singleton for that. –  Dec 10 '10 at 11:39
  • 1
    I've never seen an instance where a Singleton was warranted. I've only done web and desktop development, so maybe (just maybe) there is a place for them. But I haven't seen a valid demonstration of it yet. – Berin Loritsch Jan 12 '11 at 18:32
  • 1
  • A Singleton is just a glorified global variable. Instead of directly accessing the Singleton/global variable, you can always pass it around explicitly as a parameter. – sbi Jan 19 '11 at 19:07
  • @sbi: Are you saying global variable are ok? We know what singletons are, i say (and some ppl agree) that you should NEVER EVER use them –  Jan 20 '11 at 03:18
  • @acidzombie24: No, not at all. What I'm saying is that Singletons are __only marginally better__ than global variables, which are shunned for very good reason. And you can get rid of Singletons/global variables in the same way: stop accessing it from everywhere. Instead pass it around explicitly as a function argument. – sbi Jan 20 '11 at 07:19
  • Sometimes a very small number of global variables is a perfectly acceptable thing to have. Simple, effective, cheap. Judgement needed though. – quickly_now Feb 06 '11 at 09:57
  • 1
    @sbi: I say Singletons are MUCH worse than global variables. They fool you into thinking you are using a proper class. And it's less obvious when you look at the source. –  Feb 06 '11 at 20:50
  • @acidzombie24: What does the lifetime and instantiation policy for a variable have to do with whether it is a proper class? – sbi Feb 06 '11 at 21:03
  • @sbi: It doesn't, but globals are clearly globals and a bad smell where singletons potentially trick you into believing they do something they do not. –  Feb 07 '11 at 00:14
  • @acid: To that I fully agree. When you're using a Singleton (or a less pompously named global variable), you're only too lazy to pass around the object. – sbi Feb 07 '11 at 03:12
5

My teacher demands I start all my identifiers (not including constants) with lowercase letters, e.g. myVariable.

I know it seems like a minor thing, but many programming languages require variables to start with uppercase letters. I value consistency, so it's a habit of mine to start everything with uppercase letters.

Maxpm
  • 3,146
  • 1
  • 25
  • 34
  • 9
    I had a teacher that required camelCase because he insisted that was what people use in the realWorld... At the same time I was programming in two different groups at my work - both groups insisted on under_scores. What matters is what your group uses. He could have defined himself as the lead programmer and all would have been fine in my book - we follow his conventions - but he was always giving his opinion as "the way things are done in the real world" as if there was no other valid way. – xnine Nov 30 '10 at 04:42
  • @xnine I don't have enough rep on this site yet to rate your comment, so I'll just reply with this **comment of approval.** – Maxpm Nov 30 '10 at 04:44
  • @Maxpm there you go :) – Carson Myers Dec 01 '10 at 03:43
  • The practice of using lowercase letters at the start of variable and function names is to clearly distinguish them from your classes (which in those languages start with capital letters). If the language limits you from starting with a lower case letter then that's different, but just make sure you use capitals on each following word after the first - Myreallylongvariablename is much less readable than MyReallyLongVariableName – adamk Dec 18 '10 at 00:42
  • 5
    Camel case (first letter lowercase) is a pretty common convention as well as Pascal case (first letter of every word capitalized). In most languages, camelCase is used on private/internal variables where PascalCase is used on classes, methods, properties, namespaces, public variables. It's not a bad practice to get used to just be ready for projects that may use a different naming scheme. – Evan Plaice Jan 12 '11 at 16:18
  • 2
    Just an FYI. Some languages infer meaning from whether the first letter of a variable is capital or not. In one such language, if the first letter is capital the variable is treated as a constant--any attempt to change it will throw an error. – Berin Loritsch Jan 12 '11 at 18:28
  • 1
    In [Go](http://go-lang.org/), public methods and members in classes start with an Upper-case letter; private ones with a lower-case letter. – Jonathan Leffler Jan 18 '11 at 14:39
5

Using unsigned int as iterator

When will they learn that using a signed int is much safer and less bug-prone? Why is it sooo important that an array index can only be a positive number that everyone is glad to overlook the fact that 4 - 5 is 4294967295?

AareP
  • 369
  • 1
  • 7
  • Okay, now I'm curious- why do you say that? I'm feeling a little dumb - can you provide some code examples to back up your statements? – Paul Nathan Dec 10 '10 at 03:22
  • 4
    @Paul Nathan: as far as the buggyness goes here's one example: for(unsigned int i=0; i<10; i++){int crash_here=my_array[max(i-1,0)];} – AareP Dec 10 '10 at 05:59
  • @AareP: To be sure - I presume you're referencing the fact that when an unsigned int `0` is decremented by `1`, you actually end up with the **maximum positive value** that an unsigned int may store? – Adam Paynter Dec 10 '10 at 10:11
  • 1
    @Adam Paynter: yes. This might seem normal for c++ programmers, but let's face it, "unsigned int" is one bad "implementation" of positive-only-numbers. – AareP Dec 10 '10 at 11:19
  • Not a good idea on small embedded machines - frequently unsigned ints will generate smaller, faster code. Depends on the compiler and processor. – quickly_now Jan 21 '11 at 01:19
  • I disagree. This makes the programmer think twice before he writes `for(int i=10; i>=0; --i)`. – Mateen Ulhaq May 01 '11 at 05:31
  • -1 Using `unsigned int` for sizes *(including array indices)* is extremely **extremely** important for [writing secure programs](http://stackoverflow.com/questions/3259413/should-you-always-use-int-for-numbers-in-c-even-if-they-are-non-negative/3261019#3261019) – BlueRaja - Danny Pflughoeft Aug 17 '11 at 19:24
4

Methods shouldn't be longer than a single screen

I agree with the single-responsibility principle completely but why do people perceive it to mean "a function/method can have no more than a single responsibility at the finest level of logical granularity"?

The idea is simple. A function/method should accomplish one task. If part of that function/method can be used elsewhere, chop it out into its own function/method. If it could be used elsewhere on the project, move it into its own class or a utility class and make it internally accessible.

Having a class that contains 27 helper methods that are only called once in the code is dumb, a waste of space, an unnecessary increase in complexity, and a massive time sink. It sounds more like a good rule for people who want to look busy refactoring code but don't produce much.

Here's my rule...

Write functions/methods to accomplish something

If you find yourself about to copy/paste some code, ask yourself whether it would be better to create a function/method for that code. If a function/method is only called once from another function/method, is there really a point in having it in the first place (will it be called more often in the future)? Is it valuable to add more jumps in/out of functions/methods during debugging (i.e., does the added jump make debugging easier or harder)?

I completely agree that functions/methods greater than 200 lines need to be scrutinized but some functions only accomplish one task in as many lines and contain no useful parts that can be abstracted/used on the rest of the project.

I look at it from an API dev perspective... If a new user were to look at the class diagram of your code, how many parts of that diagram would make sense within the greater whole of the project, and how many would exist solely as helpers to other parts internal to the project?

If I were to choose between two programmers: the first has a tendency to write functions/methods that try to do too much; the second breaks every part of every function/method to the finest level of granularity; I would choose the first hands down. The first would accomplish more (ie, write more meat of the application), his code would be easier to debug (due to fewer jumps in/out of functions/methods during debugging), and he would waste less time doing busy work perfecting how the code looks vs perfecting how the code works.

Limit unnecessary abstractions and don't pollute the autocomplete.

Evan Plaice
  • 5,725
  • 2
  • 24
  • 34
  • This. I once refactored some long function into several, only to realize that almost all of them needed almost all of the parameters of the original code. The parameter handling was such a pain that it was easier to just go back to the old code. – l0b0 Jan 17 '11 at 14:59
  • 3
    I think one counter argument to this is that refactoring parts of a large method into separate calls can make the larger method easier to read. Even if the method is only called once. – Jeremy Jan 17 '11 at 15:33
  • 1
    @Jeremy How? How does, abstracting out a section of code and placing it in its own method make it any more readable than just placing a one line comment at the top of that chunk of code describing what it does? Assuming that block of code is only used once in that section of code. Is it really **that** difficult for most programmers to decompose the working parts of the code while they read it and/or place a few one-line comments to remind them what it does if they can't? – Evan Plaice Jan 18 '11 at 01:28
  • 4
    @Evan: Putting pieces of code into functions effectively gives them a __name__, hopefully well explaining what that piece of code does. Now, wherever that piece of code is called, you can see the name __explaining what the code does__, instead of having to __analyze and understand__ the algorithm itself. If done well, this can ease reading and understanding code tremendously. – sbi Jan 19 '11 at 19:00
  • @sbi understandable, that's why I mentioned adding a one-liner comment describing what the subsequent code does. Ie. it accomplishes the same thing without adding the extra jump during debugging. I'm not saying that I **never ever** move some code out into functions just to clean up the code a little, I'm just trying to point out the that the practice is completely unnecessary. Abstracting out parts that are only going to be used once is good to assuage our inner OCD tendencies but it adds no real value to the code. In short, it's not really a best practice. – Evan Plaice Jan 20 '11 at 19:50
  • 1
    +1 and I'd give more if I could. There's nothing at all wrong with a lump of C code thats 1000 lines in a single function (eg a parser with a massive switch()) PROVIDED the intent is clear and simple. Busting all the little pieces out and calling them just makes the thing harder to understand. Of course there are limits to this too.... sensible judgement is everything. – quickly_now Jan 21 '11 at 01:22
  • What you are describing is typically a consequence of badly designed code. Plus, a function called in just one place makes sense: it is easier to read (because you have to understand smaller pieces of code at a time), easier to test (and you'll be happy when support comes), easier to make the code evolve, and so on. When you come up with a long function (usually a short one that grows more and more over time), at some point you have to stop and think about refactoring. Long functions aren't bad in themselves; they are a sign that your design has to be improved. – deadalnix Jul 01 '11 at 19:07
2

Absolutist, strict, encapsulation

I agree with encapsulation mostly, but it was really liberating to temporarily break that rule in order to convert some procedural code to OOP using the Array to Object refactoring; a rough sketch of the stages follows the list below. (Needed a bit of a push to loosen up a bit. Thanks, Martin Fowler)

  1. Array
  2. new class with the array as a public property
  3. go through client code / add accessors
  4. only then encapsulate it and make it private/protected
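
A rough Java sketch of those stages (names are hypothetical); the point is that the field deliberately stays public while the client code is migrated, and only gets locked down at the end:

// Stage 1: procedural code passing a bare array around.
//   String[] user = { "Ada", "ada@example.com" };

// Stage 2: wrap the array in a class, deliberately leaving the field public
// so all the existing client code keeps compiling while it is migrated.
class UserRecordStage2 {
    public String[] fields;                      // temporary, intentionally public
    UserRecordStage2(String[] fields) { this.fields = fields; }
}

// Stage 3: add accessors and move client code over to them, one call site at a time.
class UserRecordStage3 {
    public String[] fields;                      // still public during the transition
    UserRecordStage3(String[] fields) { this.fields = fields; }
    String name()  { return fields[0]; }
    String email() { return fields[1]; }
}

// Stage 4: only now encapsulate, once no client touches the array directly.
class UserRecord {
    private final String[] fields;
    UserRecord(String[] fields) { this.fields = fields.clone(); }
    String name()  { return fields[0]; }
    String email() { return fields[1]; }
}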

JW01
  • 3,579
  • 4
  • 22
  • 22
  • I agree; while using accessors is a good idea most of the time, it isn't always. It depends on how it's used. If the codebase is going to be used as a publicly accessible API, everything that's marked public should be encapsulated. If it's a library, public and internal properties should be encapsulated. There are instances where it may make sense to break the rules, but only if the internal implementation doesn't need to change later. For apps that have no API, I don't really see encapsulation as being necessary at all (and I usually do it anyway out of habit). – Evan Plaice Jan 18 '11 at 02:01
1

Over generalization.

The square block does NOT fit into the circle hole!

PMV
  • 121
  • 5
  • Agreed, but can you give an example? When you are building a framework, the complexity of generalizing things pays back many times over when you can use your widgets in flexible scenarios. – Robert Harvey Oct 26 '10 at 20:08
  • 6
    Since when was "over generalization" a best practice? – Peter Alexander Oct 26 '10 at 20:59
  • Seems more like an antipattern I guess. – PMV Oct 27 '10 at 03:07
  • I would edit this to read "making things as general as possible". You've got to get into the land of concrete things at some point. – Jon Purdy Oct 27 '10 at 14:32
  • 8
    Actually, a square block can fit into a circle hole, it just has to be smaller, or you have to use a lot of force. – Malfist Oct 27 '10 at 15:59
1

I tend to ignore almost all best practices when doing a relatively small, one-off project, like writing a parser required to migrate files. When all I need is the result, once, I don't care about readability, scalability, maintainability, etc.

user281377
  • 28,352
  • 5
  • 75
  • 130
  • 16
    Please God, do not let me work with you. For two reasons: First, I agree, to a small extent, but some practices not only improve code, but improve development time. Second, I have come to realize that there is virtually no such thing as "one off code". If it was useful to you, there is a very good chance it will be useful to someone else. – riwalk Nov 22 '10 at 04:44
  • 4
    For those small one-off things, I don't have to work with anyone. I'm talking about programs with something like 100-200 lines of code, no teamwork required, thank you very much. IMO it's better to be honest with those things, declare them as disposable and do not make a big deal about them. Otherwise, you might be tempted to spend weeks on building the ultimate "data transformation and import framework" just to make sure that your program could hypothetically be useful for the yet unknown next task of that kind. – user281377 Nov 22 '10 at 08:18
  • @ammoQ, that's what I'm talking about. 100-200 lines of code that you write alone. I once was on a project where I had to copy new files (and only new files) from server A to server B several times per week. I wrote a one-off script to do it. When I got moved to a different project, the new guy asked me what I used to do it, so I gave him the script. From there, the script was expanded to give daily reports of changes and has become a solid part of the development environment. There is no such thing as one off code. – riwalk Nov 29 '10 at 22:32
  • @Stargazer712, best practices *improve* development time? This seems unlikely for the kinds of tasks (I believe) ammoQ is talking about. I recently wrote about 20 lines of python to patch together separate boot loader, kernel and rootfs binaries into a single image file. Should I have written a 'user story' about it, or played a few rounds of 'planning poker' to estimate it? Seems like overkill... – evadeflow Nov 29 '10 at 23:05
  • @evadeflow, First, I said "some", not "all". Second, you're living on the assumption that a user story is a best practice :). – riwalk Nov 29 '10 at 23:38
  • @Stargazer712: If you use something regulary (like "several times a week"), it's not a one-off thing. Maybe it's just 20 lines of code, but it should be properly written, properly documented, put into SVN etc. because someone will have to maintain it, sooner or later! My best-practice-exception only applies to software that is *immediately* obsolete after the first (and only) use in production. – user281377 Nov 29 '10 at 23:53
  • 2
    I've done that and seriously had it come back to bite me. The best practice of 'Code Reuse' might breathe life into a shambling monstrosity you hacked together, and the next thing you know it's crushing downtown New York while you scream "YOU WERE NEVER MEANT TO BE USED!" at it from your jail cell, where you await execution for having created that abomination. – Trevel Jan 17 '11 at 22:32
  • Trevel: LOL, that's why you have to kill it while you still can :D – user281377 Jan 18 '11 at 07:59
  • I have, for many years, been part of a multi-MLoC project. That thing rested on __thousands of those one-off pieces of code__, some a decade old, all of which probably started their miserable life as a proof of concept for some feature, but then got bend to base the feature on, because nobody had found the time to redo it properly. That taught me to start to refactor my code the moment it exceeds 20 lines. Yes, 80% of these pieces get thrown away, but __the other 20% are well worth the effort__. – sbi Jan 19 '11 at 19:05
  • sbi: Holy sh*t, seems like somebody got a wrong idea about the concept of one-off. Anyway, a proof-of-concept thing should be well-made if possible, because it happens a lot that it becomes production code. – user281377 Jan 19 '11 at 20:41
  • One-off where you don't actually DELETE the source code is not one-off. It lives whether you like it or not. I too have been caught - both myself and in inheriting somebody else's crap to fix and maintain. In the spirit of the topic: One-off code isn't. – quickly_now Jan 21 '11 at 01:28
  • quickly_now: Well, i have a compost heap where those sources go to. – user281377 Jan 21 '11 at 07:20
1

Only one I can think of is formatting Java if statements like this:

if (condition) ...

instead of

if(condition) ...

For the most part, I've found best practices to be best, and any initial reservations I have with them tend to be misplaced, or overridden by more significant concerns.

Armand
  • 6,508
  • 4
  • 37
  • 53
1

"If it ain't broke, don't fix it". Code is always broken. The reason nobody is telling the original developer is that they

  1. think it's their own fault,
  2. don't know who to contact,
  3. can't get through support to speak with a "techie",
  4. can't be bothered, or
  5. don't know how to describe the problem in enough detail.
l0b0
  • 11,014
  • 2
  • 43
  • 47
1

Always favour long, explicit variable and routine names over the shortened form

e.g. favouring Authentication over AuthN

This bit me when MS file paths got so long that my Subversion working copy could not load onto my PC.

See: https://stackoverflow.com/questions/3282303/my-choice-of-class-names-is-hampered-by-windows-xp-max-path-length-issues-with-sv

JW01
  • 3,579
  • 4
  • 22
  • 22
  • 2
    Usually, name lengths are shortened by combining similar methods into namespaces/classes. For instance, I'd put that function under CustomerNotifications.PaymentReceived(). That's more a limitation of file-based-MVC frameworks and/or poor design than the fault of variable naming conventions. And yes, I prefer Authorization over AuthN because, what is N supposed to represent anyway. To the person naming it, it's clear, to others it isn't. – Evan Plaice Jan 18 '11 at 01:54
  • I try to be explicit as much as possible. But if you have to modify your whole design to accommodate the "best practice of favouring longer names" then something is wrong. I don't think 'good design' ipso facto leads to shorter file paths. – JW01 Jan 21 '11 at 17:00
0

In JavaScript, many people consider it best practice to write all the semicolons they would need in other C-family languages (and a lot of them ain't afraid to point out the "error" whenever they see code that could have more semicolons). I have never found a need for them; their role as a visual marker is already filled by line breaks.

On a side note, it is kinda funny to see a compiler requiring semicolons, but still confidently throw the error "Missing semicolon". "We both know what you meant to write, but I must reject your application because I formally ain't allowed to distinguish between spaces and line breaks."

A number of other JavaScript/C-style best practices are circulating, but none of them come close to this one in terms of hype without qualified arguments.

Edit: It is perfectly possible to minify code without semicolons; Closure Compiler will insert them as needed.

aaaaaaaaaaaa
  • 1,076
  • 6
  • 11
  • 2
    Douglas Crockford says to put the semi-colon, so I put the semi-colon. – PeterAllenWebb Nov 30 '10 at 01:38
  • @Peter that's a terrible reason, unless what you really mean is "Douglas Crockford says the semicolon is a good idea because [...] and I agree with him." My personal reason is something like, "Douglas Crockford says to use the semicolon, and I don't remember why exactly, but I remember it made sense when I read it, so I've tried to adopt the habit." – Tyler Nov 30 '10 at 08:29
  • 3
    The purpose for this is so that minifiers don't screw it all up. The line breaks only work while they still exist. – Paul Butcher Nov 30 '10 at 11:18
  • 11
    I can't stand people saying, "Well the compiler complained about the semicolon, why can't it just put one there!?" The answer is because you did not put one there. Anyone who worked with Word 2003 enough knows that computers are terrible at guessing intention. Compilers are even worse. The "semicolon expected" error just means that based on the crap you typed, it looks like a semicolon might work here. It could be that you were missing a semicolon, or it could be that you typed an intelligible line of crap that cannot be read. It is not the compilers job to read your mind. – riwalk Nov 30 '10 at 15:19
  • Closure Compiler handles line breaks perfectly well, inserting semicolons as needed. Sure I have read that omitting semicolons makes code error prone, but I have never found any hard arguments, and my experience confirms none of the supposed problems. @PeterAllenWebb: You also remember to break all lines longer than 80 characters, as Crockford say you must? – aaaaaaaaaaaa Nov 30 '10 at 15:45
  • @MatrixFrog Yes, it is bad to do something only for the reason that Douglas Crockford says to. The Raymond Chen quote in Conrad Frix's answer tells you exactly why. In fact, my actual reason is exactly the same as yours. It would be better yet if we both knew exactly why we were doing it, of course, and now thanks to Paul Butcher we remember at least one reason. – PeterAllenWebb Nov 30 '10 at 19:58
  • @PeterAllenWebb: You should include Douglas Crockford's explanation about *why* you should end all lines of javascript with a semi-colon. The absence of a semi-colon "can mask some errors". http://javascript.crockford.com/code.html – Jim G. Mar 25 '11 at 16:40
0

Databinding

I hate it and love it (depending on the complexity of the project).

Amir Rezaei
  • 10,938
  • 6
  • 61
  • 86
0

Threading is the answer to all performance problems

I disagree with that, for several reasons:

  • In general, indexing and temporal caching give a much greater performance boost. Of course, you might actually need to use "dirty flags", which makes your design messier (a small sketch follows this list).

  • The problems that come from multi-threading outweigh the benefits, time- and money-wise.

  • What multi-core technology really brought us is better multi-tasking (running a couple of applications just as fast as one). Making good use of skillfully written multi-threaded programs is just a side effect.
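
To make the "dirty flag" point concrete, here is a minimal sketch (the class and member names are invented for illustration): an expensive aggregate is cached and recomputed only when the underlying data has changed, which often buys more than adding threads would.

#include <numeric>
#include <vector>

// Hypothetical example: cache an expensive total and recompute it only when dirty.
class OrderBook {
public:
    void add(double amount) {
        amounts_.push_back(amount);
        dirty_ = true;                       // invalidate the cached total
    }

    double total() const {
        if (dirty_) {                        // recompute only if the data actually changed
            cachedTotal_ = std::accumulate(amounts_.begin(), amounts_.end(), 0.0);
            dirty_ = false;
        }
        return cachedTotal_;                 // otherwise serve the cached value
    }

private:
    std::vector<double> amounts_;
    mutable double cachedTotal_ = 0.0;
    mutable bool dirty_ = false;
};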

AareP
  • 369
  • 1
  • 7
  • Hmmm... I agree with you; but I have to say that this "best practice" hasn't reached my dark corner of the Earth. Through the years, many teammates have been quick to blame the database for performance problems. – Jim G. Feb 19 '11 at 23:15
  • Multiprocessing (while IPC may be more complex) **is** the answer. Why? Because garbage collection is as easy as dropping a process and, if one module of the program crashes, it doesn't bring down the whole system. Look at this http://www.google.com/googlebooks/chrome/small_00.html to see what I mean. – Evan Plaice Mar 25 '11 at 12:07
  • @Evan Plaice: You've *got* to be kidding me. Those performance gains come at the expense of significant programmatic complexity. – Jim G. Mar 25 '11 at 16:35
0

Well, one of my hot buttons is performance analysis. People say

Premature optimization is the root of all evil.

And then they say the complete quote is:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Which makes perfect sense when you're discussing "algorithms" - clever academic stuff where 3% of the program might be two lines of code.

I work on 10^6-line apps, with 10^5 methods in 10^4 classes, worked on over the years by a dozen programmers and testers, undergoing numerous revisions and releases, where that quote has no relevance at all. Typically the time drains are single lines of code which may be "opportunities", but no one could ever guess where they are.

OTOH, people say "profile", or they say "measure, measure". Well, fine. They say it, but they don't often DO it. It could be that the reason they don't do it is that it doesn't work very well.

This answer says more about that, and points out what does work.

Mike Dunlavey
  • 12,815
  • 2
  • 35
  • 58
0

CONTINUE STATEMENT IS BAD

I'm going to reference the Joint Strike Fighter Air Vehicle C++ Coding Standards here:

AV Rule 190 (MISRA Rule 57) The continue statement shall not be used.

and a professor of mine:

"THE CONTINUE STATEMENT IS BAD"

Their logic is that it disrupts control flow and makes the code less readable; in their view, nested if statements are a cleaner approach.

Yet, I find this more readable:

 
for(int i = 0; i < CONST_FOO; ++i) {
    if(aTwo[i] == NULL) continue;
    aTwo[i]->DoBar();
    //...do other stuff with aTwo[i]
    //...do other stuff with aTwo[i]
    //...do other stuff with aTwo[i]
    //...do other stuff with aTwo[i]
    //...do other stuff with aTwo[i]
    //...do other stuff with aTwo[i]
    //...do other stuff with aTwo[i]
    //...do other stuff with aTwo[i]
}
 

Than this:

 
for(int i = 0; i < CONST_FOO; ++i) {
    if(aTwo[i] != NULL) {
        aTwo[i]->DoBar();
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
        //...do other stuff with aTwo[i]
    }
}
 

This is a very simplistic example (imagine the nested ifs going a lot more than one level deep), but it is similar to the ridiculous "only one return statement per function" "best" practice: it saves whoever is reading or debugging the code from jumping around (possibly several pages away), less indentation generally makes for more readable code, etc.

Casey
  • 425
  • 3
  • 13
-1

Using static and final keywords.

static, because it's faster.

final, because you don't want anybody to extend that class (a.k.a. "Design For Extension").

It makes classes that use those classes very hard to unit-test.
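
The same tension can be sketched in C++, which has final since C++11; this is only an analogue of the Java situation described above, and the class names are invented:

// A 'final' collaborator cannot be subclassed, so no test double can stand in for it.
class PaymentGateway final {
public:
    bool charge(double amount) { return amount > 0; }   // imagine this talks to a real payment service
};

class Checkout {
public:
    explicit Checkout(PaymentGateway& gateway) : gateway_(gateway) {}
    bool complete(double amount) { return gateway_.charge(amount); }
private:
    PaymentGateway& gateway_;
};

// class FakeGateway : public PaymentGateway {};  // error: cannot derive from 'final' class
// Unit-testing Checkout now means hitting the real gateway, or refactoring to an interface first.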

IAdapter
  • 1,345
  • 1
  • 9
  • 22
  • You are certainly correct that those things make unit testing hard, but you are assuming the best practice of unit testing everything. There are other ways to test, there are not other ways to make things static or final if you need those efficiencies / protections. – Bill Oct 27 '10 at 21:13
  • 3
    As a best practice, classes are marked final in frameworks because it is hard to predict how a user might extend them. Consequently it is safer to seal the class than it is to try and figure out every possible way your users might abuse the class. See http://blogs.msdn.com/b/ericlippert/archive/2004/01/22/61803.aspx – Robert Harvey Oct 27 '10 at 22:05
  • 4
    Also, I'm not a big fan of testing frameworks that require you to inherit from the classes you are testing. There are plenty of test frameworks out there that don't require this, making it straightforward to test sealed classes. – Robert Harvey Oct 27 '10 at 22:08
  • 6
    @Robert Harvey: It's safer to seal the class - safer to *whom*? If someone extends a class, isn't *he* responsible for figuring out how to do it properly? I think sealing the class just for the sake of dogma is like obfuscating your source code before putting it into version control. Yeah, safe, kind of, because then nobody else can change it. That kind of safety just doesn't come for free. – Joonas Pulakka Oct 28 '10 at 06:53
  • @Joonas: You've apparently never had to provide technical support for a product. :) I agree that it is inconvenient (and a bit counterintuitive) to deal with sealed classes *in a framework,* but you must also agree that sealing the classes greatly reduces the number of things that can go wrong. – Robert Harvey Oct 28 '10 at 14:44
  • 1
    Granted, this depends on the context. I definitely understand your stance if you're making a software *library* product for customers who *code* on top of your code; you're then also protecting yourself from having to freeze some implementation details because your clients happen to depend on those. It's quite different from making code within the house where just a few programmers ever interact directly with your code (in which case keeping the code "open" is quite an useful general practice). – Joonas Pulakka Oct 28 '10 at 15:05
  • Read "Effective Java" and it will be immediately clear why classes which weren't explicitly designed for inheritance should be final (and why C#'s non-virtual-methods-by-default is preferable) – BlueRaja - Danny Pflughoeft Jan 20 '11 at 22:00
  • @BlueRaja - Danny Pflughoeft please do some real-world programming and not just read books written by people who made a lot of mistakes and are proud of it ("Java Puzzles" is an amazing book, right? For me it's a list of bugs that should be fixed and are not, because of backward compatibility and laziness) – IAdapter Jan 21 '11 at 08:53
  • @0101: Please read some books to learn about other people's mistakes and how to avoid them - it's not everyone who gets to design an API used by millions of people :) – BlueRaja - Danny Pflughoeft Jan 21 '11 at 16:28
  • @BlueRaja - Danny Pflughoeft I have read a lot of books, I have even read one today. There are just some things I disagree with. Everybody can be a sheep and blindly follow the pack. I understand the reasoning, but I don't agree with it. P.S. This topic is about which "best practices" you disagree with. – IAdapter Jan 21 '11 at 20:50
-1

Static typing of primitives

Having to specify a data type as one of a large number of numeric types is premature optimization. Just call it a number, keep track of its value, and assign a type later if it's useful for optimization.

User-defined types are useful. Built-in primitive types aren't.
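
For context on why the choice of numeric type can matter (the point raised in the comments below), a tiny illustration, assuming IEEE-754 doubles:

#include <cstdio>

int main() {
    double total = 0.0;
    for (int i = 0; i < 10; ++i)
        total += 0.10;                        // ten 10-cent items
    std::printf("%.17f\n", total);            // typically prints 0.99999999999999989, not 1.0
    std::printf("%d\n", total == 1.0);        // 0: binary floating point cannot store 0.10 exactly

    long long cents = 0;                      // integer cents (or a decimal type) stay exact
    for (int i = 0; i < 10; ++i)
        cents += 10;
    std::printf("%lld\n", cents);             // 100
}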

David Thornley
  • 20,238
  • 2
  • 55
  • 82
  • 3
    The behavior of each of the numeric types differs significantly. For example, the `decimal` type in c# is suitable for financial calculations, while the `float` and `double` types are not. `int` will handle negative numbers; `uint` will not. You might as well choose a suitable type to start with. – Robert Harvey Oct 28 '10 at 20:54
  • @Robert: Why? Why do I have to know in advance if a variable will sometime be used for financial calculations or negative numbers, or, for that matter, numbers greater than 2^31 - 1? Why should I have to worry about `float` vs. `double` when I'm writing the program? Why not have a generic number type that will work for all of these? – David Thornley Oct 28 '10 at 21:02
  • 4
    I submit that if you don't already have a pretty good idea of what the variable is going to be used for, you don't need it (yet). See also http://docs.sun.com/source/806-3568/ncg_goldberg.html – Robert Harvey Oct 28 '10 at 21:06
  • 1
    @Robert: And I believe that, if you want a number, you shouldn't have to worry immediately about expected size and signedness. I found it ironic that you included a link for what everybody needs to know about floating-point numbers: it should be possible to program without keeping that entire paper in mind (much as I like that particular paper). – David Thornley Oct 28 '10 at 21:15
  • Very interesting and I agree. Not really a programming practice though (more of a language feature). – Steven Evers Oct 28 '10 at 21:33
  • 3
    If I need a general number, and it doesn't have a decimal point in it, I generally use an `int`. That covers about 95% of all cases where I need a number. – Robert Harvey Oct 28 '10 at 21:42
  • @Robert Harvey Me too, and that's excellent, but I think David has a point: why can't "the smart enough compiler" look at the code and see that I never used my int for a negative or for more than 255 values and make it a uint8? Precise vs imprecise decimal/float is a bit trickier of a problem, but automatic sizing and signing of ints should be a no brainer, as in I shouldn't need to think about which of 8 versions of int I should use. – CodexArcanum Nov 01 '10 at 16:06
  • 6
    @Codex: Ah, no. I don't want the compiler making that decision for me. Changing types has all sorts of profound implications on the behavior of the code (casting and type conversion, for example). The compiler is not smart enough to figure out my intentions, nor should it be. *I specify my intentions by telling the compiler what numeric type I want it to use.* – Robert Harvey Nov 01 '10 at 16:19
  • 1
    Don't bother working on networking packet structure code (or anything stored in binary format) because they **do** care about what types you use and picking the wrong type can yield drastic results. For 99% of devs, int and float/double are enough. For the rest of us, there's a big difference between byte, short, ushort, int, uint, long, ulong. – Evan Plaice Jan 12 '11 at 16:30
-3

I don't see anything wrong with:

if ($variable = some_function_call()) {
  do_some_stuff($variable); # etc.
}

but some people (clearly Quiche Eaters) insist that the single equals sign is easily mistaken for an ==, which I've never had a problem with—and neither have any of the programmers I've ever worked with.
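
A quick C++ sketch (purely illustrative) of both the legitimate idiom and the mix-up the critics worry about:

#include <cstdio>

int main() {
    int c;
    // Legitimate use: assign and test in one expression.
    while ((c = std::getchar()) != EOF) {
        std::putchar(c);
    }

    int answer = 0;
    // The feared mistake: '=' where '==' was intended.
    // This assigns 42, the condition is always true, and most compilers only warn.
    if (answer = 42) {
        std::printf("always taken\n");
    }
    return 0;
}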

David Kendal
  • 165
  • 7
  • 2
    Is this assigning the result of `some_function_call()` to `$variable`, and then using `$variable` as the boolean result for the `if`? **Yuck.** This adds too much cleverness for not enough benefit, and makes it difficult to figure out what your intent is. Do the world (or at least the programmer coming after you) a favor, and separate it into two operations; the assignment, and then the `if` check. – Robert Harvey Jan 12 '11 at 18:23
  • The only *similar* case I've seen is when using raw MySQL functions in PHP and a `while ($row = mysql_fetch_row` method. I don't recommend it. – Josh K Jan 13 '11 at 14:18
  • I do this in Java when I am reading from a `BufferedReader`. That is: `for(String line; (line = in.readLine()) != null;){ /*do stuff*/}` This is nice since I am only calling `in.readLine()` once in my code. – Jeremy Jan 17 '11 at 15:30
  • It's similar to Python's \n for line in open('file.txt','r').readlines():\n\tprint line – Christopher Mahan Jan 20 '11 at 22:19
  • I used to think I was terribly clever doing things like this. Then I saw the light. If I have to stop and have a big pause to figure out what's really going on... then the programmer failed. It's clearer, in the end, to break this into 2 statements / lines. – quickly_now Feb 06 '11 at 10:02
  • @Jeremy: I also use that technique. It is nice because Java makes your intention clearer because it forces you to explicitly convert the expression (`line = in.readLine()`) into a boolean expression (`!= null`). When the language automagically converts arbitrary expressions into boolean expressions, things can get a little wonky. – Adam Paynter Feb 20 '11 at 10:11
-8

Using the const keyword in C++

How hard is it to understand that functions starting with "Get..." are normally const? Also, has anyone seen evidence that the compiler optimizes const-riddled code better? At least when comparing the following two (a const function parameter vs. a template parameter), the latter performs faster (when optimization is enabled):

__forceinline void test(const int a);
test(1);
test(2);
test(3);


template<int a>
void test();
test<1>();
test<2>();
test<3>();
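
For balance, the comments below point out cases where const is more than a style choice; here is a minimal sketch (standard C++, hypothetical Point class) of two such cases:

#include <cstdio>

class Point {
public:
    Point(int x, int y) : x_(x), y_(y) {}
    int x() const { return x_; }                       // const getters: callable on const objects
    int y() const { return y_; }
    bool operator==(const Point& other) const {        // comparison must not modify either operand
        return x_ == other.x_ && y_ == other.y_;
    }
private:
    int x_, y_;
};

void print(const Point& p) {                           // a const reference also binds temporaries
    std::printf("(%d, %d)\n", p.x(), p.y());           // compiles only because the getters are const
}

int main() {
    print(Point(1, 2));                                // passing a temporary
    std::printf("%d\n", Point(1, 2) == Point(1, 2));   // uses the const operator==
}
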
AareP
  • 369
  • 1
  • 7
  • 5
    Have to disagree here; there are cases where const is required, mainly for operator overloading to work properly. (This probably why you are getting down votes.) – Jay Jan 12 '11 at 18:31
  • http://stackoverflow.com/questions/4669870/whats-wrong-with-const/4669936#4669936 – sbi Jan 19 '11 at 19:11
  • If you do embedded development on small micros, and actually look at the generated code, you will frequently find that using const where you want something to be read-only DOES generated better code. – quickly_now Feb 06 '11 at 10:03
  • @sbi: sure, every technique has some benefits. Unfortunately, the end result and productivity are what count. Why is C# considered to be a more productive language than C++? Because there's no const-correctness in it, that's why. :) Just kidding of course.. – AareP Feb 19 '11 at 19:39
  • @AreP: I'm actually programming in C#, and I think the main reason is its rich infrastructure (.NET libraries). As for productivity, nothing is less productive than spending days to hunt down silly bugs: http://chat.stackoverflow.com/transcript/message/254711#254711 – sbi Feb 19 '11 at 21:36