19

I've been here for nearly a month, and it seems people are quick to reach for the "premature optimization is the root of all evil" argument as soon as someone mentions efficiency.

What really counts as a premature optimization? Where is the line between simply designing a system well (or choosing certain methods) and optimizing prematurely?

Some aspects that I think would be interesting topics within this question:

  • In what way does the optimization influence code complexity?
  • How does the optimization influence development time/cost?
  • Does the optimization emphasize further understanding of the platform (if applicable)?
  • Is the optimization abstractable?
  • How does the optimization influence design?
  • Are "general solutions" the better choice instead of specific solutions for a problem because the specific solution is an optimization?

EDIT / update: I found these two links that are very interesting regarding what a premature optimization really is:
http://smallwig.blogspot.com/2008/04/smallwig-theory-of-optimization.html
http://www.acm.org/ubiquity/views/v7i24_fallacy.html

7 Answers

26

Any optimisation that carries an associated pessimisation (e.g. in readability or maintainability) without a clear benefit demonstrable via testing/profiling.

pauljwilliams
  • I like your answer, but I think it oversimplifies what I was really after. There are a lot of cases that you can't demonstrate by profiling or testing (such as software not yet implemented), and making a performance-aware design decision is definitely not always a bad thing, is it? I mean, one of the actual requirements of a system can be a certain run-time performance. If you didn't design the system with performance in mind, you'll most likely have to rewrite all of it. This thought is specifically directed towards "How does the optimization influence development time/cost?". –  Aug 13 '10 at 14:04
  • @Simon - I think he may have prematurely optimised this answer... – Paddy Aug 13 '10 at 14:27
  • @Simon, the answer stands. If you're making the code less readable, or less maintainable, and you don't have profile results to prove that you need to, then the optimization is premature. You haven't proven that the code isn't good enough; you haven't proven that you need to hurt your productivity *today* by making the code worse in the face of a *possible* problem tomorrow. See YAGNI: http://en.wikipedia.org/wiki/YAGNI – Joe White Aug 15 '10 at 13:17
  • @Joe White: I think that smallwig's theory of optimization applies, and I think that his reasoning is way better than yours. Randall Hyde has written an even more extensive "Fallacy of Premature Optimizations". I suggest that you read them, they're way better than "YAGNI". –  Aug 16 '10 at 07:21
  • Wow, really succinct and to the heart of the matter, +1. – peterchen Aug 16 '10 at 13:21
  • I'm with Visage here. I've seen many instances of people making changes "because it'll be faster" which end up making no difference to performance and reducing maintainability. I've seen NO instances of someone "having to rewrite" because of performance. Virtually all speedups, in the rare cases where work has to be done, can be done by rewriting a tiny fraction of the code. – DJClayworth Sep 28 '10 at 14:06
  • The only exception is an obvious mistake, like writing an O(n²) process where there is a good O(n) process that does the same (see the sketch after these comments). – DJClayworth Sep 28 '10 at 14:07
  • @DJClayworth: NO instances? The place I work currently has performance problems caused by the design of our SQL tables. If we'd thought ahead we'd have done a better design or gone for something noSQL. – Zan Lynx Feb 25 '11 at 23:38
  • @DJClayworth: The ONLY exception? Honestly, anyone claiming ANYTHING as unequivocally as that in the complex world of dos and don'ts, whys and why nots concerning how to construct well-performing systems should not and cannot be taken seriously. Perhaps you should just read instead? – Olof Forshell Feb 26 '11 at 00:12
  • @Zan I would have said good design would have isolated the design of the SQL tables, which meant that you could have optimized them at a later date without affecting the rest of the program. If your design didn't allow for that then I agree, that wouldn't have been premature. Having your table design locked in early was the problem there. – DJClayworth Feb 28 '11 at 15:36
  • @Olof Remember you are disagreeing with Donald Knuth here. However even Knuth only claimed that we should disregard performance "97% of the time". I'll go along with that. – DJClayworth Feb 28 '11 at 15:39
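
A concrete C++ sketch of DJClayworth's "obvious mistake" case (hypothetical code, not taken from any answer above): detecting duplicates with nested loops is O(n²), while a hash set does the same job in O(n).

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    // O(n^2): compares every pair; fine for tiny inputs, pathological for large ones.
    bool hasDuplicateQuadratic(const std::vector<int>& v) {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) return true;
        return false;
    }

    // O(n): remembers what it has already seen in a hash set.
    bool hasDuplicateLinear(const std::vector<int>& v) {
        std::unordered_set<int> seen;
        for (int x : v)
            if (!seen.insert(x).second) return true;  // insert failing means a duplicate
        return false;
    }
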
10

When you optimise something without profiling it. How can you be sure it will actually speed anything up? If anything, it is likely to make your code harder to read and less maintainable without any real benefit. Modern compilers have a huge array of optimisations they can apply; in many cases you may have no idea what effect (speed-wise) your optimisation will have.

In short: Profile, Optimise, Profile, Optimise...
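
A minimal sketch of that loop in C++, for when a full profiler is not at hand: time the suspected hot spot with std::chrono, change one thing, and measure again, keeping only the changes the numbers justify.

    #include <chrono>
    #include <cstdio>

    // Times a single call of `work` in milliseconds; a crude stand-in for a profiler.
    template <typename F>
    double measureMs(F work) {
        auto start = std::chrono::steady_clock::now();
        work();
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(stop - start).count();
    }

    int main() {
        volatile long sink = 0;  // volatile keeps the compiler from deleting the loop
        double baseline = measureMs([&] {
            for (long i = 0; i < 10000000L; ++i) sink = sink + i;
        });
        std::printf("baseline: %.2f ms\n", baseline);
        // ...apply one optimisation, measure again, and compare...
    }
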

Callum Rogers
  • You can design a system to be performant or not performant. By experience you can surely tell that doing something in a certain way leads to poor performance and avoid doing so (or suggest to other people that they shouldn't do so). Is that a premature optimization since the actual code hasn't even been implemented yet? Compilers really can't optimize stupid solutions. –  Aug 13 '10 at 14:06
7

A premature optimisation is one made too soon to know whether it will actually be an optimisation.

This is different to a bad optimisation:

  1. Maybe something that really does speed up an important piece of code isn't worth some downside, even if it was applied after profiling and user experience demonstrated its value; in that case it is bad, but not premature.

  2. On the other hand, maybe something done prematurely actually turns out to be just what was needed - in which case it was good, even though it was premature. Generally people who are guilty of premature optimisation will do this a lot, it's just that they'll balance it by doing stuff that makes things worse even more frequently.

Premature also doesn't mean early, just too early.

If you're designing an application that involves communication between two agents, and there is a way either to build a caching mechanism into the communication or to build on one in an existing communication system (e.g. if you use HTTP), then this caching is not premature, unless the responses won't actually be cacheable (in which case it's premature because it's pointless). Such a fundamental design decision, based on analysis of the problem (not profiling of actual code, since the code hasn't been written) and made for the sake of efficiency, can be the difference between feasibility and infeasibility.
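
A hypothetical C++ sketch of what such a design-level cache might look like (CachingChannel and fetchRemote are invented names for illustration, not anything from an existing API):

    #include <string>
    #include <unordered_map>

    // Stand-in for an expensive round-trip to the other agent (hypothetical).
    std::string fetchRemote(const std::string& key) {
        return "response for " + key;  // stub; imagine a network call here
    }

    class CachingChannel {
        std::unordered_map<std::string, std::string> cache_;
    public:
        // Returns a cached response when one exists; otherwise asks the remote agent.
        // Only worthwhile if responses for a given key really are cacheable.
        const std::string& get(const std::string& key) {
            auto it = cache_.find(key);
            if (it == cache_.end())
                it = cache_.emplace(key, fetchRemote(key)).first;
            return it->second;
        }
    };
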

Premature also doesn't mean going for the option that seems most likely to be efficient, given seemingly equivalent choices. You have to go for one of the options after all, so the likely efficiency is one of the factors you should weigh up. This is a case where intuition does have some value: (A) you don't yet have enough data to make an informed decision purely on the facts, and (B) some of the other factors you are weighing up are intuitive too, such as those that affect how readable the code is. Measuring readability beyond just seeing how readable the code is to you is only useful in a few cases; you can't measure the readability of something that hasn't been written, and it can only be measured statistically, so if you're the main dev your opinion matters more than a statistically valid sample of devs would. Move too far away from an option just because you fear being accused of premature optimisation, and you're just prematurely optimising for a different feature.

Nor does premature mean a small optimisation. If you can make a small improvement with no downside, go for it. Indeed, when Knuth said "premature optimisation is the root of all evil", he actually brought it up while defending the effort spent on small efficiency gains:

The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning; he will be wise to look carefully at the critical code, but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.

—Knuth, Donald. "Structured Programming with go to Statements", Stanford, 1974

Premature is when it is simply too early to know whether something you are doing with efficiency in mind (whether speed, memory, resource use, or a combination of these) will actually produce the benefits intended, whether it will bring other deficiencies along with it, and indeed whether it might harm efficiency in the very area it targets. A classic in the .NET world is people, often acting out of an emotional desire for control, seeing the GC as their enemy and working to "improve the efficiency" of garbage collection, thereby making it less efficient perhaps 95% or more of the time.

That's premature because it is, well, premature - too early.

Jon Hanna
  • "Premature is when it is simply too early to know whether something you are doing, with efficiency in mind, will actually produce the benefits intended and not also bring about other lacks..." I think that sentence is a really good definition of premature optimization. It fits with the fact that when designing a system, it is OK to have optimization in mind in order to "foresee" probable performance costs. As long as it doesn't make the system more complex or harder to work with in the future, it's even good if it's a small optimization... right? –  Aug 13 '10 at 14:19
  • And also, sometimes you **can** foresee definite performance costs. You should have as much of an idea of the conditions affecting your application as you can before you write the first line of code. Sometimes optimising on the basis of these is about probabilities, but sometimes you can tell with 100% certainty that a certain approach will be more performant. As said above, sometimes such optimisations are the difference between a feasible solution and one that's impossible with available hardware. –  Aug 13 '10 at 14:25
  • On the flip side, sometimes a reasonable decision you hoped would make things more efficient, and which doesn't hurt in any other regard, will have no effect, or a negative one, on efficiency. Such an optimisation wasn't premature, but it was incorrect. When you **do** come to profile, you may well end up replacing it with what you at first thought would be less efficient. For that matter, what is efficient today may be inefficient tomorrow and vice versa (changes in runtime are often talked about, but changes in the underlying data are a more common reason for good code becoming bad). –  Aug 13 '10 at 14:36
  • @Jon Hanna: That's a very good point. I think that people tend to ignore the way data is represented, and how that changes, way too often. –  Aug 13 '10 at 14:43
  • @Jon Hanna: One problem is that programmers are empirically bad at finding hot spots, which is why some of us keep insisting on profiling first. This obviously doesn't apply to the overall algorithm, or to things that might help but won't hurt, like using pre-increment operators when possible in C++ (see the sketch after these comments). – David Thornley Aug 13 '10 at 15:15
  • @Simon. It's certainly something I've come up against, where I've changed collection-type used and gained an improvement and then months later changed it back and made an improvement again, because what (and particularly, how much) is going into that container has changed. –  Aug 13 '10 at 15:40
  • @David. LOL, it's still second nature to me to use ++x over x++ in cases where they are equivalent, though in C# it's *extremely* unlikely that one ever beats the other. No harm done, though, and since so many are used to it, it becomes idiomatic, which aids readability. Perhaps we could call reasonable *a priori* decisions "aiming at the optimal", to differentiate from optimising (which *a priori* is a verb that doesn't yet have an object). Good aiming at the optimal is different to good optimising, and the latter needs profiling or at least some sort of measurement. –  Aug 13 '10 at 15:47
  • @Jon Hanna, @David: I think you guys are pointing out something very important here. "priori" decisions are different from person to person though and how abstract can you make such a decision? Post/pre increment is a small thing, but could using container X instead of container Y be such a thing? Using stack allocated data vs heap? Etc. –  Aug 13 '10 at 16:44
  • Hmm, seeing the full quote here, it appears to me that Knuth's 97% might have been referring to how often we should forget small efficiencies, rather than how often premature optimization is the root of evil as I've often interpreted it in the past. And the rest of the answer is pretty great, too, so have a +1. – 8bittree Jul 27 '16 at 13:43
  • @8bittree he certainly is. For how often premature optimization is the root of all evil that's inherently 100% ("all evil") but also clearly hyperbole. It's worth reading his whole article, btw. – Jon Hanna Jul 27 '16 at 15:03
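
As a footnote to David Thornley's pre-increment remark above, a minimal C++ sketch: for iterators, it++ must construct a copy of the old value, while ++it need not, so preferring ++it costs nothing and occasionally helps.

    #include <list>

    int sum(const std::list<int>& xs) {
        int total = 0;
        // ++it avoids the temporary copy that it++ makes of the iterator;
        // for plain ints the compiler emits identical code either way.
        for (auto it = xs.begin(); it != xs.end(); ++it)
            total += *it;
        return total;
    }
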
4

Premature optimization is often taken to mean something that makes the code less legible while attempting to increase performance. As for what, exactly, it looks like, the best I can offer is "you know it when you see it", which probably doesn't help.

Some offhand examples that I can think of:

  1. Using bit mangling to increment a counter in a for-loop (sketched after this list).
  2. Creating one gigantic function instead of many smaller functions to avoid function call overhead.
  3. Setting everything public to avoid the overhead of mutators and accessors.
  4. Creating a list of commonly used strings within a data object so that they do not need to be constantly reallocated (string pooling anyone?)
  5. Writing your own collection which mirrors functionality within an existing collection to avoid the overhead of bounds checking.

I'm sure there's more I've seen, but you get the idea.
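
As a hypothetical illustration of item 1, the two loops below behave identically; the bit-twiddled version buys nothing on a modern compiler and costs every future reader some effort.

    #include <cstdio>

    void process(unsigned i) { std::printf("%u\n", i); }  // stand-in workload

    // Plain and readable.
    void plain(unsigned n) {
        for (unsigned i = 0; i < n; ++i) process(i);
    }

    // "Clever": -~i equals i + 1 in two's complement arithmetic.
    // Same machine code as ++i, but much harder to read.
    void mangled(unsigned n) {
        for (unsigned i = 0; i < n; i = -~i) process(i);
    }
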

wheaties
  • These are simply examples of common low-level optimizations. I'm afraid they don't really address any of the topics that I felt were interesting in the question. –  Aug 13 '10 at 14:07
  • Yes, they are common low-level optimizations. More to the point, however, is that if they're done before you know you need them, they make reading the code more difficult. They circumvent encapsulation, code reuse, and often the DRY principle. Unless you need them, they're premature optimizations. – wheaties Aug 13 '10 at 14:11
  • @wheaties: I totally agree. Does this mean that high-level optimizations made at the design level of a system are not premature optimizations? Or where is the line drawn? At the point where it makes the code more difficult to read/debug/use? –  Aug 13 '10 at 14:12
  • Good question and I think the answer is "It Depends." – wheaties Aug 13 '10 at 14:26
  • @wheaties: On what? :) That's partly what I was after when I wrote the question. How can you determine what is and what isn't a premature optimization? –  Aug 13 '10 at 14:29
  • As to readability, quite often yesterday's awkward optimisation is today's idiom. A coder in C or a C-based language that allows fall-through (i.e. not C#) should recognise Duff's Device well enough that it reads perfectly well. This doesn't mean it should be used (insert standard comment about compiler loop unwinding here), but that the readability factor has changed over time, not because the computers or the languages changed, but because the people did. –  Aug 13 '10 at 14:40
  • @wheaties: On a certain microcontroller and cross compiler I use a lot, "do {} while(--index);" loops produce faster and more compact code than other loop styles. Sometimes it matters; other times it doesn't. Generally, when coding for that micro I use the "do {} while(--index);" loop as the 'standard idiom' for loops where I don't need an up-counting index (see the sketch after these comments). Is that "premature optimization" or simply a coding style? Incidentally, I use do-while rather than while unless there may be a need to skip the loop entirely. –  Aug 13 '10 at 15:30
  • You're dealing with a whole different regime than what I've mentioned up there in my post. In embedded systems, space/time/processing power are limited and the usual rules don't apply. On the other hand, if you're applying the same techniques to an enterprise application spanning multiple machines you're doing premature optimization. Let me clarify above. – wheaties Aug 13 '10 at 15:40
  • @wheaties: you claim that "In embedded systems, space/time/processing power are limited and the usual rules don't apply" does this mean that you believe that "In non-embedded systems, space/time/processing power are unlimited and the usual rules apply"? How do you know where to draw the line? – Olof Forshell Feb 25 '11 at 23:40
  • @supercat: I'd call it applying hard-earned experience to everyday programming. Your choices and constructs are somewhat different (but not overly so) from what a newbie might write, they take no extra time (for you) to write, and in the end they could well be the difference between being a project's hero or a zero. In achieving hard-earned experience I gather you've been called a "premature optimizer" on numerous occasions, and been proud of it! :-) – Olof Forshell Feb 25 '11 at 23:46
  • @Olof Forshell You draw the line where it's needed on a case by case basis. That is to say, if you have to write code down to the machine level then you deal with it, but only if your system is either highly limited or must have execution speed minimized. I'm talking about systems where milliseconds matter. – wheaties Feb 26 '11 at 13:28
  • @wheaties: Ok so there's some sort of absolute line (or boundary?) at a few (I think you mean) milliseconds and above that you do something and below you don't on a case by case basis. Is that a fair summation of your comment? – Olof Forshell Feb 28 '11 at 12:28
  • @Olof Forshell I could say yes but that wouldn't be completely true. In reality you need to profile and identify the "hot spots" of your application. If that means there's one part that needs micro-optimization at the millisecond level, then you do it there. In general, however, that's a fairly good rule of thumb. Millisecond optimizations are rarely, if ever, needed and when you're in that realm you can worry about them. – wheaties Feb 28 '11 at 12:39
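
For reference, supercat's down-counting idiom from the comments above, as a hedged C++ sketch; on some microcontroller targets the decrement-and-branch form maps to fewer instructions, while desktop compilers usually generate equivalent code for both.

    // Conventional up-counting loop; handles len == 0 naturally.
    void clearUp(unsigned char* buf, unsigned len) {
        for (unsigned i = 0; i < len; ++i) buf[i] = 0;
    }

    // Down-counting do/while: only safe when len is known to be nonzero,
    // which is why supercat falls back to a plain while when the loop
    // may need to be skipped entirely.
    void clearDown(unsigned char* buf, unsigned len) {
        unsigned i = len;
        do {
            buf[--i] = 0;
        } while (i != 0);
    }
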
2

A typical premature optimization I have encountered many times is doing everything in the same function because of "function call overhead". The result is an insanely huge function that is difficult to maintain.

The fact is: you don't know where the code is slow until you actually run a profiler. I saw a very interesting demonstration of this point when I was asked to optimize some code, a long time ago. I had already taken a course in code optimization, so I ran gprof and checked. The result was that 70% of the running time was spent zeroing arrays. In such a situation, the speedup you can obtain elsewhere is negligible, and any further effort just makes things worse.

Another case is handmade loop unrolling. Most compilers support loop unrolling as a feature, but unrolling every loop you write just because you were told it's a possible speedup strategy is wrong. The point of optimization is not to write high-performance code everywhere; it is to write high-performance code where there's a need for it.
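
A hypothetical sketch of what hand-unrolling looks like, and why it is usually premature; a modern compiler at -O2 will typically unroll or vectorize the plain loop on its own when it pays off.

    #include <cstddef>

    // Straightforward version: let the compiler decide whether to unroll.
    long sumPlain(const int* a, std::size_t n) {
        long s = 0;
        for (std::size_t i = 0; i < n; ++i) s += a[i];
        return s;
    }

    // Hand-unrolled by four: more code, a tail-handling bug waiting to happen,
    // and rarely faster than what the compiler produces from the plain loop.
    long sumUnrolled(const int* a, std::size_t n) {
        long s = 0;
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4)
            s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
        for (; i < n; ++i)  // leftover elements
            s += a[i];
        return s;
    }
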

To answer your questions

In what way does the optimization influence code complexity?

optimized code most likely contains clever tricks, and clever tricks tend to be (1) not portable across platforms and (2) not immediately easy to understand. Optimization necessarily produces more difficult code.

How does the optimization influence development time/cost?

you pay a development price for optimization: the brainwork itself, the code deployment and testing, and the less maintainable code you end up with. For this reason, optimization is best done on code that you are not going to touch in the future (e.g. base libraries or frameworks). Brand-new code and research code are better left unoptimized, for obvious reasons.

Clearly, performance can be a product, so the important point is to strike a balance between development investment and customer satisfaction.

Does the optimization emphasize further understanding of the platform (if applicable)?

Absolutely. Pipelining requires a specific arrangement of instructions to avoid stalling the pipeline. Highly parallel machines require rearrangement and communication patterns that sometimes depend on the architecture or the networking of the cluster.

Is the optimization abstractable?

there are patterns for optimization, so in this sense yes, it can be abstracted. In terms of code, I doubt it, unless you delegate some optimizations to the compiler through auto-detection or pragmas.

How does the optimization influence design?

potentially a lot, but if you have a good design, the optimization tends to be hidden (simple examples: replacing a linked list with a hash table, or performing a Fortran do-loop column-wise instead of row-wise). In some cases, however, a better design also has better performance (as with the flyweight pattern).
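
The row-wise/column-wise point translated to C++ (which is row-major, the opposite of Fortran), as a hedged sketch; both functions compute the same sum, but the second strides through memory and defeats the cache.

    #include <cstddef>
    #include <vector>

    // Row-major friendly: consecutive accesses touch consecutive memory.
    double sumRowMajor(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
        double s = 0;
        for (std::size_t r = 0; r < rows; ++r)
            for (std::size_t c = 0; c < cols; ++c)
                s += m[r * cols + c];
        return s;
    }

    // Cache-hostile: each access jumps a whole row ahead in memory.
    double sumColMajor(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
        double s = 0;
        for (std::size_t c = 0; c < cols; ++c)
            for (std::size_t r = 0; r < rows; ++r)
                s += m[r * cols + c];
        return s;
    }
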

Are "general solutions" the better choice instead of specific solutions for a problem because the specific solution is an optimization?

Not really. By "general solutions" I assume you mean general approaches such as well-known design patterns. If you find that you need to change the pattern to satisfy your needs, that's not an optimization; it's a design change aimed at your specific problem. Optimization is aimed at improving performance. In some cases you change the design to accommodate performance, and then the design change is also a performance optimization, which can either reduce or increase maintainability depending on whether the new design is more complex or not. An overdesigned system can also have performance problems.

That said, optimization is a case-by-case thing. You know the art, and you apply it to your specific problem. I'm not a great code optimizer, but I am a good refactorer, and the step between the two is rather short.

Stefano Borini
  • Sun's JIT compiler for example likes small methods. Refactoring to more and smaller methods (that are called often) can speed up things when on a JVM. – eljenso Aug 13 '10 at 13:25
  • Mostly I see the insanely huge subroutine due to simple incompetence. If someone does it *on purpose*, they really need to be flogged with the cluestick. –  Aug 13 '10 at 13:34
  • I think it's interesting that you took my topics and addressed them as specific to any general optimization. I was thinking of them more as points to think about, not questions. I really like one of the things you point out regarding "How does the optimization influence design?": "potentially a lot, but if you have a good design, optimization tends to be hidden". Does this mean that proposing a "better" design for a system (which leads to better performance) is not a premature optimization? Or is it? –  Aug 13 '10 at 14:11
  • @T.E.D. : I should do flogging at least three major research institutes then. – Stefano Borini Aug 13 '10 at 16:11
  • @Simon : that is debatable, and a matter of point of view. You can produce a completely stupid design that leads to slow performance. In some sense, it would be comparable to building a shopping mall with the parking lot on the opposite side from the entrance. Doing it the right way is kind of a premature optimization, but it's an optimization of the design, not of the implementation. The "premature optimization" problem is mostly about implementation optimization, although you should also not prematurely optimize a design unless you already know it's going to be slow. – Stefano Borini Aug 13 '10 at 16:14
  • @Simon : in other words, the evil of premature optimization lies in focusing on planning and implementing optimizations from the beginning, when you don't know anything about the performance beforehand (and even if you think you know, you'd have to be in exactly the same situation for it to carry over). Pure code optimization (implementation) must always come last, because in general it is a very fine-grained, low-level process, compared to design, which is a high-level one. Premature design optimization can result in analysis paralysis, which is a baaad situation. – Stefano Borini Aug 13 '10 at 16:18
  • @Stefano Borini - Three? Well...you'd best get started then. :-) –  Aug 13 '10 at 16:24
  • @Stefano Borini: Any optimization, be it premature or not, can fail and lead to worse performance. I agree that pure code optimization (implementation) must always come last. I must say that you surely can know something about performance beforehand. A pretty good example is designing a system that places data in a non-cache-friendly way when the main idea of the system is to iterate over a set of data and perform low-level processing on it. I've seen several "premature optimization!" screams at suggestions for improving that kind of system, and I think they are not. –  Aug 13 '10 at 16:39
  • @Stefano Borini: unless you're a complete newbie you should have at least somewhat of an inkling of what performance is possible beforehand. If you're experienced it should no longer be an inkling (though I've seen such "professionals" too) but rather a pretty good idea. As stated elsewhere in answers to this questions a performance-savvy programmer will write better-performing code without thinking about it - and at no extra penalty to the project - than one who isn't and who has never thought about performance. A well-performing system starts at the drawing-board though. – Olof Forshell Feb 25 '11 at 23:58
1

I'll try to describe how I look at the development process with respect to performance and (not always premature) optimization. Also, make sure you have the complete quote:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.

(Donald Knuth)

Yet attempts at optimization "after the fact", i.e. after system development has completed, are very costly and hard to justify. It's like building a new aircraft and, a few days before the first test flight, realizing it's too heavy. After a period of frenzy trying to make the plane lighter it is able to get off the ground, but for any sort of practical use a total redesign is necessary. We can laugh at the thought, but this scenario is commonplace in computer systems.

To me it's very much about project organization.

All too many projects are unconcerned with the good response times that user productivity and experience require. Lip service will be paid to the concept of a "well-performing" or "high-performance" system. Many will assume that "the machines are so fast that it won't be a problem" or "if performance tests show that we have a problem, we'll fix it then." In such organizations optimizations are almost always deemed premature. Anyone concerned with performance will be put in his place, or even reprimanded just for trying to quantify it. "We'll do it ... later." We'll turn the steering wheel after we've gone off the cliff.

So you've finally completed the system, but performance testing indicates that you'll have response times of twenty-three seconds instead of "no more than three." So you optimize, and lowering response times by a third is a reasonable goal (at least the first time you do it), but even then we're still talking about fifteen seconds rather than "no more than three." So now you have to go to your customer and explain why his server hardware will cost eight times as much, and the upgraded internet connection three times as much, as you'd led him to believe. "Actually the system is better than we originally planned" - except no one will want to use it.

To sum it up: premature optimization is NOT the root of all evil, especially if we're talking about software development. There are plenty of worthier candidates to consider first: poor planning, poor guidelines, poor leadership, indifferent developers, poor follow-up, timid project management, and so on.

Mark Twain's quote "everyone talks about the weather but no one does anything about it" is very apt here: many developers talk performance yet do nothing about it, except perhaps make it less achievable. In fact, a great deal can be done from the very start of a project to assure, and later achieve, good (if not stellar) performance.

Relatively few projects will be organized such that performance has a reasonable chance of being attained:

Optimization begins at the drawing board, when you think through how the program is supposed to work and what performance it needs to attain. Performance is often called a non-functional requirement, tested with non-functional testing, but I disagree. BTW, doesn't that sound horrible, "non-functional"? It's as if it's destined never to be. I claim that performance, in the form of response time, can make or break a system. What use is the greatest system on earth if the user tires of waiting thirty seconds or more for a response? So bring in performance at the very start and let it go head to head with the system's other requirements.

At the planning stage you should have identified some critical areas where you do not know whether or how adequate performance can be achieved. Then you experiment with different solutions and time them using rdtsc. A typical problem area is SQL databases, where many designers have a "by the book" approach rather than a practical one. This may result in unnecessarily many tables.

So now you know how to move forward through the critical areas, and you set up guidelines. In a client/server system, the client should carry real capabilities. Too many clients send overly detailed requests to the server, thereby overloading it, or expect the server to order the data because the client developers are too ignorant to see the performance impact and/or too lazy to implement ordering in the client: "why should I, the server already has that functionality? Should I implement it again - that's wasting resources!" Forbid clients from formulating SQL statements directly to the server. You might also want to integrate timing at the functionality level into the apps themselves (using rdtsc on x86, for example).
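
A minimal sketch of the rdtsc timing mentioned above, using the __rdtsc intrinsic that GCC/Clang expose via <x86intrin.h> and MSVC via <intrin.h>; tick counts are relative, so compare measurements against each other rather than reading them as seconds.

    #include <cstdint>
    #include <cstdio>
    #if defined(_MSC_VER)
    #include <intrin.h>
    #else
    #include <x86intrin.h>
    #endif

    int main() {
        std::uint64_t start = __rdtsc();
        volatile long sink = 0;              // keeps the loop from being optimized away
        for (long i = 0; i < 1000000L; ++i)
            sink = sink + i;                 // the functionality being measured
        std::uint64_t stop = __rdtsc();
        std::printf("elapsed: %llu cycles\n",
                    static_cast<unsigned long long>(stop - start));
    }
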

So now development starts. As part of the development guidelines you have performance requirements, such as that no client-serving functionality may consume more than 20 (or 5, or whatever) ms of server execution time. Any software exceeding that runs the risk of being rejected. Your guidelines may also list code constructs to avoid as well as those to prefer, and rules to keep SOAP data sizes under control if frameworks are being used.

Since you have timing in place in the apps, you can continuously monitor their processing efficiency during development. Functionality that executes too slowly is treated as a bug: prioritized and eventually corrected. Yes, "corrected" just like a bug, because it is a bug!
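
One hedged way to wire such a budget into the application itself (all names here are invented for illustration): an RAII timer that reports any handler exceeding its allotted milliseconds, so the overrun surfaces during development like any other bug.

    #include <chrono>
    #include <cstdio>

    // Logs a warning when the enclosing scope exceeds its time budget.
    class BudgetTimer {
        const char* name_;
        double budgetMs_;
        std::chrono::steady_clock::time_point start_;
    public:
        BudgetTimer(const char* name, double budgetMs)
            : name_(name), budgetMs_(budgetMs),
              start_(std::chrono::steady_clock::now()) {}
        ~BudgetTimer() {
            double ms = std::chrono::duration<double, std::milli>(
                            std::chrono::steady_clock::now() - start_).count();
            if (ms > budgetMs_)
                std::fprintf(stderr, "PERF BUG: %s took %.1f ms (budget %.1f ms)\n",
                             name_, ms, budgetMs_);
        }
    };

    void handleClientRequest() {
        BudgetTimer t("handleClientRequest", 20.0);  // the 20 ms guideline from above
        // ...actual request handling...
    }
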

In the end you performance-test your apps. If the previous steps have been in place and enforced, the tests will merely confirm that you have the performance you planned for all along.

So now you're "faced with" the luxury issue of having a product that performs the way it should, with planned-for and achieved response times. You may opt to "shoot for the moon", i.e. lower the response times further to enhance the end user's experience. A typical and achievable goal is lowering them by a third, say from 1.5 seconds to 1. To do this you use the built-in timing to identify the sequences that matter, write bug reports, and then correct them.

Why the "don't-optimizers" you mention in your question are so vocal (why not "rabid?") we can only speculate about. I'll offer a few suggestions: they may have tried it themselves but were unsuccessful (lack of strategy, lack of skills, thought it was boring). They may be under the impression that any source code (good as well as bad) can be translated by the compiler into good and fast code or executed quickly by the interpreter. This is not true. Just as it is possible to write an app which kills even the most capable hardware it is also possible to write one which makes the most efficient use of it. The don't-optimizers will typically pepper their responses with words like always, never, waste of time, stupid, brain-dead etc. That they feel they have something worthwhile to say without really knowing the different issues involved is, well, embarassing. But I guess that if you don't know what you're talking about then you can always act as if you do.

0

As an example, I have seen developers focus on specific code optimizations that end up making no difference. The focus should be on getting the thing working, then using profilers to tell you where the time is being spent. Using profilers is akin to going after the low-hanging fruit.

The compiler itself will make many optimizations. I would think that if you keep your code complexity low, the compiler has a better chance of applying more of them.

Some optimizations are specific to the implementation, and if the architecture/design is still in flux, a lot of time and effort can be wasted.

  • I don't think "low hanging fruit" is the right analogy. That usually means "the least amount of effort". Through profiling you learn what are your bottlenecks and what will give the greatest return on investment. – Bryan Oakley Aug 13 '10 at 13:21
  • I meant the 'low hanging fruit' to be the glaring sections of the application that should be looked at first. Once all these obvious bottlenecks are addressed, more in-depth analysis may be needed to squeeze out more benefits. –  Aug 13 '10 at 13:41
  • Profiling tells you where the tastiest fruit is. You then balance how tasty it is with how low-hanging it is, in deciding which to go after first. –  Aug 13 '10 at 14:12
  • @Bryan Oakley: If the machine spends 60% of its time executing one piece of code, and 1% of its time executing another, cutting the execution cost of the former by 1.5% will improve system performance as much as cutting the cost of the latter by 90%. Unless the former code is really efficient or the latter code is really inefficient, it's going to be easier to shave the cost of the former by 1.5% than cut the latter by 90%. –  Aug 13 '10 at 15:40
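
To spell out the arithmetic in supercat's comment: trimming 1.5% off a routine that accounts for 60% of the runtime saves 0.60 × 0.015 = 0.9% of total time, which is exactly what a 90% cut to a routine accounting for 1% saves (0.01 × 0.90 = 0.9%), and the small trim to the hot routine is usually far easier to achieve.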