I'll try to describe how I view the development process with respect to performance and (not always premature) optimization. But first, make sure you have the complete quote:
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified."
(Donald Knuth)
Yet attempts at optimization "after the fact", i.e. after system development has completed, are very costly and can seldom be justified. It's like building a new aircraft and realizing, a few days before the first test flight, that it's too heavy. After a period of frenzied work to make the plane lighter it manages to get off the ground, but for any sort of practical use a total redesign is necessary. We may laugh at the thought, but this scenario is commonplace with computer systems.
To me it's very much about project organization.
All too many projects are unconcerned with the good response times that are necessary for user productivity and experience. Lip service will be paid to the concept of a "well-performing" or "high-performance" system. Many will assume that "the machines are so fast that performance won't be a problem" or that "if performance tests show we have a problem, we'll fix it then." In such organizations optimizations are almost always deemed premature. Anyone concerned with performance will be put in his place, or even reprimanded just for trying to quantify it. "We'll do it ... later." We'll turn the steering wheel after we've gone off the cliff.
So you've finally completed the system, but performance testing indicates response times of twenty-three seconds instead of the promised "no more than three." So you optimize, and lowering the times by a third is a reasonable goal (at least the first time you do it), but even then we're still talking about fifteen seconds rather than "no more than three." So now you have to go to your customer and explain why his server hardware is going to cost eight times as much, and the upgraded internet connection three times as much, as you'd led him to believe. "Actually the system is better than we originally planned" - except no one will want to use it.
To sum up: premature optimization is NOT the root of all evil, especially in software development. There are plenty of worthier candidates to consider first: poor planning, poor guidelines, poor leadership, indifferent developers, poor follow-up, timid project management and so on.
The quote attributed to Mark Twain, "everyone talks about the weather but no one does anything about it," is very apt here: many developers talk performance yet do nothing about it, except perhaps make it less achievable. In fact, a great deal can be done from the very start of a project to assure, and later to achieve, good (if not stellar) performance.
Relatively few projects are organized such that performance has a reasonable chance of being attained. Here is how it can be done:
Optimization begins at the drawing board, when you think through how the program is supposed to work and what performance it needs to attain. Performance is often called a non-functional requirement, tested with non-functional testing, but I disagree. BTW, doesn't that sound horrible - "non-functional"? It's as if it's destined never to be. I claim that performance in the form of response time can make or break a system. What use is the greatest system on earth if the user tires of waiting thirty seconds or more for a response? So bring performance in at the very start and let it go head to head with the system's other requirements.
At the planning stage you should have identified some critical areas where you do not yet know if or how adequate performance can be achieved. Then you experiment with different solutions and time them, for example using rdtsc. A typical problem area is SQL databases, where many designers take a by-the-book approach rather than a practical one, which may result in far more tables than necessary.
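A minimal sketch of such an experiment, assuming GCC or Clang on x86 (candidate_a, candidate_b and the iteration count are placeholders for the real designs under test):

    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>  /* __rdtsc() with GCC/Clang; MSVC has it in <intrin.h> */

    /* Placeholder candidates -- substitute the real solutions under test,
       e.g. a query against a fully normalized schema vs. a flatter one. */
    static void candidate_a(void) { /* ... */ }
    static void candidate_b(void) { /* ... */ }

    /* Average cycle cost of one call, amortized over many iterations. */
    static uint64_t cycles_per_call(void (*fn)(void), int iterations)
    {
        uint64_t start = __rdtsc();
        for (int i = 0; i < iterations; i++)
            fn();
        return (__rdtsc() - start) / (uint64_t)iterations;
    }

    int main(void)
    {
        printf("candidate_a: %llu cycles/call\n",
               (unsigned long long)cycles_per_call(candidate_a, 100000));
        printf("candidate_b: %llu cycles/call\n",
               (unsigned long long)cycles_per_call(candidate_b, 100000));
        return 0;
    }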
So now you know how to move forward through the critical areas, and you set up guidelines. In a client/server system the client should have real capabilities. Too many clients send overly detailed requests to the server, thereby overloading it. Or they expect the server to order the data, because the client developers are too ignorant to see the performance impact and/or too lazy to implement ordering in the client: "why should I, the server already has that functionality? Should I implement it again - that's wasting resources!" Forbid clients from sending SQL statements directly to the server. You might also want to integrate timing at the functionality level into the apps themselves (using rdtsc on x86, for example).
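To illustrate the ordering guideline, here is a sketch of a client sorting its own result set with qsort instead of pushing an ORDER BY onto the server; the Row layout and function names are invented for the example:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical row type as delivered by the server -- illustrative only. */
    typedef struct {
        int  id;
        char name[64];
    } Row;

    static int cmp_by_name(const void *a, const void *b)
    {
        return strcmp(((const Row *)a)->name, ((const Row *)b)->name);
    }

    /* The server does one cheap, unordered scan; each client then pays
       for its own ordering instead of piling that work onto the server. */
    void present_rows(Row *rows, size_t count)
    {
        qsort(rows, count, sizeof(Row), cmp_by_name);
        /* ... render the sorted rows in the UI ... */
    }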
So now development starts. The development guidelines include performance requirements, such as that no client-serving functionality may consume more than 20 (or 5, or whatever) ms of server execution time. Any software exceeding that runs the risk of being rejected. The guidelines may also list code constructs to avoid as well as those to prefer, and rules for keeping SOAP data sizes under control if frameworks are being used.
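A sketch of how such a budget might be enforced in a test harness, assuming POSIX clock_gettime is available; the budget value and the handle_request placeholder are illustrative, not a real API:

    #include <stdio.h>
    #include <time.h>

    #define SERVER_BUDGET_MS 20.0  /* per-functionality budget from the guidelines */

    static double elapsed_ms(struct timespec t0, struct timespec t1)
    {
        return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }

    /* Returns 0 if the functionality stays within budget, nonzero otherwise. */
    int check_budget(void (*handle_request)(void), const char *name)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        handle_request();  /* the server functionality under test */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = elapsed_ms(t0, t1);
        if (ms > SERVER_BUDGET_MS) {
            fprintf(stderr, "REJECT %s: %.1f ms (budget %.0f ms)\n",
                    name, ms, SERVER_BUDGET_MS);
            return 1;
        }
        return 0;
    }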
Since you have timing in place in the apps, you can continuously monitor their processing efficiency throughout development. When certain functionality executes too slowly, that is treated as a bug, which is prioritized and eventually corrected. Yes, "corrected" just like a bug - it is a bug!
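The monitoring side can be quite small. Here is a sketch of a per-functionality statistics table that the timing hooks feed and that is dumped periodically; all names and sizes are made up for the example:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct {
        char     name[32];
        uint64_t calls;
        uint64_t total_cycles;
        uint64_t worst_cycles;
    } PerfStat;

    static PerfStat stats[64];
    static int      stat_count;

    /* Called by the timing hooks after each measured functionality. */
    void perf_record(const char *name, uint64_t cycles)
    {
        for (int i = 0; i < stat_count; i++) {
            if (strcmp(stats[i].name, name) == 0) {
                stats[i].calls++;
                stats[i].total_cycles += cycles;
                if (cycles > stats[i].worst_cycles)
                    stats[i].worst_cycles = cycles;
                return;
            }
        }
        if (stat_count < 64) {
            PerfStat *s = &stats[stat_count++];
            snprintf(s->name, sizeof s->name, "%s", name);
            s->calls = 1;
            s->total_cycles = s->worst_cycles = cycles;
        }
    }

    /* Anything whose average or worst case creeps up gets a bug report. */
    void perf_report(void)
    {
        for (int i = 0; i < stat_count; i++)
            printf("%-20s calls=%llu avg=%llu worst=%llu cycles\n",
                   stats[i].name,
                   (unsigned long long)stats[i].calls,
                   (unsigned long long)(stats[i].total_cycles / stats[i].calls),
                   (unsigned long long)stats[i].worst_cycles);
    }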
In the end you performance test your apps. If the previous steps have been in place and enforced, the tests will merely confirm that you have the performance you planned for all along.
So now you're "faced with" the luxury problem of a product that performs the way it should, with planned-for and achieved response times. You may opt to "shoot for the moon," i.e. lower the response times further to enhance the end user's experience. A typical and achievable goal is lowering them by a third, say from 1.5 to 1 second. To do this you use the built-in timing to identify the sequences that matter, write bug reports, and then correct them.
Why the "don't-optimizers" you mention in your question are so vocal (why not "rabid?") we can only speculate about. I'll offer a few suggestions: they may have tried it themselves but were unsuccessful (lack of strategy, lack of skills, thought it was boring). They may be under the impression that any source code (good as well as bad) can be translated by the compiler into good and fast code or executed quickly by the interpreter. This is not true. Just as it is possible to write an app which kills even the most capable hardware it is also possible to write one which makes the most efficient use of it. The don't-optimizers will typically pepper their responses with words like always, never, waste of time, stupid, brain-dead etc. That they feel they have something worthwhile to say without really knowing the different issues involved is, well, embarassing. But I guess that if you don't know what you're talking about then you can always act as if you do.