If you are a programmer, do not consider yourself a "computer scientist"; computer scientists are the ones creating the next generation of computers, some of which are still science fiction until the correct mix of materials, miniaturization, and computational theory is derived. They are only the start of the pipeline. People who develop software in the here and now are "software engineers"; they take those theories and tools, sometimes layering practical theory and real-world tools on top, to harness the power in potentia of this complex piece of electronic wizardry and make it do what we want. That is in turn one specialization of the field of "computer engineering", which takes the theories of the computer scientists and applies them, in hardware and software, to real-world end-user electronic solutions.
This is, IMO, where business meets theory. In these types of cases, the old adage "the enemy of better is good enough" can easily be turned around to read "the enemy of good enough is better". Considering yourself an "engineer" instead of a "scientist", and putting what you do in parallel with other engineering disciplines, throws the differences into relief.
Let's say a client comes to you, a civil/structural engineer, and asks you to build a bridge. The bridge needs to span 20 feet, support itself plus a one-ton carry load, and last 10 years with routine maintenance, and they want it in a month for $20,000. Those are your constraints; meet the minimums without exceeding the maximums. Doing that is "good enough", and gets you the paycheck. It would be poor engineering to build the Golden Gate Bridge, far exceeding both the design specs and the budget by several orders of magnitude; you usually end up eating the cost overruns and paying penalties for time overages. It would also be poor engineering to construct a rope bridge rated for the weight of five grown men, even though it cost only $1,000 in time and materials; you don't get good client reviews and testimonials, and depending on your contract you'll be told to take it down and do it again, for no additional money beyond the contract.
Back in software, say you have a client who needs a file-processing system built to digest incoming files and put their information into the system. They want it done in a week, and it has to handle five files a day, about 10MB worth of data, 'cause that's all the traffic they currently get. Your precious theories largely go out the window; your task is to build a product that meets those specs in a week, because by doing so you also meet the client's cost budget (materials are generally a drop in the bucket for a software contract of this size). Spending two weeks, even for ten times the gain, is not an option, but most likely, neither is a program built in a day that can only handle half the throughput, with instructions to run two copies of it.
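To make that concrete, here is a minimal sketch of what the "good enough" week-one version might look like, assuming the files are plain text dropped into a single folder; the folder paths and the ImportLine routine are hypothetical placeholders for whatever the client's system actually needs. No threading, no streaming parser, no retry logic: at five smallish files a day, none of that earns its keep yet.

```csharp
using System;
using System.IO;

class FileImporter
{
    static void Main()
    {
        // Hypothetical drop folders agreed on with the client.
        var inbox = @"C:\Imports\Inbox";
        var done  = @"C:\Imports\Processed";
        Directory.CreateDirectory(done);

        foreach (var path in Directory.GetFiles(inbox, "*.txt"))
        {
            // Read each line and push its record into the system.
            foreach (var line in File.ReadLines(path))
                ImportLine(line);

            // Move the file aside so the next run doesn't re-process it.
            File.Move(path, Path.Combine(done, Path.GetFileName(path)));
        }
    }

    // Placeholder for the real import logic.
    static void ImportLine(string line)
    {
        Console.WriteLine($"Imported: {line}");
    }
}
```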
If you think this is a fringe case, you are wrong; this is the daily environment of most in-housers. The reason is ROI; this initial program doesn't cost much and will thus pay for itself very quickly. WHEN the end users need it to do more or go faster, the code can be refactored and scaled.
That's the main reason behind the current state of programming: the assumption, borne out by the entire history of computing, is that a program is NEVER static. It will always need to be upgraded, and it will eventually be replaced. In parallel, the constant improvement of the computers the programs run on allows for decreased attention to theoretical efficiency and increased attention to scalability and parallelization (an algorithm that runs in N-squared time but can be parallelized across N cores will appear to run in linear time, and the cost of more hardware is often lower than the cost of developer time spent devising a more efficient solution).
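As a toy illustration of that trade-off (the workload and numbers here are invented), the sketch below leaves an all-pairs O(N²) comparison exactly as it is and simply spreads the outer loop across the available cores with Parallel.For, rather than redesigning it around a cleverer algorithm:

```csharp
using System;
using System.Threading.Tasks;

class ParallelExample
{
    static void Main()
    {
        var rng = new Random(42);
        int n = 5000;
        var xs = new double[n];
        var ys = new double[n];
        for (int i = 0; i < n; i++) { xs[i] = rng.NextDouble(); ys[i] = rng.NextDouble(); }

        var closeNeighbours = new int[n];

        // The outer loop is split across all available cores; each i is
        // independent, so the quadratic inner loop parallelizes cleanly.
        Parallel.For(0, n, i =>
        {
            int count = 0;
            for (int j = 0; j < n; j++)
            {
                double dx = xs[i] - xs[j], dy = ys[i] - ys[j];
                if (i != j && dx * dx + dy * dy < 0.0001) count++;
            }
            closeNeighbours[i] = count;
        });

        Console.WriteLine($"Point 0 has {closeNeighbours[0]} close neighbours.");
    }
}
```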
On top of that, there is the very simple tenet that every line of developer code is something else that can go wrong. The less a developer writes, the less likely it is that what he writes has a problem. This isn't a criticism of anyone's "bug rate"; it's a simple statement of fact. You may know how to write a MergeSort backwards and forwards in 5 languages, but if you fat-finger just one identifier in one line of code, the entire sort doesn't work, and if the compiler didn't catch it, it could take you hours to debug. Contrast that with List.Sort(); it's there, it's efficient in the general case, and, here's the best thing, it already works.
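In code, the contrast looks like this; the data is made up, but the point is that the built-in sort is one already-debugged line:

```csharp
using System;
using System.Collections.Generic;

class SortExample
{
    static void Main()
    {
        var invoiceTotals = new List<decimal> { 120.50m, 19.99m, 845.00m, 3.25m };

        // One call to the framework's sort: O(n log n) in the general case,
        // and it already works.
        invoiceTotals.Sort();

        Console.WriteLine(string.Join(", ", invoiceTotals));
        // Output: 3.25, 19.99, 120.50, 845.00
    }
}
```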
So, a lot of the features of modern platforms, and tenets of modern design methodologies, were built with this in mind:
- OOP - build related data and logic into an object; wherever the concept of that object is valid, so is the object, or a more specialized derivation of it (see the first sketch after this list).
- Pre-built templates - a good 60% or more of code is syntactical cruft and the basics of getting the program to show something on-screen. By standardizing and auto-generating this code, you reduce the developer's workload by half, allowing an increase in productivity.
- Libraries of algorithms and data structures - As above, you may know how to write a Stack, Queue, QuickSort, etc., but why should you have to when there's a library of code that has all of this built in? You wouldn't rewrite IIS or Apache because you needed a website, so why implement a QuickSort algorithm or a red-black tree object when several great implementations are available (second sketch below)?
- Fluent interfaces - Along the same lines, you may have an algorithm that filters and sorts records. It's fast, but it's probably not very readable; it would take your junior developer a day just to understand it, let alone make the surgical change needed to sort on an additional field in the record object. Instead, libraries like Linq replace a lot of very ugly, often brittle code with one or two lines of configurable method calls that turn a list of objects into filtered, sorted, projected objects (third sketch below).
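First, a bare-bones sketch of the OOP point: a concept modelled once as a base object, with more specialized derivations usable anywhere the base concept is valid. The Payment types here are invented purely for illustration.

```csharp
using System;
using System.Collections.Generic;

abstract class Payment
{
    public decimal Amount { get; set; }
    public abstract void Process();
}

class CardPayment : Payment
{
    public override void Process() => Console.WriteLine($"Charging card: {Amount:C}");
}

class BankTransfer : Payment
{
    public override void Process() => Console.WriteLine($"Initiating transfer: {Amount:C}");
}

class Program
{
    static void Main()
    {
        // Anywhere a Payment is valid, so is any derivation of it.
        var batch = new List<Payment>
        {
            new CardPayment  { Amount = 49.99m },
            new BankTransfer { Amount = 1200.00m }
        };

        foreach (var payment in batch)
            payment.Process();
    }
}
```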
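Second, the library point: the structures named above already ship with the framework. In .NET, for instance, SortedDictionary<TKey, TValue> is a red-black tree under the hood, and a Stack or Queue is one new away; the data below is invented.

```csharp
using System;
using System.Collections.Generic;

class LibraryExample
{
    static void Main()
    {
        // Stack and Queue: no hand-rolled linked lists required.
        var undoHistory = new Stack<string>();
        undoHistory.Push("typed 'hello'");

        var printJobs = new Queue<string>();
        printJobs.Enqueue("report.pdf");

        // Keys stay sorted via a balanced (red-black) tree; no hand-rolled rotations.
        var priceByTicker = new SortedDictionary<string, decimal>
        {
            ["AAPL"] = 189.9m,
            ["MSFT"] = 410.2m
        };

        foreach (var entry in priceByTicker)
            Console.WriteLine($"{entry.Key}: {entry.Value}");
    }
}
```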
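Third, the fluent-interface point: the filter/sort/project pipeline as a couple of chained Linq calls. The Record type and its fields are invented; the shape of the query is what matters, and sorting on an additional field is a single extra method call.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Record
{
    public string Region { get; set; } = "";
    public string Customer { get; set; } = "";
    public decimal Total { get; set; }
}

class LinqExample
{
    static void Main()
    {
        var records = new List<Record>
        {
            new Record { Region = "East", Customer = "Acme",    Total = 250m },
            new Record { Region = "West", Customer = "Globex",  Total = 900m },
            new Record { Region = "East", Customer = "Initech", Total = 75m }
        };

        // Filter, sort (ThenBy adds the second sort key), project.
        var report = records
            .Where(r => r.Region == "East")
            .OrderByDescending(r => r.Total)
            .ThenBy(r => r.Customer)
            .Select(r => $"{r.Customer}: {r.Total:C}");

        foreach (var line in report)
            Console.WriteLine(line);
    }
}
```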