19

Generally speaking, what type of optimizations do you typically slant yourself towards when designing software?

Are you the type that prefers to optimize your design for

  • Development time (i.e., quick to write and/or easier to maintain)?
  • Processing time?
  • Storage (RAM, database, disk, etc.) space?

Of course this is highly subjective to the type of problems being solved, and the deadlines involved, so I'd like to hear about the reasons that would make you choose one form of optimization over another.

Jason Whitehorn
  • 655
  • 5
  • 18
  • All three of the above but I want to throw in generality (which relates to maintenance). When you take your time to design a really efficient data structure widely applicable to your software's needs, for example, and thoroughly test it, it'll serve you for years and prevent you from having to write many more data structures narrowly suited to solving individual problems. –  Dec 10 '17 at 01:53

18 Answers

40

Maintenance

Then, if necessary, profile and optimize for speed. I've rarely needed to optimize for storage - at least not in the past 10 years. Before that, I did.

Tim
  • 946
  • 7
  • 9
  • 8
    +1, if you optimize for maintainability to start with, then it will be easier to optimize for speed or storage later on if it proves necessary. – Carson63000 Nov 09 '10 at 05:00
  • You still need to at least _consider_ processing time and storage so you do not pick an extremely excessive approach. –  Nov 09 '10 at 06:14
  • @Thorbjørn, if you are optimizing for developer time you'd probably (more than likely) pick the algorithms that are easier to read/write in a given language. Only later, and only if performance becomes an issue, would you bother to (as @Tim said it) pick up a profiler. – Jason Whitehorn Nov 09 '10 at 11:33
  • @Jason, I strongly disagree. You should be familiar with the performance characteristics of the algorithms you choose so you choose a suited implementation. I.e. choosing an ArrayList when you need it primarily for looking up zip-codes might not scale well. –  Nov 09 '10 at 11:55
  • 1
    @Thorbjørn, I do not disagree with you. In fact, I think you are correct in that attention should be given to the algorithms at hand. However, I think where we differ in opinions is that in my opinion the idea of which algorithms to choose is something learned through experience, and fixed only when a problem presents itself. You are correct in your point, I just don't see the need to optimize for that at the expense of less readable/longer to implement code. – Jason Whitehorn Nov 09 '10 at 12:02
  • ++ I think everyone would agree, but here's my problem. When systems get big, they get loaded down with data structure, which "notifications" try to keep consistent. That's how programmers are taught to work, and it ends up with *massive* performance cost, because those things compound in layer upon layer upon layer. If I were back in teaching, I would try to teach how to be ruthlessly minimalist, when it comes to data. – Mike Dunlavey Nov 09 '10 at 22:07
27

Development Time

Processing and storage is cheap. Your time is not.

Just to note:

This doesn't mean do a bad job of writing code just to finish it quickly. It means write the code in a fashion that facilitates quick development. It also depends entirely on your use cases. If this is a simple, two or three page web site with a contact form you probably don't need to use a PHP framework. A couple of includes and a mailer script will speed development. If the plan is instead to create a flexible platform on which to grow and add new features it's worth taking the time to lay it out properly and code accordingly because it will speed future development.

In the direct comparison to processing time and storage, I lean towards faster development time. Is using the CollectionUtils subtract function the fastest and most memory-efficient method of subtracting collections? No! But it's faster development time. If you run into performance or memory bottlenecks you can resolve those later. Optimizing before you know what needs to be optimized is a waste of your time, and that is what I'm advocating against.
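(CollectionUtils.subtract is from Apache Commons Collections; here is a minimal sketch of the same "readable first, optimize later" trade-off using only java.util - the class and method names below are illustrative, not from my actual code.)

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SubtractDemo {
    // Quick-to-write collection subtraction. contains() on a List makes this
    // O(n*m); copy b into a HashSet only if a profiler shows it matters.
    static <T> List<T> subtract(List<T> a, List<T> b) {
        List<T> result = new ArrayList<>(a);
        result.removeAll(b);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(subtract(Arrays.asList("x", "y", "z"),
                                    Arrays.asList("y")));  // prints [x, z]
    }
}
```

If that list scan ever shows up in a profile, swapping `b` for a `HashSet` is a one-line change - which is exactly the "resolve it later" point.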

Josh K
  • 23,019
  • 10
  • 65
  • 100
  • 4
    Moore's law ended about two years ago. It might be time to start thinking about concurrency and using those extra cores. That's the only way you're going to get cheap clock cycles in the future. – Robert Harvey Nov 09 '10 at 04:50
  • 3
    To be correct, Moore's law is still going strong with a doubling of the number of transistors on a chip approximately every 2 years, which is what is enabling the placement of multiple cores on a single die. What has ended is the 'free lunch' of ever escalating number of cycles a second. – Dominique McDonnell Nov 09 '10 at 05:12
  • 1
    "Processing and storage is cheap." CPU cache and the bus speed are not. They are the main performance bottlenecks today. – mojuba Nov 09 '10 at 14:59
  • @Robert: The GHz corollary of Moore's law ended in 2003. You could get a 3 GHz processor then, and maybe about a 4 GHz now. That isn't a doubling every eighteen months to two years. Instead, we're getting multiple cores and bigger caches. Cache, I think, is now the proper target of most micro-optimizations. – David Thornley Nov 09 '10 at 15:08
  • it depends on your constraints... if you are running the largest hardware you can reasonably get and still running over a deadline that costs you money development time gets cheap pretty fast. – Bill Nov 09 '10 at 15:21
  • 3
    I completely agree with this. Maintaining readable code, using appropriate tools for the task, and adhering to your company's agreed-upon standards will significantly reduce your time spent actually typing code into a computer, saving your company a ton of money. Your time is better spent engineering than typing. – Matt DiTrolio Nov 09 '10 at 15:40
  • 1
    @Bill: Or if you're doing embedded, where you may have hard limits that will significantly increase product cost if you exceed them. Or for server software, sometimes - if somebody could improve processing on Google servers by 1%, that'd be quite a bit of savings. – David Thornley Nov 09 '10 at 16:17
  • Holy Cow, what do you say to your customers? "Yes, it's crap, but we finished it real quickly!". Optimize for code quality. – DJClayworth Nov 09 '10 at 16:31
  • @DJClay: Um, development time isn't just finishing it quickly, it's finishing it in a way that makes future development quick and correct. – Josh K Nov 09 '10 at 16:34
  • I don't think that was what you wrote. If you meant optimize for future development time I would call that 'maintainability'. Maybe you could edit the answer? – DJClayworth Nov 09 '10 at 16:37
  • @DJClay: It doesn't appear other people have misunderstood what I meant. I write my code according to the clients use case. See me (now 3rd) revision. – Josh K Nov 09 '10 at 16:39
  • Would love to have comments from downvoters! :) – Josh K Nov 09 '10 at 20:02
  • Development time is cheap compared to wasted user time and loss of customers due to badly performing systems. Failure to optimize for user performance and user needs first is just plain unprofessional. Contrary to popular opinion, developers aren't special people whose time is worth more than anyone else's. – HLGEM Nov 09 '10 at 21:12
  • 1
    @HLGEM: I think you're making a mistake in assuming I'm saying that making developers' lives easier is the only concern. Optimizing your development experience so that you can produce the best quality code in the least amount of time allows you to produce a strong product and address bugs and enhancements in a timely and effective manner. In turn, your users end up with a product that they're happier with in less time. – Matt DiTrolio Nov 09 '10 at 21:28
  • 1
    @HLGEM: I don't think you understand that you need to take your clients use cases in mind when developing, *which is the main point*. If you are writing a processing heavy application you will of course need to optimize it. This is **not** saying *"Fuck the user I want to be done."* It's saying *"My time is valuable and I shouldn't spend time optimizing code that isn't necessary."* – Josh K Nov 09 '10 at 21:42
13

User experience.

This is the only value that matters to your customer.

Development Time is less important. I can write a fully featured command line application a lot faster than a GUI, but if Mrs. Jane can't figure out how to make it spit out reports she wants, it's useless.

Maintenance is less important. I can repair a seesaw really quickly, but if it's in the middle of a forest, users can't find it.

Processing Time is less important. If I make a car that goes 0 to light speed in 60 seconds, users can't steer.

Aesthetics is less important. I can paint a Mona Lisa, but if she's hidden behind a wall no one gets to see her.

User Experience is the only value that matters. Making an application that does exactly what the user wants in the way the user expects is the ultimate achievement.

Malfist
  • 3,641
  • 5
  • 29
  • 40
Joeri Sebrechts
  • 12,922
  • 3
  • 29
  • 39
8

There is only one thing to optimize for and it is:

What your customers want

Do your customers need the fastest program possible? Optimize for speed.

Do your customers need absolute reliability? Optimize for that.

Do they need it delivered tomorrow or it will be useless? Optimize for speed of development.

Running on an incredibly tiny resource-constrained device? Optimize for those resources.

DJClayworth
  • 538
  • 4
  • 9
5

Processing Time

My user's time is not cheap. What comes around goes around.


Just last year I upgraded an application I use. They had completely rewritten the app, and boy was it slow. I finally had to buy a new computer to run it quickly. I guarantee you that wasn't cheap, but my time is more valuable.

  • Interesting take on the processing time slant. Care to share what type of applications you develop? I am intrigued. – Jason Whitehorn Nov 09 '10 at 04:21
  • 1
    Processing time is important if you run on *lots* of computers. For example, if it's a choice between spending an extra 2 months optimizing, or upgrading 10,000 PCs to newer hardware, in that case the developer's time does *not* win out. But of course, it's a compromise. If you only run on half a dozen servers, the developer's time likely wins out in that case. – Dean Harding Nov 09 '10 at 04:33
  • 1
    @Jason, I have it easy right now, working with Excel and VBA in a conglomeration of spreadsheets (which I've been condensing rapidly). My users work in the next room, and they let me know if I have any problems. My perspective comes from using computers for thirty years, and watching applications keep bloating up, forcing upgrades just to compensate. I know that developers can do better, they just have to get in the habit of writing efficient code. –  Nov 09 '10 at 05:04
  • +10 for the efficient code. That's far too often overlooked, especially in modular programming. Every module runs at a reasonable speed, but the sum of all can be horrendously slow. – Joris Meys Nov 09 '10 at 21:50
4

I tend to slant towards limiting memory consumption and allocations. I know it's old school, but:

  • Most of the non-throwaway code I write is heavily parallel. This means that excessive memory allocation and garbage collection activity will serialize a lot of otherwise parallelizable code. It also means there will be a lot of contention for a shared memory bus.
  • My primary language is D, which doesn't have good 64-bit support yet (though this is being remedied).
  • I work with fairly large datasets on a regular basis.
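(My code is in D, but the first bullet's concern can be sketched in Java, assuming a hypothetical hot path that would otherwise allocate a temporary array on every call.)

```java
public class ScratchBuffers {
    // One scratch buffer per worker thread, allocated once. The hot loop then
    // runs allocation-free, so parallel workers neither contend on the
    // allocator nor trigger collections that pause every thread.
    private static final ThreadLocal<double[]> SCRATCH =
            ThreadLocal.withInitial(() -> new double[1024]);

    static double sumOfSquares(double[] input) {
        double[] tmp = SCRATCH.get();            // reused, never re-allocated
        int n = Math.min(input.length, tmp.length);
        for (int i = 0; i < n; i++) tmp[i] = input[i] * input[i];
        double sum = 0;
        for (int i = 0; i < n; i++) sum += tmp[i];
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(new double[] {1, 2, 3}));  // prints 14.0
    }
}
```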
dsimcha
  • 17,224
  • 9
  • 64
  • 81
  • +1 for working to prevent bloatware. Memory hogging programs are bad programs. –  Nov 09 '10 at 04:07
  • Memory hogging programs can be run on 64-bit systems, by and large. That's what we did when one of our apps ran into memory issues (it legitimately uses large amounts of memory). The first bullet point is important when performance is. – David Thornley Nov 09 '10 at 15:11
2

Whatever virtualization technology I'm using

Remember the days when systems with more than 512 MB of RAM were considered bleeding edge? I spend my days writing code for systems like that.

I work mostly on low level programs that run on the privileged domain in a Xen environment. Our ceiling for the privileged domain is 512 MB, leaving the rest of the RAM free for our customers to use. It is also typical for us to limit the privileged domain to just one CPU core.

So here I am, writing code that will run on a brand new $6k server, and each program has to work (ideally) within a 100 KB allocation ceiling, or eschew dynamic memory allocation completely.
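(The "allocate everything up front" style can be sketched as a fixed-size buffer pool. Java is used here only for illustration - the actual privileged-domain code is nothing like this, and the names are invented.)

```java
import java.util.ArrayDeque;

public class FixedPool {
    // Every buffer is carved out at startup. Under a hard memory ceiling the
    // steady state then never touches the allocator, and exhaustion is an
    // explicit, observable failure instead of creeping growth.
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();

    public FixedPool(int buffers, int bufferSize) {
        for (int i = 0; i < buffers; i++) free.push(new byte[bufferSize]);
    }

    public byte[] acquire() {
        byte[] b = free.poll();
        if (b == null) throw new IllegalStateException("pool exhausted");  // fail loudly, never grow
        return b;
    }

    public void release(byte[] b) {
        free.push(b);  // hand the buffer back for reuse
    }
}
```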

Concisely, I optimize for:

  • Memory footprint
  • Sorts (where most of my code spends most of its time)

I also have to be extremely diligent when it comes to time spent waiting for locks, waiting for I/O, or just waiting in general. A substantial amount of my time goes into improving existing non-blocking socket libraries and looking into more practical methods of lock-free programming.

Every day I find it a little ironic that I'm writing code just like I did 15 years ago, on systems that were bought last month, due to advancements in technology.

This is typical for anyone working on embedded platforms as well, though even many of those have at least 1GB at their disposal. As Jason points out, it is also typical when writing programs to be run on mobile devices. The list goes on, Kiosks, thin clients, picture frames, etc ..

I'm beginning to think that hardware restrictions really separate programmers from people who can make something work without caring what it actually consumes. I worry (down-vote me if you must) about what languages that completely abstract away type and memory checking are doing to the collective pool of common sense that (used to be) shared amongst programmers of various disciplines.

Tim Post
  • 18,757
  • 2
  • 57
  • 101
  • 1
    +1 for the memory foot print angle. I've never coded against the particular constraints that you are dealing with, but remove the first section talking about Xen and replace that with iPhone and I know exactly where you are coming from :-) – Jason Whitehorn Nov 09 '10 at 12:15
2

I would say I optimise toward efficiency, with efficiency being defined as a compromise between development time, future maintainability, user-experience and resources consumed. As a developer you need to juggle all of these to maintain some kind of balance.

How do you achieve that balance? Well, first you need to establish a few constants, such as what the deadline is, what hardware your application will be running on and what type of person will be using it. Without knowing these you cannot establish the correct balance and prioritise where it is needed.

For instance, if you are developing a server application on a powerful machine you might want to trade off performance efficiency to ensure you hit an immovable deadline. However, if you're developing an application that needs to respond quickly to user input (think of a video game) then you need to prioritise your input routine to ensure it is not laggy.

Dan Diplo
  • 3,900
  • 1
  • 27
  • 30
2

Research Results

As an academic, I figured I should share what I optimize for. Note that this isn't quite the same as optimizing for a shorter development time. Often it means that the work might support some research question, but not be a deliverable, polished product. This might be viewed as an issue with quality, and it could explain why many say that (academic) computer scientists don't have any "real world" experience. (E.g., "Wouldn't they know how to develop a deliverable product otherwise?")

It's a fine line. In terms of impact, you want your work to be used and cited by others, and Joel's Iceberg Effect comes into play: a little polish and shine can go a long way. But if you aren't making a foundation for other projects to be built on, you just might not be able to justify the time spent making a deliverable product.

Macneil
  • 8,223
  • 4
  • 34
  • 68
1
  1. Design
    • low coupling, modular
    • concise, well defined, functional areas
    • well documented
    • continuously refactor for cruft
  2. Maintenance
    • reproducible build and debug
    • unit tests
    • regression tests
    • source control

... after that everything else

... finally, optimise for performance ;-)

cmcginty
  • 729
  • 5
  • 10
1

Quality/Testing

Optimise towards quality, as in ensuring there is time in the development schedule for testing, both unit testing and testing after features/phases.

DBlackborough
  • 1,196
  • 6
  • 12
1

It depends on the need of your program.

Most of what I do is constrained heavily by processing capability and memory, but does not go through very many, if any, significant changes in the average year.

I have in the past worked on projects where the code is changed frequently so the maintainability becomes more important in those cases.

I have also worked on systems in the past where the amount of the data is the most significant issue, even on disk for storage, but more commonly the size becomes an issue when you have to move the data a whole lot, or over a slow link.

Bill
  • 8,330
  • 24
  • 52
1

Elegance.

If your code is well designed, it will have several effects:

  1. It will be easier to maintain (cutting costs for the customer)
  2. It will be easier to optimize (for JIT or full compilers)
  3. It will be easier to replace (when you think of a better solution)
0

Expressiveness of my intent.

I want someone reading my code to be able to easily see what operations I was trying to invoke on the domain. Similarly I try to minimize non-semantic junk (braces, 'function' keywords in js, etc) to make scanning easier.

Of course you gotta balance that against maintainability. I love writing functions that return functions and all sorts of advanced techniques, and they DO further my goal, but if the benefit is slight I will err on the side of sticking to techniques that solid jr programmers would be familiar with.

George Mauer
  • 2,002
  • 1
  • 16
  • 18
0

Development time, absolutely. I also optimize for bandwidth, but I don't go to binary.

Christopher Mahan
  • 3,404
  • 19
  • 22
0

Since I do installations on multiple types of systems, everything from IBM mainframe to PCs, I first optimize for compatibility, then size, then speed.

Dave
  • 427
  • 3
  • 7
0

It Depends

If you are working on a real-time embedded video processing system then you optimize for processing speed. If you are working on a word processor you optimize for development time.

However, in all cases your code must work and it must be maintainable.

Dima
  • 11,822
  • 3
  • 46
  • 49
-6

All of them

Processing time

Today's computers are fast, but far from fast enough. There are many, many situations where performance is critical - for example, if you build streaming media servers.

Storage

Your customer might have a big disk - let's say 1 TB. That can be filled by 1,000 HD movies; if you want to build a service on it, that's far from enough, isn't it?

Development time

Well, I'm not sure if this counts as "optimization", but what I do is use Java instead of C++, and development gets 10 times faster. I feel like I'm telling the computer directly what I think - very straightforward, and it totally rocks!

BTW, I believe that to speed up your development process you should choose Java; never try the likes of Python... which claim they can shorten your dev time.

tactoth
  • 543
  • 1
  • 3
  • 12