4

This question is a narrowing-down of these related questions:

Given that each user's computer may have different performance characteristics with respect to computation, memory, disk I/O bandwidth and network I/O bandwidth, and that it is difficult to implement an automated self-tuning system in our software, how much configurability should we give to end-users so that they can find ways (by trial and error?) to improve our software's efficiency?

If we do give users the ability to change these settings, how do we provide visual feedback so they can measure the effect of those changes?

rwong

2 Answers

3

I don't think that giving the user control over these options is a good idea. It depends on the software and the target user demographic, of course, but I can't imagine a common application where the average user is interested in tweaking that sort of thing. They just want the application to work. Expecting them to adjust performance options incrementally through trial and error is asking too much.

If you have performance problems, fix them. If you don't, don't worry about it, and don't complicate the app and potentially confuse the user with options they may not understand. You could open yourself up to a lot of support tickets along the lines of "I changed some options and now your application is too slow. Fix it.", if users even admit that they changed any settings.

Adam Lear
  • You can add options for tweaking in a config file without building a nice UI for them. 99% of users won't know what to do with those options anyway; the few power users and sysadmins who do will prefer a config file. – 9000 Dec 26 '10 at 20:06
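As a sketch of that suggestion in Java (the file name tuning.properties and the io.threads key are hypothetical examples; the built-in default applies when the file or a key is missing):

    import java.io.FileReader;
    import java.io.IOException;
    import java.util.Properties;

    public class TuningConfig {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Load optional tuning overrides; power users can edit this file.
            try (FileReader in = new FileReader("tuning.properties")) {
                props.load(in);
            } catch (IOException e) {
                // No file present: fall through to the defaults below.
            }
            int ioThreads = Integer.parseInt(props.getProperty("io.threads", "4"));
            System.out.println("io.threads = " + ioThreads);
        }
    }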
3

You never want to expose average users to the internal guts of how your application works, especially if you are supporting it. BeOS used to ship with a utility that let you turn off a processor (this was before multicore processors) just to see the impact on performance. And yes, you could turn off all the processors. At that point the machine simply froze, and the only recourse was to reboot it.

Most platforms provide a way to query the system for the number of cores/processors on the machine. By running a few profiling tests in your environment, you can determine how your application runs most efficiently, and have the app set up its internal configuration using simple ratios of the number of cores available. I'm sure you're thinking that this is the type of thing a sysadmin might want control over. You may or may not be right.
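In Java, for example, that query is a single call. A minimal sketch (the 2x worker-to-core ratio is a placeholder you would derive from your own profiling, not a recommendation):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolSizer {
        public static void main(String[] args) {
            // Ask the JVM how many processors the OS reports.
            int cores = Runtime.getRuntime().availableProcessors();

            // Derive the worker count from a simple ratio to the core count.
            // The factor of 2 is illustrative; profile your workload to pick it.
            int workers = Math.max(1, cores * 2);

            ExecutorService pool = Executors.newFixedThreadPool(workers);
            System.out.println(cores + " cores -> " + workers + " workers");
            pool.shutdown();
        }
    }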

If we are talking about server software, then I think a better approach is one similar to Java's HotSpot technology. HotSpot recompiles hot portions of your code into optimized machine code (eliminating branches that can never apply) and swaps the new implementation in when it is safe to do so. It bases its decisions on branch behavior observed at runtime (the values of static fields, and so on).

In a similar spirit, you could monitor key performance areas of your own application. As long as you have an appropriate cost function, you have an effective meter for fine-tuning your configuration at runtime and adapting to performance variations outside your application's control. If performance is degrading, you can spin up a new concurrent task or take some down, or choose to run something remotely rather than locally if the cost of communication and the current load on the server warrant it.
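A minimal sketch of that feedback loop, assuming average task latency as the cost metric (the class name and the 50 ms / 200 ms thresholds are hypothetical and would come from your own measurements):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadPoolExecutor;

    public class AdaptivePool {
        private final ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(4);

        // Called periodically with the measured cost, e.g. average task latency.
        void adjust(double avgLatencyMs) {
            int size = pool.getCorePoolSize();
            if (avgLatencyMs > 200 && size > 1) {
                // Performance is degrading: take a concurrent worker down.
                pool.setCorePoolSize(size - 1);    // shrink the core first...
                pool.setMaximumPoolSize(size - 1); // ...then lower the ceiling
            } else if (avgLatencyMs < 50) {
                // Headroom available: spin up another concurrent worker.
                pool.setMaximumPoolSize(size + 1); // raise the ceiling first...
                pool.setCorePoolSize(size + 1);    // ...then grow into it
            }
        }
    }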

Giving a user access to the internals of your application is like handing them a loaded gun. The bottom line is that unless you are going to train someone to use a gun properly, it's probably best not to keep one around, much less loaded. They can shoot themselves in the foot, or worse.

Berin Loritsch