
As most people agree, encouraging developers to make fast code by giving them slow machines is not a good idea. But there's a point in that question. My dev machine is fast, and so I occasionally write code that's disturbingly inefficient, but that only becomes apparent when running it on other people's machines.

What are some good ways to temporarily slow down a turbocharged dev machine? The notion of "speed" includes several factors, for example:

  • CPU clock frequency.
  • Number of CPU cores.
  • Amount of memory and processor cache.
  • Speed of various buses.
  • Disk I/O.
  • GPU.
  • etc.
Joonas Pulakka
    Unpress the "Turbo button" ... no, wait. – LennyProgrammers Dec 02 '10 at 08:41
    Here is the root of your problem: "Disturbingly inefficient". change your coding habit – Darknight Dec 02 '10 at 10:04
    @Darknight: No, it's not that. You have to [first make it right, then make it fast](http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast) *if needed*. To know what to optimize, you have to test and find out what's the problematic part. Making things as fast as possible in the first place is waste of *your* time - and likely waste of [doing it right](http://c2.com/cgi/wiki?MakeItFastBreaksMakeItRight). – Joonas Pulakka Dec 02 '10 at 10:13
    Well I partly agree. However if you have an efficient coding habit to start off with, then as you're "making it work right" you can spend less time later "making it faster". – Darknight Dec 02 '10 at 10:25
  • @Darknight: I agree with you in that it makes sense to spend some time to figure out things like whether some algorithm is O(n) or O(n^2), and what's the n going to be. Is there 1 kB, 1 MB, 1 GB or 1 TB of data. There are architectural decisions that can't be easily optimized away later on. But then there are things like "should I cache this one here?", which are better found out by experimenting. Programmers are notoriously bad at *guessing* where the problems will emerge :-) – Joonas Pulakka Dec 02 '10 at 10:29
  • Simply profiling heap usage, cache misses, etc isn't acceptable to predict how it will run on [n] types of machines? – Tim Post Dec 02 '10 at 11:45
  • @Tim Post: It would be acceptable for purely algorithmic problems. But experimentation is indispensable to probe real-time performance with attached hardware, GUI responsiveness etc. The problem with profiling is that how do I know how much heap usage, cache misses etc. is acceptable, and what would be enough to justify spending time in tweaking the (already working!) code, instead of doing something more important? Answer: I don't know, I would have to guess! Experimentation is certainly better than guessing. – Joonas Pulakka Dec 02 '10 at 12:03
  • @Joonas Agreed 100%. That's pretty much what I had in mind too. Make initial "sensible" choices, that later on can be optimized further if needed. – Darknight Dec 02 '10 at 12:08
    @Darknight: I think @Joonas is asking a very sensible question. The idea that just "changing your coding habit" is sufficient is not realistic. Here's an example: (http://stackoverflow.com/questions/926266/performance-optimization-strategies-of-last-resort/927773#927773) AND, the idea that you can just time it on a slower machine without an IDE assumes that's enough to find performance bugs. Lots of people *talk* about profiling, but *doing* it (successfully) is another matter. What would really help me (& others I think) is what Joonas is asking for. – Mike Dunlavey Dec 02 '10 at 21:49
    @Joonas: I rely on the random-pause method of "profiling", and one thing I've found useful is this: Create a data-watch breakpoint in the program. That seems to cause the IDE to either emulate or interrupt after every instruction, which slows it down by 1-2 orders of magnitude, making it easier to pause and see what's happening. I don't do this a lot, but sometimes it's useful. – Mike Dunlavey Dec 02 '10 at 22:09
  • @Darknight & @Mike Dunlavey: Additionally, profiling shows where your code spends most of the time - but so what? It has to spend its time somewhere! The core problem is to know whether that time is "too much" or not, and that's subjective in the end. Of course, it's better to be on the safe side: If there's some simple thing you can do to speed up your code 90 %, then it's likely worth doing. – Joonas Pulakka Dec 03 '10 at 06:45
  • http://threadmaster.nyland.dk/threadmaster.htm – Codism Dec 02 '10 at 22:35

5 Answers


Run your tests in a virtual machine with limited memory and only one core.

The old machines people may still have are mostly Pentium 4-era boxes. That's not unrealistic - I'm using one myself right now. Single-core performance on many current PCs often isn't much better, and can be worse. RAM performance is more important than CPU performance for many things anyway, and by limiting it a little more harshly than an old 1 GB P4 would, you compensate for that a bit.

Failing that, if you're willing to spend a bit, buy a netbook. Run the tests on that.
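If spinning up a full VM feels like overkill for a quick check, low memory alone can be simulated in-process. A minimal sketch in Python, assuming a Unix-like system (the standard `resource` module; the 1 GiB cap and the `cap_memory` helper are made up for illustration):

```python
import resource

def cap_memory(max_bytes):
    """Limit this process's address space so large allocations fail
    roughly as they would on a machine with little RAM."""
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

cap_memory(1 * 1024**3)  # pretend we only have ~1 GiB

try:
    data = bytearray(2 * 1024**3)  # a 2 GiB allocation should now fail
except MemoryError:
    print("allocation refused, just like on a small machine")
```

This only constrains memory, of course; it does nothing about CPU speed, cache size, or disk I/O, which is why the VM remains the more complete option.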

    Or an elderly laptop. –  Dec 02 '10 at 08:52
  • The problem with virtual machines is that none of them (AFAIK) supports IEEE 1394 (firewire) port. Some of my software uses cameras that are connected with firewire, so... – Joonas Pulakka Dec 02 '10 at 09:02
  • the real ones let you assign any PCI device to the VM – Javier Dec 02 '10 at 09:54
    Could be a job for Xen - the virtual machine doesn't run on top of a host OS, but is itself the top layer. It has a heavily Unix history, but can now support proprietary OSes. But I've never used it, and don't know how much control you can have over a particular VM's performance and resources. –  Dec 02 '10 at 10:01
    +1 A VM is highly tunable and provides exactly the environment you're after for testing. I use VMWare myself for this purpose. – Gary Dec 02 '10 at 12:29
  • Is there any virtual machine that would allow limiting graphic card capabilities too? – Klaim Jul 21 '11 at 08:18
  • @Klaim - graphic card capabilities are normally limited by default. You don't get decent graphics drivers until you install the host "additions" software, so no acceleration (even for 2D, text etc) and a very limited choice of graphics modes. You also get to choose how much video memory the virtual graphics card can use when setting up the virtual machine hardware, and this can be modified later - in effect, you can upgrade or downgrade the virtual machine's graphics card at any time - but only in limited ways. –  Jul 21 '11 at 20:45

The way to spot significant algorithmic inefficiency is to profile your code. The way to catch memory overuse is to first understand how much memory your target users have, then design accordingly and test regularly in that environment.

If you are writing threaded code, testing on multiple machines with differing CPU speeds will help highlight specific timing-related bugs in your thread handling, but aggressive unit testing of thread logic is a must.
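As a sketch of what such a unit test might look like (standard library only; the `Counter` class and the thread/iteration counts are invented for illustration), hammering shared state from many threads tends to surface locking bugs regardless of how fast the machine is:

```python
import threading

class Counter:
    """Thread-safe counter; remove the lock and the assertion below
    may start failing intermittently."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1

    @property
    def value(self):
        return self._value

def hammer(counter, n_threads=8, n_increments=10_000):
    """Run many competing increments and return the final count."""
    def worker():
        for _ in range(n_increments):
            counter.increment()
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value

assert hammer(Counter()) == 8 * 10_000
```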

Michael Shaw
    No, profiling won't catch algorithmic inefficiency. It'll show you where the program is spending its time if you need to speed it up, but not whether you need to speed it up. – David Thornley Dec 02 '10 at 22:45
    I think I am missing the distinction here. If you mean that profiling will not tell you whether you are being suboptimal, just where you are spending your CPU cycles, then I agree. Making that judgement takes experience. – Michael Shaw Dec 02 '10 at 23:27
  • @David Thornley & @Ptolemy: I think algorithm inefficiency or code hot spots are secondary to the core problem: "Is it *too* slow or not?" It's subjective, but it's also the most important question. If it doesn't feel slow in practice, then so what if your algorithm is inefficient? It does what it needs to do! Or if the program feels too slow regardless of highly optimal algorithms, then you may have to change the approach (architecture? programming language? something!) altogether. Having highly optimal algorithms is not an excuse for program slowness :-) – Joonas Pulakka Dec 03 '10 at 06:53
    To reveal algorithm inefficiency, use progressively-sized data sets for testing. – rwong Dec 07 '10 at 09:07

Anything that you do to slow down your machine would probably be a hack.

Here are a couple of suggestions:

  • Use virtual machines
  • Profile the code on your machine, looking for bottlenecks
  • Use an old machine for "performance testing"
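To illustrate the profiling suggestion, a minimal sketch with Python's built-in `cProfile` and `pstats` (the `slow_sum` function is a made-up stand-in for real code):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately naive: pointless conversions create a hot spot."""
    total = 0
    for i in range(n):
        total += int(str(i))  # round-trip through a string on every step
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Print the top entries sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The report shows where the time actually went, which beats guessing; whether that time is "too much" is still a judgement call, as the comments above discuss.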
Jason
  • @matt what does that mean? – johnny Dec 02 '10 at 20:31
    @johnny: I mean I am upvoting because Jason has suggested profiling the application, which would hopefully find the source of performance bottlenecks without the need to move to a slower system. – Matt Ellen Dec 02 '10 at 22:44

Install Virtual PC, create a hardware profile, create a virtual machine and start playing :)

devnull

Realise this is quite an old question, but for anyone else in this situation: you could try CPUKiller. It's basically a small app that you can configure to consume different percentages of your processor. http://www.cpukiller.com/
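For a rough idea of what such a tool does - and as a cross-platform stopgap - a duty-cycle busy loop can occupy a chosen fraction of one core, starving the program under test. A hedged Python sketch (the `occupy_cpu` helper and its parameters are invented here, not part of CPUKiller):

```python
import time

def occupy_cpu(load=0.5, duration=2.0, slice_s=0.1):
    """Keep one core roughly `load` busy for `duration` seconds
    by alternating busy-waiting and sleeping within each time slice."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        busy_until = time.monotonic() + load * slice_s
        while time.monotonic() < busy_until:
            pass  # burn cycles
        time.sleep((1.0 - load) * slice_s)

occupy_cpu(load=0.3, duration=0.5)  # briefly steal ~30% of one core
```

Run one instance per core you want to load; note it only throttles your program indirectly, by competing with it for CPU time.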

Dave