3

How do software developers manage to know whether their software will run on the target hardware they are developing for? Many developers today work on machines much faster than the intended platform. What kind of tools do they use to understand when they are going overboard?

I'm asking from the perspective where you, for example, develop algorithms on a modern x86 in a GNU/Linux environment, but the intended platform is a much slower ARM, still under GNU/Linux.

Adam Zuckerman
macbug
  • By testing at least once on the actual hardware. – Chewy Gumball Jan 23 '15 at 23:04
  • If the developer has actual experience using the target hardware, they gain insight into what kinds of tasks hurt performance. For example, ARM can be very fast even at a lower clock speed, but flash memory can be a real bottleneck. You only learn that by using it. – Reactgular Jan 24 '15 at 02:29
  • MichaelT: I'm not asking for minimum requirements. Let's say that we have three possible target platforms. The price range is wide and there is money to be saved by choosing the right one. There is also money to be saved by not having to test on the intended platform. Ideally, in my world, there would be a tool to benchmark the platforms and then a tool for running the program to get some figures, so you could compare and understand which platform you'll be using later on. – macbug Jan 24 '15 at 05:19
  • @macbug: if the platforms are different enough from each other, and you want to make sure it works, you will have to test it on all of those platforms, whether you like it or not. That's old wisdom learned from developing for different OSes, different versions of the same OS, different browsers, different databases, different graphics cards, etc. When you cannot afford to buy the platforms or do all the testing yourself, you have to think of other solutions, like renting the platforms, renting testers, finding volunteers, etc. – Doc Brown Jan 24 '15 at 07:54
  • ... and of course, for the case you mentioned, you should pick the slowest target platform you aim for, buy or rent it, and test on the actual hardware. Isn't that pretty obvious? – Doc Brown Jan 24 '15 at 07:57
  • @Doc Brown: Well, that depends on what you are developing. But I can see quite good reasons for developers writing algorithms for embedded platforms to want an indication of whether the algorithm is too heavy for the intended platform without testing it on the actual target. – macbug Jan 24 '15 at 08:11
  • I'm with Stephen C here: you can try to make an extrapolation, but don't expect it to be very accurate. – Doc Brown Jan 24 '15 at 08:17

3 Answers

5

Ideally, in my world, there would be a tool to benchmark the platforms and then a tool for running the program to get some figures, so you could compare and understand which platform you'll be using later on.

There is no such tool.

There is not even a methodology to follow.

The best you can do is extrapolate from the measured performance of the application running on a different platform. Even that is liable to be inaccurate.

Basically, the performance of an application is going to depend on a large number of complicated platform parameters, and (in some cases) on complicated interactions between the platform and the application's fine-grained behaviour. It is too complicated to model.

(Depending on the nature of your application, the parameters could include CPU clock speed, CPU chipset, number of cores, memory size, memory architecture, memory speed, I/O system bandwidth, network interface, graphics card, hard drive controller and device, etcetera, etcetera. Hundreds of different hardware and software characteristics could be relevant ...)


The best practical advice would be to just try it. If you can't afford to buy an example of each candidate platform, then see if the salesman will let you try out the application on their kit before you sign the contract to buy. If you can't do that, then toss a coin ...
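
If you do try to extrapolate, one crude approach is to time the same representative kernel on both the development machine and the target, and derive a scaling factor from the two numbers. A minimal sketch, assuming GNU/Linux on both sides; the workload below is a placeholder that you would replace with a kernel resembling your real algorithm:

    /* bench.c: time one representative kernel; build and run the same
     * file on both the x86 development box and the ARM target, then
     * compare the two elapsed times.
     * Build: gcc -O2 bench.c -o bench (or cross-compile for the target). */
    #include <stdio.h>
    #include <time.h>

    /* Placeholder workload: replace with a kernel that resembles your
     * real algorithm; memory access patterns matter as much as the
     * arithmetic itself. */
    static double workload(int n)
    {
        double acc = 0.0;
        for (int i = 1; i <= n; i++)
            acc += 1.0 / (double)i;
        return acc;
    }

    int main(void)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        double r = workload(50 * 1000 * 1000);   /* arbitrary problem size */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (double)(t1.tv_sec - t0.tv_sec)
                    + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("result=%f elapsed=%.3f s\n", r, secs);
        return 0;
    }

If the target turns out to be, say, six times slower on this kernel, treat that factor as a rough estimate only: cache sizes, memory bandwidth and I/O can shift the ratio considerably for the full application.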

Stephen C
3

If you mean how minimum/recommended system requirements are found, the application is simply tried on different machines.

In most cases, there is no hard limit: if the application works with 512 MB of memory, it will probably work with 511 MB of RAM as well (unless it explicitly checks for that amount). This means that you can get by with a limited number of machines for benchmarking and deduce the limits from there. For instance, if a machine with 1 GB of RAM can barely run the app, while a machine with 4 GB of RAM runs it well enough and keeps on average 1 to 2 GB free, the minimum system requirements may include 2 GB of memory.
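
On GNU/Linux, one way to record that headroom is to sample MemAvailable from /proc/meminfo while the application runs under a realistic load. A minimal sketch (it assumes a kernel recent enough to expose MemAvailable; the one-second interval is an arbitrary choice):

    /* memwatch.c: print MemAvailable from /proc/meminfo once per second
     * while the application under test runs; stop it with Ctrl-C.
     * Build: gcc -O2 memwatch.c -o memwatch */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];
        for (;;) {
            FILE *f = fopen("/proc/meminfo", "r");
            if (!f)
                return 1;                  /* no /proc/meminfo here */
            while (fgets(line, sizeof line, f)) {
                if (strncmp(line, "MemAvailable:", 13) == 0) {
                    fputs(line, stdout);   /* e.g. "MemAvailable: 3270980 kB" */
                    break;
                }
            }
            fclose(f);
            sleep(1);
        }
    }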

Precision

Note that benchmarking and profiling are precise. A non-functional performance requirement, for instance, will specify in detail the test hardware and the load, the number of milliseconds representing the threshold, and the percentage of runs that must stay under it. You can then write an automated test which either passes or fails, on every commit, indicating when the app became slower than expected. Talking about feelings (“this part of the app feels slow to me”) is unacceptable, because your customer's lawyer may assert that the app still doesn't feel fast enough, while you've spent the last two months optimizing it and find it extremely fast.
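
Such a pass/fail check can be as small as a program that times the operation named in the requirement and returns a non-zero exit status when the threshold is exceeded, so the CI job fails the commit. A minimal sketch, where the 200 ms threshold and operation_under_test are hypothetical placeholders:

    /* perf_gate.c: fail the build when the measured operation exceeds
     * the threshold agreed on in the performance requirement.
     * Build: gcc -O2 perf_gate.c -o perf_gate */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define THRESHOLD_MS 200.0   /* hypothetical value from the requirement */

    /* Placeholder: call the code path the requirement actually names. */
    static void operation_under_test(void)
    {
        volatile double acc = 0.0;
        for (int i = 0; i < 10 * 1000 * 1000; i++)
            acc += i * 0.5;
    }

    int main(void)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        operation_under_test();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (double)(t1.tv_sec - t0.tv_sec) * 1e3
                  + (double)(t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("elapsed: %.1f ms (threshold %.1f ms)\n", ms, THRESHOLD_MS);
        return ms <= THRESHOLD_MS ? EXIT_SUCCESS : EXIT_FAILURE;
    }

Run as a test step on the agreed test hardware, the exit status gives exactly the unambiguous pass/fail that the requirement calls for.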

When it comes to minimum/recommended system requirements, such precision is rarely required. The person writing down the system requirements may indeed simply test the app on multiple machines and use his feeling of fast/slow as the only criterion. If, on the other hand, the contract stipulates that the app should run on a machine with 2 GB of memory, then it should be in the software requirements specification, written in non-ambiguous terms (see above).

Test environment

Also note that:

  • You should test the software on different hardware anyway (unless, of course, the software is distributed in a controlled environment, like a single data center), so chances are you already have the infrastructure you need.

  • Virtual machines make such testing less expensive than purchasing dozens of actual machines.

    However, testing on virtual machines may not be as straightforward as throwing a VM in the pool: while many hypervisors (or the operating systems themselves) do a good job of letting you throttle some aspects (such as network bandwidth), it still requires additional configuration; a lightweight sketch of one such knob follows this list.
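
As a lighter-weight knob than a full VM, GNU/Linux lets a process restrict itself (or a child process) to fewer cores via sched_setaffinity. Below is a sketch of a hypothetical launcher (onecore.c is my name for it); note it only approximates a machine with fewer cores, and does nothing about clock speed or memory bandwidth:

    /* onecore.c: run a command pinned to CPU 0 only, to approximate a
     * single-core target on a many-core development machine.
     * Build: gcc -O2 onecore.c -o onecore   (Linux-specific) */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                 /* allow CPU 0 only */
        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }
        execvp(argv[1], &argv[1]);        /* affinity survives exec */
        perror("execvp");
        return 1;
    }

Running something like ./onecore ./myapp then executes the application with the inherited single-core restriction.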

Complexity

I used RAM as an illustration, but the same logic applies to any other aspect: CPU speed, free space on the hard disk, the speed of those hard disks, network bandwidth, etc. Not to mention that the same hardware may not behave exactly the same way every time.

For instance, one of my software products had a bug I spent a lot of time debugging. It turned out that Windows puts hard disks on standby when they haven't been used for a few minutes, and when they have slept for a long time, waking them up takes a while, which sometimes triggered a timeout in my app.

This makes such testing a difficult task, even with virtual machines. This is one of the two major complexities of desktop software, the other being that the software product has to survive in the wild, i.e. get along with thousands of other software products (including malware) which may be installed, and deal with different configurations, accessibility options, broken things, etc.

Arseni Mourzenko
  • Let's say that the RAM is just the same on both platforms and you are more interested in the differences in CPU speed and motherboard I/O. – macbug Jan 24 '15 at 05:23
  • I tried to ask for a way to understand whether an algorithm would run on a target machine. I was not asking for the minimum system requirements. You might end up with the same solution in the end by asking for the minimum requirements, but I'm looking at it from a different perspective. – macbug Jan 24 '15 at 10:59
-2

Seeing as you can configure the hardware and memory that a VM is allowed to use, you can test a wide range of setups on your own PC. This also allows you to test across multiple operating systems.

For instance, the CPU of a PC could run at 6 GHz. When you install a virtual PC, you can set it to use a certain amount of that: you could set the virtual PC to run at 2.5 GHz, for example, and test whether the game runs. When you find the point where the game doesn't run properly, say 2 GHz, you would take a step up to maybe 2.2 GHz, and if it runs well there, that would be your minimum specification for that piece of hardware.

I have sourced the information below from Justin Cave on Stack Exchange; the URL of the similar question is How are minimum system requirements determined? All the information below came directly from Justin Cave (2011) and has not been edited.

Frequently, the minimum requirements are set by looking at the types of systems that target market customers would actually use for the product in question and picking some reasonable cutoff that doesn't alienate the target customer and is something the QA department can test with a minimal additional hassle.

If you expect that most of your customers are going to install your product on relatively recent desktop computers, for example, you would probably look around and see that just about any low end desktop computer for the home is going to ship with 2 GB of RAM. So a recent computer is very likely to have at least 1 GB of RAM even if it's a couple years old. If very few of your customers are going to want to use a machine that only has 512 MB of RAM, the revenue of these sales is likely to be more than offset by the support requests (older machines are likely to have lots of other problems and incompatibilities that will cause problems and generate more help desk calls than other customers). So it may well be more profitable to avoid making sales to those customers.