If you mean how minimum/recommended system requirements are determined: the application is simply tried on different machines.
In most cases, there is no hard limit: if the application works with 512 MB of memory, it will probably work with 511 MB as well (unless it explicitly checks the amount of memory). This means that you can benchmark on a limited number of machines and deduce the limits from there. For instance, if a machine with 1 GB of RAM can barely run the app, while a machine with 4 GB runs it well enough and keeps on average 1 to 2 GB free, the minimum system requirements may list 2 GB of memory.
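If you want something slightly more objective than watching the task manager, a small script can record the peak memory usage of the process during a typical session. A minimal sketch using the psutil library; the process name "myapp.exe" and the five-minute duration are placeholders, not anything specific to your app:

```python
# Rough sketch: sample the resident memory of the application under test
# while a typical workload runs, then report the peak.
# "myapp.exe" and the 300-second duration are placeholders.
import time
import psutil

def peak_memory_mb(process_name: str, duration_s: int = 300) -> float:
    peak = 0.0
    deadline = time.time() + duration_s
    while time.time() < deadline:
        for proc in psutil.process_iter(["name", "memory_info"]):
            if proc.info["name"] == process_name and proc.info["memory_info"]:
                rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
                peak = max(peak, rss_mb)
        time.sleep(1)
    return peak

if __name__ == "__main__":
    print(f"Peak RSS: {peak_memory_mb('myapp.exe'):.0f} MB")
```

The peak figure, plus whatever headroom you want the machine to keep free, gives you a starting point for the minimum and recommended numbers.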
Precision
Note that benchmarking and profiling are precise. A non-functional performance requirement, for instance, will specify in detail the test hardware and the load, the threshold in milliseconds and the percentage of requests that must stay below it. You can then write an automated test which either passes or fails on every commit, indicating when the app became slower than expected. Talking about feelings (“this part of the app feels slow to me”) is unacceptable, because your customer's lawyer may assert that the app still doesn't feel fast enough, while you've spent the last two months optimizing it and find it extremely fast.
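As a rough sketch of what such an automated check could look like (the endpoint, the sample size and the 300 ms / 95th-percentile figures are invented for the example, not taken from any real requirement):

```python
# Hypothetical latency gate run on every commit; the URL, the sample size
# and the thresholds stand in for whatever the requirement actually specifies.
import statistics
import time
import urllib.request

URL = "http://test-server.example/api/search"   # dedicated test hardware
SAMPLES = 200
MAX_P95_MS = 300.0   # 95% of requests must complete within 300 ms

def measure_once_ms() -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=5) as response:
        response.read()
    return (time.perf_counter() - start) * 1000.0

def test_search_latency():
    timings = [measure_once_ms() for _ in range(SAMPLES)]
    p95 = statistics.quantiles(timings, n=100)[94]
    assert p95 <= MAX_P95_MS, f"95th percentile is {p95:.0f} ms, limit is {MAX_P95_MS:.0f} ms"
```

Run on every commit, the assertion either passes or fails, so there is no arguing about whether the app “feels” fast enough.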
When it comes to minimum/recommended system requirements, such precision is rarely required. The person writing down the system requirements may indeed simply test the app on multiple machines and use their feeling of fast/slow as the only criterion. If, on the other hand, the contract stipulates that the app should run on a machine with 2 GB of memory, then it should be in the software requirements specification, written in non-ambiguous terms (see above).
Test environment
Also note that:
You should test the software on different hardware anyway (unless, of course, the software is distributed in a controlled environment, such as a single data center), so chances are you already have the infrastructure you need.
Virtual machines make such testing much less expensive than purchasing dozens of actual, physical machines.
However, testing on virtual machines may not be as straightforward as throwing a VM in the pool: while many hypervisors (or the operating systems themselves) do a great job of letting you throttle some aspects (such as network bandwidth), it still requires additional configuration (see the sketch below).
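For instance, if the guest runs Linux, capping the network bandwidth is something your test harness could do itself with traffic control (tc). A minimal sketch, assuming the interface is eth0 and a 1 mbit limit; both are placeholders:

```python
# Example of the extra configuration step: throttle outgoing bandwidth
# inside a Linux test VM with traffic control (tc) before the benchmark.
# The interface name ("eth0") and the 1 mbit rate are placeholders.
import subprocess

def limit_bandwidth(interface: str = "eth0", rate: str = "1mbit") -> None:
    # Attach a token-bucket filter to the interface (requires root).
    subprocess.run(
        ["tc", "qdisc", "add", "dev", interface, "root",
         "tbf", "rate", rate, "burst", "32kbit", "latency", "400ms"],
        check=True,
    )

def remove_limit(interface: str = "eth0") -> None:
    # Restore the default queueing discipline after the test.
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)
```

The harness would call limit_bandwidth() before the benchmark and remove_limit() afterwards; limits on memory or the number of CPU cores are usually easier to set directly in the hypervisor's configuration.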
Complexity
I used RAM as an illustration, but the same logic applies to any other aspect: CPU speed, free space on the hard disk, the speed of those hard disks, network bandwidth, etc. Not to mention that the same hardware may not behave exactly the same way every time.
For instance, one of my software products had a bug I spent a lot of time debugging. It turned out that Windows puts hard disks on stand-by when they haven't been used for a few minutes, and when they have been sleeping for a long time, waking them up takes a while, which sometimes triggered a timeout in my app.
This makes such testing a difficult task, even with virtual machines. It is one of the two major complexities of desktop software, the other being that the product has to survive in the wild, i.e. get along with thousands of other software products (including malware) that may be installed alongside it, deal with different configurations, accessibility options, broken components, etc.