So I'm trying to push for more automated testing in my company, which focuses heavily on prototype and proof-of-concept systems.
We currently use Google Test for unit testing, which checks specific test cases for correctness. But for a lot of what we do, the metrics for how the code is performing follow more of a "does it work better or worse than before, and if worse, is the loss in performance acceptable?" question than a boolean yes/no. I call these tests "Characterization" tests for lack of a better term (I've got an EE background, so sue me): they describe how the code behaves. We NEED to have the metric for every build, and we NEED to be able to compare it between builds to see which direction we're heading. We can often define a rough cutoff to get a high-level pass/fail, but these tests can be extensive: they take a long time to run and often involve number crunching and plotting. This seems out of scope for a unit test and more like a functional test to my mind (but I'm no testing expert).

How do people/companies handle this type of test? Is there a framework for this type of test out there?
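To make the idea concrete, here's a minimal sketch of what I mean by a cutoff-style characterization check, written with Google Test since that's what we already have. The metric, baseline file name, and tolerance are made up for illustration; the real thing would run a full pipeline and produce plots on the side.

```cpp
#include <gtest/gtest.h>
#include <fstream>
#include <string>

// Placeholder for the real measurement; in practice this would run the
// system under test and compute a quality/performance figure of merit.
static double ComputeMetric() {
    return 0.93;  // e.g. detection rate, SNR, throughput...
}

// Read the metric recorded by the previous build from a known location.
// Returns 0.0 if no baseline exists yet (first run).
static double LoadBaseline(const std::string& path) {
    std::ifstream in(path);
    double baseline = 0.0;
    if (!(in >> baseline)) return 0.0;
    return baseline;
}

TEST(Characterization, MetricDoesNotRegressBeyondCutoff) {
    const double kAcceptableLoss = 0.02;  // tolerated drop vs. previous build
    const double baseline = LoadBaseline("baseline_metric.txt");
    const double current = ComputeMetric();

    // Always record the value so it shows up in the XML report and can be
    // tracked across builds, regardless of pass/fail.
    RecordProperty("metric", std::to_string(current));

    // High-level pass/fail: only fail if we regress past the cutoff.
    EXPECT_GE(current, baseline - kAcceptableLoss);
}
```

This works after a fashion, but it feels like I'm bending a unit-test framework into doing trend tracking, which is part of why I'm asking what people normally use for this.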