4

Unit testing is something I've come to love after forcing myself to do it in personal projects (and doing it at work), having seen the massive rewards it offers down the road when refactoring and ensuring things work the way they should.

Right now I'm writing a software renderer and I'm unsure whether there's a good way to set it up for testing. Here's an example of where I'm stuck:

When scanning a polygon, it's most convenient to set the z-buffer and fetch the pixel from the texture right then and there, as the scan is being generated; when you're rendering a ton of polygons you need all the speed you can get.

The nice unit-test-friendly way would be to return those scans so I could verify that each scan was where it was supposed to be, and check the data set along with it. Then the same for the next component that takes the scans, and so on (more testing at each stage).

The problem is that tons of polygon scans would require a lot of ranges to be returned in various cases, adding not only more memory usage but also extra function calls that scale badly as users run higher resolutions. Doing it all in one go keeps the renderer from choking, especially in polygon-intensive scenes.

I thought of a few possible ways around this but they seem to all have their own drawback:

  • Just do it the optimized way and check the final pixels at the end (which I can intercept and check), but then if things break I'm potentially going to spend tons of time finding out exactly where something broke.

  • Extend the classes and inspect via a stub, or somehow intercept the data before it's passed on to the pixel buffer. However, that means making methods virtual (this is C++), which may introduce overhead I don't need through the vtable, unless I'm wrong. I'm leaning towards this since I don't know whether the vtable is actually that expensive, but I could be dead wrong when it comes to massive polygon rendering.

  • Just eat the performance penalty and optimize at the very end of the project, after it's been tested enough. But that doesn't sound very TDD, since refactoring that late could make a mess.

I want to make sure that all my elements work, but so far I have to bundle them together, and that makes unit testing feel improper: if something goes wrong, I don't know whether the polygon edge scanner is broken, the scanline algorithm is broken, the z-buffer was set wrong, or the vector/cross/dot product math is wrong, etc.

I'm also not a fan of taking a screenshot at the end and checking, with some tolerance, whether the renderer is working properly (more of an integration test, I guess). I'll probably do it anyway, but it feels too fragile; I like knowing "okay, this submodule just broke" rather than "this entire pipeline just broke, gonna get my coffee and get comfy for the next few hours trying to find out where."

Assuming I'm not missing the forest for the trees on something really obvious, what is a proper way to go about this?

Water
  • 356
  • 1
  • 12
  • Can't you use option 2, but not by virtual methods, but by a compile time mechanism? For example, by utilizing `#ifdef _DEBUG` which adds some kind of inspector only in "debug" mode, which is what you also use for your unit tests? So in "release" mode you have no performance penalty? If you don't like macros, you can also accomplish the same thing by template meta programming - a special test version with an "inspector" added only in "testing" mode. – Doc Brown Sep 08 '16 at 19:30
  • @DocBrown This is also an idea I was considering, do you have any resources for template meta programming? Or something that may illustrate what you have? I'm intrigued with the idea. – Water Sep 08 '16 at 20:36
  • 1
    The canonical book on this is [Modern C++ Design](https://www.amazon.com/Modern-Generic-Programming-Patterns-Applied/dp/0201704315). But if you have no experience with this so far, you should probably try a macro solution first. Don't you already have a Debug and a Release version? And if so, do you run your unit tests in Debug or Release mode? – Doc Brown Sep 08 '16 at 20:44
  • From rasterisation, store horizontal fragments and do occlusion on the fragments themselves, then do texture lookup after you have visible fragments only. This is faster because it avoids needing a Z buffer, avoids doing per-pixel Z tests, and avoids doing texture lookup for everything that's (eventually) occluded. If you're crafty you can also get free anti-aliasing in the horizontal direction by using floating point "starting X and ending X" for fragments. As an ironic side-effect, it also solves your unit testing problem by accident. – Brendan Sep 09 '16 at 13:39

2 Answers

4

Use option 2 ("intercepting the data before passing it on to drawing to a pixel buffer"), but not by virtual methods. Instead use a compile time mechanism which only activates the "data inspector" or "data logger" during your unit tests.

For example, assuming your unit tests run only in the "debug" build of the application, use a preprocessor conditional like `#ifdef _DEBUG` that adds inspection or logging calls only in debug mode. In "release" mode, which you use for deploying your final code, there is then no performance penalty. If you need to run your tests in release mode for some reason, you could introduce a specific "test" mode that is almost identical to release mode, with all other compiler and optimization flags identical, differing only in the added inspection calls.

If you don't like preprocessor macros, you can also accomplish the same thing by template meta programming.

Doc Brown
  • 199,015
  • 33
  • 367
  • 565
  • I'll be checking into `_DEBUG` since I will only run tests in debug mode (unless that's a bad idea?) – Water Sep 09 '16 at 14:02
  • @Water: it is typically OK for *unit* testing, as long as you do not want to test something whose expected behaviour differs noticeably between debug and release mode (for example, tests whose output depends on running speed). If you also do automated integration tests, I would recommend always running at least some of those in release mode too, to be sure there is no accidental difference in behaviour between the debug and release versions. – Doc Brown Sep 09 '16 at 14:21
3

I don't think there's a "proper" method for anything involving runtime performance, but I can tell you what I've had good luck with.

I created a Journal class, an abstract base class, and pass it along to any of my functions like your polygon-scan code. Whenever I finish a scan, I check whether the journal I've been given is non-null; if it is, I call a function on it to report the scan results.

By doing this, I get access to the scan-level testing data simply by creating a Journal instance to collect it. When rendering normally, the pointer is null, so the only cost is the extra if statement.

This journal can be passed to your function in any way you find reasonable. You might pass it as a function argument. You might pass it as a global. You might use some exotic dynamic-scoped-variable if need be. Whatever approach makes the most sense for your particular program.

If you profile your code and find that the if statements are slowing you down, there are a few things you can do:

  • Collect the journal data and pipe it out in conveniently sized chunks. In some extreme situations this saves time, because saving off the data can be cheaper than the if statement itself.
  • Set a const bool at the start of the function storing whether the journal is null. A good optimizing compiler can often see this and generate two copies of the code, one with journaling and one without.
  • As a last resort, for the highest-performance code, you can duplicate it: one version with journaling and one without (and then manage the version-control challenges of the duplication).
Cort Ammon
  • 10,840
  • 3
  • 23
  • 32