Unit testing is something I've come to love after forcing myself to do it in my own projects (and doing it at work), having seen the massive rewards it pays down the road when refactoring and verifying that things work the way they should.
Right now I'm writing a software renderer, and I'm unsure how to set it up so it can be tested. Here's an example of where I'm stuck:
When scan-converting a polygon, it's most convenient to set the z-buffer and the pixel from the texture right then and there, as each scanline is generated; when you're rendering a ton of polygons you need all the speed you can get.
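To make that concrete, the inner loop is shaped roughly like this (a stripped-down sketch with made-up names, not my actual code):

    #include <cstdint>

    // Illustrative sketch only. A bare-bones span fill: the depth test and the
    // pixel write happen right here, inside the scan, with no intermediate data
    // handed back to anyone.
    struct Span { int y, xLeft, xRight; float zLeft, zRight; };

    void fillSpan(const Span& s, float* zBuffer, uint32_t* frameBuffer,
                  int width, uint32_t color)
    {
        float dz = (s.xRight > s.xLeft)
                       ? (s.zRight - s.zLeft) / float(s.xRight - s.xLeft)
                       : 0.0f;
        float z = s.zLeft;
        for (int x = s.xLeft; x <= s.xRight; ++x, z += dz) {
            int idx = s.y * width + x;
            if (z < zBuffer[idx]) {           // depth test, done inline
                zBuffer[idx] = z;
                frameBuffer[idx] = color;     // the real code samples the texture here
            }
        }
    }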
The unit-test-friendly way would be to return those scans so I could verify that each one was where it was supposed to be and check the data set along with it, then do the same for the next component that takes the scans, and so on (more testing at each stage).
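Something like this, roughly (hypothetical names, GoogleTest just as an example):

    #include <vector>

    struct Polygon;                                              // whatever my polygon type ends up being
    struct Span { int y, xLeft, xRight; float zLeft, zRight; };  // same illustrative Span as above

    // Hypothetical signature: the scanner hands back its spans instead of
    // drawing them, so a test can assert on each one directly.
    std::vector<Span> scanPolygon(const Polygon& poly);

    // Then, e.g.:
    //   auto spans = scanPolygon(rightTriangle);
    //   EXPECT_EQ(10, spans.front().y);
    //   EXPECT_EQ(3,  spans.front().xLeft);
    //   EXPECT_EQ(17, spans.front().xRight);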
The problem here is that tons of polygon scans means a lot of ranges being returned in various cases, which adds not only memory usage but extra function calls that scale badly as users move to higher resolutions. Doing it all in one pass keeps the renderer from choking, especially in polygon-intensive scenes.
I've thought of a few possible ways around this, but each seems to have its own drawback:
Just do it the optimized way and check the final pixels at the end (which I can intercept), but then when something breaks I'm potentially going to spend tons of time finding out exactly where it broke.
Extend the classes and inspect a stub, or otherwise intercept the data before it gets drawn to the pixel buffer (a rough sketch of what I mean is below, after the last option). However, that means making methods virtual (this is C++), which could introduce vtable overhead I don't need, unless I'm wrong about the cost. I'm leaning towards this since I don't know whether the vtable is actually that expensive, but I could be dead wrong when it comes to massive polygon rendering.
Just eat the performance penalty and optimize at the very end of the project, once it's been tested enough, but that doesn't sound very TDD, since refactoring it later could make a mess.
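Here's the kind of seam I'm picturing for the second option (nothing I've actually written; all the names are made up):

    #include <cstdint>
    #include <vector>

    // Rough sketch of the "intercept the pixel writes" idea: the rasterizer
    // writes pixels through an interface instead of touching the buffers
    // directly, so a test can swap in a recording stub. The worry is the
    // virtual call per pixel.
    struct PixelSink {
        virtual ~PixelSink() = default;
        virtual void plot(int x, int y, float z, uint32_t color) = 0;
    };

    // Production sink: depth test + write into the real buffers.
    struct FrameBufferSink : PixelSink {
        std::vector<float>&    zBuffer;
        std::vector<uint32_t>& frameBuffer;
        int                    width;
        FrameBufferSink(std::vector<float>& zb, std::vector<uint32_t>& fb, int w)
            : zBuffer(zb), frameBuffer(fb), width(w) {}
        void plot(int x, int y, float z, uint32_t color) override {
            int idx = y * width + x;
            if (z < zBuffer[idx]) { zBuffer[idx] = z; frameBuffer[idx] = color; }
        }
    };

    // Test double: records every call so a test can check exactly where the
    // scanner drew and with what depth, without a real frame buffer.
    struct RecordingSink : PixelSink {
        struct Call { int x, y; float z; uint32_t color; };
        std::vector<Call> calls;
        void plot(int x, int y, float z, uint32_t color) override {
            calls.push_back({x, y, z, color});
        }
    };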
I want to make sure that all my components work, but so far I have to bundle them together, and that makes the unit testing feel less than proper... since if something goes wrong, I don't know whether the polygon edge scanner is broken, or the scanline algorithm is broken, or the z-buffer was set wrong, or the vector/cross/dot product math is wrong, etc.
I'm also not a fan of taking a screenshot at the end and checking, within some tolerance, whether the renderer is working properly (more of an integration test, I guess). I'll probably do it anyway, but it feels too fragile on its own, as I like knowing "okay, this submodule just broke" rather than "this entire pipeline just broke, gonna get my coffee and get comfy for the next few hours trying to find out where."
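For reference, the tolerance check I have in mind is roughly this (illustrative only):

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Compares two RGBA frames per channel and passes if "close enough".
    // Assumes both vectors are the same size.
    bool framesRoughlyEqual(const std::vector<uint32_t>& actual,
                            const std::vector<uint32_t>& expected,
                            int channelTolerance, double maxBadPixelRatio)
    {
        if (actual.empty()) return true;
        std::size_t bad = 0;
        for (std::size_t i = 0; i < actual.size(); ++i) {
            for (int shift = 0; shift < 32; shift += 8) {
                int a = (actual[i]   >> shift) & 0xFF;
                int e = (expected[i] >> shift) & 0xFF;
                if (std::abs(a - e) > channelTolerance) { ++bad; break; }
            }
        }
        return double(bad) / double(actual.size()) <= maxBadPixelRatio;
    }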
Assuming I'm not missing something really obvious (can't see the forest for the trees), what is a proper way to go about this?