I write lots of code that involves three basic steps.
- Get data from somewhere.
- Transform that data.
- Put that data somewhere.
I typically end up using three types of classes - inspired by their respective design patterns.
- Factories - to build an object from some resource.
- Mediators - to use the factory, perform the transformation, then use the commander.
- Commanders - to put that data somewhere else.
My classes tend to be quite small, often with a single (public) method, e.g. get data, transform data, do work, save data. This leads to a proliferation of classes, but in general it works well.
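The three roles described above could be sketched like this (a minimal illustration; the class and method names, and the upper-casing "transform", are hypothetical):

```python
class ReportFactory:
    """Factory: builds an object from some resource (here, a file on disk)."""
    def create(self, path: str) -> list[str]:
        with open(path) as f:
            return f.read().splitlines()

class ReportCommander:
    """Commander: puts the data somewhere else (here, back onto disk)."""
    def save(self, lines: list[str], path: str) -> None:
        with open(path, "w") as f:
            f.write("\n".join(lines))

class ReportMediator:
    """Mediator: uses the factory, performs the transformation, then uses the commander."""
    def __init__(self, factory: ReportFactory, commander: ReportCommander):
        self.factory = factory
        self.commander = commander

    def run(self, src: str, dst: str) -> None:
        lines = self.factory.create(src)
        transformed = [line.upper() for line in lines]  # the "transform" step
        self.commander.save(transformed, dst)
```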
Where I struggle is with testing: I end up with tightly coupled tests. For example:
- Factory - reads files from disk.
- Commander - writes files to disk.
I can't test one without the other. I could write additional 'test' code to do the disk read/write as well, but then I'm repeating myself.
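Concretely, the coupling looks like this: a test for the disk-reading factory has to write a file first, which duplicates the commander's responsibility inside the test (a sketch; names are hypothetical):

```python
import os
import tempfile

class FileFactory:
    """Factory under test: reads lines from disk."""
    def create(self, path: str) -> list[str]:
        with open(path) as f:
            return f.read().splitlines()

def test_file_factory():
    # To test the *reader*, the test must first *write* a file --
    # effectively re-implementing the commander inside the test.
    d = tempfile.mkdtemp()
    path = os.path.join(d, "data.txt")
    with open(path, "w") as f:
        f.write("one\ntwo")
    assert FileFactory().create(path) == ["one", "two"]
```

A test for the commander has the mirror-image problem: it must read the file back to verify the write.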
Looking at .NET, the File class takes a different approach: it combines the responsibilities of my factory and commander. It has methods for Create, Delete, Exists, and Read all in one place.
Should I follow the example of .NET and combine my classes, particularly when dealing with external resources? The code is still coupled, but the coupling is more intentional - it happens in the original implementation rather than in the tests.
Is my issue here that I have applied the Single Responsibility Principle somewhat overzealously? I have separate classes responsible for reading and writing, when I could have a single class responsible for dealing with a particular resource, e.g. the system disk.
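For what it's worth, the combined approach might look like the following - one class owning the "system disk" resource, so a round-trip test exercises read and write against each other (a sketch only; `DiskStore` and its methods are hypothetical names, loosely modelled on .NET's File class):

```python
import os

class DiskStore:
    """Owns one external resource (a directory on disk), combining the
    factory's read and the commander's write responsibilities."""
    def __init__(self, root: str):
        self.root = root

    def _path(self, name: str) -> str:
        return os.path.join(self.root, name)

    def exists(self, name: str) -> bool:
        return os.path.exists(self._path(name))

    def read(self, name: str) -> str:
        with open(self._path(name)) as f:
            return f.read()

    def write(self, name: str, content: str) -> None:
        with open(self._path(name), "w") as f:
            f.write(content)

    def delete(self, name: str) -> None:
        os.remove(self._path(name))
```

With this shape, a single write-then-read test verifies both halves without duplicate 'test' I/O code, and the mediator depends on one seam that a test double could replace.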