
I lead a small but growing team of developers on an iOS application with a server backend. We have comprehensive unit and integration tests on both ends.

As the product grows, I want the onus of "quality assurance" to fall on all team members -- currently, there is a very "throw it over the wall" mentality after development is done.

The question that inevitably comes up is "But what do I test exactly?"

When testing new features, the answer is simple and well-understood: each story has acceptance requirements and for the most part, it's a matter of testing that those requirements are met.

However, as the product gets more complex and the team starts to contain more members who have less than intimate knowledge of the entire app, regressions (at the user acceptance level) become a big problem. How does the new employee know whether Joe's changes broke feature 2345 if he doesn't know how feature 2345 is intended to function?

My thought is that we probably require some sort of master documentation of how every aspect of the application functions, from a user perspective.

My questions are:

  • is this a common solution to this sort of problem?
  • what is this type of documentation typically called? (it isn't technical documentation per se, it's not code documentation, it's not user documentation)
  • are there good examples out there of how companies have structured & written such documentation?
dtj
    I have never seen a single case where documentation prevented regression defects. The only thing I've seen that prevents regression defects and answers the question about what to test is writing automated tests. No documentation can fill the void left by a lack of automated tests. Even if you write docs on how to test, people miss things. A test script does things the same way *every* time. – Greg Burghardt Sep 26 '19 at 23:17
    You say you have comprehensive unit & integration tests, but if you’re having regression issues, they must not be comprehensive. Find the gaps and fill them in. When you do get a bug report, insist on tests being written for it. The only way I’ve found it effective to prevent regressions in a large code base is to have an even larger test suite. – RubberDuck Sep 27 '19 at 00:02
  • @RubberDuck: very, very, very related to your comment: https://softwareengineering.stackexchange.com/a/21933/118878 – Greg Burghardt Sep 27 '19 at 00:17
  • I find it interesting to consider whether, for applications that may last many years but probably not many decades (and thus won't outlast the entire careers of a development team), it is more economical to retain experienced staff for the lifetime of the application, than it is to try and embed all possible knowledge in tests and documentation (which, even if successful in allowing knowledge to be reproduced in new staff - which it often isn't - will multiply the up-front burden on the development team many-fold, and thus multiply the hiring, training, and skill requirements up-front). – Steve Sep 27 '19 at 07:16

3 Answers


How does the new employee know whether Joe's changes broke feature 2345 if he doesn't know how feature 2345 is intended to function?

That is not necessary in most cases. Usually it is sufficient to check whether Joe's changes altered the previous behaviour of feature 2345, not whether it works as described in some documentation. So if in doubt, one needs to look at the previous version of the program, assume that this version worked as intended, and check that the change to the code kept the old behaviour. Even if the former version already contained a defect here, not changing the behaviour avoids making things worse.

Of course, ideally one should try to automate this kind of regression testing.
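
To make the idea concrete, here is a minimal sketch of such an automated regression (or "characterization") test in XCTest. The `InvoiceFormatter` type and its expected output are hypothetical placeholders; the point is that the expected value is recorded from the previous version's actual behaviour, not derived from documentation:

```swift
import XCTest

// Characterization test: it pins down the behaviour of the previous version,
// so a later change that alters it fails loudly. `InvoiceFormatter` is a
// hypothetical type used only for illustration.
final class InvoiceFormatterRegressionTests: XCTestCase {
    func testFormattingMatchesPreviouslyRecordedOutput() {
        let formatter = InvoiceFormatter(locale: Locale(identifier: "en_US"))
        let output = formatter.format(amountInCents: 123_456)

        // The expected string was captured from the last shipped version of
        // the app, not from a spec. If the change is intentional, re-record it.
        XCTAssertEqual(output, "$1,234.56",
                       "Behaviour of invoice formatting changed unexpectedly")
    }
}
```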

And yes, it is also good to bring the documentation into the test process and compare it with the actual behaviour of the application. But if those two deviate, a "new employee" will probably still not know where the failure lies: it might be in the program, or it might be in the documentation. Documentation is often not part of the solution; it can be part of the problem.

Then you asked for the name of

some sort of master documentation of how every aspect of the application functions, from a user perspective.

This is definitely called "user documentation". In your comment, you mentioned some extra documentation which could "outline finer points than any end-user would necessarily need to know to use our software" - but what should that be?

  • Either some detail is obvious from the GUI; then it does not have to be described in the user documentation, and it should also be clear to the developers trying to change something in that area

  • Or it is not obvious, but still relevant for the user; then it definitely belongs in the user documentation, since it will be needed to use your software correctly

  • Or it is not relevant for the user; then it is an implementation detail and should be documented in the code or in your usual technical documents

Note there is actually no "simple" solution for keeping complex applications manageable. One has to work on several layers, like:

  • training your devs (especially new developers) so they get a better understanding of how certain features in an application should work from a user's perspective

  • finding the right amount of documentation - "the more docs, the better" is a fallacy, since docs also have to be maintained

  • the same holds for "automated regression testing" and "manual regression testing"

  • when the app reaches a certain size, modularization can also help - then not every team member has to know the whole code base in every gory detail, but you have specialists for certain areas (but beware, don't lose sight of the "big picture").

Doc Brown
  • I understand your point; however, I feel like that hasn't worked in practice, at least for me. The new employee may not know all the finer points of feature 2345 just by loading it up and trying it out. Furthermore, there could be tens or hundreds of features that this change could touch in some fashion, and they might not even know that. – dtj Sep 26 '19 at 21:46
  • And yes, I am also concerned about documentation deviating from the actual behavior. I hadn't thought it was called user documentation, since it would outline finer points than any end-user would necessarily need to know to use our software – dtj Sep 26 '19 at 21:47
  • @dtj: see my edit – Doc Brown Sep 27 '19 at 08:44

In short

Yes, a synergy with lightweight user documentation can be a good solution to your problem.

All the details

The ideal approach, from a quality assurance point of view, would of course be to have fully automated tests that you can run systematically after each change to check for regressions. If you can go in that direction, go for it.

Real life constraints

Nevertheless, it's not always possible. If you are, for example, writing enterprise software (e.g. an ERP or a CRM system), you'll discover a lot of requirements on the fly and have to adapt your product accordingly. Automated end-to-end tests are difficult to maintain at this stage, in view of the frequent, and sometimes radical, changes.

If you're developing complex software in a smaller organisation, it's even worse: even if the automated approach were feasible, you won't have the resources to go for it. So creativity is required.

The same issue applies to manual test cases, which describe step by step each action to be performed by the tester, together with the expected response/behavior/result of the system. Unfortunately, these detailed test specifications are time-consuming to write and almost as difficult to maintain as automated tests.

User documentation and synergies

On the other hand, you'll often need to produce some kind of documentation for the end-users anyway. A possible synergy would then be to use this end-user documentation to guide the new developers (or even the experienced ones, as the system grows significantly) step by step through the tests.

Not all user documentation is equal. The big, comprehensive user manual of 50-200 pages will be of no use here: testing would require putting together pieces of explanation spread over multiple chapters. Moreover, testers (especially if integrated in the development team) do not have the time or patience to read it all. Interestingly, users are also often lost in those manuals.

Business process-oriented manuals produced iteratively

A proven approach, both in terms of usefulness for end-users and for tests, is to build process-oriented documentation, composed of a set of small, well-structured documents, each organised like a straightforward fact sheet. It's modularity applied to documentation:

  • A first document briefly describes the big picture: the end-to-end business process flow, the list of the steps, and how they relate to each other.
  • For every process step, a document describes the scenario for the user, step by step.
  • You would then complete the set with additional documents for the variants (a scenario for a special case or an error situation).

This kind of documentation can be produced incrementally with the software itself:

  • During the early development phase, this documentation would be maintained in a draft form (no fancy graphics or screenshots).
  • The process-step docs would evolve as you add new features. Every process sheet would present the sequence of actions from the user's perspective, with explanations of the fields to fill in.
  • Only when the user interface is much more stable and the first shipment approaches should end-product quality be considered, with graphical design, screenshot insertion, and fine-tuning of the wording.

Using this kind of document for testing is a winning approach (a sketch follows the list below):

  • a new developer can easily learn what to test and what the expected result is
  • if the documentation is ambiguous, the developer can report or correct the issues, improving the documentation with each iteration, exactly as the software is improved
  • the tester would then execute the scenario with some hypothetical data
  • at the end, you have fine-tuned documentation that is really helpful to end-users
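
As an illustration, here is a minimal sketch of how one such process sheet could translate into an automated UI test with XCUITest. All screen and accessibility identifiers below are hypothetical, and the "Create a customer order" sheet is imaginary; each `// Step n` comment mirrors a line of the sheet:

```swift
import XCTest

// UI test derived almost literally from a hypothetical process sheet,
// so the document and the test share the same step-by-step structure.
final class CreateOrderProcessTests: XCTestCase {
    func testCreateOrderHappyPath() {
        let app = XCUIApplication()
        app.launch()

        // Step 1: From the home screen, tap "New Order".
        app.buttons["newOrderButton"].tap()

        // Step 2: Enter the customer name and confirm.
        let nameField = app.textFields["customerNameField"]
        nameField.tap()
        nameField.typeText("ACME Corp")
        app.buttons["confirmButton"].tap()

        // Step 3: The order summary appears with status "Draft".
        XCTAssertTrue(app.staticTexts["Draft"].waitForExistence(timeout: 5))
    }
}
```

Whether you automate the sheet or hand it to a human tester, the document and the test share the same structure, which is exactly the synergy described above.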

Use cases or user stories?

"Business process oriented" means that we provide the big picture, regardless of how requirements are provided !

Some further synergies can be envisaged:

  • If you use use cases: a use case describes relevant goals and scenarios that matter to the users. Typically, you'll find an easy mapping between a use case or a use-case step and a business process step.
  • If you use user stories, it depends on the level of the story. But if you use user story mapping, the big picture emerges and you'll find the relevant mapping with the documentation.

Conclusion

On several larger projects, I have experienced very good results using this approach of combining acceptance testing and user documentation:

  • One big advantage is that many people without prior knowledge can run tests with this approach. In addition, newcomers to the team can grasp more quickly how the pieces of the system fit together.
  • The other benefit is that this approach consumes fewer resources than keeping test scripts and user documentation completely separate.
Christophe

Documentation that is (supposed to be) maintained independently and isolated from the software source code will not help prevent regressions.

You say that you have "comprehensive unit and integration tests". This is a good start, but end-to-end automated acceptance testing will also help. You may also find gaps: any reported bug should result in at least one new test case being written, which should fail on the code before the bug fix is applied and pass once it is.
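
As a sketch of that workflow, suppose a hypothetical bug report says "the discount is applied twice when the cart is edited". The regression test below, written with XCTest against assumed `Cart` and `Item` model types, fails on the buggy code and passes once the fix lands:

```swift
import XCTest

// Regression test written directly from a hypothetical bug report.
// `Cart` and `Item` are assumed model types, not real API.
final class DiscountRegressionTests: XCTestCase {
    func testDiscountIsAppliedOnlyOnceAfterCartEdit() {
        var cart = Cart()
        cart.add(item: Item(price: 100))
        cart.apply(discountPercent: 10)

        // The edit that triggered the bug.
        cart.add(item: Item(price: 50))

        // The buggy code returned 121.5 (10% applied twice: 150 * 0.9 * 0.9);
        // the correct total is 150 * 0.9 = 135.
        XCTAssertEqual(cart.total, 135.0, accuracy: 0.001)
    }
}
```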

You should also consider your tests to be an executable specification of the system. If anyone makes a change and a test has to change as well, the person making the change should understand why the test has to change. There are a few possible reasons. Maybe the test was extremely fragile and the change doesn't alter the intent of the test; this is a valid reason to change the test. But maybe the implementation caused some assertion about the system to no longer be true. In this case, the developer needs to understand what was being asserted, why it was being asserted, and whether the condition must continue to hold. This helps in deciding whether to update the test or to change the implementation so that both assertions hold.

Personally, I've found that using BDD-style test frameworks at all levels of testing is extremely helpful for this. Even if the test implementation is not readable by everyone, these frameworks can often generate output that is readable to product owners, business analysts, developers, and users, allowing them to review the assertions made by the tests. This makes the tests usable as a specification of the behavior of the system: stakeholders can read them, identify gaps in testing or incorrect assertions, and have those corrected.
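
For illustration, here is a minimal BDD-style sketch using the Quick and Nimble frameworks (common on iOS). `LoginService` is a hypothetical type, and the exact `spec()` override signature varies between Quick versions; the point is that the `describe`/`context`/`it` strings read as a specification ("Login, with a wrong password, rejects the attempt"):

```swift
import Quick
import Nimble

final class LoginSpec: QuickSpec {
    override func spec() {
        describe("Login") {
            context("with a wrong password") {
                it("rejects the attempt and does not create a session") {
                    // `LoginService` is a hypothetical type for this sketch.
                    let service = LoginService()
                    let result = service.logIn(user: "joe", password: "wrong")
                    expect(result.isAuthenticated).to(beFalse())
                    expect(result.session).to(beNil())
                }
            }
        }
    }
}
```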

I would shy away from plain-text specifications that aren't executable. These are more likely to become out-of-sync with the implementation of the system and lose value quickly. That doesn't mean they don't have a place, but that there are likely to be better options.

Thomas Owens