21

Does it make sense to give signoff authority to testers? Should a test team

  1. Just test features, issues, etc., and simply report on a pass/fail basis, leaving it up to others to act on those results, or
  2. Have authority to hold up releases themselves based on those results?

In other words, should testers be required to actually sign off on releases? The testing team I'm working with feels that they should, and we're having an issue with this because of "testing scope creep" -- the refusal to approve releases is sometimes based on issues explicitly not addressed by the release in question.

  • 2
    Please rephrase your question so that it is not a poll. What is the problem you are trying to solve? – sourcenouveau Jul 01 '13 at 14:58
  • 5
    "should" largely depends on the organizational structure of the company. If they get measured on bugs found in production, being able to hold a buggy release is an essential tool. –  Jul 01 '13 at 15:10
  • Excellent point, @MichaelT. In my organization, testing is a role rather than a job description, and evaluation is more ad-hoc and not at all quantitative. Successful deployments would feed into positive reviews, but bugs in production would not be specific negatives, any more than for anyone else on the team. – Ernest Friedman-Hill Jul 01 '13 at 15:14
  • 6
    If you have a problem where testers refuse to approve releases based on issues not planned to be addressed, then you have a communication problem (they don't know the issues are not planned to be addressed) or a people problem (they are making themselves important; whether to release is ultimately a business decision). – Jan Hudec Jul 01 '13 at 15:23

5 Answers

27

Most places I have worked, the QA people do have some sort of sign-off step, but do not have final authority on whether the release proceeds. Their sign-off represents that they completed the testing expected by the release plan, not that the release is flawless.

Ultimately QA != the business, and the business needs to decide whether they are OK with deploying the code in its current state, or whether the benefit outweighs the downside. This is often done by clients or stakeholders immediately prior to deploy and is often called User Acceptance.

If your QA group is also your User Acceptance group, then they may well have the authority to declare your release candidate unacceptable. But if you are getting pushback over issues that are out of scope for the bugfix/iteration/sprint/change request/whatever you bucket your time in, then the Project Manager or the business-line stakeholders need to have a come-to-Jesus meeting with the QA team.

It is fine to report preexisting defects or unintended outcomes of new requirements, but if an issue is out of scope and non-disastrous, it is generally not acceptable to label it as blocking. It goes in the backlog for the product owner to prioritize like everything else.

Bill
  • Who decides if you will invite the customer to perform an acceptance test on build 1234 or that it is not good enough for acceptance testing? – Bart van Ingen Schenau Jul 01 '13 at 15:23
  • 3
    @BartvanIngenSchenau: The product owner/project manager has to have the last word on these. Because even the acceptance tests often _will_ bend if need be (do you want feature X now even though we can't get Y to work with it, or do you want to wait 2 more months until we fix it?) and the product owner is negotiating that. – Jan Hudec Jul 01 '13 at 15:27
  • In addition to what Jan said, in many methodologies there would be a schedule or cadence here. Some folks deploy every candidate that survives the initial smoke test to UAT, some auto-deploy anything checked into the candidate branch, some everything that is checked into main. Ideally you have been showing the stakeholders progress as you go, so there shouldn't be much of a surprise at the end. In some of these cases you end up showing the stakeholders what the QA people have been raking you over the coals about, and they just say "who cares" and it is over. – Bill Jul 01 '13 at 15:43
  • In modern SaaS with continuous deployment, the code (service) deployment cycle can be separate from the feature (business) release cycle. This feature release cycle can be implemented using feature flags or toggles, e.g. from alpha (internal) to beta (opt-in) to general availability. That is one way to involve more formal business sign-off separate from and regardless of deployability of particular code or services. The opposite, tying feature releases to service deployments, introduces coupling and risk into the process that can be avoided with the feature flag technique. – Will Feb 12 '18 at 20:27
  • 1
    @will I don't disagree, but someone is still making the judgement call on if those hidden features are hidden enough not to be noticed by users other than the beta team on the initial deployment and ultimately anywhere I have used that approach the sequence plays out more or less the same, but with different labels on the moving parts and the risk shifted around a bit. I prefer the situation you describe, but the QA team finding something pre-existing or the product manager deciding to proceed anyway is as much a thing in this model as any other in my experiences. – Bill Feb 12 '18 at 21:15
  • @Bill agreed, was mostly trying to point out to others interested in this answer that with SaaS in particular, there are also techniques for isolating at least some aspects of the technical decisions from the business decisions, even though to your point, there's no avoiding the business impact of any particular decision. In my current project, we strive to give engineers relatively independent authority over service deployment, but also try to ensure that the trust is well-placed through good communication with automated and human checks along the way. – Will Feb 12 '18 at 22:55
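The deployment-versus-release separation discussed in the comments above can be sketched with a minimal feature-flag check. This is an illustrative sketch only: the `Flag` class, the stage names, and the `is_enabled` helper are assumptions for demonstration, not any particular flagging library's API.

```python
# Minimal feature-flag sketch: code is deployed for everyone, but a feature
# is only "released" to audiences the flag's stage has reached.
# Stage names, Flag, and is_enabled are illustrative assumptions.
from dataclasses import dataclass

# Rollout order: internal testers -> opt-in users -> general availability.
STAGES = ["alpha", "beta", "ga"]

@dataclass
class Flag:
    name: str
    stage: str  # how far the feature has currently been released

def is_enabled(flag: Flag, audience: str) -> bool:
    """An audience sees the feature once the flag's stage has reached it:
    alpha (internal) users see beta features, but GA users only see
    features whose flag has advanced all the way to 'ga'."""
    return STAGES.index(flag.stage) >= STAGES.index(audience)

checkout_v2 = Flag("checkout_v2", stage="beta")
print(is_enabled(checkout_v2, "alpha"))  # internal testers see it
print(is_enabled(checkout_v2, "ga"))     # general users do not, yet
```

Advancing the flag's stage is then the business release decision, made independently of when the code itself was deployed.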
6

Giving sign-off authority (i.e. a veto right) for releases to testers makes as much sense as giving that right to developers: none at all.

Testers and developers are primarily technical people, so they are likely to make their decisions mostly on technical grounds. But the concerns that need to be weighed when making a release are both technical and business concerns. Obviously, the customer won't be happy if you deliver a bug-ridden product, but the customer will be equally unhappy if you keep postponing a release because there are still open issues on the product.

Someone needs to find the right balance between a good product and keeping to the schedule that was promised to the customer. That someone should not be involved in the project in a purely technical role, but rather in a more business/management-oriented role, like project manager or product owner, taking input from the testers and the developers.

Jim G.
Bart van Ingen Schenau
  • 1
    I voted this down because I fundamentally disagree with several points and assumptions that you are making. I disagree that QA shouldn't have authority to veto a release because many QA groups also operate in a User Acceptance role as well. Furthermore, I disagree with the assumption that testers are technical people. Not always the case, not every group that releases software can afford a full QA team, so that role can fall on business analysts that may not be technical at all. – maple_shaft Jul 01 '13 at 15:37
  • 1
    In addition to maple_shaft's point, I often see the final call on this left to whoever is in the customer role, unless something terrible is identified. It is ultimately their deliverable, and they are most likely to have the right point of view on risk, assuming you provide them with accurate information. – Bill Jul 01 '13 at 15:51
6

Somebody needs that authority. Whether it is a tester, the team of testers, the leader of that team, or the leader of the development organization is somewhat irrelevant. Or perhaps more accurately, it depends on the organization.

Ultimately, the choice to release software is a business function. The business has to decide whether the quality is appropriate. Arguably, the director of quality assurance should make that decision, or feed that decision to the appropriate business unit. That all depends on the size of the company, the relative importance of quality, etc.

All that being said, the information used to make the decision starts with the tester. Whether they have the power to stop a release or not, they should feel the responsibility to inform the decision makers when they see something that they think should cause a delay in the release.

Bryan Oakley
5

The decision to 'release' or 'not to release' is at the end of the day a business decision, where a rigorous risk/reward analysis needs to be performed.

It is insane for any organization to ask the test team to take on this responsibility or for the test team to agree to this responsibility.

The role of the test team is to provide an analysis of the quality of the software, its readiness to be released, and any risks identified as an input to the business decision to release or not to release.

As others have noted, _somebody_ (and I believe it is an individual) does need the authority to make the 'release' or 'not to release' decision. That same person can delegate the decision under specific conditions (e.g., no P1 or P2 bugs).
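Conditional delegation like this can even be expressed as an automated gate. The sketch below assumes open bugs carry P1–P4 severity labels; the data shapes and names are hypothetical, for illustration only.

```python
# Minimal release-gate sketch: the release decision is delegated under a
# stated condition -- here, "no open P1 or P2 bugs". The severity labels
# and the shape of the bug records are illustrative assumptions.
BLOCKING_SEVERITIES = {"P1", "P2"}

def release_approved(open_bugs):
    """Approve automatically only if no open bug has a blocking severity.

    Returns (approved, blockers); a non-empty blockers list means the
    decision escalates back to the designated decision maker.
    """
    blockers = [b for b in open_bugs if b["severity"] in BLOCKING_SEVERITIES]
    return len(blockers) == 0, blockers

ok, blockers = release_approved([
    {"id": 101, "severity": "P3"},
    {"id": 102, "severity": "P1"},
])
print(ok, blockers)  # not approved; bug 102 escalates to the decision maker
```

The point is not the code but the policy: the condition is explicit and agreed in advance, so the gate informs the decision maker rather than replacing them.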

Jordan
3

I've worked in the same situation: testers over-reaching and inventing ever more creative ways of breaking a system that, when risk-assessed, are incredibly unlikely ever to happen in production.

While I understand and commend the test team for not wanting to send out an imperfect release, it does require strong product ownership to define what is an "acceptable risk".

In my experience, the test team should be given a veto on releasing software, but this veto should be overridable by the product owner, and only after discussion with the lead testers.

Software will never be perfect; if you're suffering from test creep, you'll never get anything released until there's a major production issue and a fix is rushed out (which won't be tested correctly).

Michael
  • 1
    That is a real problem but I am not sure if that is necessarily the OP's problem though. My interpretation is that somehow the testers are interpreting new requirements that weren't originally defined. – maple_shaft Jul 01 '13 at 15:42
  • 2
    my experience with testing teams has lead me to fall on the other side of this. To me QA should have no expectation of being able to block a deploy without making their case to the rest of the team or getting the owner to override the team. – Bill Jul 01 '13 at 15:47
  • 1
    True - I probably wasn't explicit enough; the same issues occur when testers raise defects, and I quote, "in the spirit of the story", which leads to the same result -- nothing ever getting released. – Michael Jul 01 '13 at 15:47
  • In my case, it's more @maple_shaft's interpretation; not so much being devious in finding ways to break the software, as reporting failure to handle explicitly unsupported cases. – Ernest Friedman-Hill Jul 01 '13 at 15:55
  • 1
    @ErnestFriedman-Hill It sounds like you are describing *Negative Requirements*, and these are what is missing from your functional requirement documents. A Negative Requirement is an explicit statement about what a system will **NOT** do, and can be just as acceptable as regular requirements. If these are declared, then defects raised against behavior excluded by a Negative Requirement are not valid. – maple_shaft Jul 01 '13 at 16:59