5

As a QA person, I've always believed that a defect report should include all the steps anyone fixing it needs to reproduce it.

However, is it crazy to think that every defect QA opens needs to be checked against production too? The reason I ask is that we have a dev manager who constantly complains that if QA investigated whether each issue also exists in production, it would save time for the dev folks. I understand that, but I still think it's nonsense to check whether every bug QA finds in a testing cycle exists in production too.

logiclife
  • I don't understand your concern. Is the dev manager asking you to not report defects if they aren't in production? Or simply to tell developers if the bug is or is not in the current production environment by running your test cases there, as well? – Thomas Owens May 28 '15 at 22:01
  • "Or simply to tell developers if the bug is or is not in the current production environment by running your test cases there, as well?" EXACTLY. – logiclife May 28 '15 at 22:10
  • Are you providing software as a service or do your customers have their own installations? Are you obligated to support multiple versions of the software simultaneously? – Thomas Owens May 28 '15 at 22:11
  • Thanks for your comments. No, it's just a customer-facing web application. One version, that's all, and no, it's not SaaS. – logiclife May 28 '15 at 22:13
  • Just to be clear - your customers are self-hosting the software, but you only support the most recent version of the software? I'm working on my answer now, and I just want to make sure it covers your case. – Thomas Owens May 28 '15 at 22:18
  • QA documents defects. If QA goes the extra step of describing all the steps to duplicate the defect, then they are already going above and beyond. Thus, I agree with you; the dev manager is definitely wrong. Sometimes our QA documents the steps, but many times that isn't easy for them to determine exactly. All we ask of QA is to be sure to save the log files. We can figure it out from there. If the dev team didn't build sufficient diagnostics into their application, then that's their fault. – Dunk May 28 '15 at 22:21
  • @ThomasOwens, no, they don't install any software; they just access it via the browser. We push a new version every month or so. Hope that helps. – logiclife May 28 '15 at 22:24
  • Thanks. That, by the way, is [software as a service](http://en.wikipedia.org/wiki/Software_as_a_service). I should be posting my answer shortly. – Thomas Owens May 28 '15 at 22:25
  • @Dunk, agreed. Every time QA finds a defect in the functional testing cycle, the dev manager expects us to check whether that issue exists in current production or not. I'm not sure what the point of that futile exercise is, other than to be able to say, well, that issue exists in production, no one reported it, and hence we won't fix it now. You see where he is going with this!? – logiclife May 28 '15 at 22:26
  • @ThomasOwens, I should have been clearer. This application is for our internal customers, who use it in the field. The company hosts the application in its own data center. – logiclife May 28 '15 at 22:29
  • If your organization has a customer service segment, then you probably want to be able to alert them if there is a significant problem in production. – Gort the Robot May 29 '15 at 03:46
  • I guess I'll modify my answer somewhat with a caveat. If the development team doesn't have access to something representative of the production system, then it makes sense for someone who does have access to do this check. I've been on a few projects like that, where the hardware was very expensive and the software team was given a limited window for development and integration testing with the hardware, after which they lost access to it. But that's a special case; my original response applies in the general case. – Dunk Jun 01 '15 at 16:35

4 Answers

5

From the development manager's standpoint, it absolutely matters whether the defect is new or an existing one, because that has a direct and immediate impact on how the bug needs to be handled. The most important question for the manager is whether a new bug needs to be resolved in the current release cycle or whether it can wait and be prioritized in a later cycle. That, in turn, often depends on whether the bug is new.

If you've found a new bug, that implies that one of the new features or bug fixes in the current release cycle introduced the issue. If that's the case, someone either needs to remedy the issue as part of the current release (either by reverting the change that introduced the bug or by fixing the bug itself), or the business needs to decide whether adding new feature X is worth deploying even if it introduces new bug Y. Almost always, the bug has to be resolved in the current build cycle or the offending change needs to be rolled back. On the other hand, if you've found an old bug that existed prior to the current round of changes, the current build cycle can generally continue and the bug can be prioritized for a future release. Of course, there are cases where a newly identified bug needs to be handled in the current release cycle because it is just that critical, but those cases tend to be rare.

Now, whether it should be QA's responsibility to check whether the bug exists in the current production release, or whether that check should be done by whoever is prioritizing the bugs (assuming that prioritization happens immediately), is an open question. My bias would be to ask QA to do it, since they're already writing up the bug. Because the QA person already knows how to reproduce the bug, they're best positioned to verify whether it exists in production. The QA department also tends to have more hours available for this sort of investigation than the person doing the prioritization, since the work can be spread across many analysts.
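
Where the repro steps are automated, that verification can be cheap: run the same test against each environment. A minimal sketch, assuming a hypothetical HTTP endpoint and example URLs (none of these names come from the question):

```python
import pytest
import requests

# Hypothetical base URLs; substitute your real staging and production hosts.
ENVIRONMENTS = {
    "staging": "https://staging.example.com",
    "production": "https://www.example.com",
}

@pytest.mark.parametrize("env", sorted(ENVIRONMENTS))
def test_order_total_applies_discount(env):
    """Run the same repro steps against each environment so triage knows
    whether the defect is new to this release or already live."""
    base_url = ENVIRONMENTS[env]
    # Hypothetical endpoint standing in for the actual repro steps.
    order = requests.get(f"{base_url}/api/orders/42", timeout=10).json()
    assert order["total"] == order["subtotal"] - order["discount"], (
        f"defect reproduces in {env}"
    )
```

A failure on staging only suggests the current cycle introduced the bug; a failure in both means it predates the cycle and can usually be prioritized separately.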

Justin Cave
  • Agree completely. And do not forget that if the defect exists in production, data manipulation may be necessary (updating wrong data generated by the defect). – JSBach May 29 '15 at 07:48
  • I don't agree that when the defect gets fixed depends on whether the bug is new or not. A major issue found, even if it's many releases old, could be prioritized as a show-stopper for a given release. An example could be a security hole that a QA test case turned up - even if a user hasn't found it, it may be wise to fix it immediately. Likewise, a minor issue or an intermittent issue could be delayed for several releases so that the users can get new functionality. The business and user needs are what drive defect priorities, not defect age. – Thomas Owens May 29 '15 at 10:04
  • If your decision to make an update live, or to reject or delay it, is based on the criterion "is the product better with or without this release," then knowing that the defect is there is kind of important in assessing its impact on the go-live decision. **There are plenty of other criteria that companies can use.** – Michael Shaw May 29 '15 at 11:31
  • There's a third possibility for triaging the bug. If the bug exists in production, it's possible the dev team will want to release a hotfix prior to the next major/minor release. – RubberDuck May 30 '15 at 19:01
  • Re: "QA tends to have more hours available". Huh! Where did that come from? Every place I have ever worked, QA is short-staffed and responsible for multiple projects. As developers, we are always trying to pry them away from other projects to work on ours. So no, they don't have any hours available, let alone more hours available. No reason to add more tasks to their queue when determining whether a problem currently exists in production or not can frequently be answered by the developers just by hearing what the problem is without needing to duplicate it on the production system. – Dunk Jun 01 '15 at 16:25
2

There are two separate questions here: should the production system be checked, and who should do it? Let's assume we're talking about two separate departments in the same company ...

  • Which department has the time to do it? In most projects I've experienced, be it waterfall or scrum, testing becomes the bottleneck as the release draws near. It might make sense to hand the check on the production environment off to the dev staff for that reason alone.
  • On the other hand, many testers earn less than developers, so it makes economic sense to let them do the legwork if they're available in sufficient numbers.
  • Who has the necessary access to the production environment? Where I work, Development often doesn't have it; just Operations (of course) and QA (because they're doubling as second-level support).

Tell your dev manager that you're happy to do the checks on the staging, production, or whatever system, as long as the project budget and schedule reflect this responsibility.

o.m.
1

Once a defect is found, I think the most important thing to do is to log it with the key information: a title or summary, details about what the defect or problem is, what actually happens and what you expect to happen, the steps needed to reproduce it, and the environment where it was found (such as the OS, web browser, software version under test, etc.). Depending on your process, the person submitting the defect report may also assign a priority and/or severity.
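
As an illustration only (these field names don't come from any particular tracker), that key information might be captured in a structured record like this:

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    """Illustrative fields for a defect report; adapt to your tracker."""
    title: str                     # short summary of the problem
    description: str               # details about what the defect is
    actual_behavior: str           # what actually happens
    expected_behavior: str         # what you expect to happen
    steps_to_reproduce: list[str]  # ordered steps to trigger the defect
    environment: str               # OS, browser, version under test, etc.
    severity: str = ""             # optional, depending on your process
    priority: str = ""             # optional, depending on your process
```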

From a QA perspective, you are usually looking at software that is still under development, or at best a release candidate. Although the development team should be performing unit, integration, and some system tests before you get to it, their tests aren't perfect and you're going to find issues that the dev team should look at before they go to release. Ultimately, it's up to someone in the organization to prioritize identified issues and decide if something will be fixed in this release cycle or a later release cycle.

Throughout the development cycle, both the development team and the QA team should be changing their tests. There may be a regression test suite that always gets executed, but I would expect testing to be more thorough and focused on features or parts of the software that have changed. Defects could be exposed for any number of reasons. A change could have introduced a defect. A change could have made a defect more obvious, easier to trigger, or more common. New test cases could have found a defect that has been latent in the software for a long time. Why the testing detected the defect isn't that important right now - the first priority should be to fix the defect.

In some environments, such as customer self-hosted software or where there is an obligation to support or maintain multiple versions, it may be necessary to test multiple versions of the software to determine when the defect was first introduced so that customers can be notified, especially if it is a significant defect. In environments like these, I would expect more of a burden to be placed on the QA team to identify the affected version(s) out of those that are currently supported and to explain why the issue escaped multiple levels and rounds of testing.

However, in environments where only one version of the software is supported, I'm not sure that it really matters what version the defect started in. If the defect was affecting users and they have a mechanism to report issues, then they will report it as a problem. Still, from a quality perspective, it is important to understand the known issues with the software. From a project management perspective, the number of open issues can be used to determine whether the software is ready to be released or to plan work for a future development cycle. Even if a defect isn't scheduled to be fixed before the end of the current development cycle, it can be reflected in user-facing documentation, along with workarounds, until it is fixed.

In the end, after the defect has been fixed, you may choose to perform some kind of root cause analysis to determine when the defect was injected (if in the current development cycle), why the defect wasn't discovered in previous development and QA cycles (if it wasn't recently introduced), and what types of tests should be created, executed, and managed to prevent this defect from returning in the future.


As an aside, I would like to note that there are multiple ideas for what a Software QA organization should be responsible for. In some organizations, Software QA is essentially a test team that is the last line of defense for software releases to ensure that it meets requirements and has no significant issues. In other organizations, Software QA is not only responsible for product quality, but also process quality and may be involved in every step of the development process to ensure that all work products (from requirements through the distribution media) are complete and correct per standards. There are other expectations, as well. This answer focuses more on Software QA being responsible for product quality.

Thomas Owens
  • Mostly agree. My question is about one specific area, like I mentioned earlier. Everyone has an equal responsibility/burden in promoting the best quality in whatever they do (requirements, code reviews, unit tests, system tests, etc.), especially given the ever-shortening product life cycles. No one person or team can alone ensure quality; it takes the entire project team to get there, and yes, QA can lead by example. – logiclife May 28 '15 at 22:57
  • @logiclife I agree that everyone is responsible for promoting quality. The last three paragraphs in the main body of my answer specifically address your question. The first three are mainly setting the stage by defining what a good defect report consists of and putting QA in perspective with project management and product development teams. – Thomas Owens May 28 '15 at 23:03
0

As a developer, I get this problem a lot.

A new feature is sent to test, and a bug is reported describing behaviour which, although it 'seems wrong', is already present in the live environment.

This is a problem because we have been assigned time to develop the new feature, not to fix a random 'bug'.

Often there has to be an extended discussion about whether the behaviour is a bug at all, what priority fixing it should take, etc.

The problem stems from creating additional test cases and not running them against live before adding them to the test suite.
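
One lightweight way to wire that check into the workflow is to make the target environment a test-run parameter, so a candidate test case can be pointed at live before it joins the suite. A sketch only; the option name and URLs are made up:

```python
# conftest.py - lets any test be run against live before joining the suite.
import pytest

def pytest_addoption(parser):
    # Hypothetical option; run new test cases against live first, e.g.:
    #   pytest new_cases/ --target-url=https://www.example.com
    parser.addoption("--target-url",
                     default="https://staging.example.com",
                     help="environment the tests run against")

@pytest.fixture
def base_url(request):
    return request.config.getoption("--target-url")
```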

Developers using scrum or some other agile process usually have their time quite tightly managed.

If testers raise bugs ad hoc, rather than against the specific requirements the developers are working to, it causes delay and frustration.

Ewan
  • "If testers raise bugs ad hoc rather than against the specific requirements the developers are working too it causes delay and frustration." That sounds like a dangerous attitude. Are you saying the testers should just ignore obvious problems because that's not what they are supposed to test? Note that new bugs need to be prioritized (just like features), so not having time assigned should not be a problem - you fix the bug once it has been prioritized and queued for fixing. – sleske May 31 '15 at 09:11
  • Yes, test teams tend to be a bit behind dev in terms of rigorous practice these days. Test teams need to up their game: write test cases against specs before dev starts and stick to those test cases when dev finishes. Get the business requirement out the door and avoid general critique of the product. – Ewan May 31 '15 at 09:22