53

I'm a software developer. There is a team of testers who follow and run test cases written by the analyst, but who also perform exploratory testing. It seems the testers have been competing to see who opens the most bugs, and I've noticed that the quality of their bug reports has decreased. Instead of testing functionality and reporting bugs related to the operation of the software, the testers have been submitting bugs about screen enhancements, usability, or stupid bugs.

Is this good for the project? If not, how can I (as a software developer) try to change the thinking and attitudes of the team of testers?

Another problem is that the deadline is an estimate that cannot change, so as it nears, the testers will be scrambling to finish their test cases, and the quality of the testing will decrease. This will cause legitimate bugs to end up in the final product received by the client.

Note: This competition is not a company practice! It is a competition organized by the testers themselves, only among themselves, and without any prizes.

Necreaux
  • 103
  • 2
  • It's probably good fun, but it shouldn't decide productivity or salary - the system can be gamed easily and to the detriment of other teams. – Ordous May 27 '15 at 17:08
  • 3
    Are the testers involved before a build? Meaning are they involved in developing the requirements or use cases or user stories, reviewing design documentation, or participating in code reviews? Are the reports that the testers file good, and are there checks in place to make sure that the reports are valid and complete? If you could edit your question to elaborate more on the roles/responsibilities of the testers and how their reports are managed, that would help me write a good answer. – Thomas Owens May 27 '15 at 17:08
  • 36
    Competition is not necessarily bad, but combined with incentives it can have adverse effects. This question reminds me of a [story on The Daily WTF where testers colluded with devs to create extra bugs that could then be heroically found](http://thedailywtf.com/articles/classic-wtf-the-defect-black-market). Fun read. Don't repeat that mistake. – amon May 27 '15 at 17:22
  • 1
    Maybe it would benefit your company to provide a better specification of what a "bug" is (as opposed to enhancement, etc.), and allow non-testers to reject poor bug reports. – Eric King May 27 '15 at 17:45
  • possible duplicate of [What is a good measure of testing/tester efficiency?](http://programmers.stackexchange.com/questions/186400/what-is-a-good-measure-of-testing-tester-efficiency) – gnat May 27 '15 at 19:01
  • 6
    Your point is well taken, but as an aside, I appreciate when someone tells me my work has usability problems. That's one of the hardest things to get right in software, and also one the most valuable to have right. – jpmc26 May 27 '15 at 19:39
  • 9
    Having come from a project more than a year long with meticulous QA, I can say that, while defects about having too much white-space between elements or different colored symbols that mean the same thing might seem unproductive, they ultimately enhance the user experience, often improving productivity, reducing the load on technical support, and giving a more professional look and feel to an application, all desirable traits. And, yes, sometimes software will be delayed because of it, but the price to pay is usually worth it. – phyrfox May 27 '15 at 20:06
  • 2
    If testers are struggling to find serious bugs and are raising ones about niggles, glitches and nice-to-haves then surely that is a good thing? The end game for everyone concerned is increasingly polished software. It does of course become a game of diminishing returns but as a developer, a steady stream of "good" bugs coming through would indicate there are significant problems with the software. – Robbie Dee May 27 '15 at 20:17
  • 1
    We had a similar "game" for closing defects, close sev1 get 5pts, close sev5 get 1pt. If a defect gets reopened you lose 10pts. You could implement something similar for the testers, except for opening defects, and if they open an invalid defect it's a large minus. – Captain Man May 27 '15 at 21:10
  • 1
    @amon that TDWTF was a derivative version of the original: http://dilbert.com/strip/1995-11-13 – Dan Is Fiddling By Firelight May 28 '15 at 13:35
  • @RobbieDee You are correct. That is exactly the problem: the significant issues are being found by customers. – Only a Curious Mind May 28 '15 at 16:51
  • Your edits are shifting the focus from *"Is it good that testers competing to see who opens more bugs?"* (like in the title) to *"how can I (as a software developer), try to change the thinking and attitudes of the team of testers?"* If I were you I would try opening a new question for the second and link to this one - the [Workplace.se](http://workplace.stackexchange.com/) is more on-topic for the second question. – DoubleDouble May 28 '15 at 17:20
  • @DoubleDouble My question is meant to generate a discussion about the bug-finding game in the test team. My edit describes what I see in the test team at my company; this is what is happening at my work. – Only a Curious Mind May 28 '15 at 17:26
  • @DoubleDouble I needed to generate this discussion to shape my thinking and come up with new ideas about how to deal with this problem (yes, I knew it was not a good thing! haha). Thank you for your comments. – Only a Curious Mind May 28 '15 at 17:28
  • 9
    A number of answers suggest that the job of testers is to find bugs; this mindset is what produces the problem you've identified. The job of **quality assurance** is to **accurately determine whether or not the product meets a stated quality bar**. I don't care if a tester is producing bug reports; I care whether a tester is producing an accurate, customer-focused analysis of the quality of the product. That's the thing that should be incentivized. – Eric Lippert May 29 '15 at 00:10
  • Testers really have two jobs: Assuring the quality of the product, and helping to improve the quality of the product. If the test department just assured the quality by providing a "ship" or "don't ship" answer, that would be quality assurance, but wouldn't help anyone improving the product. – gnasher729 May 29 '15 at 14:46
  • 1
    "This competition is not a practice of the company! It is a competition between only the testers organized by them, and without any prizes. " Inform management about the practice and that you consider it to be harmful to the project. – Pharap May 29 '15 at 15:34
  • 1
    @EricLippert, that's a very important point. A test case which finds no bugs after a fix can be just as important as a test case which finds bugs. – o.m. May 30 '15 at 06:09

8 Answers

86

I do not think it's good that they make a contest out of finding the most bugs. While it is true that their job is to find bugs, their job is not "find the most bugs". Their goal isn't to find the most; their goal is to help improve the quality of the software. Rewarding them for finding more bugs is about the same as rewarding a programmer for writing the most lines of code, rather than the highest-quality code.

Turning it into a game gives them an incentive to focus on finding many shallow bugs, rather than finding the most critical bugs. As you mention in your edit, this is exactly what is happening in your organization.

One could argue that any bug they find is fair game, and that all bugs need to be discovered. However, given that your team likely has limited resources, would you rather have a tester focus several hours or days probing deeply into your system trying to find really big bugs, or spend several hours or days skipping through the app looking for typographical errors and small errors in the alignment of objects on a page?

If the company really wants to make a game out of it, give the developers the power to add points to a bug: "stupid bugs" get negative points, and hard-to-find bugs with well-written reports get multiple points. This moves the incentive from "find the most" to "be the best at doing your job". However, this isn't recommended either, because a programmer and a QA analyst could work together to artificially pad their numbers.
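A minimal sketch of what such a developer-weighted score might look like, assuming a hypothetical export of bug reports where each report carries the points the developers assigned to it (the data, names, and layout below are invented purely for illustration):

```python
from collections import defaultdict

# Invented bug reports: (reporter, developer-assigned points). Negative
# points for "stupid bugs", more points for hard-to-find bugs that come
# with well-written reports.
bug_reports = [
    ("alice", 5),   # subtle data-corruption bug with excellent repro steps
    ("alice", -1),  # cosmetic nitpick the developers rejected
    ("bob", 1),     # valid but shallow UI bug
    ("bob", 3),     # race condition, decent report
]

def leaderboard(reports):
    """Rank testers by developer-assigned points, not by report count."""
    scores = defaultdict(int)
    for reporter, points in reports:
        scores[reporter] += points
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(leaderboard(bug_reports))  # [('alice', 4), ('bob', 4)]
```

Note that the same mechanism is still gameable, which is exactly the collusion risk described above.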

Bottom line: don't make a game out of finding bugs. Find ways in your organization to reward good work and leave it at that. Gamification rewards people for reaching a goal. You don't want a QA analyst to have the goal of "find the most bugs", you want their goal to be "improve the quality of the software". Those two goals are not the same.

Bryan Oakley
  • 25,192
  • 5
  • 64
  • 89
  • 5
    The first thing I thought was similar - if they want to turn it into a game, it would be better if a QA manager (if there is one) sets points on bugs found, assuming that person can be trusted to have the best interest of the company in mind. In this respect he can control the competition better and, whether you view this as acceptable or not, can even arbitrarily make the competition a bit closer by assigning slightly higher or lower points for the sake of the competition. (*otherwise if one person just gets way ahead due to testing what that new developer wrote, everyone else gives up*) – DoubleDouble May 27 '15 at 20:48
  • 2
    Even so, I wouldn't recommend that idea because it quickly gets boring unless your team members are almost all identically matched (which doesn't happen). It's better to compete against yourself. – DoubleDouble May 27 '15 at 20:56
  • 1
    Upvoted for the idea that measuring QA productivity by number of bugs found is equivalent to measuring programmer productivity by lines of code written (or story points closed). Both are ridiculous but both persist in the minds of PHBs who can't see any more subtle way to quantify performance. – dodgethesteamroller May 28 '15 at 14:36
  • Your answer is the same thing that I thought. But @DoubleDouble's point about testers being at identical levels is a good point to think about! – Only a Curious Mind May 28 '15 at 17:05
  • 2
    Agreed. Even though my former QA job didn't have any hard and fast quotas, there were a couple of testers who felt it was most important to bug every little nitpick they could find -- things like "character's shirt is too long, most people do not wear shirts that long" (when the character's shirt length was utterly irrelevant to the game) rather than dig for the real bugs like "repeatedly connecting/disconnecting network cable on host [in a peer-hosted game] results in game being forfeited by client and win being added to host's online record". – Doktor J May 28 '15 at 22:19
  • WRT "because a programmer and QA analyst could work together to artificially pad their numbers": [check this entry on The Daily WTF](http://thedailywtf.com/articles/The-Defect-Black-Market). – BCdotWEB May 29 '15 at 13:04
  • @BCdotWEB I read this article, it was mentioned by another user in the question comments haha. Very funny! – Only a Curious Mind May 29 '15 at 20:33
  • @DoubleDouble - It doesn't even have to be someone assigning points - it can be automatic based on the type of bug. Someone would just need to write up the definition of each level. Something like "Critical (50 points): prevents general usage or crashes; Serious (20): Edge cases or poor performance; Minor (5): usability improvements; Trivial (1): Cosmetic improvements" With that in hand, QA would categorize their own bugs, and the trivial ones would be appropriately trivialized. – Bobson May 29 '15 at 20:46
  • I agree it could be - but the thing I don't like about "automatic" is that it isn't as easily adaptable and the (1) point Cosmetic improvement is still one point, so the question OP will still receive every tiny little cosmetic improvement. If you add a Useless(-1) category, then someone has to decide which bugs fit that category anyway, since testers themselves wouldn't categorize it (knowingly) as completely useless. – DoubleDouble May 29 '15 at 21:22
  • Assigning negative points to bugs means they will never be caught. If you have a lot of sloppy errors (e.g. typos) then the product's image will suffer due to perception of sloppiness overall. There's no reason to artificially devalue catching typos. – éclairevoyant Dec 12 '22 at 14:57
17

I am going to disagree a bit with the other answers. "Finding bugs" for a tester is a bit like "writing code" is for a developer. The raw amount is meaningless. The job of the tester is to find as many of the bugs that exist that they can, not to find the most bugs. If tester A finds 5 of the 10 bugs in a high quality component and tester B finds 58 of the 263 bugs in a low quality component, then tester A is the better tester.

You want developers writing the minimum amount of code to solve a particular problem, and you want a tester writing up the minimum number of reports that correctly describe broken behavior. Competing to find the most defects is like competing to write the most lines of code. It is far too easy to slip into gaming the system to be useful.

If you want testers to compete, it should be based more directly on what they are there to do, which is to validate that the software works as described. So perhaps have people compete to see who can write the most accepted test cases, or even better, write the set of test cases that covers the most code.

The better measure of developer productivity is the number of tasks completed times task complexity. The better measure of tester productivity is the number of test cases executed times test case complexity. You want to maximize that, not bugs found.
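As a rough, made-up illustration of weighting executed test cases by complexity instead of counting bugs (all names and weights below are invented):

```python
# Invented complexity weights for executed test cases (1 = trivial,
# higher = more complex scenario).
executed = {
    "tester_a": [("login_happy_path", 1), ("concurrent_checkout", 5),
                 ("payment_gateway_failover", 8)],
    "tester_b": [("button_colour", 1), ("tooltip_text", 1),
                 ("menu_order", 1), ("font_size", 1)],
}

# "Test cases executed times test case complexity": weight each executed
# case by its complexity instead of counting cases or bugs found.
for tester, cases in executed.items():
    score = sum(weight for _, weight in cases)
    print(tester, score)  # tester_a 14, tester_b 4
```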

Gort the Robot
  • 14,733
  • 4
  • 51
  • 60
  • 3
    *The job of the tester is to find as many of the bugs that exist that they can, not to find the most bugs.* If there is intended to be a big difference between these statements of the testing goals, it is lost on me. – Atsby May 28 '15 at 03:25
  • 6
    Because if tester A finds 5 of the 10 bugs in a high quality component and tester B finds 58 of the 263 bugs in a low quality component, then tester A is the better tester. – Gort the Robot May 28 '15 at 03:48
  • 6
    @Atsby if a single broken behavior manifests itself in 10 different places, then 1 bug report about the actual broken thing is far better than 8 separate bug reports that describe 8 of the 10 different symptoms. – Peteris May 28 '15 at 08:02
  • 8
    @Peteris (and Steven) These are both interesting points, *but they are not effectively communicated by Steven's quoted statement*. – Atsby May 28 '15 at 08:14
  • @Atsby In the sentence you quote, the first clause is a relative statement (find the largest fraction of bugs), and the second is absolute (find the largest number of bugs). It's the difference between saying _fill this bucket 90%_ and _fill this bucket with 1/2 gallon_ when the bucket holds 1 gallon. – dodgethesteamroller May 28 '15 at 14:33
  • Tester A isn't the "better tester". If there's only 10 bugs then they have all the time in the world to catch them. If there's 263 bugs then I would be surprised if QA can catch even 1/3 of them, much less have time to write up all of them, and the devs should be on the hook because it's a waste of time to test such a mess. Devs shouldn't be passing the responsibility to QA for obviously untenable code. Also, it's impossible to know a priori how many bugs exist in a piece of code, so such percentages are impossible to determine. – éclairevoyant Dec 12 '22 at 15:01
16

Based on my personal experiences, this is not a good thing. It almost always leads to testers filing bugs that are duplicates, ridiculous, or completely invalid. You'll typically see a lot of these appearing suddenly at the end of a month/quarter as testers rush to meet quotas. About the only thing worse than this is when you also penalize developers based on the number of bugs found in their code. Your test and development teams are working against each other at that point, and one can't succeed without making the other look bad.

You need to keep your focus on the user here. A user has no idea how many bugs were filed during testing, all they see is the one that got through. Users ultimately don't care if you file 20 bug reports or 20,000, as long as the software works when they get it. A better metric for evaluating testers would be the number of bugs that were reported by users but that should have reasonably been caught by testers.

This is a lot harder to keep track of, though. It's fairly easy to run a database query to see how many bug reports were filed by a specific person, which I suspect is the main reason why the "bugs filed" metric is used by so many people.
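To make the contrast concrete, here is a small sketch of both metrics over a hypothetical set of defect records; none of the field names come from a real bug tracker, they just show the bookkeeping involved:

```python
# Invented defect records: who filed each one, where it surfaced, and
# whether the existing test plan covered the scenario.
defects = [
    {"filed_by": "tester_a", "found_in": "internal_qa"},
    {"filed_by": "tester_a", "found_in": "internal_qa"},
    {"filed_by": "customer", "found_in": "production",
     "covered_by_test_plan": True},   # should reasonably have been caught
    {"filed_by": "customer", "found_in": "production",
     "covered_by_test_plan": False},  # obscure; hard to blame testing for it
]

# The easy metric: how many reports each person filed.
filed_per_person = {}
for d in defects:
    filed_per_person[d["filed_by"]] = filed_per_person.get(d["filed_by"], 0) + 1
print(filed_per_person)  # {'tester_a': 2, 'customer': 2}

# The harder, more useful metric: defects users hit that testing should
# reasonably have caught.
escaped = [d for d in defects
           if d["found_in"] == "production" and d.get("covered_by_test_plan")]
print(len(escaped))  # 1
```

The "escaped but covered" count is the one worth watching, even though it takes more bookkeeping than the simple per-person query.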

bta
  • 1,003
  • 6
  • 12
  • 1
    +1, but the only problem with your better metric is that it creates an incentive not to improve the user bug reporting system... The idea is right, but maybe it should be a more general 'bugs found outside of the official testing process'. – user56reinstatemonica8 May 29 '15 at 09:54
  • 1
    @user568458 - I was assuming that the organization in question had different teams for internal QA and for customer-facing support, and that this question only dealt with internal QA. If both are the same team, then you will indeed have conflicts of interest (whether using my method or not). – bta May 29 '15 at 19:18
6

There is nothing wrong with making a game out of finding bugs. You have found a way to motivate people. This is good. It's also revealed a failure to communicate priorities. Ending the contest would be a waste. You need to correct the priorities.

Few real games have a simple scoring system. Why should the bug hunt?

Rather than score the game simply by the number of bugs, you need to provide a measure of bug report quality. Then the contest is less about the number of bugs and more like a fishing contest: everyone will be looking for the big bug that earns a high priority score. Make the quality of the bug report part of the score, and have the developers provide the testers with feedback on the quality of each report.

Fine tuning game balance is not a simple task so be prepared to spend some time getting this right. It should communicate your goals clearly and it should be fun. It'll also be something you can adjust as business needs change.

candied_orange
  • 102,279
  • 24
  • 197
  • 315
5

Finding bugs is their job. As long as they aren't making things less efficient (for instance, by opening a bug for each of 10 typos instead of one to cover several of them), this is encouraging them to do exactly what they're supposed to be doing, so I can't see much of a downside.

mootinator
  • 1,280
  • 10
  • 18
  • Couldn't agree more with Moot. **Of course** people could do something stupid (file 100s of typos, etc) -- but "people can do something stupid" when following any scheme at all. – Fattie May 28 '15 at 03:22
1

This is an expansion on @CandiedOrange's answer.

To get started on shifting the attention to more useful objectives, consider something very informal and unofficial. For example, the developers could buy some small tokens and trophies.

Each day that at least one significant bug was reported, leave a "Bug of the Day" token on the tester's desk. Once a week, hold a ceremony with a procession of developers delivering a bigger and better "Bug of the Week" token or trophy. Make the "Bug of the Month" trophy delivery even more dramatic, perhaps with cake. Each token or trophy should be accompanied by a citation saying why the developers thought it was a good thing that bug was found in testing. Copies of the citations should be put somewhere where the testers can all read them.

The hope is that the testers would switch their attention from finding the most bugs to collecting the most trophies and tokens. Their best strategy for doing that would be to read the citations and think about what approaches to testing are likely to bring out bugs the developers will consider important.

Simply ignore unimportant bug reports. Since it would all be very unofficial and informal, it could be shut down or changed at any time.

  • I'd have to agree. One thing: don't make this about getting approval from management. To make it feel like a game it's critical that the testers feel like they understand the rules themselves. If the login system is the high-priority concern, let them know up front and turn them loose on it. If high-traffic use case defects are the priority rather than obscure corner cases, then make that clear and explain how it's scored. Simply having clear priorities will make it fun and get people fishing in the right fishing hole. – candied_orange May 30 '15 at 02:37
1

Is this good for the project?

No. You have observed yourself that it results in low-quality reports that are not targeted at required functionality, and that, to compound the problem, the testers end up scrambling to complete the work that they are actually "supposed" to be doing.

If not, how can I (as a software developer) try to change the thinking and attitudes of the team of testers?

Raise the issue with your Project Manager. They should consider this sort of thing to be part of their job. If your PM is unwilling or unable to deal with it, you are sort of stuck developing your own coping strategies (which would be a different question).

David
  • 245
  • 1
  • 14
-1

I think that if it goes on like this (or the way it already is), you won't necessarily get lower quality, although I think it will lower the ratio of quality to quantity. Whether that is a bad thing depends on whether

reporting bugs about screen enhancements, usability, or stupid bugs.

is something you really don't want. If this is clear with the testers, I would just tell them not to report the things you don't want reported, and be explicit about it. Do it the next time one of those reports shows up.

The reason they have a competition is probably to have fun while working, so they are probably not intending to do bad work (if this is even considered bad work).

Loko
  • 197
  • 1
  • 9
  • 1
    I absolutely do want to know about usability issues. We refer to them as "bugs in the spec". – RubberDuck May 29 '15 at 11:31
  • 1
    @RubberDuck Well, if this has been made 100% clear with the team, then there is reason to tell them, letting them know you don't like what they are doing and making sure they know why. So warn them. If this has not been talked through with the team specifically, I don't think you can actually get mad at them; just give an example of one of the reports you disapprove of and let them know you don't want it like that. – Loko May 29 '15 at 11:35