Is seniority/paygrade an important factor for effective QA members?

As a member of our company's QA team, I frequently get entirely unenthusiastic responses from developers to test results in our agile, web-based software-as-a-service shop. Most of our testing is manual, since automated testing doesn't really make sense for us right now, and developers are usually reluctant to listen to any change suggestions beyond those that prevent JavaScript/500 errors. I understand that fixes/changes require work, and our developers are rarely short of work to do, but I don't think developers respect QA's input.

Unfortunately, our product owners are absent: acceptance testing doesn't exist, and user stories are usually only one sentence long and don't give the developer much to go on. There is no feedback mechanism to development other than from customers x weeks later, who aren't designers/developers either, of course, and whose suggestions are all over the map.

I am, at worst, technically competent: I am capable of simple development on our LAMP stack and feel confident that developers respect my knowledge. However, I have for the most part given up on giving feedback beyond that which prevents critical errors--those that affect data integrity or bottom-line functionality.

This has raised the question of whether seniority or pay grade is a significant factor in how seriously developers value QA's input. In our case, where we don't do automated testing and QA members likely don't have as much technical expertise, it makes some sense that we earn less than developers (we make 60-70% of what they do, depending on time in grade). I don't believe the opinion of the team member with the biggest paycheck is automatically the most important; however, I can imagine how difficult it is to take feedback from team members who have a year or two less experience, are not as technically knowledgeable, and make noticeably less. In the end the best idea should win, but unfortunately that might only be decided after the enhancement has been in production for several months and users love it or hate it.

premiumFrye
  • Why would (or should) salary or seniority have any bearing whatsoever? – Robert Harvey Sep 04 '13 at 22:56
  • You do realize that most companies don't disclose employee salaries, and thus the developers don't know what someone in QA is making, right? Similarly, how do you measure seniority here? Just because someone has lasted for 10 years as a QA member doesn't mean they have actually honed their skills; it could be that they are doing the same basic stuff year after year. The same goes for developers, so time in the field isn't a good metric here. – JB King Sep 04 '13 at 23:00
  • @JBKing - I'm pretty sure that the developers will simply assume that QA people are on a lower paygrade than them. Developers, after all, being the pinnacle of human achievement. :-) – Carson63000 Sep 05 '13 at 00:13
  • @Carson63000: I know you're making a joke, but nevertheless, your comment deserves a response: Most people have some idea of what the median salary is for the various positions that surround them, and if they don't, it's not hard to look up. "Developers make more than QA with equivalent experience/seniority" isn't crude arrogance, it's just a statement of fact, much like declaring that development managers make more than developers. – Aaronaught Sep 05 '13 at 01:27
  • To the OP: In my experience it's got nothing to do with seniority and everything to do with persuasion. We've got a great QA team that I have the utmost respect for, most of whom have been around a lot longer than I have, but we didn't start getting anything resembling proper acceptance criteria until I (a developer) started pushing for it and one of the QA analysts did a proof of concept with Cucumber and Selenium. Your entire team has to want it, or, failing that, you have to be able to influence the decision-makers. *Maybe* seniority influences them... or maybe it's something else entirely. – Aaronaught Sep 05 '13 at 01:30
  • @RobertHarvey Salary/seniority are *usually* related to experience/skill. I don't think that's a stretch to say. – premiumFrye Sep 05 '13 at 01:44
  • @Aaronaught I very much agree. Unfortunately, the general attitude (among developers and management) that the changes I suggest create more work isn't very conducive to pushing new processes and technologies. Acceptance testing--or even deciding what acceptance criteria are--would create more work for our product owners. After all, what we've done before works, so it ought to continue working, right? I'm the most junior member of our QA team, which isn't helping my cause. – premiumFrye Sep 05 '13 at 01:50
  • Don't try to argue the fact that what you're suggesting will create more work. It *will* create more work. What it will also do is reduce post-production defects, which are approximately 3-5 times more expensive to fix once they're in production. 80% of development time is spent on debugging; if your test overhead is 30% (a typical number) then the 1 day you spend writing automated tests could eliminate 1 full week of firefighting. It's demonstrably true - in our organization it's almost always the products *without* automated tests that suck up tons of developer time with production issues. – Aaronaught Sep 05 '13 at 02:05
  • Can you elaborate on what "change suggestions" QA is making? Either the app behaves as 'required' or it doesn't. Poor requirements probably open the door to a lot of suggestions. – JeffO Sep 05 '13 at 02:26
  • @user1403844: Yes, but I don't look at my coworker and think "how much does he make" before responding to him or evaluating my approach to the discussion. I may take his *actual* experience into account. But your corporate issues go much deeper than that anyway. – Robert Harvey Sep 05 '13 at 03:19
  • @RobertHarvey As I mentioned, since I'm still very much the junior QA member (a graduated intern, essentially), my actual experience honestly isn't very deep, which appears to make it easier to write off. As I mentioned in a previous reply, salary/seniority are *usually* related to experience/skill, and as I mention in the original post, I don't agree with the belief that the highest-paid person in the room has the best ideas. – premiumFrye Sep 05 '13 at 05:10
  • @JeffO Usability, primarily. As mentioned, there aren't many requirements (acceptance criteria), so it's lots of shooting from the hip. Suggestions might include something to the effect of 'this information has been entered elsewhere, and so should be automatically populated/calculated', or 'reduce the number of steps required by a user to accomplish this goal by simplifying x and y'. To which a developer might respond that it wasn't part of the user story specifications--whereas of course basically nothing was, so I don't like that response. – premiumFrye Sep 05 '13 at 05:13
  • possible duplicate of [QA - Developer communication](http://programmers.stackexchange.com/questions/164799/qa-developer-communication). See also: [How much time/money should be spent on software QA/QC?](http://programmers.stackexchange.com/questions/80587/how-much-time-money-should-be-spent-on-software-qa-qc) – gnat Sep 05 '13 at 09:23

4 Answers


It sounds to me like you have a dysfunctional team with a cowboy culture, and you're trying to figure out the root cause. You are proposing a hypothesis that maybe developers don't respect test because of some sort of implicit hierarchy, length of service, or some other factor, but you're not necessarily presenting evidence for it; you're essentially asking, "could this be what's wrong?"

In fact, it sounds like many things are wrong with the organization, and any issues driven by perceptions of status or power are merely symptomatic of poor leadership. You are not an agile shop if you are not practicing any of the enabling mechanisms of agile development. One-line stories are not agile; those are mere bullet points on wishlists. A story contains a business motivation, a description of the customer's interaction with the product, and a definition of when the story is done. If you don't have those three things, you don't have enough information to decide what should happen or how you will know you've done it right, so the story will never be "done". That's treading water, not making progress. Developers will never be short of work to do in such organizations, because they'll constantly be firefighting, aided only by tiny buckets of their own urine.

Some of the "definition of done" can be part of a general team agreement, but specific acceptance criteria for any story, even if terse, are essential.
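
To make that concrete, here is a minimal sketch, assuming a hypothetical one-line story ("a user can reset their password") in a LAMP-style shop. The `PasswordReset` class and its API are invented for illustration, with a toy in-memory implementation so the example is self-contained; PHPUnit is just one plausible harness.

```php
<?php
// Sketch only: a hypothetical one-line story expanded into checkable
// acceptance criteria. PasswordReset and requestReset() are invented
// names, not any real library's API.
use PHPUnit\Framework\TestCase;

class PasswordReset
{
    /** @param string[] $knownEmails */
    public function __construct(private array $knownEmails) {}

    public function requestReset(string $email): ?string
    {
        // Unknown addresses get no token, and the caller can't tell
        // whether the account exists (no information leak).
        if (!in_array($email, $this->knownEmails, true)) {
            return null;
        }
        return bin2hex(random_bytes(16)); // single-use token
    }
}

class PasswordResetTest extends TestCase
{
    // Criterion 1: a known address receives a reset token.
    public function testKnownEmailReceivesResetToken(): void
    {
        $service = new PasswordReset(['alice@example.com']);
        $this->assertNotEmpty($service->requestReset('alice@example.com'));
    }

    // Criterion 2: an unknown address yields no token.
    public function testUnknownEmailYieldsNoToken(): void
    {
        $service = new PasswordReset([]);
        $this->assertNull($service->requestReset('nobody@example.com'));
    }
}
```

Each test method is one acceptance criterion; a build either passes it or it doesn't, so "done" stops being a matter of opinion.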

There are very few cases in which "automated testing doesn't really make sense for us right now". It may be that the test team isn't the right organizational locus to deliver automated testing, especially early on, but it always makes sense to have automated testing. While it's OK in my book for developers to do a little bit of exploratory coding without formal automated tests (I'm not a TDD or even BDD purist), it seems horrifying to me as a developer to consider releasing code to a test organization with no developer-written automated tests. Unit tests and BDD tests written by developers, with scenarios preferably written by product owners, are essential parts of agile delivery.

Figuring out the best use of a test organization in an agile team is a tricky problem for which there is no single formula for success. In an organization which has no definition of done, it will be very difficult to demonstrate value, because there's no way of knowing whether the test team has contributed to "done." I've worked in old-school waterfall teams as well as agile teams with 1) no distinct test organization, 2) moderately integrated test teams, 3) partially integrated test teams with separate stories and work product, and 4) gated release models, where some QA involvement happened alongside ordinary development but there was a distinct "test pass" due to some legacy or regulatory reason.

The "right" model for the test team will actually depend on the level of technical sophistication of the test team members. I think having test team members with moderate or better technical sophistication pair with a developer while writing code, to suggest cases for automation, can be a great model. But a test team can be reasonably effective in validating that the stories have measurable acceptance criteria, doing some exploratory testing as developers check in code, and trying to augment developer unit testing with integration scenarios and fleshing out special cases. It's even sort of ok to have a throw-the-build-over-the wall approach in some circumstances, as long as there's a way of converting stories into test cases and there's some sort of feedback loop with the Product Owners and Developers.

But you won't really get there without active buy-in from your management and product owners on what the organizational priorities are and what test's role should be. I doubt there have been any serious conversations in your team beyond "oh, I've worked on other software projects, and that means I know we need to have some sort of test effort. Let's hire a test team." Most average and some above-average developers will tolerate organizational inertia that doesn't demand they engage with the test team. For real progress to be made, some management- or consensus-driven initiative to adopt "better" development practices needs to happen.

As a developer, and as a former STE, STE Lead, and SDET, I have nearly zero interest in how senior the test team members are or how much they are paid. What I care about is how they can help me ship better software. I personally like leveraging the skills of people who can work through tons of scenarios that I can't meaningfully explore given the team's desired organizational velocity; I'd be happy to walk a test team member through how to start from existing unit tests or scenarios and build better coverage, or to read test plans and provide feedback. But I might settle for focusing on "just good enough" coverage on my end and simply hope that the testers catch what I miss, if that's all the organization appears to value.

Somehow, you are going to need to start selling your management, or your most sympathetic developers, on taking a more, dare I say it, agile approach to development and quality. I can't give you a formula for this, because I've not been that great at driving such change in organizations resistant to it, but the strongest arguments you can make are a business-value case (talking to the business side) or perhaps a craftsmanship/continuous-improvement case on the technical side.

JasonTrue
  • *Developers will never be short of work to do in such organizations, because they'll constantly be firefighting aided only by tiny buckets of their own urine* - my personal firefighting was just aided by a spray of coffee out of my nose as I read that line. – Carson63000 Sep 05 '13 at 00:14

As a member of our company's QA team, I frequently get entirely unenthusiastic responses from developers to test results in our agile, web-based software-as-a-service shop.

That's because:

Our product owners are absent: acceptance testing doesn't exist, and user stories are usually only one sentence long and don't give the developer much to go on. There is no feedback mechanism to development other than from customers x weeks later, who aren't designers/developers either, of course, and whose suggestions are all over the map.

QA feedback only has teeth when you have clear, measurable, actionable requirements that drive both the development and testing efforts, and developers that care about producing a quality product which meets those requirements. You must have requirements that allow you to declare success. You can't constantly move the goal posts and expect developers to be enthusiastic about scoring touchdowns.
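
As a hedged illustration (nothing here is from the question's codebase): "search should be fast" is not measurable, but "a catalog search for a known term returns results within 2 seconds" is, because a script can check it. The `searchCatalog()` function below is a hypothetical stand-in for the application call under test.

```php
<?php
// Sketch: turning a vague wish into a measurable requirement.
// searchCatalog() is a placeholder for the real application call.
function searchCatalog(string $term): array
{
    usleep(50000); // simulate work; replace with the real search
    return ['widget assembly', 'widget bracket'];
}

$start   = microtime(true);
$results = searchCatalog('widget');
$elapsed = microtime(true) - $start;

// The requirement is now pass/fail, not a matter of taste.
assert($elapsed < 2.0, 'Search must complete within 2 seconds');
assert(count($results) > 0, 'A known term must yield results');

printf("Search took %.3fs, %d result(s)\n", $elapsed, count($results));
```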

See Also
Characteristics of Good Requirements

Robert Harvey

My instinct is that the issue here is really a combination of your expectations and the respect shown to you by the team management.

QA's main goals are to document product deviations from the spec, usability issues, and performance issues. The key word is *document*.

The product manager role is responsible for deciding priorities on where developers spend their effort, and at the current moment they may have decided that the priority is to implement features and address defects later (the infinite-bug strategy). Whilst this is known to be a poor long-term strategy, there are often situations where, short term, it is the politically expedient thing to do (e.g., show progress to a customer at next week's meeting).

As long as you are documenting the defects, you are doing your job, and the product manager has to justify the decision to ship with known defects.

That said, the secondary issue is the respect shown to you by the team managers. If they see you as a problem because you keep finding bugs, when all they want to do is get the product out and earning money, then the solution is the same. Keep identifying the bugs, but accept that the decisions to ship or to allocate development resources (and the blame for them) belong to the managers; your role is to provide them with accurate and timely information on which to base those decisions.

If my instincts are right, I'd start looking for a more successful organisation to work for, because this project is likely to end badly.

Michael Shaw
  • By taking power out of QA's hands, 'Quality Assurance' is a misnomer. The position you're describing is more like a beta tester who just documents errors. A QA member whose opinion was actually valued wouldn't constantly need to be validated and micromanaged by the project manager--a defect is a defect. – premiumFrye Sep 05 '13 at 13:48
  • My point is not whether management should be deciding if a defect is a defect, but that project management is responsible for setting priorities and for deciding between releasing a buggy product and spending more time on bug fixing. – Michael Shaw Sep 05 '13 at 14:22
  • @user1403844: Yes, this depends on the role of QA. In some organizations QA indeed has authority over the process, and can even veto releases. In others the authority rests with the PM or even the business owner (small shops). This answer assumes the latter. – sleske Sep 10 '13 at 07:02

Is seniority/paygrade an important factor for effective QA members?

In short: no, it is not. An effective QA member is one who provides quality assurance of deliverables (for example, module functionality) by verifying them against the requirements. They are valuable team members who spot requirement gaps and incorrect implementations of business rules in the application and in the database.

As developers, we do respect and value a well-versed QA member: one who has a good understanding of the software development process, is open to learning new things, possesses the scripting skills to check the correctness of data in the database, and brings good humor and a good attitude to the team.
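
For instance, here is a minimal sketch (the table names, credentials, and schema are all hypothetical, not from any real system) of the kind of script such a QA member might run on a LAMP stack to catch a data-integrity defect, such as orders pointing at users that no longer exist:

```php
<?php
// Sketch: a read-only data-integrity spot check. The DSN, the
// credentials, and the orders/users schema are all invented.
$pdo = new PDO(
    'mysql:host=localhost;dbname=app',
    'qa_readonly',
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// Find orders whose user_id no longer matches any user.
$orphans = $pdo->query(
    'SELECT o.id
       FROM orders o
       LEFT JOIN users u ON u.id = o.user_id
      WHERE u.id IS NULL'
)->fetchAll(PDO::FETCH_COLUMN);

echo $orphans
    ? 'Orphaned orders: ' . implode(', ', $orphans) . "\n"
    : "No orphaned orders found.\n";
```

Running checks like this from a read-only account keeps them safe to encourage.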

Yusubov