
I have a question about working with independent testers doing manual testing (not about automated unit and regression testing).

In a git-flow process I do my work on a feature branch until I'm confident that it works and I haven't introduced bugs. I merge from the develop branch to my feature, late and often, to ensure I haven't broken anything in merges with other recent work. Sometimes I'll even do it again during the testing phase, so that the tester can work with the most recent snapshot. Still, there's always a small window of time after testing where new work can (and in high-traffic times does) come in from other features.
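For concreteness, that update step looks roughly like the sketch below (the branch and remote names are stand-ins, not our actual setup):

    # Bring the latest integrated work into the feature branch,
    # so conflicts surface here rather than at the final merge
    git checkout feature/my-feature
    git fetch origin
    git merge origin/develop   # resolve any conflicts on the feature branch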

This means that the merge back to the develop/release branch is sometimes not trivial, despite our treating it as if it were. (Sometimes it's even iterative: by the time I'm done making sure I've correctly integrated one feature that slipped in, running regression tests, checking the code, and doing some manual testing, yet another one has come in.)
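The merge-back itself is nothing exotic; it's roughly the following, and the iteration happens when develop moves again between the test run and the push (the test-script name is illustrative):

    git checkout develop
    git pull origin develop            # pick up whatever slipped in since testing
    git merge --no-ff feature/my-feature
    ./run_regression_tests.sh          # assumed regression-suite entry point
    git push origin develop            # rejected if develop moved again: repeat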

My question is: is there a workflow for developers and testers where you don't lose the safety net of testers for that last step (but hopefully also don't need to ask again and again for already-tested work to be re-tested)? What are industry best practices here? If we could ensure that branches won't interfere with one another, we'd be fine, but in practice we sometimes get conflicts.

I'll add that I'm sure we don't want to do our main testing on the develop/release branch. It's been a huge win and stress-reducer since we switched to flow: we can easily put off releasing work that's created a problem or raised a question during testing. In our pre-flow practice we wound up with emergencies near a release, where a problem had to be dealt with urgently before releasing, because the work of a non-critical feature was already merged into the main branch for testing.

Michael Durrant
  • Why do you have to ask testers to run tests again? Can't you run tests by yourself? Can't tests be run automatically on a CI server? – scriptin Apr 16 '15 at 15:56
  • Again, I'm explicitly not talking about automated regression tests, which absolutely are run at every stage on all of our feature and release branches (and which are the responsibility of developers, not testers). I'm talking about thorough hands-on testing by dedicated testers. – Joshua Goldberg Apr 16 '15 at 16:15
  • Make both the title and the details explicitly say "manual" testing to save us all some brain cycles ;) – Michael Durrant Jun 10 '15 at 10:05
  • FYI (I was replying to the deleted answer): I work for a big shop and we have a test team who do a lot of manual testing. They're really good at it, finding all manner of awkward bugs that occur around the module under test. It can also be difficult for them to automate some of the testing, as it involves video streams as well as traditional data updates. So while automated testing is a good thing, it's not the only thing. – gbjbaanb Jun 10 '15 at 10:32
  • Related: https://softwareengineering.stackexchange.com/questions/344498/where-should-qa-team-do-the-testing-in-gitflow-branching-model – Ahmed Nabil Feb 16 '21 at 13:55

2 Answers

1

So your problem ultimately has nothing to do with testing, but with the difficulty of merging your feature branch back to develop?

I'd ask: why do you not want to run your tests off the develop branch? Are you testing the individual feature you're developing, or the integrated whole of your features and everyone else's? I'd say feature-branch testing is a matter for the developer; only when you think it's complete do you merge to develop, and then build a package for the test team from there (personally I'd rename the 'develop' branch to 'integration').

This way the test team always has a current version of the product and can test completed features, feeding bugs back to the developer, who fixes and re-merges until the test team finds no bugs in the feature; then the feature is closed. Only when the test team declares the product tested can it be merged from develop to master.
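As a rough sketch of that loop (branch and tag names are illustrative):

    # Developer: merge the completed feature into develop/integration
    git checkout develop
    git merge --no-ff feature/foo

    # Build a package for the test team from develop, e.g. by tagging the build
    git tag test-build-17

    # ...test team files bugs; fix on feature/foo and merge to develop again...

    # Only once the test team signs off does develop go to master
    git checkout master
    git merge --no-ff develop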

Typically you'll want to perform QA tests on the releases as well, but if master is simply a snapshot copy of develop for release purposes then you can skip this.
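If master really is just a snapshot of a tested develop, the release step can be a fast-forward plus a tag, along these lines (the version number is made up):

    git checkout master
    git merge --ff-only develop    # fails loudly if master has somehow diverged
    git tag -a v1.4 -m "Release 1.4"
    git push origin master --tags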

gbjbaanb
  • The last paragraph of my question gets at why we don't do our core testing on develop. It allows the testers to act as gatekeepers, so that a feature whose problems are discovered in testing doesn't create an emergency when a release goes out. (A feature can be abandoned if it's not going to work, whether it's an implementation problem, a usability issue, etc.) I pass the feature to testers only after I've fully tested it myself and convinced myself it's ready. – Joshua Goldberg Jun 10 '15 at 15:20
  • By the way, we do have a round of integrated testing at release time. (Both devs and testers drop other work and go through complex use cases.) Also in a sense, I think any feature branch already is a "current version of the product" -- the only things missing from it are features that haven't passed testing yet. – Joshua Goldberg Jun 10 '15 at 15:22
  • @JoshuaGoldberg but surely you only merge back to master from develop when the testers have passed their tests, so until they do, develop is effectively one big feature branch. Using the flow model described in the link, develop gets merged to master. So you could do your integration testing on develop and ignore any feature-branch-only testing. IMHO a test team shouldn't be testing individual feature branches; that's what the dev (or dev team) should do. – gbjbaanb Jun 10 '15 at 15:30
  • Wouldn't that mean giving up the gatekeeper benefit I described (that, again, we *really* love)? By the time a tester makes a catch that means the feature either won't be ready for the next release or shouldn't go in at all in its current form, it's too late to extract it from develop/integration without a lot of trouble. – Joshua Goldberg Jun 10 '15 at 19:37
  • @JoshuaGoldberg you get the gatekeeping at the overall level of the package, not its pieces. It seems you place a lot of work on the testers to test each feature individually, only to have them test the whole lot again. If this is the case, I would reassign one tester to work with the dev team per feature (in an agile way!) and let the rest of the testers test the develop branch. In some orgs the test team gets a new build every night, built from the develop branch and including all the latest (working) iterations from feature branches (e.g. regular merging to develop). – gbjbaanb Jun 11 '15 at 07:32
-2

If you are commonly having difficult merges, there is likely something wrong with the way you are using git.

It sounds like you have branches that don't get deleted within a working day of being created. That is generally, if not universally, considered a bad idea. Most of the time, git branches are just an extra (optional) step in the commit-staging process that gets you a nice clean history. Generally, avoid using them for storing product variants, features, tickets, or anything else that has meaning outside that history.
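In that model a branch is only a short-lived staging area, roughly like this (the branch names are made up):

    # A branch exists just long enough to shape a clean series of commits
    git checkout -b tidy-up-parser       # hypothetical same-day branch
    # ...commit, squash, and reorder as needed...
    git checkout develop
    git merge tidy-up-parser             # fast-forwards if develop hasn't moved
    git branch -d tidy-up-parser         # deleted within the working day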

And of course, if you fix that problem, the manual testing issue goes away, as the testers will naturally be testing the right version in the first place (i.e. the tag set up by your CI system whenever all automated tests pass).
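A sketch of what that CI step might look like, assuming a hypothetical run_all_tests.sh entry point:

    #!/bin/sh
    set -e                       # abort on the first failing command
    ./run_all_tests.sh           # the full automated suite (assumed name)
    # Tag the exact commit that passed so testers always pick a known-good build
    git tag "tested-$(git rev-parse --short HEAD)"
    git push origin --tags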

soru