I have a question about working with independent testers doing manual testing (not about automated unit and regression testing).
In a flow process I do my work on a feature branch until I'm confident that it works and that I haven't introduced bugs. I merge from the develop branch into my feature, late and often, to make sure I haven't broken anything against other recent work. Sometimes I'll even do it again during the testing phase, so that the tester can work with the most recent snapshot. Still, there's always a small window after testing where new work can -- and in high-traffic times does -- come in from other features.
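For concreteness, the routine is roughly this (branch names are illustrative, and the commands are a sketch of what we do rather than a prescription):

    # bring the latest integrated work into the feature branch,
    # late and often, so the eventual merge back stays small
    git checkout feature/my-feature
    git fetch origin
    git merge origin/develop
    # resolve any conflicts, re-run the regression tests, then
    # hand the branch to the tester for manual testing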
This means that the merge back to the develop/release branch is sometimes not trivial, despite our treating it as though it were. (Sometimes it's even iterative: by the time I've finished integrating one feature that slipped in, running the regression tests, checking the code, and doing some manual testing, yet another one has come in.)
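That merge-back step looks something like this (again a sketch with illustrative names; --no-ff is just our habit of keeping each feature visible as a merge commit):

    # after tester sign-off, integrate the feature
    git checkout develop
    git pull origin develop        # develop may have moved again by now
    git merge --no-ff feature/my-feature
    # if another feature slipped in: resolve conflicts, re-run the
    # regression tests, re-check the code -- and possibly repeat
    git push origin develop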
My question is: is there a workflow for developers and testers that doesn't lose the safety net of testing for that last step (but also, hopefully, doesn't require asking again and again to re-test already-tested work)? What are the industry best practices here? If we could ensure that branches never interfere with one another we'd be fine, but in practice we do get conflicts sometimes.
I'll add that I'm sure we don't want to do our main testing on the develop/release branch itself. Keeping testing off it has been a huge win and stress-reducer since we switched to flow: we can easily put off releasing work that's created a problem or raised a question during testing. In our pre-flow practice, we wound up with emergencies near a release, where a problem found in testing had to be dealt with urgently before we could release, because the work of a non-critical feature was already merged into the main branch for testing.