Overall, the process seems reasonable, but I don't think it will scale with either product complexity or team size.
The idea of creating a feature branch from the `develop` branch seems sound. It is consistent with the gitflow branching model, and there are a few related models that also feature-branch off of `develop` but vary in how they use `master` and release branches. I've had good success with this type of approach overall, depending on how release and deployment are handled.
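For concreteness, the basic flow looks something like this (the branch name is just a placeholder):

```
# Start the feature from the integration branch
git checkout develop
git pull origin develop
git checkout -b feature/my-feature    # placeholder branch name

# ...commit work on the feature branch, pushing as you go...
git push -u origin feature/my-feature

# When the work is done, open a pull request targeting develop;
# after review (and testing), merge it back, e.g. with --no-ff so the
# feature's commits stay grouped in history
```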
I'm not sure that the feature branch is the only place you want to be testing. I wrote about where QA should perform testing in an answer to a similar question. Generally speaking, testing should happen twice: once in the feature branch to confirm that the added functionality works as intended, and once in the integration branch (`develop`, in this case) to ensure that the system remains stable. The open questions are who does the testing in each place and what testing methods are used.
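A rough sketch of those two test points, assuming a hypothetical `./run_tests.sh` wrapper around whatever suite you actually run:

```
# 1. On the feature branch, before the pull request is merged:
#    verify the new functionality works as intended
git checkout feature/my-feature
./run_tests.sh                      # hypothetical test-suite wrapper

# 2. On develop, after the merge:
#    verify the integrated system is still stable
git checkout develop
git pull origin develop
./run_tests.sh
```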
I can see some cases where you may need to branch off of a feature branch to continue working, but often this is a symptom of something else, such as architectural debt or not decomposing the work in the best way. This should be a rare event, not a regular occurrence.
The biggest concern I have is the lack of automated testing anywhere in this process. By "automated testing", I mean any tests that can run without human intervention, from unit tests up through functional and acceptance tests. Manual testing is costly, and it only gets more costly as the system grows in complexity. My preferred approach would be to start introducing test automation, and to include at least happy-path and regression testing as part of your pull requests and code reviews. The goal is to find defects earlier and to reduce the burden on manual QA testing.
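As a minimal sketch of what an automated pull-request gate could run (the build and test scripts here are hypothetical placeholders for your actual tooling):

```
#!/usr/bin/env sh
# Hypothetical CI step executed on every pull request into develop
set -e                               # stop at the first failure

./build.sh                           # placeholder build step
./run_tests.sh --suite unit          # fast, fine-grained feedback first
./run_tests.sh --suite happy-path    # core end-to-end scenarios
./run_tests.sh --suite regression    # guard against reintroduced defects
```

Wiring something like this into the pull request check means reviewers only approve changes that have already passed the automated suites, which is what pushes defect detection earlier and leaves manual QA to focus on what the pipeline can't cover.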