
We currently drive changes to our process through the following mechanisms:

  • Weekly wrap-up meeting
  • Project postmortem

We discuss what is and isn't working. I use these settings to introduce new practices and to eliminate ones that aren't working. We usually make only one change at a time and try it for two weeks to a month.

To validate a change we look at:

  • our velocity and its standard deviation (work accomplished week over week; see the sketch after this list)
  • dialectic (discussion and debate within the team)
  • our opinion/perception of the practice's effect and effectiveness
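
For the velocity check, a minimal sketch of the "mean and standard deviation, week over week" comparison (in Python, with made-up numbers, since the actual unit of work tracked isn't specified):

```python
from statistics import mean, stdev

# Hypothetical weekly velocity figures (e.g. story points completed).
before = [21, 18, 24, 20, 19, 23]  # baseline weeks
after = [25, 22, 27, 24]           # weeks under the new practice

for label, weeks in (("before", before), ("after", after)):
    print(f"{label}: mean={mean(weeks):.1f}, std dev={stdev(weeks):.1f}")

# A higher mean with a similar or lower standard deviation is (weak)
# evidence the practice helped; a jump in deviation alone may just mean
# the change added churn.
```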

Those practices that seem to work for us we keep; everything else is removed. This has worked pretty well for us, but I'm always interested in better ways to do this.

How often/when do you review your development process?

How do you validate that changes to your process are effective?

dietbuddha

  • You are doing it right. It is also known as Retrospectives. There is a wonderful book on the subject. – May 24 '11 at 17:48
  • It's a great book. It's called [Agile Retrospectives](http://pragprog.com/titles/dlret/agile-retrospectives) – Rein Henrichs May 25 '11 at 03:35

3 Answers


How often/when do you review your development process?

All the time.

How do you validate that changes to your process are effective?

By monitoring and evaluating everything all the time.

This may sound unbelievable, but where I work we do not stop reviewing. Apart from the formal release post-mortem regarding planned and realised issues (and why some were postponed or brought forward), and the usual HRM "performance reviews", we do not have a formal schedule for evaluations.

We evaluate and learn. All the time. No schedule. Whenever we encounter something that we would rather not have happen again, we try to work out a way to prevent it from happening. Whenever something goes especially well, we try to figure out what made it go better than other times, so we can replicate that in the future.

It is informal and very ad hoc, but very to the point and very effective as well. For one, because any dissection of a situation is immediate, which means everything is still fresh for all involved. For another, because if you weren't involved in the situation and/or are not part of figuring out the solution, you don't have to waste time in scheduled meetings to which you may have nothing to contribute.

Personally, I like this continual (continuous?) attention to how we can improve our product, our processes and ourselves. You can't do this in any ol' team though. It requires:

  • A very clear idea of the priorities in all aspects of the product and its development.
  • A team of very open minded people.
  • An absence of egos/ego-tripping. Nobody is perfect, and prima donnas just get in everybody's way.
  • No code ownership by individuals. No one is the sole master of a piece of code; everybody can work on anything. Though expertise is taken into account, knowledge transfer is equally important.
  • An atmosphere where it is recognized that errors happen and everybody can make a mistake or misjudge the impact/required time for an issue. This does not mean we like errors or mistakes. We certainly don't like to see the same error by the same person more than once.
Marjan Venema
  • +1 We also do a lot of ad-hoc evaluation. However, I like to explicitly review the previous week/cycle as it shifts your view to the macro rather than the micro (where it usually is). – dietbuddha May 24 '11 at 20:52
  • @dietbuddha: I guess we switch continually between macro and micro. The interaction between development (developers/testers) and non-developers (support, speccers, consultants) is quite direct. One of our developers is 50% developer and coordinates the support team in the other 50%... These dual roles certainly help to keep a broad perspective. Of course our overall manager has weekly progress meetings with the coordinators of our development, qa, and support teams. Though they are focused on progress of a release's development, I am sure that processes and performance come up when needed. – Marjan Venema May 25 '11 at 06:03
  • @Martin: Yes, almost, we have got plenty of challenges left :-) However, I do indeed count myself very very lucky to be working where I am. It took me long enough to find them: approximately 21 years in software development to find the company that is an almost exact match to what I find important in a job/employer/team... :-) (btw it is not the company listed in my profile, that's my to-the-side business). – Marjan Venema May 25 '11 at 12:13

I like your answer, but here's another, for the sake of variety:

  • review 2 weeks and/or 3 cycles after the start of a process or practice - when your team has had just enough time to iron out the "we did this totally wrong" type of problems and is just starting to get a handle on it. "Start" can be a new phase (like waterfall phases), or the introduction of something new that should transcend phases - like a new continuous build system

  • review at critical mass - when you have a "statistically significant" amount of data to look at. I had to put it in quotes, because I don't really do statistical analysis here. But 3 iterations is too small. I mean somewhere between 10-20 repetitions of something, where you may have enough data to see some outliers or an average trend (a rough sketch follows this list). This can be really machine-relevant stuff like the time it takes to do a build, or it can be subjective stuff like the accuracy of individual estimates or the estimated time it took to fix a class of bugs.

  • review at completion - if the practice will be stopped, take a look and see whether it was good or bad, could have been better, or had any great side effects that you didn't anticipate.

  • review at staff change - whether you're growing or shrinking, staff changes are a good time to touch base and review. If it's a new person joining the team, maybe they have some great tricks you don't. If it's a team member leaving, capture that last knowledge. This may not necessarily be a team sport - it may be a manager/changing-staff thing - particularly if you think it's a cherished process that is up for serious critique.

  • review because the game changed - either the product or the environment in which you build it changed - time to assess and see if you need to do anything differently.
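
To make "critical mass" concrete, a rough sketch (Python, with invented build-time samples) of checking for an average trend and outliers once you have 10-20 repetitions:

```python
from statistics import mean, stdev

build_minutes = [12, 13, 11, 14, 12, 15, 13, 16, 17, 15, 18, 16]  # made-up data

if len(build_minutes) < 10:
    print("not enough data yet; keep collecting")
else:
    avg, sd = mean(build_minutes), stdev(build_minutes)
    # Flag anything more than two standard deviations from the mean.
    outliers = [m for m in build_minutes if abs(m - avg) > 2 * sd]
    # Compare the first half against the second half for a crude trend.
    half = len(build_minutes) // 2
    print(f"average = {avg:.1f} min, outliers = {outliers}")
    print(f"trend: {mean(build_minutes[:half]):.1f} -> {mean(build_minutes[half:]):.1f} min")
```

The same shape works for estimate accuracy or bug-fix times; the point is only to wait for enough samples before reading anything into the numbers.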

I can't say I do all those things all the time. Too much navel-gazing will drive you nutty. But these are all decent touchstones to be used with consideration.

bethlakshmi

I am going to play devil's advocate here, because I strongly believe in changing for good rather than changing for the sake of it. I also believe every team slowly moves towards an equilibrium - it can hang there for eternity.

Have you measured the costs/benefits of changing processes constantly? I know of a bunch of teams who have been working on an incremental development model with regular meet-ups for years, and it's perfect for them. Yes, new team members join in and old ones leave - but the process has stuck and worked wonders.

Another question I have is how you scale/adapt this for a busy team with tight deadlines. I can see this working in a small team with manageable deliverables - but have you tried it out at a larger team/company level?

  • You're attacking a straw man here. No one is suggesting "changing for the sake of it" or "changing processes constantly". We're talking about incremental, measured improvements to process of exactly the same kind you already find useful in software. – Rein Henrichs May 25 '11 at 03:40
  • No attacks. I just want to understand how these improvements work cost/benefit-wise. It is one thing to change stuff like interview patterns, which takes up about 10% of your time - but totally different trying to get me to follow different processes every two weeks. – Subu Sankara Subramanian May 25 '11 at 03:42
  • When I say "you're attacking a straw man", I am referring to the [logical fallacy](http://en.wikipedia.org/wiki/Straw_man). Once again, *no one is suggesting* "trying to get [you] to follow different processes every two weeks". That's the straw man to which I'm referring. It is a flaw in your argument. – Rein Henrichs May 25 '11 at 03:44
  • Valid point - still, I would like to know if there are costs associated with evaluating/changing processes frequently. – Subu Sankara Subramanian May 25 '11 at 03:53
  • Yes, certainly there are. Your straw men indicate to me that you are overestimating them. There are also costs involved with not improving your process. Also, processes are not static. They can deteriorate over time and require maintenance. – Rein Henrichs May 25 '11 at 03:55
  • I am not estimating anything :) I want to know how it works out for the OP and other folks who answered here in terms of costs. Have they tried measuring the efficacy of a fluid process setting? That's all. – Subu Sankara Subramanian May 25 '11 at 04:03
  • @Subu I can say that my teams tend over time towards more frequent and higher-value deliveries, lower defect rates, and higher team morale and confidence. I can also say that as a less experienced team lead who didn't perform proper retrospectives, these changes were less pronounced. Of course, that could also be due to other improvements in my team leadership. It's hard to isolate and quantify these changes in a complex system, but anecdotal evidence (mine and others') is overwhelmingly in favor of the efficacy of retrospectives. – Rein Henrichs May 25 '11 at 04:09
  • 1) Don't make too many process changes at once - at most two. 2) Make changes only to address issues or gain efficiencies, not just to change. 3) Measure the efficacy of a change after x weeks or months. 4) Busy teams are not an exception: you are already trying to remove roadblocks and inefficiencies, which should allow the team to finish earlier - otherwise there would be no point to making the change. If the change doesn't work and you take longer, then learn from that, but don't retain the change as-is. – dietbuddha Feb 10 '17 at 17:33