7

On my team, we carry out code reviews (without the use of external tools). The reviewer produces a list of items which the developer should address or consider, discusses the relevant points with them, and then essentially leaves them to it, without following up on their subsequent changes. Occasional spot checks suggest that, for the most part, all the comments are being addressed.

However, much of the literature on code reviews places a strong emphasis on verifying that review comments are addressed. I'm wondering what people's thoughts are on this. Does your team have (or has it had) issues with people not addressing comments? Does your team behave like mine and skip the verification step? If your team does do the verification step, how often is it just a rubber-stamp exercise?

Chris Knight
  • 702
  • 5
  • 12

8 Answers

6

The key to success (if there is one) is to do it the way the team likes to do it. I've been on teams where nothing got committed to VCS prior to code review, followed by another review if there was something to fix. This suited the team fine, as everyone had agreed to it, it was considered valuable, and it wasn't too rigorous.

In some other teams, this definitely would not have worked. For example, on another team we had members from different organizations, with different skills. Even with a much lighter review style, we had issues with people being scared of doing reviews or of being reviewed. There is obviously more than one explanation for this failure, but IMO the biggest is the lack of buy-in: people had backgrounds and expectations that were too different, and perhaps a lack of trust, to be able to use code reviews effectively. On these teams, code reviews were perhaps not beneficial at all.

The team needs to decide whether code reviews are done at all, and if so, how. Start with something that everyone on the team can agree on. Over time, habits and tastes will change; then it's time to adapt again.

merryprankster
  • 426
  • 2
  • 6
4

Our code review process requires a sign-off from the reviewer. If the changes they recommend following discussion are not made, they don't sign off.

temptar
  • 2,756
  • 2
  • 23
  • 21
  • How often do the reviewers find that changes weren't made? And how often do the reviewers rubber-stamp the review without close inspection once the developer says 'Yep, I've done all those changes you recommended'? – Chris Knight Jul 21 '11 at 09:39
  • It's never happened on code I review, and for any code I write that gets reviewed, the changes themselves get reviewed - in fact, I show them to the reviewer in the source file. – temptar Jul 21 '11 at 10:14
3

Your team hopefully consists of professionals who can and will do the job they are paid to do. I go to great lengths not to treat them like children by checking up on them and making sure they are doing what they are supposed to do.

Code reviews are done at our desks, and everybody involved takes notes to bring to the meeting. The meeting shouldn't last longer than half an hour. Somebody takes minutes and sends an email to everybody listing the action items from that meeting. The code review suggestions that were agreed upon are now tasks on the sprint plan.

If those tasks aren't completed that sprint, then the developer needs to explain why. There is no need at that point to "double-check" him to make sure he did the work.

IMHO, if you need to "double-check" that your developers completed their tasks, then your team has a serious skill deficiency, an ethical deficiency, or both, and that needs to be corrected at a higher level.

maple_shaft
  • 26,401
  • 11
  • 57
  • 131
1

We do code reviews for 100% of our code.

We use Fisheye, which we've tied into our ticket-tracking system, Jira.

  • Reviewers are added to a ticket so that they can review the code.
  • They review and make comments in Fisheye.
  • The developer manually makes changes as indicated, sometimes after discussion either online or in-person.
  • Additional 'rounds' of reviews may take place depending on the changes made after reviews.

Finally, the review is closed. There is NO specific 'step' to make sure changes have been applied. Instead we rely on the fact that we are professionals, we are craftspeople, and we care about our code. I prefer this 'professional' approach to rigid procedural rules that people are afraid to break. If a reviewer makes a suggestion and we ignore it, it will usually resurface soon in another context.

However this approach implies:

  • professionals that RESPECT each other
  • really good communication
  • agreement on standards

Really good communication actually includes a lot of factors such as:

  • physical layout - members of the group not being in the main area is a frequent problem
  • scrum and scrum of scrums (to share info with other teams)
  • retrospectives - to make sure the process works for everyone

Agreement on standards means:

  • either a formal written or an oral understanding of the practices to use
  • systems that adapt to change
  • developers talking out differences and seeking agreement
  • retrospective on retrospectives - to look at our agile process itself.

Michael Durrant
  • 13,101
  • 5
  • 34
  • 60
1

I've seen both practices, and in general the choice reflects the quality demands of the organization.

In the environment where quality was essential, issues brought up were assigned a severity. The senior developer in charge of the review would check the changes made, and approve the result (or recommend another review, if the changes were too complex).

In another environment, we could simply trust developers. Issues brought up during code review were generally phrased as advice anyway, and it was often reasonable to ignore them. When deadlines are tight, some improvements just don't pay off from a business perspective.

MSalters
  • 8,692
  • 1
  • 20
  • 32
0

My code reviewers come in different flavours, but mostly I get rubber-stamped. Certain reviewers appear to want the process over as quickly as possible, to the point where I once asked my team leader for a review and his response was "yeah, that's fine; just put my name down", without actually looking at the code.

There's a contractor (currently working on the same project) who can review my code; he gives occasional feedback but never produces written comments. I get the impression he's not overly concerned about the codebase as long as the result looks like it works. I've had to start flagging shortcuts that he's been taking, which isn't a situation I want to be in.

The two other team leaders are more thorough: they check every file, tell me to change things, and give advice and good feedback. Still, no documentation is produced, aside from their names being recorded against the commit.

The drawbacks of the thorough reviewers are that the sessions last longer, and I feel my team leader might feel slighted if I stopped asking him for reviews. I think our code reviews would be better done without the reviewee present, at the reviewer's leisure.

Adrian's system sounds like a good one, but I doubt the management would go for it due to the extra time it would take.

0

On my team we do live, pair-programming-style code reviews. Fixes are often made on the fly while discussing possible enhancements. We've found that direct communication creates a kind of contract between reviewer and reviewee, so fixes are more likely to be made, and there are far fewer problems than if we used a tool or document as a buffer between the two.

The only written trace we keep is that we mention the reviewer in source control when committing, which makes it easy to trace which commits were or weren't reviewed.
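
For teams doing this in Git, one way to make that trace queryable is to record the reviewer as a commit-message trailer and scan for it later. Here is a minimal sketch, assuming a conventional Reviewed-by: trailer (the trailer name and the helper function are illustrative, not part of any tool mentioned here):

    # List commits that lack a "Reviewed-by:" trailer.
    # The trailer convention itself is an assumption for illustration,
    # e.g. a commit message ending in "Reviewed-by: Jane Doe".
    import subprocess

    def unreviewed_commits(rev_range="HEAD"):
        # %H = commit hash; %(trailers:key=Reviewed-by) = matching trailers;
        # %x00/%x01 emit NUL/SOH bytes as field and record separators.
        log = subprocess.run(
            ["git", "log",
             "--format=%H%x00%(trailers:key=Reviewed-by)%x01", rev_range],
            capture_output=True, text=True, check=True,
        ).stdout
        unreviewed = []
        for record in log.split("\x01"):
            record = record.strip()
            if not record:
                continue
            sha, _, trailers = record.partition("\x00")
            if not trailers.strip():  # no Reviewed-by trailer on this commit
                unreviewed.append(sha)
        return unreviewed

    if __name__ == "__main__":
        for sha in unreviewed_commits():
            print(sha)

Run over a range such as unreviewed_commits('main..feature'), this surfaces exactly the commits that went in without a named reviewer.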

guillaume31
  • 8,358
  • 22
  • 33
0

Code Reviews are built into our SDLC.

  1. A developer is assigned a Task.
  2. Code is checked into Source Control, associated to that Task.
  3. Once all code is ready, the Task is marked for Review.
  4. If the Review fails, the developer fixes the code under the same Task. GOTO 3.
  5. If Review passes, the code is promoted to the Integration (Alpha) Environment.

We check the code against our Coding Standards and look for anything that looks like it could have a negative impact on performance.
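
As a rough sketch of that lifecycle, the flow can be modelled as a small state machine (the state and method names below are assumptions for illustration, not any tracker's actual fields):

    # Illustrative model of the task lifecycle described above.
    # State and method names are assumptions, not any tracker's API.
    from enum import Enum, auto

    class TaskState(Enum):
        IN_PROGRESS = auto()  # steps 1-2: code written and checked in
        IN_REVIEW = auto()    # step 3: task marked for review
        INTEGRATION = auto()  # step 5: promoted to the Alpha environment

    class Task:
        def __init__(self, key):
            self.key = key
            self.state = TaskState.IN_PROGRESS

        def mark_for_review(self):
            assert self.state is TaskState.IN_PROGRESS
            self.state = TaskState.IN_REVIEW

        def complete_review(self, passed):
            assert self.state is TaskState.IN_REVIEW
            if passed:
                self.state = TaskState.INTEGRATION  # step 5: promote
            else:
                # step 4: fix under the same Task, then re-review
                self.state = TaskState.IN_PROGRESS

The point of the loop is that a failed review sends the task back through the same review gate, never straight to Integration.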

Adrian J. Moreno
  • 1,098
  • 6
  • 12