Group Code Review:
As part of CodeCAT (an area under Catalyst, the umbrella initiative we have at Mindfire to improve the competency of every person involved in software development) we have been doing code reviews regularly. After every review we post the findings on GPS (our company's intranet) so that everyone else in the company can go through the review and learn from it.
In the beginning there was enthusiasm, but over time we observed that most people do not go through others' code reviews. So the intent of spreading the knowledge from one code review (which is typically a one-to-one session) to many others was not being met. I started conducting awareness sessions for everyone in batches; things improved after that, but not as much as we would have liked. Something had to be done!
We were clear on two basic things: first, the quality of the code reviews being done was good; second, we knew exactly what we needed to improve, i.e. spreading the knowledge from one review to others. This awareness of our strength and our area of improvement helped us come up with an effective solution. We decided to start "Group Code Review": call 5-10 developers to a room along with 1-2 code reviewers, pick 1-2 projects randomly, and do the code review there itself. The intent was to spread knowledge and sensitize people not to repeat silly mistakes.
We anticipated that people might object to this way of reviewing, so we started collecting opinions and views from the CodeCAT team members (25 persons). We had a lot of discussions on the pros and cons of having something like this – people might like it or might not.
After a lot of discussion we finally agreed on an approach that served our primary intent – spreading knowledge among more people while ensuring that reviewees take code review comments seriously and work on improving their code quality. We decided to have the first round of group code reviews done team-wise, conducted by the team lead (who had the flexibility to call in other reviewers too). So instead of the earlier plan – randomly calling 10 people working in one technology (but in different teams) and having 1-2 assigned CodeCATs pick out a few people for review in the meeting itself – we called people working in a single team for review. Since the concept was new, we realized the earlier approach would not work because people might not feel comfortable. So we called 7-8 people working in one team to a room and had the code of 1-2 of them reviewed by the team lead (in the first phase we called teams whose team leads were CodeCATs). The review session was used not only to identify mistakes made by developers but also as an interactive discussion forum where people got a chance to ask questions and understand why something would not work and what changes were needed to make it work.
So far the feedback from the team members who participated in the reviews has been positive, and people seem to like it. Of course, the next round of reviews will be a little challenging, when we pick groups of 8-9 people randomly and assign a reviewer they have not worked with. I will keep you posted on our findings after we do that, plus the things that should be taken care of to make group reviews more successful.
Author – Atma Prakash Ojha