We all know code review is important. How do you conduct code reviews within your team? What does your workflow look like?
Our team is very small right now (4 people) and we do code review informally, but we would like to improve and adhere to a standard.
So, my question is: what is the best way to perform code review in a small team?
We use GitLab. Nobody is allowed to commit to master; you need to make a merge request for new features, and somebody on the team needs to +1 your merge request before it's accepted. In Jira we use a Kanban board: after you've made a merge request, you move the task to In Review. If too many tasks are in review, you can't start a new task until some of them are reviewed.
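For anyone unfamiliar with that flow, the git side of it might look roughly like this (the branch name is hypothetical, and the +1 itself happens in the GitLab UI, not on the command line):

```shell
# Branch off master for the new feature (nobody commits to master directly)
git checkout master
git pull origin master
git checkout -b feature/login-throttle

# ...write code, then commit and push the branch
git add .
git commit -m "Add login throttling"
git push -u origin feature/login-throttle

# Now open a merge request in GitLab and move the Jira task to "In Review";
# a teammate's +1 is required before the MR can be merged.
```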
New projects don't require merge requests; they are committed straight to master. The reviewer must then check out the project, and any fixes must be done on a branch and once again get a +1 before they can be accepted.
It's not perfect; things sometimes slip through if you only review in GitLab itself. To see the full impact of a change and minimise things slipping through, it's best to check out the project and see the change in its context.
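Checking an MR out locally like that is straightforward in GitLab, which exposes each merge request under `refs/merge-requests/<iid>/head` (the MR number 42 below is hypothetical):

```shell
# Fetch MR !42 into a local branch (GitLab exposes MRs under refs/merge-requests/<iid>/head)
git fetch origin merge-requests/42/head:review/mr-42
git checkout review/mr-42

# See the full scope of the change against master before reading file by file
git diff --stat master...review/mr-42
```

From there you can build, run the tests, and poke at the change in context instead of reading it only in the web diff.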
I currently have two code review stories to tell, each significantly different from the other.
In the large organisation we have an established git workflow. The master branch reflects production code and sits on top of a series of staging environments. After a release, each project branches off master; for better or worse, projects don't have much visibility into other projects (except, of course, when something goes wrong because of changes pulled in from other projects). Within each project, each developer branches off their project branch to develop their code. Once the code is developed, a team member (peer) reviews it; ownership of this process belongs to the project manager. The code is then run through our own project staging environment before being merged into the project branch. The project branch is then merged into other project branches as the new developments move up the staging environments towards master.
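A rough sketch of that branching model in plain git (the project and developer branch names are made up for illustration):

```shell
# After a release, each project branches off master
git checkout master
git checkout -b project/payments

# Each developer then branches off their project branch
git checkout -b alice/retry-logic project/payments
# ...develop, get a peer review, run through the project staging environment...

# Merge the reviewed work back into the project branch
git checkout project/payments
git merge --no-ff alice/retry-logic
```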
The other story is one with a very small team where everyone (designer, coders, ops) knows the problem being addressed intimately. There is only one staging environment and the workflow is not very rigid. Code is peer reviewed as it is staged ready for production.
From my point of view, the question I find myself asking more often than not is "what is the purpose of the code review?"
In a previous team, while running my own programming house (largest team size was 8), code reviews were both an opportunity to learn and to share knowledge. The culture was one where everyone, from product managers through marketers to designers and coders, was encouraged to engage as fully as possible with the problem areas. There were standards (naming conventions, DB access patterns, where different code files were located, commenting, etc.), and code reviews were then meaningful in the sense of ensuring adherence to those standards, challenging colleagues to explain their thinking, generating discussion, and helping knowledge sharing and learning.
If I were to recommend anything, I'd encourage establishing a workflow and making code reviews part of the general culture of product development that you want to create. It is great to learn from outside influences, of course, but I'd argue you should adapt those influences to suit your own unique needs.
We use Git (GitLab) and git-flow. The master branch is protected; new code can only be submitted to master via a merge request. The project owner's duty is to check the code for smells and issues. Additionally, we use a code review tool and a code review process.
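For reference, git-flow's feature workflow boils down to a few plain-git steps like these (the feature name is hypothetical, and the branch names follow git-flow's defaults):

```shell
# Equivalent of "git flow feature start audit-log": branch off develop
git checkout develop
git checkout -b feature/audit-log

# ...commit work, push the branch, and open a merge request for review...

# Equivalent of "git flow feature finish audit-log", once the MR is approved:
git checkout develop
git merge --no-ff feature/audit-log
git branch -d feature/audit-log
```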
I think Gerrit is the most widely used review tool for Git repositories (gerritcodereview.com). We use github.com/ooyala/barkeep because we use GitLab, and both (GitLab and Barkeep) are written in Ruby.
Our process is simple. Unless a full-time code reviewer is hired (which will never happen), we require a minimum of 5 reviews per coder per month. Five is usually too few to significantly increase code quality and share knowledge, but it is good for avoiding endless discussions about whether coders are hired for coding or for doing reviews. People sometimes simply do not accept advice or criticism from people who do not have the same job title.
In projects where no linting tool exists, or where the project owner avoids using one (limited knowledge, limited skills, limited mindset), we try to execute some spotting tasks:
And some common spotting tasks for all kind of projects:
If someone spots an issue, they comment on it. A discussion may then begin, or the comment may be ignored by the whole team (lack of time, limited ..., and sometimes a political decision).
If the spotter's comment is accepted as an issue, an action is discussed. This can simply be a fix for the spotted issue type, or a new refactoring task is created. With each refactoring task, we also agree to review the refactored code. That is when knowledge gets shared amongst the coders.
I wish we could do better and accept knowledge more quickly but we are all just humans after all.
Matthew Dailey
Tech Lead @ Palantir Technologies
We use Gerrit to manage code reviews for our git repo. When you push a new change to Gerrit, it opens a CR which requires approval before it merges into the development branch.
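Concretely, Gerrit opens a change when you push to its `refs/for/` namespace instead of to the branch itself (assuming here that `master` is the development branch):

```shell
# Push the local commit to Gerrit's "refs/for/" namespace to open a code review
git push origin HEAD:refs/for/master

# Gerrit turns the commit into a change (CR); it lands on master only after approval.
```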
We use a very loose process which works because code reviews vary widely. That said, 99% of commits get a review pre-merge.
Ownership
It is the responsibility of the coder to make sure the review is completed successfully. Gerrit will automatically email the reviewer when a review is requested but this can get lost. If the review is urgent, often the coder will ping the reviewer to make sure it's prioritized.
When requesting a CR, I'll often request multiple reviewers. This can lead to a tragedy of the commons where no reviewer actually prioritizes the review. To avoid this, I will also comment on the review explaining why I requested each person. Often Reviewer-A has legacy knowledge that would be valuable, while Reviewer-B and Reviewer-C are teammates working in the same area of the codebase.
Be nice
It's extremely important to give positive feedback in code reviews. Research shows that positive teams are more productive and the ideal positive comment to negative comment ratio is 6 to 1. Code reviews often feel like a ton of negative comments all at once which can be really damaging and frustrating. Try to highlight things your teammate did well, you may learn something!
Get in a room together
Sometimes code reviews can be hard. One of the best ways to unblock a team struggling to approve a hard code review is to put everyone in a room and have the coder step through the git diff file by file and explain what's going on. This exposes the underlying thought process and raises new questions. It works best if everyone has familiarized themselves with the code beforehand.
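If it helps, the coder can drive that walkthrough from the terminal; a minimal sketch (the branch and file names are hypothetical):

```shell
# Show the overall shape of the change first
git diff --stat master...feature/audit-log

# Then walk through it one file at a time
git diff master...feature/audit-log -- app.txt
```

The triple-dot form diffs against the merge base, so only the feature's own changes show up, not unrelated commits that landed on master in the meantime.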