1 INTRODUCTION
Peer code review, a manual inspection of source code by
developers other than the author, is recognized as a valuable
tool for improving the quality of software projects [2, 3]. In
1976, Fagan formalized a highly structured process for code
reviewing—code inspections [16]. Over the years, researchers
have provided evidence of the benefits of code inspection, especially for defect finding, but the cumbersome, time-consuming,
and synchronous nature of this approach hindered its universal adoption in practice [37]. Nowadays, most organizations adopt more lightweight code review practices to limit
the inefficiencies of inspections [33]. Modern code review is
(1) informal (in contrast to Fagan-style), (2) tool-based [32],
(3) asynchronous, and (4) focused on reviewing code changes.
An open research challenge is understanding which practices represent valuable and effective methods of review in this novel context. Rigby and Bird quantitatively analyzed code
review data from software projects spanning varying domains
as well as organizations and found five strongly convergent
aspects [33], which they conjectured could be prescriptive for
other projects. The analysis of Rigby and Bird is based on the
value of a broad perspective (that analyzes multiple projects
from different contexts). For the development of an empirical
body of knowledge, championed by Basili [7], it is essential
to also consider a focused and longitudinal perspective that
analyzes a single case. This paper expands on work by Rigby
and Bird to focus on the review practices and characteristics
established at Google, i.e., a company with a multi-decade
history of code review and a high volume of daily reviews to
learn from. This paper can be (1) prescriptive to practitioners
performing code review and (2) compelling for researchers
who want to understand and support this novel process.
Code review has been a required part of software development at Google since very early in the company's history; because it was introduced so early, it has become a core part of Google culture. The process and tooling for code review at Google have been iteratively refined for more than a decade and are used by more than 25,000 developers, who make more than 20,000 source code changes each workday in dozens of offices around the world [30].
We conduct our analysis in the form of an exploratory
investigation focusing on three aspects of code review, in line
with and expanding on the work by Rigby and Bird [33]:
(1) the motivations driving code review, (2) the current practices, and (3) developers' perceptions of code review,
focusing on challenges encountered with a specific review
(breakdowns in the review process) and satisfaction. Our
research method combines input from multiple data sources:
12 semi-structured interviews with Google developers, an internal survey with 44 responses from engineers who had recently sent changes for review, and log data from Google's code review tool pertaining to 9 million reviews over two years.