Scaling continuous integration at Google

From the Agility and Project Leadership Blog
Bridging the gap between traditional and agile project management and leadership.


As a follow-up to my post on "Continuously controlled integration for Agile development", here's a Google Tech Talk video on using such practices to scale continuous integration across a massive code base. Here's a snapshot of what's involved:

Even at this size, Google still runs its builds from a single monolithic source repository in which multiple programming languages are intermingled. Here's an excerpt from the talk's introduction:

At Google, due to the rate of code in flux and increasing number of automated tests, this approach does not scale. Each product is developed and released from 'head' relying on automated tests verifying the product behavior. Release frequency varies from multiple times per day to once every few weeks, depending on the product team.
 
With such a huge, fast-moving codebase, it is possible for teams to get stuck spending a lot of time just keeping their build 'green' by analyzing hundreds if not thousands of changes that were incorporated into the latest test run to determine which one broke the build. A continuous integration system should help by providing the exact change at which a test started failing, instead of a range of suspect changes or doing a lengthy binary-search for the offending change. To find the exact change that broke a test, the system could run every test at every change, but that would be very expensive.
 
To solve this problem, Google built a continuous integration system that uses fine-grained dependency analysis to determine all the tests a change transitively affects and then runs only those tests for every change.
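To see why the "lengthy binary-search for the offending change" mentioned above is expensive, here's a minimal sketch (mine, not code from the talk): with a range of n suspect changes, each probe costs a full test run, so pinpointing the breakage takes about log2(n) runs. The `test_passes_at` predicate is a hypothetical stand-in for running the test suite at a given change.

```python
def first_bad_change(changes, test_passes_at):
    """Binary-search an ordered range of suspect changes for the first
    one at which the test started failing. Assumes the test passed
    before the range began and fails at the end of it."""
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if test_passes_at(changes[mid]):
            lo = mid + 1   # still green here: breakage is after mid
        else:
            hi = mid       # red here: first bad change is at or before mid
    return changes[lo]


# Hypothetical example: 1000 changes, breakage introduced at change 700.
changes = list(range(1, 1001))
culprit = first_bad_change(changes, lambda c: c < 700)
```

Ten or so full test runs just to isolate one culprit is exactly the overhead a per-change CI system avoids, since it already knows the precise change at which each test went red.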
 
For those who want to learn more, watch the video below:
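The fine-grained dependency analysis the talk describes can be sketched roughly as follows (my illustration, not Google's code): invert the build dependency graph, walk the reverse edges outward from the changed targets, and run only the tests reached. The target names and the `DEPS` map are made up; in a real build system the edges would come from build metadata.

```python
from collections import defaultdict, deque

# Hypothetical dependency edges: each target lists the targets it depends on.
DEPS = {
    "//search:frontend_test": ["//search:frontend"],
    "//search:frontend": ["//base:strings"],
    "//ads:pricing_test": ["//ads:pricing"],
    "//ads:pricing": ["//base:strings"],
    "//maps:render_test": ["//maps:render"],
}

def affected_tests(changed_targets):
    """Return the tests whose transitive dependencies include a changed target."""
    # Invert the edges: for each target, which targets depend on it?
    rdeps = defaultdict(set)
    for target, deps in DEPS.items():
        for dep in deps:
            rdeps[dep].add(target)
    # Breadth-first walk of the reverse edges from every changed target.
    seen, queue = set(changed_targets), deque(changed_targets)
    while queue:
        node = queue.popleft()
        for dependent in rdeps[node]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(t for t in seen if t.endswith("_test"))
```

With this map, a change to `//base:strings` selects the search and ads tests but skips `//maps:render_test` entirely, which is the whole point: per-change testing becomes affordable because each change triggers only the tests it can actually affect.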
 
Posted on: November 18, 2012 05:29 PM | Permalink

