Scaling continuous integration at Google

From the Agility and Project Leadership Blog, a contrarian and provocative blog that goes beyond the traditional over-hyped dogma of "Agile" in order to achieve true agility and project leadership through philosophical reflection.

As a follow-up to my post on "Continuously controlled integration for Agile development", here's a Google Tech Talk video on applying such practices at the scale of a massive codebase. A snapshot of what's involved:

Even at Google's scale, the build still runs from a single monolithic source tree in which multiple programming languages are intermingled. Here's an excerpt from the talk's introduction:

At Google, due to the rate of code in flux and increasing number of automated tests, this approach does not scale. Each product is developed and released from 'head' relying on automated tests verifying the product behavior. Release frequency varies from multiple times per day to once every few weeks, depending on the product team.
 
With such a huge, fast-moving codebase, it is possible for teams to get stuck spending a lot of time just keeping their build 'green' by analyzing hundreds if not thousands of changes that were incorporated into the latest test run to determine which one broke the build. A continuous integration system should help by providing the exact change at which a test started failing, instead of a range of suspect changes or doing a lengthy binary-search for the offending change. To find the exact change that broke a test, the system could run every test at every change, but that would be very expensive.
 
To solve this problem, Google built a continuous integration system that uses fine-grained dependency analysis to determine all the tests a change transitively affects and then runs only those tests for every change.
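
To make the idea of fine-grained dependency analysis a bit more concrete, here is a minimal sketch in Python of change-based test selection. This is my own illustration, not Google's actual system: the target names and the affected_tests helper are assumptions for the example. The essential move is to invert the build dependency graph so that a breadth-first walk outward from a changed target reaches exactly the tests that the change can transitively affect.

    from collections import defaultdict, deque

    # Hypothetical dependency edges: each target lists the targets it depends on.
    # In a real build system this graph would come from the build files themselves.
    DEPS = {
        "//search:frontend": ["//search:query_parser", "//common:http"],
        "//search:query_parser": ["//common:strings"],
        "//search:frontend_test": ["//search:frontend"],
        "//search:parser_test": ["//search:query_parser"],
        "//maps:renderer": ["//common:http"],
        "//maps:render_test": ["//maps:renderer"],
    }

    def reverse_deps(deps):
        """Invert the graph: map each target to the targets that depend on it."""
        rdeps = defaultdict(set)
        for target, dependencies in deps.items():
            for dep in dependencies:
                rdeps[dep].add(target)
        return rdeps

    def affected_tests(changed_targets, deps):
        """Return every test target that transitively depends on a changed target."""
        rdeps = reverse_deps(deps)
        seen = set(changed_targets)
        queue = deque(changed_targets)
        while queue:  # breadth-first walk along reverse-dependency edges
            current = queue.popleft()
            for dependent in rdeps[current]:
                if dependent not in seen:
                    seen.add(dependent)
                    queue.append(dependent)
        return sorted(t for t in seen if t.endswith("_test"))

    # A change to //common:strings affects only the parser and frontend tests.
    print(affected_tests(["//common:strings"], DEPS))
    # -> ['//search:frontend_test', '//search:parser_test']

In this toy graph, a change to //common:strings triggers only the parser and frontend tests, while //maps:render_test is skipped entirely. Running only the affected subset for every change is what makes per-change test results affordable, and it is also what lets the system name the exact change at which a test started failing.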
 
For those who want to learn more, watch the full Tech Talk video below.
 
Posted on: November 18, 2012 05:29 PM

