Scaling continuous integration to the enterprise
Enterprise Scale Continuous Integration
These are the notes captured from the discussion. We started by describing the problems we had seen in trying to run continuous integration on very large code bases, and in large organizations moving from monthly or weekly integration cycles to continuous integration.
Problem Definition
- 300 devs; even if each dev breaks the build only once a year, that is roughly 300 breaks across ~250 working days
- so the build will be broken every day
- slows us down
- creates distrust of the source master
- 2 hour build, 3 day acceptance test
- hard to assign failure due to multiple commits per build (see the bisection sketch after this list)
- long cycle time on failure (hours before you know you broke something)
- failures affect more people, are more expensive
- understanding the root cause of a failure is not obvious
- How to handle 300 applications, each with a few devs; how to scale to many projects and still keep them manageable
- How to manage many branches merging into many mainlines
- Managing build time dependencies (unexpected, undetected coupling)
- incorrect incremental builds
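One way to attack the attribution problem above (multiple commits land in a single 2-hour build) is to bisect the suspect commits between the last good build and the first bad one. Below is a minimal sketch; `build_and_test` and the revision names are placeholders for whatever build command and SCM the team actually uses.

```python
def first_bad_revision(revisions, build_and_test):
    """Bisect candidate revisions (oldest to newest) to find the first
    one whose build fails.

    Assumes the build was good before revisions[0], is bad at
    revisions[-1], and that a single revision introduced the break.
    Runs O(log n) builds instead of one per commit.
    """
    low, high = 0, len(revisions) - 1   # high is known bad
    while low < high:
        mid = (low + high) // 2
        if build_and_test(revisions[mid]):
            low = mid + 1               # break came after mid
        else:
            high = mid                  # mid is bad; break is at or before mid
    return revisions[low]


if __name__ == "__main__":
    # Fake build function, purely illustrative: r104 introduced the break.
    commits = ["r101", "r102", "r103", "r104", "r105"]
    is_good = lambda rev: commits.index(rev) < commits.index("r104")
    print("first bad commit:", first_bad_revision(commits, is_good))  # r104
```

With a 2-hour build each bisection step is still expensive, which is part of why the build-time reductions below matter.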
Addressing the problems, alternatives, risks, and trade-offs
- subcomponents
- reduces build time, but
- increases integration time
- build acceleration technology
- parallel build, multi-machine, multi-core (Electric Accelerator, for instance); see the sketch after this list
- buy fast machines (although disk I/O may dominate)
- modularize so that components come from the most recent successful build instead of being recompiled
- faster, and less is built (narrows the impact to a smaller team)
- Use "pre-flight" build (production build with many changes, not yet on the source master)
- integration race conditions
- faster hardware
- parallel builds
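As a rough illustration of the parallel-build idea, independent subcomponents can be built concurrently so wall-clock time approaches the longest module rather than the sum of all of them. A minimal sketch using only the Python standard library; the module names and the `make -C <module>` command are assumptions, not a description of Electric Accelerator or any specific tool.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

# Placeholder modules and build command -- substitute the real ones.
MODULES = ["core", "persistence", "web", "reports"]

def build(module):
    """Build one module in its own directory; return (module, returncode)."""
    proc = subprocess.run(["make", "-C", module], capture_output=True, text=True)
    return module, proc.returncode

def build_all(modules, workers=4):
    """Run the module builds concurrently and collect the failures."""
    failures = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(build, m) for m in modules]
        for future in as_completed(futures):
            module, rc = future.result()
            if rc != 0:
                failures.append(module)
    return failures

if __name__ == "__main__":
    failed = build_all(MODULES)
    print("failed modules:", failed or "none")
```

This only pays off if the modules really are independent; the undetected build-time coupling listed in the problems above shows up here as flaky or order-dependent results.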
Alternatives (2)
- 3 day acceptance test
- throw bodies at the problem (but it is not scalable)
- review the acceptance process for automation opportunities
- increase automated testing inside the application (at the interfaces)
- modularize tests and make them independent so they can run in parallel (see the sketch below)
- run manual acceptance tests less frequently, with automation running continuously
- use assistive automation to support more effective exploratory testing
- Brian Marick has some work going in this area
- Michael Bolton describes his use of Watir as assistive automation
Lisa Crispin suggested that Jared Richardson had done the continuous integration work for SAS and might share insights and ideas.
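One concrete way to shrink a 3-day acceptance run is to split independent test modules across machines, balancing by historical run time. The sketch below uses greedy longest-first partitioning; the module names and durations are invented for illustration.

```python
import heapq

def partition_tests(durations, workers):
    """Greedily assign test modules (name -> minutes) to workers,
    longest module first, always onto the least-loaded worker.
    Returns (total_minutes, [module names]) per worker."""
    heap = [(0.0, i, []) for i in range(workers)]   # (load, worker id, modules)
    heapq.heapify(heap)
    for name, minutes in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, wid, modules = heapq.heappop(heap)
        modules.append(name)
        heapq.heappush(heap, (load + minutes, wid, modules))
    return [(load, modules) for load, _, modules in sorted(heap, key=lambda x: x[1])]

if __name__ == "__main__":
    # Hypothetical historical durations in minutes.
    history = {"orders": 240, "billing": 180, "ui_smoke": 90,
               "reports": 150, "imports": 60}
    for load, modules in partition_tests(history, workers=2):
        print(f"{load:>5.0f} min: {modules}")
```

The balance is only as good as the independence of the modules, which is why making tests independent comes first.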
Alternatives (3)
300 applications, small teams on each
- Either many independent CI systems or an enterprise CI system
- unified view
- shared configuration
- reuse between teams
- security
- usable for small teams
- Dependency management
- component level dependencies managed by tools
- Anthill / Codestation
- maven
- ivy
- scheduling builds, which build should be run first
- how do I express the rules by which I select a component (see the sketch after this list)
- version (specific version, pattern match a version, relational operator to version string, etc.)
- acceptance test results
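The question above about expressing component-selection rules can be made concrete with a small matcher: given the versions a repository offers, pick the newest one that satisfies an exact, wildcard, or relational rule and has passing acceptance results. The rule syntax and data below are assumptions for illustration; they are not the syntax of Anthill/Codestation, Maven, or Ivy.

```python
import fnmatch
import operator

OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt, "<": operator.lt}

def as_tuple(version):
    """'1.2.10' -> (1, 2, 10) so versions compare numerically, not as strings."""
    return tuple(int(part) for part in version.split("."))

def matches(version, rule):
    for symbol, op in OPS.items():           # relational rule, e.g. ">=1.2.0"
        if rule.startswith(symbol):
            return op(as_tuple(version), as_tuple(rule[len(symbol):]))
    if any(ch in rule for ch in "*?"):        # wildcard rule, e.g. "1.2.*"
        return fnmatch.fnmatch(version, rule)
    return version == rule                    # exact rule

def select(available, rule):
    """Pick the newest available version that matches the rule and has
    passing acceptance tests; available maps version -> passed (bool)."""
    candidates = [v for v, passed in available.items()
                  if passed and matches(v, rule)]
    return max(candidates, key=as_tuple) if candidates else None

if __name__ == "__main__":
    # Hypothetical component versions and their acceptance-test status.
    versions = {"1.2.0": True, "1.2.3": True, "1.3.0": False, "1.2.10": True}
    print(select(versions, "1.2.*"))    # -> 1.2.10
    print(select(versions, ">=1.2.3"))  # -> 1.2.10 (1.3.0 failed acceptance)
```

Filtering on acceptance-test results, as in the last example, is how the "acceptance test results" criterion above can feed directly into dependency selection.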