Keeping Your Build Under Ten Minutes

One of the practices recommended by Extreme Programming (XP) is to keep a ten-minute build.  Kent Beck and Cynthia Andres write in Extreme Programming Explained (Second Edition): "Automatically build the whole system and run all of the tests in ten minutes.  A build that takes longer than ten minutes will be used much less often, missing the opportunity for feedback."



So what do you do when your build takes longer than ten minutes? 


James Shore and Shane Warden offer some advice in The Art of Agile Development: "For most teams, their tests are the source of a slow build."  Frequently these tests are integration tests -- tests that confirm that your code interacts correctly with third-party dependencies such as databases, files, and web frameworks.  These tests are useful to automate and add value to a project, but it's easy to write too many.

"You shouldn't need many integration tests," Shore and Warden explain.  "The best integration tests have a narrow focus: each checks just one aspect of your program's ability to talk to the outside world.  The number of focused integration tests in your test suite should be proportional to the types of external interactions your program has, not the overall size of the program.  (In contrast, the number of unit tests you have is proportional to the overall size of the program.)"

One strategy our team has used is to replace slow integration tests with fast unit tests.  This means finding the slow tests, deciding if they are testing too much at once (for example, database interaction, web framework usage, etc.), and replacing them with focused unit tests.  We typically do these as slack tasks in our iteration.  But it can be hard work; it often requires reasoning about what the tests are testing, why they were written as integration tests in the first place, and what we need to do to be confident that a unit test will provide sufficient coverage.  Most of the time we find that these tests really didn't need to be integration tests.
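
The replacement idea can be sketched in code. This is a hypothetical example -- all the names (PremiumChecker, FakeUserRepository, purchase_count) are invented for illustration, not taken from any real project. The point is that once the database sits behind a small interface, the business rule can be checked by a unit test that never opens a connection:

```python
# Hypothetical sketch (names invented): the rule once covered by a slow
# database integration test is isolated behind a small "repository" seam,
# so a fast unit test can exercise it with an in-memory fake.

class PremiumChecker:
    """Business rule under test: a user with 10+ purchases is premium."""

    def __init__(self, repository):
        # In production this would be a real database-backed repository.
        self.repository = repository

    def is_premium(self, user_id):
        return self.repository.purchase_count(user_id) >= 10


class FakeUserRepository:
    """In-memory stand-in for the database; lookups take microseconds."""

    def __init__(self, counts):
        self._counts = counts

    def purchase_count(self, user_id):
        return self._counts.get(user_id, 0)


# The unit test checks the same rule the old integration test did,
# without starting (or cleaning up) a database.
repo = FakeUserRepository({"alice": 12, "bob": 3})
checker = PremiumChecker(repo)
assert checker.is_premium("alice")
assert not checker.is_premium("bob")
```

The coverage question still applies: a single, narrowly focused integration test would confirm that the real repository talks to the database correctly, but that one test covers the interaction for every rule built on top of it.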

However, sometimes projects live long enough (we hope!) that they grow beyond their original scope, and a ten-minute build becomes a challenge.  (I can imagine that Microsoft Word cannot be built and tested in ten minutes, but someone please correct me if I'm wrong!) What do you do then?  Splitting the code into discrete, independent modules that can be built and tested in isolation is one approach.  Another strategy would be to use the power of distributed build servers to run your tests in parallel on multiple machines, but I've not yet tried this (and can imagine many difficulties in setting this up -- again, someone please prove me wrong).
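
A small, concrete version of the splitting idea is to group the slow tests separately so the default build runs only the fast suite, with a second job picking up the rest later. Here's a minimal sketch using Python's standard unittest module -- the test names and classes are invented for illustration:

```python
import unittest


class FastUnitTests(unittest.TestCase):
    """Runs on every build; must stay well inside the ten-minute budget."""

    def test_premium_rule(self):
        self.assertGreaterEqual(12, 10)  # stands in for a real unit test


class SlowIntegrationTests(unittest.TestCase):
    """Run by a separate, less frequent job -- not before every check-in."""

    def test_database_roundtrip(self):
        pass  # would exercise a real database in the slow job


# The default build loads and runs only the fast suite; a separate CI
# configuration would load SlowIntegrationTests on its own schedule.
loader = unittest.TestLoader()
fast_suite = loader.loadTestsFromTestCase(FastUnitTests)
result = unittest.TextTestRunner(verbosity=0).run(fast_suite)
assert result.wasSuccessful()
```

The same partitioning works with any test framework that can select tests by suite, tag, or name pattern; the essential move is that the pre-check-in build never waits on the slow tests.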

How have you practiced keeping your build time in check?

User Comments

2 comments
Anonymous

We keep our JUnit build under 8 minutes, otherwise the feedback is too slow and builds stack up. Have taken lots of approaches over the years - faster build machines, faster OS, faster software. Latest one was to move from CruiseControl to Hudson and distribute the builds across several virtual machines. (This is much easier to do in Hudson, and Hudson has many other advantages too such as easy UI mgmt and ease of accessing results). Also have four suites of FitNesse tests running in four separate builds but sequentially right now, that's about 2 hours. And a suite of Canoo WebTest GUI test scripts, also a couple hours - are looking at dividing those up and running concurrently. Fast feedback is incredibly critical to our team.

May 18, 2009 - 5:23am
Anonymous

Distributing tests isn't trivial, but it's definitely doable. Here's a 2007 blog post discussing how functional tests can be split up and distributed: www.anthillpro.com/blogs/anthillpro-blog/2007/06/07/1181244540000.html . It's a little AnthillPro specific, but the same basic strategy could probably be applied more generally.

I do like to split up the fast tests that are run prior to check-in from the slow tests. If you have tests that run for hours, that's OK. That's why you have the build machine sitting there. You don't even need to include the slow integration tests in the standard CI loop; if they take two hours, run a loop that fires every two hours, grabs the latest build passing unit tests, and runs it through the integration / functional / randomized data testing loop. Keep that hardware busy!

May 18, 2009 - 6:02am

About the author

Daniel Wellman

Daniel Wellman is a technical lead at Cyrus Innovation, a leading agile consultancy based in New York, where he leads development projects and coaches teams on adopting agile software development practices. Daniel has more than ten years of experience building software systems and is an expert in agile methodologies, object-oriented design, and test-driven development. Contact Daniel at dan@danielwellman.com.
