Virtual Hudson Build System: The Rest of The Story

Part 2

Time to Implement Something New
Rebuilding everything took a long time, and because major software architecture changes were happening at the same time, it was hard to tell whether a build was failing because of a new Hudson configuration issue or because of our code changes.

The good news was that this failure prompted management to approve not only our original request but also a new Hudson master to replace the failed box. After some debate and a lot of planning, we decided to make everything virtual, even the master, to guard against another hardware failure, which we knew would eventually happen. If the system crashed again, any virtual machines (VMs) on the crashed box could migrate to the working box. If we did this correctly, we would no longer have any downtime due to hardware failures.

The Dawn of a New Generation
Before I could commit 100 percent to the virtualization path, I needed performance data to back up the decision. Recall from part one that our precrash Hudson server could do the compile in fifteen minutes; the old, postcrash server could do it in seventeen. But I needed to know how much overhead virtualization would add. The following table shows the results of my performance testing:

 

Server                                                              Time
HUDSON Precrash                                                     15 minutes
HUDSON Postcrash                                                    17 minutes
New Eight-Core Server (Nonvirtualized)                              10 minutes
New Eight-Core Server (Virtualized)                                 12 minutes
New Eight-Core Server (Virtualized with iSCSI SAN for VM Storage)   13 minutes
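The timings above were simple end-to-end measurements of the same compile on each box. As a rough sketch of how such numbers can be collected, assuming a Maven-based build (the article does not name the actual build command, so treat the details as illustrative):

import subprocess
import time

# Assumption: the project builds with "mvn clean install"; substitute the real command.
BUILD_CMD = ["mvn", "clean", "install"]
RUNS = 3  # average a few runs to smooth out caching effects

def time_build():
    start = time.monotonic()
    subprocess.run(BUILD_CMD, check=True)
    return time.monotonic() - start

durations = [time_build() for _ in range(RUNS)]
print(f"average build time: {sum(durations) / RUNS / 60:.1f} minutes")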

The holy grail of virtualization, at least in my mind, is being able to move a VM from server to server without stopping it. To do that, you need some sort of shared storage between the virtualized hosts. The last entry in the table above is a virtualized host running its VMs on an iSCSI SAN. Considering what we gain with that, thirteen minutes is an awesome result, and the overhead of virtualization is well worth it. We will be able to decrease our build time further by parallelizing the builds even more, and adding capacity is simple, too: just add more virtual hosts.
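As a rough illustration of that parallelization idea (in Hudson this would normally be separate jobs fanned out across build nodes), here is a minimal sketch that builds independent modules concurrently; the module names and the Maven invocation are assumptions, not details from the article:

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Assumption: these modules can be built independently of one another.
MODULES = ["core", "services", "web"]

def build(module):
    # Each build runs in its own child process, so threads are enough to fan the work out.
    subprocess.run(["mvn", "clean", "install", "-pl", module], check=True)
    return module

with ThreadPoolExecutor(max_workers=len(MODULES)) as pool:
    for finished in pool.map(build, MODULES):
        print(f"finished building {finished}")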

Conclusion
We didn’t make our seven-minute build time goal, and I’m not sure we will ever see that short a time again. We probably could if we hadn’t virtualized any of the build servers, but that is a price we are willing to pay for a more reliable build system. Still, our builds will be faster overall, since the queue should no longer get as deep.

This solution is very effective at getting every single ounce of capacity out of a server (the bosses will like that). Even though we didn’t spend a lot of money on this system and it doesn’t have the fastest servers on the block, it is what we have for now and it works well.

Read Virtual Hudson Build Environments, Part 1.

About the author

Tony Sweets

Tony Sweets is a 15-year veteran of the software industry. For the past 10 years, he has worked exclusively on Java enterprise web applications in the financial sector. Tony possesses a wide range of skills but mostly likes to work on Java applications and the tools that make the development process better. Tony’s blog is located at www.beer30.org/.
