The benefits of automating software deployment processes are well known: efficiency, accuracy, security and productivity. But many organizations have stopped short of fully automating their deployments, especially in complex environments.
What’s holding them back? They are either missing the automation needed to take advantage of their virtual infrastructure, not using a virtual infrastructure at all, or doing only static virtualization. Automation and virtualization are like peanut butter and jelly: they’re better together. If you try to automate without virtualization, you’ll miss the advantages of truly flexible resource management. Conversely, virtualization without automation leads to developers hoarding their virtual machines the same way they hoarded physical machines, which simply replaces your server sprawl with VM sprawl.
Automating software deployment is really a twofold effort: you need an automation platform that works for your organization, and that platform gets dramatically better when combined with virtualization that provides dynamic resource allocation.
When it comes to deployment for either test or production, you’re almost never dealing with just one machine or server. Instead, you’re trying to juggle multiple servers, databases and testing processes. As testing becomes more complex and teams grow, managing these resources becomes increasingly cumbersome. And at a certain point, sticking a Post-it note on a physical machine to claim it as yours while you deploy a piece of code just doesn’t cut it anymore.
Today, most organizations have some level of virtualization, but many have only gotten as far as static virtualization. They have replaced their physical machines with virtual ones, but there is little difference in the way the machines are managed. Developers still need to claim machines for the tasks at hand; it’s just that their Post-it notes are now virtual.
Dynamic virtualization, on the other hand, gives developers immediate and flexible self-service. Machines are created as they need them, and torn down and redistributed when they’re done.
Dynamic virtualization solves the resource coordination problem, giving teams the resources they need, when they need them, and lets them work independently of each other, without having to worry about which machines the rest of their team are using. It also eliminates the need for teams to guess ahead of time how many machines they will need to complete a task. Before dynamic virtualization, hours, days or even weeks of valuable development time could be lost if you guessed wrong. Now, resources can be made immediately available as they are needed.
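The acquire-and-tear-down cycle described above can be sketched as a simple self-service pool. This is a toy in-process model, not a real provisioning API; the `VmPool` class and its methods are invented for illustration:

```python
import itertools

class VmPool:
    """Toy model of dynamic virtualization: VMs are created on demand
    and their resources are recycled when a task finishes.
    (Hypothetical illustration, not a real hypervisor API.)"""

    def __init__(self, capacity):
        self.capacity = capacity     # total VMs the infrastructure can run
        self.active = {}             # vm_id -> task currently using it
        self._ids = itertools.count(1)

    def acquire(self, task):
        """Self-service: provision a VM immediately if capacity allows."""
        if len(self.active) >= self.capacity:
            raise RuntimeError("pool exhausted; wait for a teardown")
        vm_id = next(self._ids)
        self.active[vm_id] = task
        return vm_id

    def release(self, vm_id):
        """Tear the VM down so its capacity returns to the pool."""
        del self.active[vm_id]

pool = VmPool(capacity=2)
build_vm = pool.acquire("nightly-build")
test_vm = pool.acquire("integration-tests")
pool.release(build_vm)                 # freed capacity is reusable at once
deploy_vm = pool.acquire("staging-deploy")
```

The point of the sketch is the contrast with static virtualization: no developer owns a machine, no one guesses capacity up front, and a released VM is immediately available to the next task.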
We’ve seen the benefits of dynamic virtualization in our own office. Before, Electric Cloud needed several physical machines for each developer. By using dynamic virtualization, we now measure in developers per machine rather than machines per developer. The result is more flexibility, because developers no longer have to wait for a physical machine to become available.
Choosing the right automation platform
For some organizations, a traditional build/continuous integration tool like CruiseControl or Hudson/Jenkins is a fit, but these tools are often limited by the need to run on the same machine as the work they coordinate, which constrains their ability to provision resources dynamically. In addition, such tools store their configuration and metadata in the file system rather than a database; this can be a barrier to scalability, reliability and high availability. It also makes it difficult to use traditional reporting tools to analyze the data.
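The reporting point is easy to see in miniature: once build metadata lives in a database, a standard SQL query replaces ad-hoc parsing of log and config files. This sketch uses an in-memory SQLite database; the schema, column names and sample rows are invented for illustration:

```python
import sqlite3

# In-memory database standing in for an automation platform's metadata
# store (schema and data are hypothetical, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE builds (
    id INTEGER PRIMARY KEY,
    branch TEXT,
    duration_sec INTEGER,
    passed INTEGER)""")
conn.executemany(
    "INSERT INTO builds (branch, duration_sec, passed) VALUES (?, ?, ?)",
    [("main", 420, 1), ("main", 450, 0), ("release", 600, 1)])

# A reporting tool can now ask questions that are painful against a
# pile of per-build files on disk:
avg_duration, passes = conn.execute(
    "SELECT AVG(duration_sec), SUM(passed) FROM builds WHERE branch = 'main'"
).fetchone()
```

With file-system storage, answering even this simple question means crawling directories and parsing each build record; with a database, it is one query that any reporting tool can issue.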
We’ve found tools like VMware vCloud Director essential because they let us manage virtual machine images and group them into multi-machine configurations. For example, a configuration with a server machine, a client machine and a database machine can be