Deployment is the New Build


So what were the steps that actually made this transition possible? Looking at the evolution of Java build tooling over the past decade or so, there have been at least three main developments:

  1. Reusable commands
  2. Models
  3. Conventions++

Reusable commands
The first step, epitomised by Apache Ant, was the recognition that most of the low-level tasks or actions are the same whatever application is being built, whether calling a compiler, copying files or replacing placeholders.

Rather than copy-pasting the same OS commands into every new build script, we encapsulated these commands as libraries of reusable components that only needed to be written once.
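As a minimal sketch of what this looked like in practice (the project, directory and token names here are invented for illustration, not taken from any particular build), an Ant build file replaces hand-written OS commands with reusable built-in tasks, and the `depends` attribute already hints at the recurring 'chunks' discussed below:

```xml
<!-- build.xml: an illustrative minimal Ant build; all names are hypothetical -->
<project name="example" default="jar">

  <!-- reusable built-in tasks instead of copy-pasted shell commands -->
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes"/>
  </target>

  <!-- a recurring 'chunk': copy static resources and replace placeholders -->
  <target name="resources">
    <copy todir="build/classes">
      <fileset dir="resources"/>
      <filterset>
        <filter token="VERSION" value="1.0"/>
      </filterset>
    </copy>
  </target>

  <!-- sequence the chunks into a higher-level build activity -->
  <target name="jar" depends="compile,resources">
    <jar destfile="build/example.jar" basedir="build/classes"/>
  </target>
</project>
```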

Further, we discovered that certain patterns of step sequences would appear in many different builds. These 'chunks', such as constructing a classpath, or copying and processing static resources, evidently represented some higher-level build activity with a distinct function.

Whilst the realisation that all the actions carried out in a build are basically a sequence of common chunks was an important start, the next big advance came from recognising that we weren't just seeing repeated patterns of actions: the types of data these actions were working on were also shared.

This gave rise to the notion of a true domain model for application builds, with source sets, resource sets, modules, dependencies and so forth: concepts originally introduced by Maven that have featured in build systems ever since.

Combining the sequence of common chunks with the new domain model that structured the data being processed gave rise to the notion of distinct phases, in which parts of the build model are generated, prepared and made available to subsequent commands.

Maven also supported the idea that the domain model, and thus the build phases, would need to vary slightly to accommodate the different types of Java artifacts being delivered, such as JARs and EARs. This has subsequently been developed further to support builds of entirely different technologies, such as Adobe Flex applications.

An additional benefit of a domain model for builds was the ability to make use of default values in a structured way, for instance for the names of built artifacts or the locations of resource files.
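To sketch how far these defaults go (the coordinates below are hypothetical), a Maven POM can declare almost nothing beyond its own identity, and the model fills in the rest: sources in `src/main/java`, resources in `src/main/resources`, and the artifact name derived from the coordinates. Note that the module itself and its dependencies are identified by the same `groupId:artifactId:version` scheme:

```xml
<!-- pom.xml: an illustrative minimal POM; coordinates are hypothetical -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- the groupId:artifactId:version coordinates of this module -->
  <groupId>com.example</groupId>
  <artifactId>example-app</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>

  <!-- dependencies are named with the same coordinate scheme -->
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
<!-- by convention alone: sources in src/main/java, resources in
     src/main/resources, output in target/example-app-1.0.0.jar -->
```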

However, the flip side of this convenience, certainly in combination with XML as a build descriptor language, was that deviating from these standards could be quite a challenge. This was certainly true if the aim was to extend the domain model in some way, or to support a language or technology whose build flow differed substantially from Java's, such as building documentation bundles or virtual machine images.

This has led to a generation of build tools, such as Gradle, that aim to restore the developer to a position of full control in which arbitrary actions can easily be defined and organised into phases, tasks and entire builds. Of course, given how used we have become to the convenience of "it just works" in simple cases, these tools still support the full domain models and conventions of common technologies such as Java.
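A sketch of what this looks like in Gradle's Kotlin DSL (the task name and output file below are invented for illustration): the Java conventions still 'just work' via a plugin, while arbitrary developer-defined actions can be organised into tasks and wired into the conventional build flow alongside them:

```kotlin
// build.gradle.kts: an illustrative sketch; task and file names are invented
import java.time.Instant

plugins {
    java  // pulls in the full Java domain model and conventions
}

// an arbitrary, developer-defined action organised as a task
tasks.register("generateBuildInfo") {
    val outputFile = layout.buildDirectory.file("build-info.txt")
    outputs.file(outputFile)
    doLast {
        outputFile.get().asFile.writeText("built at ${Instant.now()}")
    }
}

// wire the custom task into the conventional Java build flow
tasks.named("processResources") {
    dependsOn("generateBuildInfo")
}
```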

Who'd have thought?
Reviewing this progression from today's perspective, a couple of facts stand out that, certainly in comparison to other evolutions in IT, are quite surprising.

Firstly, whilst it's now hard to imagine specifying a dependency using anything other than the groupId:artifactId:version pattern, none of the models or conventions that developed were formalised in industry standards. Instead, they were either based on observations of common patterns, or simply clever or even somewhat arbitrary choices ('src/main/java', for instance).

Secondly, we have seen how ease-of-use


AgileConnection is a TechWell community.
