On Sumo, Architecture, and Enterprise Agile

Here is a riddle for you: What is so strong and solid that it is hard to move, yet can move easily at the same time? The answer: a champion sumo wrestler. The other answer: well-thought-out application architecture.

Why are we talking about 400-plus-pound Japanese athletes in the context of agile software development? Let me explain. In order to be successful in the ring, a sumo wrestler needs to maintain a heavy body weight and at the same time be in peak physical condition. Just as these athletes have to find the right balance through a well-thought-out regimen of diet and training, software development organizations need a balanced approach to application architecture on agile projects.

System Design: The Winning Technique
Proponents of agile development oppose pursuing a thoroughly worked-out architecture in advance (the so-called “Big Design Up Front,” or BDUF) as a feature of the traditional “waterfall” approach, the paradigm that agile was essentially created to challenge and overturn. In waterfall, the design phase of a software system follows the development of functional specifications and precedes coding. The Agile Manifesto opposes this approach and implies that self-organized teams are responsible for creating the best system design through a series of incremental steps. As the system evolves iteratively, the architecture, in turn, matures gradually throughout the development cycle. Over time, the team members’ knowledge of the business and the technology grows, and they continually review and refactor the architecture. Because in most real-world situations the customers’ idea of what they want from the system will change over time in response to changing market conditions and their own evolving understanding, this process serves the goal of giving customers the software they really want rather than simply a masterpiece of elegant, up-front design with insufficient business value.

This approach seems reasonable in that it just doesn’t make much sense to invest too heavily in architecting for system behavior that can change significantly over the life of an application. Moreover, doing so is likely to give developers significant grief as they try to preserve the original design and simultaneously cope with evolving requirements. In fact, it can result in so many complex workarounds that the code may eventually become too convoluted to understand, even for the developers who wrote it.

On the other hand, if absolutely no high-level architectural decisions are made at the start, you run the risk of making the code so complicated that it will be almost impossible to maintain and extend. This would be an acceptable approach if all you want to do is whip up something relatively simple and then throw it away (e.g., a prototype or proof of concept), but attempting to build an actual shippable product this way would be a bad idea.

As an example, consider a client who wants a prototype of a web shop with a simple product database, a homepage, product pages, a shopping cart, plus integrations with a payment server and a shipping service. You could implement all of that in, say, a couple of weeks as ASPX code-behind files, without being concerned about such things as separating presentation logic from data access and without comprehensive testing (writing tests will be particularly challenging in this case).
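
To make the trade-off concrete, here is a minimal sketch of what such a code-behind file might look like; the page, control, and column names are hypothetical, but the pattern, with data access, business rules, and presentation all crammed into one event handler, is the point:

    // ProductPage.aspx.cs -- hypothetical code-behind in which data access,
    // business logic, and presentation are all mixed together.
    using System;
    using System.Data.SqlClient;

    public partial class ProductPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // SQL embedded directly in the page: quick to write, but it will
            // be duplicated on every other page that needs product data.
            using (var conn = new SqlConnection("Server=...;Database=Shop"))
            using (var cmd = new SqlCommand(
                "SELECT Name, Price FROM Products WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id",
                    int.Parse(Request.QueryString["id"]));
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        // Presentation logic lives in the same method, too:
                        // NameLabel and PriceLabel are controls on the page.
                        NameLabel.Text = (string)reader["Name"];
                        PriceLabel.Text =
                            ((decimal)reader["Price"]).ToString("C");
                    }
                }
            }
        }
    }

Every new page repeats this boilerplate, and every business rule is trapped inside a page lifecycle event, which is exactly what makes such a codebase expensive to extend later.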

After your team finishes all the workarounds and patches, you will have a prototype that can be demoed to the customer, and at this point the temptation to continue building on top of that codebase instead of discarding the prototype and starting over may be great; after all, you have just spent time and money building the thing, and it seems to work fine! However, that would be a recipe for disaster. As soon as you start adding new features like suggested products, user-defined search filters, product reviews, etc., the number of similar database queries will balloon, and you will need to replicate the logic across multiple pages as well as make numerous changes to the code in many places at once. This will lead to more defects, the entire application will have to be re-tested, and the cost of each subsequent change will continue to go up, completely defeating the initial purpose.

So if the no-architecture approach is not likely to work well for developing large systems, and BDUF goes against the agile paradigm, how much architecture do you have to do up front to get an enterprise agile project right?

Wrestling Maintenance Costs Down
A 2011 Gartner paper on the definitions of “success” in application projects noted the growing demand for maintainable code and for tools to evaluate its maintainability as an important trend in the world of custom application development (AD). Particularly when organizations engage outside firms to develop custom applications, it is important for the customers to describe the successful outcome of the project in the actual contract. Generally, contracts for outsourced software development have tended to define successful delivery as the satisfactory completion of functional user acceptance tests (UAT). However, even if the application is able to pass UAT, it may still be coded to poor design standards and be exceedingly complex, often making it very expensive to support and maintain as well as being too costly and slow to modify as business requirements evolve.

Today, the way companies approach the calculation of outsourced software development costs is changing. Ten to fifteen years ago, the costs of a solution were understood to consist mainly of actually coding the desired functionality plus other expenses such as software licenses, hardware, training, etc. The reality, however, is that maintenance is very often responsible for a large share of the total cost of ownership. Today’s buyers of outsourced AD services are taking into account the total costs of developing, running, maintaining, and extending the application over the course of its life. One reason for this change is the rapid pace of change (e.g., new platforms, frameworks, and tools) in the IT world, which requires software to be continually adaptable in order to remain valuable to the business. The universally growing demand for cloud accessibility and cross-platform support is a vivid example. It is now the industry norm that system development stops only at the end of a product’s life.

To anticipate future costs of supporting and enhancing a system after the initial release, companies are now increasingly using metrics known as nonfunctional requirements. In cases where a third-party application development firm is hired to deliver a system, metrics like test coverage, code complexity, component coupling, response time, HTML page size, maximum number of simultaneous users supported, etc., may be written into contracts and become binding on the vendor. On the other hand, these same code and performance metrics, if carefully considered by the development team before the start of implementation, should lead the team to make effective architectural decisions and select the patterns that will likely remain in place until the end of the product’s life.

A Regimen for Flexibility
Keeping these nonfunctional requirements in mind can help guide the thinking of developers and architects in the initial planning stages of an application development project. Although there are no hard-and-fast rules for this type of process, the following questions can serve as guideposts in making your up-front design decisions:

1. How easy is it to change the application’s business logic?
Remember that the domain model will most likely continue to evolve throughout the life of the system. Because of this, when designing the application you should reduce the dependencies between objects as much as possible (e.g., with the help of approaches like dependency injection) and group similar features within the same components to make testing easier.
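
As a minimal sketch of the dependency injection idea (the interface and class names here are hypothetical), the business logic below depends only on an abstraction, so a concrete policy can be swapped out, or replaced with a stub in a unit test, without touching PricingService:

    // The business logic depends on an abstraction...
    public interface IDiscountPolicy
    {
        decimal Apply(decimal price);
    }

    // ...while concrete rules live in separate, easily replaced classes.
    public class SeasonalDiscountPolicy : IDiscountPolicy
    {
        public decimal Apply(decimal price)
        {
            return price * 0.9m; // 10 percent off during a sale
        }
    }

    public class PricingService
    {
        private readonly IDiscountPolicy _discountPolicy;

        // Constructor injection: PricingService never instantiates a
        // concrete policy itself, so tests can pass in a fake one.
        public PricingService(IDiscountPolicy discountPolicy)
        {
            _discountPolicy = discountPolicy;
        }

        public decimal FinalPrice(decimal listPrice)
        {
            return _discountPolicy.Apply(listPrice);
        }
    }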

2. Are you locking yourself and your team into a specific data source?
There is always a chance that the DBMS originally chosen will not remain in place for the life of the system; for example, MS SQL may be replaced with MySQL, or a requirement might come up to fetch data from external web services or XML files. If your domain objects know how to read and write data for a specific database, then adding another data source will require a developer to update all of them. To avoid this, a data access layer should separate the business model from data sources. You can use data mappers (such as Entity Framework or NHibernate) or adapters to populate entities.
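
A minimal sketch of that separation (all names hypothetical): the business model depends only on a small repository interface, and each data source gets its own implementation behind it.

    using System.Collections.Generic;

    // Domain entity, unaware of how it is stored.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The data access layer boundary: the business model depends
    // only on this interface, never on a concrete DBMS.
    public interface IProductRepository
    {
        Product FindById(int id);
        IEnumerable<Product> FindAll();
    }

    // One implementation per data source; an MS SQL, MySQL, web service,
    // or XML variant would each be a separate class behind the interface.
    public class InMemoryProductRepository : IProductRepository
    {
        private readonly Dictionary<int, Product> _store =
            new Dictionary<int, Product>();

        public Product FindById(int id)
        {
            Product product;
            return _store.TryGetValue(id, out product) ? product : null;
        }

        public IEnumerable<Product> FindAll()
        {
            return _store.Values;
        }
    }

Swapping MS SQL for MySQL then means writing one new class rather than touching every domain object.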

3. How is data presented to end users?
End users can include not only human beings but also other tools that consume your system’s web services, so I’m referring to data presentation in general. Because the trend today is clearly in favor of building more web- and mobile-oriented applications that support various browsers and platforms, you should isolate presentation logic from the business model. It may make sense to use an intermediate service layer that defines the system’s common operations and is consumed by the particular interface implementations. The MVC pattern is one commonly used approach.
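
Continuing with hypothetical names (and reusing the Product entity from the earlier sketch), the service layer defines an operation once, and an ASP.NET MVC controller is just one thin consumer of it; a mobile API could be another:

    using System.Web.Mvc;

    // The service layer defines the system's common operations once...
    public interface ICatalogService
    {
        Product GetProduct(int id);
    }

    // ...and each presentation channel consumes it. This controller only
    // maps a request to a view; no business rules live here.
    public class ProductController : Controller
    {
        private readonly ICatalogService _catalog;

        public ProductController(ICatalogService catalog)
        {
            _catalog = catalog;
        }

        public ActionResult Details(int id)
        {
            return View(_catalog.GetProduct(id));
        }
    }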

4. How complicated is it for your team to create tests for new or existing components?
Complexity usually results from spreading business logic across all tiers of the application as opposed to concentrating it in dedicated business-logic components. If the system implements complex workflows or if there are a lot of dependencies on data that are difficult to mock (and thus write unit tests for), then specialized integration testing tools may come in handy (BDD tools like Cucumber or SpecFlow could be a good fit).
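
For instance, a SpecFlow scenario written in plain language binds to small step methods in C#; the scenario text and cart logic below are hypothetical, and the step class assumes the SpecFlow and NUnit packages:

    using TechTalk.SpecFlow;
    using NUnit.Framework;

    // Scenario (in a .feature file):
    //   Given an empty shopping cart
    //   When the customer adds 2 items
    //   Then the cart contains 2 items

    [Binding]
    public class ShoppingCartSteps
    {
        private int _itemCount;

        [Given("an empty shopping cart")]
        public void GivenAnEmptyShoppingCart()
        {
            _itemCount = 0;
        }

        [When(@"the customer adds (\d+) items")]
        public void WhenTheCustomerAddsItems(int count)
        {
            _itemCount += count;
        }

        [Then(@"the cart contains (\d+) items")]
        public void ThenTheCartContainsItems(int expected)
        {
            Assert.AreEqual(expected, _itemCount);
        }
    }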

In general, there is no doubt in my mind that the agile principle of “you aren’t gonna need it” (YAGNI) is valid. Keeping system design as simple as possible is a good idea, as is deferring design decisions until the time they actually need to be made. For example, XML parsing logic for fetching data should not be implemented until this requirement actually appears in the sprint backlog. And when it’s time to code it, the right architecture will allow you to simply add a connector for the new data source without needing to modify the rest of the application.
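
To continue the earlier repository sketch (again, the names and XML format are hypothetical), supporting the new source then amounts to adding one class behind the existing interface; nothing that consumes IProductRepository has to change:

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    // Added only once the XML requirement reaches the sprint backlog:
    // a new connector behind the existing abstraction.
    public class XmlProductRepository : IProductRepository
    {
        private readonly string _path;

        public XmlProductRepository(string path)
        {
            _path = path;
        }

        public Product FindById(int id)
        {
            return FindAll().FirstOrDefault(p => p.Id == id);
        }

        public IEnumerable<Product> FindAll()
        {
            // Expects a simple file like:
            // <products><product id="1" name="Widget" /></products>
            return XDocument.Load(_path)
                .Descendants("product")
                .Select(e => new Product
                {
                    Id = (int)e.Attribute("id"),
                    Name = (string)e.Attribute("name")
                });
        }
    }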

A Matter of Balance
So, yes, you can (and should) do some architecture before starting development. And no, you should not attempt to predict all possible use cases. Rather, the objective of the design effort should be to build enough robustness into the system to reduce the complexity and cost of the future changes required to keep the software valuable to customers with continually evolving needs. In other words, you need to have enough structure to be strong, yet remain agile, like a true yokozuna. (And you win extra points if you didn’t have to Google “yokozuna.”)
