Ten Application Lifecycle Management (ALM) "Best" Practices

In his CM: the Next Generation series, Joe Farah gives us a glimpse into what CM experts will need to tackle and master, based upon industry trends and future technology challenges.

In this article, I'll focus on application lifecycle management (ALM) "best" practices.  I've listed them below and then go into some detail on each.  Hopefully you'll find these to be key factors to consider for your own environment.

    1. Look at what processes and tools are available and how they fit your environment
    2. When selecting a tool, use the vendor for free training
    3. Don't be afraid to change your configuration management processes and/or tools
    4. Look at the full application lifecycle management problem, not just configuration management
    5. Use agile software configuration management methods
    6. Use role-based interfaces and dashboards
    7. Perform interactive build comparisons
    8. Pay attention to backups, recovery and availability strategies
    9. Use multiple site solutions that span the entire application lifecycle management spectrum
    10. Unit testing and peer review of changes

1.  Look at What Processes and Tools are Available

How long has it been since you had a good look at what CM/ALM technology and tools are available?  As CM matures, CM processes mature, and the tools evolve to support these processes.  It's true that it's a royal pain (in the wallet and in resources) to adjust some tools to maturing processes.  It's also true that any change within the tool framework is generally not well received by the user base.  However, with 3rd and 4th generation CM technology comes both an ease of customization and a real focus on ease of use.  This will help to address both cost and user concerns.  On top of that, operational overheads/costs can be significantly decreased, freeing up your CM team to provide better customization.

Many tools do not evolve rapidly.  Perhaps they focus on simplifying administration or on improving performance.  But there are a few tools out there that evolve very rapidly compared to the rest of the industry.  As well, there are a number of new tools on the market.  If your needs are modest, new open source offerings may be able to provide much of the capability you're currently paying for.  But, as license costs are such a small portion of the total cost of ownership (TCO), I would strongly recommend you look beyond open source.

2.  When Selecting a Tool, Use the Vendor for Free Training

As you go through the tool selection process, identify key new features that you haven't seen before.  This is a great time to upgrade your CM technology knowledge.  If vendors want you to choose their tools, not only will they tell you all about their nifty new capabilities, but they'll also give you ammunition to shoot down claims by their competition.  In other words, they'll provide you with the right questions to ask to identify the weak spots of their competition.  Perhaps you shouldn't wait until you're ready to procure new technology to talk to vendors.  Many are willing to share their expertise with you so that you'll consider their solutions when you are ready.

This is an opportunity for free training in state-of-the-art CM technology.  And as many vendors are process centric at this point, your CM process skills may be broadened somewhat as well.  Now, I've been in the CM industry for better than 30 years and I'm still broadening my reach.  It's important to understand what capabilities there are and what trends you see so that you don't lock yourself into 2nd generation, or even early 3rd generation, solutions.

3.  Don't be afraid to Change Your CM Processes and/or Tools

We touched on this in the first point - resist the tendency to resist change.  People are really used to change, especially as technology advances accelerate.  They actually like change, when it benefits them.  Why do you think people have moved from command line interfaces to GUIs, or from early cell phone + PDA +... to smartphones?  They chose to move because of the benefits.

Look around your shop.  Identify what is ailing your developers, your build team, your testers, anybody and everybody involved in your product development.  If you can go up to them and offer them a solution that is easy to use, easy to learn, and not only addresses their concerns but throws in some neat new functionality as well, I'm sure their ears will be open.  Get your vendors to help you to do the sales job.  And make sure that it ends with: "and we'll save loads of effort, time and money."

When you look around, cast your glance wide.  Perhaps you've used three CM tools, and you look at them and find that they're still basically the same with just some nice window dressing - not enough to cause you to switch.  Well then you're missing a number of other tools that will give you more than enough reason.  However, if you find that the evaluation process for a tool is complex, perhaps you should take this as a warning sign of tool complexity and just move on to the next.

4. Look at the Full ALM Problem, Not Just CM

There are a lot of focused tools out there that deal with version control (VC), or perhaps VC plus Change Management and Build Management, or perhaps they even add in Problem Tracking.  There are a lot of good tools in this area.  But these tools will support a small portion of the product team.  Your product team is everyone who has anything to do with your product.  Customers (usually through customer reps) need to give their input.  Testers need to track what they're testing against specific configurations by looking at the new features, the problems addressed, etc.  Requirements traceability needs to flow easily.

If you look at CM in isolation from these other users and their tool requirements, you will be limiting the scope of your solution.  More than that, you will be limiting the capabilities of your solution.  If I can track exactly what release each customer has installed, and compare the current release to that customer's release with a single click, it may be a lot easier to identify whether the problems they're having have already been addressed.  If my requirements and testing are addressed by the same management tool (and repository) as my version control and build management, not only am I going to be able to navigate traceability links more easily, but I won't have to maintain the glue that integrates all of my tools together.  That in turn will allow me more flexibility in customization.  Don't forget about less administration, less training, easier upgrades and probably lower licensing costs too.

Developers are often quick to dump on CM teams.  But give them an end-to-end tool that meets their needs and perhaps the dumping will stop - and perhaps the rest of the product team will be able to communicate more easily with core development, by virtue of the fact that they have access to the same repository of data.  That's where the team building starts, and that's when CM starts to be recognized as a backbone business and communication technology, rather than as a developer tool.

5.  Use Agile CM Methods

I mentioned that easy customization was important.  Well, here's one reason - your CM processes, as well as your development processes and project management processes, need to become more and more agile over time.  Let's focus on what we need, when we need to, but not at the expense of quality, information capture, security or anything else for that matter.  An Agile CM process is not one that has the minimum process; it's one that minimizes overhead while maximizing benefit.  It should support Agile Development, but it should also support more Traditional Development.  The idea is that CM shouldn't get in the way, and neither should the tools.  Automation of good process will move CM toward this goal.

I see CM tools out there that require you to branch when you shouldn't have to (and then of course you have to merge and retest).  Some make you use different tools for version control and for change management.  Some have minimal, or even no, change packaging.  Some force you, or someone else, to update the status of a problem or feature in a different database, or as a separate action, even though the action you've already performed implies the status change.  Some let you work your way into trouble if you miss a key step.  These are things that make CM tools and processes non-agile.
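
As a rough illustration of what automating that last point might look like, here's a minimal sketch of a post-check-in hook that moves a problem report forward based on the check-in the developer has already performed.  The issue-ID convention, the endpoint URL and the status name are assumptions for the sake of the example, not features of any particular tool:

    # Hypothetical post-commit hook: infer the status change from the work
    # already performed, instead of forcing a second manual update elsewhere.
    # Assumes commit messages reference a problem report as "ISSUE-123" and
    # that the ALM tool exposes a REST endpoint for status transitions.
    import json
    import re
    import subprocess
    import urllib.request

    ALM_URL = "https://alm.example.com/api/issues"   # hypothetical endpoint

    def latest_commit_message() -> str:
        return subprocess.check_output(
            ["git", "log", "-1", "--pretty=%B"], text=True)

    def transition_issue(issue_id: str, new_status: str) -> None:
        payload = json.dumps({"status": new_status}).encode()
        request = urllib.request.Request(
            f"{ALM_URL}/{issue_id}/status", data=payload,
            headers={"Content-Type": "application/json"}, method="POST")
        urllib.request.urlopen(request, timeout=10)

    message = latest_commit_message()
    match = re.search(r"\b([A-Z]+-\d+)\b", message)
    if match:
        # The check-in itself implies the work has moved forward, so record
        # the status change automatically rather than asking anyone to do it.
        transition_issue(match.group(1), "in-review")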

If I have to manually tag items because the tool didn't capture information based on what I was doing, not only am I wasting time, but I'm potentially dragging quality down.  If I have to tell my team to hold off checking in code because of my tools and processes, I'm impacting schedules and effort.  This is not what Agile is made of.

Agile CM should support traditional project management, but it should also drive priority-based feature and problem fix scheduling.  It should allow team members to look at what needs to be done next without having to wait for a meeting that may have been delayed because someone got sick or was caught in a traffic jam.  An agile CM tool is one that will improve communication considerably by allowing the data to do most of the talking: approvals, assignments, status updates, etc.  An agile CM tool needs to be able to provide the right information with little or no guidance.  If it takes me 2 days to compare the current release functionality to the previous, that might be OK for the technical writers preparing release notes, but it's not going to help the developer who needs to identify whether a problem has been fixed between the two releases, nor the one who is trying to track down when a problem was introduced so that (s)he can review the change delta to more rapidly pin down the cause.

So, the bottom line here is:  move to a more agile CM process.  Make your tool move with you or leave it behind in favor of one that's up to the task.

6.  Use Role-based Interfaces and Dashboards

I've seen three types of CM user interfaces in my day: command-line, GUIs that cover the entire tool, and role-based GUIs.  While experts in a tool may tend to cling to a command-line interface (CLI), the CLI should really only be used to customize the real user interface.  Software programming will never disappear, but if you still needed to understand programming and scripting in order to use a computer, we'd see about 1% of the computer penetration in society that we see now.  Thank God for GUIs, which I admit I resisted for a long time.  But I don't have to read manuals any more, at least if my tool's GUI is focused on my needs rather than trying to encapsulate all the capabilities of the underlying technology.  I don't want the GUI to expose everything in the tool to me - just what I need.  That's where role-based GUIs come in.  These are important.  Don't make a tech writer understand configuration management.  Don't make a tester understand ... OK, testers need to understand a lot... but not so much project management or makefiles.

If your shop doesn't have interfaces that are tailored to each role, you're going to waste a lot of money training people what not to do, and searching for how to do what they want to do.  What are the things a developer does?  Those should appear in the developer interface.  What about a CM Manager?  How about a Product Manager?  A Project Manager?  A Tester?  They all have different roles.  Yes, the same tool can help them all if it's well designed.  Each should have their own set of Inboxes/To-Do Lists specific to their roles.  Each should have the ability to add and modify the data for their role, and to navigate it easily.

In recent years, the concept of a dashboard has arisen.  Dashboards are nice - they summarize what you need to know, and you can even zoom-in to details with most of them.  But as your responsibility increases, your dashboards can get cluttered.  The solution is role-based dashboards.  Let me look at the info from the perspective of this role, then that, then another.  In fact, as dashboards have evolved, a few things are obvious to me:

  • I need to customize my dashboards (and I don't want to spend a lot of time doing it).
  • Dashboards can be used to reduce clicks and to reduce the time I spend looking for the right menu buttons.
  • Dashboards are designed to show information, but I often want to use them as Work Stations.

What's a Work Station?  It's basically a dashboard of information from which I can actually do work.  For example, I might have a Change dashboard that lets me zoom into changes to see the details of each change.  But maybe I'd like to use the same dashboard to do peer reviews, or, as a developer, to do my checkouts and checkins from, and more.

The ultimate Work Station for me is a Meeting Work Station.  It's a work station dashboard that I can use to run a meeting.  For example, a Problem Review Board can use a Meeting Work Station to present the Problems being reviewed, to zoom-in in a priority-based fashion, to add decisions, approvals, comments, etc.  Even to assign and re-prioritize the problems.  Similarly for a CRB (Change Request Board), a quality review board, a project management team, etc.  Design the work station to make the meetings run smoothly while at the same time ensuring that all information is immediately captured.

When looking at how you're using your current tool, or when planning your next tool, consider role-based user interfaces, with role-based dashboards and work stations, that are easy to customize (and easy to build in the first place).  Resist the temptation to assume that a tool vendor already knows what you need and has hard-coded the dashboards for you.  "What you need" will change over time, more rapidly than you know.
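
To make the idea of customization concrete, here's a minimal, purely illustrative sketch of a data-driven approach to role-based dashboards: each role gets its own set of panels and inboxes, and tailoring them is an edit to data rather than a coding exercise.  The roles, panel names and query syntax are assumptions for the example, not any particular tool's schema:

    # Data-driven, role-based dashboards (illustrative only): each role sees
    # only its own panels/inboxes, and customization is an edit to this data.
    ROLE_DASHBOARDS = {
        "developer": [
            {"panel": "My assigned changes", "query": "assignee = me AND state = open"},
            {"panel": "To be reviewed",      "query": "reviewer = me AND state = in-review"},
        ],
        "tester": [
            {"panel": "Ready for test",      "query": "state = built AND verified = no"},
            {"panel": "My open problems",    "query": "originator = me AND state != closed"},
        ],
        "project manager": [
            {"panel": "Slipping features",   "query": "due < next-milestone AND state != done"},
        ],
    }

    def panels_for(role: str) -> list[dict]:
        """Return the dashboard panels a given role should see."""
        return ROLE_DASHBOARDS.get(role, [])

    # Example: build the tester's view.
    for panel in panels_for("tester"):
        print(panel["panel"], "->", panel["query"])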

7.  Interactive Build Comparisons

One dashboard, and associated query capability, that is crucial to any CM environment is that of build comparisons.  Build comparisons are a frequent and important capability that can answer questions such as:

  • What problems are going to be fixed for a customer if they upgrade to a new build?
  • What changes went into the build that might have caused a previously working feature to fail?
  • What new functionality are we delivering in this release (as compared to our last release)?
  • What changes have caused the latest build to fail?
  • What changes have to be made to the customer repository on delivery of a new build?

These questions are asked by developers, CM managers, build teams, customers, documentation teams, testers, management, and more.  These are important questions, and I'm sure you can add a dozen or so more related to the comparison of two builds (or any context view and a build, for that matter).

Build comparisons in a VC tool tell you what code changed.  In an ALM system, that's one option, but there are many others: problems fixed, features addressed, requirements satisfied, additional testing completed, new files added, customer configuration files to be removed, and so on.

It is not sufficient to ask these questions only when you're ready to deliver a new release.  If that were the case, it wouldn't matter if it took a few minutes, a few hours or a few days to get the answers.  These are day-to-day queries that are used to tune your agile decision-making.  Your build comparison capabilities must be interactive and responsive so that they are used as part of a solid decision-making capability for all parts of your development team.  When a new problem arises, build comparison should be the first action used to narrow down the cause - not by looking at lines of code, but by first considering and narrowing down the list of potentially responsible changes, by looking at the functionality they're addressing.  Even better, as you're browsing through each of, say, a couple dozen changes, it would be useful if the delta for each of the changes were presented in case you needed to delve down deeper.  Good tools, with good performance and smart dashboards, can present this information.  Don't expect it from many of the older tools.
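
To make the comparison idea concrete, here's a minimal sketch of one such query, assuming builds are identified by tags in a git repository and that change descriptions reference problem reports by an ID convention such as "ISSUE-123".  In a true ALM repository the same answer would come from traceability links rather than commit-message scraping, and the build names here are hypothetical:

    # Group the changes between two builds by the problem report/feature they
    # reference, as a rough stand-in for an ALM-level build comparison.
    import re
    import subprocess
    from collections import defaultdict

    def changes_between(old_build: str, new_build: str) -> dict[str, list[str]]:
        """Map each referenced issue ID to the change summaries between builds."""
        log = subprocess.check_output(
            ["git", "log", "--pretty=%H %s", f"{old_build}..{new_build}"],
            text=True)
        by_issue = defaultdict(list)
        for line in log.splitlines():
            commit, _, subject = line.partition(" ")
            issues = re.findall(r"\b[A-Z]+-\d+\b", subject) or ["(untracked)"]
            for issue in issues:
                by_issue[issue].append(f"{commit[:8]} {subject}")
        return by_issue

    if __name__ == "__main__":
        # Hypothetical build labels; any two tags or branches would do.
        for issue, entries in sorted(changes_between("build-101", "build-102").items()):
            print(issue)
            for entry in entries:
                print("   ", entry)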

8.  Pay More Attention To Backups, Recovery and Availability Strategies

One area that often gets ignored when looking at CM/ALM tools is the whole area of downtime impact.  Many, if not all, CM tools have strategies for backing up your data.  Most have strategies for restoring it too, though these are not always as effective.  But it's really the downtime that's the issue, not specifically how effectively you can do backups.  Here are some things to consider in your CM/ALM administration of backups and availability:

  • Do backups require downtime?  Does that downtime affect both update and query availability?
  • Does your multiple site strategy compromise effective, complete, consistent backups?
  • Do backups take significantly more time as your repository grows in size?  Does this affect availability?
  • How quickly can you restore your environment if you need to resort to backups?
  • Are backups your first line of defense, or are there other capabilities that allow you to recover?
  • If you find that your backups are corrupt or inconsistent, do you have an alternate path forward?
  • When you do recover, do you lose the transactions performed since the last backup, or is it difficult to recover them?

Next generation CM environments allow multiple recovery paths, whether dealing with disk corruption, disasters or data sabotage.  Full traceability of changes should double as a capability to restore and then re-apply the changes.  Multiple Site strategies should double as disaster recovery capabilities and as an extra level of backup.  Most of your data is not changed day-to-day, or even month-to-month, so your backup durations shouldn't grow indefinitely as your repository does.
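
As a small illustration of that last point, here's a minimal sketch of an incremental backup pass that copies only the files changed since the previous run, so backup time tracks recent activity rather than total repository size.  The paths are hypothetical, and a real CM tool would also need to journal its metadata and in-flight transactions:

    # Incremental backup sketch: copy only files modified since the last pass.
    import shutil
    from pathlib import Path

    REPO = Path("/var/cm/repository")        # hypothetical repository location
    BACKUP = Path("/backup/cm/incremental")  # hypothetical backup target
    STAMP = BACKUP / ".last-backup"          # records when the last pass ran

    last_backup = STAMP.stat().st_mtime if STAMP.exists() else 0.0
    for source in REPO.rglob("*"):
        if source.is_file() and source.stat().st_mtime > last_backup:
            target = BACKUP / source.relative_to(REPO)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(source, target)     # preserves timestamps/permissions

    STAMP.parent.mkdir(parents=True, exist_ok=True)
    STAMP.touch()  # mark the completion time of this incremental pass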

There are other things to consider as well.  Like what happens if a new release of software corrupts a version file?  Do you have to duplicate the entire disk to protect against disk failure?  And perhaps a bigger question: how many people are affected by a server outage, and for how long?  What about outages for upgrades - does your information disappear for a few hours or days, or do users not even notice a blip in most cases?

What about all the other data that's on user disks, perhaps on laptops?  Are there strategies to easily back up data in workspaces or elsewhere?  Staging is one popular way of doing so, but does this staging clutter up the CM database with a lot of irrelevant data in between good data points, or is it done more effectively?

There's a lot to be covered here, and some of the best tools can have some of the worst levels of exposure in this area. Again, familiarize yourself with vendor technology and use this information as a lever against your current vendor to get your requirements met.

9.  Use Multiple Site Solutions That Span the Entire ALM Spectrum

Multiple site solutions facilitate global operation.  Older generation systems require partitioning and re-synchronizing of data, a painful and potentially administration-intensive operation.  Modern systems have a more automated approach, but sometimes at the expense of flexibility.  You'll have to look at some of my previous articles for a more detailed account of multiple site solutions for global operations.  But one key element I'd like to highlight here is that a multiple site solution is not much of a solution if it doesn't cover the entire ALM spectrum.  If it's a version control solution but leaves the rest of the data out of the picture, you've got a problem.

Or perhaps you have different tools with different multiple site capabilities for the different pieces of data being controlled.  You might be able to get this to work successfully, but more than likely, there's some consistency exposure, not to mention an extra level of administration to coordinate the multiple solutions. Whatever your solution, make sure your global development is covered by a consistent multiple site solution across all parts of your ALM function.

10.  Unit Testing and Peer Review of Changes

Finally, many people consider design and development practices separate from CM.  In many areas, this is not the case.  One area that is critical is the quality of what goes into the CM repository.  If garbage is going into the CM repository, you'll be dealing with a lot of roll-backs and other CM administration that goes with them.  Your development and CM processes must help both your product and its quality to move forward.

If you're not doing unit testing (where the "unit" is a change package, a.k.a. an update), you'll notice your quality dipping as changes are made.  Unit testing must be done by the developer before checking in software, or at a minimum, before checked-in software is marked ready for build integration.  If you think your organization or project has a good reason to avoid this, think again.  If there are roadblocks, remove them.  The cost of not doing so is simply too high.
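
One low-cost way to back this up with process automation is a check-in gate that runs the unit tests for the change and refuses the check-in if they fail.  Here's a minimal sketch, assuming a git pre-commit hook and a pytest-based test suite; your CM tool may offer its own hook point and test runner instead:

    #!/usr/bin/env python3
    # Check-in gate sketch: block the commit/check-in if unit tests fail.
    # Install as .git/hooks/pre-commit (or your CM tool's equivalent hook).
    import subprocess
    import sys

    result = subprocess.run(["pytest", "--quiet"])
    if result.returncode != 0:
        print("Unit tests failed - fix them before checking in this change.")
        sys.exit(1)   # a non-zero exit status blocks the check-in
    sys.exit(0)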

Along with unit testing, peer reviews of code are critical.  A well-groomed peer review process will be more effective than testing at discovering product quality problems, and at a much lower cost.  Peer reviews should not just review code changes, but should include a demonstration of the problem fix or new functionality, and should also review the unit testing for completeness and success.

CM/ALM tools come into play here by providing the means to record unit testing scripts and results.  These in turn can be used by the test teams to help ensure that their verification suites are updated, though developer test scripts should serve as an alternate accounting of the required tests to complement those developed by the verification team.  CM/ALM tools should also make it easy to do peer reviews.  It should not be necessary for all peer reviewers to meet at the same time to review the changes.  On-line reviews, with comments and responses, should be easily managed by the CM tool.  As well, it should be easy for architectural gurus, and others involved in multiple reviews, to perform multiple reviews in succession with minimal keystrokes.  The CM tool here can help by tracking reviewers, providing reviewers with To-Be-Reviewed in-boxes, and by providing effective dashboards to navigate change packages in these in-boxes.  It should also help by providing point-and-click traceability to the specs, problem reports or other data from which the changes were derived.
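
To show how little is really needed to record such a review, here's a purely illustrative sketch of the kind of data an on-line peer review might capture against a change package - comments and responses accumulate asynchronously, approvals are just more data, and traceability and unit test results ride along.  The field names and identifiers are assumptions, not any particular tool's schema:

    # Illustrative record of an asynchronous, on-line peer review.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ReviewComment:
        reviewer: str
        text: str
        response: str | None = None          # author's reply, added later
        when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @dataclass
    class PeerReview:
        change_package: str                   # the update/change being reviewed
        traceability: list[str]               # linked problem reports or specs
        unit_test_results: str                # recorded alongside the review
        comments: list[ReviewComment] = field(default_factory=list)
        approvals: list[str] = field(default_factory=list)

    # Hypothetical identifiers, for illustration only.
    review = PeerReview(
        change_package="update-1042",
        traceability=["PR-311", "REQ-27"],
        unit_test_results="12 passed, 0 failed",
    )
    review.comments.append(ReviewComment("arch-guru", "Consider caching this lookup."))
    review.approvals.append("arch-guru")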

So there you have it.  Another 10 CM Best Practices geared for Next Generation projects, processes and tools.  Combined with my previous "Best Practices" article, we've covered a lot of ground.  If you want your CM to be more effective, run through these practices, where applicable, and through the previous 20 I've referenced.  Let me know if you disagree with them, and I'll either try to convince you otherwise or change my mind.  If there are some that I've still not covered, don't be surprised, but let's hear from you so that I can cover them in the "reply" section or in a future column.
