The State of IT Change

[article]
Summary:
Through clear lines of communication and dedicated SCM processes, change in the IT world can occur without negatively affecting the company as a whole. By giving IT change the attention and focus it deserves, transitions become smoother and less costly than when change is handled shortsightedly.

We who work in IT know that all businesses can now be considered technology companies to some extent, from the large financial firms to the smallest hometown bank; from networking companies, manufacturers, utilities, and energy providers to retail, insurance, health care, and government agencies; right down to the local law firm and the mom-and-pop shop around the corner. This is due to their overwhelming reliance on technology every second of every day. If any of these companies were to lose their technology at any given time, they could stand to lose money (in some circumstances, millions of dollars or more).

It is estimated that 70 percent of all IT problems can be traced back to some type of change that occurred within the infrastructure or an application. Many firms take their change practices very seriously, or at least that is what you hear from IT and executive management and from those who take part in any given change process (CP): technicians responsible for server OS changes, or even a simple patch to the OS; developers who change the functionality of the applications used by their company or their user community; engineers tweaking links in the network infrastructure; and the customer engineers and vendors responsible for hardware changes to the systems themselves. All of these highly critical aspects of supporting an IT environment, and more (yes, there are many groups I missed), have a stake in making sure their changes never impact the systems and products they support; and since "never" is next to impossible, let us say they work to decrease those impacts.

Now, because of these changes to their small piece of the IT world (which in reality could impact the big picture of a complex, interdependent, big and scary world), even the smallest change to IT can break a functioning product that a firm provides to whoever uses it. Many firms say that successfully managing changes to their IT environments is their number one priority. But even those small changes often do not get the attention they so desperately require, because their true impacts are not documented or defined within a process or tool. We all know the big changes (a z/OS upgrade, a major server OS change) get some diligence simply because they are big with well-known impacts, but they too are the cause of many failures, even though the risks are known.

Analysts and engineers responsible for software configuration management (SCM) know all too well that many of the critical business-related changes flow from their repositories, making SCM what I consider the "foundation" of the change process. These SCM engineers also understand, more than most, that change is a dangerous business and that verifying impacts and other possible risks is a never-ending task. Hopefully the tools they have chosen can provide the right dependency reporting for each and every component or artifact being altered. This is not to lessen the criticality of the other groups of analysts or business process engineers supporting the higher-level change event management (CEM) process. They attempt to manage cross-functional impacts, change collisions, risks, and scheduling, while managing approvals and running weekly (nay, even daily) change meetings for every event taking place throughout the enterprise.

Where does it all begin? While I've said that SCM is the foundation for change, how a firm manages and tracks its assets is the "floor" of that foundation. Many times I have seen, or heard from, managers who believe their asset inventory is at best 80 percent accurate (some have admitted to being far below that). How can you track changes to technology when you don't even know what, or sometimes where, the component you are changing is? What is the true version of the OS, the microcode revision level, or the executable version of a given artifact or component? Making sure you can identify and track every component in the infrastructure that requires change is extremely critical: in other words, anything that is plugged into a power cord, connects or communicates with a network, runs an OS, or is loaded with microcode, and let us not forget the source code and linked libraries that get turned into binaries and executed, whether as lower-level code bound or linked to the end programs or as the applications themselves. Having the ability to query a database of each and every asset, which you hope in turn feeds your CEM tool, means that all changes to any identifiable component can be tracked. Linking these dependent components within your asset database or your CEM tool enables you to identify collisions, cross-functional or customer impacts, and their associated risks.
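To make that linking concrete, here is a minimal sketch in Python, with hypothetical names and fields, of an asset record that carries its dependency links, plus the walk over those links that surfaces everything a change could touch:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One tracked component: a server, NIC, OS image, library, binary, and so on."""
    asset_id: str
    kind: str                 # e.g., "server_os", "microcode", "binary"
    version: str
    location: str
    depends_on: list[str] = field(default_factory=list)  # asset_ids this one relies on

def impacted_by(change_target: str, inventory: dict[str, Asset]) -> set[str]:
    """Walk the dependency links to find every asset that could feel a change."""
    impacted: set[str] = set()
    frontier = {change_target}
    while frontier:
        current = frontier.pop()
        for asset in inventory.values():
            if current in asset.depends_on and asset.asset_id not in impacted:
                impacted.add(asset.asset_id)
                frontier.add(asset.asset_id)
    return impacted
```

Nothing here is tied to a particular product; the point is only that once dependencies are recorded as data, impact analysis becomes a query rather than a guess.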

So you think asset collection and tracking is hard to do? As the saying goes, "Anything worth doing is worth doing right," and the benefits far outweigh the cost and time required to ensure a proper accounting of your assets. Once that accounting is complete and the right data is captured (what that data is, is critical too), you can pull metrics on any aspect of your infrastructure, and these reports can provide management and everyone responsible for IT with a wealth of vital information: what revision of microcode every NIC is running, what patches or versions of an OS every server, desktop, and laptop in your infrastructure runs, and of course what versions of code your client-facing applications are running. The list goes on and on, and the benefits are clear.
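Continuing with the hypothetical inventory sketched above, a question like "which OS versions are deployed, and on how many machines?" reduces to a one-line query once the data is captured:

```python
from collections import Counter

def version_report(inventory: dict[str, Asset], kind: str) -> Counter:
    """Count how many assets of a given kind run each version, e.g., OS patch levels."""
    return Counter(asset.version for asset in inventory.values() if asset.kind == kind)

# Hypothetical usage:
# version_report(inventory, kind="server_os")
# -> Counter({"z/OS 2.5": 14, "z/OS 2.4": 3})
```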

Once a full accounting of all your assets is complete, all you need is a sound process to ensure that new hardware, new software, or a change to any tracked asset is immediately recorded in your tool of choice. I've seen everything used for asset tracking: Notes databases, Excel spreadsheets, simple entry-sequenced data files on mainframes, and the pricey "enterprise" asset tools from some of the biggest computer vendors in the world. None of these tools is worth the effort to design, buy, or outsource if you cannot enforce compliance through documented procedures that all users follow, a sound and solid process that ensures the right data is entered. Compliance in updating your asset data is a topic for another time; keeping that data correct is a process unto itself.
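Part of that enforcement can live in the tool itself: refuse incomplete entries at the door. A minimal intake check (the required fields here are an assumed policy, not a standard) might look like:

```python
REQUIRED_FIELDS = ("asset_id", "kind", "version", "location")  # assumed policy

def validate_intake(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record may be filed."""
    return [f"missing field: {name}" for name in REQUIRED_FIELDS if not record.get(name)]
```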

Any robust CEM methodology or process starts with a tool that can communicate with your asset database. Many do not; once again, I have seen everything from homegrown Notes databases, INFOMAN on the mainframe, spreadsheets, and basic email systems to Peregrine Service Center, Remedy, and the like. So once again you need sound, documented processes to ensure you can at least obtain the information needed to avert collisions or impacts while getting the proper groups of people to approve and promote a change record (CR) through the life cycle. To initiate a change to your infrastructure, you generate a CR, which should be populated with asset information along with a list of the cross-impacted components associated with the component that requires the change. This CR should carry the correct approvers for the component and for the cross-impacted functional owners who could experience a problem as the change is propagated through the life cycle. Development, QA, integration, and finally production environmental impacts should all be taken into account.
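Pulling those pieces together, a CR opened against the hypothetical inventory could be populated automatically with its cross impacts and approvers. This sketch reuses the Asset model and impacted_by walk from earlier; the owners mapping is an assumption standing in for whatever your shop uses to name functional owners:

```python
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    cr_id: str
    target_asset: str
    impacted: set[str]
    approvers: set[str]
    stage: str = "unit_test"  # first stage of the assumed life cycle

def open_change_record(cr_id: str, target: str,
                       inventory: dict[str, Asset],
                       owners: dict[str, str]) -> ChangeRecord:
    """Populate a CR with cross-impacted components and their functional owners."""
    impacted = impacted_by(target, inventory)
    approvers = {owners[a] for a in impacted | {target} if a in owners}
    return ChangeRecord(cr_id, target, impacted, approvers)
```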

Where does SCM fit into the scheme of things? Let's get back to the foundation of all change. Any piece of software that runs on your technology should be archived within your SCM tool. This lets everyone understand what versions of code you are running and which version was last installed, as well as who submitted any given version of source code, script, object, binary, or other software within your entire infrastructure. (It sounds like SCM should feed the asset tools too, doesn't it?) Many SCM tools store and version binaries as well as source, and this is very important: not only can you safely store all software within an SCM tool, you can always fall back to a previous version, since a properly archived version can never be removed from the SCM database. Hopefully your SCM tool has release functionality whereby you can build a package containing all the components being installed within a release. This packaged release should be able to promote through a life cycle from a basic unit test to a development environment, to a QA environment, and eventually to production.
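That release functionality can be sketched as data: a frozen set of component versions plus a pointer into the life cycle. The stage names below are assumptions; substitute whatever your shop calls them:

```python
from dataclasses import dataclass, field

STAGES = ["unit_test", "development", "qa", "production"]  # assumed life cycle

@dataclass
class ReleasePackage:
    package_id: str
    components: dict[str, str] = field(default_factory=dict)  # component -> archived version
    stage: str = STAGES[0]
    sealed: bool = False  # "cast": contents frozen before promotion

    def promote(self) -> str:
        """Move the package to the next stage; a package must be cast first."""
        if not self.sealed:
            raise ValueError("cast (seal) the package before promoting it")
        nxt = STAGES.index(self.stage) + 1
        if nxt >= len(STAGES):
            raise ValueError("already in production")
        self.stage = STAGES[nxt]
        return self.stage
```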

A package that has not changed from the time it was cast into the technology pond should be able to run in each stage or state it enters, provided no functionality failures occur with that package. How to keep the hardware and software environments between life cycle stages as close to one another as possible is another story for another time, but it is part of this process. If problems are found in the QA environment, the release could be altered, but only by those entitled to make changes in a given stage or state. Your best bet is to reject the package or back it down to the previous state, fix the failures, and have the package recast. In a perfect world, each and every package or release would promote up the life cycle without failures, but we all know better. We also know that every package or release is different, so careful vision into impact and risk is critical. Collisions between packages going through the life cycle also need clear vision, and of course many SCM products include merge tools to ensure you do not step on each other's code (or toes). Managing merges is a process that requires careful thought and the proper diligence to ensure risk and impact are lessened.
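Detecting a collision early is mostly bookkeeping: two in-flight packages touching the same component. A sketch against the hypothetical ReleasePackage above:

```python
def collisions(in_flight: list[ReleasePackage]) -> dict[str, list[str]]:
    """Map each component touched by more than one in-flight package to those packages."""
    touched: dict[str, list[str]] = {}
    for pkg in in_flight:
        for component in pkg.components:
            touched.setdefault(component, []).append(pkg.package_id)
    return {comp: pkgs for comp, pkgs in touched.items() if len(pkgs) > 1}
```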

I will not get into the build process at this point, because many SCM tools require third-party build products, or they use in-house scripts already built for the current build process to create the binaries, which should then be versioned in the SCM tool. This is an unfortunate by-product of the complexity of building code with the many different IDEs and build tools used throughout the industry; there is no one-stop shop for build tools. There are ways to standardize the build process throughout your enterprise that will simplify everyone's life, but that too is another story for another time.

The major disconnect in the full change event management process is that the CEM tools, which maintain event approvals and (hopefully) get populated with the cross-functional impacts, do not communicate with the SCM tools. In a perfect world, opening a change record in your CEM tool would in turn create the SCM release or package as an empty shell in your SCM tool. Through sound project management and requirements gathering, you identify the components that will make up the functionality changes to be placed into the package. You make the changes needed to support the new functionality, wrap all those changes up, and cast your package for release. The change record within the CEM tool can now be promoted, with the proper approvals by the responsible business managers and environment owners, through each stage of the life cycle.
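In code, that perfect-world handshake is a single transaction: opening the CR also creates the empty package shell, keyed so the two can never drift apart. This sketch reuses the hypothetical ChangeRecord and ReleasePackage types from above:

```python
def open_cr_with_shell(cr_id: str, target: str,
                       inventory: dict[str, Asset],
                       owners: dict[str, str]) -> tuple[ChangeRecord, ReleasePackage]:
    """Opening a CR in the CEM tool also creates an empty SCM package shell."""
    cr = open_change_record(cr_id, target, inventory, owners)
    shell = ReleasePackage(package_id=f"PKG-{cr_id}")  # empty until components are added
    return cr, shell
```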

Along with the CEM change record, the package or release within the SCM tool is promoted and, with the proper trigger, automatically installed into each environment as it moves through the stages of the life cycle. Eventually an approval for a production change control occurs within the CEM tool, generating an install of the package from SCM into production, and if no failures are experienced, the life cycle ends for that package or release. Many packages and releases can be in flight at any given time, which SCM tools can handle, but you still have the environmental concerns between each stage of the life cycle, and that is another management aspect that process, asset tracking, cross-functional impacts, and the proper SCM tools and processes can help identify to ensure a smooth transition of any change.
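The trigger itself can stay small: on CEM approval, promote the package and fire whatever install hook serves that environment. Again a sketch over the hypothetical types above, with install standing in for your deployment mechanism:

```python
def approve_and_install(cr: ChangeRecord, pkg: ReleasePackage, install) -> None:
    """On CEM approval, promote the SCM package and trigger the install for its new stage."""
    new_stage = pkg.promote()   # raises if the package was never cast
    cr.stage = new_stage        # keep the change record and the package in step
    install(pkg, new_stage)     # deployment hook for the target environment

# Hypothetical usage:
# shell.sealed = True
# approve_and_install(cr, shell, install=lambda p, s: print(f"installing {p.package_id} into {s}"))
```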

In a previous life, I combined the different but critical functions of a CEM tool and an SCM tool to automate the entire life cycle, coming as close to that "perfect world" as one could hope. It was specific to a proprietary platform, but the entire process was almost flawless, and I loved helping to design, implement, and support such a beautiful process. All of us in technology know very well that change is inevitable and constant. Too often, vendors of change products of any kind miss the mark when it comes to total change harmony, but even without the tools working together, you can define procedures and processes, and document them for all to see and use, to make technology change your ally and not your enemy.

So that leaves us with a state of change that has too many tools that do not communicate with each other. We, as experts in configuration and change, have to design processes and procedures that mimic, as closely as possible, a complete and bulletproof life cycle process. I have designed, and seen many others create, ingenious methods to make tools work together as best as possible, but the industry's tool makers have to stop thinking about each part of change as a separate entity and build an all-encompassing product that can handle the full configuration and change processes harmoniously and seamlessly. That has yet to happen, and while some tool vendors provide the various bits and pieces, a disconnect between each of those tools still exists.

There also needs to be a move away from project managers as the people in corporations who drive IT change. They work on any number of specific efforts at a time, and their focus is to complete projects and move on to the next set of requirements, which is not a bad thing; it is just that they are not immersed in all the areas of technology change. There are, however, many pragmatic SCM and CEM people who can work all sides of the change equation. We as change experts are waiting for the lights to come on, for the tools to catch up with our experience, and of course for the executives and bean counters in many companies to put the genuine emphasis on IT change that it truly and finally deserves.


About the author

Robert Kaylor has been involved with software configuration management, change management, and change control since 1995. During this time he has managed a large global SCM engineering team supporting mainframe, midrange, UNIX, and Windows platforms, using various SCM tools for application SDLCs across the globe. He has also worked as a senior support analyst for the vendor of a proprietary mainframe SCM product, performing installations and customizations of that tool around the world. He also has operations management and data center support experience on multiple platforms, dating back to 1980, on both enterprise-sized and smaller infrastructures. He started in technology as a customer engineer supporting network data centers, their infrastructures, and their users.
