institute control while maintaining agility. For this, the tools need to be able to deal with all the changes: not only software changes but also environment and software infrastructure changes, and so on. Tools should handle all process changes, formal and informal; changes that are automated and changes that are made manually; changes that are authorized and changes that are unauthorized or unplanned. Tools should cover the entire environment, from the software down through all layers of the underlying infrastructure stack.
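One way to picture "authorized versus unauthorized" change tracking is to reconcile every detected change against the records that authorized it. The sketch below is purely illustrative; the class, field names, and ticket format are assumptions, not any particular product's model.

```python
# Hypothetical sketch: reconcile detected environment changes against
# authorized change records to surface unplanned or unauthorized drift.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedChange:
    item: str               # configuration item, e.g. a server name
    attribute: str          # what changed, e.g. "jvm.heap_size"
    ticket: Optional[str]   # change ticket that authorized it, if any

def flag_unauthorized(changes, authorized_tickets):
    """Return the changes that have no matching authorized ticket."""
    return [c for c in changes if c.ticket not in authorized_tickets]

detected = [
    DetectedChange("app-server-01", "jvm.heap_size", "CHG-1001"),
    DetectedChange("db-server-02", "max_connections", None),  # manual, unplanned
]
for change in flag_unauthorized(detected, {"CHG-1001"}):
    print(f"UNAUTHORIZED: {change.item} / {change.attribute}")
```

In a real tool the authorized-ticket set would come from the change management system, and the detected changes from continuous environment scanning; the reconciliation logic itself stays this simple.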
Tools for the Entire Lifecycle and Environment
Tools should also span the entire lifecycle, addressing the gap between development and operations: from development to testing, to staging, to production, to DR, to the retirement of the system. What is very important here is that the tools get to the right level of detail. Take the tools available for software configuration management, like any SCCM tool, which can give you the exact line and the exact change that was made in the software code. Compared to this, on the environment side, tools like a CMDB (which are supposed to maintain this configuration information) are very limited in providing that level of detail. You might know which configuration item has changed, but you don't necessarily know which specific attribute changed. Getting to the parameter level, the configuration itself, the most granular level: that is where the greatest risk of issues and incidents hides. These tools should be able to overcome this complexity automatically. When we speak about change, there is a great deal of configuration information for even one system. Today's systems run multiple applications on very complex infrastructure, possibly affected by millions of parameters and attributes, and all of these application pieces and configuration details need to be tracked and analyzed.
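The parameter-level granularity described above can be sketched as a recursive diff of two environment snapshots, reporting exactly which attribute changed rather than just which configuration item. This is a minimal illustration, assuming configurations are modeled as nested dictionaries; real snapshots would be far larger and collected automatically.

```python
# Hypothetical sketch: diff two environment snapshots down to the
# parameter level, the granularity CMDB-style tools often miss.
def diff_config(old, new, path=""):
    """Yield (parameter_path, old_value, new_value) for every changed attribute."""
    for key in sorted(set(old) | set(new)):
        p = f"{path}.{key}" if path else key
        a, b = old.get(key), new.get(key)
        if isinstance(a, dict) and isinstance(b, dict):
            yield from diff_config(a, b, p)   # descend into nested sections
        elif a != b:
            yield (p, a, b)

before = {"db": {"max_connections": 100, "timeout": 30}, "cache": {"size_mb": 512}}
after  = {"db": {"max_connections": 200, "timeout": 30}, "cache": {"size_mb": 512}}
print(list(diff_config(before, after)))  # [('db.max_connections', 100, 200)]
```

The point of the example: the answer is not "the db configuration item changed" but "db.max_connections went from 100 to 200", which is the level at which incidents are actually diagnosed.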
Cutting Through Noise
These tools need to be able to cut through the noise and identify the information that will be valuable for a specific user and for a specific step of the ALM and operations process. They should be able to deal with both the logical system architecture and the physical environment topology. Across the lifecycle transitions the logical architecture remains the same, but the physical architecture can be very different from one environment to the next. These tools need to support new, disruptive technologies as well as traditional data center environments.
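Cutting through the noise amounts to filtering a large stream of detected changes down to the ones that matter for a given user and lifecycle step. Here is a deliberately simplified sketch; the roles, steps, and layer names in the relevance rules are invented for illustration.

```python
# Hypothetical sketch: filter a flood of detected changes down to those
# relevant to a given user role at a given lifecycle step.
RELEVANCE = {
    ("dba", "production"):    {"db"},
    ("developer", "testing"): {"app", "middleware"},
}

def cut_noise(changes, role, step):
    """Keep only the changes whose layer matters to this role at this step."""
    layers = RELEVANCE.get((role, step), set())
    return [c for c in changes if c["layer"] in layers]

changes = [
    {"layer": "db",  "param": "max_connections"},
    {"layer": "os",  "param": "ulimit"},
    {"layer": "app", "param": "feature_flag.x"},
]
print(cut_noise(changes, "dba", "production"))  # [{'layer': 'db', 'param': 'max_connections'}]
```

A production tool would derive relevance from the logical architecture rather than a static table, which is what lets the same rules apply even when the physical topology differs between environments.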
ALM Tools Must Realize Value Rapidly
It is extremely important, at this agile pace of change, to be able to realize value quickly. If you make changes every day, you cannot wait a year to implement the tools that are supposed to help you optimize the process. The tools you implement should keep the same pace at which you implement changes, and deliver results just as quickly.
I think that the evolution that has happened in software engineering, in technology organizations, and in IT has led to a new way of working, requiring changes to ALM processes, technologies, and tools. I believe a new generation of tools is required to cope with these new trends. The existing tools are great, but they were designed around an old approach, with old concepts in mind: a well-defined process, a structured organization, a well-planned roadmap, and so on.
So the new set of tools, the ones that support agility, provide flexible control, and deliver, in spite of complexity, the right amount of information to the user who needs it: those are the tools we will see coming.