away from reality as “live” configuration updates are made without updating an offline golden image.
Growing Complexities in Hazy Clouds
Cloud computing—having become a great marketing term—and cloud platforms hold exciting promise: reduced costs and management overhead; flexibility, scalability, and accessibility; and automated provisioning. However, IT should be careful not to underestimate the intricacy inherent in the cloud. Today, every major aspect of the datacenter is undergoing unprecedented change, including the entire application stack. Automation is critical for creating an agile environment and reaching a DevOps model in production. Monitoring, orchestration, provisioning, service catalog management, development, testing, and more must execute in perfect unison, a tough thing to accomplish for any operations team. New delivery models such as IaaS, PaaS, and SaaS also present their own unique challenges, demanding decisive action to reduce mean time to repair (MTTR).
So, IT Ops must still understand how services are built, the underlying infrastructure, and how issues impact the datacenter.
Workflow Creates False Security
Enforced processes strengthen the belief that everything is under control. However, no organization can claim it operates completely within the bounds of all established processes and approvals. This false security undermines IT operations.
IT Operations Needs to Collect and Analyze
Neurologists describe the brain as having two distinct hemispheres: one side collects sensory information, while the other is cognitive and analyzes it, translating all of that input into usable data.
This is the same model the IT organization needs for managing the cloud: operations must both collect and analyze in order to know what is happening right now.
For example, a patch or "minor" release to an application can change hundreds of parameters at a granular level. The higher rate of change in the cloud, along with the extra configuration layers (e.g., virtual machine and host configurations), heightens the complexity and increases the management effort. And then the application may not function as planned. IT managers may verify that the upgrade followed the approved process, yet still see poor performance. They need to go into the fine details and trace every step, identifying the make-up of even minor changes and learning how they were deployed. Finally, managers need to take this enormous amount of data—configurations and granular changes—and search it to pinpoint the root cause. Such an endeavor requires enormous resources and time.
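As a minimal sketch of the idea (not from the text): if configuration snapshots are captured before and after a release as flat key-value maps, a simple diff can surface exactly which parameters a "minor" release touched. The snapshot contents and parameter names below are hypothetical.

```python
def diff_configs(before, after):
    """Return (added, removed, changed) parameters between two
    flat key-value configuration snapshots."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return added, removed, changed

# Hypothetical snapshots taken before and after a "minor" release.
before = {"db.pool_size": 20, "cache.ttl": 300, "app.version": "2.4.0"}
after = {"db.pool_size": 5, "cache.ttl": 300,
         "app.version": "2.4.1", "feature.new_flag": True}

added, removed, changed = diff_configs(before, after)
```

In practice the snapshots would span many layers (application, virtual machine, host) and hundreds of parameters, which is why automating this comparison matters.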
Cloud Encourages Unapproved Processes
Now, self-service provisioning has multiplied the number of activities occurring outside of static processes, and IT Ops is no longer directly managing environments. For example, an organization may set up a private cloud with a dynamic management system and allow the testing team to provision its own servers. Traditionally, testers would come to IT and request an environment, and IT would oversee and manage the entire process; IT was responsible for that server, was an integral part of the process, and knew what was happening. But now that the process is independent, testing can create an environment whenever it needs one, and IT has no visibility into what happens there.
Intelligent Analytics for Mountains of Data
The amount of data has grown nearly exponentially in the cloud. The mountains of dynamic information confronting IT are not trivial and cannot be managed with just a dashboard or a set of metrics. Monitoring systems often yield so much data that it translates into a lack of usable information. What is required is taking all the data—really a multi-dimensional universe—and dynamically analyzing it according to intelligent parameters, not just running a reporting tool. Intelligent analytics needs to know how