Five Predictions for 2011

More innovation in the cloud space
One especially surprising event this year was Amazon’s unilateral removal of WikiLeaks content, giving a whole new meaning to the phrase “eventual consistency”. Data management is probably the most painful part of moving services to the cloud (as Andrew Tanenbaum once said, “never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway”), and keeping data secure is essential both to retaining your competitive advantage and to complying with regulations, so organizations will need to consider very carefully the risks of outsourcing data hosting.
Nevertheless, “The Cloud” is a magic phrase that will continue to dominate discussion in the IT world. Public and private cloud offerings will continue to evolve, in terms of the number of vendors, their offerings, and the toolchain. However, as the Amazon debacle demonstrates, IT will continue to rely on a combination of traditionally managed services and cloud-based ones. This heterogeneity will make it painful to create a strategy for adopting cloudy systems in the enterprise.
It will be ever more essential to come up with such a strategy and execute it. The benefits of virtualization (and by extension the cloud) for rapid application development, for creating production-like environments for testing, and for scaling up production environments when demand is volatile are too great for businesses to ignore.
What won’t happen: consolidation
Unfortunately, evolution in this space is going to proceed too fast for there to be any meaningful consolidation of vendors, or even of the toolchain. We’re certainly nowhere near any standards (another quote from Tanenbaum: “The nice thing about standards is that you have so many to choose from”). This means you need to be careful to design your systems with flexible abstraction layers. The good news is that there aren’t that many basic cloud operations, so abstraction layers for this stuff aren’t exactly rocket science.
This abstraction layer is, in some ways, the interface between the development team and the operations team, since it’s the operations team that will be managing what’s on the other side of it. That brings us nicely to the next trend.
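To make that concrete, such an abstraction layer can be as simple as an interface over the handful of basic cloud operations. Here is a minimal Python sketch; the `CloudProvider` interface and its operation names are hypothetical illustrations, not any particular vendor’s API:

```python
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """A thin abstraction over the few basic cloud operations.

    Each vendor (or your in-house virtualization platform) gets its
    own implementation; application and deployment code depends only
    on this interface.
    """

    @abstractmethod
    def create_instance(self, image: str, size: str) -> str:
        """Provision a machine and return its provider-specific ID."""

    @abstractmethod
    def destroy_instance(self, instance_id: str) -> None:
        """Tear the machine down."""

    @abstractmethod
    def list_instances(self) -> list:
        """Return the IDs of running instances."""


class InMemoryProvider(CloudProvider):
    """A stand-in implementation, useful for testing deployment logic.

    A real implementation would wrap a vendor SDK or REST API behind
    the same three methods.
    """

    def __init__(self):
        self._instances = {}
        self._next_id = 0

    def create_instance(self, image, size):
        self._next_id += 1
        instance_id = "i-%d" % self._next_id
        self._instances[instance_id] = (image, size)
        return instance_id

    def destroy_instance(self, instance_id):
        del self._instances[instance_id]

    def list_instances(self):
        return list(self._instances)
```

The point is less the specific methods than the shape: because the operations are so few, swapping one provider implementation for another (or for a test double, as above) is cheap, which is exactly the flexibility you want while the vendor landscape is still churning.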
Enterprises will start to take notice of DevOps
Most enterprises are spending upwards of 70% of their IT budgets on operations, including maintaining legacy services. Yet faced with the business imperative of delivering more services, more rapidly, with more integrations and larger data sets, to a rapidly proliferating set of client devices, operations teams are buckling under the pressure.
Broadly speaking, continuous delivery is the answer to this problem. But continuous delivery requires better collaboration between development, testing, and operations, along with extensive automation. This is exactly the focus of the DevOps movement, which can broadly be characterized as applying agile techniques to the world of operations.
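As a sketch of what that automation looks like in miniature: a delivery pipeline is just a sequence of automated gates, where any failure halts the release and every run leaves a transparent record. The stage names below are hypothetical, purely for illustration:

```python
def run_pipeline(stages, artifact):
    """Run each pipeline stage in order, stopping at the first failure.

    Returns an audit log of (stage, status) pairs -- the kind of
    transparent, automated record that can stand in for manual
    change-approval sign-offs.
    """
    log = []
    for name, stage in stages:
        try:
            stage(artifact)
        except Exception as exc:
            log.append((name, "failed: %s" % exc))
            return log  # halt the release at the first failure
        log.append((name, "passed"))
    return log


# Hypothetical stages; in practice each would invoke real build,
# deployment, and test tooling.
stages = [
    ("unit tests", lambda artifact: None),
    ("deploy to staging", lambda artifact: None),
    ("smoke tests", lambda artifact: None),
]
```

The design point is that the same mechanism serves both speed and compliance: releases move as fast as the gates allow, and the log is the evidence.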
As the speed of delivery increases, the current approach to achieving compliance in enterprises, with its expensive change management processes and rigid division between development and operations, will become unsustainable. Automation and collaboration actually form a very powerful alternative mechanism for managing risk effectively and transparently, but it will take a while for enterprises to understand and switch to this model. Gartner predicts that by 2015, 20% of Global 2000 organizations will have adopted strategies from DevOps.
What won’t happen: enterprises actually adopting DevOps
The DevOps approach is so radical it will take some time to cross the chasm, and indeed it will be actively resisted by many organizations where it threatens traditional delivery models and organizational structures. As with other flavors of Agile, many organizations will adopt a version of DevOps that is buzzword compliant, but omits the practices that actually deliver the promised value.
The development of automated security testing tools
Security testing is currently a major bottleneck in the adoption of continuous delivery. Specialized consultancies dominate this field, and as with other kinds of testing, nothing can replace a human in the planning of security testing. However, there is a real opportunity to create an integrated tool suite for automated security testing of applications, including penetration testing, static analysis of systems, and injection attack detection, for both traditional and newer platforms.
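To give a flavor of the simplest end of such a suite, the sketch below feeds classic injection payloads into a hypothetical input-handling function (`build_query` here stands in for an application’s data-access layer, and its naive quote-stripping is for illustration only; real code should use parameterized queries) and asserts that nothing dangerous survives:

```python
# Classic injection payloads to replay against every input handler.
INJECTION_PAYLOADS = [
    "' OR '1'='1",
    "'; DROP TABLE users; --",
    "<script>alert(1)</script>",
]


def build_query(username):
    """Hypothetical data-access code under test.

    The quote/semicolon stripping is deliberately naive, purely so the
    check below has something to exercise; parameterized queries are
    the real fix.
    """
    cleaned = username.replace("'", "").replace(";", "")
    return "SELECT * FROM users WHERE name = '%s'" % cleaned


def check_injection_payloads_neutralised():
    """An automated gate: fail the build if any payload survives."""
    for payload in INJECTION_PAYLOADS:
        query = build_query(payload)
        # Only the two wrapping quotes may remain, and no statement
        # separators may leak through.
        assert query.count("'") == 2, query
        assert ";" not in query, query
```

A check like this is no substitute for a human-planned penetration test, but because it runs on every build it catches regressions long before a consultancy’s annual engagement would.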
Release management gets serious
Many organizations have a release manager or release engineer who is responsible for ensuring that releases go well, but who has no real power, technical or organizational, to make sure all the correct dependencies are actually in place. As organizations need to deploy more frequently, they will have to develop this capability, which will entail better co-operation between the various groups involved in delivering software, and the establishment of good practices that are both agile and compliant with frameworks like ITIL and COBIT.
It’s going to be almost impossible to hire
The market in Silicon Valley is already as tight as a drum, and as the economy slowly moves out of recession, it will become almost impossible to hire people who understand the new world of release and configuration management. India and China, of course, never had a recession, so there it’s business as usual.
Well, that’s it. As always, I’d love to get your feedback on what you think will be hot this year, and where I’ve got it wrong. Meanwhile, a happy and prosperous 2011 to you and your families.