We are on the brink of a massive shift in CM, both philosophically and technologically. Both our abilities and our means must change, in how we operate and in how we are perceived. The changes are incredibly exciting and offer enormous opportunity for the discipline. We're being backed against the wall in many organizations, but we can climb on top of that wall like the Berliners at Checkpoint Charlie when the wall fell, moving our CM nation to a much better place.
Many of the tools we have play nicely in our sandbox. But the changes we make going forward must be geared toward the organizational management level. We're all so busy controlling the color of each pixel that we lose sight of the fact that senior management rarely sees the pixels, only the picture. One hundred new features may go into a release. How much of that does the executive level care about? Maybe one or two. The rest is just expected improvement to match the competition, or fixes to problems from previous releases. All the CCB work, the file-by-file management, and the myriad of test cases and results don't mean diddly up the chain. The questions are always simple and almost always the same. When will "it" be done? How much will "it" cost to produce?
All the grunt work on our end becomes static outside our circle. That means we need to do much better packaging of our "exports." We don't just export software products; we export information about how well we operate. Are systems running effectively? Are we continually introducing problems as we fix things? Are defects at an all-time low because requirements are better written than ever and we've improved training for software developers? Is there a disparity between internal and external defect rates? The tools and processes we utilize control the SDLC; they also have to
be able to show our successes and failings. Why shouldn't the senior level be directly pulling reports showing the number of open CRs, maintenance releases, and other activity for major systems to understand system health? Many in IT complain when the order comes down to push a dying system well past its viable state instead of using those resources for the next generation. We complain when funding is cut because "they just don't understand." Our tools are not built with the executive in mind, yet executives are the most important customers we have. The company is spending hundreds of thousands, if not millions, of dollars on various efforts, yet most places depend on status reports delivered by word of mouth, with all their intonations, shades, and opportunistic wording. That's a huge shortfall for our tools. Our tools are about exactness, not interpretation. This isn't just configuration management at a micro level. We all want to provide better solutions at a macro level. Lifecycle management is what we ultimately want to achieve. If we want those better solutions, we can't just buy them; we have to set the expectations for ourselves.
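The kind of executive-ready report described above can be surprisingly simple to produce once CM data is captured exactly rather than by word of mouth. The sketch below is a minimal illustration, not any particular tool's API: it assumes a CM system can export change requests as records with `system` and `status` fields (hypothetical names) and rolls them up into an open-CR count per system, unhealthiest first.

```python
from collections import Counter

def summarize_open_crs(change_requests):
    """Count open change requests per system, highest counts first.

    `change_requests` is a hypothetical export format: a list of
    dicts with 'system' and 'status' keys, standing in for whatever
    the CM tool actually produces.
    """
    open_counts = Counter(
        cr["system"] for cr in change_requests if cr["status"] == "open"
    )
    # most_common() sorts by count descending, so the systems with the
    # most open work surface at the top of the report.
    return open_counts.most_common()

# Example: four CRs across two systems, one already closed.
crs = [
    {"system": "billing", "status": "open"},
    {"system": "billing", "status": "open"},
    {"system": "payroll", "status": "closed"},
    {"system": "payroll", "status": "open"},
]
print(summarize_open_crs(crs))  # [('billing', 2), ('payroll', 1)]
```

The point is not the code but the principle: the same exact data the CCB already manages can answer "how healthy is this system?" without anyone's opportunistic wording in the loop.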
It's one thing to hold yourself to a standard of activity. It's something else to hold yourself up for others to see, especially when you are setting your own goals. That visibility raises the level of competence of everyone involved. That push for excellence is the kind of behavior that improves quality at every step. Many times competitors are really competing against themselves. Can they shave off that extra second, be more accurate with a throw, or hang on just 10 more seconds? In our case, it's a function of doing the work right across functional areas. Are we capturing and delivering baselines correctly? Are we leaving no issue abandoned?
To some extent, this is what audit is about. Show that we do what we say we expect of ourselves. When we fail, the discrepancy is written up. But to where? If it never escapes