Transparency and Accountability


Spare the rod, spoil the child.  It's an old saying that conveys the notion of accountability in raising a well-mannered kid.  The concept of accountability is just as true of the software development process.  Making the process and its activities transparent to senior management means we can do more than present work progression.  We can also filter problem areas up the chain to help improve the lifecycle.  In this article, we'll look at ways to encourage process improvement through regular reporting.

It's no surprise that people consistently perform better when they are being scrutinized.  We all strive to be professional, but everyone has moments where focus wanes a tad.  Likewise, our lifecycles have points where attention is lacking and responsibilities blur a bit.  Deliverables don't flow as smoothly as they should.  In Configuration Management, we rarely have the power to make the changes necessary in other functional areas to improve time to market, quality, and efficiency, so we need to get that information to the people who can.  Here are some data points to present.

Duration

There are many ways to identify the duration of incoming items.  We may have a tool that keeps history through the lifecycle, or it may be as simple as the folder or file creation dates on a network drive.  It could even be some form of log.  From that, we can compare (in a limited way) one release against another.  There are many reasons why something might sit parked, like items waiting on a code merge from an outside vendor or in conflict with code currently in a testing environment.  While very simple, that information can still be very relevant.  A long lag in processing an oversized release may be no cause for concern, but if the release is only a couple of stored procedures, there may be other issues that need to be addressed.  Similarly, if the wait on that vendor is consistently long, then either we need to thump the vendor or, as an organization, we can manage code better to get more into a release before the merge.
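As a minimal sketch of the simplest case above — releases landing as files or folders in a single network "drop" directory — the parked time can be approximated from filesystem timestamps.  The directory layout and the 14-day threshold here are assumptions for illustration, not anything prescribed by a particular tool:

```python
import time
from pathlib import Path

def age_in_days(path: Path) -> float:
    """Days since the item was last touched, based on filesystem mtime."""
    return (time.time() - path.stat().st_mtime) / 86400

def flag_stale(drop_folder: Path, threshold_days: float = 14.0):
    """Return (name, age_in_days) pairs for items parked longer than the
    threshold, oldest first -- candidates for a line in the status report."""
    stale = [(item.name, round(age_in_days(item), 1))
             for item in drop_folder.iterdir()
             if age_in_days(item) > threshold_days]
    return sorted(stale, key=lambda pair: pair[1], reverse=True)
```

Even this crude report makes a useful exhibit: the same threshold applied release after release gives management a consistent baseline to compare against.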

And if we can identify motion through the cycle, we can also start extracting secondary data.  Changes in processes and tools will be reflected in the duration times.  Or we can show that despite a decrease in workload, there is no decrease in hours billed for a given activity.  If there is no accompanying increase in quality, then perhaps it reflects excessive employee turnover, and retention may become a glaring issue.  Or it may reflect that QA is being asked to do too much with the resources on hand and needs to add and retain staff.

Activity Levels

When passing activity metrics up the chain, we have the opportunity to point out areas of concern.  In the reporting, though, be sure you are aggregating the data in a way that means something to senior management.  If they only see Activity 1 = 100, it won't mean anything.  Show them how that's changed over time or in relation to the other activities.  If Activity 7 is up by 64% but nothing else is, that should be a flag to understand why there is such a hiccup in the cycle.  And you might not do this kind of reporting every period.  The reporting that reflects trending may only be done quarterly or even annually, but it's a great way to raise issues without being belligerent.  And few senior managers want to be caught with several quarters' worth of visible problems they didn't do anything about.
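The "Activity 7 is up 64%" flag above can be produced mechanically once counts per activity per period exist.  This sketch assumes the counts are already collected into simple dictionaries; the 50% threshold is an arbitrary illustration, and each organization would tune its own:

```python
def percent_change(prev: float, curr: float) -> float:
    """Period-over-period change, as a percentage of the earlier period."""
    return (curr - prev) / prev * 100.0

def flag_trends(prev_q: dict, curr_q: dict, threshold: float = 50.0) -> dict:
    """Activities whose count moved more than `threshold` percent between
    periods -- the kind of hiccup worth a line in the quarterly report."""
    return {name: round(percent_change(prev_q[name], curr_q[name]), 1)
            for name in prev_q
            if name in curr_q
            and prev_q[name] > 0
            and abs(percent_change(prev_q[name], curr_q[name])) > threshold}
```

Presenting only the flagged activities, with their percentage swing, is exactly the kind of aggregation that means something to senior management, where a raw count would not.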

If we can accurately keep track of the activities being accomplished by our group or others, we can then push more effectively for better tools.  When we can show that 80% of a given activity is manual but can be automated for a bit of cost, we can then relate that to labor burn rates and show a reasonably accurate cost comparison for a new tool.
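The cost comparison described above reduces to simple arithmetic once the inputs are measured.  All the figures in this sketch — hours per month, the 80% automatable fraction, the burn rate, the tool price — are hypothetical placeholders for whatever your own tracking produces:

```python
def monthly_saving(manual_hours_per_month: float,
                   automatable_fraction: float,
                   burn_rate_per_hour: float) -> float:
    """Labor cost recovered each month by automating part of an activity."""
    return manual_hours_per_month * automatable_fraction * burn_rate_per_hour

def payback_months(tool_cost: float, saving_per_month: float) -> float:
    """Months until the tool pays for itself at the given saving rate."""
    return tool_cost / saving_per_month
```

For example, 100 manual hours a month, 80% automatable, at a $75/hour burn rate saves $6,000 a month, so a $30,000 tool pays for itself in five months — the kind of reasonably accurate comparison that makes a funding request concrete.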

Rework

It's also no surprise to many of us that a small percentage of individuals seem to cause a disproportionate amount of workload for others.  In a recent example, we noted one release request that was followed by 16 updates, not including testing issues that were corrected.  In a perfect world, we'd see one request that moves through the cycle with no updates.  Obviously that's not always possible, but 16 corrections for one initial request is entirely unacceptable to the organization.  That type of information needs to be filtered back to at least the line manager so that appropriate steps can be taken to train or encourage individuals with frequent issues.
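Spotting cases like the 16-update request doesn't require a sophisticated tool.  Assuming some log of events per release request can be extracted as (request_id, event_type) pairs — the event names and the acceptable limit here are illustrative — a count is enough:

```python
from collections import Counter

def rework_counts(events):
    """Count 'update' events per release request.  `events` is an iterable
    of (request_id, event_type) pairs pulled from whatever log is available."""
    return Counter(req for req, kind in events if kind == "update")

def over_limit(events, limit: int = 3):
    """Requests whose correction count exceeds what the organization accepts,
    for filtering back to the line manager."""
    return {req: n for req, n in rework_counts(events).items() if n > limit}
```

Running this over a quarter's worth of requests turns an anecdote ("that one release was a mess") into a defensible pattern.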

No doubt this is a touchy issue, and people will have legitimate (or not) reasons for the events.  Few want to be the cause of someone else's call on the carpet, but sometimes that's just what is needed to prevent recurrences and restore balance.  Occasionally, a process gets out of balance because one group doesn't recognize its impact on the rest of the company.  The only way to create the whiny, temper-tantrum child is to consistently indulge poor behavior, and that's unfortunately true in the corporate world as well.  Statements like "Well, we could never get the developers to do XXXX" not only encourage poor behavior, they increase workload for others down the line.  We should be able to identify the extra steps required and track them.  That's not to specifically pick on developers, since all areas have such nuggets, but rather to note that if we aren't holding people accountable for work, they often won't be.  We can use focused reporting to show how those missed opportunities are costing the organization real money.

Late Stage

We know that issues uncovered later in the lifecycle are more expensive to address than those uncovered early.  And many of these issues have indefinable costs, like damage to the client relationship from missed dates or failed functionality.  In many organizations, issues are treated the same across the lifecycle rather than highlighted by when they surface.  If we see as many or more issues in a later testing environment as we did in the initial test environment, we either have environment discrepancies or the environments are not appropriately controlled from configuration or code perspectives.  We can push information like that up the chain to help get the funding necessary to make environments more equal, if that's the case.
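The comparison above is easy to automate once issue counts are tagged by environment.  This sketch assumes a dictionary of counts and an ordered list of environments in the pipeline — the environment names are hypothetical:

```python
def escalate_environments(issues_by_env: dict, pipeline: list) -> list:
    """Flag any later environment that logged as many or more issues than
    the first test environment -- a sign of environment discrepancies or
    weak configuration control worth pushing up the chain."""
    baseline = issues_by_env[pipeline[0]]
    return [env for env in pipeline[1:] if issues_by_env[env] >= baseline]
```

A richer version would weight issues by severity or by the relative cost of the phase in which they were found, but even the raw comparison supports the funding conversation.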

Focusing the Lens

We're often asked to provide transparency in the form of reporting (or tool access) to upper management.  With just a bit of extra work, we can capture data in a way that isn't just activity related but process related as well.  We can focus the flashlight on areas that consistently slow down the lifecycle and reduce efficiency.  We can generate real cost/benefit data to justify better tools or revamped processes.  IT governance isn't just about seeing work concluded successfully; it's also about getting there efficiently.  We can use our tools and processes to help focus the lens in the right directions.


About the author

Randy Wagner is a Contributing Editor for CM Crossroads and Senior Configuration Manager with EFD in Sunrise, FL. His experience ranges from major financial institutions to multimedia multinationals to the Federal government. Working in small to large project efforts has given him a unique perspective on balancing the discipline of SCM and enterprise change management with the resources and willpower each organization brings to the table. You can reach Randy by email at [email protected]
