The day after Thanksgiving, the biggest shopping day in the United States, a storm knocked out electric power to a client's main retail store. No problem: of course they had a contingency plan. A year earlier they had bought an expensive electric power generator, justified solely "in case we lose power the day after Thanksgiving."
Relieved that they had planned ahead, they fired up the backup power generator. Silence. The generator was not generating. They'd never actually tried it out before, and now it failed on its big day.
Oh, shoot! They had planned for a contingency. The contingency happened. The plan failed. With 20-20 hindsight, what they had overlooked is obvious.
You have a Y2K effort in place, and it's all about preparation for an event you know is coming. What have you overlooked that's going to bite you? This article will help give you 20-20 foresight to anticipate potential gotchas. It also will give you some ideas of what you can do about them in the time remaining.
Of course, you may still overlook some things, and you may not be able to do anything about some of the issues you do identify. We are, after all, in the end game. Nevertheless, you must get to the point where normal maintenance procedures can handle any remaining Y2K problems as they actually occur.
Where You Are Now
You're probably involved with a Y2K remediation and testing effort that's completed already, or at least pretty well on track to be completed long before January 1, 2000. If you're not that far along, you may want to stop reading and get back to work on the basics.
Chances are you're following a testing strategy similar to the one shown in Figure 1. It's basically a three-step Y2K testing process. Step 1 is a Baseline Test to demonstrate how the software works today (that's presuming it does work today), pre-2000 and pre-remediation.
Step 2 is a Regression Test applying the same test data as in Step 1, but after the current software has been remediated for Y2K. We assume the development organization also has performed technical testing of whatever changes they made. The purpose of the Regression Test is to demonstrate that the clean-up changes have not impacted the software's ability to continue functioning in the current manner.
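The heart of a regression test is a mechanical comparison of the two runs. As a sketch of the idea, in Python rather than whatever mainframe compare utility you are likely using (the function and file labels here are illustrative assumptions, not a prescribed tool):

```python
import difflib

def regression_diff(baseline_output, remediated_output):
    """Compare the baseline run's output lines against the remediated
    run's output lines. An empty diff means the Y2K clean-up has not
    changed current behavior; any diff lines flag a regression."""
    return list(difflib.unified_diff(
        baseline_output, remediated_output,
        fromfile="baseline", tofile="remediated", lineterm=""))

# Identical report lines pass the regression test...
assert regression_diff(["TOTAL 100"], ["TOTAL 100"]) == []
# ...while a changed total is flagged for investigation.
assert regression_diff(["TOTAL 100"], ["TOTAL 101"]) != []
```

The point is that Step 2 demands a byte-for-byte (or at least field-for-field) comparison against the Step 1 baseline, not an eyeball check.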
Step 3 is Future Date testing, often involving a "time machine." The changed programs are run with the system date set to January 1, 2000, February 29, 2000, and other dates we'd expect to cause problems. The test data input is essentially the same as in Steps 1 and 2, except that dates are aged to make sense within the context of the future system date.
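The "aging" in Step 3 amounts to shifting every date in the test data by the same offset that moves the baseline system date to the simulated future system date, so each record keeps the same age relative to "today." A minimal sketch in Python (the specific dates are illustrative):

```python
from datetime import date

def age_date(original, baseline_sysdate, future_sysdate):
    """Shift a test-data date by the offset between the baseline
    system date and the future system date, preserving the record's
    age relative to the system clock."""
    return original + (future_sysdate - baseline_sysdate)

# A transaction dated 30 days before a July 1, 1999 baseline run
# stays 30 days old when the time machine is set to Feb 29, 2000.
aged = age_date(date(1999, 6, 1), date(1999, 7, 1), date(2000, 2, 29))
# aged == date(2000, 1, 30)
```

Aging by a fixed offset keeps relative intervals intact, which is exactly what lets you reuse the Step 1 test data and expected results in the future-date runs.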
If you're not following a strategy similar to the three steps described above, you've probably overlooked significant assurance that your Y2K changes are effective. Even if you are, though, be aware that there can be several gaps in how the strategy is executed.
Perhaps the most prevalent problem is overlooking coordination with ongoing maintenance changes. Since practically every module in your software portfolio may undergo multiple changes for Y2K, other current maintenance changes may be applied to some modules in ways that drop or counteract the Y2K changes.
Comprehensive configuration management to control module versions is essential. So is clustering related changes in identifiable releases. Many organizations find it convenient to promote software back into production as soon as it passes the Regression Test. This shortens the period in which the module is checked out for remediation, reducing the likelihood of