Yes, You Can Review Your Own Work!


Summary:

In last month's column, "Reducing Your Cost of Quality," I listed "structured personal reviews" as a highly effective appraisal method. That prompted e-mails from several readers asking about the topic. So this month, I will explain what I mean by that term and how you can make your own reviews effective.

 

What is a "Structured Personal Review"?
Peer reviews and software inspections have been written about and effectively used in many organizations over the past two to three decades. They have proven to be very effective in improving the quality of software systems before the testing process even begins. These methods have many benefits and any organization would be well served by them.

That said, they do have one large downside:  Doing them requires the cooperation of others within your organization, especially management. If you cannot secure the support necessary for those methods, they cannot be done. This fact leaves many software professionals who wish to improve their quality performance with few options for getting started.

A personal review is just what the name implies: a review that is done by an individual on his or her own work. It is a peer review without the peer. It is an inspection where the author is the only inspector. Can you really review your own work, though? Many people have tried to do self-reviews and have found that it is very difficult to do them well. I know, because I was one of them! I was reviewing my own work before I learned a structured personal review process, but I rarely found more than 20% of my own defects.

The key to making a personal review effective is "structure." This column very briefly sketches the structural elements that helped me to quickly improve my own personal reviews from below 20% effectiveness to over 70% in a matter of a few weeks. I did this using the Personal Software ProcessSM (PSP) from the Software Engineering Institute (SEI), which contains the best example of a structured personal review process.

Knowing What to Look For
The point behind reviewing your work is to find defects in your work products and remove them quickly and easily. So doing an effective review requires that you know what you are looking for. How can you know what mistakes you made this time around?

The best predictor of the defects in your latest work is the defects that were in prior things you produced. We humans tend to be very consistent and when it comes to defects, we are painfully consistent! This consistency, though, means that we can harness the information we have from prior projects to guide us in doing effective personal reviews.

We need to build a record of our personal defect history. Most organizations use a defect tracking system, so you already have a lot of information to start with. Go through the data and glean the defects that were in your products. This gives you part of your defect history. I say "part" because it does not include the defects you removed during compile, unit test, and any other activities that happened before defect logging started. It is a start, though!


To fill in the rest of your defect history, you will need to start keeping track of the defects you find on your own, whether during your reviews, compiling, or unit testing. After a short time, you will have a pretty complete picture of the defects you commonly inject in your work. For each defect, you want to determine the following (a small sketch of one way to record this appears after the list):

·       What did you do wrong? Defect reports often describe only symptoms, but to be helpful during reviews, we need to identify what we actually did wrong. Was the condition on an IF statement wrong? Did you forget to initialize a variable? Was the syntax of a statement wrong? Was your detailed design faulty? Think about what you had to do to fix each problem, because that will focus your attention on what was actually wrong.

·       What type of defect was it? If you are like the rest of us, you will be managing a lot of defect data. A classification scheme will help you organize it all and use it in the next step, described in "Guiding Your Review" below. At first, you will be coming up with arbitrary groupings. After you have studied many defects, though, you will start to see the natural groupings that they fall into.

·       How much work did it cause? When you are estimating this time, be sure to include all of the effort that went into the defect, including:

1.       Logging, managing and closing the defect report

2.       Investigating the report and reproducing the defect to diagnose the problem

3.       Figuring out how to fix the problem

4.       Re-working the high-level design (if needed)

5.       Re-working the detailed design (if needed)

6.       Re-working all of the affected code

7.       Compiling the re-worked code

8.       Unit Testing the re-worked code

9.       Building the system to incorporate the re-worked code

10.   Re-testing the system to ensure that there are no negative side-effects and also to ensure the defect was actually fixed (be sure to include your own as well as other people's test time)
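
For illustration, here is a minimal sketch of how such a personal defect record might look in Python. The DefectRecord class and its field names are my own assumptions, not part of any prescribed format; the point is simply to capture what you actually did wrong, a defect type, where you found it, and the full fix effort.

```python
from dataclasses import dataclass

@dataclass
class DefectRecord:
    """One entry in a personal defect history (illustrative field names)."""
    description: str    # what you actually did wrong, not just the symptom
    defect_type: str    # your own grouping, e.g. "initialization" or "interface"
    found_in: str       # phase where you found it: "review", "compile", "unit test"
    fix_minutes: float  # total effort: logging, diagnosing, re-work, re-build, re-test

# Example entries in a personal defect history (made-up values)
history = [
    DefectRecord("Forgot to initialize loop counter", "initialization", "unit test", 45),
    DefectRecord("Wrong condition on an IF statement", "logic", "review", 5),
]
```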

With all of this information about your own defect history, you are now armed for the next step.

Guiding Your Review
The mounds of data you have compiled during the previous step will not be terribly useful unless you distill them down to their essence. The best form for this information is a checklist to guide your review. A review checklist should be specific to the work product that you are reviewing, so if you do multiple types of work, you should segregate your defects into a different checklist for each type of work.

For example, a programmer who writes in two different languages will need multiple checklists: At minimum, one will be needed to review code written in each language. Another will be needed for reviewing designs, or perhaps one for each language's designs (depending on how different the designs or design methods are).

Each checklist should be specifically focused on the task at hand. For example, don't include design defects in your code review checklist. Instead, put them on the design review checklist and expect to remove them before you write the code.

Word your checklist items to describe what should be true of the work product, not the problem you want to eliminate. For example, if you sometimes forget to initialize a variable, the checklist item should say, "All variables are initialized," rather than "Forgot to initialize a variable." Each checklist should be no more than one page long. If a checklist grows longer than this, your review process will start to lose its effectiveness. Combine and group items if necessary to reduce a long checklist to a single page, or eliminate items for defects that are either rare or cost very little to fix later. You want to end up with checklists that are short and carry the highest-value items.
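
As a rough illustration of distilling your history down to the highest-value items, here is a small Python sketch that groups a defect history by type, ranks the groups by total fix effort, and keeps the most expensive few as candidate checklist topics. The sample data and the TOP_N cutoff are assumptions for the example; your own data will tell you where to draw the line.

```python
from collections import defaultdict

# history: (defect_type, fix_minutes) pairs gleaned from your defect records (example values)
history = [
    ("initialization", 45), ("logic", 5), ("initialization", 30),
    ("interface", 120), ("syntax", 2), ("interface", 60),
]

# Total the fix effort for each defect type
cost_by_type = defaultdict(float)
for defect_type, fix_minutes in history:
    cost_by_type[defect_type] += fix_minutes

# Rank defect types by total cost and keep the most expensive few for the checklist
TOP_N = 3  # arbitrary cutoff for the example
candidates = sorted(cost_by_type.items(), key=lambda kv: kv[1], reverse=True)[:TOP_N]

for defect_type, minutes in candidates:
    print(f"Checklist candidate: {defect_type:15s} (cost so far: {minutes:.0f} minutes)")
```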

The Personal Review Process
After you have studied your defects and created your checklists, you are almost there. The final piece of an effective personal review process is the process itself. Your process should guide you in a systematic way to check each and every work product you create against each and every item on the appropriate checklist. Many people find that the process I use works well for them:

1.     Write down the time when you start the review process.

2.     Print the checklist.

3.     Print the work product.

4.     Read the first item on the checklist.

5.     Scan the work product looking for lines that this item applies to (most people can do this level of scanning at the rate of two to five seconds per printed page of program code).

6.     When you come across a line that the checklist item applies to, quickly check it. If there is a defect, circle it and continue with your review. (Don't fix it during the review, as this will break your rhythm.)

7.     When you reach the end of the work product, place a check mark next to the item on the checklist. Treat this seriously, as if it were a certification that the checklist item is absolutely true for that work product.

8.     Read the next item on your checklist and repeat steps 5 through 7.

9.     When you reach the end of your checklist, you are done reviewing that work product. If you have other work products to review, print out the next one, and continue with step 4.

10.   After you have reviewed all of your work products, fix each defect that you found, and record for each one the information listed in the prior section.

11.   If you found defects that were not represented on your checklist, consider adding items to your checklist.

12.   Write down the time you finished the review process. Count the non-comment lines of code (LOC) that you reviewed, and compute your review rate (LOC per hour).
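
Step 12 is simple arithmetic, but it is easy to get wrong if you mix units. Here is a minimal sketch, assuming you noted the start and finish times and counted the non-comment LOC yourself; the times and LOC count are made-up example values.

```python
from datetime import datetime

start = datetime(2024, 1, 15, 9, 0)     # time you started the review (example value)
finish = datetime(2024, 1, 15, 10, 30)  # time you finished (example value)
loc_reviewed = 240                      # non-comment lines of code reviewed

hours = (finish - start).total_seconds() / 3600.0
review_rate = loc_reviewed / hours
print(f"Review rate: {review_rate:.0f} LOC/hour")  # 160 LOC/hour here, under the 200 target
```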

The key to making your personal review effective is checking each and every checklist item against each and every work product, while avoiding rushing the process. In the Personal Software Process, we teach students to review at a rate of 200 or fewer LOC per hour. Most people have to work pretty hard to review their code that slowly. It is not unusual for people to start out at rates of 500 to 1,000 LOC per hour.

Most people's data shows that going much faster than 200 LOC per hour results in ineffective reviews. And their data also shows that reviewing slowly does not result in a productivity hit, because every additional defect they find in the review represents time saved during unit test or later in the project.

Making Your Personal Reviews Effective
Doing structured personal reviews is one of the most effective ways to find and remove defects from your work products. But its effectiveness will not be automatic. You should watch your data and let it guide you in determining how to build your effectiveness over time. Here are some specific numbers to watch (a small sketch of how to compute the last two follows this list):

·       Review rate (LOC reviewed per hour): As stated above, you should shoot for no more than 200 LOC/hour. But after you have collected your own data, you will be able to identify your own optimal review rate. Adjust your speed to get the best payback from the time you spend. (But beware of drawing conclusions about effectiveness if you have had only a few reviews – or none – at around 200 LOC/hour.)

·       Review efficiency (defects found per hour): In the spirit of spending your time wisely, you want to ensure that your review time is well spent. If you are removing fewer defects per hour in reviews than in unit test, your reviews are not working well. (This should prompt you to improve your reviews, not abandon them!)

·       Review yield (% of defects existing at the time of the review that were removed): This cannot be computed until after the product has been fully tested. You might draw the line at the time of release, or possibly one year after release. For each defect found after the review, decide if it was in the product at the time the review was done. If it was, it counts against the yield. We hope to see our review yields start out over 50%, and quickly build to at least 70% as we perfect our review process and checklists.

·       Appraisal-to-failure ratio (total personal review and fix time for the project divided by the total compile and unit test time): Review time is counted as "appraisal" because very little of it is spent fixing defects, whereas compile and test time are counted as "failure" because the vast majority of time in those phases is spent fixing defects, re-compiling, and re-testing. The biggest benefit of doing reviews is to rein in our failure time. Failure time tends to be unpredictable, and it is inefficient (on a defects-removed-per-hour-spent basis).

·       Our goal for the appraisal-to-failure ratio (A/FR) is at least 2.0. What we see is that people who are having difficulty getting their review processes working right tend to have an A/FR of less than 1.0 (meaning they are spending more time in compile and test than they are in reviews). On the other hand, when people get their review process working well, their A/FR is routinely over 2.0 (twice as much time spent in reviews as in compile and unit test). It can even skyrocket on some projects when they find no defects at all in compile and unit test!
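
For concreteness, here is a minimal Python sketch of the last two numbers, using made-up figures. The defect counts and the time totals are assumptions for the example; you would pull the real values from your own defect and time logs.

```python
# Review yield: share of the defects present at review time that the review removed
defects_removed_in_review = 14
defects_escaped_review = 6         # found later (test or release) but present at review time
review_yield = 100.0 * defects_removed_in_review / (
    defects_removed_in_review + defects_escaped_review)

# Appraisal-to-failure ratio: review-and-fix time vs. compile-and-unit-test time
appraisal_minutes = 300            # total personal review and fix time for the project
failure_minutes = 130              # total compile and unit test time
a_fr = appraisal_minutes / failure_minutes

print(f"Review yield: {review_yield:.0f}%")  # 70% with these numbers
print(f"A/FR: {a_fr:.1f}")                   # 2.3, above the 2.0 goal
```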

Structured personal reviews are the best method for removing defects from our work before we pass that work on to others for additional work, testing, or use. They can be done even when the rest of the organization will not cooperate in doing peer reviews or inspections, allowing you to improve your quality performance, even as the rest of the organization lags. All it takes to get started is to take some time to look at your data, learn from it, and start to act on it. Your own data is powerful. It can be your stepping stone to superior performance!

P.S. If your organization already does peer reviews or inspections, structured personal reviews will still provide significant benefit. Some organizations that have added personal reviews on top of their software inspections have reported dramatic reductions in system test time and a few cases where no defects were found in the product after release!

SM "Personal Software Process" and "PSP" are Sales Marks of Carnegie Mellon University.
 
