TPF 10X Efforts

In 1991, the TPF development laboratory in Danbury, Connecticut, embarked upon a program to improve the quality of the next big release (NBR) of code. One of the foremost challenges this presented was to develop processes that would deliver a tenfold reduction in defects.

In this article we would like to share with you the ideas that one of the development groups is using to reach its challenging goal of one tenth the number of defects in the code submitted to test. The group is developing 20,000 new lines of complex code.

Code developed for TPF 3.1 entered our Function Test phase with about 14 defects for every 1000 lines of code (KLOC). Of these 14 defects, six were found by our Function Test phase, three were found by our System Test phase, and five were reported as APARs. To achieve a tenfold improvement, we believe that the code submitted to test must be ten times better than before. The goal for this development group is to submit code to test with only one defect per KLOC, or only 20 defects for the project. Clearly, a dramatic change was necessary.

Development has always had a process to follow, but this group felt that strict adherence alone would not provide sufficient results. Brainstorming sessions were used to surface ideas and after a few weeks the following ideas were mapped into the process now being used by the group.

If you are wondering whether we have succeeded, we do not know yet. The work is still underway, and it will be six months before we know if the objective has been met. What we wish to share with you are our ideas.

High Level Design (HLD) Phase
Historically, we have expended 20 percent of our total resource in this phase. The document we produce covers both the external function and the internal implementation. At the end of the cycle we conduct a review within the lab as well as with some customers. The reviews usually last less than one day. Prior to the review, the authors hold a two- to three-hour tutorial to help the reviewers understand the material. A review team of about 10 to 15 people is normal.

The changes the group made are:

The HLD phase was increased to 30 percent of the total resource. This was accomplished by transferring support savings and applying them up front. In the external section of the document, the group incorporated all the error messages and dumps that would occur, to ensure that error paths were given a thorough look. Historically, these have been added in the coding/testing phases.

In the internal section, for each data record/table and for every field within it, a matrix was developed showing who initialized the field, who updated it, who referenced it, what its values were, what sort of update controls were used (FTWHC, $LOCKC, CORHC, etc.), how many records were needed, the impact of subsystems and subsystem users, the considerations for loosely coupled operation, and how the core was allocated and initialized.
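
As a purely illustrative sketch (the record, field, and routine names here are hypothetical, not taken from the actual design), one row of such a matrix might look like this:

    Record/Field:     XYZREC.XYZCNT  (hypothetical queue depth count)
    Initialized by:   xyz_init (zeroed at restart)
    Updated by:       xyz_add, xyz_remove
    Referenced by:    xyz_display, xyz_audit
    Values:           0 to 255
    Update control:   FTWHC
    Records needed:   one per subsystem user
    Loosely coupled:  no special handling beyond the update control
    Core:             allocated and initialized at restart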

We then addressed the interfaces between components. We found it insufficient to say that a routine's only inputs were the parameters passed. The inputs were defined as the parameters plus the contents of every field in the various records/tables that was referenced. The outputs were the returned values and every record/table field that was modified.
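
To illustrate (again with hypothetical names), an interface write-up under this definition would read something like:

    Routine:   xyz_add
    Inputs:    parameters - record ordinal, count to add
               fields referenced - XYZREC.XYZCNT, XYZREC.XYZFLG
    Outputs:   return value - 0 (success) or 4 (record in error)
               fields modified - XYZREC.XYZCNT, XYZREC.XYZTOD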

We restricted the level of the write-ups to approaches only, no pseudo code. This approach prevented us from getting bogged down in the details and losing sight of the overall design. Pictures were incorporated into the document to better show the interrelationship of tables and the way data changed during processing.

And finally, we involved the test organization in this phase to determine the changes to the test systems and the driver requirements for extensive testing. We also evaluated the design for the ability to properly test the function. This review caused us to add some additional displays to the function to allow the testers to monitor the changes being made to the table structures.

For the tutorial, we decided that one was not enough. We scheduled two different kinds of tutorials - external and internal. The external class was geared toward explaining the functions of the package. The internal classes were geared toward the internals and were expected to make the reviewers almost as knowledgeable about the design as the authors. We ran one external and three internal classes in half-day sessions. We videotaped the sessions so reviewers and testers who did not attend would still be able to benefit.

The review was broken into three different types: an externals-only review, an internal review of the total design, and a series of break-it reviews of the various parts. The external review yielded 42 defects, the internal review found 18 more, and the break-it reviews found another 73 defects. While the external/internal reviews were much the same as what we had done previously, the break-it reviews were new, and adding them to our normal external/internal reviews proved very worthwhile.

What differentiated the break-it reviews from the general reviews was the concentration and the method. The regular review evaluated approximately 20 KLOC in about 12 hours, or 1.7 KLOC/hour. The break-it reviews were scheduled to cover about 250 LOC/hour, or 1/7 the speed of the general review. The reviewers tested the high level logic with different scenarios, looking to break the design.

The participants in these reviews were not the same as the participants in the general reviews. In the general review we needed the "experts". In the break-it reviews we drew from the general population, as system/package knowledge was not needed to do a quality review. An added benefit was that far more eyes were involved with the project. Roles were also assigned to the reviewers to ensure all aspects were covered. Some of the roles were performance, usability, and test considerations.

The answers to all defects raised in the reviews are also formally reviewed. All 133 defects were entered into our electronic data base to allow us to continually evaluate the effectiveness of each stage of the process. During this and every phase the group is meeting for two hours each week to do a causal analysis on the most serious problem.

The process itself needs continuous evaluation and modification. Measurements and causal analysis sessions are the essential parts of this iterative operation.

Low Level Design Phase (LLD) or Logic Specifications
The approach we chose for the LLD was to write the prologues and commentary for the code rather than a separate document. We feel this saves us time and produces a uniform set of specifications at a low level that allows for an effective review.
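
A minimal sketch of what one of these prologues might look like follows; the routine and field names are hypothetical, and the layout is only one possibility, not the group's actual standard.

    /*********************************************************************
     * xyz_add - add a count to the XYZ record (hypothetical example)
     *
     * Logic specification:
     *   1. Hold the XYZ record for this subsystem user.
     *   2. If the in-error flag XYZFLG is set, unhold the record and
     *      return 4 to the caller.
     *   3. Add the caller's count to XYZCNT; if the result would exceed
     *      the field maximum, cap it and note the overflow for audit.
     *   4. Stamp XYZTOD with the current time.
     *   5. File and unhold the record; return 0.
     *********************************************************************/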

For the review we made a few changes. First, to measure the effectiveness of each review, we intentionally insert some errors and see how many get caught. If we do not find 90 percent of those errors, we consider the review to be a failure and we schedule another review. This method is being used for code reviews also.

At the review itself, the author first presents a short tutorial and then leaves the room. The review is conducted by one member of the team acting as the moderator and a small group of "fresh" minds, one of whom attended the HLD review of this area. The format is a walkthru, not a review. The rate is 100 LOC/hour, so to go through the logic of a 200 LOC segment, we plan a walkthru of two hours. Since these are walkthrus, the only upfront work required of the participants is to review the HLD for the area being reviewed. However, some reviewers have chosen to get early copies of the LLD to browse prior to the actual walkthru.

All issues from these reviews are also entered into the electronic data base. At this phase we additionally attempt to determine whether each defect should have been found at this level or at an earlier level. This is one of the alarm systems we are using to ensure solid progress through each phase. The method we are using is called Orthogonal Defect Classification and was developed at IBM Research.

Again, the team is continually monitoring error rates from the reviews, looking for excessive numbers of defects as well as too few. Causal analysis is done on the exceptions.

Coding Phase
For the best quality we chose to write in "C". We have opted to modularize the code so as to keep routines to one page, and are continually focusing on how to package routines to make them reusable.
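
A hedged sketch of the style follows; the routines are hypothetical and far simpler than the real code, but they show the idea of one-page routines that can be reused individually.

    #define XYZ_MAX_CNT 255

    /* Hypothetical in-core image of the XYZ record. */
    struct xyzrec {
        int cnt;    /* XYZCNT - current count              */
        int flg;    /* XYZFLG - nonzero if record in error */
    };

    /* One small routine per job: the validator ... */
    int xyz_validate(const struct xyzrec *rec)
    {
        return (rec->flg == 0) ? 0 : 4;   /* 0 = usable, 4 = in error */
    }

    /* ... is reused by the update routine rather than repeated in it. */
    int xyz_add(struct xyzrec *rec, int count)
    {
        int rc = xyz_validate(rec);
        if (rc != 0)
            return rc;
        rec->cnt += count;
        if (rec->cnt > XYZ_MAX_CNT)       /* cap at the field maximum */
            rec->cnt = XYZ_MAX_CNT;
        return 0;
    }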

For the code reviews we are using error injection and walkthrus, and excluding the author, as we do in the LLD reviews. Again, all issues are entered into the electronic data base and the data is continually monitored.

We are excluding the author from reviews to prevent issues from being explained away and to keep reviewers from being influenced by an author who becomes defensive.

Unit Test Phase
The objective of this phase is to test EVERY path through the code. The use of "C", modularized code, and reuse makes this a more realistic objective. The approach we are using is to test every path through each module, not every path through all modules taken together. For example, if we have two modules, each with four possible paths, then we will run eight tests, four for each. With unmodularized code we would have had to run 4*4, or 16, tests. With more modules and more possible paths, the benefits of modularization become astronomical.
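
The sketch below makes the arithmetic concrete; the two modules are hypothetical stand-ins, each built with two independent decisions so that it has exactly four paths. Eight tests cover every path through each module separately, where covering every path through the pair taken together would require all sixteen combinations.

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical module A: two independent decisions -> four paths. */
    int mod_a(int x, int y)
    {
        int r = 0;
        if (x > 0) r += 1;      /* decision 1 */
        if (y > 0) r += 2;      /* decision 2 */
        return r;               /* 0, 1, 2, or 3 - one value per path */
    }

    /* Hypothetical module B: also four paths. */
    int mod_b(int x, int y)
    {
        int r = 0;
        if (x % 2) r += 1;
        if (y % 2) r += 2;
        return r;
    }

    int main(void)
    {
        /* Four tests drive every path through module A ...           */
        assert(mod_a(0, 0) == 0);
        assert(mod_a(1, 0) == 1);
        assert(mod_a(0, 1) == 2);
        assert(mod_a(1, 1) == 3);

        /* ... and four more drive every path through module B:       */
        /* eight tests in all, instead of the 4*4 = 16 combinations   */
        /* needed to cover every path through the two taken together. */
        assert(mod_b(0, 0) == 0);
        assert(mod_b(1, 0) == 1);
        assert(mod_b(0, 1) == 2);
        assert(mod_b(1, 1) == 3);

        printf("all 8 path tests passed\n");
        return 0;
    }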

We are also using CMS/TPF to assist us in this phase.

As with all phases, every error found is recorded, and the data analyzed. The data will help not only this project, but those to follow.

Summary
Process is the key. Within the framework of process, new approaches are being used for design, reviewing, coding and testing. All approaches and the process itself are being continually evaluated for effectiveness and efficiency.

What we have described is one of the various approaches being used. The development teams are sharing their ideas and experiences and modifying their processes as they proceed. Not all teams are using these ideas, and not all of these ideas are proven effective. We will continue to share our successes and failures with you.

Bob Dryfoos or Sue Pavlakis
TPF
IBM Corporation
40 Apple Ridge Road
Danbury, CT. 06810