TPF Automation: The Next Step is Here
by Matthew James Denman

American Express has been using a product called TPF Expert for our CRAS sets for approximately three years. This PC program was the beginning of our automation efforts. The product provides an array of functions, including color-coded TPF messages, procedures to task out a group of TPF messages, saving TPF messages to a PC disk file, and so forth.

One of the more interesting features, the remote facility, allows another PC to receive the TPF messages from the CRAS set without having to be directly in contact with TPF itself. The messages are received in real time, so the user of the remote PC can see what is occurring online as it happens.

The remote feature of TPF Expert is being used to send the TPF messages to another PC running a custom package, called the TPF Message Analyzer (or just the Analyzer), that analyzes the TPF messages for more complex automation tasks. This program, developed in-house specifically for American Express, operates under OS/2 and uses Presentation Manager for the user interface. The program is now in version 4 and has been under development for over two years.

Currently, we have three CRAS sets running with TPF Expert. Each of these CRAS sets is connected to the PC running the Analyzer by a null modem cable attached to a Digiboard on the PC. The Analyzer PC is also connected to our VM system through the standard 3270 terminal interface. The Analyzer uploads the TPF log to VM throughout the day so that Systems people in other locations can browse it.
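
To give a feel for the VM upload path, the sketch below pushes a single log line into a host session through EHLLAPI. It is only an illustration: the simplified hllapi() prototype, the session name "A", and the keystroke text are assumptions, not our actual upload code. Function codes 1 (Connect Presentation Space) and 3 (Send Key) are standard EHLLAPI.

    /* Illustrative only: the hllapi() prototype is simplified (the
     * real one comes from the emulator's EHLLAPI header), and the
     * session name and keystrokes are assumptions.                 */
    #include <stdio.h>
    #include <string.h>

    extern void hllapi(int *func, char *data, int *length, int *rc);

    static int ehllapi(int func, char *data, int length)
    {
        int rc = 0;
        hllapi(&func, data, &length, &rc);
        return rc;                          /* 0 means success */
    }

    int send_log_line(const char *line)
    {
        char keys[256];

        if (ehllapi(1, "A", 1) != 0)        /* connect to host session A */
            return -1;

        /* Type the line into the session and press Enter ("@E"). */
        sprintf(keys, "%.250s@E", line);
        return ehllapi(3, keys, (int)strlen(keys));
    }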

The Digiboard connected to the Analyzer PC has 16 serial ports. Three of these ports are used by the CRAS sets, one is used by another PC that runs a development version of the Analyzer, two are connected to modems for remote access to the Analyzer, and the remaining ten are reserved for future expansion.

The Analyzer was designed so that when something new needs to be automated, a separate code module is written and loaded into the system without having to take the program down. For example, suppose the report that organizes the Recoup information is to be automated. A code module is written to analyze the Recoup messages. Once this module is tested under the development system, it can be loaded onto the production Analyzer PC and activated by the operator through functions within the Analyzer. As a standard, the actual report should be generated by a separate code module so that if the report needs to be changed, the analysis module does not have to be taken down or modified.
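
As a rough picture of the contract between the Analyzer and a code module, the sketch below pairs an analysis entry point with a separate report entry point. The names here are hypothetical; under OS/2, each module would actually be a DLL whose entry points are resolved at load time, and that loading machinery is omitted.

    /* Hypothetical plug-in contract; under OS/2 each module would be
     * a DLL resolved through the system loader, omitted here.       */
    #include <stdio.h>

    typedef struct {
        const char *name;                   /* e.g. "RECOUP"          */
        int  (*analyze)(const char *msg);   /* decode one TPF message */
        void (*report)(void);               /* separate report logic  */
    } MODULE_VTBL;

    /* Keeping analysis and reporting apart means the report half can
     * be replaced without taking the analysis module down.          */
    static int recoup_analyze(const char *msg)
    {
        /* Real code would parse the message and update the database;
         * here we only acknowledge it.                              */
        printf("RECOUP: saw message %.60s\n", msg);
        return 0;
    }

    static void recoup_report(void)
    {
        printf("RECOUP: report built from saved database rows\n");
    }

    MODULE_VTBL recoup_module = { "RECOUP", recoup_analyze, recoup_report };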

The Analyzer was developed to be interactive with the user. The entire program operates within the Common User Access guidelines of IBM's SAA. High-level queries can be made to get information about such things as trends in core and file usage, tape drive status, online TPF job status, and so on. Because the Analyzer runs under OS/2, the user can also access other programs on the PC without worrying about interfering with it. When using other PC programs, the user feels as though they are operating within a truly integrated environment.

The program works closely with OS/2's Database Manager, an SQL database designed to take over where DB2 left off. As a standard within the Analyzer environment, all data collected from the TPF messages is saved in tables within the database. Currently there are tables for TPF tapes used, TPF DASD, TPF jobs, core and file levels, and TPF SERRCs. All of this information is then made available to other code modules.
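
As an idea of what such a table might look like, here is hypothetical DDL for the tape table. The actual Analyzer schema differs; the column names and types below are assumptions, and a real Database Manager program would normally issue its SQL through the embedded-SQL precompiler.

    /* Hypothetical layout for the TPF tape table; the Analyzer's
     * real schema is not shown here and these columns are guesses. */
    static const char *CREATE_TAPE_TABLE =
        "CREATE TABLE TPF_TAPES ("
        "  VOLSER   CHAR(6) NOT NULL,"   /* volume serial             */
        "  DRIVE    CHAR(4),"            /* symbolic drive address    */
        "  MOUNTED  TIMESTAMP,"          /* when the mount was seen   */
        "  UNLOADED TIMESTAMP,"          /* NULL while still on drive */
        "  JOB      CHAR(8))";           /* TPF job that used it      */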

Keeping the raw data in the database allows ad hoc queries to be performed directly from OS/2's Query Manager, and reports or graphs to be generated by separate programs. In fact, it is quite possible to generate all necessary reports directly from the reporting facilities of the Query Manager, rather than generating them through code.

A good example of a code module using the database tables is the one written to track the module capture job. The module capture code module does not have to decode the TPF messages for the tapes used; instead, it queries the database and gets the information in a convenient record format. The module also queries the database for a list of the DASD that will be captured before the job starts.
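
Against the hypothetical schema sketched above, the module capture module's tape lookup might be no more than a query like this (again, the table, column, and job names are illustrative):

    /* Fetch the tapes still mounted for the capture job, rather than
     * decoding the mount messages a second time.  Names follow the
     * hypothetical TPF_TAPES schema above.                          */
    static const char *TAPES_FOR_CAPTURE =
        "SELECT VOLSER, DRIVE, MOUNTED"
        "  FROM TPF_TAPES"
        " WHERE JOB = 'CAPTURE' AND UNLOADED IS NULL";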

As each new code module is added to the Analyzer, it makes its services available to other code modules, allowing for higher and higher levels of automation. An example of this is the code module for TPF tape management. There are actually two code modules that together make up what is called the Tapes Manager. One does the analysis of the TPF messages and updates the TPF tape table in the database, while the other provides an array of tape request functions that let code modules request information on tapes that have not yet been mounted, tapes that are currently on the system, or tapes that are no longer on the system.
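
The request side of the Tapes Manager can be pictured as a small service header that other code modules compile against. Everything below, from the record layout to the function names, is an assumption made for illustration:

    /* Hypothetical service interface exported by the Tapes Manager. */
    typedef struct {
        char volser[7];      /* volume serial, NUL-terminated    */
        char drive[5];       /* drive the tape is (or was) on    */
        long mounted;        /* mount time, 0 if not yet mounted */
        long unloaded;       /* unload time, 0 if still online   */
    } TAPEINFO;

    int TapeQueryPending(const char *volser, TAPEINFO *out); /* not yet mounted */
    int TapeQueryMounted(const char *volser, TAPEINFO *out); /* on the system   */
    int TapeQueryHistory(const char *volser, TAPEINFO *out); /* unloaded        */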

A third code module works in conjunction with the two tape modules to graphically display the tape drives on the TPF system and the tapes mounted on them. The user defines windows that display whichever TPF tape drives they want to see. For instance, there can be a window for each string of tape drives and a window for the drives assigned to the real time tapes. Each drive is represented by an icon, and the icon can be manipulated to display a small window showing the last 15 volsers that were mounted on that drive.
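
A small ring buffer per drive is one plausible way to back that 15-volser window; the structure below is a sketch, not the Analyzer's actual code:

    /* Keep the last 15 volsers mounted on one drive.  Illustrative. */
    #include <string.h>

    #define VOLSER_HISTORY 15

    typedef struct {
        char history[VOLSER_HISTORY][7];  /* most recent mounts     */
        int  next;                        /* next slot to overwrite */
    } DRIVE_HISTORY;

    void drive_record_mount(DRIVE_HISTORY *d, const char *volser)
    {
        strncpy(d->history[d->next], volser, 6);
        d->history[d->next][6] = '\0';
        d->next = (d->next + 1) % VOLSER_HISTORY;
    }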

The code modules can, and should, interact with the users of the program, allowing them to get information about the subject the module is automating. For example, the module capture routine has a window that shows which modules are currently being captured by the system, how long the capture has been active, and so forth. This window can be opened and closed at will by the operator. Each of these windows, when closed, is represented by an icon in a separate window of the Analyzer.

Under development is the ability to automatically start the offline job for jobs that have run on TPF. Once a job has been automated on TPF to the point of starting it online (if required), capturing the tapes used, and recording the start and end times, the offline job can then be released with the online tapes.

When the Analyzer was originally developed, it worked within the MS-DOS environment. It didn't take long, however, to outgrow DOS. With workstation programs, HLLAPI, and a database package installed, there wasn't much space left for the program. OS/2 was the best choice. Its Extended Edition comes with a communications manager, an enhanced version of HLLAPI, the Database Manager, and a good graphical user interface. All of these pieces work well together and conform to IBM's SAA guidelines.

Our goals for automation within TPF are very broad. It is nice to have online jobs automated and the offline jobs submitted automatically, but our vision goes much further. The raw data of TPF should be presented to the user as information that would be impossible to get otherwise. Outage situations should be analyzed and cross-referenced with previous outages to create suggestions on how to deal with the current situation. Rather than just displaying the information to the user, advice should be given along with it to give the information more meaning.