Ten Years After, Ten Years Gone
Mark Gambino, IBM TPF Development

A major earthquake rocks the San Francisco Bay Area. A well-known musical group that was banned from playing in many small arenas because of crowd problems uses their original name, The Warlocks, to fool town officials into allowing the band to play concerts in Hampton, Virginia. News out of that IBM development lab in Danbury, Connecticut, is very positive regarding their new software release (TPF 3.1). The year is 1989. With most people focusing on the new millennium and what that will bring, it seems only fitting to look at where TPF was just 10 short years ago. The TPF community is excited about the new features available in the TPF 3.1 release and is preparing for Y90 . . . you know, the 1990s.

To get a feel for what TPF communications was like 10 years ago, all the Systems Network Architecture (SNA) tables resided below the 16-M line and the largest SNA shop had just expanded beyond 32-K resource vector table (RVT) entries. That is where some of us learned, the hard way, what load halfword (LH) instructions do to 2-byte resource identifier (RID) values larger than 32 K: the halfword is sign-extended, so the RID suddenly looks like a negative number. Back then, breaking the 32-K RVT plateau was not a concern because the 2-byte RID structure allowed for as many as 64-K RVTs to be defined and no customer would ever have a network that large, right? Those famous last words eventually led to the 3-byte RID project, which enabled the TPF 4.1 release to have as many as 8-M RVTs defined. No one will ever reach 8-M RVTs defined; if they do, hopefully I will be retired by then!
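For anyone who never hit that bug, here is a minimal sketch in C (an illustration only, not TPF assembler or any actual TPF code) of the same trap: load a halfword whose high-order bit is on as a signed value and it comes out negative.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A 2-byte RID larger than 32 K has its high-order bit set. */
        uint16_t rid = 0x8001;                  /* 32769 as an unsigned halfword */

        /* A signed halfword load (what LH does) propagates the sign bit */
        /* when the value lands in a 32-bit register.                    */
        int32_t register_value = (int16_t)rid;  /* sign-extended to 32 bits */

        printf("unsigned RID      : %u\n", (unsigned)rid);        /* 32769  */
        printf("after signed load : %d\n", (int)register_value);  /* -32767 */
        return 0;
    }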

In the late 1980s, channel-to-channel (CTC) support became available in the TPF system, creating much faster pipes to adjacent hosts. LU 6.2 support was also new, providing your application developers with a standard networking interface that could be used across platforms. No longer did TPF application programmers have to code to the low-level ROUTC interface that is unique to the TPF system. Instead, they could use a standard API on the TPF system. What a concept! It is a little-known fact that much of the Internet technology is modeled after the years of TPF networking experience in the field. For example, the routing protocol for output messages in the TPF system was called ROUTC. The TCP/IP architects wanted to make their routing protocol sound more current, so the name they chose was route-d.

In the late '80s, LU 6.2 was the greatest thing since sliced bread. However, it was only the beginning. Parallel sessions support was a "wish list" item and technologies like ESCON channels, Advanced Peer-to-Peer Networking (APPN), and High-Performance Routing (HPR) were not even imagined yet. Now, all of those are part of the extended family that we like to call the TPF 4.1 system.

Communications was not the only area that was growing by leaps and bounds. Back then, C was a verb (as in "I see you, you see me"). All of a sudden, C became a language and the expression changed to "I C you; you C compiler errors." The name C did not go over too well because too many people thought of C as a grade, meaning average. To make the name more appealing, the subsequent enhancements to the language became C++. We are still anxiously awaiting the announcement of the next-generation language, B- (B minus).

Ten years ago, if you asked people what "the Web" was, the most common answer was that it was a weapon used by a certain comic book character. Today, the Web is well known throughout the industry and is ingrained in our culture. Actions like point and click and drag and drop usually refer to computer applications nowadays rather than to organized crime. TCP/IP network connectivity and Internet server support became realities in the TPF system. The new networking technologies, combined with the desire to shelter application programmers from network intricacies and provide any-to-any connectivity, created a new buzzword called middleware. What is this thing called middleware? Is it the layer of clothing skiers wear between their thermal underwear and their outer layer of fleece? Is it the middle management that proposals have to go through before they reach the real decision makers? No. Middleware is the layer that resides between user applications and operating system services. The birth of real middleware came in TPF 3.1 with the introduction of an SQL interface provided by the TPF Application Requester (TPFAR) feature. Recently, the M & M boys (MQSeries and MATIP support) have joined the ranks of middleware in the TPF 4.1 system.

In 1979, how did application programmers access the TPF database? Through the FIND/FILE interface macros. In 1989, how did application programmers access the TPF database? Still through the FIND/FILE interface macros. (Do you see the pattern here?) Who would have thought that in 1999 there would be a TPF file system with APIs that allow programmers to use standard file access methods! Application programmers in the 1980s had to understand the structure of the file database, meaning that records were 381, 1055, or 4096 bytes long, and had to know the chaining mechanisms involved to create and access large data structures. All of this could get complex, which led one application programmer to yell, "Show me the money . . . I mean, show me the data!" This wish was granted with TPF collection support (TPFCS). What about data integrity? Transaction services support, which was introduced in TPF 4.1, allows applications to define committable units of work, and the system guarantees that either all of your updates are made or none of them are. This is true even across an outage! Many programmers who worked in the TPF 3.1 environment and who had to write all the error code to handle partial database updates became "committable resources" themselves, if you know what I mean.
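If you never had to write that error code yourself, the following minimal sketch in C illustrates the all-or-nothing idea of a committable unit of work. The update functions and the commented-out begin/commit/rollback calls are hypothetical names used only for illustration; they are not the actual TPF transaction services API.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stub updates; a real application would change database records here. */
    bool update_passenger_record(void) { return true; }
    bool update_inventory_record(void) { return true; }

    int main(void)
    {
        /* begin_commit_scope();  -- hypothetical: start the unit of work */
        printf("begin unit of work\n");

        bool ok = update_passenger_record() && update_inventory_record();

        if (ok) {
            /* commit();   -- hypothetical: every update is hardened, even across an outage */
            printf("commit: all updates made\n");
        } else {
            /* rollback(); -- hypothetical: none of the updates are made */
            printf("rollback: no updates made\n");
        }
        return 0;
    }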

Automated operations is another thing that we sometimes take for granted nowadays. In the 1980s, you had to manually type a series of commands and hope that there were no typos along the way. Today, you just point and click an EOCF/2 icon and the appropriate script is run. The more important benefit is being able to monitor console messages and react immediately to urgent requests rather than sift through the printed copy of the console messages, hoping to see the "Five minutes until core meltdown" message in time to take evasive action.

In the TPF 3.1 era, if you wanted to load or change a real-time program, an outage was required. The same was true if you wanted to change the SNA network configuration that was defined to the TPF system. Making changes "on the fly" was science fiction back then. However, if you wait long enough, most science fiction eventually becomes a reality. The E-type loader and multiple images capabilities that were provided with the general availability (GA) version of TPF 4.1 enabled you to load program changes while the TPF system was up and running, and to load and test the changes on a subset of the processors in the loosely coupled complex rather than having to upgrade all processors at once. The dynamic LU project gave the TPF system the ability to learn about new SNA resources while the TPF system was active and network traffic was flowing. This was further enhanced by the APPN project, which allowed the remainder of the network to dynamically discover resources that reside within the TPF system.

Capacity planning . . . in real life, we have to adhere to a personal budget. While many of us would like to buy fancy sports cars, expensive audiovisual systems, and private jets, our bank accounts say otherwise. In TPF 3.1, the bank account balance in many cases was the 16-M line. Working storage and many tables had to fit within that 16-M limit. This problem was alleviated by bringing the concept of address spaces into the TPF 4.1 system, allowing for up to 2 G of core memory to be used as you see fit. Address spaces also solved many of the data integrity problems caused by renegade applications corrupting storage used by other applications or system services. Each entry control block (ECB) now runs in its own address space; we call this the "if you can't see it, you can't hurt it" defense. File usage has also skyrocketed to the point where full 4-byte file addresses are now used, allowing for databases of up to 4 G of file addresses (records). In the 1980s, if you asked what a tape robot was, the answer might be that it was something from the television series "Lost in Space." At that time, if you had told someone that there would be control unit caches (and even workstations on the desktop) with gigabytes worth of core memory, you would have been given a one-way ticket to the local hospital for observation. Tightly coupled support was still new to the TPF community. Would the idea of multiple engines (I-streams) within a central electronics complex (CEC) sharing the same core memory catch on? It definitely has, not only in user applications, but also in system services like DASD I/O.
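For the curious, the arithmetic behind those limits is simple powers of two: a 24-bit address reaches the 16-M line, a 31-bit address space reaches 2 G, and a 4-byte file address can name roughly 4 G of records. A quick illustrative check in C (not TPF code):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long line_16m   = 1ULL << 24;  /* 24-bit addressing: the 16-M line     */
        unsigned long long space_2g   = 1ULL << 31;  /* 31-bit address space: 2 G of memory  */
        unsigned long long records_4g = 1ULL << 32;  /* 4-byte file address: about 4 G records */

        printf("16-M line         : %llu bytes\n",   line_16m);
        printf("2-G address space : %llu bytes\n",   space_2g);
        printf("4-byte addresses  : %llu records\n", records_4g);
        return 0;
    }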

When you think back to the year 1989 and compare TPF 3.1 at that time to what TPF 4.1 is today, it is amazing how much has changed. The TPF product has gone from being 20-something years old to 30-something years old. Yes, like many of us, TPF has put on a few pounds over the years, but that is expected from all the new functions that have been provided. However, in many respects, the TPF system is leaner thanks to all the fine-tuning and performance analysis that has been done by the "doctors" here in the IBM Poughkeepsie lab. Are we licensed to practice medicine? Next subject, please!

As we look toward the next 10 years, we hope to survive any TPF mid-life crisis that pops up, assuming we all survive Y2K first, that is! What does the future have in store for TPF? "Only the Shadow knows."