Does TPF Have a Future?
by Dan Evans

The recent announcement by AMR of its intent to move some transaction processing off of TPF and onto a Unix platform, coupled with IBM's continued economic difficulties, makes one wonder whether TPF has a future. AMR cited the cost of mainframes as the primary reason for the decision, and of course, these mainframes are the only architecture on which TPF executes. AMR also mentioned that Unix has a wider variety of less expensive C tools available.

Unix currently does not even approach TPF's transaction capability. The overhead of process creation prohibits high transaction rates. Unix transaction processors take the same approach as CICS to avoid both the creation overhead and the simplistic dispatching algorithm. However, Unix systems are beginning to incorporate the new "threads" and "lightweight process" designs to reduce process overhead. But even if Unix is gradually evolving into a transaction processing system, shouldn't TPF development proceed at about the same speed, so that TPF will always be the better transaction processor? If Unix can't compete with TPF now, why is it being considered as an alternative? The answer is "Numbers"!
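
To make the overhead argument concrete, here is a minimal sketch in C contrasting the classic fork-per-transaction server loop with a pool of pre-created threads, written against the POSIX threads interface for concreteness. Everything in it (serve_transaction, NWORKERS, the dispatching loop) is an illustrative assumption, not any vendor's code; it only shows where the process-creation cost sits.

    /* Sketch: two Unix dispatching styles. serve_transaction() stands
       in for real work (parse message, read/write records, reply). */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    #define NWORKERS 8

    static void serve_transaction(int id)
    {
        printf("transaction %d served\n", id);
    }

    /* Style 1: one fork() per transaction. Process creation sits on
       the critical path, which is what caps classic Unix rates. */
    static void fork_per_transaction(int ntrans)
    {
        int i;
        for (i = 0; i < ntrans; i++) {
            pid_t pid = fork();
            if (pid == 0) {              /* child: do the work, exit */
                serve_transaction(i);
                _exit(0);
            }
            waitpid(pid, NULL, 0);       /* parent: reap the child */
        }
    }

    /* Style 2: a pool of threads created once; each transaction then
       costs only a dispatch, much as TPF dispatches an entry. */
    static int next_txn, total_txns;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            int id;
            pthread_mutex_lock(&lock);   /* claim the next transaction */
            id = next_txn < total_txns ? next_txn++ : -1;
            pthread_mutex_unlock(&lock);
            if (id < 0)
                return NULL;             /* no work left */
            serve_transaction(id);
        }
    }

    static void thread_pool(int ntrans)
    {
        pthread_t tid[NWORKERS];
        int i;
        next_txn = 0;
        total_txns = ntrans;
        for (i = 0; i < NWORKERS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NWORKERS; i++)
            pthread_join(tid[i], NULL);
    }

    int main(void)
    {
        fork_per_transaction(100);  /* 100 fork()/wait() cycles */
        thread_pool(100);           /* only NWORKERS creations, total */
        return 0;
    }

The second style is the direction the "threads" work points in: the creation cost is paid once, at startup, instead of once per transaction.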

In 1986, you could put a Unix system on your desktop for $5000. This was AT&T Unix System V Release 2.0 for 80286 processors, with a 20M hard disk. The same amount today will get you a Sun SPARCstation running Solaris, Sun's merger of the AT&T and Berkeley Unix systems, with a 400M disk. As long as the system isn't running a graphical user interface such as OpenLook or Motif, it can probably do 10 transactions a second in spite of Unix. Arraying 100 of these systems on a local area network gives 1000 transactions a second and 40G of disk space, at a cost of $500,000, with no single point of failure. Of course, systems don't scale up that easily, and calculations like that hide a wealth of complexity in the end result. However, transaction processing does lend itself to parallel processing in ways that batch processing does not. If transactions are independent units of work and are easily handled on any one of many available processors, the numbers certainly point to further investigation.
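
The arithmetic behind that claim is worth making explicit. The figures below are the article's own; the few lines of C are merely a convenience for aggregating them, and they assume perfect linear scaling, which real systems never achieve.

    /* Back-of-the-envelope scaling arithmetic for the workstation array. */
    #include <stdio.h>

    int main(void)
    {
        int    nodes         = 100;
        double tps_per_node  = 10.0;    /* transactions/second per node */
        double cost_per_node = 5000.0;  /* dollars per SPARCstation */
        double disk_per_node = 400.0;   /* megabytes per node */

        printf("aggregate rate: %.0f tps\n", nodes * tps_per_node);  /* 1000 */
        printf("total cost:     $%.0f\n",   nodes * cost_per_node);  /* 500000 */
        printf("total disk:     %.0f MB\n", nodes * disk_per_node);  /* 40000 */
        return 0;
    }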

The numbers make sense because the economies of scale are being skewed. It used to be that one large computer was cheaper than two smaller computers of equivalent power. The power of the smaller chips has grown rapidly, and their incredible numbers contribute to their low cost. How many 80x86 machines have been sold, versus how many 370-architecture machines? If it takes a fixed investment to develop a product, the more units sold, the cheaper each unit becomes. This is why Microsoft can offer C development kits for hundreds of dollars, whereas for TPF the same thing has to cost thousands of dollars because of the small market.
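
The formula doing the work there is simply unit price = fixed development cost / units sold (plus margin). The sketch below makes the asymmetry plain; the $2,000,000 development figure and both unit counts are hypothetical, chosen only to show the shape of the curve, not taken from any vendor.

    /* Amortizing a fixed development cost over the size of the market.
       All figures here are hypothetical, for illustration only. */
    #include <stdio.h>

    static double unit_cost(double fixed_cost, long units_sold)
    {
        return fixed_cost / units_sold;
    }

    int main(void)
    {
        double dev_cost = 2e6;  /* assumed cost to build a C toolkit */

        /* a mass-market 80x86 product versus a niche TPF product */
        printf("PC market  (1,000,000 units): $%.0f each\n",
               unit_cost(dev_cost, 1000000L));  /* $2 each */
        printf("TPF market (200 units):       $%.0f each\n",
               unit_cost(dev_cost, 200L));      /* $10,000 each */
        return 0;
    }

The same fixed investment recovered over a market five thousand times smaller has to be priced thousands of times higher, which is the whole of the "Numbers" argument.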

What will the future bring? The TPF community has known for some time what the rest of the world is gradually realizing: real-time transaction processing systems are here to stay. However, TPF has been so slow to change that the rest of the world is already passing it by. Let's look at several scenarios.

Scenario One
In 1988, Prisym, Inc. announced C for TPF systems. The TPF community embraced the innovation. C source debugging rapidly followed, along with a hierarchical file system with device independence. Specifically designed for TPF, the C compiler allowed the importation of large programs into TPF, and by 1992, you did not need a separate development system for TPF. None of this compromised TPF performance because it did not require changes to the operating system. Meanwhile, IBM saw the advantage of having a small real-time operating system, and began moving TPF support down to the smallest 370 machines. At the same time, they began to seriously look at the 370 architecture to see where reductions in complexity could allow them to build desktop 370's. They also took a few notes from the Series/1 architecture, added stack instructions, and combined some often-used instruction sequences into more powerful instructions, such as Load Multiple and Branch. VM/CMS/Lite developed into the operating system of choice for Desk/370 after the addition of Ethernet LAN support. Small TPF-based real-time transaction servers began popping up all over the networks, and because of its disk performance and added LAN support, TPF became the only operating system used for file servers, from the smallest to the largest machines. Larger TPF systems became full-blown database machines capable of managing the data on thousands of disks and networking with other TPF databases to provide shared access and redundancy.

Scenario Two
In 1989, IBM offered a C compiler for TPF. In order to reduce development costs, it was adapted from a compiler designed for a different environment. They also hinted that parts of TPF or TPF utilities would be written in C, so TPF customers would ultimately have to purchase it. Reception was understandably lukewarm, but what could you do? After all, the only alternatives were available from small, unknown companies. How could they possibly do any better than IBM? TPF development slowly moved forward, but the trend was clearly to keep TPF as an auxiliary operating system dependent on others for complete functioning. The increasing power and reduced cost of Unix systems had not been missed by IBM. They had climbed on the bandwagon with the RS/6000 and AIX, one more flavor of Unix. The attraction of selling one computer to each person in a company, as opposed to one for the company, was a powerful magnet. The potential sales volume outweighed the significantly reduced price of the small systems. Besides, it seemed like users could not get enough of graphical user interfaces. This delighted hardware manufacturers, because nothing sucks up more processing cycles. IBM made arrangements with other companies to share development costs and agreed on designs. IBM was becoming one of the pack, just another computer company. The cost of maintaining development support for multiple operating systems on mainframes that were selling more slowly with each passing year meant slower TPF development. Users, frustrated with low productivity and few development tools in TPF, began to look for alternatives in the smaller machines. Even though they could never equal the real-time response of a TPF system, interconnected small machines on high-speed LANs began to equal the throughput of larger TPF systems. The TPF user community shrank and stagnated.

We already know that the first scenario, although set in the past, did not occur. We see features of the second scenario around us. The die may already be cast. The answer to the question posed by the title of this article could very well be: No. But let's stretch our imaginations for one final scenario.

Scenario Three
The TPF Users Group recognizes that real-time processing is so critical to the business operations of its members that they need operating system self-determination. They negotiate with IBM to acquire the rights to TPF and form an independent organization, somewhat like the Open Software Foundation, to develop real-time operating systems and support software for all members. TPF becomes a portable operating system, and everyone lives happily ever after. One never knows.