Making It Faster
by Stu Waldron

Overview
The fundamental TPF architecture was defined as early as the late fifties, with the early iterations built in the mid-sixties. The basic concept was to develop a transaction-oriented operating system in which expensive "job" preparation and execution overhead was minimized. Pre-built structures at the operating system level stood ready to accept inbound traffic dynamically, process it, respond, and return to a static state with little or no ongoing consumption of resources. Because memory was precious in those days, there was a heavy dependency on efficient I/O processing to DASD. This proved to be a premier feature of TPF: as the number of users increased and more volumes were added for data and I/O capacity, there was no net increase in overhead. In other words, there were no complicated structures that reached a point of diminishing returns.

The application programs themselves were also written very close to the low-level TPF architecture. Application programmers were in tight control of database design, and many lower-level system services common in other operating systems were implemented at the application layer. The result was an overall solution for handling reservation traffic that was repeatedly proven to be as much as ten times (or more) as efficient as anything remotely comparable to TPF, even running on the exact same hardware -- a system that today already handles the "billion transactions per day" that many claim to aspire to. The question, and the purpose of this paper, is to consider how such a system participates in the rapidly changing environment of commodity platforms, commodity software, free software and component technology.

TPF Today
TPF today is not the same system it was even three years ago, let alone what it was in the seventies and eighties, where most people with only a passing knowledge of the platform place it (the common retort is "oh! that legacy system"). TPF’s C support is better than ever: a far cry from the initial TPF Target C, TPF now supports DLMs and DLLs of any size. A large majority of the POSIX interfaces are supported, and efforts to supply the Standard Template Library are being considered. C++ is also supported, with work initiated on class libraries and the functionality required for catch and throw. TPFDF has also improved over time into a great database facility for procedural programs. TPF has another database facility in the form of collection classes, which are well suited to component architectures. They give the application the appearance that data structures are stored intact (i.e. an array is stored as an array). Not all the tools necessary for object programming are there yet, but TPF is well on its way. To round this out, support for TPF (both assembler and C language) in VisualAge was rolled out this year. One of the immediate benefits was IBM’s ability to port the Apache web server to TPF with relatively minor changes.
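To make the "stored intact" idea concrete, consider the sketch below. It is purely illustrative and written in Java rather than against the actual TPF collection-class interfaces; the CollectionStore facade and its method names are invented for this example. The point is only that the application hands the facility an ordinary list and gets an ordinary list back, with no record layouts to design and no manual packing and unpacking.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical facade for a collection store; the class and method names
    // are invented for illustration and are NOT the actual TPF collection API.
    interface CollectionStore {
        void storeList(String name, List<String> value); // kept "as a list"
        List<String> retrieveList(String name);          // comes back as a list
    }

    // Trivial in-memory stand-in so the example runs anywhere.
    class InMemoryStore implements CollectionStore {
        private final Map<String, List<String>> data = new HashMap<>();
        public void storeList(String name, List<String> value) {
            data.put(name, new ArrayList<>(value));
        }
        public List<String> retrieveList(String name) {
            return new ArrayList<>(data.get(name));
        }
    }

    public class CollectionExample {
        public static void main(String[] args) {
            CollectionStore store = new InMemoryStore();

            // The application builds an ordinary in-memory structure...
            List<String> fares = new ArrayList<>();
            fares.add("JFK-LHR 450.00");
            fares.add("JFK-CDG 510.00");

            // ...and hands it to the facility intact: no packing of the list
            // into fixed-size file records, no record layout to maintain.
            store.storeList("FARE-HISTORY", fares);

            // Later it is retrieved in the same shape it was stored.
            System.out.println(store.retrieveList("FARE-HISTORY").get(0));
        }
    }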

The port of Apache gives TPF a full-function web server, the same one currently serving more than 50% of the sites on the Internet. Apache will allow TPF to stay current with the evolving HTTP protocol. A native web server permits web traffic to be connected directly to TPF applications without the latency of a channel transfer or the loss of reliability that comes with less robust platforms. Other common services will follow to allow direct interfaces to TPF applications. This will provide access to the many newer distribution channels without creating a confusing array of gateways and protocol converters.

TPF Tomorrow
When IBM initially started providing C support on TPF, where that path would lead wasn’t known. C is not just a language -- it has become an environment. Consider the file system services, the POSIX functions, the library functions and the process model (or thread model) implied by that environment. Similarly, C++ and Java carry a lot of assumptions with them -- particularly Java, with its bean model and, of particular importance, the server-side bean model, or EJB (Enterprise JavaBeans). These environments were constructed on non-mainframe platforms intended for a limited number of users (or one). While it may be tempting to dismiss these platforms, such as NT and UNIX, as great only for single-user or departmental solutions, there is much more to them than that. What we are witnessing today is a concerted effort by many vendors, IBM among them, to scale these solutions. Why bother when you have MVS and TPF for large-scale commercial processing? Simple: there is a tidal wave of software out there for NT and UNIX. UNIX in particular, and several derivatives such as Linux, have been cooperative efforts that evolved over time from some of the best minds in the industry. Memory and list management in UNIX is outstanding and its process/thread model quite efficient. UNIX has been used in mission-critical applications for decades now. The more interesting question these days is the choice of platform, such as RISC or 390.
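To see how much even a trivial piece of portable software assumes about its platform, consider the fragment below. It is generic Java, not TPF code, and uses nothing beyond the standard thread model; yet that alone obliges the host system to provide scheduling, per-thread stacks and synchronized access to shared resources.

    public class RequestWorkerExample {
        // A trivial "handle one request" task written against the standard
        // thread model. Nothing here is TPF-specific; the point is what the
        // platform must provide for even this much to work.
        static class RequestTask implements Runnable {
            private final int requestId;
            RequestTask(int requestId) { this.requestId = requestId; }

            public void run() {
                // The environment assumes the host can schedule this task
                // independently, give it its own stack, and serialize access
                // to shared output.
                System.out.println("handled request " + requestId
                        + " on " + Thread.currentThread().getName());
            }
        }

        public static void main(String[] args) throws InterruptedException {
            // Spawning a thread per unit of work is taken for granted by the
            // Java environment; an operating system that wants to host this
            // software has to honor that model efficiently.
            for (int i = 0; i < 3; i++) {
                Thread t = new Thread(new RequestTask(i), "worker-" + i);
                t.start();
                t.join();
            }
        }
    }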

As 390 architecture still holds quite an advantage in its channel architecture, cache management (to handle heterogeneous processing) and reliability, IBM solved the problem of how to make this tidal wave of software available to 390 users: it made OS/390 (MVS with a lot of changes) a UNIX-compliant system, which allows you to do on MVS just about anything you could do on UNIX. In fact, saying something is a "UNIX box" is now ambiguous, since OS/390 is UNIX branded.

Where does this leave TPF? TPF will go on serving the market it was designed for: the high end. In order to be flexible enough to run a large variety of programs and faithfully implement the standards, UNIX systems inherit a lot of overhead. The market they serve, the mainstream, covers user populations from one to tens of thousands. At the high end there are workable solutions, but the number of boxes, the complexity and the costs become excessive. What is still needed is an operating system that targets serving tens to hundreds of thousands of users efficiently. That system cannot, however, completely ignore the tidal wave of software out there. Business today can ill afford to create every solution from scratch. It is necessary to be able to reuse pieces of code that provide the functionality required in a high-end solution. Ideally, a high-end operating system would support many of the base concepts required by that software (such as a process model and process control) and the base services as well (such as naming and security).
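As an illustration of what a base service such as naming means to application code, the fragment below shows the style of lookup that off-the-shelf Java components perform through JNDI. The name "java:comp/env/jdbc/ReservationDB" is a placeholder invented for the example; the point is that ported software expects the platform to answer such lookups, however they are implemented underneath.

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class NamingLookupExample {
        public static void main(String[] args) {
            try {
                // Off-the-shelf components resolve the resources they need by
                // name; they neither know nor care how the platform provides them.
                Context ctx = new InitialContext();

                // A made-up name, used only to show the style of lookup a
                // component performs.
                Object resource = ctx.lookup("java:comp/env/jdbc/ReservationDB");
                System.out.println("resolved: " + resource);
            } catch (NamingException e) {
                // Without a naming provider configured, the lookup simply fails;
                // a high-end platform would supply this service natively.
                System.err.println("no naming service available: " + e.getMessage());
            }
        }
    }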

This is where we are taking TPF. TPF will continue to support assembler find/file programs: not only will some existing code be with us ten years from now, but in many cases where path length is absolutely critical, there is nothing more efficient. TPF will also support C, C++ and Java and a high-end model of their associated environments. Will a TPF Java program be as efficient as an assembler program, or even C? Of course not, but consider that in many installations today less than 20% of the software accounts for 80% or more of the execution path length. That leaves a lot of software that can be brought to market quickly without exposing the system to performance problems.

Component Technology
Component technology is just a polite way of describing objects and the OO revolution happening around us. I say a polite way because many, if not most, don’t understand it, fear it, and hence resist it. Yet the failure to understand and embrace objects can be just as lethal to the bottom line as ignoring the obvious business drivers behind the downsizing phenomenon of the eighties and nineties. All the signs were there: too many mature, stabilized business applications being serviced by overpowered, overpriced mainframes. Mature and stable business processes like accounting are now easily serviced by providers like SAP using commodity software designed to run on commodity platforms such as RISC machines. Commodity software is best organized into reusable building blocks, also known as components, also known as objects. As object technology itself matures, more complex applications will be commoditized and another round of bloodletting will occur for the unprepared. Component technology is here to stay.

What is missing is a high-end component model. This is being addressed somewhat by Enterprise JavaBeans support in OS/390; however, its target is the mainstream. That leaves a significant opportunity for TPF: support for a component model suited to ultra-high-volume systems, with a basic set of services running as efficiently as possible.
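To ground what Enterprise JavaBeans support actually asks of a platform, here is a minimal stateless session bean in the javax.ejb style of that era, offered as a sketch only; the FareQuote interfaces and the quoteFare method are invented for illustration. Note how little the bean itself does: threading, pooling, transactions, security and naming all come from the container, and those are exactly the services a high-end component model would have to deliver at TPF volumes.

    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    import javax.ejb.EJBHome;
    import javax.ejb.EJBObject;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;

    // Remote interface: what clients see. FareQuote and quoteFare() are
    // illustrative names, not part of any real product.
    interface FareQuote extends EJBObject {
        double quoteFare(String origin, String destination) throws RemoteException;
    }

    // Home interface: how clients obtain a bean instance from the container.
    interface FareQuoteHome extends EJBHome {
        FareQuote create() throws CreateException, RemoteException;
    }

    // The bean itself contains only business logic; threading, pooling,
    // transactions, security and naming are the container's responsibility.
    public class FareQuoteBean implements SessionBean {
        public double quoteFare(String origin, String destination) {
            // Placeholder business logic for the sketch.
            return origin.equals(destination) ? 0.0 : 450.00;
        }

        // Lifecycle callbacks required by the SessionBean contract.
        public void ejbCreate() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
        public void setSessionContext(SessionContext ctx) {}
    }

A client would resolve the home interface through a naming lookup like the one sketched earlier, call create(), and invoke quoteFare(); everything else is the container’s responsibility.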

Summary
IBM is exploiting the advantages of TPF, which go beyond technology. Of primary importance is the close relationship with TPF customers. Together, IBM and the TPF users will design and construct a component model that handles hundreds of thousands of users on a limited number of large servers that, as TPF does today, present a single image in order to avoid unnecessary problems with distributed computing. This will allow TPF-enabled industries to continue to build and maintain large-scale solutions that exploit economies of scale and drive profit levels impossible without systems like TPF.