A Post Card from Miami Beach
by Waseem Majeed

This TPF User Group Conference was a marked departure from more recent meetings. For one, attendance was significantly higher than at previous meetings, with over 350 attendees from 64 different companies. Also, for the first time ever, vendors were allowed full participation in the conference; previously, they were restricted from the general session and sub-committee meetings. That definitely added to the fervor and excitement of the conference, with some very good technical presentations, informative sub-committee meetings, lively discussions on hot topics and some groundbreaking product demonstrations. Rather than just chronicle the different meetings and sessions and recap the presentations, I thought I would do something different. Others have done a very nice job of summarizing the proceedings, and most presentations are available for online viewing and browsing. With that in mind, I wanted to share some thoughts and impressions (so no eggs, tomatoes or lawsuits, please!) from the meeting so you can get a sense of the spirit and excitement at the conference.

Yes, excitement is the right word! The air was practically saturated with it, and one had to work really hard to avoid it. Some of it may have been due to the locale and setting, which was very scenic, serene and quite green. Many people enjoyed sunning themselves by the poolside or walking along the beach, and some even ventured into the water for a quick dip. However, I think a large part of it was due to the flurry of new product developments, progress reports, announcements and demonstrations. After several years in the doldrums, the TPF industry has finally awakened from its slumber. On the strength of the TPF 4.1 product and the ISO-C implementation, TPF is on the move. The pace of development has definitely quickened, and TPF is racing towards a rendezvous with the new millennium. A lot of this has been made possible by the success and features of ISO-C, but some of it certainly has to do with the changes within IBM and the TPF lab itself.

It looks as if a new generation is coming to the fore, unencumbered by previous experiences and preconceptions about TPF: what it can and cannot do, and what it should and should not attempt. With new management at the helm, the TPF lab is on the march. On the IBM side, there were tangible and palpable differences. A new, younger generation is playing a more prominent role in the technical and business decisions. Many outsiders have been brought into the TPF fold, bringing with them experience and expertise from other environments and industries. They have helped to enrich and invigorate TPF. Together with traditional TPF hands, they are helping break new ground and forge ahead towards new computing frontiers for TPF. Already announced, in the works or planned are various products such as the Apache Web Server, C++, MQ Queue Manager, Pockets and Folders, Collection Support, Java and support for Enterprise Java Beans, UNIX file systems and POSIX compliance. Expect more from this new team and this partnership.

On the vendor side, it was just as interesting, if not more so. Again, the striking thing was the presence of several vendors who are new to TPF. It looks like they have a sense for markets and where they are headed, and their presence should serve as a vote of confidence for the TPF industry. The perennial favorite at the conference, Thiru, was back again. With CMSTPF/GI, a graphical interface to his widely accepted test and trace tool, he once again displayed his technical wizardry and understanding of user needs. He has set the standard and become the benchmark by which TPF test tools are measured. IBM is working on its own graphical debugger but is somewhat behind. Parthenon displayed their personal TPF (PC/TPF) product, which provides a complete workstation development solution: one can assemble, load, execute and trace S/370 assembler code on a PC. Two other vendors that merit special mention and attention are Starpush and Sapiens.

Starpush demonstrated a code generator for TPF. It does have some interesting and useful features, but based on the short demo, I got the impression it is not a complete code generator so much as a construction kit. It can generate code for input/output, standard editing and data conversion operations, but true application logic still has to be programmed, with some assistance from the tool. So it is not so much the tool itself as the fact that people are even thinking about automatic code generation that is of interest here. More promising and fascinating was the FALCON (Fast Assembler Language Conversion) tool developed by Sapiens. Originally developed for the MVS environment, it has been adapted for TPF with some success in the prototype version. It generates C code by first analyzing and understanding existing assembler language code. The programming logic of the application is deduced from the code and recorded in an internal meta language; this internal specification then drives the code generation phase. So even though it is strictly a translator, the fact that it must first produce programming specifications presents some interesting possibilities. It doesn't require a great leap of imagination to see where this leads: it is but a small step from this to having the tool generate TPF code from graphical or textual specifications provided by an analyst. Perhaps the industry needs to take a fresh look at code generators. Earlier attempts fell short of expectations. Was it a failure of implementation, or was it trying to solve the wrong problem or using a flawed approach? Maybe it was the right idea at the wrong time. At the time, the general feeling was that TPF was at the end of its product cycle. Recent events and developments have proved that this is far from the case. That being so, perhaps the time has come for another serious attempt at a TPF CASE tool.
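
To make the translation idea concrete, here is a purely hypothetical sketch. Neither the assembler fragment nor the C below comes from FALCON; the register usage, routine name and data are invented simply to show the kind of mapping such a translator has to perform once it has recovered the intent of a loop.

   /* Imaginary S/370-style fragment a translator might be given:
    *
    *          LA    R2,0             clear running total
    * LOOP     A     R2,0(R3)         add current fare element
    *          LA    R3,4(R3)         advance to next fullword
    *          BCT   R4,LOOP          decrement count, loop while non-zero
    *
    * Once the intent ("sum R4 fullwords starting at R3 into R2") has been
    * captured in the internal specification, the generator can emit
    * portable C along these lines:
    */
   #include <stdio.h>

   /* Generated routine: sum 'count' fullword fare elements. */
   static long sum_fares(const long *fares, unsigned count)
   {
       long total = 0;                /* was: LA  R2,0            */
       while (count-- > 0) {          /* was: BCT R4,LOOP         */
           total += *fares++;         /* was: A   R2,0(R3)        */
       }                              /*      LA  R3,4(R3)        */
       return total;
   }

   int main(void)
   {
       long fares[] = { 120, 85, 230 };
       printf("total fare: %ld\n", sum_fares(fares, 3));
       return 0;
   }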

From the database perspective, users have argued for a standardized API, be that TPFDF, SQL or whatever other IBM or industry standard exists. It looks like IBM is banking on Collection Support to fill that need. That being the case, the introduction of this API and its associated database services needs to be planned and well thought out. It must coexist with current APIs while providing a base for future applications and growth. Without proper care and planning, we could end up with a mishmash of solutions and interfaces. Users need to work with IBM to develop standards and guidelines and to get support and consulting on best use. On the database hardware side, the TimeFinder product from EMC provides some good possibilities for the large users. This software/hardware combination can provide seamless and transparent backups of any or all online DASD and, when coupled with EMC's SRDF offering, can even keep databases at two different sites completely synchronized.

The users certainly didn't take a back seat at the conference and contributed their own share of fun and excitement. Galileo had the audience on the edge of their seats with a slick play on the "Mission Impossible" theme to showcase their efforts in making software development productivity measurement 'really possible'. In the process they turned a dry subject into an entertaining and educational feature. Domingos Feriera from The Sabre Group and Steve Kujawa from VISA had the audience in stitches while presenting multi-DASD copy and stress testing, respectively. With Domingos, half the challenge was getting through his accent, but those who got beyond that hurdle were doubly rewarded with his wry wit and subject-matter expertise. Steve's laid-back style was a hit with the audience as he let us in on the secrets to achieving virtually 100% uptime. (He's dead meat, though, if his wife finds out what he's been saying about her.)

Another interesting trend for TPF is the move towards POSIX. This is a positive trend that should enhance the capabilities of TPF and make it acceptable to a wider and more diversified customer base. The idea that major TPF CRSs will be running on UNIX in the next few years has all but faded into the background. However, a new notion seems to be slowly taking hold, and it is just as flawed and perhaps just as risky: the new developments in the UNIfication and POSIcising of TPF are creating the impression that TPF can be a UNIX clone. Many UNIX features have become de facto industry standards and are required for open connectivity, portability and interoperability. The useful and necessary features should be supported and all good ideas must be considered, but only with a view to enhancing the strengths of TPF. We should not lose sight of the TPF market niche, its bread-and-butter capabilities and its very raison d'être. TPF was born and bred to be a highly available, very responsive, widely distributed OLTP system supporting massively large databases. TPF needs to look beyond the travel, hospitality and financial industries to other industries and customers with similar needs. The UNIX features should appeal to these users. TPF as a Web server, or as a news or email server, combines the best of both worlds and requires the best features of each technology. However, I do not think one would want to edit a document or compile a program on TPF.

Nice as these developments are, the challenge will be in adopting and adapting the new tools and technologies to our best advantage. There is an old saying: "Be careful what you ask for, because you might get it." What a turnaround it has been! It used to be that IBM would promise something in two years and deliver in four; now they promise something in two and deliver in one. IBM is now doing its part. They have thrown down the gauntlet and put the ball squarely in our court. Can we, as users, keep pace and make the most of these new trends and directions? Are we, as an industry, nimble enough, strong enough and smart enough to take TPF to a new plane of productivity, utility and profitability for our respective companies? Do we have the goods to deliver on the new promise, or will it be a case of too much too late? It's up to us, so let's do it!

This is not to imply the picture is completely rosy. If there is a cloud in this blue sky, it has to be the lack of expansion of the TPF customer base. Due to mergers, acquisitions, the porting of applications off of TPF and a concerted push by major CRS vendors towards outsourcing, the number of TPF customers has actually been shrinking. This has not impacted the bottom line, thanks to the success of TPF 4.1. At this point the TPF Lab is financially sound and quite profitable, and we have every reason to believe the Lab has a sound strategic and financial footing. However, one does not have to be a math or finance whiz to figure out that if this trend continues, sooner or later the revenue line will dip below the cost line. Before that happens we, as the TPF industry, need to keep a watchful eye on this trend. Rather than be mere spectators and silent partners, we should take an active interest in promoting and furthering the development and acceptance of TPF. A few years ago, the widely heard refrain was, "TPF is dead." In days of yore, they used to say, "The King is dead, long live the King." So today we say, "The old TPF is dead, long live the new TPF."