Jane C. Blake,
Managing Editor
Digital's transaction processing systems are integrated hardware and software products that
operate in a distributed environment to support commercial applications, such as bank cash
withdrawals, credit card transactions, and global trading. For these applications, data integrity
and continuous access to shared resources are necessary system characteristics; anything less would
jeopardize the revenues of business operations that depend on these applications. Papers in this
issue of the Journal look at some of Digital's technologies and products that provide these system
characteristics in three areas: distributed transaction processing, database access, and system
fault tolerance.
Opening the issue is a discussion of the DECdta architecture, which ensures reliable interoperation
in a distributed environment. Phil Bernstein, Bill Emberton, and Vijay Trehan define some transaction
processing terminology and analyze a TP application to illustrate the need for separate architectural
components. They then present overviews of each of the components and interfaces of the distributed
transaction processing architecture, giving particular attention to transaction management.
Two products, the ACMS and DECintact monitors, implement several of the functions defined by the DECdta
architecture and are the twin topics of a paper by Tom Speer and Mark Storm. Although based on different
implementation strategies, both ACMS and DECintact provide TP-specific services for developing,
executing, and managing TP applications. Tom and Mark discuss the two strategies and then highlight
the functional similarities and differences of each monitor product.
The ACMS and DECintact monitors are layered on the VMS operating system, which provides base services
for distributed transaction management. Described by Bill Laing, Jim Johnson, and Bob Landau, these
VMS services, called DECdtm, are an addition to the operating system kernel and address the problem
of integrating data from multiple systems and databases. The authors describe the three DECdtm
components, an optimized implementation of the two-phase commit protocol, and some VAXcluster-specific
optimizations.
The next two papers turn to the issues of measuring TP system performance and of sizing a system to ensure a
TP application will run efficiently. Walt Kohler, Yun-Ping Hsu, and Wael Bahaa-El-Din discuss how Digital
measures and models TP system performance. They present an overview of the industry standard TPC
Benchmark A and Digital's implementation, and then describe an alternative to benchmark measurement: a
multilevel analytical model of TP system performance that simplifies the system's complex behavior to a manageable
set of parameters. The discussion of performance continues but takes a different perspective in the paper on sizing
TP systems. Bill Zahavi, Fran Habib, and Ken Omahen present a methodology for estimating the appropriate system
size for a TP application. The tools, techniques, and algorithms they describe are used when an application is still in
its early stages of development. High performance must also extend to the database system. In their paper on
database availability, Ananth Raghavan and T.K. Rengarajan examine strategies and novel techniques that minimize the
effects of downtime. The two databases referenced in their discussion are the VAX Rdb/VMS and VAX DBMS
systems. Both systems use a database kernel called KODA, which provides transaction capabilities and commit processing.
Peter Spiro, Ashok Joshi, and T.K. Rengarajan explain the importance of commit processing relative to throughput and
describe new designs for improving the performance of group commit processing. These designs were tested,
and the results of these tests, along with the authors' observations, are presented.
As important in TP systems as database availability is system availability. The topic of the final
paper in this issue is a system designed to be continuously available, the VAXft 3000 fault-tolerant system.
Authors Bill Bruckert, Carlos Alonso, and Jim Melvin give an overview of the system and then focus on the
four-phase verification strategy devised to ensure transparent system recovery from errors.
I thank Carlos Borgialli for his help in preparing this issue and for writing the issue's foreword.