You need the same rights and privileges to manage the RTR environment and RTR applications in Versions 3 and 4 as in Version 2.
To manage RTR, you must have one of the following OpenVMS system rights or privileges: OPER, SETPRV, or RTR$OPERATOR. To use the RTR API call rtr_request_info, you must have the RTR$INFO right.
To run an application, you must have the OpenVMS privilege TMPMBX.
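Before starting RTR, you can confirm that the account you are using holds the required privileges and rights. A minimal DCL check (output varies by system):
$ ! Display the privileges held by the current process
$ SHOW PROCESS/PRIVILEGES
$ ! Display the rights identifiers (such as RTR$OPERATOR) held by the process
$ SHOW PROCESS/RIGHTS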
2.9 Memory and Disk Requirements
Generally, RTR Versions 3 and 4 make more demands on system memory than RTR Version 2, which can reduce performance. Adding memory may improve performance.
Table 2-2 lists the OpenVMS requirements for space on the system disk. These sizes are approximate; actual size may vary depending on system environment, configuration, and software options. For additional details, see the Reliable Transaction Router for OpenVMS Software Product Description.
Requirement | RTR Version 2 | RTR Versions 3 and 4 |
---|---|---|
Disk space (installation) | 40,000 blocks (20MB) | 50,000 blocks (25MB) |
Disk space (permanent) | 24,000 blocks (12MB) | 36,000 blocks (18MB) |
To restore the RTR Version 2 environment if RTR Version 3 or 4 does not work with your applications as expected, use the following procedure:
$ RTR STOP RTR
$ RTR DISCONNECT SERVER
3.1 The RTR Daemon Process
In RTR Version 3, a new RTR daemon process (called RTRD) was added; it
is used by the RTRACP process to build TCP/IP connections for internode
links, and it is retained in RTR Version 4. The RTR daemon process is
present only on systems with IP networking installed and with IP
enabled as an RTR transport (see Chapter 4, Network Issues, for
information on setting your network transport).
3.2 Command Server Process
The command server process name is RTRCSV_<username>.
In RTR Version 2, a separate command server was started each time a user invoked RTR to enter operator commands. With RTR Version 3, there is one command server per node for all users logged in under a common user name.
Command server timeouts are the same in RTR Versions 3 and 4 as in RTR Version 2.
In RTR Versions 3 and 4, LIBRTR supersedes RTRSHR. The library module LIBRTR contains most of the RTR code, in contrast with RTR Version 2, where RTRSHR contained only the RTR code specific to the application context. All RTR Version 2 binaries have been superseded by the two executables LIBRTR.EXE and RTR.EXE in RTR Versions 3 and 4. Table 3-1 shows the executables of RTR Versions 2, 3, and 4.
RTR Version 2 | RTR Versions 3 and 4 |
---|---|
RTRSHR | LIBRTR |
RTR | RTR |
RTRCOMSERV | Now part of LIBRTR. |
RTRACP | Now part of LIBRTR. |
RTRRTL | No longer applies.
3.4 The ACP Process
The RTR Application Control Process (ACP) handles application control,
and has the process name RTRACP. This is unchanged from RTR Version 2.
3.5 Interprocess Communication
In RTR Version 2, global sections (cache) were used for interprocess communication. In RTR Versions 3 and 4, interprocess communication is handled with mailboxes. Each RTR process, including any application process, has three mailboxes for communicating with the RTRACP process.
With RTR Version 2, the SHOW RTR/PARAMS command displayed a set of RTR parameters.
The /PARAMS qualifier is obsolete in RTR Versions 3 and 4, and the
parameters it showed no longer apply. In RTR Versions 3 and 4, these
parameters are handled with OpenVMS mailboxes, which you can check
using OpenVMS procedures. See the OpenVMS System Manager's Manual:
Essentials for more information.
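Because RTR interprocess communication now uses OpenVMS mailboxes, standard DCL can be used to inspect the mailbox devices themselves. A minimal sketch (device names and counts vary by system):
$ ! List mailbox devices (MBAn:) and their characteristics
$ SHOW DEVICE MBA/FULL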
3.7 Counters
In RTR Version 2, shared memory in global sections was directly
accessible to the RTR command server. In RTR Versions 3 and 4,
process counters are still kept in shared memory, but the command
server accesses them through RTRACP; other counters are contained
within the address space of the ACP itself. Thus, accessing any of
these counters involves communicating with RTRACP.
3.8 Quorum Issues
Network partitioning in RTR Versions 3 and 4 is based on a router and backend count, whereas in RTR Version 2 it was based on quorum. However, quorum is still used in RTR Versions 3 and 4; state names and some quorum-related displays have changed.
Additionally, the quorum-related condition of a node in a minority
network partition is handled more gracefully in
RTR Versions 3 and 4. In RTR Version 2, a shadowed node in a minority
network partition would just lose quorum; in RTR Versions 3 and 4, the
MONITOR QUORUM command states that the node is "in minority," providing
more information. The algorithms used to determine quorum have also
changed significantly for a more stable traffic pattern.
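To observe the quorum state of nodes on a running system, invoke the quorum monitor picture, for example:
$ ! Display configuration, reachability, and quorum status by facility
$ RTR MONITOR QUORUM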
3.9 Server-Process Partition States
As in RTR Version 2, there are three server-process partition states: primary, standby, and secondary.
With RTR Versions 3 and 4, a server process that is initially the primary in a standby or shadow environment returns to the primary role after recovery from a network loss, provided the servers have not been restarted and both servers are accessible. (With RTR Version 2, there was no way to specify which node would become the primary after network recovery, so the location of the primary after a network outage was unpredictable.)
Starting with RTR Version 3.2, RTR provides commands, such as SET PARTITION/PRIMARY, that the operator can use to specify a server-process partition state.
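For example, a command of the following form designates a partition as primary (the partition name here is a placeholder):
$ ! Make the named partition the primary; partition_name is hypothetical
$ RTR SET PARTITION partition_name/PRIMARY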
With RTR Versions 3 and 4, two network transports are available: DECnet and TCP/IP.
At least one transport is required. If a destination supports both transports, RTR Versions 3 and 4 can use either.
Any node can run either protocol, but the appropriate transport software must be running on that node. For example, for a node to use the DECnet protocol, the node must be running DECnet software. (For specific software network version numbers, see the RTR Version 4 OpenVMS Software Product Description.)
A link can fail over to either transport within RTR. Sufficient
redundancy in the RTR configuration provides greater flexibility to
change transports for a given link when necessary.
4.1 DECnet Support
With RTR Version 2, the only transport was DECnet Phase IV; DECnet
Phase V was supported, but without longnames. With RTR Version 3, both
DECnet Phase IV and DECnet-Plus (DECnet/OSI, or DECnet Phase V) are
supported, including longnames and long addresses.
4.2 TCP/IP Support
DECnet-Plus and TCP/IP provide multihoming capability: a multihomed IP node can have more than one IP address. RTR does name lookups and name-to-address translations, as appropriate, using a name server. To use multihomed and TCP/IP addresses, Compaq recommends that you have a local name server that provides the names and addresses of all RTR nodes. The local name server should be available and responsive.
Name servers for all nodes used by RTR should contain the node names and addresses of all RTR nodes. Local RTR name databases must be consistent.
Include all possible addresses of nodes used by RTR, even those addresses not actually used by RTR. For example, if a node has two addresses but RTR uses only one, include both addresses in the local name database.
For more details on multihoming or dual-rail setup, see the appendix
"Dual-Rail Setup" in the Reliable Transaction Router Application
Design Guide.
4.3 Specifying a Preferred Transport
During installation, the system manager can specify either transport by defining the logical name RTR_PREF_PROT with the value RTR_DNA_FIRST or RTR_TCP_FIRST. For example, in the RTR$STARTUP.COM file (found in SYS$STARTUP), the following line specifies DECnet as the default transport:
$ DEFINE/SYSTEM RTR_PREF_PROT RTR_DNA_FIRST
To set the default transport to TCP/IP, remove (comment out) this definition from RTR$STARTUP.COM and restart RTR. For the change to take immediate effect, you must undefine the old logical name before restarting RTR.
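A minimal sketch of making the change take immediate effect, assuming the DEFINE line has already been removed from RTR$STARTUP.COM:
$ ! Stop RTR, remove the old system logical, and restart
$ RTR STOP RTR
$ DEASSIGN/SYSTEM RTR_PREF_PROT
$ @SYS$STARTUP:RTR$STARTUP.COM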
You can also change the above command in RTR$STARTUP.COM to the following:
$ DEFINE/SYSTEM RTR_PREF_PROT RTR_TCP_FIRST
When creating a facility using TCP/IP as the default, you can specify dna.nodename to override TCP/IP and use DECnet for a specific link. Similarly, when using DECnet as the default, you can specify tcp.nodename to use TCP/IP for a specific link. If the wrong transport has been assigned to a link, use the TRIM FACILITY command to remove the nodes that use that link from all facilities (thereby removing the link), then add the nodes back into each facility, specifying the correct transport.
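For example, with TCP/IP as the default, a facility might be created with one DECnet link by prefixing that node name. A sketch only; the facility and node names below are placeholders:
$ ! dna. forces DECnet for node rtr1; other links use the TCP/IP default
$ RTR CREATE FACILITY FINANCE/FRONTEND=fe1/ROUTER=dna.rtr1/BACKEND=be1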
To run the DECnet protocol exclusively, use the following definition for the RTR preferred protocol logical name:
$ DEFINE/SYSTEM RTR_PREF_PROT RTR_DNA_ONLY
For examples of this command syntax, see the section on Network
Transports in the Reliable Transaction Router System Manager's
Manual.
4.3.1 Supported Products
Network products supported are listed in the RTR Version 3 and the RTR Version 4 Software Product Descriptions.
Changes that affect system management have been introduced with RTR
Versions 3 and 4. The following sections describe these changes.
5.1 RTR Management Station
With RTR Versions 3 and 4, you can manage RTR from a node where RTR is
running, from a remote node from which you send RTR commands to a node
running RTR, or from a web browser. The node where you enter commands,
interact with the browser, or view results is your management station.
5.1.1 Browser Interface
With the RTR browser interface, your management station has a
network-browser-like display from which you can view RTR status and
issue many RTR commands with a point-and-click operation. You use the
browser interface, for example, with Microsoft Internet Explorer on an
NT system. For more details on the browser interface, see Reliable
Transaction Router Getting Started and the Reliable
Transaction Router System Manager's Manual.
5.2 OpenVMS Quotas
RTR Version 2 used OpenVMS quota values specified on the RTR START command or calculated defaults. Because RTR Versions 3 and 4 use dynamic allocation (with the exception of the number of partitions, which is statically defined), RTR does not calculate the required quotas but depends on the system manager to configure quotas adequately. The maximum number of partitions is now set at 500. (See the RTR System Manager's Manual and Release Notes for further information on partitions.)
For example, with RTR Version 2 you were required to explicitly specify the number of links or the number of facilities if the defaults were too low; you no longer need to specify each RTR parameter value manually. Additionally, because RTR Versions 3 and 4 use mailboxes, you use the appropriate OpenVMS quotas to establish sufficient resources to support RTR Version 3 and 4 interprocess communication.
In RTR Versions 3 and 4, all these parameters are governed by OpenVMS
quotas. To establish appropriate values, Compaq recommends that you
record the actual quotas used by RTR Version 2 on each node and add
50 percent to those values for RTR Versions 3 and 4. See Table 2-1
for some specifics.
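One way to capture the RTR Version 2 values is to examine quotas on the account that runs RTR, for example:
$ ! Display the current process quotas (remaining values) for comparison
$ SHOW PROCESS/QUOTAS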
5.3 Startup
There is a new RTR$STARTUP.COM file in SYS$STARTUP. It contains
several changes, including specification of RTR file locations and
choice of transport (protocol).
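RTR is typically started by invoking this procedure, for example from your site startup file (SYS$MANAGER:SYSTARTUP_VMS.COM):
$ ! Invoke the new RTR startup procedure
$ @SYS$STARTUP:RTR$STARTUP.COM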
5.4 Creating Facilities
You create facilities the same way in RTR Versions 3 and 4 as in RTR
Version 2.
5.4.1 Naming Nodes
With the addition of TCP/IP and DECnet-Plus (DECnet/OSI) support in
RTR Versions 3 and 4, you can now use longnames for node names.
5.4.2 Modifying Facility Configurations
To modify facilities, you use the same procedures in RTR Versions 3 and 4 as in RTR Version 2. One facility command has changed. The RTR Version 2 command:
SET FACILITY/BROADCAST=MINIMUM=n
has been replaced in RTR Versions 3 and 4 by:
SET FACILITY/BROADCAST_MINIMUM_RATE=n
All supported operating systems can interoperate in the RTR environment, as described in Table 5-1.
RTR Versions 3 and 4 nodes interoperate with... | Description |
---|---|
Other RTR Version 2 nodes | In RTR Versions 3 and 4, RTR uses data marshalling (examination of byte format of messages) and can handle data of more than one byte format, making the appropriate translation as required. However, an application running with RTR may not adequately handle different byte formats used on different hardware architectures. RTR Versions 3 and 4 let you run both RTR Version 2 and RTR Version 3 or 4 nodes in the same environment, but because the RTR Version 2 API does not have the data marshalling capability, an RTR Version 2 application must deal with the different data formats. |
Other RTR Version 3 nodes | RTR Version 3 is fully compatible with other nodes running RTR Version 3. See the RTR Version 3 Release Notes for specifics on known requirements and restrictions. |
Other RTR Version 4 nodes | RTR Version 4 is fully compatible with other nodes running RTR Version 4. See the RTR Version 4 Release Notes for specifics on known requirements and restrictions. |
Several screens that provide dynamic information on transactions and
system state have changed for RTR Versions 3 and 4, as described in the
following sections.
5.6.1 RTR Version 2 Screens
Table 5-2 lists the RTR Version 2 screens that are no longer available in RTR Versions 3 and 4. In general, the information in these monitor pictures is no longer applicable. For example, there is no longer a need to examine cache, because RTR Versions 3 and 4 handle interprocess communication with OpenVMS mailboxes rather than with cache in global sections.
bequorum | cache | chmdata | chmmsg |
congestion | declare | delayproc | dtinfo |
facility | failure | memory | inbytes |
inmessages | inpackets | locks | msgacpsys |
outbytes | outmessages | outpackets | packets |
process | RTR | toptps | trquorum |
Table 5-3 lists the monitor screens that were new to RTR Version 3. No new monitor screens have been added for RTR Version 4, though a small number have been changed.
Picture name | Description |
---|---|
accfail | Shows link transport name for links on which a connection attempt was declined, with a reason for failure. The most recent entry is highlighted. |
acp2app | Displays counts of messages and number of bytes from RTRACP to the application, as viewed from a specific node. |
active | Displays a list of RTR processes and, for each process, the number of transactions it has started, the number it has completed, and the number still active. |
app2acp | Displays counts of messages and number of bytes from the application to RTRACP, as viewed from a specific node. |
broadcast | Displays information about RTR user events by process, including number of user events enqueued, received, and discarded. |
calls | Displays the total number of RTR API calls and their success or failure for the processes on all the nodes being monitored. All RTR messages are also shown by message type. (Pending messages are those that an application has not yet received.) Use the /IDENTIFICATION=process-id qualifier to display the values for one specific process; otherwise, the total values for all processes are displayed. |
channel | Displays the roles of the channels declared by an application. This can be useful as a debugging tool in the early stages of application development. |
connects | Displays connection status summary, including the number of links up and down, and a list of links with state (up or down), architecture, network transport, and fail-reason, if any. |
event | Displays event routing data by facility. Information includes events in transit and destination information showing number of events enqueued, processed, and discarded. |
flow | Displays the flow control counters. |
frontend | Displays frontend status and counts by node and facility, including frontend state, current router, reject status, retry count, and quorum rejects. |
group | Shows server and transaction concurrency on a partition basis. |
ipc | Shows counts of inter-process communication (IPC) activity in the RTR ACP and active RTR applications. |
ipcrate | Displays rate information on IPC messages, byte counts, and IO primitive usage. |
journal | Displays the current journal usage on a node. Local node journal statistics are provided, as well as data for non-local journals accessed from the local node. Included are statistics covering the total number of entries and records written, the number of records read, and how many bytes were involved. Bar graphs showing current usage of journal blocks (as a percentage of the total) are also provided. |
link | Displays a number of per-link data items. Use the /LINK=link-name qualifier to display the values for one specific link; otherwise, the total values for all links are displayed. |
netbytes | Displays a list of the links to other nodes. For each link, the total number of bytes received and sent on that link and the number of bytes received and sent per second are displayed. |
netstat | For each link, displays the connection status in detail, with the link state (up or down) and the architecture type of the remote node (such as VAX, I386, Alpha, and so on). |
partit | Displays the status of server partitions. Shows the partition identifiers, key ranges and key segments, and the status of the servers (active, recovering and so on). |
queues | Shows transaction queues on a partition basis. |
quorum | Tracks (by facility) the configuration, reachability, and quorum status of one or more nodes. |
recovery | Displays the status of server recovery procedures, such as waiting for quorum, catching up transactions, and so on. |
rejects | Displays the last rtr_mt_rejected message received by each running process. |
rejhist | Displays the last ten rtr_mt_rejected messages received by the selected process. |
response | Displays the elapsed time that a transaction has been active on the opened channels of a process. |
rolequor | A detailed view of the various data items displayed in the quorum picture, separated by role. If a quorum problem is encountered, this picture may be useful for diagnosis. |
routers | Displays information on a router node, indicating the utilization of the router in terms of transactions and broadcasts routed through the node. Useful for monitoring performance or locating problems. |
routing | Displays statistics of transaction and broadcast traffic by facility. |
rscbe | Displays the most recent calls history for the RSC subsystem on a backend node. |
stalls | Displays in real time any network links that are currently stalling in their outbound traffic, and provides a history of the stalls that the various links encountered during their lifetime. |
system | Displays the state of critical resources within the RTR environment. If a resource has exceeded a predefined threshold, a warning indicator is displayed. |
tps | Displays the rate of transaction commits carried out by each process using RTR. |
tpslo | Displays the low end of the rate of transaction commits carried out by each process using RTR. |
traffic | Displays a list of the links to other nodes. Shown for each link are byte rate, packet rate, message rate, and congestion, in both directions. Average packets per second is also shown. |
trans | Displays transactions for a frontend, router, and backend. |
v2calls | Shows RTR Version 2 verb usage through the interoperability subsystem. The screen layout is identical to the RTR Version 2 monitor calls picture. |
xa | Displays XA counter information including success and failure as well as call and readonly counters. |
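To display any of these pictures, name the picture on the MONITOR command, as with MONITOR QUORUM shown earlier. For example:
$ ! Invoke the calls picture to watch RTR API call activity
$ RTR MONITOR CALLS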