Callout servers are checking or verification applications running on a router or backend; they receive a copy of every transaction passing through the node where the callout server is running.
Like any other server, callout servers can abort any transaction in which they participate. Callout servers are typically used to provide an additional security or checking service; transactions can be inspected by the callout server and aborted if they fail to meet user-defined criteria.
Callout servers require that a journal be created on the node where the server runs. For a backend callout server, there would already be a journal because backends require journals, but if the callout server is running on a router, a journal is required on the router node.
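If the router lacks a journal, one can be created with the RTR CREATE JOURNAL command before the facility is started. This is a minimal sketch; any size or placement qualifiers depend on your platform and disk layout:

```
% rtr
RTR> create journal
```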
Assume that callout servers are to run on the router nodes (TR1 and TR2) in the configuration shown in Figure 2-1. Example 2-5 shows the commands needed to set up callout servers on the routers.
Example 2-5 Configuration of Callout Servers

```
% rtr
RTR> set environment/node= -
_RTR> (FE1,FE2,FE3,TR1,TR2,BE1,BE2,BE3)
RTR> start rtr
RTR> create facility funds_transfer/frontend=(FE1,FE2,FE3) -
_RTR> /router=(TR1,TR2) -
_RTR> /backend=(BE1,BE2,BE3) -
_RTR> /call_out=router
```
To avoid problems with quorum resolution, design your configuration with an odd number of routers. This ensures that quorum can be achieved.
To improve failover, place your routers on separate nodes from your backends. This way, failure of one node does not take out both the router and the backend.
If your application requires frontend failover when a router fails, frontends must be on separate nodes from the routers, but frontends and routers must be in the same facility. For a frontend to fail over, there must be more than one router in the facility.
To identify a node used only for quorum resolution, define the node as a router or as a router and frontend. On this node, define all backends in the facility, but no other frontends.
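For example, extending the facility from Example 2-5 with a hypothetical quorum-only node QR1, defined as a third router (giving an odd number of routers) with all backends but no additional frontends, might look like this sketch:

```
RTR> create facility funds_transfer/frontend=(FE1,FE2,FE3) -
_RTR> /router=(TR1,TR2,QR1) -
_RTR> /backend=(BE1,BE2,BE3)
```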
With a widely dispersed set of nodes (such as nodes distributed across an entire country), use local routers to deal with local frontends. This can be more efficient than having many dispersed frontends connecting to a small number of distant routers. However, in some configurations, such as those without long network links, it may be more effective to place routers near backends. For more information on configuration considerations, see the Reliable Transaction Router Application Design Guide.
2.8 Router Load Balancing and Flow Control
Router load balancing, or intelligent reconnection of frontends to a router, enables a frontend to select a preferred router, the router that is least loaded. Load balancing is coordinated by backends.
Load balancing is an aspect of RTR flow control that manages message flow between processes and nodes controlled by RTR. Flow control regulates the sending rate of a process to achieve maximum message throughput and avoid resource exhaustion. RTR flow control is designed to deal with short-term system overload; if flow control indicates frequent resource depletion, the topology of the system should be evaluated. Use MONITOR TRAFFIC to check for congestion rates on RTR links between nodes.
Within a facility, RTR monitors message flow in four role directions.
Messages travel from a sender to a recipient at a rate, controlled by RTR, that the recipient can handle. RTR achieves this by limiting the sending rate with credits. Each sending RTRACP asks its partner for credit, that is, permission to send. When a recipient is prepared to receive data, it grants credits. Each time a sender sends data, its credit is reduced until it is exhausted; exhausted credit must be replenished before more data can be sent. Availability of credits can be checked with the MONITOR FLOW display.

Load balancing is controlled with the /BALANCE qualifier on the CREATE FACILITY and SET FACILITY commands. (The RTR Version 2 implementation of load balancing treated all routers as equal, which could cause reconnection timeouts with geographically distant routers.)
When used with CREATE FACILITY, the /BALANCE qualifier enables load balancing for frontend-to-router connections across the facility. Use SET FACILITY/NOBALANCE and /BALANCE to switch load balancing off and on.
The default behavior (/NOBALANCE) connects a frontend to the preferred router. Preferred routers are selected in the order specified in the /ROUTER=(tr1,tr2,tr3,...) qualifier used with the CREATE FACILITY command. If the /ALL_ROLES qualifier is also used, the nodes it specifies have lower priority than the nodes specified by the /ROUTER qualifier. RTR automatic failback ensures that the frontend reconnects to the first router in the specified order when that router becomes available. Manual balancing can be achieved by specifying different router orders on different frontends.
When the /BALANCE qualifier is used, the list of routers specified in the router list is randomized, making the preferred router a random selection within the list. Randomness assures that there will be a balance of load in a configuration with a large number of frontends. RTR's process for automatic failback will maintain the load distribution on the routers. Failback is controlled so as not to overload configurations with a small number of routers.
For example, assume the following command is issued from a frontend:
```
RTR CREATE FACILITY test/FRONTEND=Z/ROUTER=(A,B,C)
```
The frontend attempts to select a router based on the priority list A, B, C, with A being the preferred router. If the /BALANCE qualifier is added to the end of this command, the preferred router is randomly selected from the three nodes. This random list exists for the duration of the facility. If a facility is stopped, a new random list is made when the facility is recreated, unless a router does not have quorum (sufficient access to backend systems). A router without quorum will no longer accept connections from frontend systems until it has again achieved quorum.
Consider the following points when using load balancing:
The commands to set, show or monitor load balancing are:
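As a sketch based on the qualifiers described in this section (SHOW FACILITY is assumed here as the display command):

```
RTR> create facility funds_transfer/frontend=(FE1,FE2,FE3)/router=(TR1,TR2)/balance
RTR> set facility funds_transfer/balance      ! switch load balancing on
RTR> set facility funds_transfer/nobalance    ! switch load balancing off
RTR> show facility funds_transfer             ! display facility settings
RTR> monitor flow                             ! check availability of credits
```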
Adding concurrent processes (concurrency) for server application processes usually increases performance. Concurrency permits multiple server channels to be connected to an instance of a partition.
Concurrency should be added during the testing phase, before an application goes into production, to verify that performance does increase. For example, if multiple servers require a lock on the same part of the database, transaction throughput could decrease rather than increase with concurrent servers. For throughput to increase in such a case, transactions must lock independent parts of the database.
Consider the following factors when adding concurrency:
RTR supports two levels of rights or privileges:
In general, rtroper or RTR$OPERATOR is required to issue any command that affects the running of the system, and rtrinfo or RTR$INFO is required for using monitor and display commands.
Setting RTR Privileges on UNIX Systems
On UNIX machines, RTR privileges are determined by the user ID and group membership. For RTR users and operators, create the group rtroper and add RTR operators and users as appropriate.
The root user has all privileges needed to run RTR. Users in the group rtroper also have all privileges with respect to RTR, but may not have sufficient privilege to access resources used by RTR, such as shared memory or access to RTR files.
The rtrinfo group is currently used only to allow applications to call rtr_request_info(); create it, along with rtroper, as needed. Users who do not fall into the above categories but are members of the rtrinfo group can use only the RTR commands that display information (SHOW, MONITOR, CALL RTR_REQUEST_INFO, and so on).
Depending on your UNIX system, see the addgroup, groupadd, or mkgroup commands or the System Administration documentation for details on how to add new groups to your system.
If the groups rtroper and rtrinfo are not defined, all users automatically belong to them. This means that there is no system management required for systems that do not need privilege checking.
If the RTR executable is modified after installation so that it is no longer suid root, or the mode of the /rtr directory is changed, an application process (for example, a client) can encounter a fatal error (Unable to locate a socket) when it cannot create an rtr_ipc_sock_* file for its process ID (pid) in the /rtr directory. To avoid this, ensure that application processes run with user and group IDs that have permission to create and remove files in this directory.
Setting RTR Privileges on OpenVMS Systems
Use the AUTHORIZE utility to create the Rights Identifiers RTR$OPERATOR and RTR$INFO if they do not already exist on your system, and assign them to users as appropriate. The RTR System Manager must have the RTR$OPERATOR identifier or the OPER privilege.
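A sketch of the AUTHORIZE commands involved; the username SMITH is hypothetical:

```
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> ADD/IDENTIFIER RTR$OPERATOR
UAF> ADD/IDENTIFIER RTR$INFO
UAF> GRANT/IDENTIFIER RTR$OPERATOR SMITH
UAF> GRANT/IDENTIFIER RTR$INFO SMITH
```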
Setting RTR Privileges on Windows NT Systems
The RTR System Manager needs Administrator privileges for RtrOperator rights.
2.11 RTRACP Virtual Memory Sizing for all Systems
The basic memory requirement of an unconfigured RTR Application Control Process (RTRACP) on all supported operating systems is approximately 5.8 Mbytes. Additional memory may be required depending on the operating system environment being used by the RTRACP. While there is no penalty for allocating more virtual memory than is used, applications may fail if too little memory is allocated.
The following allowances for additional virtual memory should be made:
| For each | Add an additional |
|---|---|
| Link | 202 Kbytes |
| Facility | 13 Kbytes plus 80 bytes for each link in the facility |
| Client or server application process | 190 Kbytes for the first channel |
| Additional application channel | 1350 bytes |
You must also prepare for the number of active transactions in the system. Unless the client applications are programmed to initiate multiple concurrent transactions (multi-threading), this number cannot exceed the total number of client channels in the system. This should be verified with the application provider.
It is also necessary to determine the size of the transaction messages in use:
The RTRACP virtual memory sizing requirements for replies are:
Thus if you want to send a million replies, make provision for a virtual address space of 138 Mbytes.
The total of all the contributions listed will provide an estimate of the virtual memory requirements of the RTRACP. A generous additional safety factor should be applied to the total virtual memory sizing requirement.
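The contributions above can be combined in a rough calculation. This sketch assumes Kbyte = 1024 bytes, Mbyte = 1,000,000 bytes, and 138 bytes per reply (as implied by the million-replies figure); the workload numbers are illustrative only:

```shell
# Rough RTRACP virtual memory estimate (bytes), per the sizing table above.
links=50; facilities=2; app_procs=10; extra_channels=100; replies=100000

base=5800000                                          # unconfigured RTRACP, ~5.8 Mbytes
link_mem=$(( links * 202 * 1024 ))                    # 202 Kbytes per link
fac_mem=$(( facilities * (13 * 1024 + 80 * links) ))  # 13 Kbytes plus 80 bytes per link
proc_mem=$(( app_procs * 190 * 1024 ))                # 190 Kbytes per application process
chan_mem=$(( extra_channels * 1350 ))                 # 1350 bytes per additional channel
reply_mem=$(( replies * 138 ))                        # 138 bytes per reply

total=$(( base + link_mem + fac_mem + proc_mem + chan_mem + reply_mem ))
echo "$total"
```

Apply a generous safety factor to the result, as recommended above.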
On OpenVMS, it is better to grant the RTRACP resource limits exceeding its real requirements than to risk loss of service in a production environment as a result of insufficient resource allocation. Divide the total by the system page size to obtain the final virtual memory requirement in pages. Process memory and page file quotas should be set to accommodate at least this much memory.
Resource Sizing for UNIX and Windows
On other operating systems, ensure that the machine has sufficient physical memory and disk space for a swap file.
2.11.1 OpenVMS Virtual Memory Sizing
On OpenVMS, process quotas are controlled by qualifiers to the START RTR command. START RTR accepts the /LINKS and /PROCESSES qualifiers, which specify the expected number of links and application processes in the configuration. The values supplied are used to calculate reasonable and safe minimum values for the following RTRACP process quotas:
Both the /LINKS and /PROCESSES qualifiers have high default values.
The default value for /LINKS is 512. This value is high but is chosen to protect RTR routers against a failover where the number of frontends is large and the number of surviving routers is small. The maximum value for /LINKS is 1200.
The default value for /PROCESSES is 64. This value is large for frontend and router nodes but is sized for backends hosting applications. Backends with complex applications may have to set this value higher.
The maximum value for /PROCESSES is the OpenVMS allowed maximum. Warning messages are generated if the requested (or default) memory quotas conflict with the system-wide WSMAX parameter, or if the calculated or specified page file quota is greater than the remaining free page file space.
The default values for /LINKS and /PROCESSES require a large page file. RTR issues a warning if insufficient free space remains in the page file to accommodate RTR, so choose values appropriate for your configuration.
The /LINKS and /PROCESSES qualifiers do not take into account memory requirements for transactions. If an application passes a large amount of data from client to server or vice-versa, this should be included in the sizing calculations. For further information on the START RTR qualifiers, see the START RTR command in the Command Reference section.
Once the requirements have been determined for the START RTR qualifiers /PGFLQUOTA, or /LINKS and /PROCESSES, RTR should be started with these qualifiers set to ensure that the appropriate virtual memory quotas are established.
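For instance, a backend expecting up to 100 links and 32 application processes might be started as follows (the values here are illustrative only):

```
RTR> start rtr/links=100/processes=32
```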
The OpenVMS AUTHORIZE utility does not play a role in the determination of RTRACP quotas. RTR uses AUTHORIZE quotas for the command line interface and the communication server, COMSERV. Virtual memory sizing for the RTRACP is determined through the qualifiers of the START RTR command.
On all platforms, the operator must size the RTRACP process limits before starting RTR. No direct control of the RTRACP process quotas is offered on UNIX platforms; however, log file entries result if hard limits are less than the preferred values for the RTRACP.
This list shows the minimum limits for the RTRACP on the following UNIX platforms:
The START RTR qualifiers /LINKS and /PROCESSES apply only to the OpenVMS platform. Process quotas on UNIX platforms must be determined through operating system handling of virtual memory sizing.
2.12 RTR Shared Memory Sizing
Each operating system where RTR runs has different requirements for shared memory, a system-wide resource. These requirements are as follows:
For all flavors of UNIX:
To start and operate, RTR allocates a shared memory segment of approximately 160,040 bytes. This portion of memory is used exclusively for management operations, such as establishing connections between a frontend and a router.
In addition to this memory, RTR also uses a shared memory segment of 12,592 bytes for every process that has opened an RTR channel, including the RTR COMSERV. This requirement is independent of the number of threads used in the application process.
Often, RTR needs to service multiple client/server applications on a given node. To minimize shared-memory related operations on each client/server application open-channel request, the RTRACP allocates shared memory in large chunks, in amounts that differ on different platforms. These large chunks are later used on demand. Please consult your operating system documentation for more information on various tunable shared memory-related parameters.
For Windows and OpenVMS:
RTR uses a different mechanism on the Windows and OpenVMS platforms: on OpenVMS, RTR uses global sections, and on Windows, memory-mapped I/O. However, the basic memory requirement of 12,592 bytes for every application process, and 160,040 bytes for management activities, remains the same. Please consult your operating system documentation for more information on, and the limits of, memory-mapped I/O and global sections.
2.13 Environment Variables Used by RTR
RTR can use several environment variables for specific needs. How you set these depends on your operating system. Use them to tune journal access, manage flow control, compress reply data, and establish network transports.
You set these environment variables on OpenVMS with ASSIGN or DEFINE. On UNIX the command differs according to the shell: for the C shell (csh), use setenv; for the Bourne shell (sh) or Bourne-again shell (bash), assign the variable and then export it. Other shells may use a different command. On Windows, set environment variables through the Advanced tab of the System Properties dialog.
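As a sketch for sh/bash (the variable name RTR_EXAMPLE and its value are purely illustrative):

```shell
# Bourne shell / bash: assign, then export.
RTR_EXAMPLE=/tmp/rtrjnl
export RTR_EXAMPLE
echo "$RTR_EXAMPLE"

# csh equivalent:   setenv RTR_EXAMPLE /tmp/rtrjnl
# OpenVMS:          $ DEFINE RTR_EXAMPLE value
```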