DIGITAL SNA Peer Server Version 1.4 Release Notes

May 1, 1997

Copyright (c) 1997 by Electronic Data Systems Corporation. All rights reserved.
Copyright (c) 1994, 1995 by Digital Equipment Corporation. All rights reserved.

This document contains information not included in the DIGITAL SNA Peer
Server V1.4 documentation. It includes information about required and
optional software, installation notes, operating notes, problem corrections,
and known problems with this software version and known restrictions in
companion software.

IMPORTANT

Please read these notes before installing or using the software.

Revision Information: This is a new document.
Operating System Version: DIGITAL UNIX V4.0
DECnet Version: DECnet/OSI for DIGITAL UNIX V4.0
Software Version: DIGITAL SNA Peer Server V1.4

Contents

1       INTRODUCTION                                                      1
1.1     Version 1.4 Overview                                              1
2       SUPPORTING SOFTWARE                                               2
2.1     DIGITAL UNIX                                                      2
2.2     DECnet/OSI                                                        2
2.3     X.25                                                              2
3       INSTALLATION NOTES                                                3
3.1     Removing Prior Versions                                           3
3.2     Installing or upgrading Peer Server Dependent software            4
3.3     NCL Initialization File Reuse                                     4
4       OPERATING NOTES                                                   4
4.1     10,000 concurrent sessions system requirements                    4
4.1.1   Memory and swap space requirements for 10,000 sessions            4
4.1.2   System tuning associated with 10,000 concurrent sessions          5
4.1.3   Modification of system parameters necessary for 10,000
        concurrent sessions                                               6
4.1.3.1 Variables in the /sys/conf/NODE-NAME file                         6
4.1.3.2 Variables in the /etc/sysconfigtab                                6
4.1.4   Build kernel and reboot system                                    7
4.1.5   Summary of system parameters to modify                            7
4.1.6   DECnet/OSI patch                                                  8
4.2     Extended TN3270 operation                                         8
4.3     Synchronous Communications Support (SDLC)                         8
4.3.1   Modem Control and Link Configurations                             9
4.3.1.1 Naming of Synchronous Lines                                       9
4.3.1.2 Modem Connect Line Speed Characteristic                          10
4.3.2   Additional Sync Comm Information                                 10
4.3.2.1 SCC Device Name                                                  11
4.3.2.2 CCITT V.24 Incompatibility with EIA RS-232C
        (DSYT1 and DNSES only)                                           11
4.4     Logical Link Control Type 2 (LLC2)                               11
4.4.1   Token Ring Support                                               11
4.4.2   Ethernet 802.3 vs. Ethernet V2                                   12
4.4.3   Ethernet Support                                                 12
4.4.4   FDDI Support                                                     12
4.4.5   Operation in Bridged Token-Ring and FDDI Environments            12
4.5     WANDD Loader                                                     12
4.6     Automatic Generation of NOTIFY(ONLINE) and NOTIFY(OFFLINE)       12
4.7     Using More Than 128 DECnet Connections                           13
4.8     Specifying PU Name and Session Number on Client Connections      14
4.9     Session Termination Support Added for Non-IBM Mainframes         14
4.10    Startup Initialization Delay                                     15
4.11    OS/2 LAN Support Withheld                                        15
5       FIXES INCLUDED IN PEER SERVER V1.4                               15
5.1     Fixed in Version 1.3 ECO-01                                      15
5.2     Fixed in Version 1.3 ECO-02                                      16
5.3     Fixed after release of Version 1.3 ECO-02                        17
6       MAPPING BETWEEN IBM LLC2 AND EMA LLC2 PARAMETERS                 20
7       KNOWN IBM RESTRICTIONS                                           21
7.1     Configuring Multiple Lines as PU T2.1 on the Peer Server         21
7.2     IBM 3745 Scanner Problem Running Above 230Kbps                   21
7.3     NCP Problems with SDLC Mixed Modulo Stations on a
        Multipoint Line                                                  22
7.4     NCP problems with modulo 128 Token Ring Stations                 22
7.5     INIT-SELF rejected with sense code 10105006                      22
7.6     VTAM ABEND S0C4 at ISTATCTR+1F0                                  23
8       KNOWN PROBLEMS AND PRODUCT RESTRICTIONS                          23
8.1     Common Trace Facility (CTF) Notes                                23
8.1.1   LU Tracepoints Not Supported                                     23
8.2     Restrictions with Network Management                             23
8.2.1   NCL Delete Transmission Group "Wrong State" Exception            23
8.2.2   NCL Enable SDLC Link Station "Invalid Parameter" Exception       24
8.2.3   LocalEntityName Instance Specification                           24
8.2.4   NCL Wild Card Parsing of Peer Server entities                    25
8.2.5   System crash with repeated enable/disable of LLC2 entities       26
8.2.6   Problem with authorization                                       28
8.3     Restrictions with SNA LU Services                                30
8.3.1   Independent LU Capability Problem with Passive Listens           30
8.4     SDLC Datalink Restrictions                                       30
8.4.1   Multipoint Full Duplex Configuration Requires TWA                30
8.4.2   Using the PBXDI ISA-bus synchronous communications controller    30
8.5     QLLC Datalink Restrictions                                       31
8.5.1   Temporary TGs Lack Automated Call Startup                        31
8.5.2   Filtername Mismatch Error is Ambiguous                           31
8.5.3   QLLC Link and Station must be Enabled Before TG                  31
8.6     TN3270 Server restrictions                                       32
8.6.1   Occasional erroneous output associated with the TN3270 server    32
8.6.2   TN3270 Server drops connection when client does not respond
        properly                                                         32
8.7     LLC2 Datalink performance issues                                 32
8.7.1   FDDI performance                                                 32
9       FILES INSTALLED/MODIFIED                                         33

TABLES

1       IBM LLC2 and EMA LLC2 Parameter Mapping                          20
2       Peer Server Installed Files                                      33
3       Peer Server Configuration Files                                  34
4       Peer Server Files Used for Logging                               35
5       Peer Server Modified Files                                       35

1 Introduction

These release notes are for Version 1.4 of the DIGITAL SNA Peer Server.
Following product installation, this file can be found in
/var/sna/t21_V14-0_release_notes. The abbreviated Peer Server product name
is used throughout this document.

1.1 Version 1.4 Overview

The primary enhancements provided by Version 1.4 of the Peer Server are:

o Support for DIGITAL UNIX 4.0n. A significant aspect of this feature is
  that Peer Server V1.4 no longer uses Common Agent for management. The
  Common Agent capability was deleted from DIGITAL UNIX 4.0. This means
  that Peer Server management must utilize NCL. (Management via SNMP
  required Common Agent and therefore is not supported in Peer Server
  V1.4.) Note that Peer Server V1.4 is NOT supported on systems running
  versions prior to DIGITAL UNIX 4.0.

o Support for 10,000 concurrent sessions. (Previous versions of the Peer
  Server supported 1,024 concurrent sessions.) Refer to Section 4.1 for
  more information on system requirements for allowing 10,000 concurrent
  sessions.

o Full support for Extended TN3270 as specified in RFC 1647, TN3270
  Enhancements. This version supports all clients that abide by the RFC.
  (This includes the Apertus TN3270 client, which supports the TN3270
  Extensions.) Additional capabilities are also incorporated into this
  Extended TN3270 server.
  These enhancements include error logging, enhanced configuration
  capability, and better handling of illegal or improper client responses.
  Clients using TN3270 capability without the extensions are also
  supported. Refer to Section 4.2 for more information on Extended TN3270
  support.

Peer Server V1.4 is a functional superset of Version 1.3. All capabilities
supported in V1.3 are supported in V1.4, with the exception of the Common
Agent support mentioned above.

Please note that there is a new release of the Peer Server documentation in
conjunction with this release of the Peer Server software.

2 Supporting Software

The Peer Server V1.4 kit no longer bundles Peer Server dependent software.
You must have the supporting software installed before you can install the
Peer Server software. This includes the correct version of the DIGITAL UNIX
operating system and certain required DECnet/OSI for DIGITAL UNIX and X.25
for DIGITAL UNIX subsets. For optional DECnet and/or X.25 functionality,
DECnet/OSI and/or X.25 software must also be installed and configured. The
DIGITAL SNA Peer Server Installation and Configuration manual describes
these requirements. See the DECnet/OSI and/or X.25 documentation if those
functionalities are required.

2.1 DIGITAL UNIX

The Peer Server V1.4 software installs on DIGITAL UNIX V4.0 or higher.

2.2 DECnet/OSI

The Peer Server can use both TCP/IP and DECnet networks. If you plan to use
DECnet, you must install DECnet/OSI for DIGITAL UNIX V4.0 or later.

If you wish to install or upgrade DECnet/OSI software on a system that is
already running a version of the Peer Server, then you must delete the Peer
Server subsets before installing DECnet/OSI. See Section 3.1 for
instructions on removing the Peer Server software. Once you have installed
and configured DECnet/OSI, you should install the Peer Server V1.4
software.
Unless you are planning to use DECnet communications to access the Peer
Server, there is no requirement that the DECnet/OSI layered product be
installed on the Peer Server system. (However, certain DECnet/OSI subsets
are required to be installed and configured on the Peer Server system. See
the DIGITAL SNA Peer Server Installation and Configuration manual.)

2.3 X.25

With the Peer Server, SNA traffic may be sent over X.25 Packet Switched
Data Network (PSDN) backbones using the Qualified Logical Link Control
(QLLC) protocol. This enables X.25 customers to communicate with their IBM
machines using SNA protocols over X.25 networks. To do so, you need the
X.25 for DIGITAL UNIX Systems V3.0 or later software product (available
separately) installed prior to installing the Peer Server. You must also
have the IBM software resident and configured on the IBM machine (for
example, the X.25 Network Control Program Packet Switching Interface
(NPSI) for IBM mainframe front-end communications processors).

The use of QLLC is supported with both types of X.25 PSDN access provided
by the X.25 for DIGITAL UNIX Systems product, namely: by direct synchronous
communications link (LAPB) or by LAN access to an X.25 Relay node (LLC2).

Unless you are planning to use the QLLC protocol with the Peer Server,
there is no requirement that the X.25 layered product be installed on the
Peer Server system. (However, certain X.25 subsets are required to be
installed and configured on the Peer Server system. See the DIGITAL SNA
Peer Server Installation and Configuration manual.)

If you wish to install or upgrade X.25 software on a system that is already
running a version of the Peer Server, then you must delete the Peer Server
subsets before installing X.25. See Section 3.1 for instructions on
removing the Peer Server software. Once you have installed and configured
X.25, you should install the Peer Server V1.4 software.
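Before starting an installation, the installed-subset listing can be
filtered for the dependent subsets named in Section 3.2. The sketch below
is illustrative only: the sample listing stands in for real `setld -i`
output, and the subset names shown carry version digits in place of "nnn"
on a real system.

```shell
# Sketch: pre-install check that required dependent subsets are present.
# check_subset PREFIX LISTING prints "present" only when a line starting
# with PREFIX is flagged "installed" in the listing.
check_subset() {
    if echo "$2" | grep "^$1.*installed" > /dev/null; then
        echo "$1: present"
    else
        echo "$1: MISSING"
    fi
}

# Simulated `setld -i` output (illustrative subset names and layout).
sample_listing='DNABASE401     installed  DECnet/OSI Base Components
DNANETMAN401   installed  DECnet/OSI Network Management
WDADRIVERS300             WAN Device Drivers'

for s in DNABASE DNANETMAN WDADRIVERS; do
    check_subset "$s" "$sample_listing"
done
```

A subset that appears in the listing but is not flagged "installed" (here,
WDADRIVERS) is reported as missing, which is the state that would require
reinstallation per Section 3.2.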
3 Installation Notes

3.1 Removing Prior Versions

If you have an earlier version of the Peer Server software installed on
your node, you must delete it prior to installing this kit. To see whether
a previous version is installed, issue the command

    setld -i | grep T21

The following subsets must be deleted to completely remove the Peer Server
software prior to installing a new version of the Peer Server software.

    T21MGMTnnn
    T21SRVRnnn

Delete the named subsets listed as installed (substituting the correct
subset numbers for "nnn") using the command

    # setld -d subset subset ...

3.2 Installing or upgrading Peer Server Dependent software

If you install or upgrade the DECnet/OSI and/or X.25 software on which the
Peer Server depends, then you must delete and reinstall the Peer Server
software. See Section 3.1 for instructions on removing the Peer Server
software. Once you have installed and configured the DECnet/OSI and/or X.25
software, you should reinstall the Peer Server software.

The following is the list of Peer Server dependent software that requires
reinstallation of the Peer Server if upgraded or installed after the Peer
Server has been installed.

    CTAANALnnn (X.25)
    CTABASEnnn (X.25)
    WDABASEnnn (X.25)
    WDADATALNKSnnn (X.25)
    WDADRIVERSnnn (X.25)
    ZZAUTILnnn (X.25)
    DNAKBINnnn (DECnet/OSI)
    DNANETMANnnn (DECnet/OSI)
    DNABASEnnn (DECnet/OSI)

3.3 NCL Initialization File Reuse

Previous Peer Server customers can retain the NCL initialization script(s)
configured for earlier versions of the Peer Server, provided they
anticipate the same configuration for their V1.4 installation. (The default
startup script is t21_init_sna_server.ncl in the /var/sna directory.)

4 Operating Notes

4.1 10,000 concurrent sessions system requirements

In order to establish a large number of concurrent sessions, you must
consider several requirements.

4.1.1 Memory and swap space requirements for 10,000 sessions

As the number of sessions increases, the amount of real and virtual memory
required increases linearly.
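This linear scaling can be evaluated directly. Using the per-50-session
increments from the sizing equations that follow (3.58 megabytes of virtual
memory and 1.8 megabytes of resident memory per 50-session block, plus one
block of overhead), a short awk sketch computes the totals for an arbitrary
session count:

```shell
# Sketch: compute Peer Server memory/swap needs from the documented
# per-50-session increments (3.58 MB virtual, 1.8 MB resident).
sessions=10000
awk -v n="$sessions" 'BEGIN {
    blocks = (n / 50) + 1             # 50-session blocks, plus one extra
    printf "virtual:  %.2f MB\n", blocks * 3.58
    printf "resident: %.2f MB\n", blocks * 1.8
}'
```

For 10,000 sessions this reproduces the figures given below: 719.58 MB of
virtual memory and 361.80 MB of resident memory.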
The equations to determine memory and swap space requirements are:

    Virtual_memory  = ((NumberOfSessions/50) + 1) * 3.58 megabytes
    Resident_memory = ((NumberOfSessions/50) + 1) * 1.8 megabytes

For example, to support 10,000 concurrent sessions, Virtual_memory = 719.58
megabytes and Resident_memory = 361.8 megabytes. These are the minimum swap
space and physical memory requirements associated with the Peer Server
supporting 10,000 sessions. Overhead for various configurations of the
operating system, and other disk requirements, must be added to these Peer
Server requirements to determine overall system requirements. Operation
with fewer concurrent sessions scales linearly according to the above
equations.

4.1.2 System tuning associated with 10,000 concurrent sessions

DIGITAL UNIX V4.0 limits the number of files that can be concurrently
opened by a process to 4096. To allow a process to open more file
descriptors (including sockets), and therefore increase dynamic port
allocation, the following steps are necessary:

1. Add a script named /sbin/rc3.d/S02kernel with the following lines:

    #!/bin/sh
    dbx -k /vmunix < /modify.kernel

2. In the root directory, create the /modify.kernel file with the following
   lines:

    p open_max_hard
    a open_max_hard = 32768
    p open_max_hard
    p open_max_soft
    a open_max_soft = 32768
    p open_max_soft
    p ipport_userreserved
    a ipport_userreserved = 32768
    p ipport_userreserved
    quit

3. In order to use the above patch, modify the startup script file
   /sbin/init.d/t21_sna_server. At the very beginning of the file, add
   these 2 lines:

    ulimit -n unlimited
    echo `ulimit -n`

4.1.3 Modification of system parameters necessary for 10,000 concurrent sessions

To support 10,000 concurrent sessions, you must set or modify several
variables as described in the sections that follow.

4.1.3.1 Variables in the /sys/conf/NODE-NAME file

Set several variables in the /sys/conf/NODE-NAME file according to the list
below. If these variables are not currently in the file, then you must add
them.
If they are already in the file, then change the assigned values to reflect
those shown below. In /sys/conf/NODE-NAME (where NODE-NAME is your system
node name) add the following lines with the corresponding values:

    #
    # Special options (See the configuring the kernel chapter
    # in the DIGITAL UNIX System Administration manual.)
    #
    dfldsiz    134217728
    maxdsiz    2147483648
    dflssiz    536870912
    maxssiz    536870912
    maxusers   2048
    max_vnodes 32768

4.1.3.2 Variables in the /etc/sysconfigtab

Modify /etc/sysconfigtab per the following list for the variables
associated with the listed subsystems:

    proc:
        per-proc-stack-size = 8388608
        per-proc-data-size = 1073741824
        maxusers = 2048
        task-max = 16404
        thread-max = 32808

    vm:
        vm-mapentries = 40960
        vm-vpagemax = 1048576

4.1.4 Build kernel and reboot system

The final step to incorporate the above changes is to build the kernel.
Execute the following commands.

    # doconfig -c
    # cp /vmunix /vmunix.sav
    # mv /sys/conf/vmunix /vmunix
    # shutdown -r now

4.1.5 Summary of system parameters to modify

    SUBSYSTEM   ATTRIBUTE            DEFAULT        DESIRED VALUE
    proc        per-proc-stack-size  33554432       8388608
    proc        per-proc-data-size   134217728      1073741824
    proc        open-max-soft        4096           32768
    proc        open-max-hard        4096           32768
    proc        maxusers             64             2048
    proc        task-max             20+8*maxusers  16404
    proc        thread-max           2*task-max     32808
    vm          vm-mapentries        200            40960
    vm          vm-vpagemax          16384          1048576
    /sys/conf/  dfldsiz              -              134217728
    /sys/conf/  maxdsiz              -              2147483648
    /sys/conf/  dflssiz              -              536870912
    /sys/conf/  maxssiz              -              536870912
    /sys/conf/  maxusers             -              2048
    /sys/conf/  max_vnodes           -              32768

4.1.6 DECnet/OSI patch

With the current DECnet/OSI DNABASE subset, a restriction of 4000
concurrent connections is imposed. There is a patch available for the
DECnet/OSI product that overcomes this limitation. Replace
/usr/var/opt/DNABASE401/var/subsys/dna_base.mod with the DECnet patch. This
patch allows 10,000 links by changing the following parameters using NCL.
(See Section 4.7 for more information.)
    # ncl
    ncl> set nsp max remote nsap 10050
    ncl> set nsp max transport connect 10020

4.2 Extended TN3270 operation

The TN3270 Server has been extensively modified to provide full support for
TN3270 Enhancements as specified in RFC 1647. In addition, the TN3270
configuration utility has been upgraded to support the optional
capabilities in the RFC and to allow concurrent TN3270 connections over
multiple ports and with different configurations. Complete documentation of
enhanced TN3270 operation and configuration is included in the DIGITAL SNA
Peer Server Management Manual.

The TN3270 Enhanced Server is automatically installed with the Peer Server.
Configuration and automatic startup are optional.

4.3 Synchronous Communications Support (SDLC)

As with earlier versions of the Peer Server, V1.4 supports IBM's
Synchronous Data Link Control (SDLC) WAN datalink protocol for the SNA
connection to the adjacent PU T2.1 or PU T4 node. (The AS/400 is an example
of a PU T2.1 node. A mainframe front-end communications processor, such as
a 3745 running IBM's NCP software, is an example of a PU T4 node.)

As with earlier versions, the synchronous port used by the Peer Server is
provided by a combination of the DIGITAL UNIX WAN Device Drivers software
and the synchronous communications hardware itself. Four types of
synchronous hardware are supported with V1.4 of the Peer Server: the
built-in, integral SCC sync port; the optional DSYT1 high speed
TURBOchannel adapter (DSYT1-BA); the optional DNSES EISA synchronous
communications controller; and the optional PBXDI ISA synchronous
communications controller.

The SCC port supports the V.24/RS-232 interface only, at speeds up to 19.2
kb/s. The optional DSYT1 (also known as the DIGITAL WANcontroller 720) and
the DNSES adapters both contain two lines per device and support SDLC at up
to T1/E1 speed (2.048 Mb/s for a single line, and up to 64 kb/s when both
lines are used).
The DSYT1 and DNSES support both the V.24/RS-232 and the V.35 physical
interfaces. The PBXDI controller supports two lines; one model (PBXDI-AA)
supports the V.24/RS-232 interface, and another (PBXDI-AB) supports both
RS-232 and V.35 interfaces. (A third model (PBXDI-AC) exists but should not
be used for SDLC.) External clocking (modems, modem eliminators, or NCP
"direct attach" lines) is required in all cases.

The SCC port is present on all DIGITAL 3000 systems supported by the Peer
Server. On the 3000-300 and 3000-300L, however, the SCC port may only be
used with the Peer Server when it is not otherwise in use as the console
port (that is, a monitor must be used for the console).

Multiple DSYT1, DNSES, or PBXDI devices may be used for a higher number of
concurrent links (each device having two links). All require a free bus
slot (TURBOchannel, EISA, and ISA bus, respectively), and slot availability
varies by specific system model. For example, the 3000 Model 800/S has up
to 6 free TURBOchannel slots (thus 13 links - 6 DSYT1's and a single SCC -
may be used in the extreme case). Similarly, the maximum number of DNSES
units is limited by free EISA slots and the specific hardware
configuration. Consult the Digital Systems and Options Catalog (Alpha
product hardware information) for full configuration details.

As in earlier versions, the Peer Server can be configured with multipoint
secondary station support (multiple SDLC stations on a given physical link
when that link is configured with the Peer Server assuming the secondary
role). By providing more than one station address on a link, this feature
permits the Peer Server to support more than 255 dependent Logical Units on
a single physical line, thus saving costs.

4.3.1 Modem Control and Link Configurations

4.3.1.1 Naming of Synchronous Lines

The Modem Connect Line entities are assigned names based on the order in
which the devices are named during configuration.
When you execute or re-execute wddsetup, make sure that the devices are
named in the same order as previously, or else the existing configurations
of the Peer Server or other products may become invalid.

    Please specify which device(s) {dsy scc none} are to be used. [scc]: dsy scc

4.3.1.2 Modem Connect Line Speed Characteristic

The following pertains to half-duplex and multipoint configurations, in
which the local Peer Server DTE is toggling its Request to Send (RTS).

Because of a limitation of the SCC, DSYT1, and DNSES hardware with regard
to transmit interrupts, the respective device driver must compute the time
to retain RTS assertion following the last data byte transmitted. The delay
time is a function of the actual line speed, and the driver must therefore
be aware of the speed of the link.

To accommodate the RTS drop delay computation, the Modem Connect Line
entity includes a characteristic attribute called "Speed." If the Peer
Server is to be used in a half-duplex or multipoint configuration, the
Speed characteristic must be set to the actual speed at which the line is
being externally clocked. Speed is entered in bits per second; for example,
a 19.2 kb/s link would have Speed set to 19200. (When Speed is set to zero,
the RTS drop computation assumes a default line speed of 1200 bits per
second.)

Failure to set the speed accurately leads to unpredictable behavior on
half-duplex and multidrop lines, ranging from reduced line throughput to
transmission failures. On a multipoint line, it is possible that a
misconfigured Peer Server line could affect data transfer between other
tributary stations and the primary station.

The wddsetup step of the V1.4 Peer Server (/usr/sbin/wddsetup), invoked as
part of the product installation, includes prompting for line speed when
the line specified is half duplex or full duplex multipoint.
If the DIGITAL UNIX WAN Device Drivers are already present and configured
on the Peer Server target node and the wddsetup step is not re-run during
the Peer Server installation, you must ensure that the Speed characteristic
is properly set for the lines to work properly in half duplex and
multipoint modes. Re-running /usr/sbin/wddsetup, or manually editing the
Modem Connect startup NCL file in /var/dna/scripts/wdd.mconnect.ncl and
restarting, renders the change permanent.

4.3.2 Additional Sync Comm Information

4.3.2.1 SCC Device Name

The SCC built-in synchronous port is referred to by NCL management and the
WAN Device Driver scripts (wddsetup) as the "sscc" device, and as
communications port "sscc0". The latter forms must be used when entering
network management or configuration commands.

4.3.2.2 CCITT V.24 Incompatibility with EIA RS-232C (DSYT1 and DNSES only)

An incompatibility exists between the CCITT V.24 and EIA RS-232C physical
interface standards with respect to pins 18, 21 and 23. The DSYT1 and DNSES
are engineered for strict accordance with the newer V.24 standard, and are
therefore incompatible with the older RS-232C interface. In order to permit
the DSYT1 and DNSES to be used with RS-232C compliant devices, a V.24
hardware adapter connector, part number 12-27591-01, is supplied with the
V.24 cable set (BC19D-02). The adapter is attached to the DCE end of the
BC19D-02 V.24 cable. Refer to the information sheet supplied with the cable
hardware for more information (info sheet EK-BS19D-IS-001).

Failure to use the adapter where indicated results in an inability to
activate the line and possibly even damage to the modem or interface
module. If you are unsure whether the adapter should be used or not, it
should be fitted as a matter of course. Note that doing so may disable
remote and local loop functions.

This issue does not apply to the SCC sync port, which has tolerance for the
difference in the standards. No adapter is required with the SCC.
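The impact of the Speed characteristic described in Section 4.3.1.2 can be
illustrated with a back-of-envelope calculation. The formula below (8 bits
per byte divided by the line rate) is illustrative only and is not the
driver's actual RTS drop algorithm; it simply shows how long the last few
bytes take to drain at different speeds, and therefore why a Speed of zero
(assumed 1200 b/s) holds RTS far longer than a correctly configured 19200
b/s line would need.

```shell
# Illustrative only: time for N bytes to serialize at a given line speed,
# roughly the interval RTS must stay asserted after the last transmit
# interrupt. Not the product's actual computation.
rts_hold_ms() {
    # $1 = bytes still to serialize, $2 = line speed in bits per second
    awk -v bytes="$1" -v speed="$2" \
        'BEGIN { printf "%.2f\n", (bytes * 8 / speed) * 1000 }'
}

rts_hold_ms 4 19200   # 4 bytes at 19.2 kb/s: about 1.67 ms
rts_hold_ms 4 1200    # same bytes at the 1200 b/s default: about 26.67 ms
```

The sixteen-fold difference between the two results is one reason a Speed
of zero on a fast half-duplex line reduces throughput: the line idles with
RTS held far longer than necessary between transmissions.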
4.4 Logical Link Control Type 2 (LLC2)

4.4.1 Token Ring Support

The hardware adapter required for Token Ring is the DIGITAL TRNcontroller
700 (DETRA) TURBOchannel card or the EISA Token Ring Communications
Controller (DW300/DT424). Both accommodate 4 and 16 Mb/s ring speeds
(selectable). A single hardware adapter may be used simultaneously with
multiple protocols (for example DECnet/OSI, IP, and X.25), using different
SAPs.

4.4.2 Ethernet 802.3 vs. Ethernet V2

Peer Server V1.4 (and DIGITAL UNIX) supports Ethernet using the IEEE 802.3
frame format, and not Ethernet V2. This may be an issue when configuring
SNA over Ethernet to an IBM SNA node, which typically has a configuration
option for Ethernet 802.3 or V2 (with protocol type 80d5).

Ensure that your IBM Ethernet implementations (both destination nodes and
bridges, such as the IBM 8209) are configured to use the 802.3 format for
Ethernet frame transmission for communication with the Peer Server.

4.4.3 Ethernet Support

All Digital-supplied Alpha Ethernet adapters supported under DIGITAL UNIX
V4.0 are supported with this version of the Peer Server.

4.4.4 FDDI Support

All Digital-supplied Alpha FDDI adapters supported under DIGITAL UNIX V4.0
are supported with this version of the Peer Server.

4.4.5 Operation in Bridged Token-Ring and FDDI Environments

When running the Peer Server in a bridged environment, it is possible that
an intervening bridge or LAN segment supports a smaller maximum frame size
than that configured in the two communicating systems. The Peer Server
detects this and automatically reduces the maximum frame size it uses in
this case.

4.5 WANDD Loader

The wdd_loader program runs as a daemon process and is responsible for
handling microcode loading and dumping for those synchronous devices that
require it. This daemon must not be killed; doing so may result in a system
panic.
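Because killing wdd_loader can panic the system, it is worth confirming
that the daemon is still running before troubleshooting a microcode load
failure rather than restarting anything by hand. A minimal sketch follows;
the process listing below is simulated for illustration, and real `ps`
column layout varies by system.

```shell
# Sketch: report whether wdd_loader appears in a process listing.
# check_wdd_loader LISTING prints a status line based on the listing text.
check_wdd_loader() {
    if echo "$1" | grep wdd_loader > /dev/null; then
        echo "wdd_loader is running"
    else
        echo "WARNING: wdd_loader is not running"
    fi
}

# Simulated `ps` output (illustrative PIDs and paths).
sample='  412 ??  S  0:01.20 /usr/sbin/wdd_loader
  518 ??  I  0:00.05 /usr/sbin/syslogd'

check_wdd_loader "$sample"
```

On a live system the listing argument would come from the local ps command
rather than a here-string; the point is only to check presence, never to
send the daemon a signal.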
4.6 Automatic Generation of NOTIFY(ONLINE) and NOTIFY(OFFLINE)

Starting with V1.2, the Peer Server by default sends ACTLU responses that
indicate the LU is not available. The product then sends NOTIFY(ONLINE)
when an access routine connects to the LU, and NOTIFY(OFFLINE) when the
access routine disconnects. This is different from previous versions of the
product and also different from the PU2.0 Gateway-ST and Gateway-CT
products.

This behavior can be modified such that the Peer Server behaves exactly as
previous versions and products if necessary, but doing so means that the
product cannot be used for 3270 Terminal Emulator access to AS/400 systems.

While this behavior is typically closer to real IBM equipment, it does
cause problems when connections to the LU are made and broken in quick
succession. It also causes problems when the LU is in session when the Peer
Server is deactivated, as the next time the LU is used the host application
may attempt to re-BIND to the LU (which can override the real session
activation request sent by the client).

To turn off this feature, you should edit the file
/var/subsys/t21scl.stanza and modify the line

    use-notify = 1

to be

    use-notify = 0

then enter the following command (as root):

    # sysconfigdb -u -f /var/subsys/t21scl.stanza t21scl

This modifies the permanent database, which takes effect the next time the
system is booted. To modify the running system, use the following command:

    # sysconfig -r t21scl use-notify=0

The new setting takes effect the next time each LU is activated.

4.7 Using More Than 128 DECnet Connections

The default DECnet configuration allows for a maximum of 128 concurrent
DECnet links. If you wish to have more than 128 DECnet connections into the
Peer Server system, you must edit the DECnet NSP startup script.

Edit /var/dna/scripts/start_nsp_transport.ncl and add the following two
lines between the "create nsp" and "enable nsp" commands.
The maximum remote nsaps characteristic must be set to at least 3 greater
than the maximum transport connections characteristic.

    set nsp maximum transport connections = x
    set nsp maximum remote nsaps = y

In addition, edit /var/dna/scripts/start_osi_transport.ncl and add the
following two lines immediately before the "enable osi transport" commands.
The maximum remote nsaps characteristic must be set to at least 3 greater
than the maximum transport connections characteristic.

    set osi transport maximum transport connections = x
    set osi transport maximum remote nsaps = y

In the above two cases, x is the number of DECnet connections that you wish
to allow, and y is at least 3 more than that. (See Section 4.1.6 for
information on allowing more than 4000 connections.) Refer to the
DECnet/OSI documentation for full details.

4.8 Specifying PU Name and Session Number on Client Connections

The Peer Server Logical Units (LUs) are named entities and have an
attribute called "Old Name" that can be set so that existing client
applications can continue to connect to specific LUs using the PU name and
Session Address syntax used with the DECnet/SNA Gateway-ST and -CT. Specify
the Old Name in the format [pu-name.][session-number].

If a client connection is received by the Peer Server specifying only a
PU-name and no session number, the Peer Server will not use the PU-name
when attempting to match the connection to an LU with an "old name" set.

4.9 Session Termination Support Added for Non-IBM Mainframes

Certain IBM plug compatible (PCM) mainframe SNA implementations, e.g.
Fujitsu, are known to require dependent SLU initiated session termination
with (Rq)TERM-SELF instead of (Rq)UNBIND. This support has been added to
Peer Server LU Services. In the case where an LU-LU session (Rq)UNBIND sent
from the Peer Server to the mainframe is rejected with a 1003 -Rsp, a
TERM-SELF is sent from the Peer Server to solicit an UNBIND from the PLU.
This change addresses PCM compatibility without affecting standard IBM
mainframe session behavior.

4.10 Startup Initialization Delay

The time between starting the Peer Server (from system boot, running of the
t21icu Configuration utility, or explicit execution of "t21_sna_server
start" from /sbin/init.d) and full initialization may be on the order of
minutes, particularly if your specific t21_init_sna_server.ncl has a very
large number of LUs and related entities specified.

Confirmation of initialization completion can be seen by running a DIGITAL
UNIX system utilization utility (for example iostat, to show CPU
utilization dropoff following completion) or by interactively running NCL
on the Peer Server machine, confirming final entity enabling. In addition,
the file /var/tmp/t21_init_sna_server.log contains the output from the
initialization.

4.11 OS/2 LAN Support Withheld

As documented in the DIGITAL SNA Peer Server Guide to IBM Resource
Definition, formal support for connections to OS/2 Extended Services and
Communications Manager is limited in this release to SDLC.

5 Fixes included in Peer Server V1.4

The following sections describe restrictions and problems present in Peer
Server V1.3 that have been fixed. All these fixes have been consolidated
into Peer Server V1.4.

5.1 Fixed in Version 1.3 ECO-01

Some restrictions and problems present in the V1.3 Peer Server product have
been resolved in the Version 1.3 ECO-01 software. These fixes have been
included in Peer Server V1.4 and are listed below.

1. Resolved a UNIX kernel memory corruption problem.

   Prior to ECO-01, the Peer Server might corrupt kernel pool during
   outbound session allocation. This problem could result in various
   crashes, with "Unaligned kernel space access from kernel mode" being
   prevalent. This problem could occur in any version of DIGITAL UNIX;
   however, the new memory allocation scheme in DIGITAL UNIX V3.2C made the
   problem more likely to occur.
   The problem was resolved by allocating the correct buffer size required
   for the connect response message.

5.2 Fixed in Version 1.3 ECO-02

Some restrictions and problems present in the V1.3 Peer Server product have
been resolved in the Version 1.3 ECO-02 software. These fixes have been
included in Peer Server V1.4 and are listed below.

1. Resolved a system crash when the token ring was giving errors.

   When the Peer Server was utilized with a token ring and the token ring
   network began to report errors to the Peer Server, the Peer Server would
   cause a system crash. The characteristic of this crash was that it would
   fail freeing an mbuf at line 1342 in module
   t21llc/src/t21llc_dlpi_actn.c.

2. Resolved a problem with passive connections.

   The Peer Server was not handling GAP V3 passive connections properly
   where no USS data was requested by the client. The symptom of this
   problem was that the OpenVMS Printer Emulator V1.3 would hang a session
   when that session terminated.

3. Improved TCP/IP transport performance.

   When the Peer Server was accessed by an access routine using TCP/IP as
   the transport, performance was non-optimal. This was because the PAI
   used in the CAD daemon process used write() instead of writev() when
   sending TCP/IP messages to the access routines. The Peer Server now uses
   the writev() service so that the message header doesn't get separated
   from the message body.

4. Corrected the software version number in the transmitted XID.

   The Peer Server was sending an obsolete software version number (V110)
   in the XID software subvector for the Product Set ID. Now it sends the
   correct current version number.

5. Prohibited use of permanent TGs that are in the connecting state.

   The Peer Server was attempting to use a permanent TG while it was in the
   connecting protocol state, in the expectation that it would eventually
   become active. Even though this is a valid assumption, it is better to
   give an error and allow the user to investigate why the TG has not come
   up fully.

6.
Prevented SDLC circuits from resetting every inactivity timer interval.

   If the Peer Server was operated with a product such as MEGAPAC (an SDLC spoofing product) that generated multiple SNRMs, the SDLC inactivity timer would fire every inactivity timer interval and reset the SDLC station, with the result that all sessions would be taken down and have to be reestablished when the station immediately restarted. This fix corrects the problem reported in IPMT case CFS.38280.

7. Fixed the internal trace facility on multiprocessor systems.

   The internal trace facility truncated its output on multiprocessor systems. This fix corrects that problem.

8. Fixed the CTF QLLC analysis routine.

   CTF would fail with a SEGV in some cases when analyzing QLLC trace files. This fix corrects the problem reported in IPMT case CFS.35011.

9. Fixed a GAP version negotiation problem.

   If multiple clients connected to the Peer Server simultaneously, it was possible for an incorrect GAP version number to be sent back to a client. This fix corrects that problem.

10. Fixed assorted memory leaks in the t21cad process.

   The t21cad processes would fail to return heap memory and thread resources to the system. This could lead to hanging or unexpected termination of the t21cad processes. This fix corrects that problem.

11. Fixed an incorrectly reported byte offset in the XID Negotiation Control Vector.

   If the Peer Server detected an error in a received XID, it would report an incorrect error byte offset that was always a negative number. This fix corrects that problem.

5.3 Fixed after Release of Version 1.3 ECO-02

Some restrictions and problems present in the V1.3 Peer Server product were resolved after the release of the Version 1.3 ECO-02 software. These fixes have been included in Peer Server V1.4 and are listed below.

1.
Disabling an SDLC link station caused a kernel memory fault.

   Certain combinations of Enable and Disable commands issued to the SDLC Link and Station entities caused the UNIX system to crash with a kernel memory fault.

2. Resolved a system crash when a QLLC station was disabled and enabled.

   When a QLLC station was disabled and then subsequently enabled again, the Peer Server would cause a system crash. The characteristic of this crash was a failure in routine dupb, called from routine get_segment, called from routine send_remote_mu, called from routine send_remote_nonsess_mu.

3. Increased the flow control high and low watermarks of the Peer Server components.

   When a high-speed datalink is used to exchange large RUs with large pacing windows, backpressure results from insufficient buffering levels within the Peer Server. Increasing the watermarks improves flow through the Peer Server under these conditions.

4. Retransmit unacknowledged SDLC I-frames.

   In SDLC TWA mode, if an I-frame was received with the P bit set and an N(r) that did not acknowledge all the I-frames we had previously sent, we were discarding that I-frame and not retransmitting the I-frames that the link partner missed. Consequently, the partner would eventually disconnect the link. The fix is to not ignore the received I-frame and to retransmit the missed I-frames.

5. Enabled TN3270 server LUs to be restricted.

   Making an LU restricted precluded TN3270 clients from using it. The TN3270 server now presents correct authentication information, enabling a system manager to restrict particular LUs to particular TN3270 clients. This change fixes the problem reported in IPMT case CFS.44037.

6. Made the TN3270 server listen over all network adapters.

   Previously, the TN3270 server only listened for connections on the network adapter corresponding to the Peer Server system's host name/address. It now listens on all network adapters.
   This enables the Peer Server system to be known by multiple Internet addresses. This change fixes the problem reported in SNAGWY note 6783.

7. Limited the Product Set ID Control Vector to 60 bytes.

   The Hardware Common Name Product ID subfield of the Product Set ID CV could exceed 15 characters, causing the entire CV to exceed 60 bytes. This field is now limited to a maximum of 15 characters, as required.

8. SDLC frames were not always transmitted when of maximum frame size.

   SDLC frames of the maximum size configured for the link would not get transmitted when the maximum frame size was set to exactly match MAXDATA. This was due to the omission of the SDLC frame header bytes from the calculation of the maximum frame size provided to the sync driver. This problem was reported by installations using the PBXDI-AA and PBXDI-AB synchronous interface cards.

9. Inbound X.25 QLLC call is rejected with the error "No Filters in Use".

   If the Wide Area Networking Support V2.0 for DIGITAL UNIX product is installed replacing the V1.3 version, X.25 QLLC calls inbound to the DIGITAL UNIX system from the IBM system are rejected. The Peer Server logs the event "Incoming Call Failed" with a reason of "No Filters in Use". This change fixes the problem reported in IPMT case CFS.44044.

10. The Peer Server was clearing the ASPI bit (Adaptive Session Pacing Indicator) in Bind responses when the original Bind was not extended (no Control Vector 60).

11. Changed the T21kit builder to not strip debug symbolics from ECOs.

12. Keep X.25 receive credits at 2 instead of 1.

13. Reset signal handlers for synchronous signals to the UNIX default signal handler in the t21cad process.

   This overrides the DECthreads handler for the synchronous signals, which turns them into exceptions. Resetting the synchronous signal handler to the default preserves the stack and the PC where the signal occurred, which provides useful core images.

14.
Pass unbinds from Access Routines through the Peer Server unchanged.

   Previously, the Peer Server mapped all unbinds sent from an Access Routine to a normal unbind before forwarding it to the remote LU.

6 Mapping Between IBM LLC2 and EMA LLC2 Parameters

The following table shows the mapping between the IBM LLC2 parameters and the EMA LLC2 parameters (settable through NCL).

NOTE: The default value for the Holdback Timer is 500ms. To achieve a reasonable level of throughput, this parameter must be set to a value of 10ms.

Table 1: IBM LLC2 and EMA LLC2 Parameter Mapping

IBM Parameter            EMA Parameter        Comments

Reply Timer (T1)         Acknowledge Timer

Inactivity Timer (Ti)    Not Implemented

Receiver                 Holdback Timer       Has a granularity of 10ms
Acknowledgment                                (0.01 seconds). It should
Timer (T2)                                    be set to 0ms for best
                                              throughput at the expense
                                              of additionally consumed
                                              CPU time; 20ms is a
                                              reasonable value.

Maximum Length of        Maximum PDU Size     This is a status attribute
I-Field (N1)                                  of the LLC2 SAP, so it
                                              cannot be modified. It is
                                              determined from the LAN
                                              type.

                         Maximum Data Size    This is a characteristic
                                              attribute of the LLC2 SAP
                                              LINK and can be modified.
                                              This attribute further
                                              constrains the Maximum PDU
                                              Size of the LLC2 SAP.

Maximum Number of        Retry Maximum
Retransmissions (N2)

Number of I-Format       Not Implemented
LPDUs Received Before
Sending Acknowledgment
(N3)

Number of                Not Implemented
Acknowledgments Needed
to Increment Ww (Nw)

Maximum Number           Remote Receive       This is a status attribute
of Outstanding           Window Size          and so cannot be modified.
I-Format LPDUs                                It is determined from the
(Tw)                                          receive window size
                                              advertised by the remote
                                              system in its XID frame.
Receive Window           Local Receive
Size (RW)                Window Size

7 Known IBM Restrictions

The following sections list problems identified with IBM software that you may encounter when installing or running the Peer Server in your environment. APAR and PTF numbers are provided that can be used to ensure that your VTAM/NCP installation has the fixes applied.

7.1 Configuring Multiple Lines as PU T2.1 on the Peer Server

If you are configuring multiple lines on the Peer Server to connect to the IBM front end (3725/3745) as a PU T2.1 link (XID=YES on the PU macro), you also need to code CONNTYPE=LEN on the PU macro. If CONNTYPE=LEN is not coded, the activation of the second line fails with a sense code of 081D. This problem is due to the Peer Server sending the same CP name on each link (which is consistent with an SNA LEN node).

7.2 IBM 3745 Scanner Problem Running Above 230 Kbps

When connecting a DSYT1 or DNSES high-speed SDLC line to an IBM 3745 communications controller running a line speed of 256 kbps, the IBM controller may report hardware underruns. The symptoms of the problem on the Peer Server are that the link fails and restarts, or that a large number of SDLC frames are retransmitted. This problem is not seen on links running at 230 kbps or below. Contact your Digital Customer Support Center for an update or resolution to the problem if you plan to run SDLC links at that speed.

7.3 NCP Problems with SDLC Mixed Modulo Stations on a Multipoint Line

A problem exists in NCP V4, V5, and V6 whereby an SDLC multipoint line configured with PU T2.1 stations of both normal (modulo 8) and extended (modulo 128) SDLC window sizes can erroneously issue a modulo 128 SDLC poll from the NCP to a modulo 8 station.
This has been corrected with the following APARs from IBM:

o NCP V4 IR24362
o NCP V5 IR24307
o NCP V6 IR24170

7.4 NCP Problems with Modulo 128 Token Ring Stations

A problem exists in NCP whereby the NCP always indicates that it is able to receive 128 frames between acknowledgements from a modulo 128 token ring station, when in fact it can receive far fewer. This results in many unnecessary retransmissions and can seriously degrade performance.

This has been corrected with the following APARs from IBM:

o NCP V6 IR25667

7.5 INIT-SELF Rejected with Sense Code 10105006

Under heavy traffic conditions, VTAM can intermittently reject INIT-SELF requests with a sense code of 10105006, when in fact the response should be positive.

This has been corrected with the following APARs from IBM:

o VTAM V4R1 OW02909
o VTAM V4R2 OW04173

7.6 VTAM ABEND S0C4 at ISTATCTR+1F0

VTAM can intermittently ABEND while writing an internal trace record when the Peer Server has just established the data link connection over X.25 or token ring.

This has been corrected with the following APARs from IBM:

o VTAM V4 OW06433

8 Known Problems and Product Restrictions

Known restrictions existing in the V1.4 software are detailed in this section.

8.1 Common Trace Facility (CTF) Notes

8.1.1 LU Tracepoints Not Supported

Because of a CTF limitation on the maximum number of concurrently declared tracepoints, the V1.4 release of the Peer Server does not support tracing on the LU entity (LU tracepoints) as described in DIGITAL SNA Peer Server Management. Tracing can be done by Transmission Group, to trace activity across all sessions on the TG, or on an individual session basis (Session tracepoints).
Despite the lack of LU tracepoints, tracing of sessions belonging to a specific LU can be accomplished as shown in the following CTF command example (LU name "t001"):

ctf> start sna lu services lu t001 session *

8.2 Restrictions with Network Management

8.2.1 NCL Delete Transmission Group "Wrong State" Exception

A Transmission Group that is dependent LU capable and has one or more enabled LU Services dependent LUs referencing it cannot be deleted until the LUs are themselves disabled. An attempt to delete a Transmission Group with active dependent LUs results in a "Wrong State" error exception. "Wrong State" is also produced for failed attempts to delete a Transmission Group when it is not first disabled (entity state OFF and protocol state RESET), and both conditions must be considered when "Wrong State" is returned.

An example of the failure follows:

ncl> dele sna cp serv t g tg005

Node 0 SNA CP Services Transmission Group TG005
AT 1994-11-23-10:11:07.000-05:00I-----

FAILED IN DIRECTIVE: Delete
DUE TO: Error specific to this entity's class
REASON: Wrong state
Description: Wrong state

8.2.2 NCL Enable SDLC Link Station "Invalid Parameter" Exception

An ambiguous NCL exception of "Invalid Parameter" is produced when attempting to enable an SDLC link station where the station and/or the parent link have an invalid Send or Receive Frame Size set, respectively. This is most commonly a problem when configuring SDLC for the built-in SCC device, which has an upper frame size limit of 1021 bytes. Note that the exception occurs when enabling the station, not the link. For example, a link configured with a Receive Frame Size of 1024 (too high) is successfully enabled. Its child station, with a Send Frame Size of 1000 (legal), incurs an NCL "Invalid Parameter" exception at the point it is enabled. The solution in this example is to disable the link, correct its Receive Frame Size, re-enable the link, and then enable the station.
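As a sketch, the corrective sequence for this case might look as follows, using the link and station names (sdlc-0 and stn-40) from the failure example below; the exact spelling of the Receive Frame Size attribute in NCL may differ from what is shown here:

```
ncl> disable sdlc link sdlc-0
ncl> set sdlc link sdlc-0 receive frame size = 1021
ncl> enable sdlc link sdlc-0
ncl> enable sdlc link sdlc-0 station stn-40
```

The value 1021 is used because it is the upper frame size limit of the built-in SCC device.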
An example of the failure follows:

ncl> enable sdlc link sdlc-0 sta stn-40

Node 0 SDLC Link sdlc-0 Station stn-40
AT 1994-05-03-13:04:59.000-04:00I-----

FAILED IN DIRECTIVE: Enable
DUE TO: Error specific to this entity's class
REASON: Failure
Description: Failure
Reason = Invalid Parameter

8.2.3 LocalEntityName Instance Specification

With Peer Server V1.0 it was possible to specify a LocalEntityName instance using parentheses, for example:

set sna cp services trans group tg-1 -
    datalink = (sdlc link foo station bar)

In fact, the V1.0 Initialization and Configuration Utility (t21icu) generated this syntax automatically.

Due to changes in NCL processing introduced with DECnet/OSI V2.0, it is no longer valid to specify LocalEntityNames using parentheses; doing so generates an NCL syntax error, for example "SYNTAX ERROR: No match was found for this string."

The Peer Server t21icu has been modified to accommodate the new requirement; however, if you were a Peer Server V1.0 customer and will be running NCL scripts generated with the older t21icu utility (or generated manually), those scripts must be modified in order to work properly with V1.4.

The t21_init_sna_server.log file, located in /var/tmp and created at the time of Peer Server startup, reveals this error.

8.2.4 NCL Wild Card Parsing of Peer Server Entities

Certain combinations of wild cards do not work properly when accessing Peer Server entities via NCL. Examples:

The NCL command:

ncl> show sna lu serv lu t11A5*1

generates the following:

Node 0 SNA LU Services LU T11A5121
AT 1997-04-24-14:08:22.327-04:00I2.051

Identifiers
Name = T11A5121

Node 0 SNA LU Services LU T11A5131
AT 1997-04-24-14:08:22.327-04:00I2.051

Identifiers
Name = T11A5131

We see that two LUs are displayed, but the NCL command:

ncl> show sna lu serv lu t1*1

yields the following error on the same system:

Node 0 SNA LU Services LU t1*1
AT 1997-04-24-14:09:58.760-04:00I2.061

FAILED IN DIRECTIVE: Show
DUE TO: No such Entity Instance exists
The problem is not restricted to LUs, as the following example shows:

The NCL command:

ncl> show sna cp serv t g t*

generates the following:

Node 0 SNA CP Services Transmission Group T3172001
AT 1997-04-24-14:16:52.130-04:00I2.215

Identifiers
Name = T3172001

We see that a TG is displayed, but the NCL command:

ncl> show sna cp serv t g t*1

yields the following error on the same system:

Node 0 SNA CP Services Transmission Group t*1
AT 1997-04-24-14:16:57.512-04:00I2.215

FAILED IN DIRECTIVE: Show
DUE TO: No such Entity Instance exists

This problem has existed on all previous versions of the Peer Server. It will be fixed in the next release.

8.2.5 System Crash with Repeated Enable/Disable of LLC2 Entities

A problem exists on the DIGITAL UNIX V4.0, V4.0A, and V3.n operating systems in which the node crashes when LLC2 entities are enabled and disabled. The problem is NOT related to any Peer Server software. However, certain Peer Server configurations depend on LLC2, and users of the Peer Server who use LLC2 should be aware of this problem.

The following sequence of commands, when repeated enough times, causes the node to crash. Note that the problem has been reproduced on various systems, with various versions of the operating system. The problem is apparently related to the LLC2 DECnet module, as a system crash was predictably repeatable when using LLC2 entities but could not be generated when using other entities (e.g. SDLC). Another interesting aspect of this apparently long-standing LLC2 problem is that the system would not crash when the same NCL commands were invoked at NCL prompts. A crash could only be generated when the commands were redirected to NCL (via file or command line).

The following shows the sequence of commands that causes the system crash.
Initial configuration:

create node 0 llc2
create node 0 llc2 sap SNA-0
set node 0 llc2 sap SNA-0 -
    lan station = CSMA-CD station CSMACD-0, -
    local lsap address = 04
create node 0 llc2 sap SNA-0 link LINK-0
set node 0 llc2 sap SNA-0 link LINK-0 -
    acknowledge timer = 1000, -
    holdback timer = 500, -
    local receive window size = 127, -
    maximum data size = 8000, -
    remote lsap address = 04, -
    remote mac address = 10-00-5A-D4-AA-D3, -
    retry maximum = 10
create csma-cd
create csma-cd station CSMACD-0 communication port tu0
enable node 0 llc2 -
    sap SNA-0
enable node 0 llc2 -
    sap SNA-0 link *
enable csma-cd station CSMACD-0

Commands (in a file) that cause the crash:

ncl disable LLC2 SAP SNA-0 Link LINK-0
sleep 3
ncl disable LLC2 SAP SNA-0
sleep 3
ncl enable LLC2 SAP SNA-0
sleep 3
ncl enable LLC2 SAP SNA-0 Link LINK-0
sleep 3

When the above file is repeatedly executed, the node eventually crashes.

A crash was also generated when using a Token Ring SAP instead of a CSMA-CD SAP. (Apparently the cause of the crash is independent of the particular SAP.)

This serious problem is being addressed and should be fixed in a future release of the DECnet software.

Workaround:

Until this problem is fixed, users should avoid repeatedly enabling and disabling LLC2 SAP and LINK entities.

8.2.6 Problem with Authorization

There are two (probably related) problems with resetting the SNA LU Services Authorization Node characteristic attribute. It is not possible to reset this attribute back to the default 0:. value and allow access when the Transport attribute is set to DECnet. The two manifestations of this problem are:

1. Setting the SNA LU Services Authorization Node characteristic attribute to 0:. does not result in access being allowed when the SNA LU Services Authorization Transport characteristic attribute is set to DECnet. Note that the initial value (i.e. the value at create time) is 0:., and access IS allowed.
   Apparently, when the attribute is dynamically reset to 0:., the setting is not the same as the default, even though the display indicates the same value.

2. Setting the SNA LU Services Authorization Node characteristic attribute to the default, by not supplying a value with the set command, does not result in any change to the attribute. In addition, the t21mad daemon process exits.

Examples:

Case 1:

The ncl command:

sho sna lu ser aut hillman node

results in the following display:

Node 0 SNA LU Services Authorization hillman
AT 1997-01-23-11:50:06.000-05:00I-----

Characteristics
Node = 0:.

If the Node characteristic has never been reset, then access is allowed.

The ncl command:

set SNA LU SERVICES AUTHORIZATION NODE Node = 0:.

results in the following display:

Node 0 SNA LU Services Authorization hillman
AT 1997-01-23-11:50:06.000-05:00I-----

Characteristics
Node = 0:.

Even though the value looks the same, access is denied.

Case 2:

The ncl command:

sho sna lu ser aut hillman node

results in the following display:

Node 0 SNA LU Services Authorization hillman
AT 1997-01-23-11:50:06.000-05:00I-----

Characteristics
Node = DEC:.lkg.snads

The ncl command:

set SNA LU SERVICES AUTHORIZATION NODE

results in the following display:

Node 0 SNA LU Services Authorization hillman
AT 1997-01-23-11:50:06.000-05:00I-----

Characteristics
Node = DEC:.lkg.snads

AND the t21mad daemon process exits.

Note: There is no core file when the t21mad daemon process exits. The t21mad daemon process is immediately restarted by the t21mcd daemon process, unless the set command that caused the failure is executed in quick succession. (The t21mcd algorithm keeps processes running unless they exit twice in 30 seconds.)

Workaround:

It is not possible to set the SNA LU Services Authorization Node characteristic attribute back to the default 0:. value and allow access when the Transport attribute is set to DECnet.
However, access can be allowed if the Transport attribute is set to TCPIP and the SNA LU Services Authorization Internet Node attribute is set to "". (Note that the Transport characteristic attribute does not by itself restrict access. It is just a flag that indicates whether the (DECnet) Node or the Internet Node attribute will be used to determine access.) The only way of actually setting the Node characteristic attribute back to the default of 0:. is by deleting and recreating the Authorization entity instance. If the t21mad daemon is not running, it is necessary to restart the Peer Server.

This will be fixed in the next release.

8.3 Restrictions with SNA LU Services

8.3.1 Independent LU Capability Problem with Passive Listens

Due to a problem in the Peer Server SNA LU Services module, client "passive" LU-LU session listens (Secondary LU, Independent LU session) fail if the LU has been configured as Independent but with a Capability characteristic value of "secondary". The nature of the failure is "LU Unavailable". The workaround is to configure the LU with a Capability of "both". The Installation and Configuration utility uses "both" as the default, so the problem is encountered only if the Capability is explicitly supplied as "secondary".

8.4 SDLC Datalink Restrictions

8.4.1 Multipoint Full Duplex Configuration Requires TWA

As noted elsewhere, the use of two-way simultaneous (TWS) transfer mode (as specified on the SDLC link station) in the full duplex multipoint case is not recommended. A known restriction with RTS/CTS handling in the Wide Area Device Drivers (wdd) can cause aborted frame retransmission and, ultimately, link failure.

8.4.2 Using the PBXDI ISA-bus Synchronous Communications Controller

o Only those models that support RS-232C or V.35 interfaces (PBXDI-AA and PBXDI-AB) are supported for use with SDLC.

o Half-duplex operation is not supported.

o Transmission of frames larger than 1022 bytes (including the SDLC header) is not supported.
  The SDLC Link Send Frame Size should not exceed 1020 for modulo 7, and 1019 for modulo 127.

o The Interface Type attribute of the Modem Connect Line entity does not accurately reflect whether an RS-232C or V.35 interface is currently in use. This inaccuracy lies in the management attribute only and does not affect the actual operation of the controller.

8.5 QLLC Datalink Restrictions

8.5.1 Temporary TGs Lack Automated Call Startup

Dependent LU session traffic (for example, 3270 LU2 terminal sessions) does not automatically activate a temporary Transmission Group when initiated from the client (Secondary LU) side. This handling is based on the assumption that an SLU not yet activated from the IBM host (that is, no ACTLU yet received) cannot initiate an LU-LU session (using INIT-SELF). There are circumstances with QLLC and X.25 networks where call initiation based on an SLU request for a session could benefit from such behavior. We will be addressing this issue in a future release.

8.5.2 Filtername Mismatch Error is Ambiguous

As explained in the Peer Server documentation, the use of filter names must be consistent between the X.25 product and Peer Server QLLC entity configurations. If a QLLC link is set up with a filter name that does not exist in the X.25 configuration, an Enable of the TG fails with:

FAILED IN DIRECTIVE: Enable
DUE TO: The target implementation does not support this entity class

Should this error occur, CTF may be used to determine whether the problem is due to a filter mismatch, as shown below (no event message is generated in this case).

ctf> start qllc link *,live
 .
 .
08:06:31.94| Tx| 14| X_LISTEN_RE|
08:06:31.95| Rx| 16| X_ERROR_ACK (xerrno=-176) Specified
                     filter does not exist
 .
 .

8.5.3 QLLC Link and Station Must be Enabled Before TG

When configuring a Transmission Group (TG) for use with a QLLC link, the link and underlying link station must be enabled before enabling the corresponding TG.
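The required ordering can be sketched as follows. The entity names qllc-0, stn-0, and tg-1 are hypothetical, and the exact QLLC entity syntax may differ; the TG command follows the form of the Transmission Group commands shown elsewhere in these notes:

```
ncl> enable qllc link qllc-0
ncl> enable qllc link qllc-0 station stn-0
ncl> enable sna cp services transmission group tg-1
```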
The Enable of the TG fails with an "entity class not supported" exception error if this order is not followed.

8.6 TN3270 Server Restrictions

8.6.1 Occasional Erroneous Output Associated with the TN3270 Server

The TN3270 server generates occasional erroneous messages that appear on the user's stdout/stderr. Examples:

***Routine: snalog_text status = 0x0203F6E2
***Routine: snalog_text status = 0x0203F6E2

The messages do not indicate any problem and should be ignored. This will be fixed in the next release.

8.6.2 TN3270 Server Drops the Connection When a Client Does Not Respond Properly

When a client has successfully negotiated the RESPONSES function and then does not respond appropriately on receiving an RU while in definite response mode, the TN3270 server drops the connection. While the client is required to respond to every RU when in definite response mode, the TN3270 server could be more tolerant of such a minor infraction by the client. However, the current version of the TN3270 server is intolerant of such infractions and drops the connection every time the client fails to respond properly. In the next release, the TN3270 server will be modified to keep a count of RUs sent to the client that are not properly responded to, clearing the count when the client does send a definite response to one of the subsequent RUs.

Workaround: If this is happening with your client, you can disable the RESPONSES function when configuring the TN3270 server.

8.7 LLC2 Datalink Performance Issues

8.7.1 FDDI Performance

To allow high throughput of data over the FDDI datalink, it is necessary to change the LLC2 SAP LINK Holdback Timer characteristic attribute from the default value of 500ms to 10ms. Future versions will have the default value set to 10ms, but for now it is necessary for the user to change this value manually.
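For example, using the SAP and link names from the LLC2 configuration example in section 8.2.5 (SNA-0 and LINK-0 are illustrative names), the value can be changed interactively with an NCL command of the form:

```
ncl> set llc2 sap SNA-0 link LINK-0 holdback timer = 10
```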
To have this value set when the Peer Server is started, it is necessary to change the Peer Server startup NCL file in the /var/sna directory. The Peer Server uses this NCL startup configuration file every time it starts. The name of the file is t21_init_sna_server.ncl.

Because of the overhead associated with data processing through the UNIX kernel, FDDI performance is a function of system capability, particularly CPU speed. (This is also true when using TCP over an FDDI datalink.) Testing has indicated that with the Holdback Timer set to 10ms, and a properly sized system, FDDI data rates in excess of 80 megabits per second can be achieved when using the Peer Server.

(Note: The performance degradation associated with the Holdback Timer is also manifested on CSMA-CD and Token Ring datalinks. Although it is more noticeable on FDDI, the degradation can be significant on any of the datalinks, and the default setting of the Holdback Timer attribute should be changed for all datalinks.)

9 Files Installed/Modified

The following table lists all the files placed onto the system during the Peer Server installation, or created during configuration.
Table 2: Peer Server Installed Files

Directory                 Filename

/dev/streams/             t21_smgd
                          t21cpnm
                          t21ctrl
                          t21mgmt
                          t21llc
                          q25_qllc
                          q25_mgd
                          q25_xpi
                          t21sd_ctrl
                          t21sd_mgmt
                          t21trc
                          t21sdlc
                          t21sess0
                          t21wadd

/sbin/init.d/             t21_sna_server

/sbin/rc0.d/              K09t21_sna_server

/sbin/rc3.d/              S90t21_sna_server

/sys/opt/T21SRVR140/      config.file
                          files

/usr/sbin/                t21cad
                          t21cadgas
                          t21mad
                          t21mcd
                          t21smd
                          t21smc
                          q25mad
                          q25mcd
                          t21trcd
                          t21setup
                          tn3270_config
                          tn3270_server
                          tn3270_server_start

/var/sna/                 q25mcd.conf
                          t21icu
                          t21mcd.conf
                          t21smc.conf
                          t21strsetup.conf
                          t21_V14-0_release_notes

/var/subsys/              t21llc.mod
                          t21qllc.mod
                          t21scl.mod
                          t21scl.stanza
                          t21sdlc.mod
                          t21spd.mod

The following table lists Peer Server configuration files.

Table 3: Peer Server Configuration Files

Directory                 Filename

/var/sna/                 t21_init_sna_server.ncl
                          t21_init_sna_server.ans

/usr/sbin/                tn3270_conf.nnn

The following table lists files the Peer Server uses for logging.

Table 4: Peer Server Files Used for Logging

Directory                 Filename

/var/tmp/                 t21_init_sna_server.log

/var/adm/                 syslog.dated/date/daemon.log

The following table lists various operating system or layered product files that are replaced or modified by the Peer Server installation and configuration.

Table 5: Peer Server Modified Files

Directory                 Filename

/etc/eca/                 mir.dat

/usr/bin/                 ctf

/usr/share/ctf/           ctfua_library.c
                          libctflibs.a

/usr/share/dna/           cnmDictionary.dat