DCE Version 2.0 for Digital UNIX
Release Notes

June 1996

These release notes describe last-minute changes, fixes, and known errors in DCE Version 2.0 for Digital UNIX Version 4.0.

Operating System and Version: Digital UNIX Version 4.0
Software Version: DCE Version 2.0 for Digital UNIX

Digital Equipment Corporation
Maynard, Massachusetts

__________________________________________________________

Possession, use, or copying of the software described in this publication is authorized only pursuant to a valid written license from Digital or an authorized sublicensor.

Digital Equipment Corporation makes no representations that the use of its products in the manner described in this publication will not infringe on existing or future patent rights, nor do the descriptions contained in this publication imply the granting of licenses to make, use, or sell equipment or software in accordance with the description.

© Digital Equipment Corporation 1996. All Rights Reserved.

This document was prepared using VAX DOCUMENT, Version 2.1.

__________________________________________________________

Contents

1 Digital UNIX Version 4.0 Patch Required
2 Version Information
3 Installation Notes
  3.1 Daemons Core Dump when Out of Disk Space
4 Additional Unsupported Software
  4.1 NSedit
5 Notice of Retirement
6 Restrictions
7 Configuration
  7.1 Client Configurations May Not Show Transarc Solaris-based Cell Servers in List of Known Cells on LAN
  7.2 Use of dce_error_inq_text()
  7.3 Configuring DTS Clerks
  7.4 TELNET Daytime Option May Be Required for HP Clients
  7.5 Changing an IP Address
  7.6 Interoperating with Other Vendors' CDS Replicas During Configuration
8 DCE Application Development
  8.1 Privacy Protection
  8.2 XDS Programming
  8.3 GSSAPI and Kerberos Interoperability
  8.4 Extended Registry Attributes (ERA) Limitations
  8.5 pthread_once Routine
  8.6 Garbage Collection
  8.7 RPC Connection Activity
  8.8 RPC_SUPPORTED_PROTSEQS
9 DTS
  9.1 The dtscp set server principal name Command
  9.2 Managing Your Network's Clocks
  9.3 dtsd Does Not Start when Master CDS Server Is Unavailable
  9.4 ACL File
  9.5 The dtscp show decnet time source Command
10 The dcecp Command
  10.1 Documented Commands that Do Not Work
  10.2 Syntax Changes to Documented Commands
  10.3 Keytab Object
  10.4 New dcecp Actions and Options
  10.5 Using CTRL-C at dcecp Prompt Resets tty Settings
11 The dced Daemon
  11.1 Failed Opening Ep.db Error Message
12 Naming
  12.1 CDS May Have Stale Binding for Foreign Cell if Foreign Cell Reconfigured
  12.2 Deleting Replicas
  12.3 cds_attributes
  12.4 ACL File
  12.5 CDS Interoperability Configuration Problem with Other Vendors' OSF DCE Release 1.1-Based Products
  12.6 cdscp: Backslash (\) Continuation Character Does Not Work Properly
  12.7 Potential Temporary CDS create clearinghouse Anomaly
  12.8 Unable to Find CDS Servers on the LAN
  12.9 CDS Soft Links Are Not Always Followed
13 Security
  13.1 Using DCE SIA on Applications that Are Linked with the -non_shared Qualifier
  13.2 Enabling and Disabling DCE SIA When Basic X Environment Not Present
  13.3 Password Management Server
  13.4 Deleting Replicas
  13.5 Deleting Local Accounts Created by passwd_export
  13.6 chpass Functionality
  13.7 Account Lifetime Policy
  13.8 passwd_export Fails to Rehash /etc/passwd
  13.9 Password Usage in /opt/dcelocal/etc/passwd_override when Using passwd_export
  13.10 Permissions Required for Adding a New Account
  13.11 Replica Migration from DCE Version 1.0.3 to Version 1.1
  13.12 Delegation Restrictions
  13.13 Extended Registry Attribute (ERA) Restrictions
  13.14 pwd_strengthd Supplied in /usr/examples/dce/pwd_mgmt
  13.15 Password Strength Server Documentation
  13.16 gss_accept_sec_context() and Login Contexts
  13.17 Credential Refresh Problem with gss_accept_sec_context()
  13.18 Inclusion of Security Component Fixes from OSF
  13.19 Manpage Notes for this Release
  13.20 Use of Registry Cursors Lacks Transaction Semantics
  13.21 Starting the Audit Daemon and Accessing the Manpage for the DCE Audit Daemon
  13.22 Memory Leaks with sec_login* Routines
14 Cell Alias Restrictions
15 Hierarchical Cells and Transitive Trust
16 DCE Distributed File Service Version 2.0 Notes
  16.1 Upgrade of Existing Digital DFS Version 1.3 and DFS T2.0 FLDBs Is Required
  16.2 Authenticated Remote Login Unsupported for Version 2.0
  16.3 Limitations on Digital UNIX Version 4.0 ACL Support
  16.4 DFS Warnings
  16.5 df Command
  16.6 DCE DFS Does Not Return ENOSPC Properly
  16.7 Possible System Hang or Panic on Shutdown or Reboot
  16.8 DCE RPC Data Privacy
  16.9 Certain Commands May Not Restore DCE DFS Mount Points Correctly
  16.10 Single-Site Semantics Not Fully Implemented for Memory-Mapped Files
  16.11 Restriction on Creating and Access of Special Devices Using DCE DFS
  16.12 Support of Files Larger than 2 GB
  16.13 UFS No Longer Required for DFS Client Cache Directory
  16.14 The msync System Call Now Fully Supported
  16.15 Support for fuser System Call Now Available
  16.16 Adding DFS Filesets to a DFS Server
17 OSF DCE Administration Reference

1 Digital UNIX Version 4.0 Patch Required

If you are running on Digital UNIX Version 4.0, you will be affected by the DECthreads problems described in the Digital UNIX Version 4.0 Release Note Addendum. Contact your Customer Support Center to obtain the DECthreads patch kit. If you run DCE Version 2.0 without the patch kit, cdsadv will be unable to create clerk processes, and other processes may core dump. This problem will be fixed in future updates to Version 4.0.

2 Version Information

This release of DCE Version 2.0 for Digital UNIX Version 4.0 is based on the OSF R1.1 release and the Warranty Patch.

3 Installation Notes

Note that R1.0.3 servers may generate the following informational message when speaking to R1.1 clients:

   assoc->xxx Protocol version mismatch - major ->5 minor-->1

You can ignore this message.

3.1 Daemons Core Dump when Out of Disk Space

DCE daemons periodically write to files in /opt/dcelocal/var to checkpoint databases, as well as to log messages. The daemons may abort if the file system on which /opt/dcelocal/var resides runs out of disk space.

4 Additional Unsupported Software

The following sections describe software products that are not supported for this release of DCE for Digital UNIX.

4.1 NSedit

NSedit, developed by Project Pilgrim at the University of Massachusetts, is provided in the /opt/dce/nosupport tree as a prototype of a DCE namespace management tool. This version is provided for your review; it is not supported. NSedit provides a user-friendly environment in which to create, view, and modify entries in the CDS namespace. Please refer to the README file and User's Guide in the /opt/dce/nosupport/nsedit directory for instructions on its use.

5 Notice of Retirement

The following control programs are included as part of DCE Version 2.0 for Digital UNIX, but will be retired in future releases. The functionality found in these control programs is duplicated in dcecp.

   acl_edit
   cdscp
   rgy_edit
   rpccp
   sec_admin
   dtscp

6 Restrictions

Please note the following restrictions for this release:

o There is a bug in the interoperation of DCE Version 2.0 and DECnet Version 4.0. In some DECnet environments, processes that use RPC, including the DCE daemons, experience a variety of RPC communication errors. This will be fixed in a future maintenance release.

o This version of DCE cannot coexist or interoperate with the Digital Authentication Server Version 1.0.

o DECnet over TCP/IP (DOTI) is not supported by this version of DCE and will prevent dced from starting successfully. If you have enabled DOTI with decnetsetup, you can disable it with the following command:

   % ncl delete session control transport service doti

o Diskless machines are not supported in this release. DMS support for DCE for Digital UNIX Version 2.0 is not available at this time. It is assumed that /usr is local and writable on the machine where DCE is installed.

7 Configuration

The following sections discuss configuration issues for this release of DCE for Digital UNIX.

7.1 Client Configurations May Not Show Transarc Solaris-based Cell Servers in List of Known Cells on LAN

DCE for Digital UNIX Version 2.0 client configurations may not show the cell served by Transarc Solaris servers in the list of known cells on the LAN.
This does not affect the ability to configure DCE for Digital UNIX Version 2.0 clients into Solaris cells. Enter the cell name at the prompt to perform the configuration.

7.2 Use of dce_error_inq_text()

The dce_error_inq_text() routine returns the text string for a DCE status code. On Digital UNIX, for dce_error_inq_text() to return text for all possible error codes, you must set the environment variable LANG as follows:

   % setenv LANG en_US.ISO8859-1
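The following minimal C sketch shows the usual calling pattern; the wrapper function and its error handling are illustrative only and are not part of the DCE kit:

   #include <stdio.h>
   #include <dce/dce_error.h>

   /* Print the message text for a DCE status code. */
   void print_dce_error(error_status_t status_to_convert)
   {
       dce_error_string_t error_text;   /* buffer for the message text */
       int inq_status;                  /* 0 if the text was found */

       dce_error_inq_text(status_to_convert, error_text, &inq_status);
       if (inq_status == 0)
           fprintf(stderr, "DCE error: %s\n", error_text);
       else
           fprintf(stderr, "DCE error 0x%lx (no text available)\n",
                   (unsigned long) status_to_convert);
   }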
7.3 Configuring DTS Clerks

On systems with DECnet installed, dcesetup asks whether the system should accept time from DECdts servers during the configuration. If the node is configured to run a DTS clerk (for example, a client configuration), dcesetup does not execute the command required to allow this. You can either enter the required command by hand (dtscp set decnet time source true) once DTS is started, or modify your configuration to become a DTS server, which causes this command to be executed properly.

7.4 TELNET Daytime Option May Be Required for HP Clients

Hewlett-Packard DCE clients may try to use the TELNET daytime option when being configured into a DCE cell. If the security server is running on a Digital UNIX machine, you may need to enable the daytime option before the Hewlett-Packard client can be configured successfully. To configure the option, follow these steps:

1. Edit /etc/inetd.conf.

2. Make sure that the # character does not precede the daytime configuration lines. For example, you may need to change this:

      #daytime stream tcp nowait root internal daytime
      #daytime dgram udp wait root internal daytime

   to this:

      daytime stream tcp nowait root internal daytime
      daytime dgram udp wait root internal daytime

3. Stop and restart inetd:

      /sbin/init.d/inetd stop
      /sbin/init.d/inetd start

7.5 Changing an IP Address

Before changing a host address, you need to prepare DCE for the change by performing the following procedure:

1. Log in as root on your local system.

2. Start DCE (dcesetup start).

3. Perform a dce_login as cell_admin.

4. If the system is a security server, enter the following cdscp command:

      # cdscp remove obj /.:/subsys/dce/sec/master CDS_Towers

   You may see rpc_binding errors reported; this is normal behavior.

5. Clean DCE (dcesetup clean).

6. If the system is a security server, edit the following file, replacing the old IP address with the new one:

      /opt/dcelocal/etc/security/pe_site

7. Change your host address and reboot.

8. Stop DCE with the dcesetup stop command.

9. If your system is a security server or a CDS server, start the following daemon:

      /opt/dcelocal/bin/dced

10. If your system is a security server, do the following:

    o setenv BIND_PE_SITE 1
    o Start the following daemons:

         /opt/dcelocal/bin/cdsadv
         /opt/dcelocal/bin/secd

11. If your system is a CDS server, start the following daemons if you have not already started them:

    o /opt/dcelocal/bin/cdsadv
    o /opt/dcelocal/bin/cdsd
    o /opt/dcelocal/bin/gdad

    Wait about 2 minutes for the servers to update their addresses. Any rpc_ns_binding_unexport errors will be fixed later.

12. Stop DCE (dcesetup stop).

13. Start DCE (dcesetup start).

14. Perform a dce_login as cell_admin (dce_login cell_admin).

15. Get your hostname from /opt/dcelocal/dce_cf.db. The hostname is the string after hostname hosts/. Enter the following rpccp commands, replacing $HOSTNAME with your hostname:

      # Export the DFS endpoint mapper host binding
      rpccp unexport -i e1af8308-5d1f-11c9-91a4-08002b14a0fa,3.0 \
          /.:/hosts/$HOSTNAME/self
      rpccp export -i e1af8308-5d1f-11c9-91a4-08002b14a0fa,3.0 \
          -b ncadg_ip_udp:'[135]' \
          /.:/hosts/$HOSTNAME/self

16. If you are working from a CDS server, it may take some time for old addresses to be purged from the namespace.

17. If this system is a CDS server, client systems that are not on the same LAN and that manually configured their CDS server location will have to clear the old location and add the new one.

7.6 Interoperating with Other Vendors' CDS Replicas During Configuration

If you have a cell whose CDS master replica will be on Digital DCE Version 2.0 and you wish to create a CDS replica on another vendor's machine, you may need to answer y to the following dcesetup question:

   Will there be any DCE pre-R1.1 CDS servers in this cell? (y/n/?)

Other DCE vendors may lack default support for CDS_DirectoryVersion 4.0. By answering y, cdsd on Digital DCE Version 2.0 starts up using CDS_DirectoryVersion 3.0, which enables you to configure other vendors' CDS replicas into a Digital DCE Version 2.0 cell.

8 DCE Application Development

The following sections discuss DCE application development issues for this release of DCE for Digital UNIX.

8.1 Privacy Protection

This product does not support privacy protection unless you install the privacy kit. This affects the following routines:

o GSSAPI: Requests to encrypt data (with gss_seal) will result in the data being only integrity protected.
o RPC: Using rpc_c_protect_level_pkt_privacy will return an error.

8.2 XDS Programming

When you are programming to the XDS interface and include the file xdscds.h, you must compile your source module with the -std1 switch. If you do not compile with the -std1 switch, you may get DS_E_BAD_SESSION errors on your calls to XDS.

8.3 GSSAPI and Kerberos Interoperability

When you use the Kerberos mechanism (requested by the mech-type constant GSSDCE_C_OID_KRBV5_DES), the protocol used is that specified in the IETF Internet Draft titled The Kerberos Version 5 GSS-API Mechanism. This document is available from Internet repositories as draft-ietf-cat-kerb5gss-01.txt.

8.4 Extended Registry Attributes (ERA) Limitations

o Attribute set expansion to member attributes does not take place on attribute lookups.
o Update triggers are not implemented. The trig_type and trig_binding fields in the schema entry can be set to update, but that setting is ignored.
o The use_defaults mechanism is not implemented.
o The unique flag in the schema entry does not currently guarantee uniqueness on attribute updates.
o The intercell_action flag is currently ignored; it is always set to reject.
o Extended attributes cannot be attached to the policy object.
o The sec_attr_trig_update, sec_rgy_attr_get_effective, and sec_rgy_attr_test_and_update calls are not implemented.
o Specific instances of multivalued attributes cannot be deleted using the sec_rgy_attr_delete call. Although this interface is intended to allow the deletion of individual instances of a multivalued attribute type, it currently deletes every instance of the specified attribute type, and the attr_value field of the input sec_attr_t is ignored. Use the sec_rgy_attr_update call to delete a specific instance of a multivalued attribute.

8.5 pthread_once Routine

Do not call the DCE API from within a pthread_once initialization routine. Doing so has the potential to cause a deadlock.
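For example, the following hedged C sketch keeps the one-time initialization routine free of DCE calls; the routine and data names are placeholders, and the lowercase pthread_once_init initializer shown is the DCE threads spelling (adjust to your threads environment if it differs):

   #include <pthread.h>

   static pthread_once_t once_block = pthread_once_init;
   static int tables_ready;

   /* One-time initializer: local setup only.  Calling any DCE API
    * (rpc_*, sec_*, dce_*) from here risks the deadlock noted above. */
   static void init_tables(void)
   {
       tables_ready = 1;    /* for example, build private tables */
   }

   void lookup(void)
   {
       pthread_once(&once_block, init_tables);
       /* DCE APIs may safely be called here, after pthread_once returns. */
   }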
8.6 Garbage Collection

When you use the Interface Definition Language (IDL) compiler in C++ mode (-lang cxx), garbage collection is supported for distributed objects. For client applications that use a large number of servers or objects, two environment variables can be set to increase the performance of garbage collection by reducing the number of RPC pings to servers (an example appears at the end of this section).

If a client application uses more than 20 different servers, you can set the environment variable RPC_RECLAIM_MAX_SERVER (20 is the default) to a higher value before starting the application. For a client application that uses more than 100 distributed objects per server, you can set the environment variable RPC_RECLAIM_MAX_OBJECT (100 is the default) to a higher value before starting the application.

For a typical application, the default values should be adequate. Increase them only if necessary, since higher values increase the use of system resources (that is, memory).

By default, distributed objects are pinged every 5 minutes by a client, and are reclaimed by the server if not pinged for over 1 day. These periods can be tuned by applying the cxx_reclaim attribute to the interface. For example:

   [cxx_reclaim(2,20)] interface interface_name
   {
       ...
   }

This sets the ping period to 20 minutes, and reclaims an object only after 2 days of inactivity. It is anticipated that the default, implicit attribute of [cxx_reclaim(1,5)] will be reasonable in most cases. Garbage collection can also be suppressed by applying an attribute of [cxx_reclaim(0,0)] to the interface.
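To illustrate the environment variables described at the beginning of this section, the following csh commands raise both limits before starting a client that contacts many servers and objects (the values and the client program name are illustrative only):

   % setenv RPC_RECLAIM_MAX_SERVER 50
   % setenv RPC_RECLAIM_MAX_OBJECT 400
   % my_client_program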
8.7 RPC Connection Activity

DCE server processes that use connection-oriented network transport protocols may not receive a timely alert if the virtual connection over the underlying protocol becomes inactive. This condition can occur when prior RPCs were used to initialize active context handles that specify a context handle rundown routine. In such instances, if the virtual connection from the remote client process is quietly broken (for example, the client computer is powered off), the underlying network protocol may not alert the DCE server process.

8.8 RPC_SUPPORTED_PROTSEQS

The RPC_SUPPORTED_PROTSEQS environment variable is an unsupported feature. Do not set it when starting the core DCE daemons. If you do define RPC_SUPPORTED_PROTSEQS and its value does not include both ncadg_ip_udp and ncacn_ip_tcp, the dcesetup program may exit with a failure. Application programs may not have this restriction.

9 DTS

The following sections discuss DTS issues for this release of DCE for Digital UNIX.

9.1 The dtscp set server principal name Command

The dtscp set server principal name command is not supported.

9.2 Managing Your Network's Clocks

If you are using DTS, use the dtscp change command or dtscp update command to change the local system time instead of the dtscp date command. Also, do not run any alternative time services, such as ntpd or timed.

9.3 dtsd Does Not Start when Master CDS Server Is Unavailable

If you try to start DCE while the master CDS server is unavailable, dtsd fails after a few minutes, even if a CDS replica is available. If you know that the master CDS server for the hosts/ and root directories will be unavailable for a while, designate a CDS replica as the master CDS server to prevent this failure of dtsd.

9.4 ACL File

If you are upgrading a host from a previous version of DCE, note that the DTS ACL file is renamed from /opt/dcelocal/var/adm/time/mgt_acl to /opt/dcelocal/var/adm/time/dtsd.acl.

9.5 The dtscp show decnet time source Command

The dtscp show decnet time source command should show this attribute as either true or false; instead, it shows all of the DTS counters. The bug has been fixed, and the fix will be included in the next ECO kit. For now, use the dtscp show all command to check the DECnet time source attribute.

10 The dcecp Command

The following sections discuss dcecp issues for this release of DCE for Digital UNIX.

10.1 Documented Commands that Do Not Work

The following dcecp commands do not work for this release:

o host configure
o server disable
o xattrschema create -trigtype update
o cell ping (its help text is incorrect)
o server stop and server ping (these select a random server if more than one server instance registers the same interface, object UUID, and binding)
o cellalias set (disabled)
o cdsalias set (disabled)

10.2 Syntax Changes to Documented Commands

Note the following changes to these commands for this release of DCE for Digital UNIX:

o link create -linkto has been changed to link create -to.
o The registry show -replica command displays a new attribute field, supportedversions.
o The registry show -replica command formerly displayed the version attribute. This attribute has been moved to the registry show -attr command.
o The registry delete -only command has been changed to registry destroy. Note that the registry delete command still exists, but the -only option is not available.
o The registry set command has been renamed registry designate. All options formerly supported by set are available in designate.
o The registry modify -version command has been added to support cell migration.

10.3 Keytab Object

This version of DCE does not include privacy protection unless you install the privacy kit (as described in Section 8.1). Avoid using the dcecp keytab commands to manage remote keytabs, since passwords will be passed in the clear over the network. If you do use the keytab commands, you must use the -noprivacy option. This option is missing from the keytab(8dce) manpage.

10.4 New dcecp Actions and Options

This section describes the new dcecp command actions and options for this release of DCE for Digital UNIX.

The following objects are available to dcecp in this release:

o cds: manages CDS server information on the specified host
o cdsclient: manages CDS client information on the specified host

The following commands are available to dcecp in this release:

o cdscache discard: discards all cdsadv cache information on the specified host
o cellalias catalog: returns the cell alias names currently in use

10.5 Using CTRL-C at dcecp Prompt Resets tty Settings

Entering CTRL-C at the dcecp prompt in interactive mode disables CTRL-Z. It also disables shell-level echoing after exiting dcecp, and may reset other tty settings. Avoid using CTRL-C inside dcecp in this release.
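If a session has already been left in this state, the standard terminal-reset command usually recovers it (this is a general Digital UNIX facility, not part of DCE, and your terminal settings may vary):

   % stty sane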
11 The dced Daemon

The following section discusses dced daemon issues for this release of DCE for Digital UNIX.

11.1 Failed Opening Ep.db Error Message

The dced daemon periodically checkpoints the endpoint database to /opt/dcelocal/var/dced/Ep.db. The endpoint database file Ep.db may be left unusable if dced is killed while it is updating the file. The window during which this condition can occur is minuscule, but it does exist.

If the error message "Failed opening Ep.db, 0x16c9a0d3" is reported or logged in /opt/dcelocal/var/dced.log during dced startup, correct the problem by renaming /opt/dcelocal/var/dced/Ep.db to a temporary file and restarting dced through dcesetup. Note that all DCE applications running on the machine will need to be restarted as well, unless the machine was rebooted.

12 Naming

The following sections discuss naming issues for this release of DCE for Digital UNIX.

12.1 CDS May Have Stale Binding for Foreign Cell if Foreign Cell Reconfigured

When you reference a foreign cell from your local system, for example with a cdscp show cell command, the bindings from that lookup are placed in the local system's cache. If the foreign cell is reconfigured, the cached data in your cell becomes stale and the local cell cannot detect it. A subsequent cdscp show cell command will hang. To clear the bad data from the CDS cache, you must enter a dcesetup clean command on your system.

12.2 Deleting Replicas

After deleting a CDS replica, it may be necessary to run dcesetup clean on other systems in the cell running as CDS replicas or as the CDS master. This clears the local caches on those systems.

12.3 cds_attributes

Upon installation, if the file /opt/dcelocal/etc/cds_attributes (or /opt/dce/etc/cds_attributes) already exists, it is renamed to cds_attributes.sav and a new cds_attributes file is installed. If you have modified this file to include additional object identifiers, you need to manually merge the two versions.

12.4 ACL File

If you are upgrading a host from a previous version of DCE, note that the naming ACL files are renamed as follows:

o from /opt/dcelocal/var/directory/cds/server_mgmt_acl.dat
  to /opt/dcelocal/var/directory/cds/server_mgmt_acl_v1.dat
o from /opt/dcelocal/var/directory/cds/gda_mgmt_acl.dat
  to /opt/dcelocal/var/directory/cds/gda_mgmt_acl_v1.dat
o from /opt/dcelocal/var/adm/directory/cds/clerk_mgmt_acl.dat
  to /opt/dcelocal/var/adm/directory/cds/clerk_mgmt_acl_v1.dat

12.5 CDS Interoperability Configuration Problem with Other Vendors' OSF DCE Release 1.1-Based Products

If you plan to configure a cell with both Digital DCE Version 2.0 CDS servers and other vendors' OSF DCE Release 1.1-based CDS servers, you may run into an interoperability problem during configuration. The problem occurs if the master replica of the cell root directory resides on a Digital DCE Version 2.0 CDS server and the cell root directory's CDS_DirectoryVersion attribute value is already at 4.0. In this case, you may be unable to add a CDS replica on the other vendor's CDS server machine. You will receive a message similar to:

   Old replica cannot be included in new replica set (dce / cds)

The possible workarounds for this problem are as follows:

o Create the cell from the other vendor's system and then add the Digital DCE Version 2.0 CDS server as a replica.

o If you wish the CDS master replica to be on the Digital DCE Version 2.0 machine, answer y to the following dcesetup question during new cell creation:

     Will there be any DCE pre-R1.1 CDS servers in this cell? (y/n/?)

  As a result, the cell root directory's CDS_DirectoryVersion will begin at 3.0 and you will be able to configure with Transarc DCE Release 1.1 for Solaris 2.4.
  Realize that you will need to upgrade the CDS_DirectoryVersion attribute to 4.0 on the cell root directory to take advantage of new R1.1-based features in CDS, such as cell aliasing, hierarchical cells, and delegation ACLs (use the dcecp -c directory modify /.: -upgrade command).

12.6 cdscp: Backslash (\) Continuation Character Does Not Work Properly

The cdscp manpage states that to continue a long command line onto the next line, you type a space and then a \ (backslash) at the end of the first line, and a secondary prompt appears. This feature does not currently work (it causes cdscp to become nonresponsive) and will be fixed in a future release. A workaround is to use the corresponding command in dcecp.

12.7 Potential Temporary CDS create clearinghouse Anomaly

Shortly after creating a clearinghouse, a subsequent show of that clearinghouse may return the following information:

   show clearinghouse /.../abc_cell/abc_ch
   SHOW CLEARINGHOUSE /.../abc_cell/abc_ch AT 1996-01-10-11:25:31
   CDS_CTS = 1996-01-10-16:25:28.340624100/08-00-2b-bc-bb-92
   CDS_UTS = 1996-01-10-16:25:28.693937300/08-00-2b-bc-bb-92
   CDS_ObjectUUID = 7f11c5e0-4b6b-11cf-bef1-08002bbcbb92
   CDS_AllUpTo = 0
   CDS_DirectoryVersion = 3.0
   CDS_CHName = /.../abc_cell/abc_ch
   CDS_CHLastAddress =
   CDS_CHState = on
   CDS_CHDirectories = :
   UUID of Directory = a1122d81-4b69-11cf-bef1-08002bbcbb92
   Name of Directory = /.../abc_cell
   CDS_ReplicaVersion = 3.0
   CDS_NSCellname = /.../abc_cell
   Empty set.
   (dce / cds)
   Empty set.
   (dce / cds)

As you can see, the CDS_CHLastAddress attribute and the clearinghouse counters are empty. This is a harmless temporary state that is reconciled after the server runs a background pass on the clearinghouse.

12.8 Unable to Find CDS Servers on the LAN

When you configure a DCE client, an attempt is made to discover CDS servers on your LAN (and the cells they serve). If a served cell does not show up on the list but you know the CDS server is up and running, there is one known explanation. The dcesetup command uses a utility program called getcells that is not meant to be run directly by users. If someone runs getcells on a CDS server machine while the server is running, it overwrites certain endpoints in the RPC endpoint mapper, which causes the CDS server not to hear CDS solicit messages from prospective DCE client machines. The workaround is to do the following on the CDS server machine (as root):

   Shut down CDS:

      cdscp disable server
      cdscp disable clerk

   Restart CDS:

      /opt/dcelocal/bin/cdsadv
      /opt/dcelocal/bin/cdsd

12.9 CDS Soft Links Are Not Always Followed

When a soft link is created, its CDS_LinkTarget attribute points to another entry in the namespace. If that entry is a directory, then showing the directory directly and showing it through the link should return the same information; that is, commands of the following form (the names are placeholders):

   dcecp -c directory show /.:/target_directory
   dcecp -c directory show /.:/link_to_directory

should return the same information. There is a bug in CDS in this release that often causes the second command above to return "Requested entry does not exist" instead of following the link to its eventual target.

13 Security

The following sections discuss security issues for this release of DCE for Digital UNIX.

13.1 Using DCE SIA on Applications that Are Linked with the -non_shared Qualifier

When DCE SIA is enabled on a system, two restrictions apply to the use of statically linked applications (applications linked with the -non_shared qualifier). First, UNIX security functions called from statically linked applications bypass DCE. For example, calls to getpwent() return entries from /etc/passwd but not from the DCE registry.
Second, DCE SIA disallows so-called binary compatibility for statically linked applications built on versions prior to Digital UNIX V4.0. This restriction applies specifically to applications that use UNIX security functions, such as login or getpwent(). Such applications must be recompiled on the current version of the operating system.

________________________Note ________________________

You may use the file command to determine whether or not a given executable image is built with the -non_shared qualifier.

_____________________________________________________

13.2 Enabling and Disabling DCE SIA When Basic X Environment Not Present

Configuration scripts for DCE SIA assume the presence of the file /usr/var/X11/xdm/xdm-config. You must have the Basic X Environment installed or create this file in another manner.

13.3 Password Management Server

The Password Management server is not supplied with this release, because privacy protection is not enabled.

13.4 Deleting Replicas

After deleting a security replica, it may be necessary to run dcesetup clean on other systems in the cell running as security replicas or as the security master. This clears the local caches on those systems.

13.5 Deleting Local Accounts Created by passwd_export

If you use passwd_export to migrate DCE accounts to /etc/passwd, be particularly careful when using removeuser to delete any of these accounts from the local system registry. The default home directory for a new DCE account may be the slash (/) instead of /usr/users/account_name. You can use rgy_edit to change this default. When deleting an account, removeuser asks whether to delete the account's home directory. Never delete the slash (/); answer n to the question.

13.6 chpass Functionality

DCE for Digital UNIX does not implement chpass. Instead, this functionality is available through rgy_edit, or through the passwd, chsh, and chfn commands when DCE SIA is enabled. For more information about the chpass command, see Chapter 3 of the DCE for Digital UNIX Product Guide.

13.7 Account Lifetime Policy

Never set the cell-wide account lifetime policy to anything other than forever, which is the default. The value is a lifetime limit for all accounts in the cell, including DCE host and server accounts. When these accounts expire, the cell becomes unavailable. Until this is fixed, the account lifetime policy effectively limits the life of the cell. Note that if a cell master cannot be restarted because key accounts, such as dce-rgy, have expired or are otherwise unusable, you can start secd in locksmith mode, then use rgy_edit to reset the cell-wide account lifetime policy to forever.

13.8 passwd_export Fails to Rehash /etc/passwd

Under Digital UNIX, modifications to /etc/passwd, the user account database, are normally propagated to a hashed version that is internal to the system. Commands such as adduser and passwd propagate /etc/passwd changes automatically. Unfortunately, if you use the passwd_export utility to export registry accounts to /etc/passwd, the utility fails to propagate the changes. After you run passwd_export, you can do this manually with the mkpasswd utility; see the associated manpage and the example below.
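For example (a sketch only; the path to mkpasswd and its arguments can vary, so check the mkpasswd(8) manpage on your system):

   # passwd_export
   # /usr/sbin/mkpasswd /etc/passwd

This rebuilds the hashed password database from the updated /etc/passwd.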
13.9 Password Usage in /opt/dcelocal/etc/passwd_override when Using passwd_export

The passwd_export utility writes the password fields of passwd_override entries to the corresponding account entries in /etc/passwd. If you anticipate that passwd_export will be run on your system and intend to change the password of an account that exists both in /etc/passwd and passwd_override, you must run passwd twice to change the password in each location.

13.10 Permissions Required for Adding a New Account

The Security Service portion of the DCE Administration Guide: Core Components may be incorrect in the permissions that it lists for creating a new account. The permissions should appear as follows:

   rmaug on the principal named in the account
   tm on the group named in the account
   tm on the organization named in the account
   r on the registry policy object

13.11 Replica Migration from DCE Version 1.0.3 to Version 1.1

It is possible to move the cell security registry forward from DCE Version 1.0.3 to Version 1.1 even though one or more replicas are still running Version 1.0.3. In this case, the Version 1.0.3 replicas are simply instructed to shut down. However, their bindings remain in the /.:/sec group in the CDS namespace, and this may result in erroneous bindings. Manually remove the bindings from a cell that is migrated in this manner.

13.12 Delegation Restrictions

All delegates in a delegation chain must come from the same cell as the delegation initiator. Although a delegated identity can be projected to a foreign cell, the delegation chain cannot be continued in the foreign cell.

13.13 Extended Registry Attribute (ERA) Restrictions

Credentials returned by ticket requests from foreign cells will not include ERAs. In OSF DCE Version 1.1, the privilege server removes ERAs from credentials requested by foreign cells.

13.14 pwd_strengthd Supplied in /usr/examples/dce/pwd_mgmt

This release of DCE does not support RPC packet encryption (also called packet privacy). The usefulness of pwd_strengthd is therefore limited, because it requires encryption to avoid sending passwords in clear text. For this reason, the pwd_strengthd binary is not supplied. The source code is provided in the example directory.

13.15 Password Strength Server Documentation

The example in Section 30.6.3 of the Administration Guide - Core Components on how to create a principal having password strength server attributes is incorrect. No further information is available at this time.

13.16 gss_accept_sec_context() and Login Contexts

When a server application calls gss_accept_sec_context and requests the client's name, the GSSAPI runtime requires a login context capable of initiating security contexts (that is, it is a GSS_C_ACCEPT credential). gss_accept_sec_context may create a security context. However, this context is simply cached and will never be refreshed. The following workarounds are recommended for use in the server code:

o Do not ask for the client's name. If the server is using DCE ACLs, it does not need the client's name to do access control. If the client's name is subsequently needed (for example, for auditing), the application can do the translation itself, using its own login-context management. A sketch of this workaround appears after this list.

o Use a fresh credential for each gss_accept_sec_context. A fresh credential will not have a cached login context, so gss_accept_sec_context will always create one.

o Use the Kerberos mechanism. This mechanism transfers the client's name within the authentication token, so no registry translation is needed. However, the server does not get the client's PAC in this manner.
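The following hedged C fragment sketches the first workaround (not requesting the client's name); the function name and buffer handling are placeholders, and the accept credential and token buffers are assumed to be set up elsewhere:

   #include <dce/gssapi.h>

   /* Accept a security context without requesting src_name, so the
    * GSSAPI runtime does not need a login context for registry name
    * translation. */
   OM_uint32 accept_without_name(gss_cred_id_t accept_cred,
                                 gss_buffer_t  input_token,
                                 gss_buffer_t  output_token)
   {
       OM_uint32    minor_status;
       gss_ctx_id_t context = GSS_C_NO_CONTEXT;

       return gss_accept_sec_context(&minor_status, &context,
                  accept_cred, input_token, GSS_C_NO_CHANNEL_BINDINGS,
                  NULL,     /* src_name: deliberately not requested */
                  NULL,     /* mech_type */
                  output_token,
                  NULL,     /* ret_flags */
                  NULL,     /* time_rec */
                  NULL);    /* delegated_cred_handle */
   }

Passing NULL for src_name is also the workaround recommended in Section 13.17.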
13.17 Credential Refresh Problem with gss_accept_sec_context()

An optional argument to gss_accept_sec_context(), src_name, returns the principal name of the initiator. If src_name is requested, and the verifier_cred_handle points to an ACCEPT type of credential, the resulting login context will not be automatically refreshed. The workaround is to always pass NULL for the src_name argument. Most applications will be unaffected by this restriction: if they use DCE ACLs for access control, the client name is not necessary.

13.18 Inclusion of Security Component Fixes from OSF

This release includes many security component patches supplied by OSF in the 1.1 Maintenance Release. All patches for high- and medium-severity bugs are included, as well as many nonessential patches that fix minor problems. If you have a question about whether a specific bug fix known to OSF is included, please contact your Digital support representative.

13.19 Manpage Notes for this Release

Please note the following problems for these manpages:

o The manpage for dce_attr_sch_aclmgr_strings is missing.
o The manpage for sec_login_become_impersonator is missing an argument. See the prototype definition in dce/sec_login.h for the correct argument list.
o The manpage for sec_audit_events documents the wrong event value. The event ERA_LookupByName is defined as 0x130.

13.20 Use of Registry Cursors Lacks Transaction Semantics

It is not possible for an application to obtain a snapshot of the registry database. Therefore, if an application obtains a valid registry item with the sec_rgy_pgo_get_next() routine, the validity of the reference cursor will not persist if the item is deleted from the registry database.

13.21 Starting the Audit Daemon and Accessing the Manpage for the DCE Audit Daemon

There is a naming conflict between the auditd image shipped with DCE and the auditd image shipped in OSFBASE for enhanced security (located in /usr/sbin/auditd). To start the DCE audit daemon in dcesetup, select Configure, then select Modify DCE cell configuration, and then select Enable Auditing. This ensures that the DCE audit daemon in /opt/dcelocal/bin/auditd is started. To access the manpage for the DCE audit daemon, enter the following command:

   % man 8sec auditd

13.22 Memory Leaks with sec_login* Routines

The sec_login_purge_context() routine is supposed to free all memory associated with a login context. However, it does not. An application that calls sec_login_purge_context() may have to be restarted periodically if its virtual memory size grows excessively.

14 Cell Alias Restrictions

o The dcecp command cellalias set has been disabled. If you wish to create an alternate cell name, use the dcecp command cellalias create. This creates a cell alias name without changing the primary cell name.

o Cell alias names are not automatically propagated across cell boundaries. In other words, cell aliases are recognized only within the cell, unless the cell is also registered in a global directory service and the foreign registry.

o Cell alias creation will fail if a cell includes OSF DCE Version 1.0.x-based clients. The dcecp cellalias script attempts to update every cell-member host by contacting its DCE host daemon (dced). After the script detects an error (such as failing on a Version 1.0.x-based client), it proceeds to undo the alias creation operation for the entire cell.

o Transitive trust validation is performed using the pathname of the target principal. Transitive trust will succeed for a cell alias name only if there is a trust path expressed for that alias.

o Ticket requests to alias names for the local privilege server are treated as foreign cell requests. In OSF DCE Version 1.1, the privilege server removes ERAs from credentials requested by foreign cells. Therefore, credentials returned by ticket requests to alias names will not include ERAs. The following scenario shows this limitation:

  1. Create old_cell.
  2. Add new_cell as an alias for old_cell.
  3. dce_login as /.../old_cell/
  4. Request credentials to application service /.../new_cell/

  The credentials returned for /.../new_cell/ will not include ERAs. The privilege server treats the request to /.../new_cell as an intercell request from /.../old_cell to /.../new_cell, and removes any ERAs that may be attached to the principal.
15 Hierarchical Cells and Transitive Trust

The transitive trust feature, intended to be automatic, actually requires prior registration of various cell principals in the registries of the different hosts involved in a multicell traversal. This is a manual procedure performed by an administrator. This section augments documentation in the DCE Administration Guide: Core Components pertaining to cell hierarchies and multicell trust relationships.

Cell renaming does not work. This means that the only way an existing cell can become part of a hierarchy is if it is created with a name that represents its position in the hierarchy.

The impact of these restrictions on the configuration and administration of a multicell environment is best illustrated by a revised statement of the administrative operations taken to create a cell hierarchy. Consider the cell hierarchy that is pictorially described as follows:

   A
   |
   B

Note that both parent and child must be based on R1.1. The steps to create such a hierarchy are as follows:

1. Configure a parent cell with a global name and register it with its global directory service. For example:

      /.../majik_cell.dpe.lkg.dec.com

2. Configure a child cell with a cell name that is the concatenation of the parent name with the child cell name. For example:

      /.../majik_cell.dpe.lkg.dec.com/dce007_cell

3. Ensure that the child cell's root directory version is at 4.0. In the child cell, enter the following commands:

      dce_login cell_admin
      dcecp -c directory modify /.: -upgrade

   The root directory's CDS_DirectoryVersion should now be at 4.0. Verify this with the following command:

      dcecp -c dir show /.:

4. Set up namespace access from parent to child. In the parent cell, enter the following command (supplying the name and tower address of the child cell's CDS server):

      cdscp define cached server tower

5. Set up explicit cross-cell trust between the parent and child. In the parent cell, enter the following command:

      dcecp> registry connect /.../majik_cell.dpe.lkg.dec.com/dce007_cell \
             -facct \
             -facctpwd \
             -fgroup \
             -forg \
             -group \
             -org \
             -mypwd \
             -expdate

6. Force the current child cell name to be recognized as the primary alias. In the child cell, enter the following commands:

      dcecp -c cdsalias create /.../dce007_cell
      dcecp -c cdsalias delete /.../dce007_cell

7. Change the ACL on the root directory. In the parent cell, enter the following command:

      dcecp> acl modify /.: -add \
             {foreign_group /.../majik_cell.dpe.lkg.dec.com/dce007_cell/subsys/dce/cds-server \
             r--t-i-}

8. Connect the child to the parent in the parent's namespace. In the child cell, enter the following command:

      dcecp -c cdsalias connect

9. Test to see that the connection succeeded. In the parent cell, enter the following commands:

      dcecp -c dir synchronize /.:
      dcecp -c dir show /.:/dce007_cell
Since an explicit direct trust relationship has been established in step 5, principals within the parent and child can authenticate to each other without transitive trust being invoked. However, consider the following multicell examples where transitive trust is needed:

   A-C        A
    |        / \
    B       B   C

In the first example, A is a peer cell of C, and both are registered in the DNS namespace. In the second example, the steps previously shown are duplicated to create another child cell, C. In both cases, note below how transitive trust is achieved between cells B and C.

First, direct trust relationships must be established as described in the DCE Administration Guide. Note that the command that establishes these direct trust relationships sets up principals in each cell to represent the foreign cells. This is described in Administering a Multicell Environment in the OSF DCE Administration Guide.

For transitive trust to work, the originating cell must register its cell principal with the target cell and all intermediate cells. Ensure that the UUID of a principal is registered consistently in each cell. This administration step is missing from the documentation.

Here is a simple example that shows how these rules are borne out in practice. For the first multicell relationship shown above, the cell registries contain the following cell principals as a result of establishing direct trust relationships:

   A: krbtgt/B, krbtgt/C
   B: krbtgt/A
   C: krbtgt/A

For a principal in B to authenticate to C using the transitive trust relationship, C must first have the krbtgt/B principal in its registry. The complete set of principals in the different cells must be:

   A: krbtgt/B, krbtgt/C
   B: krbtgt/A
   C: krbtgt/A, krbtgt/B

The simplest way to register the required new principal in C is to display the attribute data for the krbtgt/B principal on cell A, extract the UUID, and apply it to a dcecp principal create of krbtgt/B on cell C (see the sketch at the end of this section). Likewise, in the second multicell example, for a principal in B to authenticate to C, cell C must have the B cell principal in its registry.

The cell registration rule previously described has ramifications for complex hierarchical cells. Not only must the source cell be registered in the target, but also in all intermediate cells. Consider the following cell topology:

   A--X
  / \
 B   C

For a principal in cell B to authenticate to cell C, the B cell principal must be registered in A and C. Similarly, for a principal in cell B to authenticate to cell X, B must also be registered in X.
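Based on the registration procedure described above, a hedged dcecp sketch of registering krbtgt/B in cell C follows. The -uuid option to principal create is used to supply the UUID copied from cell A; replace UUID_from_cell_A with the UUID displayed in cell A.

   In cell A, display the principal and note its uuid attribute:

      dcecp -c principal show krbtgt/B

   In cell C, create the same principal with that UUID:

      dcecp -c principal create krbtgt/B -uuid UUID_from_cell_A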
16 DCE Distributed File Service Version 2.0 Notes

The following sections discuss DCE Distributed File Service (DFS) issues for this release of DCE for Digital UNIX.

16.1 Upgrade of Existing Digital DFS Version 1.3 and DFS T2.0 FLDBs Is Required

The format of the on-disk fileset location database (FLDB) has changed in Version 2.0 to be consistent with DFS FLDB formats from other vendors. This enables mixed-vendor replicated FLDB servers to exist in a single cell. If you are upgrading from Digital DFS Version 1.3 or DFS T2.0 and you wish to preserve an existing DFS cell configuration, you must migrate the existing FLDB to the new Version 2.0 format. A shell script is provided to perform the migration.

________________________Note ________________________

If your cell's FLDB server is running a DFS product other than Digital DFS Version 1.3 or DFS T2.0, or if you are reconfiguring DFS after installing Digital DFS Version 2.0, you do not have to migrate the FLDB.

_____________________________________________________

To upgrade the cell's FLDB server machine to Digital DFS Version 2.0 and to preserve an existing Version 1.3 or T2.0 FLDB, complete the following steps:

1. If there is more than one FLDB server in your cell (that is, you are running with replicated FLDBs), remove all but one server from the cell's configuration. Use the following command to get a list of FLDB servers in the cell (typically, there is only one):

      % rpccp show group /.:/fs

2. Delete your existing T2.0 (or earlier) DCE and DFS subsets by using the setld command. See the Digital UNIX System Administration manual for information about removing subsets.

3. Upgrade to Digital UNIX Version 4.0 (or greater). See the Digital UNIX Version 4.0 Installation Guide and Release Notes for more information.

4. Install the DCE and DFS Version 2.0 subsets. Be sure to install the DFS utilities subset, DCEDFSUTL200. See the DCE Installation and Configuration guide for more information.

5. Rebuild the UNIX kernel by using the /usr/sbin/doconfig command. Be sure to select the Distributed File System kernel option.

6. Reboot your system. DCE and DFS should start automatically on reboot, but DFS filesets will not be available because the FLDB is in the incorrect format.

7. Log in as root, and then dce_login as DCE cell_admin.

8. Migrate the FLDB to the new Version 2.0 format by executing the following command line:

      % sh /opt/dcelocal/bin/flmigrate.sh

   The flmigrate script requires no interaction, and notifies you of success or failure. If the process does not succeed, the script restores the original FLDB and starts an flserver process that understands the pre-Version 2.0 FLDB format.
29 Note that you can specify the -k flag to cause the numbers to be displayed in kilobytes. Files are allocated within the DCE DFS namespace, but the current architecture does not provide a reasonable estimate of the capacity or the use within the namespace. 16.6 DCE DFS Does Not Return ENOSPC Properly DCE DFS does not return ENOSPC properly. The DCE DFS client code allows an application writing to a UFS file system exported by DCE DFS to pass 100 percent capacity. The application can write up to 111 percent capacity, without generating an error. The file write will be incomplete. 16.7 Possible System Hang or Panic on Shutdown or Reboot Entering the shutdown or reboot commands after the DCE DFS daemons dfsd or fxd are running can cause the system to hang or panic. To work around a hang, press the hard reset button to return to console mode and reboot the system. 16.8 DCE RPC Data Privacy DCE RPC data privacy is not supported by this version of DCE DFS. 16.9 Certain Commands May Not Restore DCE DFS Mount Points Correctly The cp -[rR], tar, cpio, pax, restore, and vrestore commands may not correctly restore DCE DFS mount points if the local file system is used for recovery. To avoid this problem, restore the mount points in the DCE DFS namespace (for example, /:/path). 16.10 Single-Site Semantics Not Fully Implemented for Memory-Mapped Files DCE DFS Version 2.0 does not fully implement single-site semantics for memory-mapped files. If a file that is opened for write is memory-mapped on Client A and Client B reads the same file, Client B may not see the most recent writes to memory made by Client A. 30 16.11 Restriction on Creating and Access of Special Devices Using DCE DFS DCE DFS Version 2.0 does not support the ability to create and access special devices. If you attempt to create a special device, the mknod system call returns an error status and sets errno to EINVAL. If you attempt to access an existing special device, the creat or open system call returns an error status and sets errno to ENONENT. 16.12 Support of Files Larger than 2 GB DCE DFS Version 2.0 supports access to files larger than 2 GB (up to the limits of the DFS server's underlying file system) in both homogeneous Digital UNIX environments and heterogeneous environments that include DCE DFS servers or clients that also support files larger than 2 GB. The Digital UNIX DCE DFS server allows 32-bit clean clients to access the first 2 GB of files larger than 2 GB. To a 32-bit client, files longer than 2[31 -1] bytes appear to have a length of exactly 2[31 -1]. 16.13 UFS No Longer Required for DFS Client Cache Directory In DCE DFS Version 1.3, Digital recommended specifying a UFS pathname for the cache directory when you configure a DFS client and choose disk caching. This restriction has been lifted by DCE DFS Version 2.0. You can now use AdvFS pathnames for the cache directory. 16.14 The msync System Call Now Fully Supported The msync system call is now fully supported for memory- mapped DFS files. 16.15 Support for fuser System Call Now Available In DCE DFS Version 1.3, using the fuser system could cause a race condition that may result in the system panic. This condition is corrected in DCE DFS Version 2.0. 31 16.16 Adding DFS Filesets to a DFS Server This note provides information about adding DFS filesets to a DFS server. Similar information is also available in the DCE Administration Guide. In general, to add a DFS fileset to an existing server, you must be logged in as root to perform local mounts. 
To add a DFS fileset to an existing DFS server, follow these steps: 1. Obtain credentials that include the subsys/dce/dfs- admin group by using the following command: # dce_login cell_admin The dce_login command prompts you for the password for the DCE cell_admin account. 2. Create the file system or fileset, if necessary. To create a UFS file system, use the following command: # newfs /dev/rrzxx Replace xx with the number of your device. To create an AdvFS fileset, use the following command: # mkfset AdvFS_Domain AdvFS_Fileset Replace AdvFS_Domain and AdvFS_Fileset with the specification of your AdvFS domain and fileset, respectively. 3. Mount the file system (UFS) or fileset (AdvFS) on the local system. For file systems, use the following command: # mount /dev/rzXg /local_mount_point In the previous example, X is the device number and local_mount_point is the local file-system mount point (pathname). For filesets, use the following command: # mount AdvFS_Domain#AdvFS_Fileset /local_mount_point Replace AdvFS_Domain and AdvFS_Fileset with the specification of your AdvFS domain and fileset, respectively. 32 4. Create a fileset entry in the Fileset Location Database (FLDB), using the following command: # fts crfldbentry -ftname FName -server /.:/hosts/Server -aggrid AId In the example, FName is the DFS fileset name that will be assigned to the created fileset. Server is the server name, which must match the name assigned in the name server, and AId is a unique aggregate ID to be assigned to the created fileset. Check the /opt /dcelocal/var/dfs/dfstab file to verify that the AId you have chosen is unique to the host. 5. Add an entry to the /opt/dcelocal/var/dfs/dfstab file. Each field must be separated by at least one space or tab character. For UFS file systems, use the following syntax: /dev/rzXg /local_mount_pointufs AId FILESET_ID For an AdvFS fileset, use the following syntax: AdvFS_Domain#AdvFS_Fileset /local_mount_point advfs AId FILESET_ID In the preceding examples, FILESET_ID is the fileset ID assigned by the fts crfldbentry command to the readWrite fileset. The format for the FILESET_ID is 0,, N, where N is an integer. Use the fts lsfldbentry command to determine the FILESET_ID value if necessary. 6. Export the fileset to DFS by using either of the following commands: # dfsexport /local_mount_point # dfsexport -all 7. Create a mount point in the DFS namespace by using the fts crmount command as follows: # dce_login root . . . # fts crmount -dir /:/dfs_mount_point -fileset FName 33 In the preceding example, dfs_mount_point is the actual pathname of the DFS mount point (where the mount point is created). The FName argument is the DFS fileset name that will be assigned to the created fileset. The dce_ login command is shown because you need write access in order to create a mount point and the dce_login command gives you the appropriate DCE credentials. Although not shown in the example, the dce_login command prompts you for the password for the DCE root account. ________________________Note ________________________ The DCE DFS mount point is not recorded in any file in the system. Digital recommends that you record the mount point for future reference. For example, if the fileset is deleted, the mount point will need to be deleted as well. _____________________________________________________ 17 OSF DCE Administration Reference The OSF DCE Administration Reference (from OSF DCE R1.0.3) has been renamed. It is now called the OSF DCE Command Reference. 34