DIGITAL_Parallel_Software_Environment_______________
                    Release Notes: PVM and MPI



                    January 1999

                    This document contains information about DIGITAL's
                    Parallel Virtual Machine (PVM) and Message Passing
                    Interface (MPI) software.







                    Revision/Update Information:  January, 1999

                    Operating System Versions:    PVM:
                                                       DIGITAL UNIX V4.0a
                                                       and higher
                                                  MPI:
                                                       DIGITAL UNIX V4.0a
                                                       and higher

                    Software Versions:            DIGITAL Parallel
                                                  Software Environment
                                                  Version 1.8
                                                  DIGITAL PVM Version 1.8
                                                  DIGITAL MPI Version 1.8

                    Digital Equipment Corporation
                    Maynard, Massachusetts

 






          ________________________________________________________________
          First Printed Edition, March, 1995
          Revision, January 1999

          Digital Equipment Corporation makes no representations
          that the use of its products in the manner described in
          this publication will not infringe on existing or future
          patent rights, nor do the descriptions contained in this
          publication imply the granting of licenses to make, use,
          or sell equipment or software in accordance with the
          description.

          Possession, use, or copying of the software described
          in this publication is authorized only pursuant to a
          valid written license from Digital or an authorized
          sublicensor.

          © Digital Equipment Corporation 1999. All Rights Reserved.

          DIGITAL believes the information in this publication is
          accurate as of its publication date; such information
          is subject to change without notice. DIGITAL is not
          responsible for any inadvertent errors.

          DIGITAL conducts its business in a manner that conserves
          the environment and protects the safety and health of its
          employees, customers, and the community.

          The following are trademarks of Digital Equipment
          Corporation: AdvantageCluster, Alpha AXP, AXP, Bookreader,
          DEC, DEC Fortran, DECconnector, DECmcc, DECnet, DECserver,
          DECstation, DECsystem, DECsupport, DECwindows, DELNI,
          DEMPR, DIGITAL, GIGAswitch, POLYCENTER, ThinWire,
          TURBOchannel, TruCluster, ULTRIX, VAX, VAX DOCUMENT,
          VAX FORTRAN, VMS, and the DIGITAL logo.

          Network File System and NFS are registered trademarks of
          Sun Microsystems, Inc.

          Motif is a registered trademark of Open Software
          Foundation, Inc., licensed by DIGITAL; Open Software
          Foundation, OSF and OSF/1 are registered trademarks of
          Open Software Foundation, Inc.

          PostScript is a registered trademark of Adobe Systems, Inc.

          UNIX is a registered trademark in the United States and
          other countries licensed exclusively through X/Open Company
          Ltd.

          X Window System is a trademark of Massachusetts Institute
          of Technology.

          All other trademarks and registered trademarks are the
          property of their respective holders.

          This document is available on CD-ROM.

          This document was prepared using VAX DOCUMENT Version 2.1.

  _________________________________________________________________

                                                           Contents



  1  PVM Release Notes

        1.1   Future PVM Development........................    1-1
        1.2   Prerequisite Software.........................    1-1
        1.3   Supported Platforms...........................    1-1
        1.4   Installation..................................    1-1
        1.4.1     PVM Software Distributed with PSE.........    1-1
        1.4.2     Patch for MEMORY CHANNEL[TM] Software
                  Version 1.4...............................    1-2
        1.5   Applications Must be Re-Linked for Version
              1.8...........................................    1-2
        1.6   Documentation (PostScript and HTML)...........    1-2
        1.7   New in Version 1.8............................    1-2
        1.8   New in Version 1.6............................    1-2
        1.9   Known Problems................................    1-2
        1.9.1     exec Routines.............................    1-3
        1.9.2     Exhausting Virtual Memory Resources.......    1-3
        1.10  Miscellaneous.................................    1-4
        1.10.1    pvm_exit Blocks to Wait for an Available
                  Receiver..................................    1-4
        1.10.2    -pthread Compile-Line Argument Needed.....    1-4
        1.10.3    PVM Environment Variable Defaults.........    1-4
        1.11  Problems, Suggestions or Comments.............    1-4

  2  MPI Release Notes

        2.1   Prerequisite Software.........................    2-1
        2.2   Supported Platforms...........................    2-1
        2.3   Compatibility.................................    2-1
        2.4   Applications Must be Re-compiled and
              Re-linked.....................................    2-1
        2.5   Installation..................................    2-2
        2.5.1     MPI Software Distributed with PSE.........    2-2

          2.5.2     Patch for MEMORY CHANNEL[TM] Software
                    Version 1.4...............................    2-2
          2.6   Documentation (PostScript and HTML)...........    2-2
          2.7   New in Version 1.8............................    2-2
          2.8   Known Problems................................    2-3
          2.8.1     stdout and stderr Streams are Combined....    2-4
          2.8.2     Running Under a Debugger Leaves Files in
                    /tmp......................................    2-4
          2.8.3     A Standard Variance.......................    2-4
          2.8.4     MPI_REQUEST_FREE..........................    2-4
          2.8.5     MPI_CANCEL................................    2-4
          2.8.6     exec Routines.............................    2-4
          2.8.7     Exhausting Virtual Memory Resources.......    2-5
          2.8.8     Error Message: "No MEMORY CHANNEL
                    installed"................................    2-5
          2.9   Miscellaneous.................................    2-5
          2.9.1     User-Level Access for MEMORY
                    CHANNEL[TM]...............................    2-5
          2.9.2     Shared Memory Segment Limit...............    2-6
          2.9.3     -ump_bufs Requires a Multiple of 32.......    2-6
          2.10  Problems, Suggestions or Comments.............    2-6

    3  Comments, Problems, and Help

          3.1   Sending DIGITAL Your Comments on This
                Product.......................................    3-1
          3.2   Getting Help from DIGITAL.....................    3-1
          3.3   Readers Comments Form-Documentation...........    3-2


    Tables

          1-1       Default Values for PVM Environment
                    Variables.................................    1-4


                                                                        1
        _________________________________________________________________

                                                        PVM Release Notes


        1.1 Future PVM Development

              Consideration is being given to stopping further
              development of DIGITAL PVM and eventually retiring the
              product. Please send a note describing your PVM needs and
              your opinions of this proposal to pvm@ilo.dec.com.

        1.2 Prerequisite Software

              DIGITAL PVM Version 1.8 requires DIGITAL UNIX Version 4.0a
              or higher.

              In addition, if you need MEMORY CHANNEL[TM] support,
              all cluster members require DIGITAL TruCluster software
              (or TruCluster MEMORY CHANNEL[TM] software), version 1.5
              (requires DIGITAL UNIX V4.0d or higher) or higher. You can
              also use version 1.4 of the TruCluster or MEMORY CHANNEL
              software if a patch is installed (see Section 1.4.2).

        1.3 Supported Platforms

              DIGITAL PVM is a DIGITAL proprietary implementation of PVM,
              for Alpha systems running DIGITAL UNIX. Both stand-alone
              SMP systems and MEMORY CHANNEL[TM] clusters are supported.

        1.4 Installation

        1.4.1 PVM Software Distributed with PSE

              For convenience, DIGITAL PVM software is distributed
              as part of the DIGITAL Parallel Software Environment
              (PSE). For installation instructions, refer to the PSE
              installation guide.



    1.4.2 Patch for MEMORY CHANNEL[TM] Software Version 1.4

          Any cluster members running version 1.4 of DIGITAL
          TruCluster software (or TruCluster MEMORY CHANNEL[TM]
          software) must have Patch ID TCR141-013 or its successor
          installed. To ensure that you are installing the latest
          patch, please send mail to pvm@ilo.dec.com. To obtain
          patches, please use your regular DIGITAL support channel.

          No patch is needed for systems running version 1.5 or
          higher of DIGITAL TruCluster software (or TruCluster MEMORY
          CHANNEL[TM] software).

          If you have any questions or concerns, please send mail to
          pvm@ilo.dec.com.

    1.5 Applications Must be Re-Linked for Version 1.8

          Programs linked with any previous PVM archive library must
          be re-linked to operate within the PVM Version 1.8 (PVM180)
          environment.

    1.6 Documentation (PostScript and HTML)

          The DIGITAL PVM User Guide can be found in PostScript
          format at /usr/opt/PVM180/pvm_guide.ps. It can also be
          found in HTML format on the Consolidated Layered Products
          Documentation CD-ROM.

    1.7 New in Version 1.8

          DIGITAL PVM Version 1.8 is a bug fix release and contains
          no new features.

    1.8 New in Version 1.6

          DIGITAL PVM Version 1.6 now supports multiple rails of
          MEMORY CHANNEL[TM] for message passing between systems.

    1.9 Known Problems

          The following problems with DIGITAL PVM were known at the
          time of release:


        1.9.1 exec Routines

              A process that has called a PVM routine may fail if it
              calls execl, execv, execle, execve, execlp, or execvp.
              The exec routine returns EWOULDBLOCK. The problem can
              be avoided if the process calls fork, and calls the exec
              routine in the child process.
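
              The following sketch illustrates the workaround; the child
              program ./worker and its arguments are hypothetical:

                  #include <stdio.h>
                  #include <sys/types.h>
                  #include <sys/wait.h>
                  #include <unistd.h>

                  /* Run an external program without disturbing the PVM
                   * state of the calling process: fork first, then call
                   * the exec routine in the child. */
                  static void spawn_worker(void)
                  {
                      pid_t pid = fork();

                      if (pid == 0) {            /* child: exec here      */
                          execl("./worker", "worker", (char *)0);
                          perror("execl");       /* reached only on error */
                          _exit(127);
                      } else if (pid > 0) {      /* parent keeps its PVM  */
                          int status;            /* context and waits     */
                          waitpid(pid, &status, 0);
                      } else {
                          perror("fork");
                      }
                  }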

        1.9.2 Exhausting Virtual Memory Resources

              When a process that has called a PVM routine forks,
              the child process sometimes loses some virtual memory
              resources. If an application uses multiple generations
              of processes (parent makes PVM call then forks child, which
              makes a PVM call then forks, and so on), the application
              may run out of vm-mapentries. The number of vm-mapentries
              available to an application can be changed using the
              sysconfig(8) command.
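
              For example, assuming vm-mapentries is an attribute of the
              vm subsystem (the value 40000 below is purely illustrative),
              the limit can be examined and raised as root:

                  # sysconfig -q vm vm-mapentries
                  # sysconfig -r vm vm-mapentries=40000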


    1.10 Miscellaneous

    1.10.1 pvm_exit Blocks to Wait for an Available Receiver

          A PVM task can send messages before a receiver task is
          available to read them. Such messages are queued in the
          sender task. If the sender task calls pvm_exit before a
          receiver task is available to read a message, pvm_exit
          blocks until a receiver task becomes available.
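
          The following sketch shows the pattern described above, using
          standard PVM calls; the destination task ID and the message
          tag are placeholders:

              #include "pvm3.h"

              /* Messages sent here are queued in this task.  If no
               * receiver has read them yet, pvm_exit() blocks until
               * a receiver task becomes available. */
              void send_and_leave(int dest_tid, int value)
              {
                  pvm_initsend(PvmDataDefault);  /* new send buffer     */
                  pvm_pkint(&value, 1, 1);       /* pack one integer    */
                  pvm_send(dest_tid, 99);        /* tag 99 is arbitrary */
                  pvm_exit();                    /* may block, as above */
              }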

    1.10.2 -pthread Compile-Line Argument Needed

          The -pthread argument must be included on the compile line
          when building PVM applications.
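
          For example, a compile line of the following general form can
          be used (the program name is illustrative, and <PVM libraries>
          stands for the link options given in the DIGITAL PVM User
          Guide):

              % cc -pthread -o myprog myprog.c <PVM libraries>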

    1.10.3 PVM Environment Variable Defaults

          The default values of the DIGITAL PVM environment variables
          are described in Table 1-1.

          Table_1-1_Default_Values_for_PVM_Environment_Variables_____

          Environment_Variable__Default_Value________________________

          PVM_BUF_SIZE          1,048,576

          PVM_MC_CHAN_SIZE      32,768

          PVM_SM_CHAN_SIZE______32,768_______________________________

             ________________________ Note ________________________

             All the "size" parameters should be multiples of 1024.

             ______________________________________________________
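
          For example, to enlarge the message buffer to 2 MB before
          starting an application (the value is illustrative and is a
          multiple of 1024):

              % setenv PVM_BUF_SIZE 2097152                  (csh)
              $ PVM_BUF_SIZE=2097152; export PVM_BUF_SIZE    (sh)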

    1.11 Problems, Suggestions or Comments

          Any problems, suggestions or comments should be addressed
          to pvm@ilo.dec.com.





                                                                        2
        _________________________________________________________________

                                                        MPI Release Notes


        2.1 Prerequisite Software

              DIGITAL MPI Version 1.8 requires DIGITAL UNIX Version 4.0a
              or higher. The Fortran runtime libraries are also required.

              In addition, if you need MEMORY CHANNEL[TM] support,
              all cluster members require DIGITAL TruCluster software
              (or TruCluster MEMORY CHANNEL[TM] software), version 1.5
              (requires DIGITAL UNIX V4.0d or higher) or higher. You can
              also use version 1.4 of the TruCluster or MEMORY CHANNEL
              software if a patch is installed (see Section 2.5.2).

        2.2 Supported Platforms

              DIGITAL MPI is a DIGITAL proprietary implementation of MPI,
              for Alpha systems running DIGITAL UNIX. Both stand-alone
              SMP systems and MEMORY CHANNEL[TM] clusters are supported.

        2.3 Compatibility

              DIGITAL MPI is compatible with version 1.1.1 of MPICH from
              Argonne National Labs, and will run over shared memory and
              MEMORY CHANNEL[TM]. Applications built using DIGITAL MPI
              cannot communicate via MPI with applications built using
              MPICH.

        2.4 Applications Must be Re-compiled and Re-linked

              DIGITAL MPI version 1.8 requires applications to be both
              recompiled and re-linked. Simply re-linking applications is
              not sufficient.





    2.5 Installation

    2.5.1 MPI Software Distributed with PSE

          For convenience, DIGITAL MPI software is distributed
          as part of the DIGITAL Parallel Software Environment
          (PSE). For installation instructions, refer to the PSE
          installation guide.

    2.5.2 Patch for MEMORY CHANNEL[TM] Software Version 1.4

          Any cluster members running version 1.4 of DIGITAL
          TruCluster software (or TruCluster MEMORY CHANNEL[TM]
          software) must have Patch ID TCR141-013 or its successor
          installed. To ensure that you are installing the latest
          patch, please send mail to mpi@ilo.dec.com. To obtain
          patches, please use your regular DIGITAL support channel.

          No patch is needed for systems running version 1.5 or
          higher of DIGITAL TruCluster software (or TruCluster MEMORY
          CHANNEL[TM] software).

          If you have any questions or concerns, please send mail to
          mpi@ilo.dec.com.

    2.6 Documentation (PostScript and HTML)

          The DIGITAL MPI User Guide can be found in PostScript
          format at /usr/opt/MPI180/mpi_guide.ps. It can also be
          found in HTML format on the Consolidated Layered Products
          Documentation CD-ROM.

          There is a sample MPI program and instructions in
          /usr/examples/mpi.

          Reference pages for all of the MPI routines are included
          in this distribution, and can be accessed with the man
          command.

    2.7 New in Version 1.8

          Included in Version 1.8 of DIGITAL MPI is support for
          V1.0.1 of the ROMIO MPI-2 package from Argonne National
          Laboratory.

              This version of ROMIO includes everything defined in the
              MPI-2 I/O chapter except shared file pointer functions
              (Section 9.4.4), split collective data access functions
              (Section 9.4.5), support for file interoperability (Section
              9.5), I/O error handling (Section 9.7), and I/O error
              classes (Section 9.8). Since shared file pointer functions
              are not supported, the MPI_MODE_SEQUENTIAL amode to
              MPI_File_open is also not supported. The subarray and
              distributed array datatype constructor functions from
              MPI-2 Chapter 4 (Sections 4.14.4 and 4.14.5) have been
              implemented. They are useful for accessing arrays stored
              in files. The functions MPI_File_f2c and MPI_File_c2f
              (Section 4.12.4) also have been implemented. C, Fortran,
              and profiling interfaces are provided for all functions
              that have been implemented.

              Limitations of this version of ROMIO include the following:

              o  The status argument is not filled in by any function.
                 Consequently, MPI_Get_count and MPI_Get_elements will
                 not work when passed the status object from an MPI-I/O
                 operation.

              o  All nonblocking I/O functions use a ROMIO-defined MPIO_
                 Request object instead of the usual MPI_Request object.
                 Accordingly, the two functions, MPIO_Test and MPIO_Wait,
                 are provided to test and wait on these MPIO_Request
                 objects. They have the same semantics as MPI_Test and
                 MPI_Wait.

                     int MPIO_Test(MPIO_Request *request, int *flag, MPI_Status *status);
                     int MPIO_Wait(MPIO_Request *request, MPI_Status *status);

                 The usual functions MPI_Test, MPI_Wait, MPI_Testany, and
                 so forth, will not work for nonblocking I/O. (A short
                 usage sketch follows this list.)

              o  All functions return only two possible error codes -
                 MPI_SUCCESS on success and MPI_ERR_UNKNOWN on failure.
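
              The following sketch shows the nonblocking pattern under
              these restrictions. The file name, access mode, and the
              choice of MPI_File_iwrite are illustrative, and depending
              on the installation the ROMIO declarations (MPIO_Request,
              MPIO_Wait) may require a separate mpio.h header:

                  #include "mpi.h"

                  /* Start a nonblocking write, then complete it with
                   * MPIO_Wait (not MPI_Wait).  As noted above, the
                   * status object is not filled in. */
                  void write_block(MPI_Comm comm, int *buf, int count)
                  {
                      MPI_File     fh;
                      MPIO_Request req;
                      MPI_Status   status;

                      MPI_File_open(comm, "datafile",
                                    MPI_MODE_CREATE | MPI_MODE_WRONLY,
                                    MPI_INFO_NULL, &fh);
                      MPI_File_iwrite(fh, buf, count, MPI_INT, &req);
                      /* ... overlap computation with the I/O ... */
                      MPIO_Wait(&req, &status);
                      MPI_File_close(&fh);
                  }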

        2.8 Known Problems

              These are the known restrictions in DIGITAL MPI Version
              1.8:


    2.8.1 stdout and stderr Streams are Combined

          In this release, the stdout and stderr streams from
          application processes are combined and are both delivered
          to stdout. There is no output on stderr.

    2.8.2 Running Under a Debugger Leaves Files in /tmp

          When a debugger is used, dmpirun creates (and never
          deletes) a number of small files in /tmp. If you repeatedly
          debug large applications, you may need to delete these
          files to avoid filling /tmp.

    2.8.3 A Standard Variance

          The Fortran binding for MPI_ADDRESS specifies that
          addresses be returned as type INTEGER. This specification
          is insufficient, because INTEGERs are 32 bits, whereas
          addresses are 64 bits on Alpha systems.

          In order to allow use of the MPI_ADDRESS call, this release
          uses INTEGER*8 as the return type. This does not affect the
          C binding, or its use in a C program.

    2.8.4 MPI_REQUEST_FREE

          MPI_REQUEST_FREE does not work properly when the request
          is not completed (the MPICH portable code has the same
          problem).

    2.8.5 MPI_CANCEL

          MPI_CANCEL does not work (the MPICH portable code does not
          implement this).

    2.8.6 exec Routines

          A process that has called MPI_INIT may fail if it calls
          execl, execv, execle, execve, execlp, or execvp. The exec
          routine returns EWOULDBLOCK.

          This problem can be avoided if the process calls fork, and
          calls the exec routine in the child process.


        2.8.7 Exhausting Virtual Memory Resources

              When a process that has called MPI_INIT forks, the child
              process sometimes loses some virtual memory resources.
              If an application uses multiple generations of processes
              (parent makes MPI calls then forks child, which makes MPI
              calls then forks, and so on), the application may run out
              of vm-mapentries. The number of vm-mapentries available
              to an application may be changed using the sysconfig(8)
              command.

        2.8.8 Error Message: "No MEMORY CHANNEL installed"

              A call to MPI_INIT that fails with an error reporting "no
              MEMORY CHANNEL installed" usually indicates that the MEMORY
              CHANNEL[TM] patch has not been installed.

              For More Information:

              o  See Section 2.5.2

        2.9 Miscellaneous

        2.9.1 User-Level Access for MEMORY CHANNEL[TM]

              If you are using MEMORY CHANNEL[TM], ensure that each
              machine in the cluster that you intend to use has been
              initialised for user-level access. You can check this by
              searching for a process called imc_mapper on each machine.
              For example,

              # ps a | grep imc_mapper | grep -v grep

                PID TTY      S           TIME CMD
                657 ttyp2    U        0:00.01 /usr/sbin/imc_mapper

              If this process does not exist on any host that you intend
              to use as part of the cluster, you should execute the
              following command (as root) on each such host:

              # /usr/sbin/imc_init

              This command needs to be executed only once, and remains in
              effect until the machine is next shut down.


    2.9.2 Shared Memory Segment Limit

          DIGITAL MPI uses shared memory for communication within a
          single host. The default system-wide maximum size of a
          shared-memory segment that a process can allocate is 4 MB.
          For programs with a large number of processes, this limit
          may need to be increased. This is done by editing the
          /etc/sysconfigtab file and adding (or modifying) the
          following entry:

          ipc:
               shm-max = <size in bytes>

          For this change to take effect, a reboot is necessary.
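
          For example, to raise the limit to 64 MB (an illustrative
          value):

          ipc:
               shm-max = 67108864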

    2.9.3 -ump_bufs Requires a Multiple of 32

          If you specify a buffer size using the -ump_bufs option of
          the dmpirun command, the size must be a multiple of 32.

          Note: you are wasting memory resources if the size you
          specify is not a multiple of 8192.
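
          For example, assuming the usual -np process-count option
          (the program name and values are illustrative; 32768 is a
          multiple of both 32 and 8192):

              % dmpirun -np 4 -ump_bufs 32768 myprog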

    2.10 Problems, Suggestions or Comments

          Any problems, suggestions or comments should be addressed
          to mpi@ilo.dec.com.


                                                                        3
        _________________________________________________________________

                                             Comments, Problems, and Help


              This chapter gives you detailed instructions on how to
              submit comments on this product, report problems, and get
              help from the Customer Support Center (CSC).

        3.1 Sending DIGITAL Your Comments on This Product

              DIGITAL welcomes your comments on this product and on its
              documentation. You can send comments to us in the following
              ways:

              o  Internet electronic mail:

                 -  PVM issues: pvm@ilo.dec.com

                 -  MPI issues: mpi@ilo.dec.com

              o  FAX: 978-493-3628 ATTN: PSE Team

              o  A Reader's Comment Card sent to the address on the form.

              o  A letter sent to the following address:

                       Digital Equipment Corporation
                       High Performance Computing Group
                       129 Parker Street  PKO3-2/B12
                       Maynard, Massachusetts  01754-2195  USA

              o  Online questionnaire form. Print or edit the
                 questionnaire form provided near the end of these
                 release notes. Send the form by Internet mail, FAX,
                 or the postal service.

        3.2 Getting Help from DIGITAL

              If you have a customer support contract and have comments
              or questions about this software, you should contact
              DIGITAL's Customer Support Center (CSC), preferably
              using electronic means such as DSNlink. In the United
              States, customers can call the CSC at (800) 354-9000.


    3.3 Readers Comments Form-Documentation

          Use the following form as a template for sending comments
          about PSE documentation. This form can be sent by Internet
          mail, FAX, or postal service.


              ---------------------------------------------------------------------
              Please complete this survey and send an online version (via Internet)
              or a hardcopy version (via FAX or postal service) to:

                 Internet mail: pse@hpc.pko.dec.com
                 FAX: (978) 493-3628
                 Postal Service:  Digital Equipment Corporation,
                                  High Performance Computing Group Documentation, PKO3-2/B12
                                  129 Parker Street
                                  Maynard, Massachusetts  01754-2195  USA

              Manual Title:  ______________________________________________________

              Order Number:  ______________________________________________________

              Version:  ________________________________________________

              _____________________________________________________________________

              _____________________________________________________________________

              We welcome any comments on this manual or on any PSE documentation.
              Your comments and suggestions help us improve the quality of our
              publications.

              1. If you found any errors, please list them:

              Page  Description

              ____  _______________________________________________________________

              ____  _______________________________________________________________

              ____  _______________________________________________________________

              2. How can we improve the content, usability, or otherwise improve
                 our documentation set?

              _____________________________________________________________________

              _____________________________________________________________________

              _____________________________________________________________________

              _____________________________________________________________________

              Your Name/Title ____________________________________   Dept. ________

              Company _____________________________________________   Date ________

              Internet address or FAX number ______________________________________

              Mailing address _____________________________________________________

          ___________________________________________   Phone _________________

          ---------------------------------------------------------------------