HP OpenVMS Systems Documentation


HP TCP/IP Services for OpenVMS
ONC RPC Programming



3.3.2 The Client Side and the Lowest RPC Layer

When you use callrpc, you cannot control either the RPC delivery mechanism or the socket that transports the data. The lowest layer of RPC enables you to modify these parameters, as shown in Example 3-5, which calls the nusers service.

Example 3-5 Using Lowest RPC Layer to Control Data Transport and Delivery

#include <stdio.h>
#include <rpc/rpc.h>
#include <sys/time.h>
#include <netdb.h>
#include "rusers.h"

main(argc, argv)
     int argc;
     char **argv;
{

     struct hostent *hp;
     struct timeval pertry_timeout, total_timeout;
     struct sockaddr_in server_addr;
     int sock = RPC_ANYSOCK;
     register CLIENT *client;
     enum clnt_stat clnt_stat;
     unsigned long nusers;
     int exit();

     if (argc != 2) {
          fprintf(stderr, "usage: nusers hostname\n");
          exit(-1);
     }

     if ((hp = gethostbyname(argv[1])) == NULL) {
          fprintf(stderr, "can't get addr for %s\n",argv[1]);
          exit(-1);
     }

     pertry_timeout.tv_sec = 3;
     pertry_timeout.tv_usec = 0;
     bcopy(hp->h_addr, (caddr_t)&server_addr.sin_addr,
       hp->h_length);
     server_addr.sin_family = AF_INET;
     server_addr.sin_port =  0;
     if ((client = clntudp_create(&server_addr, RUSERSPROG, (1)
       RUSERSVERS, pertry_timeout, &sock)) == NULL) {
          clnt_pcreateerror("clntudp_create");
          exit(-1);
     }

     total_timeout.tv_sec = 20;
     total_timeout.tv_usec = 0;
     clnt_stat = clnt_call(client, RUSERSPROC_NUM, xdr_void, (2)
       0, xdr_u_long, &nusers, total_timeout);

     if (clnt_stat != RPC_SUCCESS) {
          clnt_perror(client, "rpc");
          exit(-1);
     }

     printf("%d users on %s\n", nusers, argv[1]);
     clnt_destroy(client); (3)
     exit(0);
}

  1. This example calls the clntudp_create routine to get a client handle for the UDP transport. To get a TCP client handle, you would use clnttcp_create. The parameters to clntudp_create are the server address, the program number, the version number, a timeout value, and a pointer to a socket. If the client does not hear from the server within the time specified in pertry_timeout, the request may be sent again to the server. When sin_port is 0, RPC queries the remote Portmapper to find out the port of the remote service.
  2. The lowest-level version of callrpc is clnt_call, which takes a client handle rather than a host name. The parameters to clnt_call are a client handle, the procedure number, the XDR routine for serializing the argument, a pointer to the argument, the XDR routine for deserializing the results, a pointer to where the results will be placed, and the time in seconds to wait for a reply. The number of times that clnt_call attempts to contact the server is equal to the total_timeout value divided by the pertry_timeout value specified in the clntudp_create call.
  3. The clnt_destroy call always deallocates the space associated with the CLIENT handle. It closes the socket associated with the CLIENT handle only if the RPC library opened it. If the socket was opened by the user, it remains open. This makes it possible, in cases where there are multiple client handles using the same socket, to destroy one handle without closing the socket that other handles are using.
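
For example, here is a minimal sketch of the user-owned-socket case described in item 3. The address and timeout setup are assumed to be the same as in Example 3-5; the second handle is shown only to illustrate socket sharing:


     int sock;
     CLIENT *clnt1, *clnt2;

     sock = socket(AF_INET, SOCK_DGRAM, 0);    /* user-owned UDP socket */
     clnt1 = clntudp_create(&server_addr, RUSERSPROG, RUSERSVERS,
       pertry_timeout, &sock);
     clnt2 = clntudp_create(&server_addr, RUSERSPROG, RUSERSVERS,
       pertry_timeout, &sock);                 /* second handle, same socket */
     /* ... calls on either handle ... */
     clnt_destroy(clnt1);     /* frees the handle; the socket stays open */
     /* clnt2 can continue to use sock */
     clnt_destroy(clnt2);
     close(sock);             /* the user must close the socket */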

To make a stream connection, replace the call to clntudp_create with a call to clnttcp_create:


     clnttcp_create(&server_addr, prognum, versnum, &sock,
       inbufsize, outbufsize);

Here, there is no timeout argument; instead, the "receive" and "send" buffer sizes must be specified. When the program makes a call to clnttcp_create, RPC creates a TCP client handle and establishes a TCP connection. All RPC calls using the client handle use the same TCP connection. The server side of an RPC call using TCP has svcudp_create replaced by svctcp_create:


               transp = svctcp_create(RPC_ANYSOCK, 0, 0);

The last two arguments to svctcp_create are "send" and "receive" sizes, respectively. If, as in the preceding example, 0 is specified for either of these, the system chooses default values.

The simplest routine that creates a CLIENT handle is clnt_create:


     clnt = clnt_create(server_host, prognum, versnum, transport);

The parameters here are the name of the host on which the service resides, the program and version numbers, and the transport to be used. The transport can be either "udp" for UDP or "tcp" for TCP. You can change the default timeouts by using clnt_control. For more information, refer to Section 2.7.
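
For example, the following sketch creates a handle with clnt_create and then raises the total timeout with clnt_control. The host name and the use of the RUSERS program from Example 3-5 are illustrative; CLSET_TIMEOUT is the clnt_control request code that changes the total timeout:


     CLIENT *clnt;
     struct timeval tv;

     if ((clnt = clnt_create("remotehost", RUSERSPROG,
       RUSERSVERS, "udp")) == NULL) {
          clnt_pcreateerror("clnt_create");
          exit(-1);
     }

     tv.tv_sec = 60;          /* new total timeout, in seconds */
     tv.tv_usec = 0;
     clnt_control(clnt, CLSET_TIMEOUT, (char *)&tv);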

3.3.3 Memory Allocation with XDR

To enable memory allocation, the second parameter of xdr_bytes is a pointer to a pointer to an array of bytes, rather than a pointer to the array itself. If that pointer has the value NULL, xdr_bytes allocates space for the array, stores a pointer to it through the second argument, and puts the size of the array in the third argument. For example, the following XDR routine, xdr_chararr1, handles a fixed array of bytes with length SIZE:


xdr_chararr1(xdrsp, chararr)
     XDR *xdrsp;
     char *chararr;
{
     char *p;
     int len;

     p = chararr;
     len = SIZE;
     return (xdr_bytes(xdrsp, &p, &len, SIZE));
}

Here, if space has already been allocated for chararr, the routine can be called from a server like this:


     char array[SIZE];

     svc_getargs(transp, xdr_chararr1, array);

If you want XDR to do the allocation, you must rewrite this routine in this way:


xdr_chararr2(xdrsp, chararrp)
     XDR *xdrsp;
     char **chararrp;
{
     int len;

     len = SIZE;
     return (xdr_bytes(xdrsp, chararrp, &len, SIZE));
}

The RPC call might look like this:


     char *arrayptr;

     arrayptr = NULL;
     svc_getargs(transp, xdr_chararr2, &arrayptr);
     /*
      * Use the result here
      */
     svc_freeargs(transp, xdr_chararr2, &arrayptr);

After using the character array, you can free it with svc_freeargs; this frees no memory if the pointer passed for the data has the value NULL. For example, in the earlier routine xdr_finalexample in Section 3.2.5, if finalp->string was NULL, it would not be freed. The same is true for finalp->simplep.

To summarize, each XDR routine is responsible for serializing, deserializing, and freeing memory as follows:

  • When called from callrpc, the XDR routine uses its serializing part.
  • When called from svc_getargs, the XDR routine uses its deserializing part.
  • When called from svc_freeargs, the XDR routine uses its memory deallocator part.

When building simple examples as shown in this section, you can ignore the three modes. See Chapter 4 for examples of more sophisticated XDR routines that determine the mode and modify their behavior accordingly.
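
As a preview, an XDR routine can examine the x_op field of its XDR handle to learn which of the three modes it was called in. The following sketch is illustrative only (the structure and routine names are invented); note that the built-in primitive xdr_string already handles all three modes for the string member:


struct simplestring {          /* illustrative structure */
     char *name;
};

bool_t
xdr_simplestring(xdrsp, objp)
     XDR *xdrsp;
     struct simplestring *objp;
{
     /*
      * xdr_string serializes on XDR_ENCODE, allocates and fills the
      * string on XDR_DECODE when objp->name is NULL, and frees it on
      * XDR_FREE.
      */
     if (!xdr_string(xdrsp, &objp->name, 255))
          return (FALSE);

     /*
      * A routine needs to test the mode only when it manages memory
      * that the XDR primitives do not cover.
      */
     if (xdrsp->x_op == XDR_FREE) {
          /* extra cleanup for memory allocated here would go here */
     }
     return (TRUE);
}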

3.4 Raw RPC

Raw RPC refers to the use of pseudo-RPC interface routines that do not use any real transport at all. These routines, clntraw_create and svcraw_create, help in debugging and testing the noncommunications aspects of an application before running it over a real network. Example 3-6 shows their use.

In this example:

  • All the RPC calls occur within the same thread of control.
  • svc_run is not called.
  • It is necessary that the server handle be created before the client handle.
  • svcraw_create takes no parameters.
  • The last parameter to svc_register is 0, which means that it will not register with Portmapper.
  • The server dispatch routine is the same as it is for normal RPC servers.

Example 3-6 Debugging and Testing the Noncommunication Parts of an Application

/*
* A simple program to increment the number by 1
*/
#include <stdio.h>
#include <rpc/rpc.h>
#include <rpc/raw.h>    /* required for raw */

struct timeval TIMEOUT = {0, 0};
static void server();

main(argc, argv)
     int argc;
     char **argv;
{
     CLIENT *clnt;
     SVCXPRT *svc;
     int num = 0, ans;
     int exit();

     if (argc == 2)
          num = atoi(argv[1]);
     svc = svcraw_create();

     if (svc == NULL) {
          fprintf(stderr,"Could not create server handle\n");
          exit(1);
     }

     svc_register(svc, 200000, 1, server, 0);
     clnt = clntraw_create(200000, 1);

     if (clnt == NULL) {
          clnt_pcreateerror("raw");
          exit(1);
     }

     if (clnt_call(clnt, 1, xdr_int, &num, xdr_int, &ans,
       TIMEOUT) != RPC_SUCCESS) {
          clnt_perror(clnt, "raw");
          exit(1);
     }

     printf("Client: number returned %d\n", ans);
     exit(0);
}

static void
server(rqstp, transp)

     struct svc_req *rqstp; /* the request */
     SVCXPRT *transp; /* the handle created by svcraw_create */
{
     int num;
     int exit();

     switch(rqstp->rq_proc) {
     case 0:
          if (svc_sendreply(transp, xdr_void, 0) == FALSE) {
               fprintf(stderr, "error in null proc\n");
               exit(1);
          }
          return;
     case 1:
          break;
     default:
          svcerr_noproc(transp);
          return;
     }

     if (!svc_getargs(transp, xdr_int, &num)) {
          svcerr_decode(transp);
          return;
     }

     num++;
     if (svc_sendreply(transp, xdr_int, &num) == FALSE) {
          fprintf(stderr, "error in sending answer\n");
          exit(1);
     }

     return;
}

3.5 Miscellaneous RPC Features

The following sections describe other useful features for RPC programming.

3.5.1 Using Select on the Server Side

Suppose a process simultaneously responds to RPC requests and performs another activity. If the other activity periodically updates a data structure, the process can set an alarm signal before calling svc_run. However, if the other activity must wait on a file descriptor, the svc_run call does not work. The code for svc_run is as follows:


void
svc_run()
{
     fd_set readfds;
     int dtbsz = getdtablesize();

     for (;;) {
          readfds = svc_fdset;
          switch (select(dtbsz, &readfds, NULL, NULL, NULL)) {

          case -1:
               if (errno != EBADF)
                    continue;
               perror("select");
               return;
          case 0:
               continue;
          default:
               svc_getreqset(&readfds);
          }
     }
}

You can bypass svc_run and call svc_getreqset directly if you know the file descriptors of the sockets associated with the programs on which you are waiting. In this way, you can have your own select that waits on both the RPC descriptors and your own descriptors. Note that svc_fdset is the set (bit mask) of all the file descriptors that RPC uses for services. It can change whenever the program calls any RPC library routine, because descriptors are constantly being opened and closed, for example, for TCP connections.
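
For example, the following sketch shows such a private dispatch loop. The descriptor myfd and the routine do_my_work are illustrative assumptions; the rest uses the RPC library as shown above:


#include <errno.h>

extern int myfd;               /* illustrative: the program's own descriptor */
extern void do_my_work();      /* illustrative: services activity on myfd */

void
my_svc_run()
{
     fd_set readfds;
     int dtbsz = getdtablesize();

     for (;;) {
          readfds = svc_fdset;           /* descriptors used by RPC */
          FD_SET(myfd, &readfds);        /* plus our own descriptor */
          switch (select(dtbsz, &readfds, NULL, NULL, NULL)) {
          case -1:
               if (errno == EINTR)
                    continue;
               perror("select");
               return;
          case 0:
               continue;
          default:
               if (FD_ISSET(myfd, &readfds)) {
                    do_my_work();
                    FD_CLR(myfd, &readfds);
               }
               svc_getreqset(&readfds);  /* dispatch any pending RPC requests */
          }
     }
}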

Note

If you are handling signals in your application, do not make any system call in the signal handler that inadvertently sets errno. If this cannot be avoided, restore errno to its previous value before returning from the signal handler.
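
For example, a minimal sketch of a handler that preserves errno (the handler name is illustrative):


#include <errno.h>
#include <signal.h>

static void
my_handler(sig)
     int sig;
{
     int saved_errno = errno;       /* save errno on entry */

     /*
      * Work done here may make system calls that change errno.
      */

     errno = saved_errno;           /* restore it before returning */
}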

3.5.2 Broadcast RPC

Broadcast RPC requires the Portmapper, a daemon that converts RPC program numbers into TCP/IP port numbers. The main differences between broadcast RPC and normal RPC are the following:

  • Normal RPC expects one answer, whereas broadcast RPC expects many answers (one or more from each responding server).
  • Broadcast RPC supports only packet-oriented (connectionless) transport protocols such as UDP/IP.
  • Broadcast RPC filters out all unsuccessful responses; if a version mismatch exists between the broadcaster and a remote service, the user of broadcast RPC never knows.
  • All broadcast messages are sent to the Portmapper port; thus, only services that register themselves with their Portmapper are accessible with broadcast RPC.
  • Broadcast requests are limited in size to 1400 bytes. Replies can be up to 8800 bytes (the current maximum UDP packet size).

In the following example, the procedure eachresult is called each time the program obtains a response. It returns a boolean that indicates whether the user wants more responses. If the argument eachresult is NULL, clnt_broadcast returns without waiting for any replies:


#include <rpc/pmap_clnt.h>
     .
     .
     .
     enum clnt_stat  clnt_stat;
     u_long    prognum;        /* program number */
     u_long    versnum;        /* version number */
     u_long    procnum;        /* procedure number */
     xdrproc_t inproc;         /* xdr routine for args */
     caddr_t   in;             /* pointer to args */
     xdrproc_t outproc;        /* xdr routine for results */
     caddr_t   out;            /* pointer to results */
     bool_t    (*eachresult)();/* call with each result gotten */
     .
     .
     .
     clnt_stat = clnt_broadcast(prognum, versnum, procnum,
       inproc, in, outproc, out, eachresult);

In the following example, if done is TRUE, broadcasting stops and clnt_broadcast returns successfully. Otherwise, the routine waits for another response. The request is rebroadcast after a few seconds of waiting. If no responses come back in a default total timeout period, the routine returns with RPC_TIMEDOUT:


     bool_t done;
     caddr_t resultsp;
     struct sockaddr_in *raddr; /* Addr of responding server */
     .
     .
     .
     done = eachresult(resultsp, raddr);

For more information, see Section 2.8.1.
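
For example, the following sketch broadcasts the RUSERSPROC_NUM call from Example 3-5 and prints each reply. The callback shown is illustrative; it stops broadcasting after the first response:


#include <stdio.h>
#include <rpc/rpc.h>
#include <rpc/pmap_clnt.h>
#include <arpa/inet.h>      /* for inet_ntoa() */
#include "rusers.h"

static unsigned long nusers;

static bool_t
eachresult(resultsp, raddr)
     caddr_t resultsp;
     struct sockaddr_in *raddr;
{
     printf("%lu users on %s\n", *(unsigned long *)resultsp,
       inet_ntoa(raddr->sin_addr));
     return (TRUE);          /* TRUE means "done"; stop broadcasting */
}

main()
{
     enum clnt_stat clnt_stat;

     clnt_stat = clnt_broadcast(RUSERSPROG, RUSERSVERS,
       RUSERSPROC_NUM, xdr_void, (caddr_t)NULL,
       xdr_u_long, (caddr_t)&nusers, eachresult);
     if (clnt_stat != RPC_SUCCESS && clnt_stat != RPC_TIMEDOUT)
          clnt_perrno(clnt_stat);
     exit(0);
}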

3.5.3 Batching

In normal RPC, a client sends a call message and waits for the server to reply that the call succeeded. This means the client must remain idle while the server processes the call, which is inefficient if the client does not want or need an acknowledgment for every message sent.

Through a process called batching, a program can place RPC messages in a "pipeline" of calls to a desired server. In order to use batching, the following conditions must be true:

  • No RPC call in the pipeline should require a response from the server. The server does not send a response message until the client program flushes the pipeline.
  • The pipeline of calls is transported on a reliable byte-stream transport, such as TCP/IP.

Because the server does not respond to every call, the client can generate new calls in parallel with the server executing previous calls. Also, the TCP/IP implementation holds several call messages in a buffer and sends them to the server in one write system call. This overlapped execution greatly decreases the interprocess communication overhead of the client and server processes, and the total elapsed time of a series of calls. Because the batched calls are buffered, the client must eventually do a nonbatched call to flush the pipeline. When the program flushes the connection, RPC sends a normal request to the server. The server processes this request and sends back a reply.

In the following example of server batching, assume that a string-rendering service (in the example, a simple print to stdout) has two similar calls---one accepts a string and returns a void reply, and the other accepts a string and sends no reply at all. The service (using the TCP/IP transport) may look like Example 3-7.

Example 3-7 Server Batching

#include <stdio.h>
#include <rpc/rpc.h>
#include "render.h"

void renderdispatch();

main()
{
     SVCXPRT *transp;
     int exit();

     transp = svctcp_create(RPC_ANYSOCK, 0, 0);
     if (transp == NULL){
          fprintf(stderr, "can't create an RPC server\n");
          exit(1);
     }

     pmap_unset(RENDERPROG, RENDERVERS);

     if (!svc_register(transp, RENDERPROG, RENDERVERS,
       renderdispatch, IPPROTO_TCP)) {
          fprintf(stderr, "can't register RENDER service\n");
          exit(1);
     }

     svc_run();  /* Never returns */
     fprintf(stderr, "should never reach this point\n");
}

void
renderdispatch(rqstp, transp)

     struct svc_req *rqstp;
     SVCXPRT *transp;
{
     char *s = NULL;

     switch (rqstp->rq_proc) {
     case NULLPROC:
          if (!svc_sendreply(transp, xdr_void, 0))
               fprintf(stderr, "can't reply to RPC call\n");
          return;
     case RENDERSTRING:
          if (!svc_getargs(transp, xdr_wrapstring, &s)) {
               fprintf(stderr, "can't decode arguments\n");
               /*
                * Tell client he erred
                */
               svcerr_decode(transp);
               return;
          }
          /*
           * Code here to render the string "s"
           */
          printf("Render: %s\n"), s;
          if (!svc_sendreply(transp, xdr_void, NULL))
               fprintf(stderr, "can't reply to RPC call\n");
          break;
     case RENDERSTRING_BATCHED:
          if (!svc_getargs(transp, xdr_wrapstring, &s)) {
               fprintf(stderr, "can't decode arguments\n");
               /*
                * We are silent in the face of protocol errors
                */
               break;
          }
          /*
           * Code here to render string s, but send no reply!
           */
          printf("Render: %s\n"), s;
          break;
     default:
          svcerr_noproc(transp);
          return;
     }
     /*
      * Now free string allocated while decoding arguments
      */
     svc_freeargs(transp, xdr_wrapstring, &s);
}

In Example 3-7, the service could have one procedure that takes the string and a boolean to indicate whether the procedure will respond. For a client to use batching effectively, the client must perform RPC calls on a TCP-based transport, and the actual calls must have the following attributes:

  • The XDR routine of the result must be zero (NULL).
  • The timeout of the RPC call must be zero. (Do not rely on clnt_control to assist in batching.)

If a UDP transport is used instead, each client call becomes a message to the server and the RPC mechanism becomes simply a message-passing system, with no batching possible. In Example 3-8, a client uses batching to supply several strings; the pipeline is flushed when the client reaches end of file (EOF) on its input.

In this example, the server sends no reply to the batched calls, so the client receives no notice of any failures that occur. Therefore, the client must anticipate and handle such errors itself.

Using a UNIX-to-UNIX RPC connection, an example similar to this one was run to render all of the lines (approximately 2000) in the UNIX file /etc/termcap. The rendering service simply discarded the entire file. The example was run in four configurations, with the following elapsed times:

  • System to itself, regular RPC --- 50 seconds
  • System to itself, batched RPC --- 16 seconds
  • System to another, regular RPC --- 52 seconds
  • System to another, batched RPC --- 10 seconds

In the test environment, running only fscanf on /etc/termcap required 6 seconds. These timings show the advantage of protocols that enable overlapped execution, although such protocols are more difficult to design.

Example 3-8 Client Batching

#include <stdio.h>
#include <rpc/rpc.h>
#include "render.h"

main(argc, argv)
     int argc;
     char **argv;
{
     struct timeval total_timeout;
     register CLIENT *client;
     enum clnt_stat clnt_stat;
     char buf[1000], *s = buf;
     int exit(), atoi();
     char *host, *fname;
     FILE *f;
     int renderop;

     host = argv[1];
     renderop = atoi(argv[2]);
     fname = argv[3];

     f = fopen(fname, "r");
     if (f == NULL){
          printf("Unable to open file\n");
          exit(0);
     }
     if ((client = clnt_create(host,
       RENDERPROG, RENDERVERS, "tcp")) == NULL) {
          clnt_pcreateerror("clnt_create");
          exit(-1);
     }

     switch (renderop) {
     case RENDERSTRING:
          total_timeout.tv_sec = 5;
          total_timeout.tv_usec = 0;
          while (fscanf(f,"%s", s) != EOF) {
               clnt_stat = clnt_call(client, RENDERSTRING,
                 xdr_wrapstring, &s, xdr_void, NULL, total_timeout);
               if (clnt_stat != RPC_SUCCESS) {
                    clnt_perror(client, "batching rpc");
                    exit(-1);
               }
          }
          break;
     case RENDERSTRING_BATCHED:
          total_timeout.tv_sec = 0;       /* set timeout to zero */
          total_timeout.tv_usec = 0;
          while (fscanf(f,"%s", s) != EOF) {
               clnt_stat = clnt_call(client, RENDERSTRING_BATCHED,
                 xdr_wrapstring, &s, NULL, NULL, total_timeout);
               if (clnt_stat != RPC_SUCCESS) {
                    clnt_perror(client, "batching rpc");
                    exit(-1);
               }
          }

          /* Now flush the pipeline */


          total_timeout.tv_sec = 20;
          clnt_stat = clnt_call(client, NULLPROC, xdr_void, NULL,
            xdr_void, NULL, total_timeout);
          if (clnt_stat != RPC_SUCCESS) {
               clnt_perror(client, "batching rpc");
               exit(-1);
          }
          break;
     default:
          return;
     }


     clnt_destroy(client);
     fclose(f);
     exit(0);
}

