SunRPC and XML-RPC over GnuTLS (SSL, TLS) and ssh


On this page I'm looking into how to do remote procedure calls (RPC) from C in a secure way. I settled on two popular RPC mechanisms (SunRPC and XML-RPC) and looked at how to transport them over GnuTLS (SSL, TLS) and ssh. For XML-RPC, running over SSL is relatively straightforward because almost any competent library supports it. Running over ssh is more difficult, but it is likely to be more useful since users probably already have the ssh infrastructure set up. SunRPC is much trickier because there is no standard way to run it over a transport other than Berkeley sockets. However, with the SunRPC implementation in glibc you can write your own transport (beyond the standard UDP and TCP).



This code is offered unsupported. If it breaks you get to keep both halves. Please don't ask me about it unless you have fully understood this document. You will also need to set up your own certificate authority (CA) and client and server keys. I recommend using



SunRPC

SunRPC (aka ONC-RPC) is a venerable protocol for making remote procedure calls. It is described in several RFCs and has reasonably wide language support. Its most famous users are NFS and NIS. It uses a binary serialisation format called XDR.

In the simplest case you write an interface file called (for example) myserver.x then generate stubs using rpcgen myserver.x. You then obviously need to write the implementation of each function for the server. By linking your server to the server stubs and your client to the client stubs, you end up with RPC.

SunRPC can be carried over UDP or TCP. The UDP version is unreliable. Normally SunRPC servers listen on a random port number and register themselves with a portmap service on the server. Clients consult the portmap service on the server (on the well-known port 111) in order to find out which port to connect to. This is obviously a headache from a firewall administration point of view, so nearly all SunRPC services now use a fixed port, and most can be run over TCP.

SunRPC offers various authentication flavours, but they are all (except the insecure DES-based auth) poorly supported in free software. A good backgrounder on writing SunRPC code is available, although some of its examples look a little different from mine (missing arguments, etc.).

Basic SunRPC

For full code, download the tarball from the downloads section above.

We define a simple server in test.x:

program TESTPROG {
  version TESTPROG_VERS1 {
    /* Return the string, with "Hello " prepended. */
    string test_hello (string) = 1;
    /* Return the current date on the server, as a string. */
    string test_ctime (void) = 2;
  } = 1;
} = 0x20008000;

This is compiled with rpcgen test.x and linked with a simple server implementation (test_svc_impl.c) and command line client (test_clnt_cli.c).

When the server runs (./test_svc) it registers itself with the local portmapper, so make sure portmap is running. The client is invoked with the hostname of the server (./test_clnt localhost); it contacts the portmapper, connects to the server over TCP, and runs the functions.

Note that there is no main function in the server. It is generated automatically by rpcgen.

SunRPC without portmap and running on a fixed TCP port

My next version, test_nopmap_*, gets rid of the requirement for the portmapper and listens on a fixed TCP port (5000).

To do this I compiled my interface slightly differently:

rpcgen test_nopmap.x
rm -f test_nopmap_svc.c
rpcgen -m test_nopmap.x > test_nopmap_svc.c

The -m option suppresses generation of the automatic main function and instead I have to write my own:

int
main (int argc, char **argv)
{
  int sock = socket (PF_INET, SOCK_STREAM, IPPROTO_TCP);
  if (sock == -1) { perror ("socket"); exit (1); }

  struct sockaddr_in addr;
  memset (&addr, 0, sizeof addr);
  addr.sin_family = AF_INET;
  addr.sin_port = htons (5000);
  addr.sin_addr.s_addr = htonl (INADDR_LOOPBACK); // listen only on loopback
  if (bind (sock, (struct sockaddr *) &addr, sizeof addr) == -1)
    { perror ("bind"); exit (1); }

  SVCXPRT *transp = svctcp_create (sock, 0, 0);
  if (!transp) {
    fprintf (stderr, "cannot create tcp service\n");
    exit (1);
  }

  /* Because the final arg is 0, this will not register with portmap. */
  if (!svc_register (transp, TESTPROG, TESTPROG_VERS1, testprog_1, 0)) {
    fprintf (stderr, "unable to register (TESTPROG, TESTPROG_VERS1, 0)\n");
    exit (1);
  }

  svc_run ();
  fprintf (stderr, "svc_run returned\n");
  exit (1);
}

The key function here is svctcp_create which invokes the SunRPC TCP transport (more on transports in the next section).

SunRPC over GnuTLS

SunRPC code is usually descended from the reference implementation written by Sun in the late 1980s. Both BSD and GNU libc implementations share this heritage. The Sun code defines several transports, including UDP and TCP. It is also quite possible to write your own, which is what I did to create the GnuTLS transport. These are derived from the files clnt_tcp.c and svc_tcp.c in the glibc source code.

In fact to save some time I did not copy the whole of clnt_tcp.c, but instead added a hack to just replace the two low level functions I needed. For the server this hack wasn't possible, so there is a complete transport backend copied from svc_tcp.c. This fact actually makes it easier to understand what needs to be changed in order to write a complete client transport.

You can see the modifications required for the client in test_gnutls_clnt_cli.c from the download above, and for the server in test_gnutls_svc_gnutls.c.

For the client all that changes is that we have to initialise GnuTLS itself. The code for doing that was copied mostly from Simple client example with X.509 certificate support.

For the server, the new transport keeps track additionally of the GnuTLS session (as well as the TCP socket). The code was copied from Echo Server with X.509 authentication.

At the protocol level RPC request and reply messages map directly to GnuTLS records. SunRPC offers a way to do asynchronous calls, but we did not test if this works. As far as I can tell even batched SunRPC calls would translate to individual GnuTLS records (in other words, it wouldn't batch them up into a single GnuTLS record), but exactly what happens would depend closely on the implementation of the SunRPC library.

The only changes required to client programs are (a) initialise GnuTLS and load the CA certificate and (b) call clntgnutls_create instead of clnttcp_create. For servers, similarly, one just initialises GnuTLS and calls svcgnutls_create instead of svctcp_create.

SunRPC over ssh

Anticipating that an ssh infrastructure is going to be more widespread than X.509 certificates, I also looked into running SunRPC over ssh.

The way I envisage this happening is rather like CVS's :ext: method: an external program is forked (usually ssh but may conceivably be other programs) and we write to and read from this external program over a couple of pipes. The external program is then entirely responsible for establishing a connection to the remote server, and for any authentication and encryption which is done.

On the remote server we run NetCat (nc) which talks either through a loopback TCP connection or over a Unix domain socket (nc -U) to the server instance. (Another way of interest would actually be to start another server instance, using SunRPC's support for inetd - see below).

Obviously with the ability to write transports as for GnuTLS it should be easy to write a specific transport for external programs like this. The easiest method would probably be to copy and modify the clnt_unix.c and svc_unix.c (Unix domain sockets) transport so that it sets up the pipe and forks the external program.

SunRPC over Unix domain sockets

This is supported directly by the SunRPC library in glibc. Use the function clntunix_create in the client and svcunix_create in the server.

SunRPC from inetd

There is direct support for running SunRPC servers from inetd. Of course if run in this fashion then a new instance of the server is started for each incoming connection.

SunRPC over multiple transports

Multiple transports may be registered in the custom main function.

SunRPC with usernames and passwords for authentication

I have not covered this because it seems that as soon as you have some sort of RPC mechanism, whether SunRPC or XML-RPC, you can implement a challenge-response password exchange easily enough using remote procedures. Ensure that only the "login" procedures are able to be called until a successful login has happened.

Batch calls (asynchronous)

SunRPC, and the implementation in glibc, allows batch calls. Several batch calls may be queued up (each call returns to the user immediately). The calls are actually sent on the next non-batch call, or if an internal buffer fills up.

Note that this is not pipelining. Batch calls cannot return anything. However you could accumulate results on the server side and have a non-batch call which returns all the results in one go.

The following conditions have to be satisfied:

The server implementation of the batch call must return NULL, as shown in the example below:

void *
test_batch1_1_svc (char **str, struct svc_req *req)
{
  printf ("batch1: %s\n", *str);
  return NULL;
}

On the client side, the client stubs produced by rpcgen are not sufficient to call a batch call (in fact they will deadlock if you try). You need to write your own client stub. In the table below, left shows the normal client stub produced by rpcgen and right shows the modified client stub:

Stub produced by rpcgen:

void *
test_batch1_1 (char **argp, CLIENT *clnt)
{
  static char clnt_res;

  memset ((char *) &clnt_res, 0, sizeof (clnt_res));
  if (clnt_call (clnt, test_batch1,
	(xdrproc_t) xdr_wrapstring, (caddr_t) argp,
	(xdrproc_t) xdr_void, (caddr_t) &clnt_res,
	TIMEOUT) != RPC_SUCCESS)
    return (NULL);
  return ((void *) &clnt_res);
}

Stub modified for the batch call (no result XDR routine, and a zero timeout so the call returns immediately):

static struct timeval zero_timeout = { 0, 0 };

void
test_batch1_1_asynch (char **argp, CLIENT *clnt)
{
  if (clnt_call (clnt, test_batch1,
	(xdrproc_t) xdr_wrapstring, (caddr_t) argp,
	(xdrproc_t) NULL, (caddr_t) NULL,
	zero_timeout) != RPC_SUCCESS) {
    fprintf (stderr, "warning: batched call failed\n");
  }
}

The download available in the downloads section above contains a demonstration of batch calls.

SunRPC over IPv6

The standard TCP transport (clnt_tcp.c) is written only with IPv4 in mind. However since it is possible to create your own sockets and pass them to clnttcp_create, you can create your own IPv6 socket and use that. In the downloads section above you will find a verified working IPv6 client and server (test_ipv6*).

Note that if you are writing your own transport then IPv4 vs IPv6 is not an issue because you can make the necessary modifications to your forked clnt_tcp.c to support IPv6.

IPv6 is, however, not supported by the portmapper. There is a replacement service called rpcbind, part of TI-RPC (see below), which supports IPv6 addresses. However, really you shouldn't be using a portmapper at all.

Here are the changes required on the server. Note that for the purposes of this test we are only listening on the loopback address (::1). A real server would want to listen on a public interface.

  int sock = socket (PF_INET6, SOCK_STREAM, 0);
  if (sock == -1) { perror ("socket"); exit (1); }

  struct sockaddr_in6 addr;
  memset (&addr, 0, sizeof addr);
  addr.sin6_family = AF_INET6;
  addr.sin6_port = htons (5000);
  addr.sin6_addr = in6addr_loopback; // listen on ::1 only
  if (bind (sock, (struct sockaddr *) &addr, sizeof addr) == -1)
    { perror ("bind"); exit (1); }

  SVCXPRT *transp = svctcp_create (sock, 0, 0);

And for the client:

  /* Create a TCP client connection to service listening on localhost:5000. */
  struct sockaddr_in6 addr;
  memset (&addr, 0, sizeof addr);
  addr.sin6_family = AF_INET6;
  addr.sin6_port = htons (5000);
  addr.sin6_addr = in6addr_loopback;

  int sock = socket (PF_INET6, SOCK_STREAM, 0);
  if (sock == -1) { perror ("socket"); exit (1); }
  if (connect (sock, (struct sockaddr *) &addr, sizeof addr) == -1) {
    perror ("connect");
    exit (1);
  }

  /* Nasty cast here works because IPv4 and IPv6 address structures
   * are compatible up to the port field.  All that clnttcp_create
   * uses this for anyway is to determine if addr->sin_port is zero.
   */
  CLIENT *cl = clnttcp_create ((struct sockaddr_in *) &addr,
			       TESTPROG, TESTPROG_VERS1, &sock, 0, 0);

Other SunRPC issues

The service loop in the standard SunRPC (function svc_run) is very simple-minded. In particular it doesn't provide any way to listen on non-RPC events.

However, it is possible to replace svc_run with one's own function, and there is a public API for this. Your own function has to listen on svc_fdset file descriptors and whatever other events it wants to monitor. If an event is detected on one of the RPC fds, call svc_getreq_fdset.

There is also a global svc_pollfd and function svc_getreq_poll which is similar, for using poll(2). Both svc_fdset and svc_pollfd should be treated as read only, and copied before passing to select or poll. File svc_run.c in the glibc source provides an example using the poll version.

Other SunRPC libraries, TI-RPC

The library investigated above is derived from Sun's original RPC library written in the late 1980s. Solaris later included TI-RPC which is (apparently) an evolution of SunRPC/ONC-RPC. In particular the portmapper (which we are not concerned with) has been replaced with an RPC service which understands IPv6.

The current status of TI-RPC is confused: there are multiple forks, under multiple licenses. It appears to have been relicensed by various people without Sun's agreement (the original license is the SISSL).


XML-RPC in C and Python

For the source to these examples, get the tarball from the downloads section above. It contains:

All clients and servers talk on the same port so you should be able to use any client with any server if you want to test interoperability (provided they are SSL-compatible of course).

The Python code uses M2Crypto. It was relatively simple to write both client and server.

The C code uses XMLRPC for C and C++ (xmlrpc-c). Writing a client and server in xmlrpc-c is straightforward (although verbose). However the crucial problem is lack of direct support for SSL. On the client side since xmlrpc-c uses Curl for connections, it can use Curl's own SSL support, and this works reliably. On the server side, xmlrpc-c uses the Abyss embedded webserver and this does not support SSL, with no timeline for when this might be available.

xmlrpc-c also supports a CGI mode. This would trivially allow SSL to be supported -- for example through Apache -- but limits us to one call per process.
Another route would be to use an external stunnel process.
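A minimal stunnel server configuration for that route might look like the following; the port numbers and certificate paths are illustrative, not taken from the examples above:

```
; terminate SSL on port 5001 and forward plaintext connections
; to the XML-RPC server listening on localhost:8080
cert = /etc/stunnel/server.pem
key  = /etc/stunnel/server.key

[xmlrpc]
accept  = 5001
connect = 127.0.0.1:8080
```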

Adding GnuTLS support to xmlrpc-c

Currently xmlrpc-c has three server transports (Abyss, CGI and a Win32-specific server). By examining these I was able to estimate the amount of work required to write a GnuTLS-capable transport for xmlrpc-c. This analysis follows.

Xmlrpc-c contains full support for serialising and deserialising XML-RPC messages, dealing with types and so on. However it does not contain any code to handle the HTTP level (listening on sockets, parsing and sending HTTP headers, dealing with chunked-encoding and so forth). In a "standard" xmlrpc-c server these issues are handled by the Abyss backend (if standalone) or by an external webserver (if using the CGI backend).

Therefore these elements -- socket handling, HTTP headers, chunked encoding and HTTPS/GnuTLS support -- need to be written into the new backend. Alternatively one might take an embedded webserver which already supports SSL, for example BOA.

So the work is roughly equivalent to writing a simple SSL-capable webserver.

xmlrpc-c over ssh, Unix domain sockets

There is basically no support for xmlrpc-c (or indeed any XML-RPC client or server) over these non-standard transports.


Coming soon ...

Notes on other RPC mechanisms

I wrote a SOAP client from scratch, and this gave me a strong aversion to SOAP. It is an over-complex, under-defined protocol.

CORBA has a number of difficulties, not least that the official bindings for C/C++ are really hard to use. On the other hand, bindings for other languages such as Java are rather pleasant.


Performance

These tests were performed over the loopback interface. They need to be redone over a real LAN in order to get more realistic results. We also need to fix persistent connections for XML-RPC, since the overhead of creating and dropping a connection per call makes the test very unfair.


All source is available in the tarballs in the downloads section at the top of this page. For SunRPC, see test_gnutls_clnt_perf and test_gnutls_svc. For XML-RPC, see py-server-ssl and c-client-ssl-perf.


                                Time (seconds)   Sent (bytes)   Received (bytes)
SunRPC, 100,000 calls                 25           16.7 MB          16.3 MB
XML-RPC, 100,000 calls (est)        4020           59.3 MB           132 MB

rjones AT redhat DOT com

$Id: index.html,v 1.12 2007/02/20 18:03:06 rjones Exp $