The RPC Prime application uses most of the RPC features described so far in this appendix to compute prime numbers. Prime number computation might not seem to be the pinnacle of application functionality, but it is an excellent vehicle for demonstrating the power of RPC. In this sample application, the prime number computations stand in for any computationally intensive operation you might need to perform.
The client side of the Prime application has been designed so that it can operate whether or not a Prime server is available. When executed, the application creates a thread for each designated Prime server. Each thread then attempts to bind to its designated Prime server. The client sends work to all available Prime servers with which it has bound successfully. Each thread that was unsuccessful in binding to the server waits a predetermined period of time before trying to rebind. In addition, the client creates one local thread that computes prime numbers on the client's computer. The source code for the Prime client is in the primec.c source file on the companion CD.
To ensure that work is not replicated between the threads, the client keeps one global variable, NextNumber, which contains the value of the next number to be tested for prime status. Each thread increments this number within the context of a critical section to ensure that no other thread accesses it simultaneously. When the increment is complete, the thread copies the number to a local variable, temp, and exits the critical section. The temp variable can then be used safely because it is local to the thread, so no other thread can modify it. The next thread that accesses NextNumber retrieves the already incremented value of NextNumber, ensuring that no two threads test the same number. This technique is known as the "divide-and-conquer method" and is shown in the following code:
```c
EnterCriticalSection(&GlobalCriticalSection);
temp = ++NextNumber;
LeaveCriticalSection(&GlobalCriticalSection);
if (temp >= ULONG_MAX)
    break;    /* Leave the loop once the counter is exhausted. */
```

(Note that the critical section must be released before the `break`; breaking out of the loop while still holding it would leave every other thread blocked forever.)
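The same shared-counter technique can be sketched in portable C. The following stand-alone fragment uses a POSIX mutex in place of the Win32 critical section, purely for illustration; `next_candidate` is a hypothetical helper name, not a function from the sample:

```c
#include <pthread.h>

/* Portable sketch of the divide-and-conquer counter: a mutex plays the
   role of the Win32 critical section, and next_candidate() hands each
   candidate number to exactly one calling thread. */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long NextNumber = 0;

unsigned long next_candidate(void)
{
    unsigned long temp;

    pthread_mutex_lock(&counter_lock);
    temp = ++NextNumber;              /* Increment and copy while locked. */
    pthread_mutex_unlock(&counter_lock);
    return temp;                      /* temp is private to this thread. */
}
```

Because the increment and the copy into `temp` happen under the lock, two threads can never receive the same candidate number.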
The client has several options that you can set. Type PRIMEC /? at a command prompt for a list of these features. You execute the client by typing this command:
```
PRIMEC -N \\FIRST_SERVER_NAME;\\SECOND_SERVER_NAME;...
```
After parsing the command-line arguments, the client calls the RpcStringBindingCompose function, shown in the following code, to create a string binding for each server it intends to bind with. A string binding is a string of characters that defines all the attributes for the binding between the client and the server.
```c
status = RpcStringBindingCompose(pszUuid,
                                 pszProtocolSequence,
                                 pszNetworkAddress[i],
                                 pszEndpoint[i],
                                 pszOptions,
                                 &pszStringBinding[i]);
```
The RpcStringBindingCompose function is a convenience function that combines all the pieces of a string binding and returns the combined string in a character array allocated by the function. This memory is later freed by a call to the RpcStringFree function.
As you can see from the example above, a string binding consists of the UUID, protocol sequence, network address, endpoint, and options. Using command-line arguments, the user can modify all the parameters used to create the string binding. The UUID specifies an optional number for identification purposes. This UUID allows clients and servers to distinguish between different objects. In this example, the field is set to NULL by default. The protocol sequence specifies the low-level network protocol for the network communication. Several network protocols are currently supported. Our example uses the named pipes (ncacn_np) protocol that is native to Windows NT. The currently supported network protocols are shown in the following table.
Protocol Sequence | Description |
---|---|
ncacn_np | Named pipes |
ncacn_ip_tcp | Connection-oriented TCP/IP |
ncacn_dnet_nsp | DECnet phase IV |
ncacn_osi_dna | DECnet phase V |
ncadg_ip_udp | Datagram (connectionless) UDP/IP |
ncacn_nb_tcp | NetBIOS over TCP |
ncacn_nb_nb | NetBIOS Enhanced User Interface (NetBEUI) |
ncacn_spx | Sequenced Packet Exchange (SPX) |
ncadg_mq | Microsoft Message Queue Server (MSMQ) |
ncacn_http | Microsoft Internet Information Server (IIS) as Hypertext Transfer Protocol (HTTP) proxy |
ncacn_at_dsp | AppleTalk Data Stream Protocol (DSP) |
ncacn_vns_spp | Banyan Vines Sequenced Packet Protocol (SPP) transport |
ncadg_ipx | Internetwork Packet Exchange (IPX) datagrams |
ncalrpc | Local RPC |
The network address is the address of the server that the client wants to bind with. When the named pipes protocol sequence is used, the network address takes the form \\servername, where servername is the name of the server computer. The form of a valid network address depends on the protocol sequence used; each protocol sequence has its own way of expressing network addresses.
The endpoint used to create the binding specifies the network endpoint at which the server application listens. The endpoint is like a street address of a particular server application, and the network address is rather like the name of the city in which the server lives. Like the network address, the type of endpoint reflects the protocol sequence being used. When the named pipes protocol sequence is used, the valid endpoint specifies the pipe that the server is listening to. A valid endpoint for the named pipes protocol sequence is \pipe\pipename, where pipename is an application-defined name for the pipe used for low-level network communication between the client and the server.
The goal of RPC is to provide a high-level interface to networks, allowing a remote call to travel transparently over any type of available transport. The options parameter is a miscellaneous string that you use for whatever special settings are appropriate for a particular protocol sequence. In the case of the named pipes protocol sequence, the only available option is security = true. This setting turns on the security mechanisms for the RPC. For other network protocols, the valid options vary.
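Put together, the composed string binding has the textual layout protseq:address[endpoint] (with an optional uuid@ prefix and ,options suffix). The following toy function, a sketch and not the real API, shows how the pieces the client supplies map onto that layout for the named pipes case; the real RpcStringBindingCompose also validates the pieces and allocates the result string itself:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: assemble "protseq:address[endpoint]" the way
   RpcStringBindingCompose does for the named pipes case.  Returns 0
   on success, -1 if the buffer is too small. */
int compose_string_binding(char *out, size_t cb,
                           const char *protseq,
                           const char *address,
                           const char *endpoint)
{
    int n = snprintf(out, cb, "%s:%s[%s]", protseq, address, endpoint);
    return (n > 0 && (size_t)n < cb) ? 0 : -1;
}
```

For the Prime client's defaults, this yields a string such as `ncacn_np:\\server[\pipe\prime]`.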
The RpcStringBindingCompose function combines the parts of a string binding. You compose a string binding in order to specify, in one place, all the parameters for the protocol sequence (network protocol) being used. The client then transforms each string binding into an actual binary binding handle using the RpcBindingFromStringBinding function, as shown here:
```c
status = RpcBindingFromStringBinding(pszStringBinding[i],
                                     &BindingHandle[i]);
```
The binding becomes a sort of magic cookie, or handle, that you can use to make RPCs. The client then creates a thread to manage each server using the CreateThread function, as shown here:
```c
hthread_remote[count - 1] = CreateThread(NULL,
                                         0,
                                         (LPTHREAD_START_ROUTINE)thread_remote,
                                         (LPVOID)count,
                                         0,
                                         &lpIDThread[count - 1]);
```
Each thread is passed a number that designates the server that the thread is responsible for. In addition, the client initializes a critical section object for later use when global variables are accessed from the threads, as shown in the following example:
```c
InitializeCriticalSection(&GlobalCriticalSection);
```
The client calls the GetComputerName function so that it can pass the returned string to the server. The server uses this string for display purposes so the user can see which client is making RPCs, as shown here:
```c
GetComputerName(computer_name_buffer, &Max_ComputerName_Length);
```
After the client obtains a valid binding to each server, it attempts to initialize these servers on a logical level. To do so, it calls a special remote procedure available on each Prime server: InitializePrimeServer. This function notifies the server that a client plans to make requests. As you can see in the following example, the InitializePrimeServer function accepts a binding handle, a context handle, and the name of the computer retrieved by the GetComputerName function:
```c
RpcTryExcept {
    PrimeServerHandle[iserver] = InitializePrimeServer(
        BindingHandle[iserver],
        &phContext[iserver],
        computer_name_buffer);
    IsActiveServer[iserver] = TRUE;
}
RpcExcept(1) {
    value = TRUE;
    IsActiveServer[iserver] = FALSE;
}
RpcEndExcept
```
After the client initializes, prime number computation begins. The thread_local function computes prime numbers locally on the client computer using the IsPrime function, as shown here:
```c
if (IsPrime(temp - 1) != 0)
```
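The body of IsPrime is not reproduced in the text. A minimal trial-division version, sufficient for this sample, might look like the following; the signature is an assumption based on how the call sites use it, and the real IsPrime on the companion CD may differ:

```c
/* Hypothetical trial-division primality test matching the call sites
   above: returns nonzero if n is prime, zero otherwise. */
int IsPrime(unsigned long n)
{
    unsigned long d;

    if (n < 2)
        return 0;                 /* 0 and 1 are not prime. */
    for (d = 2; d * d <= n; d++)  /* Only divisors up to sqrt(n) matter. */
        if (n % d == 0)
            return 0;
    return 1;
}
```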
The thread_remote function makes an RPC to determine whether a number is prime by using the RemoteIsPrime function. In this case, because RemoteIsPrime is an RPC, it is embedded in an exception handler, as shown below:
```c
RpcTryExcept {
    if (RemoteIsPrime(BindingHandle[count - 1],
                      PrimeServerHandle[count - 1],
                      temp - 1) != 0) {
        /* Code displays prime number. */
    }
}
RpcExcept(1) {
    /* If an exception occurred, respond gracefully. */
}
RpcEndExcept
```
If an exception occurs, the client attempts to recognize the error and displays an error message on the console. If the exception indicates that the server is off line, the client thread waits a specified period of time before attempting to rebind to that server. When the client terminates normally, via the Esc key, a special RPC called TerminatePrimeServer is made to notify the server of the client's plans to exit, as shown in the following code. The server can then take action to free memory and update its display to reflect the new status.
```c
TerminatePrimeServer(BindingHandle[iserver], PrimeServerHandle[iserver]);
```
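The text does not spell out the wait-before-rebind policy. One common approach, shown here purely as an assumption and not necessarily what primec.c does, is a capped exponential backoff, where each failed rebind attempt doubles the wait up to a ceiling:

```c
/* Hypothetical backoff policy for the rebind wait described above:
   start at 1 second, double on each failed attempt, cap at 30 seconds.
   The constants are illustrative; primec.c may simply use a fixed delay. */
unsigned long rebind_delay_ms(unsigned int failed_attempts)
{
    unsigned long delay = 1000;          /* 1 second to start          */
    const unsigned long cap = 30000;     /* never wait more than 30 s  */

    while (failed_attempts-- > 0 && delay < cap)
        delay *= 2;
    return delay < cap ? delay : cap;
}
```

A fixed delay keeps the sample simple; backing off avoids hammering a server that is down for an extended period.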
The Prime server has an important, if unrewarding, job. It registers its interface and listens for client requests. The source code for the Prime server module is in the primes.c and primep.c files on the companion CD. The RpcServerUseProtseqEp function tells the RPC run-time module to register a protocol sequence, an endpoint, and a security attribute on which to accept RPCs. This call designates the station the server listens to so that it can hear the client's cries for help, as shown in the following code:
```c
status = RpcServerUseProtseqEp(pszProtocolSequence,
                               cMaxCalls,
                               pszEndpoint,
                               pszSecurity);
```
The RpcServerRegisterIf function registers the server's interface, which is defined in the Prime IDL file. It accepts the handle to the interface being registered and two optional management parameters (which are not used in this example), as shown here:
```c
status = RpcServerRegisterIf(prime_v1_0_s_ifspec, NULL, NULL);
```
The last RPC run-time function that the Prime server calls begins listening for client requests. In this example, the RpcServerListen function never returns:
```c
status = RpcServerListen(cMinCalls, cMaxCalls, fDontWait);
```
Until a client initiates an RPC, the server can do nothing. To avoid this waste of resources, we created a special thread for the server that performs maintenance tasks even when the server is not servicing calls. We created this thread with the CREATE_SUSPENDED flag so that we could lower its priority by calling SetThreadPriority with THREAD_PRIORITY_LOWEST before starting it with the ResumeThread function. This technique ensures that the maintenance thread consumes minimal CPU time once it is running. In the following example, the maintenance thread provides some prime number statistics and checks to see whether the Esc key was pressed:
```c
hthread_server = CreateThread(NULL,
                              0,
                              (LPTHREAD_START_ROUTINE)thread_server,
                              NULL,
                              CREATE_SUSPENDED,
                              &lpIDThread);
SetThreadPriority(hthread_server, THREAD_PRIORITY_LOWEST);
ResumeThread(hthread_server);
```
The Prime server also includes a special context rundown routine, which you can see in the primep.c file on the companion CD. As you might recall, the client calls the TerminatePrimeServer function when the user presses the Esc key. But what happens if the client terminates abnormally, for example, if the client application crashes, the power goes out, or the computer fails? The server must be fault-tolerant: such an event must not impair its service to other clients that might still be on line, so the server must do whatever the TerminatePrimeServer function would have done. The designers of RPC anticipated this situation and provided the rundown facility: a user-defined function that the RPC run time calls automatically when the connection to a client is lost. If the client terminates normally by calling the TerminatePrimeServer function, the rundown routine is skipped. If the client terminates abnormally, the rundown routine is called to perform the necessary cleanup.
The Prime interface definition file specifies the interface between the client and the server. The Prime interface is defined in the prime.idl file. The interface header, shown in the following code, defines the UUID, version number, and default pointer type. Following that is the actual interface definition, which consists of the function prototypes with special IDL attributes.
```idl
[
    uuid(906B0CE0-C70B-1067-B317-00DD010662DA),
    version(1.0),
    pointer_default(unique)
]
interface prime
{
    /* Function definitions */
}
```
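The function definitions are elided above. Based on the remote procedures called elsewhere in this appendix (InitializePrimeServer, RemoteIsPrime, and TerminatePrimeServer), the prototypes plausibly take a shape like the following sketch. The parameter and return types here are assumptions inferred from the call sites, not the actual declarations; consult prime.idl on the companion CD for the real interface:

```idl
[
    uuid(906B0CE0-C70B-1067-B317-00DD010662DA),
    version(1.0),
    pointer_default(unique)
]
interface prime
{
    typedef [context_handle] void *PCONTEXT_HANDLE_TYPE;

    /* Hypothetical prototypes inferred from the call sites shown
       earlier; see prime.idl for the actual declarations. */
    short InitializePrimeServer([in] handle_t hBinding,
                                [out] PCONTEXT_HANDLE_TYPE *pphContext,
                                [in, string] unsigned char *pszClientName);

    short RemoteIsPrime([in] handle_t hBinding,
                        [in] short sPrimeServer,
                        [in] unsigned long ulNumber);

    void TerminatePrimeServer([in] handle_t hBinding,
                              [in] short sPrimeServer);
}
```

The [context_handle] attribute is what ties the rundown routine described earlier to each client's connection: when a client with an open context handle disappears, the run time invokes the rundown routine for that handle.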
Debugging distributed RPC applications is slightly different from debugging conventional applications because of the added factor of the network. For this reason, it is best to separate the server initialization code from the remote procedures themselves. We did this in the Prime RPC application using the primes.c and primep.c source files. These files are linked to produce the server application, but during the debugging stage, separating them can be invaluable. By dividing the server application into two parts, you give yourself the option of linking the remote procedures directly with the client application to produce one standard application. You can then test the application as a whole without worrying about the network. Once your program works properly, you can divide it into a client and a server to test the distribution factor.
After all the effort we've exerted to compute prime numbers, it's a shame that there isn't more of a market for them. By now, we could probably package and sell them by the metric ton. Is there any advantage to computing prime numbers in a distributed manner across a network rather than on one computer? The Prime application provides some simple timer routines that indicate how long the computations take. The following table should give you an idea of the practicality of RPCs. You can see that when the numbers tested are relatively small (1 to 1000), distributing the application actually hurts performance because of the overhead of the RPCs. But when the numbers are very large (10,000,000 and up), the per-call overhead becomes insignificant next to the computation itself. In our tests, the distributed prime computation approached a factor of 3.5 speedup with four computers compared to one. A well-written application makes RPCs only when the possible gain outweighs the cost in overhead.
Calculations | One Computer | Four Computers | Ratio |
---|---|---|---|
1–1000 | 35 seconds | 40 seconds | 0.88 |
100,000–101,000 | 40 seconds | 42 seconds | 0.95 |
1,000,000–1,001,000 | 100 seconds | 61 seconds | 1.64 |
10,000,000–10,001,000 | 581 seconds | 170 seconds | 3.42 |
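The pattern in the table can be captured with a simple cost model: with N servers, the compute time divides by roughly N, but each of the C remote calls adds a fixed overhead t, so distribution pays off only when the saved compute time exceeds C times t. The function below is an illustration of that model, not a fit to the measured data, and the overhead constant in the usage is hypothetical:

```c
/* Illustrative cost model for distributing work over n_servers:
   each of the `calls` RPCs adds a fixed `rpc_overhead_secs` on top
   of the evenly divided compute time. */
double distributed_secs(double local_secs, int n_servers,
                        long calls, double rpc_overhead_secs)
{
    return local_secs / n_servers + calls * rpc_overhead_secs;
}
```

With a hypothetical 30 ms per call, distributing the cheap 1 to 1000 range (35 seconds of local work, 1000 calls) comes out slower than running locally, while the expensive 10,000,000 range comes out far faster, matching the qualitative shape of the table.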