Fig. 3-7. (a) The time daemon asks all the other machines for their clock values. (b) The machines answer. (c) The time daemon tells everyone how to adjust their clocks.

The second design issue is centralized versus distributed. This theme has occurred repeatedly throughout the book. Collecting all the information in one place allows a better decision to be made, but is less robust and can put a heavy load on the central machine. Decentralized algorithms are usually preferable, but some centralized algorithms have been proposed for lack of suitable decentralized alternatives.

To make release consistency clearer, let us briefly describe a possible simple-minded implementation in the context of distributed shared memory (release consistency was actually invented for the Dash multiprocessor, but the idea is the same, even though the implementation is not). To do an acquire, a process sends a message to a synchronization manager requesting an acquire on a particular lock. In the absence of any competition, the request is granted and the acquire completes. Then an arbitrary sequence of reads and writes to the shared data can take place locally. None of these is propagated to other machines. When the release is done, the modified data are sent to the other machines that use them. After each machine has acknowledged receipt of the data, the synchronization manager is informed of the release. In this way, an arbitrary number of reads and writes on shared variables can be done with a fixed amount of overhead. Acquires and releases on different locks occur independently of one another.

Formally, a memory exhibits entry consistency if it meets all the following conditions (Bershad and Zekauskas, 1991):
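The acquire/release protocol just described can be sketched as follows. This is a minimal single-process simulation, not the Dash or any real DSM implementation: the "machines" are arrays, the synchronization manager is a single lock variable, and all names (mem, acquire, write_shared, release) are invented for illustration. The point is only to show that writes stay local between acquire and release, and are propagated to the other machines' copies exactly once, at release time.

```c
#include <assert.h>
#include <string.h>

#define NMACHINES 2
#define NVARS 4

/* Each machine keeps its own local copy of the shared variables. */
static int mem[NMACHINES][NVARS];
static int lock_holder = -1;          /* -1 means the lock is free */

/* Acquire: ask the (simulated) synchronization manager for the lock.
 * No contention is modeled; a real system would block the caller. */
static int acquire(int machine)
{
    if (lock_holder != -1)
        return -1;                    /* lock busy */
    lock_holder = machine;
    return 0;
}

/* Writes between acquire and release stay local to the writer;
 * nothing is propagated to the other machines yet. */
static void write_shared(int machine, int var, int value)
{
    mem[machine][var] = value;
}

/* Release: send the modified data to every other machine, then
 * tell the manager the lock is free.  All the writes made since
 * the acquire are propagated here, in one batch. */
static void release(int machine)
{
    for (int m = 0; m < NMACHINES; m++)
        if (m != machine)
            memcpy(mem[m], mem[machine], sizeof mem[machine]);
    lock_holder = -1;
}
```

Note how the fixed amount of overhead claimed in the text shows up here: however many calls to write_shared occur inside the critical section, there is exactly one propagation step, in release.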
Consistency   Description
Strict        Absolute time ordering of all shared accesses matters
Sequential    All processes see all shared accesses in the same order
Causal        All processes see all causally-related shared accesses in the same order
Processor     PRAM consistency + memory coherence
PRAM          All processes see writes from each processor in the order they were issued; writes from different processors may not always be seen in the same order

Another issue is that a process may make thousands of consecutive writes to the same page because many programs exhibit locality of reference. Having to catch all these updates and pass them to remote machines is horrendously expensive in the absence of multiprocessor-type snooping.

2. How to distribute tuples among machines and locate them later.

Munin and Midway try to improve the performance by requiring the
programmer to mark those variables that are shared and by using weaker consistency models. Munin is based on release consistency, and on every release transmits all modified pages (as deltas) to other processes sharing those pages. Midway, in contrast, does communication only when a lock changes ownership.

Interprocess communication in Mach is based on message passing. To receive messages, a user process asks the kernel to create a kind of protected mailbox, called a port, for it. The port is stored inside the kernel, and has the ability to queue an ordered list of messages. Queues are not fixed in size, but for flow control reasons, if more than n messages are queued on a port, a process attempting to send to it is suspended to give the port a chance to be emptied. The parameter n is settable per port.

The last two calls of Fig. 8-3 return information about the process. The former gives statistical information and the latter returns a list of all the threads.

Mutex type     Properties
Fast           Locking it a second time causes a deadlock
Recursive      Locking it a second time is allowed
Nonrecursive   Locking it a second time gives an error

10.5.2. The Cell Directory Service

All of these calls operate by first determining whether CDS or GDS is needed. X.500 names are handled by GDS; DNS or mixed names are handled by CDS, as illustrated in Fig. 10-24. First let us trace the lookup of a name in X.500 format. The XDS library sees that it needs to look up an X.500 name, so it calls the DUA (Directory User Agent), a library linked into the client code. This handles GDS caching, analogous to the CDS clerk, which handles CDS caching. Users have more control over GDS caching than they do over CDS caching and can, for example, specify which items are to be cached. They can even bypass the DUA if it is absolutely essential to get the latest data.

When the ticket-granting server gets the message, it uses its own private key, KA, to decrypt the message.
When it finds the session key, K1, it looks in the registry and verifies that it recently assigned this key to client C. Since only C knows KC, the ticket-granting server knows that only C was able to decrypt the reply sent in step 1, and so this request must have come from C.
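The check the ticket-granting server performs can be sketched as follows. This is a toy illustration, not real Kerberos or DCE security code: a single-byte XOR stands in for the actual cipher, the key values are arbitrary, and the registry is reduced to one entry. It only demonstrates the logic of the verification step: decrypt the ticket with KA, recover the session key, and confirm in the registry that this key was issued to client C.

```c
#include <assert.h>
#include <string.h>

/* Toy XOR "cipher" standing in for the real encryption algorithm.
 * XOR is its own inverse, so the same call encrypts and decrypts. */
static void crypt_buf(unsigned char key, unsigned char *buf, int len)
{
    int i;
    for (i = 0; i < len; i++)
        buf[i] ^= key;
}

#define KA 0x5A   /* ticket-granting server's private key (arbitrary) */
#define K1 0x3C   /* session key it assigned to client C (arbitrary)  */

/* One-entry stand-in for the registry: who was given K1. */
static const char k1_owner[] = "C";

/* Decrypt the ticket with KA, then verify that the session key
 * inside it is K1 and that the registry says K1 belongs to the
 * named client.  Layout assumed: byte 0 = session key, rest = name. */
static int verify_ticket(unsigned char *ticket, int len)
{
    crypt_buf(KA, ticket, len);                 /* decrypt with KA */
    if (ticket[0] != K1)
        return 0;                               /* unknown session key */
    return strcmp((const char *)&ticket[1], k1_owner) == 0;
}
```

Since only the ticket-granting server knows KA, a ticket that decrypts to a registered (session key, client) pair must have been issued by the server itself; and since only C was able to read the step 1 reply containing K1, a request carrying K1 must have come from C.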