
9.3 DISTRIBUTED MUTUAL EXCLUSION

9.3.1 The Problem

In a distributed computing environment, there are many cases and situations in which it is necessary to give a single entity (or a single group of entities) exclusive control.

This occurs, for example, whenever computations require the presence of a central controller (e.g., because the coordination itself is more efficiently performed this way). During the lifetime of the system, this requirement will occur recurrently; hence, the problem is a continuous one. The typical solution used in these situations is to perform an election so as to select the coordinator every time one is needed. We have discussed and examined how to perform this task in detail in Chapter 3. There are some drawbacks with the approach of repeatedly choosing a leader. The first and foremost is that it is usually unfair: Recall that there is no restriction on which entity will become leader; thus, it is possible that some entities will never assume such a role, while others (e.g., the ones with small ids) will always be chosen. This means that the workload is not really balanced within the system; this can also create additional bottlenecks. A secondary (but important) disadvantage of repeatedly electing a leader is its cost: Even if just performed on a (a priori constructed) spanning tree, at least Ω(n) messages will be required each time.

Another situation when exclusive control is necessary is when accessing a critical resource of a system. This is, for example, the case when only a single resource of some type (e.g., a printer, a bus) exists in the system and that resource cannot be used concurrently. In this case, any entity requiring the use of that resource must ensure that when it does so, it is the only one doing so. What is important is not the nature of the resource but the fact that it must be held in mutual exclusion: only one at the time. This means that when more than one entity may want to access the critical resource, only one should be allowed. Any mechanism must also clearly ensure that any request is eventually granted, that is, no entity will wait forever. The approach of using election, to select the entity to which access is granted, is unfortunately not a wise one. This is not (only) because of the cost but because of its unfairness: It does not guarantee that every entity wanting to access a resource will be allowed to do so (i.e., will become leader) within finite time.

This gives rise to a very interesting continuous problem, that of distributed mutual exclusion. We will describe it more precisely using the metaphor of critical operations in a continuous computation C. In this metaphor,

1. every entity is involved in a continuous computation C,

2. some operations that entities can perform in C are designated as critical,

3. an entity may need to perform a critical operation at any time, any number of times,

4. an entity required to perform a critical operation cannot continue C until that operation has been performed,

where an operation may be an action or even an entire subprotocol. A distributed mutual exclusion mechanism is any protocol that ensures the following two properties:

- Mutual exclusion: If an entity is performing a critical operation, no other entity is doing so.

- Fairness: If an entity wants to perform a critical operation, it will do so within finite time.

In the rest of this section we will see how to design efficient protocols with those properties. In the process, we will see that there is an interesting connection between the problem of distributed mutual exclusion and that of managing a distributed queue (another continuous computation). In particular, we will see how any protocol for fair management of a distributed queue can be used to solve the problem of distributed mutual exclusion. Throughout, we will assume restrictions IR.

9.3.2 A Simple and Efficient Solution

The problem of distributed mutual exclusion has a very simple and efficient centralized solution:

Protocol Central:

Initially, an entity is elected as leader; this entity will then coordinate the granting of permissions as follows:

1. each entity wanting to perform a critical operation sends a request to the leader; once granted permission, the entity performs its critical operation, and when finished, it informs the leader;

2. the leader grants permissions to one requesting entity at a time, ensuring that both mutual exclusion and fairness are satisfied.

The last point is achieved, for example, by having the leader keep the pending requests in a first in first out (FIFO) ordered list.
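The leader's side of protocol Central can be sketched as follows. This is a minimal illustration, not the book's code: the class and method names (CentralLeader, on_request, on_release) are hypothetical, and actual message transmission is reduced to comments.

```python
from collections import deque

class CentralLeader:
    """Leader-side sketch of protocol Central: grants the single
    permission to one requester at a time, in FIFO order."""

    def __init__(self):
        self.pending = deque()   # FIFO ordered list of waiting entities
        self.holder = None       # entity currently performing a critical operation

    def on_request(self, entity):
        """An entity has asked permission to perform a critical operation."""
        self.pending.append(entity)
        self._grant_next()

    def on_release(self, entity):
        """The entity has notified the leader that it has finished."""
        assert entity == self.holder
        self.holder = None
        self._grant_next()

    def _grant_next(self):
        # Mutual exclusion: grant only while nobody holds permission.
        # Fairness: FIFO order ensures every request is eventually served.
        if self.holder is None and self.pending:
            self.holder = self.pending.popleft()
            # here the leader would send a "permission" message to self.holder
```

Because the queue is served strictly first in, first out, no request can be overtaken indefinitely, which is exactly what the fairness property demands.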

This very simple centralized protocol is not only correct but also quite efficient. In fact, for each critical operation, there is a request from the entity to the leader, a permission (eventually) from the leader to that entity, and the notification of termination from the entity back to the leader. Thus, there will be 3d(x, r) messages for each operation x wants to perform, where r is the leader; so, the operating cost of Central will be no more than

3 diam(G)

messages per critical operation. This means that in a complete graph the cost will be only three messages per critical operation.

The drawbacks of this solution are those of all centralized solutions: The workload is not balanced; the leader might have to keep a large amount of information; the leader is a fault-tolerance bottleneck. As we are assuming total reliability, we will not worry for the moment about the issue of fault tolerance. The other two issues, however, are motivational enough to look for decentralized solutions.

9.3.3 Traversing the Network

To construct an efficient decentralized mutual-exclusion protocol, let us first reexpress the mechanism of the centralized protocol as follows: In the system there is a single “permission” token, initially held by the leader, and an entity can perform a critical operation only if in possession of such a token. It is this fact that ensures the mutual exclusion property within protocol Central. The fairness property is instead guaranteed in protocol Central because (1) the decision to which entity should the token be given is made by the leader, to whom the token is returned once a critical operation has been performed, and (2) the leader uses a fair decision mechanism (e.g., a FIFO list).

We can still enforce mutual exclusion using the idea of a permission token, and at the same time achieve fairness without having a leader, in a purely decentralized way. For example, we can have the token circulate among all the entities:

Protocol EndlessTraversal:

- A single token continuously performs a traversal of the network.

- When an entity x receives the token, if it needs to perform a critical operation, it will do so and, upon completion, it will continue the circulation of the token; otherwise, it will circulate it immediately.

- If an entity needs to perform a critical operation, it will wait until it receives the token.

We have discussed at length how to efficiently perform a single traversal of a network. Recall that a complete traversal can be done using a spanning tree of the network, at a cost of 2(n - 1) messages per traversal. If the network is Hamiltonian, that is, it has a spanning cycle, we can use that cycle to perform the traversal transmitting only n messages for a complete traversal. Indeed this is used in many practical systems.
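The circulation of the token on a Hamiltonian cycle can be sketched as a simple simulation. This is an illustrative sketch under simplifying assumptions (synchronous moves, the cycle taken as 0, 1, ..., n − 1); the function name and parameters are hypothetical.

```python
def endless_traversal(wants_critical, rounds=1):
    """Sketch of EndlessTraversal on a Hamiltonian cycle 0, 1, ..., n-1.
    `wants_critical[i]` is True if entity i needs a critical operation.
    Returns the order in which critical operations were performed and
    the number of token moves (one message per move)."""
    n = len(wants_critical)
    performed, moves = [], 0
    token = 0
    for _ in range(rounds * n):          # the token keeps circulating
        if wants_critical[token]:
            performed.append(token)      # the holder performs its critical op
            wants_critical[token] = False
        token = (token + 1) % n          # then forwards the token to the right
        moves += 1
    return performed, moves
```

Note that the token moves n times per round regardless of how many entities actually wanted it; this is precisely the weakness discussed next.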

What is the cost per critical operation of operating such a protocol? To answer this question, consider a period of time when all entities are continuously asking for the token; in this case, after almost every move, the token will be allowing an entity to perform a critical operation. This means that in such a situation of heavy load, the cost of EndlessTraversal is just O(1) messages per critical operation. If the requests are few and infrequent, that is, with light load, the amount of messages per request is unpredictable as it depends on the time between successive requests and the speed of the token. From a practical point of view, this means that the management of a seldom-used resource may result in overcharging the network with messages.

Consider now a period of time where the entities have no need to perform any critical operations; during all this time, the token will continue to traverse the network, looking for entities needing it, and finding none. As this situation of no load can continue for an unpredictable amount of time, it follows that, in protocol EndlessTraversal, the number of messages per critical operation is unbounded!

Let us see how this unpleasant situation can be improved. Let us consider the virtual ring R associated with the depth-first traversal of the network; in case the network is Hamiltonian, we will use the Hamiltonian cycle as the ring.

In a traversal, the token moves along R in one direction, call it “right.” If a token reaches an entity that does not need to perform a critical operation (or just finished executing one), to cut down the number of message transmissions, instead of automatically forwarding the token along the ring, the entity will do so only if there are indeed requests for the token, that is, if there are entities wanting to perform a critical operation.

The problem is how to make the entity holding the token know if there are entities wanting it. This problem is fortunately easy to solve: An entity needing to perform a critical operation and not in possession of the token will issue a request for the token; the request travels along the ring in the opposite direction of the token, until it reaches the entity holding the token or an entity that has also issued a request for the token. There are many details that must be taken into account to transform this informal description into a protocol. Let us be more precise.

In our description, each link will have a color, and colors change depending on the type of message according to the following two rules:

 - Links are either white or black; initially, all links are white.

 - Whenever a request is sent on a link, that link becomes black; whenever the token is sent on a link, that link becomes white.

The resulting mechanism is then specified as follows:

Mechanism OnDemandTraversal:

1. When an entity needs to perform a critical operation and does not have the token, if its left link is white, it sends a request there and waits for the token.

2. When an entity receives a request (from the right link), if its left link is white, it forwards the request and waits for the token.

3. When an entity has received or receives the token, it will execute the following two steps:

(a) if it needs to perform a critical operation, it performs it;

(b) if its right link is black, it sends the token to the right.

In this way, instead of a blind endless traversal, we can have one that is fueled by requests for the token.
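The three rules above, together with the link-coloring rules, can be sketched as one simulation step: all pending requests propagate left (blackening links), then the token travels right over black links (whitening them). This is a simplified sketch, assuming the token holder is not itself a requester; the function name and return values are hypothetical.

```python
def on_demand_traversal(n, token_at, requesters):
    """Sketch of OnDemandTraversal on a ring 0..n-1 ("right" = +1 mod n).
    link[i] joins node i to node i+1. A request from q travels left,
    blackening links, until it meets the token holder or a node that
    is already waiting. Returns (order of critical ops, messages sent)."""
    WHITE, BLACK = 0, 1
    link = [WHITE] * n
    waiting = [False] * n
    messages = 0
    performed = []

    # Rules 1 and 2: each requester issues a request, which is forwarded
    # left as long as the left link is white.
    for q in requesters:
        waiting[q] = True
        i = q
        while link[(i - 1) % n] == WHITE:
            link[(i - 1) % n] = BLACK        # request blackens the link
            messages += 1
            i = (i - 1) % n
            if i == token_at or waiting[i]:
                break                        # request has found its target

    # Rule 3: the token travels right as long as its right link is black.
    t = token_at
    while link[t] == BLACK:
        link[t] = WHITE                      # token whitens the link
        messages += 1
        t = (t + 1) % n
        if waiting[t]:
            performed.append(t)              # rule 3(a): critical operation
            waiting[t] = False
    return performed, messages
```

Observe how the black links left behind by a request form exactly the path the token must follow, so the token stops as soon as no requests remain ahead of it.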

It is not difficult to verify that the corresponding protocol OnDemandTraversal is indeed correct, ensuring both mutual exclusion and fairness. Unlike EndlessTraversal, the cost of protocol OnDemandTraversal is never unbounded. In fact, if there are no requests in the system, the token will not circulate. In other words, each traversal of the token satisfies at least one request, and possibly more. This means that in the worst case, a traversal satisfies exactly one request; in other words, the number of token movements per request is at most n' - 1, where n' is the number of nodes on R. In addition to the token, the protocol also uses request messages.

A request message, moving in the opposite direction of the token, moves along the ring until it finds the token or another entity waiting for the token. (NOTE: the token and a request never cross on a link.) This means that a request will cause at most n' - 1 transmissions. Therefore, the total number of messages per critical operation in protocol OnDemandTraversal in the worst case is

2(n' - 1) ≤ 4n - 6,

since R has at most n' = 2(n - 1) nodes.

Notice that although bounded, this is always worse than the cost obtained by Central. In particular, in a complete graph the worst case cost of OnDemandTraversal will be 2(n - 1), while in Central, as we have seen, three messages suffice.
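The worst-case comparison above amounts to a small calculation. The following sketch (with hypothetical function names) packages the two bounds, using diam(G) = 1 and n' = n for a complete graph, and n' = 2(n − 1) for a ring obtained from a depth-first traversal of a spanning tree.

```python
def central_worst_cost(diam):
    """Worst case of protocol Central per critical operation:
    request + permission + termination notice, each over at most
    diam hops."""
    return 3 * diam

def on_demand_worst_cost(n_ring):
    """Worst case of OnDemandTraversal on a ring of n_ring nodes:
    the request and then the token may each cross n_ring - 1 links."""
    return 2 * (n_ring - 1)

n = 8
# complete graph: diam(G) = 1 and the Hamiltonian cycle has n nodes
print(central_worst_cost(1))            # 3 messages for Central
print(on_demand_worst_cost(n))          # 2(n - 1) for OnDemandTraversal
# DFS-based ring: n' = 2(n - 1) nodes, giving the 4n - 6 bound
print(on_demand_worst_cost(2 * (n - 1)))
```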

The worst case does not tell us the whole story. In fact, the actual cost will depend on the frequency and the spread of the requests. In particular, like protocol EndlessTraversal, the more frequent the requests and the larger their spread, the closer the performance of protocol OnDemandTraversal comes to O(1) messages per critical operation. This will be so, regardless of the diameter of the topology, even in networks where protocol Central under the same conditions could require O(n) messages per request.

We have seen how to have the token move only if there are requests. The movements of the token, fueled by requests, were according to a perennial traversal of R, a cycle containing all the entities. If the network is Hamiltonian, we clearly choose R to be the Hamiltonian cycle; else we would like to construct the shortest such cycle. We do know that for any network we can always construct a spanning cycle R with 2(n - 1) nodes: the one obtained by a depth-first traversal of a spanning tree of the network.
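The construction of such a spanning cycle can be sketched directly: a depth-first traversal of the spanning tree walks down each tree edge once and back up once, yielding a closed walk of 2(n − 1) moves that visits every node. The function name and the adjacency-dictionary representation below are assumptions for illustration.

```python
def virtual_ring(tree, root):
    """Build the virtual ring R from a depth-first traversal of a
    spanning tree given as an adjacency dict {node: [neighbors]}.
    Each of the n - 1 tree edges is walked twice (down and back up),
    so the resulting closed walk has 2(n - 1) moves."""
    ring = []

    def dfs(u, parent):
        for v in tree[u]:
            if v != parent:
                ring.append((u, v))   # walk down the tree edge
                dfs(v, u)
                ring.append((v, u))   # walk back up the same edge

    dfs(root, None)
    return ring
```

For example, on a star with center 0 and leaves 1 and 2, the walk is (0,1), (1,0), (0,2), (2,0): four moves, that is, 2(3 − 1), visiting every node and returning to the root.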
