The Singleton is a single-process design pattern. In conventional terms it represents a sequential process that directly
interfaces with other nodes and communicates with them via message passing through input and output ports. It
also serves as a basic building block for other patterns [1, p. 10].
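As a concrete illustration, the singleton's "sequential process with input and output ports" can be sketched in Python, with queues standing in for the ports. The doubling computation and the queue names are illustrative assumptions, not part of the pattern description.

```python
# Minimal sketch of the Singleton pattern: one sequential process reads
# messages from an input port, processes them one at a time, and writes
# results to an output port.
import queue
import threading

def singleton(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Sequential process: receive a message, process it, emit the result."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut the process down
            break
        outbox.put(msg * 2)      # placeholder computation

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=singleton, args=(inbox, outbox))
worker.start()
for x in (1, 2, 3):
    inbox.put(x)
inbox.put(None)
worker.join()
results = [outbox.get() for _ in range(3)]
print(results)  # [2, 4, 6]
```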
Externally, Replication appears to be a singleton, but internally it replicates multiple copies of a
singleton and can therefore serve multiple messages at a time. As shown in figure 1, Replication contains a
coordinator that acts as the input and output interface.
Figure 1 - Structure of a group of replicas
The coordinator accepts external input, selects a replicated singleton according to a load-balancing
algorithm, and forwards the input to it for processing. After the replicated singleton finishes
processing the message, it sends the result back to the coordinator, which in turn forwards the
message appropriately [1, p. 10].
This cyclic process is shown in figure 2.
Figure 2 - Interaction of replicas in a group to obtain the result
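The dispatch cycle above can be sketched as follows. Round-robin is used here as the load-balancing algorithm, and the squaring computation and message tags are illustrative assumptions; the source does not prescribe a particular balancing strategy.

```python
# Sketch of the Replication pattern: a coordinator forwards each incoming
# message to one of several replicated singletons and relays results onward.
import itertools
import queue
import threading

def replica(inbox: queue.Queue, outbox: queue.Queue) -> None:
    while True:
        tag, msg = inbox.get()
        if tag is None:                  # sentinel: shut down
            break
        outbox.put((tag, msg * msg))     # placeholder computation

N = 3
to_replicas = [queue.Queue() for _ in range(N)]
from_replicas = queue.Queue()
workers = [threading.Thread(target=replica, args=(q, from_replicas))
           for q in to_replicas]
for w in workers:
    w.start()

# Coordinator: round-robin dispatch, then gather results back in tag order.
rr = itertools.cycle(range(N))
messages = [2, 3, 4, 5]
for tag, msg in enumerate(messages):
    to_replicas[next(rr)].put((tag, msg))
results = [None] * len(messages)
for _ in messages:
    tag, value = from_replicas.get()
    results[tag] = value
for q in to_replicas:
    q.put((None, None))
for w in workers:
    w.join()
print(results)  # [4, 9, 16, 25]
```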
The Pipeline pattern consists of an ordered set of stages in which the output of each stage is the input of its
successor. A pipeline has multiple singletons connected in a chain, where the output of each
singleton is forwarded to the next singleton in the chain. The first stage of the pipeline acts as the input interface
of the design pattern, and the last stage acts as its output interface. The structured scheme of the
pipeline pattern is shown in figure 3.
Figure 3 - Structured scheme of the pipeline pattern
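The chain of singleton stages can be sketched with one queue between each pair of neighbors; the three stage functions are illustrative assumptions.

```python
# Sketch of the Pipeline pattern: singleton stages chained by queues,
# each stage's output feeding the next stage's input.
import queue
import threading

def stage(func, inbox: queue.Queue, outbox: queue.Queue) -> None:
    while True:
        msg = inbox.get()
        if msg is None:
            outbox.put(None)     # propagate shutdown down the chain
            break
        outbox.put(func(msg))

funcs = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
queues = [queue.Queue() for _ in range(len(funcs) + 1)]
threads = [threading.Thread(target=stage, args=(f, queues[i], queues[i + 1]))
           for i, f in enumerate(funcs)]
for t in threads:
    t.start()
for x in (1, 2, 3):
    queues[0].put(x)             # first stage: the input interface
queues[0].put(None)
out = []
while (item := queues[-1].get()) is not None:   # last stage: output interface
    out.append(item)
for t in threads:
    t.join()
print(out)  # [1, 3, 5]
```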
The Master/Slave pattern, also called a task farm, consists of a singleton, the Master,
and a replication, the Slave. The master generates requests and passes them to the slaves; a
slave services a request and sends the result back to the master. Since the slave is replicated, multiple requests
can be serviced at a time [1, p. 10].
Master/Slave is an example of a composite pattern: it is created by combining the Singleton
and Replication design patterns [1, p. 10]. The structured scheme of the pattern is shown in figure 4.
Figure 4 - Structured scheme of the Master / Slave pattern
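A minimal sketch of the task farm follows. Using one shared task queue from which idle slaves pull work is an assumed implementation detail; the squaring request is a placeholder.

```python
# Sketch of the Master/Slave (task farm) composite: a Master singleton
# generates requests, and a replicated Slave services them concurrently.
import queue
import threading

tasks, done = queue.Queue(), queue.Queue()

def slave() -> None:
    while True:
        req = tasks.get()
        if req is None:          # sentinel: shut down
            break
        done.put(req ** 2)       # service the request (placeholder)

slaves = [threading.Thread(target=slave) for _ in range(4)]
for s in slaves:
    s.start()

# Master: generate the requests, then gather the results.
n = 8
for i in range(n):
    tasks.put(i)
results = sorted(done.get() for _ in range(n))
for _ in slaves:
    tasks.put(None)
for s in slaves:
    s.join()
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```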
Divide and Conquer represents a tree of singletons, in which a parent process recursively passes
work down to its child processes. When a child finishes the work, it passes the result back to its parent.
Divide and Conquer has two parameters to specify:
- the width of the tree
- the height of the tree (the maximum level of recursion)
The root, or top process, of the tree is the interface process of the design pattern. The behavior of
Divide and Conquer is provided by functions for sending jobs to the child processes
and gathering the results from them [1, p. 11]. The structured scheme of the pattern is shown in figure 5.
Figure 5 - Structured scheme of the Divide and Conquer pattern
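The two parameters named above, tree width and tree height, can be made explicit in a recursive sketch. Summation is a placeholder job, and the sequential recursion stands in for the parent-to-child message exchange.

```python
# Sketch of Divide and Conquer parameterized by the tree's width
# (children per node) and height (maximum level of recursion).
def divide_and_conquer(data: list, width: int = 2, height: int = 3) -> int:
    """Parent splits work among `width` children, up to `height` levels
    deep, then gathers and combines their results."""
    if height == 0 or len(data) <= 1:
        return sum(data)                       # leaf: do the work directly
    chunk = max(1, -(-len(data) // width))     # ceiling division
    children = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # send a job down to each child, gather the partial results back up
    return sum(divide_and_conquer(c, width, height - 1) for c in children)

total = divide_and_conquer(list(range(10)), width=2, height=3)
print(total)  # 45
```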
The Repository pattern is a particular case of the Replication pattern. It is used when it is
necessary to apply completely independent computations, as sets of non-deterministic interactions, to elements
of some centralized, perhaps not even ordered, data structure. Data consistency and preservation are
tasks of the repository; the internal representation or order of the data is not a central issue [2].
Parallelism between components (replicas) is unrestricted: any component may perform a different
operation on a different piece of data at the same time, without any prescribed order. The only restriction is that no
piece of data is accessed at the same time by different components [2]. The structured scheme of the pattern
is shown in figure 6.
Figure 6 - Structured scheme of the Repository pattern
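The access restriction above can be sketched with one lock per element of the central store, so that no element is ever touched by two components at once. The element count, the increment operation, and the random access order are illustrative assumptions.

```python
# Sketch of the Repository pattern: replicas apply independent operations
# to elements of a centralized store in no prescribed order; a per-element
# lock guarantees exclusive access to each piece of data.
import random
import threading

store = {i: 0 for i in range(5)}                 # centralized data structure
locks = {i: threading.Lock() for i in store}     # one lock per element
keys = list(store)

def replica_task(n_ops: int) -> None:
    for _ in range(n_ops):
        key = random.choice(keys)                # non-deterministic order
        with locks[key]:                         # exclusive element access
            store[key] += 1                      # independent computation

threads = [threading.Thread(target=replica_task, args=(100,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
total_updates = sum(store.values())
print(total_updates)  # 400: no update was lost
```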
The Geometric Decomposition pattern is used when a computation can be performed as a
set of semi-independent operations on ordered data. Results cannot be constrained to a
one-way flow among processing stages, but each component (replica) executes a relatively autonomous
computation. Communication between components follows fixed and predictable paths, namely
neighbor to neighbor. Over time, the effects propagate to other areas, extending in all directions [3].
Parallelism is introduced as multiple concurrent components, each applying a set of
operations to a data subset. Components communicate partial results by exchanging data, usually
through communication channels. No data objects are shared directly among components; each one
may access only its own private data subset. A component communicates by sending data objects from its
local space to another component. The operations on each element depend on the partial results of operations in other
components. By using intermediate results from predictable neighbors, the network of elements generally
takes on a geometric shape [3].
The shapes formed by the replicas can be predicted at different stages of the computation. Thus, it is
possible to arrange a physical layout of replicas that minimizes the total communication overhead. There
are widely used classes of high-performance computing systems in which geometric decomposition can
significantly reduce the running time of numerical algorithms.
Figure 7 shows a computer network composed of multiple interacting replicas. During an atomic
communication operation, the replicas form a geometric shape whose form can be exploited to reduce
communication overhead.
Figure 7 - Structured scheme of the Geometric decomposition pattern
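A common instance of geometric decomposition is a grid split into subdomains with neighbor-to-neighbor boundary exchange. The sequential sketch below simulates one Jacobi-style averaging sweep over a 1-D grid; each component owns a private subdomain and reads only ghost cells from its immediate neighbors. The subdomain sizes and initial data are assumptions for illustration.

```python
# Sketch of Geometric Decomposition on a 1-D grid: components own
# private subdomains and exchange only boundary (ghost) values with
# their immediate neighbors at each step.
def jacobi_step(subdomains: list) -> list:
    """One sweep: every point becomes the mean of its two neighbors,
    using ghost cells received from the neighboring components."""
    flat = [x for sub in subdomains for x in sub]   # global view, for ghosts
    new, offset = [], 0
    for sub in subdomains:
        # ghost cells: values held by the left and right neighbors
        left_ghost = flat[offset - 1] if offset > 0 else sub[0]
        right_idx = offset + len(sub)
        right_ghost = flat[right_idx] if right_idx < len(flat) else sub[-1]
        padded = [left_ghost] + sub + [right_ghost]
        new.append([(padded[i - 1] + padded[i + 1]) / 2
                    for i in range(1, len(padded) - 1)])
        offset += len(sub)
    return new

parts = [[0.0, 0.0], [4.0, 4.0], [0.0, 0.0]]   # 3 components, 2 cells each
parts = jacobi_step(parts)
print(parts)  # [[0.0, 2.0], [2.0, 2.0], [2.0, 0.0]]
```

Note how the effect of the hot middle subdomain propagates outward one neighbor per step, matching the "effects propagate to other areas" behavior described above.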
The design patterns presented above are mostly based on medium- and coarse-grain parallel
structures. Such patterns are particularly suited to parallel computing over a processor network.
References:
1. Parallel Skeletons and Framework / Composing Parallel Applications Using Design Patterns. Source: Resources on Parallel Patterns / Pattern Collection / Separation of Concerns in Parallel Patterns.
2. Repositories / Parallel Programming Patterns. Source: http://wing.cs.uiuc.edu/group/patterns/patterns/repositories.html
3. Geometric Decomposition / Parallel Programming Patterns. Source: http://wing.cs.uiuc.edu/group/patterns/patterns/geometric.html