Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also
refers to the ability of a system to support more than one processor and/or the
ability to allocate tasks between them. There are many variations on this basic
theme, and the definition of multiprocessing can vary with context, mostly as a
function of how CPUs are defined (multiple cores on one die, multiple chips in
one package, multiple packages in one system unit, etc.).
Multiprocessing sometimes refers to the
execution of multiple concurrent software processes in a system as opposed to a
single process at any one instant. However, the term multiprogramming is more
appropriate to describe this concept, which is implemented mostly in software,
whereas multiprocessing is more appropriate to describe the use of multiple
hardware CPUs. A system can be both multiprocessing and multiprogramming, only
one of the two, or neither of the two.
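As an illustration of the distinction, the following sketch (in Python, using
the standard multiprocessing module) starts two operating-system processes
that the scheduler may place on separate CPUs. On a single-CPU machine the
same program still runs, but the processes are merely multiprogrammed rather
than executed at the same instant.

    from multiprocessing import Process, cpu_count

    def busy(n):
        # CPU-bound work; on a multiprocessor, separate processes can run
        # on separate CPUs at the same instant.
        total = 0
        for i in range(n):
            total += i * i

    if __name__ == "__main__":
        print("CPUs available:", cpu_count())
        workers = [Process(target=busy, args=(5_000_000,)) for _ in range(2)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()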
Processor symmetry
In a multiprocessing system, all CPUs may
be equal, or some may be reserved for special purposes. A combination of
hardware and operating-system software design considerations determine the
symmetry (or lack thereof) in a given system. For example, hardware or software
considerations may require that only one CPU respond to all hardware
interrupts, whereas all other work in the system may be distributed equally
among CPUs; or execution of kernel-mode code may be restricted to only one
processor (either a specific processor, or only one processor at a time),
whereas user-mode code may be executed in any combination of processors. Multiprocessing
systems are often easier to design if such restrictions are imposed, but they
tend to be less efficient than systems in which all CPUs are utilized equally.
Systems that treat all CPUs equally are
called symmetric multiprocessing (SMP) systems. In systems where not all CPUs
are equal, system resources may be divided in a number of ways, including
asymmetric multiprocessing (ASMP), non-uniform memory access (NUMA)
multiprocessing, and clustered multiprocessing.
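Where such asymmetry is wanted at the software level, many operating systems
expose processor-affinity controls. The sketch below assumes a Linux system
(os.sched_setaffinity and os.sched_getaffinity are Linux-only interfaces) and
restricts the current process to CPU 0, leaving the remaining CPUs free for
other work:

    import os

    if hasattr(os, "sched_getaffinity"):            # Linux-only interfaces
        print("allowed CPUs:", os.sched_getaffinity(0))
        # Pin this process to CPU 0; the other CPUs remain free for other tasks.
        os.sched_setaffinity(0, {0})
        print("now restricted to:", os.sched_getaffinity(0))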
Instruction and data streams
In multiprocessing, the processors can be
used to execute a single sequence of instructions in multiple contexts
(single-instruction, multiple-data or SIMD, often used in vector processing),
multiple sequences of instructions in a single context (multiple-instruction,
single-data or MISD, used for redundancy in fail-safe systems and sometimes
applied to describe pipelined processors or hyperthreading), or multiple
sequences of instructions in multiple contexts (multiple-instruction,
multiple-data or MIMD).
Processor coupling
Tightly-coupled multiprocessor systems
contain multiple CPUs that are connected at the bus level. These CPUs may have
access to a central shared memory (SMP or UMA), or may participate in a memory
hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an
example of a high-end SMP system. Intel Xeon processors dominated the
multiprocessor market for business PCs and were effectively the only x86 option
until the release of AMD's Opteron range of processors in 2003. Both ranges of processors
had their own onboard cache but provided access to shared memory; the Xeon
processors via a common pipe and the Opteron processors via independent
pathways to the system RAM.
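A rough software analogy to shared-memory coupling, assuming Python 3.8 or
later, is the multiprocessing.shared_memory module: two processes attach to
the same block of RAM, and each sees the other's writes directly rather than
exchanging messages.

    from multiprocessing import Process, shared_memory

    def child(name):
        # Attach to the block created by the parent and modify it in place.
        shm = shared_memory.SharedMemory(name=name)
        shm.buf[0] = 42
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=8)
        p = Process(target=child, args=(shm.name,))
        p.start()
        p.join()
        print(shm.buf[0])     # prints 42: the child's write is visible here
        shm.close()
        shm.unlink()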
Loosely-coupled multiprocessor systems
(often referred to as clusters) are based on multiple standalone single or dual
processor commodity computers interconnected via a high speed communication
system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a
loosely-coupled system.
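Programs for such clusters typically communicate by message passing rather
than shared memory. A minimal sketch, assuming an MPI installation and the
mpi4py package, gathers one value from each process to a root process; it
would be launched with something like mpiexec -n 4 python script.py:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()          # this process's position in the job

    # Each process computes its own value; the root collects them all.
    result = rank * rank
    gathered = comm.gather(result, root=0)
    if rank == 0:
        print(gathered)             # e.g. [0, 1, 4, 9] with four processes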
Tightly-coupled systems perform better
and are physically smaller than loosely-coupled systems, but have historically
required greater initial investments and may depreciate rapidly.
Power consumption is also a
consideration. Tightly-coupled systems tend to be much more energy efficient
than clusters. This is because considerable economies can be realized by
designing components to work together from the beginning in tightly-coupled
systems, whereas loosely-coupled systems use components that were not
necessarily intended specifically for use in such systems.
SIMD multiprocessing is well suited to
parallel or vector processing, in which a very large set of data can be divided
into parts that are individually subjected to identical but independent
operations. A single instruction stream directs the operation of multiple
processing units to perform the same manipulations simultaneously on
potentially large amounts of data.
For certain types of computing
applications, this type of architecture can produce enormous increases in
performance, in terms of the elapsed time required to complete a given task. However,
a drawback to this architecture is that a large part of the system falls idle
when programs or system tasks are executed that cannot be divided into units
that can be processed in parallel.
Additionally, programs must be carefully
and specially written to take maximum advantage of the architecture, and often
special optimizing compilers designed to produce code specifically for this
environment must be used. Some compilers in this category provide special constructs
or extensions to allow programmers to directly specify operations to be
performed in parallel (e.g., DO FOR ALL statements in the version of FORTRAN
used on the ILLIAC IV, which was a SIMD multiprocessing supercomputer).
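A loose modern analogue of such data-parallel constructs, offered here only as
an illustration rather than direct control of SIMD hardware, is
array-at-a-time code in NumPy: a single expression applies the same operation
to every element, and the library may use the CPU's vector instructions where
available.

    import numpy as np

    a = np.arange(1_000_000, dtype=np.float64)
    b = np.arange(1_000_000, dtype=np.float64)

    # One statement, applied element-wise to the whole data set; this plays
    # roughly the role of a "DO FOR ALL" over the arrays.
    c = 2.0 * a + b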
SIMD multiprocessing finds wide use in
certain domains such as computer simulation, but is of little use in
general-purpose desktop and business computing environments.
MISD multiprocessing offers mainly the
advantage of redundancy, since multiple processing units
perform the same tasks on the same data, reducing the chances of incorrect
results if one of the units fails. MISD architectures may involve comparisons
between processing units to detect failures. Apart from its redundant,
fail-safe character, this type of multiprocessing has few advantages and is
very expensive; it does not improve performance, although it can be
implemented in a way that is transparent to software.
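The redundancy idea can be sketched in software (this is only an analogy to
MISD hardware): run the same computation on the same data several times, in
separate processes, and accept the result only if the replicas agree.

    from collections import Counter
    from concurrent.futures import ProcessPoolExecutor

    def compute(x):
        # The critical computation, replicated for redundancy.
        return x * x + 1

    if __name__ == "__main__":
        x = 7
        with ProcessPoolExecutor(max_workers=3) as pool:
            results = list(pool.map(compute, [x] * 3))
        value, votes = Counter(results).most_common(1)[0]
        if votes < 2:                       # replicas disagree: a fault
            raise RuntimeError("no majority among %r" % (results,))
        print(value)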
MIMD multiprocessing architecture is
suitable for a wide variety of tasks in which completely independent and
parallel execution of instructions touching different sets of data can be put
to productive use. For this reason, and because it is easy to implement, MIMD
predominates in multiprocessing.
Processing is divided into multiple threads,
each with its own hardware processor state, within a single software-defined
process or within multiple processes. Insofar as a system has multiple threads
awaiting dispatch, this architecture makes good use of hardware resources.
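A minimal MIMD-style sketch in Python: independent tasks, each with its own
data, are dispatched to a pool of worker processes (one per CPU by default),
and each worker follows its own instruction stream.

    from concurrent.futures import ProcessPoolExecutor

    def work(n):
        # Each worker executes this independently on its own input.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [100_000, 200_000, 300_000, 400_000]
        with ProcessPoolExecutor() as pool:     # one worker per CPU by default
            results = list(pool.map(work, inputs))
        print(results)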
Because threads may contend for shared data, MIMD systems must manage
synchronization in software. Similar conflicts can arise at the hardware level
between processors (cache contention, for example), and must usually be
resolved in hardware, or with a combination of software and hardware (e.g.,
cache-clear instructions).