
P1: OTE/SPH P2: OTE

SVNY285-Loo October 18, 2006 7:10

Parallel Sorting Algorithms for MIMD with Shared Memory 237

Figure 17.2. Schematic diagram of the calculation of pivots. [Figure: sub-files Y1, Y2, Y3, ..., Yp delimited by Pivot(1), Pivot(2), ..., Pivot(p).]

where Max is the maximum value of the keys, Min is the minimum value of the keys, and n is the number of processors in the system. A schematic diagram is presented in Fig. 17.2. Yi is the i-th sub-file produced by phase 1.
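The pivot formula itself is not reproduced in this excerpt; for a uniform key distribution it presumably divides the key range [Min, Max] evenly among the processors. A minimal sketch under that assumption (the function name and the interior-pivot convention are illustrative, not from the book):

```python
def uniform_pivots(min_key, max_key, n):
    """Split the key range [min_key, max_key] into n equal sub-ranges.

    Assumed form: Pivot(i) = min_key + i * (max_key - min_key) / n,
    for i = 1..n-1; keys below Pivot(1) go to the first sub-file,
    keys between Pivot(1) and Pivot(2) to the second, and so on.
    """
    width = (max_key - min_key) / n
    return [min_key + i * width for i in range(1, n)]

# e.g. keys in [0, 100) split over 5 processors
print(uniform_pivots(0, 100, 5))  # [20.0, 40.0, 60.0, 80.0]
```

With uniformly distributed keys, these evenly spaced pivots give each processor approximately n/p keys, which is what keeps the load balanced.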

This formula can still be used for a non-uniform distribution. However, preprocessing is required to map the non-uniform distribution to a uniform distribution. Before the sorting starts, a cumulative distribution table should be built using the method specified in many standard statistics textbooks. The pivots used in this algorithm can then be found by mapping into the cumulative distribution table. A schematic diagram of the mapping is presented in Fig. 17.3. In this figure, we split the file equally over five processors. We maintain the ‘load balance’ by allocating approximately equal numbers of keys to each processor.
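The mapping through the cumulative distribution table can be sketched as follows. This is an illustrative reconstruction, not the book's code: a sorted sample of the keys serves as an empirical cumulative distribution table, and Pivot(i) is taken as the key value at cumulative probability i/p, so that each of the p processors receives roughly an equal share of the keys.

```python
def cdf_pivots(sample, p):
    """Choose p-1 pivots from a key sample so that each of the p
    processors receives an approximately equal share of the keys.

    The sorted sample acts as an empirical cumulative distribution
    table: the key at rank k has cumulative probability k / len(sample).
    Pivot(i) is the sampled key value at cumulative probability i / p.
    """
    s = sorted(sample)
    m = len(s)
    return [s[(i * m) // p] for i in range(1, p)]

# Skewed (non-uniform) keys: mostly small values, a few large ones
keys = [1, 2, 2, 3, 3, 3, 4, 5, 8, 20, 50, 90]
print(cdf_pivots(keys, 4))  # [3, 4, 20]
```

Note how the pivots crowd into the dense part of the distribution, exactly as the unequal key ranges of processors 1–5 in Fig. 17.3 suggest.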

17.4.3.2 Performance

Additional storage of n keys is required only at phase 2, and thus the storage requirement is 2n. However, the storage of the old array is released after this phase.

This algorithm has the following properties:

- Better than the parallel Quicksort algorithm on average.
- Easily applied to different kinds of distribution patterns of the input data file.

Figure 17.3. Mapping of non-uniform distribution. [Figure: two plots of cumulative distribution (0 to 1) against key value, one for the uniform distribution and one for the non-uniform distribution, with the key ranges of processors 1–5 marked.]


- Eliminates the low degree of parallelism at the beginning of the parallel Quicksort algorithm.
- Has a good speedup.

17.5 Parallel Sorting Algorithms for MIMD

with Distributed Memory

Since the final sorted file is distributed over different local memory/computers, a

new definition of sorting is required. The sorting model is briefly described in the

next section.

17.5.1 Definition of Distributed Sorting

- A large file is physically distributed over the local memories of p processors.
- The n records are approximately evenly distributed over the p processors (i.e., n/p records for each processor).
- No processor has enough resources to collect all the keys and sort the file locally (or to do so efficiently).

We denote the elements in the file as X(i, j), where i is the processor number and j is an integer from 1 to n/p. The objective of sorting is to rearrange the X(i, j) elements so that they satisfy

X(i, 1) < X(i, 2) < ··· < X(i, n/p)

and

X(k, n/p) < X(k + 1, 1)

where k is an integer from 1 to p − 1.
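The two conditions above can be checked mechanically. A small sketch (the function name and list-of-lists representation are illustrative assumptions, not from the book) that verifies whether p locally held lists satisfy this distributed sorting definition:

```python
def is_distributed_sorted(parts):
    """Check the distributed sorting definition for p lists of keys.

    parts[i] holds the keys X(i+1, 1..n/p) of processor i+1.
    Condition 1: each processor's local list is in ascending order.
    Condition 2: the last key of processor k is smaller than the
    first key of processor k+1, for k = 1..p-1.
    """
    for part in parts:  # condition 1: locally sorted
        if any(a >= b for a, b in zip(part, part[1:])):
            return False
    for left, right in zip(parts, parts[1:]):  # condition 2: boundaries
        if left and right and left[-1] >= right[0]:
            return False
    return True

print(is_distributed_sorted([[1, 3], [4, 7], [9, 12]]))  # True
print(is_distributed_sorted([[1, 9], [4, 7], [9, 12]]))  # False
```

Together the two conditions mean that reading the lists of processors 1, 2, ..., p in order yields one globally sorted file, without ever gathering it in one place.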

Communication time is usually much longer than computation time in the execution of algorithms for distributed memory environments. The communication time is thus a major criterion for measuring the performance of a distributed algorithm. Therefore, reducing the number of communication messages and the volume of transmissions has become the focus of research (Dechter and Kleinrock, 1986; Huang and Kleinrock, 1990; Loo and Ng, 1991; Loo et al., 1995; Wegner, 1982).

17.6 Conclusions

The design of any parallel algorithm will be strongly influenced by the architecture of the computer on which it is to be run. Different priorities apply in different computer systems.

In MIMD with shared memory, particular attention must be paid to ensuring that the processors are fully utilized. Locking and synchronization overheads should be reduced in the design.


In MIMD with distributed memory, the time for processing the data items is

usually much less than the communication time. Thus, the communication time is

the most important factor for these kinds of systems.

As we can see from the aforementioned discussion, parallel sorting algorithms

are more complicated than serial algorithms. In addition to the factors considered

in serial algorithms, additional factors must be addressed in the design process.

Those factors are summarized as follows:

- Number of processors required.
- Speedup, scalability and efficiency.
- Idle time of processors and load balancing between processors.
- Memory contention or conflict.
- Task creation and allocation to processors.
- Locking, synchronization and task scheduling method.
