PREFACE
"If you build it, they will come."
And so we built them. Multiprocessor workstations, massively parallel supercomputers, a cluster in
every department ... and they haven't come. Programmers haven't come to program these wonderful
machines. Oh, a few programmers in love with the challenge have shown that most types of problems
can be force-fit onto parallel computers, but general programmers, especially professional
programmers who "have lives", ignore parallel computers.
And they do so at their own peril. Parallel computers are going mainstream. Multithreaded
microprocessors, multicore CPUs, multiprocessor PCs, clusters, parallel game consoles ... parallel
computers are taking over the world of computing. The computer industry is ready to flood the market
with hardware that will only run at full speed with parallel programs. But who will write these
programs?
This is an old problem. Even in the late 1980s, when the "killer micros" started their assault on
traditional vector supercomputers, we worried endlessly about how to attract normal programmers.
We tried everything we could think of: high-level hardware abstractions, implicitly parallel
programming languages, parallel language extensions, and portable message-passing libraries. But
after many years of hard work, the fact of the matter is that "they" didn't come. The overwhelming
majority of programmers will not invest the effort to write parallel software.
A common view is that you can't teach old programmers new tricks, so the problem will not be solved
until the old programmers fade away and a new generation takes over.
But we don't buy into that defeatist attitude. Programmers have shown a remarkable ability to adopt
new software technologies over the years. Look at how many old Fortran programmers are now
writing elegant Java programs with sophisticated object-oriented designs. The problem isn't with old
programmers. The problem is with old parallel computing experts and the way they've tried to create a
pool of capable parallel programmers.
And that's where this book comes in. We want to capture the essence of how expert parallel
programmers think about parallel algorithms and communicate that essential understanding in a way
professional programmers can readily master. The technology we've adopted to accomplish this task is
a pattern language. We made this choice not because we started the project as devotees of design
patterns looking for a new field to conquer, but because patterns have been shown to work in ways that
carry over to parallel programming. For example, patterns have been very effective in the field of
object-oriented design, where they have provided a common language experts can use to talk about the
elements of design and have proved extremely effective at helping programmers master object-oriented
design.
This book contains our pattern language for parallel programming. The book opens with a couple of
chapters to introduce the key concepts in parallel computing. These chapters focus on the parallel
computing concepts and jargon used in the pattern language rather than attempting an exhaustive
introduction to the field.
The pattern language itself is presented in four parts corresponding to the four phases of creating a
parallel program:
* Finding Concurrency. The programmer works in the problem domain to identify the available
concurrency and expose it for use in the algorithm design.
* Algorithm Structure. The programmer works with high-level structures for organizing a parallel
algorithm.
* Supporting Structures. We shift from algorithms to source code and consider how the parallel
program will be organized and the techniques used to manage shared data.
* Implementation Mechanisms. The final step is to look at specific software constructs for
implementing a parallel program.
The patterns making up these four design spaces are tightly linked. You start at the top (Finding
Concurrency), work through the patterns, and by the time you get to the bottom (Implementation
Mechanisms), you will have a detailed design for your parallel program.
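To make that progression concrete, here is a minimal sketch of our own (not an example drawn from
the book): summing an array in C, with comments mapping each design space onto the code and with
OpenMP serving as the implementation mechanism.

#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];   /* static so the large array is not on the stack */
    double sum = 0.0;
    int i;

    for (i = 0; i < N; i++)   /* set up some data to sum */
        a[i] = 1.0;

    /* Finding Concurrency: every a[i] can be added independently.
     * Algorithm Structure: organize the work as a loop over the data.
     * Supporting Structures: a parallel loop, with a reduction to
     *     combine the per-thread partial sums safely.
     * Implementation Mechanisms: OpenMP's parallel for and reduction. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    return 0;
}

Compiled with an OpenMP-aware compiler (for example, gcc -fopenmp), the loop runs across all
available threads; without the flag, the pragma is ignored and the same code runs sequentially.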
If the goal is a parallel program, however, you need more than just a parallel algorithm. You also need
a programming environment and a notation for expressing the concurrency within the program's
source code. Programmers used to be confronted by a large and confusing array of parallel
programming environments. Fortunately, over the years the parallel programming community has
converged around three programming environments.
* OpenMP. A simple language extension to C, C++, or Fortran for writing parallel programs on
shared-memory computers.
* MPI. A message-passing library used on clusters and other distributed-memory computers.
* Java. An object-oriented programming language with language features that support parallel
programming on shared-memory computers and standard class libraries that support distributed
computing.
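As a small taste of the message-passing style ahead of the appendixes, here is another sketch of our
own (again, not an example from the book) in which every process computes a partial value and
MPI_Reduce combines them on process 0.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count    */

    /* Each process contributes one partial value; here, simply its rank. */
    double local = (double) rank;
    double total = 0.0;

    /* Combine the partial results onto process 0 with a global sum. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total across %d processes = %f\n", size, total);

    MPI_Finalize();
    return 0;
}

Launched with a typical MPI runner (for example, mpirun -np 4 ./a.out), every process executes the
same program; only its rank, and hence its data, differs. That single-program, multiple-data style is
the model the MPI appendix develops.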
Many readers will already be familiar with one or more of these programming notations, but for
readers completely new to parallel computing, we've included a discussion of these programming
environments in the appendixes.
In closing, we have been working for many years on this pattern language. Presenting it as a book so
people can start using it is an exciting development for us. But we don't see this as the end of this
effort. We expect that others will have their own ideas about new and better patterns for parallel
programming. We've assuredly missed some important features that really belong in this pattern
language. We embrace change and look forward to engaging with the larger parallel computing
community to iterate on this language. Over time, we'll update and improve the pattern language until
it truly represents the consensus view of the parallel programming community. Then our real work
will begin—using the pattern language to guide the creation of better parallel programming
environments and helping people to use these technologies to write parallel software. We won't rest
until the day sequential software is rare.
ACKNOWLEDGMENTS
We started working together on this pattern language in 1998. It's been a long and twisted road,
starting with a vague idea about a new way to think about parallel algorithms and finishing with this
book. We couldn't have done this without a great deal of help.
Mani Chandy, who thought we would make a good team, introduced Tim to Beverly and Berna. The
National Science Foundation, Intel Corp., and Trinity University have supported this research at
various times over the years. Help with the patterns themselves came from the people at the Pattern
Languages of Programs (PLoP) workshops held in Illinois each summer. The format of these
workshops and the resulting review process was challenging and sometimes difficult, but without
them we would have never finished this pattern language. We would also like to thank the reviewers
who carefully read early manuscripts and pointed out countless errors and ways to improve the book.
Finally, we thank our families. Writing a book is hard on the authors, but that is to be expected. What
we didn't fully appreciate was how hard it would be on our families. We are grateful to Beverly's
family (Daniel and Steve), Tim's family (Noah, August, and Martha), and Berna's family (Billie) for
the sacrifices they've made to support this project.
— Tim Mattson, Olympia, Washington, April 2004
— Beverly Sanders, Gainesville, Florida, April 2004
— Berna Massingill, San Antonio, Texas, April 2004
CONTENTS
Chapter 1. A Pattern Language for Parallel Programming
  Section 1.1. INTRODUCTION
  Section 1.2. PARALLEL PROGRAMMING
  Section 1.3. DESIGN PATTERNS AND PATTERN LANGUAGES
  Section 1.4. A PATTERN LANGUAGE FOR PARALLEL PROGRAMMING
Chapter 2. Background and Jargon of Parallel Computing
  Section 2.1. CONCURRENCY IN PARALLEL PROGRAMS VERSUS OPERATING SYSTEMS
  Section 2.2. PARALLEL ARCHITECTURES: A BRIEF INTRODUCTION
  Section 2.3. PARALLEL PROGRAMMING ENVIRONMENTS
  Section 2.4. THE JARGON OF PARALLEL COMPUTING
  Section 2.5. A QUANTITATIVE LOOK AT PARALLEL COMPUTATION
  Section 2.6. COMMUNICATION
  Section 2.7. SUMMARY
Chapter 3. The Finding Concurrency Design Space
  Section 3.1. ABOUT THE DESIGN SPACE
  Section 3.2. THE TASK DECOMPOSITION PATTERN
  Section 3.3. THE DATA DECOMPOSITION PATTERN
  Section 3.4. THE GROUP TASKS PATTERN
  Section 3.5. THE ORDER TASKS PATTERN
  Section 3.6. THE DATA SHARING PATTERN
  Section 3.7. THE DESIGN EVALUATION PATTERN
  Section 3.8. SUMMARY
Chapter 4. The Algorithm Structure Design Space
  Section 4.1. INTRODUCTION
  Section 4.2. CHOOSING AN ALGORITHM STRUCTURE PATTERN
  Section 4.3. EXAMPLES