Take advantage of the power of parallel computers with this comprehensive introduction to the design, implementation, and analysis of parallel algorithms. The broad, balanced coverage of core topics includes sorting and graph algorithms, discrete optimization techniques, and scientific computing applications. The authors focus on parallel algorithms for realistic machine models and avoid architectures that are unrealizable in practice. Numerous examples and diagrams illustrate potentially difficult subjects, each chapter concludes with an extensive list of bibliographic references, and problems of varying difficulty challenge readers at different levels. Introduction to Parallel Computing is an ideal tool for students and professionals who want insight into problem solving with parallel computers.

Features:
- Presents parallel algorithms in terms of a small set of basic data communication operations, greatly simplifying their design and understanding (a minimal sketch of one such operation follows this description).
- Emphasizes practical issues of performance, efficiency, and scalability.
- Provides a self-contained discussion of the basic concepts of parallel computer architectures.
- Covers algorithms for scientific computation, such as dense and sparse matrix computations, linear system solving, finite elements, and the FFT.
- Discusses algorithms for combinatorial optimization, including branch-and-bound, heuristic search, unstructured tree search, and dynamic programming.
- Incorporates various parallel programming models and languages, with illustrative examples of parallel programs for commercially available computers.
- Contains extensive figures and examples that illustrate the workings of algorithms on different architectures.
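To make the idea of "basic data communication operations" concrete, the sketch below shows one such operation, an all-to-one reduction, written against the standard MPI interface. This is a minimal illustrative example assumed for this description, not code taken from the book; the book itself develops such operations across a range of architectures and programming models.

```c
/* Minimal, hypothetical sketch of an all-to-one reduction using MPI.
 * Each process contributes a partial value; MPI_Reduce combines the
 * values with a sum at process 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process computes a local partial result (here, just its rank). */
    int local = rank;
    int global = 0;

    /* All-to-one reduction: sum the local values onto process 0. */
    MPI_Reduce(&local, &global, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* For p processes the expected result is p*(p-1)/2. */
        printf("sum of ranks across %d processes = %d\n", size, global);
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and launched with, for example, mpirun -np 4, the program prints the sum of all process ranks at process 0.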
Audience: Junior/Senior/Graduate Computer Science and Computer Engineering majors; Professional/Reference
Courses: Distributed Computing, Parallel Programming, Parallel Algorithms
Prerequisites: Operating Systems and Analysis of Algorithms