An Introduction to Parallel Programming, Second Edition presents a tried-and-true tutorial approach that shows students how to develop effective parallel programs with MPI, Pthreads and OpenMP.
As the first undergraduate text to directly address compiling and running parallel programs on multi-core and cluster architectures, this second edition carries forward its clear explanations for designing, debugging, and evaluating the performance of distributed and shared-memory programs, while adding coverage of accelerators via new content on GPU programming and heterogeneous programming. New and improved user-friendly exercises teach students how to compile, run, and modify example programs.
Edition
2nd edition
Language
English
Publishing group
Elsevier Science & Technology
Target audience
For upper secondary school and university study
Students in undergraduate parallel programming or parallel computing courses designed for the computer science major or as a service course to other departments; professionals with no background in parallel computing.
Product note
Paperback (softcover)
Perfect binding
Illustrations
65 illustrations
Dimensions
Height: 232 mm
Width: 189 mm
Thickness: 19 mm
ISBN-13
978-0-12-804605-0 (9780128046050)
Peter Pacheco received a PhD in mathematics from Florida State University. After completing graduate school, he became one of the first professors in UCLA's "Program in Computing," which teaches basic computer science to students at the College of Letters and Sciences there. Since leaving UCLA, he has been on the faculty of the University of San Francisco. At USF, Peter has served as chair of the computer science department and is currently chair of the mathematics department. His research is in parallel scientific computing. He has worked on the development of parallel software for circuit simulation, speech recognition, and the simulation of large networks of biologically accurate neurons. Peter has been teaching parallel computing at both the undergraduate and graduate levels for nearly twenty years. He is the author of Parallel Programming with MPI, published by Morgan Kaufmann Publishers.

Matthew Malensek is an Assistant Professor in the Department of Computer Science at the University of San Francisco. His research interests center on big data, parallel/distributed systems, and cloud computing. This includes systems approaches for processing and managing data at scale in a variety of domains, including fog computing and Internet of Things (IoT) devices.
Authors
Peter Pacheco, University of San Francisco, USA
Matthew Malensek, Assistant Professor, Department of Computer Science, University of San Francisco, CA, USA
1. Why parallel computing
2. Parallel hardware and parallel software
3. Distributed memory programming with MPI
4. Shared-memory programming with Pthreads
5. Shared-memory programming with OpenMP
6. GPU programming with CUDA
7. Parallel program development
8. Where to go from here