Shared Memory Application Programming

Concepts and Strategies in Multicore Application Programming
 
 
Morgan Kaufmann (publisher)
  • 1st edition
  • published November 6, 2015
  • 556 pages
 
E-book | ePUB with Adobe DRM | System requirements
E-book | PDF with Adobe DRM | System requirements
978-0-12-803820-8 (ISBN)
 

Shared Memory Application Programming presents the key concepts and applications of parallel programming in an accessible and engaging style, aimed at developers across many domains. Multithreaded programming is today a core technology, underlying software development projects in virtually every branch of applied computer science. This book guides readers toward a solid understanding of threaded programming and introduces two popular platforms for multicore development: OpenMP and Intel Threading Building Blocks (TBB). Author Victor Alessandrini leverages his rich experience to explain each platform's design strategies, analyzing the focus and strengths underlying their often complementary capabilities, as well as their interoperability.

The book is divided into two parts: the first develops the essential concepts of thread management and synchronization, discussing the way they are implemented in the native multithreading libraries (Windows threads, Pthreads) as well as in the C++11 threads standard. The second provides an in-depth discussion of TBB and OpenMP, including the latest OpenMP 4.0 extensions, to ensure readers' skills are fully up to date. The focus progressively shifts from traditional thread parallelism to the task parallelism deployed by current programming environments. Several chapters include examples drawn from a variety of disciplines, including molecular dynamics and image processing, with full source code and a software library incorporating a number of utilities that readers can adapt into their own projects.
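To give a flavor of the two styles the description contrasts, here is a minimal sketch (not code from the book; the function names and data are illustrative only) that adds two vectors first with explicit C++11 thread management and then with an OpenMP work-sharing directive. Compile with, e.g., g++ -std=c++11 -fopenmp.

#include <cstddef>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

// Explicit threading: each std::thread adds its own half of the vectors.
void add_range(const std::vector<double>& a, const std::vector<double>& b,
               std::vector<double>& c, std::size_t begin, std::size_t end)
{
    for (std::size_t i = begin; i < end; ++i) c[i] = a[i] + b[i];
}

int main()
{
    const std::size_t n = 1000000;
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);

    // Part 1 style: create and join the worker threads by hand (C++11 threads).
    std::thread t1(add_range, std::cref(a), std::cref(b), std::ref(c), std::size_t(0), n / 2);
    std::thread t2(add_range, std::cref(a), std::cref(b), std::ref(c), n / 2, n);
    t1.join();
    t2.join();

    // Part 2 style: the same loop parallelized by an OpenMP directive;
    // the runtime creates and manages the worker threads.
    #pragma omp parallel for
    for (std::size_t i = 0; i < n; ++i) c[i] = a[i] + b[i];

    std::cout << "c[0] = " << c[0] << std::endl;   // expect 3.0
    return 0;
}

Without the -fopenmp flag the pragma is simply ignored and the loop runs sequentially, which illustrates the incremental, directive-based approach discussed in the second part of the book.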


  • Introduces threading and multicore programming, teaching modern coding strategies to developers in applied computing
  • Leverages author Victor Alessandrini's rich experience to explain each platform's design strategies, analyzing the focus and strengths underlying their often complementary capabilities, as well as their interoperability
  • Includes complete, up-to-date discussions of OpenMP 4.0 and TBB
  • Based on the author's training sessions, including source code and software libraries that can be repurposed


After obtaining a PhD in theoretical physics in Argentina, where he was born, he spent several years as a visiting scientist working in theoretical particle physics at research laboratories in the USA and Europe, in particular at the CERN theory division. In 1978, he was appointed full professor at the University of Paris XI in Orsay, France. His interests shifted to computational science in the early 1990s, when he became the founding director of the IDRIS supercomputing center in Orsay, which he directed until 2009. From 2004 to 2009, he coordinated the DEISA European supercomputing infrastructure, a consortium of national supercomputing centers that pioneered the deployment of high-performance computing services at the continental scale. He is currently emeritus research director at the "Maison de la Simulation", a CEA-CNRS-INRIA-University research laboratory providing high-level support to HPC. In 2011, he was decorated "Chevalier de l'Ordre National du Mérite" by the French Republic.
  • English
  • USA
Elsevier Science
  • 9.75 MB
978-0-12-803820-8 (9780128038208)
0-12-803820-9 (0128038209)
  • Front Cover
  • Shared Memory Application Programming: Concepts and strategies in multicore application programming
  • Copyright
  • Contents
  • Preface
  • Pedagogical Objectives
  • Programming Environments
  • Book Organization
  • Biography
  • Acknowledgments
  • Chapter 1: Introduction and Overview
  • 1.1 Processes and Threads
  • 1.2 Overview of Computing Platforms
  • 1.2.1 Shared Memory Multiprocessor Systems
  • 1.2.2 Distributed Memory Multiprocessor Systems
  • 1.2.3 Multicore Evolution
  • 1.3 Memory System of Computing Platforms
  • 1.3.1 Reading Data from Memory
  • 1.3.2 Writing Data to Memory
  • 1.4 Parallel Processing Inside Cores
  • 1.4.1 Hyperthreading
  • 1.4.2 Vectorization
  • 1.5 External Computational Devices
  • 1.5.1 GPUs
  • 1.5.2 Intel Xeon Phi
  • 1.6 Final Comments
  • Chapter 2: Introducing Threads
  • 2.1 Applications and Processes
  • 2.1.1 Role of the Stack
  • 2.1.2 Multitasking
  • 2.2 Multithreaded Processes
  • 2.2.1 Launching Threads
  • 2.2.2 Memory Organization of a Multithreaded Process
  • 2.2.3 Threads as Lightweight Processes
  • 2.2.4 How can Several Threads Call the Same Function?
  • 2.2.5 Comment About Hyperthreading
  • 2.3 Programming and Execution Models
  • 2.3.1 Thread Life Cycles
  • 2.4 Benefits of Concurrent Programming
  • Chapter 3: Creating and Running Threads
  • 3.1 Introduction
  • 3.2 Overview of Basic Libraries
  • 3.3 Overview of Basic Thread Management
  • 3.4 Using Posix Threads
  • 3.4.1 Pthreads Error Reporting
  • 3.4.2 Pthreads Data Types
  • 3.4.3 Thread Function
  • 3.4.4 Basic Thread Management Interface
  • 3.4.5 Detachment State: Joining Threads
  • 3.4.6 Simple Examples of Thread Management
  • 3.5 Using Windows Threads
  • 3.5.1 Creating Windows Threads
  • 3.5.2 Joining Windows Threads
  • 3.5.3 Windows Examples of Thread Management
  • 3.6 C++11 Thread Library
  • 3.6.1 C++11 Thread Headers
  • 3.6.2 C++11 Error Reporting
  • 3.6.3 C++11 Data Types
  • 3.6.4 Creating and Running C++11 Threads
  • 3.6.5 Examples of C++11 Thread Management
  • 3.7 SPool Utility
  • 3.7.1 Under the SPool Hood
  • 3.8 SPool Examples
  • 3.8.1 Addition of Two Long Vectors
  • 3.8.2 Monte Carlo Computation of π
  • 3.8.3 Thread Safety Issue
  • 3.9 First Look at OpenMP
  • 3.9.1 Example: Computing the Area Under a Curve
  • 3.10 Database Search Example
  • 3.11 Conclusions
  • 3.12 Annex: Coding C++11 Time Durations
  • Chapter 4: Thread-Safe Programming
  • 4.1 Introduction
  • 4.1.1 First Example: The Monte Carlo π Code
  • 4.2 Some Library Functions are not Thread Safe
  • 4.3 Dealing with Random Number Generators
  • 4.3.1 Using Stateless Generators
  • 4.3.2 Using Local C++ Objects
  • 4.4 Thread Local Storage Services
  • 4.4.1 C++11 thread_local Keyword
  • 4.4.2 OpenMP threadprivate Directive
  • 4.4.3 Windows TLS Services
  • 4.4.4 Thread Local Classes in TBB
  • 4.4.5 Thread Local Storage in Pthreads
  • 4.5 Second Example: A Gaussian Random Generator
  • 4.6 Comments on Thread Local Storage
  • 4.7 Conclusion
  • Chapter 5: Concurrent Access to Shared Data
  • 5.1 First Comments on Thread Synchronization
  • 5.2 Need for Mutual Exclusion Among Threads
  • 5.3 Different Kinds of Mutex Flavors
  • 5.4 Pthreads Mutual Exclusion
  • 5.4.1 Mutex-Spin-Lock Programming Interface
  • 5.4.2 Simple Example: Scalar Product of Two Vectors
  • 5.4.3 First, Very Inefficient Version
  • 5.4.4 Another Inefficient Version Using Spin Locks
  • 5.4.5 Correct Approach
  • 5.5 Other Simple Examples
  • 5.5.1 Reduction Utility
  • 5.5.2 SafeCout: Ordering Multithreaded Output to stdout
  • 5.6 Windows Mutual Exclusion
  • 5.7 OpenMP Mutual Exclusion
  • 5.8 C++11 Mutual Exclusion
  • 5.8.1 Scoped Lock Class Templates
  • Why is another scoped locking class needed?
  • 5.9 TBB Mutual Exclusion
  • 5.9.1 Mutex Classes
  • 5.9.2 Scoped_lock Internal Classes
  • 5.10 First Look at Atomic Operations
  • 5.10.1 Atomic Operations in OpenMP
  • AtomicTest.C example in OpenMP
  • 5.11 Container Thread Safety
  • 5.11.1 Concurrent Containers
  • 5.11.2 TBB Concurrent Container Classes
  • 5.12 Comments on Mutual Exclusion Best Practices
  • Compound thread-safe operations are not thread-safe
  • Chapter 6: Event Synchronization
  • 6.1 Idle Waits Versus Spin Waits
  • Idle wait, also called system wait
  • Busy wait, also called spin wait
  • Plan for the rest of the chapter
  • 6.2 Condition Variables in Idle Waits
  • 6.3 Idle Waits in Pthreads
  • 6.3.1 Waiting on a Condition Variable
  • 6.3.2 Wait Protocol
  • Why should the predicate be checked again on return?
  • Why an atomic mutex lock and wait?
  • 6.3.3 Waking Up Waiting Threads
  • 6.4 Windows Condition Variables
  • Handling timed waits.
  • 6.5 C++11 condition_variable Class
  • Another form of the wait() functions
  • 6.6 Examples of Idle Wait
  • 6.6.1 Synchronizing with an IO Thread
  • 6.6.2 Timed Wait in C++11
  • 6.6.3 Barrier Synchronization
  • Problem: this barrier cannot be reused
  • Solution: a reusable barrier
  • Comments
  • 6.7 C++11 Futures and Promises
  • 6.7.1 std::future Class
  • std::async() function
  • Basic std::future member functions
  • 6.7.2 std::promise Class
  • Chapter 7: Cache Coherency and Memory Consistency
  • 7.1 Introduction
  • 7.2 Cache Coherency Issue
  • 7.2.1 False Sharing Problem
  • 7.2.2 Memory Consistency Issue
  • 7.3 What is a Memory Consistency Model?
  • 7.3.1 Sequential Consistency
  • 7.3.2 Problems with Sequential Consistency
  • 7.4 Weak-Ordering Memory Models
  • 7.4.1 Memory Fences
  • 7.4.2 Case of a Mutex
  • 7.4.3 Programming a Busy Wait
  • 7.5 Pthreads Memory Consistency
  • 7.6 OpenMP Memory Consistency
  • 7.6.1 Flush Directive
  • Chapter 8: Atomic Types and Operations
  • 8.1 Introduction
  • 8.2 C++11 std::atomic Class
  • 8.2.1 Member Functions
  • 8.3 Lock-Free Algorithms
  • 8.3.1 Updating an Arbitrary, Global Data Item
  • 8.3.2 AReduction Class
  • 8.4 Synchronizing Thread Operations
  • 8.4.1 Memory Models and Memory Ordering Options
  • 8.4.2 Using the Relaxed Model
  • 8.4.3 Global Order in Sequential Consistency
  • 8.4.4 Comparing Sequential Consistency and Acquire-Release
  • 8.4.5 Consume Instead of Acquire Semantics
  • 8.4.6 Usefulness of memory_order_acq_rel
  • 8.5 Examples of Atomic Synchronizations
  • 8.5.1 SPINLOCK Class
  • 8.5.2 Circular Buffer Class
  • 8.6 TBB atomic Class
  • 8.6.1 Memory Ordering Options
  • 8.7 Windows Atomic Services
  • 8.8 Summary
  • Chapter 9: High-Level Synchronization Tools
  • 9.1 Introduction and Overview
  • 9.2 General Comments on High-Level Synchronization Tools
  • 9.3 Overview of the vath Synchronization Utilities
  • 9.3.1 Code Organization
  • 9.4 Timers
  • 9.4.1 Using a Timer
  • 9.4.2 Comment on Implementation
  • 9.5 Boolean Locks
  • 9.5.1 Using the BLock Class
  • 9.5.2 BLock Examples
  • 9.5.3 Boolean Locks Implementing Spin Waits
  • 9.6 SynchP Template Class
  • 9.6.1 Using SynchP Objects
  • 9.6.2 Comments
  • 9.6.3 SynchP Examples
  • 9.6.4 OpenMP Version: The OSynchP Class
  • 9.7 Idle and Spin Barriers
  • 9.7.1 Using the Barrier Classes
  • 9.7.2 ABarrier Implementation
  • 9.8 Blocking Barriers
  • 9.8.1 Using a Blocking Barrier
  • 9.8.2 First Simple Example
  • 9.8.3 Programming a Simple SPMD Thread Pool
  • How the worker threads are launched
  • How main() drives the worker thread activity
  • How worker threads terminate
  • Full example
  • 9.9 ThQueue Class
  • Design requirements
  • 9.9.1 Using the ThQueue Queue
  • 9.9.2 Comments on the Producer-Consumer Paradigm
  • 9.9.3 Examples: Stopping the Producer-Consumer Operation
  • Several Producers: TQueue1.C
  • Several Consumers: TQueue2.C
  • 9.10 Reader-Writer Locks
  • 9.10.1 Pthreads RWMUTEX
  • 9.10.2 Windows Slim Reader-Writer Locks
  • 9.10.3 TBB Reader-Writer Locks
  • 9.11 RWlock Class
  • 9.11.1 Example: Emulating a Database Search
  • 9.11.2 Example: Accessing a Shared Container
  • Writer threads
  • Reader threads
  • 9.11.3 Comment on Memory Operations
  • 9.12 General Overview of Thread Pools
  • Chapter 10: OpenMP
  • 10.1 Basic Execution Model
  • 10.2 Configuring OpenMP
  • Controlling program execution
  • Controlling parallel regions
  • Controlling the automatic parallelization of loops
  • 10.3 Thread Management and Work-Sharing Directives
  • 10.3.1 Parallel Directive
  • Functional parallel clauses:
  • 10.3.2 Master and Single Directives
  • 10.3.3 Sections and Section Work-Sharing Directives
  • 10.3.4 For Work-Sharing Directive
  • 10.3.5 Task Directive
  • 10.3.6 Data-Sharing Attributes Clauses
  • Comments on data-sharing clauses
  • 10.4 Synchronization Directives
  • 10.4.1 Critical Directive
  • Critical sections can be named
  • Named critical sections versus mutex locking
  • 10.5 Examples of Parallel and Work-Sharing Constructs
  • 10.5.1 Different Styles of Parallel Constructs
  • Adding directives to the listing above
  • A version using parallel for
  • Yet another version of the Monte-Carlo code
  • 10.5.2 Checking ICVs and Tracking Thread Activity
  • Checking ICVs
  • Tracking the mapping of tasks to threads
  • 10.5.3 Parallel Section Examples
  • Behavior of the barrier directive
  • Dispatching an I/O operation
  • Data transfers among tasks
  • 10.6 Task API
  • 10.6.1 Motivations for a Task-Centric Execution Model
  • 10.6.2 Event Synchronizations and Explicit Tasks
  • 10.6.3 Task Directive and Clauses
  • 10.6.4 Task Synchronization
  • 10.6.5 OpenMP 4.0 Taskgroup Parallel Construct
  • 10.7 Task Examples
  • 10.7.1 Parallel Operation on the Elements of a Container
  • 10.7.2 Traversing a Directed Acyclic Graph
  • Description of the parallel context
  • Graph traversal code
  • 10.7.3 Tracking How Tasks Are Scheduled
  • 10.7.4 Barrier and Taskwait Synchronization
  • 10.7.5 Implementing a Barrier Among Tasks
  • 10.7.6 Examples Involving Recursive Algorithms
  • Area under a curve, blocking style
  • Area under a curve, nonblocking style
  • 10.7.7 Parallel Quicksort Algorithm
  • Recursive task function
  • 10.8 Task Best Practices
  • 10.8.1 Locks and Untied Tasks
  • 10.8.2 Threadprivate Storage and Tasks
  • 10.9 Cancellation of Parallel Constructs
  • 10.9.1 Cancel Directive
  • 10.9.2 Checking for Cancellation Requests
  • 10.9.3 Canceling a Parallel Region
  • 10.9.4 Canceling a Taskgroup Region
  • 10.10 Offloading Code Blocks to Accelerators
  • 10.10.1 Target Construct
  • 10.10.2 Target Data Construct
  • 10.10.3 Teams and Distribute Constructs
  • 10.11 Thread Affinity
  • 10.11.1 Spread Policy
  • 10.11.2 Close Policy
  • 10.11.3 Master Policy
  • 10.11.4 How to Activate Hyperthreading
  • 10.12 Vectorization
  • 10.12.1 Simd Directive
  • Simd clauses
  • 10.12.2 Declare simd Directive
  • Declare simd clauses
  • 10.13 Annex: SafeCounter Utility Class
  • Chapter 11: Intel Threading Building Blocks
  • 11.1 Overview
  • 11.2 TBB Content
  • 11.2.1 High-Level Algorithms
  • 11.3 TBB Initialization
  • 11.4 Operation of the High-Level Algorithms
  • 11.4.1 Integer Range Example
  • 11.4.2 Built-in TBB Range Classes
  • 11.4.3 Fork-Join Parallel Pattern
  • 11.5 Simple parallel_for Example
  • 11.5.1 Parallel_for Operation
  • 11.5.2 Full AddVector.C Example
  • 11.6 Parallel Reduce Operation and Examples
  • 11.6.1 First Example: Recursive Area Computation
  • 11.6.2 Second Example: Looking for a Minimum Value
  • Complete MinVal.C example
  • 11.7 Parallel_for_each Algorithm
  • 11.7.1 Simple Example: Foreach.C
  • 11.8 Parallel_do Algorithm
  • 11.9 Parallel_invoke Algorithm
  • 11.9.1 Simple Example: Invoke.C
  • 11.10 High-Level Scheduler Programming Interface
  • 11.10.1 Defining Tasks in the task_group Environment
  • 11.10.2 Task_group Class
  • 11.11 Task_group Examples
  • 11.11.1 Area Computation, with a Fixed Number of Tasks
  • 11.11.2 Area Computation, Tasks Wait for Children
  • 11.11.3 Area Computation, Non-Blocking Style
  • 11.11.4 Canceling a Group of Tasks
  • 11.12 Conclusions
  • Chapter 12: Further Thread Pools
  • 12.1 Introduction
  • 12.2 SPool Reviewed
  • 12.3 NPool Features
  • 12.4 NPool API
  • 12.5 Building Parallel Jobs
  • 12.5.1 Task Class
  • 12.5.2 TaskGroup Class
  • 12.5.3 Memory Allocation Best Practices
  • 12.6 Operation of the NPool
  • 12.6.1 Avoiding Deadlocks in the Pool Operation
  • Mapping tasks to threads
  • Running recursive tasks
  • 12.6.2 Nested Parallel Jobs
  • 12.6.3 Passing Data to Tasks
  • 12.6.4 Assessment of the NPool Environment
  • 12.7 Examples
  • 12.7.1 Testing Job Submissions
  • 12.7.2 Running Unbalanced Tasks
  • 12.7.3 Submission of a Nested Job
  • 12.7.4 TaskGroup Operation
  • 12.7.5 Parent-Children Synchronization
  • 12.7.6 Task Suspension Mechanism
  • 12.7.7 Traversing a Directed Acyclic Graph
  • 12.7.8 Further Recursive Examples
  • 12.8 Running Parallel Routines in Parallel
  • 12.8.1 Main Parallel Subroutine
  • 12.8.2 Client Code
  • 12.9 Hybrid MPI-Threads Example
  • Chapter 13: Molecular Dynamics Example
  • 13.1 Introduction
  • 13.2 Molecular Dynamics Problem
  • 13.2.1 Relevance of the Mechanical Model
  • 13.3 Integration of the Equations of Motion
  • 13.3.1 Parallel Nature of the Algorithm
  • 13.4 Md.C Sequential Code
  • 13.4.1 Input and Postprocessing
  • 13.4.2 Sequential Code, Md.C
  • 13.5 Parallel Implementations of Md.C
  • 13.5.1 General Structure of Data Parallel Programs
  • 13.5.2 Different Code Versions
  • 13.5.3 OpenMP Microtasking Implementation, MdOmpF.C
  • 13.5.4 OpenMP Macrotasking Implementation, MdOmp.C
  • Work-sharing among threads
  • Barrier synchronizations
  • 13.5.5 SPool Implementation, MdTh.C
  • Idle-wait versus spin-wait barriers
  • 13.5.6 TBB Implementation, MdTbb.C
  • 13.6 Parallel Performance Issues
  • 13.7 Activating Vector Processing
  • 13.8 Annex: Matrix Allocations
  • 13.9 Annex: Comments on Vectorization With Intel Compilers
  • 13.9.1 Data Alignment
  • Chapter 14: Further Data Parallel Examples
  • 14.1 Introduction
  • 14.1.1 Examples in This Chapter
  • 14.1.2 Finite-Difference Discretization
  • 14.2 Relaxation Methods
  • 14.3 First Example: Heat.C
  • 14.3.1 Auxiliary Functions in HeatAux.C
  • 14.3.2 Sequential Heat.C Code
  • 14.4 Parallel Implementation of Heat.C
  • 14.4.1 OpenMP Microtasking: HeatF.C
  • 14.4.2 OpenMP Macrotasking: HeatOmp.C
  • 14.4.3 NPool Macrotasking: HeatNP.C
  • 14.4.4 TBB Microtasking: HeatTBB.C
  • 14.5 Heat Performance Issues
  • 14.6 Second Example: Sor.C
  • 14.6.1 Gauss-Seidel With White-Black Ordering
  • 14.6.2 Sequential Code Sor.C
  • 14.7 Parallel Implementations of Sor.C
  • 14.7.1 OpenMP Microtasking: SorF.C
  • 14.7.2 OpenMP and NPool Macrotasking
  • 14.7.3 TBB Microtasking: SorTBB
  • 14.8 Sor Performance Issues
  • 14.9 Alternative Approach to Data Dependencies
  • Chapter 15: Pipelining Threads
  • 15.1 Pipeline Concurrency Pattern
  • 15.2 Example 1: Two-Dimensional Heat Propagation
  • 15.2.1 Physical Problem
  • 15.2.2 Method of Solution
  • 15.2.3 FFT Routines
  • 15.3 Sequential Code Heat.C
  • 15.4 Pipelined Versions
  • 15.4.1 Using a Circular Array of Working Buffers
  • 15.4.2 Pipeline With Handshake Synchronization
  • 15.4.3 Examples of Handshake Synchronization
  • 15.4.4 Pipeline With Producer-Consumer Synchronization
  • 15.4.5 Examples of Producer-Consumer Synchronization
  • 15.5 Pipeline Classes
  • 15.5.1 Synchronization Member Functions
  • 15.6 Example: Pipelined Sor
  • 15.6.1 Sor Code
  • 15.7 Pipelining Threads in TBB
  • 15.7.1 Filter Interface Class
  • 15.7.2 Pipeline Class
  • 15.7.3 TBB Pipelined Sor Code
  • 15.7.4 SORSTAGE Class
  • Private data members
  • Member functions
  • 15.7.5 Full PSORTBB.C Code
  • 15.8 Some Performance Considerations
  • 15.9 Annex: Solution to the Heat Diffusion Equation
  • Finite-difference discretization
  • Fourier transforms
  • Two-dimensional Fourier transforms
  • Equation for the Fourier coefficients
  • 15.10 Annex: FFT Routines
  • One-dimensional FFT
  • Two-dimensional FFT
  • Half of two-dimensional FFT
  • Fourier transform of real-valued functions
  • Chapter 16: Using the TBB Task Scheduler
  • 16.1 Introduction
  • 16.2 Structure of the TBB Pool
  • 16.2.1 TBB Task Queues
  • 16.2.2 First Look at the Scheduling Algorithm
  • 16.3 TBB Task Management Classes
  • 16.4 Complete Scheduler API: The Task Class
  • 16.4.1 Constructing Tasks
  • 16.4.2 Mapping Tasks to Threads
  • 16.4.3 Hierarchical Relations Among Tasks
  • 16.4.4 Allocating TBB Tasks
  • 16.4.5 Task Spawning Paradigms
  • 16.4.6 Spawning Tasks With Continuation Passing
  • 16.4.7 Spawning Tasks and Waiting for Children
  • 16.4.8 Task Spawning Best Practices
  • 16.5 Miscellaneous Task Features
  • 16.5.1 How Threads Run Tasks
  • 16.5.2 Complete Task Scheduling Algorithm
  • 16.5.3 Recycling Tasks
  • 16.5.4 Memory Affinity Issues
  • 16.6 Using the TBB Scheduler
  • 16.6.1 Blocking Versus Continuation Passing Examples
  • Main thread code
  • Example 1: blocking on children
  • Example 2: continuation passing
  • 16.6.2 Recursive Computation of the Area Under a Curve
  • Continuation passing case
  • 16.7 Job Submission by Client Threads
  • 16.7.1 Generic Interface for Job Submission
  • 16.7.2 Submitting an IO Task
  • 16.7.3 Submitting Complex, Recursive Jobs
  • Comment on empty tasks
  • 16.8 Example: Molecular Dynamics Code
  • 16.8.1 Parallel Tasks for This Problem
  • 16.8.2 Computing Particle Trajectories
  • 16.9 Recycling Parallel Region Tasks
  • 16.9.1 Recycling Protocol
  • 16.10 Annex: Task Class Member Functions
  • Annex A: Using the Software
  • A.1 Libraries Required
  • A.2 Software Organization
  • A.2.1 Building and Testing the vath Library
  • A.2.2 Compiling and Running the Examples
  • A.3 vath Classes
  • Annex B: C++ Function Objects and Lambda Expressions
  • B.1 Function Objects
  • B.2 Function Object Syntax
  • B.2.1 Implementing Callbacks
  • B.2.2 Abilities of Function Objects
  • Function objects have better runtime performance
  • Easier handling of functions with internal states
  • Extending the capabilities of generic programming
  • B.2.3 Example
  • B.2.4 Function Objects in TBB
  • B.3 Lambda Expressions
  • Bibliography
  • Index
  • Back Cover

File format: EPUB
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; Mac OS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many others (not Kindle)

The EPUB format is well suited to novels and nonfiction, that is, "flowing" text without a complex layout. On e-readers and smartphones, line and page breaks adapt automatically to small displays. Adobe DRM applies "hard" copy protection here: if the necessary requirements are not met, you will not be able to open the e-book, so prepare your reading hardware before downloading.

For more information, see our e-book help.


File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; Mac OS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many others (not Kindle)

The PDF format displays each book page identically on any hardware, so it is also suitable for complex layouts such as those used in textbooks and reference works (images, tables, columns, footnotes). On the small displays of e-readers or smartphones, PDFs can be inconvenient because of the scrolling required. Adobe DRM applies "hard" copy protection here: if the necessary requirements are not met, you will not be able to open the e-book, so prepare your reading hardware before downloading.

For more information, see our e-book help.


Download (available immediately)

€60.63
incl. 19% VAT
Download / single-user license
ePUB with Adobe DRM
see system requirements
PDF with Adobe DRM
see system requirements
Note: The choice of file format and copy protection is made in the e-book provider's system.
