Intel Xeon Phi Processor High Performance Programming

Knights Landing Edition
 
 
Morgan Kaufmann (publisher)
  • 2nd edition | published May 31, 2016 | 662 pages
 
E-book | ePUB with Adobe DRM | system requirements
E-book | PDF with Adobe DRM | system requirements
978-0-12-809195-1 (ISBN)
 

This book is an all-in-one source of information for programming the second-generation Intel Xeon Phi product family, also called Knights Landing. The authors provide detailed and timely Knights Landing-specific details, programming advice, and real-world examples. They distill years of Xeon Phi programming experience, coupled with insights from many expert customers (Intel Field Engineers, Application Engineers, and Technical Consulting Engineers), into this authoritative book on the essentials of programming for Intel Xeon Phi products.

Intel® Xeon Phi™ Processor High-Performance Programming is useful even before you ever program a system with an Intel Xeon Phi processor. To help ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system, whether based on Intel Xeon processors, Intel Xeon Phi processors, or other high-performance microprocessors. Applying these techniques will generally increase your program's performance on any system and better prepare you for Intel Xeon Phi processors.


  • A practical guide to the essentials for programming Intel Xeon Phi processors
  • Definitive coverage of the Knights Landing architecture
  • Presents best practices for portable, high-performance computing and a familiar and proven threads and vectors programming model
  • Includes real-world code examples that highlight uses of the unique aspects of this new highly parallel, high-performance computational product
  • Covers use of MCDRAM, AVX-512, Intel® Omni-Path fabric, many-cores (up to 72), and many threads (4 per core)
  • Covers software developer tools, libraries, and programming models
  • Covers using Knights Landing as a processor and a coprocessor


Jim Jeffers was the primary strategic planner and one of the first full-time employees on the program that became Intel® MIC. He served as lead software engineering manager on the program, and formed and launched the software development team. As the program evolved, he became manager of the workloads (applications) and software performance team. He has some of the deepest insight into the market, architecture, and programming uses of the MIC product line. He has been a developer and development manager for embedded and high-performance systems for close to 30 years.
  • English
  • San Diego, USA
Elsevier Science
  • 45.17 MB
978-0-12-809195-1 (ISBN-13)
0128091959 (ISBN-10)
  • Front Cover
  • Intel Xeon Phi Processor High Performance Programming: Knights Landing Edition
  • Copyright
  • Contents
  • Acknowledgments
  • Foreword
  • Extending the Sports Car Analogy to Higher Performance
  • What Exactly Is The Unfair Advantage?
  • Peak Performance Versus Drivable/Usable Performance
  • How Does The Unfair Advantage Relate to This Book?
  • Closing Comments
  • Preface
  • Sports Car Tutorial: Introduction for Many-Core Is Online
  • Parallelism Pearls: Inspired by Many Cores
  • Organization
  • Structured Parallel Programming
  • What's New?
  • lotsofcores.com
  • Section I: Knights Landing
  • Chapter 1: Introduction
  • Introduction to Many-Core Programming
  • Trend: More Parallelism
  • Why Intel® Xeon Phi™ Processors Are Needed
  • Processors Versus Coprocessor
  • Measuring Readiness for Highly Parallel Execution
  • What About GPUs?
  • Enjoy the Lack of Porting Needed but Still Tune!
  • Transformation for Performance
  • Hyper-Threading Versus Multithreading
  • Programming Models
  • Why We Could Skip To Section II Now
  • For More Information
  • Chapter 2: Knights Landing overview
  • Overview
  • Instruction Set
  • Architecture Overview
  • Tile
  • Mesh: On-Die Interconnect
  • Cluster modes
  • MCDRAM (High-Bandwidth Memory) and DDR (DDR4)
  • I/O (PCIe Gen3)
  • Motivation: Our Vision and Purpose
  • Performance
  • Summary
  • For More Information
  • Chapter 3: Programming MCDRAM and Cluster modes
  • Use Cache Mode and Default Cluster Mode (at First)
  • Programming for Cluster Modes
  • Programming for Memory Modes
  • Memory Usage Models
  • What Is the memkind Library (and hbwmalloc)?
  • Maximizing Performance With Memory Usage Models
  • Critical review for our Hello MCDRAM
  • NUMACTL -H
  • Learning NUMA Node Numbering
  • Ways to Observe MCDRAM Allocations
  • Numactl: Move All Allocations to MCDRAM
  • Oversubscription of MCDRAM: A Killer or an Opportunity?
  • Autohbw: Move Selected Allocations to MCDRAM
  • Memkind/FASTMEM: Explicit Usage of MCDRAM
  • Explicit memory usage in C/C++: memkind
  • C++ notes
  • Explicit memory usage in Fortran: FASTMEM
  • Fortran FASTMEM failure modes
  • ALLOCATE prefers MCDRAM
  • ALLOCATE requires MCDRAM
  • Query Memory Mode and MCDRAM Available
  • SNC Performance Implications of Allocation and Threading
  • Allocation With SNC
  • How to Not Hard Code the NUMA Node Numbers
  • Approaches to Determining What to Put in MCDRAM
  • Approach 1: Observing or Emulating MCDRAM Effects
  • Stage 1: Code modification
  • Stage 2: Manual Execution
  • Stage 3: Autotuning Configuration (Optional)
  • Approach 2: Using Intel VTune to Determine MCDRAM Candidate Data Structures
  • Stage 1: Profiling Data Collection
  • Stage 2: Profiling Data Analysis
  • Stage 3: Code Modification
  • Results Analysis of the Two Approaches
  • Summary of Two Approaches to "What Goes in MCDRAM"
  • Why Rebooting Is Required to Change Modes
  • BIOS
  • Save/Restore/Change All BIOS Setting
  • Summary
  • For More Information
  • Chapter 4: Knights Landing architecture
  • Tile architecture
  • Core and VPU
  • Front-end unit
  • Allocation unit
  • Integer execution unit
  • Memory execution unit
  • Vector processing unit
  • Threading
  • L2 Architecture
  • Cluster modes
  • All-to-All Cluster Mode
  • Quadrant Cluster Mode
  • SNC-4 Mode
  • Hemisphere Cluster and SNC-2 Modes
  • Cluster Mode Summary
  • Memory interleaving
  • Memory modes
  • Cache Mode
  • Flat Mode
  • Hybrid Mode
  • Capacity, Bandwidth, Latency
  • Interactions of cluster and memory modes
  • Summary
  • For More Information
  • Chapter 5: Intel Omni-Path Fabric
  • Overview
  • Host Fabric Interface
  • Intel OPA Switches
  • Intel OPA Management
  • Performance and Scalability
  • Extreme Message Rates
  • Low Latency
  • Addressing
  • Multicast
  • Transport Layer APIs
  • OFA Open Fabric Interface
  • Performance-Scaled Messaging
  • Open Fabrics Verbs and Compatibility
  • Quality of Service
  • Service Levels
  • Traffic Flow Optimization and Packet Interleaving
  • Credit-Based Flow Control
  • Security
  • Partition-Based Security
  • Management Security
  • Virtual Fabrics
  • Unicast Address Resolution
  • Typical Flow for Well-Behaved Applications
  • Out of Band Mechanisms
  • Multicast Address Resolution
  • Typical Flow for Well-Behaved Applications
  • Summary
  • For More Information
  • Chapter 6: µarch optimization advice
  • Best Performance From 1, 2, or 4 Threads Per Core, Rarely 3
  • Hyperthreading: Do Not Turn It Off
  • Memory subsystem
  • Caches
  • MCDRAM and DDR
  • Advice: Large Pages Can Be Good (2M/1G)
  • µarch nuances (tile)
  • Instruction Cache, Decode, and Branch Predictors
  • Integer
  • Vector
  • Memory Accesses and Prefetch Options
  • Code Examples
  • Direct mapped MCDRAM cache
  • Advice: use AVX-512
  • Advice: Upgrade to AVX-512 From AVX/AVX2 and IMCI
  • Scalar Versus Vector Code
  • Instruction Latency Tables
  • Advice: Use AVX-512 Extensions for Knights Landing
  • Advice: Use AVX-512ER
  • IMCI to AVX-512: Reciprocal and Exponentials
  • Advice: Use AVX-512CD
  • Advice: Use AVX-512PF
  • IMCI to AVX-512: Software Prefetching
  • Advice: Gather and Scatter Instructions Only When Irregular
  • IMCI to AVX-512: Gathers/Scatters
  • IMCI to AVX-512: Swizzle Instructions
  • IMCI to AVX-512: Unaligned Loads/Stores
  • IMCI to AVX-512: Data Conversion Instructions
  • IMCI to AVX-512: Nontemporal Stores/Cache Line Evicts
  • Summary
  • For more information
  • Section II: Parallel programming
  • Chapter 7: Programming overview for Knights Landing
  • To Refactor, or Not to Refactor, That Is the Question
  • Evolutionary Optimization of Applications
  • Revolutionary Optimization of Applications
  • Know When to Hold'em and When to Fold'em
  • For More Information
  • Chapter 8: Tasks and threads
  • OpenMP, Fortran 2008, Intel TBB, Intel MKL
  • Importance of Thread Pools
  • OpenMP
  • Parallel Processing Model
  • Directives
  • Significant Controls Over OpenMP
  • OpenMP Nesting: Use Hot Teams
  • Fortran 2008
  • DO CONCURRENT
  • DO CONCURRENT and DATA RACES
  • DO CONCURRENT Definition
  • DO CONCURRENT Versus FORALL
  • DO CONCURRENT Versus OpenMP "Parallel"
  • Intel TBB
  • Why TBB?
  • Using TBB
  • parallel_for
  • blocked_range
  • Partitioners
  • parallel_reduce
  • parallel_invoke
  • TBB Flow Graph
  • TBB Memory Allocation, memkind, and MCDRAM
  • hStreams
  • Summary
  • For More Information
  • Chapter 9: Vectorization
  • Why Vectorize?
  • How to Vectorize
  • Three Approaches to Achieving Vectorization
  • Six-Step Vectorization Methodology
  • Step 1. Measure Baseline Release Build Performance
  • Step 2. Determine Hotspots Using Intel VTune™ Amplifier
  • Step 3. Determine Loop Candidates Using Intel Compiler Vec-Report
  • Step 4. Get Advice Using Intel Advisor
  • Step 5. Implement Vectorization Recommendations
  • Step 6: Repeat!
  • Streaming Through Caches: Data Layout, Alignment, Prefetching, and so on
  • Why Data Layout Affects Vectorization Performance
  • Data Alignment
  • Prefetching
  • Compiler prefetches
  • Compiler prefetch controls (prefetching via directives/pragmas)
  • Manual prefetches
  • Streaming Stores
  • When streaming stores will be generated for Knights Landing
  • Nontemporal: compiler generation of nontemporal stores
  • Compiler Tips
  • Avoid Manual Loop Unrolling
  • Requirements for a Loop to Vectorize (Intel Compiler)
  • Importance of Inlining, Interference With Simple Profiling
  • Compiler Options
  • Memory Disambiguation Inside Vector-Loops
  • Compiler Directives
  • SIMD Directives
  • Requirements to vectorize with SIMD directives
  • SIMD directive clauses
  • Use SIMD directives with care
  • The Vector and Novector Directives
  • Use vector directives with care
  • The ivdep Directive
  • ivdep example in Fortran
  • ivdep examples in C
  • Random Number Function Vectorization
  • Data Alignment to Assist Vectorization
  • Step 1: aligning the data
  • How to define aligned STATIC arrays
  • Step 2: inform the compiler of the alignment
  • How to tell the compiler all memory references are nicely aligned for the target
  • Use Array Sections to Encourage Vectorization
  • Fortran Array Sections
  • Subscript triplets
  • Vector subscripts
  • Implications for array copies, efficiency issues
  • Look at What the Compiler Created: Assembly Code Inspection
  • How to Find the Assembly Code
  • Numerical Result Variations With Vectorization
  • Summary
  • For More Information
  • Chapter 10: Vectorization advisor
  • Getting Started With Intel Advisor for Knights Landing
  • Enabling and Improving AVX-512 Code With the Survey Report
  • Preparing Your Application
  • Running a Survey Analysis With Trip Counts
  • One-Stop-Shop Performance Overview in the Survey Report
  • Enabling AVX-512 Speedups Via Recommendations
  • Fixing ineffective AVX-512 peeled/remainder loop issues
  • Speedups with approximate reciprocal, reciprocal square root, and exponent/mantissa extraction
  • Inefficient memory access in assumed-shape array and AVX-512 gather/scatter
  • Making Expert Users Happy: Knights Landing-Specific Traits and ISA Analysis
  • Compress/Expand Trait
  • Gather/Scatter Traits
  • Conflict(-free) subset detection Trait
  • Memory Access Pattern Report
  • AVX-512 Gather/Scatter Profiler
  • Mask Utilization and FLOPs Profiler
  • Advisor Roofline Report
  • Explore AVX-512 Code Characteristics Without AVX-512 Hardware
  • Example - Analysis of a Computational Chemistry Code
  • Summary
  • For More Information
  • Chapter 11: Vectorization with SDLT
  • What Is SDLT?
  • Getting Started
  • SDLT Basics
  • Primitives
  • Containers
  • Accessors
  • Proxy Objects
  • Example: Normalizing 3D Points With SIMD
  • What Is Wrong With AOS Memory Layout and SIMD?
  • SIMD Prefers Unit-Stride Memory Accesses
  • Integrating SOA Data Layout
  • Alpha-Blended Overlay Reference
  • Vectorizing With AOS
  • Alpha-Blended Overlay With SDLT
  • Additional Features
  • Summary
  • For More Information
  • Chapter 12: Vectorization with AVX-512 intrinsics
  • What Are Intrinsics?
  • Which Compilers Support Intrinsics?
  • Assembly Language Programming Options
  • AVX-512 Overview
  • Mask Registers and Predication
  • AVX-512 Instruction Encodings
  • Intel Software Development Emulator
  • Innovation Beyond Intel AVX-512 Foundational Instructions
  • Migrating From Knights Corner
  • AVX-512 Detection
  • Quirks About When Intrinsics Can Be Used
  • Learning AVX-512 Instructions
  • Learning AVX-512 Intrinsics
  • Step-by-Step Example Using AVX-512 Intrinsics
  • Intrinsics for MILC
  • The Data Types
  • Understanding Operations in mult_adj_su3_mat_4vec Function
  • Intrinsics Compile Time Type Checking
  • Results Using Our Intrinsics Code
  • For More Information
  • Chapter 13: Performance libraries
  • Intel Performance Library Overview
  • Intel Math Kernel Library Overview
  • Intel Data Analytics Library Overview
  • Together: MKL and DAAL
  • Intel Integrated Performance Primitives Library Overview
  • Intel Performance Libraries and Intel Compilers
  • MKL and Intel Compilers
  • DAAL and Intel Compilers
  • IPP and Intel Compilers
  • Native (Direct) Library Usage
  • High-Bandwidth Memory
  • Math Kernel Library
  • Data Analytics Acceleration Library
  • Integrated Performance Primitives
  • Offloading to Knights Landing While Using a Library
  • Automatic Offload in MKL
  • Data Analytics Acceleration Library
  • Integrated Performance Primitives
  • Precision Choices and Variations
  • Fast Transcendentals and Mathematics
  • Understanding the Potential for Floating-Point Arithmetic Variations
  • Performance Tip for Faster Dynamic Libraries
  • For More Information
  • Chapter 14: Profiling and timing
  • Introduction to Knights Landing Tuning
  • Event-Monitoring Registers
  • List of Events Used in This Guide
  • Efficiency Metrics
  • Cycles Per Instruction
  • Cycles per instruction - description and usage
  • CPI - tuning suggestions
  • Compute to Data Access Ratio
  • Compute to Data Access Ratio - description and usage
  • Compute to Data Access Ratio - tuning suggestions
  • Potential Performance Issues
  • General Cache Usage
  • General cache usage - description and usage
  • General cache usage - tuning suggestions
  • TLB Misses
  • TLB misses - description and usage
  • TLB misses - tuning suggestions
  • VPU Usage
  • VPU usage - description and usage
  • VPU usage - tuning suggestions
  • Microcoded VPU Instructions
  • Microcoded instruction intensity - description and usage
  • Microcoded instruction usage - tuning suggestions
  • Memory Bandwidth
  • Memory bandwidth - description and usage
  • Memory bandwidth - tuning suggestions
  • Intel VTune Amplifier XE Product
  • Avoid Simple Profiling
  • Performance Application Programming Interface
  • MPI Analysis: ITAC
  • HPCToolkit
  • Tuning and Analysis Utilities
  • Timing
  • System Routines
  • Time Stamp Counter
  • Frequency Variation
  • Summary
  • For More Information
  • Chapter 15: MPI
  • Internode Parallelism
  • MPI on Knights Landing
  • MPI overview
  • MPI Programming Evolves
  • Future of MPI Versus Other Internode Techniques
  • How to run MPI applications
  • Startup of Pure MPI Programs
  • Startup of Hybrid MPI/OpenMP Programs
  • Startup of MPI Programs Using Offload
  • Analyzing MPI application runs
  • Debug Information
  • MPI Statistics
  • Intel MPI Performance Snapshot
  • Intel Trace Analyzer and Collector
  • Intel VTune Analyzer
  • Tuning of MPI applications
  • Process Placement
  • MPI Runtime Settings
  • MPI Collective Algorithms
  • MPITUNE Tool
  • Heterogeneous clusters
  • Recent trends in MPI coding
  • Simple Hybrid Coding
  • Overlap of Communication and Computation
  • MPI Calls on Threads
  • Nonblocking Collectives
  • One-sided Shared Memory Calls
  • Putting it all together
  • Summary
  • For more information
  • Chapter 16: PGAS programming models
  • To share or not to share
  • Why Use PGAS on Knights Landing?
  • Programming with PGAS
  • OpenSHMEM
  • UPC
  • Fortran Coarrays
  • MPI-3 RMA
  • PGAS Versus Multithreading
  • Performance evaluation
  • Beyond PGAS
  • Summary
  • For More Information
  • Chapter 17: Software-defined visualization
  • Motivation for Software-Defined Visualization
  • Software-Defined Visualization Architecture
  • OpenSWR: OpenGL Raster-Graphics Software Rendering
  • Embree: High-performance Ray Tracing Kernel Library
  • OSPRay: Scalable Ray Tracing Framework
  • OSPRay API Overview
  • Formulating a High-Performance Implementation
  • Vector parallelism
  • Thread parallelism
  • Node (cluster level) parallelism
  • Summary
  • Image Attributions
  • For More Information
  • Chapter 18: Offload to Knights Landing
  • Offload Programming Model: Using It With Knights Landing
  • Processors Versus Coprocessor
  • Coprocessor Specific Differences
  • Offload Model Considerations
  • Offloading via MKL
  • Offloading via OpenMP
  • OpenMP Target Directives
  • Offload: omp Target
  • Data Environments: omp Target Data
  • Concurrent Host and Target Execution
  • Offload Over Fabric
  • Summary
  • For More Information
  • Chapter 19: Power analysis
  • Power Demand Gates Exascale
  • Power 101
  • Hardware-Based Power Analysis Techniques
  • Software-Based Knights Landing Power Analyzer
  • ManyCore Platform Software Package Power Tools
  • Running Average Power Limit
  • Performance Profiling on Knights Landing
  • Intel Remote Management Module
  • Summary
  • For More Information
  • Section III: Pearls
  • Chapter 20: Optimizing classical molecular dynamics in LAMMPS
  • Molecular Dynamics
  • LAMMPS
  • Knights Landing Processors
  • LAMMPS Optimizations
  • Data Alignment
  • Data Types and Layout
  • Vectorization
  • Neighbor List
  • Long-Range Electrostatics
  • MPI and OpenMP Parallelization
  • Performance Results
  • System, Build, and Run Configurations
  • Workloads
  • Organic Photovoltaic Molecules
  • Hydrocarbon Mixtures
  • Rhodopsin Protein in Solvated Lipid Bilayer
  • Coarse Grain Liquid Crystal Simulation
  • Coarse-Grain Water Simulation
  • Summary
  • For More Information
  • Chapter 21: High performance seismic simulations
  • High-Order Seismic Simulations
  • Numerical background
  • Application characteristics
  • Intel Architecture as Compute Engine
  • Highly-efficient small matrix kernels
  • Sparse matrix kernel generation and sparse/dense kernel selection
  • Dense matrix kernel generation: AVX2
  • Dense matrix kernel generation: AVX-512
  • Kernel performance benchmarking
  • Incorporating Knights Landing's different memory subsystems
  • Performance Evaluation
  • Mount Merapi
  • 1992 Landers
  • Summary and take-aways
  • For More Information
  • Chapter 22: Weather research and forecasting (WRF)
  • WRF Overview
  • WRF Execution Profile: Relatively Flat
  • History of WRF on Intel Many-Core (Intel Xeon Phi Product Line)
  • Our Early Experiences With WRF on Knights Landing
  • The CONUS12km Benchmark
  • Compiling WRF for Intel Xeon and Intel Xeon Phi Systems
  • Configuration Details
  • WRF CONUS12km Benchmark Performance
  • MCDRAM Bandwidth
  • Vectorization: Boost of AVX-512 Over AVX2
  • Core Scaling
  • Summary
  • For More Information
  • Chapter 23: N-Body simulation
  • Parallel Programming for Noncomputer Scientists
  • Step-by-Step Improvements
  • N-Body simulation
  • Physics Background
  • Mathematical model
  • Time complexity
  • Evolution in time
  • Optimization
  • Initial Implementation (Optimization Step 0)
  • Thread parallelism (optimization step 1)
  • Scalar Performance Tuning (Optimization Step 2)
  • Vectorization with SOA (optimization step 3)
  • Memory traffic (optimization step 4)
  • Impact of MCDRAM on Performance
  • Summary
  • For More Information
  • Chapter 24: Machine learning
  • Convolutional Neural Networks
  • Neural Networks
  • Cache-Blocking Strategy for Convolutional Layers
  • Data Layout
  • Vectorization and Register Blocking
  • Threading and Work Partitioning
  • The Experimental Setup
  • OverFeat-FAST Results
  • K-Nearest Neighbors
  • Algorithms
  • kd-trees
  • Optimized Kd-Tree Construction
  • Algorithmic choices
  • Optimized Kd-Tree Query
  • KNN Results
  • Scalability
  • Sensitivity analysis
  • Choice of Bucket Size
  • Choice of Level at Which to Switch From Data-Parallel Tree Construction to a Thread-Parallel Approach
  • Absolute Performance Results
  • For More Information
  • Chapter 25: Trinity workloads
  • Out of the Box Performance
  • Building the Workloads
  • Items to Consider Before Running on Knights Landing
  • Memory modes and cluster modes
  • Strong scaling vs weak scaling vs problem sizing
  • Running on Knights Landing: Quadrant-Cache Mode
  • Running Trinity on Knights Landing: Quadrant-Flat vs Quadrant-Cache
  • Hybrid mode
  • Running on Knights Landing: SNC-4-Cache vs Quadrant-Cache
  • Comparing Knights Landing Performance vs Intel Xeon E5-2697 v4 Processor
  • Summary of Out of Box Section
  • Optimizing MiniGhost OpenMP Performance
  • Algorithm Overview
  • Original Implementation
  • Initial Configuration Sizing, Testing, and Tuning
  • Initial sizing and baseline performance
  • Problem-size selection
  • Code Optimizations
  • Copy elimination
  • Parallelization of serial code
  • Data-access reduction
  • Effect of OpenMP code changes on an Intel Xeon E5-2697 v4 processor
  • MiniGhost Optimization Summary
  • Summary
  • For More Information
  • Chapter 26: Quantum chromodynamics
  • LQCD
  • The QPhiX Library and Code Generator
  • Wilson-Dslash Operator
  • Configuring the QPhiX Code Generator
  • The Experimental Setup
  • Results
  • Overall Performance
  • Overall Performance Analysis
  • Impact of Memory Bandwidth
  • Streaming Bandwidth Roofline Estimate
  • Peak Achievable Knights Landing Performance
  • Memory Bandwidth Analysis
  • Memory and Cluster Modes
  • Analysis
  • Threading
  • Threading (or Threads per Core) Analysis
  • Prefetching
  • Software Prefetch Analysis
  • Conclusion
  • For More Information
  • Contributors
  • Glossary
  • Index
  • Back Cover

File format: EPUB
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; MacOS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many more (not Kindle)

The EPUB format is well suited to novels and nonfiction, i.e., "flowing" text without complex layout. On e-readers and smartphones, line and page breaks adapt automatically to the small display. Adobe DRM applies "hard" copy protection: if the necessary prerequisites are not met, you will not be able to open the e-book, so prepare your reading hardware before downloading.

Further information can be found in our e-book help.


File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)

System requirements: same as for the EPUB edition above.

A PDF displays each book page identically on any hardware, which makes it suitable for the complex layouts used in textbooks and technical books (images, tables, columns, footnotes). On the small displays of e-readers and smartphones, PDFs are rather tedious because of the scrolling required. Adobe DRM applies "hard" copy protection: if the necessary prerequisites are not met, you will not be able to open the e-book, so prepare your reading hardware before downloading.


Download (available immediately)

€51.11
incl. 19% VAT
Download / single-user license
ePUB with Adobe DRM
see system requirements
PDF with Adobe DRM
see system requirements
Note: you select the desired file format and copy protection in the e-book provider's system.
Order e-book
