Control of Complex Systems

Theory and Applications
 
 
Elsevier Reference Monographs (publisher)
  • 1st edition
  • Published July 29, 2016
  • 762 pages
 
E-Book | ePUB with Adobe DRM | System requirements
E-Book | PDF with Adobe DRM | System requirements
978-0-12-805437-6 (ISBN)
 

In the era of cyber-physical systems, control of complex systems has become one of the most demanding areas in terms of algorithmic design techniques and analytical tools. The 23 chapters, written by international specialists in the field, cover a variety of interests within the broader fields of learning, adaptation, optimization, and networked control. The editors have grouped these into five sections: 'Introduction and Background on Control Theory', 'Adaptive Control and Neuroscience', 'Adaptive Learning Algorithms', 'Cyber-Physical Systems and Cooperative Control', and 'Applications'. The diversity of the research presented gives the reader a unique opportunity to explore a comprehensive overview of a field of great interest to control and system theorists.

This book is intended for researchers and control engineers in machine learning, adaptive control, optimization, and automatic control systems, including electrical, computer, mechanical, aerospace/automotive, and industrial engineers. It can be used as a text or reference for advanced courses on complex control systems.

• Collects chapters from several well-known professors and researchers showcasing their recent work

• Presents different state-of-the-art control approaches and theory for complex systems

• Gives algorithms that account for modelling uncertainties, the unavailability of a model, the possibility of cooperative/non-cooperative goals, and malicious attacks compromising the security of networked teams

• Real-system examples and figures throughout make ideas concrete


  • English
  • Oxford | USA
  • 32.47 MB
978-0-12-805437-6 (9780128054376)
0128054379
  • Front Cover
  • Control of Complex Systems: Theory and Applications
  • Copyright
  • Contents
  • Contributors
  • About the Editors
  • Preface
  • Section 1: Introduction and Preface
  • Chapter One: Introduction to Complex Systems and Feedback Control
  • 1 Introduction
  • 2 Types of Complexity in Nature
  • 2.1 Cosmology and Microphysics
  • 2.2 Fluids
  • 2.3 Chemistry
  • 2.4 Ecosystems
  • 2.5 Social Systems
  • 2.6 Biology
  • 2.7 Human and Cognitive Systems
  • 2.8 Human Engineered Systems
  • 3 Characteristics of Complex Systems
  • 4 Feedback Control Techniques in Complex Systems
  • 4.1 Continuous-Time and Discrete-Time Control Systems
  • 4.2 Time-Invariant/Linear Time-Varying Control Systems and the Notions of Controllability and Observability
  • 4.3 Basic Concepts of a Control System
  • 5 Control Techniques
  • 5.1 Optimal Control
  • 5.2 Adaptive Control
  • 5.3 Robust Control
  • 5.4 Intelligent Control
  • 5.5 Game-Theoretic Control
  • 5.6 Cooperative and Distributed Control
  • 5.7 Bandwidth Effective Control
  • 6 Conclusion
  • References
  • Section 2: Adaptive Control and Neuroscience
  • Chapter Two: Hierarchical Adaptive Control of Rapidly Time-Varying Systems Using Multiple Models
  • 1 Introduction
  • 2 Mathematical Preliminaries
  • 2.1 The Classical Adaptive Control Problem
  • 2.1.1 Adaptive Control Using a Single Model
  • 2.1.2 Multiple Models
  • 3 Multiple Model-Based Adaptive Control
  • 3.1 Switching
  • 3.1.1 Multiple Fixed Models
  • 3.1.2 Multiple Adaptive Models
  • 3.2 Switching and Tuning [14, 16, 17, 20]
  • 3.3 Second-Level Adaptation [19, 21, 22]
  • 3.3.1 Adaptive Models
  • 3.3.2 Algebraic Computation of α
  • 3.3.3 Dynamic Estimation of α
  • 3.3.4 Multiple Fixed Models
  • 3.4 Interactive/Evolutionary Adaptation
  • 3.4.1 Statement of the Problem
  • 3.4.2 The Method
  • Empirical Results
  • Simulation
  • Theoretical Results
  • 3.5 Summary
  • 4 Extensions
  • 4.1 Linear Single-Input Single-Output Systems
  • 4.2 Nonlinear Systems
  • 4.3 Linear Stochastic Systems
  • 5 Simulation Studies: Time-Varying Systems
  • 6 Work in Progress, Principal Issues, and Future Research
  • 6.1 The Need for a Hierarchy
  • 6.2 Higher-Level Switching
  • 6.3 Interactive Adaptation
  • 6.4 Second-Level Adaptation
  • 6.5 Combination of Methods
  • 6.5.1 Simulation 1
  • 6.5.2 Simulation 2
  • 7 Conclusion
  • Acknowledgments
  • References
  • Chapter Three: Adaptive Stabilization of Uncertain Systems With Model-Based Control and Event-Triggered Feedback Updates
  • 1 Introduction
  • 2 Stabilization of Deterministic Discrete-Time Systems Using Event-Triggered Control
  • 3 Parameter Estimation
  • 4 Adaptive Stabilization of Stochastic Model-Based Networked Control Systems
  • 5 Applications of the Adaptive Model-Based Event-Triggered Control Framework
  • 5.1 Continuous-Time Systems
  • 5.2 Actuator Fault Detection and Reconfiguration
  • 5.3 Identification and Rejection of Input Disturbances
  • 5.4 Identification and Stabilization of Switched Systems With State Jumps
  • 6 Conclusions
  • References
  • Chapter Four: A Neural Field Theory for Loss of Consciousness: Synaptic Drive Dynamics, System Stability, Attractors, Parti ...
  • 1 Introduction
  • 2 A Model for Excitatory and Inhibitory Neural Populations
  • 3 A Two-Class Mean Excitatory and Mean Inhibitory Synaptic Drive Model
  • 4 Large Neural Populations, Partial Synchronization, and Suppression of Consciousness
  • 5 Generalized Neural Population Models, Nonmonotonic Postsynaptic Potentials, and Drug Biphasic Response
  • 6 A Second-Order Mean Excitatory and Inhibitory Synaptic Drive Model
  • 7 Numerical Simulations of the Anesthetic Cascade Model
  • 7.1 Two-Class Mean Excitatory and Mean Inhibitory Synaptic Drive Model
  • 7.2 Partial Synchronization of Synaptic Drive Dynamics
  • 8 Conclusion
  • Acknowledgments
  • References
  • Section 3: Adaptive Learning Algorithms
  • Chapter Five: Optimal Tracking Control of Uncertain Systems: On-Policy and Off- Policy Reinforcement Learning Approaches
  • 1 Introduction
  • 2 Optimal Tracking Control of Constrained-Input Continuous-Time Systems
  • 2.1 Standard Formulation and Solution to the Optimal Tracking Control Problem
  • 2.2 A New Optimal Tracker for Continuous-Time Systems
  • 2.2.1 Tracking HJB Equation and On-Policy RL Technique
  • 2.2.2 Actor-Critic Structure for Solving the Nonlinear Tracking Problem
  • 2.2.3 Off-Policy RL for Solving the Optimal Tracking Problem
  • 3 Optimal Tracking Control of Constrained-Input Discrete-Time Systems
  • 3.1 Standard Formulation and Solution to the Optimal Tracking Control Problem
  • 3.2 Optimal Tracker for Discrete-Time Systems
  • 3.2.1 New Formulation for the Tracking Problem of Nonlinear Discrete-Time Systems
  • 3.2.2 Actor-Critic Structure for Solving the Nonlinear Tracking Problem
  • 3.2.3 Learning Rules for Actor and Critic NNs
  • 4 Conclusion
  • Acknowledgments
  • References
  • Chapter Six: Addressing Adaptation and Learning in the Context of Model Predictive Control With Moving-Horizon Estimation
  • 1 Introduction
  • 2 Problem Formulation
  • 2.1 Moving-Horizon Estimation
  • 2.2 Model Predictive Control
  • 2.3 Adaptive MPC Combined with MHE
  • 3 Stability Results
  • 3.1 State Boundedness
  • 4 Simulation Study
  • 5 Conclusions
  • References
  • Chapter Seven: Stochastic Adaptive Dynamic Programming for Robust Optimal Control Design
  • 1 Introduction
  • 2 Model and Problem Formulation
  • 2.1 System Description
  • 2.2 Problem Formulation
  • 3 Discount-Optimal Control
  • 4 Bias-Optimal Control
  • 5 Stochastic ADP Design
  • 5.1 Stochastic ADP Algorithms
  • 5.2 Convergence Analysis
  • 5.2.1 Convergence Proof for Algorithm 7.1
  • 5.2.2 Convergence Proof for Algorithm 7.2
  • 6 Stochastic Robust Optimal Control
  • 6.1 System Description
  • 6.2 Robust Optimal Control Design
  • 7 Stochastic RADP Design
  • 7.1 Stochastic RADP Design Algorithm
  • 7.2 Convergence and Stability Analysis
  • 8 Illustrative Examples
  • 8.1 Two-Joint Arm Movements in a Divergent Force Field
  • 8.2 Vehicle Suspension System With Two Degrees of Freedom
  • 8.3 Single-Joint Arm Movement
  • 9 Summary
  • Acknowledgments
  • References
  • Chapter Eight: Model-Based Reinforcement Learning for Approximate Optimal Regulation
  • 1 Introduction
  • 2 Problem Formulation
  • 3 Approximate Optimal Control
  • 3.1 Value Function Approximation
  • 3.2 Simulation of Experience via BE Extrapolation
  • 4 Stability Analysis
  • 4.1 Boundedness of the Least-Squares Gain Under Persistent Excitation
  • 4.2 Regulation and Convergence to Optimality
  • 5 Simulation
  • 5.1 Problem with a Known Basis
  • 5.2 Problem with an Unknown Basis
  • 6 Experimental Validation
  • 6.1 Experimental Platform
  • 6.2 Controller Implementation
  • 6.3 Results
  • 7 Conclusion
  • References
  • Chapter Nine: Continuous-Time Distributed Adaptive Dynamic Programming for Heterogeneous Multiagent Optimal Synchronizatio ...
  • 1 Introduction
  • 2 Graphs and Synchronization of Multiagent Systems
  • 2.1 Graph Theory
  • 2.2 Synchronization and Tracking Error Dynamic Systems
  • 3 Optimal Distributed Synchronization Control for Heterogeneous Multiagent Differential Graphical Games
  • 3.1 Cooperative Performance Index Function
  • 3.2 Nash Equilibrium
  • 4 Heterogeneous Multiagent Differential Graphical Games by an Iterative ADP Algorithm
  • 4.1 Derivation of the Cooperative Policy Iteration Algorithm for Heterogeneous Multiagent Differential Graphical Games
  • 4.2 Properties of the Cooperative Policy Iteration Algorithm for Heterogeneous Multiagent Differential Graphical Games
  • 4.3 Heterogeneous Multiagent Policy Iteration Algorithm
  • 5 Simulation Study
  • 6 Conclusion
  • References
  • Chapter Ten: Model-Free Learning of Nash Games With Applications to Network Security
  • 1 Introduction
  • 2 Actuation Corruption Attacks
  • 2.1 Background on Networks and Graphs
  • 2.2 Q-Learning-Based Approach
  • 2.3 Simulation
  • 3 Communication Network Corruption (Jamming) Attacks
  • 3.1 Background on Dynamic Graphs
  • 3.2 Performance Design
  • 3.3 Error Dynamics
  • 3.4 Zero-Sum Game Formulation
  • 3.5 Hamilton-Jacobi-Isaacs Equation
  • 3.6 Flocking with Learning Ideas in the Presence of an Intelligent Jammer
  • 3.7 Simulation
  • 4 Conclusion and Future Work
  • References
  • Section 4: Networked Systems and Cooperative Control
  • Chapter Eleven: Adaptive Optimal Regulation of a Class of Uncertain Nonlinear Systems Using Event Sampled Neural Network A ...
  • 1 Introduction
  • 2 Preliminaries
  • 2.1 Notation
  • 2.2 Stability Notion
  • 2.3 Background and Problem Formulation
  • 2.4 Function Approximation
  • 3 Adaptive Event-Triggered State Feedback Control
  • 3.1 Structure
  • 3.2 Controller Design
  • 3.3 Closed-Loop System: Impulsive Dynamical Model
  • 3.4 Stability Analysis
  • 4 Approximate Optimal Controller Design
  • 4.1 Background
  • 4.2 Problem Statement
  • 4.3 Identifier Design
  • 4.4 Controller Design
  • 4.5 Impulsive Dynamical System
  • 4.5.1 Flow Dynamics
  • 4.5.2 Jump Dynamics
  • 4.6 Minimum Inter-sample Time
  • 5 Simulations
  • 5.1 Stabilizing Controller
  • 5.2 Optimal Controller
  • 6 Conclusions
  • Acknowledgments
  • References
  • Chapter Twelve: Decentralized Cooperative Control in Degraded Communication Environments
  • 1 Introduction
  • 2 Preliminaries
  • 2.1 Notation
  • 2.2 Graph Theory
  • 2.3 Impulsive Delayed Systems
  • 3 Problem Statement
  • 4 Method
  • 4.1 Interconnecting the Nominal and Error System
  • 4.2 MASs with Nontrivial Sets B
  • 4.3 Computing Transmission Intervals t
  • 5 Experimental Validation
  • 6 Conclusion
  • Acknowledgments
  • Appendix
  • 6.1 Proof of Theorem 1
  • 6.2 Proof of Corollary 2
  • References
  • Chapter Thirteen: Multiagent Layered Formation Control Based on Rigid Graph Theory
  • 1 Introduction
  • 2 Preliminaries
  • 3 Problem Formulation
  • 3.1 Nonplanar Multiagent Layered Formation Control With a Single-Integrator Model
  • 3.2 Nonplanar Multiagent Layered Formation Control With a Double-Integrator Model
  • 4 Control Algorithms
  • 4.1 Single-Integrator Model
  • 4.2 Double-Integrator Model
  • 5 Simulation Results
  • 5.1 Single-Integrator Model
  • 5.2 Double-Integrator Model
  • 6 Conclusion
  • References
  • Chapter Fourteen: Certainty Equivalence, Separation Principle, and Cooperative Output Regulation of Multiagent Systems by ...
  • 1 Introduction
  • 2 Linear Output Regulation
  • 3 Solvability of the Linear Output Regulation Problem
  • 4 Linear Multiagent Systems and a Distributed Observer
  • 5 Some Stability Results
  • 6 Solvability of the Cooperative Linear Output Regulation Problem
  • 7 Some Variants and Extensions
  • 7.1 Multiple Leaders and Containment Control
  • 7.2 Local Exogenous Signals Versus Global Exogenous Signals
  • 7.3 Synchronized Reference Generator and the Output Synchronization
  • 7.4 Discrete Distributed Observer
  • 7.5 Adaptive Distributed Observer
  • 8 Concluding Remarks
  • Acknowledgments
  • Appendix: Graph
  • References
  • Chapter Fifteen: Cooperative Learning for Robust Connectivity in Multirobot Heterogeneous Networks
  • 1 Introduction
  • 2 Heterogeneous Robotic Team
  • 2.1 Motion Dynamics
  • 3 Network Connectivity
  • 3.1 Received Signal Strength
  • 3.2 Antenna Diversity
  • 3.3 Connectivity Matrix
  • 4 Deployment Maintaining Network Connectivity
  • 5 Mobile Sensor Controller
  • 5.1 Potential Field Method
  • 6 Aerial Relay Controller
  • 6.1 Cooperative Q-Learning
  • 7 Simulation Results
  • 8 Conclusions
  • Acknowledgments
  • References
  • Chapter Sixteen: Discrete-Time Flocking of Wheeled Vehicles With a Large Communication Delay Through a Potential Function ...
  • 1 Introduction
  • 2 Problem Formulation
  • 3 Preliminaries
  • 3.1 Graph Theory
  • 3.2 Artificial Potential Functions
  • 4 Main Results
  • 5 Simulation
  • 5.1 Complete Graph
  • 5.2 Tree Graph
  • 6 Conclusions
  • References
  • Chapter Seventeen: Cooperative Control and Networked Operation of Passivity-Short Systems
  • 1 Introduction
  • 2 Passive Systems
  • 3 Passivity-Short Systems
  • 4 Input Feedforward Passive Systems
  • 5 Control Designs
  • 6 Interconnection of Passivity-Short Systems
  • 6.1 Parallel Connection
  • 6.2 Series Connection
  • 6.3 Positive Feedback Connection
  • 6.4 Negative Feedback Connections
  • 7 Cooperative Control of Passivity-Short Systems
  • 8 Conclusion
  • Acknowledgments
  • Appendix
  • References
  • Chapter Eighteen: Synchronizing Region Approach for Identical Linear Time-Invariant Agents
  • 1 Introduction
  • 2 Graph Properties and Notation
  • 3 System Dynamics and Control Goal
  • 3.1 Continuous-Time and Discrete-Time Cooperative Tracker
  • 3.2 Discrete-Time Cooperative Observer
  • 3.3 Static Distributed Output-Feedback Cooperative Tracker
  • 3.4 Multiagent Systems with Uniform Control Delays
  • 4 Sufficient Conditions for Synchronization
  • 4.1 Synchronizing Regions and Cooperative Stability Conditions
  • 4.2 Discrete-Time Cooperative Tracker Distributed Control Design
  • 4.3 Discrete-Time Cooperative Observer Design
  • 4.4 Continuous-Time Cooperative Tracker Distributed Static Output-Feedback Design
  • 4.5 Delayed Control Cooperative Tracker Design
  • 4.5.1 Asymptotic Delay-Dependent Cooperative Stability
  • 4.5.2 Exponential Delay-Dependent Cooperative Stability
  • 5 Robust Synchronizing Region for Disturbances and Uncertain Systems
  • 5.1 H∞-Synchronizing Regions
  • 5.2 Synchronizing Regions for Uncertain Agents
  • 6 Conclusions
  • References
  • Section 5: Applications
  • Chapter Nineteen: The Stereographic Product of Positive-Real Functions is Positive Real
  • 1 Introduction: The o Operation
  • 2 Circuits of Interest
  • 3 Positive-Real and Richards Functions
  • 4 Synthesis
  • 5 Examples
  • 6 Conclusions
  • References
  • Chapter Twenty: Collective Target Tracking Mean Field Control for Markovian Jump-Driven Models of Electric Water Heating Loads
  • 1 Introduction
  • 2 Background on Markovian Jump Linear-Quadratic Gaussian Mean Field Control (Following [1])
  • 3 Electric Water Heater Models
  • 4 Classical Markovian Jump Linear-Quadratic Tracking
  • 5 Collective Target Tracking Mean Field Model
  • 6 Fixed Point Analysis and ε-Nash Theorem
  • 6.1 ε-Nash Theorem
  • 7 Simulations
  • 8 Conclusion
  • Acknowledgments
  • Appendix
  • Proof of Theorem 2
  • References
  • Chapter Twenty-One: Trajectory Planning Based on Collocation Methods for Adaptive Motion Control of Multiple Aerial and Gr ...
  • 1 Introduction
  • 1.1 Trajectory Planning for Multiple Autonomous Vehicles
  • 2 Collocation Methods
  • 2.1 Numerical Analysis and Spectral Methods
  • 2.1.1 Pseudospectral Approximation
  • 2.2 Optimal Control
  • 2.2.1 General Formulation of the Optimal Control Problem
  • 2.3 Nonlinear Optimization
  • 2.4 Collocation Methods
  • 2.4.1 Pseudospectral Collocation Method
  • 2.4.2 Subinterval Pseudospectral Collocation Method
  • 3 Multivehicle Optimal Trajectory Planning With Collocation Methods
  • 3.1 Trajectory Planning Using Collocation Methods
  • 3.1.1 Multivehicle Problem
  • 3.1.2 Advantages and Drawbacks of Collocation Methods With Respect to Trajectory Planning
  • 3.2 S-Adaptive Pseudospectral Collocation Method
  • 3.2.1 Verification Process
  • 4 Applications and Validation of Collocation Methods in Multivehicle Trajectory Planning
  • 4.1 Teams of Multiple Rotary-Wing Unmanned Aerial Vehicles
  • 4.1.1 Collision Avoidance
  • 4.1.2 Results of the S-Adaptive PS Method
  • 4.1.3 Scalability of the S-Adaptive Method
  • 4.1.4 S-Adaptive Versus Legendre-Gauss-Lobatto Pseudospectral Method Comparison
  • 4.2 Experimental Validation on Unmanned Aerial Vehicles
  • 4.2.1 Collision Avoidance With the S-Adaptive Pseudospectral Method
  • 4.3 Teams of Multiple Unmanned Ground Vehicles
  • 4.3.1 Collision Avoidance Without Obstacles
  • 4.3.2 Results of the S-Adaptive PS Method
  • 4.3.3 Collision Avoidance With Fixed Obstacles
  • 4.3.4 Results of the S-Adaptive PS Method in Scenario S4
  • 4.3.5 Results of the S-Adaptive PS Method in Scenario S5
  • 4.4 Experimental Validation on Unmanned Ground Vehicles
  • 4.4.1 Collision Avoidance Without Fixed Obstacles
  • 4.4.2 Collision Avoidance With Fixed Obstacles
  • 4.4.3 Multiple Fixed Objects
  • Acknowledgments
  • References
  • Chapter Twenty-Two: Intelligent Control of a Prosthetic Ankle Joint Using Gait Recognition
  • 1 Introduction
  • 2 Approaches
  • 2.1 Modeling of the Prosthetic Ankle During Gait
  • 2.2 Recognition of the User's Intent
  • 2.3 Generation of the Gait-Based Kinematic References
  • 2.4 Compensation for the Ground Reaction Torque
  • 2.5 Learning-Based Control of the Prosthetic Ankle Joint
  • 3 Framework for Modeling and Control of the Prosthetic Ankle Joint During Gait
  • 3.1 Link-Segment Diagram and Dynamics of the Ankle Joint
  • 3.2 Framework for Control of the Prosthetic Ankle Joint
  • 3.2.1 Control Algorithm
  • Critic Network
  • Action Network
  • Assumption
  • Theorem
  • 3.2.2 Recognition of the User's Intent
  • 3.2.3 Generation of the Gait-Based Kinematic References
  • 3.2.4 Compensation of the Ground Reaction Torque
  • 4 Discussion and Outline of Stability and Performance Analysis
  • 4.1 Selection of the Performance Index
  • 4.2 Selection of the Control Goal
  • 4.3 Convergence of the Critic Network Output to the True Long-Term Performance Index
  • 4.4 Convergence of the Tracking Error ea
  • 4.5 Optimization of the Long-Term Performance Index L and Its Approximation J
  • 5 Simulation Results
  • 5.1 Ideal Condition
  • 5.2 Effect of Noise
  • 5.3 Conditions with Variations in Walking Speed
  • 6 Conclusions
  • References
  • Chapter Twenty-Three: Novel Robust Adaptive Algorithms for Estimation and Control: Theory and Practical Examples
  • 1 Introduction
  • 2 Review of Well-Known Adaptive Algorithms and Motivation for Increased Robustness
  • 2.1 Gradient Descent Adaptive Law
  • 2.2 Least Squares Adaptive Law
  • 2.3 Observer-Based Methods
  • 2.4 The Need for New Robust Adaptive Algorithms
  • 3 Novel Adaptive Algorithms
  • 4 Example of Parameter Estimation in Vehicular Systems
  • 4.1 Vehicle Longitudinal Dynamics
  • 4.2 Observer Design
  • 4.3 Experimental Results
  • 5 Example in Control of Robotic Systems
  • 5.1 Problem Formulation
  • 5.2 Adaptive Control Design
  • 5.3 Design of the Adaptive Law
  • 5.4 Results
  • 5.4.1 Simulation Results
  • 5.4.2 Experimental Results
  • 6 Example in Adaptive Optimal Control
  • 6.1 Problem Statement
  • 6.2 Development of the Adaptive Identifier
  • 6.3 Tracking Control Design
  • 6.4 Simulation Results
  • 7 Conclusions and Further Reading
  • Acknowledgments
  • References
  • Author Index
  • Subject Index
  • Back Cover

File format: EPUB
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; MacOS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many others (not Kindle)

The EPUB file format is well suited to novels and non-fiction books - that is, to "flowing" text without a complex layout. On e-readers and smartphones, line and page breaks adapt automatically to the small display. Adobe DRM applies "hard" copy protection here. If the necessary requirements are not met, you will unfortunately not be able to open the e-book, so prepare your reading hardware before downloading.

You can find further information in our e-book help.


File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; MacOS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many others (not Kindle)

The PDF file format displays each book page identically on any hardware, so PDFs also suit complex layouts like those used in textbooks and technical books (images, tables, columns, footnotes). On the small displays of e-readers and smartphones, PDFs are unfortunately rather awkward because they require a lot of scrolling. Adobe DRM applies "hard" copy protection here. If the necessary requirements are not met, you will unfortunately not be able to open the e-book, so prepare your reading hardware before downloading.

You can find further information in our e-book help.


Download (available immediately)

€170.17
incl. 19% VAT
Download / single license
ePUB with Adobe DRM
see system requirements
PDF with Adobe DRM
see system requirements
Note: You select your preferred file format and copy protection in the e-book provider's system.
