Handbook of Mixture Analysis

Routledge Cavendish (publisher)
  • Published 4 January 2019
  • 522 pages

E-book | PDF with Adobe DRM | System requirements
978-0-429-50824-0 (ISBN)
 

Mixture models have been around for over 150 years, and they are found in many branches of statistical modelling, as a versatile and multifaceted tool. They can be applied to a wide range of data: univariate or multivariate, continuous or categorical, cross-sectional, time series, networks, and much more. Mixture analysis is a very active research topic in statistics and machine learning, with new developments in methodology and applications taking place all the time.
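
For orientation, the basic object studied throughout the handbook (the "basic formulation" of its opening chapter) is the finite mixture density. In generic notation (the book's own symbols may differ), an observation $y$ is modelled as drawn from a weighted combination of $G$ component densities,

$$p(y \mid \theta) = \sum_{g=1}^{G} \eta_g \, f_g(y \mid \theta_g), \qquad \eta_g \ge 0, \quad \sum_{g=1}^{G} \eta_g = 1,$$

where the weights $\eta_g$ and the component parameters $\theta_g$ are inferred from data.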

The Handbook of Mixture Analysis is a very timely publication, presenting a broad overview of the methods and applications of this important field of research. It covers a wide array of topics, including the EM algorithm, Bayesian mixture models, model-based clustering, high-dimensional data, hidden Markov models, and applications in finance, genomics, and astronomy.
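
As a purely illustrative sketch, not taken from the book: the EM algorithm treated in Chapter 2 alternates between computing component membership probabilities (E step) and weighted maximum likelihood updates (M step). For a two-component univariate Gaussian mixture this fits in a few lines of Python; the simulated data and starting values below are hypothetical.

    # Minimal EM for a two-component univariate Gaussian mixture;
    # illustrative only, with simulated data and hypothetical start values.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    # Simulated sample: a 0.4/0.6 mixture of N(-2, 1) and N(3, 1.5^2).
    y = np.concatenate([rng.normal(-2.0, 1.0, 400),
                        rng.normal(3.0, 1.5, 600)])

    w = np.array([0.5, 0.5])      # mixture weights
    mu = np.array([-1.0, 1.0])    # component means
    sd = np.array([1.0, 1.0])     # component standard deviations

    for _ in range(200):
        # E step: posterior probability of each component for each point.
        dens = w * norm.pdf(y[:, None], mu, sd)          # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: weighted maximum likelihood updates.
        n_g = resp.sum(axis=0)
        w = n_g / len(y)
        mu = (resp * y[:, None]).sum(axis=0) / n_g
        sd = np.sqrt((resp * (y[:, None] - mu) ** 2).sum(axis=0) / n_g)

    print("weights:", w.round(2), "means:", mu.round(2), "sds:", sd.round(2))

The applications in the handbook go well beyond this toy case (multivariate, categorical, and Markov-dependent data, among others), but the E step/M step alternation shown here is the common core.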

Features:

    • Provides a comprehensive overview of the methods and applications of mixture modelling and analysis

    • Divided into three parts: Foundations and Methods; Mixture Modelling and Extensions; and Selected Applications

    • Contains many worked examples using real data, together with computational implementation, to illustrate the methods described

    • Includes contributions from the leading researchers in the field

    The Handbook of Mixture Analysis is targeted at graduate students and young researchers new to the field. It will also be an important reference for anyone working in mixture analysis, whether developing new methodology or applying the models to real scientific problems.

    • English
    • Milton, Great Britain
    • Taylor & Francis Ltd
    • For higher education and university study
    87 black-and-white illustrations, 85 black-and-white and 2 colour drawings, 34 black-and-white tables
    978-0-429-50824-0 (9780429508240)
    Sylvia Frühwirth-Schnatter is Professor of Applied Statistics and Econometrics at the Department of Finance, Accounting, and Statistics, Vienna University of Economics and Business, Austria. She has contributed to research in Bayesian modelling and MCMC inference for a broad range of models, including finite mixture and Markov switching models as well as state space models. She is particularly interested in applications of Bayesian inference in economics, finance, and business. She started working on finite mixture and Markov switching models 20 years ago and has published more than 20 articles in this area in leading journals such as JASA, JCGS, and the Journal of Applied Econometrics. Her monograph Finite Mixture and Markov Switching Models (2006) was awarded the Morris H. DeGroot Prize 2007 by ISBA. In 2014 she was elected a Member of the Austrian Academy of Sciences.


    Gilles Celeux is Director of Research Emeritus at Inria Saclay-Île-de-France, France. He has conducted research in statistical learning, model-based clustering, and model selection for more than 35 years and has led two Inria teams. His first paper on mixture modelling was written in 1981, and he has been one of the co-organisers of the summer working group on model-based clustering since 1994. He has published more than 40 papers in international statistics journals and has written two textbooks in French on classification. He was Editor-in-Chief of Statistics and Computing between 2006 and 2012 and has been Editor-in-Chief of the Journal of the French Statistical Society since 2012.


    Christian P. Robert is Professor of Mathematics at CEREMADE, Université Paris-Dauphine, PSL Research University, France, and Professor of Statistics at the Department of Statistics, University of Warwick, UK. He has conducted research in Bayesian inference and computational methods, covering Monte Carlo, MCMC, and ABC techniques, for more than 30 years, writing The Bayesian Choice (2001) and, with George Casella, Monte Carlo Statistical Methods (2004). His first paper on mixture modelling, written in 1989, concerned radiograph image modelling. His fruitful collaboration with Mike Titterington on this topic spans two enjoyable decades of visits to Glasgow, Scotland. He has organised three conferences on the subject of mixture inference, the last of which, held at ICMS, led to the book Mixtures: Estimation and Applications (2011), edited jointly with K. L. Mengersen and D. M. Titterington.
    • Cover
    • Half Title
    • Title Page
    • Copyright Page
    • Table of Contents
    • Preface
    • Editors
    • Contributors
    • List of Symbols
    • I: Foundations and Methods
    • 1: Introduction to Finite Mixtures
    • 1.1 Introduction and Motivation
    • 1.1.1 Basic formulation
    • 1.1.2 Likelihood
    • 1.1.3 Latent allocation variables
    • 1.1.4 A little history
    • 1.2 Generalizations
    • 1.2.1 Infinite mixtures
    • 1.2.2 Continuous mixtures
    • 1.2.3 Finite mixtures with nonparametric components
    • 1.2.4 Covariates and mixtures of experts
    • 1.2.5 Hidden Markov models
    • 1.2.6 Spatial mixtures
    • 1.3 Some Technical Concerns
    • 1.3.1 Identifiability
    • 1.3.2 Label switching
    • 1.4 Inference
    • 1.4.1 Frequentist inference, and the role of EM
    • 1.4.2 Bayesian inference, and the role of MCMC
    • 1.4.3 Variable number of components
    • 1.4.4 Modes versus components
    • 1.4.5 Clustering and classification
    • 1.5 Concluding Remarks
    • Bibliography
    • 2: EM Methods for Finite Mixtures
    • 2.1 Introduction
    • 2.2 The EM Algorithm
    • 2.2.1 Description of EM for finite mixtures
    • 2.2.2 EM as an alternating-maximization algorithm
    • 2.3 Convergence and Behavior of EM
    • 2.4 Cousin Algorithms of EM
    • 2.4.1 Stochastic versions of the EM algorithm
    • 2.4.2 The Classification EM algorithm
    • 2.5 Accelerating the EM Algorithm
    • 2.6 Initializing the EM Algorithm
    • 2.6.1 Random initialization
    • 2.6.2 Hierarchical initialization
    • 2.6.3 Recursive initialization
    • 2.7 Avoiding Spurious Local Maximizers
    • 2.8 Concluding Remarks
    • Bibliography
    • 3: An Expansive View of EM Algorithms
    • 3.1 Introduction
    • 3.2 The Product-of-Sums Formulation
    • 3.2.1 Iterative algorithms and the ascent property
    • 3.2.2 Creating a minorizing surrogate function
    • 3.3 Likelihood as a Product of Sums
    • 3.4 Non-standard Examples of EM Algorithms
    • 3.4.1 Modes of a density
    • 3.4.2 Gradient maxima
    • 3.4.3 Two-step EM
    • 3.5 Stopping Rules for EM Algorithms
    • 3.6 Concluding Remarks
    • Bibliography
    • 4: Bayesian Mixture Models: Theory and Methods
    • 4.1 Introduction
    • 4.2 Bayesian Mixtures: From Priors to Posteriors
    • 4.2.1 Models and representations
    • 4.2.2 Impact of the prior distribution
    • 4.2.2.1 Conjugate priors
    • 4.2.2.2 Improper and non-informative priors
    • 4.2.2.3 Data-dependent priors
    • 4.2.2.4 Priors for overfitted mixtures
    • 4.3 Asymptotic Properties of the Posterior Distribution in the Finite Case
    • 4.3.1 Posterior concentration around the marginal density
    • 4.3.2 Recovering the parameters in the well-behaved case
    • 4.3.3 Boundary parameters: overfitted mixtures
    • 4.3.4 Asymptotic behaviour of posterior estimates of the number of components
    • 4.4 Concluding Remarks
    • Bibliography
    • 5: Computational Solutions for Bayesian Inference in Mixture Models
    • 5.1 Introduction
    • 5.2 Algorithms for Posterior Sampling
    • 5.2.1 A computational problem? Which computational problem?
    • 5.2.2 Gibbs sampling
    • 5.2.3 Metropolis-Hastings schemes
    • 5.2.4 Reversible jump MCMC
    • 5.2.5 Sequential Monte Carlo
    • 5.2.6 Nested sampling
    • 5.3 Bayesian Inference in the Model-Based Clustering Context
    • 5.4 Simulation Studies
    • 5.4.1 Known number of components
    • 5.4.2 Unknown number of components
    • 5.5 Gibbs Sampling for High-Dimensional Mixtures
    • 5.5.1 Determinant coefficient of determination
    • 5.5.2 Simulation study using the determinant criterion
    • 5.6 Concluding Remarks
    • Bibliography
    • 6: Bayesian Nonparametric Mixture Models
    • 6.1 Introduction
    • 6.2 Dirichlet Process Mixtures
    • 6.2.1 The Dirichlet process prior
    • 6.2.2 Posterior simulation in Dirichlet process mixture models
    • 6.2.3 Dependent mixtures - the dependent Dirichlet process model
    • 6.3 Normalized Generalized Gamma Process Mixtures
    • 6.3.1 NRMI construction
    • 6.3.2 Posterior simulation for normalized generalized gamma process mixtures
    • 6.4 Bayesian Nonparametric Mixtures with Random Partitions
    • 6.4.1 Locally weighted mixtures
    • 6.4.2 Conditional regression
    • 6.5 Repulsive Mixtures (Determinantal Point Process)
    • 6.6 Concluding Remarks
    • Bibliography
    • 7: Model Selection for Mixture Models - Perspectives and Strategies
    • 7.1 Introduction
    • 7.2 Selecting G as a Density Estimation Problem
    • 7.2.1 Testing the order of a finite mixture through likelihood ratio tests
    • 7.2.2 Information criteria for order selection
    • 7.2.2.1 AIC and BIC
    • 7.2.2.2 The Slope Heuristics
    • 7.2.2.3 DIC
    • 7.2.2.4 The minimum message length
    • 7.2.3 Bayesian model choice based on marginal likelihoods
    • 7.2.3.1 Chib's method, limitations and extensions
    • 7.2.3.2 Sampling-based approximations
    • 7.3 Selecting G in the Framework of Model-Based Clustering
    • 7.3.1 Mixtures as partition models
    • 7.3.2 Classification-based information criteria
    • 7.3.2.1 The integrated complete-data likelihood criterion
    • 7.3.2.2 The conditional classification likelihood
    • 7.3.2.3 Exact derivation of the ICL
    • 7.3.3 Bayesian clustering
    • 7.3.4 Selecting G under model misspecification
    • 7.4 One-Sweep Methods for Cross-model Inference on G
    • 7.4.1 Overfitting mixtures
    • 7.4.2 Reversible jump MCMC
    • 7.4.3 Allocation sampling
    • 7.4.4 Bayesian nonparametric methods
    • 7.4.5 Sparse finite mixtures for model-based clustering
    • 7.5 Concluding Remarks
    • Bibliography
    • II: Mixture Modelling and Extensions
    • 8: Model-Based Clustering
    • 8.1 Introduction
    • 8.1.1 Heuristic clustering
    • 8.1.2 From k-means to Gaussian mixture modelling
    • 8.1.3 Specifying the clustering problem
    • 8.2 Specifying the Model
    • 8.2.1 Components corresponding to clusters
    • 8.2.2 Combining components into clusters
    • 8.2.3 Selecting the clustering base
    • 8.2.4 Selecting the number of clusters
    • 8.3 Post-processing the Fitted Model
    • 8.3.1 Identifying the model
    • 8.3.2 Determining a partition
    • 8.3.3 Characterizing clusters
    • 8.3.4 Validating clusters
    • 8.3.5 Visualizing cluster solutions
    • 8.4 Illustrative Applications
    • 8.4.1 Bioinformatics: Analysing gene expression data
    • 8.4.2 Marketing: Determining market segments
    • 8.4.3 Psychology and sociology: Revealing latent structures
    • 8.4.4 Economics and finance: Clustering time series
    • 8.4.5 Medicine and biostatistics: Unobserved heterogeneity
    • 8.5 Concluding Remarks
    • Bibliography
    • 9: Mixture Modelling of Discrete Data
    • 9.1 Introduction
    • 9.2 Mixtures of Univariate Count Data
    • 9.2.1 Introduction
    • 9.2.2 Finite mixtures of Poisson and related distributions
    • 9.2.3 Zero-inflated models
    • 9.3 Extensions
    • 9.3.1 Mixtures of time series count data
    • 9.3.2 Hidden Markov models
    • 9.3.3 Mixture of regression models for discrete data
    • 9.3.4 Other models
    • 9.4 Mixtures of Multivariate Count Data
    • 9.4.1 Some models for multivariate counts
    • 9.4.1.1 Multivariate reduction approach
    • 9.4.1.2 Copulas approach
    • 9.4.1.3 Other approaches
    • 9.4.2 Finite mixture for multivariate counts
    • 9.4.2.1 Conditional independence
    • 9.4.2.2 Conditional dependence
    • 9.4.2.3 Finite mixtures of multivariate Poisson distributions
    • 9.4.3 Zero-inflated multivariate models
    • 9.4.4 Copula-based models
    • 9.4.5 Finite mixture of bivariate Poisson regression models
    • 9.5 Other Mixtures for Discrete Data
    • 9.5.1 Latent class models
    • 9.5.2 Mixtures for ranking data
    • 9.5.3 Mixtures of multinomial distributions
    • 9.5.4 Mixtures of Markov chains
    • 9.6 Concluding Remarks
    • Bibliography
    • 10: Continuous Mixtures with Skewness and Heavy Tails
    • 10.1 Introduction
    • 10.2 Skew-t Mixtures
    • 10.3 Prior Formulation
    • 10.4 Model Fitting
    • 10.5 Examples
    • 10.5.1 Simulation study
    • 10.5.2 Experimental data
    • 10.6 Concluding Remarks
    • Bibliography
    • 11: Mixture Modelling of High-Dimensional Data
    • 11.1 Introduction
    • 11.2 High-Dimensional Data
    • 11.2.1 Continuous data: Italian wine
    • 11.2.2 Categorical data: lower back pain
    • 11.2.3 Mixed data: prostate cancer
    • 11.3 Mixtures for High-Dimensional Data
    • 11.3.1 Curse of dimensionality/modeling issues
    • 11.3.2 Data reduction
    • 11.4 Mixtures for Continuous Data
    • 11.4.1 Diagonal covariance
    • 11.4.2 Eigendecomposed covariance
    • 11.4.3 Mixtures of factor analyzers and probabilistic principal components analyzers
    • 11.4.4 High-dimensional models
    • 11.4.5 Sparse models
    • 11.5 Mixtures for Categorical Data
    • 11.5.1 Local independence models and latent class analysis
    • 11.5.2 Other models
    • 11.6 Mixtures for Mixed Data
    • 11.7 Variable Selection
    • 11.7.1 Wrapper-based methods
    • 11.7.2 Stepwise approaches for continuous data
    • 11.7.3 Stepwise approaches for categorical data
    • 11.8 Examples
    • 11.8.1 Continuous data: Italian wine
    • 11.8.2 Categorical data: lower back pain
    • 11.8.3 Mixed data: prostate cancer
    • 11.9 Concluding Remarks
    • Bibliography
    • 12: Mixture of Experts Models
    • 12.1 Introduction
    • 12.2 The Mixture of Experts Framework
    • 12.2.1 A mixture of experts model
    • 12.2.2 An illustration
    • 12.2.3 The suite of mixture of experts models
    • 12.3 Statistical Inference for Mixture of Experts Models
    • 12.3.1 Maximum likelihood estimation
    • 12.3.2 Bayesian estimation
    • 12.3.3 Model selection
    • 12.4 Illustrative Applications
    • 12.4.1 Analysing marijuana use through mixture of experts Markov chain models
    • 12.4.2 A mixture of experts model for ranked preference data
    • 12.4.3 A mixture of experts latent position cluster model
    • 12.4.4 Software
    • 12.5 Identifiability of Mixture of Experts Models
    • 12.5.1 Identifiability of binomial mixtures
    • 12.5.2 Identifiability for mixtures of regression models
    • 12.5.3 Identifiability for mixture of experts models
    • 12.6 Concluding Remarks
    • Bibliography
    • 13: Hidden Markov Models in Time Series, with Applications in Economics
    • 13.1 Introduction
    • 13.2 Regime Switching: Mixture Modelling over Time
    • 13.2.1 Preliminaries and model specification
    • 13.2.2 The functional form of state transition
    • 13.2.2.1 Time-invariant switching
    • 13.2.2.2 Time-varying switching
    • 13.2.2.3 Nested alternatives
    • 13.2.3 Generalizations
    • 13.2.4 Some considerations on parameterization
    • 13.2.5 Stability conditions: combining stable and unstable processes
    • 13.3 Estimation
    • 13.3.1 The complete-data likelihood and the FFBS algorithm
    • 13.3.2 Maximum likelihood estimation
    • 13.3.3 Bayesian estimation
    • 13.3.3.1 Prior specifications for the transition distribution
    • 13.3.3.2 Posterior inference
    • 13.3.4 Sampler efficiency: logit versus probit
    • 13.3.5 Posterior state identification
    • 13.4 Informative Regime Switching in Applications
    • 13.4.1 Time-invariant switching
    • 13.4.1.1 Unconditional switching
    • 13.4.1.2 Structured Markov switching
    • 13.4.2 Time-varying switching
    • 13.4.2.1 Duration dependence and state-identifying restrictions
    • 13.4.2.2 Shape restrictions
    • 13.5 Concluding Remarks
    • Bibliography
    • 14: Mixtures of Nonparametric Components and Hidden Markov Models
    • 14.1 Introduction
    • 14.2 Mixtures with One Known Component
    • 14.2.1 The case where the other component is symmetric
    • 14.2.2 Mixture of a uniform and a non-decreasing density
    • 14.3 Translation Mixtures
    • 14.3.1 Translation of a symmetric density
    • 14.3.2 Translation of any distribution and hidden Markov models
    • 14.4 Multivariate Mixtures
    • 14.4.1 Identifiability
    • 14.4.2 Estimation with spectral methods
    • 14.4.3 Estimation with nonparametric methods
    • 14.4.4 Hidden Markov models
    • 14.5 Related Questions
    • 14.5.1 Clustering
    • 14.5.2 Order estimation
    • 14.5.3 Semi-parametric estimation
    • 14.5.4 Regressions with random (observed or non-observed) design
    • 14.6 Concluding Remarks
    • Bibliography
    • III: Selected Applications
    • 15: Applications in Industry
    • 15.1 Introduction
    • 15.2 Mixtures for Monitoring
    • 15.3 Health Resource Usage
    • 15.3.1 Assessing the effectiveness of a measles vaccination
    • 15.3.2 Spatio-temporal disease mapping: identifying unstable trends in congenital malformations
    • 15.4 Pest Surveillance
    • 15.4.1 Data and models
    • 15.4.2 Resulting clusters
    • 15.5 Toxic Spills
    • 15.5.1 Data and model
    • 15.5.2 Posterior sampling and summaries
    • 15.6 Concluding Remarks
    • Bibliography
    • 16: Mixture Models for Image Analysis
    • 16.1 Introduction
    • 16.2 Hidden Markov Model Based Clustering
    • 16.2.1 Mixture models
    • 16.2.2 Markov random fields: Potts model and extensions
    • 16.2.3 Hidden Markov field with independent noise
    • 16.3 Markov Model Based Segmentation via Variational EM
    • 16.3.1 Links with the iterated conditional mode and the Gibbs sampler
    • 16.4 Illustration: MRI Brain Scan Segmentation
    • 16.4.1 Healthy brain tissue and structure segmentation
    • 16.4.1.1 A Markov random field approach to segmentation and registration
    • 16.4.1.2 Experiments: Joint tissue and structure segmentation
    • 16.4.2 Brain tumor detection from multiple MR sequences
    • 16.4.2.1 Tissue interaction modelling
    • 16.4.2.2 Experiments: Lesion segmentation
    • 16.5 Concluding Remarks
    • Bibliography
    • 17: Applications in Finance
    • 17.1 Introduction
    • 17.2 Finite Mixture Models
    • 17.2.1 i.i.d. mixture models with volatility dynamics
    • 17.2.2 Markov switching models
    • 17.2.3 Markov switching volatility models
    • 17.2.4 Jumps
    • 17.3 Infinite Mixture Models
    • 17.3.1 Dirichlet process mixture model
    • 17.3.2 GARCH-DPM and SV-DPM
    • 17.3.3 Infinite hidden Markov model
    • 17.4 Concluding Remarks
    • Bibliography
    • 18: Applications in Genomics
    • 18.1 Introduction
    • 18.2 Mixture Models in Transcriptome and Genome Analysis
    • 18.2.1 Analyzing the genetic structure of a population
    • 18.2.2 Finding sets of co-transcribed genes
    • 18.2.3 Variable selection for clustering with Gaussian mixture models
    • 18.2.4 Mixture models in the specific case of multiple testing
    • 18.3 Hidden Markov Models in Genomics: Some Specificities
    • 18.3.1 A typical case: Copy number variations
    • 18.3.2 Complex emission distributions
    • 18.3.3 Complex hidden states
    • 18.3.4 Non-standard hidden Markov structures
    • 18.4 Complex Dependency Structures
    • 18.4.1 Markov random fields
    • 18.4.2 Stochastic block model
    • 18.4.3 Inference issues
    • 18.5 Concluding Remarks
    • Bibliography
    • 19: Applications in Astronomy
    • 19.1 Introduction
    • 19.2 Clusters of Stars and Galaxies
    • 19.2.1 Galaxy clusters
    • 19.2.2 Young star clusters
    • 19.2.2.1 Star-cluster models
    • 19.2.2.2 Model fitting and validation
    • 19.2.2.3 Results from the mixture model approach
    • 19.3 Classification of Astronomical Objects
    • 19.3.1 Tests for multiple components
    • 19.3.2 Two or three classes of gamma-ray bursts?
    • 19.3.3 Removal of contaminants
    • 19.3.4 Red and blue galaxies
    • 19.4 Advanced Mixture Model Applications
    • 19.4.1 Regression with heteroscedastic uncertainties
    • 19.4.2 Deconvolution of distributions from data with heteroscedastic errors and missing information
    • 19.5 Concluding Remarks
    • Bibliography
    • Index

    File format: PDF
    Copy protection: Adobe DRM (Digital Rights Management)

    System requirements:

    Computer (Windows; macOS; Linux): install the free Adobe Digital Editions software before downloading (see the e-book help pages).

    Tablet/smartphone (Android; iOS): install the free Adobe Digital Editions app before downloading (see the e-book help pages).

    E-book readers: Bookeen, Kobo, PocketBook, Sony, Tolino and many others (not Kindle).

    The PDF format renders each book page identically on any hardware, which makes it well suited to complex layouts such as those of textbooks and technical books (images, tables, columns, footnotes). On the small displays of e-readers and smartphones, PDFs can be tedious to read because of the scrolling required. Adobe DRM applies "hard" copy protection: if the necessary prerequisites are not met, the e-book cannot be opened, so prepare your reading hardware before downloading.

    When using the Adobe Digital Editions reading software, we strongly recommend authorizing it with your personal Adobe ID immediately after installation.

    Further information can be found in our e-book help pages.


    Download (available immediately)

    47.99 €
    incl. 19% VAT
    Download / single licence
    Order e-book