Big Data

Principles and Paradigms
 
 
Morgan Kaufmann (publisher)
  • 1st edition
  • published June 7, 2016
  • 494 pages
 
E-book | EPUB with Adobe DRM | system requirements
E-book | PDF with Adobe DRM | system requirements
978-0-12-809346-7 (ISBN)
 

Big Data: Principles and Paradigms captures the state-of-the-art research on the architectural aspects, technologies, and applications of Big Data. The book identifies potential future directions and technologies that facilitate insight into numerous scientific, business, and consumer applications.

To help realize Big Data's full potential, the book addresses numerous challenges and offers conceptual and technological solutions for tackling them. These challenges include life-cycle data management, large-scale storage, flexible processing infrastructure, data modeling, scalable machine learning, data analysis algorithms, sampling techniques, and privacy and ethical issues.


  • Covers computational platforms supporting Big Data applications
  • Addresses key principles underlying Big Data computing
  • Examines key developments supporting next-generation Big Data platforms
  • Explores the challenges in Big Data computing and ways to overcome them
  • Features contributions from experts in both academia and industry
  • Language: English
  • Saint Louis, USA
  • Elsevier Science
  • File size: 27.24 MB
ISBN-13: 978-0-12-809346-7
ISBN-10: 0128093463
  • Front Cover
  • Big Data: Principles and Paradigms
  • Copyright
  • Contents
  • List of contributors
  • About the Editors
  • Preface
  • Organization of the Book
  • Part I: Big Data Science
  • Part II: Big Data Infrastructures and Platforms
  • Part III: Big Data Security and Privacy
  • Part IV: Big Data Applications
  • Acknowledgments
  • Part I: Big Data Science
  • Chapter 1: Big Data Analytics = Machine Learning + Cloud Computing
  • 1.1 Introduction
  • 1.2 A Historical Review of Big Data
  • 1.2.1 The Origin of Big Data
  • 1.2.2 Debates of Big Data Implication
  • Pros
  • Cons
  • 1.3 Historical Interpretation of Big Data
  • 1.3.1 Methodology for Defining Big Data
  • 1.3.2 Different Attributes of Definitions
  • Gartner - 3Vs definition
  • IBM - 4Vs definition
  • Microsoft - 6Vs definition
  • More Vs for big data
  • 1.3.3 Summary of 7 Types of Definitions of Big Data
  • 1.3.4 Motivations Behind the Definitions
  • 1.4 Defining Big Data From 3Vs to 32Vs
  • 1.4.1 Data Domain
  • 1.4.2 Business Intelligence (BI) Domain
  • 1.4.3 Statistics Domain
  • 1.4.4 32 Vs Definition and Big Data Venn Diagram
  • 1.5 Big Data Analytics and Machine Learning
  • 1.5.1 Big Data Analytics
  • 1.5.2 Machine Learning
  • 1.6 Big Data Analytics and Cloud Computing
  • 1.7 Hadoop, HDFS, MapReduce, Spark, and Flink
  • 1.7.1 Google File System (GFS) and HDFS
  • 1.7.2 MapReduce
  • 1.7.3 The Origin of the Hadoop Project
  • Lucene
  • Nutch
  • Mahout
  • 1.7.4 Spark and Spark Stack
  • 1.7.5 Flink and Other Data Processing Engines
  • 1.7.6 Summary of Hadoop and Its Ecosystems
  • Hadoop key functions
  • Hadoop's distinguishing features
  • 1.8 ML + CC = BDA and Guidelines
  • 1.9 Conclusion
  • References
  • Chapter 2: Real-Time Analytics
  • 2.1 Introduction
  • 2.2 Computing Abstractions for Real-Time Analytics
  • 2.3 Characteristics of Real-Time Systems
  • 2.3.1 Low Latency
  • 2.3.2 High Availability
  • 2.3.3 Horizontal Scalability
  • 2.4 Real-Time Processing for Big Data - Concepts and Platforms
  • 2.4.1 Event
  • 2.4.2 Event Processing
  • 2.4.3 Event Stream Processing and Data Stream Processing
  • 2.4.4 Complex Event Processing
  • 2.4.5 Event Type
  • 2.4.6 Event Pattern
  • 2.5 Data Stream Processing Platforms
  • 2.5.1 Spark
  • 2.5.2 Storm
  • 2.5.3 Kafka
  • 2.5.4 Flume
  • 2.5.5 Amazon Kinesis
  • 2.6 Data Stream Analytics Platforms
  • 2.6.1 Query-Based EPSs
  • 2.6.2 Rule-Oriented EPSs
  • Production rules
  • Event-condition-action rules
  • 2.6.3 Programmatic EPSs
  • 2.7 Data Analysis and Analytic Techniques
  • 2.7.1 Data Analysis in General
  • 2.7.2 Data Analysis for Stream Applications
  • 2.8 Finance Domain Requirements and a Case Study
  • 2.8.1 Real-Time Analytics in Finance Domain
  • 2.8.2 Selected Scenarios
  • 2.8.3 CEP Application as a Case Study
  • 2.9 Future Research Challenges
  • References
  • Chapter 3: Big Data Analytics for Social Media
  • 3.1 Introduction
  • 3.2 NLP and Its Applications
  • 3.2.1 Language Detection
  • Alphabet-based LD
  • Dictionary-based LD
  • Byte n-gram-based LD
  • User language profile
  • Combined system
  • 3.2.2 Named Entity Recognition
  • NER pipeline
  • Statistical NLP methods
  • Why CRF?
  • Features for NER
  • Tags and evaluation
  • Applications
  • Recent trends in NER
  • 3.3 Text Mining
  • 3.3.1 Sentiment Analysis
  • Lexicon-based approach
  • Rule-based approaches
  • Statistical methods
  • Domain adaptation
  • 3.3.2 Trending Topics
  • Trending topic detection
  • Online clustering
  • Extract n-grams
  • Jaccard similarity
  • Ranking clusters
  • 3.3.3 Recommender Systems
  • Types of recommender systems
  • Social recommender systems
  • Recommender systems datasets
  • Evaluation metrics for recommender systems
  • Ranking accuracy
  • NLP in recommender systems
  • 3.4 Anomaly Detection
  • Applications to text streams
  • Acknowledgments
  • References
  • Chapter 4: Deep Learning and Its Parallelization
  • 4.1 Introduction
  • 4.1.1 Application Background
  • 4.1.2 Performance Demands for Deep Learning
  • 4.1.3 Existing Parallel Frameworks of Deep Learning
  • 4.2 Concepts and Categories of Deep Learning
  • 4.2.1 Deep Learning
  • Artificial neural networks
  • Concept of deep learning
  • 4.2.2 Mainstream Deep Learning Models
  • Autoencoders
  • Backpropagation
  • Convolutional neural network
  • Architecture overview of CNNs
  • Input layer
  • Convolutional layer
  • Pooling layer
  • Full connection layer
  • 4.3 Parallel Optimization for Deep Learning
  • 4.3.1 Convolutional Architecture for Fast Feature Embedding
  • CUDA programming
  • The architecture of GPU
  • CUDA programming framework
  • Architecture of Caffe
  • Data storage in Caffe
  • Layer topology in Caffe
  • Parallel implementation of convolution in Caffe
  • 4.3.2 DistBelief
  • Introduction of DistBelief
  • Downpour SGD
  • Sandblaster L-BFGS
  • 4.3.3 Deep Learning Based on Multi-GPUs
  • Data parallelism
  • Model parallelism
  • Data-model parallelism
  • Example system of multi-GPUs
  • 4.4 Discussions
  • 4.4.1 Grand Challenges of Deep Learning in Big Data
  • Massive amounts of training samples
  • Incremental streaming data
  • Learning speed in Big Data
  • Scalability of deep models
  • 4.4.2 Future Directions
  • References
  • Chapter 5: Characterization and Traversal of Large Real-World Networks
  • 5.1 Introduction
  • 5.2 Background
  • 5.3 Characterization and Measurement
  • 5.4 Efficient Complex Network Traversal
  • 5.4.1 HPC Traversal of Large Networks
  • 5.4.2 Algorithms for Accelerating AS-BFS on GPU
  • 5.4.3 Performance Study of AS-BFS on GPUs
  • 5.5 k-Core-Based Partitioning for Heterogeneous Graph Processing
  • 5.5.1 Graph Partitioning for Heterogeneous Computing
  • 5.5.2 k-Core-Based Complex-Network Unbalanced Bisection
  • 5.6 Future Directions
  • 5.7 Conclusions
  • Acknowledgments
  • References
  • Part II: Big Data Infrastructures and Platforms
  • Chapter 6: Database Techniques for Big Data
  • 6.1 Introduction
  • 6.2 Background
  • 6.2.1 Navigational Data Models
  • 6.2.2 Relational Data Models
  • 6.3 NoSQL Movement
  • 6.4 NoSQL Solutions for Big Data Management
  • 6.5 NoSQL Data Models
  • 6.5.1 Key-Value Stores
  • 6.5.2 Column-Based Stores
  • 6.5.3 Graph-Based Stores
  • 6.5.4 Document-Based Stores
  • 6.6 Future Directions
  • 6.7 Conclusions
  • References
  • Chapter 7: Resource Management in Big Data Processing Systems
  • 7.1 Introduction
  • 7.2 Types of Resource Management
  • 7.2.1 CPU and Memory Resource Management
  • 7.2.2 Storage Resource Management
  • 7.2.3 Network Resource Management
  • 7.3 Big Data Processing Systems and Platforms
  • 7.3.1 Hadoop
  • 7.3.2 Dryad
  • 7.3.3 Pregel
  • 7.3.4 Storm
  • 7.3.5 Spark
  • 7.3.6 Summary
  • 7.4 Single-Resource Management in the Cloud
  • 7.4.1 Desired Resource Allocation Properties
  • 7.4.2 Problems for Existing Fairness Policies
  • Trivial workload problem
  • Strategy-proofness problem
  • Resource-as-you-pay unfairness problem
  • 7.4.3 Long-Term Resource Allocation Policy
  • Motivation example
  • LTRF scheduling algorithm
  • 7.4.4 Experimental Evaluation
  • 7.5 Multiresource Management in the Cloud
  • 7.5.1 Resource Allocation Model
  • 7.5.2 Multiresource Fair Sharing Issues
  • 7.5.3 Reciprocal Resource Fairness
  • Intertenant resource trading
  • Intratenant weight adjustment
  • 7.5.4 Experimental Evaluation
  • Results on fairness
  • Improvement of application performance
  • 7.6 Related Work on Resource Management
  • 7.6.1 Resource Utilization Optimization
  • 7.6.2 Power and Energy Cost Saving Optimization
  • 7.6.3 Monetary Cost Optimization
  • 7.6.4 Fairness Optimization
  • 7.7 Open Problems
  • 7.7.1 SLA Guarantee for Applications
  • 7.7.2 Various Computation Models and Systems
  • 7.7.3 Exploiting Emerging Hardware
  • 7.8 Summary
  • References
  • Chapter 8: Local Resource Consumption Shaping: A Case for MapReduce
  • 8.1 Introduction
  • 8.2 Motivation
  • 8.2.1 Pitfalls of Fair Resource Sharing
  • 8.3 Local Resource Shaper
  • 8.3.1 Design Philosophy
  • 8.3.2 Splitter
  • 8.3.3 The Interleave MapReduce Scheduler
  • Task slot
  • Slot manager
  • Task dispatcher
  • 8.4 Evaluation
  • 8.4.1 Experiments With Hadoop 1.x
  • Results: shaping resources using splitter
  • Results: incorporating splitter into existing MapReduce schedulers
  • Results: performance of interleave
  • Results: improving the performance of slot configurations
  • Results: when running the Facebook workload model
  • 8.4.2 Experiments With Hadoop 2.x
  • Implementation in Hadoop YARN
  • Results
  • 8.5 Related Work
  • 8.6 Conclusions
  • Appendix: CPU Utilization With Different Slot Configurations and LRS
  • References
  • Chapter 9: System Optimization for Big Data Processing
  • 9.1 Introduction
  • 9.2 Basic Framework of the Hadoop Ecosystem
  • 9.3 Parallel Computation Framework: MapReduce
  • 9.3.1 Improvements of MapReduce Framework
  • 9.3.2 Optimization for Task Scheduling and Load Balancing of MapReduce
  • 9.4 Job Scheduling of Hadoop
  • 9.4.1 Built-in Scheduling Algorithms of Hadoop
  • 9.4.2 Improvement of the Hadoop Job Scheduling Algorithm
  • 9.4.3 Improvement of the Hadoop Job Management Framework
  • 9.5 Performance Optimization of HDFS
  • 9.5.1 Small File Performance Optimization
  • 9.5.2 HDFS Security Optimization
  • 9.6 Performance Optimization of HBase
  • 9.6.1 HBase Framework, Storage, and Application Optimization
  • 9.6.2 Load Balancing of HBase
  • 9.6.3 Optimization of HBase Configuration
  • 9.7 Performance Enhancement of Hadoop System
  • 9.7.1 Efficiency Optimization of Hadoop
  • 9.7.2 Availability Optimization of Hadoop
  • 9.8 Conclusions and Future Directions
  • References
  • Chapter 10: Packing Algorithms for Big Data Replay on Multicore
  • 10.1 Introduction
  • 10.2 Performance Bottlenecks
  • 10.2.1 Hadoop/MapReduce Performance Bottlenecks
  • 10.2.2 Performance Bottlenecks Under Parallel Loads
  • 10.2.3 Parameter Spaces for Storage and Shared Memory
  • 10.2.4 Main Storage Performance
  • 10.2.5 Shared Memory Performance
  • 10.3 The Big Data Replay Method
  • 10.3.1 The Replay Method
  • 10.3.2 Jobs as Sketches on a Timeline
  • 10.3.3 Performance Bottlenecks Under Replay
  • 10.4 Packing Algorithms
  • 10.4.1 Shared Memory Performance Tricks
  • 10.4.2 Big Data Replay at Scale
  • 10.4.3 Practical Packing Models
  • 10.5 Performance Analysis
  • 10.5.1 Hotspot Distributions
  • 10.5.2 Modeling Methodology
  • 10.5.3 Processing Overhead Versus Bottlenecks
  • 10.5.4 Control Grain for Drop Versus Drag Models
  • 10.6 Summary and Future Directions
  • References
  • Part III: Big Data Security and Privacy
  • Chapter 11: Spatial Privacy Challenges in Social Networks
  • 11.1 Introduction
  • 11.2 Background
  • 11.3 Spatial Aspects of Social Networks
  • 11.4 Cloud-Based Big Data Infrastructure
  • 11.5 Spatial Privacy Case Studies
  • 11.6 Conclusions
  • Acknowledgments
  • References
  • Chapter 12: Security and Privacy in Big Data
  • 12.1 Introduction
  • 12.2 Secure Queries Over Encrypted Big Data
  • 12.2.1 System Model
  • 12.2.2 Threat Model and Attack Model
  • 12.2.3 Secure Query Scheme in Clouds
  • 12.2.4 Security Definition of Index-Based Secure Query Techniques
  • 12.2.5 Implementations of Index-Based Secure Query Techniques
  • An efficient single-keyword secure query scheme
  • An efficient multikeyword secure query scheme
  • 12.3 Other Big Data Security
  • 12.3.1 Digital Watermarking
  • 12.3.2 Self-Adaptive Risk Access Control
  • 12.4 Privacy on Correlated Big Data
  • 12.4.1 Correlated Data in Big Data
  • 12.4.2 Anonymity
  • 12.4.3 Differential Privacy
  • Definitions of differential privacy
  • Differential privacy optimization
  • Key approaches for differential privacy
  • The PINQ framework
  • Differential privacy for correlated data publication
  • 12.5 Future Directions
  • 12.6 Conclusions
  • References
  • Chapter 13: Location Inferring in Internet of Things and Big Data
  • 13.1 Introduction
  • 13.2 Device-based Sensing Using Big Data
  • 13.2.1 Introduction
  • 13.2.2 Approach Overview
  • 13.2.3 Trajectories Matching
  • Directional shadowing problem
  • Fingerprints extraction
  • Fingerprints transition graph
  • 13.2.4 Establishing the Mapping Between Floor Plan and RSS Readings
  • Floor plan in manifold's eyes
  • Unsupervised mapping
  • Skeleton graph extraction
  • Graphs normalization
  • Skeletons matching
  • Corridor points matching
  • Rooms points matching
  • 13.2.5 User Localization
  • 13.2.6 Graph Matching Based Tracking
  • 13.2.7 Evaluation
  • 13.3 Device-free Sensing Using Big Data
  • 13.3.1 Customer Behavior Identification
  • Doppler-based popular item discovery
  • Adaptive Doppler peaks detection
  • Location-based explicit correlation discovery
  • Antenna movement model
  • Integration of multi-RSS
  • Movement pattern-based implicit correlation discovery
  • Problem formulation
  • Segment-based interpolation
  • Iterative clustering algorithm with cosine similarity
  • 13.3.2 Human Object Estimation
  • Data preprocessing
  • Feature extraction
  • Machine learning-based estimation
  • 13.4 Conclusion
  • Acknowledgements
  • References
  • Part IV: Big Data Applications
  • Chapter 14: A Framework for Mining Thai Public Opinions
  • 14.1 Introduction
  • 14.2 XDOM
  • 14.2.1 Data Sources
  • 14.2.2 DOM System Architecture
  • 14.2.3 MapReduce Framework
  • 14.2.4 Sentiment Analysis
  • 14.2.5 Clustering-based Summarization Framework
  • 14.2.6 Influencer Analysis
  • 14.2.7 AskDOM: Mobile Application
  • 14.3 Implementation
  • 14.3.1 Server
  • 14.3.2 Core Service
  • 14.3.3 I/O
  • 14.4 Validation
  • 14.4.1 Validation Parameter
  • 14.4.2 Validation Method
  • 14.4.3 Validation Results
  • 14.5 Case Studies
  • 14.5.1 Political Opinion: #prayforthailand
  • 14.5.2 Bangkok Traffic Congestion Ranking
  • 14.6 Summary and Conclusions
  • Acknowledgments
  • References
  • Chapter 15: A Case Study in Big Data Analytics: Exploring Twitter Sentiment Analysis and the Weather
  • 15.1 Background
  • 15.2 Big Data System Components
  • 15.2.1 System Back-End Architecture
  • 15.2.2 System Front-End Architecture
  • 15.2.3 Software Stack
  • 15.3 Machine-Learning Methodology
  • 15.3.1 Tweets Sentiment Analysis
  • Naïve Bayes as a baseline
  • Tweet preprocessing
  • Training set
  • Feature engineering
  • Twitter sentiment score feature
  • Color degree feature
  • Smile detection feature
  • Classifier models
  • Support vector machine
  • Random forest
  • Logistic regression
  • Stacking
  • 15.3.2 Weather and Emotion Correlation Analysis
  • Time series
  • Cluster
  • DBSCAN definition
  • Manifold algorithm
  • Cluster evaluation metrics
  • 15.4 System Implementation
  • 15.4.1 Home Page
  • 15.4.2 Sentiment Pages
  • 15.4.3 Weather Pages
  • 15.5 Key Findings
  • 15.5.1 Time Series
  • 15.5.2 Analysis with Hourly Weather Data
  • 15.5.3 Analysis with Daily Weather Data
  • 15.5.4 DBSCAN Cluster Algorithm
  • Dimensional reduction analysis
  • Pair variable analysis
  • 15.5.5 Straightforward Weather Impact on Emotion
  • Higher temperatures make people happier on the Melbourne coast
  • People prefer windy days in Melbourne
  • 15.6 Summary and Conclusions
  • Acknowledgments
  • References
  • Chapter 16: Dynamic Uncertainty-Based Analytics for Caching Performance Improvements in Mobile Broadband Wireless Networks
  • 16.1 Introduction
  • 16.1.1 Big Data Concerns
  • 16.1.2 Key Focus Areas
  • 16.2 Background
  • 16.2.1 Cellular Network and VoD
  • 16.2.2 Markov Processes
  • 16.3 Related Work
  • 16.4 VoD Architecture
  • 16.5 Overview
  • 16.6 Data Generation
  • 16.7 Edge and Core Components
  • 16.8 INCA Caching Algorithm
  • 16.9 QoE Estimation
  • 16.10 Theoretical Framework
  • 16.11 Experiments and Results
  • 16.11.1 Cache Hits With NU, NC, NM and k
  • Variation of cache hits with NC
  • Variation of cache hits with NU
  • Variation of cache hits with NM
  • Variation of cache hits with k
  • INCA versus online algorithm
  • 16.11.2 QoE Impact With Prefetch Bandwidth
  • 16.11.3 User Satisfaction With Prefetch Bandwidth
  • 16.12 Synthetic Dataset
  • 16.12.1 INCA Hit Gain
  • 16.12.2 QoE Performance
  • 16.12.3 Satisfied Users
  • 16.13 Conclusions and Future Directions
  • References
  • Chapter 17: Big Data Analytics on a Smart Grid: Mining PMU Data for Event and Anomaly Detection
  • 17.1 Introduction
  • 17.2 Smart Grid With PMUs and PDCs
  • 17.3 Improving Traditional Workflow
  • 17.4 Characterizing Normal Operation
  • 17.5 Identifying Unusual Phenomena
  • 17.6 Identifying Known Events
  • 17.7 Related Efforts
  • 17.8 Conclusion and Future Directions
  • Acknowledgments
  • References
  • Chapter 18: eScience and Big Data Workflows in Clouds: A Taxonomy and Survey
  • 18.1 Introduction
  • 18.2 Background
  • 18.2.1 History
  • 18.2.2 Grid-Based eScience
  • 18.2.3 Cloud Computing
  • 18.3 Taxonomy and Review of eScience Services in the Cloud
  • 18.3.1 Infrastructure
  • 18.3.2 Ownership
  • 18.3.3 Application
  • 18.3.4 Processing Tools
  • 18.3.5 Storage
  • 18.3.6 Security
  • 18.3.7 Service Models
  • 18.3.8 Collaboration
  • 18.4 Resource Provisioning for eScience Workflows in Clouds
  • 18.4.1 Motivation
  • 18.4.2 Our Solution
  • Effective monetary cost optimizations for workflows in IaaS clouds
  • Transformation-based optimizations for workflows in IaaS clouds
  • A declarative optimization engine for workflows in IaaS clouds
  • 18.5 Open Problems
  • 18.6 Summary
  • References
  • Index
  • Back Cover

File format: EPUB
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; MacOS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many others (not Kindle).

The EPUB format is well suited to novels and nonfiction, that is, to "flowing" text without a complex layout. On e-readers and smartphones, line and page breaks adapt automatically to the small display. Adobe DRM applies "hard" copy protection: if the requirements are not met, the e-book cannot be opened, so prepare your reading hardware before downloading.



File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)

System requirements: as for the EPUB edition above.

The PDF format displays each book page identically on every device, which makes it suitable for the complex layouts used in textbooks and technical books (figures, tables, columns, footnotes). On the small displays of e-readers and smartphones, PDFs are rather tedious because they require a lot of scrolling. Adobe DRM applies "hard" copy protection: if the requirements are not met, the e-book cannot be opened, so prepare your reading hardware before downloading.



Download (available immediately)

€68.96
incl. 19% VAT
Download / single-user license
EPUB with Adobe DRM
see system requirements
PDF with Adobe DRM
see system requirements
Note: You select your preferred file format and copy protection in the e-book provider's system.
