The Data Science Handbook

Standards Information Network (publisher)
  • Published January 20, 2017
  • 416 pages

E-book | PDF with Adobe DRM | System requirements
978-1-119-09293-3 (ISBN)
 
A comprehensive overview of data science covering the analytics, programming, and business skills necessary to master the discipline
Finding a good data scientist has been likened to hunting for a unicorn: the required combination of technical skills is simply very hard to find in one person. In addition, good data science is not just rote application of trainable skill sets; it requires the ability to think flexibly about all these areas and understand the connections between them. This book provides a crash course in data science, combining all the necessary skills into a unified discipline.
Unlike many analytics books, computer science and software engineering are given extensive coverage since they play such a central role in the daily work of a data scientist. The author also describes classic machine learning algorithms, from their mathematical foundations to real-world applications. Visualization tools are reviewed, and their central importance in data science is highlighted. Classical statistics is addressed to help readers think critically about the interpretation of data and its common pitfalls. The clear communication of technical results, which is perhaps the most undertrained of data science skills, is given its own chapter, and all topics are explained in the context of solving real-world data problems. The book also features:
* Extensive sample code and tutorials using Python™ along with its technical libraries
* Core technologies of "Big Data," including their strengths and limitations and how they can be used to solve real-world problems
* Coverage of the practical realities of the tools, keeping theory to a minimum; however, when theory is presented, it is done in an intuitive way to encourage critical thinking and creativity
* A wide variety of case studies from industry
* Practical advice on the realities of being a data scientist today, including the overall workflow, where time is spent, the types of datasets worked on, and the skill sets needed
The Data Science Handbook is an ideal resource on data analysis methodology and big data software tools. The book is appropriate for people who want to practice data science but lack the required skill sets. This includes software professionals who need to better understand analytics and statisticians who need to understand software. Modern data science is a unified discipline, and it is presented as such. This book is also an appropriate reference for researchers and entry-level graduate students who need to learn real-world analytics and expand their skill set.
FIELD CADY is the data scientist at the Allen Institute for Artificial Intelligence, where he develops tools that use machine learning to mine scientific literature. He has also worked at Google and several Big Data startups. He has a BS in physics and math from Stanford University, and an MS in computer science from Carnegie Mellon.
1st edition
  • English
  • New York, USA
John Wiley & Sons Inc
  • For professional and research use
  • 6.45 MB
978-1-119-09293-3 (9781119092933)
1 - Cover [Page 1]
2 - Title Page [Page 5]
3 - Copyright [Page 6]
4 - Dedication [Page 7]
5 - Contents [Page 9]
6 - Preface [Page 19]
7 - Chapter 1 Introduction: Becoming a Unicorn [Page 21]
7.1 - 1.1 Aren't Data Scientists Just Overpaid Statisticians? [Page 22]
7.2 - 1.2 How Is This Book Organized? [Page 23]
7.3 - 1.3 How to Use This Book? [Page 23]
7.4 - 1.4 Why Is It All in Python™, Anyway? [Page 24]
7.5 - 1.5 Example Code and Datasets [Page 24]
7.6 - 1.6 Parting Words [Page 25]
8 - Part I The Stuff You'll Always Use [Page 27]
8.1 - Chapter 2 The Data Science Road Map [Page 29]
8.1.1 - 2.1 Frame the Problem [Page 30]
8.1.2 - 2.2 Understand the Data: Basic Questions [Page 31]
8.1.3 - 2.3 Understand the Data: Data Wrangling [Page 32]
8.1.4 - 2.4 Understand the Data: Exploratory Analysis [Page 33]
8.1.5 - 2.5 Extract Features [Page 34]
8.1.6 - 2.6 Model [Page 35]
8.1.7 - 2.7 Present Results [Page 35]
8.1.8 - 2.8 Deploy Code [Page 36]
8.1.9 - 2.9 Iterating [Page 36]
8.1.10 - 2.10 Glossary [Page 37]
8.2 - Chapter 3 Programming Languages [Page 39]
8.2.1 - 3.1 Why Use a Programming Language? What Are the Other Options? [Page 39]
8.2.2 - 3.2 A Survey of Programming Languages for Data Science [Page 40]
8.2.3 - 3.3 Python Crash Course [Page 42]
8.2.4 - 3.4 Strings [Page 47]
8.2.5 - 3.5 Defining Functions [Page 52]
8.2.6 - 3.6 Python's Technical Libraries [Page 57]
8.2.7 - 3.7 Other Python Resources [Page 62]
8.2.8 - 3.8 Further Reading [Page 62]
8.2.9 - 3.9 Glossary [Page 63]
8.3 - Interlude: My Personal Toolkit [Page 65]
8.4 - Chapter 4 Data Munging: String Manipulation, Regular Expressions, and Data Cleaning [Page 67]
8.4.1 - 4.1 The Worst Dataset in the World [Page 68]
8.4.2 - 4.2 How to Identify Pathologies [Page 68]
8.4.3 - 4.3 Problems with Data Content [Page 69]
8.4.4 - 4.4 Formatting Issues [Page 71]
8.4.5 - 4.5 Example Formatting Script [Page 74]
8.4.6 - 4.6 Regular Expressions [Page 75]
8.4.7 - 4.7 Life in the Trenches [Page 80]
8.4.8 - 4.8 Glossary [Page 80]
8.5 - Chapter 5 Visualizations and Simple Metrics [Page 81]
8.5.1 - 5.1 A Note on Python's Visualization Tools [Page 82]
8.5.2 - 5.2 Example Code [Page 82]
8.5.3 - 5.3 Pie Charts [Page 83]
8.5.4 - 5.4 Bar Charts [Page 85]
8.5.5 - 5.5 Histograms [Page 86]
8.5.6 - 5.6 Means, Standard Deviations, Medians, and Quantiles [Page 89]
8.5.7 - 5.7 Boxplots [Page 90]
8.5.8 - 5.8 Scatterplots [Page 92]
8.5.9 - 5.9 Scatterplots with Logarithmic Axes [Page 94]
8.5.10 - 5.10 Scatter Matrices [Page 96]
8.5.11 - 5.11 Heatmaps [Page 97]
8.5.12 - 5.12 Correlations [Page 98]
8.5.13 - 5.13 Anscombe's Quartet and the Limits of Numbers [Page 100]
8.5.14 - 5.14 Time Series [Page 101]
8.5.15 - 5.15 Further Reading [Page 105]
8.5.16 - 5.16 Glossary [Page 105]
8.6 - Chapter 6 Machine Learning Overview [Page 107]
8.6.1 - 6.1 Historical Context [Page 108]
8.6.2 - 6.2 Supervised versus Unsupervised [Page 109]
8.6.3 - 6.3 Training Data, Testing Data, and the Great Boogeyman of Overfitting [Page 109]
8.6.4 - 6.4 Further Reading [Page 111]
8.6.5 - 6.5 Glossary [Page 111]
8.7 - Chapter 7 Interlude: Feature Extraction Ideas [Page 113]
8.7.1 - 7.1 Standard Features [Page 113]
8.7.2 - 7.2 Features That Involve Grouping [Page 114]
8.7.3 - 7.3 Preview of More Sophisticated Features [Page 115]
8.7.4 - 7.4 Defining the Feature You Want to Predict [Page 115]
8.8 - Chapter 8 Machine Learning Classification [Page 117]
8.8.1 - 8.1 What Is a Classifier, and What Can You Do with It? [Page 117]
8.8.2 - 8.2 A Few Practical Concerns [Page 118]
8.8.3 - 8.3 Binary versus Multiclass [Page 119]
8.8.4 - 8.4 Example Script [Page 119]
8.8.5 - 8.5 Specific Classifiers [Page 121]
8.8.6 - 8.6 Evaluating Classifiers [Page 134]
8.8.7 - 8.7 Selecting Classification Cutoffs [Page 137]
8.8.8 - 8.8 Further Reading [Page 139]
8.8.9 - 8.9 Glossary [Page 139]
8.9 - Chapter 9 Technical Communication and Documentation [Page 141]
8.9.1 - 9.1 Several Guiding Principles [Page 142]
8.9.2 - 9.2 Slide Decks [Page 144]
8.9.3 - 9.3 Written Reports [Page 148]
8.9.4 - 9.4 Speaking: What Has Worked for Me [Page 150]
8.9.5 - 9.5 Code Documentation [Page 151]
8.9.6 - 9.6 Further Reading [Page 152]
8.9.7 - 9.7 Glossary [Page 152]
9 - Part II Stuff You Still Need to Know [Page 153]
9.1 - Chapter 10 Unsupervised Learning: Clustering and Dimensionality Reduction [Page 155]
9.1.1 - 10.1 The Curse of Dimensionality [Page 156]
9.1.2 - 10.2 Example: Eigenfaces for Dimensionality Reduction [Page 158]
9.1.3 - 10.3 Principal Component Analysis and Factor Analysis [Page 160]
9.1.4 - 10.4 Scree Plots and Understanding Dimensionality [Page 162]
9.1.5 - 10.5 Factor Analysis [Page 163]
9.1.6 - 10.6 Limitations of PCA [Page 163]
9.1.7 - 10.7 Clustering [Page 164]
9.1.8 - 10.8 Further Reading [Page 171]
9.1.9 - 10.9 Glossary [Page 171]
9.2 - Chapter 11 Regression [Page 173]
9.2.1 - 11.1 Example: Predicting Diabetes Progression [Page 173]
9.2.2 - 11.2 Least Squares [Page 176]
9.2.3 - 11.3 Fitting Nonlinear Curves [Page 177]
9.2.4 - 11.4 Goodness of Fit: R² and Correlation [Page 179]
9.2.5 - 11.5 Correlation of Residuals [Page 180]
9.2.6 - 11.6 Linear Regression [Page 181]
9.2.7 - 11.7 LASSO Regression and Feature Selection [Page 182]
9.2.8 - 11.8 Further Reading [Page 184]
9.2.9 - 11.9 Glossary [Page 184]
9.3 - Chapter 12 Data Encodings and File Formats [Page 185]
9.3.1 - 12.1 Typical File Format Categories [Page 185]
9.3.2 - 12.2 CSV Files [Page 187]
9.3.3 - 12.3 JSON Files [Page 188]
9.3.4 - 12.4 XML Files [Page 190]
9.3.5 - 12.5 HTML Files [Page 192]
9.3.6 - 12.6 Tar Files [Page 194]
9.3.7 - 12.7 GZip Files [Page 195]
9.3.8 - 12.8 Zip Files [Page 195]
9.3.9 - 12.9 Image Files: Rasterized, Vectorized, and/or Compressed [Page 196]
9.3.10 - 12.10 It's All Bytes at the End of the Day [Page 197]
9.3.11 - 12.11 Integers [Page 198]
9.3.12 - 12.12 Floats [Page 199]
9.3.13 - 12.13 Text Data [Page 200]
9.3.14 - 12.14 Further Reading [Page 203]
9.3.15 - 12.15 Glossary [Page 203]
9.4 - Chapter 13 Big Data [Page 205]
9.4.1 - 13.1 What Is Big Data? [Page 205]
9.4.2 - 13.2 Hadoop: The File System and the Processor [Page 207]
9.4.3 - 13.3 Using HDFS [Page 208]
9.4.4 - 13.4 Example PySpark Script [Page 209]
9.4.5 - 13.5 Spark Overview [Page 210]
9.4.6 - 13.6 Spark Operations [Page 212]
9.4.7 - 13.7 Two Ways to Run PySpark [Page 213]
9.4.8 - 13.8 Configuring Spark [Page 214]
9.4.9 - 13.9 Under the Hood [Page 215]
9.4.10 - 13.10 Spark Tips and Gotchas [Page 216]
9.4.11 - 13.11 The MapReduce Paradigm [Page 217]
9.4.12 - 13.12 Performance Considerations [Page 219]
9.4.13 - 13.13 Further Reading [Page 220]
9.4.14 - 13.14 Glossary [Page 220]
9.5 - Chapter 14 Databases [Page 223]
9.5.1 - 14.1 Relational Databases and MySQL® [Page 224]
9.5.2 - 14.2 Key-Value Stores [Page 230]
9.5.3 - 14.3 Wide Column Stores [Page 231]
9.5.4 - 14.4 Document Stores [Page 231]
9.5.5 - 14.5 Further Reading [Page 234]
9.5.6 - 14.6 Glossary [Page 234]
9.6 - Chapter 15 Software Engineering Best Practices [Page 237]
9.6.1 - 15.1 Coding Style [Page 237]
9.6.2 - 15.2 Version Control and Git for Data Scientists [Page 240]
9.6.3 - 15.3 Testing Code [Page 242]
9.6.4 - 15.4 Test-Driven Development [Page 245]
9.6.5 - 15.5 AGILE Methodology [Page 245]
9.6.6 - 15.6 Further Reading [Page 246]
9.6.7 - 15.7 Glossary [Page 246]
9.7 - Chapter 16 Natural Language Processing [Page 249]
9.7.1 - 16.1 Do I Even Need NLP? [Page 249]
9.7.2 - 16.2 The Great Divide: Language versus Statistics [Page 250]
9.7.3 - 16.3 Example: Sentiment Analysis on Stock Market Articles [Page 250]
9.7.4 - 16.4 Software and Datasets [Page 252]
9.7.5 - 16.5 Tokenization [Page 253]
9.7.6 - 16.6 Central Concept: Bag-of-Words [Page 253]
9.7.7 - 16.7 Word Weighting: TF-IDF [Page 255]
9.7.8 - 16.8 n-Grams [Page 255]
9.7.9 - 16.9 Stop Words [Page 256]
9.7.10 - 16.10 Lemmatization and Stemming [Page 256]
9.7.11 - 16.11 Synonyms [Page 257]
9.7.12 - 16.12 Part of Speech Tagging [Page 257]
9.7.13 - 16.13 Common Problems [Page 258]
9.7.14 - 16.14 Advanced NLP: Syntax Trees, Knowledge, and Understanding [Page 260]
9.7.15 - 16.15 Further Reading [Page 261]
9.7.16 - 16.16 Glossary [Page 262]
9.8 - Chapter 17 Time Series Analysis [Page 263]
9.8.1 - 17.1 Example: Predicting Wikipedia Page Views [Page 264]
9.8.2 - 17.2 A Typical Workflow [Page 267]
9.8.3 - 17.3 Time Series versus Time-Stamped Events [Page 268]
9.8.4 - 17.4 Resampling and Interpolation [Page 269]
9.8.5 - 17.5 Smoothing Signals [Page 271]
9.8.6 - 17.6 Logarithms and Other Transformations [Page 272]
9.8.7 - 17.7 Trends and Periodicity [Page 272]
9.8.8 - 17.8 Windowing [Page 273]
9.8.9 - 17.9 Brainstorming Simple Features [Page 274]
9.8.10 - 17.10 Better Features: Time Series as Vectors [Page 275]
9.8.11 - 17.11 Fourier Analysis: Sometimes a Magic Bullet [Page 276]
9.8.12 - 17.12 Time Series in Context: The Whole Suite of Features [Page 279]
9.8.13 - 17.13 Further Reading [Page 279]
9.8.14 - 17.14 Glossary [Page 280]
9.9 - Chapter 18 Probability [Page 281]
9.9.1 - 18.1 Flipping Coins: Bernoulli Random Variables [Page 281]
9.9.2 - 18.2 Throwing Darts: Uniform Random Variables [Page 283]
9.9.3 - 18.3 The Uniform Distribution and Pseudorandom Numbers [Page 283]
9.9.4 - 18.4 Nondiscrete, Noncontinuous Random Variables [Page 285]
9.9.5 - 18.5 Notation, Expectations, and Standard Deviation [Page 287]
9.9.6 - 18.6 Dependence, Marginal and Conditional Probability [Page 288]
9.9.7 - 18.7 Understanding the Tails [Page 289]
9.9.8 - 18.8 Binomial Distribution [Page 291]
9.9.9 - 18.9 Poisson Distribution [Page 292]
9.9.10 - 18.10 Normal Distribution [Page 292]
9.9.11 - 18.11 Multivariate Gaussian [Page 293]
9.9.12 - 18.12 Exponential Distribution [Page 294]
9.9.13 - 18.13 Log-Normal Distribution [Page 296]
9.9.14 - 18.14 Entropy [Page 297]
9.9.15 - 18.15 Further Reading [Page 299]
9.9.16 - 18.16 Glossary [Page 299]
9.10 - Chapter 19 Statistics [Page 301]
9.10.1 - 19.1 Statistics in Perspective [Page 301]
9.10.2 - 19.2 Bayesian versus Frequentist: Practical Tradeoffs and Differing Philosophies [Page 302]
9.10.3 - 19.3 Hypothesis Testing: Key Idea and Example [Page 303]
9.10.4 - 19.4 Multiple Hypothesis Testing [Page 305]
9.10.5 - 19.5 Parameter Estimation [Page 306]
9.10.6 - 19.6 Hypothesis Testing: t-Test [Page 307]
9.10.7 - 19.7 Confidence Intervals [Page 310]
9.10.8 - 19.8 Bayesian Statistics [Page 311]
9.10.9 - 19.9 Naive Bayesian Statistics [Page 313]
9.10.10 - 19.10 Bayesian Networks [Page 313]
9.10.11 - 19.11 Choosing Priors: Maximum Entropy or Domain Knowledge [Page 314]
9.10.12 - 19.12 Further Reading [Page 315]
9.10.13 - 19.13 Glossary [Page 315]
9.11 - Chapter 20 Programming Language Concepts [Page 317]
9.11.1 - 20.1 Programming Paradigms [Page 317]
9.11.2 - 20.2 Compilation and Interpretation [Page 325]
9.11.3 - 20.3 Type Systems [Page 327]
9.11.4 - 20.4 Further Reading [Page 329]
9.11.5 - 20.5 Glossary [Page 329]
9.12 - Chapter 21 Performance and Computer Memory [Page 331]
9.12.1 - 21.1 Example Script [Page 331]
9.12.2 - 21.2 Algorithm Performance and Big-O Notation [Page 334]
9.12.3 - 21.3 Some Classic Problems: Sorting a List and Binary Search [Page 335]
9.12.4 - 21.4 Amortized Performance and Average Performance [Page 338]
9.12.5 - 21.5 Two Principles: Reducing Overhead and Managing Memory [Page 340]
9.12.6 - 21.6 Performance Tip: Use Numerical Libraries When Applicable [Page 342]
9.12.7 - 21.7 Performance Tip: Delete Large Structures You Don't Need [Page 343]
9.12.8 - 21.8 Performance Tip: Use Built-In Functions When Possible [Page 344]
9.12.9 - 21.9 Performance Tip: Avoid Superfluous Function Calls [Page 344]
9.12.10 - 21.10 Performance Tip: Avoid Creating Large New Objects [Page 345]
9.12.11 - 21.11 Further Reading [Page 345]
9.12.12 - 21.12 Glossary [Page 345]
10 - Part III Specialized or Advanced Topics [Page 347]
10.1 - Chapter 22 Computer Memory and Data Structures [Page 349]
10.1.1 - 22.1 Virtual Memory, the Stack, and the Heap [Page 349]
10.1.2 - 22.2 Example C Program [Page 350]
10.1.3 - 22.3 Data Types and Arrays in Memory [Page 350]
10.1.4 - 22.4 Structs [Page 352]
10.1.5 - 22.5 Pointers, the Stack, and the Heap [Page 353]
10.1.6 - 22.6 Key Data Structures [Page 357]
10.1.7 - 22.7 Further Reading [Page 363]
10.1.8 - 22.8 Glossary [Page 363]
10.2 - Chapter 23 Maximum Likelihood Estimation and Optimization [Page 365]
10.2.1 - 23.1 Maximum Likelihood Estimation [Page 365]
10.2.2 - 23.2 A Simple Example: Fitting a Line [Page 366]
10.2.3 - 23.3 Another Example: Logistic Regression [Page 368]
10.2.4 - 23.4 Optimization [Page 368]
10.2.5 - 23.5 Gradient Descent and Convex Optimization [Page 370]
10.2.6 - 23.6 Convex Optimization [Page 373]
10.2.7 - 23.7 Stochastic Gradient Descent [Page 375]
10.2.8 - 23.8 Further Reading [Page 375]
10.2.9 - 23.9 Glossary [Page 376]
10.3 - Chapter 24 Advanced Classifiers [Page 377]
10.3.1 - 24.1 A Note on Libraries [Page 378]
10.3.2 - 24.2 Basic Deep Learning [Page 378]
10.3.3 - 24.3 Convolutional Neural Networks [Page 381]
10.3.4 - 24.4 Different Types of Layers: What the Heck Is a Tensor? [Page 382]
10.3.5 - 24.5 Example: The MNIST Handwriting Dataset [Page 383]
10.3.6 - 24.6 Recurrent Neural Networks [Page 386]
10.3.7 - 24.7 Bayesian Networks [Page 387]
10.3.8 - 24.8 Training and Prediction [Page 389]
10.3.9 - 24.9 Markov Chain Monte Carlo [Page 389]
10.3.10 - 24.10 PyMC Example [Page 390]
10.3.11 - 24.11 Further Reading [Page 393]
10.3.12 - 24.12 Glossary [Page 393]
10.4 - Chapter 25 Stochastic Modeling [Page 395]
10.4.1 - 25.1 Markov Chains [Page 395]
10.4.2 - 25.2 Two Kinds of Markov Chain, Two Kinds of Questions [Page 397]
10.4.3 - 25.3 Markov Chain Monte Carlo [Page 399]
10.4.4 - 25.4 Hidden Markov Models and the Viterbi Algorithm [Page 400]
10.4.5 - 25.5 The Viterbi Algorithm [Page 402]
10.4.6 - 25.6 Random Walks [Page 404]
10.4.7 - 25.7 Brownian Motion [Page 404]
10.4.8 - 25.8 ARIMA Models [Page 405]
10.4.9 - 25.9 Continuous-Time Markov Processes [Page 406]
10.4.10 - 25.10 Poisson Processes [Page 407]
10.4.11 - 25.11 Further Reading [Page 408]
10.4.12 - 25.12 Glossary [Page 408]
10.5 - Parting Words: Your Future as a Data Scientist [Page 411]
11 - Index [Page 413]
12 - EULA [Page 417]

File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; Mac OS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, PocketBook, Sony, Tolino, and many others (not Kindle)

The PDF file format displays each book page identically on any hardware, which makes it well suited to the complex layouts used in textbooks and technical books (images, tables, columns, footnotes). On the small displays of e-readers or smartphones, however, PDFs can be tedious to read because they require a lot of scrolling. Adobe DRM applies "hard" copy protection: if the necessary requirements are not met, you will not be able to open the e-book, so you must prepare your reading hardware before downloading.

Please note when using the Adobe Digital Editions reading software: we strongly recommend authorizing the software with your personal Adobe ID immediately after installation.

More information is available in our e-book help.


Download (available immediately)

€47.99
incl. 7% VAT
Download / single-user license
PDF with Adobe DRM
see system requirements
Order e-book