Python Machine Learning

 
 
Packt Publishing Limited
  • 1st edition
  • |
  • published 23 September 2015
  • |
  • 454 pages
 
E-book | ePUB with Adobe DRM | System requirements
978-1-78355-514-7 (ISBN)
 
Unlock deeper insights into machine learning with this vital guide to cutting-edge predictive analytics

About This Book
  • Leverage Python's most powerful open-source libraries for deep learning, data wrangling, and data visualization
  • Learn effective strategies and best practices to improve and optimize machine learning systems and algorithms
  • Ask - and answer - tough questions of your data with robust statistical models, built for a range of datasets

Who This Book Is For
If you want to find out how to use Python to start answering critical questions of your data, pick up Python Machine Learning - whether you want to get started from scratch or want to extend your data science knowledge, this is an essential and unmissable resource.

What You Will Learn
  • Explore how to use different machine learning models to ask different questions of your data
  • Learn how to build neural networks using Keras and Theano
  • Find out how to write clean and elegant Python code that will optimize the strength of your algorithms
  • Discover how to embed your machine learning model in a web application for increased accessibility
  • Predict continuous target outcomes using regression analysis
  • Uncover hidden patterns and structures in data with clustering
  • Organize data using effective pre-processing techniques
  • Get to grips with sentiment analysis to delve deeper into textual and social media data

In Detail
Machine learning and predictive analytics are transforming the way businesses and other organizations operate. Being able to understand trends and patterns in complex data is critical to success, and is becoming one of the key strategies for unlocking growth in a challenging contemporary marketplace. Python can help you deliver key insights into your data - its unique capabilities as a language let you build sophisticated algorithms and statistical models that can reveal new perspectives and answer key questions that are vital for success.

Python Machine Learning gives you access to the world of predictive analytics and demonstrates why Python is one of the world's leading data science languages. If you want to ask better questions of data, or need to improve and extend the capabilities of your machine learning systems, this practical data science book is invaluable. Covering a wide range of powerful Python libraries, including scikit-learn, Theano, and Keras, and featuring guidance and tips on everything from sentiment analysis to neural networks, you'll soon be able to answer some of the most important questions facing you and your organization.

Style and approach
Python Machine Learning connects the fundamental theoretical principles behind machine learning to their practical application in a way that focuses you on asking and answering the right questions. It walks you through the key elements of Python and its powerful machine learning libraries, while demonstrating how to get to grips with a range of statistical models.
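As a taste of the hands-on workflow described above, here is a minimal sketch (not taken from the book's own code) of the kind of scikit-learn classification pipeline Chapters 2-4 walk through: standardizing features and training a perceptron on the Iris dataset. The split ratio, learning rate, and random seed are illustrative assumptions.

# Minimal sketch: perceptron classification on Iris with scikit-learn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score

# Load the Iris dataset and hold out 30% of the samples for testing
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# Bring the features onto the same scale (cf. Chapter 4)
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)

# Fit the perceptron and evaluate it on the unseen test data
ppn = Perceptron(eta0=0.1, random_state=1)
ppn.fit(X_train_std, y_train)
print('Test accuracy: %.3f' % accuracy_score(y_test, ppn.predict(X_test_std)))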
  • English
  • Birmingham
978-1-78355-514-7 (ISBN-13)
1783555149 (ISBN-10)
  • Cover
  • Copyright
  • Credits
  • Foreword
  • About the Author
  • About the Reviewers
  • www.PacktPub.com
  • Table of Contents
  • Preface
  • Chapter 1: Giving Computers the Ability to Learn from Data
  • Building intelligent machines to transform data into knowledge
  • The three different types of machine learning
  • Making predictions about the future with supervised learning
  • Classification for predicting class labels
  • Regression for predicting continuous outcomes
  • Solving interactive problems with reinforcement learning
  • Discovering hidden structures with unsupervised learning
  • Finding subgroups with clustering
  • Dimensionality reduction for data compression
  • An introduction to the basic terminology and notations
  • A roadmap for building machine learning systems
  • Preprocessing - getting data into shape
  • Training and selecting a predictive model
  • Evaluating models and predicting unseen data instances
  • Using Python for machine learning
  • Installing Python packages
  • Summary
  • Chapter 2: Training Machine Learning Algorithms for Classification
  • Artificial neurons - a brief glimpse into the early history of machine learning
  • Implementing a perceptron learning algorithm in Python
  • Training a perceptron model on the Iris dataset
  • Adaptive linear neurons and the convergence of learning
  • Minimizing cost functions with gradient descent
  • Implementing an adaptive linear neuron in Python
  • Large scale machine learning and stochastic gradient descent
  • Summary
  • Chapter 3: A Tour of Machine Learning Classifiers Using Scikit-Learn
  • Choosing a classification algorithm
  • First steps with scikit-learn
  • Training a perceptron via scikit-learn
  • Modeling class probabilities via logistic regression
  • Logistic regression intuition and conditional probabilities
  • Learning the weights of the logistic cost function
  • Training a logistic regression model with scikit-learn
  • Tackling overfitting via regularization
  • Maximum margin classification with support vector machines
  • Maximum margin intuition
  • Dealing with the nonlinearly separable case using slack variables
  • Alternative implementations in scikit-learn
  • Solving nonlinear problems using a kernel SVM
  • Using the kernel trick to find separating hyperplanes in higher dimensional space
  • Decision tree learning
  • Maximizing information gain - getting the most bang for the buck
  • Building a decision tree
  • Combining weak to strong learners via random forests
  • K-nearest neighbors - a lazy learning algorithm
  • Summary
  • Chapter 4: Building Good Training Sets - Data Preprocessing
  • Dealing with missing data
  • Eliminating samples or features with missing values
  • Imputing missing values
  • Understanding the scikit-learn estimator API
  • Handling categorical data
  • Mapping ordinal features
  • Encoding class labels
  • Performing one-hot encoding on nominal features
  • Partitioning a dataset in training and test sets
  • Bringing features onto the same scale
  • Selecting meaningful features
  • Sparse solutions with L1 regularization
  • Sequential feature selection algorithms
  • Assessing feature importances with random forests
  • Summary
  • Chapter 5: Compressing Data via Dimensionality Reduction
  • Unsupervised dimensionality reduction via principal component analysis
  • Total and explained variance
  • Feature transformation
  • Principal component analysis in scikit-learn
  • Supervised data compression via linear discriminant analysis
  • Computing the scatter matrices
  • Selecting linear discriminants for the new feature subspace
  • Projecting samples onto the new feature space
  • LDA via scikit-learn
  • Using kernel principal component analysis for nonlinear mappings
  • Kernel functions and the kernel trick
  • Implementing a kernel principal component analysis in Python
  • Example 1 - Separating half-moon shapes
  • Example 2 - Separating concentric circles
  • Projecting new data points
  • Kernel principal component analysis in scikit-learn
  • Summary
  • Chapter 6: Learning Best Practices for Model Evaluation and Hyperparameter Tuning
  • Streamlining workflows with pipelines
  • Loading the Breast Cancer Wisconsin dataset
  • Combining transformers and estimators in a pipeline
  • Using k-fold cross-validation to assess model performance
  • The holdout method
  • K-fold cross-validation
  • Debugging algorithms with learning and validation curves
  • Diagnosing bias and variance problems with learning curves
  • Addressing overfitting and underfitting with validation curves
  • Fine-tuning machine learning models via grid search
  • Tuning hyperparameters via grid search
  • Algorithm selection with nested cross-validation
  • Looking at different performance evaluation metrics
  • Reading a confusion matrix
  • Optimizing the precision and recall of a classification model
  • Plotting a receiver operating characteristic
  • The scoring metrics for multiclass classification
  • Summary
  • Chapter 7: Combining Different Models for Ensemble Learning
  • Learning with ensembles
  • Implementing a simple majority vote classifier
  • Combining different algorithms for classification with majority vote
  • Evaluating and tuning the ensemble classifier
  • Bagging - building an ensemble of classifiers from bootstrap samples
  • Leveraging weak learners via adaptive boosting
  • Summary
  • Chapter 8: Applying Machine Learning to Sentiment Analysis
  • Obtaining the IMDb movie review dataset
  • Introducing the bag-of-words model
  • Transforming words into feature vectors
  • Assessing word relevancy via term frequency-inverse document frequency
  • Cleaning text data
  • Processing documents into tokens
  • Training a logistic regression model for document classification
  • Working with bigger data - online algorithms and out-of-core learning
  • Summary
  • Chapter 9: Embedding a Machine Learning Model into a Web Application
  • Serializing fitted scikit-learn estimators
  • Setting up a SQLite database for data storage
  • Developing a web application with Flask
  • Our first Flask web application
  • Form validation and rendering
  • Turning the movie classifier into a web application
  • Deploying the web application to a public server
  • Updating the movie review classifier
  • Summary
  • Chapter 10: Predicting Continuous Target Variables with Regression Analysis
  • Introducing a simple linear regression model
  • Exploring the Housing Dataset
  • Visualizing the important characteristics of a dataset
  • Implementing an ordinary least squares linear regression model
  • Solving regression for regression parameters with gradient descent
  • Estimating the coefficient of a regression model via scikit-learn
  • Fitting a robust regression model using RANSAC
  • Evaluating the performance of linear regression models
  • Using regularized methods for regression
  • Turning a linear regression model into a curve - polynomial regression
  • Modeling nonlinear relationships in the Housing Dataset
  • Dealing with nonlinear relationships using random forests
  • Decision tree regression
  • Random forest regression
  • Summary
  • Chapter 11: Working with Unlabeled Data - Clustering Analysis
  • Grouping objects by similarity using k-means
  • K-means++
  • Hard versus soft clustering
  • Using the elbow method to find the optimal number of clusters
  • Quantifying the quality of clustering via silhouette plots
  • Organizing clusters as a hierarchical tree
  • Performing hierarchical clustering on a distance matrix
  • Attaching dendrograms to a heat map
  • Applying agglomerative clustering via scikit-learn
  • Locating regions of high density via DBSCAN
  • Summary
  • Chapter 12: Training Artificial Neural Networks for Image Recognition
  • Modeling complex functions with artificial neural networks
  • Single-layer neural network recap
  • Introducing the multi-layer neural network architecture
  • Activating a neural network via forward propagation
  • Classifying handwritten digits
  • Obtaining the MNIST dataset
  • Implementing a multi-layer perceptron
  • Training an artificial neural network
  • Computing the logistic cost function
  • Training neural networks via backpropagation
  • Developing your intuition for backpropagation
  • Debugging neural networks with gradient checking
  • Convergence in neural networks
  • Other neural network architectures
  • Convolutional neural networks
  • Recurrent neural networks
  • A few last words about neural network implementation
  • Summary
  • Chapter 13: Parallelizing Neural Network Training with Theano
  • Building, compiling, and running expressions with Theano
  • What is Theano?
  • First steps with Theano
  • Configuring Theano
  • Working with array structures
  • Wrapping things up - a linear regression example
  • Choosing activation functions for feedforward neural networks
  • Logistic function recap
  • Estimating probabilities in multi-class classification via the softmax function
  • Broadening the output spectrum by using a hyperbolic tangent
  • Training neural networks efficiently using Keras
  • Summary
  • Index

File format: EPUB
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; Mac OS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many more (not Kindle)

The EPUB file format is well suited to novels and non-fiction - that is, "flowing" text without a complex layout. On e-readers and smartphones, line and page breaks adapt automatically to small displays. Adobe DRM applies a "hard" copy protection here. If the necessary requirements are not met, you will unfortunately not be able to open the e-book, so you must prepare your reading hardware before downloading.

Further information can be found in our e-book help.


Download (available immediately)

€35.85
incl. 19% VAT
Download / single-user license
ePUB with Adobe DRM
see system requirements
