The fundamental mathematical tools needed to understand machine learning include linear algebra, analytic geometry, matrix decompositions, vector calculus, optimization, and probability and statistics. These topics are traditionally taught in disparate courses, making it hard for data science or computer science students, or professionals, to learn the mathematics efficiently. This self-contained textbook bridges the gap between mathematical and machine learning texts, introducing the mathematical concepts with a minimum of prerequisites. It uses these concepts to derive four central machine learning methods: linear regression, principal component analysis, Gaussian mixture models, and support vector machines. For students and others with a mathematical background, these derivations provide a starting point for machine learning texts. For those learning the mathematics for the first time, the methods help build intuition and practical experience with applying mathematical concepts. Every chapter includes worked examples and exercises to test understanding. Programming tutorials are offered on the book's website.
Reviews / Endorsements
'This book provides great coverage of all the basic mathematical concepts for machine learning. I'm looking forward to sharing it with students, colleagues, and anyone interested in building a solid understanding of the fundamentals.' Joelle Pineau, McGill University, Montreal

'The field of machine learning has grown dramatically in recent years, with an increasingly impressive spectrum of successful applications. This comprehensive text covers the key mathematical concepts that underpin modern machine learning, with a focus on linear algebra, calculus, and probability theory. It will prove valuable both as a tutorial for newcomers to the field, and as a reference text for machine learning researchers and engineers.' Christopher Bishop, Microsoft Research Cambridge

'This book provides a beautiful exposition of the mathematics underpinning modern machine learning. Highly recommended for anyone wanting a one-stop-shop to acquire a deep understanding of machine learning foundations.' Pieter Abbeel, University of California, Berkeley

'Really successful are the numerous explanatory illustrations, which help to explain even difficult concepts in a catchy way. Each chapter concludes with many instructive exercises. An outstanding feature of this book is the additional material presented on the website ...' Volker H. Schulz, SIAM Review

'A solid and affordable resource for building foundational skills in machine learning.' Ruwan Karunanayaka, University of the Fraser Valley

'This is an excellent book on the mathematical foundations of machine learning. The colourful figures and even some equations make the content both engaging and easy to follow. I will definitely be recommending this book to my students.' Hom Nath Gharti, Queen's University
Language
Place of publication
Product note
Illustrations
Worked examples and exercises; 106 colour halftones; 3 black-and-white halftones
Dimensions
Height: 254 mm
Width: 180 mm
Thickness: 21 mm
Weight
ISBN-13
978-1-108-45514-5 (9781108455145)
Marc Peter Deisenroth is DeepMind Chair in Artificial Intelligence at the Department of Computer Science, University College London. Prior to this, he was a faculty member in the Department of Computing, Imperial College London. His research areas include data-efficient learning, probabilistic modeling, and autonomous decision making. Deisenroth was Program Chair of the European Workshop on Reinforcement Learning (EWRL) 2012 and Workshops Chair of Robotics: Science and Systems (RSS) 2013. His research received Best Paper Awards at the International Conference on Robotics and Automation (ICRA) 2014 and the International Conference on Control, Automation and Systems (ICCAS) 2016. In 2018, he was awarded the President's Award for Outstanding Early Career Researcher at Imperial College London. He is a recipient of a Google Faculty Research Award and a Microsoft PhD grant.
Author
University College London
Imperial College London
1. Introduction and motivation
2. Linear algebra
3. Analytic geometry
4. Matrix decompositions
5. Vector calculus
6. Probability and distributions
7. Optimization
8. When models meet data
9. Linear regression
10. Dimensionality reduction with principal component analysis
11. Density estimation with Gaussian mixture models
12. Classification with support vector machines