Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea, as is often done in practice, to use a notoriously "wrong" (i.e. over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of stochastic optimization algorithms commonly used in computing estimators. The author focuses on non-asymptotic bounds on the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.
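For readers unfamiliar with the second of these objects, the following is a minimal sketch of the Gibbs (pseudo-)posterior as it typically appears in PAC-Bayesian analysis; the notation (empirical risk r, prior pi, inverse temperature beta) is generic and is not necessarily the book's own.

% A Gibbs measure built from an empirical risk r(\theta), a prior \pi,
% and an inverse temperature \beta > 0 (generic PAC-Bayesian notation,
% assumed here for illustration):
\[
  \rho_\beta(d\theta) \;=\;
  \frac{\exp\bigl(-\beta\, r(\theta)\bigr)\, \pi(d\theta)}
       {\int \exp\bigl(-\beta\, r(\theta')\bigr)\, \pi(d\theta')} .
\]
% It is the unique minimizer of the free-energy criterion linking the two
% objects named above, expected empirical risk and (relative) entropy:
\[
  \rho_\beta \;=\; \operatorname*{arg\,min}_{\rho \ll \pi}
  \Bigl\{ \beta \int r \, d\rho \;+\; \mathcal{K}(\rho, \pi) \Bigr\},
\]
% where \mathcal{K}(\rho, \pi) denotes the Kullback--Leibler divergence.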
Reviews
From the reviews:
"This book is based on a course of lectures given by the author on a circle of ideas lying at the interface of information theory, statistical learning theory and statistical interference. . The book is perhaps the first ever compendium of this circle of ideas and will be a valuable resource for researchers in information theory, statistical learning theory and statistical inference." (Vivek S. Borkar, Mathematical Reviews, Issue 2006 d)
ISBN-13: 978-3-540-44507-4 (9783540445074)
Contents
Universal Lossless Data Compression
Links Between Data Compression and Statistical Estimation
Non Cumulated Mean Risk
Gibbs Estimators
Randomized Estimators and Empirical Complexity
Deviation Inequalities
Markov Chains with Exponential Transitions
References
Index