Linear prediction theory and the related algorithms have matured to the point where they now form an integral part of many real-world adaptive systems. When it is necessary to extract information from a random process, we are frequently faced with the problem of analyzing and solving special systems of linear equations. In the general case these systems are overdetermined and may be characterized by additional properties, such as update and shift-invariance properties. Usually, one employs exact or approximate least-squares methods to solve the resulting class of linear equations. Mainly during the last decade, researchers in various fields have contributed techniques and nomenclature for this type of least-squares problem. This body of methods now constitutes what we call the theory of linear prediction. The immense interest it has aroused stems largely from recent advances in processor technology, which provide the means to implement linear prediction algorithms and to operate them in real time. The practical result is the emergence of a new class of high-performance adaptive systems for control, communications, and system identification applications. This monograph presumes a background in discrete-time digital signal processing, including z-transforms, and a basic knowledge of discrete-time random processes. One of the difficulties I have encountered while writing this book is that many engineers and computer scientists lack a working knowledge of fundamental mathematics and geometry.
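As a minimal illustration of the least-squares framing described above (a sketch, not taken from the book), the snippet below fits a forward linear predictor to a signal by building the overdetermined data matrix and solving the associated least-squares problem with NumPy. The function name `lp_coefficients`, the chosen model order, and the AR(2) test signal are illustrative assumptions only.

```python
# Minimal sketch (not from the book): fitting an order-p forward linear
# predictor to a signal x by solving the overdetermined least-squares
# problem implied by the normal equations.
import numpy as np

def lp_coefficients(x, order):
    """Return predictor coefficients a such that
    x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order]."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Build the overdetermined system X a = y from past samples.
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    # Normal equations: (X^T X) a = X^T y; lstsq is the numerically safer route.
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Usage example: a low-noise AR(2) process should be predicted almost exactly.
rng = np.random.default_rng(0)
x = np.zeros(500)
for n in range(2, 500):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + 0.01 * rng.standard_normal()
print(lp_coefficients(x, 2))   # close to [1.5, -0.7]
```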
Target audience
For upper secondary school and university study
For professional practice and research
Dimensions
Height: 23.5 cm
Width: 15.5 cm
ISBN-13
978-3-540-51871-6 (9783540518716)
DOI
10.1007/978-3-642-75206-3
1. Introduction
2. The Linear Prediction Model
2.1 The Normal Equations of Linear Prediction
2.2 Geometrical Interpretation of the Normal Equations
2.3 Statistical Interpretation of the Normal Equations
2.4 The Problem of Signal Observation
2.5 Recursion Laws of the Normal Equations
2.6 Stationarity - A Special Case of Linear Prediction
2.7 Covariance Method and Autocorrelation Method
2.8 Recursive Windowing Algorithms
2.9 Backward Linear Prediction
2.10 Chapter Summary
3. Classical Algorithms for Symmetric Linear Systems
3.1 The Cholesky Decomposition
3.2 The QR Decomposition
3.2.1 The Givens Reduction
3.2.2 The Householder Reduction
3.2.3 Calculation of Prediction Error Energy
3.3 Some More Principles for Matrix Computations
3.3.1 The Singular Value Decomposition
3.3.2 Solving the Normal Equations by Singular Value Decomposition
3.3.3 The Penrose Pseudoinverse
3.3.4 The Problem of Computing X⁻¹Y
3.4 Chapter Summary
4. Recursive Least-Squares Using the QR Decomposition
4.1 Formulation of the Growing-Window Recursive Least-Squares Problem
4.2 Recursive Least Squares Based on the Givens Reduction
4.3 Systolic Array Implementation
4.4 Iterative Vector Rotations - The CORDIC Algorithm
4.5 Recursive QR Decomposition Using a Second-Order Window
4.6 Alternative Formulations of the QRLS Problem
4.7 Implicit Error Computation
4.8 Chapter Summary
5. Recursive Least-Squares Transversal Algorithms
5.1 The Recursive Least-Squares Algorithm
5.2 Potter's Square-Root Normalized RLS Algorithm
5.3 Update Properties of the RLS Algorithm
5.4 Kubin's Selective Memory RLS Algorithms
5.5 Fast RLS Transversal Algorithms
5.5.1 The Sherman-Morrison Identity for Partitioned Matrices
5.5.2 The Fast Kalman Algorithm
5.5.3 The FAEST Algorithm
5.6 Descent Transversal Algorithms
5.6.1 The Newton Algorithm
5.6.2 The Steepest Descent Algorithm
5.6.3 Stability of the Steepest Descent Algorithm
5.6.4 Convergence of the Steepest Descent Algorithm
5.6.5 The Least Mean Squares Algorithm
5.7 Chapter Summary
6. The Ladder Form
6.1 The Recursion Formula for Orthogonal Projections
6.1.1 Solving the Normal Equations with the Recursion Formula for Orthogonal Projections
6.1.2 The Feed-Forward Ladder Form
6.2 Computing Time-Varying Transversal Predictor Parameters from the Ladder Reflection Coefficients
6.3 Stationary Case - The PARCOR Ladder Form
6.4 Relationships Between PARCOR Ladder Form and Transversal Predictor
6.4.1 Computing Transversal Predictor Parameters from PARCOR Coefficients - The Levinson Recursion
6.4.2 Computing PARCOR Coefficients from Transversal Predictor Parameters - The Inverse Levinson Recursion
6.5 The Feed-Back PARCOR Ladder Form
6.6 Frequency Domain Description of PARCOR Ladder Forms
6.6.1 Transfer Function of the Feed-Forward PARCOR Ladder Form
6.6.2 Transfer Function of the Feed-Back PARCOR Ladder Form
6.6.3 Relationships Between Forward and Backward Predictor Transfer Functions
6.7 Stability of the Feed-Back PARCOR Ladder Form
6.8 Burg's Harmonic Mean PARCOR Ladder Algorithm
6.9 Determination of Model Order
6.10 Chapter Summary
7. Levinson-Type Ladder Algorithms
7.1 The Levinson-Durbin Algorithm
7.2 Computing the Autocorrelation Coefficients from the PARCOR Ladder Reflection Coefficients - The "Inverse" Levinson-Durbin Algorithm
7.3 Some More Properties of Toeplitz Systems and the Levinson-Durbin Algorithm
7.4 Split Levinson Algorithms
7.4.1 Delsarte's Algorithm
7.4.2 Krishna's Algorithm
7.4.3 Relationships Between Krishna's Algorithm and Delsarte's Algorithm (Symmetric Case)
7.4.4 Relationships Between Krishna's Algorithm and Delsarte's Algorithm (Antisymmetric Case)
7.5 A Levinson-Type Least-Squares Ladder Estimation Algorithm
7.6 The Makhoul Covariance Ladder Algorithm
7.7 Chapter Summary
8. Covariance Ladder Algorithms
8.1 The LeRoux-Gueguen Algorithm
8.1.1 Bounds on GREs
8.2 The Cumani Covariance Ladder Algorithm
8.3 Recursive Covariance Ladder Algorithms
8.3.1 Recursive Least-Squares Using Generalized Residual Energies
8.3.2 Strobach's Algorithm
8.3.3 Approximate PORLA Computation Schemes
8.3.4 Sokat's Algorithm - An Extension of the LeRoux-Gueguen Algorithm
8.3.5 Additional Notes on Sokat's PORLA Method
8.4 Split Schur Algorithms
8.4.1 A Split Schur Formulation of Krishna's Algorithm
8.4.2 Bounds on Recursion Variables
8.4.3 A Split Schur Formulation of Delsarte's Algorithm
8.5 Chapter Summary
9. Fast Recursive Least-Squares Ladder Algorithms
9.1 The Exact Time-Update Theorem of Projection Operators
9.2 The Algorithm of Lee and Morf
9.3 Other Forms of Lee's Algorithm
9.3.1 A Pure Time Recursive Ladder Algorithm
9.3.2 Direct Updating of Reflection Coefficients
9.4 Gradient Adaptive Ladder Algorithms
9.4.1 Gradient Adaptive Ladder Algorithm GAL 2
9.4.2 Gradient Adaptive Ladder Algorithm GAL 1
9.5 Lee's Normalized RLS Ladder Algorithm
9.5.1 Power Normalization
9.5.2 Angle Normalization
9.6 Chapter Summary
10. Special Signal Models and Extensions
10.1 Joint Process Estimation
10.1.1 Ladder Formulation of the Joint Process Problem
10.1.2 The Joint Process Model
10.1.3 FIR System Identification
10.1.4 Noise Cancelling
10.2 ARMA System Identification
10.2.1 The ARMA Normal Equations
10.2.2 ARMA Embedding
10.2.3 Ladder Formulation of the ARMA System Identification Problem
10.2.4 The PORLA Method for ARMA System Identification
10.2.5 Computing the ARMA Parameters from the Ladder Reflection Matrices
10.3 Identification of Vector Autoregressive Processes
10.4 Parametric Spectral Estimation
10.5 Relationships Between Parameter Estimation and Kalman Filter Theory
10.6 Chapter Summary
11. Concluding Remarks and Applications
A.1 Summary of the Most Important Forward/Backward Linear Prediction Relationships
A.2 New PORLA Algorithms and Their Systolic Array Implementation
A.2.1 Triangular Array Ladder Algorithm ARRAYLAD 1
A.2.2 Triangular Array Ladder Algorithm ARRAYLAD 2
A.2.3 Systolic Array Implementation
A.2.4 Comparison of Ladder and Givens Rotors
A.2.5 A Square-Root PORLA Algorithm
A.2.6 A Step Towards Toeplitz Systems
A.3 Vector Case of New PORLA Algorithms
A.3.1 Vector Case of ARRAYLAD 1
A.3.2 Computation of Reflection Matrices
A.3.3 Multichannel Prediction Error Filters
A.3.4 Vector Case of ARRAYLAD 2
A.3.5 Stationary Case - Block Processing Algorithms
A.3.6 Concluding Remarks
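To make one of the central recursions named above (Chapter 7.1) concrete, here is a minimal sketch of the classical Levinson-Durbin recursion for the stationary Toeplitz normal equations, written in NumPy rather than in the book's notation; the function name `levinson_durbin` and the toy autocorrelation values are illustrative assumptions, and the result is checked against a direct solve of the same Toeplitz system.

```python
# Minimal sketch of the classical Levinson-Durbin recursion: solve the
# Toeplitz normal equations R a = r of the stationary (autocorrelation
# method) case, returning predictor coefficients, reflection (PARCOR)
# coefficients, and the final prediction error energy.
import numpy as np

def levinson_durbin(r, order):
    """r: autocorrelation sequence r[0..order]; returns (a, k, err)."""
    r = np.asarray(r, dtype=float)
    a = np.zeros(order)          # predictor coefficients a[1..order]
    k = np.zeros(order)          # reflection (PARCOR) coefficients
    err = r[0]                   # zeroth-order prediction error energy
    for m in range(order):
        # Partial correlation of the order-(m+1) problem.
        acc = r[m + 1] - np.dot(a[:m], r[m:0:-1])
        k[m] = acc / err
        # Order update of the predictor (Levinson recursion).
        a_prev = a[:m].copy()
        a[m] = k[m]
        a[:m] = a_prev - k[m] * a_prev[::-1]
        err *= (1.0 - k[m] ** 2)
    return a, k, err

# Quick check against a direct solve of the Toeplitz normal equations
# for an illustrative autocorrelation sequence.
r = np.array([1.0, 0.5, 0.2, 0.05])
a, k, err = levinson_durbin(r, 3)
R = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
print(np.allclose(a, np.linalg.solve(R, r[1:4])))   # True
```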