Matrix Algorithms in MATLAB

 
 
Elsevier Reference Monographs (publisher)
  • 1st edition
  • published April 1, 2016
  • 478 pages

E-book | PDF with Adobe DRM | system requirements
978-0-12-803869-7 (ISBN)
 

Matrix Algorithms in MATLAB focuses on MATLAB code implementations of matrix algorithms. The MATLAB code presented in the book has been tested with thousands of runs on randomly generated matrices, and the notation follows the MATLAB style to ensure a smooth transition from formulation to code. For the sake of clarity, each MATLAB code discussed in the book is kept within 100 lines.
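
To illustrate the kind of randomized validation described above, here is a minimal sketch (not taken from the book); it uses MATLAB's built-in lu as a stand-in for any decomposition routine, and the run count and matrix sizes are arbitrary choices:

    % A minimal sketch (not the book's test harness) of checking a
    % decomposition against randomly generated matrices.
    rng(0);                                  % make the random runs reproducible
    nrun = 1000;                             % number of random test matrices
    err  = zeros(nrun, 1);
    for k = 1:nrun
        n = randi([2 50]);                   % random matrix size
        A = randn(n);                        % random dense matrix
        [L, U, P] = lu(A);                   % decomposition under test (built-in LU here)
        err(k) = norm(P*A - L*U, 1) / norm(A, 1);   % relative residual
    end
    fprintf('max relative residual over %d runs: %.2e\n', nrun, max(err));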

The book provides an overview and classification of the interrelations among the various algorithms, as well as numerous examples that demonstrate code usage and the properties of the presented algorithms. Despite the wide availability of computer programs for matrix computations, matrix computation remains an active area of research and development: new applications, new algorithms, and improvements to old algorithms are constantly emerging.


  • Presents the first book available on matrix algorithms implemented in real computer code
  • Covers each algorithm in three parts: the mathematical development of the algorithm using a simple example, the code implementation, and numerical examples using the code (a brief sketch in this spirit follows this list)
  • Allows readers to gain a quick understanding of an algorithm by reading or debugging the source code
  • Includes downloadable code on an accompanying companion website, www.matrixalgorithmsinmatlab.com, that can be reused in other software development
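
As a flavor of the three-part presentation (formulation, code, numerical example), the following is a minimal sketch of LU decomposition by Gauss elimination with partial pivoting, in the spirit of Section 2.2.1. It is not the book's code; the function name lu_gauss_sketch and its interface are illustrative only.

    function [L, U, p] = lu_gauss_sketch(A)
    % Illustrative sketch only, not the book's implementation.
    % Factor a square matrix A so that A(p,:) = L*U, with L unit lower
    % triangular and U upper triangular, using Gauss elimination with
    % partial pivoting.
    n = size(A, 1);
    p = 1:n;                                 % row permutation record
    L = eye(n);
    U = A;
    for k = 1:n-1
        [~, m] = max(abs(U(k:n, k)));        % pick the pivot row
        m = m + k - 1;
        if m ~= k                            % swap rows of U, L, and p
            U([k m], :) = U([m k], :);
            L([k m], 1:k-1) = L([m k], 1:k-1);
            p([k m]) = p([m k]);
        end
        for i = k+1:n                        % eliminate entries below the pivot
            L(i, k) = U(i, k) / U(k, k);
            U(i, :) = U(i, :) - L(i, k) * U(k, :);
        end
    end
    end

A quick numerical check in the same spirit: for A = randn(6) and [L, U, p] = lu_gauss_sketch(A), the residual norm(A(p,:) - L*U) should be on the order of machine precision.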


In 1989, Dr. Ong U. Routh studied computational mechanics and obtained a PhD degree from Tsinghua University, China. In 1991, he worked as a researcher at Osaka University, Japan, developing finite element software for the numerical simulation of sheet metal forming. Since 1999, he has conducted many industrial software projects for the analysis of structures and multibody systems. His career interest is the research and implementation of numerical algorithms that are directly used to solve engineering problems, such as finite element analysis, multi-rigid-body analysis, differential equations, and matrix computations.
  • English
  • San Diego, USA
  • File size: 6.30 MB
978-0-12-803869-7 (ISBN-13)
0-12-803869-1 (ISBN-10)
Further editions are being determined.
  • Matrix Algorithms in MATLAB
  • Copyright
  • Table of Contents
  • List of Figures
  • Preface
  • License Terms
  • 1 Introduction
  • Introduction
  • 1.1 Elements of Linear Algebra
  • 1.1.1 Definitions
  • 1.1.2 Linear Independence and Related Concepts
  • 1.1.3 Solution of Linear Equations
  • 1.1.4 Solution of Eigenvalue Problem
  • 1.2 A Brief Introduction of MATLAB
  • 1.3 Types of Matrices
  • 1.3.1 Square vs. Non-Square Matrices
  • 1.3.2 Symmetric vs. Non-Symmetric Matrices
  • 1.3.3 Full Rank vs. Deficient Rank Matrices
  • 1.3.4 Singular vs. Non-Singular Matrices
  • 1.3.5 Orthogonal vs. Non-Orthogonal Matrices
  • 1.3.6 Defective vs. Non-Defective Matrices
  • 1.3.7 Positive (semi-)Definite vs. Positive Indefinite Matrices
  • 1.3.8 Zero Structured vs. Full Matrices
  • 1.4 Overview of Matrix Computations
  • 1.5 Reordering of Sparse Matrices
  • 1.6 Utility Codes
  • 2 Direct Algorithms of Decompositions of Matrices by Non-Orthogonal Transformations
  • Introduction
  • 2.1 Gauss Elimination Matrix
  • 2.2 LU Decomposition
  • 2.2.1 LU Decomposition by Gauss Elimination
  • 2.2.2 LU Decomposition by Crout Procedure
  • 2.2.3 Driver of LU Decomposition
  • 2.2.4 LU Decomposition of an Upper Hessenberg Matrix
  • 2.2.5 LU Decomposition of a Band Matrix
  • 2.3 LDU Decomposition
  • 2.3.1 LDU Decomposition by Gauss Elimination
  • 2.3.2 LDU Decomposition by Crout Procedure
  • 2.3.3 Driver of LDU Decomposition
  • 2.4 Congruent Decomposition Algorithms for Symmetric Matrices
  • 2.4.1 Reduction of Symmetric Matrix to Diagonal (LDLt)
  • 2.4.2 Cholesky Decomposition (LLt)
  • 2.4.3 Reduction of Symmetric Matrix to Tri-Diagonal (LTLt)
  • 2.4.4 Reduction of Symmetric Matrix to Block Diagonal (LBLt)
  • 2.4.5 Modified Cholesky Decomposition (xLLt)
  • 2.5 Similarity Decomposition Algorithms
  • 2.5.1 Reduction of Square Matrix to Hessenberg by Gauss Elimination
  • 2.5.2 Reduction of Square Matrix to Tri-Diagonal by Gauss Elimination
  • 2.5.3 Reduction of Square Matrix to Tri-Diagonal by Lanczos Procedure
  • 2.6 Reduction of a Symmetric Matrix to Tri-Diagonal and Another Symmetric Matrix to Diagonal of 1s and 0s
  • 2.6.1 Hyper Rotation and Hyper Reflection
  • 2.6.2 GTJGt Decomposition by Hyperbolic Rotation or Hyperbolic Reflection
  • 3 Direct Algorithms of Decompositions of Matrices by Orthogonal Transformations
  • Introduction
  • 3.1 Householder Reflection Matrix and Givens Rotation Matrix
  • 3.2 QR Decomposition
  • 3.2.1 QR Decomposition by Householder Reflections
  • 3.2.2 QR Decomposition by Givens Rotations
  • 3.2.3 QR Decomposition by Gram-Schmidt Orthogonalizations
  • 3.2.4 Driver of QR Decomposition
  • 3.2.5 QR Decomposition of an Upper Hessenberg Matrix
  • 3.2.6 QR Decomposition of a Band Matrix
  • 3.3 Complete Orthogonal Decomposition (QLZ)
  • 3.4 Reduction of Matrix to Bi-Diagonal
  • 3.4.1 QBZ Decomposition by Householder Reflections
  • 3.4.2 QBZ Decomposition by Givens Rotations
  • 3.4.3 QBZ Decomposition by Golub-Kahan-Lanczos Procedure
  • 3.5 Reduction of Square Matrix to Hessenberg by Similarity Transformations
  • 3.5.1 QHQt Decomposition by Householder Reflections
  • 3.5.2 QHQt Decomposition by Givens Rotations
  • 3.5.3 QHQt Decomposition by Arnoldi Procedure
  • 3.6 Reduction of Symmetric Matrix to Tri-Diagonal by Congruent Transformations
  • 3.6.1 QTQt Decomposition by Householder Reflections
  • 3.6.2 QTQt Decomposition by Givens Rotations
  • 3.6.3 QTQt Decomposition by Lanczos Procedure
  • 3.7 Reduction of a Matrix to Upper Hessenberg and Another Matrix to Upper Triangular
  • 3.7.1 QHRZ Decomposition by Orthogonal Transformations
  • 3.7.2 QHRZ Decomposition by Arnoldi Procedure
  • 4 Direct Algorithms of Solution of Linear Equations
  • Introduction
  • 4.1 Brief Introduction of Pseudo-Inverse
  • 4.2 Linear Constraints to Linear Equations
  • 4.2.1 When A is a Rectangular Matrix
  • 4.2.2 When A is a Square Matrix
  • 4.2.3 When Bx=b Has No Solution
  • 4.3 Solution of Five Elementary Linear Equations
  • 4.3.1 Linear Equations of a Zero Matrix: Ox=b
  • 4.3.2 Linear Equations of a Diagonal Matrix: Dx=b
  • 4.3.3 Linear Equations of an Orthogonal Matrix: Qx=b
  • 4.3.4 Linear Equations of a Lower Triangular Matrix: Lx=b
  • 4.3.5 Linear Equations of an Upper Triangular Matrix: Ux=b
  • 4.4 Gauss and Gauss-Jordan Elimination Algorithms
  • 4.5 Householder and Givens Elimination Algorithms
  • 4.6 Solution Algorithms Based on Matrix Decompositions
  • 4.7 Linear Systems Arising From Interpolation
  • 4.7.1 Introducing Function Interpolation
  • 4.7.2 Polynomial Interpolation
  • 4.7.3 Trigonometric Interpolation
  • 5 Iterative Algorithms of Solution of Linear Equations
  • Introduction
  • 5.1 Overview of Iterative Algorithms
  • 5.2 Stationary Iterations: Jacobi, Gauss-Seidel, and More
  • 5.3 General Methodology of Non-Stationary Iterations
  • 5.4 Non-Stationary Iterations Applied to Symmetric Matrix
  • 5.4.1 CG: Conjugate Gradient V is Full Orthogonal Bases of K(A,y0,m) and W=V
  • 5.4.2 CR: Conjugate Residual V is Full Orthogonal Bases of K(A,y0,m) and W=AV
  • 5.4.3 'C'E: 'Conjugate' Error W is Full Orthogonal Bases of K(A,y0,m) and V=AW
  • 5.4.4 Numerical Examples for CG, CR, and CE
  • 5.5 Non-Stationary Iterations Applied to Unsymmetric Matrix
  • 5.5.1 FOM: Full Orthogonalization Method V is Full Orthogonal Bases of K(A,y0,m) and W=V
  • 5.5.2 GMRES: Generalized Minimum Residual V is Full Orthogonal Bases of K(A,y0,m) and W=AV
  • 5.5.3 GMERR: Generalized Minimum Error W is Full Orthogonal Bases of K(A',y0,m) and V=A'W
  • 5.5.4 DIOM: Direct Incomplete Orthogonalization Method V is Incomplete Orthogonal Bases of K(A,y0,m) and W=?
  • 5.5.5 DQGMRES: Direct Quasi-Generalized Minimum Residual V is Incomplete Orthogonal Bases of K(A,y0,m) and W=?
  • 5.5.6 DQGMERR: Direct Quasi-Generalized Minimum Error W is Incomplete Orthogonal Bases of K(A',y0,m) and V=?
  • 5.5.7 BCG: Bi-Conjugate Gradient V and W are Bi-Orthogonal Bases of K(A,y0,m) and K(A',y0,m)
  • 5.5.8 QMR: Quasi-Minimal Residual V is Bi-Orthogonal Bases of K(A,y0,m) to K(A',y0,m) and W=?
  • 5.5.9 QME: Quasi-Minimal Error W is Bi-Orthogonal Bases of K(A,y0,m) to K(A',y0,m) and V=?
  • 5.5.10 TFBCG: Transpose Free Bi-Conjugate Gradient V=? and W=? to Make it Equivalent to BICGSTAB
  • 5.5.11 TFQMR: Transpose Free Quasi-Minimal Residual V=? and W=? to Make it Equivalent to TFQMR
  • 5.5.12 Numerical Examples
  • 5.6 Special Algorithms for Normal Equations
  • 5.6.1 Symmetrizing Transformations
  • 5.6.2 Stationary Iterations for Normal Equations
  • 5.6.3 Conjugate Gradient for Normal Equations
  • 5.6.4 Remarks on Iteration Algorithms for Constrained Linear Equations
  • 5.7 Other Important Topics
  • 5.7.1 Preconditioning Techniques
  • 5.7.2 Parallel Computations
  • 5.7.3 Algebraic Multigrid Method
  • 5.7.4 Domain Decomposition Method
  • 6 Direct Algorithms of Solution of Eigenvalue Problem
  • Introduction
  • 6.1 Algorithms of Solution of 2x2 Eigenvalue Problems
  • 6.2 Bound Estimation of Symmetric Eigenvalue Problems
  • 6.3 Power Iterations and Subspace Iterations
  • 6.4 QR Iterations
  • 6.4.1 Reduce A to Hessenberg Form
  • 6.4.2 A QR Iteration Step for a Hessenberg Matrix
  • 6.4.3 Deflation
  • 6.4.4 Shift
  • 6.4.5 MATLAB Implementations of QR Iterations Algorithms
  • 6.5 Calculation of Eigenvectors by Inverse Iterations
  • 6.6 Calculation of Invariant Subspace by Eigenvalue Reordering
  • 6.7 Special Algorithms for Symmetric Tri-Diagonal Matrix
  • 6.7.1 Eigenvalues by Bisection
  • 6.7.2 A Divide-and-Conquer Algorithm
  • 6.8 Jacobi Iterations for Symmetric Matrix
  • 6.9 Algorithms for Symmetrical Positive Definite Ax=λBx
  • 6.10 Algorithms for Unsymmetrical Ax=λBx
  • 6.10.1 Hessenberg-Triangular Reduction
  • 6.10.2 Deflation
  • 6.10.3 Shift
  • 6.10.4 MATLAB Implementation of QZ Algorithm
  • 6.11 Algorithms for Symmetric Positive Indefinite Ax=λBx
  • 6.11.1 A GJR Iteration Step for a Tri-Diagonal and Sign Pair
  • 6.11.2 Deflation
  • 6.11.3 Shift
  • 6.11.4 MATLAB Implementations of GJR Iterations Algorithms
  • 7 Iterative Algorithms of Solution of Eigenvalue Problem
  • Introduction
  • 7.1 General Methodology of Iterative Eigenvalue Algorithms
  • 7.1.1 Basic Ideas
  • 7.1.2 Two Ways to Approximate Dominant Invariant Subspace
  • 7.1.3 Shift and Inverse to Approximate Non-Dominant Invariant Subspace
  • 7.1.4 Better Approximating an Invariant Subspace
  • 7.2 Power Iterations and Subspace Iterations for Ax=λx
  • 7.3 Lanczos Iterations for Ax=λx of a Symmetric Matrix
  • 7.3.1 Basic Lanczos Algorithm
  • 7.3.2 Orthogonality of U
  • 7.3.3 Implicit Restart for a Better Approximate Invariant Subspace
  • 7.3.4 Purging of Converged Undesired Eigenvectors
  • 7.3.5 Locking of Converged Desired Eigenvectors
  • 7.3.6 MATLAB Implementation of Lanczos Algorithm
  • 7.4 Arnoldi Iterations for Ax=λx of an Unsymmetric Matrix
  • 7.4.1 Basic Arnoldi Algorithm
  • 7.4.2 Orthogonality of Q
  • 7.4.3 Implicit Restart for a Better Approximate Invariant Subspace
  • 7.4.4 Purging of Converged Undesired Eigenvectors
  • 7.4.5 Locking of Converged Desired Eigenvectors
  • 7.4.6 MATLAB Implementation of Arnoldi Algorithm
  • 7.5 Other Important Topics
  • 7.5.1 Iterative Algorithms of Generalized Eigenvalue Problems
  • 7.5.2 Nonlinear Eigenvalue Problems
  • 7.5.3 Jacobi-Davidson's Method
  • 8 Algorithms of Solution of Singular Value Decomposition
  • Introduction
  • 8.1 Introduction of SVD and its Algorithms
  • 8.2 SVD Algorithms of a Matrix of 1xn or mx1 or 2x2
  • 8.3 Jacobi Iteration Algorithm for SVD
  • 8.4 QR Iteration Algorithm for SVD
  • 8.5 Special SVD Algorithms of Bi-Diagonal Matrix
  • 8.5.1 SVD by Bisection
  • 8.5.2 A Divide-and-Conquer Algorithm
  • 8.6 Lanczos Algorithm for SVD
  • Bibliography
  • Index

File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; MacOS X; Linux): Install the free Adobe Digital Editions software before downloading (see the e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see the e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many more (not Kindle)

The PDF format displays each book page identically on any hardware, which makes it well suited to the complex layouts used in textbooks and technical references (figures, tables, columns, footnotes). On the small displays of e-readers or smartphones, PDFs can be cumbersome because they require a lot of scrolling. Adobe DRM applies a "hard" copy protection: if the necessary requirements are not met, the e-book cannot be opened, so prepare your reading hardware before downloading.

Further information is available in our e-book help.


Download (available immediately)

€102.28
incl. 19% VAT
Download / single license
PDF with Adobe DRM
see system requirements
Order e-book
