Preface xvii
Preface to the First Edition xix
Introduction xxi
About the Companion Website xxxi
Part I Definitions, Basic Concepts, and Matrix Operations 1
1 Vector Spaces, Subspaces, and Linear Transformations 3
1.1 Vector Spaces 3
1.1.1 Euclidean Space 3
1.2 Base of a Vector Space 5
1.3 Linear Transformations 7
1.3.1 The Range and Null Spaces of a Linear Transformation 8
Reference 9
Exercises 9
2 Matrix Notation and Terminology 11
2.1 Plotting of a Matrix 14
2.2 Vectors and Scalars 16
2.3 General Notation 16
Exercises 17
3 Determinants 21
3.1 Expansion by Minors 21
3.1.1 First- and Second-Order Determinants 22
3.1.2 Third-Order Determinants 23
3.1.3 n-Order Determinants 24
3.2 Formal Definition 25
3.3 Basic Properties 27
3.3.1 Determinant of a Transpose 27
3.3.2 Two Rows the Same 28
3.3.3 Cofactors 28
3.3.4 Adding Multiples of a Row (Column) to a Row (Column) 30
3.3.5 Products 30
3.4 Elementary Row Operations 34
3.4.1 Factorization 35
3.4.2 A Row (Column) of Zeros 36
3.4.3 Interchanging Rows (Columns) 36
3.4.4 Adding a Row to a Multiple of a Row 36
3.5 Examples 37
3.6 Diagonal Expansion 39
3.7 The Laplace Expansion 42
3.8 Sums and Differences of Determinants 44
3.9 A Graphical Representation of a 3 × 3 Determinant 45
References 46
Exercises 47
4 Matrix Operations 51
4.1 The Transpose of a Matrix 51
4.1.1 A Reflexive Operation 52
4.1.2 Vectors 52
4.2 Partitioned Matrices 52
4.2.1 Example 52
4.2.2 General Specification 54
4.2.3 Transposing a Partitioned Matrix 55
4.2.4 Partitioning Into Vectors 55
4.3 The Trace of a Matrix 55
4.4 Addition 56
4.5 Scalar Multiplication 58
4.6 Equality and the Null Matrix 58
4.7 Multiplication 59
4.7.1 The Inner Product of Two Vectors 59
4.7.2 A Matrix-Vector Product 60
4.7.3 A Product of Two Matrices 62
4.7.4 Existence of Matrix Products 65
4.7.5 Products With Vectors 65
4.7.6 Products With Scalars 68
4.7.7 Products With Null Matrices 68
4.7.8 Products With Diagonal Matrices 68
4.7.9 Identity Matrices 69
4.7.10 The Transpose of a Product 69
4.7.11 The Trace of a Product 70
4.7.12 Powers of a Matrix 71
4.7.13 Partitioned Matrices 72
4.7.14 Hadamard Products 74
4.8 The Laws of Algebra 74
4.8.1 Associative Laws 74
4.8.2 The Distributive Law 75
4.8.3 Commutative Laws 75
4.9 Contrasts With Scalar Algebra 76
4.10 Direct Sum of Matrices 77
4.11 Direct Product of Matrices 78
4.12 The Inverse of a Matrix 80
4.13 Rank of a Matrix-Some Preliminary Results 82
4.14 The Number of LIN Rows and Columns in a Matrix 84
4.15 Determination of the Rank of a Matrix 85
4.16 Rank and Inverse Matrices 87
4.17 Permutation Matrices 87
4.18 Full-Rank Factorization 89
4.18.1 Basic Development 89
4.18.2 The General Case 91
4.18.3 Matrices of Full Row (Column) Rank 91
References 92
Exercises 92
5 Special Matrices 97
5.1 Symmetric Matrices 97
5.1.1 Products of Symmetric Matrices 97
5.1.2 Properties of AA' and A'A 98
5.1.3 Products of Vectors 99
5.1.4 Sums of Outer Products 100
5.1.5 Elementary Vectors 101
5.1.6 Skew-Symmetric Matrices 101
5.2 Matrices Having All Elements Equal 102
5.3 Idempotent Matrices 104
5.4 Orthogonal Matrices 106
5.4.1 Special Cases 107
5.5 Parameterization of Orthogonal Matrices 109
5.6 Quadratic Forms 110
5.7 Positive Definite Matrices 113
References 114
Exercises 114
6 Eigenvalues and Eigenvectors 119
6.1 Derivation of Eigenvalues 119
6.1.1 Plotting Eigenvalues 121
6.2 Elementary Properties of Eigenvalues 122
6.2.1 Eigenvalues of Powers of a Matrix 122
6.2.2 Eigenvalues of a Scalar-by-Matrix Product 123
6.2.3 Eigenvalues of Polynomials 123
6.2.4 The Sum and Product of Eigenvalues 124
6.3 Calculating Eigenvectors 125
6.3.1 Simple Roots 125
6.3.2 Multiple Roots 126
6.4 The Similar Canonical Form 128
6.4.1 Derivation 128
6.4.2 Uses 130
6.5 Symmetric Matrices 131
6.5.1 Eigenvalues All Real 132
6.5.2 Symmetric Matrices Are Diagonable 132
6.5.3 Eigenvectors Are Orthogonal 132
6.5.4 Rank Equals Number of Nonzero Eigenvalues for a Symmetric Matrix 135
6.6 Eigenvalues of Orthogonal and Idempotent Matrices 135
6.6.1 Eigenvalues of Symmetric Positive Definite and Positive Semidefinite Matrices 136
6.7 Eigenvalues of Direct Products and Direct Sums of Matrices 138
6.8 Nonzero Eigenvalues of AB and BA 140
References 141
Exercises 141
7 Diagonalization of Matrices 145
7.1 Proving the Diagonability Theorem 145
7.1.1 The Number of Nonzero Eigenvalues Never Exceeds Rank 145
7.1.2 A Lower Bound on r(A − λkI) 146
7.1.3 Proof of the Diagonability Theorem 147
7.1.4 All Symmetric Matrices Are Diagonable 147
7.2 Other Results for Symmetric Matrices 148
7.2.1 Non-Negative Definite (n.n.d.) 148
7.2.2 Simultaneous Diagonalization of Two Symmetric Matrices 149
7.3 The Cayley-Hamilton Theorem 152
7.4 The Singular-Value Decomposition 153
References 157
Exercises 157
8 Generalized Inverses 159
8.1 The Moore-Penrose Inverse 159
8.2 Generalized Inverses 160
8.2.1 Derivation Using the Singular-Value Decomposition 161
8.2.2 Derivation Based on Knowing the Rank 162
8.3 Other Names and Symbols 164
8.4 Symmetric Matrices 165
8.4.1 A General Algorithm 166
8.4.2 The Matrix X'X 166
References 167
Exercises 167
9 Matrix Calculus 171
9.1 Matrix Functions 171
9.1.1 Function of Matrices 171
9.1.2 Matrices of Functions 174
9.2 Iterative Solution of Nonlinear Equations 174
9.3 Vectors of Differential Operators 175
9.3.1 Scalars 175
9.3.2 Vectors 176
9.3.3 Quadratic Forms 177
9.4 Vec and Vech Operators 179
9.4.1 Definitions 179
9.4.2 Properties of Vec 180
9.4.3 Vec-Permutation Matrices 180
9.4.4 Relationships Between Vec and Vech 181
9.5 Other Calculus Results 181
9.5.1 Differentiating Inverses 181
9.5.2 Differentiating Traces 182
9.5.3 Derivative of a Matrix with Respect to Another Matrix 182
9.5.4 Differentiating Determinants 183
9.5.5 Jacobians 185
9.5.6 Aitken's Integral 187
9.5.7 Hessians 188
9.6 Matrices with Elements That Are Complex Numbers 188
9.7 Matrix Inequalities 189
References 193
Exercises 194
Part II Applications of Matrices in Statistics 199
10 Multivariate Distributions and Quadratic Forms 201
10.1 Variance-Covariance Matrices 202
10.2 Correlation Matrices 203
10.3 Matrices of Sums of Squares and Cross-Products 204
10.3.1 Data Matrices 204
10.3.2 Uncorrected Sums of Squares and Products 204
10.3.3 Means, and the Centering Matrix 205
10.3.4 Corrected Sums of Squares and Products 205
10.4 The Multivariate Normal Distribution 207
10.5 Quadratic Forms and χ2-Distributions 208
10.5.1 Distribution of Quadratic Forms 209
10.5.2 Independence of Quadratic Forms 210
10.5.3 Independence and Chi-Squaredness of Several Quadratic Forms 211
10.5.4 The Moment and Cumulant Generating Functions for a Quadratic Form 211
10.6 Computing the Cumulative Distribution Function of a Quadratic Form 213
10.6.1 Ratios of Quadratic Forms 214
References 215
Exercises 215
11 Matrix Algebra of Full-Rank Linear Models 219
11.1 Estimation of β by the Method of Least Squares 220
11.1.1 Estimating the Mean Response and the Prediction Equation 223
11.1.2 Partitioning of Total Variation Corrected for the Mean 225
11.2 Statistical Properties of the Least-Squares Estimator 226
11.2.1 Unbiasedness and Variances 226
11.2.2 Estimating the Error Variance 227
11.3 Multiple Correlation Coefficient 229
11.4 Statistical Properties under the Normality Assumption 231
11.5 Analysis of Variance 233
11.6 The Gauss-Markov Theorem 234
11.6.1 Generalized Least-Squares Estimation 237
11.7 Testing Linear Hypotheses 237
11.7.1 The Use of the Likelihood Ratio Principle in Hypothesis Testing 239
11.7.2 Confidence Regions and Confidence Intervals 241
11.8 Fitting Subsets of the x-Variables 246
11.9 The Use of the R(·|·) Notation in Hypothesis Testing 247
References 249
Exercises 249
12 Less-Than-Full-Rank Linear Models 253
12.1 General Description 253
12.2 The Normal Equations 256
12.2.1 A General Form 256
12.2.2 Many Solutions 257
12.3 Solving the Normal Equations 257
12.3.1 Generalized Inverses of X'X 258
12.3.2 Solutions 258
12.4 Expected Values and Variances 259
12.5 Predicted y-Values 260
12.6 Estimating the Error Variance 261
12.6.1 Error Sum of Squares 261
12.6.2 Expected Value 262
12.6.3 Estimation 262
12.7 Partitioning the Total Sum of Squares 262
12.8 Analysis of Variance 263
12.9 The R(·|·) Notation 265
12.10 Estimable Linear Functions 266
12.10.1 Properties of Estimable Functions 267
12.10.2 Testable Hypotheses 268
12.10.3 Development of a Test Statistic for H0 269
12.11 Confidence Intervals 272
12.12 Some Particular Models 272
12.12.1 The One-Way Classification 272
12.12.2 Two-Way Classification, No Interactions, Balanced Data 273
12.12.3 Two-Way Classification, No Interactions, Unbalanced Data 276
12.13 The R(·|·) Notation (Continued) 277
12.14 Reparameterization to a Full-Rank Model 281
References 282
Exercises 282
13 Analysis of Balanced Linear Models Using Direct Products of Matrices 287
13.1 General Notation for Balanced Linear Models 289
13.2 Properties Associated with Balanced Linear Models 293
13.3 Analysis of Balanced Linear Models 298
13.3.1 Distributional Properties of Sums of Squares 298
13.3.2 Estimates of Estimable Linear Functions of the Fixed Effects 301
References 307
Exercises 308
14 Multiresponse Models 313
14.1 Multiresponse Estimation of Parameters 314
14.2 Linear Multiresponse Models 316
14.3 Lack of Fit of a Linear Multiresponse Model 318
14.3.1 The Multivariate Lack of Fit Test 318
References 323
Exercises 324
Part III Matrix Computations and Related Software 327
15 SAS/IML 329
15.1 Getting Started 329
15.2 Defining a Matrix 329
15.3 Creating a Matrix 330
15.4 Matrix Operations 331
15.5 Explanations of SAS Statements Used Earlier in the Text 354
References 357
Exercises 358
16 Use of MATLAB in Matrix Computations 363
16.1 Arithmetic Operators 363
16.2 Mathematical Functions 364
16.3 Construction of Matrices 365
16.3.1 Submatrices 365
16.4 Two- and Three-Dimensional Plots 371
16.4.1 Three-Dimensional Plots 374
References 378
Exercises 379
17 Use of R in Matrix Computations 383
17.1 Two- and Three-Dimensional Plots 396
17.1.1 Two-Dimensional Plots 397
17.1.2 Three-Dimensional Plots 404
References 408
Exercises 408
Appendix 413
Index 475
It is difficult to pinpoint the historical origin of matrices. Given the association between matrices and simultaneous linear equations, it can be argued that the history of matrices goes back to at least the third century BC. The Babylonians used simultaneous linear equations to study agricultural problems in the fertile region between the Tigris and Euphrates rivers in ancient Mesopotamia (present-day Iraq). They inscribed their findings, using a wedge-shaped script, on soft clay tablets that were later baked in ovens, resulting in what are known as cuneiform tablets (see Figures 0.1 and 0.2).
This form of writing goes back to about 3000 BC (see Knuth, 1972). For example, a tablet dating from around 300 BC was found to contain the description of a problem that could be formulated as two simultaneous linear equations in two variables: the total area of two fields, each field's rate of grain production, and their total yield were all given, and the problem was to determine the area of each field (see O'Connor and Robertson, 1996). The ancient Chinese also dealt with simultaneous linear equations between 200 BC and 100 BC in studying, for example, corn production. In fact, the text Nine Chapters on the Mathematical Art, written during the Han Dynasty, played an important role in the development of mathematics in China; it was a practical handbook of mathematics consisting of 246 problems pertaining to engineering, surveying, trade, and taxation (see O'Connor and Robertson, 2003).
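To make the form of the Babylonian problem concrete, here is a minimal R sketch of it as a pair of simultaneous linear equations. The specific figures used (total area 1800, production rates 2/3 and 1/2, total yield 1100) are the ones commonly quoted for this tablet but are not stated in the text above, so treat them as illustrative assumptions:

# Two fields of unknown areas x1 and x2 (figures are illustrative):
#   x1 + x2           = 1800   (total area)
#   (2/3)x1 + (1/2)x2 = 1100   (total yield of grain)
A <- matrix(c(1,   1,
              2/3, 1/2), nrow = 2, byrow = TRUE)
b <- c(1800, 1100)
solve(A, b)   # returns the two areas: 1200 and 600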
The modern development of matrices and matrix algebra did not materialize until the nineteenth century, with the work of several mathematicians, including Augustin-Louis Cauchy, Ferdinand Georg Frobenius, Carl Friedrich Gauss, Arthur Cayley, and James Joseph Sylvester. The word "matrix" was first introduced by Sylvester in 1850, and the terminology became more common after the publication of Cayley's (1858) memoir on the theory of matrices. In 1829, Cauchy gave the first valid proof that the eigenvalues of a symmetric matrix must be real; he was also instrumental in creating the theory of determinants in his 1812 memoir. Frobenius (1877) wrote an important monograph that provided a unifying theory of matrices, combining the work of several other mathematicians. Hawkins (1974) described Frobenius' paper as representing "an important landmark in the history of the theory of matrices." Hawkins (1975) discussed Cauchy's work and its historical significance for the algebraic eigenvalue problems considered during the eighteenth century.
Figure 0.1 A Cuneiform Tablet with 97 Linear Equations (YBC4695-1). Yale Babylonian Collection, Yale University Library, New Haven, CT.
Science historians and mathematicians have regarded Cayley as the founder of the theory of matrices. His 1858 memoir was considered "the foundation upon which other mathematicians were able to erect the edifice we now call the theory of matrices" (see Hawkins, 1974, p. 561). Cayley was interested in devising a contracted notation to represent a system of m linear equations in n variables of the form

y1 = a11 x1 + a12 x2 + ... + a1n xn,
y2 = a21 x1 + a22 x2 + ... + a2n xn,
...
ym = am1 x1 + am2 x2 + ... + amn xn,
where the aij's are given coefficients. Cayley and other contemporary algebraists proposed replacing the m equations with a single matrix equation such as

y = Ax,

where y = (y1, y2, ..., ym)', A is the m × n matrix of coefficients (aij), and x = (x1, x2, ..., xn)'.
Figure 0.2 An Old Babylonian Mathematical Text with Linear Equations (YBC4695-2). Yale Babylonian Collection, Yale University Library, New Haven, CT.
Cayley regarded such a scheme as an operator acting upon the variables x1, x2, ..., xn to produce the variables y1, y2, ..., ym. This is a multivariable extension of the action of the single coefficient a upon x to produce ax, except that the rules associated with such an extension differ from the single-variable case. This led to the development of the algebra of matrices.
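As a minimal sketch of this operator view in R (one of the languages covered in Part III), with an arbitrary illustrative coefficient matrix: the m × n matrix A acts on the n variables in x to produce the m variables in y.

# y = A x: a 2 x 3 matrix acting on 3 variables to produce 2 variables
A <- matrix(c(1, 2, 3,
              4, 5, 6), nrow = 2, byrow = TRUE)
x <- c(1, 0, -1)
y <- A %*% x   # one matrix equation in place of m scalar equations
y              # a 2 x 1 result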
Even though Cayley left his mark on the history of matrices, it should be pointed out that his role in this endeavor was perhaps overrated by historians, to the point of eclipsing the contributions of other mathematicians in the eighteenth and nineteenth centuries. Hawkins (1974) indicated that the ideas Cayley expressed in his 1858 memoir were not particularly original, citing work by Laguerre (Edmond Nicolas Laguerre), Frobenius, and other mathematicians who had developed similar ideas during the same period, but without knowledge of Cayley's memoir. This conclusion was endorsed by Farebrother (1997) and Grattan-Guinness (1994). It is perhaps more accurate to conclude, as Hawkins (1975, p. 570) did, that "the history of matrix theory involved the efforts of many mathematicians, that it was indeed an international undertaking." Higham (2008) provided an interesting commentary on the work of Cayley and Sylvester, noting that the multi-volume collected works of both were freely available online at the University of Michigan Historical Mathematics Collection via the URL http://quod.lib.umich.edu/u/umhistmath/ (for Cayley, use http://name.umdl.umich.edu/ABS3153.0013.001, and for Sylvester, use http://name.umdl.umich.edu/AAS8085.0002.001).
The history of determinants can be traced to methods used by the ancient Chinese and Japanese to solve systems of linear equations. Seki Kowa, a distinguished Japanese mathematician of the seventeenth century, discovered the expansion of a determinant in solving simultaneous equations (see, e.g., Smith, 1958, p. 440). However, the methods used by the Chinese and the Japanese did not resemble those used nowadays in dealing with determinants. In the West, the theory of determinants is believed to have originated with the German mathematician Gottfried Leibniz in the seventeenth century, several years after the work of Seki Kowa. The actual development of this theory, however, did not begin until 1750, with the publication of the book by Gabriel Cramer; indeed, the method of solving a system of n linear equations in n unknowns by means of determinants is known as Cramer's rule. The term "determinant" was first introduced by Gauss in 1801 in connection with quadratic forms. In 1812, Cauchy developed the theory of determinants as it is known today. Cayley was the first to introduce the present-day notation for a determinant, namely vertical bars enclosing a square array, in a paper he wrote in 1841. So, just as in the case of matrices, the history of determinants was an international undertaking shaped by the efforts of many mathematicians. For more interesting facts about the history of determinants, see Miller (1930) and Price (1947).
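Since Cramer's rule is mentioned above, the following R sketch shows how it expresses each unknown as a ratio of determinants: the ith unknown is det(Ai)/det(A), where Ai is A with its ith column replaced by the right-hand side b. The 3 × 3 system below is an arbitrary illustration, not an example from the text.

# Cramer's rule for a nonsingular system A x = b
cramer <- function(A, b) {
  d <- det(A)
  stopifnot(abs(d) > 1e-12)          # A must be nonsingular
  sapply(seq_along(b), function(i) {
    Ai <- A
    Ai[, i] <- b                     # replace the ith column of A by b
    det(Ai) / d
  })
}
A <- matrix(c(2, 1, 1,
              1, 3, 2,
              1, 0, 0), nrow = 3, byrow = TRUE)
b <- c(4, 5, 6)
cramer(A, b)                         # agrees with solve(A, b)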
The entry of matrices into statistics was slow. Farebrother (1999) indicated that matrix algebra did not emerge in statistics until the early part of the twentieth century; even then, determinants were used in place of matrices in solving equations, which were written out in longhand. Searle (2000, p. 25) indicated that the year 1930 was a good starting point for the entry of matrices into statistics. That was the year of Volume 1 of the Annals of Mathematical Statistics, whose very first paper, Wicksell (1930), was "Remarks on Regression." The paper considered finding the least-squares estimates for a linear regression model with one independent variable; the normal equations for the model's parameter estimates were expressed in terms of determinants only, and no matrices were used. Today, such a subject would be replete with matrices. Even in some of the books that appeared in the early 1950s, lengthy arguments and numerous equations were given to describe computational methods for general regression models.

The slow adoption of matrices in statistics was partially attributed to the difficulty of producing numerical results in situations involving, for example, regression models with several variables. In particular, the use of a matrix inverse posed a considerable computational difficulty before the advent of computers, which came about only in the last 50 years. Today, such computational tasks are carried out quickly and effortlessly for a matrix of reasonable size using computer software. Searle (2000) recalled the great excitement that he and other classmates in a small computer group felt during his graduate student days at Cornell University in 1959, when they inverted a 10-by-10 matrix in 7 minutes. At the time this was considered a remarkable feat: only a year or two earlier, a friend had inverted a 40-by-40 matrix by hand using electric Marchant or Monroe calculators, a task that took 6 weeks! An early step toward more advanced techniques for inverting a matrix was the Doolittle method, as described in Anderson and Bancroft (1952, Chapter 15). It is interesting to note that this method was introduced in the U.S. Coast and Geodetic Survey Report of 1878 (see Doolittle, 1881).
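In modern terms, the Doolittle method amounts to an LU factorization in which L is unit lower triangular; inverting a matrix, or solving its equations, then reduces to easy triangular solves. The R sketch below is written for illustration only; it is not the presentation in Anderson and Bancroft (1952), and it omits pivoting, so it assumes no zero pivot is encountered.

# Doolittle factorization A = L U (L unit lower triangular, no pivoting)
doolittle <- function(A) {
  n <- nrow(A)
  L <- diag(n)
  U <- matrix(0, n, n)
  for (k in 1:n) {
    for (j in k:n)                   # kth row of U
      U[k, j] <- A[k, j] - sum(L[k, seq_len(k - 1)] * U[seq_len(k - 1), j])
    if (k < n) for (i in (k + 1):n)  # kth column of L
      L[i, k] <- (A[i, k] - sum(L[i, seq_len(k - 1)] * U[seq_len(k - 1), k])) / U[k, k]
  }
  list(L = L, U = U)
}
A <- matrix(c(4, 3, 0,
              6, 3, 1,
              0, 2, 5), nrow = 3, byrow = TRUE)
f <- doolittle(A)
max(abs(f$L %*% f$U - A))            # ~0: the factorization reproduces A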
Alexander Craig Aitken made important contributions to promoting the use of matrix algebra in statistics in the 1930s. He was a brilliant mathematician from New Zealand with phenomenal mental abilities. It was reported that he could recite the irrational number π to 707 decimal places and multiply two nine-digit numbers in his head in 30 seconds. He was therefore referred to...