Introduction to Econometrics - Matrix Algebra


What follows in this section is an introduction to matrix algebra, which is essential for understanding the discussion of more advanced econometrics and statistics. Most properties are only defined or described, without examples, exercises, or rigorous proofs. In any case, remember: mathematics is pure fun!

We write a column vector and a row vector as follows

a = (a1, a2, ..., an)'   (a column vector with n elements)

a' = (a1, a2, ..., an)   (the corresponding row vector)

where the prime (') denotes transposition.

The length of a vector is defined as

||a|| = sqrt(a'a) = sqrt(a1^2 + a2^2 + ... + an^2)

Note the following obvious relationships

(a')' = a   and   a'b = b'a = a1*b1 + a2*b2 + ... + an*bn
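These basic definitions are easy to experiment with numerically. The following is a minimal sketch, assuming Python with the NumPy library:

import numpy as np

# a column vector and its transpose (a row vector)
a = np.array([[1.0], [2.0], [3.0]])     # 3*1 column vector
a_row = a.T                             # 1*3 row vector, a'

# the length of the vector equals sqrt(a'a)
length = np.sqrt((a.T @ a).item())
print(length)                           # 3.7416...
print(np.linalg.norm(a))                # the same value, computed directly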


We write an n*m matrix (n rows, m columns) as follows

A = (a_ij),   i = 1, ..., n;  j = 1, ..., m

where a_ij denotes the element in the i-th row and the j-th column.


Addition of matrices (of the same order) can be defined as

A + B = (a_ij + b_ij)


whereas multiplication of an n*m matrix A and an m*p matrix B can be written as

AB = C = (c_ij)   with   c_ij = Σ_k a_ik * b_kj   (k = 1, ..., m)


Transposition of matrices has two simple properties

(A')' = A

(AB)' = B'A'
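As a purely illustrative aside, these addition, multiplication, and transposition rules can be checked in a few lines of Python/NumPy (an assumed environment for this sketch):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # 3*2
B = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])   # 3*2, same order as A
C = np.array([[1.0, 4.0], [2.0, 5.0]])               # 2*2

S = A + B                                  # element-wise addition
P = A @ C                                  # (3*2)(2*2) product

print(np.allclose(A.T.T, A))               # (A')' = A
print(np.allclose((A @ C).T, C.T @ A.T))   # (AC)' = C'A'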


The multiplication of two diagonal matrices results in a diagonal matrix

diag(a1, ..., an) * diag(b1, ..., bn) = diag(a1*b1, ..., an*bn)


where a 'diagonal matrix' can be defined as a square matrix whose off-diagonal elements are all zero

A = diag(a1, a2, ..., an),   a_ij = 0 for all i ≠ j


A special diagonal matrix is a matrix where all diagonal elements are equal to one (the identity matrix, denoted I). Therefore, it is obvious that

AI = IA = A


A matrix with only zero elements (the null matrix O) is neutral in addition

A + O = O + A = A


Vectors a1, a2, ..., ak are said to be linearly independent if

c1*a1 + c2*a2 + ... + ck*ak = 0   implies   c1 = c2 = ... = ck = 0


Now we are able to define the rank of a matrix as the number of linearly independent rows or columns. Also note that all zero matrices have rank = 0. The rank of an (m*n) matrix is equal to the order of the largest square submatrix with a determinant different from zero, where the determinant of an (n*n) matrix is defined by

det(A) = |A| = Σ (-1)^t * a_1j1 * a_2j2 * ... * a_njn

the sum being taken over all permutations (j1, j2, ..., jn) of (1, 2, ..., n), and t being the number of inversions of the permutation.

Note that the determinant of a two by two matrix can be found by

det(A) = a11*a22 - a12*a21
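A small numerical illustration of determinants and rank (again a sketch in Python/NumPy, with arbitrarily chosen matrices):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.det(A))                 # a11*a22 - a12*a21 = -2

# a matrix whose second row is twice the first has linearly dependent rows
B = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.matrix_rank(B))         # 1
print(np.linalg.det(B))                 # 0 (up to rounding error)

# the rank of a product never exceeds the rank of either factor
print(np.linalg.matrix_rank(A @ B))     # 1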


It is also quite obvious that the rank of a product of two matrices is less than or equal to the rank of each of the two factors: rank(AB) <= rank(A) and rank(AB) <= rank(B).

Transposition does not alter the rank of a matrix.

If a square matrix has full rank (rank equal to the number of rows or columns) we call this matrix nonsingular.

Furthermore, if we have a square nonsingular matrix A, then the inverse of A can be defined as the matrix A^{-1} for which

A * A^{-1} = A^{-1} * A = I

and this inverse exists if and only if

det(A) ≠ 0


The inverse of an inverse matrix is equal to the original matrix

(A^{-1})^{-1} = A


and the inverse of the transpose is the transpose of the inverse matrix

(A')^{-1} = (A^{-1})'


A diagonal matrix is nonsingular if, and only if, all diagonal elements are different from zero. Moreover, the inverse of such a matrix can be found by writing a diagonal matrix in which all diagonal elements are replaced by their respective reciprocals.
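The inverse and its properties can be illustrated with a short Python/NumPy sketch (the particular matrices are of course only examples):

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
A_inv = np.linalg.inv(A)

print(np.allclose(A @ A_inv, np.eye(2)))          # A A^{-1} = I
print(np.allclose(np.linalg.inv(A_inv), A))       # (A^{-1})^{-1} = A
print(np.allclose(np.linalg.inv(A.T), A_inv.T))   # (A')^{-1} = (A^{-1})'

# the inverse of a diagonal matrix contains the reciprocals of its elements
D = np.diag([2.0, 4.0, 5.0])
print(np.allclose(np.linalg.inv(D), np.diag([0.5, 0.25, 0.2])))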

The determinant of a product AB (if AB exists) is equal to the product of the determinants if both matrices are square

det(AB) = det(A) * det(B)


The rank of AB and the rank of CA are equal to the rank of A if B and C are nonsingular.

If A and B are of the same order, then

rank(A + B) <= rank(A) + rank(B)



If a square matrix M satisfies

M * M = M

then M is said to be idempotent. In our discussions we will also always assume that these idempotent matrices are symmetric (which means that the element in the i-th row and j-th column is equal to the element in the j-th row and i-th column).

An important example of an idempotent matrix is

M = I - (1/n) * ι * ι'     (I.IV-20)

where ι (iota) is a column vector of n ones.


which can be shown quite easily

M * M = (I - (1/n)*ι*ι') * (I - (1/n)*ι*ι') = I - (2/n)*ι*ι' + (1/n^2)*ι*(ι'ι)*ι' = I - (1/n)*ι*ι' = M

since ι'ι = n.


Idempotent matrices are very important in order to write variables as deviations from their mean. For instance, if B is an observation matrix (each column containing the observations on one variable) and M is the idempotent matrix of (I.IV-20), then we can write

M*B = B - (1/n)*ι*ι'*B = B - ι*b̄'   where b̄' = (1/n)*ι'B is the row vector of column means

where obviously all columns of B have been written as deviations from their respective means.

Even the sum of squares and cross products B'B of an observation matrix B can be written in deviations from the mean: (MB)'(MB) = B'M'MB = B'MB (where M is again symmetric and idempotent).

When using two different matrices B and C it is still possible to write the sum of cross products B'C in deviations from the mean as follows: (MB)'(MC) = B'MC.
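A minimal numerical sketch of this 'deviation from the mean' device, assuming Python/NumPy and a small artificial observation matrix:

import numpy as np

n = 5
iota = np.ones((n, 1))                          # column vector of ones
M = np.eye(n) - (1.0 / n) * (iota @ iota.T)     # the idempotent matrix of (I.IV-20)

B = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0],
              [4.0, 40.0],
              [5.0, 50.0]])                     # 5*2 observation matrix

print(np.allclose(M @ M, M))                    # M is idempotent
print(M @ B)                                    # columns of B as deviations from their means
print(np.allclose((M @ B).T @ (M @ B), B.T @ M @ B))   # (MB)'(MB) = B'MB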

Define the trace of a square matrix A as the sum of its diagonal elements

tr A = Σ_i a_ii   (i = 1, ..., n)


It is obvious that the following rules hold for the trace of a matrix (illustrated below):

a) tr (k A) = k tr A where k is a real number

b) tr(A+B) = tr A + tr B

c) tr (AB) = tr (BA)

d) tr A = rank A if A is idempotent
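A minimal numerical illustration of rules (a) to (d), assuming Python/NumPy:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 2.0]])
k = 3.0

print(np.isclose(np.trace(k * A), k * np.trace(A)))            # rule (a)
print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))  # rule (b)
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))            # rule (c)

# rule (d): for an idempotent matrix, tr A = rank A
n = 4
M = np.eye(n) - np.ones((n, n)) / n
print(np.trace(M), np.linalg.matrix_rank(M))                   # 3.0 and 3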

A quadratic form associated with a symmetric square matrix A is defined as the scalar

q = x'Ax = Σ_i Σ_j a_ij * x_i * x_j


(A is considered to be a square symmetric matrix).

A is called a positive definite matrix if and only if

x'Ax > 0   for all x ≠ 0


A is called a positive semidefinite matrix if

x'Ax >= 0   for all x


Obviously, a negative definite and a negative semidefinite matrix can be defined analogously (x'Ax < 0 for all x ≠ 0, respectively x'Ax <= 0 for all x).

An indefinite matrix, however, is defined as a matrix for which

x'Ax > 0 for some x   and   x'Ax < 0 for other x


The matrix A is positive definite if

all leading principal minors of A (the determinants of the upper-left 1*1, 2*2, ..., n*n submatrices) are positive


All principal minors and the determinant of a matrix A are positive if A is positive definite.

A very important property is that all positive definite matrices are nonsingular!

If A is positive definite (positive semidefinite) and B is nonsingular then B'AB is also positive definite (positive semidefinite).

If there exists an m*n matrix A with rank m < n then AA' is positive definite and A'A is positive semidefinite but never positive definite.

If there exists an m*n matrix A with rank r < m and r < n then AA' and A'A are both positive semidefinite but neither will be positive definite.

If there exists a square symmetric and positive definite matrix A then there always exists a nonsingular matrix P such that P'P = A. This is a very important property.
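One way to exhibit such a matrix P numerically is through the Cholesky factorization; the sketch below (Python/NumPy, example matrix chosen arbitrarily) uses the fact that np.linalg.cholesky returns a lower triangular L with A = LL', so that P = L' satisfies P'P = A:

import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric and positive definite

L = np.linalg.cholesky(A)                # lower triangular, A = L L'
P = L.T

print(np.allclose(P.T @ P, A))           # P'P = A
print(np.isclose(np.linalg.det(P), 0.0)) # False: P is nonsingular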

Define an eigenvalue (lambda) and an eigenvector x of the square matrix A as

Ax = λx   with x ≠ 0     (I.IV-28)


Since the eigenvector x is different from the zero vector, the homogeneous system (A - λI)x = 0 can only have a non-trivial solution if

det(A - λI) = 0

which is called the characteristic equation of A.


Note that eigenvalues are also called 'latent roots' or characteristic roots.

It is very interesting to note that the characteristic equation is a polynomial of degree n in λ, so that an (n*n) matrix has n latent roots, which need not all be distinct and which may, in general, be complex.


If we define the complex (imaginary) number i such that

i^2 = -1


then the latent root (x + iy) of a symmetric square matrix A is always real! Proof: suppose the root x + iy has the (possibly complex) eigenvector a + ib, so that A(a + ib) = (x + iy)(a + ib). Separating the real and the imaginary parts gives Aa = xa - yb and Ab = ya + xb. Premultiplying the first of these equations by b' and the second by a' yields

b'Aa = x*b'a - y*b'b     (I.IV-35)

a'Ab = y*a'a + x*a'b     (I.IV-36)

Since, due to the symmetry of A,

b'Aa = (b'Aa)' = a'A'b = a'Ab   (and also b'a = a'b)

it follows from (I.IV-35) and (I.IV-36) that

y*(a'a + b'b) = 0

and since the eigenvector a + ib is different from the zero vector, a'a + b'b > 0, so that y = 0 and the latent root is real.


If a square matrix A is symmetric then the eigenvectors corresponding to different eigenvalues are all orthogonal (and hence independent of each other). Suppose we have two different latent roots λ1 and λ2 (λ1 ≠ λ2) with corresponding eigenvectors x and y; then we can write

Ax = λ1*x   and   Ay = λ2*y

Premultiplying the first equation by y' and the second by x' gives

y'Ax = λ1*y'x   and   x'Ay = λ2*x'y

and since, by the symmetry of A, y'Ax = x'Ay while y'x = x'y, subtraction yields

(λ1 - λ2)*x'y = 0   and therefore   x'y = 0

which implies that the eigenvectors corresponding to different eigenvalues are orthogonal (and hence independent).

If x is a characteristic (eigen)vector of the square matrix A with root λ, then so is cx for any scalar c ≠ 0, since

A(cx) = c*Ax = c*λx = λ*(cx)

In particular, if x is an eigenvector of A then -x is also an eigenvector of this matrix. Note the following (almost trivial) relationship

A^2*x = A(Ax) = A(λx) = λ*Ax = λ^2*x

so that x is also an eigenvector of A^2, with root λ^2.
If A is a nonsingular square matrix then the roots of the inverse of A are equal to the reciprocal values of the roots of A, but the eigenvectors are the same. This can be seen quite easily by premultiplying the definition Ax = λx by A^{-1}

x = λ*A^{-1}x

and therefore

A^{-1}x = (1/λ)*x


The latent roots of a positive definite (positive semidefinite) matrix are positive (nonnegative). Of course, an analogous property can be formulated for negative (semi)definite matrices.
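These eigenvalue properties are easy to verify numerically; a minimal Python/NumPy sketch with an arbitrary positive definite matrix:

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, X = np.linalg.eig(A)                # roots and eigenvectors (columns of X)

for i in range(len(lam)):
    print(np.allclose(A @ X[:, i], lam[i] * X[:, i]))   # A x = lambda x

# the roots of A^{-1} are the reciprocals of the roots of A
lam_inv, _ = np.linalg.eig(np.linalg.inv(A))
print(np.sort(lam_inv), np.sort(1.0 / lam))              # identical up to rounding

print(np.all(lam > 0))                   # A is positive definite, so all roots are positive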

The number of roots of a symmetric matrix A that are different from zero is equal to the rank of A.

If all roots of a symmetric matrix A are positive (nonnegative) then all the diagonal elements of A are positive (nonnegative).

The nonzero roots of AA' and A'A are always the same.

If x1, x2, ... are eigenvectors (corresponding to different roots) of AA' which are pairwise orthogonal and of unit length, then the vectors A'x1, A'x2, ... are also orthogonal, but without unit length (except if all λ values are equal to one).

Define an orthogonal matrix A as a square matrix for which

A'A = AA' = I


which also implies that all rows and columns have unit length.

If A is a square symmetric matrix with n different roots λ1, λ2, ..., λn and corresponding eigenvectors (normalized to unit length)

g1, g2, ..., gn

then it follows by definition (I.IV-28) that

A*gi = λi*gi   for i = 1, 2, ..., n




Now we put the orthogonal vectors gi into an n*n matrix G like

G = (g1, g2, ..., gn)


and therefore it follows that G'G = I (and thus G is orthogonal). Furthermore, by pre-multiplying G by the matrix A, we obtain

AG = (A*g1, A*g2, ..., A*gn) = (λ1*g1, λ2*g2, ..., λn*gn) = G*Λ

where Λ is the diagonal matrix with the roots λ1, λ2, ..., λn on its diagonal.



Now we can easily find that

G'AG = G'GΛ = Λ

so that the symmetric matrix A is diagonalized by its (orthogonal) matrix of eigenvectors.




If a symmetric matrix A has rank r then only r of its roots are different from zero and the remaining n - r roots are equal to zero (the corresponding eigenvectors being determined only up to their sign).
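The diagonalization G'AG = Λ can be illustrated with a short Python/NumPy sketch (np.linalg.eigh returns the orthonormal eigenvectors of a symmetric matrix as columns):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])                    # symmetric

lam, G = np.linalg.eigh(A)                         # roots and orthonormal eigenvectors

print(np.allclose(G.T @ G, np.eye(3)))             # G'G = I, so G is orthogonal
print(np.allclose(A @ G, G @ np.diag(lam)))        # AG = G Lambda
print(np.allclose(G.T @ A @ G, np.diag(lam)))      # G'AG = Lambda (a diagonal matrix)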

The determinant of an orthogonal matrix is equal to 1 or -1.

The real roots of an orthogonal matrix G are always equal to 1 or -1, which can be seen quite easily: if Gx = λx with x ≠ 0, then

x'x = x'G'Gx = (Gx)'(Gx) = λ^2*x'x

so that λ^2 = 1 and hence λ = 1 or λ = -1.






If A is a positive semidefinite matrix of rank r then it can be written as

A = P'P

therefore P' is equal to

P' = G*Λ^{1/2}

where G is the matrix of eigenvectors of A and Λ^{1/2} is the diagonal matrix containing the square roots of the (nonnegative) latent roots, only r of which are different from zero.


All roots of an idempotent matrix are equal to 0 or 1, since Mx = λx implies λx = Mx = MMx = λ^2*x, so that λ^2 = λ.

For the square (symmetric) idempotent matrix M with rank r the following property is valid

G'MG = D

where G is the orthogonal matrix of eigenvectors of M and D is a diagonal matrix with r diagonal elements equal to 1 (and all other elements equal to zero).

All (symmetric) idempotent matrices A are positive semidefinite with nonnegative diagonal elements since

x'Ax = x'AAx = x'A'Ax = (Ax)'(Ax)

which is just a sum of squares of the elements of Ax.

If a square idempotent matrix A is nonsingular then A must be equal to the identity matrix I since

A = (A^{-1}A)*A = A^{-1}*(AA) = A^{-1}*A = I


If A is idempotent and G is orthogonal then G'AG is idempotent as well since

(G'AG)(G'AG) = G'A(GG')AG = G'AAG = G'AG


If A is idempotent then I - A is also idempotent, since (I - A)(I - A) = I - 2A + AA = I - A, while A(I - A) = A - AA = O.
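The idempotency properties above can be checked numerically; a minimal Python/NumPy sketch using the deviation-from-the-mean matrix of (I.IV-20) as the example:

import numpy as np

n = 4
A = np.eye(n) - np.ones((n, n)) / n        # a symmetric idempotent matrix

lam = np.linalg.eigvalsh(A)
print(np.round(lam, 10))                   # every root is 0 or 1
print(np.trace(A), np.linalg.matrix_rank(A))   # tr A = rank A = 3

B = np.eye(n) - A                          # I - A is idempotent as well
print(np.allclose(B @ B, B))
print(np.allclose(A @ B, np.zeros((n, n))))    # A(I - A) = O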

If A is a symmetric idempotent matrix and a diagonal element a_ii = 0, then it follows that the i-th row vector and the i-th column vector consist of nothing but zero elements (because a_ii is then a sum of squares, a_ii = Σ_k a_ik^2).

We define the following derivatives of matrices (where A is not necessarily symmetric)
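Among the rules used most frequently in econometric derivations are, for example (with x and a conformable column vectors and A a square matrix; this list is only a selection of the standard results):

∂(a'x)/∂x = a

∂(x'Ax)/∂x = (A + A')x,   which reduces to 2Ax when A is symmetric

∂(Ax)/∂x' = A

∂²(x'Ax)/∂x∂x' = A + A'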













We define the Kronecker (or tensor) product as the block matrix

A ⊗ B = (a_ij*B)

that is, every element a_ij of A is replaced by the block a_ij*B.


with the following properties

(A ⊗ B)' = A' ⊗ B'

(A ⊗ B)(C ⊗ D) = AC ⊗ BD   (whenever the products AC and BD exist)

(A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}   (for nonsingular A and B)


and last but not least

det(A ⊗ B) = det(A)^m * det(B)^n

where A is an n*n, and B is an m*m matrix.
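A quick numerical check of these Kronecker product rules, assuming Python/NumPy (np.kron computes A ⊗ B):

import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])       # 2*2, so n = 2
B = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0]])              # 3*3, so m = 3

K = np.kron(A, B)                            # a 6*6 matrix

print(np.allclose(np.kron(A, B).T, np.kron(A.T, B.T)))   # (A (x) B)' = A' (x) B'
print(np.isclose(np.linalg.det(K),
                 np.linalg.det(A) ** 3 * np.linalg.det(B) ** 2))   # det(A)^m det(B)^n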

Sometimes it is useful to write matrix expressions in partitioned form. Addition and multiplication rules with respect to partitioned matrices are quite simple: addition works block by block,

(A + B)_it = A_it + B_it

and multiplication works just like ordinary matrix multiplication, with the blocks playing the role of the elements,

(AB)_it = Σ_k A_ik*B_kt

which is a generalization of (I.IV-6) and for which A_ik has the same number of columns as the number of rows of B_kt.

It is easily verified that the inverse of a symmetric partitioned matrix

A = [ A11  A12 ]
    [ A21  A22 ]        (with A21 = A12')

can be written as

A^{-1} = [ E                  -E*A12*A22^{-1}                         ]
         [ -A22^{-1}*A21*E     A22^{-1} + A22^{-1}*A21*E*A12*A22^{-1} ]

with E = (A11 - A12*A22^{-1}*A21)^{-1}, or equivalently as

A^{-1} = [ A11^{-1} + A11^{-1}*A12*F*A21*A11^{-1}    -A11^{-1}*A12*F ]
         [ -F*A21*A11^{-1}                            F              ]

with F = (A22 - A21*A11^{-1}*A12)^{-1}.
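The first of these two forms can be verified numerically with a short Python/NumPy sketch (the blocks below are arbitrary, chosen so that all required inverses exist):

import numpy as np

A11 = np.array([[4.0, 1.0], [1.0, 3.0]])
A12 = np.array([[0.5, 0.2], [0.1, 0.3]])
A21 = A12.T
A22 = np.array([[2.0, 0.4], [0.4, 1.5]])
A = np.block([[A11, A12], [A21, A22]])

A22_inv = np.linalg.inv(A22)
E = np.linalg.inv(A11 - A12 @ A22_inv @ A21)

A_inv_blocks = np.block([
    [E,                    -E @ A12 @ A22_inv],
    [-A22_inv @ A21 @ E,    A22_inv + A22_inv @ A21 @ E @ A12 @ A22_inv]])

print(np.allclose(A_inv_blocks, np.linalg.inv(A)))   # True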


Sometimes a matrix is written in upper or lower triangular form. An upper triangular matrix is a matrix where all elements below the main diagonal (all a_ij for i larger than j) are zero. Analogously, a lower triangular matrix can be defined.

Imagine an upper triangular matrix T (3*3) with units in the diagonal, and a diagonal matrix D (3*3)

T = [ 1  t12  t13 ]         D = [ d1  0   0  ]
    [ 0   1   t23 ]             [ 0   d2  0  ]
    [ 0   0    1  ]             [ 0   0   d3 ]




It can be shown that any square symmetric positive definite matrix A can be written as A = T'DT, according to Cholesky's decomposition theorem. It is also interesting to see that the A matrix can thus be transformed into a diagonal matrix:

D = (T')^{-1}*A*T^{-1}.
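A minimal sketch of how such a T and D can be constructed from the ordinary Cholesky factor (Python/NumPy; the route via A = LL' is just one convenient possibility):

import numpy as np

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])          # symmetric and positive definite

L = np.linalg.cholesky(A)                # lower triangular, A = L L'
s = np.diag(L)                           # the (positive) diagonal of L

T = (L @ np.diag(1.0 / s)).T             # upper triangular with units on the diagonal
D = np.diag(s ** 2)                      # diagonal matrix

print(np.allclose(T.T @ D @ T, A))                                   # A = T'DT
print(np.allclose(np.linalg.inv(T.T) @ A @ np.linalg.inv(T), D))     # D = (T')^{-1} A T^{-1}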

