Matrix diagonalization is the process of performing a similarity transformation on a matrix in order to obtain a similar matrix that is diagonal (i.e., all its non-diagonal entries are zero).
Once a matrix is diagonalized it becomes very easy to raise it to integer powers.
Not all matrices are diagonalizable. The diagonalizable matrices are those that have no defective eigenvalues (i.e., eigenvalues whose geometric multiplicity is less than their algebraic multiplicity).
Remember that two square matrices $A$ and $B$ are said to be similar if there exists an invertible matrix $P$ such that $$B = P^{-1} A P.$$
If two matrices are similar, then they have the same rank, trace, determinant and eigenvalues. Not only do two similar matrices have the same eigenvalues, but their eigenvalues also have the same algebraic and geometric multiplicities.
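These invariance properties are easy to verify numerically. The following sketch (Python with NumPy; the matrices $A$ and $P$ are chosen here purely for illustration) builds a similar matrix $B = P^{-1} A P$ and checks that rank, trace, determinant and eigenvalues coincide.

```python
import numpy as np

# Matrices chosen arbitrarily for illustration: A is the original matrix,
# P is any invertible matrix, and B = P^{-1} A P is similar to A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])                    # invertible (determinant = 1)
B = np.linalg.inv(P) @ A @ P

print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))       # True
print(np.isclose(np.trace(A), np.trace(B)))                       # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))             # True
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))                  # True
```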
We can now provide a definition of diagonalizable matrix.
Definition Let $A$ be a $K\times K$ matrix. We say that $A$ is diagonalizable if and only if it is similar to a diagonal matrix.
In other words, when $A$ is diagonalizable, there exists an invertible matrix $P$ such that $$P^{-1} A P = D$$ where $D$ is a diagonal matrix, that is, a matrix whose non-diagonal entries are zero.
Example Take a matrix $A$ and an invertible matrix $P$ with inverse $P^{-1}$. If the similarity transformation $P^{-1} A P$ gives a diagonal matrix $D$ as a result, then $A$ is diagonalizable.
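Here is a minimal numerical sketch of such an example in Python (using NumPy); the matrices $A$ and $P$ below are illustrative choices, not necessarily those of the original example.

```python
import numpy as np

# Illustrative choices: the columns of P are eigenvectors of A.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, -2.0]])
P_inv = np.linalg.inv(P)

# The similarity transformation P^{-1} A P gives a diagonal matrix,
# so A is diagonalizable.
D = P_inv @ A @ P
print(np.round(D, 10))                        # diag(5, 2)
```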
We can write the diagonalization as $$A P = P D.$$
The $k$-th column of $AP$ is equal to $$A p_k$$ where $p_k$ is the $k$-th column of $P$ (if you are puzzled, revise the lecture on matrix multiplication and linear combinations).
The $k$-th column of $PD$ is equal to $$P d_k$$ where $d_k$ is the $k$-th column of $D$.
In turn, $P d_k$ is a linear combination of the columns of $P$ with coefficients taken from the vector $d_k$.
Since $D$ is diagonal, the only non-zero entry of $d_k$ is the $k$-th one, which we denote by $\lambda_k$. Therefore, $$P d_k = \lambda_k p_k.$$
Thus, we have arrived at the conclusion that $$A p_k = \lambda_k p_k.$$
The latter equality means that $p_k$ is an eigenvector of $A$ associated to the eigenvalue $\lambda_k$.
This is true for $k = 1, \ldots, K$. Thus, the diagonal elements of $D$ are the eigenvalues of $A$ and the columns of $P$ are the corresponding eigenvectors.
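The conclusion $A p_k = \lambda_k p_k$ can be checked numerically, column by column. The sketch below (Python with NumPy) uses the same illustrative matrices as above.

```python
import numpy as np

# Same illustrative A and P as in the example above.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, -2.0]])
D = np.linalg.inv(P) @ A @ P                  # diagonal (up to rounding errors)

# Each column of P is an eigenvector of A whose eigenvalue is the
# corresponding diagonal element of D.
for k in range(A.shape[0]):
    p_k = P[:, k]
    lambda_k = D[k, k]
    assert np.allclose(A @ p_k, lambda_k * p_k)
```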
The matrix $P$ used in the diagonalization must be invertible. Therefore, its columns must be linearly independent. Stated differently, there must be $K$ linearly independent eigenvectors of $A$.
In the lecture on the linear independence of eigenvectors, we have discussed the fact that, for some matrices, called defective matrices, it is not possible to find $K$ linearly independent eigenvectors. A matrix is defective when it has at least one repeated eigenvalue whose geometric multiplicity is strictly less than its algebraic multiplicity (called a defective eigenvalue).
Therefore, defective matrices cannot be diagonalized.
The next proposition summarizes what we have discussed thus far.
Proposition A $K\times K$ matrix $A$ is diagonalizable if and only if it does not have any defective eigenvalue.
We have already proved the "only if" part because we have shown above that, if $A$ is diagonalizable, then it possesses $K$ linearly independent eigenvectors, which implies that no eigenvalue is defective. The "if" part is simple. If $A$ possesses $K$ linearly independent eigenvectors, then we can adjoin them to form the full-rank matrix $P$, and we can form a diagonal matrix $D$ whose diagonal elements are equal to the corresponding eigenvalues. Then, by the definition of eigenvalues and eigenvectors, we have that $$A P = P D$$ and the diagonalization of $A$ follows.
Remember that if all the eigenvalues of $A$ are distinct, then $A$ does not have any defective eigenvalue. Therefore, possessing $K$ distinct eigenvalues is a sufficient condition for diagonalizability.
Suppose we are given a matrix $A$ and we are told to diagonalize it. How do we do it?
The answer has already been given in the previous proof, but it is worth repeating.
We provide the answer as a recipe for diagonalization:
Compute the eigenvalues of $A$.
Check that no eigenvalue is defective. If any eigenvalue is defective, then the matrix cannot be diagonalized. Otherwise, you can go to the next step.
For each eigenvalue, find as many linearly independent eigenvectors as you can (their number is equal to the geometric multiplicity of the eigenvalue).
Adjoin all the eigenvectors so as to form a full-rank matrix $P$.
Build a diagonal matrix $D$ whose diagonal elements are the eigenvalues of $A$.
The diagonalization is done: $P^{-1} A P = D$.
Importantly, we need to follow the same order when we build $P$ and $D$: if a certain eigenvalue has been put at the intersection of the $k$-th column and the $k$-th row of $D$, then its corresponding eigenvector must be placed in the $k$-th column of $P$.
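Here is a minimal sketch of the recipe in Python, assuming NumPy and an arbitrarily chosen (non-defective) matrix; `np.linalg.eig` carries out steps 1 and 3 in a single call and already pairs each eigenvalue with its eigenvector.

```python
import numpy as np

# Arbitrary illustrative matrix (distinct eigenvalues, hence diagonalizable).
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Steps 1 and 3: eigenvalues and one eigenvector per column of P, in matching order.
eigenvalues, P = np.linalg.eig(A)

# Step 2: check that the eigenvectors are linearly independent
# (a defective matrix would fail this check, at least numerically).
assert np.linalg.matrix_rank(P) == A.shape[0]

# Steps 4 and 5: build D in the same order as the columns of P, then verify.
D = np.diag(eigenvalues)
assert np.allclose(np.linalg.inv(P) @ A @ P, D)
```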
Example Define a $2\times 2$ matrix $A$. Its eigenvalues solve the characteristic equation $$\det(A - \lambda I) = 0.$$ By computing the determinant and solving the equation, we find two eigenvalues $\lambda_1$ and $\lambda_2$. There are no repeated eigenvalues and, as a consequence, no defective eigenvalues. Therefore, $A$ is diagonalizable. The eigenvectors associated to $\lambda_1$ solve $$(A - \lambda_1 I) x = 0,$$ and we can choose, for example, any non-zero solution $p_1$. Moreover, the eigenvectors associated to $\lambda_2$ solve $$(A - \lambda_2 I) x = 0,$$ so we can choose, as an eigenvector associated to $\lambda_2$, any non-zero solution $p_2$. Therefore, the diagonal matrix of eigenvalues is $$D = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$$ and the invertible matrix of eigenvectors is $$P = \begin{bmatrix} p_1 & p_2 \end{bmatrix}.$$
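For readers who want to follow these steps symbolically, the sketch below uses SymPy on an illustrative matrix (chosen here, not necessarily the one used in the original example): it forms the characteristic polynomial, lists the eigenvalues together with bases of their eigenspaces, and lets `diagonalize()` return $P$ and $D$.

```python
import sympy as sp

# Illustrative 2x2 matrix with two distinct eigenvalues.
A = sp.Matrix([[4, 1],
               [2, 3]])
lam = sp.symbols('lambda')

# Characteristic equation det(A - lambda*I) = 0.
char_poly = (A - lam * sp.eye(2)).det()
print(sp.expand(char_poly))                   # lambda**2 - 7*lambda + 10

# Each entry is (eigenvalue, algebraic multiplicity, basis of the eigenspace).
for eigenvalue, multiplicity, basis in A.eigenvects():
    print(eigenvalue, multiplicity, basis)

# diagonalize() returns P and D such that A = P * D * P^{-1} (exact arithmetic).
P, D = A.diagonalize()
assert P * D * P.inv() == A
```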
Provided a matrix is diagonalizable, there is no unique way to diagonalize it.
For example, we can change the order in which the eigenvalues are put on the diagonal of $D$. Or we can replace a column of $P$ with a scalar multiple of itself (which is another eigenvector associated to the same eigenvalue). If there is a repeated eigenvalue, we can choose a different basis for its eigenspace.
Example For instance, in the previous example, we could have defined $D$ and $P$ with the two eigenvalues, and the corresponding eigenvectors, listed in the opposite order. Another possibility would have been to choose non-zero scalar multiples of the eigenvectors as the columns of $P$.
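This non-uniqueness can also be seen numerically: permuting the columns of $P$ (which permutes the diagonal of $D$ accordingly) or rescaling a column still yields a valid diagonalization. The matrices below are the same illustrative choices used earlier.

```python
import numpy as np

# Same illustrative A and P as above; the columns of P are eigenvectors
# associated to the eigenvalues 5 and 2, in that order.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, -2.0]])

# Swap the two columns: the eigenvalues simply swap places on the diagonal.
P_swapped = P[:, ::-1]
print(np.round(np.linalg.inv(P_swapped) @ A @ P_swapped, 10))     # diag(2, 5)

# Rescale the columns: any non-zero multiple of an eigenvector is still
# an eigenvector, so the diagonalization is unchanged.
P_scaled = P * np.array([3.0, -0.5])
print(np.round(np.linalg.inv(P_scaled) @ A @ P_scaled, 10))       # diag(5, 2)
```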
The most important application of diagonalization is the computation of matrix powers.
Let $D$ be a diagonal matrix: $$D=\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_K \end{bmatrix}.$$
Then its $n$-th power can be easily computed by raising its diagonal elements to the $n$-th power: $$D^n=\begin{bmatrix} \lambda_1^n & 0 & \cdots & 0 \\ 0 & \lambda_2^n & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_K^n \end{bmatrix}.$$
If a matrix $A$ is diagonalizable, then $$A = P D P^{-1}$$ and $$A^n = P D^n P^{-1}.$$
Thus, all we have to do to raise $A$ to the $n$-th power is to 1) diagonalize $A$ (if possible); 2) raise the diagonal matrix $D$ to the $n$-th power, which is very easy to do; 3) pre-multiply the matrix $D^n$ thus obtained by $P$ and post-multiply it by $P^{-1}$.
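A minimal sketch of this three-step procedure, assuming NumPy and an arbitrarily chosen diagonalizable matrix:

```python
import numpy as np

# Arbitrary illustrative matrix and power.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
n = 6

# 1) diagonalize; 2) raise D to the n-th power element-wise on the diagonal;
# 3) pre-multiply by P and post-multiply by P^{-1}.
eigenvalues, P = np.linalg.eig(A)
D_n = np.diag(eigenvalues ** n)
A_n = P @ D_n @ np.linalg.inv(P)

# Check against repeated matrix multiplication.
assert np.allclose(A_n, np.linalg.matrix_power(A, n))
```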
Once a matrix has been diagonalized it is straightforward to compute its inverse (if it exists).
In fact, we have that $$A^{-1} = P D^{-1} P^{-1},$$ where $$D^{-1}=\begin{bmatrix} \lambda_1^{-1} & 0 & \cdots & 0 \\ 0 & \lambda_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_K^{-1} \end{bmatrix},$$ provided that none of the eigenvalues $\lambda_1, \ldots, \lambda_K$ is zero.
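A quick numerical check of this formula, under the same illustrative assumptions as above (NumPy, an arbitrarily chosen matrix with non-zero eigenvalues):

```python
import numpy as np

# Arbitrary illustrative matrix with non-zero eigenvalues.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)
# D^{-1} is obtained by taking reciprocals of the diagonal elements.
A_inv = P @ np.diag(1.0 / eigenvalues) @ np.linalg.inv(P)

assert np.allclose(A_inv, np.linalg.inv(A))
```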
Below you can find some exercises with explained solutions.
Exercise 1 Suppose that a $K\times K$ matrix $A$ can be diagonalized as $$A = P D P^{-1},$$ where $D$ is a diagonal matrix and $P$ is invertible. Suppose that $n$ is a natural number. Show that $$A^n = P D^n P^{-1}$$ and explain how to compute $A^n$ using this formula.
Solution First of all, let us check the formula for $n = 2$: $$A^2 = \left(P D P^{-1}\right)\left(P D P^{-1}\right) = P D \left(P^{-1} P\right) D P^{-1} = P D^2 P^{-1}.$$ Iterating the same argument (every inner product $P^{-1} P$ is an identity matrix), we obtain $$A^n = P D^n P^{-1}.$$ We can easily compute powers of $D$: since $D$ is diagonal, $D^n$ is obtained by raising each diagonal element of $D$ to the $n$-th power, and $A^n$ follows by pre-multiplying $D^n$ by $P$ and post-multiplying it by $P^{-1}$.
Please cite as:
Taboga, Marco (2021). "Matrix diagonalization", Lectures on matrix algebra. https://www.statlect.com/matrix-algebra/matrix-diagonalization.