Eigenvalues and Eigenvectors – Recap


In this optional section what I want to do is revisit the concept of eigenvalues and eigenvectors, in case you’ve forgotten what they are. We’re going to need them in the next segment.

So let’s consider a simple linear algebra case. We have a vector, v. It’s an Nx1 vector, and we’re going to pre-multiply it by a square NxN matrix A, and the result will of course be another Nx1 vector, v prime.

Now for every matrix A, there are up to N vectors whose direction is unchanged by A. That means the vector v and the vector v prime will be parallel to each other. These vectors we refer to as eigenvectors.

Now, although the vector v and the vector v prime will be parallel, they may not necessarily be the same length. The length is scaled by the eigenvalue that corresponds to the eigenvector.
So that’s a lot of words. Let’s look at a graphical example of this in practice.

We’ve chosen a very simple A matrix; it’s a 2 by 2 matrix, so there will be two eigenvectors and two eigenvalues. And you can see an eigenvector just about coming up … now. That’s the case where the red vector, the input vector, and the blue output vector are parallel to one another. There’s another one … now. Notice that they are parallel to each other but they’re not the same length. So the ratio of the length of the blue vector to the length of the red vector is the eigenvalue.

So the term eigenvector … ‘eigen’ is a German word and it means ‘own’, so an eigenvector is the matrix’s ‘own vector’. It’s a special vector that belongs to the matrix.

So there are a couple of equations that we can write. First of all we can use the equation that we had before: the matrix A multiplied by v gives us the output vector v prime, that is, v prime = Av. And in the particular case when they’re parallel, the two vectors are related by a scalar; they point the same way, only their length is different. We can represent that by the scalar factor lambda, which is an eigenvalue, giving Av = lambda v.

So if these two equations are both true we can write (A - lambda I) v = 0, following through some maths. We’re not interested in the trivial case where v = 0; we’re interested in the case where the determinant of (A - lambda I) is equal to 0. We can write that expression out for any matrix (it’s easy for a 2 by 2 case, more difficult for 3 by 3, very, very difficult for bigger matrices) and solve it for the eigenvalues; these lambda values.
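For a 2 by 2 matrix this step can be carried out directly: det(A - lambda I) expands to a quadratic in lambda whose roots are the eigenvalues. Here is a minimal Python sketch of that calculation. The example matrix is a hypothetical one chosen to have eigenvalues 1 and -2 like the lesson’s demo; the transcript doesn’t show the entries of A.

```python
import math

def eigvals_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from det(A - lambda*I) = 0,
    i.e. the quadratic lambda^2 - (a + d)*lambda + (a*d - b*c) = 0."""
    trace = a + d          # sum of the eigenvalues
    det = a * d - b * c    # product of the eigenvalues
    disc = math.sqrt(trace**2 - 4 * det)   # assumes real eigenvalues
    return (trace + disc) / 2, (trace - disc) / 2

# hypothetical example matrix with eigenvalues 1 and -2
print(eigvals_2x2(0, 1, 2, -1))  # -> (1.0, -2.0)
```

For larger matrices the characteristic polynomial has higher degree and this closed-form approach stops being practical, which is why numerical routines like MATLAB’s eig are used instead.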

Once we have values for lambda we can substitute them into the equations above and then solve for the eigenvector.

Now there are an infinite number of eigenvectors corresponding to each eigenvalue, since any scalar multiple of an eigenvector is also an eigenvector. So typically we choose eigenvectors that have unit length. That’s an additional constraint that helps us choose just a single eigenvector.
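Substituting an eigenvalue back in means solving (A - lambda I) v = 0. For a 2 by 2 matrix one nonzero row of A - lambda I is enough: if that row is (p, q), then (q, -p) is orthogonal to it and so satisfies the equation, and we normalise it to unit length to pin down a single eigenvector. A rough sketch, again using a hypothetical matrix (the transcript doesn’t give A’s entries):

```python
import math

def unit_eigvec_2x2(a, b, c, d, lam):
    """Unit eigenvector of [[a, b], [c, d]] for eigenvalue lam,
    found by solving (A - lam*I) v = 0 via one nonzero row."""
    r1 = (a - lam, b)          # first row of A - lam*I
    r2 = (c, d - lam)          # second row of A - lam*I
    p, q = r1 if any(abs(x) > 1e-12 for x in r1) else r2
    vx, vy = q, -p             # orthogonal to the chosen row
    n = math.hypot(vx, vy)     # normalise to unit length
    return vx / n, vy / n

# hypothetical matrix [[0, 1], [2, -1]] with eigenvalue 1:
# both components come out to about 0.707, i.e. (1, 1)/sqrt(2)
print(unit_eigvec_2x2(0, 1, 2, -1, 1.0))
```

This only pins the eigenvector down up to sign; MATLAB makes its own (equally valid) sign choice.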

We’ll demonstrate this with a simple example: with a 2 by 2 matrix.

The eigenvalues of this matrix are given simply using the MATLAB built-in function eig. We see the two eigenvalues are 1 and -2.

To find the eigenvectors is a little bit more complex. We have to assign two output arguments from the function eig. The first output argument, which I’ve assigned to the matrix x, has two columns: the first column is the first eigenvector, and the second column is the second eigenvector. The eigenvalues are now arranged along the diagonal of the second output matrix, e.
A fundamental property of the eigenvector is that when it is transformed by the original matrix, it is parallel to itself. Let’s test that.

We take the original matrix A and multiply it by the first eigenvector, which is the first column of the matrix x. We see that the resulting vector is indeed parallel to the eigenvector, and it has exactly the same magnitude: it’s been scaled by 1, and 1 is the value of the first eigenvalue. So the eigenvector multiplied by the matrix A is a vector parallel to the eigenvector with a scale factor of 1. It is, in fact, the original eigenvector.

Let’s try it with the other eigenvector, which is the second column of the matrix x. We see now that the result is indeed parallel to the second eigenvector, but it’s been scaled by -2, and -2 is the second eigenvalue of the matrix A.
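The same check can be sketched outside MATLAB in a few lines. With a hypothetical 2 by 2 matrix whose eigenvalues are 1 and -2 (the transcript doesn’t give A’s entries, so the matrix and eigenvectors below are assumptions chosen to match those eigenvalues), multiplying each unit eigenvector by A should simply scale it by the matching eigenvalue:

```python
import math

A = [[0.0, 1.0],
     [2.0, -1.0]]   # hypothetical matrix with eigenvalues 1 and -2

def matvec(M, v):
    """2x2 matrix times 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

s2, s5 = math.sqrt(2.0), math.sqrt(5.0)
pairs = [
    (1.0,  [1 / s2, 1 / s2]),     # unit eigenvector for lambda = 1
    (-2.0, [1 / s5, -2 / s5]),    # unit eigenvector for lambda = -2
]

for lam, v in pairs:
    Av = matvec(A, v)
    # A v should equal lam * v: parallel, scaled by the eigenvalue
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))
print("A v = lambda v holds for both eigenpairs")
```

This is exactly the property demonstrated in the MATLAB session: the first product reproduces the eigenvector (scale factor 1), the second flips and doubles it (scale factor -2).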


There is no code in this lesson.

If it’s been a while since you last dealt with eigenvalues and eigenvectors here’s a quick recap of the basics.

Professor Peter Corke

Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a senior Fellow of the Higher Education Academy, and on the editorial board of several robotics research journals.

Skill level

This content assumes an understanding of high-school-level mathematics (for example, trigonometry, algebra, calculus and the physics of optics) and experience with the MATLAB command line and programming (for example, the workspace, variables, arrays, types, functions and classes).
