Showing that an eigenbasis makes for good coordinate systems

Created by Sal Khan.

Want to join the conversation?

  • Carlos Palacio
    Could you do an example or two of problems solved using eigenvectors in the real world? It might help solidify the concept.
    (20 votes)
    • basheersubei
      I came across eigenvalues and eigenvectors when I was working on programming a line detection algorithm (basically, given an image from a webcam, find the lines in that image). I found a paper that discusses an efficient algorithm for line detection, and it was all riddled with eigenstuff. That's why I'm here now. Sal is awesome! :D (btw, my college textbook does a horrible job of explaining eigenstuff).

      tl;dr
      eigenstuff is everywhere in science and engineering... It's worth learning.
      just my two cents

      but, yeah, more examples would be great...
      (13 votes)
  • LuminatorBU
    Will there be a tutorial on diagonalization of matrices or complex eigenvalues (such as the lambda^2 + 1 case)?
    (19 votes)
  • paul.e.gradie
    I just wanted to make a comment: at the end of the final video, Khan says we can spend the rest of our lives using this toolkit to solve a universe of problems in statistics, weather, and who knows what else. A VERY interesting 'what else' is deep learning and artificial intelligence. Linear algebra is the language of artificial intelligence, and you build neural networks by implementing a series of the linear algebra operations we studied in this class. Dot products, matrix transpositions, eigenvector calculation: these are all used in machine learning and deep learning algorithms.
    (16 votes)
  • SteveSargentJr
    Where do we go from here?

    I'm gonna go out on a limb and guess that there's still a lot of Linear Algebra left to learn... What are the concepts (and in what order) that typically come after Eigenvectors? In other words, if Sal were to continue making Linear Algebra videos, what do you think would be the next couple of topics he'd cover??
    (7 votes)
  • Paul Ozag
    Do all subspaces that are defined by the span of n vectors have a basis of purely eigenvectors, or are there some subspaces that don't allow for that transformation?
    (3 votes)
    • Lily
      Remember that eigenvectors are associated with a matrix A, not with a subspace itself, so talking about a basis of eigenvectors doesn't really make sense without reference to a specific transformation. However, if I understand your question, the answer is no: not every set of eigenvectors from an n×n matrix will necessarily span an n-dimensional space. Although an n×n matrix always has n eigenvalues (remember that some may be repeats, as in the video preceding this one), it does not necessarily have n linearly independent eigenvectors associated with those eigenvalues.

      For instance, the 2×2 matrix
      (1 1)
      (0 1)
      has only one linearly independent eigenvector, (1, 0) (transpose); every other eigenvector is a scalar multiple of it.
      So the eigenspace is a line and NOT all of R^2. (There's a short numerical check of this example in the sketch just after the last comment below.)

      Note that in the beginning of this video we make the assumption that we have n linearly-independent eigenvectors. Without this assumption we can't assume the nice behavior seen in the video.

      Hope this answers this (admittedly year-old) question.
      (6 votes)
  • Hemen Taleb
    Correct me if I'm wrong, but in this video Sal is actually doing a diagonalization and even discusses the conditions for diagonalization, and so on. BUT he doesn't use the term, so everyone in the comments is confused and asking for diagonalization. Whereas I think this is an example of diagonalization. Am I wrong?
    (3 votes)
    • Robert
      Yes, he's been doing diagonalization for several videos now, and yes, he has seemed to avoid calling it that. Maybe the name for it and other related terms, like similarity, are being saved for a future playlist. Here it looks like we're "just trying to get the intuition," as Sal often puts it. But writing A = C D C^-1 is diagonalization if I recall correctly, and that's been done here, as well as in previous videos.
      (3 votes)
  • beza222
    Hello,
    These videos are very helpful. For me, learning the geometric interpretations of eigenvectors made it a lot easier to conceptualize the material, but I'm stuck wondering what happens when you discover that the eigenvalues for a given transformation are imaginary or complex?
    (3 votes)
    • Matthew Daly
      That's a great question! An illustration might help. The go-to example of a linear transformation that does not have real eigenvalues is a non-trivial rotation about the origin. And I hope that doesn't seem weird, because of course no non-zero vector gets mapped to a real scalar multiple of itself. If that helps, then (spoiler alert!) that's essentially the only way that complex eigenvalues get involved. You can always choose a basis that works the way you're visualizing it, plus the possibility that there might be rotations within planes defined by some pairs of the basis elements. (There's a quick numerical check of a rotation's complex eigenvalues after the last comment below.)
      (2 votes)
  • Araoluwa Filani
    Please, your explanations are so wonderful; I took my time to grasp everything till the end. Your videos are the most explanatory Linear Algebra videos I have found so far. Please, if you can, make more videos here on PCA and SVD, which somewhat involve eigenvectors.
    (3 votes)
  • diegoalonsocortez
    A commutative diagram? In the Khan Academy? That made my day!
    (2 votes)
  • Roy Cox
    I'm curious to hear if anyone has insights or ideas for further reading regarding the conceptualization of data (obtained through some experiment or observational study) as a matrix.

    I took this course to get a better understanding of data analysis tools like PCA, ICA and graph analysis. While I get all the operations you can perform, and why the eigenbasis is a natural coordinate system in a sense, this entire series of videos conceptualized a matrix as a transformation you apply to a vector (or set of vectors).

    But how to get from matrix-as-a-transform to matrix-as-data? Why does it make sense to apply linear algebra to data? Why is an eigenvector of a covariance matrix a principal axis of a dataset? Sure, I can calculate it (and I find the links in one of the top posts on PCA very useful), but there's still something "missing". And how are statistical characteristics of data (means, variances, etc) affected by these transformations?

    Any feedback much appreciated!
    (2 votes)
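
As a small follow-up to two of the replies above (the 2×2 matrix with a repeated eigenvalue, and rotations having no real eigenvectors), here is a minimal NumPy sketch. The matrices are just the examples mentioned in those replies, and the code is only an illustrative check, not anything from the video.

    import numpy as np

    # The 2x2 matrix from the reply above: eigenvalue 1 has algebraic multiplicity 2,
    # but its eigenspace is only one-dimensional (the matrix is "defective").
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    values, vectors = np.linalg.eig(A)
    print(values)    # both eigenvalues are 1
    print(vectors)   # the two eigenvector columns are (numerically) parallel to (1, 0)

    # A 90-degree rotation: no non-zero real vector is mapped to a real multiple
    # of itself, so the eigenvalues come out complex (+i and -i).
    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    print(np.linalg.eigvals(R))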

Video transcript

I've talked a lot about the idea that eigenvectors could make for good bases or good basis vectors. So let's explore that idea a little bit more. Let's say I have some transformation. Let's say it's a transformation from Rn to Rn, and it can be represented by the matrix, A. So the transformation of x is equal to the n-by-n matrix, A times x. Now let's say that we have n linearly independent eigenvectors of A. And this isn't always going to be the case, but it can often be the case. It's definitely possible. Let's assume that A has n linearly independent eigenvectors. So I'm going to call them v1, v2, all the way through vn. Now, n linearly independent vectors in Rn can definitely be a basis for Rn. We've seen that multiple times. And what I want to show you in this video is that this makes a particularly good basis for this transformation. So let's explore that. So the transformation of each of these vectors-- I'll write it over here. The transformation of vector 1 is equal to A times vector 1 and since vector 1 is an eigenvector of A, that's going to be equal to some eigenvalue lambda 1 times vector 1. We could do that for all of them. The transformation of vector 2 is equal to A times v2, which is equal to some eigenvalue lambda 2 times v2. And I'm just going to skip all of them and just go straight to the nth one. We have n of these eigenvectors. You might have a lot more. We're just assuming that A has at least n linearly independent eigenvectors. In general, you could take scaled up versions of these and they'll also be eigenvectors. Let's see, so the transformation of vn is going to be equal to A times vn. And because these are all eigenvectors, A times vn is just going to be lambda n, some eigenvalue times the vector, vn. Now, what are these also equal to? Well, this is equal to, and this is probably going to be unbelievably obvious to you, but this is the same thing as lambda 1 times v1 plus 0 times v2 plus all the way to 0 times vn. And this right here is going to be 0 times v1 plus lambda 2 times v2 plus all the way, 0 times all of the other vectors vn. And then this guy down here, this is going to be 0 times v1 plus 0 times v2 plus 0 times all of these basis vectors, these eigenvectors, but lambda n times vn. This is almost stunningly obvious, right? I just rewrote this as this plus a bunch of zero vectors. But the reason why I wrote that is because, in a second, we're going to take this as a basis and we're going to find coordinates with respect to that basis, and so this guy's coordinates will be lambda 1, 0, 0, because those are the coefficients on our basis vectors. So let's do that. So let's say that we define this as some basis. So B is equal to the set of-- actually, I don't even have to write it that way. Let's say I say that B, I have some basis B, that's equal to that. What I want to show you is that when I do a change of basis-- we've seen this before-- in my standard coordinates or in coordinates with respect to the standard basis, you give me some vector in Rn, I'm going to multiply it times A, and you're going to have the transformation of it. It's also going to be in Rn. Now, we know we can do a change of basis. And in a change of basis, if you want to go that way, you multiply by C inverse, which is-- remember, the change of basis matrix C, if you want to go in this direction, you multiply by C. The change of basis matrix is just a matrix with all of these vectors as columns. It's very easy to construct.
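
To make the change-of-basis machinery concrete, here is a minimal NumPy sketch under the video's assumption that A has n linearly independent eigenvectors. The particular matrix A below is an arbitrary illustration, not one taken from the video.

    import numpy as np

    # An arbitrary example matrix with a full set of linearly independent
    # eigenvectors (eigenvalues 5 and 2); not a matrix from the video.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    eigenvalues, C = np.linalg.eig(A)   # columns of C are the eigenvectors v1, ..., vn

    x = np.array([3.0, 1.0])            # a vector in standard coordinates
    x_B = np.linalg.solve(C, x)         # [x]_B = C^-1 x, coordinates with respect to the eigenbasis

    print(np.allclose(C @ x_B, x))      # True: multiplying by C takes us back to standard coordinates
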
But if you change your basis from x to our new basis, you multiply it by the inverse of that. We've seen that multiple times. If they're all orthonormal, then this is the same thing as a transpose. We can't assume that, though. And so this is going to be x in our new basis. And if we want to find some transformation, if we want to find the transformation matrix for T with respect to our new basis, it's going to be some matrix D. And if you multiply D times x, you're going to get this guy, but you're going to get the B representation of that guy, the transformation of the vector x in B coordinates. And if we want to go back and forth between that guy and that guy, if we want to go in this direction, you can multiply this times C, and you'll just get the transformation of x. And if you want to go in that direction, you could multiply by the inverse of your change of basis matrix. We've seen this multiple times already. But what I've claimed or I've kind of hinted at is that if I have a basis that's defined by eigenvectors of A, that this will be a very nice matrix, that this might be the coordinate system that you want to operate in, especially if you're going to apply this matrix a lot. If you're going to do this transformation on a lot of different things, you're going to do it over and over and over again, maybe to the same set, then it maybe is worth the overhead to do the conversion and just use this as your coordinate system. So let's see that this will actually be a nice-looking, easy-to-compute-with and actually diagonal matrix. So we know that the transformation-- what is the transformation of-- let's write this in a bunch of different formats. Let me scroll down a little bit. So if I wanted to write the transformation of v1 in B coordinates, what would it be? It's just going to be equal to-- well, these are the basis vectors, right? So it's the coefficient on the basis vectors. So it's going to be equal to lambda 1, and then there's a bunch of zeroes. It's lambda 1 times v1 plus 0 times v2 plus 0 times v3, all the way to 0 times vn. That's what it's equal to. But it's also equal to D, and we can write D like this. D is also a transformation between Rn and Rn, just a different coordinate system. So D is going to just be a bunch of column vectors d1, d2, all the way through dn times-- this is the same thing as D times our B representation of the vector v1. But what is our B representation of the vector v1? Well, the vector v1 is just 1 times v1 plus 0 times v2 plus 0 times v3 all the way to 0 times vn. v1 is a basis vector. That's just 1 times itself plus 0 times everything else. So this is what its representation is in the B coordinate system. Now, what is this going to be equal to? And we've seen this before. This is all a bit of review. I might even be boring you. This is just equal to 1 times d1 plus 0 times d2 plus 0 times all the other columns. This is just equal to d1. So just like that, we have our first column of our matrix D. We could just keep doing that. I'll do it multiple times. The transformation of v2 in our new coordinate system with respect to our new basis is going to be equal to-- well, we know what the transformation of v2 is. It's 0 times v1 plus lambda 2 times v2 and then plus 0 times everything else. And that's the same thing as D, d1, d2, all the way through dn times our B representation of vector 2. Well, vector 2 is one of the basis vectors. It's just 0 times v1 plus 1 times v2 plus 0 times v3 all the way, the rest is 0.
So what's this going to be equal to? This is 0 times d1 plus 1 times d2 and 0 times everything else, so it's equal to d2. I think you get the general idea. I'll do it one more time just to really hammer the point home. The transformation of the nth basis vector, which is also an eigenvector of our original matrix A or of our transformation in standard coordinates, in B coordinates, is going to be equal to what? Well, we wrote it right up here. It's going to be a bunch of zeroes. It's 0 times all of these guys plus lambda n times vn. And this is going to be this guy d1, d2, all the way to dn times the B representation of the nth basis vector, which is just 0 times v1, 0 times v2 and 0 times all of them, except for 1 times vn. And so this is going to be equal to 0 times d1 plus 0 times d2 plus 0 times all of these guys all the way to 1 times dn. So that's going to be equal to dn. And just like that, we know what our transformation matrix is going to look like with respect to this new basis, where this basis was defined or it's made up of n linearly independent eigenvectors of our original matrix A. So what does D look like? Our matrix D is going to look like-- its first column is right there. We figured that one out. Lambda 1, and then we just have a bunch of zeroes. Its second column is right here. d2 is equal to this. It's 0, lambda 2, and then a bunch of zeroes. And then this is in general the case. The nth column is going to have a zero everywhere except along the diagonal. It's going to be lambda n. It's going to be the eigenvalue for the nth eigenvector. And so the diagonal is going to look-- you're going to have lambda 3 all the way down to lambda n. And our nth column is lambda n with just a bunch of zeroes everywhere. So D, when we picked-- this is a neat result. If A has n linearly independent eigenvectors, and this isn't always the case, but we can figure out the eigenvectors and say, hey, I can take a collection of n of these that are linearly independent, then those will be a basis for Rn. n linearly independent vectors in Rn are a basis for Rn. But when you use that basis, when you use the linearly independent eigenvectors of A as a basis, we call this an eigenbasis. The transformation matrix with respect to that eigenbasis, it becomes a very, very nice matrix. This is super easy to multiply. It's super easy to invert. It's super easy to take the determinant of. We've seen it multiple times. It just has a ton of neat properties. It's just a good basis to be dealing with. So that's kind of the big takeaway. In all of linear algebra, we did all this stuff with spaces and vectors and all of that, but in general, vectors are abstract representations of real world things. You could represent a vector as the stock returns or it could be a vector of weather in a certain part of the country, and you can create these spaces based on the number of dimensions and all of that. And then you're going to have transformations. Sometimes, like when we learn about Markov chains, your transformations are essentially what's the probability after one time increment that some state will change to something else, and then you'll want to apply that matrix many, many, many, many times to see what the stable state is for a lot of things. And I know I'm not explaining any of this to you well, but I wanted to tell you that all of linear algebra is really just a very general way to solve a whole universe of problems.
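
Continuing the same illustrative example as in the sketch above (again, not a matrix from the video), one can check numerically that the matrix D of the transformation with respect to the eigenbasis, D = C^-1 A C, really does come out diagonal with the eigenvalues on the diagonal.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])          # same illustrative matrix as above
    eigenvalues, C = np.linalg.eig(A)

    D = np.linalg.solve(C, A @ C)       # D = C^-1 A C, the transformation in B coordinates

    print(np.round(D, 10))                       # a diagonal matrix
    print(np.allclose(D, np.diag(eigenvalues)))  # True: the diagonal entries are the eigenvalues
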
And what's useful about this is you can have transformation matrices that define these functions essentially on data sets. And what we've learned now is that when you look at the eigenvectors and the eigenvalues, you can change your bases so that you can solve your problems in much simpler ways. And I know it's all very abstract right now, but you now have the toolkit, and the rest of your life, you have to figure out how to apply this toolkit to specific problems in probability or statistics or finance or modeling weather systems or who knows what else.
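
As one concrete illustration of that last point (still using the same small example matrix, which is not from the video), repeatedly applying a transformation, as in the Markov chain scenario Sal mentions, becomes cheap in the eigenbasis, because A^k = C D^k C^-1 only requires powering the diagonal entries of D.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])          # same illustrative matrix as above
    eigenvalues, C = np.linalg.eig(A)

    k = 20
    # Powering the diagonal matrix D just powers its diagonal entries,
    # so A^k = C D^k C^-1 is cheap once the eigendecomposition is known.
    A_to_the_k = C @ np.diag(eigenvalues ** k) @ np.linalg.inv(C)

    print(np.allclose(A_to_the_k, np.linalg.matrix_power(A, k)))  # True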