# Showing that an eigenbasis makes for good coordinate systems

Created by Sal Khan.

## Want to join the conversation?

• Could you do an example or two of problems solved using eigenvectors in the real world? It might help solidify the concept.
• I came across eigenvalues and eigenvectors when I was working on programming a line detection algorithm (basically, given an image from a webcam, find the lines in that image). I found a paper that discusses an efficient algorithm for line detection, and it was all riddled with eigenstuff. That's why I'm here now. Sal is awesome! :D (btw, my college textbook does a horrible job of explaining eigenstuff).

tl;dr
eigenstuff is everywhere in science and engineering... It's worth learning.
just my two cents

but, yeah, more examples would be great...
• Will there be a tutorial on diagonalization of matrices or complex eigenvalues (such as the lambda^2 + 1 case)?
• I just wanted to make the comment - at the end of the final video, Khan says we can spend the rest of our lives now using this toolkit to solve a universe of problems, in statistics, weather, and who knows what else. A VERY interesting 'what else' is deep learning and artificial intelligence. Linear algebra is the language of artificial intelligence, and you build neural networks by implementing a series of linear algebra operations we studied in this class. Dot products, matrix transpositions, eigenvector calculation - these are all used in machine learning and deep learning algorithms.
• Where do we go from here?

I'm gonna go out on a limb and guess that there's still a lot of Linear Algebra left to learn... What are the concepts (and in what order) that typically come after Eigenvectors? In other words, if Sal were to continue making Linear Algebra videos, what do you think would be the next couple of topics he'd cover??
• I personally like Jordan Canonical Form. This alternative matrix representation arises naturally from the case when your linear operator in question does not diagonalize. It is an advanced topic that would probably take 5-10 videos to build up properly, but it's a fundamental result.
http://en.wikipedia.org/wiki/Jordan_normal_form
It turns out that what has been shown (diagonalizing a matrix by finding eigenvalues and vectors) is just a special case of Jordan Canonical Form.
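Not from the videos, but as a hedged illustration of what Jordan form looks like in practice: SymPy can compute it symbolically (the 2x2 matrix below is a made-up example with a repeated eigenvalue and only one independent eigenvector, so it doesn't diagonalize).

```python
from sympy import Matrix

# Hypothetical matrix: eigenvalue 3 repeated, but only one linearly
# independent eigenvector, so it has no diagonal form.
M = Matrix([[2, 1],
            [-1, 4]])

# jordan_form returns P and J with M = P * J * P**-1;
# here J is a single 2x2 Jordan block for the eigenvalue 3.
P, J = M.jordan_form()
print(J)
```

When the matrix *is* diagonalizable, J comes out diagonal, which is the sense in which diagonalization is a special case of Jordan form.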
• Do all subspaces that are defined by the span of n vectors have a basis of purely eigenvectors, or are there some subspaces that don't allow for that transformation?
• Remember that eigenvectors are associated with a matrix A, not with a subspace itself, so to talk about a basis of eigenvectors doesn't really make sense without reference to a specific transformation. However, if I understand your question, the answer is no, not every set of eigenvectors from an nxn matrix will necessarily span an n-dimensional space. Although an nxn matrix always has n eigenvalues (remember that some may be repeats as in the video preceding this one), it does not necessarily have n linearly independent eigenvectors associated with those eigenvalues.

For instance, the 2x2 matrix
(1 1)
(0 1)
has only one linearly independent eigenvector, (1, 0) (transposed).
So the eigenspace is a line and NOT all of R^2.

Note that in the beginning of this video we make the assumption that we have n linearly-independent eigenvectors. Without this assumption we can't assume the nice behavior seen in the video.
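A quick numerical sketch of this defective example (using NumPy; checking the determinant of the eigenvector matrix is just one way to see the deficiency):

```python
import numpy as np

# The 2x2 example above: eigenvalue 1 repeated, but only one
# linearly independent eigenvector.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

vals, vecs = np.linalg.eig(A)
print(vals)                       # [1. 1.]

# np.linalg.eig still returns two columns, but they are (numerically)
# parallel, so they do not form a basis of R^2.
print(abs(np.linalg.det(vecs)))   # ~0: the eigenvector matrix is singular
```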

• Please, your explanations are so wonderful. I took my time to grasp everything till the end. Your videos are the most explanatory Linear Algebra videos I have seen so far. Please, if you can, make more videos here on PCA and SVD, which also involve eigenvectors.
• Correct me if I'm wrong, but in this video Sal is actually doing a diagonalization and even discusses the conditions for diagonalization and so on. BUT, he doesn't use the term, so everyone in the comments is confused and asking for diagonalization, when I think this is itself an example of diagonalization. Am I wrong?
• Yes, he's been doing diagonalization for several videos now, and yes he has seemed to avoid calling it that. Maybe the name for it and other related terms, like similarity, are being saved for a future playlist. Here it looks like we're "just trying to get the intuition" as Sal often puts it. But writing A = C D C^-1 is diagonalization if I recall correctly, and that's been done here, as well as in previous videos.
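For anyone who wants to see A = C D C^-1 numerically, here's a minimal NumPy sketch (the symmetric matrix below is a made-up example, chosen because a symmetric matrix is guaranteed to diagonalize):

```python
import numpy as np

# Hypothetical diagonalizable matrix (symmetric, so a full set of
# linearly independent eigenvectors is guaranteed).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, C = np.linalg.eig(A)   # columns of C are the eigenvectors
D = np.diag(vals)            # D holds the eigenvalues on the diagonal

# A = C D C^-1: changing to eigen-coordinates diagonalizes A.
print(np.allclose(A, C @ D @ np.linalg.inv(C)))   # True
```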
• Hello,
These videos are very helpful. For me, learning the geometric interpretations of eigenvectors made it a lot easier to conceptualize the material, but I'm stuck wondering what happens when you discover that the eigenvalues for a given transformation are imaginary or complex?
• That's a great question! An illustration might help. The go-to example of a linear transformation that does not have real eigenvalues is a non-trivial rotation about the origin. And I hope that doesn't seem weird, because of course no non-zero vector gets sent to a scalar multiple of itself. If that helps, then (spoiler alert!) that's essentially the only way that complex eigenvalues get involved: you can always choose an eigenbasis that works the way you're visualizing it, plus the possibility of rotations within planes defined by some pairs of the basis elements.
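To see the rotation example concretely, here's a small NumPy sketch (a 90-degree rotation, chosen because its eigenvalues come out purely imaginary):

```python
import numpy as np

theta = np.pi / 2   # 90-degree rotation about the origin
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# No non-zero real vector maps to a scalar multiple of itself,
# so the eigenvalues are complex: +i and -i.
vals, _ = np.linalg.eig(R)
print(vals)
```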
• A commutative diagram? In the Khan Academy? That made my day!
• I think it would be quite difficult to do change of coordinates and eigen-coordinates without a commutative diagram!
• I'm curious to hear if anyone has insights or ideas for further reading regarding the conceptualization of data (obtained through some experiment or observational study) as a matrix.

I took this course to get a better understanding of data analysis tools like PCA, ICA and graph analysis. While I get all the operations you can perform, and why the eigenbasis is a natural coordinate system in a sense, this entire series of videos conceptualized a matrix as a transformation you apply to a vector (or set of vectors).

But how to get from matrix-as-a-transform to matrix-as-data? Why does it make sense to apply linear algebra to data? Why is an eigenvector of a covariance matrix a principal axis of a dataset? Sure, I can calculate it (and I find the links in one of the top posts on PCA very useful), but there's still something "missing". And how are statistical characteristics of data (means, variances, etc) affected by these transformations?

Any feedback much appreciated!
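Not a full answer, but one small sketch of the covariance-eigenvector claim (using NumPy; the dataset below is synthetic, stretched along a known direction so the principal axis is predictable):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 500 points with large variance along (1, 1)
# and small variance along (1, -1).
angle = np.pi / 4
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
X = rng.normal(size=(500, 2)) * [2.0, 0.5]   # stretch the axes
X = X @ rot.T                                # then rotate by 45 degrees

# Eigenvectors of the covariance matrix are the principal axes;
# eigenvalues are the variances of the data along those axes.
cov = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(cov)             # eigh: covariance is symmetric
principal_axis = vecs[:, np.argmax(vals)]
print(principal_axis)                        # roughly ±(0.71, 0.71)
```

The "missing" link is that the covariance matrix *is* a transformation: it maps a direction to the (co)variance-weighted average of the data along that direction, so the direction it stretches the most is exactly the direction of greatest variance.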