
Introduction to eigenvalues and eigenvectors

What eigenvectors and eigenvalues are and why they are interesting. Created by Sal Khan.

Want to join the conversation?

  • durval.menezes
    What are the prerequisites for this lesson? How do I determine what other videos I need to watch in order to understand this one?

    I have less than a week (for a Quantum Computing course); it specifically mentions this and one other Linear Algebra topic ("orthogonal bases"). I've been serially watching every video in the "Linear Algebra" section from the beginning, but there will not be enough time.

    So, how do I determine which videos I can skip in order to reach this one and still be able to understand it?
    (55 votes)
  • Themis
    So if an eigenvector is a vector transformed from an original vector, and an eigenvalue is the scalar multiplier, why do we give them those fancy names anyway? Is it because those values and vectors will produce a perfect basis or something, instead of searching randomly for a perfect basis or a value to transform a set of vectors according to our needs? Why did we even come up with these as humans?
    (9 votes)
    • DeWain Molter
      Think of a matrix as if it were one big, complex number. One of the rules of integers says the prime factorization of any integer is unique: if two numbers have the same prime factorization, then they must be the same number.

      It becomes useful (later) to be able to say whether two matrices are the same, but that can be very difficult (because the matrices are huge, their values are complex, or worse... a variety of causes). Eigenvalues are one part of a process that leads (among other places) to something analogous to prime factorization of a matrix, turning it into a product of other matrices that each have a set of well-defined properties. Those matrices are tolerably easy to produce, and if two matrices can be 'factored' into the same sets of matrix products, then they are 'equal'.

      And that's just up to the first half of grad school :)
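      A minimal numerical sketch of that "factorization" idea (assuming NumPy; the matrix here is just an illustration, not something from the comment): diagonalize A as P D P^-1, where D holds the eigenvalues and the columns of P are the eigenvectors.

        import numpy as np

        A = np.array([[4.0, 1.0],
                      [2.0, 3.0]])         # an illustrative 2x2 matrix

        eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors
        D = np.diag(eigenvalues)            # diagonal matrix of eigenvalues

        # A "factors" as P D P^-1; rebuilding the product recovers A
        A_rebuilt = P @ D @ np.linalg.inv(P)
        print(np.allclose(A, A_rebuilt))    # True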
      (33 votes)
  • henrik.edman
    Does anyone know which transformation video is mentioned in the beginning of this video? I've been looking around but have been unable to find it. The closest thing I found was the video about rotating vectors, but it didn't seem to be the right one.
    (20 votes)
  • prachi sawan
    What are the applications of eigenvalues and eigenvectors in real life? I need a simple answer. I read about the Tacoma Narrows Bridge but could not really understand it...
    (5 votes)
    • metachromatic
      Eigenvalues and eigenvectors prove enormously useful in linear mappings. Let's take an example: suppose you want to change the perspective of a painting. If you scale the x direction by a different factor than the y direction (say x -> 3x while y -> 2y), you simulate a change of perspective. This is roughly what happens when you look at a scene from close to the ground as opposed to higher in the air: objects appear distorted and foreshortened. A change in perspective in a painting is really just a linear map applied to a set of points (the painting): the x coordinate of each point gets multiplied by one value and the y coordinate by a different value. You can capture that process in a matrix; the directions the matrix merely stretches (here, the x and y axes) are its eigenvectors, and the stretch factors (3 and 2) are its eigenvalues.
      If the mapping isn't linear, eigenvectors in this simple sense no longer apply. Eigenvectors do well with linear mappings, but not with nonlinear mappings, where the fixed entries of the matrix would have to be replaced with functions that can take on many different values.
      Eigenvectors pop up in the study of the spread of infectious diseases, vibration studies, and heat transfer, because these are generally well approximated by linear models. Diseases tend to spread gradually, heat spreads gradually, and vibrations propagate gradually. Diseases, vibrations, and heat also tend to spread by contact, so you don't (typically) get oddball discontinuous patterns of disease spread, vibration propagation, or heat transfer. This means that heat flow, vibration propagation, and disease spread can be simulated reasonably well by linear maps from one set of points to another set of points.
      Strongly nonlinear or discontinuous systems, like explosions, the orbits of a 3-body system in a gravitational field, or a star undergoing gravitational collapse, don't lend themselves to simple linear maps. As a result, eigenvectors do not offer a good way of describing those systems; you typically need nonlinear models instead of linear maps.
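      Going back to the x -> 3x, y -> 2y example above, here is a small numerical sketch (assuming NumPy): the scaling matrix stretches x by 3 and y by 2, and its eigenvectors are the coordinate axes, with eigenvalues 3 and 2.

        import numpy as np

        S = np.array([[3.0, 0.0],       # x -> 3x
                      [0.0, 2.0]])      # y -> 2y

        eigenvalues, eigenvectors = np.linalg.eig(S)
        print(eigenvalues)              # [3. 2.] (order may vary)
        print(eigenvectors)             # columns [1, 0] and [0, 1]: the x and y axes

        # A vector along an axis only gets scaled...
        print(S @ np.array([1.0, 0.0])) # [3. 0.] = 3 * [1, 0]
        # ...while a generic vector changes direction as well.
        print(S @ np.array([1.0, 1.0])) # [3. 2.], not a multiple of [1, 1]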
      (18 votes)
  • shankara dhadi
    Can you show a video on singular value decomposition? It would be really great. You explain so simply, and it is easy to absorb and understand. Thank you, Sal.
    (13 votes)
  • Dan Brickley
    I think I'm realising why I have found this topic slippery in the past. Some presentations (like this one) emphasise the transformational role of the matrix we're finding eigenvectors/values for, i.e. it acts upon vectors to make new vectors; others speak of it more as a static dataset to be analyzed, i.e. a collection of existing datapoints-as-vectors. Bridging those two views is quite subtle... How should we come to think of a dataset as a transformation?
    (6 votes)
    • metachromatic
      "How should we come to think of a dataset as a transformation?" By comparing the dataset you're mapping into with the dataset you're mapping from. If both datasets are exactly the same, the transformation is unitary, i.e, the eigenvector is just a set of 1's.
      If the dataset you're mapping into is different from the dataset you've mapped from, then you can often (but not always) find some linear function that relates them. Say, for example, that the rows in the dataset you're mapping into are each multiplied by 2 while the columns are each multiplied by 3. That's a linear transformation, and it can be captured in a matrix that gives an eigenvector that performs that transformation, AKA a linear mapping.
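      One hypothetical way to make that concrete (assuming NumPy; the data and the hidden matrix below are invented for illustration): treat the "before" dataset as columns of X and the "after" dataset as columns of Y, solve for the matrix A with Y = A X, and then look at A's eigenvectors.

        import numpy as np

        # "Before" data: each column is a point (x, y)
        X = np.array([[1.0, 2.0, 0.0, 3.0],
                      [1.0, 0.0, 2.0, 1.0]])

        A_true = np.array([[2.0, 0.0],   # the (hidden) transformation:
                           [0.0, 3.0]])  # x -> 2x, y -> 3y

        Y = A_true @ X                   # "after" data

        # Recover the transformation relating the two datasets (least squares)
        A_est = Y @ np.linalg.pinv(X)

        eigenvalues, eigenvectors = np.linalg.eig(A_est)
        print(eigenvalues)               # approximately [2. 3.]
        print(eigenvectors)              # approximately the x and y axes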
      (7 votes)
  • ehaver2021
    There aren't even numbers in this math problem...
    (4 votes)
  • Katie Knight
    Can anyone provide data and videos relating to the daily-life use of eigenvalues and eigenvectors, i.e., their physical significance or visualization?
    (3 votes)
    • metachromatic
      Visual perspective. Imagine looking at your own legs while you're lying down in bed. Your legs seem to be foreshortened. Now imagine looking at your legs while you're standing in front of a mirror. Your legs appear to be much longer.
      The difference in these two views is captured by a linear transformation that maps one view into the other. This linear transformation is described by a matrix; the directions that the matrix merely stretches or shrinks are its eigenvectors, and the stretch factors are its eigenvalues.
      Think of it this way: the matrix contains a recipe for stretching or shrinking your legs. The amounts of stretching or shrinking are the eigenvalues, and the directions along which that pure stretching or shrinking happens are the eigenvectors.
      These kinds of linear transformations prove absolutely vital in doing CGI animation in movies. The matrices, and with them their eigenvalues and eigenvectors, change as you change the location of the virtual camera in a CGI animation.
      Eigenvectors and eigenvalues are also vital in interpreting data from a CAT scan. In that case you have a set of X-ray values and you want to turn them into a visual scene. You don't just have one set of X-ray scans; you do a bunch of X-ray scans in layers, like layers in a cake. From all these layers you want to build up a picture of a 3-dimensional object, a part of the human body, and you want to be able to rotate that 3-D image in the computer so you can see it from different viewpoints. This requires a set of linear mappings, which in turn demands eigenvalues and eigenvectors.
      (9 votes)
  • Maiar
    I have a question: if we have some vectors that are multiples of each other, i.e., they span the same line, does this mean that they have the same eigenvalues?
    (3 votes)
    • Derek M.
      Yes. Say v is an eigenvector of a matrix A with eigenvalue λ, so Av = λv. Let's verify that c*v (where c is nonzero) is also an eigenvector with eigenvalue λ: A(cv) = c(Av) = c(λv) = λ(cv). Thus cv is also an eigenvector with eigenvalue λ. I required c to be nonzero because eigenvectors are nonzero, so c*v cannot be zero.
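      A quick numerical check of that identity (assuming NumPy; the matrix and the constant c are just examples):

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [1.0, 2.0]])

        v = np.array([1.0, 1.0])    # eigenvector of A with eigenvalue 3
        c = 5.0

        print(A @ v)                # [3. 3.]   = 3 * v
        print(A @ (c * v))          # [15. 15.] = 3 * (c * v)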
      (4 votes)
  • shane.tolmie
    Me too - +1 for PCA!
    (4 votes)

Video transcript

For any transformation that maps from Rn to Rn, we've done it implicitly, but it's been interesting for us to find the vectors that essentially just get scaled up by the transformation. So the vectors that have the form-- the transformation of my vector is just equal to some scaled-up version of the vector. And if this doesn't look familiar, I can jog your memory a little bit.

When we were looking for basis vectors for the transformation-- let me draw it. This was from R2 to R2. So let me draw R2 right here. And let's say I had the vector v1, which was equal to the vector 1, 2. And we had the line spanned by that vector. We did this problem several videos ago. And I had the transformation that flipped across this line. So if we call that line l, T was the transformation from R2 to R2 that flipped vectors across this line. So it flipped vectors across l. So if you remember that transformation, if I had some random vector that looked like that, let's say that's x, that's vector x, then the transformation of x looks something like this. It's just flipped across that line. That was the transformation of x.

And if you remember that video, we were looking for a change of basis that would allow us to at least figure out the matrix for the transformation, at least in an alternate basis. And then we could figure out the matrix for the transformation in the standard basis. And the basis we picked were basis vectors that didn't get changed much by the transformation, or ones that only got scaled by the transformation. For example, when I took the transformation of v1, it just equaled v1. Or we could say that the transformation of v1 just equaled 1 times v1. So if you just follow this little format that I set up here, lambda, in this case, would be 1. And of course, the vector in this case is v1. The transformation just scaled up v1 by 1.

In that same problem, we had the other vector that we also looked at. It was the vector-- let's say it's the vector v2, which is-- let's say it's 2, minus 1. And then if you take the transformation of it, since it was orthogonal to the line, it just got flipped over like that. And that was a pretty interesting vector for us as well, because the transformation of v2 in this situation is equal to what? Just minus v2. It's equal to minus v2. Or you could say that the transformation of v2 is equal to minus 1 times v2. And these were interesting vectors for us because when we defined a new basis with these guys as the basis vectors, it was very easy to figure out our transformation matrix. And actually, that basis was very easy to compute with. And we'll explore that a little bit more in the future. But hopefully you realize that these are interesting vectors.

There were also the cases where we had a plane spanned by some vectors, and then we had another vector that was popping out of the plane like that. And we were transforming things by taking the mirror image across this, and we're like, well, in that transformation, these red vectors don't change at all and this guy gets flipped over. So maybe those would make for good bases, or those would make for good basis vectors. And they did.

So in general, we're always interested in the vectors that just get scaled up by a transformation. It's not going to be all vectors, right? This vector that I drew here, this vector x, it doesn't just get scaled up, it actually gets changed-- its direction gets changed.
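Restated in symbols (just a compact summary of the relations described above, using the transcript's own v1 and v2):

$$
T(\vec{v}) = \lambda \vec{v}, \qquad
T(\vec{v}_1) = 1 \cdot \vec{v}_1 \ \text{for}\ \vec{v}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \qquad
T(\vec{v}_2) = -1 \cdot \vec{v}_2 \ \text{for}\ \vec{v}_2 = \begin{bmatrix} 2 \\ -1 \end{bmatrix}.
$$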
The vectors that get scaled up might switch direction-- might go from this direction to that direction, or maybe they go from that. Maybe that's x and then the transformation of x might be a scaled-up version of x. Maybe it's that. The actual, I guess, line that they span will not change. And so that's what we're going to concern ourselves with.

These have a special name. And they have a special name and I want to make this very clear because they're useful. It's not just some mathematical game we're playing, although sometimes we do fall into that trap. But they're actually useful. They're useful for defining bases, because in those bases it's easier to find transformation matrices. They're more natural coordinate systems. And oftentimes, the transformation matrices in those bases are easier to compute with. And so these have special names. Any vector that satisfies this right here is called an eigenvector for the transformation T. And the lambda, the multiple that it becomes-- this is the eigenvalue associated with that eigenvector.

So in the example I just gave, where the transformation is flipping around this line, v1, the vector 1, 2, is an eigenvector of our transformation. So 1, 2 is an eigenvector. And its corresponding eigenvalue is 1. This guy is also an eigenvector-- the vector 2, minus 1. He's also an eigenvector. A very fancy word, but all it means is a vector that's just scaled up by a transformation. It doesn't get changed in any more meaningful way than just the scaling factor. And its corresponding eigenvalue is minus 1.

If this transformation-- I don't know what its transformation matrix is. I forgot what it was. We actually figured it out a while ago. If this transformation can be represented as a matrix-vector product-- and it should be; it's a linear transformation-- then any v that satisfies the transformation of v is equal to lambda v, which also would be-- you know, the transformation of v would just be A times v. These are also called eigenvectors of A, because A is just really the matrix representation of the transformation. So in this case, this would be an eigenvector of A, and this would be the eigenvalue associated with the eigenvector. So if you give me a matrix that represents some linear transformation, you can also figure these things out.

Now, in the next video we're actually going to figure out a way to compute these things. But what I want you to appreciate in this video is that it's easy to say, oh, the vectors that don't get changed much. But I want you to understand what that means. They literally just get scaled up, or maybe they get reversed. Their direction, or the lines they span, fundamentally don't change. And the reason why they're interesting for us is, well, one of the reasons why they're interesting for us is that they make for interesting basis vectors-- basis vectors whose transformation matrices are maybe computationally simpler, or ones that make for better coordinate systems.
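A short numerical check of the running example (assuming NumPy; the explicit reflection matrix below comes from the standard reflection formula and is not written out in this video):

    import numpy as np

    # Reflection across the line spanned by u = (1, 2):
    # R = 2 * (u u^T) / (u^T u) - I
    u = np.array([[1.0], [2.0]])
    R = 2 * (u @ u.T) / (u.T @ u) - np.eye(2)
    print(R)                       # [[-0.6  0.8]
                                   #  [ 0.8  0.6]]

    v1 = np.array([1.0, 2.0])      # lies on the line: unchanged
    v2 = np.array([2.0, -1.0])     # orthogonal to the line: flipped

    print(R @ v1)                  # [1. 2.]   =  1 * v1  (eigenvalue 1)
    print(R @ v2)                  # [-2. 1.]  = -1 * v2  (eigenvalue -1)

    # np.linalg.eig finds the same eigenvalues directly
    print(np.linalg.eig(R)[0])     # 1 and -1 (order may vary)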