### Course: Linear algebra > Unit 3

Lesson 2: Orthogonal projections - Projections onto subspaces


# Projections onto subspaces

Projections onto subspaces. Created by Sal Khan.

## Want to join the conversation?

- At 6:12 Sal refers to another video, can someone please refer me to it? Thanks.

Also... why not add an embedded comment with a link to the video if you're going to reference it?(5 votes)
- At 16:23 the projection is calculated as [-27/13, -18/13]. But the projection vector has a positive horizontal component (it's pointing to the right). Am I missing something? I'm assuming that vector is w.r.t. the original space (vs. the null + row space) since the projection is calculated using vectors from that space.(3 votes)
- The projection is [27/13, -18/13], not [-27/13, -18/13]. The correct projection does point to the right along the row space of A.(4 votes)
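That arithmetic is easy to check by machine. A minimal sketch in plain Python, where `project_onto_line` is a hypothetical helper implementing the formula (x · v / v · v) v used in the video:

```python
from fractions import Fraction

def project_onto_line(x, v):
    """Project vector x onto the line spanned by v: (x·v / v·v) v."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    scale = Fraction(dot(x, v), dot(v, v))  # 9/13 for the video's numbers
    return [scale * vi for vi in v]

# Projecting the particular solution (3, 0) onto the row space spanned by (3, -2)
proj = project_onto_line([3, 0], [3, -2])
print(proj)  # [Fraction(27, 13), Fraction(-18, 13)]
```

The first component is +27/13, confirming the projection points to the right.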

- How can a vector be placed anywhere and still be considered the same vector, like at 9:59? Then why would the solution set at 11:02 be a different vector? Isn't that just moving the null space by 3 in the x-direction?(1 vote)
- I think it's true that you can draw a vector anywhere (because a vector just has magnitude and direction). However, when you want to use vectors to describe points in a vector space (i.e. when you want your vectors to be position vectors), you need the vector space to have an origin and you need to describe all the points with respect to that origin.

This means, for example, that when you are using the sum of two vectors to describe a point, one of the vectors has to start from the origin (and the other vector would be drawn head-to-tail with it because that is how vectors add). This also applies to describing lines in a vector space.(3 votes)

- What is the difference between the null space and the orthogonal complement?(1 vote)
- If you mean the Orthogonal Complement (https://www.khanacademy.org/math/linear-algebra/alternate_bases/othogonal_complements/v/linear-algebra-orthogonal-complements), then they are closely related, but are a different set of subspaces.

The null space of matrix `𝐀` is defined as all vectors `x⃗` that satisfy `𝐀x⃗ = 0`, while the Orthogonal Complement of matrix `𝐀` can be calculated as all vectors `y⃗` that satisfy `𝐀ᵀy⃗ = 0`.

The main difference is that to calculate the null space you use the normal matrix `𝐀`, and to calculate the Orthogonal Complement you use the transpose of `𝐀`.(3 votes)
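That distinction can be checked concretely with the 2×2 matrix from the video. A hand-rolled sketch (no libraries assumed; the basis vectors are picked by inspection, not computed):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

A_rows = [(3, -2), (6, -4)]   # the matrix A from the video, row by row
A_cols = [(3, 6), (-2, -4)]   # columns of A (= rows of A transpose)

n_A  = (2, 3)    # claimed null-space basis:        A x = 0
n_AT = (2, -1)   # claimed null space of A^T basis: A^T y = 0

# A x = 0 means x is orthogonal to every row of A
assert all(dot(row, n_A) == 0 for row in A_rows)

# A^T y = 0 means y is orthogonal to every column of A,
# i.e. y lies in the orthogonal complement of the column space of A
assert all(dot(col, n_AT) == 0 for col in A_cols)
```

Note the two basis vectors differ: the null space of A is spanned by (2, 3), while the null space of Aᵀ is spanned by (2, -1).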

- Does Khan Academy have anything on the math for perspective projection, not just orthogonal?(2 votes)
- So to summarize, the projection of any vector in the solution set onto the row space yields the shortest possible solution. Is that correct? What would happen if we were to project the vector onto the null space instead?(2 votes)
- The null space is always parallel to the solution set (i.e. the solution set for any vector in the column space is a translation of the null space). Projecting a solution vector onto the null space yields its null-space component, not the same vector; subtracting that component from the solution leaves its row-space component, which is the shortest solution.(1 vote)
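One way to see the relationship is to compute both projections for the particular solution (3, 0) from the video. A small sketch in plain Python, where `project_onto_line` is an assumed helper implementing (x · v / v · v) v:

```python
from fractions import Fraction

def project_onto_line(x, v):
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    s = Fraction(dot(x, v), dot(v, v))
    return [s * vi for vi in v]

x = [3, 0]                                  # a particular solution to Ax = b
row_part  = project_onto_line(x, [3, -2])   # projection onto the row space
null_part = project_onto_line(x, [2, 3])    # projection onto the null space

# The two orthogonal components add back up to x
assert [r + n for r, n in zip(row_part, null_part)] == x

print(row_part)   # [Fraction(27, 13), Fraction(-18, 13)]  -- the shortest solution
print(null_part)  # [Fraction(12, 13), Fraction(18, 13)]
```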

- Would a line that doesn't go through the origin still be considered a subspace since it wouldn't contain the zero vector? Or should we only care about parametric representations of a line such that they always go through the origin?(1 vote)
- Not containing the zero vector is definitively what it means to
**not** be a subspace. A line that doesn't go through the origin does not contain the zero vector, so such a line is not a subspace.(2 votes)

- Isn't S one of the solutions to AX=b? It might not be the shortest solution, but isn't it one of the solutions? If it is, then S would be a member of C(AT). So is this a projection of a member of C(AT) onto C(AT)(r)?(1 vote)
- Given x and u hat, we can find the projection of x by

(x . u hat) u hat = projection of x.

Given the projection and u hat, how do we find the vector x?

I tried the projection matrix, but it's not invertible. Please suggest another way, if any.

Thanks(1 vote)
- It is worth noting that b = [9 18] is a member of c(A) = [3 6]

that is b = 3c(A).

Why am I saying this?

In order for the shortest solution to AX = b to be a member of c(A_transpose), b has to be a member of c(A).

Sal mentioned this in previous videos. I just think it's necessary to note.

HL.(1 vote)
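The earlier question about recovering x from its projection can also be settled numerically: a projection matrix onto a line in R² has rank 1, so its determinant is 0 and it cannot be inverted. A sketch in plain Python with exact rational entries (the choice of line (3, -2) is just the video's row space):

```python
from fractions import Fraction

# Projection matrix onto the line spanned by v = (3, -2):
# P = (v v^T) / (v · v)
v = [3, -2]
vv = sum(c * c for c in v)  # v · v = 13
P = [[Fraction(v[i] * v[j], vv) for j in range(2)] for i in range(2)]

det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
print(det)  # 0 -- P is singular, so x cannot be recovered from P x alone
```

Every vector on the same perpendicular line maps to the same projection, which is exactly why the map loses information.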

## Video transcript

Many videos ago we introduced
the idea of a projection. And in that case we dealt more
particularly with projections onto lines that went
through the origin. So if we had some line-- let's
say L-- and let's say L is equal to the span of
some vector v. Or you could say, alternately,
that L is equal to the set of all multiples of v, such that
the scalar factors are just any real numbers. These are both representations
of lines that go through the origin. We defined a projection of any
vector onto that line. Let me just draw it real
fast. So let me see, we draw some axes. So that is my-- I want to draw
it a little bit straighter than that-- that is my vertical
axis and that is my horizontal axis. Just like that and let's say
I have some line that goes through the origin. Let's say-- that doesn't go
through the origin-- let's say that that line right there
goes through the origin. So that is L. We knew visually that a
projection of some vector x onto L-- so let's say that
that is a vector x. Visually, if you were to draw--
if you have some light coming straight down it would
be the shadow of x onto L. So this right here, that right
there, was the projection onto the line L of the vector x. And we defined it
more formally. We kind of took a
perpendicular. We said that x minus the
projection of x onto L is perpendicular to the line
L, or perpendicular to everything-- orthogonal to
everything-- on the line L. But this is how at least
I visualize this. It's kind of the shadow as you
go down onto the line L. And this was a special case,
in general, of projections. You might notice that L is going
to be a valid subspace. You could prove it
to yourself. It contains the zero vector. It goes through the origin. It's closed under addition--
any member of it plus any other member of it is going to
be another member of it. It's closed under scalar
multiplication-- you can take any member of it and scale it
up or down, it's still going to be an L. So this was a subspace
when we defined this. And just as a bit of a reminder
of what it was, we were able to figure out what
this projection is for some line L. If you have some spanning
vector, the projection onto this line L that goes through
the origin of the vector x, we figured out was x dot your
spanning vector for your line, so x dot v over v dot v,
which is really just the length of v squared. So all of this was a number and
you want it to be in the same direction as your line. It's going to be another
vector in your line. So it's going to be times
the vector v. So it's just going to be a
scaled up or scaled down version of your spanning
vector. Maybe your spanning vector
is like that. And really any vector
in your line could be a spanning vector. Any vector other than
the zero vector. Now that was a projection onto
a line which was a special kind of subspace. But now we're going to broaden
our definition of a projection to any subspace. So we already know that if-- let
me draw a little dividing line to show that we're doing
something slightly different-- if v is a subspace of Rn then
v complement is also a subspace of Rn. So the orthogonal complement
of v is also a subspace. And let's say we have some
members, or let me write it this way. If we have these two subspaces--
you have a subspace and you have this
orthogonal complement-- we already learned that if you
have any member of Rn-- so let's say that x is a member
of our Rn-- then x can be represented as a sum of a member
of v and a member of the orthogonal complement
of v. Where-- let me write this-- the
vector v is a member of the subspace v and the vector
w is a member of the orthogonal complement
of the subspace v. Just like that. We saw this several
videos ago. We proved that this was true
for any member Rn. Now given that, we can define
the projection of x onto the subspace v as being equal to,
just the part of x -- these are two orthogonal parts of x--
we define the projection onto v as a part of x
that came from v. It's equal to just
that vector v. Alternately you could say that
the projection of x onto the orthogonal complement of-- sorry
I wrote transpose-- the orthogonal complement of v is
going to be equal to w. So this piece right here
is a projection onto the subspace v. This piece right here is a
projection onto the orthogonal complement of the subspace v. Now what I want to do in this
video is show you that these two definitions-- that this
definition right here which is then in conjunction with this
right here-- this is the equivalent to what we learned up
here if the subspace v that we're dealing with is a line. Because this was a
valid subspace. But not all subspaces are
going to be lines. And to see this we can revisit
an example that we saw several videos ago. Several videos ago we had
this matrix here A. This 2 by 2 matrix. And then we had this other
vector b that was a member of the column space of A. We did this problem to show you
that the shortest solution to this right here was a unique member of the row space. Hopefully that gets your memory
on track for this problem when we first did it. But let me graph it and show you
that for the solution of that problem we could have
just as easily taken a projection onto a subspace. Let me graph everything
in this problem. This might help you remember
also about the problem. So let me draw my axes
just like that. So the first thing we learned--
you know you could solve this but I already
did this in a video. I think it was two or three
videos ago-- the null space of A, or all of the x's that
satisfy Ax is equal to zero, is a span of the vector 2, 3. So you go 2 to the right. 1, 2. And then you go 3 up. 1, 2, 3. And so it's the span
of this vector. And so the span of that vector
is just all the points. Well that vector specifies
that point. But if you scale this vector
up and down you're going to specify all of the point
on this line. All the points on that line. Let me draw it like that. That's good enough. It shouldn't curve down
like that at the end. So let me draw that a
little straighter. So this is the null space. That is our null space of
that matrix right there. And then the row space was a
span of the vector 3, minus 2. You see that right here. 3, minus 2 is the first row. This guy is just a multiple
of that one. That's why we don't have
this guy right here in the span as well. And if we were to graph
it, 3, minus 2. You go out 3, then
you go down 1, 2. It would be the span of this
vector right there. Let me draw it like that. Now you take all of the scalar
multiples of that vector and you put those vectors in
standard position. They're going to specify, or
their tips are going to be on points along this line
right there. Along that line right there. I'm trying to make sure I
draw them orthogonally. So this right here
is the row space. That right there is the row
space of A which is the same thing as a column space
of A transpose. And we know that these guys are
each other's orthogonal complements. We know, we've seen this in
multiple videos, that the null space of A is the orthogonal
complement of the row space. And we also know that the
orthogonal complement of the null space is equal
to the row space. Everything in this
is orthogonal to everything in that. Everything in that
is orthogonal to everything in this. You can see it here
in this graph. That these two spaces, which are
represented by these lines that go through the origin,
are orthogonal. And it makes sense that any--
we said at really the beginning of the video-- that
anything in R2 in this situation, can be represented as
some sum of a unique member of our row space and a unique
member of its orthogonal complement. Let's say I have that
point right there. How could I represent it as a
sum of a member of this and a member of that? Well if I go along this guy, I
have this vector right here. I have that vector right
there along that line. And then I have this
vector right here. If I were to shift it-- this is
drawn in standard position, but I can draw a vector
wherever I want. These lines are just all of the
vectors drawn in standard position with their tails
at the origin. But we learned, in really, I
think, the first or second vector videos, that I can draw
them wherever I want. So if I add this vector and that
vector, I can shift this vector over and this vector
will be right there. And there you have it. I took an arbitrary point in R2
and I can represent it as a sum of a member of my row space
and a member of the row space's orthogonal complement
or the null space. But just to review, what we
originally did in that problem is we looked at the solution
set of this. We said the solution set of
this looks like this. It has a particular solution
plus members of your null space, plus homogeneous
solutions. We've seen that multiple
videos ago. So 3, 0-- it looks
like this-- plus members of the null space. So your solution set is going
to be parallel to this but shifted to the right by 3. So it looks-- let me draw it a
little neater than that-- Let me draw it like that. And then it goes down like,
the second part I didn't draw-- there you go. Oh that's not good either. Maybe I'm being too picky. OK, so this is your
solution set. And if you remember in that
video we said, hey there's some member of this solution set
that is also a member of our row space and that member of
the solution set that is a member of our row space
is going to be the shortest solution. And we saw that. You can see it visually
right here. Right? This vector right here. It is in our row space. It is a member of
our row space. And it also specifies a point
on our solution set. And you could see visually
that it's going to be the shortest solution. And one way you could think
about it is, this is the projection-- let me pick a good,
different, new color-- any solution on our solution
set-- let me see right there-- let's say that that is some
arbitrary solution on our solution set. Right? That's going to be a point in R2
and any point in R2 can be represented as a sum of some
vector in our row space and some vector in our null space. And so if I have this vector
right here, how can I do that? Well, I could represent it as
a sum of this guy right here and then this vector
right here. That vector right here. And this vector right
here is clearly a member of my null space. I just shifted over. This line is only when I draw
in standard position. This vector right here-- I'm
just showing it heads to tails-- if I add this member of
my row space to this member of my null space, I get an
arbitrary solution to my solution set. And if you think about it, the
projection of my arbitrary solution onto my row space will
be this guy right here. And that just comes from our--
well there are two ways to think about it-- we could say
that this is the solution right here. We could say our solution right
here is equal to some member of my row space plus some
member of my null space. This is the row space. That is the null space. And so by the definition of a
projection onto a subspace I just gave you, we know that
the projection of this solution onto my-- let me write
a little bit-- onto my row space of my solution,
is just equal to this first thing. It's equal to the component of
it that's in my row space. Its other component, we could
call it, is in the orthogonal complement of my row space. Or it's in my null space. So this is just going to be
equal to the R vector. Now, I want to show you that
that is essentially equal to the definition that
we did before. That this is completely
identical to the definition of a projection onto a line because
in this case the subspace is a line. So let's find a solution set. And the easiest one, the easiest
solution that we could find is if we set C as
equal to 0 here. We know that x equals 3, 0 is
one of these solutions. So x equals 3, 0 looks
like that. So we know x equals 3,
0 is a solution. And what we want to do
is we want to find the shortest solution. Or we want to find the
projection of x onto the row space. Or if we wanted, we could also
think of it is a projection of x onto this line. This line is equal
to the row space. So let's do that. And I'm doing this to show you
that this definition of a projection onto a subspace that
I've just introduced you to in this video, it is
completely identical to the definition, or it's not
identical, it's consistent with the definition of a
projection onto a line. Although this is more general
because a subspace doesn't have to be a line. But in this case it is a line. So let's do that. So the projection of the vector
3, 0 onto our row space, which is a line so we
can use that formula, it is equal to 3, 0 dot the
spanning vector for our row space, right? Dot the spanning vector
for our row space. So it's 3, minus 2. There's a bunch of spanning
vectors for your row space. This is just the one we
happened to pick. So dot 3, minus 2 all
over the spanning vector dotted with itself. 3, minus 2 dot 3, minus 2. And then this is just going to
be one big scalar and then we want to multiply that-- or
essentially scale up-- our actual spanning vector
by that. So this is a projection of
this solution onto my row space, which should give me
this vector right here. Because we're just taking a
projection onto a line, because a row space in this
subspace is a line. And so we used the linear
projections that we first got introduced to, I think, when I
first started doing linear transformations. So let's see this is 3 times
3 plus 0 times minus 2. This right here is equal to 9. This is 3 times 3 plus minus
2 times minus 2. So that's 9 plus 4. That's 13. So it's 9/13 times this
vector right here. So it's going to be equal
to 9/13 times the vector 3, minus 2. Which is equal to the vector
27/13 and then minus 18/13, which is this vector
right here. We got this exact answer when
we first did it, although we just didn't use the projection
onto a line. But now we see that this is
exactly consistent with what we did before. We just used the projection
onto a line. And we see that this is
consistent with our new, broader definition
of a projection. Here we were able to do it
because we did it onto a line. But here I'm calling a
projection onto any subspace. We know how to do it if it's a
line, but so far I've just kind of defined it onto
an arbitrary subspace. But I haven't given you a nice
mathematical, I guess, or computational way to figure out
what this is going to be if this isn't a line. In fact, I haven't even shown
you when this is general, whether this is definitely
a linear transformation. We know that when you take the
projection onto the line it's a linear transformation. But I haven't shown you that
when we take a projection onto an arbitrary subspace that it
is a linear transformation. I'll do that in the
next video.
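The worked example in the transcript can be reproduced end to end. Below is a sketch in plain Python with exact rational arithmetic; `project_onto_line` is an assumed helper for the formula (x · v / v · v) v. It takes the easy solution x = (3, 0), projects it onto the row space spanned by (3, -2), and confirms the result (27/13, -18/13) is still a solution of Ax = b and is shorter than (3, 0):

```python
from fractions import Fraction

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_onto_line(x, v):
    """proj_L(x) = (x · v / v · v) v, the formula from the transcript."""
    s = Fraction(dot(x, v), dot(v, v))
    return [s * vi for vi in v]

A = [[3, -2], [6, -4]]
b = [9, 18]

x = [3, 0]                         # the easy solution (scalar c = 0)
r = project_onto_line(x, [3, -2])  # project onto the row space

# r is still a solution: A r = b ...
assert [dot(row, r) for row in A] == b

# ... and it is shorter than x: |r|^2 < |x|^2
assert dot(r, r) < dot(x, x)

print(r)  # [Fraction(27, 13), Fraction(-18, 13)]
```

Both checks pass, which is the content of the video's claim: the projection of any solution onto the row space is itself a solution, and it is the shortest one.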