Lesson 1: Orthogonal complements
- Orthogonal complements
- dim(v) + dim(orthogonal complement of v) = n
- Representing vectors in Rn using subspace members
- Orthogonal complement of the orthogonal complement
- Orthogonal complement of the nullspace
- Unique rowspace solution to Ax = b
- Rowspace solution to Ax = b example
Orthogonal complements
Orthogonal Complements as subspaces. Created by Sal Khan.
Want to join the conversation?
- After 12:00, should all the rn-vectors be rn-transpose-vectors like they were before? Or, since he "untransposes" his r-vectors at 13:00, should all the r-vectors he wrote between 18:53 and 10:00 be non-transpose vectors as well? I'm confused. (14 votes)
- The "r" vectors are the row vectors of A throughout this entire video. "x" and "v" are both column vectors in "Ax=0" throughout also.
The dot product of two vectors is not affected by the row or column context of either vector.
The "T"s he uses on vectors here are useless and confusing. @he's not using the row vectors any differently than he did anywhere else in this video. 18:53(1 vote)
- At 7:43 in the video, isn't A really A transpose? And aren't what Sal calls r1T, r2T really the transposes of Column 1, Column 2, so maybe they should be labelled c1T, c2T? Because r1T suggests the transpose of a row to a column. (9 votes)
- The notation confused me for a bit also, but I think he just wants to clarify that any row vector could be written as the transpose of a column vector, e.g. v_1T, which we are more used to.
That said, it should be from r1T to r_nT, not r_mT. (5 votes)
- At 16:00, is every member of N(A) also orthogonal to every member of the column space? (3 votes)
- Every member of N(A) is orthogonal to every member of the column space of A transpose. (9 votes)
- Is it possible to illustrate this point with coordinates on a graph? (5 votes)
- Sal did, in this previous video: https://www.khanacademy.org/math/linear-algebra/matrix_transformations/matrix_transpose/v/lin-alg--visualizations-of-left-nullspace-and-rowspace (4 votes)
- Could you link the previous videos you were talking about in this video? (6 votes)
- I have a question which has given me really big confusion for the past few days....
1. I want to know: is the orthogonal complement actually a set of vectors which are orthogonal to some other set (subspace)?
2. What is the difference between the orthogonal complement and the orthogonal component? If the orthogonal complement is a set of vectors orthogonal to some set, is the component actually one of the vectors inside the complement?
3. Is the process of getting an orthogonal set (complement) of vectors from some particular set called the "Gram-Schmidt process"?
4. And if the 3rd is correct, then what is the null space?
Thanks (2 votes)
- The orthogonal complement is a subspace of vectors where all of the vectors in it are orthogonal to all of the vectors in a particular subspace. For instance, if you are given a plane in ℝ³, then the orthogonal complement of that plane is the line that is normal to the plane and that passes through (0,0,0).
The orthogonal component, on the other hand, is a component of a vector. Any vector in ℝ³ can be written in one unique way as a sum of one vector in the plane and one vector in the orthogonal complement of the plane. The latter vector is the orthogonal component.
The Gram-Schmidt Process is not the process of getting the orthogonal complement of a subspace from the original subspace. It is actually a method of creating an orthonormal coordinate system. For more information about orthonormal bases, Sal explains it at https://www.khanacademy.org/math/linear-algebra/alternate_bases/orthonormal_basis/v/linear-algebra-introduction-to-orthonormal-bases.
The 3rd question is not correct, but I'll answer this question anyway. A null space is the set of vectors which, when multiplied by a matrix, gives the 0 vector. (5 votes)
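If it helps to see the plane example above with numbers, here is a minimal sketch (numpy is assumed, and the plane and the vector below are made-up examples, not anything from the video): any vector in ℝ³ splits uniquely into a piece lying in the plane plus its orthogonal component.

```python
# A minimal sketch, assuming numpy; the plane and v are made-up examples.
import numpy as np

# A plane in R^3 spanned by two vectors; QR gives an orthonormal basis Q for it.
spanning = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])
Q, _ = np.linalg.qr(spanning)      # columns of Q: orthonormal basis of the plane

v = np.array([1.0, 2.0, 3.0])
in_plane = Q @ (Q.T @ v)           # projection of v onto the plane
orthogonal_component = v - in_plane

print(np.allclose(v, in_plane + orthogonal_component))  # the unique decomposition
print(np.allclose(Q.T @ orthogonal_component, 0.0))     # orthogonal to the plane
```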
- What's the "a member of" sign Sal uses atand, again, about 10 seconds later. It looks like a smaller-or-equal sign. Is this a common thing? 19:25(2 votes)
- This is the notation for saying that one set is a subset of another set, different from saying a single object is a member of a set. This notation is common, yes. This is a short textbook section on the definition of a set and the usual notation: http://linear.ups.edu/html/section-SET.html (3 votes)
- 08:12 is confusing; the rest is actually okay. So... r1 through to rm are row vectors... but wait, they are transposes of row vectors... doesn't that mean they are column vectors? But then he writes it in that way... Please, somebody, help explain why they are just the same thing as the ordinary matrix A. This has been bugging me and is starting to waste my time, so I will just leave the question here, and if you have a good idea about how it works, please tell me :) Thank you for reading my question ^_^ (3 votes)
- At 10:19, is it supposed to be rn transpose or rm transpose multiplied by x?
A matrix has m rows, and he is essentially multiplying the m-th row, rm, by vector x. (2 votes)
- Try it with an arbitrary 2x3 (= mxn) matrix A and 3x1 (= nx1) column vector x. You'll see that Ax = (r1 dot x, r2 dot x) = (r1 dot x, rm dot x) (a column vector; ri = the ith row vector of A), as you suggest. (1 vote)
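A quick way to try that suggestion numerically (a minimal sketch; numpy and the particular A and x are assumptions, not from the video):

```python
# Check that Ax stacks the row-by-row dot products r_i . x (made-up example).
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])    # m = 2 rows, n = 3 columns
x = np.array([1.0, 1.0, 1.0])

row_dots = np.array([row @ x for row in A])  # (r1 . x, r2 . x) = (r1 . x, rm . x)
print(np.allclose(A @ x, row_dots))          # True
```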
- In this video, Sal examines the orthogonal complement.
Complementary angles add to ninety degrees.
Is the term complement used in other contexts to represent perpendicularity, or does it just mean something that goes with it, as in "This wine complements this food."?
Thanks in advance for your insight. (2 votes)
- I usually think of "complete" when I hear "complement". In linguistics, for instance, a complement is a word/phrase that is required by another word/phrase so that the latter is meaningful (e.g. the verb "to give" needs two complements to make sense => "to give something to somebody"). (1 vote)
Video transcript
Say I've got a subspace V. So V is some subspace,
maybe of Rn. I'm going to define the
orthogonal complement of V, let me write that
down, orthogonal complement of V is the set. And, this is shorthand notation
right here, would be the orthogonal complement
of V. So we write this little
orthogonal notation as a superscript on V. And you can pronounce this
as 'V perp', not for 'perpetrator' but for
'perpendicular.' So V perp is equal to the set of
all x's, all the vectors x that are a member of our Rn,
such that x dot V is equal to 0 for every vector V that is
a member of our subspace. So we're essentially saying,
look, you have some subspace, it's got a bunch of
vectors in it. Now if I can find some other
set of vectors where every member of that set is orthogonal
to every member of the subspace in question, then
the set of those vectors is called the orthogonal
complement of V. And you write it this way,
V perp, right there.
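In set notation, the definition just given is V perp = { x in Rn : x · v = 0 for every v in V }. As a numeric illustration (a minimal sketch, assuming numpy and scipy are available; the matrix M is a made-up example), a basis for V perp can be computed as the null space of a matrix whose rows span V, which is exactly the connection the rest of the video develops:

```python
# A minimal sketch, assuming numpy and scipy; M is a made-up example.
import numpy as np
from scipy.linalg import null_space

# Let V be the span of the rows of M, a 2-dimensional subspace of R^3.
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# x is in V-perp exactly when it dots to zero with every row of M,
# i.e. when M @ x = 0, so V-perp is the null space of M.
V_perp_basis = null_space(M)     # one column per basis vector of V-perp

# Check: every basis vector of V-perp is orthogonal to every row of M.
print(np.allclose(M @ V_perp_basis, 0.0))   # True
```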
So the first thing that we just tend to do when we are defining a space or defining
some set is to see, hey, is this a subspace? Is V perp, or the orthogonal
complement of V, is this a subspace? Well, you might remember from
many, many videos ago, that we had just a couple of conditions
for a subspace. That if-- let's say that a and b
are both a member of V perp, then we have to wonder
whether a plus b is a member of V perp. That's our first condition. It needs to be closed under
addition in order for this to be a subspace. And the next condition as well,
if a is a member of V perp, is some scalar multiple of
a also a member of V perp? And the last one, it has to
contain the zero vector. Which is a little bit redundant
with this, because if any scalar multiple of a is
a member of our orthogonal complement of V, you could
just multiply it by 0. So it would imply that the zero
vector is a member of V perp. So what does this imply? What does the fact that a and b are members of V perp mean? That means that a dot V, where
this V is any member of our original subspace V, is equal
to 0 for any V that is a member of our subspace V. And it also means that, since b is also a member of V perp, b dot any member of our subspace V is also going to be 0. So what happens if we
take a plus b dot V? Let's do that. So if I do a plus b dot
V, what is this going to be equal to? This is going to be equal
to a dot V plus b dot V. And we just said, the fact that
both a and b are members of our orthogonal complement
means that both of these quantities are going
to be equal to 0. So this is going to be
equal to 0 plus 0 which is equal to 0. So a plus b is definitely a
member of our orthogonal complement. So we got our check box right
there. I'll do it in a different color than
the question mark. Check, for the first condition, for being a subspace. Now is ca a member of V perp? Well let's just take c. If we take ca and dot it with
any member of our original subspace this is the same thing
as c times a dot V. And what is this equal to? By definition a was a member of
our orthogonal complement, so this is going to
be equal to 0. So this is going to be c times
0, which is equal to 0. So this is also a member
of our orthogonal complement to V. And of course, I can multiply
c times 0 and I would get 0. Or you could just say, look, 0
is orthogonal to everything. You take the zero vector, dot
it with anything, you're going to get 0. So the zero vector is always
going to be a member of any orthogonal complement, because
it obviously is always going to be true for this condition
right here. So we know that V perp, or the
orthogonal complement of V, is a subspace. Which is nice because now we
can apply to it all of the properties that we know
of subspaces.
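As a quick numeric sanity check of those three subspace conditions, here is a small sketch (numpy and the particular line V are assumptions, not from the video):

```python
# Check the subspace conditions for V-perp numerically (made-up example).
import numpy as np

v = np.array([1.0, 1.0, 0.0])   # V = span{v}, a line in R^3
a = np.array([1.0, -1.0, 2.0])  # a . v == 0, so a is in V-perp
b = np.array([-1.0, 1.0, 5.0])  # b . v == 0, so b is in V-perp

print(np.isclose((a + b) @ v, 0.0))       # closed under addition: True
print(np.isclose((3.0 * a) @ v, 0.0))     # closed under scaling: True
print(np.isclose(np.zeros(3) @ v, 0.0))   # contains the zero vector: True
```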
Now the next question, and I touched on this in the last video, I said that if I have
some matrix A, and let's just say it's an m by n matrix. In the last video I said that
the row space of A is -- well, let me write this way. I wrote that the null space of
A is equal to the orthogonal complement of the
row space of A. And the way that we can write
the row space of A, this thing right here, the row space of
A, is the same thing as the column space of A transpose. So one way you can rewrite this
sentence right here, is that the null space of A is the
orthogonal complement of the row space. The row space is the column
space of the transpose matrix. And the claim, which I have
not proven to you, is that this is the orthogonal
complement of this. This is equal to that, the
little perpendicular superscript. That's the claim, and at least
in the particular example that I did in the last two videos
this was the case, where I actually showed you that
2 by 3 matrix. But let's see if this
applies generally. So let me write my matrix
A like this. So my matrix A, I can
write it as just a bunch of row vectors. But just to be consistent with
our notation, with vectors we tend to associate as column
vectors, so to represent the row vectors here I'm just
going to write them as transpose vectors. Because in our reality, vectors
will always be column vectors, and row vectors are
just transposes of those. r1 transpose, r2 transpose and
you go all the way down. We have m rows. So you're going to
get rm transpose. Don't let the transpose
part confuse you. I'm just saying that these
are row vectors. I'm writing transposes there
just to say that, look these are the transposes of
column vectors that represent these rows. But if it's helpful for you to
imagine them, just imagine this is the first row of the
matrix, this is the second row of that matrix, so
on and so forth. Now, what is the null
space of A? Well that's all of
the vectors here. Let me do it like this. The null space of A is all of
the vectors x that satisfy the equation that this is going to
be equal to the zero vector. Now to solve this equation,
what can we do? We've seen this multiple
times. This matrix-vector product is
essentially the same thing as saying-- let me write it like
this-- it's going to be equal to the zero vector in Rm. You're going to have m 0's all
the way down to the m'th 0. So another way to write this
equation, you've seen it before, is when you take the
matrix-vector product, you essentially are taking
the dot product. So to get to this entry right
here, this entry right here is going to be this row dotted
with my vector x. So this is r1, we're calling
this row vector r1 transpose. This is the transpose of some
column vector that can represent that row. But that dot, dot my vector x,
this vector x is going to be equal to that 0. Now, if I take this guy-- let
me do it in a different color-- if I take this guy and
I dot him with vector x, it's going to be equal to that 0. So r2 transpose dot x is
going to be equal to that 0 right there. So, another way to write this
equation is that r1 transpose dot x is equal to 0, r2
transpose dot x is equal to 0, all the way down to rm transpose
dot x is equal to 0. And by definition the null space
of A is equal to all of the x's that are members of--
well in this case it's an m by n matrix, you're going to have
n columns-- so it's all the x's that are members of Rn, such
that Ax is equal to 0. Or, you could alternately write
it this way: that if you were to dot each of the rows
with x, you're going to be equal to 0. So you could write it
this way, such that Ax is equal to 0. That's an easier way
to write it.
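In symbols, the rewriting above is (a LaTeX rendering of what the video writes on the board):

$$A\vec{x} = \begin{bmatrix} \vec{r}_1^{\,T} \\ \vec{r}_2^{\,T} \\ \vdots \\ \vec{r}_m^{\,T} \end{bmatrix}\vec{x} = \begin{bmatrix} \vec{r}_1 \cdot \vec{x} \\ \vec{r}_2 \cdot \vec{x} \\ \vdots \\ \vec{r}_m \cdot \vec{x} \end{bmatrix} = \vec{0},$$

so x is in N(A) exactly when every row of A dots to zero with x.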
So let's think about it. If someone is a member, if by definition I give you some vector V. If I were to tell you that
V is a member of the null space of A. Let's call it V1. V1 is a member of
null space of A. That means it satisfies this
equation right here. That means A times
V is equal to 0. A times V is equal to 0 means
that when you dot each of these rows with V, you
get 0. Or another way of saying that
is that V1 is orthogonal to all of these rows, to r1
transpose-- that's just the first row-- r2 transpose, all
the way to rm transpose. So this is orthogonal to all of
these guys, by definition, any member of the null space. Well, if you're orthogonal to
all of these members, all of these rows in your matrix,
you're also orthogonal to any linear combination of them. You can imagine, let's say that
we have some vector that is a linear combination of
these guys right here. So let's say vector w is equal
to some linear combination of these vectors right here. I wrote them as transposes,
just because they're row vectors. But I can just write them as
regular column vectors, just to show that w could be just
a regular column vector. So let's say w is equal to c1
times r1, plus c2 times r2, all the way to cm times rm. That's what w is equal to. So what happens when you take
V, which is a member of our null space, and you
dot it with w? So if you take V, and dot it
with w, it's going to be V dotted with each of these guys,
because our dot product has the distributive property. So if you dot V with each of
these guys, it's going to be equal to c1-- I'm just going
to take the scalar out-- c1 times V dot r1, plus c2 times V
dot r2-- this is an r right here, not a V-- plus,
all the way to, plus cm times V dot rm. And we know, we already just
said, that V dot each of these r's are going to
be equal to 0. So all of these are going
to be equal to 0. So this whole expression is
going to be equal to 0. So if you have any vector that's
a linear combination of these row vectors, if you dot
it with any member of your null space, you're
going to get 0.
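Written out, the computation just described uses the distributive property of the dot product:

$$\vec{v} \cdot \vec{w} = \vec{v} \cdot \left( c_1 \vec{r}_1 + c_2 \vec{r}_2 + \cdots + c_m \vec{r}_m \right) = c_1 (\vec{v} \cdot \vec{r}_1) + c_2 (\vec{v} \cdot \vec{r}_2) + \cdots + c_m (\vec{v} \cdot \vec{r}_m) = 0.$$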
So let me write this way: what is any vector that's any linear combination
of these guys? Well, that's the span
of these guys. Or you could say that the row
space, that's the row space. So if w is a member of the row
space, which you can just represent as a column space of A
transpose, then, since V is a member of our null space, we know that V dot w is going to be equal to 0, I just showed that to you
right there. So every member of our null
space is definitely orthogonal to every member of
our row space. So that's what we know so far. Every member of null space of
A is orthogonal to every member of the row space of A. Now, that only gets
us halfway. That still doesn't tell us that
this is equivalent to the orthogonal complement
of the row space. For example, there might be
members of our orthogonal complement of the row space that
aren't a member of our null space. So let's say that I have
some other vector u. Let's say that u is a member of
the orthogonal complement of our row space. I know the notation is a little
convoluted, maybe I should write an r there. But I want to really get set
into your mind that the row space is just the column
space of the transpose. Let's say that u is some member
of our orthogonal complement. What I want to do is show
you that u has to be in your null space. And when I show you that,
then we know. So far we just said that, OK
then, everything in the null space is orthogonal to the row
space, but we don't know that everything that's orthogonal
to the row space, which is represented by this set,
is also going to be in your null space. That's what we have to show, in
order for those two sets to be equivalent, in order
for the null space to be equal to this. So if we know this is true, then
this means that u dot w, where w is a member of our
row space, is going to be equal to 0. Let me write this down right
here, that is going to be equal to 0. And what does that mean? That means that u is
also orthogonal. So this implies that u dot--
well, r, j, any of the row vectors-- is also equal to 0,
where j is equal to 1 all the way through m. How do I know that? Well, I'm saying that, look, you
take u as a member of the orthogonal complement of the row
space, so that means u is orthogonal to any member
of your row space. So in particular the basis
vectors of your row space-- we don't know whether all of these
guys are basis vectors-- these guys are definitely all
members of the row space. Some of them are actually the
basis for the row space. So that means if you take u dot
any of these guys, it's going to be equal to 0. So if u dot any of these guys is
equal to 0, that means that u dot r1 is 0, u dot r2 is equal
to 0, all the way to u dot rm is equal to 0. Well, if all of this is true,
that means that A times the vector u is equal to 0. That implies this, right? You stick u there, you take
all the dot products, it's going to satisfy
this equation. Which implies that u is a member
of our null space. So we've just shown you that
every member of your null space is definitely a member of
the orthogonal complement. And now we've said that every
member of our orthogonal complement is a member
of our null space. And actually I just noticed
that I made a slight error here. This dot product, I don't have
to write the transpose here, because we've defined our dot
product as the dot product of column vectors. So this is the transpose
of some column vectors. So you can un-transpose
it here and just take the dot product. Anyway, minor error there. But that diverts me from my main
takeaway, my punch line, the big picture. We now showed you, any member of
our null space is a member of the orthogonal complement. So we just showed you, this
first statement here is another way of saying, any
member of the null space-- or that the null space is a subset
of the orthogonal complement of the row space. So that's our row space, and
that's the orthogonal complement of our row space. And here we just showed that any
member of the orthogonal complement of our row space
is also a member of your null space. Well, if these two guys are
subsets of each other, they must be equal to each other. So we now know that the null
space of A is equal to the orthogonal complement of the row
space of A or the column space of A transpose.
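Here is a numeric spot-check of that result (a minimal sketch; numpy, scipy, and the particular matrix A are assumptions, not the example from the earlier videos):

```python
# Spot-check N(A) = (row space of A)-perp on a made-up 2x3 matrix.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

N = null_space(A)               # basis for the null space N(A)

# Every null space vector is orthogonal to every row, hence to the row space.
print(np.allclose(A @ N, 0.0))  # True

# Dimensions are consistent: dim(row space) + dim(N(A)) = n.
print(np.linalg.matrix_rank(A) + N.shape[1] == A.shape[1])  # True
```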
Now, I related the null space with the row space. I could just as easily make a
bit of a substitution here. Let's say that A is
equal to some other matrix, B transpose. Right? It's going to be the transpose
of some matrix, you could transpose either way. So if I just make that
substitution here, what do we get? We get, the null space of B
transpose is equal to the column space of B transpose,
right? A transpose is B transpose
transposed. Let me get my parentheses
right. And then that thing's orthogonal
complement. So what is this equal to? The transpose of the transpose
is just equal to B. So I can write it as, the null
space of B transpose is equal to the orthogonal complement
of the column space of B. So just like this, we just show
that the left-- B and A are just arbitrary matrices. So this showed us that the null
space, sometimes it's nice to write in words,
is the orthogonal complement of row space. And this right here is showing
us, that the left null space which is just the same thing as
a null space of a transpose matrix, is equal to,
orthogonal-- I'll just shorthand it-- complement
of the column space. Which are two pretty
neat takeaways. We saw a particular example of
it a couple of videos ago, and now you see that it's true
for all matrices.
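And a matching spot-check of the second takeaway (again a sketch; numpy, scipy, and the matrix B are assumptions):

```python
# Spot-check N(B^T) = (column space of B)-perp on a made-up 3x2 matrix.
import numpy as np
from scipy.linalg import null_space

B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, 1.0]])

left_null = null_space(B.T)     # basis for the left null space N(B^T)

# Every left null space vector is orthogonal to every column of B.
print(np.allclose(B.T @ left_null, 0.0))   # True
```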