
Orthogonal complement of the orthogonal complement

Finding that the orthogonal complement of the orthogonal complement of V is V. Created by Sal Khan.

Want to join the conversation?

  • onceltuca
    Hi. Shouldn't the "areas" around V and comp(V) fill up all of R^n? I mean, V union comp(V) = R^n.
    (12 votes)
    • microbit
      I think the confusion is that taking a union doesn't include taking all linear combinations of the things we have "united". Say we're in R^3 and take 3 separate vectors i = [1 0 0], j = [0 1 0], k = [0 0 1]. If I take the spans separately, span(i), span(j), span(k) form 3 separate subspaces (3 separate lines), each orthogonal to the other 2. Taking the union of any 2, Union( span(i), span(j) ), is just the union of 2 lines, not R^2, even though inside R^2 those lines are each other's orthogonal complements. To span the whole R^2 you could write Span( Union( span(i), span(j) ) ); that is not just 2 lines but all linear combinations of them, so the whole R^2. You could be more efficient by leaving out the initial spans: Span( Union(i, j) ) does the same thing. Please note that the inner Union(i, j) is just {i, j}, a set of 2 vectors. It's the same with the lines: the union of 2 lines does not yet make a new, higher subspace, and the union of i and j as 2 vectors does not make a new subspace either; you have to take their span to do that.
      To sum up the key idea: the union just takes a set and puts the things into it. What you were thinking of is Span( Union(V, Vcomp) ) = R^n, and that would be correct. So Sal's blob drawings are correct. I hope that made it clear and wasn't too circular :)
      (15 votes)
  • Zafar Shaikhli
    So basically, this video shows that V = (V^perp)^perp?
    (16 votes)
  • Zeyao Xia Liu
    I have the same question as user onceltuca, which was "Shouldn't the 'areas' around V and comp(V) fill up all of R^n? I mean, V union comp(V) = R^n." Because I'm still confused after reading the answers, I'm posting this (=
    What I understand so far:
    In the last video, it was shown that:
    basis for V + basis for Vperp = basis for R^n
    but V and Vperp are both subspaces (closed under addition & scalar multiplication), so:
    V = span(basis for V) & Vperp = span(basis for Vperp)
    & R^n = span(basis for R^n) = span(basis for V + basis for Vperp)
    does this then imply R^n = span(basis for V) + span(basis for Vperp)? Because if so, doesn't that show V union Vperp = R^n?
    If not, please can someone explain why not. Thanks.
    Sorry about the long convoluted question (=
    (5 votes)
    • Radosław Rusiniak
      Let me take the simplest example possible to show you why V union V perp IS NOT R^n :)
      Let n = 2, so our R^n is a plane.
      V = span ( [1, 0] ), so V is just the x-axis and all the points that are on it.
      Vperp = span ( [0, 1] ), so to Vperp belong all points on y-axis.
      Now if you take only basis for V you can only "achieve" points on the x-axis, like [0, 0], [1, 0], [123, 0], [-4/5, 0]. You are constrained to move only to right and left.
      Similarly with Vperp - only points on y-axis are "achievable", e.g. [0, 0], [0, 1], [0, 22], [0, -57]. Therefore you can only move up and down.
      V ∩ Vperp = {[0, 0]}, as it's the only point on both the x-axis and the y-axis, so it's the only point that belongs to both V and Vperp.
      V union Vperp = only the points on the axes: the x-axis from V and the y-axis from Vperp. You don't have [2, 3] there, because it's in neither V nor Vperp.
      But when you combine the span for V with the span for Vperp, "magic" can start to happen :) Now you can move in all four directions. For example, to get to [2, 3] you can take 2 times the [1, 0] vector and 3 times the [0, 1] vector. Now you have span([1, 0], [0, 1]), which is our whole space R^n, in this case the R^2 plane.
      (12 votes)
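The union-versus-span distinction in this answer can be checked directly in code. A minimal NumPy sketch (the point [2, 3] and the membership tests are illustrative choices, not anything from the video):

```python
import numpy as np

# In R^2, a point is in V = span([1, 0]) iff its y-component is 0,
# and in Vperp = span([0, 1]) iff its x-component is 0.
def in_V(p):
    return np.isclose(p[1], 0.0)

def in_Vperp(p):
    return np.isclose(p[0], 0.0)

p = np.array([2.0, 3.0])

# [2, 3] lies on neither axis, so it is NOT in the union V ∪ Vperp ...
print(in_V(p) or in_Vperp(p))   # False

# ... but it IS a linear combination of the two basis vectors:
# 2*[1, 0] + 3*[0, 1] = [2, 3], so it lies in span([1, 0], [0, 1]) = R^2.
combo = 2 * np.array([1.0, 0.0]) + 3 * np.array([0.0, 1.0])
print(np.allclose(combo, p))    # True
```

So the union of the two lines misses almost every point of the plane; it's the span of the union that fills R^2.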
  • Bryan
    Are there subspaces for which we cannot find an orthogonal complement?
    (3 votes)
    • Derek M.
      As a rather degenerate answer, technically yes because not every vector space is an inner product space in the first place (i.e. not every vector space has a notion of orthogonality).

      However, if a vector space has an inner product, then no: every subspace has an orthogonal complement. Such an orthogonal complement might not be computable, though, in the sense that you might not be able to find an explicit basis for it.

      This extreme case would only happen in a vector space that is not finite dimensional.

      This answer is a little complicated honestly so if you have questions let me know.

      Edit:
      To add one other thing that might be helpful:
      If one of the subspaces has an orthogonal complement, then all other subspaces do (the thing I said about computability still applies though), and

      If one of the subspaces doesn't have an orthogonal complement, then none of them do, and this would be the situation where the vector space does not have the notion of an inner product.
      (3 votes)
  • olavobm
    I understand that he proves both ways, but, in this particular case, isn't just one case enough?
    (2 votes)
    • Stefan James Dawydiak
      I think you need both parts, but I could be wrong.

      The first part shows that every element of V perp perp is in V, i.e., V perp perp as a blob is at most as big as V. The second part shows that every element of V is in V perp perp; as a blob, V perp perp must enclose at least V. Therefore, it must enclose exactly V. It's a little like the squeeze theorem.
      (4 votes)
  • Erwin Ro
    How do you know to assume v = x + w in the first place?
    (3 votes)
  • camnharrington
    How does Sal know that v, a member of V, can be written as the sum of a w in V^perp and an x in (V^perp)^perp? I watched the last video, but I don't see the reasoning here.
    (2 votes)
  • Noah Schwartz
    When Khan represented all of R^n as v + w (where v and w come from orthogonal complements), shouldn't all of R^n be av + w (where a is a scalar)? If v is a vector in the set V, v could still be restricted in length. The set W won't be, however, since W is the set of ALL vectors orthogonal to the set V; but if V is a set restricted in size, then won't R^n only be defined as av + w?

    Thanks!
    (2 votes)
  • RicardoBlank
    Following this logic, it looks like ((V^perp)^perp)^perp is actually V^perp. That is, for any odd number of perps you have V^perp, and for any even number of perps you have V.
    (2 votes)
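This collapsing of repeated perps can be verified numerically. A sketch under the assumption that a subspace is given by rows spanning it, with the complement computed as a null space via the SVD; the helper names `perp` and `projector` and the example plane are all illustrative:

```python
import numpy as np

def perp(B):
    """Rows of B span a subspace of R^n; return rows spanning its
    orthogonal complement (the null space of B), via the SVD."""
    _, s, Vt = np.linalg.svd(B, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:]               # these rows are orthogonal to every row of B

def projector(B):
    """Orthogonal projection matrix onto the row space of B;
    two subspaces are equal iff their projectors are equal."""
    A = B.T                        # columns span the subspace
    return A @ np.linalg.pinv(A)

V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])    # a plane in R^3

odd  = perp(perp(perp(V)))         # three perps ...
even = perp(perp(V))               # ... versus two

print(np.allclose(projector(odd),  projector(perp(V))))  # True: odd count gives V^perp
print(np.allclose(projector(even), projector(V)))        # True: even count gives V back
```

Comparing projection matrices, rather than the basis rows themselves, sidesteps the fact that the SVD may return a different (but equivalent) basis each time.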
  • emesdg
    How can any vector in R^3 be represented by just two vectors?
    (2 votes)
    • ktrain
      Hey emesdg, check out the last lecture, especially the comments; BrianLHouser explains how this works. Someone explained it to me using the walls of my room: if you take the floor and a wall, the two are orthogonal complements (that one point where they meet is the zero vector!). Anyway, if you were to combine the two bases of the wall and the floor, it feels intuitive to me that you could get anywhere in R^3.

      not a rigorous proof -- but helped me visualize it.
      (1 vote)

Video transcript

Let's say I have some subspace of R^n called V. Let me draw it like this. So that is R^n, and I have some subspace of it we'll call V right here. So that is my subspace V. We know that the orthogonal complement of V is equal to the set of all of the x's in R^n such that x dot v is equal to 0 for every v that is a member of our subspace. So the orthogonal complement of our subspace is going to be all of the vectors that are orthogonal to all of these vectors. And we've seen before that they only overlap in one vector that's a member of both: the zero vector. It's right there. Let's take the orthogonal complement. Let's say it's this set right here in pink, so that's the orthogonal complement. Fair enough. Now, what if we were to think about the orthogonal complement of the orthogonal complement? So, that's the orthogonal complement in pink, and we want the orthogonal complement of that. So this is going to be all of the x's -- let's just write it like this -- all of the x's that are members of R^n such that x dot w is equal to 0 for every w that is a member of the orthogonal complement of V. That's what that thing is saying. So it's all of the vectors in R^n that are orthogonal to everything here. Obviously, all of the things in V are going to be members of that, because these guys are orthogonal to everything in these guys. But maybe V is just a subset of the orthogonal complement of the orthogonal complement. So maybe this thing in blue right here looks like this. Maybe it's a slightly larger set than V. Maybe there are some vectors -- these things that I'm shading in blue -- that are orthogonal to the orthogonal complement of V but that are outside of V. We don't know that yet. We don't know whether this area right here exists. Or maybe the orthogonal complement of the orthogonal complement takes us back to V.
Maybe it's like taking a transpose twice, or an inverse function, where you just go back to your original subspace. Let's see if we can think about that a little bit better. Let's say that I have some vector x that is a member of the orthogonal complement of the orthogonal complement. Now, we saw in the last video that any vector in R^n can be represented as a sum of some vector in a subspace and some vector in that subspace's orthogonal complement. So we know that x can be represented as the sum of two vectors, one that's in V and one that's in the orthogonal complement of V. Let's call v the vector that's in V, and let's call w the vector that's in the orthogonal complement of V. Let me write it like this: x = v + w, where v is a member of the subspace V and the vector w is a member of the orthogonal complement of V. Right? So x is some member -- it could be some guy out here, it could be some guy over here -- of the orthogonal complement of the orthogonal complement, which is this whole area here, which V is a subset of, but we're not sure whether V equals that thing. But we say, look, anything that's in the orthogonal complement of your orthogonal complement is going to be a member of R^n, and anything in R^n can be represented as a sum of a vector in V and a vector in the orthogonal complement of V. So that's all I wrote right there. Now, what happens if I take the dot product of x with w? What is this going to be equal to? x is in the orthogonal complement of the orthogonal complement, and when you take the dot product of any vector in that with any vector in the orthogonal complement -- which this vector w is, right? it's a member of the orthogonal complement -- you're going to get 0 by definition. Anything in V perp perp is orthogonal to anything in V perp.
So, this thing is going to be equal to 0. But what's another way of writing x dot w? We could write it like this: it's the same thing as (v + w) dot w, which is the same thing as v dot w plus w dot w. Now, what is v dot w? v is a member of our original subspace, and if you take the dot product of anything in our original subspace with anything in its orthogonal complement, you're going to get 0. So this term right here is going to be 0, and you're just going to get this term, which is the same thing as the length of our vector w squared. Now, that has to equal 0. Remember, we just wrote x dot w, where x is a member of the orthogonal complement of your orthogonal complement. So when you dot that with anything in the orthogonal complement, that's got to be equal to 0. But if we write it the other way, as v plus w dotted with w, and distribute this w, we see that it's the same thing as the magnitude of w squared. So the magnitude of w squared, or the length of w squared, has got to be equal to 0, which tells us that w is the zero vector. That's the only vector in R^n whose length -- squared or not -- is 0. So what does that mean? That means that our original vector x is equal to v plus w, but w is just equal to 0. So that implies that our original vector x is equal to v, and v is a member of our subspace V. Right? So that tells us that x is a member of our subspace V. So we just showed that if something is a member of the orthogonal complement of the orthogonal complement, then that same vector has to be a member of the original subspace. So there is no such thing as something being in the orthogonal complement of the orthogonal complement and not being a member of our original subspace. All of this has to be inside of this right there. So there is no outside blue space like that. All of that is our original subspace, if you want to view it that way.
Now, as I suggested at the beginning of the video, maybe anything in our subspace is going to be a member of the orthogonal complement of our orthogonal complement. You can kind of reason that in your head, but let's use the same argument to be a little bit more rigorous about it. We just showed that if anything is in the orthogonal complement of the orthogonal complement, then it's going to be in the original subspace. Let's go the other way. Let's say that some vector v is in the original subspace, just like that. Let me draw another graph right here, because this might be useful. Let me draw all of R^n like that. Now, let me just draw the orthogonal complement first: so, V perp. And then you have the orthogonal complement of the orthogonal complement, which could be this set right here. Right? This is V perp. I haven't even drawn the subspace V. All I've shown is, I have some subspace here, which I happen to call V perp, and then I have the orthogonal complement of that subspace. So this means that anything in R^n can be represented as the sum of a vector that's here and a vector that's here. So -- let me do it in purple -- the vector v can be represented as the sum of the vector w and the vector x, where w is a member of the orthogonal complement of V, or V perp, and x is a member of its orthogonal complement. Notice, I could have called this set S, and then this would have been S and its orthogonal complement. And we learned that anything in R^n can be represented as the sum of something in a subspace and something in the subspace's orthogonal complement. So it doesn't matter that this set is somehow related to V: v can be represented as a sum of a vector here plus a vector there. Fair enough. Now, what happens if I dot v with w? I'm doing the exact same argument that I did before.
Well, if you take anything that's a member of our original subspace and you dot it with anything in its orthogonal complement, that's going to give us 0. What else is that going to be equal to? If we write v in this way, v dot w is the same thing as (w + x) dot w, and this is going to be equal to w dot w plus x dot w. And what's x dot w? x is in the orthogonal complement of your orthogonal complement, and w is in the orthogonal complement, so if you take the dot product, you're going to get 0. They're orthogonal to each other. So this is just equal to w dot w, or the length of w squared. And since this has to equal 0 -- we just have a bunch of equals here -- that tells us that, once again, the vector w has to be equal to 0. So v is equal to w plus x, but if w is equal to 0, then v is going to be equal to x. So we've just shown that if v is a member of the subspace V, then v is a member of the orthogonal complement of the orthogonal complement: v is equal to x, which is a member of the orthogonal complement of the orthogonal complement. So we've proven it both ways. Earlier in the video we proved that if x is a member of the orthogonal complement of the orthogonal complement, then x is a member of our subspace, and now we've proven the converse. So these two things are equivalent: anything that's in the subspace is a member of V perp perp, and anything in V perp perp is a member of our subspace. So our subspace and V perp perp are the same set. This equals this. And of course it overlaps with V perp, its orthogonal complement, only at the zero vector, right there.
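The decomposition the whole proof leans on -- any x in R^n splits as x = v + w with v in V and w in V perp -- can be sketched numerically. A minimal NumPy illustration, where the matrix A (whose columns span a hypothetical subspace V of R^3) and the vector x are arbitrary choices:

```python
import numpy as np

# Columns of A span a 2-dimensional subspace V of R^3 (illustrative choice).
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])

P = A @ np.linalg.pinv(A)          # orthogonal projector onto V

x = np.array([3.0, -1.0, 4.0])     # an arbitrary vector in R^3
v = P @ x                          # the piece of x that lies in V
w = x - v                          # the leftover piece, which lies in V perp

print(np.allclose(x, v + w))       # True: x = v + w
print(np.isclose(v @ w, 0.0))      # True: v and w are orthogonal
print(np.allclose(A.T @ w, 0.0))   # True: w is orthogonal to everything in V
```

If you pick x already inside V, the w component comes out as the zero vector, which is exactly the step in the proof where the length of w squared is forced to 0.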