Lesson 1: Orthogonal complements

# dim(V) + dim(orthogonal complement of V) = n

Showing that if V is a subspace of Rn, then dim(V) + dim(V's orthogonal complement) = n. Created by Sal Khan.
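As a quick numerical sketch of the video's claim (the matrix `A` below is just an illustrative example, not one from the video): take a basis for V as the columns of an n x k matrix A, then read a basis for V's orthogonal complement, which is the null space of A transpose, off the SVD.

```python
import numpy as np

n, k = 5, 3
# Columns of A are linearly independent, so they form a basis for a subspace V of R^5
A = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 2.],
              [0., 1., 1.]])

dim_V = np.linalg.matrix_rank(A)          # dimension of V = rank of A

# V-perp is the null space of A^T; a basis for it comes from the full SVD of A^T
U, s, Vt = np.linalg.svd(A.T)             # A^T is k x n, so Vt is n x n
rank = np.sum(s > 1e-10)
perp_basis = Vt[rank:].T                  # columns span null(A^T) = V-perp
dim_V_perp = perp_basis.shape[1]

print(dim_V + dim_V_perp)                      # 5, i.e. n
print(np.allclose(A.T @ perp_basis, 0))        # True: V-perp vectors are orthogonal to V
```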

## Want to join the conversation?

• If V is a basis, doesn't that imply that its columns are linearly independent, and that the dimension of V perp is 0, since dim(A) = dim(A transpose) (so dim nul A = 0 = dim nul A transpose)?
• I think where you're confused is that V is not a basis. V is a subspace (see beginning of video). Then, Sal defined k vectors which form a basis for V.

Also, V doesn't have columns because it's not a matrix, it's a subspace. To be fair, you could represent it as a matrix, but if k > 0, as in the example, it would have an infinite number of columns (all the linear combinations of the basis vectors v1 to vk).
• Is there such a thing as a 3-d matrix? Like a box of numbers in three dimensions?
I feel like that should exist for some reason... a n x m x k matrix?
• Tensors can be seen as a generalization of matrices to higher dimensions (they are more complicated than that, but it's a starting point), so you could argue that a rank 3 tensor could be seen as a 3D matrix.

But other than that, matrices are always a 2D array of numbers.
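In NumPy terms, such a "box of numbers" is simply a 3-D array, and each entry needs three indices (the shape below is an arbitrary example):

```python
import numpy as np

# An n x m x k "box of numbers" is a 3-D array (an order-3 tensor), here 2 x 3 x 4
T = np.arange(24).reshape(2, 3, 4)

print(T.ndim)       # 3 axes instead of a matrix's 2
print(T.shape)      # (2, 3, 4)
print(T[1, 2, 3])   # a single entry requires three indices
```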
• Isn't this the rank and nullity theorem?
• The result is essentially the rank-nullity theorem, which tells us that for an m by n matrix A, rank(A) + nullity(A) = n. Sal started off with an n by k matrix A but ended up with the equation rank(A transpose) + nullity(A transpose) = n. Notice that A transpose is a k by n matrix, so if we set B = A transpose, we recover the standard form of the rank-nullity theorem: rank(B) + nullity(B) = n.
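The renaming described above can be checked numerically (the matrix here is a made-up example): start with an n x k matrix A, set B = A transpose, and confirm rank(B) + nullity(B) = n.

```python
import numpy as np

n, k = 4, 2
A = np.array([[1., 0.],
              [2., 1.],
              [0., 3.],
              [1., 1.]])          # n x k; columns span V

B = A.T                            # k x n, the matrix in the video's final equation
rank = np.linalg.matrix_rank(B)

# nullity = number of columns of B minus the count of nonzero singular values
s = np.linalg.svd(B, compute_uv=False)
nullity = B.shape[1] - np.sum(s > 1e-10)

print(rank, nullity, rank + nullity)  # rank + nullity equals n = 4
```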
• why do we have to use the column space of V to represent V instead of using the row space?
• He starts with V and then wants to talk about a basis for it, so he declares the vectors that make up the basis. I'm not sure there is any more special reason than that it is more common to use column vectors.

If he declared them as row vectors he would get to the same conclusion via the rowspace of V being the orthogonal complement of the Nullspace N(V).
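The row-space version of the fact mentioned above can also be verified numerically (the matrix is an illustrative example): every row of A is orthogonal to every vector in the null space of A.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 1., 1.]])       # rows span the row space of A

# A basis for null(A) from the full SVD: rows of Vt past the rank
U, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
null_basis = Vt[rank:].T           # columns span null(A)

# Row space and null space are orthogonal complements, so these products vanish
print(np.allclose(A @ null_basis, 0))  # True
```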
• rename this as rank-nullity theorem to make it easier to find
• When you are indicating the number of rows and columns in a matrix, you usually choose from k,m, and n. In a square matrix, obviously, you use the same letter for both rows and columns.
Is there some rationale as to when you use an n x k matrix as opposed to an n x m matrix?
In other words, why did you choose k? Does it imply a different relationship to n than m does?
I hope I explained what confuses me clearly. If you understand what I am trying to say, can you help me word it better before you answer it?
Thanks.
• Long answer: I think he chose k because he wanted to concentrate on the vectors (columns) and show that their number doesn't have to equal n, but it really doesn't matter that much. Most notation in math is the way it is because someone wrote it that way and everyone else started to follow, so as not to confuse people with different notation. In your case, the less commonly used k shows up and you become a little confused, or at least curious :)
• What is n? Is it the number of rows of a matrix formed from the column vectors of V, or the number of column vectors of V?
• `n` is the dimension of the space in which the vectors live: `ℝⁿ`
• If A is visualized in n-dimensional space, are the dimensions that A cannot reach null(A-transpose)?
Is that what null(A-transpose) really is?
rref(A) shows what A spans; why does A have to be transposed to find what it cannot span?
• From what I understood, null(A-trans) just represents the part of the overall vector that includes the free variables (a set of redundant values), hence finishing the full "k" length of a vector (along with the pivot variables that are present).
• I am having a really hard time with this.
• Why don't we say that the null space of A equals the orthogonal complement of the column space?
I mean, the orthogonal complement is the set of all x's that satisfy x1v1 + x2v2 + ... + xnvn = 0, where v1, v2, ..., vn are the columns of A?
I am really confused.
• Let A be an m x n matrix. Null space vectors live in R^n, while vectors in the column space live in R^m.
Vectors in the orthogonal complement of the column space still live in R^m. Unless m = n, there is no way to compare vectors in R^n with vectors in R^m.

For example, there is no notion of adding a triple (1, 0, 2) to the pair
(5, -6), or asking how we could compare the two vectors.
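NumPy makes the same point concrete: attempting to add the triple and the pair from the example above simply fails.

```python
import numpy as np

u = np.array([1, 0, 2])   # a triple: lives in R^3
v = np.array([5, -6])     # a pair: lives in R^2

try:
    u + v                  # no meaningful sum exists across different spaces
    compatible = True
except ValueError:
    compatible = False

print(compatible)  # False: an R^3 vector cannot be added to an R^2 vector
```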