
Lesson 1: Orthogonal complements

# Representing vectors in Rn using subspace members

Showing that any member of Rn can be represented as a unique sum of a vector in subspace V and a vector in the orthogonal complement of V. Created by Sal Khan.

## Want to join the conversation?

• At Sal says that 'in a previous video' he showed that a subspace with dimension n, together with n linearly independent vectors that are members of that subspace, implies the n vectors form a basis. Where is this video?
• The necessary information is likely in one of the videos on basis sets, in the Linear Algebra playlists. Whether he proved that exact result or not is another matter, but he covered enough information to infer it.

If we take any n linearly independent vectors, then we can define an n-dimensional subspace with those vectors as the basis. If those vectors are taken from a particular n-dimensional subspace, then any linear combination of those vectors must be a member of the same subspace. This means the basis defined by those vectors is a basis for the subspace those vectors were chosen from. (By definition, any basis of an n-dimensional subspace must have n vectors.)
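The argument above can be checked numerically: if a matrix whose columns are the n chosen vectors has rank n, the vectors are linearly independent and therefore form a basis. A minimal numpy sketch, using three hypothetical vectors in R^3 (the specific vectors are just for illustration):

```python
import numpy as np

# Hypothetical example: three vectors in R^3.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])

# Stack them as the columns of a 3x3 matrix.
A = np.column_stack([v1, v2, v3])

# Rank n means the columns are linearly independent, and n linearly
# independent vectors in an n-dimensional space form a basis.
rank = np.linalg.matrix_rank(A)
print(rank)  # 3, so {v1, v2, v3} is a basis for R^3
```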
• Am i the only one who has to watch a video multiple times to actually get it?
• Re @ :

In a much earlier video Sal showed that every basis for a subspace V has the same number of elements.

This doesn't prove that if we have some n-dimensional subspace (R^n), and n linearly independent vectors from that subspace, then they are a basis for R^n, does it (although I feel certain it's true)? I came up with the following proof (has he made this unnecessary in an earlier video?):

Suppose there are n linearly independent vectors {v_i}, i = 1, ..., n, in R^n, and they aren't a basis for R^n. Then there is a vector u in R^n but not in span({v_i}), which means S = {v_i} U {u} is a set of n+1 linearly independent vectors in R^n. But since the rref of A = [v1 v2 ... vn u] (an n x (n+1) matrix) has at least 1 free variable, S can't be a linearly independent set. Contradiction.
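The free-variable step in this proof can be illustrated numerically: an n x (n+1) matrix can have rank at most n, so its n+1 columns must be linearly dependent. A small numpy sketch with hypothetical vectors in R^3:

```python
import numpy as np

# Sketch of the contradiction: any 4 vectors in R^3 are linearly dependent,
# because a 3x4 matrix has rank at most 3, so its rref has a free variable.
A = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)

# rank < number of columns means the columns cannot be independent.
print(rank, A.shape[1])  # 3 4
print(rank < A.shape[1])  # True
```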
• We have earlier proven, back when we defined the dimension, that the number of basis vectors for a given subspace is constant, regardless of the choice of bases.
We know that we can represent Rn as having n standard orthonormal basis vectors. Therefore, all basis sets of Rn must have n basis vectors. By definition of the dimension of a subspace, a basis set with n elements is n-dimensional. Therefore, the subspace found in the video is n-dimensional.

Intuitively, an n-dimensional subspace in Rn must be all of Rn. What you have done here is prove mathematically that an n-dimensional subspace in Rn does indeed equal Rn.
• What's the dimension of the span of the zero vector? If it's one, then dim(R^n) + dim(span(0)) = n + 1 (?). (Isn't the zero vector in R^n orthogonal to every vector in R^n, and therefore the orthogonal complement of R^n within R^n?)
(But the zero vector of R^n is in both V and V^perp, so then V^perp isn't really a complement of V, because V and V^perp intersect; or else it's not a vector space, because it doesn't contain the zero vector.)
• dzxterity covered the orthogonality quite well. I'll look at the dimensionality of span(0).

If we treated 0 as a basis vector, and took dimension to be simply the number of basis vectors, then we would get dim(span(0)) = 1, which, as you observed, contradicts the Rank-Nullity Theorem.
This suggests there must be a special case for the zero vector. In fact, 0 can't be a basis vector at all, even as the only vector in a set: on its own it does not satisfy the condition for linear independence, since c0 = 0 is true for any c, not just c = 0. The subspace {0} has the empty set as its basis.
Logically, then, dim(span(0)) = 0. This satisfies both common sense and the Rank-Nullity Theorem.
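Numerically this matches the rank computation: the matrix whose only column is the zero vector has rank 0. A one-line numpy check:

```python
import numpy as np

# The span of the zero vector is {0}; its dimension equals the rank of
# a matrix whose only column is the zero vector.
zero_column = np.zeros((3, 1))
dim = np.linalg.matrix_rank(zero_column)
print(dim)  # 0
```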
• Would this imply that R^n is a direct sum of V and V^perp?
• It implies that any element of R^n can be expressed as a (unique) sum of an element of V and an element of V^perp.
We haven't defined taking the sum of two subspaces as a whole.
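The unique decomposition can be computed explicitly: project x onto V and the remainder lands in V^perp. A minimal numpy sketch, assuming a hypothetical V in R^3 spanned by two orthonormal vectors u1, u2 (the projection-onto-a-subspace formula is the one covered earlier in the course):

```python
import numpy as np

# Hypothetical subspace V of R^3 spanned by orthonormal u1, u2 (the xy-plane).
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
x = np.array([3.0, 4.0, 5.0])

# Component of x in V (projection), and the remainder in V-perp.
v = (x @ u1) * u1 + (x @ u2) * u2
w = x - v

# w is orthogonal to every basis vector of V, and v + w recovers x.
print(v, w)  # [3. 4. 0.] [0. 0. 5.]
print(np.allclose(v + w, x))  # True
```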
• I believe the proof Sal made at the beginning, showing that the intersection of a subspace with its orthogonal complement is {0}, is incomplete. If you assume that x is in both V and V^perp, then you have just imposed the condition that the dimensions of V and V^perp are equal; otherwise the dot product is not defined. From this point the proof only shows that when V and V^perp have equal dimensions, the intersection must be the set {0}. To show the more general result, you should assume that x is in the intersection of V and V^perp. Then you can say x dot x = 0 no matter what the dimensions of V and V^perp are.
• If I understand correctly, V (of dimension k) and V^perp (of dimension n-k) are both defined within R^n, i.e., they both consist of n-dimensional vectors, so there is no problem with taking their dot product.

Imagine we have a 3-dimensional space R^3, in which the subspace V is a plane and the subspace V^perp is a line orthogonal to that plane. Both the plane and the line are defined with 3-dimensional vectors even though V only has a "dimension" of 2 (it's a plane), and V^perp only a "dimension" of 1 (it's a line).
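The plane-and-line picture can be made concrete: a member of the plane z = 0 and a member of the z-axis are both ordinary 3-component vectors, so the dot product is defined (and zero). A small numpy sketch with hypothetical vectors:

```python
import numpy as np

# Both subspaces live inside R^3, so their members are 3-component
# vectors and the dot product between them is always defined.
plane_vec = np.array([2.0, 3.0, 0.0])  # a member of the plane V (z = 0)
line_vec = np.array([0.0, 0.0, 7.0])   # a member of the line V-perp (z-axis)

d = plane_vec @ line_vec
print(d)  # 0.0 -- orthogonal, yet both are vectors in R^3
```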
• At , Sal says that the combined basis vectors of V and V^perp should be a basis for R^n. What I am confused about is that there could be a combination of the basis vectors of V and V^perp such that a vector in V equals negative one times a vector in V^perp with nonzero coefficients, and in that case the claim that the combined basis vectors of V and V^perp form a basis for R^n would not be true.

Am I missing anything?
• I might have to look through the videos again, but unless it wasn't shown on the site, can you show where it says V = -1 times V^perp?
• I think a fact that would help people understand this proof is that two nonzero orthogonal vectors are automatically linearly independent, meaning that you cannot write one orthogonal vector as a linear combination of the others. The pivot vectors are all orthogonal to each other.
This implies that the basis vectors from two orthogonal subspaces are all linearly independent from each other, and can form a basis for R^n.
• Could we use the following example:

i and j are the unit basis vectors.
Each spans a valid subspace of R2 (closed under addition and scaling).
They are orthogonal to each other.
But as we know, {i, j} spans and forms a basis for R2.

Isn't this a summary of this video?
• What I think the video also wants to explore is that taking the null space is like saying: find all possible vectors in R^n that are orthogonal to the row space, which amounts to going after the linearly independent vectors of R^n that are not represented by a given row space. In the video, the row space is V and the null space is its orthogonal complement.
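The row space / null space orthogonality is easy to check by hand for a tiny matrix. A numpy sketch with a hand-picked hypothetical example (A = [1 2], whose null space is spanned by [2, -1], since A times that vector is 0):

```python
import numpy as np

# Hand-picked example: for A = [1 2], the null space is spanned by [2, -1].
row = np.array([1.0, 2.0])       # the single row of A, spanning the row space
null_vec = np.array([2.0, -1.0]) # spans the null space, since A @ null_vec = 0

d = row @ null_vec
print(d)  # 0.0 -- every null space vector is orthogonal to every row
```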
• why is it so long