

# Basis of a subspace

Understanding the definition of a basis of a subspace. Created by Sal Khan.

## Want to join the conversation?

• I'm confused at . I thought the last video said that a subspace has to contain the zero vector. Then he says that this subspace is linearly independent, and that you can only get zero if all the c's are zero.

If the zero vector is in that subspace, though, couldn't every c be zero while the c for the zero vector is anything, making the set linearly dependent? What am I missing here?

You're correct that all subspaces contain the zero vector. That does not mean that the linearly independent set of vectors that defines the subspace contains the zero vector. In fact it will not (unless it's what we call the trivial subspace, which is just the zero vector).

For example, suppose we have two vectors in R^n that are linearly independent. The zero vector is definitely not one of them, because any set of vectors that contains the zero vector is dependent. The subspace defined by those two vectors is their span, and the zero vector is contained in that subspace since we can set c1 and c2 to zero.

In summary, the vectors that define the subspace are not the subspace. The span of those vectors is the subspace.
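A minimal Python sketch of this point (the specific vectors and function name are my own choices, not from the video): the basis vectors are nonzero, yet the zero vector still lives in their span.

```python
# Two linearly independent vectors in R^3 (hypothetical example).
v1 = [1, 0, 2]
v2 = [0, 1, 3]

def linear_combination(c1, c2, a, b):
    """Return the vector c1*a + c2*b, computed component-wise."""
    return [c1 * x + c2 * y for x, y in zip(a, b)]

# Neither v1 nor v2 is the zero vector, yet the zero vector is in
# span{v1, v2}: just choose c1 = c2 = 0.
print(linear_combination(0, 0, v1, v2))  # [0, 0, 0]
```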
• Why are there no exercises in the Linear Algebra course? I have been learning at Khan Academy Math series but the Linear Algebra course is difficult because each video is much longer and there are no exercises. What is the best practice resource to practice the concept I learned here? • At about , instead of saying {v1,v2...vn} is the basis for V, can we also say {v2,v3...vn,vs} is the basis for V? because v1 can be obtained by vs-v2 •  Your basis is the minimum set of vectors that spans the subspace. So if you repeat one of the vectors (as vs is v1-v2, thus repeating v1 and v2), there is an excess of vectors.

It's like someone asking what ingredients are needed to bake a cake and you say:

butter, egg, sugar, flour, milk

versus

butter, egg, egg, egg, egg, sugar, sugar, flour, milk

• Can something be a basis and still be linearly dependent, or is a basis always linearly independent?

• It seems to me from all of these examples that you could take essentially any two vectors, except perhaps the zero vector, and they will span R^2. Is this right? Also, could anyone give an example of two vectors that are linearly independent but would not span R^2, if that makes sense?

Two vectors in R^2 that are linearly independent will, by definition, always span R^2. The claim that "we can take almost any two vectors... they will span R^2" is incorrect: we can take any two vectors that are LINEARLY INDEPENDENT, and they will span R^2. Any set that contains the zero vector is not linearly independent. Consider one vector being [1, 0] and the other being the zero vector: set the linear combination equal to 0 and solve for the coefficients. You will see that a nonzero coefficient on the zero vector still gives 0, so the pair is dependent and won't span R^2.
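The independence test described above can be sketched in plain Python for the R^2 case (the function names are my own, not from the answer): two vectors in R^2 are linearly independent, and hence span R^2, exactly when the determinant of the 2x2 matrix they form is nonzero.

```python
def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return u[0] * v[1] - u[1] * v[0]

def independent_in_r2(u, v):
    """In R^2, two vectors are linearly independent (and therefore
    span R^2) exactly when this determinant is nonzero."""
    return det2(u, v) != 0

print(independent_in_r2([1, 0], [0, 1]))  # True
print(independent_in_r2([1, 0], [0, 0]))  # False: includes the zero vector
print(independent_in_r2([1, 0], [7, 0]))  # False: parallel vectors
```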
• When saying span(S) = R^2, is that the same thing as saying that S spans R^2?

• Is there a video on how to find which vectors are the "minimum vectors that span the subspace"? How do I determine which vectors to use and which not to use? I think I understand the concept thoroughly from this video, but how do I actually find a basis if given a particular subspace? The last 30 seconds of the video explain that adding a vector to the basis would cause the set of vectors to no longer be a basis. In a less obvious example than [1,0]^T and [7,0]^T, how do I determine which vectors are linearly dependent on the others and should be removed from the set to form a basis?

Put your vectors into a matrix as the column vectors, then put the matrix in reduced row echelon form. The free (non-pivot) columns correspond to the redundant vectors. Depending on the order you put the vectors in, different ones can come out as redundant. For example, given the set of vectors [1, 0] and [2, 0], you could choose either one as the redundant vector, but since [1, 0] is a more convenient basis vector, put it as the first column in the matrix, thereby choosing it as your preferred basis vector.
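The row-reduction procedure in the answer above can be sketched in plain Python (the function name and the use of exact rational arithmetic are my own choices, not from the answer). It returns the indices of the pivot columns, whose corresponding vectors form a basis for the span; the remaining columns are the redundant vectors.

```python
from fractions import Fraction

def rref_pivot_columns(columns):
    """Row-reduce the matrix whose columns are the given vectors and
    return the indices of the pivot columns (a basis for the span)."""
    if not columns:
        return []
    rows, cols = len(columns[0]), len(columns)
    # Build the matrix row by row, using exact rational arithmetic
    # to avoid floating-point round-off.
    m = [[Fraction(columns[j][i]) for j in range(cols)] for i in range(rows)]
    pivots = []
    r = 0
    for c in range(cols):
        # Find a row at or below r with a nonzero entry in column c.
        pivot_row = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot_row is None:
            continue  # free column: this vector is redundant
        m[r], m[pivot_row] = m[pivot_row], m[r]
        # Scale the pivot row to make the pivot 1, then clear the column.
        scale = m[r][c]
        m[r] = [x / scale for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                factor = m[i][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# [1,0] and [2,0] are dependent; [0,1] adds a new direction,
# so columns 0 and 2 form a basis.
print(rref_pivot_columns([[1, 0], [2, 0], [0, 1]]))  # [0, 2]
```

Putting [1, 0] first, as the answer suggests, makes it the pivot column and hence the preferred basis vector; reversing the first two columns would make [2, 0] the pivot instead.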
• When would there be a case where the set does not span R^2 but is linearly independent?

• Suppose we have two vectors {v0, v1} in R^n, where n > 2, that are linearly independent. What can we say about the dimension of the subspace spanned by {v0, v1}? I know we can find the rank of the matrix [v0 v1]. Let's say that rank is x. Does that mean the set spans the complete R^x space, or some x-dimensional subspace? And if x > 4 (say), can we also say it spans a 2-D subspace and a 3-D subspace? Really confused, please help.

If two vectors of ℝⁿ, `v⃗₀` and `v⃗₁`, are linearly independent, then they are a basis of a 2-dimensional subspace (a plane) inside ℝⁿ. This subspace can be mapped one-to-one to ℝ², but it is not literally ℝ².
A matrix with rank `x` will include `x` linearly independent column vectors, and those can be used as a basis for a subspace of ℝⁿ of dimension `x` (again, one that can be mapped to ℝˣ one-to-one, but that is not literally ℝˣ).