Lesson 4: Subspaces and the basis for a subspace
Basis of a subspace
Understanding the definition of a basis of a subspace. Created by Sal Khan.
Want to join the conversation?
- I'm confused at 2:27. I thought in the last video it was said that a subspace had to contain the zero vector. Then he says that this subspace is linearly independent, and that you can only get zero if all c's are zero.
If the zero vector is in that subspace, though, couldn't every other c be zero and the c for the zero vector be anything, making them linearly dependent? What am I missing here?
- You're correct that all subspaces contain the zero vector. That does not mean that the linearly independent set of vectors that defines the subspace contains the zero vector. In fact it will not (unless it's what we call the trivial subspace, which is just the zero vector).
For example, suppose we have two vectors in R^n that are linearly independent. The zero vector is definitely not one of them, because any set of vectors that contains the zero vector is dependent. The subspace defined by those two vectors is their span, and the zero vector is contained in that subspace since we can set c1 and c2 to zero.
In summary, the vectors that define the subspace are not the subspace. The span of those vectors is the subspace.
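A quick NumPy sketch of this distinction (not part of the original discussion; the vectors v1 and v2 below are made up for illustration): a set that contains the zero vector is linearly dependent, yet the zero vector still belongs to the span of an independent set.
```python
# Sketch: zero vector in the span vs. zero vector in the generating set.
import numpy as np

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 3.0])
zero = np.zeros(3)

# v1 and v2 are linearly independent: the matrix with them as columns has rank 2.
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))        # 2

# Any set that includes the zero vector is dependent: rank stays below the vector count.
print(np.linalg.matrix_rank(np.column_stack([v1, v2, zero])))  # still 2, but 3 vectors

# The zero vector is nevertheless in the span of {v1, v2}: take c1 = c2 = 0.
print(np.allclose(0 * v1 + 0 * v2, zero))                      # True
```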
- Why are there no exercises in the Linear Algebra course? I have been learning with the Khan Academy math series, but the Linear Algebra course is difficult because each video is much longer and there are no exercises. What is the best resource to practice the concepts I learned here?
- At about 4:34, instead of saying {v1, v2, ..., vn} is the basis for V, can we also say {v2, v3, ..., vn, vs} is a basis for V? Because v1 can be obtained from vs - v2.
- Your basis is the minimum set of vectors that spans the subspace. So if you repeat one of the vectors (as vs is v1 + v2, it duplicates information already carried by v1 and v2), there is an excess of vectors.
It's like someone asking you what type of ingredients are needed to bake a cake and you say:
Butter, egg, sugar, flour, milk
vs
Butter, egg, egg, egg, egg, sugar, sugar, flour, milk
A basis asks for the minimum amount of information needed to answer the question adequately.
- Can something be a basis and still be linearly dependent, or is a basis always linearly independent?
- Basis vectors are always linearly independent; it is one of the requirements.
- So it seems to me, through all of these examples, that you could take essentially any two vectors, except possibly the zero vector, and they will span R^2. Is this right?
Also, could anyone give an example of two vectors that wouldn't span R^2 but would be linearly independent, if that makes sense.
- Two vectors that are linearly independent will, by definition, always span R2. The claim that "we can take almost any two vectors... they will span R2" isn't quite right: we can take any two vectors that are LINEARLY INDEPENDENT and they will span R2. Two zero vectors are not linearly independent. Consider one vector [1,0] and the zero vector as the other: set the linear combination equal to 0 and solve for the coefficients. You will clearly see that there is a trivial redundancy, and thus the pair won't span R2.
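A small Python sketch of this point (illustrative only; the vectors and the target point are made up): two independent vectors let you solve for the coefficients of any target in R2, while replacing one with the zero vector makes the system unsolvable for most targets.
```python
# Sketch: independent columns -> unique coefficients; a zero column -> singular system.
import numpy as np

A = np.column_stack([[1.0, 0.0], [1.0, 2.0]])    # two independent vectors as columns
target = np.array([5.0, -3.0])
print(np.linalg.solve(A, target))                # unique c1, c2 with c1*v1 + c2*v2 = target

B = np.column_stack([[1.0, 0.0], [0.0, 0.0]])    # second "vector" is the zero vector
try:
    np.linalg.solve(B, target)
except np.linalg.LinAlgError as e:
    print("singular matrix:", e)                 # no solution for most targets, so no span of R2
```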
- When saying span(S) = R2, is that the same thing as saying that S spans R2?
- Is there a video on how to find out which vectors are the "minimum vectors that span the subspace"? How do I determine which vectors to use and which not to use?
I think I understand the concept thoroughly from this video, but how do I actually find the basis if given a particular subspace?
The last 30 seconds of the video explain that adding a vector to the basis would cause the set of vectors to no longer be a basis. In a less obvious example than [1,0]^T and [7,0]^T, how do I determine which vectors are linearly dependent on the others and should be removed from the set to form a basis?
- Put your vectors into a matrix as the column vectors and put the matrix in reduced row echelon form. The free columns correspond to the redundant vectors. Depending on the order you put the vectors in, different ones can come out as redundant. For example, given the set of vectors [1, 0] and [2, 0], you could choose either one as the redundant vector, but since [1, 0] is a better vector to use for a basis, you should put it as the first column in the matrix, thereby choosing it as your preferred basis vector.
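Here is one way to carry out that recipe in code; this is a sketch using SymPy (not something shown in the video), with the answer's example vectors [1, 0] and [2, 0] plus a third vector for contrast.
```python
# Sketch: reduced row echelon form picks out pivot (basis) columns; the rest are redundant.
from sympy import Matrix

# Columns are the candidate vectors: [1, 0], [2, 0], [0, 1].
M = Matrix([[1, 2, 0],
            [0, 0, 1]])

rref_form, pivot_cols = M.rref()
print(rref_form)    # Matrix([[1, 2, 0], [0, 0, 1]])
print(pivot_cols)   # (0, 2): columns 0 and 2 form a basis; column 1 ([2, 0]) is redundant
```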
- When would there be a case where the set did not span R2 but is linearly independent?
- If you're working within Rn and you have 2 linearly independent vectors, their span will always be a 2D subspace of Rn. If n = 2, then the span will always be R2.
- Say we have two vectors {v0, v1} in Rn, where n > 2, that are linearly independent. What can we say about the dimension of the subspace spanned by {v0, v1}?
I know we can find the rank of the matrix [v0 v1] and tell. Let's say that the rank of the matrix is x. Does that mean it will span the complete Rx space, or some x-dimensional subspace? Then if x > 4 (say), can we say it also spans a 2-D subspace and a 3-D subspace? Really confused. Please help.
- If two vectors of ℝⁿ, v⃗₀ and v⃗₁, are linearly independent, then they are the basis of a subspace of 2 dimensions (a plane) inside ℝⁿ. This subspace can be mapped one-to-one to ℝ², but it is not directly ℝ². A matrix with rank x will include x linearly independent column vectors, and those can be used as a basis for an x-dimensional subspace of ℝⁿ (again, one that can be mapped one-to-one to ℝˣ, but that is not directly ℝˣ).
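A small NumPy check of that idea (the particular vectors are made up for illustration): two independent vectors in ℝ⁵ give a matrix of rank 2, so they span a 2-dimensional subspace of ℝ⁵, not ℝ² itself.
```python
# Sketch: the rank of [v0 v1] equals the dimension of the subspace the vectors span.
import numpy as np

v0 = np.array([1.0, 0.0, 2.0, 0.0, 1.0])
v1 = np.array([0.0, 1.0, 1.0, 3.0, 0.0])

A = np.column_stack([v0, v1])      # a 5x2 matrix with the vectors as columns
print(np.linalg.matrix_rank(A))    # 2 -> span{v0, v1} is a 2-dimensional subspace of R^5
```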
- Isn't another way to prove that a set is linearly independent just to prove that the set is NOT linearly dependent? Especially if it's easier and you can use the zero vector rather than arbitrary constants.
- Yes, for example by finding the determinant of the square matrix whose columns are the vectors: if det(A) = 0, the set is linearly dependent; if det(A) is any nonzero number, it is linearly independent.
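For n vectors in ℝⁿ you can put them as the columns of a square matrix and test the determinant. A sketch with NumPy, using the vectors from the video and the earlier [1, 0], [2, 0] example:
```python
# Sketch: determinant test for linear independence (square matrices only).
import numpy as np

S = np.array([[2.0, 7.0],
              [3.0, 0.0]])          # columns are [2, 3] and [7, 0] from the video
print(np.linalg.det(S))             # -21.0, nonzero -> linearly independent

D = np.array([[1.0, 2.0],
              [0.0, 0.0]])          # columns are [1, 0] and [2, 0]
print(np.linalg.det(D))             # 0.0 -> linearly dependent
```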
Video transcript
Let's say I have the subspace V. This is a subspace -- we learned all about subspaces in the last video -- and it's equal to the span of some set of vectors. And I showed in that video that the span of any set of vectors is a valid subspace. It's going to be the span of v1, v2, all the way to vn, so it's going to be n vectors. Now let me also say that all of these vectors, v1, v2, all the way to vn, are linearly independent.

Before I give you the punchline, let's review what exactly span meant. Span meant that this set, this subspace, represents all of the possible linear combinations of these vectors: c1 times v1 plus c2 times v2, all the way to cn times vn, for all of the possible c's in the real numbers. If you take all of those possibilities and put all of those vectors into a set, that is the span, and that's what we're defining the subspace V as.

Now, the definition of linear independence meant that the only solution to c1 v1 plus c2 v2 plus all the way to cn vn equaling the 0 vector -- maybe I should put a little vector sign up there -- is when all of these coefficients are equal to 0: c1 equals c2 equals all of them, all equal to 0. Or, a more common-sense way to think of it is that you can't represent any one of these vectors as a combination of the other vectors.
Now, if both of these conditions are true -- the span of this set of vectors is equal to this subspace (it creates, or spans, this subspace), and all of these vectors are linearly independent -- then we can say something about the set of vectors. Let's call this set of vectors S, where S is equal to v1, v2, all the way to vn. We can then say, and this is the punchline, that the set S is a basis for V. And this is the definition I wanted to make: if something is a basis for a subspace, that means that if you take the span of those vectors you can get to any of the vectors in that subspace, and that those vectors are linearly independent.

So there are a couple of ways to think about it. One is that a lot of different sets might span something. For example, if this set spans V, then so would -- let me add another vector. Let me define another set. Let me define set T to be all of set S -- v1, v2, all the way to vn -- but it also contains this other vector, which I'm going to call v sub s, the special vector. So it's going to be, essentially, the set S plus one more vector, where this vector I'm just saying is equal to v1 plus v2. So clearly, this is not a linearly independent set. But if I had asked you what the span of T is, the span of T is still going to be this subspace, V. I just have this extra vector in here that made it not linearly independent. This set is not linearly independent, so T is linearly dependent. So in this case, T is not a basis for V.
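(A quick numerical aside, not from the video: the particular vectors below are mine, chosen only to illustrate what just happened with T. Appending v_s = v1 + v2 leaves the span unchanged but makes the set linearly dependent.)
```python
# Sketch: adding v_s = v1 + v2 to an independent set keeps the span but breaks independence.
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
vs = v1 + v2                                   # the redundant "special" vector

S = np.column_stack([v1, v2])
T = np.column_stack([v1, v2, vs])

print(np.linalg.matrix_rank(S))                # 2: S is independent and spans a plane
print(np.linalg.matrix_rank(T))                # still 2: same span, but now 3 vectors,
                                               # so T is linearly dependent and not a basis
```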
And I showed you this example because the way my head thinks about a basis is that a basis is really the minimum set of vectors that I need. This isn't a formal definition, but I view a basis as the "minimum" -- I'll put it in quotes because I haven't defined that -- set of vectors that spans the space it's a basis of, that spans the subspace. So in this case, S is the minimum set of vectors. I'm not going to prove it just yet, but you can see that this set T right here does span the subspace, but it's clearly not the minimum set of vectors, because I could still remove that last vector and the span of what's left over would still span my subspace V. So that vector was redundant. In a basis, you have no redundancy: each one of these vectors is needed to be able to construct any of the vectors in the subspace V.
Let me do some examples. Let's say I define my set of vectors, and I'll deal in R2. Let's say I have the vector (2, 3), and let's say I have the other vector (7, 0). First of all, let's just think about the span of this set of vectors. What's the span of S? What's all of the linear combinations of this? Well, let's see if it's all of R2. If it's all of R2, that means we can always construct anything in R2 with a linear combination of these: c1 times (2, 3) plus c2 times (7, 0). If it is true that this spans all of R2, then we should always be able to find a c1 and a c2 to construct any point (x1, x2) in R2.

Let's see if we can show that. We get 2c1 plus 7c2 is equal to x1, and we get 3c1 plus 0c2 is equal to x2. If we take this second equation and divide both sides by 3, we get c1 is equal to x2 over 3. And then if we substitute that back into the first equation -- I'm just substituting c1 in there -- we get 2/3 x2 plus 7c2 is equal to x1. Then what can we do? We can subtract the 2/3 x2 from both sides, so we get 7c2 is equal to x1 minus 2/3 x2. Divide both sides by 7 and you get c2: c2 is equal to x1 over 7 minus 2/21 x2.

So you give me any x1 and any x2, where x1 and x2 are real numbers. I take your x2 divided by 3 and I'll give you your c1, and I'll take your x1 divided by 7 and subtract 2/21 times your x2, and I'll get you your c2. This will never break: there's no division by the variables, so you don't have to worry about dividing by 0. These two formulas will always work. So you give me any x1 and any x2, and I can always find you a c1 and a c2 -- which is essentially finding a linear combination that will equal your vector. So the span of S is R2.
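(The two formulas above are easy to sanity-check numerically. A short sketch in NumPy, not from the video; the target point is arbitrary.)
```python
# Sketch: verify that c1 = x2/3 and c2 = x1/7 - (2/21)*x2 reconstruct any (x1, x2)
# as a combination of (2, 3) and (7, 0).
import numpy as np

v1 = np.array([2.0, 3.0])
v2 = np.array([7.0, 0.0])
x = np.array([-4.0, 9.0])                  # any target point in R2

c1 = x[1] / 3
c2 = x[0] / 7 - (2.0 / 21.0) * x[1]
print(np.allclose(c1 * v1 + c2 * v2, x))   # True: the combination lands exactly on x
```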
Now, the second question: are these two vectors linearly independent? Linear independence means that the only solution to the equation c1 times the first vector plus c2 times the second vector equaling the 0 vector is when both of those coefficients equal 0. So let's see if that's true. We've already solved for it: in this case, x1 is equal to 0 and x2 is equal to 0, since this is just the special case where I'm making the target the 0 vector. If I want to get the 0 vector, c1 is equal to 0/3, so c1 must be equal to 0. And c2 is equal to 0/7 minus 2/21 times 0, so c2 must also be equal to 0. So the only solution was setting both of these coefficients equal to 0, and S is also a linearly independent set. It spans R2 and it's linearly independent, so we can say definitively that the set of vectors S is a basis for R2.
Now, is this the only basis for R2? Well, I could write a trivially simple set of vectors. Let me call it T. If I define T to be the set (1, 0) and (0, 1), does this span R2? Let's say I want to get to some (x1, x2). How can I construct that out of these two vectors? Well, if I just do x1 times (1, 0) plus x2 times (0, 1), that'll always give me (x1, x2). So this definitely does span R2. Is it linearly independent? If you wanted to make this combination equal to the 0 vector, then this coefficient has to be 0 and this one has to be 0, and that's kind of obvious: there's no way that you could get one of these vectors as a multiple of the other one. There's no way you could get a 1 here by multiplying the other vector by anything, and vice versa. So it's also linearly independent.

And the whole reason why I showed you this is that this set T spans R2 and is also linearly independent, so T is also a basis for R2. I wanted to show you this because R2 is a valid subspace of itself -- you can verify that -- and if I have a subspace, it doesn't have just one basis. It can have multiple bases. In fact, it normally has infinitely many bases. So in this case, S is a valid basis and T is also a valid basis for R2.
And actually, just so you know what T is here, this is called the standard basis. This is what you're used to dealing with in regular calculus or physics class. If you remember from physics class, this is the unit vector i and this is the unit vector j, and it's the standard basis for two-dimensional Cartesian coordinates.

What's useful about a basis -- and it's not just true of the standard basis -- is that you can represent any vector in your subspace by some unique combination of the vectors in your basis. So let me show you that.
So let's say that the set v1, v2, all the way to vn is a basis for -- I don't know -- just some subspace U. So U is a subspace. That means that these vectors are linearly independent, and it also means that the span of these vectors, all of the linear combinations of these vectors, will get you all of the different members of U.

Now what I want to show you is that each member of U can only be defined by a unique combination of these vectors. Let me be clear about that. Let's say my vector a is a member of our subspace U. That means that a can be represented by some linear combination of these vectors, because these vectors span U. So we can represent our vector a as c1 times v1 plus c2 times v2, all the way to cn times vn. Now I want to show you that this is a unique combination, and to show that, I'm going to prove it by contradiction. Let's say that there's another combination: say I could also represent a as d1 times v1 plus d2 times v2, all the way to dn times vn.

Now, what happens if I subtract a from a? I'm going to get the 0 vector: a minus a is clearly the 0 vector. And if I subtract this side from that side, what do we get? We get c1 minus d1 times v1, plus c2 minus d2 times v2, all the way to cn minus dn times vn -- I'm at the point on my board where it starts to malfunction, so let me rewrite it over here where it's less likely to mess up. The 0 vector is equal to c1 minus d1 times v1, plus all the way up to cn minus dn times vn. I just subtracted the vector from itself.
Now, I told you that these vectors are a basis. Saying they're a basis tells you two things: the span of these vectors makes the subspace -- the span of these vectors is the subspace -- and these vectors are linearly independent. So if they're linearly independent, then the only solution to this equation -- and this is just a constant times v1, plus another constant times v2, all the way to a constant times vn -- is for each of those constants to equal 0. So all of those constants have to be equal to 0: this has to equal 0, this has to equal 0, and so on. That was the definition of linear independence, and we know that this is a linearly independent set. So if all of those constants are equal to 0, then c1 is equal to d1, c2 is equal to d2, all the way to cn is equal to dn. So the fact that the set is linearly independent forces each of these constants to equal its counterpart. And that's our contradiction: I assumed the two combinations were different, but linear independence forced them to be the same. So if you have a basis for some subspace, any member of that subspace can be determined by a unique combination of those vectors.
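(Here's that uniqueness in computational terms; this sketch is mine, not from the video, and the member a of R2 is arbitrary. Because the basis vectors form an invertible matrix, the coordinate solve has exactly one answer.)
```python
# Sketch: coordinates with respect to a basis are unique; B c = a has one solution
# because B (basis vectors as columns) is invertible.
import numpy as np

B = np.column_stack([[2.0, 3.0], [7.0, 0.0]])   # the basis S from the video, as columns
a = np.array([1.0, 5.0])                        # some member of R2

c = np.linalg.solve(B, a)                       # the unique coefficients
print(c)
print(np.allclose(B @ c, a))                    # True, and no other c works, since B is invertible
```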
And just to hit the point home -- and I just want to backtrack a bit -- I told you that this set S was a basis for R2. If I just added another vector to it, say the vector (1, 0), would it still be a basis for R2? Well, no. It clearly will continue to span R2, but this new vector is redundant. This vector is in R2, and I already told you that the two vectors (2, 3) and (7, 0) alone span R2 -- anything in R2 can be represented by a linear combination of those two. This new vector is clearly in R2, so it can be represented by a linear combination of the other two. Therefore, this is not a linearly independent set; it is linearly dependent. And because it's linearly dependent, I have redundant information here, and this would no longer be a basis. So in order for a set to be a basis, I have to use the minimum set of vectors that can span, or the most efficient set of vectors that can span -- in this case, R2.
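(A final sketch, not from the video, tying this back to the example: appending (1, 0) to S keeps the span equal to R2, but three vectors with rank 2 are dependent, so the enlarged set is no longer a basis.)
```python
# Sketch: three vectors in R2 can never be linearly independent, so the enlarged S
# still spans R2 but is no longer a basis.
import numpy as np

enlarged_S = np.column_stack([[2.0, 3.0], [7.0, 0.0], [1.0, 0.0]])
print(np.linalg.matrix_rank(enlarged_S))   # 2: the span is still all of R2
print(enlarged_S.shape[1])                 # 3 vectors > rank 2 -> linearly dependent, not a basis
```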