
# Showing that the candidate basis does span C(A)

Showing that just the columns of A associated with the pivot columns of rref(A) do indeed span C(A). Created by Sal Khan.

## Want to join the conversation?

• Any sample exercises to try out? There are questions in the calculus section; not sure if there are any here. • Yeah, not sure why the Linear Algebra section doesn't get practice problems when all the other sections have them! WE NEED SOME PRACTICE PROBLEMS, SAL!
• At and , didn't Sal mean a1, a2 and a4 rather than a1, a2 and aN? • I don't understand how this is a thorough proof that the pivot columns span all of the column space. In the proof, he sets the free variables to -1 and 0. Doesn't this mean the result is only valid in the special case where the free variables are -1 and 0? • Not exactly. x3 and x5 being free variables just means that the equation x1a1 + x2a2 + x3a3 + x4a4 + x5a5 = 0 holds true for any value of x3 and x5, since the pivot entries (x1, x2, and x4) will change values depending on the values of x3 and x5 to make the equation still hold true. The relationship among the actual columns DOES NOT change just because you pick different values for the free variables. To make this more clear, consider the smaller example x1a1 + x2a2 + x3a3 = 0, where x3 is a free variable and the a's are the column vectors of some matrix A. Let x1 = 2x3 and x2 = -3x3. Obviously in this case, the pivot entries are x1 and x2 and the free variable is x3. We'll now consider two different values of x3 and see that the relationship among the columns remains the same.

Before doing this, we rewrite x1a1 + x2a2 + x3a3 = 0 by rewriting x1 and x2 in terms of x3:
2x3a1 - 3x3a2 + x3a3 = 0

Scenario 1: x3 = -1
2(-1)a1 - 3(-1)a2 + (-1)a3 = 0
-2a1 + 3a2 - a3 = 0
a3 = -2a1 + 3a2

Scenario 2: x3 = 2
2(2)a1 - 3(2)a2 + (2)a3 = 0
4a1 - 6a2 + 2a3 = 0
4a1 - 6a2 = -2a3
-2a3 = 4a1 - 6a2
a3 = -2a1 + 3a2

As you can see, changing the value of the free variable does not change the relationship/equation between the columns. Picking -1 and 0 in the video is just a convenient way to show that the columns corresponding to the free variables CAN be written as some linear combination of the pivot columns. If a vector can be written as a linear combination of some other vectors, then that vector is redundant in the set. It doesn't matter if you later pick different values for the free variables and demonstrate that the vector can ALSO be written as a linear combination of the pivot columns and the other free columns.
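The claim above can be checked numerically. A minimal Python sketch, with hypothetical vectors a1 and a2 (any choice works) and a3 constructed so that the relationship x1 = 2x3, x2 = -3x3 from the answer holds:

```python
# Hypothetical vectors: a1, a2 are arbitrary; a3 is built so a3 = -2a1 + 3a2
a1 = [1, 0, 2]
a2 = [0, 1, 1]
a3 = [-2*u + 3*v for u, v in zip(a1, a2)]

def combo(x3):
    # the combination 2x3*a1 - 3x3*a2 + x3*a3, computed component-wise
    return [2*x3*u - 3*x3*v + x3*w for u, v, w in zip(a1, a2, a3)]

# The combination is the zero vector for ANY value of the free variable x3:
for x3 in (-1, 2, 7):
    assert combo(x3) == [0, 0, 0]
```

Whatever value x3 takes, the same equation a3 = -2a1 + 3a2 falls out, which is exactly why the choice of -1 and 0 in the video loses no generality.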
• Thank you for everything you guys do. But alas, I am wanting more. Is there any possibility of videos on abstract algebra, or even graduate-level classes? I am having a blast relearning and gaining a better grasp of the topics covered in college, but my curiosity about where else all this math can go is consuming me. Nevertheless, thank you for everything up to this point. You have done a great job, from how this website runs to the content provided. God bless you all, my friends. • What is a candidate basis? I don't understand that term. • I am a bit confused about the following:
1. The proof that the non-pivot columns can be written as linear combinations of the pivot columns is made by rewriting the non-pivot parts of the equation that Sal writes at . That's clear, but this can also be done for the pivot columns.
2. In previous videos, the proof that the pivot columns cannot be written as linear combinations of the other vectors is made by looking at the rref and trying to find a number by which you can multiply one of the vectors to get another. Couldn't this also be done for the non-pivot columns, since they also contain zeros?
I don't see why the above exclusively proves the linear dependence or independence of the vectors. • 1. Consider a set of n linearly dependent vectors, of which any k taken alone are linearly independent. There are lots of ways to choose k vectors from n. But say we establish some rules, so that everyone chooses the same k vectors. These k vectors we then choose to call the linearly independent vectors for the set, and demonstrate that the remaining n - k vectors are linear combinations of the independent k (and thus dependent). Of course, we could have selected a different subset of k vectors, and they would have been independent as well (the remaining n - k in the selection being dependent). The point is - and this is important - that the subspace generated by any selection of k vectors from this set is the same. As far as I understand, the notion of linear dependence/independence was created solely for subspaces. So, if the subspace remains the same, then which k vectors get selected to produce it doesn't matter. So, why not make everyone select the same k vectors and avoid confusion? That's where the "pivot columns are independent and non-pivot columns are dependent vectors" rule comes in. We choose to treat pivot column vectors as independent, and thus disallow them being shown dependent. Hope this long paragraph made sense. :)

2. The proof made was that a pivot column vector cannot be written as a linear combination of other pivot column vectors, since each has a 1 in a different component of the vector, while the rest of their components are all 0. We cannot get 1 by any linear combination of 0's. For the non-pivot column vectors, this is not true.
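To see the pivot/free distinction concretely, here is a small pure-Python sketch (exact arithmetic via `fractions.Fraction`; the matrix is a made-up example, not the one from the video) that row-reduces a matrix and reports which columns end up with pivots:

```python
from fractions import Fraction

def pivot_columns(rows):
    """Row-reduce a matrix (list of rows) to rref; return pivot column indices."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        # find a row at or below r with a nonzero entry in column c
        pr = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pr is None:
            continue                        # no pivot here: c is a free column
        M[r], M[pr] = M[pr], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]      # scale so the pivot entry is 1
        for i in range(m):                  # clear the rest of column c
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return pivots

# Columns built so that a3 = 2*a1 + a2 and a5 = a1 + a4 (columns 2 and 4 free):
A = [[1, 0, 2, 0, 1],
     [0, 1, 1, 0, 0],
     [1, 1, 3, 1, 2]]
print(pivot_columns(A))   # -> [0, 1, 3]
```

The free columns (here 2 and 4) are exactly the ones that were deliberately built as combinations of earlier columns, matching the argument in the answer above.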
• Sal said "column span," but he probably meant "column space." Correct me if I'm wrong. • So essentially, the column vectors of a matrix that are left out of the basis for its column space correspond to a basis for its null space, which I found while experimenting. Is that always the case? • This makes intuitive sense for a number of reasons - the kernel of a matrix (or null space) depends on the matrix you choose, so there should be some relation between the column (row) vectors and the kernel. This holds true also for the image. By the rank-nullity theorem, if T maps V into W then dim(V) = dim(Im T) + dim(Ker T). But any linear transformation has a matrix representation, so the dimensions of the kernel and image are directly related to the number of linearly independent columns (rows) in the reduced row echelon form. It would make intuitive sense if this also depended on what the column vectors of the matrix are!
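The rank-nullity relationship mentioned above can be verified on a small example. A sketch, assuming a hypothetical 3x5 matrix whose third row is the sum of the first two (so its rank is 2 and its nullity is 5 - 2 = 3):

```python
from fractions import Fraction

def rank(rows):
    """Matrix rank via forward Gaussian elimination over exact fractions."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        # look for a pivot in column c, at or below row r
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        for i in range(r + 1, len(M)):      # zero out entries below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

# Hypothetical 3x5 matrix: row 3 = row 1 + row 2, so the rank is 2
A = [[1, 0, 2, 0, 1],
     [0, 1, 1, 0, 1],
     [1, 1, 3, 0, 2]]
n = len(A[0])
r = rank(A)
print(r, n - r)   # rank 2, nullity 3: rank + nullity = n = 5
```

The dimensions of the column space (rank) and null space (nullity) always add up to the number of columns, which is the rank-nullity theorem the answer invokes.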
• And one more thing. He said in the past few videos that the dimension of V is well defined, as all bases of V will have the same cardinality. In that sense, is it even possible to have more than one basis for V? I would think that each subspace would have only one basis, i.e., one set of linearly independent vectors that can span the set. Am I wrong?
(1 vote) • Imagine a matrix A = [v1, v2], where v1 = [1,1,1] and v2 = [2,2,2].
As usual, we can say Col A = span(v1, v2). Now, in this case it is fairly obvious that v2 is a linear combination of v1, and so the set is linearly dependent; however, it is still a spanning set for Col A.
A basis is defined as a minimal spanning set, i.e., a spanning set with the fewest possible number of vectors. Since v2 is redundant, we can remove it to get a basis for Col A: {v1} = {[1,1,1]}. But what if we wanted to use v2 instead? Well, we can just as easily remove v1, resulting in the basis {[2,2,2]}.
Once again, it is obvious that this basis vector is a scalar multiple of the first one we found, and in fact this gives us a way to find any number of bases for Col A: any nonzero scalar multiple of a basis vector for this line is also a basis.
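For this one-dimensional column space, two vectors are bases of the same line exactly when each is a nonzero scalar multiple of the other. A quick proportionality check, as a pure-Python sketch:

```python
def spans_same_line(u, v):
    """True when nonzero vectors u and v are scalar multiples of each other,
    i.e. they span the same one-dimensional subspace (cross-ratio test)."""
    n = len(u)
    return all(u[i] * v[j] == u[j] * v[i] for i in range(n) for j in range(n))

v1 = [1, 1, 1]
v2 = [2, 2, 2]
assert spans_same_line(v1, v2)               # {v1} and {v2}: same Col A
assert not spans_same_line(v1, [1, 2, 3])    # [1,2,3] lies on a different line
```

So a subspace has infinitely many bases, but they all contain the same number of vectors, which is why dimension is well defined.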
• By reordering the columns in the matrix (and reordering x_1, x_2, etc. to match), we can get any of the columns to be pivot or free columns, and so we can get any 3 column vectors of the matrix to be a basis, correct? If so, is it always the case?