
# Showing relation between basis cols and pivot cols

Showing that linear independence of pivot columns implies linear independence of the corresponding columns in the original equation. Created by Sal Khan.

## Want to join the conversation?

• I was confused about the difference between rank(A) and dim(A). Why aren't they combined into one definition? • "dim" is a function which inputs a subspace and outputs an integer.
"rank" is a function which inputs a matrix and outputs an integer.
"dim" returns the dimensionality of a subspace. So for example, a subspace which takes the shape of a plane would return 2.
"rank" returns the dimensionality of the subspace generated as the span of column vectors. In general:
Given:
`A = [a₁ a₂ ... aₙ]` (lower-case a's are column vectors)
Then:
`rank(A) = dim(span({a₁, a₂, ... aₙ}))`
Also note, "span" inputs a set of vectors and outputs a subspace.
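To make rank(A) = dim(span of the columns) concrete, here is a minimal pure-Python sketch (not library code, and the example matrix is made up for illustration): it row-reduces a matrix with exact fractions and counts the nonzero rows, which equals the number of pivot columns.

```python
from fractions import Fraction

def rref(rows):
    """Return the reduced row echelon form of a matrix (list of row lists)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue                       # free column, no pivot here
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]
        for r in range(len(m)):            # clear the column everywhere else
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

def rank(rows):
    """rank(A) = number of pivot (nonzero) rows in rref(A)."""
    return sum(1 for row in rref(rows) if any(x != 0 for x in row))

A = [[1, 2, 3],
     [2, 4, 6],   # 2 * row 1, so it adds nothing to the span
     [1, 0, 1]]
print(rank(A))   # 2
```

The second row is a multiple of the first, so the span of the columns is only a plane: dim = rank = 2.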
• A·c = R·c = 0, i.e. a₁c₁ + a₂c₂ + a₄c₄ = r₁c₁ + r₂c₂ + r₄c₄ = 0. For the right-hand side, each cᵢ = 0 by linear independence of the rᵢ's. Now suppose there exists a vector c' such that Ac' = 0 but not all c'ᵢ = 0. By definition, this c' would be in the null space of A. However, since the rᵢ's are linearly independent, Rc' ≠ 0, so c' would not be in the null space of R. That contradicts the fact that A and R have the same null space. Therefore the only solution on the left-hand side is every cᵢ = 0. I didn't understand this until I wrote out this explanation, so I'll post it here in case it's useful. • Exactly. I didn't get the proof at first either, but worked it out just like you and finally got it. For those who are still struggling, try re-watching the video from that point.

there are six steps (the last one is covered in the next video)
1) the pivot columns in reduced row echelon form are linearly independent (because the lone 1 in each pivot column, e.g. "0 1 0 0", can't be made from the other columns)

2) the solution spaces, i.e. all the solutions to the equations Rx=0 and Ax=0, are the same (as R is just the reduced form of A)

3) then let's just set the coefficients of R3, R5 (the free columns) to zero and find a solution to the equation Rx=0. From step 1 we know that the remaining columns are linearly independent, so the only solution to the equation is setting the remaining coefficients to zero as well

4) from step 3 we see the solution space doesn't contain any coefficients for R1, R2, R4 other than 0 once we set the coefficients of R3, R5 to 0

5) from step 2, the solution to Ax=0 when we set the coefficients of A3, A5 to zero should be [0,0,0,0,0]^T as well, and from that we can see c1*A1 + c2*A2 + c4*A4 = 0 has only one solution (this is exactly the condition for linear independence)

6) the remaining part of the proof, showing that this set of linearly independent vectors spans the column space, is done in the next video
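The steps above can be sketched numerically. The matrix below is hypothetical (built in the spirit of the video, not Sal's exact example): its 3rd column is column 1 + column 2 and its 5th column is column 1 - column 4, so the pivot columns are 1, 2 and 4. The script checks that vectors built by choosing the free variables solve Ax = 0, and that the three pivot columns by themselves are independent (nonzero determinant of the 3×3 pivot submatrix).

```python
def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x; integer arithmetic is exact."""
    return [sum(M[i][j] * x[j] for j in range(len(x)))
            for i in range(len(M))]

def det3(M):
    """Determinant of a 3x3 matrix; nonzero means its columns are independent."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

# Column 3 = column 1 + column 2 and column 5 = column 1 - column 4,
# so the pivot columns are 1, 2 and 4.
A = [[1, 0, 1, 2, -1],
     [0, 1, 1, 0,  0],
     [1, 1, 2, 1,  0]]

# Null-space vectors found by setting one free variable (x3 or x5) to 1:
n1 = [-1, -1, 1, 0, 0]   # -a1 - a2 + a3 = 0
n2 = [-1, 0, 0, 1, 1]    # -a1 + a4 + a5 = 0
print(matvec(A, n1))     # [0, 0, 0]: solves Ax = 0
print(matvec(A, n2))     # [0, 0, 0]

# The pivot columns (1, 2, 4) on their own are linearly independent:
pivots = [[row[0], row[1], row[3]] for row in A]
print(det3(pivots))      # -1, nonzero, so only the trivial combination gives 0
```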
• I don't understand: when Sal is explaining the vector [c1 c2 0 c4 0], he said that c1, c2, c4 must be zero, showing linear independence. Aren't the third and fifth terms also zero? If c1, c2, c4 are zero then the solution would be [0 0 0 0 0]; doesn't that imply all the column vectors are linearly independent? • He says that the 1st, 2nd and 4th c must be 0 for the result to be the 0 vector, and hence those columns are linearly independent.

Whereas the 3rd and 5th just happen to be zero. But since those vectors can be made from the other vectors, you would also be able to find a way to get back to 0 even if they happened not to be. (He mentions showing this in the next video.)
• Why do I feel like a lot of this is just going around in circles?

• General question: by putting the matrix into REF instead of RREF, would you still obtain the linearly independent columns?

• If two square n×n matrices each have rank n, how can I show that their sum also has rank n (without using the characteristic polynomial)?

• Sal, why are there 0s in the 3rd and 5th entries of the vector x that is multiplied by R? • By reducing the matrix to reduced row echelon form, we found the pivot columns. In R, the pivots are columns 1, 2, and 4. This tells us that the other two columns (3 and 5) can be expressed as linear combinations of the pivot columns. You can picture this by imagining each vector as a line. If, for example, r4 could be expressed as 2*r1, then that simply means that r4 is on the same line as r1. The idea remains the same if it is a more complex combination involving two or more vectors.
Now, if we are looking at the span of the vector space (which we are), then these vectors can simply be omitted. This becomes obvious (I hope) when thinking of the lines (or planes, etc., for higher dimensions). If r4 is on the same line as the other vectors, it is included in the span of those vectors.
So, to solve this equation more simply, we set these free variables to 0. If you remember, the values of the free variables can be set arbitrarily, but they then determine the values of the pivot variables. In this example, if we used a value other than 0, we would have to find linear combinations of the pivot columns to cancel out the values in the non-pivot columns (which can be done! try it if you don't believe me!). So 0 is picked as a value to simplify calculation, but it does not affect the end result.
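The "try it if you don't believe me" above can be checked with a quick script. A minimal sketch, using a small hypothetical matrix (not Sal's example) whose third column is the sum of the first two, so x3 is the free variable: any value of x3 yields a solution once the pivot variables adjust to cancel it.

```python
def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x; integer arithmetic is exact."""
    return [sum(M[i][j] * x[j] for j in range(len(x)))
            for i in range(len(M))]

# Hypothetical matrix: column 3 = column 1 + column 2,
# so columns 1 and 2 are pivot columns and x3 is free.
A = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 2]]

# The free variable can take ANY value; the pivot variables cancel it out.
for x3 in (0, 1, 5, -2):
    x = [-x3, -x3, x3]          # x1 = x2 = -x3, read off from rref(A)
    assert matvec(A, x) == [0, 0, 0]
print("every choice of the free variable gives a solution to Ax = 0")
```

Choosing x3 = 0 just gives the simplest of these solutions, which is why it is picked in the video.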
• Can someone explain again why c1, c2 and c4 have to be zero for the columns to be linearly independent? This is what confuses me about linear dependence and independence. • If c1, c2 and c4 did not all equal zero, i.e. some could be nonzero numbers still making c1*R1 + c2*R2 + c4*R4 = 0, this would imply one of those R vectors is a combination of the other two. Maybe some examples will help.

Starting with vectors of length 2: <1,0> and <0,1> are the most basic linearly independent vectors. ANY other vector with two elements will not be linearly independent of both. What this means is that you could multiply <1,0> and <0,1> each by some number and add them to get any other vector in R2. Let me know if that is not clear.

Now let's call <1,0> x1 and <0,1> x2. More related to your question, we would set up c1*x1 + c2*x2 = 0. Is there any combination of c1 and c2, other than both equaling 0, where this linear combination equals the zero vector, in this case <0,0>? There is not. It is this thinking that carries on.

If you have n linearly independent vectors, there is no way to make a linear combination of them that gives the 0 vector other than multiplying them all by 0. So in the video the rref vectors were <1,0,0,0>, <0,1,0,0>, <0,0,1,0>. Now, could you make a linear combination of them to get the 0 vector, in this case <0,0,0,0>?

If you are having trouble understanding why these three vectors dictate the whole matrix, I can explain that too; just let me know.
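The claim about <1,0> and <0,1> can be brute-force checked in a few lines of Python (a sketch that only searches a small grid of integer coefficients, which is enough to see the idea):

```python
# The only pair (c1, c2) with c1*<1,0> + c2*<0,1> = <0,0> is c1 = c2 = 0,
# checked by brute force over a small grid of integer coefficients.
x1, x2 = (1, 0), (0, 1)

solutions = [(c1, c2)
             for c1 in range(-5, 6)
             for c2 in range(-5, 6)
             if (c1*x1[0] + c2*x2[0], c1*x1[1] + c2*x2[1]) == (0, 0)]
print(solutions)   # [(0, 0)]: only the trivial combination

# And any vector <a, b> in R2 is reachable as a*x1 + b*x2:
a, b = 3, -7
print((a*x1[0] + b*x2[0], a*x1[1] + b*x2[1]))   # (3, -7)
```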
• Is it right to set the coefficients of R3 and R5 to 0 and then prove that the other vectors in the set are linearly independent?