
# Null space and column space basis

Figuring out the null space and a basis of a column space for a matrix. Created by Sal Khan.
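The procedure the video walks through (row-reduce, read the pivot columns off as a basis of the column space, and build one null-space basis vector per free column) can be sketched in plain Python. The matrix below is an assumption, reconstructed from the column vectors quoted in the discussion below ([1,2,3], [1,1,4], [1,4,1]); any matrix can be swapped in.

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (rref matrix, pivot column indices)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        # find a row at or below r with a nonzero entry in column c
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue  # no pivot in this column -> free variable
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]      # scale the pivot to 1
        for i in range(len(m)):
            if i != r and m[i][c] != 0:         # clear the rest of column c
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def nullspace_basis(R, pivots, ncols):
    """One basis vector per free column: set that free variable to 1,
    the other free variables to 0, and solve for the pivot variables."""
    basis = []
    for f in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[f] = Fraction(1)
        for row, p in enumerate(pivots):
            v[p] = -R[row][f]
        basis.append(v)
    return basis

# Assumed matrix: columns v1=[1,2,3], v2=[1,1,4], v3=[1,4,1], v4=[1,3,2]
A = [[1, 1, 1, 1],
     [2, 1, 4, 3],
     [3, 4, 1, 2]]

R, pivots = rref(A)
print(pivots)                         # [0, 1]: v1 and v2 form a basis of C(A)
print(nullspace_basis(R, pivots, 4))  # two vectors: x3 and x4 are free
```

Using exact `Fraction` arithmetic avoids the floating-point round-off that makes "is this entry zero?" unreliable in a naive float implementation.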

## Want to join the conversation?

• So basically, finding the null space is an easy way to see if the set is linearly dependent or independent? •   Yes. To be more specific, if the null space contains anything other than the zero vector, then the columns are linearly dependent.
• Am I right to interpret that any free variables found in the rref are clues that the column vectors are redundant to the span? •   Yes. After reducing the matrix to rref, the pivot columns (columns containing a single 1 with everything else being 0's) correspond to a linearly independent set of columns in the original matrix. The non-pivot columns add no new information to the span. So basically, you're right.
• x3 and x4 were free; v3 and v4 were linear combinations of v1 and v2. Will this always be the case? If it's not, how do you solve for the basis? This video solved a very simple case only. •  It is because x_3 and x_4 are free that v_3 and v_4 are linear combinations of v_1 and v_2.

Think about it this way: since x_3 and x_4 are free, we can configure them however we want. So, we can just "contrive" x_3 and x_4 so that we express v_3 and v_4 in terms of v_1 and v_2!
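Concretely, with the column vectors quoted in this thread (assumed to be the video's v_1 through v_4), setting one free variable to 1 and the other to 0 "contrives" exactly such a combination:

```python
# Columns quoted in the thread; assumed to be the video's v1..v4.
v1, v2 = [1, 2, 3], [1, 1, 4]
v3, v4 = [1, 4, 1], [1, 3, 2]

# The null-space vector with x3=1, x4=0 works out to (-3, 2, 1, 0),
# i.e. -3*v1 + 2*v2 + 1*v3 = 0, so v3 = 3*v1 - 2*v2.
assert v3 == [3*a - 2*b for a, b in zip(v1, v2)]
# Likewise x3=0, x4=1 gives (-2, 1, 0, 1), so v4 = 2*v1 - v2.
assert v4 == [2*a - b for a, b in zip(v1, v2)]
print("v3 and v4 are combinations of v1 and v2")
```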
• "They do span the column space of A but they're not the basis." What does that mean? • In order to be a basis, the vectors must all be linearly independent. As he proves later in the video, v_3 and v_4 are linear combinations of v_1 and v_2, so the set is not linearly independent and therefore cannot be a basis. However, linearly dependent vectors can still span: one or more of the vectors tells you no new information, and the remaining vectors span the same space without them.
• At one point in the video, is Sal essentially combining the operations -1·R2 and R2 + 2R1? That confused me at first... • Wait. My understanding is that a "basis" would imply an rref equal to the identity matrix, and such a matrix is a square (n×n) matrix. But those two vectors do not form a square matrix! • With 4 column vectors you could span an R^4 space. However, the 4 column vectors in A are each in R^3, so you immediately know they can't span R^4, meaning at least 1 must be removed to get linear independence. It turns out, when he does the RREF, that 2 of the column vectors are dependent. He could have kept any 2 independent columns and gotten a basis, but the RREF procedure just picks the first two.

Since there are only 2 vectors in this basis, the span forms a plane. That is, all 4 of the original vectors lie in the same plane. It's a rather slanted, awkward-to-visualize plane.
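The "all four columns lie in one plane" claim can be checked numerically. A numpy sketch, assuming the matrix reconstructed from the columns quoted in this thread:

```python
import numpy as np

# Assumed matrix from the video; rank 2 means C(A) is a plane in R^3.
A = np.array([[1, 1, 1, 1],
              [2, 1, 4, 3],
              [3, 4, 1, 2]])
print(np.linalg.matrix_rank(A))  # 2

# A normal vector to the plane spanned by the first two columns...
n = np.cross(A[:, 0], A[:, 1])
# ...is orthogonal to ALL four columns: they lie in one plane.
print(A.T @ n)  # [0 0 0 0]
```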
• There isn't a specific time in this video when I thought of this question, just overall, after watching it. I just realized this: the columns of the original matrix which correspond to the pivot columns in the rref turn out to be the basis vectors, and the others were all redundant. Is this true in every case? • Why is it wrong to say that any two of the vectors which make up the matrix can be a basis for the space?

Surely the vectors [1,1,4] and [1,4,1] are linearly independent in the same way as the vectors [1,2,3] and [1,1,4]? • In the last minute of the video,
does that mean C(A) = basis?
Thanks. • C(A) represents the column space of the matrix A. That is simply: C(A) = Span(column vectors of A), where you list the columns separately inside the parentheses.

Now, in order for a set to be a basis it not only has to span the space (every vector in the space can be represented as a linear combination of the set), it must also be linearly independent. Linear independence means there are no "extra" vectors present: the only way a linear combination of a linearly independent set can equal the zero vector is if all the coefficients are zero. Two of the columns of A were linear combinations of the other columns. Thus, they were extra baggage we don't need; no new information is gained by having those vectors present.

You're missing the point by saying the column space of A is the basis. The column space of A has a basis associated with it; it's not a basis itself (the columns of A do themselves form a basis when the null space contains only the zero vector, but that's for a later video). A basis is a property the column space possesses. It's kind of like saying "a shirt is a collar": shirts HAVE collars, and collars are part of what characterizes a shirt, but they are certainly not the whole story.
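As for the earlier question of whether a different pair of columns, say [1,1,4] and [1,4,1], would serve just as well: any two linearly independent columns of A span the same plane, so yes. A numpy sketch, using the column vectors quoted in this thread (assumed to be the video's):

```python
import numpy as np

v1, v2, v3, v4 = (np.array(v) for v in
                  ([1, 2, 3], [1, 1, 4], [1, 4, 1], [1, 3, 2]))

# {v2, v3} is linearly independent (rank 2)...
print(np.linalg.matrix_rank(np.column_stack([v2, v3])))          # 2
# ...and adding v1 and v4 does not raise the rank, so v1 and v4
# already lie in span{v2, v3}: it is an equally valid basis of C(A).
print(np.linalg.matrix_rank(np.column_stack([v2, v3, v1, v4])))  # 2
```

The rref procedure just happens to select the leftmost independent columns; any other independent pair is an equally legitimate basis.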

Example: Colors. We have been taught that the primary colors are red (R), blue (B) and yellow (Y). What about the set {R,B,Y,G}? Can we represent any color we want by taking appropriate proportions (linear combinations) of these colors? YES! Is this set a basis? NO! While it spans the set (we can make any color), it is not linearly independent, since green is some combination of blue and yellow - G in this set is "extra." How can we make the set linearly independent? Remove green.

What about the set (P=purple) {B,Y,P}? Does this set constitute a basis? NO! We are missing red, and therefore cannot form, e.g., orange.

Try a few examples on your own involving colors, then try a few more complicated ones using the standard basis vectors of R^2, (1,0) and (0,1).