# Null space 2: Calculating the null space of a matrix

Calculating the null space of a matrix. Created by Sal Khan.

## Want to join the conversation?

• Does calculating the null space of vectors have something to do with the identity of orthogonality and perpendicularity? Seeing as how when you dot vector a with vector b you get zero (a·b = 0) if they are orthogonal to one another, is there a similarity? I can't define and capture the overall concept (if there is one) which explains the relationship between the two. Also, as a corollary, how does this relate to linear combinations, in particular linear independence and dependence? • Great question,

The null space is very closely linked with orthogonality. Recall that the null space of a matrix A is defined as the set of vectors x such that Ax = 0, i.e. Ax is the zero vector. How do we compute Ax? When we multiply a matrix by a vector, we take the dot product of the first row of A with x, then the dot product of the second row with x, and so on. So the only way to have Ax = 0 is for every row of A to be orthogonal to x.
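A quick pure-Python sketch of this idea, using the matrix from the video (the helper names `dot` and `matvec` are just for illustration): multiplying A by a null-space vector x takes the dot product of each row with x, and every one comes out zero.

```python
def dot(u, v):
    # Dot product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    # Ax computed row by row: each entry is (row of A) . x.
    return [dot(row, x) for row in A]

A = [[1, 1, 1, 1],
     [1, 2, 3, 4],
     [4, 3, 2, 1]]

x = [1, -2, 1, 0]  # one of the null-space vectors found in the video

print(matvec(A, x))                        # [0, 0, 0]
print(all(dot(row, x) == 0 for row in A))  # True: x is orthogonal to every row
```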

From this idea we define something called the row space: the subspace generated by the row vectors of A. The vector x lives in the same dimension as the row vectors of A, so we can ask whether x is orthogonal to the row vectors. In fact, given any subspace we can always find its orthogonal complement, the subspace of all vectors orthogonal to it. The orthogonal complement of the row space is the null space.

Linear independence comes in when we start thinking about dimension. The dimension of the row space equals the number of linearly independent row vectors. As the row space gets larger, the null space gets smaller, since there are fewer vectors orthogonal to it. If an nxn matrix A has n linearly independent row vectors, the null space contains only the zero vector, since the row space is all of R^n.

There is much more to say but this should get you started thinking about it. Keep it up!
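To make the dimension count concrete, here is a rough sketch (pure Python, exact arithmetic via `fractions`; the helper `rank` is made up for illustration, not from the video) showing that the video's matrix has a 2-dimensional row space, leaving a 2-dimensional null space inside R^4:

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over exact rationals; rank = number of pivots.
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                       # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]    # move pivot row into place
        for i in range(rows):
            if i != r and M[i][c] != 0:    # clear the column elsewhere
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 1, 1, 1],
     [1, 2, 3, 4],
     [4, 3, 2, 1]]

n = len(A[0])
print(rank(A), n - rank(A))  # dim(row space) = 2, dim(null space) = 4 - 2 = 2
```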
• Hey Sal. Can we consider the null space of a matrix the same as the kernel (ker) of a function? • So really isn't this just finding the set of vectors that are perpendicular to our system of equations (i.e. matrix A)? • That's right. As Sal explains, we're finding the set of vectors x that satisfy the equation Ax = 0. This means if you take the dot product of any of these vectors x with any of the rows of A, you'll get zero, and a zero dot product implies they're perpendicular (orthogonal).
• What does "augmented" mean? • Augmented means to increase in size. In the linear algebra setting, it means to increase a matrix in size by adding entries on the right hand side. What the matrix is augmented by depends on what you're doing. If you're solving a system of linear equations, the vector containing the solutions is added on the RHS. If you're finding the inverse of a matrix, the matrix to be inverted is augmented by the identity matrix.
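As an illustrative sketch in Python (the variable names are made up), augmenting the video's matrix A with the zero vector from Ax = 0 just appends an entry to the right-hand side of each row:

```python
A = [[1, 1, 1, 1],
     [1, 2, 3, 4],
     [4, 3, 2, 1]]
b = [0, 0, 0]  # the RHS of Ax = 0

# Augment: stick each entry of b onto the end of the matching row of A.
augmented = [row + [bi] for row, bi in zip(A, b)]
print(augmented)  # [[1, 1, 1, 1, 0], [1, 2, 3, 4, 0], [4, 3, 2, 1, 0]]
```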
• Do the two vectors (1 -2 1 0) & (2 -3 0 1) have a special name? • Would this work if you put the matrix in echelon form and not row canonical form? • With "echelon form" meaning that you left a leading coefficient as something other than `1`, it would work, but it would make the math slightly more complicated on the last step. You'd then have something like `7x1 = 7x3 + 14x4` instead of `x1 = x3 + 2x4` at the end, and you'd have to divide through by the constant (in that case `7`) in order to solve for x1 again. It ultimately changes nothing, but it sort of adds a step.
• At the end he says that N(A)==N(rref(A)). Shouldn't it be N(A)==Span(N(rref(A)))? • In the problem, A and rref(A) are not the same matrix. A is the matrix:
|1 1 1 1|
|1 2 3 4|
|4 3 2 1|
Whereas the matrix rref(A) is:
|1 0 -1 -2|
|0 1 2 3|
|0 0 0 0|
The point of saying that N(A) = N(rref(A)) is to highlight that these two different matrices in fact have the same null space. This means that instead of going through the process of creating the augmented matrix and carrying around all those zeros, you can find rref(A) first and then find the null space of that.
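A small pure-Python check of this claim (the helper `matvec` is just for illustration): applying both A and rref(A) to the two null-space basis vectors from the video gives the zero vector each time, so both matrices share the same null space.

```python
def matvec(A, x):
    # Ax computed row by row as dot products.
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 1, 1, 1],
     [1, 2, 3, 4],
     [4, 3, 2, 1]]

R = [[1, 0, -1, -2],   # rref(A) from the video
     [0, 1,  2,  3],
     [0, 0,  0,  0]]

for x in ([1, -2, 1, 0], [2, -3, 0, 1]):
    print(matvec(A, x), matvec(R, x))  # both [0, 0, 0]: same null space
```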
• Is there any difference at all between null space and kernel, or are they two terms that mean exactly the same thing? • They mean almost exactly the same thing. The only difference is the usage: kernel is used for any linear transformation, while null space is only used with matrices. So if you have a matrix A, you can find N(A), but not ker(A), since a matrix by itself is just an expression and not a linear transformation. Likewise, if you have `T(x⃑) = A x⃑`, then you can take ker(T), but not N(T). You can take N(A) and ker(T), and they'll give you the same answer: `N(A) = ker(T)`.
• Why does Sal only solve for x1 and x2? Is it because they are the ones with the pivot entries? • Exactly right. Since x1 and x2 correspond to the pivot entries, it's easy to express them in terms of the non-pivot (free) variables. So you could pick any number for x4 and any number for x3, and performing the vector addition would give you the corresponding x1 and x2; let me know if you don't see how to do that. Now if you multiply the matrix A by the 4x2 matrix formed by the null space basis vectors, you will get a zero matrix of dimensions 3x2.

You can also get the same result if you multiply any row from matrix A by a constant, so the first row <1, 1, 1, 1> would become <2, 2, 2, 2> if you multiplied it by 2, or if you multiplied any COLUMN in the nullspace by a constant. So the column <1, -2, 1, 0> could become <3, -6, 3, 0> if you multiplied the column by 3. Doing any of this will still result in a 0 matrix.

Let me know if that doesn't make sense or something.
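Here is a minimal pure-Python sketch of that multiplication (the helper `matmul` is made up for illustration): A is 3x4, the two null-space basis vectors form the columns of a 4x2 matrix, and the product is the 3x2 zero matrix.

```python
def matmul(A, B):
    # Standard matrix product: entry (i, j) is row i of A dotted with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1, 1, 1],
     [1, 2, 3, 4],
     [4, 3, 2, 1]]

# Columns are the null-space basis vectors (1, -2, 1, 0) and (2, -3, 0, 1).
N = [[ 1,  2],
     [-2, -3],
     [ 1,  0],
     [ 0,  1]]

print(matmul(A, N))  # [[0, 0], [0, 0], [0, 0]]: the 3x2 zero matrix
```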
• Is there any material in this course (linear algebra) that requires the use of the term "kernel"? 