© 2023 Khan Academy
Null space 3: Relation to linear independence
Understanding how the null space of a matrix relates to the linear independence of its column vectors. Created by Sal Khan.
- Can someone explain to me the physical meaning of null space? I understand it mathematically but don't get it in a practical sense. Thanks. (13 votes)
- A 'physical meaning' would come from the meaning of the numbers in the matrix. Look up using matrices to solve Kirchhoff’s Current Law for one example where the nullspace is used for an applied problem. Otherwise, if you just want some intuition: the nullspace for a matrix A is the set of vectors that are perpendicular to all the rows of A.(36 votes)
- Sal explains that the only way for the matrix's column vectors to all be linearly independent is if none of them can be represented as a combination of the others, in which case the only solution is 0.
Then he says that for Ax = 0 to be true, x must be the zero vector. I think I can conclude from this that the columns are linearly independent exactly when the only solution is x = 0.
Is that wrong? Have I made any mistake in this reasoning? (10 votes)
- Rodrigo,
You haven't made a mistake, this is correct.
Basically, Ax is the sum of the ith column of A times the ith term of x (each column of A times its respective x term). If the columns of A are a linearly independent set, then the only way to multiply them all by some coefficients, add them all together, and STILL get zero is if all of the coefficients are zero. In this case, the terms of x act like the coefficients of the columns of A. For the whole thing to equal the zero vector, all of the x terms must be 0. In other words, the nullspace of A is ONLY the zero vector. (11 votes)
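The answer above can be spot-checked numerically. This is a minimal sketch (the matrix and vector values are made up for illustration, and NumPy is assumed to be available) showing that A @ x is the same as the sum of each column of A scaled by the corresponding entry of x:

```python
import numpy as np

# A hypothetical 3x2 matrix and a vector x; any values would work.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([10.0, -1.0])

# Matrix-vector product computed directly...
direct = A @ x

# ...and as a linear combination of A's columns, weighted by x's entries.
combo = x[0] * A[:, 0] + x[1] * A[:, 1]

print(direct)                      # [ 8. 26. 44.]
print(np.allclose(direct, combo))  # True
```

So asking whether Ax = 0 has a nonzero solution is exactly asking whether some nontrivial combination of the columns adds up to the zero vector.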
- what does nullspace mean, physically and mathematically ?(4 votes)
- If A x⃑ = 0⃑, then x⃑ is in N(A), and vice versa.
The nullspace of a matrix is a subspace: N(A) = { x⃑ : A x⃑ = 0⃑ }.
Every single vector within that subspace becomes the zero vector when transformed by A (which is what the original equation means).
If N(A) ≠ { 0⃑ }, then A isn't invertible.
Hope this gives you some understanding. (9 votes)
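The invertibility point above can be illustrated with a small sketch, assuming SymPy is available; the matrix here is made up, chosen so its second column is twice its first:

```python
import sympy as sp

# A hypothetical 2x2 matrix with linearly dependent columns
# (second column = 2 * first column).
A = sp.Matrix([[1, 2],
               [3, 6]])

# The null space has a basis vector, so it contains more than 0⃑...
print(A.nullspace())  # [Matrix([[-2], [1]])]

# ...and, consistent with the comment above, A is not invertible.
print(A.det())        # 0
```

A nonzero vector in N(A) means two different inputs (x and 0) map to the same output, so no inverse map can exist.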
- At 2:15, Salman must have converted a matrix into a vector of vectors. This is confusing, since I had always thought of matrices as composed of real numbers. In computer languages such as Python, there is a distinction between an "array" and a "list". A list can be a list of lists, but an array cannot be an array of arrays, at least as far as I know. Can someone comment on what Salman did here? (4 votes)
- A matrix being a list of vectors comes straight out of the definition of a matrix. It is actually more correct to think of matrices as being composed of vectors rather than numbers.(7 votes)
- So what is the actual purpose of null space? What does it signify?(4 votes)
- The nullspace is the set of all vectors v that, when multiplied by some matrix in the form Av, the result is the zero-vector. This is useful because it tells you every single vector that, when multiplied with A, will result in 0. That's why they call it the "null" space, or the space of vectors that will "nullify" or "zero out" a matrix. I've seen it used in low-level UX design (opengl) and in games where 3d vector manipulation is baked in (like Second Life). Also note the exciting idea that the null space represents the set of all vectors that are orthogonal to the matrix row vectors (dot product = 0).(6 votes)
- At 4:02, Sal starts talking about linear independence.
What if an N by N matrix wasn't linearly independent? (2 votes)
- Then the zero vector could be obtained by choosing an x not equal to zero in the equation Ax = 0. For instance, if in R^3 you had a 3x3 matrix A that could be multiplied by a vector x (where x isn't [0,0,0]) and the product was the zero vector [0,0,0], then the null space of A isn't trivial*, which implies that the columns of A (i.e. the 3 R^3 vectors) are linearly dependent. This means that one of the vectors could be written as a combination of the other two.
In essence, if the null space is JUST the zero vector, the columns of the matrix are linearly independent. If the null space has more than the zero vector, the columns of the matrix are linearly dependent.
* A trivial null space is one that contains just the zero vector. (8 votes)
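The equivalence described in this answer can be checked directly. Here is a sketch (assuming SymPy; both example matrices are made up — B has independent columns, while C's third column is the sum of its first two):

```python
import sympy as sp

# B's columns are linearly independent.
B = sp.Matrix([[1, 0],
               [0, 1],
               [1, 1]])

# C's third column equals its first column plus its second.
C = sp.Matrix([[1, 0, 1],
               [0, 1, 1],
               [1, 1, 2]])

print(B.nullspace())  # []  -> trivial null space, columns independent
print(C.nullspace())  # one basis vector -> columns dependent
```

An empty basis list means the null space is just the zero vector; a nonempty one exhibits explicit weights that combine the columns to zero.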
- why does rref(A) have to be a SQUARE matrix?(4 votes)
- As said in the previous comments, rref(A) does not have to be a square matrix; it can also take the form of a square block with rows of zeros below it. So I also think rref(A) does not have to be a square matrix. (2 votes)
- Is an mxn matrix a tensor? A vector of vectors (m R^n row vectors or n R^m column vectors)?(3 votes)
- All mathematical objects in linear algebra are tensors. Matrices are order-2 tensors, vectors are order-1 tensors, and scalars are order-0 tensors.(3 votes)
- Can anyone explain more intuitively why the null space is only the 0 vector when the column vectors are linearly independent? Another comment explained that the null space is the set of vectors perpendicular to the column vectors, so I am confused about how the 0 vector is perpendicular to linearly independent column vectors. (2 votes)
- The null space is not the set of vectors that are perpendicular to the column vectors. The nullspace is the set of vectors that are perpendicular to the row vectors.
But you should think about it this way:
For a matrix A, the null space of A, denoted Null(A), is the set of all vectors x such that Ax=0, the right hand being the 0 vector.
Recall if the columns of A are linearly independent, then there is only the trivial solution to Ax=0, namely x=0.
This is why when the columns are linearly independent, the null space only has the 0 vector.(2 votes)
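The row-perpendicularity fact from this answer can also be verified numerically. A minimal sketch, assuming SymPy (the wide matrix here is made up so that its null space is nontrivial):

```python
import sympy as sp

# A hypothetical 2x3 matrix; with more columns than rows, its
# columns must be dependent, so Null(A) contains nonzero vectors.
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6]])

for v in A.nullspace():
    for i in range(A.rows):
        # Each null space vector dots to zero with every row of A.
        print(A.row(i).dot(v))  # 0
```

This is just Ax = 0 read entry by entry: the ith entry of Ax is the dot product of row i with x, so x is in the null space exactly when it is perpendicular to every row.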
- At 5:07, doesn't Sal mean the nth 0 instead of the mth? (2 votes)
- For "Ax = 0", how do you get the first entry in the result (0) vector? The last entry? (1 vote)
Video transcript
- [Voiceover] So I have the matrix A over here, and A has m rows and n columns, so we could call this an m by n matrix. And what I want to do in this video is relate the linear independence, or linear dependence, of the column vectors of A to the null space of A.

So, first of all, what am I talking about as column vectors? Well, as you can see, there's n columns here, and we could view each of those as an m-dimensional vector. So let me do it this way: this one right over here, we could write that as v1; this next one over here would be v2; and you would have n of these, because we have n columns, so this one right over here would be v sub n. And so we could rewrite the m by n matrix A (I'm bolding it to show that that's a matrix), just express it in terms of its column vectors: this is going to be v1 for that column, v2 for this column, all the way to vn for the nth column. And remember, each of these is going to have m components in them; these are m-dimensional column vectors.

Now, I said I want to relate the linear independence of these vectors to the null space of A. So let's remind ourselves what the null space of A even is. The null space of A is the set of all vectors x that are members of Rn (and I'm gonna double down on why I'm saying Rn in a second) such that, if I take my matrix A and multiply it by one of those x's, I'm going to get the zero vector.

So why does x have to be a member of Rn? Well, just for the matrix multiplication to work. If this is m by n, let me write this down, if this is m by n, then in order just to make the matrix multiplication work, or you could say the matrix-vector multiplication, this has to be an n by 1 vector, so it's gonna have n components, so it's gonna be a member of Rn. If this was m by, I don't know, seven, then it would be R7 that we would be dealing with. So that is the null space.

Another way of thinking about it is: if I take my matrix A and multiply it by some vector x that's a member of this null space, I'm going to get the zero vector. So if I take my matrix A, which I've expressed here in terms of its column vectors, and multiply it by some vector x right over here (we draw the other bracket, so this is the vector x), it's a member of Rn, so it's going to have n components: you're gonna have x1 as the first component, x2, and go all the way to xn. If we say that this x is a member of the null space of A, then this whole thing is going to be equal to the zero vector. And once again, the zero vector is gonna be an m by 1 vector, so it's gonna have the same number of rows as A (I'll try to make the brackets roughly the same length): you're gonna have m of these zeros, one, two, and then go all the way to the mth zero.

So let's actually just multiply this out, using what we know of matrix multiplication. By the definition of matrix multiplication, if you were to multiply our matrix A times our vector x here, you're going to get the first column vector, v1, times the first component here, x1, plus the second component times the second column vector, x2 times v2, and we're gonna do that n times, so plus dot dot dot, xn times vn. And these all, when you add them together, are going to be equal to the zero vector.

Now this should start ringing a bell: when we looked at linear independence, we saw something like this. In fact, we saw that these n vectors v1, v2, and so on are linearly independent if and only if the only solution to this (or, you could say, the only weights on these vectors) is x1, x2, all the way to xn all equal to zero. So let me write this down: v1, v2, all the way to vn are linearly independent if and only if the only solution to this equation is x1, x2, all the way to xn equal to zero. So if the only way to get this sum to be equal to the zero vector is if x1, x2, all the way through xn are equal to zero, well, that means our vectors v1, v2, all the way to vn are linearly independent. Or, vice versa, if they're linearly independent, then the only solution to this, if we're solving for the weights on those vectors, is for x1 through xn to be equal to zero.

Remember, linear independence, to say it in a little more common language: if these vectors are linearly independent, that means that none of these vectors can be constructed by linear combinations of the other vectors. Or, looking at it this way, you could view this right over here as a linear combination of all of the vectors, and the only way to get this linear combination of all the vectors to be equal to zero is if x1, x2, all the way through xn are equal to zero. And we proved that in other videos on linear independence.

Well, if the only solution to this is that all of the x1's through xn's are equal to zero, that means that the null space of A (let me make sure it looks like a matrix, I'm gonna bold it) only contains one vector: it only contains the zero vector. Remember, if all of these are gonna be zero, well then the only solution here is gonna be the zero vector.

So the result that we're showing here is: if the column vectors of a matrix are linearly independent, then the null space of that matrix is only going to consist of the zero vector. Or you could go the other way: if the null space of a matrix only contains the zero vector, well, that means that the columns of that matrix are linearly independent.