rank(A) = rank(transpose of A)
Rank(A) = Rank(transpose of A). Created by Sal Khan.
Want to join the conversation?
- Why will the number of pivot rows always be equal to the number of pivot columns? (11 votes)
- In both halves of the proof, he takes the rref(A). It is not (as I originally assumed) the rref(A) and rref(A^T). So, once he has rref(A), he looks for pivot entries. The pivot entries are defined by being the only non-zero entry in a column. Then the pivot column is just defined as the column containing the pivot entry. Likewise the pivot row is defined as the row containing the pivot entry. There are the same number of them because there is one of each for each pivot entry.
If you're confused, maybe it's because, like me, you assumed that the pivot row had to have only one non-zero entry (in analogy to the pivot columns). That is not the case, however, and in his rref(A) (visible at the bottom of the screen at the end of the video) he does, in fact, have multiple non-zero entries in his pivot rows. Only the pivot columns are guaranteed to have only one nonzero entry.(12 votes)
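If it helps to see this concretely, here is a small sketch using SymPy's rref() (the matrix below is made up purely for illustration, not taken from the video):

```python
from sympy import Matrix

# An example 3x4 matrix (made up for illustration).
A = Matrix([
    [1, 2, 0, 3],
    [2, 4, 1, 8],
    [3, 6, 1, 11],
])

R, pivot_cols = A.rref()   # rref() returns (rref matrix, indices of the pivot columns)
print(R)                   # the reduced row echelon form of A
print(pivot_cols)          # (0, 2): the columns containing a pivot entry
print(len(pivot_cols))     # 2 = number of pivots = number of pivot rows = number of pivot columns
```

Each index in pivot_cols corresponds to exactly one pivot entry, which sits in exactly one pivot row and one pivot column, which is why the three counts always agree.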
- At 4:47, what is a pivot row? He talks about pivot rows forming a basis for the rows in A -- no discussion up to this point involved pivot rows! But now pivot rows are forming a basis for the rows in A and the columns in A transpose. Huh? (5 votes)
- As you may have guessed, it's a row (in rref(A)) which has a pivot element.
A pivot column is (yes, that's right) a column in rref(A) which has a pivot element.
In rref(A), all the non zero rows are pivot rows and all the pivot columns have only one non zero entry - the (you guessed it) pivot element in that row.
These row vectors (the pivot rows of rref(A)) are a l. i. set because they each have a component that doesn't occur in any of the others - (what is it?) - the pivot element. And since they are all linear combinations of the original row vectors of A (due to the row operations of Gaussian elimination), then the original row vectors must also be linear combinations of them, and so they are a basis for the row space, and therefore any basis for the row space of A has the same # vectors as there are pivots in rref(A).(2 votes)
- This has gotten me confused again. Given a 3x1 matrix, or simply a column vector in R3, I believe its rank to be 1. Now if we transpose it, we get a 1x3 matrix. How can its rank also be 1? Pardon my ignorance, but please do enlighten me. (4 votes)
- Note that the rank of a matrix is equal to the dimension of its row space (so the rank of a 1x3 is also the dimension of its row space). And to find the dimension of a row space, one puts the matrix into echelon form and counts the remaining non-zero rows.
Well then, if you have a non-zero column vector (which you correctly declared has a rank of 1) and take its transpose, you can find the rank of the transpose simply by finding the dimension of its row space.
But a single vector transposed is already in echelon form, so the dimension of the row space is 1. And we just said that the dimension of the row space of the transpose equals the rank of the transpose, therefore the rank of a non-zero 1x3 is also 1. (3 votes)
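A quick numerical sanity check of this, using NumPy (the particular vector is just an arbitrary example):

```python
import numpy as np

v = np.array([[2], [0], [5]])          # a non-zero column vector in R^3, shape (3, 1)
print(np.linalg.matrix_rank(v))        # 1
print(np.linalg.matrix_rank(v.T))      # 1 -- the 1x3 transpose has the same rank
```

A single non-zero row can't be "more" or "less" independent than the single non-zero column it came from, so both ranks are 1.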
- What is the purpose of matrices and vectors? I don't get it, to be honest, friends. :) Love you all. (1 vote)
- I didn't think I'd ever use them when I learned basic linear algebra in middle/high school Algebra 2; it wasn't until college that I found out how useful they are.
Vectors are used throughout physics, for example: you'll use them a lot when talking about position, velocity, and acceleration functions. There are many more uses of vectors, but that's just one example.
I ended up using matrices again in my circuits class (I'm an Electrical Engineering major). They come in handy when analyzing circuits (mostly mesh analysis). I got really good with solving systems of equations (made up by different variables of different circuits - real-life stuff) by setting up matrices, taking the inverse of them, and multiplying them by "b" (consider Ax = b) to get the solutions for the various x's. In mesh analysis, I was solving for the currents (and indirectly, the voltages) throughout a given circuit. I just noticed that my response is two years late, so you probably figured it out already. I guess this can be for anyone else who reads the questions and wonders the same thing.(9 votes)
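For anyone curious what that looks like in code, here is a minimal sketch of solving Ax = b numerically (the numbers below are made up and don't come from any particular circuit):

```python
import numpy as np

# Made-up coefficient matrix and right-hand side, standing in for
# the system of equations you'd get from mesh analysis (A x = b).
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 5.0, -2.0],
              [0.0, -2.0,  3.0]])
b = np.array([10.0, 0.0, 5.0])

x = np.linalg.solve(A, b)   # solve for the unknowns (e.g. mesh currents)
print(x)
```

In practice np.linalg.solve(A, b) is usually preferred over explicitly inverting A and multiplying by b, although for an invertible A both give the same x.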
- This is not a proof. It seems to me that the second part of the demonstration does not make sense and does not prove anything.
The only proof I know is more complex and involves "calculating" the rank of the transpose matrix and verifying that it is the same as the rank of the original matrix. (3 votes)
- The # of non-zero rows of rref(A) is always the same as the # of pivots, due to Gaussian elimination. These row vectors are linearly independent because they each have a component that doesn't occur in any of the other row vectors -- the pivot element. And since they are all linear combinations of the original row vectors of A (due to the row operations of Gaussian elimination), the original row vectors must also be linear combinations of them, so they are a basis for the row space, and therefore any basis for the row space of A has the same # of vectors as there are pivots in rref(A).
The # of pivots in rref(A) is also its # of pivot columns, and these columns are also clearly linearly independent vectors. Sal has shown in previous videos that the column vectors of A corresponding to these pivot columns are a basis for C(A). This requires a long explanation (See
"https://www.khanacademy.org/math/linear-algebra/vectors-and-spaces/null-column-space/v/dimension-of-the-column-space-or-rank",
"https://www.khanacademy.org/math/linear-algebra/vectors-and-spaces/null-column-space/v/showing-relation-between-basis-cols-and-pivot-cols", and
"https://www.khanacademy.org/math/linear-algebra/vectors-and-spaces/null-column-space/v/showing-that-the-candidate-basis-does-span-c-a").
So the # of vectors in a basis for both the row space and the column space of a matrix A is the same (the number of pivots in rref(A)). But since the row vectors of A and the column vectors of A^T are exactly the same vectors, a basis for the row space of A is also a basis for C(A^T), and so rank(A) = rank(A^T). (4 votes)
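Here is a small sketch (with an arbitrary example matrix, not one from the videos) that checks with SymPy that all three counts agree:

```python
from sympy import Matrix

# Arbitrary example matrix, chosen only for illustration.
A = Matrix([
    [1, 2, 3, 1],
    [2, 4, 6, 2],
    [1, 0, 1, 0],
])

num_pivots = len(A.rref()[1])   # number of pivots in rref(A)
print(num_pivots)               # 2
print(A.rank())                 # 2 = dim C(A)
print(A.T.rank())               # 2 = dim C(A^T) = dimension of the row space of A
```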
- So I accept that the # of pivot entries in rref(A) and the # of pivot entries in rref(A^T) are equal to rank(A) and rank(A^T), respectively. I don't see how we proved that the # of pivot entries in rref(A) = the # of pivot entries in rref(A^T). Can someone explain where that happened? (2 votes)
- We proved that the number of pivot entries in rref(A) is rank(A^T). This is because rank(A^T) is just dim(RowSpace(A)), which is just dim(RowSpace(rref(A))), which is just the number of pivot rows in rref(A). The number of pivot rows in rref(A) is just the number of pivot entries in rref(A). (2 votes)
- At 4:33, Sal uses the terms 'pivot rows' and 'pivot entries' (pivot columns) interchangeably. I've watched the whole Linear Algebra series up until this video, but this part confuses me royally. Are pivot rows and entries (columns) interchangeable? What is the relationship between a pivot entry/column and a pivot row? (2 votes)
- A pivot row is a row that has a pivot entry in it. A pivot entry is *not* a pivot column! A pivot column is a column with a pivot entry in it.
Let me use an analogy:
In MS Excel, you have rows, columns, and cells. Think of a cell as an entry: it sits in one specific row and one specific column. (1 vote)
- Hi,
Suppose we have a matrix A which is an m x n matrix, for example 3x5.
1) The maximum dimension (i.e., the maximum possible rank) is m, right? So 3 in this example.
2) Is Dim(ColSpace(A)) + Dim(NullSpace(A)) = n? So in this case 5?
3) Can we use the rule
Dim(ColSpace(A)) = Dim(ColSpace(A^T))
for null spaces as well:
Dim(NullSpace(A)) = Dim(NullSpace(A^T))?
4) If question 3 is valid, then this should be correct, right? (I used A^T in the null space)
Dim(ColSpace(A)) + Dim(NullSpace(A^T)) = n, also 5?
Thanks! (1 vote)
- Not in this video, but what is full rank? (1 vote)
- Full rank is when as many row vectors (equivalently, column vectors) of a matrix as possible are linearly independent -- that is, when the rank equals min(m, n). (1 vote)
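As an illustration (the matrices below are made up), you can check for full rank numerically by comparing the rank to min(m, n):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 4]])   # 2x3, rows are linearly independent
B = np.array([[1, 2, 3],
              [2, 4, 6]])   # 2x3, second row is twice the first

def is_full_rank(M):
    # A matrix has full rank when its rank equals min(#rows, #columns).
    return np.linalg.matrix_rank(M) == min(M.shape)

print(is_full_rank(A))   # True
print(is_full_rank(B))   # False
```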
- Where can I find videos about transposition of a subject, that is, changing the subject of an equation? (1 vote)
Video transcript
A couple of videos ago, I made the statement that the rank of a matrix A is equal to the rank of its transpose, and I made a bit of a hand-wavy argument. It was at the end of the video, and I was tired; it was actually the end of the day. I thought it'd be worthwhile to flesh this out a little bit, because it's an important takeaway that will help us understand everything we've learned a little bit better.

So let's understand -- I'm actually going to start with the rank of A transpose. The rank of A transpose is equal to the dimension of the column space of A transpose. That's the definition of the rank. The dimension of the column space of A transpose is the number of basis vectors for the column space of A transpose. That's what dimension is: for any subspace, you figure out how many basis vectors you need in that subspace, you count them, and that's your dimension. So it's the number of basis vectors for the column space of A transpose, which is, of course, the same thing -- we've seen this multiple times -- as the row space of A. The columns of A transpose are the same thing as the rows of A, because you switch the rows and the columns.

Now, how can we figure out the number of basis vectors we need for the column space of A transpose, or the row space of A? Let's just think about what the column space of A transpose is telling us. Let me draw A like this. That's a matrix A; let's say it's an m by n matrix. Let me just write it as a bunch of row vectors. I could also write it as a bunch of column vectors, but right now let's stick to the row vectors. So we have row 1 (each row is the transpose of a column vector), then row 2, and we go all the way down to row m. It's an m by n matrix, so each of these vectors is a member of Rn, because they're going to have n entries in them, because we have n columns. That's what A is going to look like. And then in A transpose, all of these rows become columns. A transpose is going to look like this: r1, r2, all the way to rm, and it is of course going to be an n by m matrix. You swap these out, so all of these rows are going to be columns.

And obviously -- or maybe not so obviously -- the column space of A transpose is equal to the span of r1, r2, all the way to rm. It's equal to the span of these things. Or, equivalently, it's equal to the span of the rows of A. That's why it's also called the row space. This is equal to the span of the rows of A; these two things are equivalent. Now, this is a span, which means it is some subspace that's all of the linear combinations of these columns, or all of the linear combinations of these rows. If we want a basis for it, we want to find a minimum set of linearly independent vectors that we could use to construct any of these columns, or that we could use to construct any of these rows, right here.

Now, what happens when we put A into reduced row echelon form? We do a bunch of row operations to put it into reduced row echelon form, and you eventually get something like this: the reduced row echelon form of A. It is going to look something like this. You're going to have some pivot rows -- some rows that have pivot entries. Let's say that's one of them, and that's one of them. This column will have 0's all the way down, and this one will have 0's. Your pivot entry has to be the only non-zero entry in its column, and everything to the left of it in its row has to be 0. Let's say that this entry isn't a pivot -- these are some non-zero values, these are 0. We have another pivot entry over here, and everything else is 0. Let's say everything else consists of non-pivot entries. So you end up with a certain number of pivot rows, or a certain number of pivot entries.

And you got there by performing linear row operations on these guys. Those row operations -- you know, I take 3 times row 2, add it to row 1, and that becomes my new row 2 -- you keep doing that and you get these rows here. So these rows here are linear combinations of those guys. Or, another way to see it: you can reverse those row operations. I could start with these guys right here and just as easily perform the reverse row operations -- any row operation can be reversed; we've seen that multiple times. You could perform row operations with these guys to get all of these guys. So another way to view it is that these row vectors right here span all of those -- all of the original row vectors can be represented as linear combinations of your pivot rows right here. Obviously, your non-pivot rows are going to be all 0's, and those are useless. But your pivot rows -- if you take linear combinations of them, you can clearly undo the row reduction and get back to your matrix. So all of these guys can be represented as linear combinations of the pivot rows. And all of these pivot rows are, by definition -- well, almost by definition -- linearly independent, right? Because this one has a 1 here and no other row has a non-zero entry in that column, this guy can definitely not be represented as a linear combination of the others.

So why am I going through this whole exercise? Well, we started off saying we wanted a basis for the row space: some minimum set of linearly independent vectors that spans everything these guys can span. If all of these guys can be represented as linear combinations of these pivot rows in reduced row echelon form, and those pivot rows are all linearly independent, then they are a reasonable basis. So these pivot rows right here -- that's one of them, this is the second one, this is the third one; maybe they're the only three in my particular example -- would be a suitable basis for the row space. So let me write this down: the pivot rows in the reduced row echelon form of A are a basis for the row space of A. And the row space of A is the same thing as the column space of A transpose; we've seen that multiple times.

Now, if we want to know the dimension of that column space, we just count the number of pivot rows we have. The dimension of your row space, which is the same thing as the column space of A transpose, is going to be the number of pivot rows you have in reduced row echelon form. Or, even simpler, the number of pivot entries you have, because every pivot entry has a pivot row. So we can write that the rank of A transpose is equal to the number of pivot entries in the reduced row echelon form of A, because every pivot entry corresponds to a pivot row, those pivot rows are a suitable basis for the entire row space, and every row can be made with a linear combination of them -- so anything the original rows can construct, the pivot rows can construct. Fair enough.
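As a concrete example of the argument so far (the matrix here is made up; it is not the one Sal draws in the video):

$$
A=\begin{bmatrix}1&2&3\\ 2&4&6\\ 1&1&1\end{bmatrix}
\quad\longrightarrow\quad
\operatorname{rref}(A)=\begin{bmatrix}1&0&-1\\ 0&1&2\\ 0&0&0\end{bmatrix}
$$

The two pivot rows, (1, 0, -1) and (0, 1, 2), are a basis for the row space of A, which is the column space of A transpose, so rank(A transpose) = 2.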
Now, what is the rank of A? So far we have been dealing with the rank of A transpose. The rank of A is equal to the dimension of the column space of A. Or, you could say, it's the number of vectors in a basis for the column space of A. So if we take that same matrix A that we used above, and instead we write it as a bunch of column vectors -- c1, c2, all the way to cn -- we have n columns right there. The column space is essentially the subspace that's spanned by all of these characters right here, spanned by each of these column vectors. So the column space of A is equal to the span of c1, c2, all the way to cn. That's the definition of it.

But we want to know the number of basis vectors, and we've seen before -- we've done this multiple times -- what suitable basis vectors could be. If you put this into reduced row echelon form, you have some pivot entries and their corresponding pivot columns, just like that. Maybe that's one, and then maybe this one isn't one, and then this one is. So you have a certain number of pivot columns. Let me do it with another color right here. When you put A into reduced row echelon form, we learned that the basis columns -- the columns of A that form a basis for your column space -- are the columns that correspond to the pivot columns. So the first column here is a pivot column, so this guy could be a basis vector; the second column is, so this guy could be a basis vector; or maybe the fourth one right here, so this guy could be a basis vector.

So, in general, if you want to count the number of basis vectors -- and we don't even have to know what they are to figure out the rank, we just have to know how many there are -- you say: for every pivot column here, we have a basis vector over there. So we could just count the number of pivot columns. But the number of pivot columns is just the number of pivot entries we have, because every pivot entry gets its own column. So we can say that the rank of A is equal to the number of pivot entries in the reduced row echelon form of A. And, as you can see very clearly, that's the exact same thing that we deduced was equal to the rank of A transpose -- the dimension of the column space of A transpose, or the dimension of the row space of A. So we can now write our conclusion: the rank of A is definitely the same thing as the rank of A transpose.
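As a quick numerical check of this conclusion (a sketch with randomly generated matrices, not something from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

# Check rank(A) == rank(A^T) on a few random integer matrices of various shapes.
for (m, n) in [(3, 5), (4, 4), (6, 2)]:
    A = rng.integers(-3, 4, size=(m, n))
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)

print("rank(A) == rank(A.T) held for every sample")
```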