
### Course: Linear algebra > Unit 2

Lesson 7: Transpose of a matrix

# Showing that A-transpose x A is invertible

Showing that (transpose of A)(A) is invertible if A has linearly independent columns. Created by Sal Khan.
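The claim can be sanity-checked numerically. Below is a minimal Python sketch (not from the video; the 3x2 matrix and the helper functions are illustrative assumptions) showing that for a matrix A with linearly independent columns, A^T A is a square matrix with nonzero determinant, hence invertible.

```python
# Sketch (hypothetical 3x2 example): if the columns of A are linearly
# independent, A^T A is a square k x k matrix with nonzero determinant.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def det2(M):  # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 0],
     [0, 1],
     [1, 1]]          # 3x2, columns are linearly independent

AtA = matmul(transpose(A), A)   # 2x2
print(AtA)            # [[2, 1], [1, 2]]
print(det2(AtA))      # 3 != 0, so A^T A is invertible
```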

## Want to join the conversation?

• Although not explicitly stated, it's obvious that the n (in n x k) must be >= k, isn't it? I say this because if there were more columns than rows, one couldn't make all the column vectors linearly independent.
• Astute observation. The number of rows must indeed be greater than or equal to the number of columns for the columns to be linearly independent; otherwise you'd get some non-pivot columns.
• If a square matrix needs all of its columns/rows to be linearly independent, and its determinant to be nonzero, in order to be invertible, is the determinant just a kind of measure of the linear independence of the rows/columns of a matrix?
• Yes, it is. If the determinant is nonzero, then the rows and columns are linearly independent; if the determinant is zero, then they are linearly dependent.
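A quick numerical illustration of that determinant test (a hypothetical 2x2 example, not from the thread): making one column a multiple of another drives the determinant to 0, while independent columns give a nonzero determinant.

```python
# Sketch: the second column of `dependent` is 2x the first, so its
# columns are linearly dependent and its determinant is 0; with
# independent columns the determinant is nonzero.

def det2(M):  # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

dependent   = [[1, 2],
               [3, 6]]   # col2 = 2 * col1
independent = [[1, 2],
               [3, 4]]

print(det2(dependent))    # 0
print(det2(independent))  # -2
```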
• At one point in the video, Sal mentions the transpose of the "reverse product". Is "reverse product" an actual term used in linear algebra, or did Sal just make it up on the spot?
• So does this only work if you multiply a matrix by its transpose in that order or can you switch them around?

Also, if you try this with a matrix that doesn't have linearly independent rows then does that mean you know for sure that the product won't be invertible?
• Order matters here, and in general it's only true for both products if A is a square matrix. Because A x A^T ≠ A^T x A, we can't conclude that A x A^T is invertible just because A has linearly independent columns. If you follow the same process for A x A^T, the argument requires the rows of A (the columns of A^T) to be linearly independent instead, so you won't end up at the same conclusion. And if A does not have linearly independent columns, then A^T x A is definitely not invertible: any nonzero x in Null(A) gives A^T A x = A^T 0 = 0.
(1 vote)
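To see concretely why the order matters, here is a small Python sketch (a hypothetical example, not from the video, with hand-rolled helpers): a tall 3x2 matrix A with independent columns gives an invertible A^T A but a singular A A^T.

```python
# Sketch: for a tall 3x2 matrix A with independent columns, A^T A (2x2)
# is invertible but A A^T (3x3) has rank 2 and determinant 0.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):  # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 0],
     [0, 1],
     [1, 1]]   # 3x2: columns independent, rows NOT independent

print(det2(matmul(transpose(A), A)))  # 3 -> A^T A invertible
print(det3(matmul(A, transpose(A))))  # 0 -> A A^T singular
```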
• Why must Null(A) contain only the 0 vector? Can't it contain some perpendicular vector, especially when Rank(A) < n?
• As I understand it, the column vectors must be linearly independent. Then you are solving the equation Ax = 0, so x must be a vector with k elements.

If the rank is less than n as you suggest, in other words k < n, then you can row-reduce the matrix to a form similar to an identity matrix with extra 0s below each 1. You also know that the vector x you are multiplying has k elements. Hopefully you can see that the only vector x that produces the 0 vector is the 0 vector itself; if not, I can show it.

If n = k, it's a similar argument, except you will literally get an identity matrix.

If k > n, so more columns than rows, it is impossible for the columns to be linearly independent: there will not be enough pivots to fill each column.

To deal with the case you specifically raise, let's use a 3x2 matrix. Linear independence means it will eventually reduce to [<1,0,0>, <0,1,0>] (hopefully it makes sense what that should look like). Your proposed solution is to take a dot product with a perpendicular vector, which here would be <0,0,1>. But then we would be multiplying a 3x2 by a 3x1, which cannot be done because the dimensions don't match.

If you have learned about left null spaces (the null space of the transpose of a matrix), that's what <0,0,1> is here; or <0,0,a>, where a is any number. The left null space can also be described as the set of vectors x such that x^T, multiplied by A on the left, gives zero: x^T * A = 0.

Let me know if that didn't make sense.
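The Null(A) = {0} claim can also be spot-checked by brute force. Here is a Python sketch (the 3x2 matrix and the small integer search grid are illustrative assumptions, not from the thread):

```python
# Sketch: brute-force check over a small integer grid that Ax = 0 holds
# only for x = 0 when the columns of A are linearly independent.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1, 0],
     [0, 1],
     [1, 1]]   # 3x2, columns linearly independent

solutions = [(x1, x2)
             for x1 in range(-3, 4)
             for x2 in range(-3, 4)
             if matvec(A, (x1, x2)) == [0, 0, 0]]
print(solutions)  # [(0, 0)] -- only the zero vector
```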
• What happens if the column vectors of A are not L.I.?
Is (A^T x A) still invertible?
What if, instead of taking the product (A^T x A), I take the product (A x A^T)?

By the way, thanks a lot Sal, I've learned so much from your videos.

It's tempting to say that if (A^T)A is invertible, then so is A(A^T), because

A(A^T) = ((A^T)^T)(A^T) = (B^T)B, with B = A^T, which is also the transpose of a matrix times the matrix.

But (B^T)B is invertible only when the columns of B, i.e. the rows of A, are linearly independent, so this does not follow automatically; A(A^T) is invertible exactly when A has linearly independent rows.
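One way to test the step from (A^T)A to A(A^T) is with a wide matrix (a hypothetical 2x3 example in Python; the matrix and hand-rolled helpers are illustrative assumptions): with B = A^T, (B^T)B = A A^T is invertible exactly when the columns of B, i.e. the rows of A, are independent.

```python
# Sketch: a 2x3 matrix A with independent rows but dependent columns.
# Then A A^T (2x2) is invertible while A^T A (3x3) is singular.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):  # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 0, 1],
     [0, 1, 1]]   # 2x3: rows independent, columns NOT independent

print(det2(matmul(A, transpose(A))))  # 3 -> A A^T invertible
print(det3(matmul(transpose(A), A))) # 0 -> A^T A singular
```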
• In this video Sal mentions that the dot product of the transpose of a vector with itself is equivalent to the dot product of the vector with itself, i.e., y^T . y = y . y. This is definitely intuitive, but is there a formal proof, if at all, given in any other video?
• Using the rules for matrix multiplication, what's the product of the matrix A = [a1 a2 ... an] (a single row vector) and the matrix A^T (a single column vector)?

For a = (a1, a2, ..., an), what's a dot a?

These calculations are just direct applications of the definitions of matrix multiplication and the dot product, which, as definitions, are not things that need proving.

If you're not sure how to calculate these, search "khan academy matrix multiplication" and "khan academy dot product" at DuckDuckGo for the videos.
(1 vote)
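The y^T . y = y . y identity can be checked mechanically by treating y as a 3x1 matrix and y^T as a 1x3 matrix (a Python sketch with a hypothetical 3-vector; the helper is illustrative):

```python
# Sketch: the 1x1 matrix product (y^T)(y) contains exactly the dot
# product y . y, by the definition of matrix multiplication.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

y_col = [[1], [2], [3]]        # y as a 3x1 column vector
y_row = [[1, 2, 3]]            # y^T, a 1x3 row vector

print(matmul(y_row, y_col))           # [[14]], the 1x1 matrix [y . y]
print(sum(a * a for a in [1, 2, 3]))  # 14, the dot product y . y
```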
• Does (A^T A)^-1 represent any particular mathematical property or definition, maybe covariance or dispersion? What do the individual elements mean?
Thanks
• I don't know. But since for any matrix B,

rank(B) = rank(B^T) = dim(row space of B),

both the columns and rows of "B = (A^T)A" are linearly independent sets, and so

both rref(B) and rref(B^T) are identity matrices, and the systems Bx = b and (B^T)x = c each have exactly one solution, with no free variables.
(1 vote)
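The rank(B) = rank(B^T) fact used above can be verified with a small Gaussian-elimination rank function (a Python sketch over exact rationals; the non-square 2x3 matrix B is an illustrative assumption, not the (A^T)A from the thread):

```python
# Sketch: rank via Gaussian elimination over the rationals; row rank
# equals column rank, so rank(B) == rank(B^T).
from fractions import Fraction

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                    # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r] # swap pivot row into place
        for i in range(rows):
            if i != r and M[i][c] != 0: # eliminate the rest of the column
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r                            # number of pivots found

def transpose(M):
    return [list(row) for row in zip(*M)]

B = [[1, 2, 0],
     [0, 1, 1]]   # 2x3 with two independent rows

print(rank(B), rank(transpose(B)))  # 2 2 -- row rank equals column rank
```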
• Heyyy, I just thought: in order for the column vectors to be linearly independent (L.I.) and for the row vectors also to be L.I., the number of columns and the number of rows have to at least be equal to each other, isn't it? I mean, you need at least as many equations as variables?
(1 vote)
• That's correct. The rows being independent, the columns being independent, and the matrix being invertible are all equivalent properties, and only square matrices are invertible.