Sal solves that matrix equation using the inverse of the coefficient matrix. Created by Sal Khan.
- Does this extend to 3-equation, 3-variable problems? Like, would it be possible to solve ax+by+cz=d, ex+fy+gz=h, and ix+jy+kz=l for x, y, and z?(32 votes)
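Yes, the same idea extends to any n-by-n system Ax = b, as long as A is invertible. Here is a minimal sketch in plain Python (no libraries; the function name `solve3` and the example system are my own) that solves a 3-variable system by Gaussian elimination, which produces the same answer as multiplying by A inverse:

```python
def solve3(A, b):
    """Solve a 3x3 system A x = b by Gaussian elimination
    with partial pivoting (works whenever A is invertible)."""
    n = 3
    # Build the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: pick the row with the largest entry in this column.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        if M[col][col] == 0:
            raise ValueError("matrix is singular (no inverse)")
        # Eliminate this column from the rows below.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitute from the last row up.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# x + y + z = 6,  2y + 5z = -4,  2x + 5y - z = 27
A = [[1, 1, 1], [0, 2, 5], [2, 5, -1]]
b = [6, -4, 27]
print(solve3(A, b))  # [5.0, 3.0, -2.0]
```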
- How do you find the inverse of A if it is a 2x3 matrix?(7 votes)
- The inverse can only exist if the matrix is n×n (square), and even then it isn't guaranteed: some square matrices have no inverse. To find out whether a matrix has an inverse, calculate its determinant; the inverse exists exactly when the determinant is nonzero.(25 votes)
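To make that determinant test concrete, here is a small sketch in plain Python (the helper names are my own) for the 2x2 case: a square matrix has an inverse exactly when its determinant is nonzero.

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

def is_invertible(m):
    """A square matrix is invertible iff its determinant is nonzero."""
    return det2(m) != 0

print(det2([[2, -5], [-2, 4]]))         # 2*4 - (-5)*(-2) = -2, so invertible
print(is_invertible([[1, 2], [2, 4]]))  # det = 1*4 - 2*2 = 0: no inverse
```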
- Good day all,
How do you know that A has an inverse, if I am following correctly?(10 votes)
- Good day to you as well! Here is a good website; I hope it helps! The part you are looking for is under the red letters "Does the Inverse Exist?".
- So I'm taking a course through aleks.com for Algebra 2, and part of the problems are about matrices. I've been supplementing the written explanations from ALEKS with these videos and practice exercises from Khan. One of the topics I'm trying to learn on ALEKS right now is Cramer's rule for solving a 2x2 system of linear equations, and I'm wondering if there is a video explaining that method here. This video shows a way to solve a 2x2 linear system, but I don't think it's Cramer's rule. I searched for Cramer's rule but did not find an actual video. Thanks(7 votes)
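For reference, Cramer's rule for a 2x2 system ax + by = e, cx + dy = f replaces one column of the coefficient matrix with the right-hand side and divides determinants. A minimal sketch in plain Python (the function name and example system are my own):

```python
def cramer2(a, b, c, d, e, f):
    """Solve  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule."""
    det = a * d - b * c          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution")
    x = (e * d - b * f) / det    # column 1 replaced by (e, f)
    y = (a * f - e * c) / det    # column 2 replaced by (e, f)
    return x, y

# 2x + 3y = 8 and x - y = -1  ->  x = 1, y = 2
print(cramer2(2, 3, 1, -1, 8, -1))  # (1.0, 2.0)
```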
- How would you do AX - BX = C? (Note: all are matrices.)(4 votes)
- AX - BX = C
(A - B)X = C
(A - B)^(-1)(A - B)X = (A - B)^(-1)C
IX = (A - B)^(-1)C
X = (A - B)^(-1)C
This is our answer (assuming we can calculate (A - B)^(-1)).(7 votes)
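The algebra above can be checked numerically. Here is a minimal sketch in plain Python for the 2x2 case (the example matrices are my own, and C is treated as a column vector for simplicity):

```python
def sub(A, B):
    """Entrywise difference of two 2x2 matrices."""
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix: (1 / det) times the adjoint."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    """Multiply a 2x2 matrix by a column vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[3, 2], [2, 3]]
B = [[1, 1], [1, 2]]
C = [3, 2]

X = matvec(inv2(sub(A, B)), C)   # X = (A - B)^(-1) C
print(X)  # [1.0, 1.0]
```

Checking the result: (A - B) X = [[2, 1], [1, 1]] times [1, 1] gives back [3, 2] = C.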
- Thanks a million for those videos.
I studied some math in high school 20 years ago. I forgot most of it, but now I'm learning it again and it's a whole new world.(7 votes)
- Isn't A times A inverse the same thing as A inverse times A?(3 votes)
- Yes: matrix A multiplied by its inverse A^(-1) (if it has one; only a square matrix can have an inverse) always gives the identity matrix, in either order. Both AA^(-1) and A^(-1)A equal I, so they are the same.
However, matrix multiplication in general is not commutative. That means that AB is usually not the same as BA.(4 votes)
- In which part of the course is it mentioned that "A inverse is equal to one over the determinant of A"??(3 votes)
- He explains that A inverse = 1/det(A) * adj(A) in this video: https://www.khanacademy.org/math/precalculus/x9e81a4f98389efdf:matrices/x9e81a4f98389efdf:practice-finding-inverses-of-2x2-matrices/v/inverse-of-a-2x2-matrix(1 vote)
- It appears that the inverse of a matrix is such that A^(-1) * A = I. However, as we know, matrix multiplication is not commutative. Does this mean that we would get a different inverse matrix if we defined it to be such that A * A^(-1) = I?
Thanks in advance.(1 vote)
- Matrix multiplication is not commutative in general. There exist pairs of matrices that commute, we just can't assume that any given pair will commute.
If A is a square matrix, and you find another matrix B such that AB=I, then you can prove that BA=I as well, and that B=A^(-1) is the only matrix with this property. A matrix has only one inverse.(2 votes)
- What's a column vector?(1 vote)
- A vector that's written with the entries stacked one above another, as in
(3)
(5)
(2)
as opposed to a row vector, which is written <3, 5, 2>.
Equivalently, a column vector is an n×1 matrix.(2 votes)
We saw that we could take a system of two equations with two unknowns and represent it as a matrix equation, where the matrix A holds the coefficients on the left-hand side, the column vector X holds our two unknown variables, S and T, and the column vector B represents the right-hand side. That gives us the equation: the matrix A times the column vector X is equal to the column vector B.

What was interesting about that is we saw that if A is invertible, we can multiply both the left and the right-hand sides of the equation by A inverse. We have to multiply on the left of each side, because with matrix multiplication, order matters. If we do that, we can solve for the unknown column vector X. Once we know the column vector X, we know S and T, and we've essentially solved the system of equations.

Now let's actually do that. Let's figure out what A inverse is and multiply it by the column vector B to find the column vector X, and with it S and T.

A inverse is equal to one over the determinant of A times what is sometimes called the adjoint of A. The determinant of A for this two-by-two is two times four minus negative two times negative five. That's eight minus positive 10, which is negative two. The adjoint, at least for a two-by-two matrix, comes from swapping the top-left and bottom-right entries and negating the other two. So the two and the four trade places, the negative two becomes a positive two, and the negative five becomes a positive five. If all of this looks completely unfamiliar to you, you might want to review the tutorial on inverting matrices, because that's all I'm doing here.

So A inverse is equal to: negative 1/2 times four is negative two, negative 1/2 times five is negative 2.5, and negative 1/2 times two is negative one for each of the two remaining entries. That's A inverse.

Now let's multiply A inverse times our column vector, seven, negative six. We've had a lot of practice multiplying matrices. The first entry is negative two times seven, which is negative 14, plus negative 2.5 times negative six, which is positive 15; that gives one. The second entry is negative one times seven, which is negative seven, plus negative one times negative six, which is positive six; that gives negative one.

So the product A inverse B, which is the same thing as the column vector X, is equal to (we deserve a little bit of a drum roll now) the column vector one, negative one. We have just shown that X is equal to one, negative one, or that the column vector with entries S and T is equal to one, negative one, which is another way of saying that S is equal to one and T is equal to negative one. I know what you're saying.
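The computation in this walkthrough can be replayed in a few lines of plain Python. The coefficient matrix A = [[2, -5], [-2, 4]] is read off from the determinant and adjoint steps above, so treat it as a reconstruction rather than something stated outright in the video:

```python
def inv2(M):
    """Inverse of a 2x2 matrix: (1 / det) times the adjoint."""
    (a, b), (c, d) = M
    det = a * d - b * c          # 2*4 - (-5)*(-2) = -2
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, -5], [-2, 4]]           # reconstructed from the video's steps
b = [7, -6]

Ainv = inv2(A)
print(Ainv)   # [[-2.0, -2.5], [-1.0, -1.0]]

# X = A^(-1) b, entry by entry, just like in the video
X = [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
     Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]
print(X)      # [1.0, -1.0]  ->  S = 1, T = -1
```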
I said this in the last video and I'll say it again in this video. You're thinking, "Well, it was so much easier to just solve this system directly using elimination or substitution." I agree with you, but this is a useful technique, because in computational problems there are situations where the left-hand side of the system stays the same but there are many, many different values for the right-hand side. In that case it can be easier to compute the inverse once and keep multiplying that inverse by the different right-hand sides.

You're probably familiar with graphics processors and graphics cards on computers. What these really are is hardware special-purposed for very fast matrix multiplication, because when you're doing graphics processing, modeling things in three dimensions and applying all those transformations, you're really just doing a lot of matrix multiplications very fast, in real time, so that to the user playing the game, or whatever they're doing, it feels like they're in some type of 3D, real-time reality.

Anyway, I just wanted to point that out. If I saw this problem at random, my instinct would be to solve it with elimination, but the ability to think of it as a matrix equation is a very useful concept, and not just in computation. As you go into higher-level sciences, especially physics, you will see a lot of matrix-vector equations like this that speak in generalities, and it's really important to think about what they actually represent.
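The point about reusing the inverse can be sketched like this in plain Python (the extra right-hand sides are made up for illustration): invert A once, then apply it to as many right-hand sides as you like.

```python
def inv2(M):
    """Inverse of a 2x2 matrix: (1 / det) times the adjoint."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    """Multiply a 2x2 matrix by a column vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[2, -5], [-2, 4]]
Ainv = inv2(A)                   # computed once...

# ...then reused for many different right-hand sides
for b in ([7, -6], [1, 0], [0, 1], [-3, 4]):
    print(b, "->", matvec(Ainv, b))
```

This is the same pattern serious linear-algebra libraries follow, except that in practice they reuse a factorization of A rather than an explicit inverse, which is cheaper and more numerically stable.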