## Algebra (all content)

### Course: Algebra (all content) > Unit 20

Lesson 9: Properties of matrix multiplication

# Defined matrix operations

Sal discusses the conditions of matrix dimensions for which addition or multiplication are defined. Created by Sal Khan.
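The conditions the video covers can be illustrated in a short NumPy sketch (not part of the lesson; the matrices here are made up for illustration): addition is defined only for matrices of identical dimensions, and the product AB is defined only when A has as many columns as B has rows.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2x3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])         # 3x2

# A @ B is defined: A's 3 columns match B's 3 rows.
# The product takes A's row count and B's column count: 2x2.
print((A @ B).shape)             # (2, 2)

# A + B is NOT defined: the shapes (2, 3) and (3, 2) differ.
try:
    A + B
except ValueError:
    print("addition undefined for mismatched shapes")
```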

## Want to join the conversation?

• We do not talk about matrix division. Rather, we multiply by inverse matrices. You can do the same thing with real numbers... In fact, my high school algebra II teacher always said that there was no such thing as division -- only multiplying by inverses.

For example, instead of writing 6 / 3 = 2, you can write 6*(1/3) = 2. So my old teacher was right. You can do everything in mathematics without division.

And this is what we do with matrices, because not all matrices have inverses, meaning you cannot "divide" by any matrix that you wish. You can only "divide" by a matrix with an inverse. So instead, we just multiply by those inverses, just like my silly example with whole numbers illustrates.
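This "multiply by the inverse instead of dividing" idea can be sketched with NumPy (the particular matrices are hypothetical examples, not from the video): when A is invertible, multiplying by A⁻¹ plays the role of "dividing" by A; when A is singular, no inverse exists and the "division" is simply undefined.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])       # invertible: determinant is 1
B = np.array([[5.0],
              [3.0]])

A_inv = np.linalg.inv(A)
X = A_inv @ B                    # solves A @ X = B, like B "divided by" A
print(X)                         # [[2.], [1.]]

# A singular matrix has no inverse, so "dividing" by it is undefined.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # determinant 0
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("no inverse exists")
```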
• Who thought of matrices first?
• OK, so as far as I understand, one can multiply 2 matrices if:
a) they both have the same dimensions (e.g., [2x3] and [2x3], [1x2] and [1x2], and so on), OR
b) the number of columns of the first matrix is equal to the number of rows of the second,
RIGHT?
If so, then how does one multiply, e.g., the following matrices: [1x3] and [1x3], or [1x2] and [1x2]?
• Well, about the addition part, I read on Wikipedia that a matrix of m rows and n columns, when added to a matrix of p rows and q columns, will form a matrix of m+p rows and n+q columns. It is also written that this is defined.
• Since order matters in matrix multiplication, does it matter because one of the matrices is considered the "processor," which processes the other one, while we might call the other one the "input"? If this makes sense, let me know which matrix we might call the processor, A or E. If you consider A to be the one that processes E, then I am OK, but if you consider E to be the processor, I am actually in trouble.
What is my trouble? Let me ask you for help.
I can't get satisfied with the rule that multiplication is defined as long as the middle two numbers are the same; I hadn't even used such tools before Khan Academy. What satisfies me and gives me confidence is understanding, and what I understand regarding this matter is the following:
The processing matrix expands, transforms, or distributes each row element of the "input matrix" into a column, and it does not matter how many rows are in these new columns; it is a new distribution. What matters most is that the processor matrix should have a number of columns equal to the number of rows in the input matrix; this makes enough processing room for each input. No more rooms are allowed and no fewer rooms are allowed; either case causes confusion, and hence the operation is not defined. But as long as the processing matrix has exactly as many columns as the input matrix has rows, the operation is defined.
And that is why I care about which matrix, A or E, you consider the "processor" matrix. If A is the processor, then AE is not defined and I am OK, because A has more processing room than required and this is not acceptable. But if you consider E to be the processing one, this raises a big question mark, because E actually could process A: it has 2 columns to process each row element of A.
• There are several ways to think of matrices, but I'm not convinced that thinking of one as "processing" or "acting on" the other is a very useful one.

You can think of matrices as transformations of space. Say we have a 2x3 matrix with 2 rows and 3 columns. The fact that there are 3 columns means the domain of the transformation is ℝ³. We interpret the matrix as a list of 3 column vectors, each of which is 2-dimensional. The matrix is sending <1, 0, 0> to the left vector, <0, 1, 0> to the middle vector, and <0, 0, 1> to the right vector. Because they're being mapped to 2D vectors, the range of the transformation is ℝ².

This is why we need the dimensions of the matrices to match up in order to multiply them; matrix multiplication is just function composition. If matrix A is a function from ℝ³ to ℝ², then whatever function (matrix) we apply after applying A had better have a domain of ℝ², or else nothing is well-defined.
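The composition view in the answer above can be sketched in NumPy (the specific matrices are made-up examples): a 2x3 matrix A maps ℝ³ to ℝ², its columns are the images of the standard basis vectors, and a matrix applied after A must accept 2-dimensional input, i.e. have 2 columns.

```python
import numpy as np

# A maps R^3 -> R^2 (2 rows, 3 columns): its columns are the images
# of the standard basis vectors <1,0,0>, <0,1,0>, <0,0,1>.
A = np.array([[1, 0, 2],
              [0, 1, 3]])        # 2x3

e1 = np.array([1, 0, 0])
print(A @ e1)                    # [1 0], the first column of A

# Applying F after A requires F's domain to be R^2,
# i.e. F must have 2 columns.
F = np.array([[1, 1],
              [0, 2],
              [5, 0]])           # 3x2: maps R^2 -> R^3

# Matrix multiplication is function composition:
# F applied after A is the single matrix F @ A.
v = np.array([1, 1, 1])
print(F @ (A @ v))               # [ 7  8 15]
print((F @ A) @ v)               # same result
```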
• How can EA be definable, but not AE? I know why (the number of columns of the first matrix has to equal the number of rows of the second), but why does that matter?
• Does this mean that the commutative property of multiplication does not work for matrices?
• How do you find out whether subtraction of matrices is defined?
• Is exponentiation of matrices a defined operation? Also, can matrices themselves be exponents of a number?
• Can we transpose the matrix E = [ -1 2 ] into E = [ -1 / 2 ] (by which I mean the 2 is below the -1)?
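A few of the questions above can be checked directly in a NumPy sketch (the matrices are arbitrary examples): multiplication is generally not commutative even when both products are defined; subtraction follows the same same-shape rule as addition; a square matrix can be raised to a whole-number power by repeated multiplication; and transposing a 1x2 row gives a 2x1 column.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Multiplication is not commutative: AB and BA generally differ,
# even when both products are defined.
print(np.array_equal(A @ B, B @ A))      # False

# Subtraction follows the same rule as addition: shapes must match.
print(A - B)                             # defined, both are 2x2

# A square matrix raised to a whole-number power is just
# repeated multiplication by itself.
print(np.linalg.matrix_power(A, 2))      # same as A @ A

# Transposing the 1x2 row [-1 2] gives the 2x1 column with -1 above 2.
E = np.array([[-1, 2]])
print(E.T)                               # [[-1], [2]]
```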