# Alternate basis transformation matrix example part 2

Showing that the transformation matrix with respect to basis B actually works. Brief point on why someone would want to operate in a different basis to begin with. Created by Sal Khan.

## Want to join the conversation?

• I am new to linear algebra. Sal says several times that it is useful in computer science, especially in handling graphics: representing 3 dimensions on a 2-dimensional screen. But isn't this what Renaissance painters did when they used perspective? Linear algebra was developed in the 19th century. My question is: were Renaissance painters using a primitive or proto form of linear algebra that is simpler than full-blown LA?
• Renaissance painters learned about and portrayed perspective by carefully observing the world around them and replicating what they saw when they stood in rooms or looked at buildings. Computers don't have this observational power, nor do they have senses with which to process the world, so they need mathematical algorithms to function. Linear algebra lends numerical rigor to principles that Renaissance painters first began to understand. The concepts are very similar; it's just that painters speak in the language of shading and color, whereas computers speak in the language of numbers and matrices. So while the concept of representing 3 dimensions on a 2-dimensional screen is hundreds of years old, the method of doing it with matrices is fairly young.
• At one point Sal says "you want to pick the right basis". I wonder how to identify the right basis in order to get the optimization he refers to with computational processing. Can you provide some examples, please?
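One classic answer (not worked out in the video, so take this as an illustrative sketch with made-up numbers) is the eigenbasis: if you choose the basis C whose columns are eigenvectors of A, then D = C⁻¹AC comes out diagonal, and repeated application of the transformation reduces to cheap scalar powers.

```python
import numpy as np

# A hypothetical transformation matrix in the standard basis.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Its eigenvectors (columns of C) form the "right basis" here.
eigvals, C = np.linalg.eig(A)

# D = C^(-1) A C is the same transformation expressed in the eigenbasis.
D = np.linalg.inv(C) @ A @ C

# In this basis D is diagonal: applying the transformation many times
# is just raising the diagonal entries to a power.
print(np.round(D, 10))
```

The speedup is the point: A¹⁰⁰ costs 100 matrix multiplications done naively, but C D¹⁰⁰ C⁻¹ only needs the diagonal entries of D raised to the 100th power.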
• I am not sure if this was asked before or if it is executed in another video but how does this logic apply to when you are dealing with two non standard bases?
• The logic is exactly the same, but you must be very careful to identify which basis you are working in. For example, let's suppose you have two non-standard bases:
``𝗮 = { a⃗₁ , a⃗₂ , ⋯ , a⃗ᵤ }``
``𝗲 = { e⃗₁ , e⃗₂ , ⋯ , e⃗ᵤ }``

If you want to create the change of basis matrix that goes from basis `𝗮` into basis `𝗲`, you would need to construct a matrix whose columns are the vectors of the basis `𝗮`, expressed in the basis `𝗲`:
``𝗗ₐ₋ₑ = [  [a⃗₁]ₑ  [a⃗₂]ₑ  ⋯  [a⃗ᵤ]ₑ  ]``

(Of course, in order to get to this, you would first need to create the change of basis matrices between the standard basis and each of your bases, so that you can express the vectors in each of them.)
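The recipe in this answer can be sketched numerically. Below, the two bases and the test coordinates are hypothetical; each basis matrix has the basis vectors as its columns, expressed in the standard basis, exactly as the parenthetical above describes:

```python
import numpy as np

# Two hypothetical non-standard bases of R^2; columns are the basis
# vectors written in standard coordinates.
A_basis = np.array([[1.0, 1.0],
                    [0.0, 1.0]])   # basis a: columns a1, a2
E_basis = np.array([[2.0, 0.0],
                    [1.0, 1.0]])   # basis e: columns e1, e2

# [a_i]_e = E_basis^(-1) @ a_i, so the change of basis matrix that
# takes a-coordinates to e-coordinates is:
D_a_to_e = np.linalg.inv(E_basis) @ A_basis

# Check with the coordinates [2, 3] relative to basis a:
coords_a = np.array([2.0, 3.0])
x_standard = A_basis @ coords_a     # the actual vector, standard coords
coords_e = D_a_to_e @ coords_a      # the same vector, e-coordinates
assert np.allclose(E_basis @ coords_e, x_standard)
```

The assertion confirms that both coordinate lists describe the same underlying vector, which is the whole job of a change of basis matrix.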
• Thanks, Sal, for a very clear tutorial. I have one question here. What is the difference between a transformation matrix and a change of basis matrix? To me they look similar.
• The change of basis matrix is a kind of transformation matrix. Specifically, it maps a vector's coordinates in one basis to its coordinates in another basis; the underlying vector itself doesn't change. Does that make sense?
• If D maps a vector x (expressed with respect to some basis) onto an image in the co-domain that is also expressed with respect to that basis, can some transformation matrix map directly from x (in the standard basis) to the image in the co-domain expressed with respect to B?
• Yes. You can map directly from x to [T(x)]b using the matrix given by C^(-1)A or, equivalently, DC^(-1). You can also map directly from [x]b to T(x) with the matrix given by AC or, equivalently, CD. This also means that CD = AC and DC^(-1) = C^(-1)A (both follow from D = C^(-1)AC).
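These identities are easy to check numerically. In this sketch the entries of C and A are made up; C's columns play the role of the basis B, and D = C⁻¹AC is the transformation with respect to B, as in the video:

```python
import numpy as np

# Hypothetical setup: columns of C are the basis B (standard coords),
# A is the transformation matrix in the standard basis.
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
C_inv = np.linalg.inv(C)
D = C_inv @ A @ C            # the transformation with respect to B

x = np.array([5.0, 6.0])     # a vector in standard coordinates
x_B = C_inv @ x              # the same vector in B-coordinates

# x (standard) -> [T(x)]_B: C^(-1)A and DC^(-1) agree.
assert np.allclose(C_inv @ A @ x, D @ C_inv @ x)

# [x]_B -> T(x) (standard): AC and CD agree.
assert np.allclose(A @ C @ x_B, C @ D @ x_B)
```

Both assertions pass for any invertible C, since each pair of products is equal as matrices, not just on this particular x.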
• Does any of this relate to Tensors?
• Loosely. Tensors are a more general mathematical object than vectors. Vectors are the simplest kind of tensor.
• An extra note: we can even use a shortcut to go from x in standard coordinates straight to [T(x)]B, by multiplying x first by the inverse of C, then by D. So [T(x)]B = DC^(-1)x (provided, of course, that we know D too).