
### Course: Multivariable calculus > Unit 2

Lesson 13: Jacobian

# Computing a Jacobian matrix

This video finishes the introduction of the Jacobian matrix by working out the computations for the example shown in the last video.

## Want to join the conversation?

• So what exactly is the use of the Jacobian matrix? I understand the local linearity, but how is this useful?

For some reason, I think this relates to optimisation problems on curved surfaces, is my intuition right?
• I know this is really late but for anyone else who's curious:

The Jacobian is probably most often used when doing a variable change in an integral, for example, when switching from (x, y) Cartesian coordinates to (r, theta) polar coordinates.

The reason this is important is that when you change variables like this, small areas scale by a certain factor, and that factor is exactly the absolute value of the determinant of the Jacobian matrix. For example, the determinant of the Jacobian matrix for polar coordinates is exactly r, so

Integrate e^(x^2+y^2) over all of R^2

would turn into

Integrate r*e^(r^2) over 0 <= r < infinity, 0 <= theta < 2*pi

Notice that not only was x^2+y^2 replaced by r^2 in the exponent, but the Jacobian factor r also had to be multiplied into the integrand, in order to account for this "scaling" of area when changing variables like this.
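If you want to check this yourself, here is a rough Python sketch (the function names are my own) that estimates the Jacobian determinant of the polar-coordinate map with finite differences and confirms it comes out to r:

```python
import math

def polar_to_cartesian(r, theta):
    # The change-of-variables map (r, theta) -> (x, y) = (r cos theta, r sin theta).
    return (r * math.cos(theta), r * math.sin(theta))

def jacobian_det(f, u, v, h=1e-6):
    # Central-difference estimate of the 2x2 Jacobian determinant of f at (u, v).
    df_du = [(a - b) / (2 * h) for a, b in zip(f(u + h, v), f(u - h, v))]
    df_dv = [(a - b) / (2 * h) for a, b in zip(f(u, v + h), f(u, v - h))]
    return df_du[0] * df_dv[1] - df_du[1] * df_dv[0]

# The determinant should come out to r, matching the extra factor in the integral.
print(jacobian_det(polar_to_cartesian, 2.0, 0.7))  # ≈ 2.0, i.e. the value of r
```

Trying other values of r and theta shows the estimate always matches r, which is exactly why the factor of r appears in the polar integral.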
• I don't understand why we focus only on the local "rotation" of the transformation, and not on the "translation" (movement) across the space. In the inset animation, why do we stay centered on that original point as it moves across the 2d space?

Having watched the next video in this series, I wonder if this is because Jacobians are only useful for their determinant, and the determinant (area) doesn't care about movement...
• I didn't know the answer when I read this earlier, but now I have a thought that might help:

The Jacobian isn't telling us how the space changes when the transformation is applied to it. That is what the function f tells us. The function f tells us about the "translation" of the square. However, the Jacobian tells us how movement in the un-transformed space corresponds to movement in the transformed space. This movement is often (but not always) rotational. In order to see this movement, we must move and rotate the square.
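To see this local linearity concretely, here is a small Python sketch (assuming the video's map f(x, y) = (x + sin(y), y + sin(x)) and its point (-2, 1)) comparing the Jacobian's linear prediction of a small movement with the actual transformed result:

```python
import math

def f(x, y):
    # The transformation from the video: f(x, y) = (x + sin(y), y + sin(x)).
    return (x + math.sin(y), y + math.sin(x))

def jacobian(x, y):
    # Its Jacobian, computed by hand: each row holds the partial
    # derivatives of one component with respect to x and y.
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

p = (-2.0, 1.0)          # the point used in the video
d = (0.01, -0.005)       # a small step away from p
J = jacobian(*p)
fp = f(*p)

# Linear prediction f(p) + J d versus the true value f(p + d).
pred = (fp[0] + J[0][0] * d[0] + J[0][1] * d[1],
        fp[1] + J[1][0] * d[0] + J[1][1] * d[1])
actual = f(p[0] + d[0], p[1] + d[1])
print(pred)
print(actual)  # the two agree to several decimal places
```

The agreement gets better as the step d shrinks, which is what "the Jacobian describes movement near the point" means in practice.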
• The rightward and downward components of the vector are shown to be roughly 1 and -0.42 times the size of the vector, respectively, corresponding to the first and second partial derivatives with respect to x. But shouldn't these figures relate to dx, not to the transformed vector?
• Yes, and I guess that is what he meant when he said, "about as long as the vector itself started". He mentions the same thing later in relation to dy.
• Great video. Can you please explain why the partial derivative terms are ordered in the way they are. In linear transformations, the terms correspond to the image of the basis vectors. Is there a similar connection in the case of a Jacobian matrix?
• If you plug in (pi/2,pi/2), you get the identity matrix. Is there a special name for points that have an identity local linear transformation?
• Does the Jacobian matrix represent the transformation matrix near the point for which the Jacobian was calculated?
• Yes, it tells how nearby points are transformed. At the particular point (-2, 1) used in the video, the Jacobian matrix describes how the points near that location get transformed. More precisely, I think, the Jacobian matrix tells how the origin (0, 0) would be transformed if we applied the same linear transformation that we got by calculating the Jacobian at (-2, 1). Does that make sense?
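As a quick check of the earlier observation about (pi/2, pi/2), here is a small Python sketch (assuming the video's map is f(x, y) = (x + sin(y), y + sin(x))) that evaluates the Jacobian there and at (-2, 1):

```python
import math

def jacobian(x, y):
    # Jacobian of f(x, y) = (x + sin(y), y + sin(x)), the map from the video.
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

# At (pi/2, pi/2) both cosines vanish, so the local linear map is the identity.
J_id = jacobian(math.pi / 2, math.pi / 2)
print(J_id)   # off-diagonal entries are ~0: the identity matrix

# At (-2, 1) we get the specific matrix worked out in the video.
J = jacobian(-2.0, 1.0)
print(J)
```

A point where the Jacobian is the identity is locally "rigid": to first order, tiny movements there are carried over unchanged.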
• Could we add the green and red vectors to find the total change of our vector-valued function?
• How can the x component stay the same and its derivative be 1?
If there's no change, then why isn't the derivative 0?
• I'm afraid the logic in the question is self-defeating... if the derivative of the x-component was zero it would have changed (from 1) and so - according to the question's logic - its derivative couldn't be zero.
Take (in single-variable calculus):
f(x): y = x + a
and let a = 0 (just for simplicity in this example)
[the 'a' would disappear on differentiation anyway, being a constant... this is, in fact, essentially what the video is dealing with in:
f1 = x + sin(y)
because we are differentiating with respect to x, so the y-value - and hence sin(y) - is held constant and disappears when taking the derivative]
then, in single-variable calculus:
f' = dy/dx = 1
everywhere because it's a straight line with a constant slope of 1, so when:
x = 1, f(1) = 1 and f'(1) = 1
So there is no problem with the output of a derivative equalling the original function at a point or even everywhere (take:
y = e^x => dy/dx = e^x).
The derivative of a function is zero for every input only when the function's output is constant (e.g. f(x) = 3).
That is not the case in the video.
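A quick numerical sketch in Python makes the same point (using the video's f1 = x + sin(y); the evaluation point is my own choice):

```python
import math

def f1(x, y):
    # First component of the video's transformation: f1 = x + sin(y).
    return x + math.sin(y)

# Estimate the partial derivative of f1 with respect to x at (-2, 1)
# with a central difference; y is held fixed, just as in the video.
h, x0, y0 = 1e-6, -2.0, 1.0
df1_dx = (f1(x0 + h, y0) - f1(x0 - h, y0)) / (2 * h)
print(df1_dx)  # ≈ 1: x passes straight through, so the partial derivative is 1, not 0
```

The derivative measures how fast the output changes as x changes, and here the output tracks x one-for-one, so the answer is 1 rather than 0.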