

# The Jacobian matrix

An introduction to how the Jacobian matrix represents what a multivariable function looks like locally, as a linear transformation.

## Want to join the conversation?

• Is the Jacobian matrix an extension of the gradient?
• I've always seen it that way personally.

If you take an N×3 matrix [ u v w ], where u, v, and w are N-dimensional column vectors representing the new basis vectors in the output space, then the Jacobian is similarly an N×3 matrix [ ∂f/∂x ∂f/∂y ∂f/∂z ], where ∂f/∂x is the column vector [∂f1/∂x ; ∂f2/∂x ; ... ; ∂fN/∂x], and likewise for ∂f/∂y and ∂f/∂z. Here f is a function from R³ to R^N.

If you take a scalar-valued function (say g from R³ to R¹), then [ ∂g/∂x ∂g/∂y ∂g/∂z ] is your gradient as a row vector! The gradient is usually written as a column vector, though, so be careful. There is probably some explanation as to why, but I don't know it.
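To make the column-by-column picture concrete, here is a minimal Python sketch (my own, not from the video) that estimates a Jacobian numerically with central differences, one column per input variable. The example functions `f` and `g` are assumptions chosen for illustration; the scalar-valued `g` yields a 1×2 Jacobian, i.e. the gradient as a row vector:

```python
import math

def jacobian(f, x, h=1e-6):
    """Estimate the m x n Jacobian of f at x (as a list of rows),
    building it one column per input variable via central differences."""
    n = len(x)
    m = len(f(x))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):                     # column j = d f / d x_j
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# An assumed example map from R^2 to R^2.
f = lambda v: [v[0] + math.sin(v[1]), v[1] + math.sin(v[0])]
print(jacobian(f, [0.0, 0.0]))    # close to [[1, 1], [1, 1]]

# A scalar-valued g: R^2 -> R^1 gives a 1 x 2 Jacobian:
# the gradient of g, written as a row vector.
g = lambda v: [v[0] ** 2 + v[1]]
print(jacobian(g, [3.0, 0.0]))    # close to [[6, 1]]
```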
• Could anyone please explain to me what Sal means by the input space and the output space?
• Basically, you can think of the "input space" as all the possible vectors that could be used as an input to the function f, and all the possible vectors that could result as making up the "output space". So for f(x) = y, all the possible x vectors make up the input space and all the possible y vectors make up the output space.

If you want to learn more about what he's talking about then check out the videos in the Linear Algebra section--Sal does a great job explaining concepts like vector fields and subspaces and all that, which Grant (the guy who made this video--not Sal) doesn't really cover in these videos.
• Is it safe to say that the Jacobian matrix tells you how much the basis vectors transformed from the input space to the output space?
• What is an example of a transformation that does not have local linearity?
• Local linearity is just another term for "differentiability", but it emphasizes the geometric perspective described in the video (in the same way that "transformation" means "function", but emphasizes this kind of perspective).

So if you simply want an example of a vector function that's not differentiable everywhere, F(x, y) = (|x|, |y|) would do the trick.
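As a quick numerical check (my own sketch, not from the video): the one-sided difference quotients of |x| at 0 disagree, so no single linear map can approximate F near the origin, which is exactly the failure of local linearity:

```python
# One-sided difference quotients of abs(x) at x = 0.
def right_quotient(h):
    return (abs(0 + h) - abs(0)) / h       # -> +1 as h -> 0 from the right

def left_quotient(h):
    return (abs(0 - h) - abs(0)) / (-h)    # -> -1 as h -> 0 from the right

for h in [0.1, 0.01, 0.001]:
    print(h, right_quotient(h), left_quotient(h))
# The two limits are +1 and -1: no single slope works at 0, so
# F(x, y) = (|x|, |y|) has no Jacobian at the origin.
```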
• In the Jacobian matrix, if we replace the first derivative by the 2nd, 3rd, or even higher-order derivatives, would that not make it an even more accurate local representation? If so, why are higher-order derivatives not used in the Jacobian matrix?
• What does Sal mean when he says that by dividing $\partial f_1$ by $\partial x$, "it scales up to be a normal vector"?

Both are infinitesimally small quantities, but since they are of similar sizes, their ratio is a constant? Am I right in this line of thinking?

Also, he says the ratio does not shrink when we zoom in further and further, which seems to imply that the numerator and denominator would shrink if we changed the scale of the graph in order to zoom in. I don't understand that: if we zoomed in, wouldn't the small changes $\partial f_1$ and $\partial x$ appear to grow larger?
• The way I understood what he was saying: ∂f1/∂x is the ratio of the change in f1 to the change in x; it is the factor by which the x component in the input space was scaled to get the new x component (f1) in the output space. As you can see in the video, when the transformation is performed, the new x-component appears stretched, larger than the original "tiny step in the x-direction" as he described it.

If you were to zoom in a lot in the output space, the changes ∂f1 and ∂x would appear to be equal, or at least closer in size (this is what happens with differentials that approximate the change in the dependent variable of a function: in single-variable calculus, dy approaches Δy as we "zoom in", i.e. let Δx become infinitesimally small). So to have an objective sense of how different the partials are, we take their ratio, which effectively and mathematically means we take a partial derivative. That's how I've come to understand it. I'm still a freshman at university.
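The "stable ratio" idea above can be sketched numerically (my own illustration; the map f1(x, y) = x + sin(y) is an assumed example): as the step dx shrinks, the change in f1 shrinks with it, but their ratio settles down to the partial derivative, the local scale factor that zooming in never changes:

```python
import math

def f1(x, y):
    # First output component of an assumed example map.
    return x + math.sin(y)

x0, y0 = 1.0, 2.0
for dx in [0.1, 0.01, 0.001, 0.0001]:
    df1 = f1(x0 + dx, y0) - f1(x0, y0)
    print(f"dx={dx:<8} df1={df1:.6f}  ratio={df1 / dx:.6f}")
# df1 and dx both shrink toward 0, but df1/dx stays near 1,
# the value of the partial derivative df1/dx at (1, 2).
```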
• I've got two questions :):

1. Firstly, how is the function locally linear if, even when we zoom in on a small area of it, the lines that seem linear obviously are not? (They are still part of the sine wave.)

2. And then, in that case, aren't all transformations locally linear?
• Am I missing something, or should the change in f1 caused by the change in x (the green dashed line) not go all the way to the gridline?