
## Multivariable calculus

### Course: Multivariable calculus > Unit 3

Lesson 1: Tangent planes and local linearization

# Local linearization

A "local linearization" generalizes the tangent-plane function: it is an approximation that applies to multivariable functions with any number of inputs. Created by Grant Sanderson.

## Want to join the conversation?

• The formula for local linearization reminded me of the Taylor polynomial (Taylor approximation) for single-variable functions. Are these two related in some way? • Yes, when you take a Taylor polynomial and discard every term of order higher than first, you get a local linearization of your single-variable function - a line approximating the function at a given point. You can do this in multivariable calculus too - here you get a plane instead of a line. And of course, you can approximate your surfaces further with higher-order partial derivatives - for example, quadratic approximations of multivariable functions are analogous to approximating single-variable functions with 2nd-degree Taylor polynomials.
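As a concrete numeric check of this relationship, here is a minimal Python sketch (the function f(x, y) = x²y and the base point (1, 2) are chosen purely for illustration): the local linearization is the first-order Taylor polynomial, and it matches f closely near the base point.

```python
import numpy as np

# Illustrative example: f(x, y) = x^2 * y, linearized at (x0, y0) = (1, 2).
def f(x, y):
    return x**2 * y

def local_linearization(x, y, x0=1.0, y0=2.0):
    # First-order Taylor expansion: keep f and its first partials, drop the rest.
    fx = 2 * x0 * y0   # ∂f/∂x = 2xy evaluated at (x0, y0)
    fy = x0**2         # ∂f/∂y = x^2 evaluated at (x0, y0)
    return f(x0, y0) + fx * (x - x0) + fy * (y - y0)

# The approximation agrees with f at the base point and stays close nearby.
print(local_linearization(1.01, 2.01))  # close to f(1.01, 2.01)
print(f(1.01, 2.01))
```

Dropping the second-degree Taylor terms instead of all higher-order terms would give the quadratic approximation the answer mentions.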
• Why did Grant say that it is not technically a linear function? • I think he meant the definition of a linear function that requires its graph to pass through the origin. The tangent planes in this video do not (in general) pass through the origin, so strictly speaking they are not represented by linear functions.
• If it is in 3D, can I take two arbitrary directional derivatives to get the equation of the plane? For example:

We have a function f(x, y), and we can find its gradient.

I pick a vector, say [1, 0], and find the directional derivative along it at a specific point P(a, b, c); let's say the result is 5. Then we have a vector [1, 0, 5].

I do this again with a different vector, and get another vector, say [0, 1, -2].

Therefore the equation for the plane will be:
[x, y, z] = [a, b, c] + t[1, 0, 5] + s[0, 1, -2]

Is this method correct? If it is, is there a way to make it more general - to make it work in higher dimensions, or in other words, to vectorize it? • Yes, this works perfectly fine. The simplest way is to always use the coordinate vectors, (1, 0) and (0, 1). If the plane is z = ax + by + c, then the gradient is (a, b) everywhere. Taking the directional derivative in the x direction gives a; in the y direction, it gives b. So two vectors are (1, 0, a) and (0, 1, b), and we shift them by (0, 0, c). The parameterization is x = s, y = t, z = as + bt + c. But, as you might see, this isn't a really useful trick, since you've just obtained a relatively obvious result.
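The construction described above can be verified numerically. In this sketch (the function f, the base point (a, b), and the parameter values are all illustrative), the plane spanned by the two directional-derivative vectors coincides with the usual tangent-plane formula:

```python
import numpy as np

# Illustrative function; (a, b) is an arbitrary base point.
def f(x, y):
    return np.sin(x) + x * y

a, b = 0.5, 1.5
c = f(a, b)
fx = np.cos(a) + b   # ∂f/∂x at (a, b)
fy = a               # ∂f/∂y at (a, b)

# Directional derivatives along (1, 0) and (0, 1) give the two spanning vectors.
v1 = np.array([1.0, 0.0, fx])
v2 = np.array([0.0, 1.0, fy])
p0 = np.array([a, b, c])

def parametric_plane(t, s):
    # [x, y, z] = [a, b, c] + t*[1, 0, fx] + s*[0, 1, fy]
    return p0 + t * v1 + s * v2

def tangent_plane(x, y):
    # Standard tangent-plane formula at (a, b).
    return c + fx * (x - a) + fy * (y - b)

# Any point of the parametric plane satisfies the tangent-plane equation
# (up to floating-point rounding).
x, y, z = parametric_plane(0.3, -0.7)
print(abs(z - tangent_plane(x, y)))
```

In higher dimensions the same idea applies: n coordinate directions give n spanning vectors for the tangent n-space of a function of n inputs.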
• What's really cool to me is that this formula echoes exactly the equation for tangent lines in single-variable calculus. • ∇f( x₀ ) ⋅ ( x − x₀ ) + f( x₀ ) is the 3D equivalent of:
f′( x₀ ) ⋅ ( x − x₀ ) + f( x₀ ) • Should it be ∇f(x₀, y₀) in the vector form? • At , why is the gradient expression written as f(x₀) + ∇f(x₀) ⋅ (x − x₀), instead of f(x₀, y₀) + ∇f(x₀, y₀) ⋅ (x − x₀, y − y₀)?
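To illustrate the notation question: in the vector form, the bold x₀ stands for the whole input vector (x₀, y₀), so both ways of writing it compute the same number. A small sketch (the function f(x, y) = xy² and the point (2, 3) are made up for the example):

```python
import numpy as np

# Illustrative function taking a vector input: f(x, y) = x * y^2.
def f(v):
    x, y = v
    return x * y**2

x0 = np.array([2.0, 3.0])
grad_f_x0 = np.array([3.0**2, 2 * 2.0 * 3.0])   # ∇f = (y^2, 2xy) at (2, 3)

def L(v):
    # Compact vector form: L(x) = f(x0) + ∇f(x0) · (x − x0).
    return f(x0) + grad_f_x0 @ (v - x0)

def L_expanded(x, y):
    # Expanded two-variable form: same formula, written out component by component.
    return f(x0) + grad_f_x0[0] * (x - x0[0]) + grad_f_x0[1] * (y - x0[1])

v = np.array([2.1, 2.9])
print(L(v))
print(L_expanded(2.1, 2.9))
```

So writing f(x₀) + ∇f(x₀) ⋅ (x − x₀) is just shorthand in which x and x₀ are vectors; nothing is lost relative to the (x₀, y₀) notation.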
• What does Grant mean when he says "variable multiplied by a constant"? It will still be a variable, so why does he mention it?
• For higher-dimensional inputs, would we have tangent n-spaces?
• So in the multivariate case, this is just the Jacobian matrix of f evaluated at a point p0 - the best linear approximation to f at p0. In the univariate case, can we see this as the directional derivative in the direction of a "nudge vector" comprised of a nudge in the x (dx) and y (dy) directions, from x0 to x, plus an offset (f evaluated at x0)? I'm trying to see how to generalize this from regular single-variable scalar functions to multivariable scalar functions and multivariable vector functions, and whether it applies to higher dimensions, i.e., tensors of rank 2 and above.