
## Multivariable calculus

### Course: Multivariable calculus > Unit 3

Lesson 5: Lagrange multipliers and constrained optimization

# Lagrange multipliers, using tangency to solve constrained optimization

The Lagrange multiplier technique is how we take advantage of the observation made in the last video, that the solution to a constrained optimization problem occurs when the contour lines of the function being maximized are tangent to the constraint curve. Created by Grant Sanderson.
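The tangency condition amounts to requiring that the gradient of the objective be a scalar multiple of the gradient of the constraint, with the constraint itself as an extra equation. As a minimal sketch (the objective f(x, y) = x·y on the unit circle is a hypothetical example, not necessarily the one from the video), this system can be solved symbolically:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

# Hypothetical example: maximize f(x, y) = x*y subject to g(x, y) = 1.
f = x * y
g = x**2 + y**2

# Tangency: grad f = lambda * grad g, together with the constraint g = 1.
eqs = [sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),
       sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),
       sp.Eq(g, 1)]

solutions = sp.solve(eqs, [x, y, lam], dict=True)

# Pick the candidate point where f is largest.
best = max(solutions, key=lambda s: f.subs(s))
print(best, f.subs(best))
```

Solving the system yields the points (±1/√2, ±1/√2); the maximum of f on the circle is 1/2, attained where the contour of f is tangent to the constraint curve.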

## Want to join the conversation?

• Why didn't Grant use the cross product of the two gradient vectors to show that they point in the same direction? He could simply say that their cross product must be zero.
• This works in 2 or 3 dimensions, but the cross product is not defined in more than 3 dimensions, so it would not generalize. In contrast, the gradient is defined as long as the multivariable function is differentiable.
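The cross-product idea from the question can indeed be checked numerically in low dimensions. A minimal sketch, using the hypothetical objective f(x, y) = x·y on the unit circle (embedded in 3D by padding a zero z-component), verifies that the two gradients have zero cross product at a constrained optimum:

```python
import numpy as np

# Hypothetical example: f(x, y) = x*y, constraint g(x, y) = x**2 + y**2 = 1.
# Pad gradients with a zero z-component so np.cross applies.
def grad_f(x, y):
    return np.array([y, x, 0.0])

def grad_g(x, y):
    return np.array([2 * x, 2 * y, 0.0])

# (1/sqrt(2), 1/sqrt(2)) is a constrained maximum of this f on the circle.
p = 1 / np.sqrt(2)
cross = np.cross(grad_f(p, p), grad_g(p, p))
print(cross)  # zero vector: the gradients are parallel at the optimum
```

In higher dimensions no such product exists, which is one reason the parallelism is instead written as grad f = lambda · grad g.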
• Why not just use the constraint to substitute into the function and then maximize?
For example, use x^2 = 1 - y^2 and plug it into the objective to get a function of y alone, then maximize that. When does this work?
• Solving the constraint for one variable and substituting works when the constraint is simple, but when the constraint is complicated it may be impossible or impractical to solve for a variable explicitly. In those cases, Lagrange multipliers are the way to go.
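The substitution the question describes can be sketched symbolically. Assuming a hypothetical objective f(x, y) = x²·y with the constraint x² + y² = 1 (so x² = 1 - y², exactly as the question suggests), the problem reduces to maximizing a single-variable function of y over the feasible range:

```python
import sympy as sp

y = sp.symbols('y', real=True)

# Hypothetical objective f(x, y) = x**2 * y with constraint x**2 + y**2 = 1.
# Substituting x**2 = 1 - y**2 gives a one-variable function of y.
h = (1 - y**2) * y

# Interior critical points from h'(y) = 0, plus the endpoints y = -1, 1
# (the constraint forces -1 <= y <= 1).
critical = sp.solve(sp.Eq(sp.diff(h, y), 0), y)
candidates = critical + [sp.Integer(-1), sp.Integer(1)]

best_y = max(candidates, key=lambda v: h.subs(y, v))
print(best_y, h.subs(y, best_y))
```

Here h'(y) = 1 - 3y² = 0 gives y = ±1/√3, and the maximum value 2√3/9 occurs at y = 1/√3. The approach breaks down when the constraint cannot be solved for one variable in closed form, which is where Lagrange multipliers earn their keep.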
• I don't get why the two curves have to be tangent at the maximum. Doesn't that assume that the value of f increases as you move farther out? I see it for this specific problem, but what about functions that increase as you move inward?
• That's a good question. I'm not sure this is the complete explanation, but my intuition is that even if the function increases as you move inward, the contour line of its optimal value still ends up tangent to the constraint curve, just at some other point.
• What if a contour line of the function _f_ intersected the constraint curve without being tangent to it, and its value were higher than that of the contour lines tangent to the constraint curve? How would we optimize the function in that case?
• If the gradient of G happened to be a square matrix, would it be possible to multiply the gradient of F on the right by the inverse of that matrix to find lambda?

grad F * (grad G)^-1 = lambda ?
• Wait, in what case would a function have a square matrix as its gradient?
• So is lambda the eigenvalue of a matrix A satisfying grad F = A * grad G?
• Why can't we work with tangents directly instead of gradients? More explicitly, I'm proposing to find the tangent line equations of both curves and set them equal. Would this approach run into trouble with more variables?