
## Multivariable calculus


Lesson 5: Lagrange multipliers and constrained optimization

# Finishing the intro Lagrange multiplier example

Working out the algebra for the final solution to the example from the previous two videos. Created by Grant Sanderson.

## Want to join the conversation?

• Is it true that lambda should never be 0, because it would have no geometrical meaning?

(I get a little confused here. If x = 0, the first equation no longer tells us that y = lambda. Then in the second equation, either y = 0, or lambda = 0, or both. If only lambda = 0, then in order to satisfy the third equation, y must be equal to +1 or -1.)
• Just saw he added a pop-up note to point this out, but the problem is that (at least for me) it is not displayed when I'm watching it full-screen...
• There's a mistake in the video. `y == lambda` is the result of the assumption that `x != 0`. So when we consider `x == 0`, we can't say that `y == lambda`, and hence the impossible conclusion `x^2 + y^2 == 0` never arises. Instead:

- Assume `x == 0`.
- Then the second equation gives `lambda == 0` or `y == 0` (or both).
- The constraint `x^2 + y^2 == 1` then gives `y == +-1`, `lambda == 0`, `x == 0`.

Now my question is: can `lambda` actually be equal to 0?
• From the first video it looks like x = 0, y = ±1 are extreme points as well: [http://youtu.be/vwUV2IDLP8Q?t=100]. They just happen to be local extrema.
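This can be checked by parametrizing the circle. A minimal SymPy sketch, assuming the example's objective is f(x, y) = x^2 * y and the constraint is x^2 + y^2 = 1 (as in the video):

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Restrict f(x, y) = x**2 * y to the unit circle via x = cos(t), y = sin(t).
h = sp.cos(t)**2 * sp.sin(t)

dh = sp.diff(h, t)       # derivative along the constraint
d2h = sp.diff(h, t, 2)

# t = pi/2 corresponds to (x, y) = (0, 1); t = -pi/2 to (0, -1).
print(dh.subs(t, sp.pi/2))    # 0, so (0, 1) is a critical point of the restriction
print(d2h.subs(t, sp.pi/2))   # 2 > 0, so (0, 1) is a local minimum on the circle
```

The same check at t = -pi/2 gives a negative second derivative, so (0, -1) is a local maximum on the circle; both points are critical for the restricted function even though f itself has the value 0 there.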
• Fun fact: Lagrange was actually Italian and not French
• Before I watched you work through it, I had taken the constraint x^2+y^2=1, solved for x^2 to get x^2=1-y^2, and substituted that into the multivariable function to get a single-variable function g(y) = y(1-y^2). At that point, I took the derivative g'(y) and solved for when it equals zero, and I got the same solution for "y" that the other way produces. From there, I just solved the whole problem. Is there something about this method that conceals other details about the problem?
• Did the same thing :). My guess is that as the constraints and the functions become more complicated, with more variables, it's not always as easy, or even possible, to substitute like this.
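For this particular example the substitution route can be carried through symbolically. A minimal SymPy sketch, assuming the objective f(x, y) = x^2 * y and the constraint x^2 + y^2 = 1 from the video (the name `g` follows the comment above):

```python
import sympy as sp

y = sp.symbols('y', real=True)

# Substitute x**2 = 1 - y**2 (from the constraint) into f(x, y) = x**2 * y.
g = y * (1 - y**2)

crit = sp.solve(sp.diff(g, y), y)   # solves 1 - 3*y**2 = 0
print(crit)                          # [-sqrt(3)/3, sqrt(3)/3]
```

One detail this method hides: the substitution is only valid where x^2 = 1 - y^2 >= 0, i.e. for y in [-1, 1], and the points (0, ±1) sit at the endpoints of that interval, where g'(y) = -2 is nonzero. So solving g'(y) = 0 finds the interior solutions but silently drops the two boundary ones, which is exactly the x = 0 case discussed above.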
• Does this also work for finding the minimums and if so, how?
• Yes, you just have to find the points where f has a minimal value (as opposed to a maximal value).
• So what is the significance of the maximum value?
• Just comparing this to the earlier unconstrained max/min problems, where you get critical points using a similar approach but then use the second partial derivative test to identify whether those critical points are maxima or minima.

Why do the constrained Lagrange max/min problems take quite a different approach to "classifying" the critical points, without anything like the second derivative test?

Thanks
• Do the other 2 ordered pairs correspond to minima? (Under the given constraint, obviously)
• Hi, I just wanted to point out that the edit for x = 0 is still wrong.

Indeed, x = 0 would mean that lambda = 0. This satisfies all three equations and means that grad f = 0. In fact, grad f = 0 along the entire line x = 0, so that whole line consists of critical points. Since two of these, (0, 1) and (0, -1), satisfy the constraint (and are in fact a local minimum and maximum, respectively, on the constraint), they need to be analysed along with the other four points Grant found.

Also, don't forget the case grad g = 0: such points do not in general satisfy the equation grad f = lambda * grad g, even though the zero vector is parallel to any other vector, so the Lagrange condition can miss them. In this example, however, no point with grad g = 0 satisfies the constraint.

In this case Grant's answer is correct despite his not checking these cases, but it could just as easily not have been.

A better approach, in cases where (grad f, grad g) forms a square matrix (as it does here), is to find where the determinant of that matrix is zero, since that condition covers all the cases mentioned above as well as the ones Grant found.
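The determinant idea can be checked directly against the plain Lagrange system. A minimal SymPy sketch, assuming the video's objective f(x, y) = x^2 * y and constraint x^2 + y^2 = 1 (the names `lagrange`, `crit`, `det` are just illustrative):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 * y          # objective from the video (assumed)
g = x**2 + y**2 - 1   # constraint, written as g = 0

# The usual Lagrange system: grad f = lam * grad g, plus the constraint.
lagrange = sp.solve([sp.diff(f, x) - lam * sp.diff(g, x),
                     sp.diff(f, y) - lam * sp.diff(g, y),
                     g],
                    [x, y, lam], dict=True)

# The determinant condition: grad f and grad g are linearly dependent
# exactly when det([grad f | grad g]) == 0, which also covers grad f == 0
# and grad g == 0, so no edge cases are dropped.
det = sp.Matrix([[sp.diff(f, x), sp.diff(g, x)],
                 [sp.diff(f, y), sp.diff(g, y)]]).det()
crit = sp.solve([det, g], [x, y], dict=True)

print(len(lagrange), len(crit))   # both find all six points, including (0, ±1)
```

Here the two systems agree because grad f happens to vanish at (0, ±1), so those points satisfy the Lagrange system with lambda = 0; the determinant form is the safer habit because it would also have caught points where only grad g vanishes.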