
# Interpretation of Lagrange multipliers

Lagrange multipliers are more than mere ghost variables that help to solve constrained optimization problems...

## Lagrange multipliers technique, quick recap

When you want to maximize (or minimize) a multivariable function $f\left(x,y,\dots \right)$ subject to the constraint that another multivariable function equals a constant, $g\left(x,y,\dots \right)=c$, follow these steps:
• Step 1: Introduce a new variable $\lambda$, and define a new function $\mathcal{L}$ as follows:
$\mathcal{L}\left(x,y,\dots ,\lambda \right)=f\left(x,y,\dots \right)-\lambda \left(g\left(x,y,\dots \right)-c\right)$
This function $\mathcal{L}$ is called the "Lagrangian", and the new variable $\lambda$ is referred to as a "Lagrange multiplier"
• Step 2: Set the gradient of $\mathcal{L}$ equal to the zero vector.
$\mathrm{\nabla }\mathcal{L}\left(x,y,\dots ,\lambda \right)=\mathbf{\text{0}}\phantom{\rule{1em}{0ex}}←\text{Zero vector}$
In other words, find the critical points of $\mathcal{L}$.
• Step 3: Consider each solution, which will look something like $\left({x}_{0},{y}_{0},\dots ,{\lambda }_{0}\right)$. Plug each one into $f$. Or rather, first remove the ${\lambda }_{0}$ component, then plug it into $f$, since $f$ does not have $\lambda$ as an input. Whichever one gives the greatest (or smallest) value is the maximum (or minimum) point you are seeking.
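To make the three steps concrete, here is a small symbolic sketch in Python using sympy. The objective, constraint, and constant (maximizing $f(x,y)=x+y$ on the unit circle) are illustrative choices, not from this article:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)

f = x + y          # toy objective (illustrative choice)
g = x**2 + y**2    # toy constraint function, with g = c
c = 1

# Step 1: introduce lambda and build the Lagrangian
L = f - lam * (g - c)

# Step 2: set the gradient of L equal to the zero vector
grad = [sp.diff(L, v) for v in (x, y, lam)]
solutions = sp.solve(grad, [x, y, lam], dict=True)

# Step 3: plug the (x, y) part of each solution into f and
# keep whichever gives the greatest value
best = max(solutions, key=lambda s: f.subs(s))
print(best[x], best[y], f.subs(best))  # x = y = sqrt(2)/2, max value sqrt(2)
```

Substituting a solution dictionary into $f$ automatically ignores the $\lambda$ component, since $f$ does not contain $\lambda$.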

## Budgetary constraints, revisited

The last article covering examples of the Lagrange multiplier technique included the following problem.
• Problem: Suppose you are running a factory, producing some sort of widget that requires steel as a raw material. Your costs are predominantly human labor, which is $\$20$ per hour for your workers, and the steel itself, which costs $\$170$ per ton. Suppose your revenue $R$ is loosely modeled by the equation
$\begin{array}{r}\phantom{\rule{1em}{0ex}}R\left(h,s\right)=200{h}^{2/3}{s}^{1/3}\end{array}$
Where
• $h$ represents hours of labor
• $s$ represents tons of steel
If your budget is $\$20{,}000$, what is the maximum possible revenue?
You can get a feel for this problem using the following interactive diagram, which lets you see which values of $\left(h,s\right)$ yield a given revenue (blue curve) and which values satisfy the constraint (red line).
The full details of the solution can be found in the last article. For our purposes here, you just need to know what happens in principle as we follow the steps of the Lagrange multiplier technique.
• We start by writing the Lagrangian $\mathcal{L}\left(h,s,\lambda \right)$ based on the function $R\left(h,s\right)$ and the constraint $20h+170s=20,000$.
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\mathcal{L}\left(h,s,\lambda \right)=200{h}^{2/3}{s}^{1/3}-\lambda \left(20h+170s-20,000\right)\end{array}$
• Then we find the critical points of $\mathcal{L}$, meaning the solutions to
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\mathrm{\nabla }\mathcal{L}\left(h,s,\lambda \right)=0\end{array}$
• There might be several solutions $\left(h,s,\lambda \right)$ to this equation,
$\begin{array}{c}\left({h}_{0},{s}_{0},{\lambda }_{0}\right)\\ \left({h}_{1},{s}_{1},{\lambda }_{1}\right)\\ \left({h}_{2},{s}_{2},{\lambda }_{2}\right)\\ ⋮\end{array}$
so for each one you plug in the $h$ and $s$ components to the revenue function $R\left(h,s\right)$ to see which one actually corresponds with the maximum.
It's common to write this maximizing critical point as $\left({h}^{\ast },{s}^{\ast },{\lambda }^{\ast }\right)$, using asterisk superscripts to indicate that this is a solution. This means ${h}^{\ast }$ and ${s}^{\ast }$ represent the hours of labor and tons of steel you should allocate to maximize revenue subject to your budget. But how can we interpret the Lagrange multiplier ${\lambda }^{\ast }$ that comes with these maximizing values? This is the core question of the article.
It turns out that ${\lambda }^{\ast }$ tells us how much more money we can make by changing our budget.
Let's get a feel for what it means to change the budget. The following tool is similar to the one above, but now the red line representing which points $\left(h,s\right)$ satisfy the budget constraint will shift as you let the budget vary around $\$20{,}000$. This budget is represented with the variable $b$.
For each value of the budget $b$, try to maximize $R$ while ensuring that the curves still touch each other. Notice that the maximum $R$-value you can achieve changes as $b$ changes. We are interested in studying the specifics of that change.
Let ${M}^{\ast }$ represent the maximum revenue you achieve. In the next interactive diagram, the only variable you can change is $b$, and you can see how the value of ${M}^{\ast }$ depends on $b$.
In other words, this maximum revenue ${M}^{\ast }$ is a function of the budget $b$, so we write it as
$\begin{array}{r}\phantom{\rule{1em}{0ex}}{M}^{\ast }\left(b\right)\end{array}$
We can now express a truly wonderful fact: The Lagrange multiplier ${\lambda }^{\ast }\left(b\right)$ gives the derivative of ${M}^{\ast }$:
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\frac{d{M}^{\ast }}{db}\left(b\right)={\lambda }^{\ast }\left(b\right)\end{array}$
In terms of the interactive diagram above, this means ${\lambda }^{\ast }\left(b\right)$ tells you the rate of change of the black dot representing ${M}^{\ast }$ as you move around the green dot representing $b$.
Showing why this is true is a bit tricky, but first, let's take a moment to interpret it. For example, if we found that ${\lambda }^{\ast }\left(b\right)=2.59$, it would mean each additional dollar you spend over your budget would yield another $\$2.59$ in revenue. Conversely, decreasing your budget by a dollar would cost you that much in lost revenue.
This interpretation of ${\lambda }^{\ast }$ comes up commonly enough in economics to deserve a name: the "shadow price". It is the money gained by loosening the constraint by a single dollar, or conversely the cost of tightening the constraint by a dollar.
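We can sanity-check this numerically with a short Python sketch. It assumes the closed-form optimum from the last article, namely that this Cobb-Douglas revenue is maximized by spending $2/3$ of the budget on labor and $1/3$ on steel, and compares ${\lambda }^{\ast }$ against a finite-difference estimate of $d{M}^{\ast }/db$:

```python
def max_revenue(b):
    # Optimal allocation for R = 200 h^(2/3) s^(1/3) with 20h + 170s = b:
    # spend 2/3 of the budget on labor, 1/3 on steel (Cobb-Douglas property,
    # taken as given from the previous article's solution)
    h = (2 / 3) * b / 20
    s = (1 / 3) * b / 170
    return 200 * h ** (2 / 3) * s ** (1 / 3)

b = 20_000
h_star = (2 / 3) * b / 20
s_star = (1 / 3) * b / 170

# lambda* from the critical-point equation dL/dh = 0:
#   200 * (2/3) * h^(-1/3) * s^(1/3) = 20 * lambda
lam_star = 200 * (2 / 3) * h_star ** (-1 / 3) * s_star ** (1 / 3) / 20

# Central-difference estimate of dM*/db around b = 20,000
eps = 1.0
dM_db = (max_revenue(b + eps) - max_revenue(b - eps)) / (2 * eps)

print(round(lam_star, 2), round(dM_db, 2))  # both come out to about 2.59
```

Both numbers come out to about $2.59$, matching the shadow price quoted above: one more budget dollar buys about $\$2.59$ of revenue.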

## Generally speaking

Let's generalize what we just did with the budget example and see why it's true. Spelling out the full result is actually quite a mouthful, but it should be made clear by holding the following mantra in the back of your mind: "How does the solution change as the constraint changes?".
We start with the usual Lagrange multiplier setup. There is a function we want to maximize,
$\begin{array}{r}\phantom{\rule{1em}{0ex}}f\left(x,y\right)\end{array}$
and a constraint,
$\begin{array}{r}\phantom{\rule{1em}{0ex}}g\left(x,y\right)=c\end{array}$
We start by writing the Lagrangian,
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\mathcal{L}\left(x,y,\lambda \right)=f\left(x,y\right)-\lambda \left(g\left(x,y\right)-c\right).\end{array}$
Let $\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)$ be the critical point of $\mathcal{L}$, which solves our constrained optimization problem. In other words,
$\mathrm{\nabla }\mathcal{L}\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)=0$
and $\left({x}^{\ast },{y}^{\ast }\right)$ maximizes $f$ (subject to the constraint).
When we start to think of $c$ as a variable, we must account for the fact that the solution $\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)$ changes as the constraint $c$ changes. To do this, we start writing each component as a function of $c$:
$\begin{array}{r}\phantom{\rule{1em}{0ex}}{x}^{\ast }\left(c\right)\\ {y}^{\ast }\left(c\right)\\ {\lambda }^{\ast }\left(c\right)\end{array}$
In other words, when the constraint equals some value $c$, the solution triplet to the Lagrange multiplier problem is $\left({x}^{\ast }\left(c\right),{y}^{\ast }\left(c\right),{\lambda }^{\ast }\left(c\right)\right)$.
We now let ${M}^{\ast }\left(c\right)$ represent the (constrained) maximum value of $f$ as a function of $c$, which can be written in terms of $f$, ${x}^{\ast }\left(c\right)$ and ${y}^{\ast }\left(c\right)$ as follows:
${M}^{\ast }\left(c\right)=f\left({x}^{\ast }\left(c\right),{y}^{\ast }\left(c\right)\right)$
The core result we wish to show is that
$\frac{d{M}^{\ast }}{dc}={\lambda }^{\ast }\left(c\right)$
This says that the Lagrange multiplier ${\lambda }^{\ast }$ gives the rate of change of the solution to the constrained maximization problem as the constraint varies.

## Want to outsmart your teacher?

Proving this result could be an algebraic nightmare, since there is no explicit formula for the functions ${x}^{\ast }\left(c\right)$, ${y}^{\ast }\left(c\right)$, ${\lambda }^{\ast }\left(c\right)$ or ${M}^{\ast }\left(c\right)$. This means you would have to start with the defining property of ${x}^{\ast }$, ${y}^{\ast }$ and ${\lambda }^{\ast }$, namely that $\mathrm{\nabla }\mathcal{L}\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)=0$, and reason your way towards $\frac{d{M}^{\ast }}{dc}$. This is not at all straightforward (try it!).
There is a fun story, in which a professor was asked what the harshest truth he ever learned from a student was. He recalled a class he taught when he went through a long and algebraically heavy proof, only to be shown by a student that there is a much simpler approach. The lesson, he said, was that he was not as smart as he thought he was.
The result he was talking about just so happens to be what we are now trying to prove. Although the student's approach is not quite so simple as the story makes it out to be, it is still a clean way to view the problem. More importantly, it is easier to remember than other proofs, so I'll spell it out in full here. As happens so often in math, a little insight can save us from excessive algebra.

## The insight

The underlying insight is that evaluating the Lagrangian itself at a solution $\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)$ will give the maximum value ${M}^{\ast }$. This is because the "$g\left(x,y\right)-c$" term in the Lagrangian goes to zero (since a solution must satisfy the constraint), so we have
$\begin{array}{rl}\phantom{\rule{1em}{0ex}}\mathcal{L}\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)& =f\left({x}^{\ast },{y}^{\ast }\right)-{\lambda }^{\ast }\left(g\left({x}^{\ast },{y}^{\ast }\right)-c\right)\\ & =f\left({x}^{\ast },{y}^{\ast }\right)+0\\ & ={M}^{\ast }\end{array}$
Given that we want to find $\frac{d{M}^{\ast }}{dc}$, this suggests that we should find a way to treat $\mathcal{L}$ as a function of $c$. Then we might be able to relate the derivative we want to a derivative of $\mathcal{L}$ with respect to $c$.

## The followthrough

Start by treating $\mathcal{L}$ as a function of four variables instead of three, since $c$ is now modeled as a changing value:
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\mathcal{L}\left(x,y,\lambda ,c\right)=f\left(x,y\right)-\lambda \left(g\left(x,y\right)-c\right).\end{array}$
Reflection question: When $\mathcal{L}$ is written as a four-variable function like this, what is $\frac{\partial \mathcal{L}}{\partial c}$?
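If you want to check your answer symbolically, the computation can be sketched with sympy, using generic unspecified functions $f$ and $g$ (the code names are illustrative):

```python
import sympy as sp

x, y, lam, c = sp.symbols("x y lambda c")
f = sp.Function("f")(x, y)   # generic, unspecified objective
g = sp.Function("g")(x, y)   # generic, unspecified constraint function

# The four-variable Lagrangian
L = f - lam * (g - c)

# Only the term -lam * (g - c) involves c, so the partial is lam itself
print(sp.diff(L, c))  # lambda
```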

This partial derivative is promising, since our goal is to show that $\frac{d{M}^{\ast }}{dc}={\lambda }^{\ast }$, and we know that ${M}^{\ast }=\mathcal{L}$ at solutions. However, we still have work to do.
To encode the fact that we only care about the value of $\mathcal{L}$ at a solution $\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)$ for a given value of $c$, we replace $x,y$ and $\lambda$ with ${x}^{\ast }\left(c\right),{y}^{\ast }\left(c\right)$ and ${\lambda }^{\ast }\left(c\right)$. These are functions of $c$ which correspond to the solution of the Lagrangian problem for a given choice of the "constant" $c$.
This lets us write ${M}^{\ast }$ as a function of $c$ as follows:
$\begin{array}{r}\phantom{\rule{1em}{0ex}}{M}^{\ast }\left(c\right)=\mathcal{L}\left({x}^{\ast }\left(c\right),{y}^{\ast }\left(c\right),{\lambda }^{\ast }\left(c\right),c\right)\end{array}$
Even though this expression has only one variable, $c$, there is a four-variable function $\mathcal{L}$ as an intermediary. Therefore, to take its (ordinary) derivative with respect to $c$, we use the multivariable chain rule:
$\begin{array}{rl}\phantom{\rule{1em}{0ex}}\frac{d{M}^{\ast }}{dc}& =\frac{d}{dc}\mathcal{L}\left({x}^{\ast }\left(c\right),{y}^{\ast }\left(c\right),{\lambda }^{\ast }\left(c\right),c\right)\\ \\ & =\frac{\partial \mathcal{L}}{\partial x}\frac{d{x}^{\ast }}{dc}+\frac{\partial \mathcal{L}}{\partial y}\frac{d{y}^{\ast }}{dc}+\frac{\partial \mathcal{L}}{\partial \lambda }\frac{d{\lambda }^{\ast }}{dc}+\frac{\partial \mathcal{L}}{\partial c}\frac{dc}{dc}\end{array}$
Note: each partial derivative in the expression above should be evaluated at $\left({x}^{\ast }\left(c\right),{y}^{\ast }\left(c\right),{\lambda }^{\ast }\left(c\right),c\right)$, but writing that would make the expression even messier than it already is.
This might seem like a lot, but remember where the terms ${x}^{\ast }$, ${y}^{\ast }$ and ${\lambda }^{\ast }$ each came from. Each partial derivative $\frac{\partial \mathcal{L}}{\partial x}$, $\frac{\partial \mathcal{L}}{\partial y}$, and $\frac{\partial \mathcal{L}}{\partial \lambda }$ is zero when evaluated at $\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)$. That's how a solution $\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)$ is defined! This means the first three terms go to zero.
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\cancel{\frac{\partial \mathcal{L}}{\partial x}}\frac{d{x}^{\ast }}{dc}+\cancel{\frac{\partial \mathcal{L}}{\partial y}}\frac{d{y}^{\ast }}{dc}+\cancel{\frac{\partial \mathcal{L}}{\partial \lambda }}\frac{d{\lambda }^{\ast }}{dc}+\frac{\partial \mathcal{L}}{\partial c}\frac{dc}{dc}\end{array}$
Moreover, since $\frac{dc}{dc}=1$, the entire expression simplifies to
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\frac{d{M}^{\ast }}{dc}=\frac{\partial \mathcal{L}}{\partial c}\end{array}$
It's important to notice that the reason for this simplification relies on the special properties of solution points $\left({x}^{\ast },{y}^{\ast },{\lambda }^{\ast }\right)$. Otherwise, working out the full derivative based on the multivariable chain rule could have been a nightmare!
For the sake of notational cleanliness, we left out the inputs to these derivatives, but let's write them in.
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\frac{d{M}^{\ast }}{dc}\left(c\right)=\frac{\partial \mathcal{L}}{\partial c}\left({x}^{\ast }\left(c\right),{y}^{\ast }\left(c\right),{\lambda }^{\ast }\left(c\right),c\right)\end{array}$
Since we saw in the reflection question above that $\frac{\partial \mathcal{L}}{\partial c}=\lambda$, this means
$\begin{array}{r}\phantom{\rule{1em}{0ex}}\boxed{\frac{d{M}^{\ast }}{dc}\left(c\right)={\lambda }^{\ast }\left(c\right)}\end{array}$
Done!
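As a final sanity check, the boxed identity can be verified symbolically on a hypothetical toy problem. Here we maximize $f(x,y)=xy$ subject to $x+y=c$ (an illustrative example chosen for its clean closed form, not from the article):

```python
import sympy as sp

x, y, lam, c = sp.symbols("x y lambda c")
f = x * y                    # toy objective (illustrative choice)
g = x + y                    # toy constraint function, with g = c

L = f - lam * (g - c)
grad = [sp.diff(L, v) for v in (x, y, lam)]

# Solve grad L = 0 to get the solution as a function of c:
#   x* = y* = c/2,  lambda* = c/2
sol = sp.solve(grad, [x, y, lam], dict=True)[0]

M = f.subs(sol)              # M*(c) = c**2 / 4
print(sp.diff(M, c))         # c/2, which equals lambda*(c)
```

Differentiating ${M}^{\ast }\left(c\right)={c}^{2}/4$ gives $c/2$, exactly the value of ${\lambda }^{\ast }\left(c\right)$, as the proof above predicts.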

## Want to join the conversation?

• While calculating $d{M}^{\ast }/dc$, why do we take partial derivatives with respect to $x$, $y$, and $\lambda$ and not ${x}^{\ast }$, ${y}^{\ast }$, and ${\lambda }^{\ast }$?
• Because ${x}^{\ast }$, ${y}^{\ast }$, and ${\lambda }^{\ast }$ are specific values rather than variables of $\mathcal{L}$: you can't differentiate with respect to a constant.
• Weird that so much time was spent on Lagrangians in this unit, but it doesn't appear on the unit test at all and there's not even a quiz. I'd have liked to test my understanding of it.
• A lot of textbooks interpret the Lagrange multiplier this way (see Gilbert Strang). But there is an easier way, without having to invent an auxiliary function with four variables.
dM*/dc = df(x*, y*)/dc = f_x(x*, y*) (dx*/dc) + f_y(x*, y*) (dy*/dc), where the _x and _y subscripts represent partial derivatives.
But the Lagrange condition gives f_x(x*, y*) = λ* g_x(x*, y*)
and f_y(x*, y*) = λ* g_y(x*, y*)

Hence, df(x*, y*)/dc = λ*[g_x(x*, y*)(dx*/dc) + g_y(x*, y*)(dy*/dc)] = λ* dg(x*, y*)/dc
Also, g(x*, y*) = c, so λ* dg(x*, y*)/dc = λ* dc/dc = λ*
• In the previous article, there was an example with λ = 0. Does this mean that increasing the budget does not affect the revenue? And how are the constraints related to the budget now?
• Yes, this isn't explained all that clearly.

We are implicitly assuming that you are constrained by the budget - and thus increasing your budget should give you further revenue.

Mathematically, if you are constrained by your budget, then the optimal solution is on the boundary of the region, meaning g(x*, y*) = c for the optimal x* and y*. In this case you have a positive lambda, and increasing c will lead to different, better x* and y*.

If you are not constrained by your budget, in the optimal case, you have g(x*, y*) < c . Thus increasing c doesn't give you any extra juice, as x* and y* don't change. In this case, lambda is 0.

In this article, it is implicitly assumed that you are constrained by your budget (or whatever your constraint is) so that increasing c will lead to different solutions. Otherwise, it becomes trivial.