
# Second partial derivative test

Learn how to test whether a function with two inputs has a local maximum or minimum.

## Background

Not strictly necessary, but used in one section.

Also, if you are a little rusty on the second derivative test from single-variable calculus, you might want to quickly review it, since it's a good comparison for the second *partial* derivative test.

## The statement of the second partial derivative test

If you are looking for the local maxima/minima of a two-variable function $f(x,y)$ , the first step is to find input points $({x}_{0},{y}_{0})$ where the gradient is the $\mathbf{\text{0}}$ vector.

These are basically points where the tangent plane on the graph of $f$ is flat.

The **second partial derivative test** tells us how to verify whether this stable point is a local maximum, local minimum, or a saddle point. Specifically, you start by computing this quantity:

$$H = {f}_{xx}({x}_{0},{y}_{0})\,{f}_{yy}({x}_{0},{y}_{0}) - {{f}_{xy}({x}_{0},{y}_{0})}^{2}$$

Then the second partial derivative test goes as follows:

- If $H<0$, then $({x}_{0},{y}_{0})$ is a saddle point.
- If $H>0$, then $({x}_{0},{y}_{0})$ is either a maximum or a minimum point, and you ask one more question:
  - If ${{f}_{xx}({x}_{0},{y}_{0})}<0$, $({x}_{0},{y}_{0})$ is a local maximum point.
  - If ${{f}_{xx}({x}_{0},{y}_{0})}>0$, $({x}_{0},{y}_{0})$ is a local minimum point.

  (You could also use ${{f}_{yy}({x}_{0},{y}_{0})}$ instead of ${{f}_{xx}({x}_{0},{y}_{0})}$; it actually doesn't matter.)
- If $H=0$, we do not have enough information to tell.
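The full decision procedure can be carried out symbolically. Here is a minimal sketch using Python's sympy library, applied to a hypothetical example function $f(x,y) = x^4 + y^4 - 4xy$ that is not from this article:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical example (not from the article): f(x, y) = x^4 + y^4 - 4xy
f = x**4 + y**4 - 4*x*y

fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx = sp.diff(f, x, 2)
fyy = sp.diff(f, y, 2)
fxy = sp.diff(fx, y)

# Step 1: find stationary points, where the gradient is the zero vector.
candidates = sp.solve([fx, fy], [x, y], dict=True)
real_points = [pt for pt in candidates if all(v.is_real for v in pt.values())]

# Step 2: classify each point using H = fxx*fyy - fxy**2.
results = {}
for pt in real_points:
    H = (fxx * fyy - fxy**2).subs(pt)
    if H < 0:
        verdict = "saddle point"
    elif H > 0:
        verdict = "local minimum" if fxx.subs(pt) > 0 else "local maximum"
    else:
        verdict = "inconclusive"
    results[(pt[x], pt[y])] = verdict

print(results)
```

For this function, $H = -16 < 0$ at $(0,0)$ (a saddle point), while at $(1,1)$ and $(-1,-1)$ we get $H = 144 - 16 = 128 > 0$ with $f_{xx} = 12 > 0$ (local minima).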

## Loose intuition

Focus first on this term:

$${f}_{xx}({x}_{0},{y}_{0})\,{f}_{yy}({x}_{0},{y}_{0})$$

You can think of it as cleverly encoding whether or not the concavity of $f$'s graph is the same in both the $x$ and $y$ directions.

For example, look at the function

$$f(x,y) = {x}^{2} - {y}^{2}$$

This function has a saddle point at $(x,y)=(0,0)$. The second partial derivative with respect to $x$ is a positive constant:

$${f}_{xx}(x,y) = 2$$

In particular, ${{f}_{xx}(0,0)}=2>0$, and the fact that this is positive means $f(x,y)$ looks like it has upward concavity as we travel in the $x$-direction. On the other hand, the second partial derivative with respect to $y$ is a negative constant:

$${f}_{yy}(x,y) = -2$$

This indicates downward concavity as we travel in the $y$-direction. This mismatch means we must have a saddle point, and it is encoded as the product of the two second partial derivatives:

$${f}_{xx}(0,0)\,{f}_{yy}(0,0) = (2)(-2) = -4 < 0$$

Since ${{f}_{xy}(0,0)}^{2}$ can only be positive or zero, subtracting it will only make the full expression more negative.

On the other hand, when the signs of ${{f}_{xx}({x}_{0},{y}_{0})}$ and ${{f}_{yy}({x}_{0},{y}_{0})}$ are either both positive or both negative, the $x$ and $y$ directions agree about what the concavity of $f$ should be. In either of these cases, the term ${{f}_{xx}({x}_{0},{y}_{0})}{{f}_{yy}({x}_{0},{y}_{0})}$ will be positive.

**But this is not enough!**

## The ${{f}_{xy}^{2}}$ term

Consider the function

$$f(x,y) = {x}^{2} + {y}^{2} + {p}xy$$

where ${p}$ is some constant.

**Concept check**: With this definition of $f$, compute the second partial derivatives ${{f}_{xx}(0,0)}$, ${{f}_{yy}(0,0)}$ and ${{f}_{xy}(0,0)}$.

Because the second derivatives ${{f}_{xx}(0,0)}$ and ${{f}_{yy}(0,0)}$ are both positive, the graph will appear concave up as we travel in either the pure $x$ direction or the pure $y$ direction (no matter what ${p}$ is).

However, watch what happens to the graph as we let the constant ${p}$ vary from $1$ to $3$, then back to $1$: a saddle point appears at the origin along the way.

What's going on here? How can the graph have a saddle point even though it is concave up in both the $x$ and $y$ directions? The short answer is that other directions matter too, and in this case, they are captured by the term ${p}xy$ .

For example, isolate this $xy$ term and consider the graph of $g(x,y)=xy$.

It has a saddle point at $(0,0)$ . This is not because the $x$ and $y$ directions disagree about concavity, but instead because the concavity appears positive along the diagonal direction $\left[\begin{array}{c}1\\ 1\end{array}\right]$ and negative in the direction $\left[\begin{array}{c}-1\\ 1\end{array}\right]$ .

Let's see what the second derivative test tells us about the function $f(x,y)={x}^{2}+{y}^{2}+{p}xy$. Using the values for the second derivatives you were asked to compute above, here's what we get:

$$H = {f}_{xx}(0,0)\,{f}_{yy}(0,0) - {{f}_{xy}(0,0)}^{2} = (2)(2) - {p}^{2} = 4 - {p}^{2}$$

When $p>2$ , this is negative, so $f$ has a saddle point. When $p<2$ , it is positive, so $f$ has a local minimum.
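This computation can be verified symbolically; a quick sketch, assuming the sympy library:

```python
import sympy as sp

x, y, p = sp.symbols('x y p')
f = x**2 + y**2 + p*x*y  # the function from this section

fxx = sp.diff(f, x, 2)           # 2
fyy = sp.diff(f, y, 2)           # 2
fxy = sp.diff(sp.diff(f, x), y)  # p

H = sp.expand(fxx * fyy - fxy**2)  # 4 - p**2
print(H)
print(H.subs(p, 1))  # positive: local minimum
print(H.subs(p, 3))  # negative: saddle point
```

At $p=1$ we get $H = 3 > 0$ (local minimum), and at $p=3$ we get $H = -5 < 0$ (saddle point), matching the statement above.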

**You can think of the quantity** ${{f}_{xy}({x}_{0},{y}_{0})}$ as measuring how much the function $f$ looks like the graph of $g(x,y)=xy$ near the point $({x}_{0},{y}_{0})$.

Considering how many directions have to agree with each other, it is actually quite surprising that we only need to consider three values, ${{f}_{xx}(0,0)}$ , ${{f}_{yy}(0,0)}$ and ${{f}_{xy}(0,0)}$ .

The next article gives more detailed reasoning behind the second partial derivative test.

## Summary

- Once you find a point where the gradient of a multivariable function is the zero vector, meaning the tangent plane of the graph is flat at this point, the second partial derivative test is a way to tell if that point is a local maximum, local minimum, or a saddle point.
- The key term of the second partial derivative test is this:

  $$H = {f}_{xx}({x}_{0},{y}_{0})\,{f}_{yy}({x}_{0},{y}_{0}) - {{f}_{xy}({x}_{0},{y}_{0})}^{2}$$

- If $H>0$, the function definitely has a local maximum/minimum at the point $({x}_{0},{y}_{0})$.
  - If ${{f}_{xx}({x}_{0},{y}_{0})}>0$, it is a minimum.
  - If ${{f}_{xx}({x}_{0},{y}_{0})}<0$, it is a maximum.
- If $H<0$, the function definitely has a saddle point at $({x}_{0},{y}_{0})$.
- If $H=0$, there is not enough information to tell.

## Want to join the conversation?

- We often get the type of problem on our exams where a point (x,y) gives H=0.

  We are then told to use the "definition" of a saddle point to check if this is the case. My teacher used an example where the point was (0,0) and the function f(x,y) = x^2 y + xy^3 + xy^2. He then replaced (0,0) with (a,a), which in turn made f(a,a) = a^3(a+2), and chose a just above and below zero. Since f>0 when a>0 and f<0 when a<0, the conclusion was that the point (0,0) was a saddle point.

  If you've made it this far, I applaud you.

  Now for my question: if we apply the same test to a max, would both (a,a) substitutions then give us values just below the value of f at the actual point? Likewise, would the test give two values just above the value of f at the actual point, if the point was a minimum?

  This seems right to me. If the graph looked like a traffic cone, all points below the max would compute smaller values of f, right? (13 votes)
  - Great question! Can we get an answer on this one? (3 votes)
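The diagonal substitution described in this question can be checked symbolically; a minimal sketch, assuming the sympy library:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
f = x**2*y + x*y**3 + x*y**2  # the function from the question above

# Probe the origin along the diagonal (a, a):
diag = sp.factor(f.subs({x: a, y: a}))
print(diag)  # a**3*(a + 2)

# Just above and below zero the sign of f flips, so (0, 0) is a saddle point.
print(diag.subs(a, sp.Rational(1, 10)) > 0)   # True
print(diag.subs(a, -sp.Rational(1, 10)) < 0)  # True
```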

- What options are available if H=0? (9 votes)
- How can we apply the second partial derivative test for functions with more than 2 variables (like f(x,y,z))? (5 votes)
- How do I find the second partial derivatives for a function with 3 variables, and how does this test work for that? Thanks :) (4 votes)
  - You actually need to look at the eigenvalues of the Hessian matrix: if they are all positive, there is a local minimum; if they are all negative, there is a local max; and if they are of different signs, there is a saddle. (3 votes)
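The eigenvalue criterion mentioned in this answer can be sketched numerically. Below is a minimal example with a hypothetical three-variable function f(x, y, z) = x² + y² + z² − xy (not from the thread), assuming numpy:

```python
import numpy as np

# Hypothetical example: f(x, y, z) = x**2 + y**2 + z**2 - x*y
# Since f is quadratic, its Hessian is constant (same at every point):
hessian = np.array([[ 2.0, -1.0, 0.0],
                    [-1.0,  2.0, 0.0],
                    [ 0.0,  0.0, 2.0]])

eigvals = np.linalg.eigvalsh(hessian)  # symmetric matrix -> real eigenvalues

if np.all(eigvals > 0):
    verdict = "local minimum"
elif np.all(eigvals < 0):
    verdict = "local maximum"
elif np.any(eigvals > 0) and np.any(eigvals < 0):
    verdict = "saddle point"
else:
    verdict = "inconclusive"  # some eigenvalue is zero

print(eigvals, verdict)
```

Here the eigenvalues come out to 1, 2, and 3, all positive, so the stationary point at the origin is a local minimum. In the two-variable case, this criterion reduces to the article's H test.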

- What do you do if the second partial derivative still has variables in it? How do you know if fxx or fyy is positive or negative? (1 vote)
  - If the second partial derivative is dependent on x and y, then it is different for different x and y: fxx(0, 0) is different from fxx(1, 0), which is different from fxx(0, 1), fxx(1, 1), and so on. There's nothing wrong with that. You need to decide which point you care about and plug in the x and y values.

    Recall that this was also the case with the second derivative test in single-variable calculus: you evaluate the first or second derivative at some point. (4 votes)

- Is the formula for the second partial derivative test (fxx*fyy - (fxy)^2) just the determinant of the Hessian matrix learned earlier in this section? (2 votes)
  - In a way, yes. It is a boiled-down version of the Hessian's determinant that excludes non-negative numerical factors. (1 vote)

- We were taught that the test actually was H = (f_12)^2 - (f_11)(f_22), and that if H > 0, the point is a saddle point, and of course the opposite for local max/min. This contradicts what you have written here. (0 votes)
  - It's absolutely OK. Let's say

    fxx*fyy - (fxy)^2 > 0, meaning it's a max/min.

    What you were taught is this inequality multiplied by (-1):

    -(fxx*fyy) + (fxy)^2 < 0 still means a local maximum/minimum.

    Was my explanation clear enough? (4 votes)

- Under 'Phrased using the Hessian determinant', we say that the second derivative test can be done using the determinant of the Hessian. This is only true for functions with input space in R2, right? With higher-dimensional input space, we need the Hessian to be positive definite. Correct? (1 vote)
- In the equation for H, what are fxx(x,y), fyy(x,y), and fxy(x,y)^2? Are those the partial derivatives? (1 vote)
- I guess if H>0 I could also check whether the function has a max/min at the generic point by substituting (x sub 0, y sub 0) into fyy, right? (1 vote)