# Explicit Laplacian formula

This is another way you might see the Laplace operator written. Created by Grant Sanderson.

## Want to join the conversation?

- At 2:34, should the n term have a squared 'denominator', and in the sum notation, is the square missing? (8 votes)
- Yes. You can infer this from the fact that $\frac{\partial^2 f}{\partial x_N^2}$ for N = 1 is an element of the series, explicitly being $\frac{\partial^2 f}{\partial x_1^2}$. (3 votes)

- What is the use of the Laplacian formula? (2 votes)
- The Laplacian can help indicate whether a point where the partial derivatives are all zero is a maximum or a minimum: at a maximum the Laplacian is negative, and at a minimum it is positive. (Strictly speaking, this is not a complete test on its own; see the next question.) (11 votes)
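
As a small numeric sanity check (my own illustration, not from the article): the function f(x, y) = -(x² + y²) has a maximum at the origin, and its Laplacian there is -2 + -2 = -4, a negative value, consistent with the answer above.

```python
# Sketch: approximate the 2D Laplacian d2f/dx2 + d2f/dy2 with
# central finite differences and evaluate it at a known maximum.

def laplacian_2d(f, x, y, h=1e-4):
    """Approximate d2f/dx2 + d2f/dy2 at (x, y) with central differences."""
    d2x = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    d2y = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return d2x + d2y

f = lambda x, y: -(x**2 + y**2)   # maximum at the origin
print(laplacian_2d(f, 0.0, 0.0))  # close to -4, i.e. negative at the maximum
```

Note that a negative Laplacian alone does not guarantee a maximum in general, as the next question discusses.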

- Why isn't the Laplacian used as a second derivative test? (4 votes)
- Multi-variable calculus is more complicated than the single-variable stuff (although there are parallels, and understanding the single-variable stuff gives useful insight)...

Basically, the Laplacian doesn't provide enough information. If you want to know more, keep going on the multivariable course and you'll come to the Hessian matrix... go a bit further and it explains it all.

I know: I've done it and come back to amend this reply as a result. (2 votes)

- Does the Laplacian operator apply only to scalar-valued functions? (3 votes)
- Yes, as you can only take the gradient of a scalar-valued function, and that is the first step of the Laplacian. (2 votes)

- Isn't the Laplacian written as ∇²f instead of ∆f? (2 votes)
- Both are acceptable. See the Wikipedia article: "[The Laplace operator] is usually denoted by the symbols ∇ ⋅ ∇, ∇^2 (where ∇ is the nabla operator), or Δ." (https://en.wikipedia.org/wiki/Laplace_operator) (3 votes)

- It will be easier to write

∆f = ∇·∇f as

∆f = ∇²f

It sounds better than the summation! :) (2 votes)
- Be careful, however, that this might cause confusion, as ∇f isn't the same thing as ∇²f taken literally: ∇f is a gradient, a vector made up of partial derivatives, while the Laplacian is a scalar-valued function, a function that has multiple inputs and only one output.

So writing it as ∇·∇f is more than reasonable for this reason. (3 votes)
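
To see numerically that the two readings agree, here is a small sketch of my own (not part of the discussion above): it computes ∇·∇f step by step, by first forming the gradient and then taking its divergence, and compares that with the direct sum of second partials, for a sample function f(x, y) = x³ + y².

```python
# Compare div(grad f) computed in two stages with the direct
# sum-of-second-partials formula, using central finite differences.

def f(x, y):
    return x**3 + y**2

H = 1e-5  # step size for finite differences

def grad(x, y):
    """Central-difference approximation of (df/dx, df/dy)."""
    return ((f(x + H, y) - f(x - H, y)) / (2 * H),
            (f(x, y + H) - f(x, y - H)) / (2 * H))

def div_grad(x, y):
    """Divergence of the gradient field: d(grad_x)/dx + d(grad_y)/dy."""
    gx_p, _ = grad(x + H, y)
    gx_m, _ = grad(x - H, y)
    _, gy_p = grad(x, y + H)
    _, gy_m = grad(x, y - H)
    return (gx_p - gx_m) / (2 * H) + (gy_p - gy_m) / (2 * H)

def sum_second(x, y):
    """Direct formula: d2f/dx2 + d2f/dy2."""
    d2x = (f(x + H, y) - 2 * f(x, y) + f(x - H, y)) / H**2
    d2y = (f(x, y + H) - 2 * f(x, y) + f(x, y - H)) / H**2
    return d2x + d2y

# Analytically, d2f/dx2 = 6x and d2f/dy2 = 2, so at (1, 2) both should be near 8.
print(div_grad(1.0, 2.0), sum_second(1.0, 2.0))
```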

- Would it be possible for you to do a video rewriting the Laplacian formula in spherical coordinates, as I am a little stuck? (2 votes)
- I think one way to relate the Laplacian formula in 3D and in 2D is to think of the curve of the function. In 2D it's obvious that the second derivative at the local highest point is negative. If we cut the 3D graph at its highest point with a slice (parallel to the z-y plane or z-x plane) and draw the function's curve on that 2D slice, the resulting 2D graph will appear to have a negative second derivative. (2 votes)
- Is there even a situation where your i = 1? Shouldn't it be i = 2 by default? (0 votes)

## Video transcript

- [Voiceover] So let's say you have yourselves some kind of multivariable function, and this time it's got some very high dimensional inputs: x1, x2, on and on and on, up to x sub n, for some large number n. In the last couple videos I told you about the Laplacian operator, which is a way of taking in your scalar valued function f, and it gives you a new scalar valued function that's kind of like a second derivative thing, because it takes the divergence of the gradient of your function f. So the gradient of f gives you a vector field, and the divergence of that gives you a scalar field. And what I want to show you here is another formula that you might commonly see for this Laplacian.

So first let's kind of abstractly write out what the gradient of f will look like. We start by taking this del operator, which is going to be a vector full of partial differential operators: partial with respect to x1, partial with respect to x2, and kind of on and on and on, up to partial with respect to that last input variable. And then you kind of just imagine multiplying it by your function, so what you end up getting is all the different partial derivatives of f. It's partial of f with respect to the first variable, and then kind of on and on and on, up until you get the partial derivative of f with respect to that last variable, x sub n.

And for the divergence of that, just to save myself some writing, I'm gonna say you take that nabla operator, and then you imagine taking the dot product between that whole operator and this gradient vector that you have here. What you end up getting is, well, you start by multiplying the first components, which involves taking the partial derivative with respect to x1, that first variable, of the partial derivative of f with respect to that same variable. So it looks like the second partial derivative of f with respect to that first variable: the second partial derivative of f with respect to x1. Then you imagine kind of adding what the product of these next two items will be, and for very similar reasons that's gonna look like the second partial derivative of f with respect to that second variable, partial x2 squared. And you do this to all of them, and you're adding them all up, until you find yourself doing it to the last one. So you've got plus a whole bunch of things, and you'll be taking the second partial derivative of f with respect to that last variable, partial of x sub n.

This is another format in which you might see the Laplacian, and oftentimes it's written kind of compactly. So people will say the Laplacian of your function f is equal to, using sigma notation, the sum from i is equal to 1 up to, you know, 1, 2, 3, up to n. So the sum from that up to n of your second partial derivatives: partial squared of f with that i-th variable. So if you were thinking in terms of three variables, often for x1, x2, x3 we instead write x, y, z, but it is common to more generally just say x sub i.

So this here is kind of the alternate formula that you might see for the Laplacian. Personally, I always like to think about it as taking the divergence of the gradient of f, because you're thinking about the gradient field, and the divergence of that kind of corresponds to maxima and minima of your original function, which is what I talked about in the initial intuition of Laplacian video. But this formula is probably a little more straightforward when it comes to actual computations. And, oh wait, sorry, I forgot a square there, didn't I? Partial x squared, so this is the second derivative. So summing all these second partial derivatives. And you can probably see this is kind of a more straightforward way to compute a given example that you might come across, and it also makes it clearer how the Laplacian is kind of an extension of the idea of a second derivative. See you next video.
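
The summation formula from the video, Δf = Σᵢ ∂²f/∂xᵢ², translates directly into code. Here is a minimal sketch of my own (not from the video) that approximates the Laplacian of a function of n variables by summing central-difference second partials, one per input variable:

```python
# Approximate laplacian(f) = sum over i of d2f/dxi2 at a point,
# for a function f taking a list of n coordinates.

def laplacian(f, point, h=1e-4):
    """Sum the finite-difference second partials over every variable."""
    total = 0.0
    for i in range(len(point)):
        plus = list(point)
        minus = list(point)
        plus[i] += h    # nudge the i-th coordinate forward
        minus[i] -= h   # and backward
        # central-difference second partial with respect to x_i
        total += (f(plus) - 2 * f(point) + f(minus)) / h**2
    return total

# f(x1, x2, x3) = x1^2 + x2^2 + x3^2 has Laplacian 2 + 2 + 2 = 6 everywhere.
f = lambda p: sum(x**2 for x in p)
print(laplacian(f, [1.0, -2.0, 0.5]))  # close to 6
```

Because each term of the sum depends only on one variable at a time, the loop mirrors the sigma notation exactly: one second partial per input, all added together.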